Rebase function in BEP20 token contract

Can anyone please explain to me the rebase function of this solidity code? I am building a rebase token like Titano.
function rebase(uint256 epoch, int256 supplyDelta)
    external
    onlyOwner
    returns (uint256)
{
    // Guard: no rebase while a swap is in progress
    require(!inSwap, "Try again");

    // No change requested: just log the current supply and exit
    if (supplyDelta == 0) {
        emit LogRebase(epoch, _totalSupply);
        return _totalSupply;
    }

    // Shrink or grow the total supply by supplyDelta
    if (supplyDelta < 0) {
        _totalSupply = _totalSupply.sub(uint256(-supplyDelta));
    } else {
        _totalSupply = _totalSupply.add(uint256(supplyDelta));
    }

    // Clamp to the hard cap
    if (_totalSupply > MAX_SUPPLY) {
        _totalSupply = MAX_SUPPLY;
    }

    // Re-derive the gons-to-token ratio so every balance rescales proportionally
    _gonsPerFragment = TOTAL_GONS.div(_totalSupply);

    // Tell the DEX pair its reserves changed so pricing stays consistent
    pairContract.sync();

    emit LogRebase(epoch, _totalSupply);
    return _totalSupply;
}
You can see the full code of the contract here.

It changes the total supply by the supplyDelta param value.
A usual approach with rebase tokens is to store each holder's stake percentage instead of their absolute token amount. The actual token amount is then derived from that stake via a scaling variable, in this case the _gonsPerFragment value.
Example:
Total supply is 100; Alice owns 80% and Bob owns 20% of the tokens. This makes Alice the owner of 80 tokens and Bob the owner of 20 tokens.
Now let's start a new epoch and rebase the total supply by +200, making it a total of 300. Alice now owns 240 tokens (still 80%) and Bob now owns 60 tokens (still 20%).
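A minimal sketch of that bookkeeping, assuming the common gons-based layout used by Ampleforth-style rebase tokens (TOTAL_GONS and _gonBalances are assumed names following that convention, not necessarily the exact Titano source):
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch only: each holder owns a fixed number of "gons"; the visible
// token balance is derived, so a rebase rescales everyone proportionally.
contract GonsSketch {
    uint256 private constant TOTAL_GONS = 1e27;        // never changes
    uint256 public _totalSupply = 100;                 // changes on rebase
    uint256 public _gonsPerFragment = TOTAL_GONS / 100;
    mapping(address => uint256) private _gonBalances;  // fixed per holder

    function balanceOf(address who) public view returns (uint256) {
        // Alice's 80% of TOTAL_GONS reads as 80 tokens at supply 100,
        // and as 240 tokens after a rebase moves the supply to 300.
        return _gonBalances[who] / _gonsPerFragment;
    }
}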

But how to automate this rebase? Despite its name, ‘smart’ contracts in Ethereum are not self-executing. You need an external source (either human or machine) to call the smart contract and execute its code.

Automation in Solidity is usually done by placing a function call inside the ERC20 transfer function; this is also how reflection tokens work. For Titano, the rebase logic is likewise called from within the transfer function and executes at a set time interval.
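A hedged sketch of that pattern (illustrative names, not Titano's exact internals): the transfer path checks whether the interval has elapsed and triggers the rebase before doing the normal bookkeeping.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch only: regular user transfers "pay" for the upkeep check,
// so the rebase fires without any external scheduler.
abstract contract AutoRebaseSketch {
    uint256 public lastRebase;
    uint256 public rebaseInterval = 15 minutes;

    function rebase() public virtual;  // e.g. the function shown above

    function _shouldRebase() internal view returns (bool) {
        return block.timestamp >= lastRebase + rebaseInterval;
    }

    function _transfer(address from, address to, uint256 amount) internal {
        if (_shouldRebase()) {
            lastRebase = block.timestamp;
            rebase();
        }
        // ... normal gon-based balance bookkeeping goes here ...
    }
}
The trade-off of this design is that the first transfer after each interval pays the extra gas for the rebase and the pair sync.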

Related

How do I structure F# code in an idiomatic manner to cater for input states (dependencies)?

Whilst I am learning F#, I am trying to build a payroll processing engine to put into practice what I am learning.
At a high level, the payroll pipeline can be summarised as having the following steps:
1. Input earnings
2. Apply deductions on the earnings, if any
3. Apply taxes on the earnings after step 2
4. Apply any post-tax deductions
I have the following code that calculates the payroll for an employee:
module Payroll =
    let calculate (payPeriods: PayPeriod list, employee: Employee, payrollEntries: Entry list) =
        // implementations, function calls go here
Now looking at step 3 above, you will see that we need the tax rates (the steps have been overly simplified) to perform the calculation.
Do we pass the tax rates as a parameter, or is there another idiomatic way to achieve this?
The tax rates may be injected from a datastore.
How should I manage the tax part? Do I inject the taxes as a parameter, or do I pass a function that retrieves them?
It is hard to answer your question without any calculations, but if the question is about structuring the code in a very general way, then I can give an example vaguely inspired by your question.
For simplicity, my earnings will be just a float:
type Earnings =
    { Amount : float }
There are also some environment parameters such as the tax and deductions:
type Environment =
    { Deductions : float
      Tax : float }
Your core logic can be written as pure functions taking the Environment and Earnings:
let applyDeductions env earnings =
    { earnings with Amount = earnings.Amount - env.Deductions }

let applyTaxes env earnings =
    { earnings with Amount = earnings.Amount * (1.0 - env.Tax) }
To read input, you could read stuff from a console or a file, but here I'll just return a constant sample:
let readInput () =
    { Amount = 5000.0 }
Now, the main function initializes the environment (possibly from a file), reads the input and passes the env and the earnings to all the processing functions in a pipeline:
let run () =
    let env = { Deductions = 1000.0; Tax = 0.2 }
    let earnings =
        readInput()
        |> applyDeductions env
        |> applyTaxes env
    printfn "Final: %f" earnings.Amount
This is way simpler than what your snippet suggests, but the structure should work pretty much the same.
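Since the question also asks whether to pass a function instead: one idiomatic option is to parameterise the pipeline by a rate-provider function, so the datastore lookup stays at the edge and the core logic stays pure. A sketch under the same types as above (loadTaxRate is a hypothetical stand-in for your datastore call):
// Hypothetical datastore lookup, stubbed as a constant here.
let loadTaxRate () = 0.2

// The pipeline receives the provider instead of a hard-coded rate.
let runWith (getTaxRate : unit -> float) =
    let env = { Deductions = 1000.0; Tax = getTaxRate () }
    let earnings =
        readInput ()
        |> applyDeductions env
        |> applyTaxes env
    printfn "Final: %f" earnings.Amount

runWith loadTaxRate
In tests you can then call runWith (fun () -> 0.0) without touching any datastore.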

Issues setting a maximum amount of tokens in ERC20 contract

I've been trying to create a very simple ERC20 token with Truffle on the Rinkeby network. I placed the following code into my .sol file, but the max supply doesn't seem to match.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract Artytoken is ERC20 {
    address public admin;
    uint private _totalSupply;

    constructor() ERC20('ArtyToken', 'AATK') {
        admin = msg.sender;
        _totalSupply = 1000000;
        _mint(admin, _totalSupply);
    }
}
In MetaMask, my address shows that I own "0.000000000001", and Etherscan shows that the max total supply is "0.000000000001". What am I doing wrong?
Thank you in advance! :)
The EVM does not support decimal numbers (to prevent rounding errors related to the network nodes running on different architectures), so all numbers are integers.
The ERC-20 token standard defines the decimals() function that your contract implements, effectively shifting all numbers by the declared number of zeros to simulate decimals.
So if you wanted to mint 1 token with 18 decimals, you'd need to pass the value 1000000000000000000 (18 zeros). Or, as in your case, 1 million tokens (6 zeros) with 18 decimals is represented as 1000000000000000000000000 (24 zeros). The same goes the other way around: 0.5 of a token with 18 decimals is 500000000000000000 (a 5 followed by 17 zeros).
You can also use underscores (they do nothing except visually separate the digits) and scientific notation to mitigate human error when working with such a large number of zeros:
// 6 zeros and 18 zeros => 24 zeros
_totalSupply = 1_000_000 * 1e18;
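Applied to the constructor from the question, a corrected sketch (OpenZeppelin's ERC20 reports 18 decimals by default, and decimals() can be queried right in the constructor) could look like:
constructor() ERC20('ArtyToken', 'AATK') {
    admin = msg.sender;
    // one million whole tokens, shifted left by the token's 18 decimals
    _mint(admin, 1_000_000 * 10 ** decimals());
}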

In Twilio Studio Flow, how to use 'Gather widget' for one-off dollar/cents Stripe payment amount for 'Capture Payments' widget

I'm not a developer, so I'm largely restricted to Twilio Studio Flow, with decent success.
I am now attempting (unsuccessfully) to add an IVR payment option: 'Press X to pay your bill'.
When prompted by a Gather widget the caller (customer) enters the bill amount. This gathered value becomes the input parameter to the Capture Payments widget for payment processing by Stripe.com.
I have confirmed that my Twilio-Stripe account 'Default' connection is correct by successfully processing a hard-coded charge amount (using Twilio's example Twiml code).
Since we need customers/callers to enter variable bill amounts, we're doing this via a Gather Input widget (since I'm not a programmer). I've tried repeatedly to use the Studio Flow 'Capture Payments' widget, with three recurring problems:
Can someone tell me how to convert the digits entered into a dollars-and-cents value (effectively 'nnnn' x .01 = 'nn.nn'), OR how to enter a decimal point via the telephone touch pad?
How do I read back an 'xx dollars and xx cents' confirmation of the amount entered before continuing to process the payment?
Is there perhaps some obscure something else required to get a successful payment via the Flow Pay widget (other than dragging the connector and providing valid CC credentials)? No matter what I have tried, it does not appear to recognize the DTMF codes, and after three attempts (with a valid credit card) it disconnects.
In the gather_confirm_amount widget input box is this verbiage:
You entered the amount of {{widgets.gather_amount_due.Digits | split: "." |
first}} dollars and {{widgets.gather_amount_due.Digits | split: "." | last}}
cents.
If this is correct, press 1.
This value is entered into the 'Charge Card With Amount' field using the following:
{{widgets.gather_amount_due.Digits | split: "." | first}} dollars and
{{widgets.gather_amount_due.Digits | split: "." | last}}
When a user enters 100 into the gather_amount_due widget, I get 'You entered the amount of 100 dollars and 100 cents.' (The split on "." finds no decimal point, so first and last both return the whole string.)
Obviously, I am trying to return a response like 'You entered ONE DOLLAR and ZERO CENTS'.
Also, the Stripe payment prompt says three times 'Please enter your credit card number' and then disconnects due to 3 max attempts. It does not appear to be acknowledging the DTMF inputs at all.
Thank you.
I'm trying to figure out the same issue. Maybe you can prompt the customer to use * as the decimal point and then replace the character?
As for reading it back to the customer, I assume you got that figured out? I'm using Studio, and if you type in the number, say $123.45, and use say-as with interpret-as="currency" and add the number with $ or USD, it will dictate it (at least using the Polly voices). I basically looked up SSML tags and figured this out.
<say-as interpret-as="currency">$123.45</say-as>
I assume you debugged the DTMF issue, but they say there may be some difficulty with DTMF on international or mobile calls. The other way would be to use Speech to Text, which may also solve the decimal point issue.
I have successfully been able to accept dollars and cents and read them back.
To have the caller enter dollars and cents, you can do it one of two ways. Either have the caller press star as a decimal point, so for a $10.50 payment they would enter 1,0,*,5,0. Or they can always add two digits as cents, so for $10.50 they would press 1,0,5,0, and for, let's say, $5 they would press 5,0,0.
Now we take {{widgets.gather_amount_due.Digits}} and send it to a Function widget by placing it in a KEY/VALUE function parameter, then take those digits in the Function and convert them into a decimal number like so.
In the examples below the KEY is "amount".
For the method without the star key, use this:
exports.handler = function(context, event, callback) {
    // bring in the amount digits, e.g. "1050"
    const amount = event.amount;
    // shift two places to add the decimal: "1050" -> "10.50"
    const response = (amount / 100).toFixed(2);
    callback(null, response);
};
For the method with the star key, use this:
exports.handler = function(context, event, callback) {
    // bring in the amount digits, e.g. "10*50"
    const amount = event.amount;
    // swap the star for a decimal point: "10*50" -> "10.50"
    const response = amount.replace('*', '.');
    callback(null, response);
};
Then take the return value of the Function, {{widgets.YOUR_FUNCTION_NAME.body}}, and place it in the 'Amount' field of the Pay widget, and voilà! You can also take that same return value and place it in a Say widget to read back the amount.
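If you would rather support both entry styles in a single Function, here is a small sketch (same assumed KEY of "amount" as above):
exports.handler = function(context, event, callback) {
    const digits = String(event.amount || '');
    // "10*50" -> "10.50" (star as decimal point); plain "1050" -> "10.50"
    const response = digits.includes('*')
        ? digits.replace('*', '.')
        : (Number(digits) / 100).toFixed(2);
    callback(null, response);
};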

What is the correct way to set StopLoss and TakeProfit in OrderSend() in MetaTrader4 EA?

I'm trying to figure out if there is a correct way to set the Stop Loss (SL) and Take Profit (TP) levels when sending an order in an Expert Advisor in MQL4 (MetaTrader 4). The functional template is:
OrderSend( symbol, cmd, volume, price, slippage, stoploss, takeprofit, comment, magic, expiration, arrow_color);
So naturally I've tried to do the following:
// ND() is the author's shorthand for NormalizeDouble(value, Digits)
double dSL = Point * MM_SL;
double dTP = Point * MM_TP;
if (buy)  { cmd = OP_BUY;  price = Ask; SL = ND(Bid - dSL); TP = ND(Ask + dTP); }
if (sell) { cmd = OP_SELL; price = Bid; SL = ND(Ask + dSL); TP = ND(Bid - dTP); }
ticket = OrderSend(SYM, cmd, LOTS, price, SLIP, SL, TP, comment, magic, 0, Blue);
However, there are as many variations as there are scripts and EA's. So far I have come across these.
In the MQL4 Reference in the MetaEditor, the documentation says to use:
OrderSend(Symbol(), OP_BUY, Lots, Ask, 3,
          NormalizeDouble(Bid - StopLoss * Point, Digits),
          NormalizeDouble(Ask + TakeProfit * Point, Digits),
          "My order #2", 3, D'2005.10.10 12:30', Red);
While in the "same" documentation online, they use:
double stoploss = NormalizeDouble(Bid - minstoplevel*Point,Digits);
double takeprofit = NormalizeDouble(Bid + minstoplevel*Point,Digits);
int ticket=OrderSend(Symbol(),OP_BUY,1,price,3,stoploss,takeprofit,"My order",16384,0,clrGreen);
And so it goes on with various flavors, here, here and here...
Assuming we are interested in an OP_BUY and have the signs correct, we have the following options for basing our SL and TP values on:
bid, bid
bid, ask
ask, ask
ask, bid
So what is the correct way to set the SL and TP for a BUY?
(What are the advantages or disadvantages of using the various variations?)
EDIT: 2018-06-12
Apart from a few details, the answer is actually quite simple, although not obvious. Perhaps because MT4 only shows Bid prices on the chart (by default) and not both Ask and Bid.
Because Ask > Bid, and Ask - Bid is the spread, it doesn't matter which we choose as long as we account for the spread. Depending on which price you are following on the chart, you may decide on one over the other, adding or subtracting the spread accordingly.
So when you use the measure tool to get the pip difference between the currently shown prices and your "exact" SL/TP settings, you need to keep this in mind.
So to avoid having to account for the spread in my code above, I used the following for OP_BUY: TP = ND(Bid + dTP); (and the opposite for OP_SELL).
If you buy, OP_BUY opens at the Ask and closes (SL, TP) at the Bid.
If you sell, OP_SELL opens at the Bid and closes at the Ask.
Both SL and TP should stay at least STOP_LEVEL * Point() away from the current closing price (Bid for a buy, Ask for a sell).
It is possible that STOP_LEVEL is zero - in such cases ( while MT4 accepts the order ) the Broker may reject it, based on its own algorithms ( Terms and Conditions may call it a "floating Stoplevel" rule or some similar Marketing-wise "re-dressed" term ).
It is advised to send an OrderSend() request with zero values for SL and TP, and to modify them once you see that the order was placed successfully. Sometimes this is not required; sometimes it is even mandatory.
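A minimal sketch of that two-step pattern for a BUY (illustrative distances and values; error handling trimmed):
// Broker's minimum SL/TP distance from the current price, in price units
double stopLevel = MarketInfo(Symbol(), MODE_STOPLEVEL) * Point;

// Step 1: open at market with no SL/TP attached
int ticket = OrderSend(Symbol(), OP_BUY, 0.1, Ask, 3, 0, 0, "two-step", 16384, 0, clrBlue);

// Step 2: attach SL/TP once the position exists
if (ticket >= 0 && OrderSelect(ticket, SELECT_BY_TICKET))
{
    // for a BUY, both levels are evaluated against Bid, the closing price
    double sl = NormalizeDouble(Bid - MathMax(200 * Point, stopLevel), Digits);
    double tp = NormalizeDouble(Bid + MathMax(200 * Point, stopLevel), Digits);
    if (!OrderModify(ticket, OrderOpenPrice(), sl, tp, 0, clrBlue))
        Print("OrderModify failed: ", GetLastError());
}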
There is no difference between the two links you gave us: you may compute SL and TP and then pass them into the function or compute them based on OrderOpenPrice() +/- distance * Point().
So what is the correct way to set the SL and TP for a BUY?
There is no such thing as "The Correct Way"; there are rules to meet:
Level 0: the syntax has to meet the call signature (the easiest one).
Level 1: all at-market XTOs have to meet the right level relative to the current price +/- slippage. Make sure to repeat a RefreshRates() test as close as possible to setting the PriceDOMAIN levels, otherwise the order gets rejected on the Broker side (blocking one's trading engine for a non-deterministic add-on RTT latency) with GetLastError() == 129 | ERR_INVALID_PRICE.
Level 2: yet more rules are set on the Broker side, in their respective Service/Product definitions in [ Trading Terms and Conditions ]. If one's OrderSend() request fails to meet any one of these, the XTO will again be rejected, with the same adverse blocking effects as noted in Level 1.
Some Brokers do not allow certain XTO situations due to their T&C, so re-read such conditions with due care. Any single rule of theirs, if violated, will lead to your XTO instruction being legally rejected, with all the adverse effects noted above. Check all the rules, as you will not like to see any of the following error states, or any others restricted by your Broker's T&C:
ERR_LONG_POSITIONS_ONLY_ALLOWED Buy orders only allowed
ERR_TRADE_TOO_MANY_ORDERS The amount of open and pending orders has reached the limit set by the broker
ERR_TRADE_HEDGE_PROHIBITED An attempt to open an order opposite to the existing one when hedging is disabled
ERR_TRADE_PROHIBITED_BY_FIFO An attempt to close an order contravening the FIFO rule
ERR_INVALID_STOPS Invalid stops
ERR_INVALID_TRADE_VOLUME Invalid trade volume
#ASSUME NOTHING ; Is the best & safest design-side (self)-directive

How to calculate RPG Level Progression as percentage

I'm designing an RPG game where a user may accumulate experience points (XP) and level up based on XP. Calculating the current level is trivial; an if/else chain seems to be most efficient.
I would like to calculate the percent of progression for the current level. Assume that I have 10 levels, where each level is capped at a somewhat exponential value:
typedef NS_ENUM(NSInteger, RPGLevelCap) {
    RPGLevelCap1  = 499,
    RPGLevelCap2  = 1249,
    RPGLevelCap3  = 2249,
    RPGLevelCap4  = 3499,
    RPGLevelCap5  = 4999,
    RPGLevelCap6  = 6999,
    RPGLevelCap7  = 9999,
    RPGLevelCap8  = 14999,
    RPGLevelCap9  = 19999,
    RPGLevelCap10 = 19999 // Anything beyond the previous level is Lvl 10; display as 100%
};
What's an efficient, yet easily understandable way, to calculate a user's level progression based on their current level?
An if else statement is both hard to understand and maintain, but may be fairly efficient:
float levelProgression = 0;

// Calculate level progression as a fraction of 1
if (xp <= RPGLevelCap1)
{
    levelProgression = (float)xp / RPGLevelCap1;
}
else if (xp <= RPGLevelCap2)
{
    levelProgression = (float)(xp - RPGLevelCap1) / (RPGLevelCap2 - RPGLevelCap1);
}
else if (xp <= RPGLevelCap3)
{
    levelProgression = (float)(xp - RPGLevelCap2) / (RPGLevelCap3 - RPGLevelCap2);
}
...
else if (xp > RPGLevelCap10)
{
    levelProgression = 1;
}
Given that the level caps are inconsistent...how should I handle this problem?
Hmm. A simple way would be to store the level cap values in an array, then find the player's current level as the first cap the XP is less than or equal to (level one is 0 to 499, level two is 500 to 1249, etc.). Use a loop to find the user's level rather than a set of if/else statements.
Then calculate the width of the player's current level range (end_of_range - start_of_range):
0 to 499: width 499
500 to 1249: width 749
etc.
If a player is at 600 points, he's a level 2 character, in the 500-1249 range.
He's at 600 - 500 = 100 points into the range, and (600 - 500) / 749 * 100 is the player's percent complete in that range: 13.35% complete, in this example.
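A sketch of that loop in plain C (usable as-is from Objective-C; the cap values are the ones from the question):
// Level caps from the question; index i is the end of level i+1's range.
static const int kLevelCaps[] = { 499, 1249, 2249, 3499, 4999,
                                  6999, 9999, 14999, 19999 };
static const int kNumCaps = sizeof(kLevelCaps) / sizeof(kLevelCaps[0]);

float ProgressionForXP(int xp) {
    int start = 0;  // first XP value inside the current level's range
    for (int i = 0; i < kNumCaps; i++) {
        if (xp <= kLevelCaps[i]) {
            return (float)(xp - start) / (kLevelCaps[i] - start);
        }
        start = kLevelCaps[i] + 1;
    }
    return 1.0f;    // beyond the last cap: level 10, shown as 100%
}
For xp = 600 this returns (600 - 500) / 749, matching the 13.35% worked above.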
There are a few ways you can approach this. My weapon of choice here is to embody the XP values in a LevelData concept, versus using an enum, an array of XP values, etc. The benefit of something like this is that for each level you'll typically have many configurable values (for example, level-based multipliers), and this way they are all in one place.
Within LevelData, there are different ways you can encode XP, ranging from storing just the XP total for the next level, to storing the beginning and end XP totals for the level. Obviously there are other permutations of this.
I usually use the latter, mainly because it saves me from having to "know" the LevelData for the previous level.
So I would typically have this in JSON
{
    "levelData": [
        {
            "levelStartXP": 0,
            "levelUpXP": 250,
            "levelUpBonus": 0
        },
        {
            "levelStartXP": 250,
            "levelUpXP": 1000,
            "levelUpBonus": 50
        },
        {
            "levelStartXP": 1000,
            "levelUpXP": 2500,
            "levelUpBonus": 100
        }
    ]
}
This is just a boring example. I then of course have a LevelData class which embodies each level, and a LevelDataManager. That manager is used to vend out information per level. Having convenience methods helps; for example, good ones to have are:
- (NSInteger)levelWithXP:(NSInteger)xp; // for a given XP, give me the appropriate level
- (LevelData *)levelDataWithXP:(NSInteger)xp; //for a given XP, give me the appropriate LevelData
- (LevelData *)levelDataWithLevel:(NSInteger)level; // for a given level, give me the appropriate LevelData
- (NSInteger)levelUpXPForLevel:(NSInteger)level; // for a given level, give me the XP value needed for the next level
I just arbitrarily used NSInteger; use the appropriate data type for your case. What you want to support is really up to you.
The gist of the whole thing is: try not to store individual level components. Rather, aggregate them in LevelData or some other collection, so you have all the info per level at your disposal, and create some form of manager/interface to get you the information you need.
So back to your question. Let's say we have a class for LevelData (assume it is using the JSON above, and those fields are represented by properties) and a LevelDataManager instance called theLevelDataMgr; you could compute the percentage with something like:
LevelData *levelData = [theLevelDataMgr levelDataWithLevel:currLevel]; // currLevel is the current level
float xpDiffInLevel = levelData.levelUpXP - levelData.levelStartXP;
float xpInLevel = currXP - levelData.levelStartXP; // currXP is the user's current XP
float pctOfXP = xpInLevel / xpDiffInLevel; // you should add a divide-by-zero check
And yes, if you wanted, you could have the LevelData class contain a method to do this calculation for you to help encapsulate it even better.
Code isn't tested and is listed to just give you a better idea on how to do it. Also, how you decide to store your XP for each level dictates how this would work. This code is based on the method I usually use.
