Issues setting a maximum amount of tokens in an ERC20 contract

I've been trying to create a very simple ERC20 token with Truffle on the Rinkeby network. I placed the following code into my .sol file, but the max supply doesn't seem to match.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract Artytoken is ERC20 {
    address public admin;
    uint private _totalSupply;

    constructor() ERC20('ArtyToken', 'AATK') {
        admin = msg.sender;
        _totalSupply = 1000000;
        _mint(admin, _totalSupply);
    }
}
Metamask shows that I own "0.000000000001", and on Etherscan the max total supply also shows as "0.000000000001". What am I doing wrong?
Thank you in advance! :)

The EVM does not support decimal numbers (to prevent rounding errors related to the network nodes running on different architectures), so all numbers are integers.
The ERC-20 token standard defines the decimals() function, which your contract inherits from the OpenZeppelin ERC20 implementation (it defaults to 18); all amounts are effectively shifted by that many zeros to simulate decimals.
So if you wanted to mint 1 token with 18 decimals, you'd need to pass the value 1000000000000000000 (18 zeros). Or as in your case, 1 million tokens (6 zeros) with 18 decimals is represented as 1000000000000000000000000 (24 zeros). Same goes the other way around, 0.5 of a token with 18 decimals is 500000000000000000 (17 zeros).
You can also use underscores (they do nothing except visually separate the digits) and scientific notation to reduce human error when working with that many zeros:
// 6 zeros and 18 zeros => 24 zeros
_totalSupply = 1_000_000 * 1e18;
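If it helps to see the arithmetic concretely, here is a tiny sketch (written in C only to show the division that wallets and explorers perform; the assumption is simply that they display raw_units / 10^decimals, the contract itself never touches floating point):

#include <stdio.h>

int main(void) {
    /* Wallets and explorers display raw_units / 10^decimals. */
    double decimals_factor = 1e18;   /* default ERC20 decimals() = 18 */
    double raw_minted   = 1000000.0; /* what the constructor minted   */
    double raw_intended = 1e24;      /* 1_000_000 * 1e18              */

    printf("%.12f\n", raw_minted / decimals_factor);   /* 0.000000000001 */
    printf("%.0f\n",  raw_intended / decimals_factor); /* 1000000        */
    return 0;
}

With the raw value of 1000000, the displayed balance is 10^6 / 10^18 = 10^-12, which is exactly the 0.000000000001 you saw in Metamask and Etherscan.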

Related

bin2dec for 16 bit signed binary values (in google sheets)

In google sheets, I'm trying to convert a 16-bit signed binary number to its decimal equivalent, but the built in function that does that only takes up to 10 bits. Other solutions to the problem that I've seen don't preserve the signedness.
So far I've tried:
bin2dec on the leftmost 8 bits * 2^8 + bin2dec on the rightmost 8 bits
hex2dec on the result of bin2dec on the leftmost 8 bits concatenated with bin2dec on the rightmost 8 bits
I've also seen a suggestion that multiplies each bit by its power of 2, eliminating bin2dec altogether.
Any suggestions?
You will need to use a custom Apps Script function, for example:
function binary2decimal(bin) {
  // parseInt with radix 2 converts a binary string (optionally prefixed
  // with "-") to its decimal value; call it from a cell as =binary2decimal(A2)
  return parseInt(bin, 2);
}
Let's assume that your binary number is in cell A2.
First, set the formatting as follows: Format > Number > Plain text.
Then place the following formula in, say, B2:
=ArrayFormula(SUM(SPLIT(REGEXREPLACE(SUBSTITUTE(A2&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,LEN(SUBSTITUTE(A2&"","-","")),LEN(SUBSTITUTE(A2&"","-",""))-1,-1))*IF(LEFT(A2)="-",-1,1)))
This formula will process a binary number of any length, positive or negative, from 1 bit to 16 bits (and, in fact, up to a length of 45 or 46 bits).
What this formula does is SPLIT the binary number (without the negative sign, if present) into its separate bits, one per column; multiply each of those by 2 raised to the power of the corresponding element of an equal-sized descending SEQUENCE that runs from LEN-1 (one less than the number of bits) down to zero; and finally apply the negative sign conditionally via IF, if one exists.
If you need to process a range where every value is a positive or negative binary number with exactly 16 bits, you can do so. Suppose that your 16-bit binary numbers are in the range A2:A. First, be sure to select all of Column A and set the formatting to "Plain text" as described above. Then place the following array formula into, say, B2 (being sure that B2:B is empty first):
=ArrayFormula(MMULT(SPLIT(REGEXREPLACE(SUBSTITUTE(FILTER(A2:A,A2:A<>"")&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,16,15,-1)),SEQUENCE(16,1,1,0))*IF(LEFT(FILTER(A2:A,A2:A<>""))="-",-1,1))
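For readers who prefer code to sheet formulas, the same accumulate-by-powers-of-two idea looks like this as a small C sketch (binary_to_decimal is just an illustrative helper name, and the sign is assumed to be written as a leading '-', as in the formulas above):

#include <stdio.h>
#include <string.h>

/* Weight each bit by a descending power of two, then re-apply the sign. */
long long binary_to_decimal(const char *bin) {
    int negative = (bin[0] == '-');
    const char *digits = bin + (negative ? 1 : 0);
    long long value = 0;
    for (size_t i = 0; i < strlen(digits); i++)
        value = value * 2 + (digits[i] - '0');  /* shift in the next bit */
    return negative ? -value : value;
}

int main(void) {
    printf("%lld\n", binary_to_decimal("1010"));   /* 10  */
    printf("%lld\n", binary_to_decimal("-1010"));  /* -10 */
    return 0;
}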

MQL4 StringToDouble alters the value of the variable?

MQL4 documentation states that the value limits for double type variables is:
"Minimal Positive Value" = 2.2250738585072014e-308
"Maximum Value" = 1.7976931348623158e+308
See https://docs.mql4.com/basis/types/double
Why does StringToDouble() alter the value converted?
Am I doing one thing while expecting a different result?
void OnStart() {
    string s1 = "5554535251504900090807060504030201";
    double d1 = StringToDouble(s1);
    string s2 = DoubleToString(d1);
    Print("s2<", s2, ">");
    printf("%099.8f", d1);
    Print("s1<", s1, ">");
    return;
}
Here's what I get when I run that code:
s1<5554535251504900090807060504030201>
d1<000000000000000000000000000000000000000000000000000000005554535251504899684469244159852544.00000000>
s2<5554535251504899684469244159852544>
5554535251504900090807060504030201 amounts to 5.55454E+33.
Obviously, that doesn't even come remotely close to the 1.7976931348623158e+308 limit.
What am I missing here?
Q : "What am I missing here?"
The documented facts.
MQL4 uses no more than 4 bytes to store an int.
MQL4 uses no more than 8 bytes to store a double.
The IEEE-754 standard defines the rest: how many of those 64 bits are reserved for the exponent ( -308, 0, +308 ), how many for the sign ( +, - ), and how many for the normalised form of the mantissa ( 0.???????...???? ).
The argument that the actual number is far from either "edge" of < DBL_MIN, DBL_MAX > explains nothing about the limited precision of its representation ( see DBL_EPSILON ~ 2E-16, DBL_DIG ~ 15 significant digits, or DBL_MANT_DIG ~ 53 bits of the 64-bit ( 8-byte ) storage cell left for the mantissa ).
There are many numbers that simply cannot be stored exactly using the IEEE-754 floating-point representation.
Plenty of literature explains this, so feel free to dig deeper, or use other tools that rely on arbitrary-precision number representation, should your use case require that.
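The effect is easy to reproduce outside MQL4; here is a minimal C sketch, assuming the usual IEEE-754 64-bit double (which is also what MQL4's double is):

#include <stdio.h>
#include <float.h>

int main(void) {
    /* The OP's 34-digit value has far more significant digits than a
       64-bit double can hold. */
    double d = 5554535251504900090807060504030201.0;
    printf("DBL_DIG = %d\n", DBL_DIG);  /* typically 15 */
    /* Prints the nearest representable double, e.g.
       5554535251504899684469244159852544 (as in the OP's output). */
    printf("%.0f\n", d);
    return 0;
}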

How to convert a floating point number to a string with max. 2 decimal digits in Delphi

How can I convert a floating point number to a string with a maximum of 2 decimal digits in Delphi7?
I've tried using:
FloatToStrF(Query.FieldByName('Quantity').AsFloat, ffGeneral, 18, 2, FS);
But with the above, sometimes more than 2 decimal digits are given back, ie. the result is: 15,60000009
Use ffFixed instead of ffGeneral.
ffGeneral ignores the Decimal parameter.
When you use ffGeneral, the 18 is saying that you want 18 significant decimal digits. The routine will then express that number in the shortest manner, using scientific notation if necessary. The 2 is ignored.
When you use ffFixed, you are saying you want 2 digits after the decimal point.
If you are wondering about why you sometimes get values that seem to be imprecise, there is much to be found on this site and others that will explain how floating-point numbers work.
In this case, AsFloat is returning a double, which, like (most) other floating-point formats, stores its value in binary. In the same way that 1/3 cannot be written in decimal with finitely many digits, neither can 15.6 be represented in binary in a finite number of bits. The system chooses the closest possible value that can be stored in a double. The exact value, in decimal, is:
15.5999999999999996447286321199499070644378662109375
If you had asked for 16 digits of precision, the value would've been rounded off to 15.6. But you asked for 18 digits, so you get 15.5999999999999996.
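If you want to see the same rounding behaviour outside Delphi, here is a minimal C sketch (printf's %g plays roughly the role of ffGeneral; the exact digits assume an IEEE-754 double):

#include <stdio.h>

int main(void) {
    double quantity = 15.6;
    /* The nearest double to 15.6 is 15.59999999999999964472863212... */
    printf("%.16g\n", quantity);  /* 16 significant digits: 15.6                */
    printf("%.18g\n", quantity);  /* 18 significant digits: 15.5999999999999996 */
    return 0;
}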
If you really mean what you write (a MAXIMUM of 2 decimal digits) and not ALWAYS 2 decimal digits, then the two code snippets in the comments won't give you what you asked for: they will return a string that ALWAYS has two decimal digits, i.e. ONE is returned as "1.00" (or "1,00" for Format, depending on your decimal separator).
If you truly want an option with MAX 2 decimal digits, you'll have to do a little post-processing of the returned string.
FUNCTION FloatToStrMaxDecimals(F : Extended; MaxDecimals : BYTE) : STRING;
BEGIN
  Result := Format('%.' + IntToStr(MaxDecimals) + 'f', [F]);
  WHILE Result[LENGTH(Result)] = '0' DO DELETE(Result, LENGTH(Result), 1);
  IF Result[LENGTH(Result)] IN ['.', ','] THEN DELETE(Result, LENGTH(Result), 1)
END;
An alternative (and probably faster) implementation could be:
FUNCTION FloatToStrMaxDecimals(F : Extended; MaxDecimals : BYTE) : STRING;
BEGIN
  Result := Format('%.' + IntToStr(MaxDecimals) + 'f', [F]);
  WHILE Result[LENGTH(Result)] = '0' DO SetLength(Result, PRED(LENGTH(Result)));
  IF Result[LENGTH(Result)] IN ['.', ','] THEN SetLength(Result, PRED(LENGTH(Result)))
END;
This function will return a floating point number with MAX the number of specified decimal digits, ie. one half with MAX 2 digits will return "0.5" and one third with MAX 2 decimal digits will return "0.33" and two thirds with MAX 2 decimal digits will return "0.67". TEN with MAX 2 decimal digits will return "10".
The final IF statement should really test for the proper decimal point, but I don't think any value other than period or comma is possible, and if one of these are left as the last character in the string after having stripped all zeroes from the end, then it MUST be a decimal point.
Also note, that this code assumes that strings are indexed with 1 for the first character, as it always is in Delphi 7. If you need this code for the mobile compilers in newer Delphi versions, you'll need to update the code. I'll leave that exercise up to the reader :-).
I use this function in my application:
function sclCurrencyND(Const F: Currency; GlobalDegit: word = 2): Currency;
var
  Fact: Currency;
begin
  // Power is declared in the Math unit; Int truncates rather than rounds,
  // so anything beyond GlobalDegit decimal digits is simply cut off.
  Fact := Power(10, GlobalDegit);
  Result := Int(F * Fact) / Fact;
end;
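The same scale-truncate-unscale idea, sketched in C for comparison (illustrative only; trunc cuts digits off, so the result is truncated rather than rounded, exactly like the Int call above; swap in round if rounding is wanted):

#include <stdio.h>
#include <math.h>

/* Scale up, drop the extra digits, scale back down. */
double max_decimals(double value, int digits) {
    double factor = pow(10.0, digits);
    return trunc(value * factor) / factor;
}

int main(void) {
    printf("%g\n", max_decimals(15.678, 2));  /* 15.67 */
    printf("%g\n", max_decimals(15.6, 2));    /* 15.6  */
    return 0;
}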

sscanf in flex changing value of input

I'm using flex and bison to read in a file that has text but also floating point numbers. Everything seems to be working fine, except that I've noticed that it sometimes changes the values of the numbers. For example,
-4.036 is (sometimes) becoming -4.0359998, and
-3.92 is (sometimes) becoming -3.9200001
The .l file is using the lines
static float fvalue ;
sscanf(specctra_dsn_file_yytext, "%f", &fvalue) ;
The values pass through the yacc parser and arrive at my own .cpp file as floats with the values described. Not all of the values are changed, and even the same value is changed in some occurrences, and unchanged in others.
Please let me know if I should add more information.
float cannot represent every number. It is typically 32-bit and so is limited to at most 2^32 different values; -4.036 and -3.92 are not in that set on your platform.
float is typically encoded using the IEEE 754 single-precision binary floating-point format (binary32) and rarely encodes fractional decimal values exactly. When assigning a value like -3.92, the actual value saved will be one close to it, but maybe not exact. In other words, the conversion of -3.92 to float is not exact, whether it is done by assignment or by sscanf().
float x1 = -3.92;
// exact value stored in the float: -3.9200000762939453125
// viewed to 6 significant digits:  -3.92000
// OP reported:                     -3.9200001
float x2 = -4.036;
// exact value stored in the float: -4.035999774932861328125
// viewed to 6 significant digits:  -4.03600
// OP reported:                     -4.0359998
Printing these values beyond a certain number of significant decimal digits (typically 6 for float) can be expected not to match the original text. See "Printf width specifier to maintain precision of floating-point value" for a deeper C-oriented post.
OP could lower their expectations of how many digits will match. Alternatively, they could parse into a double and only see this problem once more than about 15 significant decimal digits are viewed.
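For completeness, a small stand-alone C sketch of the float-versus-double difference (the printed digits assume the usual IEEE-754 binary32/binary64 formats):

#include <stdio.h>

int main(void) {
    const char *text = "-4.036";
    float f;
    double d;
    sscanf(text, "%f", &f);   /* parse into 32-bit float  */
    sscanf(text, "%lf", &d);  /* parse into 64-bit double */
    printf("float : %.9g\n", f);   /* -4.03599977          */
    printf("double: %.17g\n", d);  /* -4.0359999999999996  */
    return 0;
}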

Large lua numbers are being printed incorrectly

I have the following test case:
Lua 5.3.2 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> foo = 1000000000000000000
> bar = foo + 1
> bar
1000000000000000001
> string.format("%.0f", foo)
1000000000000000000
> string.format("%.0f", bar)
1000000000000000000
That last line should be 1000000000000000001, since that's the value of bar, but for some reason it's not. This doesn't apply only to 1000000000000000000; I've yet to find a number above it that does give the correct value. Can anyone give an explanation for why this happens?
You're formatting the number as floating point, not as an integer; that's what %.0f does. At some point floats lose precision: a double, for example, can no longer represent every integer exactly beyond about 16 decimal digits.
If you want to format an integer as an integer, then you need to format it as an integer, using standard printf rules:
string.format("%i", bar)
log2(1000000000000000000) is between 59 and 60, which means that the binary representation of that number needs 60 bits. Double-precision floating-point numbers have only 53 bits of precision, plus a power-of-two exponent with 11 bits of range. So to store that large a number as floating point (which is what you requested with the %f format specifier), six to seven bits of precision are chopped off the end of the number, and the whole thing is multiplied by a power of two to get it back in range (2^59 in this case, I think). Chopping off those final bits removes the precision that allows 1000000000000000000 and 1000000000000000001 to be distinct from each other.
(This is not a particularly precise description of floating point, apologies if my numbers or descriptions are not exact.)
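A minimal C sketch showing the same effect (Lua 5.3's %.0f path goes through an ordinary IEEE-754 double; the only assumptions here are a 64-bit double and a 64-bit long long):

#include <stdio.h>

int main(void) {
    long long foo = 1000000000000000000LL;
    long long bar = foo + 1;
    /* Converting to double drops the low bits: near 1e18, adjacent
       doubles are 128 apart, so foo and bar round to the same value. */
    printf("%.0f\n", (double)foo);  /* 1000000000000000000 */
    printf("%.0f\n", (double)bar);  /* 1000000000000000000 */
    printf("%lld\n", bar);          /* 1000000000000000001 */
    return 0;
}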

Resources