MQL4 StringToDouble alters the value of the variable?

MQL4 documentation states that the value limits for double type variables is:
"Minimal Positive Value" = 2.2250738585072014e-308
"Maximum Value" = 1.7976931348623158e+308
See https://docs.mql4.com/basis/types/double
Why does StringToDouble() alter the value converted?
Am I doing one thing while expecting a different result?
void OnStart() {
   string s1 = "5554535251504900090807060504030201";  // 34 significant decimal digits
   double d1 = StringToDouble(s1);                    // parse the string into a double
   string s2 = DoubleToString(d1);                    // and convert it straight back
   Print("s2<", s2, ">");
   printf("%099.8f", d1);                             // zero-padded, 8 decimal places
   Print("s1<", s1, ">");
   return;
}
Here's what I get when I run that code:
s1<5554535251504900090807060504030201>
d1<000000000000000000000000000000000000000000000000000000005554535251504899684469244159852544.00000000>
s2<5554535251504899684469244159852544>
5554535251504900090807060504030201 amounts to 5.55454E+33.
Obviously, that doesn't even come remotely close to the 1.7976931348623158e+308 limit.
What am I missing here?

Q : "What am I missing here?"
The documented facts.
MQL4 uses no more than 4 bytes to store an int.
MQL4 uses no more than 8 bytes to store a double.
The IEEE-754 standard defines the rest - how many bits of those 64 are reserved for:
the exponent ( covering roughly 1E-308 … 1E+308 in decimal terms ),
the sign ( +, - ), and
the rest, for the normalised form of the mantissa : 0.???????...????
The argument that an actual number is far from either "edge" of < DBL_MIN, DBL_MAX > explains nothing about the shallowness of that number's reduced-precision representation ( see DBL_EPSILON ~ 2E-16, DBL_DIG ~ 15 significant decimal digits, or DBL_MANT_DIG ~ 53 bits left for the mantissa out of the 64-bit ( 8-Byte ) storage cell ).
There are many numbers that simply cannot be stored exactly using the IEEE-754 floating-point representation.
Tons of literature explain this, so feel free to dig deeper; or use other tools that rely on infinite-(unlimited-)precision number representation, should your use case require that.
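To see the digit budget concretely, here is a minimal C sketch (written in C for illustration, on the assumption - confirmed by the documentation quoted above - that MQL4's double is the same IEEE-754 binary64 type; strtod plays the role of StringToDouble):

#include <stdio.h>
#include <stdlib.h>
#include <float.h>

int main(void) {
    /* Only DBL_DIG (15) decimal digits survive the round trip;
       the tail of the 34-digit value below is rounding noise. */
    double d = strtod("5554535251504900090807060504030201", NULL);
    printf("DBL_DIG = %d\n", DBL_DIG);   /* 15 */
    /* with a correctly rounding libc this prints
       5554535251504899684469244159852544, matching the question */
    printf("%.0f\n", d);
    return 0;
}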

Related

Why multiply two double in dart result in very strange number

Can anyone explain why the result is 252.99999999999997 and not 253? What should be used instead to get 253?
double x = 2.11;
double y = 0.42;
print(((x + y) * 100)); // print 252.99999999999997
I am basically trying to convert a currency value with 2 decimal places (i.e. £2.11) into pence/cents (i.e. 211p)
Thanks
In short: Because many fractional double values are not precise, and adding imprecise values can give even more imprecise results. That's an inherent property of IEEE-754 floating point numbers, which is what Dart (and most other languages and the CPUs running them) are working with.
Neither of the rational numbers 2.11 and 0.42 is precisely representable as a double value. When you write 2.11 in source code, the meaning of that is the actual double value that is closest to the mathematical number 2.11.
The value of 2.11 is precisely 2.109999999999999875655021241982467472553253173828125.
The value of 0.42 is precisely 0.419999999999999984456877655247808434069156646728515625.
As you can see, both are slightly smaller than the value you intended.
Then you add those two values, which gives the precise double result 2.529999999999999804600747665972448885440826416015625. This loses a few of the last digits of the 0.42 to rounding, and since both inputs were already smaller than 2.11 and 0.42, the result is now even farther below 2.53.
Finally you multiply that by 100, which gives the precise result 252.999999999999971578290569595992565155029296875.
This is different from the double value 253.0.
The double.toString method doesn't return a string of the exact value, but it does return different strings for different values, and since the value is different from 253.0, it must return a different string. It then returns a string of the shortest number which is still closer to the result than to the next adjacent double value, and that is the string you see.
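The practical fix for the pounds-to-pence conversion is to round to the nearest integer rather than truncate (in Dart, num has a round() method for this). A minimal sketch of the same arithmetic, expressed here in C since the IEEE-754 behaviour is identical:

#include <math.h>
#include <stdio.h>

int main(void) {
    double pounds = 2.11 + 0.42;     /* actually slightly below 2.53 */
    double pence  = pounds * 100.0;  /* 252.99999999999997 */
    long rounded  = lround(pence);   /* round to nearest: 253 */
    printf("%.17g -> %ld\n", pence, rounded);
    return 0;
}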

single, double and precision

I know that storing a single (or double) value cannot be very precise, so storing for example 125.12 can result in 125.1200074788. Now in Delphi there are some useful functions like SameValue or CompareValue that take an epsilon as a parameter and say that 125.1200074788 and, for example, 125.1200087952 are equal.
But I often see in code stuff like: if aSingleVar = 0 then ... and this, as far as I can see, always works. Why? Why does storing for example 0 in a single variable keep the exact value?
Only values of the form m*2^e, where m and e are integers, can be stored in a floating-point variable (not all of them, though; it depends on the precision). 0 has this form, and 125.12 does not, as it equals 3128/25, and 1/25 is not an integer power of 2.
Comparing 125.12 to a single (or double) precision variable will most probably always return False, because a literal 125.12 will be treated as an extended-precision number, and no single (or double) precision number has exactly that value.
Looks like a good use for the BigDecimals unit by Rudy Velthuis. Millions of decimal places of accuracy and precision.
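The same distinction is easy to demonstrate in C (used here for illustration; Delphi's Single and Double are the same IEEE-754 types): 0 is exactly representable, so equality against it is reliable, while 125.12 is silently rounded on storage:

#include <stdio.h>

int main(void) {
    float zero = 0.0f;    /* 0 = 0 * 2^0: exactly representable */
    float v    = 125.12f; /* 3128/25: not of the form m*2^e, stored rounded */

    if (zero == 0.0)      /* safe: both sides are exactly zero */
        printf("zero compares exactly equal\n");

    printf("125.12f is stored as %.17g\n", (double)v);
    if ((double)v != 125.12)  /* the float and the double literal differ */
        printf("125.12 does not survive the float round-trip\n");
    return 0;
}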

sscanf in flex changing value of input

I'm using flex and bison to read in a file that has text but also floating point numbers. Everything seems to be working fine, except that I've noticed that it sometimes changes the values of the numbers. For example,
-4.036 is (sometimes) becoming -4.0359998, and
-3.92 is (sometimes) becoming -3.9200001
The .l file is using the lines
static float fvalue;
sscanf(specctra_dsn_file_yytext, "%f", &fvalue);
The values pass through the yacc parser and arrive at my own .cpp file as floats with the values described. Not all of the values are changed, and even the same value is changed in some occurrences, and unchanged in others.
Please let me know if I should add more information.
float cannot represent every number. It is typically 32-bit and so is limited to at most 2^32 different numbers. -4.036 and -3.92 are not in that set on your platform.
float is typically encoded using the IEEE 754 single-precision binary floating-point format (binary32) and rarely encodes fractional decimal values exactly. When assigning a value like -3.92, the actual value saved will be one close to that, but maybe not exact. In other words, the conversion of -3.92 to float would not have been exact whether it was done by assignment or by sscanf().
float x1 = -3.92;
// float has an exact value of -3.9200000762939453125
// View # 6 significant digits -3.92000
// OP reported -3.9200001
float x2 = -4.036;
// float has an exact value of -4.035999774932861328125
// View # 6 significant digits -4.03600
// OP reported -4.0359998
Printing these values beyond a certain number of significant decimal digits (typically 6 for float) can be expected not to match the original assignment. See Printf width specifier to maintain precision of floating-point value for a deeper C post.
OP could lower expectations of how many digits will match. Alternatively, use double and then only see this problem when, typically, more than 15 significant decimal digits are viewed.
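A minimal sketch of that digit budget (the exact digits below assume the usual IEEE-754 binary32 float):

#include <stdio.h>
#include <float.h>

int main(void) {
    float a = -4.036f;
    float b = -3.92f;
    printf("%.*g\n", FLT_DIG, a);  /* -4.036 : 6 digits round-trip fine */
    printf("%.9g\n", a);           /* -4.03599977 : cf. OP's -4.0359998 */
    printf("%.9g\n", b);           /* -3.92000008 : cf. OP's -3.9200001 */
    return 0;
}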

How to parse a decimal fraction into Rational in Haskell?

I've been participating in a programming contest and one of the problems' input data included a fractional number in a decimal format: 0.75 is one example.
Parsing that into a Double is trivial (I can use read for that), but the loss of precision is painful. One needs to be very careful with Double comparisons (I wasn't), which seems redundant since Haskell has the Rational data type.
When trying to use that, I've discovered that to read a Rational one has to provide a string in the following format: numerator % denominator, which I, obviously, do not have.
So, the question is:
What is the easiest way to parse a decimal representation of a fraction into Rational?
The number of external dependencies should be taken into consideration too, since I can't install additional libraries into the online judge.
The function you want is Numeric.readFloat:
Numeric Data.Ratio> fst . head $ readFloat "0.75" :: Rational
3 % 4
How about the following (GHCi session):
> :m + Data.Ratio
> approxRational (read "0.1" :: Double) 0.01
1 % 10
Of course you have to pick your epsilon appropriately.
Perhaps you'd get extra points in the contest for implementing it yourself:
import Data.Ratio ( (%) )

-- Split on the decimal point: "0.75" -> 0 % 1 + 75 % 100.
-- Note: handles only non-negative inputs without exponents.
readRational :: String -> Rational
readRational input = read intPart % 1 + read fracPart % (10 ^ length fracPart)
  where (intPart, fromDot) = span (/='.') input
        fracPart           = if null fromDot then "0" else tail fromDot
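For comparison, the same split-and-scale idea expressed in C (illustrative only; the long long numerator/denominator stand in for Haskell's arbitrary-precision Rational, and the fraction is left unreduced):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a non-negative decimal string such as "0.75" into num/den. */
static void read_rational(const char *s, long long *num, long long *den) {
    long long int_part = atoll(s);  /* atoll stops at the '.' */
    long long frac = 0, scale = 1;
    const char *dot = strchr(s, '.');
    if (dot) {
        for (const char *p = dot + 1; *p; ++p) {
            frac  = frac * 10 + (*p - '0');
            scale *= 10;
        }
    }
    *num = int_part * scale + frac;  /* "0.75" ->  75 */
    *den = scale;                    /*           100 */
}

int main(void) {
    long long n, d;
    read_rational("0.75", &n, &d);
    printf("%lld %% %lld\n", n, d);  /* 75 % 100 (== 3 % 4 reduced) */
    return 0;
}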

How to define -1 as a uint64 in a match clause?

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent non-negative values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the first bit is not used as part of the number itself; only the other 63 are. That bit says whether the number is negative. If the number is unsigned, then all bits, including the first, are used as part of the number, and the number is always non-negative (i.e. positive or 0).
Well, you could assign it -1 and, on most architectures, store the two's-complement pattern in there. The signed and unsigned distinction is really only for the type checking; there is no negative sign in hardware.
I have no idea if the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;               /* -1 converted to uint64_t wraps to UINT64_MAX */
    printf("0x%" PRIx64 "\n", x);  /* 0xffffffffffffffff */
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, then you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
Don't have the F# tools installed at the moment so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
