zig print float precision

In Zig it is possible to print float values in decimal notation by using "{d}". This automatically prints the value at full precision. Is there a way to specify the number of digits, either per value or as some kind of global setting?

This will limit the number of digits after the decimal point, with rounding and zero-padding:
std.fmt.format(writer, "{d:.1}", .{0.05}) // writes "0.1"
std.fmt.format(writer, "{d:.3}", .{0.05}) // writes "0.050"
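For comparison, the precision field of C's printf does the same rounding and zero-padding. A minimal sketch in plain C, offered as an analogy rather than Zig:

#include <stdio.h>

int main(void) {
    printf("%.1f\n", 0.05); /* rounds up: prints 0.1 */
    printf("%.3f\n", 0.05); /* zero-pads: prints 0.050 */
    return 0;
}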

Related

Why does multiplying two doubles in Dart result in a very strange number?

Can anyone explain why the result is 252.99999999999997 and not 253? What should be used instead to get 253?
double x = 2.11;
double y = 0.42;
print(((x + y) * 100)); // prints 252.99999999999997
I am basically trying to convert a currency value with 2 decimals (i.e. £2.11) into pence/cents (i.e. 211p).
Thanks
In short: Because many fractional double values are not precise, and adding imprecise values can give even more imprecise results. That's an inherent property of IEEE-754 floating point numbers, which is what Dart (and most other languages and the CPUs running them) are working with.
Neither of the rational numbers 2.11 and 0.42 is precisely representable as a double value. When you write 2.11 in source code, the meaning of that is the actual double value that is closest to the mathematical number 2.11.
The value of 2.11 is precisely 2.109999999999999875655021241982467472553253173828125.
The value of 0.42 is precisely 0.419999999999999984456877655247808434069156646728515625.
As you can see, both are slightly smaller than the value you intended.
Then you add those two values, which gives the precise double result 2.529999999999999804600747665972448885440826416015625. This loses a few of the last digits of the 0.42 to rounding, and since both inputs were already smaller than 2.11 and 0.42, the result falls even further below 2.53.
Finally you multiply that by 100, which gives the precise result 252.999999999999971578290569595992565155029296875.
This is different from the double value 253.0.
The double.toString method doesn't return a string of the exact value, but it does return different strings for different values, and since the value differs from 253.0, it must return a different string. It returns the shortest numeral that is still closer to the actual result than to any adjacent double value, and that is the string you see.
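Dart doubles are the same IEEE-754 binary64 values C uses, so the effect, and the usual fix of rounding rather than truncating, can be reproduced in a minimal C sketch (in Dart itself, ((x + y) * 100).round() does the same job):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 2.11;
    double y = 0.42;
    printf("%.17g\n", (x + y) * 100);             /* 252.99999999999997 */
    printf("%lld\n", (long long)((x + y) * 100)); /* truncation keeps the error: 252 */
    printf("%lld\n", llround((x + y) * 100));     /* rounding recovers: 253 */
    return 0;
}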

sscanf in flex changing value of input

I'm using flex and bison to read in a file that has text but also floating point numbers. Everything seems to be working fine, except that I've noticed that it sometimes changes the values of the numbers. For example,
-4.036 is (sometimes) becoming -4.0359998, and
-3.92 is (sometimes) becoming -3.9200001
The .l file is using the lines
static float fvalue ;
sscanf(specctra_dsn_file_yytext, "%f", &fvalue) ;
The values pass through the yacc parser and arrive at my own .cpp file as floats with the values described. Not all of the values are changed, and even the same value is changed in some occurrences, and unchanged in others.
Please let me know if I should add more information.
float cannot represent every number. It is typically 32-bit and so is limited to at most 2^32 different numbers. -4.036 and -3.92 are not in that set on your platform.
float is typically encoded using the IEEE 754 single-precision binary floating-point format (binary32) and rarely encodes fractional decimal values exactly. When assigning a value like -3.92, the value actually saved will be one close to that, but maybe not exact. In other words, the conversion of -3.92 to float would not be exact whether done by assignment or by sscanf().
float x1 = -3.92;
// x1 has an exact value of -3.9200000762939453125
// viewed at 6 significant digits: -3.92000
// OP reported: -3.9200001
float x2 = -4.036;
// x2 has an exact value of -4.035999774932861328125
// viewed at 6 significant digits: -4.03600
// OP reported: -4.0359998
Printing these values beyond a certain number of significant decimal digits (typically 6 for float) can be expected not to match the original assignment. See "Printf width specifier to maintain precision of floating-point value" for a deeper C-focused post.
OP could lower expectations of how many digits will match. Alternatively, use double and then only see this problem when, typically, more than 15 significant decimal digits are viewed.
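A minimal C sketch that reproduces the drift the OP saw (the exact digits may differ by a unit in the last place on other platforms):

#include <stdio.h>

int main(void) {
    float x1 = -3.92f;
    float x2 = -4.036f;
    printf("%.6g\n", x1); /* -3.92      : at 6 significant digits the value still looks exact */
    printf("%.8g\n", x1); /* -3.9200001 : at 8 digits the drift shows, as OP reported */
    printf("%.8g\n", x2); /* -4.0359998 */
    return 0;
}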

Lua - round to double

The result of math.sqrt(2) seems to be irrational so this occurs:
> return math.sqrt(2)
1.4142135623731
> return math.sqrt(2) == 1.4142135623731
false
How do I make this "irrational" value compare equal to one obtained a different way (as in the example above)?
The variable is not irrational, it is floating-point, so it isn't even real. (The square root of 2 is irrational, though, and thus cannot be represented exactly by a floating-point value.)
Just use more digits for your literal, and the round-trip conversion will work. An IEEE double-precision floating-point value needs 17 significant decimal digits to safely represent it, not 14.
Let's see what happens when we take the number 1 and uptick it in the least significant bit. (The '0x' means the numeral is hexadecimal. That makes it easier for me to control the bits for this example.):
> x = 0x1.0000000000001
> print(x == 1)
false
> print(('%.16g'):format(x))
1
> print(('%.17g'):format(x))
1.0000000000000002
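The same round-trip experiment works in C, since Lua numbers are the same IEEE-754 doubles. A minimal sketch (assuming a correctly rounded sqrt, which IEEE 754 requires):

#include <math.h>
#include <stdio.h>

int main(void) {
    double r = sqrt(2.0);
    printf("%d\n", r == 1.4142135623731);    /* 0: 14 digits name a different double */
    printf("%d\n", r == 1.4142135623730951); /* 1: 17 digits round-trip exactly */
    printf("%.17g\n", r);                    /* 1.4142135623730951 */
    return 0;
}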

Big number and loss of precision

I do this operation and I want the result without an exponent:
void main() {
  var result = ((1.1+2)+3.14+4+(5+6+7)+(8+9+10)*4267387833344334647677634)/2*553344300034334349999000;
  print(result); // with exponent
  print(result.toInt()); // full number?
}
And it prints
3.18780189038289e+49
31878018903828901761984975061078744643351263313920
But the toInt() result is wrong; the correct result is 31878018903828899277492024491376690701584023926880. I checked it with the Groovy web console.
How can I get the full number as an int?
As there are double literals in the expression, the result type is double.
In Dart a double is a:
64-bit (double-precision) floating-point number, as specified by the IEEE 754 standard
This is why you lose precision.
You can see the loss of precision with the following code:
final bignum = 31878018903828899277492024491376690701584023926880;
print(bignum);
// displays 31878018903828899277492024491376690701584023926880
print(bignum.toDouble().toInt());
// displays 31878018903828901761984975061078744643351263313920
This loss of precision is not specific to Dart; for instance, 31878018903828899277492024491376690701584023926880.0 and 31878018903828901761984975061078744643351263313920.0 are equal in Java.
The Groovy web console gives you the right result because, AFAICT, Groovy literals with decimal points are instantiated as java.math.BigDecimal by default.
Finally, there is an open issue on a Decimal data type that you can star; until decimals are natively supported, you can use my decimal package.
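The same loss is visible from C, because the integer literal below cannot be stored exactly in a binary64 double. A minimal sketch (the full-digit output assumes a C library, such as glibc, that prints the exact stored value):

#include <stdio.h>

int main(void) {
    /* the intended integer: 31878018903828899277492024491376690701584023926880 */
    double d = 31878018903828899277492024491376690701584023926880.0;
    printf("%.0f\n", d);
    /* prints the nearest representable double:
       31878018903828901761984975061078744643351263313920 */
    return 0;
}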

Objective C ceil returns wrong value

NSLog(#"CEIL %f",ceil(2/3));
should return 1. However, it shows:
CEIL 0.000000
Why, and how do I fix that problem? I use ceil([myNSArray count]/3) and it returns 0 when the array count is 2.
The same rules as C apply: 2 and 3 are ints, so 2/3 is an integer divide. Integer division truncates so 2/3 produces the integer 0. That integer 0 will then be cast to a double precision float for the call to ceil, but ceil(0) is 0.
Changing the code to:
NSLog(#"CEIL %f",ceil(2.0/3.0));
Will display the result you're expecting. Adding the decimal point causes the constants to be recognised as double precision floating point numbers (and 2.0f is how you'd type a single precision floating point number).
Maudicus' solution works because (float)2/3 casts the integer 2 to a float and C's promotion rules mean that it'll promote the denominator to floating point in order to divide a floating point number by an integer, giving a floating point result.
So, your current statement ceil([myNSArray count]/3) should be changed to either:
([myNSArray count] + 2)/3 // no floating point involved
Or:
ceil((float)[myNSArray count]/3) // arguably more explicit
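Both suggestions can be sanity-checked side by side. A minimal C sketch, with a plain integer n standing in for [myNSArray count]:

#include <math.h>
#include <stdio.h>

int main(void) {
    for (unsigned n = 0; n <= 7; n++) {
        unsigned a = (n + 2) / 3;         /* integer-only ceiling division */
        double   b = ceil((double)n / 3); /* floating-point ceiling */
        printf("n=%u  (n+2)/3=%u  ceil=%g\n", n, a, b);
    }
    return 0;
}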
2/3 evaluates to 0 unless you cast it to a float.
So you have to be careful that your values aren't truncated to ints before you want them to be.
float decValue = (float) 2/3;
NSLog(@"CEIL %f", ceil(decValue));
==>
CEIL 1.000000
For your array example:
float decValue = (float) [myNSArray count]/3;
NSLog(@"CEIL %f", ceil(decValue));
It probably evaluates 2 and 3 as integers (as they are, obviously), computes the result (which is 0), and then converts it to float or double (which is also 0.000000). The easiest way to fix it is to write 2.0f/3, 2/3.0f, or 2.0f/3.0f (or without the "f" if you wish, whatever you like more ;) ).
Hope it helps
