Is it better to write 0.0, 0.0f or .0f instead of simple 0 for supposed float or double values - ios

Hello, the question is all in the title. It applies especially to values typed as NSTimeInterval, CGFloat, or any other variable that is a float or a double. Thanks.
EDIT: I'm asking about value assignment, not formatting in a string.
EDIT 2: The question is really: is assigning a plain 0 to a float or a double any worse than using a literal with an f at the end?

The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
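For example (a small sketch; whether and how the compiler warns depends on the warning flags, e.g. clang's -Wimplicit-float-conversion):
double d  = 0.1;   // double-precision constant
float  f1 = 0.1;   // double constant narrowed to float; the compiler may warn about the implicit conversion
float  f2 = 0.1f;  // single-precision constant; no narrowing, no warning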

If you are initializing a variable, it makes no difference; the compiler does the cast for you.
float a = 0;    // int 0 converted to float 0.0
float b = 0.0;  // double 0.0 converted to float 0.0, since floating-point constants are double by default
float c = 0.0f; // float assigned to float; .0f is the same as 0.0f
But if you are using these in an expression, it makes a big difference.
6/5 becomes 1 (integer division)
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
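A small sketch to make that difference visible at runtime (plain C, compiles with any C compiler):
#include <stdio.h>

int main(void) {
    printf("%d\n", 6 / 5);      // integer division: prints 1
    printf("%f\n", 6 / 5.0);    // double division: prints 1.200000
    printf("%f\n", 6 / 5.0f);   // float division (promoted to double for printf): prints 1.200000
    return 0;
}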

If you want to find out whether there is any difference in the binary code the target CPU actually executes, you can copy one of the compile command lines from Xcode's build log to a terminal, fix any missing environment variables, and add -S. That gives you assembly output you can compare. If you put all four variants in a small example source file, you can diff the resulting assembly afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet a cent on something like XORing a register with itself to flush its content to 0.
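A minimal source file with the four variants might look like this (a sketch; the file name is made up, and clang -S four_zeros.c would print the assembly you can then diff):
/* four_zeros.c - four ways of writing a floating-point zero */
float  a = 0;     // int literal, converted by the compiler
float  b = 0.0;   // double literal, narrowed by the compiler
float  c = 0.0f;  // float literal, no conversion needed
double d = 0;     // int literal, converted to double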

Whether you use 0.0, .0, 0.0f or even 0.f does not make much of a difference. (There are some differences between double and float.) You may even use (float) 0.
But there is a significant difference between 0 and any of the floating-point notations. A plain 0 is always an integer constant, and that can force the machine to perform integer operations when you want floating-point operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, both because of my hobby as a photographer and because of my recent background as an HTML developer. So I found it difficult to get used to the Cocoa-style 0..1 fractions of red, green and blue. To work around that I wanted to use the values I was used to and divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should have given me a nice middle gray. But it did not; everything I tried came out either black or white.
At first I thought this was caused by some undocumented deficiency of the UI text objects I was using the colour with. It took a while to realize that these constants forced integer division, which can only produce 0 or 1.
This expression eventually did what I wanted:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, and it does not hurt to have more of them than needed. 128.0f/(float)255 would work just as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all: fvar will contain exactly 0.0 either way, since zero is exactly representable as a float. For compilers from the '60s and '70s I would have guessed a minor performance cost for fvar = 0, namely that the compiler creates an int 0 first, which then has to be converted to float before the assignment. Today's compilers do that conversion at compile time and optimize much better than older ones. In the end I'd have to look at the machine code output to see whether it makes any difference.
However, with fvar = .0; you are always on the safe side.

Related

Why does multiplying two doubles in Dart result in a very strange number

Can anyone explain why the result is 252.99999999999997 and not 253? What should be used instead to get 253?
double x = 2.11;
double y = 0.42;
print(((x + y) * 100)); // print 252.99999999999997
I am basically trying to convert a currency value with 2 decimals (i.e. £2.11) into pence/cents (i.e. 211p).
Thanks
In short: Because many fractional double values are not precise, and adding imprecise values can give even more imprecise results. That's an inherent property of IEEE-754 floating point numbers, which is what Dart (and most other languages and the CPUs running them) are working with.
Neither of the rational numbers 2.11 and 0.42 is precisely representable as a double value. When you write 2.11 in source code, what you get is the actual double value that is closest to the mathematical number 2.11.
The value of 2.11 is precisely 2.109999999999999875655021241982467472553253173828125.
The value of 0.42 is precisely 0.419999999999999984456877655247808434069156646728515625.
As you can see, both are slightly smaller than the value you intended.
Then you add those two values, which gives the precise double result 2.529999999999999804600747665972448885440826416015625. This loses a few of the last digits of the 0.42 to rounding, and since both operands were already smaller than 2.11 and 0.42, the result is now even further below 2.53.
Finally you multiply that by 100, which gives the precise result 252.999999999999971578290569595992565155029296875.
This is different from the double value 253.0.
The double.toString method doesn't return a string of the exact value, but it does return different strings for different values, and since the value is different from 253.0, it must return a different string. It then returns a string of the shortest number which is still closer to the result than to the next adjacent double value, and that is the string you see.
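The same arithmetic in C shows both the stored values and the usual fix, rounding to the nearest integer (a sketch; Dart behaves identically because both use IEEE-754 doubles):
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 2.11;
    double y = 0.42;
    printf("%.55f\n", x);                    // slightly below 2.11
    printf("%.55f\n", (x + y) * 100);        // 252.9999999999999715...
    printf("%.0f\n", round((x + y) * 100));  // 253 -- round, don't truncate, when converting to pence
    return 0;
}
In Dart, the equivalent fix is ((x + y) * 100).round(), which gives the int 253.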

DirectCompute shader: how to get rid of warning X3205: 'round'

In a shader model 5 compute shader, I have the result of some computation in a double-precision floating-point value. I have to assign the value to an integer variable and I get the warning:
warning X3205: 'round': conversion from larger type to smaller, possible loss of data
I understand the warning, but in my case the floating-point value will never exceed the range of an integer at runtime. The code produces the expected result, so I want to shut off that warning for the specific offending line.
I can't find how to turn off a specific warning, and I like to write code that does not produce any warnings, or if it does, each one has been checked to see whether it is a false alarm.
Any help appreciated.
You did not supply your code, and I suppose it was something in the form of:
double doubleValue = 1.0;
int integer = round(doubleValue);
If you want to suppress the warning, and you are sure the data you are dealing with will not give funny results, you can cast the double to a float before passing it to round().
double doubleValue = 1.0;
int integer = round((float)doubleValue);
This form does not trigger the warning.

How to use AtomicCmpExchange with double?

I have a double value that I need to access from inside a background thread. I would like to use something like AtomicCmpExchange, but it does not seem to work with double. Is there any equivalent that I can use with double? I would like to avoid using TMonitor.Enter / TMonitor.Exit as I need something as fast as possible. I'm on Android/iOS, so under FireMonkey.
You could typecast the double values to UInt64 values:
PUInt64(@dOld)^ := AtomicCmpExchange(PUInt64(@d)^, PUInt64(@dNew)^, PUInt64(@dComp)^);
Note that you need to align the variables properly, according to the platform's specifications.
As @David pointed out, comparing double values is not the same as comparing UInt64 values. Some specific double values behave out of the ordinary:
A NaN is normally (as specified in IEEE-754) detected by comparing a value with itself.
IsNaN := d <> d;
Footnote: Delphi raises an exception by default when comparing a NaN, but other compilers may behave differently. In Delphi there is an IsNaN() function to use instead.
Likewise the value zero can be either positive or negative, each with a special meaning. Comparing double 0 with double -0 returns true, but comparing their memory footprints returns false.
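The same bit-reinterpretation trick, shown in C for illustration (a sketch using C11 atomics; the helper name is made up, and it compares bit patterns rather than numeric values, so NaN and +0/-0 behave as described above):
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

// Compare-and-swap a double stored in an atomic 64-bit integer slot.
static bool cas_double(_Atomic uint64_t *slot, double expected, double desired) {
    uint64_t exp_bits, des_bits;
    memcpy(&exp_bits, &expected, sizeof exp_bits);  // reinterpret the doubles as raw bits
    memcpy(&des_bits, &desired, sizeof des_bits);
    return atomic_compare_exchange_strong(slot, &exp_bits, des_bits);
}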
Maybe using the System.SyncObjs.TInterlocked class methods would be better?

iOS warning message : Incompatible pointer types passing 'CGFloat *' (aka 'double *') to parameter of type 'float *'

This is causing my app to act up. The warning is triggered by this line: modff(floatIndex, &intIndex); What do I need to do to fix this issue?
Edit: it is because of &intIndex
- (BOOL)isFloatIndexBetween:(CGFloat)floatIndex {
CGFloat intIndex, restIndex;
restIndex = modff(floatIndex, &intIndex);
BOOL isBetween = fabsf(restIndex - 0.5f) < EPSILON;
return isBetween;
}
As I recall, CGFloat is defined as float on 32-bit devices and double on 64-bit devices. Thus you don't want to use CGFloat in a call to modff(). Instead, declare your variables using a specific type and use casting.
Something like this (in this case I am using modff and all float variables):
- (BOOL)isFloatIndexBetween:(CGFloat)floatIndex
{
float restIndex;
float first, second;
first = (float) floatIndex;
restIndex = modff(first, &second);
BOOL isBetween = fabsf(restIndex - 0.5f) < EPSILON;
return isBetween;
}
Learning to speak compiler error/warning is an invaluable skill. In this case, it is telling you that modff is expecting a float (that is, a single-precision floating-point number), but you're passing it a CGFloat (which is typedef'd as double here, a double-precision floating-point number). As NobodyNada says, you can either change which function you're using or the type of intIndex.
You are passing CGFloats (typedef'ed to double on your system) to functions that expect floats.
You can either change modff and fabsf to modf and fabs, respectively (slower but more precise), or change intIndex and restIndex to be floats instead of doubles (faster but less precise).
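A sketch of the first option, written as a plain C function for brevity (assuming a 64-bit build where CGFloat is double; the EPSILON macro stands in for whatever tolerance the original code defines):
#include <math.h>
#include <stdbool.h>

#define EPSILON 0.0001   // assumed tolerance; stands in for the original macro

static bool isFloatIndexBetween(double floatIndex) {
    double intIndex;
    // modf and fabs operate on doubles, so a 64-bit CGFloat loses no precision here.
    double restIndex = modf(floatIndex, &intIndex);
    return fabs(restIndex - 0.5) < EPSILON;
}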
Perhaps the easiest way to avoid these kinds of warnings and errors when using an architecture-specific type like CGFloat is to put #import <tgmath.h> in your precompiled header or in the imports for this file. That way the type-generic versions of the underlying C functions are used. In this case it makes your warnings go away without any code changes. Then it's just a matter of making sure the precision is what you want.
If you are building for a 64-bit architecture (like arm64), then CGFloat is defined as double, i.e. an 8-byte floating-point number, whereas float is a 4-byte floating-point number.
So pick the functions (and casts) according to the architecture.

Objective C ceil returns wrong value

NSLog(#"CEIL %f",ceil(2/3));
should return 1. However, it shows:
CEIL 0.000000
Why and how to fix that problem? I use ceil([myNSArray count]/3) and it returns 0 when array count is 2.
The same rules as C apply: 2 and 3 are ints, so 2/3 is an integer divide. Integer division truncates so 2/3 produces the integer 0. That integer 0 will then be cast to a double precision float for the call to ceil, but ceil(0) is 0.
Changing the code to:
NSLog(#"CEIL %f",ceil(2.0/3.0));
Will display the result you're expecting. Adding the decimal point causes the constants to be recognised as double precision floating point numbers (and 2.0f is how you'd type a single precision floating point number).
Maudicus' solution works because (float)2/3 casts the integer 2 to a float, and C's promotion rules then promote the denominator to floating point as well, so the division is done in floating point and gives a floating-point result.
So, your current statement ceil([myNSArray count]/3) should be changed to either:
([myNSArray count] + 2)/3 // no floating point involved
Or:
ceil((float)[myNSArray count]/3) // arguably more explicit
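A quick check of both variants (a sketch; count stands in for [myNSArray count]):
#include <math.h>
#include <stdio.h>

int main(void) {
    unsigned long count = 2;                  // e.g. an array with two elements
    printf("%lu\n", (count + 2) / 3);         // integer ceiling division: prints 1
    printf("%f\n", ceil((float)count / 3));   // floating-point version: prints 1.000000
    return 0;
}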
2/3 evaluates to 0 unless you cast it to a float.
So you have to be careful about your values being truncated to ints before you intend.
float decValue = (float) 2/3;
NSLog(#"CEIL %f",ceil(decValue));
==>
CEIL 1.000000
For your array example:
float decValue = (float) [myNSArray count]/3;
NSLog(#"CEIL %f",ceil(decValue));
It evaluates 2 and 3 as integers (which they are, obviously), computes the result (which is 0), and then converts it to float or double (which is also 0.000000). The easiest way to fix it is to write 2.0f/3, 2/3.0f, or 2.0f/3.0f (or without the "f" if you wish, whichever you like more ;) ).
Hope it helps
