DirectCompute shader: how to get rid of warning X3205: 'round' - directx

In a shader model 5 compute shader, I have the result of some computation in a double-precision floating-point value. I have to assign the value to an integer variable, and I get the warning:
warning X3205: 'round': conversion from larger type to smaller, possible loss of data
I understand the warning, but in my case the floating-point value will never exceed the range of an integer at runtime. The code produces the expected result, so I want to silence that warning for the specific offending line.
I can't find how to turn off a specific warning, and I like to write code that produces no warnings, or only warnings that have been checked and confirmed to be false alarms.
Any help appreciated.

You did not supply your code, and I suppose it was something in the form of:
double doubleValue = 1.0;
int integer = round(doubleValue);
If you want to suppress the warning, and you are sure the data you are dealing with will not give funny results, you can cast the double to a float before passing it to round().
double doubleValue = 1.0;
int integer = round((float)doubleValue);
This form does not trigger the warning.
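For comparison, here is a rough C analogue of the same situation (a sketch assuming a compiler with -Wconversion enabled, such as clang or gcc, not HLSL): the implicit double-to-int narrowing warns much like X3205 does, and an explicit cast documents the intent and silences it.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double doubleValue = 1.0;
    /* int implicitInt = round(doubleValue);  -- with -Wconversion this line would warn,
       much like X3205 does in HLSL */
    int integer = (int)round(doubleValue);    /* explicit cast: intent documented, warning silenced */
    printf("%d\n", integer);
    return 0;
}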

Related

Why, in Swift, when I convert from a Double to an Int is it subtracting 1?

I have some very simple code that does a calculation and converts the resulting double to an int.
let startingAge = (Double(babyAge/2).rounded().nextDown)
print(startingAge)
for each in 0..<allQuestions.count {
if allQuestions[each] == "\(Int(startingAge))"
The first print of startingAge gives me the correct answer, for example 5.0. But when it converts to an Int, it gives me an answer of 4. When the Double is 6.0, the int is 5.
I'm feeling stupid, but can't figure out what I'm doing wrong.
When you call rounded(), you round your value to the nearest integer.
When you call .nextDown, you get the next possible value less than the existing value, which means you now have the highest value that's less than the nearest integer to your original value. This still displays as the integer when you print it, but that's just rounding; it's really slightly less than the integer. So if it's printing as "4.0", it's really something like 3.9999999999999 or some such.
When you convert the value to an Int, it keeps the integer part and discards the part to the right of the decimal. Since the floating-point value is slightly less than the integer you rounded to thanks to .nextDown, the integer part is going to be one less than that integer.
Solution: Get rid of the .nextDown.
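The effect is easy to reproduce outside Swift as well. Here is a small C sketch using nextafter(), which I'm treating as the analogue of .nextDown (an illustration, not the asker's code):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double rounded = 5.0;                    /* the value after rounding, like rounded() in Swift */
    double below = nextafter(rounded, 0.0);  /* nearest representable double below 5.0, like .nextDown */
    printf("%.17f\n", below);                /* prints 4.99999999999999911... */
    printf("%d\n", (int)below);              /* conversion truncates toward zero: prints 4 */
    return 0;
}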
When you convert to an Int you lose precision.
In your case the line returns a Double. Assume babyAge is 9; then startingAge is 3.999999…
let startingAge = (Double(babyAge/2).rounded().nextDown)
and when you print it as an Int your answer becomes 3:
print("\(Int(startingAge))")
To fix this use this line instead:
let startingAge = (Double(babyAge/2).rounded().nextDown).rounded()
That is what nextDown does: it does not round. It returns the next representable floating-point value below the current one, so the number becomes slightly less. Converting that value to an Int then truncates it to one less than the rounded value.

2048 casted to BOOL returns 0

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(@"%hhd", boolQ);
After execution, boolQ is equal to 0. Could someone explain why this is so?
BOOL is probably implemented as char or uint8_t/int8_t; the "hh" length modifier prints half of a half of an integer, which is typically a byte.
Converting to char takes the lowest 8 bits of 2048 (= 0x800), which gives you 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = some-value;
BOOL b = !!q;
Converting an integer value to a signed type too small to represent the value is implementation-defined behaviour in C (C11 6.3.1.3), and therefore also in the part of Objective-C which deals with C-level matters. Since the behaviour is implementation-defined, the implementation may represent the result however it chooses (or raise a signal), expected value or not.
As per 6.3.1.4, any integer can be used as a boolean value without casting, in which case it will show the expected behaviour (0 is 0, everything else is 1), giving rise to the !! idiom suggested by alk; perhaps counterintuitively, you convert the value by not explicitly converting the value (instead, the conversion is correctly handled by the implicit conversion operation inserted by the ! operator).
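As a plain-C sketch of the same behaviour (the int8_t typedef below is my stand-in for BOOL, which is typically a signed 8-bit type):

#include <stdint.h>
#include <stdio.h>

typedef int8_t FakeBOOL;               /* stand-in for Objective-C's BOOL */

int main(void)
{
    int q = 2048;                      /* 0x800: every set bit lies above the low byte */
    FakeBOOL truncated = (FakeBOOL)q;  /* only the low 8 bits survive, so this is 0 */
    FakeBOOL normalized = !!q;         /* !! maps any non-zero value to 1 */
    printf("truncated=%d normalized=%d\n", truncated, normalized);  /* truncated=0 normalized=1 */
    return 0;
}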

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looks like the following. I got some feedback about a bug, and to my horror, when I put a debugger on it I found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? This seems like it SHOULD have been a compile-time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less then u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
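As a plain-C sketch of the same pitfall (strlen() stands in here for -[NSString length], so the exact wrapped value differs from the 32-bit NSUInteger case above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t len = strlen("short");            /* 5, an unsigned type like NSUInteger */
    printf("%zu\n", len - 10);               /* wraps to a huge value, e.g. 18446744073709551611 where size_t is 64-bit */
    long fixed = (long)len - 10;             /* cast to a signed type before subtracting */
    printf("%ld\n", fixed > 0 ? fixed : 0);  /* prints 0, the intended MAX(len - 10, 0) */
    return 0;
}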
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place casts in the right places if necessary, and you shouldn't need to rewrite the MAX or MIN macros.

Is it better to write 0.0, 0.0f or .0f instead of simple 0 for supposed float or double values

Hello, well, it's all in the title. The question applies especially to values such as NSTimeInterval, CGFloat, or any other variable that is a float or a double. Thanks.
EDIT: I'm asking about value assignment, not formatting in a string.
EDIT 2: The question is really whether assigning a plain 0 to a float or a double is any worse than writing a literal with an f at the end.
The basic difference is:
1.0 or 1. is a double constant
1.0f is a float constant
Without a suffix, a literal with a decimal in it (123.0) will be treated as a double-precision floating-point number. If you assign or pass that to a single-precision variable or parameter, the compiler will (should) issue a warning. Appending f tells the compiler you want the literal to be treated as a single-precision floating-point number.
If you are initializing a variable, then it makes no difference; the compiler does the cast for you.
float a = 0; //Cast int 0 to float 0.0
float b = 0.0; //Cast 0.0 double to float 0.0 as by default floating point constants are double
float c = 0.0f; // Assigning float to float. .0f is the same as 0.0f
But if you are using these in an expression, then it makes a big difference.
6/5 becomes 1
6/5.0 becomes 1.2 (double value)
6/5.0f becomes 1.2 (float value)
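The same three cases can be checked with a small C program (a sketch of the literals above, nothing more):

#include <stdio.h>

int main(void)
{
    printf("%d\n", 6 / 5);     /* integer division: prints 1 */
    printf("%f\n", 6 / 5.0);   /* double division: prints 1.200000 */
    printf("%f\n", 6 / 5.0f);  /* float division (promoted to double for printf): prints 1.200000 */
    return 0;
}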
If you want to find out whether there is any difference for the target CPU running the code, or in the binary it executes, you can copy one of the compile command lines from Xcode to a terminal, fix any missing environment variables, and add -S. That gives you assembly output you can use to compare. If you put all four variants in a small example source file, you can compare the resulting assembly code afterwards, even without being fluent in ARM assembly.
From my ARM assembly experience (okay... 6 years ago and GCC) I would bet 1ct on something like XORing a register with itself to flush its content to 0.
Whether you use 0.0, .0, or 0.0f does not make much of a difference (there are some differences between double and float). You may even use (float)0.
But there is a significant difference between 0 and some float notation. Zero will always be some type of integer. And that can force the machine to perform integer operations when you may want float operations instead.
I do not have a good example for zero handy, but I've got one for float/int in general, which nearly drove me crazy the other day.
I am used to 8-bit RGB colors, because of my hobby as a photographer and my recent background as an HTML developer. So I found it hard to get used to the Cocoa-style 0..1 fractions of red, green and blue. To work with the values I was used to, I wanted to divide them by 255.
[UIColor colorWithRed:128/255 green:128/255 blue:128/255 alpha:1.0];
That should have given me a nice middle gray. But it did not; everything I tried came out either black or white.
At first I thought this was caused by some undocumented deficiency of the UI text objects with which I was using the colour. It took a while to realize that these constant values forced integer divisions, which can only produce 0 or 1.
This expression eventually did what I wanted:
[UIColor colorWithRed:128.0/255.0 green:128.0/255.0 blue:128.0/255.0 alpha:1.0];
You could achieve the same thing with fewer .0s attached, but it does not hurt to have more than needed. 128.0f/(float)255 would work as well.
Edit to respond to your "Edit2":
float fvar;
fvar = 0;
vs ...
fvar = .0;
In the end it does not make a difference at all; fvar will contain the float value 0.0 either way. For compilers of the 1960s and '70s I would have guessed there was a minor performance cost to fvar = 0: the compiler creates an int 0 first, which then has to be converted to float before the assignment. Modern compilers optimize much better than older ones. In the end I'd have to look at the machine-code output to see whether it makes a difference.
However, with fvar = .0; you are always on the safe side.

Objective C ceil returns wrong value

NSLog(@"CEIL %f", ceil(2/3));
should return 1. However, it shows:
CEIL 0.000000
Why and how to fix that problem? I use ceil([myNSArray count]/3) and it returns 0 when array count is 2.
The same rules as C apply: 2 and 3 are ints, so 2/3 is an integer divide. Integer division truncates so 2/3 produces the integer 0. That integer 0 will then be cast to a double precision float for the call to ceil, but ceil(0) is 0.
Changing the code to:
NSLog(@"CEIL %f", ceil(2.0/3.0));
Will display the result you're expecting. Adding the decimal point causes the constants to be recognised as double precision floating point numbers (and 2.0f is how you'd type a single precision floating point number).
Maudicus' solution works because (float)2/3 casts the integer 2 to a float and C's promotion rules mean that it'll promote the denominator to floating point in order to divide a floating point number by an integer, giving a floating point result.
So, your current statement ceil([myNSArray count]/3) should be changed to either:
([myNSArray count] + 2)/3 // no floating point involved
Or:
ceil((float)[myNSArray count]/3) // arguably more explicit
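A quick C sketch of the two fixes next to the original bug (count is my stand-in for [myNSArray count]):

#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long count = 2;                 /* stand-in for [myNSArray count] */
    printf("%lu\n", (count + 2) / 3);        /* integer ceiling of count/3: prints 1 */
    printf("%f\n", ceil((float)count / 3));  /* floating-point version: prints 1.000000 */
    printf("%f\n", ceil(2 / 3));             /* the original bug: 2/3 is already 0, prints 0.000000 */
    return 0;
}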
2/3 evaluates to 0 unless you cast one of the operands to a float.
So you have to be careful about your values being truncated to ints before you want them to be.
float decValue = (float) 2/3;
NSLog(@"CEIL %f", ceil(decValue));
==>
CEIL 1.000000
For your array example:
float decValue = (float) [myNSArray count]/3;
NSLog(@"CEIL %f", ceil(decValue));
It probably evaluates 2 and 3 as integers (as they are, obviously), evaluates the result (which is 0), and then converts it to float or double (which is also 0.000000). The easiest way to fix it is to write either 2.0f/3, 2/3.0f, or 2.0f/3.0f (or without the "f" if you wish, whatever you like more ;) ).
Hope it helps
