2048 casted to BOOL returns 0 - ios

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(#"%hhd",boolQ);
After execution boolQ is equal 0. Could someone explain why is this so?

BOOL is probably implemented as char or uint8_t/int8_t; the "hh" length modifier means half of a half of an int, which is typically one byte.
Converting to char keeps only the lowest 8 bits of 2048 (= 0x800), which are all zero, so you get 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = someValue; // any integer value
BOOL b = !!q;

Converting an integer value to a signed type too small to represent it yields an implementation-defined result in C (C11 6.3.1.3), and therefore also in the part of Objective-C that deals with C-level matters. Because the result is implementation-defined, you cannot rely on getting the value you expect; in practice the value is simply truncated to the low byte, which for 2048 is 0.
As per C11 6.3.1.2, any scalar value converted to a boolean type shows the expected behaviour (0 stays 0, everything else becomes 1), which gives rise to the !! idiom suggested by alk. Perhaps counterintuitively, you convert the value by not explicitly converting it: the conversion is handled correctly by the implicit conversion the ! operator performs.
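A minimal sketch putting both answers together (this assumes a platform where BOOL is typedef'd to signed char, as the answers above do; on 64-bit iOS BOOL is already a true bool and the truncation does not happen):
#import <Foundation/Foundation.h>
#include <stdbool.h>

int main(void) {
    @autoreleasepool {
        NSInteger q = 2048;                // 0x800: the low 8 bits are all zero

        BOOL truncated = q;                // where BOOL is signed char, only the low byte survives -> 0
        BOOL normalised = !!q;             // ! yields 0 or 1, so !! maps any non-zero value to 1 (YES)
        bool c99 = q;                      // C99 bool conversion is well defined: non-zero -> true

        NSLog(@"truncated = %d, normalised = %d, c99 = %d",
              (int)truncated, (int)normalised, (int)c99);
        // Expected (BOOL as signed char): truncated = 0, normalised = 1, c99 = 1
    }
    return 0;
}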

Related

DirectCompute shader: how to get rid of warning X3205: 'round'

In a shader model 5 compute shader, I have the result of some computation in a double-precision floating-point value. I have to assign the value to an integer variable and I get the warning:
warning X3205: 'round': conversion from larger type to smaller, possible loss of data
I understand the warning, but in my case the floating-point value will never exceed the range an integer can hold at runtime. The code produces the expected result, so I want to silence the warning for the specific offending line.
I can't find how to turn off a specific warning, and I like to write code that produces no warnings, or, if it does, to have checked each one to see whether it is a false alarm.
Any help appreciated.
You did not supply your code, and I suppose it was something in the form of:
double doubleValue = 1.0;
int integer = round(doubleValue);
If you want to suppress the warning, and you are sure the data you are dealing with will not give funny results, you can cast the double to a float before passing it to round().
double doubleValue = 1.0;
int integer = round((float)doubleValue);
This form does not trigger the warning.

Why, in Swift, when I convert from a Double to an Int is it subtracting 1?

I have some very simple code that does a calculation and converts the resulting double to an int.
let startingAge = (Double(babyAge/2).rounded().nextDown)
print(startingAge)
for each in 0..<allQuestions.count {
if allQuestions[each] == "\(Int(startingAge))"
The first print of startingAge gives me the correct answer, for example 5.0. But when it converts to an Int, it gives me an answer of 4. When the Double is 6.0, the int is 5.
I'm feeling stupid, but can't figure out what I'm doing wrong.
When you call rounded(), you round your value to the nearest integer.
When you call .nextDown, you get the next possible value less than the existing value, which means you now have the highest value that's less than the nearest integer to your original value. This still displays as the integer when you print it, but that's just rounding; it's really slightly less than the integer. So if it's printing as "4.0", it's really something like 3.9999999999999 or some such.
When you convert the value to an Int, it keeps the integer part and discards the part to the right of the decimal. Since the floating-point value is slightly less than the integer you rounded to thanks to .nextDown, the integer part is going to be one less than that integer.
Solution: Get rid of the .nextDown.
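For readers who prefer to see the mechanics in C, nextafter from math.h is a rough analogue of Swift's .nextDown, and the same truncation shows up (a sketch, not the asker's actual code):
#include <math.h>
#include <stdio.h>

int main(void) {
    double rounded = 5.0;                          // what .rounded() produced in the question
    double next = nextafter(rounded, -INFINITY);   // next representable double below 5.0, like .nextDown

    printf("%.17g\n", next);     // a value just below 5.0, e.g. 4.9999999999999991
    printf("%d\n", (int)next);   // converting to int truncates toward zero, so this prints 4
    return 0;
}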
When you cast you lose precision.
In your case the line returns a Double. Assume babyAge is 9; then startingAge is 3.999999…
let startingAge = (Double(babyAge/2).rounded().nextDown)
and when you print it your answer becomes 3
print("\(Int(startingAge))")
To fix this use this line instead:
let startingAge = (Double(babyAge/2).rounded().nextDown).rounded()
That is what nextDown does: it does not round values; it returns the next representable floating-point number below the original, so the value becomes slightly less. (If the type were an integer, it would presumably become exactly 1 less.)

Why NSNumber *a = 0 do not display error

I know the correct way to initialize an NSNumber is NSNumber *a = @1;
and when I declare NSNumber *a = 1; I get the error
Implicit conversion of 'int' to 'NSNumber *' is disallowed with ARC
But I don't know why, when I declare NSNumber *a = 0;, there is no error.
In my case, I have written some methods in an NSNumber category,
and then:
If the value of the NSNumber is @0, I can use the category method normally.
If the value of the NSNumber is 0, I can still write the call with no error, but when the app runs the method is never actually called.
The value 0 is synonymous with nil or NULL, which are valid values for a pointer.
It's a bit of compatibility with C that leads to this inconsistent behavior.
History
In the C language, there is no special symbol to represent a pointer that points to nothing. Instead, the value 0 (zero) was chosen to represent such a pointer. To make code more understandable, a preprocessor macro was introduced to represent this value: NULL. Because it is a macro, the C compiler itself never sees the symbol; it only sees a 0 (zero).
This means that 0 (zero) is a special value when assigned to pointers. Even though it is an integer, the compiler accepts the assignment without complaining of a type conversion, implicit or otherwise.
To keep compatibility with C, Objective-C allows assigning a literal 0 to any pointer. It is treated by the compiler as identical to assigning nil.
0 is a null pointer constant. A null pointer constant can be assigned to any pointer variable and sets it to NULL or nil. This was the case in C for the last 45 years at least and is also the case in Objective-C. Same as NSNumber* a = nil.
You can consider 0 to be nil or NULL, which can be assigned to an object pointer, but 1 is a non-zero integer and cannot be assigned to an object pointer.
Objective-C silently ignores method calls on object pointers with value 0 (i.e. nil). That's why nothing happens when you call a category method on an NSNumber pointer to which you assigned the value 0.
A nil value is the safest way to initialize an object pointer if you don’t have another value to use, because it’s perfectly acceptable in Objective-C to send a message to nil. If you do send a message to nil, obviously nothing happens.
Note: If you expect a return value from a message sent to nil, the return value will be nil for object return types, 0 for numeric types, and NO for BOOL types. Returned structures have all members initialized to zero.
From the Apple documentation: Working with nil.
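A short self-contained demonstration (the Demo category and its demoLog method are placeholders standing in for the category from the question):
#import <Foundation/Foundation.h>

@interface NSNumber (Demo)          // stands in for the asker's category
- (void)demoLog;
@end

@implementation NSNumber (Demo)
- (void)demoLog { NSLog(@"demoLog was actually called on %@", self); }
@end

int main(void) {
    @autoreleasepool {
        NSNumber *a = 0;    // identical to NSNumber *a = nil; no NSNumber object is created
        NSNumber *b = @0;   // a real NSNumber object wrapping the integer 0

        [a demoLog];        // message to nil: silently does nothing, prints nothing
        [b demoLog];        // prints "demoLog was actually called on 0"

        NSLog(@"[a intValue] = %d", [a intValue]);   // 0: numeric return values from a nil receiver are 0
    }
    return 0;
}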

Why do nonzero numbers fail to set a BOOL property to YES?

It is my understanding that in Objective-C, which is based on C, all BOOLs are basically small signed integers (-128 to 127), with zero being the only value for "FALSE", or "NO". However, when I recently tried to set a button's selected value based off of a bitmask, it fails. Why?
NSInteger bitfield = 127;
NSInteger bitmask = 1 << 6; // 64
myButton.selected = bitfield & bitmask; // selected will remain NO
That's because BOOL is not bool.
BOOL is just a non-standard (Objective-C-specific) typedef for a (non-bool) integral type (as far as I know, it's always signed char, but I might be wrong). As such, it does not behave as a true Boolean data type, but rather as its underlying integral type. So, if you assign 64 to it, it will store 64 (and not true or 1). It is possible that, as a result of this, an operation that always assumes the true value to be 1 (i.e. only the LSB set) will fail to recognize 64 as such.
In contrast, if you replaced BOOL with the true C99 Boolean type, which is _Bool or bool, then you would see the expected behavior: assigning any non-zero value to the variable would make it store true or 1, regardless of whether that value was really 1.
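A hedged sketch of the difference, and of the usual fix of collapsing the mask result to 0 or 1 before assigning it (again assuming a target where BOOL is a signed char; on 64-bit iOS BOOL is bool and the problem disappears):
#import <Foundation/Foundation.h>
#include <stdbool.h>

int main(void) {
    @autoreleasepool {
        NSInteger bitfield = 127;
        NSInteger bitmask  = 1 << 6;               // 64

        BOOL objcBool = bitfield & bitmask;        // stores 64 where BOOL is signed char, not YES
        bool c99Bool  = bitfield & bitmask;        // C99 bool: any non-zero value becomes true (1)
        BOOL fixed    = (bitfield & bitmask) != 0; // explicit comparison always yields 0 or 1

        NSLog(@"objcBool = %d, c99Bool = %d, fixed = %d",
              (int)objcBool, (int)c99Bool, (int)fixed);
    }
    return 0;
}
With the explicit comparison, myButton.selected = (bitfield & bitmask) != 0; behaves as expected regardless of how BOOL is defined.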

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looks like the following. I got some feedback about a bug and, to my horror, when I put a debugger on it I found that the MAX of -5 and 0 is -5!
NSString *test = #"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = #"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method to iOS for performing MIN/MAX where the compiler would have warned about mismatching types? Seems like this SHOULD have been a compile time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
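For example (the exact wrapped value depends on the width of NSUInteger; the figure above assumes 32 bits, while a 64-bit build wraps to a correspondingly larger number):
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSUInteger length = [@"short" length];   // 5
        NSUInteger wrapped = length - 10;        // wraps around to a huge positive value, never -5
        NSLog(@"length = %lu, length - 10 = %lu",
              (unsigned long)length, (unsigned long)wrapped);
    }
    return 0;
}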
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro, something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(#"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less then u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
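A sketch of that platform dependence (plain C; the outcome follows from which type the operands are converted to under the rules quoted above):
#include <stdio.h>

int main(void) {
    long s = -1;
    unsigned int u = 1;

    // LP64 (e.g. 64-bit macOS): long can hold every unsigned int value, so u is
    // converted to long and the comparison prints 1, as expected.
    // 32-bit: both operands are converted to unsigned long, s wraps to ULONG_MAX,
    // and the comparison prints 0 (on such a build -Wsign-compare would flag this line).
    printf("%d\n", s < u);
    return 0;
}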
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place the casts in the right places, if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
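If you do decide to write your own, one option is a type-explicit inline function instead of a macro, so the signedness decision is forced to the call site (MaxInteger is my name for this sketch, not a framework API):
#import <Foundation/Foundation.h>

// Both parameters are NSInteger, so an unsigned length has to be cast
// (and therefore thought about) explicitly by the caller.
static inline NSInteger MaxInteger(NSInteger a, NSInteger b) {
    return a > b ? a : b;
}

int main(void) {
    @autoreleasepool {
        NSString *test = @"short";
        NSInteger calExpected = MaxInteger((NSInteger)test.length - 10, 0);
        NSLog(@"%ld", (long)calExpected);   // 0, as intended
    }
    return 0;
}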
