All kinds of Booleans in Objective-C [duplicate] - ios

Possible Duplicate:
What values should I use for iOS boolean states?
I believe there are something like five boolean types in the iOS environment (coming from C, C++, and Objective-C).
_Bool
bool
BOOL
boolean_t
Boolean
And there are at least four pairs of values for them:
true, false
TRUE, FALSE
YES, NO
1, 0
Which one do you think is the best (style-wise) to use for iOS Objective-C development?
Update 1
I mentioned a type "boolean". It looks like it doesn't exist, so I removed it from the list and added _Bool.
I am aware of the typedefs behind these types and values. The question is about style differences.

iOS and OS X are mostly built on Cocoa, which uses the boolean type BOOL with the values YES/NO.
bool
Defined by C++.
A true boolean, guaranteed to be 0 or 1.
_Bool
Defined by C99.
A true boolean, guaranteed to be 0 or 1.
If stdbool.h is included, bool is #defined as _Bool.
BOOL
Defined by the Objective-C runtime at /usr/include/objc/objc.h.
A signed char on 32-bit platforms. Values may be YES (0x01), NO (0x00), or anything else in the range -128 to 127. YES/NO are defined in <Foundation/NSObjCRuntime.h>.
On 64-bit platforms, a bool, guaranteed to be 0 or 1.
Boolean
Defined by Carbon at CFBase.h.
An unsigned char.
Values may be TRUE (0x01), FALSE (0x00), or anything else in the range 0 to 255.
boolean_t
Defined in /usr/include/mach/i386/boolean.h.
An int on 32-bit, or an unsigned int on 64-bit.
For the non-true-boolean types:
Any non-zero value is treated as true in logical expressions.
If you cast to a boolean type with a smaller range than the source type, only the lower bytes are kept.
Cases where one type or another makes a difference are hard to imagine. There are a few cases where casting to BOOL may bite you, and some rare situations to be aware of (e.g., KVO converts a BOOL to an NSNumber, and a bool to a CFBoolean). If anything, when you consistently use BOOL you are covered in case Apple changes its definition.
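As a rough illustration of that truncation (assuming a 32-bit build, where BOOL is a signed char; the value 0x100 is just an example):
int flags = 0x100;          // only bit 8 is set
BOOL asBOOL = (BOOL)flags;  // signed char keeps only the low byte: 0, i.e. NO
bool asBool = (bool)flags;  // a true boolean normalizes any non-zero value: 1, i.e. true
On a 64-bit build, where BOOL is bool, both casts yield 1.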

Use BOOL and YES/NO in Objective-C code. That's expected in Objective-C code and how the Objective-C headers define the data type. (Note that you may occasionally deal with other types/values, such as when checking the "truthiness" of pointers or when dealing with, e.g., C++ code, but Objective-C generally uses BOOL and YES/NO for boolean data types and values.)

Use ObjC's BOOL for ObjC code. Use the others where they're "native". Anything else will tend to stand out as "unusual" and worthy of extra care when reading the code, which is a property that should be reserved for code that actually is unusual and worthy of extra care.

In practice it makes very little difference; I bet some of them are even declared in nearly the same way: typedef signed char BOOL;, typedef unsigned char Boolean;, etc.
So they're practically compatible and interchangeable; the best approach, however, is to respect the type that each method expects and returns, so write
[object someObjectiveCMethod:YES];
instead of
[object someObjectiveCMethod:TRUE];
and
CFWhateverSetBooleanProperty(true);
instead of
CFWhateverSetBooleanProperty(YES);
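With real APIs the same convention looks like this (myView here is just a placeholder for any UIView; setHidden: takes a BOOL, while CFRunLoopRunInMode takes a Core Foundation Boolean):
[myView setHidden:YES];                                // Cocoa API: BOOL, so YES/NO
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.1, true);  // CF API: Boolean, so true/false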

You could use any of them if it weren't for the fact that whoever reads the code may not know the type. So, by convention, use BOOL, which can have the value YES (1) or NO (0).

BOOL with YES and NO.
However, BOOL is a signed char, so YES equals 1 and NO equals 0.
In objc.h they are defined as:
typedef signed char BOOL;
And
#define YES ((BOOL)1)
#define NO ((BOOL)0)
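Because on 32-bit platforms the underlying type is a signed char, any non-zero value is truthy but only 1 compares equal to YES. A quick sketch (flag is a hypothetical variable, and the outcome assumes the signed char definition):
BOOL flag = 2;                         // truthy, but not equal to YES
if (flag) NSLog(@"runs");              // executes
if (flag == YES) NSLog(@"never runs"); // does not execute, because 2 != 1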

Related

Why does identical() return true on two strings?

I'm new to Dart. The documentation says: "To test whether two objects x and y represent the same thing, use the == operator. (In the rare case where you need to know whether two objects are the exact same object, use the identical() function instead.)"
So, if I type this code:
var foo = 'bar';
var baz = 'bar';
print(identical(foo, baz));
If I've understood correctly, foo and baz do not reference the same object, so identical() must return false, shouldn't it?
But that's not the case, at least in DartPad.
Where is the problem?
In this case foo and baz do reference the same object.
That is because the compiler canonicalizes string literals.
The specification requires most constants to be canonicalized. If you create const Duration(seconds: 1) in two places, it will become the same object. Integers, doubles and booleans are always canonicalized, whether constant or not (the language pretends that there is only one instance per value).
Strings are special in that the specification is not entirely clear on whether they need to be canonicalized or not, but constant strings need to be canonicalized for constants to make sense, and all the compilers do that.
A literal is a constant expression, so string literals are always canonicalized. That means that 'bar' denotes the same object no matter where it occurs in your code.
For several built-in "literals", you will always get identical true for equal values:
bool
String
int
double (I think)

How to use AtomicCmpExchange with double?

I have a double value that I need to access inside a background thread. I would like to use something like AtomicCmpExchange, but it doesn't seem to work with double. Is there any equivalent that I can use with double? I would like to avoid TMonitor.Enter / TMonitor.Exit, as I need something as fast as possible. I'm on Android/iOS, so under FireMonkey.
You could type cast the double values into UInt64 values:
PUInt64(@dOld)^ := AtomicCmpExchange(PUInt64(@d)^, PUInt64(@dNew)^, PUInt64(@dComp)^);
Note that you need to align the variables properly, according to the platform's specifications.
As @David pointed out, comparing double values is not the same as comparing UInt64 values. There are some specific double values that behave out of the ordinary:
A NaN is normally (as specified in IEEE-754) detected by comparing a value with itself.
IsNaN := d <> d;
Footnote: Delphi's default exception handler is triggered when comparing a NaN, but other compilers may behave differently. In Delphi there is an IsNaN() function to use instead.
Likewise, the value zero can be both positive and negative, each with a special meaning. Comparing double 0 with double -0 returns true, but comparing their memory footprints returns false.
Maybe using the System.SyncObjs.TInterlocked class methods would be better?

Why do nonzero numbers fail to set a BOOL property to YES?

It is my understanding that in Objective C, which is based off of C, all BOOLs are basically shorts (-127 to 128), with zero being the only value for "FALSE", or "NO". However, when I recently tried to set a button's selected value based off of a bitmask, it fails. Why?
NSInteger bitfield = 127;
NSInteger bitmask = 1 << 6; // 64
myButton.selected = bitfield & bitmask; // selected will remain NO
That's because BOOL is not bool.
BOOL is just a non-standard (Objective-C-specific) typedef for a (non-bool) integral type (as far as I know, it's always signed char, but I might be wrong). As such, it does not behave as a true Boolean data type, but rather as its underlying integral type. So, if you assign 64 to it, it will store 64 (and not true or 1). It is possible that, as a result of this, an operation that always assumes the true value to be 1 (i.e., the LSB set) will fail to recognize 64 as such.
In contrast, if you replaced BOOL with the true C99 Boolean type, which is _Bool or bool, then you would see the expected behavior. That is, assigning any non-zero value to the variable would have it store true or 1, regardless of whether that value was really 1.
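A common fix, assuming the property really is a BOOL: collapse the bitmask result to 0 or 1 before assigning (reusing the names from the question):
myButton.selected = (bitfield & bitmask) != 0;  // the comparison yields 0 or 1
// or, equivalently:
myButton.selected = !!(bitfield & bitmask);     // double negation normalizes to 0 or 1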

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looked like the following. I got some feedback about a bug and, to my horror, when I put a debugger on it I found that the MAX of -5 and 0 is -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method to iOS for performing MIN/MAX where the compiler would have warned about mismatching types? Seems like this SHOULD have been a compile time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's quite a common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro, something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
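One way to stay safe, sketched with the question's own example, is to move the length into a signed variable (or cast it) before doing any arithmetic or comparison:
NSString *test = @"short";
NSInteger length = (NSInteger)test.length;    // 5, now signed
NSInteger calExpected = MAX(length - 10, 0);  // 0, as intended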
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra) you will generate a warning that looks like the following
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This lets you place casts in the right places where necessary, and you shouldn't need to rewrite the MAX or MIN macros.

Why is the BOOL type in Objective-C a char?

I have been told that BOOL in Objective-C is a typedef of an unsigned char and that the YES & NO keywords are encoded chars. This is not the first time I have heard it. I have read that this is because Apple used BOOL before the C standard provided a _Bool type; am I wrong? Is there any advantage to that fact? Are we wasting bits of memory? Does this provide a way to return valuable data from a function? Would it be correct to use it as a way to return a different value when some unexpected behavior occurs?
BOOL myFunction(int argument)
{
    BOOL result = YES; // the function generates the result
    if (someError == YES) {
        return 5;
    }
    return result;
}
Are we wasting bits of memory?
No, because you can't get a variable smaller than a char: it's always a single byte. You can pack multiple bits representing boolean flags in a single word, but you have to do it manually - with bit shifts, using bit fields, and so on.
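For example, several flags can share one byte either through a C bit field or through explicit masks; a rough sketch with made-up names:
// Bit-field approach: the three flags share the bits of one storage unit
struct Options {
    unsigned int isVisible : 1;
    unsigned int isEnabled : 1;
    unsigned int isDirty   : 1;
};
// Mask approach: the same packing done by hand with shifts
enum {
    OptionVisible = 1 << 0,
    OptionEnabled = 1 << 1,
    OptionDirty   = 1 << 2
};
unsigned char options = OptionVisible | OptionDirty;
BOOL isDirty = (options & OptionDirty) != 0;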
Does this provide a way to return valuable data in a function?
Not really: what you did is a hack, although 5 would definitely make its way through the system to the caller, and would be interpreted as YES in a "plain" if statement, e.g.
if (myFunction(123)) {
...
}
However, it would fail miserably if used like this:
if (myFunction(123) == YES) { // 5 != YES
...
}
Would it be correct to use it as a way to return a different values when some unexpected behavior occurs?
It would always be incorrect from the readability point of view; as far as "doing what you meant it to do", your mileage may vary, depending on the way in which your function is used.
There is a slight advantage: On many platforms (including iOS, IIRC), sizeof(_Bool) == sizeof(int), so using char can be slightly more compact.
Except that BOOL is actually signed char, not char; this is so @encode(BOOL) evaluates to the same thing on all platforms. This complicates bit fields slightly, since BOOL foo:1; appears to define a 1-bit signed integer (IIRC, the behaviour of which is undefined); clearly unsigned char would be a better choice, but it was probably too late.
_Bool also ought to optimize better since the compiler can make assumptions about the bit-pattern used, e.g. replacing a&&b with a&b (provided b is side effect-free). Some architectures also represent "true" as all bits set, which is useful for masking (SSE comparison instructions come to mind).
"BOOL in Objective-C" is not an unsigned char, it's whatever the Objective-C library defines it to be. Which is unsigned char or bool, depending on your compiler settings (32 bit or 64 bit). Both behave different. Try this code with a 32 bit compiler and a 64 bit compiler:
BOOL b = 256;
if (b) NSLog(@"b is true"); else NSLog(@"b is false");
// 32-bit (BOOL is a signed char): 256 truncates to 0, so it prints "b is false".
// 64-bit (BOOL is bool): any non-zero value becomes true, so it prints "b is true".
