Why do nonzero numbers fail to set a BOOL property to YES? - ios

It is my understanding that in Objective-C, which is based on C, all BOOLs are basically small signed integers (-128 to 127), with zero being the only value for "FALSE", or "NO". However, when I recently tried to set a button's selected value based on a bitmask, it fails. Why?
NSInteger bitfield = 127;
NSInteger bitmask = 1 << 6; // 64
myButton.selected = bitfield & bitmask; // selected will remain NO

That's because BOOL is not bool.
BOOL is just a non-standard (Objective-C-specific) typedef for a (non-bool) integral type (as far as I know, it's always signed char, but I might be wrong). As such, it does not behave as a true Boolean data type, but rather as its underlying integral type. So, if you assign 64 to it, it will store 64 (and not true or 1). It is possible that, as a result, an operation that assumes the true value is always exactly 1 (i.e. only the LSB set) will fail to recognize 64 as such.
In contrast, if you replaced BOOL with the true C99 Boolean type, which is _Bool or bool, then you would see the expected behavior: assigning any non-zero value to the variable would make it store true or 1, regardless of whether that value was literally 1.
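Here is a minimal C sketch of the difference, using a signed char typedef as a stand-in for the 32-bit definition of BOOL (the name MyBool is purely illustrative):

#include <stdbool.h>
#include <stdio.h>

typedef signed char MyBool; // stand-in for Objective-C's 32-bit BOOL

int main(void)
{
    int bitfield = 127;
    int bitmask = 1 << 6;                 // 64

    MyBool a = bitfield & bitmask;        // stores 64, not 1
    bool b = bitfield & bitmask;          // C99 bool: any non-zero value becomes 1
    MyBool c = (bitfield & bitmask) != 0; // explicit comparison: stores 1

    printf("%d %d %d\n", a, b, c);        // prints: 64 1 1
    return 0;
}

Comparing against zero before assigning, as in (bitfield & bitmask) != 0 or !!(bitfield & bitmask), sidesteps the problem regardless of how BOOL is defined.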

Related

set of WideChar: Sets may have at most 256 elements

I have this line:
const
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
The above does not compile, with error:
[Error] Sets may have at most 256 elements
But this line does compile ok:
var WS: WideString;
if WS[1] in [WideChar('A')..WideChar('Z')] then...
And this also compiles ok:
const
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
...
if WS[1] in MY_SET then...
Why is that?
EDIT: My question is: why does if WS[1] in [WideChar('A')..WideChar('Z')] compile, and why does MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')]; compile? Don't the set rules apply to them as well?
A valid set has to obey two rules:
Each element in a set must have an ordinal value less than 256.
The set must not have more than 256 elements.
MY_SET: set of WideChar = [WideChar('A')..WideChar('Z')];
Here you declare a set type (Set of WideChar) which has more than 256 elements -> Compiler error.
if WS[1] in [WideChar('A')..WideChar('Z')]
Here, the compiler sees WideChar('A') as an ordinal value. This value and all other values in the set are below 256. This is ok with rule 1.
The number of unique elements (Ord('Z') - Ord('A') + 1 = 26) is also within the limit, so the second rule passes as well.
MY_SET = [WideChar('A')..WideChar('Z'), WideChar('a')..WideChar('z')];
Here you declare a set that also fulfills the requirements as above. Note that the compiler sees this as a set of ordinal values, not as a set of WideChar.
A set can have no more than 256 elements.
Even with only that many elements, a full set already uses 32 bytes.
From the documentation:
A set is a bit array where each bit indicates whether an element is in the set or not. The maximum number of elements in a set is 256, so a set never occupies more than 32 bytes. The number of bytes occupied by a particular set is equal to
(Max div 8) - (Min div 8) + 1
For this reason only sets of byte, (ansi)char, boolean and enumerations with fewer than 257 elements are possible.
Because WideChar uses 2 bytes, it can have 65536 possible values.
A set of WideChar would take up 8 KB, too large to be practical.
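To make the quoted formula concrete: a set over 'A'..'Z' occupies (Ord('Z') div 8) - (Ord('A') div 8) + 1 = (90 div 8) - (65 div 8) + 1 = 4 bytes, while a hypothetical full set of WideChar would need (65535 div 8) - (0 div 8) + 1 = 8192 bytes, i.e. 8 KB.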
type
Capitals = 'A'..'Z';
const
MY_SET: set of Capitals = [WideChar('A')..WideChar('Z')];
Will compile and work the same.
It does seem a bit silly to use WideChar if your code ignores Unicode.
As written, only the English capital letters are recognized; you do not take different locales into account.
In this case it would be better to use code like
if (AWideChar >= 'A') and (AWideChar <= 'Z') ....
That will work no matter how many chars fall in between.
Obviously you can encapsulate this in a function to save on typing.
If you insist on having large sets, see this answer: https://stackoverflow.com/a/2281327/650492

2048 cast to BOOL returns 0

Consider this code
NSInteger q = 2048;
BOOL boolQ = q;
NSLog(#"%hhd",boolQ);
After execution boolQ is equal 0. Could someone explain why is this so?
BOOL is probably implemented as char or uint8_t/int8_t; the "hh" length modifier tells printf to print half of half an int, which is typically a byte.
Converting to char keeps only the lowest 8 bits of 2048 (= 0x800), which gives you 0.
The proper way to convert any integer to a boolean value is:
NSInteger q = some-value;
BOOL b = !!q;
Converting an integer value to a signed integer type too small to represent it gives an implementation-defined result in C (C11 6.3.1.3), and therefore also in the part of Objective-C which deals with C-level matters. Since the result is implementation-defined, you get whatever the implementation documents (typically the low-order bits), expected value or not.
As per 6.3.1.2, any integer converts to a Boolean without an explicit cast, showing the expected behaviour (0 is 0, everything else is 1), which gives rise to the !! idiom suggested by alk. Perhaps counterintuitively, you convert the value by not explicitly converting it; the conversion is correctly handled by the implicit conversion performed by the ! operator.
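A small C sketch of both effects, using a signed char stand-in for BOOL (the typedef name MyBool is illustrative; the exact result of the narrowing assignment is implementation-defined, but keeping the low 8 bits is what common platforms do):

#include <stdio.h>

typedef signed char MyBool; // stand-in for a 32-bit BOOL

int main(void)
{
    long q = 2048;                    // 0x800
    MyBool direct = q;                // low 8 bits of 0x800 are 0x00 -> 0 on common platforms
    MyBool idiom = !!q;               // !q is 0, !0 is 1 -> always 1 for any non-zero q
    printf("%d %d\n", direct, idiom); // typically prints: 0 1
    return 0;
}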

All kinds of Booleans in ObjectiveC [duplicate]

Possible Duplicate:
What values should I use for iOS boolean states?
I believe there are something like 5 boolean types in the iOS environment (which come from C, C++, and Objective-C).
_Bool
bool
BOOL
boolean_t
Boolean
And there are at least four pairs of values for them:
true, false
TRUE, FALSE
YES, NO
1, 0
Which one do you think is the best (style wise) to use for iOS Objective C development?
Update 1
I mentioned a type "boolean". It looks like it doesn't exist. I removed it from the list and added _Bool.
I am aware of typedef's for these types and values. The question is about style differences.
iOS and OS X are mostly made of Cocoa, which uses the boolean type BOOL with the values YES/NO.
bool
Defined by C++.
A true boolean, guaranteed to be 0 or 1.
_Bool
Defined by C99.
A true boolean, guaranteed to be 0 or 1.
If stdbool.h is included, bool is #defined as _Bool.
BOOL
Defined by the Objective-C runtime at /usr/include/objc/objc.h.
A signed char on 32-bit. Values may be YES (0x01), NO (0x00), or anything in the range -128 to 127. YES/NO are defined in <Foundation/NSObjCRuntime.h>.
A bool on 64-bit, guaranteed to be 0 or 1.
Boolean
Defined by Carbon at CFBase.h.
An unsigned char.
Values may be TRUE (0x01), FALSE (0x00), or anything in the range 0 to 255.
boolean_t
Defined by /usr/include/mach/i386/boolean.h.
An int on 32-bit or an unsigned int on 64-bit.
For the non true boolean types:
Any non-zero value is treated as true in logical expressions.
If you cast to a boolean type with a smaller range than the source type, only the low-order bytes are used.
Cases where one type or another makes a difference are hard to imagine. There are several cases where casting to BOOL may bite you, and some rare situations to keep in mind (e.g. KVO converts BOOL to an NSNumber, and bool to a CFBoolean). If anything, when you consistently use BOOL, you are covered in case Apple changes its definition.
Use BOOL and YES/NO in Objective-C code. That's expected in Objective-C code and how the Objective-C headers define the data type. (Note that you may occasionally deal with other type/values, such as when checking the "truthiness" of pointers or when dealing with, e.g., C++ code, but Objective-C generally uses BOOL and YES/NO for boolean data types and values.)
Use ObjC's BOOL for ObjC code. Use the others where they're "native". Anything else will tend to stand out as "unusual" and worthy of extra care when reading the code, which is a property that should be reserved for code that actually is unusual and worthy of extra care.
In practice it makes essentially no difference. I bet they're even declared the very same way: typedef signed char BOOL;, typedef signed char Boolean;, etc.
So they're practically compatible and equivalent; the best approach is, however, to respect the type methods expect and return, so write
[object someObjectiveCMethod:YES];
instead of
[object someObjectiveCMethod:TRUE];
and
CFWhateverSetBooleanProperty(true);
instead of
CFWhateverSetBooleanProperty(YES);
You could use any of them if it weren't for the fact that whoever reads the code may not know that type. So, by convention, use BOOL, which can have a value of YES (1) or NO (0).
BOOL with YES and NO.
However, BOOL is a signed char, so YES equals 1 and NO equals 0.
In objc.h they are defined as:
typedef signed char BOOL;
And
#define YES ((BOOL)1)
#define NO ((BOOL)0)

Why is BOOL type in Objective-C a char?

I have been told that BOOL in Objective-C is a typedef of an unsigned char and that the YES and NO keywords are encoded chars. This is not the first time I have heard it. I have read that this is because Apple used BOOL before the C standard provided a _Bool type. Am I wrong? Is there any advantage to that fact? Are we wasting bits of memory? Does this provide a way to return valuable data in a function? Would it be correct to use it as a way to return different values when some unexpected behavior occurs?
BOOL myFunction(int argument)
{
    BOOL result = YES; // the function generates the result
    if (someError == YES) {
        return 5;
    }
    return result;
}
Are we wasting bits of memory?
No, because you can't get a variable smaller than a char: it's always a single byte. You can pack multiple bits representing boolean flags in a single word, but you have to do it manually - with bit shifts, using bit fields, and so on.
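For example, a minimal C sketch of such manual bit-packing (the flag names are purely illustrative):

#include <stdio.h>

// Pack several boolean flags into a single byte, one bit each.
enum {
    FlagA = 1 << 0,
    FlagB = 1 << 1,
    FlagC = 1 << 2
};

int main(void)
{
    unsigned char flags = 0;

    flags |= FlagB;      // set FlagB
    flags &= ~FlagA;     // clear FlagA

    if (flags & FlagB) { // test FlagB
        printf("FlagB is set\n");
    }
    return 0;
}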
Does this provide a way to return valuable data in a function?
Not really: what you did is a hack, although 5 would definitely make its way through the system to the caller, and would be interpreted as YES in a "plain" if statement, e.g.
if (myFunction(123)) {
...
}
However, it would fail miserably if used like this:
if (myFunction(123) == YES) { // 5 != YES
...
}
Would it be correct to use it as a way to return different values when some unexpected behavior occurs?
It would always be incorrect from the readability point of view; as far as "doing what you meant it to do", your mileage may vary, depending on the way in which your function is used.
There is a slight advantage: On many platforms (including iOS, IIRC), sizeof(_Bool) == sizeof(int), so using char can be slightly more compact.
Except BOOL is actually signed char, not char; this is so that @encode(BOOL) evaluates to the same thing on all platforms. This complicates bit-fields slightly, since BOOL foo:1; appears to define a 1-bit signed integer (IIRC the behaviour of which is undefined); clearly unsigned char would be a better choice, but it was probably too late.
_Bool also ought to optimize better since the compiler can make assumptions about the bit-pattern used, e.g. replacing a&&b with a&b (provided b is side effect-free). Some architectures also represent "true" as all bits set, which is useful for masking (SSE comparison instructions come to mind).
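A small C sketch of that bit-field pitfall (compiler-dependent: signed char bit-fields are an extension, and what a 1-bit signed field does with the value 1 is implementation-defined, but the results noted below are what Clang and GCC commonly produce):

#include <stdio.h>

struct Flags {
    signed char selected : 1; // what "BOOL selected : 1;" means when BOOL is signed char
};

int main(void)
{
    struct Flags f;
    f.selected = 1;                  // a 1-bit signed field can only hold 0 or -1
    printf("%d\n", f.selected);      // commonly prints -1
    printf("%d\n", f.selected == 1); // commonly prints 0: the comparison fails
    return 0;
}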
"BOOL in Objective-C" is not an unsigned char, it's whatever the Objective-C library defines it to be. Which is unsigned char or bool, depending on your compiler settings (32 bit or 64 bit). Both behave different. Try this code with a 32 bit compiler and a 64 bit compiler:
BOOL b = 256;
if (b) NSLog (#"b is true"); else NSLog (#"b is false");

How to define -1 as a uint64 in a match clause?

let myuint64 = 10uL
match myuint64 with
| -1 -> ()
| _ -> ()
How do I define the given -1 as a uint64 value?
> match 0UL-1UL with
- |System.UInt64.MaxValue -> "-1"
- |_ -> "???"
- ;;
val it : string = "-1"
Let me leave alone the fact that you can't really represent a negative value with a data type that can only store positive values (and zero of course).
If, on the other hand, you were storing it in a signed value, -1 would be stored as all bits set.
So basically, I will assume you want to find a way to represent -1 as a bit-wise value that will be compatible with -1 as a signed value.
The value would then be, in C# and C/C++ syntax, 0xffffffffffffffff. Exactly how to specify that in F# I don't know.
I don't know F# at all, but if it's anything like any other languages, a UInt64 can't be -1. Ever. UInt means unsigned integer, which means it can only represent positive values.
To expand on other answers:
When a type starts with a u it means unsigned. What signed/unsigned means is this:
Numbers are stored using a certain number of bits. In the case of int64 and uint64, 64 bits are used. If the number is signed, the first bit is not used as part of the number itself; only the other 63 are. That bit is used to say whether the number is negative. If the number is unsigned, then all bits, including the first bit, are used as part of the number, and the number is always non-negative (i.e. positive or 0).
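For example, -1 stored in a signed 64-bit integer is the two's-complement pattern with all 64 bits set, 0xFFFFFFFFFFFFFFFF. Read as an unsigned 64-bit value, that same pattern is 2^64 - 1 = 18446744073709551615, which is exactly System.UInt64.MaxValue; that is why matching against System.UInt64.MaxValue in the first answer above works.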
Well, you could assign it -1 and on most architectures the two's-complement representation would be stored; the signed and unsigned distinction is really only there for type checking. There is no negative sign in hardware.
I have no idea whether the F# type checker is smart enough to know that the lexical constant -1 is a negative number and should not be put in a uint64.
C definitely does not care.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t x = -1;
    printf("0x%" PRIx64 "\n", x); // 0xffffffffffffffff
    return 0;
}
If F# will convert it for you, then -1UL would work. If not, you can specify it as 0xFFFFFFFFFFFFFFFFUL and add a comment to remember that it's -1.
I don't have the F# tools installed at the moment, so I cannot verify this.
If you want to go with a signed int:
-1: int64
but you can't match a negative number to a uint, as others have stated.
