BOOL with 64-bit on iOS

When I use BOOL for 32-bit, I get:
BOOL b1=8960; //b1 == NO
bool b2=8960; //b2 == true
But for 64-bit, I get:
BOOL b1=8960; //b1 == YES
bool b2=8960; //b2 == true
What has changed about BOOL from 32-bit to 64-bit?

@TimBodeit is right, but it doesn't explain why ...
BOOL b1=8960; //b1 == NO
... evaluates to NO on 32-bit iOS and why it evaluates to YES on 64-bit iOS. Let's start from the same beginning.
ObjC BOOL definition
#if (TARGET_OS_IPHONE && __LP64__) || (__ARM_ARCH_7K__ >= 2)
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so #encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
For 64-bit iOS or ARMv7k (watch) it's defined as bool and for the rest as signed char.
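Since the header also defines the OBJC_BOOL_IS_BOOL / OBJC_BOOL_IS_CHAR macros, a quick compile-time check can tell you which branch your target uses. This is just a sketch of mine (the helper name is made up), not part of the original answer:
#import <Foundation/Foundation.h>
#import <objc/objc.h>

// Logs which typedef of BOOL is in effect for the current target.
void LogBOOLKind(void) {
#if OBJC_BOOL_IS_BOOL
    NSLog(@"BOOL is bool (C99 _Bool)");   // 64-bit iOS, watchOS
#else
    NSLog(@"BOOL is signed char");        // 32-bit iOS, OS X
#endif
}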
ObjC BOOL YES and NO
Read Objective-C Literals, where you can find:
Previously, the BOOL type was simply a typedef for signed char, and
YES and NO were macros that expand to (BOOL)1 and (BOOL)0
respectively. To support @YES and @NO expressions, these macros are
now defined using new language keywords in <objc/objc.h>:
#if __has_feature(objc_bool)
#define YES __objc_yes
#define NO __objc_no
#else
#define YES ((BOOL)1)
#define NO ((BOOL)0)
#endif
The compiler implicitly converts __objc_yes and __objc_no to (BOOL)1
and (BOOL)0. The keywords are used to disambiguate BOOL and integer
literals.
bool definition
bool is a macro defined in stdbool.h and it expands to _Bool, which is a boolean type introduced in C99. It can store two values, 0 or 1. Nothing else. To be more precise, stdbool.h defines four macros to use:
/* Don't define bool, true, and false in C++, except as a GNU extension. */
#ifndef __cplusplus
#define bool _Bool
#define true 1
#define false 0
#elif defined(__GNUC__) && !defined(__STRICT_ANSI__)
/* Define _Bool, bool, false, true as a GNU extension. */
#define _Bool bool
#define bool bool
#define false false
#define true true
#endif
#define __bool_true_false_are_defined 1
_Bool
_Bool was introduced in C99 and it can hold the values 0 or 1. What's important is:
When a value is demoted to a _Bool, the result is 0 if the value
equals 0, and 1 otherwise.
Now we know where this mess comes from and we can better understand what's going on.
64-bit iOS || ARMv7k
BOOL -> bool -> _Bool (values 0 or 1)
Demoting 8960 to _Bool gives 1, because the value doesn't equal 0 (see the _Bool section above).
32-bit iOS
BOOL -> signed char (values -128 to 127).
If you're going to store int values (-128 to 127) as signed char, the value is unchanged per C99 6.3.1.3. Otherwise it is implementation defined (C99 quote):
Otherwise, the new type is signed and the value cannot be represented
in it; either the result is implementation-defined or an
implementation-defined signal is raised.
It means that clang can decide. To make it short, with the default settings, clang wraps it around (int -> signed char):
-129 becomes 127,
-130 becomes 126,
-131 becomes 125,
...
And in the opposite direction:
128 becomes -128,
129 becomes -127,
130 becomes -126,
...
But because signed char can store values in the range -128 to 127, it can store 0 as well. For example 256 (int) becomes 0 (signed char). And when your value 8960 is wrapped around ...
8960 becomes 0,
8961 becomes 1,
8959 becomes -1,
...
... it becomes 0 when stored in signed char (8960 is a multiple of 256, 8960 % 256 == 0), thus it's NO. The same applies to 256, 512, ... multiples of 256.
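To see the difference concretely, here is a minimal sketch of my own (not from the answer) using the question's value:
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    signed char c = 8960;  // 32-bit BOOL behaviour: 8960 % 256 == 0, so c ends up as 0 (clang will typically warn about the changed constant)
    bool b = 8960;         // 64-bit BOOL behaviour: any non-zero value demotes to 1
    printf("signed char: %d, bool: %d\n", c, b);  // prints "signed char: 0, bool: 1"
    return 0;
}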
I strongly recommend using YES, NO with BOOL and not relying on fancy C features like int as a condition in if, etc. That's the reason Swift has Bool, true, and false and you can't use Int values in conditions where Bool is expected. Just to avoid this mess ...
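If you do have to turn an arbitrary integer into a BOOL, a defensive idiom (my sketch, not from the answer) is to compare against zero instead of assigning the raw value:
int rawValue = 8960;
BOOL flag = (rawValue != 0);  // YES on both 32-bit and 64-bit, whatever rawValue holds
BOOL same = !!rawValue;       // the double-negation idiom does the same thing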

For 32-bit, BOOL is a signed char, whereas under 64-bit it is a bool.
Definition of BOOL from objc.h:
/// Type to represent a boolean value.
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so #encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif

Related

In Swift 2.0 what is the maximum length of a string?

I have an application which utilizes a single string. This string contains data loaded from an array and then the string is exported to a text file.
My question is what is the longest length possible for this string, and when does it become a problem that it is getting too long?
Following the official Apple documentation:
String is bridged to Objective-C as NSString, and a String that
originated in Objective-C may store its characters in an NSString.
On devices running 32-bit iOS, NSUInteger is 32 bits wide, which means NSUIntegerMax there is 2^32 - 1.
According to the Swift open-source GitHub repo, its value would seem to be 2^64 - 1 = 18,446,744,073,709,551,615 (hexadecimal 0xFFFFFFFFFFFFFFFF) on 64-bit devices, following this code:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
// + (instancetype)stringWithCharacters:(const unichar *)chars length:(NSUInteger)length;
// ...
// maxLength:(NSUInteger)maxBufferCount
// ...
TEST: (on iPhone 6)
String is bridged to Objective-C as NSString, and a String
that originated in Objective-C may store its characters in an
NSString. Since any arbitrary subclass of NSString can
become a String, there are no guarantees about representation or
efficiency in this case.
What is the maximum length of an NSString object?
The hard limit for NSString would be NSUIntegerMax characters. NSUIntegerMax is 2^32 - 1 and NSString can hold a little over 4.2 billion characters.
According to the comments:
For iPhone 5S and above, since they are 64-bit, it's 2^64 - 1.
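If you want to see the bound on the device you are actually running on, one quick check (my own sketch, not from the answers above) is to log NSUIntegerMax:
NSLog(@"NSUIntegerMax = %llu", (unsigned long long)NSUIntegerMax);
// 32-bit device: 4294967295 (2^32 - 1)
// 64-bit device: 18446744073709551615 (2^64 - 1)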

Looping over BOOLs in Objective-C

Is it safe to loop over Objective-C BOOLs like so:
for (BOOL flagA = NO; flagA <= YES; flagA++)
    for (BOOL flagB = NO; flagB <= flagA; flagB++)
        // ...
I'd like to use this to cycle through all relevant permutations of flags in a XCTestCase.
But it appears as if YES++ is still YES at least on some platforms (hence leading to an infinite loop e.g. on iPhone 6 Plus simulator) whereas I would have expected that BOOL would just be treated as int (and hence YES++ becoming 2).
Do I have to loop over ints (my best guess) instead, or can the use of BOOL be salvaged in some convenient way?
You are all missing the point here. Drux is asking why he can't increment over BOOL, when it should be a char (an 8-bit value), which is perfectly incrementable.
The answer is very easy. BOOL is sometimes a char and sometimes a bool, depending on the target. From the objc.h file:
#if !defined(OBJC_HIDE_64) && TARGET_OS_IPHONE && __LP64__
typedef bool BOOL;
#else
typedef signed char BOOL;
#endif
If you increment a bool, the maximum value you will ever get is 1.
EDIT:
Can you please add a reference to where the semantics of ++ for bool are specified? - Drux
Even though bool has to be at least 8 bits wide, it can't hold any value other than 0 or 1. Why? Because assigning to a bool (e.g. bool a = 3;) converts the value to a bool, which is true, which is 1.
So bool a = true; a++ is the same as bool a = 2;, which leaves a with a value of 1.
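A tiny sketch of my own showing what the lines above describe:
bool a = true;
a++;   // a + 1 is 2, but storing 2 back into a _Bool demotes it to 1
// a is still true (1) here, so a condition like flagA <= YES can never become false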
The only way I see would be adding a break in your loop to escape the infinite loop.
Another possibility is to use a simple integer and stop the for loop when counter == 2:
for (BOOL flagA = NO; YES; flagA++) {
    for (BOOL flagB = NO; YES; flagB++) {
        // Do something
        if (flagB)
            break;
    }
    if (flagA)
        break;
}
I think @Sulthan means something like this (made overly explicit on purpose):
for (int indexA = 0; indexA <= 1; indexA++) {
    for (int indexB = 0; indexB <= indexA; indexB++) {
        BOOL flagA = (indexA == 1) ? YES : NO;
        BOOL flagB = (indexB == 1) ? YES : NO;
        // Use your flags (booleans) here...
    }
}
(Of course, you can use just the ints in place of booleans in Objective-C, if you want to avoid using too many redundant variables).
ADDENDUM: I actually performed a "Jump to definition" in Xcode (OSX project), and the part looks like this:
#if __has_feature(objc_bool)
#define YES __objc_yes
#define NO __objc_no
#else
#define YES ((BOOL)1)
#define NO ((BOOL)0)
#endif
(usr/include/objc/objc.h)
Can't "Jump to Definition" on __objc_yes (gives "Symbol Not Found")
If you're set on operating over BOOLs, then instead of:
for (BOOL flagA = NO; flagA <= YES; flagA++)
    for (BOOL flagB = NO; flagB <= flagA; flagB++)
        // ...
You should really be doing something like this (though it is not what you want):
for (BOOL flagA = NO; flagA != YES; flagA = !flagA)
    for (BOOL flagB = NO; flagB != flagA; flagB = !flagB)
        // This is the only safe way to 'iterate' BOOLs
The behaviour of (BOOL)++ is not well-defined*, as a BOOL can only be YES or NO. What you really should be doing is casting your BOOL to an int and iterating over that, or refactoring your loop entirely to use int types.
The problem with casting your BOOL values to ints is that, as you have pointed out, BOOL is typedef'd to something with only 8 bits of information*, therefore it only makes sense to have 255 iterations. In fact in more recent times, BOOL is not cast-able at all because it is defined as a compiler intrinsic (objc_bool, which can have the values __objc_yes and __objc_no). __objc_no++ has no meaning.
TL;DR My (strong) suggestion would be to refactor your code so you are iterating over integers, and inspecting BOOLs within each iteration. Whether you cast your BOOL values, or refactor your loop is up to you, but iterating over BOOL values in the way you have indicated is both unsafe and (now, because of that) unsupported.
* In past years, the implementation details of BOOL were obvious (namely a cast to a signed char). With the advent of compiler intrinsics, the details are hidden (though they are likely the same). The reason they are now hidden is that you're really not supposed to rely on them, and the easiest way to stop people relying on them is to hide them altogether.
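If you prefer the TL;DR route above, one possible shape of the refactor (a sketch I'm adding, covering all four flag combinations rather than only the triangular ones from the question) is:
for (int mask = 0; mask < 4; mask++) {
    BOOL flagA = (mask & 1) != 0;  // bit 0
    BOOL flagB = (mask & 2) != 0;  // bit 1
    // use flagA / flagB here; the loop counter is a plain int, so ++ is well-defined
}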

Why isn't NSInteger defined as long on 32-bit iOS?

NSInteger is defined this way:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
This results in NSInteger being defined as int on 32-bit iOS even though int and long are the same anyway (both 4 bytes). Format strings like the following generate a warning with this definition:
NSInteger x = 4;
[NSString stringWithFormat: @"%ld", x];
// Warning: Values of type 'NSInteger' should not be used as format arguments;
// add an explicit cast to 'long' instead.
So does somebody know why NSInteger isn't always defined as long?
Historical reasons: previous releases of the APIs used int, and they were migrated to the NSInteger typedef around the time of the 64-bit transition of OS X.
I suppose they could have changed it for iOS, but that would have impacted a lot of existing and to-be-developed code if the definitions were different on OS X and iOS.
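In practice the warning shown in the question is usually silenced exactly the way the diagnostic suggests, with an explicit cast that is correct on both 32-bit and 64-bit builds:
NSInteger x = 4;
NSString *s = [NSString stringWithFormat:@"%ld", (long)x];  // the cast to long matches %ld on both 32-bit and 64-bit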

BOOL property from a calculation returns NSNumber with incorrect value using valueForKey:

I have a simple object which has one NSNumber that is used to store some flags.
I have a convenience getter method which in fact does:
[self.flags integerValue] & SomeConstantFlag
for a property declared as @property (readonly, nonatomic, assign) BOOL someConstantFlag;
and this works fine when accessing the underlying bool value like
model.someConstantFlag
but when I try to
id value = [model valueForKey:@"someConstantFlag"];
then it returns a bad boolean representation, e.g. an NSNumber with value 2, 4, etc.
Why is this happening when the declaration of the property is BOOL? Is there a "Pretty" way to overcome this issue?
Wrapping on the other hand works ok:
BOOL someBool = 42;
NSNumber* numberVal = @(someBool);
// The underlying value is an __NSCFBoolean with the proper 0/1 value!
valueForKey always returns an Objective-C object, even if the property has scalar type.
From the documentation (emphasis mine):
The default implementations of valueForKey: and setValue:forKey:
provide support for automatic object wrapping of the non-object data
types, both scalars and structs.
Once valueForKey: has determined the specific accessor method or
instance variable that is used to supply the value for the specified
key, it examines the return type or the data type. If the value to be
returned is not an object, an NSNumber or NSValue object is created
for that value and returned in its place.
The return value of your method is BOOL, which is defined as
typedef signed char BOOL;
on OS X and on the 32-bit iOS platform. So what valueForKey: returns is an NSNumber
containing the result of
signed char val = [self.flags integerValue] & SomeConstantFlag;
and that can be in the range -128 .. 127.
To ensure that you get only YES or NO (aka 1 or 0) write your custom getter as:
- (BOOL)someConstantFlag
{
    return ([self.flags integerValue] & SomeConstantFlag) != 0;
}
Remark: On the 64-bit iOS platform (but not on 64-bit OS X), BOOL is defined as the C99 _Bool, which is a "proper" boolean type and can take only the value 0 or 1.
NSNumber *value = [model valueForKey:@"someConstantFlag"];
BOOL boolVal = [value boolValue];
I think you should consider the following problems. Firstly, integerValue returns NSInteger, which means that if you support the 64-bit architecture it will return a 64-bit integer, not a 32-bit one. What is more, in your code here
[self.flags integerValue] & SomeConstantFlag
the following happens: if flags is 00010 and SomeConstantFlag is 00001, the & of those will do something you probably do not expect, because you will get the value 00000, which equals 0. Or if they are 00011 and 00110, you will get 00010, which equals 2. That is why, when you call valueForKey:, you get 2 or 4 or something else depending on your flags :)
What is more, in Objective-C everything different from 0 is treated as YES.
Try reconsidering your bit logic :). See the following example:
enum
{
    kWhite  = 0,
    kBlue   = 1 << 0,
    kRed    = 1 << 1,
    kYellow = 1 << 2,
    kBrown  = 1 << 3,
};
typedef char ColorType;
and in your setter check the following
ColorType pinkColor = kWhite | kRed;
if (pinkColor & (kWhite | kBlue | kRed | kYellow)) {
    // any of the flags has been set
}
The flags kWhite, kBlue, kRed and kYellow have been set.
However, kBrown has not been set.

What is the difference between NS_ENUM and NS_OPTIONS?

I preprocessed the following code with clang in Xcode 5.
typedef NS_ENUM(NSInteger, MyStyle) {
MyStyleDefault,
MyStyleCustom
};
typedef NS_OPTIONS(NSInteger, MyOption) {
MyOption1 = 1 << 0,
MyOption2 = 1 << 1,
};
And got this.
typedef enum MyStyle : NSInteger MyStyle; enum MyStyle : NSInteger {
MyStyleDefault,
MyStyleCustom
};
typedef enum MyOption : NSInteger MyOption; enum MyOption : NSInteger {
MyOption1 = 1 << 0,
MyOption2 = 1 << 1,
};
I know NS_OPTIONS is for a bitmask, but are there any technical differences?
Or is this just a naming convention?
EDIT
According to the definition of NS_OPTIONS, it's probably for compiler compatibility (especially for the C++ compiler).
// In CFAvailability.h
// Enums and Options
#if (__cplusplus && __cplusplus >= 201103L && (__has_extension(cxx_strong_enums) || __has_feature(objc_fixed_enum))) || (!__cplusplus && __has_feature(objc_fixed_enum))
#define CF_ENUM(_type, _name) enum _name : _type _name; enum _name : _type
#if (__cplusplus)
#define CF_OPTIONS(_type, _name) _type _name; enum : _type
#else
#define CF_OPTIONS(_type, _name) enum _name : _type _name; enum _name : _type
#endif
#else
#define CF_ENUM(_type, _name) _type _name; enum
#define CF_OPTIONS(_type, _name) _type _name; enum
#endif
The __cplusplus value in clang is 199711, so I can't test exactly what this is for, though.
There's a basic difference between an enum and a bitmask (option). You use an enum to list exclusive states. A bitmask is used when several properties can apply at the same time.
In both cases you use integers, but you look at them differently. With an enum you look at the numerical value, with bitmasks you look at the individual bits.
typedef NS_ENUM(NSInteger, MyStyle) {
MyStyleDefault,
MyStyleCustom
};
This will only represent two states. You can simply check it by testing for equality.
switch (style) {
    case MyStyleDefault:
        // int is 0
        break;
    case MyStyleCustom:
        // int is 1
        break;
}
The bitmask, on the other hand, can represent more states. You check for the individual bits with logical or bitwise operators.
typedef NS_OPTIONS(NSInteger, MyOption) {
MyOption1 = 1 << 0, // bits: 0001
MyOption2 = 1 << 1, // bits: 0010
};
if (option & MyOption1) { // last bit is 1
    // bits are 0001 or 0011
}
if (option & MyOption2) { // second to last bit is 1
    // bits are 0010 or 0011
}
if ((option & MyOption1) && (option & MyOption2)) { // last two bits are 1
    // bits are 0011
}
tl;dr An enum gives names to numbers. A bitmask gives names to bits.
The only major difference is that using the appropriate macro allows Code Sense (Xcode's code completion) to do type checking and code completion better. For example, NS_OPTIONS allows the compiler to make sure all the enums you | together are of the same type.
For further reading see: http://nshipster.com/ns_enum-ns_options/
Edit:
Now that Swift is coming, using NS_ENUM/NS_OPTIONS is highly recommended so that the enum can be correctly bridged to a Swift enum.
The only difference is to let developers using the values know if it makes sense to use them in an OR'ed bitmask.
The compiler doesn't care which one you use though :)
I copied my answer from this question: Objective-C Enumeration, NS_ENUM & NS_OPTIONS.
Since the user who asked that question hasn't been active for a long time, maybe you can suggest my answer to people who search and end up here.
BELOW IS THE COPIED ANSWER:
There is a difference between the two, beyond the fact that they imply different kinds of enumerations.
When compiled in Objective-C++ mode, they generate different code:
This is the original code:
typedef NS_OPTIONS(NSUInteger, MyOptionType) {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef NS_ENUM(NSUInteger, MyEnumType) {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
This is the code after the macros are expanded when compiling as Objective-C:
typedef enum MyOptionType : NSUInteger MyOptionType; enum MyOptionType : NSUInteger {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef enum MyEnumType : NSUInteger MyEnumType; enum MyEnumType : NSUInteger {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
This is the code after the macros are expanded when compiling as Objective-C++:
typedef NSUInteger MyOptionType; enum : NSUInteger {
MyOptionType1 = 1 << 0,
MyOptionType2 = 1 << 1,
};
typedef enum MyEnumType : NSUInteger MyEnumType; enum MyEnumType : NSUInteger {
MyEnumType1 = 1 << 0,
MyEnumType2 = 1 << 1,
};
See the difference in NS_OPTIONS between the two modes?
HERE IS THE REASON:
There is a new feature in C++11: you can declare an underlying type for your enumeration. Before that, the type holding the enumeration was decided by the compiler according to the largest value in the enumeration.
So in C++11, since you can decide the size of your enumeration yourself, you can forward-declare enums without actually defining them, like this:
// forward declare MyEnumType
enum MyEnumType : NSInteger;

// use MyEnumType
enum MyEnumType aVar;

// actually define MyEnumType somewhere else
enum MyEnumType : NSInteger {
    MyEnumType1 = 1 << 1,
    MyEnumType2 = 1 << 2,
};
This feature is handy, and Objective-C imports it, but it brings a problem when doing bitwise calculations, like this:
enum MyEnumType aVar = MyEnumType1 | MyEnumType2;
This code can't compile as C++/Objective-C++, since aVar is considered to be of type MyEnumType while MyEnumType1 | MyEnumType2 is of type NSInteger; the assignment can't be performed without a type cast, because C++ forbids this implicit conversion.
This is where we need NS_OPTIONS: in C++ mode, NS_OPTIONS falls back to a pre-C++11 style enum, so there is no distinct MyEnumType type at all; MyEnumType is just another name for NSInteger. As a result, code like
enum MyEnumType aVar = MyEnumType1 | MyEnumType2;
will compile, since it is assigning NSInteger to NSInteger.
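To make that concrete, here is a small sketch of my own (compiled as Objective-C++, assuming the MyEnumType/MyOptionType definitions from above):
MyOptionType options = MyOptionType1 | MyOptionType2;        // compiles: in C++ mode MyOptionType is just NSUInteger
MyEnumType style = (MyEnumType)(MyEnumType1 | MyEnumType2);  // NS_ENUM keeps a distinct enum type, so the explicit cast is needed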
