NSInteger and NSUInteger in a mixed 64bit / 32bit environment - ios

I have a fair number of string format specifiers in NSLog / NSAssert etc. calls which use %d and %u with NSInteger (= int on 32-bit) and NSUInteger (= unsigned int on 32-bit) types respectively.
When converting the app to 64-bit, this gives warnings (of course), as %ld / %lu is expected for what have now become long and unsigned long types.
Simply converting the format specifiers will of course introduce the reverse warnings in the 32-bit build.
So the only solution I see to become warning-free is using the 64-bit specifiers, and casting to the 64-bit value types everywhere a warning is given in the 32-bit build.
But I was wondering if perhaps there are format specifiers specifically for the NSInteger and NSUInteger type which would work on both architectures without casting?

I think the safest way is to box them into NSNumber instances.
NSLog(@"Number is %@", @(number)); // use the highest level of abstraction
This boxing doesn't usually have to create a new object thanks to tagged pointer magic.
If you really don't want to use NSNumber, you can cast primitive types manually, as others suggested:
NSLog(@"Number is %ld", (long)number); // works the same on 32-bit and 64-bit

You can also use %zd (NSInteger) and %tu (NSUInteger) when logging to the console.
NSInteger integer = 1;
NSLog(@"first number: %zd", integer);
NSUInteger uinteger = 1;
NSLog(@"second number: %tu", uinteger);
Also to be found here.

No, (unfortunately) there is no printf format that directly corresponds to NS(U)Integer.
So for architecture independent code, you have to convert everything to the "long"
variant (as the Xcode "Fix-it" suggests):
NSInteger i = ...;
NSLog(@"%ld", (long)i);
The only alternative that I know of comes from the answer to "Foundation types when compiling for arm64 and 32-bit architecture":
// In the precompiled header file:
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSInteger i = ...;
NSLog(@"i=%"NSI, i);
using preprocessor macros (but even the author of that answer calls it an "admittedly awful approach").

Related

How to send int between 32bit and 64bit processors iOS

Pretty much the title: I send an int in a struct using GameKit and the other device gets it on the receiving end.
Between 64-bit CPUs (iPhone 5S and later) the number is received fine. But when an iPhone 5 (32-bit CPU) gets it, the int is received as 0. What's the correct way?
I've tried sending it as NSInteger and the results are the same.
I have to add that I have this issue with u_int32_t as well:
When devices connect, each device trades random numbers. These numbers determine which player starts, and I'm using u_int32_t for this; however, 32-bit CPUs still receive 0. For example:
I declare
uint32_t _ourRandomNumber;
Then, _ourRandomNumber = arc4random();
And then the numbers are sent, in a struct like this.
typedef struct {
    Message message;
    uint32_t randomNumber;
} MessageRandomNumber;
Using a method like this:
- (void)sendRandomNumber {
    MessageRandomNumber message;
    message.message.messageType = kMessageTypeRandomNumber;
    message.randomNumber = _ourRandomNumber;
    NSData *data = [NSData dataWithBytes:&message length:sizeof(MessageRandomNumber)];
    [self sendData:data];
}
When the 32-bit CPU receives it, then in the receiving method:
Message *message = (Message *)[data bytes];
if (message->messageType == kMessageTypeRandomNumber) {
    MessageRandomNumber *messageRandomNumber = (MessageRandomNumber *)[data bytes];
    NSLog(@"Received random number:%d", messageRandomNumber->randomNumber);
}
The NSLog shows: Received random number:0
NSInteger is going to be 64-bit on a 64-bit platform and 32-bit on a 32-bit platform. If you don't care about 64-bit precision, you could always use an int32_t (or a u_int32_t if you want unsigned) type to explicitly just use a 32-bit value. It is generally wise to be explicit about data lengths when sending values between devices, which is what these types exist for (there's int8_t, int16_t, int32_t, and int64_t and their unsigned counterparts).
It's also worth mentioning that you need to be concerned about the byte order of the values (assuming larger values than int8_t and u_int8_t) when sending values to arbitrary hardware. If you're only working with iOS devices this isn't going to be an issue, however.
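As a minimal sketch of both points (fixed-width fields plus explicit byte order), something along these lines could be used; the struct layout and method name here are hypothetical, while kMessageTypeRandomNumber, _ourRandomNumber and sendData: are assumed from the question's own code:
#include <arpa/inet.h> // htonl / ntohl

// Hypothetical wire format: every field has an explicit, fixed width.
typedef struct {
    uint32_t messageType;
    uint32_t randomNumber;
} WireRandomNumberMessage;

- (void)sendRandomNumberPortably {
    WireRandomNumberMessage msg;
    msg.messageType  = htonl(kMessageTypeRandomNumber); // convert to network byte order
    msg.randomNumber = htonl(_ourRandomNumber);
    NSData *data = [NSData dataWithBytes:&msg length:sizeof(msg)];
    [self sendData:data];
}

// Receiving side: convert back to host byte order before using the value.
// const WireRandomNumberMessage *received = [data bytes];
// uint32_t randomNumber = ntohl(received->randomNumber);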

Is NSUInteger equal to [NSNumber numberWithUnsignedLongLong]?

I'm trying to create an app that will use very large numbers. I was wondering if the storage capacity of [NSNumber numberWithUnsignedLongLong:] is the max number I can get, and if NSUInteger has the same storage capacity as [NSNumber numberWithUnsignedLongLong:]?
NSUInteger is a typedef of a basic C type. The exact type depends on your platform:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
So sizes can vary by implementation of C, but unsigned long is at least 32 bits and unsigned long long is at least 64 bits.
Using types where you know the size is probably better when you're worried about overflowing. They can always be wrapped in Objective-C types if needed.
uint64_t, which holds a number up to UINT64_MAX, might be useful.
#define UINT64_MAX (18446744073709551615ULL)
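As a quick illustration of the "wrapped in Objective-C types" point (my sketch, not part of the original answer):
uint64_t big = UINT64_MAX;                      // fixed-width, always 64 bits
NSNumber *boxed = @(big);                       // boxed as an unsigned long long
uint64_t back = [boxed unsignedLongLongValue];  // round-trips without loss
NSLog(@"%llu", back);                           // prints 18446744073709551615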
Just use uint64_t (64-bit unsigned integer, which is the same as unsigned long long).
You don't want to use NSNumber unless you are storing the values within an Objective-C collection class (NSArray, for example), as NSNumber instances are immutable, which makes them difficult and expensive to manipulate.
You are correct that the largest unsigned integer type you can store in an NSNumber is unsigned long long - and on current systems expect this to be 64 bits.
Is this type equivalent to NSUInteger? No, that is platform dependent and is either an unsigned int or an unsigned long, but not an unsigned long long. Just use unsigned long long or typedef it, e.g.:
typedef unsigned long long MYULongLong;
You could use a sized type, such as uint64_t, but there are no matching sized methods on NSNumber. You can address that by adding such methods to NSNumber using a category and conditional code based on sizeof - with a little care you can write that so the conditionals all disappear during compilation, that is left as an exercise ;-)
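For what it's worth, a minimal sketch of that category idea might look like this (the category and method names are made up, and the sizeof-based conditional trick is omitted):
@interface NSNumber (MYSizedValues)
- (uint64_t)my_uint64Value;
@end

@implementation NSNumber (MYSizedValues)
- (uint64_t)my_uint64Value {
    // unsigned long long is at least 64 bits, so this cast cannot lose data.
    return (uint64_t)[self unsignedLongLongValue];
}
@end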
HTH

objective c implicit conversion loses integer precision 'NSUInteger'

Following a tutorial on Treehouse, I'm seeing this popular Objective-C warning message in Xcode.
My button function
- (IBAction)buttonPressed:(UIButton *)sender {
    NSUInteger index = arc4random_uniform(predictionArray.count);
    self.predictionLabel.text = [predictionArray objectAtIndex:index];
}
I see it on the NSUInteger line. I've viewed a few of the similar Stack Overflow questions and they seem to talk about 32-bit vs 64-bit numbers and type casting, but I'm not sure how to do that here.
My predictionArray
- (void)viewDidLoad
{
    [super viewDidLoad];
    predictionArray = [[NSArray alloc] initWithObjects:
                       @"It is certain",
                       @"It is decidely so",
                       @"All signs say YES",
                       @"The stars are not aligned",
                       @"My reply is no",
                       @"It is doubtful",
                       @"Better not tell you now",
                       @"Concentrate and ask again",
                       @"Unable to answer now", nil];
    // Do any additional setup after loading the view, typically from a nib.
}
You can safely suppress the warning with a cast.
NSUInteger index = arc4random_uniform((uint32_t) predictionArray.count);
It's not always safe to suppress warnings, so don't go casting things to get rid of the warnings until you figure out whether the operation is safe.
What's going on here is that NSUInteger is, on your platform, a typedef for a 64-bit integer type. It's not always 64 bits, just on some platforms. The compiler is warning you that some of those bits are getting thrown away. If you know that these bits are unimportant, you can just use a cast.
In this case, the result is that index will always be under 2^32 - 1. If it's even remotely possible for predictionArray to contain 2^32 or more elements, then your program has an error and you'll have to construct a 64-bit version of arc4random_uniform(). You can ensure this with the following code:
assert(predictionArray.count <= (uint32_t) -1);
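Should you ever need the 64-bit variant mentioned above, a sketch could look like the following; this helper is hypothetical (not part of the answer) and falls back to rejection sampling with arc4random_buf() for counts above UINT32_MAX:
static NSUInteger MYRandomIndex(NSUInteger count) {
    if (count <= UINT32_MAX) {
        return (NSUInteger)arc4random_uniform((uint32_t)count);
    }
    // Only reachable on 64-bit: use rejection sampling over 64 random bits
    // to avoid modulo bias.
    uint64_t limit = UINT64_MAX - (UINT64_MAX % (uint64_t)count);
    uint64_t value;
    do {
        arc4random_buf(&value, sizeof(value));
    } while (value >= limit);
    return (NSUInteger)(value % count);
}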
As per my comment, arc4random_uniform() takes in, and returns, a u_int32_t, an unsigned integer that is always 32 bits, regardless of target architecture. However, predictionArray.count returns an NSUInteger, which is typedef'd differently for 32-bit and 64-bit systems; it's 32 bits (unsigned int) on a 32-bit system, and 64 bits (unsigned long) on a 64-bit system. If you're running on a 64-bit system, passing in a 64-bit NSUInteger to a function expecting a 32-bit integer will cause the compiler to complain that you're throwing away bits.

Alternatives to type casting when formatting NS(U)Integer on 32 and 64 bit architectures?

With the 64-bit version of iOS we can't use %d and %u anymore to format NSInteger and NSUInteger, because for 64-bit those are typedef'd to long and unsigned long instead of int and unsigned int.
So Xcode will throw warnings if you try to format NSInteger with %d. Xcode is nice to us and offers a replacement for those two cases, which consists of an l-prefixed format specifier and a typecast to long. Then our code basically looks like this:
NSLog(@"%ld", (long)i);
NSLog(@"%lu", (unsigned long)u);
Which, if you ask me, is a pain in the eye.
A couple of days ago someone on Twitter mentioned the format specifiers %zd to format signed variables and %tu to format unsigned variables on 32-bit and 64-bit platforms.
NSLog(@"%zd", i);
NSLog(@"%tu", u);
Which seems to work. And which I like more than typecasting.
But I honestly have no idea why those work. Right now both are basically magic values for me.
I did a bit of research and figured out that the z prefix means that the following format specifier has the same size as size_t. But I have absolutely no idea what the prefix t means. So I have two questions:
What exactly do %zd and %tu mean?
And is it safe to use %zd and %tu instead of Apple's suggestion to typecast to long?
I am aware of similar questions and Apple's 64-Bit Transition Guides, which all recommend the %lu (unsigned long) approach. I am asking for an alternative to type casting.
From http://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html:
z
Specifies that a following [...] conversion specifier applies to a size_t or the corresponding signed integer type argument;
t
Specifies that a following [...] conversion specifier applies to a ptrdiff_t or the corresponding unsigned type argument;
And from http://en.wikipedia.org/wiki/Size_t#Size_and_pointer_difference_types:
size_t is used to represent the size of any object (including arrays) in the particular implementation. It is used as the return type of the sizeof operator.
ptrdiff_t is used to represent the difference between pointers.
On the current OS X and iOS platforms we have
typedef __SIZE_TYPE__ size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
where __SIZE_TYPE__ and __PTRDIFF_TYPE__ are predefined by the
compiler. For 32-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ int
and for 64-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ long int
(This may have changed between Xcode versions. Motivated by @user102008's
comment, I have checked this with Xcode 6.2 and updated the answer.)
So ptrdiff_t and NSInteger are both typedef'd to the same type:
int on 32-bit and long on 64-bit. Therefore
NSLog(@"%td", i);
NSLog(@"%tu", u);
work correctly and compile without warnings on all current
iOS and OS X platforms.
size_t and NSUInteger have the same size on all platforms, but
they are not the same type, so
NSLog(@"%zu", u);
actually gives a warning when compiling for 32-bit.
But this relation is not fixed in any standard (as far as I know), therefore I would
not consider it safe (in the same sense as assuming that long has the same size
as a pointer is not considered safe). It might break in the future.
The only alternative to type casting that I know of is from the answer to "Foundation types when compiling for arm64 and 32-bit architecture", using preprocessor macros:
// In your prefix header or something
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSLog(@"i=%"NSI, i);
NSLog(@"u=%"NSU, u);
I prefer to just use an NSNumber instead:
NSInteger myInteger = 3;
NSLog(@"%@", @(myInteger));
This does not work in all situations, but I've replaced most of my NS(U)Integer formatting with the above.
According to Building 32-bit Like 64-bit, another solution is to define the NS_BUILD_32_LIKE_64 macro, and then you can simply use the %ld and %lu specifiers with NSInteger and NSUInteger without casting and without warnings.
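As a minimal sketch of that last option (assuming NS_BUILD_32_LIKE_64=1 is visible before Foundation's headers are processed, e.g. as a project-level preprocessor macro):
// Assumes NS_BUILD_32_LIKE_64=1 has been defined before this import,
// e.g. in the build settings or the prefix header.
#import <Foundation/Foundation.h>

NSInteger i = -42;
NSUInteger u = 42;
NSLog(@"%ld %lu", i, u);   // NSInteger/NSUInteger are long/unsigned long even on 32-bit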

Why is BOOL type in Objective-C a char?

I have been told that BOOL in Objective-C is a typedef of an unsigned char and YES & NO keywords are encoded chars. This is not the first time I heard it. I have read that this is because Apple used BOOL before the C standard provided a _Bool type, am I wrong? Is there any advantage of that fact? Are we wasting bits of memory? Does this provide a way to return valuable data in a function? Would it be correct to use it as a way to return a different values when some unexpected behavior occurs?
BOOL myFunction(int argument)
{
    BOOL result = YES; // the function generates the result
    if (someError == YES) {
        return 5;
    }
    return result;
}
Are we wasting bits of memory?
No, because you can't get a variable smaller than a char: it's always a single byte. You can pack multiple bits representing boolean flags in a single word, but you have to do it manually - with bit shifts, using bit fields, and so on.
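A tiny sketch of that manual packing, using a C bit-field (the names are just for illustration):
typedef struct {
    unsigned int isLoaded  : 1;  // one bit per flag instead of one byte per BOOL
    unsigned int isVisible : 1;
    unsigned int isDirty   : 1;
} MYFlags;

MYFlags flags = { .isLoaded = 1, .isVisible = 0, .isDirty = 1 };
if (flags.isDirty) {
    NSLog(@"needs saving");
}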
Does this provide a way to return valuable data in a function?
Not really: what you did is a hack, although 5 would definitely make its way through the system to the caller, and would be interpreted as YES in a "plain" if statement, e.g.
if (myFunction(123)) {
...
}
However, it would fail miserably if used like this:
if (myFunction(123) == YES) { // 5 != YES
...
}
Would it be correct to use it as a way to return a different values when some unexpected behavior occurs?
It would always be incorrect from the readability point of view; as far as "doing what you meant it to do", your mileage may vary, depending on the way in which your function is used.
There is a slight advantage: On many platforms (including iOS, IIRC), sizeof(_Bool) == sizeof(int), so using char can be slightly more compact.
Except BOOL is actually signed char, not char; this is so that @encode(BOOL) evaluates to the same thing on all platforms. This complicates bitfields slightly, since BOOL foo:1; appears to define a 1-bit signed integer (IIRC the behaviour of which is undefined); clearly unsigned char would be a better choice, but it was probably too late.
_Bool also ought to optimize better since the compiler can make assumptions about the bit-pattern used, e.g. replacing a&&b with a&b (provided b is side effect-free). Some architectures also represent "true" as all bits set, which is useful for masking (SSE comparison instructions come to mind).
"BOOL in Objective-C" is not an unsigned char, it's whatever the Objective-C library defines it to be. Which is signed char or bool, depending on your compiler settings (32-bit or 64-bit). The two behave differently. Try this code with a 32-bit compiler and a 64-bit compiler:
BOOL b = 256;
if (b) NSLog(@"b is true"); else NSLog(@"b is false");
