NSInteger is defined this way:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
This results in NSInteger being defined as int on 32-bit iOS, even though int and long are the same size there anyway (both 4 bytes). Format strings like the following generate a warning with this definition:
NSInteger x = 4;
[NSString stringWithFormat:@"%ld", x];
// Warning: Values of type 'NSInteger' should not be used as format arguments;
// add an explicit cast to 'long' instead.
So does somebody know why NSInteger isn't always defined as long?
Historical reasons: previous releases of the APIs used int, and were then migrated to the NSInteger typedef around the time of the 64-bit transition of OS X.
I suppose they could have changed it for iOS, but that would have impacted a lot of existing and yet-to-be-developed code if the definitions differed between OS X and iOS.
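In the meantime, the cast the warning asks for silences it on both architectures (a minimal sketch; the cast is a no-op on 64-bit and a harmless widening on 32-bit):
NSInteger x = 4;
// Casting to long makes the value always match %ld, whether NSInteger is int or long.
NSString *s = [NSString stringWithFormat:@"%ld", (long)x];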
This is my first Xcode app and my first time with Objective-C, so give me some slack :)
I tried googling the issue, but I couldn't find any help regarding Xcode and app development. I added the error messages after //.
- (id)initWithBytes:(int8_t)byte1, ... { //Error: 1. Parameter of type 'int8_t' (aka 'signed char') is declared here
va_list args;
va_start(args, byte1); //Error: Passing an object that undergoes default argument promotion to 'va_start' has undefined behavior
unsigned int length = 0;
for (int8_t byte = byte1; byte != -1; byte = va_arg(args, int)) {
length++;
}
va_end(args);
if ((self = [self initWithLength:length]) && (length > 0)) {
va_list args;
va_start(args, byte1); // Error: Passing an object that undergoes default argument promotion to 'va_start' has undefined behavior
int i = 0;
for (int8_t byte = byte1; byte != -1; byte = va_arg(args, int)) {
_array[i++] = byte;
}
va_end(args);
}
return self;
}
Thank you in advance!!
va_start() saves the pointer to the first argument passed to the function into a va_list.
The arguments themselves are passed via the hardware stack.
The issue with int8_t comes from the way the hardware stack is implemented (on x86 at least).
Just like SSE and MMX do, the stack requires elements stored on it to be aligned to a multiple of 16 bits, so everything passed to the function WILL occupy at least 16 bits, regardless of its type.
But the problem is that va_arg() doesn't know about any of that. Historically it was a macro, and all it does is return the pointer stored in the va_list and increment the va_list by sizeof(type).
So when you retrieve the next argument, the pointer returned may not point to the next argument but to one byte before it, or it may, depending on whether va_arg is a macro or a compiler built-in.
And that is what the warning is about.
IMO at least. Pardon my English, it's my 2nd language.
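One common way to avoid the undefined behavior (a sketch based on the code in the question, assuming -initWithLength: and _array behave as shown there) is to declare the fixed parameter as int; char-sized arguments are promoted to int when passed variadically, so reading them back as int is well defined:
- (id)initWithBytes:(int)byte1, ... {
    va_list args;
    va_start(args, byte1); // int undergoes no promotion, so this is well defined
    unsigned int length = 0;
    for (int byte = byte1; byte != -1; byte = va_arg(args, int)) {
        length++;
    }
    va_end(args);
    if ((self = [self initWithLength:length]) && (length > 0)) {
        va_start(args, byte1); // restart the walk for the second pass
        int i = 0;
        for (int byte = byte1; byte != -1; byte = va_arg(args, int)) {
            _array[i++] = (int8_t)byte; // narrow back to int8_t for storage
        }
        va_end(args);
    }
    return self;
}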
I have an application which utilizes a single string. This string contains data loaded from an array and then the string is exported to a text file.
My question is what is the longest length possible for this string, and when does it become a problem that it is getting too long?
Following the official Apple documentation:
String is bridged to Objective-C as NSString, and a String that
originated in Objective-C may store its characters in an NSString.
Since any arbitrary subclass of NSString can become a String, there
are no guarantees about representation or efficiency in this case.
On devices running 32-bit iOS, NSUInteger is 32 bits wide, so NSUIntegerMax is 2^32 - 1.
According to the Swift open-source GitHub repo, on 64-bit devices its value is 2^64 - 1 = 18,446,744,073,709,551,615 (hexadecimal 0xFFFFFFFFFFFFFFFF), per this code:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
// + (instancetype)stringWithCharacters:(const unichar *)chars length:(NSUInteger)length
// ...
// maxLength:(NSUInteger)maxBufferCount
// ...
TEST: (on iPhone 6)
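A minimal sketch of such a test (it just logs the limit rather than allocating a string that long):
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // On 64-bit hardware such as the iPhone 6 this logs 18446744073709551615 (2^64 - 1);
        // on a 32-bit device it logs 4294967295 (2^32 - 1).
        NSLog(@"NSUIntegerMax = %lu", (unsigned long)NSUIntegerMax);
    }
    return 0;
}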
What is the maximum length of an NSString object?
The hard limit for NSString would be NSUIntegerMax characters. On 32-bit, NSUIntegerMax is 2^32 - 1, so NSString can hold a little over 4.2 billion characters.
According to the comments: for the iPhone 5S and above, which are 64-bit, the limit is 2^64 - 1.
When I use BOOL for 32-bit, I get:
BOOL b1=8960; //b1 == NO
bool b2=8960; //b2 == true
But for 64-bit, I get:
BOOL b1=8960; //b1 == YES
bool b2=8960; //b2 == true
What has changed about BOOL from 32-bit to 64-bit?
@TimBodeit is right, but it doesn't explain why ...
BOOL b1=8960; //b1 == NO
... evaluates to NO on 32-bit iOS and why it evaluates to YES on 64-bit iOS. Let's start from the beginning.
ObjC BOOL definition
#if (TARGET_OS_IPHONE && __LP64__) || (__ARM_ARCH_7K__ >= 2)
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
For 64-bit iOS or ARMv7k (watch) it's defined as bool and for the rest as signed char.
ObjC BOOL YES and NO
Read Objective-C Literals, where you can find:
Previously, the BOOL type was simply a typedef for signed char, and
YES and NO were macros that expand to (BOOL)1 and (BOOL)0
respectively. To support @YES and @NO expressions, these macros are
now defined using new language keywords in <objc/objc.h>:
#if __has_feature(objc_bool)
#define YES __objc_yes
#define NO __objc_no
#else
#define YES ((BOOL)1)
#define NO ((BOOL)0)
#endif
The compiler implicitly converts __objc_yes and __objc_no to (BOOL)1
and (BOOL)0. The keywords are used to disambiguate BOOL and integer
literals.
bool definition
bool is a macro defined in stdbool.h and it expands to _Bool, which is a boolean type introduced in C99. It can store two values, 0 or 1. Nothing else. To be more precise, stdbool.h defines four macros to use:
/* Don't define bool, true, and false in C++, except as a GNU extension. */
#ifndef __cplusplus
#define bool _Bool
#define true 1
#define false 0
#elif defined(__GNUC__) && !defined(__STRICT_ANSI__)
/* Define _Bool, bool, false, true as a GNU extension. */
#define _Bool bool
#define bool bool
#define false false
#define true true
#endif
#define __bool_true_false_are_defined 1
_Bool
_Bool was introduced in C99 and it can hold the values 0 or 1. What's important is:
When a value is demoted to a _Bool, the result is 0 if the value
equals 0, and 1 otherwise.
Now we know where this mess comes from and we can better understand what's going on.
64-bit iOS || ARMv7k
BOOL -> bool -> _Bool (values 0 or 1)
Demoting 8960 to _Bool gives 1, because the value doesn't equal 0 (see the _Bool section above).
32-bit iOS
BOOL -> signed char (values -128 to 127).
If you're going to store int values (-128 to 127) as signed char, the value is unchanged per C99 6.3.1.3. Otherwise it is implementation defined (C99 quote):
Otherwise, the new type is signed and the value cannot be represented
in it; either the result is implementation-defined or an
implementation-defined signal is raised.
It means that clang can decide. To make it short, with the default settings, clang wraps it around (int -> signed char):
-129 becomes 127,
-130 becomes 126,
-131 becomes 125,
...
And in the opposite direction:
128 becomes -128,
129 becomes -127,
130 becomes -126,
...
But because signed char can store values in the range -128 to 127, it can store 0 as well. For example 256 (int) becomes 0 (signed char). And when your value 8960 is wrapped around ...
8960 becomes 0,
8961 becomes 1,
8959 becomes -1,
...
... it becomes 0 when stored in signed char (8960 is a multiple of 256, 8960 % 256 == 0), thus it's NO. The same applies to 256, 512, ... multiples of 256.
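You can watch this happen with plain C variables (a small sketch; the signed char line mirrors 32-bit BOOL, the _Bool line mirrors 64-bit BOOL):
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        signed char c = (signed char)8960; // 8960 % 256 == 0, so c ends up as 0 (NO)
        _Bool b = 8960;                    // any non-zero value demotes to 1 (YES/true)
        NSLog(@"signed char: %d, _Bool: %d", c, b); // logs "signed char: 0, _Bool: 1"
    }
    return 0;
}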
I strongly recommend using YES and NO with BOOL and not relying on fancy C features like using an int as a condition in if, etc. That's the reason Swift has Bool, true, and false, and why you can't use Int values in conditions where a Bool is expected. Just to avoid this mess ...
For 32-bit BOOL is a signed char, whereas under 64-bit it is a bool.
Definition of BOOL from objc.h:
/// Type to represent a boolean value.
#if (TARGET_OS_IPHONE && __LP64__) || TARGET_OS_WATCH
#define OBJC_BOOL_IS_BOOL 1
typedef bool BOOL;
#else
#define OBJC_BOOL_IS_CHAR 1
typedef signed char BOOL;
// BOOL is explicitly signed so @encode(BOOL) == "c" rather than "C"
// even if -funsigned-char is used.
#endif
I am trying to make an iOS app ready for 64-bit.
I have a method which builds a string from entries of an enum. The number of parameters to this method is variable.
The method works fine under 32-bit, but under 64-bit my for-loop can't end correctly.
Here is some code from the .h:
#define enumToString(intVal) \
[NSString stringWithFormat:@"%ld", intVal]
#define ENUM_END -1
typedef enum _MYENTRIES
{
entry1,
entry2,
entry3
} MYENTRIES;
typedef NSUInteger MYENTRY;
My crashing method (the loop has to end, but it doesn't):
-(NSString*) getMyString:(MYENTRY) firstArg, ... {
va_list args;
va_start(args, firstArg);
NSMutableString *mySTRING = [[[NSMutableString alloc] init] autorelease];
for (MYENTRY arg = firstArg; arg != ENUM_END; arg = va_arg(args, MYENTRY))
{
NSLog(@"arg: %d %@", arg, enumToString(arg)); // when ENUM_END: "arg: -1 4294967295"
[mySTRING appendString:self.myDictionary[[NSString stringWithFormat:@"%ld", arg]]];
}
An example method call:
myString = [myClass getMyString: entry1, entry3, ENUM_END , nil];
Hope you can help me.
best regards
That's because MYENTRIES and MYENTRY are different types. One is 32 bit, and the other is 64 bit. Passing 32 bit values and reading them as 64 bit isn't going to work.
That's what the NS_ENUM macro is there for. Google for it, understand it, and use it. You will also have the problem that ENUM_END is not the same type as either MYENTRIES or MYENTRY (it is an int). Unless you can quote the rules for type conversion between int and unsigned long by heart (which you can't), I suggest you make it part of the enum.
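For example, a sketch of how that might look (MyEntryEnd is a hypothetical terminator folded into the enum, as suggested above; self.myDictionary is assumed to work as in the question):
typedef NS_ENUM(NSUInteger, MYENTRY) {
    entry1,
    entry2,
    entry3,
    MyEntryEnd = NSUIntegerMax // terminator has the same type as every other value
};

- (NSString *)getMyString:(MYENTRY)firstArg, ... {
    va_list args;
    va_start(args, firstArg);
    NSMutableString *result = [NSMutableString string];
    // Every argument travels through the variadic list as an NSUInteger-sized value,
    // so read it back as one.
    for (MYENTRY arg = firstArg; arg != MyEntryEnd; arg = va_arg(args, NSUInteger)) {
        [result appendString:self.myDictionary[[NSString stringWithFormat:@"%lu", (unsigned long)arg]]];
    }
    va_end(args);
    return result;
}
The call site then becomes [myClass getMyString:entry1, entry3, MyEntryEnd], with no separate ENUM_END or trailing nil.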
I am going through a former employee's code and about 20 of these warnings show up:
Values of type 'NSUInteger' should not be used as format arguments; add an explicit cast to 'unsigned long' instead
One part of the code where this arises is:
NSUInteger Length;
With:
- (NSString *) description {
// If no value was given, display type
if([Value length] == 0)
{
NSString *type = @"";
switch (Type) {
case DTCompType_Year: type = @"year";
break;
case DTCompType_Month: type = @"month";
break;
case DTCompType_Day: type = @"day";
break;
case DTCompType_Hour: type = @"hour";
break;
case DTCompType_Minute: type = @"minute";
break;
case DTCompType_Second: type = @"second";
break;
case DTCompType_Meridiem: type = @"meridiem";
break;
case DTCompType_MonthName: type = @"month_name";
break;
case DTCompType_DayOfTheWeek: type = @"day_of_the_week";
break;
case DTCompType_Undefined: type = @"undefined";
break;
}
return Length == 0 ? [NSString stringWithFormat:@"[%@]", type] :
[NSString stringWithFormat:@"[%@:%i]", type, Length];
}
Nowhere in Apple's documentation can I find %i.
Apple's Documentation
I have never worked with Objective-C before, and now I have to update this app. I understand that this needs to become an unsigned long, but I don't want to start changing things without knowing why. The app works just fine as is, so are there any inherent consequences to changing these to unsigned long, or even changing the format specifier from %i to %lu?
From what I've read, it could be a matter of the platform (32-bit vs. 64-bit).
This was developed for an iPad 2 in iOS7, and we just upgraded the SDK to iOS8.
I found this post:
NSUInteger should not be used in format strings?
which has given me some guidance, but I need more clarification.
%i is equivalent to %d. Technically, you should have been using %u anyway. The problem is, as you suspect, 32-bit vs. 64-bit; NS[U]Integer is [unsigned] int on 32-bit builds, but [unsigned] long on 64-bit ones. Because the iPhone is little-endian, it will "work" as long as %i/d/u is the last format specifier, but it's still wrong. You should cast the argument to the type the format specifier expects (int/long/unsigned/unsigned long), as the warning message tells you to.
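Applied to the return statement in the question, the warning-free version might look like this (a sketch using the cast the warning suggests):
return Length == 0 ? [NSString stringWithFormat:@"[%@]", type]
                   : [NSString stringWithFormat:@"[%@:%lu]", type, (unsigned long)Length];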
From <objc/NSObjCRuntime.h>:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
You can use a boxed literal to allow the compiler and the NSNumber class to handle the details of converting between the various numeric types and their string representations. For example, given the following variable definition...
NSUInteger foo = 42;
...you can create an instance of NSNumber as follows:
NSNumber *myNumber = @(foo);
You can then use the %@ format specifier whenever you need to format the value of myNumber. Of course it's easy enough to instead box the original numeric value right in line:
NSString *s = [NSString stringWithFormat:@"The answer is %@", @(foo)];