My app has to use Int to do some multiplication, and it can easily end up multiplying two fairly big numbers. Of course it will crash then. And how can I record some Bool value, just like the way we save data every time before the app quits, in AppDelegate.swift's function:
func applicationWillTerminate(application: UIApplication) {
// Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
}
If the result of an integer arithmetic operation (+, -, *, /, ...)
overflows, the application terminates immediately. There is no way to
catch this situation or to get notified e.g. to save data.
There is no Swift error or NSException thrown which you could catch.
The same would happen for other runtime errors like accessing
an array element outside of the valid bounds, or unwrapping an
optional which is nil.
This means that you have to check beforehand if the integer arithmetic
operation can be executed. Alternatively – depending on your needs –
you can
use the "Overflow operators" &+, &- and &* instead,
which truncate the result instead of triggering an error,
similar to (Objective-)C.
use addingReportingOverflow() and similar methods which “return the sum of this value and the given value, along with a Boolean value indicating whether overflow occurred in the operation.”
You should prefer NSInteger over int.
Use NSInteger when you don't know what kind of processor architecture your code might run on and you want the largest native integer type, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
Stick with NSInteger instead of int/long unless you specifically require a fixed size.
NSInteger/NSUInteger are defined as *dynamic* typedefs to one of these basic types, like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
I'm trying to create an app that will use very large numbers. I was wondering whether the storage capacity of [NSNumber numberWithUnsignedLongLong:] is the largest number I can get, and whether NSUInteger has the same storage capacity as [NSNumber numberWithUnsignedLongLong:]?
NSUInteger is a typedef of a basic C type. The exact type depends on your platform:
#if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
So sizes can vary by implementation of C, but unsigned long is at least 32 bits and unsigned long long is at least 64 bits.
Using types whose size you know is probably better when you're worried about overflowing. They can always be wrapped in Objective-C types if needed.
uint64_t, which holds numbers up to UINT64_MAX, might be useful.
#define UINT64_MAX (18446744073709551615ULL)
Just use uint64_t (64-bit unsigned integer, which is the same as unsigned long long).
You don't want to use NSNumber unless you are storing the values within an Objective-C collection class (NSArray, for example), as NSNumber instances are immutable, which makes them difficult and expensive to manipulate.
You are correct: the largest unsigned integer type you can store in an NSNumber is unsigned long long, and on current systems you can expect this to be 64 bits.
Is this type equivalent to NSUInteger? No, that is platform dependent and is either an int or long, but not a long long. Just use unsigned long long or typedef it, e.g.:
typedef unsigned long long MYULongLong;
You could use a sized type, such as uint64_t, but there are no matching sized methods on NSNumber. You can address that by adding such methods to NSNumber using a category and conditional code based on sizeof - with a little care you can write that so the conditionals all disappear during compilation, that is left as an exercise ;-)
HTH
With the 64-bit version of iOS we can't use %d and %u anymore to format NSInteger and NSUInteger, because on 64-bit those are typedef'd to long and unsigned long instead of int and unsigned int.
So Xcode will throw warnings if you try to format NSInteger with %d. Xcode is nice to us and offers a replacement for those two cases, which consists of an l-prefixed format specifier and a typecast to long. Then our code basically looks like this:
NSLog(@"%ld", (long)i);
NSLog(@"%lu", (unsigned long)u);
Which, if you ask me, is a pain in the eye.
A couple of days ago someone on Twitter mentioned the format specifier %zd to format signed variables and %tu to format unsigned variables on 32- and 64-bit platforms.
NSLog(@"%zd", i);
NSLog(@"%tu", u);
Which seems to work. And which I like more than typecasting.
But I honestly have no idea why those work. Right now both are basically magic values for me.
I did a bit of research and figured out that the z prefix means that the following format specifier has the same size as size_t. But I have absolutely no idea what the prefix t means. So I have two questions:
What exactly do %zd and %tu mean?
And is it safe to use %zd and %tu instead of Apple's suggestion to typecast to long?
I am aware of similar questions and Apple's 64-Bit Transition Guides, which all recommend the %lu (unsigned long) approach. I am asking for an alternative to typecasting.
From http://pubs.opengroup.org/onlinepubs/009695399/functions/printf.html:
z
Specifies that a following [...] conversion specifier applies to a size_t or the corresponding signed integer type argument;
t
Specifies that a following [...] conversion specifier applies to a ptrdiff_t or the corresponding unsigned type argument;
And from http://en.wikipedia.org/wiki/Size_t#Size_and_pointer_difference_types:
size_t is used to represent the size of any object (including arrays) in the particular implementation. It is used as the return type of the sizeof operator.
ptrdiff_t is used to represent the difference between pointers.
On the current OS X and iOS platforms we have
typedef __SIZE_TYPE__ size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
where __SIZE_TYPE__ and __PTRDIFF_TYPE__ are predefined by the
compiler. For 32-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ int
and for 64-bit the compiler defines
#define __SIZE_TYPE__ long unsigned int
#define __PTRDIFF_TYPE__ long int
(This may have changed between Xcode versions. Motivated by @user102008's
comment, I have checked this with Xcode 6.2 and updated the answer.)
So ptrdiff_t and NSInteger are both typedef'd to the same type:
int on 32-bit and long on 64-bit. Therefore
NSLog(@"%td", i);
NSLog(@"%tu", u);
work correctly and compile without warnings on all current
iOS and OS X platforms.
size_t and NSUInteger have the same size on all platforms, but
they are not the same type, so
NSLog(@"%zu", u);
actually gives a warning when compiling for 32-bit.
But this relation is not fixed in any standard (as far as I know), therefore I would
not consider it safe (in the same sense as assuming that long has the same size
as a pointer is not considered safe). It might break in the future.
The only alternative to type casting that I know of is from the answer to "Foundation types when compiling for arm64 and 32-bit architecture", using preprocessor macros:
// In your prefix header or something
#if __LP64__
#define NSI "ld"
#define NSU "lu"
#else
#define NSI "d"
#define NSU "u"
#endif
NSLog(@"i=%"NSI, i);
NSLog(@"u=%"NSU, u);
I prefer to just use an NSNumber instead:
NSInteger myInteger = 3;
NSLog(@"%@", @(myInteger));
This does not work in all situations, but I've replaced most of my NS(U)Integer formatting with the above.
According to Building 32-bit Like 64-bit, another solution is to define the NS_BUILD_32_LIKE_64 macro, and then you can simply use the %ld and %lu specifiers with NSInteger and NSUInteger without casting and without warnings.
I had code in my app that looks like the following. I got some feedback about a bug and, to my horror, when I put a debugger on it I found that the MAX of -5 and 0 was -5!
NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0); // returns -5
After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast (for either parameter) fixes the problem.
NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0); // returns 0
This seems like a nasty and unexpected side effect of this macro. Is there another built-in method in iOS for performing MIN/MAX where the compiler would have warned about mismatched types? It seems like this SHOULD have been a compile-time issue and not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else has had similar issues.
Enabling -Wsign-compare, as suggested by FDinoff's answer is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's a quite common pitfall.
The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.
The first issue is pretty easy to explain: when you subtract from an unsigned integer and the result would be negative, the result "wraps around" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.
What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] is 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if-block would not execute).
So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.
Quoting from Understand integer conversion rules [cert.org]:
If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
(emphasis mine)
Consider this example:
int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0
The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.
It gets even more confusing if you change the type of s to long. Then, you'd get the same (incorrect) result on a 32 bit platform (iOS), but in a 64 bit Mac app it would work just fine! (explanation: long is a 64 bit type there, so it can represent all 32 bit unsigned int values.)
So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.
You probably don't have enough compiler warnings turned on. If you turn on -Wsign-compare (which can be turned on with -Wextra), you will get a warning that looks like the following:
warning: signed and unsigned type in conditional expression [-Wsign-compare]
This allows you to place the casts in the right places, if necessary, and you shouldn't need to rewrite the MAX or MIN macros.
Possible Duplicate:
What values should I use for iOS boolean states?
I believe there are something like five boolean types in the iOS environment (coming from C, C++ and Objective-C).
_Bool
bool
BOOL
boolean_t
Boolean
And there are at least four pairs of values for them:
true, false
TRUE, FALSE
YES, NO
1, 0
Which one do you think is the best (style-wise) to use for iOS Objective-C development?
Update 1
I mentioned a type "boolean". It looks like it doesn't exist, so I removed it from the list and added _Bool.
I am aware of the typedefs for these types and values. The question is about style differences.
iOS and OS X are mostly made of Cocoa, which uses the boolean type BOOL with values YES/NO.
bool
Defined by C++.
A true boolean, guaranteed to be 0 or 1.
_Bool
Defined by C99.
A true boolean, guaranteed to be 0 or 1.
If stdbool.h is included, bool is #defined as _Bool.
BOOL
Defined by the Objective-C runtime at /usr/include/objc/objc.h.
A signed char on 32-bit. Values may be YES (0x01), NO (0x00), or anything else in the range -128 to 127. YES/NO are defined in <Foundation/NSObjCRuntime.h>.
A bool on 64-bit, guaranteed to be 0 or 1.
Boolean
Defined by Carbon at CFBase.h.
An unsigned char.
Values may be TRUE (0x01), FALSE (0x00), or anything else in the range 0 to 255.
boolean_t
Defined by /usr/include/mach/i386/boolean.h
An int on 32-bit or an unsigned int on 64-bit.
For non true boolean types:
Any non zero value is treated as true in logical expressions.
If you cast to a boolean type with less range than the value being cast, only the low-order bytes are used.
Cases where one type or another makes a difference are hard to imagine. There are several cases where casting to BOOL may bite you, and some rare situations (eg: KVO converts BOOL to a NSNumber, and bool to a CFBoolean). If anything, when you consistently use BOOL, you are covered in case Apple changes its definition.
Use BOOL and YES/NO in Objective-C code. That's expected in Objective-C code and how the Objective-C headers define the data type. (Note that you may occasionally deal with other type/values, such as when checking the "truthiness" of pointers or when dealing with, e.g., C++ code, but Objective-C generally uses BOOL and YES/NO for boolean data types and values.)
Use ObjC's BOOL for ObjC code. Use the others where they're "native". Anything else will tend to stand out as "unusual" and worthy of extra care when reading the code, which is a property that should be reserved for code that actually is unusual and worthy of extra care.
It basically makes no difference. In practice, I bet they're even declared much the same way: typedef signed char BOOL;, typedef unsigned char Boolean;, etc.
So they're practically compatible and equivalent; the best approach is, however, to respect the type methods expect and return, so write
[object someObjectiveCMethod:YES];
instead of
[object someObjectiveCMethod:TRUE];
and
CFWhateverSetBooleanProperty(true);
instead of
CFWhateverSetBooleanProperty(YES);
You could use any of them, if it weren't for the fact that whoever reads the code may not know the type. So by convention, use BOOL, which can have a value of YES (1) or NO (0).
BOOL with YES and NO.
However, BOOL is a signed char, so YES equals 1 and NO equals 0.
In objc.h they are defined as:
typedef signed char BOOL;
And
#define YES ((BOOL)1)
#define NO ((BOOL)0)
I have been told that BOOL in Objective-C is a typedef of unsigned char, and the YES & NO keywords are encoded chars. This is not the first time I have heard it. I have read that this is because Apple used BOOL before the C standard provided a _Bool type; am I wrong? Is there any advantage to that fact? Are we wasting bits of memory? Does this provide a way to return valuable data in a function? Would it be correct to use it as a way to return different values when some unexpected behavior occurs?
BOOL myFunction(int argument)
{
BOOL result = YES; //The function generates the result
if (someError == YES) {
return 5;
}
return result;
}
Are we wasting bits of memory?
No, because you can't get a variable smaller than a char: it's always a single byte. You can pack multiple bits representing boolean flags in a single word, but you have to do it manually - with bit shifts, using bit fields, and so on.
Does this provide a way to return valuable data in a function?
Not really: what you did is a hack, although 5 would definitely make its way through the system to the caller, and would be interpreted as YES in a "plain" if statement, e.g.
if (myFunction(123)) {
...
}
However, it would fail miserably if used like this:
if (myFunction(123) == YES) { // 5 != YES
...
}
Would it be correct to use it as a way to return different values when some unexpected behavior occurs?
It would always be incorrect from the readability point of view; as far as "doing what you meant it to do", your mileage may vary, depending on the way in which your function is used.
There is a slight advantage: On many platforms (including iOS, IIRC), sizeof(_Bool) == sizeof(int), so using char can be slightly more compact.
Except BOOL is actually signed char, not char, this is so #encode(BOOL) evaluates to the same thing on all platforms. This complicates bitfields slightly, since BOOL foo:1; appears to define a 1-bit signed integer (IIRC the behaviour of which is undefined) — clearly unsigned char would be a better choice, but it was probably too late.
_Bool also ought to optimize better since the compiler can make assumptions about the bit-pattern used, e.g. replacing a&&b with a&b (provided b is side effect-free). Some architectures also represent "true" as all bits set, which is useful for masking (SSE comparison instructions come to mind).
"BOOL in Objective-C" is not necessarily an unsigned char; it's whatever the Objective-C library defines it to be, which is signed char or bool, depending on your compiler settings (32-bit or 64-bit). The two behave differently. Try this code with a 32-bit compiler and a 64-bit compiler:
BOOL b = 256;
if (b) NSLog (@"b is true"); else NSLog (@"b is false");