How to use a long value on 32-bit devices (iPhone 5 and earlier) in Swift - ios

I'm parsing a JSON object with an Id (long) coming from a Java backend.
The id is declared as CLong in my app. On the iPhone 5s and newer (64-bit) it works, but on the iPhone 5 the id has an invalid value.

CLong is a typedef for Int in the iOS SDK:
/// The C 'long' type.
public typealias CLong = Int
From the Swift docs:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
If you need to keep your integer size consistent across multiple architectures, use Int32. If your value is larger than 32 bits, you should look into handling overflow. Also consider sending a different data type than a long from your backend, such as a String or an NSNumber.
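A minimal sketch of reading such an id without losing width (the dictionary and the "id" key are hypothetical; this assumes the JSON was already parsed into a [String: Any]):
import Foundation

// Hypothetical parsed JSON payload; the "id" key is an assumption.
let json: [String: Any] = ["id": NSNumber(value: 9_007_199_254_740_993 as Int64)]

// Read the id as an explicit Int64 so it keeps its full 64-bit width,
// even on 32-bit devices where Int (and CLong) are only 32 bits wide.
if let idNumber = json["id"] as? NSNumber {
    let id: Int64 = idNumber.int64Value
    print(id)
}

// If the backend sends the id as a String instead, parse it into Int64 directly.
if let idString = json["id"] as? String, let id = Int64(idString) {
    print(id)
}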

Related

Does an Int in Swift (Core Data) automatically adapt to the device's word size?

I'm trying to figure out whether an Int32 in Core Data will be represented as 32 bits or 64 bits on a 64-bit device, but I could not find a valid answer on SO.
I went to the Swift documentation, but it only gives this information:
Int
In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Could someone point out whether I would save memory by using Int32 instead of Int64 in Core Data on a 64-bit device?
If you’re using a SQLite persistent store (which is usually how people use Core Data), it won’t make any difference to file size. SQLite uses dynamic typing and doesn’t restrict values based on column type. SQLite’s documentation says that for integers,
The value is a signed integer, stored in 0, 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
So whatever integer value you store uses as many bytes as it needs, and it doesn’t know or care if your code uses 32 bits or 64 or some other value.
In memory it’s different: a 32-bit integer takes 4 bytes, a 64-bit integer takes 8 bytes, and so on. Use whatever integer type is large enough to hold all of the values you need to store in a variable.
At the same time though, do you have so much data that this kind of optimization will have any effect? Unless you have very large data sets, using the wrong integer type is unlikely to make a difference to your app.
Integer 16, 32, and 64 in Core Data are equivalent to Swift Int16, Int32, and Int64. So you will save memory and disk space with the proper size.
Source: A. Tsadok, Unleash Core Data (Apress Media, LLC, part of Springer Nature, 2022)
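As an illustrative sketch (the entity and attribute names are hypothetical), this is how those widths surface in a Swift NSManagedObject subclass:
import CoreData

// Hypothetical entity "Item": "quantity" is modelled as Integer 32,
// "remoteID" as Integer 64 in the data model.
final class Item: NSManagedObject {
    @NSManaged var quantity: Int32
    @NSManaged var remoteID: Int64
}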

Using generic Int in code - use a 32- or 64-bit Int in Core Data on iOS?

I am writing an app in Swift that saves its data through Apple's Core Data. In my code, all integers are just declared as "Int", because that is more flexible and the compiler adapts those Ints to the device the code runs on.
However, when I want to save these "Int"s using Core Data, I have to choose either 32- or 64-bit integers. I would want my app to be compatible with the iPhone 5 through 6s if possible and am therefore hesitant to go for 32-bit (I read Apple moved to 64-bit on newer iPhones because of better performance).
Any workarounds to keep this part of the code flexible? If I select 32-bit, what will happen if the code is run on a 64-bit device?
Thanks in advance.
The default Int is:
Int32 on 32-bit devices
Int64 on 64-bit devices (and yes, it is an Int64 - I just tested it on my iPhone 6s)
Both Int32 and Int64 will work on a 32-bit device, but Int64 arithmetic takes longer on 32-bit devices.
I recommend using Int32 if your number is always smaller than or equal to ±2,147,483,647.
Formula: 2^(bits - 1) - 1
Or even Int16 if it is smaller than or equal to ±32,767.
(Strictly speaking, the negative bound is one greater in magnitude than the positive one: the range of Int32 is -2,147,483,648 ... 2,147,483,647.)
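For reference, Swift exposes these bounds directly (a quick sketch):
print(Int16.min, Int16.max)   // -32768 32767
print(Int32.min, Int32.max)   // -2147483648 2147483647
print(Int64.min, Int64.max)   // -9223372036854775808 9223372036854775807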
If you use Int32 in Core Data, just make sure you don't exceed this range and convert the Int to Int32 when saving (use a failable conversion, because the value can theoretically be larger).
When loading, converting an Int32 to Int always succeeds.
If you use Int64 in Core Data, convert the Int to Int64 when saving (this always succeeds, even on 32-bit devices, though it may be slightly slower; if you don't save/load it too often you shouldn't have any problems).
But be careful when loading: the conversion from Int64 to Int might fail, because an Int64 can theoretically hold a value greater than what an Int on a 32-bit device can store (use a failable conversion to prevent possible crashes).
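A small sketch of those conversions using Swift's numeric initializers (the Core Data attribute itself is omitted; the values are illustrative):
let value: Int = 1_234_567

// Saving into an Integer 32 attribute: the conversion can fail if the value
// is out of Int32 range, so use the failable initializer.
if let narrowed = Int32(exactly: value) {
    print(narrowed)              // e.g. assign it to an Int32-typed attribute
}

// Loading back: widening Int32 -> Int always succeeds.
let loaded32: Int32 = 42
let widened = Int(loaded32)

// Saving into an Integer 64 attribute: widening Int -> Int64 always succeeds.
let stored64 = Int64(value)

// Loading an Int64 into Int can overflow on a 32-bit device, so check first.
let loaded64: Int64 = 9_000_000_000
if let safe = Int(exactly: loaded64) {
    print(widened, stored64, safe)
} else {
    print("value does not fit in Int on this platform")
}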

Possible to create a field type that is the size of a single bit?

I would like to create a field type that is only the size of a single bit in AWS DynamoDB.
I created a field of type 'Binary' - and its value is '0000' (4 bits).
However, when I read this value and take sizeof of it, it shows me that the size taken up is actually 8 (bytes), which is huge.
The code below shows the return block I use to get the 'Binary' value of the AWSDynamoDBAttributeValue
^(AWSDynamoDBAttributeValue *value)
{
    NSLog(@"i'm here %@, %lu", value.B, sizeof(value.B));
}];
The sizeof operator returns the size of an object/type in bytes, so the smallest value it can report is 1, i.e. 8 bits. I don't know for certain about C, but C++ guarantees that a char, which is the smallest possible type, contains at least 8 bits. Chances are that a basic guarantee like that holds for both languages.
If you're looking to optimise the usage of bits in a variable then you might want to look into bitmasking.
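For example, a small sketch of packing several boolean flags into a single byte with bitmasks (the flag names are illustrative):
// Each flag occupies one bit of a single UInt8.
struct Flags {
    static let isActive: UInt8  = 1 << 0
    static let isAdmin: UInt8   = 1 << 1
    static let isDeleted: UInt8 = 1 << 2
}

var packed: UInt8 = 0
packed |= Flags.isActive                     // set a flag
packed |= Flags.isDeleted
let isAdmin = (packed & Flags.isAdmin) != 0  // test a flag
print(packed, isAdmin)                       // prints: 5 false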
.B is a Binary data type, which is a blob. In Objective-C it is mapped to NSData, and sizeof is printing out the size of the pointer. Amazon DynamoDB recently introduced a BOOL data type; however, it is not yet supported in the AWS Mobile SDK for iOS.

Benefits of using NSInteger over int?

I am trying to comprehend how development is affected when developing for both 32-bit and 64-bit architectures. From what I have researched thus far, I understand an int is always 4 bytes regardless of the architecture of the device running the app. But an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended but I'm not sure what the reasoning is for that.
My question is, if you know the possible value you're using is never going to be large (maybe you're using it to index into an array of 200 items or store the count of objects in an array), why define it as an NSInteger? That's just going to take up 8 bytes when you won't use it all. Is it better to define it as an int in those cases? If so, in what case would you want to use an NSInteger (as opposed to int or long etc)? Obviously if you needed to utilize larger numbers, you could with the 64-bit architecture. But if you needed it to also work on 32-bit devices, would you not use long long because it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
Also I cannot think of a method which takes in or returns a primitive type - int, and instead utilizes NSInteger, and am wondering if there is more to it than just the size of the values. For example, (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why this is the case. Assuming it's possible to have a table with 2,147,483,647 rows, what would occur on a 32-bit device when you add one more - does it wrap around to a -2,147,483,647? And on a 64-bit device it would be 2,147,483,648. (Why return a signed value? I'd think it should be unsigned since you can't have a negative number of rows.)
Ultimately, I'd like to obtain a better understanding of actual use of these number data types, perhaps some code examples would be great!
I personally think that 64-bit is actually the reason NSInteger and NSUInteger exist; before 10.5, they did not exist. The two are simply defined as long in 64-bit, and as int in 32-bit.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, and they are defined like this:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
I suggest you thoroughly read this link.
CocoaDev has some more info.
For the proper format specifier to use for each of these types, see the String Programming Guide's section on Platform Dependencies.
I remember attending an iOS developer conference where this came up: you have to pay attention to the data type in iOS 7. For example, if you use NSInteger on a 64-bit device and save it to iCloud, then sync it to an older device (say, a 2nd generation iPad), your app will not behave the same, because that device treats NSInteger as 4 bytes rather than 8, and your calculations will be wrong.
But so far I use NSInteger, because mostly my apps don't use iCloud or don't sync, and it avoids compiler warnings.
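A sketch of the usual workaround (the key name is hypothetical): store an explicitly 64-bit value in the iCloud key-value store so 32-bit and 64-bit devices agree on its width:
import Foundation

let score: Int = 123_456
let store = NSUbiquitousKeyValueStore.default

// Write an explicit Int64 rather than a platform-dependent Int/NSInteger.
store.set(Int64(score), forKey: "score")

// When reading on a 32-bit device, narrow back to Int carefully.
let stored: Int64 = store.longLong(forKey: "score")
let restored = Int(exactly: stored) ?? 0
print(restored)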
Apple uses int for a loop control variable (which is only used to control the loop iterations) because the int datatype is fine for that, both in size and in the values it can hold for your loop. No need for a platform-dependent datatype here. For a loop control variable even a 16-bit int would do most of the time.
Apple uses NSInteger for a function return value or a function argument because in this case the datatype [size] matters: what you are doing with a function is communicating/passing data to other programs or other pieces of code.
Apple uses NSInteger (or NSUInteger) when passing a value as an
argument to a function or returning a value from a function.
The only thing I would use NSInteger for is passing values to and from an API that specifies it. Other than that it has no advantage over an int or a long. At least with an int or a long you know what format specifiers to use in a printf or similar statement.
To continue Irfan's answer:
sizeof(NSInteger)
equals the processor's word size. It is much simpler and faster for the processor to operate on whole words.
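In Swift terms (a quick sketch; NSInteger bridges to Int), the word size is visible directly:
import Foundation

// Prints 8 on a 64-bit device and 4 on a 32-bit device.
print(MemoryLayout<Int>.size)
print(MemoryLayout<NSInteger>.size)   // same value: NSInteger is Int in Swift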

When to use NSInteger vs. int

When should I be using NSInteger vs. int when developing for iOS? I see in the Apple sample code they use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
- (NSInteger)someFunc;...
- (void)someFuncWithInt:(NSInteger)value;...
But within a function they're just using int to track a value
for (int i = 0; i < something; i++)
...
int something;
something += somethingElseThatsAnInt;
...
I've read (been told) that NSInteger is a safe way to reference an integer in either a 64-bit or 32-bit environment, so why use int at all?
You usually want to use NSInteger when you don't know what kind of processor architecture your code might run on, so you may for some reason want the largest possible integer type, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
I'd stick with using NSInteger instead of int/long unless you specifically require them.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, and they are defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
With regard to the correct format specifier you should use for each of these types, see the String Programming Guide's section on Platform Dependencies
Why use int at all?
Apple uses int for a loop control variable (which is only used to control the loop iterations) because the int datatype is fine for that, both in size and in the values it can hold for your loop. No need for a platform-dependent datatype here. For a loop control variable even a 16-bit int would do most of the time.
Apple uses NSInteger for a function return value or a function argument because in this case the datatype [size] matters: what you are doing with a function is communicating/passing data to other programs or other pieces of code; see the answer to "When should I be using NSInteger vs. int?" in your question itself...
they [Apple] use NSInteger (or NSUInteger) when passing a value as an
argument to a function or returning a value from a function.
OS X is "LP64". This means that:
int is always 32-bits.
long long is always 64-bits.
NSInteger and long are always pointer-sized. That means they're 32-bits on 32-bit systems, and 64 bits on 64-bit systems.
The reason NSInteger exists is because many legacy APIs incorrectly used int instead of long to hold pointer-sized variables, which meant that the APIs had to change from int to long in their 64-bit versions. In other words, an API would have different function signatures depending on whether you're compiling for 32-bit or 64-bit architectures. NSInteger intends to mask this problem with these legacy APIs.
In your new code, use int if you need a 32-bit variable, long long if you need a 64-bit integer, and long or NSInteger if you need a pointer-sized variable.
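A quick Swift sketch of those LP64 rules, using the C-type aliases the SDK exposes (CInt, CLong, CLongLong):
print(MemoryLayout<CInt>.size)       // always 4
print(MemoryLayout<CLongLong>.size)  // always 8
print(MemoryLayout<CLong>.size)      // 4 on 32-bit, 8 on 64-bit (pointer-sized)
print(MemoryLayout<Int>.size)        // same as CLong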
If you dig into NSInteger's implementation:
#if __LP64__
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
Simply, the NSInteger typedef does this step for you: if the architecture is 32-bit, it uses int; if it is 64-bit, it uses long. Using NSInteger, you don't need to worry about the architecture the program is running on.
You should use NSInteger if you need to compare values against constants such as NSNotFound or NSIntegerMax, as these values will differ on 32-bit and 64-bit systems; so for index values, counts and the like, use NSInteger or NSUInteger.
It doesn't hurt to use NSInteger in most circumstances, except that on 64-bit it takes up twice as much memory. The memory impact is very small, but if you have a huge number of values in memory at any one time, it might make a difference to use int.
If you DO use NSInteger or NSUInteger, you will want to cast them to long or unsigned long when using format strings, as a newer Xcode feature produces a warning if you try to log an NSInteger as if it had a known length. You should similarly be careful when assigning them to variables or arguments that are typed as int, since you may lose some precision in the process.
On the whole, if you're not expecting to have hundreds of thousands of them in memory at once, it's easier to use NSInteger than constantly worry about the difference between the two.
On iOS, it currently does not matter if you use int or NSInteger. It will matter more if/when iOS moves to 64-bits.
Simply put, NSInteger is int in 32-bit code (and thus 32 bits wide) and long in 64-bit code (long is 64 bits wide in 64-bit code, but 32 bits wide in 32-bit code). The most likely reason for using NSInteger instead of long is to avoid breaking existing 32-bit code (which uses int).
CGFloat has the same issue: on 32-bit (at least on OS X), it's float; on 64-bit, it's double.
Update: With the introduction of the iPhone 5s, iPad Air, iPad Mini with Retina, and iOS 7, you can now build 64-bit code on iOS.
Update 2: Also, using NSIntegers helps with Swift code interoperability.
As of now (September 2014), I would recommend using NSInteger/CGFloat when interacting with iOS APIs etc. if you are also building your app for arm64.
This is because you will likely get unexpected results when you use the float, long and int types.
EXAMPLE: FLOAT/DOUBLE vs CGFLOAT
As an example we take the UITableView delegate method tableView:heightForRowAtIndexPath:.
In a 32-bit only application it will work fine if it is written like this:
-(float)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
float is a 32-bit value and the 44 you are returning is a 32-bit value.
However, if we compile/run this same piece of code on the 64-bit arm64 architecture, the caller expects a 64-bit CGFloat (a double) while the method returns a 32-bit float. That mismatch will give an unexpected row height.
You can solve this issue by using the CGFloat type
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
This type represents a 32-bit float in a 32-bit environment and a 64-bit double in a 64-bit environment. Therefore when using this type the method will always receive the expected type regardless of compile/runtime environment.
The same is true for methods that expect integers.
Such methods expect a 32-bit int value in a 32-bit environment and a 64-bit long in a 64-bit environment. You can solve this case by using the type NSInteger, which serves as an int or a long based on the compile/runtime environment.
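For comparison, a Swift sketch of the same idea (not part of the original answer): Swift's Int bridges to NSInteger and CGFloat adapts to Float/Double, so table view methods written against these types get the platform-appropriate widths automatically:
import UIKit

final class TableController: NSObject, UITableViewDataSource, UITableViewDelegate {
    // Int bridges to NSInteger: 32 bits on 32-bit devices, 64 bits on arm64.
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 10
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        return UITableViewCell(style: .default, reuseIdentifier: nil)
    }

    // CGFloat is Float on 32-bit and Double on 64-bit, matching what UIKit expects.
    func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
        return 44
    }
}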
int = 4 bytes (fixed, irrespective of the architecture)
NSInteger = depends upon the architecture (e.g. on a 4-byte architecture, NSInteger is 4 bytes)
