On 64-bit platforms, Int is the same size as Int64, and on 32-bit platforms, Int is the same size as Int32.
Can this behavior be changed, i.e. can Int be forced to have the size of Int32 on 64-bit platforms?
The idea behind Int is that it reflects the platform's native word size (32-bit on a 32-bit system and 64-bit on a 64-bit system).
If you really want a 32-bit integer no matter what platform you're on, then use Int32.
If you really want a 64-bit integer no matter what platform you're on, then use Int64.
To solve your problem, be explicit and just use Int32 instead of Int.
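For illustration, a small sketch (plain Swift, nothing project-specific) showing what being explicit looks like and how the widths compare:

// A small sketch: Int follows the platform word size, so be explicit
// with Int32/Int64 when you need a fixed width.
let platformInt: Int = 42

// Sizes in bytes: prints "8 4 8" on a 64-bit platform, "4 4 8" on a 32-bit one.
print(MemoryLayout<Int>.size, MemoryLayout<Int32>.size, MemoryLayout<Int64>.size)

// Conversions between widths are always explicit; Int32(exactly:)
// returns nil if the value doesn't fit in 32 bits.
print(Int32(exactly: platformInt) as Any)   // Optional(42)
print(Int64(platformInt))                   // 42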
There are multiple data types available in Swift for defining an integer:
- Int, Int8, Int16, Int32, Int64
- UInt, UInt8, UInt16, UInt32, UInt64
You can use any of the above as per your requirement independent of whether you're using a 32-bit or 64-bit platform.
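As a quick illustration (just a sketch, runnable in a playground), you can query the bounds of each fixed-width type directly; only Int's bounds depend on the platform:

// Fixed-width types have the same range on every architecture.
print(Int8.min, Int8.max)        // -128 127
print(Int16.min, Int16.max)      // -32768 32767
print(Int32.min, Int32.max)      // -2147483648 2147483647
print(Int64.min, Int64.max)      // -9223372036854775808 9223372036854775807
print(UInt8.max, UInt16.max, UInt32.max, UInt64.max)

// Int's range matches Int64 on a 64-bit platform and Int32 on a 32-bit one.
print(Int.min, Int.max)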
Related
I'm trying to figure out whether an Int32 attribute in Core Data will be represented as 32 bits or 64 bits on a 64-bit device, but I could not find a valid answer on SO.
I went to the Swift documentation, but it only provides this information:
Int
In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Could someone tell me whether I would save memory if I use Int32 instead of Int64 in Core Data on a 64-bit device?
If you’re using a SQLite persistent store (which is usually how people use Core Data), it won’t make any difference to file size. SQLite uses dynamic typing and doesn’t restrict values based on column type. SQLite’s documentation says that for integers,
The value is a signed integer, stored in 0, 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
So whatever integer value you store uses as many bytes as it needs, and it doesn’t know or care if your code uses 32 bits or 64 or some other value.
In memory it’s different: a 32-bit integer takes 4 bytes, a 64-bit integer takes 8 bytes, and so on. Use whatever integer type is large enough to hold all of the values you need to store in a variable.
At the same time though, do you have so much data that this kind of optimization will have any effect? Unless you have very large data sets, using the wrong integer type is unlikely to make a difference to your app.
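To put rough numbers on the in-memory difference, a small sketch (the element counts are arbitrary, just for illustration):

// A million Int32 values vs a million Int64 values: the element storage
// differs by a factor of two, which only matters at this kind of scale.
let narrow = [Int32](repeating: 0, count: 1_000_000)
let wide = [Int64](repeating: 0, count: 1_000_000)
print(MemoryLayout<Int32>.stride * narrow.count)   // 4000000 bytes (~4 MB)
print(MemoryLayout<Int64>.stride * wide.count)     // 8000000 bytes (~8 MB)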
Integer 16, Integer 32, and Integer 64 in Core Data are equivalent to Swift's Int16, Int32, and Int64, so you will save memory and disk space by choosing an appropriately sized type.
Source: A. Tsadok, Unleash Core Data (Apress Media, LLC, part of Springer Nature, 2022)
I am writing an app in Swift that saves its data through Apple's Core Data. In my code, all integers are just declared as Int, because that is more flexible and the compiler adapts those Ints to the device the code runs on.
However, when I want to save these Ints using Core Data, I have to choose either 32- or 64-bit integers. I would want my app to be compatible with the iPhone 5 through 6s if possible and am therefore hesitant to go for 32-bit (I read Apple moved to 64-bit because of better performance).
Are there any workarounds to keep this part of the code flexible? If I select 32-bit, what will happen if the code is run on a 64-bit device?
Thanks in advance.
The default Int:
on 32-bit devices = Int32
on 64-bit devices = Int64 (and yes, it's an Int64; I just tested it on my iPhone 6s)
But both Int32 and Int64 will work on a 32-bit device (Int64 just takes longer to calculate on 32-bit devices).
I recommend using Int32 if your number is smaller than or equal to ±2,147,483,647.
Formula: 2^(bits - 1) - 1
Or even Int16 if it is smaller than or equal to ±32,767.
(Actually the negative bound is 1 greater in magnitude than the positive one: the range of Int32 is -2,147,483,648 ... 2,147,483,647.)
If you use Int32 in Core Data, just make sure that you don't exceed this range, and convert the Int with Int32(exactly:) when saving (the failable initializer, because the value could theoretically be too large for Int32).
When loading, converting an Int32 back to an Int always succeeds (use Int(value)).
If you use Int64 in Core Data, convert the Int with Int64(value) when saving. (This always succeeds, even on 32-bit devices, though it might be slightly slower there; unless you save/load very often you shouldn't have any problems.)
But be careful when loading: the conversion from Int64 to Int might fail, because an Int64 could theoretically hold a greater value than an Int on a 32-bit device can store (use Int(exactly:) to prevent possible crashes).
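A sketch of the save/load conversions described above, using failable initializers rather than casts. The entity and the attribute names value32/value64 are made up for illustration and assume matching Integer 32 and Integer 64 attributes in the model:

import CoreData

// Illustrative only: "entity" is any NSManagedObject whose model defines
// "value32" as Integer 32 and "value64" as Integer 64 (hypothetical names).
func save(_ number: Int, into entity: NSManagedObject) {
    // Int -> Int32 can fail if the value doesn't fit in 32 bits,
    // so use the failable Int32(exactly:) instead of a cast.
    if let narrowed = Int32(exactly: number) {
        entity.setValue(narrowed, forKey: "value32")
    }
    // Int -> Int64 always succeeds, even on 32-bit devices.
    entity.setValue(Int64(number), forKey: "value64")
}

func load(from entity: NSManagedObject) -> Int? {
    // Int32 -> Int always succeeds.
    if let stored = entity.value(forKey: "value32") as? Int32 {
        return Int(stored)
    }
    // Int64 -> Int can fail on a 32-bit device, so convert with
    // Int(exactly:) rather than trapping.
    if let stored = entity.value(forKey: "value64") as? Int64 {
        return Int(exactly: stored)
    }
    return nil
}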
I'm parsing a JSON object with an id (a Java long) coming from a Java backend.
The id is declared as CLong in my app. On devices newer than the iPhone 5 it works, but on the iPhone 5 the id has an invalid value.
CLong is a typealias for Int in the iOS SDK:
/// The C 'long' type.
public typealias CLong = Int
From the Swift docs:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
If you need to keep your integer size consistent across multiple architectures, use Int32. If your value is larger than 32 bits, you should look into handling overflows. Also consider sending a different data type rather than a long from your backend, such as a String or NSNumber.
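If the backend id really is a 64-bit Java long, one option (a sketch only; the Item struct and its fields are hypothetical) is to decode it explicitly as Int64 so it cannot be truncated on a 32-bit device:

import Foundation

// Hypothetical payload: "id" is assumed to be a 64-bit value on the backend.
struct Item: Decodable {
    let id: Int64        // fixed 64-bit width on every architecture
    let name: String
}

let json = #"{"id": 9007199254740993, "name": "example"}"#.data(using: .utf8)!
if let item = try? JSONDecoder().decode(Item.self, from: json) {
    print(item.id)       // full value is preserved, even on 32-bit devices
}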
When should I be using NSInteger vs. int when developing for iOS? I see in the Apple sample code they use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
- (NSInteger)someFunc;
...
- (void)someFuncWithInt:(NSInteger)value;
...
But within a function they're just using int to track a value
for (int i = 0; i < something; i++)
...
int something = 0;
something += somethingElseThatsAnInt;
...
I've read (been told) that NSInteger is a safe way to reference an integer in either a 64-bit or 32-bit environment, so why use int at all?
You usually want to use NSInteger when you don't know what kind of processor architecture your code might run on and you therefore want the largest natural integer type, which on 32-bit systems is just an int, while on a 64-bit system it's a long.
I'd stick with using NSInteger instead of int/long unless you specifically require them.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, and they are defined like this:
#if __LP64__ || TARGET_OS_EMBEDDED || TARGET_OS_IPHONE || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
With regard to the correct format specifier to use for each of these types, see the String Programming Guide's section on Platform Dependencies.
Why use int at all?
Apple uses int because, for a loop control variable (which is only used to control the loop iterations), the int datatype is fine, both in size and in the values it can hold for your loop. No need for a platform-dependent datatype here. For a loop control variable, even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or a function argument because in that case the datatype size matters, since with a function you are communicating/passing data to other programs or other pieces of code; see the answer to "When should I be using NSInteger vs. int?" in your question itself...
they [Apple] use NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
OS X is "LP64". This means that:
int is always 32 bits.
long long is always 64 bits.
NSInteger and long are always pointer-sized. That means they're 32 bits on 32-bit systems and 64 bits on 64-bit systems.
The reason NSInteger exists is because many legacy APIs incorrectly used int instead of long to hold pointer-sized variables, which meant that the APIs had to change from int to long in their 64-bit versions. In other words, an API would have different function signatures depending on whether you're compiling for 32-bit or 64-bit architectures. NSInteger intends to mask this problem with these legacy APIs.
In your new code, use int if you need a 32-bit variable, long long if you need a 64-bit integer, and long or NSInteger if you need a pointer-sized variable.
If you dig into NSInteger's implementation:
#if __LP64__
typedef long NSInteger;
#else
typedef int NSInteger;
#endif
Simply, the NSInteger typedef does a step for you: if the architecture is 32-bit, it uses int; if it is 64-bit, it uses long. Using NSInteger, you don't need to worry about the architecture the program is running on.
You should use NSInteger when you need to compare values against constants such as NSNotFound or NSIntegerMax, as these values differ on 32-bit and 64-bit systems. So for index values, counts, and the like, use NSInteger or NSUInteger.
It doesn't hurt to use NSInteger in most circumstances, except that it takes up twice as much memory on 64-bit systems. The memory impact is very small, but if you have a huge number of integers in memory at any one time, it might make a difference to use ints.
If you DO use NSInteger or NSUInteger, you will want to cast them to long or unsigned long when using format strings, as newer Xcode versions warn if you try to log an NSInteger as if it had a known length. You should similarly be careful when assigning them to variables or arguments that are typed as int, since you may lose some precision in the process.
On the whole, if you're not expecting to have hundreds of thousands of them in memory at once, it's easier to use NSInteger than to constantly worry about the difference between the two.
On iOS, it currently does not matter whether you use int or NSInteger. It will matter more if/when iOS moves to 64-bit.
Simply put, NSIntegers are ints in 32-bit code (and thus 32 bits wide) and longs in 64-bit code (longs are 64 bits wide in 64-bit code, but 32 bits in 32-bit code). The most likely reason for using NSInteger instead of long is to not break existing 32-bit code (which uses ints).
CGFloat has the same issue: on 32-bit (at least on OS X), it's float; on 64-bit, it's double.
Update: With the introduction of the iPhone 5s, iPad Air, iPad Mini with Retina, and iOS 7, you can now build 64-bit code on iOS.
Update 2: Also, using NSIntegers helps with Swift code interoperability.
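On the Swift interoperability point: in Swift, NSInteger is exposed as a typealias of Int, so NSInteger-based Objective-C APIs come across as plain Int with no conversion needed. A small sketch:

import Foundation

// NSInteger is a typealias for Int in Swift, so values flow both ways
// without any explicit conversion.
let objcStyleValue: NSInteger = 42
let swiftValue: Int = objcStyleValue        // no cast required
print(swiftValue)

// Objective-C APIs declared with NSInteger/NSUInteger are imported using
// Int; for example, NSArray's count comes across as Int.
let array: NSArray = [1, 2, 3]
let count: Int = array.count
print(count)                                // 3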
As of now (September 2014), I would recommend using NSInteger/CGFloat when interacting with iOS APIs, etc., if you are also building your app for arm64.
This is because you will likely get unexpected results when you use the float, long, and int types.
EXAMPLE: FLOAT/DOUBLE vs CGFLOAT
As an example we take the UITableView delegate method tableView:heightForRowAtIndexPath:.
In a 32-bit only application it will work fine if it is written like this:
-(float)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
float is a 32-bit value and the 44 you are returning is a 32-bit value.
However, if we compile/run this same piece of code in a 64-bit arm64 architecture the 44 will be a 64-bit value. Returning a 64-bit value when a 32-bit value is expected will give an unexpected row height.
You can solve this issue by using the CGFloat type
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 44;
}
This type represents a 32-bit float in a 32-bit environment and a 64-bit double in a 64-bit environment. Therefore when using this type the method will always receive the expected type regardless of compile/runtime environment.
The same is true for methods that expect integers.
Such methods will expect a 32-bit int value in a 32-bit environment and a 64-bit long in a 64-bit environment. You can solve this case by using the type NSInteger, which serves as an int or a long depending on the compile/runtime environment.
int = 4 bytes (fixed size, irrespective of the architecture)
NSInteger = depends on the architecture (e.g., on a 4-byte architecture, NSInteger is 4 bytes)
I'd like to know how many bytes a
32-bit integer
ASCII character (char in C++?)
Pointer (4 bytes?)
Short
Float
take up in Delphi, and whether it is generally the same in most languages.
Also, do the data types mentioned above have a constant size? I mean, are the integers 0, 4, 123 and 32231 all of the same size?
A 32-bit integer is ALWAYS four bytes, because 1 byte = 8 bits.
An Integer is a signed 32-bit integer, and a Cardinal is an unsigned 32-bit integer. These thus always occupy four bytes, irrespective of the value they represent. (In fact, it is an extremely important fact that simple types have fixed widths -- low-level programming really depends on this! It is even a cornerstone of how computers work.)
Smaller integer types are Smallint (16-bit signed), Word (16-bit unsigned) and Byte (8-bit unsigned). Larger integer types are Int64 (64-bit signed) and UInt64 (64-bit unsigned).
Char was a 1-byte AnsiChar prior to Delphi 2009; now it is a 2-byte WideChar.
Pointer is always 4 bytes, because Delphi currently creates 32-bit applications only. When it supports 64-bit applications, Pointer will become 8 bytes.
There are three common floating-point types in Delphi. These are Single, Double (=Real), and Extended. These occupy 4, 8, and 10 bytes, respectively.
To investigate the size of a given type, e.g. Short, simply try
ShowMessage(IntToStr(SizeOf(Short)))
Reference:
http://docwiki.embarcadero.com/RADStudio/en/Simple_Types
In C/C++, sizeof(char) = 1 byte, as required by the C/C++ standards.
In Delphi, SizeOf(Char) is version-dependent (1 byte for non-Unicode versions, 2 bytes for Unicode versions), so Char in Delphi is more like TCHAR in C++.
It may be different for different machines, so you can use the following code to determine the size of an integer, for example:
cout << "Integer size:" << sizeof(int);
I don't want to confuse you too much, but there's also an alignment issue; if you define a record like this, it will depend on the compiler how its layout turns out:
type
  Test = record
    A: Byte;
    B: Pointer;
  end;
If compiled with {$A1}, SizeOf(Test) will end up as 5, while compiling it with {$A4} would give you 8 (at least, on current 32-bit Delphi versions, that is!).
There are all sorts of little gotchas here, so I'd advise ignoring this for now and reading an article like this when the need arises ;-)