Core Data, NSNumber, Integer 32 and Integer 64 - ios

In Core Data, I have many attributes declared as Integer 64, and then accessed through NSNumber properties (this is by default).
Does it matter if I store and access these values by:
NSNumber *mySetValue = [NSNumber numberWithInt:someIntValue];
[myObject setMyNumberProperty:mySetValue];
int myRetrievedValue = [myObject.myNumberProperty intValue];
or by
NSNumber *mySetValue = [NSNumber numberWithInteger:someIntegerValue];
[myObject setMyNumberProperty:mySetValue];
NSInteger myRetrievedValue = [myObject.myNumberProperty integerValue];
?
There are two cases for which I would like to know the answer: 1) the value is used for calculations (it holds a quantity or a value that will be converted to currency), and 2) the value is just a type code which will basically only be compared against itself and will not be used in any calculations. Is it okay to use numberWithInt and intValue in one case but not the other, in both cases, or must numberWithInteger and integerValue be used in both cases?
Also, does it matter if I have previously stored all of the values as [NSNumber numberWithInt:] - can I simply change the way I store/retrieve the value now, or do I need to maintain consistency so as not to create a problem with current user data?
I am particularly interested in this working in both a 32 bit and 64 bit iOS app.
Also - does it make a difference to your answer if the Core Data value is Integer 32, Integer 16, Integer 64, etc?

You should be using NSInteger whenever you can. The reason is that it will be platform independent. On 32-bit architecture, an NSInteger will be an int, on 64-bit a long.
Therefore, you are OK having used the int-based methods before; int is the smaller of the two types.
What you have already stored in your Core Data database is also fine for the same reason. The fact that you declared the attribute as Integer 64 ensures that long values will also be stored correctly in the future.
The use as currency is also OK, with some caveats. If you are mainly interested in cents, not fractions of cents, you can simply keep track of the cents as integers. However, if you want to do more complex calculations that could involve fractions of cents, such as some accounting methods or currency conversion, and store those results, you would need something like a float or (better) a double.
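If you go the integer-cents route, a minimal sketch might look like this (the helper name and the priceInCents attribute are hypothetical, not from the original question):
#include <math.h>

// Hypothetical helper: convert a dollar amount to whole cents before storing
// it in an Integer 64 attribute, so no binary rounding error is carried along.
static int64_t CentsFromDollars(double dollars) {
    return (int64_t)llround(dollars * 100.0); // round to the nearest cent
}

// Usage, assuming myObject has an NSNumber-typed property named priceInCents:
// myObject.priceInCents = @(CentsFromDollars(19.99)); // stores 1999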

Related

Core Data Attribute Type Differences

I'm having a difficult time understanding the size differences of the data types.
I have an attribute called displayOrder with type of Integer 16. I use this attribute to maintain a display order of tableViewCells, added by a user in a tableView. I set the value with plain numbers, "1, 2, 3", and it's working fine.
But there are also a lot of other options like Integer 32, Integer 64, Decimal, Float, and Double. I did my own research and found that a Float can have a decimal point, and a Double is double the size of a Float (I'm not sure of the difference between Decimal and Float).
My question is: if the differences between these are just the size, does that mean I have to worry about displayOrder going up to, for example, 1000 and exceeding the bits of Integer 16 (does it ever exceed the size?), and that I should therefore use Integer 32 instead? And if I set it to Integer 64, and the displayOrder is just 1, do I have to worry about slow performance?
I've seen the docs for NSAttributeType but I'm not sure what the numbers stand for.
Thanks
I think @choppin meant that speed-wise it won't make much of a difference. Size-wise it very much does: an int16 is half the size of an int32, and having a ton of int32s when you only need int16s will have a larger memory footprint. The number here represents the number of bits the variable takes up in memory.
If you will only have a couple then don't worry about it, but if you will have a large data set, then it becomes an issue.
Also, if the number you will store could be very large, then you need the bigger option; for example, an int32 can hold 4,294,967,296 distinct values, or half that range on either side of zero if the int is signed, which it is by default. If you go over the maximum value, the number wraps around: a signed int wraps to a large negative value, and an unsigned int wraps back to 0.
Since memory is a concern on a mobile device, which option you choose warrants thought, though it warrants less than it did a few years ago.
It shouldn't make a huge deal on performance which one you use, but I would stick with integer 32. That gives you 2 to the power of 32 values (which should be more than enough for a display order)
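For a concrete sense of the ranges involved, here is a standalone C sketch (not Core Data specific):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Integer 16 is a signed 16-bit value: -32,768 ... 32,767
    int16_t displayOrder = INT16_MAX;            // 32767
    displayOrder = (int16_t)(displayOrder + 1);  // wraps to -32768 in practice
    printf("%d\n", displayOrder);

    // Integer 32 is signed 32-bit: -2,147,483,648 ... 2,147,483,647
    // Integer 64 is signed 64-bit: roughly +/- 9.2 quintillion
    return 0;
}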

Why is there no Integer 8 NSAttributeDescription attributeType in Core Data?

Core Data's NSAttributeDescription has integer types for 16-bit, 32-bit, and 64-bit numbers, but not for 8-bit numbers. Why is that? Is the recommended way to store 8-bit numbers in an Integer 16 type?
It seems wasteful from a storage perspective to double the data size (by using 16 bits to store the 8-bit number). Also, what happens if, due to programmer error, a number out of the range of an 8-bit number is stored in that Integer 16? Then any function/method that takes int8_t could be passed the wrong number. For example:
NSManagedObject *object = // fetch from store
int16_t value = object.value.intValue;
[otherObject methodThatTakesInt8:(int8_t)value]; // bad things happen if value isn't within an 8-bit range
I don't think the answer as to why NSAttributeDescription doesn't offer an 8-bit number is any more complicated than that Core Data doesn't have an 8-bit storage type. Which is probably a circular argument. Probably Apple just didn't see the worth.
As to your other concerns: what if the programmer wanted to store a 12-bit number? What if they wanted to store a 24-bit number? It seems odd to pull out 8 bits as a special case in the modern world. But the problem is easily solved: you can implement -willSave on any NSManagedObject subclass to validate data before it is committed to the store. Or you could implement your own custom setter (ultimately calling -setPrimitiveValue:forKey:) similarly to validate immediately upon set. In either case you can implement whatever strategy you want for an out-of-bounds number: raise an exception, saturate, whatever.
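For example, a rough sketch of the custom-setter approach (the attribute name smallValue is hypothetical, and raising an exception is just one possible out-of-bounds policy):
// In an NSManagedObject subclass whose model declares smallValue as Integer 16,
// but whose logical range is 8 bits.
- (void)setSmallValue:(NSNumber *)smallValue {
    NSInteger v = smallValue.integerValue;
    if (v < INT8_MIN || v > INT8_MAX) {
        [NSException raise:NSInvalidArgumentException
                    format:@"smallValue %ld is outside the 8-bit range", (long)v];
    }
    [self willChangeValueForKey:@"smallValue"];
    [self setPrimitiveValue:smallValue forKey:@"smallValue"];
    [self didChangeValueForKey:@"smallValue"];
}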
In addition to @Tommy's answer, if you're using a SQLite persistent store (which nearly everyone does when using Core Data), it's not actually wasteful from a storage perspective. SQLite uses dynamic typing, meaning that any column can contain a value of any type. Size requirements are determined based on the value being saved. If you tell Core Data that you want a 64-bit integer attribute but all values of that attribute would fit in 8 bits, you haven't actually wasted 7/8 of the space used for that attribute.

Benefits of using NSInteger over int?

I am trying to comprehend how development is affected when developing for both 32-bit and 64-bit architectures. From what I have researched thus far, I understand an int is always 4 bytes regardless of the architecture of the device running the app. But an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended but I'm not sure what the reasoning is for that.
My question is, if you know the possible value you're using is never going to be large (maybe you're using it to index into an array of 200 items or store the count of objects in an array), why define it as an NSInteger? That's just going to take up 8 bytes when you won't use it all. Is it better to define it as an int in those cases? If so, in what case would you want to use an NSInteger (as opposed to int or long etc)? Obviously if you needed to utilize larger numbers, you could with the 64-bit architecture. But if you needed it to also work on 32-bit devices, would you not use long long because it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
Also, I cannot think of a method which takes or returns the primitive type int rather than NSInteger, and I'm wondering if there is more to it than just the size of the values. For example, (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why this is the case. Assuming it's possible to have a table with 2,147,483,647 rows, what would occur on a 32-bit device when you add one more - does it wrap around to -2,147,483,648? And on a 64-bit device it would be 2,147,483,648. (Why return a signed value? I'd think it should be unsigned since you can't have a negative number of rows.)
Ultimately, I'd like to obtain a better understanding of actual use of these number data types, perhaps some code examples would be great!
I personally think that 64-bit is the real reason NSInteger and NSUInteger exist; before 10.5 they did not. The two are simply defined as long on 64-bit and as int on 32-bit.
NSInteger/NSUInteger are defined as dynamic typedefs to one of these types, like this:
#if __LP64__ || NS_BUILD_32_LIKE_64
typedef long NSInteger;
typedef unsigned long NSUInteger;
#else
typedef int NSInteger;
typedef unsigned int NSUInteger;
#endif
Thus, use them in place of the more basic C types when you want the 'bit-native' size.
I suggest you read this link thoroughly.
CocoaDev has some more info.
For the proper format specifier to use for each of these types, see the String Programming Guide's section on Platform Dependencies.
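In practice that section boils down to casting at the call site, e.g. (a small sketch):
NSInteger count = 42;
NSUInteger index = 7;
// Cast to long/unsigned long so the same format string is correct on both
// 32-bit and 64-bit builds.
NSLog(@"count = %ld, index = %lu", (long)count, (unsigned long)index);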
I remember this coming up at an iOS developer conference: you have to pay attention to the data types in iOS 7. For example, if you use NSInteger on a 64-bit device and save it to iCloud, and then sync to an older device (say a 2nd-generation iPad), your app will not behave the same, because that device treats NSInteger as 4 bytes instead of 8, so your calculations would be wrong.
But so far I use NSInteger, because my apps mostly don't use iCloud or don't sync, and to avoid compiler warnings.
Apple uses int because for a loop control variable (which is only used to control the loop iterations) the int datatype is fine, both in size and in the values it can hold for your loop. There's no need for a platform-dependent datatype here. For a loop control variable, even a 16-bit int will do most of the time.
Apple uses NSInteger for a function return value or for a function argument because in this case the datatype size matters: what you are doing with a function is communicating/passing data with other programs or with other pieces of code.
Apple uses NSInteger (or NSUInteger) when passing a value as an argument to a function or returning a value from a function.
The only thing I would use NSInteger for is passing values to and from an API that specifies it. Other than that it has no advantage over an int or a long. At least with an int or a long you know what format specifiers to use in a printf or similar statement.
As a continuation of Irfan's answer:
sizeof(NSInteger)
equals the processor's word size. It is much simpler and faster for the processor to operate on whole words.
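You can check this directly (a trivial sketch; it prints 4 on a 32-bit device and 8 on a 64-bit one):
NSLog(@"sizeof(NSInteger) = %zu, sizeof(int) = %zu", sizeof(NSInteger), sizeof(int));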

Core Data Storing NSDecimalNumber with Precision

I'm receiving and parsing JSON and storing the data into Core Data. Some of the data is currency being stored as NSDecimalNumber, but some of these values have a higher precision than two decimal places.
For instance, if I get a value from the service such as 8.2399999995 I would like to store this in Core Data as 8.24. Is there any way to set up my model to a two decimal place precision? Or do I need to manually round each value after it's stored?
UPDATE
Leijonien, thanks for the information. I tried doing that and I'm having some trouble saving the formatted value. I checked the JSON and searched Google, and it turns out I'm getting a clean value from the service; RestKit is the problem: https://github.com/RestKit/RestKit/issues/1405.
However, I've created a category on one of my NSManagedObject classes, overridden the setter for the attribute I want, formatted the value, but I still see the long decimal value in my db. Here's my code.
- (void)setAmount:(NSDecimalNumber *)amount {
    NSDecimalNumberHandler *round =
        [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                               scale:2
                                                    raiseOnExactness:NO
                                                     raiseOnOverflow:NO
                                                    raiseOnUnderflow:NO
                                                 raiseOnDivideByZero:YES];
    NSDecimalNumber *newAmount = [amount decimalNumberByRoundingAccordingToBehavior:round];
    NSLog(@"%@", newAmount);
    [self setPrimitiveValue:newAmount forKey:@"amount"];
}
What's weird is that when newAmount prints to the console it's in the format 8.24 like I want, but when I check the db it's saved as 8.2399999995. Am I doing something wrong here?
CoreData will just store the value you pass, so if you need only 2 digits, you should round the value yourself. Probably better to round the value before it's stored, so you only have to do it once per result.
As it turns out, the problem is not RestKit. In fact, the problem appears to be Core Data. This is why printing to the console in my setter method showed the correctly formatted number, but the number has the wrong precision in the db. The best remedy for this situation was to override the getter method and format the number there, after it has been pulled from Core Data. The following seems to work...let me know if you've found anything else that works.
- (NSDecimalNumber *)amount {
    [self willAccessValueForKey:@"amount"];
    NSDecimalNumber *unroundedAmount = [self primitiveValueForKey:@"amount"];
    [self didAccessValueForKey:@"amount"];
    NSDecimalNumberHandler *round =
        [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                               scale:2
                                                    raiseOnExactness:NO
                                                     raiseOnOverflow:NO
                                                    raiseOnUnderflow:NO
                                                 raiseOnDivideByZero:YES];
    NSDecimalNumber *roundedAmount = [unroundedAmount decimalNumberByRoundingAccordingToBehavior:round];
    return roundedAmount;
}
One thing to watch out for is that some (most?) decimal numbers can't be stored in binary floating point in exactly the way we think of them. There's a lot to read on this out there, but I like the detailed approach of this one: http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html.
So for your number, the .24 can't be represented exactly, because in binary the fractional part of a number is built from .5, .25, .125, and so on (that glosses over some details, but it's the general idea). So to represent .24 it combines as many of those smaller pieces as it can to get as close as possible.
NSDecimalNumber was set up to round it, but if you don't tell it to round anything you'll see the same 8.23999... value there too.
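You can see the effect directly; a double cannot hold 8.24 exactly, while an NSDecimalNumber built from a string can (a quick sketch):
double d = 8.24;
NSLog(@"%.17g", d); // prints 8.2400000000000002 -- .24 has no exact binary form

// An NSDecimalNumber created from a string keeps the decimal value exactly:
NSDecimalNumber *exact = [NSDecimalNumber decimalNumberWithString:@"8.24"];
NSLog(@"%@", exact); // 8.24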

Converting `NSDecimalNumber` to `SInt64` without precision loss. (within range of SInt64, iOS)

Currently, calling -longLongValue on an NSDecimalNumber created with the string @"9999999999999999" returns 10000000000000000.
This means the class converts its value to double first and then re-converts that into SInt64 (signed long long).
How can I avoid this behavior? I want the precise integral value, within the range of SInt64.
PS.
I considered converting to NSString and re-converting into SInt64 with NSScanner or strtoll, but I believe there's a better way. If you're sure there is no other way, please tell me that.
First: unless you're sure it's performance-critical, I'd write it into a string and scan it back. That's the easy way.
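A minimal sketch of that string round-trip (assuming the value is integral and fits in SInt64):
NSDecimalNumber *number = [NSDecimalNumber decimalNumberWithString:@"9999999999999999"];
// Go through the exact decimal string instead of -longLongValue, which routes
// through double and loses precision for 16+ digit integers.
long long exact = strtoll(number.stringValue.UTF8String, NULL, 10);
// exact == 9999999999999999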
Now, if you really want to do it otherwise:
- get an NSDecimal from your NSDecimalNumber
- work with the private fields of the structure: initialize your long long value from the mantissa (possibly introduce checks to handle too-large mantissas)
- multiply by 10^exponent; you can do that using binary exponentiation; again, check for overflow
Start with an NSDecimalNumber* originalValue.
Let int64_t approx = [originalValue longLongValue]. This will not be exact, but quite close.
Convert approx to an NSDecimalNumber, calculate originalValue - approx, take the longLongValue of the difference, and add it to approx. Now you have the correct result.
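A sketch of that correction step (assuming the original value is integral and within the range of SInt64):
NSDecimalNumber *originalValue = [NSDecimalNumber decimalNumberWithString:@"9999999999999999"];

// 1. The first approximation goes through double, so it may be off by a few units.
int64_t approx = originalValue.longLongValue;

// 2. Compute the error (originalValue - approx) exactly, in decimal arithmetic.
NSDecimalNumber *approxDecimal = [NSDecimalNumber decimalNumberWithString:
                                     [NSString stringWithFormat:@"%lld", (long long)approx]];
NSDecimalNumber *difference = [originalValue decimalNumberBySubtracting:approxDecimal];

// 3. The difference is small, so its own longLongValue is exact; add it back.
int64_t exact = approx + difference.longLongValue;
// exact == 9999999999999999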
