How to get the value of an NSDecimalNumber from NSDecimal?

I have an array of objects, and each object has an NSDecimalNumber, call it "size".
For each object in the array, I will subtract a recommended size, called rSize.
I then want to go into the resulting NSDecimalNumber and get the value of the delta; I don't really care whether the result is positive or negative.
I think I'm going to use the decimalValue method, which returns an NSDecimal struct, so the question is: which field within the struct will give me the value of the delta?
To rephrase: an NSDecimal represents an NSDecimalNumber, but which field of the NSDecimal struct holds the value?
Many thanks
Rob

Your "delta" appears to be the absolute value of the difference between "rSize" and the item. In that case, you can perform the subtraction (item – rSize), and multiply it by -1 if it is negative entirely within NSDecimalNumber:
NSDecimalNumber *negativeOne = [NSDecimalNumber decimalNumberWithMantissa:1
                                                                 exponent:0
                                                               isNegative:YES];
NSDecimalNumber *delta = [item decimalNumberBySubtracting:rSize];
if ([delta compare:[NSDecimalNumber zero]] == NSOrderedAscending) {
    delta = [delta decimalNumberByMultiplyingBy:negativeOne];
}
Then use the -compare: selector on the resulting delta objects to sort your array of objects.
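As a hedged sketch of that sorting step (the array name items, the size accessor, and the reuse of negativeOne from above are assumptions, not from the original question):
// Hypothetical sketch: sort objects by |size - rSize| using only
// NSDecimalNumber arithmetic and -compare:.
NSArray *sorted = [items sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
    NSDecimalNumber *da = [[a size] decimalNumberBySubtracting:rSize];
    NSDecimalNumber *db = [[b size] decimalNumberBySubtracting:rSize];
    if ([da compare:[NSDecimalNumber zero]] == NSOrderedAscending)
        da = [da decimalNumberByMultiplyingBy:negativeOne];
    if ([db compare:[NSDecimalNumber zero]] == NSOrderedAscending)
        db = [db decimalNumberByMultiplyingBy:negativeOne];
    return [da compare:db]; // -compare: already returns an NSComparisonResult
}];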

Related

Scale and Precision of NSDecimalNumber value

Let us suppose I have a variable v of type NSDecimalNumber
with the value v = 34.596904 in its own format.
I want to know the precision and scale of this number, not the default one. I did not find any function in the NSDecimalNumber class that gives these values, so maybe someone would like to throw some light on how this works.
precision = 8
scale = 6
Precision is the count of significant digits in the number, and scale is the count of digits after the decimal point.
This extension will give you the expected values for your specific example:
extension Decimal {
    var scale: Int {
        return -self.exponent
    }
    var precision: Int {
        return Int(floor(log10((self.significand as NSDecimalNumber).doubleValue))) + 1
    }
}
Usage:
let v: NSDecimalNumber = NSDecimalNumber(string: "34.596904")
print("precision=\((v as Decimal).precision)") //->precision=8
print("scale=\((v as Decimal).scale)") //->scale=6
But I cannot be sure if this generates expected results in all cases you have in mind, as you have shown only one example...
One more thing: in Swift, Decimal and NSDecimalNumber are easily bridged, and you should prefer Decimal as far as you can.
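For completeness, and tying back to the first question's NSDecimal struct: in Objective-C, a hedged way to read the scale is from the struct's exponent field (the underscore-prefixed fields are nominally private, and this only equals the scale when the value is compact, i.e. the mantissa has no trailing zeros):
NSDecimalNumber *v = [NSDecimalNumber decimalNumberWithString:@"34.596904"];
NSDecimal d = [v decimalValue];
// 34.596904 is stored as 34596904 x 10^-6, so the scale is -_exponent
int scale = d._exponent < 0 ? -d._exponent : 0;
NSLog(@"scale=%d", scale); // 6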

NSNumbers with value 0.5 and 1.0 have the same hash

Can someone please confirm, and explain, why this happens:
On simulator (7.1, 32-bit):
NSNumber *a = [NSNumber numberWithFloat:0.5]; // hash = 506952114
NSNumber *b = [NSNumber numberWithFloat:1.0]; // hash = 2654435761
NSNumber *c = [NSNumber numberWithFloat:2.0]; // hash = 1013904226
On device (7.1, 32-bit):
NSNumber *a = [NSNumber numberWithFloat:0.5]; // hash = 2654435761
NSNumber *b = [NSNumber numberWithFloat:1.0]; // hash = 2654435761 - SAME!
NSNumber *c = [NSNumber numberWithFloat:2.0]; // hash = 5308871522
I thought it might be a 32-bit issue, but when I try the same thing on a 64-bit simulator and device, I get the same issue: the simulator is fine, the device produces identical hashes.
I was trying to add unique objects to an NSMutableOrderedSet and noticed that two of my objects, identical except for the differing values 0.5 and 1.0, were not both being added, and this is why. I tried both floats and doubles with the same result.
But why?
I think this excellent article from Mike Ash might give some insight:
For floats that are integer values, we want to do the same thing.
Since our isEqual: considers an integer-valued DOUBLE equal to an INT
or UINT of the same value, we must return the same hash as the INT and
UINT equivalent. To accomplish this, we check to see if the DOUBLE
value is actually an integer, and return the integer value if so:
if(_value.d == floor(_value.d))
    return [self unsignedIntegerValue];
(I won't quote the whole section about hash, so please read the article for full disclosure).
But, bottom line, it looks like using [NSNumber hash] as a key in an associative array/hash table is a bad idea. However, I cannot explain why it behaves differently on the Simulator and on the device; that looks somewhat troubling...
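A quick, hedged way to see why this is only a performance problem rather than a correctness problem (the hash collision shown may or may not occur on your hardware):
NSNumber *half = @0.5;
NSNumber *one  = @1.0;
NSLog(@"hashes equal: %d", half.hash == one.hash); // may be 1 on device
NSLog(@"isEqual: %d", [half isEqual:one]);         // 0 -- always distinct
NSSet *set = [NSSet setWithObjects:half, one, nil];
NSLog(@"count: %lu", (unsigned long)set.count);    // 2 -- both are kept
NSSet resolves hash collisions with -isEqual:, so both numbers survive; trouble only starts if your own code treats the hash itself as the identity.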
There is no guarantee that a hash for different inputs is different.
In this case, consider that there are 2^32 hash values while there are orders of magnitude more unique NSNumbers, so the hash cannot be used for uniqueness.
A rather short hash is generally used as a fast initial comparison; only if it matches is a full comparison of the objects performed. This is probably what NSNumber's isEqual: does.
That is why using a hash as a key in an NSSet is a bad idea, and for the reasons @trojanfoe quoted from Mike Ash, an NSNumber hash will not work.
Even cryptographic hashes such as SHA-512 are not guaranteed to produce different results for different inputs, but the chance of a collision shrinks as the hash length increases. This is why MD5 is recommended against and even SHA-2 is increasingly considered too short.
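In other words, let the collection do the hashing internally rather than keying anything off -hash yourself; a minimal illustration:
NSMutableDictionary *byNumber = [NSMutableDictionary dictionary];
byNumber[@0.5] = @"half";
byNumber[@1.0] = @"one";
// Even if 0.5 and 1.0 hash identically, lookups stay correct because
// NSDictionary falls back on -isEqual: after the hash bucket matches.
NSLog(@"%@", byNumber[@0.5]); // half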

Inconsistent values with NSNumber to double multiplication

I'm currently parsing NSString values to NSNumbers and then adding them into a NSMutableArray called operands in an object called "data" like so:
NSNumberFormatter * f = [[NSNumberFormatter alloc] init];
[f setNumberStyle:NSNumberFormatterDecimalStyle];
NSNumber * myNumber = [f numberFromString:operandString];
[data.operands addObject:myNumber];
I then retrieve those numbers, perform some math on them, then update the array:
double x = [[data.operands objectAtIndex:i] doubleValue];
double y = [[data.operands objectAtIndex:i + 1] doubleValue];
double answer = x * y;
[data.operands replaceObjectAtIndex:i withObject:[NSNumber numberWithDouble:answer]];
When I get the answer, everything looks fine, e.g. (3.33 * 5 = 16.65).
BUT, when I look in the debugger I'm seeing some crazy values for x and answer, such as:
x = 3.3300000000000001
answer = 16.649999999999999
Why is this happening? Am I losing some precision parsing these back and forth? Is it how I've used the NSNumberFormatter to parse the string?
The reason this is a problem for me is that I'm trying to ensure there are no double overflow errors, so I'm using this simple test to check integrity:
if (answer / y != x) {
    //THROW OVERFLOW ERROR
}
With the above crazy numbers this is always inconsistent. When I NSLog the answer it comes out fine:
NSLog(@"%g", [[data.operands objectAtIndex:i] doubleValue]);
Same for
NSLog(@"%f", [[data.operands objectAtIndex:i] doubleValue]);
You are not losing any precision that you need to worry about. Those are the correct values. There are only about 2^64 distinct double values; that finite set has to try to approximately cover the infinitely many real numbers in the range that doubles span.
In other words, there are no exact answers in computer land, and your
if (answer / y != x) {
    //THROW OVERFLOW ERROR
}
will not work. Or it may work much of the time, but it will fail if you push it. Instead you need to acknowledge the limited (though pretty high) precision of doubles:
// Don't waste time worrying like this...
if (fabs(answer / y - x) > 1e-12 * fabs(answer)) {
    // not a correct or useful thing to check, don't use this (untested)
}
// ...let the math package handle it:
if (isnan(answer)) {
    // we gots problems
}
if (!isnormal(answer)) {
    // we gots some other problems
}
Also don't forget that doubles reach up past 10^300, which is a very large number, so they work pretty well. To use 32-bit floats you need to pay much more attention to order of execution, etc.
NSLog is likely printing with fewer decimals of precision and rounding to the nearest value, so the answers look better.
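To make that last point concrete, here is a hedged sketch using the question's own numbers, with the hidden precision made visible (the exact digits can vary by platform):
double x = 3.33, y = 5.0;
double answer = x * y;            // stored as roughly 16.649999999999999
NSLog(@"%d", answer / y == x);    // may print 0: the round trip is not exact
NSLog(@"%.17g", answer);          // full representation, the "crazy" debugger value
NSLog(@"%g", answer);             // 16.65 -- %g rounds for display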

Remove last digit when it's a 0 in double (or any other option) when comparing

I'm adding a bunch of coordinates into a quad tree, and when I ask for the closest coordinate near my location, I sometimes get a coordinate with a 0 at the end, added automatically, perhaps by the quad tree or by something else I can't identify.
The problem is that when I query the double value in my Core Data store using a predicate, it won't match because of the 0 digit appended to the number.
I thought about removing the 0 whenever it appears, but I'm sure there is a better way of doing it.
For example:
Near location 31.123456, 34.123456, the nearest is 31.123444, 34.123450,
where '34.123450' is actually 34.12345 in the database.
// Convert float to string
NSString *str_lat = @"34.123450";
NSString *lastChar = [str_lat substringFromIndex:MAX((int)[str_lat length] - 1, 0)];
if ([lastChar isEqualToString:@"0"]) {
    str_lat = [str_lat substringToIndex:[str_lat length] - 1];
}
NSLog(@"%@", str_lat);
First: You should not store numbers as strings. 7.3 and 7.30 are the same value with simply different representations. You should compare the values, not the representations.
Second: You should not compare floating-point numbers with == but check that their difference is within a small delta. In a calculation, precision can get lost and rounding is applied, so values that are mathematically equal can physically differ by some small amount.
// remove the zeros from the values (if you have them as floats)
NSString *valueFromDataBase = [NSString stringWithFormat:@"%g", 34.123450];
NSString *yourValue = [NSString stringWithFormat:@"%g", 34.12345];
if ([yourValue isEqualToString:valueFromDataBase]) {
    // they are equal
}
Or make them floats and compare them:
// make them floats and compare them against a small tolerance
CGFloat floatFromDB = [valueFromDB floatValue];
CGFloat yourFloat = [yourString floatValue];
if (fabs(floatFromDB - yourFloat) < 0.000001) {
    // they are equal (within tolerance)
}
UPDATED as @Amin Negm says
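If the real goal is the Core Data lookup from the question, a hedged sketch is to query a small tolerance band instead of exact equality (the attribute name latitude is an assumption):
double target = 34.12345;
double eps = 0.000001; // half the spacing of your 6-decimal coordinates
NSPredicate *p = [NSPredicate predicateWithFormat:@"latitude BETWEEN %@",
                  @[@(target - eps), @(target + eps)]];
// fetchRequest.predicate = p; -- matches 34.12345 and 34.123450 alike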

Why NSInteger instead of NSUInteger in "numberOfSectionInTableView:"?

The UITableView data source method numberOfSectionsInTableView: has a return type of NSInteger. However, a UITableView cannot have a negative number of sections; it has 0 or more, so why is the return type NSInteger? Doesn't that allow for crashes caused by a negative integer being returned?
You can't do the check if (var < 0) return; with an unsigned integer; that is the standard reason for preferring a signed one. Really, the only reason to use an unsigned integer is if you need the extra room for larger values and you can guarantee the input will never go below zero.
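A small sketch of the pitfall this describes (64-bit values shown; the exact wrapped number depends on the platform):
NSUInteger u = 0;
NSInteger  s = 0;
NSLog(@"%lu", (unsigned long)(u - 1)); // 18446744073709551615 -- wrapped around
NSLog(@"%ld", (long)(s - 1));          // -1 -- can actually be range-checked
if (s - 1 < 0) {
    NSLog(@"caught the underflow");    // only possible with a signed type
}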
