NSNumbers with value 0.5 and 1.0 have the same hash - ios

Can someone please confirm, and explain, why this happens:
On simulator (7.1, 32-bit):
NSNumber *a = [NSNumber numberWithFloat:0.5]; // hash = 506952114
NSNumber *b = [NSNumber numberWithFloat:1.0]; // hash = 2654435761
NSNumber *c = [NSNumber numberWithFloat:2.0]; // hash = 1013904226
On device (7.1, 32-bit):
NSNumber *a = [NSNumber numberWithFloat:0.5]; // hash = 2654435761
NSNumber *b = [NSNumber numberWithFloat:1.0]; // hash = 2654435761 - SAME!
NSNumber *c = [NSNumber numberWithFloat:2.0]; // hash = 5308871522
I thought it might be a 32-bit issue, but when I try the same thing on a 64-bit simulator and device I get the SAME issue: the simulator is fine, while the device produces identical hashes.
I was trying to add unique objects to an NSMutableOrderedSet and noticed that my two objects that were identical except for differing values of 0.5 and 1.0 were not both being added, and this is why. I tried both floats and doubles with the same result.
But why?

I think this excellent article from Mike Ash might give some insight:
For floats that are integer values, we want to do the same thing.
Since our isEqual: considers an integer-valued DOUBLE equal to an INT
or UINT of the same value, we must return the same hash as the INT and
UINT equivalent. To accomplish this, we check to see if the DOUBLE
value is actually an integer, and return the integer value if so:
if (_value.d == floor(_value.d))
    return [self unsignedIntegerValue];
(I won't quote the whole section on hashing, so please read the article for the full details.)
But, bottom line, it looks like using [NSNumber hash] as a key in an associative array/hash table is a bad idea. However, I cannot explain why it behaves differently on the simulator and the device; that looks somewhat troubling...
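For what it's worth, here is a quick, hedged check (the actual hash values vary by OS and architecture, as the question shows): even when two NSNumbers report the same hash, isEqual: still tells them apart, so a collection that relies on a properly implemented isEqual: keeps both values.
NSNumber *half = [NSNumber numberWithFloat:0.5];
NSNumber *one = [NSNumber numberWithFloat:1.0];
NSLog(@"%lu %lu", (unsigned long)half.hash, (unsigned long)one.hash);   // may be equal on device
NSLog(@"%d", [half isEqual:one]);                                       // 0 -- still distinct values
NSMutableOrderedSet *set = [NSMutableOrderedSet orderedSet];
[set addObject:half];
[set addObject:one];
NSLog(@"%lu", (unsigned long)set.count);                                // 2 -- both are added
So a colliding hash only slows lookups down; objects are only treated as duplicates when isEqual: also says they match.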

There is no guarantee that a hash for different inputs is different.
In this case, consider that there are 2^32 possible hash values but orders of magnitude more unique NSNumbers, so the hash cannot be used for uniqueness.
A rather short hash is generally used as a fast initial comparison; only if it matches is a full comparison of the objects performed. This is probably what NSNumber's isEqual: does.
That is why using a hash by itself as a key in an NSSet is a bad idea, and for the reasons @trojanfoe quoted from Mike Ash, an NSNumber hash will not work for that.
Even cryptographic hashes such as SHA-512 are not guaranteed to produce different results for different inputs, but the chance of a collision shrinks as the hash length increases. This is why MD5 is recommended against and even SHA-2 is increasingly being considered too short.
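To illustrate the "hash first, isEqual: to confirm" point, here is a hypothetical class (ConstantHashValue is made up purely for this sketch) whose hash collides for every instance; an NSSet still stores the distinct values correctly, just with slower lookups:
@interface ConstantHashValue : NSObject
@property (nonatomic) double value;
@end

@implementation ConstantHashValue
- (NSUInteger)hash {
    return 42;                        // every instance collides on purpose
}
- (BOOL)isEqual:(id)object {
    if (![object isKindOfClass:[ConstantHashValue class]]) return NO;
    return self.value == ((ConstantHashValue *)object).value;
}
@end

// Usage: identical hashes, yet the set resolves the collisions with isEqual:.
ConstantHashValue *a = [ConstantHashValue new];
a.value = 0.5;
ConstantHashValue *b = [ConstantHashValue new];
b.value = 1.0;
NSSet *set = [NSSet setWithObjects:a, b, nil];
NSLog(@"%lu", (unsigned long)set.count);   // 2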

Related

How to Convert Float to NSData Bytes

Using the following code, I am attempting to convert three float values into a single NSData object, which I can then transmit over a serial port.
float kP = [[self.kPTextField stringValue] floatValue];
float kI = [[self.kITextField stringValue] floatValue];
float kD = [[self.kDTextField stringValue] floatValue];
float combined[] = {kP, kI, kD};
NSData *dataPackage = [NSData dataWithBytes:&combined length:sizeof(combined)];
[self.serialPort sendData:dataPackage];
The problem is that it doesn't seem to work very well. Whenever I use the "sizeof()" C function, it tells me that the "dataPackage" is only 8 bytes, even though 3 float values should total 12 bytes. I am receiving the data with an Arduino. It sees the bytes coming in, but they aren't legible at all. I don't think it's a problem on the Arduino side of things (but who knows?).
Any help would be appreciated! I'm not a CS major, just a bio major, and I've never learned this stuff in a formal way so I am sorry if my question is ridiculous. I've spent several hours searching the net about this problem and haven't found anything that helped.
EDIT: It turns out this code was completely correct. I made a simple mistake on the Arduino side of things by using a struct instead of a union to take the bytes and convert them back into floats.
For others who may be in a similar predicament, a successful way to convert floats from bytes coming out of the serial port is the following:
(at top of implementation file)
union {
    float pidVals[3];
    byte bytes[12];
} pidUnion;
(inside loop)
if (Serial.available() > 11) {
    for (int i = 0; i < 12; i++) {
        pidUnion.bytes[i] = Serial.read();
    }
}
// Now you can access all three floats of data using pidUnion.pidVals[0], pidUnion.pidVals[1], etc.
This probably isn't the best or most reliable way to transmit data. There is no error-correcting mechanism or packet structure. But it does work in a pinch. I imagine you would probably want to create a packet of data along with a hash byte to make sure all of the data is correct on the other side; this code doesn't have any of that, though.
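As a rough sketch of that idea (not from the original post), the sender could append a one-byte XOR checksum so the Arduino can reject corrupted packets. kP, kI and kD are the floats from the question; the framing itself is purely illustrative:
float combined[] = {kP, kI, kD};
NSMutableData *packet = [NSMutableData data];
[packet appendBytes:combined length:sizeof(combined)];    // 12 bytes of payload

uint8_t checksum = 0;
const uint8_t *payload = packet.bytes;
for (NSUInteger i = 0; i < packet.length; i++) {
    checksum ^= payload[i];                               // XOR of all payload bytes
}
[packet appendBytes:&checksum length:1];                  // 13th byte for the receiver to verify
// [self.serialPort sendData:packet];
The receiver would XOR the first 12 bytes it reads and compare the result against the 13th before trusting the values.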
There are multiple problems with your code.
First, you don't want to use stringValue on a text field. You want the text property, which is a string.
So the first line should read like this:
float kP = [self.kPTextField.text floatValue];
Second, in C an array decays to a pointer to its first element when it is used in an expression or passed to a function. In a parameter list, the declarations
float combined[]
and
float *combined
are identical. Both mean "pointer to float".
So this code:
NSData *dataPackage = [NSData dataWithBytes:&combined
                                     length:sizeof(combined)];
should not have an ampersand in front of combined. It should read:
NSData *dataPackage = [NSData dataWithBytes:combined
                                     length:sizeof(combined)];
Third, what matters is sizeof(combined), not sizeof(dataPackage).
The expression sizeof(dataPackage) will tell you the size of the variable dataPackage, which is a pointer to an NSData object. You must be running on a 64-bit device, where pointers are 8 bytes.
To test the length of the data in your NSData object, you want to ask it with the length property:
NSLog(#"sizeof(combined) = %d", sizeof(combined)";
NSData *dataPackage = [NSData dataWithBytes:&combined
length: sizeof(combined)];
NSLog(#"dataPackage.length = %d", dataPackage.length";
Both log statements should display values of 12.
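If you want to double-check on the Objective-C side, a quick sanity test (just a sketch) is to copy the bytes back out of the NSData object and confirm the three floats round-trip:
float decoded[3] = {0};
[dataPackage getBytes:decoded length:sizeof(decoded)];   // copies the 12 payload bytes back out
NSLog(@"kP = %f, kI = %f, kD = %f", decoded[0], decoded[1], decoded[2]);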

Inconsistent values with NSNumber to double multiplication

I'm currently parsing NSString values into NSNumbers and then adding them to an NSMutableArray called operands in an object called "data", like so:
NSNumberFormatter * f = [[NSNumberFormatter alloc] init];
[f setNumberStyle:NSNumberFormatterDecimalStyle];
NSNumber * myNumber = [f numberFromString:*operandString];
[data.operands addObject:myNumber];
I then retrieve those numbers, perform some math on them, then update the array:
double x = [[data.operands objectAtIndex: i]doubleValue];
double y = [[data.operands objectAtIndex: i + 1]doubleValue];
double answer = x * y;
[data.operands replaceObjectAtIndex:(i) withObject:[NSNumber numberWithDouble:answer]];
When I get the answer, everything looks fine, e.g. (3.33 * 5 = 16.65).
BUT, when I look in the debugger I'm seeing some crazy values for x and answer, such as:
x = 3.3300000000000001
answer = 16.649999999999999
Why is this happening? Am I losing some precision parsing these back and forth? Is it how I've used the NSNumberFormatter to parse the string?
The reason this worries me is that I'm trying to ensure there are no double overflow errors, so I'm using this simple test to check the integrity:
if (answer / y != x) {
    //THROW OVERFLOW ERROR
}
With the above crazy numbers this is always inconsistent. When I NSLog the answer it comes out fine:
NSLog (#"%g", [[data.operands objectAtIndex:i]doubleValue]]);
Same for
NSLog (#"%f", [[data.operands objectAtIndex:i]doubleValue]]);
You are not losing any precision that you need to worry about. Those are the correct values. There are only about 2^64 distinct double values; that finite set has to approximate the infinitely many real numbers in the ranges that doubles cover.
In other words, there are no exact answers in computer land, so your
if (answer / y != x) {
    //THROW OVERFLOW ERROR
}
will not work. Or it may work much of the time, but fail if you push it. Instead you need to acknowledge the limited (though still quite high) precision of doubles:
//Don't waste time worrying like this...
if (fabs(answer / y - x) > 1e-12 * fabs(answer)) {
    // Not a correct or useful thing to check -- don't use this (I did not check it)
}
// let the math package handle it:
if (isnan(answer)) {
    // we gots problems
}
if (!isnormal(answer)) {
    // we gots some other problems
}
Also, don't forget that 10^300 is a very large number; doubles work pretty well. To use 32-bit floats you need to pay much more attention to order of execution, etc.
NSLog is likely printing with fewer decimal digits of precision and rounding to the nearest value, so the answers look better.
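You can see the formatting effect directly: %g rounds to six significant digits, while %.17g prints enough digits to show the exact stored double, matching what the debugger displays.
double x = 3.33;
double answer = x * 5.0;
NSLog(@"%g", answer);      // 16.65
NSLog(@"%.17g", x);        // 3.3300000000000001
NSLog(@"%.17g", answer);   // 16.649999999999999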

Why is converting my float to an int making the number negative?

NSTimeInterval expirationTime = (secondsSinceUnixEpoch*1000)+120000;
expirationTime = ceil(expirationTime/2);
int expirationInt = (int)expirationTime;
NSLog(#"%d", expirationInt);
The log output is always negative, even though before I convert it to an int it's positive... I tried just multiplying it by -1 to make it positive again and it's just staying negative! I'm totally perplexed.... don't know much about C, am I just doing something silly??
The number (secondsSinceUnixEpoch*1000)+120000 looks to me like it's going to be way too large to fit in an int: milliseconds since the Unix epoch are already in the trillions, far beyond INT_MAX (2,147,483,647). Chances are the integer is overflowing and becoming negative.
Converting to long long is one solution. As you stated in a comment, you want to show a whole number for use in a URL. Just do this:
NSTimeInterval expirationTime = (secondsSinceUnixEpoch*1000)+120000;
expirationTime = ceil(expirationTime/2);
NSString *urlString = [NSString stringWithFormat:@"http://example.com?time=%.0f", expirationTime];
This will format the decimal number as a whole number.
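If you do need an integer value in code rather than just for string formatting, a wider type avoids the overflow. This is just a sketch reusing the variables from the question; long long comfortably holds millisecond-scale timestamps:
long long expirationWhole = (long long)expirationTime;   // well within the range of long long
NSLog(@"%lld", expirationWhole);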

Checking if an array containsObject:@(float) in Objective-C can unexpectedly return NO

I have some method that receives a float as an argument, and checks to see if this float is in an array. To do this, I first convert the float to a NSNumber. This is a testable simplification of my code:
float aFloat = 0.3;
NSNumber *aNSNumber = @(aFloat);
NSArray *anArray = @[@(0.0), @(0.3), @(1.0)];
NSLog(@"%d", [anArray containsObject:aNSNumber]);
This code will log 0 (i.e. NO), so it's saying that 0.3 is not in anArray. If aFloat is a "round" number such as 0.0, 0.5, or 1.0, the test works and logs 1 (i.e. YES). Any number other than that, like the 0.3 above, fails.
On the other hand, if we change aFloat to be a double, it works. Or, if we change anArray to this:
NSArray *array = @[[NSNumber numberWithFloat:0.0], [NSNumber numberWithFloat:0.3], [NSNumber numberWithFloat:1.0]];
It also works. What I presumed is that the NSNumber @() notation creates a numberWithDouble:.
But, my question is, shouldn't it work even when aFloat is a float? Since I'm "converting" it anyway by saving it in aNSNumber... And shouldn't it automatically recognize that the float 0.3 and the double 0.3 are actually the same number? Also, why do the "round" numbers work anyway?
@(0.3) uses -numberWithDouble: because 0.3 has type double. If you wrote @(0.3f) then it would use -numberWithFloat:.
Neither float nor double can store 0.3 exactly. (It's similar to the problem of writing 1/3 in decimal form - you can't do it exactly using a finite number of digits.) Instead, you get the float closest to 0.3 and the double closest to 0.3. These two numbers are not equal to each other, so -containsObject: can't find a match.
Both float and double can store 0.0 and 1.0 exactly, so both conversions give you the same result and -containsObject: succeeds.
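A small, hedged check makes the asymmetry visible: widening the float nearest 0.3 back to double does not give you the double nearest 0.3, so the boxed numbers compare unequal, while an exactly representable value like 0.5 matches in both types.
float f = 0.3f;                           // nearest float to 0.3
double d = 0.3;                           // nearest double to 0.3
NSLog(@"%d", (double)f == d);             // 0
NSLog(@"%d", [@(f) isEqual:@(d)]);        // 0 -- same mismatch once boxed
NSLog(@"%d", [@(0.5f) isEqual:@(0.5)]);   // 1 -- 0.5 is exact in both types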
The @(0.3) in anArray is a double wrapped in an NSNumber. And of course your aFloat is a float wrapped in an NSNumber.
Try one of the two possible changes:
1) Change float aFloat to double aFloat
or
2) Change @(0.3) in anArray to @(0.3f).
You can visualize the difference between 0.3 and 0.3f with this snippet:
NSLog(#"%.20f %.20f", 0.3, 0.3f);
My debugger shows: 0.29999999999999998890 0.30000001192092895508
Funnily enough, the '0.3' appears more accurate than the '0.3f'. That is expected: the plain 0.3 literal is already a double, while 0.3f is a float carrying less precision (it is widened back to double when passed to NSLog), so the double literal lands much closer to the decimal value 0.3.
Another thing to observe: had you done this:
NSArray *anArray = @[@(0.0), aNSNumber, @(1.0)];
the containsObject call would have succeeded, because the array would then contain the very NSNumber instance you are searching for, so no float-to-double mismatch is involved in the comparison.
In general, I don't like testing for equality with floats, or doubles for that matter. Accuracy problems can throw you for a loop just too easily.
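One hedged alternative, if you must locate a float in an array of boxed doubles, is to search with a tolerance instead of exact object equality; the 1e-6 threshold here is an arbitrary choice for illustration.
float aFloat = 0.3f;
NSArray *anArray = @[@(0.0), @(0.3), @(1.0)];
NSUInteger index = [anArray indexOfObjectPassingTest:^BOOL(NSNumber *number, NSUInteger idx, BOOL *stop) {
    return fabs(number.doubleValue - aFloat) < 1e-6;   // close enough counts as a match
}];
NSLog(@"%d", index != NSNotFound);   // 1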

How to get the value of an NSDecimalNumber from NSDecimal?

I have an array of objects; each object has an NSDecimalNumber property, call it "size".
For each object in the array, I will subtract a recommended size, called rSize.
I then want to go into the resulting NSDecimalNumber and get the value of the delta; I don't really care whether the result is positive or negative.
I think I'm going to use the decimalValue method, which returns an NSDecimal struct, so the question is: which property within the struct will give me the value of the delta?
To rephrase: A NSDecimal represents an NSDecimalNumber, but which property of the NSDecimal struct holds the value?
Many thanks
Rob
Your "delta" appears to be the absolute value of the difference between "rSize" and the item. In that case, you can perform the subtraction (item – rSize), and multiply it by -1 if it is negative entirely within NSDecimalNumber:
NSDecimalNumber *negativeOne = [NSDecimalNumber decimalNumberWithMantissa:1
                                                                 exponent:0
                                                               isNegative:YES];
NSDecimalNumber *delta = [item decimalNumberBySubtracting:rSize];
if ([delta compare:[NSDecimalNumber zero]] == NSOrderedAscending) {
    delta = [delta decimalNumberByMultiplyingBy:negativeOne];
}
Then use the -compare: selector on the resulting delta objects to sort your array of objects.
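A sketch of that sorting step, assuming a hypothetical Item class whose size property is the NSDecimalNumber from the question; items, rSize and negativeOne are placeholders for the poster's own array and the values defined above.
NSArray *sorted = [items sortedArrayUsingComparator:^NSComparisonResult(Item *a, Item *b) {
    NSDecimalNumber *deltaA = [a.size decimalNumberBySubtracting:rSize];
    NSDecimalNumber *deltaB = [b.size decimalNumberBySubtracting:rSize];
    if ([deltaA compare:[NSDecimalNumber zero]] == NSOrderedAscending) {
        deltaA = [deltaA decimalNumberByMultiplyingBy:negativeOne];
    }
    if ([deltaB compare:[NSDecimalNumber zero]] == NSOrderedAscending) {
        deltaB = [deltaB decimalNumberByMultiplyingBy:negativeOne];
    }
    return [deltaA compare:deltaB];   // ascending by absolute delta
}];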
