iOS: calculating sum of file sizes is always negative

I've got a strange problem here, and I'm sure it's just something small.
I receive information about files via JSON (RestKit is doing a good job).
I write the filesize of each file via Core Data to a local store.
Afterwards, within one of my view controllers, I need to sum up the file sizes of all files in the database. I fetch all files and then loop over them (with a for loop) to sum up the sizes.
The problem is: the result is always negative!
The Core Data entity attribute filesize is of type Integer 32 (the filesize is reported in bytes by JSON).
I read the fetch result into an NSArray allPublicationsToLoad and then try to sum it up. The objects in the NSArray, of type CDPublication, have a filesize value of type NSNumber:
for (int n = 0; n < [allPublicationsToLoad count]; n = n + 1)
{
    CDPublication *thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize = allPublicationsSize + [[thePub filesize] integerValue];
    sum = [NSNumber numberWithFloat:([sum floatValue] + [[thePub filesize] floatValue])];
}
Each individual filesize of the CDPublication objects is positive and correct. Only the sum of all the filesizes is negative afterwards. There are around 240 objects right now, with filesize values between 4,000 and 234,645,434,123.
Can somebody please give me a hint in the right direction!?
Is the problem that Integer 32 or NSNumber can't hold such a huge range?
Thanks
MadMaxApp

A 32-bit integer can't hold such a huge number. Because of the way negative numbers are stored, the result wraps around and comes out negative.
Negative numbers are stored using two's complement; this is done to make addition of positive and negative numbers easier. The range of representable numbers is split in two: the highest half (the values for which the highest-order bit is 1) is considered negative, and the lowest half (where the highest-order bit is 0) holds the normal positive numbers. Now, if you add sufficiently large numbers, the result lands in the highest half and is thus interpreted as a negative number. Here's an illustration for the 4-bit integer situation (32 bits work exactly the same, but there would be a lot more 0s and 1s to type ;))
With 4 bits you can represent this range of signed integers:
0000 (=0)
0001 (=1)
0010 (=2)
...
0111 (=7)
1000 (=-8)
1001 (=-7)
...
1111 (=-1)
The maximum positive integer you can represent is 7 in this case. If you added 5 and 4, for example, you would get:
0101 + 0100 = 1001
1001 equals -7 when you represent signed integers like this (and not 9, as you would expect). That's the effect you are observing, but on a much larger scale (32 bits).
Your only option to get correct results in this case is to increase the number of bits used to represent your integers, so the result won't land in the negative range of bit combinations. If 32 bits is not enough (as in your case), you can use a long long (64 bits):
[myNumber longLongValue];
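To see the mechanics concretely, here's a minimal, self-contained sketch; the filesize values are made-up stand-ins for the fetched data, and the 32-bit accumulator plays the role of integerValue on a 32-bit platform:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Hypothetical stand-ins for the fetched filesize values (bytes).
        NSArray *filesizes = @[ @2000000000, @2000000000 ];

        int32_t narrowSum = 0;   // 32-bit accumulator, like integerValue on 32-bit
        long long wideSum = 0;   // 64-bit accumulator

        for (NSNumber *size in filesizes) {
            // Formally, signed overflow is undefined behavior in C; on
            // two's-complement hardware it wraps as described above.
            narrowSum += [size intValue];
            wideSum   += [size longLongValue];
        }
        NSLog(@"32-bit sum: %d", narrowSum);   // -294967296 (wrapped)
        NSLog(@"64-bit sum: %lld", wideSum);   // 4000000000
    }
    return 0;
}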

I think this has to do with int overflow: very large integers get reinterpreted as negatives when they overflow the size of int (32 bits). Use longLongValue instead of integerValue:
long long allPublicationsSize = 0;
for (int n = 0; n < [allPublicationsToLoad count]; n++) {
    CDPublication *thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize += [[thePub filesize] longLongValue];
}

This is an integer overflow issue associated with the use of two's complement arithmetic. For a 32-bit integer there are exactly 2^32 (4,294,967,296) possible integer values which can be expressed. When using two's complement, the most significant bit is used as a sign bit, which allows half of the numbers to represent non-negative integers (when the sign bit is 0) and the other half to represent negative numbers (when the sign bit is 1). This gives an effective range of [-2^31, 2^31 - 1], or [-2,147,483,648, 2,147,483,647].
To overcome this problem in your case, you should consider using a 64-bit integer; that should work well for the range of values you seem to be interested in. Alternatively, if even 64 bits is not sufficient, you should look at big-integer libraries for iOS.
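If you'd rather detect the wrap than merely outgrow it, Clang and GCC offer checked-arithmetic builtins; a minimal sketch:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t a = 2000000000, b = 2000000000, result;

    // __builtin_add_overflow returns true if the sum doesn't fit the result type.
    if (__builtin_add_overflow(a, b, &result)) {
        printf("32-bit addition would overflow; falling back to 64 bits\n");
        long long wide = (long long)a + b;   // promote before adding
        printf("64-bit result: %lld\n", wide);
    }
    return 0;
}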

Related

bin2dec for 16-bit signed binary values (in Google Sheets)

In Google Sheets, I'm trying to convert a 16-bit signed binary number to its decimal equivalent, but the built-in function that does that only takes up to 10 bits. Other solutions to the problem that I've seen don't preserve the signedness.
So far I've tried:
bin2dec on the leftmost 8 bits * 2^8 + bin2dec on the rightmost 8 bits
hex2dec on the result of bin2dec on the leftmost 8 bits concatenated with bin2dec on the rightmost 8 bits
I've also seen a suggestion that multiplies each bit by its power of 2, eliminating bin2dec altogether.
Any suggestions?
You will need to use a custom function. Note that a plain parseInt(bin, 2) treats its input as unsigned, so the sign bit has to be handled explicitly to preserve signedness for 16-bit values:
function binary2decimal(bin) {
    var value = parseInt(bin, 2);
    // Values with the high bit set are negative in 16-bit two's complement.
    return value >= 0x8000 ? value - 0x10000 : value;
}
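Assuming the script is saved in the spreadsheet's Apps Script editor, the function can then be called from a cell like any built-in function, e.g. =binary2decimal(A2).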
Let's assume that your binary number is in cell A2.
First, set the formatting as follows: Format > Number > Plain text.
Then place the following formula in, say, B2:
=ArrayFormula(SUM(SPLIT(REGEXREPLACE(SUBSTITUTE(A2&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,LEN(SUBSTITUTE(A2&"","-","")),LEN(SUBSTITUTE(A2&"","-",""))-1,-1))*IF(LEFT(A2)="-",-1,1)))
This formula will process any length binary number, positive or negative, from 1 bit to 16 bits (and, in fact, to a length of 45 or 46 bits).
What this formula does is SPLIT the binary number (without the negative sign if it exists) into its separate bits, one per column; multiply each of those by 2 raised to the power of each element of an equal-sized degressive SEQUENCE that runs from a high of the LEN (i.e., number) of bits down to zero; and finally apply the negative sign conditionally IF one exists.
If you need to process a range where every value is a positive or negative binary number with exactly 16 bits, you can do so. Suppose that your 16-bit binary numbers are in the range A2:A. First, be sure to select all of Column A and set the formatting to "Plain text" as described above. Then place the following array formula into, say, B2 (being sure that B2:B is empty first):
=ArrayFormula(MMULT(SPLIT(REGEXREPLACE(SUBSTITUTE(FILTER(A2:A,A2:A<>"")&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,16,15,-1)),SEQUENCE(16,1,1,0))*IF(LEFT(FILTER(A2:A,A2:A<>""))="-",-1,1))

Getting garbage value while converting to long in Objective-C

I am trying to convert an NSString to long, but I am getting a garbage value. Below is my code:
long t1 = [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000;
long t2 = [[jsonDict valueForKeyPath:@"detail.fee"] doubleValue] * 10000000000000000;
NSLog(@"t1: %ld", t1);
NSLog(@"t2: %ld", t2);
detail.amount = 51.74
detail.fee = 2.72
Output:
t1: 9223372036854775807 (Getting Garbage value here)
t2: 27200000000000000 (Working fine)
Thanks in advance.
Each number type (int, long, double, float) has limits. For your 64-bit long (because your device is 64-bit), the upper limit is 9,223,372,036,854,775,807 (see here: https://en.wikipedia.org/wiki/9,223,372,036,854,775,807).
In your case, 51.74 * 1,000,000,000,000,000,000 =
51,740,000,000,000,000,000
While Long 64bit only has a maximum of
9,223,372,036,854,775,807
So an overflow happens at 9,223,372,036,854,775,808 and above, which is what your calculation runs into.
Also note that what you are doing will cause problems on 32-bit devices too (like the iPhone 5c or below), where long is only 32 bits wide, with a far smaller range.
It's generally a bad idea to work with such large raw numbers unless you're doing complex maths. If number accuracy is not critical, you should consider simplifying the number, e.g. 51,740G (G = giga), etc.
It's because you're storing the product in long variables t1 and t2.
Use either float or double, and you'll get the expected answer.
Based on C's data types:
Long signed integer type. Capable of containing at least the [−2,147,483,647, +2,147,483,647] range; thus, it is at least 32 bits in size.
Ref: https://en.wikipedia.org/wiki/C_data_types
9223372036854775807 is the maximum value of a 64-bit signed long. I deduce that [[jsonDict valueForKeyPath:@"detail.amount"] doubleValue] * 1000000000000000000 is larger than the maximum long value, so when you convert it to long, you get the closest value that long can represent.
As the other answers explain, it is not possible with long. Since it looks like you are doing financial math, you should use NSDecimalNumber instead of double to solve that problem.
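A minimal sketch of the NSDecimalNumber route (the literal values mirror the question; fetching them from jsonDict is assumed context):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Stand-in for the string fetched from jsonDict.
        NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithString:@"51.74"];

        // Decimal (base-10) scaling: no binary rounding error and no
        // silent wrap-around, unlike the double-to-long conversion.
        NSDecimalNumber *t1 = [amount decimalNumberByMultiplyingByPowerOf10:18];
        NSLog(@"t1: %@", t1);   // 51740000000000000000
    }
    return 0;
}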

sscanf in flex changing value of input

I'm using flex and bison to read in a file that has text but also floating point numbers. Everything seems to be working fine, except that I've noticed that it sometimes changes the values of the numbers. For example,
-4.036 is (sometimes) becoming -4.0359998, and
-3.92 is (sometimes) becoming -3.9200001
The .l file is using the lines:
static float fvalue;
sscanf(specctra_dsn_file_yytext, "%f", &fvalue);
The values pass through the yacc parser and arrive in my own .cpp file as floats with the values described. Not all of the values are changed; even the same value is changed in some occurrences and unchanged in others.
Please let me know if I should add more information.
float cannot represent every number. It is typically 32 bits and so is limited to at most 2^32 different numbers. -4.036 and -3.92 are not in that set on your platform.
float is typically encoded using the IEEE 754 single-precision binary floating-point format (binary32) and rarely encodes fractional decimal values exactly. When a value like -3.92 is assigned, the actual value stored is one close to it, but maybe not exact. In other words, the conversion of -3.92 to float would not have been exact whether it was done by assignment or by sscanf().
float x1 = -3.92;
// x1 holds the exact value -3.9200000762939453125
// Viewed at 6 significant digits: -3.92000
// OP reported: -3.9200001

float x2 = -4.036;
// x2 holds the exact value -4.035999774932861328125
// Viewed at 6 significant digits: -4.03600
// OP reported: -4.0359998
Printing these values beyond a certain number of significant decimal digits (typically 6 for float) can be expected not to match the original assignment. See "Printf width specifier to maintain precision of floating-point value" for a deeper C post.
The OP could lower their expectations of how many digits will match. Alternatively, they could use double and then only see this problem when more than (typically) 15 significant decimal digits are viewed.
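The effect is easy to reproduce in isolation; a minimal C sketch (the values in the comments assume the usual IEEE 754 binary32/binary64 formats):

#include <stdio.h>

int main(void) {
    float f;
    sscanf("-4.036", "%f", &f);   // same conversion the lexer performs

    printf("%.6f\n", f);          // -4.036000  (6 digits: looks exact)
    printf("%.9f\n", f);          // -4.035999775 (nearest float exposed)
    printf("%.9f\n", -4.036);     // -4.036000000 (double carries more digits)
    return 0;
}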

Unexpected result subtracting decimals in ruby [duplicate]

Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, you should format both results explicitly, using String#% and you should format both results the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens not to equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result of 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
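The same behavior is reproducible outside Ruby, since it comes from the underlying double arithmetic; a brief C sketch (printed values assume IEEE 754 doubles):

#include <stdio.h>

int main(void) {
    double x = 129.95;
    printf("%.17g\n", x * 100);       // 12994.999999999998
    printf("%.17g\n", x * 10 * 10);   // 12995
    return 0;
}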

Why can I go up to this number with an integer?

I am learning C and I read in Kernighan & Ritchie's book that the int type is guaranteed to cover at least the range [-32767; 32767]. I tried to verify this assertion by writing the following program, which increments a variable count from 1 until it falls into negative numbers.
#include <stdio.h>

int main(void) {
    int count = 1;
    while (count > 0) {
        count++;
        printf("%d\n", count);
    }
    return 0;
}
And surprisingly I got this output:
1
......
2147483640
2147483641
2147483642
2147483643
2147483644
2147483645
2147483646
2147483647 -> This is a lot more than 32767?!
-2147483648
I do not understand why I get this output. And I doubt Mr. Ritchie made a mistake ;)
You're on a 32- or 64-bit machine, and the C compiler you are using has 32-bit integers. In two's complement binary, the largest positive integer uses 31 bits, i.e. 2^31 - 1 = 2147483647, as you are observing.
Note that this doesn't violate K&R's claim: they say int covers at least the range [-32767; 32767], and a 32-bit int certainly includes that range.
Shorts typically go from -32768 to 32767; 2^15 - 1 is the largest short.
Ints typically go from -2147483648 to 2147483647; 2^31 - 1 is the largest int.
Basically, ints are twice the size you thought.
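A quick way to check what your own platform actually provides is to print the constants from <limits.h>; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
    // The C standard only guarantees minimum ranges; these macros
    // report what this particular implementation uses.
    printf("SHRT_MAX = %d\n", SHRT_MAX);    // typically 32767
    printf("INT_MAX  = %d\n", INT_MAX);     // typically 2147483647
    printf("LONG_MAX = %ld\n", LONG_MAX);   // 2147483647 or 9223372036854775807
    return 0;
}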
