This question already has answers here: Is floating point math broken? (31 answers). Closed 8 years ago.
I am reading from a txt file and populating a Core Data entity.
At some point I have read the value from the TXT file, and the value is @"0.9".
Now I assign it to a CGFloat:
CGFloat value = (CGFloat)[stringValue floatValue];
The debugger shows the value as 0.89999997615814208!
Why? Is this a bug? Even if it thinks [stringValue floatValue] is a double, casting it to CGFloat should not produce that abnormality.
The binary floating point representation used for float can't store the value exactly. So it uses the closest representable value.
It's similar to decimal numbers: It's impossible to represent one third in decimal (instead we use an approximate representation like 0.3333333).
Because to store a float in binary you can only approximate it by summing up fractions like 1/2, 1/4, 1/8, etc. For 0.9 (and many other values) there is no exact representation that can be constructed from summing fractions like this. Whereas if the value were, say, 0.25, you could represent that exactly as 1/4.
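For illustration, here is a small Swift sketch (not the original Objective-C code; the formatting width is an arbitrary choice) that prints the value actually stored:

import CoreGraphics
import Foundation

// "0.9" cannot be represented exactly in binary, so the nearest
// representable float is stored instead.
let stringValue = "0.9"
let asFloat = Float(stringValue)!           // nearest 32-bit value to 0.9
let asCGFloat = CGFloat(asFloat)            // widening the cast does not add precision back

print(String(format: "%.17f", Double(asFloat)))    // prints something like 0.89999997615814209
print(String(format: "%.17f", Double(asCGFloat)))  // same underlying value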
Floating point imprecision; check out http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Basically it has to do with how floating-point numbers work: they don't just store a number, they store a math problem, broken down into a significand and an exponent. The two parts must be combined via a math operation to retrieve the actual value, which doesn't always turn out to be exactly what you assigned to it.
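To make the "two parts combined by a math operation" idea concrete, here is a hedged Swift sketch using the standard FloatingPoint properties (the 0.9 value is just an example):

let f: Float = 0.9

// A binary float is stored as sign * significand * 2^exponent.
print(f.sign)         // plus
print(f.exponent)     // -1
print(f.significand)  // roughly 1.8 (itself only approximately representable)

// Recombining the parts reproduces the stored value exactly.
let rebuilt = Float(sign: f.sign, exponent: f.exponent, significand: f.significand)
print(rebuilt == f)   // true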
I see that, unlike Double, Int in Swift does not have infinity. The only things we have are Int.max and Int.min, which are actual numbers, and (Int.max - 1) is not the same as Int.max. I need to perform operations such as:
// maximumDuration is an Int; width, widthPerSecond, currentWidth are CGFloat, all positive
width = max(CGFloat(maximumDuration) * widthPerSecond, currentWidth)
So if maximumDuration is Int.max, CGFloat(maximumDuration) * widthPerSecond may not be Int.max. In fact, comparisons may not be reliable due to overflow.
What's the way out to have true infinity when using the Int datatype? One way would be to use Double instead of Int, but that would require so many type casts everywhere else in the code.
All the integer types are simple scalars. All the bits hold value (plus a sign bit for the signed variants). There are no spare bits for marking things like NaN (not a number), infinity, or normalized/non-normalized.
There is simply no way to represent infinity with binary integer types. This is not unique to Swift. It is true of just about all languages/platforms.
Floating point types use an IEEE format that reserves some bits for special cases like infinity.
You could create an enum with associated values that had cases for negative and positive infinity, NAN, and the like, but you'd have the same casting/code rework problems that you're trying to avoid with floats.
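For illustration, a minimal sketch of what such an enum might look like (the type name, cases, and comparison logic are all hypothetical, not part of the standard library):

// A hypothetical integer-or-infinity wrapper.
enum ExtendedInt: Comparable {
    case negativeInfinity
    case finite(Int)
    case positiveInfinity

    static func < (lhs: ExtendedInt, rhs: ExtendedInt) -> Bool {
        switch (lhs, rhs) {
        case let (.finite(a), .finite(b)):
            return a < b
        case (.negativeInfinity, .negativeInfinity), (.positiveInfinity, .positiveInfinity):
            return false
        case (.negativeInfinity, _), (_, .positiveInfinity):
            return true
        default:
            return false
        }
    }
}

let duration: ExtendedInt = .positiveInfinity
print(duration > .finite(Int.max))   // true

The trade-off is exactly the one mentioned above: every piece of code that previously took an Int now has to unwrap the enum, so the casting/rework problem does not go away.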
Edit:
Interestingly, in Binary Coded Decimal (BCD) there are spare bits. I wonder if there is a standard for indicating special values like infinity in BCD?
let xCoordinate: CGFloat = 1.4
XCTAssertEqual(view.frame.origin.x, xCoordinate)
I got the following error while running a test:
XCTAssertEqual failed: ("1.4") is not equal to ("1.4") -
Does anyone have any solutions or explanations?
If I am not mistaken, take a look at this option for the evaluation of CGFloat values:
XCTAssertEqual(_, _, accuracy:)
In this case you could set an accuracy to evaluate CGFloat numbers, because evaluating them without taking accuracy into account is not right.
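For example, a hedged sketch of such a test (the class name, view setup, and tolerance are illustrative choices, not taken from the question):

import UIKit
import XCTest

final class FrameTests: XCTestCase {
    func testOriginX() {
        let xCoordinate: CGFloat = 1.4
        let view = UIView(frame: CGRect(x: xCoordinate, y: 0, width: 10, height: 10))

        // In the real case origin.x comes out of layout math, so it may differ
        // from 1.4 by a tiny amount; compare within a tolerance instead of exactly.
        XCTAssertEqual(view.frame.origin.x, xCoordinate, accuracy: 0.0001)
    }
}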
Does anyone have any solutions or explanations?
The solution is what Oleg has suggested. The explanation is that 1.4 is not expressible as a CGFloat. In normal decimals, you may have noticed that, unless the denominator of a fraction is only divisible by 2 or 5 (the factors of 10) when you try ti convert it to a decimal number, it goes on forever. eg. 1/3 is 0.33333333333...
The same applies to CGFloats except the number base is 2, not 10. 1.4 is 7/5. 5 is not divisible by 2, therefore if converted to a binary number, it would repeat forever. In fact, it would be 1.0110011...
So your view.frame.origin.x is likely to be a number close to 1.4 but not exactly 1.4 and your xCoordinate will be a different number very close to 1.4. These two numbers do not compare equal, but when rounded to say six decimal places to be printed, look like 1.4.
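A small Swift illustration of that last point (the format width is arbitrary):

import CoreGraphics
import Foundation

let xCoordinate: CGFloat = 1.4

// Printed with enough digits, the stored value is not exactly 1.4 ...
print(String(format: "%.17f", Double(xCoordinate)))  // e.g. 1.39999999999999991

// ... but the default, shorter description still reads "1.4".
print(xCoordinate)                                   // 1.4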
I need to upload JSON data from an app (iOS) to the backend server.
The goal is to optimise the size of the upload packet which is JSON encoded as a NSString. The string is currently about 5MB but contains mostly doubles which have more precision than necessary.
The size of the packet can be reduced by around 40-50% by removing unnecessary decimal places in doubles. This has to be customisable based on the key.
What is the best way to create a JSON string with different numbers of significant figures or decimal places depending on the key?
You may need to do some experiments. Let's say you want to send data with two decimal digits, like 3.14 instead of pi. You know you have to turn all numbers into NSNumber. You would turn a double into a number with two decimals by writing:
double asDouble = 3.141592653;
NSNumber *asNumber = @(round(asDouble * 100.0) / 100.0);
However, you need to check that this always works; with some bad luck this could send 3.140000000000000000000001 to your server.
Obviously you can replace the 100.0 with 1000.0 etc. Do not replace the division with a multiplication by 0.01 because that will increase rounding errors and the chance that you get tons of decimal digits.
You might check what happens if you write
NSNumber *asNumber = @((float) asDouble);
If NSJSONSerialization is clever enough, it will send fewer decimals.
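To address the per-key part of the question, here is a hedged Swift sketch (the precision map, key names, and sample values are hypothetical) that rounds each value before serialization:

import Foundation

// Hypothetical precision map: decimal places to keep, per JSON key.
let decimalsByKey: [String: Int] = ["latitude": 5, "speed": 1]

// Example payload; in practice this comes from the app's model objects.
var payload: [String: Any] = ["latitude": 51.507350921631, "speed": 4.27319384]

for (key, places) in decimalsByKey {
    if let value = payload[key] as? Double {
        let factor = pow(10.0, Double(places))
        payload[key] = (value * factor).rounded() / factor
    }
}

let data = try! JSONSerialization.data(withJSONObject: payload)
print(String(data: data, encoding: .utf8)!)
// The rounded values usually serialize with fewer digits, but as noted above,
// binary rounding means this is not guaranteed for every value.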
Good morning all,
I'm having some issues with floating point math, and have gotten totally lost in ".to_f"'s, "*100"'s and ".0"'s!
I was hoping someone could help me with my specific problem, and also explain exactly why their solution works so that I understand this for next time.
My program needs to do two things:
Sum a list of decimals, determine if they sum to exactly 1.0
Determine the difference between 1.0 and a sum of numbers, and set the value of a variable to that exact difference so the sum equals 1.0.
For example:
[0.28, 0.55, 0.17] -> should sum to 1.0; however, I keep getting 1.xxxxxx. I am implementing the sum in the following fashion:
sum = array.inject(0.0) { |sum, x| sum + (x * 100) } / 100
The reason I need this functionality is that I'm reading in a set of decimals that come from Excel. They are not 100% precise (they are lacking some decimal places), so the sum usually comes out as 0.999999xxxxx or 1.000xxxxx. For example, I will get values like the following:
0.568887955,0.070564759,0.360547286
To fix this, I am ok taking the sum of the first n-1 numbers, and then changing the final number slightly so that all of the numbers together sum to 1.0 (must meet validation using the equation above, or whatever I end up with). I'm currently implementing this as follows:
sum = 0.0
array.each do |item|
  sum += item * 100.0
end
# i is the index of the final element being adjusted
array[i] = (100 - sum.round) / 100.0
I know I could do this with inject, but was trying to play with it to see what works. I think this is generally working (from inspecting the output), but it doesn't always meet the validation sum above. So if need be I can adjust this one as well. Note that I only need two decimal precision in these numbers - i.e. 0.56 not 0.5623225. I can either round them down at time of presentation, or during this calculation... It doesn't matter to me.
Thank you VERY MUCH for your help!
If accuracy is important to you, you should not be using floating point values, which, by definition, are not accurate. Ruby has some precision data types for doing arithmetic where accuracy is important. They are, off the top of my head, BigDecimal, Rational and Complex, depending on what you actually need to calculate.
It seems that in your case, what you're looking for is BigDecimal, which is basically a number with a fixed number of digits, a fixed number of which come after the decimal point (in contrast to a floating point, which has an arbitrary number of digits after the decimal point).
When you read from Excel and deliberately cast those strings like "0.9987" to floating points, you're immediately losing the accurate value that is contained in the string.
require "bigdecimal"
BigDecimal("0.9987")
That value is precise. It is 0.9987. Not 0.998732109, or anything close to it, but 0.9987. You may use all the usual arithmetic operations on it. Provided you don't mix floating points into the arithmetic operations, the return values will remain precise.
If your array contains the raw strings you got from Excel (i.e. you haven't #to_f'd them), then this will give you a BigDecimal that is the difference between the sum of them and 1.
1 - array.map{|v| BigDecimal(v)}.reduce(:+)
Either:
continue using floats and round(2) your totals: 12.341.round(2) # => 12.34
use integers (i.e. cents instead of dollars)
use BigDecimal and you won't need to round after summing them, as long as you start with BigDecimal with only two decimals.
I think that algorithms have a great deal more to do with accuracy and precision than a choice of IEEE floating point over another representation.
People used to do some fine calculations while still dealing with accuracy and precision issues. They'd do it by managing the algorithms they'd use and understanding how to represent functions more deeply. I think that you might be making a mistake by throwing aside that better understanding and assuming that another representation is the solution.
For example, no polynomial representation of a function will deal with an asymptote or singularity properly.
Don't discard floating point so quickly. It could be that being smarter about the way you use it will do just fine.
I am parsing some vertex information from an XML file, which reads as follows (partial extract):
21081.7 23447.6 2781.62 24207.4 18697.3 -2196.96
I save the string as an NSString and then convert it to a float value (which I will later feed into OpenGL ES):
NSString * xPoint = [finishedParsingArray objectAtIndex:baseIndex];
NSLog(@"xPoint is %@", xPoint);
float x = [xPoint floatValue];
The problem is that float x changes the values as follows:
21081.699219, 23447.599609, 2781.620117, 24207.400391, 18697.300781, -2196.959961
As you can see, it is changing the number of decimal places (not sure how it is doing this; must be hidden formatting in the XML file?).
My question is: how can I store the float so that it matches the original number in the NSString / XML file to the same number of decimal places?
Thanks in advance !
Your issue seems to be that you don't understand how floats are stored in memory and don't know that floats aren't precise.
Exact values often can't be stored and so the system picks the closest number it can to represent it. If you look carefully, you can see that each of the outputted numbers is very close to your inputted values.
For better accuracy, try using double instead. Double does encounter the same problems, but with better precision. Floats have about 6 significant digits; doubles have more than twice that.
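For illustration, a hedged Swift sketch using one of the values from the question to show the difference in significant digits:

import Foundation

let xPoint = "21081.7"   // one of the values from the question

// Float keeps roughly 6-7 significant decimal digits, Double roughly 15-16.
let asFloat = Float(xPoint)!
let asDouble = Double(xPoint)!

print(String(format: "%.6f", asFloat))   // e.g. 21081.699219
print(String(format: "%.6f", asDouble))  // e.g. 21081.700000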
Here are some other StackOverflow answers and external articles you should read:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Floating Points on Wikipedia
This answer on a similar question
All primitives which store floating point numbers have an accuracy issue. Most of the time it's so small it doesn't matter, but sometimes it's vital.
When it's important to keep the exact number, I would suggest using NSDecimalNumber.
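For example, a hedged Swift sketch (the value is taken from the question above; the OpenGL step is only indicated):

import Foundation

// NSDecimalNumber keeps the value in decimal, so "21081.7" stays 21081.7
// instead of becoming the nearest binary float.
let xPoint = "21081.7"
let decimal = NSDecimalNumber(string: xPoint)

print(decimal)                                          // 21081.7
print(decimal.adding(NSDecimalNumber(string: "0.1")))   // 21081.8, no binary rounding

// If OpenGL ES ultimately needs a float, convert only at the last step.
let vertexX = decimal.floatValue
print(vertexX)                                          // now subject to float precision again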