Getting weird values in Double - iOS

Hello, I made a "Clicker" as my first project while learning Swift. I have an automated timer that is supposed to subtract some numbers from other numbers, but sometimes I get values like 0.600000000000001 and I have no idea why.
Here is my "Attack" function, which removes 0.2 from the health of a zombie:
let fGruppenAttackTimer = NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: Selector("fGruppenAttackTime"), userInfo: nil, repeats: true)
func fGruppenAttackTime() {
    zHealth -= 0.2
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
    ...
}
And here is my attackZ button action, which is supposed to remove 1 from the health of the zombie:
@IBAction func attackZ(sender: UIButton) {
    zHealth -= Double(pAttack)
    fHunger -= 0.05
    fGruppenHunger.progress = Float(fHunger / 100)
    Actionlbl.text = ""
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
}
Lastly, here are the variables' values:
var zHealth = 10.0
var zSize = 10.0
var pAttack = 1
var pPengar = 0
var pPengarut = 1
When the timer is on and the function is running, and I click the button, I sometimes get weird values like 0.600000000000001, and if I change the 0.2 in the function to 0.25 I sometimes get 0.0999999999999996. I wonder why this happens and what to do about it.

In trojanfoe's answer, he shares a link that describes the source of the problem regarding rounding of floating point numbers.
In terms of what to do, there are a number of approaches:
You can shift to integer types. For example, if your existing values can all be represented with a maximum of two decimal places, multiply those by 100 and then use Int types everywhere, excising the Double and Float representations from your code.
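For example, a minimal sketch of that integer-based approach (zHealthHundredths is just an illustrative name, not from the original code):
// Keep health in hundredths of a point so every value is an exact Int.
var zHealthHundredths = 1000                             // represents 10.00
zHealthHundredths -= 20                                  // the timer's 0.2 damage
zHealthHundredths -= 100                                 // the button's 1.0 damage
let displayedHealth = Double(zHealthHundredths) / 100    // 8.8, converted only for display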
You can simply deal with the very small variations that the Double type introduces. For example:
If displaying the results in the UI, use NumberFormatter to convert the Double value to a String using a specified number of decimal places.
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 0 // or you might use `2` here, too
formatter.numberStyle = .decimal
print(formatter.string(for: value)!)
By the way, NumberFormatter enjoys another benefit, too, namely that it honors the user's localization settings. For example, if the user lives in Germany, where the decimal separator is a , rather than a ., NumberFormatter will use the user's native number formatting.
When testing whether a number is equal to some value, rather than just using the == operator, look at the difference between the two values and see whether it is within some permissible rounding threshold.
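For instance, a sketch of such a tolerance-based comparison (the 0.0001 threshold is an arbitrary illustrative choice):
func isApproximatelyEqual(_ a: Double, _ b: Double, tolerance: Double = 0.0001) -> Bool {
    return abs(a - b) <= tolerance
}
let sum = 0.1 + 0.2                    // 0.30000000000000004
print(sum == 0.3)                      // false
print(isApproximatelyEqual(sum, 0.3))  // true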
You can use Decimal/NSDecimalNumber, which doesn't suffer from rounding issues when dealing with decimals:
var value = Decimal(string: "1.0")!
value -= Decimal(string: "0.9")!
value -= Decimal(string: "0.1")!
Or:
var value = Decimal(1)
value -= Decimal(sign: .plus, exponent: -1, significand: 9)
value -= Decimal(sign: .plus, exponent: -1, significand: 1)
Or:
var value = Decimal(1)
value -= Decimal(9) / Decimal(10)
value -= Decimal(1) / Decimal(10)
Note, I explicitly avoid using any Double values such as Decimal(0.1), because creating a Decimal from a fractional Double only captures whatever imprecision the Double entails, whereas the three examples above avoid that entirely.

It's because of floating point rounding errors.
For further reading, see What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Squeezing infinitely many real numbers into a finite number of bits
requires an approximate representation. Although there are infinitely
many integers, in most programs the result of integer computations can
be stored in 32 bits. In contrast, given any fixed number of bits,
most calculations with real numbers will produce quantities that
cannot be exactly represented using that many bits. Therefore the
result of a floating-point calculation must often be rounded in order
to fit back into its finite representation. This rounding error is the
characteristic feature of floating-point computation.
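A quick sketch of the effect in Swift: 0.2 has no exact binary representation, so each subtraction rounds the true result to the nearest representable Double, and the tiny error shows up in the stored value.
var value = 1.0
value -= 0.2
value -= 0.2
print(value)  // prints something like 0.6000000000000001, not 0.6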

Related

toStringAsFixed() returns a round-up value of a given number in Dart

I want to get one decimal place of a double in Dart. I use the toStringAsFixed() method to get it, but it returns a rounded-up value.
double d1 = 1.151;
double d2 = 1.150;
print('$d1 is ${d1.toStringAsFixed(1)}');
print('$d2 is ${d2.toStringAsFixed(1)}');
Console output:
1.151 is 1.2
1.15 is 1.1
How can I get it without rounding up? Like 1.1 for 1.151 too. Thanks in advance.
Not rounding seems highly questionable to me¹, but if you really want to truncate the string representation without rounding, then I'd take the string representation, find the decimal point, and create the appropriate substring.
There are a few potential pitfalls:
The value might be so large that its normal string representation is in exponential form. Note that double.toStringAsFixed just returns the exponential form anyway for such large numbers, so maybe do the same thing.
The value might be so small that its normal string representation is in exponential form. double.toStringAsFixed already handles this, so instead of using double.toString, use double.toStringAsFixed with the maximum number of fractional digits.
The value might not have a decimal point at all (e.g. NaN, +infinity, -infinity). Just return those values as they are.
extension on double {
  // Like [toStringAsFixed] but truncates (toward zero) to the specified
  // number of fractional digits instead of rounding.
  String toStringAsTruncated(int fractionDigits) {
    // Require same limits as [toStringAsFixed].
    assert(fractionDigits >= 0);
    assert(fractionDigits <= 20);
    if (fractionDigits == 0) {
      return truncateToDouble().toString();
    }
    // [toString] will represent very small numbers in exponential form.
    // Instead use [toStringAsFixed] with the maximum number of fractional
    // digits.
    var s = toStringAsFixed(20);
    // [toStringAsFixed] will still represent very large numbers in
    // exponential form.
    if (s.contains('e')) {
      // Ignore values in exponential form.
      return s;
    }
    // Ignore unrecognized values (e.g. NaN, +infinity, -infinity).
    var i = s.indexOf('.');
    if (i == -1) {
      return s;
    }
    return s.substring(0, i + fractionDigits + 1);
  }
}
void main() {
  var values = [
    1.151,
    1.15,
    1.1999,
    -1.1999,
    1.0,
    1e21,
    1e-20,
    double.nan,
    double.infinity,
    double.negativeInfinity,
  ];
  for (var v in values) {
    print(v.toStringAsTruncated(1));
  }
}
Another approach one might consider is to multiply by pow(10, fractionDigits), use double.truncateToDouble, divide by the same power of 10, and then use .toStringAsFixed(fractionDigits). That could work for human-scaled values, but it could generate unexpected results for very large values due to precision loss from floating-point arithmetic. (This approach would work if you used package:decimal instead of double, though.)
¹ Not rounding seems especially bad given that using doubles to represent fractional base-10 numbers is inherently imprecise. For example, since the closest IEEE-754 double-precision floating-point number to 0.7 is 0.6999999999999999555910790149937383830547332763671875, do you really want 0.7.toStringAsTruncated(1) to return '0.6' instead of '0.7'?

Divide two numbers and return fraction in Swift

I know this is a simple question, but I would like to get a result like this:
3/6 = 0.500000
Divide two numbers and return the quotient and remainder in a single variable. How can I achieve the above in Swift?
To get the quotient and remainder of a division, you can use the quotientAndRemainder(dividingBy:) function.
3.quotientAndRemainder(dividingBy: 6) // (quotient 0, remainder 3)
If you want to get the floating point result of a division, use the / operator on two floating point numbers.
Either do
let result = 3.0 / 6.0 // 0.5
or if your integers are coming from variables, convert them first:
let a = 3, b = 6
let result = Double(a) / Double(b) // 0.5
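If you need both results together in one variable, a short sketch of keeping the returned tuple and reading its labeled elements:
let division = 7.quotientAndRemainder(dividingBy: 2)
print(division.quotient)   // 3
print(division.remainder)  // 1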

Swift .isNaN: understand how it works

I am currently facing an issue in understanding how .isNaN works.
I am maintaining an application (written in Swift 2.3) that I did not develop myself.
We get a fair number of crashes from this code, and I don't understand how that can happen.
Here is the method, which is simply a formatting method that sets the appropriate value for a label by testing different cases.
static func formatFloat(float: Float?, withMaxDigits max: Int, andUnit unit: String) -> String {
    var label: String = "-"
    if let float = float {
        let numberFormatter = NSNumberFormatter()
        numberFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle
        numberFormatter.minimumFractionDigits = 0
        numberFormatter.maximumFractionDigits = max
        numberFormatter.roundingMode = .RoundHalfUp
        if !float.isNaN {
            var formattedValue = numberFormatter.stringFromNumber(float)!
            if formattedValue == "-0" {
                formattedValue = "0"
            }
            label = "\(formattedValue) \(unit)"
        }
    }
    return label
}
Am I right that it just checks whether the value is NaN or not, in order to handle the different cases and set the text accordingly?
I read some posts and documentation and I don't understand this:
In some languages NaN != NaN, but this isn't the case in Cocoa.
What about nil and NaN? I mean, isNaN checks for that, right?
The IEEE floating-point spec documents certain bit patterns that represent NaN, i.e. invalid results.
nil is different from NaN. In Swift, only an optional can contain nil, and it indicates the absence of a value.
A NaN means you performed some operation that resulted in an invalid result. You should check the isNaN property to see if a number contains a NaN.
Edit:
Note that there are many different bit patterns that are treated as NaN, and under IEEE-754 rules a .nan value never compares equal to another .nan.
No, NaN is a value that a floating-point type can take; nil can only be assigned to optional variables. Also, I'm not sure where you got that quote, but .nan == .nan is false. For more information, read https://developer.apple.com/reference/swift/floatingpoint
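A quick sketch of that behavior in a Swift playground:
let bad = 0.0 / 0.0        // dividing zero by zero yields NaN (it does not crash)
print(bad.isNaN)           // true
print(bad == Double.nan)   // false: NaN never compares equal to anything, itself included
print(bad != bad)          // true, for the same reason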

String to Double conversion loses precision in Swift

I want to convert a string to a double and keep the same value:
let myStr = "2.40"
let numberFormatter = NSNumberFormatter()
numberFormatter.locale = NSLocale(localeIdentifier: "fr_FR")
let myDouble = numberFormatter.numberFromString(myStr)?.doubleValue ?? 0.0
myDouble is now
Double? 2.3999999999999999
So how can I convert "2.40" to be exactly 2.40 as a Double?
Update:
Even rounding after conversion does not seem to work
I don't want to print the value, I want to calculate with it, and it's important that the number be correct; it's a money calculation with rates.
First off: you don't! What you encountered here is called floating-point inaccuracy. Computers cannot store every number precisely, and 2.4 cannot be stored losslessly in a binary floating-point type.
Secondly: since floating point is always an issue and you are dealing with money here (I guess you are trying to store 2.4 francs), your number one solution is: don't use floating-point numbers. Use the NSNumber you get from numberFromString and do not try to get a Double out of it.
Alternatively, shift the decimal point by multiplying and store the value as an Int.
The first solution might look something like this:
if let num = numberFormatter.numberFromString(myStr) {
    let value = NSDecimalNumber(decimal: num.decimalValue)
    let output = value.decimalNumberByMultiplyingBy(NSDecimalNumber(integer: 10))
}
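The alternative mentioned above, shifting the decimal point and storing an integer number of centimes, might look like this sketch (amountInCents is just an illustrative name):
import Foundation
// Keep money as an integer count of the smallest unit so arithmetic stays exact.
var amountInCents = 240                                   // "2.40"
amountInCents += 60                                       // + "0.60"
let display = String(format: "%.2f", Double(amountInCents) / 100.0)
print(display)                                            // "3.00"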

Round function doesn't work as expected

I'm trying to round a Double value to two decimal places:
var x = 0.68999999999999995
var roundX = round(x * 100.0) / 100.0
println(roundX) // print 0.69
When printed, the value is correct, but the variable's value isn't what I expect; it is still 0.68999999999999995.
I need the Double value, not a String like in other Stack Overflow answers :(
Floating-point numbers like Double do not store a number of decimal places. They store values in binary, and a value like 0.69 can't be represented exactly. It's just the nature of binary floating point on computers.
Use a number formatter, or use String(format:) as @KRUKUSA suggests:
var x:Double = 0.68999999999999995
let stringWithTwoDecimals = String(format: "%.2f", x)
println(stringWithTwoDecimals)
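If you genuinely need a value, rather than a display string, that holds exactly two decimal places, a hedged sketch using Decimal (from Foundation) instead of Double:
import Foundation
// A Double can never hold exactly 0.69, but a Decimal can.
var x = Decimal(string: "0.69")!
x += Decimal(string: "0.01")!
print(x)  // 0.7, with no stray digits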
