Swift rounding to X decimal places issue when .999999 - iOS

I tried different methods of rounding to a number of decimal places, and they all share the same problem. When I use a number, let's say 0.99999, and I want to round it to 2 decimal places, my expected result would be 0.99, but instead I get 1.00.
I tried:
let divisor = pow(10.0, Double(decimals))
let roundedVal = round(value * divisor) / divisor
I also tried:
String(format: "%.2f", value)
And
let behavior = NSDecimalNumberHandler(roundingMode: .plain, scale: Int16(decimals), raiseOnExactness: false, raiseOnOverflow: false, raiseOnUnderflow: false, raiseOnDivideByZero: true)
NSDecimalNumber(value: value).rounding(accordingToBehavior: behavior)
let roundedValue2 = NSDecimalNumber(value: 0.6849).rounding(accordingToBehavior: behavior)
All methods give me the same issue.
Any ideas?
Thanks for the help!
EDIT:
The idea is that rounding is okay for all cases except that 0.9999 case. The displayed numbers are always small (in the range 0.000 to 1), and the number of decimals to show is a parameter, so 0.348 should become 0.35 and not 0.34 (as it would if truncated).

let amount = 0.99999999999999
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 2
formatter.roundingMode = .floor // rounding mode floor is the key here
let formattedAmount = formatter.string(from: amount as NSNumber)!
print(formattedAmount) // 0.99
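Note that .floor truncates in every case, so with this formatter 0.348 comes out as 0.34, which conflicts with the requirement in the edit. If the intent is to round normally but never let a sub-1 value display as 1.00, one illustrative sketch is to round first and fall back to flooring only at the boundary (the roundClamped name and the clamp-below-1 rule are made up for illustration, not from the original answer):
import Foundation
// Sketch: round to `decimals` places, but never let a value below 1
// round up to 1.0. The name and the boundary rule are illustrative.
func roundClamped(_ value: Double, decimals: Int) -> Double {
    let divisor = pow(10.0, Double(decimals))
    let rounded = (value * divisor).rounded() / divisor
    // If normal rounding pushed a sub-1 value up to 1, floor it instead.
    if value < 1, rounded >= 1 {
        return (value * divisor).rounded(.down) / divisor
    }
    return rounded
}
print(roundClamped(0.348, decimals: 2))   // 0.35 (rounded, per the edit)
print(roundClamped(0.99999, decimals: 2)) // 0.99 (floored at the boundary)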

Related

NumberFormatter Fraction Digits confusion (swift)

I am trying to format numbers so that there are always 4 digits after the decimal place. For example:
1 // 1.0000
0 // 0.0000
1.23 // 1.2300
1.234 // 1.2340
1.2345 // 1.2345
1.23456 // 1.2346
I have tried all kinds of combinations of the following:
let formatter = NumberFormatter()
formatter.usesSignificantDigits = true // I believe this is the default, so not required
formatter.numberStyle = .decimal
formatter.maximumSignificantDigits = 4
formatter.minimumSignificantDigits = 4
formatter.maximumFractionDigits = 4
formatter.minimumFractionDigits = 4
let p = formatter.string(from: NSNumber(value: percentage))
debugPrint("p = \(p)")
But in two of the cases, this is what I get:
0 // 0.000
0.0123456 // 0.01234
Here is an example of the debug output:
"p = 0.9375"
"p = 0.000"
"p = 0.03125"
"p = 0.000"
"p = 0.03125"
What am I missing?
[I thought I had seen really good explanation in here some time ago, but can no longer find it - if anyone could drop a link to it, that would be great too!]
If you are trying to dictate the number of decimal places, then simply remove this significant digits stuff:
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 4
formatter.minimumFractionDigits = 4
let values: [Double] = [
1, // 1.0000
0, // 0.0000
1.23, // 1.2300
1.234, // 1.2340
1.2345, // 1.2345
1.23456 // 1.2346 ... if you really want 1.2345, then change formatter’s `roundingMode` to `.down`.
]
let strings = values.map { formatter.string(for: $0) }
That yields the four digits after the decimal point, as desired.
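For contrast, here is a minimal sketch of the two modes side by side: once usesSignificantDigits is true, the fraction-digit settings are ignored, which is why 0 came out as 0.000 (four significant digits) rather than 0.0000 (four fraction digits):
import Foundation
// Significant-digits mode: counts digits from the first non-zero digit.
let significant = NumberFormatter()
significant.numberStyle = .decimal
significant.usesSignificantDigits = true
significant.minimumSignificantDigits = 4
significant.maximumSignificantDigits = 4
// Fraction-digits mode: counts digits after the decimal point.
let fractional = NumberFormatter()
fractional.numberStyle = .decimal
fractional.minimumFractionDigits = 4
fractional.maximumFractionDigits = 4
print(significant.string(for: 0)!) // 0.000  (four significant digits)
print(fractional.string(for: 0)!)  // 0.0000 (four fraction digits)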

How to return 0 instead of NaN in Swift?

let need = (Singleton.shared.getTotalExpense(category: .need) / Singleton.shared.getTotalIncome()) * 100
needsLabel.text = "" + String(format: "%.f%%", need)
If total expenses and total income are both zero, I don't want NaN to be returned. How can I make it so that if need is NaN, it returns 0?
You'd need to check whether both the numerator and denominator are zero (which is one reason the division would result in NaN) and conditionally assign zero to the result.
let expenses = Singleton.shared.getTotalExpense(category: .need)
let income = Singleton.shared.getTotalIncome()
let need = expenses == 0 && income == 0 ? 0.0 : expenses / income * 100
You could also check whether the result of the division is NaN (which it would also be if either operand was NaN):
let ratio = expenses / income * 100
let need = ratio.isNaN ? 0.0 : ratio
Be aware that you might also need to handle a scenario where you divide non-zero by zero - this would result in Inf - it all depends on your use case:
if ratio.isInfinite {
// do something else, like throw an exception
}
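If this check is needed in more than one place, it can be wrapped in a small helper; a sketch (the safePercentage name is made up for illustration) that uses isFinite to cover both the NaN and Inf cases at once:
// Hypothetical helper: returns 0 whenever the division is undefined.
func safePercentage(_ numerator: Double, _ denominator: Double) -> Double {
    let ratio = numerator / denominator * 100
    return ratio.isFinite ? ratio : 0
}
print(safePercentage(50, 200)) // 25.0
print(safePercentage(0, 0))    // 0.0 (instead of NaN)
print(safePercentage(50, 0))   // 0.0 (instead of Inf)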

Zero symbol in iOS MeasurementFormatter

I'm having problems declaring/using a zero symbol for an unknown value when using MeasurementFormatter:
let numberFormatter = NumberFormatter()
numberFormatter.numberStyle = .decimal
numberFormatter.zeroSymbol = "?"
numberFormatter.string(from: 0.0) // '?'
let formatter = MeasurementFormatter()
formatter.unitOptions = .providedUnit
formatter.numberFormatter = numberFormatter
var distance = Measurement<UnitLength>(value: 0, unit: .parsecs)
formatter.string(from: distance) // '0 pc' - expected: '? pc'
Trying different declarations of the value such as Double.zero doesn't change the output.
Is this a conceptual thing in iOS or am I missing something here?
It turned out that the desired output can be produced by changing the declaration of the Measurement (distance):
let dist1 = Measurement<UnitLength>(value: 0, unit: .parsecs) // output: '0 pc'
let dist2 = Measurement(value: 0, unit: Unit(symbol: UnitLength.parsecs.symbol)) // output '? pc' as expected
A radar has been filed.

Mathematical integrity of NSDecimalNumber

I'm using numbers divided by 10^30
I may be adding values like 1000000000000000 and 5000000000000000 stored in NSDecimalNumbers.
My concern is that I think I've seen, a few times, incorrect math being done when adding or subtracting these values.
Is that a possibility, or is NSDecimalNumber's math pretty sound in terms of its integrity?
In answer to your question, the math offered by Decimal/NSDecimalNumber is sound, and the problem probably rests in one of two things:
First, the calculations might exceed the capacity of these decimal formats (as outlined by rob mayoff). For example, this works because we're within the 38-digit mantissa:
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 30, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,001,000,000,000,000,000,000,000,000,000,000
But this will not, because the exact sum of 10^60 and 10^10 would need a 51-digit mantissa, more than the 38 digits available, so the smaller addend is silently rounded away:
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 10, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
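If you need to detect this silent truncation rather than just avoid it, the C-style NSDecimal functions report a calculation error; a sketch (an addition of mine, assuming the error reporting described in the NSDecimalAdd documentation):
import Foundation
var x = Decimal(sign: .plus, exponent: 60, significand: 1)
var y = Decimal(sign: .plus, exponent: 10, significand: 1)
var sum = Decimal()
// NSDecimalAdd reports .lossOfPrecision when the exact result does not
// fit in the 38-digit mantissa and digits had to be rounded away.
if NSDecimalAdd(&sum, &x, &y, .plain) == .lossOfPrecision {
    print("the smaller addend was lost")
}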
Second, it could just be how you are instantiating these decimal values, e.g. supplying a floating-point number rather than using the Decimal(sign:exponent:significand:) or NSDecimalNumber(mantissa:exponent:isNegative:) initializers.
For example, this works fine:
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
let x = Decimal(sign: .plus, exponent: 30, significand: 1)
print(formatter.string(for: x)!)
That results in:
1,000,000,000,000,000,000,000,000,000,000
But these won't, because you're supplying a floating-point number, which cannot represent this value exactly:
let y = Decimal(1.0e30)
print(formatter.string(for: y)!)
let z = Decimal(1_000_000_000_000_000_000_000_000_000_000.0)
print(formatter.string(for: z)!)
These both result in:
1,000,000,000,000,000,409,600,000,000,000
For more information on floating-point arithmetic (and why certain decimal numbers cannot be perfectly captured in floating-point types), see floating-point arithmetic.
In your other question, you ask why the following:
let foo = NSDecimalNumber(value: 334.99999).multiplying(byPowerOf10: 30)
produced:
334999990000000051200000000000000
This is the same underlying issue that I outlined above in point 2. Floating point numbers cannot accurately represent certain decimal values.
Note, your question is the same as the following Decimal rendition:
let adjustment = Decimal(sign: .plus, exponent: 30, significand: 1)
let foo = Decimal(334.99999) * adjustment
This also produces:
334999990000000051200000000000000
But you will get the desired result if you supply either a string or an exponent and mantissa/significand, because these will be accurately represented as a Decimal/NSDecimalNumber:
let bar = Decimal(string: "334.99999")! * adjustment
let baz = Decimal(sign: .plus, exponent: -5, significand: 33499999) * adjustment
Those both produce:
334999990000000000000000000000000
Bottom line, do not supply floating-point numbers to Decimal or NSDecimalNumber. Use string representations or the exponent-and-mantissa/significand representation, and you will not see the strange deviations that floating-point numbers introduce.
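To see that rule in action, here is a quick comparison of the three construction paths (an illustrative sketch, not part of the original answer):
import Foundation
let fromDouble = Decimal(334.99999)                                        // routed through Double: inexact
let fromString = Decimal(string: "334.99999")!                             // exact
let fromParts  = Decimal(sign: .plus, exponent: -5, significand: 33499999) // exact
print(fromString == fromParts)  // true
print(fromDouble == fromString) // false: the Double detour already lost the value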
I'm using numbers divided by 1^30
Good news, then, because 1^30 = 1. Perhaps you meant 10^30?
Anyway, according to the NSDecimalNumber class reference:
An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.

Dividing massive numbers in Swift

I have a UInt128 holding a massive number like 2000009100000000000000 and I want to divide it by 10^30.
How do I do that?
Possibly by using NSDecimalNumber. For example,
let num1 = NSDecimalNumber(string: "2000009100000000000000")
let num2 = NSDecimalNumber(mantissa: 1, exponent: 30, isNegative: false) // 10^30
let result = num1.dividing(by: num2)
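For what it's worth, 2000009100000000000000 / 10^30 is 2.0000091 × 10^-9, so printing the result should show (assuming NSDecimalNumber's default description):
print(result) // 0.0000000020000091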
