I'm using numbers divided by 10^30
I may be adding values like 1000000000000000 and 5000000000000000 stored in NSDecimalNumbers.
My concern is that I think I've seen, a few times, incorrect math being done when adding or subtracting these values.
Is that a possibility, or are NSDecimalNumbers pretty sound in terms of the integrity of their math?
In answer to your question, the math offered by Decimal/NSDecimalNumber is sound, and the problem probably rests in one of two places:
The calculations might exceed the capacity of these decimal formats (as outlined by rob mayoff). For example, this works, because we're within the 38-digit mantissa (the results below are printed with the decimal-style NumberFormatter defined in point 2):
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 30, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,001,000,000,000,000,000,000,000,000,000,000
But this will not:
let x = Decimal(sign: .plus, exponent: 60, significand: 1)
let y = Decimal(sign: .plus, exponent: 10, significand: 1)
let z = x + y
1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
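Since this sum would need 51 significant digits, the 10^10 term is rounded away entirely. You can confirm that nothing survived the addition (a quick check using the z and x from the snippet above):

print(z == x) // true: the 10^10 term was lost to the 38-digit limit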
Or, it could just be how you are instantiating these decimal values, e.g. supplying a floating-point number rather than using the Decimal(sign:exponent:significand:) or NSDecimalNumber(mantissa:exponent:isNegative:) initializers.
For example, this works fine:
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
let x = Decimal(sign: .plus, exponent: 30, significand: 1)
print(formatter.string(for: x)!)
That results in:
1,000,000,000,000,000,000,000,000,000,000
But these won't, because you're supplying a floating-point number, which has limited precision:
let y = Decimal(1.0e30)
print(formatter.string(for: y)!)
let z = Decimal(1_000_000_000_000_000_000_000_000_000_000.0)
print(formatter.string(for: z)!)
These both result in:
1,000,000,000,000,000,409,600,000,000,000
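To see the difference directly, you can compare the same nominal value built three ways (a short sketch; the variable names are mine):

import Foundation

let viaExponent = Decimal(sign: .plus, exponent: 30, significand: 1)
let viaString = Decimal(string: "1000000000000000000000000000000")! // 10^30, exact
let viaDouble = Decimal(1.0e30) // routed through a Double first

print(viaExponent == viaString) // true: both are exactly 10^30
print(viaExponent == viaDouble) // false: the Double detour introduced error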
For more information on floating-point arithmetic (and why certain decimal numbers cannot be perfectly captured in floating-point types), see floating-point arithmetic.
In your other question, you ask why the following:
let foo = NSDecimalNumber(value: 334.99999).multiplying(byPowerOf10: 30)
produced:
334999990000000051200000000000000
This is the same underlying issue that I outlined above in point 2: floating-point numbers cannot accurately represent certain decimal values.
Note, your question is the same as the following Decimal rendition:
let adjustment = Decimal(sign: .plus, exponent: 30, significand: 1)
let foo = Decimal(334.99999) * adjustment
This also produces:
334999990000000051200000000000000
But you will get the desired result if you supply either a string or an exponent and mantissa/significand, because these will be accurately represented as a Decimal/NSDecimalNumber:
let bar = Decimal(string: "334.99999")! * adjustment
let baz = Decimal(sign: .plus, exponent: -5, significand: 33499999) * adjustment
Those both produce:
334999990000000000000000000000000
Bottom line, do not supply floating-point numbers to Decimal or NSDecimalNumber. Use string representations or the exponent and mantissa/significand representation, and you will not see these strange deviations that floating-point numbers introduce.
I'm using numbers divided by 1^30
Good news, then, because 1^30 = 1. Perhaps you meant 10^30?
Anyway, according to the NSDecimalNumber class reference:
An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.
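So the values in your question are comfortably within range, and their sum is exact; for example (a quick sketch using the string initializer):

import Foundation

let a = NSDecimalNumber(string: "1000000000000000")
let b = NSDecimalNumber(string: "5000000000000000")
print(a.adding(b)) // 6000000000000000, exact, with no floating-point drift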
Related
I need a way to round my decimal in Lua.
Sometimes my number looks like this: 0.79200750000001.
When I search for it online, I only find solutions for rounding to a whole number, but I don't want to round my variable to 0.00, 1.00, or 2.00. How would I round it to a specific decimal digit?
Code:
health = 1
maxhp = 2

function hp_showcase()
    makeLuaText("hpcounter", "HP: " .. health .. "/" .. maxhp, 2250, 30, 350)
    addLuaText("hpcounter")
end

function opponentNoteHit(id, noteData, noteType, isSustainNote)
    hp_showcase()
end
You can define a function that takes the value to be rounded and the digit position you would like to round to. In this example, positions in front of the decimal point are positive and positions behind it are negative, so 2 rounds to the nearest 100 and -2 rounds to the nearest 0.01:
local value = 0.79200750000001

local function round(number, digit_position)
    -- math.pow is available in Lua 5.1/LuaJIT; on Lua 5.3+ use 10 ^ digit_position instead
    local precision = math.pow(10, digit_position)
    -- adding half the precision makes values of .5 and up round up,
    -- and .4 and lower round down
    number = number + (precision / 2)
    return math.floor(number / precision) * precision
end

print(value)
print(round(value, -2))
print(round(value, -1))
print(round(value, 0))
Results:
0.79200750000001
0.79
0.8
1
With Xcode 11.1, if I run a playground with:
pow(10 as Double, -2) // 0.01
I get the same output using Float:
pow(10 as Float, -2) // 0.01
But if I try to use pow(Decimal, Int), as in:
pow(10 as Decimal, -2) // NaN
Does anybody know why?
Is there a better way to deal with positive and negative exponents with pow and Decimal? I need Decimal, as it behaves as I expect with currency values.
EDIT: I know how to resolve this from a math perspective; I'd like to understand why it happens and/or whether it can be solved without adding to the cyclomatic complexity of my code (e.g. checking if the exponent is negative and executing 1 / pow)
Well, algebraically, x^(-p) == 1/(x^(p))
So, convert your negative power to a positive power, and then take the reciprocal.
1/pow(10 as Decimal, 2) // 0.01
I think that this struct gives us an idea about the problem:
public struct Decimal {
    public var _exponent: Int32
    public var _length: UInt32 // length == 0 && isNegative -> NaN
    public var _isNegative: UInt32
    public var _isCompact: UInt32
    public var _reserved: UInt32
    public var _mantissa: (UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16)
    public init()
    public init(_exponent: Int32, _length: UInt32, _isNegative: UInt32, _isCompact: UInt32, _reserved: UInt32, _mantissa: (UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16, UInt16))
}
The NaN condition should be satisfied only when length == 0, but since UInt32 cannot represent fractional numbers, the condition ends up being satisfied...
That's simply how NSDecimal / NSDecimalNumber works: it doesn't do negative exponents. You can see a rather elaborate workaround described here:
https://stackoverflow.com/a/12095004/341994
As you can see, the workaround is exactly what you've already been told: look to see if the exponent would be negative and, if so, take the inverse of the positive root.
... if it can be solved without adding to the cyclomatic complexity of my code ...
extension Decimal {
    func pow(i: Int) -> Decimal {
        i < 0 ? 1.0 / Foundation.pow(self, -i) : Foundation.pow(self, i)
    }
}
It is really not so complex to use; from your example:
(10 as Decimal).pow(i: -2) // 0.01
Here is my point of view on this Apple function implementation. Note the following examples:
pow(1 as Decimal, -2) // 1; (1 ^ any number) = 1
pow(10 as Decimal, -2) // NaN
pow(0.1 as Decimal, -2) // 100
pow(0.01 as Decimal, -2) // 10000
pow(1.5 as Decimal, -2) // NaN
pow(0.5 as Decimal, -2) // NaN
It seems that pow with Decimal doesn't handle fractional numbers except those based on powers of 10. So it deals with:
0.1 ^ -2 == (1/10) ^ -2 == 10 ^ 2 // calculated appropriately, since it's based on 10 (10, 100, 1000, ...)
1.5 ^ -2 == (3/2) ^ -2 // (3/2) is a fractional number, so it's treated as a Double rather than a Decimal; it returns NaN
0.5 ^ -2 == (1/2) ^ -2 // 2 isn't a power of 10, so (1/2) is taken as-is; it's a fractional number too, and it returns NaN
I think this could be a bug in Swift's compiler.
EDIT: This is a weird behaviour of Objective-C's NSDecimalNumber; see @matt's comment on this answer below.
As stated by @jawadAli in his answer:
Well, algebraically, x^(-p) == 1/(x^(p))
This formula is correct; therefore the following statements should be equal:
let ten: Decimal = 10
let one: Decimal = 1
let answer: Decimal = 0.01
pow(ten, -2) // NaN
one / pow(ten, 2) // 0.01
one / (ten * ten) // 0.01
answer // 0.01
Trying this with other data types would result in 0.01.
I also tried to replicate this using other negative exponents with the Decimal data type, and it always seems to evaluate to NaN, with the exception of bases 1, 0, and -1:
(1...100).forEach {
    print(pow(2, -$0)) // NaN
    print(pow(1, -$0)) // Correct
    print(pow(0, -$0)) // Correct
    print(pow(-1, -$0)) // Correct
    print(pow(-2, -$0)) // NaN
}
I would suggest that you use a different data type for now.
I tried different round-to-decimal-places methods, and all of them have the same problem. When I use a number, let's say 0.99999, and I want to round it to 2 decimal places, my expected result would be 0.99, but instead I get 1.00.
I tried:
let divisor = pow(10.0, Double(decimals))
let roundedVal = round(value * divisor) / divisor
I also tried:
String(format: "%.2f", value)
And
let behavior = NSDecimalNumberHandler(roundingMode: .plain, scale: decimals, raiseOnExactness: false, raiseOnOverflow: false, raiseOnUnderflow: false, raiseOnDivideByZero: true)
NSDecimalNumber(value: value).rounding(accordingToBehavior: behavior)
let roundedValue2 = NSDecimalNumber(value: 0.6849).rounding(accordingToBehavior: behavior)
All methods give me the same issue.
Some ideas?
Thanks for the help!
EDIT:
The idea is that rounding is okay in all cases except that 0.99999 case. The displayed numbers are always small (ranging from 0.000 to 1) and the number of decimals to show is a parameter, so 0.348 should become 0.35 and not 0.34 (as truncating would give).
let amount = 0.99999999999999
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.maximumFractionDigits = 2
formatter.roundingMode = .floor // rounding mode floor is the key here
let formattedAmount = formatter.string(from: amount as NSNumber)!
print(formattedAmount) // 0.99
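If you need the truncated value as a number rather than a display string, a sketch of the same idea with NSDecimalNumberHandler (using the .down rounding mode, which truncates toward zero) would be:

import Foundation

let behavior = NSDecimalNumberHandler(roundingMode: .down, // truncate instead of rounding half-up
                                      scale: 2,
                                      raiseOnExactness: false,
                                      raiseOnOverflow: false,
                                      raiseOnUnderflow: false,
                                      raiseOnDivideByZero: false)
let truncated = NSDecimalNumber(string: "0.99999").rounding(accordingToBehavior: behavior)
print(truncated) // 0.99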
I have a UInt128 holding a massive number like 2000009100000000000000 and I want to divide it by 10^30.
How do I do that?
Possibly by using NSDecimalNumber. For example,
let num1 = NSDecimalNumber(string: "2000009100000000000000")
let num2 = NSDecimalNumber(mantissa: 1, exponent: 30, isNegative: false) // 10^30
let result = num1.dividing(by: num2)
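Printing the result confirms the value was scaled down by thirty orders of magnitude:

print(result) // 0.0000000020000091, i.e. 2.0000091 × 10^-9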
I've stumbled onto an odd NSDecimalNumber behavior: for some values, invocations of integerValue, longValue, longLongValue, etc., return an unexpected value. Example:
let v = NSDecimalNumber(string: "9.821426272392280061")
v // evaluates to 9.821426272392278
v.intValue // evaluates to 9
v.integerValue // evaluates to -8
v.longValue // evaluates to -8
v.longLongValue // evaluates to -8
let v2 = NSDecimalNumber(string: "9.821426272392280060")
v2 // evaluates to 9.821426272392278
v2.intValue // evaluates to 9
v2.integerValue // evaluates to 9
v2.longValue // evaluates to 9
v2.longLongValue // evaluates to 9
This is using Xcode 7.3; I haven't tested using earlier versions of the frameworks.
I've seen a bunch of discussion about unexpected rounding behavior with NSDecimalNumber, as well as admonishments not to initialize it with the inherited NSNumber initializers, but I haven't seen anything about this specific behavior. Nevertheless there are some rather detailed discussions about internal representations and rounding which may contain the nugget I seek, so apologies in advance if I missed it.
EDIT: It's buried in the comments, but I've filed this as issue #25465729 with Apple. OpenRadar: http://www.openradar.me/radar?id=5007005597040640.
EDIT 2: Apple has marked this as a dup of #19812966.
Since you already know the problem is due to "too high precision", you could workaround it by rounding the decimal number first:
let b = NSDecimalNumber(string: "9.999999999999999999")
print(b, "->", b.int64Value)
// 9.999999999999999999 -> -8
let truncateBehavior = NSDecimalNumberHandler(roundingMode: .down,
scale: 0,
raiseOnExactness: true,
raiseOnOverflow: true,
raiseOnUnderflow: true,
raiseOnDivideByZero: true)
let c = b.rounding(accordingToBehavior: truncateBehavior)
print(c, "->", c.int64Value)
// 9 -> 9
If you want to use int64Value (i.e. -longLongValue), avoid using numbers with more than 62 bits of precision, i.e. avoid having more than 18 digits in total. Reasons explained below.
NSDecimalNumber is internally represented as a Decimal structure:
typedef struct {
    signed int _exponent:8;
    unsigned int _length:4;
    unsigned int _isNegative:1;
    unsigned int _isCompact:1;
    unsigned int _reserved:18;
    unsigned short _mantissa[NSDecimalMaxSize]; // NSDecimalMaxSize = 8
} NSDecimal;
This can be obtained using .decimalValue, e.g.
let v2 = NSDecimalNumber(string: "9.821426272392280061")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -18 (30717, 39329, 46888, 34892, 0, 0, 0, 0) 4
This means 9.821426272392280061 is internally stored as 9821426272392280061 × 10^-18 — note that 9821426272392280061 = 34892 × 65536^3 + 46888 × 65536^2 + 39329 × 65536 + 30717.
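You can verify that arithmetic with a quick sketch (the array is just the ._mantissa tuple from above, least significant word first):

let words: [UInt64] = [30717, 39329, 46888, 34892]
var mantissa: UInt64 = 0
for (i, word) in words.enumerated() {
    mantissa += word << (16 * i) // 65536^i == 1 << (16 * i)
}
print(mantissa) // 9821426272392280061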
Now compare with 9.821426272392280060:
let v2 = NSDecimalNumber(string: "9.821426272392280060")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -17 (62054, 3932, 17796, 3489, 0, 0, 0, 0) 4
Note that the exponent is reduced to -17, meaning the trailing zero is omitted by Foundation.
Knowing the internal structure, I now make a claim: the bug is because 34892 ≥ 32768. Observe:
let a = NSDecimalNumber(decimal: Decimal(
_exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
_mantissa: (65535, 65535, 65535, 32767, 0, 0, 0, 0)))
let b = NSDecimalNumber(decimal: Decimal(
_exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
_mantissa: (0, 0, 0, 32768, 0, 0, 0, 0)))
print(a, "->", a.int64Value)
print(b, "->", b.int64Value)
// 9.223372036854775807 -> 9
// 9.223372036854775808 -> -9
Note that 32768 × 65536^3 = 2^63 is the value just enough to overflow a signed 64-bit number. Therefore, I suspect that the bug is due to Foundation implementing int64Value as (1) convert the mantissa directly into an Int64, and then (2) divide by 10^|exponent|.
In fact, if you disassemble Foundation.framework, you will find that this is basically how int64Value is implemented (this is independent of the platform's pointer width).
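A sketch of that suspected logic (my reconstruction for illustration, not Foundation's actual source) reproduces the observed -8:

let mantissa: UInt64 = 9821426272392280061 // the stored mantissa from above
// (1) reinterpret the 64-bit mantissa as signed; values >= 2^63 wrap around to negative
let signed = Int64(bitPattern: mantissa) // -8625317801317271555
// (2) divide by 10^|exponent|, here 10^18
let scaled = signed / 1_000_000_000_000_000_000
print(scaled) // -8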
But why isn't int32Value affected? Because internally it is just implemented as Int32(self.doubleValue), so no overflow issue occurs. Unfortunately, a double has only 53 bits of precision, so Apple had no choice but to implement int64Value (which requires 64 bits of precision) without floating-point arithmetic.
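A quick check of that code path (again just a sketch of the claimed behavior):

import Foundation

let v = NSDecimalNumber(string: "9.821426272392280061")
print(Int32(v.doubleValue)) // 9: the Double round-trip truncates safely here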
I'd file a bug with Apple if I were you. The docs say that NSDecimalNumber can represent any value up to 38 digits long. NSDecimalNumber inherits those properties from NSNumber, and the docs don't explicitly say what conversion is involved at that point, but the only reasonable interpretation is that if the number is roundable to and representable as an Int, then you get the correct answer.
It looks to me like a bug in handling the sign-extension during the conversion somewhere, since intValue is 32-bit and integerValue is 64-bit (in Swift).