I would like to convert a negative value into a positive one using NumberFormatter. The value represents a percent change and should look like 1.46% even when the underlying value is negative, e.g. -1.46.
There is a numberFormatter.negativeFormat property, but I am unsure what the format string should look like. I tried numberFormatter.negativeFormat = "0.00", but then the percent sign disappears, and I do not want to append it explicitly, because I am using numberStyle:
numberFormatter.numberStyle = .percent
Any ideas what would be the best solution?
You can extend the Double type to create a specific property that returns a String with the absolute value:
extension Double {
    var positivePercent: String {
        return abs(self).formatted(.percent)
    }
}
Usage:
let a = -0.0146
print(a.positivePercent) // 1.46%
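If you want to stay with NumberFormatter and its numberStyle, as the question asks, one sketch is to hand the formatter the absolute value; the en_US_POSIX locale below is my assumption, added only to make the output predictable:

```swift
import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .percent
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2
// Pin the locale so the sample output below is deterministic.
formatter.locale = Locale(identifier: "en_US_POSIX")

let value = -0.0146
// Formatting abs(value) keeps the percent sign but drops the minus.
let text = formatter.string(from: NSNumber(value: abs(value)))
```

This keeps the .percent style doing all the work (multiplying by 100 and appending the sign), so no format string surgery is needed.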
I want to add commas to break up numbers within my iOS application.
For example:
Change 1000 into 1,000
Change 10000 into 10,000
Change 100000 into 100,000
And so on...
What is the most efficient way of doing this, while also safeguarding numbers with digits after the decimal point?
So for example,
1000.50 should return 1,000.50
My numbers are currently Ints, Doubles, and Floats, so I am not sure whether I need to manipulate them before or after converting them to Strings.
Any feedback would be appreciated.
The Foundation framework (which is shared between iOS and macOS) includes the NumberFormatter class, which will do exactly what you want. You'd configure a number formatter to include a groupingSeparator. (Note that different countries use different grouping separators, so you might want to set the localizesFormat flag to allow the NumberFormatter to change the separator character based on the user's locale.)
Here is some sample code that will generate strings with comma thousands separators and 2 decimal places:
let formatter = NumberFormatter()

// Set up the NumberFormatter to use a thousands separator.
formatter.usesGroupingSeparator = true
formatter.groupingSize = 3

// Set it up to always display 2 decimal places.
formatter.alwaysShowsDecimalSeparator = true
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2

// Now generate 10 formatted random numbers.
for _ in 1...10 {
    // Randomly pick the number of digits.
    let digits = Double(Int.random(in: 1...9))
    // Generate a value from 1 up to 10^digits.
    let x = Double.random(in: 1...pow(10, digits))
    // If the number formatter is able to output a string, log it to the console.
    if let string = formatter.string(from: NSNumber(value: x)) {
        print(string)
    }
}
Some sample output from that code:
356,295,901.77
34,727,299.01
395.08
37,185.02
87,055.35
356,112.91
886,165.06
98,334,087.81
3,978,837.62
3,178,568.97
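As a side note, on iOS 15 / macOS 12 and later the same result can be had without configuring a formatter by hand, via the FormatStyle API. A sketch (the en_US locale is pinned here only so the separator in the comment is predictable; omit it to follow the user's locale):

```swift
import Foundation

let value = 1000.5
// .number applies the locale's grouping separator automatically;
// fractionLength(2) forces exactly two decimal places.
let text = value.formatted(
    .number
        .precision(.fractionLength(2))
        .locale(Locale(identifier: "en_US"))
)
```

This works uniformly for Int, Double, and Float values, so no pre-conversion is needed.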
Let us suppose I have a variable v of type NSDecimalNumber:
let v = 34.596904
I want to know the precision and scale of this number, not the default ones. I did not find any function in the NSDecimalNumber class that returns these values, so maybe someone can shed some light on how this works.
precision = 8
scale = 6
Precision is the count of significant digits in the number, and scale is the count of digits after the decimal point.
This extension will give you the specific value for your only example:
extension Decimal {
    var scale: Int {
        return -self.exponent
    }
    var precision: Int {
        return Int(floor(log10((self.significand as NSDecimalNumber).doubleValue))) + 1
    }
}
Usage:
let v: NSDecimalNumber = NSDecimalNumber(string: "34.596904")
print("precision=\((v as Decimal).precision)") // -> precision=8
print("scale=\((v as Decimal).scale)")         // -> scale=6
But I cannot be sure this generates the expected results in all the cases you have in mind, as you have shown only one example...
One more thing: in Swift, Decimal and NSDecimalNumber bridge easily, and you should prefer Decimal wherever you can.
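If the log10-based precision formula worries you, a hedged alternative (my own sketch, not a Foundation API) is to count digits in the number's canonical string form. One caveat: NSDecimalNumber normalizes away trailing fractional zeros, so "2.40" reports scale 1, and leading integer zeros are skipped:

```swift
import Foundation

// Sketch: derive precision and scale by inspecting the decimal's
// string representation, e.g. "34.596904" -> (precision: 8, scale: 6).
func precisionAndScale(of number: NSDecimalNumber) -> (precision: Int, scale: Int) {
    let text = number.stringValue
    let parts = text.split(separator: ".")
    // Skip a sign and leading zeros so "0.5" counts one significant digit.
    let intPart = parts[0].drop(while: { $0 == "-" || $0 == "0" })
    let fracDigits = parts.count > 1 ? parts[1].count : 0
    return (intPart.count + fracDigits, fracDigits)
}

let result = precisionAndScale(of: NSDecimalNumber(string: "34.596904"))
```

This avoids the round trip through Double entirely, at the cost of depending on stringValue's formatting.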
I am currently facing an issue in understanding how .isNaN works.
I am maintaining an application (written in Swift 2.3) that was not developed by myself.
We get a fair number of crashes from this code, and I do not understand why.
Here is the method, a simple formatting helper that sets the appropriate value on a label by testing different cases.
static func formatFloat(float: Float?, withMaxDigits max: Int, andUnit unit: String) -> String {
    var label: String = "-"
    if let float = float {
        let numberFormatter = NSNumberFormatter()
        numberFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle
        numberFormatter.minimumFractionDigits = 0
        numberFormatter.maximumFractionDigits = max
        numberFormatter.roundingMode = .RoundHalfUp
        if !float.isNaN {
            var formattedValue = numberFormatter.stringFromNumber(float)!
            if formattedValue == "-0" {
                formattedValue = "0"
            }
            label = "\(formattedValue) \(unit)"
        }
    }
    return label
}
Am I right that it just checks whether the value is NaN before formatting, and sets the text accordingly?
I read some posts and documentation, and I don't understand this:
In some languages NaN != NaN, but this isn't the case in Cocoa.
What about nil and NaN? Does the isNaN check return false for nil?
The IEEE floating-point spec documents certain bit patterns that represent NaN ("not a number") invalid results.
nil is different from NaN. In Swift, only an optional can contain nil, and it indicates the absence of a value.
A NaN means you performed some operation that produced an invalid result. You should check the isNaN property to see if a number contains a NaN.
Edit:
Note that there are many different bit patterns that are treated as NaN, so one .nan value may not be bitwise-identical to another .nan; in any case, a NaN never compares equal to anything, including itself.
No, NaN is a value that a floating-point variable can take; nil can only be taken by optional vars. Also, I'm not sure where you got that quote, but .nan == .nan is false. For more information, read https://developer.apple.com/reference/swift/floatingpoint
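These semantics are easy to verify in a few lines (plain IEEE-754 behavior, identical for Float and Double):

```swift
// NaN never compares equal to anything, including itself.
let nan = Float.nan
print(nan == nan)   // false
print(nan != nan)   // true
print(nan.isNaN)    // true

// nil is a different concept entirely: only an optional can hold it,
// and a non-optional Float can never be nil, only possibly NaN.
let maybe: Float? = nil
print(maybe == nil) // true
```

This is why code must use isNaN (or a != a) rather than == .nan to detect NaN.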
I want to convert a string to a Double and keep the same value:
let myStr = "2.40"
let numberFormatter = NSNumberFormatter()
numberFormatter.locale = NSLocale(localeIdentifier: "fr_FR")
let myDouble = numberFormatter.numberFromString(myStr)?.doubleValue ?? 0.0
myDouble is now:
Double? 2.3999999999999999
So how can I convert "2.40" to be exactly 2.40 as a Double?
Update:
Even rounding after conversion does not seem to work.
I don't want to print the value, I want to calculate with it, and it is important that the number is correct: this is money and rate calculation.
First off: you don't! What you encountered here is called floating-point inaccuracy. Computers cannot store every number precisely, and 2.4 cannot be stored losslessly in a binary floating-point type.
Secondly: since floating point is always an issue and you are dealing with money here (I guess you are trying to store 2.4 francs), your number one solution is: don't use floating-point numbers. Use the NSNumber you get from numberFromString and do not try to get a Double out of it.
Alternatively, shift the comma by multiplying and store the result as an Int.
The first solutions might look something like:
if let num = numberFormatter.numberFromString(myStr) {
    // num.decimalValue preserves the base-10 value; no Double is involved.
    let value = NSDecimalNumber(decimal: num.decimalValue)
    let output = value.decimalNumberByMultiplyingBy(NSDecimalNumber(integer: 10))
}
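In modern Swift the cleaner route is usually Decimal, which stores base-10 fractions exactly, so "2.40" survives parsing and arithmetic unchanged. A sketch (the en_US_POSIX locale is my assumption, pinned so the dot parses regardless of device settings):

```swift
import Foundation

let myStr = "2.40"
// Decimal(string:locale:) stores the value in base 10, so 2.40 is exact.
let exact = Decimal(string: myStr, locale: Locale(identifier: "en_US_POSIX")) ?? 0
// Arithmetic stays exact in base 10, which matters for money and rates.
let total = exact * 3
```

Unlike the Double round trip, exact * 3 here is precisely 7.2, with no trailing ...999 noise.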
In my iOS swift application, I receive some json from the web which contains some double values which represent currency. It looks like this:
[{"Amount": 5.0},{"Amount":-26.07},{"Amount": 4}, ...etc]
I cast these as Doubles and then try to feed these values as a Swift "Double" into the NSDecimalNumber's constructor like this:
let amount = NSDecimalNumber(double: amountAsDouble)
I'm running into problems with this approach because very frequently the NSDecimalNumber I created will contain a different number that goes 16 places past the decimal point.
let amount = NSDecimalNumber(double: -15.97)
println(amount)
This prints -15.970000000000004096.
I don't want this, I want -15.97.
Thanks,
A Double is a binary floating-point value, so most decimal fractions (like 15.97) can only be stored approximately; you can't do anything about that, it's how the format works.
Read here: http://en.wikipedia.org/wiki/Double-precision_floating-point_format
However, at the time of displaying the value on the screen, you can use NSNumberFormatter like this:
let amountInDouble: Double = -15.970000000000004096

let formatter = NSNumberFormatter()
formatter.numberStyle = .DecimalStyle
formatter.roundingIncrement = 0.01
formatter.maximumFractionDigits = 2

let amountAsString = formatter.stringFromNumber(NSNumber(double: amountInDouble))

if let amountAsString = amountAsString {
    println(amountAsString) // -15.97
}
I recently went through this myself. I ended up using an NSNumberFormatter to get the proper number of decimal places. Note that numberFromString expects a String, so format the Double into a string first and parse the result back:

let currFormatter = NSNumberFormatter()
currFormatter.numberStyle = .DecimalStyle
currFormatter.roundingIncrement = 0.01
currFormatter.minimumFractionDigits = 2
currFormatter.maximumFractionDigits = 2

let amountAsString = currFormatter.stringFromNumber(NSNumber(double: amountAsDouble))!
let amount = currFormatter.numberFromString(amountAsString)!.doubleValue
println(amount)
Here's a tip: if you use NSJSONSerialization, numbers with decimal points are actually turned into NSDecimalNumber for you. NSDecimalNumber is a subclass of NSNumber. So here is what you are doing: you've got a perfectly fine NSDecimalNumber, convert its value to a Double, and then try to turn the Double back into an NSDecimalNumber. Just check that what you have is indeed an NSDecimalNumber, and do no conversion if it is.
This is because the intermediate double representation is causing problems.
You should take the values from your dictionary as NSString objects and use the + decimalNumberWithString: method to convert without losing precision. In swift:
let amount = NSDecimalNumber(string: amountAsString)
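Building on that tip, a sketch of the check. This assumes, as the answer states, that JSONSerialization really hands back NSDecimalNumber for fractional values; that is worth verifying on your deployment target, since Foundation on other platforms may behave differently:

```swift
import Foundation

let json = "[{\"Amount\": 5.0}, {\"Amount\": -26.07}, {\"Amount\": 4}]"
let data = json.data(using: .utf8)!
let items = try! JSONSerialization.jsonObject(with: data) as! [[String: Any]]

for item in items {
    if let dec = item["Amount"] as? NSDecimalNumber {
        // Already an NSDecimalNumber: use it directly, no Double round trip.
        print("decimal:", dec)
    } else if let num = item["Amount"] as? NSNumber {
        // Fall back for platforms/values where the cast does not succeed.
        print("number:", num)
    }
}
```

If the cast succeeds, the -26.07 case keeps its exact base-10 value end to end.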
let amount = NSDecimalNumber(value: -15.97)
let handler = NSDecimalNumberHandler(roundingMode: .bankers,
                                     scale: 2,
                                     raiseOnExactness: false,
                                     raiseOnOverflow: false,
                                     raiseOnUnderflow: false,
                                     raiseOnDivideByZero: false)
let roundValue = amount.rounding(accordingToBehavior: handler)
print(roundValue)