Problem with Swift Extensions and NumberFormatters - ios

I'm trying to format the text of a label so that it only shows values with a certain number of significant digits. I'm using Swift's extension functionality and the NumberFormatter class but, while the code compiles with no errors, the functionality I want (i.e. a maximum of 6 significant digits) is not being implemented.
Here's my extension code:
extension Double {
    func formatNumbers() -> String {
        let numberFormat = NumberFormatter()
        let number = NSNumber(value: self)
        numberFormat.usesSignificantDigits = true
        numberFormat.minimumFractionDigits = 0
        return String(numberFormat.string(from: number) ?? "")
    }
}
And here's when I call the extension method:
ConsoleValue.text! = "\(tempResult.formatNumbers())"
where ConsoleValue is a UILabel and tempResult is a Double var.
Can someone help me with what I'm doing wrong?

To set the maximum number of significant digits, use the maximumSignificantDigits property:
numberFormat.maximumSignificantDigits = 6
According to this Wikipedia article, significant figures are:
All non-zero digits are significant: 1, 2, 3, 4, 5, 6, 7, 8, 9.
Zeros between non-zero digits are significant: 102, 2005, 50009.
Leading zeros are never significant: 0.02, 001.887, 0.000515.
In a number with a decimal point, trailing zeros (those to the right of the last non-zero digit) are significant: 2.02000, 5.400, 57.5400.
In a number without a decimal point, trailing zeros may or may not be significant. More information through additional graphical symbols
or explicit information on errors is needed to clarify the
significance of trailing zeros.
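Putting that together, the extension from the question only needs that one extra line. A minimal sketch of the corrected version (the value in the usage comment is just an illustration):
let value: Double = 3.14159265

extension Double {
    func formatNumbers() -> String {
        let numberFormat = NumberFormatter()
        let number = NSNumber(value: self)
        numberFormat.usesSignificantDigits = true
        numberFormat.maximumSignificantDigits = 6   // cap the output at 6 significant digits
        numberFormat.minimumFractionDigits = 0
        return numberFormat.string(from: number) ?? ""
    }
}

print(value.formatNumbers()) // "3.14159"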

Related

Adding commas in to extended numbers in Swift

I want to add commas to break up numbers within my iOS application.
For example:
Change 1000 into 1,000
Change 10000 into 10,000
Change 100000 into 100,000
And so on...
What is the most efficient way of doing this, while also handling numbers after the decimal point?
So for example,
1000.50 should return 1,000.50
My numbers at the moment are Ints, Doubles and Floats, so I'm not sure if I need to manipulate them before or after converting them to Strings.
Any feedback would be appreciated.
The Foundation framework (which is shared between iOS and macOS) includes the NumberFormatter class, which will do exactly what you want. You'd configure a number formatter to include a groupingSeparator. (Note that different countries use different grouping separators, so you might want to set the localizesFormat flag to allow the NumberFormatter to change the separator character based on the user's locale.)
Here is some sample code that will generate strings with comma thousands separators and 2 decimal places:
let formatter = NumberFormatter()
// Set up the NumberFormatter to use a thousands separator
formatter.usesGroupingSeparator = true
formatter.groupingSize = 3
// Set it up to always display 2 decimal places.
formatter.alwaysShowsDecimalSeparator = true
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2
// Now generate 10 formatted random numbers
for _ in 1...10 {
    // Randomly pick the number of digits
    let digits = Double(Int.random(in: 1...9))
    // Generate a value from 1 up to 10^digits
    let x = Double.random(in: 1...(pow(10, digits)))
    // If the number formatter is able to output a string, log it to the console.
    if let string = formatter.string(from: NSNumber(value: x)) {
        print(string)
    }
}
Some sample output from that code:
356,295,901.77
34,727,299.01
395.08
37,185.02
87,055.35
356,112.91
886,165.06
98,334,087.81
3,978,837.62
3,178,568.97
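Applied to the asker's own example value, the same formatter produces the expected grouping (the output shown assumes a locale that uses "," as the grouping separator, such as en_US):
let price = 1000.50
if let string = formatter.string(from: NSNumber(value: price)) {
    print(string)   // "1,000.50"
}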

Decimal values get removed when converting from String to Decimal in Swift

Please check this example
let strValue = "12.00"
let decimalValue = Decimal(string: strValue) // 12
it always returns 12 not 12.00
I also tried with NSNumber, Float, and Double, but it always removes the zeros.
Can you please help me with this? Thanks in advance.
As Martin said, 12 and 12.00 are the same number. Mathematically it doesn't matter how many zeros come after the decimal point.
While the number stays the same, its representation is a different topic.
String(format: "%.6f", 12) will give 12.000000 as String.
Edit:
A Decimal is a struct. The meaning of the number stays the same. Throw this into a playground.
import UIKit
let decimal = Decimal(12.0000000000)
let justAnInt = 12
let sameDecimal = decimal == Decimal(justAnInt)
See? There is no such thing as infinite 0 in math. Zero is finite. A Decimal from 12.000000 and a Decimal created from 12 are the same.
There is no difference between 12 & 12.00.
The only thing we need is to present the value to the user; for that you could use a formatted string.
Like:
String(format: "%.2f", floatValue)
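If you'd rather avoid a C-style format string, a locale-aware NumberFormatter sketch along these lines should also work while keeping the value as a Decimal:
import Foundation

let decimalValue = Decimal(string: "12.00")!   // stored as 12

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2

// NSDecimalNumber bridges the Decimal to the NSNumber the formatter expects
let display = formatter.string(from: NSDecimalNumber(decimal: decimalValue)) ?? ""
print(display)   // "12.00" (the separator depends on the user's locale)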

Swift .isNaN Understand how it works

I am currently facing an issue in understanding how .isNaN works.
I am maintaining an application (written in Swift 2.3) which was not developed by me.
We get a fair number of crashes from this code, and I don't understand how.
Here is the method, which is simply a formatting method that sets the appropriate value on a label by testing different cases.
static func formatFloat(float: Float?, withMaxDigits max: Int, andUnit unit: String) -> String {
    var label: String = "-"
    if let float = float {
        let numberFormatter = NSNumberFormatter()
        numberFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle
        numberFormatter.minimumFractionDigits = 0
        numberFormatter.maximumFractionDigits = max
        numberFormatter.roundingMode = .RoundHalfUp
        if !float.isNaN {
            var formattedValue = numberFormatter.stringFromNumber(float)!
            if formattedValue == "-0" {
                formattedValue = "0"
            }
            label = "\(formattedValue) \(unit)"
        }
    }
    return label
}
Am I right that it just checks whether a value is NaN or not, in order to test everything, and sets the text accordingly?
I read some posts/documentation and I don't understand this:
In some languages NaN != NaN, but this isn't the case in Cocoa.
What about nil and NaN? I mean, isNaN checks for that, right?
The IEEE floating-point spec documents certain bit patterns that represent NaN (invalid) results.
nil is different from NaN. In Swift, only an optional can contain nil, and it indicates the absence of a value.
A NaN means you performed some operation that resulted in an invalid result. You should check the isNaN property to see if a number contains a NaN.
Edit:
Note that there are different values that are marked as NaN, so one .NaN value may not be equal to another .NaN.
No, NaN is a value that a floating-point variable can take; nil can only be taken by optional vars. Also, I'm not sure where you got that quote, but .nan == .nan is false. For more information read https://developer.apple.com/reference/swift/floatingpoint
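A quick playground sketch of the points above; it only illustrates that NaN never compares equal to itself and that nil is a separate concept that only optionals can hold:
let x = Double.nan

print(x == Double.nan)   // false: NaN never compares equal, not even to itself
print(x.isNaN)           // true: the correct way to test for NaN
print(0.0 / 0.0)         // nan: an invalid operation produces NaN

let y: Double? = nil     // nil is the absence of a value; only optionals can hold it
print(y == nil)          // true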

Getting weird value in Double

Hello, I made a "Clicker" as a first project while learning Swift. I have an automated timer that is supposed to subtract some numbers from other numbers, but sometimes I get values like 0.600000000000001 and I have no idea why.
Here is my "Attack" function that subtracts 0.2 from the health of a zombie.
let fGruppenAttackTimer = NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: Selector("fGruppenAttackTime"), userInfo: nil, repeats: true)

func fGruppenAttackTime() {
    zHealth -= 0.2
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
    ...
}
And here is my attackZ button that is supposed to remove 1 from the health of the zombie
@IBAction func attackZ(sender: UIButton) {
    zHealth -= Double(pAttack)
    fHunger -= 0.05
    fGruppenHunger.progress = Float(fHunger / 100)
    Actionlbl.text = ""
    if zHealth <= 0 {
        zHealth = zSize
        pPengar += pPengarut
    }
}
Lastly, here are the variables' values:
var zHealth = 10.0
var zSize = 10.0
var pAttack = 1
var pPengar = 0
var pPengarut = 1
When the timer is on and the function is running and I click the button, I sometimes get weird values like 0.600000000000001, and if I change the 0.2 in the function to 0.25 I sometimes get 0.0999999999999996. I wonder why this happens and what to do about it.
In trojanfoe's answer, he shares a link that describes the source of the problem regarding rounding of floating point numbers.
In terms of what to do, there are a number of approaches:
You can shift to integer types. For example, if your existing values can all be represented with a maximum of two decimal places, multiply those by 100 and then use Int types everywhere, excising the Double and Float representations from your code.
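For instance, applied to the zombie-health example from the question, health could be tracked in integer tenths of a point (the variable names and scaling factor here are just illustrative):
// Track health in tenths of a point, so 0.2 becomes the integer 2
var zHealthTenths = 100            // 10.0 points
let attackPerTick = 2              // 0.2 points per timer tick

zHealthTenths -= attackPerTick     // exact integer arithmetic, no drift

// Convert back to a floating-point value only for display
let displayHealth = Double(zHealthTenths) / 10.0
print(displayHealth)               // 9.8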
You can simply deal with the very small variations that the Double type introduces. For example:
If displaying the results in the UI, use NumberFormatter to convert the Double value to a String using a specified number of decimal places.
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 0 // or you might use `2` here, too
formatter.numberStyle = .decimal
print(formatter.string(for: value)!)
By the way, the NumberFormatter enjoys another benefit, too, namely that it honors the localization settings for the user. For example, if the user lives in Germany, where the decimal place is represented with a , rather than a ., the NumberFormatter will use the user's native number formatting.
When testing to see whether a number is equal to some value, rather than just using the == operator, look at the difference between the two values and see whether it falls within some permissible rounding threshold, as sketched below.
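A minimal sketch of such a tolerance check (the helper name and the threshold value are arbitrary choices for illustration):
func nearlyEqual(_ a: Double, _ b: Double, tolerance: Double = 0.000001) -> Bool {
    return abs(a - b) < tolerance
}

let health = 1.0 - 0.2 - 0.2       // 0.6000000000000001
print(health == 0.6)               // false: exact comparison fails
print(nearlyEqual(health, 0.6))    // true: comparison within a tolerance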
You can use Decimal/NSDecimalNumber, which doesn't suffer from rounding issues when dealing with decimals:
var value = Decimal(string: "1.0")!
value -= Decimal(string: "0.9")!
value -= Decimal(string: "0.1")!
Or:
var value = Decimal(1)
value -= Decimal(sign: .plus, exponent: -1, significand: 9)
value -= Decimal(sign: .plus, exponent: -1, significand: 1)
Or:
var value = Decimal(1)
value -= Decimal(9) / Decimal(10)
value -= Decimal(1) / Decimal(10)
Note, I explicitly avoid using any Double values such as Decimal(0.1), because creating a Decimal from a fractional Double only captures whatever imprecision the Double entails, whereas the three examples above avoid that entirely.
It's because of floating point rounding errors.
For further reading, see What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Squeezing infinitely many real numbers into a finite number of bits
requires an approximate representation. Although there are infinitely
many integers, in most programs the result of integer computations can
be stored in 32 bits. In contrast, given any fixed number of bits,
most calculations with real numbers will produce quantities that
cannot be exactly represented using that many bits. Therefore the
result of a floating-point calculation must often be rounded in order
to fit back into its finite representation. This rounding error is the
characteristic feature of floating-point computation.

Swift countElements() returns incorrect value when counting flag emoji

let str1 = "πŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺ"
let str2 = "πŸ‡©πŸ‡ͺ.πŸ‡©πŸ‡ͺ.πŸ‡©πŸ‡ͺ.πŸ‡©πŸ‡ͺ.πŸ‡©πŸ‡ͺ."
println("\(countElements(str1)), \(countElements(str2))")
Result: 1, 10
But should not str1 have 5 elements?
The bug only seems to occur when I use the flag emoji.
Update for Swift 4 (Xcode 9)
As of Swift 4 (tested with Xcode 9 beta) grapheme clusters break after every second regional indicator symbol, as mandated by the Unicode 9
standard:
let str1 = "πŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺ"
print(str1.count) // 5
print(Array(str1)) // ["πŸ‡©πŸ‡ͺ", "πŸ‡©πŸ‡ͺ", "πŸ‡©πŸ‡ͺ", "πŸ‡©πŸ‡ͺ", "πŸ‡©πŸ‡ͺ"]
Also String is a collection of its characters (again), so one can
obtain the character count with str1.count.
(Old answer for Swift 3 and older:)
From "3 Grapheme Cluster Boundaries"
in the "Standard Annex #29 UNICODE TEXT SEGMENTATION":
(emphasis added):
A legacy grapheme cluster is defined as a base (such as A or γ‚«)
followed by zero or more continuing characters. One way to think of
this is as a sequence of characters that form a β€œstack”.
The base can be single characters, or be any sequence of Hangul Jamo
characters that form a Hangul Syllable, as defined by D133 in The
Unicode Standard, or be any sequence of Regional_Indicator (RI) characters. The RI characters are used in pairs to denote Emoji
national flag symbols corresponding to ISO country codes. Sequences of
more than two RI characters should be separated by other characters,
such as U+200B ZWSP.
(Thanks to @rintaro for the link).
A Swift Character represents an extended grapheme cluster, so it is (according
to this reference) correct that any sequence of regional indicator symbols
is counted as a single character.
You can separate the "flags" by a ZERO WIDTH NON-JOINER:
let str1 = "πŸ‡©πŸ‡ͺ\u{200C}πŸ‡©πŸ‡ͺ"
print(str1.characters.count) // 2
or insert a ZERO WIDTH SPACE:
let str2 = "πŸ‡©πŸ‡ͺ\u{200B}πŸ‡©πŸ‡ͺ"
print(str2.characters.count) // 3
This also solves possible ambiguities, e.g. should "πŸ‡«β€‹πŸ‡·β€‹πŸ‡Ίβ€‹πŸ‡Έ" be "πŸ‡«β€‹πŸ‡·πŸ‡Ίβ€‹πŸ‡Έ" or "πŸ‡«πŸ‡·β€‹πŸ‡ΊπŸ‡Έ"?
See also How to know if two emojis will be displayed as one emoji? about a possible method
to count the number of "composed characters" in a Swift string,
which would return 5 for your let str1 = "πŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺ".
Here's how I solved that problem, for Swift 3:
let str = "πŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺ" // or whatever the string of emojis is
let range = str.startIndex..<str.endIndex
var length = 0
str.enumerateSubstrings(in: range, options: NSString.EnumerationOptions.byComposedCharacterSequences) { (substring, substringRange, enclosingRange, stop) -> () in
    length = length + 1
}
print("Character Count: \(length)")
This fixes all the problems with character count and emojis, and is the simplest method I have found.
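If you need this in more than one place, the same enumeration can be wrapped in a small String extension (the property name composedCharacterCount is just an illustrative choice):
import Foundation

extension String {
    // Counts composed character sequences, so each flag emoji counts as one.
    var composedCharacterCount: Int {
        var count = 0
        enumerateSubstrings(in: startIndex..<endIndex, options: .byComposedCharacterSequences) { _, _, _, _ in
            count += 1
        }
        return count
    }
}

print("πŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺπŸ‡©πŸ‡ͺ".composedCharacterCount) // 5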
