I am writing a Swift app and am dealing with decimal values from a database (stored in MySQL as DECIMAL with 2 decimal digits). Basically it's the sales someone made each day, so generally anything from $0 to $1000, not millions, and nothing unusual in terms of trailing decimals; everything is always rounded to 2 decimal places.
Referencing this helped me out:
How to properly format currency on ios
...but I wanted to do a quick sanity check here and make sure this strategy is OK.
i.e. I would use NSDecimal or NSDecimalNumber (is there a preferred Swift equivalent?)
What would you all recommend I do when dealing with currency in Swift? I'd like to use the locale-based currency symbol as well. I have a class called Sales that contains the amount in question. What data type do you recommend for it?
Apologies if I am coming off lazy; I actually have some ideas on what to do, but I feel a little overwhelmed by the "right" approach, especially in a locale-sensitive way, and wanted to check in here with you all.
Thanks so much!
Update for Swift 3: A Decimal type is now available with built-in support for operators like *, /, +, <, etc. When used in an Any context (passed to Objective-C), it's bridged to NSDecimalNumber.
Old answer:
NSDecimal is not really supported in Swift (it's imported as an opaque C struct you can't do much with), but NSDecimalNumber is, and as in Obj-C it's the best thing to use for base-ten arithmetic (because it actually does its operations in base ten). NSLocale, NSNumberFormatter and friends all work too and should satisfy your localization needs.
Swift 3 now has a Decimal (value) type which is bridged to NSDecimalNumber.
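To make this concrete, here is a minimal sketch (the Sale type and the sample amount are placeholders, not from the original question): keep the amount as a Decimal and let a locale-aware NumberFormatter produce the display string.

import Foundation

// Store the exact base-ten amount as Decimal; format it only for display.
struct Sale {
    var amount: Decimal                 // e.g. a value read from the DECIMAL column
}

let sale = Sale(amount: Decimal(string: "19.99")!)

let formatter = NumberFormatter()
formatter.numberStyle = .currency
formatter.locale = Locale(identifier: "fr_FR")   // or Locale.current for the device locale

// Decimal bridges to NSDecimalNumber where an NSNumber is expected.
print(formatter.string(from: sale.amount as NSDecimalNumber) ?? "")   // "19,99 €"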
Related
Recently, in an iOS project, I refactored the money data type from double to NSDecimalNumber to get rid of discrepancies caused by handling money with doubles.
The refactoring is complete, but I'm concerned about all the comparisons made through standard operators, which the compiler does not flag as errors: comparing objects is still valid (the memory addresses are compared) but not logically correct.
double a = 4;
double b = 3;
if (a > b) { /* do stuff */ }  // valid and logically correct

NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithString:@"4"];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithString:@"3"];
if (a > b) { /* do stuff */ }  // valid but not logically correct: the pointers (memory addresses) are compared, not the values
Is there any way to make the compiler highlight these situations?
My first idea was to create a category on NSDecimalNumber and overload the operators so that they would work properly for it. Since it's not possible to overload operators in Obj-C, I tried two different ways to accomplish this:
Creating a Swift extension of NSDecimalNumber where I overload those operators (in Swift it is possible to overload standard operators) and then using those new methods in my Obj-C environment through the Swift bridge. After some tests, and having read a lot on StackOverflow, it seems this approach is not possible, as Obj-C will always treat standard operators in the usual way (comparing addresses for objects); a sketch of the Swift side follows this list.
Creating a category for NSDecimalNumber where I overload those operators through C++ (Objective-C++), which does let the programmer overload operators. The problem here is that I am not confident with C++ and I cannot find any example to guide me.
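For reference, here is a minimal sketch of what the Swift side of the first approach could look like (illustrative only; as noted above, Objective-C call sites using < or > on object pointers still compare addresses):

import Foundation

// Value-based comparisons for Swift call sites, delegating to compare(_:).
// Equality already goes through isEqual(_:) via NSObject's Equatable conformance.
extension NSDecimalNumber: Comparable {
    public static func < (lhs: NSDecimalNumber, rhs: NSDecimalNumber) -> Bool {
        lhs.compare(rhs) == .orderedAscending
    }
}

let a = NSDecimalNumber(string: "4")
let b = NSDecimalNumber(string: "3")
print(a > b)   // true: > is derived from < by Comparable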
So in the end I need a way to highlight any comparison between objects made with standard operators.
Any help is really appreciated,
Thanks.
What is the recommended data type for handling money (numeric values with just 2 decimal places) in Elixir/Erlang?
I think you should always use integers when handling money. Floating point operations can have rounding errors and money-handling code that's off even by 1 cent is often not ok. For example, instead of
amount = 99.99
Use
amount_cents = 9999
This is doubly important if you are storing the amount in a database since conversion between Elixir and your database and back may produce undesirable results.
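To show the shape of the integer-cents idea outside Elixir as well, here is a small sketch in Swift (purely illustrative; the type and field names are made up): the canonical amount is an integer number of cents and is only rendered as a decimal string at the edges.

import Foundation

// 9999 cents means 99.99; all arithmetic stays in integers.
struct SaleAmount {
    var cents: Int
}

func display(_ amount: SaleAmount) -> String {
    // Positive amounts only in this sketch.
    String(format: "%d.%02d", amount.cents / 100, amount.cents % 100)
}

print(display(SaleAmount(cents: 9999)))   // "99.99"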
I highly recommend using the Decimal library. There has been a lot of thought and work put into handling all the difficult edge cases.
Money, like cryptography, is not something you should implement yourself. You will get it wrong.
Using the Decimal library is the way to go in currency handling logic,
especially when you have to perform arithmetic operations with the quantities.
I have a problem comparing two variables of "Real" type. One is the result of a mathematical operation, stored in a dataset; the other is the value of an edit field on a form, converted by StrToFloat and stored in a "Real" variable. The problem is this:
As you can see, the program is trying to tell me that 121,97 is not equal to 121,97... I have read
this topic, and I am not completely sure that it is the same problem. If it were, wouldn't both numbers be stored in the variables as exactly the same closest representable number, which for 121.97 is 121.96999 99999 99998 86313 16227 83839 70260 62011 71875?
Now let's say they are not stored as the same closest representable number. How do I find out how exactly they are stored? When I look in the "CPU" debugging window, I am completely lost. I see the addresses where those values should be, but nothing even similar to a binary, hexadecimal or any other representation of the actual number... I admit that advanced debugging is an unknown universe to me...
Edit:
Those two values really are slightly different.
OK, I don't need to understand everything. Although I am not dealing with money, there will be a maximum of 3 decimal places, so the Currency type is the way out.
BTW: The calculation is:
DATA[i].Meta.UnUsedAmount := DATA[i].AMOUNT - ObjQuery.FieldByName('USED').AsFloat;
In this case it is 3695 - 3573.03
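That subtraction can be reproduced directly with IEEE 754 doubles; a quick sketch (in Swift only because it is easy to run; Delphi's floating-point math behaves the same way):

// 3573.03 has no exact binary representation, so it is rounded as soon as it is
// parsed; the subtraction then yields a double slightly below 121.97.
let computed: Double = 3695 - 3573.03
let literal: Double  = 121.97            // roughly what StrToFloat('121,97') produces

print(computed == literal)               // false
print(computed - literal)                // a tiny non-zero difference, on the order of -2e-13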
For reasons unknown, you cannot view a float value (single/double or real48) as hexadecimal in the watch list.
However, you can still view the hexadecimal representation by viewing it as a memory dump.
Here's how:
Add the variable to the watch list.
Right click on the watch -> Edit Watch...
View it as memory dump
Now you can compare the two values in the debugger.
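If you just want the raw bits rather than a memory dump, many languages will hand them to you directly; for example, a small Swift sketch of the same inspection:

import Foundation

let value: Double = 121.97
let bits = value.bitPattern                    // the raw IEEE 754 bits as a UInt64
print(String(format: "%016llX", bits))         // 405E7E147AE147AE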
Never use floats for monetary amounts
You do know of course that you should not use floats to count money.
You'll get into all sorts of trouble with rounding, and comparisons will not work the way you want them to.
If you want to work with money, use the Currency type instead. It does not have these problems, supports 4 decimal places and can be compared using the = operator with no rounding issues.
In your database, use the money or currency data type.
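For contrast, the same arithmetic in an exact base-ten type compares cleanly; here is a small sketch using Swift's Decimal, standing in for Delphi's Currency purely to illustrate the point:

import Foundation

let amount = Decimal(3695)
let used   = Decimal(string: "3573.03")!
let unused = amount - used                     // exactly 121.97, no binary rounding

print(unused == Decimal(string: "121.97")!)    // true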
For simple uses, such as tracking weight values like 65.1kg, is there any benefit of going with NSDecimal/NSDecimalNumber over double?
My understanding here is double (or even float) provides more than enough precision in such cases. Please correct me if I'm wrong.
First, read Josh Caswell's link. It is especially critical when working with money. In your case it may or may not matter, depending on your goal. If you put in 65.1 and you want to get exactly 65.1 back out, then you definitely need to use a format that rounds properly to decimal values, like NSDecimalNumber. If, when you put in 65.1, you want "a value that is within a small error of 65.1," then float or double are fine (depending on how much error you are willing to accept).
65.1 is a great example because it demonstrates the problem. I'm showing it here in Swift because it's so easy to demonstrate, but ObjC is the same:
1> 65.1
$R0: Double = 65.099999999999994
2>
1/10 happens to be a repeating fraction in binary, just as 1/3 is a repeating fraction in decimal. So 65.1 encoded as a double is "close to" 65.1, but not exact. If you need an exact representation of a decimal-encoded number (i.e. what most humans expect), use NSDecimalNumber. This isn't to say that NSDecimalNumber is more accurate than double; it just imposes different rounding errors than double. Which rounding errors you prefer depends on your use case.
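For contrast, a tiny sketch of keeping the same literal in an exact decimal type (Swift's Decimal, which bridges to NSDecimalNumber):

import Foundation

let asDouble: Double = 65.1
let asDecimal = Decimal(string: "65.1")!

print(asDouble)    // prints 65.1, but the stored double is 65.099999999999994...
print(asDecimal)   // 65.1, stored exactly as 651 x 10^-1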
As I understand it, Europeans(*) write numbers with a comma for a decimal separator, so one-and-a-quarter is written as 1,25
Europeans also use commas to separate lists, so how do you write a list of decimal numbers? I, as an Englishman, would write one-and-a-quarter, one-and-a-half, one-and-three-quarters like this:
1.25, 1.5, 1.75
How do you do that in Europe?
(Why is this a programming question? Because I'm writing a program that will ask European users for a list of numbers!)
* For the purposes of this question, there are no English-speaking countries in Europe. :-)
I'm European (French), and in almost all programs here we have to use the semicolon ';' as the separator, even if the numbers are only integers, because the comma doesn't look like a separator to us. In mathematics, the semicolon is the only correct way here to separate a list of numbers.
The most common example is entering the page numbers you want to print from a PDF: all programs ask for a semicolon-separated list, and I find it intuitive. I think they would have changed it if it were uncomfortable for people.
This varies by culture, and within a culture. The CLDR data contains the “list” element that specifies the list separator character, and it is the semicolon for most cultures; see the chart of number symbols (element “list”). The definition is very implicit though, and there is variation inside locales. Some people regard 1,25, 1,5, 1,75 as acceptable, while others prefer 1,25; 1,5; 1,75. There are also people who seriously think that in a strongly mathematical or numeric context, one should deviate from the locale practices and use the Anglo-Saxon notation with a decimal point, hence with a comma as the separator.
On the practical side, I think it would not be very wrong to use “;” as the number list separator when the decimal comma is used, or even when the decimal point is used. So you might even consider using “;” in all locales.
But when it comes to user input, it's trickier. In principle, you should be liberal in what you accept, but since a comma can be meant as a decimal comma, a thousands separator, or a list item separator, there is such a thing as being too liberal.
If possible, prompt for each number separately, avoiding the separator issue. If this is not possible, the crucial thing is to make it very, very clear to the user which separator is expected. I would go as far as saying that requiring the semicolon “;” is the most reliable thing to do.
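A rough sketch of that suggestion (Swift here, with fr_FR purely as an example locale): split the input on the semicolon and let a locale-aware NumberFormatter handle the decimal comma.

import Foundation

func parseNumberList(_ input: String, locale: Locale) -> [Double] {
    let formatter = NumberFormatter()
    formatter.locale = locale
    formatter.numberStyle = .decimal

    return input
        .split(separator: ";")
        .compactMap { formatter.number(from: String($0).trimmingCharacters(in: .whitespaces))?.doubleValue }
}

print(parseNumberList("1,25; 1,5; 1,75", locale: Locale(identifier: "fr_FR")))   // [1.25, 1.5, 1.75]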
Why ask about Europeans in general? I don't think there is one European way of doing this, and if there happened to be, it would be sheer luck. Europe comprises different cultures, and each has its own rules.
You don't mention what platform you are using, but you might be able to rely on your platform to get this information. In the case of .NET, you can get it through TextInfo.ListSeparator. For example, this would give you the French one (result: a semicolon):
string listSeparator = new CultureInfo("fr-FR").TextInfo.ListSeparator;
I don't think there is one way to do it. Whitespace separating the numbers would work just the same, or you could use a semicolon (';') to separate the numbers.