Hi, I want to work with geocoordinates in Java.
I've defined my Java variables as "double" and my Postgres column is defined as "double precision".
I've heard of problems with floats where 0.1 sometimes comes out as 0.09999...
It will have to store values like 50.081406 or 8.24481.
The values will be read from an Android device.
Do I have to worry about floating-point problems?
The issue with floats usually comes up with addition and subtraction, where, because of the binary (IEEE 754) representation used to store them on the computer, results don't always round out exactly to what you want. That being said, doubles are a great way to store lat/lon.
Also see: proper/best type for storing latitude and longitude
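For a rough sense of scale, here is a small Java sketch (not from the original question) showing the classic 0.1 artifact next to coordinate values like the ones above; the class and variable names are just illustrative:
public class GeoDoubleDemo {
    public static void main(String[] args) {
        double lat = 50.081406;                 // sample latitude from the question
        double lon = 8.24481;                   // sample longitude

        // The classic artifact shows up in arithmetic, not in simple storage:
        System.out.println(0.1 + 0.2);          // 0.30000000000000004

        // Coordinates at ~6 decimal places round-trip cleanly through a double:
        System.out.println(lat);                // 50.081406
        System.out.println(lon);                // 8.24481
        System.out.println(lat == 50.081406);   // true: same literal, same double
    }
}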
What is the recommended data type for handling money (numeric values with just 2 decimal places) in Elixir/Erlang?
I think you should always use integers when handling money. Floating-point operations can have rounding errors, and money-handling code that's off even by 1 cent is often not acceptable. For example, instead of
amount = 99.99
Use
amount_cents = 9999
This is doubly important if you are storing the amount in a database since conversion between Elixir and your database and back may produce undesirable results.
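A minimal sketch of the same integer-cents idea, written in Java only because the principle is language-independent (the names are made up for illustration):
public class CentsDemo {
    public static void main(String[] args) {
        long priceCents  = 200;    // 2.00 stored as 200 cents
        long couponCents = 110;    // 1.10 stored as 110 cents

        long dueCents = priceCents - couponCents;                        // exact: 90
        System.out.printf("%d.%02d%n", dueCents / 100, dueCents % 100);  // 0.90

        // The same calculation in doubles picks up representation error:
        System.out.println(2.00 - 1.10);                                 // 0.8999999999999999
    }
}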
I highly recommend using the Decimal library. There has been a lot of thought and work put into handling all the difficult edge cases.
Money, like cryptography, is not something you should implement yourself. You will get it wrong.
Using the Decimal library is the way to go for currency-handling logic,
especially when you have to perform arithmetic operations on the quantities.
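As a rough analogue for readers outside Elixir, Java's BigDecimal behaves much like the Decimal library here: construct from strings, do the arithmetic in decimal, and round explicitly. The numbers below are only an example:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("99.99");   // exact decimal value
        BigDecimal rate  = new BigDecimal("0.07");    // 7% tax, also exact

        BigDecimal tax = price.multiply(rate)
                              .setScale(2, RoundingMode.HALF_UP);  // 6.9993 -> 7.00
        System.out.println(price.add(tax));           // 106.99, exactly
    }
}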
I have a problem comparing two variables of type "Real". One is the result of a mathematical operation, stored in a dataset; the other is the value of an edit field on a form, converted by StrToFloat and stored in a "Real" variable. The problem is this:
As you can see, the program is trying to tell me that 121,97 is not equal to 121,97... I have read
this topic, and I am not completely sure that it is the same problem. If it were, wouldn't both numbers be stored in the variables as exactly the same closest representable number, which for 121.97 is 121.96999 99999 99998 86313 16227 83839 70260 62011 71875?
Now let's say that they are not stored as the same closest representable number. How do I find out how exactly they are stored? When I look in the "CPU" debugging window, I am completely lost. I see the addresses where those values should be, but nothing even resembling a binary, hexadecimal or any other representation of the actual number... I admit that advanced debugging is an unknown universe to me...
Edit:
Those two values really are slightly different.
OK, I don't need to understand everything. Although I am not dealing with money, there will be a maximum of 3 decimal places, so "Currency" is the way out.
BTW: The calculation is:
DATA[i].Meta.UnUsedAmount := DATA[i].AMOUNT - ObjQuery.FieldByName('USED').AsFloat;
In this case it is 3695 - 3573.03
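For illustration (sketched in Java rather than Delphi, since both use 64-bit IEEE doubles), the two sides of that comparison really are different values:
import java.math.BigDecimal;

public class CompareDemo {
    public static void main(String[] args) {
        double computed = 3695.0 - 3573.03;   // the value coming from the dataset
        double entered  = 121.97;             // StrToFloat of the edit field text

        System.out.println(computed == entered);       // false
        System.out.println(new BigDecimal(computed));  // 121.969999999999799...
        System.out.println(new BigDecimal(entered));   // 121.969999999999998863...
    }
}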
For reasons unknown, you cannot view a float value (Single, Double or Real48) as hexadecimal in the watch list.
However, you can still view the hexadecimal representation by viewing it as a memory dump.
Here's how:
Add the variable to the watch list.
Right click on the watch -> Edit Watch...
View it as a memory dump.
Now you can compare the two values in the debugger.
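The same inspection can also be done in code. A small Java sketch (Delphi's Double, which Real maps to, has the same 64-bit IEEE 754 layout, so the idea carries over):
public class BitsDemo {
    public static void main(String[] args) {
        double computed = 3695.0 - 3573.03;
        double entered  = 121.97;

        // Print the raw 64-bit patterns; they differ in the low-order bits,
        // which is exactly what the memory dump shows.
        System.out.println(Long.toHexString(Double.doubleToLongBits(computed)));
        System.out.println(Long.toHexString(Double.doubleToLongBits(entered)));
    }
}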
Never use floats for monetary amounts
You do know, of course, that you should not use floats to count money.
You'll get into all sorts of trouble with rounding, and comparisons will not work the way you want them to.
If you want to work with money, use the Currency type instead. It does not have these problems, supports 4 decimal places and can be compared using the = operator with no rounding issues.
In your database, use the money or currency data type.
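Java has no built-in equivalent of Delphi's Currency, but a rough sketch of the same fixed-point idea is an integer scaled by 10^4, which keeps the = comparison exact:
public class FixedPointDemo {
    public static void main(String[] args) {
        long a = 1_219_700;             // 121.9700 scaled by 10_000
        long b = 1_219_700;

        System.out.println(a == b);                          // true, exact integer compare
        System.out.println(121.97 == (3695.0 - 3573.03));    // false with plain doubles
    }
}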
For simple uses, such as tracking weight values like 65.1kg, is there any benefit of going with NSDecimal/NSDecimalNumber over double?
My understanding here is double (or even float) provides more than enough precision in such cases. Please correct me if I'm wrong.
First, read Josh Caswell's link. It is especially critical when working with money. In your case it may matter or may not, depending on your goal. If you put in 65.1 and you want to get exactly 65.1 back out, then you definitely need to use a format that rounds properly to decimal values, like NSDecimalNumber. If, when you put in 65.1, you want "a value that is within a small error of 65.1," then float or double are fine (depending on how much error you are willing to accept).
65.1 is a great example, because it demonstrates the problem. Here it is in Swift, because it's so easy to demonstrate there, but ObjC behaves the same:
1> 65.1
$R0: Double = 65.099999999999994
2>
1/10 happens to be a repeating fraction in binary, just like 1/3 is a repeating fraction in decimal. So 65.1 encoded as a double is "close to" 65.1, but not exact. If you need an exact representation of a decimal-encoded number (i.e. what most humans expect), use NSDecimalNumber. This isn't to say that NSDecimalNumber is more accurate than double. It just imposes different rounding errors than double does. Which rounding errors you prefer depends on your use case.
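The same point, sketched with Java's BigDecimal standing in for NSDecimalNumber as the decimal type:
import java.math.BigDecimal;

public class DecimalVsDouble {
    public static void main(String[] args) {
        System.out.println(new BigDecimal("65.1"));  // 65.1, stored exactly as a decimal
        System.out.println(new BigDecimal(65.1));    // the double's true value:
                                                     // 65.09999999999999431...
    }
}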
I'm playing around with Rails a bit and I have found something strange. For storing a money value I use the typical decimal data type, which Active Record converts to BigDecimal. I considered this to be precise and I thought it would avoid the odd behavior of floating-point math. Storing 99.99 to the db works fine, but when the record gets loaded by Active Record it loses precision and comes back as something like 99.9899999999. This looks like a floating-point issue.
I ran some tests and found out that creating a BigDecimal like this, b = BigDecimal.new("99.99"), leads to a "clean" value, but building it this way, b = BigDecimal.new(99.99), leads to the "unclean" version that I want to avoid.
I guess that ActiveRecord reconstructs the BigDecimal via an intermediate float when loading the record from the database. This is not what I want, and I would like to know if it can be avoided.
Ruby Version 1.9.3p0
Rails 3.2.9
Sqlite 3.7.9
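For comparison, the same round trip sketched with Java's BigDecimal shows what an intermediate float does to the value (the mechanics in Ruby are analogous):
import java.math.BigDecimal;

public class RoundTripDemo {
    public static void main(String[] args) {
        BigDecimal clean = new BigDecimal("99.99");  // exact decimal, like BigDecimal.new("99.99")
        double viaFloat  = clean.doubleValue();      // what a REAL column effectively keeps
        BigDecimal back  = new BigDecimal(viaFloat); // rebuilt from the double

        System.out.println(clean);   // 99.99
        System.out.println(back);    // 99.989999999999994884...
    }
}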
Your problem is that you're using SQLite and SQLite doesn't have native support for numeric(m,n) data types. From the fine manual:
1.0 Storage Classes and Datatypes
Each value stored in an SQLite database (or manipulated by the database engine) has one of the following storage classes:
NULL. The value is a NULL value.
INTEGER. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
REAL. The value is a floating point value, stored as an 8-byte IEEE floating point number.
TEXT. The value is a text string, stored using the database encoding (UTF-8, UTF-16BE or UTF-16LE).
BLOB. The value is a blob of data, stored exactly as it was input.
Read further down that page to see how SQLite's type system works.
Your 99.99 may be BigDecimal.new('99.99') in your Ruby code but it is almost certainly the REAL value 99.99 (i.e. an eight byte IEEE floating point value) inside SQLite and there goes the neighborhood.
So switch to a better database in your development environment; in particular, develop on top of whatever database you're going to be deploying on.
Don't use floating point for monetary values
Yes, exactly, SQLite is messing up your BigDecimal values.
The fundamental problem is that the floating-point format cannot store most decimal fractions exactly.
I believe you have about four choices:
Round everything to, say, two decimal places so that you never notice the slightly-off values.
Store your BigDecimal values in SQLite with the TEXT or BLOB storage class.
Use a different db that has some sort of decimal string support.
Scale everything to integral values and use the INTEGER storage class.
The problem is that FP fractions are rational numbers of the form x/2^n, but decimal monetary amounts have fractions of the form x/(2^n * 5^m). The representations just aren't compatible. For example, of 0.01 ... 0.99, only 0.25, 0.50, and 0.75 have exact binary representations.
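A quick check of that claim, sketched in Java: run every two-decimal fraction through a double and see which ones come back unchanged.
import java.math.BigDecimal;

public class ExactFractions {
    public static void main(String[] args) {
        for (int k = 1; k < 100; k++) {
            BigDecimal decimal   = BigDecimal.valueOf(k, 2);    // k/100, exactly
            BigDecimal viaDouble = new BigDecimal(k / 100.0);   // the double's true value
            if (decimal.compareTo(viaDouble) == 0) {
                System.out.println(decimal);                    // prints 0.25, 0.50, 0.75
            }
        }
    }
}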
Most of my applications revolve around financial calculations involving payments and interest rate calculations. I'm trying to determine which Delphi data type is best to use.
If I'm using a database to store these values and I've defined the fields in that database to be a decimal value with two decimal places, which Delphi datatype is most compatible with that scenario?
Should I use a rounding formula in Delphi to format the results to two decimal places before storing the values in the database? If so what is a best practice for doing so?
For such calculations, don't use floating-point types like Real, Single or Double. They are not good with decimal values like 0.01 or 1234.995, as they must approximate them.
You can use Currency, a fixed-point type, but that is still limited to 4 decimal places.
Try my Decimal type, which has 28-29 significant digits and a decimal exponent, so it is ideal for such calculations. The only disadvantage is that it is not FPU-supported (but it is written in assembler, nevertheless), so it is not as fast as the built-in types. It is the same as the Decimal type used in .NET (but a little faster) and quite similar to the one used on the Mac.
If you want to do financial calculations, don't use any of the floating-point/real types. Delphi has a Currency type, which is a fixed-point value with 4 decimal places, that should be just what you need.
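If you do round results to two decimal places before storing them, here is a sketch of that step using Java's BigDecimal as a stand-in for a decimal type (the rate and rounding rule are just examples; pick the rounding mode your business rules require):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestRounding {
    public static void main(String[] args) {
        BigDecimal principal = new BigDecimal("1234.56");
        BigDecimal rate      = new BigDecimal("0.0525");   // 5.25% p.a., illustrative

        BigDecimal interest = principal.multiply(rate)     // 64.814400
                                       .setScale(2, RoundingMode.HALF_EVEN);
        System.out.println(interest);                      // 64.81, two decimal places
    }
}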