I have a problem with the comparison of two variables of the "Real" type. One is the result of a mathematical operation, stored in a dataset; the second is the value of an edit field on a form, converted by StrToFloat and stored in a "Real" variable. The problem is this:
As you can see, the program is trying to tell me that 121,97 is not equal to 121,97... I have read this topic, and I am not completely sure that it is the same problem. If it were, wouldn't both numbers be stored in the variables as exactly the same closest representable number, which for 121.97 is 121.96999 99999 99998 86313 16227 83839 70260 62011 71875?
Now let's say that they are not stored as the same closest representable number. How do I find out exactly how they are stored? When I look in the "CPU" debugging window, I am completely lost. I see the addresses where those values should be, but nothing even similar to a binary, hexadecimal or other representation of the actual number... I admit that advanced debugging is an unknown universe to me...
Edit:
Those two values really are slightly different.
OK, I don't need to understand everything. Although I am not dealing with money, there will be a maximum of 3 decimal places, so "Currency" is the way out.
BTW: The calculation is:
DATA[i].Meta.UnUsedAmount := DATA[i].AMOUNT - ObjQuery.FieldByName('USED').AsFloat;
In this case it is 3695 - 3573.03
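For what it's worth, the mismatch can be reproduced outside Delphi. Here is a minimal sketch in Python (which uses the same 64-bit IEEE 754 format that Real/Double maps to in modern Delphi), showing that the subtraction result and the parsed constant are two slightly different doubles:

    from decimal import Decimal

    computed = 3695 - 3573.03      # the value stored in the dataset
    entered = float("121.97")      # what StrToFloat produces from the edit field

    print(computed == entered)     # False: the two doubles differ by a few ulps
    print(Decimal(computed))       # exact decimal expansion of the bits actually stored
    print(Decimal(entered))        # differs from the line above in the last few digits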
For reasons unknown, you cannot view a float value (single/double or real48) as hexadecimal in the watch list.
However, you can still view the hexadecimal representation by viewing it as a memory dump.
Here's how:
Add the variable to the watch list.
Right-click the watch -> Edit Watch...
View it as a memory dump.
Now you can compare the two values in the debugger.
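If you just want to see the raw bits outside the IDE, the same inspection can be sketched in a few lines of Python (struct gives the big-endian byte layout of an IEEE 754 double; the Delphi memory dump shows the same 8 bytes, just in little-endian order on x86):

    import struct

    computed = 3695 - 3573.03      # the value coming from the calculation
    entered = float("121.97")      # the value coming from StrToFloat

    print(struct.pack(">d", computed).hex())   # 8-byte representation, most significant byte first
    print(struct.pack(">d", entered).hex())    # the low-order bytes differ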
Never use floats for monetary amounts
You do know, of course, that you should not use floats to count money.
You'll get into all sorts of trouble with rounding, and comparisons will not work the way you want them to.
If you want to work with money use the currency type instead. It does not have these problems, supports 4 decimal places and can be compared using the = operator with no rounding issues.
In your database, use the money or currency datatype.
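To see the principle in isolation, here is a minimal sketch using Python's decimal module (the point is the representation, not the language: Delphi's Currency is likewise a fixed-point type, stored as a scaled 64-bit integer with four implied decimal places):

    from decimal import Decimal

    # Binary floats cannot hold 121.97 (or 3573.03) exactly, so the arithmetic drifts:
    print(3695 - 3573.03 == 121.97)                                    # False

    # A decimal/fixed-point type keeps exactly the digits that were written,
    # so the = comparison behaves the way the user expects:
    print(Decimal("3695") - Decimal("3573.03") == Decimal("121.97"))   # True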
Why is such code used in some applications instead of a MOVE?
add 16 to ZERO giving SOME-RESULT
I spotted this in professionally written code in several spots.
The source is on this page.
Without seeing more of the code, it appears that it could be a translation of IBM Assembler to COBOL. In particular, the ZAP (Zero and Add Packed) instruction may be literally translated to the above instruction, particularly if SOME-RESULT is COMP-3. Thus, someone checking the translation could see that the ZAP instruction was faithfully translated.
Or, it could be an assembler programmer's idea of a joke.
Having seen the code, I also note the use of
subtract some-data-item from some-data-item
which is used instead of
move zero to some-data-item
This is consistent with operations used with packed decimal fields in IBM Assembly, where there are no other instructions to accomplish "flexible" moves. By flexible, I mean that the packed decimal instructions contain a length field so that specific size MVC instructions need not be used.
This particular style, being unusual, may be related to catching copyright violations.
From my experience, I'm pretty sure I know the reason why the programmer would have done this. It has something to do with the binary representation of the number.
I bet SOME-RESULT is a packed-decimal (or COMP-3) format number. Let's assume the field is defined like this
05 SOME-RESULT PIC S9(5) COMP-3.
This results in a 3-byte field with a hex representation like this
x'00016C'
The decimal number is encoded as binary-coded decimal (BCD, one decimal digit per half-byte), and the last half-byte holds the sign.
Let's take a look at how the sign is defined:
if it is one of x'C', x'A', x'F', x'E' (café), then the number is positive
if it is one of x'B', x'D', then the number is negative
x'0'..x'9' are not valid signs, so we can distinguish signed packed decimals from unsigned ones.
However, a zoned number (PIC 9(5) DISPLAY) - as in the source code - looks like this:
x'F0F0F0F1F6'
As you can see, each decimal digit is an EBCDIC character with the 'zone' part (the first half-byte) always being x'F'.
Now we get closer to your question!
What happens when we use
MOVE 16 TO SOME-RESULT
If you just MOVE a number to such a field, this is compiled into a PACK instruction at the machine-code level.
PACK SOME-RESULT,=C'16'
A PACK instruction takes a zoned number and packs it by picking only the second half-byte of each byte and storing it in the half-bytes of the packed number - with one exception: when it comes to the last byte, it simply flips the two half-bytes and stores them in the last byte of the packed decimal.
This means that the zone of the last byte of the zoned decimal becomes the sign in the packed decimal:
x'00016F'
So now we have an x'F' as the sign – which is a valid positive sign.
However, what happens if we use this COBOL instruction instead?
ADD 16 TO ZERO GIVING SOME-RESULT
This compiles into multiple machine level instructions
PACK SOME_RESULT,=C'0'
PACK TEMP,=C'16'
AP SOME_RESULT,TEMP
(or similar - the key point is that it needs an AP somewhere)
This makes a slight difference in the result, because the AP (add packed) instruction always sets the resulting sign to either x'C' for a positive or x'D' for a negative result.
So the difference lies in the sign
x'00016C'
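To make those byte layouts concrete, here is a rough Python simulation of the two code paths (the pack, unpack and add_packed helpers are only an illustration of the nibble and sign handling described above, not real machine code):

    def pack(zoned_digits, out_len):
        # PACK: keep the digit nibbles; the zone of the last byte (x'F') becomes the sign.
        nibbles = [int(c) for c in zoned_digits] + [0xF]
        nibbles = [0] * (out_len * 2 - len(nibbles)) + nibbles
        return bytes(hi << 4 | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

    def unpack(packed):
        digits = packed.hex()[:-1]
        sign = packed[-1] & 0xF
        return int(digits) * (-1 if sign in (0xB, 0xD) else 1)

    def add_packed(a, b):
        # AP: add the values, then always re-sign with x'C' (positive) or x'D' (negative).
        total = unpack(a) + unpack(b)
        sign = 0xC if total >= 0 else 0xD
        nibbles = [int(d) for d in f"{abs(total):0{len(a) * 2 - 1}d}"] + [sign]
        return bytes(hi << 4 | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

    move_16 = pack("16", 3)                            # MOVE 16 TO SOME-RESULT
    add_16 = add_packed(pack("0", 3), pack("16", 3))   # ADD 16 TO ZERO GIVING SOME-RESULT
    print(move_16.hex().upper())                       # 00016F
    print(add_16.hex().upper())                        # 00016C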
Finally, the question is why would one make this difference? After all, both x'F' and x'C' are valid positive signs. So why care?
There is one situation when this slight difference can cause big problems: When the packed decimal is part of an index key, then we would not get a match, even though the numbers are semantically identical!
Because this situation occurred quite often in older databases like VSAM and DL/I (later: IMS/DB), it became good practice to "normalize" packed decimals if they were part of an index key.
However, some programmers adopted the practice without knowing why, so you may come across code that uses this "normalization" even though the data are not used for index keys.
You might also wonder why a compiler does not optimize out the ADD 16 TO ZERO. I'm pretty sure it once did, but that broke a lot of applications, so this specific optimization was removed again or at least made a non-default option with warnings.
Additional useful info
Note that at least the Enterprise COBOL for z/OS compiler allows you to see exactly the machine code that is produced from your source code if you use the LIST compile option (see this example output). I recommend always compiling with the options LIST, MAP, OFFSET, and XREF, because they enable you to find the exact problem in your COBOL source even when you only have a program dump from an abend.
Anyway, good programming practice is not to care about the compiler or the machine code, but about the other programmers who will have to maintain, and thus read and understand the code. Good practice would be to always prefer simple and readable instructions, and to document the reasons (right in the code) when deviating from this rule.
Some programmers like to do things "just because they can". I have a feeling that is what you are seeing here. It makes about as much sense as doing
a := 0 + b
would in Go.
I have the following sheet:
Why does the difference of two cells with the same value (J7 & J8) not equal zero?
Here is an example spreadsheet.
Switching to Format/Number/Scientific on that cell, you can see the leftover value:
These rounding errors are clearly a regular thing. You can use a formula such as =E3-round(E3,2) to see them for any cell:
Clearly, allowing for a small value in the conditional formatting would avoid the problem:
To add to df778899's answer, this is not just a "regular thing". It is an unavoidable consequence of floating-point arithmetic.
The surprising thing is that Google apparently represents numbers internally in the floating-point format. For the same numbers, Excel gives the correct result in the equality comparison, which suggests that unlike Google, Excel's representation is fixed-point.
EDIT: upon double-checking, one can find examples of floating point errors in Excel too, so this paragraph is not correct.
Students in computer science are taught to always use approximated and not exact comparison for FP numbers. I think it's unreasonable to expect this from spreadsheets users, though.
In practical terms, when comparing fractionals to other fractionals, avoid using
=if(j7=j8, ...)
or EQ(). Instead, use
=if(abs(J7-J8)<0.0001, ...)
(with an appropriately chosen tolerance)
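The same pattern outside a spreadsheet, sketched in Python just to show what "approximate comparison" means in practice (the tolerance of 1e-9 is arbitrary; pick one that suits your data):

    import math

    a = 0.1 + 0.2
    b = 0.3

    print(a == b)              # False: exact comparison of binary floats
    print(abs(a - b) < 1e-9)   # True: compare against a tolerance instead
    print(math.isclose(a, b))  # True: standard-library helper with a relative tolerance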
Studying for a test right now and can't seem to wrap my head around when to use "V" for a decimal instead of an actual decimal point in PIC clauses. I've done some research but can't find anything I understand. I've only been learning COBOL for about a week, so is there a rule of thumb here? Thanks for your time.
You use an actual decimal point when you want to "output" a value which has decimal places, like a report line, a position on a screen, or an item in an output file which is going to a "different" system that doesn't understand the format with an implied decimal place.
That's what the V is, it is an implied decimal place. It tells the compiler where to align results from calculations, MOVEs, whatever. Computer chips, and the machine instructions they support, don't know about actual decimal points for their internal processing.
COBOL is a language with fixed-length fields. The machine instructions don't need to know where the decimal point is (effectively it can deal with everything as integer values) but the compiler does, and the compiler has to do the correct scaling and alignment of results.
When storing data in your own files, use V, the implied decimal place.
For data which is to be "human readable", or which is read by a system that cannot understand your character set or cannot scale what looks like an integer, use an actual decimal point, "." (for computer-readable stuff, you can sometimes use a separate scaling factor, if that is more convenient for the receiving system).
Basically, V for internal, . for external, should be a rule of thumb to get you there.
Which COBOL are you using? I'm surprised it is not covered in your documentation.
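If it helps to see the idea outside COBOL, here is a tiny Python sketch of what the implied decimal place means (the digits, the field sizes and the scale of 2 are just an example):

    # A field like PIC 9(3)V99 stores only the digits; the V is bookkeeping
    # telling the compiler that the last two digits are fractional.
    stored_digits = "12345"        # what actually sits in the record
    implied_scale = 2              # contributed by the V99 part of the picture

    value = int(stored_digits) / 10 ** implied_scale
    print(value)                   # 123.45

    # A field like PIC 9(3).99 stores the '.' character itself ("123.45"),
    # which is what you want for reports and other "external" output.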
I have a StringGrid on a Delphi form and I am trying to divide the value of one cell by the value of a cell in another column.
But the problem is that the StringGrid cells are populated with different kinds of numbers, so I am getting convert errors.
For example the numbers in cells can look like
0.37 or 34 or 0.0013 or 0.00 or 0.35 or 30.65 or 45.9108 or 0.0307 or 6854.93.
In other words, I never know whether it is going to be a real, float, integer or any other kind of type in those cells.
I have looked everywhere on the internet but no luck. Anyone have any ideas? By the way, I am not exactly a Delphi expert. Thanks.
For each string, convert it first to a float value using the StrToFloat function in SysUtils.pas. This should allow for any numerical type to be dealt with (unless you have something unusual like complex numbers). As you have some zero values in your list above, you should also ensure that you check for divide-by-zero conditions, as this will also potentially raise an exception.
SysUtils has many functions such as TryStrToFloat, TryStrToInt, TryStrToInt64 etc. for this purpose. These functions accept a reference parameter (var parameter) for returning the converted value, and the function itself returns True if the conversion is successful.
If you are sure that the string has a valid number then you can check the input string to see if it has a decimal point before deciding which function to use.
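A sketch of that approach in Python, just to show the shape of the logic (divide_cells is an illustrative name; in Delphi you would use TryStrToFloat for the parsing step and check the divisor before dividing):

    def divide_cells(numerator_text, denominator_text):
        # Parse both cell strings; bail out if either is not a number.
        try:
            a = float(numerator_text)
            b = float(denominator_text)
        except ValueError:
            return None
        # Guard against division by zero, which some cells ("0.00") would trigger.
        if b == 0:
            return None
        return a / b

    print(divide_cells("45.9108", "0.37"))   # roughly 124.08
    print(divide_cells("0.37", "0.00"))      # None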
Treat all the numbers as float. Use StrToFloat, divide the numbers, and then convert the result back to string with FloatToStr. If the result is an integer, no decimal point would be produced.
Right now I have two fields for cost. One for dollars and one for cents. This works, but it is a bit ugly. It also doesn't allow the user to enter the term "free" or "no cost" if they want. But if I only have one field, I might have to make my parser a bit smarter. What do you think?
On the server side, I combine dollars and cents to store them as decimals in my database. Mainly so that I can gather statistics (cost averages, etc.) quickly.
Do you think it is better to store the cost as a string? Then whenever I actually use the cost for stats or other purposes, I would convert it to a decimal at that point. Or am I on the right track?
There is a rule in database design that states that "atomic data" should not be split. By this rule, a price or cost is an example of atomic data, and therefore it should never be split among multiple columns, just as you shouldn't split a phone number among multiple columns (unless you really have a very good reason for it, which is rare).
Use a DECIMAL data type. Something like DECIMAL(8,3) should work and it's supported by all ANSI SQL compliant database products!
You can consult Joe Celko's "Thinking In Sets" book for a discussion of this topic. See section 1.6.2, pages 21-22.
EDIT -
It seems from your question that you are also concerned with how to accept the user's input in a form that resembles a price (xxxx.xx) - hence the two input boxes, for the whole dollars and the pennies.
I recommend using a single input box and then doing input validation using regular expressions to match your format (i.e. something like [0-9]+(\.[0-9]{1,3})? would probably work, but could be improved). You could then parse the validated string into a Decimal type in your language, or just pass it as a string into your database - SQL will know how to cast it to a DECIMAL type.
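As a sketch of that validation step in Python (parse_price and PRICE_PATTERN are just illustrative names, and the pattern is the one suggested above with the dot escaped; tighten it to suit your rules):

    import re
    from decimal import Decimal

    PRICE_PATTERN = re.compile(r"^[0-9]+(\.[0-9]{1,3})?$")

    def parse_price(text):
        """Return the price as a Decimal, or None if the input is not a valid amount."""
        if not PRICE_PATTERN.match(text):
            return None
        return Decimal(text)

    print(parse_price("1234.56"))   # Decimal('1234.56')
    print(parse_price("free"))      # None - handle "free"/"no cost" separately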
Keep the whole cost as a decimal. If it's free, then keep the cost as 0. In the presentation, if the cost is zero, write "free" instead of 0.
I generally store the cost as the lowest unit (pennies) and then convert it to whole dollars later.
So a cost of $4.50 gets stored as 450. Free items would be -1 pennies. You could store free things as 0 pennies as well; this gives you the flexibility to use 0 and -1 to mean two slightly different things (free vs. no sale?).
It also makes it easier to support countries that don't use cents if you choose to go that route.
As for presenting the data entry field, I personally don't like it when I have to keep switching fields for tiny things (like when they break up phone numbers into 3 fields, or IP addresses into 4). I'd present one field, and let the users type the decimal point in themselves. That way, your users don't have to tab (or click, if they are unfamiliar with tab) to the next field.
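Here is a small sketch in Python of the cents-as-integer idea described above (FREE, to_cents and display are just illustrative names, and the -1 sentinel is the convention suggested in this answer):

    from decimal import Decimal

    FREE = -1                      # sentinel for "free", distinct from a genuine 0-cent price

    def to_cents(dollars_text):
        """Parse user input like '4.50' into whole cents."""
        return int(Decimal(dollars_text) * 100)

    def display(cents):
        if cents == FREE:
            return "free"
        return f"${cents // 100}.{cents % 100:02d}"

    print(to_cents("4.50"))        # 450
    print(display(450))            # $4.50
    print(display(FREE))           # free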
Use cents: store 450 for $4.50. This will save you from problems that arise very often from the fact that floating-point operations are not exact. Just try the following expression in irb:
0.4 - 0.3 == 0.1 will return false, all because of floating-point representation inaccuracies.
In my models I'm always using:

    def price_with_cents
      self.price / 100.0
    end

    def price_with_cents=(num)
      self.price = (num.to_f * 100).round   # round rather than truncate, to avoid float artifacts
    end

And the name of the column is just price, with an integer type.
I don't have much experience with decimal columns and their representation in Ruby (which can be a float, which is problematic, as I've shown at the beginning).
Don't allow garbage to make it into your database. If you're expecting a dollar amount in a field, then make sure it's valid before it gets in there. This will allow you to report better on the data and allow simpler formatting on output.
I suggest making this a single field with validation on update or insert.
if field != SpecialFreeTag then
    try to convert to decimal
    if fail then report to user
    otherwise accept value
Use try parse or regular expressions to help with the validation.
I would store the cost as decimal with the scale being no less than 2 and maybe even 3-5. If something is bought in bulk the unit cost could easily include fractions of a cent. Free items have a cost of 0. If the cost is unknown then allow null values also.