I am in the process of upgrading a few in-house applications from ADS 7.1 to 8.1.
I was told a while back that there are changes in the return values of the AVG() function as well as in some division calculations, but I cannot find any documentation on these changes.
Does anyone know what I'm talking about or have a link that explains the details?
The "Effects of Upgrading to Version 8.1" topic in the help file has a small paragraph about the change, but doesn't go into any details.
Basically, as of version 8.1 Advantage now adheres to the SQL standard with regard to integer division: integer division expressions have the fractional portion truncated, where in the past they produced a floating-point result.
To address this change, you may have to cast certain expressions if you still want them to produce a floating-point result. For example:
This:
select int1 / int2 from mytable;
Would need to change to:
select cast( int1 as sql_float ) / int2 from mytable;
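If it helps to see the two behaviors side by side, here is a small Python sketch (just an analogy, not Advantage SQL; the variable names are mine) of the old result, the new truncating result, and the cast workaround:

int1, int2 = 7, 2

old_result = int1 / int2          # 3.5 -- the pre-8.1 behavior: a floating-point result
new_result = int(int1 / int2)     # 3   -- the 8.1 behavior: fraction truncated toward zero
cast_result = float(int1) / int2  # 3.5 -- the CAST workaround: promote an operand first

print(old_result, new_result, cast_result)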
Why is such code used in some applications instead of a MOVE?
add 16 to ZERO giving SOME-RESULT
I spotted this in professionally written code at several spots.
Source is on this page.
Without seeing more of the code, it appears that it could be a translation of IBM Assembler to COBOL. In particular, the ZAP (Zero and Add Packed) instruction may be literally translated to the above instruction, particularly if SOME-RESULT is COMP-3. Thus, someone checking the translation could see that the ZAP instruction was faithfully translated.
Or, it could be an assembler programmer's idea of a joke.
Having seen the code, I also note the use of
subtract some-data-item from some-data-item
which is used instead of
move zero to some-data-item
This is consistent with operations used with packed decimal fields in IBM Assembly, where there are no other instructions to accomplish "flexible" moves. By flexible, I mean that the packed decimal instructions contain a length field so that specific size MVC instructions need not be used.
This particular style, being unusual, may be related to catching copyright violations.
From my experience, I'm pretty sure I know the reason why the programmer would have done this. It has something to do with the binary representation of the number.
I bet SOME-RESULT is a packed-decimal (or COMP-3) format number. Let's assume the field is defined like this
05 SOME-RESULT PIC S9(5) COMP-3.
This results in a 3-byte field with a hex representation like this
x'00016C'
The decimal number is encoded as binary-coded decimal (BCD, one decimal digit per half-byte), and the last half-byte holds the sign.
Let's take a look at how the sign is defined:
if it is one of x'C', x'A', x'F', x'E' (café), then the number is positive
if it is one of x'B', x'D', then the number is negative
anything in x'0'..x'9' is not a valid sign, so we can distinguish signed packed decimals from unsigned ones.
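To make the sign rules concrete, here is a small Python sketch (purely illustrative; the helper name is mine, and real programs leave this to the hardware) that decodes a packed-decimal field, including the "café" rule:

def unpack_packed_decimal(data: bytes) -> int:
    # Illustrative decoder for an IBM packed-decimal (COMP-3) field.
    digits = ""
    for b in data[:-1]:
        digits += f"{b >> 4}{b & 0x0F}"      # two decimal digits per byte
    digits += str(data[-1] >> 4)             # last byte: one digit...
    sign = data[-1] & 0x0F                   # ...and the sign nibble
    if sign in (0xC, 0xA, 0xF, 0xE):         # "cafe": the positive signs
        return int(digits)
    if sign in (0xB, 0xD):                   # the negative signs
        return -int(digits)
    raise ValueError("not a valid sign nibble")

print(unpack_packed_decimal(bytes.fromhex("00016C")))   # 16
print(unpack_packed_decimal(bytes.fromhex("00016F")))   # also 16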
However, a zoned number (PIC 9(5) DISPLAY) - as in the source code - looks like this:
x'F0F0F0F1F6'
As you can see, each decimal digit is an EBCDIC character with the 'zone' part (the first half-byte) always being x'F'.
Now we get closer to your question!
What happens when we use
MOVE 16 TO SOME-RESULT
If you just MOVE a number to such a field, the MOVE is compiled into a PACK instruction at the machine-code level:
PACK SOME-RESULT,=C'16'
A PACK instruction takes a zoned number and packs it by picking only the second half-byte of each byte and storing it in the half-bytes of the packed number, with one exception: when it comes to the last byte, it simply flips the two half-bytes and stores them in the last byte of the packed decimal.
This means that the zone of the last byte of the zoned decimal becomes the sign in the packed decimal:
x'00016F'
So now we have an x'F' as the sign – which is a valid positive sign.
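Here is a Python simulation of that nibble shuffling (a sketch under the conventions above; the function name is mine, and the real PACK is of course a single machine instruction):

def pack(zoned: bytes, out_len: int) -> bytes:
    # Sketch of PACK: digit nibbles come from the low half of each byte;
    # the two half-bytes of the LAST byte are flipped, so its zone
    # (x'F' for EBCDIC digits) ends up as the sign nibble.
    nibbles = [b & 0x0F for b in zoned[:-1]]
    nibbles += [zoned[-1] & 0x0F, zoned[-1] >> 4]             # flip the last byte
    nibbles = [0] * (2 * out_len - len(nibbles)) + nibbles    # leading zeros
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

zoned_16 = bytes.fromhex("F1F6")          # '16' in EBCDIC zoned format
print(pack(zoned_16, 3).hex().upper())    # 00016F -- the sign ends up x'F'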
However, what happens if you use this COBOL statement instead?
ADD 16 TO ZERO GIVING SOME-RESULT
This compiles into multiple machine-level instructions:
PACK SOME_RESULT,=C'0'
PACK TEMP,=C'16'
AP SOME_RESULT,TEMP
(or similar - the key point is that it needs an AP somewhere)
This makes a slight difference in the result, because the AP (add packed) instruction always sets the resulting sign to either x'C' for a positive or x'D' for a negative result.
So the difference lies in the sign
x'00016C'
Finally, the question is why one would make this difference at all. After all, both x'F' and x'C' are valid positive signs. So why care?
There is one situation when this slight difference can cause big problems: When the packed decimal is part of an index key, then we would not get a match, even though the numbers are semantically identical!
Because this situation occurred quite often in older databases like VSAM and DL/I (later: IMS/DB), it became good practice to "normalize" packed decimals if they were part of an index key.
However, some programmers adopted the practice without knowing why, so you may come across code that uses this "normalization" even though the data are not used for index keys.
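A tiny Python illustration of why this matters for keys (hypothetical helper, same conventions as the sketches above): the two encodings of +16 are numerically equal but byte-wise different, and keyed access compares raw bytes:

from_move = bytes.fromhex("00016F")   # +16 as produced by a plain MOVE
from_ap   = bytes.fromhex("00016C")   # +16 as produced by ADD ... GIVING

print(from_move == from_ap)           # False: the raw key bytes differ

def normalize(packed: bytes) -> bytes:
    # Force the preferred signs x'C'/x'D', as an AP instruction would.
    sign = packed[-1] & 0x0F
    preferred = 0xD if sign in (0xB, 0xD) else 0xC
    return packed[:-1] + bytes([(packed[-1] & 0xF0) | preferred])

print(normalize(from_move) == normalize(from_ap))   # True: keys now match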
You might also wonder why a compiler does not optimize out the ADD 16 TO ZERO. I'm pretty sure it once did, but that broke a lot of applications, so this specific optimization was removed again or at least made a non-default option with warnings.
Additional useful info
Note that at least the Enterprise COBOL for z/OS compiler allows you to see exactly the machine code produced from your source code if you use the LIST compile option (see this example output). I recommend always compiling with the options LIST, MAP, OFFSET, and XREF, because they enable you to find the exact problem in your COBOL source even when all you have is a program dump from an abend.
Anyway, good programming practice is not to care about the compiler or the machine code, but about the other programmers who will have to maintain, and thus read and understand the code. Good practice would be to always prefer simple and readable instructions, and to document the reasons (right in the code) when deviating from this rule.
Some programmers like to do things "just because they can". I have a feeling that is what you are seeing here. It makes about as much sense as doing
a := 0 + b
would in Go.
I have a problem with comparing two variables of type "Real". One is the result of a mathematical operation, stored in a dataset; the second is the value of an edit field on a form, converted by StrToFloat and stored in a "Real" variable. The problem is this:
As you can see, the program is trying to tell me that 121,97 is not equal to 121,97... I have read
this topic, and I am not completely sure that it is the same problem. If it were, wouldn't both numbers be stored in the variables as exactly the same closest representable number, which for 121.97 is 121.96999 99999 99998 86313 16227 83839 70260 62011 71875?
Now let's say that they are not stored as the same closest representable number. How do I find out exactly how they are stored? When I look in the "CPU" debugging window, I am completely lost. I see the addresses where those values should be, but nothing even similar to a binary, hexadecimal or other representation of the actual number... I admit that advanced debugging is an unknown universe to me...
Edit:
those two values really are slightly different.
OK, I don't need to understand everything. Although I am not dealing with money, there will be a maximum of 3 decimal places, so Currency is the way to go.
BTW: The calculation is:
DATA[i].Meta.UnUsedAmount := DATA[i].AMOUNT - ObjQuery.FieldByName('USED').AsFloat;
In this case it is 3695 - 3573.03
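You can reproduce the effect outside Delphi. Python floats are the same IEEE 754 doubles as Delphi's Double (which is what Real maps to in modern Delphi), so this sketch with the numbers from your edit shows why the test fails and what a tolerance-based comparison looks like:

computed = 3695 - 3573.03    # the dataset calculation from the question
entered = 121.97             # what StrToFloat produces from the edit field

print(computed == entered)               # False
print(repr(computed), repr(entered))     # the last bits differ

# Comparing with a tolerance sidesteps the representation noise:
print(abs(computed - entered) < 1e-9)    # True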
For reasons unknown, you cannot view a float value (single/double or real48) as hexadecimal in the watch list.
However, you can still view the hexadecimal representation by viewing it as a memory dump.
Here's how:
Add the variable to the watch list.
Right click on the watch -> Edit Watch...
View it as a memory dump
Now you can compare the two values in the debugger.
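If you want the raw bits without the debugger at all, the same dump is a one-liner in, for example, Python (illustrative only, not Delphi):

import struct

value = 121.97
# The 8 raw bytes of the IEEE 754 double, big-endian:
print(struct.pack(">d", value).hex().upper())   # 405E7E147AE147AE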
Never use floats for monetary amounts
You do know of course that you should not use floats to count money.
You'll get into all sorts of trouble with rounding, and comparisons will not work the way you want them to.
If you want to work with money use the currency type instead. It does not have these problems, supports 4 decimal places and can be compared using the = operator with no rounding issues.
In your database, use the money or currency data type.
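For the curious, Currency is fixed-point under the hood: a 64-bit integer scaled by 10,000. A rough Python sketch of the idea (hypothetical helper, positive amounts only, for illustration):

def to_currency(s: str) -> int:
    # Parse a positive decimal string into an integer count of 1/10000ths.
    units, _, frac = s.partition(".")
    return int(units) * 10_000 + int((frac + "0000")[:4])

a = to_currency("121.97")
b = to_currency("121.97")
print(a == b)    # True: plain integer comparison, no rounding surprises
print(a)         # 1219700 (i.e. 121.97 scaled by 10,000)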
I'm studying for a test right now and can't seem to wrap my head around when to use "V" for a decimal instead of an actual decimal in PIC clauses. I've done some research but can't find anything I understand. I've only been learning COBOL for about a week, so is there a rule of thumb here? Thanks for your time.
You use an actual decimal point when you want to "output" a value which has decimal places: a report line, a position on a screen, or an item in an output file which is going to a "different" system that doesn't understand the format with an implied decimal place.
That's what the V is: an implied decimal place. It tells the compiler where to align results from calculations, MOVEs, whatever. Computer chips, and the machine instructions they support, don't know about actual decimal points for their internal processing.
COBOL is a language with fixed-length fields. The machine instructions don't need to know where the decimal point is (effectively it can deal with everything as integer values) but the compiler does, and the compiler has to do the correct scaling and alignment of results.
For storing on your own files, use V, the implied decimal place.
For data which is to be "human readable", or which is read by a system that cannot understand your character set or cannot scale what looks like an integer, use an actual decimal point, "." (for computer-readable stuff, you can sometimes use a separate scaling factor instead, if that is more convenient for the receiving system).
Basically, V for internal, . for external, should be a rule of thumb to get you there.
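If an analogy helps, here is what "V" means, mimicked in Python (the names are mine, only for illustration): the stored bytes hold plain digits, and the scale lives in the description, not in the data:

stored_digits = 12345   # a PIC 9(3)V99 field holding 123.45 stores the digits 12345
scale = 2               # the V says two digits sit after the point (metadata, not data)

value = stored_digits / 10 ** scale
print(value)            # 123.45 -- the compiler does this scaling for you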
Which COBOL are you using? I'm surprised it is not covered in your documentation.
I've spent many hours researching this and am pretty stuck: my question is - has the internal format of a Delphi TDateTime changed between Delphi 7 (released in 2002 or so) and today?
Scenario: I'm reading a binary logfile created by a Delphi 7 app, and the vendor tells me it's a TDateTime in the record, but decoding the bits shows it's clearly not standard IEEE 754 floating point even though the TDateTime produced by modern Delphi is.
But it's some kind of floating point with around 15 bits of exponent and 45 bits of significand (as opposed to 11 and 53 bits in IEEE 754), and the leading bit is a 1 (which in IEEE 754 indicates a negative number) for numbers that are clearly not negative, such as the current date/time.
Hints in old documentation suggested that TDateTime "read as" a double but wasn't necessarily represented internally as one, which means that the internal format would be mostly invisible except where these TDateTimes were written out in binary form.
My suspicion is that the change occurred with Delphi 8, which added .NET support, but I simply can't find any references to this anywhere. I have Perl code (!) that mostly works at picking these values apart, but I'd love to find a formal spec so I can do it properly.
Any old-timers run into this?
~~~ Steve
Nothing has changed since Delphi 7. In Delphi 7, and in fact in previous versions, TDateTime is an IEEE 754 double, measuring the number of days since the Delphi epoch.
You are going to need to get in touch with the software vendor and try to work out what this data's format really is. It would be surprising if the format was a non-IEEE754 floating point data type. Are you quite sure that it is floating point?
As for BCB3, BCB6 and D4, it's exactly the IEEE 754 double-precision floating-point format; in the VCL source file system.pas (as included with BCB6) it's defined thus:
TDateTime = type Double;
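For reference, here is a Python sketch of decoding that standard format, which you could port back to your Perl (the epoch is 1899-12-30; the helper name is mine, and the sketch ignores TDateTime's special handling of values before the epoch):

import struct
from datetime import datetime, timedelta

DELPHI_EPOCH = datetime(1899, 12, 30)

def decode_tdatetime(raw: bytes) -> datetime:
    # 8 little-endian bytes as an IEEE 754 double: whole days in the
    # integer part, time of day in the fractional part, since the epoch.
    days, = struct.unpack("<d", raw)
    return DELPHI_EPOCH + timedelta(days=days)

print(decode_tdatetime(struct.pack("<d", 2.75)))   # 1900-01-01 18:00:00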
I'm using Delphi 2010 with DB2 9.7 Express-C, and I have a database with several fields of decimal type for monetary values. Now I see that there are some problems using it: for example, the value 9.20 displays as 9.19999980926514 in my front-end. Do I need to change all the fields in my database to DECFLOAT, or is there a function, a property in TField, or another alternative to solve this?
Thanks.
Davis
Working directly with monetary decimals is almost always a problem. Due to the different conversions made between the database and your front-end application, it is possible to lose or to gain small amounts (this applies to most financial systems as well; see banker's rounding).
I suggest you use the RoundTo function before performing operations, displaying values, etc. A very good article about rounding: http://docwiki.embarcadero.com/RADStudio/XE2/en/Floating-Point_Rounding_Issues
Another suggestion is to use the Currency type. Here is a question on SO with a good explanation of this type: How to avoid rounding problems when comparing currency values in Delphi?
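The idea behind RoundTo, sketched in Python with a decimal type (just an illustration of rounding the float noise away at a fixed number of places before comparing or displaying):

from decimal import Decimal, ROUND_HALF_EVEN

raw = 9.19999980926514          # what the float field delivers
clean = Decimal(str(raw)).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(clean)                    # 9.20 -- banker's rounding at two places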