When to use V instead of a decimal in Cobol Pic Clauses - cobol

Studying for a test right now and can't seem to wrap my head around when to use "V" for a decimal instead of an actual decimal in PIC clauses. I've done some research but can't find anything I understand. Only been learning cobol for about a week, so is there like a rule of thumb here? Thanks for your time.

You use an actual decimal point when you want to "output" a value which has decimal places: a report line, a position on a screen, an item in an output file going to a "different" system which doesn't understand the format with an implied decimal place.
That's what the V is, it is an implied decimal place. It tells the compiler where to align results from calculations, MOVEs, whatever. Computer chips, and the machine instructions they support, don't know about actual decimal points for their internal processing.
COBOL is a language with fixed-length fields. The machine instructions don't need to know where the decimal point is (effectively it can deal with everything as integer values) but the compiler does, and the compiler has to do the correct scaling and alignment of results.
Storing on your own files, use V, the implied decimal place.
For data which is to be "human readable", or read by a system which cannot understand your character set or cannot scale what looks like an integer, use an actual decimal point, . (for computer-readable data you can sometimes use a separate scaling factor instead, if that is more convenient for the receiving system).
Basically, V for internal, . for external, should be a rule of thumb to get you there.
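As a hedged sketch of what the implied decimal means, here is a small Python simulation (the digit string, scale, and edited format are made-up illustrative values, not COBOL semantics verbatim):

```python
# A field like PIC 9(3)V99 stores only the digits; the compiler
# remembers that the last two are decimal places (the V, scale = 2).
raw = "12345"                    # the bytes actually stored: no '.' anywhere
scale = 2                        # position of the implied decimal point
value = int(raw) / 10 ** scale
print(value)                     # 123.45

# An edited field like PIC 999.99 stores the '.' as a real character,
# which is what reports and external systems need to see.
display = f"{value:06.2f}"
print(display)                   # 123.45
```

The point of the sketch: with V, the decimal point costs no storage and exists only in the compiler's bookkeeping; with ., it is a real byte in the data.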
Which COBOL are you using? I'm surprised it is not covered in your documentation.

Related

Add to zero...What is it for?

Why such code is used in some applications instead of a MOVE?
add 16 to ZERO giving SOME-RESULT
I spotted this in professionally written code at several spots.
Source is on this page
Without seeing more of the code, it appears that it could be a translation of IBM Assembler to COBOL. In particular, the ZAP (Zero and Add Packed) instruction may be literally translated to the above instruction, particularly if SOME-RESULT is COMP-3. Thus, someone checking the translation could see that the ZAP instruction was faithfully translated.
Or, it could be an assembler programmer's idea of a joke.
Having seen the code, I also note the use of
subtract some-data-item from some-data-item
which is used instead of
move zero to some-data-item
This is consistent with operations used with packed decimal fields in IBM Assembly, where there are no other instructions to accomplish "flexible" moves. By flexible, I mean that the packed decimal instructions contain a length field so that specific size MVC instructions need not be used.
This particular style, being unusual, may be related to catching copyright violations.
From my experience, I'm pretty sure I know the reason why the programmer would have done this. It has something to do with the binary representation of the number.
I bet SOME-RESULT is a packed-decimal (or COMP-3) format number. Let's assume the field is defined like this
05 SOME-RESULT PIC S9(5) COMP-3.
This results in a 3-byte field with a hex representation like this
x'00016C'
The decimal number is encoded as binary-coded decimal (BCD, one decimal digit per half-byte), and the last half-byte holds the sign.
Let's take a look at how the sign is defined:
if it is one of x'C', x'A', x'F', x'E' (café), then the number is positive
if it is one of x'B', x'D', then the number is negative
any of x'0'..x'9' are not valid signs, so we can distinguish signed packed decimals from unsigned ones.
However, a zoned number (PIC 9(5) DISPLAY) - as in the source code - looks like this:
x'F0F0F0F1F6'
As you can see, each decimal digit is an EBCDIC character with the 'zone' part (the first half-byte) always being x'F'.
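The two representations above can be simulated in Python; a hedged sketch (EBCDIC digit zones x'F0'..x'F9' and a positive sign nibble x'C' assumed, as described above):

```python
# Zoned vs packed representations of +16 (illustrative only).
n = "16".zfill(5)                        # five digits: "00016"
zoned = bytes(0xF0 | int(d) for d in n)  # each byte: x'F' zone + digit
print(zoned.hex().upper())               # F0F0F0F1F6

packed = bytes.fromhex(n + "C")          # digit nibbles + sign nibble x'C'
print(packed.hex().upper())              # 00016C
```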
Now we get closer to your question!
What happens when we use
MOVE 16 TO SOME-RESULT
If you just MOVE a number to such a field, this is compiled into a PACK instruction at the machine-code level.
PACK SOME-RESULT,=C'16'
A PACK instruction takes a zoned number and packs it by picking only the second half-byte of each byte and storing it in the half-bytes of the packed number - with one exception: when it comes to the last byte, it simply flips the two half-bytes and stores them in the last byte of the packed decimal.
This means that the zone of the last byte of the zoned decimal becomes the sign in the packed decimal:
x'00016F'
So now we have an x'F' as the sign – which is a valid positive sign.
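What PACK does can be mimicked in a few lines of Python; a hedged sketch, not a faithful model of the machine instruction in every edge case:

```python
# PACK, simulated: keep the right nibble of each zoned byte, but flip
# the two nibbles of the last byte so its zone becomes the sign.
zoned = bytes.fromhex("F0F0F0F1F6")            # zoned '00016'
nibbles = [b & 0x0F for b in zoned[:-1]]       # digit nibbles, all but last
nibbles += [zoned[-1] & 0x0F, zoned[-1] >> 4]  # last byte flipped: digit, sign
print("".join(f"{x:X}" for x in nibbles))      # 00016F - sign is x'F'
```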
However, what happens if we use this COBOL statement instead
ADD 16 TO ZERO GIVING SOME-RESULT
This compiles into multiple machine level instructions
PACK SOME_RESULT,=C'0'
PACK TEMP,=C'16'
AP SOME_RESULT,TEMP
(or similar - the key point is that it needs an AP somewhere)
This makes a slight difference in the result, because the AP (add packed) instruction always sets the resulting sign to either x'C' for a positive or x'D' for a negative result.
So the difference lies in the sign
x'00016C'
Finally, the question is why would one make this difference? After all, both x'F' and x'C' are valid positive signs. So why care?
There is one situation when this slight difference can cause big problems: When the packed decimal is part of an index key, then we would not get a match, even though the numbers are semantically identical!
Because this situation occurred quite often in older databases like VSAM and DL/I (later: IMS/DB), it became good practice to "normalize" packed decimals if they were part of an index key.
However, some programmers adopted the practice without knowing why, so you may come across code that uses this "normalization" even though the data are not used for index keys.
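The index-key consequence can be shown with a small hedged Python comparison (byte strings stand in for the stored key fields; the two hex values are the ones derived above):

```python
# Two packed fields holding +16: numerically equal, byte-different.
moved = bytes.fromhex("00016F")  # MOVE 16 ...        -> PACK, sign x'F'
added = bytes.fromhex("00016C")  # ADD 16 TO ZERO ... -> AP, sign x'C'

def value(p):
    # digits only, ignoring the sign nibble (both F and C mean positive)
    return int(p.hex()[:-1])

print(value(moved) == value(added))  # True  - semantically the same number
print(moved == added)                # False - a raw byte-wise key lookup misses
```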
You might also wonder why a compiler does not optimize out the ADD 16 TO ZERO. I'm pretty sure it once did, but that broke a lot of applications, so this specific optimization was removed again or at least made a non-default option with warnings.
Additional useful info
Note that at least the Enterprise COBOL for z/OS compiler lets you see exactly the machine code produced from your source if you use the LIST compile option (see this example output). I recommend always compiling with the options LIST, MAP, OFFSET, and XREF, because they enable you to find the exact problem in your COBOL source even when all you have is a program dump from an abend.
Anyway, good programming practice is not to care about the compiler or the machine code, but about the other programmers who will have to maintain, and thus read and understand the code. Good practice would be to always prefer simple and readable instructions, and to document the reasons (right in the code) when deviating from this rule.
Some programmers like to do things "just because they can". I have a feeling that is what you are seeing here. It makes about as much sense as doing
a := 0 + b
would in Go.

Delphi Roundto and FormatFloat Inconsistency

I'm getting a rounding oddity in Delphi 2010, where some numbers are rounding down in roundto, but up in formatfloat.
I'm entirely aware of binary representation of decimal numbers sometimes giving misleading results, but in that case I would expect formatfloat and roundto to give the same result.
I've also seen advice that this is the sort of thing "Currency" should be used for, but as you can see below, Currency and Double give the same results.
program testrounding;
{$APPTYPE CONSOLE}
{$R *.res}

uses
  System.SysUtils, Math;

var
  d: Double;
  c: Currency;

begin
  d := 534.50;
  c := 534.50;
  writeln('Format:    ' + FormatFloat('0', d));
  writeln('Roundto:   ' + FormatFloat('0', RoundTo(d, 0)));
  writeln('C Format:  ' + FormatFloat('0', c));
  writeln('C Roundto: ' + FormatFloat('0', RoundTo(c, 0)));
  readln;
end.
The results are as follows:
Format: 535
Roundto: 534
C Format: 535
C Roundto: 534
I've looked at Why is the result of RoundTo(87.285, -2) => 87.28 and the suggested remedies do not seem to apply.
First of all, we can remove Currency from the question, because the two functions that you use don't have Currency overloads. The value is converted to an IEEE754 floating point value and then follows the same path as your Double code.
Let's look at RoundTo first of all. It is quick to check, using the debugger, or an additional Writeln that RoundTo(d,0) = 534. Why is that?
Well, the documentation for RoundTo says:
Rounds a floating-point value to a specified digit or power of ten using "Banker's rounding".
Indeed in the implementation of RoundTo we see that the rounding mode is temporarily switched to TRoundingMode.rmNearest before being restored to its original value. The rounding mode only applies when the value is exactly half way between two integers. Which is precisely the case we have here.
So Banker's rounding applies. Which means that when the value is exactly half way between two integers, the rounding algorithm chooses the adjacent even integer.
So it makes sense that RoundTo(534.5,0) = 534, and equally you can check that RoundTo(535.5,0) = 536.
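Python's built-in round() happens to use the same round-half-to-even policy, so the behaviour is easy to reproduce outside Delphi:

```python
# Banker's rounding: a half-way value goes to the even neighbour.
# Both 534.5 and 535.5 are exactly representable doubles, so the
# "exactly half way" case genuinely occurs here.
print(round(534.5))   # 534
print(round(535.5))   # 536
```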
Understanding FormatFloat is quite a different matter. Quite frankly its behaviour is somewhat opaque. It performs an ad hoc rounding in code that differs for different platforms. For instance it is assembler on 32 bit Windows, but Pascal on 64 bit Windows. The overall approach appears to be to take the mantissa of the floating point value, convert it to an integer, convert that to text digits, and then perform the rounding based on those text digits. No respect is paid to the current rounding mode when the rounding is performed, and the algorithm appears to implement the round half away from zero policy. However, even that is not implemented robustly for all possible floating point values. It works correctly for your value, but for values with more digits in the mantissa the algorithm breaks down.
In fact it is fairly well known that the Delphi RTL routines for converting between floating point values and text are fundamentally broken by design. There are no routines in the Delphi RTL that can correctly convert from text to float, or from float to text. In fact, I have recently implemented my own conversion routines, that do this correctly, based on existing open source code used by other language runtimes. One of these days I will get around to publishing this code for use by others.
I'm not sure what your exact needs are, but if you are wishing to exert some control over rounding, then you can do so if you take charge of the rounding. Whilst RoundTo always uses Banker's rounding, you can instead use Round which uses the current rounding mode. This will allow you to perform the round using the rounding algorithm of your choice (by calling SetRoundMode), and then you can convert the rounded value to text. That's the key. Keep the value in an arithmetic type, perform the rounding, and only convert to text at the very last moment, after the correct rounding has been applied.
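As a hedged illustration of that advice, here it is in Python, with decimal.quantize standing in for Delphi's SetRoundMode/Round pair: keep the value numeric, pick the rounding rule explicitly, and only convert to text at the end:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

d = Decimal("534.5")
# Round while the value is still an arithmetic type, then format last.
print(d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 534
print(d.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 535
```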
In this case, the value 534.5 is exactly representable in Double precision.
Looking into the source code reveals that the FormatFloat function rounds upwards if the last pending digit is 5 or more.
RoundTo uses Banker's rounding, and rounds to the nearest even number (534 in this case).

Delphi - comparison of two "Real" number variables

I have problem with comparison of two variables of "Real" type. One is a result of mathematical operation, stored in a dataset, second one is a value of an edit field in a form, converted by StrToFloat and stored to "Real" variable. The problem is this:
As you can see, the program is trying to tell me, that 121,97 is not equal to 121,97... I have read
this topic, and I am not completely sure that it is the same problem. If it were, wouldn't both numbers be stored in the variables as exactly the same closest representable number, which for 121.97 is 121.96999 99999 99998 86313 16227 83839 70260 62011 71875?
Now let's say that they are not stored as the same closest representable number. How do I find out exactly how they are stored? When I look in the "CPU" debugging window, I am completely lost. I see the addresses where those values should be, but nothing even similar to a binary, hexadecimal or other representation of the actual number... I admit that advanced debugging is an unknown universe to me...
Edit:
those two values really are slightly different.
OK, I don't need to understand everything. Although I am not dealing with money, there will be a maximum of 3 decimal places, so "Currency" is the way to go.
BTW: The calculation is:
DATA[i].Meta.UnUsedAmount := DATA[i].AMOUNT - ObjQuery.FieldByName('USED').AsFloat;
In this case it is 3695 - 3573.03
For reasons unknown, you cannot view a float value (single/double or real48) as hexadecimal in the watch list.
However, you can still view the hexadecimal representation by viewing it as a memory dump.
Here's how:
Add the variable to the watch list.
Right click on the watch -> Edit Watch...
View it as memory dump
Now you can compare the two values in the debugger.
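If the debugger route feels opaque, the same bytes can also be inspected programmatically; here is a hedged Python sketch using struct to view a double's exact representation (121.97 is the value from the question):

```python
import struct

x = 121.97
# The 8 bytes a memory dump of the double would show (big-endian here).
print(struct.pack('>d', x).hex())
# The stored value expanded in decimal: more digits than 121.97 suggests.
print(f"{x:.20f}")
```

Comparing the two hex dumps of two "equal" variables this way shows immediately whether they hold the same bit pattern.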
Never use floats for monetary amounts
You do know of course that you should not use floats to count money.
You'll get into all sorts of trouble with rounding, and comparisons will not work the way you want them to.
If you want to work with money use the currency type instead. It does not have these problems, supports 4 decimal places and can be compared using the = operator with no rounding issues.
In your database you use the money or currency datatype.
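A hedged Python analogue of that fixed-point approach, with Decimal standing in for Delphi's Currency (the numbers are from the question's calculation):

```python
from decimal import Decimal

# Exact decimal arithmetic: no binary representation error, so the
# computed value and the entered value compare equal.
a = Decimal("121.97")
b = Decimal("3695") - Decimal("3573.03")
print(a == b)   # True
```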

COBOL Compute Issues

I have a compute statement that uses fields like so:
WS-COMPUTE PIC 9(14).
WS-NUM-1 PIC 9(09).
WS-NUM-2 PIC 9(09).
WS-NUM-3 PIC S9(11) COMP-3.
WS-DENOM PIC 9(09).
And then there is logic to make a computation
COMPUTE WS-COMPUTE =
((WS-NUM-1 - WS-NUM-2 + WS-NUM-3)
/ WS-DENOM) * 100
The * 100 is in there because a number < 1 is expected from the division, but 0 is what was always stored in WS-COMPUTE.
We got a workaround by declaring another field that did have implied decimals, and then moving that to value to WS-COMPUTE, but I was lost on why the original would always populate WS-COMPUTE with 0?
The number of decimal places for the results of intermediate calculations is directly related to the number of decimal places in the final result field (you can consult the manual for the case where there are multiple result fields) when there are no decimal places in the individual operands. COBOL does not use a predetermined number of decimal places for intermediate results. If neither the operands in question nor the final result contain decimal places, the intermediate result will not contain decimal places.
The relationship is: number of decimal places in intermediate results = number of decimal places in final result field. The only thing which can modify this is the specification of ROUNDED. If ROUNDED is specified, one extra decimal place is kept for the intermediate result fields, and that will be used to perform the rounding of the final result.
You have no decimal places on your final result, and no ROUNDED. So the intermediate results will have no decimal places. If the division produces a value of less than one, it is gone before anything can happen to it: it is stored as zero, because there is no decimal part available to store it in.
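The truncation is easy to mimic with integer arithmetic; a hedged Python sketch with made-up magnitudes:

```python
# Stand-ins: num for (WS-NUM-1 - WS-NUM-2 + WS-NUM-3), den for WS-DENOM.
num = 700
den = 1000

# Divide first with zero decimal places: 0.7 truncates to 0 immediately,
# and multiplying afterwards cannot bring it back.
print((num // den) * 100)   # 0

# Multiply first: the significant digits survive until the division.
print((num * 100) // den)   # 70
```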
You need to understand COMPUTE before you use it; nowhere near enough people do. There is absolutely no need to specify excessive field lengths or decimal places where none are needed. These are common ways to "deal with" the problem, but they are unnecessary: the actual problem is a poorly formed COMPUTE.
If your COMPUTE contains multiplication, do that first. If it contains division, do that last. This may require re-arranging a formula, but this will give you the correct result. Subject to truncation, which comes in two parts, as Bruce Martin has indicated. There is the one you are getting, decimal truncation through not specifying enough (any) decimal places when you expect a decimal-only value for an intermediate result, and high-order truncation if your source fields are not big enough. Always remember that the result field controls the size (decimal and integer) of the intermediate results. If you do those things, your COMPUTEs will always work.
And consider whether you want the final result rounded. If so, use ROUNDED. If you want intermediate results to be rounded, you need to do that yourself with separate COMPUTEs or DIVIDEs or MULTIPLYs.
If you don't take these things into account, your COMPUTEs will work by accident, or sometimes, or not at all, or only when you specify excessive sizes or decimal places.
If you don't need any decimal places in the final result, use Bruce Martin's first COMPUTE:
COMPUTE WS-COMPUTE = ((WS-NUM-1 - WS-NUM-2 + WS-NUM-3) * 100) / WS-DENOM
If you do need decimal places, use Bruce Martin's first COMPUTE (yes, the same one) with the decimals defined on the final result (WS-COMPUTE).
If you need the result to be rounded (0-4 down, 5-9 up) use ROUNDED. If you need some other rounding, specify the final result with an extra decimal place beyond what you need, and do your own rounding to your specification.
If you look at the column to the right of your question, under Related, you'll find existing questions here which would/should have answered this one for you.
You do not need to add spurious digits or spurious decimal places to everything in sight. Ensure your final result is big enough, has enough decimal places, and pay attention to the order of things. Read your manual which should document intermediate results. If your manual does not cover this, the IBM Enterprise COBOL manuals are an excellent general reference, as well as specific ones. The Programming Guide devotes an entire Appendix to intermediate results.
It sounds like you are using the TRUNC(STD) option: the compiler takes the PICTURE clause to decide what precision to use for intermediate results. You can either add implied decimals to all your intermediate fields or try something like TRUNC(BIN) or TRUNC(OPT), though in this case I don't think they will help.
Truncates final intermediate results. OS/VS COBOL has the TRUNC and NOTRUNC options (NOTRUNC is the default). VS COBOL II, IBM COBOL, and Enterprise COBOL have the TRUNC(STD|OPT|BIN) option.
TRUNC(STD)
Truncates numeric fields according to PICTURE specification of the binary receiving field
TRUNC(OPT)
Truncates numeric fields in the most optimal way
TRUNC(BIN)
Truncates binary fields based on the storage they occupy
TRUNC(STD) is the default.
For a complete description, see the Enterprise COBOL Programming Guide.
The default for COBOL is normally to truncate! This includes intermediate results.
So the decimal places will be truncated in your calculation.
You could try:
COMPUTE WS-COMPUTE = ((WS-NUM-1 - WS-NUM-2 + WS-NUM-3) * 100) / WS-DENOM
This could result in losing top-order digits.
Alternatively you could
Use 2 computes
Add decimals to the input declaration
Use floating-point fields (COMP-1, COMP-2). As they are rarely used in COBOL, I do not advise it.
03 WS-Temp Pic 9(11)V9999 comp-3.
Compute WS-Temp = WS-NUM-1 - WS-NUM-2 + WS-NUM-3.
Compute WS-Temp = (WS-Temp / WS-DENOM) * 100.
Compute WS-COMPUTE = WS-Temp.
Change the field definition:
WS-COMPUTE PIC 9(14).
WS-NUM-1 PIC 9(09)V999.
WS-NUM-2 PIC 9(09)V999.
WS-NUM-3 PIC S9(11)V999 COMP-3.
WS-DENOM PIC 9(09).

Lua floating point operations

I run Lua on a CPU without dedicated floating point HW, depending on SW emulation.
From luaconf.h I can see that some macros are set to double, but it does not clearly state when floats are used, and it's a little hard to track.
If my script does simple stuff like:
a=0
a=a+1
for...
Would that involve a floating point operations at any level?
If not, that's fine, but what then is the benefit of changing the macros to long?
(I tried, of course, but it did not work....)
All numeric operations in Lua are performed (according to the default configuration) in floating point. There is no distinction made between floating point and integer, all values are simply numbers.
The actual C type used to store a Lua number is set in luaconf.h, and it is both allowed and even practical to change that to a suitable integral type. You start by changing LUA_NUMBER from double to int, long, or perhaps ptrdiff_t. Then you will find you need to tweak the related macros that control the conversions between strings and numbers. And, of course, you will likely need to eliminate most or all of the base math library since math.sin() and its friends and neighbors are not particularly useful over integers.
The result will be a Lua interpreter where all numbers are integers. The language will still allow you to type 3.14, but it will be stored as 3. Your code will likely not be completely portable to a Lua interpreter built with the standard configuration since a huge amount of Lua code casually assumes that floating point arithmetic is permitted, and remember that your compiled byte code will definitely not be compatible since byte code will store numbers as LUA_NUMBER.
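The semantic shift is essentially the difference between these two operators; a hedged Python analogy:

```python
# Stock Lua (double build) behaves like true division;
# an integer-only build behaves like floor division.
print(7 / 2)    # 3.5
print(7 // 2)   # 3
```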
There is LNUM patch (used, for example, by OpenWrt project which relies heavily on Lua for providing Web UI on hardware without FPU) that allows dual integer/floating point representation of numbers in Lua with conversions happening behind the scenes when required. With it most integer computations will be performed without resorting to FPU. Unfortunately, it's only applicable to Lua 5.1; 5.2 is not supported.