Mainframe COBOL COMPUTE TRUNC Query

I wonder why there is a difference in the results for these two cases.
Working Storage:
WS-SUM-LEN PIC S9(4) COMP.
WS-LEN-9000 PIC 9(5) VALUE 9000.
WS-TMP-LEN PIC 9(5).
WS-FIELD-A PIC X(2000).
Case 1) COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A
Result: WS-SUM-LEN = 1000
Case 2)
MOVE LENGTH OF WS-FIELD-A TO WS-TMP-LEN
COMPUTE WS-SUM-LEN = WS-LEN-9000 + WS-TMP-LEN
Result: WS-SUM-LEN = 11000
The compiler option is TRUNC(OPT). Why does no truncation occur for case 2?

Binary fields in IBM's Enterprise COBOL
Warnings
Compiler option TRUNC determines how code for binary fields is generated.
Do not just up and change from your site's default setting of option TRUNC. The different settings for TRUNC can give different results.
Changing from TRUNC(BIN) to TRUNC(STD) will give different results for any values beyond the decimal values represented by the PICture defining the field. For a signed field, the same applies to the negative range.
01 a-good-name BINARY PIC 99.
ADD 1 TO a-good-name
With TRUNC(STD) the result will be truncated once it passes 99. With TRUNC(BIN) the result will be truncated once it passes 65535 (if the field were signed, the truncation would be at 99 as before for TRUNC(STD) and at 32767 for TRUNC(BIN)).
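A minimal sketch of that difference, using the same field (how DISPLAY renders out-of-PICture values is a separate matter):

01 a-good-name BINARY PIC 99 VALUE 99.
*> TRUNC(STD): 99 + 1 is truncated to the two PICture digits, giving 00
*> TRUNC(BIN): the halfword simply holds 100, no decimal truncation
ADD 1 TO a-good-name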
Changing from TRUNC(BIN) to TRUNC(OPT), without program changes, is only possible if all, entirely all, usage of binary fields is limited to decimal-values represented by the PICture. Particular pieces of code may appear to "work", but it would be a massive coincidence if all use of binary fields gave the same result between the two compiler options, on your system.
It is similar for changing from TRUNC(STD) to TRUNC(OPT). Although the amount of coincidence needed for things to "work" would be smaller, this would encourage a false sense of security, leaving the potential for subtle differences to be missed for some time.
Changing from genuine use of TRUNC(OPT) to either TRUNC(STD) or TRUNC(BIN) is possible without effort. However, why would you want to?
However, if your use is not genuine (using TRUNC(OPT) with data that does not conform to PICture), then your original results are unreliable, and you will get differences if changing to TRUNC(STD) and likely get difference changing to TRUNC(BIN).
In short, changing the site-default for compiler option TRUNC is something to be considered very carefully, and must include provision for verification of results.
Sites do at times make such a change; the only ones I know of are from TRUNC(BIN) (mostly) or TRUNC(STD) to TRUNC(OPT), for performance reasons. These have been done as projects, not just by changing the option and blundering on from there.
Do not override the site-default for TRUNC within systems. If you have programs which are using the same binary data (from files, databases, inter-program communication, messages, or any other way) and they don't all treat the data in the same way, it is asking for trouble.
Some myths
Further explanation will be given later in the text.
There is a difference between TRUNC(BIN) and making all your binary fields COMP-5 (or COMPUTATIONAL-5).
There is no difference whatsoever. When TRUNC(BIN) is specified, the compiler simply treats all binary fields as COMP-5.
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
Although the term itself makes many experienced people think it will be faster ("it'll be like when I code it myself in Assembler") it is in fact slower, on the whole. The slowing-down increases as the field-size increases.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
It does. It truncates to field-size. Because it truncates to field-size, the intermediate fields must always be larger than the source fields, which means more instructions must be used, and different instructions.
Also, and important to know, the ON SIZE ERROR clause (which can be used with all arithmetic verbs) always only uses the PICture clause to determine that a size-error has occurred. That is, if you have a COMP-5 PIC S9(4), which can contain a maximum positive value of 32,767 and do this:
MULTIPLY that-field BY 10 GIVING that-field
ON SIZE ERROR
DISPLAY "Busted"
END-MULTIPLY
Any value above 9999 will cause the DISPLAY to be processed.
Which really means "don't use ON SIZE ERROR with COMP-5 or TRUNC(BIN)".
TRUNC(OPT) generates optimal code.
In isolation, it does. However, this does not preclude further optimisations from compiler option OPTIMIZE/OPT across a wider context.
When using binary fields, always use the maximum PICture for the size of the field
A binary field with 1-4 digits will occupy a half-word, two bytes of storage. With 5-9 digits, a word, or a fullword, of four bytes. With 10-18 digits, a double-word of eight bytes.
The aged recommendation is to always specify four digits, nine digits and 18 digits (well, no-one really goes above nine, do they...?).
This is advice I've received in the past, and given out myself. However, in Enterprise COBOL it is not good advice.
The best advice here is to define the number of digits needed. This will at times improve performance, will never degrade performance, and will make the program easier to understand by best describing the data.
When using binary fields, always make them signed.
More advice I've received and given in the past. Untrue with Enterprise COBOL. If a field can contain a negative value, make it signed. Otherwise make it unsigned.
At times, with interfaces, it is not explicit whether a field should be signed. However, it will be explicit from the maximum value expected. As will the field definition (the USAGE).
For instance, an SQL VARCHAR as a host-variable can have a maximum size of 32767 bytes. Since the actual length is held in a two-byte binary field, the field should be signed. Any value "above" 32767 would be misinterpreted by DB2/SQL.
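As an illustration (the field names are invented), a DB2 VARCHAR host-variable expands to a group with exactly that signed two-byte length:

01 HV-DESCRIPTION.
*> Signed halfword length in bytes, as DCLGEN generates it
   49 HV-DESCRIPTION-LEN PIC S9(4) COMP.
*> The character data itself
   49 HV-DESCRIPTION-TEXT PIC X(200).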
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
Since the compiler has to take care of decimal truncation, and since anything which could lead to truncation would require the "next size up", a binary field of nine digits can require a double-word intermediate field. That requires code to convert to a double-word, and to convert the result back from a double-word to a word. If nine digits are required, it will generally be better to define 10 digits and save on the conversions.
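A hedged sketch of the trade-off (data-names invented):

*> Nine digits fit a fullword, but intermediate results can force
*> conversion to a doubleword and back again
01 WS-NINE-DIGITS BINARY PIC 9(9).
*> Ten digits are a doubleword to begin with, so those conversions vanish
01 WS-TEN-DIGITS BINARY PIC 9(10).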
Note
The above is all known to hold true for Enterprise COBOL up to V4.2.
IBM has entirely rewritten the code generation and optimisation (now at two possible levels of optimisation) for Enterprise COBOL V5. There is considerable improvement in the treatment of binary fields, including, for instance, only doing the truncation of values once it is known that truncation is necessary. I am not aware that the use of V5 changes anything here other than the scale of performance differences. All general usage of binary fields should be faster with V5 than with earlier versions of Enterprise COBOL.
Binary fields
COBOL, for binary fields, uses decimal maxima determined by the PICture size.
Such a field with PIC 9 can contain a maximum value of 9 before truncation. If signed, the range of values is -9 to +9. Values outside that range will be truncated.
For PIC 99 the maximum is 99; if signed, -99 to +99.
For PIC 999 the maximum is 999; if signed, -999 to +999.
You get the pictu... idea.
It is down to the compiler-implementation as to how those values are stored.
Indeed, according to the Standard, COBOL only recently (1985) gained support for binary fields (USAGE BINARY). Which "non-display" fields were actually supported, and how, was down to USAGE COMPUTATIONAL, whose specifics were compiler-dependent.
Generally across compilers COMP, COMP-1 and COMP-2 (binary, with decimal maxima, short floating-point and long floating-point) are standard, though not part of the Standard. Beyond COMP-2, what the field definitions mean can vary amongst compilers.
So, first recommendation, suggest that your local site standards use BINARY instead of COMP for new code (and PACKED-DECIMAL instead of COMP-3, for packed-decimal fields). BINARY (and COMP-4) within Enterprise COBOL is simply an alias of COMP, so there is absolutely no problem in doing this.
There is another type of binary field, which is the native-binary field. In Enterprise COBOL this is USAGE COMP-5.
COMP-5 has its field-size determined by the PICture definition, but its maxima are that of the full bit-pattern possible for the field size. A PIC S9(4) COMP-5 can contain -32768 to 32767.
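Side by side, as illustrative definitions:

*> COBOL binary: decimal maxima taken from the PICture,
*> -9999 to +9999
01 WS-STD-HALFWORD PIC S9(4) BINARY.
*> Native binary: maxima taken from the bit-pattern of the halfword,
*> -32768 to +32767
01 WS-NATIVE-HALFWORD PIC S9(4) COMP-5.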
Note at this point that a native-binary field, and this may seem counter-intuitive, generally needs more generated machine-code to support its use. This is because it truncates to field-size, rather than PICture.
Note also that there is one place where this does not happen, which is ON SIZE ERROR, which will be true if the value exceeds the PICture size. Which means, to my mind, don't use ON SIZE ERROR with COMP-5 (or TRUNC(BIN), covered below) fields.
Compiler option TRUNC
The compiler option TRUNC defines how machine-code is generated for binary fields. There are three options:
TRUNC(BIN)
Truncation to field-size.
This treats all the non-native-binary fields in the program (COMP/COMP-4/BINARY) as native-binary (as though they had been defined as COMP-5).
This allows the full range of bit patterns to be used, but has impacts on performance.
TRUNC(STD)
Truncation to PICture size.
Generates machine-code for the COBOL Standard truncation to PICture size. PIC 9(n) can contain no more than n significant digits; excess digits are truncated whenever the field is a "target" (its value changes).
TRUNC(OPT)
Truncation of either type is performed only if it happens to be convenient.
I describe this as being a contract between the coder and the compiler. The coder contracts to never (as in never) allow a value to exceed the PICture size. The compiler contracts to always get it right in such a case.
If the coder breaks the contract the coder is entirely to blame for the ensuing rubbish.
When to use each setting of TRUNC (further recommendation)
BIN Never. Use COMP-5 for individual fields where they require access to all bits (pay attention to SQL and CICS "system" fields, external data from non-Mainframe sources, and inter-language communication between COBOL and Java/C/C++, and anywhere else where the data-maxima for a field are beyond the PICture and it is not possible to make the field bigger, as the actual logical definition of the field size is outside your program).
STD use this unless all, as in all, your data always, as in always, conforms to PICture.
OPT use this only, as in only, if all, as in all, your data always, as in always, conforms to PICture.
If you have COMP PIC 99, for instance, you must not, when using OPT, allow that to have a value of 99 and then add one to it. Or anything similar.
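A minimal sketch of breaking that contract (illustrative only; with TRUNC(OPT) the result is unpredictable by design):

01 WS-SMALL BINARY PIC 99 VALUE 99.
*> The coder has promised WS-SMALL will never exceed 99; this ADD
*> breaks the promise, so the compiler may produce any result at all
ADD 1 TO WS-SMALL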
The Answer
You used TRUNC(OPT), entering into the contract. You immediately broke the contract. It is your fault.
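One hedged way to honour the contract in the original question is to give the result field enough digits for the largest possible sum:

*> 9000 + 2000 = 11000 needs five digits, so define them
WS-SUM-LEN PIC S9(5) COMP.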
Warning
If your site is using TRUNC(OPT) and not everyone is fully aware of the implications, you will, as in will, have problems.
Substantiation of the Myths from above
There is a difference between TRUNC(BIN) and making all your binary fields COMP-5 (or COMPUTATIONAL-5).
Define two fields in a small program. They should be defined as COMP/COMP-4/BINARY (or COMPUTATIONAL/COMPUTATIONAL-4 if that is your bent).
In the program, add a literal to each of the fields (do this with two separate statements, to make it easier to follow, unless you are experienced with the generated code in a listing).
Compile the program with compiler options LIST,NOOFFSET (this will produce, in the compiler listing, output showing the generated machine-code in a so-called "pseudo-assembler" format) and TRUNC(BIN).
Copy the program. In the copy, change the USAGE of the two fields to COMP-5 (or COMPUTATIONAL-5).
Compile this program, again with LIST,NOOFFSET but this time the value for TRUNC is irrelevant as it does not affect COMP-5 fields.
Compare the output listings. If there is one byte difference, eat someone's hat.
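A hedged sketch of the test pair (data-names invented):

*> Program 1: plain binary fields, compiled with TRUNC(BIN),LIST,NOOFFSET
01 WS-TEST-1 BINARY PIC S9(4).
01 WS-TEST-2 BINARY PIC S9(8).
ADD 1 TO WS-TEST-1
ADD 2 TO WS-TEST-2
*> Program 2: identical, but with BINARY changed to COMP-5; the TRUNC
*> setting is then irrelevant, and the two listings should match exactly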
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
From this discussion at IBM's COBOL Cafe: https://www.ibm.com/developerworks/community/forums/html/topic?id=ae9ef6bc-6e4e-43f8-a814-e66bea25fb8c&ps=25
Here's a multiply of a PIC 9(3) by a PIC 9(5).
With TRUNC(STD)
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
With TRUNC(BIN)
000019 MULTIPLY
00023C 4820 8030 LH 2,48(0,8) PICS9-4
000240 5840 8038 L 4,56(0,8) PICS9-8
000244 8E40 0020 SRDA 4,32(0)
000248 5D40 C000 D 4,0(0,12) SYSLIT AT +0
00024C 4E50 D120 CVD 5,288(0,13) TS2=16
000250 F154 D110 D123 MVO 272(6,13),291(5,13) TS2=0
000256 4E40 D120 CVD 4,288(0,13) TS2=16
00025A 9110 D115 TM 277(13),X'10' TS2=5
00025E D204 D115 D123 MVC 277(5,13),291(13) TS2=5
000264 4780 B05C BC 8,92(0,11) GN=10(00026C)
000268 9601 D119 OI 281(13),X'01' TS2=9
00026C GN=10 EQU *
00026C 4E20 D120 CVD 2,288(0,13) TS2=16
000270 FC82 D111 D125 MP 273(9,13),293(3,13) TS2=1
000276 D202 D128 C008 MVC 296(3,13),8(12) TS2=24
00027C D204 D12B D115 MVC 299(5,13),277(13) TS2=27
000282 4F20 D128 CVB 2,296(0,13) TS2=24
000286 F144 D12B D110 MVO 299(5,13),272(5,13) TS2=27
00028C 4F50 D128 CVB 5,296(0,13) TS2=24
000290 5C40 C000 M 4,0(0,12) SYSLIT AT +0
000294 1E52 ALR 5,2
000296 47C0 B08E BC 12,142(0,11) GN=11(00029E)
00029A 5A40 C004 A 4,4(0,12) SYSLIT AT +4
00029E GN=11 EQU *
00029E 1222 LTR 2,2
0002A0 47B0 B098 BC 11,152(0,11) GN=12(0002A8)
0002A4 5B40 C004 S 4,4(0,12) SYSLIT AT +4
0002A8 GN=12 EQU *
0002A8 5050 8040 ST 5,64(0,8)
It doesn't take any knowledge of IBM Assembler to work out which of those two pieces of code is going to run more quickly.
The difference in the line-numbers (19 Vs 23) is just down to the fact that TRUNC(BIN) makes the PICture size irrelevant, so where I had three calculations doing the same thing with different size fields, for TRUNC(BIN) the code for each was the same, because the size of each field is the same, a word/fullword of four bytes.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
See the code immediately above. It is so massive due to the need to provide truncation. The need to provide decimal truncation is down to the COBOL Standard, it's what must happen in the language.
TRUNC(OPT) generates optimal code.
The code generated will always be the most efficient for that code-sequence. The same code-sequence will always generate the same code, before optimisation.
However, the optimizer is capable of spotting that a particular undisturbed state is available for a source-field earlier in the program, and replace part or all of the TRUNC(OPT) code with code relying on the previously-available value.
When using binary fields, always use the maximum PICture for the size of the field
From the same IBM COBOL Cafe discussion referenced above, with these definitions:
01 PIC9-3 BINARY PIC 999.
01 PIC9-5 BINARY PIC 9(5).
01 THE-RESULT8 BINARY PIC 9(8).
01 PIC9-4 BINARY PIC 9(4).
01 PIC9-8 BINARY PIC 9(8).
01 THE-RESULT BINARY PIC 9(8).
And these calculations:
MULTIPLY PIC9-4 BY PIC9-8
GIVING THE-RESULT
MULTIPLY PIC9-3 BY PIC9-5
GIVING THE-RESULT8
Here's the generated code for TRUNC(STD):
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8) THE-RESULT
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
The first block of pseudo-assembler is with the number of digits in the PICture being the maximum that give the same field-size. A BINARY PIC 9(3) occupies a half-word, and 9(4) is the largest that can appear in a half-word. A PIC 9(5) occupies a word/fullword, and, given Myth 7, eight digits is used for that (to be fair to this particular Myth).
The second block is with the number of digits which represent the data accurately, and which don't happen to require truncation when a multiplication is carried out.
Using the "full-size" PICtures guarantees that unnecessary truncation will always occur.
The difference in the number of instructions is small, and LH is faster than L, so plus to the full-size on that. But M is much slower than L, and MH is slower than L but faster than M. So plus to the optimal size on that. And the D (a divide, which is slow, slow) is not required at all in the second block (because no truncation is required). So bad-boy to the full-size fields on that.
The code for TRUNC(OPT) is also faster for the optimal-size fields, although the difference between the two is not as great (because TRUNC(OPT) in this code-sequence decides it does not need the truncation to base-10 and would not in a million years consider the truncation to field-size).
When using binary fields, always make them signed.
Again from the same IBM COBOL Cafe discussion, here's same-length signed fields Vs unsigned fields, TRUNC(STD):
000019 MULTIPLY
000238 4830 8030 LH 3,48(0,8) PICS9-4
00023C 5C20 8038 M 2,56(0,8) PICS9-8
000240 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000244 5020 8040 ST 2,64(0,8) THE-RESULTS
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8)
The code generated differs from the above under TRUNC(OPT) and under TRUNC(BIN), but within each of those options the signed and unsigned code-sequences are identical to each other.
The presence or absence of a sign makes no difference to the code generated.
Except in one case, where Myth 7 comes into play. With a nine-digit binary field, a signed definition does generate less code than an unsigned one, but even that is more code than if eight digits were used.
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
From the IBM Enterprise COBOL Version 4 Release 2 Performance Tuning paper, pp32-33:
The following shows the general performance considerations (from most efficient to least efficient) for the number of digits of precision for signed binary data items (using PICTURE S9(n) COMP) using TRUNC(OPT):
n is from 1 to 8: for n from 1 to 4, arithmetic is done in halfword instructions where possible; for n from 5 to 8, arithmetic is done in fullword instructions where possible
n is from 10 to 17: arithmetic is done in doubleword format
n is 9: fullword values are converted to doubleword format and then doubleword arithmetic is used (this is SLOWER than any of the above)
n is 18: doubleword values are converted to a higher precision format and then arithmetic is done using this higher precision (this is the SLOWEST of all for binary data items)
There is a similar issue with TRUNC(STD). TRUNC(BIN) already has the built-in slowness for the number of digits 1-9, so is not further affected.

From the publicly available documentation:
TRUNC(OPT) is a performance option. When TRUNC(OPT) is in effect, the compiler assumes that data conforms to PICTURE specifications in USAGE BINARY receiving fields in MOVE statements and arithmetic expressions. The results are manipulated in the most optimal way, either truncating to the number of digits in the PICTURE clause, or to the size of the binary field in storage (halfword, fullword, or doubleword).
Tip: Use the TRUNC(OPT) option only if you are sure that the data being moved into the binary areas will not have a value with larger precision than that defined by the PICTURE clause for the binary item. Otherwise, unpredictable results could occur. This truncation is performed in the most efficient manner possible; therefore, the results are dependent on the particular code sequence generated. It is not possible to predict the truncation without seeing the code sequence generated for a particular statement.
Read the "Tip" very carefully and see what it means for your situation. (Hint : it means it does not make sense to ask the question you did because it literally says that "whatever happens, it was unpredictable" or iow "there is no explanation for what happens").
To make the compiler behaviour predictable, switch to either TRUNC(BIN) or TRUNC(STD). STD is good for standards compliance but bad for CPU usage; BIN is good for CPU usage but requires you to be a bit careful (because decimal truncation simply will not happen). With TRUNC(STD), for example, both of the question's cases would truncate 11000 to the four digits of WS-SUM-LEN, giving 1000 consistently.

Related

How to efficiently encode readings below or above range of measurement

Using numeric variables, what is the best practice to encode results of measurements that are below or above the range provided by the instrumentation (e.g. TSH < 0.001)? In the specific case this is needed for a medical project, but the problem is expected to apply to any kind of measurement. In my own research I couldn’t find a satisfactory solution up to now.
Generally, this class of problems is addressed in medical data formats, e.g. HL7, but there, numeric values are basically represented as strings. Is there an efficient way to do this with numeric data types (apart from a separate flag variable indicating if the result is within, below or above the cut-off value of the range of measurement)?
This should preferably be a cross-platform solution independent of the used processor architecture and being compatible with Pascal or Object Pascal, but elegant solutions in other programming languages are welcome, too.
Double values, in their IEEE definition, already have some "special values".
0 11111111111 0000000000000000000000000000000000000000000000000000 (binary) ≙ 7FF0 0000 0000 0000 (hex) ≙ +∞ (positive infinity)
1 11111111111 0000000000000000000000000000000000000000000000000000 (binary) ≙ FFF0 0000 0000 0000 (hex) ≙ −∞ (negative infinity)
You may reuse these "flags" for below/above range values.
Every language can recognize those values, e.g. Delphi/FPC Math.pas unit defines NegInfinity and Infinity if I remember correctly:
Infinity = 1.0 / 0.0;
NegInfinity = -1.0 / 0.0;
One side advantage is that they will be converted to text properly as non-numbers (+INF/-INF), which may help when debugging and tracing those values.
Of course, you should detect these values and avoid computing with them (e.g. in a mean, R² or curve fitting), since they would break your calculation with the correct values. But the result will probably be so obviously wrong (infinity will preempt other values in most mathematical operations) that it should not be too difficult to track the problem down.
Check this article as reference.

Is information stored in registers/memory structured as binary?

Looking at this question on Quora HERE ("Are data stored in registers and memory in hex or binary?"), I think the top answer is saying that data persistence is achieved through physical properties of hardware and is not directly relatable to either binary or hex.
I've always thought of computers as 'binary', but have just realized that that only applies to the usage of components (magnetic up/down or an on/off transistor) and not necessarily the organisation of, for example, memory contents.
i.e. you could, theoretically, create an abstraction in memory that used 'binary components' but that wasn't binary, like this:
100000110001010001100
100001001001010010010
111101111101010100001
100101000001010010010
100100111001010101100
And then recognize that as the (badly-drawn) image of 'hello', rather than the ASCII encoding of 'hello'.
An answer on SO (What's the difference between a word and byte?) mentions that processors can handle 'words', i.e. several bytes at a time, so while information representation has to be binary I don't see why information processing has to be.
Can computers do arithmetic on hex directly? In this case, would the internal representation of information in memory/registers be in binary or hex?
Perhaps "digital computer" would be a good starting term and then from there "binary digit" ("bit"). Electronically, the terms for the values are sometimes "high" and "low". You are right, everything after that depends on the operation. Most of the time, groups of bits are operated on together. Commonly groups are 1, 8, 16, 32 and 64 bits. The meaning of the bits depends on the program but some operations go hand-in-hand with some level of meaning.
When the meaning of a group of bits is not known or important, humans like to be able to discern the value of each bit. Binary could be used, but more than 8 bits is hard to read. Although it is rare to operate on groups of 4 bits, hexadecimal is much more readable and is generally used regardless of the number of bits; for example, binary 10100110 is A6 in hex. Sometimes octal is used, but that is based on contexts where there is some meaning to a subgrouping of 3 bits or an avoidance of digits beyond 9.
Integers can be stored in two's complement format, and CPUs often have instructions for such integers. One such operation is negation. For a group of 8 bits, it maps 1 to -1, ..., 127 to -127, and -1 to 1, ..., -127 to 127, with 0 mapping to 0 and -128 to -128. Decimal is likely the most valuable base to humans here, not base 256, base 2 or base 16. In unsigned hexadecimal, that mapping would be 01 to FF, ..., 00 to 00, 80 to 80.
For an intro to how a CPU might do integer addition on a group of bits, see adder circuits.
Other number formats include IEEE-754 floating point and binary-coded decimal.
I think you understand that digital circuits are binary. So, based on the above, yes, operations do operate on a higher conceptual level despite the actual storage.

What does this DIVIDE statement mean?

DIVIDE WS-ENT-CNYR-RED BY 4 GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER ON SIZE ERROR.
What does it mean?
DIVIDE is a COBOL verb that allows you to do division, like in maths.
This, and, other maths verbs, are covered in your manual and course notes.
The actual DIVIDE you show is syntactically incorrect: you should have an "imperative statement" after the ON SIZE ERROR phrase. No reasonable COBOL compiler will allow that statement to compile.
What is the DIVIDE doing? It is likely the start of a check for a leap-year. If a year is divisible by four, it is a leap-year candidate (it must also not be divisible by 100 unless it is divisible by 400).
The result of the division is placed in the data-name following the GIVING, and what is "left over" from the division is placed in the data-name following the REMAINDER.
Usually when using REMAINDER it will be division with integers, which makes sense for being a year. The year 2015 divided by four gives 503 with a remainder of three. Not a leap year.
The ON SIZE ERROR in this case should be superfluous. It is division by a literal (4), and unless the result fields are too small to contain the result, there can never be a SIZE ERROR.
Data-definitions should be:
ll WS-ENT-CNYR-RED PIC 9(4).
ll WS-DT-CNYR PIC 9(3).
ll WS-YR-REMAINDER PIC 9.
Unless there are very large values for the year, in which case WS-DT-CNYR would need to be 9(4). ll is a level-number; it will be in the range 01-49 (or 1-49), or a 77.
An 88-level condition name should appear on WS-YR-REMAINDER, something like:
88 could-be-leap-year VALUE ZERO.
GIVING is very common to see in COBOL. If GIVING is not used, then the result is stored in one of the fields mentioned in the statement (you should check which for DIVIDE, MULTIPLY, ADD and SUBTRACT).
REMAINDER you will only see when the "modulus" of a division is required.
There will be no rounding of a result unless the ROUNDED phrase is specified, and rounding with REMAINDER does not make much sense.
In this example, only WS-ENT-CNYR-RED must be a numeric item. WS-DT-CNYR and WS-YR-REMAINDER can both be numeric-edited items. The item on a GIVING will quite often be numeric-edited when formatting report lines. In this typical code for the start of a leap-year check, it is likely that all will be numeric, and all will be integers.
Depending on how much the three items are used, and how they are used, they may be defined as PACKED-DECIMAL (or whichever COMPUTATIONAL-? item is packed-decimal for that compiler) or even binary.
It is not necessary that this is the start of a leap-year check. There can be other reasons for dividing by four and needing to know the remainder.
Note that DIVIDE ... INTO ... is also valid. Indeed, there are five distinct formats of the DIVIDE statement documented in the 1985 COBOL Standard (and earlier ones) which you should see reflected in your manual.
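Putting those pieces together, a hedged sketch of a complete leap-year test (data-names invented):

01 WS-YEAR     PIC 9(4).
01 WS-QUOT     PIC 9(4).
01 WS-REM-4    PIC 9.
   88 DIVISIBLE-BY-4   VALUE ZERO.
01 WS-REM-100  PIC 99.
   88 DIVISIBLE-BY-100 VALUE ZERO.
01 WS-REM-400  PIC 9(3).
   88 DIVISIBLE-BY-400 VALUE ZERO.

*> A leap year is divisible by 4, and not by 100 unless also by 400
DIVIDE WS-YEAR BY 4   GIVING WS-QUOT REMAINDER WS-REM-4
DIVIDE WS-YEAR BY 100 GIVING WS-QUOT REMAINDER WS-REM-100
DIVIDE WS-YEAR BY 400 GIVING WS-QUOT REMAINDER WS-REM-400
IF DIVISIBLE-BY-4 AND (NOT DIVISIBLE-BY-100 OR DIVISIBLE-BY-400)
    DISPLAY WS-YEAR " is a leap year"
END-IF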
ON SIZE ERROR tells the compiler to generate code when a "size error" occurs. A "size error" is when a result does not fit in a field provided for it.
ON SIZE ERROR
imperative-statement.
or
ON SIZE ERROR
imperative-statement
END-... (a scope-delimiter, consisting of the END- prefix and the verb used, in this case END-DIVIDE).
The imperative-statement can be multiple statements, but is usually one (setting the result field to a default value, often zero). Because it can be multiple statements, it is very important to terminate the statement, otherwise you'll make unintended code part of the imperative-statement.
Many people think that ON SIZE ERROR is only actioned for a "divide by zero", but this is not the case. If a result does not fit in a field due to the size of the field, a "size error" has occurred.
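For instance (a hedged sketch), a size error with no zero divisor anywhere in sight:

01 WS-BIG  PIC 9(4) VALUE 5000.
01 WS-TINY PIC 9.
*> 5000 / 2 = 2500, which cannot fit in PIC 9, so the size error fires
DIVIDE WS-BIG BY 2 GIVING WS-TINY
    ON SIZE ERROR
        DISPLAY "Busted"
END-DIVIDE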
I don't use ON SIZE ERROR. I ensure non-zero divisors, and that all result fields are large enough to contain the expected results.
Because I don't use ON SIZE ERROR, I don't know whether the REMAINDER can also cause a size error. I'll check :-)
OK, I've checked. This is with IBM's Enterprise COBOL, which, apart from Extensions, is to the 1985 Standard. If the REMAINDER field is too small to hold the remainder, the ON SIZE ERROR will be actioned. So be very careful about the size of the remainder field, as there is no way of knowing which field caused the size-error.
It is documented like so:
SIZE ERROR phrases: For formats 1, 2, and 3, see "SIZE ERROR phrases" on page 296. For formats 4 and 5, if a size error occurs in the quotient, no remainder calculation is meaningful. Therefore, the contents of the quotient field (identifier-3) and the remainder field (identifier-4) are unchanged. If size error occurs in the remainder, the contents of the remainder field (identifier-4) are unchanged. In either of these cases, you must analyze the results to determine which situation has actually occurred.
Formats 4 and 5 are with REMAINDER.
If you don't specify ON SIZE ERROR then the behaviour will be down to the individual compiler, and run-time options. Enterprise COBOL will truncate the fields, but only after going to the run-time (Language Environment) to check whether you wanted something else to happen. Which will consume a lot of time relative to specifying ON SIZE ERROR.
So, ensure your fields are the correct size. If you don't want to do this, use ON SIZE ERROR. If using ON SIZE ERROR with REMAINDER, you have to determine yourself what caused the SIZE ERROR before doing anything.
ON SIZE ERROR has a counterpart, NOT ON SIZE ERROR. Its use is similar to ON SIZE ERROR, except with the obvious difference. ON SIZE ERROR and NOT ON SIZE ERROR can both be used at the same time:
DIVIDE WS-ENT-CNYR-RED BY 4
GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER
ON SIZE ERROR
imperative-statement-1
NOT ON SIZE ERROR
imperative-statement-2
END-DIVIDE (or .)

COBOL COMPUTE calculation

I am executing a standalone Enterprise COBOL program with the below calculation. I have one COMPUTE with multiple operations and another version with the full calculation split up. But the results differ (in the last 4 digits) between the two.
I have manually calculated these using a calculator, and the result matches the one with the split-up COMPUTE statements. I have tried the calculations using the entire answer at intermediate results and using only 15 digits of the final answer, and also using only 15 digits at all intermediate steps (without rounding). But none of these results match the combined COMPUTE result.
Could someone help me understand why there is such a difference?
05 WS-A PIC S9(3)V9(15) COMP.
05 WS-B PIC S9(3)V9(15) COMP.
05 WS-C PIC S9(3)V9(15) COMP.
05 WS-D PIC S9(3)V9(15) COMP.
05 WS-E PIC S9(3)V9(15) COMP.
05 WS-RES PIC S9(3)V9(15) COMP.
05 RES-DISP PIC -9(2).9(16).
MOVE 3.56784 TO WS-A.
MOVE 1.3243284234 TO WS-B.
MOVE .231433897121334834 TO WS-C.
MOVE 9.3243243213 TO WS-D.
MOVE 7.0 TO WS-E.
COMPUTE WS-RES = WS-A / WS-B.
MOVE WS-RES TO RES-DISP.
COMPUTE WS-RES = WS-RES / WS-C.
MOVE WS-RES TO RES-DISP.
COMPUTE WS-RES = WS-RES / (WS-D + WS-E).
MOVE WS-RES TO RES-DISP.
COMPUTE WS-RES = WS-RES * (WS-C - WS-A)
MOVE WS-RES TO RES-DISP.
COMPUTE WS-RES = WS-RES + WS-E ** WS-B
MOVE WS-RES TO RES-DISP.
COMPUTE WS-RES = WS-RES + WS-D.
MOVE WS-RES TO RES-DISP.
Result of last compute = 20.1030727225138740
COMPUTE WS-RES = WS-A / WS-B / WS-C /
(WS-D + WS-E) * (WS-C - WS-A) +
WS-E ** WS-B + WS-D.
MOVE WS-RES TO RES-DISP.
Result of combined compute = 20.1030727225138680
Slow, I know, but I think I've just got to why you've defined everything with 15 decimal places. You couldn't get it to work otherwise.
Read the question in the link lower down (and the answer, of course). You do not need to specify all fields with the precision you require for the output.
Re-arrange your COMPUTE. Exponentiation outside the main COMPUTE. Multiply first. Then divide. Any additions/subtractions fit in naturally. Use parentheses to specify exactly how you want a human to read the COMPUTE (the compiler doesn't care, it'll do what it is told, but at times people don't know what they are telling it).
If you do this (correctly) you will get the same answer as in your COMPUTE with all fields having 15 decimal places.
If you don't do this, your COMPUTE (and others when you copy it) will always be fragile and prone to error when changed.
It was a good idea to break out the COMPUTE into smaller ones like you did so that you could see which values to put into your calculator. You can do the same thing when you make the fields their correct sizes.
I'm going to have to totally re-write this, as the multiple updates are making it messy... at some point.
OK, confirmed. The difference is due to the calculation of a non-integer exponentiation within the COMPUTE which, as the manual says, then converts everything in the COMPUTE (all the intermediate fields) into floating-point numbers, which have a higher number of decimal places than the 15 specified in the PICture clauses.
There is now a diagnostic message (having taken the exponentiation out) due to the multiplication, which would like to have 36 digits, but can only have 30 (ARITH(COMPAT)) or 31 (ARITH(EXTEND)). If high-order data is truncated through this, there will be a run-time message.
Note. With ARITH(COMPAT) 15 is the largest number of significant digits where precision will not be lost (64-bit floating-point). ARITH(EXTEND) guarantees precision, but there is an overhead in processing (128-bit floating-point).
Back to earlier...
Been thinking more about this. You are using 18 digits, and you haven't mentioned using ARITH(EXTEND) as a compile option and haven't mentioned any diagnostic messages being produced for the large COMPUTE. Which is interesting.
Haven't done much exponentiation in COBOL, and then only with whole numbers. So I looked at the manual. Because of the fractional exponentiation everything in your large COMPUTE is being done in floating-point. That doesn't matter as such, but it means things are being done with greater precision than is expected from the 15 decimals in your definition. In your small COMPUTEs, this is not happening.
I'd suggest taking the exponentiation out of the big COMPUTE, calculating that separately and simply putting that result in the big COMPUTE (a simple addition replacing the exponentiation). I suspect at that stage the compiler will start to moan about the number of significant digits in results. If it does, then you will get a run-time message if you actually lose a significant digit.
You should:
Take the exponentiation out of the big COMPUTE and replace it with the result of a separate COMPUTE of the exponentiation
Define each field to its maximum size required by the data (not the maximum possible for everything)
(Probably) change to COMP-3 from COMP, but test it yourself
Parenthesise everything so that the human reader knows the order the compiler will do things in
If you still have warnings about possible truncation in the COMPUTE, look at ARITH(EXTEND) compiler option, but don't just put it in as a fix, only use it if you need it, and document its use for that program
I will try to confirm this later, but I think that will sort it out.
The following was the start, and still applies in general, although not directly relevant for the specific question (the problem being the higher floating-point precision forced onto everything vs only forced for the exponentiation):
Your problem with the small COMPUTEs is that you are not executing them in the same order as the elements of the large COMPUTE.
The ( and ) are not there for fun, or just to group things together, they establish precedence in the calculations.
What else establishes precedence? The operator used. What is the order of precedence? Well, you have to look that up in the manual, memorise it, or familiarise yourself with it each time if you forget. Mmmmm.... not a good suggestion.
Plus, other people will be working on the programs that you write or change. And they may "know" how a COMPUTE works (by which I mean they don't, but think they do, so won't look it up). Doubly-not-good suggestion.
So....
Use ( and ) and define the order in which you want things done.
Also be aware of where you may lose significance. Have a look at this one, AS/400: Using COMPUTE function, inconsistent results with different field definition, read up and understand the referenced parts of the Enterprise COBOL manuals.
As a summary of the linked-to question on this site, multiply first and divide last, to ensure that intermediate results do not lose significant digits. Unless you deliberately want to lose digits, in which case do those COMPUTEs to lose significance individually, and comment the code, so that no-one "fixes" it.
Also it is unusual on the Mainframe to use COMP/COMP-4/BINARY/COMP-5 for fields with decimal places. Once you are happy with your COMPUTEs, copy the program and change the field definitions to COMP-3/PACKED-DECIMAL. Put a loop on a counter in each program, and see if you notice any significant difference in CPU usage.
I copied, compiled and ran your program with a couple of minor changes:
Declared RES-DISP as PIC -9(3).9(15) to avoid compiler truncation warnings
Added new variable WS-EXP PIC S9(3)V9(15) COMP
Added a new computation, doing as Bill Woodger suggested by breaking the exponentiation into a separate calculation, as follows:
COMPUTE WS-EXP = WS-E ** WS-B
COMPUTE WS-RES = WS-A / WS-B / WS-C /
(WS-D + WS-E) * (WS-C - WS-A) +
WS-EXP + WS-D.
The only compiler warning issued in this program was for the second COMPUTE statement above. The message was:
IGYPG3113-W Truncation of high-order digit positions may occur due to intermediate results exceeding 30 digits
The result of the COMPUTE above is exactly the same as your fragmented calculation: 020.103072722513874.
Your all-in-one COMPUTE statement did not cause a compiler warning. But the internal exponentiation caused higher-precision intermediate results to be used throughout the calculation (fewer roundings), yielding a slightly different result: 020.103072722513868.
Another interesting observation here: using ARITH(COMPAT) I get exactly the same results as presented in the question; using ARITH(EXTEND) I get 020.103072722513877 for the fragmentary calculation and 020.103072722513874 for the all-in-one COMPUTE (which is the same as the fragmentary calculation when compiling with ARITH(COMPAT)).
All goes to show that you need to really study numeric precision, rounding and truncation rules when doing complex computations. This is especially true of COBOL because of the number of different numeric data types available to the programmer.

Why is the smallest value that can be stored a Byte (8-bit) & not a Bit (1-bit)?

Why is the smallest value that can be stored a Byte(8bit) & not a Bit(1bit) in memory?
Even booleans are stored as bytes. Will we ever bump the smallest unit to 32 or 64 bits, like registers on the CPU?
EDIT: To clarify, as many answers seemed confused about the nature of the question: this question is about why a byte isn't 7-bit, 1-bit, 32-bit, etc. (not why lower-bit primitives must fit within the hardware's byte at minimum). Is the 8-bit byte simply historical, given that some hardware has had 10-bit bytes, for example? Or is there a mathematical reason 8 bits is ideal vs, say, 10 bits for general processing?
The hardware is built to read data in blocks (bytes, later words and dwords). This provides greater efficiency than accessing individual bits, and also offers a greater addressing range. So most data is aligned to at least a byte boundary. There exist encodings that operate on bit sequences rather than bytes, but they are quite rare.
Nowadays data is most often aligned to a dword (32-bit) boundary anyway. Moreover, some hardware (ARM, for example) can't access misaligned multibyte variables, i.e. a 16-bit word can't "cross" a dword boundary - an exception will be thrown.
Computers address memory at the byte level, so anything smaller than a byte is not addressable.
The underlying methods of processor access are limited to the size of the smallest usable register. On most architectures, that size is 8 bits. You can use smaller portions of these; for instance, C has the bitfield feature in structs that will allow combining fields that only need to be certain bit lengths. Access will still require that the whole byte be read.
Some older exotic architectures actually did have a different "word size". In these machines, 10 bits might be the common size.
Lastly, processors are almost always backwards compatible. Intel, for instance, has maintained complete instruction compatibility from the 386 on up. If you take a program compiled for the 386, it will still run on an i7 processor. Changing the word size would break compatibility. So while it is possible, no manufacturer will ever do it.
Assume that we have a native language that consists of 2 characters, such as a and b.
To distinguish the two characters we need at least 1 bit, for example 0 to represent a and 1 to represent b.
So if we count the letters, special characters and symbols, there are 128 characters, and to distinguish one character from another you need log2(128) = 7 bits, with the 8th bit used for transmission (e.g. as a parity bit).
