I wonder why there is a difference in the results for these two cases.
Working Storage:
WS-SUM-LEN PIC S9(4) COMP.
WS-LEN-9000 PIC 9(5) VALUE 9000.
WS-TMP-LEN PIC 9(5).
WS-FIELD-A PIC X(2000).
Case 1) COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A
Result: WS-SUM-LEN = 1000
Case 2)
MOVE LENGTH OF WS-FIELD-A TO WS-TMP-LEN
COMPUTE WS-SUM-LEN = WS-LEN-9000 + WS-TMP-LEN
Result: WS-SUM-LEN = 11000
The compiler option is TRUNC(OPT). Why does no truncation occur in case 2?
Binary fields in IBM's Enterprise COBOL
Warnings
Compiler option TRUNC determines how code for binary fields is generated.
Do not just up and change from your site's default setting of option TRUNC. The different settings for TRUNC can give different results.
Changing from TRUNC(BIN) to TRUNC(STD) will give different results for any values beyond the decimal-values represented by the PICture defining the field. For a signed field, the same applies for the negative value.
01 a-good-name BINARY PIC 99.
ADD 1 TO a-good-name
With TRUNC(STD) the result will be truncated once 99 is reached. With TRUNC(BIN) the result will be truncated once 65535 is reached (if the field were signed, the truncation would be at 99 as before for TRUNC(STD) and 32767 for TRUNC(BIN)).
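A minimal sketch of that difference (my own, untested; the program name and the extra display field are illustrative). Compiled with TRUNC(STD) it should display 00000, with TRUNC(BIN) 00100:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TRNCDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * the field from the example above, pre-loaded with its PICture maximum
       01  a-good-name  BINARY PIC 99 VALUE 99.
      * a display field big enough to show whatever survives the ADD
       01  ws-show      PIC 9(5).
       PROCEDURE DIVISION.
           ADD 1 TO a-good-name
      * TRUNC(STD): 99 + 1 is truncated to the PICture, leaving 00
      * TRUNC(BIN): the halfword keeps 100
           MOVE a-good-name TO ws-show
           DISPLAY 'AFTER ADD: ' ws-show
           GOBACK.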
Changing from TRUNC(BIN) to TRUNC(OPT), without program changes, is only possible if all, entirely all, usage of binary fields is limited to decimal-values represented by the PICture. Particular pieces of code may appear to "work", but it would be a massive coincidence if all use of binary fields gave the same result between the two compiler options, on your system.
It is similar when changing from TRUNC(STD) to TRUNC(OPT). Although less coincidence is needed for things to appear to work, that only adds to a false sense of security, leaving the potential for subtle differences to go unnoticed for some time.
Changing from genuine use of TRUNC(OPT) to either TRUNC(STD) or TRUNC(BIN) is possible without effort. However, why would you want to?
However, if your use is not genuine (using TRUNC(OPT) with data that does not conform to PICture), then your original results are unreliable, and you will get differences when changing to TRUNC(STD) and likely get differences when changing to TRUNC(BIN).
In short, changing the site-default for compiler option TRUNC is something to be considered very carefully, and must include provision for verification of results.
Sites do at times make such a change; the only ones I know of are from TRUNC(BIN) (mostly) and TRUNC(STD) to TRUNC(OPT), for performance reasons. These have been done as projects, not simply by changing the option and blundering on from there.
Do not override the site-default for TRUNC within systems. If you have programs which are using the same binary data (from files, databases, inter-program communication, messages, or any other way) and they don't all treat the data in the same way, it is asking for trouble.
Some myths
Further explanation will be given later in the text.
There is a difference between TRUNC(BIN) and making all your binary
fields COMP-5 (or COMPUTATIONAL-5).
There is no difference whatsoever. When TRUNC(BIN) is specified, the compiler simply treats all binary fields as COMP-5.
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
Although the term itself makes many experienced people think it will be faster ("it'll be like when I code it myself in Assembler") it is in fact slower, on the whole. The slowing-down increases as the field-size increases.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
It does. It truncates to field-size. Because it truncates to field-size, the intermediate fields must always be larger than the source fields, which means more instructions must be used, and different instructions.
Also, and important to know, the ON SIZE ERROR clause (which can be used with all arithmetic verbs) always only uses the PICture clause to determine that a size-error has occurred. That is, if you have a COMP-5 PIC S9(4), which can contain a maximum positive value of 32,767 and do this:
MULTIPLY that-field BY 10 GIVING that-field
ON SIZE ERROR
DISPLAY "Busted"
END-MULTIPLY
Any result above 9999 (the PICture maximum) will cause the DISPLAY to be processed.
Which really means "don't use ON SIZE ERROR with COMP-5 or TRUNC(BIN)".
TRUNC(OPT) generates optimal code.
In isolation, it does. However this does not preclude further optimisations from compiler option OPTIMIZE/OPT across a wider context.
When using binary fields, always use the maximum PICture for the size of the field
A binary field with 1-4 digits will occupy a half-word, two bytes of storage. With 5-9 digits, a word, or a fullword, of four bytes. With 10-18 digits, a double-word of eight bytes.
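In terms of definitions (illustrative names of my own):

      * 1-4 digits: halfword (2 bytes)
       01  WS-HALF     BINARY PIC 9(4).
      * 5-9 digits: word/fullword (4 bytes)
       01  WS-FULL     BINARY PIC 9(9).
      * 10-18 digits: doubleword (8 bytes)
       01  WS-DOUBLE   BINARY PIC 9(18).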
The aged recommendation is to always specify four digits, nine digits and 18 digits (well, no-one really goes above nine, do they...?).
This is advice I've received in the past, and given out myself. However, in Enterprise COBOL it is not good advice.
The best advice here is to define the number of digits needed. This will at times improve performance, will never degrade performance, and will make the program easier to understand by best describing the data.
When using binary fields, always make them signed.
More advice I've received and given in the past. Untrue with Enterprise COBOL. If a field can contain a negative value, make it signed. Otherwise make it unsigned.
At times, with interfaces, it is not explicit whether a field should be signed. However, it will be explicit from the maximum value expected. As will the field definition (the USAGE).
For instance, an SQL VARCHAR as a host-variable can have a maximum size of 32767 bytes. Since the actual length is held in a two-byte binary field, the field should be signed. Any value "above" 32767 will be misinterpreted by DB2/SQL.
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
Since the compiler has to take care of decimal truncation, and since anything which could lead to truncation would require the "next size up", a binary field of nine digits can require a double-word intermediate field. So requires code to convert to a double-word, and convert the result back from a double-word to a word. If nine digits are required, it will generally be better to define 10 digits and save on the conversions.
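As a hedged sketch of that advice (the field names are mine), when nine significant digits are genuinely needed:

      * nine digits: a fullword in storage, but arithmetic may need a
      * doubleword intermediate, with conversions either side
       01  WS-AMOUNT-9   BINARY PIC S9(9).
      * ten digits: already a doubleword, so no word-to-doubleword
      * conversions are needed around the arithmetic
       01  WS-AMOUNT-10  BINARY PIC S9(10).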
Note
The above is all known to hold true for Enterprise COBOL up to V4.2.
IBM has entirely rewritten the code-generation and optimisation (now at two possible levels of optimisation) for Enterprise COBOL V5. There is considerable improvement in the treatment of binary fields, including, for instance, only doing the truncation of values once it is known that truncation is necessary. I am not aware that the use of V5 changes anything here other than the scale of the performance differences. All general usage of binary fields should be faster with V5 than with earlier versions of Enterprise COBOL.
Binary fields
COBOL, for binary fields, uses decimal maxima determined by the PICture size.
Such a field with PIC 9 can contain a maximum value of 9 before truncation. If signed, the range of values is -9 to +9. Values outside that range will be truncated.
For PIC 99, 99, and if signed -99 to +99.
For PIC 999, 999, and if signed -999 to +999.
You get the pictu... idea.
It is down to the compiler-implementation as to how those values are stored.
Indeed, COBOL only gained Standard support for binary fields (USAGE BINARY) relatively recently (1985). Before that, which actual "non-display" fields were supported, and how, was down to USAGE COMPUTATIONAL, whose specifics were compiler-dependent.
Generally across compilers COMP, COMP-1 and COMP-2 (binary, with decimal maxima, short floating-point and long floating-point) are standard, though not part of the Standard. Beyond COMP-2, what the field definitions mean can vary amongst compilers.
So, first recommendation, suggest that your local site standards use BINARY instead of COMP for new code (and PACKED-DECIMAL instead of COMP-3, for packed-decimal fields). BINARY (and COMP-4) within Enterprise COBOL is simply an alias of COMP, so there is absolutely no problem in doing this.
There is another type of binary field, which is the native-binary field. In Enterprise COBOL this is USAGE COMP-5.
COMP-5 has its field-size determined by the PICture definition, but its maxima are that of the full bit-pattern possible for the field size. A PIC S9(4) COMP-5 can contain -32768 to 32767.
Note at this point that a native-binary field, and this may seem counter-intuitive, generally needs more generated machine-code to support its use. This is because it truncates to field-size, rather than PICture.
Note also that there is one place where this does not happen, which is ON SIZE ERROR, which will be true if the value exceeds the PICture size. Which means, to my mind, don't use ON SIZE ERROR with COMP-5 (or TRUNC(BIN), see soon) fields.
Compiler option TRUNC
The compiler option TRUNC defines how machine-code is generated for binary fields. There are three options:
TRUNC(BIN)
Truncation to field-size.
This treats all the non-native-binary fields in the program (COMP/COMP-4/BINARY) as native-binary (as though they had been defined as COMP-5).
This allows the full range of bit patterns to be used, but has impacts on performance.
TRUNC(STD)
Truncation to PICture size.
Generates machine-code for the COBOL Standard truncation to PICture size. PIC 9(n) can contain no more than n significant digits; they will be truncated whenever the field is a "target" (whenever the field value changes).
TRUNC(OPT)
Truncation of either type is only done when it happens to be convenient.
I describe this as being a contract between the coder and the compiler. The coder contracts to never (as in never) allow a value to exceed the PICture size. The compiler contracts to always get it right in such a case.
If the coder breaks the contract the coder is entirely to blame for the ensuing rubbish.
When to use each setting of TRUNC (further recommendation)
BIN Never. Use COMP-5 for the individual fields which require access to all bits: pay attention to SQL and CICS "system" fields, external data from non-Mainframe sources, inter-language communication between COBOL and Java/C/C++, and anywhere else where the data-maxima for a field are beyond the PICture and it is not possible to make the field bigger (because the actual logical definition of the field size is outside your program).
STD use this unless all, as in all, your data always, as in always, conforms to PICture.
OPT use this only, as in only, if all, as in all, your data always, as in always, conforms to PICture.
If you have COMP PIC 99, for instance, you must not, when using OPT, allow it to reach a value of 99 and then add one to it. Or anything similar.
The Answer
You used TRUNC(OPT), entering into the contract. You immediately broke the contract. It is your fault.
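To put numbers to it (remembering, as the documentation quoted later makes clear, that results under a broken TRUNC(OPT) contract are formally unpredictable, so this is only a plausible reading of what the generated code did): 9000 plus LENGTH OF WS-FIELD-A is 9000 + 2000 = 11000. WS-SUM-LEN is PIC S9(4) COMP, whose PICture maximum is 9999; truncating 11000 to four decimal digits leaves 1000, which is what Case 1 showed. However, 11000 fits comfortably in a signed halfword (maximum 32767), so a code-sequence which happened to skip the decimal truncation, as apparently occurred in Case 2, leaves 11000 in the field.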
Warning
If your site is using TRUNC(OPT) and not everyone is fully aware of the implications, you will, as in will, have problems.
Substantiation of the Myths from above
There is a difference between TRUNC(BIN) and making all your binary
fields COMP-5 (or COMPUTATIONAL-5).
Define two fields in small program. They should be defined as COMP/COMP-4/BINARY (or COMPUTATIONAL/COMPUTATIONAL-4 if that is your bent).
In the program, add a literal to each of the fields (do this with two separate statements, to make it easier to follow, unless you are experienced with the generated code in a listing).
Compile the program with compiler options LIST,NOOFFSET (this will produce, in the compiler listing, output showing the generated machine-code in a so-called "pseudo-assembler" format) and TRUNC(BIN).
Copy the program. In the copy, change the USAGE of the two fields to COMP-5 (or COMPUTATIONAL-5).
Compile this program, again with LIST,NOOFFSET but this time the value for TRUNC is irrelevant as it does not affect COMP-5 fields.
Compare the output listings. If there is one byte difference, eat someone's hat.
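A minimal sketch of the pair of programs described (untested; the names are mine), where the only change between the two sources is the USAGE:

      * Version 1: compile with TRUNC(BIN),LIST,NOOFFSET
       IDENTIFICATION DIVISION.
       PROGRAM-ID. BINTEST1.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-FIELD-HALF   COMP PIC S9(4) VALUE ZERO.
       01  WS-FIELD-FULL   COMP PIC S9(8) VALUE ZERO.
       PROCEDURE DIVISION.
           ADD 7 TO WS-FIELD-HALF
           ADD 7 TO WS-FIELD-FULL
           GOBACK.
      * Version 2: identical except that the two fields are USAGE COMP-5,
      * when the TRUNC setting is irrelevant. Compare the two LIST outputs.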
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
From this discussion at IBM's COBOL Cafe: https://www.ibm.com/developerworks/community/forums/html/topic?id=ae9ef6bc-6e4e-43f8-a814-e66bea25fb8c&ps=25
Here's a multiply of a PIC 9(3) by a PIC 9(5).
With TRUNC(STD)
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
With TRUNC(BIN)
000019 MULTIPLY
00023C 4820 8030 LH 2,48(0,8) PICS9-4
000240 5840 8038 L 4,56(0,8) PICS9-8
000244 8E40 0020 SRDA 4,32(0)
000248 5D40 C000 D 4,0(0,12) SYSLIT AT +0
00024C 4E50 D120 CVD 5,288(0,13) TS2=16
000250 F154 D110 D123 MVO 272(6,13),291(5,13) TS2=0
000256 4E40 D120 CVD 4,288(0,13) TS2=16
00025A 9110 D115 TM 277(13),X'10' TS2=5
00025E D204 D115 D123 MVC 277(5,13),291(13) TS2=5
000264 4780 B05C BC 8,92(0,11) GN=10(00026C)
000268 9601 D119 OI 281(13),X'01' TS2=9
00026C GN=10 EQU *
00026C 4E20 D120 CVD 2,288(0,13) TS2=16
000270 FC82 D111 D125 MP 273(9,13),293(3,13) TS2=1
000276 D202 D128 C008 MVC 296(3,13),8(12) TS2=24
00027C D204 D12B D115 MVC 299(5,13),277(13) TS2=27
000282 4F20 D128 CVB 2,296(0,13) TS2=24
000286 F144 D12B D110 MVO 299(5,13),272(5,13) TS2=27
00028C 4F50 D128 CVB 5,296(0,13) TS2=24
000290 5C40 C000 M 4,0(0,12) SYSLIT AT +0
000294 1E52 ALR 5,2
000296 47C0 B08E BC 12,142(0,11) GN=11(00029E)
00029A 5A40 C004 A 4,4(0,12) SYSLIT AT +4
00029E GN=11 EQU *
00029E 1222 LTR 2,2
0002A0 47B0 B098 BC 11,152(0,11) GN=12(0002A8)
0002A4 5B40 C004 S 4,4(0,12) SYSLIT AT +4
0002A8 GN=12 EQU *
0002A8 5050 8040 ST 5,64(0,8)
It doesn't take any knowledge of IBM Assembler to work out which of those two pieces of code is going to run more quickly.
The difference in the line-numbers (19 Vs 23) is just down to the fact that TRUNC(BIN) makes the PICture size irrelevant, so where I had three calculations doing the same thing with different size fields, for TRUNC(BIN) the code for each was the same, because the size of each field is the same, a word/fullword of four bytes.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
See the code immediately above. It is so massive due to the need to provide truncation. The need to provide decimal truncation is down to the COBOL Standard, it's what must happen in the language.
TRUNC(OPT) generates optimal code.
The code generated will always be the most efficient for that code-sequence. The same code-sequence will always generate the same code, before optimisation.
However, the optimizer is capable of spotting that a particular undisturbed state is available for a source-field earlier in the program, and replace part or all of the TRUNC(OPT) code with code relying on the previously-available value.
When using binary fields, always use the maximum PICture for the size of the field
From the same IBM COBOL Cafe discussion referenced above, with these definitions:
01 PIC9-3 BINARY PIC 999.
01 PIC9-5 BINARY PIC 9(5).
01 THE-RESULT8 BINARY PIC 9(8).
01 PIC9-4 BINARY PIC 9(4).
01 PIC9-8 BINARY PIC 9(8).
01 THE-RESULT BINARY PIC 9(8).
And these calculations:
MULTIPLY PIC9-4 BY PIC9-8
GIVING THE-RESULT
MULTIPLY PIC9-3 BY PIC9-5
GIVING THE-RESULT8
Here's the generated code for TRUNC(STD):
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8) THE-RESULT
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
The first block of pseudo-assembler is with the number of digits in the PICture being the maximum that gives the same field-size. A BINARY PIC 9(3) occupies a half-word, and 9(4) is the largest that can appear in a half-word. A PIC 9(5) occupies a word/fullword, and, given Myth 7 (the nine-digit myth), eight digits is used for that (to be fair to this particular Myth).
The second block is with the number of digits which represent the data accurately, and which don't happen to require truncation when a multiplication is carried out.
Using the "full-size" PICtures guarantees that unnecessary truncation will always occur.
The difference in the number of instructions is small, and LH is faster than L, so plus to the full-size on that. But M is much slower than L, and MH is slower than L but faster than M. So plus to the optimal size on that. And the D (a divide, which is slow, slow) is not required at all in the second block (because no truncation is required). So bad-boy to the full-size fields on that.
The code for TRUNC(OPT) is also faster for the optimal-size fields, although the difference between the two is not as great (because TRUNC(OPT) in this code-sequence decides it does not need the truncation to base-10 and would not in a million years consider the truncation to field-size).
When using binary fields, always make them signed.
Again from the same IBM COBOL Cafe discussion, here's same-length signed fields Vs unsigned fields, TRUNC(STD):
000019 MULTIPLY
000238 4830 8030 LH 3,48(0,8) PICS9-4
00023C 5C20 8038 M 2,56(0,8) PICS9-8
000240 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000244 5020 8040 ST 2,64(0,8) THE-RESULTS
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8)
The code differs from the above when compiled with TRUNC(OPT) or TRUNC(BIN), but within each of those options the signed and unsigned code-sequences are identical.
The presence or absence of a sign makes no difference to the code generated.
Except in one case, where Myth 7 (the nine-digit myth) comes into play. With a nine-digit binary field, signed does generate less code than unsigned, but even that is more code than using eight digits.
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
From the IBM Enterprise COBOL Version 4 Release 2 Performance Tuning paper, pp32-33:
The following shows the general performance considerations (from most
efficient to least efficient) for the number of digits of precision
for signed binary data items (using PICTURE S9(n) COMP) using
TRUNC(OPT):

n is from 1 to 8
  for n from 1 to 4, arithmetic is done in halfword instructions where possible
  for n from 5 to 8, arithmetic is done in fullword instructions where possible

n is from 10 to 17
  arithmetic is done in doubleword format

n is 9
  fullword values are converted to doubleword format and then doubleword
  arithmetic is used (this is SLOWER than any of the above)

n is 18
  doubleword values are converted to a higher precision format and then
  arithmetic is done using this higher precision (this is the SLOWEST of
  all for binary data items)
There is a similar issue with TRUNC(STD). TRUNC(BIN) already has the built-in slowness for the number of digits 1-9, so is not further affected.
From the publicly available documentation:
TRUNC(OPT) is a performance option. When TRUNC(OPT) is in effect, the compiler assumes that data conforms to PICTURE specifications in USAGE BINARY receiving fields in MOVE statements and arithmetic expressions. The results are manipulated in the most optimal way, either truncating to the number of digits in the PICTURE clause, or to the size of the binary field in storage (halfword, fullword, or doubleword).
Tip: Use the TRUNC(OPT) option only if you are sure that the data being moved into the binary areas will not have a value with larger precision than that defined by the PICTURE clause for the binary item. Otherwise, unpredictable results could occur. This truncation is performed in the most efficient manner possible; therefore, the results are dependent on the particular code sequence generated. It is not possible to predict the truncation without seeing the code sequence generated for a particular statement.
Read the "Tip" very carefully and see what it means for your situation. (Hint : it means it does not make sense to ask the question you did because it literally says that "whatever happens, it was unpredictable" or iow "there is no explanation for what happens").
To make the compiler behaviour predictable, switch to either TRUNC(BIN) or TRUNC(STD). STD is good for standards compliance but bad for CPU usage, BIN is good for CPU usage but requires you to be a bit careful (because decimal truncation simply will not happen).
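Another way to make the original example predictable under any TRUNC setting is to make the data conform to PICture, here simply by giving the receiving field enough digits (a sketch of the obvious fix, not the only one):

      * five digits: 11000 (9000 + 2000) now conforms to the PICture,
      * so no truncation of either flavour is needed
       01  WS-SUM-LEN  BINARY PIC S9(5).
       ...
           COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A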
DIVIDE WS-ENT-CNYR-RED BY 4 GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER ON SIZE ERROR.
What does it mean?
DIVIDE is a COBOL verb that allows you to do division, like in maths.
This, and, other maths verbs, are covered in your manual and course notes.
The actual DIVIDE you show is syntactically incorrect: you should have an "imperative statement" after the ON SIZE ERROR phrase. No reasonable COBOL compiler will allow that statement to compile.
What is the DIVIDE doing? It is likely the start of a check for a leap-year. If a year is divisible by four, it is a leap-year candidate (it must also not be divisible by 100 unless it is divisible by 400).
The result of the division is placed in the data-name following the GIVING, and what is "left over" from the division is placed in the data-name following the REMAINDER.
Usually when using REMAINDER it will be division with integers, which makes sense for being a year. The year 2015 divided by four gives 503 with a remainder of three. Not a leap year.
The ON SIZE ERROR in this case should be superfluous. It is division by a literal (4) and unless the result fields are not big enough to contain the result, there can never be a SIZE ERROR.
Data-definitions should be:
ll WS-ENT-CNYR-RED PIC 9(4).
ll WS-DT-CNYR PIC 9(3).
ll WS-YR-REMAINDER PIC 9.
Unless there are very large values for the year, in which case WS-DT-CNYR would need to be 9(4). ll is a level-number; it will be in the range 01-49 (or 1-49), or 77.
An 88-level condition name should appear on WS-YR-REMAINDER, something like:
88 could-be-leap-year VALUE ZERO.
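Putting those pieces together, here is a hedged sketch of a complete leap-year check built from DIVIDE ... GIVING ... REMAINDER (the data-names and values are illustrative, not from the original program):

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LEAPCHK.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-YEAR        PIC 9(4) VALUE 2015.
       01  WS-QUOTIENT    PIC 9(4).
       01  WS-REM-4       PIC 9.
           88  DIVISIBLE-BY-4    VALUE ZERO.
       01  WS-REM-100     PIC 9(2).
           88  DIVISIBLE-BY-100  VALUE ZERO.
       01  WS-REM-400     PIC 9(3).
           88  DIVISIBLE-BY-400  VALUE ZERO.
       PROCEDURE DIVISION.
      * divisible by 4, and not by 100 unless also by 400
           DIVIDE WS-YEAR BY 4   GIVING WS-QUOTIENT
                                 REMAINDER WS-REM-4
           DIVIDE WS-YEAR BY 100 GIVING WS-QUOTIENT
                                 REMAINDER WS-REM-100
           DIVIDE WS-YEAR BY 400 GIVING WS-QUOTIENT
                                 REMAINDER WS-REM-400
           IF DIVISIBLE-BY-4
              AND (NOT DIVISIBLE-BY-100 OR DIVISIBLE-BY-400)
               DISPLAY WS-YEAR ' IS A LEAP YEAR'
           ELSE
               DISPLAY WS-YEAR ' IS NOT A LEAP YEAR'
           END-IF
           GOBACK.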
GIVING is very common to see in COBOL. If GIVING is not used, then the result is stored in one of the fields mentioned in the statement (you should check which for DIVIDE, MULTIPLY, ADD and SUBTRACT).
REMAINDER you will only see when the "modulus" of a division is required.
There will be no rounding of a result unless the ROUNDED phrase is specified, and rounding with REMAINDER does not make much sense.
In this example, only WS-ENT-CNYR-RED must be a numeric item. WS-DT-CNYR and WS-YR-REMAINDER can both be numeric-edited items. The item on a GIVING will quite often be numeric-edited when formatting report lines. In this typical code for the start of a leap-year check, it is likely that all will be numeric, and all will be integers.
Depending on how much the three items are used, and how they are used, they may be defined as PACKED-DECIMAL (or whichever COMPUTATIONAL-? item is packed-decimal for that compiler) or even binary.
It is not necessary that this is the start of a leap-year check. There can be other reasons for dividing by four and needing to know the remainder.
Note that DIVIDE ... INTO ... is also valid. Indeed, there are five distinct formats of the DIVIDE statement documented in the 1985 COBOL Standard (and earlier ones) which you should see reflected in your manual.
ON SIZE ERROR tells the compiler to generate code when a "size error" occurs. A "size error" is when a result does not fit in a field provided for it.
ON SIZE ERROR
    imperative-statement.
or
ON SIZE ERROR
    imperative-statement
END-... (the scope-delimiter, consisting of the END- prefix and the verb used, in this case `END-DIVIDE`).
The imperative-statement can be multiple statements, but is usually one (setting the result field to a default value, often zero). Because it can be multiple statements, it is very important to terminate the statement, otherwise you'll make unintended code part of the imperative-statement.
Many people think that ON SIZE ERROR is only actioned for a "divide by zero", but this is not the case. If a result does not fit in a field due to the size of the field, a "size error" has occurred.
I don't use ON SIZE ERROR. I ensure non-zero divisors, and that all result fields are large enough to contain the expected results.
Because I don't use ON SIZE ERROR, I don't know whether the REMAINDER can also cause a size error. I'll check :-)
OK, I've checked. This is with IBM's Enterprise COBOL, which, apart from Extensions, is to the 1985 Standard. If the REMAINDER field is too small to hold the remainder, the ON SIZE ERROR will be actioned. So be very careful about the size of the remainder field, as there is no way of knowing which field caused the size-error.
It is documented like so:
SIZE ERROR phrases

For formats 1, 2, and 3, see "SIZE ERROR phrases" on page 296.

For formats 4 and 5, if a size error occurs in the quotient, no
remainder calculation is meaningful. Therefore, the contents of the
quotient field (identifier-3) and the remainder field (identifier-4)
are unchanged. If size error occurs in the remainder, the contents of
the remainder field (identifier-4) are unchanged. In either of these
cases, you must analyze the results to determine which situation has
actually occurred.
Formats 4 and 5 are with REMAINDER.
If you don't specify ON SIZE ERROR then the behaviour will be down to the individual compiler, and run-time options. Enterprise COBOL will truncate the fields, but only after going to the run-time (Language Environment) to check whether you wanted something else to happen. Which will consume a lot of time relative to specifying ON SIZE ERROR.
So, ensure your fields are the correct size. If you don't want to do this, use ON SIZE ERROR. If using ON SIZE ERROR with REMAINDER, you have to determine yourself what caused the SIZE ERROR before doing anything.
ON SIZE ERROR has a counterpart, NOT ON SIZE ERROR. Its use is similar to ON SIZE ERROR, except with the obvious difference. ON SIZE ERROR and NOT ON SIZE ERROR can both be used at the same time:
DIVIDE WS-ENT-CNYR-RED BY 4
GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER
ON SIZE ERROR
imperative-statement-1
NOT ON SIZE ERROR
imperative-statement-2
END-DIVIDE (or .)
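Filled in with concrete (purely illustrative) imperative statements, following the earlier suggestion of setting the results to a default value on a size error, that skeleton might look like:

      * on a size error, default the results and report it; otherwise
      * just show the remainder (the DISPLAYs are only illustrative)
           DIVIDE WS-ENT-CNYR-RED BY 4
               GIVING WS-DT-CNYR
               REMAINDER WS-YR-REMAINDER
               ON SIZE ERROR
                   MOVE ZERO TO WS-DT-CNYR
                                WS-YR-REMAINDER
                   DISPLAY 'SIZE ERROR ON LEAP-YEAR DIVIDE'
               NOT ON SIZE ERROR
                   DISPLAY 'REMAINDER IS ' WS-YR-REMAINDER
           END-DIVIDE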
I want to port a 32 by 32 bit unsigned multiplication to a 24-bit DSP (it's a Linear Congruential Generator, so I'm not allowed to truncate, and I don't yet want to replace the current LCG with a 24-bit one). The available data types are 24 and 48 bit ints.
Only the 32 least-significant bits are needed. Do you know any hacks to implement this in fewer multiplies, masks and shifts than the usual way?
The line looks like this:
//val is an int(32 bit)
val = (1664525 * val) + 1013904223;
An outline would be (in my current compiler style):
static uint48_t val = SEED;
...
val = 0xFFFFFFFFUL & ((1664525UL * val) + 1013904223UL);
and hopefully the compiler will recognise:
it can use a multiply and accumulate command
it only needs a reduced multiply algorithm due to the "high word" of the constant being zero
the AND could be effected by resetting the upper bits or multiplying a constant and restoring
...other stuff depends on your {mystery dsp} target
Note
if you scale up the coefficients by 2^16, you can get truncation for free, but due to lack of info
you will have to explore/decide if it is better overall.
(This is more an elaboration of why two multiplications 24×24→n, 31<n, are enough for 32×32→min(n, 40).)
The question discloses amazingly little about the capabilities to build a method
32×21→32 in fewer [24×24] multiplies, masks and shifts than the usual way on:
24 and 48 bit ints & DSP (I read high throughput, non-high latency 24×24→48).
As far as there indeed is a 24×24→48 multiply (or even 24×24+56→56 MAC) and one factor is less than 24 bits, the question is pointless, a second multiply being the compelling solution.
The usual composition of a 24<n<48×24<m<48→24<p multiply from 24×24→48 uses three of the latter; a compiler should know as well as a coder that "the fourth multiply" would yield bits with a significance/position exceeding the combined lengths of the lower parts of the factors.
So, is it possible to generate "the long product" using just a second 24×24→48?
Let the (bytes of the) factors be w_xyz and W_XYZ, respectively; the underscores suggesting "the Ws" being the lower significance bits in the higher significance words/ints if interpreted as 24-bit ints. The first 24×24→48 multiply (xyz × XYZ) gives the sum of the partial byte-products

            xZ yZ zZ
         xY yY zY
      xX yX zX

(each row shifted one byte further left), and what is still needed on top of that is

    wZ + zW.
This can be computed using one combined multiplication of
((w<<16)|(z & 0xff)) × ((W<<16)|(Z & 0xff)). (Never mind the 17th bit of wZ+zW "running" into wW.)
(In the first revision of this answer, I foolishly produced wZ and zW separately - their sum is wanted in the end, anyway.)
(Annoyingly, this is about all you can do for 24×24→24 as a base operation too - beyond this "combining multiplication", you need four instead of one.)
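For completeness, a short derivation of the same point in conventional notation (my own restatement, not part of the original answer): write the factors as $v = w\cdot2^{24} + xyz$ and $V = W\cdot2^{24} + XYZ$. Then

$$v \cdot V \;=\; xyz \cdot XYZ \;+\; 2^{24}\,(w \cdot XYZ + W \cdot xyz) \;+\; 2^{48}\,w\,W .$$

Modulo $2^{32}$ the last term vanishes and only the low 8 bits of the bracketed middle term survive; since $XYZ \equiv Z$ and $xyz \equiv z \pmod{2^8}$, those 8 bits are exactly the low 8 bits of $wZ + zW$. Hence the one full 24×24→48 for xyz·XYZ plus the single combined multiplication above are enough for the low 32 bits the question asks for.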
Another angle to explore is choosing a different PRNG.
It may have to be >24 bits (tell!).
On a 24 bit machine, XorShift* (or even XorShift+) 48/32 seems worth a look.
Say I have the following variable-length table defined in WORKING-STORAGE...
01 SOAP-RECORD.
05 SOAP-INPUT PIC X(8) VALUE SPACES.
05 SOAP-STATUS PIC 9 VALUE ZERO.
05 SOAP-MESSAGE PIC X(50) VALUE SPACES.
05 SOAP-ITEMS OCCURS 0 TO 500 TIMES
DEPENDING ON ITEM-COUNT
INDEXED BY ITEM-X.
10 SI-SUB-ITEMS OCCURS 0 TO 100 TIMES
DEPENDING ON SUB-COUNT
INDEXED BY SUB-X.
15 SS-KEY PIC X(8) VALUE SPACES.
15 SS-AMOUNT PIC -9(7).99 VALUE ZEROS.
15 SS-DESCR PIC x(100) VALUE SPACES.
When this program runs, will it initially allocate as much space as this table could possibly need, or is it more dynamic about allocating memory? I would guess that the DEPENDING ON clause would make it more dynamic in the sense that it would allocate more memory as the ITEM-COUNT variable is incremented. A co-worker tells me otherwise, but he is not 100% sure. So I would really like to know how this works in order to structure my program as efficiently as possible.
PS: Yes, I am writing a new COBOL program! It's actually a CICS web service. I don't think this language will ever die :(
You don't mention which compiler you're using, but, at least up through the current, 2002, COBOL standard, the space allocated for an OCCURS...DEPENDING ON (ODO) data item is not required to be dynamic. (It's really only the number of occurrences, not the length, of the data item that varies.) Although your compiler vendor may've implemented an extension to the standard, I'm not aware of any vendor that has done so in this area.
The next, but not yet approved, revision of the standard includes support for dynamic-capacity tables with a new OCCURS DYNAMIC format.
In the CICS world, OCCURS DEPENDING ON (ODO) can be used to create a
table that is dynamically sized at run time. However, the way you are declaring
SOAP-RECORD will allocate enough memory to hold a record of maximum size.
Try the following:
First, move the SOAP-RECORD into LINKAGE SECTION. Items declared
in the linkage section do not have any memory allocated for them. At this
point you only have a record layout. Leave the declaration of
ITEM-COUNT and SUB-COUNT in WORKING-STORAGE.
Next, declare a pointer and a length in WORKING-STORAGE something like:
77 SOAP-PTR USAGE POINTER.
77 SOAP-LENGTH PIC S9(8) BINARY.
Finally in the PROCEDURE DIVISION: Set the size of the array
dimensions to some real values; allocate the
appropriate amount of memory and then connect the two. For example:
MOVE 200 TO ITEM-COUNT
MOVE 15 TO SUB-COUNT
MOVE LENGTH OF SOAP-RECORD TO SOAP-LENGTH
EXEC CICS GETMAIN
BELOW
USERDATAKEY
SET(SOAP-PTR)
FLENGTH(SOAP-LENGTH)
END-EXEC
SET ADDRESS OF SOAP-RECORD TO SOAP-PTR
This will allocate only enough memory to store a SOAP-RECORD with 200 SOAP-ITEMS
each of which contain 15 SI-SUB-ITEMS.
Note that the LENGTH OF register gives you the size of SOAP-RECORD
based on the ODO object values (ITEM-COUNT, SUB-COUNT) as opposed to
the maximum number of OCCURS.
Very important... Don't forget to deallocate the memory when you're done!
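For instance (a sketch only, assuming the GETMAIN shown above; check your site's conventions for error handling):

      * release the storage acquired for SOAP-RECORD once it is finished
      * with; SOAP-PTR still addresses that storage
           EXEC CICS FREEMAIN
               DATAPOINTER(SOAP-PTR)
           END-EXEC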