What does this DIVIDE statement mean? - COBOL

DIVIDE WS-ENT-CNYR-RED BY 4 GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER ON SIZE ERROR.
What does it mean?

DIVIDE is a COBOL verb that allows you to do division, as in maths.
This, and the other arithmetic verbs, are covered in your manual and course notes.
The actual DIVIDE you show is syntactically incorrect: you should have an "imperative statement" after the ON SIZE ERROR phrase. No reasonable COBOL compiler will allow that statement to compile.
What is the DIVIDE doing? It is likely the start of a check for a leap year. If a year is divisible by four, it is a leap-year candidate (it must also not be divisible by 100, unless it is also divisible by 400).
The result of the division is placed in the data-name following the GIVING, and what is "left over" from the division is placed in the data-name following the REMAINDER.
Usually when REMAINDER is used it will be integer division, which makes sense for a year. The year 2015 divided by four gives 503 with a remainder of three. Not a leap year.
The ON SIZE ERROR in this case should be superfluous. It is division by a literal (4) and, unless the result fields are too small to contain the result, there can never be a SIZE ERROR.
Data-definitions should be:
ll WS-ENT-CNYR-RED PIC 9(4).
ll WS-DT-CNYR PIC 9(3).
ll WS-YR-REMAINDER PIC 9.
Unless there are very large values for the year, in which case WS-DT-CNYR would need to be 9(4). ll is a level-number; it will be in the range 01-49 (or 1-49), or 77.
An 88-level condition name should appear on WS-YR-REMAINDER, something like:
88 could-be-leap-year VALUE ZERO.
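Putting those pieces together, a minimal sketch of a complete leap-year test might be (WS-DUMMY-QUOT, WS-REM-100 and WS-REM-400 are hypothetical working fields invented for the illustration, each needing a suitable PICture):
DIVIDE WS-ENT-CNYR-RED BY 4
    GIVING WS-DT-CNYR
    REMAINDER WS-YR-REMAINDER
END-DIVIDE
IF could-be-leap-year
    DIVIDE WS-ENT-CNYR-RED BY 100
        GIVING WS-DUMMY-QUOT
        REMAINDER WS-REM-100
    END-DIVIDE
    DIVIDE WS-ENT-CNYR-RED BY 400
        GIVING WS-DUMMY-QUOT
        REMAINDER WS-REM-400
    END-DIVIDE
    IF WS-REM-100 NOT = ZERO OR WS-REM-400 = ZERO
        imperative-statement
    END-IF
END-IF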
GIVING is very common to see in COBOL. If GIVING is not used, then the result is stored in one of the fields mentioned in the statement (you should check which for DIVIDE, MULTIPLY, ADD and SUBTRACT).
REMAINDER you will only see when the "modulus" of a division is required.
There will be no rounding of a result unless the ROUNDED phrase is specified, and rounding with REMAINDER does not make much sense.
In this example, only WS-ENT-CNYR-RED must be a numeric item. WS-DT-CNYR and WS-YR-REMAINDER can both be numeric-edited items. The item on a GIVING will quite often be numeric-edited when formatting report lines. In this typical code for the start of a leap-year check, it is likely that all will be numeric, and all will be integers.
Depending on how much the three items are used, and how they are used, they may be defined as PACKED-DECIMAL (or whichever COMPUTATIONAL-? item is packed-decimal for that compiler) or even binary.
It is not necessary that this is the start of a leap-year check. There can be other reasons for dividing by four and needing to know the remainder.
Note that DIVIDE ... INTO ... is also valid. Indeed, there are five distinct formats of the DIVIDE statement documented in the 1985 COBOL Standard (and earlier ones) which you should see reflected in your manual.
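For instance, the question's statement could equally be written with INTO (a sketch using the same fields):
DIVIDE 4 INTO WS-ENT-CNYR-RED
    GIVING WS-DT-CNYR
    REMAINDER WS-YR-REMAINDER
END-DIVIDE
Without the GIVING (and therefore without the REMAINDER, which requires it), DIVIDE 4 INTO WS-ENT-CNYR-RED would place the quotient back into WS-ENT-CNYR-RED itself.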
ON SIZE ERROR tells the compiler to generate code to be executed when a "size error" occurs. A "size error" is when a result does not fit in the field provided for it.
ON SIZE ERROR
    imperative-statement.
or
ON SIZE ERROR
    imperative-statement
END-... (a scope-delimiter, consisting of the END- prefix and the verb used, in this case END-DIVIDE).
The imperative-statement can be multiple statements, but is usually one (setting the result field to a default value, often zero). Because it can be multiple statements, it is very important to terminate the statement, otherwise you'll make unintended code part of the imperative-statement.
Many people think that ON SIZE ERROR is only actioned for a "divide by zero", but this is not the case. If a result does not fit in a field due to the size of the field, a "size error" has occurred.
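A sketch of a typical use, setting the results to a default of zero and terminating the statement with the scope-delimiter:
DIVIDE WS-ENT-CNYR-RED BY 4
    GIVING WS-DT-CNYR
    REMAINDER WS-YR-REMAINDER
  ON SIZE ERROR
    MOVE ZERO TO WS-DT-CNYR
    MOVE ZERO TO WS-YR-REMAINDER
END-DIVIDE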
I don't use ON SIZE ERROR. I ensure non-zero divisors, and that all result fields are large enough to contain the expected results.
Because I don't use ON SIZE ERROR, I don't know whether the REMAINDER can also cause a size error. I'll check :-)
OK, I've checked. This is with IBM's Enterprise COBOL, which, apart from Extensions, is to the 1985 Standard. If the REMAINDER field is too small to hold the remainder, the ON SIZE ERROR will be actioned. So be very careful about the size of the remainder field, as there is no way of knowing which field caused the size-error.
It is documented like so:
SIZE ERROR phrases: For formats 1, 2, and 3, see “SIZE ERROR phrases” on page 296. For formats 4 and 5, if a size error occurs in the quotient, no remainder calculation is meaningful. Therefore, the contents of the quotient field (identifier-3) and the remainder field (identifier-4) are unchanged. If a size error occurs in the remainder, the contents of the remainder field (identifier-4) are unchanged. In either of these cases, you must analyze the results to determine which situation has actually occurred.
Formats 4 and 5 are with REMAINDER.
If you don't specify ON SIZE ERROR then the behaviour will be down to the individual compiler, and run-time options. Enterprise COBOL will truncate the fields, but only after going to the run-time (Language Environment) to check whether you wanted something else to happen. Which will consume a lot of time relative to specifying ON SIZE ERROR.
So, ensure your fields are the correct size. If you don't want to do this, use ON SIZE ERROR. If using ON SIZE ERROR with REMAINDER, you have to determine yourself what caused the SIZE ERROR before doing anything.
ON SIZE ERROR has a counterpart, NOT ON SIZE ERROR. Its use is similar to ON SIZE ERROR, except with the obvious difference. ON SIZE ERROR and NOT ON SIZE ERROR can both be used at the same time:
DIVIDE WS-ENT-CNYR-RED BY 4
GIVING WS-DT-CNYR
REMAINDER WS-YR-REMAINDER
ON SIZE ERROR
imperative-statement-1
NOT ON SIZE ERROR
imperative-statement-2
END-DIVIDE (or .)

Related

Google Sheet yields infinitesimal number as remainder of an integer/whole number

I have this worksheet where I need to create a checker to determine whether a number (the result of dividing the sum of two numbers by another value, the DIVISOR) is an integer/does not have decimals. Upon running the said checker, it mostly worked fine but appeared to detect that a few items are not integers despite being exact multiples of the DIVISOR.
https://docs.google.com/spreadsheets/d/17-idS5G0kUI7JoHAx3qcJOiJ-zofmMrg93hUvZuxPiA/edit#gid=0
I have two values (V1 and V2) whose sum I need to divide by a certain number (Divisor).
I need the OUTPUT to be an integer/whole number. Since SUM(V1,V2) is a multiple of the DIVISOR, the OUTPUT is supposed to be a whole number. I also expanded the number of decimal places to make sure that there are no trailing digits after the decimal point.
However, upon running the MOD function over the OUTPUT, it generated some infinitesimal value.
I also tried TRUNCATING the OUTPUT and getting the DIFFERENCE between the TRUNC and OUTPUT. It yielded the same remainder value as the MOD result.
I downloaded the GSheet and opened it in MS Excel. There seems to be no problem with the DIFFERENCE result, but the MOD function yielded yet another value.
Actually, this is not a bug, and it is pretty common. It is called floating-point "error" and, in a nutshell, it has to do with how decimal numbers are stored within Google Sheets (and Excel, or any other app).
More details can be found here: https://en.wikipedia.org/wiki/IEEE_754
To counter it you will need to introduce rounding, like:
=ROUND(SUM(A1:A))
This is not an ideal solution for all cases, so depending on your requirements you may need to use one of these instead of ROUND (a sketch applying this to the question's checker follows the list):
ROUNDUP
ROUNDDOWN
TRUNC
TEXT
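Applying that to the question's checker, one possible sketch (assuming V1, V2 and the Divisor sit in B2, C2 and D2, and that rounding to six decimal places is acceptable for your data) is:
=MOD(ROUND((B2+C2)/D2, 6), 1) = 0
It returns TRUE when the rounded quotient has no remainder.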

Delphi Warning W1073 Combining signed type and unsigned 64-bit type - treated as an unsigned type

I get the subject warning on the following line of code:
SelectedFilesSize := SelectedFilesSize +
UInt64(IdList.GetPropertyValue(TShellColumns.Size)) *
ifthen(Selected, 1, -1);
Specifically, the IDE highlights the third line.
SelectedFilesSize is declared as UInt64.
The code appears to work when I run it; if I select an item, its file size is added to the total, if I deselect a file its size is subtracted.
I know I can suppress this warning with {$WARN COMBINING_SIGNED_UNSIGNED64 OFF}.
Can someone explain? Will there be an unforeseen impact if SelectedFilesSize gets huge? Or an impact on a specific target platform?
Delphi 10.3, Win32 and Win64 targets
This will work here, but the warning is right.
If you multiply a UInt64 by -1, you are actually multiplying it by $FFFFFFFFFFFFFFFF. The full result would be a 128 bit value, but the lower 64 bits will be the same as for a signed multiplication (that is also why the code generator often produces an imul opcode, even for unsigned multiplication: the lower bits will be correct, just the unused higher bits won't be). The upper 64 bits won't be used anyway, so they don't matter.
If you add that (actually negative) value to another UInt64 (e.g. SelectedFilesSize), the 64 bit result will be correct again. The CPU does not discriminate between positive or negative values when adding. The resulting CPU flags (carry, overflow) will indicate overflow, but if you ignore that by not using range or overflow checks, your code will be fine.
Your code will likely produce a runtime error if range or overflow checks are on, though.
In other words, this works because any excess upper bits (bit 64 and above) can be ignored. Otherwise, the values would be wrong. See the example below.
Example
Say your IdList.GetPropertyValue(TShellColumns.Size) is 420. Then you are performing:
$00000000000001A4 * $FFFFFFFFFFFFFFFF = $00000000000001A3FFFFFFFFFFFFFE5C
This is a huge positive number, but fortunately the lower 64 bits ($FFFFFFFFFFFFFE5C) can be interpreted as -420 (a really negative value in 128 bit would be $FFFFFFFFFFFFFFFFFFFFFFFFFFFFFE5C, or -420).
Now say your SelectedFilesSize is 100000 (or hex $00000000000186A0). Then you get:
$00000000000186A0 + $FFFFFFFFFFFFFE5C = $00000000000184FC
(or actually $100000000000184FC, but the top bit -- the carry -- is ignored).
$00000000000184FC is 99580 in decimal, so exactly the value you wanted.

Mainframe COBOL COMPUTE TRUNC Query

I wonder why there is a difference in the results for these two cases.
Working Storage:
WS-SUM-LEN PIC S9(4) COMP.
WS-LEN-9000 PIC 9(5) VALUE 9000.
WS-TMP-LEN PIC 9(5).
WS-FIELD-A PIC X(2000).
Case 1) COMPUTE WS-SUM-LEN = WS-LEN-9000 + LENGTH OF WS-FIELD-A
Result: WS-SUM-LEN = 1000
Case 2)
MOVE LENGTH OF WS-FIELD-A TO WS-TMP-LEN
COMPUTE WS-SUM-LEN = WS-LEN-9000 + WS-TMP-LEN
Result: WS-SUM-LEN = 11000
The compiler option is TRUNC(OPT). Why does no truncation occur for case 2?
Binary fields in IBM's Enterprise COBOL
Warnings
Compiler option TRUNC determines how code for binary fields is generated.
Do not just up and change from your site's default setting of option TRUNC. The different settings for TRUNC can give different results.
Changing from TRUNC(BIN) to TRUNC(STD) will give different results for any values beyond the decimal-values represented by the PICture defining the field. For a signed field, the same applies for the negative value.
01 a-good-name BINARY PIC 99.
ADD 1 TO a-good-name
With TRUNC(STD) the result will be truncated once 99 is reached. With TRUNC(BIN) the result will be truncated once 65535 is reached (if the field were signed, the truncation would be at 99 as before for TRUNC(STD) and 32767 for TRUNC(BIN)).
Changing from TRUNC(BIN) to TRUNC(OPT), without program changes, is only possible if all, entirely all, usage of binary fields is limited to decimal-values represented by the PICture. Particular pieces of code may appear to "work", but it would be a massive coincidence if all use of binary fields gave the same result between the two compiler options, on your system.
It is similar for changing from TRUNC(STD) to TRUNC(OPT). Although the amount of coincidence for working would be smaller, this would increase a false sense of security leaving the potential for subtle differences to be missed for some time.
Changing from genuine use of TRUNC(OPT) to either TRUNC(STD) or TRUNC(BIN) is possible without effort. However, why would you want to?
However, if your use is not genuine (using TRUNC(OPT) with data that does not conform to PICture), then your original results are unreliable, and you will get differences if changing to TRUNC(STD) and likely get difference changing to TRUNC(BIN).
In short, changing the site-default for compiler option TRUNC is something to be considered very carefully, and must include provision for verification of results.
Sites do at times make such a change; the only ones I know of are from TRUNC(BIN) (mostly) or TRUNC(STD) to TRUNC(OPT), for performance reasons. These have been done as projects, not just by changing the option and blundering on from there.
Do not override the site-default for TRUNC within systems. If you have programs which are using the same binary data (from files, databases, inter-program communication, messages, or any other way) and they don't all treat the data in the same way, it is asking for trouble.
Some myths
Further explanation will be given later in the text.
There is a difference between TRUNC(BIN) and making all your binary
fields COMP-5 (or COMPUTATIONAL-5).
There is no difference whatsoever. When TRUNC(BIN) is specified, the compiler simply treats all binary fields as COMP-5.
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
Although the term itself makes many experienced people think it will be faster ("it'll be like when I code it myself in Assembler") it is in fact slower, on the whole. The slowing-down increases as the field-size increases.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
It does. It truncates to field-size. Because it truncates to field-size, the intermediate fields must always be larger than the source fields, which means more instructions must be used, and different instructions.
Also, and important to know, the ON SIZE ERROR clause (which can be used with all arithmetic verbs) always only uses the PICture clause to determine that a size-error has occurred. That is, if you have a COMP-5 PIC S9(4), which can contain a maximum positive value of 32,767 and do this:
MULTIPLY that-field BY 10 GIVING that-field
ON SIZE ERROR
DISPLAY "Busted"
END-MULTIPLY
Any value above 9999 will cause the DISPLAY to be processed.
Which really means "don't use ON SIZE ERROR with COMP-5 or TRUNC(BIN)".
TRUNC(OPT) generates optimal code.
In isolation, it does. However this does not preclude further optimisations from compiler option OPTIMIZE/OPT across a wider context.
When using binary fields, always use the maximum PICture for the size of the field
A binary field with 1-4 digits will occupy a half-word, two bytes of storage. With 5-9 digits, a word, or a fullword, of four bytes. With 10-18 digits, a double-word of eight bytes.
The aged recommendation is to always specify four digits, nine digits and 18 digits (well, no-one really goes above nine, do they...?).
This is advice I've received in the past, and given out myself. However, in Enterprise COBOL it is not good advice.
The best advice here is to define the number of digits needed. This will at times improve performance, will never degrade performance, and will make the program easier to understand by best describing the data.
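To put the sizes to it (names invented for the illustration), the old recommendation amounts to definitions like these, one per storage unit:
01 WS-IN-A-HALFWORD   BINARY PIC 9(4).
01 WS-IN-A-FULLWORD   BINARY PIC 9(9).
01 WS-IN-A-DOUBLEWORD BINARY PIC 9(18).
Following the better advice, a field that can never exceed 999 would simply be BINARY PIC 9(3); it still occupies a halfword, but the compiler then knows the true maximum.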
When using binary fields, always make them signed.
More advice I've received and given in the past. Untrue with Enterprise COBOL. If a field can contain a negative value, make it signed. Otherwise make it unsigned.
At times, with interfaces, it is not explicit whether a field should be signed. However, it will be explicit from the maximum value expected. As will the field definition (the USAGE).
For instance, an SQL VARCHAR as a host-variable can have a maximum size of 32767 bytes. Since the actual length is held in a two-byte binary field, the field should be signed. Any value "above" 32767 will be misinterpreted by DB2/SQL.
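A sketch of such a host-variable definition, using the conventional pair of level-49 items with the signed two-byte length first (the names and the 200-byte maximum are invented for the illustration):
01 WS-SOME-VARCHAR.
   49 WS-SOME-VARCHAR-LEN  BINARY PIC S9(4).
   49 WS-SOME-VARCHAR-TEXT PIC X(200).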
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
Since the compiler has to take care of decimal truncation, and since anything which could lead to truncation would require the "next size up", a binary field of nine digits can require a double-word intermediate field. So requires code to convert to a double-word, and convert the result back from a double-word to a word. If nine digits are required, it will generally be better to define 10 digits and save on the conversions.
Note
The above is all known to hold true for Enterprise COBOL up to V4.2.
IBM has entirely rewritten the code-generation and optimisation (now at two possible levels of optimisation) for Enterprise COBOL V5. There is considerable improvement in the treatment of binary fields, including, for instance, only doing the truncation of values once it is known that truncation is necessary. I am not aware that the use of V5 changes anything here other than the scale of performance differences. All general usage of binary fields should be faster with V5 than with earlier versions of Enterprise COBOL.
Binary fields
COBOL, for binary fields, uses decimal maxima determined by the PICture size.
Such a field with PIC 9 can contain a maximum value of 9 before truncation. If signed, the range of values is -9 to +9. Values outside that range will be truncated.
For PIC 99 it is 99, and if signed -99 to +99.
For PIC 999, 999, and if signed -999 to +999.
You get the pictu... idea.
It is down to the compiler-implementation as to how those values are stored.
Indeed, according to the Standard, COBOL only recently (1985) gained support for binary fields (USAGE BINARY). Before that, which actual "non-display" fields were supported, and how, was down to USAGE COMPUTATIONAL, whose specifics were compiler-dependent.
Generally across compilers COMP, COMP-1 and COMP-2 (binary, with decimal maxima, short floating-point and long floating-point) are standard, though not part of the Standard. Beyond COMP-2, what the field definitions mean can vary amongst compilers.
So, first recommendation, suggest that your local site standards use BINARY instead of COMP for new code (and PACKED-DECIMAL instead of COMP-3, for packed-decimal fields). BINARY (and COMP-4) within Enterprise COBOL is simply an alias of COMP, so there is absolutely no problem in doing this.
There is another type of binary field, which is the native-binary field. In Enterprise COBOL this is USAGE COMP-5.
COMP-5 has its field-size determined by the PICture definition, but its maxima are that of the full bit-pattern possible for the field size. A PIC S9(4) COMP-5 can contain -32768 to 32767.
Note at this point that a native-binary field, and this may seem counter-intuitive, generally needs more generated machine-code to support its use. This is because it truncates to field-size, rather than PICture.
Note also that there is one place where this does not happen, which is ON SIZE ERROR, which will be true if the value exceeds the PICture size. Which means, to my mind, don't use ON SIZE ERROR with COMP-5 (or TRUNC(BIN), see soon) fields.
Compiler option TRUNC
The compiler option TRUNC defines how machine-code is generated for binary fields. There are three options:
TRUNC(BIN)
Truncation to field-size.
This treats all the non-native-binary fields in the program (COMP/COMP-4/BINARY) as native-binary (as though they had been defined as COMP-5).
This allows the full range of bit patterns to be used, but has impacts on performance.
TRUNC(STD)
Truncation to PICture size.
Generates machine-code for the COBOL Standard truncation to PICture size. PIC 9(n) can contain no more than n significant digits; values will be truncated whenever the field is a "target" (whenever the field's value changes).
TRUNC(OPT)
Truncation of any type only used if it happens to be convenient.
I describe this as being a contract between the coder and the compiler. The coder contracts to never (as in never) allow a value to exceed the PICture size. The compiler contracts to always get it right in such a case.
If the coder breaks the contract the coder is entirely to blame for the ensuing rubbish.
When to use each setting of TRUNC (further recommendation)
BIN Never. Use COMP-5 for individual fields where they require access to all bits (pay attention to SQL and CICS "system" fields, to external data from non-Mainframe sources, to inter-language communication between COBOL and Java/C/C++, and to anywhere else where the data-maxima for a field are beyond the PICture and it is not possible to make the field bigger, as the actual logical definition of the field size is outside your program).
STD use this unless all, as in all, your data always, as in always, conforms to PICture.
OPT use this only, as in only, if all, as in all, your data always, as in always, conforms to PICture.
If you have COMP PIC 99, for instance, you must not, when using OPT, allow that to have a value of 99 and then add one to it. Or anything similar.
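In code terms, this is the sort of sequence that breaks the contract (a made-up field):
01 WS-TINY-COUNT COMP PIC 99.

MOVE 99 TO WS-TINY-COUNT
ADD 1 TO WS-TINY-COUNT
Under TRUNC(OPT) the result of that ADD is whatever the generated code happens to produce; you cannot rely on getting either 100 or a decimal-truncated 0.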
The Answer
You used TRUNC(OPT), entering into the contract. You immediately broke the contract. It is your fault.
Warning
If your site is using TRUNC(OPT) and not everyone is fully aware of the implications, you will, as in will, have problems.
Substantiation of the Myths from above
There is a difference between TRUNC(BIN) and making all your binary
fields COMP-5 (or COMPUTATIONAL-5).
Define two fields in small program. They should be defined as COMP/COMP-4/BINARY (or COMPUTATIONAL/COMPUTATIONAL-4 if that is your bent).
In the program, add a literal to each of the fields (do this with two separate statements, to make it easier to follow, unless you are experienced with the generated code in a listing).
Compile the program with compiler options LIST,NOOFFSET (this will produce, in the compiler listing, output showing the generated machine-code in a so-called "pseudo-assembler" format) and TRUNC(BIN).
Copy the program. In the copy, change the USAGE of the two fields to COMP-5 (or COMPUTATIONAL-5).
Compile this program, again with LIST,NOOFFSET but this time the value for TRUNC is irrelevant as it does not affect COMP-5 fields.
Compare the output listings. If there is one byte difference, eat someone's hat.
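A sketch of such a test program (names invented); compile it once as below with LIST,NOOFFSET and TRUNC(BIN), then again with the two USAGEs changed to COMP-5, when the TRUNC setting no longer matters:
IDENTIFICATION DIVISION.
PROGRAM-ID. TRUNCTEST.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 FIELD-HALF BINARY PIC 9(4).
01 FIELD-FULL BINARY PIC 9(9).
PROCEDURE DIVISION.
    ADD 7 TO FIELD-HALF
    ADD 7 TO FIELD-FULL
    GOBACK.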
Native-binary is faster than COBOL binary (a binary field with decimal limits defined by the PICture clause).
From this discussion at IBM's COBOL Cafe: https://www.ibm.com/developerworks/community/forums/html/topic?id=ae9ef6bc-6e4e-43f8-a814-e66bea25fb8c&ps=25
Here's a multiply of a PIC 9(3) by a PIC 9(5).
With TRUNC(STD)
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
With TRUNC(BIN)
000019 MULTIPLY
00023C 4820 8030 LH 2,48(0,8) PICS9-4
000240 5840 8038 L 4,56(0,8) PICS9-8
000244 8E40 0020 SRDA 4,32(0)
000248 5D40 C000 D 4,0(0,12) SYSLIT AT +0
00024C 4E50 D120 CVD 5,288(0,13) TS2=16
000250 F154 D110 D123 MVO 272(6,13),291(5,13) TS2=0
000256 4E40 D120 CVD 4,288(0,13) TS2=16
00025A 9110 D115 TM 277(13),X'10' TS2=5
00025E D204 D115 D123 MVC 277(5,13),291(13) TS2=5
000264 4780 B05C BC 8,92(0,11) GN=10(00026C)
000268 9601 D119 OI 281(13),X'01' TS2=9
00026C GN=10 EQU *
00026C 4E20 D120 CVD 2,288(0,13) TS2=16
000270 FC82 D111 D125 MP 273(9,13),293(3,13) TS2=1
000276 D202 D128 C008 MVC 296(3,13),8(12) TS2=24
00027C D204 D12B D115 MVC 299(5,13),277(13) TS2=27
000282 4F20 D128 CVB 2,296(0,13) TS2=24
000286 F144 D12B D110 MVO 299(5,13),272(5,13) TS2=27
00028C 4F50 D128 CVB 5,296(0,13) TS2=24
000290 5C40 C000 M 4,0(0,12) SYSLIT AT +0
000294 1E52 ALR 5,2
000296 47C0 B08E BC 12,142(0,11) GN=11(00029E)
00029A 5A40 C004 A 4,4(0,12) SYSLIT AT +4
00029E GN=11 EQU *
00029E 1222 LTR 2,2
0002A0 47B0 B098 BC 11,152(0,11) GN=12(0002A8)
0002A4 5B40 C004 S 4,4(0,12) SYSLIT AT +4
0002A8 GN=12 EQU *
0002A8 5050 8040 ST 5,64(0,8)
It doesn't take any knowledge of IBM Assembler to work out which of those two pieces of code is going to run more quickly.
The difference in the line-numbers (19 Vs 23) is just down to the fact that TRUNC(BIN) makes the PICture size irrelevant, so where I had three calculations doing the same thing with different size fields, for TRUNC(BIN) the code for each was the same, because the size of each field is the same, a word/fullword of four bytes.
Native-binary (COMP-5/COMPUTATIONAL-5) does not truncate.
See the code immediately above. It is so massive due to the need to provide truncation. The need to provide decimal truncation is down to the COBOL Standard, it's what must happen in the language.
TRUNC(OPT) generates optimal code.
The code generated will always be the most efficient for that code-sequence. The same code-sequence will always generate the same code, before optimisation.
However, the optimizer is capable of spotting that a particular undisturbed state is available for a source-field earlier in the program, and replace part or all of the TRUNC(OPT) code with code relying on the previously-available value.
When using binary fields, always use the maximum PICture for the size of the field
From the same IBM COBOL Cafe discussion referenced above, with these definitions:
01 PIC9-3 BINARY PIC 999.
01 PIC9-5 BINARY PIC 9(5).
01 THE-RESULT8 BINARY PIC 9(8).
01 PIC9-4 BINARY PIC 9(4).
01 PIC9-8 BINARY PIC 9(8).
01 THE-RESULT BINARY PIC 9(8).
And these calculations:
MULTIPLY PIC9-4 BY PIC9-8
GIVING THE-RESULT
MULTIPLY PIC9-3 BY PIC9-5
GIVING THE-RESULT8
Here's the generated code for TRUNC(STD):
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8) THE-RESULT
000023 MULTIPLY
000258 5820 8008 L 2,8(0,8) PIC9-5
00025C 4C20 8000 MH 2,0(0,8) PIC9-3
000260 5020 8010 ST 2,16(0,8)
The first block of pseudo-assembler is with the number of digits in the PICture being the maximum that give the same field-size. A BINARY PIC 9(3) occupies a half-word, and 9(4) is the largest that can appear in a half-word. A PIC 9(5) occupies a word/fullword, and, given Myth 7, eight digits is used for that (to be fair to this particular Myth).
The second block is with the number of digits which represent the data accurately, and which don't happen to require truncation when a multiplication is carried out.
Using the "full-size" PICtures guarantees that unnecessary truncation will always occur.
The difference in the number of instructions is small, and LH is faster than L, so plus to the full-size on that. But M is much slower than L, and MH is slower than L but faster than M. So plus to the optimal size on that. And the D (a divide, which is slow, slow) is not required at all in the second block (because no truncation is required). So bad-boy to the full-size fields on that.
The code for TRUNC(OPT) is also faster for the optimal-size fields, although the difference between the two is not as great (because TRUNC(OPT) in this code-sequence decides it does not need the truncation to base-10 and would not in a million years consider the truncation to field-size).
When using binary fields, always make them signed.
Again from the same IBM COBOL Cafe discussion, here's same-length signed fields Vs unsigned fields, TRUNC(STD):
000019 MULTIPLY
000238 4830 8030 LH 3,48(0,8) PICS9-4
00023C 5C20 8038 M 2,56(0,8) PICS9-8
000240 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000244 5020 8040 ST 2,64(0,8) THE-RESULTS
000021 MULTIPLY
000248 4830 8018 LH 3,24(0,8) PIC9-4
00024C 5C20 8020 M 2,32(0,8) PIC9-8
000250 5D20 C000 D 2,0(0,12) SYSLIT AT +0
000254 5020 8028 ST 2,40(0,8)
The code is different from the above when compiled with TRUNC(OPT) or TRUNC(BIN), but within each of those options the signed and unsigned code-sequences are identical.
The presence or absence of a sign makes no difference to the code generated.
Except in one case, where Myth 7 comes into play. With a nine-digit binary field, using signed rather than unsigned does generate less code, but that code is still more than if eight digits were used.
Since nine decimal digits can fully fit within a word/fullword, there is no problem using nine decimal digits for a COMP/COMP-4/BINARY definition (without TRUNC(BIN)).
From the IBM Enterprise COBOL Version 4 Release 2 Performance Tuning paper, pp32-33:
The following shows the general performance considerations (from most efficient to least efficient) for the number of digits of precision for signed binary data items (using PICTURE S9(n) COMP) using TRUNC(OPT):
n is from 1 to 8: for n from 1 to 4, arithmetic is done in halfword instructions where possible; for n from 5 to 8, arithmetic is done in fullword instructions where possible
n is from 10 to 17: arithmetic is done in doubleword format
n is 9: fullword values are converted to doubleword format and then doubleword arithmetic is used (this is SLOWER than any of the above)
n is 18: doubleword values are converted to a higher precision format and then arithmetic is done using this higher precision (this is the SLOWEST of all for binary data items)
There is a similar issue with TRUNC(STD). TRUNC(BIN) already has the built-in slowness for the number of digits 1-9, so is not further affected.
From the publicly available documentation:
TRUNC(OPT) is a performance option. When TRUNC(OPT) is in effect, the compiler assumes that data conforms to PICTURE specifications in USAGE BINARY receiving fields in MOVE statements and arithmetic expressions. The results are manipulated in the most optimal way, either truncating to the number of digits in the PICTURE clause, or to the size of the binary field in storage (halfword, fullword, or doubleword).
Tip: Use the TRUNC(OPT) option only if you are sure that the data being moved into the binary areas will not have a value with larger precision than that defined by the PICTURE clause for the binary item. Otherwise, unpredictable results could occur. This truncation is performed in the most efficient manner possible; therefore, the results are dependent on the particular code sequence generated. It is not possible to predict the truncation without seeing the code sequence generated for a particular statement.
Read the "Tip" very carefully and see what it means for your situation. (Hint : it means it does not make sense to ask the question you did because it literally says that "whatever happens, it was unpredictable" or iow "there is no explanation for what happens").
To make the compiler behaviour predictable, switch to either TRUNC(BIN) or TRUNC(STD). STD is good for standards compliance but bad for CPU usage, BIN is good for CPU usage but requires you to be a bit careful (because decimal truncation simply will not happen).

SPSS percentile issue

I am working with SPSS 18.
I am using FREQUENCIES to calculate the 95th percentile of a variable.
FREQUENCIES SdrelPromSldDeu_Acr_5_0
/FORMAT=NOTABLE
/PERCENTILES 1,5,95,99.
The result is given in a table
Statistics
SdrelPromSldDeu_Acr_5_0
N Valid 8881
Missing 0
Percentiles 1 -1,001060644014
5 -1,000541440102
95 6619,140632636228
99 9223372,036854776000
But if I double-click the 9223372,036854776 to copy it, another number appears: 1.0757943411193715E7.
If I use MEANS to get the maximum value, the result is 2.4329524990388575E8, so the number that appears on the double-click seems possible.
I have seen 9223372,03 in other cases as well, as if it were some kind of upper limit SPSS is able to display.
Can anybody tell me if the 9223372,03 represents anything useful? Should I trust the bigger number?
Thanks!
It appears to be a bug in the display of SPSS.
The number you have shown is eerily similar to
9223372036854775807
which is the highest value possible if a variable is declared as a long integer.
see also:
https://en.wikipedia.org/wiki/9223372036854775807
Since your actual number is about 11 orders of magnitude smaller, it should not reach this limit. Hence the conclusion that it must be a bug in the display software.
Do not trust it.
(the number behind may or may not be right, but the 9223372,03 is surely wrong)

Why does this code cause the machine to crash?

I am trying to run this code but it keeps crashing:
log10(x):=log(x)/log(10);
char(x):=floor(log10(x))+1;
mantissa(x):=x/10**char(x);
chop(x,d):=(10**char(x))*(floor(mantissa(x)*(10**d))/(10**d));
rnd(x,d):=chop(x+5*10**(char(x)-d-1),d);
d:5;
a:10;
Ibwd:[[30,rnd(integrate((x**60)/(1+10*x^2),x,0,1),d)]];
for n from 30 thru 1 step -1 do Ibwd:append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd);
Maxima crashes when it evaluates the last line. Any ideas why it may happen?
Thank you so much.
The problem is that the difference becomes negative and your rounding function dies horribly with a negative argument. To find this out, I changed your loop to:
for n from 30 thru 1 step -1 do
block([],
print (1/(2*n-1)-a*last(first(Ibwd))),
print (a*last(first(Ibwd))),
Ibwd: append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd),
print (Ibwd));
The last difference printed before everything fails miserably is -316539/6125000. So now try
rnd(-1,3)
and see the same problem. This all stems from the fact that you're taking the log of a negative number, which Maxima interprets as a complex number by analytic continuation. Maxima doesn't evaluate this until it absolutely has to and, somewhere in the evaluation code, something's dying horribly.
I don't know the "fix" for your specific example, since I'm not exactly sure what you're trying to do, but hopefully this gives you enough info to find it yourself.
If you want to deconstruct a floating point number, let's first make sure that it is a bigfloat.
say z: 34.1b0
You can access the parts of a bigfloat by using lisp, and you can also access the mantissa length in bits by ?fpprec.
Thus ?second(z)*2^(?third(z)-?fpprec) gives you :
4799148352916685/140737488355328
and bfloat(%) gives you :
3.41b1.
If you want the mantissa of z as an integer, look at ?second(z)
Now I am not sure what it is that you are trying to accomplish in base 10, but Maxima
does not do internal arithmetic in base 10.
If you want more bits or fewer, you can set fpprec,
which is linked to ?fpprec. fpprec is the "approximate base 10" precision.
Thus fpprec is initially 16
?fpprec is correspondingly 56.
You can easily change them both, e.g. fpprec:100
corresponds to ?fpprec of 335.
If you are diddling around with float representations, you might benefit from knowing
that you can look at any of the lisp by typing, for example,
?print(z)
which prints the internal form using the Lisp print function.
You can also trace any function, your own or system function, by trace.
For example you could consider doing this:
trace(append,rnd,integrate);
If you want to use machine floats, I suggest you use, for the last line,
for n from 30 thru 1 step -1 do
Ibwd:append([[n-1,rnd(1/(2.0*n - 1.0)-a*last(first(Ibwd)),d)]],Ibwd);
Note the decimal points. But even that is not quite enough, because integration
inserts exact structures like atan(10). Trying to round these things, or compute log
of them is probably not what you want to do. I suspect that Maxima is unhappy because log is given some messy expression that turns out to be negative, even though it initially thought otherwise. It hands the number to the lisp log program which is perfectly happy to return an appropriate common-lisp complex number object. Unfortunately, most of Maxima was written BEFORE LISP HAD COMPLEX NUMBERS.
Thus the result (log -0.5)= #C(-0.6931472 3.1415927) is entirely unexpected to the rest of Maxima. Maxima has its own form for complex numbers, e.g. 3+4*%i.
In particular, the Maxima display program predates the common lisp complex number format and does not know what to do with it.
The error (stack overflow !!!) is from the display program trying to display a common lisp complex number.
How to fix all this? Well, you could try changing your program so it computes what you really want, in which case it probably won't trigger this error. Maxima's display program should be fixed, too. Also, I suspect there is something unfortunate in simplification of logs of numbers that are negative but not obviously so.
This is probably waaay too much information for the original poster, but maybe the paragraph above will help out and also possibly improve Maxima in one or more places.
It appears that your program triggers an error in Maxima's simplification (algebraic identities) code. We are investigating and I hope we have a bug fix soon.
In the meantime, here is an idea. Looks like the bug is triggered by rnd(x, d) when x < 0. I guess rnd is supposed to round x to d digits. To handle x < 0, try this:
rnd(x, d) := if x < 0 then -rnd1(-x, d) else rnd1(x, d);
rnd1(x, d) := (... put the present definition of rnd here ...);
When I do that, the loop runs to completion and Ibwd is a list of values, but I don't know what values to expect.
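For reference, assembling that suggestion with the original definitions gives something like the following (nothing new here beyond the pieces already shown):
log10(x):=log(x)/log(10);
char(x):=floor(log10(x))+1;
mantissa(x):=x/10**char(x);
chop(x,d):=(10**char(x))*(floor(mantissa(x)*(10**d))/(10**d));
rnd1(x,d):=chop(x+5*10**(char(x)-d-1),d);
rnd(x,d):=if x < 0 then -rnd1(-x,d) else rnd1(x,d);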
