Compute: Warn when low-order truncation occurs - cobol

I have a result field specified as
01 MY-RESULT VALUE +0 USAGE COMP-3 PIC S9(13)V99
Imagine I multiply two factors:
COMPUTE MY-RESULT = A * B
What is the best way to detect low-order truncation in MY-RESULT?
E.g. when A=B=2.01.
The ON SIZE ERROR clause is not triggered.

Something that will work with all vendors and even the oldest compilers (as you did not specify any dialect, that seems to be the most important part): if it matters, use an additional target field with more decimal positions and check for equality afterwards:
COMPUTE MY-RESULT RESULT-WITH-MORE-DECIMALS = A * B
IF MY-RESULT NOT = RESULT-WITH-MORE-DECIMALS
...
END-IF
ON SIZE ERROR is only raised for high-order (upper) truncation, not for lost decimal places.
If this 2014 feature is available for your compiler, you could use INTERMEDIATE ROUNDING IS PROHIBITED (the draft had it as ROUNDED-MODE PROHIBITED), which will show you this problem (if the EC-SIZE-TRUNCATION exception is enabled). Beware of one part: this is an exception with a "fatal" category...
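A rough sketch of the extra-target-field approach (the definitions of A and B are assumptions here; two factors with two decimal places each need four decimal places in the target to hold the product exactly, and the *> comment assumes a compiler that accepts floating comments):
01 A                         PIC S9(6)V99    COMP-3 VALUE +2.01.
01 B                         PIC S9(6)V99    COMP-3 VALUE +2.01.
01 MY-RESULT                 PIC S9(13)V99   COMP-3 VALUE +0.
01 RESULT-WITH-MORE-DECIMALS PIC S9(13)V9(4) COMP-3 VALUE +0.
...
COMPUTE MY-RESULT RESULT-WITH-MORE-DECIMALS = A * B
IF MY-RESULT NOT = RESULT-WITH-MORE-DECIMALS
    *> MY-RESULT holds 4.04, RESULT-WITH-MORE-DECIMALS holds 4.0401
    DISPLAY "LOW-ORDER TRUNCATION IN MY-RESULT"
END-IF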

Related

MULTIPLY doesn't behave as I would expect

I have this COBOL program, meant to calculate a factorial:
IDENTIFICATION DIVISION.
PROGRAM-ID. Factorial-hopefully.
AUTHOR. Darth Egregious.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 Keeping-Track-Variables.
05 Operand PIC S99 VALUE 0.
05 Product PIC S99 VALUE 1.
PROCEDURE DIVISION.
PERFORM-FACTORIAL.
DISPLAY SPACES
PERFORM VARYING Operand FROM 6 BY -1 UNTIL Operand = 0
DISPLAY "Before Product " Product " Operand " Operand
MULTIPLY Product By Operand GIVING Product
DISPLAY "After Product " Product " Operand " Operand
END-PERFORM
DISPLAY Product.
STOP RUN.
I run it like this:
cobc -free -x -o a.out fact.cbl && ./a.out
And I get this strange output:
Before Product +01 Operand +06
After Product +06 Operand +06
Before Product +06 Operand +05
After Product +30 Operand +05
Before Product +30 Operand +04
After Product +30 Operand +04
Before Product +30 Operand +03
After Product +90 Operand +03
Before Product +90 Operand +02
After Product +90 Operand +02
Before Product +90 Operand +01
After Product +90 Operand +01
+90
My decrementing loop is working as expected, but the MULTIPLY command is behaving strangely. It's doing 1*6, and 6*5 correctly, but 30*4 doesn't seem to work, then 30*3 does, and finally 90*2 doesn't work again. Does COBOL not like multiplying by powers of two or something?
05 Operand PIC S99 VALUE 0.
05 Product PIC S99 VALUE 1.
When you multiply 30*4 and 90*2, the results are larger than the PICTURE clause, S99, can hold.
Increase the size of the PIC clause to, say, S999.
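For example, with the data items from the question (a minimal sketch; the *> comment syntax works with the cobc -free setup used in the question):
05 Operand PIC S99  VALUE 0.
05 Product PIC S999 VALUE 1. *> 6! = 720 needs three digits
Anything beyond 6! outgrows three digits quickly, so size Product for the largest factorial you intend to compute.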
Response to comments:
Technically, the result is undefined [COBOL 85], so doing nothing is a valid choice. Other implementations will truncate the value, giving a different result.
So it isn't so much the language as it is the implementation.
The language also allows the SIZE ERROR phrase to catch truncation errors. In that situation, the result is unaltered, but additional code may be executed to indicate that the error occurred.
With COBOL 2002, the result is defined by the implementor, if the ON SIZE ERROR phrase isn't specified and checking of the EC-SIZE-TRUNCATION exception is not active.
Quote from 2002 standard:
F.1 Substantive changes potentially affecting existing programs
15) Size error condition with no SIZE ERROR phrase. If a size error condition occurs, the statement in which it occurs contains no SIZE ERROR or NOT SIZE ERROR phrase, and there is no associated declarative, the implementor defines whether the run unit is terminated or execution continues with incorrect values.
Justification:
In the previous COBOL standard, the rules for size error stated that execution would continue with undefined values, but it was not clear where execution would continue, particularly in conditional statements. Additionally, continued execution with incorrect results was not acceptable for many critical applications, where it might cause corruption of databases, incorrect continued execution of the program, and potentially a multitude of additional errors. It was prohibitive to modify programs to add ON SIZE ERROR for every affected statement. Responding to user requirements, several implementors terminated execution of the program in this situation; in some cases, the implementor allowed selection of termination based on a compiler directive.
The number and criticality of applications that terminated in this situation provides strong justification for this change. It is expected that this change will have little impact on existing programs because implementors are free to continue or terminate, in accordance with their implementation of the previous COBOL standard.
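To illustrate the SIZE ERROR phrase mentioned above, applied to the MULTIPLY from the question (a sketch, not the only way to write it):
MULTIPLY Product BY Operand GIVING Product
    ON SIZE ERROR
        DISPLAY "Size error: " Product " * " Operand " does not fit"
END-MULTIPLY
With the original PIC S99, the display fires at 30 * 4 and again at 90 * 2, and Product keeps its previous value in both cases, which matches the output shown in the question.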

Logic behind COBOL code

I am not able to understand the logic behind these lines:
COMPUTE temp = RESULT - 1.843E19.
IF temp IS LESS THAN 1.0E16 THEN
Data definition:
000330 01 VAR1 COMP-1 VALUE 3.4E38. // 3.4 x 10 ^ 38
Here are those lines in context (the sub-program returns a square root):
MOVE VAR1 TO PARM1.
CALL "SQUAREROOT_ROUTINE" USING
BY REFERENCE PARM1,
BY REFERENCE RESULT.
COMPUTE temp = RESULT - 1.843E19.
IF temp IS LESS THAN 1.0E16 THEN
DISPLAY "OK"
ELSE
DISPLAY "False"
END-IF.
These lines are just trying to test whether the result returned by the SQUAREROOT_ROUTINE is correct. Since the program is using float values and rather large numbers, this might look a bit complicated. Let's just do the math:
You start with 3.4E38; its square root is 1.84390889...E19.
By subtracting 1.843E19 (i.e. the approximate result) and comparing the difference against 1.0E16 the program is testing whether the result is between 1.843E19 and 1.843E19+1.0E16 = 1.844E19.
Note that this test would not catch an error if the result from SQUAREROOT_ROUTINE was too low instead of too high. To catch both types of wrong results, you should compare the absolute value of the difference against the tolerance.
You might ask "Why make things so complicated?" The thing is that float values usually are not exact, and depending on the precision used you will get slightly different results due to rounding errors.
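A sketch of that symmetric check (FUNCTION ABS is an intrinsic from the newer standards; on compilers without it, test both bounds explicitly):
COMPUTE temp = RESULT - 1.843E19
IF FUNCTION ABS(temp) IS LESS THAN 1.0E16
    DISPLAY "OK"
ELSE
    DISPLAY "False"
END-IF
This accepts only results between roughly 1.842E19 and 1.844E19, instead of accepting anything below 1.844E19 as the original test does.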
Well, the logic itself is very straightforward: you are subtracting 1.843*(10^19) from the result you get from the SQUAREROOT_ROUTINE and putting that value in the variable called temp. Then, if the value of temp is less than 1.0*(10^16), you print a line to SYSOUT that says "OK"; otherwise you print "False" (if the value was equal to or greater than the tolerance).
If you mean the logic as to why this code exists, you will need to talk to the author of the code, but it looks like a debugging display that was left in the program.

sscanf in flex changing value of input

I'm using flex and bison to read in a file that has text but also floating point numbers. Everything seems to be working fine, except that I've noticed that it sometimes changes the values of the numbers. For example,
-4.036 is (sometimes) becoming -4.0359998, and
-3.92 is (sometimes) becoming -3.9200001
The .l file is using the lines
static float fvalue ;
sscanf(specctra_dsn_file_yytext, "%f", &fvalue) ;
The values pass through the yacc parser and arrive at my own .cpp file as floats with the values described. Not all of the values are changed, and even the same value is changed in some occurrences, and unchanged in others.
Please let me know if I should add more information.
float cannot represent every number. It is typically 32-bit and so is limited to at most 2^32 different numbers. -4.036 and -3.92 are not in that set on your platform.
float is typically encoded using the IEEE 754 single-precision binary floating-point format (binary32) and rarely encodes fractional decimal values exactly. When a value like -3.92 is assigned, the value actually stored is one close to it, but maybe not exact. In other words, the conversion of -3.92 to float would not have been exact whether it was done by assignment or by sscanf().
float x1 = -3.92;
// float has an exact value of -3.9200000762939453125
// viewed to 6 significant digits: -3.92000
// OP reported -3.9200001
float x2 = -4.036;
// float has an exact value of -4.035999774932861328125
// viewed to 6 significant digits: -4.03600
// OP reported -4.0359998
Printing these values to more than a certain number of significant decimal digits (typically 6 for float) can be expected not to match the original text. See Printf width specifier to maintain precision of floating-point value for a deeper discussion on the C side.
OP could lower expectations of how many digits will match. Alternatively, use double; then the problem typically only shows up when more than 15 significant decimal digits are viewed.
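To see where those exact values come from: a binary32 float is an integer significand of at most 24 bits scaled by a power of two, and the nearest such values to the two constants are
-3.92  is stored as -16441672 / 2^22 = -3.9200000762939453125
-4.036 is stored as  -8464105 / 2^21 = -4.035999774932861328125
(the significands come from rounding 3.92 * 2^22 and 4.036 * 2^21 to the nearest integer), which matches the exact values in the comments above.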

Large lua numbers are being printed incorrectly

I have the following test case:
Lua 5.3.2 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> foo = 1000000000000000000
> bar = foo + 1
> bar
1000000000000000001
> string.format("%.0f", foo)
1000000000000000000
> string.format("%.0f", bar)
1000000000000000000
That last line should be 1000000000000000001, since that's the value of bar, but for some reason it's not. This doesn't only apply to 1000000000000000000, I've yet to find another number over that one which gives the correct value. Can anyone give an explanation for why this happens?
You're formatting the number as floating-point, not integer. That's what %.0f is doing. At some point, floats lose precision. double, for example, will lose precision after about 16 decimal digits.
If you want to format an integer as an integer, then you need to format it as an integer, using standard printf rules:
string.format("%i", bar)
log2(1000000000000000000) is between 59 and 60, which means that the binary representation of that number needs 60 bits. double-precision floating point numbers have only 53 bits of precision, plus a power-of-two exponent with 11 bits of range. So to store that large of a number as floating point (which is what you requested with the %f format specifier), six to seven bits of precision are chopped off the end of the number, and the whole thing is multiplied by a power of two to get it back in range (2^59 in this case, I think). Chopping off those final bits removes the precision that allows 1000000000000000000 and 1000000000000000001 to be distinct from each other.
(This is not a particularly precise description of floating point, apologies if my numbers or descriptions are not exact.)
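To make that argument concrete, a worked version of the bit counting above:
2^59 = 576460752303423488 <= 10^18 < 2^60
spacing between adjacent doubles in [2^59, 2^60) = 2^(59-52) = 2^7 = 128
So consecutive integers in that range cannot all be distinguished once converted to a double. 10^18 itself happens to be exactly representable (10^18 = 2^18 * 5^18, and 5^18 = 3814697265625 fits comfortably in 53 bits), so 10^18 + 1 rounds back to 10^18 when it is converted to a float (a C double) for the %f format.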

Unexpected result subtracting decimals in ruby [duplicate]

Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, you should format both results explicitly, using String#% and you should format both results the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens to not equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
