COBOL ARITH(EXTEND) - COBCH0143S Unknown IDENTIFICATION DIVISION paragraph

As we know, COBOL supports numbers of up to 18 digits by default. This can be extended to 31 digits with ARITH(EXTEND).
IBM - ARITH option syntax
If I try to declare a variable with 23 digits
01 NUM31 PIC 9(23).
I get the error: COBCH0213S Item is longer than USAGE allows or contains too many numeric positions : C:\fileSample.CBL(56,26)
I tried adding the ARITH(EXTEND) directive from the link above at the top of the file, but without success - I get the error: COBCH0143S Unknown IDENTIFICATION DIVISION paragraph
I am using Micro Focus Net Express 4.0 as my development IDE.

There is no actual standard for maximum sizes in COBOL; it depends on the compiler.
The link you show refers to an option for an IBM mainframe compiler, which normally supports 18 digits in a binary or decimal number but can be extended to 31 using the documented option.
However, judging by the error message, you are using a completely different implementation from Micro Focus, which, according to the documentation below, should support 38 digits by default as long as you do NOT specify the ANSI85 or ISO2000 dialects.
https://www.microfocus.com/documentation/visual-cobol/vc60/DevHub/HRPGRHPROG0A.html
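So, under the default Micro Focus dialect, the 23-digit declaration should compile without any directive. A minimal sketch to test this (program and data names are made up for illustration):
       identification division.
       program-id. bignum.
       data division.
       working-storage section.
      * 23 digits: more than IBM's 18 (or 31 with ARITH(EXTEND)),
      * but within the Micro Focus default maximum of 38
       01 num23 pic 9(23).
       procedure division.
           move 1 to num23
           display num23
           stop run.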

Related

Getting current line number in Cobol

Is it possible to get and display the current line number in a COBOL program?
For example, C allows it like this:
...
printf("Current line = %d\n", __LINE__);
...
Short answer: No.
There is no portable COBOL way of doing this, and certainly nothing that works in all places the way __LINE__ does.
Long answer with potential alternatives:
COBOL 2002 added intrinsic functions for exception handling. Using these, you can query the location where the last error happened, depending on which checks are activated.
You could hack something together by raising a non-fatal exception and, ideally on the same line, using that function...
From the standard:
The EXCEPTION-LOCATION function returns an alphanumeric character string, part of which is the implementor-defined location of the statement associated with the last exception status.
So this may provide you with the line number, as the returned value depends on the implementation. Additionally, it seems that, at the time of writing, neither the IBM nor Micro Focus nor Fujitsu compilers support that intrinsic function at all.
The GnuCOBOL implementation returns a semicolon-separated list with the last entry being the line number.
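A minimal GnuCOBOL sketch of that hack (exception checking must be enabled, e.g. with cobc -fec=all; whether EXCEPTION-LOCATION is populated this way is implementation-defined, so treat this as an illustration, not a guarantee):
       identification division.
       program-id. whereami.
       data division.
       working-storage section.
       01 n pic 9 value 0.
       01 q pic 9.
       procedure division.
      * deliberately raise a non-fatal exception, then ask where
           divide 1 by n giving q
               on size error
                   display function exception-location
           end-divide
           stop run.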
The upcoming COBOL standard added the MODULE-NAME intrinsic function - but this will only give the module's name, not the line reference.
If you are free to choose which implementation you use, then adding an extra register COB_SOURCE_LINE / COB_SOURCE_FILE to GnuCOBOL should be relatively easy...
If the intent is tracing of some kind: many compilers have an extension READY TRACE / RESET TRACE. With those two statements (and possibly compiler directives / options) they will at least show the names of sections and paragraphs reached; some may also show the line number. Often this output can be redirected to a file and will otherwise go to the default error stream.
If you use GnuCOBOL and compile with -ftrace-all, you can also use that for line or statement tracing with a self-defined format as specified in COB_TRACE_FORMAT (which can also be adjusted within the COBOL program and limited to the line number).
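For example (a sketch; the %P and %L format specifiers here are from memory, so check your GnuCOBOL version's documentation):
cobc -x -ftrace-all prog.cob
COB_SET_TRACE=Y COB_TRACE_FORMAT='%P line %L' ./prog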
Q: Is it possible to get and display the current line number in a COBOL program?
There was a feature through COBOL 85 called the DEBUG module. The feature was made obsolete in COBOL 85 and subsequently removed in COBOL 2002. While debugging lines were still available in the 2002 standard, the DEBUG module itself was removed.
NOTE: The DEBUG module may still be available in current compilers.
The feature requires debugging mode in the source-computer paragraph. If that clause is removed, source lines with a D or d in column 7 are treated as comments.
Declaratives must be added to access debug-line, which is the standard name for the source line number.
I have coded the source such that the source line number of wherever I place perform show-line will be displayed. Notice that show-line itself doesn't do anything.
Source:
       program-id. dbug.
       environment division.
       source-computer. computer-name debugging mode.
       object-computer. computer-name.
       data division.
       working-storage section.
       01 char pic x.
       procedure division.
       declaratives.
      ddebug section.
      d    use for debugging show-line.
      d    display "Source-line: " debug-line.
       end declaratives.
       main-line.
       begin.
           display "Before"
      d    perform show-line
           display "After"
           accept char
           stop run.
      dshow-line.
       end program dbug.
Each implementor has their own means of activating the feature. For the system I use, it's a switch parameter (+D) on the command line; without it the line number will not show. (For GnuCOBOL 3.2 it is, apparently, the environment variable COB_SET_DEBUG with a value of 'Y', 'y' or '1'.)
Command line:
dbug (+D)
Display:
Before
Source-line: 17
After
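The GnuCOBOL equivalent would be something like this (a sketch; the -fdebugging-line flag is from the 3.x docs, so verify against your version):
cobc -x -fdebugging-line dbug.cob
COB_SET_DEBUG=Y ./dbug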

Add to zero...What is it for?

Why is such code used in some applications instead of a MOVE?
add 16 to ZERO giving SOME-RESULT
I spotted this in professionally written code at several spots.
Source is on this page
Why is such code used in some applications instead of a MOVE?
add 16 to ZERO giving SOME-RESULT
Without seeing more of the code, it appears that it could be a translation of IBM Assembler to COBOL. In particular, the ZAP (Zero and Add Packed) instruction may be literally translated to the above instruction, particularly if SOME-RESULT is COMP-3. Thus, someone checking the translation could see that the ZAP instruction was faithfully translated.
Or, it could be an assembler programmer's idea of a joke.
Having seen the code, I also note the use of
subtract some-data-item from some-data-item
which is used instead of
move zero to some-data-item
This is consistent with operations used with packed decimal fields in IBM Assembly, where there are no other instructions to accomplish "flexible" moves. By flexible, I mean that the packed decimal instructions contain a length field so that specific size MVC instructions need not be used.
This particular style, being unusual, may be related to catching copyright violations.
From my experience, I'm pretty sure I know the reason why the programmer would have done this. It has something to do with the binary representation of the number.
I bet SOME-RESULT is a packed-decimal (or COMP-3) format number. Let's assume the field is defined like this
05 SOME-RESULT PIC S9(5) COMP-3.
This results in a 3-byte field with a hex representation like this
x'00016C'
The decimal number is encoded as binary-coded decimal (BCD, one decimal digit per half-byte), and the last half-byte holds the sign.
Let's take a look at how the sign is defined:
if it is one of x'C', x'A', x'F', x'E' (café), then the number is positive
if it is one of x'B', x'D', then the number is negative
any of x'0'..x'9' are not valid signs, so we can distinguish signed packed decimals from unsigned ones.
However, a zoned number (PIC 9(5) DISPLAY) - as in the source code - looks like this:
x'F0F0F0F1F6'
As you can see, each decimal digit is an EBCDIC character with the 'zone' part (the first half-byte) always being x'F'.
Now we get closer to your question!
What happens when we use
MOVE 16 TO SOME-RESULT
If you just MOVE a number to such a field, this is compiled into a PACK instruction at the machine-code level.
PACK SOME-RESULT,=C'16'
A PACK instruction takes a zoned number and packs it by picking only the second half-byte of each byte and storing it in the half-bytes of the packed number - with one exception: when it comes to the last byte, it simply swaps the two half-bytes and stores them in the last byte of the packed number.
This means that the zone of the last byte of the zoned decimal becomes the sign in the packed decimal:
x'00016F'
So now we have an x'F' as the sign – which is a valid positive sign.
However, what happens if we use this COBOL statement instead?
ADD 16 TO ZERO GIVING SOME-RESULT
This compiles into multiple machine-level instructions
PACK SOME_RESULT,=C'0'
PACK TEMP,=C'16'
AP SOME_RESULT,TEMP
(or similar - the key point is that it needs an AP somewhere)
This makes a slight difference in the result, because the AP (add packed) instruction always sets the resulting sign to either x'C' for a positive or x'D' for a negative result.
So the difference lies in the sign
x'00016C'
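To see both behaviours side by side, here is a minimal sketch (program and field names invented; the byte values in the comments follow the PACK/AP reasoning above - an actual compiler may choose different instructions for literals):
       identification division.
       program-id. signdemo.
       data division.
       working-storage section.
       01 some-result pic s9(5) comp-3.
       01 raw-bytes redefines some-result pic x(3).
       procedure division.
           move 16 to some-result
      * per the PACK explanation: raw-bytes is now x'00016F'
           add 16 to zero giving some-result
      * per the AP explanation: raw-bytes is now x'00016C'
           stop run.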
Finally, the question is why one would make this distinction at all. After all, both x'F' and x'C' are valid positive signs. So why care?
There is one situation when this slight difference can cause big problems: When the packed decimal is part of an index key, then we would not get a match, even though the numbers are semantically identical!
Because this situation occurred quite often in older databases like VSAM and DL/I (later: IMS/DB), it became good practice to "normalize" packed decimals if they were part of an index key.
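A common normalization idiom (key-field is a made-up name; this relies on the compiler emitting a packed add, which is typical but not guaranteed) is simply
add 0 to key-field
since the AP instruction rewrites the sign nibble as x'C' or x'D' regardless of what was there before.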
However, some programmers adopted the practice without knowing why, so you may come across code that uses this "normalization" even though the data are not used for index keys.
You might also wonder why a compiler does not simply optimize away the ADD 16 TO ZERO. I'm pretty sure some once did, but that broke a lot of applications, so this particular optimization was removed again, or at least made a non-default option with warnings.
Additional useful info
Note that at least the Enterprise COBOL for z/OS compiler lets you see exactly the machine code produced from your source if you use the LIST compile option (see this example output). I recommend always compiling with the options LIST, MAP, OFFSET and XREF, because they enable you to find the exact problem in your COBOL source even when all you have is a program dump from an abend.
Anyway, good programming practice is to care not about the compiler or the machine code, but about the other programmers who will have to maintain, and thus read and understand, the code. Good practice is to prefer simple and readable instructions, and to document the reasons (right in the code) when deviating from that rule.
Some programmers like to do things "just because they can". I have a feeling that is what you are seeing here. It makes about as much sense as writing
a := 0 + b
would in Go.

Why does flang Fortran print add a new line at a certain width? [duplicate]

I want to write the following output to a txt file using f77:
14 76900.56273 0.000077 -100000 1000000000 -0.769006
I use:
write(6,*) KINC, BM, R2, AF, BK, BM/AF
without any format (which works well in terms of decimal digits). However, in my txt file the output is written as:
14 76900.56273 0.000077 -100000
1000000000 -0.769006
I think this is because there is a fixed column-width limit by default. I don't know if it is possible to change this so that I can just copy and paste the output into Excel.
I've looked at FORTRAN 77 Language Reference but I haven't found a way to do it. Any ideas? Thanks
Use a format,
or check your compiler's options.
If your compiler is one of DEC/Compaq/Intel, read this link:
http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/fortran-win/hh_goto.htm#GUID-C6A40AAC-81D8-4DD8-A792-62792B3AC213.htm#GUID-C6A40AAC-81D8-4DD8-A792-62792B3AC213
List-directed output (fmt=*): 80-column limit by default.
"There is a property of list-directed sequential WRITE statements called the right margin. If you do not specify RECL as an OPEN statement specifier or in environmental variable FORT_FMT_RECL, the right margin value defaults to 80. When RECL is specified, the right margin is set to the value of RECL. If the length of a list-directed sequential WRITE exceeds the value of the right margin value, the remaining characters will wrap to the next line. Therefore, writing 100 characters will produce two lines of output, and writing 180 characters will produce three lines of output."
In Intel's manual, blue indicates extensions to the Fortran standard. These extensions (non-standard features) may or may not be implemented by other compilers that conform to the language standard.
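For example, with compilers that support RECL on a sequential formatted file (an extension, so this may not work everywhere; the unit number and file name are arbitrary):
      open(unit=10, file='out.txt', recl=200)
      write(10,*) KINC, BM, R2, AF, BK, BM/AF
      close(10)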
Oracle (Sun) f77
http://docs.oracle.com/cd/E19957-01/805-4939/6j4m0vnbu/index.html#z400074369ac
"Output lines longer than 80 characters are avoided where possible"
With the asterisk as the format, you are using list-directed I/O. This is intended as a convenience; it gives the programmer minimal control, with few restrictions on the compiler and incomplete portability. The compiler is free to determine aspects such as line length. If you want control over line length, switch to an explicit format.
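For instance, something like this produces the question's line in one record (assuming implicit typing, so KINC, AF and BK are integers while BM and R2 are reals; adjust the field widths to taste):
      write(6,'(I4,F13.5,F10.6,I8,I12,F10.6)')
     &    KINC, BM, R2, AF, BK, BM/AF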
P.S. Why use FORTRAN 77? Fortran 90/95/2003 is easier to use and more powerful, and gfortran is an open-source compiler.

Internal format of TDateTime in Delphi 7?

I've spent many hours researching this and am pretty stuck. My question is: has the internal format of a Delphi TDateTime changed between Delphi 7 (released in 2002 or so) and today?
Scenario: I'm reading a binary logfile created by a Delphi 7 app, and the vendor tells me the record contains a TDateTime, but decoding the bits shows it's clearly not standard IEEE 754 floating point, even though the TDateTime produced by modern Delphi is.
But it is some kind of floating point, with around 15 bits of exponent and 45 bits of significand (as opposed to 11 and 53 bits in IEEE 754), and the leading bit is a 1 (which in IEEE 754 indicates a negative number) for numbers that are clearly not negative, such as the current date/time.
Hints in old documentation suggested that TDateTime "read as" a double but wasn't necessarily represented internally as one, which means that the internal format would be mostly invisible except where these TDateTimes were written out in binary form.
My suspicion is that the change occurred with Delphi 8, which added .NET support, but I simply can't find any references to this anywhere. I have perl code (!) that picks apart these types mostly working, but I'd love to find a formal spec so I can do it properly.
Any old-timers run into this?
~~~ Steve
Nothing has changed since Delphi 7. In Delphi 7, and in fact in previous versions, TDateTime is an IEEE 754 double, measuring the number of days since the Delphi epoch (30 December 1899).
You are going to need to get in touch with the software vendor and work out what this data's format really is. It would be surprising if the format were a non-IEEE-754 floating-point type. Are you quite sure that it is floating point?
As for BCB3, BCB6 and D4, it's exactly the IEEE 754 double-precision floating-point format; in the VCL source file system.pas (as included in BCB6) it's defined thus:
TDateTime = type Double;
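If you want to check what your own compiler produces, a quick hex dump of a TDateTime's 8 bytes (a console-app sketch; nothing here is vendor-specific):
program DumpDT;
{$APPTYPE CONSOLE}
uses SysUtils;
var
  dt: TDateTime;
  b: array[0..7] of Byte absolute dt; { the same 8 bytes, viewed as bytes }
  i: Integer;
begin
  dt := Now;
  for i := 7 downto 0 do
    Write(IntToHex(b[i], 2));         { most significant byte first }
  WriteLn;
end.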

How can I print a huge number in Lua without using scientific notation?

I'm dealing with timestamps in Lua showing the number of microseconds since the Epoch (e.g. "1247687475123456").
I would really like to be able to print that number in all its terrible glory, but Lua insists on printing it in scientific notation. I've scoured the available documentation about printing a formatted string, but the only relevant options I found were "print in scientific notation" (%e/%E) and "automatically use scientific notation if the number is very long" (%g). No option seems to be available to print the number in its plain form.
I realize that I could write a function that takes the original number, does some dividing by 10 and prints the digits in a loop, but that seems like an inelegant hassle. Surely there's some way of doing this that's built into the language?
> print(string.format("%18.0f",1247687475123456))
1247687475123456
Lua, as usually configured, uses your platform's double-precision floating-point format to store all numbers. For most desktop platforms today, that will be the 64-bit IEEE 754 format. The conventional wisdom is that integers in the range -1E15 to +1E15 can be safely assumed to be represented exactly.
In any case, the string.format() function passes its arguments through (with some minor tweaks) to the platform's implementation of printf(). The format string understood by printf() includes %e and %E to force "scientific" notation, %f to force plain decimal notation, and %g and %G to choose whichever notation is shorter.
For example:
E:\...>lua
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> a = 1e17/3
> print(string.format("%f",a))
33333333333333332.000000
> print(string.format("%e",a))
3.333333e+016
> print(string.format("%.0f",a))
33333333333333332
Note that if the value fits within a 32-bit signed integer range, you can also use the %d format. However, results are not well defined if the value exceeds that range, and system timestamps in microseconds are likely to exceed it.
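A quick illustration in the same interpreter (%d is fine for a value within 32 bits, while the question's timestamp needs %.0f):
> print(string.format("%d", 2000000000))
2000000000
> print(string.format("%.0f", 1247687475123456))
1247687475123456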
If 16 decimal digits is not enough precision, there are several choices available for increased precision.
First, it would not be difficult to package a true 64-bit integer in a userdata along with a suitable set of arithmetic metamethods. This gets discussed occasionally on the Lua mailing list, but I don't recall seeing a completed module released by anyone.
Second, one of the Lua authors has released two modules supporting arbitrary precision arithmetic: lbc and lmapm. Both are found at that page.
Third, casual searching in Google readily turns up several other math library wrappers.
