Using an index from another table - COBOL

If a table element (in a table declared without an index) is accessed using an index that belongs to another table, it can give a table-overflow error on the IBM host. The same program, however, does not produce a crash or a message (even with debug options) when compiled with GnuCOBOL (formerly OpenCOBOL).
e.g.
IDENTIFICATION DIVISION.
PROGRAM-ID. TSTPROGX.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  IX                      PIC 9(04) COMP VALUE ZERO.
01  VARS.
    05  S-PART-C.
        10  S-DETAIL        OCCURS 100 TIMES
                            INDEXED BY S-SUB.
            15  S-ACTUAL    PIC 9(06) VALUE ZERO.
            15  S-ACTUAL-A  REDEFINES S-ACTUAL
                            PIC X(06).
            15  S-GRADE     PIC X(02) VALUE LOW-VALUE.
    05  POS-USED-ARRAY      PIC X(999) VALUE SPACE.
    05  FILLER              REDEFINES POS-USED-ARRAY
                            OCCURS 999.
        10  FILLER-X        PIC X.
            88  POSITIONS-USED-X VALUE 'T'.
PROCEDURE DIVISION.
    SET S-SUB TO 1
    PERFORM VARYING IX FROM 1 BY 1 UNTIL IX > 999
        SET S-SUB TO IX
        SET POSITIONS-USED-X(S-SUB) TO TRUE
        DISPLAY IX ":" FILLER-X(S-SUB)
    END-PERFORM
    GOBACK.
Is there a compiler option to issue warnings to avoid this kind of usage?
This error can be avoided with the correct usage, i.e. subscripting with the variable IX instead of the index (S-SUB) of a different table:
SET POSITIONS-USED-X(IX) TO TRUE
In general, using the index of one table to reference an independent table (of a different size) appears to be erroneous.

Presuming that by Host you mean Mainframe, using Enterprise COBOL, the link I originally included has the answer for you.
In Enterprise COBOL you can use an index from one table to reference data in another table, even one which does not have an index (like your example), but unless the lengths of the data subordinate to the two OCCURS clauses are the same, you will not get the results you expect.
With Enterprise COBOL and compiler option SSRANGE, the code in your question will fail (as you are aware). Where will it fail? The length of the data subordinate to the OCCURS associated with the index S-SUB is eight bytes. That length is effectively "inbuilt" into the index S-SUB. Dividing 999 (the length of the second table) by eight yields 124 (ignoring the remainder), so an S-SUB SET to 124 will be OK, and one SET to 125 will not, because the 125th occurrence, at eight bytes each, would extend to byte 1,000, and you only have 999 bytes.
In GnuCOBOL the index does not have the length inbuilt; it is a simple integer relating directly to the occurrence number in the table. However, having got past 100 you start overflowing the first table (where there is no compiler/run-time check), and 125 references later you go beyond the end of the second table. With -g in your compile options you may then get a crash, depending on how (and how much) storage is allocated in the C program generated by the GnuCOBOL compiler. 225 would be the first point at which a crash could occur, but it may not do so at that point, and it may, or may not, do so later.
Basically, there's no way you can really expect the code to work, which you know; you just want to know how to set the compiler to check for it with GnuCOBOL. Currently, you can't.
Usually there is a field defined to hold the maximum number of entries in a table, and another defined to say how many are being used. When using entries, you first check that the table is not full.
You could use LENGTH OF/FUNCTION LENGTH to protect against going over the end of a table as well.
You could combine the two approaches.
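A minimal sketch of those guards, against the tables in the question (WS-MAX-ENTRIES, WS-ENTRIES-USED and the messages are illustrative names only, not from your program):

01  WS-MAX-ENTRIES   PIC 9(04) VALUE 100.
01  WS-ENTRIES-USED  PIC 9(04) VALUE ZERO.

*>  guard with a "how many are used" counter
    IF WS-ENTRIES-USED >= WS-MAX-ENTRIES
        DISPLAY 'S-DETAIL table is full'
*>      terminate/abend here in real code
    ELSE
        ADD 1 TO WS-ENTRIES-USED
        SET S-SUB TO WS-ENTRIES-USED
    END-IF

*>  or guard with the physical length of the table
    IF WS-ENTRIES-USED * FUNCTION LENGTH(S-DETAIL(1))
            > FUNCTION LENGTH(S-PART-C)
        DISPLAY 'reference would be past the end of S-PART-C'
    END-IF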
You just don't have a switch for GnuCOBOL that helps you here. The -g goes down to the C compiler; it gives you something, but not everything, and what it catches just depends.
Also, it is not, to my mind, a good idea to have a data-name which you use for subscripting named IX and an index named S-SUB. That will confuse a human reader of a program.
See this answer: https://stackoverflow.com/a/36254166/1927206 for some detail about using indexes from different tables.
IBM Enterprise COBOL can check that storage referenced using an index is within the table that the referenced-element is subordinate to. This is done with compiler option SSRANGE. If data outside the table is referenced (by any manner of subscripting, or by reference-modification) a run-time diagnostic message is produced. In all but the latest Enterprise COBOL compilers, that problem will cause an "abend" (an "abnormal end"). With the latest compilers, there are some sub-options to give more control.
This is not strictly checking the "bounds" of a table. For a one-dimensional table, the table-reference checking coincides with bounds-checking. For a multi-dimensional table it does not. The second and subsequent levels of subscript can be "wrong", but unless they cause a reference outside the table, this is unproblematic.
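For instance, with a purely illustrative two-dimensional layout (not from the question):

01  G-TABLE.
    05  G-ROW OCCURS 10 TIMES.
        10  G-CELL OCCURS 5 TIMES PIC X(04).

a reference to G-CELL (1, 7) has a "wrong" second subscript (there is no seventh cell in a row), but it still lands inside the 200 bytes of G-TABLE (it actually addresses the storage of G-CELL (2, 2)), so it is not flagged; G-CELL (10, 7) would be flagged, because it falls beyond the end of the table.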
GnuCOBOL/OpenCOBOL does not have any checking for the use of subscripts or the reference of a table by subscripting/reference-modification. If you wanted to consider adding this yourself, you would be more than welcome. Or you could post it as a feature request. Visit https://sourceforge.net/p/open-cobol/discussion/?source=navbar

Related

How to move COMP-5 values to a Numeric field?

I am trying to move a COMP-5 variable (which I am receiving from some other system) to a numeric field. I have noticed that when I DISPLAY the COMP-5 variable I can see the value, but when I try to MOVE it the value becomes zeros. I don't have any experience working with COMP-5. Can someone help me with this?
Code:
09  O-Xid.
    12  O-Xid2-length     PIC S9999 COMP-5 SYNC.
    12  O-Xid2            PIC X(255).
09  WS-O-Xid.
    12  WS-O-Xid2-length  PIC 9999.
    12  WS-O-Xid2         PIC X(255).

MOVE O-Xid2-length TO WS-O-Xid2-length
MOVE O-Xid2        TO WS-O-Xid2
MOVE, as you have used it, does any necessary conversion between numeric USAGEs, as long as the data is valid.
The code is missing the actual DISPLAY statement; I assume you have tested for valid data with DISPLAY O-Xid2-length (please specify the output).
The most likely reason that the target does not contain the source value would be:
the COBOL environment you use (you have specified neither the compiler nor the options you used) does not truncate COMP-5 values according to ANSI/ISO, so the field may contain "10000", which is then truncated on the MOVE because the target cannot hold that value (the standard truncation happening here keeps only the last 4 digits).
All other cases come down to "the field does not contain the data you think it does". Again: please specify both the DISPLAY statement and the result.
Additional information on TRUNC(BIN): according to the docs:
BINARY sending fields are handled as halfwords, fullwords, or doublewords when the receiver is numeric
DISPLAY will convert the entire content of binary fields with no truncation.
DISPLAY would also show a value like 30000; I think the MOVE would in this case result in a zero value.
For other usages it would be possible that the value stored in the variable is actually not valid data, but this does not apply to BINARY (or COMP-5) items. In that case the COBOL environment used could do some auto-correction on DISPLAY, but on MOVE simply change the invalid value to ZERO; to check for that you'd need either a debugger or a hex dump of the value received.
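As a hedged illustration of how to narrow that down, using the fields from the question (the diagnostics are only suggestions):

    DISPLAY 'O-Xid2-length = ' O-Xid2-length
    EVALUATE TRUE
        WHEN O-Xid2-length < ZERO
*>          the sign is lost on a MOVE to the unsigned PIC 9999 field
            DISPLAY 'negative length received'
        WHEN O-Xid2-length > 9999
*>          more than four digits - standard truncation keeps only the last four
            DISPLAY 'length needs more than 4 digits'
        WHEN OTHER
            MOVE O-Xid2-length TO WS-O-Xid2-length
            MOVE O-Xid2        TO WS-O-Xid2
    END-EVALUATE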

How to write an output which is longer than the maximum LRECL in COBOL?

Have you ever worked with VBS or FBS files whose records are longer than the maximum LRECL in COBOL?
I want to edit LOB (Large Object) records which are much longer than 32,760 bytes, write them to files, and transfer them to a Unix server.
If you already have experience, it would be nice if you could give me some tips for processing.
Here is material on the considerations for spanned records in COBOL:
You can code RECORDING MODE S for spanned records in QSAM files that
are assigned to magnetic tape or to direct access devices. Do not
request spanned records for files in the HFS. You can omit the
RECORDING MODE clause. The compiler determines the recording mode to
be S if the maximum record length (in bytes) plus 4 is greater than
the block size set in the BLOCK CONTAINS clause.
For files with format S in your program, the compiler determines the
maximum record length with the same rules as are used for format V.
The length is based on your usage of the RECORD clause.
When creating files that contain format-S records and a record is
larger than the remaining space in a block, COBOL writes a segment of
the record to fill the block. The rest of the record is stored in the
next block or blocks depending on its length. COBOL supports QSAM
spanned records up to 32,760 bytes in length.
When retrieving files that have format-S records, a program can
retrieve only complete records.
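A hedged sketch of what the COBOL side of a spanned QSAM file might look like (the SELECT name, ddname, status and length fields are placeholders; check the Enterprise COBOL Language Reference for your compiler level):

    SELECT LOB-FILE ASSIGN TO LOBOUT
        FILE STATUS IS WS-LOB-STATUS.
...
FD  LOB-FILE
    RECORDING MODE IS S
    BLOCK CONTAINS 0 RECORDS
    RECORD IS VARYING IN SIZE FROM 1 TO 32760 CHARACTERS
        DEPENDING ON WS-LOB-LEN.
01  LOB-RECORD      PIC X(32760).
...
01  WS-LOB-STATUS   PIC XX.
01  WS-LOB-LEN      PIC 9(05) BINARY.

Note that this only gets you to the 32,760-byte limit mentioned above; it does not by itself let COBOL write a single record longer than that.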
Here is an explanation of storing records that are longer than 32,760 bytes. Segmented records are not supported via ISPF Edit. They are kind of an odd beast.
You can call C runtime routines from COBOL (or other LE-conforming languages).
[...]
Working-Storage Section.
01  CONSTANTS.
    05  WS-FILE-OPTN    PIC X(003) VALUE Z'rb'.
01  WORK-AREAS.
    05  WS-FILE         POINTER    VALUE NULL.
    05  WS-FILE-NM      PIC X(255).
[...]
Procedure Division.
[...]
    CALL 'FOPEN' USING
        BY REFERENCE WS-FILE-NM
        BY REFERENCE WS-FILE-OPTN
        RETURNING    WS-FILE
    END-CALL
    IF WS-FILE = NULL
        [error handling, maybe call perror()]
    END-IF
This way you can delegate the I/O to the C runtime and do the rest of your logic in COBOL.
Consult the C runtime library reference for documentation on required parameters to your chosen I/O functions.
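As a follow-on sketch, writing and closing might look something like this. It assumes the same uppercase entry points and parameter conventions as the fopen call above also work for fwrite (buffer, element size, element count, stream) and fclose (stream) on your system; the BY VALUE/BY REFERENCE choices in particular are the part to verify against the C runtime reference.

01  WS-BUFFER           PIC X(32760).
01  WS-ELEM-LEN         PIC 9(09) BINARY.
01  WS-ELEM-COUNT       PIC 9(09) BINARY VALUE 1.
01  WS-ELEMS-WRITTEN    PIC 9(09) BINARY.
01  WS-FCLOSE-RC        PIC S9(09) BINARY.
[...]
    CALL 'FWRITE' USING
        BY REFERENCE WS-BUFFER
        BY VALUE     WS-ELEM-LEN
        BY VALUE     WS-ELEM-COUNT
        BY VALUE     WS-FILE
        RETURNING    WS-ELEMS-WRITTEN
    END-CALL
    IF WS-ELEMS-WRITTEN NOT = WS-ELEM-COUNT
        [error handling, maybe call perror()]
    END-IF
    CALL 'FCLOSE' USING
        BY VALUE     WS-FILE
        RETURNING    WS-FCLOSE-RC
    END-CALL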

COBOL read/store in table

The goal of this exercise is to read and store an input file into a table, then validate certain fields within the input and output any error records. I need to read and store each policy group so that just 5 records are stored in the table at a time, instead of the entire file.
So I need to read in a policy group, which is 5 records, do the processing, then read the next 5 records, and so on until the end of the file.
This is the input file.
10A 011111 2005062520060625
20A 011111000861038
32A 011111 79372
60A 0111112020 6 4
94A 011111 080 1
10A 02222 2005082520060825
20A 022221000187062
32A 022221 05038
60A 0222212003 6 4
94A 022221 090 1
....
I was able to load the first 5 records into a table by having my table OCCURS 5 TIMES, but I don't know how I would continue that. My code is below. (I wrote it just to see if it was working correctly, but it prints the header line with the first 4 records, instead of just the first 5.)
01  TABLES.
    05  T1-RECORD-TABLE.
        10  T1-ENTRY OCCURS 5 TIMES
                     INDEXED BY T1-INDEX.
            15  RECORD-TYPE-10  PIC X(80).
            15  RECORD-TYPE-20  PIC X(80).
            15  RECORD-TYPE-32  PIC X(80).
            15  RECORD-TYPE-60  PIC X(80).
            15  RECORD-TYPE-94  PIC X(80).

COPY TRNREC10.
COPY TRNREC20.
COPY TRNREC32.
COPY TRNREC60.
COPY TRNREC94.
.....
Z200-READ-FILES.
    READ DISK-IN INTO T1-ENTRY(T1-INDEX)
        AT END MOVE 'YES' TO END-OF-FILE-SW.
    WRITE PRINT-RECORD FROM T1-ENTRY(T1-INDEX).
I don't want a step-by-step for this (though that'd be nice :P) because I know WHAT I need to do, I just don't know HOW to do it; my textbook and course notes are useless to me. I've been stuck on this for a while and nothing I try works.
I'm assuming that every policy group has exactly 5 records with the 5 record types.
You can set up your working storage like this.
05  T1-RECORD.
    10  T1-RECORD-TYPE  PIC XX.
    10  FILLER          PIC X(78).

COPY TRNREC10.
COPY TRNREC20.
COPY TRNREC32.
COPY TRNREC60.
COPY TRNREC94.
Then your read paragraph would look like this. I assumed that TRNREC10-RECORD is the 01 level of the TRNREC10 copybook. If not, substitute the actual 01 levels in the following code.
2200-READ-FILE.
    READ DISK-IN INTO T1-RECORD
        AT END MOVE 'YES' TO END-OF-FILE-SW.
    IF END-OF-FILE-SW = 'NO'
        IF T1-RECORD-TYPE = '10'
            MOVE T1-RECORD TO TRNREC10-RECORD
        END-IF
        IF T1-RECORD-TYPE = '20'
            MOVE T1-RECORD TO TRNREC20-RECORD
        END-IF
        ...
    END-IF.
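As a matter of taste, an EVALUATE reads a little more cleanly than the chain of IFs (assuming the other copybook 01 levels follow the same naming):

    EVALUATE T1-RECORD-TYPE
        WHEN '10'  MOVE T1-RECORD TO TRNREC10-RECORD
        WHEN '20'  MOVE T1-RECORD TO TRNREC20-RECORD
        WHEN '32'  MOVE T1-RECORD TO TRNREC32-RECORD
        WHEN '60'  MOVE T1-RECORD TO TRNREC60-RECORD
        WHEN '94'  MOVE T1-RECORD TO TRNREC94-RECORD
        WHEN OTHER DISPLAY 'unexpected record type ' T1-RECORD-TYPE
    END-EVALUATE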
Your write paragraph would look like this
2400-WRITE-FILE.
    WRITE PRINT-RECORD FROM TRNREC10-RECORD
    WRITE PRINT-RECORD FROM TRNREC20-RECORD
    ...
Your processing paragraphs would access the data in the copybook records.
You have a textbook, course notes, a manual, an editor, JCL and a computer.
All of those are going to be of use to you, but you've also got to get yourself thinking like a programmer.
Your task is to read a file, load five records into a table, do something with them, then write them out.
You will have many tasks where you read a file, do something, and write a file.
So how about getting the file processing down pat first?
Define your files using FILE STATUS
PERFORM OPEN-INPUT-POLICY-MASTER
PERFORM OPEN-OUTPUT-NEW-POLICY-MASTER
In those paragraphs (or SECTIONs, depending on your site standards) OPEN the files, check the file status, abend if not "00".
You will need a READ paragraph. READ in there, check the file status, being aware that "10" is valid and that it indicates end-of-file (so you don't need AT END and END-READ). Count all records read (file status "00").
You will need a WRITE paragraph. Check the file status. Only "00" is valid. Count the records written.
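A rough sketch of those two paragraphs (the FILE STATUS fields, count fields, record names and the FORCE-ABEND paragraph are names I have made up; use whatever your standards dictate), with an 88 level on the input status so the main loop can simply test END-OF-POLICY-MASTER:

01  WS-FILE-STATUSES.
    05  POLICY-MASTER-STATUS        PIC XX.
        88  END-OF-POLICY-MASTER    VALUE '10'.
    05  NEW-POLICY-MASTER-STATUS    PIC XX.
01  WS-COUNTS.
    05  WS-POLICY-RECORDS-READ      PIC 9(09) VALUE ZERO.
    05  WS-POLICY-RECORDS-WRITTEN   PIC 9(09) VALUE ZERO.

READ-POLICY-MASTER.
    READ POLICY-MASTER
    EVALUATE POLICY-MASTER-STATUS
        WHEN '00'
            ADD 1 TO WS-POLICY-RECORDS-READ
        WHEN '10'
*>          end-of-file, a valid status, nothing more to do here
            CONTINUE
        WHEN OTHER
            DISPLAY 'READ POLICY-MASTER status ' POLICY-MASTER-STATUS
            PERFORM FORCE-ABEND
    END-EVALUATE
    .

WRITE-NEW-POLICY-MASTER.
    WRITE NEW-POLICY-RECORD
    IF NEW-POLICY-MASTER-STATUS NOT = '00'
        DISPLAY 'WRITE NEW-POLICY-MASTER status '
                NEW-POLICY-MASTER-STATUS
        PERFORM FORCE-ABEND
    END-IF
    ADD 1 TO WS-POLICY-RECORDS-WRITTEN
    .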
PERFORM PRIMING-READ-OF-POLICY-MASTER
All that paragraph needs to do is PERFORM the READ paragraph. Putting it in a paragraph of its own is a way of documenting what it does. Telling the next person along.
What does it do? Reads, or attempts to read, the first record. If the file is empty, you will get file status "10". If the file should not be empty, abend. You've now dealt with an empty file without affecting your processing logic.
PERFORM PROCESS-POLICY-MASTER UNTIL END-OF-POLICY-MASTER
or
PERFORM UNTIL END-OF-POLICY-MASTER
....
END-PERFORM
I prefer the first, to avoid the main logic "spreading", but it's fine to start with the second if you prefer/it fits in with your course.
The last thing in the paragraph or before the END-PERFORM is a PERFORM of your READ.
You can then PERFORM CLOSE-INPUT-POLICY-MASTER, and similar for the output file.
Then check that the counts are equal. If not, abend. This is trivial in this example, but as your logic gets more complicated, things can go wrong.
Always provide counts to reconcile your input to output, count additions, deletions. Count updates separately for information. Get your program to check what it can. If you have more updates than input records, identify that and abend. Have your program do as much verification as possible.
You will now have a simple program which reads a file, writes a file, checks as much as it can, and is just lacking processing logic.
You can use that program as a base for all your tasks reading one file and writing another.
All that stuff will work in your new program, without you having to do anything.
The logic you need for now is to store data in a table.
OK, as Gilbert has rightly shown, storing in a table doesn't make sense in your actual case. But, it is the requirement. You need to get good at tables as well.
Your table is not defined correctly. Try this:
01  T1-RECORD-TABLE.
    05  T1-ENTRY OCCURS 5 TIMES
                 INDEXED BY T1-INDEX.
        10  POLICY-RECORD.
            15  POLICY-RECORD-TYPE  PIC XX.
            15  POLICY-RECORD-DATA  PIC X(78).
Put an 88 underneath POLICY-RECORD-TYPE for each of your record-types. Make the 88 descriptive of the business function, don't just say "RECORD-IS-TYPE-10".
You are using an index to reference items in the table. Before putting the first entry in the table you have to SET the index to 1. To access the next entry, you have to SET the index UP BY 1.
Once you have stored your items in the table, you need to get at them again. SET the index to 1 again and you can reference the first entry. SET the index UP BY 1 serially to access the other entries.
SET index TO 1 before you start the processing. MOVE zero to a count of table entries. Get into your file processing loop.
There, count what you store. Every time your count of table entries reaches five, PERFORM a paragraph to output your records, reset the count to zero and SET your index back to 1. If the count is not five, SET your index UP BY 1.
In your paragraph to output the records, use PERFORM VARYING your index FROM 1 BY 1 UNTIL it is GREATER THAN 5. Inside the PERFORM, PERFORM your WRITE paragraph with the current table entry as the source for the record.
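Pulling those pieces together, the loop body and the output paragraph might look something like this sketch (WS-ENTRIES-IN-TABLE, POLICY-MASTER-RECORD and the paragraph names are only suggestions, and it assumes the groups really do arrive in fives):

PROCESS-POLICY-MASTER.
    MOVE POLICY-MASTER-RECORD TO T1-ENTRY (T1-INDEX)
    ADD 1 TO WS-ENTRIES-IN-TABLE
    IF WS-ENTRIES-IN-TABLE = 5
        PERFORM OUTPUT-POLICY-GROUP
        MOVE ZERO TO WS-ENTRIES-IN-TABLE
        SET T1-INDEX TO 1
    ELSE
        SET T1-INDEX UP BY 1
    END-IF
    PERFORM READ-POLICY-MASTER
    .

OUTPUT-POLICY-GROUP.
    PERFORM VARYING T1-INDEX FROM 1 BY 1
            UNTIL T1-INDEX GREATER THAN 5
        MOVE T1-ENTRY (T1-INDEX) TO NEW-POLICY-RECORD
        PERFORM WRITE-NEW-POLICY-MASTER
    END-PERFORM
    .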
You will now have two programs, both of which read an input file and produce an identical output file.
Then you can do your verification logic.
If you break everything down, keep things separate, keep things simple, name them well, you'll start to write COBOL programs that are the same except for the specific business logic. All the standard stuff, all the boring stuff, if you like, all the basic structure stays the same. The new code you write is just the specifics of the next task.
Yes, you'll get to read more files, either as reference files, or as multiple inputs. You'll have multiple outputs. But you can build the basics of all those in exactly the same manner. Then you'll have more examples to base your future programs on.
Once you've got the basic stuff, you never need to code it again. You just copy and apply.
With good names, your programs will tell what they are doing.
The code that you actually write will be the "interesting" stuff, not the stuff you do "every time".
I've just "designed" this for you. It is not the only workable design. It is how I do it, and have done for some time. You should also design every part of your processing. You should know what it is doing before you write the code.
As an example, take a simple loop. Imagine how you will test it. With zero entries in the table, what happens? With one? With an intermediate number? One less than the maximum? The maximum? One more than the maximum? 10 more than the maximum? Then write the code knowing that you need to know how to deal with those cases.
In time, not too long, you'll think about the low-level design while you code. In more time, you'll do the high-level design that way as well. In enough time you'll only design things you've not had to deal with before, the rest you'll already know.
You have a textbook, course notes, a manual, an editor, JCL and a computer. I've given you some ideas. How about seeing whether, taken all together, they are useful to you? I think you have some frustrations now. Write some basic programs, then apply them to your tasks.

Allocation of Memory in Variable-Length Tables

Say I have the following variable-length table defined in WORKING-STORAGE...
01  SOAP-RECORD.
    05  SOAP-INPUT        PIC X(8)   VALUE SPACES.
    05  SOAP-STATUS       PIC 9      VALUE ZERO.
    05  SOAP-MESSAGE      PIC X(50)  VALUE SPACES.
    05  SOAP-ITEMS        OCCURS 0 TO 500 TIMES
                          DEPENDING ON ITEM-COUNT
                          INDEXED BY ITEM-X.
        10  SI-SUB-ITEMS  OCCURS 0 TO 100 TIMES
                          DEPENDING ON SUB-COUNT
                          INDEXED BY SUB-X.
            15  SS-KEY    PIC X(8)      VALUE SPACES.
            15  SS-AMOUNT PIC -9(7).99  VALUE ZEROS.
            15  SS-DESCR  PIC X(100)    VALUE SPACES.
When this program runs, will it initially allocate as much space as this table could possibly need, or is it more dynamic about allocating memory? I would guess that the DEPENDING ON clause would make it more dynamic in the sense that it would allocate more memory as the ITEM-COUNT variable is incremented. A co-worker tells me otherwise, but he is not 100% sure. So I would really like to know how this works in order to structure my program as efficiently as possible.
PS: Yes, I am writing a new COBOL program! It's actually a CICS web service. I don't think this language will ever die :(
You don't mention which compiler you're using, but, at least up through the current, 2002, COBOL standard, the space allocated for an OCCURS...DEPENDING ON (ODO) data item is not required to be dynamic. (It's really only the number of occurrences, not the length, of the data item that varies.) Although your compiler vendor may've implemented an extension to the standard, I'm not aware of any vendor that has done so in this area.
The next, but not yet approved, revision of the standard includes support for dynamic-capacity tables with a new OCCURS DYNAMIC format.
In the CICS world, OCCURS DEPENDING ON (ODO) can be used to create a
table that is dynamically sized at run time. However, the way you are declaring
SOAP-RECORD will allocate enough memory to hold a record of maximum size.
Try the following:
First, move the SOAP-RECORD into LINKAGE SECTION. Items declared
in the linkage section do not have any memory allocated for them. At this
point you only have a record layout. Leave the declaration of
ITEM-COUNT and SUB-COUNT in WORKING-STORAGE.
Next, declare a pointer and a length in WORKING-STORAGE something like:
77 SOAP-PTR USAGE POINTER.
77 SOAP-LENGTH PIC S9(8) BINARY.
Finally in the PROCEDURE DIVISION: Set the size of the array
dimensions to some real values; allocate the
appropriate amount of memory and then connect the two. For example:
MOVE 200 TO ITEM-COUNT
MOVE 15 TO SUB-COUNT
MOVE LENGTH OF SOAP-RECORD TO SOAP-LENGTH
EXEC CICS GETMAIN
     BELOW
     USERDATAKEY
     SET(SOAP-PTR)
     FLENGTH(SOAP-LENGTH)
END-EXEC
SET ADDRESS OF SOAP-RECORD TO SOAP-PTR
This will allocate only enough memory to store a SOAP-RECORD with 200 SOAP-ITEMS
each of which contain 15 SI-SUB-ITEMS.
Note that the LENGTH OF register gives you the size of SOAP-RECORD
based on the ODO object values (ITEM-COUNT, SUB-COUNT) as opposed to
the maximum number of OCCURS.
Very important... Don't forget to deallocate the memory when you're done!
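A hedged sketch of that clean-up (check the CICS application programming reference for the options available at your release):

    EXEC CICS FREEMAIN
        DATAPOINTER(SOAP-PTR)
    END-EXEC
*>  drop addressability so the freed storage cannot be used by accident
    SET ADDRESS OF SOAP-RECORD TO NULL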

COBOL Confusion

Hey, everyone. I'm having a bit of trouble with a coding project that I'm trying to tackle in the z/OS environment using COBOL. I need to read a file in and put its records into an indexed table (I know there will be fewer than 90 records).
The thing that is throwing me is that we are bound by the parameters of the project to use a variable called "Table-Size" (set to zero at declaration).
Given all that, I need to do something like "Occurs 1 to 90 times Depending on Table-Size", but I don't understand how that will work because, as far as I can tell, Table-Size starts at zero and is incremented along with each entry that is added to the table. Can anyone please clear this up for me?
Thanks!
It sounds like your primary concern is: how does the compiler know how much to allocate in the array if the size changes at run-time?
The answer is that it allocates the maximum amount of space (enough for 90 entries). Note that this is for space in working storage. When the record is written to a file, only the relevant portion is written.
An example:
01  TABLE-SIZE  PIC 9.
01  TABLE-AREA.
    03  TABLE OCCURS 1 TO 9 TIMES
              DEPENDING ON TABLE-SIZE.
        05  FLD1  PIC X(4).
This will allocate 36 characters (9 multiplied by 4) for TABLE in working storage. If TABLE-SIZE is set to 2 when the record is written to a file, only 8 characters of TABLE will be written (over and above the characters written for TABLE-SIZE, of course).
So, for example, if the memory occupied by TABLE was AaaaBbbbCcccDdddEeeeFfffGgggHhhhIiii, the data written to the file may be the shortened (including the size): 2AaaaBbbb.
Similarly, when the record is read back in, both TABLE-SIZE and the relevant bits of TABLE will be populated from the file (setting only the size and first two elements).
I don't believe that the unused TABLE entries are initialised to anything when that occurs. It's best to assume not anyway, and populate them explicitly if you need to add another item to the table.
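For example, to append an entry after the record has been read back in (WS-NEW-FLD1 is just an illustrative source field), a sketch along these lines keeps the new slot clean:

    ADD 1 TO TABLE-SIZE
*>  clear the newly exposed occurrence before using it
    MOVE SPACES TO TABLE (TABLE-SIZE)
    MOVE WS-NEW-FLD1 TO FLD1 (TABLE-SIZE)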
For efficiency, you may want to consider setting the TABLE-SIZE to USAGE IS COMP.
We don't have quite enough information here, but the basic thing is that the variable named in the DEPENDING ON clause has to hold the count of the variable number of groups. So you need something like:
01  TABLE-SIZE  PIC 99.
01  TABLE-AREA.
    03  TABLE OCCURS 1 TO 90 TIMES
              DEPENDING ON TABLE-SIZE.
        05  FIELD-1  ...
        05  FIELD-2  ...
and so on.
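To answer the "how does Table-Size grow" part directly: you increment it yourself as you load each record, along these lines (the file, record and switch names are placeholders):

    MOVE ZERO TO TABLE-SIZE
    READ INPUT-FILE
        AT END MOVE 'YES' TO END-OF-FILE-SW
    END-READ
    PERFORM UNTIL END-OF-FILE-SW = 'YES'
               OR TABLE-SIZE = 90
        ADD 1 TO TABLE-SIZE
        MOVE INPUT-RECORD TO TABLE (TABLE-SIZE)
        READ INPUT-FILE
            AT END MOVE 'YES' TO END-OF-FILE-SW
        END-READ
    END-PERFORM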
See this article or this article at Publib.
