What does "$CICS ON" in a legacy IBM COBOL program mean? - cobol

I have some IBM COBOL of 2006 vintage. It contains "$CICS ON" and "$CICS OFF". I'm generally familiar with IBM COBOL "EXEC CICS" statements and directives, but I've never seen this pair.
What do these commands do, and where are they documented (IBM reference manual name would be ideal answer)? Searching the web didn't show an obvious result.
COBOL program fragment below:
000750 01 WG-YOB.
000760 05 FILLER PIC X(4).
000770 05 WX-YOB PIC X(2).
000780$CICS ON.
000790$CICS OFF.
000800$COPY CMPLDBF.
000810$COPY CMPLDBH.
000820 LINKAGE SECTION.
000830 01 DFHCOMMAREA.
000840 COPY COMMAREA.
000850 PROCEDURE DIVISION.
EDIT: This is a code base of about 1000 programs, roughly 450K SLOC. The only $ commands I find across this entire code base are:
$CICS ON.
$CICS OFF.
$COPY <filename>.
$BLOCK.
$BLOCKS.
If it helps, the copy libs referenced by $COPY contain nonstandard COBOL declarations (note "COMMBLOCK" and "FORM" starting in column 7)
000100COMMBLOCK OF HCPDSDB.
000110 01 HCPDSDB-DB.
000120 05 RECORD-NAME.
000130 05 RETURN-KODE.
000140 05 FREE.
000150 05 LAST-RECORD-FLAG.
000160 05 PASSWD.
000170 05 NO-OF-RECORDS.
...
000380FORM YYMMDD.
000390 05 C4-RED REDEFINES C4.
000400 07 C4-YY PIC 99.
000410 07 C4-MM PIC 99.
000420 07 C4-DD PIC 99.
000430 05 C5 PIC 9(11).
000440 05 C6 PIC 9(6).

Converting a comment to an answer at the OP's request.
These may be System 2000 references (System 2000 is a SAS database product). A V1 PDF from 20+ years ago includes the $CICS ON and $CICS OFF directives. I cannot locate any V2 documentation that includes these directives.
From comments, OP found a more helpful manual at https://support.sas.com/resources/papers/proceedings/pdfs/s2k/PLEX.pdf which "appears to contain all the $xxxx directives mentioned" in the question.
I'm glad I was able to at least point in a helpful direction.

Related

Cobol data files

First, let me apologize if the information is not complete. This is not me being lazy, just me not being familiar with COBOL details.
I have been assigned at my firm to extract our old financial data from files read by COBOL programs and load it into our Oracle database. I am not able to read these files as normal text, and I don't know how to convert them to normal text.
According to the COBOL source, each block is 7 records and each record is 73 characters.
The files are very large, about 3 GB each on average. How can I open them as normal text?
Here is the file section:
000220 ENVIRONMENT DIVISION.
000230 CONFIGURATION SECTION.
000240 SOURCE-COMPUTER. NCR-3000.
000250 OBJECT-COMPUTER. NCR-3000.
000260 INPUT-OUTPUT SECTION.
000270 FILE-CONTROL.
000280 SELECT DQ-HIMVT-A ASSIGN TO DISC
000290 ORGANIZATION INDEXED
000300 ACCESS MODE DYNAMIC
000310 RECORD KEY CLE-A.
000320*
000330 DATA DIVISION.
000340 FILE SECTION.
000350 FD DQ-HIMVT-A BLOCK CONTAINS 7 RECORDS
000360 RECORD CONTAINS 73 CHARACTERS
000370 LABEL RECORD STANDARD
000380 DATA RECORD IS HIMVT-A.
000390 01 HIMVT-A.
000400 02 CLE-A.
000410 03 ENT-A PIC 99.
000420 03 NUCPT-A PIC 9(13) COMP-6.
000430 03 DEV-A PIC XXX.
000440 03 DATOP-A PIC 9(7) COMP-6.
000450 03 SIG-A PIC 9.
000460 03 FORC-A PIC 9.
000470 03 DATVAL-A PIC 9(7) COMP-6.
000480 03 NUMOP-A PIC 9(9) COMP-6.
000490 03 MT-A PIC 9(12)V999 COMP-6.
000500 02 FILLER PIC X(8).
000510 02 TYPCPT-A PIC 9(3) COMP-6.
000520 02 LIBOP-A PIC X(15).
000530 02 SOLD-A PIC S9(12)V999 COMP-3.
000540 02 DATTRAIT-A PIC 9(7) COMP-6.
000550 02 FILLER PIC X.
Here is a sample of the file when opened in Notepad++:
RMKF I I 0 ** ƒ ’ *B9 *B9 ’ ’ ÿ # "c *B9 Þ #01 EGP %10 % ƒ 21 $ '10 ' (#P )€ 010 0 0 EGP $21 $ %11 $ (EGP $21 $ %11 $ 7EGP $21 $ %11 $ FEGP $21 $ %11 $ UEGP $21 $ %11 $ ` ÿÿÿÿÿÿÿÿÿÿÿÿÿÿ >01 ÔEGP %10 % ÔƒÖ 21Â
NO. 0 ÄÕ
I also found this file, which they call a copybook. I don't know how it is related:
000100*
000200**** CINVDAT - ZONE DE TRAVAIL ****
000300*******************************************
000400****
000500*
000600 01 INVDATRAV.
000700 03 INVZON1 PIC 99.
000800 03 INVZON2 PIC 99.
000900 03 INVZON3 PIC 99.
001000 01 INVZONI PIC 99.
001100 01 INVDATE PIC 9(6).
001200 01 INVCAL PIC 9.
001300*
Regards
You may be able to locate a service which can do the extract for you. If you go this route, ensure that they have all the information you can provide (which must include the data-definitions under the FD) and agree to only pay on verified receipt of the data.
An alternative is to talk to Micro Focus about a short-term license for a COBOL which (again must be guaranteed) can understand the indexed-file format. You then write one simple program per file whose data you need to extract. Advantage here is that what COMP-3 and COMP-6 represent, you don't need to know, as the conversion to a "text" number is done without anyone having to think about it (on the output definition, you remove all references to COMP-anything (also COMP, if there happen to be any)).
A further alternative is to sit down with a hex editor and your knowledge of the data, and work out how to separate the index information from the data (all the data records are a known, fixed length: 73 bytes in your example).
Then you process the files with your preferred language which can handle fixed-length (non-delimited) binary records, working out what the COMP-3, COMP-6, and any other COMP- (or COMP) fields mean. They will likely be packed-decimal, Binary Coded Decimal (BCD), or "some type of binary", given that standard COBOL limits binary fields by decimal value (to the size of the PICTURE clause).
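As a worked illustration of what "packed" means here (a sketch; the COMP-6 layout is an assumption to verify against the NCR compiler manual, since COMP-6 on several non-IBM compilers is packed decimal without a sign nibble, while COMP-3 is the usual packed decimal with a trailing sign nibble):

SOLD-A   PIC S9(12)V999 COMP-3, value +123.456
         15 digits plus sign nibble C = 8 bytes: X'000000000123456C'
DATOP-A  PIC 9(7) COMP-6, value 0990123
         7 digits, no sign nibble, padded to 4 bytes: X'00990123'

Once the packing rule is confirmed, each field is a fixed offset and length within the 73-byte record, so the decode is mechanical.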
In the first and second alternatives, there is a greater expectation of the reliability of extract. The third may be the "cheapest", but expectations of the time expended to complete are more difficult to stick to.
Of the first two, cost is the likely determinant (assuming you are not going to use COBOL going forward). If you yourself have to write some COBOL programs, don't worry about that, they are very, very simple, and once you have done one, you simply "clone" it.
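If you do end up writing the extract programs yourself, a minimal sketch for the file in the question might look like the following. Treat it as a starting point, not a definitive implementation: the output file name, the LINE SEQUENTIAL organization, and the edited output PICtures are assumptions to adapt to whichever compiler you license, and that compiler must accept COMP-6 and understand the NCR indexed-file format. The point is that the output record repeats every input field with the COMP-3/COMP-6 clauses removed, so the compiler does all the conversion in the MOVEs.

IDENTIFICATION DIVISION.
PROGRAM-ID. EXTRHIM.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
*> Input: the indexed file exactly as the original program declares it.
    SELECT DQ-HIMVT-A ASSIGN TO DISC
        ORGANIZATION INDEXED
        ACCESS MODE SEQUENTIAL
        RECORD KEY CLE-A.
*> Output: a plain text file for loading into Oracle.
    SELECT OUT-FILE ASSIGN TO "HIMVT-A.TXT"
        ORGANIZATION LINE SEQUENTIAL.
DATA DIVISION.
FILE SECTION.
FD  DQ-HIMVT-A.
01  HIMVT-A.
    02  CLE-A.
        03  ENT-A        PIC 99.
        03  NUCPT-A      PIC 9(13)      COMP-6.
        03  DEV-A        PIC XXX.
        03  DATOP-A      PIC 9(7)       COMP-6.
        03  SIG-A        PIC 9.
        03  FORC-A       PIC 9.
        03  DATVAL-A     PIC 9(7)       COMP-6.
        03  NUMOP-A      PIC 9(9)       COMP-6.
        03  MT-A         PIC 9(12)V999  COMP-6.
    02  FILLER           PIC X(8).
    02  TYPCPT-A         PIC 9(3)       COMP-6.
    02  LIBOP-A          PIC X(15).
    02  SOLD-A           PIC S9(12)V999 COMP-3.
    02  DATTRAIT-A       PIC 9(7)       COMP-6.
    02  FILLER           PIC X.
FD  OUT-FILE.
*> Same fields, no COMP-n, decimal points made explicit.
01  OUT-REC.
    02  O-ENT-A          PIC 99.
    02  O-NUCPT-A        PIC 9(13).
    02  O-DEV-A          PIC XXX.
    02  O-DATOP-A        PIC 9(7).
    02  O-SIG-A          PIC 9.
    02  O-FORC-A         PIC 9.
    02  O-DATVAL-A       PIC 9(7).
    02  O-NUMOP-A        PIC 9(9).
    02  O-MT-A           PIC 9(12).999.
    02  O-TYPCPT-A       PIC 9(3).
    02  O-LIBOP-A        PIC X(15).
    02  O-SOLD-A         PIC -9(12).999.
    02  O-DATTRAIT-A     PIC 9(7).
WORKING-STORAGE SECTION.
01  WS-EOF               PIC X VALUE "N".
    88  END-OF-INPUT     VALUE "Y".
PROCEDURE DIVISION.
    OPEN INPUT DQ-HIMVT-A
         OUTPUT OUT-FILE
    PERFORM UNTIL END-OF-INPUT
        READ DQ-HIMVT-A
            AT END SET END-OF-INPUT TO TRUE
            NOT AT END
                MOVE ENT-A      TO O-ENT-A
                MOVE NUCPT-A    TO O-NUCPT-A
                MOVE DEV-A      TO O-DEV-A
                MOVE DATOP-A    TO O-DATOP-A
                MOVE SIG-A      TO O-SIG-A
                MOVE FORC-A     TO O-FORC-A
                MOVE DATVAL-A   TO O-DATVAL-A
                MOVE NUMOP-A    TO O-NUMOP-A
                MOVE MT-A       TO O-MT-A
                MOVE TYPCPT-A   TO O-TYPCPT-A
                MOVE LIBOP-A    TO O-LIBOP-A
                MOVE SOLD-A     TO O-SOLD-A
                MOVE DATTRAIT-A TO O-DATTRAIT-A
                WRITE OUT-REC
        END-READ
    END-PERFORM
    CLOSE DQ-HIMVT-A OUT-FILE
    STOP RUN.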
I'm not sure which system you are using. In my experience on the AS/400, COBOL data files use EBCDIC, so they cannot be opened directly in a text editor; you will only see random characters. You have to convert them to ASCII before you export them. On the AS/400, I use CHGTOPCD on the file/member name to copy it to a directory and export it, and then it shows the correct text. I'm not sure if this information helps you.

Converting a date in COBOL

So I'm reading an mmddyyyy (01012000) date into a PIC X(8) and I'm wondering how I can create a new variable holding the previous variable's value in yyyymmdd (20000101) format. I'm sure there must be some way to do this with substrings or whatnot?
@ScottNelson has provided the "using substrings" part of the answer; the following is the "or whatnot" part.
01 mmddyyyy.
05 mm pic xx.
05 dd pic xx.
05 yyyy pic xxxx.
01 yyyymmdd.
05 yyyy pic xxxx.
05 mm pic xx.
05 dd pic xx.
move corresponding mmddyyyy to yyyymmdd
In working storage:
77 mmddyyyy-date pic x(8).
77 yyyymmdd-date pic x(8).
In your procedure division logic:
move mmddyyyy-date(1:2) to yyyymmdd-date(5:2)
move mmddyyyy-date(3:2) to yyyymmdd-date(7:2)
move mmddyyyy-date(5:4) to yyyymmdd-date(1:4)
01 a-name-to-describe-the-source-date.
05 antdtsd-dd PIC XX.
05 antdtsd-mm PIC XX.
05 antdtsd-yyyy PIC XXXX.
01 a-name-to-describe-the-destination-date.
05 antdtdd-yyyy PIC XXXX.
05 antdtdd-mm PIC XX.
05 antdtdd-dd PIC XX.
Or
01 a-name-to-describe-the-source-date PIC X(8).
01 FILLER
REDEFINES a-name-to-describe-the-source-date.
05 antdtsd-dd PIC XX.
05 antdtsd-mm PIC XX.
05 antdtsd-yyyy PIC XXXX.
01 a-name-to-describe-the-destination-date PIC X(8).
01 FILLER
REDEFINES a-name-to-describe-the-destination-date.
05 antdtdd-yyyy PIC XXXX.
05 antdtdd-mm PIC XX.
05 antdtdd-dd PIC XX.
Then
MOVE antdtsd-dd TO antdtdd-dd
MOVE antdtsd-mm TO antdtdd-mm
MOVE antdtsd-yyyy TO antdtdd-yyyy
Firstly, you are overstating things if you call this "conversion". It is a simple rearrangement of data.
Secondly, there are many ways to do this. Which way do you do it? COBOL tends to be coded by "teams", and if you do this for a job, you will be best served by doing it how others on your team do it.
You've been shown two ways: reference-modification and using CORRESPONDING (which, if you see it in real code, will often be abbreviated to CORR - who's going to type CORRESPONDING if the intent is not to type much...?).
How otherwise to choose between them? Performance? They'll likely generate identical code, so the compiler is out of it. Understandability to the human reader? For me, that is very important in COBOL (or any language).
Two problems with the reference-modification. A typo? No problem, the code will compile and execute. And you'll find it in testing. Won't you? At some point? Wasting all the time expended until you find it. The second is: what does (5:4) mean? When someone tells you "that program is doing something odd with years", you first have to discover that the year is disguised as (5:4). Oh, and (1:4). Great, you've not even started looking for the issue with the program yet, and you still have to check that the positions and lengths are correct. OK, a date is a trivial example, but reference-modification users presumably apply it to everything they can (if not, why apply it to a date?). So, have fun reading.
Oh, and COBOL doesn't have "strings", it has fixed-length fields. Reference-modification just carves a new fixed-length reference (a position and a length) out of an existing field.
The CORR. Using this saves lots of typing (the reason it exists is probably down to punched cards, and the way many COBOL programs would process input data and create new output data. Programs were on punched cards, so there was a genuine reason to reduce typing - for punched-card programs).
Well, 'tis modern times now.
Let's say you want to use the "month" as a subscript to get the month-name if the year is 2005.
IF yyyy OF yyyymmdd EQUAL TO "2005"
MOVE month-name-in-table ( mm OF yyyymmdd ) TO ...
END-IF
(that assumes mm OF yyyymmdd is defined as numeric).
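For reference, the lookup above relies on a month-name table along these lines. This is only a sketch: the table values, ws-month-name, and the REDEFINES layout are illustrative assumptions, not part of the original answer.

01 month-name-values.
   05 FILLER PIC X(18) VALUE "JANFEBMARAPRMAYJUN".
   05 FILLER PIC X(18) VALUE "JULAUGSEPOCTNOVDEC".
01 month-names REDEFINES month-name-values.
   05 month-name-in-table PIC XXX OCCURS 12 TIMES.
01 ws-month-name PIC XXX.

With both the mmddyyyy and yyyymmdd groups from the CORR example in the program, mm and yyyy are no longer unique names, which is exactly why the IF above has to qualify them with OF yyyymmdd.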
Do you want to scatter "qualification" (the use of OF or IN to make a name unique by referring to something it belongs to) throughout a program, just so you can use CORR?

JRecord - Handling duplicate columns in cobol copybook

I am using CopybookInputFormat from https://github.com/tmalaska/CopybookInputFormat/ to generate a Hive table definition from a COBOL copybook. My copybook has many FILLERs (duplicate columns),
but it looks like JRecord is not handling duplicate column names correctly.
For the copybook below, when I iterate over the columns, JRecord only prints the second FILLER and ignores the first.
05 Birth-day PIC X(002)
05 Filler PIC X(008)
05 Birth-Month PIC X(002)
05 Filler PIC X(008)
05 Birth-year PIC X(004)
Does anyone have a solution for this? I know JRecord 0.80.6 onward handles duplicate columns, but the method getUniqueField("FIRST-NAME", "PRESIDENT") needs a group name - what if the group has duplicate columns?
You should not need to import a FILLER. In COBOL, a FILLER cannot be directly accessed; a FILLER says "ignore this field" (or access it by another method).
A COBOL copybook is like a mask over a block of memory; a FILLER is used to skip some of that memory.
Data          ##........##........####    (# - accessible bytes; . - bytes skipped by a FILLER)
              ^         ^         ^
              !         !         !
Birth-day ----+         !         !
Birth-Month ------------+         !
Birth-year -----------------------+
A filler can be used to:
Mask fields that are no longer used.
Mask data in a REDEFINES.
Create a simplified version of a copybook when you do not need all the fields.
Initialize an output field, e.g.:
05 report-Birth-date.
10 dd pic 99.
10 filler pic x value '/'.
10 mm pic 99.
10 filler pic x value '/'.
10 yyyy pic 9999.
setting up Table data:
05 code-values pic x(10) value '0204050612'.
05 codes redefines code-values.
10 code-entry pic 99 occurs 5 times.
I would ask the COBOL specialists where you work what is going on. Possible answers could be:
The filler data may not be needed.
You should be using a different, more complicated copybook.
The copybook should be updated with the Fillers given real names.
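If the copybook does get updated as the last suggestion describes, the change is simply to give each FILLER a real, unique name so that JRecord (or any other import tool) sees distinct columns. The names below are invented for illustration:

05 Birth-day           PIC X(002).
05 Filler-After-Day    PIC X(008).
05 Birth-Month         PIC X(002).
05 Filler-After-Month  PIC X(008).
05 Birth-year          PIC X(004).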

How to use file descriptions in cobol?

I have seen in some tutorials that the record is declared only in the file description (FD), and in other tutorials the record is also declared in the WORKING-STORAGE SECTION and used from there. What is the difference between the two?
In some programs it is like this
FD STUDENT.
01 FS-EMP-REC.
02 FS-EMP-ID PIC X(07).
02 FS-EMP-NAME PIC X(20).
02 FS-EMP-ACCT PIC X(06).
01 WS-EMP-REC.
02 WS-EMP-ID PIC X(07).
02 WS-EMP-NAME PIC X(20).
02 WS-EMP-ACCT PIC X(06).
In some tutorials it is (FD alone)
01 FS-EMP-REC.
02 FS-EMP-ID PIC X(07).
02 FS-EMP-NAME PIC X(20).
02 FS-EMP-ACCT PIC X(06).
What is the difference?
It can be a question of coding style. Some people just always use READ ... INTO ... or do a MOVE of the 01 under the FD to an 01 in the WORKING-STORAGE. Often the 01 in the FILE SECTION will just be defined with an elementary FILLER to describe the length of the input record.
Sometimes there is a specific need to do this, if the particular COBOL being used limits the use of the data in the FD (in Enterprise COBOL you can't SET an address for something in the FILE SECTION, and DB2 requires a known address, so can't be in the FILE SECTION, for instance).
People tend to think it is "safer" to use the WORKING-STORAGE, but this is not the case. People also think it is easier to locate information in the WORKING-STORAGE when a program fails.
The READ ... INTO ... requires an extra transfer of the data, so will be "slower", but that is only a problem in extreme situations.
You'll see both in programs, as you already have done, and there is no hard-and-fast answer as to why one program uses one, and another the other. Mostly it will just make no difference at all.
With READ ... INTO ... the record will still also be available in the FILE SECTION.
Unless necessary, I don't use READ ... INTO ... myself, but many people think programs won't work properly if you don't use it :-)
Just be aware of the two different ways, and use the way that those you are coding with use.
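For illustration, here is a minimal sketch of the two styles using the names from the question; the end-of-file flag and the PERFORMed paragraph names are assumptions, not from the original program.

01 WS-STUDENT-EOF PIC X VALUE "N".
   88 END-OF-STUDENT VALUE "Y".

*> Style 1: process the record where it sits, under the FD.
READ STUDENT
    AT END SET END-OF-STUDENT TO TRUE
    NOT AT END PERFORM PROCESS-FS-EMP-REC
END-READ

*> Style 2: READ ... INTO copies the record into WORKING-STORAGE;
*> it behaves like a READ followed by MOVE FS-EMP-REC TO WS-EMP-REC.
READ STUDENT INTO WS-EMP-REC
    AT END SET END-OF-STUDENT TO TRUE
    NOT AT END PERFORM PROCESS-WS-EMP-REC
END-READ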

Create a TABLE in Cobol from a data structure?

I need to make a table out of the data structure below because I am not certain how many records that are each one line long will be in my input file. If I can make a table then I will be able to loop through them at a later time which is what I need to be able to do.
Question: how do I make a table out of the data structure below?
Part B: An array in Cobol is an OCCURS 100 TIMES
01 PRECORD.
05 JE.
10 NE PIC X(6) VALUE SPACES.
10 NM PIC X(2) VALUE SPACES.
05 FILL1 PIC X(16) VALUE SPACES.
05 TM PIC X(7) VALUE SPACES.
05 FILL2 PIC X(6) VALUE SPACES.
05 TT PIC X(7) VALUE SPACES.
05 FILL3 PIC X(13) VALUE SPACES.
05 TTY PIC X(10) VALUE SPACES.
05 FILL4 PIC X(13) VALUE SPACES.
01 PRECORD.
02 table-counter <-- this is used to hold the number of records
02 tTable occurs 300 times. <-- creates a table with three hundred occurences
05 JE.
10 NE PIC X(6) VALUE SPACES.
10 NM PIC X(2) VALUE SPACES.
05 FILL1 PIC X(16) VALUE SPACES.
05 TM PIC X(7) VALUE SPACES.
05 FILL2 PIC X(6) VALUE SPACES.
05 TT PIC X(7) VALUE SPACES.
05 FILL3 PIC X(13) VALUE SPACES.
05 TTY PIC X(10) VALUE SPACES.
05 FILL4 PIC X(13) VALUE SPACES.
The code above is updated with how I think the table should look. The table has to have a counter at the top, and under that it has to have an OCCURS saying how many times the table entry should occur.
The question I was asking was how to turn the structure above into an actual table; I did not know that you had to create an OCCURS and then put everything below the level of the OCCURS.
01 mytable.
02 counter...
02 tablevar occurs 200 times.
05 var...
05 var2..
I just was not sure of the structure of a COBOL table. My question is: what is the format of a COBOL data structure?
Your table-counter will need a PICture.
What PICture? Opinions vary.
There are three numeric formats which are useful for this, binary, packed-decimal, and display-numeric.
nn table-counter COMP/COMP-4/BINARY/COMP-5 PIC 9(4).
nn table-counter COMP-3/PACKED-DECIMAL PIC 9(3).
nn table-counter PIC 9(3).
The most efficient definition will be a binary one. If you use packed-decimal, the compiler will generate code to convert it to binary when used in comparison with anything you use for subscripting (except literals). When using display-numeric, the compiler will generate code to first convert to packed-decimal, then to binary.
Do these things matter with the speed of machines these days? Well, if they don't matter, may as well be efficient, but opinions do vary.
What size for the PICture? 9(4) for binary allows up to 9999 as a maximum value. You can code 999, but it does not give you much advantage (can't limit it to 300), so I go for optimal for the size (for a packed-decimal (COMP-3) it would be 999, as you don't get a fourth digit for nothing). Same if using display-numeric. Again, opinions vary.
If those are records, as Magoo has pointed out, you can't just add the count to the beginning of the record. You can't keep your table in the FILE SECTION under an FD. It will need to go into the WORKING-STORAGE SECTION.
Then there is the problem of keeping two structures "in step" for where they should match each other.
You probably have a copybook for the record-layout. The best is if you can parameterise the names in the copybook, so that you can use REPLACING on the COPY statement, allowing you to use the same copybook for the two different purposes. It would then be important that the copybook does not contain an 01-level. Again opinions vary on the inclusion of 01s in copybooks, but you may get lucky.
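As an example of that parameterisation (the copybook name and the :PFX: tag are assumptions, not from the question): if copybook PRECCPY holds the fields with a :PFX: placeholder in each name and no 01-level,

05 :PFX:-JE.
   10 :PFX:-NE PIC X(6).
   10 :PFX:-NM PIC X(2).
05 :PFX:-TM PIC X(7).
*> ... remaining fields follow the same pattern ...

then the same member can be copied twice with different prefixes, once for the record and once for the table entry:

01 INPUT-RECORD.
   COPY PRECCPY REPLACING ==:PFX:== BY ==IN==.
01 WS-RECORD-TABLE.
   02 TABLE-COUNTER PIC 9(4) BINARY.
   02 T-ENTRY OCCURS 300 TIMES.
      COPY PRECCPY REPLACING ==:PFX:== BY ==WS==.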
Which, given all the opinion, gets us to "well, what do I do?". What you do is the way they do it at your site. There should be documentation of local standards. This may not cover everything, you may have to seek the opinions of colleagues. If you all code in about the same way, it makes the code easier to understand.
Personally, I'd declare table-counter as a 77-level with a PIC 9(03). And you really should remove the VALUE clauses. Of course, this would need to be a WORKING-STORAGE entry, not an FD, since the table isn't on the file. Other than that, what you've done appears valid - but it's difficult to see what question you are asking.
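Putting the pieces together, a sketch of the WORKING-STORAGE table and the loading loop might look like the following; the input file name, the end-of-file flag, and the check against the 300 limit are assumptions, and the entry layout repeats the record fields without the VALUE clauses.

77 table-counter PIC 9(4) BINARY VALUE ZERO.
01 ws-input-eof PIC X VALUE "N".
   88 end-of-input VALUE "Y".
01 ws-precord-table.
   02 tTable OCCURS 300 TIMES.
      05 JE.
         10 NE PIC X(6).
         10 NM PIC X(2).
      05 FILL1 PIC X(16).
      05 TM PIC X(7).
      05 FILL2 PIC X(6).
      05 TT PIC X(7).
      05 FILL3 PIC X(13).
      05 TTY PIC X(10).
      05 FILL4 PIC X(13).

*> In the PROCEDURE DIVISION: read each record and store it in the next slot.
PERFORM UNTIL end-of-input OR table-counter = 300
    READ input-file
        AT END SET end-of-input TO TRUE
        NOT AT END
            ADD 1 TO table-counter
            MOVE PRECORD TO tTable (table-counter)
    END-READ
END-PERFORM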
