Evaluate statement not working as expected - cobol

In my COBOL routine, I want to base the decision only on the first byte of the variable PAR-STR (PIC X(12)) from the Linkage Section, in order to perform different tasks (argument matching).
The compiler rejects the statement, saying that there is more than one object to evaluate in the statement.
DISPLAY PAR-STR.
EVALUATE PAR-STR(1:1)
WHEN 'P'
WHEN 'p'
PAR-STR = "X"
WHEN 'L'
WHEN 'l'
PAR-STR = "Y"
WHEN OTHER PAR-STR = "Z"
The compile result gives me a problem: that the objects are not separated by ALSO.
From my understanding, there is only one object to evaluate. So unless there is a bug in the compiler, there is some cause of this problem that I'm not aware of.
From the compiler output:
000010 WHEN 'p'
==000010==> IGYPS2165-S Multiple "EVALUATE" objects were not separated by "ALSO". The statement
was discarded.
==000010==> IGYPS2133-S The number of "EVALUATE" subjects was less than the number of "EVALUATE"
objects. The statement was discarded.
==000010==> IGYPA3009-S The selection object at position 1 in the "WHEN" phrase did not match the
type of the corresponding selection subject in the "EVALUATE" statement.
The selection object was discarded.
Thanks for any hint about what could be the cause of the messages shown.

The compiler does mention that there is no ALSO, but only because it finds an additional expression in the first WHEN: PAR-STR = "X" is not a valid COBOL statement (assignment is done with MOVE), so the compiler reads it as another EVALUATE object instead of as the imperative statement for that WHEN.
The corrected statements that work look like this:
DISPLAY PAR-STR.
EVALUATE PAR-STR(1:1)
WHEN 'P'
WHEN 'p'
MOVE "X" TO PAR-STR
WHEN 'L'
WHEN 'l'
MOVE "Y" TO PAR-STR
WHEN OTHER MOVE "Z" TO PAR-STR
END-EVALUATE.
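For context, a minimal sketch of how the routine might be exercised from a caller; the subprogram name 'MYSUB' and the value 'PRINT' are made-up assumptions, not part of the question:
01 WS-PARM PIC X(12) VALUE 'PRINT'.
...
CALL 'MYSUB' USING WS-PARM
DISPLAY WS-PARM
In the subprogram, PAR-STR is declared in the LINKAGE SECTION and named on PROCEDURE DIVISION USING PAR-STR, so the EVALUATE above sees the caller's first byte 'P', and the DISPLAY in the caller afterwards shows "X" padded to 12 characters with spaces.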

Related

Cobol Reference Modification: What exactly does "MOVE Variable(Variable +literal:literal) TO Variable" do?

There is one thing which I don't understand about reference modification in Cobol.
The example goes like this:
MOVE VARIABLE(VARIABLE2 +4:2) TO VARIABLE3
Now I do not quite understand what the "+4:2" refers to. Does it mean that the first two characters, 4 characters after the target, are moved? Meaning if, for example, VARIABLE (the 1st) is filled with "123456789" and VARIABLE2 refers to the 2nd and 3rd positions within that variable (so "23"), the target is "23 +4", meaning "789". Then the first two positions of the target (indicated by the ":2") are moved to VARIABLE3, so in the end VARIABLE3 would contain "78".
Am I understanding this right, or am I making a false assumption about that instruction?
(VARIABLE2 +4:2) is a syntax error, because the starting position must be an arithmetic expression: there must be a space after the + for this reference modification to be valid. Also, VARIABLE2 must be numeric and the expression must evaluate to an integer.
Once corrected, 4 is added to the content of VARIABLE2; the result is the left-most (starting) position within VARIABLE for the move. Two characters are moved to VARIABLE3. If VARIABLE3 is longer than two characters, the remaining positions are filled with spaces.
From the 2002 COBOL standard:
8.7.1 Arithmetic operators
There are five binary arithmetic operators and two unary arithmetic operators that may be used in arithmetic expressions. They are represented by specific COBOL characters that shall be preceded by a space and followed by a space except that no space is required between a left parenthesis and a unary operator or between a unary operator and a left parenthesis.
Emphasis added.
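Using the corrected syntax, a minimal sketch of how this plays out (the data names and values here are made up for illustration):
01 VAR-SOURCE PIC X(9) VALUE '123456789'.
01 VAR-START PIC 9(2) VALUE 3.
01 VAR-TARGET PIC X(2).
...
MOVE VAR-SOURCE(VAR-START + 4:2) TO VAR-TARGET
VAR-START + 4 evaluates to 7, so the two characters starting at position 7 of VAR-SOURCE ("78") are moved, and VAR-TARGET ends up containing "78".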

Behavior of STRING verb

I am reading a COBOL program file and I am struggling to understand the way the STRING verb works in the following example:
STRING WK-NO-EMP-SGE
','
WK-DT-DEB-PER-FEU-TEM
','
WK-DT-FIN-PER-FEU-TEM
DELIMITED BY SIZE
INTO UUUUUU-CO-CLE-ERR-DB2
I have three possible understandings of what it does:
1. The code concatenates the variables into UUUUUU-CO-CLE-ERR-DB2 and separates the values with ',', and only the last variable is delimited by size;
2. The code concatenates the variables into UUUUUU-CO-CLE-ERR-DB2 and separates the values with ',', but all the values are delimited by size (meaning that the DELIMITED BY SIZE in this case applies to all the values passed to the STRING statement);
3. Each variable is delimited by a specific character: for example, WK-NO-EMP-SGE would be delimited by ',', WK-DT-DEB-PER-FEU-TEM by ',', and WK-DT-FIN-PER-FEU-TEM would then be DELIMITED BY SIZE.
Which of my readings is actually the right one?
Look at the syntax diagram for STRING in the Enterprise COBOL Language Reference.
Now you need to know how to read it.
Fortunately, the same document tells you how:
How to read the syntax diagrams
Use the following description to read the syntax diagrams in this document:
. Read the syntax diagrams from left to right, from top to bottom, following the path of the line.
The >>--- symbol indicates the beginning of a syntax diagram.
The ---> symbol indicates that the syntax diagram is continued on the next line.
The >--- symbol indicates that the syntax diagram is continued from the previous line.
The --->< symbol indicates the end of a syntax diagram. Diagrams of syntactical units other than complete statements start with the >--- symbol and end with the ---> symbol.
. Required items appear on the horizontal line (the main path).
. Optional items appear below the main path.
. When you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main path.
If choosing one of the items is optional, the entire stack appears below the main path.
. An arrow returning to the left above the main line indicates an item that can be repeated.
A repeat arrow above a stack indicates that you can make more than one choice from the stacked items, or repeat a single choice.
. Variables appear in italic lowercase letters (for example, parmx). They represent user-supplied names or values.
. If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, they must be entered as part of the syntax.
All that means, if you follow it through, that your number 2 is correct.
You can use a delimiter (when you don't have fixed-length data) or just use the size. Any sending item whose delimiter is not explicit is delimited by the next DELIMITED BY phrase that follows it.
One thing to watch for with STRING, which doesn't matter in your case, is that the target field does not get space-padded if the data is shorter than the target. With variable-length data, you need to clear the field to space before the STRING executes.
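As a minimal sketch of that last point (the receiving field WS-TARGET is a made-up name), you clear the target before the STRING:
MOVE SPACES TO WS-TARGET
STRING WK-NO-EMP-SGE ','
WK-DT-DEB-PER-FEU-TEM ','
WK-DT-FIN-PER-FEU-TEM
DELIMITED BY SIZE
INTO WS-TARGET
END-STRING
Without the MOVE SPACES, any characters left over from a previous, longer value would remain in the positions of WS-TARGET that this STRING does not overwrite.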
There is a nuance one must grasp in order to understand the results. DELIMITED BY SIZE can be misleading if one has experience in other programming languages.
Each of the three variables has a size that is defined in WORKING-STORAGE. Let's presume it looks something like this.
05 WK-NO-EMP-SGE PIC X(04).
05 WK-DT-DEB-PER-FEU-TEM PIC X(10).
05 WK-DT-FIN-PER-FEU-TEM PIC X(10).
If the value of the variables were set like this:
MOVE 'BOB' TO WK-NO-EMP-SGE.
MOVE 'Q' TO WK-DT-DEB-PER-FEU-TEM.
MOVE 'D19EIEIO2B' TO WK-DT-FIN-PER-FEU-TEM.
Then one might expect the value of UUUUUU-CO-CLE-ERR-DB2 to be:
BOB,Q,D19EIEIO2B
But because DELIMITED BY SIZE moves each field for its full defined length, it would actually be:
BOB ,Q         ,D19EIEIO2B
(one trailing space after BOB, since WK-NO-EMP-SGE is PIC X(04), and nine trailing spaces after Q, since WK-DT-DEB-PER-FEU-TEM is PIC X(10)).

Grammar: start: (a b)? a c; Input: a d. Which error correct at position 2? 1. expected "b", "c". OR expected "c"

Grammar:
rule: (a b)? a c ;
Input:
a d
Question: which error message is correct at position 2 for the given input?
1. expected "b", "c".
2. expected "c".
P.S.
I am writing a parser and I have a choice (dilemma): take into account that "b" is expected at that position, or not.
Error #1 (expected "b", "c") is trying to say that the input "a b" is expected, but because it is optional it may not be expected, only possible.
I don't know whether "possible" is the same as "expected" or not.
Which error message is better and correct, #1 or #2?
Thanks for any answers.
P.S.
In the first case I define the testing marker as a position limit:
if(_inputPos > testing) {
_failure(_inputPos, _code[cp + {{OFFSET_RESULT}}]);
}
The limit is moved in optional expressions:
OPTIONAL_EXPRESSION:
testing = _inputPos;
The "b" expression moves _inputPos above the testing position and adds a failure at _inputPos.
In the second case I can define the testing marker as a boolean flag:
if(!testing) {
_failure(_inputPos, _code[cp + {{OFFSET_RESULT}}]);
}
In this case the "b" expression does not add a failure, because it is being tested (it is inside an optional expression).
Which do you think is better and correct?
In the first approach, testing is defined as a specific position, and if an expression goes beyond this position (_inputPos > testing) it adds a failure (even if it is inside an optional expression).
In the second approach, testing is defined as a flag, and while this flag is set failures are not taken into account. After executing the optional expression the previous value of testing (true or false) is restored (not reset!).
Also, failures are not taken into account if the rule does not fail; they are only reported if parsing fails.
P.S.
Changes as of 06 Jan 2014
This question was raised because it relates to two different problems.
First problem:
A parsing expression grammar (PEG) describes only three atomic items of input:
terminal symbol
nonterminal symbol
empty string
This kind of grammar does not provide an operation such as lexical preprocessing, and thus it does not provide an element such as the token.
Second problem:
What is a grammar? Can two grammars be considered equal if they accept the same input but produce different results?
Assume we have two grammars:
Grammar 1
rule <- type? identifier
Grammar 2
rule <- type identifier / identifier
They both accept the same input but produce (in PEG) different results.
Grammar 1 results:
{type : type, identifier : identifier}
{type : null, identifier : identifier}
Grammar 2 results:
{type : type, identifier : identifier}
{identifier : identifier}
Questions:
Are both grammars equal?
Is it painless to optimize grammars?
My answer to both questions is negative: not equal, and not painless.
But you may ask, "Why does this happen?"
My answer: "Because this is not a problem. This is a feature."
In a PEG, a parser expression ALWAYS consists of these parts:
ORDERED_CHOICE => SEQUENCE => EXPRESSION
And that explanation is my answer to the question "Why does this happen?".
Another problem.
A PEG parser does not recognize WHITESPACE, because it does not have tokens and token separators.
Now look at this grammar (abbreviated):
program <- WHITESPACE expr EOF
expr <- ruleX
ruleX <- 'X' WHITESPACE
WHITESPACE <- ' '?
EOF <- ! .
All PEG grammars are described in this manner: the first WHITESPACE at the beginning, and another WHITESPACE (often) at the end of a rule.
In this case, in a PEG the optional WHITESPACE must be assumed to be expected.
But WHITESPACE does not mean only a space. It may be more complex ([\t\n\r]) and may even include comments.
But the main rule of error messages is the following:
If it is not possible to display all expected elements (or not possible to display even one of the whole set of expected elements), then it is more correct not to display anything.
More precisely, an "unexpected" error message is required.
How would you display an expected WHITESPACE in a PEG?
Parser error: expected WHITESPACE
Parser error: expected ' ', '\t', '\n', '\r'
What about the start characters of comments? They may also be part of WHITESPACE in some grammars.
In this case the optional WHITESPACE would reject all other potential expected elements, because it is not possible to display WHITESPACE correctly in an error message; WHITESPACE is too complex to display.
Is this good or bad?
I think this is not bad, and it requires some tricks to hide this nature of PEG parsers.
So in my PEG parser I do not assume that the inner expression at the first position of an optional (optional & zero_or_more) expression must be treated as expected.
But all other inner expressions (except the one at the first position) must be treated as expected.
Example 1:
List<int list; // type? ident
Here "List<int" is a "type", but the missing ">" is not at the first position in the optional "type?".
This failure is taken into account and reported as "expected '>'".
This is because we do not skip "type" but enter into it, and after the really optional "List" we move the position from the first element to the next real "expected" element (one that is already outside of the testing position).
"List" was at the "testing" position.
If an inner expression (inside an optional expression) "fits within the limit" and does not continue at the next position, then it is not assumed to be expected input.
The main question was asked on the basis of this assumption.
You must just take into account that we are talking about PEG parsers and their error messages.
Here is your grammar:
rule: (a b)? a c ;
What is clear here is that after the first a there are two possible inputs: b or c. Your error message should not prioritize one over the other.
The basic idea when producing an error message for invalid input is to find the farthest place where you failed (if your grammar were d | (a b)? a c, d wouldn't be part of the error), determine all the possible inputs that could make you advance, and say "expected '...' but got '...'". There are other approaches that try to recover the parser and force it to continue: if there is only one possible expected token, temporarily insert it into the token stream and continue as if it had been there all along. This can lead to better error detection, as you can find errors beyond the point where the parser first stopped.

Data Validation

So I have entered my second semester of college and they have me doing a course called Advanced COBOL. As one of my assignments I have to make a program that tests certain things in a file to make sure the input has no errors. I get the general idea, but there are just a few things I don't understand, and my teacher is one of those people who will give you an assignment and make you figure it out yourself with little or no help. So here is what I need help with:
I have a field where the first 5 columns have to be numbers, the 6th column a capital letter, and the last 2 a number in the range 01-68 or 78-99.
One of my fields has to be a string of numbers with a dash in it, like 00000-000, but some have more than one dash. How can I count the dashes to identify that there is a problem?
Here are a few hints...
Use a hierarchical record structure to view the data in different ways. For example:
01 ITEM-REC.
05 ITEM-CODE.
10 ITEM-NUM-CODE PIC 9(3).
10 ITEM-CHAR-CODE PIC A(3).
88 ITEM-TYPE-A VALUE 'AAA' THRU 'AZZ'.
88 ITEM-TYPE-B VALUE 'BAA' THRU 'BZZ'.
05 QUANTITY PIC 9(4).
ITEM-CODE is a 6 character group field, the first part of which is numeric (ITEM-NUM-CODE) and the last part
is alphabetic (ITEM-CHAR-CODE). You can refer to any one of these three variables in your program. When you
refer to ITEM-CODE, or any other group item, COBOL
treats the variable as if it were declared as PIC X. This means you can
MOVE just about anything into it without raising an error. For example:
MOVE 'ABCdef' TO ITEM-CODE
or
MOVE 'ABCdef0005' TO ITEM-REC
Neither one would cause an error even though the elementary data item ITEM-NUM-CODE is definitely not a number.
To verify the validity
of your data after a group move you should validate each elementary data item separately (unless
you know for certain no data type errors could have occurred). There are a variety of ways to do this. For
example if the data item has to be numeric the following would work:
IF ITEM-NUM-CODE IS NUMERIC
CONTINUE
ELSE
DISPLAY 'ITEM-NUM-CODE IS NOT NUMERIC'
PERFORM BIG-BAD-ERROR
END-IF
COBOL provides various class tests which can be applied against a data item. For
example: NUMERIC, ALPHABETIC and ALPHANUMERIC are commonly used.
Another common way to test for ranges of values is by defining various 88 levels - but exercise
caution. In the above
example ITEM-TYPE-A is an 88 level that defines a data range from 'AAA' through 'AZZ' based on
the collating sequence currently in effect. To verify that ITEM-CHAR-CODE contains only alphabetic
characters and the first letter is an 'A' or a 'B', you could do something like:
IF ITEM-CHAR-CODE ALPHABETIC
DISPLAY 'ITEM-CHAR-CODE is alphabetic.'
EVALUATE TRUE
WHEN ITEM-TYPE-A
DISPLAY 'ITEM-CHAR-CODE is in range AAA through AZZ'
WHEN ITEM-TYPE-B
DISPLAY 'ITEM-CHAR-CODE is in range BAA through BZZ'
WHEN OTHER
DISPLAY 'ITEM-CHAR-CODE is in some other range'
END-EVALUATE
ELSE
DISPLAY 'ITEM-CHAR-CODE is not alphabetic'
END-IF
Note the separate test for ALPHABETIC above. Why do that when the 88 level tests could have done the job? Actually the 88's are not sufficient, because they cover the entire range from AAA through AZZ based on the collating sequence currently in effect. In an EBCDIC based environment (a very large number of COBOL shops use EBCDIC) this captures values such as 'A}\'. The close-brace and backslash characters are non-alphabetic but fall into the middle of the range 'A' through 'Z' (what the #*#! is that all about?). Also note that a value such as 'aaa' would not satisfy the ITEM-TYPE-A condition, because lower-case letters fall outside the defined range. Maybe it is time to check out an EBCDIC character table.
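To tie those hints back to the field described in the question (first 5 columns numeric, 6th a capital letter, last 2 in 01-68 or 78-99), a rough sketch might look like this - the names are made up, and the three-way split of the letter ranges is there to avoid the non-alphabetic EBCDIC gaps just mentioned:
01 IN-FIELD.
05 IN-NUMBER PIC X(5).
05 IN-LETTER PIC X.
88 CAPITAL-LETTER VALUE 'A' THRU 'I' 'J' THRU 'R' 'S' THRU 'Z'.
05 IN-LAST-TWO PIC XX.
88 LAST-TWO-IN-RANGE VALUE '01' THRU '68' '78' THRU '99'.
...
IF IN-NUMBER IS NUMERIC AND IN-LAST-TWO IS NUMERIC
IF CAPITAL-LETTER AND LAST-TWO-IN-RANGE
DISPLAY 'FIELD OK'
ELSE
DISPLAY 'FIELD IN ERROR'
END-IF
ELSE
DISPLAY 'FIELD IN ERROR'
END-IF
The class tests catch non-digits, the first 88 catches anything that is not an upper-case letter, and the second 88 rejects 69 through 77; the comparison on two-digit strings follows the same order as the numbers themselves once the NUMERIC test has passed.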
Finally, you can count the number of occurrences of a character, or string of characters, in a variable with the INSPECT verb as follows:
INSPECT ITEM-CODE TALLYING DASH-COUNT FOR ALL '-'
DASH-COUNT needs to be a numeric item, and INSPECT adds to whatever it already contains, so set it to zero first; it will then contain the number of dash characters in ITEM-CODE. The INSPECT verb is not so useful if you want to count the number of digits, because you would need one clause for each digit.
It might be easier to just code a loop something like:
PERFORM VARYING I FROM 1 BY 1
UNTIL I > LENGTH OF ITEM-CODE
EVALUATE ITEM-CODE(I:1)
WHEN '-'
COMPUTE DASH-COUNT = DASH-COUNT + 1
WHEN '0' THRU '9'
COMPUTE DIGIT-COUNT = DIGIT-COUNT + 1
WHEN OTHER
COMPUTE OTHER-COUNT = OTHER-COUNT + 1
END-EVALUATE
END-PERFORM
Now ask yourself why I was comfortable using a zero through 9 range check? Hint: look at the collating sequence.
Hope this helps.
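Applying the INSPECT hint to the dashed field from the question might look like this (the field name, its picture and the counter are assumptions, not part of the original post):
01 PART-NUMBER PIC X(9).
01 DASH-COUNT PIC 9(2).
...
MOVE ZERO TO DASH-COUNT
INSPECT PART-NUMBER TALLYING DASH-COUNT FOR ALL '-'
IF DASH-COUNT NOT = 1
DISPLAY 'PART-NUMBER HAS THE WRONG NUMBER OF DASHES'
END-IF
A value such as '00000-000' passes, while '000-00-00' is flagged because it contains two dashes.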

In Cobol, to test "null or empty" we use "NOT = SPACE [ AND/OR ] LOW-VALUE" ? Which is it?

I am now working on a mainframe, and in some modules, to test "not null or empty", we see:
NOT = SPACE OR LOW-VALUE
The chief says that we should write:
NOT = SPACE AND LOW-VALUE
Which one is it?
Thanks!
The chief is correct.
COBOL is supposed to read something like natural language (this turns out to be just another bad joke).
Let's play with the following variables and values:
A = 1
B = 2
C = 3
An expression such as:
IF A NOT EQUAL B THEN...
Is fairly straightforward to understand. One is not equal to two, so we will do whatever follows the THEN. However,
IF A NOT EQUAL B AND A NOT EQUAL C THEN...
Is a whole lot harder to follow. Again, one is not equal to two AND one is not equal to three, so we will do whatever follows the 'THEN'.
COBOL has a shorthand construct that IMHO should never be used. It confuses just about everyone (including me from time to time). Shorthand expressions let you reduce the above to:
IF A NOT EQUAL B AND C THEN...
or, if you would like to apply De Morgan's rule:
IF NOT (A EQUAL B OR C) THEN...
My advice to you is to avoid NOT in expressions and NEVER use COBOL shorthand expressions.
What you really want is:
IF X = SPACE OR X = LOW-VALUE THEN...
CONTINUE
ELSE
do whatever...
END-IF
The above does nothing when 'X' contains either spaces or low-values (nulls). It is exactly the same as:
IF NOT (X = SPACE OR X = LOW-VALUE) THEN
do whatever...
END-IF
Which can be transformed into:
IF X NOT = SPACE AND X NOT = LOW-VALUE THEN...
And finally...
IF X NOT = SPACE AND LOW-VALUE THEN...
My advice is to stick to longer, simple-to-understand, straightforward expressions in COBOL, and forget the shorthand crap.
In COBOL, there is no such thing as a Java null, AND a field is never "empty".
For example, take a field
05 FIELD-1 PIC X(5).
The field will always contain something.
MOVE LOW-VALUES TO FIELD-1.
Now it contains hexadecimal zeros: x'0000000000'.
MOVE HIGH-VALUES TO FIELD-1.
Now it contains all binary ones: x'FFFFFFFFFF'
MOVE SPACES TO FIELD-1.
Now each byte is a space (EBCDIC): x'4040404040'.
Once you declare a field, it refers to a certain area of memory. That memory area is always set to something: even if you never modify it, it will still hold whatever garbage was there before the program was loaded - unless you initialize it:
05 FIELD-1 PIC X(6) VALUE 'BARUCH'.
It is worth noting that the value null is not always the same as low-value and this depends on the device architecture and its character set in use as determined by the manufacturer. Mainframes can have an entirely different collating sequence (low to high character code and symbol order) and symbol set compared to a device using linux or windows as you have no doubt seen by now. The shorthand used in Cobol for comparisons is sometimes used for boolean operations, like IF A GOTO PAR-5 and IF A OR C THEN .... and can be combined with comparisons of two variables or a variable and a literal value. The parser and compiler on different devices should deal with these situations in a standard (ANSI) method but this is not always the situation.
I agree with NealB. Keep it simple, avoid "short cuts", make it easy to understand without having to refer to the manual to check things out.
IF ( X EQUAL TO SPACE )
OR ( X EQUAL TO LOW-VALUES )
CONTINUE
ELSE
do whatever...
END-IF
However, why not put an 88 on X, and keep it really simple?
88 X-HAS-A-VALUE-INDICATING-NULL-OR-EMPTY VALUE SPACE, LOW-VALUES.
IF X-HAS-A-VALUE-INDICATING-NULL-OR-EMPTY
CONTINUE
ELSE
do whatever...
END-IF
Note, in Mainframe Cobol, NULL is very restricted in meaning, and is not the meaning that you are attributing to it, Tom. "Empty" only means something in a particular coder-generated context (it means nothing to Cobol as far as a field is concerned).
We don't have "strings". Therefore, we don't have "null strings" (a string of length one including string-terminator). We don't have strings, so a field always has a value, so it can never be "empty" other than as termed by the programmer.
Oguz, I think your post illustrates how complex something that is really simple can be made, and how that can lead to errors. Can you test your conditions, please?
