Alphanumeric movement to Numeric - cobol

Moving an alphanumeric field to a numeric variable caused unexpected results. Here is the code for reference:
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WS-VAR-STR PIC X(3) VALUE SPACES.
01 WS-VAR-NUM PIC 9(3) VALUE ZEROES.
PROCEDURE DIVISION.
MOVE '1' TO WS-VAR-STR
MOVE WS-VAR-STR TO WS-VAR-NUM
DISPLAY 'STRING > ' WS-VAR-STR '< MOVED > ' WS-VAR-NUM '<'
IF WS-VAR-NUM >= 40 AND <= 59
DISPLAY 'INSIDE IF >' WS-VAR-NUM
ELSE
DISPLAY 'INSIDE ELSE >' WS-VAR-NUM
END-IF
GOBACK
.
OUTPUT:
STRING > 1 < MOVED > 1 0<
INSIDE ELSE >1 0
The result is bizarre and I want to figure out why '1' is moved as '1 0' into the numeric variable; interestingly, there was NO issue using it in the condition either. Do share your views. Thanks for your interest.

Basically you have done an illegal MOVE. Moving alphanumeric to numeric fields is valid
provided that the content of the alphanumeric field contains only numeric characters.
This reference
summarizes valid/invalid moves.
What were you expecting as a result?
Moves of alphanumeric fields into numeric ones are done without
'conversion'. Basically you just dropped a '1' digit followed by two spaces into a numeric field. The '1' was ok, the two spaces
were not. The last two bytes of WS-VAR-NUM contain spaces.
But wait... why is the last character a zero? The answer to this is a bit more complicated.
Items declared as PIC 9 (with the default USAGE DISPLAY) are represented in Zoned Decimal.
Each digit of a zoned decimal number is represented by a single byte.
The 4 high-order bits of each byte are zone bits; the 4 high-order bits of the low-order byte represent
the sign of the item. The 4 low-order bits of each byte contain the value of the digit. The key here
is where the sign is stored. It is in the high order bits of the last byte. Your declaration did not
include a sign so the MOVE statement blows away the sign bits and replaces them with default
numeric high order bits (remember the only valid characters to MOVE are digits - so this
patch process should always yield a valid result). The high order bits of an unsigned zoned decimal
digit are always HEX F. What are the low order bits of the last byte? A space has an EBCDIC HEX value of 40; a zero is HEX F0. Since the MOVE statement "fixes" the sign automatically, you end up with HEX F0 in the low order digit, which happens to be, you guessed it, zero. None of the other 'digits' contain sign bits, so they are left as
they were.
Finally, a DISPLAY statement converts zoned decimal fields into their equivalent character representation
for presentation. Net result: '1 0'.
BTW, the above discussion is how it works out on an IBM z/OS platform - other character sets (e.g. ASCII) and/or other platforms may yield different results, not because IBM is doing the wrong thing, but because the program is doing an illegal MOVE and the results are essentially undefined.
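If you want to see the raw bytes for yourself, here is a minimal sketch (assuming z/OS Enterprise COBOL and EBCDIC; the WS-VAR-RAW redefinition is added purely for illustration). It shows the uncoerced bytes first, then what happens once ADD ZERO forces every byte through full numeric coercion:
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WS-VAR-STR PIC X(3) VALUE SPACES.
01 WS-VAR-NUM PIC 9(3) VALUE ZEROES.
01 WS-VAR-RAW REDEFINES WS-VAR-NUM PIC X(3).
PROCEDURE DIVISION.
MOVE '1' TO WS-VAR-STR
MOVE WS-VAR-STR TO WS-VAR-NUM
*> WS-VAR-NUM should now hold X'F1 40 F0' - digit, untouched space, forced zero
DISPLAY 'RAW BYTES >' WS-VAR-RAW '<'
*> ADD ZERO re-derives every digit from its low-order nibble, so the
*> embedded space (X'40') likely becomes a zero and this displays 100
ADD ZERO TO WS-VAR-NUM
DISPLAY 'COERCED   >' WS-VAR-NUM '<'
GOBACK
.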

Related

Why does my COBOL working storage variable have trailing zeroes?

I'm building a COBOL program to calculate the average of up to 15 integers. The execution displays a number that is far bigger than intended with a lot of trailing zeroes. Here is the relevant code:
Data Division.
Working-Storage Section.
01 WS-COUNTER PIC 9(10).
01 WS-INPUT-TOTAL PIC 9(10).
01 WS-NEXT-INPUT PIC X(8).
01 WS-CONVERTED-INPUT PIC 9(8).
01 WS-AVG PIC 9(8)V99.
Procedure Division.
PROG.
PERFORM INIT-PARA
PERFORM ADD-PARA UNTIL WS-COUNTER = 15 OR WS-NEXT-INPUT = 'q'
PERFORM AVG-PARA
PERFORM END-PARA.
INIT-PARA.
DISPLAY 'This program calculates the average of inputs.'.
MOVE ZERO TO WS-COUNTER
MOVE ZERO TO WS-INPUT-TOTAL
MOVE ZERO TO WS-AVG.
ADD-PARA.
DISPLAY 'Enter an integer or type q to quit: '
ACCEPT WS-NEXT-INPUT
IF WS-NEXT-INPUT NOT = 'q'
MOVE WS-NEXT-INPUT TO WS-CONVERTED-INPUT
ADD WS-CONVERTED-INPUT TO WS-INPUT-TOTAL
ADD 1 TO WS-COUNTER
END-IF.
AVG-PARA.
IF WS-COUNTER > 1
DIVIDE WS-INPUT-TOTAL BY WS-COUNTER GIVING WS-AVG
DISPLAY 'Your average is ' WS-AVG '.' WS-NEXT-INPUT
END-IF.
The reason I put WS-NEXT-INPUT as alphanumeric and move it to a numeric WS-CONVERTED-INPUT if the IF condition is satisfied is because I want it to be able to take "q" to break the UNTIL loop, but after the condition is satisfied, I want a numeric variable for the arithmetical statements. Here's what it looks like with the numbers 10 and 15 as inputs:
10is program calculates the average of inputs.
Enter an integer or type q to quit:
15
Enter an integer or type q to quit:
q
Your average is 1250000000.
The console is a bit buggy so it forces me to input the 10 in that top left corner most of the time. Don't worry about that.
You see my problem in that execution. The result is supposed to be 00000012.50 instead of 1250000000. I tried inserting a few of my other variables into that display statement and they're all basically as they should be except for WS-INPUT-TOTAL which with that combination of numbers ends up being 0025000000 instead of 0000000025 as I would have expected. Why are these digits being stored in such a weird and unexpected way?
You have that strange output because of undefined behavior - computing with spaces.
The MOVE you present has the exact same USAGE and the same size - it will commonly be taken over "as is"; it does not magically convert the trailing spaces, so WS-CONVERTED-INPUT ends up with '10' followed by six spaces. As the standard says for the move:
De-editing takes place only when the sending operand is a numeric-edited data item and the receiving item is a numeric or a numeric-edited data item.
and if it would be an edited field then it still should raise an exception on the MOVE:
When a numeric-edited data item is the sending operand of a de-editing MOVE statement and the content of that data item is not a possible result for any editing operation in that data item, the result of the MOVE operation is undefined and an EC-DATA-INCOMPATIBLE exception condition is set to exist.
When computing with spaces you commonly would get a fatal error raised, but it seems your compiler does not have that check activated (and because you didn't share your compile command or even your compiler, we can't help with that).
Different COBOL dialects often use zero for invalid data, at least for spaces (and only when the checks that would otherwise lead to an abort are not activated), but they are free to use anything. This leads to WS-CONVERTED-INPUT being "seen as" 10000000 - so your computation then includes those big numbers.
So your program should work if you enter the necessary amount of leading zeroes on input.
General:
"never trust input data - validate" (and error or convert as necessary)
at least if something looks suspicious - activate all runtime checks available, re-try.
Solution - Do an explicit conversion:
MOVE FUNCTION NUMVAL(WS-NEXT-INPUT) TO WS-CONVERTED-INPUT - this will strip surrounding spaces and then convert from left to right until invalid data is found. A good coder would also check up-front using FUNCTION TEST-NUMVAL, otherwise you compute with zero if someone enters "TWENTY".
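As a rough sketch (assuming a compiler that supports the TEST-NUMVAL intrinsic, a COBOL 2002/2014 function that is not available everywhere), ADD-PARA could be reworked along these lines:
ADD-PARA.
    DISPLAY 'Enter an integer or type q to quit: '
    ACCEPT WS-NEXT-INPUT
    IF WS-NEXT-INPUT NOT = 'q'
        IF FUNCTION TEST-NUMVAL(WS-NEXT-INPUT) = 0
*>          zero means the whole field is acceptable to NUMVAL
            MOVE FUNCTION NUMVAL(WS-NEXT-INPUT) TO WS-CONVERTED-INPUT
            ADD WS-CONVERTED-INPUT TO WS-INPUT-TOTAL
            ADD 1 TO WS-COUNTER
        ELSE
            DISPLAY 'Not numeric, ignored: ' WS-NEXT-INPUT
        END-IF
    END-IF.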

Change display format from character mode to numeric mode

The value in variable VAR is -1, and when I try to write it to a file it gets displayed as 'J' (character mode), which is equivalent to -1.
The VAR is defined in Cobol program copybook as below:
10 VAR PIC S9(1).
Is there any way to change the display format from character 'J' to -1 in the output file?
The information which I found by googling is below:
Value +0 Character {
Value -0 Character }
Value +1 Character A
To convert the zoned ASCII field which results from an EBCDIC to ASCII character translation to a leading sign numeric field, inspect the last digit in the field. If it's a "{" replace the last digit with a 0 and make the number positive. If it's an "A" replace the last digit with a 1 and make the number positive, if it's a "B" replace the last digit with a 2 and make the number positive, etc., etc. If the last digit is a "}" replace the last digit with a 0 and make the number negative. If it's a "J" replace the last digit with a 1 and make the number negative, if it's a "K" replace the last digit with a 2 and make the number negative, etc., etc. Follow these rules for all possible values. You could do this with a look-up table or with IF or CASE statements. Use whatever method suits you best for the language you are using. In most cases you should put the sign immediately before the first digit in the field. This is called a floating sign, and is what most PC programs expect. For example, if your field is 6 bytes, the value -123 should read " -123" not "- 123".
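A rough COBOL sketch of that per-character fix-up (hypothetical field names, assuming the zoned value has already been translated to character form; only a few of the letter cases are shown):
01 WS-ZONED      PIC X(6).
01 WS-SIGN       PIC X.
01 WS-LAST-DIGIT PIC 9.

    EVALUATE WS-ZONED(6:1)
        WHEN '{' MOVE '+' TO WS-SIGN  MOVE 0 TO WS-LAST-DIGIT
        WHEN 'A' MOVE '+' TO WS-SIGN  MOVE 1 TO WS-LAST-DIGIT
        WHEN 'B' MOVE '+' TO WS-SIGN  MOVE 2 TO WS-LAST-DIGIT
        WHEN '}' MOVE '-' TO WS-SIGN  MOVE 0 TO WS-LAST-DIGIT
        WHEN 'J' MOVE '-' TO WS-SIGN  MOVE 1 TO WS-LAST-DIGIT
        WHEN 'K' MOVE '-' TO WS-SIGN  MOVE 2 TO WS-LAST-DIGIT
*>      ... remaining letters C-I and L-R follow the same pattern ...
        WHEN OTHER MOVE '+' TO WS-SIGN  MOVE WS-ZONED(6:1) TO WS-LAST-DIGIT
    END-EVALUATE
*>  put the corrected digit back; WS-SIGN can then be placed immediately
*>  before the first digit, as described above
    MOVE WS-LAST-DIGIT TO WS-ZONED(6:1)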
It might be simpler to move it to an EBCDIC output (display) field so that it's just EBCDIC characters, and then convert that to ASCII and write it.
For example
10 VAR PIC S9(1).
10 WS-SEPSIGN PIC S9(1) SIGN IS LEADING SEPARATE.
10 WS-DISP REDEFINES WS-SEPSIGN PIC XX.
MOVE VAR TO WS-SEPSIGN.
Then convert WS-DISP to ASCII using a standard lookup table and write it to the file.
If you are sending data from an EBCDIC machine to an ASCII machine, or vice versa, by far the best way is to only deal with character data. You can then let the transfer/communication mechanism do the ASCII/EBCDIC translation at record/file level.
Field-level translation is possible, but is much more prone to error (fields must be defined, accurately, for everything) and is slower (many translations versus one).
The SIGN clause is a very good way to do this. There is no need to REDEFINE the field (again you get into issues with field definitions: two places to change if the size is changed).
There is a similar issue with decimal places where they exist. Where source and target definitions are not the same, an explicit decimal point has to be provided, or a separate scaling factor.
Both issues, and the original issue, can also be dealt with by using numeric-edited definitions.
01 transfer-record.
...
05 numeric-edited-VAR1 PIC +9.
...
With positive one, that will contain +1, with negative one, that will contain -1.
Take an amount field:
01 VAR2 PACKED-DECIMAL PIC S9(7)V99.
...
01 transfer-record.
...
05 numeric-edited-VAR2 PIC +9(7).99.
...
For 4567.89, positive, the new field will contain +0004567.89. For the same value, but negative, -0004567.89.
The code on the Source-machine is:
MOVE VAR1 TO numeric-edited-VAR1
MOVE VAR2 TO numeric-edited-VAR2
And on the target (in COBOL)
MOVE numeric-edited-VAR1 TO VAR1
MOVE numeric-edited-VAR2 TO VAR2
The code is the same if you use the SIGN clause for fields without decimal places (or with decimal places if you want the danger of being implicit about it).
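For completeness, a minimal sketch of that SIGN clause variant (the field names here are invented for illustration):
01 VAR1              PIC S9.
01 transfer-record.
   05 sign-sep-VAR1  PIC S9 SIGN IS LEADING SEPARATE.

*> On the source machine:
    MOVE VAR1 TO sign-sep-VAR1
*> sign-sep-VAR1 now holds two plain characters, e.g. '-1' when VAR1 is -1,
*> which survive a record-level EBCDIC-to-ASCII translation unchanged.
*> On the target machine:
    MOVE sign-sep-VAR1 TO VAR1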
Another thing with field-level translation is that Auditors don't/shouldn't like it. "The first thing you do when the data arrives is you change it? Really?" says the Auditor.

What's wrong with this alphanumeric to numeric move?

When I move a number in a PIC X to a PIC 9 the numeric field's value is 0.
FOO, a PIC X(400), has '1' in the first byte and spaces in the remaining 399. Moving into the PIC 9(02) BAR like so
DISPLAY FOO
MOVE FOO to BAR
DISPLAY BAR
yields
1
0
Why is BAR 0 instead of 1? [Edit: originally, 'What is happening?']
Postscript: NealB says "Do not write programs that rely on obscure truncation rules and/or
data type coercion. Be precise and explicit in what you are doing."
That made me realize I really want COMPUTE BAR = FUNCTION NUMVAL(FOO) wrapped in a NUMERIC test, not a MOVE.
Data MOVEment in COBOL is a complex subject - but here is
a simplified answer to your question. Some data movement rules
are straightforward and conform to what one might expect. Others are somewhat bizarre and may vary with
compiler option, vendor and possibly among editions of the COBOL standard (74, 85, 2002).
With the above in mind, here is an explanation of what happened in your example. (Note: the discussion and sample program below swap the names relative to your question - here BAR is the alphanumeric sender and FOO is the numeric receiver.)
When something 'large' is
MOVEd into something 'small', truncation must occur. This is what happened when BAR was MOVEd to FOO. How that
truncation occurs is determined by the receiving item's
data type. When the receiving item is character data (PIC X), the rightmost characters are truncated from the sending field.
For numeric data the leftmost digits are truncated from the sending field. This behaviour is pretty much universal across COBOL
compilers.
As a consequence of these rules:
When a long 'X' field (BAR) starting with a '1' followed by a bunch of space characters is MOVEd
into a shorter 'X' field, the leftmost characters are transferred. This is why the '1' would be preserved when moving to another PIC X
item.
When a long 'X' field (BAR) is moved to a '9' (numeric) data type, the rightmost characters are moved first. This is why the '1' was lost - it was never
moved; the last two spaces in BAR were.
So far simple enough... The next bit is more complicated. Exactly what happens is vendor, version, compiler option and character set
specific. For the remainder of this example I will assume EBCDIC character sets and the IBM Enterprise COBOL compiler are being used. I
also assume your program displayed b0 and not 0b.
It is universally legal in COBOL to move PIC X data to PIC 9 fields provided the PIC X field contains only digits. Most
COBOL compilers only look at the lower 4 bits of a PIC 9 field when determining its numeric value. An exception is the least
significant digit where the sign, or lack of one, is stored. For unsigned numerics the upper 4 bits of the least significant digit
are set to 1's (hex F) as a result of the MOVE (coercion follows different rules for signed fields). The lower 4 bits are MOVEd without
coercion. So, what happens when a space character is moved into a PIC 9 field? The hex
representation of a SPACE is '40' (ebcdic). The upper 4 bits, '4', are flipped to 'F' and the lower 4 bits are moved as they are. This results in the
least significant digit (lsd) containing 'F0' hex. This just happens to be the unsigned numeric representation for the digit '0' in a PIC 9 data item.
The remaining leading digits are moved as they are (i.e. '40' hex). The net result is that FOO displays as
b0. However, if you were to do anything other than 'MOVE' or 'DISPLAY' FOO, the upper 4 bits of the remaining 'digits' may be coerced to zeroes as a
result. This would flip their display characteristics from spaces to zeros.
The following example COBOL program and its output illustrates these points.
IDENTIFICATION DIVISION.
PROGRAM-ID. EXAMPLE.
DATA DIVISION.
WORKING-STORAGE SECTION.
01.
05 BAR PIC X(10).
05 FOO PIC 9(2).
05 FOOX PIC X(2).
PROCEDURE DIVISION.
MOVE '1 ' TO BAR
MOVE BAR TO FOO
MOVE BAR TO FOOX
DISPLAY 'FOO : >' FOO '< Leftmost truncation + lsd coercion'
DISPLAY 'FOOX: >' FOOX '< Rightmost truncation'
ADD ZERO TO FOO
DISPLAY 'FOO : >' FOO '< full numeric coercion'
GOBACK
.
Output:
FOO : > 0< Leftmost truncation + lsd coercion
FOOX: >1 < Rightmost truncation
FOO : >00< full numeric coercion
Final words... Best not to have to know anything about this sort of thing. Do not write programs that rely on obscure truncation
rules and/or data type coercion. Be precise and explicit in what you are doing.
Firstly, why do you think it might be useful to MOVE a 400-byte field to a two-byte field? You are going to get a "certain amount(!)" of "truncation" with that (and the amount of truncation is certain, at 398 bytes). Do you know which part of your 400 bytes is going to be truncated? I'd guess not.
For an alpha-numeric "sending" item (what you have), the (maximum) number of bytes used is the maximum number of bytes in a numeric field (18/31 depending on compiler/compiler option). Those bytes are taken from the right of the alpha-numeric field.
You have, therefore, MOVEd the rightmost 18/31 digits to the two-digit receiving field. You have already explained that you have "1" and 399 spaces, so you have MOVEd 18/31 spaces to your two-digit numeric field.
Your numeric field is "unsigned" (PIC 9(2) not PIC S9(2) or with a SIGN SEPARATE). For an unsigned field (which is a field with "no operational sign") a COBOL compiler should generate code to ensure that the field contains no sign.
This code will turn the right-most space in your PIC 9(2) into a "0" because and ASCII space is X'20' and an EBCDIC space is X'40'. The "sign" is embedded in the right-most byte of a USAGE DISPLAY numeric field, and and no other data but the sign is changed during the MOVE. The 2 or 4 of X'2n' or X'4n' is, without regard to its value, obliterated to the bit-pattern for an "unsign" (the lack of an "operational sign"). An "unsign" followed by a numeric digit (which is the '0' left over from the space) will, obviously, appear as a zero.
Now, you show a single "1" for your 400-byte field and a single 0 for your two-byte numeric.
What I do is this:
DISPLAY
">"
the-first-field-name
"<"
">"
the-second-field-name
"<"
...
or
DISPLAY
">"
the-first-field-name
"<"
DISPLAY
">"
the-second-field-name
"<"
...
If you had done that, you should find 1 followed by 399 spaces for your first field (as you would expect) and space followed by zero for your second field, which you didn't expect.
If you want to specifically see this in operation:
FOO PIC X(400) JUST RIGHT.
MOVE "1" TO FOO
MOVE FOO TO BAR
DISPLAY
">"
FOO
"<"
DISPLAY
">"
BAR
"<"
And you should see what you "almost" expect. You probably want the leading zero as well (the level-number 05 is an example, whatever level-number you are using will work).
05 BAR PIC 99.
05 FILLER REDEFINES BAR.
10 BAR-FIRST-BYTE PIC X.
88 BAR-FIRST-BYTE-SPACE VALUE SPACE.
10 FILLER PIC X.
...
IF BAR-FIRST-BYTE-SPACE
MOVE ZERO TO BAR-FIRST-BYTE
END-IF
Depending on your compiler and how close it is to ANSI Standard (and which ANSI Standard) your results may differ (if so, try to get a better compiler), but:
Don't MOVE alpha-numeric which are longer than the maximum a numeric can be to a numeric;
Note that in the MOVE alpha-numeric to numeric it is the right-most bytes of the alpha-numeric which are actually moved first;
An "unsigned" numeric should/must always remain unsigned;
Always check for compiler diagnostics and correct the code so that no diagnostics are produced (where possible);
When showing examples, it is highly important to show the actual results the computer produced, not the results as interpreted by a human. " 0" is not the same as "0 " is not the same as "0".
EDIT: Looking at TS's other questions, I think Enterprise COBOL is a safe bet. This message would have been issued by the compiler:
IGYPG3112-W Alphanumeric or national sending field "FOO" exceeded 18 digits. The rightmost 18 characters were used as the sender.
Note, the "18 digits" would have been "31 digits" with compiler option ARITH(EXTEND).
Even though it is a lowly "W" which only gives a Return Code of 4, not bothering to read it is not good practice, and if you had read it you'd not have needed to ask the question - although perhaps you'd still not know how you ended up with " 0", but that is another thing.
I gather you expect the 9(2) value to show up as "1" instead of "0" and you are confused as to why it does not?
You are moving values from left to right when you move from an X value (unless the destination type changes things). So the 9 value has a space in it. To simplify: moving an X(2) containing '1 ' to a 9(2) value literally moves those characters. The space makes what is in the 9(2) invalid, so the COBOL runtime does what it knows how to do and returns 0. In other words, defining the field as 9(2) tells the compiler to interpret the data in a different way.
If you want the 9(2) to show up as "1", you have to present the data to the 9(2) in the right form. A 9(2) with a value of 1 contains the characters "01". Untested:
03 FOO     PIC X(2) VALUE '1'.
03 TEXT-01 PIC X(2) JUSTIFIED RIGHT.
03 NUMB-01 REDEFINES TEXT-01 PIC 9(2).
03 BAR     PIC 9(2).
03 FOO-LEN PIC 9(4) COMP.
DISPLAY FOO.
*> Count the characters before the first space, then move only that part,
*> so JUSTIFIED RIGHT right-aligns it with space fill on the left.
MOVE ZERO TO FOO-LEN.
INSPECT FOO TALLYING FOO-LEN FOR CHARACTERS BEFORE INITIAL SPACE.
MOVE FOO(1:FOO-LEN) TO TEXT-01.
INSPECT TEXT-01 REPLACING LEADING ' ' BY '0'.
MOVE NUMB-01 TO BAR.
DISPLAY BAR.
Using the NUMERIC test against BAR in your example should fail as well...

mapping from xml to cobol field

I need to pass LOW-VALUES (I am not very sure what kind of value that is) as the default for a copybook field to the backend team. I use a WTX transform which converts XML to COBOL.
15 :abc PIC X(15).
From the mainframe team I got this as a sample for the field.
X'000000000000000000000000000000'
However when I use this rule, it fails because the number of characters is above 15. How can I pass the LOW-VALUES?
My rule map for the above COBOL field:
="X'000000000000000000000000000000'"
Error message:
Map: Output: abc Field:123 Group:outputcbl
Size of input item is greater than size of output item.
LOW-VALUE in COBOL is a figurative constant. The value of this constant
is the character having the lowest ordinal position in the collating sequence used.
Assuming the character set in use is EBCDIC (as indicated in one of your comments to another answer)
and the collating sequence has not
been overridden (probably a good assumption), a LOW-VALUE corresponds to binary zeros.
A PIC X(15) data item in COBOL occupies 15 bytes. Use a transformation that translates this
field into 15 bytes of binary zeros. The COBOL application will see this as LOW-VALUE.
Note: The value your 'Mainframe team' gave you is a hexadecimal string representation for 15 bytes of binary zeros.
Low-values is simply all hex zeros, so if you resize your rule map value to be 15 bytes of hex zeros you should be fine.
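On the COBOL side, however the transform delivers those 15 bytes of binary zeros, they compare equal to the figurative constant. A minimal sketch (the field name is taken from the copybook line above, without the colon prefix):
15 abc PIC X(15).

*> Setting the default explicitly in COBOL would simply be:
    MOVE LOW-VALUES TO abc
*> and the backend can test for it the same way:
    IF abc = LOW-VALUES
        DISPLAY 'abc contains 15 bytes of binary zeros'
    END-IF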

Data Validation

So I have entered my second semester of college and they have me doing a course called Advanced COBOL. As one of my assignments I have to make a program that tests certain things in a file to make sure the input has no errors. I get the general idea but there are just a few things I don't understand, and my teacher is one of those people who will give you an assignment and make you figure it out yourself with little or no help. So here is what I need help with.
I have a field where the first 5 columns have to be numbers, the 6th column a capital letter, and the last 2 a number in the range 01-68 or 78-99.
One of my fields has to be a string of numbers with a dash in it, like 00000-000, but some have more than one dash. How can I count the dashes to identify that there is a problem?
Here are a few hints...
Use a hierarchical record structure to view the data in different ways. For example:
01 ITEM-REC.
05 ITEM-CODE.
10 ITEM-NUM-CODE PIC 9(3).
10 ITEM-CHAR-CODE PIC A(3).
88 ITEM-TYPE-A VALUE 'AAA' THRU 'AZZ'.
88 ITEM-TYPE-B VALUE 'BAA' THRU 'BZZ'.
05 QUANTITY PIC 9(4).
ITEM-CODE is a 6 character group field, the first part of which is numeric (ITEM-NUM-CODE) and the last part
is alphabetic (ITEM-CHAR-CODE). You can refer to any one of these three variables in your program. When you
refer to ITEM-CODE, or any other group item, COBOL
treats the variable as if it were declared as PIC X. This means you can
MOVE just about anything into it without raising an error. For example:
MOVE 'ABCdef' TO ITEM-CODE
or
MOVE 'ABCdef0005' TO ITEM-REC
Neither one would cause an error even though the elementary data item ITEM-NUM-CODE is definitely not a number.
To verify the validity
of your data after a group move you should validate each elementary data item separately (unless
you know for certain no data type errors could have occurred). There are a variety of ways to do this. For
example if the data item has to be numeric the following would work:
IF ITEM-NUM-CODE IS NUMERIC
CONTINUE
ELSE
DISPLAY 'ITEM-NUM-CODE IS NOT NUMERIC'
PERFORM BIG-BAD-ERROR
END-IF
COBOL provides various class tests which can be applied against a data item. For
example: NUMERIC, ALPHABETIC and ALPHANUMERIC are commonly used.
Another common way to test for ranges of values is by defining various 88 levels - but exercise
caution. In the above
example ITEM-TYPE-A is an 88 level that defines a data range from 'AAA' through 'AZZ' based on
the collating sequence currently in effect. To verify that ITEM-CHAR-CODE contains only alphabetic
characters and the first letter is an 'A' or a 'B', you could do something like:
IF ITEM-CHAR-CODE ALPHABETIC
DISPLAY 'ITEM-CHAR-CODE is alphabetic.'
EVALUATE TRUE
WHEN ITEM-TYPE-A
DISPLAY 'ITEM-CHAR-CODE is in range AAA through AZZ'
WHEN ITEM-TYPE-B
DISPLAY 'ITEM-CHAR-CODE is in range BAA through BZZ'
WHEN OTHER
DISPLAY 'ITEM-CHAR-CODE is in some other range'
END-EVALUATE
ELSE
DISPLAY 'ITEM-CHAR-CODE is not alphabetic'
END-IF
Note the separate test for ALPHABETIC above. Why do that when the 88 level tests
could have done the job? Actually the 88's are not sufficient because they
cover the entire range from AAA through AZZ based on the collating sequence currently
in effect. In
an EBCDIC based environment (a very large number of COBOL shops use EBCDIC) this captures
values such as 'A}\'. The close-brace and backslash characters are non-alphabetic but
fall into the middle of
the range 'A' through 'Z' (what the #*#! is that all about?). Also note that a value such
as 'aaa' would not satisfy the ITEM-TYPE-A condition because lower case letters fall outside
the defined range. Maybe time to check out an EBCDIC character table.
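To make the pitfall concrete, here is a small sketch (assuming an EBCDIC environment and the declarations above) where the range 88 accepts a value that the class test rejects:
MOVE 'A}\' TO ITEM-CHAR-CODE
IF ITEM-TYPE-A
    DISPLAY '88 range test passes for A}\ - the range alone is not enough'
END-IF
IF ITEM-CHAR-CODE ALPHABETIC
    DISPLAY 'class test says alphabetic'
ELSE
    DISPLAY 'class test says NOT alphabetic - this is the check you need'
END-IF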
Finally, you can count the number of occurrences of a character, or string of characters, in
a variable with the INSPECT verb as follows:
INSPECT ITEM-CODE TALLYING DASH-COUNT FOR ALL '-'
DASH-COUNT needs to be a numeric item and will contain the number of dash characters in ITEM-CODE. The INSPECT
verb is not so useful if you want to count the number of digits. For this you would need one statement for each digit.
It might be easier to just code a loop something like:
PERFORM VARYING I FROM 1 BY 1
UNTIL I > LENGTH OF ITEM-CODE
EVALUATE ITEM-CODE(I:1)
WHEN '-'
COMPUTE DASH-COUNT = DASH-COUNT + 1
WHEN '0' THRU '9'
COMPUTE DIGIT-COUNT = DIGIT-COUNT + 1
WHEN OTHER
COMPUTE OTHER-COUNT = OTHER-COUNT + 1
END-EVALUATE
END-PERFORM
Now ask yourself why I was comfortable using a zero through 9 range check? Hint: look at the collating sequence.
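Putting the hints together for your first field (5 digits, a capital letter, then a two-digit number in 01-68 or 78-99), a rough sketch might look like the following. The field names are invented, and the split THRU ranges on the letter deliberately avoid the EBCDIC gap characters discussed above:
01 IN-FIELD.
   05 IN-DIGITS  PIC X(5).
   05 IN-LETTER  PIC X.
      88 CAPITAL-LETTER VALUE 'A' THRU 'I'
                              'J' THRU 'R'
                              'S' THRU 'Z'.
   05 IN-LAST-2  PIC XX.
      88 LAST-2-IN-RANGE VALUE '01' THRU '68'
                               '78' THRU '99'.

IF IN-DIGITS IS NUMERIC
   AND CAPITAL-LETTER
   AND IN-LAST-2 IS NUMERIC
   AND LAST-2-IN-RANGE
    DISPLAY 'FIELD OK'
ELSE
    DISPLAY 'FIELD IN ERROR >' IN-FIELD '<'
END-IF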
Hope this helps.
