what is the difference between binary and binary coded decimal? - encoder

I'm studying encoders, which convert decimal to a code such as binary or binary coded decimal. What is binary coded decimal? Is it different from binary?

It's very late, but I saw this so I thought I might answer. Binary coded decimal is a way to represent a decimal number in which each decimal digit is encoded separately as a 4-bit binary value. For example:
1111(binary) = 15(decimal)
1111(binary) = 0001 0101(BCD)
so the BCD form of 1111 is two 4-bit groups, where the first group is 1 in decimal and the second is 5, giving us 15. One way to compute this conversion is an algorithm called double dabble.
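As a rough illustration, here is a minimal Python sketch of the double dabble (shift-and-add-3) algorithm; the function name and the fixed four-digit width are my own choices, not part of the original answer:

def double_dabble(n, digits=4):
    # Convert a non-negative integer to BCD: the hex digits of the result
    # are the decimal digits of n, e.g. 15 -> 0x15 (0001 0101 in binary).
    bcd = 0
    for i in range(n.bit_length() - 1, -1, -1):   # walk the bits MSB to LSB
        for d in range(digits):                   # add 3 to any BCD digit >= 5
            if (bcd >> (4 * d)) & 0xF >= 5:
                bcd += 3 << (4 * d)
        bcd = (bcd << 1) | ((n >> i) & 1)         # shift in the next bit
    return bcd

print(format(double_dabble(0b1111), '08b'))       # 00010101, i.e. 0001 0101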

(B)inary (C)oded (D)ecimal data usage is found mostly in Assembler programs. On mainframes, it was mostly used to save half a byte, typically to allow an 8-digit date to be stored in a (four-byte) fullword as YYYYMMDD, avoiding a full binary conversion while also keeping the date in a more "eye friendly" format (i.e. easier to see in a file dump).
IBM Mainframe Assembler provides a special instruction - MVO: (M)o(V)e with (O)ffset - enabling very easy conversion from Packed Decimal (i.e. COMP-3 in COBOL) to BCD format (and vice-versa) without using an algorithm.
Example: Assume a date of 31-DEC-2017 in YYYYMMDD (8-byte) format is to be converted to an 8-digit BCD field (4 bytes).
(1) Use the PACK instruction to convert the 8-char DATE into the 5-byte PACKED field
(2) Use the MVO instruction to move the 8 significant binary decimal digits to the BCD field
[Note the length override "...BCD(5)...": the sign X'F' from PACKED is shifted into the byte after the BCD field]
BCD now contains X'20171231'
SAMPLE CSECT
[...]
(1) PACK PACKED,DATE C'20171231' BECOMES X'020171231F' IN PACKED
(2) MVO BCD(5),PACKED X'020171231F' BECOMES X'20171231' IN BCD
[...]
BCD DS XL4
PACKED DS PL5
DATE DC CL8'20171231'
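For readers without an Assembler background, here is a rough Python sketch of the nibble shuffling these two instructions perform for this example (it mimics the byte values only, not the real instruction semantics):

date_chars = "20171231"

# PACK: pack the decimal digits right-aligned into 5 bytes and append the
# sign nibble X'F' taken from the zone of the last character.
packed = bytes.fromhex("0" + date_chars + "F")    # X'020171231F'

# MVO BCD(5),PACKED: shift everything one nibble left so the 8 digits land
# in the 4-byte BCD field and the sign nibble falls off the end.
bcd = bytes.fromhex(packed.hex()[1:9])            # X'20171231'

print(packed.hex().upper(), bcd.hex().upper())    # 020171231F 20171231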

Likewise, to convert an 8-digit BCD date to an 8-char DATE is a simple sequence of 3 instructions:
(1) Insert sign into rightmost byte of a 5-byte packed decimal field
[think of this as restoring the sign shifted out in step 2 "MVO BCD(5),PACKED" in the first example, above]
(2) Use the MVO instruction to extract the 8 binary decimal digits into the 5-byte packed decimal field
(3) Use UNPK to convert the 5-byte packed decimal field to an 8-char date
DATE now contains C'20171231'
SAMPLE CSECT
[...]
(1) MVI PACKED+(L'PACKED-1),X'0F' INSERT SIGN (PACKED BECOMES X'........0F')
(2) MVO PACKED,BCD X'20171231' BECOMES X'020171231F' IN PACKED
(3) UNPK DATE,PACKED X'020171231F' BECOMES C'20171231' IN DATE
[...]
BCD DC XL4'20171231'
PACKED DS PL5
DATE DS CL8
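Again as a rough Python illustration of the byte values involved in the reverse direction (not the real instruction semantics):

bcd = bytes.fromhex("20171231")

# MVI + MVO PACKED,BCD: right-align the 8 digits in the 5-byte packed field
# and restore the sign nibble X'F' in the low-order position.
packed = bytes.fromhex("0" + bcd.hex() + "f")     # X'020171231F'

# UNPK: expand each digit back into a zoned character; in EBCDIC digit d
# becomes X'Fd', i.e. the character 'd'.
date_chars = packed.hex()[1:9]                    # '20171231'

print(date_chars)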

DISPLAY in COBOL of Signed Comp-3 Data shows unexpected output

From what I have studied, the final character of a COMP-3 value tells whether it is a positive or negative value:
C - Indicates positive value
D - Indicates negative value
Is this not applicable to newer versions of COBOL on mainframes?
01 WS-COMP3 PIC S9(5) COMP-3 VALUES -12.
DISPLAY WS-COMP3
OUTPUT: 0001K
For the above piece of code, I am getting the final character as K instead of D.
The value K is the substitute for -2:
-0 ==> }, -1 ==> J, -2 ==> K ....
Using DISPLAY ... with a numeric data type requires a conversion to a displayable type. The COBOL standard requires it.
A typical conversion for COMP-3 is to move the data item to an equivalent displayable format. For this case, PIC S9(5) COMP-3 is often converted to PIC S9(5) SIGN TRAILING for display.
This conversion means the internally stored value will be converted so that individual digits, except the last, will be converted to displayable digits. The last will have the sign indicator changed to reflect the format for the particular implementation.
For IBM mainframes, the internal COMP-3 format for -12 is 00 01 2D and will be converted to F0 F0 F0 F1 D2 which displays as 0001K.
Many ASCII systems will provide a slightly different result. The same internal format will be converted to 30 30 30 31 x2 where the x depends on the implementation's requirement. It may display as 0001B or 0001r or some other, such as SIGN SEPARATE giving -00012.
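As a small Python sketch of the conversion just described (byte values only; the real conversion happens inside the COBOL runtime):

comp3 = bytes.fromhex("00012d")        # internal COMP-3 value for -12

digits = comp3.hex()[:-1]              # '00012'
sign = comp3.hex()[-1]                 # 'd' -> negative

# Overpunch the sign into the zone of the last digit: on an EBCDIC machine
# the zoned bytes become F0 F0 F0 F1 D2, and X'D2' is the EBCDIC character
# 'K', so the item displays as 0001K.
zoned = " ".join("F" + d for d in digits[:-1]) + " D" + digits[-1]
print(digits, sign, "->", zoned)       # 00012 d -> F0 F0 F0 F1 D2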
The actual conversion for any data type done by any COBOL implementation will be documented in the language reference.
From the 2002 standard, B.1 Implementor-defined language element list,
DISPLAY statement (data conversion). This item is required. This item shall be documented in the implementor's user documentation. (14.8.10, DISPLAY statement, general rule 1)
DISPLAY statement, 14.8.10.3 General rules,
The DISPLAY statement causes the content of each operand to be transferred to the hardware device in the order listed. If an operand is a zero-length data item, no data is transferred for that operand. Any conversion of data required between literal-1 or the data item referenced by identifier-1 and the hardware device is defined by the implementor.
As an addition to Rick Smith's excellent answer describing the reasons: IBM's Enterprise COBOL for z/OS, since version 5, provides a compiler option to handle this issue.
When compiling with DISPSIGN(SEP) a DISPLAY of a signed numeric item (binary, packed decimal or zoned) will always produce a separate leading sign.
The default is DISPSIGN(COMPAT), which behaves as shown in the question.

is there a way to convert an integer to be always a 4 digit hex number using Lua

I'm creating a Lua script which will calculate a temperature value and then format this value as a 4-digit hex number, which must always be 4 digits. Having the answer as a string is fine.
Previously in C I have been able to use
data_hex=string.format('%h04x', -21)
which would return ffeb
however the 'h' string formatter is not available to me in Lua
dropping the 'h' doesn't cater for negative answers, i.e.
data_hex=string.format('%04x', -21)
print(data_hex)
which returns ffffffeb
data_hex=string.format('%04x', 21)
print(data_hex)
which returns 0015
Is there a convenient and portable equivalent to the 'h' string formatter?
I suggest you try using a bitwise AND to truncate any leading hex digits for the value being printed.
If you have a variable temp that you are going to print then you would use something like data_hex=string.format("%04x",temp & 0xffff) which would remove the leading hex digits leaving only the least significant 4 hex digits.
I like this approach as there is less string manipulation and it is congruent with the actual data type of a signed 16-bit number. Whether reducing string manipulation is a concern would depend on the rate at which the temperature is polled.
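The same masking idea, sketched in Python just to show the arithmetic (the Lua line above relies on the native bitwise operators available in Lua 5.3 and later):

def to_hex16(n):
    # Reduce n to its 16-bit two's-complement bit pattern, then format it
    # as exactly four hex digits.
    return format(n & 0xFFFF, "04x")

print(to_hex16(-21))   # ffeb
print(to_hex16(21))    # 0015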
For further information on the format function see The String Library article.

Delphi base convert Binary to Decimal

I'm converting binary to decimal and decimal to binary. My problem is the length of the binary value. For example:
Convertx("001110",2,10) = 14
Convertx("14",10,2) = 1110
But the length of the binary is NOT constant, so how can I get the exact original binary with the zeros in front of it? How can I get "001110" instead of "1110"?
I'm using this function in Delphi 7 -> How can I convert base of decimal string to another base?
The function you are using returns a string that is the shortest length required to express the value you have converted.
Any zeroes in front of that string are simply padding - they do not alter the binary value represented. If you need a string of a minimum length then you need to add this "padding" yourself. e.g. if you want a binary representation of a "byte" (i.e. 8 binary digits) then the minimum length you would need is 8:
binStr := Convertx('14', 10, 2);
while Length(binStr) < 8 do
binStr := '0' + binStr;
If you need the exact number of zeroes that were included in the "padding" of some original binary value when converting from binary to decimal and then back to "original" binary again, then this is impossible unless you separately record how many padding zeroes there were or the length of the original string, including those zeroes.
i.e. in your example, the ConvertX function has no idea (and no way to figure out) that the number "14" it is asked to convert to binary was originally converted from a 6-digit binary string with 2 leading zeroes, rather than an 8-digit binary with 4 leading zeroes (or a 16-digit binary with 12 leading zeroes, etc. etc.).
What you are hoping for is impossible. Consider
Convertx('001110', 2, 10)
and
Convertx('1110', 2, 10)
These both return the same output, 14. At that point there is no way to recover the length of the original input.
The way forward is therefore clear. You must remember the length of the original binary, as well as the equivalent decimal. However, once you have reached that conclusion then you might wonder whether there is an even simpler approach. Just remember the original binary value and save yourself having to convert back from decimal.
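A tiny sketch of that bookkeeping in Python (illustrative only; a Delphi version using ConvertX would follow the same pattern):

original = "001110"

value = int(original, 2)     # 14 -- the information the conversion keeps
width = len(original)        # 6  -- the information the conversion discards

restored = format(value, "b").zfill(width)
print(restored)              # 001110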

Convert packed decimal to decimal in AB initio

My source DML has a packed decimal data type and I want to reformat it as a plain decimal data type.
eg.
original DML :- packed decimal(5,2) Salary
Reformatted DML:- decimal(",") salary
packed decimal(5,2) Salary => decimal(",") salary
How can I cast this packed decimal to decimal in Ab Initio?
Ab Initio DML automatically casts between compatible types. So simply map the packed decimal "Salary" onto the regular decimal "salary" in a transform component and the casting will just happen. That being said, you should feel free to direct these questions to support@abinitio.com. They're always happy to help.

mapping from xml to cobol field

I need to pass LOW-VALUES (I am not sure exactly what kind of value that is) as the default for a copybook field to the backend team. I use a WTX transform which converts XML to COBOL.
15 :abc PIC X(15).
From the mainframe team I got this as a sample for the field.
X'000000000000000000000000000000'
However when I use this rule, it fails because the number of characters is above 15. How can I pass the LOW-VALUES?
My rule map for the above COBOL field:
="X'000000000000000000000000000000'"
Error message:
Map: Output: abc Field:123 Group:outputcbl
Size of input item is greater than size of output item.
LOW-VALUE in COBOL is a figurative constant. The value of this constant is the character having the lowest ordinal position in the collating sequence used.
Assuming the character set in use is EBCDIC (as indicated in one of your comments to another answer) and the collating sequence has not been overridden (probably a good assumption), a LOW-VALUE corresponds to binary zeros.
A PIC X(15) data item in COBOL occupies 15 bytes. Use a transformation that translates this field into 15 bytes of binary zeros. The COBOL application will see this as LOW-VALUE.
Note: The value your 'Mainframe team' gave you is a hexadecimal string representation for 15 bytes of binary zeros.
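A quick Python check of that note, showing that the 30-character hex sample names exactly 15 zero bytes:

sample = "000000000000000000000000000000"       # the X'...' sample, 30 hex digits
low_values = bytes.fromhex(sample)               # 15 bytes of binary zeros
print(len(low_values), low_values == bytes(15))  # 15 True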
LOW-VALUES is simply all hex zeros, so if you resize your rule map to produce 15 bytes of hex zeros (X'00'), you should be fine.
