This may be a very basic question for COBOL experts, but until now I have had nothing to do with COBOL. We are processing some files based on character position. The files are sent to us from mainframe machines, and we have a layout file for them that says something like this:
POSITION : LENGTH : TYPE : DESCRIPTION
----------:--------:------:-------------------------------
61-70 : 10 : P5 : FIELD-1 9(13)V(05)
71-80 : 10 : P5 : Field-2 9(13)V(05)
81-81 : 1 : A/N : FLAG
82-84 : 3 : N : NUMBER OF DAYS 9(3)
I understand that the type A/N means alphanumeric, N means numeric, and P means packed data type. What I don't understand is what P5 means. What is the significance of the 5 that comes after the P?
"What is the significance of the 5 that comes after the P?"
I'm not sure. Five 16-bit words, maybe.
Your packed fields are 10 bytes, holding 19 nibbles of data (18 digits plus the sign). The decimal point is implied.
If the sign byte (the rightmost byte) is anything other than hexadecimal F, update your question.
If you could update your question with five hexadecimal strings representing five of the numbers, that would be great.
Right now, I'm guessing that it's an ordinary packed decimal field.
P - packed decimal (i.e. COBOL COMP-3); an 18-digit packed decimal would occupy 10 bytes, which agrees with the lengths given.
5 - the number of digits after the decimal point (at a guess).
The field definition in COBOL is probably:
03 FIELD-1 pic s9(13)V9(05) comp-3.
In packed decimal, the sign is held in the last nibble (4 bits), and each other nibble holds one decimal digit. For example, 121 is represented as x'121c', while -121 is represented as x'121d'.
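To make this concrete, here is a minimal Python sketch of decoding such a field (the function name unpack_comp3 is mine, and treating P5 as 5 implied decimal places follows the guess above):

from decimal import Decimal

def unpack_comp3(raw: bytes, scale: int = 5) -> Decimal:
    # Each hex digit of the raw bytes is one packed nibble; the last
    # nibble is the sign (c or f = positive, d = negative), and the
    # rest are the decimal digits.
    nibbles = raw.hex()
    sign = -1 if nibbles[-1] == 'd' else 1
    return sign * Decimal(int(nibbles[:-1])).scaleb(-scale)

print(unpack_comp3(bytes([0x12, 0x1c]), scale=0))  # 121
print(unpack_comp3(bytes([0x12, 0x1d]), scale=0))  # -121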
If you are using Java and can get the COBOL copybook, there are packages that can read the file using the copybook.
I would bet it means 5 decimal places.
When using Lua to handle floating-point numbers, I found that Lua offers only limited precision. For example:
print(3.14159265358979)
output:
3.1415926535898
The result is missing a few decimal places, which will lead to calculation bias. How can I deal with this lack of precision?
By default, Lua only displays 14 significant digits of a number. A float can require 15 to 17 digits to be represented exactly as a base-10 string. We can use a loop to find the right number of digits. Note that %g drops trailing zeros, so we can start our search at 15 digits, not 1. This is the function I use:
local function floatToString(x)
    for precision = 15, 17 do
        -- Use a 2-layer format to try different precisions with %g.
        local s <const> = ('%%.%dg'):format(precision):format(x)
        -- See if s is an exact representation of x.
        if tonumber(s) == x then
            return s
        end
    end
end
print(floatToString(3.14159265358979))
Output: 3.14159265358979
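For what it's worth, the same round-trip search works in other languages too; here is an illustrative Python sketch of the identical loop (in Python, repr already produces the shortest round-trip string, so this is purely for comparison):

def float_to_string(x: float) -> str:
    # Try 15, 16, then 17 significant digits; return the first string
    # that converts back to exactly the same float.
    for precision in (15, 16, 17):
        s = f'{x:.{precision}g}'
        if float(s) == x:
            return s
    return repr(x)  # not reached: 17 digits always round-trip a double

print(float_to_string(3.14159265358979))  # 3.14159265358979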
In Google Sheets, I'm trying to convert a 16-bit signed binary number to its decimal equivalent, but the built-in function that does this only takes up to 10 bits. Other solutions to the problem that I've seen don't preserve the signedness.
So far I've tried:
bin2dec on the leftmost 8 bits * 2^8 + bin2dec on the rightmost 8 bits
hex2dec on the result of bin2dec on the leftmost 8 bits concatenated with bin2dec on the rightmost 8 bits
I've also seen a suggestion that multiplies each bit by its power of 2, eliminating bin2dec altogether.
Any suggestions?
You will need to use a custom function, e.g. in Apps Script:
function binary2decimal(bin) {
  // parseInt reads the string as unsigned; assuming the input is two's
  // complement (as with the built-in BIN2DEC), subtract 2^16 whenever
  // the sign bit (bit 15) is set.
  var n = parseInt(bin, 2);
  return n >= 0x8000 ? n - 0x10000 : n;
}
Let's assume that your binary number is in cell A2.
First, set the formatting as follows: Format > Number > Plain text.
Then place the following formula in, say, B2:
=ArrayFormula(SUM(SPLIT(REGEXREPLACE(SUBSTITUTE(A2&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,LEN(SUBSTITUTE(A2&"","-","")),LEN(SUBSTITUTE(A2&"","-",""))-1,-1))*IF(LEFT(A2)="-",-1,1)))
This formula will process a binary number of any length, positive or negative, from 1 bit to 16 bits (and, in fact, up to a length of 45 or 46 bits).
What this formula does is SPLIT the binary number (without the negative sign, if there is one) into its separate bits, one per column; multiply each of those by 2 raised to the power of the matching element of an equal-sized descending SEQUENCE that runs from one less than the LEN (i.e., number) of bits down to zero; and finally apply the negative sign conditionally IF one exists.
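For comparison, here is a small Python sketch of the same sign-and-magnitude arithmetic the formula performs (the function name is mine):

def bits_to_int(binary: str) -> int:
    # Strip an optional leading '-', weight each bit by a descending
    # power of two, then reapply the sign -- the same steps the
    # ArrayFormula performs with SPLIT, SEQUENCE and IF.
    sign = -1 if binary.startswith('-') else 1
    digits = binary.lstrip('-')
    total = sum(int(b) << (len(digits) - 1 - i) for i, b in enumerate(digits))
    return sign * total

print(bits_to_int('1010'))   # 10
print(bits_to_int('-1010'))  # -10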
If you need to process a range where every value is a positive or negative binary number with exactly 16 bits, you can do so. Suppose that your 16-bit binary numbers are in the range A2:A. First, be sure to select all of Column A and set the formatting to "Plain text" as described above. Then place the following array formula into, say, B2 (being sure that B2:B is empty first):
=ArrayFormula(MMULT(SPLIT(REGEXREPLACE(SUBSTITUTE(FILTER(A2:A,A2:A<>"")&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,16,15,-1)),SEQUENCE(16,1,1,0))*IF(LEFT(FILTER(A2:A,A2:A<>""))="-",-1,1))
Starting with these frequencies:
A:7 F:6 H:1 M:2 N:4 U:5
at a later step I have 5 6 7 7, where one of the 7's is the "A". Which 7 branch I pick to be a 0 or a 1 is arbitrary.
So how do I get a uniquely decodable code?
You need to send the code to the receiver, not the frequencies. You can arbitrarily assign 0's and 1's to all of the branches, and then send the codes for each symbol before the coded symbols themselves. There are many possible Huffman codes from the same set of frequencies.
More commonly only the code lengths in bits for each symbol are sent. In this case those are A:2 F:2 H:4 M:4 N:3 U:2. Then a canonical code is used on both ends that depends only on the lengths. In this case, starting with 0's, the canonical code would be:
A: 00
F: 01
U: 10
N: 110
H: 1110
M: 1111
where codes of equal length are assigned to the symbols in lexicographical order. Note that the Huffman tree that was built is not needed. All that is needed is the number of bits for each symbol.
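As an illustration of how both ends can rebuild that table from the lengths alone, here is a hedged Python sketch of canonical code assignment (the function name is mine):

def canonical_codes(lengths):
    # Assign codes in order of (length, symbol): each symbol gets the
    # next integer code value, shifted left once per extra bit of length.
    code, prev_len, out = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= length - prev_len
        out[sym] = format(code, f'0{length}b')
        code += 1
        prev_len = length
    return out

print(canonical_codes({'A': 2, 'F': 2, 'U': 2, 'N': 3, 'H': 4, 'M': 4}))
# {'A': '00', 'F': '01', 'U': '10', 'N': '110', 'H': '1110', 'M': '1111'}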
How can I convert a floating point number to a string with a maximum of 2 decimal digits in Delphi7?
I've tried using:
FloatToStrF(Query.FieldByName('Quantity').AsFloat, ffGeneral, 18, 2, FS);
But with the above, sometimes more than 2 decimal digits are given back, e.g. the result is 15,60000009.
Use ffFixed instead of ffGeneral.
ffGeneral ignores the Decimal parameter.
When you use ffGeneral, the 18 is saying that you want 18 significant decimal digits. The routine will then express that number in the shortest manner, using scientific notation if necessary. The 2 is ignored.
When you use ffFixed, you are saying you want 2 digits after the decimal point.
If you are wondering about why you sometimes get values that seem to be imprecise, there is much to be found on this site and others that will explain how floating-point numbers work.
In this case, AsFloat is returning a double, which like (most) other floating-point formats, stores its value in binary. In the same way that 1/3 cannot be written in decimal with finite digits, neither can 15.6 be represented in binary in a finite number of bits. The system chooses the closest possible value that can be stored in a double. The exact value, in decimal, is:
15.5999999999999996447286321199499070644378662109375
If you had asked for 16 digits of precision, the value would've been rounded off to 15.6. But you asked for 18 digits, so you get 15.5999999999999996.
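You can see the same effect outside Delphi; for instance, this illustrative Python snippet (Python uses the same IEEE 754 doubles):

from decimal import Decimal

# Exact decimal value of the double nearest to 15.6:
print(Decimal(15.6))   # 15.5999999999999996447286321199499070644378662109375

print(f'{15.6:.16g}')  # 15.6 -- 16 significant digits round away the error
print(f'{15.6:.18g}')  # 15.5999999999999996 -- 18 digits expose it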
If you really mean what you wrote (MAX 2 decimal digits) and do not mean ALWAYS 2 decimal digits, then the two code snippets in the comments won't give you what you asked for: they will return a string that ALWAYS has two decimal digits, i.e. ONE is returned as "1.00" (or "1,00" with Format, depending on your decimal separator).
If you truly want an option with MAX 2 decimal digits, you'll have to do a little post-processing of the returned string.
FUNCTION FloatToStrMaxDecimals(F : Extended ; MaxDecimals : BYTE) : STRING;
BEGIN
  Result:=Format('%.'+IntToStr(MaxDecimals)+'f',[F]);
  // With no decimals there is nothing to trim (and trimming would eat integer zeros)
  IF MaxDecimals=0 THEN EXIT;
  // Strip trailing zeros, then a dangling decimal separator
  WHILE Result[LENGTH(Result)]='0' DO DELETE(Result,LENGTH(Result),1);
  IF Result[LENGTH(Result)] IN ['.',','] THEN DELETE(Result,LENGTH(Result),1)
END;
An alternative (and probably faster) implementation could be:
FUNCTION FloatToStrMaxDecimals(F : Extended ; MaxDecimals : BYTE) : STRING;
BEGIN
  Result:=Format('%.'+IntToStr(MaxDecimals)+'f',[F]);
  IF MaxDecimals=0 THEN EXIT;
  WHILE Result[LENGTH(Result)]='0' DO SetLength(Result,PRED(LENGTH(Result)));
  IF Result[LENGTH(Result)] IN ['.',','] THEN SetLength(Result,PRED(LENGTH(Result)))
END;
This function returns the number as a string with at MOST the specified number of decimal digits: one half with MAX 2 digits returns "0.5", one third with MAX 2 decimal digits returns "0.33", two thirds with MAX 2 decimal digits returns "0.67", and TEN with MAX 2 decimal digits returns "10".
The final IF statement should really test for the proper decimal separator, but I don't think any value other than period or comma is possible, and if one of these is left as the last character in the string after all trailing zeroes have been stripped, it MUST be a decimal separator.
Also note, that this code assumes that strings are indexed with 1 for the first character, as it always is in Delphi 7. If you need this code for the mobile compilers in newer Delphi versions, you'll need to update the code. I'll leave that exercise up to the reader :-).
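For comparison, here is a hedged Python sketch of the same MAX-decimals trimming (names mine; Python always uses '.' as the decimal separator, so the final test is simpler):

def float_to_str_max_decimals(f: float, max_decimals: int) -> str:
    # Format with a fixed number of decimals, then trim trailing zeros
    # and any dangling decimal point.
    s = f'{f:.{max_decimals}f}'
    return s.rstrip('0').rstrip('.') if '.' in s else s

print(float_to_str_max_decimals(1/2, 2))  # 0.5
print(float_to_str_max_decimals(1/3, 2))  # 0.33
print(float_to_str_max_decimals(10, 2))   # 10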
I use this function in my application:
// Requires Math in the uses clause (for Power).
function sclCurrencyND(const F: Currency; GlobalDigits: Word = 2): Currency;
var
  Fact: Currency;
begin
  Fact := Power(10, GlobalDigits);
  Result := Int(F*Fact)/Fact; // Int truncates, so this chops rather than rounds
end;
I have the following test case:
Lua 5.3.2 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> foo = 1000000000000000000
> bar = foo + 1
> bar
1000000000000000001
> string.format("%.0f", foo)
1000000000000000000
> string.format("%.0f", bar)
1000000000000000000
That last line should be 1000000000000000001, since that's the value of bar, but for some reason it's not. This doesn't apply only to 1000000000000000000; I've yet to find another number above it that gives the correct value. Can anyone explain why this happens?
You're formatting the number as floating-point, not integer. That's what %.0f is doing. At some point, floats lose precision. double, for example, will lose precision after about 16 decimal digits.
If you want to format an integer as an integer, then you need to format it as an integer, using standard printf rules:
string.format("%i", bar)
log2(1000000000000000000) is between 59 and 60, which means that the binary representation of that number needs 60 bits. Double-precision floating-point numbers have only 53 bits of precision, plus a power-of-two exponent with 11 bits of range. So to store that large a number as floating point (which is what you requested with the %f format specifier), six to seven bits of precision are chopped off the end of the number, and the whole thing is multiplied by a power of two to get it back in range (2^59 in this case, I think). Chopping off those final bits removes the precision that allows 1000000000000000000 and 1000000000000000001 to be distinct from each other.
(This is not a particularly precise description of floating point, apologies if my numbers or descriptions are not exact.)
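To check the arithmetic, here is an illustrative Python snippet (Python floats are the same IEEE 754 doubles Lua 5.3 uses):

import math

foo = 10 ** 18
bar = foo + 1

print(math.log2(foo))            # ~59.79: the integer needs 60 bits
print(float(foo) == float(bar))  # True: both round to the same double
print(f'{bar:.0f}')              # 1000000000000000000, matching Lua's %.0f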