I heard that you just have to put an F in front of the operator and then put a (.) at the end to calculate with floating-point numbers and then display the result. But it gave me this error:
2 3 F/ .
:8: Floating-point stack underflow
2 3 >>>F/<<< .
Backtrace:
How can I get 0.66666667 ok?
You heard wrong. Presumably whoever told you that meant that the period should be at the end of the operands, but that would make them double-precision values (no relation to double floats). You need to put an e at the end of 2 and 3 to make them floats, write f/ to divide them, and write f. to print:
in: 2e 3e f/ f.
out: 0.666666666666667 ok
When using Lua to handle floating-point numbers, I found that it keeps only limited precision. For example:
print(3.14159265358979)
output:
3.1415926535898
The result is missing a few decimal places, which leads to calculation bias. How can I deal with this lack of precision?
By default, Lua only displays 14 digits of a number. A float can require 15 to 17 digits to be represented exactly as a base-10 string. We can use a loop to find the right number of digits. Note that %g will drop the trailing zeros, so we can start our search at 15 digits, not 1. This is the function I use:
local function floatToString(x)
  for precision = 15, 17 do
    -- Use a 2-layer format to try different precisions with %g.
    local s <const> = ('%%.%dg'):format(precision):format(x)
    -- See if s is an exact representation of x.
    if tonumber(s) == x then
      return s
    end
  end
end
print(floatToString(3.14159265358979))
Output: 3.14159265358979
This is related to Zeller's congruence algorithm, which requires a modulo operation to get the day of the week for an input date. However, the software I'm using (Blueprism) has no modulo operator or function available, so I can't get the result I'm hoping for.
In some languages (Python, C#, Java), Zeller's congruence is straightforward to implement because mod is available.
Would anyone know a longhand way of combining arithmetic operations to get the mod result?
From what I've read, mod is the remainder when one number is divided by another. But
181 mod 7 = 6, while 181 divided by 7 = 25.857..., so the division doesn't give me the remainder directly.
There are two answers to this.
If you have a floor() or int() operation available, then a % b is:
a - floor(a/b)*b
(revised to incorporate Andrzej Kaczor's comment, thanks!)
If you don't, then you can iterate, each time subtracting b from a until the remainder is less than b. At that point, the remainder is a % b.
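As a sketch of both approaches (written in Python purely for illustration; Blueprism's own expression functions may differ, so treat these names as placeholders):

import math

def mod_with_floor(a, b):
    # a % b written as a - floor(a/b)*b, as in the formula above
    return a - math.floor(a / b) * b

def mod_by_subtraction(a, b):
    # Repeatedly subtract b until less than b remains (assumes a >= 0, b > 0)
    remainder = a
    while remainder >= b:
        remainder -= b
    return remainder

print(mod_with_floor(181, 7))      # 6
print(mod_by_subtraction(181, 7))  # 6

Either one plugged into Zeller's congruence gives the same day-of-week index that a built-in mod operator would.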
Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
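If you'd rather not do the binary arithmetic by hand, any environment that uses IEEE 754 doubles will show the same exact values; for instance, a quick check in Python (Ruby's Float is also a double, so it behaves identically):

from decimal import Decimal

# Decimal(float) prints the exact value stored in the double, with no rounding.
print(Decimal(129.95 * 100))      # 12994.999999999998181010596454143524169921875
print(Decimal(129.95 * 10))       # 1299.5
print(Decimal(129.95 * 10 * 10))  # 12995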
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, you should format both results explicitly, using String#% and you should format both results the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens to not equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
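To make that concrete, here is a small sketch of comparing with a tolerance instead of equality (Python shown here; math.isclose simply stands in for whatever tolerance check your language offers):

import math

a = 129.95 * 100
b = 129.95 * 10 * 10

print(a == b)                   # False: the two doubles differ in the last bits
print(math.isclose(a, b))       # True: equal within a small relative tolerance
print('%.6f %.6f' % (a, b))     # formatted the same way: 12995.000000 12995.000000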
This may be a very basic question for COBOL experts, but to date I have had nothing to do with COBOL. We are processing some files based on character position. The files are being sent to us from mainframe machines, and we have a layout file for them that says something like this:
POSITION : LENGTH : TYPE : DESCRIPTION
----------:--------:------:-------------------------------
61-70 : 10 : P5 : FIELD-1 9(13)V(05)
71-80 : 10 : P5 : Field-2 9(13)V(05)
81-81 : 1 : A/N : FLAG
82-84 : 3 : N : NUMBER OF DAYS 9(3)
I understand that the type A/N means it is alphanumeric, N means numeric, and P means packed data type. What I don't understand is what P5 means. What is the significance of the 5 that comes after the P?
What is the significance of 5 that comes next to P?
I'm not sure. Five 16-bit words, maybe.
Your packed fields are 10 bytes each, holding 19 half-bytes of data (18 digits plus the sign). The decimal point is implied.
If the sign nybble (the low half of the rightmost byte) is anything other than hexadecimal F, update your question.
If you could update your question with five hexadecimal strings representing five of the numbers, that would be great.
Right now, I'm guessing that it's an ordinary packed decimal field.
P - packed decimal (i.e. COBOL COMP-3); an 18-digit packed decimal would occupy 10 bytes, which agrees with the lengths given.
5 - the number of digits after the decimal point (at a guess).
The field definition in COBOL is probably
03 FIELD-1 PIC S9(13)V9(05) COMP-3.
In packed decimal, the sign is held in the last nybble (4 bits) and each other nybble (4 bits) holds one decimal digit.
i.e.
121 is represented as x'121c'
while
-121 is represented as x'121d'
If you are using Java and can get the COBOL copybook, there are packages that can read the file using the copybook.
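If Java and a copybook reader are not an option, unpacking a COMP-3 field by hand is not much code. A rough sketch in Python (the 5 implied decimal places are assumed from the layout above, and the sign convention is the one described earlier):

from decimal import Decimal

def unpack_comp3(data, decimal_places=5):
    # Each nybble holds one decimal digit; the final nybble is the sign
    # (D means negative, anything else is treated as positive here).
    nybbles = []
    for byte in data:
        nybbles.append(byte >> 4)     # high nybble
        nybbles.append(byte & 0x0F)   # low nybble
    sign = nybbles.pop()              # last nybble is the sign
    value = 0
    for digit in nybbles:
        value = value * 10 + digit
    if sign == 0x0D:
        value = -value
    return Decimal(value) / (10 ** decimal_places)

# x'121c' and x'121d' from the example above (no implied decimals):
print(unpack_comp3(bytes.fromhex('121c'), decimal_places=0))   # 121
print(unpack_comp3(bytes.fromhex('121d'), decimal_places=0))   # -121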
I would bet it means 5 decimal places.
I'm trying to solve some Huffman coding problems, but I always get different values for the codewords (values, not lengths).
For example, if the codeword of character 'c' was 100 in the given solution, in my solution it is 101.
Here is an example:
Character   Frequency   Codeword   My solution
A           22          00         10
B           12          100        010
C           24          01         11
D           6           1010       0110
E           27          11         00
F           9           1011       0111
Both solutions have the same lengths for the codewords, and no codeword is a prefix of another codeword.
Does this make my solution valid? Or do there have to be only two solutions: the optimal one and the one obtained by flipping the bits of the optimal one?
There are 96 possible ways to assign the 0's and 1's to that set of lengths, and all would be perfectly valid, optimal, prefix codes. You have shown two of them.
There exist conventions to define "canonical" Huffman codes which resolve the ambiguity. The value of defining canonical codes is in the transmission of the code from the compressor to the decompressor. As long as both sides know and agree on how to unambiguously assign the 0's and 1's, then only the code length for each symbol needs to be transmitted -- not the codes themselves.
The deflate format starts with zero for the shortest code, and increments up. Within each code length, the codes are ordered by the symbol values, i.e. sorting by symbol. So for your code that canonical Huffman code would be:
A - 00
C - 01
E - 10
B - 110
D - 1110
F - 1111
So the two-bit codes are assigned in symbol order A, C, E, and similarly the four-bit codes are assigned in the order D, F. Shorter codes are assigned before longer codes.
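A minimal sketch of that assignment rule (plain Python, not taken from any particular deflate implementation): sort the symbols by (length, symbol), then hand out consecutive integer codes, appending zero bits whenever the code length increases.

def canonical_codes(lengths):
    # lengths maps symbol -> code length; returns symbol -> bit string
    code = 0
    prev_len = 0
    codes = {}
    for symbol, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)     # append zero bits when the length grows
        codes[symbol] = f'{code:0{length}b}'
        code += 1
        prev_len = length
    return codes

print(canonical_codes({'A': 2, 'B': 3, 'C': 2, 'D': 4, 'E': 2, 'F': 4}))
# {'A': '00', 'C': '01', 'E': '10', 'B': '110', 'D': '1110', 'F': '1111'}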
There is a different and interesting ambiguity that arises in finding the code lengths. Depending on the order of combination of equal frequency nodes, i.e. when you have a choice of more than two lowest frequency nodes, you can actually end up with different sets of code lengths that are exactly equally optimal. Even though the code lengths are different, when you multiply the lengths by the frequencies and add them up, you get exactly the same number of bits for the two different codes.
There again, the different codes are all optimal and equally valid. There are ways to resolve that ambiguity as well at the time the nodes to combine are chosen, where the benefit can be minimizing the depth of the tree. That can reduce the table size for table-driven Huffman decoding.
For example, consider the frequencies A: 2, B: 2, C: 1, D: 1. You first combine C and D to get 2. Then you have A, B, and C+D all with frequency 2. Now you can choose to combine either A and B, or C+D with A or B. This gives two different sets of bit lengths. If you combine A and B, you get lengths: A-2, B-2, C-2, and D-2. If you combine C+D with B, you get A-1, B-2, C-3, D-3. Both are optimal codes, since 2x2 + 2x2 + 1x2 + 1x2 = 2x1 + 2x2 + 1x3 + 1x3 = 12, so both codes use 12 bits to represent those symbols that many times.
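To make the "equally optimal" part concrete, the cost of a code is just the sum of frequency times length; a couple of lines of Python with the numbers from that example confirm that both length sets cost the same:

freqs = {'A': 2, 'B': 2, 'C': 1, 'D': 1}
lengths_ab = {'A': 2, 'B': 2, 'C': 2, 'D': 2}   # A and B were combined
lengths_cd = {'A': 1, 'B': 2, 'C': 3, 'D': 3}   # C+D was combined with B

for lengths in (lengths_ab, lengths_cd):
    print(sum(freqs[s] * lengths[s] for s in freqs))   # prints 12 both times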
The problem is that there is no problem.
Your Huffman tree is valid, and it gives exactly the same results after encoding and decoding. Just think about building a Huffman tree by hand: there are often several ways to combine items of equal (or least-difference) frequency. E.g. if you have A, B, C (each with frequency 1), you can first combine A and B and then the result with C, or first B and C and then the result with A.
You see, there is more than one correct way.
Edit: Even with only one possible way to combine the items by frequency, you can still get different codes, because you can assign 1 to either the left or the right branch, so you end up with different (equally correct) results.