Bit Syntax and Binary Representation in Erlang

I'm trying to get my head around the Bit Syntax in Erlang and I'm having some trouble understanding how this works:
Red = 10.
Green = 61.
Blue = 20.
Color = << Red:5, Green:6, Blue:5 >> .
I've seen this example in Programming Erlang: Software for a Concurrent World by Joe Armstrong (second edition), and this code will
create a 16-bit memory area containing a single RGB triplet.
My question is: how can 3 bytes be packed into a 16-bit memory area? I'm not familiar with bit shifting whatsoever, and I wasn't able to find anything relevant on this subject for Erlang either. My understanding so far is that the segment is made up of 16 parts and that Red occupies 5, Green 6 and Blue 5, but I'm not sure how this is even possible.
Given that
61 = 0011011000110001
which alone is 16 bits, how is this packing possible?

To start with, 61 is only equal to 00110110 00110001 if you store it as two ASCII digits. When written in binary, 61 is 111101.
Note that the binary representation requires six binary digits, or six "bits" for short. That's what we're taking advantage of in this line:
Color = << Red:5, Green:6, Blue:5 >> .
We're using 5 bits for the red value, 6 bits for the green value, and 5 bits for the blue value, for a total of 16 bits. This works since the red value and the blue value are both less than 32 (since 31 is the largest number that can be represented with 5 bits), and the green value is less than 64 (since 63 is the largest number that can be represented with 6 bits).
The complete value is 01010 111101 10100 (three segments for red, green and blue), or if we split it into two bytes, 01010111 10110100.
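Seeing the same packing done with ordinary integer arithmetic may help. Here is a minimal sketch in Python (not Erlang); the shift amounts simply mirror the field widths in the binary expression, and this is only an illustration of the bit layout, not how the Erlang VM is specified to build the binary:

red, green, blue = 10, 61, 20   # the values from the question

# red occupies the top 5 bits, green the middle 6, blue the low 5
color = (red << 11) | (green << 5) | blue

print(f"{color:016b}")           # 0101011110110100
print(color.to_bytes(2, "big"))  # b'W\xb4', i.e. the bytes 01010111 and 10110100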

Related

Understanding AArch64 Translation Tables

I'm doing a hobby OS project and I am trying to get virtual memory set up. I had a previous project on the x86 architecture working with page tables, but I am now learning ARMv8.
Now, I know that the maximum number of bits used for addressing is 48 [1]. The last 12 to 16 bits are used "as-is" to index within the selected region (depending on which granule size is selected [2]).
I just don't understand how we get those intermediate bits. Obviously the documentation shows that intermediate tables are used [3], but it is quite unclear how those tables are used.
In the first half of the image (not reproduced here), we see the translation of an address with 4 KB granules using 38 address bits.
I can't understand this image in the slightest. The "offsets", for example bits 38 to 30, point to an entry in the L1 table. How and where is this table defined?
What I think is happening is that this is a 12+8+8+8 address translation scheme. Starting from the right, 12 bits find an offset within a 4096-byte block of memory. Next to that are 8 bits for L3, meaning that L3 indexes 256 blocks of 4096 bytes (1 MB). Next, L2 also has 8 bits, so 256 entries of (256*4096), totalling 256 MB per L2 entry. Next is L1, also with 8 bits; 256 entries of 256 MB means the total addressable memory is 64 GB of physical RAM.
I don't think this is correct, because that would only allow a 1:1 mapping of memory. Each table descriptor needs to carry some access flags and whatnot. This brings me back to the question: how are those tables defined? Each offset section is 8 bits, and that's not enough to contain the address of a translation table.
Anyway, I am completely lost. I would appreciate it if someone could give me a "plain English" explanation of how a translation table walk is done. A graph would be nice but is probably too much effort; I'll make one and share it afterwards to help me synthesize the information. Or at least, if someone has one, a link to a good video/guide where the information isn't totally obfuscated?
Here is the list of materials I have consulted:
https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translating-a-Virtual-Address-to-a-Physical-Address
https://forums.raspberrypi.com/viewtopic.php?t=227139
https://armv8-ref.codingbelief.com/en/chapter_d4/d42_4_translation_tables_and_the_translation_proces.html
https://github.com/bztsrc/raspi3-tutorial/blob/master/10_virtualmemory/mmu.c
[1] https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translation-tables-in-ARMv8-A
[2] https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translation-tables-in-ARMv8-A/Effect-of-granule-sizes-on-translation-tables
[3] https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translating-a-Virtual-Address-to-a-Physical-Address
The entire model behind translation tables arises from three values: the size of a translation table entry (TTE), the hardware page size (aka "translation granule"), and the number of bits used for virtual addressing.
On arm64, TTEs are always 8 bytes. The hardware page size can be one of 4KiB, 16KiB or 64KiB (0x1000, 0x4000 or 0x10000 bytes), depending on both hardware support and runtime configuration. The number of bits used for virtual addressing similarly depends on hardware support and runtime configuration, but with considerably more complex constraints.
By example
For the sake of simplicity, let's consider address translation under TTBR0_EL1 with no block mappings, no virtualization going on, no pointer authentication, no memory tagging, no "large physical address" support and the "top byte ignore" feature being inactive. And let's pick a hardware page size of 0x1000 bytes and 39-bit virtual addressing.
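Before walking through it, it may help to see how those three values pin down the geometry. Here is a rough Python sketch under the assumptions just listed (the variable names are mine, purely for illustration):

import math

granule = 0x1000   # hardware page size in bytes (4 KiB)
tte_size = 8       # bytes per translation table entry on arm64
va_bits = 39       # bits used for virtual addressing in this example

page_offset_bits = granule.bit_length() - 1                       # 12
index_bits = (granule // tte_size).bit_length() - 1                # 9 bits per level (512 entries per table)
levels = math.ceil((va_bits - page_offset_bits) / index_bits)      # 3 levels (L1..L3)
print(page_offset_bits, index_bits, levels)                        # 12 9 3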
From here, I find it easiest to start at the result and go backwards in order to understand why we arrived there. So suppose you have a virtual address of 0x123456000 and the hardware maps that to physical address 0x800040000 for you. Because the page size is 0x1000 bytes, that means that for 0 <= n <= 0xfff, all accesses to virtual address 0x123456000+n will go to physical address 0x800040000+n. And because 0x1000 = 2^12, that means the lowest 12 bits of your virtual address are not used for address translation, but for indexing into the resulting page. Though the ARMv8 manual does not use this term, these bits are commonly called the "page offset".
 63                                                      12 11          0
+----------------------------------------------------------+-------------+
|                        upper bits                        | page offset |
+----------------------------------------------------------+-------------+
Now the obvious question is: how did we get 0x800040000? And the obvious answer is: we got it from a translation table. A "level 3" translation table, specifically. Let's defer how we found that for just a moment and suppose we know it's at 0x800037000. One thing of note is that translation tables adhere to the hardware page size as well, so we have 0x1000 bytes of translation information there. And because we know that one TTE is 8 bytes, that gives us 0x1000/8 = 0x200, or 512 entries in that table. 512 = 2^9, so we'll need 9 bits from our virtual address to index into this table. Since we already use the lower 12 bits as page offset, we take bits 20:12 here, which for our chosen address yield the value 0x56 ((0x123456000 >> 12) & 0x1ff). Multiply by the TTE size, add to the translation table address, and we know that the TTE that gave us 0x800040000 is written at address 0x8000372b0.
 63                                         21 20      12 11          0
+-----------------------------------------------+----------+-------------+
|                  upper bits                   | L3 index | page offset |
+-----------------------------------------------+----------+-------------+
Now you repeat the same process over for how you got 0x800037000, which this time came from a TTE in a level 2 translation table. You again take 9 bits off your virtual address to index into that table, this time with a value of 0x11a ((0x123456000 >> 21) & 0x1ff).
 63                              30 29      21 20      12 11          0
+------------------------------------+----------+----------+-------------+
|             upper bits             | L2 index | L3 index | page offset |
+------------------------------------+----------+----------+-------------+
And once more for a level 1 translation table:
 63                   39 38      30 29      21 20      12 11          0
+-------------------------+----------+----------+----------+-------------+
|       upper bits        | L1 index | L2 index | L3 index | page offset |
+-------------------------+----------+----------+----------+-------------+
At this point, you used all 39 bits of your virtual address, so you're done. If you had 40-bit addressing, then there'd be another L0 table to go through. If you had 38-bit addressing, then we would've taken the L1 table all the same, but it would only span 0x800 bytes instead of 0x1000.
But where did the L1 translation table come from? Well, from TTBR0_EL1. Its physical address is just in there, serving as the root for address translation.
Now, to perform the actual translation, you have to do this whole process in reverse. You start with a translation table from TTBR0_EL1, but you don't know up front whether it's L0, L1, etc. To figure that out, you have to look at the translation granule and the number of bits used for virtual addressing. With 4KiB pages there's a 12-bit page offset and 9 bits for each level of translation tables, so with 39 bits you're looking at an L1 table. Then you take bits 38:30 of the virtual address to index into it, giving you the address of the L2 table. Rinse and repeat with bits 29:21 for L2 and 20:12 for L3, and you've arrived at the physical address of the target page.
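To tie the walk back to the numbers above, here is a small Python sketch of the index arithmetic (the table base addresses are just the example values from the text, not anything defined by the architecture):

va = 0x123456000

page_offset = va & 0xfff           # bits 11:0
l3_index = (va >> 12) & 0x1ff      # bits 20:12 -> 0x56
l2_index = (va >> 21) & 0x1ff      # bits 29:21 -> 0x11a
l1_index = (va >> 30) & 0x1ff      # bits 38:30 -> 0x4

l3_table = 0x800037000             # example L3 table address from the text
print(hex(l3_table + 8 * l3_index))  # 0x8000372b0, the TTE that yields 0x800040000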

bin2dec for 16 bit signed binary values (in google sheets)

In Google Sheets, I'm trying to convert a 16-bit signed binary number to its decimal equivalent, but the built-in function that does that only takes up to 10 bits. Other solutions to the problem that I've seen don't preserve the signedness.
So far I've tried:
bin2dec on the leftmost 8 bits * 2^8 + bin2dec on the rightmost 8 bits
hex2dec on the result of bin2dec on the leftmost 8 bits concatenated with bin2dec on the rightmost 8 bits
I've also seen a suggestion that multiplies each bit by its power of 2, eliminating bin2dec altogether.
Any suggestions?
You will need to use a custom function:
function binary2decimal(bin) {
  // parseInt with radix 2 also accepts an optional leading "-" sign
  return parseInt(bin, 2);
}
Let's assume that your binary number is in cell A2.
First, set the formatting as follows: Format > Number > Plain text.
Then place the following formula in, say, B2:
=ArrayFormula(SUM(SPLIT(REGEXREPLACE(SUBSTITUTE(A2&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,LEN(SUBSTITUTE(A2&"","-","")),LEN(SUBSTITUTE(A2&"","-",""))-1,-1))*IF(LEFT(A2)="-",-1,1)))
This formula will process any length binary number, positive or negative, from 1 bit to 16 bits (and, in fact, to a length of 45 or 46 bits).
What this formula does is SPLIT the binary number (without the negative sign if it exists) into its separate bits, one per column; multiply each of those by 2 raised to the power of each element of an equal-sized descending SEQUENCE that runs from one less than the number of bits (LEN - 1) down to zero; and finally apply the negative sign conditionally, IF one exists.
If you need to process a range where every value is a positive or negative binary number with exactly 16 bits, you can do so. Suppose that your 16-bit binary numbers are in the range A2:A. First, be sure to select all of Column A and set the formatting to "Plain text" as described above. Then place the following array formula into, say, B2 (being sure that B2:B is empty first):
=ArrayFormula(MMULT(SPLIT(REGEXREPLACE(SUBSTITUTE(FILTER(A2:A,A2:A<>"")&"","-",""),"(\d)","$1|"),"|")*(2^SEQUENCE(1,16,15,-1)),SEQUENCE(16,1,1,0))*IF(LEFT(FILTER(A2:A,A2:A<>""))="-",-1,1))
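If it helps to see the same arithmetic outside of a spreadsheet, here is a short Python sketch of what the formula computes, assuming (as the formula does) that negative inputs carry a leading minus sign; the function name is made up for illustration:

def bin2dec_signed(s):
    sign = -1 if s.startswith("-") else 1
    digits = s.lstrip("-")
    # weight each bit by a descending power of two, then re-apply the sign
    return sign * sum(int(bit) << (len(digits) - 1 - i) for i, bit in enumerate(digits))

print(bin2dec_signed("1111111111111111"))   # 65535
print(bin2dec_signed("-0000000000001010"))  # -10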

Large lua numbers are being printed incorrectly

I have the following test case:
Lua 5.3.2 Copyright (C) 1994-2015 Lua.org, PUC-Rio
> foo = 1000000000000000000
> bar = foo + 1
> bar
1000000000000000001
> string.format("%.0f", foo)
1000000000000000000
> string.format("%.0f", bar)
1000000000000000000
That last line should be 1000000000000000001, since that's the value of bar, but for some reason it's not. This doesn't only apply to 1000000000000000000; I've yet to find another number above that one which gives the correct value. Can anyone give an explanation for why this happens?
You're formatting the number as floating-point, not integer. That's what %.0f is doing. At some point, floats lose precision. double, for example, will lose precision after about 16 decimal digits.
If you want to format an integer as an integer, then you need to format it as an integer, using standard printf rules:
string.format("%i", bar)
log2(1000000000000000000) is between 59 and 60, which means that the binary representation of that number needs 60 bits. Double-precision floating-point numbers have only 53 bits of precision, plus a power-of-two exponent with 11 bits of range. So to store that large a number as floating point (which is what you requested with the %f format specifier), six to seven bits of precision are chopped off the end of the number, and the whole thing is multiplied by a power of two to get it back in range (2^59 in this case, I think). Chopping off those final bits removes the precision that allows 1000000000000000000 and 1000000000000000001 to be distinct from each other.
(This is not a particularly precise description of floating point, apologies if my numbers or descriptions are not exact.)
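The effect is easy to reproduce in any language that uses IEEE 754 doubles. For example, this Python snippet (only a cross-check; Lua's own behaviour is as described above) shows the same loss of precision and the integer-formatting fix:

foo = 1000000000000000000
bar = foo + 1

print(float(foo) == float(bar))  # True: both integers round to the same double
print("%.0f" % bar)              # 1000000000000000000 (the value went through a float)
print("%d" % bar)                # 1000000000000000001 (integer formatting keeps the value)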

Unexpected result subtracting decimals in ruby [duplicate]

Can somebody explain why multiplying by 100 here gives a less accurate result but multiplying by 10 twice gives a more accurate result?
± % sc
Loading development environment (Rails 3.0.1)
>> 129.95 * 100
12994.999999999998
>> 129.95*10
1299.5
>> 129.95*10*10
12995.0
If you do the calculations by hand in double-precision binary, which is limited to 53 significant bits, you'll see what's going on:
129.95 = 1.0000001111100110011001100110011001100110011001100110 x 2^7
129.95*100 = 1.1001011000010111111111111111111111111111111111111111011 x 2^13
This is 56 significant bits long, so rounded to 53 bits it's
1.1001011000010111111111111111111111111111111111111111 x 2^13, which equals
12994.999999999998181010596454143524169921875
Now 129.95*10 = 1.01000100110111111111111111111111111111111111111111111 x 2^10
This is 54 significant bits long, so rounded to 53 bits it's 1.01000100111 x 2^10 = 1299.5
Now 1299.5 * 10 = 1.1001011000011 x 2^13 = 12995.
First off: you are looking at the string representation of the result, not the actual result itself. If you really want to compare the two results, you should format both of them explicitly, using String#%, and format them the same way.
Secondly, that's just how binary floating point numbers work. They are inexact, they are finite and they are binary. All three mean that you get rounding errors, which generally look totally random, unless you happen to have memorized the entirety of IEEE754 and can recite it backwards in your sleep.
There is no floating point number exactly equal to 129.95. So your language uses a value which is close to it instead. When that value is multiplied by 100, the result is close to 12995, but it just so happens to not equal 12995. (It is also not exactly equal to 100 times the original value it used in place of 129.95.) So your interpreter prints a decimal number which is close to (but not equal to) the value of 129.95 * 100 and which shows you that it is not exactly 12995. It also just so happens that the result 129.95 * 10 is exactly equal to 1299.5. This is mostly luck.
Bottom line is, never expect equality out of any floating point arithmetic, only "closeness".
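As a cross-check, here are the same doubles evaluated in Python (any IEEE 754 implementation gives identical results); the last line makes the inexactness of 129.95 itself visible:

print(129.95 * 100)       # 12994.999999999998
print(129.95 * 10)        # 1299.5
print(129.95 * 10 * 10)   # 12995.0
print("%.20f" % 129.95)   # 129.94999999999998863132 -> already not exactly 129.95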

Irregularities in Gforth's conversion to doubles

I'm fairly confused about how the s>d and d>s functions work in Forth.
From what I've read, typing 16.0 will put 160 0 on the stack (since it takes up two cells) and d. will show 160.
Now, if I enter 16 s>d I would expect the stack to be 160 0 and d. to show 160 like in the previous example. However, the stack is 16 0 and d. is 16.
Am I entering doubles incorrectly? Is s>d not as simple as "convert a single-cell value into a double-cell value"? Is there any reason for this irregularity? Any clues would be much appreciated.
Gforth interprets all of these the same: 1.60, 16.0, and 160., i.e. 160 converted to a double number. Whereas 16 s>d converts 16 to a double number.
ANS Forth only mandates that when the text interpreter processes a number that is immediately followed by a decimal point and is not found as a definition name, the text interpreter shall convert it to a double-cell number. But Gforth goes beyond that: http://www.complang.tuwien.ac.at/forth/gforth/Docs-html/Number-Conversion.html#Number-Conversion
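To illustrate the difference described above, here is a small Python sketch of the two behaviours (64-bit cells assumed; the helper functions are mine, purely for illustration of the stack contents, not how Gforth is implemented):

CELL = 2**64  # one 64-bit cell

def interpret_double(text):
    # "16.0" -> the digits "160" -> double-cell value 160 -> low cell 160, high cell 0
    value = int(text.replace(".", ""))
    return value % CELL, value // CELL

def s_to_d(n):
    # s>d sign-extends a single cell: 16 -> low cell 16, high cell 0
    return n % CELL, (0 if n >= 0 else CELL - 1)

print(interpret_double("16.0"))  # (160, 0)
print(s_to_d(16))                # (16, 0)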
