Length and size of strings in Elixir / Erlang needs explanation

Can someone explain why s is a string with 4096 chars
iex(9)> s = String.duplicate("x", 4096)
... lots of "x"
iex(10)> String.length(s)
4096
but its memory size is only 6 words?
iex(11)> :erts_debug.size(s)
6 # WHAT?!
And why is it that s2 is a much shorter string than s
iex(13)> s2 = "1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20"
"1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20"
iex(14)> String.length(s2)
50
but its size is 3 words more than that of s?
iex(15)> :erts_debug.size(s2)
9 # WHAT!?
And why do the sizes of these strings not match their lengths?
Thanks

The first clue as to why it shows these values can be found in this question. Quoting the size/1 docs:
%% size(Term)
%% Returns the size of Term in actual heap words. Shared subterms are
%% counted once. Example: If A = [a,b], B =[A,A] then size(B) returns 8,
%% while flat_size(B) returns 12.
The second clue can be found in the Erlang documentation about the implementation of bitstrings.
So in the first case the string is too big (more than 64 bytes), so it is stored as a refc binary: the binary data itself lives outside the process heap, and the process heap only holds a small reference to it, which is why :erts_debug.size/1 reports just a few words.
In the second case the string is shorter than 64 bytes, so it is a heap binary, i.e. an array of bytes stored directly on the process heap. At 8 bytes per word (64-bit VM), 9 words is 72 bytes, and when we check the documentation about the exact memory overhead in the VM we see that Erlang uses 3..6 words per binary plus the data, where the data can be shared.

Related

Understanding AArch64 Translation Tables

I'm doing a hobby OS project and I am trying to get virtual memory set up. I had another project on an x86 architecture working with page tables, but I am now learning ARMv8.
Now, I know that the maximum number of bits used for addressing is 48[1]. The last 12 to 16 bits are used "as-is" to index within the selected region (depending on which granule size is selected[2]).
I just don't understand how we get those intermediate bits. Obviously the documentation shows that intermediate tables are used[3], but it is quite unclear on how those tables are used.
In the first half of the following image, we see the translation of an address with 4K granules, using 38 address bits.
I can't understand this image in the slightest. The "offsets", for example bits 38 to 30, point to an entry in the L1 table. How and where is this table defined?
What I think is happening is that this is a 12+8+8+8 address translation scheme. Starting from the right, 12 bits to find an offset within a 4096-byte block of memory. To the left of that are 8 bits for L3, meaning that L3 indexes 256 blocks of 4096 bytes (1MB). To the left of that, L2 also has 8 bits, so 256 entries of (256*4096), totalling 256MB per L2 table. To the left of L2 is L1, also with 8 bits; 256 entries of 256MB means the total addressable memory is 64GB of physical RAM.
I don't think this is correct because that would only allow a 1:1 mapping of memory. Each table descriptor needs to carry some access flags and whatnot. Thus, going back to the question: how are those tables defined? Each offset section is 8 bits, and that's not enough to contain the address of a translation table.
Anyway, I am completely lost. I would appreciate it if someone could give me a "plain English" explanation of how a translation table walk is done. A graph would be nice but probably too much effort; I'll make one and share it afterwards to help me synthesize the information. Or at least, if someone has one, a link to a good video/guide where the information isn't totally obfuscated?
Here is the list of materials I have consulted:
https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translating-a-Virtual-Address-to-a-Physical-Address
https://forums.raspberrypi.com/viewtopic.php?t=227139
https://armv8-ref.codingbelief.com/en/chapter_d4/d42_4_translation_tables_and_the_translation_proces.html
https://github.com/bztsrc/raspi3-tutorial/blob/master/10_virtualmemory/mmu.c
[1]https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translation-tables-in-ARMv8-A
[2]https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translation-tables-in-ARMv8-A/Effect-of-granule-sizes-on-translation-tables
[3]https://developer.arm.com/documentation/den0024/a/The-Memory-Management-Unit/Translating-a-Virtual-Address-to-a-Physical-Address
The entire model behind translation tables arises from three values: the size of a translation table entry (TTE), the hardware page size (aka "translation granule"), and the number of bits used for virtual addressing.
On arm64, TTEs are always 8 bytes. The hardware page size can be one of 4KiB, 16KiB or 64KiB (0x1000, 0x4000 or 0x10000 bytes), depending on both hardware support and runtime configuration. The number of bits used for virtual addressing similarly depends on hardware support and runtime configuration, but with considerably more complex constraints.
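As a quick cross-check on those numbers, here is a small C sketch (my own illustration, not taken from the ARM documentation): since a translation table occupies one granule and holds 8-byte TTEs, the number of index bits resolved per level follows directly from the granule size.
#include <stdio.h>

int main(void) {
    /* Each translation table occupies one granule and holds 8-byte TTEs,
     * so the number of index bits resolved per level is log2(granule / 8):
     * 9 for 4KiB, 11 for 16KiB, 13 for 64KiB. */
    unsigned long granules[] = { 0x1000UL, 0x4000UL, 0x10000UL };
    for (int i = 0; i < 3; i++) {
        unsigned long entries = granules[i] / 8;
        int bits = 0;
        while ((1UL << bits) < entries)
            bits++;
        printf("granule 0x%lx: %lu entries per table, %d index bits per level\n",
               granules[i], entries, bits);
    }
    return 0;
}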
By example
For the sake of simplicity, let's consider address translation under TTBR0_EL1 with no block mappings, no virtualization going on, no pointer authentication, no memory tagging, no "large physical address" support and the "top byte ignore" feature being inactive. And let's pick a hardware page size of 0x1000 bytes and 39-bit virtual addressing.
From here, I find it easiest to start at the result and go backwards in order to understand why we arrived here. So suppose you have a virtual address of 0x123456000 and the hardware maps that to physical address 0x800040000 for you. Because the page size is 0x1000 bytes, that means that for 0 <= n <= 0xfff, all accesses to virtual address 0x123456000+n will go to physical address 0x800040000+n. And because 0x1000 = 2^12, that means the lowest 12 bits of your virtual address are not used for address translation, but for indexing into the resulting page. Though the ARMv8 manual does not use this term, these bits are commonly called the "page offset".
 63                                                        12  11         0
+------------------------------------------------------------+-------------+
|                         upper bits                          | page offset |
+------------------------------------------------------------+-------------+
Now the obvious question is: how did we get 0x800040000? And the obvious answer is: we got it from a translation table. A "level 3" translation table, specifically. Let's defer how we found that for just a moment and suppose we know it's at 0x800037000. One thing of note is that translation tables adhere to the hardware page size as well, so we have 0x1000 bytes of translation information there. And because we know that one TTE is 8 bytes, that gives us 0x1000/8 = 0x200, or 512 entries in that table. 512 = 2^9, so we'll need 9 bits from our virtual address to index into this table. Since we already use the lower 12 bits as page offset, we take bits 20:12 here, which for our chosen address yield the value 0x56 ((0x123456000 >> 12) & 0x1ff). Multiply by the TTE size, add to the translation table address, and we know that the TTE that gave us 0x800040000 is written at address 0x8000372b0.
 63                                         21  20       12  11           0
+-------------------------------------------------+----------+-------------+
|                   upper bits                     | L3 index | page offset |
+-------------------------------------------------+----------+-------------+
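To make the arithmetic concrete, here is a minimal C sketch that reproduces the numbers above; the table base 0x800037000 is just the example value assumed in the text.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Bits 20:12 of the virtual address select the entry in the L3 table;
     * scale by the 8-byte TTE size and add the table base. */
    uint64_t va       = 0x123456000ULL;
    uint64_t l3_base  = 0x800037000ULL;         /* example L3 table address from the text */
    uint64_t l3_index = (va >> 12) & 0x1ff;     /* 0x56 */
    uint64_t tte_addr = l3_base + l3_index * 8; /* 0x8000372b0 */
    printf("L3 index = 0x%" PRIx64 ", TTE at 0x%" PRIx64 "\n", l3_index, tte_addr);
    return 0;
}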
Now you repeat the same process over for how you got 0x800037000, which this time came from a TTE in a level 2 translation table. You again take 9 bits off your virtual address to index into that table, this time with a value of 0x11a ((0x123456000 >> 21) & 0x1ff).
 63                              30  29       21  20       12  11          0
+--------------------------------------+----------+----------+-------------+
|              upper bits              | L2 index | L3 index | page offset |
+--------------------------------------+----------+----------+-------------+
And once more for a level 1 translation table:
 63                   39  38       30  29       21  20       12  11        0
+---------------------------+----------+----------+----------+-------------+
|        upper bits         | L1 index | L2 index | L3 index | page offset |
+---------------------------+----------+----------+----------+-------------+
At this point, you used all 39 bits of your virtual address, so you're done. If you had 40-bit addressing, then there'd be another L0 table to go through. If you had 38-bit addressing, then we would've taken the L1 table all the same, but it would only span 0x800 bytes instead of 0x1000.
But where did the L1 translation table come from? Well, from TTBR0_EL1. Its physical address is just in there, serving as the root for address translation.
Now, to perform the actual translation, you have to do this whole process in reverse. You start with a translation table from TTBR0_EL1, but you don't know up front whether it's L0, L1, etc. To figure that out, you have to look at the translation granule and the number of bits used for virtual addressing. With 4KiB pages there's a 12-bit page offset and 9 bits for each level of translation tables, so with 39 bits you're looking at an L1 table. Then you take bits 38:30 of the virtual address to index into it, giving you the address of the L2 table. Rinse and repeat with bits 29:21 for L2 and 20:12 for L3, and you've arrived at the physical address of the target page.
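Putting the reverse walk together, here is a rough C sketch under the same assumptions as above (4KiB granule, 39-bit virtual addresses, page mappings only, attribute bits ignored). read_phys64() is a hypothetical helper, not a real API, standing in for however you read a descriptor at a physical address.
#include <stdint.h>

/* Hypothetical helper: returns the 8-byte descriptor stored at a given
 * physical address. In a real kernel this would go through whatever
 * mapping of physical memory you have available. */
extern uint64_t read_phys64(uint64_t pa);

/* Rough sketch of a software table walk: 4KiB granule, 39-bit virtual
 * addresses, page mappings only, attribute bits ignored. Returns the
 * physical address, or -1 on a translation fault. */
uint64_t translate(uint64_t ttbr0, uint64_t va)
{
    uint64_t table = ttbr0 & 0x0000fffffffff000ULL;   /* L1 table base         */
    const int shift[] = { 30, 21, 12 };               /* L1, L2, L3 index LSBs */

    for (int level = 0; level < 3; level++) {
        uint64_t index = (va >> shift[level]) & 0x1ff;    /* 9 bits per level */
        uint64_t tte   = read_phys64(table + index * 8);  /* 8-byte entries   */

        if ((tte & 1) == 0)                 /* bit 0: valid */
            return (uint64_t)-1;            /* translation fault */

        /* Bits 47:12 of the descriptor hold the next-level table address
         * (L1/L2) or the final page address (L3). */
        table = tte & 0x0000fffffffff000ULL;
    }

    return table | (va & 0xfff);            /* put the page offset back */
}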

Why is the allocated memory different from the size of the string?

Please consider the following code:
char **str;
str = malloc(sizeof(char *) * 3); // Allocates enough memory for 3 char pointers
str[0] = malloc(sizeof(char) * 24);
str[1] = malloc(sizeof(char) * 25);
str[2] = malloc(sizeof(char) * 25);
When I use printf to print the memory addresses of each pointer:
printf("str[0] = '%p'\nstr[1] = '%p'\nstr[2] = '%p'\n", str[0], str[1], str[2]);
I get this output:
str[0] = '0x1254030'
str[1] = '0x1254050'
str[2] = '0x1254080'
I expected the number corresponding to the second address to be the sum of the number corresponding to the first one and 24, which corresponds to the size of the string str[0] in bytes (since a char has a size of 1 byte). That is, I expected the second address to be 0x1254048, considering that these numbers are expressed in base 16 (0123456789abcdef).
It seems to me that I spotted a pattern: starting from 24 characters in a string, for every 16 more characters contained in it, the memory used is 16 bytes larger. For example, a 45-character string uses 64 bytes of memory, a 77-character string uses 80 bytes of memory, and a 150-character string uses 160 bytes of memory.
I would like to understand why the memory allocated isn't equal to the size of the string. Why does it follow this pattern?
There are at least two reasons malloc() may return memory in the manner you've noted.
Efficiency and performance. By returning blocks of memory in only a small set of actual sizes, a request for memory is much more likely to find an already-existing block of memory that can be used, or wind up producing a block of memory that can be easily reused in the future. This makes the request return faster and has the side effect of limiting memory fragmentation.
Implications arising from memory alignment requirements. Section 7.22.3 Memory management functions of the C Standard states: "The order and contiguity of storage allocated by successive calls to the aligned_alloc, calloc, malloc, and realloc functions is unspecified. The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object with a fundamental alignment requirement ..." Note the last sentence. Since the malloc() implementation has no knowledge of what the memory is to be used for, the memory returned has to be suitably aligned for any possible use. This usually means 8- or 16-byte alignment, depending on the platform.
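As a small illustration of that point, the following C11 sketch prints the platform's fundamental alignment and the addresses of two small allocations; the exact spacing you observe is an implementation detail and will vary between allocators.
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

int main(void) {
    /* malloc() must return memory suitably aligned for any fundamental type,
     * so returned pointers are multiples of _Alignof(max_align_t), commonly
     * 8 or 16. The gap between successive allocations is an implementation
     * detail and is often larger than the requested size. */
    printf("fundamental alignment: %zu bytes\n", _Alignof(max_align_t));

    char *p = malloc(24);
    char *q = malloc(25);
    if (p && q)
        printf("p = %p\nq = %p\n", (void *)p, (void *)q);
    free(p);
    free(q);
    return 0;
}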
Three reasons:
(1) The string "ABC" contains 4 characters, because every string in C has to have a terminating '\0'.
(2) Many processors have memory address alignment issues that make it most efficient when any block allocated by malloc() starts on an address that is a multiple of 4, or 8, or whatever the "natural" memory size is.
(3) The malloc() function itself requires some memory to store information about what has been allocated where, so that free() knows what to do.
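A tiny sketch of point (1): the literal "ABC" occupies four bytes, one more than its length, because of the terminating '\0'.
#include <stdio.h>
#include <string.h>

int main(void) {
    /* "ABC" is stored as 'A', 'B', 'C', '\0': four bytes, but strlen()
     * does not count the terminator. */
    printf("sizeof \"ABC\"  = %zu\n", sizeof "ABC");    /* 4 */
    printf("strlen(\"ABC\") = %zu\n", strlen("ABC"));   /* 3 */
    return 0;
}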

Direct Mapped Cache of Blocks Example

So I have this question in my homework assignment that I have been struggling a bit with. I looked over my lecture content/notes and have been able to utilize those to answer the questions; however, I am not 100% sure that I did everything correctly. There are two parts (parts C and D) in the question that I was not able to figure out even after consulting my notes and online sources. I am not looking for a solution for those two parts by any means, but it would be greatly appreciated if I could get, at least, a nudge in the right direction in how I can go about solving them.
I know this is a rather large question; however, I hope someone could check my answers and tell me if all my work and my way of looking at this problem are correct. As always, thank you for any help :)
Alright, so now that we have the formalities out of the way,
--------------------------Here is the Question:--------------------------
Suppose a small direct-mapped cache of blocks with 32 blocks is constructed. Each cache block stores
eight 32-bit words. The main memory—which is byte addressable—is 16,384 bytes in size. 32-bit words are stored
word aligned in memory, i.e., at an address that is divisible by 4.
(a) How many 32-bit words can the memory store (in decimal)?
(b) How many address bits would be required to address each byte of memory?
(c) What is the range of memory addresses, in hex? That is, what are the addresses of the first and last bytes of
memory? I'll give you a hint: memory addresses are numbered starting at 0.
(d) What would be the address of the last word in memory?
(e) Using the cache mapping scheme discussed in the Chapter 5 lecture notes, how many and which address bits
would be used to form the block offset?
(f) How many and which memory address bits would be used to form the cache index?
(g) How many and which address bits would be used to form the tag field for each cache block?
(h) To which cache block (in decimal) would memory address 0x2A5C map to?
(i) What would be the block offset (in decimal) for 0x2A5C?
(j) How many other main memory words would map to the same block as 0x2A5C?
(k) When the word at 0x2A5C is moved into a cache block, what are the memory addresses (in hex) of the other
words which will also be moved into this block? Express your answer as a range, e.g., [0x0000, 0x0200].
(l) The first word of a main memory block that is mapped to a cache block will always be at an address that is
divisible by __ (in decimal)?
(m) Including the V and tag bits of each cache block, what would be the total size of the cache (in bytes)
(n) what would be the size allocated for the data bits (in bytes)?
----------------------My answers and work-----------------------------------
a) memory = 16384 bytes. 16384 bytes into bits = 131072 bits. 131072/32 = 4096 32-bit words
b) 2^14 (main memory) * 2^2 (4 bits/word) = 2^16. take log(base2)(2^16) = 16 bits
c) couldn't figure this part out (would appreciate some input (NOT A SOLUTION) on how I can go about looking at this problem)
d) could not figure this part out either :(
e)8 words in each cache line. 8 * 4(2^2 bits/word) = 32 bits in each cache line. log(base2)(2^5) = 5 bits used for block offset.
f) # of blocks = 2^5 = 32 blocks. log(base2)(2^5) = 5 bits for cache index
g) tag = 16 - 5 - 5 - 2(word alignment) = 4 bits
h) 0x2A5C
0010   10100   10111   00
tag    index   offset  word-aligned bits
maps to cache block index = 10100 = 0x14
i) maps to block offset = 10111 = 0x17
j) 4 tag bits, 5 block offset = 2^9 other main memory words
k) it is a permutation of the block offsets. so it maps the memory addresses with the same tag and cache index bits and block offsets of 0x00 0x01 0x02 0x04 0x08 0x10 0x11 0x12 0x14 0x18 0x1C 0x1E 0x1F
l)divisible by 4
m) 2(V+tag+data) = 2(1+4+2^3*2^5) = 522 bits = 65.25 bytes
n)data bits = 2^5 blocks * 2^3 words per block = 256 bits = 32 bytes
Part C:
If a memory has M bytes, and the memory is byte addressable, then the memory addresses range from 0 to M - 1.
For your question, this means that memory addresses range from 0 to 16383, or in hex 0x0 to 0x3FFF.
Part D:
Words are 4 bytes long. So given your answer to C, the last word is at:
(0x3FFF - 3) -> 0x3FFC.
You can see that this is correct because the lowest 2 bits of the address are 0, which must be true of any 4 byte aligned address.
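If you want to double-check parts (c) and (d) mechanically, here is a short C sketch using only the numbers from the question (16,384 bytes of byte-addressable memory, 4-byte words).
#include <stdio.h>

int main(void) {
    /* 16,384-byte, byte-addressable memory: valid byte addresses run from 0
     * to M - 1, and the last 4-byte-aligned word starts 3 bytes before the end. */
    unsigned int mem_bytes = 16384;              /* 2^14 */
    unsigned int last_byte = mem_bytes - 1;      /* 0x3FFF */
    unsigned int last_word = last_byte - 3;      /* 0x3FFC */
    printf("address range: 0x0 .. 0x%X\n", last_byte);
    printf("last word at:  0x%X\n", last_word);
    return 0;
}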

How do I calculate the size and layout of this particular struct?

The structure is:
struct {
    char a;
    short b;
    short c;
    short d;
    char e;
} s1;
size of short is given as 2 bytes
size of char is given as 1 byte
It is a 32-bit LITTLE ENDIAN processor
By my calculation, the answer should be:
1000 a[0]
1001 offset
1002 b[0]
1003 b[1]
1004 c[0]
1005 c[1]
1006 d[0]
1007 d[1]
1008 e[0]
size of S1 = 9 bytes
but according to the solution, the size of S1 is supposed to be 10 bytes
The answer here is that the layout of the structure is entirely up to the compiler.
10 is likely to be the most common size of this structure.
The reason for the padding is that, if there is an array, it will keep all the members properly aligned. If the size were 9, every other array element would have misaligned structure members.
Unaligned data accesses are not permitted on some systems. On most systems, they cause the processor to use extra cycles to access the data.
A compiler could allocate 4 bytes for each element in such a structure.
The C Standard says (sorry, not at my computer, so no quote): structs are aligned to the alignment of the largest (base type) member. Your largest member field is a short, 2 bytes, so the first element 'a' is aligned at an even address. 'a' takes up 1 byte. 'b' has to be aligned again at an even address, so one byte gets wasted. The last element of your struct, 'e', is also one byte, and a trailing padding byte follows it so that arrays of the struct stay properly aligned, which is how you end up at 10 rather than 9. If you put 'a' at the end, i.e. rearrange the members, you are likely to find the size of your struct to be 8 bytes, which is as good as it gets.
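If you want to see what your particular compiler does, a small C sketch like the one below prints the member offsets and the total size (the struct is given a tag here only so offsetof can name it); the exact padding is up to the compiler and target ABI, so the numbers may differ from the 10 bytes discussed here.
#include <stdio.h>
#include <stddef.h>

/* Same members as in the question; how much padding is inserted is up to
 * the compiler and target ABI, so the printed numbers may differ. */
struct s1 {
    char  a;
    short b;
    short c;
    short d;
    char  e;
};

int main(void) {
    printf("a at offset %zu\n", offsetof(struct s1, a));
    printf("b at offset %zu\n", offsetof(struct s1, b));
    printf("c at offset %zu\n", offsetof(struct s1, c));
    printf("d at offset %zu\n", offsetof(struct s1, d));
    printf("e at offset %zu\n", offsetof(struct s1, e));
    printf("sizeof(struct s1) = %zu\n", sizeof(struct s1));   /* typically 10 */
    return 0;
}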

How to solve memory address problems

Can anyone explain how to solve these problems step by step
Assume a 2^24 byte memory.
Assume the memory is byte addressable. What is the lowest address and highest address? How many bits are needed for the address?
Assume the memory is word addressable, with a 16 bit word. What is the lowest address and highest address? How many bits are needed for the address?
Assume the memory is word addressable, with a 32 bit word. What is the lowest address and highest address? How many bits are needed for the address?
A byte is 8 bits. If memory is byte-addressable, you can't reference an address by anything other than the start of some 8 bits. That is, in a 2^2 byte memory, you have 4 bytes. The lowest address starts at byte 0, and the highest address starts at byte 3 (0, 1, 2, 3 = 4 bytes total).
If the bytes are contiguous (they are juxtaposed, touching each other rather than spread out), then you can fit all 4 bytes into a 4-byte memory perfectly.
a)
If you have 2^24 bytes then you have 2^(24 + 3) bits because you're doing (2^24 * 2^3) = 2^(24+3). Thus you have 134,217,728 total bits.
The highest address would be one byte before the end, so the address at 2^24 - 1. Note that it's 2^24 - 1 and not 2^27 - 1 because you are addressing it by bytes and not bits. Lowest address would be 0.
Lowest address = 0
Highest address = 2^24 - 1
b)
A word just means a grouping of bytes. A 1-byte word is literally the same thing as a byte; it just implies that the word is some meaningful piece of data, whereas a byte is not necessarily a meaningful piece of data.
A 16-bit word is the same as a 2-byte word because 8 bits are in a byte; thus if you have 2^24 bytes available, you only have a total of 2^23 words.
Lowest address = 0
Highest address = max number of words - 1 = 2^23 - 1.
c)
Same thing as (b), but with a 4-byte word instead of a 2-byte word. Thus:
2^22 words can be stored.
Lowest address = 0
Highest address = max number of words - 1 = 2^22 - 1.
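A short C sketch to check these numbers, assuming the 2^24-byte memory from the question:
#include <stdio.h>

int main(void) {
    /* 2^24-byte memory: highest address for byte, 16-bit-word and
     * 32-bit-word addressing. */
    unsigned long bytes = 1UL << 24;

    unsigned long byte_high   = bytes - 1;        /* 2^24 - 1, 24 address bits */
    unsigned long word16_high = bytes / 2 - 1;    /* 2^23 - 1, 23 address bits */
    unsigned long word32_high = bytes / 4 - 1;    /* 2^22 - 1, 22 address bits */

    printf("byte addressable:    0 .. %lu\n", byte_high);
    printf("16-bit word address: 0 .. %lu\n", word16_high);
    printf("32-bit word address: 0 .. %lu\n", word32_high);
    return 0;
}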
Feel free to correct me if you see any errors. Hope I helped.
