Reading here, it says malloc can't allocate less than 32 bytes. I have also seen it claimed elsewhere that 16 bytes is the minimum.
This diagram shows generally what a malloc block looks like, but it is not detailed enough.
The first link suggests there is an 8-byte minimum required to store the size of the block. Piecing these things together, my guess is:
16 bits for the size (but that would limit block size to 65,535 bytes)
16 bits for the pointer to the next free block (but that would also limit the number of blocks to 65,536; at up to 65,535 bytes each that is roughly 4 GB, which I guess makes sense).
That would mean the block structure would be:
[size, pointer, userdata....]
[16 bits, 16 bits, 65,535 bytes max]
This would mean malloc can't hand out anything smaller than 16 + 16 + 16 = 48 bits: the two header fields plus at least 16 bits of user data.
Wondering if this is accurate or if there is more to it.
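Purely to make the guess concrete, here is a minimal C sketch of that kind of block layout. The field widths are assumptions taken from the reasoning above, not how glibc or any real allocator actually lays out its chunks:

```c
#include <stdint.h>

/* Hypothetical free-list block, only to illustrate the guessed layout above.
 * Real allocators use different (and larger) headers. */
struct block {
    uint16_t      size;   /* 16-bit size field: would cap a block at 65,535 bytes */
    struct block *next;   /* pointer to the next free block                       */
    /* ... user data follows the header ...                                       */
};

/* Every allocation pays for the header (plus alignment padding), which is why
 * even a 1-byte request costs a fixed minimum amount of heap. */
```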
Let's say a computer has a word size of 26 bits. I'm curious how many memory addresses the processor can generate.
I'm thinking that the maximum number it can hold would be 2^26 - 1, and that it can have 2^26 unique memory addresses.
I'm also curious: if each cell in the memory has a size of 12 bits, how many bytes of memory can this processor address?
My understanding is that in most cases a processor can hold up to 32 bits, which is 4 bytes, and each byte is 8 bits. In this case, however, each byte would be 12 bits, and the processor would be able to address 2^26/12 bytes of memory. Is that safe to say?
I'm thinking that the maximum number it can hold would be 2^26 - 1, and that it can have 2^26 unique memory addresses.
I agree. We usually refer to this as the size of the address space.
As for the next question:
These days, the term byte is generally agreed to mean 8 bits, so 12 bits would be 1.5 bytes. It is a matter of terminology, though, and the terminology has varied in the past.
So, I would say 2^26 12-bit words are capable of holding/storing 2^26 * 1.5 bytes, though the bytes are not individually addressable and would have to be packed and unpacked to access them separately.
The DEC PDP-8 was a 12-bit, word-addressable computer, so there were multiple schemes for storing characters: two 6-bit characters in a 12-bit word, and also one and a half 8-bit characters per 12-bit word, i.e. three 8-bit characters packed into two 12-bit words.
Similar issues occur when storing packed booleans in memory: each boolean takes only a single bit, yet the processor can access a minimum of 8 bits at a time, so it must extract a single bit from a larger datum.
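To illustrate that last point, here is a small C sketch of reading and writing a single packed boolean when the smallest unit the processor can access is a byte (the helper names are made up for the example):

```c
#include <stdint.h>
#include <stdbool.h>

/* 64 booleans packed into 8 bytes: one bit each. */
static uint8_t flags[8];

/* Read boolean i: load the whole containing byte, then shift and mask out one bit. */
static bool get_flag(unsigned i) {
    return (flags[i / 8] >> (i % 8)) & 1u;
}

/* Write boolean i: read-modify-write the containing byte. */
static void set_flag(unsigned i, bool value) {
    if (value)
        flags[i / 8] |= (uint8_t)(1u << (i % 8));
    else
        flags[i / 8] &= (uint8_t)~(1u << (i % 8));
}
```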
I'm just checking to make sure I have a proper understanding of how memory access works.
Say I have a word-addressable memory system with 64-bit words.
How much memory could be accessed using a 32-bit address size?
A 64 bit word is 8 bytes, so we're dealing with an 8 byte word.
An 8 byte word can hold up to 2^8 (256).
Given that we have a 32 bit address, we have 2^32, but since each word is taking up 256 of those, (2^32)/256 = 16,777,216 bytes.
To put that into metric terms, we have 2^24 = (2^4) * (2^20) = 16 MB.
Is this the proper way of doing this?
A 32-bit address provides 4,294,967,296 possible addresses. If the smallest addressable element is a 64-bit == 8-byte word (rather than a byte), then the total amount of addressable space would be 4,294,967,296 x 8 = 34,359,738,368 bytes, i.e. 2^35 bytes = 32 GiB (about 34 GB in decimal units).
As for the capacity of an 8 byte word, it's 8 bytes, not 2^8 = 256 bytes.
Note that some old computers did have a basic addressing system that only addressed words; byte access required a byte index or offset from a word-based address. I don't think any current computers use such a scheme.
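For reference, the arithmetic above can be checked with a few lines of C (a worked example of the numbers, nothing more):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t words = 1ULL << 32;   /* 2^32 addressable 64-bit words  */
    uint64_t bytes = words * 8;    /* 8 bytes per word -> 2^35 bytes */
    printf("%llu words, %llu bytes (= %llu GiB)\n",
           (unsigned long long)words,
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 30));   /* prints 32 GiB */
    return 0;
}
```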
You are taking a 32-bit address, which means 2^32 bits can be addressed. If you want to know how many bytes can be addressed, just divide: 2^32/8 = 2^29, because 1 byte has 8 bits.
And if you want to know how many words can be addressed, it is 2^29/8, because 1 word contains 8 bytes, so 2^26 words can be addressed.
And since one word is 8 bytes, we can address (2^26) * 8 bytes.
Hope it helps!
Good people of the Internet!
For the past couple of days I've been reading about how the CPU accesses memory, and how access can be slower than desired if the object being accessed is spread over different chunks that the CPU reads.
In very general and abstract terms: say I have an address space from 0x0 to 0xF with one-byte cells, and the CPU reads memory in chunks of 4 bytes (that is, it has four-byte memory access granularity). If I need to read a 4-byte object residing in cells 0x0 - 0x3, the CPU does it in one operation, while if the same object occupies cells 0x1 - 0x4, the CPU needs to perform two read operations (read 0x0 - 0x3 first, then 0x4 - 0x7), shift bytes, and combine the two parts (or fail, if it cannot do unaligned access). This happens, once again, because the CPU can read memory only in 4-byte chunks (in our abstract case). Let's also assume that the CPU makes these reads within one cache line and there is no need to change the contents of the cache between reads.
So, in this case, the beginning of each chunk the CPU can read resides at a memory address that is a multiple of 4 (right?). OK, I don't have any questions about why the CPU reads in chunks, but why exactly is the beginning of each chunk aligned in this way? Referring to the example in the previous paragraph, why exactly can the CPU not read a 4-byte chunk starting at 0x1?
As far as I understand, the CPU is perfectly aware that 0x1 exists. So is all the fuss happening because the memory controller cannot access a chunk of memory starting at 0x1? Or is it because a couple of LSBs in a processor word are reserved on some architectures? Or is the fact that they are reserved a consequence of aligned access rather than its cause? (That seems like a second question already, but I'll leave it in, since as I write this I have the feeling the two are related.)
There are a bunch of answers here touching on this topic (like this and this) and articles online (like this and this), but all of these resources explain the phenomenon itself and its consequences well; none of them explains why exactly the CPU cannot read a chunk of memory starting "in between" the aligned boundaries (or maybe I just couldn't find it).
Consider a simple CPU. It has 32 RAM chips. Each chip supplies one bit of memory. The CPU produces one address, passes it to the 32 RAM chips, and 32 bits come back. The first RAM chip holds bit 0 of bytes 0, 4, 8, 12, 16 etc. The second RAM chip holds bit 1 of bytes 0, 4, 8, 12, 16 etc. The ninth RAM chip holds bit 0 of bytes 1, 5, 9, 13, 17 etc.
So you see that the 32 RAM chips between them can produce bits 0 to 7 of bytes 0 to 3, or bytes 4 to 7, or bytes 8 to 11 etc. They are incapable of producing bytes 1 to 4.
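To make the shift-and-combine sequence from the question concrete, here is a rough C sketch of how a 4-byte value at an unaligned address could be assembled from two aligned 4-byte reads. It assumes little-endian byte order and is a software analogue of the idea, not what any particular CPU's load unit literally does:

```c
#include <stdint.h>

/* Read a 32-bit little-endian value at byte offset 'addr' from 'mem',
 * using only loads that are aligned to 4 bytes -- the software analogue
 * of what the RAM chips above are able to deliver in one access. */
static uint32_t load_u32_unaligned(const uint32_t *mem, uint32_t addr) {
    uint32_t index = addr / 4;        /* which aligned 4-byte chunk     */
    uint32_t shift = (addr % 4) * 8;  /* how far into that chunk we are */

    uint32_t lo = mem[index];         /* first aligned read             */
    if (shift == 0)
        return lo;                    /* already aligned: one read      */

    uint32_t hi = mem[index + 1];     /* second aligned read            */
    return (lo >> shift) | (hi << (32 - shift));  /* shift and combine  */
}
```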
I was learning about the pros and cons of implementing stacks with linked lists when I found a con that says: "the memory cost for each node can be significantly more than the data being stored, e.g. for a 32-bit value such as an integer, the memory overhead can be 7 times larger than the integer itself."
What does this mean?
When you use a general memory allocator, you don't know how big a block it allocates on each request. Many of them round the requested size up to some even quantity so that each block is aligned to an address divisible by, say, 8 or 16, or even 32. In that case you always use at least 32 bytes, even if you request only 1 byte. You then get 32 bytes of heap for a 4-byte piece of data, which is 8 times what you really need; hence the overhead of 7 times.
EDIT
Often the allocator adds a 'header' before the block it returns, and the header size equals the allocation size step. For a header 16 bytes long, your requested allocation size gets rounded up to the nearest multiple of 16 and then incremented by 16 for the header. So for requested sizes 1 through 16 you use 32 bytes, for 17 through 32 you use 48, for 33 through 48 it's 64, and so on.
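A small C sketch of that rounding rule, using the 16-byte header and 16-byte step from the example above (these numbers are illustrative, not the behaviour of any particular allocator):

```c
#include <stddef.h>

#define HEADER_SIZE 16u   /* assumed per-block header     */
#define GRANULARITY 16u   /* assumed allocation size step */

/* Total heap bytes consumed by a request of 'n' bytes under the scheme
 * described above: round n up to a multiple of 16, then add the header. */
static size_t heap_cost(size_t n) {
    size_t rounded = (n + GRANULARITY - 1) / GRANULARITY * GRANULARITY;
    return rounded + HEADER_SIZE;
}

/* heap_cost(1)  == 32, heap_cost(16) == 32,
 * heap_cost(17) == 48, heap_cost(33) == 64, matching the ranges above. */
```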