Why does the CPU access aligned memory?

Good people of the Internet!
For the past couple of days I've been reading about how the CPU accesses memory and how it can be slower than desired if the accessed object is spread over different chunks that the CPU reads.
In very generalized and abstract terms: say I have an address space from 0x0 to 0xF with a cell size of one byte, and the CPU reads memory in chunks of 4 bytes (that is, it has quad-byte memory access granularity). If I need to read a 4-byte object residing in cells 0x0 - 0x3, the CPU does it in one operation, while if the same object occupies cells 0x1 - 0x4, the CPU needs to perform two read operations (read memory at 0x0 - 0x3 first, then at 0x4 - 0x7), shift bytes and combine the two parts (or fault, if it cannot do unaligned access). This happens, once again, because the CPU can only read memory in 4-byte chunks (in our abstract case). Let's also assume that the CPU makes these reads within one cache line, so there is no need to change the contents of the cache between reads.
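As a minimal sketch of that scenario (my code, not from any of the linked resources; ram, alignedRead4 and read4 are made-up names, and a little-endian host is assumed), the two-read-shift-combine sequence looks like this:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Emulated 16-byte address space (0x0 - 0xF) from the question.
uint8_t ram[16] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77};

// The only operation the bus can do: fetch the 4-byte chunk whose
// start address is a multiple of 4.
uint32_t alignedRead4(unsigned addr) {
    uint32_t chunk;
    std::memcpy(&chunk, &ram[addr & ~3u], 4); // little-endian host assumed
    return chunk;
}

// Reading 4 bytes from an arbitrary address: one access if aligned,
// otherwise two accesses plus shifting and combining.
uint32_t read4(unsigned addr) {
    unsigned offset = addr & 3u;
    if (offset == 0)
        return alignedRead4(addr);            // aligned: one operation
    uint32_t lo = alignedRead4(addr);         // e.g. chunk 0x0 - 0x3
    uint32_t hi = alignedRead4(addr + 4);     // e.g. chunk 0x4 - 0x7
    return (lo >> (8 * offset)) | (hi << (8 * (4 - offset)));
}

int main() {
    std::printf("%08x\n", (unsigned)read4(0)); // 33221100 - one aligned access
    std::printf("%08x\n", (unsigned)read4(1)); // 44332211 - two accesses combined
}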
So, in this case, the beginning of each chunk the CPU can read resides in a memory cell whose address is a multiple of 4 (right?). OK, I don't have any questions about why the CPU reads in chunks, but why exactly is the beginning of each chunk aligned that way? Referring to the example in the previous paragraph: why exactly can't the CPU read a chunk of 4 bytes starting from 0x1?
As far as I understand, the CPU is perfectly aware that 0x1 exists. So is all the fuss happening because the memory controller cannot access a chunk of memory starting from 0x1? Or is it because a couple of LSBs in a processor word are reserved on some architectures? Or is the fact that they are reserved a consequence of aligned access, and not its cause? (It seems like that's a second question already, but I'll leave it in, as at the time of writing I have a feeling the two are related.)
There are a bunch of answers here touching on this topic (like this and this) and articles online (like this and this), but all these resources give good explanations of the phenomenon itself and its consequences, yet no explanation of why exactly the CPU cannot read a chunk of memory starting "in between" byte boundaries (or maybe I just couldn't see it).

Consider a simple CPU. It has 32 RAM chips. Each chip supplies one bit of memory. The CPU produces one address, passes it to the 32 RAM chips, and 32 bits come back. The first RAM chip holds bit 0 of bytes 0, 4, 8, 12, 16 etc. The second RAM chip holds bit 1 of bytes 0, 4, 8, 12, 16 etc. The ninth RAM chip holds bit 0 of bytes 1, 5, 9, 13, 17 etc.
So you see that the 32 RAM chips between them can produce bits 0 to 7 of bytes 0 to 3, or bytes 4 to 7, or bytes 8 to 11 etc. They are incapable of producing bytes 1 to 4.
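To make that wiring concrete, here is a small sketch of the chip selection just described (chipFor is a made-up helper, not real hardware):
#include <cstdio>

// Chip c holds bit (c % 8) of byte (c / 8) within every 4-byte group,
// i.e. the byte's position within the group selects a bank of 8 chips.
int chipFor(unsigned byteAddr, unsigned bit) {
    return (byteAddr % 4) * 8 + bit;
}

int main() {
    std::printf("byte 0, bit 0 -> chip %d\n", chipFor(0, 0)); // chip 0 (the first)
    std::printf("byte 1, bit 0 -> chip %d\n", chipFor(1, 0)); // chip 8 (the ninth)
    std::printf("byte 4, bit 0 -> chip %d\n", chipFor(4, 0)); // chip 0 again
    // One access drives a single word address into all 32 chips at once,
    // so together they can only hand back bytes 0-3, or 4-7, or 8-11, ...
    // Bytes 1-4 would need chips 8-31 on word address 0 and chips 0-7 on
    // word address 1 simultaneously, which the shared address cannot express.
}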

Related

Per-node memory overhead

I was learning about the pros and cons of implementing stacks with linked lists when I found a con that says: "the memory cost for each node can be significantly more than the data being stored. E.g. for a 32-bit value such as an integer, the memory overhead can be 7 times larger than the integer itself."
What does this mean?
When you use a general-purpose memory allocator, you don't know how big a block it allocates for each request. Many allocators round the requested size up to some even quantity, so that each block is aligned to an address divisible by, say, 8 or 16, or even 32. In that case you always use at least 32 bytes, even if you request only 1 byte. You then spend 32 bytes of heap on a 4-byte piece of data, which is 8 times what you really need; hence the overhead equals 7.
EDIT
Often the allocator adds a 'header' before the block it returns, and the header size is also the allocation size step. For a header 16 bytes long, your requested allocation size gets rounded up to the nearest multiple of 16 and incremented by 16 for the header. So for requested sizes 1 through 16 you use 32 bytes, for 17-32 you use 48, for 33-48 it's 64, and so on.
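Here is that arithmetic as a quick sketch, assuming the hypothetical 16-byte header and 16-byte step above (kHeader, kStep and bytesUsed are made-up names):
#include <cstdio>

// Hypothetical allocator parameters from the edit above:
// a 16-byte header plus rounding up to a 16-byte step.
constexpr unsigned kHeader = 16, kStep = 16;

unsigned bytesUsed(unsigned requested) {
    unsigned rounded = (requested + kStep - 1) / kStep * kStep;
    return kHeader + rounded;
}

int main() {
    std::printf("%u\n", bytesUsed(1));  // 32 - requested sizes 1..16
    std::printf("%u\n", bytesUsed(17)); // 48 - requested sizes 17..32
    std::printf("%u\n", bytesUsed(33)); // 64 - requested sizes 33..48
    // A 4-byte int thus occupies 32 bytes: 28 bytes (7x) of pure overhead.
}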

Why does a 20-bit address space on a 16-bit machine give access to 1 megabyte and not 2 megabytes?

OK, this question sounds simple but I am taken by surprise. In the ancient days when 1 Megabyte was a huge amount of memory, Intel was trying to figure out how to use 16 bits to access 1 Megabyte of memory. They came up with the idea of using segment and offset address values to generate a 20 bit address.
Now, 20 bits gives 2^20 = 1,048,576 locations that can be addressed. Now assuming that we access 1 byte per address location we get 1,048,576/(1024*1024) = 2^20/2^20 Megabytes = 1 Megabyte. Ok understood.
The confusion comes here: we have a 16-bit data bus in the ancient 8086 and can access 2 bytes at a time rather than 1. Doesn't that equate a 20-bit address space to being able to access a total of 2 megabytes of data? Why do we assume that each address holds only 1 byte when the data bus is 2 bytes wide? I am confused here.
It is very important to consider the bus when trying to understand this. This is probably more of an electrical question than a software one, but here is the answer:
For the 8086, when reading from ROM, the least significant address line (A0) is not used, reducing the number of address lines to 19 right then and there.
In the case where the CPU needs to read 16 bits from an odd address, say, bytes at 0x3 and 0x4, it will actually do two 16-bit reads: One from 0x2 and one from 0x4, and discard bytes 0x2 and 0x5.
For 8-bit ROM reads, the read on the bus is still 16-bits but the unneeded byte is discarded.
But for RAM there is sometimes a need to write just a single byte; this gets a little more complex. There is an extra output signal on the processor called BHE# (bus high enable). The combination of A0 and BHE# determines whether the access is 8 or 16 bits wide, and whether it is at an odd or even address.
Understanding these two signals is key to answering your question. Stating it as simply as possible:
8-bit even access: A0 OFF, BHE# OFF
8-bit odd access: A0 ON, BHE# ON
16-bit access (must be even): A0 OFF, BHE# ON
And we cannot have a bus cycle with A0 ON and BHE# OFF because an odd access to the even byte of the bus is meaningless.
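For the hardware-shy, the same truth table can be sketched in code (my encoding, not Intel's: "asserted" stands for the active level, which for the active-low BHE# pin means driven low):
#include <cstdio>

// "Asserted" below means the signal is active; on the real chip BHE#
// is active-low, so asserted = pin driven low.
struct BusCycle { bool a0; bool bheAsserted; };

const char* describe(BusCycle c) {
    if (!c.a0 && !c.bheAsserted) return "8-bit access, even address (low byte lane)";
    if (c.a0 && c.bheAsserted)   return "8-bit access, odd address (high byte lane)";
    if (!c.a0 && c.bheAsserted)  return "16-bit access (must be even, both lanes)";
    return "invalid: odd address on the even byte lane";
}

int main() {
    std::printf("%s\n", describe({false, false}));
    std::printf("%s\n", describe({true, true}));
    std::printf("%s\n", describe({false, true}));
    std::printf("%s\n", describe({true, false}));
}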
Relating this back to your original understanding: you are completely correct in the case of memory devices. A 1-megabyte 16-bit memory chip will indeed have only 19 address lines; to that chip, 16 bits is a "byte", and in effect such chips do not physically have an A0 address input.
... almost. 16-bit writable memory devices have two extra signals (BHE# and BLE#) which are connected to the CPU's BHE# and A0 respectively. This is so they know to ignore part of the bus when an 8-bit access is under way, making them hybrid 8/16-bit devices. ROM chips do not have these signals.
For the hardware unenlightened, this is a fairly complex area we're touching on here, and it does get very complex indeed in terms of performance considerations and in large systems with mixed 8 and 16 bit hardware.
It's all explained in fantastic detail in the 8086 datasheet.
It's because a byte is the 'atom' of memory addressing and the code must be able to access all the individual bytes in the address space. It was really a matter of software and compatibility with existing 8-bit software back then.
This too may interest you: How a single byte of memory is accessed by CPU in a 32-bit memory and 32-bit processor

Double-byte memory access granularity

I am attempting to learn about memory alignment, without much success admittedly. I am using this article from IBM.
Can someone please explain to me what this excerpt means from the double byte memory access granularity section:
However, notice what happens when reading from address 1. Because the address doesn't fall evenly on the processor's memory access boundary, the processor has extra work to do. Such an address is known as an unaligned address. Because address 1 is unaligned, a processor with two-byte granularity must perform an extra memory access, slowing down the operation.
Why is another memory access in order? What is meant by a memory access boundary, and by an address falling evenly on it?
I have VERY limited knowledge of the CPU, as I have only dealt with higher-level programming (Objective-C and C++). Any help is greatly appreciated!
Thanks!
The example is describing what happens when you try to read a block of 4 consecutive bytes on a CPU with double-byte access granularity. On this type of CPU, memory is accessed as pairs of bytes, always starting at an even-numbered byte.
If you try to read the block starting with byte 0, it has to perform 2 reads: bytes 0-1 and bytes 2-3.
If you try to read the block starting with byte 1, it has to perform 3 reads: bytes 0-1 (to get byte 1), bytes 2-3, and bytes 4-5 (to get byte 4).
Memory access granularity is the number of bytes it accesses at a time, and a memory access boundary is where each of these groups of bytes begins. The groups of bytes are always addressed at even multiples of the granularity -- if it's double-byte granularity they start on even addresses, if it's quad-byte granularity they're at multiples of 4.
As an analogy, consider an apartment building with 4 units on each floor. Units 0-3 are on floor 0, units 4-7 are on floor 1, etc. If you want to slip a flyer under the doors of units 0-3, you only have to go to one floor. But if you want to slip a flyer under units 1-4, you have to visit 2 floors: floor 0 for units 1-3, and floor 1 for unit 4.
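The counting rule behind both examples fits in a few lines (accessesNeeded is a made-up helper, not from the IBM article):
#include <cstdio>

// With access granularity g, reading n bytes starting at address a
// touches every g-byte-aligned chunk between the first and last byte.
unsigned accessesNeeded(unsigned a, unsigned n, unsigned g) {
    unsigned first = a / g;            // chunk holding the first byte
    unsigned last  = (a + n - 1) / g;  // chunk holding the last byte
    return last - first + 1;
}

int main() {
    std::printf("%u\n", accessesNeeded(0, 4, 2)); // 2: bytes 0-1, 2-3
    std::printf("%u\n", accessesNeeded(1, 4, 2)); // 3: bytes 0-1, 2-3, 4-5
    std::printf("%u\n", accessesNeeded(1, 4, 4)); // 2: the quad-byte case earlier
}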

Reading a bit from memory

I'm looking into reading single bits from memory (RAM, hard disk). My understanding was that one cannot read less than a byte.
However, I read someone saying it can be done with assembly.
I want the bandwidth usage to be as low as possible, and the data to be retrieved is not sequential, so I cannot simply read a byte and split it into 8 bits.
I don't think the CPU will read less than the size of a cache line from RAM (64 bytes on recent Intel chips). From disk, the minimum is typically 4 kiB.
Reading a single bit at a time is neither possible nor necessary, since the data bus is much wider than that.
You cannot read less than a byte from any PC or hard disk that I know of. Even if you could, it would be extremely inefficient.
Some machines do memory-mapped port I/O that can read/write less than a byte to the port, but it still shows up as at least a byte by the time you get it.
Use the bitwise operators to pick off specific bits as in:
char someByte = 0x3D; // 0b00111101 in binary
bool flag = someByte & 1; // Test bit 0 (the LSB): 1 here
flag = someByte & 2; // Test bit 1: 0 here
// And so on; the number after the & operator is a power of 2 when you want to isolate one bit.
// You can also pick off several bits at once:
int value = someByte & 3; // Keep the lower 2 bits (value == 1 here)
It used to be, back in, say, 386/486 days, that a memory chip was one bit wide (e.g. 1 meg by 1 bit), but you would have 8 or some multiple of them, one for each bit lane on the bus, and you could only read in widths of the bus. Today the memories are a byte wide, and you can still only read in units of 32 or 64 bits or multiples of those. Even when you read a byte, most designs fetch the whole width: it adds unnecessary complication/cost to isolate the bus all the way to the memory. A byte read looks to most of the system like a 32- or 64-bit read; only as it approaches the edge of the processor (sometimes the physical pins, sometimes the edge of the core inside the chip) is the individual byte lane separated out and the other bits discarded. Having a cache changes the smallest divisible read size from the memory: you will see a burst or block of reads.
It is possible to design a memory system that is 8 bits wide and reads 8 bits at a time, but why would you, unless it is an 8-bit processor, which probably couldn't take advantage of an 8-bit by 2-gig memory anyway? DRAM is pretty slow, something like 133 MHz (even your 1600 MHz memory only reaches that in short bursts; you are still reading from slow parts, and memory has not gotten faster in over 10 years).
Hard disks are similar but different. I think sectors are the smallest divisible unit; you have to read or write in those units. So when reading, you have a memory cycle on the processor, no different from going to memory, and, depending on the controller, either before you do the read or as a result of it, a sector is read off the disk into a buffer, not unlike a cache line read. Then your memory cycle to the buffer in the disk controller either causes a bus-width read that the processor divides up, or, if the bus adds the complexity to isolate byte lanes, a single byte is isolated; but nobody isolates bit lanes. (I say the word "nobody" and someone will come back with an exception...)
Most of this is well documented and not hard to find. For ARM platforms, look for the AMBA and/or AXI specifications, freely downloadable; the bridge, PCIe controller and disk controller documents are all available for PCs and other platforms. It still boils down to an address bus and a data bus (or one "goesouta" and one "goesinta" data bus) and some control signals that indicate the access type. Some buses have byte lane enables, which are generally for a write, not a read. If I want to write only a byte to a DRAM in a modern 64-bit system, I DO have to tell everyone almost all the way out to the DRAM what I want to write. To write a byte on a memory module which must be accessed 64 bits at a time, at a minimum a 64-bit read happens into a temporary place, either the cache or the memory controller; then the byte to be written modifies the specific byte within the 64-bit word; then that 64-bit quantity, eventually, is written back to the memory module itself. You can do this using a combination of the address bits and a few control signals, or you can just provide 8 byte lane enables and ignore the lower address bits. Hard disk, same deal: you have to read a sector, modify one byte, then eventually write the whole sector back at once. With flash and EEPROM you can only write zeros (from the programmer's perspective); you erase to ones (from the programmer's perspective; it is actually a zero in the logic, there is an inversion), and a write has to be a sector at a time. Sectors can be 64 bytes, 128 bytes, or 256 bytes, typically.
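Here is that 64-bit read-modify-write as a sketch (made-up names; a little-endian word is assumed):
#include <cstdint>
#include <cstdio>

// One 64-bit DRAM word, little-endian byte order assumed.
uint64_t word = 0x7766554433221100ull;

void writeByte(unsigned byteIndex, uint8_t value) {
    uint64_t w = word;                               // 1. read the whole word
    uint64_t mask = 0xFFull << (8 * byteIndex);
    w = (w & ~mask) | (uint64_t(value) << (8 * byteIndex)); // 2. splice the byte in
    word = w;                                        // 3. write the word back
}

int main() {
    writeByte(2, 0xAB); // change only byte 2
    std::printf("%016llx\n", (unsigned long long)word); // 7766554433ab1100
}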

Difference between word addressable and byte addressable

Can someone explain the difference between word addressable and byte addressable? How is it related to memory size, etc.?
A byte is a memory unit for storage
A memory chip is full of such bytes.
Memory units are addressable. That is the only way we can use memory.
In reality, memory is only byte addressable. It means:
A binary address always points to a single byte only.
A word is just a group of bytes – 2, 4, 8 depending upon the data bus size of the CPU.
To understand the memory operation fully, you must be familiar with the various registers of the CPU and the memory ports of the RAM. I assume you know their meaning:
MAR(memory address register)
MDR(memory data register)
PC(program counter register)
MBR(memory buffer register)
RAM has two kinds of memory ports:
32 bits for data/addresses
8 bits for opcodes.
Suppose the CPU wants to read a word (say 4 bytes) from address xyz onwards. The CPU puts the address in the MAR and sends a memory read signal to the memory controller chip. On receiving the address and the read signal, the memory controller connects the data bus to the 32-bit port, and the 4 bytes starting from address xyz flow out of the port into the MDR.
If the CPU wants to fetch the next instruction, it puts the address from the PC register on the bus and sends a fetch signal to the memory controller. On receiving the address and the fetch signal, the memory controller connects the data bus to the 8-bit port, and a single-byte opcode located at the received address flows out of the RAM into the CPU's MDR.
So that is what it means when we say a certain memory is word addressable or byte addressable. Now, what will happen when you put, say, decimal 2 in binary on the MAR with the intention to read word no. 2, not byte no. 2?
Word no. 2 means bytes 8, 9, 10 and 11 on a 32-bit machine. In reality, physical memory is byte addressable only, so there is a trick to handle word addressing.
When the MAR is placed on the address bus, its 32 bits do not map onto the 32 address lines (0-31 respectively). Instead, MAR bit 0 is wired to address bus line 2, MAR bit 1 is wired to address bus line 3, and so on. The upper 2 bits of the MAR are discarded, since they are only needed for word addresses above 2^32, none of which are legal for our 32-bit machine.
Using this mapping, when MAR is 1, address 4 is put on the bus, when MAR is 2, address 8 is put on the bus and so forth.
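In code, that wiring is nothing more than a left shift by 2 (byteAddressOf is a made-up name, assuming the 32-bit machine above):
#include <cstdint>
#include <cstdio>

// MAR bit 0 drives address line 2, bit 1 drives line 3, and so on,
// which is just a left shift by 2: word n starts at byte 4 * n.
uint32_t byteAddressOf(uint32_t wordNumber) {
    return wordNumber << 2;
}

int main() {
    std::printf("MAR 1 -> address %u\n", (unsigned)byteAddressOf(1)); // 4
    std::printf("MAR 2 -> address %u\n", (unsigned)byteAddressOf(2)); // 8 (bytes 8..11)
}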
It is a bit difficult to understand at first. I learnt it from Andrew Tanenbaum's Structured Computer Organization.
This image should make it easy to understand:
http://i.stack.imgur.com/rpB7N.png
Simply put:
• In the byte addressing scheme, the first word starts at address 0, and the second word starts at address 4.
• In the word addressing scheme, all bytes of the first word are located at address 0, and all bytes of the second word are located at address 1.
The advantages of byte addressability are clear when we consider applications that process data one byte at a time. Accessing a single byte in a byte-addressable system requires only issuing a single address. In a 16-bit word-addressable system, it is necessary first to compute the address of the word containing the byte, fetch that word, and then extract the byte from the two-byte word. Although the processes for byte extraction are well understood, they are less efficient than directly accessing the byte. For this reason, many modern machines are byte addressable.
Addressability is the size of a unit of memory that has its own address. It's also the smallest chunk of memory that you can modify without affecting its neighbours.
For example: take a machine where bytes are the normal 8 bits and the word size is 4 bytes. If it's a word-addressable machine, there's no such thing as the address of the second byte of an int. Dealing with strings (e.g. an array like char str[]) becomes inconvenient, because you still store characters packed together. Modifying just str[1] means loading the word that contains it, doing some shift/and/or operations to apply the change, then doing a word store.
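A sketch of that shift/and/or dance (made-up names; little-endian 4-byte words assumed), which is the same read-modify-write idea as in the DRAM answer earlier:
#include <cstdint>
#include <cstdio>

// The word holding "abcd" on a little-endian machine with 4-byte words.
uint32_t memWord = 0x64636261;

// What str[1] = c amounts to when there is no byte store instruction.
void storeByteInWord(unsigned byteInWord, uint8_t c) {
    uint32_t w = memWord;                        // word load
    uint32_t mask = 0xFFu << (8 * byteInWord);   // select the target byte
    w = (w & ~mask) | (uint32_t(c) << (8 * byteInWord)); // and / or / shift
    memWord = w;                                 // word store
}

int main() {
    storeByteInWord(1, 'X'); // str[1] = 'X' turns "abcd" into "aXcd"
    std::printf("%08x\n", (unsigned)memWord); // 64635861
}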
Note that this is different from a machine that doesn't allow unaligned word load/stores (where the low 2 bits of a word address have to be 0). Such machines usually have a byte load/store instruction. We're talking about machines without even that.
CPU addresses might actually still include the low bits but require them to always be zero (or ignore them). However, after checking that they're zero, they could be discarded, so the rest of the memory system sees only the word address, where two adjacent words have addresses that differ by 1 (not 4). However, on a 16-bit CPU whose registers can only hold 64k different addresses, you wouldn't likely do this: instead of discarding the low bit, each separate CPU address would refer to a different 2 bytes of memory. Word-addressable memory with 2-byte words would then let you address 128 kiB of memory, instead of just 64 kiB with byte-addressable memory.
Fun fact: ARM used to use the low 2 bits of an address as a shuffle control for unaligned word loads. (But it always had byte load/store instructions.)
See also:
https://en.wikipedia.org/wiki/Word-addressable
https://en.wikipedia.org/wiki/Byte_addressing
Note that bit-addressable memory could exist, but doesn't. 8-bit bytes are nearly universally standard now. (Ancient computers sometimes had larger bytes, see the history section of wikipedia's Byte article.)
