how long is a memory address typically in bits - memory

I am confused by the many terms my instructor uses, such as word, byte addressing, and memory location.
I was under the impression that a 32-bit processor can address up to 2^32 bits, which is 4.29 x 10^9 bits (NOT bytes).
The way I think now is:
The memory is like an array of buckets, each 1 byte long.
With byte addressing (which I guess is the most common scheme), each char is 1 byte and is retrieved from, say, the first bucket.
For an int, the next 4 bytes are put together in little-endian order to form the integer value.
So I see each memory location as 8 bits, or 1 byte, which can give up to 2^8 locations; this is far less than what the CPU can address.
There is some very basic misunderstanding here on my part, which I hope some experts can explain in simple terms that a prospective CS-major student can grasp once and for all.
I have read various pages, including this one on word, where the unit of address resolution is given as 8 b for ARM, which adds to my confusion.

The processor uses 32 bits to store an address. With 32 bits, you can store 2^32 distinct numbers, ranging from 0 to 2^32 - 1. "Byte addressing" means that each byte in memory is individually addressable, i.e. there is an address x which points to that specific byte. Since there are 2^32 different numbers you can put into a 32-bit address, we can address up to 2^32 bytes, or 4 GB.
It sounds like the key misconception is the meaning of "byte addressing." That only means that each individual byte has its own address. Addresses themselves are still composed of multiple bytes (4, in this case, since four 8-bit bytes are taken together and interpreted as a single 32-bit number).
I was under the impression that for a 32-bit processor, it can address up to 2^32 bits, which is 4.29 x 10^9 bits (NOT bytes).
This is typically not the case -- bit-level addressing is quite rare. Byte addressing is far more common. You could design a CPU that worked this way, though. In that case as you said, you would be able to address up to 2^32 bits = 2^29 bytes (512 MiB).
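To see what byte addressing looks like from a program, here is a minimal C sketch (my own illustration, not part of the original answer): it prints the address of each byte inside a 32-bit integer, and the addresses differ by exactly 1 because every byte is individually addressable even though the value spans four of them.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x11223344;                 /* one 32-bit value             */
    unsigned char *p = (unsigned char *)&value;  /* view it as individual bytes  */

    for (int i = 0; i < 4; i++) {
        /* Each byte has its own address; consecutive bytes differ by 1. */
        printf("byte %d at %p holds 0x%02X\n", i, (void *)(p + i), p[i]);
    }
    return 0;
}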

For one bit, you would have 0 or 1; for two bits, 00, 01, 10, 11; for 8 bits, 2^8 = 256 address values. Address and data are separate terms: the address is the location, and the data is the content at that location. Data width (content) is how many bits you can store at one memory address, and data depth (address) is how many addresses you have. Think of an apartment building: each apartment has two bedrooms (the width), and the building has apartments #1 through #1400 (the depth). One address value in a CPU register refers to an individual byte in memory, just as one apartment number refers to one apartment.
SIMM memory modules had a 32-bit data width and DIMM modules have a 64-bit data width, meaning that one memory address in a DIMM stores 64 bits of data. With two address wires (two-bit addressing) you can select 4 addresses, and each of those addresses could hold 64 bits if it is a DIMM module. 32-bit addressing means 32 wires, so 2^32 address options. Even though 64-bit processors have 64-bit registers and a 64-bit internal bus, the address bus tops out lower; per http://www.tech-faq.com/address-bus.html, the maximum is 44 bits, i.e. 2^44 addresses, achieved by Intel's Itanium 2 server CPU.
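As a rough numeric sketch of the depth-times-width idea above (my own example; the parameter values are just illustrative), the total capacity is the number of addressable locations multiplied by the data width stored at each one:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    unsigned address_bits    = 32;                /* depth: how many addresses exist        */
    unsigned data_width_bits = 64;                /* width: bits stored per address (DIMM)  */

    uint64_t locations   = 1ULL << address_bits;  /* 2^32 addressable locations             */
    uint64_t total_bytes = locations * (data_width_bits / 8);

    printf("%llu locations x %u bits each = %llu bytes\n",
           (unsigned long long)locations, data_width_bits,
           (unsigned long long)total_bytes);
    return 0;
}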

Related

how to tell how many memory addresses a processor can generate

Let's say a computer has a word size of 26 bits; I'm curious to know how many memory addresses the processor can generate.
I'm thinking that the maximum number it can hold would be 2^26 - 1 and can have 2^26 unique memory addresses.
I'm also curious to know: if each cell in the memory has a size of 12 bits, how many bytes of memory can this processor address?
My understanding is that in most cases a processor can hold up to 32 bits, which is 4 bytes, and each byte is 8 bits. However, in this case each byte would be 12 bits and the processor would be able to address 2^26/12 bytes of memory. Is that safe to say?
I'm thinking that the maximum number it can hold would be 2^26 - 1 and can have 2^26 unique memory addresses.
I agree.  We usually refer to this as the size of the address space.
As for the next question:
These days the term byte is generally agreed to mean 8 bits, so 12 bits would be 1.5 bytes. It is a matter of terminology, though, which has varied in the long past.
So, I would say 2^26 12-bit words are capable of holding/storing 2^26 * 1.5 bytes, though the bytes are not individually addressable and would have to be packed and unpacked to access them separately.
The DEC PDP-8 computer was a 12 bit computer and word addressable, so there were multiple schemes for storing characters: two 6 bit characters in a 12 bit word, and also 1 & 1/2 8-bit characters in a 12-bit word, so three 8-bit characters in two 12-bit words.
Similar issues occur when storing packed booleans in a memory, where each boolean takes only a single bit, yet the processor can access a minimum of 8 bits at a time, so must extract a single bit from a larger datum.
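As a concrete illustration of that last point (my own sketch, not from the original answer), extracting one packed boolean means loading the whole addressable unit and then shifting and masking out the single bit:

#include <stdio.h>
#include <stdint.h>

/* Read flag number n from an array of packed booleans, 8 flags per byte. */
static int get_bit(const uint8_t *flags, unsigned n) {
    uint8_t unit = flags[n / 8];     /* smallest unit the processor can load */
    return (unit >> (n % 8)) & 1;    /* shift and mask to isolate one bit    */
}

int main(void) {
    uint8_t flags[2] = { 0x05, 0x80 };   /* flags 0, 2 and 15 are set */
    printf("%d %d %d %d\n",
           get_bit(flags, 0), get_bit(flags, 1),
           get_bit(flags, 2), get_bit(flags, 15));
    return 0;
}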

How byte addressing works?

I am new to computer architecture. So correct me if I am wrong.
If a memory module consists of 8 memory chips and each chip stores 4 bits per address, then by applying an address to the address pins of the module I can get (8 x 4 =) 32 bits from that address in the module. But byte addressing says that every byte has an address, yet here I am accessing 32 bits with one address. So how is that possible?
I think if each chip stored 1 bit per address, then by applying an address to the module I could access 8 bits, or one byte.
You say each chip stores 4 bits per address and you have 8 chips on the same address bus. It is the address bus that is the limiting factor: the address bus must have 32 lines for every byte in a 32-bit architecture to be addressable. If you have 8 chips each producing 4 bits in response to the same address, then you get 32 bits per address. The advantage of such an arrangement is that the number of address bus lines can be reduced by 2 without decreasing the addressable range (only the resolution).
You are correct in thinking that each chip would need to produce 1 bit per address to allow byte addressing.
That's the theory; in practice I would suspect a solution could be architected where the 4 bits are time-division multiplexed, making each individually accessible.
I have heard for a long time not to address less than 32 bits at a time, as that may be the smallest unit addressable. Certainly that made sense when 2 GB-4 GB was the physical limit of 32-bit byte addressing.
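To make the fewer-address-lines idea concrete, here is a small C model (my own sketch, not part of the original answer) of byte reads on top of a memory that is physically 32 bits wide: the upper address bits drive the word address, while the two low bits merely select a byte lane inside the word.

#include <stdio.h>
#include <stdint.h>

static uint32_t ram[4] = { 0x44332211u, 0x88776655u, 0, 0 };  /* 32 bits per word address */

/* Model a byte read from 32-bit-wide memory (little-endian lane order). */
static uint8_t read_byte(uint32_t byte_addr) {
    uint32_t word = ram[byte_addr >> 2];   /* only byte_addr >> 2 reaches the address bus */
    unsigned lane = byte_addr & 3u;        /* the low 2 bits select the byte lane         */
    return (uint8_t)(word >> (8 * lane));
}

int main(void) {
    for (uint32_t a = 0; a < 8; a++)
        printf("byte address %u holds 0x%02X\n", (unsigned)a, read_byte(a));
    return 0;
}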

How many bytes can be stored in a memory unit that uses 8 address bits and a 16 bit architecture?

I need help understanding memory. How many bytes can be stored in a memory unit that uses 8 address bits and a 16 bit architecture?
I think it's 2^8 = 256 bytes. Is this correct?
It depends.
Firstly "16 bit architecture" is too vague to be a helpful characterization for this problem. A characterization like that generally refers to the width of registers and data paths (e.g. in the ALU), not how memory is addressed.
Secondly, the answer actually depends on whether the addresses are byte addresses or "word" addresses. AFAIK "almost all" new processor / instruction set architectures designed since the 1980's have used byte addresses. But prior to that it was common for addresses to address words of up to 60 bits (or possibly more).
But assuming byte addressing, then an 8 bit address allows you to address 2^8 bytes; i.e. 256 bytes.
On the other hand, if we assume word addressing with a 16 bit word, then 8 bit addresses will address 256 words ... which is 512 bytes.
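A quick numeric check of those two cases (my own sketch):

#include <stdio.h>

int main(void) {
    unsigned locations = 1u << 8;    /* 8 address bits -> 256 distinct addresses */
    printf("byte addressing: %u bytes\n", locations);      /* 256 bytes                       */
    printf("word addressing: %u bytes\n", locations * 2);  /* 256 x 2-byte words = 512 bytes  */
    return 0;
}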
The base answer is 2^(number of address bits). However, long ago, sixteen-bit systems came up with a means for accessing more than 2^16 bytes of memory through segments. While an application can only access 2^16 bytes at a time, changing the values in hardware registers allows the application to change which 2^16 bytes of a larger address space are being accessed (see the sketch after the steps below).
You typically do things like
Map buffer 1 to the address space.
Queue a read operation to the buffer.
Map another buffer 2 to the address space.
The read operation completes; map buffer 1 back so that the data can be accessed.
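Here is a minimal C sketch of that bank-switching pattern (my own illustration; names such as map_bank, window and WINDOW_SIZE are hypothetical): a small window of the address space is made to reach different banks of a larger memory by changing one register-like variable.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE 16                   /* tiny stand-in for a 64 KiB window */
#define NUM_BANKS   4

static uint8_t big_memory[NUM_BANKS][WINDOW_SIZE];  /* the larger address space     */
static unsigned bank_register = 0;                  /* models the hardware register */

/* The application always uses offsets 0..WINDOW_SIZE-1; which bank those
   offsets reach depends on the current bank register. */
static uint8_t *window(void)         { return big_memory[bank_register]; }
static void     map_bank(unsigned b) { bank_register = b; }

int main(void) {
    map_bank(1);                          /* map buffer 1 into the address space   */
    memset(window(), 0xAA, WINDOW_SIZE);  /* pretend a read operation fills it     */

    map_bank(2);                          /* map buffer 2 while the first is busy  */

    map_bank(1);                          /* map buffer 1 back...                  */
    printf("first byte of buffer 1: 0x%02X\n", window()[0]);  /* ...and access it  */
    return 0;
}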

Why does a 20-bit address space on a 16-bit machine give access to 1 Megabyte and not 2 Megabytes?

OK, this question sounds simple but I am taken by surprise. In the ancient days when 1 Megabyte was a huge amount of memory, Intel was trying to figure out how to use 16 bits to access 1 Megabyte of memory. They came up with the idea of using segment and offset address values to generate a 20 bit address.
Now, 20 bits gives 2^20 = 1,048,576 locations that can be addressed. Assuming that we access 1 byte per address location, we get 1,048,576 / (1024 * 1024) = 2^20 / 2^20 Megabytes = 1 Megabyte. OK, understood.
The confusion comes here: we have a 16-bit data bus in the ancient 8086 and can access 2 bytes at a time rather than 1, so doesn't that equate a 20-bit address to being able to access a total of 2 Megabytes of data? Why do we assume that each address only has 1 byte stored in it when the data bus is 2 bytes wide? I am confused here.
It is very important to consider the bus when trying to understand this. This is probably more of an electrical question than a software one, but here is the answer:
For the 8086, when reading from ROM, the least significant address line (A0) is not used, reducing the number of address lines to 19 right then and there.
In the case where the CPU needs to read 16 bits from an odd address, say, bytes at 0x3 and 0x4, it will actually do two 16-bit reads: One from 0x2 and one from 0x4, and discard bytes 0x2 and 0x5.
For 8-bit ROM reads, the read on the bus is still 16-bits but the unneeded byte is discarded.
But for RAM there is sometimes a need to write just a single byte, this gets a little more complex. There is an extra output signal on the processor called BHE# (Bus high enable). The combination of A0 and BHE# are used to determine if the write is an 8 or 16-bits wide, and whether or not it is at an odd or even address.
Understanding these two signals is key to answering your question. Stating it simply as possible:
8-bit even access: A0 OFF, BHE# OFF
8-bit odd access: A0 ON, BHE# ON
16-bit access (must be even): A0 OFF, BHE# ON
And we cannot have a bus cycle with A0 ON and BHE# OFF because an odd access to the even byte of the bus is meaningless.
Relating this back to your original understanding: you are completely correct in the case of memory devices. A 1-megabyte 16-bit memory chip will indeed have only 19 address lines; to that chip, 16 bits is a byte, and in effect such chips do not physically have an A0 address input.
... almost. 16-bit writable memory devices have two extra signals (BHE# and BLE#) which are connected to the CPU's BHE# and A0 respectively. This is so they know to ignore part of the bus when an 8-bit access is under way, making them hybrid 8/16-bit devices. ROM chips do not have these signals.
For the hardware unenlightened, this is a fairly complex area we're touching on here, and it does get very complex indeed in terms of performance considerations and in large systems with mixed 8 and 16 bit hardware.
It's all explained in fantastic detail in the 8086 datasheet.
It's because a byte is the 'atom' in memory addressing, and the code must be able to access all the individual bytes in the address space. It was really a matter of software and compatibility with existing 8-bit software back then.
This too may interest you: How a single byte of memory is accessed by CPU in a 32-bit memory and 32-bit processor
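As a footnote to this question, the segment-and-offset scheme mentioned above is easy to show directly (a minimal sketch of the arithmetic, not taken from either answer): the 16-bit segment is shifted left by 4 bits and added to the 16-bit offset, producing a 20-bit physical address.

#include <stdio.h>
#include <stdint.h>

/* 8086 real-mode address formation: physical = segment * 16 + offset. */
static uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;   /* at most 0xFFFF0 + 0xFFFF */
}

int main(void) {
    printf("0x%05X\n", (unsigned)physical_address(0xF000, 0xFFF0));  /* 0xFFFF0, the 8086 reset vector */
    printf("0x%05X\n", (unsigned)physical_address(0x1234, 0x0010));  /* 0x12350 */
    return 0;
}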

Endian-ness: Bits in a Byte vs. Bytes in Memory

When we say a specific architecture is either little-endian or big-endian, we are referring to whether numerical significance is stored left-to-right or right-to-left in memory. My question is: does this ordering refer to how bits are ordered in a byte, or how bytes are ordered in memory?
For example, consider the number 6000 = 1770h = 0001011101110000b. If both the bits in a byte and the bytes in memory are little-endian, this would be stored as
00001110 11101000 = 0E E8,
if bits in a byte were big-endian, but bytes in memory were little-endian, this would be stored as (for what it's worth, this happens to be how Visual Studio seems to be telling me that memory is organized in x64 architecture)
01110000 00010111 = 70 17,
if bits were little-endian, but bytes were big-endian, this would be stored as
11101000 00001110 = E8 0E,
and finally, if bits were big-endian, but bytes were little-endian, this would be stored as
00010111 01110000 = 17 70
(Hopefully I did that right.)
So then, what do the terms "little-endian" and "big-endian" actually refer to? Do they refer to the ordering of bits in a byte, the ordering of bytes in memory, or both? Furthermore, if VS tells me that, for example, 7C is 'in' a particular byte, do they mean that the bits that make up that byte in computer memory are literally 0111 1100, or just that the value stored in that byte is 7Ch = 124, which may or may not actually be represented as 01111100 depending on whether the underlying architecture happens to be little-endian?
The ordering of bits in a byte is invisible. Since you can't address individual bits, there would be no difference between the two cases. However, you can address individual bytes, so there it does make a difference.
If we're expressing 6000 in byte-addressable memory, the high byte is 23 decimal (6000 divided by 256) and the low byte is 112 decimal (6000 mod 256). We could store this as 23,112 or 112,23. There are no other options. Only the ordering of bytes is an open choice, and this is what endianness refers to.
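One way to see this for yourself (a minimal C sketch, not from the original answer): store the 16-bit value and look at its two bytes through their individual addresses; the byte order shows up, while the bit order inside each byte stays invisible either way.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t value = 6000;                          /* 0x1770: high byte 23, low byte 112 */
    unsigned char *bytes = (unsigned char *)&value;

    /* A little-endian machine prints 112 23; a big-endian machine prints 23 112. */
    printf("%u %u\n", bytes[0], bytes[1]);
    return 0;
}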
In memory, little-endian versus big-endian is not so much a left-to-right or right-to-left issue as one of addressing. In little-endian, the least significant portion of the data is stored at the lower addresses; big-endian is the reverse.
The ordering occurs independently at 2 levels. As most machines address more than 1 bit at a time (recall that some graphics CPUs did have bit addresses), the address locates a group of bits, typically 8. So if the bits at address 10 are less significant than the bits at address 11, it is a little-endian machine. This is what byte endian-ness generally means, and the processor's characteristics define it.
The endian-ness of the group of bits, the bit endian-ness, is significant if there is some way to address them. Some processors provide operations that use a bit-level address within the byte (or word). Whether your programming language lets you use that directly or hides it is another question.
In C there are bit fields such as
union u {
    unsigned char uc;          /* whole-byte view of the storage */
    struct {
        int a : 1;             /* first-declared bit field       */
        int b : 7;             /* remaining 7 bits               */
    } s;                       /* named member, so u.s.b works   */
};
This code is non-portable because of bit endian-ness. u.uc = 7 may result in u.s.b also being 7 or something else. Typically the byte endian-ness and bit endian-ness are the same. But the compiler controls the above example.
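A small usage sketch of the union above (my own addition): the exact values printed depend on the compiler's bit-field allocation order and on whether a plain int bit field is treated as signed, which is exactly the non-portability being described.

#include <stdio.h>

union u {
    unsigned char uc;
    struct {
        int a : 1;
        int b : 7;
    } s;
};

int main(void) {
    union u x;
    x.uc = 7;
    /* On a typical little-endian ABI that allocates bit fields from the least
       significant bit, b ends up as 3; other ABIs may give 7 or other values. */
    printf("a=%d b=%d\n", x.s.a, x.s.b);
    return 0;
}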
Bit endian-ness is also significant in serial communication. As bits are sent/received one at a time, assembling them to/from memory needs an endian-ness definition.
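For the serial case, a short sketch of the two choices (my own illustration): the same byte emitted LSB-first, as asynchronous serial links typically do, or MSB-first.

#include <stdio.h>
#include <stdint.h>

/* Print the bits of one byte in the chosen transmission order. */
static void send_bits(uint8_t byte, int lsb_first) {
    for (int i = 0; i < 8; i++) {
        int bit = lsb_first ? (byte >> i) & 1 : (byte >> (7 - i)) & 1;
        printf("%d", bit);
    }
    printf("\n");
}

int main(void) {
    send_bits(0x17, 1);   /* LSB first: 11101000 */
    send_bits(0x17, 0);   /* MSB first: 00010111 */
    return 0;
}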
To conclude:
Big-endian and little-endian most often refer to byte-level addressing. The endian-ness of the bits is typically either the same, or only matters to the programmer in select situations.
BTW, your example of "If both bits in a byte and bytes in memory are little-endian, this would be stored as
00001110 11101000 = 0E E8"
is, I would suggest, not correct, as the left side and right side are using different endian-ness. Had you used the same endian-ness, you might conclude
00001110 11101000 = 07 71
For fun consider:
01000000 (big endian) has the value "sixty-four" (a big-endian word)
10110000 (little endian) has the value "thirteen". (Thirteen is a little-endian word.)
