Array not crossing 256 byte boundary

Is it possible to create an array that doesn't cross a 256 byte boundary? That is, the addresses of the individual array items differ only in the lower byte. This is a weaker requirement than keeping the array aligned to 256 bytes. The only solution I could think of was aligning to next_power_of_two(sizeof(array)), but I'm not sure about the gaps that would appear this way.
It is for a library for AVR microcontrollers, and this would save me a few precious instructions in an interrupt handler. The array that should have this property is 54 bytes long out of about 80 bytes of total static memory used by the library. I'm looking for a way that doesn't increase the memory requirements.
I'm using avr-as gnu assembler and avr-ld linker.
Example: If the array starts at address 0x00f0, then the high byte of the address will change from 0x00 to 0x01 while traversing the array.
I don't really care whether it starts at address 0x0100 or 0x0101 as long as it doesn't cross the boundary.

You only need 64 byte alignment to meet this requirement: 64 is the next power of two above 54, and since 64 divides 256, a 64-byte-aligned block of at most 64 bytes can never straddle a 256 byte boundary. So e.g. this should work:
uint8_t a[54] __attribute__ ((aligned(64)));
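As a quick sanity check, a sketch along these lines verifies the property (same_high_byte is a hypothetical helper, and the cast assumes data addresses fit in uintptr_t):

#include <stdint.h>

uint8_t a[54] __attribute__ ((aligned(64)));

/* All elements share the same high address byte, so an interrupt
   handler can step through the array by updating only the low
   pointer byte. */
int same_high_byte(void) {
    return ((uintptr_t)&a[0] >> 8) == ((uintptr_t)&a[sizeof a - 1] >> 8);
}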

I don't know anything about AVR microcontrollers, but, generally speaking, static variables are usually placed in the data section of the executable, and since your static memory requirements are low, all you need to ensure is that the data section is 256-byte aligned. (Which it may be by default; on x86, it usually is.) Check the linker options...

Related

JVM 64-bit different memory usages?

I've done some reading, but I'm not entirely sure about one thing: how much memory would this use in a 64-bit JVM? (Sorry if it's a stupid question, but I'm a bit confused and don't know much about this.)
MyObject[] myArray; - I know an array takes up 24 bytes, but how much will each element in this array take? Is every element an object reference, meaning 8 bytes per element? If not, how do I know how many bytes each element in this array needs?
Normally, that is when using heap sizes of less than 32 GB, the 64-bit JVM uses compressed oops, which store object pointers as 32-bit integers (shifted left by three bits when used, since all objects are aligned to 8 bytes; see the link for details), so each element would actually only use 4 bytes.
If you use more than 32 GB of heap or otherwise turn off compressed oops, however, then each element will indeed use 8 bytes.
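For illustration only, the decoding amounts to something like this C sketch (heap_base and decode_oop are hypothetical names; the real JVM logic is more involved):

#include <stdint.h>

static uint8_t *heap_base;  /* start of the Java heap (hypothetical) */

/* A 32-bit compressed reference is shifted left by 3 (objects are
   8-byte aligned) and added to the heap base, so 32 bits can span
   up to 32 GB of heap. */
void *decode_oop(uint32_t compressed) {
    return heap_base + ((uint64_t)compressed << 3);
}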
Also, I suspect that your statement about the array header being 24 bytes is wrong. When compressing oops, the class reference in the header is also compressed, and the identity hash code and array length fields are 32-bit anyway, so the header is more likely to use 12 bytes. Even with full-length oops, it should still only take 16 bytes. I can't find any hard source verifying either, however. In general, it should be said that HotSpot does not even use a fixed-size object header but one that varies in size depending on various circumstances of the object. This article describes some of those circumstances.
That is on the HotSpot JVM, at least. Since the JLS doesn't specify any in-memory sizes, it could, theoretically, be anything on any given JVM, though 8 bytes are, of course, the most likely implementation choice.
Here is good information on how to calculate the memory usage of a Java array.
For example:
let's consider a 10x10 int array. Firstly, the "outer" array has its 12-byte object header followed by space for the 10 elements. Those elements are object references to the 10 arrays making up the rows. That comes to 12+4*10=52 bytes, which must then be rounded up to the next multiple of 8, giving 56. Then, each of the 10 rows has its own 12-byte object header, 4*10=40 bytes for the actual row of ints, and again, 4 bytes of padding to bring the total for that row to a multiple of 8. So in total, that gives 11*56=616 bytes. That's a bit bigger than if you'd just counted on 10*10*4=400 bytes for the hundred "raw" ints themselves.

what does Double word alignment mean?

What is double word aligned data? I am working on a TI processor with the c6accel DSP engine. The FFT function requires the input data array of samples to be double word aligned. What exactly does double word aligned mean, and how do I generate it?
A word is the amount of data that each register in the CPU is able to hold. This is dependent on the processor being used: on 32-bit systems this will be 32 bits, on 64-bit systems 64 bits, etc. Your TI processor will probably be either 16-bit or 32-bit depending on the model (a guess based on this).
The size of a word will generally correspond to the size of a pointer, although technically this is not guaranteed to be true (it doesn't hold on PS3/XB360) and as a result should not be relied on as a rule (source). The correct way of determining the size of a word will depend on which operating system you're using. As quoted from the previous source:
A C header file may define WORD_BIT and/or __WORDSIZE.
The size of a double word is just the size of a word * 2. Data in memory (RAM) is generally fetched by programs 1 word at a time, assuming the fetch begins on a word boundary. If it doesn't, the data on either side of the word boundary has to be fetched in two separate accesses, which is inefficient, since twice the number of fetches need to be done and reading/writing to/from RAM is relatively slow. (Side note: this is largely mitigated by caches in modern processors, although that's another topic altogether.)
For TI C6x: double word = 64-bit
Make sure your array starts at an address that is a multiple of 8 bytes.
You can use the assembly directive .align before declaring your array, or you can use the linker to place your array section at an aligned address. If you can't control the address (e.g. you got the memory from malloc), make your array start a few bytes later, at the next 8-byte boundary, as sketched below.
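For the malloc case, a minimal sketch of the rounding-up trick (alloc_dword_aligned is a hypothetical helper; keep the raw pointer around for free()):

#include <stdint.h>
#include <stdlib.h>

/* Over-allocate by 7 bytes, then round the pointer up to the next
   multiple of 8 so the usable region is double word aligned. */
void *alloc_dword_aligned(size_t n_bytes, void **raw_out) {
    void *raw = malloc(n_bytes + 7);
    if (raw == NULL) return NULL;
    *raw_out = raw;  /* pass this, not the aligned pointer, to free() */
    return (void *)(((uintptr_t)raw + 7) & ~(uintptr_t)7);
}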

Endian-ness: Bits in a Byte vs. Bytes in Memory

When we say a specific architecture is either little-endian or big-endian, we are referring to whether numerical significance is stored from left to right or right to left in memory. My question is: does this ordering refer to how bits are ordered in a byte, or how bytes are ordered in memory?
For example, consider the number 6000=1770h=0001011101110000b. If both bits in a byte and byte in memory are little-endian, this would be stored as
00001110 11101000 = 0E E8,
if bits in a byte were big-endian, but bytes in memory were little-endian, this would be stored as (for what it's worth, this happens to be how Visual Studio shows me memory is organized on x64)
01110000 00010111 = 70 17,
if bits were little-endian, but bytes were big-endian, this would be stored as
11101000 00001110 = E8 0E,
and finally, if both bits in a byte and bytes in memory were big-endian, this would be stored as
00010111 01110000 = 17 70
(Hopefully I did that right.)
So then, what do the terms "little-endian" and "big-endian" actually refer to? Do they refer to the ordering of bits in a byte, the ordering of bytes in memory, or both? Furthermore, if VS tells me that, for example, 7C is 'in' a particular byte, do they mean that the bits that make up that byte in memory are literally 0111 1100, or just that the value stored in that byte is 7Ch = 124, which may or may not actually be represented as 01111100 depending on whether the underlying architecture happens to be little-endian?
The ordering of bits in a byte is invisible. Since you can't address individual bits, there would be no difference between the two cases. However, you can address individual bytes, so there it does make a difference.
If we're expressing 6000 in byte-addressable memory, the high byte is 23 decimal (6000 divided by 256) and the low byte is 112 decimal (6000 mod 256). We could store this as 23,112 or 112,23. There are no other options. Only the ordering of bytes is an open choice, and this is what endianness refers to.
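You can observe this byte ordering directly; a small sketch in standard C, using the 6000 = 0x1770 example:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t v = 6000;           /* 0x1770 */
    uint8_t *p = (uint8_t *)&v;  /* view the bytes in address order */
    /* Little-endian machines print 70 17; big-endian machines print 17 70. */
    printf("%02X %02X\n", p[0], p[1]);
    return 0;
}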
In memory, little-endian versus big-endian is not so much a left-to-right or right-to-left issue as one of addressing. In little endian, the least significant portion of the data is stored at the lower addresses; big endian is the reverse.
The ordering occurs independently at 2 levels. As most machines address more than 1 bit at a time (recall some graphics CPUs that did have bit addresses), the address locates a group of bits, typically 8. So if the bits at address 10 are less significant than the bits at address 11, it is a little-endian machine. This is generalized as byte endian-ness, and the processor's characteristics define it.
The endian-ness of the group of bits, the bit endian-ness, is significant if there is a way to address them in some fashion. Some processors provide operations that use a bit-level address within the byte (or word). Whether your programming language exposes that or hides it is another question.
In C there are bit fields, such as:
union u {
    unsigned char uc;
    struct s {
        unsigned int a : 1;  /* 1-bit field */
        unsigned int b : 7;  /* 7-bit field */
    } s;                     /* named member, so u.s.b is valid */
};
This code is non-portable because of bit endian-ness: after u.uc = 7, u.s.b may end up as 7 (if fields are allocated starting from the most significant bit) or 3 (if from the least significant bit), or something else entirely. Typically the byte endian-ness and the bit-field allocation order match, but the compiler controls the example above.
Bit endian-ness is also significant in serial communication. As each bit is sent/received sequentially, assembling the bits to and from memory needs an endian-ness definition.
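For example, a sketch of converting between the two wire orders (reverse_bits is a hypothetical helper; UARTs typically send the least significant bit first, while some other links send the most significant bit first):

#include <stdint.h>

/* Reverse the bit order of one byte: bit 0 swaps with bit 7,
   bit 1 with bit 6, and so on. */
uint8_t reverse_bits(uint8_t b) {
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        r = (uint8_t)((r << 1) | (b & 1));  /* append b's low bit to r */
        b >>= 1;                            /* move to b's next bit    */
    }
    return r;
}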
To conclude:
Big-endian and little-endian most often refer to byte-level addressing. The endian-ness of the bits within a byte is typically either the same or only of select importance to the programmer.
BTW, your example of "If both bits in a byte and bytes in memory are little-endian, this would be stored as
00001110 11101000 = 0E E8"
is, I would suggest, not correct, as the left side and the right side are using different endian-ness. Had you used the same endian-ness, you might conclude
00001110 11101000 = 07 71
For fun consider:
01000000 (big endian) has value "sixty-four" ("sixty-four" is a big-endian word)
10110000 (little endian) has value "thirteen" ("thirteen" is a little-endian word)

Reading a bit from memory

I'm looking into reading single bits from memory (RAM, hard disk). My understanding was that one cannot read less than a byte.
However, I read someone saying it can be done with assembly.
I want the bandwidth usage to be as low as possible, and the data to be retrieved is not sequential, so I cannot read one byte and use all 8 of its bits.
I don't think the CPU will read less than the size of a cache line from RAM (64 bytes on recent Intel chips). From disk, the minimum is typically 4 KiB.
Reading a single bit at a time is neither possible nor necessary, since the data bus is much wider than that.
You cannot read less than a byte from any PC or hard disk that I know of. Even if you could, it would be extremely inefficient.
Some machines have memory-mapped port I/O that can read/write less than a byte at the port, but it still shows up as at least a byte by the time you get it.
Use the bitwise operators to pick off specific bits as in:
#include <stdbool.h>

char someByte = 0x3D;      // In binary: 00111101
bool flag = someByte & 1;  // Get the first bit: 1
flag = someByte & 2;       // Get the second bit: 0
// And so on; the number after the & operator is a power of 2 if you
// want to isolate one bit. You can also pick off several bits at once:
int value = someByte & 3;  // Assume the lower 2 bits are interesting for some reason
It used to be, say in the 386/486 days, that a memory chip was one bit wide, 1 meg by 1 bit, but you would have 8 or some multiple of chips, one for each bit lane on the bus, and you could only read in widths of the bus. Today the memories are a byte wide and you can only read in units of 32 or 64 bits, or multiples of those. Even when you read a byte, most designs fetch the full bus width; it adds unnecessary complication and cost to isolate the bus all the way to the memory. A byte read looks to most of the system like a 32- or 64-bit read; only as it approaches the edge of the processor (sometimes the physical pins, sometimes the edge of the core inside the chip) is the individual byte lane separated out and the other bits discarded. Having the cache enabled changes the smallest divisible read size from the memory: you will see a burst or block of reads.
It is possible to design a memory system that is 8 bits wide and reads 8 bits at a time, but why would you, unless it is an 8-bit processor? And even then you probably couldn't take advantage of an 8-bit by 2-gig memory. DRAM is pretty slow anyway, something like 133 MHz (even your 1600 MHz memory only delivers short bursts, since you are still reading from slow parts; memory has not gotten much faster in over 10 years).
Hard disks are similar but different; I think sectors are the smallest divisible unit, and you have to read or write in those units. So when reading, you have a memory cycle on the processor, no different from going to memory, and depending on the controller, either before you do the read or as a result of it, a sector is read off the disk into a buffer, not unlike a cache line fill. Then your memory cycle to the buffer in the disk controller either causes a bus-width read that the processor divides up, or, if the bus adds the complexity to isolate byte lanes, a single byte is isolated; but nobody isolates bit lanes. (I say "nobody" and someone will come back with an exception...)
Most of this is well documented and not hard to find. For ARM platforms, look for the AMBA and/or AXI specifications, freely downloadable; bridge, PCIe controller, and disk controller documents are all available for PCs and other platforms. It still boils down to an address bus and a data bus (or one outgoing and one incoming data bus) plus some control signals that indicate the access type. Some buses have byte lane enables, which are generally for a write, not a read.

If I want to write only a byte to a DRAM in a modern 64-bit system, I DO have to tell everyone almost all the way out to the DRAM what I want to write. To write a byte on a memory module that must be accessed 64 bits at a time, at a minimum a 64-bit read happens into a temporary place, either the cache or the memory controller; then the byte to be written modifies the specific byte within the 64-bit word; then that 64-bit quantity, eventually, is written back to the memory module itself. You can do this using a combination of the address bits and a few control signals, or you can just provide 8 byte-lane enables and ignore the lower address bits. Hard disk, same deal: you have to read a sector, modify one byte, then eventually write the whole sector back. With flash and EEPROM, you can only write zeros (from the programmer's perspective); you erase to ones (from the programmer's perspective; it is actually a zero in the logic, there is an inversion), and a write has to be a sector at a time; sectors are typically 64, 128, or 256 bytes.
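The read-modify-write the controller performs is, in spirit, just this (a sketch; write_byte_rmw is a hypothetical name, and real hardware does this in the cache or memory controller, not in software):

#include <stdint.h>

/* Store one byte into a 64-bit-wide memory word: fetch the whole
   word, splice in the byte, write the whole word back. */
uint64_t write_byte_rmw(uint64_t word, unsigned byte_index, uint8_t value) {
    unsigned shift = byte_index * 8;     /* bit position of the target byte */
    word &= ~((uint64_t)0xFF << shift);  /* clear the old byte */
    word |= (uint64_t)value << shift;    /* insert the new byte */
    return word;
}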

Why is the smallest value that can be stored a Byte (8 bit) & not a Bit (1 bit)?

Why is the smallest value that can be stored a byte (8 bits) and not a bit (1 bit) in memory?
Even booleans are stored as bytes. Will we ever bump the smallest unit up to 32 or 64 bits, like the registers on the CPU?
EDIT: To clarify, as many answers seemed confused about the nature of the question: this question is about why a byte isn't 7-bit, 1-bit, 32-bit, etc. (not why lower-bit primitives must fit within the hardware's byte at minimum). Is the 8-bit byte simply historical, given that some hardware has 10-bit bytes for example? Or is there a mathematical reason 8 bits are ideal versus, say, 10 bits for general processing?
The hardware is built to read data in blocks (bytes, and later words and dwords). This provides greater efficiency than accessing individual bits and also offers more addressing range, so most data is aligned to at least a byte boundary. There exist encodings that operate on bit sequences rather than bytes, but they are quite rare.
Nowadays data is most often aligned to a dword (32-bit) boundary anyway. Moreover, some hardware (ARM, for example) can't access misaligned multibyte variables, i.e. a 16-bit word can't "cross" a dword boundary; an exception will be thrown.
Because computers address memory at the byte level, so anything smaller than a byte is not addressable.
The underlying methods of processor access are limited to the size of the smallest usable register. On most architectures, that size is 8 bits. You can use smaller portions of these; for instance, C has the bit-field feature in structs that allows combining fields that only need certain bit lengths (a sketch follows). Access will still require that the whole byte be read.
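A small sketch of such a bit-field struct (the field names are made up; the compiler still reads and writes at least a whole byte underneath):

struct flags {
    unsigned int ready   : 1;  /* 1-bit flag */
    unsigned int error   : 1;  /* 1-bit flag */
    unsigned int channel : 3;  /* 3-bit field, values 0..7 */
};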
Some older exotic architectures actually did have a different word size. In these machines, 10 bits might be the common size.
Lastly, processors are almost always backwards compatible. Intel, for instance, has maintained complete instruction compatibility from the 386 on up. If you take a program compiled for the 386, it will still run on an i7 processor. Changing the word size would break compatibility. So while it is possible, no manufacturer will ever do it.
Assume that we have a native language that consists of 2 characters, such as a and b.
To distinguish the two characters we need at least 1 bit, for example 0 to represent a and 1 to represent b.
If we count the letters, digits, and special symbols, there are 128 characters; to distinguish one character from another you need log2(128) = 7 bits, with the 8th bit for transmission (historically often used as a parity bit).
