I'm quite puzzled about endianness on an ARM device. The device I'm testing uses little-endian byte order.
Say there's this code, which rearranges the bytes of a 32-bit value into an array:
uint32_t* srcPtr = (uint32_t*)src->get();
uint8_t* dstPtr = dst->get();
dstPtr[0] = ((*srcPtr) >> 16) & 0xFF; // bits 16-23 of the 32-bit value
dstPtr[1] = ((*srcPtr) >> 8) & 0xFF;  // bits 8-15
dstPtr[2] = (*srcPtr) & 0xFF;         // bits 0-7
dstPtr[3] = ((*srcPtr) >> 24);        // bits 24-31
My understanding is that if srcPtr contains {0, 1, 2, 3}, the output in dstPtr should be {1, 2, 3, 0}.
But the output in dstPtr is {2, 1, 0, 3}.
Does this mean srcPtr is read in the order 3, 2, 1, 0?
Is this due to little-endian byte order?
Can someone please help me? :)
So at address 0x100 I have the values 0x00, 0x11, 0x22, 0x33: 0x00 is at 0x100, 0x11 at 0x101, and so on. If I point at address 0x100 with a 32-bit unsigned pointer, then I get the value 0x33221100. This is true for ARM (little-endian), true for x86 (little-endian), etc.
So now if I take 0x33221100 and compute (x>>16)&0xFF I get 0x22, (x>>8)&0xFF is 0x11, x&0xFF is 0x00, and (x>>24)&0xFF is 0x33: {2, 1, 0, 3}.
Where is your confusion? Is it the conversion from 0x00, 0x11, 0x22, 0x33 to 0x33221100? Little-endian means least significant byte first, so the lowest or first address you come across (0x100) holds the least significant byte (0x00, the low 8 bits of the number), and so on: 0x101 holds bits 8 to 15, 0x102 bits 16 to 23, and 0x103 bits 24 to 31, for a 32-bit value.
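If it helps, you can reproduce the whole thing in a few lines of standalone C (this is just a sketch to demonstrate, not your code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t src[4] = {0x00, 0x11, 0x22, 0x33}; /* bytes in memory order */
    uint32_t x;
    memcpy(&x, src, sizeof x); /* on a little-endian CPU, x == 0x33221100 */
    printf("%02X %02X %02X %02X\n",
           (unsigned)((x >> 16) & 0xFF),  /* 0x22 */
           (unsigned)((x >> 8) & 0xFF),   /* 0x11 */
           (unsigned)(x & 0xFF),          /* 0x00 */
           (unsigned)(x >> 24));          /* 0x33 */
    return 0;
}

On a little-endian machine this prints 22 11 00 33, i.e. {2, 1, 0, 3} with your example values.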
byte 0: min_value (bits 0-3), max_value (bits 4-7)
Byte 0 should be the min and max values combined.
The min and max values are both integers (in the 0-15 range).
I should convert them into 4-bit binary and combine them somehow. (How?)
E.g.
min_value=2 // 0010
max_value=3 // 0011
The result should be an Uint8, and the value: 00100011
You can use the shift-left operator << to get the result you want. The combined byte is simply (min_value << 4) + max_value; the toRadixString/padLeft part only formats it as an 8-character binary string:
result = ((min_value << 4) + max_value).toRadixString(2).padLeft(8, '0');
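If you ever need the same thing outside Dart, here is the packing and unpacking sketched in C (pack_nibbles is a made-up helper name; note min goes in the high nibble to match the expected 00100011):

#include <stdint.h>
#include <stdio.h>

/* Pack two 0-15 values into one byte, min_value in the high nibble. */
static uint8_t pack_nibbles(unsigned min_value, unsigned max_value)
{
    return (uint8_t)(((min_value & 0x0F) << 4) | (max_value & 0x0F));
}

int main(void)
{
    uint8_t b = pack_nibbles(2, 3); /* 0b00100011 == 0x23 */
    printf("0x%02X\n", b);
    printf("min=%d max=%d\n", b >> 4, b & 0x0F); /* unpack: 2 and 3 */
    return 0;
}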
I am reading the Elixir docs on binaries: https://elixir-lang.org/getting-started/binaries-strings-and-char-lists.html#binaries-and-bitstrings
The docs say:
iex> <<255>>
<<255>>
iex> <<256>> # truncated
<<0>>
iex> <<256 :: size(16)>> # use 16 bits (2 bytes) to store the number
<<1, 0>>
By default each segment of an Elixir binary is 8 bits, so a value that doesn't fit in 8 bits is truncated (256 becomes 0).
But why does <<256 :: size(16)>> give <<1, 0>>? I think it should be <<1, 255>>.
<<1, 0>> is correct. 256 in binary is 0b100000000.
iex(1)> 0b100000000
256
When you extend it to 16 bits you get 0b0000000100000000.
iex(2)> 0b0000000100000000
256
When you split it into two bytes in big-endian byte order, you get 0b00000001 and 0b00000000, which are 1 and 0.
iex(3)> <<256::size(16)>>
<<1, 0>>
In little-endian byte order, you'll get 0 and 1 as the order of the bytes is reversed:
iex(4)> <<256::little-size(16)>>
<<0, 1>>
To get the original number back from big-endian bytes, you can think of it as multiplying the last byte by 1, the second to last by 256, the third to last by 256 * 256, and so on, then summing them all.
iex(5)> <<256::size(16)>>
<<1, 0>>
iex(6)> 1 * 256 + 0 * 1
256
iex(7)> <<123456::size(24)>>
<<1, 226, 64>>
iex(8)> 1 * 256 * 256 + 226 * 256 + 64 * 1
123456
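The same byte-to-number arithmetic, sketched in C for comparison (decode_be is just a hypothetical helper mirroring the multiply-and-sum above):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Combine big-endian bytes: each new byte shifts the total up by 256. */
static uint64_t decode_be(const uint8_t *bytes, size_t n)
{
    uint64_t value = 0;
    for (size_t i = 0; i < n; i++)
        value = value * 256 + bytes[i];
    return value;
}

int main(void)
{
    uint8_t a[] = {1, 0};       /* <<256::size(16)>> */
    uint8_t b[] = {1, 226, 64}; /* <<123456::size(24)>> */
    printf("%llu\n", (unsigned long long)decode_be(a, 2)); /* 256 */
    printf("%llu\n", (unsigned long long)decode_be(b, 3)); /* 123456 */
    return 0;
}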
Given a 64-bit int, I need to split it into four 2-byte ints.
For example, decimal 66309 is 0000 0000 0000 0001 0000 0011 0000 0101.
I need to convert this into an array of four ints, {0, 1, 3, 5}. How can I do it in Lua?
First, splitting 66309 into four 16-bit integers wouldn't give {0, 1, 3, 5} but {0, 0, 1, 773}; in your example you are splitting it into 8-bit integers. The code below produces 16-bit integers, most significant first.
local int = 66309
local t = {}
for i = 0, 3 do
  -- take the most significant 16-bit chunk first
  t[i + 1] = (int >> ((3 - i) * 16)) & 0xFFFF
end
If you want 8-bit integers, change each 3 in the code to 7, the 16 in the shift expression to 8, and the hex mask 0xFFFF to 0xFF.
And finally, this only works in Lua 5.3 and later; earlier versions cannot accurately represent a 64-bit integer (and lack the bitwise operators) without external libraries.
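If you're stuck before Lua 5.3, or just want to compare, the same most-significant-chunk-first split looks like this in C:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t n = 66309;
    uint16_t parts[4];
    for (int i = 0; i < 4; i++)
        parts[i] = (uint16_t)(n >> ((3 - i) * 16)); /* MSB chunk first */
    for (int i = 0; i < 4; i++)
        printf("%u%s", (unsigned)parts[i], i < 3 ? ", " : "\n"); /* 0, 0, 1, 773 */
    return 0;
}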
I need to set some bits in a ByteData at a position counted in bits.
How can I do this?
E.g.
var byteData = new ByteData(1024);
var bitData = new BitData(byteData);
// Offset in bits: 387
// Number of bits: 5
// Value: 3
bitData.setBits(387, 5, 3);
Yes, it is quite complicated. I don't know Dart, but these are the general steps you need to take. I will label each value with a letter and also use a more complicated example to show what happens when the bits cross byte boundaries.
1. Construct the BitData object with a ByteData object (A)
2. Call setBits(offset (B), bits (C), value (D));
I will use example values of:
A: 11111111 11111111 11111111 11111111
B: 7
C: 10
D: 00000000 11111111
3. Rather than using an integer with a fixed number of bits, you could
use another ByteData object (D) containing the bits you want to write.
Also create a mask (E) containing the significant bits.
e.g.
A: 11111111 11111111 11111111 11111111
D: 00000000 11111111
E: 00000011 11111111 (2^C - 1)
4. As an extra bonus step, we can make sure the insignificant
bits are really zero by ANDing with the bitmask.
D = D & E
D 00000000 11111111
E 00000011 11111111
5. Make sure D and E contain at least one full zero byte since we want
to shift them.
D 00000000 00000000 11111111
E 00000000 00000011 11111111
6. Work out these two integer values:
F = the extra bit offset within the start byte: B mod 8 (e.g. 7)
G = the number of insignificant bits in D: size(D) - C (e.g. 24 - 10 = 14)
7. H = G - F, which should not be negative here (e.g. 14 - 7 = 7).
8. Shift both D and E left by H bits.
D 00000000 01111111 10000000
E 00000001 11111111 10000000
9. Work out the first byte index (J): floor(B / 8) (e.g. 0).
10. Read the value of A at this index out and let this be K
K = 11111111 11111111 11111111
11. AND the current value (K) with NOT E (bitwise NOT, i.e. ~E) to zero out
the bits where the new value goes. Then you can OR the new bits over the top.
L = (K & ~E) | D
K & ~E = 11111110 00000000 01111111
L      = 11111110 01111111 11111111
12. Write L to the same place you read it from.
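To make those steps concrete, here's a minimal C sketch of the same idea. It works bit-at-a-time instead of masking whole bytes, which is slower but easy to check against the worked example (set_bits is my own name, not an existing API):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Write `count` bits of `value` into `buf`, starting `offset` bits in,
   counting from the most significant bit of byte 0 (as the example does). */
static void set_bits(uint8_t *buf, size_t offset, unsigned count, uint32_t value)
{
    for (unsigned i = 0; i < count; i++) {
        size_t bit = offset + i;
        size_t byteIndex = bit >> 3;               /* step 9: which byte */
        unsigned shift = 7u - (unsigned)(bit & 7); /* position inside that byte */
        uint8_t mask = (uint8_t)(1u << shift);
        if ((value >> (count - 1 - i)) & 1u)       /* value's bits, MSB first */
            buf[byteIndex] |= mask;                /* set the bit */
        else
            buf[byteIndex] &= (uint8_t)~mask;      /* clear the bit */
    }
}

int main(void)
{
    uint8_t a[3] = {0xFF, 0xFF, 0xFF};            /* A from the example */
    set_bits(a, 7, 10, 0x00FF);                   /* B = 7, C = 10, D = 0x00FF */
    printf("%02X %02X %02X\n", a[0], a[1], a[2]); /* FE 7F FF, matching L */
    return 0;
}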
There is no BitData class, so you'll have to do some of the bit-pushing yourself.
Find the corresponding byte offset, read in some bytes, mask out the existing bits and set the new ones at the correct bit offset, then write it back.
The real complexity comes when you need to store more bits than you can read/write in a single operation.
For endianness: if you are treating the memory as a sequence of bit fields of arbitrary width, I'd go for little-endian. Endianness only really makes sense for full-sized (2^n-bit, n > 3) integers. A 5-bit integer like the one you are storing can't have any endianness, and a 37-bit integer has no natural way of expressing an endianness either.
You can try something like this code (which can definitely be optimized more):
import "dart:typed_data";
void setBitData(ByteBuffer buffer, int offset, int length, int value) {
assert(value < (1 << length));
assert(offset + length < buffer.lengthInBytes * 8);
int byteOffset = offset >> 3;
int bitOffset = offset & 7;
if (length + bitOffset <= 32) {
ByteData data = new ByteData.view(buffer);
// Can update it one read/modify/write operation.
int mask = ((1 << length) - 1) << bitOffset;
int bits = data.getUint32(byteOffset, Endianness.LITTLE_ENDIAN);
bits = (bits & ~mask) | (value << bitOffset);
data.setUint32(byteOffset, bits, Endianness.LITTLE_ENDIAN);
return;
}
// Split the value into chunks of no more than 32 bits, aligned.
do {
int bits = (length > 32 ? 32 : length) - bitOffset;
setBitData(buffer, offset, bits, value & ((1 << bits) - 1));
offset += bits;
length -= bits;
value >>= bits;
bitOffset = 0;
} while (length > 0);
}
Example use:
main() {
  var b = Uint8List(32);
  setBitData(b.buffer, 3, 8, 255);
  print(b.map((v) => v.toRadixString(16)));
  setBitData(b.buffer, 13, 6 * 4, 0xffffff);
  print(b.map((v) => v.toRadixString(16)));
  setBitData(b.buffer, 47, 15 * 4, 0xaaaaaaaaaaaaaaa);
  print(b.map((v) => v.toRadixString(16)));
}
For two days I've been trying to send a MIDI pitch bend message. I'm using the following code:
int pitchValue = 8191; // or -8192
int msb = ?;
int lsb = ?;
UInt8 midiData[] = { 0xe0, msb, lsb };
[midi sendBytes:midiData size:sizeof(midiData)];
I don't understand how to calculate msb and lsb. I tried pitchValue << 8, but it works incorrectly: when I look at the events with a MIDI monitoring tool I see a minimum of -8192 and a maximum of +8064, but I want -8192 and +8191.
Sorry if the question is simple.
Pitch bend data is offset to avoid any sign bit concerns. The maximum negative deviation is sent as a value of zero, not -8192, so you have to compensate for that with something like this Python code:
def EncodePitchBend(value):
''' return a 2-tuple containing (msb, lsb) '''
if (value < -8192) or (value > 8191):
raise ValueError
value += 8192
return (((value >> 7) & 0x7F), (value & 0x7f))
Since MIDI data bytes are limited to 7 bits, you need to split pitchValue into two 7-bit values:
int msb = (pitchValue + 8192) >> 7 & 0x7F;
int lsb = (pitchValue + 8192) & 0x7F;
Edit: as @bgporter pointed out, pitch wheel values are offset by 8192 so that "zero" (i.e. the center position) is at 8192 (0x2000), so I edited my answer to offset pitchValue by 8192.
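For completeness, here's a small C sketch of the round trip. One extra detail worth knowing: per the MIDI spec, the pitch bend message sends the LSB data byte before the MSB, i.e. 0xE0, lsb, msb:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int pitchValue = 8191;                      /* try -8192 too */
    unsigned v = (unsigned)(pitchValue + 8192); /* 0..16383, center at 8192 */
    uint8_t msb = (v >> 7) & 0x7F;
    uint8_t lsb = v & 0x7F;
    uint8_t midiData[] = { 0xE0, lsb, msb };    /* status, LSB, MSB */

    /* Decode: rebuild the 14-bit value, then remove the 8192 offset. */
    int decoded = (int)(((unsigned)midiData[2] << 7) | midiData[1]) - 8192;
    printf("msb=%d lsb=%d decoded=%d\n", msb, lsb, decoded); /* decoded == 8191 */
    return 0;
}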