Why is <<256 :: size(16)>> represented as <<1, 0>>? - erlang

I am reading the Elixir documentation on binaries and bitstrings: https://elixir-lang.org/getting-started/binaries-strings-and-char-lists.html#binaries-and-bitstrings
The doc shows:
iex> <<255>>
<<255>>
iex> <<256>> # truncated
<<0>>
iex> <<256 :: size(16)>> # use 16 bits (2 bytes) to store the number
<<1, 0>>
The default segment size in an Elixir binary is 8 bits; if a value does not fit in 8 bits, it is truncated to its low 8 bits (which for 256 happens to be 0).
But why does <<256 :: size(16)>> give <<1, 0>>? I think it should be <<1, 255>>.

<<1, 0>> is correct. 256 in binary is 0b100000000.
iex(1)> 0b100000000
256
When you extend it to 16 bits you get 0b0000000100000000.
iex(2)> 0b0000000100000000
256
When you split it into two bytes in big-endian byte order, you get 0b00000001 and 0b00000000, which is 1 and 0.
iex(3)> <<256::size(16)>>
<<1, 0>>
In little-endian byte order, you'll get 0 and 1 as the order of the bytes is reversed:
iex(4)> <<256::little-size(16)>>
<<0, 1>>
To get the original number back from big-endian bytes, you can think of it as multiplying the last byte by 1, the second-to-last by 256, the third-to-last by 256 * 256, and so on, and then summing them all.
iex(5)> <<256::size(16)>>
<<1, 0>>
iex(6)> 1 * 256 + 0 * 1
256
iex(7)> <<123456::size(24)>>
<<1, 226, 64>>
iex(8)> 1 * 256 * 256 + 226 * 256 + 64 * 1
123456
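If it helps, here is the same split written out with plain shifts and masks; this is a small C++ sketch of my own, not from the Elixir docs:
#include <cstdint>
#include <cstdio>

int main() {
    uint16_t n = 256;                        // 0b0000000100000000
    unsigned high = (n >> 8) & 0xFF;         // big-endian first byte  -> 1
    unsigned low  = n & 0xFF;                // big-endian second byte -> 0
    std::printf("<<%u, %u>>\n", high, low);  // prints <<1, 0>>
    // Reassemble by place value, same idea as the iex examples above:
    std::printf("%u\n", high * 256 + low);   // prints 256
}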

How to convert two 4-bit chunks into 1 byte in Dart?

byte 0: min_value (0-3 bit)
max_value (4-7 bit)
byte 0 should be the min and max values combined.
min and max are both integers in the 0-15 range.
I need to convert them into 4-bit binary and combine them somehow (how?).
E.g.
min_value=2 // 0010
max_value=3 // 0011
The result should be a Uint8 with the value 00100011.
You can use the shift-left operator << to combine them; (min_value << 4) + max_value is the combined byte value, and the toRadixString call just formats it as a binary string for display:
result = ((min_value << 4) + max_value).toRadixString(2).padLeft(8, '0');
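For completeness, here is the packing plus the inverse unpacking written out with plain shifts and masks. This is a C++ sketch of my own (not Dart), following the layout of the example above, with min in the high nibble:
#include <cassert>

int main() {
    unsigned min_value = 2;                          // 0010
    unsigned max_value = 3;                          // 0011
    // | and + give the same result here because the nibbles do not overlap
    unsigned packed = (min_value << 4) | max_value;  // 00100011 == 0x23 == 35

    unsigned min_back = packed >> 4;                 // high nibble -> 2
    unsigned max_back = packed & 0x0F;               // low nibble  -> 3
    assert(min_back == min_value && max_back == max_value);
}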

256 possible values in 8 bits

I am confused by the details section that says 1 byte, which is 8 bits, gives us a potential of 2^8 or 256 possible values. (https://en.wikipedia.org/wiki/8-bit_computing)
If I am doing the math correctly:
2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
Total = 255
The way I see it, there are only 255 possible values.
0 is also a value, so for 8 bits the value range is 0-255.
00000000 is the lowest and 11111111 (255) is the highest.
2^x gives you the total number of possible values for x bits.
For your case, it is not correct to sum the values from 2^0 to 2^7: that sum is 255, which is the largest value an 8-bit byte can hold (all eight bits set), not the number of values. The correct calculation is simply 2^8, which is 256.
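Spelled out the same way as the list above (my own restatement of the arithmetic):
2^0 + 2^1 + ... + 2^7 = 2^8 - 1 = 255   (the largest 8-bit value, 11111111)
255 + 1 = 2^8 = 256                     (the number of values, counting 00000000)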

Character to bits with SIMD (and substrings)

I am learning SIMD programming little by little, and I've devised a (seemingly) simple problem that I hope I can speed up using SIMD (AVX; at the moment I only have access to AVX CPUs).
I have a long string built from an alphabet of 2^k characters (for instance 0, 1, 2, 3), and I'd like to:
generate all substrings of a given length substringlength
convert all the substrings to bits
The substrings are just sequences of characters from the input string:
012301230123012301230123012301233012301301230123123213012301230
substringlength = 6;
string bits
------+--+-----------------
012301 -> 01 00 11 10 01 00
123012 -> 10 01 00 11 10 01
230123 -> 11 10 01 00 11 10
301230 -> 00 11 10 01 00 11
...
My question is due to my inexperience with SIMD (I've only read "Modern x86 Assembly Language Programming", by Kusswurm):
Is this a task where SIMD could help?
Edit: for simplicity, let's just assume k = 2, and so the ASCII numbers will be just '0'..'3'.
Iteration 1
Reading the comments and playing around, I've come to these realizations. I can convert the ASCII into values and, as suggested, multiply-add adjacent bytes:
// SIMD 128-bit registers; apparently I cannot use AVX ones directly (some operations are AVX2 or AVX-512)
__m128i sse, val, adj, res;
// For k = 2, each character needs a 2-bit shift, so the first byte of each pair is multiplied by 1<<2
auto mask = _mm_set_epi8(1, 1<<2, 1, 1<<2, 1, 1<<2, 1, 1<<2, 1, 1<<2, 1, 1<<2, 1, 1<<2, 1, 1<<2);
auto zero = _mm_set_epi8('0', '0', '0', '0', '0', '0', '0', '0',
                         '0', '0', '0', '0', '0', '0', '0', '0');
// Load 16 ASCII characters
sse = _mm_loadu_si128((__m128i*) s.data());
// Convert ASCII digits to integer values by subtracting '0'
val = _mm_sub_epi8(sse, zero);
// Multiply byte by byte with the mask (aka SHL the first byte of each pair by 2) and add adjacent bytes into 16-bit lanes
adj = _mm_maddubs_epi16(val, mask);
An idea of what it does, for people learning like me, is given here (I will need more 128-bit registers to encode one substring; ASCII is in hex):
bytes   15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
ascii   30 31 30 31 30 31 30 31 30 31 30 31 30 31 30 31
_mm_sub_epi8:
value    0  1  0  1  0  1  0  1  0  1  0  1  0  1  0  1
         *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *
mask     1  4  1  4  1  4  1  4  1  4  1  4  1  4  1  4
_mm_maddubs_epi16 (add adjacent products into 16-bit lanes):
bits    ....0100 ....0100 ....0100 ....0100 ....0100 ....0100 ....0100 ....0100
In other words, the low 4 bits of each 16-bit lane are correct, encoding 2 ASCII chars, if I understand correctly what _mm_maddubs_epi16 did to my values, which I am not sure about at all!
Now I'd need some sort of "shift-or" of the adjacent lanes: something like _mm_maddubs_epi16 that shifts the first left and ORs it with the second, producing an 8-bit or 16-bit value:
(16-bits)
bits ....0100 ....0100 ....0100 ....0100 ....0100 ....0100 ....0100 ....0100
| shl 4 | | shl 4 | | shl 4 | | shl 4 |
0100.... ....0100 0100.... ....0100 0100.... ....0100 0100.... ....0100
OR OR OR OR
....01000100 ....01000100 ....01000100 ....01000100
However, I cannot see how _mm_bslli_si128 could help me here, or if there is a smarter way to do this. Maybe even this "horizontal" approach is foolish, and I have to rethink it.
Any hint is welcome!
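One possible way to do the combining step, as a sketch under my own assumptions rather than a vetted answer: _mm_madd_epi16 does for 16-bit lanes what _mm_maddubs_epi16 does for bytes, so multiplying the first lane of each pair by 1 << 4 and adding the second is the same as the shift-or pictured above (the 4-bit codes cannot overlap), and a final _mm_shuffle_epi8 (SSSE3) can gather the low byte of each 32-bit lane:
// Continuing from adj above: eight 16-bit lanes, each holding a 4-bit code in its low bits
__m128i pair_mask = _mm_set_epi16(1, 1 << 4, 1, 1 << 4, 1, 1 << 4, 1, 1 << 4);
// 32-bit lane = first_lane * 16 + second_lane == (first << 4) | second
__m128i packed32 = _mm_madd_epi16(adj, pair_mask);
// Gather the low byte of each 32-bit lane into bytes 0..3; indices of -1 produce zero bytes
__m128i gather = _mm_set_epi8(-1, -1, -1, -1, -1, -1, -1, -1,
                              -1, -1, -1, -1, 12, 8, 4, 0);
__m128i codes = _mm_shuffle_epi8(packed32, gather);  // 4 bytes, each encoding 4 characters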

How do I convert an int to bytes and back

Given a 64-bit int, I need to split it into 4 x 2-byte ints.
For example, decimal 66309 is 0000 0000 0000 0001 0000 0011 0000 0101.
I need to convert this into an array of 4 ints: {0, 1, 3, 5}. How can I do it in Lua?
First, the conversion of 66309 into four 16-bit integers wouldn't be {0, 1, 3, 5} but {0, 0, 1, 773}. In your example you are splitting it into 8-bit integers. The code below does 16-bit integers.
local int = 66309
local t = {}
for i = 0, 3 do
  -- t[1] holds the least significant 16 bits, t[4] the most significant
  t[i+1] = (int >> (i * 16)) & 0xFFFF
end
If you want 8-bit integers, change the 3 in the loop to 7, the 16 in the shift expression to 8, and the hex mask 0xFFFF to 0xFF.
And finally, this only works in Lua 5.3 or newer: earlier versions have neither the native bitwise operators nor integers that can accurately represent a 64-bit value without external libraries.

Confusion on endianness with ARM processor

I'm quite puzzled about the endianness on an ARM device. The device I'm testing uses little endian.
Say there's code here which swaps elements in an array:
uint32_t* srcPtr = (uint32_t*)src->get();
uint8_t* dstPtr = dst->get();
dstPtr[0] = ((*srcPtr) >> 16) & 0xFF;  // bits 16-23 of the 32-bit value
dstPtr[1] = ((*srcPtr) >> 8) & 0xFF;   // bits 8-15
dstPtr[2] = (*srcPtr) & 0xFF;          // bits 0-7
dstPtr[3] = ((*srcPtr) >> 24);         // bits 24-31
My understanding is that if srcPtr contains {0, 1, 2, 3} the output dstPtr should be {1, 2, 3, 0}.
But the actual output in dstPtr is {2, 1, 0, 3}.
Does this mean srcPtr is read in the order 3, 2, 1, 0?
Can someone please help me? :)
Is this due to little endian?
So at address 0x100 I have the values 0x00, 0x11, 0x22, 0x33: 0x00 is at 0x100, 0x11 at 0x101, and so on. If I point at address 0x100 with a 32-bit unsigned pointer, I get the value 0x33221100; this is true for ARM (little-endian), true for x86 (little-endian), etc.
So now if I take 0x33221100 as x, then (x>>16)&0xFF is 0x22, (x>>8)&0xFF is 0x11, x&0xFF is 0x00, and (x>>24)&0xFF is 0x33: {2, 1, 0, 3}.
Where is your confusion? Is it the conversion from 0x00, 0x11, 0x22, 0x33 to 0x33221100? Little-endian means least significant byte first, so the lowest or first address you come across (0x100) holds the least significant byte (0x00, the low 8 bits of the number), and so on: 0x101 holds bits 8 to 15, 0x102 bits 16 to 23, and 0x103 bits 24 to 31, for a 32-bit value.
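To see it concretely, here is a small self-contained C++ sketch of my own (not from the original code); on a little-endian machine it prints 2 1 0 3:
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    uint8_t src[4] = {0x00, 0x01, 0x02, 0x03};  // bytes in memory order
    uint32_t word;
    std::memcpy(&word, src, 4);                 // stands in for the pointer cast; little-endian load gives 0x03020100
    uint8_t dst[4];
    dst[0] = (word >> 16) & 0xFF;               // 0x02
    dst[1] = (word >> 8) & 0xFF;                // 0x01
    dst[2] = word & 0xFF;                       // 0x00
    dst[3] = word >> 24;                        // 0x03
    std::printf("%d %d %d %d\n", dst[0], dst[1], dst[2], dst[3]);
}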
