How to calculate Least Significant Byte in Ruby - ruby-on-rails

I'm sending a byte array to a piece of hardware.
The first 7 bytes contain data and the 8th byte is a checksum.
The 8th byte is the Least Significant Byte of the sum of the first 7 bytes.
Here are examples that include the correct checksum; the last byte of each is the checksum:
200-30-7-5-1-2-0-245
42-0-0-1-176-0-148-39
42-0-0-3-177-0-201-118
How do I calculate the checksum?
Thanks,
Seth

Same as in C: take the sum and 'bitwise and' it with 255 (or 0xff in hexadecimal). Using your first set of data as an example:
arr = [200, 30, 7, 5, 1, 2, 0]
sum = 0
arr.each do |val|
  sum += val
end
checksum = sum & 0xff   # keep only the least significant byte
print checksum          # => 245

String objects have a number of methods for direct byte manipulation.
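For example (a minimal sketch, assuming the data arrives as a binary string rather than an array; the pack call just builds such a string for the demo):
data = [200, 30, 7, 5, 1, 2, 0].pack('C*')   # seven data bytes as a binary string
checksum = data.bytes.inject(0) { |sum, b| sum + b } & 0xFF
puts checksum   # => 245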

A short way to write it would be
arr.inject { |sum, val| sum + val } & 0xFF
But as previously discovered, this produces a different checksum for your second and third examples. It looks as though either the examples are incorrect or the checksum calculation is not as simple as taking the least significant byte.
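A quick way to check that claim against all three examples (a throwaway sketch; the expected values are the trailing bytes from the question):
[
  [[200, 30, 7, 5, 1, 2, 0], 245],
  [[42, 0, 0, 1, 176, 0, 148], 39],
  [[42, 0, 0, 3, 177, 0, 201], 118]
].each do |data, expected|
  puts "expected #{expected}, got #{data.inject(:+) & 0xFF}"
end
# expected 245, got 245
# expected 39, got 111
# expected 118, got 167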

Related

Dart: split an arbitrary-precision number into a sequence of bytes

Assuming I have a declaration like this: final int input = 0xA55AA9D2;, I'd like to get a list of [0xA5, 0x5A, 0xA9, 0xD2]. This is easily achievable in Java by right-shifting the input by 24, 16, 8 and 0 respectively, with a subsequent cast to byte to cut the precision to an 8-bit value.
But how do I do the same in Dart? I can't find sufficient information about number encoding (e.g. in Java a leading 1 bit means minus, but how is minus encoded here?) or transformations (e.g. how to cut precision) to solve this task.
P.S.: I solved this for 32-bit numbers using out.add([value >> 24, (value & 0x00FFFFFF) >> 16, (value & 0x0000FFFF) >> 8, value & 0x000000FF]); but it feels incredibly ugly. I feel the SDK provides more convenient means to split an arbitrary-precision number into bytes.
The biggest issue here is that a Dart int is not the same type on the VM and in a browser.
On the native VM, an int is a 64-bit two's complement number.
In a browser, when compiled to JavaScript, an int is just a non-fractional double because JavaScript only has doubles as numbers.
If your code is only running on the VM, then getting the bytes is as simple as:
int number;
List<int> bytes = List.generate(8, (n) => (number >> (8 * n)) & 0xFF);
In JavaScript, bitwise operations only work on 32-bit integers, so you could do:
List<int> bytes = List.generate(4, (n) => (number >> (8 * n)) & 0xFF);
and get the byte representation of number.toSigned(32).
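One caveat that applies to either variant: List.generate as written produces the bytes least-significant first. To match the [0xA5, 0x5A, 0xA9, 0xD2] order from the question, reverse the list; a small sketch (VM semantics assumed):
void main() {
  final int number = 0xA55AA9D2;
  final bytes =
      List.generate(4, (n) => (number >> (8 * n)) & 0xFF).reversed.toList();
  print(bytes.map((b) => b.toRadixString(16)).toList()); // [a5, 5a, a9, d2]
}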
If you want a number larger than that, I'd probably use BigInt:
var bigNumber = BigInt.from(number).toSigned(64);
var b255 = BigInt.from(255);
List<int> bytes = List.generate(8, (n) => ((bigNumber >> (8 * n)) & b255).toInt());
From the documentation for the int class:
The default implementation of int is 64-bit two's complement integers with operations that wrap to that range on overflow.
Note: When compiling to JavaScript, integers are restricted to values that can be represented exactly by double-precision floating point values. The available integer values include all integers between -2^53 and 2^53 ...
(Most modern systems use two's complement for signed integers.)
If you need your Dart code to work portably for both web and for VMs, you can use package:fixnum to use fixed-width 32- or 64-bit integers.
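A sketch of that portable route (bytesOf is a hypothetical helper name; assumes package:fixnum is added as a dependency):
import 'package:fixnum/fixnum.dart';

// Extract the 8 bytes of a 64-bit value, least-significant first, with
// identical two's-complement behavior on the VM and on the web.
List<int> bytesOf(Int64 value) =>
    List.generate(8, (n) => ((value >> (8 * n)) & Int64(0xFF)).toInt());

void main() {
  print(bytesOf(Int64(0xA55AA9D2)));
}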

Bitwise operation alternative in Neo4j cypher query

I need to do a bitwise "and" in a cypher query. It seems that cypher does not support bitwise operations. Any suggestions for alternatives?
This is what I want to detect ...
For example, 268 is (2^8 + 2^3 + 2^2), and as you can see 2^3 = 8 is part of my original number. So if I use bitwise AND, (100001100) & (1000) = 1000, and this way I can detect whether 8 is part of 268.
How can I do this without bitwise support? Any suggestions? I need to do this in cypher.
Another way to perform this type of test using cypher would be to convert your decimal values to collections of the decimals that represent the bits that are set.
// convert the binary number to a collection of decimal parts
// create an index the size of the number to convert
// create a collection of decimals that correspond to the bit locations
with '100001100' as number
, [1,2,4,8,16,32,64,128,256,512,1024,2048,4096] as decimals
with number
, range(length(number)-1,0,-1) as index
, decimals[0..length(number)] as decimals
// map the bits to decimal equivalents
unwind index as i
with number, i, (split(number,''))[i] as binary_placeholder, decimals[-i-1] as decimal_placeholder
// multiply the decimal value by the bits that are set
with collect(decimal_placeholder * toInt(binary_placeholder)) as decimal_placeholders
// filter out the zero values from the collection
with filter(d in decimal_placeholders where d > 0) as decimal_placeholders
return decimal_placeholders
Here is a sample of what this returns: for '100001100' the collection is [4, 8, 256].
Then when you want to test whether a particular bit is set, you can just test the corresponding decimal value for presence in the collection.
with [4, 8, 256] as decimal_placeholders
, 8 as decimal_to_test
return
case
when decimal_to_test in decimal_placeholders then
toString(decimal_to_test) + ' value bit is set'
else
toString(decimal_to_test) + ' value bit is NOT set'
end as bit_set_test
Alternatively, if you have APOC available, you can use apoc.bitwise.op, which is a wrapper around the Java bitwise operations.
RETURN apoc.bitwise.op(268, "&", 8) AS `268_AND_8`
This yields 8 for `268_AND_8`; the result is non-zero, so the bit is set.
If you absolutely have to do the operation in cypher, probably a better solution would be to implement something like @evan's SO solution, Alternative to bitwise operation using cypher.
You could start by converting your data using cypher that looks something like this...
// convert binary to a product of prime numbers
// start with the number to convert and a collection of primes
with '100001100' as number
, [2,3,5,7,13,17,19,23,29,31,37] as primes
// create an index based on the size of the binary number to convert
// take a slice of the prime array that is the size of the number to convert
with number
, range(length(number)-1,0,-1) as index
, primes[0..length(number)] as primes
// iterate over the index and match the prime number to the bits in the number to convert
unwind index as i
with (split(number,''))[i] as binary_place_holder, primes[-i-1] as prime_place_holder
// collect the primes that are set by multiplying by the set bits
with collect(toInt(binary_place_holder) * prime_place_holder) as prime_placeholders
// filter out the zero bits
with filter(p in prime_placeholders where p > 0) as prime_placeholders
// return a product of primes of the set bits
return prime_placeholders, reduce(pp = 1, p in prime_placeholders | pp * p) as prime_product
For '100001100' the query returns prime_placeholders of [5, 7, 29] and a prime_product of 1015. The query could be adapted to update attributes with the prime product.
Then when you want to use it, take the prime product modulo the prime that sits in the position of the bit you want to test.
// test if the fourth bit is set in the decimal 268
// 268 is the equivalent of a prime product of 1015
// a modulus 7 == 0 will indicate the bit is set
with 1015 as prime_product
, [2,3,5,7,13,17,19,23,29,31,37] as primes
, 4 as bit_to_test
with bit_to_test
, prime_product
, primes[bit_to_test-1] as prime
, prime_product % primes[bit_to_test-1] as mod_remains
with
case when mod_remains = 0 then
'bit ' + toString(bit_to_test) + ' set'
else
'bit ' + toString(bit_to_test) + ' NOT set'
end as bit_set
return bit_set
It almost certainly defeats the purpose of choosing a bitwise operation in the first place, but if you absolutely needed to AND two binary numbers in cypher you could do something like this with collections.
with split('100001100', '') as bin_term_1
, split('000001000', '') as bin_term_2
, toString(1) as one
with bin_term_1, bin_term_2, one, range(0,size(bin_term_1)-1,1) as index
unwind index as i
with i, bin_term_1, bin_term_2, one,
case
when (bin_term_1[i] = one) and (bin_term_2[i] = one) then
1
else
0
end as r
return collect(r) as AND
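For the two sample terms above ('100001100' AND '000001000'), the collected result is the positional AND of the two bit strings, i.e. 000001000, which is decimal 8:
[0, 0, 0, 0, 0, 1, 0, 0, 0]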
Thanks Dave. I tried your solutions and they all worked, and they were a good hint toward another approach. This is how I solved it, using string comparison.
with '100001100' as number, '100000000' as sub_number
with number, sub_number,
  range(length(number)-1, length(number)-length(sub_number), -1) as tail,
  length(number)-length(sub_number) as difference
unwind tail as i
with i, sub_number, number, i - length(number) + length(sub_number) as sub_number_position
with sub_number_position, (split(number,''))[i-1] as bit_mask, (split(sub_number,''))[sub_number_position] as sub_bit
with collect(toInt(bit_mask) * toInt(sub_bit)) as result
return result
Obviously the number and sub_number can have different values.

Odd Checksum Result(s) - Not Receiving Expected Results

I have been trying to produce a checksum based on a file header and am receiving conflicting results. The slave device's manual states the following about producing the checksum:
"A simple eight-bit calculation is used for the header checksum. The steps required are as follows:
1. Calculate the sum of the header bytes in a single byte. Alternatively, calculate the sum and then AND the result with FF hex.
2. The checksum = FF hex - the sum from step 1."
Here, I have created the following code in Lua:
function header_checksum(string)
  local sum = 0
  for i = 1, #string do
    sum = sum + string.byte(i)
  end
  local chksum = 255 - (sum & 255)
  return chksum
end
If I send the following string down, print(header_checksum("0181B81800")), I get the following results:
241 (string sent as you see it)
0 (each byte changed to hex and then sent to the function)
In the example given, the manual states that the checksum byte should be AD hex, which is 173 decimal.
Can someone please tell me what is wrong with what I am doing; either the code written, my approach, or both?
function header_checksum(header)
  -- Start at -1 and subtract each byte: (-1 - sum) % 256 equals
  -- 255 - (sum & 255), which is exactly the manual's algorithm.
  local sum = -1
  for i = 1, #header do
    sum = sum - header:byte(i)
  end
  return sum % 256
end
print(header_checksum(string.char(0x01,0x81,0xB8,0x18,0x00))) --> 173
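For comparison, here is a direct transcription of the manual's two steps (a minimal sketch; header_checksum_direct is just an illustrative name, and the & operator requires Lua 5.3 or later):
-- 1) sum the header bytes, keeping only the low byte (AND with 0xFF)
-- 2) checksum = 0xFF minus that sum
function header_checksum_direct(header)
  local sum = 0
  for i = 1, #header do
    sum = (sum + header:byte(i)) & 0xFF
  end
  return 0xFF - sum
end

print(header_checksum_direct(string.char(0x01,0x81,0xB8,0x18,0x00))) --> 173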

Conversion of bitstring to decimal in Erlang

I am trying to decode a bitstring to a decimal value. For example, I have bitstrings like
<<96,64,112,153,9:4>>. I want to convert them to decimal values where every four bits form one digit: 96 (01100000) --> 60 (the first four bits are 6, the next four are 0), 64 --> 40, and so on. The output would be 604070999. The trailing 9:4 means that the last segment is only 4 bits wide.
Can anyone help me write this function in Erlang?
If you have a binary rather than a bitstring (i.e., without the trailing 9:4 part), you can apply a hex conversion to each byte within a binary comprehension, then convert the resulting binary to an integer:
1> Bin = <<96,64,112,153>>.
<<96,64,112,153>>
2> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:8>> <= Bin >>).
60407099
The same also works for your bitstring, taking 4 bits at a time instead of 8 in the comprehension:
3> Bits = <<96,64,112,153,9:4>>.
<<96,64,112,153,9:4>>
4> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:4>> <= Bits >>).
604070999
But as @Hynek-Pichi-Vychodil points out in the comments, for the bitstring you don't need the integer_to_binary/2 call at all; instead you can convert each 4-bit digit to its corresponding character by adding $0, the literal for the character 0:
5> binary_to_integer(<< <<($0+B)>> || <<B:4>> <= Bits >>).
604070999
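Wrapped up as a function, a minimal sketch (the module and function names are just for illustration; it assumes every 4-bit group is a decimal digit 0-9):
%% nibbles.erl -- illustrative module name
-module(nibbles).
-export([to_decimal/1]).

%% Convert each 4-bit group to its digit character and parse the result.
to_decimal(Bits) ->
    binary_to_integer(<< <<($0 + B)>> || <<B:4>> <= Bits >>).
Calling nibbles:to_decimal(<<96,64,112,153,9:4>>) returns 604070999.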

iOS calculating sum of filesizes always negative

I've got a strange problem here, and I'm sure it's just something small.
I receive information about files via JSON (RestKit is doing a good job).
I write the file size of each file via Core Data to a local store.
Afterwards, within one of my view controllers, I need to sum up the file sizes of all files in the database. I fetch all files and then loop over them (for) to sum up the sizes.
The problem is that the result is always negative!
The Core Data entity's filesize attribute is of type Integer 32 (the file size is reported in bytes by JSON).
I read the fetch result into an NSArray, allPublicationsToLoad, and then try to sum up. The objects in the NSArray, of type CDPublication, have a filesize value of type NSNumber:
for (int n = 0; n < [allPublicationsToLoad count]; n = n + 1)
{
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize = allPublicationsSize + [[thePub filesize] integerValue];
    sum = [NSNumber numberWithFloat:([sum floatValue] + [[thePub filesize] floatValue])];
}
Each individual filesize of the CDPublication objects is positive and correct; only the sum of all the file sizes is negative afterwards. There are around 240 objects right now, with filesize values between 4,000 and 234,645,434,123.
Can somebody please give me a hint in the right direction!?
Is it the problem that Integer 32 or NSNumber can't hold such a huge range?
Thanks
MadMaxApp
An NSNumber built from a 32-bit integer can't hold such a huge number, and because of the way negative numbers are stored the result comes out negative.
Negative numbers are stored using two's complement; this is done to make addition of positive and negative numbers easier. The range of representable numbers is split in two: the highest half (the int values whose highest-order bit is 1) is considered negative, and the lowest half (whose highest-order bit is 0) are the normal positive numbers. Now, if you add sufficiently large numbers, the result lands in the highest half and is therefore interpreted as a negative number. Here's an illustration of the 4-bit integer situation (32-bit works exactly the same, but there would be a lot more 0s and 1s to type):
With 4 bits you can represent this range of signed integers:
0000 (=0)
0001 (=1)
0010 (=2)
...
0111 (=7)
1000 (=-8)
1001 (=-7)
...
1111 (=-1)
The maximum positive integer you can represent is 7 in this case. If you add 5 and 4, for example, you get:
0101 + 0100 = 1001
1001 equals -7 when you represent signed integers like this (and not 9, as you might expect). That's the effect you are observing, but on a much larger scale (32 bits).
Your only option for getting correct results here is to increase the number of bits used to represent your integers, so that the result won't land in the negative range of bit combinations. If 32 bits is not enough (as in your case), you can use a long long (64 bits):
[myNumber longLongValue];
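To see the wrap-around concretely, a small sketch with illustrative numbers (signed overflow is formally undefined in C, but on the two's-complement hardware iOS uses it wraps as shown):
int32_t a = 2000000000;               // fits comfortably in 32 bits
int32_t b = 2000000000;
int32_t sum32 = a + b;                // wraps into the negative range: -294967296
long long sum64 = (long long)a + b;   // 64-bit arithmetic keeps the true value: 4000000000
NSLog(@"32-bit: %d, 64-bit: %lld", sum32, sum64);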
I think this has to do with int overflow: very large integers get reinterpreted as negatives when they overflow the size of int (32 bits). Use longLongValue instead of integerValue:
long long allPublicationsSize = 0;
for (int n = 0; n < [allPublicationsToLoad count]; n++) {
    CDPublication* thePub = [allPublicationsToLoad objectAtIndex:n];
    allPublicationsSize += [[thePub filesize] longLongValue];
}
This is an integer overflow issue associated with the use of two's complement arithmetic. For a 32-bit integer there are exactly 2^32 (4,294,967,296) possible integer values. When using two's complement, the most significant bit is used as a sign bit, which allows half of the values to represent non-negative integers (sign bit 0) and the other half to represent negative numbers (sign bit 1). This gives an effective range of [-2^31, 2^31 - 1], or [-2,147,483,648, 2,147,483,647].
To overcome this problem for your case, you should consider using a 64-bit integer. This should work well for the range of values you seem to be interested in using. Alternatively, if even 64-bit is not sufficient, you should look for big integer libraries for iOS.
