Need help calculating checksum (CRC-16) from a string of data

I need help calculating the checksum (CRC-16: x^16 + x^15 + x^2 + 1) for BYTE6 and BYTE7 of this data string. I have read some examples, but I have no idea how or where to start. What do the x^16, x^15, etc. terms mean? What should I put in BYTE6 and BYTE7?
Byte0: 0x55
Byte1: 0x80
Byte2: 0x06
Byte3: 0x02
Byte4: 0x00
Byte5: 0x00
Byte6: MSB of the checksum word (CRC-16)
Byte7: LSB of the checksum word (CRC-16)

The CRC polynomial (x^16 + x^15 + x^2 + 1) is necessary but not sufficient to define the CRC. (The x^n terms just indicate which bits of the polynomial are set; dropping the implicit x^16 term, x^15 + x^2 + 1 gives the 16-bit value 0x8005.) You can see this list of 16-bit CRCs, in which you will find seven different CRCs that use that particular polynomial (poly=0x8005).
Once you have the full description, you can use my crcany code to generate C code to compute the CRC.
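The polynomial alone doesn't pin down the variant, but for illustration, assuming the common CRC-16/ARC parameters (poly 0x8005, reflected input/output, init 0x0000, no final XOR) — your device may well use a different variant from that list — a bit-at-a-time sketch in C:

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-16/ARC: poly 0x8005 (0xA001 reflected), init 0x0000,
 * reflected input/output, no final XOR. */
static uint16_t crc16_arc(const uint8_t *data, size_t len) {
    uint16_t crc = 0x0000;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}
```

You would run this over Byte0..Byte5 and place the result in Byte6/Byte7 — but which variant applies, and whether the MSB or LSB goes first, must come from the device documentation.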

Parse array of unsigned integers in Julia 1.x.x

I am trying to open a binary file whose internal structure I partly know, and reinterpret it correctly in Julia. Let us say that I can already load it via:
arx=open("../axonbinaryfile.abf", "r")
databin=read(arx)
close(arx)
The data is loaded as an Array of UInt8, which I guess are bytes.
In the first 4 I can perform a simple Char conversion and it works:
head=databin[1:4]
map(Char, head)
4-element Array{Char,1}:
'A'
'B'
'F'
' '
Then it happens that positions 13-16 hold a 32-bit (4-byte) integer waiting to be interpreted. How should I do that?
I have tried reinterpret() and calling Int32 as a function, but to no avail.
You can use reinterpret(Int32, databin[13:16])[1]. The trailing [1] is needed because reinterpret returns a one-element array, not a scalar.
Note also that read supports passing a type. So if you first read the 12 leading bytes from your file, e.g. with read(arx, 12), and then run read(arx, Int32), you will get the desired number without doing any conversions or vector allocations.
Finally, observe that the Char conversion in your code turns a Unicode code point into a character. I am not sure if this is exactly what you want (maybe it is). For example, if the first byte read has the value 200, you will get:
julia> Char(200)
'È': Unicode U+00c8 (category Lu: Letter, uppercase)
EDIT: One more comment: when you convert 4 bytes to an Int32, be sure to check whether they should be decoded as big-endian or little-endian (see the ENDIAN_BOM constant and the ntoh, hton, ltoh, htol functions).
Here it is. Use view to avoid copying the data.
julia> dat = UInt8[65,66,67,68,0,0,2,40];
julia> Char.(view(dat,1:4))
4-element Array{Char,1}:
'A'
'B'
'C'
'D'
julia> reinterpret(Int32, view(dat,5:8))
1-element reinterpret(Int32, view(::Array{UInt8,1}, 5:8)):
671219712
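For comparison, here is a C sketch of the byte assembly that reinterpret performs on a little-endian host, alongside the big-endian reading (the function names are mine):

```c
#include <stdint.h>

/* Assemble a 32-bit integer from 4 bytes, little-endian. */
static uint32_t le32(const uint8_t b[4]) {
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* Same 4 bytes read big-endian. */
static uint32_t be32(const uint8_t b[4]) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}
```

With the bytes {0, 0, 2, 40} from the example, le32 gives 671219712 (the value shown above) while be32 gives 552 — which is why checking the file's byte order matters.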

Detecting boundaries and resetting circular buffer pointer in both directions

I am working with an 8051 microcontroller, but my question is more algorithm specific.
I have created a circular buffer in memory for random incoming data from external sources. Suppose the buffer is 32 bytes and I receive 34 bytes of data. Yes, I'll handle the fact that two bytes are dropped, but if I want to read the last 5 bytes, I'll have to somehow wrap around to the end of the buffer again to read more than 2 bytes.
Here's an example in 8051 code of what I'm trying to achieve:
BUFFER equ 40h ;our buffer = 40-5Fh (32 bytes)
BUFFERMASK equ 5Fh ;Mask so buffer doesn't go past 32nd byte
initialization:
mov R1,#BUFFER ;R1 = our buffer pointer
mov @R1,#xxh ;Add some incoming data
inc R1
anl R1,#BUFFERMASK
mov @R1,#xxh ;Add some incoming data
inc R1
anl R1,#BUFFERMASK
...
mov @R1,#xxh ;Add some incoming data
inc R1
anl R1,#BUFFERMASK
;At this point we have filled a large chunk of the buffer with data.
;Let's assume the buffer wrapped around and the address is 41h
;and we want to read the data in reverse
mov A,@R1 ;Get last byte at 41h
dec R1
??? R1,??? (anl won't work here :( )
mov A,@R1 ;Get byte at 40h
dec R1
??? R1,??? (anl won't work here :( )
mov A,@R1 ;Get byte at 5Fh (how do we jump with a logic statement?)
dec R1
??? R1,??? (anl won't work here :( )
I understand that I could get away with a CJNE (compare and jump if not equal), but the disadvantages of that approach are: 1) each CJNE needs its own label, 2) the carry flag is modified after execution, and 3) an extra clock cycle is wasted if the boundary is hit.
Is there any way I could pull this off with simple anl/orl (AND or OR) logic? I am willing to change the memory address of the cyclic buffer if that creates an advantage in my situation.
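One common trick (my suggestion, not from the thread): since the 32-byte buffer is aligned on a 32-byte boundary (40h-5Fh), you can AND off the low five bits of the pointer and OR the base address back in. Unlike a plain AND with 5Fh, this wraps correctly in both directions — in 8051 terms, AND with #1Fh followed by OR with #40h after every inc or dec. A C sketch of the index arithmetic:

```c
#include <stdint.h>

#define BUF_BASE 0x40u  /* buffer occupies 0x40..0x5F, 32-byte aligned */
#define BUF_MASK 0x1Fu  /* low five bits index the 32-byte buffer */

/* Re-wrap the pointer into the buffer after either inc or dec. */
static uint8_t wrap(uint8_t p) {
    return (uint8_t)((p & BUF_MASK) | BUF_BASE);
}
```

wrap(0x60) gives 0x40 (forward wrap) and wrap(0x3F) gives 0x5F (backward wrap), so the same two logic instructions serve both read directions, with no labels and no carry-flag clobbering.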

Convert first two bytes of Lua string (in bigendian format) to unsigned short number

I want a Lua function that takes a string argument. The string has N+2 bytes of data: the first two bytes hold a length in big-endian format, and the remaining N bytes contain the data.
Say the data is "abcd", so the string is 0x00 0x04 a b c d.
In the Lua function this string is an input argument to me.
How can I calculate the length in an optimal way?
So far I have tried below code
function calculate_length(s)
    len = string.len(s)
    if (len >= 2) then
        first_byte = s:byte(1)
        second_byte = s:byte(2)
        -- len = ((first_byte & 0xFF) << 8) | (second_byte & 0xFF)
        len = second_byte
    else
        len = 0
    end
    return len
end
See the commented line (that is how I would have done it in C).
How do I achieve the commented line in Lua?
The number of data bytes in your string s is #s-2 (assuming even a string with no data has a length of two bytes, each with a value of 0). If you really need to use those header bytes, you could compute:
len = first_byte * 256 + second_byte
When it comes to strings in Lua, a byte is a byte as this excerpt about strings from the Reference Manual makes clear:
The type string represents immutable sequences of bytes. Lua is 8-bit clean: strings can contain any 8-bit value, including embedded zeros ('\0'). Lua is also encoding-agnostic; it makes no assumptions about the contents of a string.
This is important if using the string.* library:
The string library assumes one-byte character encodings.
If the internal representation in Lua of your number is important, the following excerpt from the Lua Reference Manual may be of interest:
The type number uses two internal representations, or two subtypes, one called integer and the other called float. Lua has explicit rules about when each representation is used, but it also converts between them automatically as needed.... Therefore, the programmer may choose to mostly ignore the difference between integers and floats or to assume complete control over the representation of each number. Standard Lua uses 64-bit integers and double-precision (64-bit) floats, but you can also compile Lua so that it uses 32-bit integers and/or single-precision (32-bit) floats.
In other words, the 2 byte "unsigned short" C data type does not exist in Lua. Integers are stored using the "long long" type (8 byte signed).
Lastly, as lhf pointed out in the comments, bitwise operations were added to Lua in version 5.3, and if lhf is the lhf, he should know ;-)
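For reference, the multiply-and-add above is exactly the big-endian 16-bit read; a C sketch of the same arithmetic (and note that in Lua 5.3+, string.unpack(">I2", s) does this extraction directly):

```c
#include <stdint.h>

/* Read a big-endian unsigned 16-bit length from the first two bytes:
 * equivalent to first_byte * 256 + second_byte. */
static uint16_t be16_len(const uint8_t *s) {
    return (uint16_t)((s[0] << 8) | s[1]);
}
```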

mqtt-sn what is the raw data format

I am very new to this MQTT-SN stuff. So my question is: what does an MQTT-SN message look like? I mean the raw data format. I do not clearly understand what octet means. As I understand it, an octet is a byte.
So is the data transferred in binary?
Or what exactly does octet mean?
Can someone please give me a sample message? It is really bad that there is no example message in the specification.
Thanks,
Mathias
An octet is just a collection of 8 bits, so just another name for a byte.
A message consists of a header made up of 2 or 4 bytes and then the message body.
The header is subdivided into 1 or 3 bytes for the length and 1 byte for the type. If the first byte is 0x01, then the next 2 bytes are the length; otherwise the value of the first byte is the length.
The next byte is the type; a table of valid message types can be found in section 5.2.2 of the spec.
The message body varies depending on the type.
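The 1-or-3-byte length rule can be sketched in C (a hypothetical helper, assuming the two-byte form is big-endian as elsewhere in the spec):

```c
#include <stdint.h>
#include <stddef.h>

/* Decode the MQTT-SN Length field: one byte normally, or three bytes
 * (0x01 marker followed by a 16-bit big-endian length) for long
 * messages. Sets *hdr to the number of length bytes consumed. */
static uint16_t msg_length(const uint8_t *p, size_t *hdr) {
    if (p[0] == 0x01) {
        *hdr = 3;
        return (uint16_t)((p[1] << 8) | p[2]);
    }
    *hdr = 1;
    return p[0];
}
```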
But to publish a message with payload HelloWorld on Topic ID of AB (0x41, 0x42) would look something like this:
0x11 - length (17 bytes total, including the length byte)
0x0C - msg type (PUBLISH)
0x02 - flags (QoS 0, short topic name)
0x41 - topic ID, MSB
0x42 - topic ID, LSB
0x00 - MsgId, MSB (0x0000 for QoS 0)
0x00 - MsgId, LSB
0x48 - H
0x65 - e
0x6C - l
0x6C - l
0x6F - o
0x57 - W
0x6F - o
0x72 - r
0x6C - l
0x64 - d
Where the topic id is the output from a topic register message (Section 6.5 in the spec)
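Assembled in C, a frame like the one above might be built as follows (a sketch that follows the field order of the listing, with the two-octet MsgId the spec prescribes; not a vetted implementation):

```c
#include <stdint.h>
#include <string.h>

/* Build a short-header MQTT-SN PUBLISH frame (QoS 0) into out.
 * Returns the total frame length. Check against the spec before use. */
static int build_publish(uint8_t *out, uint16_t topic_id,
                         const char *payload) {
    size_t n = strlen(payload);
    uint8_t len = (uint8_t)(7 + n);    /* 7 header bytes + payload  */
    out[0] = len;                      /* Length (includes itself)  */
    out[1] = 0x0C;                     /* MsgType: PUBLISH          */
    out[2] = 0x02;                     /* Flags: QoS 0, short topic */
    out[3] = (uint8_t)(topic_id >> 8); /* TopicId, MSB              */
    out[4] = (uint8_t)topic_id;        /* TopicId, LSB              */
    out[5] = 0x00;                     /* MsgId, MSB (0 for QoS 0)  */
    out[6] = 0x00;                     /* MsgId, LSB                */
    memcpy(out + 7, payload, n);       /* payload bytes             */
    return len;
}
```

build_publish(buf, 0x4142, "HelloWorld") reproduces the 17-byte frame listed above.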
An octet is a byte.
There isn't an example payload because the spec doesn't dictate a payload format. You can use whatever you want.

Writing negative integer as 0xnn hex into peripheral

I am dealing with a hardware peripheral which exposes a writable characteristic for Measured Power. As per the specs from the hardware vendor, Measured Power should be sent in the format "0xnn". I am not sure how to convert a negative integer (-59) into a 1-byte hex representation (0xnn).
So far I have tried the following:
int iPower = -59;
int16_t power = CFSwapInt16HostToBig(iPower);
NSData *powerData = [NSData dataWithBytes:&power length:sizeof(power)];
But this writes 0xFF into the peripheral, which is 255 in decimal. Any idea?
I even tried sending raw bytes with the code below, and this arrives as 0xC5, which is 197 in decimal.
NSInteger index = -59;
NSData *powerData = [NSData dataWithBytes:&index length:sizeof(index)];
I am not sure how we convert a negative integer (-59) into a 2-byte hex representation (0xnn).
When you have two bytes for a hex representation, you can cover the range 0x0000 ~ 0xFFFF: 0 ~ 65535 as non-negative integers, or -32768 ~ 32767 in two's-complement representation.
Using two's-complement representation, -59 in two-byte hex is 0xFFC5.
Also check whether you really need CFSwapInt16HostToBig.
