How do I calculate this checksum? - checksum

I have an alarm system that I have configured to send SMS messages to my phone as well as over Ethernet.
Here are a few of the SMSes I receive:
5522 18 1137 00 003 1C76
5522 18 3137 00 003 3278
5522 18 1130 00 002 E36E
5522 18 1401 00 001 ED6E
5522 18 1302 00 003 ED70
5522 18 1302 00 004 EE71
5522 18 1302 00 009 F376
5522 18 3147 00 009 417F
5522 18 1137 00 004 1D77
5522 18 3137 00 009 3379
5522 18 1602 00 000 0870
The first 4 bytes are the account number, the next 2 are always 18, the next 4 are event codes, 2 group bytes and 3 zone numbers. At the end there are 4 bytes which I suspect is some kind of checksum.
This is some kind of Ademco Contact ID format. However, I do not recognize the checksum.
It's not a time stamp as the last message (0870) is sent periodically and is always the same.
When sending via DTMF 0 should have value 10, but I do not know if that is the case with SMSes. Most likely not.

The checksum in Ademco's Contact ID is calculated using the following formula:
S = the hex checksum, which is one digit.
(Sum of all message digits + S) MOD 15 = 0
and if S works out to 10, the checksum digit is 0.
The official Contact ID specification is here: http://li0r.files.wordpress.com/2012/07/sia-dc-05-1999-09_contact_id.pdf
So, using 5522 18 1602 00 000 0870 as an example:
LET S = checksum
5+5+2+2+1+8+1+6+2=32
(32+S) modulo 15 is congruent to 0
We then need the closest multiple of 15 above 32, which is 45.
45 - 32 = 13, so S = 13 (hex D).
Let's test that.
(32 + 13) = 45, and 45 modulo 15 is congruent to 0.
It is correct. However, as Contact ID is 16 digits and you have 19, I would suspect that your panel is using a different proprietary implementation of Contact ID. If you post the make/model of the panel this came from, I may be able to explain things further.
I hope this answers your question!
-Alex
P.S.: To calculate mod use a percent sign in Google
P.P.S.: The document that describes Contact ID is actually DC-05-1999.09; the document you referenced is the computer interface communication protocol specification.

I just want to correct AdemcoGuy's calculation as it seems to be incorrect:
So, the example was 5522 18 1602 00 000 0870
We need to replace each 0 by 10.
So:
5+5+2+2+1+8+1+6+10+2+10+10+10+10+10 = 92
Then the closest multiple of 15 above 92 is 105, and 105 - 92 = 13.
So the checksum digit is 13, which is D in hex.
Anyway, in the question the Contact ID checksum seems to be missing, and only the manufacturer of the panel that sent these messages knows what the last 4 digits are :)
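For anyone who wants to check this by machine, here is a minimal C sketch of the digit-sum rule as described in these answers (the 0-counts-as-10 convention and the divisible-by-15 target are taken from the answers above; the handling of a result of 15 as F is an assumption from the spec summary, and hex digits B-F are not handled):
#include <stdio.h>

/* Sketch of the Contact ID checksum rule described above: every digit 0 counts
 * as 10, and the checksum digit is chosen so the total is divisible by 15.
 * Edge-value handling (10 sent as '0', 15 sent as 'F') is an assumption. */
static int contact_id_checksum(const char *msg) {
    int sum = 0;
    for (; *msg; msg++) {
        if (*msg < '0' || *msg > '9')
            continue;                           /* skip the spaces */
        sum += (*msg == '0') ? 10 : (*msg - '0');
    }
    int s = (15 - (sum % 15)) % 15;             /* digit value that completes a multiple of 15 */
    return (s == 0) ? 15 : s;                   /* value 15 would be transmitted as 'F' */
}

int main(void) {
    /* 15-digit message body from the example above */
    int s = contact_id_checksum("5522 18 1602 00 000");
    printf("checksum digit value: %d (hex %X)\n", s, s);   /* 13 (hex D) */
    return 0;
}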

#ACCT MT QXYZ GG CCC where:
ACCT: 4 Digit Account number (0-9, B-F)
MT: Message Type - Always 18
Q: Event qualifier, which gives specific event information:
1: New Event or Opening
3: New Restore or Closing
6: Previously reported condition still present (Status report)
XYZ: Event code (3 Hex digits 0-9,B-F)
GG: Group or Partition number (2 Hex digits 0-9, B-F). Use 00 to indicate that no specific group or partition information applies.
CCC: Zone number (Event reports) or User (Open / Close reports) (3 Hex digits 0-9, B-F). Use 000 to indicate that no specific zone or user information applies.
To look up event codes see this document (pdf).
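As an illustration only (a hypothetical splitter, not taken from any panel SDK), one of the SMS strings from the question maps onto these fields like this:
#include <stdio.h>

/* Hypothetical splitter for the ACCT MT QXYZ GG CCC layout described above,
 * applied to one of the space-separated SMS strings from the question. */
int main(void) {
    const char *sms = "5522 18 1137 00 003 1C76";
    char acct[5], mt[3], qxyz[5], gg[3], ccc[4], trailer[5];
    if (sscanf(sms, "%4s %2s %4s %2s %3s %4s", acct, mt, qxyz, gg, ccc, trailer) == 6)
        printf("acct=%s mt=%s q=%c xyz=%s gg=%s ccc=%s trailer=%s\n",
               acct, mt, qxyz[0], qxyz + 1, gg, ccc, trailer);
    return 0;
}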

I came across this post while trying to figure out the checksums of my own alarm system (Woonveilig/Egardia), which seems to be using the same format. I found a post on the German alarm forum that contains a snippet of C code to calculate CRCs for the LUPUS alarm system. This CRC calculation method seems to match both my own and Lasse's SMS-based system. Here's the C code converted to a simple calculation tool:
#include <stdio.h>
#include <string.h>

// Code from: https://www.alarmforum.de/showthread.php?tid=12037&pid=75893
/**
 * Fletcher checksum (LUPUS version, 16-bit).
 */
static unsigned int fletcher_sum(const char *data, int len) {
    unsigned int sum1 = 0x0, sum2 = 0x0;
    while (len) {
        unsigned int tlen = (len > 256) ? 256 : len;
        len -= tlen;
        do {
            sum1 += *data++;
            sum1 = (sum1 & 0xff);   /* keep both running sums modulo 256 */
            sum2 += sum1;
            sum2 = (sum2 & 0xff);
        } while (--tlen);
    }
    return sum2 << 8 | sum1;        /* sum2 in the high byte, sum1 in the low byte */
}

int main() {
    char input[50];
    int sum;
    printf("Enter input: ");
    fgets(input, sizeof(input), stdin);
    sum = fletcher_sum(input, strlen(input) - 1);   /* -1 drops the trailing newline */
    printf("%x\n", sum);
    return 0;
}
Example (first SMS from the question post):
# cc checksum.c
# ./a.out
Enter input: 5522 18 1137 00 003
1c76

Related

How do I calculate the FCS field in an Ethernet Frame?

I see a few implementations, but I decided to look at exactly how the specification calls out the FCS for encoding.
So say my input is as follows:
dst: 0xAA AA AA AA AA AA
src: 0x55 55 55 55 55 55
len: 0x00 04
msg: 0xDE AD BE EF
Concatenating these in the order that seems to be specified in the format (and the order expressed later in the spec), my input is:
M(x) = 0xAA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF
a) "The first 32 bits of the frame are complemented."
complemented first 32 MSB of M(x): 0x55 55 55 55 AA AA 55 55 55 55 55 55 00 04 DE AD BE EF
b) "The n bits of the protected fields are then considered to be the coefficients of a polynomial M(x) of
degree n ā€“ 1. (The first bit of the Destination Address field corresponds to the x(nā€“1) term and the last
bit of the MAC Client Data field (or Pad field if present) corresponds to the x0 term.)"
I did this in the previous step; see M(x).
c) "M(x) is multiplied by x^32 and divided by G(x), producing a remainder R(x) of degree <=31."
Some options online seem to ignore the 33rd bit that would represent x^32. I am going to ignore those simplified shortcuts for this exercise, since the spec doesn't seem to describe them.
It says to multiply M(x) by x^32, so this is just padding with 32 zeroes on the LSB side (i.e. if m(x) = x^3 + 1, then m(x) * x^2 = x^5 + x^2).
padded: 0x55 55 55 55 AA AA 55 55 55 55 55 55 00 04 DE AD BE EF 00 00 00 00
The next step is to divide. I am dividing the whole M(x) / G(x). Can you use XOR shifting directly? I see some binary division examples where the dividend is 101, the divisor is 110, and the remainder is 11. Other examples explain that you cannot do this division by converting to decimal. Which one applies for this standard?
My remainder result for option 1 (using XOR without carry-bit consideration, shifting, no padding) was:
0x15 30 B0 FE
d) "The coefficients of R(x) are considered to be a 32-bit sequence."
e) "The bit sequence is complemented and the result is the CRC."
CRC = 0xEA CF 4F 01
so my entire Ethernet Frame should be:
0xAA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF EA CF 4F 01
In which my dst address keeps its original (uncomplemented) value.
When I check my work with an online CRC32 BZIP2 calculator, I see this result: 0xCACF4F01
Is there another option or online tool to calculate the Ethernet FCS field? (not just one of many CRC32 calculators)
What steps am I missing? Should I have padded the M(x)? Should I have complemented the 32 LSBs instead?
Update
There was an error in my CRC output in my software. It was a minor issue with copying a vector. My latest result for CRC is (before post-complement) 35 30 B0 FE.
The post-complement is: CA CF 4F 01 (matching most online CRC32 BZIP2 versions).
So my Ethernet frame, according to my program, is currently:
0xAA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF CA CF 4F 01
The CRC you need is commonly available in zlib and other libraries as the standard PKZip CRC-32. It is stored in the message in little-endian order. So your frame with the CRC would be:
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF B0 5C 5D 85
Here is an online calculator, where the first result listed is the usual CRC-32, 0x855D5CB0.
Here is a simple code example in C for calculating that CRC (calling it with NULL for mem gives the initial CRC):
unsigned long crc32iso_hdlc(unsigned long crc, void const *mem, size_t len) {
    unsigned char const *data = mem;
    if (data == NULL)
        return 0;
    crc ^= 0xffffffff;
    while (len--) {
        crc ^= *data++;
        for (unsigned k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0xedb88320 : crc >> 1;
    }
    return crc ^ 0xffffffff;
}
The 0xedb88320 constant is 0x04c11db7 reflected.
The actual code used in libraries is more complex and faster.
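To reproduce the numbers above, a small driver like this (a sketch, assuming the crc32iso_hdlc() routine just shown is in the same file) yields the 0x855D5CB0 value:
#include <stdio.h>
#include <stddef.h>

unsigned long crc32iso_hdlc(unsigned long crc, void const *mem, size_t len);  /* defined above */

int main(void) {
    const unsigned char frame[] = {
        0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA,   /* dst */
        0x55, 0x55, 0x55, 0x55, 0x55, 0x55,   /* src */
        0x00, 0x04,                           /* len */
        0xDE, 0xAD, 0xBE, 0xEF                /* msg */
    };
    unsigned long crc = crc32iso_hdlc(0, NULL, 0);       /* initial CRC (0) */
    crc = crc32iso_hdlc(crc, frame, sizeof frame);
    printf("%08lx\n", crc);   /* 855d5cb0, appended to the frame as B0 5C 5D 85 */
    return 0;
}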
Here is the calculation of that same CRC (in Mathematica), using the approach described in the IEEE 802.3 document with polynomials, so you can see the correct resulting powers of x used for the remainder calculation:
The confusing factor here is the 802.3 spec. It mentions that the first bit is the LSB (least significant bit == bit 0) in only one place, section 3.2.3-b, and it mentions that for the CRC, "the first bit of the Destination Address field corresponds to the x^(n-1) term", so each byte input to the CRC calculation is bit reflected.
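If you want to follow the spec's bit ordering literally rather than use a reflected algorithm, the reflection itself is just a per-byte bit reversal; a small sketch:
#include <stdio.h>

/* Reverse the bit order within one byte (bit 0 becomes bit 7 and so on);
 * this is the per-byte reflection that the spec's bit ordering implies. */
static unsigned char reflect8(unsigned char b) {
    unsigned char r = 0;
    for (int i = 0; i < 8; i++)
        r = (unsigned char)((r << 1) | ((b >> i) & 1));
    return r;
}

int main(void) {
    printf("%02X -> %02X\n", 0xAA, reflect8(0xAA));   /* AA -> 55 */
    printf("%02X -> %02X\n", 0x04, reflect8(0x04));   /* 04 -> 20 */
    return 0;
}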
Using this online calculator:
http://www.sunshine2k.de/coding/javascript/crc/crc_js.html
Select CRC-32 | CRC32, click on custom, input reflected on, result reflected off. With this data:
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF
per the spec, the calculated CRC is 0x0D3ABAA1, stored and transmitted as shown:
bit 0 first | bit 7 first
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF | 0D 3A BA A1
To simplify the output to always transmit bit 0 first, bit reflect the CRC bytes:
bit 0 first | bit 0 first
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF | B0 5C 5D 85
Note that the bit 0 always first method results in transmitted bits identical to the spec.
Change the result setting for the CRC calculator: input reflected on, result reflected on. With this data:
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF
the calculated CRC is 0x855D5CB0, stored least significant byte first and transmitted as shown:
bit 0 first | bit 0 first
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF | B0 5C 5D 85
For verifying received data, rather than comparing a CRC calculated over the received data against the received CRC, the process can calculate a CRC over the received data and the received CRC together. Assuming the alternative setup where all bytes are received bit 0 first, then with this received frame, or any frame without error,
bit 0 first | bit 0 first
AA AA AA AA AA AA 55 55 55 55 55 55 00 04 DE AD BE EF | B0 5C 5D 85
the calculated CRC will always be 0x2144DF1C. In the case of a hardware implementation, the post complement of the CRC is usually performed one bit at a time as bits are shifted out, outside of the logic used to calculate the CRC, and in this case, after receiving a frame without error, the CRC register will always contain 0xDEBB20E3 (0x2144DF1C ^ 0xFFFFFFFF). So verification is done by computing CRC on a received frame and comparing the CRC to a 32 bit constant (0x2144DF1C or 0xDEBB20E3).
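To see that constant fall out, here is a small driver (again a sketch, assuming the crc32iso_hdlc() routine above is in scope):
#include <stdio.h>
#include <stddef.h>

unsigned long crc32iso_hdlc(unsigned long crc, void const *mem, size_t len);  /* defined above */

int main(void) {
    /* received frame including the little-endian CRC bytes B0 5C 5D 85 */
    const unsigned char rx[] = {
        0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55,
        0x00, 0x04, 0xDE, 0xAD, 0xBE, 0xEF, 0xB0, 0x5C, 0x5D, 0x85
    };
    unsigned long check = crc32iso_hdlc(0, rx, sizeof rx);
    printf("%08lx\n", check);                            /* 2144df1c for an error-free frame */
    printf("%s\n", check == 0x2144df1c ? "frame OK" : "frame damaged");
    return 0;
}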

Why is there something written in the data section of an ICMPv4 echo ping request?

(My question differs from this one.) I am connected to an AP in a wireless network and I've sent a simple ping request to www.google.com. When I analyze the packet in Wireshark, I can see that there are 48 bytes written in the data section of ICMP. After 8 bytes of trash values, the values increase sequentially from 0x10 to 0x37. Is there any particular reason why ICMPv4 fills in these values instead of just using a bunch of zeroes?
The hexdump of the ICMPv4 data section:
0030        09 d9 0d 00 00 00 00 00 10 11 12 13 14 15  ..............
0040  16 17 18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25  .......... !"#$%
0050  26 27 28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35  &'()*+,-./012345
0060  36 37                                            67
After 8 bytes of trash values
First of all, these are not trash values. In some implementations of ping, the 1st 8 bytes may represent a timestamp.
As #ross-jacobs mentioned, RFC 792 describes the ICMP Echo Request/Reply Packets. For clarity, these two packets are described, in relevant part, as follows:
Echo or Echo Reply Message
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type | Code | Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identifier | Sequence Number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Data ...
+-+-+-+-+-
...
Description
The data received in the echo message must be returned in the echo
reply message.
Here you can see that the contents of the Data field are not described at all; therefore an implementation is free to use whatever data it wishes, including none at all.
Now, since ping is a network test tool, one of the things it can help test is fragmentation/reassembly. Every ping implementation I'm aware of allows the user to specify the size of the payload, and if you exceed the MTU, you should see the ICMP packet fragmented/reassembled. If you examine the payload of the first fragment, you can tell where the second fragment should start just by looking at the sequence of bytes in the payload of the first fragment. If the data was all 0's, it wouldn't be possible to do this. Similarly, if an ICMP packet wasn't reassembled properly, not only would the checksum likely be wrong, but you would most likely be able to tell exactly where the reassembly failed due to a gap or other inconsistency in the payload. This is just one example of why a payload with a sequence of bytes instead of all 0's is more useful to the user.
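As an illustration of the pattern described above (this is only a sketch, not the actual code of any ping implementation), filling the 48-byte payload might look like:
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Sketch only: build a 48-byte echo payload with an 8-byte timestamp slot
 * followed by the incrementing pattern 0x10, 0x11, ..., 0x37 seen above. */
int main(void) {
    unsigned char payload[48] = {0};
    time_t now = time(NULL);
    memcpy(payload, &now, sizeof now > 8 ? 8 : sizeof now);  /* the "trash"-looking timestamp bytes */
    for (size_t i = 8; i < sizeof payload; i++)
        payload[i] = (unsigned char)(0x10 + (i - 8));         /* sequential filler */
    for (size_t i = 0; i < sizeof payload; i++)
        printf("%02x%c", payload[i], (i % 16 == 15) ? '\n' : ' ');
    return 0;
}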

COM Port Commands CRC XOR

I have a check printer that I want to connect to over a COM port and operate from Delphi 7.
I have a command that I extracted with Serial Port Monitor:
STX "PIRI(781" FS NULL ETX "0B" wich is 02 50 49 52 49 28 37 38 31 1c 00 03 30 42 hex
The manual says the following:
CRC (which is the last two digits after the ETX) - packet checksum. It
is calculated by the following algorithm: executing XOR for every
byte of the block including ETX by excluding STX. The data of the
checksum take up two bytes and are a symbolic representation of the
numeric in a hexadecimal calculation system.
I tried an online CRC calculator and it returned a result of 1B (27 in decimal).
How do I do it? For "PIRI(781" FS NULL ETX it should be 0B.
The documentation incorrectly identifies the check value as a CRC. It is not. It is simply the exclusive-or of the noted bytes. The exclusive-or of 50 49 52 49 28 37 38 31 1c 00 03 is 0b. You then convert the 0b to hex (with an upper case B, i.e. 0B), and get 30 42.
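A short C sketch of that calculation (byte values taken from the captured packet above):
#include <stdio.h>

int main(void) {
    /* "PIRI(781" FS NUL ETX -- everything after STX up to and including ETX */
    const unsigned char block[] = { 0x50, 0x49, 0x52, 0x49, 0x28, 0x37,
                                    0x38, 0x31, 0x1c, 0x00, 0x03 };
    unsigned char x = 0;
    for (size_t i = 0; i < sizeof block; i++)
        x ^= block[i];                       /* running XOR of every byte */
    char ascii[3];
    snprintf(ascii, sizeof ascii, "%02X", x);
    printf("checksum byte: %02X, sent as \"%s\" (bytes %02X %02X)\n",
           x, ascii, ascii[0], ascii[1]);    /* 0B, sent as "0B" = 30 42 */
    return 0;
}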

midi file parsing, unrecognised event type

I have a problem trying to parse a MIDI file. I am trying to parse the notes files used by the Frets on Fire game (it just uses MIDI files, so I don't think this is relevant), if any of you are familiar with it; the problem I am having is a general MIDI problem. I have a file with a track called guitar part, and the hex, as viewed in a hex editor, is as follows:
4D 54 72 6B 00 00 1E 74 00 FF 03 0B 50 41 52 54 20 47 55 49 54 41 52 A9 20 90 61 40 9A 20 61 00 83 60 63 40 BC
My program parses this fine as follows:
4D M
54 T
72 R
6B K
00 < --
00 size of
1E track part
74 -- >
00 time of this event
FF event type (this is meta)
03 meta event type
0B length of data
50 "P"
41 "A"
52 "R"
54 "T"
20 " "
47 "G"
55 "U"
49 "I"
54 "T"
41 "A"
52 "R"
A9 time of event (variable length) 10101001
20 time of event (variable length) 00100000
90 event,channel (non-meta) 1001=note on,channel=0000
61 note on has 2 params this is the first
40 this is the second
9A variable time 10011010
20 variable time 00100000
This is where my problem lies: there is no event that has event type 0x6. Since 0x61 is 01100001, we have to assume it's non-meta, therefore the event type should be 6 (0110) and the channel should be (0001), but the MIDI specification contains no identification for this event. I've added a few of the bytes after this in case they are somehow relevant, but obviously at the moment my program hits the next byte, doesn't recognise the event, and bombs out.
61
00
83
60
63
40
BC
If anyone thinks they could shed any light on where my parsing logic has gone wrong, I'd be most appreciative. Sorry for the formatting; I couldn't think of a better way to illustrate my problem.
I have been using this site as a reference: http://www.sonicspot.com/guide/midifiles.html and it hasn't led me wrong so far. I figured this might be something directly relating to Frets on Fire, but it doesn't seem to be, as I downloaded another notes file for the game and that file did not contain this event.
Thanks in advance.
It's called running status. If an event is of the same type as the previous event, the MIDI status byte can be eliminated. So if the first byte after the timing info is < $80, use the previous status. In the case of your $61 byte, the previous status was $90, so it's Note On, channel 0. Which makes sense since the previous event was note number $61 velocity $40. This event is note number $61 velocity 0 (releasing the previously played note). The next event is note number $63 velocity $40.
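A minimal sketch of how a parser can handle this (a hypothetical helper, not from any particular MIDI library):
#include <stdio.h>

static unsigned char running_status = 0;   /* last voice-message status byte seen */

/* Given the first byte after an event's delta time, return the status byte that
 * applies. If the byte is < 0x80 it is data, so the previous status is reused
 * and the caller should treat this byte as the event's first parameter. */
static unsigned char event_status(unsigned char first, int *first_is_data) {
    if (first < 0x80) {
        *first_is_data = 1;
        return running_status;
    }
    *first_is_data = 0;
    if (first < 0xF0)                      /* channel voice messages set running status */
        running_status = first;
    return first;
}

int main(void) {
    int reuse;
    /* from the track above: the first event's status byte is 0x90 (Note On, ch 0) */
    unsigned char s1 = event_status(0x90, &reuse);
    printf("status %02X, first byte is data: %d\n", s1, reuse);
    /* the next event starts with 0x61 (< 0x80), so the 0x90 status is reused */
    unsigned char s2 = event_status(0x61, &reuse);
    printf("status %02X, first byte is data: %d\n", s2, reuse);
    return 0;
}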

Assembly Language: Memory Bytes and Offsets

I am confused as to how memory is stored when declaring variables in assembly language. I have this block of sample code:
val1 db 1,2
val2 dw 1,2
val3 db '12'
From my study guide, it says that the total number of bytes required in memory to store the data declared by these three data definitions is 8 bytes (in decimal). How do I go about calculating this?
It also says that the offset into the data segment of val3 is 6 bytes and the hex byte at offset 5 is 00. I'm lost as to how to calculate these bytes and offsets.
Also, reading val1 into memory will produce 0102, but reading val3 into memory produces 3132. Is the apostrophe represented by the 3, or where does that come from? How would val2 be read into memory?
You have two bytes, 0x01 and 0x02. That's two bytes so far.
Then you have two words, 0x0001 and 0x0002. That's another four bytes, making six to date.
Then you have two more bytes making up the characters of the string '12', which are 0x31 and 0x32 in ASCII (a). That's another two bytes, bringing the grand total to eight.
In little-endian format (which is what you're looking at here based on the memory values your question states), they're stored as:
offset value
------ -----
0 0x01
1 0x02
2 0x01
3 0x00
4 0x02
5 0x00
6 0x31
7 0x32
(a) The character set you're using in this case is the ASCII one (you can follow that link for a table describing all the characters in that set).
The byte values 0x30 thru 0x39 are the digits 0 thru 9, just as the bytes 0x41 thru 0x5A represent the upper-case alpha characters. The pseudo-op:
db '12'
is saying to insert the bytes for the characters '1' and '2'.
Similarly:
db 'Pax is a really cool guy',0
would give you the hex-dump representation:
addr +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +A +B +C +D +E +F +0123456789ABCDEF
0000 50 61 78 20 69 73 20 61 20 72 65 61 6C 6C 79 20 Pax is a really
0010 63 6F 6F 6C 20 67 75 79 00 cool guy.
val1 is two consecutive bytes, 1 and 2. db means "define byte". val2 is two consecutive words, i.e. 4 bytes, again 1 and 2; in memory they will be 1, 0, 2, 0, since you're on a little-endian machine. val3 is a two-byte string. 31 and 32 hex are 49 and 50 in decimal notation; they are the ASCII codes for the characters "1" and "2".
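If you want to see the same eight bytes from C, here is a rough equivalent (a sketch, assuming 16-bit words and a little-endian machine; with these member sizes the struct happens to contain no padding):
#include <stdio.h>
#include <string.h>

/* Rough C picture of the assembly data definitions above. */
struct data_segment {
    unsigned char  val1[2];    /* db 1,2  */
    unsigned short val2[2];    /* dw 1,2  */
    char           val3[2];    /* db '12' */
};

int main(void) {
    struct data_segment d = { {1, 2}, {1, 2}, {'1', '2'} };
    unsigned char bytes[sizeof d];
    memcpy(bytes, &d, sizeof d);
    for (size_t i = 0; i < sizeof d; i++)
        printf("offset %zu: 0x%02X\n", i, bytes[i]);  /* offset 5 prints 0x00, val3 starts at 6 */
    return 0;
}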
