Exploring CRC encoding principles (reverse engineering) - crc16

We are looking for a way to work out which hex input reproduces a given CRC-16 result.
Hello. I am currently trying to reproduce a CRC-16 result from specific hex data.
I have identified the input hex values and the CRC-16 algorithm, but no combination of those values produces the expected result, so I am asking here.
The input hex values are 0x170, 0xA, 0x00, 0x31.
The CRC-16 algorithm used is CRC-16-CCITT XMODEM (Poly = 0x1021, Init = 0x0000).
The expected result is 0x6121 or 0x2161.
My assumption is that 0x0170 and 0xA are combined and split in some way before being fed to the CRC-16 (for example, forming 0x017A and then splitting it into 0x01 and 0x7A). However, feeding the bytes in order (0x01, 0x70, 0x0A, 0x00, 0x31), in reverse order (0x31, 0x00, 0x0A, 0x70, 0x01), or in various other orders never produces the expected result.
Can you tell me how to find an input sequence or combination of the hex data that produces this result?
Waiting for your reply.
Thank you.
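A minimal brute-force sketch, assuming CRC-16/XMODEM as stated (poly 0x1021, init 0x0000, no reflection, no final XOR) and one possible single-byte split of the values above, could look like the following; the candidate split is only an assumption and can be swapped for any other decomposition:

from itertools import permutations

def crc16_xmodem(data: bytes) -> int:
    # Bit-by-bit CRC-16/XMODEM: poly 0x1021, init 0x0000, MSB first, no final XOR
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# One assumed split of 0x0170, 0x0A, 0x00, 0x31 into single bytes
candidates = [0x01, 0x70, 0x0A, 0x00, 0x31]
targets = {0x6121, 0x2161}

for order in permutations(candidates):
    if crc16_xmodem(bytes(order)) in targets:
        print("match:", [hex(b) for b in order])

If no ordering of this split matches, the same loop can be rerun over other decompositions (for example 0x01, 0x7A, 0x00, 0x31, or a little-endian layout of 0x0170).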

Related

Checksum/CRC reverse engineering of Microsoft NDIS packet

I am trying to decode a 42-byte packet which seems to include a 2-byte CRC/checksum field.
This is a Microsoft NDIS packet (type IPX) which is sent in HLK (WHQL) tests
I have decoded most parts of the NDIS header but I can't seem to figure out the CRC/Checksum algorithm
Sample of a 45-byte packet (just to explain the decoded fields):
char packet_bytes[] = {
0x02, 0xe4, 0x55, 0xee, 0x12, 0x56, 0x02, 0x93,
0x19, 0x40, 0x89, 0x00, 0x00, 0x1f, 0xaa, 0xaa,
0x03, 0x00, 0x00, 0x00, 0x81, 0x37, 0x4e, 0x44,
0x49, 0x53, 0x01, 0x49, 0x03, 0x00, 0x98, 0xd4,
0x58, 0x55, 0x25, 0xf5, 0x39, 0x00, 0x14, 0x00,
0x00, 0x00, 0x49, 0x4a, 0x4b
};
Raw: 02e455ee1256029319408900001faaaa0300000081374e4449530149030098d4585525f5390014000000494a4b
Decoded fields:
802.2 ethernet header: (Wireshark decoding)
02e455ee1256 : Destination
029319408900 : Source
001f : Length
Logical_link Control: (Wireshark decoding)
aa : DSAP
aa : SSAP
03 : Control
000000 : Organization
8137 : Type (Netware IPX/SPX)
NDIS header: (my estimation for NDIS decoded fields)
4e444953 : NDIS ascii String ("NDIS")
01 : Unknown
49 : payload counter start (first byte of payload, with increasing value afterwards)
0300 : Payload length ( = 0003)
98d4 : test identification number (equal on all packets of the same test)
5855 : Assumed to be checksum
25f53900 : Packet counter ( = 0039f525, Increases gradually per packet)
14000000 : Payload offset ( = 00000014), offset from start of NDIS header to start of payload
494a4b : Payload (3 bytes of increasing counter 49,4a,4b)
To try to understand the checksum algorithm with as few packet bytes as possible,
I captured the minimal packet size (42 bytes).
Those packets include the headers above but no payload at all,
and I tried to reverse engineer them using the RevEng CRC decoder, which fails to find any known CRC algorithm.
Sample 42-byte packets:
02e455ee1256029319408900001caaaa0300000081374e444953016b000098d495262502000014000000
02e455ee1256029319408900001caaaa0300000081374e44495301a2000098d481ef3802000014000000
02e455ee1256029319408900001caaaa0300000081374e4449530152000098d47f3f3b02000014000000
02e455ee1256029319408900001caaaa0300000081374e44495301d0000098d476c14302000014000000
02e455ee1256029319408900001caaaa0300000081374e44495301f7000098d4539a6602000014000000
02e455ee1256029319408900001caaaa0300000081374e44495301b6000098d444db7502000014000000
02e455ee1256029319408900001caaaa0300000081374e44495301a6000098d431eb8802000014000000
02e455ee1256029319408900001caaaa0300000081374e444953016a000098d40627b402000014000000
Reverse engineering the CRC:
reveng.exe -w 16 -s 02e455ee1256029319408900001caaaa0300000081374e444953016b000098d495262502000014000000 02e455ee1256029319408900001caaaa0300000081374e44495301a2000098d481ef3802000014000000 02e455ee1256029319408900001caaaa0300000081374e4449530152000098d47f3f3b02000014000000 02e455ee1256029319408900001caaaa0300000081374e44495301d0000098d476c14302000014000000 02e455ee1256029319408900001caaaa0300000081374e44495301f7000098d4539a6602000014000000 02e455ee1256029319408900001caaaa0300000081374e44495301b6000098d444db7502000014000000 02e455ee1256029319408900001caaaa0300000081374e44495301a6000098d431eb8802000014000000 02e455ee1256029319408900001caaaa0300000081374e444953016a000098d40627b402000014000000
reveng.exe: no models found
I also tried reverse engineering only the NDIS header part:
4e444953016b000098d495262502000014000000
4e44495301a2000098d481ef3802000014000000
4e4449530152000098d47f3f3b02000014000000
4e44495301d0000098d476c14302000014000000
4e44495301f7000098d4539a6602000014000000
4e44495301b6000098d444db7502000014000000
4e44495301a6000098d431eb8802000014000000
4e444953016a000098d40627b402000014000000
reveng.exe -w 16 -s 4e444953016b000098d495262502000014000000 4e44495301a2000098d481ef3802000014000000 4e4449530152000098d47f3f3b02000014000000 4e44495301d0000098d476c14302000014000000 4e44495301f7000098d4539a6602000014000000 4e44495301b6000098d444db7502000014000000 4e44495301a6000098d431eb8802000014000000 4e444953016a000098d40627b402000014000000
reveng.exe: no models found
Any help would be appreciated.
This seems to be the Internet Checksum, described in RFC 1071, calculated over the NDIS header part of the packet.
In short, you need to add up all of the header contents (except the 16-bit checksum field itself) as 16-bit values, then add the carries (if any) to the least significant 16 bits of the result (thus forming the one's complement sum), and finally, calculate one's complement of this one's complement sum by inverting all bits.
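In code form, a minimal sketch of this calculation (plain Python, assuming an even number of header bytes) could be:

def internet_checksum(header: bytes) -> int:
    # RFC 1071: sum the data as 16-bit big-endian words, fold the carries, then invert
    total = 0
    for i in range(0, len(header), 2):
        total += int.from_bytes(header[i:i + 2], "big")
    while total >> 16:                        # fold any carry back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                    # one's complement of the one's complement sum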
For the example packet you listed, the manual calculation steps would be the following.
Given the whole packet:
02e455ee1256029319408900001faaaa0300000081374e4449530149030098d4585525f5390014000000494a4b
Extract the NDIS header part only, without the payload:
4e4449530149030098d4585525f5390014000000
Split into 16-bit values:
4e44
4953
0149
0300
98d4
5855
25f5
3900
1400
0000
Substitute the checksum field with zeroes:
4e44
4953
0149
0300
98d4
0000
25f5
3900
1400
0000
Add all those 16-bit values together:
1A7A9
Here, the 16 least significant bits are A7A9 and the arithmetic carry is 1. So, add these together (as 16-bit words), to form the so-called one's complement sum:
0001
+ A7A9
= A7AA
Now, invert all bits (apply the bitwise NOT operation), to get the one's complement:
~ A7AA
= 5855
Place this checksum back into the field we temporarily zeroed out:
4e44
4953
0149
0300
98d4
5855
25f5
3900
1400
0000
If you only want to check the checksum, do the following.
First, take the original NDIS header (as 16-bit values):
4e44
4953
0149
0300
98d4
5855
25f5
3900
1400
0000
Then sum all of this up:
1FFFE
Again, add the carry to the 16-bit LSB part:
0001
+ FFFE
= FFFF
If all the bits of the result are 1 (i.e., if the result is FFFF), the check is successful.
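With the sketch above, both generating and checking the checksum for this header can be reproduced (the hex strings are the NDIS header from the packet, with and without the checksum field zeroed):

# NDIS header with the checksum field zeroed out
zeroed = bytes.fromhex("4e4449530149030098d4000025f5390014000000")
print(hex(internet_checksum(zeroed)))     # 0x5855 - matches the checksum field in the packet

# Original NDIS header including 5855: a valid header checksums to 0
# (the one's complement sum is FFFF, and the final inversion turns that into 0)
original = bytes.fromhex("4e4449530149030098d4585525f5390014000000")
print(hex(internet_checksum(original)))   # 0x0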

How are floating-point pixel values converted to integer values?

How does an image library (such as PIL, OpenCV, etc.) convert floating-point values to integer pixel values?
For example
import numpy as np
from PIL import Image
# Creates a random image and saves in a file
def get_random_img(m=0, s=1, fname='temp.png'):
    im = m + s * np.random.randn(60, 60, 3)  # For eg. min: -3.8947058634971179, max: 3.6822041760496904
    print(im[0, 0])  # for eg. array([ 0.36234732, 0.96987366, 0.08343])
    imp = Image.fromarray(im, 'RGB')  # (*)
    print(np.array(imp)[0, 0])  # [140 , 74, 217]
    imp.save(fname)
    return im, imp
For the above method, example values are shown in the comments (the data is randomly generated). My question is: how does the line marked (*) convert an ndarray (whose values can range from minus infinity to plus infinity) to pixel values between 0 and 255?
I tried to investigate the PIL.Image.fromarray method and eventually ended up at line #798, d.decode(data), within the PIL.Image.Image().frombytes method. I could not find the implementation of the decode method, so I was unable to see what computation goes on behind the conversion.
My initial thought was that maybe the method uses the minimum (mapped to 0) and maximum (mapped to 255) values of the array and maps all the other values between 0 and 255 accordingly. But upon investigation, I found that is not what happens. Moreover, how does it handle arrays whose values range between 0 and 1, or over any other range?
Some libraries assume that floating-point pixel values are between 0 and 1, and will linearly map that range to 0 and 255 when casting to 8-bit unsigned integer. Some others will find the minimum and maximum values and map those to 0 and 255. You should always explicitly do this conversion if you want to be sure of what happened to your data.
In general, a pixel does not need to be 8-bit unsigned integer. A pixel can have any numerical type. Usually a pixel intensity represents an amount of light, or a density of some sort, but this is not always the case. Any physical quantity can be sampled in 2 or more dimensions. The range of meaningful values thus depends on what is imaged. Negative values are often also meaningful.
Many cameras have 8-bit precision when converting light intensity to a digital number. Likewise, displays typically have an 8-bit intensity range. This is the reason many image file formats store only 8-bit unsigned integer data. However, some cameras have 12 bits or more, and some processes derive pixel data with a higher precision that one does not want to quantize away. Therefore formats such as TIFF and ICS allow you to save images in just about any numeric format you can think of.
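As a sketch of what "explicitly do this conversion" might look like with NumPy and PIL (the min/max mapping below is just one possible choice; clipping a known 0-1 range and multiplying by 255 would be another):

import numpy as np
from PIL import Image

im = np.random.randn(60, 60, 3)                        # arbitrary floats, roughly -4 to 4

# Explicit mapping: the array's own minimum goes to 0 and its maximum to 255
lo, hi = im.min(), im.max()
im8 = np.round((im - lo) / (hi - lo) * 255).astype(np.uint8)

imp = Image.fromarray(im8, 'RGB')                      # unambiguous: the data is already uint8
imp.save('temp.png')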
I'm afraid it has done nothing anywhere near as clever as you hoped! It has merely interpreted the first byte of the first float as a uint8, then the second byte as another uint8...
from random import random, seed
import numpy as np
from PIL import Image
# Generate repeatable random data, so other folks get the same results
np.random.seed(42)
# Make a single RGB pixel
im = np.random.randn(1, 1, 3)
# Print the floating point values - not that we are interested in them
print(im)
# OUTPUT: [[[ 0.49671415 -0.1382643 0.64768854]]]
# Save that pixel to a file so we can dump it
im.tofile('array.bin')
# Now make a PIL Image from it and print the uint8 RGB values
imp = Image.fromarray(im, 'RGB')
print(imp.getpixel((0,0)))
# OUTPUT: (124, 48, 169)
So, PIL has interpreted our data as RGB=124/48/169
Now look at the hex we dumped. It is 24 bytes long, i.e. 3 float64 (8-byte) values, one for red, one for green and one for blue for the 1 pixel in our image:
xxd array.bin
Output
00000000: 7c30 a928 2aca df3f 2a05 de05 a5b2 c1bf |0.(*..?*.......
00000010: 685e 2450 ddb9 e43f h^$P...?
And the first byte (7c) has become 124, the second byte (30) has become 48 and the third byte (a9) has become 169.
TLDR; PIL has merely taken the first byte of the first float as the Red uint8 channel of the first pixel, then the second byte of the first float as the Green uint8 channel of the first pixel and the third byte of the first float as the Blue uint8 channel of the first pixel.
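This byte-level reinterpretation can be confirmed with NumPy alone, using the same seed as above (a sketch of the effect, not PIL's actual code path):

import numpy as np

np.random.seed(42)
im = np.random.randn(1, 1, 3)      # the same single RGB pixel as above

raw = im.tobytes()                 # the 24 raw bytes of the three float64 values
print(list(raw[:3]))               # [124, 48, 169] - exactly the (R, G, B) PIL reported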

How to decode GSM-TCAP messages using asn1c generated code

I am using the c code generated by asn1c from the TCAP protocol specification (i.e., the corresponding ASN1 files).
I can successfully encode TCAP packets by the generated code.
However, trying to "decode" related byte streams fails.
A sample code is as follows.
// A real byte stream of a TCAP message:
unsigned char packet_bytes[] = {
0x62, 0x43, 0x48, 0x04, 0x00, 0x18, 0x02, 0x78,
0x6b, 0x1a, 0x28, 0x18, 0x06, 0x07, 0x00, 0x11,
0x86, 0x05, 0x01, 0x01, 0x01, 0xa0, 0x0d, 0x60,
0x0b, 0xa1, 0x09, 0x06, 0x07, 0x04, 0x00, 0x00,
0x01, 0x00, 0x14, 0x03, 0x6c, 0x1f, 0xa1, 0x1d,
0x02, 0x01, 0x00, 0x02, 0x01, 0x2d, 0x30, 0x15,
0x80, 0x07, 0x91, 0x64, 0x21, 0x92, 0x05, 0x31,
0x74, 0x81, 0x01, 0x01, 0x82, 0x07, 0x91, 0x64,
0x21, 0x00, 0x00, 0x90, 0x02
};
// Initializing ...
TCAP_TCMessage_t _pdu, *pdu = &_pdu;
memset(pdu, 0, sizeof(*pdu));
// Decoding:
asn_dec_rval_t dec_ret = ber_decode(NULL, &asn_DEF_TCAP_TCMessage, (void **) &pdu, packet_bytes, sizeof(packet_bytes));
While the message type ("Begin", in this case) is correctly detected, the other parameters are not parsed.
Using other encoding rules, i.e., aper_decode() and uper_decode(), also fails.
I would be thankful if anyone can describe how to use the auto-generated c code for decoding (parsing) a byte string of TCAP messages.
@Vasil, thank you very much for your answer.
Which asn1c are you using (git commit id) and where do you get it from, as there are quite a lot of forks out there?
I use mouse07410's branch.
How do you know that Begin is correctly detected?
From the present field of the pdu variable, which is set by ber_decode (you can see the pdu type in the sample code).
From the "Wireshark" output for this byte stream, I know that the correct type of the message is Begin.
You could try compiling with -DASN_EMIT_DEBUG=1 in CFLAGS (or
-DEMIT_ASN_DEBUG=1 depending on the asn1c version you are using) to get some more debug messages.
Thanks for providing the hint; it was helpful.
The problem was related to the asn1 files I was using.
I used osmocom asn1 files and compiled them by
ASN=../asn
asn1c $ASN/DialoguePDUs.asn $ASN/tcap.asn $ASN/UnidialoguePDUs.asn
in which DialoguePortion is defined as follows (note that the first definition is commented out):
--DialoguePortion ::= [APPLICATION 11] EXPLICIT EXTERNAL
-- WS adaptation
DialoguePortion ::= [APPLICATION 11] IMPLICIT DialogueOC
DialogueOC ::= OCTET STRING
To be able to decode TCAP messages,
one needs to use the former, commented-out definition (as in the standard), i.e., DialoguePortion should be defined as
DialoguePortion ::= [APPLICATION 11] EXPLICIT EXTERNAL
After switching to this definition in the asn1 files and recompiling them, the problem was solved.
P.S.: This question is also related to my problem.
I am using the c code generated by asn1c from the TCAP protocol specification
Which asn1c are you using (git commit id) and where do you get it from, as there are quite a lot of forks out there?
While the message type ("Begin", in this case) is correctly detected, the other parameters are not parsed.
How do you know that Begin is correctly detected?
Using other encoding rules, i.e., aper_decode() and uper_decode(), also fails.
There is no point in trying other encodings as they are not binary compatible.
I would be thankful if anyone can describe how to use the auto-generated c code for decoding (parsing) a byte string of TCAP messages.
You are using it correctly and probably there is a bug somewhere in the BER decoder.
You could try compiling with -DASN_EMIT_DEBUG=1 in CFLAGS (or -DEMIT_ASN_DEBUG=1 depending on the asn1c version you are using) to get some more debug messages.

Understanding stanag 4609 klv format

I am trying to parse a STANAG 4609 KLV stream from an external camera.
To begin with, I am trying to work out the altitude value received in the stream.
According to the STANAG 4609 documentation, the value is 2 bytes long, in feet, represented as a float.
I know that the camera altitude is approximately 39.8 meters, but I cannot interpret the 2 bytes I receive as that value (in feet).
The 2 bytes I received are {12, 23}.
How can I interpret them correctly?
In STANAG 4609 KLV, floating point values are encoded as integers. You can check MISB ST0601 for the particular data element you're interested in. It will give you the conversion formula to convert the 2-byte integer into the correct floating point value.
Assuming you're referring to the Sensor True Altitude (tag 15), the conversion formula is (19900/65535) * int_value - 900.
Applying this to your data:
Interpret the bytes [12, 23] ([0x0C, 0x17] in hexadecimal) as a big-endian integer: 0x0C17 equals 3095.
Apply the formula: (19900/65535) * 3095 - 900 ≈ 39.81 meters
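A short sketch of that conversion in Python (assuming MISB ST0601 tag 15, Sensor True Altitude, and big-endian byte order):

raw = bytes([12, 23])                           # the two bytes from the stream (0x0C, 0x17)
uint_val = int.from_bytes(raw, "big")           # 0x0C17 = 3095
altitude_m = (19900 / 65535) * uint_val - 900   # ST0601 mapping for Sensor True Altitude
print(round(altitude_m, 2))                     # ~39.81 metres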

What does this CRC implementation mean by having a seed value?

I am trying to implement a CRC algorithm in Verilog for the SENT sensor protocol.
In a document put out by the SAE, they say their CRC uses the generator polynomial
x^4 + x^3 + x^2 + 1 and a seed value of 0101. I understand the basic concept of calculating a CRC using XOR division and saving the remainder, but every time I try to compute a CRC I get the wrong answer.
I know this because in the same document they have a list of examples with data bits and the corresponding checksum.
For example, the series of hex values x"73E73E" has checksum 15 and the series x"748748" has checksum 3. Is there anyone who can arrive at these values using the information above? If so, how did you do it?
Here are a couple of sentences copied from the document: "The CRC checksum can be implemented as a series of shift left by 4 (multiply by 16) followed by a 256 element array lookup. The checksum is determined by using all data nibbles in sequence and then checksumming the result with an extra zero value."
Take a look at RevEng, which can determine the CRC parameters from examples (it would need more examples than you have provided).
The seed is simply the initial value of your CRC calculation. It is usual to have a non-zero seed to avoid the CRC result being zero for all-zero data.
I just had to find out the same thing. I was checking an existing implementation of this CRC algorithm which was cryptic, albeit working. So I wanted to get the "normal" CRC algorithm to give me the same numbers so I could refactor without problems.
For the numbers you gave I get 0x73E73E => 12, 0x748748 => 3.
As you can read in Koopman the seed value "Prevents all-zero data word from resulting in all-zero check sequence".
I wrote my standard implementation using the algorithm from Wikipedia in Python:
def nCRCcalc(poly, data, crc, n):
    crctemp = (data << n) | crc
    # data width assumed to be 32 bits
    shift = 32
    while shift > n:
        shift = shift - 1
        mask = 1 << shift
        if mask & crctemp:
            crctemp = crctemp ^ (poly << (shift - n))
    return crctemp
poly is the polynomial, data is the data, crc is the seed value, and n is the number of CRC bits. So in this case poly is 29, crc is 5, and n is 4.
You might need to reverse the nibble order, depending on the format in which you receive your data. Also, this is obviously not the table-based implementation; it is just for checking.
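For reference, calling it with the numbers discussed above reproduces the values quoted earlier in this answer:

# poly = 29 (x^4 + x^3 + x^2 + 1), seed = 5 (0101), 4-bit CRC
print(nCRCcalc(29, 0x73E73E, 5, 4))   # -> 12
print(nCRCcalc(29, 0x748748, 5, 4))   # -> 3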
