Converting two 8-bit values to a 16-bit unsigned int and back in Lua

I'm having trouble converting raw data I'm receiving over Ethernet into 16-bit integer values.
For example, I might receive this:
\x00\x0A\x00\x00\x00\x09\x01\x10\x00\x01\x00\x10\x02\x00\x00
I need to take two of these raw data bytes and convert them to a 16-bit unsigned value.
So far I've tried tonumber(), but I can't find a way to make it combine the two bytes. I've seen some examples on here that use string.gsub() to do the conversion, but those all deal with an ASCII representation of the raw data.
TIA

Use string.byte() on a single character to turn it into its numerical value, then multiply the more significant one by 256 (or, on Lua 5.3 or newer, shift it left by 8 bits) and add them.
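A minimal sketch of that approach, assuming the more significant byte arrives first (swap hi and lo for little-endian data):
local s = "\x01\x10"                    -- two raw bytes off the wire
local hi, lo = string.byte(s, 1, 2)
local value = hi * 256 + lo             -- or (hi << 8) | lo on Lua 5.3+
print(value)                            --> 272 (0x0110)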

If you're on Lua 5.3 or newer, also try string.unpack. You can select the byte order with < and >, and the format "I2" reads a 16-bit unsigned integer ("i2" would read a signed one):
s = "\x00\x0A\x00\x00\x00\x09\x01\x10\x00\x01\x00\x10\x02\x00\x00\x00"
print("<", ">")
for i = 1, #s, 2 do
  -- the extra parentheses drop string.unpack's second return value (the next read position)
  print((string.unpack("<I2", s, i)), (string.unpack(">I2", s, i)))
end
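For the "and back" direction from the title, string.pack (also Lua 5.3+) builds the raw bytes from a number; a small sketch:
local bytes = string.pack(">I2", 272)   -- 16-bit unsigned, big-endian
print(bytes:byte(1, 2))                 --> 1   16   (0x01, 0x10)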

Related

Why is there no NSShort or NSByte?

Newbie in iOS programming here.
I was looking at the Foundation Data Types Reference and have started to use the NSInteger typedef on the assumption that it will make my app more portable. However, I often have a use for 16-bit and 8-bit integers and I don't see an NSShort or NSByte.
It seems wasteful to allocate a 32- or 64-bit variable for something that has a small range, say 0 to 12.
Are there any symbols that are defined for that?
Use uint8_t and uint16_t if you want types that are a specific size. There are also similar types for 32- and 64-bit values.

How to convert DWORD to FILETIME in Lua?

I am trying to read a file that has two DWORDs for the FILETIME (this is a prefetch file).
I read at offset 0x81 (0x80 + 1 because of 1-based indexing in Lua). How do I go about taking the 8 bytes and converting them into a FILETIME using only Lua?
Starting at 0x80 in my hex editor, I have:
FB54B341B70CCF01
Needs to correlate to 01/08/2014
What is FILETIME
The Windows platform defines FILETIME to be a 64-bit integer "count of 100ns intervals since January 1, 1601 UTC".
You will have at least two challenges dealing with FILETIME in Lua.
First, a FILETIME is a 64-bit integer, and Lua (before 5.3) stores numbers internally as IEEE double precision floats, which carry only 53 bits of integer precision. To the precision of the envelope I just scribbled on, you need about 57 significant bits to name any time today as a FILETIME.
(Aside: I estimated that by noticing that there are about 1e7*pi seconds in a year, 1e7 100ns ticks in a second, and today is about 413 years after the FILETIME epoch. So dates in 2014 need about log2(413e14 * pi) bits, which is just under 57.)
Second, pure Lua doesn't have easy-to-use functions for converting binary data structures to and from native Lua data types. It isn't difficult to build such functions out of string.byte() and string.sub(), and that is even safe to do since Lua strings are 8-bit clean. But it is something you have to build yourself, or find from a third-party source.
But be aware that although there are binary structure libraries out there, many of them only provide limited support for 64-bit integers due to the limitations of Lua numbers. You may be better served by a hand-crafted module in C that stores a FILETIME in a userdata and provides suitable operators to allow them to be compared, converted to and from a string, and so forth.
Your Example
Starting at 0x80 in my hex editor, I have:
FB54B341B70CCF01
Needs to correlate to 01/08/2014
Windows on a PC is a little-endian platform. That means that values are stored with the least-significant byte at the lowest address. So we can rewrite your sample timestamp to be more readable by reversing the bytes:
01CF0CB741B354FB
As expected, the 57th bit is the most significant set bit, so this value is plausible for this century.
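A later note: Lua 5.3 added 64-bit integers and string.unpack, which shrink the whole conversion to a few lines. A sketch, assuming the eight raw bytes from the question are in a Lua string (11644473600 is the number of seconds between the FILETIME epoch and the Unix epoch):
local raw = "\xFB\x54\xB3\x41\xB7\x0C\xCF\x01"       -- the 8 bytes at offset 0x80
local filetime = string.unpack("<I8", raw)           -- little-endian 64-bit unsigned
local unixtime = filetime // 10000000 - 11644473600  -- 100ns ticks -> Unix seconds
print(os.date("!%Y-%m-%d", unixtime))                --> 2014-01-08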

Working on a 16-bit unsigned integer (uint16_t)

I want to generate a 16-bit unsigned integer (uint16_t) which could represent the following:
First 2 digits representing some version, like 1, 2, 3, etc.
Next 3 digits representing another number, maybe 123, 345, 071, etc.
And the last 11 digits representing a number like T234, T566, etc.
How can we do this using Objective-C? I would like to parse this data later on to get these components back. Please advise.
I think you are misunderstanding just what uint16_t means. It doesn't mean a 16-digit decimal number (which would be any number between 0 and 9,999,999,999,999,999). It means an unsigned number that can be expressed using 16 bits. The range of such a value is 0 to 65535 in decimal. If you really wanted to store the numbers you are talking about, you would need 54 bits. You would also be making things very difficult for yourself, since you wouldn't easily be able to extract the first two decimal digits from that bit sequence; you'd have to treat the number as a decimal value and take it modulo 100, you couldn't just say it's bits 1 to 8.
A bit-field packing scheme could help you. You would take a 64-bit value (uint64_t) and say that within this value bits 1-7 are the version (a value up to 127), bits 8-17 are the second number (a value up to 1023), and bits 18-63 are your third number (those 46 bits can store a number up to 70,368,744,177,663).
All this is technically possible, but you are really going to be making things hard for yourself. It looks like you are storing a version, minor version, and build number, and most people do that using strings, not numbers.
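For illustration only, here is a sketch of that bit-field layout using Lua 5.3 integer operators (Lua rather than Objective-C to match the rest of this page; the same shifts and masks translate directly to C):
-- bits 1-7: version, bits 8-17: second number, bits 18-63: third number
local function pack_fields(version, second, third)
  return version | (second << 7) | (third << 17)
end

local function unpack_fields(v)
  return v & 0x7F, (v >> 7) & 0x3FF, (v >> 17) & 0x3FFFFFFFFFFF
end

print(unpack_fields(pack_fields(3, 345, 70000)))   --> 3   345   70000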

Separating a decimal value into least & most significant bytes

I'm working on some 65802 code (don't ask :P) and I need to separate a 16-bit value into two 8-bit bytes to store it in memory. How would I go about this?
EDIT:
Also, how would I take two similar bytes and combine them into one 16-bit value?
EDIT:
To clarify, many of the solutions available on the internet are not possible with the programming language I'm using (a version of MS-BASIC). I can't take a modulo, and I can't left- or right-shift. I've figured out that I can put the two bytes together by multiplying the high byte by 256 and adding it to the low byte, but how would I reverse the process?
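For what it's worth, the reverse is the same arithmetic run backwards: divide by 256, truncate, and subtract. A sketch in Lua for illustration, where math.floor stands in for BASIC's INT() (no modulo or shifts needed):
local value = 0x1234                    -- 4660
local high = math.floor(value / 256)    -- 18 (0x12)
local low  = value - high * 256         -- 52 (0x34)
print(high, low)                        --> 18   52
print(high * 256 + low)                 --> 4660, recombined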

How could I guess a checksum algorithm?

Let's assume that I have some packets with a 16-bit checksum at the end. I would like to guess which checksum algorithm is used.
For a start, from the dump data I can see that a one-byte change in the packet's payload totally changes the checksum, so I can assume that it isn't some kind of simple XOR or sum.
Then I tried several variations of CRC16, but without much luck.
This question might be biased more towards cryptography, but I'm really interested in any easy-to-understand statistical tools to find out which CRC this might be. I might even turn to trying out different CRC algorithms if everything else fails.
Background story: I have a serial RFID protocol with some kind of checksum. I can replay messages without problem and interpret the results (without the checksum check), but I can't send modified packets because the device drops them on the floor.
Using existing software, I can change the payload of the RFID chip. However, the unique serial number is immutable, so I don't have the ability to check every possible combination. Although I could generate dumps of values incrementing by one, that's not enough to make an exhaustive search applicable to this problem.
Dump files with data are available if the question itself isn't enough :-)
Need reference documentation? A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS is a great reference, which I found after asking this question.
In the end, after the very helpful hint in the accepted answer that it's CCITT, I used this CRC calculator and xored the generated checksum with the known checksum to get 0xffff, which led me to the conclusion that the final xor is 0xffff instead of CCITT's 0x0000.
There are a number of variables to consider for a CRC:
Polynomial
Number of bits (16 or 32)
Normal (LSB first) or Reverse (MSB first)
Initial value
How the final value is manipulated (e.g. subtracted from 0xffff or xored with a constant)
Typical CRCs:
LRC: Polynomial=0x81; 8 bits; Normal; Initial=0; Final=as calculated
CRC16: Polynomial=0xa001; 16 bits; Normal; Initial=0; Final=as calculated
CCITT: Polynomial=0x1021; 16 bits; reverse; Initial=0xffff; Final=0x1d0f
Xmodem: Polynomial=0x1021; 16 bits; reverse; Initial=0; Final=0x1d0f
CRC32: Polynomial=0xedb88320; 32 bits; Normal; Initial=0xffffffff; Final=inverted value
ZIP32: Polynomial=0x04c11db7; 32 bits; Normal; Initial=0xffffffff; Final=as calculated
The first thing to do is to get some samples by changing, say, the last byte. This will help you figure out the number of bytes in the CRC.
Is this a "homemade" algorithm? In that case it may take some time. Otherwise, try the standard algorithms.
Try changing either the MSB or the LSB of the last byte, and see how this changes the CRC. This will give an indication of the direction.
To make it more difficult, there are implementations that manipulate the CRC so that it will not affect the communications medium (protocol).
From your comment about RFID, it implies that the CRC is communications related. Usually CRC16 is used for communications, though CCITT is also used on some systems.
On the other hand, if this is UHF RFID tagging, then there are a few CRC schemes - a 5-bit one and some 16-bit ones. These are documented in the ISO standards and the IPX data sheets.
IPX: Polynomial=0x8005; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6B: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
ISO 18000-6C: Polynomial=0x1021; 16 bits; Reverse; Initial=0xffff; Final=as calculated
Data must be padded with zeroes to make a multiple of 8 bits
ISO CRC5: Polynomial=custom; 5 bits; Reverse; Initial=0x9; Final=shifted left by 3 bits
Data must be padded with zeroes to make a multiple of 8 bits
EPC class 1: Polynomial=custom 0x1021; 16 bits; Reverse; Initial=0xffff; Final=post processing of 16 zero bits
Here is your answer!
Having worked through your logs, I can tell you the CRC is the CCITT one. The first byte 0xd6 is excluded from the CRC.
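To make that concrete, here is a minimal bit-at-a-time CRC-16/CCITT sketch in Lua 5.3+ (polynomial 0x1021, initial value 0xffff, MSB first), with the final xor the asker deduced left as a parameter. It is a sketch of the standard algorithm, not something tested against the actual RFID dumps:
local function crc16_ccitt(s, final_xor)
  local crc = 0xFFFF                          -- initial value
  for i = 1, #s do
    crc = crc ~ (s:byte(i) << 8)              -- xor next byte into the high bits
    for _ = 1, 8 do
      if crc & 0x8000 ~= 0 then
        crc = ((crc << 1) ~ 0x1021) & 0xFFFF  -- shift out MSB, apply polynomial
      else
        crc = (crc << 1) & 0xFFFF
      end
    end
  end
  return crc ~ (final_xor or 0x0000)
end

-- Per the finding above: skip the first byte (0xd6), then xor with 0xffff.
-- local crc = crc16_ccitt(packet_without_first_byte_and_checksum, 0xFFFF)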
It might not be a CRC; it might be an error-correcting code like Reed-Solomon.
ECC codes are often a substantial fraction of the size of the original data they protect, depending on the error rate they want to handle. If the messages are longer than about 16 bytes, 2 bytes of ECC wouldn't be enough to be useful. So if the message is large, you're most likely correct that it's some sort of CRC.
I'm trying to crack a similar problem here, and I found a pretty neat website that will take your file, run 47 different checksum algorithms on it, and show the results. If the algorithm used to calculate your checksum is any of these, a simple text search will find it among the list of checksums produced.
The website is https://defuse.ca/checksums.htm
You would have to try every possible checksum algorithm and see which one generates the same result. However, there is no guarantee about what content was included in the checksum. For example, some implementations skip whitespace, which leads to different results.
I really don't see why somebody would want to know that, though.
