I am developing a small MQTT client to subscribe to and monitor certain topics. For the most part it works well, assuming a one-byte length field (the 2nd byte). But I sometimes get this 0x30 response to a subscribe that I can't understand. It seems to have a multi-byte length, yet neither length byte has its MSB set.
Header
0000: 3031312700127b6c756d6f7375727d2f 011'..{lumosur}/
0010: 6461746574696d65323032302d30322d datetime2020-02-
0020: 30342032333a32313a3437311900127b 04 23:21:471...{
How do I figure this out?
Thanks for your help.
mm.
Never mind. Although I was staring at that problem for hours, it dawned on me just after I posted the question. The protocol doesn't have a problem: I was reading the data wrongly.
This is a binary protocol, so I have to read each block according to the length field in its header. I wasn't doing that correctly, so the bytes I assumed were header data weren't actually aligned to a packet boundary.
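For anyone who runs into the same confusion, this is roughly what the read loop has to do. Below is a minimal Go sketch (not my actual client code, and the packet bytes are made up): read the packet-type byte, decode MQTT's variable-length Remaining Length field (a set MSB means another length byte follows), then consume exactly that many bytes before starting on the next packet.

package main

import (
    "bufio"
    "bytes"
    "errors"
    "fmt"
    "io"
)

// readRemainingLength decodes MQTT's variable-byte "Remaining Length" field:
// the low 7 bits of each byte carry data, and a set MSB means another length
// byte follows (at most 4 length bytes in total).
func readRemainingLength(r io.ByteReader) (int, error) {
    length := 0
    multiplier := 1
    for i := 0; i < 4; i++ {
        b, err := r.ReadByte()
        if err != nil {
            return 0, err
        }
        length += int(b&0x7f) * multiplier
        if b&0x80 == 0 { // MSB clear: this was the last length byte
            return length, nil
        }
        multiplier *= 128
    }
    return 0, errors.New("malformed remaining length")
}

func main() {
    // Two made-up PUBLISH (0x30) packets back to back, each with its own length.
    data := []byte{0x30, 0x02, 'h', 'i', 0x30, 0x02, 'y', 'o'}
    r := bufio.NewReader(bytes.NewReader(data))
    for {
        header, err := r.ReadByte()
        if err == io.EOF {
            break
        }
        n, _ := readRemainingLength(r)
        body := make([]byte, n)
        io.ReadFull(r, body)
        fmt.Printf("type=0x%02x len=%d body=%q\n", header&0xf0, n, body)
    }
}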
Sorry to bother you :D
Michaela
Related
I need to work with Bluetooth hardware and send it raw byte data. I know the format, but I'm not sure about the last byte:
[0x6E,0x01,0x00,0x24,0x93]
The last byte is byte 5 (verify). The preceding bytes are supposed to add up to it, but I am not sure how the value 0x93 came about.
Is there any specific logic that needs to be applied here?
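For what it's worth, one very common scheme for a trailing verify byte is a simple sum of the preceding bytes modulo 256, and that happens to match this frame: 0x6E + 0x01 + 0x00 + 0x24 = 0x93. A minimal Go sketch, assuming (and it is only an assumption) that this is the scheme your hardware uses:

package main

import "fmt"

// checksum returns the modulo-256 sum of the payload bytes -- one common way
// a trailing "verify" byte is computed. Whether this device really uses this
// scheme is an assumption; check it against more captured frames.
func checksum(payload []byte) byte {
    var sum byte
    for _, b := range payload {
        sum += b // byte arithmetic wraps at 256 automatically
    }
    return sum
}

func main() {
    frame := []byte{0x6E, 0x01, 0x00, 0x24, 0x93}
    want := frame[len(frame)-1]
    got := checksum(frame[:len(frame)-1])
    fmt.Printf("computed=0x%02X expected=0x%02X match=%v\n", got, want, got == want)
}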
I'm writing a parser for the SQLi protocol ("turbo") used by Informix. I have most opcodes covered by now, but I don't have a clue about SQ_FETCHBLOB yet. Reverse engineering the driver is difficult since it copies values from its internal state machine, which is itself hard to track. All I know is that SQ_FETCHBLOB is followed by 56 bytes of data, some of which seem to be the BLOB's total size and fetch offset.
Does anyone have some information on how to decode SQ_FETCHBLOB as used by Informix SQLi?
I can't comment on the specifics of the SQ_FETCHBLOB SQLI packet type, but you might want to look at the file $INFORMIXDIR/incl/esql/blob.h, which is shipped with the Client SDK. It describes the tblob_t data structure, which is 56 bytes long.
I'm programming a server that accepts an incoming connection from a client and then reads from it (via net.Conn.Read()). I'm going to read the message into a []byte slice and then process it (the processing itself isn't relevant here), but the question is: how do I find out the length of the message first, so that I can create a slice of the appropriate length?
It is entirely dependent on the design of the protocol you are attempting to read from the connection.
If you are designing your own protocol you will need to design some way for your reader to determine when to stop reading or predeclare the length of the message.
For binary protocols, you will often find some sort of fixed size header that will contain a length value (for example, a big-endian int64) at some known/discoverable header offset. You can then parse the value at the length offset and use that value to read the correct amount of data once you reach the offset beginning the variable length data. Some examples of binary protocols include DNS and HTTP/2.
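As a concrete (if simplified) version of that binary case, here is a minimal Go sketch. The framing used here -- an 8-byte big-endian length prefix followed by the payload -- is just an assumed example, not any particular protocol; in a real program the reader would be your net.Conn.

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

// readMessage reads one message framed as a big-endian uint64 length prefix
// followed by exactly that many bytes of payload. Real code should sanity
// check the length before allocating.
func readMessage(r io.Reader) ([]byte, error) {
    var length uint64
    if err := binary.Read(r, binary.BigEndian, &length); err != nil {
        return nil, err
    }
    payload := make([]byte, length)
    if _, err := io.ReadFull(r, payload); err != nil {
        return nil, err
    }
    return payload, nil
}

func main() {
    // Build a fake wire message: 8-byte length prefix + payload.
    var buf bytes.Buffer
    binary.Write(&buf, binary.BigEndian, uint64(5))
    buf.WriteString("hello")

    msg, err := readMessage(&buf)
    fmt.Printf("msg=%q err=%v\n", msg, err)
}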
For text protocols, when to stop reading is encoded in the parsing rules. Some examples of text protocols include HTTP/1.x and SMTP. An HTTP/1.1 request, for example, looks something like this:
METHOD /path HTTP/1.1\r\n
Header-1: value\r\n
Header-2: value\r\n
Content-Length: 20\r\n
\r\n
This is the content.
The first line (where a line is defined as ending with \r\n) must include the HTTP method, followed by the path (which can be absolute or relative), followed by the version.
Subsequent lines are defined to be headers, made up of a key and a value.
The key includes any text from the beginning of the line up to, but not including, the colon. After the colon comes a variable number of insignificant space characters followed by the value.
One of these headers is special and denotes the length of the upcoming body: Content-Length. The value for this header contains the number of bytes to read as the body. For our simple case (ignoring trailers, chunked encoding, etc.), we will assume that the end of the body denotes the end of the request and another request may immediately follow.
After the last header comes a blank line denoting the end of the header block and the beginning of the body (\r\n\r\n).
Once you are finished reading all the headers, take the value you parsed from the Content-Length header and read that many bytes as the body.
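To tie those steps together, here is a stripped-down Go sketch that parses the toy request shown above. It ignores trailers, chunked encoding, error handling and everything else a real HTTP parser needs; it only illustrates "read the request line, read headers until the blank line, then read Content-Length bytes of body".

package main

import (
    "bufio"
    "fmt"
    "io"
    "strconv"
    "strings"
)

func main() {
    request := "GET /path HTTP/1.1\r\n" +
        "Header-1: value\r\n" +
        "Content-Length: 20\r\n" +
        "\r\n" +
        "This is the content."

    r := bufio.NewReader(strings.NewReader(request))

    // Request line: METHOD /path HTTP/1.1
    requestLine, _ := r.ReadString('\n')
    fmt.Print("request line: ", requestLine)

    // Headers: key/value pairs until a blank line ends the header block.
    contentLength := 0
    for {
        line, _ := r.ReadString('\n')
        line = strings.TrimRight(line, "\r\n")
        if line == "" {
            break // blank line: headers are done, the body follows
        }
        key, value, _ := strings.Cut(line, ":")
        if strings.EqualFold(key, "Content-Length") {
            contentLength, _ = strconv.Atoi(strings.TrimSpace(value))
        }
    }

    // Body: read exactly Content-Length bytes.
    body := make([]byte, contentLength)
    io.ReadFull(r, body)
    fmt.Printf("body (%d bytes): %q\n", contentLength, string(body))
}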
For more information check out:
https://nerdland.net/2009/12/designing-painless-protocols/
http://www.catb.org/esr/writings/taoup/html/ch05s03.html
https://www.ietf.org/rfc/rfc3117.txt
In the end what I did was create a 1024-byte slice, read the message from the connection into it, and then shorten the slice to the number of bytes read.
The solution selected as correct is not good. What happens if the message is more than 1024 bytes? You need a protocol of the form TLV (Type-Length-Value), or just LV if you don't have different types of messages. For example, Type can be 2 bytes and Length can be 2 bytes. You always read 4 bytes first, then, based on the length indicated in bytes 2 and 3, you know how many bytes follow and you read the rest.
There is something else you need to take into account: TCP is stream oriented, so in order to read a complete message you might need to call read several times. Read this (it is for Java but useful for any language): How to read all of Inputstream in Server Socket JAVA
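Here is a minimal Go sketch of that TLV framing (2-byte type plus 2-byte length, both big-endian -- the field sizes are just the example from above). Note that io.ReadFull keeps calling Read until the buffer is full, which takes care of the point about TCP possibly delivering a message in several pieces.

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

// readTLV reads one Type-Length-Value record: a 4-byte header (2-byte type,
// 2-byte length, both big-endian) followed by exactly `length` bytes of value.
// io.ReadFull loops over Read for us, so partial TCP reads are handled.
func readTLV(r io.Reader) (msgType uint16, value []byte, err error) {
    header := make([]byte, 4)
    if _, err = io.ReadFull(r, header); err != nil {
        return 0, nil, err
    }
    msgType = binary.BigEndian.Uint16(header[0:2])
    length := binary.BigEndian.Uint16(header[2:4])
    value = make([]byte, length)
    _, err = io.ReadFull(r, value)
    return msgType, value, err
}

func main() {
    // Fake wire data: type 1, length 5, value "hello". In a real server the
    // reader would be the accepted net.Conn.
    wire := []byte{0x00, 0x01, 0x00, 0x05, 'h', 'e', 'l', 'l', 'o'}
    t, v, err := readTLV(bytes.NewReader(wire))
    fmt.Printf("type=%d value=%q err=%v\n", t, v, err)
}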
Hi, I am using dispatch_io_read with a socket in Swift 2 on Xcode 7 beta 3. It looks like the read will just hang when the data received is smaller than the length I specified. For example,
If I do
dispatch_io_read(channel!, 0, 1000, inputQueue!, myReadHandler)
and the data from the server is less than 1000 bytes, myReadHandler will never get called.
To work around this I have to read the bytes one by one. Is there a better solution?
Thanks.
This is probably a little late, but for anyone who has the same problem:
Apple's documentation says:
"The length parameter indicates the number of bytes that should be read from the I/O channel. Pass SIZE_MAX to keep reading until EOF is encountered (for a channel created from a disk-based file this happens when reading past the end of the physical file)."
So, simply using SIZE_MAX will read all the available data attached to the file descriptor.
Unfortunately, this seems to not work due to a bug in Swift 3 with DispatchIO.read().
Not sure if this is a blatantly terrible misunderstanding, but I've been having some trouble with inspecting memory. Here's the output from gdb when examining with x/8w.
0xbffff7a0: 0xb7f9f729 0xb7fd6ff4 0xbffff7d8 0x08048529
0xbffff7b0: 0xb7fd6ff4 0xbffff870 0xbffff7d8 0x00000000
So I'm assuming that 0xb7f9f729 is at 0xbffff7a0, then 0xb7fd6ff4 is at 0xbffff7a4, etc. Could you explain how this works byte-wise? Is that 16 bytes from one printed line to the next, and does each 4 bytes hold its own word?
I'm having a hard time grasping this memory concept, anyone know a good resource that makes learning it easier?
Yes and yes to both questions.
gdb(1) understands the w modifier in your x/8w command as "four-byte words", so you are printing 32 bytes in groups of four. gdb(1) just lays them out in short lines with offsets for readability.
I should mention that the exact values printed actually depend on the platform endianness.
You would get a similar but probably more understandable layout with x/32.
It's all in the fine manual.
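To make the endianness remark concrete, here is a tiny Go sketch (assuming a little-endian machine such as x86, which is what those stack addresses suggest) showing how the word 0xb7f9f729 that gdb prints at 0xbffff7a0 is laid out byte by byte in memory:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    // The first word gdb printed, at address 0xbffff7a0.
    word := uint32(0xb7f9f729)
    addr := uint32(0xbffff7a0)

    // On a little-endian machine the least significant byte sits at the
    // lowest address, so the word is stored as 29 f7 f9 b7.
    var buf [4]byte
    binary.LittleEndian.PutUint32(buf[:], word)
    for i, b := range buf {
        fmt.Printf("0x%x: 0x%02x\n", addr+uint32(i), b)
    }
}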