My project contains a Sender/Receiver framework that can only talk one-way; there is no return channel from the receiver to the sender.
The sender compresses part of the data it sends to the receiver using zlib.
I want to make my receiver resilient to crashes/reboots/restarts. Is it possible to join the zlib stream at a random point somehow?
Both sender and receiver use Z_SYNC_FLUSH.
Some ideas I had:
Saving the state structures to disk and reloading them after the receiver restarts.
Replacing Z_SYNC_FLUSH with Z_FULL_FLUSH.
I tried saving the first chunk of zlib-compressed data, restarting the receiver, resending that first chunk, and then continuing the stream from a random chunk, and it seems to work. I don't understand why; is this a solid solution, or was it just luck?
Switching to Z_FULL_FLUSH didn't seem to make any difference.
Is there another way to work around this? Do you think I missed something?
Thanks a lot,
Jason
To ensure that you can start decompression at some point with no history, you must either use Z_FULL_FLUSH or simply end the stream and start a new one.
For the former you could do a Z_SYNC_FLUSH followed by a Z_FULL_FLUSH in order to insert two markers resulting in the nine bytes 00 00 ff ff 00 00 00 ff ff, which would be unlikely to be seen randomly in the compressed data. You can do the same for the latter, simply inserting a large-enough marker between the end of the previous zlib stream and the start of the next zlib stream.
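For what it's worth, here is a minimal sketch of the full-flush idea. Ruby's zlib binding and the use of raw deflate (negative window bits, so a restarted receiver doesn't also need the zlib header) are my assumptions, not part of the original setup; the point is only that a full flush resets the compression history at the boundary.

require 'zlib'

# Assumption: raw deflate (negative window bits) so a freshly started
# receiver does not need the zlib header from the start of the stream.
deflater = Zlib::Deflate.new(Zlib::DEFAULT_COMPRESSION, -Zlib::MAX_WBITS)
chunk1 = deflater.deflate("first part of the data", Zlib::FULL_FLUSH)
chunk2 = deflater.deflate("second part of the data", Zlib::FINISH)

# A receiver that never saw chunk1 can still inflate chunk2, because the
# full flush reset the compression history at that boundary.
inflater = Zlib::Inflate.new(-Zlib::MAX_WBITS)
puts inflater.inflate(chunk2)   # => "second part of the data"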
I am developing a small MQTT client to subscribe to and monitor certain topics. For the most part it works well, assuming a one-byte length field (the 2nd byte). But I sometimes get this 0x30 response to my subscription that I can't understand. It seems to have a multi-byte length, but neither length byte has its MSB set.
Header
0000: 3031312700127b6c756d6f7375727d2f 011'..{lumosur}/
0010: 6461746574696d65323032302d30322d datetime2020-02-
0020: 30342032333a32313a3437311900127b 04 23:21:471...{
How do I figure this out?
Thanks for your help.
mm.
Never mind. Though I had been staring at that problem for hours, it dawned on me just after I posted the question. The protocol doesn't have a problem: I was reading the data wrongly.
This is a binary protocol, so I have to read the blocks according to the length field in the header. I didn't do that correctly, so what I assumed was header data actually wasn't aligned correctly.
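For reference, MQTT's variable-length Remaining Length field decodes as 7 bits per byte, with a set MSB meaning another length byte follows. A tiny sketch (the helper name and sample values are made up):

# Each byte carries 7 bits of length; MSB set = another byte follows.
def remaining_length(bytes)
  value = 0
  multiplier = 1
  bytes.each do |b|
    value += (b & 0x7F) * multiplier
    multiplier *= 128
    break if (b & 0x80).zero?
  end
  value
end

remaining_length([0x27])        # => 39, a one-byte length
remaining_length([0xC1, 0x02])  # => 321, a two-byte length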
Sorry to bother you :D
Michaela
Edit (abstract)
I tried to interpret Char/String data as Byte, 4 bytes at a time. This was because I could only get TComport/TDatapacket to interpret streamed data as String, not as any other data type. I still don't know how to get the Read method and OnRxBuf event handler to work with TComport.
Problem Summary
I'm trying to get data from a mass spectrometer (MS) using some Delphi code. The instrument is connected with a serial cable and follows the RS232 protocol. I am able to send commands and process the text-based outputs from the MS without problems, but I am having trouble with interpreting the data buffer.
Background
From the user manual of this instrument:
"With the exception of the ion current values, the output of the RGA are ASCII character strings terminated by a linefeed + carriage return terminator. Ion signals are represented as integers in units of 10^-16 Amps, and transmitted directly in hex format (four byte integers, 2's complement format, Least Significant Byte first) for maximum data throughput."
I'm not sure whether (1) hex data can be stored properly in a string variable. I'm also not sure how to (2) implement 2's complement in Delphi or (3) handle the Least Significant Byte first ordering.
Following David Heffernan's advice, I went and revised my data types. Attempting to harvest binary data from characters doesn't work, because not all values from 0-255 can be properly represented. You lose data along the way, basically, especially if your data arrives 4 bytes at a time.
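To make the byte handling concrete: this is not Delphi, just an illustration (with made-up sample bytes) of assembling four LSB-first bytes into a signed 32-bit, two's complement value.

bytes = [0x31, 0x19, 0x00, 0x00]                 # hypothetical sample, LSB first
value = bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24)
value -= 2**32 if value >= 2**31                 # apply the two's complement sign
# or, more directly:
value = bytes.pack('C*').unpack1('l<')           # => 6449 (units of 10^-16 A)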
The solution for me was to use the Async Professional component instead of Denjan's Comport lib. It handles data streams better and has a built-in log that I could use to figure out how to interpret streamed responses from the instrument. It's also better documented. So, if you're new to serial communications (like I am), rather give that a go.
I'm in a situation where some data in a database is not compressed, but I want to enable compression for new data coming in, without having to update all records currently in the database to make them compressed as well.
So I need to be able to say, if deflated, inflate it and process it, else just process it. But I can't see how to gracefully check whether the data is already compressed before trying to process it, unless I do a 'begin ... rescue' block:
begin
  process(Zlib::Inflate.inflate(data))
rescue Zlib::DataError
  process(data)
end
Is there a better way? I've seen references to magic numbers and checking the first couple of bytes of the file, but no good examples of how to achieve these things in Ruby. Any help appreciated. Thanks.
What you propose is the best way. It will very quickly determine that the first two bytes are not a zlib header. If by accident the input data appears to be a zlib header (about a 1 in 1024 chance), then the decompression will detect an invalid deflate stream given random data within on the order of 30 bytes almost all the time.
As you suggested, you could either rescue the specific exception or manually validate the file type by reading the magic numbers.
Ruby IO's readpartial can read the specified number of bytes, which you can compare with the magic number.
I personally would stick to rescuing the exception as a lot of the core libraries perform the same magic number check before raising the exception.
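If you do want the up-front check, here is a sketch of the header test mentioned above. It assumes the data has already been read into a string; per RFC 1950, a zlib stream starts with a CMF/FLG byte pair whose first byte has a low nibble of 8 (deflate) and whose big-endian 16-bit value is divisible by 31.

require 'zlib'

def zlib_compressed?(data)
  return false if data.bytesize < 2
  cmf, flg = data.bytes.first(2)
  (cmf & 0x0F) == 8 && ((cmf << 8) | flg) % 31 == 0
end

process(zlib_compressed?(data) ? Zlib::Inflate.inflate(data) : data)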
I'm working with an ELM327 and I'd like to be able to set the header and data portions of the CAN messages to be sent. I see that there is a command for setting the header of messages:
SH xxyyzz
But I'm having trouble finding out how to set the data portion and control when the message gets sent.
Do these both occur when I send an ASCII request for a PID with extra characters for the data field?
And would that use the header that was set by the SH command?
Is there a better way to do this?
Datasheet: http://elmelectronics.com/DSheets/ELM327DS.pdf
If you're using the ELM327 and you're on a protocol such as J1850 VPW or J1850 PWM (vehicles older than 2003, before CAN), then you will use this to set the header.
The header will consist of xx yy zz
xx = priority of the message (e.g. 68)
yy = target address of the module you want to talk to (e.g. 5A)
zz = sender address, which can usually be F1
So your command would look like this: ATSH 68 5A F1
This sets the header. Now you want to send data. Any data you send from now on will use that header and go to that module.
So if you want to get the RPM, you can just send 01 0C
You will get something like 41 0C 23. The data byte(s) after 41 0C hold the raw RPM value; you will have to apply a formula to convert this into a human-readable figure. A lot of information can be found here:
https://en.wikipedia.org/wiki/OBD-II_PIDs
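For example, engine RPM (mode 01, PID 0C) comes back as two data bytes, and the standard formula from that PID table is RPM = ((A * 256) + B) / 4. A small sketch with an invented reply:

reply = "41 0C 1A F0"                      # made-up response
_, _, a, b = reply.split.map { |h| h.to_i(16) }
rpm = ((a * 256) + b) / 4.0                # => 1724.0 rpm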
By the way, if you're communicating on a CAN network, you would just use the module ID as the header: ATSH 7E0, then send your data. All vehicles from 2008 on are CAN; some 2003-2007 vehicles are as well.
This might be an old question, but I just found an online link which describes in detail how to send arbitrary CAN messages using the ELM327, so anyone (like me) coming across this question can still find a valid answer.
Look here for details on sending arbitrary CAN messages with the ELM327:
https://www.elmelectronics.com/wp-content/uploads/2017/11/AppNote07.pdf
Best
If you're using an ELM327 chipset, you need to send ATSH (or AT SH) to set the header first, then send the message (the data bytes) separately.
https://www.sparkfun.com/datasheets/Widgets/ELM327_AT_Commands.pdf
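As a rough illustration of that two-step sequence (the port object here is a hypothetical, already-opened serial connection; ELM327 commands are plain ASCII terminated by a carriage return):

port.write("AT SH 7E0\r")    # set the header (11-bit CAN request ID)
port.write("01 0C\r")        # then send the data bytes: mode 01, PID 0C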
I tried using PARSE on a PORT! and it does not work:
>> parse open %test-data.r [to end]
** Script error: parse does not allow port! for its input argument
Of course, it works if you read the data in:
>> parse read open %test-data.r [to end]
== true
...but it seems it would be useful to be able to use PARSE on large files without first loading them into memory.
Is there a reason why PARSE couldn't work on a PORT! ... or is it merely not implemented yet?
The easy answer is: no, we can't...
The way PARSE works, it may need to roll back to a prior part of the input string (which might in fact be the head of the complete input) when it meets the last character of the stream.
Ports copy their data into a string buffer as the input arrives, so in fact there is never any "prior" string for PARSE to roll back to. It's like quantum physics... once you've looked at it, it's not there anymore.
But as you know, in Rebol... "no" isn't an answer. ;-)
This being said, there is a way to parse data from a port as it's being grabbed, but it's a bit more work.
What you do is use a buffer, and
APPEND buffer COPY/part connection amount
Depending on your data, amount could be 1 byte or 1 KB; use whatever makes sense.
Once the new input is added to your buffer, parse it and add logic to know if you matched part of that buffer.
If something positively matched, you remove/part what matched from the buffer, and continue parsing until nothing parses.
You then repeat the above until you reach the end of the input.
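Not Rebol, but as a rough illustration of that loop in another language (Ruby, with a hypothetical connection and handle_message, and assuming newline-delimited messages):

buffer = ""
until connection.eof?
  buffer << connection.readpartial(1024)        # grab the next chunk
  # consume every complete message from the front, keep the tail
  while (message = buffer.slice!(/\A.*?\n/m))
    handle_message(message)
  end
end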
I've used this in a real-time EDI TCP server which has an "always on" TCP port, in order to break up a (potentially) continuous stream of input data which actually piggy-backs messages end to end.
Details
The best way to set up this system is to use /no-wait and loop until the port closes (you receive none instead of "").
Also make sure you have a way of checking for data integrity problems (like a skipped byte or an erroneous message) when you are parsing; otherwise, you will never reach the end.
In my system, when the buffer grew beyond a specific size, I tried an alternate rule which skipped bytes until a pattern might be found further down the stream. If one was found, an error was logged, the partial message was stored, and an alert was raised for the sysadmin to sort out the message.
HTH !
I think that Maxim's answer is good enough. At the moment PARSE on a port is not implemented. I don't think it's impossible to implement later, but we must solve other issues first.
Also, as Maxim says, you can do it even now, but it very much depends on what exactly you want to do.
You can certainly parse large files without needing to read them completely into memory. It's always good to know what you expect to parse. For example, all large files, like music and video files, are divided into chunks, so you can just use copy|seek to get these chunks and parse them.
Or if you want to get just the titles of multiple web pages, you can read, let's say, the first 1024 bytes and look for the title tag there; if that fails, read more bytes and try again...
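A sketch of that read-more-and-retry idea, reading from a saved page file purely for illustration (the filename and sizes are made up):

bytes_per_read = 1024
buffer = ""
title = nil
File.open("page.html", "rb") do |f|
  while title.nil? && !f.eof?
    buffer << f.read(bytes_per_read)             # pull in another slice
    title = buffer[%r{<title>(.*?)</title>}im, 1]
  end
end
puts title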
That's exactly what must be done to allow parse on port natively anyway.
And feel free to add a WISH in the CureCode database: http://curecode.org/rebol3/