How do IMAP servers detect the end of a message and respond that APPEND completed? - imap

I'm curious how the server detects the end of the client's message in https://www.rfc-editor.org/rfc/rfc4315
As far as I know, a multipart/mixed message contains a lot of empty lines, so detecting the end with just a CRLF would be wrong.

Messages and similar multiline structures are sent as literals: they are preceded by a byte count in braces, and the server then reads exactly that many octets, so line breaks inside the message don't matter.
> A1 APPEND INBOX {4082}\r\n
< + Go ahead\r\n
> (4082 bytes of message follow)\r\n
Simple as that.
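
For a sense of what that looks like on the server side, here is a minimal sketch in Python (the conn socket and the already-read command line are assumptions, not part of any real server): pull the octet count out of the braces, send the continuation request, then read exactly that many octets, so CRLFs inside the message never act as a terminator.

import re

# Minimal sketch, not a real IMAP server: `conn` is a connected socket and `line`
# is the command line already read from it, e.g. b'A1 APPEND INBOX {4082}\r\n'.
def read_literal(conn, line):
    match = re.search(rb"\{(\d+)\}\r\n$", line)
    if match is None:
        raise ValueError("command carries no literal")
    size = int(match.group(1))
    conn.sendall(b"+ Go ahead\r\n")           # continuation request
    data = b""
    while len(data) < size:                   # read exactly `size` octets;
        chunk = conn.recv(size - len(data))   # embedded CRLFs are just payload
        if not chunk:
            raise ConnectionError("client closed the connection mid-literal")
        data += chunk
    return data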

Related

GCP Dataflow processes invalid data

We have an API that acts as a proxy between clients and Google Pub/Sub: it basically receives a JSON body and publishes it to the topic. It is then processed by Dataflow, which stores it in BigQuery. We also use a transform UDF to, for instance, convert a field value to upper case; it parses the JSON it is given and produces a new one.
The problem is the following. The number of bytes sent to the destination table is much smaller than the number sent to the dead-letter table, and in 99% of cases the error message says that the JSON sent is invalid. And that's true: the payloadstring column contains distorted JSONs; they can be truncated, concatenated with other ones, or even both. I've added logs on the API side to see where the messages get corrupted, but neither the JSON bodies received by the API nor the ones it sends are invalid.
How can I debug this problem? Is there any chance that Pub/Sub or Dataflow corrupts messages? If so, what can I do to fix it?
UPD. By the way, we use the Google-provided template called "Pub/Sub Topic to BigQuery".
UPD2. The API is written in Go, and the way we send the message is simply by calling
res := p.topic.Publish(ctx, &pubsub.Message{Data: msg})
The res variable is then used for error logging. p here is a custom struct.
The message we send is a JSON object with 15 fields; just to be concise, I'll mock both it and the UDF.
Message:
{"MessageName":"Name","MessageTimestamp":123123123",...}
UDF:
function transform(inJson) {
    var obj;
    try {
        obj = JSON.parse(inJson);
    } catch (error) {
        throw 'parse JSON error: ' + error;
    }
    if (Object.keys(obj).length !== 15) {
        throw "Message is invalid";
    }
    if (!(obj.hasOwnProperty('MessageName') && typeof obj.MessageName === 'string' && obj.MessageName.length > 0)) {
        throw "MessageName is absent or invalid";
    }
    /*
    other fields check
    */
    obj.MessageName = obj.MessageName.toUpperCase();
    /*
    other fields transform
    */
    return JSON.stringify(obj);
}
UPD3:
Besides the corruption, I've noticed that every single message is duplicated at least once, and the duplicates are often truncated.
The problem started several days ago, when there was a massive increase in the number of messages; the volume is back to normal now, but the error is still there. The problem had been seen before, but it was a much rarer case.
The behavior you describe suggests that the data is corrupt before it gets to Pub/Sub or Dataflow.
I have performed a test, sending JSON messages containing 15 fields. Your UDF function as well as the Dataflow template work fine, since I was able to insert the data into BigQuery.
Based on that, it seems your messages are already corrupted before getting to Pub/Sub, so I suggest you check your messages once they arrive in Pub/Sub and see if they have the correct format.
Please note that the message schema is required to match the BigQuery table schema.
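One way to do that check, as a sketch: attach a throwaway subscription to the topic and pull a batch of raw payloads, running them through the same JSON parse the UDF does. The project and subscription names below are placeholders, and the request-dict call style assumes google-cloud-pubsub 2.x.

from google.cloud import pubsub_v1
import json

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "debug-sub")  # placeholder names

response = subscriber.pull(request={"subscription": sub_path, "max_messages": 20})
for received in response.received_messages:
    payload = received.message.data.decode("utf-8", errors="replace")
    try:
        json.loads(payload)                  # same parse the UDF performs
    except ValueError as err:
        print("corrupt payload:", err, payload[:120])

if response.received_messages:
    subscriber.acknowledge(request={
        "subscription": sub_path,
        "ack_ids": [r.ack_id for r in response.received_messages],
    })

If the payloads pulled this way are already truncated or concatenated, the problem is upstream of Dataflow; if they are clean, the template or UDF side deserves another look.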

HL7 Message terminator

I have to send an HL7 message to a web service. I am adding CHAR(13) (carriage return, \r) as the segment terminator in a stored procedure and calling a web service to send the HL7 message. When the service receives the message, they say I am adding an extra CHAR(10) (line feed, \n) to my segment terminators. I have looked at my values, and just before sending, the message only has \r as segment terminators. How can I make sure the service also receives only \r, without the extra \n? I have looked around but haven't found any solution so far.
Have you looked at the message in Fiddler or TCP Spy, depending on how you are sending your message?
It will at least prove whether you are providing anything other than the \r.
I've been caught out by messages having multiple ways of breaking the line: \r, \n, and also a combination of the two.
Have you tried redirecting the message to somewhere you can actually read it yourself at different stages of the processing? It's getting changed somewhere, and reading it out at different stages caught the error for me. Or just use a series of checks like the one below at different stages.
content = hl7message.read()
if "\n" in content:
    print("ERROR")

Read a filestream (named pipe) with a timeout in Smalltalk

I posted this to the Squeak Beginners list too - I'll be sure to make sure any answers from there get here :)
I'm using Squeak 4.2 and working on the Smalltalk end of a named pipe connection, which sends a message to the named pipe server with:
msg := 'Here''s Johnny!!!!'.
pipe nextPutAll: msg; flush.
It should then receive an acknowledgement, which will be a 32-byte md5 hash of the received message (which the smalltalk app can then verify). It's possible the named pipe server may have gone away or otherwise been unable to deal with the request, and so I'd like to set a timeout on reading the acknowledgement. I've tried using this:
ack := [ pipe next: 32 ] valueWithin: (Duration seconds: 3) onTimeout: [ 'timeout'. ].
and then made the pipe server pause artificially to test the code. But the Smalltalk thread blocks on the read and doesn't carry on (even after the timeout), although if I then get the pipe server to send the correct response (after a 5-second delay, for example), the value of 'ack' is 'timeout'. Obviously the timeout did what it's supposed to do, but it couldn't 'unblock' the blocking read on the pipe.
Is there a way to accomplish this even with a blocking FileStream read? I'd rather avoid a busy wait on there being 32 characters available if at all possible.
This one may come in handy, but not on Windows, I'm afraid:
http://www.samadhiweb.com/blog/2013.07.27.unixdomainsockets.html

How do you send a file separator in Mirth?

The HL7 receiver I am sending to expects a very specific end-of-file marker in a TCP message:
<FS><CR>
Where <FS> is ascii 28 and <CR> is ascii 13.
We are using Mirth 2.x as our HL7 engine. The <CR> (carriage return) is fairly straightforward.
But how do I send the file separator?
Here is how I was able to solve this problem.
In the source transformer I defined "Start of File" and "End of File" variables like this:
channelMap.put('SOF', String.fromCharCode(11));     // Start Of File: returns \v (vertical tab)
channelMap.put('EOF', String.fromCharCode(28, 13)); // End Of File: returns <FS><CR>
In the destination template I then did this:
${SOF}${message.encodedData}${EOF}
I wrote the messages out to a temporary file and opened them in a hex editor. I was able to confirm that a 0x0B (ASCII 11) was written prior to the message and that the message closed with 0x1C 0x0D (ASCII 28, ASCII 13).
I'd recommend using the LLP Sender in Mirth. It can be configured to use different separator chars if needed.
My guess is that the two bytes you are seeing are the end-of-segment and end-of-message chars.
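
For comparison, the framing the receiver is asking for is standard MLLP/LLP. A small Python sketch that puts the same bytes on the wire as the Mirth setup above (host and port are placeholders):

import socket

VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"        # ASCII 11, 28, 13

def send_mllp(host, port, hl7_message):
    # Wrap the message in LLP framing: <VT> message <FS><CR>.
    frame = VT + hl7_message.encode("utf-8") + FS + CR
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame)
        return sock.recv(65536)               # the ACK comes back in the same framing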

SOCKS 5 - Failure behaviour?

I have read RFC 1928 several times and still couldn't understand what a compliant SOCKS 5 server is supposed to reply in case of failure. This doubt comes from the fact that the ATYP, BND.ADDR and BND.PORT fields of a SOCKS reply simply don't make sense if, for instance, a request with an invalid command is received. Must the server omit these fields, or just send blanks?
I just read the PuTTY source code and found out that, when there is an error in the reply (REP != 0), ATYP is IPv4 (1) and BND.ADDR and BND.PORT are all NULL bytes.
I guess this behavior helps developers parse replies?
In a failure reply, only the VER and REP fields are meaningful. The other fields may be present but are not used. You don't even need to look at those bytes unless REP is zero.
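
Put together, a failure reply can be built like this (a sketch that matches the RFC 1928 layout and the PuTTY behaviour described above; the helper name is made up):

import struct

def socks5_failure_reply(rep_code):
    # VER=5, REP=error code (e.g. 0x07 = command not supported), RSV=0,
    # ATYP=1 (IPv4) with BND.ADDR and BND.PORT zeroed out: 10 bytes total.
    return struct.pack("!BBBB4sH", 0x05, rep_code, 0x00, 0x01, b"\x00\x00\x00\x00", 0)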