IMAP MIME boundary life

Between the RFC for IMAP, the RFC for the MIME Format of Internet Message Bodies, and the RFC for MIME Media Types, I cannot see anything said explicitly about the lifetime of MIME boundaries. Does a MIME boundary for a part of a message have to stay unchanged for as long as that message exists?
I have looked at how Clever Internet Suite implemented their IMAP server, and it seems that the boundary is generated (and re-generated) on the fly, i.e. separate commands executed against the same mailbox will get different boundaries for the same parts of the same message. That means that if you request the boundary as part of the BODYSTRUCTURE and retrieve the body parts later, you may receive those body parts with different boundaries than indicated in the BODYSTRUCTURE response.

Does a MIME boundary for a part of a message have to stay unchanged for as long as that message exists?
Yes.
and it seems that the boundary is generated (and re-generated) on the fly, i.e. separate commands executed against the same mailbox will get different boundaries for the same parts of the same message.
That is horribly broken.
The Content-Type parameter values are part of the BODYSTRUCTURE response, and the boundary is one of them. It cannot go changing between sessions, never mind between commands.
One of the reasons the boundary cannot simply be regenerated on the fly is multipart/signed: to verify the digital signature, you need to serialize the child tree byte-for-byte exactly the way it was when it was originally signed or, obviously, verification will fail.
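To make that concrete, here is a hypothetical exchange (message number, structure, sizes, and boundary value are all made up); the boundary the server reports in BODYSTRUCTURE must match the bytes a later body fetch returns:

    C: a1 FETCH 12 (BODYSTRUCTURE)
    S: * 12 FETCH (BODYSTRUCTURE ((...)(...) "MIXED" ("BOUNDARY" "xyz42") NIL NIL))
    S: a1 OK FETCH completed
    C: a2 FETCH 12 (BODY[TEXT])
    S: * 12 FETCH (BODY[TEXT] {1178}
    --xyz42
    ...first part, exactly as originally signed...
    --xyz42
    ...second part...
    --xyz42--
    )
    S: a2 OK FETCH completed

A client that parsed "xyz42" out of the first response is entitled to find exactly that delimiter in the second.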

Related

Camel file component charset with more than one endpoint

When I have a route that sends data to two different file component endpoints,
where I don't really care about the encoding for one endpoint but need to ensure a certain encoding for the other, should I still set the charset name on both endpoints?
I'm asking because a client of ours had a problem in that area. The route receives UTF-8 and we need to write iso-8859-1 to the file.
And after the whole machine was restarted (following a power outage), we found things like "??" in the file instead of the expected "ä".
Now, by specifying the charsetname on all file producer endpoints, we were able to solve the issue.
My actual question now is:
Do you think I can now expect that the problem is solved for good?
Or is there no such relation, and would I be well advised not to lean back until I understand the issue 100%?
Notes that might be helpful:
In addition, before writing to either of those two file endpoints, we
also do .convertBodyTo(byte[].class, "iso-8859-1")
We use Camel 2.16.1
In the end, the problem was not about having two file endpoints in one pipeline.
It was about the JVM's default encoding, as described here:
http://camel.465427.n5.nabble.com/Q-about-how-to-help-the-file2-component-to-do-a-better-job-td5783381.html
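As a minimal sketch (endpoint URIs and class name are hypothetical), pinning the charset on both file producers so the JVM default encoding (file.encoding) can never apply:

    import org.apache.camel.builder.RouteBuilder;

    public class Iso88591Route extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("direct:in")                               // hypothetical source
                // fix the bytes up front: a byte[] body is written verbatim
                .convertBodyTo(byte[].class, "iso-8859-1")
                // and pin the charset on BOTH producers anyway, so a String
                // body would never fall back to the JVM default (file.encoding)
                .to("file:/data/out1?charset=iso-8859-1")
                .to("file:/data/out2?charset=iso-8859-1");
        }
    }

Relying on file.encoding is fragile precisely because a restart (or a changed startup script) can silently change it, which matches the post-outage symptom described above.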

NSURLConnection uploading large files

I need to upload large files, like video, from my iPhone. Basically I need to read the data in chunks and upload each chunk. My upload is a multipart upload. How do I achieve this using NSURLConnection?
Thanks in advance!
You likely want "Form-based File Upload in HTML" (RFC 1867). This is a specialized form of a multipart/form-data POST request.
See the Form-based File Upload in HTML specification and many other sources on the web.
When dealing with large files, you need to keep your memory footprint acceptably low. Thus, the input source for the request data should be an NSInputStream, which avoids loading the whole file into memory. You create an instance of NSInputStream with a class factory method, specifying the file you want to upload. When setting up the NSMutableURLRequest you set the input stream via setHTTPBodyStream.
In any case, use NSURLConnection in asynchronous mode, implementing the delegates. You will need to keep a reference to the connection object in order to be able to cancel it, if that is required.
Every part of the multipart body shall have a Content-Type - especially the file part - and every part should have a Content-Length, unless chunked transfer encoding is used.
You may want to explicitly set the Content-Length header of the file part with the correct length. Otherwise, if NSURLConnection cannot determine the content length itself - and this is the case when you set an input stream - then NSURLConnection uses chunked transfer encoding. Depending on the content type, a few servers may have difficulties processing either chunked transfer encoded bodies or very large bodies.
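For reference, a chunked body looks roughly like this on the wire (chunk sizes are in hex; the payload is made up, and each line ends with an explicit CRLF, written \r\n here):

    5\r\n
    Hello\r\n
    6\r\n
    world!\r\n
    0\r\n
    \r\n

A server that cannot handle this framing will typically reject or mangle the request, which is why supplying a real Content-Length is the safer default.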
Since there is a high chance of mobile devices losing their connection in the field during an upload request, you should also consider utilizing HTTP Range headers. Both server and client need to support this.
See "14.35 Range" in RFC 2616, and various other sources regarding "resumable file upload".
There is no system framework that helps you set up the multipart body and calculate the correct content length for the whole message. Doing this yourself without third-party library support is quite error-prone and cumbersome, but doable.
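To illustrate the body you would have to assemble yourself, here is the rough shape of such an upload (boundary, names, and sizes are made up; the per-part Content-Length follows the recommendation above):

    POST /upload HTTP/1.1
    Host: example.com
    Content-Type: multipart/form-data; boundary=----xyz42
    Content-Length: 10485933

    ------xyz42
    Content-Disposition: form-data; name="file"; filename="video.mov"
    Content-Type: video/quicktime
    Content-Length: 10485760

    ...raw file bytes, streamed from the NSInputStream...
    ------xyz42--

The outer Content-Length must cover everything between the header block and the final boundary delimiter, including all boundary lines and part headers, which is exactly the bookkeeping that makes hand-rolling this error-prone.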

PHP fails to parse large post variable

I'm trying to pass a rather large POST request to PHP, and when I var_dump the $_POST array, one variable, the largest, is missing. (Actually, it's a base64-encoded binary upload sent as part of the POST request.)
The funny thing is that on my development PC exactly the same request is parsed correctly, without any missing variables.
I checked the contents of php://input on the server and on the development PC and they are exactly the same; the md5 sums match. Yet the development PC recognizes all variables, and the server misses one.
I tried changing many different options in php.ini, to zero effect.
Maybe someone can point me to the right one.
Here is my php://input (~5 megabytes) http://www.mediafire.com/?lp0uox53vhr35df
It's possible the server is blocking it because of the Suhosin extension.
http://www.hardened-php.net/suhosin/configuration.html#suhosin.post.max_value_length
suhosin.post.max_value_length
Type: Integer. Default: 65000. Defines the maximum length of a variable that is registered through a POST request.
This will have to be changed in php.ini.
Keep in mind that this is different from the Suhosin patch, which is common on a lot of shared hosts. I don't know whether the patch would cause this problem.
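If Suhosin does turn out to be the culprit, the php.ini change would look something like this (the limits shown are examples, sized generously for a ~5 MB value):

    ; Suhosin caps each POST variable at 65000 bytes by default
    suhosin.post.max_value_length = 8388608
    ; the request-wide counterpart has the same default and may also need raising
    suhosin.request.max_value_length = 8388608

After changing these, restart the web server / PHP-FPM and re-check var_dump($_POST).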

Methods of reducing URL size?

So, we have a very large and complex website that requires a lot of state information to be placed in the URL. Most of the time this is just peachy and the app works well. However, there are (an increasing number of) instances where the URL gets really, really long. This causes huge problems in IE because of its URL length restriction (about 2,083 characters).
I'm wondering, what strategies/methods have people used to reduce the length of their URLs? Specifically, I'd just need to reduce certain parameters in the URL, maybe not the entire thing.
In the past, we've pushed some of this state data into the session... however, this decreases addressability in our application (which is really important). So any strategy that can maintain addressability would be favored.
Thanks!
Edit: To answer some questions and clarify a little: most of our parameters aren't an issue... however, some of them are dynamically generated with the possibility of being very long. These parameters can contain anything legal in a URL (meaning they aren't just numbers or just letters; they could be anything). Case sensitivity may or may not matter.
Also, ideally we could convert these to POST; however, due to the immense architectural changes required for that, I don't think it is really possible.
If you don't want to store that data in the session scope, you can:
Send the data as a POST parameter (in a hidden field), so data will be sent in the HTTP request body instead of the URL
Store the data in a database and pass a key (that gives you access to the corresponding database record) back and forth, which opens up scalability and maybe security issues. I suppose this approach is similar to using the session scope.
most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long
I don't see a way to get around this if you want to keep full state info in the URL without resorting to storing data in the session, or permanently on server side.
You might save a few bytes using some compression algorithm, but it will make the URLs unreadable; most algorithms are not very efficient on small strings, and compression does not produce predictable results.
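A minimal sketch of that approach in Java (the helper name is made up): deflate the value, then Base64-url-encode it so it can go straight into a URL. Note that the Base64 step alone inflates the result by a third, which is why short values often come out longer than they went in:

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.Base64;
    import java.util.zip.Deflater;

    public class ParamShrinker {
        // Compress a parameter value into a URL-safe token (hypothetical helper).
        static String shrink(String value) {
            byte[] input = value.getBytes(StandardCharsets.UTF_8);
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.setInput(input);
            deflater.finish();
            byte[] buf = new byte[input.length + 64];   // deflate never expands much
            int n = deflater.deflate(buf);
            deflater.end();
            // URL-safe Base64 without padding, so no further escaping is needed
            return Base64.getUrlEncoder().withoutPadding()
                         .encodeToString(Arrays.copyOf(buf, n));
        }
    }

The receiving side would reverse the steps with Base64.getUrlDecoder() and an Inflater.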
The only other ideas that come to mind are
Shortening parameter names (query => q, page=> p...) might save a few bytes
If the parameter order is very static, using mod_rewrite-style directory structures (/url/param1/param2/param3) may save a few bytes because you don't need to use parameter names
Whatever data is repetitive and can be "shortened" back into numeric IDs or shorter identifiers (like place names of company branches, product names, ...), keep in an internal, global, permanent lookup table (London => 1, Paris => 2, ...)
Other than that, I think storing data on the server side, identified by a random key as Guido already suggests, is the only real way. The upside is that you have no size limit at all: a URL like
example.com/?key=A23H7230sJFC
can "contain" as much information on server side as you want.
The down side, of course, is that in order for these URLs to work reliably, you'll have to keep the data on your server indefinitely. It's like having your own little URL shortening service... Whether that is an attractive option, will depend on the overall situation.
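A sketch of that idea (class and method names are made up): generate a short random key, keep the full state server-side, and put only the key in the URL:

    import java.security.SecureRandom;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class StateStore {
        // Alphabet without easily-confused characters (0/O, 1/l/I)
        private static final char[] ALPHABET =
            "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789".toCharArray();
        private final Map<String, String> store = new ConcurrentHashMap<>();
        private final SecureRandom random = new SecureRandom();

        // Persist the full state blob; return the short key to embed in the URL.
        // A real implementation would write to a database, not an in-memory map.
        public String put(String state) {
            StringBuilder key = new StringBuilder(12);
            for (int i = 0; i < 12; i++) {
                key.append(ALPHABET[random.nextInt(ALPHABET.length)]);
            }
            store.put(key.toString(), state);
            return key.toString();   // e.g. ?key=A23H7230sJFC
        }

        public String get(String key) {
            return store.get(key);
        }
    }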
I think that's pretty much it!
One option, which is good when they really are navigable parameters, is to work these parameters into the first section of the URL, e.g.
http://example.site.com/ViewPerson.xx?PersonID=123
=>
http://example.site.com/View/Person/123/
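With Apache, that mapping is a short rewrite rule (the paths and the .xx extension are taken from the example above):

    # /View/Person/123/  ->  /ViewPerson.xx?PersonID=123
    RewriteEngine On
    RewriteRule ^View/Person/([0-9]+)/?$ /ViewPerson.xx?PersonID=$1 [L,QSA]

The QSA flag keeps any remaining query-string parameters intact, so only the long, repetitive ones need to move into the path.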
If the data in the URL is automatically generated, can't you just generate it again when needed?
With little information it is hard to think of a solution, but I'd start by researching what RESTful architectures do in terms of using hypermedia (i.e. links) to keep state. REST in Practice (http://tinyurl.com/287r6wk) is a very good book on this very topic.
Not sure what application stack you are using. I have had the same problem, and I use a couple of solutions (ASP.NET):
Use Server.Transfer and HttpContext (PreviousPage in .NET 2+) to get access to a public property of the source page which holds the data.
Use Server.Transfer along with a hidden field in the source page.
Use compression on the query string.

Strange rare out-of-order data received using Indy

We're having a bizarre problem with Indy 10 where two large strings (a few hundred characters each) that we send out one after the other over TCP arrive at the other end oddly intertwined. This happens extremely infrequently.
Each string is a complete XML message terminated with a LF, and in general the read process reads an entire XML message, returning when it sees the LF.
The call to actually send the message is protected by a critical section around the call to the IOHandler's writeln method, so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly.) This problem happens very rarely.
The symptoms are odd: when we send string A followed by string B, what we receive at the other end (on the rare occasions where it fails) is the trailing section of string A by itself (i.e., there's a LF at the end of it), followed by the leading section of string A, and then the entire string B followed by a single LF.
We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters with spaces before appending the LF and sending it.
We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire".
We're completely baffled and wondering (although it is always the lowest probability) whether there could be some low-level Indy issue that might cause this, e.g., buffers being sent in the wrong order. It's very hard to believe this could be the problem, but we're grasping at straws.
Does anyone have any bright ideas?
You could try Wireshark to find out how the data is transferred. This way you can find out whether the problem is in the server or in the client. Also remember to use TCP to get "guaranteed" valid data in the right order.
Are you using TCP or UDP? If you are using UDP, it is possible (and expected) that the UDP packets can be received in a different order than they were transmitted due to the routing across the network. If this is the case, you'll need to add some sort of packet ID to each UDP packet so that the receiver can properly order the packets.
Do you have multiple threads reading from the same socket at the same time on the receiving end? Even just querying the Connected() status causes a read to occur. That could cause your multiple threads to read the inbound data and store it in the IOHandler.InputBuffer in random order if you are not careful.
Have you checked the Nagle settings of the IOHandler? We had a similar problem that we fixed by setting UseNagle to false. In our case, sending and receiving large amounts of data in bursts was slow due to Nagle coalescing, so it's not quite the same as your situation.
