NSURLConnection uploading large files

I need to upload large files like video from my iPhone. Basically, I need to read the data in chunks and upload each chunk. The upload is a multipart upload. How can I achieve this using NSURLConnection?
Thanks in advance!

You will likely use "Form-based File Upload in HTML", which is a specialized form of a multipart/form-data POST request.
See Form-based File Upload in HTML (RFC 1867) and many other sources on the web.
When dealing with large files, you need to keep your memory footprint acceptably low. Thus, the input source for the request body should be an NSInputStream, which avoids loading the whole file into memory. You create the NSInputStream instance with a class factory method, specifying the file you want to upload. When setting up the NSMutableURLRequest, you set the input stream via setHTTPBodyStream:.
In any case, use NSURLConnection in asynchronous mode and implement the delegate methods. You will need to keep a reference to the connection object in order to be able to cancel it, if that becomes necessary.
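A minimal sketch of this setup (the URL, file path, and boundary string are illustrative, and error handling is omitted). Note that the stream handed to setHTTPBodyStream: must produce the complete multipart body, not just the raw file bytes - see the assembly sketch further below:

    // Sketch: stream the request body from disk instead of holding it in memory.
    NSString *boundary = @"Boundary-7MA4YWxkTrZu0gW";              // illustrative
    NSURL *uploadURL = [NSURL URLWithString:@"https://example.com/upload"];

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:uploadURL];
    request.HTTPMethod = @"POST";
    [request setValue:[NSString stringWithFormat:
                          @"multipart/form-data; boundary=%@", boundary]
        forHTTPHeaderField:@"Content-Type"];

    // bodyFilePath points to a pre-assembled multipart body on disk (see below).
    NSInputStream *bodyStream = [NSInputStream inputStreamWithFileAtPath:bodyFilePath];
    [request setHTTPBodyStream:bodyStream];

    // Keep a strong reference so the upload can be cancelled later.
    self.connection = [[NSURLConnection alloc] initWithRequest:request
                                                      delegate:self
                                              startImmediately:YES];
    // ... later, if required: [self.connection cancel];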
Every part of the multipart body should have a Content-Type - especially the file part - and every part should have a Content-Length, unless chunked transfer encoding is used.
You may want to explicitly set the Content-Length header of the request with the correct length. Otherwise, if NSURLConnection cannot determine the content length itself - and this is the case when you set an input stream - then NSURLConnection uses chunked transfer encoding. Depending on the content type, a few servers may have difficulties processing either chunked transfer-encoded bodies or very large bodies.
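For instance (a sketch; totalBodyLength is the value computed in the assembly sketch further below):

    // An explicit Content-Length prevents the fallback to chunked
    // transfer encoding when the body comes from a stream.
    [request setValue:[NSString stringWithFormat:@"%llu", totalBodyLength]
        forHTTPHeaderField:@"Content-Length"];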
Since there is a high chance that mobile devices lose their connection in the field during an upload request, you should also consider utilizing HTTP range headers to make uploads resumable. Both server and client need to support this.
See section 14.35, "Range", in RFC 2616, and various other sources regarding "resumable file upload".
There is no system framework that helps you set up the multipart body and calculate the correct content length for the whole message. Doing this yourself without third-party library support is quite error-prone and cumbersome, but doable.
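A sketch of the manual assembly and length calculation (the field name, file name, and part content type are illustrative; boundary is the same string as above). One practical way to keep memory flat is to write prologue + file + epilogue to a temporary file and stream that file as the request body:

    // Multipart framing around a single file part.
    NSString *prologue = [NSString stringWithFormat:
        @"--%@\r\n"
        @"Content-Disposition: form-data; name=\"file\"; filename=\"movie.mov\"\r\n"
        @"Content-Type: video/quicktime\r\n\r\n", boundary];
    NSString *epilogue = [NSString stringWithFormat:@"\r\n--%@--\r\n", boundary];

    unsigned long long fileSize =
        [[[NSFileManager defaultManager] attributesOfItemAtPath:videoPath
                                                          error:NULL] fileSize];

    // Total length of the request body: the framing plus the file itself.
    unsigned long long totalBodyLength =
        [prologue dataUsingEncoding:NSUTF8StringEncoding].length
        + fileSize
        + [epilogue dataUsingEncoding:NSUTF8StringEncoding].length;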

Clarification needed regarding streaming responses with Aqueduct

I'm reading the docs of Aqueduct, an HTTP web server for Dart.
In the section about streaming response bodies, I see the following two statements, which do not completely fit together for me:
A body object may also be a Stream<T>. Stream<T> body objects are most often used when serving files. This allows the contents of the file to be streamed from disk to the HTTP client without having to load the whole file into memory first.
and
When a body object is a Stream<T>, the response will not be sent until the stream is closed. For finite streams - like those from opened files - this happens as soon as the entire file is read.
So how does it send the response only after the entire file is read without having to load the whole file into memory first?
That's a great question and the wording in the docs could be improved. The code that matters is here: https://github.com/stablekernel/aqueduct/blob/master/aqueduct/lib/src/http/request.dart#L237.
The outgoing HTTP response is a stream, too. When you write bytes to that stream, the bytes are transmitted across the wire (with buffering, which is on by default, a buffer is built up before sending; IIRC it is 8 kB by default).
Your source stream - the stream representing your body object - is piped to the HTTP response stream (after being transformed by any codecs, if applicable). As soon as you return the Response object from your controller, your source stream starts being consumed. The response cannot be completed until the source stream indicates it is 'closed'. If you take a peek at the source for FileController, you can see that it uses File.openRead, which automatically closes the source stream once all bytes have been read.
If your source stream is manually controlled, you return your Response object, then asynchronously add bytes to the stream and close it once complete. The key takeaway is that if you own the stream, you need to close it; streams created by system utilities are typically closed for you. Hope this answers the question.

HTTP/2 Streaming and static compression

I need to implement an HTTP/2 server, both in Node and in C++. However, I can't grasp how to make streaming work with static compression:
I want to compress my files with the highest compression possible, and this is done statically at build time
I want to stream my HTML, so the browser receives the <head> asap, and can either prefetch resources or retrieve them from the local cache
But files that are compressed can't be read before receiving all the data, can they?
Should I give up compression, or should I compress HTML stream chunks separately? Is there a better way?
But files that are compressed can't be read before receiving all the data, can they?
This is (generally) incorrect. Deflate-based compression (e.g. gzip, brotli) as used for HTML files can be decompressed without receiving all the data.
These formats work mostly by back-referencing data. For example, the sentence above has a repeated occurrence of the text “compress”:
Deflate-based compression (e.g. gzip, brotli) can be decompressed without receiving all the data.
So the second instance could be replaced with a back-reference to the first one:
Deflate-based compression (e.g. gzip, brotli) can be de(-49,8)ed without receiving all the data.
So you can see that as long as you are reading in order (which HTTP guarantees) and from the beginning, then you don’t need any subsequent data to decompress what you’ve already received - but you do need any previous text.
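To see this incremental property in practice, here is a minimal sketch using the zlib C API (function names and buffer sizes are illustrative). Each arriving network chunk is fed to the decompressor, and decoded output becomes available immediately:

    #include <zlib.h>
    #include <stdio.h>
    #include <string.h>

    static z_stream strm;

    void decoder_init(void) {
        memset(&strm, 0, sizeof strm);
        inflateInit2(&strm, 16 + MAX_WBITS);  // 16 + MAX_WBITS: expect a gzip wrapper
    }

    // Call once per received chunk; decoded bytes are handed off right away,
    // long before the full compressed body has arrived.
    void decoder_feed(const unsigned char *chunk, unsigned len) {
        unsigned char out[4096];
        strm.next_in  = (unsigned char *)chunk;
        strm.avail_in = len;
        do {
            strm.next_out  = out;
            strm.avail_out = sizeof out;
            int rc = inflate(&strm, Z_NO_FLUSH);
            if (rc == Z_BUF_ERROR) break;                  // need more input
            if (rc != Z_OK && rc != Z_STREAM_END) return;  // corrupt stream
            fwrite(out, 1, sizeof out - strm.avail_out, stdout);  // emit decoded HTML
            if (rc == Z_STREAM_END) { inflateEnd(&strm); return; }
        } while (strm.avail_out == 0);
    }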
Similarly, JPEGs are often displayed before they are fully received, either by loading them line by line (non-progressive JPEGs) or by showing a blurry image which is enhanced as more data is loaded (progressive JPEGs).

Need a use case example for stream response in ChicagoBoss

The ChicagoBoss controller API has this:
{stream, Generator::function(), Acc0}
Stream a response to the client using HTTP chunked encoding. For each chunk, the Generator function is passed an accumulator (initially Acc0) and should return either {output, Data, Acc1} or done.
I am wondering what the use case for this is. There are other return types like json and output. When is this stream useful?
Can someone present a real-world use case?
Serving large files for download might be the most straightforward use case.
You could argue that there are also other ways to serve files so that users can download them, but these might have other disadvantages:
By streaming the file, you don't have to read the entire file into memory before starting to send the response to the client. For small files, you could just read the content of the file, and return it as {output, BinaryContent, CustomHeader}. But that might become tricky if you want to serve large files like disk images.
People often suggest serving downloadable files as static files (e.g. here). However, these downloads bypass all controllers, which might be an issue if you want things like download counters or access restrictions. Caching might be an issue, too.

NSURLConnection consuming huge memory

I'm using NSURLConnection to interact with the server side, and I have observed that when the server takes time to respond, the app allocates about 40 MB.
I don't know if I'm the only one having this problem.
Thanks in advance.
Yes, this can happen when the response data is large. Generally, we create an instance of NSMutableData and append all downloaded data to it. That works fine when the data is comparatively small. If the response is large, the better way is to create a file in the Documents directory and append the data to that file each time the connection receives data, then read the file after the connection finishes loading.
This concept of saving the data to a file applies on Android as well.
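A minimal sketch of the file-based approach, assuming an NSURLConnection delegate with hypothetical fileHandle and filePath properties:

    // Create the target file once, before any data arrives.
    - (void)connection:(NSURLConnection *)connection
        didReceiveResponse:(NSURLResponse *)response {
        NSString *docs = NSSearchPathForDirectoriesInDomains(
            NSDocumentDirectory, NSUserDomainMask, YES)[0];
        self.filePath = [docs stringByAppendingPathComponent:@"response.tmp"];
        [[NSFileManager defaultManager] createFileAtPath:self.filePath
                                                contents:nil
                                              attributes:nil];
        self.fileHandle = [NSFileHandle fileHandleForWritingAtPath:self.filePath];
    }

    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        // Append each chunk to disk instead of accumulating it in memory.
        [self.fileHandle writeData:data];
    }

    - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
        [self.fileHandle closeFile];
        // Read or process the file at self.filePath here.
    }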

MSStream - what's the point?

Bear with me on this one please.
When setting the responseType of a WinJS.xhr request I can set it, among other things, to 'ms-stream' or 'blob'. I was hoping to leverage the stream concept when downloading a file, in such a way that I don't have to keep the whole response in memory (video files can be huge).
However, all I can do with an 'ms-stream' object is read it with an MSStreamReader. This would be great if I could tell it to 'consume 1024 bytes from the stream' and loop that until the stream is exhausted. However, from reading the docs (I haven't tried this, so correct me if I'm wrong), it appears I can only read from the stream once (e.g. the readAsBlob method) and I can't set the start position. This means I need to read the whole response into memory as a blob, which I could achieve with responseType set to 'blob' in the first place. So what is the point of MSStream anyway?
Well, it turns out that the msDetachStream method gives access to the underlying stream and doesn't interrupt the download process. I initially thought that any data that had not yet been downloaded was lost when calling it, since the docs mention that the MSStream object is closed.
I wrote a blog post a while back to help answer questions about MSStream and other oddball object types that you encounter in WinRT and the host for JavaScript apps. See http://www.kraigbrockschmidt.com/2013/03/22/msstream-blob-objects-html5/. Yes, you can use MSStreamReader for some work (it's a synchronous API), but you can also pass an MSStream to URL.createObjectURL to assign it to an img.src, and so forth.
With MSStream, here's some of what I wrote: "MSStream is technically an extension of this HTML5 File API that provides interop with WinRT. When you get MSStream (or Blob) objects from some HTML5 API (like an XmlHttpRequest with responseType of “ms-stream,” as you’d use when downloading a file or video, or from the canvas’ msToBlob method), you can pass those results to various WinRT APIs that accept IInputStream or IRandomAccessStream as input. To use the canvas example, the msRandomAccessStream in a blob from msToBlob can be fed into APIs in Windows.Graphics.Imaging for transform or transcoding. A video stream can be similarly worked with using the APIs in Windows.Media.Transcoding. You might also just want to write the contents of a stream to a StorageFile (that isn’t necessarily on the file system) or copy them to a buffer for encryption."
So MSStreamReader isn't the end-all. The real use of MSStream is to pass the object into WinRT APIs that accept the aforementioned interface types, which opens many possibilities.
Admittedly, this is an under-documented area, which is exactly why I wrote my series of posts under the title, Q&A on Files, Streams, Buffers, and Blobs (the initial post is on http://www.kraigbrockschmidt.com/2013/03/18/why-doesnt-storagefile-close-method/).
