HTTP/2 Streaming and static compression

I need to implement an HTTP/2 server, both in Node and in C++. Anyhow, I can't grasp how to make streaming work with static compression:
I want to compress my files with the highest compression possible, and this is done statically at build time
I want to stream my HTML, so the browser receives the <head> asap, and can either prefetch resources or retrieve them from the local cache
But files that are compressed can't be read before receiving all the data, can they?
Should I give up compression, or should I compress HTML stream chunks separately? Is there a better way?

But files that are compressed can't be read before receiving all the data, can they?
This is (generally) incorrect. Deflate-based compression (e.g. gzip, brotli) as used for HTML files can be decompressed without receiving all the data.
These formats work mostly by back-referencing earlier data. For example, the sentence above contains the text “compress” twice:
Deflate based compression (e.g. gzip, brotli) can be decompressed without receiving all the data.
So the second instance could be replaced with a back reference to the first one:
Deflate based compression (e.g. gzip, brotli) can be de(-49,8)ed without receiving all the data.
So you can see that as long as you are reading in order (which HTTP guarantees) and from the beginning, then you don’t need any subsequent data to decompress what you’ve already received - but you do need any previous text.
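To make that concrete, here is a small hedged Node sketch using zlib: it gzips some text and then feeds only the first half of the compressed bytes into a streaming decompressor, which emits plaintext before the rest of the input ever arrives.

    const zlib = require('zlib');

    // Hedged demo: gzip some text, then feed only the first half of the
    // compressed bytes into a streaming decompressor. Plaintext comes out
    // before the remaining input arrives.
    const text = 'Deflate based compression can be decompressed in order. '.repeat(200);
    const compressed = zlib.gzipSync(text);
    const firstHalf = compressed.subarray(0, Math.ceil(compressed.length / 2));

    const gunzip = zlib.createGunzip();
    gunzip.on('data', (chunk) =>
      console.log(`decoded ${chunk.length} bytes from a partial stream`));

    gunzip.write(firstHalf); // no end() - the input is deliberately incomplete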
Similarly, JPEGs are often displayed before they are fully received, either by loading them line by line (non-progressive JPEGs), or by first showing a blurry image that is enhanced as more data arrives (progressive JPEGs).
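Applied to the original question: you can brotli-compress at build time with the highest settings and still stream the result. A minimal sketch of a Node HTTP/2 server doing this, assuming index.html.br sits next to index.html and the TLS certificate paths are placeholders:

    const http2 = require('http2');
    const fs = require('fs');

    // Browsers only speak HTTP/2 over TLS; the .pem paths are placeholders.
    const server = http2.createSecureServer({
      key: fs.readFileSync('localhost-key.pem'),
      cert: fs.readFileSync('localhost-cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      // Serve the build-time brotli file only if the client accepts br.
      const acceptsBr = (headers['accept-encoding'] || '').includes('br');
      stream.respond({
        ':status': 200,
        'content-type': 'text/html; charset=utf-8',
        ...(acceptsBr && { 'content-encoding': 'br' }),
      });
      // Stream the pre-compressed file chunk by chunk; the browser inflates
      // each chunk as it arrives and can start parsing <head> early.
      fs.createReadStream(acceptsBr ? 'index.html.br' : 'index.html').pipe(stream);
    });

    server.listen(8443);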

Related

Save gzip compressed data to cache storage from service worker

We are trying to store some API responses in Cache Storage for a PWA app. We are intercepting the fetch requests in a service worker and storing the responses in the cache. But our uncompressed API responses are a little large, and we want to keep the compressed (gzip) version in the cache and uncompress it when needed.
Is there any way we can prevent the browser from automatically uncompressing the responses from the server?
I'm not aware of any way to do this automatically. Most servers will only compress their response bodies on the fly if the incoming request indicates that the browser supports compression, and in that case, the browser will automatically decompress the body before you have access to the compressed bytes.
You may have better luck either explicitly compressing the files on the server and downloading and storing that compressed version (i.e. fetch('asset.json.gz')), or alternatively, using a CompressionStream (which isn't widely supported outside of Chromium-based browsers) to compress your data client-side prior to storing it.
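For the CompressionStream route, a hedged sketch of what that could look like inside a service worker; the cache name and function names are placeholders, gzip is the chosen format, and (as noted above) browser support varies:

    // Store a gzip-compressed copy of a response in Cache Storage and
    // inflate it again on the way out. 'api-cache' is a placeholder name.
    async function cacheCompressed(url) {
      const response = await fetch(url);
      const compressed = response.body.pipeThrough(new CompressionStream('gzip'));
      const cache = await caches.open('api-cache');
      await cache.put(url, new Response(compressed));
    }

    async function readDecompressed(url) {
      const cache = await caches.open('api-cache');
      const cached = await cache.match(url);
      if (!cached) return null;
      // Reverse the compression with a DecompressionStream when reading.
      const plain = cached.body.pipeThrough(new DecompressionStream('gzip'));
      return new Response(plain);
    }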

Clarification needed regarding streaming responses with Aqueduct

I'm reading the docs for the Aqueduct HTTP web server for Dart.
In the section about streaming response bodies I see the following two statements, which do not completely fit together for me:
A body object may also be a Stream<T>. Stream<T> body objects are most often used when serving files. This allows the contents of the file to be streamed from disk to the HTTP client without having to load the whole file into memory first.
and
When a body object is a Stream<T>, the response will not be sent until the stream is closed. For finite streams - like those from opened files - this happens as soon as the entire file is read.
So how does it send the response only after the entire file is read without having to load the whole file into memory first?
That's a great question and the wording in the docs could be improved. The code that matters is here: https://github.com/stablekernel/aqueduct/blob/master/aqueduct/lib/src/http/request.dart#L237.
The outgoing HTTP response is a stream, too. When you write bytes to that stream, the bytes are transmitted across the wire (with buffering enabled - which it is by default - a buffer is built up before sending; IIRC it is 8 KB).
Your source stream - the stream representing your body object - is piped to the HTTP response stream (after being transformed by any codecs, if applicable). As soon as you return the Response object from your controller, your source stream starts being consumed. The response cannot be completed until the source stream indicates it is 'closed'. If you take a peek at the source for FileController, you can see that it uses File.openRead, which automatically closes the source stream once all bytes have been read.
If your source stream is manually controlled, you return your Response object and then asynchronously add bytes to the stream and close it once complete. The key takeaway is that if you own the stream, you need to close it, and system utilities will typically close the stream for you. Hope this answers the question.
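The same pattern is easy to see outside Dart. As a rough Node.js analogue (the file path and port are placeholders): the response object is itself a writable stream, the file is piped into it chunk by chunk, and the response only completes when the source stream closes.

    const http = require('http');
    const fs = require('fs');

    // pipe() forwards each chunk from disk to the socket as it is read and
    // ends the response when the source stream closes - nothing is ever
    // buffered whole in memory.
    http.createServer((req, res) => {
      res.writeHead(200, { 'content-type': 'application/octet-stream' });
      fs.createReadStream('/path/to/large-file.bin').pipe(res);
    }).listen(8080);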

How do I track the amount of data sent/received on iOS?

I'd like to track how much data is transferred to/from an iOS app's backend server. I'm using Apple's NSURLSessionDataTask, which automatically handles gzip decompression. I can easily get the size of the decompressed data. Is there any way to get the size of the data before decompression? Or even better, the size of all data transferred, not just the body?

Need a use case example for stream response in ChicagoBoss

The ChicagoBoss controller API has this:
{stream, Generator::function(), Acc0}
Stream a response to the client using HTTP chunked encoding. For each chunk, the Generator function is passed an accumulator (initially Acc0) and should return either {output, Data, Acc1} or done.
I am wondering what the use case for this is. There are others like json and output. When will this stream form be useful?
Can someone present a real-world use case?
Serving large files for download might be the most straightforward use case.
You could argue that there are also other ways to serve files so that users can download them, but these might have other disadvantages:
By streaming the file, you don't have to read the entire file into memory before starting to send the response to the client. For small files, you could just read the content of the file, and return it as {output, BinaryContent, CustomHeader}. But that might become tricky if you want to serve large files like disk images.
People often suggest serving downloadable files as static files (e.g. here). However, such downloads bypass all controllers, which might be an issue if you want things like download counters or access restrictions. Caching might be an issue, too.
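To make the generator/accumulator shape concrete, here is a hedged Node.js analogue of {stream, Generator::function(), Acc0}: the generator is called with an accumulator and returns a chunk plus the next accumulator, or null for done, while the response goes out with chunked transfer encoding. The counter-based content is purely illustrative.

    const http = require('http');

    // Stand-in for Generator::function(): given an accumulator, return a
    // chunk plus the next accumulator, or null when done.
    function generate(acc) {
      if (acc >= 5) return null;                        // done
      return { data: `chunk ${acc}\n`, next: acc + 1 }; // {output, Data, Acc1}
    }

    http.createServer((req, res) => {
      // No Content-Length header, so Node falls back to chunked encoding.
      res.writeHead(200, { 'content-type': 'text/plain' });
      let acc = 0; // Acc0
      (function pump() {
        const step = generate(acc);
        if (step === null) return res.end();
        acc = step.next;
        // Respect backpressure: wait for 'drain' if the buffer is full.
        if (res.write(step.data)) setImmediate(pump);
        else res.once('drain', pump);
      })();
    }).listen(8080);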

NSURLConnection uploading large files

I need to upload large files, like videos, from my iPhone. Basically I need to read the data in chunks and upload each chunk. My upload is a multipart upload. How can I achieve this using NSURLConnection?
Thanks in advance!
You would likely use "Form-based File Upload over HTML". This is a specialized form of a multipart/form-data POST request.
See Form-based File Upload in HTML and many other sources on the web.
When dealing with large files, you need to strive to keep your memory footprint acceptably low. Thus, the input source for the request data should be an NSInputStream, which avoids precisely this problem. You create an instance of NSInputStream with a class factory method where you specify the file you want to upload. When setting up the NSMutableURLRequest, you set the input stream via setHTTPBodyStream.
At any rate, use NSURLConnection in asynchronous mode implementing the delegates. You will need to keep a reference of the connection object in order to be able to cancel it, if this is required.
Every part shall have a Content-Type - especially the file part - and every part should have a Content-Length, unless chunked transfer encoding is used.
You may want to explicitly set the Content-Length header of the file part to the correct length. Otherwise, if NSURLConnection cannot determine the content length itself - and this is the case when you set an input stream - then NSURLConnection uses chunked transfer encoding. Depending on the content type, a few servers may have difficulties processing either chunked transfer-encoded bodies or very large bodies.
Since there is a high chance that mobile devices will lose their connection in the field during an upload request, you should also consider utilizing "HTTP range headers". Both server and client need to support this.
See "14.35 Range" in RFC 2616, and various other sources regarding resumable file uploads.
There is no system framework that helps you set up the multipart body and calculate the correct content length for the whole message. Doing this yourself without third-party library support is quite error-prone and cumbersome, but doable.
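The multipart layout described above is easier to see in code. Here is a hedged Node.js sketch (the boundary, URL, and file path are placeholders) that streams the file part and sends an explicit Content-Length, so the request avoids chunked transfer encoding:

    const http = require('http');
    const fs = require('fs');

    const boundary = '----ExampleBoundary1234';  // placeholder boundary
    const filePath = '/path/to/video.mp4';       // placeholder file

    // The part header and trailer that wrap the file's bytes; the file part
    // carries its own Content-Type, as noted above.
    const head = Buffer.from(
      `--${boundary}\r\n` +
      'Content-Disposition: form-data; name="file"; filename="video.mp4"\r\n' +
      'Content-Type: video/mp4\r\n\r\n'
    );
    const tail = Buffer.from(`\r\n--${boundary}--\r\n`);

    // Knowing the file size up front lets us send an exact Content-Length
    // instead of falling back to chunked transfer encoding.
    const fileSize = fs.statSync(filePath).size;

    const req = http.request('http://example.com/upload', { // placeholder URL
      method: 'POST',
      headers: {
        'Content-Type': `multipart/form-data; boundary=${boundary}`,
        'Content-Length': head.length + fileSize + tail.length,
      },
    });

    req.write(head);
    const source = fs.createReadStream(filePath); // stream, don't buffer
    source.pipe(req, { end: false });             // keep request open for tail
    source.on('end', () => {
      req.write(tail);
      req.end();
    });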
