Save gzip compressed data to cache storage from service worker

We are trying to store some API responses in Cache Storage for a PWA. We are intercepting the fetch requests in a service worker and storing the responses in the cache. But our uncompressed API responses are somewhat large, and we want to keep the compressed (gzip) version in the cache and decompress it when needed.
Is there any way we can prevent the browser from automatically decompressing the responses from the server?

I'm not aware of any way to do this automatically. Most servers will only compress their response bodies on the fly if the incoming request indicates that the browser supports compression, and in that case, the browser will automatically decompress the body before you have access to the compressed bytes.
You may have better luck either explicitly compressing the files on the server and downloading and storing that compressed version (i.e. fetch('asset.json.gz')), or alternatively, using a CompressionStream (which isn't widely supported outside of Chromium-based browsers) to compress your data client-side prior to storing it.
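For the first approach, a minimal sketch of a service worker doing this might look like the following (the URL, the cache name, and the assumption that the server serves the pre-compressed file without a Content-Encoding header are all hypothetical, not taken from the question):

```js
const CACHE_NAME = 'compressed-api-v1'; // hypothetical cache name

// Store the compressed bytes as-is. This assumes '/data/asset.json.gz' was
// compressed at build/deploy time and is served WITHOUT Content-Encoding,
// so the browser hands us the raw gzip bytes instead of decoding them.
async function cacheCompressedAsset(url) {
  const cache = await caches.open(CACHE_NAME);
  const response = await fetch(url);
  await cache.put(url, response.clone());
  return response;
}

// Read the cached entry back and decompress it on demand.
// DecompressionStream is the counterpart of CompressionStream and currently
// has similarly limited browser support.
async function readDecompressed(url) {
  const cache = await caches.open(CACHE_NAME);
  const cached = await cache.match(url);
  if (!cached) return null;
  const decompressed = cached.body.pipeThrough(new DecompressionStream('gzip'));
  return new Response(decompressed).json();
}
```

The same CompressionStream/DecompressionStream pair could also be used for the second approach, i.e. re-compressing an already-decoded response body before calling cache.put().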

Related

iOS UIWebView - caching assets in native app

I am evaluating a project that was originally targeted to be just a PWA using React and Redux.
The application needs offline support though, and needs a sizable amount of media assets (images and videos) to be available offline.
Since the service worker storage limit is just 50MB, this is not feasible for iOS.
I have toyed with the idea of using a native app wrapper that handles the storage of the media files, with most of the app remaining a Redux/React implementation.
Is there a good way to expose such assets to the UIWebView from the native app? Or are there other common approaches for this situation?
First of all, you should try to cache only those assets that are necessary for your PWA. Still, if you want to store large files, I would suggest going with the IndexedDB API.
IndexedDB is a low-level API for client-side storage of significant amounts of structured data, including files/blobs. This API uses indexes to enable high-performance searches of this data. While Web Storage is useful for storing smaller amounts of data, it is less useful for storing larger amounts of structured data. IndexedDB provides a solution.
Why IndexedDB?
When the quota is exceeded in IndexedDB, the transaction's onabort() handler is called with an Event as its argument.
When a browser asks the user for permission to extend the storage size, this handler is called only if the user refuses; otherwise the transaction continues.
If you want to know about other possible client-side databases, I would suggest going through this link:
https://www.html5rocks.com/en/tutorials/offline/quota-research/
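As a rough sketch of that idea (the database and object store names are made up for illustration), storing a downloaded media file as a Blob in IndexedDB and watching for a quota-related abort might look like this:

```js
// Hypothetical example: cache a media asset as a Blob in IndexedDB.
function saveMedia(url) {
  const open = indexedDB.open('media-cache', 1);
  open.onupgradeneeded = () => open.result.createObjectStore('assets');

  open.onsuccess = async () => {
    const db = open.result;
    // Fetch the asset first; starting the transaction before awaiting would
    // let it auto-commit before the put() runs.
    const blob = await (await fetch(url)).blob();

    const tx = db.transaction('assets', 'readwrite');
    tx.objectStore('assets').put(blob, url);

    // Fired e.g. when the quota is exceeded or the user denies more storage.
    tx.onabort = (event) => console.warn('IndexedDB write aborted', event);
    tx.oncomplete = () => console.log('stored', url);
  };
}
```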

HTTP/2 Streaming and static compression

I need to implement an HTTP/2 server, both in Node and in C++. Anyhow, I can't grasp how to make streaming work with static compression:
I want to compress my files with the highest compression possible, and this is done statically at build time
I want to stream my HTML, so the browser receives the <head> asap, and can either prefetch resources or retrieve them from the local cache
But files that are compressed can't be read before receiving all the data, can they?
Should I give up compression, or should I compress HTML stream chunks separately? Is there a better way?
But files that are compressed can't be read before receiving all the data, can they?
This is (generally) incorrect. Deflate based compression (e.g. gzip, brotli) as used for HTML files can be decompressed without receiving all the data.
These work mostly by back-referencing data. For example the above sentence has a repeated reference to the text “compress”:
Deflate based compression (e.g. gzip, brotli) can be decompressed without receiving all the data.
So the second instance could be replaced with a back reference to the first one:
Deflate based compression (e.g. gzip, brotli) can be de(-49,8)ed without receiving all the data.
So you can see that as long as you are reading in order (which HTTP guarantees) and from the beginning, then you don’t need any subsequent data to decompress what you’ve already received - but you do need any previous text.
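To see this concretely, here is a small Node sketch (the file name is hypothetical): decoded data is emitted while compressed chunks are still arriving, without waiting for the end of the file.

```js
// Incremental gzip decompression with Node's built-in zlib streams.
const zlib = require('zlib');
const fs = require('fs');

const gunzip = zlib.createGunzip();
gunzip.on('data', (chunk) => {
  // Fires repeatedly while 'page.html.gz' is still being read.
  console.log('decoded', chunk.length, 'bytes');
});

fs.createReadStream('page.html.gz', { highWaterMark: 4096 }) // small chunks
  .pipe(gunzip);
```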
Similarly JPEGs are often displayed before they are fully received, either by loading it line by line (non-progressive JPEGs), or by having a blurry image which is enhanced as more data is loaded (progressive JPEGs).
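Putting this together for the original question: one option is to compress the HTML at build time and simply stream the compressed bytes, letting the browser decode them as they arrive. A rough Node http2 sketch (file names and paths are assumptions, and a real server would also check the request's accept-encoding header):

```js
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // assumed to exist
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    stream.respond({
      ':status': 200,
      'content-type': 'text/html; charset=utf-8',
      'content-encoding': 'br', // file was brotli-compressed at build time
    });
    // Piping streams the compressed bytes chunk by chunk; nothing is
    // buffered, and the client decompresses and parses as data arrives.
    fs.createReadStream('index.html.br').pipe(stream);
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);
```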

Need a use case example for stream response in ChicagoBoss

ChicagoBoss controller API has this:
{stream, Generator::function(), Acc0}
Stream a response to the client using HTTP chunked encoding. For each
chunk, the Generator function is passed an accumulator (initially Acc0)
and should return either {output, Data, Acc1} or done.
I am wondering what the use case for this is. There are others like json and output. When will this stream be useful?
Can someone present a real-world use case?
Serving large files for download might be the most straightforward use case.
You could argue that there are also other ways to serve files so that users can download them, but these might have other disadvantages:
By streaming the file, you don't have to read the entire file into memory before starting to send the response to the client. For small files, you could just read the content of the file, and return it as {output, BinaryContent, CustomHeader}. But that might become tricky if you want to serve large files like disk images.
People often suggest to serve downloadable files as static files (e.g. here). However, these downloads bypass all controllers, which might be an issue if you want things like download counters or access restrictions. Caching might be an issue, too.

Does pdf.js support chunking of a PDF, i.e. loading some chunks while the rest is still downloading in the background?

Does pdf.js support chunking of a PDF, i.e. loading some chunks of the PDF while the remaining part is downloaded in the background? How?
PDF.js automatically detects whether the browser and the server can handle chunked loading properly. PDF.js uses XHR in the worker code (pdf.worker.js) to fetch the entire binary PDF data as an arraybuffer. It may abort the initial full request and use several HTTP range requests to get portions of the data if the server signals that it supports range requests. (Benefit: the first page shows faster.) If the server sets the HTTP headers wrongly or does not process range requests properly, PDF.js performance suffers. Also, if the browser can load binary data progressively, it will not abort the main request and will continue loading data over the main request in parallel.
A few notes about browser limitations where chunked loading does not work:
Safari has a defect with caching of range requests, so chunking is disabled for Safari (and any iOS browser);
IE9 has no XHR arraybuffer support.
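On the client side there is usually nothing special to configure, but PDF.js does expose options that control this behaviour. A hedged sketch (the option names come from PDF.js's DocumentInitParameters, defaults can differ between versions, and the server must answer range requests with Accept-Ranges/Content-Range for chunking to kick in):

```js
const loadingTask = pdfjsLib.getDocument({
  url: '/files/large.pdf',   // hypothetical URL; server must support ranges
  disableRange: false,       // allow HTTP range requests for chunks
  disableStream: false,      // allow progressive loading of the main request
  rangeChunkSize: 65536,     // bytes fetched per range request
});

loadingTask.promise.then(async (pdf) => {
  // The first page can render before the whole file has downloaded.
  const page = await pdf.getPage(1);
  console.log('page', page.pageNumber, 'ready of', pdf.numPages);
});
```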

NSURLConnection uploading large files

I need to upload large files like video from my iPhone. Basically I need to read the data as chunks and upload each chunk. My upload is a multipart upload. How can I achieve this using NSURLConnection?
Thanks in advance!
You will likely use "Form-based File Upload in HTML". This is a specialized form of a multipart/form-data POST request.
See Form-based File Upload in HTML and many other sources in the web.
When dealing with large files, you need to strive to keep your memory footprint acceptably low. Thus, the input source for the request data should be an NSInputStream, which avoids exactly this problem. You create an instance of NSInputStream with a class factory method where you specify the file you want to upload. When setting up the NSMutableURLRequest you set the input stream via setHTTPBodyStream.
At any rate, use NSURLConnection in asynchronous mode implementing the delegates. You will need to keep a reference of the connection object in order to be able to cancel it, if this is required.
Every part of the multipart body shall have a Content-Type - especially the file part - and every part should have a Content-Length, unless chunked transfer encoding is used.
You may want to explicitly set the Content-Length header of the file part with the correct length. Otherwise, if NSURLConnection cannot determine the content length itself - and this is the case when you set an input stream - then NSURLConnection uses chunked transfer encoding. Depending on the content type, a few servers may have difficulties processing either chunked transfer encoded bodies or very large bodies.
Since there is a high chance for mobile devices to lose their connection in the field during an upload request, you should also consider utilizing "HTTP range headers". Both server and client need to support this.
See "14.35 Range" in RFC 2616, and various other sources regarding "resumable file upload".
There is no system framework that helps you set up the multipart body and calculate the correct content length for the whole message. Doing this yourself without third-party library support is quite error-prone and cumbersome, but doable.
