Does pdf.js support chunked loading of a PDF, i.e. rendering some chunks while the remaining part downloads in the background? - pdf.js

Does pdf.js support chunked loading of a PDF, i.e. rendering some chunks while the remaining part downloads in the background? If so, how?

PDF.js automatically detects whether the browser and the server can handle chunked loading properly. PDF.js uses XHR in the worker code (pdf.worker.js) to fetch the entire binary PDF data as an arraybuffer. If the server signals that it supports range requests, PDF.js may abort the initial full request and use several HTTP range requests to get portions of the data. (Benefit: the first page shows faster.) If the server sets the HTTP headers wrongly or does not process the requests properly, PDF.js performance suffers. Also, if the browser can load binary data progressively, PDF.js will not abort the main request and will keep loading data through it in parallel.
A few notes about browser limitations where chunked requests do not work:
Safari has a defect in caching range requests, so chunking is disabled for Safari (and any iOS browser);
IE9 has no XHR arraybuffer support.
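If you need to tune this behaviour, the knobs live on the client in getDocument's options. A rough sketch follows; the option names track pdf.js's DocumentInitParameters, so check them against the version you actually ship, and the URL is a placeholder:

```typescript
// Minimal sketch: controlling pdf.js chunked loading from the client side.
// For range requests to kick in, the server must also answer with
// Accept-Ranges: bytes and an accurate Content-Length.
import * as pdfjsLib from 'pdfjs-dist';

const loadingTask = pdfjsLib.getDocument({
  url: '/files/report.pdf',   // placeholder URL
  rangeChunkSize: 65536,      // size of each HTTP range request, in bytes
  disableRange: false,        // true forces a single full download
  disableStream: false,       // true disables progressive loading of the main request
  disableAutoFetch: false,    // true fetches only the chunks the viewer actually needs
});

const pdf = await loadingTask.promise;
console.log(`Loaded ${pdf.numPages} pages`);
```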

Related

Does AVPlayer support live footage served directly from a fragmented MP4 file?

Overview
I have a server generating a livestream of video that is exposed as a fragmented MP4 file.
That file is being served to an iOS emulator trying to play the video using react-native-video, which, I believe, uses AVPlayer.
The first request the emulator makes is a range request for bytes 0-1. I record the X-Playback-Session-Id and respond with 206 Partial Content, bytes 0-1, and the Content-Range header bytes 0-1/*. According to the specification, a size of * indicates that the total length is unknown.
I then receive an error from the AVPlayer stating that the server is not correctly configured. According to the Apple docs, this indicates the server does not support range requests.
I have implemented support for range requests. As an experiment, I set the Content-Range to respond with a very large size instead of * (bytes 0-1/17179869176), which works to an extent: the AVPlayer follows through with multiple range requests for different byte ranges (0-17179869175), though sometimes it only requests a single range.
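To make that initial exchange concrete, the probe handling boils down to something like the following sketch (TypeScript/Node used purely for illustration, not my actual implementation; the placeholder bytes stand in for the real start of the fragmented MP4):

```typescript
import * as http from 'http';

// Illustration only: answer AVPlayer's initial "bytes=0-1" probe with a
// 206 whose Content-Range advertises an unknown total size ("*").
http.createServer((req, res) => {
  const range = req.headers['range'];   // e.g. "bytes=0-1"
  console.log('session', req.headers['x-playback-session-id']);

  if (range === 'bytes=0-1') {
    res.writeHead(206, {
      'Content-Type': 'video/mp4',
      'Accept-Ranges': 'bytes',
      'Content-Range': 'bytes 0-1/*',   // "*" = total length unknown (live stream)
      'Content-Length': 2,
    });
    res.end(Buffer.from([0x00, 0x00])); // placeholder for the file's first two bytes
    return;
  }

  // ...handling of the remaining range requests / streaming omitted...
  res.writeHead(404);
  res.end();
}).listen(8080);
```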
The playback then buffers for a while and displays nothing until I stop the server (with a breakpoint); a short while after, the video stops buffering (but does not close any active connections) and plays what it has loaded so far. Given that this is a livestream, that's not acceptable.
Playing the livestream in Chrome or an Android emulator works exactly as I'd expect: the video plays as soon as it gets the necessary data. But Chrome also does not require any byte-range support to be able to play a video.
I can understand that without any source of Content-Length the AVPlayer is unable to make range requests, as it doesn't know where the file ends. However, as the media I'm exposing is a live stream, I don't have a meaningful Content-Length to give it. So there must be something I can specify, either in headers on the server or as AVPlayer settings on the client, that states the video is a livestream and so cannot be handled through range requests, or that it must request chunks of footage at a time.
I've looked online and found some useful documents on livestreaming, though all of them concern HLS and m3u playlist files. However, changing the back-end to generate m3u playlist files and to decode the video to work out the durations of the chunks correctly would probably take weeks or months more development time, and I don't understand why it'd be necessary, given that I'm only exposing a single resolution of a single video file that does not need to seek, and that it works perfectly fine on Android.
After having spent so long and having come across so many hard-to-resolve issues, it's starting to feel like I've somehow gone down the wrong path and am going about this completely the wrong way.
My question is two-fold:
Does AVPlayer support live footage served directly from a fragmented MP4 file?
If so, how do I implement it?

Save gzip compressed data to cache storage from service worker

We are trying to store some API responses in Cache Storage for a PWA app. We are intercepting the fetch requests in a service worker and storing the responses in the cache. But our uncompressed API responses are a little large, and we want to keep the compressed (gzip) version in the cache and uncompress it when needed.
Is there any way we can prevent the browser from automatically uncompressing the responses from the server?
I'm not aware of any way to do this automatically. Most servers will only compress their response bodies on the fly if the incoming request indicates that the browser supports compression, and in that case, the browser will automatically decompress the body before you have access to the compressed bytes.
You may have better luck either explicitly compressing the files on the server and downloading and storing that compressed version (i.e. fetch('asset.json.gz')), or alternatively, using a CompressionStream (which isn't widely supported outside of Chromium-based browsers) to compress your data client-side prior to storing it.
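As a rough sketch of that second approach (assumes CompressionStream/DecompressionStream support, which currently means Chromium-based browsers; the cache name and marker header are placeholders):

```typescript
// Sketch: compress an API response before caching it, decompress on read.
const CACHE_NAME = 'api-cache-v1';   // placeholder cache name

async function cacheCompressed(url: string): Promise<void> {
  const response = await fetch(url);
  const gzipped = response.body!.pipeThrough(new CompressionStream('gzip'));
  const cache = await caches.open(CACHE_NAME);
  // Store the compressed bytes; mark them so we remember to decompress later.
  await cache.put(url, new Response(gzipped, {
    headers: { 'X-Stored-Encoding': 'gzip' },   // placeholder marker header
  }));
}

async function readDecompressed(url: string): Promise<unknown> {
  const cache = await caches.open(CACHE_NAME);
  const cached = await cache.match(url);
  if (!cached) return null;
  const plain = cached.body!.pipeThrough(new DecompressionStream('gzip'));
  return new Response(plain).json();   // assumes the API returns JSON
}
```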

Clarification needed regarding streaming responses with Aqueduct

I'm reading the docs for the Aqueduct HTTP web server for Dart.
In the section about streaming response bodies I see the following two statements, which do not completely fit together for me:
A body object may also be a Stream<T>. Stream<T> body objects are most often used when serving files. This allows the contents of the file to be streamed from disk to the HTTP client without having to load the whole file into memory first.
and
When a body object is a Stream<T>, the response will not be sent until the stream is closed. For finite streams - like those from opened files - this happens as soon as the entire file is read.
So how does it send the response only after the entire file is read without having to load the whole file into memory first?
That's a great question and the wording in the docs could be improved. The code that matters is here: https://github.com/stablekernel/aqueduct/blob/master/aqueduct/lib/src/http/request.dart#L237.
The outgoing HTTP response is a stream, too. When you write bytes to that stream, the bytes are transmitted across the wire (if you enable buffering, which is on by default, a buffer is built up before sending; IIRC this is 8kb by default).
Your source stream - the stream representing your body object - is piped to the HTTP response stream (after being transformed by any codecs, if applicable). As soon as you return the Response object from your controller, your source stream starts being consumed. The response cannot be completed until the source stream indicates it is 'closed'. If you take a peek at the source for FileController, you can see that it uses File.openRead, which automatically closes the source stream once all bytes have been read.
If your source stream is manually controlled, you return your Response object, then asynchronously add bytes to the stream and close it once complete. The key takeaway is that if you own the stream, you need to close it; streams created by system utilities are typically closed for you. Hope this answers the question.
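The Aqueduct specifics are Dart, but the pattern shows up in any streaming HTTP stack. A rough analogue in TypeScript on Node (a sketch only, not Aqueduct's API): pipe a manually controlled stream into the response, keep adding bytes, and close it when done; the response only completes once the source stream ends.

```typescript
import * as http from 'http';
import { PassThrough } from 'stream';

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/octet-stream' });

  const source = new PassThrough();   // the "body object" we control by hand
  source.pipe(res);                   // bytes go out as they are written, not all at once

  let n = 0;
  const timer = setInterval(() => {
    source.write(`chunk ${n++}\n`);   // asynchronously add bytes...
    if (n === 5) {
      clearInterval(timer);
      source.end();                   // ...and close the stream; only now is the response complete
    }
  }, 100);
}).listen(8080);
```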

CocoaAsyncSocket set the buffer size

I have written a VB.NET server that communicates with a Silverlight client and an iOS client (using CocoaAsyncSocket).
I'm sending and receiving JSON data and PDF documents encoded as base64 strings.
When receiving encoded PDF documents on the client side I have some performance issues. It was easily fixed in the Silverlight client by adjusting the ReceiveBufferSize and setting the SendBufferSize on the server (both currently set to 65536), but on the iOS client I can't find anywhere to set the buffer size.
Receiving a document of about 6 MB takes 3-4 seconds in Silverlight, and 25-30 seconds on iOS.
I have found the problem, and it had nothing to do with the buffer size (CocoaAsyncSocket seems to handle that by itself). I had an NSLog call writing out all the strings, so it was the output to the console that slowed everything down. I thought all NSLog calls were ignored when building the app for release, but that's not the case; it still prints everything out.

how to process an HTTP stream with Dart

Cross-posted from the Dartisans G+ community, where I've got no response so far:
Q: how to do (async) simultaneous stream downloads.
Hi, I'm still learning Dart and, for training, I'd like to create a web page from which I can fetch data from 1 to 10 URLs that are HTTP streams of binary data. Once I've got a chunk of data from each stream, simultaneously, I perform a computation, proceed to the next chunks, and so on, ad lib. I need parallelism because the client has much more network bandwidth than the servers.
Also, I do not want to fully download each URL: they're too big to fit in memory or even in local storage. It's actually pretty similar to video streaming, except it's binary data rather than video, and instead of displaying the data I just do some computation, on many streams at a time.
Can I do that with Dart, and how? Do dart:io or dart:async have the classes I can use for this? Do I need to use web workers to spawn 1 to 10 simultaneous HTTP requests?
Any pointers, advice, or similar samples would be greatly appreciated.
tl;dr: how to process an HTTP stream of data chunk by chunk, and how to parallelize this to process many streams at the same time.
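To pin down the pattern being asked for, here is what "read each HTTP stream chunk by chunk and process several streams at once" looks like as a browser-side TypeScript sketch using fetch and ReadableStream readers (the URLs and the processChunk computation are placeholders; the Dart question itself remains open, but the shape of the loop is the same):

```typescript
// Sketch: read several HTTP streams in parallel, handing each chunk to a
// computation as it arrives instead of buffering whole responses.
async function consumeStream(url: string): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  for (;;) {
    const { done, value } = await reader.read();   // value is a Uint8Array chunk
    if (done || !value) break;
    processChunk(url, value);                      // do the computation on this chunk
  }
}

function processChunk(url: string, chunk: Uint8Array): void {
  // placeholder computation
  console.log(`${url}: got ${chunk.byteLength} bytes`);
}

const urls = ['https://example.com/stream1', 'https://example.com/stream2'];   // placeholders
await Promise.all(urls.map(consumeStream));   // up to N streams processed concurrently
```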
