Clarification needed regarding streaming responses with Aqueduct - dart

I'm reading the docs for the Aqueduct HTTP web server for Dart.
In the section about streaming response bodies I see the following two statements, which don't completely fit together for me:
A body object may also be a Stream<T>. Stream<T> body objects are most
often used when serving files. This allows the contents of the file to
be streamed from disk to the HTTP client without having to load the
whole file into memory first.
and
When a body object is a Stream<T>, the response will not be sent until
the stream is closed. For finite streams - like those from opened
files - this happens as soon as the entire file is read.
So how does it send the response only after the entire file is read without having to load the whole file into memory first?

That's a great question and the wording in the docs could be improved. The code that matters is here: https://github.com/stablekernel/aqueduct/blob/master/aqueduct/lib/src/http/request.dart#L237.
The outgoing HTTP response is a stream, too. When you write bytes to that stream, the bytes are transmitted across the wire (if buffering is enabled - it is by default - a buffer is built up before sending; IIRC it is 8 KB by default).
Your source stream - the stream representing your body object - is piped to the HTTP response stream (after being transformed by any codecs, if applicable). As soon as you return the Response object from your controller, your source stream starts being consumed. The response cannot be completed until the source stream indicates it is 'closed'. If you take a peek at the source for FileController, you can see that it uses File.openRead, which automatically closes the source stream once all bytes have been read.
If your source stream is manually controlled, you return your Response object and then asynchronously add bytes to the stream, closing it once complete. The key takeaway: if you own the stream, you need to close it; streams created by system utilities like File.openRead are closed for you. Hope this answers the question.

Related

MSStream - what's the point?

Bear with me on this one please.
When setting the responseType of a WinJS.xhr request I can set it, among other things, to 'ms-stream' or 'blob'. I was hoping to leverage the stream concept when downloading a file, in such a way that I don't have to keep the whole response in memory (video files can be huge).
However, all I can do with the 'ms-stream' object is read it with an MSStreamReader. This would be great if I could tell it 'consume 1024 bytes from the stream' and loop that until the stream is exhausted. However, from reading the docs (I haven't tried this, so correct me if I'm wrong), it appears I can only read from the stream once (e.g. with the readAsBlob method) and I can't set the start position. This means I need to read the whole response into memory as a blob - which I could achieve by setting responseType to 'blob' in the first place. So what is the point of MSStream anyway?
Well, it turns out that the msDetachStream method gives access to the underlying stream and doesn't interrupt the download process. I initially thought that any data not yet downloaded was lost when calling it, since the docs mention that the MSStream object is closed.
I wrote a blog post a while back to help answer questions about MSStream and other oddball object types that you encounter in WinRT and the host for JavaScript apps. See http://www.kraigbrockschmidt.com/2013/03/22/msstream-blob-objects-html5/. Yes, you can use MSStreamReader for some work (it's a synchronous API), but you can also pass an MSStream to URL.createObjectURL to assign it to an img.src, and so forth.
With MSStream, here's some of what I wrote: "MSStream is technically an extension of this HTML5 File API that provides interop with WinRT. When you get MSStream (or Blob) objects from some HTML5 API (like an XmlHttpRequest with responseType of “ms-stream,” as you’d use when downloading a file or video, or from the canvas’ msToBlob method), you can pass those results to various WinRT APIs that accept IInputStream or IRandomAccessStream as input. To use the canvas example, the msRandomAccessStream in a blob from msToBlob can be fed into APIs in Windows.Graphics.Imaging for transform or transcoding. A video stream can be similarly worked with using the APIs in Windows.Media.Transcoding. You might also just want to write the contents of a stream to a StorageFile (that isn’t necessarily on the file system) or copy them to a buffer for encryption."
So MSStreamReader isn't the end-all. The real use of MSStream is to pass the object into WinRT APIs that accept the aforementioned interface types, which opens many possibilities.
Admittedly, this is an under-documented area, which is exactly why I wrote my series of posts under the title Q&A on Files, Streams, Buffers, and Blobs (the initial post is at http://www.kraigbrockschmidt.com/2013/03/18/why-doesnt-storagefile-close-method/).

How to know whether this is the last packet sent from a bulletin board system

I am writing a bulletin board system (BBS) reader on iOS. I use the GCDAsyncSocket library to handle packet sending and receiving. The issue I have is that the server always splits the data it sends into multiple packets. I can see that happening by printing out the received string in the didReceiveData() function.
From the GCDAsyncSocket readme, I understand that TCP is a stream. I also know there are some end-of-stream mechanisms, such as double CR LFs at the end. I have used Wireshark to parse the packets, but there is no sign of any sort of pattern in the last data packet. The site is not owned by me, so I can't make it send certain bytes. There must be some way to detect the last packet - otherwise, how would BBS clients handle displaying data?
Double CR LFs are not end of stream. That is just a detail of the HTTP protocol, for example, and has nothing to do with closing the stream. HTTP 1.1 allows me to send multiple responses on a single stream, each with a double CR LF after its header, without ending the stream.
The TCP socket stream will return 0 from a read when it has been closed from the other end, whether the socket is blocking or non-blocking.
So, assuming the server will close the socket when it is done sending to you, you can loop and perform a blocking read: if the read returns > 0, process the data and read again; if it returns < 0, process the error code (which may or may not be fatal); and if it returns == 0, the socket has been closed from the other side, so don't read again.
For non-blocking sockets, you can use select() or some other API to detect when the stream becomes ready to read. I'm not familiar with the specific library you are using, but if it is a POSIX/Berkeley sockets API, it will work that way.
In either case, you should build up a buffer of input, concatenating the results of each read until you are ready to process them. As you've found, you can't assume that a single read will return a complete application-level packet. But as to your question: unless the server wants you to close the socket, you should wait for read to return 0, as in the sketch below.
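For illustration, here is a minimal sketch of that blocking-read loop in Swift against the plain POSIX API (GCDAsyncSocket wraps this for you; fd is assumed to be an already-connected socket descriptor):

    import Foundation   // for Data
    import Darwin       // for read(), errno, EINTR

    // Read until the peer closes the connection, accumulating everything in
    // one buffer; a single read() rarely returns a whole application packet.
    func readUntilPeerCloses(fd: Int32) -> Data {
        var received = Data()
        var chunk = [UInt8](repeating: 0, count: 4096)
        while true {
            let n = read(fd, &chunk, chunk.count)
            if n > 0 {
                received.append(chunk, count: n)   // got data: keep it, read again
            } else if n == 0 {
                break                              // read returned 0: peer closed the socket
            } else if errno == EINTR {
                continue                           // interrupted by a signal: just retry
            } else {
                break                              // real error: inspect errno
            }
        }
        return received
    }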

NSURLConnection uploading large files

I need to upload large files, like video, from my iPhone. Basically I need to read the data in chunks and upload each chunk. My upload is a multipart upload. How do I achieve this using NSURLConnection?
Thanks in advance!
You would likely use "Form-based File Upload in HTML". This is a specialized form of a multipart/form-data POST request.
See Form-based File Upload in HTML (RFC 1867) and many other sources on the web.
When dealing with large files, you need to strive to keep your memory footprint acceptably low. Thus, the input source for the request data should be an NSInputStream, which precisely avoids this problem. You create an instance of NSInputStream with a class factory method where you specify the file you want to upload. When setting up the NSMutableURLRequest, you set the input stream via setHTTPBodyStream:.
At any rate, use NSURLConnection in asynchronous mode, implementing the delegate methods. You will need to keep a reference to the connection object in order to be able to cancel it, should that be required.
Every part of the multipart body shall have a Content-Type - especially the file part - and every part should have a Content-Length, unless chunked transfer encoding is used.
You may want to explicitly set the Content-Length header of the file part to the correct length. Otherwise, if NSURLConnection cannot determine the content length itself - and this is true when you set an input stream - then NSURLConnection uses chunked transfer encoding. Depending on the content type, a few servers may have difficulties processing either chunked transfer-encoded bodies or very large bodies.
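A minimal sketch of that setup in Swift (the upload URL, file path, and the UploadDelegate class are placeholders invented for this example; for brevity it streams the file as the whole body, with the multipart framing itself sketched further below):

    import Foundation

    // Hypothetical delegate; implement whichever NSURLConnectionDataDelegate
    // callbacks you care about (progress, response, errors).
    class UploadDelegate: NSObject, NSURLConnectionDataDelegate {
        func connectionDidFinishLoading(_ connection: NSURLConnection) {
            print("upload finished")
        }
    }

    let filePath = "/path/to/video.mov"   // placeholder
    let attrs = try! FileManager.default.attributesOfItem(atPath: filePath)
    let fileSize = (attrs[.size] as! NSNumber).uint64Value

    let request = NSMutableURLRequest(url: URL(string: "https://upload.example.com/files")!)
    request.httpMethod = "POST"
    request.setValue("video/quicktime", forHTTPHeaderField: "Content-Type")
    // An explicit Content-Length keeps NSURLConnection from falling back to
    // chunked transfer encoding:
    request.setValue("\(fileSize)", forHTTPHeaderField: "Content-Length")
    // The body is streamed from disk instead of being loaded into memory:
    request.httpBodyStream = InputStream(fileAtPath: filePath)

    // Asynchronous mode with delegates; keep this reference around so the
    // upload can be cancelled if needed.
    let connection = NSURLConnection(request: request as URLRequest,
                                     delegate: UploadDelegate())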
Since there is a high chance of mobile devices losing their connection in the field during an upload request, you should also consider utilizing "HTTP range headers". Both server and client need to support this.
See "14.35 Range" in RFC 2616, and various other sources regarding "resumable file upload".
There is no system framework that helps you set up the multipart body and calculate the correct content length for the whole message. Doing this yourself without third-party library support is quite error-prone and cumbersome, but doable.
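If you do assemble the multipart body yourself, the bookkeeping looks roughly like this (a sketch; the boundary, field name, file name, and file size are all made up for illustration):

    import Foundation

    let boundary = "Boundary-\(UUID().uuidString)"
    let fileSize = 84_000_000   // byte count of the raw file data (made up)

    // The framing that surrounds the raw file bytes in the body:
    let head = "--\(boundary)\r\n" +
               "Content-Disposition: form-data; name=\"file\"; filename=\"video.mov\"\r\n" +
               "Content-Type: video/quicktime\r\n\r\n"
    let tail = "\r\n--\(boundary)--\r\n"

    // The Content-Length of the whole message is the sum of all three pieces:
    let totalLength = head.utf8.count + fileSize + tail.utf8.count
    print("Content-Type: multipart/form-data; boundary=\(boundary)")
    print("Content-Length: \(totalLength)")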

Using AudioFileStreamParseBytes to parse bytes read from a file does not trigger kAudioFileStreamProperty_ReadyToProducePackets

I am trying to parse bytes by opening an audio file (M4A), reading 2048 bytes at a time in a loop, and passing them to AudioFileStreamParseBytes. The callback is never invoked with the property kAudioFileStreamProperty_ReadyToProducePackets, but it is successfully invoked for kAudioFileStreamProperty_FileFormat (so I know the callback mechanism is working).
Questions :
Can I use AudioFileStreamParseBytes to parse audio data read from a local file? Most examples show how to use AudioFileStreamParseBytes on HTTP stream data, not on data read from a local file.
Has anyone tried the above and successfully processed audio files this way?
NOTE: The reason I am not using AudioFileOpenWithCallbacks to open the M4A file is that that API call fails to open the following file after downloading it locally (http://www.arsenal-music.com/podcast/arsenal-podcast-05.m4a). This M4A file has images embedded in it, and AudioFileOpenWithCallbacks is not able to parse it. At the same time, I have used Matt Gallagher's AudioStreamer code, which opens this file, parses it, and plays it fine. So I am leaning towards using AudioFileStreamParseBytes and reading the data from a local file. (As far as I understand, there should be no difference whether the data comes from an HTTP stream or a local file, but I could be wrong.)
I do not see any errors from any of the API calls (just in case someone asks). I can paste the code if needed, but all I want to know is whether the approach of reading from a file and passing the bytes to AudioFileStreamParseBytes will work or not.
After some testing I found that one can indeed use AudioFileStreamParseBytes to parse bytes read from a local file; the data need not come from an HTTP stream.
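For reference, a rough Swift sketch of that approach: open the local file, feed it to the parser 2048 bytes at a time, and watch for ReadyToProducePackets in the property callback. The file path is a placeholder, and error handling is kept to a minimum:

    import AudioToolbox
    import Foundation

    // Called for each property the parser discovers in the stream.
    let propertyProc: AudioFileStream_PropertyListenerProc = { _, _, propertyID, _ in
        if propertyID == kAudioFileStreamProperty_ReadyToProducePackets {
            print("ready to produce packets")
        }
    }

    // Called with parsed audio packets; a player would enqueue them here.
    let packetsProc: AudioFileStream_PacketsProc = { _, numBytes, numPackets, _, _ in
        print("parsed \(numPackets) packets (\(numBytes) bytes)")
    }

    var streamID: AudioFileStreamID?
    let status = AudioFileStreamOpen(nil, propertyProc, packetsProc,
                                     kAudioFileM4AType, &streamID)
    assert(status == 0)   // noErr

    let handle = FileHandle(forReadingAtPath: "/path/to/local.m4a")!  // placeholder
    while true {
        let chunk = handle.readData(ofLength: 2048)
        if chunk.isEmpty { break }   // end of file
        _ = chunk.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
            AudioFileStreamParseBytes(streamID!, UInt32(chunk.count),
                                      raw.baseAddress!, [])
        }
    }
    AudioFileStreamClose(streamID!)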

webmmux directshow seeking queues IStream

I am using the DirectShow filter for muxing VP8 and Vorbis.
And MOST IMPORTANTLY, I am sending (trying to send, actually) the WebM file in real time.
So there is no file being created.
As data is packed into WebM after being encoded, I send it off to the socket.
The file sink filter uses IStream to do the file I/O, and it heavily uses the seek operation, which I cannot use, since I cannot seek on a socket.
Has anyone implemented this muxer, or does anyone know how to use it, so that the seek operation is not called?
Or maybe there is a version of the muxer with queues so that it supports fragmentation.
Thanks
I am using the DirectShow filter provided by www.webmproject.org
Implementing IStream on a writer allows multiplexers to update cross-references in the written stream/file, so that they don't have to write strictly sequentially, which is impossible for most container formats without creating huge buffers or temporary files.
Now, if you are creating the file at runtime in order to progressively send it over the network, which I suppose is what you are trying to achieve, you don't know what, where, and when the multiplexer is going to update in order to finalize the file: whether it is going to revisit data at the beginning of the file and update references, headers, etc.
You are supposed to create the full file first and then deliver it. Otherwise you need to substitute the whole writer and deliver everything it writes onto the socket, including overwrites of already-sent data. The most appropriate method of delivering real-time data over a network, however, is not to transfer files at all: the sender sends the individual streams, and receivers either use them as such or multiplex them into a file after receiving, if necessary.
