When a StreamSubscription is paused, are events buffered or dropped? - dart

The StreamSubscription class has a pause() method. The docs don't indicate whether events are buffered while a stream is paused (and then all fired once resumed), or dropped; which is it?

A StreamSubscription is always expected to buffer events while it is paused.
It may pass the pause state on to its source to avoid being swamped, but even if it can't, it will buffer data until it runs out of memory.
For a broadcast stream, where events are typically not part of a greater whole, you might not want the events. In that case you can cancel the subscription and create a new one when you need events again. Broadcast streams should generally allow resubscribing after a cancel, but some may have been set up in such a way that it isn't possible, e.g., by dropping its resources after the last client cancels.
For a single subscription stream, where events are often a sequence of chunks of a bigger thing, dropping events should probably never happen.
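A minimal sketch of that buffering behaviour, using a plain StreamController (all names here are just for illustration):

import 'dart:async';

Future<void> main() async {
  final controller = StreamController<int>();
  final received = <int>[];
  final subscription = controller.stream.listen(received.add);

  subscription.pause();
  controller.add(1);
  controller.add(2);
  controller.add(3);

  await Future<void>.delayed(Duration.zero);
  print(received); // [] - nothing is delivered while paused.

  subscription.resume();
  await Future<void>.delayed(Duration.zero);
  print(received); // [1, 2, 3] - the buffered events arrive after resume.

  await controller.close();
}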

The docs also include this text:
Currently DOM streams silently drop events when the stream is paused. This is a bug and will be fixed.
This suggests that the intention is for events to be buffered and then released once you unpause. If you don't want to receive the events from that period, you are best off cancelling and resubscribing.
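For the broadcast case, here is a sketch of the cancel-and-resubscribe approach (the periodic controller just stands in for some real event source):

import 'dart:async';

Future<void> main() async {
  final ticks = StreamController<int>.broadcast();
  var n = 0;
  final timer =
      Timer.periodic(const Duration(milliseconds: 10), (_) => ticks.add(n++));

  var subscription = ticks.stream.listen((event) => print('got $event'));
  await Future<void>.delayed(const Duration(milliseconds: 35));

  // Stop listening: events emitted from now on are simply lost to us.
  await subscription.cancel();
  await Future<void>.delayed(const Duration(milliseconds: 35));

  // Interested again: subscribe anew and receive only new events.
  subscription = ticks.stream.listen((event) => print('got $event again'));
  await Future<void>.delayed(const Duration(milliseconds: 35));

  await subscription.cancel();
  timer.cancel();
  await ticks.close();
}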

Related

Do Dart streams come with extra overhead?

I have a general efficiency question about dart streams.
I have a project that makes some use of them, but it has been proposed that we convert nearly everything (functions and data) to be dart streams. This is in order to achieve a fully reactive architecture.
I don't know how streams really work under the hood, so I don't really know if this kind of design comes with any kind of memory or computational overhead.
Thanks for your attention to this question.
There is an overhead. It's not necessarily big, but it's there.
Streams have a well-defined asynchronous behavior, and it's documented how they react to listeners being added, paused or cancelled, even if that happens while an event is being delivered (because, most often, that is when it happens).
Streams are asynchronous, which means there is a delay between adding an event to the stream (through a StreamController), and that event being received by the listener. That delay makes it necessary to store (buffer) the event, schedule a microtask, and then unbuffer the event and deliver it in that later microtask. Scheduling a microtask costs. There might be zones involved, which can cost extra.
On top of that, the stream needs to be able to react to pause and cancel in a timely manner, which means that each event delivery is also flanked by extra checks of whether the subscription has been paused or cancelled. It's not a lot of overhead, but it's there.
For single-subscription streams, that's about it.
For broadcast streams, which can have multiple listeners, there can be a little extra overhead to handle new listeners being added while delivering the event. Again, not a lot, but it's there. The state-space for a stream is actually quite complicated.
(You can create a "synchronous StreamController" which delivers events "immediately", but most of the time, you shouldn't. Those are not for avoiding asynchrony; they are for avoiding extra asynchronous delays when propagating already asynchronous events, and should be used very carefully to avoid breaking code that assumes it won't get events in the middle of something else. A properly implemented reactive framework will use such controllers in its implementation, but that will not get rid of the inherent delay of delivering the original asynchronous event.)
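To make the delivery delay concrete, here is a small sketch contrasting a default (asynchronous) controller with a sync: true one:

import 'dart:async';

void main() {
  // Default (asynchronous) controller: add() only buffers the event and
  // schedules a microtask; the listener runs after the current code finishes.
  final asyncController = StreamController<String>();
  asyncController.stream.listen(print);
  asyncController.add('async event');
  print('after adding to the async controller');

  // sync: true controller: the listener runs during the add() call itself,
  // so the event arrives in the middle of whatever code added it.
  final syncController = StreamController<String>(sync: true);
  syncController.stream.listen(print);
  syncController.add('sync event');
  print('after adding to the sync controller');

  // Prints:
  //   after adding to the async controller
  //   sync event
  //   after adding to the sync controller
  //   async event
}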
Now, performance is not absolute. Using streams everywhere might make your life easier, and if the performance is good enough for your application (it's not dominating the actual computations), then the increased development speed and maintainability might pay for itself. You should measure (and have repeatable benchmarks to measure) before making a decision about an implementation strategy based on performance alone.
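As a starting point for such a measurement, a rough micro-benchmark template (the workload is a placeholder; treat the numbers only as relative indications on your own machine):

import 'dart:async';

Future<void> main() async {
  const n = 100000;

  // Baseline: plain synchronous calls.
  final direct = Stopwatch()..start();
  var directSum = 0;
  for (var i = 0; i < n; i++) {
    directSum += i;
  }
  direct.stop();

  // The same work pushed through an asynchronous stream.
  final viaStream = Stopwatch()..start();
  final controller = StreamController<int>();
  var streamSum = 0;
  final finished =
      controller.stream.listen((value) => streamSum += value).asFuture<void>();
  for (var i = 0; i < n; i++) {
    controller.add(i);
  }
  await controller.close();
  await finished;
  viaStream.stop();

  print('direct: ${direct.elapsedMicroseconds} us (sum $directSum)');
  print('stream: ${viaStream.elapsedMicroseconds} us (sum $streamSum)');
}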

Is there a way to have a future complete when a Stream is "done" without actually draining the messages, in Dart?

I want to see if the other side gave up and closed the sink of a StreamChannel, without actually reading the messages yet.
(I'm going to be handing the stream to someone else, so I can't listen() to it, since you're only allowed to listen once per stream.)
[posting for a friend, credit to them for asking the question]
In short, no.
There is no concept of "giving up". If you put events into a non-broadcast stream, they'll stay there until someone listens to the stream (which is why you shouldn't put data there until someone listens; you're just wasting memory).
That includes the done event, and you won't get to the done event without first reading all the preceding events. That's the core abstraction of a stream - a source of events accessed in order; it's not done until it's actually done.
What I think you are looking for is a "side channel" that can communicate information about the stream without going through the stream (that is, out-of-band).
Something like that can surely be built - in about one gazillion different ways, depending on what you want, but it's just not something that a Stream supports by default, nor does a StreamChannel, if I read it correctly (I have never used a StreamChannel myself).
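One possible shape for such a side channel, sketched as a small producer class that exposes its own out-of-band "finished writing" future (the class and its names are made up for illustration; nothing like this ships with Stream or StreamChannel):

import 'dart:async';

class ChunkSource {
  final _controller = StreamController<List<int>>();
  final _finishedWriting = Completer<void>();

  /// The data itself; hand this to whoever will actually read the chunks.
  Stream<List<int>> get stream => _controller.stream;

  /// Completes as soon as the producer calls [close], even if nobody has
  /// listened to [stream] yet - this is the out-of-band signal.
  Future<void> get finishedWriting => _finishedWriting.future;

  void add(List<int> chunk) => _controller.add(chunk);

  void close() {
    if (!_finishedWriting.isCompleted) {
      _finishedWriting.complete();
    }
    _controller.close();
  }
}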

Playing back a WAV file streamed gradually over a network connection in iOS

I'm working with a third party API that behaves as follows:
I have to connect to its URL and make my request, which involves POSTing request data;
the remote server then sends back, one "chunk" at a time, the corresponding WAV data (which I receive in my NSURLConnectionDataDelegate's didReceiveData callback).
By "chunk" for argument's sake, we mean some arbitrary "next portion" of the data, with no guarantee that it corresponds to any meaningful division of the audio (e.g. it may not be aligned to a specific multiple of audio frames, the number of bytes in each chunk is just some arbitrary number that can be different for each chunk, etc).
Now, correct me if I'm wrong, but I can't simply use an AVAudioPlayer, because I need to POST to my URL, so I have to pull the data back "manually" via an NSURLConnection.
So... given the above, what is the most painless way for me to play back that audio as it comes down the wire? (I appreciate that I could concatenate all the arrays of bytes and pass the whole thing to an AVAudioPlayer at the end, but that would delay the start of playback, since I'd have to wait for all the data.)
I will give a bird's-eye view of the solution. I think this will help you a great deal in the direction of a concrete, coded solution.
iOS provides a zoo of audio APIs, and several of them can be used to play audio. Which one you choose depends on your particular requirements. As you wrote already, the AVAudioPlayer class is not suitable for your case, because with it you need to have all the audio data at the moment playback starts. Obviously this is not the case for streaming, so we have to look for an alternative.
A good tradeoff between ease of use and versatility is Audio Queue Services, which I recommend for your case. Another alternative would be Audio Units, but they are a low-level C API, therefore less intuitive to use, and they have many pitfalls. So stick to Audio Queues.
Audio Queues allow you to define callback functions which are called from the API when it needs more audio data for playback - similarly to the callback of your network code, which gets called when there is data available.
Now the difficulty is how to connect two callbacks, one which supplies data and one which requests data. For this, you have to use a buffer. More specifically, a queue (don't confuse this queue with the Audio Queue stuff. Audio Queue Services is the name of an API. On the other hand, the queue I'm talking about next is a container object). For clarity, I will call this one buffer-queue.
To fill data into the buffer-queue you will use the network callback function, which supplies data to you from the network. And data will be taken out of the buffer-queue by the audio callback function, which is called by the Audio Queue Services when it needs more data.
You have to find a buffer-queue implementation which supports concurrent access (i.e., it is thread safe), because it will be accessed from two different threads: the audio thread and the network thread.
As an alternative to finding an already thread-safe buffer-queue implementation, you can take care of the thread safety on your own, e.g. by executing all code dealing with the buffer-queue on a certain dispatch queue (a third kind of queue here; yes, Apple and IT love them).
Now, what happens if either
The audio callback is called and your buffer-queue is empty, or
The network callback is called and your buffer-queue is already full?
In both cases, the respective callback function can't proceed normally. The audio callback function can't supply audio data if there is none available and the network callback function can't store incoming data if the buffer-queue is full.
In these cases, I would first try blocking further execution until more data is available, or until there is space to store data, respectively. On the network side this will most likely work. On the audio side it might cause problems, but there is an easy solution: if you have no data, simply supply silence. That means you feed zero-frames to the Audio Queue Services, which it will play as silence to fill the gap until more data arrives from the network.
This is the concept all streaming players use: when the audio suddenly stops, they show "buffering" next to some kind of spinning icon, telling you to wait for an unknown amount of time.
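A minimal Swift sketch of such a buffer-queue, serialising access on a private dispatch queue and padding with silence when it runs dry (the class name and API are purely illustrative):

import Foundation

final class AudioByteFIFO {
    private var buffer = Data()
    private let accessQueue = DispatchQueue(label: "audio.byte.fifo")

    /// Called from the network callback with each arriving chunk.
    func append(_ chunk: Data) {
        accessQueue.sync { buffer.append(chunk) }
    }

    /// Called from the audio callback. Returns exactly `count` bytes,
    /// padded with zeros (silence) if not enough data has arrived yet.
    func dequeue(count: Int) -> Data {
        return accessQueue.sync {
            var out = Data(buffer.prefix(count))
            buffer.removeFirst(out.count)
            if out.count < count {
                out.append(Data(count: count - out.count)) // zero bytes = silence
            }
            return out
        }
    }
}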

When is the NSStream NSStreamEventHasSpaceAvailable event called?

I can't really understand this event.
I'm hoping that it is called when the sending queue (or some similar internal structure) is done sending previously written packets.
Is that a correct assumption?
I'm working on a video streamer over Multipeer Connectivity, and I want to use this event to decide whether I should drop a camera frame (if there is no NSStreamEventHasSpaceAvailable) or submit it to the NSOutputStream.
Imagine a Bluetooth connection, where I really need to drop a lot of camera frames instead of submitting every frame to the NSStream.
The NSStreamEventHasSpaceAvailable event indicates that you can write (at least one byte!) to the stream without blocking. It does not mean that previously written data has been completely delivered to the other endpoint of the connection.
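A rough Swift sketch of the resulting drop-if-busy policy (the FrameSender type and its handling of partial writes are my own simplifications, not part of any Apple API):

import Foundation

/// Wraps an already-opened OutputStream and drops frames whenever the
/// stream's internal buffer has no room right now.
final class FrameSender {
    private let stream: OutputStream

    init(stream: OutputStream) {
        self.stream = stream
    }

    /// Returns true if the frame was handed to the stream, false if dropped.
    @discardableResult
    func sendOrDrop(_ frame: Data) -> Bool {
        // hasSpaceAvailable only says "at least one byte can be written
        // without blocking" - it says nothing about data already delivered.
        guard stream.hasSpaceAvailable else { return false }
        let written = frame.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> Int in
            guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return 0 }
            return stream.write(base, maxLength: frame.count)
        }
        // A real sender would track partial writes; here they count as a drop.
        return written == frame.count
    }
}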

How to guarantee a process starts at an exact time in iOS

We are playing a metronome audio file at timed intervals (bpm) while simultaneously recording an audio file. However, the two threads currently do not start at exactly the same time; there is a slight offset, which for music is not acceptable.
What strategies can we use to guarantee that the two processes start at exactly the same time (or within a few milliseconds of each other)?
Thanks!
I can think of three ways to get this done (though obviously I never tested them).
Each of your threads should do all the initialization they can up front, then wait for an "event". A few timing events I can think of:
use a Notification - both threads can listen for some "start" notification. That should be fairly quick.
have both threads do key-value observing - so they both listen for changes to some property on a known object, like the appDelegate (or a singleton), or any object they both know (a delegate?).
have each call a delegate when their initialization is done. When both are "ready", the delegate can send each a message, one after the other (on the main thread), to "start".
You could also experiment with NSLock and friends - not sure what kind of latency you would get there. Key-Value Observing is pretty fast and lightweight, and works on any thread.
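For completeness, here is the same "set everything up, then wait for one start signal" idea sketched with DispatchSemaphores instead of notifications or KVO (the setup/start functions are placeholders):

import Foundation

// Placeholders for the real work; only the signalling pattern matters here.
func setUpRecorder() {}
func startRecording() {}
func setUpMetronome() {}
func startMetronome() {}

let recorderReady = DispatchSemaphore(value: 0)
let metronomeReady = DispatchSemaphore(value: 0)
let startSignal = DispatchSemaphore(value: 0)

Thread.detachNewThread {
    setUpRecorder()          // do all initialization up front
    recorderReady.signal()   // report "ready" to the coordinator
    startSignal.wait()       // block until the shared start signal
    startRecording()
}

Thread.detachNewThread {
    setUpMetronome()
    metronomeReady.signal()
    startSignal.wait()
    startMetronome()
}

// Coordinator: wait until both workers are fully initialised, then release
// them back to back so they begin as close together as possible.
recorderReady.wait()
metronomeReady.wait()
startSignal.signal()
startSignal.signal()

// Keep this standalone sketch alive long enough for the threads to run.
Thread.sleep(forTimeInterval: 1)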
The most accurate and reliable way of achieving this is to implement audio recording and metronome playback in CoreAudio render/input callbacks rather than using higher-level APIs and relying on synchronising two threads. None of the mechanisms in David H's answer provide any guarantees about thread execution by the kernel, although they'll probably all work most of the time on a lightly loaded system.
The callbacks are called on a real-time thread managed by CoreAudio, synchronously with the hardware audio clock - which is probably asynchronous with the kernel's timers.
You will need to load the metronome sample into memory and convert it to the output format at initialisation - probably using one of the AudioToolbox APIs. The audio render callback then simply copies it into the output buffer at the appropriate times.
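If you would rather stay in Swift and AVFoundation than write a raw AudioToolbox callback, the same idea can be sketched with an AVAudioSourceNode, whose render block is likewise driven by the audio hardware: the click is placed at sample-accurate frame positions, independent of thread scheduling. The click buffer, the mono/Float32 assumption and the bpm handling below are simplified illustrations:

import AVFoundation

/// Renders a metronome by copying a pre-decoded click (Float32 samples)
/// into the output at sample-accurate beat positions.
func makeMetronomeNode(click: [Float], sampleRate: Double, bpm: Double) -> AVAudioSourceNode {
    let framesPerBeat = Int(sampleRate * 60.0 / bpm)
    var playhead = 0 // absolute frame count since the node started rendering

    return AVAudioSourceNode { (_, _, frameCount, audioBufferList) -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let positionInBeat = (playhead + frame) % framesPerBeat
            // Inside the click? Copy its sample; otherwise write silence.
            let sample: Float = positionInBeat < click.count ? click[positionInBeat] : 0
            for buffer in buffers {
                let samples = buffer.mData!.assumingMemoryBound(to: Float.self)
                samples[frame] = sample
            }
        }
        playhead += Int(frameCount)
        return noErr
    }
}

Attach the node to an AVAudioEngine and connect it to the main mixer; recording can then run off the same engine (e.g. a tap on its input node), so both sides share one audio clock.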
