What is the difference between a channel with a limited queue size and a reactive stream?

What is the difference between a normal channel with a limited queue size and a reactive stream? Both seem to solve the fast-producer/slow-consumer problem.
What is the difference between a 'cold flow' and a 'hot channel with unit capacity'? Both seem to process elements one by one, lazily.
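For intuition only, here is a conceptual sketch in Python (the question is about Kotlin flows and channels, so this is just an analogy with invented names): a cold flow is a recipe that runs from scratch for each consumer, while a hot channel with unit capacity is a live, shared queue whose producer blocks until its single slot is taken.
import queue

def cold_flow():
    # "Cold": nothing runs until someone iterates, and every consumer
    # triggers a fresh execution from the start.
    for i in range(3):
        yield i

list(cold_flow())  # first consumer: produces 0, 1, 2
list(cold_flow())  # second consumer: the producer code runs again

# "Hot" with unit capacity: the queue exists independently of consumers,
# and a producer thread would block on put() until the slot is drained.
hot_channel = queue.Queue(maxsize=1)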

Related

Is it sufficient to set the ROS publisher buffer to 1 and the subscriber buffer to 1000 and still not lose any messages?

I am trying to understand subscriber and publisher buffers. If I set the subscriber buffer to 1000 and the publisher buffer to 1, is there any chance that I lose messages? Could someone please explain?
Yes, in theory you may lose messages with these settings; in practice, it depends.
Theory: spinner threads
On both sides, publisher as well as subscriber, there are so-called spinner threads responsible for handling the callbacks (message sending on the publisher side, message evaluation on the subscriber side). These spinner threads run in parallel with the main thread. If the main thread produces messages faster than the spinner thread can process them, up to queue-size messages are buffered before the oldest ones start being thrown away. Therefore, if you publish at a very high rate, the publisher-side spinner thread may drop older messages, while if your callback function on the subscriber side takes too long to execute, your subscriber queue will start dropping messages. To improve this, you can use multi-threaded spinners: increase the number of spinner threads and enable concurrent execution so the callback queue is processed more quickly.
Practice: Choosing the queue size
The publisher queue size you should set depends on the rate at which you publish and on whether you publish in bursts. If you publish in bursts or at higher frequencies (e.g. > 10 Hz), a publisher queue size of 1 won't be sufficient. On the subscriber side it is harder to give recommendations, as it also depends on how long the callback takes to process the information.
It is also possible to set the queues to 0, which results in an arbitrarily large queue. This can be problematic, because the required memory may grow indefinitely, or at least until your computer freezes. Furthermore, a large queue size is often disadvantageous: if you set a large queue and the callback takes long to execute, you may be working on very outdated data while the queue grows longer and longer.
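To make the queue sizes concrete, here is a minimal rospy sketch (the topic name, rates, and message type are made up for illustration): the publisher's queue of 1 can drop messages at a 100 Hz publish rate, while the subscriber's slow callback forces its queue of 1000 to absorb, and eventually drop, the backlog.
import rospy
from std_msgs.msg import String

def slow_callback(msg):
    rospy.sleep(0.5)  # long-running callback: the subscriber queue fills up

rospy.init_node('buffer_demo')
# queue_size=1: bursts or high publish rates can drop messages here
# before they are ever sent out
pub = rospy.Publisher('/chatter', String, queue_size=1)
rospy.Subscriber('/chatter', String, slow_callback, queue_size=1000)

rate = rospy.Rate(100)  # publishing at 100 Hz, far faster than the callback
while not rospy.is_shutdown():
    pub.publish(String(data='measurement'))
    rate.sleep()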
Alternative communication patterns
If you want to guarantee that information is actually being processed (e.g. real-time or safety-relevant information), ROS topics are probably the wrong choice. Depending on what precisely you need, the other two communication methods, services and actions, might be an alternative. But for things like large streams of safety-relevant real-time data, there are no perfect communication mechanisms in ROS1.

What is the Fuseable interface in the Reactor project for?

There are many usages of the Fuseable interface in the Reactor source code, but I can't find any reference explaining what it is. Could someone explain its purpose?
The Fuseable interface, and the interfaces nested within it, define the contracts used for stream fusion. Stream fusion is a reactive-streams optimisation.
Without any such optimisation (in "normal" execution if you will), each reactive operator:
Subscribes to a previous operator in the chain
Is notified when the previous operator emits a value
Performs its operation
Notifies its subscribers
...and then the cycle repeats for all operators. This is fantastic for making sure everything stays non-blocking, but all of those asynchronous calls come with some amount of overhead.
"Stream fusion" (or "operator fusion") significantly reduces this overhead by performing two or more of the operations in one chunk (fusing them together as one unit), passing values between them using a Queue or similar rather than via subscriptions. It's not always possible, of course; it can't be done this way when running in parallel, when certain side effects come into play, and so on. But it's a neat optimisation when it is possible.

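To make the idea concrete, here is a conceptual Python sketch (the names are invented, and this is not Reactor's actual implementation, just the shape of the optimisation): fusing two map stages into one means each value crosses a single stage boundary instead of two.
class MapStage:
    # Unfused: one stage per operator, one handoff per value per stage.
    def __init__(self, fn, downstream):
        self.fn = fn
        self.downstream = downstream
    def on_next(self, value):
        self.downstream.on_next(self.fn(value))

class FusedMapStage:
    # Fused: two operators combined up front, so each value makes one hop.
    def __init__(self, fn1, fn2, downstream):
        self.fn = lambda v: fn2(fn1(v))
        self.downstream = downstream
    def on_next(self, value):
        self.downstream.on_next(self.fn(value))

class Collector:
    def __init__(self):
        self.items = []
    def on_next(self, value):
        self.items.append(value)

sink_a, sink_b = Collector(), Collector()
unfused = MapStage(lambda x: x + 1, MapStage(lambda x: x * 2, sink_a))
fused = FusedMapStage(lambda x: x + 1, lambda x: x * 2, sink_b)
for v in range(3):
    unfused.on_next(v)  # value crosses two stage boundaries
    fused.on_next(v)    # value crosses one stage boundary
assert sink_a.items == sink_b.items  # same results, fewer handoffs
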
Semaphores in NSOperationQueues

I'm taking my first swing at a Swift/NSOperationQueue-based design, and I'm trying to figure out how to maintain data integrity across queues.
I'm early in the design process, but the architecture is probably going to involve one queue (call it sensorQ) handling a stream of sensor measurements from a variety of sensors that will feed a fusion model. Sensor data will come in at a variety of rates, some quite fast (accelerometer data, for example), but some will require extended computation that could take, say, a second or more.
What I'm trying to figure out is how to capture the current state into the UI. The UI must be handled by the main queue (call it mainQ) but will reflect the current state of the fusion engine.
I don't want to hammer the UI thread with every update that happens on the sensor queue because they could be happening quite frequently, so an NSOperationQueue.mainQueue.addOperationWithBlock() call passing state back to the UI doesn't seem feasible. By the same token, I don't want to send queries to the sensor queue because if it's processing a long calculation I'll block waiting for it.
I'm thinking to set up an NSTimer that might copy the state into the UI every tenth of a second or so.
To do that, I need to make sure that the state isn't being updated on the sensor queue at the same time I'm copying it out to the UI queue. Seems like a job for a semaphore, but I'm not turning up much mention of semaphores in relation to NSOperationQueues.
I am finding references to dispatch_semaphore_t objects in Grand Central Dispatch.
So my question is basically: what's the recommended way of handling these situations? I see repeated admonitions to work at the highest level of abstraction (NSOperationQueue) unless you need the optimization of a lower level such as GCD. Is this a case where I need that optimization? Can a dispatch_semaphore_t work with NSOperationQueue? Is there an NSOperationQueue-based approach here that I'm overlooking?
How much data are you sending to the UI? A few numbers? A complex graph?
If you are processing all your sensor data on an NSOperationQueue (let's call it sensorQ), why not make the queue serial? Then when your timer fires, you can post an "Update UI" task to sensorQ. When your update task arrives on the sensorQ, you know no other sensor is modifying the state. You can bundle up your data and post to the main (UI) queue.
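A minimal Python sketch of that pattern (the answer itself is about NSOperationQueue; the names here are invented): a single worker thread drains a task queue serially, so a snapshot task can never interleave with a state update.
import queue
import threading
import time

tasks = queue.Queue()           # the serial "sensorQ": one worker, FIFO order
state = {'fused_value': 0.0}    # stand-in for the fusion model's state

def worker():
    while True:
        tasks.get()()           # tasks run strictly one at a time

def sensor_update(value):
    # mutate the state only from tasks on the serial queue
    tasks.put(lambda: state.update(fused_value=value))

def snapshot_for_ui():
    # runs on the same serial queue, so no update can be in progress;
    # printing stands in for posting the copy to the main (UI) queue
    tasks.put(lambda: print('UI update:', dict(state)))

threading.Thread(target=worker, daemon=True).start()
sensor_update(1.23)    # sensor callbacks enqueue updates
snapshot_for_ui()      # the timer would enqueue this every 0.1 s or so
time.sleep(0.1)        # let the worker drain the queue before the demo exits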
A better answer could be provided if we knew:
1. Do your sensors have a minimum and maximum data rate?
2. How many sensors are contributing to your fusion model?
3. How are you synchronizing access from the sensors to your fusion model?
4. How much data and in what format is the "update" to the UI?
My hunch is that semaphores are not required.
One method that might work here is to decouple the sensor data queue from your UI activities via a ring buffer. This effectively eliminates the need for semaphores.
The idea is that the sensor data processing component pushes data into the ring buffer and the UI component pulls the data from the ring buffer. The sensor data thread writes at the rate determined by your sensor/processing and the UI thread reads at whatever refresh rate is appropriate for your application.
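As a sketch of that idea (sizes and names are illustrative; in CPython the GIL happens to provide the ordering this relies on, whereas in other languages you would need atomic indices): the producer only ever advances the write index and the consumer only ever advances the read index, so with exactly one thread on each side no semaphore is needed.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next write slot, advanced only by the producer
        self.tail = 0  # next read slot, advanced only by the consumer

    def push(self, item):
        nxt = (self.head + 1) % self.capacity
        if nxt == self.tail:
            return False        # full: drop or overwrite, caller's choice
        self.buf[self.head] = item
        self.head = nxt         # publish the write last
        return True

    def pop(self):
        if self.tail == self.head:
            return None         # empty: the UI keeps its previous value
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        return item

ring = RingBuffer(64)
ring.push({'fused_value': 1.23})   # sensor side pushes at its own rate
latest = ring.pop()                # UI side pulls at its refresh rate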

How do I set the inputBuffer of an Akka stream publisher?

I'm using Akka streams in a context where sinks for a single source will come and go. For this reason I'm creating a publisher from a source and attaching subscribers as the need arises:
val publisher = mySource.runWith(Sink.publisher(true))
with
publisher.subscribe(subscriber1) // There will be others
Some of the subscribers will be faster than others and I'd like to allow the faster ones to go ahead independently of the slowest, at least to the extent permitted by the input buffer of the publisher. This buffer is described by the comment on the Sink.publisher(true) method:
If fanout is true, the materialized Publisher will support multiple Subscribers and the size of the inputBuffer configured for this stage becomes the maximum number of elements that the fastest [[org.reactivestreams.Subscriber]] can be ahead of the slowest one before slowing the processing down due to back pressure.
My problem is that I don't know how to set this inputBuffer value "for this stage". The closest I have seen is described in the Dropping Broadcast section of this article but this seems to insist on the use of the Flow DSL. I believe that I can't use the DSL because of my need to continually attach new Subscribers.
As a result, my overall stream rate is held back by the slowest subscriber. A related aspect of what I'm trying to do is making sure the different subscribers run on different threads (without creating explicit actors as subscribers).
It'd look something like (for Akka Streams 2.0.1):
Sink.asPublisher(true).addAttributes(Attributes.inputBuffer(initialSize, maxSize))

MTAudioProcessingTap - produce more output samples?

Inside my iOS 8.0 app I need to apply custom audio processing to (non-realtime) audio playback. Typically, the audio comes from a device-local audio file.
Currently, I use MTAudioProcessingTap on an AVMutableAudioMix. Inside the process callback I then call my processing code. In certain cases this processing code may produce more samples than were passed in, and I wonder what the best way to handle this is (think of a time-stretching effect, for example).
The process callback takes a CMItemCount *numberFramesOut argument that signals the number of outgoing frames. For in-place processing, where the number of incoming and outgoing frames is identical, this is no problem. In the case where my processing generates more samples, I need a way to keep the playback going until my output buffers are emptied.
Is MTAudioProcessingTap the right choice here anyway?
MTAudioProcessingTap does not support changing the number of samples between the input and the output (to skip silences for instance).
You will need a custom audio unit graph for this.
A circular buffer/FIFO is one of the most common ways to mediate between different producer and consumer rates, as long as the long-term rates match. If, long term, you plan on producing more samples than are played, you may occasionally need to stop producing samples temporarily, while still playing, so as not to fill up the buffer or the system's memory.
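A small sketch of that pacing idea in Python (thresholds, block sizes, and the 48 kHz rate are made up): the producer watches the FIFO's fill level and pauses near the high-water mark, while the consumer keeps draining at the fixed playback rate.
import queue
import threading
import time

fifo = queue.Queue(maxsize=48000)      # roughly one second of 48 kHz audio

def producer():
    while True:
        if fifo.qsize() > 40000:       # near the high-water mark:
            time.sleep(0.05)           # stop producing, playback continues
            continue
        for _ in range(512):           # stand-in for one processed block
            fifo.put(0.0)

def consumer():
    while True:
        for _ in range(256):           # fixed-size pull, like a render callback
            fifo.get()
        time.sleep(256 / 48000)        # consume at the playback rate

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(0.5)                        # let the demo run briefly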
