RxJava Observable to smooth out bursts of events - twitter

I'm writing a streaming Twitter client that simply throws the stream up onto a TV. I'm observing the stream with RxJava.
When the stream comes in a burst, I want to buffer it and slow it down so that each tweet is displayed for at least 6 seconds. Then during the quiet times, any buffer that's been built up will gradually empty itself out by pulling the head of the queue, one tweet every 6 seconds. If a new tweet comes in and faces an empty queue (but >6s after the last was displayed), I want it to be displayed immediately.
I imagine the stream looking like that described here:
Raw: --oooo--------------ooooo-----oo----------------ooo|
Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o|
And I understand that the question posed there has a solution. But I just can't wrap my head around its answer. Here is my solution:
myObservable
        .concatMap(new Func1<Long, Observable<Long>>() {
            @Override
            public Observable<Long> call(Long l) {
                return Observable.concat(
                        Observable.just(l),
                        Observable.<Long>empty().delay(6, TimeUnit.SECONDS)
                );
            }
        })
        .subscribe(...);
So, my question is: Is this too naïve of an approach? Where is the buffering/backpressure happening? Is there a better solution?

Looks like you want to delay a message if it came too soon relative to the previous message. You have to track the last target emission time and schedule a new emission after it:
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import rx.Observable;
import rx.Scheduler;
import rx.schedulers.Schedulers;

public class SpanOutV2 {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(0, 5, 13)
                .concatMapEager(v -> Observable.just(v).delay(v, TimeUnit.SECONDS));

        long minSpan = 6;
        TimeUnit unit = TimeUnit.SECONDS;
        Scheduler scheduler = Schedulers.computation();
        long minSpanMillis = TimeUnit.MILLISECONDS.convert(minSpan, unit);

        Observable.defer(() -> {
            AtomicLong lastEmission = new AtomicLong();
            return source
                    .concatMapEager(v -> {
                        long now = scheduler.now();
                        long emission = lastEmission.get();
                        if (emission + minSpanMillis > now) {
                            lastEmission.set(emission + minSpanMillis);
                            return Observable.just(v)
                                    .delay(emission + minSpanMillis - now, TimeUnit.MILLISECONDS);
                        }
                        lastEmission.set(now);
                        return Observable.just(v);
                    });
        })
        .timeInterval()
        .toBlocking()
        .subscribe(System.out::println);
    }
}
Here, the source is delayed by the number of seconds relative to the start of the problem: 0 should arrive immediately, 5 should arrive @ T = 6 seconds and 13 should arrive @ T = 13. concatMapEager makes sure the order and timing are kept. Since only standard operators are in use, backpressure and unsubscription compose naturally.
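If you want to apply the same gating to the original tweet stream, the logic above can be packaged as a reusable Transformer and attached with compose. The following is only a sketch under the same RxJava 1.x assumptions; tweetObservable and display() are hypothetical stand-ins for the asker's stream and rendering code:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import rx.Observable;
import rx.Scheduler;
import rx.schedulers.Schedulers;

public final class MinSpan {
    // Returns a Transformer that enforces a minimum spacing between emissions,
    // passing items straight through when the stream has been quiet long enough.
    public static <T> Observable.Transformer<T, T> of(long minSpan, TimeUnit unit, Scheduler scheduler) {
        long minSpanMillis = TimeUnit.MILLISECONDS.convert(minSpan, unit);
        return source -> Observable.defer(() -> {
            AtomicLong lastEmission = new AtomicLong();
            return source.concatMapEager(v -> {
                long now = scheduler.now();
                long emission = lastEmission.get();
                if (emission + minSpanMillis > now) {
                    // Too soon: schedule this item minSpan after the previous target time
                    lastEmission.set(emission + minSpanMillis);
                    return Observable.just(v)
                            .delay(emission + minSpanMillis - now, TimeUnit.MILLISECONDS, scheduler);
                }
                // Quiet period: emit immediately and remember when
                lastEmission.set(now);
                return Observable.just(v);
            });
        });
    }
}

Usage would then look like:

tweetObservable
        .compose(MinSpan.of(6, TimeUnit.SECONDS, Schedulers.computation()))
        .subscribe(tweet -> display(tweet));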

Related

How can I reconstruct a raw audio data stream using WebAudio?

I use the WebAudio API and basically my setup is fairly simple.
I use one AudioWorkletNode as an emitter and another one as a receiver.
emitter:
process(inputs, outputs) {
    inputs[0].length && this.port.postMessage(inputs[0]);
    return true;
}
receiver:
inputs = [new Float32Array(128), new Float32Array(128)]

constructor() {
    super();
    // Register a handler for the blocks forwarded from the main thread
    this.port.onmessage = (event) => {
        this.inputs = event.data.inputs;
    };
}

process(inputs, outputs) {
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
        output[channel].set(this.inputs[channel]);
    }
    return true;
}
On the client (main-thread) side I have:
// forward each block from the emitter worklet to the receiver worklet
this.inputWorklet.port.onmessage = e => this.receiverWorklet.port.postMessage({ inputs: e.data });
and for receiving the data I have connected the nodes together
this.receiverWorklet.connect( this.gainNode );
This works, but my problem is that the sound is really glitchy.
One thing I thought of is that there might be a delay between events, and also that the WebAudio messages pass through the DOM/main-thread context.
Do you have any ideas how I could achieve smooth, fluid playback of the stream, or maybe another technique?
The reason for the glitchy audio is that your code only works if everything always happens in exactly this order:
1. The input worklet's process() function gets called and sends an event.
2. The event passes through the main thread.
3. The event arrives at the receiver worklet.
4. Only then does the receiver worklet's process() function get called with the new data available.
Since there is no buffer, this has to happen in exactly that order every time. If for some reason the main thread is busy and can't forward the events right away, the receiver will keep playing the old audio.
I think you can almost keep the current implementation by buffering a few events in your receiver worklet before you start playing (sketched below). It will of course also add some latency.
Another approach would be to use a SharedArrayBuffer instead of sending events. Your input worklet would write to the SharedArrayBuffer and your receiver worklet would read from it.
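A minimal sketch of the buffering approach, staying close to the receiver above: queue the incoming blocks and only start playing once a few have accumulated, emitting silence on underrun. The prebuffer size of 8 blocks and the 'buffered-receiver' name are illustrative choices, not tuned values:

class BufferedReceiver extends AudioWorkletProcessor {
    constructor() {
        super();
        this.queue = [];           // FIFO of per-channel Float32Array blocks
        this.started = false;      // flips to true once the prebuffer is full
        this.prebufferBlocks = 8;  // ~8 * 128 frames of added latency
        this.port.onmessage = (event) => {
            this.queue.push(event.data.inputs);
        };
    }

    process(inputs, outputs) {
        const output = outputs[0];
        if (!this.started && this.queue.length >= this.prebufferBlocks) {
            this.started = true;
        }
        if (this.started && this.queue.length > 0) {
            const block = this.queue.shift();
            for (let channel = 0; channel < output.length; ++channel) {
                output[channel].set(block[channel]);
            }
        }
        // otherwise the output stays silent (it is zero-filled by default)
        return true;
    }
}

registerProcessor('buffered-receiver', BufferedReceiver);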

BackgroundFetch in Codename One

I'm developing a Codename One app for iOS and I'm trying to use the BackgroundFetch interface.
I copied the sample code as it is written in Javadoc (https://www.codenameone.com/javadoc/com/codename1/background/BackgroundFetch.html) and I added the ios.background_modes=fetch build hint.
Launching the app on the simulator, the background operation is correctly executed.
Launching it on a real device (iPhone 7s, iOS 12.1.4), the behaviour is unpredictable. Despite calling setPreferredBackgroundFetchInterval(10), I noticed that almost every time I launch the app, the background operation is not executed. Rarely, the background operation is executed, but the app must stay in the background for some minutes before being resumed, instead of the 10 seconds set through the setPreferredBackgroundFetchInterval(10) method.
The Display.isBackgroundFetchSupported() method returns true.
I don't understand how to make it reliable and predictable.
EDIT
I modified the sample code, only in the performBackgroundFetch() implementation (the Display.setPreferredBackgroundFetchInterval(10) is not changed). I just put some text in the label:
@Override
public void performBackgroundFetch(long deadline, Callback<Boolean> onComplete) {
    supported.setText("deadline: " + deadline + "; timeMillis: " + System.currentTimeMillis());
    onComplete.onSucess(Boolean.TRUE);
}
I observed two different behaviours for simulator and real device.
In the simulator, the method is executed exactly 10 seconds after the app enters the paused state. On a real device, it isn't: in some cases it's executed after 20 minutes, and in other cases it's not executed at all.
However, in both cases, I could calculate the difference between the deadline and the time when the method executed: it's always 25 minutes.
As an example, you can see the following screenshot of the app (running on iPhone):
Deadline = 1560246881647
Timestamp = 1560245381647
Deadline - Timestamp = 1500000 ms = 1500 s = 25 minutes.
As I understand it, on iOS there is a limit of 30 seconds to perform a background fetch, otherwise the OS will kill the app. Moreover, Display.setPreferredBackgroundFetchInterval() sets the preferred time interval between background fetches, but it's not guaranteed, as iOS keeps control over when (and whether) background fetches are executed.
What is the right way to use background fetch?
Here is the complete code:
public class MyApplication implements BackgroundFetch {
    private Form current;
    private Resources theme;
    List<Map> records;
    Label supported;

    // Container to hold the list of records.
    Container recordsContainer;

    public void init(Object context) {
        theme = UIManager.initFirstTheme("/theme");
        // Enable Toolbar on all Forms by default
        Toolbar.setGlobalToolbar(true);
        // Pro only feature, uncomment if you have a pro subscription
        // Log.bindCrashProtection(true);
    }

    public void start() {
        if (current != null) {
            // Make sure we update the records as we are coming in from the
            // background.
            updateRecords();
            current.show();
            return;
        }
        Display d = Display.getInstance();

        // This call is necessary to initialize background fetch
        d.setPreferredBackgroundFetchInterval(10);

        Form hi = new Form("Background Fetch Demo");
        hi.setLayout(new BoxLayout(BoxLayout.Y_AXIS));

        supported = new Label();
        if (d.isBackgroundFetchSupported()) {
            supported.setText("Background Fetch IS Supported");
        } else {
            supported.setText("Background Fetch is NOT Supported");
        }
        hi.addComponent(new Label("Records:"));
        recordsContainer = new Container(new BoxLayout(BoxLayout.Y_AXIS));
        //recordsContainer.setScrollableY(true);
        hi.addComponent(recordsContainer);
        hi.addComponent(supported);
        updateRecords();
        hi.show();
    }

    /**
     * Update the UI with the records that are currently loaded.
     */
    private void updateRecords() {
        recordsContainer.removeAll();
        if (records != null) {
            for (Map m : records) {
                recordsContainer.addComponent(new SpanLabel((String) m.get("title")));
            }
        } else {
            recordsContainer.addComponent(new SpanLabel("Put the app in the background, wait 10 seconds, then open it again. The app should background fetch some data from the Slashdot RSS feed and show it here."));
        }
        if (Display.getInstance().getCurrent() != null) {
            Display.getInstance().getCurrent().revalidate();
        }
    }

    public void stop() {
        current = Display.getInstance().getCurrent();
        if (current instanceof Dialog) {
            ((Dialog) current).dispose();
            current = Display.getInstance().getCurrent();
        }
    }

    public void destroy() {
    }

    /**
     * This method will be called in the background by the platform. It will
     * load the RSS feed. Note: This only runs when the app is in the background.
     * @param deadline
     * @param onComplete
     */
    @Override
    public void performBackgroundFetch(long deadline, Callback<Boolean> onComplete) {
        supported.setText("deadline: " + deadline + "; timeMillis: " + System.currentTimeMillis());
        onComplete.onSucess(Boolean.TRUE);
    }
}
The setPreferredBackgroundFetchInterval javadoc states:
Sets the preferred time interval between background fetches. This is only a preferred interval and is not guaranteed. Some platforms, like iOS, maintain sovereign control over when and if background fetches will be allowed. This number is used only as a guideline.
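Given that, the practical pattern is to treat the interval purely as a hint, keep the work comfortably inside the ~30-second window, and always invoke the callback so iOS keeps scheduling future fetches. A minimal sketch; fetchTitles() is a hypothetical helper standing in for the real RSS request:

@Override
public void performBackgroundFetch(long deadline, Callback<Boolean> onComplete) {
    try {
        // 'deadline' is the absolute time by which we must be done
        // (the asker's logs show it arriving ~25 minutes ahead);
        // bail out early rather than letting iOS kill the app.
        if (System.currentTimeMillis() >= deadline) {
            onComplete.onSucess(Boolean.FALSE);
            return;
        }
        records = fetchTitles(); // hypothetical: do the network call synchronously here
        onComplete.onSucess(Boolean.TRUE);
    } catch (Exception err) {
        onComplete.onSucess(Boolean.FALSE); // report failure, but always complete
    }
}

(onSucess is the actual spelling in Codename One's Callback interface.)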

QOS_CLASS_USER_INITIATED blocking UI in iOS8?

I am porting a card-melding game from Android to iOS (see https://play.google.com/store/apps/details?id=com.pipperpublishing.fivekings). Each turn you choose from the draw pile or discard pile, meld your cards, and then discard. To keep the game responsive, I have the computer player start pre-calculating its "best action" based on your discard, before it appears to be playing. In Android, I do my own thread management; in iOS I am trying to use GCD.
What I am finding is that the computer pre-calculations running on a QOS_CLASS_USER_INITIATED queue sometimes block the UI, especially when testing on iOS 8 on an iPhone 6. I can barely imagine that happening with QOS_CLASS_USER_INTERACTIVE. On the other hand, I've read some confusing stuff about GCD reusing the UI thread for such queues, so maybe I'm not understanding it correctly.
Here's the relevant code:
EasyComputerPlayer definition of my own queue (things were even slower when I used a global queue):
class EasyComputerPlayer : Player {
    static let qos_attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_USER_INITIATED, 0)
    static let concurrentDiscardTestQueue = dispatch_queue_create("com.pipperpublishing.FiveKings", qos_attr)
    ...
Here is the pre-calculation, which is called immediately after the human player discards. testHand.main() does the actual calculations for the computer's possible discard choices if the computer picked up the card that the human just discarded.
override func findBestHandStart(isFinalTurn: Bool, addedCard: Card) {
    let cardsWithAdded: CardList = CardList(cardList: hand!)
    cardsWithAdded.add(addedCard)

    // Create the different test hands
    testHandSets[method.rawValue].removeAll(keepCapacity: true)
    for disCard in cardsWithAdded.cards {
        let cards: CardList = CardList(cardList: cardsWithAdded)
        cards.remove(disCard)
        testHandSets[method.rawValue].append(ThreadedHand(parent: self, roundOf: self.hand!.getRoundOf(), cards: cards, discard: disCard, method: self.method, isFinalTurn: isFinalTurn)) // creates new hand with replaced cards
    }

    // ...and then dispatch them
    dispatchGroups[method.rawValue] = dispatch_group_create()
    for (iTask, testHand) in testHandSets[method.rawValue].enumerate() {
        let card = testHand.hand.discard
        dispatch_group_enter(dispatchGroups[method.rawValue])
        dispatch_async(EasyComputerPlayer.concurrentDiscardTestQueue) {
            testHand.main() // calls meldAndEvaluate
            dispatch_group_leave(self.dispatchGroups[self.method.rawValue])
        }
    }
}
In the log, I will see the tasks dispatched, and then the UI sometimes hangs until they all finish (which in later rounds can take 5 seconds).
I replaced QOS_CLASS_USER_INITIATED with QOS_CLASS_UTILITY which seems to have fixed the problem temporarily, but of course I am worried that I have just reduced the frequency :)

Moving data from one BlockingCollection to the other

I have code that copies integers to buffer1, then from buffer1 to buffer2, and then consumes all data from buffer2.
It processes 1000 values in 15 seconds, which is a lot of time relative to the size of the input. When I remove the Task.Delay(1).Wait() call from the second task t2, it completes quite fast.
Now, my question is: is the slowdown because of two threads competing for the lock or is my code somehow faulty?
var source = Enumerable.Range(0, 1000).ToList();
var buffer1 = new BlockingCollection<int>(100);
var buffer2 = new BlockingCollection<int>(100);

var t1 = Task.Run
(
    delegate
    {
        foreach (var i in source)
        {
            buffer1.Add(i);
        }
        buffer1.CompleteAdding();
    }
).ConfigureAwait(false);

var t2 = Task.Run
(
    delegate
    {
        foreach (var i in buffer1.GetConsumingEnumerable())
        {
            buffer2.Add(i);
            //Task.Delay(1).Wait();
        }
        buffer2.CompleteAdding();
    }
).ConfigureAwait(false);

CollectionAssert.AreEqual(source.ToList(), buffer2.GetConsumingEnumerable().ToList());
An update: this is just demo code; I am blocking for 1 millisecond to simulate some computations that take place in my real code. I put 1 millisecond there because it's such a small amount. I can't believe that removing it makes the code complete almost immediately.
The clock has ~15 ms resolution, so a 1 ms wait is rounded up to about 15 ms, and 1000 items × ~15 ms ≈ 15 seconds. (Actually, I'm surprised: on average each wait should take about 7.5 ms. Anyway.)
Simulating work with sleep is a common mistake.
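If the Wait() is standing in for CPU-bound work, a busy-wait bounded by Stopwatch models it more faithfully, because the high-resolution counter is not subject to the timer's ~15 ms rounding. A small sketch; SimulateCpuWork is a made-up helper name:

using System.Diagnostics;

static void SimulateCpuWork(double milliseconds)
{
    // Burns roughly the requested time on the CPU instead of sleeping,
    // so a 1 ms simulation really costs ~1 ms rather than ~15 ms.
    var sw = Stopwatch.StartNew();
    while (sw.Elapsed.TotalMilliseconds < milliseconds)
    {
        // spin
    }
}

// In t2, replace Task.Delay(1).Wait() with: SimulateCpuWork(1);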

AVFoundation.AVAudioPlayer stops randomly

I'm trying to play multiple sounds at the same time. However, sometimes the sounds just stop playing or never start at all.
I have an event handler that receives an event when a sound effect should be played:
void HandlePlaySound (object sender, EventArgs e)
{
    this.InvokeOnMainThread (() => {
        ...
        [set url to path]
        ...
        MonoTouch.AVFoundation.AVAudioPlayer player = MonoTouch.AVFoundation.AVAudioPlayer.FromUrl(url);
        player.Play();
    });
}
This works fine most of the time, but when two sounds get triggered at the same time, it seems like one of them, or both, gets killed. I must be doing something really wrong here.
Is there a more correct way of playing sounds in an iPhone app? Each sound is supposed to play to the end, and there could be multiple sounds playing at the same time.
If I were to guess, I'd say that sometimes the GC comes in and disposes the player that has gone out of scope, causing your random stop behaviour. A stable solution is to first establish how many simultaneous audio streams you'd like to be able to play, and then enforce that limit:
// I'd like a maximum of 5 simultaneous audio streams
Queue<AVAudioPlayer> players = new Queue<AVAudioPlayer>(5);

void PlayAudio (string fileName)
{
    NSUrl url = NSUrl.FromFilename(fileName);
    AVAudioPlayer player = AVAudioPlayer.FromUrl(url);
    if (players.Count == 5) {
        players.Dequeue().Dispose();
    }
    players.Enqueue(player);
    player.Play();
}

// In my example, I'll select files from my Sounds folder (containing a couple of .wav, a couple of .mp3 and an .aif)
string[] files;
int fileIndex = 0;

string GetNextFileName ()
{
    if (files == null)
        files = Directory.GetFiles("Sounds");
    if (fileIndex == files.Length)
        fileIndex = 0;
    return files[fileIndex++];
}

partial void OnPlayButtonTapped (NSObject sender)
{
    string fileName = GetNextFileName();
    PlayAudio(fileName);
}
