Performance issues with Web Audio API playing audio samples - iOS

I am creating a complex metronome app using Ionic and Web Audio API.
At certain points, the metronome could be playing 10+ 'beats' a second.
This essentially means the function below is called 10+ times a second:
function playSound(e, name) {
    var buffer = audioBuffers[name];
    var source = audioContext.createBufferSource();
    var gain = audioContext.createGain();
    source.connect(gain);
    gain.connect(audioContext.destination);
    gain.gain.value = 1;
    source.buffer = buffer;
    source.connect(audioContext.destination);
    sched.nextTick(e.playbackTime, () => {
        source.start(0);
    });
}
The user can choose multiple samples, so I fetch them all once up front and store the decoded buffers in an array to improve performance, instead of making an XMLHttpRequest() every time.
The issue is that at these higher rates the playback becomes erratic and sometimes goes out of sync. I am using https://github.com/mohayonao/web-audio-scheduler, which works nicely, so I know it's not a timing issue.
If I swap out the sample playback for a basic oscillator:
function oscillator(e) {
    const t0 = e.playbackTime;
    const t1 = t0 + 0.4;
    const osc = audioContext.createOscillator();
    const amp = audioContext.createGain();
    osc.frequency.value = 1000;
    osc.start(t0);
    osc.stop(t1);
    osc.connect(amp);
    amp.gain.setValueAtTime(1, t0);
    amp.gain.exponentialRampToValueAtTime(1e-6, t1);
    amp.connect(masterGain);
    sched.nextTick(t1, () => {
        osc.disconnect();
        amp.disconnect();
    });
}
Performance is fine no matter what the tempo. Are there any improvements I can make to the sample playback to help performance?

Your first function just uses source.start(0);, which makes me think you're relying on setTimeout or setInterval to "schedule" the audio. The second one properly schedules on the Web Audio clock (start(t0)). See "A Tale of Two Clocks" for more: https://www.html5rocks.com/en/tutorials/audio/scheduling/.
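To illustrate, here is a minimal sketch of the sample playback rewritten so the buffer source is scheduled on the audio clock, mirroring the oscillator example (it assumes the same audioContext, audioBuffers and scheduler callback argument e as in the question):
function playSound(e, name) {
    var buffer = audioBuffers[name];
    var source = audioContext.createBufferSource();
    var gain = audioContext.createGain();
    source.buffer = buffer;
    source.connect(gain);                        // single path: source -> gain -> destination
    gain.connect(audioContext.destination);
    gain.gain.setValueAtTime(1, e.playbackTime);
    source.start(e.playbackTime);                // scheduled on the AudioContext clock, like osc.start(t0)
    source.onended = function() {                // free the nodes once the sample has played
        source.disconnect();
        gain.disconnect();
    };
}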

What cwilso says is right. Use AudioContext.currentTime and simple +/- arithmetic to determine the next beat time yourself, instead of relying on that scheduler library. Then everything should be fine.
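As a rough sketch of that idea (the lookahead, interval, tempo and buffer name below are illustrative assumptions, not values from the question), the timer only needs to wake up roughly on time while start() gets the precise time from the audio clock:
var lookahead = 0.1;             // how far ahead to schedule audio, in seconds
var timerInterval = 25;          // how often the JS timer wakes up, in milliseconds
var secondsPerBeat = 60 / tempo; // tempo assumed to be defined elsewhere
var nextBeatTime = audioContext.currentTime;

function scheduler() {
    // schedule every beat that falls inside the lookahead window
    while (nextBeatTime < audioContext.currentTime + lookahead) {
        var source = audioContext.createBufferSource();
        source.buffer = audioBuffers['snare'];   // any preloaded buffer
        source.connect(audioContext.destination);
        source.start(nextBeatTime);              // precise, on the audio clock
        nextBeatTime += secondsPerBeat;
    }
    setTimeout(scheduler, timerInterval);
}

scheduler();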

Related

How can I play back a raw audio data stream using WebAudio?

I use WebAudio API and basically my setup is fairly simple.
I use 1 AudioWorkletNode as an emitter and another one as a receiver
emitter:
process(inputs, outputs) {
    inputs[0].length && this.port.postMessage(inputs[0]);
    return true;
}
receiver:
inputs = [new Float32Array(128), new Float32Array(128)]

constructor() {
    super();
    // Receive messages forwarded from the main thread
    this.port.onmessage = (event) => {
        this.inputs = event.data.inputs;
    };
}

process(inputs, outputs) {
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
        output[channel].set(this.inputs[channel]);
    }
    return true;
}
On the client side I have:
//emitter
this.inputWorklet.port.onmessage = e => this.receiverWorklet.port.postMessage( { inputs: e.data } );
and for receiving the data I have connected the nodes together
this.receiverWorklet.connect( this.gainNode );
This works, but my problem is that the sound is really glitchy.
One thing I thought of is that there might be a delay between events, and also WebAudio runs in a DOM context.
Do you have any ideas how I could achieve smooth stream playback, or maybe another technique?
The reason for the glitchy audio is that your code only works if everything always happens in exactly the same order:
The input worklet's process() function needs to be called. It sends an event.
The event needs to pass through the main thread.
The event needs to arrive at the receiver worklet.
Only after that can the receiver worklet's process() function be called.
Since there is no buffer, it always has to happen in exactly that order. If for some reason the main thread is busy and can't forward the events right away, the receiver will keep playing the old audio.
I think you can almost keep the current implementation by buffering a few events in your receiver worklet before you start playing. It will of course also add some latency.
Another approach would be to use a SharedArrayBuffer instead of sending events. Your input worklet would write to the SharedArrayBuffer and your receiver worklet would read from it.
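As a rough illustration of the buffering idea (the processor name, queue length and field names here are my own, not from the question), the receiver could queue incoming blocks and only start reading once a few have arrived:
class ReceiverProcessor extends AudioWorkletProcessor {
    constructor() {
        super();
        this.queue = [];     // buffered blocks, each an array of Float32Arrays per channel
        this.minBlocks = 4;  // wait for a few blocks before playing; adds ~4 x 128 frames of latency
        this.started = false;
        this.port.onmessage = (event) => {
            this.queue.push(event.data.inputs);
        };
    }

    process(inputs, outputs) {
        const output = outputs[0];
        if (!this.started && this.queue.length >= this.minBlocks) {
            this.started = true;
        }
        if (this.started && this.queue.length > 0) {
            const block = this.queue.shift();
            for (let channel = 0; channel < output.length; ++channel) {
                if (block[channel]) {
                    output[channel].set(block[channel]);
                }
            }
        }
        // if the queue runs dry, the output stays silent instead of repeating old audio
        return true;
    }
}

registerProcessor('receiver-processor', ReceiverProcessor);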

Web Audio API not playing sound sample on device, but works in browser

I have an Ionic app that is a metronome. Using the Web Audio API I have everything working with the oscillator feature, but when I switch to a wav file, no audio plays on a real device (iPhone).
When testing in the browser using Ionic Serve (chrome) the audio plays fine.
Here is what I have:
function snare(e) {
    var audioSource = 'assets/audio/snare1.wav';
    var request = new XMLHttpRequest();
    request.open('GET', audioSource, true);
    request.responseType = 'arraybuffer';
    // Decode asynchronously
    request.onload = function() {
        audioContext.decodeAudioData(request.response, function(theBuffer) {
            buffer = theBuffer;
            playSound(buffer);
        });
    };
    request.send();
}
function playSound(buffer) {
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(0);
}
The audio sample is in www/assets/audio.
Any ideas where this could be going wrong?
I believe iOS devices require a user-gesture of some sort to allow playing of audio.
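A minimal sketch of what that unlock typically looks like (the handler name is mine, and it assumes an existing audioContext):
function unlockAudio() {
    // iOS keeps the AudioContext suspended until a user gesture; resume it inside the handler
    if (audioContext.state === 'suspended') {
        audioContext.resume();
    }
    document.removeEventListener('touchstart', unlockAudio);
}

document.addEventListener('touchstart', unlockAudio, false);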
It's July 2017, iOS 10.3.2 and we're still finding this issue on Safari on iPhones. Interestingly Safari on a MacBook is fine.
@Raymond Toy's general observation still appears to be true. But @padenot's approach (via https://gist.github.com/laziel/7aefabe99ee57b16081c) did not work for me in a situation where I wanted to play a sound in response to some external event/trigger.
Using the original poster's code, I've had some success with this
var buffer; // added to make it work with OP's code
// keep the original function snare()

function playSound() { // dropped the argument for simplicity
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(0);
}

function triggerSound() {
    function playSoundIos(event) {
        document.removeEventListener('touchstart', playSoundIos);
        playSound();
    }
    if (/iPad|iPhone/.test(navigator.userAgent)) {
        document.addEventListener('touchstart', playSoundIos);
    } else { // Android etc. or Safari, but not on iPhone
        playSound();
    }
}
Now calling triggerSound() will produce the sound immediately on Android and will produce the sound on iOS after the browser page has been touched.
Still not ideal, but better than no sound at all...
I had a similar issue in current iOS (15). I tried to play base64 encoded binary data which worked in all browsers, but not on iOS.
Finally, reordering the statements solved my issue:
let buffer = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
let context = new AudioContext();
let audioSource;

// these lines were within "play()" before
audioSource = context.createBufferSource();
audioSource.connect(context.destination);
audioSource.start(0);
// ---

context.decodeAudioData(buffer.buffer, play, (e) => {
    console.warn("error decoding audio", e);
});

function play(audioBuffer) {
    audioSource.buffer = audioBuffer;
}
Also see this commit in my project.
I assume that calling audioSource.start(0) within the play() method was somehow too late because it's within a callback after context.decodeAudioData() and therefore maybe "too far away" from a user interaction for the standards of iOS.

RxJava Observable to smooth out bursts of events

I'm writing a streaming Twitter client that simply throws the stream up onto a tv. I'm observing the stream with RxJava.
When the stream comes in a burst, I want to buffer it and slow it down so that each tweet is displayed for at least 6 seconds. Then during the quiet times, any buffer that's been built up will gradually empty itself out by pulling the head of the queue, one tweet every 6 seconds. If a new tweet comes in and faces an empty queue (but >6s after the last was displayed), I want it to be displayed immediately.
I imagine the stream looking like that described here:
Raw: --oooo--------------ooooo-----oo----------------ooo|
Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o|
And I understand that the question posed there has a solution. But I just can't wrap my head around its answer. Here is my solution:
myObservable
    .concatMap(new Func1<Long, Observable<Long>>() {
        @Override
        public Observable<Long> call(Long l) {
            return Observable.concat(
                Observable.just(l),
                Observable.<Long>empty().delay(6, TimeUnit.SECONDS)
            );
        }
    })
    .subscribe(...);
So, my question is: Is this too naïve of an approach? Where is the buffering/backpressure happening? Is there a better solution?
Looks like you want to delay a message if it came too soon relative to the previous message. You have to track the last target emission time and schedule a new emission after it:
public class SpanOutV2 {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(0, 5, 13)
            .concatMapEager(v -> Observable.just(v).delay(v, TimeUnit.SECONDS));

        long minSpan = 6;
        TimeUnit unit = TimeUnit.SECONDS;
        Scheduler scheduler = Schedulers.computation();
        long minSpanMillis = TimeUnit.MILLISECONDS.convert(minSpan, unit);

        Observable.defer(() -> {
            AtomicLong lastEmission = new AtomicLong();
            return source
                .concatMapEager(v -> {
                    long now = scheduler.now();
                    long emission = lastEmission.get();
                    if (emission + minSpanMillis > now) {
                        lastEmission.set(emission + minSpanMillis);
                        return Observable.just(v).delay(emission + minSpanMillis - now, TimeUnit.MILLISECONDS);
                    }
                    lastEmission.set(now);
                    return Observable.just(v);
                });
        })
        .timeInterval()
        .toBlocking()
        .subscribe(System.out::println);
    }
}
Here, the source is delayed by the given number of seconds relative to the start of the problem: 0 should arrive immediately, 5 should arrive at T = 6 seconds and 13 should arrive at T = 13. concatMapEager makes sure the order and timing are kept. Since only standard operators are in use, backpressure and unsubscription compose naturally.

How do I release one audio track and not all audio ionic/cordova cordova-plugin-media

I'm using cordova/ionic and cordova-plugin-media to crossfade two music tracks together.
I've got a problem where I'm crossfading one Media object with another.
It works fine when I call .release() on the finished Media object on Android, but on iOS .release() kills all audio. If I don't call release on iOS I end up with a 'memory leak'. I say 'memory leak' because it isn't really one; I know that the object is still there.
It doesn't matter if I reset the object and null it; according to Xcode, the memory footprint steadily gets larger.
My cut-down loadTrack code:
var loadTrack = function(playerName, track) {
    var pn = playerName;

    var mediaOnEnd = function() {
        console.log('mediaOnEnd RELEASING: ' + pn);
        $scope.player.media[pn].release();
    };

    var mediaError = function(error) {
        prepareNextTrack(pn);
    };

    var mediaStatus = function(status) {
        switch (status) {
            case 2:
                console.log('playing:' + pn);
                break;
            case 4:
                console.log('stop:' + pn);
                break;
        }
    };

    $scope.player.media[pn] = new $window.Media(
        config.local_path + $scope.currentTrack.id + config.file_extention,
        mediaOnEnd,
        mediaError,
        mediaStatus
    );

    $scope.player.media[pn].play();
};
When mediaOnEnd is called, or when I call it in my crossfade function (not included), all audio is stopped.
Is there a way I can do this properly?

Moving data from one BlockingCollection to the other

I have code that copies integers to buffer1, then from buffer1 to buffer2, and then consumes all data from buffer2.
It processes 1000 values in 15 seconds, which is a lot of time compared to the size of the input. When I remove the Task.Delay(1).Wait() from the second task t2, it completes quite fast.
Now, my question is: is the slowdown caused by two threads competing for the lock, or is my code somehow faulty?
var source = Enumerable.Range(0, 1000).ToList();
var buffer1 = new BlockingCollection<int>(100);
var buffer2 = new BlockingCollection<int>(100);

var t1 = Task.Run
(
    delegate
    {
        foreach (var i in source)
        {
            buffer1.Add(i);
        }
        buffer1.CompleteAdding();
    }
).ConfigureAwait(false);

var t2 = Task.Run
(
    delegate
    {
        foreach (var i in buffer1.GetConsumingEnumerable())
        {
            buffer2.Add(i);
            //Task.Delay(1).Wait();
        }
        buffer2.CompleteAdding();
    }
).ConfigureAwait(false);

CollectionAssert.AreEqual(source.ToList(), buffer2.GetConsumingEnumerable().ToList());
An update: this is just demo code; I am blocking for 1 millisecond to simulate some computation that takes place in my real code. I put 1 millisecond there because it's such a small amount. I can't believe that removing it makes the code complete almost immediately.
The clock has ~15ms resolution. 1ms is rounded up to 15. That's why 1000 items take ~15 seconds. (Actually, I'm surprised. On average each wait should take about 7.5ms. Anyway.)
Simulating work with sleep is a common mistake.
