AudioKit metronome synced with time-pitched audio loops

I'm making an app that plays synced audio loops with a metronome. For example, I might have 3 files like this:
bass_60bpm.m4a
drums_60bpm.m4a
guitar_60bpm.m4a
And a metronome sound tick.m4a, which I play with AKSamplerMetronome.
I need to play them back at arbitrary tempos, so I use AKTimePitch on the AKAudioFiles (so, playing at 90 bpm, I'd play bass_60bpm.m4a at 1.5x).
This almost works, but after 3-5 loops, the metronome gets out of sync with the audio loops. I think I understand why that happens (audio_sample_length * floating_point_number is not equivalent to AKSamplerMetronome's tempo calculations), but I don't know how to fix it.
What I suspect I need to do is manually reimplement some or all of AKSamplerMetronome, but play the metronome ticks based on AKTimePitch's output; however, I can't piece together enough info from the API, docs, and examples to make it happen.

An alternate approach might be to use AKSequencer instead of AKSamplerMetronome. The MIDI output of the sequencer's track could be sent to an AKCallbackInstrument, and the sequencer's events could get the callback function to trigger both the time-stretched sample and the metronome ticks (and you could also trigger synchronized UI events from there as a bonus). This would guarantee that they stay in sync.
Apple's MusicSequence, which is what AKSequencer uses under the hood, is a little flaky with its timing immediately after you call play, but it's pretty solid after that. If you start the sequencer just before its looping point (i.e., if you have a 1-bar loop, start it one sixteenth note before the end of the first bar), then you can get past that flakiness before the actual loop starts.
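A minimal sketch of that wiring, assuming AudioKit 4.x (class names and callback signatures changed between releases; the file names are the ones from the question, and the note numbers are arbitrary):

    import AudioKit

    // Time-stretch the 60 bpm loop to 90 bpm (1.5x), as described in the question.
    let file = try AKAudioFile(readFileName: "bass_60bpm.m4a")
    let player = try AKAudioPlayer(file: file)
    let timePitch = AKTimePitch(player)
    timePitch.rate = 90.0 / 60.0

    let tickFile = try AKAudioFile(readFileName: "tick.m4a")
    let tickPlayer = try AKAudioPlayer(file: tickFile)

    AudioKit.output = AKMixer(timePitch, tickPlayer)

    // One sequencer track drives both the loop and the ticks via MIDI callbacks,
    // so a single clock controls everything and nothing can drift.
    let callbackInst = AKCallbackInstrument()
    callbackInst.callback = { status, note, _ in
        guard status == .noteOn else { return }
        if note == 60 {                 // arbitrary note chosen to mean "restart loop"
            player.stop(); player.play()
        } else {                        // any other note is a metronome tick
            tickPlayer.stop(); tickPlayer.play()
        }
    }

    let sequencer = AKSequencer()
    let track = sequencer.newTrack()
    track?.setMIDIOutput(callbackInst.midiIn)
    track?.add(noteNumber: 60, velocity: 127, position: AKDuration(beats: 0), duration: AKDuration(beats: 0.1))
    for beat in 0..<4 {                 // a tick on every beat of a one-bar loop
        track?.add(noteNumber: 61, velocity: 127, position: AKDuration(beats: Double(beat)), duration: AKDuration(beats: 0.1))
    }
    sequencer.setLength(AKDuration(beats: 4))
    sequencer.enableLooping()
    sequencer.setTempo(90)

    try AudioKit.start()
    sequencer.play()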

Related

How many sounds can be played at a time on iOS - AVAudioPlayer vs. AVAudioEngine & AVAudioPlayerNode

I have an application in which there is a set of about 50 sounds, which range in length from about 300 ms to about 4 seconds. Various combinations of sounds need to be played at precise times (up to 10 of them can be triggered at once). Some sounds need to be repeated at intervals as short as 100 ms.
I've implemented this as a two-dimensional array of AVAudioPlayers, all of which are loaded with sounds at application launch. There are several players for each sound, to accommodate rapidly repeating sounds. The players for a particular sound are reused in strict rotation. When a new sound is scheduled, the oldest player for that sound is stopped and its current time is set to 0, so the sound will repeat from the start the next time it's scheduled using player.play(atTime:). There's a thread that schedules new sets of sounds about 300 ms before they are to be played.
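A minimal sketch of that rotation scheme (the SoundPool type and its names are illustrative, not from the question):

    import AVFoundation

    final class SoundPool {
        private var players: [AVAudioPlayer]
        private var nextIndex = 0

        init(url: URL, voices: Int) throws {
            players = try (0..<voices).map { _ in try AVAudioPlayer(contentsOf: url) }
            players.forEach { $0.prepareToPlay() }
        }

        // Reuse players in strict rotation: stop the oldest one, rewind it,
        // and schedule it on the device clock used by play(atTime:).
        func schedule(atDeviceTime time: TimeInterval) {
            let player = players[nextIndex]
            nextIndex = (nextIndex + 1) % players.count
            player.stop()
            player.currentTime = 0
            player.play(atTime: time) // time is relative to deviceCurrentTime's clock
        }
    }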
It all works quite nicely, up to a point that varies with the device. Eventually, as sounds are played more rapidly, and/or more simultaneous sounds are scheduled, some sounds will refuse to play.
I'm contemplating switching to AVAudioEngine and AVAudioPlayerNodes, using a mixer node. Does anyone know if that approach is likely to handle more simultaneous sounds? My guess is that both approaches translate into a rather similar set of CoreAudio functions, but I haven't actually written the code to test that hypothesis - before I do that, I'm hoping that someone else may have explored this issue before me. I've been deep into CoreAudio before, and I'm hoping to be able to use these handy high-level functions instead!
Also, does anyone know of a way to trigger a closure when a sound starts playing? The documented functionality allows for a callback closure, but the only way I've been able to trigger events when the sounds start is to schedule them on a high quality-of-service DispatchQueue. Unfortunately, depending on the system load, queued events may be executed at times that vary from the scheduled times by up to about 50 ms, which is not quite as precise as I'd prefer.
Using AVAudioEngine with AVAudioPlayerNodes provides much better performance, albeit at the cost of a bit of code complexity. I was able to easily increase the playback rate by a factor of five, with better buffer control.
The main drawback in switching to this approach was that Apple's documentation is less than stellar. A few additions to Apple's documentation would have made this task a LOT easier:
Mixer nodes are documented as being able to convert sample rates and channel counts, so I attempted to configure audioEngine.mainMixerNode to convert mono buffers to the output node's settings. Setting the main mixer node's output to the output node's format appeared to be accepted, but threw opaque errors at run time that complained about channel count mismatches.
It appears that the main mixer node is not actually a fully functional mixer node. To get this to work, I had to insert another mixer node that performed the channel conversion, and connect it to the main mixer node. If Apple's documentation had actually mentioned this, it would have saved me a lot of experimentation.
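A minimal sketch of that workaround, assuming mono source buffers at 44.1 kHz (the formats are placeholders):

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    // Intermediate mixer that performs the mono-to-output channel conversion;
    // connecting the player straight to mainMixerNode produced the channel-count errors.
    let converter = AVAudioMixerNode()
    let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!

    engine.attach(player)
    engine.attach(converter)
    engine.connect(player, to: converter, format: monoFormat)
    engine.connect(converter, to: engine.mainMixerNode, format: nil) // let the engine negotiate
    try engine.start()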
Also, just scheduling a buffer does not cause anything to play; you need to call play() on the player node first. Apple's documentation is confusing here - it says that calling play() with no arguments will cause playback to occur immediately, which wasn't what I wanted. It took some experimentation to determine that play() just tells the player node to wake up, and that scheduled buffers will actually be played at their scheduled times, rather than immediately.
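Continuing the sketch above, the scheduling behaviour looks roughly like this (the buffer contents and the half-second offset are placeholders):

    // Schedule a buffer half a second ahead on the player's sample clock.
    if let buffer = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: 4_410) {
        buffer.frameLength = buffer.frameCapacity // fill with real samples in practice
        let startTime = AVAudioTime(sampleTime: AVAudioFramePosition(0.5 * monoFormat.sampleRate),
                                    atRate: monoFormat.sampleRate)
        player.scheduleBuffer(buffer, at: startTime, options: [], completionHandler: nil)
        player.play() // wakes the node; the buffer still waits for its scheduled time
    }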
It would have been enormously helpful if Apple had provided more than the auto-generated class documentation. A bit of human-generated documentation would have saved me an awful lot of frustrating experimentation.
Chris Adamson's well-written "Learning Core Audio" was very helpful when I was working with Core Audio - it's a shame that the newer AVAudioEngine functionality isn't documented nearly as well.

Superpowered audio player pause at position

I'm streaming audio using _audioPlayer->openHLS() and I need to start and stop at specific positions.
The best way seems to be to use loopBetween and then call exitLoop in the LoopEnd event. However, I can't get loopBetween to play!
_audioPlayer->loopBetween(startTimeMS, stopTimeMs, true, 255, false);
I have tried calling _audioPlayer->play(false) before or after the loopBetween, but then the audio plays without stopping. If I just call loopBetween it never starts playing.
Is there some config I'm missing to get loopBetween to work? The SDK has no sample code covering looping.
EDIT: I've found one way to do this, by polling positionMs in the audio processing callback. I'd still like to know how to make looping work, as that seems like a cleaner solution.
Looping doesn't work for HLS streams. Polling positionMs in the audio processing callback is a perfect solution.

Playback multiple sounds starting in exactly the same moment

I need to playback short audio samples with precise timing, including up to 4 sounds starting simultaneously.
These sound samples are triggered with NSTimers (alternatively, I've also tried dispatch_after).
I've tried with AVPlayer and AVAudioPlayer but they are just not precise enough in timing.
Multiple sounds played at once will be all over the place, especially on the real device.
I've read that NSTimer can deviate by up to a few hundred milliseconds, which is just too much for me.
As a test, I've set up a few AVAudioPlayers with one audio sample each and triggered them all at the same time in didSelectRow...(), but they do not sound at exactly the same moment, even with no NSTimer involved.
It seems that it's just not possible to playback 2 sounds starting exactly at the same time with AVAudioPlayer. Is this confirmed?
From what I've gathered there are not many alternatives, Audio Queue Services being one that allows precise timing and multiple sounds at once.
However, it's written in C, which I've never worked with, and it's hard to find any examples showing how to integrate it for simple audio playback of a sound (I'm using Swift). I'd basically just need to know how to integrate Audio Queue Services to play back a simple sound.
If someone can point me in the right direction (or knows a better solution to what I'm looking for), that would be much appreciated.
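For what it's worth, AVAudioPlayer does expose a shared hardware clock for this: prepare every player, read deviceCurrentTime once, and pass the same future time to each player's play(atTime:). A minimal sketch (the file names and the 0.1 s lead time are placeholders):

    import AVFoundation

    let urls = ["kick", "snare", "hat", "clap"].compactMap {
        Bundle.main.url(forResource: $0, withExtension: "m4a") // placeholder file names
    }
    let players: [AVAudioPlayer] = try urls.map {
        let player = try AVAudioPlayer(contentsOf: $0)
        player.prepareToPlay()
        return player
    }

    // One shared reference time, slightly in the future on the output device's clock.
    if let reference = players.first?.deviceCurrentTime {
        let startTime = reference + 0.1 // lead time so every player can be queued
        players.forEach { $0.play(atTime: startTime) }
    }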

iOS 8: Real Time Sound Processing and Pitch Shifting - OpenAL or another framework

I'm trying to build an app that plays a sequence of tones in a loop.
Currently I use OpenAL, and my experience with that framework has been positive, since it also lets me pitch-shift the sound.
Here's the scenario:
load a short sound (3 seconds) from a CAF file
play that sound in a loop, applying a pitch shift as well.
This works well, provided the tempo isn't too fast - that is, as long as each tone lasts more than 10 milliseconds.
However, my NSTimer (which drives the sequence of tones) needs to be configurable - and as soon as the tempo increases (less than 10 ms per tone), the sound is no longer played correctly; some tones are even dropped in an apparently random way.
It seems that real time sound processing becomes an issue.
I'm still a novice in iOS programming, but I suspect that Apple imposes limits concerning time consumption and/or synchronization.
Now my questions:
OpenAL is written in C - so far, I haven't understood the whole code and philosophy behind that framework. Is there a way to resolve the problem described above by making some modifications - I mean setting flags/values or overriding certain methods?
If not, do you know of another iOS sound framework more appropriate for this kind of real-time sound processing?
Many thanks in advance!
I know this is a rather unusual and difficult problem - maybe some of you have solved a similar one? Just to emphasize: pitch shifting must be supported!
It is not immediately clear from the explanation precisely what you're trying to achieve. Some code is expected.
However, your use of NSTimer to sequence audio playback is clearly problematic. It is neither intended as a reliable nor a high resolution timer.
NSTimer delivers events through a run-loop queue - probably your application's main queue - where they contend with user interface events.
As the main thread is not a real-time thread, it may not even be scheduled to run for some time.
There may be quantisation effects on the delay you requested, meaning your events effectively round to zero clock ticks and get scheduled immediately.
Periodic timers also have deleterious effects on battery life; iOS and Mac OS X both take steps to reduce their impact through timer coalescing.
The clock you should be using for sequencing events is the playback sample clock, which is available in the render handler of whatever framework you use. As well as being reliable, this is also efficient: the render handler runs periodically anyway, and in a real-time thread.
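To illustrate, a minimal sketch using AVAudioSourceNode (iOS 13+, so newer than the iOS 8 target of the question; the 10 ms interval and the silent placeholder samples are illustrative):

    import AVFoundation

    let engine = AVAudioEngine()
    let sampleRate = 44_100.0 // placeholder; query the output format in real code
    let samplesPerTone = Int(sampleRate * 0.010) // a tone boundary every 10 ms
    var framesRendered = 0

    // Sequence on the sample clock inside the render callback, not on an NSTimer.
    let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            if framesRendered % samplesPerTone == 0 {
                // Sample-accurate point to start or retune the next tone.
            }
            let sample: Float = 0 // placeholder: render the current tone here
            for buffer in buffers {
                buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
            }
            framesRendered += 1
        }
        return noErr
    }

    engine.attach(source)
    engine.connect(source, to: engine.mainMixerNode, format: nil)
    try engine.start()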

loadSound: don't wait for the entire download before playing, but don't have it start automatically

I am trying to play an MP3 using Actionscript 2. I have the following requirements:
I don't want to wait for the MP3 to load before playing it.
I want to know when enough of the MP3 has downloaded that I can start playing it.
I don't want the MP3 to start playing immediately: I need to control when the play starts.
An example scenario is that I need to start playing a 30-second MP3 exactly 8 seconds from now (at the top of the minute, let's say). Depending on the connection, I may or may not be able to download the entire MP3 by then, but I can almost certainly download enough to start playing without interruption.
The closest way I can see to do this is Sound.loadSound(url, isStreamable). If I pass true for the isStreamable parameter, though, the sound will start playing immediately (docs say: Playback begins when sufficient data has been received to start the decompressor).
I've tried the following:
call mySound.loadSound(mp3Url, true)
mySound.stop(); // so that the auto-play won't happen
set a timer for the top of the minute (8 seconds from now).
In the timer, check the duration of the sound (which continues to get bigger as the file gets loaded). If the duration is < 5 seconds, we don't have enough buffered sound, so generate an error. Otherwise, start playing the sound with s.start(0).
The behavior I see is that the sound doesn't start playing until it's entirely downloaded.
I found your posting (which is a little older now, but... anyway):
There are two methods you can use in the Sound class:
Sound.getBytesTotal
and
Sound.getBytesLoaded
If you compare these two, you can tell how many bytes have been loaded at a given point in time. (See also Sound.onLoad and Sound.onSoundComplete; these two are helpful.)
There are also some examples in the Flash help for this.
Greetings,
Draco
I do not believe that this is possible using ActionScript 2. I think you are going to have to either move to AS3 or wrap the MP3 in a SWF.
Even with AS3 you may have to target FP10 in order to use the new sound methods and events that were just added (Sound.extract and Event.SAMPLE_DATA).
In general Sound capabilities in Flash have really lagged until the most recent version of the player.
