AudioKit bounce recording sync - iOS

I need help with an issue. I am recording a bounce with renderToFile in AudioKit 4, and an artist has complained that the different tracks are not aligned to the sample. In preRender we have a list of players with the different tracks and recordings that are set to play in succession. The problem is that I cannot set the AVAudioTime scheduling because it crashes, I suppose because the engine is in manualRenderingMode. Is there a way to sync them to the sample? I suppose this is an issue tied to the underlying AVAudioEngine.
I cannot use AVMutableComposition because we need the bounce to be exactly what AudioKit plays back, and the volumes would differ.
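For reference, the scheduling I am attempting in preRender is roughly the following (a simplified sketch using plain AVAudioPlayerNode and placeholder names; the real code uses AudioKit players):
// Sketch of the intended preRender scheduling (placeholder names).
// Each track should start exactly where the previous one ends, to the sample.
let sampleRate = AKManager.engine.manualRenderingFormat.sampleRate
var startSample: AVAudioFramePosition = 0

for (playerNode, file) in zip(playerNodes, audioFiles) {
    // Start this track at an absolute sample offset.
    let startTime = AVAudioTime(sampleTime: startSample, atRate: sampleRate)
    playerNode.scheduleFile(file, at: startTime, completionHandler: nil)
    playerNode.play() // this scheduling is what crashes once manual rendering mode is enabled
    startSample += file.length
}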

I've experienced random crashes after setting offline rendering mode, as well as after switching back to online rendering mode.
This seems to be triggered by conditions like the following:
Setting offline rendering mode when the engine hasn't completely finished processing.
Starting to process right after calling disableManualRenderingMode, without giving the engine enough time to start.
A partial workaround I've found is to wait some time before setting offline mode, as well as waiting a small interval after going back online. So I have functions in my code as follows:
func setOnlineMode(successCompletion: @escaping () -> Void, failCompletion: @escaping () -> Void) {
    AKManager.engine.stop()
    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.5, execute: {
        AKManager.engine.disableManualRenderingMode()
        do {
            try AKManager.engine.start()
        } catch {
            print("failed to start engine")
            failCompletion()
            return // don't also fire the success completion
        }
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.5, execute: {
            successCompletion()
        })
    })
}
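The offline-direction counterpart is along the same lines (a sketch; the render format and maximum frame count are whatever your bounce uses, and the 0.5 s delays are purely empirical):
func setOfflineMode(successCompletion: @escaping () -> Void, failCompletion: @escaping () -> Void) {
    AKManager.engine.stop()
    // Give the engine some time to actually finish processing before switching modes.
    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.5, execute: {
        do {
            let format = AKManager.engine.mainMixerNode.outputFormat(forBus: 0)
            try AKManager.engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4_096)
            try AKManager.engine.start()
            successCompletion()
        } catch {
            print("failed to enable offline mode: \(error)")
            failCompletion()
        }
    })
}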
I also try to avoid resetting the engine after going to offline mode:
/* From AudioKit renderToFile function */
try AKTry {
    // Engine can't be running when switching to offline render mode.
    if self.isRunning { self.stop() }
    try self.enableManualRenderingMode(.offline, format: audioFile.processingFormat, maximumFrameCount: maximumFrameCount)
    // This resets the sampleTime of offline rendering to 0.
    self.reset() /* I don't do this */
    try self.start()
}
To be honest, I've heavily modified my code to avoid going back and forth between online and offline mode. I only do it now at one point in my app, after a delay as explained above.

Related

Audio duration sometimes changes to near-zero in Safari

I am encountering a strange issue in Safari, on both macOS and iOS. Initially it seemed like audio would sometimes simply not play, without emitting any errors. After adding a lot of logging, I found that when I call audio.play(), the audio.duration property accurately reflects the duration of the clip. The pause and then ended events are then emitted almost immediately, and after that the duration for the same clip is suddenly somewhere between 0.001 and 0.003 seconds.
I am retaining a reference to the audio element and reusing it to play the same audio multiple times. It almost always works the first time, but after the first playthrough the symptoms described above appear on about 50% of subsequent plays.
The code where I play the audio is below:
// In the constructor for the class managing audio playback:
this.mediaElement.addEventListener('pause', e => {
    console.log('Media paused', this.mediaElement.duration); // This shows the very short duration if the audio did not play
});

// In the function in the class that plays the media.
try {
    await this.mediaElement.play();
    console.log('Media is playing', this.mediaElement.duration); // This shows an accurate duration
    this.state = Playable.state.PLAYING;
} catch (e) {
    console.error('Playing media failed:', e);
    if (e && e.name === 'NotAllowedError') {
        ErrorHandler.playbackNotAllowed(e);
        this.state = Playable.state.PAUSED;
        return;
    } else {
        this._fail(e);
        this._fakePlay();
    }
}
As you can see, I'm not doing anything strange or complicated here. So far I haven't been able to figure out why the duration would change in this way only after playing the audio. Is this a known bug, or is there something I may be doing that could cause this behavior? The closest thing I can think of is that I sometimes set currentTime to 0 when I need to start playing the audio from the beginning, but that should not change the duration.

Connecting bluetooth headphones while app is recording in the background causes the recording to stop

I am facing the following issue and hoping someone else encountered it and can offer a solution:
I am using AVAudioEngine to access the microphone. Until iOS 12.4, every time the audio route changed I was able to restart the AVAudioEngine graph to reconfigure it and ensure the input/output audio formats fit the new input/output route. Due to changes introduced in iOS 12.4 it is no longer possible to start (or restart for that matter) an AVAudioEngine graph while the app is backgrounded (unless it's after an interruption).
The error Apple now throws when I attempt this is:
2019-10-03 18:34:25.702143+0200 [1703:129720] [aurioc] 1590: AUIOClient_StartIO failed (561145187)
2019-10-03 18:34:25.702528+0200 [1703:129720] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1544:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
2019-10-03 18:34:25.711668+0200 [1703:129720] [Error] Unable to start audio engine The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
I'm guessing Apple closed a security vulnerability there, so I removed the code that restarts the graph when the audio route changes (e.g. Bluetooth headphones are connected).
It seems that when an I/O audio format changes (as happens when the user connects a Bluetooth device), an .AVAudioEngineConfigurationChange notification is fired, to allow the integrating app to react to the change in format. This is really what I should have used to handle changes in I/O formats from the beginning, instead of brute-force restarting the graph. According to the Apple documentation - “When the audio engine’s I/O unit observes a change to the audio input or output hardware’s channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification.” (see the docs here). When this happens while the app is backgrounded, I am unable to start the audio engine with the correct audio I/O formats, because of the restriction described above.
So, bottom line: it looks like by closing a security vulnerability, Apple introduced a bug in reacting to audio I/O format changes while the app is backgrounded. Or am I missing something?
I'm attaching a code snippet to better describe the issue. For a plug-and-play AppDelegate see here - https://gist.github.com/nevosegal/5669ae8fb6f3fba44505543e43b5d54b.
class RCAudioEngine {

    private let audioEngine = AVAudioEngine()

    init() {
        setup()
        start()
        NotificationCenter.default.addObserver(self, selector: #selector(handleConfigurationChange), name: .AVAudioEngineConfigurationChange, object: nil)
    }

    @objc func handleConfigurationChange() {
        // Attempting to call start(),
        // or audioEngine.reset(), setup() and start(),
        // or any other combination that involves starting the audioEngine,
        // results in error 561145187.
        // Not calling start() doesn't return this error, but also doesn't restart
        // the recording.
    }

    public func setup() {
        // Set up nodes
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let mainMixerNode = audioEngine.mainMixerNode

        // Mute output to avoid feedback
        mainMixerNode.outputVolume = 0.0

        inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { (buffer, _) -> Void in
            // Do audio conversion and use buffers
        }
    }

    public func start() {
        RCLog.debug("Starting audio engine")
        guard !audioEngine.isRunning else {
            RCLog.debug("Audio Engine is already running")
            return
        }

        do {
            audioEngine.prepare()
            try audioEngine.start()
        } catch {
            RCLog.error("Unable to start audio engine \(error.localizedDescription)")
        }
    }
}
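For what it's worth, the handler body I would expect to need is just a sketch along these lines; it works when the configuration change happens in the foreground, but hits the same 561145187 error while backgrounded:
@objc func handleConfigurationChange() {
    // Reconfigure for the new route: drop the old tap, reinstall it with
    // the input node's new format, then try to restart the engine.
    audioEngine.inputNode.removeTap(onBus: 0)
    setup() // reinstalls the tap using the new inputFormat(forBus: 0)
    start() // works in the foreground; fails with 561145187 while backgrounded
}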
The only related fix I can see went into iOS 12.4, and I am not sure whether it causes this issue.
From the release notes (https://developer.apple.com/documentation/ios_ipados_release_notes/ios_12_4_release_notes):
"Resolved an issue where running an app in iOS 12.2 or later under the Leaks instrument resulted in random numbers of false-positive leaks for every leak check after the first one in a given run. You might still encounter this issue in Simulator, or in macOS apps when using Instruments from Xcode 10.2 and later. (48549361)"
You can raise an issue with Apple if you are a registered developer; they might help you if the defect is on their side.
You can also test with the upcoming iOS release (via the Apple beta program) to check whether your code works in the future release.

ARSession and Recording Video

I’m writing a video recorder manually. Unfortunately that’s necessary if you want to record video and use ARKit at the same time. I’ve got most of it figured out, but now I need to optimize it a bit because my phone gets pretty hot running ARKit, Vision and this recorder all at once.
To make the recorder, you need to use an AVAssetWriter with an AVAssetWriterInput (and an AVAssetWriterInputPixelBufferAdaptor). The input has an isReadyForMoreMediaData property you need to check before you can write another frame. I’m recording in real time (or as close to it as possible).
Right now, when the ARKit ARSession gives me a new frame, I immediately pass its pixel buffer to the AVAssetWriterInput. What I want to do instead is add it to a queue, and have a loop check whether there are samples available to write. For the life of me I can’t figure out how to do that efficiently.
I want to just run a while loop like this, but it seems like it would be a bad idea:
func startSession() {
    // …
    while isRunning {
        guard !pixelBuffers.isEmpty && writerInput.isReadyForMoreMediaData else {
            continue
        }
        // process sample
    }
}
Can I run this on a separate thread from the ARSession.delegateQueue? I don't want to run into issues with CVPixelBuffers from the camera being retained for too long.
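One alternative to the busy loop is AVAssetWriterInput's own pull model, requestMediaDataWhenReady(on:using:): you hand it a serial queue and a block, and it invokes the block whenever the input can accept more data. A minimal sketch, assuming a pendingBuffers queue of (CVPixelBuffer, CMTime) pairs that the ARSession delegate appends to (the names and the queue are placeholders):
// Sketch: let the writer input pull frames instead of busy-waiting.
// pendingBuffers must only be touched on writerQueue (or otherwise synchronized).
let writerQueue = DispatchQueue(label: "recorder.writer")

writerInput.requestMediaDataWhenReady(on: writerQueue) { [weak self] in
    guard let self = self else { return }
    // Append as many queued frames as the input will accept right now.
    while self.writerInput.isReadyForMoreMediaData, !self.pendingBuffers.isEmpty {
        let (pixelBuffer, presentationTime) = self.pendingBuffers.removeFirst()
        if !self.pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime) {
            // Handle the write failure (check assetWriter.status / error).
            break
        }
    }
    // If the queue is empty the block just returns; AVFoundation calls it
    // again the next time the input wants more data.
}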

Timing accuracy with Swift using GCD dispatch_after

I'm trying to create a metronome for iOS in Swift. I'm using a GCD dispatch queue to time an AVAudioPlayer. The variable machineDelay is being used to time the player, but it's running slower than the time I'm asking of it.
For example, if I ask for a delay of 1sec, it plays at 1.2sec. 0.749sec plays at about 0.92sec, and 0.5sec plays at about 0.652sec. I could try to compensate by adjusting for this discrepancy but I feel like there's something I'm missing here.
If there's a better way to do this altogether, please give suggestions. This is my first personal project so I welcome ideas.
Here are the various functions that should apply to this question:
func milliseconds(beats: Int) -> Double {
    // Note: despite the name, this returns seconds per beat (60 / bpm).
    let ms = (60 / Double(beats))
    return ms
}

func audioPlayerDidFinishPlaying(player: AVAudioPlayer, successfully flag: Bool) {
    if self.playState == false {
        return
    }
    playerPlay(playerTick, delay: NSTimeInterval(milliseconds(bpm)))
}

func playerPlay(player: AVAudioPlayer, delay: NSTimeInterval) {
    let machineDelay: Int64 = Int64((delay - player.duration) * Double(NSEC_PER_SEC))
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, machineDelay), dispatch_get_main_queue(), { () -> Void in
        player.play()
    })
}
I have never really done anything with sound on iOS but I can tell you why you are getting those inconsistent timings.
What happens when you use dispatch_after() is that some timer is set somewhere in the OS and at some point soon after it expires, it puts your block on the queue. "at some point after" is going to be short, but depending on what the OS is doing, it will almost certainly not be close to zero.
The main queue is serviced by the main thread using the run loop. This means your task to play the sound is competing for use of the CPU with all the UI functionality. This means that the chance of it playing the sound immediately is quite low.
Finally, the completion handler will fire at some short time after the sound finishes playing but not necessarily straight away.
All of these little delays add up to the latency you are seeing. Unfortunately, depending on what the device is doing, that latency can vary. This is never going to work for something that needs precise timings.
There are, I think, a couple of ways to achieve what you want. However, audio programming is beyond my area of expertise. You probably want to start by looking at Core Audio. My five minutes of research suggests either Audio Queue Services or OpenAL, but those five minutes are literally everything I know about sound on iOS.
dispatch_after is not intended for sample-accurate callbacks.
If you are writing audio applications there is no way around it: you need to implement some Core Audio code in one way or another.
Core Audio will "pull" specific counts of samples from you. Do the math (figuratively ;)
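To make that concrete: one way to get sample-accurate clicks without writing a render callback is to schedule every tick at an exact sample position on an AVAudioPlayerNode, instead of using timers. A rough sketch (tickFile is assumed to be a short click sound loaded as an AVAudioFile):
import AVFoundation

// Sketch: schedule each click at an exact sample position instead of using timers.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: tickFile.processingFormat)

do {
    try engine.start()
} catch {
    print("failed to start engine: \(error)")
}

let sampleRate = tickFile.processingFormat.sampleRate
let bpm = 120.0
let samplesPerBeat = AVAudioFramePosition((60.0 / bpm) * sampleRate)

// Schedule a batch of clicks ahead of time, each at beat * samplesPerBeat
// in the player's timeline, then start playback.
for beat in 0..<64 {
    let when = AVAudioTime(sampleTime: AVAudioFramePosition(beat) * samplesPerBeat, atRate: sampleRate)
    player.scheduleFile(tickFile, at: when, completionHandler: nil)
}
player.play()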

Web Audio API on iOS Safari does not play even after user interaction

I know that there is a limitation in iOS Safari where audio does not play until the user triggers an interaction, so I have placed the code inside a touchstart event handler. But unfortunately, I have tried almost every combination and I couldn't get it to play on iOS Safari.
Here are the things I have tried:
putting the audio loading outside the touchstart callback
adding a gain node
using 0.01 as the start time
None of the above works in iOS Safari, but they all play in desktop Chrome and Safari. Here is the link to the gist; you can see the versions where I made the changes (P.S. the click event is used for testing on desktop):
https://gist.github.com/angelathewebdev/32e0fbd817410db5dea1
Sounds play only once currentTime starts to run, but scheduling sounds exactly at currentTime doesn't seem to work; they need to be a little bit in the future (e.g. 10 ms). You can use the following createAudioContext function to wait until the context is ready to make noise. User action doesn't seem to be required on iPhone, but I've had no such success on iPad just yet.
function createAudioContext(callback, errback) {
    var ac = new webkitAudioContext();
    ac.createGainNode(); // .. and discard it. This gets
                         // the clock running at some point.
    var count = 0;
    function wait() {
        if (ac.currentTime === 0) {
            // Not ready yet.
            ++count;
            if (count > 600) {
                errback('timeout');
            } else {
                setTimeout(wait, 100);
            }
        } else {
            // Ready. Pass on the valid audio context.
            callback(ac);
        }
    }
    wait();
}
Subsequently, when playing a note, don't call .noteOn(ac.currentTime), but do .noteOn(ac.currentTime + 0.01) instead.
