I've implemented an audio EQ via AVAudioEngine and AVAudioPlayerNode, and it works fine (I've tried scheduling both a buffer and a file). However, once the app goes to the background, the sound just fades away. The background mode is set correctly, as is the audio session (I've verified this by playing music with AVPlayer and then going to the background). No audio engine notifications are received.
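For context, here is roughly what the session and background configuration looks like (a minimal sketch of what I mean by "correctly set"; the function name and call site are just for illustration):
import AVFoundation

// Called early at app launch; the app also has the Audio background mode
// enabled in its capabilities.
func configureAudioSessionForBackgroundPlayback() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playback is the category that allows audio to continue in the background.
        try session.setCategory(.playback, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}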
Here's the code for initializing the engine:
// Create the EQ wrapper and an 8-band EQ unit with flat global gain
let x = CrbnPlayerEQ()
let eq = AVAudioUnitEQ(numberOfBands: 8)
x.audioUnitEQ = eq
x.audioUnitEQ?.globalGain = 0

// Attach the EQ and the player node, then wire player -> EQ -> main mixer
x.audioEngine.attach(eq)
x.audioEngine.attach(CrbnPlayer.shared.player.nodePlayer)
let mixer = x.audioEngine.mainMixerNode
x.audioEngine.connect(CrbnPlayer.shared.player.nodePlayer, to: eq, format: mixer.outputFormat(forBus: 0))
x.audioEngine.connect(eq, to: mixer, format: mixer.outputFormat(forBus: 0))
try? x.audioEngine.start()
And here's the play part for the AVAudioPlayerNode:
CrbnPlayerEQ.shared.audioEngine.prepare()
try? CrbnPlayerEQ.shared.audioEngine.start()
self.nodePlayer.stop()
self.nodePlayer.scheduleFile(audioFile, at: nil) {
}
The result is the same when I use scheduleBuffer instead of scheduleFile. I've tried changing playback modes and audio session options, but none of that helped. I've also tried stopping and restarting the audio session when the app goes to the background.
One solution would be to switch to AVPlayer once the app goes to the background, but then I'd lose the EQ.
Does anyone know how to keep the buffer playing even after the app goes to the background?
Core Audio is always a bit of a mystery due to the lack of documentation, and recently I hit another wall:
In my program I switch between RemoteIO and VoiceProcessingIO (VPIO) units, and I also change the AVAudioSession configuration in between. I tried to turn off AGC on VPIO with the following code:
if (ASBD.componentSubType == kAudioUnitSubType_VoiceProcessingIO) {
    UInt32 turnOff = 0;
    status = AudioUnitSetProperty(_myAudioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &turnOff,
                                  sizeof(turnOff));
    NSAssert1(status == noErr, @"Error setting AGC status: %d", (int)status);
}
I'm still not sure whether this code disables AGC on the microphone side or the speaker side of VPIO, but anyway, let's continue. Here's the sequence that reproduces the problem:
1. Create a RemoteIO output audio unit with the PlayAndRecord audio session category, work with it, and destroy the unit.
2. Switch the audio session to the Playback-only category.
3. Switch the audio session back to PlayAndRecord, create another VPIO unit, work with it, and destroy it.
4. Switch the audio session to Playback and then back to PlayAndRecord.
After these steps, any RemoteIO or VPIO unit created later carries this amplified microphone signal (as if a huge AGC were always applied), and there's no way to get back to normal short of killing the app and starting over.
Maybe it's my particular sequence that triggers this. Has anyone seen this before, and does anyone know a correct workaround?
Try setting the mode to AVAudioSessionModeMeasurement (AVAudioSession.Mode.measurement in Swift) when configuring your app's audio session.
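For example, a minimal sketch of that configuration (the play-and-record category here is an assumption; use whatever category your app already needs):
import AVFoundation

// The .measurement mode minimizes system-supplied signal processing on input
// and output, such as automatic gain control.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}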
I am facing the following issue and am hoping someone else has encountered it and can offer a solution:
I am using AVAudioEngine to access the microphone. Until iOS 12.4, every time the audio route changed I was able to restart the AVAudioEngine graph to reconfigure it and ensure the input/output audio formats fit the new input/output route. Due to changes introduced in iOS 12.4 it is no longer possible to start (or restart for that matter) an AVAudioEngine graph while the app is backgrounded (unless it's after an interruption).
The error Apple now throws when I attempt this is:
2019-10-03 18:34:25.702143+0200 [1703:129720] [aurioc] 1590: AUIOClient_StartIO failed (561145187)
2019-10-03 18:34:25.702528+0200 [1703:129720] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1544:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
2019-10-03 18:34:25.711668+0200 [1703:129720] [Error] Unable to start audio engine The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
1. I'm guessing Apple closed a security vulnerability there, so I removed the code that restarted the graph when the audio route changed (e.g. when Bluetooth headphones are connected).
2. It seems that when the I/O audio format changes (as happens when the user connects a Bluetooth device), an .AVAudioEngineConfigurationChange notification is fired, allowing the integrating app to react to the change in format. This is really what I should have used to handle changes in I/O formats from the beginning, instead of brute-forcing a graph restart. According to the Apple documentation - "When the audio engine's I/O unit observes a change to the audio input or output hardware's channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification." (see the docs here). When this happens while the app is backgrounded, I am unable to start the audio engine with the correct audio I/O formats, because of point #1.
So, bottom line: it looks like by closing a security vulnerability, Apple introduced a bug in reacting to audio I/O format changes while the app is backgrounded. Or am I missing something?
I'm attaching a code snippet to better describe the issue. For a plug-and-play AppDelegate see here - https://gist.github.com/nevosegal/5669ae8fb6f3fba44505543e43b5d54b.
class RCAudioEngine {

    private let audioEngine = AVAudioEngine()

    init() {
        setup()
        start()
        NotificationCenter.default.addObserver(self, selector: #selector(handleConfigurationChange), name: .AVAudioEngineConfigurationChange, object: nil)
    }

    @objc func handleConfigurationChange() {
        //attempt to call start()
        //or to audioEngine.reset(), setup() and start()
        //or any other combination that involves starting the audioEngine
        //results in an error 561145187.
        //Not calling start() doesn't return this error, but also doesn't restart
        //the recording.
    }

    public func setup() {
        //Setup nodes
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let mainMixerNode = audioEngine.mainMixerNode

        //Mute output to avoid feedback
        mainMixerNode.outputVolume = 0.0

        inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { (buffer, _) -> Void in
            //Do audio conversion and use buffers
        }
    }

    public func start() {
        RCLog.debug("Starting audio engine")
        guard !audioEngine.isRunning else {
            RCLog.debug("Audio Engine is already running")
            return
        }

        do {
            audioEngine.prepare()
            try audioEngine.start()
        } catch {
            RCLog.error("Unable to start audio engine \(error.localizedDescription)")
        }
    }
}
I see only one fix that went into iOS 12.4, and I am not sure whether it causes this issue.
From the release notes https://developer.apple.com/documentation/ios_ipados_release_notes/ios_12_4_release_notes :
"Resolved an issue where running an app in iOS 12.2 or later under the Leaks instrument resulted in random numbers of false-positive leaks for every leak check after the first one in a given run. You might still encounter this issue in Simulator, or in macOS apps when using Instruments from Xcode 10.2 and later. (48549361)"
You can raise an issue with Apple if you are a registered developer; they might help if the defect is on their side.
You can also test with the upcoming iOS release (via the Apple beta program) to check whether your code works in a future release.
I am trying to run Apple's SpeakToMe: Using Speech Recognition with AVAudioEngine sample from their website here. My problem is that after you stop the AVAudioEngine and SpeechRecognizer, you can no longer use system sounds.
How do you release the AVAudioEngine and SpeechRecognizer so that sounds work again after recording stops?
To duplicate this:
1. Download the sample code.
2. Add a UITextField to the storyboard.
3. Run the project and type into the text field (you'll notice you can hear your typing sounds).
4. Start recording, then stop recording.
5. Type into the text field again (now there is no sound).
UPDATE
This only happens on a real device - not on the simulator.
After hours of debugging I found the unreleased object causing the issue: the sample code never deactivates the AVAudioSession, which leaves the sound channels blocked.
The fix is to make the AVAudioSession a property:
private var audioSession : AVAudioSession?
Then deactivate the session when stopping the recording:
if let audioSession = audioSession {
do {
try audioSession.setActive(false, with: .notifyOthersOnDeactivation)
} catch {
// handle error
}
}
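For completeness, a sketch of the counterpart on the recording side, using the same older Swift API spellings as the snippet above (the startRecording name and the category are assumptions, not the sample's exact code):
func startRecording() throws {
    // Keep a reference in the property so the same session can be deactivated later.
    audioSession = AVAudioSession.sharedInstance()
    try audioSession?.setCategory(AVAudioSessionCategoryRecord)
    try audioSession?.setActive(true)
    // ... configure the recognition request and start the audio engine ...
}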
I am trying to build a very simple audio effects chain using Core Audio for iOS. So far I have implemented an EQ - Compression - Limiter chain, which works perfectly fine in the simulator. However, on device, the application crashes when connecting nodes to the AVAudioEngine due to an apparent mismatch between the input and output hardware formats.
'com.apple.coreaudio.avfaudio', reason: 'required condition is false:
IsFormatSampleRateAndChannelCountValid(outputHWFormat)'
Taking a basic example, my Audio Graph is as follows.
Mic -> Limiter -> Main Mixer (and Output)
and the graph is populated using
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: engine.inputNode!.outputFormatForBus(0))
which crashes with the above exception. If I instead use the limiter's format when connecting to the mixer
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: limiter.outputFormatForBus(0))
the application crashes with a kAudioUnitErr_FormatNotSupported error
'com.apple.coreaudio.avfaudio', reason: 'error -10868'
Before connecting the audio nodes in the engine, the inputNode has 1 channel and a sample rate of 44,100 Hz, while the outputNode has 0 channels and a sample rate of 0 Hz (deduced using outputFormatForBus(0)). Could this be because there is no node yet connected to the output mixer? Setting the preferred sample rate on AVAudioSession made no difference.
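For reference, the channel counts and sample rates above can be inspected along these lines (a sketch using the same Swift 2 era spellings as the snippets in this question; engine is the AVAudioEngine instance below):
// Log the hardware-facing formats before wiring the graph.
print("input:  \(engine.inputNode!.outputFormatForBus(0))")
print("output: \(engine.outputNode.outputFormatForBus(0))")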
Is there something that I am missing here? I have microphone access (verified using AVAudioSession.sharedInstance().recordPermission()), and I have set the AVAudioSession category to record (AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryRecord)).
The limiter is an AVAudioUnitEffect initialized as follows:
let limiter = AVAudioUnitEffect(audioComponentDescription:
    AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: kAudioUnitSubType_PeakLimiter,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0))
engine.attachNode(limiter)
and engine is a global, class variable
var engine = AVAudioEngine()
As I said, this works perfectly fine in the simulator (using the Mac's default hardware), but it continually crashes on various iPads running iOS 8 and iOS 9. I have a super basic example working which simply routes the mic input, via a player, to the output mixer:
do {
    file = try AVAudioFile(forWriting: NSURL.URLToDocumentsFolderForName(name: "test", withType: "caf")!, settings: engine.inputNode!.outputFormatForBus(0).settings)
} catch {}
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
Here the inputNode has 1 channel and a 44,100 Hz sample rate, while the outputNode has 2 channels and a 44,100 Hz sample rate, and no mismatch seems to occur. So the issue must be in the manner in which the AVAudioUnitEffect is connected to the output mixer.
Any help would be greatly appreciated.
This depends on some factors outside of the code you've shared, but it's possible you're using the wrong AVAudioSession category.
I ran into the same issue under slightly different circumstances. When I was using AVAudioSessionCategoryRecord as the AVAudioSession category, I hit this error when attempting to connect an audio tap, and my AVAudioEngine inputNode reported an outputFormat with a 0.0 sample rate.
After changing the category to AVAudioSessionCategoryPlayAndRecord, I received the expected 44,100 Hz sample rate and the issue was resolved.
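A sketch of what that might look like, using the same Swift 2 era API spellings as the question (engine and limiter are the question's objects; error handling is condensed):
let session = AVAudioSession.sharedInstance()
do {
    // PlayAndRecord gives the input node a valid hardware format, unlike the
    // record-only category in my case.
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}

// With the session configured, connect using the input node's format.
let inputFormat = engine.inputNode!.outputFormatForBus(0)
engine.connect(engine.inputNode!, to: limiter, format: inputFormat)
engine.connect(limiter, to: engine.mainMixerNode, format: inputFormat)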
I just started testing this very simple audio recording application, built with MonoTouch, on actual iPhone devices today. I've encountered an issue with what seems to be the re-use of the AVAudioRecorder and AVPlayer objects after their first use, and I'm wondering how I might solve it.
Basic Overview
The application consists of the following three sections :
List of Recordings (TableViewController)
Recording Details (ViewController)
New Recording (ViewController)
Workflow
When creating a recording, the user would click the "Add" button from the List of Recordings area and the application pushes the New Recording View Controller.
Within the New Recording Controller, the following variables are available:
AVAudioRecorder recorder;
AVPlayer player;
Each is initialized prior to its use:
//Initialized during the ViewDidLoad event
recorder = AVAudioRecorder.Create(audioPath, audioSettings, out error);
and
//Initialized in the "Play" event
player = new AVPlayer(audioPath);
Each of these works as intended on the initial load of the New Recording controller; however, any further attempts do not seem to work (no audio playback).
The Details area also has a playback section to let the user play back any recordings, but, much like the New Recording controller, playback doesn't function there either.
Disposal
They are both disposed as follows (upon exiting / leaving the View) :
if(recorder != null)
{
recorder.Dispose();
recorder = null;
}
if(player != null)
{
player.Dispose();
player = null;
}
I have also attempted to remove any observers that could possibly keep any of the objects alive, in the hope that this would solve the issue, and have ensured that each is instantiated anew each time the New Recording area is displayed; however, I still get no audio playback after the initial recording session.
I would be happy to provide more code if necessary. (This is using MonoTouch 6.0.6)
After further investigation, I determined that the issue was caused by the audio session, since both recording and playback were occurring within the same controller.
The two solutions I found are as follows:
Solution 1 (AudioSessionCategory.PlayAndRecord)
//A single declaration of this will allow both AVAudioRecorders and AVPlayers
//to perform alongside each other.
AudioSession.Category = AudioSessionCategory.PlayAndRecord;
//Upon noticing very quiet playback, I added this second line, which allowed
//playback to come through the main phone speaker
AudioSession.OverrideCategoryDefaultToSpeaker = true;
Solution 2 (AudioSessionCategory.RecordAudio & AudioSessionCategory.MediaPlayback)
void YourRecordingMethod()
{
    //This sets the session to record audio explicitly
    AudioSession.Category = AudioSessionCategory.RecordAudio;
    MyRecorder.Record();
}

void YourPlaybackMethod()
{
    //This sets the session for playback only
    AudioSession.Category = AudioSessionCategory.MediaPlayback;
    YourAudioPlayer.Play();
}
For some additional information on usage of the AudioSession, visit Apple's AudioSession Development Area.