I am trying to build a very simple audio effects chain using Core Audio for iOS. So far I have implemented an EQ → Compression → Limiter chain, which works perfectly fine in the simulator. On device, however, the application crashes when connecting nodes to the AVAudioEngine, due to an apparent mismatch between the input and output hardware formats:
'com.apple.coreaudio.avfaudio', reason: 'required condition is false:
IsFormatSampleRateAndChannelCountValid(outputHWFormat)'
Taking a basic example, my audio graph is as follows:
Mic -> Limiter -> Main Mixer (and Output)
and the graph is populated using
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: engine.inputNode!.outputFormatForBus(0))
which crashes with the above exception. If I instead use the limiter's format when connecting to the mixer
engine.connect(engine.inputNode!, to: limiter, format: engine.inputNode!.outputFormatForBus(0))
engine.connect(limiter, to: engine.mainMixerNode, format: limiter.outputFormatForBus(0))
the application crashes with a kAudioUnitErr_FormatNotSupported error:
'com.apple.coreaudio.avfaudio', reason: 'error -10868'
Before connecting the audio nodes in the engine, the inputNode has 1 channel and a sample rate of 44,100 Hz, while the outputNode has 0 channels and a sample rate of 0 Hz (deduced using the outputFormatForBus(0) method). But this could simply be because no node is connected to the output mixer yet? Setting the preferred sample rate on the AVAudioSession made no difference.
Is there something that I am missing here? I have microphone access (verified using AVAudioSession.sharedInstance().recordPermission()), and I have set the AVAudioSession category to record (AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryRecord)).
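For reference, the session setup amounts to roughly the following (a sketch; the exact preferred sample rate value is an assumption):
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryRecord)
    try session.setPreferredSampleRate(44100) // made no noticeable difference
    try session.setActive(true)
} catch {
    // handle/log the session error
}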
The limiter is an AVAudioUnitEffect initialized as follows:
let limiter = AVAudioUnitEffect(audioComponentDescription:
    AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: kAudioUnitSubType_PeakLimiter,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0))
engine.attachNode(limiter)
and engine is a class-level variable:
var engine = AVAudioEngine()
As I said, this works perfectly fine in the simulator (on the Mac's default hardware), but consistently crashes on various iPads running iOS 8 and iOS 9. I do have a very basic example working which simply feeds the mic input, via a player, to the output mixer:
do {
    file = try AVAudioFile(forWriting: NSURL.URLToDocumentsFolderForName(name: "test", withType: "caf")!, settings: engine.inputNode!.outputFormatForBus(0).settings)
} catch {}
engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
Here the inputNode has 1 channel and a 44,100 Hz sample rate, while the outputNode has 2 channels and a 44,100 Hz sample rate, yet no mismatch occurs. So the issue must lie in the way the AVAudioUnitEffect is connected to the output mixer.
Any help would be greatly appreciated.
This depends on some factors outside of the code you've shared, but it's possible you're using the wrong AVAudioSession category.
I ran into the same issue under slightly different circumstances: with AVAudioSessionCategoryRecord as the AVAudioSession category, attempting to connect an audio tap produced the same error, and my AVAudioEngine's inputNode reported an outputFormat with a 0.0 sample rate.
After changing the category to AVAudioSessionCategoryPlayAndRecord, I got the expected 44,100 Hz sample rate and the issue was resolved.
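In code, that amounts to something like this (a sketch using the same Swift 2-era API as the question; set the category before attaching or connecting any engine nodes):
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setActive(true)
} catch {
    // handle/log the session error
}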
Related
Problem
I'm trying to record microphone data using AVAudioEngine. This is my setup.
First I get the singleton session, set the session's category, and activate it.
let audioSession = AVAudioSession.sharedInstance()
do {
try audioSession.setCategory(.record)
try audioSession.setActive(true)
} catch {
...
}
After that I create the input format for bus 0, connect the input and output nodes, and install a tap on the input node (I have also tried tapping the output node):
let inputFormat = self.audioEngine.inputNode.inputFormat(forBus: 0)
self.audioEngine.connect(self.audioEngine.inputNode, to: self.audioEngine.outputNode, format: inputFormat)
self.audioEngine.outputNode.installTap(onBus: 0, bufferSize: bufferSize, format: inputFormat) { (buffer, time) in
let theLength = Int(buffer.frameLength)
var samplesAsDoubles:[Double] = []
for i in 0 ..< theLength
{
let theSample = Double((buffer.floatChannelData?.pointee[i])!)
samplesAsDoubles.append( theSample )
}
...
}
All of the above is in one function. I also have another function called startRecording which contains the following.
do {
try audioEngine.start()
} catch {
...
}
I have also verified that microphone permission is granted.
The start method fails with this error:
The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error -10875.)
Questions
Based on the documentation, there are three possible causes:
- There’s a problem in the structure of the graph, such as the input can’t route to an output or to a recording tap through converter nodes.
- An AVAudioSession error occurs.
- The driver fails to start the hardware.
I don't believe it's the session, because I set it up according to the documentation.
If the driver fails, how would I detect and handle that?
If the graph is set up incorrectly, where did I mess it up?
If you want to record the microphone, attach the tap to the inputNode, not the outputNode. Taps observe the output of a node. There is no "output" of the outputNode. It's the sink.
If you need to install a tap "immediately before the outputNode" (which might not be the input node in a more complicated graph), you can insert a mixer between the input(s) and the output and attach the tap to the mixer.
Also, make sure you really want to connect the input node to the output node here. There's no need to do that unless you want to play the audio back live. If you just want to record the microphone, you can attach a tap to the input node without building any more of the graph, as in the sketch below. (This also might be causing part of your problem, since you set your category to .record, i.e. no playback. I don't know that wiring something to the output in that case causes an exception, but it definitely doesn't make sense.)
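A minimal sketch of that record-only approach (the 4096 buffer size is arbitrary, and audioEngine is assumed to be your existing engine):
let inputNode = audioEngine.inputNode
let inputFormat = inputNode.inputFormat(forBus: 0)
// Tap the input node directly; no further connections are needed
// if all you want is the raw microphone buffers.
inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    // read samples from buffer.floatChannelData here
}
do {
    try audioEngine.start()
} catch {
    // the engine failed to start
}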
I've implemented the audio EQ via AVAudioEngine and AVAudioPlayerNode, and it works fine (I've tried both scheduling a buffer and scheduling a file). However, once the app goes to the background, the sound just fades away. The background mode is correctly set, as is the audio session (I've verified this by playing music with AVPlayer and then going to the background). No audio engine notifications are received.
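For reference, the session and background setup is along these lines (a sketch; the exact category, mode, and options are assumptions, as is the "audio" entry in UIBackgroundModes):
// Assumes UIBackgroundModes contains "audio" in Info.plist.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
} catch {
    // handle/log the session error
}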
Here's the code for initializing the engine:
let x = CrbnPlayerEQ()
let eq = AVAudioUnitEQ(numberOfBands: 8)
x.audioUnitEQ = eq
x.audioUnitEQ?.globalGain = 0
x.audioEngine.attach(eq)
x.audioEngine.attach(CrbnPlayer.shared.player.nodePlayer)
let mixer = x.audioEngine.mainMixerNode
x.audioEngine.connect(CrbnPlayer.shared.player.nodePlayer, to: eq, format: mixer.outputFormat(forBus: 0))
x.audioEngine.connect(eq, to: mixer, format: mixer.outputFormat(forBus: 0))
try? x.audioEngine.start()
And here's the play part for the AVAudioPlayerNode:
CrbnPlayerEQ.shared.audioEngine.prepare()
try? CrbnPlayerEQ.shared.audioEngine.start()
self.nodePlayer.stop()
self.nodePlayer.scheduleFile(audioFile, at: nil) {
}
The result is the same when I use scheduleBuffer instead of scheduleFile. I've tried changing playback modes and audio session options, but none of that helped. I've also tried stopping and starting the audio session when the app goes to the background.
One solution would be to switch to the AVPlayer once the app goes to background but then I'd lose the EQ.
Does anyone know how to ensure the buffer keeps playing even after the app goes to background?
I am facing the following issue and hoping someone else has encountered it and can offer a solution:
I am using AVAudioEngine to access the microphone. Until iOS 12.4, every time the audio route changed I was able to restart the AVAudioEngine graph to reconfigure it and ensure the input/output audio formats fit the new input/output route. Due to changes introduced in iOS 12.4 it is no longer possible to start (or restart for that matter) an AVAudioEngine graph while the app is backgrounded (unless it's after an interruption).
The error Apple now throws when I attempt this is:
2019-10-03 18:34:25.702143+0200 [1703:129720] [aurioc] 1590: AUIOClient_StartIO failed (561145187)
2019-10-03 18:34:25.702528+0200 [1703:129720] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1544:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
2019-10-03 18:34:25.711668+0200 [1703:129720] [Error] Unable to start audio engine The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
I'm guessing Apple closed a security vulnerability there. So I removed the code that restarts the graph when the audio route changes (e.g. Bluetooth headphones are connected).
It seems that when an I/O audio format changes (as happens when the user connects a Bluetooth device), an .AVAudioEngineConfigurationChange notification is fired, to allow the integrating app to react to the change in format. This is really what I should have used to handle changes in I/O formats from the beginning, instead of brute-force restarting the graph. According to the Apple documentation: “When the audio engine’s I/O unit observes a change to the audio input or output hardware’s channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification.” (see the docs here). When this happens while the app is backgrounded, I am unable to start the audio engine with the correct audio I/O formats, because of the background-start restriction described above.
So bottom line, it looks like by closing a security vulnerability, Apple introduced a bug in reacting to audio I/O format changes while the app is backgrounded. Or am I missing something?
I'm attaching a code snippet to better describe the issue. For a plug-and-play AppDelegate see here - https://gist.github.com/nevosegal/5669ae8fb6f3fba44505543e43b5d54b.
class RCAudioEngine {
private let audioEngine = AVAudioEngine()
init() {
setup()
start()
NotificationCenter.default.addObserver(self, selector: #selector(handleConfigurationChange), name: .AVAudioEngineConfigurationChange, object: nil)
}
@objc func handleConfigurationChange() {
//attempt to call start()
//or to audioEngine.reset(), setup() and start()
//or any other combination that involves starting the audioEngine
//results in an error 561145187.
//Not calling start() doesn't return this error, but also doesn't restart
//the recording.
}
public func setup() {
//Setup nodes
let inputNode = audioEngine.inputNode
let inputFormat = inputNode.inputFormat(forBus: 0)
let mainMixerNode = audioEngine.mainMixerNode
//Mute output to avoid feedback
mainMixerNode.outputVolume = 0.0
inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { (buffer, _) -> Void in
//Do audio conversion and use buffers
}
}
public func start() {
RCLog.debug("Starting audio engine")
guard !audioEngine.isRunning else {
RCLog.debug("Audio Engine is already running")
return
}
do {
audioEngine.prepare()
try audioEngine.start()
} catch {
RCLog.error("Unable to start audio engine \(error.localizedDescription)")
}
}
}
I can only see one fix that went into iOS 12.4, and I am not sure whether it causes this issue.
From the release notes: https://developer.apple.com/documentation/ios_ipados_release_notes/ios_12_4_release_notes
"Resolved an issue where running an app in iOS 12.2 or later under the Leaks instrument resulted in random numbers of false-positive leaks for every leak check after the first one in a given run. You might still encounter this issue in Simulator, or in macOS apps when using Instruments from Xcode 10.2 and later. (48549361)"
You can raise an issue with Apple if you are a registered developer. They might help you if the defect is on their side.
You can also test with the upcoming iOS release (via the Apple beta program) to check whether your code works there.
I'm using the latest version of AudioKit, 4.8. I have set up a simple audio chain to record some audio from the microphone, and it works great on the simulator, but when I switch to a physical device, nothing is recorded and it keeps printing this in the console:
AKNodeRecorder.swift:process(buffer:time:):137:Write failed: error -> Cannot complete process.(“com.apple.coreaudio.avfaudio”Error -50.)
So nothing is recorded. -50 is the generic bad-parameter error, but I can't find what I'm missing. The audio session is set to playAndRecord and I did request microphone permission. If I don't use AKNodeRecorder but add my own tap instead, the same error still shows up when I call AKAudioFile.write(from:).
Here's my code:
guard let file = try? AKAudioFile(forWriting: destinationURL, settings: [AVFormatIDKey: kAudioFormatMPEG4AAC, AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue]) else {
return
}
microphone?.start()
let recorder = try? AKNodeRecorder(node: microphone, file: file)
try? recorder?.record()
What should I do?
P.S. Before I upgraded to AudioKit 4.8 I was using 4.7, and instead of giving me error -50 it simply crashed when I began recording on a physical device, with the exception below. Everything is fine on the simulator.
Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: '[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10865 "(null)"'
I am attempting to use an AVAudioEnvironmentNode to produce 3D spatialized sound for a game I'm working on. The AVAudioEnvironmentNode documentation states, "It is important to note that only inputs with a mono channel connection format to the environment node are spatialized. If the input is stereo, the audio is passed through without being spatialized. Currently inputs with connection formats of more than 2 channels are not supported." I have indeed found this to be the case. When I load audio buffers with two channels into an AVAudioPlayerNode and connect the node to an AVAudioEnvironmentNode, the output sound is not spatialized. My question is: how can I send mono data to the AVAudioEnvironmentNode?
I've tried creating a mono .wav file using Audacity as well as loading an AVAudioPCMBuffer with sine wave data programmatically. Either way, when I create a single-channel audio buffer and attempt to load it into an AVAudioPlayerNode, my program crashes with the following error:
2016-02-17 06:36:07.695 Test Scene[1577:751557] 06:36:07.694 ERROR: [0x1a1691000] AVAudioPlayerNode.mm:423: ScheduleBuffer: required condition is false: _outputFormat.channelCount == buffer.format.channelCount
2016-02-17 06:36:07.698 Test Scene[1577:751557] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
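For context, the programmatic mono buffer was generated roughly like this (a sketch; the 440 Hz frequency, one-second length, and 44,100 Hz rate are assumptions):
import AVFoundation
// Builds one second of a 440 Hz sine wave in a single-channel buffer.
func makeMonoSineBuffer(sampleRate: Double = 44_100.0) -> AVAudioPCMBuffer? {
    guard let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(sampleRate)) else {
        return nil
    }
    buffer.frameLength = buffer.frameCapacity
    let samples = buffer.floatChannelData![0]
    for frame in 0..<Int(buffer.frameLength) {
        samples[frame] = Float(sin(2.0 * Double.pi * 440.0 * Double(frame) / sampleRate))
    }
    return buffer
}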
Checking the AVAudioPlayerNode output bus does indeed reveal that it expects 2 channels. It's unclear to me how this can be changed, or even if it should be.
I'll admit that I have very little experience working with AVFoundation or audio data in general. Any help you can provide would be greatly appreciated.
I hope you have solved this already, but for anyone else:
When connecting your AVAudioPlayerNode to an AVAudioMixerNode or any other node that it can connect to, you need to specify the number of channels there:
audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: AVAudioFormat.init(standardFormatWithSampleRate: 96000, channels: 1))
You can check your file's sample rate (96,000 Hz in this example) in Audacity, or with 'Get Info' in Finder.
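If you'd rather not hardcode the rate, one option is to derive it from the file itself. A sketch, assuming audioFile is the AVAudioFile you schedule on the player and environmentNode is your AVAudioEnvironmentNode:
// Mono connection format taken from the file's own sample rate.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: audioFile.processingFormat.sampleRate,
                               channels: 1)
audioEngine.connect(playerNode, to: environmentNode, format: monoFormat)
audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode,
                    format: environmentNode.outputFormat(forBus: 0))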