I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.
Expected Behaviour
When switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.
Actual Behaviour
When switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through anymore. Querying engine.isRunning still returns true.
When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false.
Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help, either. The only thing that reliably helps is discarding the whole engine and creating a new one.
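For illustration, here is roughly what that last workaround looks like (a sketch only; rebuildEngine() is a placeholder for whatever tears down the old engine and recreates the graph):

// Sketch: rebuild the engine whenever the route changes.
// rebuildEngine() is a hypothetical helper that discards the old AVAudioEngine
// and creates a fresh one, since stop()/reset() alone did not help.
let routeChangeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { notification in
    guard
        let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue)
    else { return }

    switch reason {
    case .newDeviceAvailable, .oldDeviceUnavailable:
        rebuildEngine()
    default:
        break
    }
}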
OS
This is on iOS 14, beta 5. I can't test this on previous versions I'm afraid; I only have one device around.
Code to Reproduce
Here is a minimal code example. Create a simple app project in Xcode (it doesn't matter whether you choose SwiftUI or Storyboard) and give it permission to access the microphone in Info.plist. Then create the following file, Conductor.swift:
import AVFoundation

class Conductor {

    static let shared: Conductor = Conductor()

    private let _engine = AVAudioEngine()

    init() {
        // Configure the session before touching the engine.
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        // Route the microphone input straight to the output.
        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() {
        try? _engine.start()
    }
}
And in AppDelegate, call:
Conductor.shared.start()
This example will route the input straight to the output. If you don't have headphones, it will trigger a feedback loop.
Question
What am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
Related
I'm working on an iOS/Flutter application and am trying to work out whether it's possible to play audio from the Music library on iOS with audio modifications (e.g. equalization settings) applied.
It seems like I need a solution that works with MPMusicPlayerController, since that appears to be the standard way to play local audio from the user's iOS Music library. I can find examples of applying EQ to audio on iOS (e.g. using AVAudioUnitEQ and AVAudioEngine: SO link, tutorial), but I'm unable to find any resources that help me understand whether it's possible to bridge the gap between MPMusicPlayerController and AVAudioEngine.
Flutter-specific context:
There are Flutter plugins that provide some of the functionality I'm looking for, but they don't appear to work together. For example, the just_audio plugin has a robust set of features for modifying audio, but does not work with the local Music application on iOS/MPMusicPlayerController. Other plugins that do work with MPMusicPlayerController, like playify, do not have the ability to modify/transform the audio.
Even though I'm working with Flutter, any general advice on the iOS side would be very helpful. I appreciate any insight someone with more knowledge may be able to share with me!
Updating with my own answer here for future people: It looks like my only path forward (for now) is leaning into AVAudioEngine directly. This is the rough POC that worked for me:
import AVFoundation
import MediaPlayer

var audioPlayer = AVAudioPlayerNode()
var audioEngine = AVAudioEngine()
var eq = AVAudioUnitEQ()

let mediaItemCollection: [MPMediaItem] = MPMediaQuery.songs().items!
let song = mediaItemCollection[0]

do {
    let file = try AVAudioFile(forReading: song.assetURL!)

    // Build the graph: player -> EQ -> output.
    audioEngine.attach(audioPlayer)
    audioEngine.attach(eq)
    audioEngine.connect(audioPlayer, to: eq, format: nil)
    audioEngine.connect(eq, to: audioEngine.outputNode, format: file.processingFormat)

    audioPlayer.scheduleFile(file, at: nil)

    try audioEngine.start()
    audioPlayer.play()
} catch {
    // Handle file-read or engine-start errors here.
}
The trickiest part for me was working out how to bridge the "Music library/MPMediaItem" world to the "AVAudioEngine" world -- which turned out to be just AVAudioFile(forReading: song.assetURL!).
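One caveat for anyone copying this: as far as I know, assetURL is nil for DRM-protected (Apple Music) items and for tracks that aren't downloaded to the device, so a safer version of that bridge avoids the force-unwraps. A rough sketch (the audioFile(for:) helper is just for illustration):

import MediaPlayer
import AVFoundation

// Safer bridge from MPMediaItem to AVAudioFile: skip items without an assetURL
// instead of force-unwrapping.
func audioFile(for item: MPMediaItem) -> AVAudioFile? {
    guard let url = item.assetURL else { return nil }  // nil for DRM/cloud-only items
    return try? AVAudioFile(forReading: url)
}

let playableSongs = (MPMediaQuery.songs().items ?? []).filter { $0.assetURL != nil }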
The app I'm writing contains two parts:
An audio player that plays stereo MP3 files
Video conferencing using WebRTC
Each part works perfectly in isolation, but the moment I try them together, one of two things happens:
The video conference audio fades out and we just hear the audio files (in stereo)
We get audio output from both, but the audio files are played in mono, coming out of both ears equally
My digging has taken me down a few routes:
https://developer.apple.com/forums/thread/90503
&
https://github.com/twilio/twilio-video-ios/issues/77
These suggest that the issue could be with the audio session category, mode, or options.
However, I've tried lots of combinations and am struggling to get anything working as intended.
Does anyone have a better understanding of the audio options and can point me in the right direction?
My most recent combination
class BBAudioClass {

    static private var audioCategory: AVAudioSession.Category = .playAndRecord
    static private var audioCategoryOptions: AVAudioSession.CategoryOptions = [
        .mixWithOthers,
        .allowBluetooth,
        .allowAirPlay,
        .allowBluetoothA2DP
    ]
    static private var audioMode = AVAudioSession.Mode.default

    static func setCategory() {
        do {
            let audioSession: AVAudioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(
                BBAudioClass.audioCategory,
                mode: BBAudioClass.audioMode,
                options: BBAudioClass.audioCategoryOptions
            )
        } catch {
            // Category could not be set; error intentionally ignored here.
        }
    }
}
Update
I managed to get everything working as I wanted by:
Starting the audio session
Connecting to the video conference (at this point all audio is mono)
Forcing all output to the speaker
Forcing output back to the headphones
Obviously this is a crazy thing to have to do, but it does prove that it should work.
It would be great if anyone knew WHY this works, so that I can actually get things working properly the first time without going through all these hacky steps.
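For reference, steps 3 and 4 boil down to the output-port override below (a rough sketch of what I mean by "forcing" the output; error handling omitted):

// Force the route to the built-in speaker, then release the override so audio
// goes back to the preferred route (e.g. the headphones).
let session = AVAudioSession.sharedInstance()
try? session.overrideOutputAudioPort(.speaker)  // step 3: force speaker output
try? session.overrideOutputAudioPort(.none)     // step 4: back to headphones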
I am currently developing an application with SwiftUI.
There is a function that plays a sound effect when you tap on it.
When I tested it on the actual device, the Spotify music playing in the background stopped. Is it possible to use AVFoundation to play sound effects without stopping the music? Also, if there is a better way to implement this, please help me.
import Foundation
import AVFoundation

var audioPlayer: AVAudioPlayer?

func playSound(sound: String, type: String) {
    if let path = Bundle.main.path(forResource: sound, ofType: type) {
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
            audioPlayer?.play()
        } catch {
            print("ERROR: Could not find and play the sound file.")
        }
    }
}
Set your AVAudioSession category options to .mixWithOthers.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
} catch {
    print("Failed to set audio session category.")
}
try? audioSession.setActive(true) // When you're ready to play something
ambient indicates that sound is not critical to this application. The default is soloAmbient, which is non-mixable. (It's possible you can just set the category to ambient here and you'll get mixWithOthers for free; I don't use that category often, so I don't remember how much you get by default.) If sound is critical, see the .playback category.
As a rule, you set the category once in the app, and then set the session active or inactive as you need it. If I remember correctly, AVAudioPlayer will automatically activate the session, so you may not need to do that (I typically work at much lower levels, and don't always remember what the high-level tools do automatically). It is legal to change your category, however, if different features of your app have different needs.
For more details, see AVAudioSession.Category and AVAudioSession. There are many options in sound playback, and many of them are not obvious (and a few have very confusing names, I'm looking at you .allowBluetooth), so it's worth reading through the docs.
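To make the activate/deactivate pattern concrete, here is a small sketch (illustrative only; as noted above, AVAudioPlayer may activate the session for you, and passing .notifyOthersOnDeactivation lets other apps know they can resume):

let session = AVAudioSession.sharedInstance()
try? session.setActive(true)                    // before you play the effect
// ... play the sound effect ...
try? session.setActive(false, options: .notifyOthersOnDeactivation)  // when finished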
I'm currently trying to have my device record audio for a capture session through the device mic while playing audio output on a Bluetooth device (AirPods).
The reason I am doing this is that with Bluetooth headphones, and especially AirPods, the playback quality is horrible when the Bluetooth mic is active.
I tried using setPreferredInput, but it changes both input and output. Here's what I have so far:
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth, .mixWithOthers])
    print(session.currentRoute.outputs)
    try session.setAllowHapticsAndSystemSoundsDuringRecording(true)
    try session.setActive(true, options: .notifyOthersOnDeactivation)

    if let mic = session.availableInputs?.first(where: { $0.portType == AVAudioSession.Port.builtInMic }) {
        try session.setPreferredInput(mic)
    }
} catch let err {
    print("Audio session err", err.localizedDescription)
}
Also, I saw an old Audio Session API that could have helped (kAudioSessionProperty_OverrideCategoryEnableBluetoothInput), but it seems to be long deprecated now.
There are other apps on the App Store that seem to have achieved the split recording so it seems to be possible.
Get rid of allowBluetooth and use allowBluetoothA2DP. You also don't want defaultToSpeaker here.
"Allow Bluetooth" actually means "prefer HFP" which is why the audio is so bad. HFP is a low-bandwidth bidirectional protocol used generally for phone calls. The enum name is very confusing IMO. People get confused about it all the time.
A2DP is a high-bandwidth unidirectional protocol (it doesn't support a microphone). When you request that, the headset's microphone will be disabled, and you'll get the iPhone's microphone by default (provided there isn't some other wired microphone available, but that's very unlikely).
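Applied to the snippet in the question, that would look roughly like this (a sketch that keeps the question's other options; adjust to your needs):

do {
    let session = AVAudioSession.sharedInstance()
    // A2DP output to the AirPods, built-in mic as the default input
    // (.defaultToSpeaker and .allowBluetooth removed, as discussed above).
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetoothA2DP, .mixWithOthers])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("Audio session error:", error.localizedDescription)
}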
Note that this is NOT a duplicate of this SO post, because that post only says WHAT method to use; there's no example of HOW to use it.
So, I have dug into AKOfflineRenderNode as much as I can and have looked at every example I could find. However, my code never seems to work correctly on iOS 10.3.1 devices (and other iOS 10 versions): the result is always silent. I tried to follow the examples provided in other SO posts, but with no success. I tried to follow the approach in SongProcessor, but it uses an older version of Swift and I can't even compile it. Trying SongProcessor's way of using AKOfflineRenderNode didn't help either; it always turned out silent.
I created a demo project just to test this. Because I don't own the audio file I used for testing, I couldn't upload it to my GitHub. Please add an audio file named "Test" to the project before compiling it for an iOS 10.3.1 simulator. (And if your file isn't an m4a, remember to change the file type in the code where I initialize AKPlayer.)
If you don't want to download and run the sample, the essential part is here:
@IBAction func export() {
    // url, player, offlineRenderer and others are predefined and connected as
    // player >> aPitchShifter >> offlineRenderer.
    // AudioKit.output is already offlineRenderer.
    offlineRenderer.internalRenderEnabled = false
    try! AudioKit.start()

    // I also tried using AKAudioPlayer instead of AKPlayer.
    // Also tried getting the start time in these ways:
    //   AVAudioTime.secondsToAudioTime(hostTime: 0, time: 0)
    //   player.audioTime(at: 0)
    // And for hostTime I've tried 0 as well as mach_absolute_time().
    // None worked.
    let time = AVAudioTime(sampleTime: 0, atRate: offlineRenderer.avAudioNode.inputFormat(forBus: 0).sampleRate)

    player.play(at: time)
    try! offlineRenderer.renderToURL(url, duration: player.duration)
    player.stop()
    player.disconnectOutput()

    offlineRenderer.internalRenderEnabled = true
    try? AudioKit.stop()
}