Play sound effects without stopping music playing in the background from another app - iOS

I am currently developing an application with SwiftUI.
It has a function that plays a sound effect when you tap a button.
When I tested it on an actual device, the Spotify music playing in the background stopped. Is it possible to use AVFoundation to play sound effects without stopping the music? Also, if there is a better way to implement this, please let me know.
import Foundation
import AVFoundation

var audioPlayer: AVAudioPlayer?

func playSound(sound: String, type: String) {
    if let path = Bundle.main.path(forResource: sound, ofType: type) {
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
            audioPlayer?.play()
        } catch {
            print("ERROR: Could not find and play the sound file.")
        }
    }
}

Set your AVAudioSession category options to include .mixWithOthers:
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
} catch {
    print("Failed to set audio session category.")
}
try? audioSession.setActive(true) // When you're ready to play something
ambient indicates that sound is not critical to this application. The default is soloAmbient, which is non-mixable. (It's possible you can just set the category to ambient and you'll get mixWithOthers for free; I don't use that mode often, so I don't remember how much you get by default.) If sound is critical to your app, look at the .playback category instead.
As a rule, you set the category once in the app, and then set the session active or inactive as you need it. If I remember correctly, AVAudioPlayer will automatically activate the session, so you may not need to do that (I typically work at much lower levels, and don't always remember what the high-level tools do automatically). It is legal to change your category, however, if different features of your app have different needs.
For more details, see AVAudioSession.Category and AVAudioSession. There are many options in sound playback, and many of them are not obvious (and a few have very confusing names, I'm looking at you .allowBluetooth), so it's worth reading through the docs.
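To tie this back to the question's playSound function, here is a minimal sketch of how the pieces might fit together (assuming the category is configured once at launch; the effect file name below is a placeholder):

import AVFoundation

// Minimal sketch: configure the session once (e.g. at app launch) so sound
// effects mix with other apps' audio instead of interrupting it.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .ambient is mixable by default; .mixWithOthers is spelled out for clarity.
        try session.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
    } catch {
        print("Failed to configure audio session: \(error)")
    }
}

// Later, in the tap handler, just call the question's playSound function.
// AVAudioPlayer should activate the session for you (see the note above),
// and the Spotify stream keeps playing underneath.
// playSound(sound: "tap", type: "mp3")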

Related

Audio Recording is not working during Screen-share with Zoom or other app

I am trying to record voice with AVAudioRecorder. It works fine when screen sharing is not enabled, but when I share my device screen with Zoom or any other app, the AVAudioSession is not active.
Here is the code I added for audio recording:
UIApplication.shared.beginReceivingRemoteControlEvents()
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
    try session.setActive(true)

    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    audioRecorder = try AVAudioRecorder(url: getFileUrl(), settings: settings)
    audioRecorder.delegate = self
    audioRecorder.isMeteringEnabled = true
    audioRecorder.prepareToRecord()
    self.nextBtn.isHidden = true
} catch let error {
    print("Error \(error)")
}
When I hit the record button, it shows the error NSOSStatusErrorDomain Code=561017449 "Session activation failed".
Here is a video:
https://share.icloud.com/photos/0a09o5DCNip6Rx_GnTpht7K3A
I don't have the reputation to comment or I would. (Almost there lol!) Have you tried AVAudioSession.CategoryOptions.overrideMutedMicrophoneInterruption?
Edit
The more I looked into this, the more it seems that if Zoom is using the hardware, the iPhone won't be able to record that stream. I think that's the idea behind AVAudioSession.sharedInstance() being a singleton.
From the docs:
Type Property
overrideMutedMicrophoneInterruption
An option that indicates whether the system interrupts the audio session when it mutes the built-in microphone.
Declaration
static var overrideMutedMicrophoneInterruption: AVAudioSession.CategoryOptions { get }
Discussion
Some devices include a privacy feature that mutes the built-in
microphone at the hardware level in certain conditions, such as when
you close the Smart Folio cover of an iPad. When this occurs, the
system interrupts the audio session that’s capturing input from the
microphone. Attempting to start audio input after the system mutes the
microphone results in an error. If your app uses an audio session
category that supports input and output, such as playAndRecord, you
can set this option to disable the default behavior and continue using
the session. Disabling the default behavior may be useful to allow
your app to continue playback when recording or monitoring a muted
microphone doesn’t lead to a poor user experience. When you set this
option, playback continues as normal, and the microphone hardware
produces sample buffers, but with values of 0.
Important
Attempting to use this option with a session category that doesn’t
support audio input results in an error.
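If you want to experiment with that option, here is a hedged sketch (assumptions: the option exists only on iOS 14.5 and later and only applies to input-capable categories such as .playAndRecord; whether it actually helps with the Zoom screen-share case is not guaranteed):

import AVFoundation

// Sketch: opt out of the automatic interruption when the system mutes the mic.
// Falls back to the plain category on older iOS versions where the option is missing.
func configureRecordingSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        if #available(iOS 14.5, *) {
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .overrideMutedMicrophoneInterruption])
        } else {
            try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        }
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}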

AVAudioEngine stops running when changing input to AirPods

I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.
Expected Behaviour
When switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.
Actual Behaviour
When switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through anymore. Querying engine.isRunning still returns true.
When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false.
Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help, either. The only thing that reliably helps is discarding the whole engine and creating a new one.
OS
This is on iOS 14, beta 5. I can't test this on previous versions I'm afraid; I only have one device around.
Code to Reproduce
Here is a minimum code example. Create a simple app project in Xcode (doesn't matter whether you choose SwiftUI or Storyboard), and give it permissions to access the microphone in Info.plist. Create the following file Conductor.swift:
import AVFoundation

class Conductor {
    static let shared: Conductor = Conductor()
    private let _engine = AVAudioEngine()

    init() {
        // Session
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        // Engine: route the input straight to the main mixer.
        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() { try? _engine.start() }
}
And in AppDelegate, call:
Conductor.shared.start()
This example will route the input straight to the output. If you don't have headphones, it will trigger a feedback loop.
Question
What am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
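(No definitive answer here, but for reference, a sketch of the workaround the question already describes: listening for route changes and discarding the engine. The observer wiring is illustrative; the core of it is simply rebuilding AVAudioEngine.)

import AVFoundation

final class Conductor {
    static let shared = Conductor()
    private var engine = AVAudioEngine()
    private var observer: NSObjectProtocol?

    init() {
        configure(engine)
        // Workaround from the question: rebuild the whole engine whenever the route changes.
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.rebuildEngine()
        }
    }

    private func configure(_ engine: AVAudioEngine) {
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: nil)
        engine.prepare()
    }

    func start() { try? engine.start() }

    private func rebuildEngine() {
        engine.stop()
        engine = AVAudioEngine()
        configure(engine)
        try? engine.start()
    }
}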

iOS: How to play audio without fps drops?

I am in the process of developing a game for iOS 9+ using Sprite Kit and preferably using Swift libraries.
Currently I'm using a Singleton where I preload my audio files, each connected to a separate instance of AVAudioPlayer.
Here's a short code-snipped to get the idea:
import SpriteKit
import AudioToolbox
import AVFoundation

class AudioEngine {
    static let sharedInstance = AudioEngine()

    internal var sfxPing: AVAudioPlayer

    private init() {
        self.sfxPing = AVAudioPlayer()

        if let path = NSBundle.mainBundle().pathForResource("ping", ofType: "m4a") {
            do {
                let url = NSURL(fileURLWithPath: path)
                sfxPing = try AVAudioPlayer(contentsOfURL: url)
                sfxPing.prepareToPlay()
            } catch {
                print("ERROR: Can't load ping.m4a audio file.")
            }
        }
    }
}
This Singleton is initialised during app start-up. In the game-loop I then just call the following line to play a specific audio file:
AudioEngine.sharedInstance.sfxPing.play()
This basically works, but I always get glitches when a file is played and the frame rate drops from 60.0 to 56.0 on my iPad Air.
Does anyone have an idea how to fix this performance issue with AVAudioPlayer?
I also looked at third-party libraries, namely:
AudioKit [Looks very heavy-weighted]
ObjectAL [Last Update 2013 ...]
AVAudioEngine [Based on AVAudioPlayer, same problems ?]
Requirements:
Play a lot of very short samples (like shots, hits, etc..)
Play some motor effects (thus pitching would be nice)
Play some background / ambient sound in a loop
NO nasty glitches / frame rate drops !
Could you recommend any of the above mentioned libraries for my requirements or point out the problems using the above code ?
UPDATE:
Playing short sounds with:
self.runAction(SKAction.playSoundFileNamed("sfx.caf", waitForCompletion: false))
does indeed improve the frame rate. I exported the audio files with Audacity to the .caf format (Apple's Core Audio Format). But in the tutorial, they export with "Signed 32-bit PCM" encoding, which led to distorted audio playback in my case. Using any of the other encoding options (32-bit float, U-Law, A-Law, etc.) worked fine for me.
Why use the caf format? Because it's uncompressed and thus loads into memory faster, with less CPU overhead, than compressed formats like m4a. For short sound effects played frequently this makes sense, and disk usage is barely affected since short audio files only consume a few kilobytes. For bigger audio files, like ambient and background music, compressed formats (mp3, m4a) are obviously the better choice.
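On top of that, a small sketch (current SpriteKit API names; the file name is a placeholder): preloading the SKAction once and reusing it avoids rebuilding the action on every shot.

import SpriteKit

final class GameScene: SKScene {
    // Created once and reused; repeated plays don't rebuild the action.
    private let pingSound = SKAction.playSoundFileNamed("ping.caf", waitForCompletion: false)

    func playPing() {
        run(pingSound)
    }
}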
According to your question, since you are developing a game for iOS 9+, you can use the new iOS 9 class SKAudioNode (official Apple doc):
var backgroundMusic: SKAudioNode!
For example you can add this to didMoveToView():
if let musicURL = NSBundle.mainBundle().URLForResource("music", withExtension: "m4a") {
    backgroundMusic = SKAudioNode(URL: musicURL)
    addChild(backgroundMusic)
}
You can also use it to play a simple effect:
let beep = SKAudioNode(fileNamed: "beep.wav")
beep.autoplayLooped = false
self.addChild(beep)
Finally, if you want to change the volume:
beep.runAction(SKAction.changeVolumeTo(0.4, duration: 0))
Update:
I see you have updated your question to mention AVAudioPlayer and SKAction. I've tested both of them in my iOS 8+ compatible games.
For AVAudioPlayer, I personally use a custom library I made, based on the old SKTAudio.
Looking at your AVAudioPlayer init, my code is different because I use:
#available(iOS 7.0, *)
public init(contentsOfURL url: NSURL, fileTypeHint utiString: String?)
I don't know whether fileTypeHint makes the difference, so try it and let me know how your test goes.
Advantages of your code:
With a shared-instance audio manager based on AVAudioPlayer you can control the volume, use your manager wherever you want, and keep compatibility with iOS 8.
Disadvantages of your code:
Every time you play a sound and then want to play another, the previous one is cut off, especially if you have launched background music.
How to solve it? According to this SO post, to work without issues AVFoundation seems to be limited to about four simultaneously instantiated AVAudioPlayer properties, so you can do this:
1) backgroundMusicPlayer: AVAudioPlayer!
2) soundEffectPlayer1: AVAudioPlayer!
3) soundEffectPlayer2: AVAudioPlayer!
4) soundEffectPlayer3: AVAudioPlayer!
You could build a method that cycles through the three sound-effect players to see whether each one is busy:
if player.playing
and uses the next free player. With this workaround your sounds always play correctly, and so does your background music.
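Here is a hedged sketch of that pool idea in current Swift (the SoundPool type and the effect file name are illustrative, not part of the original answer):

import AVFoundation

final class SoundPool {
    private var players: [AVAudioPlayer] = []

    init(effect name: String, ofType type: String, copies: Int = 3) {
        guard let url = Bundle.main.url(forResource: name, withExtension: type) else { return }
        // Preload a few players for the same effect so overlapping plays don't cut each other off.
        players = (0..<copies).compactMap { _ in try? AVAudioPlayer(contentsOf: url) }
        players.forEach { $0.prepareToPlay() }
    }

    func play() {
        // Use the first player that isn't busy; if all are busy, this play is skipped.
        players.first(where: { !$0.isPlaying })?.play()
    }
}

// Usage (hypothetical file name):
// let ping = SoundPool(effect: "ping", ofType: "caf")
// ping.play()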

Using the default audio output for AVAudioEngine in iOS

I'm trying to create a basic iOS application that let me record audio and then play it back in different speeds and pitches.
I'm using AVAudioPlayer to play the recorded audio either slower or faster and it works as expected. The sound is played back in my headphones.
I'm using AVAudioEngine to play the recorded audio in a different pitch and it works except that the audio is played back on my Thunderbolt display speakers, not my headphones.
I've been going through the documentation trying to understand this behavior, but have come up short. In my view controller's viewDidLoad method, I've set up the audio session as follows:
let session = AVAudioSession.sharedInstance()
session.setCategory(AVAudioSessionCategoryPlayback, error: &error)
session.setActive(true, error: &error)
Further down, I've defined a function that plays the sound using the audio engine:
let playerNode = AVAudioPlayerNode()
audioEngine.attachNode(playerNode)
let changePitchNode = AVAudioUnitTimePitch()
changePitchNode.pitch = 1000
audioEngine.attachNode(changePitchNode)
audioEngine.connect(playerNode, to: changePitchNode, format: audioFile.processingFormat)
audioEngine.connect(changePitchNode, to: audioEngine.outputNode, format: audioFile.processingFormat)
playerNode.scheduleFile(audioFile, atTime: nil, completionHandler: nil)
audioEngine.startAndReturnError(nil)
playerNode.play()
audioEngine and audioFile are globally declared in the class. The whole project can be found at https://github.com/KevinSjoberg/pitch-perfect.
Can anyone shed some light on why it insists on playing the sound through my monitor speakers instead of my headphones?
I've figured it out. The reason this happened was that I was using the Thunderbolt Display microphone to record my voice. For some unknown reason, this makes AVAudioEngine use the monitor speakers. Switching the input source to my MBP's microphone made everything work as expected.
I'm only guessing, but it does sound like a bug and so I've submitted a report to Apple.
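If you ever need to do the input switch in code rather than in the device settings, here is a hedged sketch using the iOS AVAudioSession API (it assumes an input-capable category is already set and that the built-in microphone appears among the available inputs):

import AVFoundation

// Sketch: prefer the built-in microphone so the engine doesn't follow an
// external input's route. Requires .playAndRecord or another input-capable category.
func preferBuiltInMic() {
    let session = AVAudioSession.sharedInstance()
    if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtIn)
    }
}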

Play audio through upper (phone call) speaker

I'm trying to get audio in my app to play through the upper speaker on the iPhone, the one you press to your ear during a phone call. I know it's possible, because I've played a game from the App Store ("The Heist" by "tap tap tap") that simulates phone calls and does exactly that.
I've done a lot of research online, but I'm having a surprisingly hard time finding ANYONE who has even discussed the possibility. The overwhelming majority of posts seem to be about the handsfree speaker vs plugged-in earphones, (like this and this and this), rather than the upper "phone call" speaker vs the handsfree speaker. (Part of that problem might be not having a good name for it: "phone speaker" often means the handsfree speaker at the bottom of the device, etc, so it's hard to do a really well-targeted search). I've looked into Apple's Audio Session Category Route Overrides, but those again seem to (correct me if I'm wrong) deal only with the handsfree speaker at the bottom, not the speaker at the top of the phone.
I have found ONE post that seems to be about this: link. It even provides a bunch of code, so I thought I was home free, but now I can't seem to get the code to work. For simplicity I just copied the DisableSpeakerPhone method (which if I understand it correctly should be the one to re-route audio to the upper speaker) into my viewDidLoad to see if it would work, but the first "assert" line fails, and the audio continues to play out the bottom. (I also imported the AudioToolbox Framework, as suggested in the comment, so that isn't the problem.)
Here is the main block of code I'm working with (this is what I copied into my viewDidLoad to test), although there are a few more methods in the article I linked to:
void DisableSpeakerPhone() {
    UInt32 dataSize = sizeof(CFStringRef);
    CFStringRef currentRoute = NULL;
    OSStatus result = noErr;

    AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);

    // Set the category to use the speakers and microphone.
    UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
    result = AudioSessionSetProperty(
        kAudioSessionProperty_AudioCategory,
        sizeof(sessionCategory),
        &sessionCategory
    );
    assert(result == kAudioSessionNoError);

    Float64 sampleRate = 44100.0;
    dataSize = sizeof(sampleRate);
    result = AudioSessionSetProperty(
        kAudioSessionProperty_PreferredHardwareSampleRate,
        dataSize,
        &sampleRate
    );
    assert(result == kAudioSessionNoError);

    // Default to speakerphone if a headset isn't plugged in.
    // Overriding the output audio route
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
    dataSize = sizeof(audioRouteOverride);
    AudioSessionSetProperty(
        kAudioSessionProperty_OverrideAudioRoute,
        dataSize,
        &audioRouteOverride);
    assert(result == kAudioSessionNoError);

    AudioSessionSetActive(YES);
}
So my question is this: can anyone either A) help me figure out why that code doesn't work, or B) offer a better suggestion for being able to press a button and route the audio up to the upper speaker?
PS I am getting more and more familiar with iOS programming, but this is my first foray into the world of AudioSessions and such, so details and code samples are much appreciated! Thank you for your help!
UPDATE:
From the suggestion of "He Was" (below) I've removed the code quoted above and replaced it with:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
[[AVAudioSession sharedInstance] setActive: YES error:nil];
at the beginning of viewDidLoad. It still isn't working, though, (by which I mean the audio is still coming out of the speaker at the bottom of the phone instead of the receiver at the top). Apparently the default behavior should be for AVAudioSessionCategoryPlayAndRecord to send audio out of the receiver on its own, so something is still wrong.
More specifically what I'm doing with this code is playing audio through the iPod Music Player (initialized right after the AVAudioSession lines above in viewDidLoad, for what it's worth):
_musicPlayer = [MPMusicPlayerController iPodMusicPlayer];
and the media for that iPod Music Player is chosen through an MPMediaPickerController:
- (void)mediaPicker:(MPMediaPickerController *)mediaPicker didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
    if (mediaItemCollection) {
        [_musicPlayer setQueueWithItemCollection:mediaItemCollection];
        [_musicPlayer play];
    }
    [self dismissViewControllerAnimated:YES completion:nil];
}
This all seems fairly straightforward to me, I've got no errors or warnings, and I know that the Media Picker and Music Player are working correctly because the correct songs start playing, it's just out of the wrong speaker. Could there be a "play media using this AudioSession" method or something? Or is there a way to check what audio session category is currently active, to confirm that nothing could have switched it back or something? Is there a way to emphatically tell the code to USE the receiver, rather than relying on the default to do so? I feel like I'm on the one-yard line, I just need to cross that final bit...
EDIT: I just thought of a theory, wherein it's something about the iPod Music Player that doesn't want to play out of the receiver. My reasoning: it is possible to set a song to start playing through the official iPod app and then seamlessly adjust it (pause, skip, etc.) through the app I'm developing. The continuous playback from one app to the next made me think that maybe the iPod Music Player has its own audio route settings, or maybe it doesn't stop to check the settings in the new app? Does anyone who knows what they're talking about think it could be something like that?
I was struggling with this for a while too; maybe this will help someone later. You can also use the newer methods of overriding ports. Many of the methods in your sample code are actually deprecated.
So, get your AVAudioSession shared instance:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive: YES error:nil];
The session category has to be AVAudioSessionCategoryPlayAndRecord
You can get the current output by checking this value.
AVAudioSessionPortDescription *routePort = session.currentRoute.outputs.firstObject;
NSString *portType = routePort.portType;
And now depending on the port you want to send it to, simply toggle the output using
if ([portType isEqualToString:@"Receiver"]) {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
} else {
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&error];
}
This should be a quick way to toggle the outputs to the speaker phone and receiver.
You have to initialise your audio session first.
Using the C API
AudioSessionInitialize (NULL, NULL, NULL, NULL);
In iOS6 you can use AVAudioSession methods instead (you will need to import the AVFoundation framework to use AVAudioSession):
Initialization using AVAudioSession
self.audioSession = [AVAudioSession sharedInstance];
Setting the audioSession category using AVAudioSession
[self.audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
error:nil];
For further research, if you want better search terms, here are the full names of the constants for the speakers:
const CFStringRef kAudioSessionOutputRoute_BuiltInReceiver;
const CFStringRef kAudioSessionOutputRoute_BuiltInSpeaker;
see apple's docs here
But the real mystery is why you are having any trouble routing to the receiver. It's the default behaviour for the playAndRecord category. Apple's documentation of kAudioSessionOverrideAudioRoute_None:
"Specifies, for the kAudioSessionCategory_PlayAndRecord category, that output audio should go to the receiver. This is the default output audio route for this category."
update
In your updated question you reveal that you are using the MPMusicPlayerController class. This class invokes the global music player (the same player used in the Music app). This music player is separate from your app, and so doesn't share the same audio session as your app's audioSession. Any properties you set on your app's audioSession will be ignored by the MPMusicPlayerController.
If you want control over your app's audio behaviour, you need to use an audio framework internal to your app. This would be AVAudioRecorder / AVAudioPlayer or Core Audio (Audio Queues, Audio Units or OpenAL). Whichever method you use, the audio session can be controlled either via AVAudioSession properties or via the Core Audio API. Core Audio gives you more fine-grained control, but with each new release of iOS more of it is ported over to AVFoundation, so start with that.
Also remember that the audio session provides a way for you to describe the intended behaviour of your app's audio in relation to the total iOS environment, but it will not hand you total control. Apple takes care to ensure that the user's expectations of their device's audio behaviour remain consistent between apps, and when one app needs to interrupt another's audio stream.
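To make the "use an audio framework internal to your app" point concrete, here is a hedged sketch in current Swift (the file name is a placeholder): with the playAndRecord category and no route override, playback should come out of the receiver by default, as quoted earlier.

import AVFoundation

var receiverPlayer: AVAudioPlayer?   // keep a strong reference while playing

func playThroughReceiver() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord)
        try session.setActive(true)
        // "call.m4a" is a placeholder bundled resource.
        if let url = Bundle.main.url(forResource: "call", withExtension: "m4a") {
            receiverPlayer = try AVAudioPlayer(contentsOf: url)
            receiverPlayer?.play()
        }
    } catch {
        print("Audio setup failed: \(error)")
    }
}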
update 2
In your edit you allude to the possibility of audio sessions checking other apps' audio session settings. That does not happen¹. The idea is that each app sets its preferences for its own audio behaviour using its self-contained audio session. The operating system arbitrates between conflicting audio requirements when more than one app competes for an unshareable resource, such as the internal microphone or one of the speakers, and will usually decide in favour of the behaviour that is most likely to meet the user's expectations of the device as a whole.
The MPMusicPlayerController class is slightly unusual in that it gives one app some degree of control over another. In this case, your app is not playing the audio; it is sending a request to the Music Player to play audio on your behalf. Your control is limited by the extent of the MPMusicPlayerController API. For more control, your app will have to provide its own implementation of audio playback.
In your comment you wonder:
Could there be a way to pull an MPMediaItem from the MPMusicPlayerController and then play them through the app-specific audio session, or anything like that?
That's a (big) subject for a new question. Here is a good starting read (from Chris Adamson's blog): From iPod Library to PCM Samples in Far Fewer Steps Than Were Previously Necessary - it's the sequel to From iphone media library to pcm samples in dozens of confounding and potentially lossy steps - which should give you a sense of the complexity you will face. This may have got easier since iOS 6, but I wouldn't be so sure!
¹ There is an otherAudioPlaying read-only BOOL property in iOS 6, but that's about it.
Swift 3.0 Code
// CallKit delegate callback; toggle between the receiver and the speaker.
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    let routePort: AVAudioSessionPortDescription? = audioSession.currentRoute.outputs.first
    let portType: String? = routePort?.portType
    if portType == "Receiver" {
        try? audioSession.overrideOutputAudioPort(.speaker)
    } else {
        try? audioSession.overrideOutputAudioPort(.none)
    }
}
Swift 5.0
func activateProximitySensor(isOn: Bool) {
    let device = UIDevice.current
    device.isProximityMonitoringEnabled = isOn
    if isOn {
        NotificationCenter.default.addObserver(self, selector: #selector(proximityStateDidChange), name: UIDevice.proximityStateDidChangeNotification, object: device)
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord)
            try session.setActive(true)
            try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    } else {
        NotificationCenter.default.removeObserver(self, name: UIDevice.proximityStateDidChangeNotification, object: device)
    }
}
@objc func proximityStateDidChange(notification: NSNotification) {
    if let device = notification.object as? UIDevice {
        print(device)
        let session = AVAudioSession.sharedInstance()
        do {
            let routePort: AVAudioSessionPortDescription? = session.currentRoute.outputs.first
            let portType = routePort?.portType
            if let type = portType, type.rawValue == "Receiver" {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
            } else {
                try session.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            }
        } catch {
            print("\(#file) - \(#function) error: \(error.localizedDescription)")
        }
    }
}
