CallKit - No audio when starting a call from the background - iOS

This started happening in iOS 13.3.1.
In my app I use CallKit + WebRTC to establish VoIP connections, and I have always been able to establish connections without an issue.
However, since 13.3.1 I cannot start a CallKit call if the app is not in the foreground: I manage to establish the connection, but the CallKit call isn't started (no green icon/bar at the top) and the microphone isn't picked up either.
I always get the following error:
Error requesting transaction ((
" contactIdentifier=(null) video=0 relay=0 upgrade=0 retry=0 emergency=0 isVoicemail=0 ttyType=0 localLandscapeAspectRatio={0, 0} localPortraitAspectRatio={0, 0} dateStarted=(null) localSenderIdentityUUID=(null) shouldSuppressInCallUI=0>"
)): (Error Domain=com.apple.CallKit.error.requesttransaction Code=6 "(null)")
From what I've gathered (there is almost no information about this Code 6 error), CallKit may terminate the call if the AVAudioSession isn't active. However, I don't understand what changed in 13.3.1 to affect this in the background (I have the Audio, AirPlay, and Picture in Picture / Voice over IP / Background fetch background modes enabled).
In the meantime I have tried numerous things, from activating the session myself (both before callController.request and before provider.reportOutgoingCall):
do {
    // Configure and activate the session manually before the CallKit request.
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .voiceChat, options: .mixWithOthers)
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error)
}
to playing a silent audio file (trying to force the AVAudioSession to activate), but I had no luck whatsoever.
Any suggestions?

I experienced the same thing when I implemented CallKit in my project. I tried everything with the AudioSession, but it turned out to be related to the library I was using for WebRTC and SIP: one line inside the WebRTC library checked whether the application was in the background, and if it was, it did not connect the audio. So my advice is to check the WebRTC code base, or search the code for app-state checks such as UIApplicationStateBackground or direct reads of [UIApplication sharedApplication].applicationState.
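The kind of check to look for resembles this (an illustrative sketch only, not the actual library code; connectCallAudio is a made-up name):
import UIKit

// Illustrative sketch: a guard like this inside a WebRTC/SIP stack
// silently skips audio setup whenever CallKit starts the call while
// the app is still in the background.
func connectCallAudio() {
    guard UIApplication.shared.applicationState != .background else {
        return // audio is never connected for background-started calls
    }
    // ... start the audio unit / attach the WebRTC audio track ...
}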
I hope this will help!

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSession.Category.playback, options: [.allowBluetooth])
    try session.setMode(AVAudioSession.Mode.voiceChat)
    try session.setPreferredSampleRate(44100.0)
    try session.setPreferredIOBufferDuration(0.005)
} catch {
    print("Error configuring audio session for WebRTC:", error)
}
Note: set the category to .playback. Do not use .playAndRecord for the AVAudioSession, which causes the no-audio-in-background problem.
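For reference, in a CallKit app the system activates the audio session itself and hands it to you through the provider delegate; a common pattern with the WebRTC SDK looks roughly like this (a general sketch, not this project's code; CallManager is a hypothetical class):
import CallKit
import AVFoundation

// General CallKit pattern: start and stop the call audio in the
// CXProviderDelegate audio-session callbacks rather than activating
// the session yourself.
extension CallManager: CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Clean up any ongoing calls here.
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // With the GoogleWebRTC pod this is typically:
        // RTCAudioSession.sharedInstance().audioSessionDidActivate(audioSession)
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // RTCAudioSession.sharedInstance().audioSessionDidDeactivate(audioSession)
    }
}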

Related

Does iOS Reduce Speaker Volume for Apps Using a Microphone?

I am developing an Xcode/Swift/SwiftUI app for real-time music visualization. I let the user push a button to toggle between microphone input and file-play input (but never both at the same time). My app runs fine on my Mac and on my iPad, but on my iPhone the speaker audio is only at half volume (and appears to come only from the rear speaker), even when I am in file-play mode. I have traced the problem to one offending line in my code, namely the declaration
let mic = engine.inputNode // where engine = AVAudioEngine()
When I comment out this line, the iPhone speaker level (in file-play mode) is fine. But when I un-comment it, the iPhone speaker level is barely audible. Even when I wrap this line inside a conditional if micEnabled { } construct, the sound level is fine at first; but as soon as I select the microphone and then toggle back to file-play, the volume decreases again.
I suspect that iOS detects when a microphone is declared and automatically reduces the speaker volume to avoid audio feedback. This would make sense, because nobody wants music playing while they are speaking on a telephone call. But it would also make sense to give developers a way to override this behavior if they want to handle it themselves. In my case, for microphone input, I purposely assign the audio stream zero volume after it is tapped and before it goes to the speaker, as in the sketch below.
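For illustration, a zero-volume mic tap can look like this (a simplified sketch, not the exact MuVis code; micMixer is an illustrative name):
import AVFoundation

// Tap the mic for visualization, then mute it before it reaches output.
let engine = AVAudioEngine()
let mic = engine.inputNode
let micMixer = AVAudioMixerNode()
engine.attach(micMixer)
engine.connect(mic, to: micMixer, format: mic.inputFormat(forBus: 0))
micMixer.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    // ... feed the buffer to the visualizer ...
}
micMixer.volume = 0 // zero volume so the tapped mic audio is never heard
engine.connect(micMixer, to: engine.mainMixerNode, format: nil)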
My source code is available here. All of the audio code is inside the MuVis / Shared / AudioManager.swift class.
Can anyone help me to get the file-play mode to work with full volume on my iPhone - while also allowing the user the option to select microphone-input mode?
Many thanks to Rob Napier for pointing me in the right direction for solving my problem.
As a macOS-only developer, I had ignored AVAudioSession (since it caused compiler errors on macOS). When I converted my MuVis app from macOS-only to multiplatform, I simply started a new Xcode project with the appropriate multiplatform settings and pasted my existing code into the Shared folder. After cleaning up a few errors (mostly calls to NSObject), it magically worked on all Apple platforms - except for the iPhone audio problem described in my question. After a little research and a lot of trial and error, I found that my audio-volume problem is solved by inserting the following code into my setupAudio() function:
#if os(iOS)
// For iOS devices, set the audioSession category, mode, and options:
let session = AVAudioSession.sharedInstance() // Get the singleton instance of an AVAudioSession.
do {
    if filePlayEnabled {
        // This is required by iOS to prevent output audio from going only to the iPhone's rear speaker.
        try session.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: [.defaultToSpeaker])
    } else {
        try session.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: [])
    }
} catch { print("Failed to set audioSession category.") }
#endif
Again, thank you Rob.

MPMusicPlayerController fails to play Apple Music songs

I am using an instance of MPMusicPlayerController.systemMusicPlayer to enqueue an array of store IDs. This has worked for months. Earlier today I updated to iOS 14.3, and the player now fails to play songs.
The code below is the minimal amount needed to replicate the bug:
// note: reproduce with any play method you want
let player = MPMusicPlayerController.systemMusicPlayer
var descriptor: MPMusicPlayerStoreQueueDescriptor?

func setup() {
    let storeIDs: [String] = ["lorem", "ipsum"] // fetch real IDs from the API
    descriptor = MPMusicPlayerStoreQueueDescriptor(storeIDs: storeIDs)
}

func play() {
    player.setQueue(with: descriptor!)
    player.play()
}
// Expected: plays song with store ID "lorem"
// Actual: app freezes and I see error logs
When I play a song, instead of playing it, the app completely freezes (meaning it doesn't respond to user interaction), and I see the following logs:
[SDKPlayback] ASYNC-WATCHDOG-1: Attempting to wake up the remote process
[SDKPlayback] SYNC-WATCHDOG-1: Attempting to wake up the remote process
[SDKPlayback] ASYNC-WATCHDOG-2: Tearing down connection
[SDKPlayback] SYNC-WATCHDOG-2: Tearing down connection
The MPMusicPlayerController plays music just fine on iOS 14.2.
Can anybody confirm or shed some light on what's going on here?
I filed a TSI/bug report with Apple in the meantime.
I can confirm the issue is still present, but after doing some testing I found that it is actually blocking the main thread. So a workaround that at least worked for me is executing the play function on a background thread, like this:
DispatchQueue.global(qos: .background).async {
    player.prepareToPlay()
    player.play()
}
The issue may still occur occasionally, but I found that moving the call to a background thread makes it happen far less often. Calling prepareToPlay() first also seems to make it work 99% of the time.

AVSpeechSynthesizer uses Apple Watch's speaker and not headset as output channel

I am using AVSpeechSynthesizer inside a WatchKit App Extension.
The logic is simple, and can be summarized as the following:
let synth = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, World")
synth.speak(utterance)
This works fine, but the speech always plays through the Apple Watch's onboard speaker.
I need the speech to come through my AirPods, which are connected to my iPhone.
Previously I had delegated the task to the iPhone via WatchConnectivity, which worked well, but due to delays in WatchConnectivity communication I moved the control logic directly onto the Apple Watch.
I thought watchOS would internally hand the audio over to the Bluetooth device, but it's not going as planned.
Maybe I am missing something?
Do I need to specify the output channels via synth.outputChannels?
Do I need to show the AirPlay popup asking the user to select an audio output source?
If so how do I go about this?
I am unable to find much information on this matter online so any help would be greatly appreciated.
I am just trying to find a way to get the speech over my AirPods.
You can use the following code to display an audio route picker and direct audio to the selected device (on watchOS, activating a session with the .longFormAudio routing policy prompts the user to choose a Bluetooth route such as AirPods):
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSession.Category.playback,
                            mode: .default,
                            policy: .longFormAudio,
                            options: [])
    session.activate(options: []) { (success, error) in
        // Check for an error and play audio.
        if let error = error {
            print(error)
        }
    }
} catch {
    print(error)
}

How to create an iOS alarm clock that runs in the background properly

I would like to add an alarm clock function to an iOS app I am developing, and as a reference I installed a popular app called "Alarmy."
I managed to keep my app running in the background using only AVAudioSession properties; however, I noticed that the app consumes a lot of battery while the phone sleeps.
After some testing, I think this is because the app activates the speakers (and keeps them on) immediately after the AVAudioSession is activated.
Even though no actual sound plays until audioPlayer.play(atTime: audioPlayer.deviceCurrentTime + Double(seconds)) is triggered, if I get very close to my iPhone 7's speakers I can hear the faint buzzing that indicates the speakers are on. This implies that the speakers are de facto playing an "empty sound."
This buzzing sound does not exist when I set the alarm with Alarmy; the alarm simply starts playing when it is supposed to.
I have not found any other way to keep my app in the background and play an alarm sound at a specified time. There are Local Notifications, of course, but they do not allow playing a sound when the phone is silenced.
Going back to Alarmy: not only can it play a background alarm without activating the speakers first, it can also raise the volume to the maximum level in the background. Are they perhaps using some other iOS background mode to achieve this, such as Background Fetch or Background Processing in some clever way? Is there any known way to replicate these behaviors?
Thanks in advance!
Here is the current code I use to set the alarm:
private func setNewAlarm(audioPlayer: AVAudioPlayer, seconds: Int, ringtone: String, result: @escaping (Bool) -> Void) {
    do {
        // Set a secondary alarm using local notifications, in case the user closes the app.
        setNotificationAlarm(audioPlayer: audioPlayer, seconds: seconds, ringtone: ringtone, result: result)
        try AVAudioSession.sharedInstance().setActive(false)
        try AVAudioSession.sharedInstance().setCategory(.playback, options: [.mixWithOthers])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch let error as NSError {
        print("AVAudioSession error: \(error.localizedDescription)")
    }
    audioPlayer.prepareToPlay()
    audioPlayer.play(atTime: audioPlayer.deviceCurrentTime + Double(seconds))
    result(true)
}

iOS text-to-speech in background

I am having an intermittent (aargh!) problem playing text-to-speech in the background, triggered from an Apple Watch. I have properly set up the background mode, the AVAudioSession category, and the WatchKitExtensionRequest handler (see below). I had this working before and can't figure out what changed. (Could iOS 9 have issues? "Before" means, among other things, iOS 8.)
The problem is this: when the app gets the request from the Watch while the app is in the background or the phone is sleeping (locked), the speech sometimes plays right away and other times doesn't play until the app is brought to the foreground. The OS seems to queue the audio sometimes, and sometimes not. I can't find any common thread between the success and failure cases. I can verify with logging that the call to speakUtterance() is made in all situations, but its behavior varies, apparently randomly. The only clue is that the longer the app has been in the background, the less likely it seems to speak right away.
This is making me pull my hair out. Suggestions welcome.
In info.plist:
Required background modes: App plays audio or streams audio/video using AirPlay
In AppDelegate.application:didFinishLaunching:withOptions():
do {
    try AVAudioSession.sharedInstance().setCategory(
        AVAudioSessionCategoryPlayback,
        withOptions: .DuckOthers
    )
    try AVAudioSession.sharedInstance().setActive(true)
} catch let error as NSError {
    // etc...
}
In AppDelegate.application:handleWatchKitExtensionRequest...():
var bgTaskId: UIBackgroundTaskIdentifier = 0
bgTaskId = application.beginBackgroundTaskWithName(
    "Prose WKE handler",
    expirationHandler: {
        application.endBackgroundTask(bgTaskId)
    }
)
// ... Post notification to call text-to-speech ...
application.endBackgroundTask(bgTaskId)
Here's a workaround: play a second snippet of sound (I used a half second of silence) with AVAudioPlayer right after the call to speakUtterance(). This seems to "jog the pipeline".
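In code, the workaround looks roughly like this (a sketch in current Swift syntax; "silence.caf" is a hypothetical bundled half-second silent file, and the player must be held in a property so it isn't deallocated mid-playback):
import AVFoundation

// Speak, then immediately play a short silent file to keep the
// background audio pipeline awake.
var silencePlayer: AVAudioPlayer? // strong reference so playback isn't cut short

func speakInBackground(_ text: String, with synth: AVSpeechSynthesizer) {
    synth.speak(AVSpeechUtterance(string: text))
    if let url = Bundle.main.url(forResource: "silence", withExtension: "caf") {
        silencePlayer = try? AVAudioPlayer(contentsOf: url)
        silencePlayer?.play() // "jogs the pipeline" so speech starts promptly
    }
}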
