Chromecast Sleep/Background issue in iOS app

I am facing a major issue while using Chromecast in my application. I use the standard GCKUICastButton to connect to the Chromecast, and the video plays fine.
I am using the default Cast receiver for my application. When the device goes to sleep, the Chromecast playback sometimes stops, and sometimes the device disconnects a while after entering sleep mode. After going through a lot of forums and Stack Overflow questions, I implemented the code below:
import GoogleCast

extension GCKSessionManager {
    static func ignoreAppBackgroundModeChange() {
        // Swizzle suspendSession(with:) so the SDK's .appBackgrounded suspension is ignored.
        guard
            let oldMethod = class_getInstanceMethod(GCKSessionManager.self,
                                                    #selector(GCKSessionManager.suspendSession(with:))),
            let newMethod = class_getInstanceMethod(GCKSessionManager.self,
                                                    #selector(GCKSessionManager.suspendSessionIgnoringAppBackgrounded(with:)))
        else { return }
        method_exchangeImplementations(oldMethod, newMethod)
    }

    @objc func suspendSessionIgnoringAppBackgrounded(with reason: GCKConnectionSuspendReason) -> Bool {
        guard reason != .appBackgrounded else { return false }
        // After the swizzle, this selector points at the original implementation.
        return suspendSessionIgnoringAppBackgrounded(with: reason)
    }
}
Then, in my code, I call the following line:
GCKSessionManager.ignoreAppBackgroundModeChange()
Now the Chromecast no longer disconnects immediately; however, after a few minutes of sleep it still disconnects and the app gets killed as well. How can I retain the Chromecast play session even if the device goes to sleep or the app goes to the background?
Since I am using GCKUICastButton, I am not using GCKDeviceManager, so I am unable to use ignoreAppStateNotification on GCKDeviceManager. Can you advise whether I can use that as well?
I have also added the GCKCastOptions setup in AppDelegate (roughly as sketched below).
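For reference, the AppDelegate setup is along these lines (a sketch, not my exact code; GCKDiscoveryCriteria requires Cast SDK 4.4+, and suspendSessionsWhenBackgrounded is the SDK's own switch for this behaviour, assuming your SDK version exposes it):
import GoogleCast

// Sketch of a typical GCKCastOptions setup in
// application(_:didFinishLaunchingWithOptions:). The default media receiver
// ID is used here as a placeholder. suspendSessionsWhenBackgrounded is the
// Cast SDK's documented switch for keeping the session alive while the app
// is backgrounded, which should make the swizzling above unnecessary.
let criteria = GCKDiscoveryCriteria(applicationID: kGCKDefaultMediaReceiverApplicationID)
let options = GCKCastOptions(discoveryCriteria: criteria)
options.suspendSessionsWhenBackgrounded = false
GCKCastContext.setSharedInstanceWith(options)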

Related

Connecting bluetooth headphones while app is recording in the background causes the recording to stop

I am facing the following issue and hoping someone else encountered it and can offer a solution:
I am using AVAudioEngine to access the microphone. Until iOS 12.4, every time the audio route changed I was able to restart the AVAudioEngine graph to reconfigure it and ensure the input/output audio formats fit the new input/output route. Due to changes introduced in iOS 12.4 it is no longer possible to start (or restart for that matter) an AVAudioEngine graph while the app is backgrounded (unless it's after an interruption).
The error Apple now throws when I attempt this is:
2019-10-03 18:34:25.702143+0200 [1703:129720] [aurioc] 1590: AUIOClient_StartIO failed (561145187)
2019-10-03 18:34:25.702528+0200 [1703:129720] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1544:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
2019-10-03 18:34:25.711668+0200 [1703:129720] [Error] Unable to start audio engine The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
I'm guessing Apple closed a security vulnerability there, so I removed the code that restarts the graph when the audio route changes (e.g. when Bluetooth headphones are connected).
It seems that when an I/O audio format changes (as happens when the user connects a Bluetooth device), an .AVAudioEngineConfigurationChange notification is fired to allow the integrating app to react to the change in format. This is really what I should have used to handle changes in I/O formats from the beginning, instead of brute-forcing a restart of the graph. According to the Apple documentation - "When the audio engine's I/O unit observes a change to the audio input or output hardware's channel count or sample rate, the audio engine stops, uninitializes itself, and issues this notification." (see the docs here). When this happens while the app is backgrounded, I am unable to start the audio engine with the correct audio I/O formats, because of the restriction described above (the engine cannot be started while the app is backgrounded).
So bottom line, it looks like by closing a security vulnerability, Apple introduced a bug in reacting to audio I/O format changes while the app is backgrounded. Or am I missing something?
I'm attaching a code snippet to better describe the issue. For a plug-and-play AppDelegate see here - https://gist.github.com/nevosegal/5669ae8fb6f3fba44505543e43b5d54b.
import AVFoundation

// NSObject subclass so the selector-based NotificationCenter observer below
// can target the @objc handleConfigurationChange() method.
class RCAudioEngine: NSObject {

    private let audioEngine = AVAudioEngine()

    override init() {
        super.init()
        setup()
        start()
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(handleConfigurationChange),
                                               name: .AVAudioEngineConfigurationChange,
                                               object: nil)
    }

    @objc func handleConfigurationChange() {
        //Attempting to call start(),
        //or audioEngine.reset(), setup() and start(),
        //or any other combination that involves starting the audioEngine,
        //results in error 561145187.
        //Not calling start() doesn't return this error, but also doesn't restart
        //the recording.
    }

    public func setup() {
        //Setup nodes
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let mainMixerNode = audioEngine.mainMixerNode

        //Mute output to avoid feedback
        mainMixerNode.outputVolume = 0.0

        inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { (buffer, _) -> Void in
            //Do audio conversion and use buffers
        }
    }

    public func start() {
        RCLog.debug("Starting audio engine")
        guard !audioEngine.isRunning else {
            RCLog.debug("Audio Engine is already running")
            return
        }

        do {
            audioEngine.prepare()
            try audioEngine.start()
        } catch {
            RCLog.error("Unable to start audio engine \(error.localizedDescription)")
        }
    }
}
I see only one fix that went into iOS 12.4, and I am not sure whether it causes the issue.
From the release notes, https://developer.apple.com/documentation/ios_ipados_release_notes/ios_12_4_release_notes:
"Resolved an issue where running an app in iOS 12.2 or later under the Leaks instrument resulted in random numbers of false-positive leaks for every leak check after the first one in a given run. You might still encounter this issue in Simulator, or in macOS apps when using Instruments from Xcode 10.2 and later. (48549361)"
You can raise an issue with Apple if you are a registered developer. They might help you if the defect is on their side.
You can also test with the upcoming iOS release (via the Apple beta program) to check whether your code works in the future release.

I am facing a crash related to custom audio driver provided by tokbox

I am using the custom audio driver of TokBox. I have a class named 'DefaultAudioDevice'; in this class I am facing several crashes, but they only occur some of the time. For example:
func disposeAudioUnit(audioUnit: inout AudioUnit?) {
    if let unit = audioUnit {
        AudioUnitUninitialize(unit)
        AudioComponentInstanceDispose(unit)
    }
    audioUnit = nil
}
When I set audioUnit to nil, the app sometimes crashes and sometimes runs perfectly.
The same happens when I call the following:
AudioOutputUnitStop(unit)
Any solution will be very helpful.
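One defensive ordering to check (just a sketch of a teardown sequence for an I/O unit, not a confirmed fix, and the function name is only for illustration): stop I/O before uninitializing and disposing, and clear the reference immediately so nothing else can touch the freed unit.
import AudioToolbox

// Sketch: stop the unit's I/O first, then uninitialize, then dispose, then
// nil the reference so no other code path can use the dangling AudioUnit.
func disposeAudioUnitSafely(audioUnit: inout AudioUnit?) {
    guard let unit = audioUnit else { return }
    AudioOutputUnitStop(unit)            // stop render/input callbacks before teardown
    AudioUnitUninitialize(unit)
    AudioComponentInstanceDispose(unit)
    audioUnit = nil
}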

Callkit loudspeaker bug / how WhatsApp fixed it?

I have an app with CallKit functionality. When I press the loudspeaker button, it flashes and animates to the OFF state (sometimes the speaker is set to LOUD but the icon is still OFF). When I tap on it multiple times, it can be clearly seen that this functionality is not behaving correctly.
WhatsApp, however, starts with the loudspeaker turned OFF, activates it after 3+ seconds, and it works. Has anyone encountered anything similar who can offer a solution?
Youtube video link to demonstrate my problem
There is a workaround proposed by an Apple engineer which should fix CallKit not activating the audio session correctly:
a workaround would be to configure your app's audio session (call configureAudioSession()) earlier in your app's lifecycle, before the -provider:performAnswerCallAction: method is invoked. For instance, you could call configureAudioSession() immediately before calling -[CXProvider reportNewIncomingCallWithUUID:update:completion:] in order to ensure that the audio session is fully configured prior to informing CallKit about the incoming call.
From: https://forums.developer.apple.com/thread/64544#189703
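In code, that workaround looks roughly like this (a sketch only; configureAudioSession() stands in for whatever audio session setup your app already does, and the category/mode/options shown are illustrative):
import AVFoundation
import CallKit

// Sketch: configure the audio session *before* reporting the incoming call,
// so it is already set up by the time the CXAnswerCallAction is performed.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth])
    } catch {
        print("Audio session configuration failed: \(error)")
    }
}

func reportIncomingCall(uuid: UUID, handle: String, provider: CXProvider) {
    configureAudioSession()  // do this first, per the workaround quoted above

    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: handle)
    provider.reportNewIncomingCall(with: uuid, update: update) { error in
        if let error = error {
            print("Failed to report incoming call: \(error)")
        }
    }
}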
If this doesn't help, you probably should post an example project which reproduces your behaviour for us to be able to analyse it further.
The answer above is correct: the "VoiceChat" mode ruins everything.
Here is a Swift 4 example for WebRTC. After the connection is established, call the following:
let rtcAudioSession = RTCAudioSession.sharedInstance()
rtcAudioSession.lockForConfiguration()
do {
    try rtcAudioSession.setCategory(AVAudioSession.Category.playAndRecord.rawValue,
                                    with: AVAudioSession.CategoryOptions.mixWithOthers)
    try rtcAudioSession.setMode(AVAudioSession.Mode.default.rawValue)
    try rtcAudioSession.overrideOutputAudioPort(.none)
    try rtcAudioSession.setActive(true)
} catch let error {
    debugPrint("Couldn't force audio to speaker: \(error)")
}
rtcAudioSession.unlockForConfiguration()
You can use AVAudioSession.sharedInstance() instead of RTCAudioSession as well, as sketched below.
Referred from: Abnormal behavior of speaker button on system provided call screen
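A rough equivalent using AVAudioSession directly (sketch; no lockForConfiguration()/unlockForConfiguration() calls are needed here):
import AVFoundation

// Sketch of the same configuration with AVAudioSession instead of RTCAudioSession.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers])
    try session.overrideOutputAudioPort(.none)
    try session.setActive(true)
} catch {
    debugPrint("Couldn't configure the audio session: \(error)")
}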
The same issue has been experienced in previous versions as well, so this is not a new issue in CallKit.
This issue has to be resolved by iOS; we don't have any control over it.
Please go through the Apple developer forum threads:
CallKit/detect speaker set
and
[CALLKIT] audio session not activating?
Maybe you can set the mode to AVAudioSessionModeDefault.
When I use CallKit + WebRTC:
I configure the AVAudioSessionModeDefault mode.
I alloc the CXProvider and call reportNewIncomingCallWithUUID.
I use WebRTC; after ICE is connected, WebRTC changes the mode to AVAudioSessionModeVoiceChat, and the speaker issue happens.
Later, when I set the mode back to AVAudioSessionModeDefault, the speaker works well.
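That last step, in code, is roughly the following (a sketch; call it once the call is connected):
import AVFoundation

// Sketch: after WebRTC has switched the mode to voiceChat (post-ICE),
// set it back to default so the CallKit speaker button behaves again.
func resetAudioModeToDefault() {
    do {
        try AVAudioSession.sharedInstance().setMode(.default)
    } catch {
        print("Failed to reset the audio session mode: \(error)")
    }
}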
I've fixed the issue with the following steps.
In CXAnswerCallAction, use the code below to set the audio session configuration.
RTCDispatcher.dispatchAsync(on: RTCDispatcherQueueType.typeAudioSession) {
    let audioSession = RTCAudioSession.sharedInstance()
    audioSession.lockForConfiguration()
    let configuration = RTCAudioSessionConfiguration.webRTC()
    configuration.categoryOptions = [AVAudioSessionCategoryOptions.allowBluetoothA2DP,
                                     AVAudioSessionCategoryOptions.duckOthers,
                                     AVAudioSessionCategoryOptions.allowBluetooth]
    try? audioSession.setConfiguration(configuration)
    audioSession.unlockForConfiguration()
}
After the call is connected, I reset the AudioSession category to default.
func configureAudioSession() {
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setCategory(AVAudioSession.Category.playAndRecord.rawValue, with: .allowBluetooth)
        try session.setMode(AVAudioSession.Mode.default.rawValue)
        try session.setPreferredSampleRate(44100.0)
        try session.setPreferredIOBufferDuration(0.005)
    } catch let error {
        debugPrint("Error changing AVAudioSession category: \(error)")
    }
    session.unlockForConfiguration()
}
Thanks to SO user Алексей Смольский for the help.

Unable to get ReplayKit (w/RPBroadcastActivityViewController) to stream to YouTube live - get "The user declined application recording" error

I'm trying to use ReplayKit to live stream from within an iOS app on iOS 11 and Swift 4. My code successfully live streams to MobCrush, but when I select YouTube and the broadcast is supposed to kick off, it fails.
Relevant code:
func broadcastActivityViewController(_ broadcastActivityViewController: RPBroadcastActivityViewController,
                                     didFinishWith broadcastController: RPBroadcastController?,
                                     error: Error?) {
    //1
    guard error == nil else {
        print("Broadcast Activity Controller is not available.")
        print("ERROR BROADCASTING: " + error!.localizedDescription)
        return
    }
    //2
    broadcastActivityViewController.dismiss(animated: true) {
        //3
        broadcastController?.startBroadcast { error in
            //4
            //TODO: Broadcast might take a few seconds to load up. I recommend that you add an activity indicator or something similar to show the user that it is loading.
            //5
            if error == nil {
                print("Broadcast started successfully!")
                self.broadcastStarted()
            }
        }
    }
}
It prints:
Broadcast Activity Controller is not available.
ERROR BROADCASTING: The user declined application recording
I'm trying to figure out whether this is an issue with YouTube or a permissions/implementation problem on my side.
It's worth noting that ReplayKit streaming clearly does not work for some of the advertised platforms (e.g. Periscope), but I have successfully gotten ReplayKit streaming to YouTube to work with some other apps I tested, so it should be possible.
I'm seeing a similar thing.
MobCrush - Works beautifully
Periscope - The stream starts, connects, and the entry shows up in Periscope, but the video is blank/inaccessible when you try to view it either live or saved.
YouTube - An error occurs that prevents the stream from starting, yet a Scheduled Livestream entry appears for the live stream you attempted. For me it is scheduled about 8 hours in the past (but I'm sure this value depends on your system clock relative to the US West Coast).
So it appears only MobCrush has upheld its end of the bargain.

NotificationCenter stops working when the screen is locked

I'm having trouble with an app I'm building. The app's objective is to play audio files: it requests an audio file from a public API, plays it, waits until it ends, and then requests another audio file and starts over.
Here's a shortened version of the code that does this; I omitted the error checking for simplicity:
func requestEnded(audioSource: String) {
    let url = URL(string: "https://example.com/audios/" + audioSource)
    audio = AVPlayer(url: url!)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(MediaItem.finishedPlaying(_:)),
                                           name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,
                                           object: audio?.currentItem)
    audio?.play()
}

@objc func finishedPlaying(_ notification: NSNotification) {
    print("Audio ended")
    callAPI()
}

func callAPI() {
    // do all the http request stuff
    requestEnded(audioSource: "x.m4a")
}

// callAPI() is called when the app is initialized
It works well when the screen is unlocked. When I lock the phone, the current audio keeps playing, but when it ends finishedPlaying() never gets called (the print statement does not appear in the console).
How can I make the app detect that the audio ended and trigger another one while the device is locked? In the Android version I got around the screen-lock problem by using a partial wake lock, which made it run normally even with the screen off.
It has to be done this way because the API decides the audio in real time and it's all done on the backend, so there is no way to buffer more than one audio file without breaking the app's requirements.
What are my options here? Any help is appreciated.
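Edit: is something like the following what is needed here? (A sketch, assuming the "audio" background mode is enabled for the target; the idea is that an active playback session keeps the app running while the screen is locked, so AVPlayerItemDidPlayToEndTime keeps arriving.)
import AVFoundation

// Sketch: activate a .playback audio session before starting playback.
// Together with the "audio" UIBackgroundModes entry, this should keep the
// app alive while the screen is locked.
func activatePlaybackSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Failed to activate the playback session: \(error)")
    }
}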
