Why is CXProvider's providerDidReset invoked? - iOS

We integrated CallKit for audio/video calling features.
Recently, a few production users reported that the system shows a Call Failed alert during a call. This happens on a device running iOS 16.1, and it happens very frequently.
CXProvider setup
let providerConfiguration = CXProviderConfiguration(localizedName: appName()!)
providerConfiguration.supportsVideo = true
providerConfiguration.maximumCallsPerCallGroup = 1
providerConfiguration.maximumCallGroups = 1
providerConfiguration.supportedHandleTypes = [.phoneNumber, .generic]
providerConfiguration.iconTemplateImageData = UIImage.callKitIcon?.pngData()
providerConfiguration.includesCallsInRecents = false
callProvider = CXProvider(configuration: providerConfiguration)
callProvider.setDelegate(self, queue: nil)
We investigated the device logs and found that CXProvider's providerDidReset(_:) callback was invoked.
The Apple developer documentation only explains that this function is called when the provider is reset; we could not understand the reason behind the failure.
I would like to know more about this callback: in which situations is the provider reset?
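For context, a typical providerDidReset(_:) implementation simply tears down all in-flight call state, since the system has already discarded the calls on its side; the following is only a generic sketch (callManager is a hypothetical call-tracking helper, not code from our app):
func providerDidReset(_ provider: CXProvider) {
    // The system reset the provider (for example after a CallKit daemon
    // restart), so every call it was tracking is gone. Mirror that locally.
    callManager.stopAudio()               // hypothetical: stop any active audio I/O
    for call in callManager.activeCalls { // hypothetical: our own call registry
        callManager.end(call)
    }
    callManager.removeAllCalls()
}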
Any help is highly appreciated.
Thanks.

Related

When I use Sign in with Apple to log in, the selection box pops up. I choose to continue with a password, and the prompt does not complete.

iOS13 (beta) Apple Login error
@available(iOS 13.0, *)
func authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {
    // Handle error.
    crprint(error.localizedDescription)
}
Failed to complete operation. (com.apple.AuthenticationServices.AuthorizationError error 1000.)
I encountered the same issue yesterday and managed to fix it with these steps:
Go to https://appleid.apple.com/account/manage; under the Devices section you should find the devices on which you are signed in with your Apple ID.
Find the device/simulator on which Apple SSO is not working, click on it, and remove it from the account.
Go back to your device/simulator settings; it will ask you to authenticate again. Once you authenticate successfully, Apple SSO should work again!
I'm not sure what caused this issue; probably some problem between the simulator and the Apple ID.
In my case, launching ASAuthorizationController including a request for ASAuthorizationPasswordProvider was causing the error.
Failed to complete operation. (com.apple.AuthenticationServices.AuthorizationError error 1000.)
From the ASAuthorizationError.Code documentation, 1000 is unknown:
ASAuthorizationError.Code.unknown
The authorization attempt failed for an unknown reason.
Declaration
case unknown = 1000
Ref: https://developer.apple.com/documentation/authenticationservices/asauthorizationerror/code/unknown
Now that's not particularly helpful, but it did give me a clue to check my ASAuthorizationController setup, which I was launching with two requests, one from ASAuthorizationAppleIDProvider and one from ASAuthorizationPasswordProvider, like so:
func loginWithAppleButtonPressed() {
    let appleSignInRequest = ASAuthorizationAppleIDProvider().createRequest()
    appleSignInRequest.requestedScopes = [.fullName, .email]
    let anySignInRequest = ASAuthorizationPasswordProvider().createRequest()
    let controller = ASAuthorizationController(authorizationRequests: [appleSignInRequest, anySignInRequest])
    controller.delegate = self
    controller.presentationContextProvider = self
    controller.performRequests()
}
I tried this on a simulator that had an Apple ID with 2FA enabled and also on a device with another Apple ID without 2FA, and both times it would just go to authorizationController(controller:didCompleteWithError error:) and that's it.
Solution:
So to keep it simple, I launched ASAuthorizationController with only ASAuthorizationAppleIDProvider like so:
func loginWithAppleButtonPressed() {
    let appleSignInRequest = ASAuthorizationAppleIDProvider().createRequest()
    appleSignInRequest.requestedScopes = [.fullName, .email]
    let controller = ASAuthorizationController(authorizationRequests: [appleSignInRequest])
    controller.delegate = self
    controller.presentationContextProvider = self
    controller.performRequests()
}
And voilà! This time things worked as expected:
When using an Apple ID with 2FA: it popped up with the login request.
When using an Apple ID without 2FA: it popped up an error telling me to enable 2FA, then called authorizationController(controller:didCompleteWithError:) with error 1000.
So it seems that in my case ASAuthorizationPasswordProvider was the culprit, but since ASAuthorizationError.Code.unknown is a generic error case, this solution may not work for you.
Also, in my case I only need ASAuthorizationAppleIDProvider for Apple ID sign-in, so I dropped support for ASAuthorizationPasswordProvider.
In my case I needed to first check ASAuthorizationPasswordProvider and then, if there are no stored credentials, use ASAuthorizationAppleIDProvider. For this I had to use a few workarounds. Code below:
// Initial point
public func fire(appleIDCompletion: @escaping AppleIDServiceCompletion) {
    self.completion = appleIDCompletion
    let requestPassword = ASAuthorizationPasswordProvider().createRequest()
    performRequest(requestPassword)
}

// Helper function
private func performRequest(_ request: ASAuthorizationRequest) {
    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = self
    controller.presentationContextProvider = self
    controller.performRequests()
}

// Delegate
func authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {
    if let e = error as? ASAuthorizationError {
        switch e.code {
        case .canceled:
            trace("User did cancel authorization.")
            return
        case .failed:
            trace("Authorization failed.")
        case .invalidResponse:
            trace("Authorization returned invalid response.")
        case .notHandled:
            trace("Authorization not handled.")
        case .unknown:
            if controller.authorizationRequests.contains(where: { $0 is ASAuthorizationPasswordRequest }) {
                trace("Unknown error with password auth, trying to request appleID auth...")
                let requestAppleID = ASAuthorizationAppleIDProvider().createRequest()
                requestAppleID.requestedScopes = [.email, .fullName]
                requestAppleID.requestedOperation = .operationImplicit
                performRequest(requestAppleID)
                return
            } else {
                trace("Unknown error for appleID auth.")
            }
        default:
            trace("Unsupported error code.")
        }
    }
    completion?(.rejected(error))
}
Works like a charm 🔥
Simply add the "Sign in with Apple" capability (Signing & Capabilities tab, + Capability).
I resolved it by adding the Sign in with Apple key to the entitlements plist.
From Apple's example,
performExistingAccountSetupFlows: only call this method once, in viewDidAppear. If user info already exists, Apple will offer it for login. If not, it will throw an error.
handleAuthorizationAppleIDButtonPress: called whenever the user taps the Sign in with Apple button. Note that if an account already existed, it would have been shown to the user already. I believe this is still a work in progress and not all use cases are covered. For example, if the user sees the login info initially from the viewDidAppear call and cancels it, they then have to create a new account when this method is triggered, since it is missing an ASAuthorizationPasswordProvider request. If the user has stored login info, this call (with ASAuthorizationPasswordProvider) will succeed, but if no data is available, the user will see no action on tapping the button since it will throw an error.
I am still figuring this out; if I have anything more to add, I will update the answer. So for now we only have this one use case for the Sign in with Apple option.
Update:
Once I created a new account, this same flow offered to log me in with the already existing account. So I can say that there is no need to include the ASAuthorizationPasswordProvider request in the handleAuthorizationAppleIDButtonPress method. I am doing all the testing on a device.
You can always go to Settings -> Apple ID -> Password & Security -> Apple ID Logins to check and delete the account if you need to test various scenarios.
Update 2:
Everything seems to work fine in the other scenarios too if you already have a saved password or an Apple ID account created, so even if I pass ASAuthorizationPasswordProvider in the handleAuthorizationAppleIDButtonPress call, it works fine. I would suggest not passing ASAuthorizationPasswordProvider in that call and keeping the flow as described above; this way, if no saved password or Apple ID exists, the user is offered the option to create a new ID, and if an ID already exists, it is shown.
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    performExistingAccountSetupFlows()
}

func performExistingAccountSetupFlows() {
    // Prepare requests for both Apple ID and password providers.
    let requests = [ASAuthorizationAppleIDProvider().createRequest(),
                    ASAuthorizationPasswordProvider().createRequest()]
    // Create an authorization controller with the given requests.
    let authorizationController = ASAuthorizationController(authorizationRequests: requests)
    authorizationController.delegate = self
    authorizationController.presentationContextProvider = self
    authorizationController.performRequests()
}

@objc
func handleAuthorizationAppleIDButtonPress() {
    let appleIDProvider = ASAuthorizationAppleIDProvider()
    let request = appleIDProvider.createRequest()
    request.requestedScopes = [.fullName, .email]
    let authorizationController = ASAuthorizationController(authorizationRequests: [request])
    authorizationController.delegate = self
    authorizationController.presentationContextProvider = self
    authorizationController.performRequests()
}
I resolved this by holding my finger down on the fingerprint scanner until completion. I'm not an iPhone user, so I'm not used to the fingerprint scanner. If you pull your finger off too soon, you get this error.

Hold callkit call when incoming cellular call

I have a problem (but not really) with CallKit.
I implemented CallKit in my app and it works great. When I get a second call to my app, CallKit offers me the options End & Accept, Decline, or Hold & Accept. The same goes if I am in a cellular (GSM) call and get a call on my app. But when I am in an app call (via CallKit) and get a cellular (GSM) call, I only get two options: Decline or End & Accept.
Any idea why? Or how I can get all three options?
static var providerConfiguration: CXProviderConfiguration {
    var providerConfiguration: CXProviderConfiguration
    providerConfiguration = CXProviderConfiguration(localizedName: "app name")
    providerConfiguration.supportsVideo = false
    providerConfiguration.maximumCallsPerCallGroup = 1
    providerConfiguration.maximumCallGroups = 3
    providerConfiguration.supportedHandleTypes = [.phoneNumber]
    return providerConfiguration
}
I have implemented:
providerDidReset,
CXStartCallAction,
CXAnswerCallAction,
CXEndCallAction,
CXSetHeldCallAction,
CXSetMutedCallAction,
timedOutPerforming action,
didActivate audioSession,
didDeactivate audioSession.
In my app delegate I have a function that checks user activity. I put breakpoints in all of the functions, but nothing gets called before the view for the incoming cellular (GSM) call is shown.
I googled but couldn't find a solution. As far as I can see, CallKit is working perfectly.
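For reference, a typical CXSetHeldCallAction handler follows the usual pattern below; this is only a generic sketch (callManager is a hypothetical call-tracking helper), not code from this question:
func provider(_ provider: CXProvider, perform action: CXSetHeldCallAction) {
    // Look up the call being held or resumed in our own registry (hypothetical helper).
    guard let call = callManager.call(with: action.callUUID) else {
        action.fail()
        return
    }
    // Pause or resume the app's audio/media for this call as appropriate.
    call.isOnHold = action.isOnHold
    action.fulfill()
}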
I struggled with this for outgoing calls. Make sure you call this method on the call once it is answered by the remote side:
[self.provider reportOutgoingCallWithUUID:currentCall.uuid connectedAtDate:[NSDate date]];
If you do not, the call is stuck "connecting" from CallKit's perspective and I have found that the native incoming call UI for other calls will not provide the "send to voicemail" and "hold and accept" options for incoming calls while another call is "connecting".
I struggled with this for a bit today until I figured that part out. I also am calling:
[self.provider reportOutgoingCallWithUUID:currentCall.uuid startedConnectingAtDate:[NSDate date]];
from within:
- (void)provider:(CXProvider *)provider performStartCallAction:(CXStartCallAction *)action
Not sure if that part is necessary but I'm doing it because that's what the Speakerbox demo does. Kind of, they do it in a callback... I just do it immediately.
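In Swift, the same reporting calls look roughly like the sketch below; startCall(uuid:handle:completion:) is a hypothetical hook into your own signaling layer that fires its completion when the remote side answers:
func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
    // Tell CallKit the outgoing call has started connecting right away.
    provider.reportOutgoingCall(with: action.callUUID, startedConnectingAt: Date())

    // Hypothetical signaling call; its completion fires when the remote side answers.
    startCall(uuid: action.callUUID, handle: action.handle.value) {
        // Report the call as connected, otherwise CallKit keeps it in the
        // "connecting" state and hides Hold & Accept for other incoming calls.
        provider.reportOutgoingCall(with: action.callUUID, connectedAt: Date())
    }

    action.fulfill()
}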
When you send the CXCallUpdate object to CallKit before the call, make sure you set supportsHolding to true.
My CXCallUpdate looks something like below:
let callHandle = CXHandle(type: .phoneNumber, value: handle)
let callUpdate = CXCallUpdate()
if userName != nil {
    callUpdate.localizedCallerName = userName
}
callUpdate.remoteHandle = callHandle
callUpdate.supportsDTMF = true
callUpdate.supportsHolding = true
callUpdate.supportsGrouping = false
callUpdate.supportsUngrouping = false
callUpdate.hasVideo = false
Meaning of the properties above:
localizedCallerName = if you want to show the user's name on the system call screen; otherwise the phone number/email (depending on the handle type) is shown.
supportsDTMF = whether the keypad can be used to enter numbers on the system call screen while the call is running; if you set it to false, the keypad option is disabled.
supportsHolding = if you want your call to be able to be put on hold when another call comes in, keep this property true.
supportsGrouping = if you want to allow conference calling (the Merge Call option enabled on the call screen), keep this true.
supportsUngrouping = the inverse of the previous one; after calls are merged (conference call), whether they can be ungrouped again.
hasVideo = if you support video calls, the system will automatically start the camera for you.
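For completeness, the update is then handed to CallKit when the incoming call is reported; a minimal sketch, assuming provider is your CXProvider and callUUID identifies this call:
// Report the incoming call with the configured update; the system call UI
// then shows hold/keypad/merge options according to the flags above.
provider.reportNewIncomingCall(with: callUUID, update: callUpdate) { error in
    if let error = error {
        print("Failed to report incoming call: \(error)")
    }
}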
@Redssie, let me know if any further help related to CallKit is required.

How to handle a video call through voip and call kit

I'm new to Apple's CallKit and PushKit. I'm using OpenTok in my application to handle video and audio calls. To get native-like calling in my app I'm using VoIP push with CallKit. Native audio calling works fine: when the user interacts with the native CallKit UI, it goes to the background and the application comes to the foreground. I have looked at the Speakerbox sample from Apple's CallKit documentation; it has some intent handlers to handle calls.
Can anyone please help me out by giving any idea about handling video and audio calls natively?
Thanks in advance.
I'm doing the same with OpenTok. As far as I'm aware, you can't handle video calls natively from the lock screen; however, you can use OpenTok with CallKit for just audio. See this link
CallKit has a supportsVideo property on CXProviderConfiguration and a hasVideo property on CXCallUpdate.
It's working fine for me. Check the demo link below.
https://websitebeaver.com/callkit-swift-tutorial-super-easy
func setupVideoCall() {
    let config = CXProviderConfiguration(localizedName: "My App")
    config.iconTemplateImageData = UIImagePNGRepresentation(UIImage(named: "pizza")!)
    config.ringtoneSound = "ringtone.caf"
    config.includesCallsInRecents = false
    config.supportsVideo = true
    let provider = CXProvider(configuration: config)
    provider.setDelegate(self, queue: nil)
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: "Pete Za")
    update.hasVideo = true
    provider.reportNewIncomingCall(with: UUID(), update: update, completion: { error in })
}

Why doesn't my iOS (Swift) app properly recognize some external display devices?

So I have an odd issue and my google-fu utterly fails to even provide me the basis of where to start investigating, so even useful keywords to search on may be of use.
I have an iOS application written in Swift. I have a model hooked up to receive notifications about external displays. With some adaptors, I'm able to properly detect and respond to the presence of an external display and programmatically switch it to be something other than a mirror (see the code block below). But with another adaptor, instead of just 'magically' becoming a second screen, I'm asked to 'trust' the external device, and it simply mirrors the device screen. Not the intended design at all.
func addSecondScreen(screen: UIScreen) {
    self.externalWindow = UIWindow(frame: screen.bounds)
    self.externalWindow!.screen = screen
    self.externalWindow!.rootViewController = self.externalVC
    self.externalWindow!.isHidden = false
}

@objc func handleScreenDidConnectNotification(_ notification: NSNotification) {
    let newScreen = notification.object as! UIScreen
    if self.externalWindow == nil {
        addSecondScreen(screen: newScreen)
    }
}

@objc func handleScreenDidDisconnectNotification(_ notification: NSNotification) {
    if let externalWindow = self.externalWindow {
        externalWindow.isHidden = true
        self.externalWindow = nil
    }
}
The worst issue here is that because I'm connecting to an external display to do this, I can't even run this code through the debugger to find out what is going on. I don't know where to even begin.
Any ideas?
Edit:
Thanks to someone pointing out Wi-Fi debugging, I can tell you my notification handlers are firing, but they both fire at the same time, one after the other, when the external adaptor is disconnected.
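For context, the handlers above are assumed to be registered along these lines (the actual registration code isn't shown in the question):
// Observe external display connect/disconnect events.
NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleScreenDidConnectNotification(_:)),
                                       name: UIScreen.didConnectNotification,
                                       object: nil)
NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleScreenDidDisconnectNotification(_:)),
                                       name: UIScreen.didDisconnectNotification,
                                       object: nil)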

iOS Audio not working during call answered when phone is locked. WebRTC used for calling

I am facing a problem with audio when using CallKit with WebRTC for a VoIP call, while answering the call from the lock screen.
General functionality:
My app activates the audio session when it's launched. For an incoming call, the SDP offer and answer are generated and exchanged, and the peer connection is set up. Both audio and video streams are generated, whether it's an audio call or a video call. Then the call is reported to CallKit using the following code:
callProvider.reportNewIncomingCall(with: currentCallUUID!, update: update) { error in }
If the app is in the foreground, it works fine.
But when the phone is locked and the user answers the call from the lock screen, the streams are exchanged but no audio comes through on either end until the user enters the app himself.
As soon as the user enters the app, audio becomes active on both ends.
All the background settings and capabilities are set properly.
I have also tried the following workaround provided by Apple staff, but even that does not work.
https://forums.developer.apple.com/thread/64544
As I mentioned, I am using WebRTC for calling. If I exchange the media streams after the user answers the call (still on the lock screen), and the peer connection is set up at that point, it works fine (but this adds a delay in making the call connection).
But if the peer connection is made before displaying the call (say, before reporting the call to CallKit), the audio stops working.
I was able to resolve this issue.
Steps that I followed:
I checked the code related to WebRTC here
I added the RTCAudioSession header file, which is actually a private class of WebRTC. So every time I receive a call event from signaling, I enable RTCAudioSession, and at the end of the call I disable it (see the sketch below these steps).
I have to render the incoming streams to a dummy view (although it is not displayed while the call is ongoing and the app is not yet open, it is required to make the audio work).
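A minimal sketch of the second step under those assumptions (RTCAudioSession comes from WebRTC's internal RTCAudioSession.h header; the callDidStart/callDidEnd hooks are hypothetical places where the signaling events arrive):
// Called whenever a call event arrives from signaling: take manual control of
// WebRTC's audio and enable it for the duration of the call.
func callDidStart() {
    RTCAudioSession.sharedInstance().useManualAudio = true
    RTCAudioSession.sharedInstance().isAudioEnabled = true
}

// Called when the call ends: disable audio and hand control back to WebRTC.
func callDidEnd() {
    RTCAudioSession.sharedInstance().isAudioEnabled = false
    RTCAudioSession.sharedInstance().useManualAudio = false
}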
I hope this will help if someone is facing the same issue.
@abhimanyu, are you still facing the issue or did you make it work? I am facing the same issue with CallKit.
As per my understanding, in the WebRTC M60 release they fixed an issue related to CallKit, which I think created a side effect and caused this problem.
The issue they fixed is related to the system AudioSession: whenever CallKit presents the incoming call UI and plays the ringtone, CallKit takes control of the AudioSession, and after the user action (accept/decline) it releases control. In the WebRTC M60 release, they added observers for this control exchange. That's why it works if the app is in the foreground; but if the phone is locked and an incoming call is accepted (I am assuming you are using the CallKit UI for the call and not redirecting the user to the app on accept from the lock screen), then, because of the native call UI, it is not possible for WebRTC to activate its own AudioSession instance, as the call is going through the CallKit screen.
Link for bug which has been fixed on WebRTC M60: https://bugs.chromium.org/p/webrtc/issues/detail?id=7446
If you find any workaround for this issue, please let me know.
Please note that I am sharing my code, which is tailored to my own needs; I share it for reference, and you need to change it according to your needs.
When you receive a VoIP notification, create a new instance of your WebRTC handling class, and
add these two lines to the code block, because enabling the audio session from the VoIP notification fails:
RTCAudioSession.sharedInstance().useManualAudio = true
RTCAudioSession.sharedInstance().isAudioEnabled = false
The didReceive method:
func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload, for type: PKPushType, completion: @escaping () -> Void) {
    let state = UIApplication.shared.applicationState
    if payload.dictionaryPayload["hangup"] == nil && state != .active {
        Globals.voipPayload = payload.dictionaryPayload as! [String: Any] // I pass parameters to the WebRTC handler via a Globals singleton to create the answer according to the SDP sent in the payload.
        RTCAudioSession.sharedInstance().useManualAudio = true
        RTCAudioSession.sharedInstance().isAudioEnabled = false
        Globals.sipGateway = SipGateway() // my WebRTC and Janus gateway handler class
        Globals.sipGateway?.configureCredentials(true) // I check the Janus gateway credentials stored in shared preferences, initiate the websocket connection and create the peer connection to my Janus gateway, which is the signaling server for my environment
        initProvider() // Creating the CallKit provider
        self.update.remoteHandle = CXHandle(type: .generic, value: String(describing: payload.dictionaryPayload["caller_id"]!))
        Globals.callId = UUID()
        let state = UIApplication.shared.applicationState
        Globals.provider.reportNewIncomingCall(with: Globals.callId, update: self.update, completion: { error in
        })
    }
}

func initProvider() {
    let config = CXProviderConfiguration(localizedName: "ulakBEL")
    config.iconTemplateImageData = UIImage(named: "ulakbel")!.pngData()
    config.ringtoneSound = "ringtone.caf"
    // config.includesCallsInRecents = false
    config.supportsVideo = false
    Globals.provider = CXProvider(configuration: config)
    Globals.provider.setDelegate(self, queue: nil)
    update = CXCallUpdate()
    update.hasVideo = false
    update.supportsDTMF = true
}
Modify your didActivate and didDeactivate delegate functions like below:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    print("CallManager didActivate")
    RTCAudioSession.sharedInstance().audioSessionDidActivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = true
    // self.callDelegate?.callIsAnswered()
}

func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    print("CallManager didDeactivate")
    RTCAudioSession.sharedInstance().audioSessionDidDeactivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = false
}
In the WebRTC handler class, configure the media senders and audio session:
private func createPeerConnection(webRTCCallbacks: PluginHandleWebRTCCallbacksDelegate) {
    let rtcConfig = RTCConfiguration()
    rtcConfig.iceServers = server.iceServers
    rtcConfig.bundlePolicy = RTCBundlePolicy.maxBundle
    rtcConfig.rtcpMuxPolicy = RTCRtcpMuxPolicy.require
    rtcConfig.continualGatheringPolicy = .gatherContinually
    rtcConfig.sdpSemantics = .planB
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])
    pc = sessionFactory.peerConnection(with: rtcConfig, constraints: constraints, delegate: nil)
    self.createMediaSenders()
    self.configureAudioSession()
    if webRTCCallbacks.getJsep() != nil {
        handleRemoteJsep(webrtcCallbacks: webRTCCallbacks)
    }
}
Media senders:
private func createMediaSenders() {
    let streamId = "stream"
    // Audio
    let audioTrack = self.createAudioTrack()
    self.pc.add(audioTrack, streamIds: [streamId])
    // Video
    /* let videoTrack = self.createVideoTrack()
    self.localVideoTrack = videoTrack
    self.peerConnection.add(videoTrack, streamIds: [streamId])
    self.remoteVideoTrack = self.peerConnection.transceivers.first { $0.mediaType == .video }?.receiver.track as? RTCVideoTrack
    // Data
    if let dataChannel = createDataChannel() {
        dataChannel.delegate = self
        self.localDataChannel = dataChannel
    } */
}

private func createAudioTrack() -> RTCAudioTrack {
    let audioConstrains = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
    let audioSource = sessionFactory.audioSource(with: audioConstrains)
    let audioTrack = sessionFactory.audioTrack(with: audioSource, trackId: "audio0")
    return audioTrack
}
Audio session:
private func configureAudioSession() {
    self.rtcAudioSession.lockForConfiguration()
    do {
        try self.rtcAudioSession.setCategory(AVAudioSession.Category.playAndRecord.rawValue)
        try self.rtcAudioSession.setMode(AVAudioSession.Mode.voiceChat.rawValue)
    } catch let error {
        debugPrint("Error changing AVAudioSession category: \(error)")
    }
    self.rtcAudioSession.unlockForConfiguration()
}
Please note that because I worked with callbacks and delegates, the code includes delegate and callback chunks; you can ignore them accordingly.
For reference, you can also check the example at this link.
