How to handle a video call through VoIP and CallKit - iOS

I'm new to Apple's CallKit and PushKit. I'm using OpenTok in my application for video and audio call handling. To handle native-like calling in my app I'm using VoIP push with CallKit. Native audio calling works fine: when the user interacts with the native CallKit UI while the app is in the background, the application comes to the foreground. I have looked into Apple's Speakerbox sample and the CallKit documentation; it has some intent handlers to handle calls.
Can anyone help me out by giving any idea about handling video and audio calls natively?
Thanks in advance.

I'm doing the same with OpenTok. As far as I'm aware you can't handle video calls natively from the lock screen; however, you can use OpenTok with CallKit for audio only. See this link.

CallKit has a supportsVideo property on CXProviderConfiguration and a hasVideo property on CXCallUpdate.
It's working fine for me. Check the demo link below.
https://websitebeaver.com/callkit-swift-tutorial-super-easy
func setupVideoCall() {
    // Provider configuration: declare video support so CallKit shows the video call UI.
    let config = CXProviderConfiguration(localizedName: "My App")
    config.iconTemplateImageData = UIImagePNGRepresentation(UIImage(named: "pizza")!)
    config.ringtoneSound = "ringtone.caf"
    config.includesCallsInRecents = false
    config.supportsVideo = true

    let provider = CXProvider(configuration: config)
    provider.setDelegate(self, queue: nil)

    // Report the incoming call as a video call.
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: "Pete Za")
    update.hasVideo = true
    provider.reportNewIncomingCall(with: UUID(), update: update, completion: { error in })
}

Related

Why is CXProvider's providerDidReset invoked?

We integrated CallKit for audio/video calling features.
Recently, a few production users reported that the system shows a Call Failed alert during a call. This happens on devices running iOS 16.1, and it happens very frequently.
CXProvider setup
let providerConfiguration = CXProviderConfiguration(localizedName: appName()!)
providerConfiguration.supportsVideo = true
providerConfiguration.maximumCallsPerCallGroup = 1
providerConfiguration.maximumCallGroups = 1
providerConfiguration.supportedHandleTypes = [.phoneNumber, .generic]
providerConfiguration.iconTemplateImageData = UIImage.callKitIcon?.pngData()
providerConfiguration.includesCallsInRecents = false
callProvider = CXProvider(configuration: providerConfiguration)
callProvider.setDelegate(self, queue: nil)
We investigated the device logs and found that CXProvider's providerDidReset(_:) callback was invoked.
The Apple developer documentation only says that this function is called when the provider is reset; we could not understand the reason behind the failure.
I would like to know more about this callback and the situations in which the provider is reset.
Any help is highly appreciated.
Thanks.
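For context, Apple's documentation only states that providerDidReset(_:) is delivered when the provider is reset (in practice this is usually the system-side CallKit service being restarted), and that the app should then end all calls and dispose of its audio session, because those calls no longer exist from the system's point of view. A minimal sketch of that cleanup, assuming a hypothetical callManager that tracks active calls and owns the media session:
func providerDidReset(_ provider: CXProvider) {
    // CallKit has dropped its state, so the calls we were tracking are gone
    // from the system's perspective; stop media and clear local bookkeeping.
    for call in callManager.activeCalls {
        call.endMediaSession()   // hypothetical: stop audio/video for this call
    }
    callManager.removeAllCalls() // hypothetical: forget the tracked call UUIDs
}
This doesn't explain why the reset happens in the first place, but handling it this way at least avoids leaving a call in a broken state when it does.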

iOS 13 Incoming Call UI goes to Recents

I'm developing VoIP-based audio calling in my application, and I have a strange issue for which I couldn't find a solution.
On iOS 13+ devices, the incoming CallKit UI sometimes goes into the background. That is, the incoming CallKit UI doesn't show up front, but I can hear the call ringtone and feel the vibration. When I double-tap the Home button, I can see my app with the incoming-call UI in Recents. When I tap on it, it shows the CallKit UI, and then I'm not able to move to other applications by double-tapping the Home button.
This happens inconsistently on iOS 13+ versions. Is there any way to show the CallKit UI prominently while receiving incoming calls?
I'm using the method below to show an incoming call.
let update = CXCallUpdate()
update.remoteHandle = CXHandle(type: .generic, value: callObj.name ?? "")
update.hasVideo = false
provider.reportNewIncomingCall(with: newUUID, update: update) { error in
    if error == nil {
        let call = Call(uuid: newUUID, handle: callObj.name ?? "", roomID: self.callObj?.roomId ?? "")
        self.callManager.add(call: call)
    }
    completion?(error as NSError?)
}
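One thing worth ruling out on iOS 13+: since iOS 13, an app that receives a VoIP push must report an incoming call to CallKit from inside pushRegistry(_:didReceiveIncomingPushWith:for:completion:), in the same invocation and before calling the completion handler; otherwise the system penalizes the app and can stop delivering VoIP pushes. A minimal sketch of reporting directly from the PushKit handler, under that assumption (provider and the "callerName" payload key are placeholders, not taken from the question):
func pushRegistry(_ registry: PKPushRegistry,
                  didReceiveIncomingPushWith payload: PKPushPayload,
                  for type: PKPushType,
                  completion: @escaping () -> Void) {
    let newUUID = UUID()
    let update = CXCallUpdate()
    // "callerName" is a placeholder key; use whatever your payload actually carries.
    let caller = payload.dictionaryPayload["callerName"] as? String ?? "Unknown"
    update.remoteHandle = CXHandle(type: .generic, value: caller)
    update.hasVideo = false
    // Report the call to CallKit before signalling that the push was handled.
    provider.reportNewIncomingCall(with: newUUID, update: update) { error in
        completion()
    }
}
If the call is reported later in the flow (for example after a network round trip), the presentation can become inconsistent, so it may be worth checking exactly where reportNewIncomingCall is invoked relative to receiving the push.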

After CallKit integration video call app ends the call if power button is pressed

After integrating CallKit into a video call app, pressing the power button ends the call while the call is in progress.
Below is the provider configuration:
static var providerConfiguration: CXProviderConfiguration {
    let providerConfiguration = CXProviderConfiguration(localizedName: "AppName")
    providerConfiguration.supportsVideo = true
    providerConfiguration.maximumCallsPerCallGroup = 1
    providerConfiguration.supportedHandleTypes = [.phoneNumber]
    return providerConfiguration
}
Below is the CXCallUpdate used to report an incoming call:
let update = CXCallUpdate()
update.remoteHandle = CXHandle(type: .generic, value: handle)
update.supportsDTMF = true
update.hasVideo = hasVideo
update.supportsGrouping = false
update.supportsUngrouping = false
update.supportsHolding = false
Cisco Webex also integrates CallKit, and there pressing the power button does not end a video call while it is in progress, although it does end an audio call. I observed the same behaviour with WhatsApp video calls as well.
This is the intended behaviour: if you try to do the same thing with the built-in iOS Phone app, you'll get the same result.
EDIT
The power button ends a call if and only if the call audio is playing through the built-in receiver at the top of the screen. In any other case (i.e. the audio is playing through headphones, Bluetooth, or the built-in loudspeaker) the power button will not end the call.
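If you want to verify which output the call audio is actually on (and therefore whether the power button will end the call, per the rule above), you can inspect the current audio route. A small sketch using AVAudioSession:
import AVFoundation

// True when call audio is routed to the earpiece (built-in receiver) at the
// top of the screen, i.e. the one case in which the power button ends the call.
func isUsingBuiltInReceiver() -> Bool {
    let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
    return outputs.contains { $0.portType == .builtInReceiver }
}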

Hold CallKit call when a cellular call comes in

I have a problem (but not really) with CallKit.
I implemented CallKit in my app and it works great. I can get a second call to my app, and CallKit offers me the options End & Accept, Decline, or Hold & Accept. The same happens if I am in a cellular (GSM) call and I get a call in my app. But when I am in an app call (on CallKit) and get a cellular (GSM) call, I only get two options: Decline or End & Accept.
Any idea why? Or how can I get all three options?
static var providerConfiguration: CXProviderConfiguration {
    let providerConfiguration = CXProviderConfiguration(localizedName: "app name")
    providerConfiguration.supportsVideo = false
    providerConfiguration.maximumCallsPerCallGroup = 1
    providerConfiguration.maximumCallGroups = 3
    providerConfiguration.supportedHandleTypes = [.phoneNumber]
    return providerConfiguration
}
I have implemented:
providerDidReset,
CXStartCallAction,
CXAnswerCallAction,
CXEndCallAction,
CXSetHeldCallAction,
CXSetMutedCallAction,
timedOutPerforming action,
didActivate audioSession,
didDeactivate audioSession.
In my app delegate I have a function that checks user activity. I put breakpoints in all of these functions, but nothing gets called before the view for the incoming cellular (GSM) call is shown.
I googled but couldn't find a solution. As far as I can see, CallKit is working perfectly.
I struggled with this for outgoing calls. Make sure you call this method once the call is answered by the remote side:
[self.provider reportOutgoingCallWithUUID:currentCall.uuid connectedAtDate:[NSDate date]];
If you do not, the call is stuck "connecting" from CallKit's perspective, and I have found that the native incoming-call UI will not offer the "Send to Voicemail" and "Hold & Accept" options for incoming calls while another call is "connecting".
I struggled with this for a bit today until I figured that part out. I also am calling:
[self.provider reportOutgoingCallWithUUID:currentCall.uuid startedConnectingAtDate:[NSDate date]];
from within:
- (void)provider:(CXProvider *)provider performStartCallAction:(CXStartCallAction *)action
Not sure if that part is necessary but I'm doing it because that's what the Speakerbox demo does. Kind of, they do it in a callback... I just do it immediately.
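For completeness, the same two reporting calls in Swift look roughly like this (provider and currentCall are placeholders for however your code stores the CXProvider and the active call):
// In the CXProviderDelegate start-call handler: tell CallKit the outgoing
// call has started connecting, then fulfill the action.
func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
    provider.reportOutgoingCall(with: action.callUUID, startedConnectingAt: Date())
    action.fulfill()
}

// Later, when your signalling layer learns the remote side answered, mark the
// call as connected so CallKit stops treating it as "connecting".
func remoteDidAnswer() {
    provider.reportOutgoingCall(with: currentCall.uuid, connectedAt: Date())
}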
When you send the CXCallUpdate object to CallKit before the call, make sure you set supportsHolding to true.
My CXCallUpdate looks something like this:
let callHandle = CXHandle(type: .phoneNumber, value: handle)
let callUpdate = CXCallUpdate()
if userName != nil {
    callUpdate.localizedCallerName = userName
}
callUpdate.remoteHandle = callHandle
callUpdate.supportsDTMF = true
callUpdate.supportsHolding = true
callUpdate.supportsGrouping = false
callUpdate.supportsUngrouping = false
callUpdate.hasVideo = false
Meaning of the different properties above:
localizedCallerName: the name shown for the user on the system call screen; if not set, the phone number or email (depending on the handle type) is shown instead.
supportsDTMF: whether the keypad can be used to enter digits while the call is running; if false, the keypad option is disabled.
supportsHolding: if you want the call to be put on hold when another call comes in, keep this true.
supportsGrouping: if you want to allow conference calling (the Merge Call option enabled on the call screen), keep this true.
supportsUngrouping: the inverse of the previous one: whether merged (conference) calls can be ungrouped again.
hasVideo: if you support video calls, the system will automatically start the camera for you.
@Redssie, let me know if any further help related to CallKit is required.

iOS audio not working when a call is answered from the lock screen (WebRTC used for calling)

I am facing a problem with audio when using CallKit with WebRTC for VoIP calls, when the call is answered from the lock screen.
General functionality:
My app activates the audio session when it's launched. For an incoming call, the SDP offer and answer are generated and exchanged, and the peer connection is set up. Both audio and video streams are generated, whether it's an audio or a video call. The call is then reported to CallKit using the following code:
callProvider.reportNewIncomingCall(with: currentCallUUID!, update: update) { error in }
If the app is in the foreground, it works fine.
But when the phone is locked and the user answers the call from the lock screen, the streams are exchanged but no audio comes through on either end until the user opens the app.
As soon as the user enters the app, audio becomes active on both ends.
All the background settings and capabilities are set properly.
I have also tried the following workaround provided by Apple staff, but even that does not work:
https://forums.developer.apple.com/thread/64544
As I mentioned, I am using WebRTC for calling. If I exchange the media streams and set up the peer connection after the user answers the call (still on the lock screen), it works fine, but it adds a delay in establishing the call.
But if the peer connection is made before displaying the call (say, before reporting the call to CallKit), the audio stops working.
I was able to resolve this issue.
Steps that I followed:
I checked the WebRTC-related code here.
I added the RTCAudioSession header file, which is actually a private class of WebRTC. So every time I receive a call event from signaling, I enable RTCAudioSession, and at the end of the call I disable it.
I had to render the incoming streams to a dummy view (although it is not displayed while the call is ongoing and the app is not yet open, it is required to make audio work).
I hope this will help if someone is facing the same issue.
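For the third step, the "dummy view" is just an off-screen renderer that the remote video track is attached to. A rough sketch of that idea, assuming the WebRTC iOS framework and a remoteVideoTrack obtained from the peer connection:
import WebRTC

// Off-screen renderer: never added to the window, but attaching the remote
// track to it keeps the incoming stream rendered, which was needed here to
// get audio flowing before the app is opened.
let dummyRenderer = RTCMTLVideoView(frame: .zero)

func attachRemoteTrack(_ remoteVideoTrack: RTCVideoTrack) {
    remoteVideoTrack.add(dummyRenderer)
}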
@abhimanyu, are you still facing the issue or did you get it working? I am facing the same issue with CallKit.
As per my understanding, in the WebRTC M60 release they fixed an issue related to CallKit, which I think created a side effect and caused this problem.
The issue they fixed is related to the system AudioSession: whenever CallKit presents the incoming call UI and plays the ringtone, CallKit takes control of the AudioSession, and after the user's action (accept/decline) it releases control. In the WebRTC M60 release they added observers for this control exchange. That's why it works when the app is in the foreground; but when the phone is locked and an incoming call is accepted (I am assuming you are using the CallKit UI for the call and not redirecting the user to the app on accept from the lock screen), the call goes through the native CallKit screen, so WebRTC cannot activate its own AudioSession instance.
Link to the bug that was fixed in WebRTC M60: https://bugs.chromium.org/p/webrtc/issues/detail?id=7446
If you find any workaround for this issue, please let me know.
Please note that I am sharing my code for reference; it is tailored to my needs, so you will have to adapt it to yours.
When you receive the VoIP notification, create a new instance of your WebRTC handling class and add these two lines, because enabling the audio session from the VoIP notification fails:
RTCAudioSession.sharedInstance().useManualAudio = true
RTCAudioSession.sharedInstance().isAudioEnabled = false
The didReceiveIncomingPushWith method:
func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload, for type: PKPushType, completion: @escaping () -> Void) {
    let state = UIApplication.shared.applicationState
    if payload.dictionaryPayload["hangup"] == nil && state != .active {
        // I pass parameters to the WebRTC handler via a Globals singleton to create
        // the answer according to the SDP sent in the payload.
        Globals.voipPayload = payload.dictionaryPayload as! [String: Any]
        RTCAudioSession.sharedInstance().useManualAudio = true
        RTCAudioSession.sharedInstance().isAudioEnabled = false
        Globals.sipGateway = SipGateway() // my WebRTC and Janus gateway handler class
        // I check the Janus gateway credentials stored in shared preferences, initiate the
        // websocket connection and create the peer connection to my Janus gateway,
        // which is the signaling server for my environment.
        Globals.sipGateway?.configureCredentials(true)
        initProvider() // Creating the CallKit provider
        self.update.remoteHandle = CXHandle(type: .generic, value: String(describing: payload.dictionaryPayload["caller_id"]!))
        Globals.callId = UUID()
        Globals.provider.reportNewIncomingCall(with: Globals.callId, update: self.update, completion: { error in
        })
    }
}
func initProvider() {
    let config = CXProviderConfiguration(localizedName: "ulakBEL")
    config.iconTemplateImageData = UIImage(named: "ulakbel")!.pngData()
    config.ringtoneSound = "ringtone.caf"
    // config.includesCallsInRecents = false
    config.supportsVideo = false
    Globals.provider = CXProvider(configuration: config)
    Globals.provider.setDelegate(self, queue: nil)
    update = CXCallUpdate()
    update.hasVideo = false
    update.supportsDTMF = true
}
Modify your didActivate and didDeactivate delegate functions as below:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    print("CallManager didActivate")
    RTCAudioSession.sharedInstance().audioSessionDidActivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = true
    // self.callDelegate?.callIsAnswered()
}

func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    print("CallManager didDeactivate")
    RTCAudioSession.sharedInstance().audioSessionDidDeactivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = false
}
In the WebRTC handler class, configure the media senders and the audio session:
private func createPeerConnection(webRTCCallbacks: PluginHandleWebRTCCallbacksDelegate) {
    let rtcConfig = RTCConfiguration()
    rtcConfig.iceServers = server.iceServers
    rtcConfig.bundlePolicy = RTCBundlePolicy.maxBundle
    rtcConfig.rtcpMuxPolicy = RTCRtcpMuxPolicy.require
    rtcConfig.continualGatheringPolicy = .gatherContinually
    rtcConfig.sdpSemantics = .planB
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])
    pc = sessionFactory.peerConnection(with: rtcConfig, constraints: constraints, delegate: nil)
    self.createMediaSenders()
    self.configureAudioSession()
    if webRTCCallbacks.getJsep() != nil {
        handleRemoteJsep(webrtcCallbacks: webRTCCallbacks)
    }
}
Media senders:
private func createMediaSenders() {
    let streamId = "stream"
    // Audio
    let audioTrack = self.createAudioTrack()
    self.pc.add(audioTrack, streamIds: [streamId])
    // Video
    /* let videoTrack = self.createVideoTrack()
    self.localVideoTrack = videoTrack
    self.peerConnection.add(videoTrack, streamIds: [streamId])
    self.remoteVideoTrack = self.peerConnection.transceivers.first { $0.mediaType == .video }?.receiver.track as? RTCVideoTrack
    // Data
    if let dataChannel = createDataChannel() {
        dataChannel.delegate = self
        self.localDataChannel = dataChannel
    } */
}
private func createAudioTrack() -> RTCAudioTrack {
    let audioConstraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
    let audioSource = sessionFactory.audioSource(with: audioConstraints)
    let audioTrack = sessionFactory.audioTrack(with: audioSource, trackId: "audio0")
    return audioTrack
}
Audio session:
private func configureAudioSession() {
    self.rtcAudioSession.lockForConfiguration()
    do {
        try self.rtcAudioSession.setCategory(AVAudioSession.Category.playAndRecord.rawValue)
        try self.rtcAudioSession.setMode(AVAudioSession.Mode.voiceChat.rawValue)
    } catch let error {
        debugPrint("Error changing AVAudioSession category: \(error)")
    }
    self.rtcAudioSession.unlockForConfiguration()
}
Please bear in mind that because I work with callbacks and delegates, the code includes delegate and callback chunks; you can ignore them as appropriate.
For reference, you can also check the example at this link.
