Enable video calling with Linphone iOS SDK

I'm trying to enable video calling in my Swift app using Linphone.
I was able to get audio calling working, but I can't make it work with video. The app always crashes when I enable this line:
linphone_call_params_enable_video(linCallParams, 1)
I only want to receive video and audio here.
@objc func startVideoCall() {
    linphone_core_enable_video_display(theLinphone.lc, 1)
    linphone_core_enable_video_capture(theLinphone.lc, 1)

    let linCallParams = linphone_core_create_call_params(theLinphone.lc, nil)
    linphone_call_params_enable_video(linCallParams, 1)
    linphone_call_params_set_video_direction(linCallParams, LinphoneMediaDirectionSendRecv)
    linphone_call_params_set_audio_direction(linCallParams, LinphoneMediaDirectionSendRecv)

    let call = linphone_core_invite_with_params(theLinphone.lc, calleeAccount, linCallParams)

    linphone_core_set_native_video_window_id(theLinphone.lc, &videoStreamView)
    linphone_core_set_native_preview_window_id(theLinphone.lc, &videoStreamPreview)

    do {
        try audioSession.setActive(true)
    } catch {
        print("Audio error: \(error.localizedDescription)")
    }

    linphone_call_params_unref(linCallParams)
}

This combination fixed my issue. Passing &videoStreamView hands Linphone a pointer to the Swift property itself rather than to the underlying UIView object, which is why the call crashed; bridging the view to an unretained raw pointer fixes it:
private func bridge<T: AnyObject>(obj: T) -> UnsafeRawPointer {
    let pointer = Unmanaged.passUnretained(obj).toOpaque()
    return UnsafeRawPointer(pointer)
}

let viewPointer = UnsafeMutableRawPointer(mutating: bridge(obj: view))
linphone_core_set_native_video_window_id(theLinphone.lc, viewPointer)

let previewPointer = UnsafeMutableRawPointer(mutating: bridge(obj: previewStream))
linphone_core_set_native_preview_window_id(theLinphone.lc, previewPointer)
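For completeness, here is a minimal sketch of how the fix slots into the original function (assuming videoStreamView and videoStreamPreview are UIView properties, as in the question):

@objc func startVideoCall() {
    linphone_core_enable_video_display(theLinphone.lc, 1)
    linphone_core_enable_video_capture(theLinphone.lc, 1)

    // Bridge the UIViews to raw pointers instead of passing &property.
    let viewPointer = UnsafeMutableRawPointer(mutating: bridge(obj: videoStreamView))
    linphone_core_set_native_video_window_id(theLinphone.lc, viewPointer)
    let previewPointer = UnsafeMutableRawPointer(mutating: bridge(obj: videoStreamPreview))
    linphone_core_set_native_preview_window_id(theLinphone.lc, previewPointer)

    let linCallParams = linphone_core_create_call_params(theLinphone.lc, nil)
    linphone_call_params_enable_video(linCallParams, 1)
    linphone_call_params_set_video_direction(linCallParams, LinphoneMediaDirectionSendRecv)
    linphone_call_params_set_audio_direction(linCallParams, LinphoneMediaDirectionSendRecv)
    linphone_core_invite_with_params(theLinphone.lc, calleeAccount, linCallParams)
    linphone_call_params_unref(linCallParams)
}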

Related

Switch Audio Between Speaker, Built in mic, Bluetooth or No Audio

I am working on a video/audio call app where I need to provide four options for audio output:
speaker, built-in mic, any BLE device supporting audio, or no audio output.
Below are the functions I have used:
static func setBuiltInMic() {
    guard let inputs = audioSession.availableInputs else { return }
    for input in inputs where input.portType == .builtInMic {
        do {
            try audioSession.setPreferredInput(input)
        } catch {
            print("Setting Built in Mic Port: \(error.localizedDescription)")
        }
    }
}
static func setAndCheckBLEAudioPort() -> Bool {
    guard let inputs = audioSession.availableInputs else { return false }
    for input in inputs where input.portType == .bluetoothHFP {
        do {
            try audioSession.setPreferredInput(input)
            return true
        } catch {
            print("Setting BLE Port: \(error.localizedDescription)")
            return false
        }
    }
    return false
}
static func setupAudioSession(isSpeakerEnabled: Bool) {
    do {
        try audioSession.setCategory(.playAndRecord)
        try audioSession.setMode(.voiceChat)
        try audioSession.overrideOutputAudioPort(isSpeakerEnabled ? .speaker : .none)
        try audioSession.setActive(true, options: [])
    } catch let error as NSError {
        print("Fail: \(error.localizedDescription)")
    }
}
But this doesn't work: audio keeps coming from a different source, such as the speaker, even when I try to mute it using setupAudioSession.
Does anyone have an idea, or a reference I could look into?
The code turned out to be working fine; the issue was with a third-party library we used for audio and video calls prior to Twilio.
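If you hit something similar, one way to verify where audio is actually being routed is to observe route changes on the session; a minimal sketch using the standard AVAudioSession notification:

import AVFoundation

static func observeRouteChanges() {
    NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                           object: nil,
                                           queue: .main) { _ in
        // Print the ports audio is currently flowing to.
        for output in AVAudioSession.sharedInstance().currentRoute.outputs {
            print("Output route: \(output.portType.rawValue) (\(output.portName))")
        }
    }
}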

CallKit Audio session starts when navigating to the application only

I'm working on a VoIP app and want to support holding.
When a second call comes in and I hold my current call, then switch back to the first call, I hear no sound at all.
The only way to get the audio back is to navigate from the native CallKit screen into my app; then I can hear the voice.
func configureAudioSession() {
    _ = try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: .mixWithOthers)
    _ = try? AVAudioSession.sharedInstance().overrideOutputAudioPort(.none)
    _ = try? AVAudioSession.sharedInstance().setMode(.voiceChat)
}

func startAudio() {
    print("Starting audio")
    do {
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Failed to activate audio session: \(error.localizedDescription)")
    }
}

func stopAudio() {
    print("Stopping audio")
    do {
        try AVAudioSession.sharedInstance().setActive(false)
    } catch {
        print("Failed to deactivate audio session: \(error.localizedDescription)")
    }
}
For supporting holding, you don't have to start/stop the audio session; instead you can use CXSetHeldCallAction, provided by CallKit itself. Here is the code for hold that I use:
let callKitCallController = CXCallController()

func performHoldAction(isOnHold: Bool, uuid: UUID) {
    let holdCallAction = CXSetHeldCallAction(call: uuid, onHold: isOnHold)
    let transaction = CXTransaction(action: holdCallAction)
    callKitCallController.request(transaction) { error in
        if let error = error {
            CPrint("holdCallAction transaction request failed: \(error.localizedDescription).")
            return
        }
        CPrint("holdCallAction transaction request successful")
    }
}
Once the system puts the call on hold (via the method above, because another incoming call was accepted, or for any other reason), the CXProviderDelegate method func provider(_ provider: CXProvider, perform action: CXSetHeldCallAction) gives you the callback with the details.
Here, the system/CallKit itself manages the audio; you don't have to explicitly start or stop audio for holding.
Note: make sure you set supportsHolding to true on the CXCallUpdate you passed for the new call.
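A minimal sketch of that delegate callback and the supportsHolding flag (the comment bodies are placeholders, not verbatim from the answer):

func provider(_ provider: CXProvider, perform action: CXSetHeldCallAction) {
    if action.isOnHold {
        // Pause your media pipeline for this call (e.g. mute the RTP/audio streams).
    } else {
        // Resume media; CallKit reactivates the audio session for you.
    }
    action.fulfill()
}

// When reporting the new incoming call:
let update = CXCallUpdate()
update.supportsHolding = true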

How to 10 second forward or backward in Spotify player

I am trying to skip the song forward 10 seconds or backward 10 seconds in the Spotify player, but I am really confused about how to do it.
When I try this code, the playback position does not change:
// forward button action
@IBAction func moveFrdBtnAction(_ sender: Any) {
    SpotifyManager.shared.audioStreaming(SpotifyManager.shared.player, didSeekToPosition: TimeInterval(10))
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    player?.seek(to: position, callback: { (error) in
        let songDuration = audioStreaming.metadata.currentTrack?.duration as Any as! Double
        self.delegate?.getSongTime(timeCount: Int(songDuration) + 1)
    })
}
We are making a music application using the same SDK on both platforms (Android & iOS). The seekToPosition method of the Spotify SDK works correctly in the Android version, but not in the iOS one. The delegate method gets called, but the music stops.
Can you let us know why this happens, and what we should do to make it work on iOS devices as well?
I've tried to solve this but with no results yet. Any help would be greatly appreciated. Thanks in advance.
I don't use this API, so my answer is based on your code and Spotify's reference documentation.
I think there are a few things wrong with your flow:
As Robert Dresler commented, you should (approximately) never call a delegate directly; a delegate calls you.
I'm pretty sure your action currently results in jumping to exactly 10 seconds, not by 10 seconds.
(As an aside, I'd suggest renaming moveFrdBtnAction to at least add more vowels.)
Anyway, here's my best guess at what you want:
// forward button action
@IBAction func moveForwardButtonAction(_ sender: Any) {
    skipAudio(by: 10)
}

@IBAction func moveBackButtonAction(_ sender: Any) {
    skipAudio(by: -10)
}

func skipAudio(by interval: TimeInterval) {
    if let player = player {
        let position = player.playbackState.position // The documentation alludes to milliseconds, but the examples don't.
        player.seek(to: position + interval, callback: { (error) in
            // Handle the error (if any)
        })
    }
}

// spotify delegate method seekToPosition
func audioStreaming(_ audioStreaming: SPTAudioStreamingController!, didSeekToPosition position: TimeInterval) {
    // Update your UI
}
Note that I have not handled seeking before the start of the track, nor past its end, both of which could happen with a simple position + interval. The API may handle this for you, or it may not.
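If the API doesn't clamp for you, here is a hedged sketch of doing it yourself (assuming position and the track's duration are expressed in the same unit):

func skipAudio(by interval: TimeInterval) {
    guard let player = player else { return }
    let duration = player.metadata?.currentTrack?.duration ?? TimeInterval.greatestFiniteMagnitude
    // Clamp the target position to [0, duration].
    let target = min(max(player.playbackState.position + interval, 0), duration)
    player.seek(to: target, callback: { (error) in
        // Handle the error (if any)
    })
}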
You could take a look at the examples here: spotify/ios-sdk. In the NowPlayingView example they use seekForward15Seconds; maybe you could use that? If you still need 10 seconds, I have added a function below. Note that the position is in milliseconds; the docs say: "position: The position to seek to in milliseconds".
ViewController.swift
var appRemote: SPTAppRemote {
    get {
        return AppDelegate.sharedInstance.appRemote
    }
}

fileprivate func seekForward15Seconds() {
    appRemote.playerAPI?.seekForward15Seconds(defaultCallback)
}

fileprivate func seekBackward15Seconds() {
    appRemote.playerAPI?.seekBackward15Seconds(defaultCallback)
}

// TODO: Or you could try this function
func seekForward(seconds: Int) {
    appRemote.playerAPI?.getPlayerState({ (result, error) in
        guard let playerState = result as? SPTAppRemotePlayerState else { return }
        // playback position is in milliseconds
        let currentPosition = playerState.playbackPosition
        let secondsInMilliseconds = seconds * 1000
        self.appRemote.playerAPI?.seek(toPosition: currentPosition + secondsInMilliseconds, callback: { (_, error) in
            guard error == nil else {
                print(error!)
                return
            }
        })
    })
}

var defaultCallback: SPTAppRemoteCallback {
    get {
        return { [weak self] _, error in
            if let error = error {
                self?.displayError(error as NSError)
            }
        }
    }
}
AppDelegate.swift
lazy var appRemote: SPTAppRemote = {
    let configuration = SPTConfiguration(clientID: self.clientIdentifier, redirectURL: self.redirectUri)
    let appRemote = SPTAppRemote(configuration: configuration, logLevel: .debug)
    appRemote.connectionParameters.accessToken = self.accessToken
    appRemote.delegate = self
    return appRemote
}()

class var sharedInstance: AppDelegate {
    get {
        return UIApplication.shared.delegate as! AppDelegate
    }
}
Edit 1:
For this to work you need to follow the "Prepare Your Environment" steps: add the SpotifyiOS.framework to your Xcode project.
Hope it helps!

How to use main speaker programmatically in Swift

I'm implementing video chat using WebRTC. I want to use the main (loud) speaker when the other participant joins the session. I wrote the code below for that, but I'm getting a low voice volume (the voice comes from the ear speaker).
func audioSetting() {
    RTCAudioSession.sharedInstance().isAudioEnabled = true
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .videoChat, options: [])
        if self.speakerOn {
            try session.overrideOutputAudioPort(.none)
        } else {
            try session.overrideOutputAudioPort(.speaker)
        }
        try session.setActive(true)
        self.speakerOn = !self.speakerOn
    } catch {
        print("Couldn't set audio to speaker: \(error)")
    }
}
I am working on WebRTC with Socket.IO.
func setSpeakerStates(enabled: Bool) {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord)
    try? session.setMode(.voiceChat)
    if enabled {
        try? session.overrideOutputAudioPort(.speaker)
    } else {
        try? session.overrideOutputAudioPort(.none)
    }
    try? session.setActive(true)
}
Please call this method at the end of viewDidLoad, after adding the audio and video streams.
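For example (a hypothetical call site; setupStreams stands in for whatever code attaches your WebRTC tracks):

override func viewDidLoad() {
    super.viewDidLoad()
    setupStreams()                    // hypothetical: attach WebRTC audio/video tracks
    setSpeakerStates(enabled: true)   // route output to the loudspeaker
}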

setup MPRemoteCommandCenter in watchOS Now Playing with WKAudioFilePlayer

I'm working on an app that plays audio on Apple Watch; everything works fine from inside the app.
I'm trying to set up the MPRemoteCommandCenter in 'Now Playing' to change next/previous track to skipForward/skipBackward, and to add a handler for the pause command.
But nothing changes in the Now Playing commands, and the handlers are not being triggered.
Below is a code snippet:
Play method
var player: WKAudioFilePlayer!

@IBAction func play() {
    let avSession = AVAudioSession.sharedInstance()
    try! avSession.setCategory(.playback, mode: .default, policy: .longForm, options: [])
    let url = Bundle.main.url(forResource: "sample", withExtension: "mp3")!
    let item = WKAudioFilePlayerItem(asset: WKAudioFileAsset(url: url))
    player = WKAudioFilePlayer(playerItem: item)
    do {
        try avSession.setActive(true, options: .notifyOthersOnDeactivation)
        player.play()
    } catch {
        print("Error activating audio session: \(error)")
    }
}
Remote controls setup method:
func setupRemoteControls() {
    // Get the shared MPRemoteCommandCenter
    let commandCenter = MPRemoteCommandCenter.shared()

    commandCenter.skipForwardCommand.preferredIntervals = [NSNumber(value: 15)]
    commandCenter.skipForwardCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
        // skip forward
        return .success
    }
    commandCenter.skipForwardCommand.isEnabled = true

    commandCenter.skipBackwardCommand.preferredIntervals = [NSNumber(value: 15)]
    commandCenter.skipBackwardCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
        // skip backward
        return .success
    }
    commandCenter.skipBackwardCommand.isEnabled = true

    // Add handler for play command
    commandCenter.playCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
        self.player.play()
        return .success
    }

    // Add handler for pause command
    commandCenter.pauseCommand.addTarget { (event) -> MPRemoteCommandHandlerStatus in
        self.player.pause()
        return .success
    }
}
I'm calling self.setupRemoteControls() in the awake method. I also tried moving the setup to ExtensionDelegate -> applicationDidFinishLaunching.
Apple references I used:
https://developer.apple.com/videos/play/wwdc2018/504/
https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/MediaPlaybackGuide/Contents/Resources/en.lproj/RefiningTheUserExperience/RefiningTheUserExperience.html
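One thing worth checking (an assumption on my part, not confirmed for WKAudioFilePlayer): the system tends to surface remote commands only while your app is the now-playing app, which usually means publishing metadata through MPNowPlayingInfoCenter, e.g.:

import MediaPlayer

func publishNowPlayingInfo(title: String, duration: TimeInterval) {
    // Minimal metadata so the system knows what is playing.
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
}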
:: UPDATE ::
I found that when using AVAudioPlayer instead of WKAudioFilePlayer, the MPRemoteCommandCenter setup works fine.
The problem is that I don't use local audio files! I stream 'm3u8' files via Wowza, which only worked with WKAudioFilePlayer.
If I try to stream using AVAudioPlayer, I get this error:
The operation couldn't be completed. (OSStatus error 2003334207.)
So my issue now is to get MPRemoteCommandCenter configured while still using WKAudioFilePlayer, or to find a way to stream using AVAudioPlayer.
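One avenue that might be worth exploring (an assumption, not verified against the Wowza setup above): AVPlayer, unlike AVAudioPlayer, can stream HLS (m3u8) URLs directly, so it may allow streaming while keeping MPRemoteCommandCenter working. A minimal sketch:

import AVFoundation

var streamPlayer: AVPlayer?

func playStream() {
    let avSession = AVAudioSession.sharedInstance()
    try? avSession.setCategory(.playback, mode: .default, policy: .longForm, options: [])
    // On watchOS, long-form sessions are activated asynchronously.
    avSession.activate(options: []) { success, _ in
        guard success else { return }
        // Hypothetical URL – replace with your Wowza m3u8 endpoint.
        guard let url = URL(string: "https://example.com/stream.m3u8") else { return }
        streamPlayer = AVPlayer(url: url)
        streamPlayer?.play()
    }
}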
