I am creating a voice-only (no video) chat application. I have created my own Node.js/Socket.IO-based server for signaling.
For WebRTC, I am using the following pod: https://cocoapods.org/pods/WebRTC
I have been successful in creating the peer connection, adding the local stream, setting the local/remote SDP, and sending/receiving ICE candidates. The "didAddStream" delegate method is also called successfully with audio tracks, but I am stuck here. I don't know what I should do with the audio track. What should be the next step? How do I send/receive audio on both sides?
Also, if I integrate CallKit, what changes do I need to make?
I got stuck on this one too. You have to retain the RTCMediaStream object in order for the audio to play. You don't need to do anything with the RTCAudioTrack; it will play automatically. I simply assign it to a property so it gets retained. See my example here: https://github.com/redfearnk/WebRTCVideoChat/blob/master/WebRTCVideoChat/WebRTCClient.swift#L143
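For illustration, a minimal sketch of that pattern, assuming the pod's Swift API; the class and property names here are placeholders, not taken from the linked project:

import WebRTC

final class WebRTCClient: NSObject {
    // Retaining the stream is what keeps the remote audio playing.
    private var remoteStream: RTCMediaStream?

    // RTCPeerConnectionDelegate callback (other delegate methods omitted).
    func peerConnection(_ peerConnection: RTCPeerConnection,
                        didAdd stream: RTCMediaStream) {
        remoteStream = stream // no further work needed; the audio track plays by itself
    }
}

Once both peers retain their incoming streams, audio flows in both directions with no extra playback code.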
For an iOS application, I need to know when the remote party has stopped its video during an existing call. Is there any API or event triggered by the pjsip library?
Thanks in advance for your help.
Method reference for pjsua_call_vid_stream_is_running:
Determine if video stream for the specified call is currently running (i.e. has been created, started, and not being paused) for the specified direction.
https://www.pjsip.org/pjsip/docs/html/group__PJSUA__LIB__CALL.htm#ga23c0bd5a335b5fa0d02404cd03ca0d5e
We can check whether the incoming video is running by querying the stream in the decoding direction, PJMEDIA_DIR_DECODING:
/* PJ_TRUE while the remote video stream has been created, started, and not paused. */
pj_bool_t video_active =
    pjsua_call_vid_stream_is_running(call_id, med_idx, PJMEDIA_DIR_DECODING);
I have implemented OpenTok in my app and did all the necessary setup. I went through the sample code and did exactly the same; when calling the destination phone, the call arrives perfectly, but then the issues begin.
Sometimes the call connects on both devices, but most of the time it does not: the receiver's phone starts the call with the timer counting, yet no voice comes through in either direction.
Sometimes the loudspeaker becomes active automatically on the receiver's phone, and it cannot be disabled by tapping.
Has anybody been through this situation and resolved it?
Any help will be appreciated.
Thanks.
If you're using CallKit for voice calls, follow the solution below to make your audio session active. When making or answering a call:
Deactivate the audio session.
Initiate your audio session again.
Answer/make the call.
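A minimal sketch of that sequence, assuming a CXProviderDelegate; the CallManager name and the category/mode values are illustrative assumptions, not part of this answer:

import AVFoundation
import CallKit

final class CallManager: NSObject, CXProviderDelegate {
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        do {
            try audioSession.setActive(false)            // 1. deactivate the audio session
            try audioSession.setCategory(.playAndRecord, // 2. initiate it again
                                         mode: .voiceChat,
                                         options: [.allowBluetooth])
            try audioSession.setActive(true)
        } catch {
            print("Audio session error: \(error)")
        }
        // 3. answer/make the call: start your call audio here.
    }

    func providerDidReset(_ provider: CXProvider) {
        // Tear down any in-progress call audio here.
    }
}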
In my app I am streaming audio, and there is a period of 5-10 seconds, depending on the connection, where the buffer is loading; after that, my app starts to play the audio. When playback starts, this symbol comes up on the screen.
Here is an image of what I'm talking about:
http://img27.imageshack.us/img27/3667/img0596.png
I want to change a label in my app when this symbol comes up on the screen, but I don't know which function lets me detect this.
The symbol is the "Play" button common to music devices. There is most likely an NSNotificationCenter notification that can be listened for. However, depending on how you are buffering your sounds, there is probably a delegate that can notify a selector once playback has begun. Without more details I can't give more specific advice. If I were in your position, I would take a hard look at the API you are using; most likely several methods exist that either post notifications or send delegate messages reporting the state of the stream and of playback. I have worked with some streaming audio APIs and was able to get the status of the buffer as well as many other messages from the stream object(s). These are part of good design, so most likely they are there. A sketch of one common approach is below.
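A minimal sketch of that approach, assuming playback goes through AVPlayer (the question doesn't name the API); statusLabel and streamURL are illustrative placeholders:

import AVFoundation
import UIKit

final class StreamViewController: UIViewController {
    private let statusLabel = UILabel()
    private var player: AVPlayer?
    private var observation: NSKeyValueObservation?

    func startStream(from streamURL: URL) {
        let player = AVPlayer(url: streamURL)
        self.player = player

        // Fires when buffering ends and actual playback begins.
        observation = player.observe(\.timeControlStatus, options: [.new]) { [weak self] player, _ in
            guard player.timeControlStatus == .playing else { return }
            DispatchQueue.main.async { self?.statusLabel.text = "Playing" }
        }
        player.play()
    }
}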
I want to create voice chat software using FMOD.
Now I can receive data from microphone and play it immediately.
It's also easy to send sound data to another computer on the network.
But I don't know how to play the sent data on the other computer with FMOD.
Can anyone help me?
When you receive the incoming sound data on the destination machine, you need to create a streaming buffer to play the audio. The simplest method is to look at the usercreatedsound example that ships with FMOD. It shows how to create a custom stream buffer and use the pcmreadcallback to populate the sound with data as needed.
I am using AVPlayer to play an audio stream, and it's possible to keep it playing in the background. I'm wondering how I could handle a situation where the user loses internet connectivity, so that I could provide some feedback or maybe try to re-establish playback after a few seconds.
EDIT: I know that the question regards AVPlayer, but an answer using MPMoviePlayerController might be useful as well. Right now, with MPMoviePlayerController, I'm trying to catch the MPMovieFinishReasonPlaybackError case of MPMoviePlayerPlaybackDidFinishReasonUserInfoKey by subscribing to the MPMoviePlayerPlaybackDidFinishNotification. But if, for example, my audio is playing in the background and I turn airplane mode on, I never get this notification; I only get MPMovieFinishReasonPlaybackEnded, and I don't know how to distinguish that from the user stopping the audio himself.
I tried looking around for the actual source, but I remember reading somewhere that if audio playback stops (for whatever reason), it kills the background thread. The person describing the issue talked about possibly feeding the stream some empty audio content to keep the thread alive. You might be able to send a local notification from an error callback, notifying the user that the audio experienced an error and will have to be restarted manually from within the application. I haven't played around with the API enough to know which callback is best in this case. If I find the link I'm looking for, I'll update. A sketch of one way to detect the failure with AVPlayer is below.
EDIT: Here's Grant Pannell's take on audio streaming and multitasking.
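A minimal sketch, not from the linked article: AVPlayerItemFailedToPlayToEndTime fires when a stream dies mid-playback, which is one place to surface feedback or schedule a retry. streamURL is an assumed placeholder.

import AVFoundation

func startMonitoredPlayback(of streamURL: URL) -> AVPlayer {
    let item = AVPlayerItem(url: streamURL)
    let player = AVPlayer(playerItem: item)

    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: item,
        queue: .main
    ) { notification in
        let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Playback failed: \(String(describing: error))")
        // Naive retry after a few seconds; real code should rebuild the player item.
        DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
            player.play()
        }
    }

    player.play()
    return player
}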