I want to mute myself in a Twilio video app, but the problem is that when I use the method mentioned in the documentation,
room.localParticipant.videoTracks.forEach((publication) => {
  publication.track.disable();
});
this hides the video on my screen, but the remote participants can still see my video. The same happens for audio. When I tried to debug, the tracks received on the remote participant's side still had enabled set to true. How should I properly mute/turn off my video so that the change is reflected for remote participants?
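A minimal sketch of the intended toggle, assuming the Twilio Video JS SDK; the helper names setLocalVideoEnabled and watchRemoteMute are illustrative, not part of the SDK. The key point is that the remote side should react to the trackDisabled/trackEnabled events rather than polling the track's enabled flag:

// Toggle all local video publications on or off.
function setLocalVideoEnabled(room, enabled) {
  room.localParticipant.videoTracks.forEach((publication) => {
    if (enabled) {
      publication.track.enable();
    } else {
      publication.track.disable();
    }
  });
}

// On the remote side, listen for the disable/enable events.
function watchRemoteMute(participant) {
  participant.on('trackDisabled', (publication) => {
    console.log(`${publication.kind} muted by ${participant.identity}`);
  });
  participant.on('trackEnabled', (publication) => {
    console.log(`${publication.kind} unmuted by ${participant.identity}`);
  });
}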
I am working on an iOS application (Swift) that uses the TokBox SDK for video chat, and now I want to add mute/unmute and video on/off buttons to the video chat.
I want a single set of mute/unmute and video on/off buttons at the bottom of the screen, common to both publisher and subscriber.
Assume two people have joined the call: when the publisher taps the mute button, only the publisher's audio should be muted, and likewise for the subscriber.
How do I find whether a user is a publisher or a subscriber so that I can mute their audio individually?
Can anyone guide me?
If I am understanding correctly, you want to be able to mute in a two-person, 1-to-1 call?
To mute a person in a call, you need to stop publishing audio to the session. This means that anyone subscribed to this publisher will not get the audio. It works the same way for video.
So as long as you are toggling audio/video on the publisher, it will work fine (see the sketch after the link below). In a 1-to-1 call like this, both users/devices are publishing and subscribing at the same time.
Device A is publishing audio/video to a session, and subscribing to audio/video from the session (in this case from Device B).
Device B is publishing audio/video to a session, and subscribing to audio/video from the session (in this case from Device A).
More info is available here: https://tokbox.com/developer/guides/audio-video/ios-swift/
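A minimal sketch of that toggle, assuming the OpenTok iOS SDK (the class and action names are illustrative). Each device flips its own OTPublisher, so "mute" simply stops publishing that stream to the session:

import UIKit
import OpenTok

final class CallViewController: UIViewController {
    var publisher: OTPublisher?

    // Stop/start sending audio to the session; subscribers on the
    // other side stop receiving it automatically.
    @IBAction func toggleMute(_ sender: UIButton) {
        publisher?.publishAudio.toggle()
    }

    // Same idea for video.
    @IBAction func toggleVideo(_ sender: UIButton) {
        publisher?.publishVideo.toggle()
    }
}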
I have set up an event handler for the trackPublished event so that I can update the video element with the remote user's video stream when it's published after the local user has already connected.
participant.on('trackPublished', publication => {
  trackSubscribed(remoteMediaDiv, publication.track);
});
The issue is that the track is always null. It's also not subscribed; I think that might be related, but I don't have a great grasp of the publish/subscribe states.
The local user loads the remote streams via trackSubscribed without any issue when the remote user connects to the room before the local user.
Any ideas?
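One likely explanation, assuming the Twilio Video JS SDK: publication.track stays null until the track is actually subscribed to. A hedged sketch of the fix is to wait for the publication's subscribed event:

participant.on('trackPublished', (publication) => {
  if (publication.isSubscribed) {
    // Already subscribed; the track is available immediately.
    trackSubscribed(remoteMediaDiv, publication.track);
  } else {
    // Otherwise the track arrives with the 'subscribed' event.
    publication.on('subscribed', (track) => {
      trackSubscribed(remoteMediaDiv, track);
    });
  }
});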
I am creating a voice-only (no video) chat application. I have created my own node.js/socket.io-based server for signaling.
For WebRTC, I am using the following pod: https://cocoapods.org/pods/WebRTC
I have been successful in creating a peer connection, adding the local stream, setting the local/remote SDP, and sending/receiving ICE candidates. The "didAddStream" delegate method is also called successfully with audio tracks, but I am stuck there: I don't know what I should do with the audio track. What should the next step be? How do I send/receive audio on both sides?
Also, if I integrate CallKit, what changes do I need to make?
I got stuck on this one too. You have to retain the RTCMediaStream object in order for the audio to play. You don't need to do anything with the RTCAudioTrack; it will play automatically. I simply assign it to a property so it gets retained. See my example here: https://github.com/redfearnk/WebRTCVideoChat/blob/master/WebRTCVideoChat/WebRTCClient.swift#L143
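A minimal sketch of that retain pattern, assuming the Google WebRTC pod (the class name is illustrative; unused delegate requirements are left as empty stubs):

import WebRTC

final class AudioCallClient: NSObject, RTCPeerConnectionDelegate {
    // Holding this strong reference is what keeps the audio playing.
    private var remoteStream: RTCMediaStream?

    func peerConnection(_ peerConnection: RTCPeerConnection, didAdd stream: RTCMediaStream) {
        remoteStream = stream   // retain it; the audio track plays on its own
    }

    // Remaining delegate requirements, unused in this sketch:
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange stateChanged: RTCSignalingState) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didRemove stream: RTCMediaStream) {}
    func peerConnectionShouldNegotiate(_ peerConnection: RTCPeerConnection) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange newState: RTCIceConnectionState) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didChange newState: RTCIceGatheringState) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didGenerate candidate: RTCIceCandidate) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didRemove candidates: [RTCIceCandidate]) {}
    func peerConnection(_ peerConnection: RTCPeerConnection, didOpen dataChannel: RTCDataChannel) {}
}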
In my app I am streaming audio, and there is a period of 5-10 seconds, depending on the connection, where the buffer is loading; after this, my app starts to play the audio. When it starts to play the audio, this symbol comes up on the screen.
Here is an image of what I'm talking about:
http://img27.imageshack.us/img27/3667/img0596.png
I want to change a label in my app when this symbol comes up on the screen, but I don't know which function lets me detect this.
The symbol is the "Play" button common to music devices. There is most likely an NSNotificationCenter message that can be listened for. Depending on how you are buffering your sounds, there is probably also a delegate that can notify a selector once playback has begun. Without more details I cannot give more specific advice. In your position I would take a very hard look at the API you are using: most likely several methods exist to either post notifications or send delegate messages reporting the state of the stream as well as of playback. I have worked with some streaming audio APIs and was able to get the status of the buffer as well as many other messages from the stream object(s). These are just part of good design, so most likely they are there.
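For example, if the stream is driven by AVPlayer (an assumption; the question does not name the API), the buffering-to-playing transition can be observed via KVO on timeControlStatus (available since iOS 10):

import AVFoundation

final class StreamObserver {
    private var observation: NSKeyValueObservation?

    func watch(_ player: AVPlayer, onPlaybackStarted: @escaping () -> Void) {
        observation = player.observe(\.timeControlStatus, options: [.new]) { player, _ in
            if player.timeControlStatus == .playing {
                onPlaybackStarted()   // e.g. update the label here
            }
        }
    }
}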
I am using AVPlayer to play an audio stream, and it's possible to keep it playing in the background. I'm wondering how I could handle a situation where the user loses internet connectivity, so that I could provide some feedback or maybe try to re-establish playback after a few seconds.
EDIT: I know that the question regards AVPlayer, but an answer using MPMoviePlayerController might be useful as well. Right now, using MPMoviePlayerController, I'm trying to catch the MPMovieFinishReasonPlaybackError case of MPMoviePlayerPlaybackDidFinishReasonUserInfoKey by subscribing to MPMoviePlayerPlaybackDidFinishNotification, but if, for example, my audio is playing in the background and I turn airplane mode on, I never get this notification; I only get MPMovieFinishReasonPlaybackEnded, and I don't know how to separate that from the case where the user stops the audio himself.
I tried looking for the actual source, but I remember reading somewhere that if audio playback stops (for whatever reason), it kills the background thread. The person writing about the issue talked about possibly feeding the stream some empty audio content to keep the thread alive. You might be able to send a local notification from an error callback telling the user that the audio hit an error and will have to be restarted manually from within the application. I haven't played with the API enough to know which callback is the best one to use in this case. If I find the link I'm looking for, I'll update.
EDIT: Here's Grant Pannell's take on audio streaming and multitasking.
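As a hedged sketch, assuming AVPlayer: the player item posts stall and failure notifications that can be told apart from a normal user-initiated stop:

import AVFoundation

final class PlaybackMonitor {
    private var tokens: [NSObjectProtocol] = []

    init(item: AVPlayerItem, onInterruption: @escaping () -> Void) {
        let center = NotificationCenter.default
        // Buffering ran dry mid-stream, e.g. connectivity was lost.
        tokens.append(center.addObserver(forName: .AVPlayerItemPlaybackStalled,
                                         object: item, queue: .main) { _ in
            onInterruption()
        })
        // The item failed outright before reaching its natural end.
        tokens.append(center.addObserver(forName: .AVPlayerItemFailedToPlayToEndTime,
                                         object: item, queue: .main) { _ in
            onInterruption()
        })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}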