We want to use the Twilio Video SDKs in our Web, Android, and iOS applications. Twilio provides most of the conferencing functionality out of the box, but we need to add some features of our own on top of the SDK in order to use it in our application.
Feature needed: a minimize button in the video conference room that shrinks the call screen so the user can keep using the base application while staying on the call (similar to a WhatsApp video call). Once the call is minimized, a maximize button should let the user switch back to the full call screen.
Our basic requirements are:
Audio and Video conferencing
Screen sharing
Recording of meetings
Mute options control for audio/video
Participants limit:
Minimum: 3, Maximum: 50
Duration limit:
Minimum: 30 minutes, Maximum: 240 minutes
Requirements specific to the Web Application (in React):
Conferencing controls reside within the application. The existing Web app will be the base interface for video conferencing.
Any participant can mute the audio/video of any other participant at will.
Requirements specific to the Mobile Application (in Flutter):
The user can switch between the video call and our application (identical to how a WhatsApp video call works): the video call screen gets minimized and the user can use the application normally while still being present in the conference.
How could I go ahead with this? Any help?
For the Flutter solution we are building an open-source plugin hosted on pub.dev. At the moment screen sharing has not been added yet, but it will be eventually. The API docs can be found here.
Regarding switching like WhatsApp, you need to look into Picture-in-Picture (PiP) mode, which is also part of the development phase of this plugin. You can find the milestone here and the issues related to it here.
You mention React for the Web. There are already plugins for React on the web, but the Web will also be supported in the Flutter solution.
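To give a concrete starting point for the React web app, here is a minimal sketch (my own illustration, not part of the plugin above) using the twilio-video JavaScript SDK: join a room, toggle your own audio, and use the browser's Picture-in-Picture API as a simple minimize/maximize affordance. The /video-token endpoint and element IDs are hypothetical, and remotely muting other participants would still need your own signalling or server-side logic.

```javascript
// Sketch only: join a Twilio room, toggle local audio, and "minimize" via browser PiP.
// The '/video-token' endpoint and 'remote-media' element ID are hypothetical.
import { connect } from 'twilio-video';

async function joinRoom(roomName, identity) {
  // Fetch an access token from your own backend.
  const res = await fetch(`/video-token?identity=${identity}&room=${roomName}`);
  const { token } = await res.json();

  // Connect with local audio and video enabled.
  const room = await connect(token, { name: roomName, audio: true, video: true });

  // Render each remote video track as it is subscribed.
  room.on('trackSubscribed', (track) => {
    if (track.kind === 'video') {
      document.getElementById('remote-media').appendChild(track.attach());
    }
  });
  return room;
}

// Each participant controls their own tracks; muting *others* needs extra signalling.
function setLocalAudioEnabled(room, enabled) {
  room.localParticipant.audioTracks.forEach((publication) => {
    if (enabled) publication.track.enable();
    else publication.track.disable();
  });
}

// Minimize/maximize on the web: float a <video> element with the PiP API.
async function toggleMinimize(videoElement) {
  if (document.pictureInPictureElement) {
    await document.exitPictureInPicture();        // maximize
  } else {
    await videoElement.requestPictureInPicture(); // minimize
  }
}
```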
Related
I want users on a video networking web platform to only need to grant camera permission once and then be able to have separate video chats with multiple users. Part of the "event" will have multiple one-to-one video chats: there is a one-to-one video chat with one user, it ends, then another one-to-one chat with another user, it ends, and so on. As it is, the permission has to be granted for each separate video chat. I am having this issue primarily with iOS on Safari. Someone else is building this web platform for me and they have not been able to solve the issue with the video plugin they are using; they claim it is an issue with Mac devices that cannot grant permission to particular websites. But I know this issue has been solved on other networking platforms. Can I accomplish this with TokBox (Vonage)? Or please tell me what video platform to use and the specific way to accomplish this. I am not a developer, but I will pass on exactly what you give me to my developer team. I am considering having the website rebuilt with TokBox, but first I want to be sure this is possible. The website is being built with PHP, but this issue is so significant that I might have it rebuilt from scratch in whatever way is needed. Thank you very much! I know this issue is solvable, as I've seen it work on other platforms such as Zoom and other video networking platforms like Remo. Thanks!
The Agora Web SDK requests camera and microphone permissions from the browser. Remembering these permissions is done by the browser itself, not by the JavaScript SDK.
As for how other platforms offer this: those are native apps, not mobile websites, so they don't have to play by the rules of the browser. If you are interested in that, you can take a look at our iOS SDK.
https://docs.agora.io/en/Video/start_call_ios?platform=iOS
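What you can do within a single page session is request the devices once and reuse the same local tracks across successive one-to-one chats, so the browser only prompts the first time. Here is a minimal sketch assuming the Agora Web SDK 4.x (agora-rtc-sdk-ng); the app ID, channel names, and tokens are placeholders for your own project values.

```javascript
// Sketch: acquire the camera and microphone once per page session and reuse the same
// local tracks across several one-to-one calls, so the permission prompt appears once.
import AgoraRTC from 'agora-rtc-sdk-ng';

const APP_ID = 'your-app-id'; // placeholder
let localTracks = null;       // [microphoneTrack, cameraTrack], created once

async function getLocalTracks() {
  if (!localTracks) {
    // Triggers the browser permission prompt only on the first call.
    localTracks = await AgoraRTC.createMicrophoneAndCameraTracks();
  }
  return localTracks;
}

// Join a one-to-one chat and publish the reused tracks; return the client so the
// caller can leave the channel later without releasing the tracks.
async function startChat(channelName, token) {
  const client = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' });
  const [micTrack, camTrack] = await getLocalTracks();
  await client.join(APP_ID, channelName, token || null, null);
  await client.publish([micTrack, camTrack]);
  return client;
}

// End a chat but keep the tracks alive for the next one (no new permission prompt).
async function endChat(client) {
  await client.leave();
}
```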
I am using the Skype for Business iOS SDK; the following are my concerns.
Latency issue: it takes a long time to connect the call even on a good network. Also, I keep the video service on demand, so the default connection is for the audio feed only.
After the call is connected, the audio feed defaults to muted; the didChangeIsMuted delegate returns true (muted). The user has to manually press the button to unmute it.
The latest SDK demo at https://github.com/OfficeDev/skype-ios-app-sdk-samples/tree/master/BankingAppSwift does not compile successfully; a few resource files (the Helper files) are missing.
The meeting join only connects to IM and audio; it does not enable video by default. You have to explicitly start the video service once the meeting join is complete (the Conversation moves to the Established state).
For latency, can you give an estimate: how long did it take to join the meeting?
We will look at this one and reply back.
Please use the Helper files from the SDK Zip package. They are not provided by default in the GitHub samples.
We are trying to integrate the YouTube app on an STB that uses RDK middleware (capable of running HTML5/JavaScript applications). I have been through the "YouTube TV HTML5 Technical Requirements 2016" document and have some questions.
1) As per my understanding, YouTube is an open-source app and integration work will be required? Will any customization be required? For example, there is a difference in how the search functionality is presented on different device types. With the YouTube app running in a browser on a PC, a text box is available where you can type what you want to search for and then press the search icon next to it to start the search. However, on devices like smart TVs and set-top boxes, where the user has no pointing device or keyboard, a soft keyboard usually has to be shown on screen and the search starts automatically after a certain number of characters have been entered. I want to know whether this functionality is customized by the app integrator or whether there are different code bases for different device types.
I have a similar question about the settings menu. For example, to support the DIAL 2.0 protocol for remotely launching the YouTube application from another device, you need a settings menu that lets you pair/unpair the device. So the settings menu seems to differ between device types.
2) Similarly, there are differences in how the user can fast-forward/rewind during playback. In a PC browser I have seen that the user can seek to any position within a stream using the mouse. However, on smart TVs there are rewind/forward buttons which seek -/+ 10 seconds. I have not seen trick modes in any implementation. Are trick modes required, and how are they performed? If they are required, are they done using seek or some sort of I-frame tag file to allow smooth trick modes? And again, doesn't that part come from the app itself?
3) I'm trying to find out whether YouTube supports any or all of the MPEG-DASH, Apple HLS, and Microsoft Smooth Streaming adaptive-bit-rate protocols, but I'm not having much luck. I captured packets with Wireshark, launched the YouTube application, and played back a video, but I was unable to see any HTTP calls that would hint that the YouTube app is using any of the above ABR formats (perhaps all the communication was under TLS and therefore encrypted, so I could not see what was going on). Even with the YouTube app running in a browser on a PC, when I play back a video I can see under Settings -> Quality that it always remains at Auto, 480p for the whole duration of playback. And if I change the quality to any value, e.g. 720p, it stays there for the whole duration of playback. This tells me it is not using any of the ABR formats. So I guess these ABR formats are probably for future use?
4) In the YouTube specifications I can see that the target device must implement at least the com.youtube.playready and com.widevine.alpha (for 4K content) DRM systems. I was trying to find out whether YouTube has any content available in these formats but was unable to find any. Can you please confirm?
I would appreciate it if someone could answer these questions or point me in the right direction.
Best Regards,
Farhan
I hope someone can help me.
We are developing an Android app that integrates the latest YouTube Player API. Everything seems to be OK when the user loads a video (using the video ID), and the user is able to watch it.
However, in our lab we have identified a strange behaviour in the YouTube player when the quality of the user's network degrades. We thought the YouTube player would automatically adapt the video quality to the network conditions (we make no quality selection, assuming it defaults to automatic), but it does not, while the native YouTube app does.
Let me describe the test in which we observed this behaviour:
An Android device with the native YouTube app and our app installed, connected to our WLAN network
The WLAN provides internet access and we can inject impairments into that network (e.g. reduce the bandwidth)
Configure excellent bandwidth on the network
Start the video and watch for a few seconds
Configure bad bandwidth (e.g. 256 kbit/s) on the network
After a few seconds:
a. The YouTube app stalls just a little, then decreases the quality and continues playing the video.
b. Our app (AT4-App) stalls for longer and then continues playing at the initial quality.
We think we are already using the YouTube API to its fullest, so we don't know whether we can change our app to behave exactly like the native YouTube app; it does not make sense that the API would have less functionality than the native app (which is supposed to rely on the same APIs).
Thanks
The SoundCloud widget works fine playing multiple tracks in succession. If I then hide that frame, play a YouTube video in a YouTube iframe, and then switch back to a new track in the SoundCloud widget, it loads but will not play (ignoring the autoplay setting and any widget.play() calls). I had this working on Chromecast with the developer preview SDK and the 1.0 Cast receiver, but now with the 2.0.0 receiver it's broken. Any ideas on how to proceed?
Currently there is no supported mechanism in the SDK to play YouTube videos outside of the YouTube app. Note that, in general, applications may not allow other senders to launch or control their receiver side; for example, Hulu+ may not like it if you want to write your own app to launch and control their application on your Chromecast. If they decide to allow such a model, they need to publish the steps (for example, they can publish their App ID and any additional custom data that would link deep into their application). YouTube is no different in that respect.
OK, I got this working, so hopefully this is useful to others. Assuming only one player is active and visible at a time, the trick is to destroy the prior widgets rather than try to reuse them. For YouTube this does not mean reloading the iframe_api, but simply calling YTPlayer.destroy() and then new YT.Player() next time around. For SoundCloud, keep a handle to the iframe and call iframe.parentNode.removeChild(iframe) to destroy it, then create it again the next time.
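For anyone who wants something concrete, here is a rough sketch of that destroy-and-recreate pattern. The element IDs ('yt-wrapper', 'sc-container') are placeholders, and the YouTube iframe API is assumed to already be loaded on the page.

```javascript
// Sketch: never reuse the old players; tear them down and build fresh ones each time.
let ytPlayer = null;
let scIframe = null;

function showYouTube(videoId) {
  destroySoundCloud();
  if (ytPlayer) { ytPlayer.destroy(); ytPlayer = null; }
  // Create a fresh mount element so the recreated player gets a clean target.
  const wrapper = document.getElementById('yt-wrapper');
  wrapper.innerHTML = '';
  const mount = document.createElement('div');
  wrapper.appendChild(mount);
  ytPlayer = new YT.Player(mount, {
    videoId: videoId,
    events: { onReady: (e) => e.target.playVideo() },
  });
}

function showSoundCloud(trackUrl) {
  if (ytPlayer) { ytPlayer.destroy(); ytPlayer = null; }
  destroySoundCloud();
  // Build a new widget iframe each time rather than reusing the old one.
  scIframe = document.createElement('iframe');
  scIframe.allow = 'autoplay';
  scIframe.src = 'https://w.soundcloud.com/player/?auto_play=true&url=' +
    encodeURIComponent(trackUrl);
  document.getElementById('sc-container').appendChild(scIframe);
}

function destroySoundCloud() {
  if (scIframe) {
    scIframe.parentNode.removeChild(scIframe);
    scIframe = null;
  }
}
```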