We are trying to integrate the YouTube app on an STB that uses RDK middleware (capable of running HTML5/JavaScript applications). I have been through the "YouTube TV HTML5 Technical Requirements 2016" document and have some questions.
1) As per my understanding, YouTube is an open-source app and integration work will be required. Will any customization be required? For example, there is a difference in how search functionality works on different device types. When the YouTube app runs in a browser on a PC, a text box is available where you can type what you want to search for and then press the search icon next to it to start the search. However, on devices like smart TVs and set-top boxes, where the user has no pointing device or keyboard, a soft keyboard usually has to be shown on screen, and the search starts automatically after a certain number of characters have been entered. I want to know whether this functionality is customized by the app integrator or whether there are different code bases for different device types.
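To illustrate what I mean, here is a minimal sketch of that TV-style incremental search, assuming a plain HTML5/JavaScript app; the 3-character threshold, the debounce delay, and all names are just illustrative:

```typescript
// Sketch of TV-style incremental search: the on-screen soft keyboard appends
// characters to a query buffer, and the search fires automatically once the
// query is long enough and the user pauses typing.
const MIN_QUERY_LENGTH = 3;   // start searching after this many characters
const DEBOUNCE_MS = 500;      // wait for the user to pause typing

let query = "";
let debounceTimer: number | undefined;

function onSoftKey(char: string): void {
  query += char;
  if (debounceTimer !== undefined) {
    clearTimeout(debounceTimer);
  }
  if (query.length >= MIN_QUERY_LENGTH) {
    debounceTimer = window.setTimeout(() => runSearch(query), DEBOUNCE_MS);
  }
}

function runSearch(q: string): void {
  // In a real app this would call the search backend and render results.
  console.log(`searching for: ${q}`);
}
```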
I have a similar question about the settings menu. For example, to support the DIAL 2.0 protocol for remotely launching the YouTube application from a remote device, you need a settings menu that lets you pair/unpair the device. So the settings menu seems to differ between device types.
2) Similarly, there are differences in how the user is allowed to fast-forward/rewind during playback. In a PC browser, I have seen that the user can seek to any position within a stream using the mouse. However, on smart TVs there are rewind/forward buttons that seek -/+10 seconds. I have not seen trick modes in any implementation. Are trick modes required, and how are they performed? If they are required, is it done via seeking or via some sort of I-frame index file to allow smooth trick modes? Again, doesn't that part come from the app itself?
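For reference, on an HTML5 player the -/+10-second behaviour I describe could be implemented with the media element's currentTime; a minimal sketch, where the key names and step size are illustrative:

```typescript
// Sketch: map remote-control left/right keys to -/+10-second seeks on an
// HTML5 <video> element. Smooth trick modes (2x/4x scanning) would instead
// need playbackRate changes or an I-frame-only stream.
const SEEK_STEP_SECONDS = 10;
const video = document.querySelector("video") as HTMLVideoElement;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  if (event.key === "ArrowRight") {
    video.currentTime = Math.min(video.currentTime + SEEK_STEP_SECONDS, video.duration);
  } else if (event.key === "ArrowLeft") {
    video.currentTime = Math.max(video.currentTime - SEEK_STEP_SECONDS, 0);
  }
});
```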
3) I'm trying to find out whether YouTube supports any or all of the MPEG-DASH, Apple HLS, and Microsoft Smooth Streaming adaptive-bit-rate protocols, but I'm not having much luck. I tried to capture packets using Wireshark while launching the YouTube application and playing back a video, but I was unable to see any HTTP calls hinting that the app uses any of the above ABR formats (maybe all the communication was under TLS and therefore encrypted, so I could not see what was going on). Even with the YouTube app running in a browser on a PC, when I play back a video, Settings -> Quality always stays at Auto, 480p for the whole playback, and if I change the quality to some value, e.g. 720p, it stays there for the whole duration of the playback. This tells me it is not using any of the ABR formats, so I guess these ABR formats are probably for future use?
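Since the traffic is TLS-encrypted, an alternative to packet capture is probing from inside the browser: YouTube's HTML5 player does its adaptive streaming on top of Media Source Extensions (MSE), so checking MSE support gives a hint of what the device can handle. A small sketch; the MIME/codec strings are just common examples:

```typescript
// Sketch: probe Media Source Extensions (MSE) support, which the YouTube
// HTML5 player builds its adaptive (DASH-style) streaming on. The codec
// strings are common examples, not an exhaustive or authoritative list.
const testTypes = [
  'video/mp4; codecs="avc1.42E01E"', // H.264 in fragmented MP4
  'video/webm; codecs="vp9"',        // VP9 in WebM
  'audio/mp4; codecs="mp4a.40.2"',   // AAC audio
];

for (const type of testTypes) {
  const supported = window.MediaSource?.isTypeSupported(type) ?? false;
  console.log(`${type}: ${supported ? "supported" : "not supported"}`);
}
```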
4) In the YouTube specifications I can see that the target device must implement at least the com.youtube.playready and com.widevine.alpha (for 4K content) DRM systems. I was trying to find out whether YouTube has any content available in these formats, but was unable to find any. Can you please confirm?
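For anyone checking their own device, the two key systems named in the spec can be probed through the Encrypted Media Extensions (EME) API; a minimal sketch with a deliberately minimal configuration (a real robustness/capability setup would be device-specific):

```typescript
// Sketch: probe the DRM key systems named in the YouTube spec via the
// Encrypted Media Extensions (EME) API.
async function probeKeySystem(keySystem: string): Promise<boolean> {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }];
  try {
    await navigator.requestMediaKeySystemAccess(keySystem, config);
    return true;
  } catch {
    return false;
  }
}

for (const ks of ["com.youtube.playready", "com.widevine.alpha"]) {
  probeKeySystem(ks).then((ok) => console.log(`${ks}: ${ok ? "available" : "unavailable"}`));
}
```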
I would appreciate it if someone could answer these questions or point me in the right direction.
Best Regards,
Farhan
We want to use the Twilio video SDKs in our Web, Android, and iOS applications. Twilio provides most of the conferencing functionality out of the box, but we need to add some features of our own on top of the Twilio SDK in order to use it in our application.
Feature needed: We want to include a minimize button in the video conference room that will minimize the video call screen so the user can use the base application concurrently (similar to a WhatsApp video call). A maximize button will be shown while the call is minimized so that the user can switch back to the video call.
Our basic requirements are:
Audio and Video conferencing
Screen sharing
Recording of meetings
Mute controls for audio/video
Participant limit:
Minimum: 3, Maximum: 50
Duration limit:
Minimum: 30 minutes, Maximum: 240 minutes
Requirements specific to the Web Application (in React):
Conferencing control resides within the application. The existing web app will be the base interface for video conferencing.
Anyone can mute the audio/video of any participant at will.
Requirements specific to the Mobile Application (in Flutter):
Flexibility for the user to switch between the video call and our application (identical to how WhatsApp video calls work). The video call screen gets minimized, and the user can use the application normally while still being present in the conference.
How could I go ahead with this? Any help?
For the Flutter solution we are building an open-source plugin hosted on pub.dev. At the moment screen share has not been added yet, but it will be eventually. The API docs can be found here.
Regarding your WhatsApp-like switching, you need to look into Picture-in-Picture (PiP) mode, which is also part of the development phase of this plugin. You can find the milestone here and the related issues here.
You mention React for the web. There are already plugins for React on the web, but the web will also be supported in the Flutter solution.
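For the React/web side specifically, the twilio-video JavaScript SDK already covers connecting and local muting. A minimal sketch; the access token must come from your own backend, and note that muting another participant is not a direct client-side call, so you would have to signal the other client yourself (e.g. over a DataTrack or your backend):

```typescript
import { connect, LocalParticipant, Room } from "twilio-video";

// Sketch: join a room with audio and video enabled. Twilio access tokens
// are minted server-side, so `token` comes from your own backend endpoint.
async function joinRoom(token: string, roomName: string): Promise<Room> {
  return connect(token, {
    name: roomName,
    audio: true,
    video: { width: 640 },
  });
}

// Local mute/unmute by disabling the published audio tracks. To "mute anyone",
// send the target participant a signal asking their client to run this locally.
function setLocalAudioEnabled(participant: LocalParticipant, enabled: boolean): void {
  participant.audioTracks.forEach((publication) => {
    if (enabled) {
      publication.track.enable();
    } else {
      publication.track.disable();
    }
  });
}
```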
I hope someone can help me.
We are developing an Android app that integrates the latest YouTube Android Player API. Everything seems to be OK when the user loads a video (using the video ID), and the user is able to watch it.
However, we have identified in our lab some strange behaviour in the YouTube player when the quality of the network the user is on gets worse. We thought the YouTube player would automatically adapt the video quality to the network conditions (no quality selection is done, on the assumption that it defaults to automatic); however, it does not, while the native YouTube app does.
Let me describe the test in which we observed this behaviour:
An Android device with the native YouTube app and our app installed, connected to our WLAN network
The WLAN network provides internet access, and we can inject impairments into it (e.g. reduce the bandwidth)
Configure excellent bandwidth in the network
Start the video and watch for a few seconds
Configure bad bandwidth (e.g. 256 kbit/s) in the network
After a few seconds:
a. The YouTube app stalls just a little, then decreases the quality and continues playing the video.
b. Our app (AT4-App) stalls for much longer before continuing to play at the initial quality.
We think we are already using the YouTube API to its fullest, so we don't know whether we can upgrade our app to behave exactly like the native YouTube app; it does not make any sense that the API has less functionality than the native app (which is supposed to rely on the same APIs).
Thanks
I'm trying to learn more about YouTube's TOS. More specifically:
II. Prohibitions
8: separate, isolate, or modify the audio or video components of any YouTube audiovisual content made available through the YouTube API;
I'm working inside a Google Chrome extension that consists of a persistent background page and a foreground pop-up page. I would like to display audiovisual content in the foreground to users. This is fine and works; however, upon closing the foreground, the audiovisual content stops because the page has been destroyed.
As such, I would like to sync two YouTube players such that the one in the background is unmuted and the one in the foreground is muted, with its visual content synced to that of the background player. Would this violate YouTube's TOS? I'm hoping the answer is no; it seems akin to having a tab open. Sometimes the visual content can be seen (at the user's discretion), but the audio content would be uninterrupted.
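Concretely, the sync I have in mind would look roughly like this, using the YouTube IFrame Player API and Chrome extension messaging; the message shape and drift tolerance are my own invention, not any official API:

```typescript
// Rough sketch: the background page owns the unmuted "master" player and
// broadcasts its position once a second; the popup's muted player seeks to
// follow whenever it drifts too far. Requires the YT IFrame API typings.
const DRIFT_TOLERANCE_SECONDS = 0.5;

// Background page: broadcast the master player's position.
function broadcastPosition(masterPlayer: YT.Player): void {
  setInterval(() => {
    chrome.runtime.sendMessage({
      type: "sync",
      time: masterPlayer.getCurrentTime(),
    });
  }, 1000);
}

// Popup page: follow the master with a muted player.
function followMaster(foregroundPlayer: YT.Player): void {
  foregroundPlayer.mute();
  chrome.runtime.onMessage.addListener((msg: { type: string; time: number }) => {
    if (msg.type !== "sync") return;
    const drift = Math.abs(foregroundPlayer.getCurrentTime() - msg.time);
    if (drift > DRIFT_TOLERANCE_SECONDS) {
      foregroundPlayer.seekTo(msg.time, true);
    }
  });
}
```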
Thanks
If I interpret that correctly:
"to separate or isolate" means to cut off the video or the audio part (or even different channels of it, if any) of the returned/streamed media
"to modify" means that you transform the data in some way and you display it to the user, instead of the original data (i. e. you are prohibited to make a video streaming application that displays every movie in black and white).
So, unfortunately, I think your requirement does indeeed violate the TOS.
In short:
There's no way for you to make an app that lets you listen to a YouTube song with the display off...
(I believe they want you to see the ads, or they want to prove to those who pay them to place the ads that people actually see the ads they put on all YouTube videos.)
I'd like to stream video from the camera on an iOS device to a receiver via Wi-Fi, in effect turning the device into a wireless webcam. Is there a way to build a small app that captures video input on an iOS device and sends it out via an RTSP stream or similar?
As this is an ad hoc experiment, I'm not concerned about App Store guidelines and can jailbreak if necessary.
If I interpret your question correctly you more or less need to solve four problems:
Get the camera feed.
Convert/encode this to the right format.
Stream the data.
Prevent the phone from locking itself and going into deep sleep.
The first one is fairly simple, and Apple has, as always, provided good documentation and examples -> API link. Make sure you check out their example at the end, as you will get a CMSampleBufferRef data object back.
For the second and third parts, you should check out the CFNetwork framework, and especially CFFTPStream for streaming using FTP.
If you are only building this for yourself, then you can always turn off the Auto-Lock feature in the settings. If, on the other hand, you would like to distribute it to other users, you could use the trick of playing a muted sound every 10 seconds. This is more or less how all the alarm clocks in the App Store work. Here's a tutorial. =)
I hope I helped a little bit at least.
Good luck and best regards!
I'm 70% of the way to doing the same thing. Here's how I did it:
Capture content from video input
Chop the video into segment files for use with HTTP Live Streaming.
Spin up a web server on the iPhone and make the video files available.
Connect to the IP address of the phone and voila! You've got live streaming video (see the sketch below for the receiving side).
The last time I touched the code, I was trying to debug why my live streaming wasn't working. I'll try to get my source code posted on GitHub this weekend if you'd like to take a look.
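On the receiving side, playback is then just a matter of pointing a player at the playlist the phone serves. A minimal sketch; the URL is made up, and desktop browsers without native HLS support would need a library such as hls.js:

```typescript
// Sketch: play the phone-hosted HLS playlist. Safari and iOS play HLS
// natively; most other browsers need a helper library such as hls.js.
const video = document.createElement("video");
video.controls = true;
video.src = "http://192.168.1.42:8080/stream/index.m3u8"; // the phone's address (made up)
document.body.appendChild(video);
video.play().catch((err) => console.error("playback failed:", err));
```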
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls. The app would let the user enter a mode where the microphone is live and listens for predefined keywords like 'down', 'up', 'next', 'back', 'home', etc.
I don't want to reinvent the wheel, so I'm wondering, first, whether someone has already done this, and if not, whether there are any good tutorials or SDKs available to help with recording someone's voice and then comparing future input to see if it matches, or with handling the microphone in general?
Let's put aside that this is a fairly vaguely worded question for the moment.
If you are expecting to allow voice control in your app that somehow works throughout the entire device, it's just not possible. Your app would only work to control itself -- or at least itself and whatever external hooks you can normally get to the rest of the device, like, say, playing a song out of the user's iTunes library.
If you're planning on doing this in a jailbroken environment, then you should find some open-source library that does voice recognition -- if there are any -- and start from there. Be prepared for a very long haul, though.
Dragon Mobile SDK is what you're looking for.
http://dragonmobile.nuancemobiledeveloper.com/
There may be other voice-recognition SDKs out there, but this is the only one I can think of off the top of my head.
You can find a library called CMU Sphinx. There's an iPhone version of it called PocketSphinx. See if it fits your needs.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls.
The new iOS 13 feature Voice Control fully meets your request, because you can control your device and your app with your voice exactly as you would with touches.
It's also possible to define actions for specific words, for instance.
The device settings explain this new feature in detail (Accessibility -> Voice Control).
If you need dedicated names to be read out in your app, use the accessibilityUserInputLabels property to define them.
That's definitely the built-in tool you need to reach your goal: no need for an external library or SDK, everything is provided natively. ;o)