I am trying to synchronize a YouTube live video with a presentation shown on a website. The problem is the latency of the video, which varies between 15 and 25 seconds across clients. When I click "Show stats for nerds" in the video context menu I can see the current latency.
If I knew this latency in the script, I could adapt the slide switching accordingly.
So my question is: Is it possible to get the current latency with JavaScript?
Is it possible to get the current latency with JavaScript?
No.
When I click "Show stats for nerds" in the video context menu I can see the current latency.
That's only the latency YouTube knows about. There may be other latency as well, on the source end.
I haven't tested it, but you might consider calling getCurrentTime(). It may tell you the time offset since the beginning of the encoding, which would be a reliable source for synchronizing other things to the video.
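For example, here is a minimal sketch using the YouTube IFrame Player API. The slideTimings array and showSlide() function are hypothetical placeholders for however your presentation maps video time to slides:

```
// Minimal sketch using the YouTube IFrame Player API.
// slideTimings and showSlide() are hypothetical placeholders.
const slideTimings = [0, 95, 210, 340]; // video time (seconds) at which each slide starts

// Load the IFrame Player API asynchronously.
const tag = document.createElement('script');
tag.src = 'https://www.youtube.com/iframe_api';
document.head.appendChild(tag);

let player;
function onYouTubeIframeAPIReady() {
  player = new YT.Player('player', {   // 'player' is the id of an empty <div>
    videoId: 'YOUR_LIVE_VIDEO_ID',     // placeholder
    events: { onReady: startSync }
  });
}

function startSync() {
  setInterval(() => {
    const t = player.getCurrentTime(); // current playback position in seconds
    let current = 0;
    for (let i = 0; i < slideTimings.length; i++) {
      if (t >= slideTimings[i]) current = i; // last slide whose start time has passed
    }
    showSlide(current);                // hypothetical slide-switching function
  }, 500);
}
```

Whether getCurrentTime() reflects the offset you need for a live stream is exactly what you would have to verify first.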
Hello Friends,
I am working on an OTT platform app, and I need to play video very smoothly without any delay, with Snapchat and Instagram as references. I am using Cloudinary for uploading videos and everything is working well, but the first time around, AVPlayer takes 1-2 seconds to start the video, which is a bad thing for me. Once a video has played, the next time I come back to the same video it plays smoothly, with a delay of at most half a second.
As far as I have learned from different blogs and Stack Overflow answers, this is the default AVPlayer buffering time, and it depends on the video duration and on fetching video information like the title, metadata, etc. But I don't use this information anywhere.
I tried setting the AVPlayer property .automaticallyWaitsToMinimizeStalling = false, but still no luck.
I also tried a few solutions from Stack Overflow posts, but didn't have any success, e.g.:
How to reduce iOS AVPlayer start delay
This is demo video Link Which you can try http://res.cloudinary.com/dtzhnffrp/video/upload/v1621013678/1on1/bgasthklqvixukocv6xy.mov
If you can suggest what I can use for OTT platforms to play video smoothly, I would be really grateful.
Thanks in advance.
Most streaming services use ABR (adaptive bit rate) streaming, which creates multiple resolution copies of the video and breaks each into chunks, typically 2-10 seconds long.
One benefit of ABR is that, to speed up playback start-up, the video can start at a lower resolution bit rate and then 'step up' to higher bit rates as it proceeds.
You can often see this on popular streaming services where you will see the video quality is lower when the video starts and improves after a short time.
See here for more on ABRs: https://stackoverflow.com/a/42365034/334402
This requires you to do work on the server side to prepare the video for HLS and DASH streaming, the two most common ABR streaming protocols.
Typically dedicated streaming servers, or a combination of encoders and packagers, are used to prepare and serve the ABR streams. There are also cloud services, for example AWS Media Services or Azure Media Services, which support on-demand streaming models.
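As a rough illustration of what that server-side preparation involves, ffmpeg alone can segment a single rendition into an HLS playlist. This is only a sketch: a real ABR ladder repeats this per resolution/bit rate and adds a master playlist pointing at all of them, and the file names here are placeholders:

```
# Minimal sketch: package one 360p rendition of input.mp4 as HLS.
ffmpeg -i input.mp4 \
  -c:v libx264 -b:v 800k -s 640x360 \
  -c:a aac -b:a 96k \
  -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename '360p_%03d.ts' \
  360p.m3u8
```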
You can make the videos smaller either by reducing their dimensions or by compressing them more. Both of these lower the startup time, but sacrifice quality in exchange.
Cloudinary will create ABR versions for you, but the last I checked, you pay for each version created.
I am a newbie in video streaming, and I just built a sample website which plays videos. Here I just give the video file location to the video tag in HTML5. I noticed that on YouTube the video tag contains a blob URL, and had a look into this. I found that the video data comes in segments, and came across a term called pseudo-streaming.
By contrast, it seems like the website I built downloads the whole file and then plays the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the way video data is received in segments is done by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It's been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is to support Adaptive Bit Rate streaming - ABR.
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
For your existing site, as long as your server supports range requests, you are probably not actually downloading the whole video. With range requests, the browser or player requests just part of the file at a time, so it can start playback before the whole file is downloaded.
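The exchange looks roughly like this (the byte ranges and sizes below are made-up illustrative values):

```
GET /videos/sample.mp4 HTTP/1.1
Host: example.com
Range: bytes=0-1048575

HTTP/1.1 206 Partial Content
Content-Type: video/mp4
Content-Range: bytes 0-1048575/52428800
Content-Length: 1048576
```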
For MP4 files, it is worth noting that you need to have the header information, which is contained in a 'block' or 'atom' called the MOOV atom, at the start of the file rather than the end - many encoders write it at the end by default. There are a number of tools which will allow you to move it to the start - e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
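If you already have ffmpeg available, it can do the same relocation without re-encoding (the file names here are placeholders):

```
# Copy streams as-is and move the MOOV atom to the front of the file.
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```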
You are definitely on the right track with your investigations - video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services rather than to build them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html
I heard in a WWDC video that the player measures the speed of previous HLS downloads to pick which rendition to use, but how does it choose which one to use at the very start? Is the download speed of the master playlist (the list of renditions), or of a specific rendition's files, used at all? I want to make sure that I'm not tricking the video player into using too high a quality rendition by loading the metadata files instantly from the cache.
It picks the first entry:
The first entry in the variant playlist will be played at the initiation of a stream and is used as part of a test to determine which stream is most appropriate. The order of the other streams is irrelevant. Therefore, the first bit rate in the playlist should be the one that most clients can sustain.
From the Bit rate recommendations section of Apple's Technical Note TN2224:
Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad
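In practice that means ordering the master playlist deliberately. A hypothetical example (the bandwidth and resolution values are made up), where the 1.4 Mbps rendition is listed first because it is the one most clients can sustain:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=426x240
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
high/index.m3u8
```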
We have an FFmpeg stream being sent to mobile devices. We're using the HTML5 <video src="..." webkit-playsinline> tag to display the video inline (inside a real-time streaming app). We've managed to reduce the delay at the FFmpeg end to the minimum, but there is still a lag on the iOS end, where the player presumably buffers for a couple of seconds.
Is there any way to reduce the client-side delay?
We need as close to real-time as possible and skipping is acceptable.
If you are using an HTML5 video tag, then the iOS device will use QuickTime to play back the video. Apple offers no control over internal mechanisms like buffer settings for its QuickTime player. For a project on Apple TV I even worked with a guy in Cupertino at Apple, and they just won't allow any access to the information you would require on their device.
Typically if you use HLS:
Is this a real-time delivery system?
No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.
What is the latency?
Approximately 30 seconds, with recommended settings. See question #15.
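The quote above already shows the arithmetic behind that figure: one segment's worth of delay at the encoder/segmenter, plus up to two segments buffered by the client for seamless playback, gives, at the then-recommended 10-second segment duration, roughly 3 × 10 = 30 seconds of inherent delay, before any network or upload delay is counted.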
For a live streaming scenario on iOS, you are better off tuning the streaming chain before the actual player:
capture -> transcoding -> upload -> streaming server -> delivery -> playback
Using ffmpeg you can tune for zero-latency streaming at the transcoding level, which I understand you have done. After that, using a well-established streaming server like Wowza and CDN delivery will help you get there (of course at a certain cost - and assuming you need a streaming server, which you may not).
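For reference, the x264 options usually involved in that transcoding-level tuning look something like this (the input and RTMP URL are placeholders):

```
# Trade compression efficiency for minimal encoder-side delay.
ffmpeg -i input \
  -c:v libx264 -preset ultrafast -tune zerolatency \
  -c:a aac \
  -f flv rtmp://your-server/live/stream
```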
If you go all native for your iOS app, you may look at MPMoviePlayerController. I have no experience with native app code on iOS, so I will let you decide if it is worth the time (still, I doubt it will be possible because of the underlying QuickTime/HLS layer).
I also came across this, which sounds interesting, but I have not tested it, and even with such an approach you will face limitations.
Even if it may not be the answer you were looking for I hope this helps.
I have access to a proxy server and I can find out the time a video was requested. The log has the form (time, IP, URL). I want to somehow figure out for how many seconds a particular user, using IP address A, watched a YouTube video. Any suggestions?
If you only have access to requests, you obviously can't tell the difference between someone who just loaded a video and someone who watched it.
So, the best you can do is to come up with a set of heuristics that tries to 'guess' it by observing certain actions of the user. Here are a few ideas:
Does your log record the requests for the video buffer itself? If it does, you can see how much of the video was actually loaded, and the watched time can't be more than that.
If you (quite naively, I guess) assume that they're finished watching when they request another video URL, you can use this as your trigger for ending a 'video session' (there's a rough sketch of this at the end of this answer).
Install Wireshark or similar and start watching activity from YouTube during the video. Can you identify whether there's a request when advertising is shown, or when the related videos are displayed as the video finishes?
In all honesty, though, I think it will be virtually impossible to derive such a specific metric as seconds watched from such limited data as the point in time a video was requested. Just think of what could mess up any strategy you come up with: the user could load several videos in different tabs in a burst, or they could load a video page, pause it and forget it for several minutes or hours before actually watching it.
In short: I don't think you'll get a reliable guess using only the data you have, but if you absolutely must at least try, observing network activity between client and YouTube that only happens when a video is in the 'playing state' (pulling advertisings, related videos, some sort of internal YouTube logging, etc) is probably your best bet. Even that probably won't have a granularity nearly close to seconds, though.
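If you do want to try the 'next request ends the session' heuristic anyway, here is a minimal sketch. The log entry shape and the one-hour cap are assumptions:

```
// Minimal sketch of the 'next request ends the session' heuristic.
// Assumes log entries shaped like { time, ip, url } with time in epoch seconds.
function estimateWatchTimes(log, maxSessionSeconds = 3600) {
  // Group the log entries by client IP.
  const byIp = new Map();
  for (const entry of log) {
    if (!byIp.has(entry.ip)) byIp.set(entry.ip, []);
    byIp.get(entry.ip).push(entry);
  }

  const sessions = [];
  for (const entries of byIp.values()) {
    entries.sort((a, b) => a.time - b.time);
    entries.forEach((entry, i) => {
      const next = entries[i + 1];
      sessions.push({
        ip: entry.ip,
        url: entry.url,
        // Gap to the next request, capped; null for the last request,
        // since there is no end signal for it at all.
        estimatedSeconds: next
          ? Math.min(next.time - entry.time, maxSessionSeconds)
          : null,
      });
    });
  }
  return sessions;
}

// Example usage with made-up log lines:
console.log(estimateWatchTimes([
  { time: 1000, ip: 'A', url: 'watch?v=abc' },
  { time: 1320, ip: 'A', url: 'watch?v=def' },
]));
```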