How to reduce initial buffering time of AVPlayer to play video? - ios

Hello Friends,
I am working on an OTT platform app and I need video to start playing smoothly, with as little delay as possible - think Snapchat and Instagram as references. I am using Cloudinary for uploading videos and everything works well, but the first time a video is opened AVPlayer takes 1-2 seconds to start playback, which is bad for me. Once a video has played, the next time I come to the same video it starts smoothly with a delay of at most half a second.
From what I have learned through various blogs and Stack Overflow answers, this is AVPlayer's default buffering behaviour: it depends on the video duration and on fetching video information such as the title and other metadata. But I don't need that information anywhere.
I tried setting the AVPlayer property automaticallyWaitsToMinimizeStalling = false, but still no luck.
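For reference, a rough sketch of the player setup I am describing (the URL is just a placeholder; preferredForwardBufferDuration and playImmediately(atRate:) are related settings that can also influence start-up):

import AVFoundation

// Placeholder URL - in the real app this is the Cloudinary video URL.
let url = URL(string: "https://example.com/video.mov")!

let item = AVPlayerItem(url: url)
// Ask AVPlayer to keep only a small forward buffer before starting playback.
item.preferredForwardBufferDuration = 1

let player = AVPlayer(playerItem: item)
// Don't wait for a large buffer before starting.
player.automaticallyWaitsToMinimizeStalling = false
// Start playback as soon as possible, even if it may stall later.
player.playImmediately(atRate: 1.0)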
I also tried a few solutions from Stack Overflow posts, but without success:
How to reduce iOS AVPlayer start delay
This is a demo video link which you can try: http://res.cloudinary.com/dtzhnffrp/video/upload/v1621013678/1on1/bgasthklqvixukocv6xy.mov
If you can suggest what I can use for an OTT platform to play video smoothly, I would be really grateful.
Thanks in advance.

Most streaming services use ABR, which creates multiple resolution copies of the video and breaks each one into chunks, typically 2-10 seconds long.
One benefit of ABR is that, to speed up playback start-up, the video can start on a lower resolution/bit rate rendition and then 'step up' to higher bit rates as playback proceeds.
You can often see this on popular streaming services where you will see the video quality is lower when the video starts and improves after a short time.
See here for more on ABRs: https://stackoverflow.com/a/42365034/334402
This requires you to do work on the server side to prepare the video for HLS and DASH streaming, the two most common ABR streaming protocols.
Typically dedicated streaming servers, or a combination of encoders and packagers, are used to prepare and serve the ABR streams. There are also cloud services, for example AWS Media Services or Azure Media Services, which support on-demand streaming models.
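On the client side, AVPlayer handles the rendition switching automatically once you point it at the HLS master playlist. A minimal sketch, assuming a hypothetical playlist URL:

import AVFoundation

// Hypothetical HLS master playlist produced by your encoder/packager.
let masterPlaylist = URL(string: "https://example.com/video/master.m3u8")!

let item = AVPlayerItem(url: masterPlaylist)
// Optionally cap the bit rate AVPlayer will attempt; on slow connections this
// can shorten start-up time (0 means no limit).
item.preferredPeakBitRate = 1_500_000

let player = AVPlayer(playerItem: item)
player.play()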

You can make the videos smaller either by reducing their dimensions or by compressing them more. Both of these lower the startup time, but sacrifice quality in exchange.
Cloudinary will create ABR versions for you but, last I checked, you pay for each version created.

Related

Strange AVPlayer behavior with HLS redundant streams and bad network

I have an app which can play HLS video streams.
The HLS master playlist contains redundant streams to provide a backup service.
It looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000,RESOLUTION=638x480
https://example.com/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=1
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=2
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=1
....
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=5
So I decided to test how this setup holds up in a bad network scenario. For this I used Network Link Conditioner's 3G preset, which provides 750kbps of download bandwidth. Naturally, I expected relatively smooth playback of the 400kbps video, but alas, it took 60 seconds to fully load the test clip (800kb total size).
What I noticed is that AVPlayer sends requests for all of the listed redundant playlists (and I have 5 for each bandwidth). If I remove them and keep only one media playlist per bandwidth, the video loads in 10 seconds and plays without hiccups.
It looks like AVPlayer tries to process all the redundant links in parallel with the main video load and chokes hard.
Is there any way to restrict this behaviour and force AVPlayer to go for the redundant streams only in case of an actual load error?
Any idea why it tries to load all of them? Maybe some HLS tags can help?
It also sometimes displays errors like this in the console:
{OptimizedCabacDecoder::UpdateBitStreamPtr} bitstream parsing error!!!!!!!!!!!!!!
And I can't find much info about it.
The problem was an incorrectly set BANDWIDTH value; AVPlayer has some obscure logic for switching between redundant streams when the actual properties of the current stream don't match the values declared in the m3u8.
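In other words, the BANDWIDTH attribute of each variant should reflect the actual peak bit rate of that stream. Something along these lines, with purely illustrative values:

#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1600000,RESOLUTION=638x480
https://example.com/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=450000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8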

How to build a simple video streaming server?

I am a newbie in video streaming and I just built a sample website which plays videos. I simply give the video file location to the <video> tag in HTML5. I noticed that on YouTube the video tag contains a blob URL, and when I looked into this I found that the video data arrives in segments; I also came across a term called pseudo-streaming. By contrast, it seems the website I built downloads the whole file and then plays the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the segmented delivery of video data is done by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It has been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is to support Adaptive Bit Rate streaming - ABR.
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and to select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
For your existing site, as long as your server supports range requests, you are probably not actually downloading the whole video. With range requests, the browser or player requests just part of the file at a time, so it can start playback before the whole file is downloaded.
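As a rough illustration of what the player is doing under the hood, a range request simply asks the server for a byte range of the file. A sketch, assuming a placeholder URL and a server that honours the Range header:

import Foundation

// Placeholder MP4 URL on a server that supports HTTP range requests.
var request = URLRequest(url: URL(string: "https://example.com/video.mp4")!)
// Ask for only the first 1 MB of the file instead of the whole thing.
request.setValue("bytes=0-1048575", forHTTPHeaderField: "Range")

let task = URLSession.shared.dataTask(with: request) { data, response, _ in
    // A server that honours range requests replies with HTTP 206 Partial Content.
    if let http = response as? HTTPURLResponse {
        print("Status: \(http.statusCode), bytes received: \(data?.count ?? 0)")
    }
}
task.resume()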
For MP4 files, it is worth noting that you need the header information, which is contained in a 'block' or 'atom' called the MOOV atom, at the start of the file rather than the end - it is at the end for many regular MP4 files. There are a number of tools which will move it to the start, e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
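If you happen to create the MP4 files from an iOS or macOS app, AVAssetExportSession can produce a 'fast start' file for you via shouldOptimizeForNetworkUse. A minimal sketch, with placeholder paths:

import AVFoundation

// Placeholder input and output locations.
let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/input.mp4"))
let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetPassthrough)!
export.outputURL = URL(fileURLWithPath: "/path/to/faststart.mp4")
export.outputFileType = .mp4
// Writes the file optimized for progressive playback (MOOV atom at the start).
export.shouldOptimizeForNetworkUse = true
export.exportAsynchronously {
    print("Export finished with status: \(export.status.rawValue)")
}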
You are definitely on the right track with your investigations - video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services rather than building them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html

How does HLS video on iOS pick which rendition to start out with?

I heard from a WWDC video that it measures the speed of previous HLS downloads to pick which rendition to use, but how does it choose which one to use at the very start? Is the download speed of the file for the list of renditions or the download speed of the file for a specific rendition used at all? I want to make sure that I'm not tricking the video player into using too high quality of a rendition by loading metadata files instantly from the cache.
It picks the first entry:
The first entry in the variant playlist will be played at the initiation of a stream and is used as part of a test to determine which stream is most appropriate. The order of the other streams is irrelevant. Therefore, the first bit rate in the playlist should be the one that most clients can sustain.
From the Bit rate recommendations section of Apple's Technical Note TN2224:
Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad
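So if start-up speed matters, order the master playlist so the first variant is one most clients can sustain, for example (values are illustrative only):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000,RESOLUTION=638x480
https://example.com/playlist.m3u8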

Reduce/remove buffer lag on <video> element (iOS)

We have an FFmpeg stream being streamed to mobile devices. We're using the HTML5 <video src="..." webkit-playsinline> tag to display the video inline (inside a real-time streaming app). We've managed to reduce the delay at the FFmpeg end to the minimum, but there's still a lag at the iOS end, where the player presumably buffers for a couple of seconds.
Is there any way to reduce the client-side delay?
We need as close to real-time as possible and skipping is acceptable.
If you are using an HTML5 video tag then the iOS device will use QuickTime to play back the video. Apple offers no control over internal mechanisms like buffer settings for its QuickTime player. For a project on Apple TV I even worked with a guy at Apple in Cupertino, and they just won't allow any access to the information you would require on their device.
Typically if you use HLS:
Is this a real-time delivery system?
No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.
What is the latency?
Approximately 30 seconds, with recommended settings. See question #15.
For a live streaming scenario on iOS you are better off tuning the streaming chain before the actual player:
capture -> transcoding -> upload -> streaming server -> delivery -> playback
Using ffmpeg you can tune for zero-latency streaming at the transcoding level, which I understand you have done. After that, using a well-established streaming server like Wowza and CDN delivery will help you get there (of course at a certain cost - and assuming you need a streaming server, which you may not).
If you go all native for your iOS app you may look at MPMoviePlayerController. I have no experience with native app code on iOS so I will let you decide if it is worth the time (still, I doubt it will be possible because of the underlying QuickTime/HLS layer).
I also came across this which sounds interesting but I have not tested it and even with such an approach you will face limitations.
Even if it may not be the answer you were looking for I hope this helps.

How to serve videos like Youtube? Almost instant play and fast seeking

How do you serve videos the way YouTube does? Even if the video is long (almost 2 hours) and is viewed in HD, it starts playing almost instantly, and seeking to not-yet-loaded parts is very fast.
I'm using a dedicated server from Rackspace with 100Mb up/down for this test, and my ping time to the server is below 50ms. My local internet connection is 10Mb; I can max out my connection when I download something from the server, so the connection to the server is not the issue here.
I'm trying to emulate this and I've tried real-time streaming using Wowza and pseudo-streaming using the H264 Streaming Module. Neither comes close to how fast YouTube delivers video.
The test file is MP4 (H.264), 300MB, 2 hours long, with a total bit rate of 500kbps, and JWPlayer as the video player.
Wowza Streaming (RTMP) - Loading and then playing the video is fast, but not as fast as YouTube. Seeking is not as fast either: it takes around 5-7 seconds to move to a new position and continue playing the video.
Pseudo-streaming with the H264 Streaming Module (HTTP) - Loading the video takes a long time, since it downloads the video header before playing. A 2-hour video has around 2.5MB of MOOV atom (video header) that it needs to download before it can play. Once it starts playing, seeking to not-yet-downloaded parts is on par with Wowza, but not as fast as YouTube.
What do I need to serve videos at the speed of YouTube? I also need the player to buffer/download the video while paused, just like YouTube, so real streaming like Wowza is out.
Pseudo-streaming using the H264 Streaming Module would have been nice since it does buffer while paused; it's just that the initial loading time is very long. Is there any way I could remove that initial load time?
What are my other options? I'm open to anything else I could use on my server.
The way YouTube works is different, and they keep changing how it works. Reverse engineering it by capturing YouTube's traffic in Wireshark over the last 4 years has shown me that the pattern is very dynamic. Segmentation is one key, along with the dual buffer, multiple caching servers and techniques, using the client machine as the buffer and renderer, and the capabilities of the player. There are many factors which make YouTube video fast and sleek.
You can emulate this to some extent, but building exactly the same thing takes a lot of effort and infrastructure.
