I'm using AVPlayer to play streamed content. AFAIK, there are three kinds of streamed content:
Progressive download: like VOD, this is a complete video. I can rewind & forward and get the duration of the video
Live streaming: this is like watching a TV channel. I can't get the duration
Live event: like a football match
Correct me if I'm wrong.
My question is: can AVPlayer work with a live event? For example, the football match starts at 7:00 AM and is expected to last about two hours. If you open the stream at 8:00 AM, can you rewind back? Does AVPlayer update the duration continuously?
Also, I found this in the documentation for currentPlaybackTime:
For video-on-demand or progressively downloaded content, this value is measured in seconds from the beginning of the current item. Changing the value of this property moves the playhead to the new location. For content streamed live from a server, this value represents the time from the beginning of the playlist when it was first loaded.
I'm not sure what "For content streamed live from a server, this value represents the time from the beginning of the playlist when it was first loaded" means.
I found this document useful: Technical Note TN2288, Example Playlist Files for use with HTTP Live Streaming.
Basic Video on Demand (VOD) Playlist
The index file is static and contains a complete list of URLs to all media files created since the beginning of the presentation. This kind of session allows the client full access to the entire program.
Live Playlist (Sliding Window)
For live sessions, the index file is updated by removing media URIs from the file as new media files are created and made available.
Event Playlist
However, with the EVENT tag, you cannot change the playlist at all; you may only append new segments to the end of the file. They cannot be added at the front. New segments are added until the event has concluded, at which time the EXT-X-ENDLIST tag is appended.
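To make that concrete, here is a minimal sketch of what an event playlist might look like mid-broadcast (segment names and durations are made up for illustration):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:EVENT
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.9,
fileSequence0.ts
#EXTINF:9.9,
fileSequence1.ts
#EXTINF:9.9,
fileSequence2.ts

New #EXTINF/segment pairs keep getting appended at the bottom as the event runs; since nothing is ever removed from the top, a client joining late still has access to every segment since the start.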
So technically, I think the iOS AVPlayer can handle rewinding a live event; it only depends on how the server generates the playlist file.
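You can inspect what the player thinks at runtime: for live and event streams the item's duration is typically indefinite, and the window you can seek within is described by seekableTimeRanges instead. A minimal sketch (the URL is a placeholder; in practice you'd wait for the item's status to reach .readyToPlay before reading the ranges):

import AVFoundation

let url = URL(string: "https://example.com/event/playlist.m3u8")!
let item = AVPlayerItem(url: url)
let player = AVPlayer(playerItem: item)
player.play()

// For an event playlist, the start of this window should stay at the
// beginning while the end keeps growing as segments are appended.
if let range = item.seekableTimeRanges.last?.timeRangeValue {
    let start = CMTimeGetSeconds(range.start)
    let end = CMTimeGetSeconds(CMTimeRangeGetEnd(range))
    print("Seekable from \(start)s to \(end)s")
}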
Related
I have a lot of long (45 mins - 90 mins) MP4 videos in a public S3 bucket and I want to play them in my iOS app using AVPlayer.
I am using AVPlayerViewController to play them, but I have to wait several minutes before they start playing because the whole video is downloaded rather than streamed.
I am caching the videos locally, so this only happens the first time, but I would love to stream the video so the user doesn't have to wait for the entire file to download.
Some people are pointing out that I need CloudFront to stream videos, but in the documentation I've read that this is only necessary when many people are streaming the same file. I'm building an MVP, so I only need a simple solution.
Is there any way to stream an MP4 video from an S3 bucket with AVPlayerViewController without it fully downloading the file before playing it to the user?
TLDR
AVPlayer does not support 'streaming' (HTTP range requests) as you would define it, so either use an alternative video player that does, or use a real media streaming protocol like HLS, which is supported by AVPlayer and starts playback before the whole file has downloaded.
CloudFront is great for delivery in general but is not strictly needed - you may have seen it mentioned because of CloudFront RTMP distributions, but those have now been discontinued.
Detailed Answer
S3 supports a concept called byte-range fetches using HTTP range requests - you can verify this by making a HEAD request to your video file and checking that the Accept-Ranges header exists with a value of bytes (rather than none).
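If you want to check this from code, here's a quick sketch using URLSession (the bucket URL is a placeholder):

import Foundation

var request = URLRequest(url: URL(string: "https://your-bucket.s3.amazonaws.com/video.mp4")!)
request.httpMethod = "HEAD"

URLSession.shared.dataTask(with: request) { _, response, _ in
    // S3 should report "bytes" here, meaning byte-range fetches are supported
    let ranges = (response as? HTTPURLResponse)?
        .value(forHTTPHeaderField: "Accept-Ranges") ?? "none"
    print("Accept-Ranges: \(ranges)")
}.resume()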
Load your MP4 file in the browser & notice that it can start as soon as you click play. You're also able to move to the end of the video file and yet, you haven't really downloaded the entire video file. HTTP range requests are what allow this mechanism to work. Small chunks of the video can be downloaded as & when the user gets to that part of the video. This saves the file server & the user bandwidth while providing a much better user experience than the client downloading the entire file.
The server would need to support byte-range fetches in the first instance before the client can then decide to make range requests (or not to). The key is that, once the server supports it, it is up to the HTTP client to decide whether it wants to fetch the data in chunks or all in one go.
This isn't really 'streaming' as you know it & are referring to in your question but it is more 'downloading the video from the server in chunks and playing it back' using HTTP 206 Partial Content responses.
You can see this in the Network tab of your browser as a series of multiple 206 responses when seeking in the video. The entire video is not downloaded but the video is streamed from whichever position that you skip to.
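You can also reproduce one of those 206 responses yourself by sending a Range header; another short URLSession sketch (placeholder URL again):

import Foundation

var request = URLRequest(url: URL(string: "https://your-bucket.s3.amazonaws.com/video.mp4")!)
// Ask for just the first kilobyte of the file
request.setValue("bytes=0-1023", forHTTPHeaderField: "Range")

URLSession.shared.dataTask(with: request) { data, response, _ in
    if let http = response as? HTTPURLResponse {
        // A server honouring range requests answers 206 Partial Content, not 200
        print("Status: \(http.statusCode), bytes received: \(data?.count ?? 0)")
    }
}.resume()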
The problem with AVPlayer
Unfortunately, AVPlayer does not support 'streaming' using HTTP range requests & HTTP 206 Partial Content responses. I've verified this manually by creating a demo iOS app in Xcode.
This has nothing to do with S3 - if you stored these files on any other cloud provider or file server, you'd see that the file is still fully loaded before playing.
The possible solutions
Now that the problem is clear, there are 2 solutions.
Using an alternative video player
The easiest solution is to use an alternative video player which does support byte-range fetches. I'm not an expert in iOS development, so I sadly can't recommend an alternative, but I'm sure there'll be a popular library that the industry prefers over the built-in AVPlayer. This would provide you with your (extremely common) definition of 'streaming'.
Using a video streaming protocol
However, if you must use AVPlayer, the solution is to implement true media streaming with a video streaming protocol - true streaming also allows you to leverage features like adaptive bitrate switching, live audio switching, licensing etc.
There are quite a few of these protocols available like DASH (Dynamic Adaptive Streaming over HTTP), SRT (Secure Reliable Transport) & last but not least, HLS (HTTP Live Streaming).
Today, the most widely used streaming protocol on the internet is HLS, created by Apple themselves (hey, maybe the reason to not support range requests is to force you to use the protocol). Apple's own documentation is really wonderful for delving deeper if you are interested.
Without getting too much into protocol detail, HLS will allow playback to start more quickly in general, fast-forwarding can be much quicker & delivers video as it is being watched for the true streaming experience.
To go ahead with HLS:
Use AWS Elemental MediaConvert to convert your MP4 file to HLS format - the resulting output will be 1 (or more) .M3U8 manifest files in addition to .ts media segment file(s)
Upload the resulting output to S3
Point AVPlayer to the .M3U8 file
import AVFoundation

// AVURLAsset takes a URL value, not a String
let url = URL(string: "https://ermiya.s3.eu-west-1.amazonaws.com/videos/video1playlist.m3u8")!
let asset = AVURLAsset(url: url)
let item = AVPlayerItem(asset: asset)
...
Enjoy near-instant loading of the video
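Since the question uses AVPlayerViewController, here's a minimal sketch of wiring the player up (assuming you're inside a UIKit view controller):

import AVKit

let url = URL(string: "https://ermiya.s3.eu-west-1.amazonaws.com/videos/video1playlist.m3u8")!
let controller = AVPlayerViewController()
controller.player = AVPlayer(url: url)
present(controller, animated: true) {
    controller.player?.play()
}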
CloudFront
As for Amazon CloudFront, it isn't required per se, and S3 is sufficient in this case, but a quick Google search will turn up plenty of benefits it provides, especially caching, which can help you save on S3 costs later on.
Conclusion
I would go with converting to HLS if you can, as it will yield more possibilities down the line and is a better true streaming experience in general, but given the AVPlayer restrictions above, an alternative video player that supports range requests will work just as well.
Whether or not to use CloudFront will depend on your user base, your S3 usage, and other factors.
As you're creating an MVP, I would recommend just doing a batch conversion of your MP4 files to HLS format and not using CloudFront, which would add extra complexity to your cloud configuration.
As @ErmiyaEskandary said, you could just use HLS to solve your problem, which is probably a good idea, but you should not have to wait for the entire MP4 file to download before playing it with AVPlayer. The issue is actually not with AVPlayer or byte-range requests at all, but rather with how your MP4 files are formatted.
Your MP4 files may be configured incorrectly for streaming. MP4s have a metadata section called the MOOV atom. By default, many encoders put this at the end of the file, in which case the player has to download the entire file before it can begin playing.
For streaming use cases, the MOOV atom needs to be at the front of the file. The player then only needs to buffer the MOOV atom, and it can begin playing the video as the data loads.
You can use ffmpeg with the faststart flag enabled to move the MOOV atom to the beginning of the file.
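A minimal example (the file names are placeholders; -c copy just remuxes without re-encoding):

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4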
I am a newbie in video streaming and I just built a sample website which plays videos. Here I just give the video file location to the video tag in HTML5. I noticed that on YouTube the video tag contains a blob URL, and I had a look into this. I found that the video data comes in segments, and I came across a term called pseudo-streaming. It seems like the website that I built downloads the whole file and then plays the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the way video data is received in segments is handled by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It's been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is to support Adaptive Bit Rate streaming - ABR.
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and to select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
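In HLS, for instance, ABR is driven by a master playlist that lists the same content encoded at several bit rates; a minimal sketch with made-up bandwidths and paths:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/playlist.m3u8

The player measures throughput as it downloads and can switch to a different variant at the next chunk boundary.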
For your existing site, so long as your server supports range requests, you are probably not actually downloading the whole video. With range requests, the browser or player requests just part of the file at a time, so it can start playback before the whole file is downloaded.
For MP4 files, it is worth noting that you need to have the header information, which is contained in a 'block' or 'atom' called the MOOV atom, at the start of the file rather than the end - it is at the end for regular MP4 files. There are a number of tools which will allow you to move it to the start, e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
You are definitely on the right track with your investigations - video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services rather than build them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html
Not sure if this is something obvious or not. After creating a YouTube LiveBroadcast, binding it to a LiveStream with a specific CDN format (let's say "720p"), and transitioning the broadcast from "ready" to "live"... how can I change the stream quality without having to create a new broadcast?
Trying to unbind the current stream - an exception is returned; the stream cannot be unbound.
Trying to bind the broadcast to another stream - the same exception as above.
In addition, after looking through the support pages for YouTube live streaming, it is suggested that "ingest settings cannot be modified after the broadcast has started" - it says nothing about the actual API not supporting this, but it looks like a major limitation somewhere deeper. I had thought it applied only to the web Live Control Room.
I need this functionality so that I can change the stream quality when a user switches from WiFi to mobile data. Currently, streaming RTMP data at a resolution other than the one the LiveStream CDN format is configured for results in health errors and encoding artifacts on YouTube's side. As suggested by the support pages, creating a "1080p" live stream ("maximum expected resolution") should work, but when that stream receives a 720p or 480p feed, then depending on whether the broadcast has started or not, it either doesn't start at all or goes to a gray scene with high-pitched audio (my stream is sent correctly, since I can output it to a dozen other destinations, like MP4, FLV, and other RTMP servers).
Solution?
HLS streams can be "live" or "VOD". Downloading a VOD HLS stream is easy.
However, I want to download (or record) say 5 minutes of a LIVE HLS stream. Is this possible?
If I do so, I am sure I will have to make significant changes to the m3u8 file... One reason is that live streams do not have a "duration", but the stream I download has to be served as VOD, so it must have a duration. There might be various other changes required that I am not aware of. Presumably the URLs of the ts segments would also need to be changed.
Any tips or advice (hopefully actual code!)?
Thanks!
PS. Note that this question is not about playing back the stream in offline mode - I know I need an HTTP server for that.
The live playlist uses a sliding window. You need to reload it periodically, after each target duration, and download only the new segments as they appear in the list (they will be removed from the playlist later).
Save the #EXTINF for each segment and start writing them in a VOD playlist using the same target-duration and a media sequence starting at 0.
When you want to stop recording, add the EXT-X-ENDLIST tag at the end.
It doesn't matter how you name your segments as long as you use the same name in the m3u8.
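Putting those steps together, a recording of, say, five minutes of a live stream would end up as a VOD-style playlist along these lines (segment names and durations are illustrative):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.9,
segment0.ts
#EXTINF:9.9,
segment1.ts
...
#EXTINF:9.9,
segment29.ts
#EXT-X-ENDLIST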
I have a website that plays back videos using the HTML5 video tag and the JavaScript API for the video tag, and also plays back YouTube videos using the YouTube JavaScript API. I notice a bug in some browsers for YouTube videos - when I seek to a certain point and play (all this in response to a user clicking a button), the video doesn't seek; it plays from the beginning.
This is not a problem for videos played back with the HTML5 video tag. I think the reason it is not a problem is that I use the "preload" option with that tag, which means the video is mostly loaded and buffered before the user even clicks the button that triggers the seek.
So to get it to work with YouTube, I need an equivalent of "preload", or perhaps I can set autoplay to true but then pause the video after a millisecond, just to get the buffering started.
Is there some solution to this that I don't know about?
Which function are you using to do the seeking? It sounds a little like you might be using "playVideoAt()" -- with that function, if you specify a time beyond what is loaded, then it is expected behavior for it to start to play at the beginning. If, however, you're using the "seekTo()" method, then it shouldn't be doing that ... seekTo() should allow playing beyond what is loaded without a problem.
Another possibility would be to see if the "allowSeekAhead" parameter (to the "seekTo()" method) is being set or not.