How to play smooth streaming video on Chromecast?

According to the Chromecast Developers page, Chromecast supports the SmoothStreaming container, which I believe uses video chunks with the .ismv extension. I am having problems getting those video files to play.
If I am not mistaken, Chrome/Chromecast's implementation of the video tag only supports .mp4 and .webm files, so using cast.MediaLoadRequest (in a Chrome Sender App) would not work if you pass it a URL for a manifest file or .ismv container.
It does seem possible to write code that stitches together MPEG-DASH chunks using the MediaSource API from an MPEG-DASH manifest file. However, it doesn't appear that Chrome's implementation of the MediaSource spec supports .ismv chunks, and therefore it offers no means to play smooth streaming video.
Assuming you parsed a manifest file to get the smooth streaming video chunks, how would it be possible for Chromecast to play .ismv H.264 containers, such as the ones that can be found here? Or does Chrome not support .ismv files? If so, what SmoothStreaming containers does Chrome/Chromecast support?

Chromecast supports MPEG-DASH and Smooth Streaming. See more detail here:
https://developers.google.com/cast/supported_media_types
We'll provide some code snippets for Smooth Streaming soon. Stay tuned.

The default Receiver provided does not support SmoothStreaming (nor MPEG-DASH).
You'll need to code your own receiver to do so.
See https://stackoverflow.com/a/17978070/2665789 for a little more help.
Hopefully Google posts working samples of Live streaming soon!

You can throw SmoothStreaming at some of the sample receivers provided by Google.
The cast-custom-receiver and the Cast-Media-Player-Library-Sample support SS with PlayReady encryption out of the box.
Well, you need to do some tricks, like modifying the extension from "ism/" to "ism/Manifest", and it just works. You'll need to do the same in the [cast-sender-tool-chrome], adding the file extension to the list of three inside the main HTML file.

Related

How can I stream MP4 videos from S3 without AVPlayer downloading the files before playing them?

I have a lot of long (45 mins - 90 mins) MP4 videos in a public S3 bucket and I want to play them in my iOS app using AVPlayer.
I am using AVPlayerViewController to play them but I need to wait several minutes before they start playing as it downloads the whole video rather than streaming it.
I am caching it locally so this is only happening the first time but I would love to stream the video so the user doesn't have to wait for the entire video to download.
Some people are pointing out that I need CloudFront to stream videos, but in the documentation I've read that this is only necessary when you have many people streaming the same file. I'm building an MVP, so I only need a simple solution.
Is there any way to stream an MP4 video from an S3 bucket with AVPlayerViewController without it fully downloading the file before playing it to the user?
TLDR
AVPlayer does not support 'streaming' (HTTP range requests) as you would define it, so either use an alternative video player that does or use a real media streaming protocol like HLS which is supported by AVPlayer & would start the video before downloading it all.
CloudFront is great for delivery in general but is not truly needed - you may have seen it mentioned due to CloudFront RTMP distributions but they now have been discontinued.
Detailed Answer
S3 supports a concept called byte-range fetches using HTTP range requests - you can verify this by doing a HEAD request to your video file & seeing that the Accept-Ranges header exists with a value set to bytes (or not 'none').
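You can check this from the command line with a HEAD request (the bucket URL below is just a placeholder):
curl -I "https://your-bucket.s3.eu-west-1.amazonaws.com/videos/video1.mp4"
If the response includes Accept-Ranges: bytes, the object supports byte-range fetches; if the header is missing or set to none, it does not.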
Load your MP4 file in the browser & notice that it can start as soon as you click play. You're also able to move to the end of the video file and yet, you haven't really downloaded the entire video file. HTTP range requests are what allow this mechanism to work. Small chunks of the video can be downloaded as & when the user gets to that part of the video. This saves the file server & the user bandwidth while providing a much better user experience than the client downloading the entire file.
The server would need to support byte-range fetches in the first instance before the client can then decide to make range requests (or not to). The key is that, once the server supports it, it is up to the HTTP client to decide whether it wants to fetch the data in chunks or all in one go.
This isn't really 'streaming' as you know it & are referring to in your question but it is more 'downloading the video from the server in chunks and playing it back' using HTTP 206 Partial Content responses.
You can see this in the Network tab of your browser as a series of multiple 206 responses when seeking in the video. The entire video is not downloaded but the video is streamed from whichever position that you skip to.
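You can reproduce the same behaviour outside the browser by asking for an explicit byte range yourself (again, the URL is a placeholder):
curl -s -D - -o /dev/null -H "Range: bytes=0-1023" "https://your-bucket.s3.eu-west-1.amazonaws.com/videos/video1.mp4"
A server that honours the range replies with HTTP/1.1 206 Partial Content and a Content-Range header describing the slice it returned.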
The problem with AVPlayer
Unfortunately, AVPlayer does not support 'streaming' using HTTP range requests & HTTP 206 Partial Content responses. I've verified this manually by creating a demo iOS app in Xcode.
This has nothing to do with S3 - if you stored these files on any other cloud provider or file server, you'd see that the file is still fully loaded before playing.
The possible solutions
Now that the problem is clear, there are 2 solutions.
Using an alternative video player
The easiest solution is to use an alternative video player which does support byte-range fetches. I'm not an expert in iOS development so I sadly can't help in recommending an alternative but I'm sure there'll be a popular library that the industry prefers over the in-built AVPlayer. This would provide you with your (extremely common) definition of 'streaming'.
Using a video streaming protocol
However, if you must use AVPlayer, the solution is to implement true media streaming with a video streaming protocol - true streaming also allows you to leverage features like adaptive bitrate switching, live audio switching, licensing etc.
There are quite a few of these protocols available like DASH (Dynamic Adaptive Streaming over HTTP), SRT (Secure Reliable Transport) & last but not least, HLS (HTTP Live Streaming).
Today, the most widely used streaming protocol on the internet is HLS, created by Apple themselves (hey, maybe the reason to not support range requests is to force you to use the protocol). Apple's own documentation is really wonderful for delving deeper if you are interested.
Without getting too much into protocol detail, HLS will allow playback to start more quickly in general, fast-forwarding can be much quicker & delivers video as it is being watched for the true streaming experience.
To go ahead with HLS:
Use AWS Elemental MediaConvert to convert your MP4 file to HLS format - the resulting output will be 1 (or more) .M3U8 manifest files in addition to .ts media segment file(s); an ffmpeg-based alternative is sketched after these steps
Upload the resulting output to S3
Point AVPlayer to the .M3U8 file
let asset = AVURLAsset(url: URL(string: "https://ermiya.s3.eu-west-1.amazonaws.com/videos/video1playlist.m3u8")!)
let item = AVPlayerItem(asset: asset)
...
Enjoy near-instant loading of the video
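If MediaConvert feels like overkill for an MVP, ffmpeg can also produce a basic single-rendition HLS package. This is just a sketch: the file names are placeholders, and the source is assumed to already be H.264/AAC so it can be copied without re-encoding:
ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_playlist_type vod -hls_segment_filename "video1_%03d.ts" video1playlist.m3u8
Here -hls_time sets the target segment duration and -hls_playlist_type vod marks the playlist as complete so players treat it as on-demand content. Upload the resulting .m3u8 and .ts files to S3 together.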
CloudFront
In regards to Amazon CloudFront, it isn't required per se & S3 is sufficient in this case but a quick Google search will mention loads of benefits that it provides, especially caching which can help you save on S3 costs later on.
Conclusion
I would go with converting to HLS if you can, as it will yield more possibilities down the line & is a better true streaming experience in general, but given iOS AVPlayer's restrictions, using an alternative video player will work just as well.
Whether to use CloudFront or not will depend on your user base, usage of S3 and other factors.
As you're creating an MVP, I would recommend just doing a batch conversion of your MP4 files to HLS format & not using CloudFront which would add additional complexity to your cloud configuration.
Like @ErmiyaEskandary said, you could just use HLS to solve your problem, which is probably a good idea, but you should not have to wait for the entire MP4 file to download before playing it with AVPlayer. The issue is actually not with AVPlayer or byte-range requests at all, but rather with how your MP4 files are formatted.
You could have your MP4 files configured incorrectly for streaming. MP4s have a metadata section called the MOOV atom. By default, many encoders put this at the end of the file. In that case, the player has to download the entire file before it can begin playing.
For streaming use cases, this needs to be at the front of the file. The player then only needs to buffer the MOOV atom, and it can begin playing the video as the data is loaded.
You can use ffmpeg with the fast start flag enabled to move the MOOV atom to the beginning of the file.
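For reference, a typical invocation re-muxes the file without re-encoding and moves the MOOV atom to the front (file names are placeholders):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4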

How to build a simple video streaming server?

I am a newbie in video streaming and I just built a sample website which plays videos. Here I just give the video file location to the video tag in HTML5. I noticed that on YouTube the video tag contains a blob URL and had a look into this. I found that the video data comes in segments, and I came across a term called pseudo-streaming. The website that I built, by contrast, seems to download the whole file and then play the video. I am not trying to do any live streaming, just trying to stream local videos. I thought maybe the way video data is received in segments is done by a video streaming server. I came across the RED5 open source streaming server, but most of the examples given are for live streaming, which I am not experimenting with. It's been a few days and I am not sure whether I am on the right track.
The segmented approach you refer to is to support Adaptive Bit Rate streaming - ABR.
ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and select the next chunk from the bit rate most appropriate to the current network conditions. See here for an example:
https://stackoverflow.com/a/42365034/334402
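For HLS, for example, the available renditions are declared in a master playlist and the player picks whichever suits current network conditions. A minimal, purely illustrative master playlist (the variant paths are placeholders) looks like this:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
high/index.m3u8
Each variant playlist then lists the individual media segments for that bitrate.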
For your existing site, so long as your server supports range requests, you are probably not actually downloading the whole video. With range requests, the browser or player will request just part of the file at a time so it can start playback before the whole file is downloaded.
For MP4 files, it is worth noting that you need to have the header information, which is contained in a 'block' or 'atom' called the MOOV atom, at the start of the file rather than the end - it is at the end for regular MP4 files. There are a number of tools which will allow you to move it to the start - e.g.:
http://multimedia.cx/eggs/improving-qt-faststart/
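The qt-faststart tool discussed in that article (a copy also ships in ffmpeg's source tree under tools/) is used roughly like this, with placeholder file names:
qt-faststart input.mp4 output.mp4
ffmpeg itself can achieve the same result with the -movflags +faststart output option when re-muxing.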
You are definitely on the right track with your investigations - video hosting and streaming is a specialist area, so it is generally easier to leverage existing streaming technologies and services rather than to build them yourself. Some good places to look to get a feel for open source solutions:
https://gstreamer.freedesktop.org
http://www.videolan.org/vlc/streaming.html

YouTube stream from video file

Is it possible to create a live event by simply using a video file instead of a web camera? I don't see an option like this in live event creation.
For doing this directly on YouTube: No
For encoding a video file and pushing it to YouTube in real time: Yes
How to do it?
Try Wirecast Play. It works just like a live-feed console but is free with some limits. Other RTMP tools may also work; one of them is ffmpeg, which can push a file to an RTMP ingest URL. I have tried it before and can confirm it works, but it is a command-line-only backend. For more functionality, you need a front-end app (you can stream/pipe to ffmpeg).
For ffmpeg's RTMP support, read this:
https://www.ffmpeg.org/ffmpeg-protocols.html#rtmp
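As a rough sketch of the ffmpeg route (the stream key at the end is a placeholder you get from YouTube's live dashboard):
ffmpeg -re -i video.mp4 -c:v libx264 -preset veryfast -c:a aac -f flv "rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY"
The -re flag reads the file at its native frame rate, so the video is pushed in real time rather than as fast as ffmpeg can encode it.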

Can Amazon Cloudfront Stream to an iOS Device

I am building a CDN. I want to be able to stream to an iPhone and iPad. Is this possible using Amazon Cloudfront?
Let me clarify. Is there any documentation anywhere or an example anywhere of someone doing this?
Progressive download works if you ensure that the media's metadata is at the beginning of the file. Google "ffmpeg qt-faststart" to accomplish this in the easiest manner (in my experience). If this is not done, the player (in iOS) must download the complete file before it gets to the metadata that it needs to read in order to play. If you are not doing this step in your production workflow, then your progressive download is not functioning as "progressive download"; it is actually downloading the entire file (as stated before... so it can get to the metadata) and then playing. This can be done with any video/audio file supported by your platform.
NOTE: I am not sure how this affects any attempts at high-speed scrubbing. It seems the file would need to be downloaded to the point that the app is trying to scrub to.
Another alternative may be to create the format needed for iOS streaming (using a segmenter/transcoder) and serve up those files over HTTP on your regular CloudFront distribution. Theoretically that should work.
To be more clear - CloudFront uses an older version of Flash Media Server (v3.5) that supports streaming through various RTMP protocols. These can be enabled by creating a Streaming Distribution (that is how we do streaming for web and Android) and using something like JW Player on the front end.
http://help.adobe.com/en_US/FlashMediaServer/3.5_TechOverview/WS5b3ccc516d4fbf351e63e3d119ed944a1a-7ffa.html
http://www.adobe.com/devnet/logged_in/ktowes_fms35.html
iOS streaming is done using HTTP Live Streaming, which is different. https://developer.apple.com/streaming/
Your options would be to do as I mentioned above, or use EC2 and stand up your own FMS 4.5 instance ( http://aws.typepad.com/aws/2012/03/live-streaming-cloudfront-fms-4-5.html ).
Have struggled a lot over this.
Finally got it working through AudioStreamer - love this:
http://www.cocoawithlove.com/2009/06/revisiting-old-post-streaming-and.html
Awesome approach.
You simply want to use Progressive Download, which means upload the file to S3, create a distribution, and go! It's super simple.

How to implement the Adobe HTTP Streaming spec without using their Streaming server

As of Flash 10.1, they have added the ability to add bytes into the NetStream object via the appendBytes method (described here http://www.bytearray.org/?p=1689). The main reason for this addition is that Adobe is finally supporting HTTP streaming of video. This is great, but it seems that you need to use the Adobe Media Streaming Server (http://www.adobe.com/products/httpdynamicstreaming/) to create the correct video chunks from your existing video to allow for smooth streaming.
I have tried to do a hacked version of HTTP streaming in the past where I swap out the NetStream objects (similar to here http://video.leizhu.com/video.html), but there is always a momentary pause between the chunks. With the new appendBytes, I tried to do a quick mock up with the two sections of video from the preceding site, but even then, the skip still remains.
Does anyone know how the two consecutive .FLV files need to be formatted in order for the appendBytes method on the NetStream object to create a nice smooth video without a noticeable skip between the segments?
I was able to get this working using Adobe's File Packager Tool which Samuel described. I didn't use the NetStream object but I used the OSMF Sample Player, which I assume uses this internally. Here's how to do it without using FMS:
Get Adobe's File Packager for Http Dynamic Streaming from http://www.adobe.com/products/httpdynamicstreaming/
Run the File Packager on an existing MP4 file containing H.264/AAC like this:
C:\Program Files\Adobe\Flash Media Server 4\tools\f4fpackager>
f4fpackager.exe --input-file="MyFile.mp4" --segment-duration=30
This will result in 30-second-long F4F files, plus an F4X and an F4M file. The F4F files are your correctly segmented (and fragmented) MP4 files that should play.
If you want to test this using the OSMF Player also do the following:
Get Apache Server
Get Adobe's Http Origin Module for Apache from http://www.adobe.com/products/httpdynamicstreaming/
Install the module according to http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS8d6ed60bd880807c48597a9e1265edd6cc0-8000.html
Put the F4F, F4X and F4M file into the vod directory under httpdocs
Get the “OSMF Sample Player for HTTP Dynamic Streaming” from http://www.osmf.org/downloads/OSFMPlayer_zeri2.zip
Put the Sample Player in the httpdocs directory
Load the HTML file from the Sample Player in a browser, e.g. http://localhost/OSMFPlayer.html
Press the eject button and put in the URL of your F4M file; it should play
So, to answer the original question: Adobe's File Packager is the file splitter to use; you don't need to buy FMS to use it, and it works for FLV and MP4/F4V files.
You don't need to use their server. Wowza supports Adobe's version of HTTP Streaming and you can implement it yourself by segmenting the videos properly and loading all the segments on a standard HTTP server.
Links to all the specs for Adobe's HTTP Streaming are here:
http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS9463dbe8dbe45c4c-1ae425bf126054c4d3f-7fff.html
Trying to hack the client to do some custom-style HTTP streaming will be a lot more troublesome.
Note that HTTP Streaming does not support streaming several different videos, but rather streams a single file that was broken up into separate segments.
File Packager
A command-line tool that translates on-demand media files into fragments and writes the fragments to F4F files. The File Packager is an offline tool. You can use the File Packager to encrypt files for use with Flash Access. For more information, see Packaging on-demand media.
The File Packager is available from adobe.com and is installed with Adobe® Flash® Media Server to the rootinstall/tools/f4fpackager folder.
The packager download link is right here: Download File Packager for HTTP Dynamic Streaming
http://www.adobe.com/products/httpdynamicstreaming/
You could use F4Pack, a GUI around Adobe's command-line tool, which lets you process your FLV/F4V files so they can be used for HTTP Dynamic Streaming.
The place in the OSMF code where this happens is the timer-fired state machine inside of the HTTPNetStream class implementation... might be an informative read. I think I even put some helpful comments in there when I wrote it.
As far as the general question:
If you read an entire FLV file into a ByteArray and pass it to appendBytes, it will play. If you break that FLV file in half, and pass the first half as a byte array and then the second half as a byte array, that will play as well.
If you want to be able to switch around between bitrates without a gap, you need to split up your FLV files at matching keyframe points... and remember that only the first call to appendBytes has the initial FLV file header ('F', 'L', 'V', flags, offset)... the rest just expect a continuation of the FLV byte sequence.
I recently found a similar project for node.js to achieve m3u8 transcoding (https://github.com/andrewschaaf/media-server) but have yet to hear of one besides Wowza doing it outside of the Origin module for Apache. Since the payloads are nearly identical, you're better off looking for a good MP4 segmenting solution (plenty out there) than looking for F4M segmenting. The problem is that moov atoms, especially on larger MP4 videos, are difficult to manage and put in their proper initial (near the beginning of the file) location. Even using optimal ffmpeg settings and qt-faststart, you end up with noticeably slower seeking, inefficient bandwidth usage (usually greedy), and a few minor headaches relating to scrubbing/time that you don't get with FLV/F4V playback.
In my player I switch (or intend to switch) between HTTP Dynamic Streaming (HDS) and MP4 based on load and real-time log parsing in Apache using awk/cron, instead of licensing Adobe's Access product for stream protection. Both have unique 'onmetadata' handlers, but in the end I receive sequenced time/byte hashes that are virtually equivalent; MP4 is just slower. So mod_origin is really just a synchronizer / request router for Flash clients (over HTTP).

I'm still looking for ways to speed up MP4-container-based playback. One incredible solution I read about recently, and was rather awestruck by, is http://zehfernando.com/2011/flash-video-frame-time-woes/ where a video editor and a Flash developer came up with their own MP4 timecoding solution: it literally added (via an Adobe Premiere script) about 50 pixels to the bottom of every video frame with a visual 'binary' stamp, like a frame barcode, and those binary values translate into highly accurate timecode values. So Flash could analyze the video frames as they were painted (in real time) and determine precisely where the player was and what bytes were needed from any kind of MP4 byte-segmenting-friendly webserver. The thing is (and perhaps I'm wrong here) that Flash seems to arbitrarily choose when it gets to moov data, especially on large video files (0.5-1.5 GB), even if you make sure to run your MP4 through MP4Box (i.e. MP4Box -frag 10000 -inter 0 movie.mp4). I guess this is a problem OSMF and HDS have worked on quite well by now, though it is annoying that you need Apache and a proprietary closed-source module to use it, IMO. It's probably just a matter of time before open source implementations arrive, as HDS is only 1-2 years old and it just needs a little reverse engineering, like that Andrew Chaaf guy with node.js + MPEG-TS streaming (live or not).

In the end I may just use OSMF exclusively beneath my UI, as it seems to have similar virtues to HDS if not more so - i.e. Strobe, if you need a seriously extensible HDS or MP4 open player platform to hack on to realize your own custom player.
Adobe's F4F format is based on MP4 files, are you able to use F4V or MP4 instead of FLV files?
There are plenty of MP4 file splitters around, but you would need to make sure the timestamps in the files are continuous; maybe the pause happens when it sees a zero timestamp within the audio or video stream inside the file.
