I'm using Postman to test a continuous webm video stream, but the range request is being ignored.
If I try an image on the same server, the range works OK. If I try it on a fixed-length video, it also works OK.
This link has the fixed-length webm video I tested, which worked: https://github.com/rubu/WebMPlayer/blob/master/Samples/big_buck_bunny_live.webm?raw=true
I can't provide the specific continuous stream I'm using because it's a corporate asset, nor was I able to find any live webm streams on the web.
Does anyone know how I can make range requests work for streams? If someone knows of a continuous webm stream we could use as a test here, that would also be helpful.
What I ultimately want is a request I can include in a JMeter load test, so an answer that demonstrates how to build a request in JMeter that loads a small amount of a continuous webm stream is valid too.
You can see that accepting ranges is a server-side configuration, as explained here.
You'll see that on a less public server (than GitHub) you will be able to send a GET request and receive an Accept-Ranges: bytes header in the response, so it depends on the configuration.
My working example is illustrated here:
NB: as stated in RFC 7233's introduction, this is only possible with GET requests.
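For reference, here is a minimal sketch of such a request in Python (using the requests library against a placeholder URL - substitute your own stream). In JMeter the equivalent is an HTTP Request sampler with an HTTP Header Manager that adds the same Range header:

    import requests  # third-party: pip install requests

    url = "http://example.com/live.webm"  # placeholder, not a real stream
    resp = requests.get(url, headers={"Range": "bytes=0-1023"}, stream=True)

    print(resp.status_code)                   # 206 = range honoured, 200 = range ignored
    print(resp.headers.get("Accept-Ranges"))  # "bytes" if the server advertises range support
    print(resp.headers.get("Content-Range"))  # e.g. "bytes 0-1023/12345" on a 206 response

    # Read only the requested bytes, then stop - important for a continuous stream.
    chunk = next(resp.iter_content(chunk_size=1024))
    print(len(chunk))
    resp.close()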
Related
Overview
I have a server generating a livestream of video that is exposed as a fragmented MP4 file.
That file is being served to an iOS emulator trying to play the video using react-native-video, which I believe uses AVPlayer.
The first request the emulator makes is a range request for bytes 0-1. I record the X-Playback-Session-Id and respond with 206 Partial Content, bytes 0-1, and the header Content-Range: bytes 0-1/*. According to the specification, a size of * indicates that the total length is unknown.
I then receive an error from AVPlayer stating that the server is not correctly configured. According to the Apple docs this indicates the server does not support range requests.
I have implemented support for range requests. As an experiment, I set the Content-Range to respond with a very large size instead of * (bytes 0-1/17179869176), which works to an extent: AVPlayer follows up with multiple range requests for different byte ranges (0-17179869175), though sometimes it only requests a single range.
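To make the two response shapes concrete, here is a minimal sketch using Python's built-in http.server (a stand-in, not the actual back-end) showing the spec-compliant bytes 0-1/* response and the large-fake-size workaround:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    FAKE_TOTAL = 17179869176  # the "very large size" used in the workaround

    class RangeProbeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Stand-in for answering AVPlayer's probe for bytes 0-1 of the stream.
            body = b"\x00\x00"  # placeholder for the first two bytes of the fragmented MP4
            self.send_response(206)
            self.send_header("Content-Type", "video/mp4")
            self.send_header("Accept-Ranges", "bytes")
            # Spec-compliant form when the total length is unknown:
            #   self.send_header("Content-Range", "bytes 0-1/*")
            # Workaround that keeps AVPlayer issuing range requests:
            self.send_header("Content-Range", "bytes 0-1/%d" % FAKE_TOTAL)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), RangeProbeHandler).serve_forever()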
This buffers for a while and displays nothing until I stop the server (with a breakpoint); a short while after, the video stops buffering (but does not close any active connections) and plays what it has loaded so far. Given that this is a livestream, that's not acceptable.
Playing the livestream inside Chrome or an Android emulator works exactly as I'd expect: the video plays as soon as it gets the necessary data. But Chrome also does not require any byte-range support to be able to play a video.
I can understand that without a Content-Length AVPlayer is unable to make range requests, as it doesn't know where the file ends. However, since the media I'm exposing is a livestream, I don't have a meaningful Content-Length to give it. So there must be something I can specify, either in headers on the server or as AVPlayer settings on the client, that states the video is a livestream and so cannot be handled through range requests, or that it must request chunks of footage at a time.
I've looked online and found some useful documents on livestreaming, though all of them revolve around HLS and m3u8 playlist files. However, changing the back-end to generate m3u8 playlists and to decode the video to work out chunk durations correctly would probably take weeks or months of development time, and I don't understand why it would be necessary, given that I'm only exposing a single resolution of a single video that does not need to seek, and that it works perfectly fine on Android.
After having spent so long and having come across so many hard-to-resolve issues, it's starting to feel like I've somehow gone down the wrong path and am going about this completely the wrong way.
My question is two-fold:
Does AVPlayer support live footage served directly from a fragmented MP4 file?
If so, how do I implement it?
What I'm doing
Basically I'm writing a simple Q&A site with an option to create links to specific positions in media files. As of now the app is intended to be used in a LAN environment only.
I have put a video in the appRoot/public folder and created a view using the HTML5 video tag.
It works and even seeking is available. Wow...
What I don't understand
I'm clueless as to the tech behind it and its limitations.
It just worked, so I don't even know a keyword to hit Google with.
What I know
With the way I'm doing it:
No encryption
No way to prevent users from saving video files
No automatic trans-coding available
The real question
What is the name of the tech behind this?
How well can Rails handle streaming and seeking requests the way I'm doing it, compared to using dedicated video streaming servers or gems?
As long as your underlying web server understands how to handle the MIME types for video and responds correctly to byte-range requests - as it seems to - that's all you need. The underlying mechanic of streaming video with HTML5 is that the browser asks for a chunk of content as a range of bytes from the source (enough to keep its buffer full) and the server delivers it.
You might want to look at using ffmpeg to optimize your videos so that the metadata is in the right place in the file to start streaming quicker.
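For example, a minimal sketch (Python calling ffmpeg; file names are placeholders and ffmpeg is assumed to be on the PATH) of the usual remux that moves the moov atom to the front of an MP4 so playback can start before the whole file has downloaded:

    import subprocess

    # Remux only (-c copy, no re-encode); +faststart relocates the moov atom to the
    # start of the file so the browser can begin playback while still downloading.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-c", "copy",
        "-movflags", "+faststart",
        "output.mp4",
    ], check=True)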
You've correctly pointed out the limitations of the solution in your environment. The other thing to be aware of is capacity: if the videos are long and a lot of people access them concurrently, then without caching (in a LAN via a caching proxy, or on the internet via a CDN service) your server capacity may be stretched.
From what I have gathered so far, Apple provides tools to make a Mac act as an HTTP Live Streaming server. But my goal is different: I want to make iDevices act as the HTTP Live Streaming server (for the local network only).
Can it be done at all?
Yes and no. Apple does not provide a way to stream encoded media data, so that part is 100% up to you. Also, Apple does not provide a way to access encoded frames directly (i.e. you can easily get an encoded file or the raw frames, but not easily get encoded frames). So you need to develop a way to get these encoded frames from the files for streaming, or encode the raw frames on the fly.
It may or may not fit your use case, but if you first write the streamer portion, you should be able to save small/short clips to disk and stream them out as they are created, with minimal overall latency.
I am having a problem with HTML5 video on iPad. It works in all major browsers. I have hosted a video on Apache Tomcat and tried it on the iPad, and it works well. But I need to play a URL that has matrix parameters on the iPad.
It would be great if somebody could tell me how I can host a video on Apache Tomcat and add some matrix parameters to it. It works if we add some junk query parameters to the URL; if we add matrix parameters in the same way, it does not work.
Is there any specification that says matrix parameters will not work on iPad?
This question is a little old, but our research may help someone else who chances across this.
We spent some time diagnosing problems playing back content on iOS/QuickTime using URLs with matrix parameters. Specifically we were trying to play HLS content using URLs of the form 'http://myserver.mydomain.com/path;a=b.m3u8'
The video would play fine the first time, then fail the second (and every other) time. If the matrix parameters weren't present, the video played fine every time.
Eventually we concluded it was an issue with iOS/QuickTime writing bad cache entries. The first time the server returned content, it was a cache miss on iOS/QuickTime, so it played fine. The next time, the server returned a 304 (not modified, i.e. a cache hit); iOS/QuickTime tried to pull the content from its cache, this failed, and so the video wouldn't play.
Our solution was to prevent client caching by setting the Cache-Control header to no-cache. Another solution would be to not use matrix parameters.
Note that in terms of HLS, this bug only seemed to occur with the first m3u8 file loaded -- m3u8 URLs listed in the first m3u8 that contained matrix parameters seemed to play fine.
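As an illustration of that fix (a minimal Flask sketch with a hypothetical route and playlist file, not our production setup), the point is simply to send Cache-Control: no-cache with the playlist:

    from flask import Flask, Response  # third-party: pip install flask

    app = Flask(__name__)

    @app.route("/stream.m3u8")  # hypothetical route
    def playlist():
        with open("stream.m3u8") as f:  # hypothetical playlist on disk
            resp = Response(f.read(), mimetype="application/vnd.apple.mpegurl")
        # Prevent iOS/QuickTime from caching the response and later tripping over a 304.
        resp.headers["Cache-Control"] = "no-cache"
        return resp

    if __name__ == "__main__":
        app.run(port=8080)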
As of Flash 10.1, they have added the ability to add bytes into the NetStream object via the appendBytes method (described here http://www.bytearray.org/?p=1689). The main reason for this addition is that Adobe is finally supporting HTTP streaming of video. This is great, but it seems that you need to use the Adobe Media Streaming Server (http://www.adobe.com/products/httpdynamicstreaming/) to create the correct video chunks from your existing video to allow for smooth streaming.
I have tried to do a hacked version of HTTP streaming in the past where I swap out the NetStream objects (similar to here http://video.leizhu.com/video.html), but there is always a momentary pause between the chunks. With the new appendBytes, I tried to do a quick mock up with the two sections of video from the preceding site, but even then, the skip still remains.
Does anyone know how two consecutive .FLV files need to be formatted in order for the appendBytes method on the NetStream object to create a nice smooth video without a noticeable skip between the segments?
I was able to get this working using Adobe's File Packager Tool, which Samuel described. I didn't use the NetStream object but I used the OSMF Sample Player, which I assume uses it internally. Here's how to do it without using FMS:
Get Adobe's File Packager for Http Dynamic Streaming from http://www.adobe.com/products/httpdynamicstreaming/
Run the File Packager on an existing MP4 file containing H.264/AAC like this:
C:\Program Files\Adobe\Flash Media Server 4\tools\f4fpackager>
f4fpackager.exe --input-file="MyFile.mp4" --segment-duration=30
This will result in 30-second-long F4F files, plus an F4X and an F4M file. The F4F files are your correctly segmented (and fragmented) MP4 files, and they should play.
If you want to test this using the OSMF Player also do the following:
Get Apache Server
Get Adobe's Http Origin Module for Apache from http://www.adobe.com/products/httpdynamicstreaming/
Install the module according to http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS8d6ed60bd880807c48597a9e1265edd6cc0-8000.html
Put the F4F, F4X and F4M file into the vod directory under httpdocs
Get the “OSMF Sample Player for HTTP Dynamic Streaming” from http://www.osmf.org/downloads/OSFMPlayer_zeri2.zip
Put the Sample Player in the httpdocs directory
Load the HTML file from the Sample Player in a browser, e.g. http://localhost/OSMFPlayer.html
Press the eject button and put in the URL of your F4M file; it should play
So, to answer the original question: Adobe's File Packager is the file splitter to use; you don't need to buy FMS to use it, and it works for FLV and MP4/F4V files.
You don't need to use their server. Wowza supports Adobe's version of HTTP Streaming and you can implement it yourself by segmenting the videos properly and loading all the segments on a standard HTTP server.
Links to all the specs for Adobe's HTTP Streaming are here:
http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS9463dbe8dbe45c4c-1ae425bf126054c4d3f-7fff.html
Trying to hack the client to do some custom style http streaming will be a lot more troublesome.
Note that HTTP Streaming does not support streaming several different videos, but streams a single file that has been broken up into separate segments.
File Packager
A command-line tool that translates on-demand media files into fragments and writes the fragments to F4F files. The File Packager is an offline tool. You can use the File Packager to encrypt files for use with Flash Access. For more information, see Packaging on-demand media.
The File Packager is available from adobe.com and is installed with Adobe® Flash® Media Server to the rootinstall/tools/f4fpackager folder.
The Packager download link is here: Download File Packager for HTTP Dynamic Streaming
http://www.adobe.com/products/httpdynamicstreaming/
You could use F4Pack, a GUI around Adobe's command-line tool, which lets you process your FLV/F4V files so they can be used for HTTP Dynamic Streaming.
The place in the OSMF code where this happens is the timer-fired state machine inside of the HTTPNetStream class implementation... might be an informative read. I think I even put some helpful comments in there when I wrote it.
As far as the general question:
If you read an entire FLV file into a ByteArray and pass it to appendBytes, it will play. If you break that FLV file in half, and pass the first half as a byte array and then the second half as a byte array, that will play as well.
If you want to be able to switch around between bitrates without a gap, you need to split up your FLV files at matching keyframe points... and remember that only the first call to appendBytes has the initial FLV file header ('F', 'L', 'V', flags, offset)... the rest just expect a continuation of the FLV byte sequence.
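As a rough illustration of that byte layout (a Python sketch with a hypothetical file name - the actual feeding happens in ActionScript via NetStream.appendBytes):

    # Read an FLV and split it into two arbitrary chunks for appending.
    with open("movie.flv", "rb") as f:  # hypothetical file
        data = f.read()

    # The 9-byte FLV header only appears once, at the very start:
    # 'F' 'L' 'V', version, type flags, then the 4-byte header size (offset).
    assert data[:3] == b"FLV"

    half = len(data) // 2
    first_chunk = data[:half]   # carries the FLV header; first appendBytes() call
    second_chunk = data[half:]  # plain continuation of the byte sequence; later appendBytes() calls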
I recently found a similar project for node.js to achieve m3u8 transcoding (https://github.com/andrewschaaf/media-server) but have yet to hear of one besides Wowza doing it outside of the Origin module for Apache. Since the payloads are nearly identical, you're better off looking for a good MP4 segmenting solution (there are plenty out there) than looking for F4M segmenting. The problem is that moov atoms, especially on larger MP4 videos, are difficult to manage and to put in their proper initial location (near the beginning of the file). Even using optimal ffmpeg settings and qtfaststart, you end up with noticeably slower seeking, inefficient bandwidth usage (usually greedy), and a few minor headaches relating to scrubbing/time that you don't get with FLV/F4V playback.
In my player I have, or intend to have, switching between HTTP Dynamic Streaming (HDS) and MP4 based on load and real-time parsing of Apache logs using awk/cron, instead of licensing Adobe's Access product for stream protection. Both have unique 'onmetadata' handlers, but in the end I receive sequenced time/byte hashes that are virtually equivalent; MP4 is just slower. So mod_origin is just a synchronizer / request router for Flash clients (over HTTP). I'm still looking for ways to speed up MP4-container-based playback.
One incredible solution I read about recently, and was rather awestruck by, is http://zehfernando.com/2011/flash-video-frame-time-woes/ where a video editor and a Flash developer came up with their own MP4 timecoding solution: they literally added (via an Adobe Premiere script) about 50 pixels to the bottom of every video frame with a visual 'binary' stamp, like a frame barcode, and those binary values translate into highly accurate timecode values. So Flash could analyze the video frames as they were painted (in real time) and determine precisely where the player was and which bytes were needed from any kind of MP4-byte-segmenting-friendly web server.
The thing is (and perhaps I'm wrong here), Flash seems to choose arbitrarily when it gets to the moov data, especially on large video files (0.5-1.5 GB), even if you make sure to run your MP4 through MP4Box (i.e. MP4Box -frag 10000 -inter 0 movie.mp4). I guess this is a problem OSMF and HDS have worked on quite well by now, though it is annoying that you need Apache and a proprietary closed-source module to use it, IMO. It's probably just a matter of time before open-source implementations arrive, as HDS is only 1-2 years old and just needs a little reverse engineering, like that Andrew Chaaf guy did with node.js + MPEG-TS streaming (live or not).
In the end I may just end up using OSMF exclusively beneath my UI, as it seems to have similar virtues to HDS, if not more so; i.e. Strobe, if you need an extensible open HDS or MP4 player platform to hack on to realize your own custom player.
Adobe's F4F format is based on MP4 files; are you able to use F4V or MP4 instead of FLV files?
There are plenty of MP4 file splitters around, but you would need to make sure the timestamps in the files are continuous; maybe the pause happens when it sees a zero timestamp within the audio or video stream inside the file.