How to implement 'Adaptive Bit Rate' (HLS) with AVPlayer in iOS 9+

I am trying to implement adaptive bit rate with AVPlayer, but I don't know how to switch between a low and a high stream. I am a bit confused and have a few questions:
Is it the sole responsibility of the server to implement HLS on its side, OR does the client also have to do something about it, OR does the client handle it automatically?
I am getting the following URLs from the server. Can someone tell me how to switch between them based on network speed, and what other steps are involved?
{
"VideoStreamUrl": "http://50.7.149.74:1935/pitvlive/aplus3.stream/playlist.m3u8?",
"VideoStreamUrlLow": "http://50.7.149.74:1935/pitvlive/aplus3_240p.stream/playlist.m3u8?",
"VideoStreamUrlHD": null
}

AVPlayer supports HLS natively, so you shouldn't need to do anything special to support this.
The framework will automatically switch between the variant streams listed in the playlist it loads according to the currently available bandwidth, so you don't actually need to pick a stream yourself; just hand AVPlayer the top-level playlist.m3u8 URL.
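A minimal sketch (my assumptions: the playlist.m3u8 returned above is a master playlist that lists the variant streams, and the bit-rate cap is optional, shown only to illustrate the one knob the client does have):

#import <AVFoundation/AVFoundation.h>

// Hand AVPlayer the top-level playlist URL; variant switching is automatic.
NSURL *url = [NSURL URLWithString:@"http://50.7.149.74:1935/pitvlive/aplus3.stream/playlist.m3u8"];
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:url];

// Optional: cap the bit rate the player is allowed to select, in bits per second.
// 0 (the default) means no limit; a cap can be useful on cellular connections.
item.preferredPeakBitRate = 500000;

AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
// ...add playerLayer to a view's layer hierarchy, then:
[player play];

Note that automatic switching only happens between variants listed in a single master playlist; if the server exposes the low and high streams only as separate, independent playlists, the client's only choice is which one URL to load.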

Related

What options exist to stream video from iOS to browser?

I'm looking for a way to implement real-time streaming of video (and optionally audio) from an iOS device to a browser. In this case the iOS device is the server and the browser is the client.
Video resolution must be in the range 800x600-1920x1080. Probably the most important criterion is lag, which should be less than 500 msec.
I've tried a few approaches so far.
1. HLS
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation
Client: JS, VIDEO tag
Works well. Streams smoothly. The VIDEO tag in the browser handles the incoming video stream out of the box. This is great! However, it has lag that is hard to minimize. It feels like this protocol was built for non-interactive video streaming, something like Twitch, where a few seconds of lag is fine.
I tried enabling Low-Latency HLS. A lot of requests, a lot of hassle with the playlist. Let me know if this is the right option and I just have to push harder in this direction.
2. Compress every frame into JPEG and send to a browser via WebSockets
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation, WebSockets server
Client: JS, rendering via IMG tag
Works super-fast and super-smooth. Latency is 20-30 msec! However, when I receive a frame in the browser, I have to load it from a Blob via a base64-encoded URL. At the start all of this works fast and smoothly, but after a while the browser starts to slow down and lag. I'm not sure why; I haven't investigated too deeply yet. Another issue is that frames compressed as JPEGs are much larger (60-120 KB per frame) than the MP4 video stream of HLS. This means more data is pumped through WiFi, and other WiFi consumers start to struggle. This approach works, but it doesn't feel like a perfect solution.
Any ideas or hints (frameworks, protocols, libraries, approaches, etc.) are appreciated!
HLS
… It feels like this protocol was built for non-interactive video streaming …
Yep, that's right. The whole point of HLS was to utilize generic HTTP servers as media streaming infrastructure, rather than proprietary streaming servers. As you've seen, several tradeoffs are made. The biggest problem is that media is chunked, which naturally causes latency of at least the size of a chunk. In practice it ends up being the size of a couple of chunks; with typical 6-second segments, that is already on the order of 10+ seconds before encoding and delivery delays are added.
"Low latency" HLS is a hack to return to the methods we had before HLS, with servers that just stream content from the origin, in a way compatible with all the HLS stuff we have to deal with now.
Compress every frame into JPEG and send to a browser via WebSockets
In this case, you've essentially recreated a video codec and added the overhead of Web Sockets. Also, by base64-encoding the frames rather than sending them as binary, you're adding extra CPU and memory requirements, as well as ~33% overhead in bandwidth.
If you really wanted to go this route, you could simply use MediaRecorder with an HTTP PUT request: stream the recorder's output to the server, which relays it on to the client over HTTP. The client then just needs a <video> tag referencing some URL on the server, and nothing special for playback. You'll get nice low latency without all the overhead and hassle.
However, don't go that route. Suppose the bandwidth drops out? What if some packets are lost and you need to re-sync? How will you set up communication between each end to continually adjust quality, buffering, codec negotiation, etc.? What if peer-to-peer connections are advantageous?
Use WebRTC
It's a full purpose-built stack for maintaining low latency. Libraries are available for most any stack on most any platform. It works in browsers.
Rather than reinventing all of this, you can take advantage of what's there.
The downside is complexity... it isn't easy to get started with, but well worth it for most low latency use cases.

How to play any HLS stream url in chromecast?

I'm interested in playing m3u8 video streams on a Chromecast device. From studying the docs, as I understand it, it is not necessary to write a Custom Receiver; the Default Receiver or Styled Media Receiver should be enough. But some servers hosting HLS videos have problems with CORS.
What are the options for solving this problem so that any m3u8 stream (from any server) can be played?
Use a CORS proxy, or something else?
You cannot get around the CORS requirement, so your options are limited; the proxy approach seems to be your only option. Note that your options are the same regardless of whether you use a custom receiver or a default/styled one; even if you write a custom receiver, you will run into the very same CORS requirement.

Reduce/remove buffer lag on <video> element (iOS)

We have an FFMPEG stream being streamed to mobile devices. We're using the HTML5 <video src="..." webkit-playsinline> tag to display the video inline (inside a real-time streaming app). We've managed to reduce the delay at the FFMPEG end down to the minimum but there's still a lag at the iOS end, where the player presumably buffers for a couple of seconds.
Is there any way to reduce the client-side delay?
We need as close to real-time as possible and skipping is acceptable.
If you are using an HTML5 video tag, the iOS device will use QuickTime to play back the video. Apple offers no control over internal mechanisms like buffer settings for its QuickTime player. For a project on Apple TV I even worked with a guy at Apple in Cupertino, and they just won't allow any access to the information you would require on their device.
Typically if you use HLS:
Is this a real-time delivery system?
No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.
What is the latency?
Approximately 30 seconds, with recommended settings. See question #15.
For a live streaming scenario on iOS, you are better off tuning the streaming chain before the actual player:
capture -> transcoding -> upload -> streaming server -> delivery -> playback
Using ffmpeg you can tune for zero-latency streaming at the transcoding level, which I understand you have already done. After that, using a well-established streaming server like Wowza and CDN delivery will help you get there (of course at a certain cost, and assuming you need a streaming server at all, which you may not).
If you go all native for your iOS app, you may look at MPMoviePlayerController. I have no experience with native app code on iOS, so I'll let you decide whether it is worth the time (still, I doubt it will be possible because of the underlying QuickTime/HLS layer).
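As a hedged illustration of the all-native branch (MPMoviePlayerController has since been deprecated in favor of AVPlayer), here is a minimal sketch that asks the player to buffer as little as possible, assuming iOS 10+ and a hypothetical stream URL. These properties are hints only, and the inherent HLS segment latency quoted above still applies:

#import <AVFoundation/AVFoundation.h>

// Hypothetical HLS endpoint; replace with your own stream URL.
NSURL *streamURL = [NSURL URLWithString:@"https://example.com/live/stream.m3u8"];
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:streamURL];

// Ask for as small a forward buffer as the player will tolerate (iOS 10+).
// The player treats this as a hint, not a hard limit.
item.preferredForwardBufferDuration = 1.0;

AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

// Start playback right away instead of waiting to build a safety buffer (iOS 10+).
player.automaticallyWaitsToMinimizeStalling = NO;
[player playImmediatelyAtRate:1.0];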
I also came across this, which sounds interesting, but I have not tested it, and even with such an approach you will face limitations.
Even if it may not be the answer you were looking for, I hope this helps.

Is it possible to cast the input from the phone microphone to the receiver?

I would like to know if it is possible to cast the audio taken directly from the iOS device's microphone to the receiver (in a live way).
I've downloaded all the git example projects, and all of them use a "loadMedia" method to start the casting. Here is an example of one of them:
- (NSInteger)loadMedia:(GCKMediaInformation *)mediaInfo
autoplay:(BOOL)autoplay
playPosition:(NSTimeInterval)playPosition;
Can I follow this approach to do what I want? If so, what's the expected delay?
Thanks a lot
Echo is likely if the device (iOS, Android, or Chrome) is in range of the speakers. That said:
Pick a fast codec that is supported, such as CELT/Opus or Vorbis
I haven't tried either of these, but they should be possible.
1. Implement your own protocol using CastChannel that passes the binary data. You'll want to do some simple conversion of the stream from binary to something a bit more friendly. Take a look at Intro to Web Audio for using AudioContext.
or, 2. Set up a trivial server on your device to stream from, then tell the Receiver to just access that local server.

Keeping two AVPlayers in sync

I have a client who has a very specific request for the app that requires two AVPlayers to be in sync. One video is for some content and the other is for a presenter speaking about the content. Using an AVMutableComposition to combine them into one video is not an option, because the presenter video has to be able to respond to user-generated events (e.g. they want a feature to show/hide the presenter), and I don't believe there is a way to have that kind of control over a specific AVMutableCompositionTrack.
So I'm left with figuring out how to ensure that two AVPlayers stay in sync, and I was wondering if anyone has experience with this or suggestions for other tools to accomplish it.
Thanks
The following methods are the ones to use
- (void)setRate:(float)rate
time:(CMTime)itemTime
atHostTime:(CMTime)hostClockTime;
- (void)prerollAtRate:(float)rate
completionHandler:(void (^)(BOOL finished))completionHandler;
Caveats
Important: This method is not currently supported for HTTP Live Streaming or when automaticallyWaitsToMinimizeStalling is YES. For clients linked against iOS 10.0 and later or macOS 10.12 and later, invoking this method when automaticallyWaitsToMinimizeStalling is YES will raise an NSInvalidArgumentException.
This is expected behavior, since "live" is the "present" and cannot be seeked forward, and setting the rate to less than 1.0 would cause extra buffering of the stream (the second point is a guess).
Documentation
https://developer.apple.com/documentation/avfoundation/avplayer/1386591-setrate?language=objc
https://developer.apple.com/documentation/avfoundation/avplayer/1389712-prerollatrate?language=objc
As a side note, consider that HLS streams are not truly live streams: the "present moment" can vary by several seconds among the clients consuming the stream, as opposed to WebRTC, for example, where the delay between the publisher and the consumers is more or less guaranteed to be within about a second.
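As a usage sketch of the two methods above (my own assumptions: both player items are file-based rather than HLS, both are already ready to play, and the 0.5-second lead time is arbitrary):

#import <AVFoundation/AVFoundation.h>

// Hypothetical helper taking two players whose items are already ready to play.
static void StartSyncedPlayback(AVPlayer *contentPlayer, AVPlayer *presenterPlayer) {
    // Required for setRate:time:atHostTime: on iOS 10+ / macOS 10.12+.
    contentPlayer.automaticallyWaitsToMinimizeStalling = NO;
    presenterPlayer.automaticallyWaitsToMinimizeStalling = NO;

    dispatch_group_t group = dispatch_group_create();
    for (AVPlayer *player in @[contentPlayer, presenterPlayer]) {
        dispatch_group_enter(group);
        [player prerollAtRate:1.0 completionHandler:^(BOOL finished) {
            dispatch_group_leave(group); // media is buffered and ready to go
        }];
    }

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        // Start both players from time zero at the same host clock time,
        // half a second in the future so both have time to receive the command.
        CMTime startHostTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                                         CMTimeMakeWithSeconds(0.5, NSEC_PER_SEC));
        [contentPlayer setRate:1.0 time:kCMTimeZero atHostTime:startHostTime];
        [presenterPlayer setRate:1.0 time:kCMTimeZero atHostTime:startHostTime];
    });
}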

Resources