What options exist to stream video from iOS to browser?

I'm looking for a way to implement real-time streaming of video (and optionally audio) from an iOS device to a browser. In this case the iOS device is the server and the browser is the client.
Video resolution must be in the range 800x600-1920x1080. Probably the most important criterion is lag, which should be less than 500 msec.
I've tried a few approaches so far.
1. HLS
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation
Client: JS, VIDEO tag
Works well. Streams smoothly. The VIDEO tag in the browser handles the incoming video stream out of the box. This is great! However, it has lag that is hard to minimize. It feels like this protocol was built for non-interactive video streaming, something like Twitch, where a few seconds of lag is fine.
I tried enabling Low-Latency HLS. A lot of requests, a lot of hassle with the playlist. Let me know if this is the right option and I just have to push harder in this direction.
2. Compress every frame into JPEG and send to a browser via WebSockets
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation, WebSockets server
Client: JS, rendering via IMG tag
Works super-fast and super-smooth. Latency is 20-30 msec! However, when I receive a frame in the browser, I have to load it from a Blob via a base64-encoded URL. At the start all of this works fast and smoothly, but after a while the browser starts to slow down and lag. I'm not sure why; I haven't investigated too deeply yet. Another issue is that frames compressed as JPEGs are much larger (60-120 KB per frame) than the MP4 video stream of HLS. This means more data is pumped through WiFi, and other WiFi consumers start to struggle. This approach works but doesn't feel like a perfect solution.
Any ideas or hints (frameworks, protocols, libraries, approaches, e.t.c.) are appreciated!

HLS
… It feels like this protocol was built for non-interactive video streaming …
Yep, that's right. The whole point of HLS was to utilize generic HTTP servers as media streaming infrastructure, rather than using proprietary streaming servers. As you've seen, several tradeoffs are made. The biggest problem is that media is chunked, which naturally causes latency of at least the size of the chunk. In practice, it ends up being the size of a couple chunks.
"Low latency" HLS is a hack to return to the methods we had before HLS, with servers that just stream content from the origin, in a way compatible with all the HLS stuff we have to deal with now.
Compress every frame into JPEG and send to a browser via WebSockets
In this case, you've essentially recreated a video codec and added the overhead of WebSockets. Also, by base64-encoding rather than sending binary, you're adding extra CPU and memory requirements, as well as ~33% overhead in bandwidth.
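If you do stay on WebSockets for a while, sending each frame as a binary message and displaying it via an object URL avoids the base64 cost entirely. A minimal sketch follows (the endpoint and element id are assumptions); note that object URLs must be revoked, and forgetting to do so is a classic cause of the kind of gradual slowdown described in the question:

```javascript
// Minimal sketch: render binary JPEG frames without base64.
// The WebSocket endpoint and element id are placeholders.
const img = document.getElementById("stream");
const ws = new WebSocket("ws://device.local:8080/frames");
ws.binaryType = "blob"; // receive each frame as a Blob, not text

let currentUrl = null;
ws.onmessage = (event) => {
  const url = URL.createObjectURL(event.data);
  img.onload = () => {
    // Revoke the previous object URL once the new frame is displayed;
    // skipping this leaks memory and can explain slowdown over time.
    if (currentUrl) URL.revokeObjectURL(currentUrl);
    currentUrl = url;
  };
  img.src = url;
};
```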
If you really wanted to go this route, you could simply use MediaRecorder with an HTTP PUT request: stream the output of the recorder to the server, which relays it on to the client over HTTP. The client then just needs a <video> tag referencing some URL on the server, and nothing special for playback. You'll get nice low latency without all the overhead and hassle.
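A rough sketch of that recorder-to-relay idea, assuming a browser-style capture source; the relay URL, 250 ms timeslice, and mimeType are placeholders, and MediaRecorder support (especially on Safari) should be feature-detected in real code:

```javascript
// Hedged sketch: record the capture stream in small timeslices and relay
// each encoded chunk to a server, which appends it to the response body
// of any viewer currently GETting the same URL.
async function startRelay() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm; codecs=vp8,opus" });

  recorder.ondataavailable = (event) => {
    if (event.data.size === 0) return;
    // Each chunk is a continuation of one WebM stream, not a standalone file.
    fetch("https://relay.example/stream/cam1", { method: "PUT", body: event.data });
  };
  recorder.start(250); // emit an encoded chunk roughly every 250 ms
}
```

The viewer side is then just a `<video src="https://relay.example/stream/cam1">` pointed at the relay.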
However, don't go that route. Suppose the bandwidth drops out? What if some packets are lost and you need to re-sync? How will you set up communication between each end to continually adjust quality, buffering, codec negotiation, etc.? What if peer-to-peer connections are advantageous?
Use WebRTC
It's a full purpose-built stack for maintaining low latency. Libraries are available for most any stack on most any platform. It works in browsers.
Rather than reinventing all of this, you can take advantage of what's there.
The downside is complexity... it isn't easy to get started with, but well worth it for most low latency use cases.
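For a sense of scale, here is roughly what the viewer side looks like in the browser. The signaling channel is an assumption, since WebRTC deliberately leaves that transport up to you; a plain WebSocket to the iOS device would do:

```javascript
// Hedged sketch of the browser (viewer) end of a WebRTC session.
// The signaling endpoint and message shape are assumptions.
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

pc.ontrack = (event) => {
  // Attach the incoming low-latency stream straight to a <video> element.
  document.querySelector("video").srcObject = event.streams[0];
};

const signaling = new WebSocket("ws://device.local:8080/signaling"); // assumed endpoint
signaling.onmessage = async (msg) => {
  const { sdp, candidate } = JSON.parse(msg.data);
  if (sdp) {
    await pc.setRemoteDescription(sdp);
    if (sdp.type === "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      signaling.send(JSON.stringify({ sdp: pc.localDescription }));
    }
  } else if (candidate) {
    await pc.addIceCandidate(candidate);
  }
};
pc.onicecandidate = (e) => {
  if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
};
```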


What are the limitations of streaming video files in the public folder with the HTML5 video tag in Ruby on Rails 5?

What I'm doing
Basically, I'm writing a simple Q&A site with an option to create links to specific positions in media files. As of now the app is intended to be used in a LAN environment only.
I have put a video in the appRoot/public folder and created a view using the HTML5 video tag.
It works and even seeking is available. Wow...
What I don't understand
I'm clueless as to the tech behind it and its limitations.
It just worked, so I don't even know a keyword to hit Google with.
What I know
With the way I'm doing it:
No encryption
No way to prevent users from saving video files
No automatic transcoding available
The real question
What is the name of the tech behind this?
How well can Rails handle streaming and seeking requests the way I did it, compared to using dedicated video streaming servers or gems?
As long as your underlying web server understands how to handle the MIME types for video, and responds correctly to byte range requests - as it seems to - that's all you need. The underlying mechanics of streaming video with HTML5 are that the browser asks for a chunk of content as a range of bytes from the source (enough to keep the buffer full) and the server delivers it.
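You can verify this yourself: request a slice of the file and check that the server answers with 206 and a Content-Range header. A quick check you can paste into the DevTools console (the path is a placeholder):

```javascript
// Reproduce by hand the byte-range exchange the browser performs under the hood.
const res = await fetch("/videos/lecture.mp4", {
  headers: { Range: "bytes=0-1023" }, // ask for the first 1 KB only
});
console.log(res.status);                       // 206 Partial Content
console.log(res.headers.get("Content-Range")); // e.g. "bytes 0-1023/73400320"
console.log(res.headers.get("Accept-Ranges")); // "bytes" if ranges are supported
```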
You might want to look at using ffmpeg to optimize your videos so that the metadata is at the front of the file, which lets streaming start quicker.
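If it helps, the usual way to do that with ffmpeg is to move the moov atom to the front without re-encoding; the filenames here are placeholders:

```
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```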
You've correctly pointed out the limitations of the solution in your environment. The other thing to be aware of is capacity: if the videos are long and a lot of people access them concurrently, then without caching (in a LAN via a caching proxy, or on the internet via a CDN service) your server capacity may be stretched.

Reduce/remove buffer lag on <video> element (iOS)

We have an FFMPEG stream being streamed to mobile devices. We're using the HTML5 <video src="..." webkit-playsinline> tag to display the video inline (inside a real-time streaming app). We've managed to reduce the delay at the FFMPEG end down to the minimum but there's still a lag at the iOS end, where the player presumably buffers for a couple of seconds.
Is there any way to reduce the client-side delay?
We need as close to real-time as possible and skipping is acceptable.
If you are using an HTML5 video tag, then the iOS device will use QuickTime to play back the video. Apple offers no control over internal mechanisms like buffer settings for its QuickTime player. For a project on Apple TV I even worked with a guy at Apple in Cupertino, and they just won't allow any access to the information you would require on their device.
Typically if you use HLS:
Is this a real-time delivery system?
No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.
What is the latency?
Approximately 30 seconds, with recommended settings. See question #15.
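That figure is just the chunking arithmetic at work: with the then-recommended 10-second segments and a client that buffers about three segments before starting playback, you get roughly 3 × 10 s = 30 s of built-in delay before encoding and upload time is even counted.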
For a live streaming scenario on iOS you are better off tuning the streaming chain before the actual player:
capture -> transcoding -> upload -> streaming server -> delivery -> playback
Using ffmpeg you can tune for zero-latency streaming at the transcoding level, which I understand you have done. After that, using a well-established streaming server like Wowza and CDN delivery will help you get there (of course at a certain cost - and assuming you need a streaming server, which you may not).
If you go all native for your iOS app you may look at MPMoviePlayerController. I have no experience with native app code on iOS, so I'll let you decide if it is worth the time (still, I doubt it will be possible because of the underlying QuickTime/HLS layer).
I also came across this, which sounds interesting, but I have not tested it, and even with such an approach you will face limitations.
Even if it may not be the answer you were looking for I hope this helps.

iDevices as HTTP Live Streaming server

From what I have gathered so far, Apple provides tools to make a Mac act as an HTTP Live Streaming server. But my goal is different. I want to make iDevices be the HTTP Live Streaming server (for local network only).
Can it be done at all?
Yes and no. Apple does not provide a way to stream encoded media data, so that part is 100% up to you. Also, Apple does not provide a way to access encoded frames directly (i.e. you can easily get an encoded file or the raw frames, but not easily get "encoded frames"). So you need to develop a way to get these encoded frames from the files for streaming, or encode the raw frames on the fly.
It may or may not fit your use case, but if you first write the streamer portion, you should be able to save small/short clips to disk and stream them out as they are created, with minimal overall latency.
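For illustration, a rolling HLS media playlist for that save-short-clips scheme might look like the following (the segment names and 2-second duration are made up). As each clip finishes, the device appends it, drops the oldest entry, and increments #EXT-X-MEDIA-SEQUENCE, so latency stays bounded by a few clip lengths:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:17
#EXTINF:2.0,
clip17.ts
#EXTINF:2.0,
clip18.ts
#EXTINF:2.0,
clip19.ts
```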

optimize upload videos in different signal strength

I have a question. My app is a short-video sharing application, just like Vine, but I run into problems when it is used in the subway or other places with weak signals: uploads sometimes fail, which makes for a poor user experience.
I am a newbie at network programming and iOS. I did a lot of searching on Google and have some general sense of the problem, so let me sum up my findings; please help with some suggestions.
My requirements are: 1. support resuming when an upload is interrupted; 2. upload successfully on a weak signal. Actually I do NOT need to think about real-time problems or how to compress the video; just treating the video as a file is totally OK. BTW, the server is REST-style; I use POST to upload data.
Questions:
Which is the better way for my requirements: using a stream (by stream I do NOT mean live video streaming, just a data stream like NSOutputStream & NSInputStream - the video is played only after all of it has uploaded, not played and downloaded at the same time) or dividing the whole file into several chunks and uploading chunk by chunk?
Someone said using a stream is good for resource efficiency, since the stream reads the file into memory and controls the size of the buffer, and after setting up the connection with the server we can use a delegate to handle failures, so it is easy to use.
Uploading chunk by chunk is said to be good for speed, but I am puzzled by this statement: after successfully uploading one chunk we need to release the connection resources, set up another connection, and then upload again. I think this preparation takes time.
If I upload by chunks, which size is good? One video file is almost 1 MB; someone said 8 KB is a safe choice, but...
Since the app needs to adapt to different signal strengths, is there any way to do that? For example, could the chunk size depend on the bandwidth, or something similar?
Is there any private API that already supports resuming interrupted uploads, or any Apple API that can support this? My app needs to run on iOS 5 and above, so I can NOT use NSURLSession.
Is concurrent uploading a way to speed things up? If so, how do I implement it, and is there any API available?
Thank you in advance for helping a newbie like me. Thank you very much.
Your question touches on a lot of topics. iOS doesn't have a public API to stream video (such as the FaceTime components). The main issue here is that sending frame by frame requires a lot of network traffic; if you instead use the normal video writer you get hardware compression, which is a lot better. There's more, and you can check here: Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone), Upload live streaming video from iPhone like Ustream or Qik, How send to stream video from iOS device to server? and here
If real time is not your problem, I would suggest you just use a good network manager such as MKNetworkKit or AFNetworking 2.0. They will take care of most of the aspects you asked about.
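To make the chunking questions concrete, here is a language-agnostic sketch of the resume-and-adapt flow, shown in JavaScript only for brevity; the same logic maps onto AFNetworking on iOS. The resume probe, endpoint, and Content-Range convention are assumptions about your REST server:

```javascript
// Hedged sketch: resumable, bandwidth-adaptive chunked upload.
// Real code would add a retry cap and backoff instead of looping forever.
async function uploadInChunks(file, url) {
  // Ask the server how many bytes it already has, so we can resume.
  const head = await fetch(`${url}?status=1`); // hypothetical resume probe
  let offset = parseInt(await head.text(), 10) || 0;
  let chunkSize = 64 * 1024; // start small on weak links

  while (offset < file.size) {
    const chunk = file.slice(offset, offset + chunkSize);
    const started = Date.now();
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Range": `bytes ${offset}-${offset + chunk.size - 1}/${file.size}` },
      body: chunk,
    });
    if (!res.ok) continue; // retry the same chunk on failure

    offset += chunk.size;
    // Grow the chunk when throughput is good, shrink it when it is poor,
    // clamped between 16 KB and 1 MB.
    const seconds = (Date.now() - started) / 1000;
    const bytesPerSecond = chunk.size / Math.max(seconds, 0.001);
    chunkSize = Math.min(Math.max(Math.round(bytesPerSecond * 2), 16 * 1024), 1024 * 1024);
  }
}
```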

iOS streaming audio from network -- random access of a 6-hour file

A potential client has come to me asking for an app which will stream a six-hour audio file. The user needs to be able to set the "playback head" to any position in the file. Presumably, this means that the app must not be forced to download the entire file before it begins playing back from an arbitrary position.
An added complication -- there are actually four files which need to be streamed and mixed simultaneously.
My questions are:
1) Is there an out-of-the-box technology which will allow me random access into streaming audio on iOS? Can this be done with standard server technology and a single long file, or will it involve some fancy server tech?
2) Which iOS framework is best suited for this? Is there anything high-level that would allow me to easily mix these four audio files?
3) Can this be done entirely with standard browser technology on the client side? (i.e. HTML5)
Have a close look at the MP3 format. It is remarkably easy and efficient to parse, chop up into little bits, and reassemble into a custom stream.
Hence rolling your own server-side code to grab what you want and send to the client will not be as crazy or difficult as it may sound.
MP3 is also widely supported by various clients. I strongly suspect any HTML5-capable browser will be able to play the stream you generate via a long-lived, bit-rate-regulated HTTP request.
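To make the byte math concrete: for a constant-bit-rate MP3, the byte offset for a given timestamp is roughly bitrate/8 × seconds, and decoders resynchronize on the next frame header, so landing mid-frame is harmless. A hedged sketch of the client side (the URL and bitrate are placeholders):

```javascript
// Random access into a constant-bit-rate MP3 over plain HTTP.
// Assumes the server honors Range requests and the file is CBR at `kbps`.
async function fetchMp3From(url, seconds, kbps = 128) {
  const byteOffset = Math.floor((kbps * 1000 / 8) * seconds); // bytes ≈ bitrate/8 × time
  const res = await fetch(url, {
    headers: { Range: `bytes=${byteOffset}-` },
  });
  return res; // 206 Partial Content; the body starts near the requested timestamp
}
```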
