From what I have gathered so far, Apple provides tools to make a Mac act as an HTTP Live Streaming server. But my goal is different: I want to make iDevices act as the HTTP Live Streaming server (for the local network only).
Can it be done at all?
Yes and no. Apple does not provide a way to stream encoded media data, so that part is 100% up to you. Also, Apple does not provide a way to access encoded frames directly (i.e. you can easily get an encoded file or the raw frames, but not easily get the encoded frames themselves). So you need to develop a way to extract these encoded frames from the files for streaming, or encode the raw frames on the fly.
It may or may not fit your use case, but if you first write the streamer portion, you should be able to save small/short clips to disk and stream them out as they are created, with minimal overall latency.
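For the short-clip approach, a minimal sketch of the writer setup might look like this (the codec settings and dimensions are placeholders, and the capture pipeline feeding it sample buffers is assumed to exist elsewhere):

```swift
import AVFoundation

// Sketch: one AVAssetWriter per short clip. Rotate to a new file every few
// seconds and hand finished clips to the streaming side as they complete.
func makeClipWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264, // hardware H.264 encoding
        AVVideoWidthKey: 1280,                  // placeholder dimensions
        AVVideoHeightKey: 720
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    return (writer, input)
}

// Usage sketch: append CMSampleBuffers to `input` until the clip length is
// reached, then call input.markAsFinished() and writer.finishWriting { ... },
// and start the next clip with a fresh writer.
```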
Related
I'm looking for a way to implement real-time streaming of video (and optionally audio) from an iOS device to a browser. In this case the iOS device is the server and the browser is the client.
Video resolution must be in the range 800x600 to 1920x1080. Probably the most important criterion is latency, which should be less than 500 ms.
I've tried a few approaches so far.
1. HLS
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation
Client: JS, VIDEO tag
Works well. Streams smoothly. The VIDEO tag in the browser handles the incoming video stream out of the box. This is great! However, it has lag that is hard to minimize. It feels like this protocol was built for non-interactive video streaming, something like Twitch, where a few seconds of lag is fine.
I tried enabling Low-Latency HLS. Lots of requests, lots of hassle with the playlist. Let me know if this is the right option and I just have to push harder in this direction.
2. Compress every frame into JPEG and send to a browser via WebSockets
Server: Objective-C, AVFoundation, UIKit, custom HTTP-server implementation, WebSockets server
Client: JS, rendering via IMG tag
Works super-fast and super-smooth. Latency is 20-30 ms! However, when I receive a frame in the browser, I have to load it from a Blob via a base64-encoded URL. At the start all of this works fast and smoothly, but after a while the browser starts to slow down and lag. I'm not sure why; I haven't investigated too deeply yet. Another issue is that frames compressed as JPEGs are much larger (60-120 KB per frame) than the MP4 video stream of HLS. This means more data is pumped through the WiFi, and other WiFi consumers start to struggle. This approach works, but it doesn't feel like a perfect solution.
Any ideas or hints (frameworks, protocols, libraries, approaches, e.t.c.) are appreciated!
HLS
… It feels like this protocol was built for non-interactive video streaming …
Yep, that's right. The whole point of HLS was to utilize generic HTTP servers as media streaming infrastructure, rather than proprietary streaming servers. As you've seen, several tradeoffs are made. The biggest problem is that the media is chunked, which naturally causes latency of at least the size of one chunk. In practice it ends up being the size of a couple of chunks; for example, with 6-second segments and a player that buffers three of them before starting, you begin roughly 18 seconds behind live.
"Low-latency" HLS is a hack to return to the methods we had before HLS, with servers that just stream content from the origin, in a way that's compatible with all the HLS machinery we have to deal with now.
Compress every frame into JPEG and send to a browser via WebSockets
In this case, you've essentially recreated a video codec and added the overhead of WebSockets. Also, with base64 encoding rather than sending binary, you're adding extra CPU and memory requirements, as well as ~33% overhead in bandwidth.
If you really wanted to go this route, you could simply use MediaRecorder and an HTTP PUT request: stream the output of the recorder to the server, which relays it on to the client over HTTP. The client then just needs a <video> tag referencing some URL on the server, and nothing special for playback. You'd get nice low latency without all the overhead and hassle.
However, don't go that route. What if the bandwidth drops out? What if some packets are lost and you need to re-sync? How will you set up communication between the two ends to continually adjust quality, buffering, codec negotiation, etc.? What if peer-to-peer connections are advantageous?
Use WebRTC
It's a complete, purpose-built stack for maintaining low latency. Libraries are available for almost any stack on almost any platform, and it works in browsers.
Rather than reinventing all of this, you can take advantage of what's there.
The downside is complexity... it isn't easy to get started with, but it's well worth it for most low-latency use cases.
I'm running an Icecast server, using butt as the source client. The stream runs at 320 kbps, which is hard on bad internet connections, so I decided to create a second stream that anyone with a bad connection can switch to.
The problem is that I cannot figure out how to put another stream on the same Icecast server I already use.
You need something called a transcoder. Basically, it's both a client and a source: it connects to the full-quality stream, decodes it, re-encodes it with a different codec or at a lower bitrate, and then sends it to a new mount point on the same or a different Icecast server.
There are quite a few options. You can just use ffmpeg/avconv, or you can use liquidsoap, or ezstream, or...
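For example, a single ffmpeg invocation can act as the transcoder; the host, password, and mount points below are placeholders for your own setup:

```
# Pull the full-quality stream, re-encode at a lower bitrate, and push the
# result to a second mount point on the same Icecast server.
ffmpeg -i http://localhost:8000/stream \
       -c:a libmp3lame -b:a 128k -f mp3 \
       -content_type audio/mpeg \
       icecast://source:hackme@localhost:8000/stream-low
```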
My personal recommendation would be to first optimize the primary stream for quality, not bitrate; e.g. Opus at an average of 128-140 kbit/s will probably beat 320 kbit/s MP3 hands down. MP3 is an ancient codec by internet standards; the technology behind it is 20 years old or more. If you absolutely need an MP3 stream to support bad client software, then you should transcode to it.
Standard disclaimer: the format of your files is irrelevant to your primary stream format, as 99% of all use cases require the source client to run an encoder.
Is there any way, using currently available SDK frameworks on Cocoa (touch) to create a streaming solution where I would host my mp4 content on some server and stream it to my iOS client app?
I know how to write such a client, but the server side is a bit confusing to me.
AFAIK CloudKit is not suitable for that task because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it does not take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server and, on the client side, use AVPlayer to fetch the MP4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
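On the client side, that really is only a few lines; this sketch assumes a placeholder URL on your own server:

```swift
import AVKit
import AVFoundation

// Minimal playback sketch: AVPlayer fetches the MP4 over plain HTTP and
// handles range requests and buffering on its own. The URL is a placeholder.
let url = URL(string: "http://example.com/media/movie.mp4")!
let player = AVPlayer(url: url)

let controller = AVPlayerViewController()
controller.player = player
// Present `controller` from your view controller, then call player.play().
```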
If it is live media content you want to stream, then it is worth reading this guide as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as the player, then I think those two things don't fit together that well. AVPlayer needs to buffer different ranges ahead (for some container formats, the second/third request reads the end of the stream). As far as I can see, CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking in the stream.
If you have your private player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data; it is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
The client app could then download the files. It couldn't stream them, though, as far as I know; it would have to download the files in full.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks and recombine them in the client app.
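A rough sketch of that chunking idea, with a made-up record type and field names (the client would fetch records in index order and append them):

```swift
import CloudKit

// Hypothetical chunking scheme: split the file into fixed-size pieces and
// upload each piece as its own record in the public database. The record
// type "MediaChunk" and its fields are illustrative only.
func uploadChunks(of fileURL: URL, chunkSize: Int = 1_000_000) throws {
    let data = try Data(contentsOf: fileURL)
    let database = CKContainer.default().publicCloudDatabase
    var offset = 0
    var index = 0
    while offset < data.count {
        let end = min(offset + chunkSize, data.count)
        let chunkIndex = index
        // CKAsset needs a file on disk, so stage each chunk in a temp file.
        let tempURL = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk-\(chunkIndex)")
        try data.subdata(in: offset..<end).write(to: tempURL)

        let record = CKRecord(recordType: "MediaChunk")
        record["index"] = NSNumber(value: chunkIndex)
        record["payload"] = CKAsset(fileURL: tempURL)
        database.save(record) { _, error in
            if let error = error {
                print("chunk \(chunkIndex) failed: \(error)")
            }
        }
        offset = end
        index += 1
    }
}
```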
I'm not sure whether this document is up to date, but there is a paragraph, "Requirements for Apps", which demands using HTTP Live Streaming if you deliver any video exceeding 10 minutes or 5 MB.
Is it possible to encrypt/decrypt smaller segments of a file for HTTP Live Streaming, with industry-standard encryption techniques such as PlayReady and AES-128?
I don't know how the default HLS implementation in iOS works with AES-128 encryption; i.e., is it still able to download partial segments (TS) and stream files progressively, or does it have to download the full file, decrypt the entire contents, and only then start playback?
In some PlayReady clients I've been exposed to, I've observed the latter approach (download in full first). But it seems like an awful compromise on playback latency to attain security (and perhaps there is no way around it).
Some light on this subject would be very helpful. Thanks!
P.S: References to technical documentation or manuals would be great!
AES is a block cipher with a block size of 128 bits. So if the decryption program knows the key and the IV, it can return plaintext starting from the first block -- there's no reason to decrypt an entire segment/fragment first.
I'm guessing the PlayReady implementations you've seen only do it on a file basis because that is slightly easier to implement. But there's no technical reason a client would need to retrieve the entire file before initiating decryption.
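To make that concrete, here's a sketch of incremental AES-128-CBC decryption using CommonCrypto, assuming the key and IV are already known (as an HLS client would know them after fetching the key URI). Each network chunk is decrypted as it arrives; nothing forces the client to buffer the whole segment first:

```swift
import Foundation
import CommonCrypto

// Sketch: feed ciphertext chunks in arrival order and get plaintext back
// incrementally, rather than decrypting the whole segment at once.
final class StreamingAESDecryptor {
    private var cryptor: CCCryptorRef?

    init?(key: Data, iv: Data) {
        let status = key.withUnsafeBytes { keyBytes in
            iv.withUnsafeBytes { ivBytes in
                CCCryptorCreate(CCOperation(kCCDecrypt),
                                CCAlgorithm(kCCAlgorithmAES),
                                CCOptions(kCCOptionPKCS7Padding),
                                keyBytes.baseAddress, key.count,
                                ivBytes.baseAddress,
                                &cryptor)
            }
        }
        guard status == CCCryptorStatus(kCCSuccess) else { return nil }
    }

    // Returns whatever plaintext is ready after consuming this chunk.
    func update(_ chunk: Data) -> Data {
        var out = Data(count: chunk.count + kCCBlockSizeAES128)
        let capacity = out.count
        var moved = 0
        out.withUnsafeMutableBytes { outBytes in
            chunk.withUnsafeBytes { inBytes in
                _ = CCCryptorUpdate(cryptor,
                                    inBytes.baseAddress, chunk.count,
                                    outBytes.baseAddress, capacity,
                                    &moved)
            }
        }
        return out.prefix(moved)
    }

    deinit {
        if let cryptor = cryptor { CCCryptorRelease(cryptor) }
    }
}
```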
You can verify what a client is doing by making a fragment/segment extremely large and checking the client's behavior.
A quick search turned up this project, which decrypts HLS as it reads:
https://code.google.com/p/mlbtv-hls-nexdef/source/browse/trunk/mlb.c
A potential client has come to me asking for an app which will stream a six-hour audio file. The user needs to be able to set the "playback head" to any position in the file. Presumably, this means the app must not be forced to download the entire file before it begins playing back from an arbitrary position.
An added complication -- there are actually four files which need to be streamed and mixed simultaneously.
My questions are:
1) Is there an out-of-the-box technology which will allow me random access into streaming audio on iOS? Can this be done with standard server technology and a single long file, or will it involve some fancy server tech?
2) Which iOS framework is best suited for this? Is there anything high-level that would allow me to easily mix these four audio files?
3) Can this be done entirely with standard browser technology on the client side? (i.e. HTML5)
Have a close look at the MP3 format. It is remarkably easy and efficient to parse, chop up into little bits, and reassemble into a custom stream.
Hence, rolling your own server-side code to grab what you want and send it to the client will not be as crazy or difficult as it may sound.
MP3 is also widely supported by various clients. I strongly suspect any HTML5-capable browser will be able to play the stream you generate via a long-lived, bitrate-regulated HTTP request.
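As a sketch of how approachable the format is, here's how you might find the next frame boundary in a buffer, assuming MPEG-1 Layer III with no ID3 tag in the way (the tables cover only that case):

```swift
import Foundation

// Sketch: each MP3 frame starts with an 11-bit sync word (all ones), and its
// length is computable from the header, so a server can slice a file at any
// frame boundary and start streaming from there.
let bitrateTable = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320] // kbit/s
let sampleRateTable = [44_100, 48_000, 32_000] // Hz

func nextFrameRange(in data: [UInt8], from start: Int) -> Range<Int>? {
    var i = start
    while i + 4 <= data.count {
        // Sync word: 0xFF followed by at least three more set bits.
        if data[i] == 0xFF, data[i + 1] & 0xE0 == 0xE0 {
            let bitrateIndex = Int(data[i + 2] >> 4)
            let sampleRateIndex = Int((data[i + 2] >> 2) & 0b11)
            let padding = Int((data[i + 2] >> 1) & 0b1)
            if (1...14).contains(bitrateIndex), sampleRateIndex < 3 {
                // MPEG-1 Layer III frame length: 144 * bitrate / sampleRate + padding.
                let length = 144 * bitrateTable[bitrateIndex] * 1000
                    / sampleRateTable[sampleRateIndex] + padding
                return i ..< min(i + length, data.count)
            }
        }
        i += 1
    }
    return nil
}
```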