Play socket-streamed H.264 movie on iOS using AVFoundation

I'm working on a small iPhone app which streams movie content over a network connection using regular sockets. The video is in H.264 format. However, I'm having difficulties playing/decoding the data. I've been considering using FFmpeg, but its license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods that initiate the movie from a URL, not by providing a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received on the network, it's written to a file using fopen in append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded. Secondly, I don't know exactly where playback stopped, so I'm unsure how to find the right place to start playing the new asset from.
The second approach is to write the data to the file as in the first approach, but with the difference that the data is loaded into a second asset. An AVQueuePlayer is then used, where the second asset is inserted/queued in the player and played once buffering is done. The first asset can then be unloaded without a black screen. However, with this approach it's even more troublesome (than with the first approach) to find out where to start playing the new asset.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?

The official way to do this is the HTTP Live Streaming format, which supports multiple quality levels (among other things) and automatically switches between them (e.g. if the user moves from Wi-Fi to cellular).
You can find the docs here: Apple HTTP Live Streaming docs
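On the client side, playing an HLS stream then just means pointing an AVPlayer at the .m3u8 playlist URL. A minimal sketch, with a placeholder playlist URL and assuming a view controller with a strong player property:

    // Minimal sketch: play an HLS (.m3u8) playlist with AVPlayer. The server
    // handles segmenting; AVPlayer handles buffering and quality switching.
    #import <AVFoundation/AVFoundation.h>

    - (void)startStreamPlayback
    {
        NSURL *playlistURL = [NSURL URLWithString:@"https://example.com/stream/index.m3u8"]; // placeholder
        self.player = [AVPlayer playerWithURL:playlistURL]; // assumes a strong `player` property

        AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
        playerLayer.frame = self.view.bounds;
        [self.view.layer addSublayer:playerLayer];

        [self.player play];
    }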

Related

Getting PCM audio for visualization via Spotify iOS SDK

We're currently looking at taking our music visualization software, which has been around for many years, to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK all up and running and things look good, but the major step forward is to get access to the audio PCM so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The audio PCM block can be as simple as a circular buffer of a few thousand of the latest samples (that we would use for FFT, etc).
Thanks in advance!
Subclass SPTCoreAudioController and do one of two things:
Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly.
Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline so the data given in this callback might be up to half a second behind what's going out to the speakers, so I'd recommend using suggestion 1 over this. Call super here to continue with the default pipeline.
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
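A rough sketch of suggestion 1 -- the Core Audio calls (AUGraphNodeInfo, AudioUnitAddRenderNotify) are real, but the exact parameter types of the overridden SDK method are my assumption based on the selector name:

    #import <AudioToolbox/AudioToolbox.h>
    #import <Spotify/Spotify.h> // Spotify iOS SDK umbrella header; name may differ per SDK version

    // Render notification attached to the destination node's audio unit.
    // Called around each render; the PCM is valid in the post-render phase.
    static OSStatus VisualizerRenderNotify(void *inRefCon,
                                           AudioUnitRenderActionFlags *ioActionFlags,
                                           const AudioTimeStamp *inTimeStamp,
                                           UInt32 inBusNumber,
                                           UInt32 inNumberFrames,
                                           AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // Copy ioData into your circular buffer / FFT input here.
        }
        return noErr;
    }

    @interface VisualizerAudioController : SPTCoreAudioController
    @end

    @implementation VisualizerAudioController

    // Parameter types below are assumptions inferred from the selector name.
    - (BOOL)connectOutputBus:(UInt32)sourceOutputBusNumber
                      ofNode:(AUNode)sourceNode
                  toInputBus:(UInt32)destinationInputBusNumber
                      ofNode:(AUNode)destinationNode
                     inGraph:(AUGraph)graph
                       error:(NSError **)error
    {
        // Grab the destination node's audio unit and attach the render notification.
        AudioUnit destinationUnit = NULL;
        AUGraphNodeInfo(graph, destinationNode, NULL, &destinationUnit);
        AudioUnitAddRenderNotify(destinationUnit, VisualizerRenderNotify, (__bridge void *)self);

        // Let the SDK finish wiring up its own pipeline.
        return [super connectOutputBus:sourceOutputBusNumber
                                ofNode:sourceNode
                            toInputBus:destinationInputBusNumber
                                ofNode:destinationNode
                               inGraph:graph
                                 error:error];
    }

    @end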
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.

Is it possible to play a video from a file still being written using MPMoviePlayerViewController?

I am trying to start playing a video file while it is still being downloaded (i.e. I am trying to emulate buffering).
My approach:
I maintain a file handle to the video file created. In my -connection:didReceiveData: implementation I append the received data to the video file (I ensure this with seekToEndOfFile). Once the total data received passes a threshold value, I start playing the file. Meanwhile, I expect -connection:didReceiveData: to keep working as before, appending the incoming data. This approach was inspired by the following post:
http://lists.apple.com/archives/cocoa-dev/2011/Jun/msg00844.html
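For concreteness, my approach boils down to roughly this (the threshold value and property names are illustrative):

    // Append each received chunk to the growing movie file, and once a
    // threshold has arrived, start the player on the partial file.
    static const long long kPlaybackThresholdBytes = 2 * 1024 * 1024; // arbitrary threshold

    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
    {
        [self.fileHandle seekToEndOfFile];            // assumes an NSFileHandle property
        [self.fileHandle writeData:data];
        self.bytesReceived += data.length;            // assumes a numeric property

        if (self.bytesReceived > kPlaybackThresholdBytes && self.moviePlayer == nil) {
            NSURL *fileURL = [NSURL fileURLWithPath:self.videoFilePath]; // assumes a path property
            self.moviePlayer = [[MPMoviePlayerViewController alloc] initWithContentURL:fileURL];
            [self presentMoviePlayerViewControllerAnimated:self.moviePlayer];
        }
    }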
Result:
Though the author of the post above seems to be able to play at least part of the file, in my case the MPMoviePlayerViewController just shows up on the screen and goes away, as though there were no contents in the file.
The code works perfectly fine if I write the whole video data to the file and play it once the connection finishes loading.
Has anyone attempted this kind of approach before and succeeded at it?
I experimented a lot and don't think it's possible without a dedicated streaming server: https://developer.apple.com/library/mac/#documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
Hence I went with a different approach.

Capture, encode then stream video from an iPhone to a server

I've got experience with building iOS apps but don't have experience with video. I want to build an iPhone app that streams real time video to a server. Once on the server I will deliver that video to consumers in real time.
I've read quite a bit of material. Can someone let me know if the following is correct and fill in the blanks for me.
To record video on the iPhone I should use the AVFoundation classes. When using an AVCaptureSession, the delegate method captureOutput:didOutputSampleBuffer:fromConnection: gives me access to each frame of video. Now that I have the video frame, I need to encode it.
I know that the AVFoundation classes only offer H.264 encoding via AVAssetWriter, and not via a class that easily supports streaming to a web server. Therefore, I am left with writing the video to a file.
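For concreteness, I assume that baseline (captured frames going through AVAssetWriter's hardware H.264 encoder into a file) looks roughly like this; property names and output settings are illustrative:

    // Sketch: feed captured frames to an AVAssetWriter so the hardware encoder
    // produces H.264. Audio and error handling are omitted for brevity.
    #import <AVFoundation/AVFoundation.h>

    - (void)setUpWriterForURL:(NSURL *)outputURL
    {
        NSError *error = nil;
        self.writer = [[AVAssetWriter alloc] initWithURL:outputURL      // assumes writer/writerInput properties
                                                fileType:AVFileTypeMPEG4
                                                   error:&error];

        NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                    AVVideoWidthKey  : @640,
                                    AVVideoHeightKey : @480 };
        self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                              outputSettings:settings];
        self.writerInput.expectsMediaDataInRealTime = YES;
        [self.writer addInput:self.writerInput];
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        if (self.writer.status == AVAssetWriterStatusUnknown) {
            [self.writer startWriting];
            [self.writer startSessionAtSourceTime:timestamp];
        }
        if (self.writerInput.isReadyForMoreMediaData) {
            [self.writerInput appendSampleBuffer:sampleBuffer];
        }
    }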
I've read other posts that say they can use two AVAssetWriters to write 10-second blocks and then NSStream those blocks to the server. Can someone explain how to code two AVAssetWriters working together to achieve this? If anyone has code, could they please share it.
You are correct that the only way to use the hardware encoders on the iPhone is by using the AVAssetWriter class to write the encoded video to a file. Unfortunately the AVAssetWriter does not write the moov atom to the file (which is required to decode the encoded video) until the file is closed.
Thus one way to stream the encoded video to a server would be to write 10 second blocks of video to a file, close it, and send that file to the server. I have read that this method can be used with no gaps in playback caused by the closing and opening of files, though I have not attempted this myself.
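A rough, untested sketch of that rotation, assuming a writer/writerInput setup like the one sketched in the question; -uploadSegmentAtURL:, -setUpWriterForURL: and -nextSegmentURL are hypothetical helpers:

    // Close the current writer every ~10 seconds and start a fresh one, then
    // ship the finished segment (which now has its moov atom) to the server.
    static const Float64 kSegmentDurationSeconds = 10.0;

    - (void)handleSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        if (CMTIME_IS_INVALID(self.segmentStartTime)) {   // CMTime property, initialised to kCMTimeInvalid
            self.segmentStartTime = timestamp;
        }

        Float64 elapsed = CMTimeGetSeconds(CMTimeSubtract(timestamp, self.segmentStartTime));
        if (elapsed >= kSegmentDurationSeconds) {
            [self rotateWriterAtTime:timestamp];
        }

        if (self.writerInput.isReadyForMoreMediaData) {
            [self.writerInput appendSampleBuffer:sampleBuffer];
        }
    }

    - (void)rotateWriterAtTime:(CMTime)timestamp
    {
        AVAssetWriter *finishedWriter = self.writer;
        AVAssetWriterInput *finishedInput = self.writerInput;
        NSURL *finishedURL = finishedWriter.outputURL;

        [finishedInput markAsFinished];
        [finishedWriter finishWritingWithCompletionHandler:^{
            // The moov atom is written when the file is closed, so it can be sent now.
            [self uploadSegmentAtURL:finishedURL];        // hypothetical upload method
        }];

        // Start the next block in a new file.
        [self setUpWriterForURL:[self nextSegmentURL]];   // hypothetical helpers
        [self.writer startWriting];
        [self.writer startSessionAtSourceTime:timestamp];
        self.segmentStartTime = timestamp;
    }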
I found another way to stream video here.
This example opens two AVAssetWriters. On the first frame it writes to both files but immediately closes one of them so that the moov atom gets written. With that moov atom data, the second file can then be used as a pipe to get a stream of encoded video data. This example only covers sending video data, but it is very clean, easy-to-understand code that helped me figure out how to deal with many issues with video on the iPhone.

How can I stream a movie in iOS and playback from the filesystem later?

I've got an app that currently ships with all the videos it can play embedded in it. This doesn't scale well and, unless you want to play all the movies, wastes disk space. It also makes upgrading the app less desirable, because you have to re-download all the movies.
What I would like to do is download the movie on the fly, play it back while downloading, and then if it's successfully downloaded, save it to the file system so that next time they want to watch it, it streams from the local file.
I can do whatever is needed to the video, but currently I'm serving it up as an .mp4 file from Amazon S3, with a mimetype of video/mp4, and so the first half of my issue works fine: the movie downloads, and MPMoviePlayerController will start playing it as soon as it thinks it has downloaded "enough."
Is there any way to tap into the cache of that video file so that I can save it and control how long it resides on the filesystem? This seems like it would be the easiest approach.
I am targeting iOS 5+6, but if the only solution available required iOS 6, I would consider it also. Thanks!
UPDATE: Using AFNetworking, I am now half-way there, I think. I am downloading the video file from the server, and listening for the download progress. Once I see 25% of the video has been downloaded, I start playback on the local file using an MPMoviePlayerController.
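For reference, that part of the setup looks roughly like this (AFNetworking 1.x-style; the URL, file path and property names are illustrative):

    // Download to a local file while watching progress; once ~25% is on disk,
    // start an MPMoviePlayerController on the partial file.
    NSURL *remoteURL = [NSURL URLWithString:@"https://example.com/movie.mp4"]; // placeholder
    NSString *localPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mp4"];

    AFHTTPRequestOperation *operation =
        [[AFHTTPRequestOperation alloc] initWithRequest:[NSURLRequest requestWithURL:remoteURL]];
    operation.outputStream = [NSOutputStream outputStreamToFileAtPath:localPath append:NO];

    __weak typeof(self) weakSelf = self;
    [operation setDownloadProgressBlock:^(NSUInteger bytesRead,
                                          long long totalBytesRead,
                                          long long totalBytesExpectedToRead) {
        dispatch_async(dispatch_get_main_queue(), ^{
            BOOL enoughDownloaded = totalBytesExpectedToRead > 0 &&
                                    totalBytesRead > totalBytesExpectedToRead / 4;
            if (enoughDownloaded && weakSelf.moviePlayer == nil) {   // assumes a moviePlayer property
                weakSelf.moviePlayer = [[MPMoviePlayerController alloc]
                    initWithContentURL:[NSURL fileURLWithPath:localPath]];
                weakSelf.moviePlayer.view.frame = weakSelf.view.bounds;
                [weakSelf.view addSubview:weakSelf.moviePlayer.view];
                [weakSelf.moviePlayer play];
            }
        });
    }];
    [operation start];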
The main issue I'm running into now is playback seems to get screwed up. It's going along fine, 25% downloaded, playback starts... download continues normally... then the file finishes downloading completely, and shortly thereafter video freezes. The onscreen playback timer still indicates playback is ongoing and I don't see any "playback finished" type notifications, but the video is frozen. My guess based on the behavior is that perhaps the initial buffer for the video playback was used up, and it isn't detecting that more video is available on disk now?
Is there any way to interact with MPMoviePlayerController to let it know periodically to refresh the buffer it's playing out of? Or some other way to handle this situation?
UPDATE: Make sure to see the newer answer from Tom Hamming.
I have yet to find a conclusive answer, but at this time I believe the answer is: you can't reliably do this. At least not without a lot of work which seems too much like a hack. I filed a feature request with Apple as it really seems like this should be possible with some adjustments to MPMoviePlayerController.
I will go over the variety of things I tried or considered, and the results I encountered.
1. Pass MPMoviePlayerController a URL to your movie file, which allows it to stream, and then pull the file out of the cache it was saved into, into your local Documents folder. Won't work, as of iOS 6. I filed a feature request with Apple, but as it stands now there's no way to get your hands on the file it downloads, AFAIK.
2. Start downloading the movie file with NSURLConnection (or something like AFNetworking), and then when a "decent amount" has been downloaded to the device, pass the file URL to the MPMoviePlayerController and let it stream from disk. Sort of works, but not well. Three problems:
- It's really hard to know when to start playing the file. I haven't figured out the algorithm Apple uses, and so I always erred on the side of caution, waiting for 25% to be downloaded before playing.
- The MPMoviePlayerController interface provides no sense of the movie being streamed, as it does when Apple is doing the calculations via the network. It appears to the user that the file is totally downloaded when it really is not.
- And most importantly, MPMoviePlayerController seems to not work well with playing a file that is not completely downloaded. I experienced playback problems once the file finished downloading, or if the player caught up with the amount downloaded, and never found a graceful way to handle these situations.
3. Same procedure as #2 above, but use AVFoundation classes to more finely control the playback process and avoid the issues described above regarding playback stopping, etc. Might work, but I want all the features of MPMoviePlayerController. Re-implementing MPMoviePlayerController myself just to get this one feature seems like a waste of time.
4. Same procedure as #1 above, but run a small web server in your app to handle streaming the video from disk to MPMoviePlayerController, with the hope that the streaming would work more like it normally does when streaming the file directly from an external web server. Works, but results were still sporadic and performance seemed to suffer. I did my test with CocoaHTTP. I decided against this approach because it just felt like a terrible hack.
5. Run a lightweight HTTP proxy, thus intercepting the downloaded movie file data as it gets streamed from the internet into your MPMoviePlayerController. Not sure if this works or not. I was not able to test it, as I have not found a lightweight HTTP proxy written in Objective-C, and at this point don't feel like implementing one just to try this experiment. It seems like the next easiest of all these hacks to implement -- if you don't have to write the proxy!
At this point I've decided to go the less-hacky, but also less user-friendly route of simply downloading the file completely, and then passing it to MPMoviePlayerController, until a better solution comes along.
You can do this as of iOS 10 with AVAssetDownloadTask. See this WWDC 2016 session and this documentation.
Alternatively, if your movie isn't DRM'd, you can do it with AVAssetResourceLoaderDelegate, which effectively lets you give an AVPlayer an arbitrary stream of bytes. See this walkthrough.
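A minimal sketch of the AVAssetDownloadTask route (identifiers and URLs are placeholders; note that AVAssetDownloadTask operates on HLS assets, and the class owning this code is assumed to conform to AVAssetDownloadDelegate):

    #import <AVFoundation/AVFoundation.h>

    - (void)startAssetDownload
    {
        NSURLSessionConfiguration *config =
            [NSURLSessionConfiguration backgroundSessionConfigurationWithIdentifier:@"com.example.assetDownload"];
        AVAssetDownloadURLSession *session =
            [AVAssetDownloadURLSession sessionWithConfiguration:config
                                          assetDownloadDelegate:self
                                                  delegateQueue:[NSOperationQueue mainQueue]];

        AVURLAsset *asset =
            [AVURLAsset assetWithURL:[NSURL URLWithString:@"https://example.com/stream/index.m3u8"]]; // placeholder
        AVAssetDownloadTask *task = [session assetDownloadTaskWithURLAsset:asset
                                                                assetTitle:@"Movie"
                                                          assetArtworkData:nil
                                                                   options:nil];
        [task resume];

        // Playback can start from the same asset while the download continues.
        self.player = [AVPlayer playerWithPlayerItem:[AVPlayerItem playerItemWithAsset:asset]]; // assumes a player property
        [self.player play];
    }

    // AVAssetDownloadDelegate: remember where the asset landed for offline playback later.
    - (void)URLSession:(NSURLSession *)session
     assetDownloadTask:(AVAssetDownloadTask *)assetDownloadTask
    didFinishDownloadingToURL:(NSURL *)location
    {
        [[NSUserDefaults standardUserDefaults] setObject:location.relativePath
                                                  forKey:@"downloadedMoviePath"];
    }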

What exactly is an audio queue processing tap?

These have been around in OS X for a little while now and just recently became available on iOS with iOS 6. I am trying to figure out what exactly they let you do. The idea is that you can tap into an audio queue and process the data before sending it on. Does this mean you can now intercept raw audio coming from other applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible? I have read over the AudioQueue.h file and can't quite figure out what to make of it.
Consider it a mid-level entry point for custom processing (e.g. an insert effect) or reading (e.g. for analysis or display purposes) of a queue's sample data -- a basic interface for reading or processing an AQ's data.
Does this mean you can now intercept raw audio coming from different applications and process that (such as the iOS music player) before it plays? In other words is inter-app audio possible?
Nope - it's not inter-process; you have no access to other processes' audio queues. These taps are for your own queues' sample data. They can be used to simplify general audio render or analysis chains (the common case, by app count). My guess is that it was provided because a lot of people wanted an easier way to access this sample data for processing or analysis. Custom processing entries on iOS can also be more complicated to implement (i.e. AudioUnit availability is restricted).
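For illustration, attaching a tap to one of your own playback queues looks roughly like this (untested sketch; error handling omitted):

    #import <AudioToolbox/AudioToolbox.h>

    // Tap callback: pull the queue's source audio, then analyse or modify
    // ioData in place before it continues through the queue.
    static void MyTapCallback(void *inClientData,
                              AudioQueueProcessingTapRef inAQTap,
                              UInt32 inNumberFrames,
                              AudioTimeStamp *ioTimeStamp,
                              AudioQueueProcessingTapFlags *ioFlags,
                              UInt32 *outNumberFrames,
                              AudioBufferList *ioData)
    {
        AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                              ioFlags, outNumberFrames, ioData);
        // ioData now holds PCM in the processing format reported by
        // AudioQueueProcessingTapNew(); do your FFT / effect here.
    }

    // Attach the tap to an AudioQueueRef you created yourself (e.g. for playback).
    static AudioQueueProcessingTapRef AttachTap(AudioQueueRef queue)
    {
        UInt32 maxFrames = 0;
        AudioStreamBasicDescription processingFormat = {0};
        AudioQueueProcessingTapRef tap = NULL;

        AudioQueueProcessingTapNew(queue, MyTapCallback, NULL,
                                   kAudioQueueProcessingTap_PreEffects,
                                   &maxFrames, &processingFormat, &tap);
        return tap;
    }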
