My iOS app uses AVPlayer to decode H.264 videos with AAC audio tracks from local device storage. Content with bitrate spikes causes the audio to drop out shortly (less than a second) after the spike is played, yet video playback continues normally. Playing the videos through Safari works fine, and this behavior is reproducible on several models of iPhone, from the 6s through the 8 Plus.
I've been looking for any messages generated, delegates called with error information, or interesting KVO notifications, but there's been no helpful information so far. What can I do to get more detailed diagnostics that point me in the right direction?
It turned out that the AVPlayer was configured with a custom resource loader to supply its data. The implementation of the loading callbacks failed to follow the pattern of satisfying each request completely. (Apple's docs are vague about this.) The video portion of the AVPlayer asked for more data repeatedly, so it eventually pulled all of its data. The audio portion, however, patiently waited for the data to come in, because neither an error was reported nor was all the requested data provided -- the presumption being that it was still pending.
So, in short, it sounds like there are provisions in the video handling code to treat missing data as a stall of some form and plow onward, whereas audio doesn't have that feature. Not a bad design -- if audio cuts out it's very noticeable, and audio is also by far the smaller stream, so a stall there is much less likely.
Despite spending quite a few days on the problem before posting, the lack of any useful signals made it hard to chase down. I eventually reasoned that if there was no error producing output from the stream, the problem must be in the delivery of the stream, and it revealed itself once I started tweaking the data-loading code.
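For illustration, here's a minimal sketch of the "satisfy the request completely" pattern with an AVAssetResourceLoaderDelegate, assuming the bytes come from a local file; the class name, custom scheme, and content type are placeholders, not the actual code from my app:

    import AVFoundation

    // Hypothetical delegate serving bytes for a custom "fake://" URL from a local file.
    // The key point: every loading request is either finished with an error or satisfied
    // completely (the full requested range), so neither the audio nor the video side is
    // left waiting for data that never arrives.
    final class FileBackedResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
        private let fileURL: URL   // local source of the media bytes (placeholder)

        init(fileURL: URL) { self.fileURL = fileURL }

        func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                            shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
            guard let data = try? Data(contentsOf: fileURL) else {
                loadingRequest.finishLoading(with: NSError(domain: NSURLErrorDomain,
                                                           code: NSURLErrorCannotOpenFile))
                return true
            }

            if let info = loadingRequest.contentInformationRequest {
                info.contentType = "public.mpeg-4"        // assumption: plain MP4 content
                info.contentLength = Int64(data.count)
                info.isByteRangeAccessSupported = true
            }

            if let dataRequest = loadingRequest.dataRequest {
                let start = Int(dataRequest.requestedOffset)
                let end = min(start + dataRequest.requestedLength, data.count)
                // Respond with the entire requested range, not just a prefix of it.
                dataRequest.respond(with: data.subdata(in: start..<end))
            }

            loadingRequest.finishLoading()
            return true
        }
    }

The important part is the last few lines: once everything the dataRequest asked for has been handed over (or an error has occurred), finishLoading is called so the player knows the request is done rather than pending.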
Related
I have an app that plays HLS video streams.
The HLS master playlist contains redundant streams to provide a backup service.
It looks like this:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000,RESOLUTION=638x480
https://example.com/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=1
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=2
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1564000,RESOLUTION=638x480
https://example.com/playlist.m3u8?redundant=3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=1
....
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000,RESOLUTION=638x480
https://example.com/playlist_lq.m3u8?redundant=5
So, I decided to test how this setup flies in a bad-network scenario. For this, I used the Network Link Conditioner's 3G preset, which provides 750 kbps of download bandwidth. Naturally I expected relatively smooth playback of the 400 kbps video, but alas, it took 60 seconds to fully load the test clip (800kb total size).
What I noticed is that AVPlayer sends requests for all of the listed redundant playlists (and I have 5 for each bandwidth). If I remove them and keep only one media playlist per bandwidth, the video loads in 10 seconds and plays without hiccups.
It looks like AVPlayer tries to process all the redundant links in parallel with the main video load and chokes hard.
Is there any way to restrict this behavior and force AVPlayer to go for the redundant streams only when there is an actual load error?
Any idea why it tries to load all of them? Maybe some HLS tag can help?
It also sometimes displays errors like this in the console:
{OptimizedCabacDecoder::UpdateBitStreamPtr} bitstream parsing error!!!!!!!!!!!!!!
And I can't find much info about it.
The problem was an incorrectly set BANDWIDTH value. AVPlayer has some obscure logic for switching between redundant streams when the properties of the current one don't match the values declared in the m3u8.
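One way to see what AVPlayer is actually doing here (and to check that the BANDWIDTH values line up with what it measures) is to watch the player item's access log. A small sketch, with a placeholder URL:

    import AVFoundation

    // Print declared vs. observed bitrate each time a new access-log entry appears.
    // A large mismatch between indicatedBitrate (from BANDWIDTH) and observedBitrate
    // is exactly the situation that makes AVPlayer hop between variants.
    let item = AVPlayerItem(url: URL(string: "https://example.com/master.m3u8")!)
    let player = AVPlayer(playerItem: item)

    NotificationCenter.default.addObserver(forName: .AVPlayerItemNewAccessLogEntry,
                                           object: item,
                                           queue: .main) { _ in
        guard let event = item.accessLog()?.events.last else { return }
        print("uri: \(event.uri ?? "-")",
              "indicated: \(event.indicatedBitrate)",
              "observed: \(event.observedBitrate)",
              "stalls: \(event.numberOfStalls)")
    }

    player.play()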
I've got an app that currently ships with all the videos it can play embedded in it. This doesn't scale well, and unless you want to watch all the movies, it wastes disk space. It also makes upgrading the app less appealing, because you have to re-download all the movies.
What I would like to do is download the movie on the fly, play it back while downloading, and then if it's successfully downloaded, save it to the file system so that next time they want to watch it, it streams from the local file.
I can do whatever is needed to the video, but currently I'm serving it up as an .mp4 file from Amazon S3 with a mimetype of video/mp4, and so the first half of my issue works fine: the movie downloads, and MPMoviePlayerController will start playing it as soon as it thinks it has downloaded "enough."
Is there any way to tap into the cache of that video file so that I can save it and control how long it resides on the filesystem? This seems like it would be the easiest approach.
I am targeting iOS 5 and 6, but if the only available solution requires iOS 6, I would consider it. Thanks!
UPDATE: Using AFNetworking, I am now half-way there, I think. I am downloading the video file from the server, and listening for the download progress. Once I see 25% of the video has been downloaded, I start playback on the local file using an MPMoviePlayerController.
The main issue I'm running into now is that playback seems to get screwed up. It's going along fine: 25% downloaded, playback starts... the download continues normally... then the file finishes downloading completely, and shortly thereafter the video freezes. The onscreen playback timer still indicates playback is ongoing and I don't see any "playback finished" type notifications, but the video is frozen. My guess based on the behavior is that perhaps the initial buffer for the video playback was used up, and it isn't detecting that more video is now available on disk?
Is there any way to interact with MPMoviePlayerController to let it know periodically to refresh the buffer it's playing out of? Or some other way to handle this situation?
UPDATE: Make sure to see the newer answer from @TomHamming.
I have yet to find a conclusive answer, but at this time I believe the answer is: you can't reliably do this. At least not without a lot of work which seems too much like a hack. I filed a feature request with Apple as it really seems like this should be possible with some adjustments to MPMoviePlayerController.
I will go over the variety of things I tried or considered, and the results I encountered.
1. Pass MPMoviePlayerController a URL to your movie file, which allows it to stream, and then pull the file out of the cache it was saved into and put it in your local Documents folder. Won't work, as of iOS 6. I filed a feature request with Apple, but as it stands now there's no way to get your hands on the file it is downloading, AFAIK.
2. Start downloading the movie file with NSURLConnection (or something like AFNetworking), and then when a "decent amount" has been downloaded to the device, pass the file URL to the MPMoviePlayerController and let it stream from disk. Sort of works, but not well. Three problems:
It's really hard to know when to start playing the file. I haven't figured out the algorithm Apple uses, and so I always erred on the side of caution, waiting for 25% to be downloaded before playing.
The MPMoviePlayerController interface gives no indication that the movie is still being downloaded, as it does when it is streaming directly from the network. It appears to the user that the file is fully downloaded when it really is not.
And most importantly, MPMoviePlayerController seems to not work well with playing a file that is not completely downloaded. I experienced playback problems once the file finished downloading, or if the player caught up with the amount downloaded, and never found a graceful way to handle these situations.
3. Same procedure as above, but use AVFoundation classes to more finely control the playback process, and avoid the issues described above regarding playback stopping, etc. Might work, but I want all the features of MPMoviePlayerController. Re-implementing MPMoviePlayerController myself just to get this one feature seems like a waste of time.
4. Same procedure as #1 above, but run a small web server in your app to handle streaming the video from the disk to MPMoviePlayerController, with the hope being that the streaming would work more like it normally does when streaming the file directly from an external web server. Works, but results were still sporadic and performance seemed to suffer. I did my test with CocoaHTTP. I decided against this approach because it just felt like a terrible hack.
5. Run a lightweight HTTP proxy, thus intercepting the downloaded movie file data as it gets streamed from the internet into your MPMoviePlayerController. Not sure if this works or not. I was not able to test this yet, as I have not found a lightweight HTTP proxy written in Objective-C, and at this point don't feel like implementing one just to try this experiment. It seems like the next easiest of all these hacks to implement -- if you don't have to write the proxy!
At this point I've decided to go the less-hacky, but also less user-friendly route of simply downloading the file completely, and then passing it to MPMoviePlayerController, until a better solution comes along.
You can do this as of iOS 10 with AVAssetDownloadTask. See this WWDC 2016 session and this documentation.
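A rough sketch of the AVAssetDownloadTask setup (this path is for HLS assets; the session identifier, URL, and title are placeholders, and error handling is omitted):

    import AVFoundation

    final class Downloader: NSObject, AVAssetDownloadDelegate {
        private var session: AVAssetDownloadURLSession!

        func start() {
            let config = URLSessionConfiguration.background(withIdentifier: "movie-downloads")
            session = AVAssetDownloadURLSession(configuration: config,
                                                assetDownloadDelegate: self,
                                                delegateQueue: .main)

            let asset = AVURLAsset(url: URL(string: "https://example.com/movie/master.m3u8")!)
            let task = session.makeAssetDownloadTask(asset: asset,
                                                     assetTitle: "Movie",
                                                     assetArtworkData: nil,
                                                     options: nil)
            task?.resume()

            // The same asset can be handed to an AVPlayer while the download proceeds:
            // let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
        }

        func urlSession(_ session: URLSession,
                        assetDownloadTask: AVAssetDownloadTask,
                        didFinishDownloadingTo location: URL) {
            // Persist this location (as a relative path); the file lives in system-managed storage.
            print("Downloaded to \(location)")
        }
    }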
Alternatively, if your movie isn't DRM'd, you can do it with AVAssetResourceLoaderDelegate, which effectively lets you give an AVPlayer an arbitrary stream of bytes. See this walkthrough.
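In case it helps, here's roughly what the wiring looks like; MyResourceLoaderDelegate is a placeholder for your own AVAssetResourceLoaderDelegate implementation:

    import AVFoundation

    // The delegate is only consulted for URL schemes AVFoundation doesn't handle itself,
    // so a custom scheme is used and mapped back to the real URL inside the delegate.
    let asset = AVURLAsset(url: URL(string: "fake://example.com/movie.mp4")!)

    // Keep a strong reference to the delegate; the resource loader only holds it weakly.
    let loaderDelegate = MyResourceLoaderDelegate()
    asset.resourceLoader.setDelegate(loaderDelegate, queue: DispatchQueue(label: "resource-loader"))

    let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
    player.play()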
I'm working on a small iPhone app which streams movie content over a network connection using regular sockets. The video is in H.264 format. However, I'm having difficulties playing/decoding the data. I've been considering using FFmpeg, but the license makes it unsuitable for the project. I've been looking into Apple's AVFoundation framework (AVPlayer in particular), which seems to be able to handle H.264 content; however, I'm only able to find methods to initiate the movie using a URL -- not by providing a memory buffer streamed from the network.
I’ve been doing some tests to make this happen anyway, using the following approaches:
Play the movie using a regular AVPlayer. Every time data is received from the network, it's written to a file using fopen in append mode. The AVPlayer's asset is then reloaded/recreated with the updated data. There seem to be two issues with this approach: firstly, the screen goes black for a short moment while the first asset is unloaded and the new one loaded. Secondly, I do not know exactly where playback stopped, so I'm unsure how to find the right place to start playing the new asset from.
The second approach is to write the data to the file as in the first approach, but with the difference that the new data is loaded into a second asset. An AVQueuePlayer is then used, where the second asset is inserted/queued in the player and played once it has buffered. The first asset can then be unloaded without a black screen. However, with this approach it's even more troublesome (than in the first approach) to figure out where to start playing the new asset.
Has anyone done something like this and made it work? Is there a proper way of doing this using AVFoundation?
The official method for doing this is HTTP Live Streaming, which supports multiple quality levels (among other things) and automatically switches between them (e.g. if the user moves from Wi-Fi to cellular).
You can find the docs here: Apple HTTP Streaming Docs
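Once the content is packaged as HLS and served over HTTP, playback on the client is just handing AVPlayer the playlist URL (placeholder URL here):

    import AVFoundation

    // AVPlayer handles segment loading and variant switching itself for an HLS playlist.
    let url = URL(string: "https://example.com/stream/master.m3u8")!
    let player = AVPlayer(url: url)
    player.play()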
I'm designing a simple proof of concept for a multitrack recorder.
The obvious starting point is to play from file A.caf to headphones while simultaneously recording microphone input into file B.caf.
This question -- Record and play audio Simultaneously -- points out that there are three levels at which I can work:
AVFoundation API (AVAudioPlayer + AVAudioRecorder)
Audio Queue API
Audio Unit API (RemoteIO)
What is the best level to work at? Obviously the generic answer is to work at the highest level that gets the job done, which would be AVFoundation.
But I'm taking this job on from someone who gave up due to latency issues (he was getting a 0.3-second delay between the files), so maybe I need to work at a lower level to avoid those issues?
Furthermore, what source code is available to springboard from? I have been looking at the SpeakHere sample ( http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html ). If I can't find something simpler I will use this.
But can anyone suggest something simpler/else? I would rather not work with C++ code if I can avoid it.
Is anyone aware of some public code that uses AVFoundation to do this?
EDIT: AVFoundation example here: http://www.iphoneam.com/blog/index.php?title=using-the-iphone-to-record-audio-a-guide&more=1&c=1&tb=1&pb=1
EDIT(2): Much nicer looking one here: http://www.switchonthecode.com/tutorials/create-a-basic-iphone-audio-player-with-av-foundation-framework
EDIT(3): How do I record audio on iPhone with AVAudioRecorder?
To avoid latency issues, you will have to work at a lower level than AVFoundation, alright. Check out this sample code from Apple -- aurioTouch. It uses Remote I/O.
As suggested by Viraj, here is the answer.
Yes, you can achieve very good results using AVFoundation. First, you need to pay attention to the fact that for both the player and the recorder, activating them is a two-step process.
First you prime it.
Then you play it.
So, prime everything. Then play everything.
This will get your latency down to about 70ms. I tested by recording a metronome tick, then playing it back through the speakers while holding the iPhone up to the speakers and simultaneously recording.
The second recording had a clear echo, which I found to be ~70ms. I could have analysed the signal in Audacity to get an exact offset.
So, in order to line everything up, I just call performSelector:withObject:afterDelay: with a delay of 70.0/1000.0.
There may be hidden snags; for example, the delay may differ from device to device. It may even differ depending on device activity. It is even possible the thread could get interrupted/rescheduled between starting the player and starting the recorder.
But it works, and is a lot tidier than messing around with audio queues / units.
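For reference, a minimal sketch of the "prime everything, then start everything" sequence with AVAudioPlayer/AVAudioRecorder; the file URLs and recorder settings are placeholders, and the ~70 ms figure is just the offset measured above:

    import AVFoundation

    func startPlaybackAndRecording(trackAURL: URL, trackBURL: URL) throws -> (AVAudioPlayer, AVAudioRecorder) {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        let player = try AVAudioPlayer(contentsOf: trackAURL)          // A.caf
        let recorder = try AVAudioRecorder(url: trackBURL,             // B.caf
                                           settings: [AVFormatIDKey: kAudioFormatLinearPCM,
                                                      AVSampleRateKey: 44_100,
                                                      AVNumberOfChannelsKey: 1])
        player.prepareToPlay()      // prime
        recorder.prepareToRecord()  // prime

        player.play()               // then start everything back to back
        recorder.record()

        // Keep strong references to both, or they stop as soon as they're deallocated.
        return (player, recorder)
    }

Any residual fixed offset (the ~70 ms above) can then be compensated for when lining the two files up, or by delaying one of the starts as described.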
I had this problem and I solved it in my project simply by changing the PreferredHardwareIOBufferDuration parameter of the AudioSession. I think I have just 6 ms of latency now, which is good enough for my app.
Check this answer that has a good explanation.
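In the newer AVAudioSession API the equivalent is preferredIOBufferDuration; a minimal sketch (the 5 ms request is just an example, and the system may round it to something else):

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setPreferredIOBufferDuration(0.005)   // request ~5 ms buffers
        try session.setActive(true)
        print("Actual IO buffer duration: \(session.ioBufferDuration)")
    } catch {
        print("Audio session setup failed: \(error)")
    }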
I am a newbie to multimedia work. I want to capture audio as samples and transfer them to another iOS device via the network. How do I start? I have just gone through Apple's multimedia guide and the SpeakHere example; it is full of C++ code, and it writes to a file and then starts services, but I need a buffer. Please help me start my work in the correct way.
Thanks in advance
I just spent a bunch of time working on real-time audio stuff. You can use AudioQueue, but it has latency issues of around 100-200 ms.
If you want to do something like the t-pain app, you have to use
RemoteIO API
Audio Unit API
They are equally difficult to implement, so I would just pick the Remote IO path.
Source can be found here:
http://atastypixel.com/blog/using-remoteio-audio-unit/
I have upvoted the answer above, but I wanted to add a piece of information that took me a while to figure out. When using AudioQueue for recording, the intuitive notion is that the callback is invoked at regular intervals corresponding to the number of samples requested. That notion is incorrect: AudioQueue seems to gather samples for a long period of time and then deliver them in very fast iterations of the callback.
In my case, I was using 20 ms buffers and receiving 320 samples per callback. When printing out the timestamps of the calls, I noticed a pattern: one call every 2 ms, then after a while one gap of ~180 ms. Since I was doing VoIP, this presented the symptom of an increasing delay on the receiving end. Switching to Remote I/O seems to have solved the issue.
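If raw Remote I/O feels like too much, one higher-level way to get at the sample buffers as they arrive (not what the answers above used, just an alternative sketch) is an AVAudioEngine input tap; the buffer size and the send step are placeholders:

    import AVFoundation

    // Install a tap on the microphone input and hand each buffer to your own code
    // (e.g. serialization + a network send).
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // buffer is an AVAudioPCMBuffer of raw samples; serialize and send it here.
        // send(buffer)   // placeholder for your networking code
    }

    do {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
        try engine.start()
    } catch {
        print("Could not start audio engine: \(error)")
    }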