iOS/AVFoundation: Design pattern for async handlers when turning arrays of images into tracks and then into a single video?

Can you point me to design pattern guides that would help me adapt my style to AVFoundation's async approach?
I'm working on an app where you create an image and place audio onto hotspots on it. I'm implementing export to a movie that shows the image, with effects (the hotspot glow) playing under the audio.
I can reliably create the video and audio tracks, and I can correctly get audio into an AVMutableComposition and play it back. The problem is with the video. I've narrowed it down to my having written a synchronous solution to a problem that requires AVFoundation's async writing methods.
The current approach and where it fails (each step is its own method):
1. Create an array of dictionaries, with two objects per dictionary: a UIImage representing a keyframe, and the URL of the audio that ends on that keyframe. The first dictionary has the start keyframe but no audio URL.
2. For each dictionary in the array, replace the UIImage with an array of images (start image -> animation tween images -> end-state image), with the proper count for the FPS and the duration of the audio.
3. For each dictionary in the array, convert the image array into a soundless MP4 and save it using [AVAssetWriter finishWritingWithCompletionHandler:], then replace the image array in the dictionary with the URL of the MP4. Each dictionary of MP4 & audio URL represents a segment of the final movie, and the order of the dictionaries in the array dictates the insert order for the final movie.
-- all of the above works: everything gets made and ordered correctly, and the videos and audio play back --
4. For each dictionary with an MP4 & audio URL, load the files into AVAssets and insert them into AVMutableComposition tracks, one track for audio and one for video. The audio load & insert works and plays back. But the video fails, and it appears to fail because step 4 starts before step 3's finishWritingWithCompletionHandler: has finished for all of the MP4 tracks.
One approach would be to pause in a while loop and poll the AVAssetWriter's status until it reports done. That smacks of working against the framework, and in practice it leads to ugly and sometimes seemingly infinite waits for the loops to end.
But simply making step 4 the completion handler for finishWritingWithCompletionHandler is non-trivial because I am writing multiple tracks but I want step 4 to launch only after the last track is written. Because step 3 is basically a for-each processor, I think all completion handlers would need to be the same. I guess I could use bools or counters to change up the completion handler, but it just feels like a kluge.
If any of the above made any sense, can someone give me/point to a primer on design patterns for asynch handling like this? TIA.

You can use GCD dispatch groups for that sort of problem.
From the docs:
Grouping blocks allows for aggregate synchronization. Your application can submit multiple blocks and track when they all complete, even though they might run on different queues. This behavior can be helpful when progress can't be made until all of the specified tasks are complete.
The basic idea is that you call dispatch_group_enter for each of your async tasks, and in each task's completion handler you call dispatch_group_leave.
Dispatch groups work like counting semaphores: you increment a counter (dispatch_group_enter) when you start a task, and you decrement it (dispatch_group_leave) when a task finishes.
dispatch_group_notify lets you install a completion handler block for your group. This block gets executed when the counter reaches 0.
This blog post provides a good overview and a complete code sample: http://amro.co/post/48248949039/using-gcd-to-wait-on-many-tasks
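For example, a minimal sketch of that pattern applied to the case in the question - placeholder names like segmentWriters and assembleFinalComposition are mine, not from the original code:

dispatch_group_t group = dispatch_group_create();

for (AVAssetWriter *writer in segmentWriters) {        // one writer per temp MP4
    dispatch_group_enter(group);                        // +1 before kicking off the async work
    [writer finishWritingWithCompletionHandler:^{
        // per-segment cleanup here
        dispatch_group_leave(group);                    // -1 once this segment is fully written
    }];
}

// Runs once the counter is back at zero, i.e. after the last completion handler.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self assembleFinalComposition];                    // "step 4" goes here
});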

@weichsel Thank you very much. That seems like it should work. But I'm using dispatch_group_wait and it never seems to return. I've been banging against it for several hours since you first replied, but no luck. Here's what I've done:
Added a property that is a dispatch group, called videoProcessingGroup, and called dispatch_group_create in the init of the class doing the video processing.
In the method that creates the video tracks, used dispatch_group_async(videoProcessingGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ [videoWriter finishWritingWithCompletionHandler:^{ ...
The video track writing method is called from a method chaining together the various steps. In that method, after the call to write the tracks, I call dispatch_group_wait(videoProcessingGroup, DISPATCH_TIME_FOREVER);
In the dealloc, call dispatch_release(videoProcessingGroup).
That's all elided a bit, but essentially the call to dispatch_group_wait never seems to return. My guess is it has something to do with the dispatch_group_async call, but I'm not sure exactly what.
I've found another means of handling this, using my own int counter that I decrement in the handler passed to finishWritingWithCompletionHandler:. But I'd really like to up my skills by understanding GCD better.
Here's the code -- dispatch_group_wait never seems to return, but the movies themselves are made. The code is elided a bit for brevity, but everything that was removed works fine on its own, without the GCD code.
@implementation MovieMaker

// This is the dispatch group
@synthesize videoProcessingGroup = _videoProcessingGroup;

-(id)init {
    self = [super init];
    if (self) {
        _videoProcessingGroup = dispatch_group_create();
    }
    return self;
}

-(void)dealloc {
    dispatch_release(self.videoProcessingGroup);
}
-(id)convert:(MTCanvasElementViewController *)sourceObject {
    // code fails in same way with or without this line
    dispatch_group_enter(self.videoProcessingGroup);

    // This method works its way down to writeImageArrayToMovie
    _tracksData = [self collectTracks:sourceObject];

    NSString *fileName = @"";

    // The following seems to never stop waiting; the movies themselves get made though
    // Wait until dispatch group finishes processing temp tracks
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    // never gets to here
    fileName = [self writeTracksToMovie:_tracksData];

    // Wait until dispatch group finishes processing final track
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    return fileName;
}
// @param videoFrames should be NSArray of UIImage, all of same size
// @return path to temp file
-(NSString *)writeImageArrayToMovie:(NSArray *)videoFrames usingDispatchGroup:(dispatch_group_t)dispatchGroup {
    // elided a bunch of stuff, but it all works
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:result]
                                                           fileType:AVFileTypeMPEG4
                                                              error:&error];
    // elided stuff

    // Finish the session:
    [writerInput markAsFinished];
    dispatch_group_async(dispatchGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [videoWriter finishWritingWithCompletionHandler:^{
            dispatch_group_leave(dispatchGroup);
            // not sure I ever get here? NSLogs don't write out.
            CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
        }];
    });
    return result;
}
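For what it's worth, a minimal sketch of how the enter/leave calls could be balanced around this writer (an illustration only, not the original code): enter once before finishing, leave exactly once inside the completion handler, and observe the group with dispatch_group_notify rather than blocking with dispatch_group_wait.

// One enter per writer, one leave in its completion handler; no dispatch_group_async needed.
dispatch_group_enter(dispatchGroup);
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
    CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
    dispatch_group_leave(dispatchGroup);
}];

// Later, instead of dispatch_group_wait on the calling thread:
dispatch_group_notify(self.videoProcessingGroup, dispatch_get_main_queue(), ^{
    // all temp tracks are written; safe to call writeTracksToMovie: here
});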

Related

iOS interface freeze caused by background thread

I have an app that needs to preload a bunch of streamed videos as soon as possible so that they play instantly when the user clicks on them.
I am able to achieve this with a collection of AVPlayer objects, initialized right when the app is launched:
-(void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        NSString *streamingURL = [NSString stringWithFormat:@"https://mywebsite.com/%@.m3u8", video.fileName];
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:streamingURL] options:nil];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

        pthread_mutex_lock(&mutex_videoPlayers);
        [_videoPlayers setObject:player forKey:videoKey];
        pthread_mutex_unlock(&mutex_videoPlayers);
    }
}
The lock is defined in init as:
pthread_mutex_init(&mutex_videoPlayers, NULL);
My problem is that when I invoke this function, the app freezes for about 1 minute, then continues on with no problem. This is obviously because there is a lot of processing going on - according to the debug dashboard in Xcode, CPU usage spikes to about 67% during the freeze.
So I thought I could solve this by putting the operation into a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    [self preloadVideos];
});
but the app still froze briefly in exactly the same way, and CPU usage had the same pattern. I thought maybe it's because the task is too intensive and needed to be broken up into smaller tasks, so I tried serializing the loop as distinct tasks:
preloadQueue = dispatch_queue_create("preloadQueue", NULL);
...
-(void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        dispatch_async(preloadQueue, ^(void) {
            [self preloadVideo:video]; // a new function with the logic above
        });
    }
}
but that seemed to make the freeze period longer, even though max CPU usage went down to 48%.
Am I missing something with these GCD functions? Why does the AVPlayer creation block the main thread when put into background tasks?
I know it's not that there are too many AVPlayers created, because there are only 6 of them, and the app runs fine after they are created.
After adding log messages I notice that (in all implementations), the setObject call is called for every single video player before the interface's viewDidAppear method is called. Also, 5 videos load instantly, and the last - a longer one - takes a while but the freeze ends right when it completes.
Why is the app waiting for background tasks to finish before updating the views?
Update:
The app accesses videoPlayers while these tasks are running, but since I use a lock while writing, I don't lock while reading. Here is the definition:
@property (atomic, retain) NSMutableDictionary *videoPlayers;
Update: updated preloadVideos with mutex locks; still seeing the freeze.
Turns out the background thread was locking a resource that the main thread was accessing elsewhere. The main thread needed to wait for the resource to become freed, which caused the interface to freeze.
Your dispatch_async code should not be freezing the main thread. That should be creating the asset objects in the background. It will take time before the assets become available, but that should be ok.
What do you mean by "...the app still froze briefly..."? Froze how? And for how long?
How are you using the _videoPlayers dictionary once you've loaded it? What are you doing to handle the fact that it may only be partially loaded? (If you are looping through _videoPlayers when it gets written to from the background, you may crash.) At the very least you should make videoPlayers an atomic property of your class and always reference it (for both reads and writes) using property notation (self.videoPlayers or [self videoPlayers], never _videoPlayers). You will probably need better protection than that, such as using @synchronized around the code that accesses it.
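A small sketch of the kind of guarded access described above, with hypothetical accessor names (the lock object just needs to be the same one for every read and write):

// Wrap every access to the shared dictionary in the same @synchronized block.
- (void)storePlayer:(AVPlayer *)player forKey:(NSString *)key
{
    @synchronized (self.videoPlayers) {
        [self.videoPlayers setObject:player forKey:key];
    }
}

- (AVPlayer *)playerForKey:(NSString *)key
{
    @synchronized (self.videoPlayers) {
        return [self.videoPlayers objectForKey:key];
    }
}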

Adding audio buffer [from file] to 'live' audio buffer [recording to file]

What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file has a pre-defined background-music track from an external audio file added - without further encoding/exporting after recording.
As if you were recording video using the iPhone's Camera app, and all the recorded videos in 'Camera Roll' had background songs. No exporting or loading after the recording ends, and not in a separate audio track.
How I'm trying to achieve this:
By using AVCaptureSession, in the delegate-method where the (CMSampleBufferRef)sample buffers are passed through, I'm pushing them to an AVAssetWriter to write to file. As I don't want multiple audio tracks in my output file, I can't pass the background-music through a separate AVAssetWriterInput, which means I have to add the background-music to each sample buffer from the recording while it's recording to avoid having to merge/export after recording.
The background-music is a specific, pre-defined audio file (format/codec: m4a aac), and will need no time-editing, just adding beneath the entire recording, from start to end. The recording will never be longer than the background-music-file.
Before starting the writing to file, I've also made ready an AVAssetReader, reading the specified audio-file.
Some pseudo-code (threading excluded):

-(void)startRecording
{
    /*
       Initialize writer and reader here: [...]
    */
    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type Audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were done normally, without a background track as I'm trying to do, I'd simply put something like this: [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case though, I need to overlap two buffers before writing:
-(void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created, named overlapBuffer:withBackgroundBuffer:, isn't doing much right now. I know how to extract the AudioBufferList, AudioBuffer, mData etc. from a CMSampleBufferRef, but I'm not sure how to actually add them together - however, I haven't been able to test different ways of doing that, because the real problem happens before that. Before the magic should happen, I am in possession of two CMSampleBufferRefs, one received from the microphone and one read from the file, and this is the problem:
The sample buffer received from the background-music file is different from the one I receive from the recording session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; receives a large number of samples. I realize this might be obvious to some people, but I've never before been at this level of media technology. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sampleBuffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording-session gives one audio-sample in each sample-buffer, while the file-reader gives multiple samples in each sample-buffer. Can I somehow create a counter to count each received recorded sample/buffers, and then use the first file-sampleBuffer to extract each sample, until the current file-sampleBuffer has no more samples 'to give', and then call [..copyNext..], and do the same to that buffer?
As I'm in full control of both the recording and the file's codecs, formats etc, I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both samples have the same sampleRate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
Also worth mentioning that when I try to use a Video-file instead of an Audio-file, and try to continually pull video-sampleBuffers, they align up perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox instead of AVFoundation. However, I guess you should be able to set the size of the recording's capture buffer. If not, and you still get just one sample at a time, I would recommend storing each individual piece of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, then call [self overlapBuffer:auxiliarySampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using Core Audio. Using Core Audio I have been able to obtain 1024-sample LPCM buffers from both microphone capturing and file reading, so the overlapping is immediate.
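For illustration, a rough sketch of the sample-counting bookkeeping floated in the question, assuming both streams are LPCM at the same sample rate; the ivar names are invented and the actual mixing is still left as the "magic" step:

// Invented ivars: CMSampleBufferRef _currentBackgroundBuffer; CMItemCount _backgroundSamplesUsed;
-(void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMItemCount recordedSamples = CMSampleBufferGetNumSamples(recordedSampleBuffer);

    // Pull the next file buffer only once the previous one has been fully consumed.
    if (_currentBackgroundBuffer == NULL ||
        _backgroundSamplesUsed >= CMSampleBufferGetNumSamples(_currentBackgroundBuffer)) {
        if (_currentBackgroundBuffer) {
            CFRelease(_currentBackgroundBuffer);
        }
        _currentBackgroundBuffer = [backgroundAudioTrackOutput copyNextSampleBuffer];
        _backgroundSamplesUsed = 0;
    }

    // Mix recordedSamples background samples, starting at offset _backgroundSamplesUsed,
    // into recordedSampleBuffer here (sum the mData of the two AudioBufferLists).
    // Note: a recorded buffer that straddles two file buffers is not handled in this sketch.
    _backgroundSamplesUsed += recordedSamples;

    [assetWriterAudioInput appendSampleBuffer:recordedSampleBuffer];
}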

Using Blocks and GCD to manage tasks

I'm learning iOS, and when it comes to GCD, it's confusing.
Let's get it out of the way: I'm writing a small program that fetches data from the internet.
Here is my view controller:
NSMutableArray *dataArray = [NSMutableArray array];

[self querysomethingwithblock:^{
    // do some stuff here
    [self otherquerywithblock:^{
        // do some stuff here
        // Here I got the data from the internet
        // Do loop action
        [dataArray addObject:data];
    }];
}];

// here I want to perform some actions only after getting the data from the internet
[self performAction:dataArray];
How can I achieve this? In practice, [self performAction:dataArray] always gets fired before I get the data. I tried to play with GCD but no luck.
Here are some of the patterns I've tried so far:
dispatch_async(queue, ^{
    // Do query stuff here
    dispatch_async(dispatch_get_main_queue(), ^{
        // perform action here
    });
});
Or using dispatch_group_async, dispatch_group_wait, dispatch_group_notify.
The only way I can handle it right now is to use dispatch_after, but the download time is variable, so it's not good practice to hard-code a delay here.
Thank you so much for any advice.
The part of the code labeled "Do query stuff here" is, I assume, async already, so why put it inside a dispatch queue?
If instead you manage to do a synchronous query, your code (the second snippet) would work, as the dispatch to the main queue would be executed only after the query finished.
If you don't have an option to execute the query in a synchronous manner, then you need some mechanism to register either a block or a callback to be executed when the download is finished.
At the end of the day, it all depends on what kind of query you have in there and what methods it offers for you to register an action to be performed when the download is finished.
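One common shape for that, sketched here with the question's made-up method names and an invented fetchDataWithCompletion: wrapper (not a real API):

// Expose a completion block on the wrapper, and call it only once the nested
// queries have delivered their data, hopping back to the main queue for UI work.
- (void)fetchDataWithCompletion:(void (^)(NSArray *results))completion
{
    NSMutableArray *dataArray = [NSMutableArray array];
    [self querysomethingwithblock:^{
        [self otherquerywithblock:^{
            [dataArray addObject:data];   // data is whatever the inner query handed back
            dispatch_async(dispatch_get_main_queue(), ^{
                if (completion) {
                    completion(dataArray);
                }
            });
        }];
    }];
}

// Call site: the dependent action lives inside the block, not after the call.
[self fetchDataWithCompletion:^(NSArray *results) {
    [self performAction:results];
}];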

Playing sound at interval with GCD timer not behaving as expected

Hi !
I'm building a timer using GCD for the purpose of playing a sound at a specific interval - to be more precise, it's a metronome sound. I've been trying for days to solve my issue but have gotten nowhere. Everything is fine, but when I set my tempo to a larger value, say 150 bpm or 200 bpm, the first time the sound starts it fires very quickly (almost like two sounds at the same time, i.e. without the expected interval), and after that it calibrates. When I start the sound a second time, all is good... so this happens only the first time I resume my dispatch source, which makes me guess it has something to do with loading the sound from disk, as in this post: Slow start for AVAudioPlayer the first time a sound is played. For my sound I first used an instance of AVAudioPlayer with prepareToPlay and play, and also created it in the AppDelegate class; that hasn't worked. I have even tried the SoundManager class developed by @NickLockwood - same issue. At present, I'm using a SystemSoundID. As for the timers, this is my first GCD timer; I've already tried the classic NSTimer, CADisplayLink and other timers found on git... all in vain.
Another interesting thing is that with the other timers, everything is perfect in the simulator, but on the device I get the same glitch.
Here's the code, I hope someone will bring me to the light.
-(void)playButtonAction
{
    if (_metronomeIsAnimatingAndPLaying == NO)
    {
        [self startAnimatingArm]; // I start my animation and create my timer

        metronomeTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
        dispatch_source_set_timer(metronomeTimer,
                                  dispatch_time(DISPATCH_TIME_NOW, duration * NSEC_PER_SEC),
                                  duration * NSEC_PER_SEC,
                                  duration * NSEC_PER_SEC);
        dispatch_source_set_event_handler(metronomeTimer, ^{ [self playTick]; });
        dispatch_resume(metronomeTimer);

        _metronomeIsAnimatingAndPLaying = YES;
    }
}

-(void)playTick
{
    AudioServicesPlaySystemSound(appDeleg.soundID); // soundID is created in appDelegate
}
In my application didFinishLaunching
NSString *path = [[NSBundle mainBundle] pathForResource:@"tick"
                                                 ofType:@"caf"];
AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:path], &_soundID);
And the BPM setter and getter:
- (NSUInteger)bpm
{
    return round(60.0 / duration);
}

- (void)setBpm:(NSUInteger)bpm
{
    if (bpm >= MaxBPM) {
        bpm = MaxBPM;
    } else if (bpm <= MinBPM) {
        bpm = MinBPM;
    }
    duration = (60.0 / bpm);
}
This arrangement will fundamentally never work.
GCD is a thread pool designed to facilitate task-level parallelism. It is usually asynchronous and non-real-time. These are almost precisely the opposite characteristics to those required in an audio application.
Each thread servicing a GCD queue is contending with other threads in the system for an opportunity to execute. Furthermore, the queue may be busy at requested time processing something else. If that something else is long-running - and long-running tasks are precisely the kind of thing that GCD is made for - the scheduler may pre-empt the thread before the operation has completed and penalise the queue; it may wait a long time for service.
The Manpage for GCD states the following about timers on GCD queues:
A best effort attempt is made to submit the event handler block to the target queue at the specified time; however, actual invocation may occur at a later time.
NSTimer will not be any better. Its documentation states: "A timer is not a real-time mechanism." Since you'll probably run this on the application's main run loop, it will also be very unpredictable.
The solution to this problem is to use lower-level audio APIs - specifically Audio Units. The advantage of doing so is that soft-synth units have an event queue which is serviced by the unit's render handler. This runs on a real-time thread and offers extremely robust and predictable service. Since you can queue a considerable number of events with timestamps in the future, your timing requirements are now quite loose. You could safely use either GCD or an NSTimer for this.
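To make the sample-accurate idea concrete, here is a rough sketch of a render callback that counts frames and plays a click at beat boundaries; the RemoteIO Audio Unit setup, loading the tick into memory, and format negotiation are omitted, and every name here is invented:

#import <AudioUnit/AudioUnit.h>

// Invented state struct; framesPerBeat = sampleRate * 60.0 / bpm, recomputed when the tempo changes.
typedef struct {
    UInt64  framesRendered;   // running frame counter for the whole session
    UInt32  framesPerBeat;    // beat length in frames at the current BPM
    float  *clickFrames;      // the tick sound as mono float samples, preloaded from disk
    UInt32  clickLength;      // number of frames in the click
} MetronomeState;

static OSStatus MetronomeRender(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
{
    MetronomeState *state = (MetronomeState *)inRefCon;
    float *out = (float *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Position of this output frame within the current beat.
        UInt64 posInBeat = (state->framesRendered + i) % state->framesPerBeat;
        // Copy the click while inside its length, silence otherwise.
        out[i] = (posInBeat < state->clickLength) ? state->clickFrames[posInBeat] : 0.0f;
    }
    state->framesRendered += inNumberFrames;
    return noErr;
}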

iOS play sound before performing other action

I need to play a brief sound, 3 seconds or so (like a countdown beep), before I perform some other action in an iOS application.
The use case is as follows:
User clicks a button... the beeps play (simple beeps using AudioServicesPlaySystemSound)... then the rest of the method runs.
I cannot seem to find a way to block my method while the tones are playing.
I've tried the following:
[self performSelector:@selector(playConfirmationBeep) onThread:[NSThread currentThread] withObject:nil waitUntilDone:YES];
But the tones play at the same time as the rest of the method is performed.
What am I missing with the above call?
AudioServicesPlaySystemSound is asynchronous so you can't block on it. What you want to do is get the audio services to notify you when playback is finished. You can do that via AudioServicesAddSystemSoundCompletion.
It's a C-level API so things are a bit ugly, but you probably want something like:
// somewhere, a C function like...
void audioServicesSystemSoundCompleted(SystemSoundID ssID, void *clientData)
{
    [(MyClass *)clientData systemSoundCompleted:ssID];
}

// meanwhile, in your class' init, probably...
AudioServicesAddSystemSoundCompletion(
    soundIDAsYoullPassToAudioServicesPlaySystemSound,
    NULL, // i.e. [NSRunloop mainRunLoop]
    NULL, // i.e. NSDefaultRunLoopMode
    audioServicesSystemSoundCompleted,
    self);

// in your dealloc, to avoid a dangling pointer:
AudioServicesRemoveSystemSoundCompletion(
    soundIDAsYoullPassToAudioServicesPlaySystemSound);

// somewhere in your class:
- (void)systemSoundCompleted:(SystemSoundID)sound
{
    if (sound == soundIDAsYoullPassToAudioServicesPlaySystemSound)
    {
        NSLog(@"time to do the next thing!");
    }
}
If you actually want to block the UI while the sound is playing, and assuming your class is a view controller, you should probably just set self.view.userInteractionEnabled to NO for the relevant period. What you definitely don't want to do is block the main run loop; that'll stop important system events like low-memory warnings from getting through and hence potentially cause your app to be force quit. You also probably still want to obey things like device rotations.
