iOS: How to resume an AVAsset-based video writing session

I'm working on a video editing application for iPhone/iPod touch. My app asks the user to choose one of the existing videos in the Camera Roll, then changes pixel values frame by frame and saves the result under a different name back to the Camera Roll. Because video processing can take quite a long time, I really need some way to resume a previously started session (i.e. if processing has reached 30% of the total video length and the user presses the home button, or there is a phone call, then when my application comes back to the foreground processing should continue from 30%, not start over from the beginning).
The most important parts of my code (simplified a bit for readability):
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:sourceAsset error:&error];
NSArray *videoTracks = [sourceAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
AVAssetReaderTrackOutput *assetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                                                               outputSettings:nil];
[assetReader addOutput:assetReaderOutput];

[mVideoWriter startWriting];
[mAssetReader startReading];
[mVideoWriter startSessionAtSourceTime:kCMTimeZero];
mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    while (mWriterInput.readyForMoreMediaData) {
        CMSampleBufferRef nextBuffer = [mAssetReaderOutput copyNextSampleBuffer];
        if (nextBuffer) {
            // frame processing here
        } // if
    } // while
}]; // block
// when done, saving my video to the camera roll directory
I'm listening to my app delegate's callbacks like applicationWillResignActive: and applicationWillEnterForeground:, but I'm not sure how to handle them properly. What I've tried already:
1) [AVAssetWriter start/endSessionAtSourceTime] - unfortunately, ending and restarting the session at the last frame's presentation time did not work for me.
2) Saving every processed part of the movie "by hand" and, when processing reaches 100%, merging all of them with AVMutableComposition - this solution, however, sometimes crashes in the dispatch queue.
It would be really great if someone could give me some hints on how this should be done correctly...

I'm pretty sure AVAssetWriter can't append, so something along the lines of 2), saving the pieces then merging them, is probably the best solution if you must make your export restartable.
But first you have to resolve that crash.
Also, before you start creating hundreds of movie pieces, you should have a look at AVAssetWriter.movieFragmentInterval: with careful management of presentation time stamps/durations you may be able to use it to minimise the number of pieces you have to merge.
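For the stitching step, a rough sketch of what merging previously written segments could look like (this is an illustration, not the asker's code; segmentURLs and mergedOutputURL are hypothetical names, and movie fragments only apply to QuickTime-movie output):

// Sketch only: merge previously saved segment files with AVMutableComposition
// and re-export them as one movie. `segmentURLs` and `mergedOutputURL` are
// hypothetical placeholders.
AVMutableComposition *composition = [AVMutableComposition composition];
CMTime cursor = kCMTimeZero;
NSError *error = nil;
for (NSURL *segmentURL in segmentURLs) {
    AVURLAsset *segment = [AVURLAsset URLAssetWithURL:segmentURL options:nil];
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, segment.duration)
                         ofAsset:segment
                          atTime:cursor
                           error:&error];
    cursor = CMTimeAdd(cursor, segment.duration);
}

AVAssetExportSession *export =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetPassthrough];
export.outputURL = mergedOutputURL;
export.outputFileType = AVFileTypeQuickTimeMovie;
[export exportAsynchronouslyWithCompletionHandler:^{
    // inspect export.status / export.error here
}];

// And the fragment hint: with a QuickTime (.mov) output file, writing a movie
// fragment every few seconds keeps a partially written file readable.
mVideoWriter.movieFragmentInterval = CMTimeMakeWithSeconds(10, 600);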

Have you tried -[UIApplication beginBackgroundTaskWithExpirationHandler:]? This seems like a good place to request extra operating time in the background to finish the heavy lifting.
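A minimal sketch of what that might look like (finishOrSuspendProcessing is a hypothetical checkpointing method, not something from the question):

// Sketch: request extra background execution time around the export.
__block UIBackgroundTaskIdentifier taskID = UIBackgroundTaskInvalid;
taskID = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    // The grace period is about to expire: checkpoint/stop work, then end the task.
    [self finishOrSuspendProcessing];   // hypothetical method
    [[UIApplication sharedApplication] endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];

// ... continue the frame-by-frame processing here ...

// When the export finishes normally:
[[UIApplication sharedApplication] endBackgroundTask:taskID];
taskID = UIBackgroundTaskInvalid;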

Related

Reduce memory usage of AVAssetWriter

As the title says, I am having some trouble with AVAssetWriter and memory.
Some notes about my environment/requirements:
I am NOT using ARC, but if there is a way to simply use it and get it all working I'm all for it. My attempts have not made any difference though. And the environment I will be using this in requires memory to be minimised / released ASAP.
Objective-C is a requirement
Memory usage must be as low as possible; the roughly 300 MB it takes up now makes the app unstable when testing on my device (iPhone X).
The code
This is the code used when taking the screenshots below https://gist.github.com/jontelang/8f01b895321d761cbb8cda9d7a5be3bd
The problem / items kept around in memory
Most of the things that seem to take up a lot of memory throughout the processing appear to be allocated at the very beginning.
So at this point it doesn't seem to me that the issue is with my code. The code I personally control - loading the images, creating the buffer, releasing it - does not appear to be where the memory problem lies. For example, if I mark in Instruments the majority of the time after the point shown above, the memory is stable and none of it is kept around.
The only reason for the persistent 5 MB is that it is deallocated just after the marking period ends.
Now what?
I actually started writing this question with the focus on whether my code was releasing things correctly, but now it seems like that is fine. So what are my options now?
Is there something I can configure within the current code to make the memory requirements smaller?
Is there simply something wrong with my setup of the writer/input?
Do I need to use a totally different way of making a video to be able to make this work?
A note on using CVPixelBufferPool
In the documentation of CVPixelBufferCreate Apple states:
If you need to create and release a number of pixel buffers, you should instead use a pixel buffer pool (see CVPixelBufferPool) for efficient reuse of pixel buffer memory.
I have tried this as well, but I saw no change in memory usage. Changing the attributes for the pool didn't seem to have any effect either, so there is a small possibility that I am not actually using it 100% properly, although from comparing against code online it seems like I am, at least. And the output file works.
The code for that, is here https://gist.github.com/jontelang/41a702d831afd9f9ceeb0f9f5365de03
And here is a slightly different version where I set up the pool in a slightly different way https://gist.github.com/jontelang/c0351337bd496a6c7e0c94293adf881f.
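For reference, a bare-bones sketch of the standalone-pool approach the documentation describes (dimensions and pixel format here are made up; the adaptor's own pixelBufferPool property, used in the answer below, is usually the simpler route):

// Sketch: create one pool up front and draw per-frame buffers from it
// instead of calling CVPixelBufferCreate for every frame.
NSDictionary *pixelAttrs = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey           : @(1920),
    (id)kCVPixelBufferHeightKey          : @(1080),
};
CVPixelBufferPoolRef pool = NULL;
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                        (__bridge CFDictionaryRef)pixelAttrs, &pool);

// Per frame:
CVPixelBufferRef buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);
// ... fill `buffer`, append it via the adaptor ...
CVPixelBufferRelease(buffer);

// When the writer is done:
CVPixelBufferPoolRelease(pool);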
Update 1
So I looked a bit deeper into a trace, to figure out when/where the majority of the allocations are coming from. Here is an annotated image of that:
The takeaway is:
The space is not allocated "with" the AVAssetWriter
The 500mb that is held until the end is allocated within 500ms after the processing starts
It seems that it is done internally in AVAssetWriter
I have the .trace file uploaded here: https://www.dropbox.com/sh/f3tf0gw8gamu924/AAACrAbleYzbyeoCbC9FQLR6a?dl=0
When creating the dispatch queue, make sure you create a queue with an autorelease pool: replace DISPATCH_QUEUE_SERIAL with DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL.
Also wrap each iteration of the for loop in an autorelease pool, like this:
[assetWriterInput requestMediaDataWhenReadyOnQueue:recordingQueue usingBlock:^{
    for (int i = 1; i < 200; ++i) {
        @autoreleasepool {
            while (![assetWriterInput isReadyForMoreMediaData]) {
                [NSThread sleepForTimeInterval:0.01];
            }
            NSString *path = [NSString stringWithFormat:@"/Users/jontelang/Desktop/SnapperVideoDump/frames/frame_%i.jpg", i];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            CGImageRef ref = [image CGImage];
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ref pool:writerAdaptor.pixelBufferPool];
            CMTime presentTime = CMTimeAdd(CMTimeMake(i, 60), CMTimeMake(1, 60));
            [writerAdaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
            CVPixelBufferRelease(buffer);
        }
    }
    [assetWriterInput markAsFinished];
    [assetWriter finishWritingWithCompletionHandler:^{}];
}];
No, I see it is around 240 mb peaking in app. It's my first time using this allocation - interesting.
I'm using AVAssetWriter to write a video file by streaming CMSampleBuffers that arrive in real time from an AVCaptureVideoDataOutputSampleBufferDelegate (camera capture output).
While I have not yet found the actual issue, the memory problem I described in this question was solved by simply doing it on the actual device instead of the simulator.
@Eugene_Dudnyk's answer is spot on: the autorelease pool INSIDE the for or while loop is the key. Here is how I got it working in Swift; also, please use AVAssetWriterInputPixelBufferAdaptor for the pixel buffer pool:
videoInput.requestMediaDataWhenReady(on: videoInputQueue) { [weak self] in
    while videoInput.isReadyForMoreMediaData {
        autoreleasepool {
            guard let sample = assetReaderVideoOutput.copyNextSampleBuffer(),
                  let buffer = CMSampleBufferGetImageBuffer(sample) else {
                print("Error while processing video frames")
                videoInput.markAsFinished()
                DispatchQueue.main.async {
                    videoFinished = true
                    closeWriter()
                }
                return
            }
            // Process the image and render it back into the buffer (in-place operation,
            // where ciProcessedImage is your processed new image)
            self?.getCIContext().render(ciProcessedImage, to: buffer)
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sample)
            self?.adapter?.append(buffer, withPresentationTime: timeStamp)
        }
    }
}
My memory usage stopped rising.

iOS interface freeze caused by background thread

I have an app that needs to preload a bunch of streamed videos as soon as possible so that they play instantly when the user clicks on them.
I am able to achieve this with a collection of AVPlayer objects, initialized right when the app is launched:
-(void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        NSString *streamingURL = [NSString stringWithFormat:@"https://mywebsite.com/%@.m3u8", video.fileName];
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:streamingURL] options:nil];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

        pthread_mutex_lock(&mutex_videoPlayers);
        [_videoPlayers setObject:player forKey:videoKey];
        pthread_mutex_unlock(&mutex_videoPlayers);
    }
}
The lock is defined in init as:
pthread_mutex_init(&mutex_videoPlayers, NULL);
My problem is that when I invoke this function, the app freezes for about 1 minute, then continues on with no problem. This is obviously because there is a lot of processing going on: according to the debug gauges in Xcode, CPU usage spikes to about 67% during the freeze.
So I thought I could solve this by putting the operation into a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
[self preloadVideos];
});
but the app still froze briefly in exactly the same way, and CPU usage showed the same pattern. I thought maybe it's because the task is too intensive and needed to be broken up into smaller tasks, so I tried serializing the loop as distinct tasks:
preloadQueue = dispatch_queue_create("preloadQueue", NULL);
...
-(void)preloadVideos {
    for (Video *video in arrayOfVideos) {
        dispatch_async(preloadQueue, ^(void){
            [self preloadVideo:video]; // a new function with the logic above
        });
    }
}
but that seemed to make the freeze period longer, even though max CPU usage went down to 48%.
Am I missing something with these GCD functions? Why does the AVPlayer creation block the main thread when put into background tasks?
I know it's not that there are too many AVPlayers created, because there are only 6 of them, and the app runs fine after they are created.
After adding log messages I notice that (in all implementations), the setObject call is called for every single video player before the interface's viewDidAppear method is called. Also, 5 videos load instantly, and the last - a longer one - takes a while but the freeze ends right when it completes.
Why is the app waiting for background tasks to finish before updating the views?
Update:
The app accesses videoPlayers while these tasks are running, but since I use a lock while writing, I don't lock while reading. Here is the definition:
@property (atomic, retain) NSMutableDictionary *videoPlayers;
Update: I updated preloadVideos with mutex locks and am still seeing the freeze.
Turns out the background thread was locking a resource that the main thread was accessing elsewhere. The main thread needed to wait for the resource to become freed, which caused the interface to freeze.
Your dispatch_async code should not be freezing the main thread. That should be creating the asset objects in the background. It will take time before the assets become available, but that should be ok.
What do you mean "...the app still froze briefly..." Froze how? And for how long?
How are you using the _videoPlayers dictionary once you've loaded it? What are you doing to handle the fact that it may only be partially loaded? (If you loop through _videoPlayers while it is being written to from the background, you may crash.) At the very least you should make videoPlayers an atomic property of your class and always reference it (read and write) using property notation (self.videoPlayers or [self videoPlayers], never _videoPlayers). You will probably need better protection than that, such as using @synchronized for the code that accesses it.
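One possible shape for that protection, sketched with hypothetical accessor methods rather than the asker's actual code:

// Sketch: funnel every read and write of the dictionary through the same
// @synchronized token so a background writer and a main-thread reader
// cannot interleave. setPlayer:forKey: / playerForKey: are made-up helpers.
- (void)setPlayer:(AVPlayer *)player forKey:(NSString *)key {
    @synchronized (self.videoPlayers) {
        [self.videoPlayers setObject:player forKey:key];
    }
}

- (AVPlayer *)playerForKey:(NSString *)key {
    @synchronized (self.videoPlayers) {
        return [self.videoPlayers objectForKey:key];
    }
}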

Adding audio buffer [from file] to 'live' audio buffer [recording to file]

What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file has pre-defined background music from an external audio file added, without any further encoding/exporting after recording.
As if you were recording a video using the iPhone's Camera app, and all the recorded videos in the Camera Roll had background songs: no exporting or loading after the recording ends, and not in a separate audio track.
How I'm trying to achieve this:
Using AVCaptureSession, in the delegate method where the (CMSampleBufferRef) sample buffers are passed through, I push them to an AVAssetWriter to write to file. Since I don't want multiple audio tracks in my output file, I can't pass the background music through a separate AVAssetWriterInput, which means I have to add the background music to each sample buffer from the recording while it is being recorded, to avoid having to merge/export afterwards.
The background music is a specific, pre-defined audio file (format/codec: m4a AAC) and needs no time editing; it just has to be added underneath the entire recording, from start to end. The recording will never be longer than the background-music file.
Before starting to write to file, I've also prepared an AVAssetReader that reads the specified audio file.
Some pseudo-code(threading excluded):
-(void)startRecording
{
    /*
     Initialize writer and reader here: [...]
    */

    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were done normally, without a background track as I'm trying to add, I'd simply put something like [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case, though, I need to overlap two buffers before writing:
-(void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created, named overlapBuffer:withBackgroundBuffer:, isn't doing much right now. I know how to extract the AudioBufferList, AudioBuffer, mData etc. from a CMSampleBufferRef, but I'm not sure how to actually add them together. However, I haven't been able to test different ways of doing that, because the real problem happens before that point. Before the magic should happen, I am in possession of two CMSampleBufferRefs, one received from the microphone and one read from file, and this is the problem:
The sample buffer received from the background-music file is different from the one I receive from the recording session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; returns a large number of samples. I realize this might be obvious to some people, but I've never been at this level of media technology before. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sample buffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording session gives one audio sample per sample buffer, while the file reader gives multiple samples per sample buffer. Can I somehow keep a counter of the received recorded samples/buffers, use the first file sample buffer to extract each sample until the current file sample buffer has no more samples 'to give', then call [..copyNext..] and do the same with that buffer?
As I'm in full control of both the recording and the file's codecs, formats etc, I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both samples have the same sampleRate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
Also worth mentioning that when I try to use a Video-file instead of an Audio-file, and try to continually pull video-sampleBuffers, they align up perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox instead of AVFoundation. However, I guess you should be able to set the size of the recording capture buffer. If not, and you are still getting just one sample at a time, I would recommend storing each individual chunk of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, then call [self overlapBuffer:auxiliarySampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using Core Audio. Using Core Audio I have been able to obtain 1024-sample LPCM buffers from both microphone capture and file reading, so the overlapping is immediate.
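For what the "magic" mixing step could look like once both sides are plain LPCM of the same format, here is a minimal sketch. It assumes interleaved 16-bit samples already extracted from the two buffers' mData, which is an assumption and not taken from the question:

#include <stdint.h>
#include <stddef.h>

// Sketch: mix `sampleCount` interleaved 16-bit LPCM background-music samples
// into the microphone samples in place, clamping to avoid wrap-around distortion.
static void MixInt16(int16_t *mic, const int16_t *music, size_t sampleCount) {
    for (size_t i = 0; i < sampleCount; i++) {
        int32_t mixed = (int32_t)mic[i] + (int32_t)music[i];
        if (mixed > INT16_MAX) mixed = INT16_MAX;
        if (mixed < INT16_MIN) mixed = INT16_MIN;
        mic[i] = (int16_t)mixed;
    }
}

The raw pointers would come from something like CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer, and you would keep a read offset into the current file buffer so it only advances by however many samples the microphone buffer consumed.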

`[AVCaptureSession canAddOutput:output]` returns NO intermittently. Can I find out why?

I am using canAddOutput: to determine if I can add a AVCaptureMovieFileOutput to a AVCaptureSession and I'm finding that canAddOutput: is sometimes returning NO, and mostly returning YES. Is there a way to find out why a NO was returned? Or a way to eliminate the situation that is causing the NO to be returned? Or anything else I can do that will prevent the user from just seeing an intermittent failure?
Some further notes: this happens approximately once in 30 calls. As my app has not launched yet, it has only been tested on one device: an iPhone 5 running 7.1.2.
Here is a quote from the documentation (the discussion of canAddOutput:):
You cannot add an output that reads from a track of an asset other than the asset used to initialize the receiver.
Here is an explanation that should help (please check whether your code follows this guide; if everything is set up correctly, canAddOutput: should not fail, since it basically checks compatibility).
AVCaptureSession
This is what connects capture device inputs to outputs, similar to connecting filters in DirectShow. If an input and an output can be connected, then once the session starts, data flows from the input to the output.
Several main points:
a) AVCaptureDevice - the definition of the device, e.g. a camera
b) AVCaptureInput
c) AVCaptureOutput
Inputs and outputs are not one-to-one; for example, a single video output can be fed by video + audio inputs.
Before and after switching the camera:
AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
Add the capture input:
To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device's ports.
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
Add outputs; the output classes:
To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput. You use:
AVCaptureMovieFileOutput to output to a movie file
AVCaptureVideoDataOutput if you want to process frames from the video being captured
AVCaptureAudioDataOutput if you want to process the audio data being captured
AVCaptureStillImageOutput if you want to capture still images with accompanying metadata
You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as you want while the session is running.
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieInput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieInput]) {
    [captureSession addOutput:movieInput];
}
else {
    // Handle the failure.
}
Save to a video file; add the movie file output:
You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording, or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
Processing preview video frame data: each viewfinder frame can be used for subsequent high-level processing, such as face detection and so on.
An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames.
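As an illustration of that paragraph (the session, queue label and delegate here are placeholders, not from the question), the wiring looks roughly like this:

// Sketch: vend frames to `self` on a dedicated serial queue.
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.videoFrames",
                                                    DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:frameQueue];
if ([captureSession canAddOutput:videoDataOutput]) {
    [captureSession addOutput:videoDataOutput];
}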
When processing frames, there are constraints on the frame size (image size) and on processing time: if processing takes too long, the underlying sensor will stop sending data to the layer and to the callback.
You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power. You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AVFoundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.
Handling still image capture:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,
                                                                            AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
Different formats are supported, and it can also generate a JPEG stream directly.
If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without re-compressing the data, even if you modify the image's metadata.
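A short sketch of that capture call (connection lookup and error handling kept minimal; stillImageOutput is the object configured above):

// Sketch: grab one still frame and pull out the hardware-compressed JPEG data.
AVCaptureConnection *connection =
    [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer) {
        NSData *jpegData =
            [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        // write jpegData to disk or hand it to the photo library
    }
}];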
Camera preview display:
You can provide the user with a preview of what's being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see the Core Animation Programming Guide). You don't need any outputs to show the preview.
AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer =
    [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
In general, the preview layer behaves like any other CALayer object in the render tree (see the Core Animation Programming Guide). You can scale the image and perform transformations, rotations and so on just as you would with any layer. One difference is that you may need to set the layer's orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).
Referring to this answer, there is a possibility that this delegate method runs in the background, which can leave the previous AVCaptureSession not disconnected properly and sometimes results in canAddOutput: returning NO.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
The solution might be to call stopRunning in the above delegate (after doing the necessary actions and condition checks, of course; you need to finish off your previous sessions properly, right?).
Adding to that, it would be better if you provided some code showing what you are trying to do.
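A minimal sketch of that suggestion (self.captureSession is a placeholder for however you hold onto the session):

// Sketch: stop the session from the metadata delegate once it has done its job,
// so the old session is properly shut down before a new one is configured.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    if (metadataObjects.count == 0) {
        return;
    }
    // ... handle the metadata here ...

    if (self.captureSession.isRunning) {
        [self.captureSession stopRunning];   // hypothetical property holding the session
    }
}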
It can be one of these two cases:
1) The session is running
2) You have already added the output
You can't add the same output or input twice, and you also can't create two different sessions.
It may be a combination of:
Calling this method when the camera is busy.
Not properly removing your previously connected AVCaptureSession.
You should try to only add it once (where I guess canAddOutput: will always be YES) and just pause/resume your session as needed:
// Stop session if possible
if (_captureSession.running && !_captureInProgress)
{
    [_captureSession stopRunning];
    NBULogVerbose(@"Capture session: {\n%@} stopped running", _captureSession);
}
You can take a look here.
I think this will help you
canAddOutput:
Returns a Boolean value that indicates whether a given output can be added to the session.
- (BOOL)canAddOutput:(AVCaptureOutput *)output
Parameters
output
An output that you want to add to the session.
Return Value
YES if output can be added to the session, otherwise NO.
Availability
Available in OS X v10.7 and later.
This is taken from the Apple documentation for canAddOutput:.

iOS/AVFoundation: Design pattern for asynch handlers when turning arrays of images into tracks and then into a single video?

Can you point me to design pattern guides to adapt my style to AVFoundation's asynch approach?
I'm working on an app where you create an image and place audio onto hotspots on it. I'm implementing export to a movie that shows the image with effects (hotspot glow) playing under the audio.
I can reliably create the video and audio tracks and can correctly get audio into an AVMutableComposition and play it back. Problem is with the video. I've narrowed it to my having written a synchronous solution to a problem that requires use of AVFoundation's asynch writing methods.
The current approach and where it fails (each step is own method):
Create an array of dictionaries, 2 objects per dictionary. One object is an image representing a keyframe, the other is the URL of the audio that ends on that keyframe. The first dictionary has a start keyframe but no audio URL.
For each dictionary in the array, replace the UIImage with an array of start image -> animation tween images -> end-state image, with the proper count for the FPS and the duration of the audio.
For each dictionary in the array, convert the image array into a soundless MP4 and save it using [AVAssetWriter finishWritingWithCompletionHandler:], then replace the image array in the dictionary with the URL of the MP4. Each dictionary of MP4 & audio URL represents a segment of the final movie, where the order of the dictionaries in the array dictates the insert order for the final movie.
-- all of above works, stuff gets made & ordered right, vids and audio playback --
For each dictionary with an MP4 & audio URL, load them into AVAssets and insert them into AVMutableComposition tracks, one track for audio and one for video. The audio load & insert works and plays back. But the video fails, and it appears to fail because step 4 starts before step 3's [AVAssetWriter finishWritingWithCompletionHandler:] has finished for all MP4 tracks.
One approach would be to pause via while loop and wait for status on the AVAssetWriter to say done. This smacks of working against the framework. In practice it is also leading to ugly and sometimes seemingly infinite waits for loops to end.
But simply making step 4 the completion handler for finishWritingWithCompletionHandler is non-trivial because I am writing multiple tracks but I want step 4 to launch only after the last track is written. Because step 3 is basically a for-each processor, I think all completion handlers would need to be the same. I guess I could use bools or counters to change up the completion handler, but it just feels like a kluge.
If any of the above made any sense, can someone give me/point to a primer on design patterns for asynch handling like this? TIA.
You can use GCD dispatch groups for that sort of problem.
From the docs:
Grouping blocks allows for aggregate synchronization. Your application
can submit multiple blocks and track when they all complete, even
though they might run on different queues. This behavior can be
helpful when progress can’t be made until all of the specified tasks
are complete.
The basic idea is, that you call dispatch_group_enter for each of your async tasks. In the completion handler of your tasks, you call dispatch_group_leave.
Dispatch groups work similarly to counting semaphores: you increment a counter (using dispatch_group_enter) when you start a task, and you decrement it (using dispatch_group_leave) when a task finishes.
dispatch_group_notify lets you install a completion handler block for your group. This block gets executed when the counter reaches 0.
This blog post provides a good overview and a complete code sample: http://amro.co/post/48248949039/using-gcd-to-wait-on-many-tasks
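Applied to the finishWritingWithCompletionHandler case, a minimal sketch might look like this (segmentWriters is a hypothetical collection of the per-segment writers; dispatch_group_notify is used here instead of blocking with dispatch_group_wait):

// Sketch: enter the group before each writer starts finishing, leave in its
// completion handler, and only run the merge step once the count hits zero.
dispatch_group_t group = dispatch_group_create();

for (AVAssetWriter *videoWriter in segmentWriters) {   // hypothetical collection
    dispatch_group_enter(group);
    [videoWriter finishWritingWithCompletionHandler:^{
        dispatch_group_leave(group);
    }];
}

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // Every segment is fully written; it is now safe to run step 4
    // (loading the MP4s and building the AVMutableComposition).
});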
@weichsel Thank you very much. That seems like it should work. But I'm using dispatch_group_wait and it seems not to wait. I've been banging against it for several hours since you first replied, but no luck. Here's what I've done:
Added property that is a dispatch group, called videoDispatchGroup, and call dispatch_group_create in the init of the class doing the video processing
In the method that creates the video tracks, use dispatch_group_async(videoDispatchGroup, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ [videoWriter finishWritingWithCompletionHandler:^{
The video track writing method is called from a method chaining together the various steps. In that method, after the call to write the tracks, I call dispatch_group_wait(videoProcessingGroup, DISPATCH_TIME_FOREVER);
In the dealloc, call dispatch_release(videoDispatchGroup)
That's all elided a bit, but essentially the call to dispatch_group_wait doesn't seem to be waiting. My guess is it has something to do with the dispatch_group_async call, but I'm not sure exactly what.
I've found another means of handling this, using my own int count/decrement via the async handler on finishWritingWithCompletion handler. But I'd really like to up my skills by understanding GCD better.
Here's the code; dispatch_group_wait never seems to return, but the movies themselves are made. The code is elided a bit for brevity, but nothing was removed that doesn't work without the GCD code.
@implementation MovieMaker

// This is the dispatch group
@synthesize videoProcessingGroup = _videoProcessingGroup;

-(id)init {
    self = [super init];
    if (self) {
        _videoProcessingGroup = dispatch_group_create();
    }
    return self;
}

-(void)dealloc {
    dispatch_release(self.videoProcessingGroup);
}
-(id)convert:(MTCanvasElementViewController *)sourceObject {
    // code fails in same way with or without this line
    dispatch_group_enter(self.videoProcessingGroup);

    // This method works its way down to writeImageArrayToMovie
    _tracksData = [self collectTracks:sourceObject];

    NSString *fileName = @"";

    // The following seems to never stop waiting; the movies themselves get made though
    // Wait until dispatch group finishes processing temp tracks
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    // never gets to here
    fileName = [self writeTracksToMovie:_tracksData];

    // Wait until dispatch group finishes processing final track
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    return fileName;
}
// @param videoFrames should be NSArray of UIImage, all of same size
// @return path to temp file
-(NSString *)writeImageArrayToMovie:(NSArray *)videoFrames usingDispatchGroup:(dispatch_group_t)dispatchGroup {
    // elided a bunch of stuff, but it all works

    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:result]
                                                           fileType:AVFileTypeMPEG4
                                                              error:&error];
    // elided stuff

    // Finish the session:
    [writerInput markAsFinished];

    dispatch_group_async(dispatchGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [videoWriter finishWritingWithCompletionHandler:^{
            dispatch_group_leave(dispatchGroup);

            // not sure I ever get here? NSLogs don't write out.
            CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
        }];
    });

    return result;
}
