iOS / iPhone - AVAssetWriter: How to use endSessionAtSourceTime:

I am using AVAssetWriter to create an MPEG4 file.
I start a video session with:
[assetWriter startSessionAtSourceTime:kCMTimeZero];
Now the video file is written fine if I finish the session with this:
[assetWriter finishWritingWithCompletionHandler:^{
}];
But if I call [assetWriter endSessionAtSourceTime:endTime]; before [assetWriter finishWritingWithCompletionHandler:], then it doesn't write the file.
This is how I call endSessionAtSourceTime:
endTime = CMTimeMakeWithSeconds(secondsRecorded, 30);
[assetWriter endSessionAtSourceTime:endTime];
Any ideas what I am doing wrong?

I think the issue is that endSessionAtSourceTime: doesn't do what you're expecting.
endSessionAtSourceTime: does not finish the file by itself; it marks the end of the session, and any samples with timestamps later than the specified source time are edited out of (removed from) the output. You still need to call finishWritingWithCompletionHandler: afterwards to actually write the file.
If your intended result is a 30-second clip, set up an NSTimer or something similar and call endSessionAtSourceTime: followed by finishWritingWithCompletionHandler: once the 30 seconds have elapsed, as in the sketch below.
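For reference, here is a minimal sketch of that call order, reusing the names from the question (the writerInput variable and the assumption that sample timestamps start at zero are mine, not the asker's):

[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];

// ... append sample buffers to the writer inputs here ...

// Trim anything after the desired end time, then finish the file.
CMTime endTime = CMTimeMakeWithSeconds(secondsRecorded, 30);
[assetWriter endSessionAtSourceTime:endTime];
[writerInput markAsFinished];
[assetWriter finishWritingWithCompletionHandler:^{
    NSLog(@"Finished writing, status: %ld", (long)assetWriter.status);
}];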

Related

AVFoundation no audio tracks for long videos

While recording a video using AVFoundation's - (void)startRecordingToOutputFileURL:(NSURL*)outputFileURL recordingDelegate:(id<AVCaptureFileOutputRecordingDelegate>)delegate; method, there is no audio track in the output file if the video duration is more than 12 seconds. Everything works fine if the video duration is less than 12 seconds...
The delegate method in which the output file URL is received is:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
NSLog(#"AUDIO %#", [[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]); //App crashes here...
NSLog(#"VIDEO %#", [[AVAsset assetWithURL:outputFileURL] tracksWithMediaType:AVMediaTypeVideo]);
}
My app crashes for a video that is longer than 12 seconds with this error: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'
My guess is that AVCaptureMovieFileOutput has better support for QuickTime containers (.qt, .mov) than for .mp4, even though MP4 is the industry standard. For instance, when writing a movie file in fragments to an .mp4, something probably goes wrong with the fragment table (sample table).
So you could either change the file format to .mov or turn off writing the file in fragments. See this question:
ios-8-ipad-avcapturemoviefileoutput-drops-loses-never-gets-audio-track-after
I spent almost a day on this, and this turned out to be the fix. After a lot of help from iOS 8 iPad AVCaptureMovieFileOutput drops / loses / never gets audio track after 13 - 14 seconds of recording, just add this line:
avCaptureMovieFileOutput.movieFragmentInterval = kCMTimeInvalid
Another fix that worked for me: just change the extension of the path the video is being recorded to from .mp4 to .mov.
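A minimal sketch combining both suggestions (the captureSession and movieFileOutput names, the temporary output path, and the self delegate are illustrative assumptions; the delegate must adopt AVCaptureFileOutputRecordingDelegate):

AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

// Disable fragmented writing so the sample table is written once, at the end.
movieFileOutput.movieFragmentInterval = kCMTimeInvalid;
[captureSession addOutput:movieFileOutput];

// Record into a QuickTime (.mov) container rather than .mp4.
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.mov"]];
[movieFileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];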

iOS/AVFoundation: Design pattern for asynch handlers when turning arrays of images into tracks and then into a single video?

Can you point me to design pattern guides that would help me adapt my style to AVFoundation's asynchronous approach?
I'm working on an app where you create an image and place audio onto hotspots on it. I'm implementing export to a movie that shows the image, with effects (a glow on each hotspot), playing under the audio.
I can reliably create the video and audio tracks, and I can correctly get the audio into an AVMutableComposition and play it back. The problem is with the video. I've narrowed it down to my having written a synchronous solution to a problem that requires AVFoundation's asynchronous writing methods.
The current approach and where it fails (each step is its own method):
1. Create an array of dictionaries, with two objects in each dictionary: a UIImage representing a keyframe, and the URL of the audio that ends on that keyframe. The first dictionary has the start keyframe but no audio URL.
2. For each dictionary in the array, replace the UIImage with an array of images (start image -> animation tween images -> end-state image), with the right count for the FPS and the duration of the audio.
3. For each dictionary in the array, convert the image array into a soundless MP4 and save it using [AVAssetWriter finishWritingWithCompletionHandler:], then replace the image array in the dictionary with the URL of the MP4. Each dictionary of MP4 and audio URL represents a segment of the final movie, and the order of the dictionaries in the array dictates the insert order for the final movie.
-- all of the above works: everything gets made and ordered correctly, and the videos and audio play back --
4. For each dictionary with an MP4 and an audio URL, load them into AVAssets and insert them into an AVMutableComposition, one track for audio and one for video. The audio loads, inserts, and plays back. But the video fails, and it appears to fail because step 4 starts before step 3's finishWritingWithCompletionHandler: has finished for all of the MP4 tracks.
One approach would be to pause via a while loop and wait for the AVAssetWriter's status to say it's done. That smacks of working against the framework, and in practice it also leads to ugly and sometimes seemingly infinite waits for the loops to end.
But simply making step 4 the completion handler for finishWritingWithCompletionHandler: is non-trivial, because I'm writing multiple tracks and I want step 4 to launch only after the last track is written. Because step 3 is basically a for-each processor, I think all the completion handlers would need to be the same. I guess I could use bools or counters to vary the completion handler, but that just feels like a kludge.
If any of the above made any sense, can someone give me/point to a primer on design patterns for asynch handling like this? TIA.
You can use GCD dispatch groups for that sort of problem.
From the docs:
Grouping blocks allows for aggregate synchronization. Your application
can submit multiple blocks and track when they all complete, even
though they might run on different queues. This behavior can be
helpful when progress can’t be made until all of the specified tasks
are complete.
The basic idea is, that you call dispatch_group_enter for each of your async tasks. In the completion handler of your tasks, you call dispatch_group_leave.
Dispatch groups work similarly to counting semaphores: you increment a counter (using dispatch_group_enter) when you start a task, and you decrement it (using dispatch_group_leave) when a task finishes.
dispatch_group_notify lets you install a completion handler block for your group. This block gets executed when the counter reaches 0.
This blog post provides a good overview and a complete code sample: http://amro.co/post/48248949039/using-gcd-to-wait-on-many-tasks
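Applied to this question, a minimal sketch might look like the following (the writers array and the startMergeStep method are hypothetical names used to illustrate the enter/leave/notify pattern, not the asker's code):

dispatch_group_t group = dispatch_group_create();

// Step 3: kick off all of the per-segment writers.
for (AVAssetWriter *writer in writers) {
    dispatch_group_enter(group);                 // one enter per async task
    [writer finishWritingWithCompletionHandler:^{
        dispatch_group_leave(group);             // one leave per completion
    }];
}

// Step 4 runs only after every completion handler has called dispatch_group_leave.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self startMergeStep];                       // hypothetical next-step method
});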
@weichsel Thank you very much. That seems like it should work. But I'm using dispatch_group_wait and it seems not to wait. I've been banging against it for several hours since you first replied, but no luck. Here's what I've done:
Added a property that is a dispatch group, called videoDispatchGroup, and called dispatch_group_create in the init of the class doing the video processing
In the method that creates the video tracks, use dispatch_group_async(videoDispatchGroup, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ [videoWriter finishWritingWithCompletionHandler:^{
The video track writing method is called from a method chaining together the various steps. In that method, after the call to write the tracks, I call dispatch_group_wait(videoProcessingGroup, DISPATCH_TIME_FOREVER);
In the dealloc, call dispatch_release(videoDispatchGroup)
That's all elided a bit, but essentially the call to dispatch_group_wait doesn't seem to be waiting. My guess is it has something to do with the dispatch_group_async call, but I'm not sure exactly what.
I've found another means of handling this, using my own int counter that I decrement in the finishWritingWithCompletionHandler: callback. But I'd really like to up my skills by understanding GCD better.
Here's the code -- dispatch_group_wait never seems to return, but the movies themselves do get made. The code is elided a bit for brevity, but nothing that was removed affects the GCD behaviour.
@implementation MovieMaker

// This is the dispatch group
@synthesize videoProcessingGroup = _videoProcessingGroup;

-(id)init {
    self = [super init];
    if (self) {
        _videoProcessingGroup = dispatch_group_create();
    }
    return self;
}

-(void)dealloc {
    dispatch_release(self.videoProcessingGroup);
}

-(id)convert:(MTCanvasElementViewController *)sourceObject {
    // code fails in same way with or without this line
    dispatch_group_enter(self.videoProcessingGroup);

    // This method works its way down to writeImageArrayToMovie
    _tracksData = [self collectTracks:sourceObject];

    NSString *fileName = @"";

    // The following seems to never stop waiting, the movies themselves get made though
    // Wait until dispatch group finishes processing temp tracks
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    // never gets to here
    fileName = [self writeTracksToMovie:_tracksData];

    // Wait until dispatch group finishes processing final track
    dispatch_group_wait(self.videoProcessingGroup, DISPATCH_TIME_FOREVER);

    return fileName;
}

// @param videoFrames should be NSArray of UIImage, all of same size
// @return path to temp file
-(NSString *)writeImageArrayToMovie:(NSArray *)videoFrames usingDispatchGroup:(dispatch_group_t)dispatchGroup {
    // elided a bunch of stuff, but it all works
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:result]
                                                           fileType:AVFileTypeMPEG4
                                                              error:&error];
    // elided stuff

    // Finish the session:
    [writerInput markAsFinished];
    dispatch_group_async(dispatchGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [videoWriter finishWritingWithCompletionHandler:^{
            dispatch_group_leave(dispatchGroup);
            // not sure I ever get here? NSLogs don't write out.
            CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
        }];
    });
    return result;
}

Performance issues with AVAssetWriterInput (audio) and single-core devices

I'm having a strange performance issue with AVAssetWriterInput on single-core devices like the iPhone 3GS and iPhone 4. Basically, I have an AVAssetWriter with two AVAssetWriterInputs, one for video and one for audio. Roughly, it looks like:
AVAssetWriter * assetWriter = [[AVAssetWriter alloc] initWithURL:videoURL
fileType:AVFileTypeMPEG4
error:&error];
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
videoWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary * audioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
[NSNumber numberWithInt:64000], AVEncoderBitRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSNumber numberWithInt:44100], AVSampleRateKey,
currentChannelLayoutData, AVChannelLayoutKey,
nil];
audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:audioSettings];
audioWriterInput.expectsMediaDataInRealTime = YES;
[assetWriter addInput:videoWriterInput];
[assetWriter addInput:audioWriterInput];
There's also an AVAssetWriterInputPixelBufferAdaptor in there but I left it out for brevity's sake. I create one serial dispatch queue that I use for both video and audio writing. When I get a new video frame, I dispatch the job onto that serial queue, and when I get new audio bytes, I dispatch that job to the same serial queue. This way, most of the audio writing code doesn't happen on the audio callback thread (which would otherwise slow down the high-priority audio thread).
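Roughly, the dispatching looks like this (a simplified sketch, not my exact code; writingQueue is an assumed name, and writerInput stands for either videoWriterInput or audioWriterInput from the snippet above):

// Created once, alongside the asset writer:
dispatch_queue_t writingQueue = dispatch_queue_create("com.example.assetwriting", DISPATCH_QUEUE_SERIAL);

// From either the video or the audio capture callback:
CFRetain(sampleBuffer);
dispatch_async(writingQueue, ^{
    if (writerInput.isReadyForMoreMediaData) {
        if (![writerInput appendSampleBuffer:sampleBuffer]) {
            NSLog(@"append failed, writer status: %ld", (long)assetWriter.status);
        }
    }
    CFRelease(sampleBuffer);   // balance the retain taken on the callback thread
});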
I'm able to write a video with audio successfully and it performs fine on dual-core devices. However, on single core devices, I find that there's a noticeable stutter on the device's display roughly every 500-700ms. I've tracked down the culprit to the following:
if ([audioWriterInput isReadyForMoreMediaData])
{
    if (![audioWriterInput appendSampleBuffer:sampleBuffer])
    {
        NSLog(@"Couldn't append audio sample buffer: %d", numAudioCallbacks_);
    }
}
If I comment out the appendSampleBuffer for the audioWriterInput, then obviously no audio gets written, but there's no stutter in the actual display either. It's weird because all of this is happening off of the main thread. CPU/GPU times per frame are on the order of 4-5 ms, so it's not like the CPU/GPU are bottlenecked.
Another clue/curiosity: when I do write audio buffers, I get a saw-tooth-like memory graph.
It seems to me that the audioWriterInput is queueing up some buffers and then writing/releasing them.
One last hint is that it seems like the stuttering is less severe when I record as mono instead of stereo. Unfortunately, the sounds that come in are stereo, so when I do that, the sound is wrong. Either way, the stuttering doesn't go away completely so it's not really a valid solution, but might hint at the fact that stereo takes up more resources to handle.
Does anyone have any ideas on why writing audio with the AVAssetWriterInput causes the entire app to stutter? These devices obviously only have one core, so all threads run on it, but it seems strange that the main thread should stutter when the CPU load is so low.

iOS: How to resume an AVAsset-based video writing session

I'm working on a video editing application for iPhone/iPod touch. My app simply asks the user to choose one of the videos already in the camera roll directory, then changes the pixel values frame by frame and saves the result under a different name in the camera roll directory. Because video processing can take quite a long time, I really need to implement some way to resume a previously started session (i.e. if processing has reached 30% of the total video length and the user presses the home button, or there is a phone call, then when my application is brought back to the foreground the processing should resume from 30%, not start from the beginning).
The most important parts of my code (simplified a bit to be more readable):
AVAssetReader* assetReader = [[AVAssetReader alloc] initWithAsset:sourceAsset error:&error];
NSArray* videoTracks = [sourceAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack* videoTrack = [videoTracks objectAtIndex:0];
// nil output settings hand back the samples in their stored format
AVAssetReaderTrackOutput* assetReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                                                               outputSettings:nil];
[assetReader addOutput:assetReaderOutput];

[mVideoWriter startWriting];
[mAssetReader startReading];
[mVideoWriter startSessionAtSourceTime:kCMTimeZero];

mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
    while (mWriterInput.readyForMoreMediaData) {
        CMSampleBufferRef nextBuffer = [mAssetReaderOutput copyNextSampleBuffer];
        if (nextBuffer) {
            // frame processing here
        } // if
    } // while
}]; // block
// when done, the video is saved to the camera roll directory
I'm listening for my app delegate's callback methods, like applicationWillResignActive and applicationWillEnterForeground, but I'm not sure how to handle them properly. What I've tried already:
1) [AVAssetWriter startSessionAtSourceTime:] / [AVAssetWriter endSessionAtSourceTime:]; unfortunately, stopping and restarting at the last frame's presentation time did not work for me
2) saving "by hand" every part of the movie that has been processed and, when processing reaches 100%, merging all of them using AVMutableComposition; this solution, however, sometimes crashes in the dispatch queue
It would be really great if someone could give me some hints on how this should be done correctly...
I'm pretty sure AVAssetWriter can't append to an existing file, so something along the lines of 2), saving the pieces and then merging them, is probably the best solution if you must make your export restartable.
But first you have to resolve that crash.
Also, before you start creating hundreds of movie pieces, you should have a look at AVAssetWriter.movieFragmentInterval; with careful management of presentation timestamps/durations, you may be able to use it to minimise the number of pieces you have to merge.
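A small sketch of what that might look like (outputURL, error, and the 5-second interval are illustrative assumptions):

// Fragments are only supported for QuickTime movie files, not MPEG-4.
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];

// Write a movie fragment every few seconds so a partially written file
// stays usable up to the last completed fragment.
writer.movieFragmentInterval = CMTimeMakeWithSeconds(5, 600);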
Have you tried -[UIApplication beginBackgroundTaskWithExpirationHandler:]? This seems like a good place to request extra operating time in the background to finish the heavy lifting.
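A minimal sketch of the background-task idea (stopProcessingAndSaveProgress is a hypothetical hook into the export loop):

__block UIBackgroundTaskIdentifier taskID = UIBackgroundTaskInvalid;
taskID = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
    // Called shortly before the extra time runs out: stop cleanly and remember progress.
    [self stopProcessingAndSaveProgress];   // hypothetical
    [[UIApplication sharedApplication] endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];

// ... continue the frame-by-frame processing here ...

// When processing finishes normally, end the task explicitly.
[[UIApplication sharedApplication] endBackgroundTask:taskID];
taskID = UIBackgroundTaskInvalid;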

MPMoviePlayerController stop function, and then play from the current time (iPhone)

I have this situation:
I play a (streamed) video, stop it in code, and then play it again.
The problem is that the video begins from the start, not from where I stopped it.
What solutions do you know of?
EDIT: As @willcodejavaforfood has pointed out, stop only pauses the movie - calling play should restart where you left off so you shouldn't ever need my answer! Can you post the code you are using so we can see what's going on? Thanks.
If you want it to restart where you left off, you will need to remember how far through the movie you were and set the initialPlaybackTime property before you call play again.
i.e. Store the time when you start the movie
...
[myMoviePlayer play];
startDate = [[NSDate date] retain];
...
Store the time you stop the movie
...
[myMoviePlayer stop];
endDate = [[NSDate date] retain];
...
And when you start playback again, use the difference between these times
...
[myMoviePlayer setInitialPlaybackTime:[endDate timeIntervalSinceDate:startDate]];
[myMoviePlayer play];
...
Sam
PS To force it to start at the start, just set the initial playback time to 0.0
PPS I don't know how accurate NSDate is in milliseconds so you might have to use a better way of working out the current position in the movie :)
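One such better way (a sketch, assuming the same myMoviePlayer instance) is to ask the player itself for its position via the currentPlaybackTime property from MPMediaPlayback, instead of timing it with NSDate:

// Ask the player where it is before stopping.
NSTimeInterval resumeTime = myMoviePlayer.currentPlaybackTime;
[myMoviePlayer stop];

// ...later, seek back to that position and resume.
myMoviePlayer.currentPlaybackTime = resumeTime;
[myMoviePlayer play];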
