Creating Video from array of images and saving it to camera roll - ios

I was trying to make a video from a set of images. I found some code and got it to work,
but the video won't save to the photo library; it gives me this error:
Documentsa.mov cannot be saved to the saved photos album: Error Domain=NSOSStatusErrorDomain Code=2 "This movie could not be played." UserInfo=0x922cf60 {NSLocalizedDescription=This movie could not be played.}
Here's the code I use:
NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
UIImage * i = [UIImage imageNamed:@"IMG_1650.JPG"];
CGAffineTransform transform = CGAffineTransformMakeRotation((M_PI/180)*90);
GPUImageTransformFilter * filter = [[GPUImageTransformFilter alloc]init];
[filter setAffineTransform:transform];
UIImage * im = [filter imageByFilteringImage:i];
im = [filter imageByFilteringImage:im];
[self writeImagesToMovieAtPath:[NSString stringWithFormat:@"%@/%@",documentsDirectoryPath,@"a.mov"] withSize:CGSizeMake(i.size.width, i.size.height)];
NSString* exportVideoPath = [NSString stringWithFormat:@"%@%@",documentsDirectoryPath,@"a.mov"];
UISaveVideoAtPathToSavedPhotosAlbum (exportVideoPath,self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
And this is the code I use to create the video:
-(void) writeImagesToMovieAtPath:(NSString *) path withSize:(CGSize) size
{
NSLog(#"Write Started");
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
int frameCount = 1;
int kRecordingFPS = 30;
UIImage * im = [UIImage imageNamed:@"IMG_1650.JPG"];
CGAffineTransform transform = CGAffineTransformMakeRotation((M_PI/180)*90);
GPUImageTransformFilter * filter = [[GPUImageTransformFilter alloc]init];
[filter setAffineTransform:transform];
UIImage * i = [filter imageByFilteringImage:im];
i = [filter imageByFilteringImage:im];
NSArray * imageArray = [[NSArray alloc]initWithObjects:i,i,i,i,i,i,i,i,i,i,i,i,i,i,i,i, nil];
for(UIImage * img in imageArray)
{
buffer = [self pixelBufferFromCGImage:[img CGImage] andSize:size];
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30)
{
if (adaptor.assetWriterInput.readyForMoreMediaData)
{
printf("appending %d attemp %d\n", frameCount, j);
CMTime frameTime = CMTimeMake(frameCount,(int32_t) kRecordingFPS);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(buffer)
CVBufferRelease(buffer);
[NSThread sleepForTimeInterval:0.05];
}
else
{
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n", frameCount, j);
}
frameCount++;
}
//Finish the session:
[videoWriterInput markAsFinished];
[videoWriter finishWriting];
NSLog(#"Write Ended");
}
I tried to check if the video is compatible using
UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(videoPath)
and found out that the video isn't compatible,
but I don't know why, or how to fix it.

The videoPath
NSString* exportVideoPath = [NSString stringWithFormat:@"%@%@",documentsDirectoryPath,@"a.mov"];
is wrong. You are missing the '/' separator.
Change it to this:
NSString* exportVideoPath = [NSString stringWithFormat:@"%@/%@",documentsDirectoryPath,@"a.mov"];
Hope this solves your issue.
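As a side note, here is a small sketch (same Documents-directory setup assumed) that builds the path with stringByAppendingPathComponent:, which inserts the separator for you and avoids this kind of bug entirely:
// Sketch: -stringByAppendingPathComponent: adds the "/" for you.
NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *exportVideoPath = [documentsDirectoryPath stringByAppendingPathComponent:@"a.mov"];
// Only hand the file to the photo library if it reports it as compatible.
if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(exportVideoPath)) {
    UISaveVideoAtPathToSavedPhotosAlbum(exportVideoPath, self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
}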

I tried the solution by A-Live in a previous comment.
The problem turned out to be that the size of the image was too big.
Is there a way to overcome this?

What do you mean by overcome? You are fine to send the file from Documents to any backend server for storage, and other devices with better video support will be able to play it. But exceeding the OS limitations will not likely go well, even if it is possible by some 'hack'.
I mean 1080p is the greatest video resolution the library will accept, as media in the library must be playable. You could still store the file in the app's Documents directory and decode it manually, which sounds like an interesting task :) Make sure you don't sync it with iCloud, though, to avoid huge traffic that would drive customers crazy.
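For reference, a minimal sketch of shrinking the source image before handing it to the writer, so the generated movie stays within a resolution the photo library will accept; the 1920x1080 cap and the method name are my own assumptions, not part of the original code:
// Sketch: scale a UIImage down so neither dimension exceeds an assumed cap
// (e.g. 1920x1080) before it is written into the movie.
- (UIImage *)scaledImage:(UIImage *)image toFitSize:(CGSize)maxSize
{
    CGFloat scale = MIN(maxSize.width / image.size.width, maxSize.height / image.size.height);
    if (scale >= 1.0) {
        return image; // already small enough
    }
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}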

Related

How to save recorded video using AVAssetWriter?

I tried many other blogs and Stack Overflow posts, but I didn't find a solution for this. I am able to create a custom camera with a preview. I need video with a custom frame, which is why I am using AVAssetWriter. But I am unable to save the recorded video into Documents. I tried this:
-(void) initilizeCameraConfigurations {
if(!captureSession) {
captureSession = [[AVCaptureSession alloc] init];
[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
self.view.backgroundColor = UIColor.blackColor;
CGRect bounds = self.view.bounds;
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
captureVideoPreviewLayer.backgroundColor = [UIColor clearColor].CGColor;
captureVideoPreviewLayer.bounds = self.view.frame;
captureVideoPreviewLayer.connection.videoOrientation = AVCaptureVideoOrientationPortrait;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
[self.view.layer addSublayer:captureVideoPreviewLayer];
[self.view bringSubviewToFront:self.controlsBgView];
}
// Add input to session
NSError *err;
videoCaptureDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&err];
if([captureSession canAddInput:videoCaptureDeviceInput]) {
[captureSession addInput:videoCaptureDeviceInput];
}
docPathUrl = [[NSURL alloc] initFileURLWithPath:[self getDocumentsUrl]];
assetWriter = [AVAssetWriter assetWriterWithURL:docPathUrl fileType:AVFileTypeQuickTimeMovie error:&err];
NSParameterAssert(assetWriter);
//assetWriter.movieFragmentInterval = CMTimeMakeWithSeconds(1.0, 1000);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:300], AVVideoWidthKey,
[NSNumber numberWithInt:300], AVVideoHeightKey,
nil];
writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = YES;
writerInput.transform = CGAffineTransformMakeRotation(M_PI);
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:300], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:300], kCVPixelBufferHeightKey,
nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
if([assetWriter canAddInput:writerInput]) {
[assetWriter addInput:writerInput];
}
// Set video stabilization mode to preview layer
AVCaptureVideoStabilizationMode stablilizationMode = AVCaptureVideoStabilizationModeCinematic;
if([videoCaptureDevice.activeFormat isVideoStabilizationModeSupported:stablilizationMode]) {
[captureVideoPreviewLayer.connection setPreferredVideoStabilizationMode:stablilizationMode];
}
// image output
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[captureSession addOutput:stillImageOutput];
[captureSession commitConfiguration];
if (![captureVideoPreviewLayer.connection isEnabled]) {
[captureVideoPreviewLayer.connection setEnabled:YES];
}
[captureSession startRunning];
}
-(IBAction)startStopVideoRecording:(id)sender {
if(captureSession) {
if(isVideoRecording) {
[writerInput markAsFinished];
[assetWriter finishWritingWithCompletionHandler:^{
NSLog(#"Finished writing...checking completion status...");
if (assetWriter.status != AVAssetWriterStatusFailed && assetWriter.status == AVAssetWriterStatusCompleted)
{
// Video saved
} else
{
NSLog(#"#123 Video writing failed: %#", assetWriter.error);
}
}];
} else {
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];
isVideoRecording = YES;
}
}
}
-(NSString *) getDocumentsUrl {
NSString *docPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
docPath = [[docPath stringByAppendingPathComponent:@"Movie"] stringByAppendingString:@".mov"];
if([[NSFileManager defaultManager] fileExistsAtPath:docPath]) {
NSError *err;
[[NSFileManager defaultManager] removeItemAtPath:docPath error:&err];
}
NSLog(#"Movie path : %#",docPath);
return docPath;
}
@end
Correct me if anything is wrong. Thank you in advance.
You don't say what actually goes wrong, but two things look wrong with your code:
docPath = [[docPath stringByAppendingPathComponent:@"Movie"] stringByAppendingString:@".mov"];
looks like it creates an undesired path like this @"/path/Movie/.mov", when you want this:
docPath = [docPath stringByAppendingPathComponent:@"Movie.mov"];
And your timeline is wrong. Your asset writer starts at time 0, but the sample buffers start at CMSampleBufferGetPresentationTimeStamp(sampleBuffer) > 0, so instead do this:
-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
if(firstSampleBuffer) {
[assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
}
[writerInput appendSampleBuffer:sampleBuffer];
}
Conceptually, you have two main functional areas: one that generates video frames – this is the AVCaptureSession and everything attached to it – and another that writes these frames to a file – in your case the AVAssetWriter with its attached inputs.
The problem with your code is that there is no connection between these two. No video frames / images coming out of the capture session are passed to the asset writer inputs.
Furthermore, the AVCaptureStillImageOutput method -captureStillImageAsynchronouslyFromConnection:completionHandler: is never called, so the capture session actually produces no frames.
So, as a minimum, implement something like this:
-(IBAction)captureStillImageAndAppend:(id)sender
{
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageOutput.connections.firstObject completionHandler:
^(CMSampleBufferRef imageDataSampleBuffer, NSError* error)
{
// check error, omitted here
if (CMTIME_IS_INVALID( startTime)) // startTime is an ivar
[assetWriter startSessionAtSourceTime:(startTime = CMSampleBufferGetPresentationTimeStamp( imageDataSampleBuffer))];
[writerInput appendSampleBuffer:imageDataSampleBuffer];
}];
}
Remove the AVAssetWriterInputPixelBufferAdaptor, it's not used.
But there are issues with AVCaptureStillImageOutput:
it's only intended to produce still images, not videos
it must be configured to produce uncompressed sample buffers if the asset writer input is configured to compress the appended sample buffers (stillImageOutput.outputSettings = @{ (NSString*)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};)
it's deprecated under iOS
If you actually want to produce a video, as opposed to a sequence of still images, instead of the AVCaptureStillImageOutput add an AVCaptureVideoDataOutput to the capture session. It needs a delegate and a serial dispatch queue to output the sample buffers. The delegate has to implement something like this:
-(void)captureOutput:(AVCaptureOutput*)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
if (CMTIME_IS_INVALID( startTime)) // startTime is an ivar
[assetWriter startSessionAtSourceTime:(startTime = CMSampleBufferGetPresentationTimeStamp( sampleBuffer))];
[writerInput appendSampleBuffer:sampleBuffer];
}
Note that
you will want to make sure that the AVCaptureVideoDataOutput only outputs frames when you're actually recording; add/remove it from the capture session or enable/disable its connection in the startStopVideoRecording action
reset the startTime to kCMTimeInvalid before starting another recording
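For completeness, a rough sketch of wiring up such an AVCaptureVideoDataOutput; the queue name is arbitrary, self is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate, and startTime is the ivar mentioned above:
// Sketch: deliver uncompressed sample buffers to a serial queue while recording.
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
videoDataOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t sampleQueue = dispatch_queue_create("video.sample.queue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:sampleQueue];
if ([captureSession canAddOutput:videoDataOutput]) {
    [captureSession addOutput:videoDataOutput];
}
// Before each new recording, reset the source time so the first buffer
// re-establishes the session start.
startTime = kCMTimeInvalid;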

Memory leak in CVPixelBufferPoolCreatePixelBuffer

At the moment I am debugging some code. It reads a movie file into an array of frames, applies some transformations to the frames, and compiles everything back into a video file. I have fixed all the memory leaks caused by my own code; the remaining one is pretty serious. It leaves almost 400 MB of memory allocated after the process. Here is the screenshot of the leak.
As you can see, the highest-level call is in the VideoToolbox library. However, I do not even include this library in my project. I don't believe that this leak comes from Apple's library and that there is nothing I can do about it.
Here is the only code that uses something related to h264 and decoding which were mentioned in the call tree.
-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
[[NSFileManager defaultManager] removeItemAtPath:path error:nil];
}
NSError *error = nil;
// FIRST, start up an AVAssetWriter instance to write your video
// Give it a destination path (for us: tmp/temp.mov)
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a SESSION of writing.
// After you start a session, you will keep adding image frames
// until you are complete - then you will tell it you are done.
[videoWriter startWriting];
// This starts your video at time = 0
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//int frameDuration = (1/fps)*600.0;
//NSLog(#"FRAME DURATION: %d",frameDuration);
int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){
CMTime frameTime = CMTimeMake(1, fps);
// CMTime = Value and Timescale.
// Timescale = the number of tics per second you want
// Value is the number of tics
// For us - each frame we add will be 1/4th of a second
// Apple recommend 600 tics per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps etc.
CMTime lastTime=CMTimeMake(i, fps);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.
if (i >= [array count])
{
buffer = NULL;
}
else
{
// This command grabs the next UIImage and converts it to a CGImage
CVPixelBufferRef tempBuffer = buffer;
CVPixelBufferRelease(tempBuffer);
buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage] andSize:size];
}
if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
i++;
}
else
{
//Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(#"Finished writing...checking completion status...");
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
{
NSLog(#"Video writing succeeded.");
// Move video to camera roll
// NOTE: You cannot write directly to the camera roll.
// You must first write to an iOS director then move it!
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0]; // Get documents folder
NSString *audioTemp = [documentsDirectory stringByAppendingPathComponent:@"/temp-audio.m4a"];
NSString *finalTemp = [documentsDirectory stringByAppendingPathComponent:@"/temp.mov"];
[self mergeVideoFromPath:[NSURL fileURLWithPath:path] withAudioFromPath:[NSURL fileURLWithPath:audioTemp] atPath:[NSURL fileURLWithPath:finalTemp]];
} else
{
NSLog(#"Video writing failed: %#", videoWriter.error);
}
}]; // end videoWriter finishWriting Block
CVPixelBufferRelease(buffer);
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (#"Done");
break;
}
}
}
}
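For what it's worth, a sketch (under the assumption that the adaptor was created with non-nil sourcePixelBufferAttributes, e.g. kCVPixelFormatType_32BGRA plus width and height, so that pixelBufferPool is non-NULL once writing has started) of drawing buffers from the adaptor's pool inside an autorelease pool, which is the usual way to keep a frame loop like this from accumulating memory:
// Sketch: request each frame's buffer from the adaptor's pool instead of
// creating a fresh CVPixelBuffer per frame.
@autoreleasepool {
    CVPixelBufferRef pooledBuffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &pooledBuffer);
    if (status == kCVReturnSuccess && pooledBuffer != NULL) {
        // ... render the current CGImage into pooledBuffer here ...
        [adaptor appendPixelBuffer:pooledBuffer withPresentationTime:presentTime];
        CVPixelBufferRelease(pooledBuffer);
    }
}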

AVAssetExportSession combine video files and freeze frame between videos

I have an app which combines video files together to make a long video. There could be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze on the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject
-(instancetype)initWithURL:(NSURL*)url
andDelay:(NSTimeInterval)delay;
@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;
@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/*videoClipPaths is a array of paths of the video clips recorded*/
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoString = [outputVideoURL absoluteString];
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoString])
{
[[NSFileManager defaultManager] removeItemAtPath:outputVideoString error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(#"Export Failed");
NSError* err = exporter.error;
NSLog(#"ExportSessionError: %#", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Cancelled");
NSLog(#"ExportSessionError: %#", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(#"Error:%#", [error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
if(handler)
{
handler(nil);
return;
}
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting last frame of the first video as image.
Making a video using this image(duration depends on your requirement).
Then you can compose these three videos (V1,V2 and your single image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which is used by the accepted answer, then look at the other answer by Steve.
For making a video using the image, check out this link. The question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first one plays your video, which has white frames in between. The other one sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing the last frame, and as a whole it will look like video 1 is paused. And trust me, the naked eye can't tell when the player is switched. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are just going to play it back in your app only, you can go with this approach.
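A rough sketch of that two-player trick; the URLs and view are illustrative, and the second player is simply sought to the end of V1 and left paused behind the first:
// Sketch: playerA plays the exported movie (the one with the gap);
// playerB sits behind it, paused on the last frame of video 1.
AVPlayer *playerA = [AVPlayer playerWithURL:exportedVideoURL];
AVPlayer *playerB = [AVPlayer playerWithURL:video1URL];
AVPlayerLayer *layerA = [AVPlayerLayer playerLayerWithPlayer:playerA];
AVPlayerLayer *layerB = [AVPlayerLayer playerLayerWithPlayer:playerB];
layerA.frame = self.view.bounds;
layerB.frame = self.view.bounds;
[self.view.layer addSublayer:layerB]; // behind
[self.view.layer addSublayer:layerA]; // in front
AVAsset *video1Asset = [AVAsset assetWithURL:video1URL];
[playerB seekToTime:video1Asset.duration toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
[playerA play];
// During the gap, hide layerA (e.g. layerA.hidden = YES) so the paused
// last frame in layerB shows through.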
The first frame of video asset is always black or white
CMTime delta = CMTimeMake(1, 25); //1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta,clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
Merged more than 400 shirt videos into one.

Mixing Images and Video using AVFoundation

I'm trying to splice images into a pre-existing video to create a new video file using AVFoundation on Mac.
So far I've read the Apple documentation example,
ASSETWriterInput for making Video from UIImages on Iphone Issues
Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool
AVFoundation Tutorial: Adding Overlays and Animations to Videos and a few other SO links
Now these have proved to be pretty useful at times, but my problem is that I'm not creating a static watermark or an overlay, exactly; I want to put images in between parts of the video.
So far I've managed to get the video, create blank sections for these images to be inserted into, and export it.
My problem is getting the images to insert themselves into these blank sections. The only way I can see to feasibly do it is to create a series of layers that are animated to change their opacity at the correct times, but I can't seem to get the animation to work.
The code below is what I'm using to create the video segments and layer animations.
//https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW7
// let's start by making our video composition
AVMutableComposition* mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* mutableCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoComposition* mutableVideoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:gVideoAsset];
// if the first point's frame doesn't start on 0
if (gFrames[0].startTime.value != 0)
{
DebugLog("Inserting vid at 0");
// then add the video track to the composition track with a time range from 0 to the first point's startTime
[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, gFrames[0].startTime) ofTrack:gVideoTrack atTime:kCMTimeZero error:&gError];
}
if(gError)
{
DebugLog("Error inserting original video segment");
GetError();
}
// create our parent layer and video layer
CALayer* parentLayer = [CALayer layer];
CALayer* videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, 1280, 720);
videoLayer.frame = CGRectMake(0, 0, 1280, 720);
[parentLayer addSublayer:videoLayer];
// create an offset value that should be added to each point where a new video segment should go
CMTime timeOffset = CMTimeMake(0, 600);
// loop through each additional frame
for(int i = 0; i < gFrames.size(); i++)
{
// create an animation layer and assign its content to the CGImage of the frame
CALayer* Frame = [CALayer layer];
Frame.contents = (__bridge id)gFrames[i].frameImage;
Frame.frame = CGRectMake(0, 720, 1280, -720);
DebugLog("inserting empty time range");
// add frame point to the composition track starting at the point's start time
// insert an empty time range for the duration of the frame animation
[mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];
// update the time offset by the duration
timeOffset = CMTimeAdd(timeOffset, gFrames[i].duration);
// make the layer completely transparent
Frame.opacity = 0.0f;
// create an animation for setting opacity to 0 on start
CABasicAnimation* frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
frameAnim.duration = 1.0f;
frameAnim.repeatCount = 0;
frameAnim.autoreverses = NO;
frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
frameAnim.toValue = [NSNumber numberWithFloat:0.0];
frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero;
frameAnim.speed = 1.0f;
[Frame addAnimation:frameAnim forKey:@"animateOpacity"];
// create an animation for setting opacity to 1
frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
frameAnim.duration = 1.0f;
frameAnim.repeatCount = 0;
frameAnim.autoreverses = NO;
frameAnim.fromValue = [NSNumber numberWithFloat:1.0];
frameAnim.toValue = [NSNumber numberWithFloat:1.0];
frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].startTime);
frameAnim.speed = 1.0f;
[Frame addAnimation:frameAnim forKey:@"animateOpacity"];
// create an animation for setting opacity to 0
frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
frameAnim.duration = 1.0f;
frameAnim.repeatCount = 0;
frameAnim.autoreverses = NO;
frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
frameAnim.toValue = [NSNumber numberWithFloat:0.0];
frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].endTime);
frameAnim.speed = 1.0f;
[Frame addAnimation:frameAnim forKey:@"animateOpacity"];
// add the frame layer to our parent layer
[parentLayer addSublayer:Frame];
gError = nil;
// if there's another point after this one
if( i < gFrames.size()-1)
{
// add our video file to the composition with a range of this point's end and the next point's start
[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime,
CMTimeMake(gFrames[i+1].startTime.value - gFrames[i].startTime.value, 600))
ofTrack:gVideoTrack
atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
}
// else just add our video file with a range of this point's end point and the video's duration
else
{
[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime, CMTimeSubtract(gVideoAsset.duration, gFrames[i].startTime)) ofTrack:gVideoTrack atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
}
if(gError)
{
char errorMsg[256];
sprintf(errorMsg, "Error inserting original video segment at: %d", i);
DebugLog(errorMsg);
GetError();
}
}
Now in that segment the Frame's opacity is set to 0.0f; however, when I set it to 1.0f all it does is place the last one of these frames on top of the video for the entire duration.
After that the video is exported using an AVAssetExportSession as shown below.
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// create a layer instruction for our newly created animation tool
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:gVideoTrack];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
[instruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [mutableComposition duration])];
[layerInstruction setOpacity:1.0f atTime:kCMTimeZero];
[layerInstruction setOpacity:0.0f atTime:mutableComposition.duration];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
// set the instructions on our videoComposition
mutableVideoComposition.instructions = [NSArray arrayWithObject:instruction];
// export final composition to a video file
// convert the videopath into a url for our AVAssetWriter to create a file at
NSString* vidPath = CreateNSString(outputVideoPath);
NSURL* vidURL = [NSURL fileURLWithPath:vidPath];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPreset1280x720];
exporter.outputFileType = AVFileTypeMPEG4;
exporter.outputURL = vidURL;
exporter.videoComposition = mutableVideoComposition;
exporter.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted)
{
DebugLog("!!!file created!!!");
_Close();
}
else if(exporter.status == AVAssetExportSessionStatusFailed)
{
DebugLog("failed damn");
DebugLog(cStringCopy([[[exporter error] localizedDescription] UTF8String]));
DebugLog(cStringCopy([[[exporter error] description] UTF8String]));
_Close();
}
else
{
DebugLog("NoIdea");
_Close();
}
});
}];
}
I get the feeling that the animation is not being started, but I don't know. Am I going about splicing image data into a video the right way?
Any assistance would be greatly appreciated.
Well I solved my issue in another way. The animation route was not working, so my solution was to compile all my insertable images into a temporary video file and use that video to insert the images into my final output video.
Starting with the first link I originally posted, ASSETWriterInput for making Video from UIImages on Iphone Issues, I created the following function to create my temporary video:
void CreateFrameImageVideo(NSString* path)
{
NSLog(#"Creating writer at path %#", path);
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
error:&error];
NSLog(#"Creating video codec settings");
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:gVideoTrack.estimatedDataRate/*128000*/], AVVideoAverageBitRateKey,
[NSNumber numberWithInt:gVideoTrack.nominalFrameRate],AVVideoMaxKeyFrameIntervalKey,
AVVideoProfileLevelH264MainAutoLevel, AVVideoProfileLevelKey,
nil];
NSLog(#"Creating video settings");
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
codecSettings,AVVideoCompressionPropertiesKey,
[NSNumber numberWithInt:1280], AVVideoWidthKey,
[NSNumber numberWithInt:720], AVVideoHeightKey,
nil];
NSLog(#"Creating writter input");
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
NSLog(#"Creating adaptor");
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
[videoWriter addInput:writerInput];
NSLog(#"Starting session");
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CMTime timeOffset = kCMTimeZero;//CMTimeMake(0, 600);
NSLog(#"Video Width %d, Height: %d, writing frame video to file", gWidth, gHeight);
CVPixelBufferRef buffer;
for(int i = 0; i< gAnalysisFrames.size(); i++)
{
while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
NSLog(#"Waiting inside a loop");
NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
[[NSRunLoop currentRunLoop] runUntilDate:maxDate];
}
//Write samples:
buffer = pixelBufferFromCGImage(gAnalysisFrames[i].frameImage, gWidth, gHeight);
[adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];
timeOffset = CMTimeAdd(timeOffset, gAnalysisFrames[i].duration);
}
while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
NSLog(#"Waiting outside a loop");
NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
[[NSRunLoop currentRunLoop] runUntilDate:maxDate];
}
buffer = pixelBufferFromCGImage(gAnalysisFrames[gAnalysisFrames.size()-1].frameImage, gWidth, gHeight);
[adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];
NSLog(#"Finishing session");
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:timeOffset];
BOOL successfulWrite = [videoWriter finishWriting];
// if we failed to write the video
if(!successfulWrite)
{
NSLog(#"Session failed with error: %#", [[videoWriter error] description]);
// delete the temporary file created
NSFileManager *fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:path]) {
NSError *error;
if ([fileManager removeItemAtPath:path error:&error] == NO) {
NSLog(#"removeItemAtPath %# error:%#", path, error);
}
}
}
else
{
NSLog(#"Session complete");
}
[writerInput release];
}
After the video is created, it is loaded as an AVAsset and its track is extracted; then the video is inserted by replacing the following line (from the first code block in the original post)
[mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];
with:
[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(timeOffset,gAnalysisFrames[i].duration)
ofTrack:gFramesTrack
atTime:CMTimeAdd(gAnalysisFrames[i].startTime, timeOffset) error:&gError];
where gFramesTrack is the AVAssetTrack created from the temporary frame video.
All the code relating to the CALayer and CABasicAnimation objects has been removed, as it just was not working.
Not the most elegant solution, I don't think, but one that at least works. I hope that someone finds this useful.
This code also works on iOS devices (tested using an iPad 3)
Side note: The DebugLog function from the first post is just a callback to a function that prints out log messages; those calls can be replaced with NSLog() calls if need be.
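For example, a one-line shim of that sort (purely illustrative; the real DebugLog is whatever the project defines) would be:
// Illustrative shim: forward the C-string debug messages to NSLog.
static void DebugLog(const char *message)
{
    NSLog(@"%s", message);
}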

OpenGL ES 2.0 to Video on iPad/iPhone

I am at my wits' end here despite the good information on StackOverflow...
I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). This is more exactly what I am attempting:
A) set up an AVAssetWriterInputPixelBufferAdaptor
create an AVAssetWriter that points to a video file
set up an AVAssetWriterInput with appropriate settings
set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file
B) write data to a video file using that AVAssetWriterInputPixelBufferAdaptor
render OpenGL code to the screen
get the OpenGL buffer via glReadPixels
create a CVPixelBufferRef from the OpenGL data
append that PixelBuffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method
However, I am having problems doings this. My strategy right now is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.
Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:
-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
Here is my AVAsset setup code (a lot of it was based on Rudy Aramayo's code, which does work on normal images, but is not set up for textures):
- (void) testVideoWriter {
//initialize global info
MOVIE_NAME = #"Documents/Movie.mov";
CGSize size = CGSizeMake(480, 320);
frameLength = CMTimeMake(1, 5);
currentTime = kCMTimeZero;
currentFrame = 0;
NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
NSError *error = nil;
unlink([betaCompressionDirectory UTF8String]);
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory] fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
//writerInput.expectsMediaDataInRealTime = NO;
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[adaptor retain];
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
VIDEO_WRITER_IS_READY = true;
}
Ok, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:
- (void) captureScreenVideo {
if (!writerInput.readyForMoreMediaData) {
return;
}
CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
NSInteger myDataLength = esize.width * esize.height * 4;
GLuint *buffer = (GLuint *) malloc(myDataLength);
glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferCreateWithBytes (NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);
/* DON'T FREE THIS BEFORE USING pixel_buffer! */
//free(buffer);
if(![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
NSLog(#"FAIL");
} else {
NSLog(#"Success:%d", currentFrame);
currentTime = CMTimeAdd(currentTime, frameLength);
}
free(buffer);
CVPixelBufferRelease(pixel_buffer);
currentFrame++;
if (currentFrame > MAX_FRAMES) {
VIDEO_WRITER_IS_READY = false;
[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];
[self moveVideoToSavedPhotos];
}
}
And finally, I move the Video to the camera roll:
- (void) moveVideoToSavedPhotos {
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
NSURL* fileURL = [NSURL fileURLWithPath:localVid];
[library writeVideoAtPathToSavedPhotosAlbum:fileURL
completionBlock:^(NSURL *assetURL, NSError *error) {
if (error) {
NSLog(#"%#: Error saving context: %#", [self class], [error localizedDescription]);
}
}];
[library release];
}
However, as I said, I am crashing in the call to appendPixelBuffer.
Sorry for sending so much code, but I really don't know what I am doing wrong. It seemed like it would be trivial to update a project which writes images to a video, but I am unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> Video that would be amazing... Thanks!
I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution to this. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of the manually created pixel buffers for each frame.
I first configure the movie to be recorded:
NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
NSLog(#"Error: %#", error);
}
NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;
// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[assetWriter addInput:assetWriterVideoInput];
then use this code to grab each rendered frame using glReadPixels():
CVPixelBufferRef pixel_buffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
return;
}
else
{
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}
// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);
if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
NSLog(#"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
// NSLog(#"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);
One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the basis provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Thus, the early bailout in the code above.
In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.
Looks like a few things to do here -
According to the docs, it looks like the recommended way to create a pixel buffer is to use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.
You can then fill in the buffer by getting the address using CVPixelBufferLockBaseAddress followed by CVPixelBufferGetBaseAddress and unlocking the memory using CVPixelBufferUnlockBaseAddress before passing it to the adaptor.
The pixel buffer can be passed to the input when writerInput.readyForMoreMediaData is YES. This means a "wait until ready". A usleep until it becomes YES works, but you can also use key-value observing.
The rest of the stuff is alright. With this much, the original code results in a playable video file.
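Along the same lines, instead of polling readyForMoreMediaData you can let the input pull frames with -requestMediaDataWhenReadyOnQueue:usingBlock:; a sketch, where nextPixelBuffer and nextPresentationTime are assumed helpers of your own that return NULL / the next timestamp when frames run out:
// Sketch: pull-driven writing; AVFoundation invokes the block whenever the
// input can accept more data.
dispatch_queue_t writeQueue = dispatch_queue_create("pixelbuffer.write.queue", DISPATCH_QUEUE_SERIAL);
[writerInput requestMediaDataWhenReadyOnQueue:writeQueue usingBlock:^{
    while (writerInput.readyForMoreMediaData) {
        CVPixelBufferRef nextBuffer = [self nextPixelBuffer]; // assumed helper
        if (nextBuffer == NULL) {
            [writerInput markAsFinished]; // no more frames
            break;
        }
        [adaptor appendPixelBuffer:nextBuffer withPresentationTime:[self nextPresentationTime]];
        CVPixelBufferRelease(nextBuffer);
    }
}];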
“In case anyone stumbles across this, I got this to work finally... and understand a bit more about it now than I did. I had an error in the above code where I was freeing the data buffer filled from glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer actual has data now! – Angus Forbes Jun 28 '11 at 5:58”
This is the real reason for your crash; I met this problem too.
Do not free the buffer even if you have created the CVPixelBufferRef.
Seems like improper memory management. The fact the error states that the message was sent to __NSCFDictionary instead of AVAssetWriterInputPixelBufferAdaptor is highly suspicious.
Why do you need to retain the adaptor manually? This looks hacky since CocoaTouch is fully ARC.
Here's a starter to nail down the memory issue.
from your error message -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
Looks like your pixelBufferAdaptor was released and is now pointing to a dictionary.
The only code I've ever gotten to work for this is at:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
// [_context presentRenderbuffer:GL_RENDERBUFFER];
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
// To capture the output to an OpenGL render buffer...
NSInteger myDataLength = _backingWidth * _backingHeight * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// To swap the pixel buffer to a CoreGraphics context (as a CGImage)
CGDataProviderRef provider;
CGColorSpaceRef colorSpaceRef;
CGImageRef imageRef;
CVPixelBufferRef pixelBuffer;
@try {
provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * _backingWidth;
colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
} @catch (NSException *exception) {
NSLog(@"Exception: %@", [exception reason]);
} @finally {
if (imageRef) {
// To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
// To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
}
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(imageRef);
}
}
});
.
.
.
The callback to free the data in the instance of the CGDataProvider class:
static void releaseDataCallback (void *info, const void *data, size_t size) {
free((void*)data);
}
The CVCGImageUtil class interface and implementation files, respectively:
@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;
@interface CVCGImageUtil : NSObject
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;
@end
#import "CVCGImageUtil.h"
@implementation CVCGImageUtil
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
// CVPixelBuffer to CoreImage
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
CGPoint origin = [image extent].origin;
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
// CoreImage to CGImage via CoreImage context
CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];
// CGImage to UIImage (OPTIONAL)
//UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
//return (CGImageRef)uiImage.CGImage;
return cgImage;
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
CGImageGetHeight(image));
NSDictionary *options =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES],
kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status =
CVPixelBufferCreate(
kCFAllocatorDefault, frameSize.width, frameSize.height,
kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(
pxdata, frameSize.width, frameSize.height,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,
(CGBitmapInfo)kCGBitmapByteOrder32Little |
kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(
NULL, pixelBuffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
pixelBuffer,
true,
NULL,
NULL,
videoInfo,
&timimgInfo,
&newSampleBuffer);
return newSampleBuffer;
}
@end
That answers part B of your question, to-the-letter. Part A follows in a separate answer...
I've never failed to read and write a video file on iPhone with this code. In your implementation, you simply need to substitute the calls in the processFrame method, found at the end of the implementation, with calls to whatever methods you pass pixel buffers to as parameters, and otherwise modify that method to return the pixel buffer generated as per the sample code above. That's basic, so you should be okay:
//
// ExportVideo.h
// ChromaFilterTest
//
// Created by James Alan Bush on 10/30/16.
// Copyright © 2016 James Alan Bush. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "GLKitView.h"
@interface ExportVideo : NSObject
{
AVURLAsset *_asset;
AVAssetReader *_reader;
AVAssetWriter *_writer;
NSString *_outputURL;
NSURL *_outURL;
AVAssetReaderTrackOutput *_readerAudioOutput;
AVAssetWriterInput *_writerAudioInput;
AVAssetReaderTrackOutput *_readerVideoOutput;
AVAssetWriterInput *_writerVideoInput;
CVPixelBufferRef _currentBuffer;
dispatch_queue_t _mainSerializationQueue;
dispatch_queue_t _rwAudioSerializationQueue;
dispatch_queue_t _rwVideoSerializationQueue;
dispatch_group_t _dispatchGroup;
BOOL _cancelled;
BOOL _audioFinished;
BOOL _videoFinished;
AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor;
}
@property (readwrite, retain) NSURL *url;
@property (readwrite, retain) GLKitView *renderer;
- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer;
- (void)startProcessing;
@end
//
// ExportVideo.m
// ChromaFilterTest
//
// Created by James Alan Bush on 10/30/16.
// Copyright © 2016 James Alan Bush. All rights reserved.
//
#import "ExportVideo.h"
#import "GLKitView.h"
@implementation ExportVideo
@synthesize url = _url;
- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer {
NSLog(#"ExportVideo");
if (!(self = [super init])) {
return nil;
}
self.url = url;
self.renderer = renderer;
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
_mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
_rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
_rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
return self;
}
- (void)startProcessing {
NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
_asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
NSLog(#"URL: %#", self.url);
_cancelled = NO;
[_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
dispatch_async(_mainSerializationQueue, ^{
if (_cancelled)
return;
BOOL success = YES;
NSError *localError = nil;
success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
if (success)
{
NSFileManager *fm = [NSFileManager defaultManager];
NSString *localOutputPath = [self.url path];
if ([fm fileExistsAtPath:localOutputPath]) {
//success = [fm removeItemAtPath:localOutputPath error:&localError];
success = YES;
}
}
if (success)
success = [self setupAssetReaderAndAssetWriter:&localError];
if (success)
success = [self startAssetReaderAndWriter:&localError];
if (!success)
[self readingAndWritingDidFinishSuccessfully:success withError:localError];
});
}];
}
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
// Create and initialize the asset reader.
_reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError];
BOOL success = (_reader != nil);
if (success)
{
// If the asset reader was successfully initialized, do the same for the asset writer.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
_outputURL = paths[0];
NSFileManager *manager = [NSFileManager defaultManager];
[manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil];
_outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"];
[manager removeItemAtPath:_outputURL error:nil];
_outURL = [NSURL fileURLWithPath:_outputURL];
_writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError];
success = (_writer != nil);
}
if (success)
{
// If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] > 0)
assetAudioTrack = [audioTracks objectAtIndex:0];
NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
if ([videoTracks count] > 0)
assetVideoTrack = [videoTracks objectAtIndex:0];
if (assetAudioTrack)
{
// If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
_readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
[_reader addOutput:_readerAudioOutput];
// Then, set the compression settings to 128kbps AAC and create the asset writer input.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
_writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
[_writer addInput:_writerAudioInput];
}
if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
_readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[_reader addOutput:_readerVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
NSArray *formatDescriptions = [assetVideoTrack formatDescriptions];
if ([formatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
NSDictionary *compressionSettings = nil;
// If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
if (formatDescription)
{
NSDictionary *cleanAperture = nil;
NSDictionary *pixelAspectRatio = nil;
CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
if (cleanApertureFromCMFormatDescription)
{
cleanAperture = @{
AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
};
}
CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
if (pixelAspectRatioFromCMFormatDescription)
{
pixelAspectRatio = @{
AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
};
}
// Add whichever settings we could grab from the format description to the compression settings dictionary.
if (cleanAperture || pixelAspectRatio)
{
NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
if (cleanAperture)
[mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
if (pixelAspectRatio)
[mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
compressionSettings = mutableCompressionSettings;
}
}
// Create the video settings dictionary for H.264.
// Use a mutable copy so the compression properties can be added below when available
// (mutating an immutable dictionary literal cast to NSMutableDictionary would crash).
NSMutableDictionary *videoSettings = [@{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
} mutableCopy];
// Put the compression settings into the video settings dictionary if we were able to grab them.
if (compressionSettings)
[videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
// Create the asset writer input and add it to the asset writer.
_writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
NSDictionary *pixelBufferAdaptorSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
(id)kCVPixelBufferWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
(id)kCVPixelBufferHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
};
_pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings];
[_writer addInput:_writerVideoInput];
}
}
return success;
}
- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [_reader startReading];
if (!success) {
*outError = [_reader error];
NSLog(#"Reader error");
}
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [_writer startWriting];
if (!success) {
*outError = [_writer error];
NSLog(#"Writer error");
}
}
if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
_dispatchGroup = dispatch_group_create();
[_writer startSessionAtSourceTime:kCMTimeZero];
_audioFinished = NO;
_videoFinished = NO;
if (_writerAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(_dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (_audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = _audioFinished;
_audioFinished = YES;
if (oldFinished == NO)
{
[_writerAudioInput markAsFinished];
}
dispatch_group_leave(_dispatchGroup);
}
}];
}
if (_writerVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(_dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (_videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer];
// Guard against a NULL sample buffer (end of stream) before asking for its image buffer.
CVImageBufferRef pixelBuffer = (sampleBuffer != NULL) ? CMSampleBufferGetImageBuffer(sampleBuffer) : NULL;
_currentBuffer = pixelBuffer;
[self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES];
if (_currentBuffer != NULL)
{
//BOOL success = [_writerVideoInput appendSampleBuffer:sampleBuffer];
BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = _videoFinished;
_videoFinished = YES;
if (oldFinished == NO)
{
[_writerVideoInput markAsFinished];
}
dispatch_group_leave(_dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (_cancelled)
{
// If so, cancel the reader and writer.
[_reader cancelReading];
[_writer cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([_reader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [_reader error];
NSLog(#"_reader finalError: %#", finalError);
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
[_writer finishWritingWithCompletionHandler:^{
// Report the reader's error if it failed; otherwise report any writer error.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:(finalError ? finalError : [_writer error])];
}];
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
});
}
// Return success here to indicate whether the asset reader and writer were started successfully.
return success;
}
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[_reader cancelReading];
[_writer cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to failure.
});
}
else
{
// Reencoding was successful, reset booleans.
_cancelled = NO;
_videoFinished = NO;
_audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil);
});
}
NSLog(#"readingAndWritingDidFinishSuccessfully success = %# : Error = %#", (success == 0) ? #"NO" : #"YES", error);
}
- (void)processFrame {
if (_currentBuffer) {
if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly))
{
[self.renderer processPixelBuffer:_currentBuffer];
CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
} else {
NSLog(#"processFrame END");
return;
}
}
}
@end
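As mentioned above, here is a minimal sketch of how processFrame might be adapted so that each decoded frame is pushed through your own processing method and the result is handed back for writing. The -filterPixelBuffer: selector is a hypothetical stand-in for whatever method you actually pass pixel buffers to, and it is assumed to return a new, retained CVPixelBufferRef:
// Hypothetical adaptation of processFrame (illustrative only).
- (void)processFrame {
    if (_currentBuffer == NULL)
        return;
    if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly)) {
        // -filterPixelBuffer: is a placeholder for your own pixel-buffer processing method.
        CVPixelBufferRef processedBuffer = [self.renderer filterPixelBuffer:_currentBuffer];
        CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
        if (processedBuffer != NULL) {
            // Hand the processed frame back to the writer loop; release it there once the
            // pixel buffer adaptor has appended it.
            _currentBuffer = processedBuffer;
        }
    }
}
And a hypothetical call site (the file name and the view property are placeholders):
NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"input" withExtension:@"mov"]; // placeholder asset
ExportVideo *exporter = [[ExportVideo alloc] initWithURL:movieURL usingRenderer:self.glkitView]; // self.glkitView is assumed
[exporter startProcessing];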

Resources