Recording and merging video for screen capture - iOS

I am currently grabbing the frames of a video using AVPlayerItemVideoOutput. I use a CADisplayLink to grab frames from the output, then pass each pixel buffer off to the asset writer. I do it like this:
- (void)displayLinkCallback:(CADisplayLink *)sender
{
CMTime outputItemTime = kCMTimeInvalid;
// Calculate the nextVsync time which is when the screen will be refreshed next.
CFTimeInterval nextVSync = (sender.timestamp + sender.duration);
outputItemTime = [self.videoOutput itemTimeForHostTime:nextVSync];
if (self.playerOne.playerAsset.playable) {
if ([[self videoOutput] hasNewPixelBufferForItemTime:outputItemTime] && self.newSampleReady) {
dispatch_async(self.captureSessionQueue, ^{
CVPixelBufferRelease(self.lastPixelBuffer);
self.lastPixelBuffer = [self.videoOutput copyPixelBufferForItemTime:outputItemTime itemTimeForDisplay:NULL];
CMTime fpsTime = CMTimeMake(1, 24);
self.currentVideoTime = CMTimeAdd(self.currentVideoTime, fpsTime);
[_assetWriterInputPixelBufferAdaptor appendPixelBuffer:self.lastPixelBuffer withPresentationTime:self.currentVideoTime];
self.newSampleReady = NO;
});
}
}
}
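For reference, the callback above assumes an AVPlayerItemVideoOutput already attached to the current player item and a CADisplayLink driving the callback. A minimal setup sketch; the property names and where the player item comes from are assumptions on my part:
// Setup sketch: attach a video output to the item being played and register the display link.
NSDictionary *attributes = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
self.videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
[self.playerOne.player.currentItem addOutput:self.videoOutput]; // "player.currentItem" is assumed here
self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                               selector:@selector(displayLinkCallback:)];
[self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];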
This allows me to switch videos in real time and keep making a screen recording. But I also want to switch to a split view with two players, grab the frames from each player, and merge them into a single video. AVComposition would work, except that you have to know in advance which tracks and times you want to merge. My screen capture program lets the user switch freely between single and split view and back. Is there a way to get the pixel buffers and use those to merge the recordings into a single video?
I have tried the following: take the first pixel buffer, create two images from it, combine them, and create a new pixel buffer that I pass back to the asset writer. But I just get a black-screen video. Here's my code for this:
-(CVPixelBufferRef)pixelBufferToCGImageRef:(CVPixelBufferRef)pixelBuffer withSecond:(CVPixelBufferRef)pixelBuffer2
{
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
UIImage *im1 = [UIImage imageWithCIImage:ciImage];
UIImage *im2 = [UIImage imageWithCIImage:ciImage];
CGSize newSize = CGSizeMake(640, 480);
UIGraphicsBeginImageContext( newSize );
[im1 drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height)];
[im2 drawInRect:CGRectMake(newSize.width/2,0,newSize.width/2,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
CIImage *newCIImage = [newImage CIImage];
UIGraphicsEndImageContext();
CVPixelBufferRef pbuff = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
640,
480,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef)(options),
&pbuff);
if (status == kCVReturnSuccess) {
[temporaryContext render:newCIImage
toCVPixelBuffer:pbuff
bounds:CGRectMake(0, 0, 640, 480)
colorSpace:nil];
} else {
NSLog(#"Failed create pbuff");
}
return pbuff;
}
Any suggestions?

The black screen I was getting was because ciImage becomes nil right after I get it on the simulator. If I run the code on a device, it works.
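For the split-view case itself, one direction worth trying (a sketch only, not tested code) is to skip the UIImage/UIGraphics round trip entirely: wrap both pixel buffers in CIImages, squeeze each to half width, translate one, and render the composite straight into a buffer drawn from the adaptor's pixel buffer pool. The method name and the 640x480 dimensions below are illustrative, and it assumes the writer session has already started so that pixelBufferPool is non-nil:
-(CVPixelBufferRef)composedPixelBufferFrom:(CVPixelBufferRef)left and:(CVPixelBufferRef)right
{
    // Reuse a single CIContext instead of creating one per frame.
    static CIContext *ciContext = nil;
    if (ciContext == nil) {
        ciContext = [CIContext contextWithOptions:nil];
    }
    CIImage *leftImage  = [CIImage imageWithCVPixelBuffer:left];
    CIImage *rightImage = [CIImage imageWithCVPixelBuffer:right];
    // Squeeze each 640x480 frame to half width and place them side by side.
    CGAffineTransform squeeze = CGAffineTransformMakeScale(0.5, 1.0);
    CIImage *leftHalf  = [leftImage imageByApplyingTransform:squeeze];
    CIImage *rightHalf = [[rightImage imageByApplyingTransform:squeeze]
                           imageByApplyingTransform:CGAffineTransformMakeTranslation(320, 0)];
    CIImage *composite = [rightHalf imageByCompositingOverImage:leftHalf];
    // Pull an output buffer from the adaptor's pool so the pixel format
    // matches what the asset writer input expects.
    CVPixelBufferRef output = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                       _assetWriterInputPixelBufferAdaptor.pixelBufferPool,
                                       &output);
    if (output != NULL) {
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        [ciContext render:composite
          toCVPixelBuffer:output
                   bounds:CGRectMake(0, 0, 640, 480)
               colorSpace:rgb];
        CGColorSpaceRelease(rgb);
    }
    return output; // caller appends it and then releases it
}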

Related

CGContextDrawImage huge memory peak

I'm developing a movie maker application which applies effects to imported videos.
I'm using AVAssetWriter to write the output in my application.
Everything works very well, but I have a big memory problem.
My app takes over 500 MB of RAM during the buffering process.
The algorithm for making a filtered video goes like this:
1- Import the video.
2- Extract all the frames of the video as CMSampleBuffer objects.
3- Convert each CMSampleBuffer object to a UIImage.
4- Apply the filter to the UIImage.
5- Convert the UIImage back to a new CMSampleBuffer object.
6- Append the new buffer to a writer output.
7- Finally, save the new movie to the Photo Gallery.
The problem is in step 5: I have a function which converts a UIImage to a CVPixelBuffer object and returns it.
Then I convert the CVPixelBuffer object to a CMSampleBuffer.
The function increases the memory a lot and the application crashes at the end.
This is my code:
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize)size
{
double height = CGImageGetHeight(image);
double width = CGImageGetWidth(image);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata,size.width ,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
CGFloat Y ;
if (height == size.height)
Y = 0;
else
Y = (size.height /2) - (height/2) ;
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, Y,width,height), image);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
CGContextDrawImage increases the memory by 2-5 MB per frame conversion.
I tried the following solutions:
1- Releasing pxbuffer using CFRelease.
2- Using CGImageRelease to release the image ref.
3- Surrounding the code with an @autoreleasepool block.
4- Using CGContextRelease.
5- Using UIGraphicsEndImageContext.
6- Using the Analyze tool in Xcode and fixing all the points it flagged.
Here is the full code for Video filtering:
- (void)assetFilteringMethod:(FilterType)filterType AndAssetURL:(NSURL *)assetURL{
CMSampleBufferRef sbuff ;
[areader addOutput:rout];
[areader startReading];
UIImage* bufferedImage;
while ([areader status] != AVAssetReaderStatusCompleted) {
sbuff = [rout copyNextSampleBuffer];
if (sbuff == nil)
[areader cancelReading];
else{
if (writerInput.readyForMoreMediaData) {
@autoreleasepool {
bufferedImage = [self imageFromSampleBuffer:sbuff];
bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage: bufferedImage];
CVPixelBufferRef buffer = NULL;
buffer = [self pixelBufferFromCGImage:[bufferedImage CGImage] andSize:CGSizeMake(320,240)];
[adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
CFRelease(buffer);
CFRelease(sbuff);
}
}
}
}
//Finished buffering
[videoWriter finishWritingWithCompletionHandler:^{
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted){
dispatch_async(dispatch_get_main_queue(), ^{
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library
videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]]) {
[library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]
completionBlock:^(NSURL *assetURL, NSError *error){
}];
}
});
}
else
NSLog(#"Video writing failed: %#", videoWriter.error);
}];
}
I spent around 3 to 4 days trying to solve this problem...
Any help would be appreciated.
You have to release the image using this line:
CGImageRelease(image.CGImage);
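For illustration, here is where that release might sit inside the @autoreleasepool block of the loop above. The explicit retain/release pair around the CGImage is my own addition so the ownership stays balanced; the rest mirrors the original loop body:
@autoreleasepool {
    bufferedImage = [self imageFromSampleBuffer:sbuff];
    bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage:bufferedImage];
    CGImageRef cgFrame = CGImageRetain(bufferedImage.CGImage);   // take explicit ownership of the frame's CGImage
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:cgFrame andSize:CGSizeMake(320, 240)];
    CGImageRelease(cgFrame);                                     // release it once the pixel buffer has been built
    if (buffer != NULL) {
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
        CFRelease(buffer);
    }
    CFRelease(sbuff);
}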

iPhone 6+ (Wrong scale?)

I have an iOS app that uses the camera to take pictures.
It uses a path (CGPath) drawn on the screen (for example, a rectangle) and takes a photo within that path. The app supports only portrait orientation.
For that to happen I use: AVCaptureSession, AVCaptureStillImageOutput, AVCaptureDevice, AVCaptureVideoPreviewLayer
(I guess all familiar to developers making this kind of app).
My code uses UIScreen.mainScreen().bounds and UIScreen.mainScreen().scale to adapt to various devices and do its job.
It all goes fine (on iPhone 5, iPhone 6), until I try the app on an iPhone 6+ (running iOS 9.3.1) and see that something is wrong.
The picture taken is not laid out in the right place anymore.
I had someone try on an iPhone 6+, and by displaying an appropriate message I was able to confirm that UIScreen.mainScreen().scale is what it should be: 3.0.
I have put the proper-size launch images (640 × 960, 640 × 1136, 750 × 1334, 1242 × 2208) in the project.
So what could be the problem?
I use the code below in an app; it works on the 6+.
The code starts an AVCaptureSession, pulling video input from the device's camera.
As it does so, it continuously updates the runImage var from the captureOutput delegate function.
When the user wants to take a picture, the takePhoto method is called. This method creates a temporary UIImageView and feeds the runImage into it. This temp UIImageView is then used to draw another variable called currentImage at the scale of the device.
The currentImage, in my case, is square, matching the previewHolder frame, but I suppose you can make it anything you want.
Declare these:
AVCaptureDevice * device;
AVCaptureDeviceInput * input;
AVCaptureVideoDataOutput * output;
AVCaptureSession * session;
AVCaptureVideoPreviewLayer * preview;
AVCaptureConnection * connection;
UIImage * runImage;
Load scanner:
-(void)loadScanner
{
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
output = [AVCaptureVideoDataOutput new];
session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
[session addInput:input];
[session addOutput:output];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
preview.frame = previewHolder.bounds;
connection = preview.connection;
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[previewHolder.layer insertSublayer:preview atIndex:0];
}
Ongoing image capture, updates runImage var.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
runImage = [self imageForBuffer:sampleBuffer];
}
Related to above.
-(UIImage *)imageForBuffer:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
UIImage * rotated = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];
return rotated;
}
On take photo:
-(void)takePhoto
{
UIImageView * temp = [UIImageView new];
temp.frame = previewHolder.frame;
temp.image = runImage;
temp.contentMode = UIViewContentModeScaleAspectFill;
temp.clipsToBounds = true;
[self.view addSubview:temp];
UIGraphicsBeginImageContextWithOptions(temp.bounds.size, NO, [UIScreen mainScreen].scale);
[temp drawViewHierarchyInRect:temp.bounds afterScreenUpdates:YES];
currentImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[temp removeFromSuperview];
//further code...
}
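One note on usage: loadScanner configures the session but the code above never shows it being started, so I am assuming a call like the following somewhere, for example in viewDidLoad (the startRunning line is my addition):
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self loadScanner];
    [session startRunning]; // the session must be running for captureOutput: to deliver frames
}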
In case someone else has the same issue, here is what made things go wrong for me:
I was naming a file xyz@2x.png.
When UIScreen.mainScreen().scale == 3.0 (the case of an iPhone 6+),
it has to be named xyz@3x.png.
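If it helps, here is a quick way to check which suffix UIKit will look for on a given device (the xyz name is just illustrative):
CGFloat scale = [UIScreen mainScreen].scale;   // 3.0 on an iPhone 6+
NSString *suffix = scale >= 3.0 ? @"@3x" : (scale >= 2.0 ? @"@2x" : @"");
NSLog(@"Expecting an image named xyz%@.png for scale %.1f", suffix, scale);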

How to create a real-time image effect processing application on iOS

I use AVCaptureSession to receive images from the iPhone camera. It returns each image in a delegate function. In this function, I create an image and spawn another thread to process it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
// static bool isFirstTime = true;
// if (isFirstTime == false) {
// return;
// }
// isFirstTime = false;
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Lock the image buffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
//Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//Create a CGImageRef from the CVImageBufferRef
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst/*kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast*/);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
// release some components
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage* uiimage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationDown];
CGImageRelease(newImage);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
//[self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiimage waitUntilDone:YES];
if(processImageThread == nil || (processImageThread != nil && processImageThread.isExecuting == false)){
[processImageThread release];
processImageThread = [[NSThread alloc] initWithTarget:self selector:@selector(processImage:) object:uiimage];
[processImageThread start];
}
[pool drain];
}
I process the image on another thread, using CIFilter:
- (void) processImage:(UIImage*)image{
NSLog(#"Begin process");
CIImage* ciimage = [CIImage imageWithCGImage:image.CGImage];
CIFilter* filter = [CIFilter filterWithName:#"CIColorMonochrome"];// keysAndValues:kCIInputImageKey, ciimage, "inputRadius", [NSNumber numberWithFloat:10.0f], nil];
[filter setDefaults];
[filter setValue:ciimage forKey:#"inputImage"];
[filter setValue:[CIColor colorWithRed:0.5 green:0.5 blue:1.0] forKey:#"inputColor"];
CIImage* ciResult = [filter outputImage];
CIContext* context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciResult fromRect:[ciResult extent]];
UIImage* uiResult = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
CFRelease(cgImage);
[self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiResult waitUntilDone:YES];
NSLog(@"End process");
}
And I set the result image on a layer:
- (void) setImageForImageView:(UIImage*)image{
self.view.layer.contents = image.CGImage;
}
But it is very laggy. I found an open source project that creates a real-time image effect application very smoothly (it also uses AVCaptureSession). So what is the difference between my code and theirs? How can I create a real-time image effect processing application?
This is the link of open source: https://github.com/gobackspaces/DLCImagePickerController#readme
The open source sample that you referenced in your question uses the outstanding open source library GPUImage by Brad Larson for real-time photo and video processing. This library uses GPU-based filters (OpenGL ES 2.0) for image processing, which are considerably faster than the CPU-based image filters you are using via the Core Image framework.
GPUImage
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
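As a rough illustration of the GPUImage route (the class and method names come from the GPUImage project; treat this as a sketch rather than a drop-in replacement for your code):
#import "GPUImage.h"

// Live camera -> GPU filter -> on-screen view, with no per-frame UIImage round trips.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];   // any GPUImage filter works here
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[videoCamera addTarget:filter];      // camera feeds the filter
[filter addTarget:filteredView];     // filter feeds the on-screen view
[videoCamera startCameraCapture];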

cv::Mat doesn't match UIImageView width?

I am using AVFoundation to capture video frames, process them with OpenCV, and display the result in a UIImageView on the new iPad. The OpenCV processing does the following ("inImg" is the video frame):
cv::Mat testROI = inImg.rowRange(0,100);
testROI = testROI.colRange(0,10);
testROI.setTo(255); // this is a BGRA frame.
However, instead of getting a vertical white bar (100 rows x 10 cols) in the top left corner of the frame, I got 100 stair-like horizontal lines running from the top right corner to the bottom left, each 10 pixels long.
After some investigation, I realized that the width of the displayed frame seems to be 8 pixels wider than the cv::Mat (i.e. the 9th pixel of the 2nd row sits right below the 1st pixel of the 1st row).
The video frame itself is shown correctly (no displacement between rows).
The problem appears when the AVCaptureSession.sessionPreset is AVCaptureSessionPresetMedium (frame rows=480, cols=360) but does not appear when it is AVCaptureSessionPresetHigh (frame rows=640, cols=480).
There are 360 cols shown in full screen. (I tried traversing and modifying the cv::Mat pixel by pixel. Pixels 1-360 were shown correctly, 361-368 disappeared, and 369 was shown right under pixel 1.)
I tried combinations of imageview.contentMode (UIViewContentModeScaleAspectFill and UIViewContentModeScaleAspectFit) and imageview.clipsToBounds (YES/NO), but no luck.
What could be the problem?
Thank you very much.
I use the following code to create the AVCaptureSession:
NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
if ([devices count] == 0) {
NSLog(#"No video capture devices found");
return NO;
}
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
_captureDevice = device;
}
}
NSError* error_exp = nil;
if ([_captureDevice lockForConfiguration:&error_exp]) {
[_captureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
[_captureDevice unlockForConfiguration];
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
_captureSession.sessionPreset = AVCaptureSessionPresetMedium;
// Create device input
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_captureDevice error:&error];
// Create and configure device output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
[_videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
_videoOutput.alwaysDiscardsLateVideoFrames = YES;
OSType format = kCVPixelFormatType_32BGRA;
_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Connect up inputs and outputs
if ([_captureSession canAddInput:input]) {
[_captureSession addInput:input];
}
if ([_captureSession canAddOutput:_videoOutput]) {
[_captureSession addOutput:_videoOutput];
}
AVCaptureConnection * captureConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (captureConnection.isVideoMinFrameDurationSupported)
captureConnection.videoMinFrameDuration = CMTimeMake(1, 60);
if (captureConnection.isVideoMaxFrameDurationSupported)
captureConnection.videoMaxFrameDuration = CMTimeMake(1, 60);
if (captureConnection.supportsVideoMirroring)
[captureConnection setVideoMirrored:NO];
[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
When a frame is received, the following is called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureConnection *currentConnection = [[_videoOutput connections] objectAtIndex:0];
AVCaptureVideoOrientation videoOrientation = [currentConnection videoOrientation];
CGImageRef quartzImage;
// For color mode a 4-channel cv::Mat is created from the BGRA data
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);
if ([self doFrame]) { // a flag to switch processing ON/OFF
[self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation]; // "processFrame" is the opencv function shown above
}
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
quartzImage = [self.context createCGImage:ciImage fromRect:ciImage.extent];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationUp];
CGImageRelease(quartzImage);
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
}
}
I assume you're using the constructor Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP), that AUTO_STEP is 0, and that it therefore assumes the row stride is width*bytesPerPixel.
This is generally wrong: it's very common to align rows to some larger boundary. In this case, 360 is not a multiple of 16 but 368 is, which strongly suggests that the buffer is aligned to 16-pixel boundaries (perhaps to assist algorithms that process in 16×16 blocks?).
Try
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));
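If later code needs the pixels after the base address is unlocked, or needs genuinely packed rows, you could also clone that wrapping Mat; the clone copies the data into a contiguous allocation at the cost of one copy per frame:
cv::Mat padded(videoRect.size.height, videoRect.size.width, CV_8UC4,
               baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));
cv::Mat packed = padded.clone(); // contiguous copy, safe to use after CVPixelBufferUnlockBaseAddress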

iOS - video frame processing optimization

In my project, I need to copy a chunk of each frame of a video onto one single resulting image.
Capturing video frames is not a big deal. It would be something like :
// duration is the movie length in seconds.
// frameDuration is 1/fps (for 24 fps, frameDuration = 1/24).
// player is a MPMoviePlayerController
for (NSTimeInterval i=0; i < duration; i += frameDuration) {
UIImage * image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];
CGRect destinationRect = [self getDestinationRect:i];
[self drawImage:image inRect:destinationRect fromRect:originRect];
// UI feedback
[self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/duration] waitUntilDone:NO];
}
The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried code which:
creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract the chunk of the image;
makes a CGContextDrawImage call on the image context to draw the chunk.
But when the video reaches 12-14 s, my iPhone 4S announces its third memory warning and crashes. I've profiled the app with the Leaks tool, and it found no leak at all...
I'm not very strong in Quartz. Is there a better-optimized way to achieve this?
Finally I kept the Quartz part of my code and changed the way I retrieved the images.
Now I use AVFoundation, which is a far faster solution.
// Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];
// Retrieving the video properties
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
frameDuration = CMTimeGetSeconds(composition.frameDuration);
CGSize renderSize = composition.renderSize;
CGFloat totalFrames = round(duration/frameDuration);
// Selecting each frame we want to extract: all of them.
NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)];
for (int i=0; i<totalFrames; i++) {
NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)];
[times addObject:time];
}
__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
if (result == AVAssetImageGeneratorSucceeded) {
int x = round(CMTimeGetSeconds(requestedTime)/frameDuration);
CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
[self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
}
else
NSLog(#"Ouch: %#", error.description);
i++;
[self performSelectorOnMainThread:#selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO];
if(i == totalFrames) {
[self performSelectorOnMainThread:#selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
}
};
// Launching the process...
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.maximumSize = renderSize;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
Even with very long videos it takes time, but it never crashes!
In addition to Martin's answer, I'd suggest shrinking the size of the images obtained by that call; that is, setting generator.maximumSize = CGSizeMake(width, height). Make the images as small as possible so they don't take up too much memory.
