Memory management in dispatch - iOS

I am trying to make thumbnails of all the views in my iPad app in the background, using the following code:
NSString *path = [self.page previewPathForOrientation:currentOrientation];
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
@autoreleasepool {
UIGraphicsBeginImageContextWithOptions(self.previewView.bounds.size, NO, 0.0);
[self.previewView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.previewView = nil;
float scale = [UIScreen mainScreen].scale;
CGRect previewRect = currentOrientation == Landscape ? [[OrientationLandscape singleton] frameForPreviewImage] : [[OrientationPortrait singleton] frameForPreviewImage];
CGSize previewSize = CGSizeMake(previewRect.size.width * scale, previewRect.size.height * scale);
UIImage *scaledImage = [image scaleImageToSize:previewSize];
CGImageDestinationRef imageDestination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], (__bridge CFStringRef)@"public.png", 1, NULL);
CGImageDestinationAddImage(imageDestination, [scaledImage CGImage], NULL);
CGImageDestinationFinalize(imageDestination);
CFRelease(imageDestination);
NSFileManager *fileMngr = [[NSFileManager alloc] init];
if(![fileMngr fileExistsAtPath:path])
{
ZAssert(0, #"could not save preview file");
}
dispatch_async(dispatch_get_main_queue(), ^{
rendered++;
//DLog(#"rendered %d items", rendered);
[GetController addSkipBackupAttributeToItemAtPath:path];
[self.page setPreviewRenderedForOrientation:currentOrientation];
contentsCount = 0;
currentContentIndex = 0;
//[self prepareOtherOrientation];
if(self.journal == nil && (![self.page previewRenderedForOrientation:Landscape] || ![self.page previewRenderedForOrientation:Portrait])){
[self appendPage:self.page];
}
DLog(#"rendered page %# in orientation %d", self.page, currentOrientation);
self.page = nil;
[self retry];
});
}
});
The retry method uses an NSTimer to start the same routine again after a short delay, with a different page. Watching the Allocations tool, the heap just keeps growing. After a while I get memory warnings and, shortly after that, the app crashes.
Everything works fine when I remove all the dispatch calls, but of course that's not what I want. Also, when I increase the delay in the retry method to, say, 5 seconds, the problem disappears too, so it seems memory isn't released when things are processed in quick succession.
I have made absolutely sure that this method isn't running more than once at a time... any ideas what's going on here?
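For context, the retry method is assumed to look roughly like this (it isn't shown above; renderNextPreview and the 0.2 s delay are placeholders for the actual selector and interval):
- (void)retry {
    // Schedule one more render pass after a short delay, with the next page.
    [NSTimer scheduledTimerWithTimeInterval:0.2
                                     target:self
                                   selector:@selector(renderNextPreview)
                                   userInfo:nil
                                    repeats:NO];
}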

Related

dispatch_get_main_queue() not working as expected

I am a newbie in iOS development and have no knowledge of dispatch_get_main_queue(). I want to get the image sizes from my server's image URLs.
First I parse my JSON data and get the image sizes, like this:
[self.feedArray addObjectsFromArray:[pNotification.userInfo valueForKey:@"items"]];
[self fillHeightArray];
Here I put the parsed data into self.feedArray, and after that I compute the heights like this:
-(void)fillHeightArray
{
NSMutableArray *requestArray=[[NSMutableArray alloc]init];
NSMutableArray *dataArray=[[NSMutableArray alloc]init];
for (int i=0; i<[self.feedArray count];i++)
{
NSString *urlString = [[self.feedArray objectAtIndex:i] valueForKey:@"photo"];
NSURL *imageFileURL = [NSURL URLWithString:urlString];
NSURLRequest *urlRequest = [NSURLRequest requestWithURL:imageFileURL];
[requestArray addObject:urlRequest];
}
dispatch_queue_t callerQueue = dispatch_get_main_queue();
dispatch_queue_t downloadQueue = dispatch_queue_create("Lots of requests", NULL);
dispatch_async(downloadQueue, ^{
for (NSURLRequest *request in requestArray) {
[dataArray addObject:[NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]];
}
dispatch_async(callerQueue, ^{
for (int i=0; i<[dataArray count]; i++)
{
UIImage *imagemain=[UIImage imageWithData:[dataArray objectAtIndex:i]];
UIImage *compimage =[self resizeImage:imagemain resizeSize:CGSizeMake(screenWidth/2-16,180)];
CGSize size = CGSizeMake(screenWidth/2-16,compimage.size.height);
[self.cellHeights addObject:[NSValue valueWithCGSize:size]];
}
[GlobalClass StopSpinner:self.view];
self.cltItem.hidden=FALSE;
[self.cltItem reloadData];
[self.cltItem.collectionViewLayout invalidateLayout];
[[NSUserDefaults standardUserDefaults]setValue:@"1" forKey:Loading];
});
});
}
And I resize my image like this:
-(UIImage *)resizeImage:(UIImage *)orginalImage resizeSize:(CGSize)size
{
float oldWidth = orginalImage.size.width;
float scaleFactor = size.width / oldWidth;
float newHeight = orginalImage.size.height * scaleFactor;
float newWidth = oldWidth * scaleFactor;
UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
[orginalImage drawInRect:CGRectMake(0,0,newWidth,newHeight)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
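As a side note (not part of the original post), UIGraphicsBeginImageContext always renders at a scale of 1.0; if Retina-quality output is wanted, the WithOptions variant with a scale of 0.0 picks up the screen scale:
// 0.0 as the scale means "use the device's main screen scale" instead of forcing 1.0.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(newWidth, newHeight), NO, 0.0);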
From this code I get a correct result the first time, but when I load more data and this code runs a second time, I get invalid image sizes.
I don't understand what the issue is, but I think my
dispatch_queue_t callerQueue = dispatch_get_main_queue();
dispatch_queue_t downloadQueue = dispatch_queue_create("Lots of requests", NULL);
have an issue when I load more data.
Please help me with this.
You are always adding objects in this line:
[self.cellHeights addObject:[NSValue valueWithCGSize:size]];
When you run the code a second time, the array gets bigger, and the old values are still present at its beginning. This is probably what gives you bad results on the second run.
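A minimal fix along those lines, assuming the heights should be recomputed from scratch for the whole feed each time, is to clear the array before refilling it:
// Drop the heights from the previous run so stale values don't sit at the
// beginning of the array when fillHeightArray runs again.
[self.cellHeights removeAllObjects];
[self fillHeightArray];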
EDIT:
It might work slower, because you've made some retain cycles / have memory leaks. In this scenario, it will work ok the first time, and slower for each extra run. I do not really see anything wrong with your code, besides the self.cellHeights table growing. Check the rest of the procedure for elements that are getting bigger every time, and ensure that the objects that are not going to be used anymore are getting released.
Also, try Build & Analyze [ALT + CMD + B]; it might point you to memory leaks or other issues.
The profiling tools are also very effective at locating leaks; you can access them with [CMD + I].
Another thing you can try is using the main queue directly, like:
dispatch_async(dispatch_get_main_queue(), ^(void) {
//do sth
});
That way you avoid keeping an extra variable around, and you only use the main queue once in the whole snippet anyway.
Try this and let me know whether it helps.

Multiple web services making the UI unresponsive

The problem is that I am calling multiple web services on my home page, and they return images and text from the server. During this process the UI becomes completely unresponsive for 1-2 minutes, which looks very bad since I can't do anything. I heard about dispatch and tried to implement it, but I don't get any results. Maybe I am doing something wrong.
What I want now is to run this process in the background, so that the user can interact with the UI while the data is being fetched from the server. Here is my code; just tell me where to use dispatch.
-(void)WebserviceHomeSlider{
if([AppDelegate appDelegate].isCheckConnection){
//Internet connection not available. Please try again.
UIAlertView *alertView=[[UIAlertView alloc] initWithTitle:@"Internet error" message:@"Internet connection not available. Please try again." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
[alertView show];
return;
}else {
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
manager.responseSerializer.acceptableContentTypes = [manager.responseSerializer.acceptableContentTypes setByAddingObject:#"text/html"];
[manager GET:ServiceUrl@"fpMainBanner" parameters:nil success:^(AFHTTPRequestOperation *operation,id responseObject)
{
//NSLog(#"JSON: %#", responseObject);
arrSlider = [responseObject objectWithJSONSafeObjects];
[_slideshow setTransitionType:KASlideShowTransitionSlide];
_slideshow.gestureRecognizers = nil;
[_slideshow addGesture:KASlideShowGestureSwipe];
// [_slideshow addImagesFromResources:[self getImages]]; // Add
// [_slideshow addTextFromResources:[self getText]];
// [slideShow subtextWithImages:[self getsubText]];
[_slideshow addImagesFromResources:[self getImages] stringArray:[self getText] stringsubArray:[self getsubText]];
}
failure:^(AFHTTPRequestOperation *operation, NSError *error) {
NSLog(#"Error: %#", error);
}];
}
}
Just tell me where to use dispatch, or edit my code using dispatch if possible. I have gone through some examples but my understanding is still not clear. Which dispatch priority is best (DEFAULT or BACKGROUND)? I will be very thankful to you.
This is the code you are asking about. Just tell me where to edit it using dispatch:
-(NSArray *)getText{
NSMutableArray *textArr = [[NSMutableArray alloc] init];
for(int i=0; i<[arrSlider count];i++)
{
texxt=[[arrSlider objectAtIndex:i ]valueForKey:@"title" ];
[textArr addObject:[texxt uppercaseString]];
}
return textArr;
}
-(NSArray *)getsubText{
NSMutableArray *subtext = [[NSMutableArray alloc] init];
for(int i=0; i<[arrSlider count];i++)
{
subbtext=[[arrSlider objectAtIndex:i ]valueForKey:@"tagline_value" ];
if(i==8)
{
subbtext=#"MAKE YOURSELF STAND OUT GET YOUR FREE CARDS!";
}
NSLog(#"subtext is,,.,.,,.,%#.%#",#"k",subbtext);
[subtext addObject:[subbtext uppercaseString]];
}
return subtext;
}
-(NSArray *)getImages
{
NSMutableArray *mArr = [[NSMutableArray alloc] init];
for(int i=0; i<[arrSlider count];i++)
{
pathh=[[arrSlider objectAtIndex:i ]valueForKey:@"filepath" ];
NSString *newString = [pathh stringByReplacingOccurrencesOfString:@" " withString:@"%20"];
NSURL *imageURL = [NSURL URLWithString:newString];
NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
UIImage *originalImage = [UIImage imageWithData:imageData];
CGSize destinationSize = CGSizeMake(320, 158);
UIGraphicsBeginImageContext(destinationSize);
[originalImage drawInRect:CGRectMake(0,0,destinationSize.width,destinationSize.height)];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// [whotshotimgview setImage:image];
[mArr addObject: img];
}
return mArr;
}
[_slideshow addImagesFromResources:[self getImages] stringArray:[self getText] stringsubArray:[self getsubText]];
The problem here is that you didn't show the implementation of the addImagesFromResources method. You probably have to use GCD in that method, because I guess you are fetching the images and setting them on the UI there, and that is blocking your main thread.
You are trying to access the UIView from a thread other than the main thread, which causes the UI unresponsiveness.
Please use this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[_slideshow setTransitionType:KASlideShowTransitionSlide];
_slideshow.gestureRecognizers = nil;
[_slideshow addGesture:KASlideShowGestureSwipe];
// [_slideshow addImagesFromResources:[self getImages]]; // Add
// [_slideshow addTextFromResources:[self getText]];
// [slideShow subtextWithImages:[self getsubText]];
[_slideshow addImagesFromResources:[self getImages] stringArray:[self getText] stringsubArray:[self getsubText]];
});
What I think is that your UI hangs because of the image downloads.
So you should use the SDWebImage library, which caches images and helps keep the UI from hanging. With SDWebImage you can show a loader while the image hasn't loaded yet, and the image is cached, so the next time it won't be downloaded again and the UI won't hang. The complete reference for SDWebImage is: https://github.com/rs/SDWebImage
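For illustration, a minimal use of SDWebImage's UIImageView category could look like this (imageView and imageURL are placeholders for your own view and URL):
#import <SDWebImage/UIImageView+WebCache.h>

// Downloads asynchronously, caches the result, and sets the image on the main
// thread; the placeholder is shown while the download is still running.
[imageView sd_setImageWithURL:imageURL
             placeholderImage:[UIImage imageNamed:@"placeholder"]];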
This block is blocking your main thread, so update it as follows:
for(int i=0; i<[arrSlider count];i++)
{
pathh=[[arrSlider objectAtIndex:i ]valueForKey:@"filepath" ];
NSString *newString = [pathh stringByReplacingOccurrencesOfString:@" " withString:@"%20"];
dispatch_queue_t myQueue = dispatch_queue_create("My Queue",NULL);
dispatch_async(myQueue, ^{
NSURL *imageURL = [NSURL URLWithString:newString];
NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
UIImage *originalImage = [UIImage imageWithData:imageData];
CGSize destinationSize = CGSizeMake(320, 158);
UIGraphicsBeginImageContext(destinationSize);
[originalImage drawInRect:CGRectMake(0,0,destinationSize.width,destinationSize.height)];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// [whotshotimgview setImage:image];
[mArr addObject: img];
dispatch_async(dispatch_get_main_queue(), ^{
// If you want to refresh your UI simultaniously, you can write statement for that i.e
// yourImageView.image = img;
// Otherwise remove this queue
});
});
}
The main thread should never be used for long processing; it's there for the UI.
AFNetworking provides asynchronous functionality for downloading files. You should also be careful not to thrash the network layer with too many simultaneous downloads; I think 4 or 5 was the maximum the last time I tested.
So the flow of execution should be as follows.
The controller sets up the slideshow and sends the downloading of the images to a background thread (handled automatically by AFNetworking), passing a completion block to it. In this completion block you need to dispatch the UI work back to the main thread using GCD: dispatch_async to dispatch_get_main_queue().
Keep your main thread's run loop available for user interaction.
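As a rough sketch of that shape, using plain GCD around the methods already shown in the question (not a drop-in replacement):
// Build the image and text arrays off the main thread, then touch the slideshow
// (a view) only on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSArray *images = [self getImages];   // the blocking downloads stay off the main thread
    NSArray *titles = [self getText];
    NSArray *subtitles = [self getsubText];
    dispatch_async(dispatch_get_main_queue(), ^{
        [_slideshow addImagesFromResources:images stringArray:titles stringsubArray:subtitles];
    });
});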
It happens because the images are loading on another thread while you populate your slider right after getting the data.
It is better to implement this protocol method of KASlideShow to use the slideshow in a more memory-efficient way:
- (UIImage *)slideShow:(KASlideShow *)slideShow imageForPosition:(KASlideShowPosition)position
{
    // The delegate must return an image synchronously, so the download and resize
    // happen inline here instead of inside a dispatch_async block.
    // 'i' stands for the index of the slide being requested; derive it from 'position'
    // or track it yourself.
    pathh = [[arrSlider objectAtIndex:i] valueForKey:@"filepath"];
    NSString *newString = [pathh stringByReplacingOccurrencesOfString:@" " withString:@"%20"];
    NSURL *imageURL = [NSURL URLWithString:newString];
    NSData *imageData = [NSData dataWithContentsOfURL:imageURL];
    UIImage *originalImage = [UIImage imageWithData:imageData];
    CGSize destinationSize = CGSizeMake(320, 158);
    UIGraphicsBeginImageContext(destinationSize);
    [originalImage drawInRect:CGRectMake(0, 0, destinationSize.width, destinationSize.height)];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // [whotshotimgview setImage:img];
    return img;
}
I didn't try the code, but I hope it will work with small changes to the variable names. Good luck.

Create a GIF with UIImages

I am referring to this post. I am trying to make a GIF file from images created by taking screenshots. I am using a timer to take snapshots of the screen, so that I get the required number of frames to create a GIF. I take a snapshot every 0.1 seconds (I will later stop this timer after 3 seconds).
Here is my code to take snapshots of my UIView:
-(void)recordScreen{
self.timer= [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(takeSnapShot) userInfo:nil repeats:YES];
}
-(void)takeSnapShot{
//capture the screenshot of the uiimageview and save it in camera roll
UIGraphicsBeginImageContext(self.drawView.frame.size);
[self.drawView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
The post I am referring to shows a helper function to create the GIF. I am not sure how I should pass my images to that helper function. Here is what I tried.
I tried to modify this part:
static NSUInteger kFrameCount = 10;
for (NSUInteger i = 0; i < kFrameCount; i++) {
@autoreleasepool {
UIImage *image = [self takeSnapShot]; // note: takeSnapShot would need to return the captured UIImage for this to work
CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
}
This creates the gif with 10 frames of my UIView.
Now....
What I am trying to do is:
I am drawing a simple sketch with my finger on my UIView using UIBezierPath, and I am taking snapshots in parallel with my drawing, so that I will have around 50-100 PNG files. I am trying to pass all these images to the makeAnimatedGif method.
WorkFlow:
Start the App.
Tap a button which starts the timer, and take pictures every 0.1 s
Draw with a finger (so all the drawing is captured every 0.1 s)
After each snapshot, call the makeAnimatedGif method, so that the snapshot taken is added to the previous frames in the GIF file
Stop everything
Issue:
-Case 1:
Tap the button (which creates the GIF immediately)
Start drawing
Stop
Check the GIF (a GIF with 10 frames is created with a white background, since I drew nothing when I hit the button)
If I call my snapshot method in a loop, I get the last 10 frames of my drawing, but not everything.
for (NSUInteger i = 0; i < 10; i++) {
@autoreleasepool {
UIImage *image = [self takeSnapShot];
CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
}
-Case 2:
Hit the button (start the timer, take snapshots, call makeAnimatedGif in parallel)
Draw something
Stop
Check the GIF (an empty GIF is created without any frames; I believe calling makeAnimatedGif every 0.1 seconds didn't work as expected)
Here is the code for case 2:
-(void)recordScreen{
self.timer= [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(takeSnapShot) userInfo:nil repeats:YES];
}
-(void)takeSnapShot{
//capture the screenshot of the uiimageview and save it in camera roll
UIGraphicsBeginImageContext(self.drawView.frame.size);
[self.drawView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self makeAnimatedGif:(UIImage *)viewImage];
});
}
-(void) makeAnimatedGif:(UIImage *)image{
NSDictionary *fileProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
}
};
NSDictionary *frameProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFDelayTime: @0.02f, // a float (not double!) in seconds, rounded to centiseconds in the GIF data
}
};
documentsDirectoryURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
fileURL = [documentsDirectoryURL URLByAppendingPathComponent:@"animated.gif"];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeGIF, 10, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);
CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
if (!CGImageDestinationFinalize(destination)) {
NSLog(#"failed to finalize image destination");
}
CFRelease(destination);
NSLog(#"url=%#", fileURL);
}
Can someone please suggest how to pass the captured images to the above method to make a GIF?
After a few workarounds, I chose to store all the captured images in an array and use that array to pass the images to the GIF method. It works great!
I stored all the images in an array:
-(void)takeSnapShot{
//capture the screenshot of the uiimageview and save it in camera roll
UIGraphicsBeginImageContext(self.drawView.frame.size);
[self.drawView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//store all the images into array
[imgArray addObject:viewImage];
}
NOTE: make sure you resize the images before you store them in the array, or you may end up with memory warnings followed by an app crash if this runs for a longer period.
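A hedged sketch of that resizing step inside takeSnapShot (the half-size target is only an example):
// Scale the captured frame down before keeping it, so dozens of stored frames
// don't each hold a full-resolution bitmap.
CGSize thumbSize = CGSizeMake(self.drawView.frame.size.width / 2.0,
                              self.drawView.frame.size.height / 2.0);
UIGraphicsBeginImageContextWithOptions(thumbSize, NO, 1.0);
[viewImage drawInRect:CGRectMake(0, 0, thumbSize.width, thumbSize.height)];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[imgArray addObject:smallImage];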
Later I used the same array:
-(void)makeAnimatedGif {
NSUInteger kFrameCount = imgArray.count;
NSDictionary *fileProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFLoopCount: @0, // 0 means loop forever
}
};
NSDictionary *frameProperties = @{
(__bridge id)kCGImagePropertyGIFDictionary: @{
(__bridge id)kCGImagePropertyGIFDelayTime: @0.08f, // a float (not double!) in seconds, rounded to centiseconds in the GIF data
}
};
NSURL *documentsDirectoryURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
NSURL *fileURL = [documentsDirectoryURL URLByAppendingPathComponent:@"animated.gif"];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeGIF, kFrameCount, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);
for (NSUInteger i = 0; i < kFrameCount; i++) {
@autoreleasepool {
UIImage *image =[imgArray objectAtIndex:i]; //Here is the change
CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
}
if (!CGImageDestinationFinalize(destination)) {
NSLog(#"failed to finalize image destination");
}
CFRelease(destination);
NSLog(#"url=%#", fileURL);
}
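For completeness, a hypothetical stop method (not shown in the post) could tie the timer and the GIF creation together:
// Corresponds to the "Stop everything" step: stop capturing, then build the GIF
// once from all the collected frames.
- (void)stopRecording {
    [self.timer invalidate];
    self.timer = nil;
    [self makeAnimatedGif];
}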

How to download an unlimited number of images from a remote server and display them in a UICollectionView/UITableView?

I want to display an endless list of images in a UITableView/UICollectionView; the images are received from a remote server. I've done this using GCD, but it causes memory issues and the app crashes. Please help me fix it. I also noticed that some images aren't deallocated. Here is the piece of code I've used to download the images:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
NSData *imgData = UIImageJPEGRepresentation(image, 0.1);
UIImage *image1 = [UIImage imageWithData:imgData];
if (image1.size.width != 130 || image1.size.height != 100)
{
CGSize itemSize = CGSizeMake(130, 100);
UIGraphicsBeginImageContextWithOptions(itemSize, NO, 0.0f);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
[image1 drawInRect:imageRect];
image1 = UIGraphicsGetImageFromCurrentImageContext();
[self setImage:image1 forKey:[url absoluteString]];
// NSLog(#" down Size of Image(bytes):%d",[imgData length]);
UIGraphicsEndImageContext();
}
dispatch_async(dispatch_get_main_queue(), ^{
completion(image1);
//image1=nil;
});
});
You can work with AFNetworking+UIImageView.
Regarding the memory issues and crashes, be careful with retain cycles and use the profiler/Instruments. If you want to continue with your own code, it is a good idea to change it to:
__weak MyClassViewOrViewController* weakSelf = self;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
...
__strong MyClassViewOrViewController *strongSelf = weakSelf;
[strongSelf setImage:image1 forKey:[url absoluteString]];
...
});
Thanks,
J.
You can do this using the lazy loading concept.
Reference: https://developer.apple.com/library/ios/samplecode/LazyTableImages/Introduction/Intro.html
You can use SDWebImage to load an unlimited number of images; it has good performance and caching.

iOS - video frame processing optimization

In my project, I need to copy a chunk of each frame of a video onto one single resulting image.
Capturing the video frames is not a big deal. It would be something like:
// duration is the movie length in seconds.
// frameDuration is 1/fps (e.g. for 24 fps, frameDuration = 1/24).
// player is a MPMoviePlayerController
for (NSTimeInterval i=0; i < duration; i += frameDuration) {
UIImage * image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];
CGRect destinationRect = [self getDestinationRect:i];
[self drawImage:image inRect:destinationRect fromRect:originRect];
// UI feedback
[self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:x/totalFrames] waitUntilDone:NO];
}
The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried code which:
creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract the chunk of the image, and
makes a CGContextDrawImage call on the image context to draw the chunk.
But when the video reaches 12-14 s, my iPhone 4S is already on its third memory warning and crashes. I've profiled the app with the Leaks tool, and it found no leaks at all...
I'm not very strong in Quartz. Is there a better-optimized way to achieve this?
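For reference, a drawImage:inRect:fromRect: along the lines described above might look like this; it assumes a UIGraphics image context is already open, and it draws the crop through UIImage so the orientation stays upright:
// Crop the requested chunk out of the frame, then draw it into the destination
// rect of the currently open image context.
- (void)drawImage:(UIImage *)frame inRect:(CGRect)destRect fromRect:(CGRect)srcRect
{
    CGImageRef chunk = CGImageCreateWithImageInRect(frame.CGImage, srcRect);
    UIImage *chunkImage = [UIImage imageWithCGImage:chunk];
    [chunkImage drawInRect:destRect];
    CGImageRelease(chunk);
}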
Finally I kept the Quartz part of my code and changed the way I retrieved the images.
Now I use AVFoundation, which is a far faster solution.
// Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];
// Retrieving the video properties
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
frameDuration = CMTimeGetSeconds(composition.frameDuration);
CGSize renderSize = composition.renderSize;
CGFloat totalFrames = round(duration/frameDuration);
// Selecting each frame we want to extract : all of them.
NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)];
for (int i=0; i<totalFrames; i++) {
NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)];
[times addObject:time];
}
__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
if (result == AVAssetImageGeneratorSucceeded) {
int x = round(CMTimeGetSeconds(requestedTime)/frameDuration);
CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
[self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
}
else
NSLog(#"Ouch: %#", error.description);
i++;
[self performSelectorOnMainThread:#selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO];
if(i == totalFrames) {
[self performSelectorOnMainThread:#selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
}
};
// Launching the process...
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.maximumSize = renderSize;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
Even with very long videos it takes time, but it never crashes!
In addition to Martin's answer, I'd suggest shrinking the size of the images obtained by that call, that is, setting the property generator.maximumSize = CGSizeMake(width, height). Make the images as small as possible so they don't take up too much memory.
