GPUImage slow memory accumulation - ios

I looked through the forums to find similar questions, but it seems my issue is different.
I am using GPUImage; I built the framework using the bash script and looked at the samples for creating filters and chaining them for processing.
But in my app I am facing a constant increase in memory consumption. There are no memory leaks; I profiled my app.
image is an image taken from the camera. I call the code described below in a for loop. I watch memory grow from 7 MB to 150 MB, after which the app is terminated.
Here is the code:
- (void)processImageInternal:(UIImage *)image
{
GPUImagePicture * picture = [[GPUImagePicture alloc]initWithImage:image];
GPUImageLanczosResamplingFilter *lanczosResamplingFilter = [GPUImageLanczosResamplingFilter new];
CGFloat scale = image.scale;
CGSize size = image.size;
CGFloat newScale = roundf(2*scale);
CGSize newSize = CGSizeMake((size.width *scale)/newScale,(size.height *scale)/newScale);
[lanczosResamplingFilter forceProcessingAtSize:newSize];
[picture addTarget:lanczosResamplingFilter];
GPUImageGrayscaleFilter * grayScaleFilter = [GPUImageGrayscaleFilter new];
[grayScaleFilter forceProcessingAtSize:newSize];
[lanczosResamplingFilter addTarget:grayScaleFilter];
GPUImageMedianFilter * medianFilter = [GPUImageMedianFilter new];
[medianFilter forceProcessingAtSize:newSize];
[grayScaleFilter addTarget:medianFilter];
GPUImageSharpenFilter * sharpenFilter = [GPUImageSharpenFilter new];
sharpenFilter.sharpness +=2.0;
[sharpenFilter forceProcessingAtSize:newSize];
[medianFilter addTarget:sharpenFilter];
GPUImageGaussianBlurFilter * blurFilter = [GPUImageGaussianBlurFilter new];
blurFilter.blurSize = 0.5;
[blurFilter forceProcessingAtSize:newSize];
[sharpenFilter addTarget:blurFilter];
GPUImageUnsharpMaskFilter * unsharpMask = [GPUImageUnsharpMaskFilter new];
[unsharpMask forceProcessingAtSize:newSize];
[blurFilter addTarget:unsharpMask];
[picture processImage];
image = [unsharpMask imageFromCurrentlyProcessedOutput];
}
The code is executed on a background thread.
Here is the calling code:
for (NSUInteger i=0;i <100;i++)
{
NSLog(#" INDEX OF I : %d",i);
[self processImageInternal:image];
}
I even added some cleanup logic to the - (void)processImageInternal:(UIImage *)image method:
- (void)processImageInternal:(UIImage *)image
{
.........
[picture processImage];
image = [unsharpMask imageFromCurrentlyProcessedOutput];
//clean up code....
[picture removeAllTargets];
[self releaseResourcesForGPUOutput:unsharpMask];
[self releaseResourcesForGPUOutput:blurFilter];
[self releaseResourcesForGPUOutput:sharpenFilter];
[self releaseResourcesForGPUOutput:medianFilter];
[self releaseResourcesForGPUOutput:lanczosResamplingFilter];
[self releaseResourcesForGPUOutput:grayScaleFilter];
In the release method I am basically releasing whatever can be released: FBOs, textures, targets. Here is the code:
- (void)releaseResourcesForGPUOutput:(GPUImageOutput *) output
{
if ([output isKindOfClass:[GPUImageFilterGroup class]])
{
GPUImageFilterGroup *group = (GPUImageFilterGroup *) output;
for (NSUInteger i=0; i<group.filterCount;i++)
{
GPUImageFilter * curFilter = (GPUImageFilter * )[group filterAtIndex:i];
[self releaseResourcesForGPUFilter:curFilter];
}
[group removeAllTargets];
}
else if ([output isKindOfClass:[GPUImageFilter class]])
{
[self releaseResourcesForGPUFilter:(GPUImageFilter *)output];
}
}
The unsharp mask is a GPU group filter, i.e. a composite of filters.
- (void)releaseResourcesForGPUFilter:(GPUImageFilter *)filter
{
if ([filter respondsToSelector:@selector(releaseInputTexturesIfNeeded)])
{
[filter releaseInputTexturesIfNeeded];
[filter destroyFilterFBO];
[filter cleanupOutputImage];
[filter deleteOutputTexture];
[filter removeAllTargets];
}
}
I did as Brad suggested: my code was being called in the UI touch processing loop. Autoreleased objects are now drained, and in the Allocations instrument I cannot see anything strange, but my application is still terminated.
Total allocations were about 130 MB at the end, but for some reason my app was terminated.
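For reference, this is roughly how the processing is now driven (a minimal sketch of the background dispatch and per-iteration autorelease pool described above; the exact dispatch call is an assumption):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Keep the work off the touch-handling run loop.
    for (NSUInteger i = 0; i < 100; i++)
    {
        @autoreleasepool {
            // Drain GPUImage's autoreleased objects after every iteration,
            // instead of letting them pile up until the touch event finishes.
            [self processImageInternal:image];
        }
    }
});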
Here is how it looks (I included screenshots, the log from the device, and even the trace from Instruments). My app is called TesseractSample, but I completely switched off Tesseract usage, even its initialisation:
http://goo.gl/w5XrVb
In the log from the device I see that the maximum allowed rpages were used:
http://goo.gl/ImG2Wt
But I do not have a clue what that means: rpages == recent_max (167601).

Related

iOS UIImageView memory not getting deallocated on ARC

I want to animate an image view along a circular path, and on tapping the image the image view needs to change to a new image. My problem is that the images I assign to the image view are not deallocated, so the app receives memory warnings and crashes. I searched and tried a lot of solutions for this problem with no luck. In my case I need to create all UI components from Objective-C code. Here is the code for creating the image view and the animation.
@autoreleasepool {
for(int i= 0 ; i < categories.count; i++)
{
NSString *categoryImage = [NSString stringWithFormat:@"%@_%ld.png",[categories objectAtIndex:i],(long)rating];
if (paginationClicked) {
if([selectedCategories containsObject:[categories objectAtIndex:i]]){
categoryImage = [NSString stringWithFormat:@"sel_%@",categoryImage];
}
}
UIImageView *imageView = [[UIImageView alloc] init];
imageView.image = [self.mySprites objectForKey:categoryImage];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.clipsToBounds = NO;
[imageView sizeToFit];
imageView.accessibilityHint = [categories objectAtIndex:i];
// imageView.frame = CGRectMake(location.x+sin(M_PI/2.5)*(self.view.frame.size.width*1.5),location.y+cos(M_PI/2.5)*(self.view.frame.size.width*1.5) , 150, 150);
imageView.userInteractionEnabled = YES;
imageView.multipleTouchEnabled = YES;
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self
action:@selector(categoryTapGestureCaptured:)];
singleTap.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:singleTap];
[categoryView addSubview:imageView];
CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
UIBezierPath *path = [UIBezierPath bezierPath];
[path addArcWithCenter:location
radius:self.view.frame.size.width*1.5
startAngle:0.8
endAngle:-0.3+(0.1*(i+1))
clockwise:NO];
animation.path = path.CGPath;
imageView.center = path.currentPoint;
animation.fillMode = kCAFillModeForwards;
animation.removedOnCompletion = NO;
animation.duration = 1+0.25*i;
animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
// Apply it
[imageView.layer addAnimation:animation forKey:@"animation.trash"];
}
}
And this is the code to change the image on click.
for (UIImageView *subview in subviews) {
NSString *key = [NSString stringWithFormat:@"%@_%ld.png",subview.accessibilityHint,(long)rating];
if ([SelectedCategory isEqualToString:subview.accessibilityHint]) {
NSString *tempSubCategory = [categoryObj objectForKey:SelectedCategory];
if([selectedCategories containsObject:SelectedCategory]){
subview.image = [self.mySprites objectForKey:key];
[selectedCategories removeObject:SelectedCategory];
if (tempSubCategory.length != 0) {
subCategoriesAvailable = subCategoriesAvailable-1;
}
[self showNoPagination:subCategoriesAvailable+2];
}else{
if(selectedCategories.count != 2){
key = [NSString stringWithFormat:@"sel_%@",key];
subview.image = [self.mySprites objectForKey:key];
[selectedCategories addObject:SelectedCategory];
if ([SelectedCategory isEqualToString:@"Other"]) {
[self showCommentDialog];
}else{
if (tempSubCategory.length != 0) {
subCategoriesAvailable = subCategoriesAvailable+1;
}
[self showNoPagination:subCategoriesAvailable+2];
}
}
}
[self disableCategories];
break;
}
}
And I don't know what I am doing wrong here. I tried nullifying things in the for loop, but with no luck.
Here is the code I use for removing the image views:
UIView *categoryView = [self.view viewWithTag:500];
NSArray *subviews = [categoryView subviews];
for (UIImageView *subview in subviews) {
if(![selectedCategories containsObject:subview.accessibilityHint]){
[subview removeFromSuperview];
subview.image = Nil;
}
}
Adding the sprite reader code for reference:
#import "UIImage+Sprite.h"
#import "XMLReader.h"
@implementation UIImage (Sprite)
+ (NSDictionary*)spritesWithContentsOfFile:(NSString*)filename
{
CGFloat scale = [UIScreen mainScreen].scale;
NSString* file = [filename stringByDeletingPathExtension];
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
(scale == 2.0))
{
file = [NSString stringWithFormat:@"%@@2x", file];
}
NSString* extension = [filename pathExtension];
NSData* data = [NSData dataWithContentsOfFile:[NSString stringWithFormat:@"%@.%@", file,extension]];
NSError* error = nil;
NSDictionary* xmlDictionary = [XMLReader dictionaryForXMLData:data error:&error];
NSDictionary* xmlTextureAtlas = [xmlDictionary objectForKey:@"TextureAtlas"];
UIImage* image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@.%@", file,[[xmlTextureAtlas objectForKey:@"imagePath"]pathExtension]]];
CGSize size = CGSizeMake([[xmlTextureAtlas objectForKey:@"width"] integerValue],
[[xmlTextureAtlas objectForKey:@"height"] integerValue]);
if (!image || CGSizeEqualToSize(size, CGSizeZero)) return nil;
CGImageRef spriteSheet = [image CGImage];
NSMutableDictionary* tempDictionary = [[NSMutableDictionary alloc] init];
NSArray* xmlSprites = [xmlTextureAtlas objectForKey:@"sprite"];
for (NSDictionary* xmlSprite in xmlSprites)
{
CGRect unscaledRect = CGRectMake([[xmlSprite objectForKey:@"x"] integerValue],
[[xmlSprite objectForKey:@"y"] integerValue],
[[xmlSprite objectForKey:@"w"] integerValue],
[[xmlSprite objectForKey:@"h"] integerValue]);
CGImageRef sprite = CGImageCreateWithImageInRect(spriteSheet, unscaledRect);
// If this is a #2x image it is twice as big as it should be.
// Take care to consider the scale factor here.
[tempDictionary setObject:[UIImage imageWithCGImage:sprite scale:scale orientation:UIImageOrientationUp] forKey:[xmlSprite objectForKey:@"n"]];
CGImageRelease(sprite);
}
return [NSDictionary dictionaryWithDictionary:tempDictionary];
}
@end
Please help me to resolve this. Thanks in advance.
It looks like all the images are being retained by the dictionary (assumption) self.mySprites, as you are loading them with the call imageView.image = [self.mySprites objectForKey:categoryImage];
If you loaded the images into the dictionary with +[UIImage imageNamed:], then the dictionary initially contains only the compressed png images. Images are decompressed from png to bitmap as they are rendered to the screen, and these decompressed images use a large amount of RAM (that's the memory usage you're seeing labeled "ImageIO_PNG_Data"). If the dictionary is retaining them, then the memory will grow every time you render a new one to the screen, as the decompressed data is held inside the UIImage object retained by the dictionary.
Options available to you:
Store the image names in the self.mySprites dictionary, and load the images on demand. You should be aware that +[UIImage imageNamed:] implements internal RAM caching to speed things up, so this might also cause memory issues for you if the images are big, as the cache doesn't clear quickly. If this is an issue, consider using +[UIImage imageWithContentsOfFile:], which doesn't cache images in RAM, although it requires a little additional code.
Re-implement self.mySprites as an NSCache. NSCache will start throwing things out when memory pressure gets too high, so you'll need to handle the case where the image is not there when you expect it to be and load it from disk (perhaps using the above technique); a rough sketch follows below.
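For illustration, here is a minimal sketch of the NSCache approach (the spriteCache property, the file naming, and the bundle lookup are assumptions, not code from the question):
// Assumes: @property (nonatomic, strong) NSCache *spriteCache;
// and that each sprite has been exported as its own PNG in the app bundle.
- (UIImage *)spriteNamed:(NSString *)name
{
    UIImage *image = [self.spriteCache objectForKey:name];
    if (image == nil) {
        // Not cached yet, or evicted under memory pressure: reload from disk.
        // +imageWithContentsOfFile: does not keep an internal RAM cache, unlike +imageNamed:.
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        image = [UIImage imageWithContentsOfFile:path];
        if (image != nil) {
            [self.spriteCache setObject:image forKey:name];
        }
    }
    return image;
}
The caller would then use imageView.image = [self spriteNamed:categoryImage]; instead of reading the image directly out of a retaining dictionary.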
CAKeyframeAnimation inherits from CAPropertyAnimation, which in turn inherits from CAAnimation.
If you look at the delegate of the CAAnimation class, it is strongly referenced, as its declaration states:
/* The delegate of the animation. This object is retained for the
* lifetime of the animation object. Defaults to nil. See below for the
* supported delegate methods. */
@property(strong) id delegate;
Now, you have added the animation to imageView.layer; by doing so, the imageView.layer reference will be strongly retained by the CAAnimation reference.
Also you have set
animation.removedOnCompletion = NO;
which won't remove the animation from the layer on completion.
So if you are done with an image view, first call removeAllAnimations on its layer and then release the image view.
I think the CAAnimation strongly references the imageView (which would also have increased its retain count); this could be why, even after you have removed the imageView from its superview, its retain count is still not zero, leading to a leak.
Is there any specific requirement to set
animation.removedOnCompletion = NO;
since setting
animation.removedOnCompletion = YES;
could solve the memory-leak issue.
Alternatively, to resolve the memory leak you can remove the corresponding animation from the corresponding imageView's layer by implementing the CAAnimation delegate, as shown below:
/* Called when the animation either completes its active duration or
* is removed from the object it is attached to (i.e. the layer). 'flag'
* is true if the animation reached the end of its active duration
* without being removed. */
- (void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    if (flag) {
        NSLog(#"%#", #"The animation is finished. Do something here.");
    }
//Get the reference of the corresponding imageView
    [imageView.layer removeAnimationForKey:@"animation.trash"];
}
UIView *categoryView = [self.view viewWithTag:500];
NSArray *subviews = [categoryView subviews];
for (UIImageView *subview in subviews) {
if(![selectedCategories containsObject:subview.accessibilityHint]){
[subview.layer removeAnimationForKey:@"animation.trash"]; // either this
//or// [subview.layer removeAllAnimations]; //alternatively
[subview removeFromSuperview];
subview.image = Nil;
}
}

Image auto-rotates after using CIFilter

I am writing an app that lets users take a picture and then edit it. I am working on implementing tools with UISliders for brightness/contrast/saturation and am using the Core Image Filter class to do so. When I open the app, I can take a picture and display it correctly. However, if I choose to edit a picture, and then use any of the described slider tools, the image will rotate counterclockwise 90 degrees. Here's the code in question:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.navigationItem.hidesBackButton = YES; //hide default nav
//get image to display
DBConnector *dbconnector = [[DBConnector alloc] init];
album.moments = [dbconnector getMomentsForAlbum:album.title];
Moment *mmt = [album.moments firstObject];
_imageView.image = [mmt.moment firstObject];
CGImageRef aCGImage = _imageView.image.CGImage;
CIImage *aCIImage = [CIImage imageWithCGImage:aCGImage];
_editor = [CIFilter filterWithName:@"CIColorControls" keysAndValues:@"inputImage", aCIImage, nil];
_context = [CIContext contextWithOptions: nil];
[self startEditControllerFromViewController:self];
}
//cancel and finish buttons
- (BOOL) startEditControllerFromViewController: (UIViewController*) controller {
[_cancelEdit addTarget:self action:@selector(cancelEdit:) forControlEvents:UIControlEventTouchUpInside];
[_finishEdit addTarget:self action:@selector(finishEdit:) forControlEvents:UIControlEventTouchUpInside];
return YES;
}
//adjust brightness
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
_imageView.image = [UIImage imageWithCGImage: cgiimg];
CGImageRelease(cgiimg);
}
I believe that the problem stems from the brightnessSlider method, based on breakpoints that I've placed. Is there a way to stop the auto-rotating of my photo? If not, how can I rotate it back to the normal orientation?
Mere minutes after posting, I figured out the answer to my own question. Go figure. Anyway, I simply changed the slider method to the following:
- (IBAction)brightnessSlider:(UISlider *)sender {
[_editor setValue:[NSNumber numberWithFloat:_brightnessSlider.value] forKey: @"inputBrightness"];
CGImageRef cgiimg = [_context createCGImage:_editor.outputImage fromRect:_editor.outputImage.extent];
UIImageOrientation originalOrientation = _imageView.image.imageOrientation;
CGFloat originalScale = _imageView.image.scale;
_imageView.image = [UIImage imageWithCGImage: cgiimg scale:originalScale orientation:originalOrientation];
CGImageRelease(cgiimg);
}
This simply records the original orientation and scale of the image, and re-sets them when the data is converted back to a UIImage. Hope this helps someone else!

dataProvider is 0x0 / nil (GPUImage Framework)

I wrote some code which creates a filter and can be controlled via a UISlider.
But if I slide the UISlider, the app crashes.
My code:
.m file:
- (void) viewDidLoad {
[_sliderBrightness addTarget:self action:@selector(brightnessFilter) forControlEvents:UIControlEventValueChanged];
_sliderBrightness.minimumValue = -1.0;
_sliderBrightness.maximumValue = 1.0;
_sliderBrightness.value = 0.0;
}
- (IBAction)sliderBrightness:(UISlider *)sender {
CGFloat midpoint = [(UISlider *)sender value];
[(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint - 0.1];
[(GPUImageBrightnessFilter *)brightFilter setBrightness:midpoint + 0.1];
[sourcePicture processImage];
}
- (void) brightnessFilter {
UIImage *inputImage = _imgView.image;
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
brightFilter = [[GPUImageBrightnessFilter alloc] init];
GPUImageView *imgView2 = (GPUImageView *)self.view;
[brightFilter useNextFrameForImageCapture];
[sourcePicture addTarget:brightFilter];
[sourcePicture processImage];
UIImage* outputImage = [brightFilter imageFromCurrentFramebufferWithOrientation:0];
[_imgView setImage:outputImage];
}
Error:
GPUImageFramebuffer.m:
}
else
{
[self activateFramebuffer];
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
[self unlock]; // Don't need to keep this around anymore
}
In this line of code:
[self activateFramebuffer];
Error message:
Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
Console:
self = (GPUImageFramebuffer *const) 0x10a0a6960
rawImagePixels = (GLubyte *) 0x190
dataProvider = (CGDataProviderRef) 0x0
renderTarget = (CVPixelBufferRef) 0x8
Maybe the dataProvider causes the crash, but I don't really know because I'm new to developing iOS apps.
This obviously isn't going to work (and shouldn't even compile) because GPUImageBrightnessFilter has no -setTopFocusLevel: or -setBottomFocusLevel: method. You copied this from my sample application without changing these methods to the one appropriate to a brightness filter (which is the brightness property).
It's also rather confusing (and potentially problematic) to have both a brightnessFilter instance variable and -brightnessFilter method. You probably want to rename the former to make it clear that's where you're performing your initial setup of the filter and source image. You'll also need to call that in your view controller's setup (after your Nib is loaded).
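As a rough illustration of that advice, the setup and slider handling could look something like this (a minimal sketch; the method name setupBrightnessFilter and the slider action wiring are assumptions, not code from the question):
- (void)viewDidLoad {
    [super viewDidLoad];
    [self setupBrightnessFilter]; // one-time setup, after the nib has loaded
    [_sliderBrightness addTarget:self
                          action:@selector(sliderBrightness:)
                forControlEvents:UIControlEventValueChanged];
    _sliderBrightness.minimumValue = -1.0;
    _sliderBrightness.maximumValue = 1.0;
    _sliderBrightness.value = 0.0;
}
- (void)setupBrightnessFilter {
    // Create the source picture and the filter once, not on every slider change.
    sourcePicture = [[GPUImagePicture alloc] initWithImage:_imgView.image smoothlyScaleOutput:YES];
    brightFilter = [[GPUImageBrightnessFilter alloc] init];
    [sourcePicture addTarget:brightFilter];
}
- (IBAction)sliderBrightness:(UISlider *)sender {
    // brightness is the property appropriate to this filter.
    [brightFilter setBrightness:sender.value];
    [brightFilter useNextFrameForImageCapture];
    [sourcePicture processImage];
    _imgView.image = [brightFilter imageFromCurrentFramebuffer];
}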

Global variable is not being updated fast enough 50% of the time

I have a photo taking app. When the user presses the button to take a photo, I set a global NSString variable called self.hasUserTakenAPhoto equal to YES. This works perfectly 100% of the time when using the rear facing camera. However, it only works about 50% of the time when using the front facing camera and I have no idea why.
Below are the important pieces of code and a quick description of what they do.
Here is my viewDidLoad:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);
PFFile *imageFile = [self.message objectForKey:@"file"];
NSURL *imageFileURL = [[NSURL alloc]initWithString:imageFile.url];
imageFile = nil;
self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
imageFileURL = nil;
self.topHalfView.image = [UIImage imageWithData:self.imageData];
//START CREATING THE SESSION
self.session =[[AVCaptureSession alloc]init];
[self.session setSessionPreset:AVCaptureSessionPresetPhoto];
self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];
if([self.session canAddInput:self.deviceInput])
[self.session addInput:self.deviceInput];
_previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:_session];
self.rootLayer = [[self view]layer];
[self.rootLayer setMasksToBounds:YES];
[_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
[_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.rootLayer insertSublayer:_previewLayer atIndex:0];
self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[self.session addOutput:self.videoOutput];
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[self.videoOutput setSampleBufferDelegate:self queue:queue];
[_session startRunning];
}
The Important part of viewDidLoad starts where I left the comment of //START CREATING THE SESSION
I basically create the session and then start running it. I have set this view controller as an AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below starts being called as well.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
//Sample buffer data is being sent, but don't actually use it until self.hasUserTakenAPhoto has been set to YES.
NSLog(#"Has the user taken a photo?: %#", self.hasUserTakenAPhoto);
if([self.hasUserTakenAPhoto isEqualToString:#"YES"]) {
//Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample buffer and use it for the value of self.image aka the captured photo.
self.image = [self imageFromSampleBuffer:sampleBuffer];
}
}
This code is receiving the video output from the camera every second, but I don't actually do anything with it until self.hasUserTakenAPhoto is equal to YES. Once that has a string value of YES, then I use the current sampleBuffer from the camera and place it inside my global variable called self.image
So, here is when self.hasUserTakenAPhoto is actually set to YES.
Below is my IBAction code that is called when the user presses the button to capture a photo. A lot happens when this code runs, but really all that matters is the very first statement of: self.hasUserTakenAPhoto = @"YES";
-(IBAction)stillImageCapture {
self.hasUserTakenAPhoto = #"YES";
[self.session stopRunning];
if(self.inputDevice.position == 2) {
self.image = [self selfieCorrection:self.image];
} else {
self.image = [self rotate:UIImageOrientationRight];
}
CGFloat widthToHeightRatio = _previewLayer.bounds.size.width / _previewLayer.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (self.image.size.width < self.image.size.height) {
cropRect.size.width = self.image.size.width;
cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
cropRect.size.width = self.image.size.height * widthToHeightRatio;
cropRect.size.height = self.image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
cropRect.origin.x = 0;
cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;
NSLog(#"Y Math: %f", (self.image.size.height - cropRect.size.height));
} else {
cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
cropRect.origin.y = 0;
float cropValueDoubled = self.image.size.height - cropRect.size.height;
float final = cropValueDoubled/2;
finalXValueForCrop = final;
}
CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
UIImage *image2 = [[UIImage alloc]initWithCGImage:imageRef];
self.image = image2;
CGImageRelease(imageRef);
self.bottomHalfView.image = self.image;
if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
[self.takingPhotoView setHidden:YES];
self.image = [self screenshot];
[_afterPhotoView setHidden:NO];
}
}
So basically, the viewDidLoad method runs and the session is started; the session sends everything the camera sees to the captureOutput method; and as soon as the user presses the "take a photo" button we set the string value of self.hasUserTakenAPhoto to YES, the session stops, and since self.hasUserTakenAPhoto is now equal to YES, the captureOutput method places the very last camera buffer into self.image for me to use.
I just can't figure this out because like I said it works 100% of the time when using the rear facing camera. However, when using the front facing camera it only works 50% of the time.
I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front facing camera, and I know this because if you look at the 2nd block of code I posted, it has the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.
When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time that NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto); runs, it prints the correct value of YES.
However, when it doesn't work correctly and doesn't update fast enough, the very last time it runs it still prints to the log with a value of null.
Any ideas on why self.hasUserTakenAPhoto does not update fast enough 50% of the time when using the front facing camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternate solution to this.
Thanks for the help.
I think it's a scheduling problem. At the return point of your methods
– captureOutput:didOutputSampleBuffer:fromConnection:
– captureOutput:didDropSampleBuffer:fromConnection:
add a CFRunLoopRun()

Generate and store many images at first launch iOS

I need to generate and save 320 images as PNGs when the game is first run. These images will then be loaded instead of being generated again. Here is the process:
load the image template (black and white with alpha)
overlay the non-transparent pixels with the specified colour
put the template on top at 0.3 opacity, merging into one final image
return the resulting UIImage
store the UIImage, converted via NSData to a PNG, in the Cache directory
This is done using UIGraphicsBeginImageContextWithOptions. This process needs to be done for 32 image templates in 10 colours on the background thread. The purpose is that these will be used as avatar/profile images in this game, scaled down at certain screens as appropriate. They cannot be generated every time though, because this causes too much lag.
The images are 400x400 each and end up at about 20-25 kB each when stored. When I try to use my current way of generating and storing them, I get a memory warning and I see (using Instruments) that the number of live CGImage and UIImage objects keeps increasing rapidly. It seems like they're being retained, but I don't hold any references to them.
Here is my other question closer detailing the code I'm using: UIGraphicsBeginImageContext created image
What is the best way to create this many images and store them to secondary storage? Thanks in advance.
Edit:
Here's the whole code I currently use to create and save the images:
//==========================================================
// Definitions and Macros
//==========================================================
//HEX color macro
#define UIColorFromRGB(rgbValue) [UIColor \
colorWithRed:((float)((rgbValue & 0xFF0000) >> 16))/255.0 \
green:((float)((rgbValue & 0xFF00) >> 8))/255.0 \
blue:((float)(rgbValue & 0xFF))/255.0 alpha:1.0]
//Colours
#define RED_COLOUR UIColorFromRGB(0xF65D58)
#define ORANGE_COLOUR UIColorFromRGB(0xFF8D16)
#define YELLOW_COLOUR UIColorFromRGB(0xFFD100)
#define LIGHT_GREEN_COLOUR UIColorFromRGB(0x82DE13)
#define DARK_GREEN_COLOUR UIColorFromRGB(0x67B74F)
#define TURQUOISE_COLOUR UIColorFromRGB(0x32ADA6)
#define LIGHT_BLUE_COLOUR UIColorFromRGB(0x11C9FF)
#define DARK_BLUE_COLOUR UIColorFromRGB(0x2E97F5)
#define PURPLE_COLOUR UIColorFromRGB(0x8F73FD)
#define PINK_COLOUR UIColorFromRGB(0xF35991)
#import "ViewController.h"
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
//Generate the graphics
[self generateAndSaveGraphics];
}
//==========================================================
// Generating and Saving Graphics
//==========================================================
-(void)generateAndSaveGraphics {
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self createAvatarImages];
//Here create all other images that need to be saved to Cache directory
dispatch_async( dispatch_get_main_queue(), ^{ //Finished
NSLog(#"DONE"); //always runs out of memory before getting here
});
});
}
-(void)createAvatarImages {
//Create avatar images
NSArray *colours = [NSArray arrayWithObjects:RED_COLOUR, ORANGE_COLOUR, YELLOW_COLOUR, LIGHT_GREEN_COLOUR, DARK_GREEN_COLOUR, TURQUOISE_COLOUR, LIGHT_BLUE_COLOUR, DARK_BLUE_COLOUR, PURPLE_COLOUR, PINK_COLOUR, nil];
NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) lastObject];
for(int i = 0; i < 32; i++) { //Avatar image templates are named m1 - m16 and f1 - f16
NSString *avatarImageName;
if(i < 16) { //female avatars
avatarImageName = [NSString stringWithFormat:#"f%i", i+1];
}
else { //male avatars
avatarImageName = [NSString stringWithFormat:#"m%i", i-15];
}
for(int j = 0; j < colours.count; j++) { //make avatar image for each colour
@autoreleasepool { //only helps very slightly
UIColor *colour = [colours objectAtIndex:j];
UIImage *avatarImage = [self tintedImageFromImage:[UIImage imageNamed:avatarImageName] colour:colour intensity:0.3];
NSString *fileName = [NSString stringWithFormat:@"%@_%i.png", avatarImageName, j];
NSString *filePath = [cacheDir stringByAppendingPathComponent:fileName];
NSData *imageData = [NSData dataWithData:UIImagePNGRepresentation(avatarImage)];
[imageData writeToFile:filePath atomically:YES];
NSLog(#"AVATAR IMAGE CREATED");
}
}
}
}
//==========================================================
// Universal Image Tinting Code
//==========================================================
//Creates a tinted image based on the source greyscale image and tinting intensity
-(UIImage *)tintedImageFromImage:(UIImage *)sourceImage colour:(UIColor *)color intensity:(float)intensity {
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(sourceImage.size);
}
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, sourceImage.size.width, sourceImage.size.height);
// draw alpha-mask
CGContextSetBlendMode(context, kCGBlendModeNormal);
CGContextDrawImage(context, rect, sourceImage.CGImage);
// draw tint color, preserving alpha values of original image
CGContextSetBlendMode(context, kCGBlendModeSourceIn);
[color setFill];
CGContextFillRect(context, rect);
//Set the original greyscale template as the overlay of the new image
sourceImage = [self verticallyFlipImage:sourceImage];
[sourceImage drawInRect:CGRectMake(0,0, sourceImage.size.width,sourceImage.size.height) blendMode:kCGBlendModeMultiply alpha:intensity];
UIImage *colouredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
colouredImage = [self verticallyFlipImage:colouredImage];
return colouredImage;
}
//Vertically flips an image
-(UIImage *)verticallyFlipImage:(UIImage *)originalImage {
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
CGContextConcatCTM(context, flipVertical);
[tempImageView.layer renderInContext:context];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return flippedImage;
}
@end
I've created a test project (in the zip) to illustrate the problem:
Project Files
For future reference, the solution is this one line of code:
tempImageView.image = nil;
Thanks to Matic.
It would seem that the issue is in the verticallyFlipImage method. The graphics context seems to retain the temporary image view you create, and with it the image you assign. This issue would probably be fixed in general by pushing each image through the process as its own dispatch call: resample image -> callback -> resample next (or exit).
At the end of the whole resampling, all the data is released and there is no memory leak. As a quick fix, you can simply call tempImageView.image = nil; before returning the image. The image view itself still causes some memory growth, but it is too small to have any impact.
This works for me and I hope it helps you.
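Applied to the verticallyFlipImage: method from the question, the quick fix is just the one extra line before the return:
//Vertically flips an image
-(UIImage *)verticallyFlipImage:(UIImage *)originalImage {
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    tempImageView.image = nil; // let go of the image retained by the temporary image view
    return flippedImage;
}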
EDIT: added the dispatch concept (comment reference)
dispatch_queue_t internalQueue;
- (void)createQueue {
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^(void) {
internalQueue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL); //we created a high priority queue
});
}
- (void)deleteQueue {
dispatch_release(internalQueue);
}
- (void)imageProcessingDone {
[self deleteQueue];
//all done here
}
- (void)processImagesInArray:(NSMutableArray *)imageArray {
//take out 1 of the objects (last in this case, you can do objectAtIndex:0 if you wish)
UIImage *img = [[imageArray lastObject] retain]; //note, image retained so the next line does not deallocate it (released at NOTE1)
[imageArray removeLastObject]; //remove from the array
dispatch_async(internalQueue, ^(void) { //dispach
//do all the image processing + saving
[img release];//NOTE1
//callback: In this case I push it the main thread. There should be little difference if you simply dispach it again on the internalQueue
if(imageArray.count > 0) {
[self performSelectorOnMainThread:@selector(processImagesInArray:) withObject:imageArray waitUntilDone:NO];
}
else {
[self performSelectorOnMainThread:@selector(imageProcessingDone) withObject:nil waitUntilDone:NO];
}
});
}
