Does the code below have a potential memory leak? - iOS

The Analyzer flags the method below with a potential memory leak warning on the two lines just before the return. Is this a true positive or a false positive? If it's a true positive, how do I fix it? Thanks a lot for your help!
-(UIImage*)setMenuImage:(UIImage*)inImage isColor:(Boolean)bColor
{
    int w = inImage.size.width + (_borderDeep * 2);
    int h = inImage.size.height + (_borderDeep * 2);
    CGColorSpaceRef colorSpace;
    CGContextRef context;
    if (YES == bColor)
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
        context = CGBitmapContextCreate(NULL, w, h, 8, w, colorSpace, kCGImageAlphaNone);
    }
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(_borderDeep, _borderDeep, inImage.size.width, inImage.size.height), inImage.CGImage);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);       // releasing context
    CGColorSpaceRelease(colorSpace); // releasing colorSpace
    // The two lines above are where the Analyzer reports the potential leak.
    return [UIImage imageWithCGImage:image];
}

You're leaking the CGImage object (that's stored in your image variable). You can fix this by releasing the image after creating the UIImage.
UIImage *uiImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
return uiImage;
The reason for this is that CoreGraphics follows the CoreFoundation ownership rules; in this case, the "Create" rule. Namely, functions with "Create" (or "Copy") return an object that you are required to release yourself. So in this case, CGBitmapContextCreateImage() is returning a CGImageRef that you are responsible for releasing.
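For example, a minimal illustration of the two rules side by side (someImage here stands in for any CGImageRef you already have; it is not from the code above):
CGColorSpaceRef owned = CGColorSpaceCreateDeviceRGB();      // "Create": you own it...
CGColorSpaceRelease(owned);                                 // ...so you must release it
CGColorSpaceRef borrowed = CGImageGetColorSpace(someImage); // "Get": you do NOT own it, so no release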
Incidentally, why aren't you using the UIGraphics convenience functions to create your context? Those will handle putting the right scale on the resulting UIImage. If you want to match your input image's scale, you can do that as well:
CGSize size = inImage.size;
size.width += _borderDeep*2;
size.height += _borderDeep*2;
UIGraphicsBeginImageContextWithOptions(size, NO, inImage.scale); // could pass YES for opaque if you know it will be
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[inImage drawInRect:(CGRect){{_borderDeep, _borderDeep}, inImage.size}];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
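If you can target iOS 10 or later (newer than this answer originally assumed), the block-based UIGraphicsImageRenderer sidesteps the begin/end pairing entirely. A rough sketch of the same border-padding logic, not a drop-in from the original answer:
CGSize size = inImage.size;
size.width += _borderDeep * 2;
size.height += _borderDeep * 2;
UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat defaultFormat];
format.scale = inImage.scale; // match the input image, as above
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:size format:format];
UIImage *result = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
    CGContextSetInterpolationQuality(ctx.CGContext, kCGInterpolationHigh);
    [inImage drawInRect:(CGRect){{_borderDeep, _borderDeep}, inImage.size}];
}];
return result;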

You have to release the CGImageRef you made. CGBitmapContextCreateImage has "Create" in the name, which means (Apple is strict with its naming conventions) that you are responsible for freeing that memory.
Replace the last line with:
UIImage *uiimage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
return uiimage;

Related

Convert indexed color .png to RGB or greyscale

I'm writing a Today Widget that needs to display an image.
I noticed that every time the widget loads, the image is redrawn. This takes about half a second.
After some investigation, I found out that the culprit is that the image file is in an indexed color space.
So: my question is:
How do I convert this file to something that the iPhone can display more efficiently? For instance, an RGB file. I would then save it to a new file, and load that new file in my UIImageView.
I played around a bit with CGImage, since I believe that is the solution direction, but I end up with a white UIImageView.
This is my code:
UIImage * theCartoon = [UIImage imageWithData:imageData];
CGImageRef si = [theCartoon CGImage];
CGDataProviderRef src = CGImageGetDataProvider(si);
CGImageRef imageRef = CGImageCreateWithPNGDataProvider(src, NULL, NO, kCGRenderingIntentDefault);
cartoon.image = [[UIImage alloc] initWithCGImage:imageRef];
Any suggestions on this approach? Some obvious misprogramming?
Try this
// The source image
CGImageRef image = theCartoon.CGImage;
CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
// The result image in RGB color space
CGImageRef result = nil;
// Check color space
CGColorSpaceRef srcColorSpace = CGImageGetColorSpace(image);
if (CGColorSpaceGetModel(srcColorSpace) != kCGColorSpaceModelRGB) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaNoneSkipLast);
    CGRect rect = {CGPointZero, size};
    CGContextDrawImage(context, rect, image);
    result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}
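The snippet stops after creating result; per the Create rule discussed in the first answer, it still needs to be wrapped in a UIImage and released. A possible continuation, reusing the variables from the question and snippet above:
UIImage *converted;
if (result != nil) {
    converted = [UIImage imageWithCGImage:result];
    CGImageRelease(result); // balance CGBitmapContextCreateImage
} else {
    converted = theCartoon; // already RGB; use the original as-is
}
cartoon.image = converted;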
It's been a while since the question was asked, but for others who might need this, here is my solution:
- (UIImage *)convertIndexedColorSpaceToRGB:(UIImage *)sourceImage {
    CGImageRef originalImageRef = sourceImage.CGImage;
    const CGBitmapInfo originalBitmapInfo = CGImageGetBitmapInfo(originalImageRef);
    // See: http://stackoverflow.com/questions/23723564/which-cgimagealphainfo-should-we-use
    const uint32_t alphaInfo = (originalBitmapInfo & kCGBitmapAlphaInfoMask);
    CGBitmapInfo bitmapInfo = originalBitmapInfo;
    switch (alphaInfo)
    {
        case kCGImageAlphaNone:
            bitmapInfo |= kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast;
            break;
        case kCGImageAlphaPremultipliedFirst:
        case kCGImageAlphaPremultipliedLast:
        case kCGImageAlphaNoneSkipFirst:
        case kCGImageAlphaNoneSkipLast:
            break;
        case kCGImageAlphaOnly:
        case kCGImageAlphaLast:
        case kCGImageAlphaFirst:
            return sourceImage;
    }
    const CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    const CGSize pixelSize = CGSizeMake(sourceImage.size.width * sourceImage.scale,
                                        sourceImage.size.height * sourceImage.scale);
    const CGContextRef context = CGBitmapContextCreate(NULL,
                                                       pixelSize.width,
                                                       pixelSize.height,
                                                       CGImageGetBitsPerComponent(originalImageRef),
                                                       pixelSize.width * 4,
                                                       colorSpace,
                                                       bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    if (!context) return sourceImage;
    const CGRect imageRect = CGRectMake(0, 0, pixelSize.width, pixelSize.height);
    UIGraphicsPushContext(context);
    // Flip the coordinate system. See: http://stackoverflow.com/questions/506622/cgcontextdrawimage-draws-image-upside-down-when-passed-uiimage-cgimage
    CGContextTranslateCTM(context, 0, pixelSize.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    [sourceImage drawInRect:imageRect];
    UIGraphicsPopContext();
    const CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *image = [UIImage imageWithCGImage:decompressedImageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
    CGImageRelease(decompressedImageRef);
    return image;
}
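Usage would look something like the following, applied to the variables from the question (assuming the method lives on the same class, or in a category):
UIImage *theCartoon = [UIImage imageWithData:imageData];
cartoon.image = [self convertIndexedColorSpaceToRGB:theCartoon];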

iOS retrieve different pixels in pixel by pixel comparison of UIImages

I am trying to do a pixel by pixel comparison of two UIImages and I need to retrieve the pixels that are different. Using the "Generate hash from UIImage" question, I found a way to generate a hash for a UIImage. Is there a way to compare the two hashes and retrieve the different pixels?
If you want to actually retrieve the difference, the hash cannot help you. You can use the hash to detect the likely presence of differences, but to get the actual differences, you have to use other techniques.
For example, to create a UIImage that consists of the difference between two images, see this accepted answer, in which Cory Kilgor illustrates the use of CGContextSetBlendMode with a blend mode of kCGBlendModeDifference:
+ (UIImage *)differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];
    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));
    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }
    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);
    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    return image;
}
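To go from the difference image back to actual pixel coordinates, you can render it into a bitmap buffer you own and scan for non-black pixels. A rough sketch (the method name and the RGBA layout are my own assumptions, building on the context setup above):
+ (NSArray *)differingPointsInImage:(UIImage *)diffImage {
    CGImageRef ref = diffImage.CGImage;
    size_t w = CGImageGetWidth(ref), h = CGImageGetHeight(ref);
    size_t bytesPerRow = w * 4;
    unsigned char *buf = calloc(h, bytesPerRow);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, bytesPerRow, cs, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(cs);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), ref);
    CGContextRelease(ctx);
    NSMutableArray *points = [NSMutableArray array];
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            unsigned char *p = buf + y * bytesPerRow + x * 4;
            if (p[0] || p[1] || p[2]) { // any non-black channel means the images differ here
                [points addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
            }
        }
    }
    free(buf);
    return points;
}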

iOS Pixel Access of UIImage results in crash/skewing

I am trying to access the pixels of a certain image which has been resized using this block:
-(UIImage *)imageResize:(UIImage *)imageResizable scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [imageResizable drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
... where newSize is the size of a UIView that I am trying to fit this image into.
Now, I am supposed to access the pixels of this image, and do some filtering on it.
I use the following code block:
- (UIImage *)filter:(UIImage *)image
{
    CGImageRef imageBuffTarget = [image CGImage];
    CFMutableDataRef pixelDataTarget = CFDataCreateMutableCopy(0, 0, CGDataProviderCopyData(CGImageGetDataProvider(imageBuffTarget)));
    NSUInteger width2 = CGImageGetWidth(imageBuffTarget);
    NSUInteger height2 = CGImageGetHeight(imageBuffTarget);
    UInt8 *target_image = (UInt8 *)CFDataGetMutableBytePtr(pixelDataTarget);
    // Going forward, I want to do some processing here, on the *target_image data.
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo1 = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *newimage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    CFRelease(imageRef);
    return newimage;
}
I take the image from the UIView, pass it on to the 'filter' method and set it back to the view.
But, on doing this, the app crashes and I get the following error in the console:
<Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 1280 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedLast.
<Error>: CGBitmapContextCreateImage: invalid context 0x0
When I change:
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
to
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), 2*CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
(doubling the 'bytes per row', which does exceed 1280)
the app doesn't crash, but the output on the view comes out to be a distorted and skewed version of the original image.
Please note that when I call CGImageGetHeight(imageBuffTarget) and CGImageGetWidth(imageBuffTarget), I get the exact height and width of the ImageView whose size I passed into the imageResize method.
Could you please help me figure out the mistake in this code?
Thanks in advance.

quartz 2D image alpha masking

I found this little code snippet that seems to do what I want, but Xcode is yelling at me that self.CGImage isn't a property of my view controller (which makes sense, since that's a UIImage property). What changes would I need to make to this code for it to be functional? Thanks!
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGContextRef mainViewContentContext;
    CGColorSpaceRef colorSpace;
    UIImage *tempImage;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a bitmap graphics context the size of the image
    mainViewContentContext = CGBitmapContextCreate(NULL, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    // free the rgb colorspace
    CGColorSpaceRelease(colorSpace);
    CGImageRef maskingImage = [maskImage CGImage];
    CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImage.size.width, maskImage.size.height), maskingImage);
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, image.size.width, image.size.height), self.CGImage);
    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    // convert the finished resized image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
    // image is retained by the property setting above, so we can
    // release the original
    CGContextRelease(mainViewContentContext);
    CGImageRelease(mainViewContentBitmapContext);
    maskingImage = nil;
    CGImageRelease(maskingImage);
    // return the image
    return theImage;
}
Try replacing self.CGImage with image.CGImage.
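That is, the draw call becomes:
CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);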
Place this method in a UIImage category (or subclass). Note also that maskingImage comes from [maskImage CGImage], which you do not own (the "Get" rule), so the CGImageRelease(maskingImage) near the end (which is sent to nil anyway, after the preceding assignment) should simply be removed.

App Crashes when trying to update a UIImageView after modifying the RGB values for an iOS project

I'm trying to apply multiple effects to images. I've created a separate file to handle the effects processing; I can send a UIImageView to it and receive a modified copy. To save processing time, I first load the image into the separate processing file and keep it in memory, rather than loading the image every time I want to modify it. The flow is getImageData -> modifyRGB -> displayImage. Everything works until the last step. The returned modified image is displayed on screen for a split second, then the app crashes with an EXC_BAD_ACCESS (code 1) error. I've been over the code repeatedly and can't find the problem. Any help is greatly appreciated. Thank you!
UPDATE WITH MORE INFO
I'm using Xcode 4.3.1 with Automatic Reference Counting
Using breakpoints, I can verify that the crash happens when the line self.imageView.image = [self.imageManipulation displayImage]; is executed. The image IS updated, but then the program immediately crashes.
Using NSZombie, I get the error -[Not A Type retain]: message sent to deallocated instance 0x2cceaf80.
From my viewController I use:
[self.imageManipulation getImageData:self.imageView.image];
[self.imageManipulation modifyRGB];
self.imageView.image = [self.imageManipulation displayImage];
My ImageManipulation file consists of:
@implementation ImageManipulation
static unsigned char *rgbaDataOld;
static unsigned char *rgbaDataNew;
static int width;
static int height;
- (void)getImageData:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    width = CGImageGetWidth(imageRef);
    height = CGImageGetHeight(imageRef);
    rgbaDataOld = malloc(height * width * 4);
    rgbaDataNew = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbaDataOld, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
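    // NB: imageRef came from [image CGImage], a Get (not a Create/Copy) function,
    // so this code never owned that reference; the release below gives up a
    // reference it never took, a likely source of the zombie error above.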
    CGImageRelease(imageRef);
}
// modify rgb values
- (void)modifyRGB
{
    for (int byteIndex = 0; byteIndex < width * height * 4; byteIndex += 4)
    {
        rgbaDataNew[byteIndex]     = (char) (int) (rgbaDataOld[byteIndex] / 3) + 1;
        rgbaDataNew[byteIndex + 1] = (char) (int) (rgbaDataOld[byteIndex + 1] / 3 + 1);
        rgbaDataNew[byteIndex + 2] = (char) (int) (rgbaDataOld[byteIndex + 2] / 3) + 1;
        rgbaDataNew[byteIndex + 3] = (char) (int) 255;
    }
}
// set image
- (UIImage *)displayImage
{
    CGContextRef context;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);
    return outputImage;
}

@end
I changed my displayImage method to take a UIImageView and return void. This way, all work is done on the passed UIImageView instead of a localized instance, and nothing is returned.
My guess is that the returned UIImage approach I was using before was crashing because the returned reference was deallocated the second the method completed. This also allows me to use CGImageRelease without any ill effects.
Here's the new approach:
- (void)displayImage:(UIImageView *)image
{
    CGContextRef context;
    CGImageRef imageRef = [image.image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(context);
    image.image = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);
    image = nil;
}
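For reference, the call from the view controller then becomes simply:
[self.imageManipulation displayImage:self.imageView];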
Thank you to everyone that offered help! I learned a tremendous amount just from your suggestions. This was my first time using bt and NSZombie and now I'm using them religiously! Thanks again!
You're releasing the imageRef, which causes the UIImage to be autoreleased.
Make sure that you retain the UIImage to prevent this:
- (UIImage *)displayImage
{
    CGContextRef context;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [[UIImage imageWithCGImage:imageRef] retain];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef); // this line would cause outputImage to be released
    return outputImage;
}
In displayImage tell the compiler you want to retain the image with __strong:
UIImage __strong *outputImage = [UIImage imageWithCGImage:imageRef];
or if you prefer:
__strong UIImage *outputImage = [UIImage imageWithCGImage:imageRef];
