Why is there a "potential leak"? - ios

Xcode's analyser is complaining that there is a "potential leak of an object". The first line within the following method is highlighted:
- (void)retrieveBeginRestoreData {
    self.restoreContext = [self.image newARGBBitmapContext];
    if (!self.restoreContext) self.restoreData = nil;
    CGRect rect = {{0,0}, self.image.size};
    CGContextDrawImage(self.restoreContext, rect, self.image.CGImage);
    self.restoreData = CGBitmapContextGetData(self.restoreContext);
}
I have a property declared as such:
@property (nonatomic, assign) CGContextRef restoreContext;
The newARGBBitmapContext is defined by the following:
- (CGContextRef)newARGBBitmapContext {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    size_t bitmapByteCount;
    size_t bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(self.CGImage);
    size_t pixelsHigh = CGImageGetHeight(self.CGImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    // colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure and release colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
I managed to resolve this issue by instead declaring restoreContext as an instance variable in the header file; the "potential leak" warning goes away.
Questions:
What was the issue in the first place?
How was the issue fixed when I stopped declaring restoreContext as a property?
What is the correct way to fix the issue with restoreContext being declared as a property?

This line
self.restoreContext = [self.image newARGBBitmapContext];
does the following:
It (potentially) creates an instance of CGContext.
Since the method name starts with new, an ownership transfer applies. That means that the receiver (your code) is responsible for releasing it.
When that line of code runs a second time, the reference to the already existing CGContext instance is overwritten without releasing the instance it points to. The older instance leaks.
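To answer the third question and keep restoreContext as a property, one option is a custom setter that releases the previous context before storing the new +1 reference. This is a minimal sketch, not from the original answer; it assumes the backing ivar is _restoreContext:
- (void)setRestoreContext:(CGContextRef)restoreContext {
    if (_restoreContext != restoreContext) {
        CGContextRelease(_restoreContext); // CGContextRelease() is a no-op on NULL
        _restoreContext = restoreContext;  // take over the +1 reference from new...
    }
}

- (void)dealloc {
    CGContextRelease(_restoreContext); // release the last context we own
    [super dealloc];                   // omit this line under ARC
}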

Related

Xcode Analyzer issue on memory leak and incorrect decrement with ARC

I am using ARC in my project, but when I ran the Analyzer I still got a "potential leak" and an "incorrect decrement" warning.
Following is my code:
#import "UIImage+ImageSize.h"
#implementation UIImage (ImageSize)
- (CGRect)cropRectForImage:(UIImage *)image {
CGImageRef cgImage = image.CGImage;
CGContextRef context = [self createARGBBitmapContextFromImage:cgImage];
if (context == NULL) return CGRectZero;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
CGRect rect = CGRectMake(0, 0, width, height);
CGContextDrawImage(context, rect, cgImage);
unsigned char *data = CGBitmapContextGetData(context);
CGContextRelease(context);
//Filter through data and look for non-transparent pixels.
int lowX = (int)width;
int lowY = (int)height;
int highX = 0;
int highY = 0;
if (data != NULL) {
for (int y=0; y<height; y++) {
for (int x=0; x<width; x++) {
int pixelIndex = (int)(width * y + x) * 4 /* 4 for A, R, G, B */;
if (data[pixelIndex] != 0) { //Alpha value is not zero; pixel is not transparent.
if (x < lowX) lowX = x;
if (x > highX) highX = x;
if (y < lowY) lowY = y;
if (y > highY) highY = y;
}
}
}
free(data);
} else {
return CGRectZero;
}
return CGRectMake(lowX, lowY, highX-lowX, highY-lowY);
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void *bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(inImage);
size_t height = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (int)(width * 4);
bitmapByteCount = (int)(bitmapBytesPerRow * height);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) return NULL;
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
CGColorSpaceRelease(colorSpace);
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
width,
height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL) free (bitmapData);
// Make sure and release colorspace before returning
CGColorSpaceRelease(colorSpace);
return context;
}
#end
How can I make it correct?
Please help me understand what this issue means and why it is happening, because I used to think ARC handles all memory cleanup by itself. There are other SO questions about almost the same error, but not about CGContextRef, so I had to ask a new question.
ARC only handles object pointer types and block pointer types. It does not handle Core Foundation-style reference types (e.g. CGContextRef).
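For illustration, here is a hedged sketch (not part of the original answer) of what that means in practice:
// ARC manages this Objective-C object automatically.
NSString *label = [NSString stringWithFormat:@"pixels: %d", 42];
NSLog(@"%@", label);
// ARC ignores Core Foundation-style types, so this must be released manually.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// ... use colorSpace ...
CGColorSpaceRelease(colorSpace); // required even in an ARC project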
Now to the analyzer issue. The analyzer (similar to ARC) pays attention to naming conventions to determine how a method or function is assumed to behave. For the implementation of the method or function, it then checks if it actually does behave in accordance to its naming convention. At call sites, it assumes it does and then checks that the surrounding code operates in accordance with that assumption.
Now, you may be aware that Core Foundation has the "Create Rule", where functions whose names contain "Create" or "Copy" generally return a +1 reference, while other functions generally return a +0 reference (the "Get Rule"). Maybe that's why you named your method, which returns a CGContext, with "create" in its name. Unfortunately, the Core Foundation rules don't apply to Objective-C methods.
The Cocoa naming conventions are that methods whose names begin with "alloc", "new", "copy", or "mutableCopy" return a +1 reference. (If you weren't using ARC, the retain method would also return a +1 reference.) Other methods return a +0 reference.
By the Cocoa naming conventions, your -createARGBBitmapContextFromImage: method is assumed to return a +0 reference. But, the actual implementation returns a +1 reference. That's one of the issues the analyzer reported. Then, at the call site, the calling code is assumed to receive a +0 reference. Therefore, it's not entitled to release that reference using CGContextRelease(). That's the other issue the analyzer is reporting.
You can fix this by renaming -createARGBBitmapContextFromImage: to -newARGBBitmapContextFromImage:. Then, by the Cocoa conventions, it would be expected to return a +1 reference and both the implementation and the call site would conform to this expectation.
Alternatively, you can have -createARGBBitmapContextFromImage: do return (CGContextRef)CFAutorelease(context); instead of just return context;. Then change the caller to not attempt to release the context. In this case, the method's name indicates it returns a +0 reference and, again, the implementation and call site both conform to that.
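A minimal sketch of that CFAutorelease() variant (CFAutorelease() requires iOS 7 or later; the 1x1 context here is illustrative, not the question's code):
// No "new"/"alloc"/"copy" prefix, so by the Cocoa conventions this method
// must return a +0 reference; CFAutorelease() arranges exactly that.
- (CGContextRef)createARGBBitmapContext {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Passing NULL lets CGBitmapContextCreate allocate and own the pixel storage.
    CGContextRef context = CGBitmapContextCreate(NULL, 1, 1, 8, 4, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) return NULL;
    return (CGContextRef)CFAutorelease(context); // hand the caller a +0 reference
}
At the call site you then use the context but do not call CGContextRelease() on it, matching the +0 expectation.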

Find average color of an area inside UIImageView [duplicate]

I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?
- (NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
{
    CGImageRef rawImageRef = [image CGImage];
    // This function returns the raw pixel values
    const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));
    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);
    // Here I sort the R,G,B values and get the average over the whole image
    int i = 0;
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;
    for (int column = 0; column < imageWidth; column++)
    {
        int r_temp = 0;
        int g_temp = 0;
        int b_temp = 0;
        for (int row = 0; row < imageHeight; row++) {
            i = (row * imageWidth + column) * 4;
            r_temp += (unsigned int)rawPixelData[i];
            g_temp += (unsigned int)rawPixelData[i+1];
            b_temp += (unsigned int)rawPixelData[i+2];
        }
        red += r_temp;
        green += g_temp;
        blue += b_temp;
    }
    NSNumber *averageRed = [NSNumber numberWithFloat:(1.0*red)/(imageHeight*imageWidth)];
    NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0*green)/(imageHeight*imageWidth)];
    NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0*blue)/(imageHeight*imageWidth)];
    // Then I store the result in an array
    NSArray *result = [NSArray arrayWithObjects:averageRed, averageGreen, averageBlue, nil];
    return result;
}
I tried two things:
Option 1:
I leave it as it is, but then after a few cycles (5+) the program crashes with a low-memory warning.
Option 2:
I add one line
CGImageRelease(rawImageRef);
before the method returns. Now it crashes after the second cycle: I get an EXC_BAD_ACCESS error for the UIImage that I pass to the method. When I Analyze (instead of Run) in Xcode, I get the following warning at this line:
"Incorrect decrement of the reference count of an object that is not owned at this point by the caller"
Where and how should I release the CGImageRef?
Thanks!
Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.
Create a 1x1 bitmap context.
Set the interpolation quality to medium (see later).
Draw your image scaled down to exactly this one pixel.
Read the RGB value from the context's buffer.
(Release the context, of course.)
This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.
Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.
Worth a try, at least.
Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. The measurements below were taken with the contained 512x512 test image, but you can change the image if you want.
It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationMedium.
I assume that the huge performance gain results from Quartz noticing that it does not have to decompress the JPEG fully, but can use only the lower-frequency parts of the DCT. That's an interesting optimization strategy when compositing JPEG-compressed pixels at a scale below 0.5. But I'm only guessing here.
Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time being spent in JPEG decompression.
Note: Here's a late follow-up on the example image above.
You don't own the CGImageRef rawImageRef because you obtain it using [image CGImage]. So you don't need to release it.
However, you own rawPixelData because you obtained it using CGDataProviderCopyData and must release it.
CGDataProviderCopyData
Return Value:
A new data object containing a copy of the provider’s data. You are responsible for releasing this object.
I believe your issue is in this statement:
const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));
You should be releasing the return value of CGDataProviderCopyData.
Your mergedColor method works great on an image loaded from a file, but not for an image captured by the camera, because CGBitmapContextGetData() on a context created from a captured sample buffer doesn't return its bitmap. I changed your code to the following. It works on any image, and it is as fast as your code.
- (UIColor *)mergedColor
{
    CGImageRef rawImageRef = [self CGImage];
    // Scale the image down to a one-pixel image.
    uint8_t bitmapData[4];
    int bitmapByteCount;
    int bitmapBytesPerRow;
    int width = 1;
    int height = 1;
    bitmapBytesPerRow = (width * 4);
    bitmapByteCount = (bitmapBytesPerRow * height);
    memset(bitmapData, 0, bitmapByteCount);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8, bitmapBytesPerRow,
                                                 colorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorspace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), rawImageRef);
    CGContextRelease(context);
    // Little-endian BGRA: in memory the bytes are B, G, R, A.
    return [UIColor colorWithRed:bitmapData[2] / 255.0f
                           green:bitmapData[1] / 255.0f
                            blue:bitmapData[0] / 255.0f
                           alpha:1];
}
Applied to the code in the question, keep a reference to the copied data so it can be released once you are done with the pixel data:
CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));
const UInt8 *rawPixelData = CFDataGetBytePtr(abgrData);
...
CFRelease(abgrData);

iOS: Overlay two images with Alpha offscreen

Sorry for this question; I know there is a similar question, but I cannot get its answer to work. Probably some dumb error on my side ;-)
I want to overlay two images with alpha on iOS. The images are taken from two videos, read by an AVAssetReader and stored in two CVPixelBuffers. I know that the alpha channel is not stored in the video, so I get it from a third file. All data looks fine. The problem is the overlay: if I do it on-screen with [CIContext drawImage] everything is fine!
But if I do it offscreen, because the format of the video is not identical to the screen format, I cannot get it to work:
1. drawImage does work, but only on-screen
2. render:toCVPixelBuffer works, but ignores alpha
3. CGContextDrawImage seems to do nothing at all (not even an error message)
So can somebody give me an idea what is wrong?
Init:
...(a lot of code before)

// Setup color space and bitmap context
if (outputContext)
{
    CGContextRelease(outputContext);
    CGColorSpaceRelease(outputColorSpace);
}
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                      videoFormatSize.width, videoFormatSize.height, 8,
                                      CVPixelBufferGetBytesPerRow(pixelBuffer), outputColorSpace,
                                      (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

...(a lot of code after)
Drawing:
CIImage *backImageFromSample;
CGImageRef frontImageFromSample;
CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
CMSampleBufferRef sampleBuffer = NULL;
CMSampleTimingInfo timingInfo;

// draw the frame
CGRect toRect;
toRect.origin.x = 0;
toRect.origin.y = 0;
toRect.size = videoFormatSize;

// background image always full size, this part seems to work
if (drawBack)
{
    CVPixelBufferLockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
    backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
    [coreImageContext render:backImageFromSample toCVPixelBuffer:nextImageBuffer bounds:toRect colorSpace:rgbSpace];
    CVPixelBufferUnlockBaseAddress(backImageBuffer, kCVPixelBufferLock_ReadOnly);
}
else
    [self clearBuffer:nextImageBuffer];

// Front image doesn't seem to do anything
if (drawFront)
{
    unsigned long int numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer) * CVPixelBufferGetHeight(frontImageBuffer);
    CVPixelBufferLockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
    frontImageFromSample = CGImageCreate(CVPixelBufferGetWidth(frontImageBuffer), CVPixelBufferGetHeight(frontImageBuffer), 8, 32,
                                         CVPixelBufferGetBytesPerRow(frontImageBuffer), outputColorSpace,
                                         (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                         provider, NULL, NO, kCGRenderingIntentDefault);
    CGContextDrawImage(outputContext, inrect, frontImageFromSample);
    CVPixelBufferUnlockBaseAddress(frontImageBuffer, kCVPixelBufferLock_ReadOnly);
    CGImageRelease(frontImageFromSample);
}
Any ideas, anyone?
So obviously I should stop asking questions on Stack Overflow. Every time I do, I find the answer myself shortly afterwards, after hours of debugging. Sorry for that. The problem is in the initialisation: you can't call CVPixelBufferGetBaseAddress without locking the address first O_o. The address comes back NULL, and this seems to be allowed, with the resulting action being to do nothing at all. So the correct code is:
if (outputContext)
{
    CGContextRelease(outputContext);
    CGColorSpaceRelease(outputColorSpace);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
outputColorSpace = CGColorSpaceCreateDeviceRGB();
outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                      videoFormatSize.width, videoFormatSize.height, 8,
                                      CVPixelBufferGetBytesPerRow(pixelBuffer), outputColorSpace,
                                      (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Potential leak of object stored in context

I am getting a memory leak for this code.
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)imageRef {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(imageRef);
    size_t pixelsHigh = CGImageGetHeight(imageRef);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGRect rect = {{0,0},{pixelsWide, pixelsHigh}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(context, rect, self.CGImage);
    // Make sure and release colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
error: Potential leak of object stored in context.
The memory leak analyser uses the name of the method to determine the change in retain count of the returned object. For Obj-C methods it is documented in Basic Memory Management Rules. It states
You create an object using a method whose name begins with “alloc”, “new”, “copy”, or “mutableCopy” (for example, alloc, newObject, or mutableCopy).
The name createARGBBitmapContextFromImage: is not matched by that rule. Instead you should name the method newARGBBitmapContextFromImage:.
Using the word Create in a name is the convention for C functions, not Objective-C methods.
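After the rename, the call site owns the returned reference and must balance it. A hedged sketch of such a caller (cgImage stands in for a valid CGImageRef):
CGContextRef context = [self newARGBBitmapContextFromImage:cgImage];
if (context != NULL) {
    unsigned char *data = CGBitmapContextGetData(context);
    // ... inspect the pixel data ...
    CGContextRelease(context); // balances the +1 implied by the "new" prefix
    free(data);                // the malloc'd backing buffer is also caller-owned
}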

initWithCVPixelBuffer failed because the CVPixelBufferRef is not non-IOSurface backed

I receive YUV frames (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) and when creating a CIImage from a CVPixelBufferRef I get:
initWithCVPixelBuffer failed because the CVPixelBufferRef is not non-IOSurface backed.
CVPixelBufferRef pixelBuffer;
size_t planeWidth[] = { width, width / 2 };
size_t planeHeight[] = { height, height / 2 };
size_t planeBytesPerRow[] = { width, width / 2 };

CVReturn ret = CVPixelBufferCreateWithBytes(
    kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
    data, bytesPerRow, 0, 0, 0, &pixelBuffer
);
if (ret != kCVReturnSuccess)
{
    NSLog(@"FAILED");
    CVPixelBufferRelease(pixelBuffer);
    return;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// fails
CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CVPixelBufferRelease(pixelBuffer);
[image release];
I'll assume the question is: "Why do I get this error?"
To make a CVPixelBuffer IOSurface-backed, you need to set properties on the CVPixelBuffer when you create it. Right now you are passing 0 (NULL) as the second-to-last parameter of CVPixelBufferCreateWithBytes.
Pass a dictionary containing the key kCVPixelBufferIOSurfacePropertiesKey with an empty dictionary as its value (to use the default IOSurface options; others are not documented) to CVPixelBufferCreate, since kCVPixelBufferIOSurfacePropertiesKey cannot be used with CVPixelBufferCreateWithBytes. Then copy the correct bytes into the created CVPixelBuffer (don't forget byte alignment). That is how you make it IOSurface-backed.
I'm not sure it will remove all errors for you, though, because of the pixel format. My understanding is that the GPU has to be able to hold textures in that pixel format for them to be usable as IOSurfaces, though I'm not 100% sure.
Note: correct copying of pixel bytes for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange can be found in this SO answer.
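A minimal sketch of that path (width, height, srcY, and srcCbCr are assumed inputs for illustration, not from the question; MRC-style to match the question's code):
// Ask for an IOSurface-backed buffer; CoreVideo allocates the planes itself.
NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                   (CFDictionaryRef)attributes, &pixelBuffer);
if (ret == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // Copy row by row: the buffer's bytes-per-row may be padded for alignment.
    uint8_t *dstY = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t dstStrideY = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    for (size_t row = 0; row < height; row++)
        memcpy(dstY + row * dstStrideY, srcY + row * width, width);
    // The interleaved CbCr plane is half the height; each row is width bytes (Cb,Cr pairs).
    uint8_t *dstC = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t dstStrideC = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    for (size_t row = 0; row < height / 2; row++)
        memcpy(dstC + row * dstStrideC, srcCbCr + row * width, width);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CIImage *image = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer]; // should succeed now
    // ... use image ...
    [image release];
    CVPixelBufferRelease(pixelBuffer);
}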
