The following code works the way I want it to, but every time I call it, Instruments tells me I have one CGImage memory leak. I've been having trouble understanding what to release and when. The following is from the @interface section of my file.
CGImageRef depthImageRef;
char *depthPixels;
NSData *depthData;
In the code below, I alter depthPixels and then store the result in a new depthImageRef.
size_t width = CGImageGetWidth(depthImageRef);
size_t height = CGImageGetHeight(depthImageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(depthImageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(depthImageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(depthImageRef);
for (int row = 0; row < height; row += 1) {
    for (int bitPlace = 0; bitPlace < bytesPerRow; bitPlace += 4) {
        CGPoint pointForHeight = CGPointMake((bitPlace/4) - place.x, row - place.y);
        int distanceFromLocation = sqrt(pow(place.x - pointForHeight.x, 2) + pow(place.y - pointForHeight.y, 2));
        int newHeight = blopHeight - (5 - sizeSlider.value)*distanceFromLocation;
        NSInteger baseBitPlace = row*bytesPerRow + bitPlace;
        CGFloat currentHeight = depthPixels[baseBitPlace];
        if (newHeight > currentHeight) {
            depthPixels[baseBitPlace] = newHeight;
        }
    }
}
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(depthImageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, depthPixels, [depthData length], NULL);
depthImageRef = CGImageCreate(
    width,
    height,
    bitsPerComponent,
    bitsPerPixel,
    bytesPerRow,
    colorspace,
    bitmapInfo,
    provider,
    NULL,
    false,
    kCGRenderingIntentDefault
);
CGColorSpaceRelease(colorspace);
CFRelease(provider);
CGDataProviderRelease(provider);
I believe the leak is created because I keep creating depthImageRef but never release it. I've tried putting CGImageRelease(depthImageRef) at various places and setting depthImageRef to nil, and usually when I do this I get crashes. Thanks!
You are probably converting depthImageRef back to a UIImage somewhere, like
UIImage *depthImage = [UIImage imageWithCGImage:depthImageRef];
Once you have the depthImage you can release depthImageRef. It should not cause any crash.
If you are recreating depthImageRef multiple times by calling the code above, you are leaking memory by repeatedly creating CGImageRefs and never releasing them. If you are recreating depthImageRef, you should release the old image immediately before creating the new one.
// works, even if depthImageRef is NULL such as in the initial case.
CGImageRelease(depthImageRef);
depthImageRef = CGImageCreate ( //...
Also be sure you are calling CGImageRelease in your dealloc method and freeing your depthPixels buffer there as well. CGDataProviderRelease won't free the buffer you pass into it unless you also pass a release callback for that buffer. And you don't need the CFRelease(provider) call, since you are already calling CGDataProviderRelease(provider); keeping both over-releases the provider.
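If it is more convenient to let the data provider own the pixel buffer, one option (a sketch only, not from the original answer; the callback name freeDepthPixels is hypothetical) is to pass a release callback so Quartz frees the buffer when the provider is destroyed:
// Hypothetical release callback: Quartz calls this when the data provider is destroyed.
static void freeDepthPixels(void *info, const void *data, size_t size) {
    free((void *)data);
}
// Pass the callback as the last argument; after this, do not free depthPixels yourself.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, depthPixels, [depthData length], freeDepthPixels);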
All you have to make sure is that you release the CGImageRef once you are done using it.
CGImageRelease(cgImage);
I'm writing some tests to detect changes to lossless image formats (starting with PNG) and finding that on Linux and Windows the image-loading mechanisms work as expected, but on iOS (I haven't tried macOS) the image data is always very slightly changed if I load from a PNG file on disk or save to a PNG file on disk using Apple's methods.
If I create a PNG using any number of tools (GIMP, Paint.NET, whatever) and use my cross-platform PNG-reading code to examine each pixel of the resulting loaded data, it matches exactly what I did in the tool (or what I programmatically generated with my cross-platform PNG-writing code). Subsequent reloading into the creation tools yields exactly the same RGBA8888 components.
If I load the PNG from disk using Apple's:
NSString* pPathToFile = nsStringFromStdString( sPathToFile );
UIImage* pImageFromDiskPNG = [UIImage imageWithContentsOfFile:pPathToFile];
...then examine the resulting pixels, they are similar but not the same. I would expect the data to be identical, as it is on the other platforms.
Now, interestingly, if I load the data from the PNG using my code and create a UIImage with it (using the code I show below), I can use that UIImage, display it, copy it, whatever, and if I examine the pixel data it's exactly what I gave it to begin with. That is why I think it's the loading/saving step where Apple is modifying the image data.
When I save what I know to be a good UIImage with perfect pixel data, and then load that Apple-saved image with my PNG-loading code, I can see it's not exactly the same data. I have used several of the methods Apple suggests for saving UIImages to PNG (primarily UIImagePNGRepresentation).
The only thing I can really think of is that, when loading or saving on iOS, Apple doesn't truly support RGBA8888 and is doing some sort of premultiply with the alpha channel. I speculate about this because when I first started using the code I posted below, I was choosing
kCGImageAlphaLast
...instead of what I ultimately had to use
kCGImageAlphaPremultipliedLast
because the former is not supported on iOS for some reason.
Does anyone have any experience around this issue on iOS?
Cheers!
The code I use to push/pull RGBA8888 data into and out of UIImages is below:
- (unsigned char *) convertUIImageToBitmapRGBA8:(UIImage*)image dataSize:(NSUInteger*)dataSize
{
CGImageRef imageRef = image.CGImage;
// Create a bitmap context to draw the uiimage into
CGContextRef context = [self newBitmapRGBA8ContextFromImage:imageRef];
if(!context) {
return NULL;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = CGRectMake(0, 0, width, height);
// Draw image into the context to get the raw image data
CGContextDrawImage(context, rect, imageRef);
// Get a pointer to the data
unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(context);
// Copy the data and release the memory (returns memory allocated with malloc, which the caller must free)
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
size_t bufferLength = bytesPerRow * height;
unsigned char *newBitmap = NULL;
if(bitmapData) {
*dataSize = sizeof(unsigned char) * bytesPerRow * height;
newBitmap = (unsigned char *)malloc(sizeof(unsigned char) * bytesPerRow * height);
if(newBitmap) { // Copy the data
for(int i = 0; i < bufferLength; ++i) {
newBitmap[i] = bitmapData[i];
}
}
free(bitmapData);
} else {
NSLog(#"Error getting bitmap pixel data\n");
}
CGContextRelease(context);
return newBitmap;
}
- (CGContextRef) newBitmapRGBA8ContextFromImage:(CGImageRef) image
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
uint32_t *bitmapData;
size_t bitsPerPixel = 32;
size_t bitsPerComponent = 8;
size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * bytesPerPixel;
size_t bufferLength = bytesPerRow * height;
colorSpace = CGColorSpaceCreateDeviceRGB();
if(!colorSpace) {
NSLog(#"Error allocating color space RGB\n");
return NULL;
}
// Allocate memory for image data
bitmapData = (uint32_t *)malloc(bufferLength);
if(!bitmapData) {
NSLog(#"Error allocating memory for bitmap\n");
CGColorSpaceRelease(colorSpace);
return NULL;
}
//Create bitmap context
context = CGBitmapContextCreate( bitmapData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
if( !context )
{
free( bitmapData );
NSLog( #"Bitmap context not created" );
}
CGColorSpaceRelease( colorSpace );
return context;
}
- (UIImage*) convertBitmapRGBA8ToUIImage:(unsigned char*) pBuffer withWidth:(int) nWidth withHeight:(int) nHeight
{
// Create the bitmap context
const size_t nColorChannels = 4;
const size_t nBitsPerChannel = 8;
const size_t nBytesPerRow = ((nBitsPerChannel * nWidth) / 8) * nColorChannels;
CGColorSpaceRef oCGColorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef oCGContextRef = CGBitmapContextCreate( pBuffer, nWidth, nHeight, nBitsPerChannel, nBytesPerRow , oCGColorSpaceRef, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
// create the image:
CGImageRef toCGImage = CGBitmapContextCreateImage(oCGContextRef);
UIImage* pImage = [[UIImage alloc] initWithCGImage:toCGImage];
// Release the Core Graphics temporaries so they do not leak
CGImageRelease(toCGImage);
CGContextRelease(oCGContextRef);
CGColorSpaceRelease(oCGColorSpaceRef);
return pImage;
}
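For reference, a minimal round trip through these two helpers might look like the sketch below (the asset name test.png and the assumption that the methods live on self are illustrative, not from the original post):
// Hypothetical round trip: extract the RGBA8888 bytes, then rebuild a UIImage from them.
UIImage *original = [UIImage imageNamed:@"test.png"]; // hypothetical asset name
NSUInteger dataSize = 0;
unsigned char *pixels = [self convertUIImageToBitmapRGBA8:original dataSize:&dataSize];
if (pixels) {
    UIImage *rebuilt = [self convertBitmapRGBA8ToUIImage:pixels
                                               withWidth:(int)CGImageGetWidth(original.CGImage)
                                              withHeight:(int)CGImageGetHeight(original.CGImage)];
    // ... compare the bytes in pixels against what the cross-platform PNG reader produced ...
    NSLog(@"rebuilt image: %@", rebuilt);
    free(pixels); // the converter returns a malloc'd buffer, so the caller must free it
}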
Based on your source code, it appears that you are using BGRA (RGB plus alpha channel) data imported from PNG source images. When you attach images to an iOS project, Xcode will pre-process each image to pre-multiply the RGB and alpha channel data for performance reasons. So, by the time the image is loaded on the iPhone device, the RGB values for non-opaque (A != 255) pixels can be changed. The RGB numbers are modified, but the image will come out the same when rendered to the screen by iOS. This is the difference between "straight alpha" and "pre-multiplied alpha".
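As a rough illustration of the round-off involved (the pixel values here are chosen for illustration, not taken from the answer above): premultiplying stores round(channel * alpha / 255), and un-premultiplying cannot always recover the original channel value.
// Worked example with hypothetical values: an 8-bit channel loses information
// whenever alpha < 255, because two straight values can map to the same premultiplied byte.
uint8_t alpha    = 51;                                        // 20% opaque
uint8_t straight = 201;                                       // original red value
uint8_t premult  = (uint8_t)lround(straight * alpha / 255.0); // 40 (both 200 and 201 map here)
uint8_t restored = (uint8_t)lround(premult * 255.0 / alpha);  // 200, not 201
NSLog(@"premultiplied=%u restored=%u", premult, restored);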
Store the image data directly; don't use UIImage.pngData() to convert the image to data, because that method will change a pixel's RGB values if the pixel has an alpha channel.
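For example, if the goal is a byte-for-byte copy of a PNG, one approach (a sketch with hypothetical paths) is to keep the original file's bytes instead of round-tripping through UIImage and re-encoding:
// Copy the file's bytes unchanged; no decode/re-encode, so no premultiply step can alter them.
NSData *originalBytes = [NSData dataWithContentsOfFile:sourcePath]; // sourcePath is hypothetical
[originalBytes writeToFile:destinationPath atomically:YES];        // destinationPath is hypothetical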
Take the code snippet below: it uses an @autoreleasepool block inside the method.
+ (UIImage *)decodedImageWithImage:(UIImage *)image {
// while downloading huge amount of images
// autorelease the bitmap context
// and all vars to help system to free memory
// when there are memory warning.
// on iOS7, do not forget to call
// [[SDImageCache sharedImageCache] clearMemory];
if (image == nil) { // Prevent "CGBitmapContextCreateImage: invalid context 0x0" error
return nil;
}
@autoreleasepool {
// do not decode animated images
if (image.images != nil) {
return image;
}
CGImageRef imageRef = image.CGImage;
CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
BOOL anyAlpha = (alpha == kCGImageAlphaFirst ||
alpha == kCGImageAlphaLast ||
alpha == kCGImageAlphaPremultipliedFirst ||
alpha == kCGImageAlphaPremultipliedLast);
if (anyAlpha) {
return image;
}
// current
CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(CGImageGetColorSpace(imageRef));
CGColorSpaceRef colorspaceRef = CGImageGetColorSpace(imageRef);
BOOL unsupportedColorSpace = (imageColorSpaceModel == kCGColorSpaceModelUnknown ||
imageColorSpaceModel == kCGColorSpaceModelMonochrome ||
imageColorSpaceModel == kCGColorSpaceModelCMYK ||
imageColorSpaceModel == kCGColorSpaceModelIndexed);
if (unsupportedColorSpace) {
colorspaceRef = CGColorSpaceCreateDeviceRGB();
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
// kCGImageAlphaNone is not supported in CGBitmapContextCreate.
// Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
// to create bitmap graphics contexts without alpha info.
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
bitsPerComponent,
bytesPerRow,
colorspaceRef,
kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
// Draw the image into the context and retrieve the new bitmap image without alpha
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
UIImage *imageWithoutAlpha = [UIImage imageWithCGImage:imageRefWithoutAlpha
scale:image.scale
orientation:image.imageOrientation];
if (unsupportedColorSpace) {
CGColorSpaceRelease(colorspaceRef);
}
CGContextRelease(context);
CGImageRelease(imageRefWithoutAlpha);
return imageWithoutAlpha;
}
}
(The method is in SDWebImageDecoder.m; the version is SDWebImage 3.7.0.)
I am confused by this: these temporary objects will be released after the method returns anyway, so is it necessary to use the autorelease pool just to release them a little earlier? The autorelease pool itself also occupies memory.
Can anyone explain this? Thanks!
Go through this Apple doc. It mentions the following:
Three occasions when you might use your own autorelease pool blocks:
If you are writing a program that is not based on a UI framework, such as a command-line tool.
If you write a loop that creates many temporary objects.
You may use an autorelease pool block inside the loop to dispose of those objects before the next iteration. Using an autorelease pool block in the loop helps to reduce the maximum memory footprint of the application.
If you spawn a secondary thread.
You must create your own autorelease pool block as soon as the thread begins executing; otherwise, your application will leak objects. (See Autorelease Pool Blocks and Threads for details.)
I am not sure about the first point, but SDWebImage surely uses an autorelease pool because of the other two points.
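As a rough illustration of the second point (a sketch, not code from SDWebImage; imagePaths and processImage: are hypothetical), wrapping each iteration of an image-processing loop in @autoreleasepool drains the temporaries every pass instead of only when the enclosing pool drains, which lowers the peak memory footprint:
// Hypothetical loop: without the inner pool, every autoreleased temporary
// (UIImage, NSData, ...) would accumulate until the outer pool drains.
for (NSString *path in imagePaths) {
    @autoreleasepool {
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        [self processImage:image]; // hypothetical processing method
    } // temporaries created in this iteration are released here
}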
I'm implementing Instagram-like image filters in my app and I'm using GPUImageFilter for that. But when I keep switching between different filters more than 10 times, the app crashes. I profiled it with Instruments and found a large memory allocation in the GPUImageFilter class, caused by malloc. As I'm new to memory-leak issues, please help me out! Thanks.
Here is the GPUImageFilter code:
- (UIImage *)imageFromCurrentlyProcessedOutput {
[GPUImageOpenGLESContext useImageProcessingContext];
[self setFilterFBO];
CGSize currentFBOSize = [self sizeOfFBO];
NSUInteger totalBytesForImage = (int)currentFBOSize.width * (int)currentFBOSize.height * 4;
GLubyte *rawImagePixels = (GLubyte *)malloc(totalBytesForImage); // here it's showing the large memory allocation
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImageFromBytes = CGImageCreate((int)currentFBOSize.width, (int)currentFBOSize.height, 8, 32, 4 * (int)currentFBOSize.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault, dataProvider, NULL, NO, kCGRenderingIntentDefault);
UIImage *finalImage = [UIImage imageWithCGImage:cgImageFromBytes scale:1.0 orientation:UIImageOrientationUp];
// free(rawImagePixels);
CGImageRelease(cgImageFromBytes);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(defaultRGBColorSpace);
return finalImage;
}
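The code above relies on dataProviderReleaseCallback to free rawImagePixels once the data provider is destroyed (which is why the free() call is commented out). A minimal version of such a callback would look roughly like this sketch:
// Release callback passed to CGDataProviderCreateWithData above: Quartz calls it when
// the provider is destroyed, so the pixel buffer is freed without an explicit free() here.
void dataProviderReleaseCallback(void *info, const void *data, size_t size)
{
    free((void *)data);
}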
Screenshot from Instruments:
malloc doesn't free when whatever it allocates on one thread is deallocated on another.
Wrap your code in this:
dispatch_async(dispatch_get_main_queue(), ^{
// malloc and whatever other code goes here...
});
I'm using the PhotoScrollerNetwork project to provide a single high resolution image to a view in my project and automatically tile it, so memory is managed properly. It uses this block of code to draw the full high res image to memory, so that tiles can be calculated out of it.
-(void)drawImage:(CGImageRef)image {
madvise(ims[0].map.addr, ims[0].map.mappedSize - ims[0].map.emptyTileRowSize, MADV_SEQUENTIAL);
unsigned char *addr = ims[0].map.addr + ims[0].map.col0offset + ims[0].map.row0offset * ims[0].map.bytesPerRow;
CGContextRef context = CGBitmapContextCreate(addr, ims[0].map.width, ims[0].map.height, bitsPerComponent, ims[0].map.bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
assert(context);
CGContextSetBlendMode(context, kCGBlendModeCopy); // Apple uses this in QA1708
CGRect rect = CGRectMake(0, 0, ims[0].map.width, ims[0].map.height);
CGContextDrawImage(context, rect, image);
CGContextRelease(context);
madvise(ims[0].map.addr, ims[0].map.mappedSize - ims[0].map.emptyTileRowSize, MADV_FREE);
}
In the dealloc method of the class, ims is freed (free(ims)), so this should be handled properly. However, if I make a new view (and thus a call to drawImage) repeatedly, my memory keeps filling up. I found that if I comment out CGContextDrawImage(context, rect, image); the memory is fine, so I think something is being kept in memory, but I can't figure out what... The dealloc method is always called, so that's not the problem.
EDIT:
My image is also released properly, this is the complete flow:
- (void)myFunc {
CFDictionaryRef options = [self createOptions];
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSourcRef, 0, options);
CFRelease(options);
CFRelease(imageSourcRef);
if (image) {
[self decodeImage:image];
CGImageRelease(image);
}
}
- (void)decodeImage:(CGImageRef)image {
assert(decoder == cgimageDecoder);
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
#if LEVELS_INIT == 0
zoomLevels = [self zoomLevelsForSize:CGSizeMake(width, height)];
ims = calloc(zoomLevels, sizeof(imageMemory));
#endif
[self mapMemoryForIndex:0 width:width height:height];
[self drawImage:image];
[self createLevelsAndTile];
}
Running with both local from-bundle images and network images, it appears any significant leak is gone. This is with iOS 7 and Xcode 5.
I can't figure out how to use the GLKView:snapshot method.
I'm using a GLKView to render some OpenGL stuff. It all works; seems like I have it all set up correctly.
But, when I try to do a snapshot, it fails: I get a null return value, and the following log message:
Error: CGImageCreate: invalid image size: 0 x 0.
Seems like this would mean the view itself is invalid for some reason, but it's not -- everything is working, aside from this.
I've looked at a few code samples, and I'm not doing anything different.
So... anyone seen this before? Ideas?
I never figured out the above problem; however, I found an excellent workaround: this chunk of code simply reads the render buffer and saves it to a UIImage. Problem solved!
- (UIImage*)snapshotRenderBuffer {
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
NSInteger dataLength = backingWidth * backingHeight * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0.0f, 0.0f, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(
backingWidth, backingHeight, 8, 32, backingWidth * 4, colorspace,
kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
ref, NULL, true, kCGRenderingIntentDefault);
// (sayeth abd)
// This creates a context with the device pixel dimensions -- not points.
// To be compatible with all devices, you're meant to keep everything as points and a scale factor; but,
// this gives us a scaled down image for purposes of saving. So, keep everything in device resolution,
// and worry about it later...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(backingWidth, backingHeight), NO, 0.0f);
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, backingWidth, backingHeight), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up: release the Quartz objects as well as the pixel buffer
CGImageRelease(iref);
CGDataProviderRelease(ref);
CGColorSpaceRelease(colorspace);
free(data);
return image;
}
Maybe this doesn't apply in your case, but the docs for GLKView:snapshot say:
Never call this method inside your drawing function.
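So, if the snapshot was being requested from inside the draw callback (glkView:drawInRect: or an equivalent), one thing worth trying (a sketch with a hypothetical button action and a hypothetical glkView outlet) is to take it from ordinary UI code instead:
// Hypothetical button handler: take the snapshot outside the GLKView draw callback,
// as the documentation requires.
- (IBAction)captureSnapshot:(id)sender {
    UIImage *snapshot = self.glkView.snapshot;
    if (snapshot) {
        UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);
    }
}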