I want to extract data from a UIImage and do something with it. While extracting the data I ran into a memory issue. To isolate the problem, I created a new project containing just a method that extracts data from a UIImage.
-(void)runMemoryValidation:(NSArray *)images {
    for (int i = 0; i < images.count; i++) {
        @autoreleasepool {
            NSString *imageName = [images objectAtIndex:i];
            UIImage *image = [UIImage imageNamed:imageName];
            NSUInteger width = 500; //CGImageGetWidth(image.CGImage);
            NSUInteger height = 500; //CGImageGetHeight(image.CGImage);
            //Ref<IntMatrix> matrix(new IntMatrix(width,height));
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            unsigned char *data = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
            NSUInteger bytesPerPixel = 4;
            NSUInteger bytesPerRow = bytesPerPixel * width;
            NSUInteger bitsPerComponent = 8;
            CGContextRef context = CGBitmapContextCreate(data, width, height,
                                                         bitsPerComponent, bytesPerRow, colorSpace,
                                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGColorSpaceRelease(colorSpace);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
            CGContextRelease(context);
            free(data);
        }
    }
}
I'm passing this method 100 file names; on each iteration of the loop I load the image and extract its data.
Attached is a screenshot: you can see the memory climbing very quickly, and it isn't released after the for loop finishes.
What am I doing wrong?
Thanks!
You are not doing anything wrong with memory management in this code. As @picciano said in his comment, the +imageNamed: method caches the images it loads, so use the +imageWithContentsOfFile: method instead, which doesn't cache. Also, take your measurements on an actual device, since memory usage, pressure, etc. differ when testing on the simulator.
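For example, the image-loading line inside the loop could become something like this (a minimal sketch; it assumes the 100 files ship in the main bundle):

// Load without the +imageNamed: cache (assumes the files are bundle resources)
NSString *path = [[NSBundle mainBundle] pathForResource:[imageName stringByDeletingPathExtension]
                                                 ofType:[imageName pathExtension]];
UIImage *image = [UIImage imageWithContentsOfFile:path];

Unlike +imageNamed:, an image loaded this way is released as soon as the last strong reference goes away, so each @autoreleasepool iteration should return its memory.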
Below is some code that converts a UIImage to a CGImage, makes some changes to the CGImage, and then converts it back to a UIImage.
This works when my architectures include only armv6 and armv7. If I add arm64, the UIImage returned at the end is null.
There are some hard-coded numbers in the code, which makes me think they are the problem, but I'm not sure how to determine these values programmatically.
Here's the code, minus some details in the middle:
CGImageRef imageRef = [anImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Raw data malloc'd
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
...
// Change the alpha or color of pixels based on certain criteria
...
CGImageRef ref = CGBitmapContextCreateImage(context);
free(rawData);
CGContextRelease(context);
image = [UIImage imageWithCGImage:ref];
CFRelease(ref);
Any thoughts on what's happening here?
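One way to avoid the hard-coded numbers is to query them from the source image itself; a minimal sketch using only the standard CGImage getters (these describe the source image, not the context you create):

// Query the layout of the source image instead of hard-coding it
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // typically 8
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);         // typically 32 for RGBA
size_t srcBytesPerRow = CGImageGetBytesPerRow(imageRef);        // may include row padding
CGBitmapInfo srcInfo = CGImageGetBitmapInfo(imageRef);          // alpha placement and byte order

Note that CGBitmapContextCreate only accepts certain combinations of these parameters, so drawing into a context with a fixed, known format (as the code above does) is often the more reliable route.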
I am taking a UIImage and breaking it down to the raw pixel data like so:
CGImageRef imageRef = self.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_rawData = (UInt8 *)calloc(height * width * 4, sizeof(UInt8));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(_rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
I then edit a couple of pixels in the _rawData array with different colors and re-create the UIImage from the edited pixel data. (In this example I'm just changing the second pixel in the image to red.)
size_t width = CGImageGetWidth(_image.CGImage);
NSUInteger pixel = 1; // second pixel
NSUInteger position = pixel*4;
NSUInteger redIndex = position;
NSUInteger greenIndex = position+1;
NSUInteger blueIndex = position+2;
NSUInteger alphaIndex = position+3;
_rawData[redIndex] = 255;
_rawData[greenIndex] = 0;
_rawData[blueIndex] = 0;
_rawData[alphaIndex] = 255;
size_t height = CGImageGetHeight(_image.CGImage);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4*width;
size_t length = height*width*4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, NULL);
CGImageRef newImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
My problem begins here: I now have a new UIImage with the second pixel changed to red, but I have a memory leak. I need to free the _rawData that was calloc'd. However, whenever I call
free(_rawData);
the image I just created is corrupted when I show it on screen, even though the call comes after I've already created my "newImage". I thought CGImageCreate() would create a new object in memory so that I could free the old memory. Is that not true?
What in the world am I doing wrong?
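For what it's worth, CGDataProviderCreateWithData does not copy the buffer: the resulting CGImage may keep reading from _rawData for as long as it lives, which is why freeing the buffer corrupts the image. One common fix (a sketch; releaseRawData is a name introduced here) is to hand ownership to the provider through the release callback and never call free() yourself:

// Called by the provider once nothing references the data any more
static void releaseRawData(void *info, const void *data, size_t size) {
    free((void *)data);
}

// ... then, when creating the provider:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, releaseRawData);
// The provider now owns _rawData; do not free() it manually.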
I'm trying to work with the raw pixels of an image and I'm running into some problems.
First, calling .CGImage on a C4Image doesn't work, so I have to use a UIImage to load the file.
Second, the byte array seems to be the wrong length, and the image doesn't seem to have the right dimensions or colours.
I'm borrowing some code from the discussion here.
UIImage *image = [UIImage imageNamed:@"C4Table.png"];
CGImageRef imageRef = image.CGImage;
NSData *data = (__bridge NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
unsigned char *pixels = (unsigned char *)[data bytes];
for (int i = 0; i < [data length]; i += 4) {
    pixels[i] = 0;             // red
    pixels[i+1] = pixels[i+1]; // green
    pixels[i+2] = pixels[i+2]; // blue
    pixels[i+3] = pixels[i+3]; // alpha
}
size_t imageWidth = CGImageGetWidth(imageRef);
size_t imageHeight = CGImageGetHeight(imageRef);
NSLog(@"width: %zu height: %zu datalength: %lu", imageWidth, imageHeight, (unsigned long)[data length]);
C4Image *imgimgimg = [[C4Image alloc] initWithRawData:pixels width:imageWidth height:imageHeight];
[self.canvas addImage:imgimgimg];
Is there a better way to do this or am I missing a step?
Close. There is a loadPixelData method on C4Image, and if you check the main C4 repo (C4iOS) you'll be able to see how the image class loads pixels... It can be tricky.
C4Image loadPixelData:
-(void)loadPixelData {
    const char *queueName = [@"pixelDataQueue" UTF8String];
    __block dispatch_queue_t pixelDataQueue = dispatch_queue_create(queueName, DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(pixelDataQueue, ^{
        NSUInteger width = CGImageGetWidth(self.CGImage);
        NSUInteger height = CGImageGetHeight(self.CGImage);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bytesPerPixel = 4;
        bytesPerRow = bytesPerPixel * width;
        free(rawData);
        rawData = malloc(height * bytesPerRow);
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
        CGContextRelease(context);
        _pixelDataLoaded = YES;
        [self postNotification:@"pixelDataWasLoaded"];
        pixelDataQueue = nil;
    });
}
To modify this for your question, I have done the following:
-(void)getRawPixelsAndCreateImages {
    C4Image *image = [C4Image imageNamed:@"C4Table.png"];
    NSUInteger width = CGImageGetWidth(image.CGImage);
    NSUInteger height = CGImageGetHeight(image.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    unsigned char *rawData = malloc(height * bytesPerRow);
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);

    C4Image *imgimgimg = [[C4Image alloc] initWithRawData:rawData width:width height:height];
    [self.canvas addImage:imgimgimg];

    for (int i = 0; i < height * bytesPerRow; i += 4) {
        rawData[i] = 255;
    }

    C4Image *redImgimgimg = [[C4Image alloc] initWithRawData:rawData width:width height:height];
    redImgimgimg.origin = CGPointMake(0, 320);
    [self.canvas addImage:redImgimgimg];
}
It can be quite confusing to learn how to work with pixel data, because you need to know how to work with Core Foundation (which is pretty much a C API). The main line of code that populates rawData is the call to CGContextDrawImage, which basically copies the pixels from an image into the data array you're going to play with.
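Once CGContextDrawImage has filled the array, reading or writing a single pixel is just index arithmetic; a small sketch, assuming the 4-bytes-per-pixel RGBA layout used above (x and y are hypothetical coordinates):

// Start of the pixel at (x, y) in an RGBA buffer with no extra row padding
NSUInteger offset = bytesPerRow * y + bytesPerPixel * x;
unsigned char red   = rawData[offset];
unsigned char green = rawData[offset + 1];
unsigned char blue  = rawData[offset + 2];
unsigned char alpha = rawData[offset + 3];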
I have created a gist that you can download to play around with in C4.
Working with Raw Pixels
In this gist you'll see that I actually grab the CGImage from a C4Image object, use that to populate an array of raw data, and then use that array to create a copy of the original image.
Then I modify the red component of the pixel data by setting every red value to 255, and use the modified pixel array to create a tinted version of the original image.
Since CGContextDrawImage can be quite expensive, I'm trying to minimize the amount of data I give it while I examine the pixel data. If I have two images and the CGRect of their intersection, can I get CGContextDrawImage to draw only the intersection of each image independently (resulting in two CGContextRefs, each containing one image's portion of the intersection)?
Here's some code that doesn't work, but should be close to what I need for one image...
CGImageRef imageRef = [image CGImage];
NSUInteger width = rectIntersect.size.width;
NSUInteger height = rectIntersect.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// rectIntersect down here contains the intersection of the two images...
CGContextDrawImage(context, rectIntersect, imageRef);
Well, you don't need to draw them unless you really want a CGBitmapContext with them. Use CGImageCreateWithImageInRect() to create sub-images. This doesn't necessarily require the frameworks to copy any image data; the sub-image may simply reference the image data of the original, so it can be quite efficient.
If you really do need the images drawn into contexts, you can of course just draw the sub-images.
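A minimal sketch of the sub-image route, assuming the image and rectIntersect variables from the question (note the rect must be expressed in the image's own coordinate space):

// References the intersecting region; usually no pixel copy is made
CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rectIntersect);
// ... inspect subImage, or draw it into a small bitmap context if raw bytes are needed ...
CGImageRelease(subImage);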
I'm trying to create specific custom filter effects on images for iOS. So far, I've been getting at the raw data using CGBitmapContextCreate. However, I don't really know how to modify my rawData; I hope to perform calculations on it, affecting the image pixel by pixel, but I have no idea how to manipulate it.
I also don't know how to draw my bitmap context back into a UIImage, so I can render the finished product in a UIImageView.
Could somebody give me some pointers on how I might achieve that?
Here's my code so far:
// First get the image into your data buffer
CGImageRef imageRef = imageView.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

// perform calculations on rawData? or context? not sure!! I hope to affect it pixel by pixel.
// am I doing this correctly?
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// set the image view with the new image
[imageView setImage:newImage];
CGContextRelease(context);
You are almost there.
You can perform your operations on the RGB and alpha channels via:
for (NSUInteger i = 0; i < rawDataSpace; i += 4) {
    rawData[i+0] = ...
    rawData[i+1] = ...
    rawData[i+2] = ...
    rawData[i+3] = ... // = Alpha channel
}
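To get the result back into a UIImage (the other half of the question), the usual route is to create a CGImage directly from the bitmap context rather than calling UIGraphicsGetImageFromCurrentImageContext, since no UIGraphics context was ever begun; a minimal sketch:

// Turn the modified bitmap context into a UIImage
CGImageRef resultRef = CGBitmapContextCreateImage(context); // copies the modified pixels
UIImage *newImage = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
CGContextRelease(context);
free(rawData);
[imageView setImage:newImage];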