Why doesn't CGImage data survive a round trip?

I have a freehand drawing view (users can draw lines with their finger). Since I only use a few colors, I wrote my own compression algorithm, because I want to send the drawing over a local network to another iPad. But I can't seem to get the data out of the graphics context accurately, even with this simple round-trip test:
//Get the data
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
CGContextRef c = UIGraphicsGetCurrentContext();
[self.layer renderInContext:c];
baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = baseImageView.image.CGImage;
NSData *dataToUse = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef)); // a +1 CFDataRef; under ARC this cast would need __bridge_transfer
//Reuse the data
//Reuse the data
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height,       // width and height come from another part of the program
                                8,                   // bits per component
                                32,                  // bits per pixel
                                4 * width,           // bytes per row
                                colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false,
                                kCGRenderingIntentDefault);
imageView.image = [UIImage imageWithCGImage:test];
I simply copied the data out of one CGImage and tried to insert it into another, but the result is garbage. Not only that: for some reason the data comes out as BGRA when I copy it, while CGImageCreate wants RGBA. Where am I going wrong with this round-trip test?
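One thing worth checking before re-creating the image is what layout the copied bytes actually use: a CGImage hands back its bytes in whatever format it happens to store them, which is not necessarily the RGBA layout passed to CGImageCreate. A minimal sketch using standard CGImage getters (reusing provider from above) that feeds the source image's own parameters back in:

CGImageRef source = baseImageView.image.CGImage;
// Reuse the source image's own layout so the recreated image interprets
// the copied bytes the same way the original stored them.
CGImageRef test = CGImageCreate(CGImageGetWidth(source),
                                CGImageGetHeight(source),
                                CGImageGetBitsPerComponent(source),
                                CGImageGetBitsPerPixel(source),
                                CGImageGetBytesPerRow(source),
                                CGImageGetColorSpace(source),
                                CGImageGetBitmapInfo(source), // carries the real byte order and alpha placement
                                provider, NULL, false,
                                kCGRenderingIntentDefault);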

Looks like the answer is that it's not enough to just grab the bytes from the image's data provider. You need to render the image into a bitmap context with a known pixel format and take the data from there. Revised approach:
//Get the data
CGImageRef imageRef = baseImageView.image.CGImage;
size_t height = CGImageGetHeight(imageRef);
size_t width = CGImageGetWidth(imageRef);
size_t bufferLength = width * height * 4;
unsigned char *rawData = (unsigned char *)malloc(bufferLength);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Draw into a context we control, so the pixel layout is a known RGBA format.
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8,
                                             4 * width, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSData *dataToUse = [NSData dataWithBytes:rawData length:bufferLength];
//Later: free(rawData);
The code for using the data is unchanged.
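For completeness, a sketch of the receiving side with the cleanup the snippets above omit (assuming, as the original comment implies, that width and height travel along with the bytes):

CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)dataToUse);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef test = CGImageCreate(width, height, 8, 32, 4 * width, colorSpace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                provider, NULL, false, kCGRenderingIntentDefault);
imageView.image = [UIImage imageWithCGImage:test]; // the UIImage retains the CGImage
CGImageRelease(test);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);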

Related

Why is UIImage from imageWithCGImage breaking in arm64?

Below is some code that converts a UIImage to a CGImage, makes some changes to the CGImage, then converts it back to a UIImage.
This works if my architectures include only armv6 and armv7. If I add arm64, the UIImage returned at the end is nil.
There are some hard-coded numbers in the code, which makes me think they are the problem, but I'm not sure how to determine these values programmatically.
Here's the code, minus some details in the middle:
CGImageRef imageRef = [anImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Raw data malloc'd
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
...
// Change the alpha or color of pixels based on certain criteria
...
CGImageRef ref = CGBitmapContextCreateImage(context);
free(rawData);
CGContextRelease(context);
image = [UIImage imageWithCGImage:ref];
CFRelease(ref);
Any thoughts on what's happening here?
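One way to eliminate the hard-coded numbers the asker suspects is to derive every bitmap parameter from the image itself; a sketch using standard CGImage getters (whether this is the actual arm64 fix is not confirmed here):

// Derive the bitmap parameters from the image instead of hard-coding them.
size_t width            = CGImageGetWidth(imageRef);
size_t height           = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bytesPerRow      = CGImageGetBytesPerRow(imageRef);
unsigned char *rawData  = malloc(height * bytesPerRow); // size the buffer from the image's own row stride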

Create a sub-image from context

I was wondering if there was a way to create a CGImage corresponding to a rectangle inside the context?
What I am doing right now:
I am using CGBitmapContextCreateImage to create a CGImage from a context. Then, I use CGImageCreateWithImageInRect to extract that sub-image.
Try this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    // Advance the pointer to the top-left corner of the requested sub-rectangle.
    data += x * bytesPerPixel + y * bytesPerRow;
    CGContextRef smallContext = CGBitmapContextCreate(data,
        width, height,
        CGBitmapContextGetBitsPerComponent(bigContext), bytesPerRow,
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext));
    CGImageRef image = CGBitmapContextCreateImage(smallContext);
    CGContextRelease(smallContext);
    return image;
}
or this:
static CGImageRef createImageWithSectionOfBitmapContext(CGContextRef bigContext,
    size_t x, size_t y, size_t width, size_t height)
{
    uint8_t *data = CGBitmapContextGetData(bigContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bigContext);
    size_t bytesPerPixel = CGBitmapContextGetBitsPerPixel(bigContext) / 8;
    data += x * bytesPerPixel + y * bytesPerRow;
    // Keep the big context's full row stride so each row of the sub-image starts in the right place.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data,
        height * bytesPerRow, NULL);
    CGImageRef image = CGImageCreate(width, height,
        CGBitmapContextGetBitsPerComponent(bigContext),
        CGBitmapContextGetBitsPerPixel(bigContext),
        CGBitmapContextGetBytesPerRow(bigContext),
        CGBitmapContextGetColorSpace(bigContext),
        CGBitmapContextGetBitmapInfo(bigContext),
        provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    return image;
}
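A hypothetical call site for either variant, assuming bigContext is a 32-bit RGBA bitmap context:

// Extract the 100x100 region whose top-left corner sits at (10, 20).
CGImageRef sub = createImageWithSectionOfBitmapContext(bigContext, 10, 20, 100, 100);
UIImage *subImage = [UIImage imageWithCGImage:sub];
CGImageRelease(sub); // the create... function returns a +1 reference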
You can create a cropped image as follows, as mentioned here. For example:
UIImage *image = //original image
CGRect rect = //cropped rect
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
You need to get the CGImage from the context to use the above code to crop it. You can use CGBitmapContextCreateImage as mentioned in the question. Here is the documentation.
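Putting those two steps together (a sketch; context is your bitmap context and rect the crop rectangle):

CGImageRef fullImage = CGBitmapContextCreateImage(context);
CGImageRef cropped = CGImageCreateWithImageInRect(fullImage, rect);
UIImage *img = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped);
CGImageRelease(fullImage);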
You could create your CGBitmapContext with a buffer that you allocated, and create a CGImage from scratch using the same buffer. With the context and the image sharing a buffer, you can draw into the context and then create a CGImage with that section of the master image.
Note that if you draw into the same context afterward, the cropped image may actually pick up the changes (depending on just how much shared-referencing-instead-of-copying is going on internally). Depending on what you're doing, you may or may not find this desirable.
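A sketch of that shared-buffer idea (the dimensions and RGBA layout here are assumptions for illustration):

size_t width = 512, height = 512, bytesPerRow = width * 4;
void *buffer = calloc(height, bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// The context and the master image are both backed by the same buffer.
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer,
                                                          height * bytesPerRow, NULL);
CGImageRef masterImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                       kCGImageAlphaPremultipliedLast, provider, NULL,
                                       NO, kCGRenderingIntentDefault);
// Draw into context, then pull a section out of the master image:
CGImageRef section = CGImageCreateWithImageInRect(masterImage, CGRectMake(0, 0, 100, 100));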

App Crashes when trying to update a UIImageView after modifying the RGB values for an iOS project

I'm trying to apply multiple effects to images. I've created a separate file to handle the effects processing, to which I can send a UIImageView and receive back a modified copy. To save processing time, I load the image into the processing file once and keep it in memory, rather than loading it every time I want to modify it. The flow is getImageData -> modifyRGB -> displayImage. Everything works until the last step: the returned modified image is displayed on screen for a split second, then the app crashes with an EXC_BAD_ACCESS (code 1) error. I've been over the code repeatedly and can't find the problem. Any help is greatly appreciated. Thank you!
UPDATE WITH MORE INFO
I'm using Xcode 4.3.1 with Automatic Reference Counting
Using breakpoints, I can verify that the crash happens when the line self.imageView.image = [self.imageManipulation displayImage]; is executed. The image IS updated, but then the program immediately crashes.
Using NSZombie, I get the error -[Not A Type retain]: message sent to deallocated instance 0x2cceaf80
From my viewController I use:
[self.imageManipulation getImageData:self.imageView.image];
[self.imageManipulation modifyRGB];
self.imageView.image = [self.imageManipulation displayImage];
My ImageManipulation file consists of:
@implementation ImageManipulation
static unsigned char *rgbaDataOld;
static unsigned char *rgbaDataNew;
static int width;
static int height;
- (void)getImageData:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    width = CGImageGetWidth(imageRef);
    height = CGImageGetHeight(imageRef);
    rgbaDataOld = malloc(height * width * 4);
    rgbaDataNew = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbaDataOld, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef); // careful: imageRef comes from [image CGImage], which is not owned here, so this release is one too many
}
//modify rgb values
- (void)modifyRGB
{
    for (int byteIndex = 0; byteIndex < width * height * 4; byteIndex += 4)
    {
        rgbaDataNew[byteIndex]   = (char) (int) (rgbaDataOld[byteIndex] / 3) + 1;
        rgbaDataNew[byteIndex+1] = (char) (int) (rgbaDataOld[byteIndex+1] / 3 + 1);
        rgbaDataNew[byteIndex+2] = (char) (int) (rgbaDataOld[byteIndex+2] / 3) + 1;
        rgbaDataNew[byteIndex+3] = (char) (int) 255; // force full opacity
    }
}
//set image
- (UIImage *)displayImage
{
    CGContextRef context;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);
    return outputImage;
}
@end
I changed my displayImage method to receive a UIImageView and return void. This way, all work is done on the passed UIImageView instead of a localized instance, and nothing is returned.
My guess is that the returned-UIImage approach I was using before crashed because the returned reference was deallocated the second the method completed. This also allows me to use CGImageRelease without any ill effects.
Here's the new approach:
- (void)displayImage:(UIImageView *)image
{
    CGContextRef context;
    CGImageRef imageRef = [image.image CGImage]; // (this initial value is immediately overwritten below)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(context);
    image.image = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);
    image = nil;
}
Thank you to everyone who offered help! I learned a tremendous amount just from your suggestions. This was my first time using bt and NSZombie, and now I'm using them religiously! Thanks again!
You're releasing the imageRef, which causes the UIImage to be autoreleased.
Make sure you retain the UIImage to prevent this:
- (UIImage *)displayImage
{
    CGContextRef context;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgbaDataNew, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [[UIImage imageWithCGImage:imageRef] retain];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef); // this line would otherwise cause outputImage to be released
    return outputImage;
}
In displayImage tell the compiler you want to retain the image with __strong:
UIImage __strong *outputImage = [UIImage imageWithCGImage:imageRef];
or if you prefer:
__strong UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

How to perform calculations on image raw data with CoreGraphics?

I'm trying to create specific custom filter effects on images for iOS. So far, I've been getting at the raw data using CGBitmapContextCreate. However, I don't really have an idea of how to modify my rawData to perform calculations on it. I want to process the image pixel by pixel, but I have no idea how to manipulate the buffer.
I also don't know how to draw my bitmap context back into a UIImage, so I can render the finished product in a UIImageView.
Could somebody give me some pointers on how I might achieve that?
Here's my code so far:
// First get the image into your data buffer
CGImageRef imageRef = imageView.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

// Perform calculations on rawData? Or context? Not sure!! I hope to process pixel by pixel.
// Am I doing this correctly?
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Set the image view with the new image
[imageView setImage:newImage];
CGContextRelease(context);
You are almost there.
You can perform your transformations on the RGB and alpha channels via:
NSUInteger rawDataSpace = width * height * 4; // total byte count of the buffer
for (NSUInteger i = 0; i < rawDataSpace; i += 4) {
    rawData[i+0] = ... // red
    rawData[i+1] = ... // green
    rawData[i+2] = ... // blue
    rawData[i+3] = ... // alpha channel
}
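As for getting the result back on screen: the UIGraphicsGetImageFromCurrentImageContext call in the question won't work here, because no UIKit image context was ever begun. The usual CoreGraphics route is to snapshot the bitmap context instead; a sketch:

// context is the CGBitmapContextCreate context from the question, backed by rawData.
CGImageRef resultRef = CGBitmapContextCreateImage(context); // the image gets its own copy of the bits
UIImage *newImage = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
CGContextRelease(context);
free(rawData);
[imageView setImage:newImage];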

OpenGL ES texture from random image in iOS

I've been challenged with the task of putting an oddly sized image (with fixed proportions, though) on a GL_QUAD (well, a GL_TRIANGLE_STRIP resembling one, but you get the point), and that seemed fairly easy to me at first, except for the part where I need to do this in iOS (4.2+). The solution is awkwardly easy anyway: just take the image, make a texture out of it, map it to the correct vertices, and you're good to go.
As you may very well know, OpenGL ES requires texture widths and heights to be powers of 2, like 2, 4, 8, ..., 256, 512... (I'm not sure this holds for regular OpenGL, but I think it does; anyway, it doesn't matter here).
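(As an aside, a minimal helper for rounding a dimension up to the next power of two; illustrative, not part of the original code:)

static size_t nextPowerOfTwo(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1; // double until we reach or pass n
    return p;
}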
Since I have to download these images from the Intertubes (actually, the YouTube), I can't really do anything beforehand. So I have these 480x360 images (if I remember correctly), and I have to splat them onto my triangle strips. Fortunately we have texture mapping, which lets us select the portion of the texture to be mapped where we want, so the obvious solution is to (optionally up/downsize and) pad the source image with some matte color, and live with it.
Enter iOS. I get the data from the Intertubes, I happily build the corresponding UIImage, then I make another UIImage (yes, I know, bear with me, I'll optimize it later) scaled down to the nearest power of 2 in width, preserving aspect ratio, so let's say 256x192. Then I make a bitmap context, paint it black (or, for that matter, any other color, but I think you can see why I chose black in this case), draw the UIImage (a CGImage) into it, and return the UIImage built from the aforementioned bitmap context.
I am now the happy owner of a 256x256 image ready to be mapped onto my GL_TRIANGLE_STRIP. Except that it does not work. I tried with a prepared 512x512 image and it worked flawlessly. The code I'm pasting here does not include the retrieval of the image from YouTube; I just saved it locally to rule out networking problems. Also, I'm not including the GL code, as it's clearly working.
- (void)viewDidLoad {
    images = [[NSMutableArray alloc] init];
    //NSURL *url = [NSURL URLWithString:@"http://i.ytimg.com/vi/d2wVgzXWE9Y/0.jpg"];
    NSString *path = [[NSBundle mainBundle] pathForResource:@"opengl_texture" ofType:@"jpg"];
    NSData *texData = [NSData dataWithContentsOfFile:path];
    UIImage *rawImage = [[UIImage alloc] initWithData:texData];
    float newWidth = (float)(1 << (int)floor(log2f(rawImage.size.width)));
    // Scale means the scale of the current image relative to the resulting image.
    float scale = rawImage.size.width / newWidth;
    UIImage *midImage = [UIImage imageWithCGImage:[rawImage CGImage] scale:scale orientation:UIImageOrientationUp];
    NSLog(@"%f %f %f", midImage.size.width, midImage.size.height, scale);
    [rawImage release];
    UIImage *image = [self padImage:midImage withColor:[UIColor redColor]];
    NSLog(@"%f %f", image.size.width, image.size.height);
    [images addObject:image];
    textures = malloc(sizeof(GLuint));
    glGenTextures(1, textures);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    GLuint width = CGImageGetWidth(image.CGImage);
    GLuint height = CGImageGetHeight(image.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(width * height * 4);
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextTranslateCTM(context, 0, height - height);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    CGContextRelease(context);
    free(imageData);
    [midImage release];
    [image release];
    [texData release];
}
- (UIImage *)padImage:(UIImage *)image withColor:(UIColor *)color {
    CGFloat size = round(image.size.width);
    NSLog(@"%f", size);
    CGContextRef bContext = [self createBitmapContextOfSize:CGSizeMake(size, size)];
    CGContextSetFillColorWithColor(bContext, [color CGColor]);
    CGContextFillRect(bContext, CGRectMake(0, 0, size, size));
    CGContextDrawImage(bContext, CGRectMake(0, 0, size, size), [image CGImage]);
    UIImage *result = [UIImage imageWithCGImage:CGBitmapContextCreateImage(bContext)];
    CGContextRelease(bContext);
    return result;
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL) { // check for failure before touching the context
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGContextSetAllowsAntialiasing(context, NO);
    CGColorSpaceRelease(colorSpace);
    return context;
}
Please don't bother mentioning obvious memory management issues unless you think they are the core of the problem. As for the "error message" or whatever: no, there's no such thing, the whole app just crashes.
Ok, now you can collectively smack my face with a large trout.
The problem was actually memory management: specifically, I was releasing objects that were created with implicit (convenience) methods, namely midImage and texData. Implicit creation does not increase the retain count, while explicit creation (alloc+init and friends) does. How many times have I crashed against this already? Lots. Were they enough? Obviously not.
Second question: where can I find a large post-it, like 1x1m at least?
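For the post-it, a minimal reminder of the pre-ARC ownership rule in play here (illustrative lines, not the app's code):

UIImage *owned = [[UIImage alloc] initWithData:texData]; // explicit alloc+init: you own it and must release it
UIImage *implicit = [UIImage imageWithData:texData];     // convenience constructor: autoreleased, don't release it
[owned release];          // correct
// [implicit release];    // over-release: crashes later, exactly as described above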
