Why is UIImage from imageWithCGImage breaking in arm64?

Below is some code that converts a UIImage to a CGImage, makes some changes to the CGImage, then converts it back to a UIImage.
This works if my architectures include armv6 and armv7 only. If I add arm64, the UIImage returned at the end is null.
There are some hard-coded numbers in the code, which makes me think that's the problem, but I am not sure how to programmatically determine the correct values.
Here's the code, minus some details in the middle:
CGImageRef imageRef = [anImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Raw data malloc'd
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context =
CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
...
// Change the alpha or color of pixels based on certain criteria
...
CGImageRef ref = CGBitmapContextCreateImage(context);
free(rawData);
CGContextRelease(context);
image = [UIImage imageWithCGImage:ref];
CFRelease(ref);
Any thoughts on what's happening here?
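As for determining the hard-coded values programmatically: Core Graphics can report most of them from the source image itself. A minimal sketch follows; it removes the magic numbers, though it does not by itself explain the arm64 failure.
// Query the source image's own geometry instead of hard-coding it.
// bytesPerRow in particular may include row padding, so deriving it
// as width * bytesPerPixel is not always safe.
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel     = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow      = CGImageGetBytesPerRow(imageRef);
unsigned char *rawData  = malloc(height * bytesPerRow);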

Related

Freeing raw image data after creating a UIImage from it corrupts the image

I am taking a UIImage and breaking it down to the raw pixel data like so:
CGImageRef imageRef = self.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_rawData = (UInt8 *)calloc(height * width * 4, sizeof(UInt8));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(_rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
I then edit a couple of pixels in the _rawData array with different colors, and re-create the UIImage from the edited pixel data like so (here I am just changing the second pixel in the image to red):
size_t width = CGImageGetWidth(_image.CGImage);
NSUInteger pixel = 1; // second pixel
NSUInteger position = pixel*4;
NSUInteger redIndex = position;
NSUInteger greenIndex = position+1;
NSUInteger blueIndex = position+2;
NSUInteger alphaIndex = position+3;
_rawData[redIndex] = 255;
_rawData[greenIndex] = 0;
_rawData[blueIndex] = 0;
_rawData[alphaIndex] = 255;
size_t height = CGImageGetHeight(_image.CGImage);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4*width;
size_t length = height*width*4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, _rawData, length, NULL);
CGImageRef newImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
My problem begins here: I now have a new UIImage with the second pixel changed to red, but I also have a memory leak: I need to free the _rawData that was calloc'd. But whenever I call
free(_rawData);
the image I just created becomes corrupted when I show it on screen, even though the call happens after I've already created newImage. I thought CGImageCreate() would create a new object in memory, so that I could then free the old buffer. Is that not true?
What in the world am I doing wrong?
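For what it's worth, CGDataProviderCreateWithData does not copy the buffer: the resulting CGImage may read from it lazily, long after CGImageCreate returns, which is consistent with the corruption described above. A common pattern is to hand ownership of the buffer to the provider through a release callback, so it is freed only once nothing references it. A minimal sketch, assuming _rawData is not reused after the image is created:
// Invoked by Core Graphics when the data provider is destroyed;
// this is the safe point to free the pixel buffer.
static void ReleasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
// ...
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, _rawData, length, ReleasePixelData);
// Do not call free(_rawData) yourself afterwards; the provider owns it now.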

Strange behavior of RGB values on non-Retina displays

I am creating a bitmap in one of my iOS apps, in which I draw something on screen and capture the context using the following code:
CGImageRef imageRef = image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
unsigned char *rawData = (unsigned char*) calloc(height * width * bytesPerPixel, sizeof(unsigned char));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
I later use this data to check whether pixels are black or not. The problem I am facing: when I draw something black (RGB value 0,0,0), the RGB value for that pixel is correctly detected on Retina displays, but on non-Retina displays it does not give me 0,0,0. Does anyone know why?
Check the size of your UIImage. It may differ on a non-Retina display, so you may be checking the color of a different pixel than the one you intend.
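If the discrepancy is the scale factor, the pixel index has to account for image.scale, since a 2x (Retina) image's backing CGImage is twice as wide and tall as its point size. A minimal sketch of scale-aware indexing, where byteOffsetForPoint is a hypothetical helper and the coordinate is assumed to arrive in points while the buffer was filled from the full-resolution CGImage:
// Convert a point-based coordinate into a byte offset in the RGBA buffer.
NSUInteger byteOffsetForPoint(CGPoint point, UIImage *image)
{
    CGFloat scale = image.scale;                        // 1.0 or 2.0
    NSUInteger pixelWidth = CGImageGetWidth(image.CGImage);
    NSUInteger x = (NSUInteger)(point.x * scale);
    NSUInteger y = (NSUInteger)(point.y * scale);
    return (y * pixelWidth + x) * 4;                    // 4 bytes per pixel
}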

Painting a Gradient inside an Image by tapping in Objective-C?

I have an animal image with a white background, where the shape of the animal is a black outline. The image is fixed in an image view in my .xib.
Now I would like to paint on the image, but only within a particular closed region.
Suppose the user touches the hand; then only the hand should be filled with the gradient, and the rest of the image should stay the same.
- (UIImage*)imageFromRawData:(unsigned char *)rawData
{
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * self.imageDoodle.image.size.width;
CGImageRef imageRef = [self.imageDoodle.image CGImage];
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
CGContextRef context = CGBitmapContextCreate(rawData,self.imageDoodle.image.size.width,
self.imageDoodle.image.size.height,bitsPerComponent,bytesPerRow,colorSpace,
kCGImageAlphaPremultipliedLast);
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage* rawImage = [UIImage imageWithCGImage:newImageRef];
CGContextRelease(context);
CGImageRelease(newImageRef);
return rawImage;
}
-(unsigned char*)rawDataFromImage:(UIImage *)image
{
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSLog(@"w=%lu,h=%lu", (unsigned long)width, (unsigned long)height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
return rawData;
}
Where would I need to change my code to support this?
I believe this is possible with UIBezierPath, but I don't know how to implement it in this case.
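One way to confine the paint to the tapped closed region is a flood fill over the raw RGBA buffer from rawDataFromImage:, rather than UIBezierPath. Below is a minimal sketch under stated assumptions: FloodFillRGBA is a hypothetical helper, it fills with a flat color rather than a gradient, it assumes 4 bytes per pixel, and it treats dark pixels as the outline boundary (the threshold of 64 is arbitrary).
#include <stdbool.h>
#include <stdlib.h>
// Iterative 4-way flood fill over an RGBA8 buffer. Recolors every pixel
// reachable from (startX, startY) without crossing the dark outline.
static void FloodFillRGBA(unsigned char *rgba, size_t width, size_t height,
                          size_t startX, size_t startY,
                          unsigned char r, unsigned char g, unsigned char b)
{
    size_t count = width * height;
    size_t *stack = malloc(count * sizeof(size_t)); // explicit stack, no recursion
    bool *seen = calloc(count, sizeof(bool));       // pixels already queued
    size_t top = 0;
    size_t start = startY * width + startX;
    seen[start] = true;
    stack[top++] = start;
    while (top > 0) {
        size_t idx = stack[--top];
        unsigned char *p = rgba + idx * 4;
        // Dark pixels are treated as the black outline: do not cross them.
        if (p[0] < 64 && p[1] < 64 && p[2] < 64)
            continue;
        p[0] = r; p[1] = g; p[2] = b; p[3] = 255;   // recolor this pixel
        size_t x = idx % width, y = idx / width;
        size_t next[4];
        size_t n = 0;
        if (x + 1 < width)  next[n++] = idx + 1;
        if (x > 0)          next[n++] = idx - 1;
        if (y + 1 < height) next[n++] = idx + width;
        if (y > 0)          next[n++] = idx - width;
        for (size_t i = 0; i < n; i++) {
            if (!seen[next[i]]) { seen[next[i]] = true; stack[top++] = next[i]; }
        }
    }
    free(stack);
    free(seen);
}
To paint a gradient instead of a flat color, you would map each filled pixel's position to a gradient color in the recoloring step; afterwards the edited buffer can be turned back into an image with imageFromRawData:.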

How to change the alpha of a ground overlay in the Google Maps iOS SDK?

I added a ground overlay to a map view, and I found these ways to change the alpha of groundoverlay.icon:
How to set the opacity/alpha of a UIImage?
But it seems to have no effect in the app; I still cannot see the map or other ground overlays behind the image.
Is there a solution for this?
+ (UIImage *) setImage:(UIImage *)image withAlpha:(CGFloat)alpha
{
// Create a pixel buffer in an easy to use format
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//alter the alpha
int length = height * width * 4;
for (int i=0; i<length; i+=4)
{
m_PixelBuf[i+3] = 255*alpha;
}
//create a new image
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);
UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);
return finalImage;
}
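A detail that may matter here: the buffer uses kCGImageAlphaPremultipliedLast, so the stored R, G, and B bytes are already multiplied by alpha. Writing only the alpha byte, and forcing it to 255*alpha even for fully transparent pixels, leaves the premultiplied data inconsistent. A sketch of a loop that scales all four channels instead, under the same assumptions as the method above:
for (int i = 0; i < length; i += 4)
{
    // Scale every channel so the color stays correctly premultiplied
    // and already-transparent pixels stay transparent.
    m_PixelBuf[i]   = (UInt8)(m_PixelBuf[i]   * alpha);
    m_PixelBuf[i+1] = (UInt8)(m_PixelBuf[i+1] * alpha);
    m_PixelBuf[i+2] = (UInt8)(m_PixelBuf[i+2] * alpha);
    m_PixelBuf[i+3] = (UInt8)(m_PixelBuf[i+3] * alpha);
}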

How to perform calculations on image raw data with Core Graphics?

I'm trying to create specific custom filter effects on an image for iOS. So far, I've been getting at the raw data using CGBitmapContextCreate. However, I don't really know how to modify my rawData: I want to process the image pixel by pixel, but I have no idea how to manipulate it.
I also don't know how to draw my bitmap context back into a UIImage, so I can render the finished product in a UIImageView.
Could somebody give me some pointers on how I might achieve that?
Here's my code so far:
// First get the image into your data buffer
CGImageRef imageRef = imageView.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
//perform calculations on rawData? or on the context? not sure! I hope to process it pixel by pixel.
//am I doing this correctly?
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//set the imageview with the new image
[imageView setImage:newImage];
CGContextRelease(context);
You are almost there.
You can perform your transformations on the RGB and alpha channels via:
NSUInteger rawDataSpace = width * height * 4;
for (NSUInteger i = 0; i < rawDataSpace; i += 4) {
    rawData[i+0] = ... // red channel
    rawData[i+1] = ... // green channel
    rawData[i+2] = ... // blue channel
    rawData[i+3] = ... // alpha channel
}
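As for getting the finished buffer back into a UIImage: UIGraphicsGetImageFromCurrentImageContext only works with a context started by UIGraphicsBeginImageContext, which is not the case here. A minimal sketch continuing from the bitmap context above:
// Snapshot the bitmap context into a CGImage and wrap it in a UIImage.
CGImageRef resultRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:resultRef];
CGImageRelease(resultRef);
CGContextRelease(context);
free(rawData);
[imageView setImage:newImage];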
