imageWithCGImage makes app crash - memory

My project uses automatic reference counting, but something goes wrong when I use CGBitmapContextCreateImage and UIImage's imageWithCGImage method. I wrote some test code; while it runs, memory keeps increasing until the app crashes.
The device:
[16G, iPad2, retina]
The code:
- (UIImage*)createImageByImage:(UIImage*)img
{
    CGSize imgSize = img.size;
    imgSize.width *= [UIScreen mainScreen].scale;
    imgSize.height *= [UIScreen mainScreen].scale;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(nil, imgSize.width, imgSize.height, 8,
                                             imgSize.width * (CGColorSpaceGetNumberOfComponents(space) + 1),
                                             space, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    CGRect rect;
    rect.origin = CGPointMake(0, 0);
    rect.size = imgSize;
    // some transformations applied to the CGContext are omitted here
    CGContextDrawImage(ctx, rect, img.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage* image = [UIImage imageWithCGImage:cgImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationDown];
    CGImageRelease(cgImage);
    return image;
}
The caller:
UIImage* baseImg = [UIImage imageNamed:@"corkboard.jpg"]; // any big image, e.g. 1024*768
for (int i = 0; i < 100; i++) {
    UIImage* tempImg = [self createImageByImage:baseImg];
    tempImg = nil;
}
baseImg = nil;
Can someone explain what is happening? Waiting for your help!
One more thing: if, in the code, I simply replace
[UIImage imageWithCGImage:cgImage ....];
with
[UIImage imageWithCGImage:img.CGImage ....];
the problem disappears! But then the function behaves differently and no longer meets the original requirement.
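Not part of the original question, but for reference: when autoreleased images pile up in a tight loop like the caller above, one common mitigation is to wrap each iteration in an @autoreleasepool block so the temporary UIImage objects are freed on every pass. A minimal sketch, reusing the question's names:
// Sketch: same caller loop as above, but each iteration gets its own
// autorelease pool, so the autoreleased UIImage returned by
// imageWithCGImage: is released before the next pass.
UIImage* baseImg = [UIImage imageNamed:@"corkboard.jpg"];
for (int i = 0; i < 100; i++) {
    @autoreleasepool {
        UIImage* tempImg = [self createImageByImage:baseImg];
        // use tempImg here; it is freed when the pool drains
        tempImg = nil;
    }
}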

Related

Resizing image in iOS

Hi, I am developing an iOS app. I have a UIImageView with an image associated with it, and I am changing its dimensions in the viewDidLoad method.
Initially, when I change the dimensions, I am able to resize the image on the view. However, after I crop the image (using Photoshop) to the shape of the object in it (i.e. getting rid of the unwanted parts of the image), my resize method no longer seems to work: the size of the image does not change even though I call the same method.
The method I am using for resizing is given below.
-(void)initXYZ {
    CGSize size;
    CGFloat x, y;
    x = 0 + myImageView1.frame.size.width;
    y = myImageView2.center.y;
    size.width = _myImageView2.frame.size.width/2;
    size.height = _myImageView2.frame.size.width/2;
    UIImage *image = [UIImage imageNamed:@"xyz.png"];
    image = [HomeViewController imageWithImage:image scaledToSize:size xCord:x yCord:y];
}
The utility method is given below.
+(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize xCord:(CGFloat)X yCord:(CGFloat)Y {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Try this...
+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
OR
+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipV = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipV);
    CGContextDrawImage(context, newRect, imageRef);
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
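For reference, a hypothetical call site for either helper above (it reuses the question's _myImageView2 outlet and xyz.png asset, and assumes the helper lives on HomeViewController; assigning the result back to the image view is my addition, since the resized UIImage has to be used somewhere to become visible):
CGSize size = CGSizeMake(_myImageView2.frame.size.width / 2,
                         _myImageView2.frame.size.width / 2);
UIImage *scaled = [HomeViewController imageWithImage:[UIImage imageNamed:@"xyz.png"]
                                        scaledToSize:size];
_myImageView2.image = scaled; // keep the scaled image, otherwise nothing changes on screen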
Try this:
- (UIImage*)resizeAndStoreImages:(UIImage*)img
{
    UIImage *chosenImage = img;
    NSData *imageData = UIImageJPEGRepresentation(chosenImage, 1.0);
    int resizedImgMaxHeight = 500;
    int resizedImgMaxWidth = 500;
    UIImage *resizedImageData;
    if (chosenImage.size.height > chosenImage.size.width && chosenImage.size.height > resizedImgMaxHeight) { // portrait
        int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
        CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else if (chosenImage.size.width > chosenImage.size.height && chosenImage.size.width > resizedImgMaxWidth) { // landscape
        int height = (chosenImage.size.height / chosenImage.size.width) * resizedImgMaxWidth;
        CGRect rect = CGRectMake(0, 0, resizedImgMaxWidth, height);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else {
        if (chosenImage.size.height > resizedImgMaxHeight) {
            int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
            CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
            UIGraphicsBeginImageContext(rect.size);
            [chosenImage drawInRect:rect];
            UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
            pic1 = nil;
        } else {
            resizedImageData = [UIImage imageWithData:imageData];
        }
    }
    return resizedImageData;
}
Adjust resizedImgMaxHeight and resizedImgMaxWidth as needed.
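A hypothetical call site (self.imageView is an assumption, not part of the answer):
UIImage *resized = [self resizeAndStoreImages:self.imageView.image]; // longer side capped at 500
self.imageView.image = resized;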

Masking and reverse masking in UIImageView (iOS)

I have a UIImageView that contains an image, and a mask shape of a rabbit.
I have code that gives the result below.
- (UIImage*)mynewmaskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, 320, 380, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL)
        return NULL;
    CGFloat ratio = 0;
    ratio = 320 / image.size.width;
    if (ratio * image.size.height < 380) {
        ratio = 380 / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {320, 380}};
    CGRect rect2 = {{-((image.size.width*ratio)-320)/2, -((image.size.height*ratio)-380)/2}, {image.size.width*ratio, image.size.height*ratio}};
    // CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    // return the image
    return theImage;
}
The above code gives this result.
But I want the result below (like reverse masking).
How is this possible? Please help me.
Thanks.
You should look into blend modes. Try something like this:
[rabbitImage drawInRect:rect
blendMode:kCGBlendModeDestinationOut
alpha:1.0];
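A minimal sketch of how that blend mode could be applied for the reverse effect (the method name and the assumption that the shape image covers the same rect as the photo are mine, not from the original answer): draw the photo first, then draw the shape with kCGBlendModeDestinationOut so it erases the pixels where the shape is opaque.
- (UIImage *)reverseMaskImage:(UIImage *)image withShape:(UIImage *)shapeImage {
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
    [image drawInRect:rect];                               // base picture
    [shapeImage drawInRect:rect
                 blendMode:kCGBlendModeDestinationOut      // punch the shape out of the picture
                     alpha:1.0];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}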

EXC_BAD_ACCESS error with CGImageRef

I have a function:
- (UIImage *)resizeImage:(UIImage *)image width:(CGFloat)resizedWidth height:(CGFloat)resizedHeight shouldCrop:(BOOL)crop
{
    CGImageRef imageRef = [image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, resizedWidth, resizedHeight, 8, 4 * resizedWidth, colorSpace, (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, resizedWidth, resizedHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    if (crop) // for SmallImage (tableviewcell)
    {
        CGRect cropRect = CGRectMake(0, CGImageGetHeight(ref) * 0.5 - ((float) kIvHeight * 0.5), (float) kIvWidth, (float) kIvHeight);
        ref = CGImageCreateWithImageInRect(ref, cropRect);
    }
    UIImage * result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    CGImageRelease(imageRef);
    return result;
}
The line CGImageRelease(imageRef); causes my app to crash with EXC_BAD_ACCESS.
It seems to work when I remove that line, but wouldn't that cause a memory leak?
You don't own the CGImageRef imageRef, because you obtained it with [image CGImage], so you should not release it.
Take a look at this one as well: How do I release a CGImageRef in iOS
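For reference, a sketch of the same function with the Create/Copy ownership rule applied throughout (it reuses the question's kIvWidth/kIvHeight constants, and also releases the color space, which the original omits; treat it as an illustration rather than the poster's code):
- (UIImage *)resizeImage:(UIImage *)image width:(CGFloat)resizedWidth height:(CGFloat)resizedHeight shouldCrop:(BOOL)crop
{
    CGImageRef imageRef = [image CGImage];          // not owned: no release
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, resizedWidth, resizedHeight, 8,
                                                4 * resizedWidth, colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);                // owned by this code, release it
    CGContextDrawImage(bitmap, CGRectMake(0, 0, resizedWidth, resizedHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap); // owned (Create)
    if (crop) {
        // kIvWidth / kIvHeight come from the question's project
        CGRect cropRect = CGRectMake(0, CGImageGetHeight(ref) * 0.5 - kIvHeight * 0.5, kIvWidth, kIvHeight);
        CGImageRef cropped = CGImageCreateWithImageInRect(ref, cropRect); // owned (Create)
        CGImageRelease(ref);                        // release the uncropped image before replacing it
        ref = cropped;
    }
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}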

I want to generate a gray image in iOS by drawing with CGContextFillXXX, but I failed

CGRect rect = CGRectMake(0, 0, 100, 100);
float white[1] = {0.0f};
float gray[1] = {1.0f};
CGContextSetFillColorSpace(bitmap, colorSpace);
CGContextSetFillColorWithColor(bitmap, CGColorCreate(colorSpace, white));
CGContextClearRect(bitmap, rect);
CGContextSetFillColorWithColor(bitmap, CGColorCreate(colorSpace, gray));
CGContextFillEllipseInRect(bitmap, rect);
CGImageRef imgRef = CGBitmapContextCreateImage(bitmap);
UIImage *image = [[[UIImage alloc] initWithCGImage:imgRef] autorelease];
self.imageView.image = image;
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
The code is above. It doesn't work as I expect; I'm not familiar with the iOS Core Graphics framework.
See the syntax for CGColorCreate in the Apple documentation.
Example code:
CGFloat components[]={r, g, b, 1.f};
drawColor=CGColorCreate(colorSpace, components);
CGColorSpaceRelease(colorSpace);
Note: you have to define r, g, and b yourself; they are all of CGFloat type.
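To tie it back to the question, here is a minimal self-contained sketch (my own, not from the answer) that fills a gray ellipse in an RGB bitmap context; the key point is that the component array must match the color space, so a device-RGB color needs four CGFloat values (r, g, b, alpha):
CGRect rect = CGRectMake(0, 0, 100, 100);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 100, 100, 8, 0, colorSpace,
                                            (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGFloat grayComponents[] = {0.5f, 0.5f, 0.5f, 1.0f};   // mid gray, fully opaque
CGColorRef grayColor = CGColorCreate(colorSpace, grayComponents);
CGContextSetFillColorWithColor(bitmap, grayColor);
CGContextFillEllipseInRect(bitmap, rect);
CGImageRef imgRef = CGBitmapContextCreateImage(bitmap);
self.imageView.image = [UIImage imageWithCGImage:imgRef];
CGColorRelease(grayColor);
CGImageRelease(imgRef);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);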

How can I crop an image to have curved irregular edges?

I have a UITableViewCell with an image in the right size.
This is how the cell should look:
And I have the background:
And the image placeholder:
And I want to know if there is a way to crop the image with the iOS libraries.
Yes, that's possible:
UIImage *imageToCrop = ...;
UIGraphicsBeginImageContext(imageToCrop.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// clip to an ellipse first, then draw the image into the clipped context
CGContextAddEllipseInRect(context, CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height));
CGContextClip(context);
[imageToCrop drawAtPoint:CGPointZero];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can use Core Graphics to add a mask or clip with a path. A mask is an image whose alpha channel determines which parts of the image show. Below is an example of how to clip with an image mask:
- (UIImage *)croppedImage:(UIImage *)sourceImage
{
    // width and height were not defined in the original snippet;
    // here they are taken from the mask image as a reasonable default
    UIImage *maskImage = [UIImage imageNamed:@"mask"];
    CGFloat width = maskImage.size.width;
    CGFloat height = maskImage.size.height;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, [UIScreen mainScreen].scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToMask(context, CGRectMake(0, 0, width, height), maskImage.CGImage);
    [sourceImage drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}
Then you can write cell.picture = [self croppedImage:sourceImage];
You can use the image masking technique to crop this image.
Please have a look at this link:
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
I have written some code that may help you out:
@interface ImageRenderer : NSObject {
    UIImage *image_;
}

@property (nonatomic, retain) UIImage *image;

- (void)cropImageinRect:(CGRect)rect;
- (void)maskImageWithMask:(UIImage *)maskImage;
- (void)imageWithAlpha;

@end

@implementation ImageRenderer

@synthesize image = image_;

- (void)cropImageinRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, rect);
    image_ = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
}

- (void)maskImageWithMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the color space, so release it here
    if (mainViewContentContext == NULL) {
        return;
    }
    CGFloat ratio = 0;
    ratio = maskImage.size.width / image_.size.width;
    if (ratio * image_.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image_.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image_.size.width*ratio)-maskImage.size.width)/2, -((image_.size.height*ratio)-maskImage.size.height)/2}, {image_.size.width*ratio, image_.size.height*ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image_.CGImage);
    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    image_ = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
}

- (void)imageWithAlpha {
    CGImageRef imageRef = image_.CGImage;
    CGFloat width = CGImageGetWidth(imageRef);
    CGFloat height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef resultImageRef = CGBitmapContextCreateImage(context);
    image_ = [UIImage imageWithCGImage:resultImageRef scale:image_.scale orientation:image_.imageOrientation];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(resultImageRef);
}

@end
With this code you can crop the image out of a bigger one, and then use a mask image to get your work done.
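A hypothetical call site (sourceImage, the mask asset name, and the cell's image view are placeholders, not part of the original answer):
ImageRenderer *renderer = [[ImageRenderer alloc] init];
renderer.image = sourceImage;
[renderer maskImageWithMask:[UIImage imageNamed:@"mask"]]; // clip sourceImage to the mask's shape
cell.imageView.image = renderer.image;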
