Clipping a large PNG makes the iOS app crash - iOS

- (UIImage *)createThumbnailImage:(UIImage *)image withSize:(CGSize)size {
    CGRect imageRect = CGRectMake(0.0, 0.0, size.width, size.height);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height));
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh); // takes a CGInterpolationQuality constant, not a float
    [image drawInRect:imageRect blendMode:kCGBlendModeNormal alpha:1];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *inputImage = [UIImage imageNamed:@"dog.jpg"];
    UIImage *image = [self createThumbnailImage:inputImage withSize:CGSizeMake(640.0, 480.0)];
}
The code above gives me a 640 x 480 thumbnail, but an odd problem has me confused.
When I pass a JPEG (10000 x 10000) to the method, it works fine.
But when I pass a PNG of the same size, the app crashes.
I looked for documentation on the differences between JPEG and PNG, but nothing explained it.
Does anyone have any idea what causes this bug?

- (UIImage *)getNeedImageFrom:(UIImage *)image cropRect:(CGRect)rect
{
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
Can you please check if this solves your problem?
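For context: a 10000 x 10000 bitmap needs roughly 400 MB once decompressed, so the crash is most likely memory pressure rather than anything PNG-specific; why the JPEG happens to survive probably comes down to how the two decoders allocate memory, but that is speculation. If all you need is a thumbnail, a sketch like the one below, using ImageIO's CGImageSourceCreateThumbnailAtIndex, avoids ever decoding the full-size image. It assumes the original file is reachable via an NSURL; the method name and the 640 maximum are just illustrative.
#import <ImageIO/ImageIO.h>

// Sketch: build a thumbnail without decoding the full-size bitmap into memory.
- (UIImage *)thumbnailForImageAtURL:(NSURL *)url maxPixelSize:(CGFloat)maxPixelSize {
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) {
        return nil;
    }
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES,    // respect EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize) // longest side in pixels
    };
    CGImageRef thumbnailRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!thumbnailRef) {
        return nil;
    }
    UIImage *thumbnail = [UIImage imageWithCGImage:thumbnailRef];
    CGImageRelease(thumbnailRef);
    return thumbnail;
}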

Related

Change size of UIImage to desired size

I am using this code to change the size of images.
- (UIImage *)scaledImage:(UIImage *)image size:(CGSize)size {
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I currently need to resize 100 images at runtime, but this code causes a noticeable delay and the loading time is too long.
Is there a faster way to resize a UIImage?
Thanks
Try this code:
CGRect rect = CGRectMake(0,0,75,75);
UIGraphicsBeginImageContext( rect.size );
[yourCurrentOriginalImage drawInRect:rect];
UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(picture1);
UIImage *img=[UIImage imageWithData:imageData];
- (UIImage *)scaledImage:(UIImage *)image size:(CGSize)size {
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(size);
    [image drawInRect:rect];
    UIImage *picture1 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *imageData = UIImagePNGRepresentation(picture1);
    UIImage *img = [UIImage imageWithData:imageData];
    return img;
}
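Whichever drawing call you use, redrawing 100 images on the main thread will block the UI. One option, sketched below, is to do the scaling on a background queue and only touch the UI on the main queue (UIGraphics image contexts have been usable off the main thread since iOS 4). The originalImages and thumbnails properties and the reloadData call are placeholders for however the images are actually stored and displayed.
// Sketch: scale images off the main thread, then update the UI on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSMutableArray *scaledImages = [NSMutableArray array];
    for (UIImage *original in self.originalImages) {   // hypothetical array of source images
        [scaledImages addObject:[self scaledImage:original size:CGSizeMake(75, 75)]];
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        self.thumbnails = scaledImages;                 // hypothetical property read by the UI
        [self.collectionView reloadData];               // or however the images are displayed
    });
});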

Resizing image in iOS

Hi, I am developing an iOS app. I have a UIImageView with an image associated with it, and I change its dimensions in the viewDidLoad method.
Initially, when I change the dimensions, I am able to resize the image on the view. However, after I crop the image in Photoshop to the shape of the object (i.e. removing the unwanted parts of the image), my resize method no longer seems to work: the size of the image does not change even though I call the same method.
The method I am using for resizing is given below.
- (void)initXYZ {
    CGSize size;
    CGFloat x, y;
    x = 0 + myImageView1.frame.size.width;
    y = myImageView2.center.y;
    size.width = _myImageView2.frame.size.width / 2;
    size.height = _myImageView2.frame.size.width / 2;
    UIImage *image = [UIImage imageNamed:@"xyz.png"];
    image = [HomeViewController imageWithImage:image scaledToSize:size xCord:x yCord:y];
}
Utility method is given below
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize xCord:(CGFloat)X yCord:(CGFloat)Y {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Try this...
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
OR
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipV = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipV);
    CGContextDrawImage(context, newRect, imageRef);
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
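One detail worth checking regardless of which scaling code is used: the initXYZ method in the question computes a scaled image but never applies it to anything on screen, so nothing visible changes. Assuming the image is meant to end up in _myImageView2 (the outlet names below are taken from the question), something like this still has to happen at the end of initXYZ:
// Hypothetical: apply the scaled image and new frame to the image view.
image = [HomeViewController imageWithImage:image scaledToSize:size xCord:x yCord:y];
_myImageView2.image = image;
_myImageView2.frame = CGRectMake(x, y, size.width, size.height);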
Try this:
- (UIImage *)resizeAndStoreImages:(UIImage *)img
{
    UIImage *chosenImage = img;
    NSData *imageData = UIImageJPEGRepresentation(chosenImage, 1.0);
    int resizedImgMaxHeight = 500;
    int resizedImgMaxWidth = 500;
    UIImage *resizedImageData;
    if (chosenImage.size.height > chosenImage.size.width && chosenImage.size.height > resizedImgMaxHeight) { // portrait
        int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
        CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else if (chosenImage.size.width > chosenImage.size.height && chosenImage.size.width > resizedImgMaxWidth) { // landscape
        int height = (chosenImage.size.height / chosenImage.size.width) * resizedImgMaxWidth;
        CGRect rect = CGRectMake(0, 0, resizedImgMaxWidth, height);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else {
        if (chosenImage.size.height > resizedImgMaxHeight) {
            int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
            CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
            UIGraphicsBeginImageContext(rect.size);
            [chosenImage drawInRect:rect];
            UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
            pic1 = nil;
        } else {
            resizedImageData = [UIImage imageWithData:imageData];
        }
    }
    return resizedImageData;
}
Adjust resizedImgMaxHeight and resizedImgMaxWidth as needed.
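The three branches above can also be collapsed into a single aspect-fit scale factor, which additionally avoids the JPEG encode/decode round trip. A minimal sketch follows; the 500-point maximum is the same assumption as above, and MIN is the Foundation macro.
// Sketch: aspect-fit an image into a 500 x 500 box without re-encoding to JPEG.
- (UIImage *)resizedImageFitting:(UIImage *)image
{
    CGFloat maxSide = 500.0;
    CGFloat scale = MIN(1.0, MIN(maxSide / image.size.width, maxSide / image.size.height));
    if (scale == 1.0) {
        return image;   // already small enough
    }
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}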

Cropped iOS Image comes back too large

Been trying to fix this problem all day to no avail.
Pretty much, I'm taking a screenshot of the view, then trying to crop out the first 50 px and a footer. The problem is that when I do this, the result comes back a little blown up and quality is lost. Here's what I wrote, which I think accounts for retina displays.
- (UIImage *)takeSnapShotAndReturn {
    // Take screenshot of whole view
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    }
    else {
        UIGraphicsBeginImageContext(self.view.window.bounds.size);
    }
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    combinedImage = [self cropOutArea:image withRectangle:CGRectMake(0, 50, 320, 467)];
    UIImageWriteToSavedPhotosAlbum(combinedImage, nil, nil, nil);
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)cropOutArea:(UIImage *)image withRectangle:(CGRect)rectangle {
    if (image.scale > 1) {
        rectangle = CGRectMake(rectangle.origin.x * image.scale,
                               rectangle.origin.y * image.scale,
                               rectangle.size.width * image.scale,
                               rectangle.size.height * image.scale);
    }
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rectangle);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
I find cropping extremely confusing!
I'm not sure EXACTLY what you're trying to do, but this may be it .....
- (UIImage *)simplishTopCropAndTo640:(UIImage *)fromImage
// moderately optimised!
{
    float shortDimension = fminf(fromImage.size.width, fromImage.size.height);
    // 1. use CGImageCreateWithImageInRect to take only the top square...
    // 2. use drawInRect (or CGContextDrawImage, same) to scale...
    CGRect topSquareOfOriginalRect =
        CGRectMake(0, 0, shortDimension, shortDimension);
        // NOT fromImage.size.width,fromImage.size.width);
    CGImageRef topSquareIR = CGImageCreateWithImageInRect(
        fromImage.CGImage, topSquareOfOriginalRect);
    CGSize size = CGSizeMake(640, 640);
    CGRect sized = CGRectMake(0.0f, 0.0f, size.width, size.height);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef cc = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(cc, kCGInterpolationLow);
    CGContextTranslateCTM(cc, 0, size.height);
    CGContextScaleCTM(cc, 1.0, -1.0);
    CGContextDrawImage(cc, sized, topSquareIR);
    // arguably, those three lines more simply...
    // [[UIImage imageWithCGImage:topSquareIR] drawInRect:sized];
    CGImageRelease(topSquareIR);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    result =
        [UIImage imageWithCGImage:result.CGImage
                            scale:result.scale
                      orientation:fromImage.imageOrientation];
    // consider...something like...
    // [UIImage imageWithCGImage:cgimg
    //     scale:3 orientation:fromImage.imageOrientation];
    return result;
}
Consider also this valuable category .....
- (UIImage *)ordinaryCrop:(CGRect)toRect
{
    // crops any image, to any rect. you can't beat that
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], toRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropped;
}
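Assuming that method is declared in a UIImage category (a hypothetical UIImage+Crop), usage on the snapshot might look like the sketch below. Keep in mind that CGImageCreateWithImageInRect works in pixels, so on a retina screenshot the rect has to be multiplied by the image's scale, just as cropOutArea: above already does.
// Hypothetical usage of the ordinaryCrop: category method on a retina-aware snapshot.
UIImage *snapshot = [self takeSnapShotAndReturn];
CGFloat s = snapshot.scale;
UIImage *body = [snapshot ordinaryCrop:CGRectMake(0, 50 * s, 320 * s, 467 * s)];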
Finally, if you're using the camera, don't forget this, "the most useful code in the universe!": iOS UIImagePickerController result image orientation after upload
Hope it helps somehow
Try setting this BOOL property before releasing result in cropOutArea.
result.layer.masksToBounds = YES;

How to crop an image at a particular position and set it to another view

Crop the image from a particular position and set it to another view.
The final image view:
self.finalImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 20, 320, 320)];
if (rectangle_button_preesed_view)
{
    self.finalImageView.image = [self croppIngimageByImageName:self.imageView.image toRect:CGRectMake(30, 120, 260, 340)];
}
else
{
    self.finalImageView.image = [self croppIngimageByImageName:self.imageView.image toRect:CGRectMake(30, 80, 260, 260)];
}
Cropping image
- (UIImage *)croppIngimageByImageName:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    NSLog(@"cropped size %f %f", cropped.size.width, cropped.size.height);
    return cropped;
}
@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)cropRect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], cropRect);
    UIImage *cropedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropedImage;
}
@end
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
// or use the UIImage wherever you like
[yourImageView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
OR
@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)rect {
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
@end
Besides the Core Graphics solution, I would also suggest using Core Image, which supports filters for image manipulation. You should look at the CICrop filter described in the documentation.
Note: this filter is available on iOS 5 and later.
I currently do not have an exact solution, but you can look at a similar thread.
Hope it helps!
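Since the Core Image suggestion above does not come with code, here is a minimal sketch of what a CICrop-based crop might look like; the method name and the context handling are my own assumptions, and note that Core Image rects use a bottom-left origin.
#import <CoreImage/CoreImage.h>

// Sketch: crop with Core Image's CICrop filter (iOS 5+).
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)rect
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
    [cropFilter setValue:inputImage forKey:kCIInputImageKey];
    [cropFilter setValue:[CIVector vectorWithCGRect:rect] forKey:@"inputRectangle"];
    CIImage *outputImage = cropFilter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef croppedRef = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}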

iOS, Generated images, and masking

I'm trying to generate an image that is lozenge-shaped and shows some percentage finished versus unfinished. The way I implemented this was as follows:
1. Generate two rectangles: one the size of the filled region, the other the size of the empty rectangle.
2. Invoke UIGraphicsBeginImageContext() with the size of the rectangle I am interested in.
3. Draw the two rectangles in the context side by side.
4. Grab the image from the context and end the context.
5. Create a new masked image by using CGImageMaskCreate() followed by CGImageCreateWithMask() and extracting the masked image.
I generate the filled and empty bitmaps using category extensions to UIImage, and then apply a static mask image to them.
The Problem: This works fine in the simulator, but the masking doesn't work on a real device.
Instead of including the code here, I'm including a link to a project that has the code. The relevant files are:
UIImage.h/UIImage.m: The category extension to UIImage that adds both the "create an image with a specified color" and "create a masked image using the supplied mask".
TLRangeDisplay.h/TLRangeDisplay.m: the code for my lozenge-shaped status display. The routine of interest there is fillWithRect().
Here is the code I added to UIImage (via a category):
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)maskWith:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *image = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return image;
}
And here is the routine that does the masking:
- (void)fillWithRect {
    CGRect f = self.frame;
    CGFloat width = f.size.width;
    CGFloat fullRange = maxValue_ - minValue_;
    CGFloat filledRange = currentValue_ - minValue_;
    CGRect fillRect = CGRectMake(0, 0, (filledRange * width) / fullRange, f.size.height);
    CGRect emptyRect = CGRectMake(fillRect.size.width, 0, width - fillRect.size.width, f.size.height);
    UIImage *fillImage = nil;
    UIImage *emptyImage = nil;
    if (fillRect.size.width > 0) {
        fillImage = [UIImage imageWithColor:fillColor_ andSize:fillRect.size];
    }
    if (emptyRect.size.width > 0) {
        emptyImage = [UIImage imageWithColor:emptyColor_ andSize:emptyRect.size];
    }
    // Build the 2-color image
    UIGraphicsBeginImageContext(f.size);
    [fillImage drawInRect:fillRect];
    [emptyImage drawInRect:emptyRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Mask it
    if (nil != maskImage_)
        image = [image maskWith:maskImage_];
    CGRect fullRect = CGRectMake(0, 0, f.size.width, f.size.height);
    // Merge it with the shape
    UIGraphicsBeginImageContext(f.size);
    [image drawInRect:fullRect];
    [shapeImage_ drawInRect:fullRect];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [shownView_ removeFromSuperview];
    shownView_ = [[UIImageView alloc] initWithImage:image];
    [self addSubview:shownView_];
    if (nil != shownView_)
        [self bringSubviewToFront:shownView_];
}
The project can be downloaded from http://dl.dropbox.com/u/5375467/ColorPlayOS4.zip
Thanks for any insights on this problem!
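One common reason CGImageMaskCreate-based masking behaves differently on the device is the mask image itself: a true mask is expected to be grayscale with no alpha channel, while images that come out of UIGraphicsGetImageFromCurrentImageContext or the app bundle usually carry alpha. As a guess at the cause rather than a confirmed fix, a sketch that redraws the mask into a plain device-gray bitmap before handing it to maskWith: might look like this:
// Sketch: redraw an arbitrary UIImage into an 8-bit grayscale, no-alpha bitmap
// so it can safely be used with CGImageMaskCreate.
- (CGImageRef)newGrayscaleMaskFromImage:(UIImage *)maskImage
{
    CGImageRef original = maskImage.CGImage;
    size_t width = CGImageGetWidth(original);
    size_t height = CGImageGetHeight(original);
    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8, 0, graySpace, kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (!context) {
        return NULL;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), original);
    CGImageRef grayMask = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return grayMask; // caller is responsible for calling CGImageRelease
}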
