Here is a camera demo from the iOS developer center; the function used to shrink the image is below.
The problem I've met is that the image gets stretched when its width < height.
I need to scale and shrink the image into a square (width : height = 1 : 1).
Does anybody have a solution for this?
Thanks in advance for your prompt help.
static UIImage *shrinkImage(UIImage *original, CGSize size) {
    CGFloat scale = [UIScreen mainScreen].scale;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width * scale,
                                                 size.height * scale, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, original.size.width * scale, original.size.width * scale),
                       original.CGImage);
    CGImageRef shrunken = CGBitmapContextCreateImage(context);
    UIImage *final = [UIImage imageWithCGImage:shrunken];
    CGContextRelease(context);
    CGImageRelease(shrunken);
    return final;
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    if (newSize.width > newSize.height)
        newSize = CGSizeMake(newSize.height, newSize.height);
    else
        newSize = CGSizeMake(newSize.width, newSize.width);

    UIGraphicsBeginImageContextWithOptions(newSize, YES, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
If you do not maintain the aspect ratio, your image will inevitably look stretched.
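If you want a square thumbnail without distortion, the usual approach is to scale the image so that its shorter side fills the square and crop the overflow (aspect fill). A minimal sketch of that idea, assuming a hypothetical helper named squareThumbnailFromImage rather than the method above:

// Minimal sketch: aspect-fill square crop (helper name is hypothetical).
static UIImage *squareThumbnailFromImage(UIImage *image, CGFloat side) {
    // Scale so the *shorter* side fills the square, preserving the aspect ratio.
    CGFloat scale = MAX(side / image.size.width, side / image.size.height);
    CGFloat scaledWidth = image.size.width * scale;
    CGFloat scaledHeight = image.size.height * scale;
    // Center the scaled image; whatever falls outside the square is cropped.
    CGRect drawRect = CGRectMake((side - scaledWidth) / 2.0,
                                 (side - scaledHeight) / 2.0,
                                 scaledWidth, scaledHeight);

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), YES, [UIScreen mainScreen].scale);
    [image drawInRect:drawRect];
    UIImage *square = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return square;
}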
I have an image view that contains the image, and a mask shape that contains the shape of a rabbit.
I have some code that gives the result below.
- (UIImage *)mynewmaskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, 320, 380, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL)
        return nil;

    CGFloat ratio = 0;
    ratio = 320 / image.size.width;
    if (ratio * image.size.height < 380) {
        ratio = 380 / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {320, 380}};
    CGRect rect2 = {{-((image.size.width * ratio) - 320) / 2, -((image.size.height * ratio) - 380) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // return the image
    return theImage;
}
The above code gives this result.
But I want the result below (like reverse masking).
How is this possible? Please help me.
Thanks.
You should look into blend modes. Try something like this:
[rabbitImage drawInRect:rect
blendMode:kCGBlendModeDestinationOut
alpha:1.0];
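A fuller sketch of that idea, assuming you draw the photo first and then punch the rabbit shape out of it with kCGBlendModeDestinationOut (photo, rabbitShape, and rect are placeholder names):

// Sketch: "reverse masking" by erasing the rabbit shape out of the photo.
UIGraphicsBeginImageContextWithOptions(rect.size, NO, [UIScreen mainScreen].scale);

// Draw the photo normally first.
[photo drawInRect:CGRectMake(0, 0, rect.size.width, rect.size.height)];

// Destination-out erases the destination wherever the drawn image is opaque,
// leaving a rabbit-shaped hole instead of a rabbit-shaped cutout.
[rabbitShape drawInRect:CGRectMake(0, 0, rect.size.width, rect.size.height)
              blendMode:kCGBlendModeDestinationOut
                  alpha:1.0];

UIImage *reverseMasked = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();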
Landscape images resize properly, but portrait images do not.
Could somebody please help me sort out this problem?
Code:
CGSize newSize = CGSizeMake(width, height);
float widthRatio = newSize.width / image.size.width * image.scale;
float heightRatio = newSize.height / image.size.height * image.scale;
NSLog(@"image size %f %f %f", image.size.width, image.size.height, image.scale);

if (widthRatio > heightRatio)
    newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
else
    newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);

UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this:
- (UIImage *)resizeImage:(UIImage *)image imageSize:(CGSize)size {
    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;
    CGFloat requiredWidth = (imageWidth * size.height) / imageHeight;

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(requiredWidth, size.height), NO, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0, 0, requiredWidth, size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // here is the scaled image, which has been changed to the size specified
    UIGraphicsEndImageContext();
    return newImage;
}
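Note that this resizes by height only, so a very wide image can still come out wider than the requested width. If you need the result to fit within both dimensions, a hedged variant (the method name aspectFitImage:inSize: is made up for illustration) could look like this:

// Sketch: aspect-fit resize that respects both the requested width and height.
- (UIImage *)aspectFitImage:(UIImage *)image inSize:(CGSize)size {
    // Use the smaller ratio so the whole image fits inside the box.
    CGFloat ratio = MIN(size.width / image.size.width, size.height / image.size.height);
    CGSize fittedSize = CGSizeMake(image.size.width * ratio, image.size.height * ratio);

    UIGraphicsBeginImageContextWithOptions(fittedSize, NO, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0, 0, fittedSize.width, fittedSize.height)];
    UIImage *fitted = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return fitted;
}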
I was facing a very similar issue a few days back. I was trying to fit a larger image into a smaller UIImageView using a resizeImage method like the one mentioned in one of the answers here, but it did not solve my problem. So here is what I used instead:
yourImageView.contentMode = UIViewContentModeCenter;
[yourImageView setClipsToBounds:YES];
OR
[yourImageView setContentMode:UIViewContentModeScaleAspectFill];
yourImageView.autoresizingMask = ( UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight );
[yourImageView setClipsToBounds:YES];
Yes, this does not actually scale your image, but the image will not be squished. I solved my problem this way; hopefully it solves yours too.
I have a UITableViewCell with an image in the right size.
This is how the cell should look:
And I have the background:
And the image placeholder:
I want to know if there is a way to crop the image with the iOS library.
Yes, that's possible:
UIImage *imageToCrop = ...;

UIGraphicsBeginImageContext(imageToCrop.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip to an ellipse first, then draw, so the clip actually applies to the image.
CGContextAddEllipseInRect(context, CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height));
CGContextClip(context);
[imageToCrop drawAtPoint:CGPointZero];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can use Core Graphics to add a mask or clip with a path. A mask is an image whose alpha channel determines which parts of the image show. Below is an example of how to clip with an image mask:
- (UIImage *)croppedImage:(UIImage *)sourceImage
{
    // width and height are the target dimensions of the cell image
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, [UIScreen mainScreen].scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToMask(context, CGRectMake(0, 0, width, height), [UIImage imageNamed:@"mask"].CGImage);
    [sourceImage drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}
Then you can write cell.picture = [self croppedImage:sourceImage];
You can use an image masking technique to crop this image.
Please have a look at this link:
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
I have written some code that may help you out:
@interface ImageRenderer : NSObject {
    UIImage *image_;
}

@property (nonatomic, retain) UIImage *image;

- (void)cropImageinRect:(CGRect)rect;
- (void)maskImageWithMask:(UIImage *)maskImage;
- (void)imageWithAlpha;

@end
@implementation ImageRenderer

@synthesize image = image_;

- (void)cropImageinRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, rect);
    image_ = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
}
- (void)maskImageWithMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the mask image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL) {
        return;
    }

    // scale the image so it fills the mask (aspect fill), then center it
    CGFloat ratio = 0;
    ratio = maskImage.size.width / image_.size.width;
    if (ratio * image_.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image_.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image_.size.width * ratio) - maskImage.size.width) / 2, -((image_.size.height * ratio) - maskImage.size.height) / 2}, {image_.size.width * ratio, image_.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image_.CGImage);

    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    image_ = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
}
- (void)imageWithAlpha {
    CGImageRef imageRef = image_.CGImage;
    CGFloat width = CGImageGetWidth(imageRef);
    CGFloat height = CGImageGetHeight(imageRef);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef resultImageRef = CGBitmapContextCreateImage(context);

    image_ = [UIImage imageWithCGImage:resultImageRef scale:image_.scale orientation:image_.imageOrientation];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(resultImageRef);
}

@end
With this code you can crop a region out of a bigger image and then use a mask image to get your work done.
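A hypothetical usage sketch (the image and mask asset names are placeholders):

ImageRenderer *renderer = [[ImageRenderer alloc] init];
renderer.image = [UIImage imageNamed:@"photo"];            // placeholder asset name
[renderer cropImageinRect:CGRectMake(0, 0, 320, 380)];     // crop the region of interest
[renderer maskImageWithMask:[UIImage imageNamed:@"mask"]]; // placeholder mask asset
UIImage *maskedResult = renderer.image;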
I have a demo app here: https://github.com/rdetert/image-transform-test
After importing an image, you can pinch, zoom, rotate the image. What I want to do is save out a 640x480 image (landscape mode) that looks identical to the live preview. So if there are 100px bars of empty space on the sides, I need the same empty bars in the final output (scaled appropriately).
This is proving to be more difficult than I thought it would be. I can't quite get it to come out right after days of working on it.
The magic method that generates the final image is called -(void)generateFinalImage.
Good luck! ;)
EDIT
The green rectangle represents the actual area the imported image can be pinched, zoomed and rotated. The resolution on the iPhone 4S is 852x640, for example.
The blue rectangle is just a live preview for debugging, and its aspect ratio is the same as 640x480. The live preview can get very slow because Core Image is very slow.
What I want to do is convert whatever is in the green rectangle to a 640x480 image. Note that 852x640 is a slightly different aspect ratio than 640x480, but that isn't a huge problem.
Is your goal to obtain an exact copy of what you are editing, but at the size of the original image?
I guess it could be obtained by something like this:
- (UIImage *)padImage:(UIImage *)img to:(CGSize)size
{
    if (size.width < img.size.width && size.height < img.size.height) return img;

    size.width = MAX(size.width, img.size.width);
    size.height = MAX(size.height, img.size.height);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGRect centeredRect = CGRectMake((size.width - img.size.width) / 2.0, (size.height - img.size.height) / 2.0, img.size.width, img.size.height);
    CGContextDrawImage(context, centeredRect, [img CGImage]);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    UIImage *paddedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return paddedImage;
}
// final image size must be 640x480
- (void)generateFinalImage
{
    float rotatableCanvasWidth = 495.0;
    float rotatableCanvasHeight = 320.0;

    UIImage *tmp = self.importedRawImage;
    CGSize size = self.importedRawImage.size;
    NSLog(@"%@", NSStringFromCGSize(size));

    tmp = [self padImage:tmp to:CGSizeMake(rotatableCanvasWidth, rotatableCanvasHeight)];

    CIImage *ciImage = [[CIImage alloc] initWithImage:[tmp imageWithTransform:self.importedImageView.transform]];
    CGPoint center = CGPointMake(size.width / 2.0, size.height / 2.0);
    CIContext *context = [CIContext contextWithOptions:nil];

    CGRect r = ciImage.extent;
    r.origin.x = (r.size.width - rotatableCanvasHeight) / 2.0;
    r.origin.y = (r.size.height - rotatableCanvasWidth) / 2.0;
    r.size.width = rotatableCanvasHeight;
    r.size.height = rotatableCanvasWidth;

    self.finalImage = [UIImage imageWithCGImage:[context createCGImage:ciImage fromRect:r] scale:1.0 orientation:UIImageOrientationUp];
    self.finalImage = [self.finalImage resizedImage:CGSizeMake(100.0f, 134.0f) interpolationQuality:kCGInterpolationHigh];
    self.previewImageView.image = self.finalImage;
}
I am trying to resize an image based on the value the user selects in a picker.
To this end, I currently use the following code:
- (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)targetSize {
    CGRect frame;
    UIImage *newImage;
    newImage = image;

    frame = frontImageView.frame;
    frame.size.width = targetSize.width;
    frame.size.height = targetSize.height;
    frontImageView.frame = frame;

    // the pixels will be painted to this array
    CGImageRef imageRef = [newImage CGImage]; // (app crashes at this point)
    CGFloat height = targetSize.height;
    CGFloat Width = targetSize.width;
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    pixels = (uint32_t *)malloc(targetSize.width * targetSize.height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, Width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    alphaInfo = kCGImageAlphaNoneSkipLast;

    CGContextRef bitmap = CGBitmapContextCreate(pixels, Width, height, 8, Width * sizeof(uint32_t), colorSpace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, Width, height), imageRef);

    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);
    CGImageRelease(ref);
    CGImageRelease(newImage);

    return result;
}
The first time I resize an image (by 25 %, for instance) there is no crash, but afterwards a crash occurs with the error EXC_BAD_ACCESS.
How can I solve this?
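One likely culprit, based only on the code shown: CGImageRelease(newImage) is called on a UIImage pointer the method does not own, which over-releases the backing image and makes the next access crash. A minimal sketch of the same resize done with UIKit drawing, without the raw pixel buffer or the extra release (the method name scaledImage:toSize: is made up):

// Sketch: resize without a manual pixel buffer and without releasing the input image.
- (UIImage *)scaledImage:(UIImage *)image toSize:(CGSize)targetSize {
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result; // nothing owned here needs an explicit CGImageRelease
}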