Crop Image using mask - ios

My requirement is to crop an image using a mask image.
I am able to crop the image, but not at the exact ratio I expect. I googled around and tried several implementations, but unfortunately didn't get the result I was after. This is what I am getting after cropping the image.
Following is the code I'm using:
- (UIImage *)maskImage1:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;

    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // decode is NULL
                                             YES   // should interpolate
                                             );

    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);

    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
Thanks..!!

An alternative
You can also achieve the same effect with a CALayer mask, which is, in my opinion, clearer. Note that layer masking operates on a view rather than producing a new UIImage, so this version takes the UIImageView that displays the image:

- (void)maskImageView:(UIImageView *)imageView withMask:(UIImage *)mask
{
    CALayer *maskLayer = [CALayer layer];
    maskLayer.frame = imageView.bounds;
    maskLayer.contents = (__bridge id)mask.CGImage;
    imageView.layer.mask = maskLayer;
}
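A hypothetical call site (the image view name is mine, just to illustrate):

[self maskImageView:self.photoImageView withMask:[UIImage imageNamed:@"mask"]];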
Probably a solution
Your mask layer probably has the wrong contentsScale (CALayers default to a contentsScale of 1.0, even on Retina devices):

maskLayer.contentsScale = [UIScreen mainScreen].scale;

You can also make sure the mask is the same size as the image before you call CGImageMaskCreate, for example by redrawing the mask at the image's size, as sketched further down.

Maybe you have already solved this, but since I had the same problem and solved it, I will explain the solution: the mask is applied to the real size of the photo, which in my case was 3264x2448, and that is obviously not the iPhone screen size, so when the mask was applied to the image it came out very small. I solved it by creating a 3264x2448 layer in Photoshop and scaling the mask so it looked exactly the way it did on the iPhone screen.
Another problem I had was that the image came out with a different orientation when I took the picture (I was using the camera), so I also had to rotate the mask to the picture's orientation in that Photoshop layer. Changing the orientation swaps the sides: what was the height becomes the width and vice versa, so when calculating the scale I had to pay attention to which side should be multiplied to get the correct result.
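If you would rather do that scaling in code than in Photoshop, here is a rough sketch (the helper name is mine, and it does not handle the orientation issue mentioned above):

- (UIImage *)mask:(UIImage *)mask scaledToMatchPhoto:(UIImage *)photo
{
    // Redraw the mask at the photo's pixel size so it lines up when CGImageMaskCreate is applied.
    CGSize pixelSize = CGSizeMake(photo.size.width * photo.scale,
                                  photo.size.height * photo.scale);
    UIGraphicsBeginImageContextWithOptions(pixelSize, NO, 1.0);
    [mask drawInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}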

Related

How to remove the transparent area of an UIImageView after masking?

In one of my iOS applications, I am trying to cut a portion of an image using CGImageMask. I have succeeded in masking the image with the following code:
- (UIImage *)maskImage:(UIImage *)referenceImage withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([referenceImage CGImage], mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // Release the Core Graphics objects created above so they don't leak.
    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}
So, my image will be:
myImageView.image = [self maskImage:[UIImage imageNamed:@"image.png"]
                           withMask:[UIImage imageNamed:@"mask.png"]];
Problem:
The output image is the same size as the reference image ('image.png'), with transparent areas around the masked region. But I want to remove those transparent areas and crop the resulting image. How can I achieve this? There are several masks, and the mask frames are not the same for all of them. I am attaching a reference image of the problem overview here. Please help me friends. Thanks in advance.
Look up auto-cropping a UIImage. This should crop out anything transparent.
How do I autocrop a UIImage?
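Since that linked answer is not quoted here, a minimal sketch of the usual approach (the helper name is mine, and it ignores imageOrientation for simplicity): render the image into an RGBA bitmap, scan the alpha channel for the bounding box of non-transparent pixels, then crop to that rect.

- (UIImage *)autocroppedImage:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw into an RGBA bitmap so every pixel's alpha can be inspected.
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Find the bounding box of all pixels whose alpha is non-zero.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (pixels[(y * width + x) * 4 + 3] > 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    CGContextRelease(ctx);
    free(pixels);

    if (maxX < minX || maxY < minY) return image; // image is fully transparent

    // minX/minY are also the transparent offsets from the left/top, if you need to store them.
    CGRect cropRect = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}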

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView’s new drawViewHierarchyInRect:afterScreenUpdates: method, introduced in iOS 7, to obtain an image representation (via UIGraphicsGetImageFromCurrentImageContext()) for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me capture aView’s content within _blurImageView’s frame.
Now, however, I need to obtain a portion of aView, but this time the portion is “inside” the view. Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion’s size (the red box), and asking aView to draw into the rect that represents the red box’s frame (its superview’s frame being equal to aView’s, of course), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job, however I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;   // placeholder: the view to capture
    CGRect imageToExtractFrame;     // placeholder: the portion's frame, in points
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // Have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame,
                                                     CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView, which is at 2x scale on Retina devices,
    // CGImage will not take the screen's scale into consideration if you pass a rect in points;
    // it processes the rect in pixels. You'd end up with an image from the wrong rect at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // Have to specify the image's scale because CGImage doesn't take the screen's scale into consideration.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);
    return image;
}
I hope this will help anyone who stumbled upon my issue. Feel free to improve my snippet.
Thanks
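For what it's worth, a simpler single-pass variant (my own suggestion, not from the original post, and the names are mine) is to size the context to the portion and draw the whole hierarchy at a negative offset so only the wanted rect lands inside the context; this is the same trick the Core Graphics answer further down uses with drawAtPoint:.

- (UIImage *)imageOfPortion:(CGRect)portionRect ofView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(portionRect.size, YES, 0);
    // Shift the drawing so that portionRect ends up at the context's origin.
    [view drawViewHierarchyInRect:CGRectMake(-portionRect.origin.x,
                                             -portionRect.origin.y,
                                             view.bounds.size.width,
                                             view.bounds.size.height)
               afterScreenUpdates:NO];
    UIImage *portion = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return portion;
}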

Mask arbitrarily sized UIImageView with resizable UIImage mask

Current code:
self.backgroundImageView.image = [self.message imageOfSize:self.message.size]; // Random image, random size
UIImage *rightBubbleBackground = [[UIImage imageNamed:@"BubbleRight"]
    resizableImageWithCapInsets:BubbleRightCapInsets
                   resizingMode:UIImageResizingModeStretch];
CALayer *mask = [CALayer layer];
mask.contents = (id)[rightBubbleBackground CGImage];
mask.frame = self.backgroundImageView.layer.frame;
self.backgroundImageView.layer.mask = mask;
self.backgroundImageView.layer.masksToBounds = YES;
This does not work properly. Though the mask is applied, the rightBubbleBackground does not resize correctly to fit self.backgroundImageView, even though it has resizing cap insets (BubbleRightCapInsets) set.
Original Image:
Mask image (rightBubbleBackground):
Result:
I found this answer but it only works for symmetrical images. Maybe I could modify that answer for my use.
Update: I was wrong. That answer can in fact be modified to work for asymmetrical images; I worked on it a bit and solved my own problem.
The following code made my cap insets work for the mask layer:
mask.contentsCenter =
    CGRectMake(BubbleRightCapInsets.left / rightBubbleBackground.size.width,
               BubbleRightCapInsets.top / rightBubbleBackground.size.height,
               1.0 / rightBubbleBackground.size.width,
               1.0 / rightBubbleBackground.size.height);
Result:
I had (part of) the same problem - i.e. the pixelated layer contents. For me it was solved by setting the contentsScale value on the CALayer - for some reason the default scale on CALayers is always 1.0, even on Retina devices.
i.e.
layer.contentsScale = [UIScreen mainScreen].scale;
Also, if you're drawing a shape using CAShapeLayer and wondering why its edges look a little jagged on Retina devices, try:
shapeLayer.rasterizationScale = [UIScreen mainScreen].scale;
shapeLayer.shouldRasterize = YES;
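Putting the two fixes together, a minimal sketch of the full mask setup (same names as in the question; note that I set the mask's frame from the view's bounds rather than its layer's frame, since a mask is positioned in the masked layer's own coordinate space):

CALayer *mask = [CALayer layer];
mask.contents = (id)[rightBubbleBackground CGImage];
mask.contentsScale = [UIScreen mainScreen].scale;
mask.contentsCenter = CGRectMake(BubbleRightCapInsets.left / rightBubbleBackground.size.width,
                                 BubbleRightCapInsets.top / rightBubbleBackground.size.height,
                                 1.0 / rightBubbleBackground.size.width,
                                 1.0 / rightBubbleBackground.size.height);
mask.frame = self.backgroundImageView.bounds;
self.backgroundImageView.layer.mask = mask;
self.backgroundImageView.layer.masksToBounds = YES;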

Core Graphics - how to crop non-transparent pixels out of a UIImage?

I have a UIImage that is reading from a transparent PNG (500px by 500px). Somewhere in the image, there is a picture that I want to crop out and save as a separate UIImage. I also want to store the X and Y coordinates based on how many transparent pixels there were on the left and top of the newly cropped rectangle.
I was able to crop an image with this code:
- (UIImage *)cropImage:(UIImage *)image atRect:(CGRect)rect
{
    double scale = image.scale;
    CGRect scaledRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                   rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], scaledRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:scale orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}
This actually cuts off the transparent pixels on the top and left :S (it would be great if I could crop the pixels on the right and bottom too!). It then resizes the rest of the image to the rectangle I specified. Unfortunately, I need to cut out a picture that is in the middle of the image, and the size needs to be dynamic.
Been struggling with this for several hours now. Any ideas?
To crop an image, draw it into a smaller graphics context.
For example, let's say you have a 600x600 image. And let's say that you want to crop 200 pixels off all four sides. That leaves a 200x200 rectangle.
So you would make a 200x200 graphics context, using UIGraphicsBeginImageContextWithOptions. Then you would draw the image into it using drawAtPoint:, drawing at the point (-200,-200). If you think about it, you will see that that offset causes just the 200x200 from the middle of the original to be drawn into the actual bounds of the context. Thus you have cropped the image by 200 pixels on all four sides, which is what we wanted to do.
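As a concrete sketch of that 600x600 example (the image name is assumed):

UIImage *original = [UIImage imageNamed:@"original"]; // 600x600
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0);
// Drawing at (-200, -200) leaves only the central 200x200 region inside the context.
[original drawAtPoint:CGPointMake(-200, -200)];
UIImage *central = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();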
Thus here is a generalized version, assuming that we know the amount to crop from the left, right, top, and bottom:
UIImage *original = [UIImage imageNamed:@"original.png"];
CGSize sz = [original size];
CGFloat cropLeft = ...;
CGFloat cropRight = ...;
CGFloat cropTop = ...;
CGFloat cropBottom = ...;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width - cropLeft - cropRight, sz.height - cropTop - cropBottom),
    NO, 0);
[original drawAtPoint:CGPointMake(-cropLeft, -cropTop)];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After that, cropped is your cropped image.

Cropping ellipse using core image in ios

I want to crop an ellipse from an image in iOS. Using the Core Image framework, I know how to crop a rectangular region.
Using Core Graphics, I am able to clip the elliptical region, but the size of the cropped image is the same as the size of the original image, because I am applying a mask to the area outside the ellipse.
So the goal is to crop the elliptical region from the image so that the cropped image is no larger than the ellipse's bounding rectangle.
Any help would be greatly appreciated. Thanks in advance.
You have to create a context of the correct size; try the following code:
- (UIImage *)cropImage:(UIImage *)input inElipse:(CGRect)rect {
    CGRect drawArea = CGRectMake(-rect.origin.x, -rect.origin.y, input.size.width, input.size.height);

    UIGraphicsBeginImageContext(rect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height));
    CGContextClip(ctx);

    [input drawInRect:drawArea];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Maybe you have to adjust drawArea to your needs, as I did not test it.
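A hypothetical call site (the image name and rect are mine, just to illustrate):

UIImage *photo = [UIImage imageNamed:@"photo"];
// Crops the ellipse inscribed in the 200x200 square whose top-left corner is at (60, 90).
UIImage *ellipse = [self cropImage:photo inElipse:CGRectMake(60, 90, 200, 200)];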
