Use CGContextClip to clip a circle with image - ios

I have an imageViewA (frame {0, 0, 320, 200}) in my app. I want to clip a circle centered at imageViewA's center with radius 50, and draw it into another imageViewB (frame {0, 0, 100, 100}).
The original image looks like this:
I use the code below to clip:
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat height = self.bounds.size.height;
CGContextTranslateCTM(ctx, 0.0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextAddArc(ctx, self.frame.size.width/2, self.frame.size.height/2, 50, 0, 2*M_PI, 0);
CGContextClosePath(ctx);
CGContextSaveGState(ctx);
CGContextClip(ctx);
CGContextDrawImage(ctx, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height), image.CGImage);
CGContextRestoreGState(ctx);

CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];

NSString *headerPath = [NSString stringWithFormat:@"%@/header.png", HomeDirectory];
NSData *imageData = UIImageJPEGRepresentation(newImage, 1.0);
if ([imageData writeToFile:headerPath atomically:YES]) {
    imageViewB.image = [UIImage imageWithContentsOfFile:headerPath];
}
The clipped result looks like this:
I only need the circular area, but the result shows white space around the circle in imageViewB.
How do I clip this image correctly?
Thanks!

imageViewB.layer.cornerRadius = 50.0;
imageViewB.layer.masksToBounds = YES;

JPEG doesn't support transparency. Use UIImagePNGRepresentation instead of UIImageJPEGRepresentation, and you will get the result you want.
NSData *imageData = UIImagePNGRepresentation(newImage);
if ([imageData writeToFile:headerPath atomically:YES]) {
    imageViewB.image = [UIImage imageWithContentsOfFile:headerPath];
}
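For completeness, the round-trip through the file system isn't required just to display the result. Below is a minimal sketch of a helper that crops a centered circle entirely in memory; the helper name and the way the 50 pt radius is passed are illustrative, not from any shipped API:

```objc
// Sketch: crop a centered circle of the given radius out of an image,
// leaving the area outside the circle transparent. Assumes UIKit.
- (UIImage *)circularCropOfImage:(UIImage *)image radius:(CGFloat)radius
{
    CGSize size = CGSizeMake(radius * 2, radius * 2);
    // opaque = NO so pixels outside the clip path stay transparent
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:
                                (CGRect){ CGPointZero, size }];
    [circle addClip];
    // center the source image on the circle
    CGPoint origin = CGPointMake(radius - image.size.width / 2,
                                 radius - image.size.height / 2);
    [image drawInRect:(CGRect){ origin, image.size }];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

If you later need to persist the result, encode it with UIImagePNGRepresentation so the transparent corners survive.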

Just use masking for this purpose.
This code will help you mask images:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *uiMasked = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return uiMasked;
}
This article can help you:
http://www.abdus.me/ios-programming-tips/how-to-mask-image-in-ios-an-image-masking-technique/
After masking, create an image view with the masked image and add it as a subview of another image view.
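A possible call site for the category method above (a sketch; the asset names and the containing view are hypothetical). Note that for a CGImageMaskCreate-style mask, black regions of the mask are kept and white regions are cut away:

```objc
// Hypothetical assets: a photo and a greyscale circle mask with no alpha,
// where the circle is black (kept) and the surroundings are white (dropped).
UIImage *photo = [UIImage imageNamed:@"photo.png"];
UIImage *circleMask = [UIImage imageNamed:@"circleMask.png"];
UIImage *masked = [UIImage maskImage:photo withMask:circleMask];

// Show the masked result inside another view, as the answer suggests.
UIImageView *maskedView = [[UIImageView alloc] initWithImage:masked];
[containerImageView addSubview:maskedView]; // containerImageView: your outer view
```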

Before you draw the image, clear the context to transparent; otherwise the area outside the clip path remains opaque white:
CGContextSetBlendMode(ctx, kCGBlendModeClear);
CGContextFillRect(ctx, self.bounds);
CGContextSetBlendMode(ctx, kCGBlendModeNormal);

Related

UIImage Masking with Another UIImage

I am struggling with the logic of masking two UIImages. I want to mask a baby-face UIImage with a mask UIImage. The biggest problem is size: if the two UIImages are the same size it works easily, but when the sizes differ it doesn't work.
Below are the solutions I tried, but none of them works as expected.
Solution 1
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([image CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
Solution 2
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL,
                                                                maskImage.size.width,
                                                                maskImage.size.height,
                                                                8, 0, colorSpace,
                                                                kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL)
        return NULL;
    CGFloat ratio = 0;
    ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2,
                     -((image.size.height * ratio) - maskImage.size.height) / 2},
                    {image.size.width * ratio, image.size.height * ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    // return the image
    return theImage;
}
Update
The baby-face image is smaller than the mask image, so I want to apply the mask at a specific rect.
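Neither solution positions the smaller image. Below is a sketch of a variant that draws the face image at a caller-supplied rect inside the mask's coordinate space; the `atRect:` parameter is an addition for illustration, not part of the original code:

```objc
// Sketch: apply maskImage, but draw the (smaller) image at faceRect
// inside the mask-sized context. faceRect is an assumed input: the spot
// where the baby face should land under the mask.
- (UIImage *)maskImage:(UIImage *)image
              withMask:(UIImage *)maskImage
                atRect:(CGRect)faceRect
{
    CGSize maskSize = maskImage.size;
    UIGraphicsBeginImageContextWithOptions(maskSize, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Flip so CGContextClipToMask and the mask image line up with UIKit,
    // then flip back before UIKit drawing.
    CGContextTranslateCTM(ctx, 0, maskSize.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextClipToMask(ctx, (CGRect){ CGPointZero, maskSize }, maskImage.CGImage);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextTranslateCTM(ctx, 0, -maskSize.height);
    [image drawInRect:faceRect]; // only the part under the mask survives
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```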

Keeping right tints after image masking

I need to programmatically fill an image with different colors, using a mask I got from our designer as a Photoshop file:
http://image.openlan.ru/images/83339350420766181806.png
I have almost finished this, except for a few details I missed:
UIImage *mask = [UIImage imageNamed:@"mask.png"];
UIImage *image = [UIImage imageNamed:@"vehicle.png"];
mask = [image maskImage:image withMask:mask];
image = [image coloredImageWithColor:[UIColor blueColor]];
image = [image drawImage:mask inImage:image];
The method implementations:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    CGImageRelease(maskRef);
    CGImageRelease(mask);
    return im;
}

- (UIImage *)coloredImageWithColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}

- (UIImage *)drawImage:(UIImage *)fgImage inImage:(UIImage *)bgImage
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:(CGRect){ CGPointZero, bgImage.size }];
    [fgImage drawInRect:(CGRect){ CGPointZero, fgImage.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
My current result is:
http://image.openlan.ru/images/34429534616655410787.png
I don't think I'll get the right result if I use a gradient. What I don't like now is the lack of tints in some image areas. What am I doing wrong here?
Inspired by this post, I eventually implemented it.
Here is the original image:
Then I drew a mask for this image so that particular areas are not filled:
Below is the main method that draws this:
- (void)vehicleConfigure
{
    NSString *vehicleName = @"sports.png";
    UIImage *vehicleImage = [UIImage imageNamed:vehicleName];
    vehicleName = [vehicleName stringByReplacingOccurrencesOfString:@".png" withString:@"-mask.png"];
    UIImage *mask = [UIImage imageNamed:vehicleName];
    mask = [vehicleImage applyingMask:mask];
    vehicleImage = [vehicleImage coloredImageWithColor:RGB(30, 128, 255)];
    vehicleImage = [vehicleImage drawMask:mask];
    m_vehicleImageView.contentMode = UIViewContentModeScaleAspectFit;
    m_vehicleImageView.image = vehicleImage;
}
It uses a UIImage category with these methods:
- (UIImage *)drawMask:(UIImage *)mask
{
    UIGraphicsBeginImageContextWithOptions(self.size, FALSE, 0.0);
    [self drawInRect:(CGRect){ CGPointZero, self.size }];
    [mask drawInRect:(CGRect){ CGPointZero, mask.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage *)applyingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked
                                      scale:[[UIScreen mainScreen] scale]
                                orientation:UIImageOrientationUp];
    // CGImageRelease(masked);
    // CGImageRelease(maskRef);
    // CGImageRelease(mask);
    return im;
}

- (UIImage *)coloredImageWithColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
And the result looks like this:

Masking a transparent image with another actual image in iOS

I have two images: a mask that is transparent with some edges/borders, and the actual image. I want to merge the two.
I have used the following code to mask and combine the image:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    // create a bitmap graphics context the size of the image
    CGFloat dim = MIN(image.size.width, image.size.height);
    CGSize size = CGSizeMake(dim, dim);
    UIGraphicsBeginImageContextWithOptions(size, NO, .0);
    UIBezierPath *bezierPath = [UIBezierPath bezierPathWithOvalInRect:(CGRect){ CGPointZero, size }];
    [bezierPath fill];
    [bezierPath addClip];
    CGPoint offset = CGPointMake((dim - image.size.width) * 0.5, (dim - image.size.height) * 0.5);
    [image drawInRect:(CGRect){ offset, image.size }];
    UIImage *ret = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return ret;
}
The result:
In the result image, the border of the image used as a mask is missing. Can someone please help me with this?
I wrote a masking category for iOS (it is basically cross-platform, because Core Image is on both platforms anyway):
github project
The core functionality boils down to this (for your example):
UIImage *person = ...
UIImage *circle = ...
UIImage *result = [person imageMaskedWith:circle];
UIImageView *redbox = [[UIImageView alloc] initWithImage:result];
redbox.backgroundColor = [UIColor redColor]; //this can be a gradient!
the main part of the code from the category:
CGImageRef imageReference = image.CGImage;
CGImageRef maskReference = mask.CGImage;
CGRect rect = CGRectMake(0, 0, CGImageGetWidth(imageReference), CGImageGetHeight(imageReference));

// draw with Core Graphics
UIGraphicsBeginImageContext(rect.size);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, 0.0, rect.size.height);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextClipToMask(bitmap, rect, maskReference);
CGContextDrawImage(bitmap, rect, imageReference);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Masking changes colors of UIImage- iOS

Here is what I am doing to mask a UIImage dynamically. It works, but for some reason the colors of the output image are not the same as the original's. What could be causing this? Thanks.
- (void)setClippingPath:(UIBezierPath *)clippingPath :(UIImageView *)imgView {
    CAShapeLayer *maskLayer = [CAShapeLayer layer];
    maskLayer.frame = self.imgView.frame;
    maskLayer.path = [clippingPath CGPath];
    maskLayer.fillColor = [[UIColor whiteColor] CGColor];
    maskLayer.backgroundColor = [[UIColor clearColor] CGColor];
    self.imgView.image = [self maskImage:self.imgView.image withClippingMask:[self imageFromLayer:maskLayer]];
}

- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

- (UIImage *)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
Original Image
Mask
Output Image
The documentation for CGImageMaskCreate mentions:
When you draw into a context with a bitmap image mask, Quartz uses the mask to determine where and how the current fill color is applied to the image rectangle.
So if you want to just replace the black with white then you should be able to set the context color before creating the mask:
- (UIImage *)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
You might also want to update your mask to a more basic, greyscale JPG, something like this:
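If regenerating the asset isn't convenient, the mask can also be flattened in code into the 8-bit, alpha-free grey bitmap that CGImageMaskCreate expects. A sketch (the helper name is illustrative):

```objc
// Sketch: redraw an arbitrary UIImage into an 8-bit device-grey bitmap
// with no alpha channel, the layout CGImageMaskCreate is documented to
// expect. Follows the Create rule: the caller owns the returned image.
- (CGImageRef)newGreyMaskFromImage:(UIImage *)maskImage
{
    CGImageRef source = maskImage.CGImage;
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    CGColorSpaceRef grey = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8, width, grey,
                                             (CGBitmapInfo)kCGImageAlphaNone);
    CGColorSpaceRelease(grey);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
    CGImageRef greyImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return greyImage; // release with CGImageRelease when done
}
```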

iOS, Generated images, and masking

I'm trying to generate an image that is lozenge-shaped and shows some percentage finished versus unfinished. The way I implemented this was as follows:
Generate 2 rectangles - one the size of the filled region, the other the size of the empty rectangle
Invoke UIGraphicsBeginImageContext() with the size of the rectangle I am interested in
Draw the 2 rectangles in the context side by side
Grab the image from the context and end the context
Create a new masked image by using CGImageMaskCreate() followed by CGImageCreateWithMask() and extracting the masked image
I generate the filled and empty bitmaps using category extensions to UIImage, and then apply a static mask image to them.
The Problem: This works fine in the simulator, but the masking doesn't work on a real device.
Instead of including the code here, I'm including a link to a project that has the code. The relevant files are:
UIImage.h/UIImage.m: The category extension to UIImage that adds both the "create an image with a specified color" and "create a masked image using the supplied mask".
TLRangeDisplay.h/TLRangeDisplay.m: the code for my lozenge-shaped status display. The routine of interest there is fillWithRect().
Here is the code I added to UIImage (via a category):
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0f, 0.0f, size.height, size.width);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (UIImage *)maskWith:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *image = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return image;
}
And here is the routine that does the masking:
- (void)fillWithRect {
    CGRect f = self.frame;
    CGFloat width = f.size.width;
    CGFloat fullRange = maxValue_ - minValue_;
    CGFloat filledRange = currentValue_ - minValue_;
    CGRect fillRect = CGRectMake(0, 0, (filledRange * width) / fullRange, f.size.height);
    CGRect emptyRect = CGRectMake(fillRect.size.width, 0, width - fillRect.size.width, f.size.height);
    UIImage *fillImage = nil;
    UIImage *emptyImage = nil;
    if (fillRect.size.width > 0) {
        fillImage = [UIImage imageWithColor:fillColor_ andSize:fillRect.size];
    }
    if (emptyRect.size.width > 0) {
        emptyImage = [UIImage imageWithColor:emptyColor_ andSize:emptyRect.size];
    }
    // Build the 2-color image
    UIGraphicsBeginImageContext(f.size);
    [fillImage drawInRect:fillRect];
    [emptyImage drawInRect:emptyRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Mask it
    if (nil != maskImage_)
        image = [image maskWith:maskImage_];
    CGRect fullRect = CGRectMake(0, 0, f.size.width, f.size.height);
    // Merge it with the shape
    UIGraphicsBeginImageContext(f.size);
    [image drawInRect:fullRect];
    [shapeImage_ drawInRect:fullRect];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [shownView_ removeFromSuperview];
    shownView_ = [[UIImageView alloc] initWithImage:image];
    [self addSubview:shownView_];
    if (nil != shownView_)
        [self bringSubviewToFront:shownView_];
}
The project can be downloaded from http://dl.dropbox.com/u/5375467/ColorPlayOS4.zip
Thanks for any insights on this problem!
