CGImageCreateWithMask() not returning a masked image - iOS

I am trying to mask a UIImage with another UIImage. The image to be masked is generated from a gradient of colours, and the mask is a [UIImage systemImageNamed:]. The problem is that the resulting UIImage after masking looks like the gradient without the mask applied. I have provided the methods for creating the gradient and for masking it with the other image. Can anybody identify what is going wrong here? Thank you in advance!
// Creating the image
UIImage *mask = [UIImage systemImageNamed:@"person.crop.circle.fill.badge.plus"];
UIImage *gradient = [UIImage imageWithGradientColours:@[[UIColor redColor], [UIColor greenColor]] size:mask.size];
UIImage *masked = [gradient imageByApplyingMaskImage:mask];
UIImageView *imageView = [[UIImageView alloc] initWithImage:masked];
[self.view addSubview:imageView];
// UIImage+Gradient
// Generates a UIImage of the specified size with a gradient in the specified colours
+ (UIImage *)imageWithGradientColours:(NSArray *)colours size:(CGSize)size {
    CAGradientLayer *gradientLayer = [CAGradientLayer layer];
    gradientLayer.frame = CGRectMake(0, 0, size.width, size.height);
    NSMutableArray *cgColours = [NSMutableArray array];
    for (UIColor *colour in colours) {
        [cgColours addObject:(id)colour.CGColor];
    }
    gradientLayer.colors = cgColours;
    UIGraphicsBeginImageContext(size);
    [gradientLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return [gradientImage imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal];
}
// UIImage+MaskWithImage
// Returns a UIImage with a mask applied using the provided mask image
- (UIImage *)imageByApplyingMaskImage:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask(self.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}

Ah, I found the answer. I had to do some processing on the mask image before it would work properly as a mask: CGImageMaskCreate expects an opaque, greyscale-style image in which black areas let the underlying image show through and white areas hide it, whereas the SF Symbol returned by systemImageNamed: is an alpha-only template image. Redrawing the symbol as black on a white background fixes that:
- (UIImage *)imageAsMask {
    // Ensure the image isn't rendered as a template,
    // and tint the image with [UIColor blackColor]
    UIImage *mask = [[self imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal] imageWithTintColor:[UIColor blackColor]];
    // Opaque context for redrawing the mask
    UIGraphicsBeginImageContextWithOptions(mask.size, YES, 1.0);
    // Fill the mask with a white background
    [[UIColor whiteColor] setFill];
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, mask.size.width, mask.size.height));
    // Draw the black mask on top of the white background
    [mask drawAtPoint:CGPointZero];
    // Get the image from the context and return
    mask = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mask;
}
And then to generate a masked image I just do:
UIImage *mask = [[UIImage systemImageNamed:@"person.crop.circle.fill.badge.plus"] imageAsMask];
UIImage *gradient = [UIImage imageWithGradientColours:@[[UIColor redColor], [UIColor greenColor]] size:mask.size];
UIImage *masked = [gradient imageByApplyingMaskImage:mask];
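For what it's worth, a blend-mode approach can sidestep CGImageMaskCreate and its greyscale requirement entirely: draw the mask first, then draw the gradient with kCGBlendModeSourceIn so it only lands where the mask has alpha. A rough sketch (the method name imageByMaskingToAlphaOf: is made up here, and it assumes the symbol's alpha channel is the shape you want):
- (UIImage *)imageByMaskingToAlphaOf:(UIImage *)maskImage {
    UIGraphicsBeginImageContextWithOptions(maskImage.size, NO, maskImage.scale);
    CGRect rect = CGRectMake(0, 0, maskImage.size.width, maskImage.size.height);
    // Lay down the mask's alpha first
    [maskImage drawInRect:rect];
    // Source-in keeps the gradient only where the destination (the mask) is opaque
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeSourceIn);
    [self drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}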

Related

How to call a static method that accepts 3 parameters

How can I call this function?
I am trying to call it from -viewDidLoad. I tried:
[circularImageWithImage(imageView.image, myclor, 0.2)];
static UIImage *circularImageWithImage(UIImage *inputImage,
                                       UIColor *borderColor,
                                       CGFloat borderWidth)
{
    CGRect rect = (CGRect){ .origin = CGPointZero, .size = inputImage.size };
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, inputImage.scale); {
        // Fill the entire circle with the border color.
        [borderColor setFill];
        [[UIBezierPath bezierPathWithOvalInRect:rect] fill];
        // Clip to the interior of the circle (inside the border).
        CGRect interiorBox = CGRectInset(rect, borderWidth, borderWidth);
        UIBezierPath *interior = [UIBezierPath bezierPathWithOvalInRect:interiorBox];
        [interior addClip];
        [inputImage drawInRect:rect];
    }
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
You can try this code. Note that circularImageWithImage is a plain C function, not an Objective-C method, so it is called without square brackets:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)]; // set your frame
//imageView.center = self.view.center;
UIImage *modifiedImage = circularImageWithImage(image, [UIColor redColor], 1.2); // a border width >= 1.0 is better, otherwise you may not see it
imageView.image = modifiedImage;
[self.view addSubview:imageView];
UIImage *sample = circularImageWithImage(imageView.image, myclor, 0.2);
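Since the question specifically asks about calling it from -viewDidLoad, a minimal sketch might look like this (the imageView property and the border values here are illustrative, not from the original code):
- (void)viewDidLoad {
    [super viewDidLoad];
    // Call the C helper directly and reassign the rounded result.
    UIImage *rounded = circularImageWithImage(self.imageView.image,
                                              [UIColor redColor],
                                              1.0);
    self.imageView.image = rounded;
}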

iOS create UIImage using a mask and a UIColor

I'm currently coloring an existing image using a mask. For example, I have a white image with a black border and a circular mask (like the first two images). Then I can create a third image with a color (e.g. green) that has green in the center of the original image (because the mask is present there).
The code I'm using is this (suggestions welcomed):
- (UIImage *)paintWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *image = self;
    UIImage *rotatedMask = [self rotateImage:mask]; // For some reason this is needed.
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    [image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeSourceIn);
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextClipToMask(context, rect, [rotatedMask CGImage]);
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
What I need to do now is paint the green circle using only the mask (without the black border obviously), like this:
Any ideas? Thanks a lot!!
There is a much easier way of doing this without CoreGraphics. Simply do the following:
- (UIImageView *)imageViewWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *tempImage = mask;
    tempImage = [tempImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [tempImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    tempImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageView *iv = [[UIImageView alloc] initWithImage:tempImage];
    iv.tintColor = color;
    return iv;
}
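A usage sketch, just to show the shape of the call (the asset name "circleMask" and the size are made up for illustration):
UIImage *mask = [UIImage imageNamed:@"circleMask"]; // hypothetical mask asset
UIImageView *greenCircle = [self imageViewWithMask:mask
                                             color:[UIColor greenColor]
                                           andSize:CGSizeMake(100, 100)];
[self.view addSubview:greenCircle];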

UIButton ImageView showing only a color fill

I am setting a UIButton's imageView image to an image that was captured from a UIImagePickerController. In my app, the image is captured and rounded with a border. The image is saved, and I am loading it into the button. The problem is that the button shows only a blue circle and not the image inside... This only seems to happen with buttons.
Here is my code to round the image and set the border:
- (UIImage *)makeRoundedImage:(UIImage *)image
                       radius:(float)radius
{
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    imageLayer.contents = (id)image.CGImage;
    imageLayer.backgroundColor = [[UIColor clearColor] CGColor];
    imageLayer.masksToBounds = YES;
    imageLayer.cornerRadius = radius;
    imageLayer.borderWidth = 40;
    UIColor *ios7BlueColor = [UIColor colorWithRed:0.0 green:122.0/255.0 blue:1.0 alpha:1.0];
    imageLayer.borderColor = ios7BlueColor.CGColor;
    UIGraphicsBeginImageContext(image.size);
    [imageLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
And here is how I am setting the button image (player.playerImage is a string containing the path to the saved image):
- (void)ViewController:(UIViewController *)sender didUpdatePlayerImage:(NSString *)playerImagePath
{
    player.playerImage = playerImagePath;
    [_editImageButton setImage:[UIImage imageWithContentsOfFile:player.playerImage] forState:UIControlStateNormal];
}
Is this a limitation to the UIButton's ImageView implementation or am I missing something?
You are not actually writing the original image into the context.
Try the following:
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
That should render the image into the context, followed by your layer.
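If it helps, a sketch of wiring that back into the delegate method might look like the following (reusing the property names from the question; the radius is only an example, chosen as half the width to get a circle):
UIImage *photo = [UIImage imageWithContentsOfFile:player.playerImage];
// Round the photo before handing it to the button.
UIImage *rounded = [self makeRoundedImage:photo radius:photo.size.width / 2.0];
[_editImageButton setImage:rounded forState:UIControlStateNormal];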

Masking changes colors of UIImage - iOS

Here is what I am doing to mask a UIImage dynamically. It works, but for some reason the colors of the output image are not the same as the original. What could be causing this? Thanks.
- (void)setClippingPath:(UIBezierPath *)clippingPath :(UIImageView *)imgView {
    CAShapeLayer *maskLayer = [CAShapeLayer layer];
    maskLayer.frame = self.imgView.frame;
    maskLayer.path = [clippingPath CGPath];
    maskLayer.fillColor = [[UIColor whiteColor] CGColor];
    maskLayer.backgroundColor = [[UIColor clearColor] CGColor];
    self.imgView.image = [self maskImage:self.imgView.image withClippingMask:[self imageFromLayer:maskLayer]];
}
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
- (UIImage *)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
(The original post includes screenshots of the original image, the mask, and the output image.)
The documentation for CGImageMaskCreate mentions:
When you draw into a context with a bitmap image mask, Quartz uses the mask to determine where and how the current fill color is applied to the image rectangle.
So if you want to just replace the black with white then you should be able to set the context color before creating the mask:
- (UIImage *)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    // Requires an active graphics context; CGContextSetFillColorWithColor expects a CGColorRef.
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [UIColor whiteColor].CGColor);
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
You might also want to simplify your mask to a basic greyscale image (black where the underlying image should show through, white where it should be hidden).
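Another way to act on that documentation note, if the end goal is a white shape rather than a black one, is to clip a bitmap context to the mask and fill it with white. This is only a sketch under that assumption (the method name whiteImageFromMask:size: is made up here), not the answer's code:
- (UIImage *)whiteImageFromMask:(CGImageRef)mask size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    // Core Graphics uses a lower-left origin, so flip the context before clipping.
    CGContextTranslateCTM(ctx, 0, size.height);
    CGContextScaleCTM(ctx, 1, -1);
    // Everything drawn from here on is restricted to the mask's shape.
    CGContextClipToMask(ctx, rect, mask);
    CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
    CGContextFillRect(ctx, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}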

iOS, Generated images, and masking

I'm trying to generate an image that is lozenge-shaped and shows some percentage finished versus unfinished. The way I implemented this was as follows:
Generate 2 rectangles - one the size of the filled region, the other the size of the empty region
Invoke UIGraphicsBeginImageContext() with the size of the rectangle I am interested in
Draw the 2 rectangles in the context side by side
Grab the image from the context and end the context
Create a new masked image by using CGImageMaskCreate() followed by CGImageCreateWithMask() and extracting the masked image
I generate the filled and empty bitmaps using category extensions to UIImage, and then apply a static mask image to them.
The Problem: This works fine in the simulator, but the masking doesn't work on a real device.
Instead of including the code here, I'm including a link to a project that has the code. The relevant files are:
UIImage.h/UIImage.m: The category extension to UIImage that adds both the "create an image with a specified color" and "create a masked image using the supplied mask".
TLRangeDisplay.h/TLRangeDisplay.m: the code for my lozenge-shaped status display. The routine of interest there is fillWithRect().
Here is the code I added to UIImage (via a category):
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)maskWith:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef), CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *image = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return image;
}
And here is the routine that does the masking:
- (void)fillWithRect {
    CGRect f = self.frame;
    CGFloat width = f.size.width;
    CGFloat fullRange = maxValue_ - minValue_;
    CGFloat filledRange = currentValue_ - minValue_;
    CGRect fillRect = CGRectMake(0, 0, (filledRange * width) / fullRange, f.size.height);
    CGRect emptyRect = CGRectMake(fillRect.size.width, 0, width - fillRect.size.width, f.size.height);
    UIImage *fillImage = nil;
    UIImage *emptyImage = nil;
    if (fillRect.size.width > 0) {
        fillImage = [UIImage imageWithColor:fillColor_ andSize:fillRect.size];
    }
    if (emptyRect.size.width > 0) {
        emptyImage = [UIImage imageWithColor:emptyColor_ andSize:emptyRect.size];
    }
    // Build the 2-color image
    UIGraphicsBeginImageContext(f.size);
    [fillImage drawInRect:fillRect];
    [emptyImage drawInRect:emptyRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Mask it
    if (nil != maskImage_)
        image = [image maskWith:maskImage_];
    CGRect fullRect = CGRectMake(0, 0, f.size.width, f.size.height);
    // Merge it with the shape
    UIGraphicsBeginImageContext(f.size);
    [image drawInRect:fullRect];
    [shapeImage_ drawInRect:fullRect];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [shownView_ removeFromSuperview];
    shownView_ = [[UIImageView alloc] initWithImage:image];
    [self addSubview:shownView_];
    if (nil != shownView_)
        [self bringSubviewToFront:shownView_];
}
The project can be downloaded from http://dl.dropbox.com/u/5375467/ColorPlayOS4.zip
Thanks for any insights on this problem!
