Keeping the right tints after image masking - iOS

I need to programmatically fill an image with different colors, using a mask I got from our designer as a Photoshop file:
http://image.openlan.ru/images/83339350420766181806.png
I have almost finished this, except for a few details I missed:
UIImage *mask = [UIImage imageNamed:@"mask.png"];
UIImage *image = [UIImage imageNamed:@"vehicle.png"];
mask = [image maskImage:image withMask:mask];
image = [image coloredImageWithColor:[UIColor blueColor]];
image = [image drawImage:mask inImage:image];
Here are the method implementations:
- (UIImage *) maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    CGImageRelease(mask);
    // Note: maskRef is owned by maskImage and must NOT be released here.
    return im;
}
- (UIImage *) coloredImageWithColor:(UIColor *)color
{
    // Use the ...WithOptions variant so the context matches the image's scale on Retina screens.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
- (UIImage *) drawImage:(UIImage *)fgImage inImage:(UIImage *)bgImage
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:(CGRect){ CGPointZero, bgImage.size }];
    [fgImage drawInRect:(CGRect){ CGPointZero, fgImage.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
My current result is:
http://image.openlan.ru/images/34429534616655410787.png
I don't think I'll get the right result even with a gradient. What I don't like now is the lack of tints in some image areas. What am I doing wrong here?

Inspired by this post, I eventually implemented my task.
This is the original image:
Then I drew a mask for this image so that certain areas are not filled:
Below is the main method that draws this:
- (void) vehicleConfigure
{
    NSString *vehicleName = @"sports.png";
    UIImage *vehicleImage = [UIImage imageNamed:vehicleName];
    vehicleName = [vehicleName stringByReplacingOccurrencesOfString:@".png" withString:@"-mask.png"];
    UIImage *mask = [UIImage imageNamed:vehicleName];
    mask = [vehicleImage applyingMask:mask];
    // RGB() is a project helper macro that builds a UIColor from 0-255 components.
    vehicleImage = [vehicleImage coloredImageWithColor:RGB(30, 128, 255)];
    vehicleImage = [vehicleImage drawMask:mask];
    m_vehicleImageView.contentMode = UIViewContentModeScaleAspectFit;
    m_vehicleImageView.image = vehicleImage;
}
It uses a UIImage category with the following methods:
- (UIImage *) drawMask:(UIImage *)mask
{
    UIGraphicsBeginImageContextWithOptions(self.size, FALSE, 0.0);
    [self drawInRect:(CGRect){ CGPointZero, self.size }];
    [mask drawInRect:(CGRect){ CGPointZero, mask.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage *) applyingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked
                                      scale:[[UIScreen mainScreen] scale]
                                orientation:UIImageOrientationUp];
    // Release what we created; maskRef belongs to maskImage and is not released.
    CGImageRelease(masked);
    CGImageRelease(mask);
    return im;
}
- (UIImage *) coloredImageWithColor:(UIColor *)color
{
    // Use the ...WithOptions variant so the context matches the image's scale on Retina screens.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
And the result looks like this:

Related

Overlay alpha colour to UIImage

I'm using the following code to overlay a 0.5-alpha colour onto a UIImage, but it seems to be lagging. My image is 2000×1500.
Are there any other ways to overlay an alpha colour onto a UIImage quickly?
Thanks.
// Tint the image
- (UIImage *)imageWithTint:(UIColor *)tintColor alpha:(CGFloat)alpha {
    UIImage *image = self;
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, [UIScreen mainScreen].scale);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [image drawInRect:rect];
    // Use the alpha parameter rather than a hard-coded 0.25.
    CGContextSetFillColorWithColor(c, [[tintColor colorWithAlphaComponent:alpha] CGColor]);
    CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
    CGContextFillRect(c, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
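One alternative worth noting (my own suggestion, not part of the original post): when the tint does not need to preserve the photo's colours underneath, UIKit can apply the colour at composite time via a template image, which avoids redrawing a 2000×1500 bitmap in Core Graphics on every change. A minimal sketch, assuming iOS 7+ and that `image` and `tintColor` are in scope:

```objc
// Sketch: render the image as a template and let the image view tint it.
// No offscreen Core Graphics pass is needed; the GPU applies the colour.
UIImage *templateImage = [image imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView *imageView = [[UIImageView alloc] initWithImage:templateImage];
imageView.tintColor = tintColor;
```

Note that template rendering keeps only the alpha channel, so the image's own colours are lost; it suits glyphs and silhouettes. For a translucent overlay that preserves the photo, the Core Graphics approach above is still the way to go.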

Resizing image in iOS

Hi, I am developing an iOS app. I have a UIImageView with an image associated with it. I am changing its dimensions in the viewDidLoad() method.
Initially, when I change the dimensions, I am able to resize the image on the view. However, after I crop the image (using Photoshop) according to the shape of the object in it (i.e. getting rid of the unwanted parts), my resize method no longer seems to work: the size of the image does not change even though I call the same method.
The method I am using for resizing is given below.
-(void)initXYZ {
    CGSize size;
    CGFloat x, y;
    x = 0 + myImageView1.frame.size.width;
    y = myImageView2.center.y;
    size.width = _myImageView2.frame.size.width/2;
    size.height = _myImageView2.frame.size.width/2;
    UIImage *image = [UIImage imageNamed:@"xyz.png"];
    image = [HomeViewController imageWithImage:image scaledToSize:size xCord:x yCord:y];
}
The utility method is given below:
+(UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize xCord:(CGFloat)X yCord:(CGFloat)Y {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Try this...
+ (UIImage*)imageWithImage:(UIImage*)image
              scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
OR
+ (UIImage*)imageWithImage:(UIImage*)image
              scaledToSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipV = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipV);
    CGContextDrawImage(context, newRect, imageRef);
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
Try this:
- (UIImage*)resizeAndStoreImages:(UIImage*)img
{
    UIImage *chosenImage = img;
    NSData *imageData = UIImageJPEGRepresentation(chosenImage, 1.0);
    int resizedImgMaxHeight = 500;
    int resizedImgMaxWidth = 500;
    UIImage *resizedImageData;
    if (chosenImage.size.height > chosenImage.size.width && chosenImage.size.height > resizedImgMaxHeight) { // portrait
        int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
        CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else if (chosenImage.size.width > chosenImage.size.height && chosenImage.size.width > resizedImgMaxWidth) { // landscape
        int height = (chosenImage.size.height / chosenImage.size.width) * resizedImgMaxWidth;
        CGRect rect = CGRectMake(0, 0, resizedImgMaxWidth, height);
        UIGraphicsBeginImageContext(rect.size);
        [chosenImage drawInRect:rect];
        UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
        pic1 = nil;
    } else { // square, or already within bounds
        if (chosenImage.size.height > resizedImgMaxHeight) {
            int width = (chosenImage.size.width / chosenImage.size.height) * resizedImgMaxHeight;
            CGRect rect = CGRectMake(0, 0, width, resizedImgMaxHeight);
            UIGraphicsBeginImageContext(rect.size);
            [chosenImage drawInRect:rect];
            UIImage *pic1 = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            resizedImageData = [UIImage imageWithData:UIImageJPEGRepresentation(pic1, 1.0)];
            pic1 = nil;
        } else {
            resizedImageData = [UIImage imageWithData:imageData];
        }
    }
    return resizedImageData;
}
Adjust resizedImgMaxHeight and resizedImgMaxWidth as per your needs.

Use CGContextClip to clip a circle with image

I have an imageViewA (frame {0,0,320,200}) in my app, and I want to clip a circle at imageViewA's center with radius 50 and draw it into another imageViewB (frame {0,0,100,100}).
The original image looks like this:
and I use the code below to clip it:
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat height = self.bounds.size.height;
CGContextTranslateCTM(ctx, 0.0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextAddArc(ctx, self.frame.size.width/2, self.frame.size.height/2, 50, 0, 2*M_PI, 0);
CGContextClosePath(ctx);
CGContextSaveGState(ctx);
CGContextClip(ctx);
CGContextDrawImage(ctx, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height), image.CGImage);
CGContextRestoreGState(ctx);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
NSString *headerPath = [NSString stringWithFormat:@"%@/header.png", HomeDirectory];
NSData *imageData = UIImageJPEGRepresentation(newImage, 1.0);
if ([imageData writeToFile:headerPath atomically:YES]) {
    imageViewB.image = [UIImage imageWithContentsOfFile:headerPath];
}
The clipped image looks like this:
I only need the circular region, but the result shows that imageViewB has white space around the circle.
How do I clip this image correctly?
Thanks!
imageViewB.layer.cornerRadius = 50.0;
imageViewB.layer.masksToBounds = YES;
JPEG doesn't support transparency. Use UIImagePNGRepresentation instead of UIImageJPEGRepresentation, and you will have the result you want.
NSData *imageData = UIImagePNGRepresentation(newImage);
if([imageData writeToFile:headerPath atomically:YES]){
imageViewB.image = [UIImage imageWithContentsOfFile:headerPath];
}
Just use masking for this purpose.
This code will help you mask images:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *uiMasked = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return uiMasked;
}
This article may help you:
http://www.abdus.me/ios-programming-tips/how-to-mask-image-in-ios-an-image-masking-technique/
After masking, you need to create an image view with the masked image and then add it as a subview of another image view.
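As a rough sketch of that last step (the view and image names here are hypothetical, not from the article):

```objc
// Hypothetical names: photo, circleMask, hostImageView.
// maskImage:withMask: is the category method shown above.
UIImage *masked = [UIImage maskImage:photo withMask:circleMask];
UIImageView *maskedView = [[UIImageView alloc] initWithImage:masked];
maskedView.frame = hostImageView.bounds;
[hostImageView addSubview:maskedView];
```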
Before you draw the image:
CGContextSetBlendMode(ctx, kCGBlendModeClear);
CGContextFillRect(ctx, self.bounds);
CGContextSetBlendMode(ctx, kCGBlendModeNormal);

Masking changes colors of UIImage - iOS

Here is what I am doing to mask a UIImage dynamically. It works, but for some reason the colors of the output image are not the same as the original. What could be causing this? Thanks.
- (void) setClippingPath:(UIBezierPath *)clippingPath : (UIImageView *)imgView {
    CAShapeLayer *maskLayer = [CAShapeLayer layer];
    maskLayer.frame = self.imgView.frame;
    maskLayer.path = [clippingPath CGPath];
    maskLayer.fillColor = [[UIColor whiteColor] CGColor];
    maskLayer.backgroundColor = [[UIColor clearColor] CGColor];
    self.imgView.image = [self maskImage:self.imgView.image withClippingMask:[self imageFromLayer:maskLayer]];
}
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
-(UIImage*)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
Original Image
Mask
Output Image
The documentation for CGImageMaskCreate mentions:
When you draw into a context with a bitmap image mask, Quartz uses the mask to determine where and how the current fill color is applied to the image rectangle.
So if you just want to replace the black with white, you should be able to set the context fill color before creating the mask:
-(UIImage*)maskImage:(UIImage *)image withClippingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = image.CGImage;
    // CGContextSetFillColorWithColor expects a CGColorRef, not a UIColor.
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [[UIColor whiteColor] CGColor]);
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([maskImage CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
    // returns new image with mask applied
    return maskedImage;
}
You might also want to update your mask to a more basic, greyscale JPG, something like this:

iOS, Generated images, and masking

I'm trying to generate an image that is lozenge-shaped and shows some percentage finished versus unfinished. The way I implemented this was as follows:
Generate 2 rectangles - one the size of the filled region, the other the size of the empty rectangle
Invoke UIGraphicsBeginImageContext() with the size of the rectangle I am interested in
Draw the 2 rectangles in the context side by side
Grab the image from the context and end the context
Create a new masked image by using CGImageMaskCreate() followed by CGImageCreateWithMask() and extracting the masked image
I generate the filled and empty bitmaps using category extensions to UIImage, and then apply a static mask image to them.
The Problem: This works fine in the simulator, but the masking doesn't work on a real device.
Instead of including the code here, I'm including a link to a project that has the code. The relevant files are:
UIImage.h/UIImage.m: The category extension to UIImage that adds both the "create an image with a specified color" and "create a masked image using the supplied mask".
TLRangeDisplay.h/TLRangeDisplay.m: the code for my lozenge-shaped status display. The routine of interest there is fillWithRect().
Here is the code I added to UIImage (via a category):
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size {
    CGRect rect = CGRectMake(0.0f, 0.0f, size.height, size.width);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage*) maskWith:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage* image = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return image;
}
And here is the routine that does the masking:
-(void)fillWithRect {
    CGRect f = self.frame;
    CGFloat width = f.size.width;
    CGFloat fullRange = maxValue_ - minValue_;
    CGFloat filledRange = currentValue_ - minValue_;
    CGRect fillRect = CGRectMake(0, 0, (filledRange * width) / fullRange, f.size.height);
    CGRect emptyRect = CGRectMake(fillRect.size.width, 0, width - fillRect.size.width, f.size.height);
    UIImage *fillImage = nil;
    UIImage *emptyImage = nil;
    if (fillRect.size.width > 0) {
        fillImage = [UIImage imageWithColor:fillColor_ andSize:fillRect.size];
    }
    if (emptyRect.size.width > 0) {
        emptyImage = [UIImage imageWithColor:emptyColor_ andSize:emptyRect.size];
    }
    // Build the 2-color image
    UIGraphicsBeginImageContext(f.size);
    [fillImage drawInRect:fillRect];
    [emptyImage drawInRect:emptyRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Mask it
    if (nil != maskImage_)
        image = [image maskWith:maskImage_];
    CGRect fullRect = CGRectMake(0, 0, f.size.width, f.size.height);
    // Merge it with the shape
    UIGraphicsBeginImageContext(f.size);
    [image drawInRect:fullRect];
    [shapeImage_ drawInRect:fullRect];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [shownView_ removeFromSuperview];
    shownView_ = [[UIImageView alloc] initWithImage:image];
    [self addSubview:shownView_];
    if (nil != shownView_)
        [self bringSubviewToFront:shownView_];
}
The project can be downloaded from http://dl.dropbox.com/u/5375467/ColorPlayOS4.zip
Thanks for any insights on this problem!
