iOS, Generated images, and masking

I'm trying to generate an image that is lozenge-shaped and shows some percentage finished versus unfinished. The way I implemented this was as follows:
Generate 2 rectangles - one the size of the filled region, the other the size of the empty rectangle
Invoke UIGraphicsBeginImageContext() with the size of the rectangle I am interested in
Draw the 2 rectangles in the context side by side
Grab the image from the context and end the context
Create a new masked image by using CGImageMaskCreate() followed by CGImageCreateWithMask() and extracting the masked image
I generate the filled and empty bitmaps using category extensions to UIImage, and then apply a static mask image to them.
The Problem: This works fine in the simulator, but the masking doesn't work on a real device.
Instead of including the code here, I'm including a link to a project that has the code. The relevant files are:
UIImage.h/UIImage.m: The category extension to UIImage that adds both a "create an image with a specified color" method and a "create a masked image using the supplied mask" method.
TLRangeDisplay.h/TLRangeDisplay.m: the code for my lozenge-shaped status display. The routine of interest there is fillWithRect().
Here is the code I added to UIImage (via a category):
+ (UIImage *)imageWithColor:(UIColor *)color {
    CGRect rect = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size {
    // Width comes first, then height.
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)maskWith:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *image = [UIImage imageWithCGImage:masked];
    CFRelease(mask);
    CFRelease(masked);
    return image;
}
And here is the routine that does the masking:
- (void)fillWithRect {
    CGRect f = self.frame;
    CGFloat width = f.size.width;
    CGFloat fullRange = maxValue_ - minValue_;
    CGFloat filledRange = currentValue_ - minValue_;
    CGRect fillRect = CGRectMake(0, 0, (filledRange * width) / fullRange, f.size.height);
    CGRect emptyRect = CGRectMake(fillRect.size.width, 0, width - fillRect.size.width, f.size.height);
    UIImage *fillImage = nil;
    UIImage *emptyImage = nil;
    if (fillRect.size.width > 0) {
        fillImage = [UIImage imageWithColor:fillColor_ andSize:fillRect.size];
    }
    if (emptyRect.size.width > 0) {
        emptyImage = [UIImage imageWithColor:emptyColor_ andSize:emptyRect.size];
    }

    // Build the 2-color image
    UIGraphicsBeginImageContext(f.size);
    [fillImage drawInRect:fillRect];
    [emptyImage drawInRect:emptyRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Mask it
    if (nil != maskImage_)
        image = [image maskWith:maskImage_];

    CGRect fullRect = CGRectMake(0, 0, f.size.width, f.size.height);

    // Merge it with the shape
    UIGraphicsBeginImageContext(f.size);
    [image drawInRect:fullRect];
    [shapeImage_ drawInRect:fullRect];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [shownView_ removeFromSuperview];
    shownView_ = [[UIImageView alloc] initWithImage:image];
    [self addSubview:shownView_];
    if (nil != shownView_)
        [self bringSubviewToFront:shownView_];
}
The project can be downloaded from http://dl.dropbox.com/u/5375467/ColorPlayOS4.zip
Thanks for any insights on this problem!
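One detail worth checking (a sketch of a possible cause, not a confirmed fix for this project): CGImageMaskCreate() expects grayscale mask data with no alpha channel, so handing it the data provider of an RGBA image produced by UIGraphicsBeginImageContext() or loaded from a PNG can give different results on the device than in the simulator. A hypothetical helper like the one below redraws an arbitrary UIImage into a device-gray, alpha-free bitmap before its data is used to build the mask:

// Hypothetical category helper: redraw an arbitrary UIImage into a grayscale,
// alpha-free bitmap so its data is safe to feed to CGImageMaskCreate().
- (UIImage *)grayscaleMaskImage {
    CGImageRef original = self.CGImage;
    size_t width = CGImageGetWidth(original);
    size_t height = CGImageGetHeight(original);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    // 8 bits per component, no alpha, bytes-per-row computed automatically.
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 graySpace, kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (context == NULL) return nil;

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), original);
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *result = [UIImage imageWithCGImage:grayImage];
    CGImageRelease(grayImage);
    return result;
}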

Related

Clipping a large PNG makes the iOS app crash

- (UIImage *)createThumbnailImage:(UIImage *)image withSize:(CGSize)size {
    CGRect imageRect = CGRectMake(0.0, 0.0, size.width, size.height);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height));
    // Takes a CGInterpolationQuality value, not a float.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    [image drawInRect:imageRect blendMode:kCGBlendModeNormal alpha:1];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *inputImage = [UIImage imageNamed:@"dog.jpg"];
    UIImage *image = [self createThumbnailImage:inputImage withSize:CGSizeMake(640.0, 480.0)];
}
I generate a thumbnail image (640 x 480) with the code above, and an odd problem is confusing me.
When I pass a JPG (10000 x 10000) to the method, it works well.
But when I pass a PNG of the same size, the app crashes.
I tried to find documentation about the difference between JPG and PNG, but nothing explained this.
Does anyone have any idea about this bug?
- (UIImage *)getNeedImageFrom:(UIImage *)image cropRect:(CGRect)rect
{
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
Can you please check if this solves your problem?
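For example, the helper above could be used to pull a region out of the large source before any further scaling (the file name and crop rectangle below are illustrative):

// Illustrative usage: crop a 640x480 region out of the large PNG first.
UIImage *bigImage = [UIImage imageNamed:@"dog.png"];
CGRect cropRect = CGRectMake(0, 0, 640.0, 480.0);
UIImage *cropped = [self getNeedImageFrom:bigImage cropRect:cropRect];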

Overlay alpha colour to UIImage

I'm using the following code to overlay a 0.5-alpha colour onto a UIImage, but it seems to be lagging. My image is 2000x1500.
Are there any other ways to overlay an alpha colour onto a UIImage quickly?
Thanks.
// Tint the image
- (UIImage *)imageWithTint:(UIColor *)tintColor alpha:(CGFloat)alpha {
    UIImage *image = self;
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, [UIScreen mainScreen].scale);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [image drawInRect:rect];
    // Use the alpha parameter rather than a hard-coded value.
    CGContextSetFillColorWithColor(c, [[tintColor colorWithAlphaComponent:alpha] CGColor]);
    CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
    CGContextFillRect(c, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
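If the tint only needs to be visible on screen, rather than baked into a new UIImage, one alternative worth considering is to skip the offscreen drawing entirely and lay a semi-transparent view over the image view. This is only a sketch: unlike kCGBlendModeSourceAtop it tints the whole rectangle rather than just the image's opaque pixels, and the variable names below are assumptions.

// Sketch: show the tint with a cheap overlay view instead of redrawing a 2000x1500 bitmap.
UIImageView *imageView = [[UIImageView alloc] initWithImage:largeImage];
UIView *tintOverlay = [[UIView alloc] initWithFrame:imageView.bounds];
tintOverlay.backgroundColor = [tintColor colorWithAlphaComponent:0.5];
tintOverlay.userInteractionEnabled = NO;
tintOverlay.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[imageView addSubview:tintOverlay];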

Keeping right tints after image masking

I need to programmatically fill an image with different colors, using a mask I got from our designer as a Photoshop file:
http://image.openlan.ru/images/83339350420766181806.png
I've almost finished this, except for some details I missed:
UIImage *mask = [UIImage imageNamed:@"mask.png"];
UIImage *image = [UIImage imageNamed:@"vehicle.png"];
mask = [image maskImage:image withMask:mask];
image = [image coloredImageWithColor:[UIColor blueColor]];
image = [image drawImage:mask inImage:image];
where the method implementations are:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked];
    // Release only what we created; maskRef belongs to maskImage and must not be released.
    CGImageRelease(masked);
    CGImageRelease(mask);
    return im;
}
- (UIImage *)coloredImageWithColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeDarken);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
- (UIImage *)drawImage:(UIImage *)fgImage inImage:(UIImage *)bgImage
{
    UIGraphicsBeginImageContextWithOptions(bgImage.size, FALSE, 0.0);
    [bgImage drawInRect:(CGRect){ CGPointZero, bgImage.size }];
    [fgImage drawInRect:(CGRect){ CGPointZero, fgImage.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
My current result is:
http://image.openlan.ru/images/34429534616655410787.png
I don't think using a gradient would give the right result. What I don't like now is the lack of tints in some image areas. What am I doing wrong here?
Inspired by this post, I eventually implemented my task.
This is the original image:
Then I drew a mask for this image so that certain areas are not filled:
Below is the main method that draws this:
- (void)vehicleConfigure
{
    NSString *vehicleName = @"sports.png";
    UIImage *vehicleImage = [UIImage imageNamed:vehicleName];
    vehicleName = [vehicleName stringByReplacingOccurrencesOfString:@".png" withString:@"-mask.png"];
    UIImage *mask = [UIImage imageNamed:vehicleName];
    mask = [vehicleImage applyingMask:mask];
    vehicleImage = [vehicleImage coloredImageWithColor:RGB(30, 128, 255)];
    vehicleImage = [vehicleImage drawMask:mask];
    m_vehicleImageView.contentMode = UIViewContentModeScaleAspectFit;
    m_vehicleImageView.image = vehicleImage;
}
where I use a category on UIImage with these methods:
- (UIImage *)drawMask:(UIImage *)mask
{
    UIGraphicsBeginImageContextWithOptions(self.size, FALSE, 0.0);
    [self drawInRect:(CGRect){ CGPointZero, self.size }];
    [mask drawInRect:(CGRect){ CGPointZero, mask.size }];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage *)applyingMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([self CGImage], mask);
    UIImage *im = [UIImage imageWithCGImage:masked
                                      scale:[[UIScreen mainScreen] scale]
                                orientation:UIImageOrientationUp];
    // Release only what we created; maskRef belongs to maskImage and must not be released.
    CGImageRelease(masked);
    CGImageRelease(mask);
    return im;
}
- (UIImage *)coloredImageWithColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = (CGRect){ CGPointZero, self.size };
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextClipToMask(context, rect, self.CGImage);
    [color set];
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
And the result looks like this:

Masking a transparent image with another actual image in iOS

I have two images: one is a mask that is transparent with some edges / borders, and the other is the actual image. I want to merge the two.
I have used the following code to mask and combine the image:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    // create a bitmap graphics context the size of the image
    CGFloat dim = MIN(image.size.width, image.size.height);
    CGSize size = CGSizeMake(dim, dim);
    UIGraphicsBeginImageContextWithOptions(size, NO, .0);
    UIBezierPath *bezierPath = [UIBezierPath bezierPathWithOvalInRect:(CGRect){ CGPointZero, size }];
    [bezierPath fill];
    [bezierPath addClip];
    CGPoint offset = CGPointMake((dim - image.size.width) * 0.5, (dim - image.size.height) * 0.5);
    [image drawInRect:(CGRect){ offset, image.size }];
    UIImage *ret = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return ret;
}
The result:
In the result image, the border of the image used as a mask is missing. Can someone please help me with this?
I wrote a masking category for iOS (well, it is basically cross-platform because Core Image is on both platforms anyway):
github project
The core functionality boils down to this (for your example):
UIImage *person = ...
UIImage *circle = ...
UIImage *result = [person imageMaskedWith:circle];
UIImageView *redbox = [[UIImageView alloc] initWithImage:result];
redbox.backgroundColor = [UIColor redColor]; //this can be a gradient!
the main part of the code from the category:
CGImageRef imageReference = image.CGImage;
CGImageRef maskReference = mask.CGImage;
CGRect rect = CGRectMake(0, 0, CGImageGetWidth(imageReference), CGImageGetHeight(imageReference));
// draw with Core Graphics
UIGraphicsBeginImageContext(rect.size);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, 0.0, rect.size.height);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextClipToMask(bitmap, rect, maskReference);
CGContextDrawImage(bitmap, rect, imageReference);
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
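Wrapped into the imageMaskedWith: category method used in the example above, this might look roughly like the following (a sketch; the actual implementation in the linked project may differ):

// Sketch of the category method; the mask's alpha/gray values define the visible area.
- (UIImage *)imageMaskedWith:(UIImage *)mask {
    CGImageRef imageReference = self.CGImage;
    CGImageRef maskReference = mask.CGImage;
    CGRect rect = CGRectMake(0, 0, CGImageGetWidth(imageReference), CGImageGetHeight(imageReference));

    UIGraphicsBeginImageContext(rect.size);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    // Flip the context so the CGImage is not drawn upside down.
    CGContextTranslateCTM(bitmap, 0.0, rect.size.height);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextClipToMask(bitmap, rect, maskReference);
    CGContextDrawImage(bitmap, rect, imageReference);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}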

iOS create UIImage using a mask and a UIColor

I'm currently coloring an existing image using a mask. For example, I have a white image with a black border and a circular mask (like the first two images). Then I can create a third image with a color (e.g. green), which has green in the center of the original image (because the mask is present there).
The code I'm using is this (suggestions welcome):
- (UIImage *)paintWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *image = self;
    UIImage *rotatedMask = [self rotateImage:mask]; // For some reason this is needed.
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    CGRect rect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    [image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeSourceIn);
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextClipToMask(context, rect, [rotatedMask CGImage]);
    CGContextFillRect(context, rect);
    UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return coloredImage;
}
What I need to do now is paint the green circle using only the mask (without the black border obviously), like this:
Any ideas? Thanks a lot!!
There is a much easier way of doing this without CoreGraphics. Simply do the following:
- (UIImageView *)imageViewWithMask:(UIImage *)mask color:(UIColor *)color andSize:(CGSize)size {
    UIImage *tempImage = mask;
    tempImage = [tempImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [tempImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    tempImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageView *iv = [[UIImageView alloc] initWithImage:tempImage];
    iv.tintColor = color;
    return iv;
}
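A minimal usage sketch (the asset name and size below are assumptions):

// Hypothetical usage: build a green, circle-masked image view and show it.
UIImage *circleMask = [UIImage imageNamed:@"circle-mask.png"];
UIImageView *greenCircle = [self imageViewWithMask:circleMask
                                             color:[UIColor greenColor]
                                           andSize:CGSizeMake(100.0, 100.0)];
[self.view addSubview:greenCircle];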
