How to apply filters/color on selected image area in iOS SDK

I have an image of my face and I want to apply a filter or color to the lips/eyes etc. I have detected those areas; my issue is how to add color there so that it looks natural and blends with the existing image.
Please see the image below for a better idea of what I need; note the color applied to the lips, which looks quite natural.

Use the following code to apply a color to a given area. For this demo I've painted a rectangle, but you can fill any other bounded region. I've used yellow here; you can change that color too.
UIImage *image = ...; // add code to obtain the main image
CGRect imageRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(image.size);
[image drawInRect:imageRect];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 1.0, 1.0, 0.0, 0.5); // RGBA fill color for the area
CGContextSetRGBStrokeColor(context, 1.0, 1.0, 0.0, 1.0); // RGBA border color for the area
CGRect rect = CGRectMake(5, 5, 10, 10); // the area to be colored; a rectangle for now
CGContextFillRect(context, rect);
CGContextStrokeRectWithWidth(context, rect, 3); // third argument is the border width
UIImage *newimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newimage) writeToFile:FileName atomically:YES]; // FileName: your destination path

There can be multiple ways to do this; one of them is mentioned in this link.
Another approach: create a UIView the same size as your identified area, with whatever color you want, and add that view as a subview over the face, as sketched below.
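A minimal sketch of that overlay-view approach (the view name and detected rect below are illustrative, not from the question):
// Hypothetical sketch: lay a semi-transparent colored view over a detected area.
// `faceImageView` and `lipsRect` stand in for your own view and detection result.
CGRect lipsRect = CGRectMake(120, 240, 80, 40); // rect returned by your detection step
UIView *tintView = [[UIView alloc] initWithFrame:lipsRect];
tintView.backgroundColor = [[UIColor redColor] colorWithAlphaComponent:0.3]; // low alpha blends more naturally
tintView.userInteractionEnabled = NO;
[faceImageView addSubview:tintView];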

You can take the image and then apply a CoreImage filter to adjust the colors inside of a masked area that you've defined.
This answer outlines how to do this. Making it look natural more applies to just learning how to create filters that do what you want to the images.
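As a rough illustration of that route (the specific filter and mask handling below are assumptions, not the linked answer's exact code; `photo` stands for the face image and `maskCIImage` for a grayscale CIImage that is white over the lips/eyes and black elsewhere):
#import <CoreImage/CoreImage.h>

// Tint the whole photo, then use the mask to keep the tint only in the detected area.
CIImage *inputImage = [CIImage imageWithCGImage:photo.CGImage];
CIFilter *tint = [CIFilter filterWithName:@"CIColorMonochrome"];
[tint setValue:inputImage forKey:kCIInputImageKey];
[tint setValue:[CIColor colorWithRed:0.8 green:0.1 blue:0.2] forKey:kCIInputColorKey];
[tint setValue:@0.7 forKey:kCIInputIntensityKey];

CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:tint.outputImage forKey:kCIInputImageKey];      // tinted version
[blend setValue:inputImage forKey:kCIInputBackgroundImageKey];  // original photo
[blend setValue:maskCIImage forKey:kCIInputMaskImageKey];       // white = take tinted pixels

CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [ciContext createCGImage:blend.outputImage fromRect:inputImage.extent];
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);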

You need to do some drawing customisation for the particular image view. Here are some related SO Q&As:
how to change the color of an individual pixel in uiimage
iPhone : How to change color of particular pixel of a UIImage?
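Both of those answers boil down to the same technique: render the UIImage into an RGBA bitmap context, rewrite the bytes you care about, and rebuild the image. A hedged sketch:
// Render the image into a raw RGBA buffer so individual pixels can be rewritten.
CGImageRef cgImage = sourceImage.CGImage;
size_t width = CGImageGetWidth(cgImage), height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
unsigned char *pixels = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

// Example: set the pixel at (x, y), measured from the top-left of the buffer, to opaque red.
size_t x = 10, y = 10;
unsigned char *p = pixels + y * bytesPerRow + x * 4;
p[0] = 255; p[1] = 0; p[2] = 0; p[3] = 255; // R, G, B, A (premultiplied)

CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
UIImage *modified = [UIImage imageWithCGImage:newCGImage];
CGImageRelease(newCGImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);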

Related

Set background - centered and not stretched

The program sets the background as an image through:
[backgroundViewProxy setBackgroundColor:[UIColor colorWithPatternImage:[theme backgroundImage]]];
The drawRect: of the corresponding UIView is:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGSize phase = self.backgroundShift; //set this property to affect the positioning of the background image
CGContextSetPatternPhase(context, phase);
CGColorRef color = self.backgroundColor.CGColor;
CGContextSetFillColorWithColor(context, color);
CGContextFillRect(context, self.bounds);
CGContextRestoreGState(context);
}
I did not write this code myself; I have to modify it.
I understand what is going on here, but I cannot find a way to solve this problem:
I want the image to be centered on the screen (width and height), and it should not be stretched or repeated.
At the moment, the image is repeated to fill the screen.
I have searched around the internet but did not find a clear solution for centering (or positioning) the image in a context like this.
I would be glad if someone could help.
From Apple's docs:
+ (UIColor *)colorWithPatternImage:(UIImage *)image
You can use pattern colors to set the fill or stroke color just as you would a solid color. During drawing, the image in the pattern color is tiled as necessary to cover the given area.
An alternative could be setting view.layer.contents to your image by casting it as type (id).
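A minimal sketch of that alternative, assuming backgroundViewProxy is the UIView from above (may require #import <QuartzCore/QuartzCore.h>):
// Center the image without tiling or stretching by assigning it to the
// layer's contents instead of using a pattern color.
UIImage *bg = [theme backgroundImage];
backgroundViewProxy.layer.contents = (__bridge id)bg.CGImage; // plain (id) without ARC
backgroundViewProxy.layer.contentsGravity = kCALayerGravityCenter; // center, no tiling or scaling
backgroundViewProxy.layer.contentsScale = bg.scale; // keep retina images at natural size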

How to save image with transparent stroke

I followed @Rob's answer and it draws as I want, but when I saved the image, the stroke was not transparent anymore:
Objective-C How to avoid drawing on same color image having stroke color of UIBezierPath
To save the image I wrote this code:
-(void)saveImage {
UIGraphicsBeginImageContextWithOptions(self.upperImageView.bounds.size, NO, 0);
if ([self.upperImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
[self.upperImageView drawViewHierarchyInRect:self.upperImageView.bounds afterScreenUpdates:YES]; // iOS7+
else
[self.upperImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndPDFContext();
UIImageWriteToSavedPhotosAlbum(self.upperImageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
You are either setting the alpha of the image view with the paths to 1.0 somewhere, or you are using something that doesn't permit transparencies (e.g. UIGraphicsBeginImageContextWithOptions with YES for the opaque parameter, or staging the image in a format that doesn't preserve alpha, such as JPEG).
A few additional thoughts:
I notice that you're only drawing upperImageView. If you want the composite image, you need to draw both. Or are you only trying to save one of the image views?
(For those unfamiliar with that other question, the entire point was how to draw multiple paths over an image, and have the full set of those paths with the same reduced alpha, rather than having the intersection of paths lead to some additive behavior. This was accomplished by having two separate image views, one for the underlying image, and one for the drawn paths. The key to the answer to that question was that one should draw the paths at 100% alpha, but to add that as a layer to a view that, itself, has a reduced alpha.)
What is the alpha of the image view upon which you are drawing?
NB: In the answer to that other question, when saving a temporary copy of the combined paths, we had to temporarily set the alpha to 1.0. But when saving the final result here, we want to keep the "path" image view's alpha at its reduced value.
Unrelated, but you faithfully transcribed a typo (since fixed) in my original example where I accidentally called UIGraphicsEndPDFContext rather than UIGraphicsEndImageContext. You should use the latter.
So, considering two image views, one with the photograph and one with the drawn path, called imageImageView (with alpha of 1.0) and pathsImageView (with alpha of 0.5), respectively, I can save the snapshot like so:
- (void)saveSnapshot {
CGRect rect = self.pathsImageView.bounds;
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
if ([self.pathsImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
[self.imageImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES]; // iOS7+
[self.pathsImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES];
} else {
[self.imageImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
[self.pathsImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(image, self, #selector(image:didFinishSavingWithError:contextInfo:), nil);
}
When I did that, the resulting composite image was in my photo album:

iOS: Adding an outline/stroke to an image context with a transparent background

The images that go through here are PNGs of different shapes with transparent backgrounds. In addition to merging them (which works fine), I'd like to give the new image an outline a couple of pixels thick, but I can't seem to manage that.
(So, just to clarify: I'm after an outline around the actual shapes in the context, not a rectangle around the entire image.)
+ (UIImage *)mergeBackgroundImage:(UIImage *)backgroundImage withOverlayingImage:(UIImage *)overlayImage{
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
[overlayImage drawInRect:CGRectMake(backgroundImage.size.width - overlayImage.size.width, backgroundImage.size.height - overlayImage.size.height, overlayImage.size.width, overlayImage.size.height)];
//Add stroke.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Thanks for your time!
Markus
If you make a CALayer whose contents are set to a CGImage of your image, you can then use it as a masking layer for the layer that requires an outline. Once you've done that, you can render your layer into another context and then get another UIImage from that.
Edit: something like what's described in this answer.
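A rough sketch of that idea (names are illustrative; `shapeImage` stands for one of the transparent-background PNGs): render the shape's alpha as a solid-color silhouette, which you can then draw slightly enlarged behind the shape itself so it reads as an outline.
// Color layer masked by the shape's alpha channel.
CALayer *colorLayer = [CALayer layer];
colorLayer.frame = CGRectMake(0, 0, shapeImage.size.width, shapeImage.size.height);
colorLayer.backgroundColor = [UIColor blackColor].CGColor; // desired outline color
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = colorLayer.bounds;
maskLayer.contents = (__bridge id)shapeImage.CGImage; // the shape's alpha becomes the mask
colorLayer.mask = maskLayer;

// Render the masked layer into an image context to get the silhouette.
UIGraphicsBeginImageContextWithOptions(colorLayer.bounds.size, NO, shapeImage.scale);
[colorLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();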

Change color of detail disclosure button dynamically in code

I know similar question(s) have been asked before, but most answers say create a new detail disclosure button image with the required color (e.g. How to change color of detail disclosure button for table cell).
I want to be able to change it to any color I choose, dynamically, at run-time, so using a pre-configured image is not viable.
From reading around, I think there may be a few ways to do this, but I'm not sure which is the best, or exactly how to do it:
Draw the required color circle in code and overlay with an image of shadow round outside of circle and arrow right (with clear alpha channel for rest, so drawn color circle is still visible)
Add image of shadow round outside of circle in UIImageView, and using as a mask, flood fill within this shadow circle, then overlay the arrow.
Add greyscale image, mask it with itself, and overlay the required color (e.g. http://coffeeshopped.com/2010/09/iphone-how-to-dynamically-color-a-uiimage), then overlay that with arrow image.
What is the best way, and does anyone have any code showing exactly how to do it?
I use the following helper method to tint a grayscale image. It takes the grayscale image, the tint color, and an optional mask image used to ensure the tint only happens in part of the grayscale image. This method also ensures the new image has the same (if any) resizable insets as the original image.
+ (UIImage *)tintImage:(UIImage *)baseImage withColor:(UIColor *)color mask:(UIImage *)maskImage {
    UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
    // Flip the coordinate system so CGContextDrawImage isn't drawn upside down
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);
    CGContextSaveGState(ctx);
    // Clip to the mask, or to the base image's own alpha when no mask is given
    if (maskImage) {
        CGContextClipToMask(ctx, area, maskImage.CGImage);
    } else {
        CGContextClipToMask(ctx, area, baseImage.CGImage);
    }
    // Fill the clipped region with the tint color
    [color set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);
    // Blend the grayscale detail back over the flat tint
    CGContextSetBlendMode(ctx, kCGBlendModeOverlay);
    CGContextDrawImage(ctx, area, baseImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Preserve any resizable cap insets from the original image
    if (!UIEdgeInsetsEqualToEdgeInsets(baseImage.capInsets, UIEdgeInsetsZero)) {
        newImage = [newImage resizableImageWithCapInsets:baseImage.capInsets];
    }
    return newImage;
}
[Images omitted: the answer included the non-retina and retina grayscale detail disclosure images, plus the corresponding non-retina and retina masks (which are mostly white).]
Consider also using an icon font in a label instead of an image. Something like glyphicons. Then you can set the text colour to whatever you want. You also have good options for scaling. The cost is that you don't have quite such precise control in some cases but it should work well for your case.
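A tiny hedged sketch of the label approach (the font name and glyph below are placeholders; they depend on the icon font you actually bundle):
// Hypothetical: render a disclosure-style glyph from a bundled icon font.
UILabel *disclosure = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 24, 24)];
disclosure.font = [UIFont fontWithName:@"GLYPHICONS" size:18]; // your icon font's PostScript name
disclosure.text = @"\u203A"; // stand-in glyph; use your font's code point
disclosure.textColor = [UIColor orangeColor]; // change freely at run-time
cell.accessoryView = disclosure;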

Merge two PNG UIImages in iOS without losing transparency

I have two PNG images, both with transparency defined. I need to merge them into a new PNG image without losing any of the transparency in the result.
+(UIImage *) combineImage:(UIImage *)firstImage colorImage:(UIImage *)secondImage
{
UIGraphicsBeginImageContext(firstImage.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, firstImage.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGRect rect = CGRectMake(0, 0, firstImage.size.width, firstImage.size.height);
// draw white background to preserve color of transparent pixels
CGContextSetBlendMode(context, kCGBlendModeDarken);
[[UIColor whiteColor] setFill];
CGContextFillRect(context, rect);
CGContextSaveGState(context);
CGContextRestoreGState(context);
// draw original image
CGContextSetBlendMode(context, kCGBlendModeDarken);
CGContextDrawImage(context, rect, firstImage.CGImage);
// tint image (losing alpha) - the luminosity of the original image is preserved
CGContextSetBlendMode(context, kCGBlendModeDarken);
//CGContextSetAlpha(context, .85);
[[UIColor colorWithPatternImage:secondImage] setFill];
CGContextFillRect(context, rect);
CGContextSaveGState(context);
CGContextRestoreGState(context);
// mask by alpha values of original image
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
CGContextDrawImage(context, rect, firstImage.CGImage);
// image drawing code here
CGContextRestoreGState(context);
UIImage *coloredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return coloredImage;
}
I'd appreciate any help improving this code's performance.
Thanks in advance
First of all, those calls to CGContextSaveGState and CGContextRestoreGState, one after the other with nothing in between, aren't doing anything for you. See this other answer for an explanation of what CGContextSaveGState and CGContextRestoreGState do: CGContextSaveGState vs UIGraphicsPushContext
Now, it's not 100% clear to me what you mean by "merging" the images. If you just want to draw one on top of the other and blend their colors using a standard blending mode, then you just need to change those blend mode calls to pass kCGBlendModeNormal (or just leave out the calls to CGContextSetBlendMode entirely). If you want to mask the second image by the first image's alpha values, then you should draw the second image with the normal blend mode, then switch to kCGBlendModeDestinationIn and draw the first image (sketched after these notes).
I'm afraid I'm not really sure what you're trying to do with the image tinting code in the middle, but my instinct is that you won't end up needing it. You should be able to get most merging effects by just drawing one image, then setting the blending mode, then drawing the other image.
Also, the code you've got there under the comment "draw white background to preserve color of transparent pixels" might draw white through the whole image, but it certainly doesn't preserve the color of transparent pixels; it makes those pixels white! You should remove that code too, unless you really want your "transparent" color to be white.
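A hedged sketch of the masking recipe described above, using UIKit drawing so no manual coordinate flip is needed (firstImage and secondImage as in the question's method):
// Draw the second image normally, then use kCGBlendModeDestinationIn so only
// the pixels where firstImage is opaque survive in the result.
UIGraphicsBeginImageContextWithOptions(firstImage.size, NO, firstImage.scale);
CGRect rect = CGRectMake(0, 0, firstImage.size.width, firstImage.size.height);
[secondImage drawInRect:rect]; // normal blend mode
[firstImage drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
UIImage *masked = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();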
Used the code given in Vinay's question and Aaron's comments to develop this hybrid that overlays any number of images:
/**
Returns the images overlaid atop each other according to their array position, with the first image being bottom-most and the last image being top-most.
- parameter images: The images to overlay.
- parameter size: The size of resulting image. Any images not matching this size will show a loss in fidelity.
*/
func combinedImageFromImages(images: [UIImage], withSize size: CGSize) -> UIImage
{
// Setup the graphics context (allocation, translation/scaling, size)
UIGraphicsBeginImageContext(size)
let context = UIGraphicsGetCurrentContext()
CGContextTranslateCTM(context, 0, size.height)
CGContextScaleCTM(context, 1.0, -1.0)
let rect = CGRectMake(0, 0, size.width, size.height)
// Combine the images
for image in images {
CGContextDrawImage(context, rect, image.CGImage)
}
let combinedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return combinedImage
}
