I've been thinking about this one for a while. Basically, the code below draws a border on a UIView, but if I add another UIView as a subview to that UIView, the subview appears above the border.
Like this:
How do I (as cleanly as possible), keep the border above all subviews?
Like this:
This is what I have so far. But, as stated above, it doesn't keep the border above all its subviews.
CGContextRef context = UIGraphicsGetCurrentContext();

// Build the rectangular path, then expand it into a 5pt-wide stroked outline
CGPathRef rectPath = CGPathCreateWithRect(rect, NULL);
CGPathRef border = CGPathCreateCopyByStrokingPath(rectPath, NULL, 5.0f, kCGLineCapButt, kCGLineJoinMiter, 0);

CGContextAddPath(context, border);
CGContextSetFillColorWithColor(context, someCGColor);
CGContextDrawPath(context, kCGPathFill);

CGPathRelease(border);
CGPathRelease(rectPath);
I could create a separate UIView for the border itself, and just insert subviews below that UIView, but that feels rather hackish. If there's a better way - I'd love to hear about it.
I would say use:
view.clipsToBounds = YES;
view.layer.borderColor = [UIColor redColor].CGColor;
view.layer.borderWidth = 2;
view.layer.cornerRadius = 4;
All subviews are clipped and you can keep your border :).
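For example, a quick sketch of how that might look (the container and subview here are just placeholders):
// Container whose border should stay visible above its subviews
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 200, 120)];
container.clipsToBounds = YES;                             // clip subviews to the container
container.layer.borderColor = [UIColor redColor].CGColor;  // layer borders take a CGColorRef
container.layer.borderWidth = 2;
container.layer.cornerRadius = 4;

// A subview that overflows the container gets clipped instead of covering the border
UIView *child = [[UIView alloc] initWithFrame:CGRectMake(-10, -10, 250, 150)];
child.backgroundColor = [UIColor greenColor];
[container addSubview:child];
[self.view addSubview:container];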
I would like to tint an image with a color reference. The results should look like the Multiply blending mode in Photoshop, where whites would be replaced with tint:
I will be changing the color value continuously.
Follow up: I would put the code to do this in my ImageView's drawRect: method, right?
As always, a code snippet would greatly aid in my understanding, as opposed to a link.
Update: Subclassing a UIImageView with the code Ramin suggested.
I put this in viewDidLoad: of my view controller:
[self.lena setImage:[UIImage imageNamed:kImageName]];
[self.lena setOverlayColor:[UIColor blueColor]];
[super viewDidLoad];
I see the image, but it is not being tinted. I also tried loading other images, setting the image in IB, and calling setNeedsDisplay: in my view controller.
Update: drawRect: is not being called.
Final update: I found an old project that had an imageView set up properly so I could test Ramin's code and it works like a charm!
Final, final update:
For those of you just learning about Core Graphics, here is the simplest thing that could possibly work.
In your subclassed UIView:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor colorWithRed:0.5 green:0.5 blue:0 alpha:1].CGColor); // don't make the color too saturated
    CGContextFillRect(context, rect); // draw base
    [[UIImage imageNamed:@"someImage.png"] drawInRect:rect blendMode:kCGBlendModeOverlay alpha:1.0]; // draw image
}
In iOS 7, they've introduced the tintColor property on UIImageView and renderingMode on UIImage. To tint a UIImage on iOS 7, all you have to do is:
UIImageView* imageView = …
UIImage* originalImage = …
UIImage* imageForRendering = [originalImage imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
imageView.image = imageForRendering;
imageView.tintColor = [UIColor redColor]; // or any color you want to tint it with
First you'll want to subclass UIImageView and override the drawRect method. Your class needs a UIColor property (let's call it overlayColor) to hold the blend color and a custom setter that forces a redraw when the color changes. Something like this:
- (void) setOverlayColor:(UIColor *)newColor {
    if (overlayColor)
        [overlayColor release];
    overlayColor = [newColor retain];

    [self setNeedsDisplay]; // fires off drawRect each time color changes
}
In the drawRect method you'll want to draw the image first then overlay it with a rectangle filled with the color you want along with the proper blending mode, something like this:
- (void) drawRect:(CGRect)area
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Draw picture first (in the view's own coordinate space, so use bounds, not frame)
    //
    CGContextDrawImage(context, self.bounds, self.image.CGImage);

    // Blend mode could be any of the CGBlendMode values. Now draw a filled rectangle
    // over top of the image.
    //
    CGContextSetBlendMode(context, kCGBlendModeMultiply);
    CGContextSetFillColorWithColor(context, self.overlayColor.CGColor);
    CGContextFillRect(context, self.bounds);

    CGContextRestoreGState(context);
}
Ordinarily to optimize the drawing you would restrict the actual drawing to only the area passed in to drawRect, but since the background image has to be redrawn each time the color changes it's likely the whole thing will need refreshing.
To use it, create an instance of the object, then set the image property (inherited from UIImageView) to the picture and overlayColor to a UIColor value (the blend level can be adjusted by changing the alpha value of the color you pass down).
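For illustration, usage of such a subclass might look roughly like this (TintedImageView and the image name are just placeholders, not from the original answer):
// Hypothetical name for the UIImageView subclass described above
TintedImageView *tinted = [[TintedImageView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
tinted.image = [UIImage imageNamed:@"lena.png"];                           // inherited from UIImageView
tinted.overlayColor = [UIColor colorWithRed:0 green:0 blue:1 alpha:0.5];   // alpha controls the blend strength
[self.view addSubview:tinted];

// Changing the color later fires the custom setter, which calls setNeedsDisplay
tinted.overlayColor = [UIColor redColor];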
I wanted to tint an image with alpha and I created the following class. Please let me know if you find any problems with it.
I have named my class CSTintedImageView and it inherits from UIView, since UIImageView does not call the drawRect: method, as mentioned in previous replies.
I have set a designated initializer similar to the one found in the UIImageView class.
Usage:
CSTintedImageView * imageView = [[CSTintedImageView alloc] initWithImage:[UIImage imageNamed:@"image"]];
imageView.tintColor = [UIColor redColor];
CSTintedImageView.h
@interface CSTintedImageView : UIView

@property (strong, nonatomic) UIImage * image;
@property (strong, nonatomic) UIColor * tintColor;

- (id)initWithImage:(UIImage *)image;

@end
CSTintedImageView.m
#import "CSTintedImageView.h"
#implementation CSTintedImageView
#synthesize image=_image;
#synthesize tintColor=_tintColor;
- (id)initWithImage:(UIImage *)image
{
    self = [super initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
    if(self)
    {
        self.image = image;
        // make the view non-opaque so the background stays transparent
        self.opaque = NO;
    }
    return self;
}
- (void)setTintColor:(UIColor *)color
{
    _tintColor = color;
    // update every time the tint color is set
    [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // resolve the CG/UIKit coordinate mismatch (Core Graphics is flipped)
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -rect.size.height);

    // set the clipping area to the image's alpha
    CGContextClipToMask(context, rect, _image.CGImage);

    // fill with the tint color
    CGContextSetFillColorWithColor(context, _tintColor.CGColor);
    CGContextFillRect(context, rect);

    // blend the original image over the fill using the overlay blend mode
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGContextDrawImage(context, rect, _image.CGImage);
}
@end
Just a quick clarification (after some research on this topic). The Apple doc here clearly states that:
The UIImageView class is optimized to draw its images to the display. UIImageView does not call the drawRect: method of its subclasses. If your subclass needs to include custom drawing code, you should subclass the UIView class instead.
so don't even waste any time attempting to override that method in a UIImageView subclass. Start with UIView instead.
This could be very useful: PhotoshopFramework is a powerful library for manipulating images in Objective-C. It was developed to provide the same functionality that Adobe Photoshop users are familiar with. Examples: setting colors using RGB 0-255, applying blend filters, transformations...
It is open source; here is the project link: https://sourceforge.net/projects/photoshopframew/
UIImage * image = mySourceImage;
UIColor * color = [UIColor yellowColor];
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); // respect the image's scale on Retina displays
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height) blendMode:kCGBlendModeNormal alpha:1];
UIBezierPath * path = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[color setFill];
[path fillWithBlendMode:kCGBlendModeMultiply alpha:1]; //look up blending modes for your needs
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//use newImage for something
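If you need this in more than one place, one way (just a sketch; the function name is made up) is to wrap the same steps in a small helper:
#import <UIKit/UIKit.h>

// Sketch of a reusable helper built from the snippet above
UIImage *TintedImageWithColor(UIImage *image, UIColor *color)
{
    CGRect bounds = CGRectMake(0, 0, image.size.width, image.size.height);

    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:bounds blendMode:kCGBlendModeNormal alpha:1];   // original image first
    [color setFill];
    [[UIBezierPath bezierPathWithRect:bounds] fillWithBlendMode:kCGBlendModeMultiply alpha:1]; // multiply tint over it
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
Then something like TintedImageWithColor(mySourceImage, [UIColor yellowColor]) gives you the same result as above.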
For those of you who try to subclass a UIImageView class and get stuck at "drawRect: is not being called", note that you should subclass a UIView class instead, because for UIImageView classes the "drawRect:" method is not called. Read more here: drawRect not being called in my subclass of UIImageView
Here is another way to implement image tinting, especially if you are already using QuartzCore for something else. This was my answer for a similar question.
Import QuartzCore:
#import <QuartzCore/QuartzCore.h>
Create a transparent CALayer and add it as a sublayer to the image view you want to tint:
CALayer *sublayer = [CALayer layer];
[sublayer setBackgroundColor:[UIColor whiteColor].CGColor];
[sublayer setOpacity:0.3];
[sublayer setFrame:toBeTintedImage.frame];
[toBeTintedImage.layer addSublayer:sublayer];
Add QuartzCore to your project's framework list (if it isn't already there), otherwise you'll get compiler errors like this:
Undefined symbols for architecture i386: "_OBJC_CLASS_$_CALayer"
The only thing I can think of would be to create a rectangular, mostly transparent view with the desired color and lay it over your image view by adding it as a subview. I'm not sure this will really tint the image the way you imagine, though; I'm not sure how you would hack into an image and selectively replace certain colors with others... sounds pretty ambitious to me.
For example:
UIImageView *yourPicture = (however you grab the image);
UIView *colorBlock = [[UIView alloc] initWithFrame:yourPicture.bounds];
//Replace R, G, B and A with values from 0 - 1 based on your color and transparency
colorBlock.backgroundColor = [UIColor colorWithRed:R green:G blue:B alpha:A];
[yourPicture addSubview:colorBlock];
Documentation for UIColor:
colorWithRed:green:blue:alpha:
Creates and returns a color object using the specified opacity and RGB component values.
+ (UIColor *)colorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha
Parameters
red - The red component of the color object, specified as a value from 0.0 to 1.0.
green - The green component of the color object, specified as a value from 0.0 to 1.0.
blue - The blue component of the color object, specified as a value from 0.0 to 1.0.
alpha - The opacity value of the color object, specified as a value from 0.0 to 1.0.
Return Value
The color object. The color information represented by this object is in the device RGB colorspace.
Also you might want to consider caching the composited image for performance: render it in drawRect:, then rebuild it only when a dirty flag is set. While you might be changing it often, there may be cases where draws come in while you're not dirty, so you can simply refresh from the cache. If memory is more of an issue than performance, you can ignore this :)
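A rough sketch of what that caching could look like (ARC assumed; the cache ivar and dirty flag are made-up names, and the view is assumed to keep the image and overlayColor properties from above):
// Sketch only: cache the composited image and rebuild it when marked dirty.
// Assumes ivars declared in the subclass, e.g.:
//   UIColor *_overlayColor;      // backing ivar for overlayColor
//   UIImage *_cachedComposite;   // hypothetical cached result
//   BOOL _compositeDirty;        // hypothetical dirty flag

- (void)setOverlayColor:(UIColor *)newColor
{
    _overlayColor = newColor;
    _compositeDirty = YES;          // invalidate the cache, not just the display
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    if (_compositeDirty || _cachedComposite == nil) {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
        [self.image drawInRect:self.bounds];                          // base picture
        [self.overlayColor setFill];
        UIRectFillUsingBlendMode(self.bounds, kCGBlendModeMultiply);  // tint pass
        _cachedComposite = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        _compositeDirty = NO;
    }
    [_cachedComposite drawInRect:self.bounds];                        // cheap refresh from the cache
}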
I have a library I open-sourced for this: ios-image-filters
For Swift 2.0,
let image: UIImage! = UIGraphicsGetImageFromCurrentImageContext()
imgView.image = imgView.image!.imageWithRenderingMode(UIImageRenderingMode.AlwaysTemplate)
imgView.tintColor = UIColor(red: 51/255.0, green: 51/255.0, blue: 51/255.0, alpha: 1.0)
I made macros for this purpose:
#define removeTint(view) \
    if ([((NSNumber *)[view.layer valueForKey:@"__hasTint"]) boolValue]) {\
        for (CALayer *layer in [view.layer sublayers]) {\
            if ([((NSNumber *)[layer valueForKey:@"__isTintLayer"]) boolValue]) {\
                [layer removeFromSuperlayer];\
                break;\
            }\
        }\
    }
#define setTint(view, tintColor) \
    {\
        if ([((NSNumber *)[view.layer valueForKey:@"__hasTint"]) boolValue]) {\
            removeTint(view);\
        }\
        [view.layer setValue:@(YES) forKey:@"__hasTint"];\
        CALayer *tintLayer = [CALayer new];\
        tintLayer.frame = view.bounds;\
        tintLayer.backgroundColor = [tintColor CGColor];\
        [tintLayer setValue:@(YES) forKey:@"__isTintLayer"];\
        [view.layer addSublayer:tintLayer];\
    }
To use it, simply call:
setTint(yourView, yourUIColor);
//Note: include opacity of tint in your UIColor using the alpha channel (RGBA), e.g. [UIColor colorWithRed:0.5f green:0.0 blue:0.0 alpha:0.25f];
When removing the tint, simply call:
removeTint(yourView);
I'm trying to create a movie-generation application using AVComposition, and I'm having trouble making the title frame.
Each frame is actually a CALayer, and the title layer sits on top of the other frames.
The title text needs to be transparent against a black background, so that part of the first content frame shows through the letters of the title.
I searched most articles about CALayer masks, but nothing helped me.
I thought this article (How to make only the part covered by text/title transparent in a UIView in IOS) would be helpful and coded it the way Dave described, but got only a white screen.
Here is what I have done:
// create UILabel from the title text
CGRect rectFrame = CGRectMake(0, 0, videoSize.width, videoSize.height);
UILabel *lbTitle = [[UILabel alloc] initWithFrame:rectFrame];
lbTitle.text = self.titleText;
lbTitle.font = [UIFont fontWithName:@"Helvetica" size:60];
lbTitle.textColor = [UIColor blackColor];
lbTitle.backgroundColor = [UIColor whiteColor];
// get title image and create mask layer
UIGraphicsBeginImageContextWithOptions(lbTitle.bounds.size, TRUE, [[UIScreen mainScreen] scale]);
[lbTitle.layer renderInContext:UIGraphicsGetCurrentContext()];
CGImageRef viewImage = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
UIGraphicsEndImageContext();
CALayer *maskLayer = [CALayer layer];
maskLayer.contents = (__bridge id)viewImage;
maskLayer.frame = rectFrame;
// create title background layer and set maskLayer as the mask layer of this layer
// this layer corresponds to "UIView's layer" in Dave's method
CALayer *animatedTitleLayer = [CALayer layer];
animatedTitleLayer.backgroundColor = [UIColor whiteColor].CGColor;
animatedTitleLayer.mask = maskLayer;
animatedTitleLayer.frame = rectFrame;
...
[view.layer addSublayer:animatedTitleLayer];
Here I used animatedTitleLayer as the title background (black background), but what I see is a white screen.
Can anyone help me? Thanks in advance.
The mask uses the alpha channel to determine what parts to mask out and what parts to keep. However, your label that you render into an image is rendered as black text on a white background so there is no transparency in the image.
You have also specified that the graphics context you are using to render the image is opaque, so even if the background color of the label were clear you would get an opaque image.
So you need to set a clear background color on the label and pass NO as the second argument when you create the graphics context.
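Applied to the code in the question, that would look something like this (a sketch showing only the two changed pieces):
// Give the label a transparent background so the rendered image has real alpha
lbTitle.backgroundColor = [UIColor clearColor];
lbTitle.opaque = NO;

// Pass NO for the opaque argument so the rendered mask image keeps its alpha channel
UIGraphicsBeginImageContextWithOptions(lbTitle.bounds.size, NO, [[UIScreen mainScreen] scale]);
[lbTitle.layer renderInContext:UIGraphicsGetCurrentContext()];
CGImageRef viewImage = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
UIGraphicsEndImageContext();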
Suppose you want to implement the same functionality that iOS's camera 'Zoom & Crop' has... in which you can scroll and crop an image.
Any section of the picture that exceeds the size of the crop area gets grayed out.
I'm trying to replicate exactly that. Provided that the 'clipsToBounds' flag is set to NO, you can get the whole subview to be displayed.
However, I'm finding it a bit hard to gray out the UIScrollView's subview overflow.
How would you implement that?
Thanks in advance!
You can do this by creating a subclass of UIView that is semi-transparent in the overflow region and transparent in the "crop" region, placing it over your UIScrollView, and extending it out to cover the overflow.
The main methods you need to implement are initWithFrame:
#define kIDZAlphaOverlayDefaultAlpha 0.75

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        mAlpha = kIDZAlphaOverlayDefaultAlpha;
        self.backgroundColor = [UIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:mAlpha];
        self.userInteractionEnabled = NO;
    }
    return self;
}
Don't leave out userInteractionEnabled = NO, otherwise the scroll view will not see events.
and drawRect
- (void)drawRect:(CGRect)rect
{
    CGRect apertureRect = /* your crop rect */;
    CGContextRef context = UIGraphicsGetCurrentContext();

    /* draw the transparent rect */
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 0.0);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextFillRect(context, apertureRect);

    /* draw a white border */
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextStrokeRect(context, apertureRect);
}
The important point here is kCGBlendModeCopy; this allows us to draw (or cut) a transparent rectangle in a semi-transparent background.
If you want, you can make the transparent rectangle a rounded rectangle, include a preview of the cropped image, and end up with something like the screen below:
Sorry I can't share all the code for the screen shot. It's from a client project :-(
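For the rounded-rectangle part, though, a minimal sketch of what the drawRect could look like (assuming the same semi-transparent backgroundColor set in initWithFrame above; the crop rect and corner radius here are placeholders):
- (void)drawRect:(CGRect)rect
{
    // Placeholder crop rect and corner radius; adjust for your layout
    CGRect apertureRect = CGRectInset(self.bounds, 40.0, 80.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    /* cut a transparent rounded rect out of the semi-transparent background */
    UIBezierPath *aperture = [UIBezierPath bezierPathWithRoundedRect:apertureRect cornerRadius:8.0];
    CGContextAddPath(context, aperture.CGPath);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 0.0);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextFillPath(context);

    /* white border around the aperture */
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextAddPath(context, aperture.CGPath);
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextStrokePath(context);
}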
I've solved this issue by...:
Adding a new helper class, 'ApertureOverlay' (subclass of UIView), with the following code:
- (void)drawRect:(CGRect)rect
{
    if(CGRectEqualToRect(_apertureRect, CGRectZero) == NO)
    {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetShouldAntialias(context, false);
        CGContextSetRGBFillColor(context, 1.0f, 1.0f, 1.0f, 1.0f);
        CGContextFillRect(context, _apertureRect);
        CGContextSetShouldAntialias(context, true);
    }
}
ApertureOverlay has a background color with its alpha set to 50%.
[self setBackgroundColor:[UIColor colorWithRed:1.0f green:1.0f blue:1.0f alpha:0.5f]];
So far... all we've got is a view with a semi-transparent background, and a white rectangle (drawn at the _apertureRect position + size).
After implementing this class, I've set up the 'mask' attribute of the scroll view's layer (the scroll view contains the image):
[[self layer] setMask:[_apertureOverlayView layer]];
That's it! If you update ApertureOverlay's '_apertureRect' attribute, you'll need to call 'setNeedsDisplay' so it gets redrawn.
One more thing: by setting antialiasing to false (in ApertureOverlay), things work pretty smoothly.
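One small convenience (just a sketch, assuming _apertureRect is backed by a property) is a custom setter on ApertureOverlay so the redraw happens automatically:
- (void)setApertureRect:(CGRect)apertureRect
{
    if (!CGRectEqualToRect(_apertureRect, apertureRect)) {
        _apertureRect = apertureRect;
        [self setNeedsDisplay];   // redraws the white rectangle, which also updates the mask
    }
}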