CGPatternDrawPatternCallback not called in iOS 12 / Objective-C - ios

The following Objective-C code worked correctly in iOS 9 - 11. It draws a checkerboard with colored squares. For some reason, the callback that adds the colors is not being called under iOS 12 and Xcode 10.0. Something seems to have changed in iOS 12, but nothing I have tried has fixed the problem.
-(id)initWithFrame:(CGRect)frame checkerSize:(CGSize)checkerSize darkColor:(UIColor *)darkShade lightColor:(UIColor *)lightShade {
    self = [super initWithFrame:frame];
    if (self != nil) {
        // Initialize the property values
        checkerHeight = checkerSize.height;
        checkerWidth = checkerSize.width;
        self.darkColor = darkShade;
        self.lightColor = lightShade;

        // Colored pattern setup
        CGPatternCallbacks coloredPatternCallbacks = {0, ColoredPatternCallback, NULL};
        CGRect clippingRectangle = CGRectMake(0.0, 0.0, 2.0*checkerWidth, 2.0*checkerHeight);

        // First we need to create a CGPatternRef that specifies the qualities of our pattern.
        CGPatternRef coloredPattern = CGPatternCreate(
            (__bridge_retained void *)self,      // 'info' pointer for our callback
            clippingRectangle,                   // the pattern coordinate space; drawing is clipped to this rectangle
            CGAffineTransformIdentity,           // a transform applied to the pattern coordinate space before drawing
            2.0*checkerWidth, 2.0*checkerHeight, // the horizontal and vertical spacing - how far to move after drawing each cell
            kCGPatternTilingNoDistortion,
            true,                                // this is a colored pattern: only an alpha value is specified when drawing it
            &coloredPatternCallbacks);           // the callbacks for this pattern

        // To draw a pattern, you need a pattern colorspace.
        // Since this is a colored pattern, the parent colorspace is NULL, indicating that it only has an alpha value.
        CGColorSpaceRef coloredPatternColorSpace = CGColorSpaceCreatePattern(NULL);
        CGFloat alpha = 1.0;
        // Since this pattern is colored, we'll create a CGColorRef for it to make drawing it easier and more efficient.
        // From here on, the colored pattern is referenced entirely via the associated CGColorRef rather than the
        // originally created CGPatternRef.
        coloredPatternColor = CGColorCreateWithPattern(coloredPatternColorSpace, coloredPattern, &alpha);
        CGColorSpaceRelease(coloredPatternColorSpace);
        CGPatternRelease(coloredPattern);
    }
    return self;
}
void ColoredPatternCallback(void *info, CGContextRef context) {
    HS_QuartzPatternView *self = (__bridge_transfer id)info; // needed to access the Obj-C properties from the C function
    CGFloat checkerHeight = [self checkerHeight];
    CGFloat checkerWidth = [self checkerWidth];

    // "Dark" color
    UIColor *dark = [self darkColor];
    CGContextSetFillColorWithColor(context, dark.CGColor);
    CGContextFillRect(context, CGRectMake(0.0, 0.0, checkerWidth, checkerHeight));
    CGContextFillRect(context, CGRectMake(checkerWidth, checkerHeight, checkerWidth, checkerHeight));

    // "Light" color
    UIColor *light = [self lightColor];
    CGContextSetFillColorWithColor(context, light.CGColor);
    CGContextFillRect(context, CGRectMake(checkerWidth, 0.0, checkerWidth, checkerHeight));
    CGContextFillRect(context, CGRectMake(0.0, checkerHeight, checkerWidth, checkerHeight));
}


How to change colour of individual pixel of UIImage/UIImageView

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to produce the coordinates of the pixel of the UIImageView that was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];
    NSLog(@"%f is X pixel num, %f is Y pixel num; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.height)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the StackOverflow posts I have found have answers that work or are up to date. Skilled coders may be able to help me decipher the older posts to make something that works, or to produce a simple fix of their own using my above code for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, running the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, I am met with the image being replaced with a white space, and the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by me testing it on my phone. However, the same code has produced a few issues in my own project. Though I have the suspicion that they are all caused by one or two simple central issues. How can I solve these errors?
You'll want to break this problem up into multiple steps:
1. Get the coordinates of the touched point in the image coordinate system.
2. Get the x and y position of the pixel to change.
3. Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {
    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met – return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth : rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];

...

-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)
@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
    // components of replacement color – in a 0-255 UInt8 format (fairly standard bitmap format)
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer – stores an array of the pixel components. For example: (r0, g0, b0, a0, r1, g1, b1, a1, ..., rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0];   // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 0-255 UInt8 format. Next, it creates a new bitmap context in a standard 8-bits-per-component RGBA format and draws your input image into it.
The important bit of this method is:
// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished Result:
Full Project: https://github.com/hamishknight/Pixel-Color-Changing
You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Fill UINavigationBar background with CGGradientRef

I would like to fill my UINavigationBar background with a CGGradientRef instead of a picture file (e.g. myBackground.png). This avoids having to create a PNG file for each screen size and also saves storage space.
I've seen that it's possible to create a UIImage by drawing a gradient from scratch and using:
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
Also, I've seen that I can assign a UIImage to a UINavigationBar using:
myNavigationController.navigationBar.barTintColor = [UIColor colorWithPatternImage:image];
However, I have not been able to put these two together. Some help would be appreciated. Thank you.
Here is a UINavigationBar subclass using a two-point gradient, although it could easily be extended to cover multi-point gradients.
To enable this bar style, select the navigation bar object in the navigation scene of the storyboard and set its custom class to GradientNavigationBar.
In this case the awakeFromNib call is used to change the background (assuming the navigation bar class has been changed in the storyboard). If the navigation bar is instantiated programmatically, the customization call should be made at the appropriate point in the code.
The solution works by converting the colors passed to it into an array of CGFloat components, generating a CGGradientRef from those components, rendering the gradient into an image, and then calling setBackgroundImage:forBarMetrics: to set the background as required.
@interface GradientNavigationBar : UINavigationBar
@end

@implementation GradientNavigationBar

-(void) awakeFromNib {
    [super awakeFromNib];
    [self setGradientBackground:[UIColor redColor]
                       endColor:[UIColor yellowColor]];
}

-(void) setGradientBackground:(UIColor *)startColor endColor:(UIColor *)endColor {
    // Convert the colors into a format where they can be used with
    // Core Graphics
    CGFloat rs, gs, bs, as, re, ge, be, ae;
    [startColor getRed:&rs green:&gs blue:&bs alpha:&as];
    [endColor getRed:&re green:&ge blue:&be alpha:&ae];
    CGFloat colors[] = {
        rs, gs, bs, as,
        re, ge, be, ae
    };

    // Generate an image context with the appropriate options; it could
    // be enhanced to take into account that nav bar heights differ,
    // e.g. between landscape and portrait on the iPhone
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    CGContextRef gc = UIGraphicsGetCurrentContext();
    CGColorSpaceRef baseSpace = CGColorSpaceCreateDeviceRGB();

    // The gradient object specifies the colors to be used and
    // the color space
    CGGradientRef gradient = CGGradientCreateWithColorComponents(baseSpace, colors, NULL, 2);
    CGColorSpaceRelease(baseSpace), baseSpace = NULL;

    // Draw the gradient
    CGContextDrawLinearGradient(gc, gradient, CGPointMake(0, 0), CGPointMake(0, self.bounds.size.height), 0);

    // Capture the image
    UIImage *backgroundImage = UIGraphicsGetImageFromCurrentImageContext();

    // The critical call for setting the background image.
    // Note that separate calls can be made, e.g. for the compact
    // bar metric.
    [self setBackgroundImage:backgroundImage forBarMetrics:UIBarMetricsDefault];

    CGGradientRelease(gradient), gradient = NULL;
    UIGraphicsEndImageContext();
}

@end

Tinting UIImage to a different color, OR, generating UIImage from vector

I have a circle with a black outline and a white fill; I need to programmatically change the white to another color (via a UIColor). I've tried a handful of other StackOverflow solutions but none of them seem to work correctly, either filling just the outside or just the outline.
I have two ways I could do this but I am unsure of how I would get the right results:
Tint just the white color to whatever the UIColor should be,
or,
Make a UIImage from two circles, one being filled and one overlapping that with black.
If you decide to use two circles, one white and one black, then you may find this helpful. This method will tint a UIImage for you, but it addresses the problem of only tinting the opaque part: it will only tint the circle if you provide a PNG with transparency around the circle. So instead of filling the entire 24x24 frame of the image, it fills only the opaque parts. This isn't exactly your question, but you'll probably come across this problem if you go with the second option you listed.
-(UIImage*)colorAnImage:(UIColor*)color :(UIImage*)image {
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [image drawInRect:rect];
    CGContextSetFillColorWithColor(c, [color CGColor]);
    CGContextSetBlendMode(c, kCGBlendModeSourceAtop);
    CGContextFillRect(c, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Extend a UIView and just implement the drawRect method. For example, this will draw a green circle with a black outline.
- (void)drawRect:(CGRect)rect {
    [super drawRect:rect];
    CGContextRef gc = UIGraphicsGetCurrentContext();
    [[UIColor greenColor] setFill];
    CGContextFillEllipseInRect(gc, CGRectMake(0, 0, 24, 24));
    [[UIColor blackColor] set];
    CGContextStrokeEllipseInRect(gc, CGRectMake(0, 0, 24, 24));
}
For such simple shapes, just use CoreGraphics to draw a square and a circle -- adding the ability to set the fill color in your implementation.
If it's just black and white, then altering the white to another color is not so difficult when you know the color representations. Unfortunately, this is more complex to write and execute, so my recommendation is to go straight to Core Graphics for the simple task you outlined (bad pun, sorry).
Here's a quick demo:
static void InsetRect(CGRect* const pRect, const CGFloat pAmount) {
    const CGFloat halfAmount = pAmount * 0.5f;
    *pRect = CGRectMake(pRect->origin.x + halfAmount, pRect->origin.y + halfAmount, pRect->size.width - pAmount, pRect->size.height - pAmount);
}

static void DrawBorderedCircleWithWidthInContext(const CGRect pRect, const CGFloat pWidth, CGContextRef pContext) {
    CGContextSetLineWidth(pContext, pWidth);
    CGContextSetShouldAntialias(pContext, true);
    CGRect r = pRect;

    /* draw circle's border */
    CGContextSetRGBStrokeColor(pContext, 0.8f, 0.7f, 0, 1);
    InsetRect(&r, pWidth);
    CGContextStrokeEllipseInRect(pContext, r);

    /* draw circle's fill */
    CGContextSetRGBFillColor(pContext, 0, 0, 0.3f, 1);
    InsetRect(&r, pWidth);
    CGContextFillEllipseInRect(pContext, r);
}

slow pattern drawing for a backgroundView

For my table views I've made a custom background view, which I set in viewDidLoad in every table view controller in my project:
- (void)viewDidLoad {
    [super viewDidLoad];
    // other unrelated stuff
    self.tableView.backgroundView = [[BlueStyleBackground alloc] init];
}
It was my understanding that doing Quartz 2D drawing was the best option performance-wise. However, this background view takes 0.45 seconds (on average) to get drawn. This makes all the navigation among tableViews a bit slow; not much, but enough to notice it.
The background view is a gradient with a pattern overlapped. I've found out that the gradient gets drawn in 0.12 seconds, so the bottleneck appears to be the pattern drawing. What surprises me is that, from my point of view, it doesn't seem like a complicated thing to draw; it's only one horizontal line with a shadow.
Here is how the pattern is set up (inside drawRect:):
static const CGPatternCallbacks callbacks = { 0, &MyDrawColoredPattern2, NULL };

CGContextSaveGState(context);
CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
CGContextSetFillColorSpace(context, patternSpace);
CGColorSpaceRelease(patternSpace);
CGPatternRef pattern = CGPatternCreate(NULL,
                                       rect, // surface the pattern has to cover, e.g. CGRectMake(0, 0, self.bounds.size.width, 8)
                                       CGAffineTransformIdentity,
                                       self.bounds.size.width, // spacing between pattern cells (X)
                                       2,                      // spacing between pattern cells (Y)
                                       kCGPatternTilingConstantSpacing,
                                       true,
                                       &callbacks);
and this is the pattern code:
void MyDrawColoredPattern2(void *info, CGContextRef context) {
    UIColor* __autoreleasing blauClar = [UIColor colorWithRed:0.65 green:0.65 blue:0.65 alpha:1];
    UIColor* __autoreleasing blauFosc = [UIColor colorWithRed:0.106 green:0.345 blue:0.486 alpha:1];
    CGColorRef dotColor = blauFosc.CGColor;
    CGColorRef shadowColor = blauClar.CGColor;
    CGContextSetFillColorWithColor(context, dotColor);
    CGContextSetShadowWithColor(context, CGSizeMake(0, 1), 5, shadowColor);
    CGContextFillRect(context, CGRectMake(0, 0, 480, 1));
}
Previously I made a pattern which drew two hexagons inside a 24x24 square. It looked more complicated than the code above, but it takes only 0.15 seconds to get drawn.
Am I doing something wrong here? My knowledge of Quartz drawing is not that deep and I would appreciate some input. Thanks in advance!
I hate to answer myself, but after some trial and error I found that changing the pattern code to the following does the trick:
UIColor *__autoreleasing tileColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"pattern6-65.png"]];
CGColorRef tileCGColor = [tileColor CGColor];
CGColorSpaceRef colorSpace = CGColorGetColorSpace(tileCGColor);
CGContextSetFillColorSpace(context, colorSpace);
if (CGColorSpaceGetModel(colorSpace) == kCGColorSpaceModelPattern) {
    CGFloat alpha = 1.0f;
    CGContextSetFillPattern(context, CGColorGetPattern(tileCGColor), &alpha);
} else {
    CGContextSetFillColor(context, CGColorGetComponents(tileCGColor));
}
CGContextFillRect(context, CGRectMake(0, 0, 10, 10));
What it does is fill the screen with a pattern generated from a very small PNG, so I had to draw the PNG in Photoshop and add it to the project. The bigger the PNG the faster it gets, but the harder it is to draw in Photoshop.
The call for the pattern must be modified to fit the size of the PNG (10 pixels here):
CGPatternRef pattern = CGPatternCreate(NULL,
                                       rect, // surface the pattern has to cover, e.g. CGRectMake(0, 0, self.bounds.size.width, 8)
                                       CGAffineTransformIdentity,
                                       10, // spacing between pattern cells (X)
                                       10, // spacing between pattern cells (Y)
                                       kCGPatternTilingConstantSpacing,
                                       true,
                                       &callbacks);
Using this 10px PNG, the whole drawing time has dropped to 0.18 seconds (gradient background plus pattern).
I hope it helps somebody with the same problem.
If I understand it right, you are trying to draw a gradient pattern onto the tableView background.
I would recommend creating a .png image in any graphics editor and putting it onto the background.
Hope it helps

iOS: Applying a RGB filter to a greyscale PNG

I have a greyscale gem top view (PNG format, so it has an alpha component).
I would like to create 12 small size buttons, each in a different colour, from this image.
For the sake of tidiness, I would like to do this within the code rather than externally in some art package.
Can anyone provide a method (or even some code) for doing this?
PS: I am aware of how to do it in GL using a ridiculous amount of code; I'm hoping there is a simpler way using Core Graphics / Core Animation.
EDIT: Working solution, thanks to the awesome answer below:
CGSize targetSize = (CGSize){100, 100};
UIImage *image;
{
    CGRect rect = (CGRect){ .size = targetSize };
    UIGraphicsBeginImageContext(targetSize);
    {
        CGContextRef X = UIGraphicsGetCurrentContext();
        UIImage *uiGem = [UIImage imageNamed:@"GemTop_Dull.png"];

        // draw gem
        [uiGem drawInRect:rect];

        // overlay a red rectangle
        CGContextSetBlendMode(X, kCGBlendModeColor);
        CGContextSetRGBFillColor(X, 0.9, 0, 0, 1);
        CGContextFillRect(X, rect);

        // redraw gem
        [uiGem drawInRect:rect
                blendMode:kCGBlendModeDestinationIn
                    alpha:1.0];

        image = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
}
The easiest way to do it is to draw the image into an RGB-colorspaced CGBitmapContext, use CGContextSetBlendMode to set kCGBlendModeColor, and then draw over it with a solid color (e.g. with CGContextFillRect).
The best looking results are going to come from using the gray value to index into a gradient that goes from the darkest to the lightest colors of the desired result. Unfortunately I don't know the specifics of doing that with core graphics.
This is an improvement upon the answer in the question, and an implementation of @Anomie's suggestion.
First, put this at the beginning of your UIButton subclass or your view controller. It translates from UIColor to RGBA component values, which you will need later. (Note that it assumes the UIColor is in an RGB colorspace; greyscale colors only have two components.)
typedef enum { R, G, B, A } UIColorComponentIndices;

@implementation UIColor (EPPZKit)

- (CGFloat)redRGBAValue {
    return CGColorGetComponents(self.CGColor)[R];
}
- (CGFloat)greenRGBAValue {
    return CGColorGetComponents(self.CGColor)[G];
}
- (CGFloat)blueRGBAValue {
    return CGColorGetComponents(self.CGColor)[B];
}
- (CGFloat)alphaRGBAValue {
    return CGColorGetComponents(self.CGColor)[A];
}

@end
Now, make sure that you have your custom image button in IB, with a grayscale image and the right frame. This is considerably better and easier than creating the custom image button programmatically, because:
you can let IB load the image, instead of having to load it manually
you can adjust the button and see it visually in IB
your IB will look more like your app at runtime
you don't have to manually set frames
Assuming the button is set up in IB (support for creating it programmatically is near the bottom), add this method to your view controller or button subclass:
- (UIImage*)customImageColoringFromButton:(UIButton*)customImageButton fromColor:(UIColor*)color {
    UIImage *customImage = [customImageButton.imageView.image copy];
    UIGraphicsBeginImageContext(customImageButton.imageView.frame.size);
    {
        CGContextRef X = UIGraphicsGetCurrentContext();
        [customImage drawInRect:customImageButton.imageView.frame];

        // Overlay a colored rectangle
        CGContextSetBlendMode(X, kCGBlendModeColor);
        CGContextSetRGBFillColor(X, color.redRGBAValue, color.greenRGBAValue, color.blueRGBAValue, color.alphaRGBAValue);
        CGContextFillRect(X, customImageButton.imageView.frame);

        // Redraw the image to restore the original alpha
        [customImage drawInRect:customImageButton.imageView.frame blendMode:kCGBlendModeDestinationIn alpha:1.0];

        customImage = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
    return customImage;
}
You will then need to call it in a setup method in your view controller or button subclass, and set the imageView of the button to the result:
[myButton.imageView setImage:[self customImageColoringFromButton:myButton fromColor:desiredColor]];
If you are not using IB to create the button, use this method:
- (UIImage*)customImageColoringFromImage:(UIImage*)image fromColor:(UIColor*)color fromFrame:(CGRect)frame {
    UIImage *customImage = [image copy];
    UIGraphicsBeginImageContext(frame.size);
    {
        CGContextRef X = UIGraphicsGetCurrentContext();
        [customImage drawInRect:frame];

        // Overlay a colored rectangle
        CGContextSetBlendMode(X, kCGBlendModeColor);
        CGContextSetRGBFillColor(X, color.redRGBAValue, color.greenRGBAValue, color.blueRGBAValue, color.alphaRGBAValue);
        CGContextFillRect(X, frame);

        // Redraw the image to restore the original alpha
        [customImage drawInRect:frame blendMode:kCGBlendModeDestinationIn alpha:1.0];

        customImage = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
    return customImage;
}
And call it with:
[self.disclosureButton.imageView setImage:[self customImageColoringFromImage:[UIImage imageNamed:@"GemTop_Dull.png"] fromColor:desiredColor fromFrame:desiredFrame]];
