How to change colour of individual pixel of UIImage/UIImageView - ios

I have a UIImageView to which I have applied the filter:
testImageView.layer.magnificationFilter = kCAFilterNearest;
So that the individual pixels are visible. This UIImageView is within a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:
I first set up a tap gesture recognizer:
UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to work out which pixel of the UIImageView was tapped:
- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
CGPoint touchPoint = [gesture locationInView:testImageView];
NSLog(#"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.width)*1000, testImageView.bounds.size.width);
}
I would like to be able to tap a pixel and have its colour change. However, none of the Stack Overflow posts I have found have answers that work or that aren't outdated. Perhaps skilled coders can help me decipher the older posts to make something that works, or produce a simple fix of their own using my code above for detecting which pixel of the UIImageView has been tapped.
All help is appreciated.
Edit for originaluser2:
After following originaluser2's post, the code works perfectly when I run it through his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced with white space and I get the following errors:
<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
The code clearly works, as demonstrated by testing it on my phone, yet the same code produces a few issues in my own project. I suspect they are all caused by one or two simple, central problems. How can I solve these errors?

You'll want to break this problem up into multiple steps.
Get the coordinates of the touched point in the image coordinate system
Get the x and y position of the pixel to change
Create a bitmap context and replace the given pixel's components with your new color's components.
First of all, to get the coordinates of the touched point in the image coordinate system – you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates – depending on the content mode of the view.
@interface UIImageView (PointConversionCatagory)
@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;
@end
@implementation UIImageView (PointConversionCatagory)
-(CGAffineTransform) viewToImageTransform {
UIViewContentMode contentMode = self.contentMode;
// failure conditions. If any of these are met – return the identity transform
if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
(contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
return CGAffineTransformIdentity;
}
// the width and height ratios
CGFloat rWidth = self.image.size.width/self.frame.size.width;
CGFloat rHeight = self.image.size.height/self.frame.size.height;
// whether the image will be scaled according to width
BOOL imageWiderThanView = rWidth > rHeight;
if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
// The ratio to scale both the x and y axis by
CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;
// The x-offset of the inner rect as it gets centered
CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
// The y-offset of the inner rect as it gets centered
CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
} else {
return CGAffineTransformMakeScale(rWidth, rHeight);
}
}
-(CGAffineTransform) imageToViewTransform {
return CGAffineTransformInvert(self.viewToImageTransform);
}
@end
There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.
Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just want to use the above category property viewToImageTransform to get the pixel in the image coordinate system, and then use floor to make the values integral.
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];
...
-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
if (!imageView.image) {
return;
}
// get the pixel position
CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
// replace image with new image, with the pixel replaced
imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}
Finally, you'll want to use another of my category methods – imageWithPixel:replacedByColor: – to get your new image with the given pixel replaced by a given color.
/// A simple struct to represent the position of a pixel
struct PixelPosition {
NSInteger x;
NSInteger y;
};
typedef struct PixelPosition PixelPosition;
@interface UIImage (UIImagePixelManipulationCatagory)
@end
@implementation UIImage (UIImagePixelManipulationCatagory)
-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
// components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
UInt8* color255Components = calloc(sizeof(UInt8), 4);
for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
// raw image reference
CGImageRef rawImage = self.CGImage;
// image attributes
size_t width = CGImageGetWidth(rawImage);
size_t height = CGImageGetHeight(rawImage);
CGRect rect = {CGPointZero, {width, height}};
// image format
size_t bitsPerComponent = 8;
size_t bytesPerRow = width*4;
// the bitmap info
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
// data pointer – stores an array of the pixel components. For example (r0, g0, b0, a0, r1, g1, b1, a1, ..., rn, gn, bn, an)
UInt8* data = calloc(bytesPerRow, height);
// get new RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create bitmap context
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
// draw image into context (populating the data array while doing so)
CGContextDrawImage(ctx, rect, rawImage);
// get the index of the pixel's first component (4 components per pixel, times (x + y * row width))
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
// get image from context
CGImageRef img = CGBitmapContextCreateImage(ctx);
// clean up
free(color255Components);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(data);
UIImage* returnImage = [UIImage imageWithCGImage:img];
CGImageRelease(img);
return returnImage;
}
@end
What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.
The important bit of this method is:
// get the index of the pixel's first component (4 components per pixel, times (x + y * row width))
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a
What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) – then uses that index to replace the component data of that pixel with the color components of your replacement color.
Finally, we get out an image from the bitmap context and perform some cleanup.
Finished Result:
Full Project: https://github.com/hamishknight/Pixel-Color-Changing

You could try something like the following:
UIImage *originalImage = [UIImage imageNamed:@"something"];
CGSize size = originalImage.size;
UIGraphicsBeginImageContext(size);
[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
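A rough sketch of wiring that into the tap handling from the question, assuming the image is displayed 1:1 so the tapped pixel coordinates can be passed straight through (the method and variable names here are illustrative, not part of the answer):
// Illustrative helper: recolour a single pixel of testImageView's image.
// pixelX/pixelY are assumed to be the integer pixel coordinates derived
// from the tap (e.g. via floor of the values logged in the question).
- (void)colourPixelAtX:(NSInteger)pixelX y:(NSInteger)pixelY withColor:(UIColor *)myColor {
    UIImage *originalImage = testImageView.image;
    CGSize size = originalImage.size;
    UIGraphicsBeginImageContext(size);
    [originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [myColor setFill];
    UIRectFill(CGRectMake(pixelX, pixelY, 1, 1));
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    testImageView.image = newImage;
}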

Related

Draw rectangles on image view.image not scaling properly - iOS

I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to remote service (Microsoft face detection) for processing.
Remote service returns JSON of CGRect's for each detected face on the image.
I feed JSON into my UIView to draw the rectangles. I initiate my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}. <-- my thinking, a frame equivalent to the size of the imageView.image
Add my UIView as a subview of self.imageView OR self.view (tried both)
End Result:
Rectangles are drawn but they do not appear correctly on the imageView.image. That is, the CGRects generated for each of the faces are supposed to be relative to the image's coordinate space, as returned by the remote service but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort, as, if I divide each value in the CGRects by 2 (as a test), I can get an approximation, but it's still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of faces in the image, in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initiating my view with a frame equivalent to the imageView.image's frame so that the view matches an identical coordinate space as the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing them by 2.
I am new to iOS and broke away from the books to work on this as a self exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which includes the rectangle for each face) via the following method, which gets called from -(void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
// build an array of CGRect dicts from the JSON returned for the analyzed image
NSMutableArray *array = [self analizeImage:self.imageView.image];
// enumerate over array using block - each obj in array represents one face
[array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
// build dictionary of rects and attributes for the face
NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
// initiate face model object with dictionary
ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
NSLog(#"%#", face.faceId);
NSLog(#"%d", face.age);
NSLog(#"%#", face.gender);
NSLog(#"%f", face.faceRect.origin.x);
NSLog(#"%f", face.faceRect.origin.y);
NSLog(#"%f", face.faceRect.size.height);
NSLog(#"%f", face.faceRect.size.width);
// define frame for subview containing face rectangle
CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
// initiate rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
// add view as subview of imageview (?)
[self.imageView addSubview:faceRect];
}];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(#"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(#"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size points
CGSize imgSizeInPoints = img.size;
NSLog(#"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate Image frame (within imgview) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initiate rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add view as subview of image view
[iv addSubview:faceRect];
}];
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is just a matter of screen density. For instance, on an iPhone 5, 1 pt = 2 px, and on a 3GS, 1 pt = 1 px. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate system of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad is called as when you see it on screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you much code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the scale ratio.
Define the frame of a face using what you did. Then you have to move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but you can quickly find information about it on the Internet (a rough sketch follows after this list).
If you use Auto Layout, add the frame of the face once Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews is called.
Or, better yet, use constraints to position your frame within the UIImageView.
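For illustration, a rough Objective-C sketch of both conversions, assuming UIViewContentModeScaleAspectFit and a face rectangle returned in image pixels (the method name is made up):
// Convert a face rect in image pixels to the coordinate system of an
// aspect-fit UIImageView.
- (CGRect)viewRectForFaceRect:(CGRect)faceRectInPixels inImageView:(UIImageView *)iv {
    UIImage *img = iv.image;
    // 1. pixels -> points (the image's scale factor relates the two)
    CGFloat s = img.scale;
    CGRect faceRectInPoints = CGRectMake(faceRectInPixels.origin.x / s,
                                         faceRectInPixels.origin.y / s,
                                         faceRectInPixels.size.width / s,
                                         faceRectInPixels.size.height / s);
    // 2. image coordinates -> view coordinates for the aspect-fit content mode
    CGFloat fitScale = fminf(CGRectGetWidth(iv.bounds) / img.size.width,
                             CGRectGetHeight(iv.bounds) / img.size.height);
    CGFloat xOffset = 0.5f * (CGRectGetWidth(iv.bounds) - img.size.width * fitScale);
    CGFloat yOffset = 0.5f * (CGRectGetHeight(iv.bounds) - img.size.height * fitScale);
    return CGRectMake(faceRectInPoints.origin.x * fitScale + xOffset,
                      faceRectInPoints.origin.y * fitScale + yOffset,
                      faceRectInPoints.size.width * fitScale,
                      faceRectInPoints.size.height * fitScale);
}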
I hope this is clear enough.
Some links :
iOS Drawing Concepts
Displayed Image Frame In UIImageView

Custom UIProgressView with image

I’m trying to make a custom UIProgressView where the image that gets filled up is the Nike Swoosh. I’ve tried to follow some tutorials but am getting nowhere.
My current approach:
Make the inside of swoosh transparent and surroundings black.
Then put a big UIProgressView behind that.
Since the middle of the swoosh is transparent, it looks like the swoosh is filling up.
But, modifying the height of the progress bar has proven to be a pain since it messes with the width in a weird way…and it’s hard to align the swoosh with the progress bar for responsiveness.
Are there any other ideas or libraries out there?
Thanks
My suggestion:
draw a second image programmatically to apply as a mask against your 'swoosh' image, and then regenerate it each time the progress changes.
example, fill image from left (mask it from right)
{
//existing variables
IBOutlet UIImageView *swooshView;
}
-(UIImage *)maskImageOfSize:(CGSize)size filledTo:(CGFloat)percentage {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGRect fillRect = CGRectZero;
    fillRect.size.height = size.height;
    fillRect.size.width = size.width * percentage / 100.0;
    fillRect.origin.x = (size.width - fillRect.size.width);
    CGContextFillRect(context, fillRect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
-(void)fillSwooshToPercentage:(CGFloat)percentage {
    // just policing a 'floor' and 'ceiling'...
    percentage = (CGFloat)fmaxf(0.0, fminf(100.0, (float)percentage));
    // a layer's mask must itself be a layer, so wrap the mask image in a CALayer
    UIImage *maskImage = [self maskImageOfSize:swooshView.bounds.size filledTo:percentage];
    CALayer *maskLayer = [CALayer layer];
    maskLayer.frame = swooshView.bounds;
    maskLayer.contents = (__bridge id)maskImage.CGImage;
    swooshView.layer.mask = maskLayer;
}
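Usage might then look something like this (the progress callback here is hypothetical):
// Hypothetical caller: update the swoosh as a download progresses (0.0–1.0).
- (void)downloadProgressDidChange:(CGFloat)progress {
    [self fillSwooshToPercentage:progress * 100.0];
}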

How to remove opacity but keep the alpha channel of UIImage?

I have a layer where I want the user to draw a 'mask' for cutting out images. It is semi-opaque so that they can see beneath what they are selecting.
How can I process this so that the drawing data has an alpha of 1.0, but retain the alpha channel (for masking)?
TL;DR: I'd like the black area to be a solid, single colour.
Here is the desired before and after (the white background should be transparent in both):
something like this:
for (pixel in image) {
if (pixel.alpha != 0.0) {
fill solid black
}
}
The following should do what you're after. The majority of the code is from How to set the opacity/alpha of a UIImage? I only added a test for the alpha value before converting the pixel to be fully opaque.
// Create a pixel buffer in an easy to use format
CGImageRef imageRef = [[UIImage imageNamed:@"testImage"] CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//alter the alpha when the alpha of the source != 0
int length = height * width * 4;
for (int i=0; i<length; i+=4) {
if (m_PixelBuf[i+3] != 0) {
m_PixelBuf[i+3] = 255;
}
}
//create a new image
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);
UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);
finalImage will now contain an image where all pixels that don't have an alpha of 0.0 have alpha of 1.
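If the drawing colour weren't guaranteed to be black, a slightly safer variant of the loop (an assumption, not in the original answer) also overwrites the colour components:
// Variant: force every non-transparent pixel to opaque black, whatever colour it was drawn with.
for (int i = 0; i < length; i += 4) {
    if (m_PixelBuf[i+3] != 0) {
        m_PixelBuf[i]   = 0;   // r
        m_PixelBuf[i+1] = 0;   // g
        m_PixelBuf[i+2] = 0;   // b
        m_PixelBuf[i+3] = 255; // a
    }
}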
The underlying model for this app should not be images. This is not a question of "how do I create one rendition of the image from the other."
Instead, the underlying object model should be an array of paths. Then, when you want to create the image with translucent paths vs opaque paths, it's just a question of how you render this array of paths. Once you tackle it that way, the problem is not a complex image manipulation question but a simple rendering question.
By the way, I really like this array-of-paths model, because then it becomes quite trivial to do things like "gee, let me provide an undo function, letting the user remove one stroke at a time." It opens you up to all sorts of nice functional enhancements.
In terms of the specifics of how to render these paths, it can be implemented in a variety of different ways. You could use a custom drawRect: implementation in a UIView subclass that renders the paths with the appropriate alpha. Or you can do it with CAShapeLayer objects, too. Or you can do some hybrid (creating new image snapshots as you finish adding each path, saving you from having to re-render all of the paths each time). There are tons of ways of tackling this.
But the key insight is to employ an underlying model of an array of paths; the rendering of your two types of images then becomes a fairly trivial exercise:
The first image is a rendering of a bunch of paths as CAShapeLayer objects with alpha of 0.5. The second is the same rendering, but with an alpha of 1.0. Again, it doesn't matter if you use shape layers or low level Core Graphics calls, but the underlying idea is the same. Either render your paths with translucency or not.
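As a rough sketch of the shape-layer variant (the paths array, stroke colour and line width are assumptions, not the poster's code):
// Render an array of UIBezierPath strokes into a layer, translucently while the
// user draws (alpha 0.5) or opaquely for the final mask (alpha 1.0).
- (void)renderPaths:(NSArray *)paths inLayer:(CALayer *)layer withAlpha:(CGFloat)alpha {
    layer.sublayers = nil; // drop any previously rendered strokes
    for (UIBezierPath *path in paths) {
        CAShapeLayer *shape = [CAShapeLayer layer];
        shape.path = path.CGPath;
        shape.strokeColor = [UIColor blackColor].CGColor;
        shape.fillColor = nil;
        shape.lineWidth = 10.0;
        shape.lineCap = kCALineCapRound;
        shape.opacity = alpha;
        [layer addSublayer:shape];
    }
}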

Draw on UIImage to erase and see through, revealing the UIImage layer underneath [duplicate]

This question already has an answer here:
How to erase piece of UIImageView with png-brush and UIBezierPath
(1 answer)
Closed 9 years ago.
I spent the last few days trying to find a way, but I'm stuck.
At a certain point in my application I would like two UIImages overlaid, so that when I start "drawing" on the top layer it erases that image and lets you see through, revealing the content underneath.
I don't know my way around Core Graphics, and after spending days on the net I'm wondering if it is possible.
Is there anyone who could help me, or point me in the direction I should follow?
Any help would be very appreciated.
Have a nice day.
Create a ScratchView class and set up your scratch image in its init method, like this:
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if (self) {
scratchable = [UIImage imageNamed:@"scratchable.jpg"].CGImage;
width = CGImageGetWidth(scratchable);
height = CGImageGetHeight(scratchable);
self.opaque = NO;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CFMutableDataRef pixels = CFDataCreateMutable( NULL , width * height );
alphaPixels = CGBitmapContextCreate( CFDataGetMutableBytePtr( pixels ) , width , height , 8 , width , colorspace , kCGImageAlphaNone );
provider = CGDataProviderCreateWithCFData(pixels);
CGContextSetFillColorWithColor(alphaPixels, [UIColor blackColor].CGColor);
CGContextFillRect(alphaPixels, frame);
CGContextSetStrokeColorWithColor(alphaPixels, [UIColor whiteColor].CGColor);
CGContextSetLineWidth(alphaPixels, 20.0);
CGContextSetLineCap(alphaPixels, kCGLineCapRound);
CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, nil, NO);
scratched = CGImageCreateWithMask(scratchable, mask);
CGImageRelease(mask);
CGColorSpaceRelease(colorspace);
}
return self;
}
Also create a second view class to display the background image that is revealed after scratching.
The example below should be very useful to you; try this:
https://github.com/oyiptong/CGScratch
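For reference, the scratching itself is usually driven from the touch handlers. A rough sketch, modeled loosely on the linked CGScratch project (it assumes width, height, alphaPixels, provider and scratchable are ivars of ScratchView, as implied by the init above):
// Erase along the finger by stroking white into the grayscale mask context,
// then ask the view to redraw itself.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    CGPoint previous = [touch previousLocationInView:self];
    // the bitmap context's origin is bottom-left, so flip y relative to UIKit
    CGContextMoveToPoint(alphaPixels, previous.x, self.frame.size.height - previous.y);
    CGContextAddLineToPoint(alphaPixels, point.x, self.frame.size.height - point.y);
    CGContextStrokePath(alphaPixels);
    [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect {
    // rebuild the masked image from the (now modified) mask pixels and draw it
    CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, width, provider, NULL, NO);
    CGImageRef masked = CGImageCreateWithMask(scratchable, mask);
    [[UIImage imageWithCGImage:masked] drawInRect:self.bounds];
    CGImageRelease(mask);
    CGImageRelease(masked);
}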

iOS: Applying a RGB filter to a greyscale PNG

I have a greyscale gem top view.
(PNG format, so has alpha component)
I would like to create 12 small size buttons, each in a different colour, from this image.
For the sake of tidiness, I would like to do this within the code rather than externally in some art package.
Can anyone provide a method (or even some code) for doing this?
PS I am aware of how to do it in GL using a ridiculous amount of code; I'm hoping there is a simpler way using Core Graphics / Core Animation.
EDIT: Working solution, thanks to the awesomeness of the answer below
CGSize targetSize = (CGSize){100,100};
UIImage* image;
{
CGRect rect = (CGRect){ .size = targetSize };
UIGraphicsBeginImageContext( targetSize );
{
CGContextRef X = UIGraphicsGetCurrentContext();
UIImage* uiGem = [UIImage imageNamed:@"GemTop_Dull.png"];
// draw gem
[uiGem drawInRect: rect];
// overlay a red rectangle
CGContextSetBlendMode( X, kCGBlendModeColor ) ;
CGContextSetRGBFillColor ( X, 0.9, 0, 0, 1 );
CGContextFillRect ( X, rect );
// redraw gem
[uiGem drawInRect: rect
blendMode: kCGBlendModeDestinationIn
alpha: 1. ];
image = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
}
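To get the 12 differently coloured buttons, the same drawing can simply be repeated per colour; a rough sketch (the colour list and button setup here are illustrative):
// Generate one tinted gem image per colour and assign it to a custom button.
NSArray *colours = @[ [UIColor redColor], [UIColor orangeColor], [UIColor yellowColor] ]; // ...12 in total
NSMutableArray *buttons = [NSMutableArray array];
for (UIColor *colour in colours) {
    CGRect rect = (CGRect){ .size = targetSize };
    UIGraphicsBeginImageContext(targetSize);
    CGContextRef X = UIGraphicsGetCurrentContext();
    UIImage *uiGem = [UIImage imageNamed:@"GemTop_Dull.png"];
    [uiGem drawInRect:rect];
    CGContextSetBlendMode(X, kCGBlendModeColor);
    CGContextSetFillColorWithColor(X, colour.CGColor);
    CGContextFillRect(X, rect);
    [uiGem drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
    UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
    [button setImage:tinted forState:UIControlStateNormal];
    [buttons addObject:button];
}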
The easiest way to do it is to draw the image into an RGB-colorspaced CGBitmapContext, use CGContextSetBlendMode to set kCGBlendModeColor, and then draw over it with a solid color (e.g. with CGContextFillRect).
The best looking results are going to come from using the gray value to index into a gradient that goes from the darkest to the lightest colors of the desired result. Unfortunately I don't know the specifics of doing that with core graphics.
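For what it's worth, a rough sketch of that gradient-map idea (an assumption, not part of the answer above; it expects an 8-bit RGBA bitmap like the ones used elsewhere on this page and RGB-compatible endpoint UIColors):
// Map each grey value to a colour interpolated between 'dark' and 'light',
// preserving the original alpha channel.
static UIImage *GradientMappedImage(UIImage *greyImage, UIColor *dark, UIColor *light) {
    CGImageRef src = greyImage.CGImage;
    size_t width = CGImageGetWidth(src);
    size_t height = CGImageGetHeight(src);
    size_t bytesPerRow = width * 4;
    UInt8 *data = calloc(bytesPerRow, height);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytesPerRow, space,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);
    CGFloat dr, dg, db, da, lr, lg, lb, la;
    [dark getRed:&dr green:&dg blue:&db alpha:&da];
    [light getRed:&lr green:&lg blue:&lb alpha:&la];
    for (size_t i = 0; i < bytesPerRow * height; i += 4) {
        CGFloat a = data[i + 3] / 255.0;
        // un-premultiply the red channel to recover the grey level (R == G == B in a greyscale image)
        CGFloat t = (a > 0.0) ? MIN((data[i] / 255.0) / a, 1.0) : 0.0;
        data[i]     = (UInt8)round((dr + (lr - dr) * t) * a * 255.0); // r (re-premultiplied)
        data[i + 1] = (UInt8)round((dg + (lg - dg) * t) * a * 255.0); // g
        data[i + 2] = (UInt8)round((db + (lb - db) * t) * a * 255.0); // b
    }
    CGImageRef mapped = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:mapped scale:greyImage.scale orientation:greyImage.imageOrientation];
    CGImageRelease(mapped);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(data);
    return result;
}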
This is an improvement upon the answer in the question and an implementation of @Anomie's suggestion.
First, put this at the beginning of your UIButton class, or your view controller. It translates from UIColor to an RGBA value, which you will need later.
typedef enum { R, G, B, A } UIColorComponentIndices;
@implementation UIColor (EPPZKit)
- (CGFloat)redRGBAValue {
return CGColorGetComponents(self.CGColor)[R];
}
- (CGFloat)greenRGBAValue {
return CGColorGetComponents(self.CGColor)[G];
}
- (CGFloat)blueRGBAValue {
return CGColorGetComponents(self.CGColor)[B];
}
- (CGFloat)alphaRGBAValue {
return CGColorGetComponents(self.CGColor)[A];
}
@end
Now, make sure that you have your custom image button in IB, with a grayscale image and the right frame. This is considerably better and easier than creating the custom image button programmatically, because:
you can let IB load the image, instead of having to load it manually
you can adjust the button and see it visually in IB
your IB will look more like your app at runtime
you don't have to manually set frames
Assuming the button is created in IB (support for creating it programmatically is near the bottom), add this method to your view controller or button subclass:
- (UIImage*)customImageColoringFromButton:(UIButton*)customImageButton fromColor:(UIColor*)color {
UIImage *customImage = [customImageButton.imageView.image copy];
UIGraphicsBeginImageContext(customImageButton.imageView.frame.size); {
CGContextRef X = UIGraphicsGetCurrentContext();
[customImage drawInRect: customImageButton.imageView.frame];
// Overlay a colored rectangle
CGContextSetBlendMode( X, kCGBlendModeColor) ;
CGContextSetRGBFillColor ( X, color.redRGBAValue, color.greenRGBAValue, color.blueRGBAValue, color.alphaRGBAValue);
CGContextFillRect ( X, customImageButton.imageView.frame);
// Redraw
[customImage drawInRect:customImageButton.imageView.frame blendMode: kCGBlendModeDestinationIn alpha: 1.0];
customImage = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
return customImage;
}
You will then need to call it from a setup method in your view controller or button subclass, and set the button's image view's image to the result:
[myButton.imageView setImage:[self customImageColoringFromButton:myButton fromColor:desiredColor]];
If you are not using IB to create the button, use this method:
- (UIImage*)customImageColoringFromImage:(UIImage*)image fromColor:(UIColor*)color fromFrame:(CGRect)frame {
UIImage *customImage = [image copy];
UIGraphicsBeginImageContext(frame.size); {
CGContextRef X = UIGraphicsGetCurrentContext();
[customImage drawInRect: frame];
// Overlay a colored rectangle
CGContextSetBlendMode( X, kCGBlendModeColor) ;
CGContextSetRGBFillColor ( X, color.redRGBAValue, color.greenRGBAValue, color.blueRGBAValue, color.alphaRGBAValue);
CGContextFillRect ( X, frame);
// Redraw
[customImage drawInRect:frame blendMode: kCGBlendModeDestinationIn alpha: 1.0];
customImage = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
return customImage;
}
And call it with:
[self.disclosureButton.imageView setImage:[self customImageColoringFromImage:[UIImage imageNamed:@"GemTop_Dull.png"] fromColor:desiredColor fromFrame:desiredFrame]];
