UIGraphicsBeginImageContext vs CGBitmapContextCreate - ios

I'm trying to change the color of an image on a background thread.
The Apple documentation says UIGraphicsBeginImageContext can only be called from the main thread, so I'm trying to use CGBitmapContextCreate instead:
context = CGBitmapContextCreate(bitmapData,
                                pixelsWide,
                                pixelsHigh,
                                8,              // bits per component
                                bitmapBytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedFirst);
I have two versions of changeColor: the first uses UIGraphicsBeginImageContext, the second uses CGBitmapContextCreate.
The first one correctly changes the color, but the second one doesn't.
Why is that?
- (UIImage*) changeColor: (UIColor*) aColor
{
    if (aColor == nil)
        return self;

    UIGraphicsBeginImageContext(self.size);

    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;

    [aColor set];

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}
- (UIImage*) changeColor: (UIColor*) aColor
{
    if (aColor == nil)
        return self;

    CGContextRef context = CreateARGBBitmapContext(self.size);

    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;

    CGColorRef colorRef = aColor.CGColor;
    const CGFloat *components = CGColorGetComponents(colorRef);
    float red = components[0];
    float green = components[1];
    float blue = components[2];
    CGContextSetRGBFillColor(context, red, green, blue, 1);

    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);

    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage* img = [UIImage imageWithCGImage: imageRef];

    unsigned char* data = (unsigned char*)CGBitmapContextGetData(context);
    CGContextRelease(context);
    free(data);

    return img;
}
CGContextRef CreateARGBBitmapContext(CGSize size)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = size.width;
    size_t pixelsHigh = size.height;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,      // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the color space before returning.
    CGColorSpaceRelease(colorSpace);

    return context;
}

Your second method is doing work that the first never did. Here's an adjustment of the second method to more closely match the first one:
- (UIImage*) changeColor: (UIColor*) aColor
{
    if (aColor == nil)
        return self;

    CGContextRef context = CreateARGBBitmapContext(self.size);

    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;

    CGContextSetFillColorWithColor(context, aColor.CGColor);
    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);

    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage* img = [UIImage imageWithCGImage: imageRef];
    CGContextRelease(context);

    return img;
}
The two changes I made were converting this to use CGContextSetFillColorWithColor() and removing the dangerous and incorrect free() of the bitmap context's backing data. If this snippet does not behave identically to the first method, then you will have to look at your implementation of CreateARGBBitmapContext() to verify that it is correct.
Of course, as Brad Larson mentioned in the comments, if you're targeting iOS 4.0 and above, the UIKit graphics methods are (according to the release notes) thread-safe and you should be able to use the first method just fine.
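For example, assuming iOS 4.0+ where the UIKit drawing calls are thread-safe, a minimal sketch of running the first method off the main thread with GCD might look like this (image and imageView are hypothetical names for objects in the surrounding code):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Safe on a background thread on iOS 4.0+, per the release notes.
    UIImage *tinted = [image changeColor:[UIColor redColor]];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit views must still be updated on the main thread.
        imageView.image = tinted;
    });
});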

Related

Convert indexed color .png to RGB or greyscale

I'm writing a Today widget that needs to display an image.
I noticed that every time the widget loads, the image is redrawn. This takes about half a second.
After some investigation, I found that the culprit is that the image file is in an indexed color space.
So my question is:
How do I convert this file to something the iPhone can display more efficiently, for instance an RGB file? I would then save it to a new file and load that new file in my UIImageView.
I played around a bit with CGImage, since I believe that is the direction of the solution, but I end up with a white UIImageView.
This is my code:
UIImage *theCartoon = [UIImage imageWithData:imageData];
CGImageRef si = [theCartoon CGImage];
CGDataProviderRef src = CGImageGetDataProvider(si);
CGImageRef imageRef = CGImageCreateWithPNGDataProvider(src, NULL, NO, kCGRenderingIntentDefault);
cartoon.image = [[UIImage alloc] initWithCGImage:imageRef];
Any suggestions on this approach? Is there some obvious mistake?
Try this
// The source image
CGImageRef image = theCartoon.CGImage;
CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));

// The result image in RGB color space
CGImageRef result = nil;

// Check the color space
CGColorSpaceRef srcColorSpace = CGImageGetColorSpace(image);
if (CGColorSpaceGetModel(srcColorSpace) != kCGColorSpaceModelRGB) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaNoneSkipLast);
    CGRect rect = {CGPointZero, size};
    CGContextDrawImage(context, rect, image);
    result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}
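The snippet above leaves result as a CGImageRef, and only fills it in when the source is not already RGB. A hedged completion (my addition, assuming you want a UIImage back with a fallback to the original):
// Wrap the CGImageRef in a UIImage and release the Core Graphics object.
// Falling back to the original image is an assumption, not part of the answer.
UIImage *converted = theCartoon;
if (result) {
    converted = [UIImage imageWithCGImage:result];
    CGImageRelease(result);
}
cartoon.image = converted;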
It's been a while since the question was asked, but for others who might need this, here is my solution:
- (UIImage *)convertIndexedColorSpaceToRGB:(UIImage *)sourceImage {
    CGImageRef originalImageRef = sourceImage.CGImage;
    const CGBitmapInfo originalBitmapInfo = CGImageGetBitmapInfo(originalImageRef);

    // See: http://stackoverflow.com/questions/23723564/which-cgimagealphainfo-should-we-use
    const uint32_t alphaInfo = (originalBitmapInfo & kCGBitmapAlphaInfoMask);
    CGBitmapInfo bitmapInfo = originalBitmapInfo;
    switch (alphaInfo)
    {
        case kCGImageAlphaNone:
            bitmapInfo |= kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast;
            break;
        case kCGImageAlphaPremultipliedFirst:
        case kCGImageAlphaPremultipliedLast:
        case kCGImageAlphaNoneSkipFirst:
        case kCGImageAlphaNoneSkipLast:
            break;
        case kCGImageAlphaOnly:
        case kCGImageAlphaLast:
        case kCGImageAlphaFirst:
        {
            return sourceImage;
        }
            break;
    }

    const CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    const CGSize pixelSize = CGSizeMake(sourceImage.size.width * sourceImage.scale,
                                        sourceImage.size.height * sourceImage.scale);
    const CGContextRef context = CGBitmapContextCreate(NULL,
                                                       pixelSize.width,
                                                       pixelSize.height,
                                                       CGImageGetBitsPerComponent(originalImageRef),
                                                       pixelSize.width * 4,
                                                       colorSpace,
                                                       bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    if (!context) return sourceImage;

    const CGRect imageRect = CGRectMake(0, 0, pixelSize.width, pixelSize.height);
    UIGraphicsPushContext(context);

    // Flip the coordinate system. See: http://stackoverflow.com/questions/506622/cgcontextdrawimage-draws-image-upside-down-when-passed-uiimage-cgimage
    CGContextTranslateCTM(context, 0, pixelSize.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    [sourceImage drawInRect:imageRect];
    UIGraphicsPopContext();

    const CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *image = [UIImage imageWithCGImage:decompressedImageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
    CGImageRelease(decompressedImageRef);
    return image;
}
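To address the original goal (convert once, save, then load the RGB file later), a minimal usage sketch might look like the following; the file name and the use of the Documents directory are assumptions for illustration:
// Convert the indexed-color image once, then cache the RGB PNG on disk.
UIImage *rgbImage = [self convertIndexedColorSpaceToRGB:theCartoon];
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                      NSUserDomainMask, YES) firstObject];
NSString *path = [docs stringByAppendingPathComponent:@"cartoon-rgb.png"]; // hypothetical name
[UIImagePNGRepresentation(rgbImage) writeToFile:path atomically:YES];
// Later loads can read the pre-converted file instead of redrawing:
cartoon.image = [UIImage imageWithContentsOfFile:path];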

Problems with CGBitmapContext and alpha

I'm trying to develop a drawing app using Core Graphics. I want the background to be transparent, but instead it is black. I tried all the different bitmap info types with no success; kCGImageAlphaPremultipliedLast won't work either. Does anyone know how to fix this?
- (BOOL) initContext:(CGSize)size {
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipFirst;
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width, size.height, 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), bitmapInfo);
    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, size});
    return YES;
}
I'm using this code to do pretty much exactly what you're asking. I've set the color to red with a 50% alpha instead of 0% so you can actually see that the alpha channel is there.
@implementation ViewController
{
    CGContextRef cacheContext;
    void* cacheBitmap;
    __weak IBOutlet UIImageView* _imageView;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [self setupContext:self.view.bounds.size];
    CGImageRef cgImage = CGBitmapContextCreateImage(cacheContext);
    _imageView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}

// Name changed to avoid using the magic word "init"
- (BOOL) setupContext:(CGSize)size
{
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault;

    // Create and define the color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width, size.height, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextSetRGBFillColor(cacheContext, 1., 0, 0, 0.5);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, size});

    // Release the color space so memory doesn't leak
    CGColorSpaceRelease(colorSpace);
    return YES;
}
@end
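The decisive difference is the bitmap info constant. As a short annotated comparison (my reading of the two snippets, not part of the original answer):
// kCGImageAlphaNoneSkipFirst reserves 4 bytes per pixel but stores no alpha
// at all, so filling with alpha 0 still leaves opaque black (0,0,0) pixels.
CGBitmapInfo broken  = kCGImageAlphaNoneSkipFirst;
// kCGImageAlphaPremultipliedFirst actually stores an alpha channel, so a
// fill with alpha 0 (or 0.5) stays transparent (or translucent).
CGBitmapInfo working = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault;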

Error in converting an image to grayscale in iOS [duplicate]

I am trying to convert an image into grayscale in the following way:
#define bytesPerPixel 4
#define bitsPerComponent 8

- (unsigned char*) getBytesForImage: (UIImage*)pImage
{
    CGImageRef image = [pImage CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    NSUInteger bytesPerRow = bytesPerPixel * width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return rawData;
}
- (UIImage*) processImage: (UIImage*)pImage
{
    DebugLog(@"processing image");
    unsigned char *rawData = [self getBytesForImage: pImage];

    NSUInteger width = pImage.size.width;
    NSUInteger height = pImage.size.height;
    DebugLog(@"width: %d", width);
    DebugLog(@"height: %d", height);
    NSUInteger bytesPerRow = bytesPerPixel * width;

    for (int xCoordinate = 0; xCoordinate < width; xCoordinate++)
    {
        for (int yCoordinate = 0; yCoordinate < height; yCoordinate++)
        {
            int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;

            // Getting original colors
            float red   = ( rawData[byteIndex]     / 255.f );
            float green = ( rawData[byteIndex + 1] / 255.f );
            float blue  = ( rawData[byteIndex + 2] / 255.f );

            // Processing pixel data
            float averageColor = (red + green + blue) / 3.0f;
            red   = averageColor;
            green = averageColor;
            blue  = averageColor;

            // Assigning new color components
            rawData[byteIndex]     = (unsigned char) red * 255;
            rawData[byteIndex + 1] = (unsigned char) green * 255;
            rawData[byteIndex + 2] = (unsigned char) blue * 255;
        }
    }

    NSData* newPixelData = [NSData dataWithBytes: rawData length: height * width * 4];
    UIImage* newImage = [UIImage imageWithData: newPixelData];
    free(rawData);

    DebugLog(@"image processed");
    return newImage;
}
So when I want to convert an image I just call processImage:
imageToDisplay.image = [self processImage: image];
But imageToDisplay doesn't display. What may be the problem?
Thanks.
I needed a version that preserved the alpha channel, so I modified the code posted by Dutchie432:
@implementation UIImage (grayscale)

typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale {
    CGSize size = [self size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

@end
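A minimal usage sketch for this category (my addition; the asset name is a hypothetical placeholder, imageToDisplay is the image view from the question):
// Assuming the category above is compiled into the project:
UIImage *original = [UIImage imageNamed:@"photo"];    // hypothetical asset name
imageToDisplay.image = [original convertToGrayscale]; // transparency preserved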
Here is code using only UIKit and the luminosity blend mode. A bit of a hack, but it works well.
// Transform the image into grayscale.
- (UIImage*) grayishImage: (UIImage*) inputImage {
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, YES, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return filteredImage;
}
To keep transparency, you may be able to set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO; this needs to be checked.
Based on Cam's code with the ability to deal with the scale for Retina displays.
- (UIImage *) toGrayscale
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
I liked Mathieu Godart's answer, but it didn't seem to work properly for retina or alpha images. Here's an updated version that seems to work for both of those for me:
- (UIImage*)convertToGrayscale
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);

    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f);
    CGContextFillRect(ctx, imageRect);

    // Draw the luminosity on top of the white background to get grayscale
    [self drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0f];

    // Apply the source image's alpha
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage* grayscaleImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return grayscaleImage;
}
What exactly takes place when you use this function? Is the function returning an invalid image, or is the display not showing it correctly?
This is the method I use to convert to greyscale.
- (UIImage *) convertToGreyscale:(UIImage *)i {
    int kRed = 1;
    int kGreen = 2;
    int kBlue = 4;

    int colors = kGreen | kBlue | kRed;

    int m_width = i.size.width;
    int m_height = i.size.height;

    uint32_t *rgbImage = (uint32_t *) malloc(m_width * m_height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_width, m_height, 8, m_width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextDrawImage(context, CGRectMake(0, 0, m_width, m_height), [i CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // now convert to grayscale
    uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height);
    for (int y = 0; y < m_height; y++) {
        for (int x = 0; x < m_width; x++) {
            uint32_t rgbPixel = rgbImage[y*m_width+x];
            uint32_t sum = 0, count = 0;
            if (colors & kRed)   { sum += (rgbPixel>>24)&255; count++; }
            if (colors & kGreen) { sum += (rgbPixel>>16)&255; count++; }
            if (colors & kBlue)  { sum += (rgbPixel>>8)&255;  count++; }
            m_imageData[y*m_width+x] = sum/count;
        }
    }
    free(rgbImage);

    // convert from a grayscale image back into a UIImage
    uint8_t *result = (uint8_t *) calloc(m_width * m_height * sizeof(uint32_t), 1);

    // process the image back to rgb
    for (int i = 0; i < m_height * m_width; i++) {
        result[i*4] = 0;
        int val = m_imageData[i];
        result[i*4+1] = val;
        result[i*4+2] = val;
        result[i*4+3] = val;
    }

    // create a UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_width, m_height, 8, m_width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    free(m_imageData);

    // make sure the data will be released by giving it to an autoreleased NSData
    [NSData dataWithBytesNoCopy:result length:m_width * m_height];

    return resultUIImage;
}
A different approach using CIFilter. It preserves the alpha channel and works with a transparent background:
+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(0.0) forKey:kCIInputSaturationKey];

    CIImage *outputImage = filter.outputImage;
    CGImageRef cgImageRef = [context createCGImage:outputImage fromRect:outputImage.extent];

    UIImage *result = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);

    return result;
}
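One design note (my addition, not from the answer): creating a CIContext is relatively expensive, so if you convert many images it may be worth reusing a single instance rather than building one per call. A minimal sketch:
// Assumed pattern for repeated conversions: cache one CIContext.
static CIContext *sharedContext = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    sharedContext = [CIContext contextWithOptions:nil];
});
// ...then use sharedContext in place of the per-call context above.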
A Swift extension to UIImage, preserving alpha:
extension UIImage {
    private func convertToGrayScaleNoAlpha() -> CGImageRef {
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        return CGBitmapContextCreateImage(context)
    }

    /**
     Return a new image in shades of gray + alpha
     */
    func convertToGrayScale() -> UIImage {
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.Only.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, nil, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        let mask = CGBitmapContextCreateImage(context)
        return UIImage(CGImage: CGImageCreateWithMask(convertToGrayScaleNoAlpha(), mask), scale: scale, orientation: imageOrientation)!
    }
}
Here is another good solution, as a category method on UIImage. It's based on this blog post and its comments, but I fixed a memory issue:
- (UIImage *)grayScaleImage {
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);

    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, nil, kCGImageAlphaOnly);

    // draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [self CGImage]);

    // create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);

    // release graphics context
    CGContextRelease(context);

    // make UIImage from grayscale image with alpha mask
    CGImageRef cgImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:cgImage scale:self.scale orientation:self.imageOrientation];

    // release the CG images
    CGImageRelease(cgImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);

    // return the new grayscale image
    return grayScaleImage;
}
A fast and efficient Swift 3 implementation for iOS 9/10. I feel confident calling it efficient having now tried every image filtering method I could find for processing hundreds of images at a time (when downloading with AlamofireImage's ImageFilter option). For my use case, this method was far better than any other I tried in terms of memory and speed.
func convertToGrayscale() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
    let imageRect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
    let context = UIGraphicsGetCurrentContext()

    // Draw a white background
    context!.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context!.fill(imageRect)

    // optional: increase contrast with colorDodge before applying luminosity
    // (my images were too dark when using just luminosity - you may not need this)
    self.draw(in: imageRect, blendMode: CGBlendMode.colorDodge, alpha: 0.7)

    // Draw the luminosity on top of the white background to get grayscale of original image
    self.draw(in: imageRect, blendMode: CGBlendMode.luminosity, alpha: 0.90)

    // optional: re-apply alpha if your image has transparency - based on user1978534's answer
    // (I haven't tested this as I didn't have transparency - I just know this would be the syntax)
    // self.draw(in: imageRect, blendMode: CGBlendMode.destinationIn, alpha: 1.0)

    let grayscaleImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return grayscaleImage
}
Re the use of colorDodge: I initially had issues getting my images light enough to match the grayscale coloring produced by CIFilter("CIPhotoEffectTonal"); my results came out too dark. I was able to get a decent match by applying CGBlendMode.colorDodge at ~0.7 alpha, which seems to increase the overall contrast.
Other color blend effects might work too, but I think you would want to apply them before luminosity, which is the grayscale filtering effect. I found this page very helpful as a reference on the different blend modes.
Re the efficiency gains I found: I need to process hundreds of thumbnail images as they are loaded from a server (using AlamofireImage for async loading, caching, and applying a filter). I started to experience crashes when the total size of my images exceeded the cache size, so I experimented with other methods.
The CPU-based CoreImage CIFilter approach was the first I tried, and it wasn't memory efficient enough for the number of images I'm handling.
I also tried applying a CIFilter via the GPU using EAGLContext(api: .openGLES3), which was actually even more memory intensive; I got memory warnings at 450+ MB of use while loading 200+ images.
I tried bitmap processing (i.e. CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)), which worked well except that I couldn't get a high enough resolution for a modern Retina device; images were very grainy even after I added context.scaleBy(x: scaleFactor, y: scaleFactor).
So out of everything I tried, this method (drawing in a UIGraphics image context) proved vastly more efficient in both speed and memory when applied as an AlamofireImage filter: less than 70 MB of RAM while processing my 200+ images, which load essentially instantly rather than in the roughly 35 seconds the OpenGL ES approach took. I know these are not very scientific benchmarks; I will instrument it if anyone is very curious, though :)
And lastly, if you need to pass this or another grayscale filter into AlamofireImage, this is how to do it (note you must import AlamofireImage into your class to use ImageFilter):
public struct GrayScaleFilter: ImageFilter {
    public init() {
    }

    public var filter: (UIImage) -> UIImage {
        return { image in
            return image.convertToGrayscale() ?? image
        }
    }
}
To use it, create the filter like this and pass into af_setImage like so:
let filter = GrayScaleFilter()
imageView.af_setImage(withURL: url, filter: filter)
@interface UIImageView (Settings)

- (void)convertImageToGrayScale;

@end

@implementation UIImageView (Settings)

- (void)convertImageToGrayScale
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.image.size.width, self.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self.image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Set the new grayscale image
    self.image = newImage;
}

@end
I have yet another answer. This one is extremely performant and handles retina graphics as well as transparency. It expands on Sargis Gevorgyan's approach:
+ (UIImage*) grayScaleFromImage:(UIImage*)image opaque:(BOOL)opaque
{
    // NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
    CGSize size = image.size;
    CGRect bounds = CGRectMake(0, 0, size.width, size.height);

    // Create bitmap content with current image size and grayscale colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = opaque ? 1 : 2;
    size_t bytesPerRow = bytesPerPixel * size.width * image.scale;
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, bytesPerRow, colorSpace, opaque ? kCGImageAlphaNone : kCGImageAlphaPremultipliedLast);

    // create image from bitmap
    CGContextDrawImage(context, bounds, image.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage* result = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    CGContextRelease(context);

    // performance results on iPhone 6S+ in Release mode.
    // Results are in photo pixels, not device pixels:
    // ~ 5ms for 500px x 600px
    // ~ 15ms for 2200px x 600px
    // NSLog(@"generating %d x %d @ %dx grayscale took %f seconds", (int)size.width, (int)size.height, (int)image.scale, [NSDate timeIntervalSinceReferenceDate] - start);
    return result;
}
Using blend modes instead is elegant, but copying to a grayscale bitmap is more performant because you only use one or two color channels instead of four. The opaque BOOL is meant to take your UIView's opaque flag, so you can opt out of an alpha channel if you know you won't need one.
I haven't tried the Core Image based solutions in this answer thread, but I would be very cautious about using Core Image if performance is important.
That's my attempt at a fast conversion: drawing directly into a grayscale color space, without enumerating each pixel. It works about 10x faster than the CIFilter-based solutions.
@implementation UIImage (Grayscale)

static UIImage *grayscaleImageFromCIImage(CIImage *image, CGFloat scale)
{
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, image, kCIInputBrightnessKey, @0.0, kCIInputContrastKey, @1.1, kCIInputSaturationKey, @0.0, nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, kCIInputEVKey, @0.7, nil].outputImage;
    CGImageRef ref = [[CIContext contextWithOptions:nil] createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(ref);
    return result;
}

static UIImage *grayscaleImageFromCGImage(CGImageRef imageRef, CGFloat scale)
{
    NSInteger width = CGImageGetWidth(imageRef) * scale;
    NSInteger height = CGImageGetHeight(imageRef) * scale;

    NSMutableData *pixels = [NSMutableData dataWithLength:width*height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8, width, colorSpace, 0);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(ref);

    return result;
}

- (UIImage *)grayscaleImage
{
    if (self.CIImage) {
        return grayscaleImageFromCIImage(self.CIImage, self.scale);
    } else if (self.CGImage) {
        return grayscaleImageFromCGImage(self.CGImage, self.scale);
    }
    return nil;
}

@end
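A minimal usage sketch (my addition; the image and image view names are hypothetical):
// Works whether the UIImage is CIImage-backed or CGImage-backed:
UIImage *gray = [someImage grayscaleImage];
if (gray) {
    imageView.image = gray;
}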

How to replace a colour in a UIImage with another colour [duplicate]

This question already exists:
replacing specific color in a uiimage
Closed 9 years ago.
I want to replace a color in a UIImage with another color: first I touch the image and detect the color of the touched pixel, then I want to replace that pixel's color with another one.
I have the following code, in which I am trying to change the touched pixel's color, but it returns a transparent image.
- (UIColor *)colorAtPixel:(CGPoint)point
{
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    red   = (CGFloat)pixelData[0] / 255.0f;
    green = (CGFloat)pixelData[1] / 255.0f;
    blue  = (CGFloat)pixelData[2] / 255.0f;
    alpha = (CGFloat)pixelData[3] / 255.0f;

    [self changeWhiteColorTransparent:imageview.img];
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
- (void)changeWhiteColorTransparent:(UIImage *)image {
    CGImageRef rawImageRef = image.CGImage;
    const float colorMasking[6] = { red, 0, green, 0, blue, 0 };

    UIGraphicsBeginImageContext(image.size);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);
    {
        // if on iPhone: flip the coordinate system
        CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, image.size.height);
        CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    }
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();

    NSString *imagespath = [self createDirectoryInDocumentsFolderWithName:@"images"];
    NSFileManager *fileM = [NSFileManager defaultManager];
    NSArray *contents = [fileM contentsOfDirectoryAtPath:imagespath error:nil];
    NSString *savedImagePath = [imagespath stringByAppendingPathComponent:[NSString stringWithFormat:@"images%d.png", [contents count] + 1]];
    NSData *imageData = UIImagePNGRepresentation(result);
    [imageData writeToFile:savedImagePath atomically:NO];
}
Here I am trying to make the touched colour transparent, but if I want to change it to yellow instead, what do I need to change in the code?
Call this method from your touch-move handler...
- (void)getPixelColorAtLocation:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -point.x, -point.y);

    [self.layer renderInContext:context];

    // NSLog(@"x- %f y- %f", point.x, point.y);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    NSLog(@"RGB Color code: %d %d %d", pixel[0], pixel[1], pixel[2]);
}
Set this RGB value on your image view. You can get the color code of the touch point as an RGB combination.
Try it; it will help you.
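Neither snippet actually answers the follow-up about filling with yellow. One hedged approach (my sketch, not from the thread): draw the replacement colour behind the masked image, so the pixels removed by CGImageCreateWithMaskingColors show the new colour through. Note this also colours any areas that were transparent in the original:
// A sketch assuming `image` and `maskedImageRef` from changeWhiteColorTransparent: above.
UIGraphicsBeginImageContext(image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Fill the canvas with the replacement colour first...
CGContextSetFillColorWithColor(ctx, [UIColor yellowColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
// ...then draw the masked image on top; the masked-out pixels let the yellow show.
CGContextTranslateCTM(ctx, 0.0, image.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, image.size.height), maskedImageRef);
UIImage *recolored = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();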

Error in for loop when rewriting values

I used the following method to convert the input image to grayscale and threshold it:
UIImage *image = self.imageView.image;

// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSLog(@"width %f, height %f", image.size.width, image.size.height);

// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);

// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);

// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];

// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);

// Use the new grayscale image
self.imageView.image = newImage;
// Threshold the grayscale image
CGImageRef sourceImage = newImage.CGImage; // creates a CGImage reference for it

CFDataRef theData; // a CFDataRef to hold the image's pixel data
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage)); // copies the actual image data

UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);

int dataLength = CFDataGetLength(theData);
int counter = 0;
for (int index = 0; index < dataLength; index += 4)
{
    if (pixelData[index] < 180)
    {
        NSLog(@"The intensity is %u", pixelData[index]);
        pixelData[index] = 0;
        //pixelData[index+1] = 0;
        //pixelData[index+2] = 0;
        //pixelData[index+3] = 0;
    }
    else
    {
        NSLog(@"The intensity is %u", pixelData[index]);
        pixelData[index] = 255;
        //pixelData[index+1] = 255;
        //pixelData[index+2] = 0;
        //pixelData[index+3] = 0;
    }
}
The app crashes when the for loop tries to rewrite the intensities here:
pixelData[index] = 0;
Could someone please help me out?
Thanks!
CFDataGetBytePtr returns a read-only pointer, as the Apple docs say.
A solution is proposed here.
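A hedged sketch of the usual fix (my illustration, since the linked solution isn't quoted here): copy the pixel data into a mutable buffer, modify that, and rebuild the image from it. theData and sourceImage come from the question's code:
// Copy the (read-only) pixel data into a mutable CFData we may write to.
CFMutableDataRef mutableData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, theData);
UInt8 *writablePixels = CFDataGetMutableBytePtr(mutableData);
// ... run the thresholding loop over writablePixels instead of pixelData ...

// Rebuild a CGImage from the modified bytes, reusing the source's geometry.
CGDataProviderRef provider = CGDataProviderCreateWithCFData(mutableData);
CGImageRef thresholded = CGImageCreate(CGImageGetWidth(sourceImage),
                                       CGImageGetHeight(sourceImage),
                                       CGImageGetBitsPerComponent(sourceImage),
                                       CGImageGetBitsPerPixel(sourceImage),
                                       CGImageGetBytesPerRow(sourceImage),
                                       CGImageGetColorSpace(sourceImage),
                                       CGImageGetBitmapInfo(sourceImage),
                                       provider, NULL, NO,
                                       kCGRenderingIntentDefault);
self.imageView.image = [UIImage imageWithCGImage:thresholded];
CGImageRelease(thresholded);
CGDataProviderRelease(provider);
CFRelease(mutableData);
CFRelease(theData);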
