Error in for loop when rewriting values - iOS

I used the following method to convert the input image to grayscale and threshold it:
UIImage *image = self.imageView.image;
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSLog(#"width %f, height %f", image.size.width, image.size.height );
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
// Create bitmap image info from pixel data in current context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
// Create a new UIImage object
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
// Release colorspace, context and bitmap information
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
// Return the new grayscale image
self.imageView.image = newImage;
// Threshold the grayscale image
CGImageRef sourceImage = newImage.CGImage; // CGImage reference for the grayscale image
CFDataRef theData; // CFDataRef to hold the image's pixel data
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage)); // copy the raw pixel data
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);
int dataLength = CFDataGetLength(theData);
int counter = 0;
for (int index = 0; index < dataLength; index += 4)
{
    if (pixelData[index] < 180)
    {
        NSLog(@"The intensity is %u", pixelData[index]);
        pixelData[index] = 0;
        //pixelData[index+1] = 0;
        //pixelData[index+2] = 0;
        //pixelData[index+3] = 0;
    }
    else
    {
        NSLog(@"The intensity is %u", pixelData[index]);
        pixelData[index] = 255;
        //pixelData[index+1] = 255;
        //pixelData[index+2] = 0;
        //pixelData[index+3] = 0;
    }
}
The app crashes when the for loop tries to rewrite the intensities here:
pixelData[index] = 0;
Could someone please help me out?
Thanks!

CFDataGetBytePtr returns a read-only pointer, as the Apple docs state, so writing through it is what crashes. Make a mutable copy with CFDataCreateMutableCopy and edit that instead. Note also that the grayscale bitmap created above uses kCGImageAlphaNone, i.e. one byte per pixel, so stepping the loop by 4 only touches a quarter of the pixels; step by 1.
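A minimal sketch of that fix, reusing the question's names (untested, but the API calls are standard CoreFoundation/CoreGraphics):
// Threshold via a mutable copy of the pixel data.
// Assumes the 8-bit, no-alpha grayscale image created above (1 byte per pixel).
CGImageRef sourceImage = newImage.CGImage;
CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
CFMutableDataRef mutableData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, theData);
CFRelease(theData);
UInt8 *pixelData = CFDataGetMutableBytePtr(mutableData);
CFIndex dataLength = CFDataGetLength(mutableData);
for (CFIndex index = 0; index < dataLength; index++) {
    pixelData[index] = (pixelData[index] < 180) ? 0 : 255;
}
// Rebuild a CGImage from the modified bytes.
CGDataProviderRef provider = CGDataProviderCreateWithCFData(mutableData);
CGImageRef thresholded = CGImageCreate(CGImageGetWidth(sourceImage),
                                       CGImageGetHeight(sourceImage),
                                       8,  // bits per component
                                       8,  // bits per pixel
                                       CGImageGetBytesPerRow(sourceImage),
                                       CGImageGetColorSpace(sourceImage),
                                       (CGBitmapInfo)kCGImageAlphaNone,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
self.imageView.image = [UIImage imageWithCGImage:thresholded];
CGImageRelease(thresholded);
CGDataProviderRelease(provider);
CFRelease(mutableData);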

Related

How to create color bitmap (CGBitmapContextCreate) by using raw ARGB data

I am getting an image, but I am losing one byte.
My resulting image: (screenshot omitted)
I want to create a color/RGBA bitmap using raw ARGB (void*) data, and I have its width and height. In the backend (C++) I decode RGB to ARGB using the following method and then pass the buffer in as (void*)pData:
void decode_rgb_to_argb(U8Data r, U8Data g, U8Data b, U32Data argb, u_int elements)
{
    assert(argb);
    assert(r);
    assert(g);
    assert(b);
    assert(elements);
    unsigned char *p = NULL;
    for (u_int i = 0; i < elements; i++)
    {
        // write B, G, R, 0 into each 32-bit slot
        p = (unsigned char *)(argb + i);
        *p = b[i];
        p++;
        *p = g[i];
        p++;
        *p = r[i];
        p++;
        *p = 0;
    }
}
pData is then the output buffer filled by decode_rgb_to_argb. I am handling it on the iOS side:
- (UIImage *)createBitmap:(void *)pData pWidth:(u_int)pWidth pHeight:(u_int)pHeight
{
    // Here I want to write a PPM file using pData to check whether it is 4 or 3 bytes per pixel.
    NSData *data = [NSData dataWithBytes:pData length:pWidth * pHeight];
    char *myBuffer = (char *)pData;
    char *rgba = (char *)malloc(pWidth * pHeight * 4);
    for (int i = 0; i < pWidth * pHeight; i++) {
        rgba[4 * i]     = myBuffer[3 * i];
        rgba[4 * i + 1] = myBuffer[3 * i + 1];
        rgba[4 * i + 2] = myBuffer[3 * i + 2];
        rgba[4 * i + 3] = 255; // or 0
    }
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = pWidth * 4;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(
        // Here I have given my raw data (pData); the rgba buffer above is unused so far
        (u_char *)pData,
        pWidth,
        pHeight,
        bitsPerComponent,
        bytesPerRow,
        colorSpace,
        // I used kCGImageAlphaFirst because I am getting the data as ARGB, but the
        // bitmap is not created. If I use kCGImageAlphaNoneSkipLast, I get the
        // expected image but I lose one byte (the alpha).
        kCGImageAlphaFirst
    );
    CGColorSpaceRelease(colorSpace);
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
    UIImage *result = [[[UIImage imageWithCGImage:cgImage] retain] autorelease];
    CGImageRelease(cgImage);
    CGContextRelease(bitmapContext);
    return result;
}
How do I write a PPM file, and how do I create a color bitmap from my raw (void*)pData on iOS?
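Since no answer below addresses the PPM part, here is a minimal sketch (an editorial addition; it assumes pData holds 4 bytes per pixel in the B, G, R, X order that decode_rgb_to_argb above produces):
// Dump the buffer as a binary PPM (P6) so it can be inspected in an image viewer.
- (void)writePPM:(const uint8_t *)pData width:(u_int)w height:(u_int)h toPath:(NSString *)path
{
    FILE *f = fopen(path.fileSystemRepresentation, "wb");
    if (!f) return;
    // PPM header: magic number, dimensions, maximum sample value
    fprintf(f, "P6\n%u %u\n255\n", w, h);
    for (u_int i = 0; i < w * h; i++) {
        const uint8_t *px = pData + 4 * i;         // B, G, R, X (assumed layout)
        uint8_t rgb[3] = { px[2], px[1], px[0] };  // reorder to R, G, B
        fwrite(rgb, 1, 3, f);
    }
    fclose(f);
}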
If you need a UIImage filled with a solid RGB color, you can use this function:
- (UIImage *)createImageWithRGBColor:(UIColor *)color ofSize:(CGSize)size
{
    UIView *keyview = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    [keyview setBackgroundColor:color];
    CGRect rect = [keyview bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyview.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
And call like this:
UIImage *image = [self createImageWithRGBColor:[UIColor colorWithRed:.25f green:.40f blue:.8f alpha:1.0] ofSize:CGSizeMake(100, 100)];

Remove transparency from a UIImage PNG file (NOT ALPHA VALUE)

I am working on a music player that takes images from the ID3 tags of MP3 files. I've found that certain artworks have transparency (the images have transparent parts), which makes my app load those images very slowly. I need a way to remove the transparency of the UIImage before showing it. Or are there any other suggestions?
"Replace the transparent part of the image with a color such as white"
Here's my code if necessary:
NSURL *url = ad.audioPlayer.url;
AVAsset *asset = [AVAsset assetWithURL:url];
for (AVMetadataItem *metadataItem in asset.commonMetadata) {
    if ([metadataItem.commonKey isEqualToString:@"artwork"]) {
        NSDictionary *imageDataDictionary = (NSDictionary *)metadataItem.value;
        NSData *imageData = [imageDataDictionary objectForKey:@"data"];
        UIImage *image = [UIImage imageWithData:imageData];
        // This is the image and the place in code where I want to convert it
        _artworkImageView.image = image;
        _bgImage.image = [image applyDarkEffect];
    }
}
I had the same problem. I didn't actually want to remove the alpha channel, just to replace the transparent color with white. I tried removing the alpha channel as suggested in Remove alpha channel from UIImage, but the annoying thing was that afterwards the transparent areas became black, and I could not figure out how to make them white.
Eventually I ended up just drawing a white background under the image with transparent parts, without touching the alpha channel.
Code here:
// check if there is an alpha channel
CGImageAlphaInfo alpha = CGImageGetAlphaInfo(wholeTemplate.CGImage);
if (alpha == kCGImageAlphaPremultipliedLast || alpha == kCGImageAlphaPremultipliedFirst ||
    alpha == kCGImageAlphaLast || alpha == kCGImageAlphaFirst || alpha == kCGImageAlphaOnly)
{
    // create the context with information from the original image
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                       wholeTemplate.size.width,
                                                       wholeTemplate.size.height,
                                                       CGImageGetBitsPerComponent(wholeTemplate.CGImage),
                                                       CGImageGetBytesPerRow(wholeTemplate.CGImage),
                                                       CGImageGetColorSpace(wholeTemplate.CGImage),
                                                       CGImageGetBitmapInfo(wholeTemplate.CGImage));
    // draw a white rect as background
    CGContextSetFillColorWithColor(bitmapContext, [UIColor whiteColor].CGColor);
    CGContextFillRect(bitmapContext, CGRectMake(0, 0, wholeTemplate.size.width, wholeTemplate.size.height));
    // draw the image on top
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, wholeTemplate.size.width, wholeTemplate.size.height), wholeTemplate.CGImage);
    CGImageRef resultNoTransparency = CGBitmapContextCreateImage(bitmapContext);
    // get the image back
    wholeTemplate = [UIImage imageWithCGImage:resultNoTransparency];
    // do not forget to release
    CGImageRelease(resultNoTransparency);
    CGContextRelease(bitmapContext);
}
Try this...
// First create a white image that has the same size as your image.
CGImageRef sourceImage = yourImage.CGImage;
CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
// CFDataGetBytePtr is read-only, so edit a mutable copy of the data
CFMutableDataRef mutableData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, theData);
CFRelease(theData);
UInt8 *pixelData = CFDataGetMutableBytePtr(mutableData);
CFIndex dataLength = CFDataGetLength(mutableData);
int red = 0;
int green = 1;
int blue = 2;
int alpha = 3;
for (CFIndex i = 0; i < dataLength; i += 4) {
    // create white pixels
    pixelData[i + red] = 255;
    pixelData[i + blue] = 255;
    pixelData[i + green] = 255;
    pixelData[i + alpha] = 255;
}
CGContextRef context = CGBitmapContextCreate(pixelData,
                                             CGImageGetWidth(sourceImage),
                                             CGImageGetHeight(sourceImage),
                                             8,
                                             CGImageGetBytesPerRow(sourceImage),
                                             CGImageGetColorSpace(sourceImage),
                                             kCGImageAlphaPremultipliedLast);
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CFRelease(mutableData);
// Now that you have your white image, use the CIFilter "CISourceOverCompositing";
// this adds the white image as the background.
// (origim is the original artwork UIImage; data3 is an NSData variable.)
UIImageOrientation originalOrientation = origim.imageOrientation;
data3 = UIImagePNGRepresentation(origim);
CIImage *result = [CIImage imageWithData:data3];
CIImage *result2 = [CIImage imageWithCGImage:newCGImage];
CGImageRelease(newCGImage);
CIContext *contextt = [CIContext contextWithOptions:nil];
CIFilter *addBackground = [CIFilter filterWithName:@"CISourceOverCompositing"];
[addBackground setDefaults];
[addBackground setValue:result forKey:@"inputImage"];
[addBackground setValue:result2 forKey:@"inputBackgroundImage"];
result = [addBackground valueForKey:@"outputImage"];
CGImageRef imgRef = [contextt createCGImage:result fromRect:result.extent];
image = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
CGImageRelease(imgRef);
Try this code ....
+ (UIImage *)removeTransparentArea:(UIImage *)image {
    CGRect newRect = [self cropRectForImage:image];
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, newRect);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return finalImage;
}
+ (CGRect)cropRectForImage:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    CGContextRef context = [self createARGBBitmapContextFromImage:cgImage];
    if (context == NULL) return CGRectZero;
    int width = image.size.width;
    int height = image.size.height;
    CGRect rect = CGRectMake(0, 0, width, height);
    CGContextDrawImage(context, rect, cgImage);
    unsigned char *data = CGBitmapContextGetData(context);
    CGContextRelease(context);
    // Filter through the data and look for non-transparent pixels.
    int lowX = width;
    int lowY = height;
    int highX = 0;
    int highY = 0;
    if (data != NULL) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int pixelIndex = ((width * y) + x) * 4; // 4 bytes per pixel: A, R, G, B
                if (data[pixelIndex] != 0) { // alpha is not zero; pixel is not transparent
                    if (x < lowX) lowX = x;
                    if (x > highX) highX = x;
                    if (y < lowY) lowY = y;
                    if (y > highY) highY = y;
                }
            }
        }
        free(data);
    } else {
        return CGRectZero;
    }
    return CGRectMake(lowX, lowY, highX - lowX, highY - lowY);
}
+ (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get the image width and height. We'll use the entire image.
    size_t width = CGImageGetWidth(inImage);
    size_t height = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes: 8 bits each of alpha, red, green, and blue.
    bitmapBytesPerRow = ((int)width * 4);
    bitmapByteCount = (bitmapBytesPerRow * (int)height);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) return NULL;
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want premultiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    width,
                                    height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL) free(bitmapData);
    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
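Hypothetical usage, assuming the three class methods above are collected on an ImageUtils class (the class name is a placeholder):
UIImage *trimmed = [ImageUtils removeTransparentArea:artworkImage];
_artworkImageView.image = trimmed;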

Error in converting the image color to grey scale in iOS [duplicate]

I am trying to convert an image into grayscale in the following way:
#define bytesPerPixel 4
#define bitsPerComponent 8
-(unsigned char *) getBytesForImage:(UIImage *)pImage
{
    CGImageRef image = [pImage CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    NSUInteger bytesPerRow = bytesPerPixel * width;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    return rawData;
}
-(UIImage *) processImage:(UIImage *)pImage
{
    DebugLog(@"processing image");
    unsigned char *rawData = [self getBytesForImage:pImage];
    NSUInteger width = pImage.size.width;
    NSUInteger height = pImage.size.height;
    DebugLog(@"width: %d", width);
    DebugLog(@"height: %d", height);
    NSUInteger bytesPerRow = bytesPerPixel * width;
    for (int xCoordinate = 0; xCoordinate < width; xCoordinate++)
    {
        for (int yCoordinate = 0; yCoordinate < height; yCoordinate++)
        {
            int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;
            // Getting original colors
            float red = ( rawData[byteIndex] / 255.f );
            float green = ( rawData[byteIndex + 1] / 255.f );
            float blue = ( rawData[byteIndex + 2] / 255.f );
            // Processing pixel data
            float averageColor = (red + green + blue) / 3.0f;
            red = averageColor;
            green = averageColor;
            blue = averageColor;
            // Assigning new color components
            rawData[byteIndex] = (unsigned char) red * 255;
            rawData[byteIndex + 1] = (unsigned char) green * 255;
            rawData[byteIndex + 2] = (unsigned char) blue * 255;
        }
    }
    NSData *newPixelData = [NSData dataWithBytes:rawData length:height * width * 4];
    UIImage *newImage = [UIImage imageWithData:newPixelData];
    free(rawData);
    DebugLog(@"image processed");
    return newImage;
}
So when I want to convert an image I just call processImage:
imageToDisplay.image = [self processImage: image];
But imageToDisplay displays nothing. What may be the problem?
Thanks.
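An editorial note for later readers: +imageWithData: expects encoded file data (PNG or JPEG), not raw pixel bytes, which is the most likely reason nothing displays. Also, (unsigned char) red * 255 casts before multiplying, truncating every value below 1.0 to zero; it should read (unsigned char)(red * 255). A hedged sketch that wraps the raw buffer back into a UIImage via a bitmap context instead:
// Rebuild a UIImage from the modified raw bytes, using the same pixel
// format the buffer was rendered with in getBytesForImage:.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent,
                                             bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);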
I needed a version that preserved the alpha channel, so I modified the code posted by Dutchie432:
@implementation UIImage (grayscale)

typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale {
    CGSize size = [self size];
    int width = size.width;
    int height = size.height;
    // the pixels will be painted into this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    // paint the bitmap into our context, which fills in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
            // convert to grayscale using the recommended method:
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
            // set the pixel to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);
    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    // we're done with the image now too
    CGImageRelease(image);
    return resultUIImage;
}

@end
Here is code using only UIKit and the luminosity blend mode. A bit of a hack, but it works well.
// Transform the image to grayscale.
- (UIImage *)grayishImage:(UIImage *)inputImage {
    // Create a graphics context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, YES, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);
    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];
    // Get the resulting image.
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return filteredImage;
}
To keep the transparency, maybe you can just set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO. This needs to be checked.
Based on Cam's code with the ability to deal with the scale for Retina displays.
- (UIImage *)toGrayscale
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;
    // Create an image rectangle with the current image's width/height in pixels
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    int width = imageRect.size.width;
    int height = imageRect.size.height;
    // the pixels will be painted into this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    // paint the bitmap into our context, which fills in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
            // convert to grayscale using the recommended method:
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);
            // set the pixel to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);
    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    // make a new UIImage to return, preserving the scale
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];
    // we're done with the image now too
    CGImageRelease(image);
    return resultUIImage;
}
I liked Mathieu Godart's answer, but it didn't seem to work properly for retina or alpha images. Here's an updated version that seems to work for both of those for me:
- (UIImage *)convertToGrayscale
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Draw a white background
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f);
    CGContextFillRect(ctx, imageRect);
    // Draw the luminosity on top of the white background to get grayscale
    [self drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0f];
    // Apply the source image's alpha
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];
    UIImage *grayscaleImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return grayscaleImage;
}
What exactly takes place when you use this function? Is the function returning an invalid image, or is the display not showing it correctly?
This is the method I use to convert to greyscale.
- (UIImage *)convertToGreyscale:(UIImage *)i {
    int kRed = 1;
    int kGreen = 2;
    int kBlue = 4;
    int colors = kGreen | kBlue | kRed;
    int m_width = i.size.width;
    int m_height = i.size.height;
    uint32_t *rgbImage = (uint32_t *) malloc(m_width * m_height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_width, m_height, 8, m_width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextDrawImage(context, CGRectMake(0, 0, m_width, m_height), [i CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // now convert to grayscale
    uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height);
    for (int y = 0; y < m_height; y++) {
        for (int x = 0; x < m_width; x++) {
            uint32_t rgbPixel = rgbImage[y * m_width + x];
            uint32_t sum = 0, count = 0;
            if (colors & kRed)   { sum += (rgbPixel >> 24) & 255; count++; }
            if (colors & kGreen) { sum += (rgbPixel >> 16) & 255; count++; }
            if (colors & kBlue)  { sum += (rgbPixel >> 8)  & 255; count++; }
            m_imageData[y * m_width + x] = sum / count;
        }
    }
    free(rgbImage);
    // convert the grayscale buffer back into a UIImage
    uint8_t *result = (uint8_t *) calloc(m_width * m_height * sizeof(uint32_t), 1);
    // expand the image back to rgb
    for (int i = 0; i < m_height * m_width; i++) {
        result[i * 4] = 0;
        int val = m_imageData[i];
        result[i * 4 + 1] = val;
        result[i * 4 + 2] = val;
        result[i * 4 + 3] = val;
    }
    // create a UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_width, m_height, 8, m_width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    free(m_imageData);
    // make sure the result buffer is released by handing it to an autoreleased NSData
    [NSData dataWithBytesNoCopy:result length:m_width * m_height];
    return resultUIImage;
}
A different approach with CIFilter. It preserves the alpha channel and works with transparent backgrounds:
+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(0.0) forKey:kCIInputSaturationKey];
    CIImage *outputImage = filter.outputImage;
    CGImageRef cgImageRef = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);
    return result;
}
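Hypothetical usage (substitute whatever class you add the method to):
_artworkImageView.image = [ImageUtils convertImageToGrayScale:image];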
A Swift extension to UIImage, preserving alpha:
extension UIImage {
    private func convertToGrayScaleNoAlpha() -> CGImageRef {
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        return CGBitmapContextCreateImage(context)
    }

    /**
     Return a new image in shades of gray + alpha
     */
    func convertToGrayScale() -> UIImage {
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.Only.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, nil, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        let mask = CGBitmapContextCreateImage(context)
        return UIImage(CGImage: CGImageCreateWithMask(convertToGrayScaleNoAlpha(), mask), scale: scale, orientation: imageOrientation)!
    }
}
Here is another good solution as a category method on UIImage. It's based on this blog post and its comments. But I fixed a memory issue here:
- (UIImage *)grayScaleImage {
    // Create an image rectangle with the current image's width/height in pixels
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create a bitmap context with the current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw the image into the current context, with the specified rectangle
    // using the previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // Create a bitmap image from the pixel data in the current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, nil, kCGImageAlphaOnly);
    // draw the image into the context with no colorspace
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // create an alpha bitmap mask from the current context
    CGImageRef mask = CGBitmapContextCreateImage(context);
    // release the graphics context
    CGContextRelease(context);
    // make a UIImage from the grayscale image with the alpha mask
    CGImageRef cgImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:cgImage scale:self.scale orientation:self.imageOrientation];
    // release the CG images
    CGImageRelease(cgImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);
    // return the new grayscale image
    return grayScaleImage;
}
A fast and efficient Swift 3 implementation for iOS 9/10. I feel this is efficient having now tried every image-filtering method I could find for processing hundreds of images at a time (when downloading using AlamofireImage's ImageFilter option). I settled on this method as far better than any other I tried (for my use case) in terms of memory and speed.
func convertToGrayscale() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
    let imageRect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
    let context = UIGraphicsGetCurrentContext()
    // Draw a white background
    context!.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context!.fill(imageRect)
    // optional: increase contrast with colorDodge before applying luminosity
    // (my images were too dark when using just luminosity - you may not need this)
    self.draw(in: imageRect, blendMode: CGBlendMode.colorDodge, alpha: 0.7)
    // Draw the luminosity on top of the white background to get a grayscale of the original image
    self.draw(in: imageRect, blendMode: CGBlendMode.luminosity, alpha: 0.90)
    // optional: re-apply alpha if your image has transparency - based on user1978534's answer
    // (I haven't tested this as I didn't have transparency - I just know this would be the syntax)
    // self.draw(in: imageRect, blendMode: CGBlendMode.destinationIn, alpha: 1.0)
    let grayscaleImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return grayscaleImage
}
Re the use of colorDodge: I initially had issues getting my images light enough to match the grayscale coloring produced by CIFilter("CIPhotoEffectTonal") - my results turned out too dark. I was able to get a decent match by applying CGBlendMode.colorDodge at ~0.7 alpha, which seems to increase the overall contrast.
Other color blend effects might work too - but I think you would want to apply them before luminosity, which is the grayscale filtering effect. I found this page very helpful as a reference on the different blend modes.
Re the efficiency gains I found: I need to process hundreds of thumbnail images as they are loaded from a server (using AlamofireImage for async loading, caching, and applying a filter). I started to experience crashes when the total size of my images exceeded the cache size, so I experimented with other methods.
The CPU-based CoreImage CIFilter approach was the first I tried, and it wasn't memory-efficient enough for the number of images I'm handling.
I also tried applying a CIFilter via the GPU using EAGLContext(api: .openGLES3), which was actually even more memory-intensive - I got memory warnings at 450+ MB of use while loading 200+ images.
I tried bitmap processing (i.e. CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)...), which worked well except I couldn't get a high enough resolution for a modern retina device. Images were very grainy even when I added context.scaleBy(x: scaleFactor, y: scaleFactor).
So out of everything I tried, this method (drawing in a UIGraphics image context) proved vastly more efficient in speed and memory when applied as an AlamofireImage filter: under 70 MB of RAM when processing my 200+ images, which load essentially instantly rather than in the roughly 35 seconds the OpenGL ES method took. I know these are not very scientific benchmarks; I will instrument it if anyone is very curious though :)
And lastly, if you do need to pass this or another grayscale filter into AlamofireImage, this is how to do it (note: you must import AlamofireImage into your class to use ImageFilter):
public struct GrayScaleFilter: ImageFilter {
    public init() {
    }

    public var filter: (UIImage) -> UIImage {
        return { image in
            return image.convertToGrayscale() ?? image
        }
    }
}
To use it, create the filter like this and pass into af_setImage like so:
let filter = GrayScaleFilter()
imageView.af_setImage(withURL: url, filter: filter)
@interface UIImageView (Settings)

- (void)convertImageToGrayScale;

@end

@implementation UIImageView (Settings)

- (void)convertImageToGrayScale
{
    // Create an image rectangle with the current image's width/height
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create a bitmap context with the current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.image.size.width, self.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw the image into the current context, with the specified rectangle
    // using the previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self.image CGImage]);
    // Create a bitmap image from the pixel data in the current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    // Release the colorspace, context, and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    // Assign the new grayscale image
    self.image = newImage;
}

@end
I have yet another answer. This one is extremely performant and handles retina graphics as well as transparency. It expands on Sargis Gevorgyan's approach:
+ (UIImage *)grayScaleFromImage:(UIImage *)image opaque:(BOOL)opaque
{
    // NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];
    CGSize size = image.size;
    CGRect bounds = CGRectMake(0, 0, size.width, size.height);
    // Create a bitmap context with the current image size and a grayscale colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = opaque ? 1 : 2;
    size_t bytesPerRow = bytesPerPixel * size.width * image.scale;
    CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, bytesPerRow, colorSpace, opaque ? kCGImageAlphaNone : kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // create an image from the bitmap
    CGContextDrawImage(context, bounds, image.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    // performance results on iPhone 6S+ in Release mode.
    // Results are in photo pixels, not device pixels:
    // ~ 5ms for 500px x 600px
    // ~ 15ms for 2200px x 600px
    // NSLog(@"generating %d x %d @ %dx grayscale took %f seconds", (int)size.width, (int)size.height, (int)image.scale, [NSDate timeIntervalSinceReferenceDate] - start);
    return result;
}
Using blend modes instead is elegant, but copying to a grayscale bitmap is more performant because it uses only one or two color channels instead of four. The opaque bool is meant to take your UIView's opaque flag, so you can opt out of an alpha channel if you know you won't need one.
I haven't tried the Core Image based solutions in this answer thread, but I would be very cautious about using Core Image if performance is important.
That's my attempt at a fast conversion: drawing directly into a grayscale colorspace without enumerating each pixel. It works about 10x faster than the CIFilter solutions.
@implementation UIImage (Grayscale)

static UIImage *grayscaleImageFromCIImage(CIImage *image, CGFloat scale)
{
    CIImage *blackAndWhite = [CIFilter filterWithName:@"CIColorControls" keysAndValues:kCIInputImageKey, image, kCIInputBrightnessKey, @0.0, kCIInputContrastKey, @1.1, kCIInputSaturationKey, @0.0, nil].outputImage;
    CIImage *output = [CIFilter filterWithName:@"CIExposureAdjust" keysAndValues:kCIInputImageKey, blackAndWhite, kCIInputEVKey, @0.7, nil].outputImage;
    CGImageRef ref = [[CIContext contextWithOptions:nil] createCGImage:output fromRect:output.extent];
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(ref);
    return result;
}

static UIImage *grayscaleImageFromCGImage(CGImageRef imageRef, CGFloat scale)
{
    NSInteger width = CGImageGetWidth(imageRef) * scale;
    NSInteger height = CGImageGetHeight(imageRef) * scale;
    NSMutableData *pixels = [NSMutableData dataWithLength:width * height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8, width, colorSpace, 0);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:ref scale:scale orientation:UIImageOrientationUp];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(ref);
    return result;
}

- (UIImage *)grayscaleImage
{
    if (self.CIImage) {
        return grayscaleImageFromCIImage(self.CIImage, self.scale);
    } else if (self.CGImage) {
        return grayscaleImageFromCGImage(self.CGImage, self.scale);
    }
    return nil;
}

@end
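Usage is then simply (imageView is a placeholder):
imageView.image = [imageView.image grayscaleImage];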

Reverse Image Masking

In the iOS SDK I am able to mask an image, but not to reverse the mask. I mean: I have a mask image with a rectangular part, and after masking I get the first attached image, but I want the reverse result.
I get this image as the result: (screenshot omitted)
while I need this as the result: (screenshot omitted)
Please help me achieve it.
...Edit...
My code
UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, self.imageView.frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGImageRef maskImage = [[UIImage imageNamed:@"2.png"] CGImage];
CGContextClipToMask(context, self.imageView.bounds, maskImage);
CGContextTranslateCTM(context, 0.0, self.imageView.frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
[[self.imageView image] drawInRect:self.imageView.bounds];
UIImage *image11 = UIGraphicsGetImageFromCurrentImageContext();
self.imageView.image = image11;
Thanks
I achieved it in two steps. It may not be the best way to do it, but it works:
1. Invert your mask image.
2. Mask with the inverted image.
Look at the code below.
- (void)viewDidLoad
{
    [super viewDidLoad];
    _imageView = [[UIImageView alloc] initWithImage:i(@"test1.jpg")];
    _imageView.image = [self maskImage:i(@"face.jpg") withMask:[self negativeImage]];
    [self.view addSubview:_imageView];
}
The method below is taken from here
- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc., and this will save a whole lot of casting
    CGSize size = self.imageView.frame.size;
    int width = size.width;
    int height = size.height;
    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width * height * 4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);
    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self.imageView.image CGImage]);
    // run through every pixel, a scan line at a time...
    for (int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];
        // step through the pixels one by one...
        for (int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if (linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;
            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;
            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }
    // get a CG image from the context, wrap that into a UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);
    // and return
    return returnImage;
}
The method below taken from here.
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    UIImage *result = [UIImage imageWithCGImage:masked];
    // release the CG objects we created
    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}
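A hedged alternative that avoids building a negative image at all: drawing the mask PNG over the photo with the destination-out blend mode erases the destination wherever the mask is opaque, which yields the reversed result directly. This assumes the mask image has an alpha channel that is opaque over the region to cut out:
// Reverse masking via kCGBlendModeDestinationOut (sketch).
UIImage *photo = self.imageView.image;
UIImage *mask = [UIImage imageNamed:@"2.png"]; // opaque where the hole should be
CGRect bounds = CGRectMake(0, 0, photo.size.width, photo.size.height);
UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
[photo drawInRect:bounds];
[mask drawInRect:bounds blendMode:kCGBlendModeDestinationOut alpha:1.0];
UIImage *reversed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = reversed;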

UIGraphicsBeginImageContext vs CGBitmapContextCreate

I'm trying to change the color of an image on a background thread.
The Apple docs say UIGraphicsBeginImageContext can only be called from the main thread, so I'm trying to use CGBitmapContextCreate:
context = CGBitmapContextCreate(bitmapData,
                                pixelsWide,
                                pixelsHigh,
                                8, // bits per component
                                bitmapBytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedFirst);
I have two versions of changeColor:, the first using UIGraphicsBeginImageContext and the second using CGBitmapContextCreate.
The first one correctly changes the color, but the second one doesn't.
Why is that?
- (UIImage *)changeColor:(UIColor *)aColor
{
    if (aColor == nil)
        return self;
    UIGraphicsBeginImageContext(self.size);
    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;
    [aColor set];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
- (UIImage *)changeColor:(UIColor *)aColor
{
    if (aColor == nil)
        return self;
    CGContextRef context = CreateARGBBitmapContext(self.size);
    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;
    CGColorRef colorRef = aColor.CGColor;
    const CGFloat *components = CGColorGetComponents(colorRef);
    float red = components[0];
    float green = components[1];
    float blue = components[2];
    CGContextSetRGBFillColor(context, red, green, blue, 1);
    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    unsigned char *data = (unsigned char *)CGBitmapContextGetData(context);
    CGContextRelease(context);
    free(data);
    return img;
}
CGContextRef CreateARGBBitmapContext(CGSize size)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = size.width;
    size_t pixelsHigh = size.height;
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want premultiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
Your second method is doing work that the first never did. Here's an adjustment of the second method to more closely match the first one:
- (UIImage *)changeColor:(UIColor *)aColor
{
    if (aColor == nil)
        return self;
    CGContextRef context = CreateARGBBitmapContext(self.size);
    CGRect bounds;
    bounds.origin = CGPointMake(0, 0);
    bounds.size = self.size;
    CGContextSetFillColorWithColor(context, aColor.CGColor);
    CGContextClipToMask(context, bounds, [self CGImage]);
    CGContextFillRect(context, bounds);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    return img;
}
The two changes I made were converting this to use CGContextSetFillColorWithColor() and removing the dangerous and incorrect free() of the bitmap context's backing data. If this snippet does not behave identically to the first one, you will have to look at your implementation of CreateARGBBitmapContext() to verify that it is correct.
Of course, as Brad Larson mentioned in the comments, if you're targeting iOS 4.0 and above, the UIKit graphics methods are (according to the release notes) thread-safe and you should be able to use the first method just fine.
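A sketch of that (hedged; sourceImage and imageView are placeholders, and changeColor: is the category method above):
// Run the UIKit-based version off the main thread, then hop back for the UI update.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *tinted = [sourceImage changeColor:[UIColor redColor]];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = tinted; // UI updates still belong on the main thread
    });
});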
