Convert RGB Image to Grayscale and Grayscale to RGB Image? - ios

I have successfully converted an image to grayscale, and I want to revert the grayscale image back to an RGB image. Please help. Thanks in advance.
- (UIImage *)toGrayscale
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

To answer the question about converting back: your grayscale code

uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);

gives the relationship

S = 0.3R + 0.59G + 0.11B

Going from RGB to S means solving for one unknown (S) with one equation, which is fine. Converting back means solving for three unknowns (R, G, B) from that single equation, which isn't possible. One hack for a grayscale colorisation is to treat grayscale as pure intensity and set R = G = B = S, but this obviously won't restore your colors correctly. In short, conversion to grayscale is an irreversible function, like squaring a number (was it positive or negative?): information is lost and can't be retrieved.
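For completeness, a minimal sketch of that hack: redrawing the grayscale UIImage into a standard RGB bitmap context yields an image in RGB format whose three channels are equal (the original colors are not, and cannot be, recovered):

- (UIImage *)toRGBFormat:(UIImage *)grayImage
{
    // UIGraphicsBeginImageContextWithOptions creates an RGB(A) context, so the
    // redrawn image is in RGB format even though it still looks gray.
    UIGraphicsBeginImageContextWithOptions(grayImage.size, NO, grayImage.scale);
    [grayImage drawInRect:CGRectMake(0, 0, grayImage.size.width, grayImage.size.height)];
    UIImage *rgbImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rgbImage;
}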

Related

How to correctly create grayscale bitmap without alpha channel on ios

I am trying to create a simple UIImage, 8x1, with gray color space and without alpha channel. My code:
- (UIImage *)createImage {
    size_t w = 8;
    size_t h = 1;
    size_t size = w * h;
    uint8_t *buf = calloc(size, sizeof(uint8_t));
    for (int x = 0; x < size; x++) {
        buf[x] = (uint8_t) (x * 32);
    }
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(buf, w, h, 8, w * sizeof(uint8_t), space, kCGBitmapByteOrderDefault | kCGImageAlphaNone);
    CGImageRef img = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(space);
    UIImage *ret = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    free(buf);
    return ret;
}
When I display this image, the (N-4)-th pixel's color is replicated to pixels N-3, N-2, and N-1, no matter the size of the image (the last pixel should be white).
When I add an alpha channel (16 bits per pixel) or use an RGB or indexed color model it's OK, but with this bitmap configuration I cannot set the colors of the last 3 pixels. Some ghosts are messing with my pixels. Any ideas? :)
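For comparison, here is a hedged sketch that builds the same 8x1 gray image directly with CGImageCreate and a data provider, bypassing CGBitmapContextCreate entirely. All calls are standard Core Graphics; whether this avoids the artifact in question is an assumption. The release callback frees the buffer once Core Graphics is done with it:

static void releasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}

- (UIImage *)createImageDirect
{
    size_t w = 8;
    size_t h = 1;
    uint8_t *buf = calloc(w * h, sizeof(uint8_t));
    for (int x = 0; x < w * h; x++) {
        buf[x] = (uint8_t)(x * 32);
    }
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buf, w * h, releasePixels);
    CGImageRef img = CGImageCreate(w, h,
                                   8,      // bits per component
                                   8,      // bits per pixel
                                   w,      // bytes per row
                                   space,
                                   kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                   provider,
                                   NULL,   // no decode array
                                   false,  // no interpolation
                                   kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(space);
    return ret;
}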

I am trying to create a partial Grayscale image

I am trying to create a partial grayscale image: I read every pixel in the image and replace its data with gray, but if the pixel color matches a desired color within a tolerance, I skip it so that that pixel's color doesn't change. I don't know where I am going wrong: it changes the whole image to grayscale and rotates the image 90 degrees. Can someone help me out with this issue? Thanks in advance.
- (UIImage *)toPartialGrayscale {
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    initialR = 255.0;
    initialG = 0.0;
    initialB = 0.0; //218-112-214//0-191-255
    float r;
    float g;
    float b;
    tollerance = 50;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, originalImageView.image.size.width * scale, originalImageView.image.size.height * scale);
    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImageView.image CGImage]);

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100);

            // compute the difference from the target color
            r = initialR - rgbaPixel[RED];
            g = initialG - rgbaPixel[GREEN];
            b = initialB - rgbaPixel[BLUE];
            if ((r < tollerance && r > -tollerance) && (g < tollerance && g > -tollerance) && (b < tollerance && b > -tollerance))
            {
                rgbaPixel[RED] = (uint8_t)r;
                rgbaPixel[GREEN] = (uint8_t)g;
                rgbaPixel[BLUE] = (uint8_t)b;
            }
            else
            {
                // set the pixels to gray
                rgbaPixel[RED] = gray;
                rgbaPixel[GREEN] = gray;
                rgbaPixel[BLUE] = gray;
            }
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:scale
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
This is the code I am using; any kind of help will be appreciated. Thanks again in advance.
Hopefully the orientation piece is easy enough to resolve by playing with the UIImageOrientationUp constant that you're passing in when you create the final image. Try left or right until you get what you need.
As for the threshold not working, verify that your "tollerance" really is behaving as you expect. Change it to 255 and see if the entire image retains its color (it should). If it's still gray, then you know that your conditional statement is where the problem lies.
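One more thing worth checking: even when a pixel matches, the posted code writes the differences r, g, and b back into the pixel rather than leaving it alone. If the intent is to keep matching pixels unchanged, a sketch of the inner branch (using the question's own variable names, with fabsf() from <math.h>) might look like this:

float dr = initialR - rgbaPixel[RED];
float dg = initialG - rgbaPixel[GREEN];
float db = initialB - rgbaPixel[BLUE];
if (fabsf(dr) < tollerance && fabsf(dg) < tollerance && fabsf(db) < tollerance) {
    // pixel is close to the target color: leave it untouched
} else {
    rgbaPixel[RED]   = gray;
    rgbaPixel[GREEN] = gray;
    rgbaPixel[BLUE]  = gray;
}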

Remove matte(alpha bleed) after combining jpg and alpha to get png image in iOS

To reduce the size of the application, I ship a JPG plus a separate alpha image instead of a PNG. I am able to merge the JPG and alpha image to get a PNG, but the issue is that it leaves an alpha bleed (matte) where the edges are a little sharp. Please help me with this.
The code below merges the JPG and alpha image into a PNG image. Please help me get rid of the alpha bleed (matte). Thanks.
+ (UIImage *)pngImageWithJPEG:(UIImage *)jpegImage compressedAlphaFile:(UIImage *)alphaImage
{
    CGRect imageRect = CGRectMake(0, 0, jpegImage.size.width, jpegImage.size.height);

    // Pixel buffer
    uint32_t *piPixels = (uint32_t *)malloc(imageRect.size.width * imageRect.size.height * sizeof(uint32_t));
    memset(piPixels, 0, imageRect.size.width * imageRect.size.height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(piPixels, imageRect.size.width, imageRect.size.height, 8, sizeof(uint32_t) * imageRect.size.width, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // Drawing the alphaImage to the pixel buffer to separate the alpha values
    CGContextDrawImage(context, imageRect, alphaImage.CGImage);

    // Buffer to store the alpha values from the alphaImage
    uint8_t *piAlpha = (uint8_t *)malloc(sizeof(uint8_t) * imageRect.size.width * imageRect.size.height);

    // Copying the alpha values from the alphaImage to the alpha buffer
    for (uint32_t y = 0; y < imageRect.size.height; y++)
    {
        for (uint32_t x = 0; x < imageRect.size.width; x++)
        {
            uint8_t *piRGBAPixel = (uint8_t *)&piPixels[y * (uint32_t)imageRect.size.width + x];

            // alpha = 0, red = 1, green = 2, blue = 3.
            piAlpha[y * (uint32_t)imageRect.size.width + x] = piRGBAPixel[1];
        }
    }

    // Drawing the jpegImage in the pixel buffer.
    CGContextDrawImage(context, imageRect, jpegImage.CGImage);

    // Setting alpha to the jpegImage
    for (uint32_t y = 0; y < imageRect.size.height; y++)
    {
        for (uint32_t x = 0; x < imageRect.size.width; x++)
        {
            uint8_t *piRGBAPixel = (uint8_t *)&piPixels[y * (uint32_t)imageRect.size.width + x];
            float fAlpha0To1 = piAlpha[y * (uint32_t)imageRect.size.width + x] / 255.0f;

            // alpha = 0, red = 1, green = 2, blue = 3.
            piRGBAPixel[0] = piAlpha[y * (uint32_t)imageRect.size.width + x];
            piRGBAPixel[1] *= fAlpha0To1;
            piRGBAPixel[2] *= fAlpha0To1;
            piRGBAPixel[3] *= fAlpha0To1;
        }
    }

    // Creating image from the pixel buffer
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    // Releasing resources
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(piPixels);
    free(piAlpha);

    // Creating the pngImage to return from the cgImage
    UIImage *pngImage = [UIImage imageWithCGImage:cgImage];

    // Releasing the cgImage.
    CGImageRelease(cgImage);

    return pngImage;
}
Premultiplied alpha handles color bleed much better. Try pre-processing the source pixels like this:

r = r * a;
g = g * a;
b = b * a;

To get the original values back you would normally reverse this after reading the image (divide the RGB values by a when a > 0), but since iOS works with premultiplied alpha natively, you don't even have to do that.
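For reference, a minimal sketch of that reversal (un-premultiplying one pixel), using the alpha-first layout from the function above; as noted, iOS itself doesn't require this step:

// Hedged sketch: un-premultiply one RGBA pixel (alpha = 0, red = 1, green = 2,
// blue = 3). fminf() from <math.h> guards against rounding past 255.
static void unpremultiply(uint8_t *piRGBAPixel)
{
    float fAlpha0To1 = piRGBAPixel[0] / 255.0f;
    if (fAlpha0To1 > 0.0f) {
        piRGBAPixel[1] = (uint8_t)fminf(piRGBAPixel[1] / fAlpha0To1, 255.0f);
        piRGBAPixel[2] = (uint8_t)fminf(piRGBAPixel[2] / fAlpha0To1, 255.0f);
        piRGBAPixel[3] = (uint8_t)fminf(piRGBAPixel[3] / fAlpha0To1, 255.0f);
    }
}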
Also try PNG8+Alpha format, which sometimes may be smaller than separate JPG+mask.

Is it possible to add grayscale effect to partial part of UIImage?

Could anyone tell me how to add a grayscale effect to a rect of an image?
I found Convert image to grayscale, which can convert a whole image to grayscale.
But I just need to convert part of a UIImage to grayscale. Is it possible?
Looking forward to your help.
Thanks,
Huy
I modified the code from the other topic so that it applies only to a rect in your image.
typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale:(UIImage *)originalImage inRect:(CGRect)rect {
    CGSize size = [originalImage size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImage CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
            if (x >= rect.origin.x && y >= rect.origin.y && x < rect.origin.x + rect.size.width && y < rect.origin.y + rect.size.height) {
                // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
                uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

                // set the pixels to gray in your rect
                rgbaPixel[RED] = gray;
                rgbaPixel[GREEN] = gray;
                rgbaPixel[BLUE] = gray;
            }
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
You can test it in a UIImageView:
imageview.image = [self convertToGrayscale:imageview.image inRect:CGRectMake(50, 50, 100, 100)];

iOS - CoreImage - Add an effect to partial of image

I just had a look at the Core Image framework on iOS 5 and found that it's easy to add an effect to a whole image.
I wonder if it's possible to add an effect to a specific part of an image (a rectangle), for example a grayscale effect on part of the image.
I look forward to your help.
Thanks,
Huy
Watch session 510 from the WWDC 2012 videos. They present a technique for applying a mask to a CIImage. You need to learn how to chain the filters together. In particular, take a look at:
CICrop, CILinearGradient, CIRadialGradient (could be used to create the mask)
CISourceOverCompositing (put mask images together)
CIBlendWithMask (create final image)
The filters are documented here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
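A hedged sketch of that chain: desaturate the whole image with CIColorControls, generate a mask that is white inside the target rect, and recombine with CIBlendWithMask. The filter names and keys are standard Core Image; uiImage and rect are assumed inputs, and the rect-to-mask handling is an assumption:

CIImage *input = [CIImage imageWithCGImage:uiImage.CGImage];

// 1. A grayscale version of the whole image (saturation 0).
CIFilter *desaturate = [CIFilter filterWithName:@"CIColorControls"];
[desaturate setValue:input forKey:kCIInputImageKey];
[desaturate setValue:@0.0 forKey:@"inputSaturation"];
CIImage *gray = [desaturate outputImage];

// 2. A mask that is white inside the target rect, empty elsewhere.
CIFilter *white = [CIFilter filterWithName:@"CIConstantColorGenerator"];
[white setValue:[CIColor colorWithRed:1 green:1 blue:1] forKey:kCIInputColorKey];
CIImage *mask = [[white outputImage] imageByCroppingToRect:rect];

// 3. Blend: grayscale where the mask is white, original everywhere else.
CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:gray forKey:kCIInputImageKey];
[blend setValue:input forKey:kCIInputBackgroundImageKey];
[blend setValue:mask forKey:kCIInputMaskImageKey];
CIImage *result = [blend outputImage];

// Render back to a UIImage.
CIContext *ctx = [CIContext contextWithOptions:nil];
CGImageRef cg = [ctx createCGImage:result fromRect:[input extent]];
UIImage *output = [UIImage imageWithCGImage:cg];
CGImageRelease(cg);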
Your best bet would be to copy the CIImage (so you now have two), crop the copied CIImage to the rect you want to effect, perform the effect on that cropped version, then use an overlay effect to create a new CIImage based on the two older CIImages.
It seems like a lot of effort, but when you understand all of this is being set up as a bunch of GPU shaders it makes a lot more sense.
The pixel-based convertToGrayscale:inRect: routine from the previous question works here as well, and can be tested the same way:
imageview.image = [self convertToGrayscale:imageview.image inRect:CGRectMake(50, 50, 100, 100)];
