How can I convert an RGB image to a 1-channel image (black/white) using iOS 5?
The input image is usually a photo of a book page.
The goal is to reduce the file size of the photocopy by converting it to a 1-channel image.
If I understand your question, you want to apply a black and white thresholding to the image based on a pixel's luminance. For a fast way of doing this, you could use my open source GPUImage project (supporting back to iOS 4.x) and a couple of the image processing operations it provides.
In particular, the GPUImageLuminanceThresholdFilter and GPUImageAdaptiveThresholdFilter might be what you're looking for here. The former turns a pixel to black or white based on a luminance threshold you set (the default is 50%). The latter takes the local average luminance into account when applying this threshold, which can produce better results for text on pages of a book.
Usage of these filters on a UIImage is fairly simple:
UIImage *inputImage = [UIImage imageNamed:#"book.jpg"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
UIImage *quickFilteredImage = [thresholdFilter imageByFilteringImage:inputImage];
These can be applied to a live camera feed and photos taken by the camera, as well.
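For the live-camera case, the pipeline looks roughly like this (a sketch; `filterView` is assumed to be a GPUImageView already placed in your view hierarchy):

```objc
// Capture from the back camera and threshold the feed on the GPU.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageAdaptiveThresholdFilter *thresholdFilter =
    [[GPUImageAdaptiveThresholdFilter alloc] init];

[videoCamera addTarget:thresholdFilter];
[thresholdFilter addTarget:filterView]; // filterView: a GPUImageView in your UI
[videoCamera startCameraCapture];
```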
You can use Core Image to process your image to black & white.
Use the CIEdgeWork filter; this will convert your image to black and white.
For more information on Core Image programming, see:
https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-TPXREF101
The code you are looking for is probably this:
CIContext *context = [CIContext contextWithOptions:nil]; // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL]; // 2
CIFilter *filter = [CIFilter filterWithName:@"CIEdgeWork"]; // 3
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey]; // 4
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
Here is some sample code that may be helpful:
@implementation UIImage (GrayImage)
-(UIImage*)grayImage
{
int width = self.size.width;
int height = self.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate
(nil,width,height,8,0,colorSpace,kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
if (context == NULL) {
return nil;
}
CGContextDrawImage(context,CGRectMake(0, 0, width, height), self.CGImage);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *grayImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
return grayImage;
}
@end
I wrote it as a category on UIImage. Note that it does not support PNG images with transparent pixels; transparent areas will come out black.
Related
I have a view with black text around which I want to create a white "glow". I figure I can do this by grabbing a screenshot, inverting the colors (it's only black and white), masking the black to transparent, then "jittering" the resulting image in each direction. When I try to mask the black with CGImageCreateWithMaskingColors I get a null CGImageRef. So far this is what I have.
//First get a screenshot into a CI image so I can apply a CI Filter
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
CIImage* ciImage = [[CIImage alloc] initWithCGImage:[UIGraphicsGetImageFromCurrentImageContext() CGImage]];
UIGraphicsEndImageContext();
//Now apply the CIColorInvert filter
CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:kCIInputImageKey, ciImage, nil];
ciImage = [filter valueForKey:kCIOutputImageKey];
//Now I need to get a CG image from the CI image.
CIContext* ciContext = [CIContext contextWithOptions:nil];
CGImageRef ref = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
//Now I try to mask black
const float maskingColor[6] = {0,0,0,0,0,0};
ref = CGImageCreateWithMaskingColors(ref, maskingColor); //ref is (null)
I know alpha channels can muck up the works but I really don't think I have any alpha channels here. Just to check I did CGImageGetColorSpace(ref) and got kCGColorSpaceDeviceRGB, no alpha channel.
Can someone please tell me where I'm going wrong? Optionally, a quick comment helping me understand the differences between UIImage, CIImage and CGImage would be great.
Looking at the documentation for CGImageCreateWithMaskingColors describes the image parameter as:
The image to mask. This parameter may not be an image mask, may
not already have an image mask or masking color associated with it,
and cannot have an alpha component.
You should use CGImageGetAlphaInfo to determine if your CGImageRef has an alpha channel.
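A minimal check might look like this (a sketch; `ref` is the CGImageRef from the question):

```objc
// Does this CGImage carry an alpha component? CGImageCreateWithMaskingColors
// returns NULL for images that do.
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(ref);
BOOL hasAlpha = !(alphaInfo == kCGImageAlphaNone ||
                  alphaInfo == kCGImageAlphaNoneSkipFirst ||
                  alphaInfo == kCGImageAlphaNoneSkipLast);
if (hasAlpha) {
    NSLog(@"Image has an alpha component; redraw it into an alpha-free bitmap context first");
}
```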
And in terms of getting rid of that pesky alpha channel, I think you'll find this SO question helpful:
CGBitmapContextCreate with kCGImageAlphaNone
Here's my solution; there may be a better way of doing this, but this works!
- (UIImage *)imageWithChromaKeyMasking {
const CGFloat colorMasking[6]={255.0,255.0,255.0,255.0,255.0,255.0};
CGImageRef oldImage = self.CGImage;
CGBitmapInfo oldInfo = CGImageGetBitmapInfo(oldImage);
CGBitmapInfo newInfo = (oldInfo & (UINT32_MAX ^ kCGBitmapAlphaInfoMask)) | kCGImageAlphaNoneSkipLast;
CGDataProviderRef provider = CGImageGetDataProvider(oldImage);
CGImageRef newImage = CGImageCreate(self.size.width, self.size.height, CGImageGetBitsPerComponent(oldImage), CGImageGetBitsPerPixel(oldImage), CGImageGetBytesPerRow(oldImage), CGImageGetColorSpace(oldImage), newInfo, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider); provider = NULL;
CGImageRef im = CGImageCreateWithMaskingColors(newImage, colorMasking);
UIImage *ret = [UIImage imageWithCGImage:im];
CGImageRelease(im);
return ret;
}
I'm using this solution to mask my UIImage with some alpha drawing:
masking a UIImage
The problem is that later I want to apply some CIFilters. However, when I change a value on the filter, the alpha gets lost from the UIImage. Do I have to re-apply the alpha channel to the output image each time after modifying the CIFilter? That would surely make the process much slower.
Code samples (each paragraph is in a different method):
// set the image
_image = [incomeImage imageWithMask:_mask]; // imageWithMask: is the method from the link above
[_myView.imageView setImage:_image];
// calculate ciimages
_inputCIImage = [[CIImage alloc] initWithCGImage:_image.CGImage options:nil];
_myView.imageView.image = _image;
_currentCIImage = _inputCIImage;
// change value in filter
[filter setValue:@(0.2f) forKey:@"someKey"];
[filter setValue:_inputCIImage forKey:kCIInputImageKey];
_currentCIImage = [filter outputImage];
CGImageRef img = [_context createCGImage:_currentCIImage fromRect:[_currentCIImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:img];
You could do this using only CIFilters. Instead of using imageWithMask, you can use the CIBlendWithMask CIFilter. See the Apple CIFilter Reference.
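A sketch of that approach, staying entirely in Core Image (the variable names `inputCIImage`, `backgroundCIImage`, and `maskCIImage` are placeholders for your own images):

```objc
// Blend the foreground over the background through a mask, all in Core Image,
// so the result can feed directly into the next CIFilter in the chain.
CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:inputCIImage forKey:kCIInputImageKey];
[blend setValue:backgroundCIImage forKey:kCIInputBackgroundImageKey];
[blend setValue:maskCIImage forKey:kCIInputMaskImageKey];
CIImage *masked = [blend outputImage];
// ...then pass `masked` to the next filter instead of round-tripping
// through a UIImage, which is where the alpha was being lost.
```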
We are applying a CIGaussianBlur filter on a few images. The process works fine most of the time, but when the app moves to the background the process produces white stripes on the image. (Images below; notice that the left and bottom of the image are striped to white and that the image is shrunk a bit compared to the original image.)
The Code:
- (UIImage*)imageWithBlurRadius:(CGFloat)radius
{
UIImage *image = self;
LOG(@"(1) image size before resize = %@",NSStringFromCGSize(image.size));
NSData *imageData = UIImageJPEGRepresentation(self, 1.0);
LOG(@"(2) image data length = %lu",(unsigned long)imageData.length);
//create our blurred image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
//setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
//CIGaussianBlur has a tendency to shrink the image a little, this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
LOG(@"(3) final image size after resize = %@",NSStringFromCGSize(finalImage.size));
return finalImage;
}
Before Filter
After Filter
Actually, I just faced this exact problem, and found a solution that's different from what @RizwanSattar describes.
What I do, based on an exchange with "Rincewind" on the Apple developer boards, is to first apply a CIAffineClamp on the image, with the transform value set to identity. This creates an image at the same scale, but with an infinite extent. That causes the blur to blur the edges correctly.
Then, after I apply the blur, I crop the image to its original extent, cropping away the feathering that takes place on the edges.
You can see the code in a CI Filter demo app I've posted on github:
CIFilter demo project on github
It's a general-purpose program that handles all the different CI filters, but it has code to deal with the Gaussian blur filter.
Take a look at the method showImage. It has special-case code to set the extent on the source image before applying the blur filter:
if ([currentFilterName isEqualToString: @"CIGaussianBlur"])
{
// NSLog(@"new image is bigger");
CIFilter *clampFilter = [self clampFilter];
CIImage *sourceCIImage = [CIImage imageWithCGImage: imageToEdit.CGImage];
[clampFilter setValue: sourceCIImage
forKey: kCIInputImageKey];
[clampFilter setValue:[NSValue valueWithBytes: &CGAffineTransformIdentity
objCType:@encode(CGAffineTransform)]
forKey:@"inputTransform"];
sourceCIImage = [clampFilter valueForKey: kCIOutputImageKey];
[currentFilter setValue: sourceCIImage
forKey: kCIInputImageKey];
}
(Where the method "clampFilter" just lazily loads a CIAffineClamp filter.)
Then I apply the user-selected filter:
outputImage = [currentFilter valueForKey: kCIOutputImageKey];
After applying the selected filter, I check the extent of the resulting image and crop it back to the original extent if it's bigger:
CGSize newSize;
newSize = outputImage.extent.size;
if (newSize.width > sourceImageExtent.width || newSize.height > sourceImageExtent.height)
{
// NSLog(#"new image is bigger");
CIFilter *cropFilter = [self cropFilter]; //Lazily load a CICrop filter
CGRect boundsRect = CGRectMake(0, 0, sourceImageExtent.width, sourceImageExtent.height);
[cropFilter setValue:outputImage forKey: @"inputImage"];
CIVector *rectVector = [CIVector vectorWithCGRect: boundsRect];
[cropFilter setValue: rectVector
forKey: @"inputRectangle"];
outputImage = [cropFilter valueForKey: kCIOutputImageKey];
}
The reason you are seeing those "white stripes" in the blurred image is that the resulting CIImage is bigger than your original image, because it has the fuzzy edges of the blur. When you hard-crop the resulting image to the same size as your original image, it doesn't account for those fuzzy edges.
After:
CIImage *result = [filter valueForKey:kCIOutputImageKey];
Take a look at result.extent which is a CGRect that shows you the new bounding box relative to the original image. (i.e. for positive radii, result.extent.origin.y would be negative)
Here's some code (you should really test it):
CIImage *result = blurFilter.outputImage;
// Blur filter will create a larger image to cover the "fuzz", but
// we should cut it out since goes to transparent and it looks like a
// vignette
CGFloat imageSizeDifference = -result.extent.origin.x;
// NOTE: on iOS7 it seems to generate an image that will end up still vignetting, so
// as a hack just multiply the vertical inset by 2x
CGRect imageInset = CGRectInset(result.extent, imageSizeDifference, imageSizeDifference*2);
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:result fromRect:imageInset];
Hope that helps.
I am using CIFilter with CIHueBlendMode in order to blend an image (foreground) and a red layer (background).
I am doing the exact same thing in Photoshop CS6 with the Hue blend mode (I copied the foreground image and used the same red to fill the background layer).
Unfortunately the results are very different:
(and the same applies to comparing CIColorBlendMode, CIDifferenceBlendMode and CISaturationBlendMode with their Photoshop counterparts)
My question is: Is it me? Am I doing something wrong here? Or are Core Image Blend Modes and Photoshop Blend Modes altogether different things?
// Blending the input image with a red image
CIFilter* composite = [CIFilter filterWithName:@"CIHueBlendMode"];
[composite setValue:inputImage forKey:@"inputImage"];
[composite setValue:redImage forKey:@"inputBackgroundImage"];
CIImage *outputImage = [composite outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
imageView.image = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
// This is how I create the red image:
- (CIImage *)imageWithColor:(UIColor *)color inRect:(CGRect)rect
{
UIGraphicsBeginImageContext(rect.size);
CGContextRef _context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(_context, [color CGColor]);
CGContextFillRect(_context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [[CIImage alloc] initWithCGImage:image.CGImage options:nil];
}
I would like to use the CIHardLightBlendMode filter, but it doesn't seem to work as explained on the Apple website:
If the source image sample color is lighter than 50% gray, the
background is lightened, similar to screening. If the source image
sample color is darker than 50% gray, the background is darkened,
similar to multiplying. If the source image sample color is equal to
50% gray, the source image is not changed
I have tried with a 50% gray overlay image and my result is darker.
I have tried with a 73% gray and there is almost no impact (as there should be with 50% gray).
The problem is that I have a lot of existing overlay images that I currently use with ImageMagick, and they work correctly there (the overlay images also work in Photoshop and Gimp using the Hard Light blend mode).
To process the images faster, I would like to use Core Image, but it is not working as I expect.
How can I use the filter as it is documented?
Here is my code:
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *sourceImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:imgSource]];
CIImage *overlayImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:imgOverlay]];
CIImage *outputImage;
CIFilter *filter = [CIFilter filterWithName:@"CIHardLightBlendMode"];
[filter setDefaults];
[filter setValue:overlayImage forKey:kCIInputImageKey];
[filter setValue:sourceImage forKey:kCIInputBackgroundImageKey];
outputImage = [filter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
self.processedImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
Thanks