CoreImage CIHardLightBlendMode issue - ios

I would like to use the CIHardLightBlendMode filter, but it doesn't seem to work as described in the Apple documentation:
If the source image sample color is lighter than 50% gray, the background is lightened, similar to screening. If the source image sample color is darker than 50% gray, the background is darkened, similar to multiplying. If the source image sample color is equal to 50% gray, the source image is not changed.
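For reference, the per-channel math behind that description is the standard hard-light formula, sketched below with channel values normalized to 0..1 (this is my own illustration, not Apple's implementation):

// Sketch of the standard hard-light blend for a single normalized channel.
// 'overlay' is the source (overlay) sample, 'base' is the background sample.
static CGFloat HardLightChannel(CGFloat overlay, CGFloat base)
{
    if (overlay <= 0.5) {
        // Overlay darker than 50% gray: behaves like multiply, scaled by 2.
        return 2.0 * overlay * base;
    } else {
        // Overlay lighter than 50% gray: behaves like screen, scaled by 2.
        return 1.0 - 2.0 * (1.0 - overlay) * (1.0 - base);
    }
}
// With overlay == 0.5 both branches return exactly 'base', which is why a
// 50% gray overlay should leave the background unchanged.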
I have tried with a 50% gray overlay image and my result is darker.
I have tried with a 73% gray and there is almost no impact (which is what should happen at 50%).
The problem is that I have a lot of existing overlay images that I currently use with ImageMagick, where they work correctly (the overlay images also work in Photoshop and GIMP using the Hard Light blend mode).
To process the images faster, I would like to use Core Image, but it is not behaving as I expect.
How can I use the filter so that it works as documented?
Here is my code:
// Load the source and overlay images.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *sourceImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:imgSource]];
CIImage *overlayImage = [CIImage imageWithContentsOfURL:[NSURL fileURLWithPath:imgOverlay]];
CIImage *outputImage;

// Blend the overlay (inputImage) over the source (inputBackgroundImage).
CIFilter *filter = [CIFilter filterWithName:@"CIHardLightBlendMode"];
[filter setDefaults];
[filter setValue:overlayImage forKey:kCIInputImageKey];
[filter setValue:sourceImage forKey:kCIInputBackgroundImageKey];
outputImage = [filter outputImage];

// Render to a CGImage and wrap it in a UIImage.
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
self.processedImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
Thanks
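One thing worth checking (an assumption on my part, not something confirmed in this thread): Core Image blends in the context's working color space, which is not necessarily the gamma-encoded space that ImageMagick, Photoshop, or GIMP blend in, so a 50% sRGB gray may not be neutral after conversion. You can ask the CIContext to skip color management and compare the result; kCIContextWorkingColorSpace is a documented context option:

// Sketch: create a CIContext that performs no color management, so the blend
// runs directly on the stored (gamma-encoded) pixel values.
// NSNull is the documented value for disabling the working color space.
CIContext *context = [CIContext contextWithOptions:
                         @{ kCIContextWorkingColorSpace : [NSNull null] }];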

Related

Core Image Filter CISourceOverCompositing not working Properly

I am working on a photo editing app and I have to merge two images, one over the other, like this.
I have implemented the following code to do so.
Here imgedit is the background image and
imgEdit is the UIImageView containing imgedit.
UIImage *tempImg = [UIImage imageNamed:[NSString stringWithFormat:@"borderImg"]];
CIImage *inputBackgroundImage = [[CIImage alloc] initWithImage:imgedit];
CIImage *inputImage = [[CIImage alloc] initWithImage:tempImg];
CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setDefaults];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:inputBackgroundImage forKey:@"inputBackgroundImage"];
CIImage *outputImage1 = [filter valueForKey:@"outputImage"];
CIContext *context = [CIContext contextWithOptions:nil];
imgEdit.image = [UIImage imageWithCGImage:[context createCGImage:outputImage1 fromRect:outputImage1.extent]];
But this is the output image I am getting after running the code above:
I have also tried to resize the input white frame image using the following code:
tempImg=[tempImg resizedImageToSize:CGSizeMake(imgEdit.image.size.width,imgEdit.image.size.height)];
With the code above the image gets resized properly, but the composite is still not correct.
Please help me out here.
Your help will be highly appreciated. Thank you in advance.
A better way to resize is as follows:
inputImage = [inputImage imageByApplyingTransform:CGAffineTransformMakeScale(inputBackgroundImage.extent.size.width/inputImage.extent.size.width, inputBackgroundImage.extent.size.height/inputImage.extent.size.height)];
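Putting that together with the code from the question, a minimal sketch might look like this (same imgedit, tempImg, and imgEdit names as above):

// Scale the frame to the background's extent, then composite it on top.
CIImage *inputBackgroundImage = [[CIImage alloc] initWithImage:imgedit];
CIImage *inputImage = [[CIImage alloc] initWithImage:tempImg];
CGFloat scaleX = inputBackgroundImage.extent.size.width / inputImage.extent.size.width;
CGFloat scaleY = inputBackgroundImage.extent.size.height / inputImage.extent.size.height;
inputImage = [inputImage imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];

CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:inputBackgroundImage forKey:kCIInputBackgroundImageKey];
CIImage *composited = filter.outputImage;

// Render once and release the intermediate CGImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:composited fromRect:composited.extent];
imgEdit.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);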

CIFilter is not working correctly when app is in background

We are applying a CIGaussianBlur filter to a few images. The process works fine most of the time, but when the app moves to the background the output has white stripes on it. (Images below; notice that the left and bottom of the image are striped to white and that the image is shrunk a bit compared to the original image.)
The Code:
- (UIImage *)imageWithBlurRadius:(CGFloat)radius
{
    UIImage *image = self;
    LOG(@"(1) image size before resize = %@", NSStringFromCGSize(image.size));
    NSData *imageData = UIImageJPEGRepresentation(self, 1.0);
    LOG(@"(2) image data length = %lu", (unsigned long)imageData.length);

    // Create our blurred image.
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];

    // Set up the Gaussian blur (we could use any of the filters offered by Core Image).
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];

    // CIGaussianBlur has a tendency to shrink the image a little; this ensures it
    // matches up exactly to the bounds of our original image.
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    LOG(@"(3) final image size after resize = %@", NSStringFromCGSize(finalImage.size));
    return finalImage;
}
Before Filter
After Filter
Actually, I just faced this exact problem, and found a solution that's different from what @RizwanSattar describes.
What I do, based on an exchange with "Rincewind" on the Apple developer boards, is to first apply a CIAffineClamp on the image, with the transform value set to identity. This creates an image at the same scale, but with an infinite extent. That causes the blur to blur the edges correctly.
Then, after I apply the blur, I crop the image back to its original extent, cropping away the feathering that takes place on the edges.
You can see the code in a CI Filter demo app I've posted on github:
CIFilter demo project on github
It's a general-purpose program that handles all the different CI filters, but it has code to deal with the Gaussian blur filter.
Take a look at the method showImage. It has special-case code to set the extent on the source image before applying the blur filter:
if ([currentFilterName isEqualToString: @"CIGaussianBlur"])
{
    // NSLog(@"new image is bigger");
    CIFilter *clampFilter = [self clampFilter];
    CIImage *sourceCIImage = [CIImage imageWithCGImage: imageToEdit.CGImage];
    [clampFilter setValue: sourceCIImage
                   forKey: kCIInputImageKey];
    [clampFilter setValue: [NSValue valueWithBytes: &CGAffineTransformIdentity
                                          objCType: @encode(CGAffineTransform)]
                   forKey: @"inputTransform"];
    sourceCIImage = [clampFilter valueForKey: kCIOutputImageKey];
    [currentFilter setValue: sourceCIImage
                     forKey: kCIInputImageKey];
}
(Where the method "clampFilter" just lazily loads a CIAffineClamp filter.)
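(For completeness, a lazy loader along those lines could look something like the sketch below; the ivar name is my assumption, the demo project may differ.)

// Sketch: lazily create and cache a CIAffineClamp filter.
- (CIFilter *)clampFilter
{
    if (_clampFilter == nil) {
        _clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    }
    return _clampFilter;
}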
Then I apply the user-selected filter:
outputImage = [currentFilter valueForKey: kCIOutputImageKey];
Then after applying the selected filter, I then check the extent of the resulting image and crop it back to the original extent if it's bigger:
CGSize newSize;
newSize = outputImage.extent.size;
if (newSize.width > sourceImageExtent.width || newSize.height > sourceImageExtent.height)
{
    // NSLog(@"new image is bigger");
    CIFilter *cropFilter = [self cropFilter]; // Lazily load a CICrop filter
    CGRect boundsRect = CGRectMake(0, 0, sourceImageExtent.width, sourceImageExtent.height);
    [cropFilter setValue: outputImage forKey: @"inputImage"];
    CIVector *rectVector = [CIVector vectorWithCGRect: boundsRect];
    [cropFilter setValue: rectVector
                  forKey: @"inputRectangle"];
    outputImage = [cropFilter valueForKey: kCIOutputImageKey];
}
The reason you are seeing those "white stripes" in the blurred image is that the resulting CIImage is bigger than your original image, because it includes the fuzzy edges of the blur. When you hard-crop the resulting image to the same size as your original image, it does not account for those fuzzy edges.
After:
CIImage *result = [filter valueForKey:kCIOutputImageKey];
Take a look at result.extent, which is a CGRect showing the new bounding box relative to the original image (i.e., for positive radii, result.extent.origin.y will be negative).
Here's some code (you should really test it):
CIImage *result = blurFilter.outputImage;
// Blur filter will create a larger image to cover the "fuzz", but
// we should cut it out since goes to transparent and it looks like a
// vignette
CGFloat imageSizeDifference = -result.extent.origin.x;
// NOTE: on iOS7 it seems to generate an image that will end up still vignetting, so
// as a hack just multiply the vertical inset by 2x
CGRect imageInset = CGRectInset(result.extent, imageSizeDifference, imageSizeDifference*2);
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:result fromRect:imageInset];
Hope that helps.

change colors in image objective c

I have a black and white image, and I would like to change the black to blue and the white to yellow (for example) in Objective-C.
How can I do this?
Thanks
You can use Core Image to do this.
UIImage *bwImage = ... // Get your image from somewhere
CGImageRef bwCGImage = bwImage.CGImage;
CIImage *bwCIImage = [CIImage imageWithCGImage:bwCGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIHueAdjust"];
// Change the float value here to change the hue
[filter setValue:[NSNumber numberWithFloat:0.5] forKey:@"inputAngle"];
// input black and white image
[filter setValue:bwCIImage forKey:kCIInputImageKey];
// get output from filter
CIImage *hueImage = [filter valueForKey:kCIOutputImageKey];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:hueImage
fromRect:[hueImage extent]];
UIImage *coloredImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
See documentation for more info: Core Image Filter Reference
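If you specifically want black mapped to one color and white to another (rather than rotating the hue), the CIFalseColor filter does exactly that. A minimal sketch, not part of the answer above:

// CIFalseColor maps dark pixels toward inputColor0 and light pixels toward inputColor1.
CIFilter *falseColor = [CIFilter filterWithName:@"CIFalseColor"];
[falseColor setValue:bwCIImage forKey:kCIInputImageKey];
[falseColor setValue:[CIColor colorWithRed:0.0 green:0.0 blue:1.0] forKey:@"inputColor0"]; // black -> blue
[falseColor setValue:[CIColor colorWithRed:1.0 green:1.0 blue:0.0] forKey:@"inputColor1"]; // white -> yellow
CIImage *mappedImage = [falseColor valueForKey:kCIOutputImageKey];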

Convert RGB image to 1 channel image (black/white)

How can I convert an RGB image to a 1-channel (black/white) image using iOS 5?
The input image is usually a photo of a book page.
The goal is to reduce the size of a photocopy by converting it to a 1-channel image.
If I understand your question, you want to apply a black and white thresholding to the image based on a pixel's luminance. For a fast way of doing this, you could use my open source GPUImage project (supporting back to iOS 4.x) and a couple of the image processing operations it provides.
In particular, the GPUImageLuminanceThresholdFilter and GPUImageAdaptiveThresholdFilter might be what you're looking for here. The former turns a pixel to black or white based on a luminance threshold you set (the default is 50%). The latter takes the local average luminance into account when applying this threshold, which can produce better results for text on pages of a book.
Usage of these filters on a UIImage is fairly simple:
UIImage *inputImage = [UIImage imageNamed:@"book.jpg"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
UIImage *quickFilteredImage = [thresholdFilter imageByFilteringImage:inputImage];
These can be applied to a live camera feed and photos taken by the camera, as well.
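For the book-page case, the adaptive variant is used the same way; a short sketch (the filter's default parameters may need tuning for your pages):

// Compare each pixel against its local average luminance instead of a fixed threshold.
GPUImageAdaptiveThresholdFilter *adaptiveFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
UIImage *thresholdedPage = [adaptiveFilter imageByFilteringImage:inputImage];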
You can use Core Image to process your image to black and white.
Use CIEdgeWork; this will convert your image to black and white.
For more information on Core Image programming, visit:
https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-TPXREF101
The code you are looking for is probably this:
CIContext *context = [CIContext contextWithOptions:nil];                      // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL];                      // 2
CIFilter *filter = [CIFilter filterWithName:@"CIEdgeWork"];                   // 3
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];                     // 4
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
Here is some sample code that may be helpful:
@implementation UIImage (GrayImage)

- (UIImage *)grayImage
{
    int width = self.size.width;
    int height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    return grayImage;
}

@end
I wrote it as a category on UIImage. Note that it does not support PNG images with transparent pixels; transparent areas will come out black.
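Usage is straightforward; a quick sketch, where pagePhoto is a placeholder for your source UIImage:

// Render any UIImage into a single-channel grayscale bitmap.
UIImage *grayPage = [pagePhoto grayImage];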

Do iOS Core Image Blend Modes differ from Photoshop Blend Modes?

I am using CIFilter with the CIHueBlendMode to blend an image (foreground) with a red layer (background).
I am doing the exact same thing in Photoshop CS6 with the Hue blend mode (I copied the foreground image and used the same red to fill the background layer).
Unfortunately the results are very different:
(and the same applies to comparing CIColorBlendMode, CIDifferenceBlendMode and CISaturationBlendMode with their Photoshop counterparts)
My question is: Is it me? Am I doing something wrong here? Or are Core Image Blend Modes and Photoshop Blend Modes altogether different things?
// Blending the input image with a red image
CIFilter *composite = [CIFilter filterWithName:@"CIHueBlendMode"];
[composite setValue:inputImage forKey:@"inputImage"];
[composite setValue:redImage forKey:@"inputBackgroundImage"];
CIImage *outputImage = [composite outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
imageView.image = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
// This is how I create the red image:
- (CIImage *)imageWithColor:(UIColor *)color inRect:(CGRect)rect
{
UIGraphicsBeginImageContext(rect.size);
CGContextRef _context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(_context, [color CGColor]);
CGContextFillRect(_context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [[CIImage alloc] initWithCGImage:image.CGImage options:nil];
}
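For reference, this helper could be called like the sketch below to build the red background at the same size as the foreground (inputImage and redImage as in the snippet further up; the color is just an example):

// Build a solid red CIImage matching the foreground's extent.
CIImage *redImage = [self imageWithColor:[UIColor redColor] inRect:inputImage.extent];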
