Changing image hue in iOS

I want to change the hue of an image using a slider. What I did is:
float slideValue = sldHueChange.value;
beginImage= [CIImage imageWithCGImage:imgBorder.image.CGImage];
context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:kCIContextUseSoftwareRenderer]];
filter = [CIFilter filterWithName:@"CIHueAdjust"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:slideValue] forKey:@"inputAngle"];
CIImage *outputImage = [filter outputImage];
CGImageRef cgimg =[context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
[imgBorder setImage:newImg];
CGImageRelease(cgimg);
It works, but it is not smooth. I want to change the hue of the image very smoothly.
If you have an idea for making a smooth hue changer, please share it. I really need it. Thanks in advance.

I would suggest using Brad Larson's excellent GPUImage, which has a lot of filters. Alternatively, you can try GPU rendering for the CIContext by changing the boolean to NO in the options dictionary when creating the CIContext.
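For reference, a minimal sketch of the second suggestion (the hueContext/hueFilter properties and originalImage are my own names, not from the question): create the GPU-backed context and the filter once and reuse them on every slider event, since context creation is the expensive part.
// Sketch: lazily create a GPU-backed context and a reusable filter.
- (IBAction)hueSliderChanged:(UISlider *)slider
{
    if (!self.hueContext) {
        // kCIContextUseSoftwareRenderer = NO keeps rendering on the GPU.
        self.hueContext = [CIContext contextWithOptions:
                              @{kCIContextUseSoftwareRenderer : @NO}];
        self.hueFilter = [CIFilter filterWithName:@"CIHueAdjust"];
    }
    // Always start from the unfiltered original to avoid compounding hue shifts.
    CIImage *input = [CIImage imageWithCGImage:self.originalImage.CGImage];
    [self.hueFilter setValue:input forKey:kCIInputImageKey];
    [self.hueFilter setValue:@(slider.value) forKey:@"inputAngle"];
    CIImage *output = self.hueFilter.outputImage;
    CGImageRef cgimg = [self.hueContext createCGImage:output fromRect:output.extent];
    imgBorder.image = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
}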

Now what I'm doing is:
CGRect rect = CGRectMake(0, 0, selectedBorderForChangingHue.size.width, selectedBorderForChangingHue.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context1 = UIGraphicsGetCurrentContext();
CGContextClipToMask(context1, rect, selectedBorderForChangingHue.CGImage);
CGContextSetFillColorWithColor(context1, [[UIColor colorWithHue:sldHueChange.value/255.0f saturation:1.0f brightness:1.0f alpha:1.0f] CGColor]);
CGContextFillRect(context1, rect);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *flippedImage = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation: UIImageOrientationDownMirrored];
imgBorder.image = flippedImage;
It works fine for me, and it is very smooth.
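Wrapped up as a reusable helper, the same approach might look like this sketch (the method and parameter names are my own, not from the original):
// Sketch: clip to the template's alpha mask and flood-fill with the
// slider-driven hue; no Core Image involved, so it stays fast.
- (UIImage *)tintedImageFromTemplate:(UIImage *)templateImage hue:(CGFloat)hue
{
    CGRect rect = CGRectMake(0, 0, templateImage.size.width, templateImage.size.height);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextClipToMask(ctx, rect, templateImage.CGImage);
    UIColor *fill = [UIColor colorWithHue:hue saturation:1.0f brightness:1.0f alpha:1.0f];
    CGContextSetFillColorWithColor(ctx, fill.CGColor);
    CGContextFillRect(ctx, rect);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Core Graphics fills upside-down relative to UIKit, so mirror back.
    return [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation:UIImageOrientationDownMirrored];
}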

Related

Pixelated layer on image in iOS

I need to add a pixelated rectangular layer on a UIImage, which can be undone. Just like this..
I used this code, but it's not doing what I need:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there any built-in filter by Apple for iOS? Please guide me. Thanks.
You can use GPUImage's GPUImagePixellateFilter https://github.com/BradLarson/GPUImage/blob/8811da388aed22e04ed54ca9a5a76791eeb40551/framework/Source/GPUImagePixellateFilter.h
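For one-shot use, that filter can be as simple as this sketch (assuming GPUImage is linked; sourceImage is a placeholder):
GPUImagePixellateFilter *pixellate = [[GPUImagePixellateFilter alloc] init];
pixellate.fractionalWidthOfAPixel = 0.05; // block size relative to image width
UIImage *result = [pixellate imageByFilteringImage:sourceImage];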
You can use the GPUImage framework, but it is often better to use iOS's own filters. Easy coding :)
- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
/*
Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
Parameters
inputImage: A CIImage object whose display name is Image.
inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
Default value: [150 150]
inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
Default value: 8.00
*/
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
CIVector *vector = [CIVector vectorWithX:fromImage.size.width /2.0f Y:fromImage.size.height /2.0f];
[filter setDefaults];
[filter setValue:vector forKey:@"inputCenter"];
[filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
[filter setValue:inputImage forKey:@"inputImage"];
CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}
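An example call (assuming self implements the method above; the scale of 20.0 is arbitrary):
UIImage *pixelated = [self applyCIPixelateFilter:imageView.image withScale:20.0];
imageView.image = pixelated;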

iOS image blur effect with category

I'm trying to add a blur effect using a category.
+ (UIImage *)blurImageWithImage:(UIImage*) imageName withView:(UIView*)view {
UIImage *sourceImage = imageName;
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:#"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply a Gaussian blur filter with a radius of 10
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
// Set up output context.
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, view.frame, cgImage);
CGImageRelease(cgImage); // release the Core Image result to avoid leaking it
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, view.frame);
CGContextRestoreGState(outputContext);
// Output image is ready.
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
Then I call this function inside a UIView like this:
UIImage *image = [UIImage imageNamed:@"xxx"];
UIImageView *page = [[UIImageView alloc] initWithImage:[UIImage blurImageWithImage:image withView:self]];
If I add this function directly in the class, it works, but not if I put it in a UIImage category.
I faced the same problem earlier, but thankfully I found a solution. Please follow these steps, and first make sure your blur image function itself works fine.
1) In the category, add an instance method instead of a class method, e.g.:
- (UIImage *)blurImageWithImage:(UIImage*) imageName withView:(UIView*)view
2) Import the category in your view controller.
3) Use the category, e.g.:
UIImage *image = [UIImage imageNamed:@"xxx"];
UIImageView *page = [[UIImageView alloc] initWithImage:[image blurImageWithImage:image withView:self]];
Let me know if this solution works for you.
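For completeness, a sketch of what the category declaration might look like (the UIImage+Blur file name is my assumption):
// UIImage+Blur.h
@interface UIImage (Blur)
- (UIImage *)blurImageWithImage:(UIImage *)imageName withView:(UIView *)view;
@end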
It turns out the problem was that I forgot to add a "-" when doing the context translate.
So what I ended up doing is creating a class method.
Interface:
+ (UIImage *)blurImageWithImageName:(NSString*) imageName withView:(UIView*)view;
Implementation :
+ (UIImage *)blurImageWithImageName:(NSString*) imageName withView:(UIView*)view {
UIImage *sourceImage = [UIImage imageNamed:imageName];
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:#"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply a Gaussian blur filter with a radius of 10
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
// Set up output context.
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, view.frame, cgImage);
CGImageRelease(cgImage); // release the Core Image result to avoid leaking it
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, view.frame);
CGContextRestoreGState(outputContext);
// Output image is ready.
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
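An example call from inside a UIView subclass (assuming an image named "xxx" in the bundle):
UIImage *blurred = [UIImage blurImageWithImageName:@"xxx" withView:self];
UIImageView *page = [[UIImageView alloc] initWithImage:blurred];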

CIFilter (CIStripesGenerator) with SKTexture?

I am trying to get stripes generated using CIFilter, then create an SKTexture from it.
Here is my code.
CIFilter *filter = [CIFilter filterWithName:@"CIStripesGenerator"];
[filter setDefaults];
[filter setValue:[CIColor colorWithRed:1 green:1 blue:1] forKey:@"inputColor0"];
[filter setValue:[CIColor colorWithRed:0 green:0 blue:0] forKey:@"inputColor1"];
// updated the code with this line
// still the same problem
CIImage *croppedImage = [filter.outputImage imageByCroppingToRect:CGRectMake(0, 0, 320, 480)];
SKTexture *lightTexture = [SKTexture textureWithImage:[UIImage imageWithCIImage:croppedImage]];
SKSpriteNode *light = [SKSpriteNode spriteNodeWithTexture:lightTexture size:self.size];
However, I receive a runtime error at the last line; any help would be appreciated. Apart from (lldb), the debugger does not give any further explanation.
UPDATE:
Thanks to rickster for guiding me towards the solution
-(UIImage*) generateImage {
// 1
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CIStripesGenerator"];
[filter setDefaults];
[filter setValue:[CIColor colorWithRed:1 green:1 blue:1] forKey:@"inputColor0"];
[filter setValue:[CIColor colorWithRed:0 green:0 blue:0] forKey:@"inputColor1"];
// 2
CGImageRef cgimg =
[context createCGImage:filter.outputImage fromRect:CGRectMake(0, 0, 320, 480)];
// 3
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
// 4
CGImageRelease(cgimg);
return newImage;
}
Then I can create a texture from the image:
SKTexture *stripesTexture = [SKTexture textureWithImage:[self generateImage]];
SKSpriteNode *stripes = [SKSpriteNode spriteNodeWithTexture:stripesTexture];
stripes.position=CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
[self addChild: stripes];
There are a couple of issues here:
You don't have a Core Image context for rendering your image. Create one with:
CIContext *context = [CIContext contextWithOptions:nil];
This probably won't provide real-time rendering performance. But it looks like you just want one-time generation of a static texture, so that's okay.
Most of the generator filters produce images of infinite extent. You need to either add a crop filter to the filter chain, or render your image using a method that lets you specify what rect of the image you want, like createCGImage:fromRect:. Then make an SKTexture from the resulting CGImageRef.
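For the crop-filter variant mentioned above, a sketch (where filter is the stripes generator from the code above, and the rect is the question's 320x480):
CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
[crop setValue:filter.outputImage forKey:kCIInputImageKey];
[crop setValue:[CIVector vectorWithCGRect:CGRectMake(0, 0, 320, 480)] forKey:@"inputRectangle"];
CIImage *finite = crop.outputImage; // now has a finite extent, safe to render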

Blur UIImageView with gradient alpha

How would I make a UIImageView have a gradient blurred background? In other words, the image is fully blurred on the left and gets less blurred towards the right?
We did the same in a project! We used this method:
- (UIImage *)blurImage:(CGImageRef)image withBlurLevel:(CGFloat)blur
{
CIImage *inputImage = [CIImage imageWithCGImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
keysAndValues:kCIInputImageKey, inputImage, @"inputRadius", @(blur), nil];
CIImage *outputImage = filter.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef outImage = [context createCGImage:outputImage
fromRect:[inputImage extent]];
UIImage *returnImage = [UIImage imageWithCGImage:outImage];
CGImageRelease(outImage);
return returnImage;
}
This code uses the GPU to do the blur, but you need to call the method on another thread if you don't want your UI to be blocked for a second.
Hope it helps!
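For the gradient part of the question specifically, one option worth trying (not from the answer above, and it needs a reasonably recent iOS) is CIMaskedVariableBlur, which blurs most where its mask is brightest; inputImage and imageWidth are placeholders here:
CIFilter *gradient = [CIFilter filterWithName:@"CISmoothLinearGradient"];
[gradient setValue:[CIVector vectorWithX:0 Y:0] forKey:@"inputPoint0"];
[gradient setValue:[CIVector vectorWithX:imageWidth Y:0] forKey:@"inputPoint1"];
[gradient setValue:[CIColor colorWithRed:1 green:1 blue:1] forKey:@"inputColor0"]; // full blur on the left
[gradient setValue:[CIColor colorWithRed:0 green:0 blue:0] forKey:@"inputColor1"]; // no blur on the right
CIFilter *blur = [CIFilter filterWithName:@"CIMaskedVariableBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:gradient.outputImage forKey:@"inputMask"];
[blur setValue:@20 forKey:@"inputRadius"];
// Render blur.outputImage cropped to inputImage.extent as usual.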

Do iOS Core Image Blend Modes differ from Photoshop Blend Modes?

I am using CIFilter and CIHueBlendMode in order to blend an image (foreground) and a red layer (background).
I am doing the exact same thing in Photoshop CS6 with the Hue blend mode (I copied the foreground image and used the same red to fill the background layer).
Unfortunately, the results are very different (and the same applies to comparing CIColorBlendMode, CIDifferenceBlendMode, and CISaturationBlendMode with their Photoshop counterparts).
My question is: is it me? Am I doing something wrong here? Or are Core Image blend modes and Photoshop blend modes altogether different things?
// Blending the input image with a red image
CIFilter *composite = [CIFilter filterWithName:@"CIHueBlendMode"];
[composite setValue:inputImage forKey:@"inputImage"];
[composite setValue:redImage forKey:@"inputBackgroundImage"];
CIImage *outputImage = [composite outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
imageView.image = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
// This is how I create the red image:
- (CIImage *)imageWithColor:(UIColor *)color inRect:(CGRect)rect
{
UIGraphicsBeginImageContext(rect.size);
CGContextRef _context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(_context, [color CGColor]);
CGContextFillRect(_context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [[CIImage alloc] initWithCGImage:image.CGImage options:nil];
}
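No definitive answer here, but one difference worth ruling out: by default, Core Image filters operate in a light-linear working color space, whereas Photoshop blends in the gamma-encoded document space, and that alone can change blend results noticeably. A sketch of forcing an sRGB working space when creating the context (my suggestion, not from the question):
// Hedged sketch: make Core Image blend in gamma-encoded sRGB instead of
// its default linear working space, closer to how Photoshop composites.
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CIContext *context = [CIContext contextWithOptions:@{kCIContextWorkingColorSpace : (__bridge id)srgb}];
CGColorSpaceRelease(srgb);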
