I'm using the following solution to mask my UIImage with some alpha drawing:
masking a UIImage
The problem is that later I want to apply some CIFilters. However, when I change the value of the filter, my alpha gets lost from the UIImage. Do I have to re-apply the alpha channel to the output image each time after modifying the CIFilter? That would surely make the process much slower.
Samples of code (each block is in a separate method):
// set the image
_image = [incomeImage imageWithMask:_mask]; // imageWithMask is the method from the link above
[_myView.imageView setImage:_image];
// calculate ciimages
_inputCIImage = [[CIImage alloc] initWithCGImage:_image.CGImage options:nil];
_myView.imageView.image = _image;
_currentCIImage = _inputCIImage;
// change value in filter
[filter setValue:@(0.2f) forKey:@"someKey"];
[filter setValue:_inputCIImage forKey:kCIInputImageKey];
_currentCIImage = [filter outputImage];
CGImageRef img = [_context createCGImage:_currentCIImage fromRect:[_currentCIImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:img];
You could do this using only CIFilters. Instead of using imageWithMask, you can use the CIBlendWithMask filter (see the Apple CIFilter Reference).
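A minimal sketch of that approach, reusing the incomeImage and _mask names from the question (CIBlendWithMask expects a grayscale mask: white keeps the foreground, black drops it to the background):
CIImage *foreground = [[CIImage alloc] initWithCGImage:incomeImage.CGImage options:nil];
CIImage *maskCIImage = [[CIImage alloc] initWithCGImage:_mask.CGImage options:nil];
CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:foreground forKey:kCIInputImageKey];
[blend setValue:[CIImage emptyImage] forKey:kCIInputBackgroundImageKey]; // transparent background preserves the alpha
[blend setValue:maskCIImage forKey:kCIInputMaskImageKey];
// Keep working at the CIImage level: feed this into the next filter instead of
// converting back to a UIImage, so the alpha survives the whole filter chain.
CIImage *maskedCIImage = blend.outputImage;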
I need to add a pixelated rectangular layer on a UIImage that can be undone, just like this:
I used this code, but it's not doing what I need:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there any built-in filter from Apple for iOS? Please guide me. Thanks.
You can use GPUImage's GPUImagePixellateFilter https://github.com/BradLarson/GPUImage/blob/8811da388aed22e04ed54ca9a5a76791eeb40551/framework/Source/GPUImagePixellateFilter.h
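A minimal usage sketch for a still UIImage (the image name here is just an example; fractionalWidthOfAPixel controls the block size):
#import "GPUImage.h"
UIImage *inputImage = [UIImage imageNamed:@"photo.jpg"]; // example image name
GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
pixellateFilter.fractionalWidthOfAPixel = 0.05; // block size as a fraction of the image width
UIImage *pixelatedImage = [pixellateFilter imageByFilteringImage:inputImage];
Note that this pixelates the whole image; for the rectangular, undoable region in the question you would still combine it with a mask, as in the layer approach above.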
We can use the GPUImage framework, but it's a lot better to use iOS's own filters. Easy coding :)
- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
/*
Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
Parameters
inputImage: A CIImage object whose display name is Image.
inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
Default value: [150 150]
inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
Default value: 8.00
*/
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
CIVector *vector = [CIVector vectorWithX:fromImage.size.width /2.0f Y:fromImage.size.height /2.0f];
[filter setDefaults];
[filter setValue:vector forKey:@"inputCenter"];
[filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
[filter setValue:inputImage forKey:@"inputImage"];
CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}
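Usage might look like this (assuming the method lives in the same class as your image view; the scale value is just an example):
imageView.image = [self applyCIPixelateFilter:imageView.image withScale:25.0];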
I want to blur my view, and I use this code:
//Get a UIImage from the UIView
NSLog(#"blur capture");
UIGraphicsBeginImageContext(BlurContrainerView.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat: 5] forKey: @"inputRadius"]; //change number to increase/decrease blur
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
//create UIImage from filtered image
blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
//Place the UIImage in a UIImageView
UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
newView.image = blurredImage;
NSLog(#"%f,%f",newView.frame.size.width,newView.frame.size.height);
//insert blur UIImageView below transparent view inside the blur image container
[BlurContrainerView insertSubview:newView belowSubview:transparentView];
And it blurs the view, but not all of it. How can I blur all of the view?
The issue isn't that it's not blurring all of the image, but rather that the blur extends past the boundary of the original image, making the result larger, so it doesn't line up properly.
To keep the image the same size, after the line:
CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
You can grab the CGRect for a rectangle the size of the original image in the center of this resultImage:
// note, adjust rect because blur changed size of image
CGRect rect = [resultImage extent];
rect.origin.x += (rect.size.width - viewImage.size.width ) / 2;
rect.origin.y += (rect.size.height - viewImage.size.height) / 2;
rect.size = viewImage.size;
And then use CIContext to grab that portion of the image:
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimg = [context createCGImage:resultImage fromRect:rect];
UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
Alternatively, for iOS 7, you can go to the iOS UIImageEffects sample code, download iOS_UIImageEffects.zip, and grab the UIImage+ImageEffects category. It provides a few new methods:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
So, to blur an image and lighten it (giving that "frosted glass" effect), you can then do:
UIImage *newImage = [image applyLightEffect];
Interestingly, Apple's code does not employ CIFilter, but rather calls vImageBoxConvolve_ARGB8888 of the vImage high-performance image processing framework. This technique is illustrated in WWDC 2013 video Implementing Engaging UI on iOS.
A faster solution is to avoid CGImageRef altogether and perform all transformations lazily at the CIImage level.
So, instead of this line, which produces the wrong size:
// create UIImage from filtered image (but size is wrong)
blurredImage = [[UIImage alloc] initWithCIImage:resultImage];
A nice solution is to write:
Objective-C
// cropping rect because blur changed size of image
CIImage *croppedImage = [resultImage imageByCroppingToRect:imageToBlur.extent];
// create UIImage from filtered cropped image
blurredImage = [[UIImage alloc] initWithCIImage:croppedImage];
Swift 3
// cropping rect because blur changed size of image
let croppedImage = resultImage.cropping(to: imageToBlur.extent)
// create UIImage from filtered cropped image
let blurredImage = UIImage(ciImage: croppedImage)
Swift 4
// cropping rect because blur changed size of image
let croppedImage = resultImage.cropped(to: imageToBlur.extent)
// create UIImage from filtered cropped image
let blurredImage = UIImage(ciImage: croppedImage)
Looks like the blur filter is giving you back an image that’s bigger than the one you started with, which makes sense since pixels at the edges are getting blurred out past them. The easiest solution would probably be to make newView use a contentMode of UIViewContentModeCenter so it doesn’t try to squash the blurred image down; you could also crop blurredImage by drawing it in the center of a new context of the appropriate size, but you don’t really need to.
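That first suggestion is a one-liner, something like:
newView.contentMode = UIViewContentModeCenter; // don't scale the oversized blurred image down
newView.clipsToBounds = YES; // optionally hide the overflow instead of letting it spill out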
We are applying a CIGaussianBlur filter to a few images. The process works fine most of the time, but when the app moves to the background it produces white stripes on the image. (Images below; notice that the left and bottom of the image are striped white and that the image is shrunk a bit compared to the original.)
The Code:
- (UIImage*)imageWithBlurRadius:(CGFloat)radius
{
UIImage *image = self;
LOG(#"(1) image size before resize = %#",NSStringFromCGSize(image.size));
NSData *imageData = UIImageJPEGRepresentation(self, 1.0);
LOG(#"(2) image data length = %ul",imageData.length);
//create our blurred image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
//setting up Gaussian Blur (we could use one of many filters offered by Core Image)
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
//CIGaussianBlur has a tendency to shrink the image a little, this ensures it matches up exactly to the bounds of our original image
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
LOG(#"(3) final image size after resize = %#",NSStringFromCGSize(finalImage.size));
return finalImage;
}
Before Filter
After Filter
Actually, I just faced this exact problem, and found a solution that's different than what @RizwanSattar describes.
What I do, based on an exchange with "Rincewind" on the Apple developer boards, is to first apply a CIAffineClamp on the image, with the transform value set to identity. This creates an image at the same scale, but with an infinite extent. That causes the blur to blur the edges correctly.
Then, after I apply the blur, I crop the image to its original extent, cropping away the feathering that takes place on the edges.
You can see the code in a CI Filter demo app I've posted on github:
CIFilter demo project on github
It's a general-purpose program that handles all the different CI filters, but it has code to deal with the Gaussian blur filter.
Take a look at the method showImage. It has special-case code to set the extent on the source image before applying the blur filter:
if ([currentFilterName isEqualToString: @"CIGaussianBlur"])
{
// NSLog(@"new image is bigger");
CIFilter *clampFilter = [self clampFilter];
CIImage *sourceCIImage = [CIImage imageWithCGImage: imageToEdit.CGImage];
[clampFilter setValue: sourceCIImage
forKey: kCIInputImageKey];
[clampFilter setValue:[NSValue valueWithBytes: &CGAffineTransformIdentity
objCType:@encode(CGAffineTransform)]
forKey:@"inputTransform"];
sourceCIImage = [clampFilter valueForKey: kCIOutputImageKey];
[currentFilter setValue: sourceCIImage
forKey: kCIInputImageKey];
}
(Where the method "clampFilter" just lazily loads a CIAffineClamp filter.)
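For illustration, that lazy accessor could be as simple as the following sketch (assuming a _clampFilter instance variable; the actual demo project may differ):
- (CIFilter *)clampFilter
{
if (!_clampFilter) {
_clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
}
return _clampFilter;
}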
Then I apply the user-selected filter:
outputImage = [currentFilter valueForKey: kCIOutputImageKey];
After applying the selected filter, I check the extent of the resulting image and crop it back to the original extent if it's bigger:
CGSize newSize;
newSize = outputImage.extent.size;
if (newSize.width > sourceImageExtent.width || newSize.height > sourceImageExtent.height)
{
// NSLog(#"new image is bigger");
CIFilter *cropFilter = [self cropFilter]; //Lazily load a CIAffineClamp filter
CGRect boundsRect = CGRectMake(0, 0, sourceImageExtent.width, sourceImageExtent.height);
[cropFilter setValue:outputImage forKey: #"inputImage"];
CIVector *rectVector = [CIVector vectorWithCGRect: boundsRect];
[cropFilter setValue: rectVector
forKey: #"inputRectangle"];
outputImage = [cropFilter valueForKey: kCIOutputImageKey];
}
The reason you are seeing those "white stripes" in the blurred image is that the resulting CIImage is bigger than your original image, because it has the fuzzy edges of the blur. When you hard-crop the resulting image to be the same size as your original image, it's not accounting for the fuzzy edges.
After:
CIImage *result = [filter valueForKey:kCIOutputImageKey];
Take a look at result.extent which is a CGRect that shows you the new bounding box relative to the original image. (i.e. for positive radii, result.extent.origin.y would be negative)
Here's some code (you should really test it):
CIImage *result = blurFilter.outputImage;
// Blur filter will create a larger image to cover the "fuzz", but
// we should cut it out since it fades to transparent and it looks like a
// vignette
CGFloat imageSizeDifference = -result.extent.origin.x;
// NOTE: on iOS7 it seems to generate an image that will end up still vignetting, so
// as a hack just multiply the vertical inset by 2x
CGRect imageInset = CGRectInset(result.extent, imageSizeDifference, imageSizeDifference*2);
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:result fromRect:imageInset];
Hope that helps.
How can I convert an RGB image to a 1-channel (black/white) image using iOS 5?
The input image is usually a photo of a book page.
The goal is to reduce the size of the photocopy by converting it to a 1-channel image.
If I understand your question, you want to apply a black and white thresholding to the image based on a pixel's luminance. For a fast way of doing this, you could use my open source GPUImage project (supporting back to iOS 4.x) and a couple of the image processing operations it provides.
In particular, the GPUImageLuminanceThresholdFilter and GPUImageAdaptiveThresholdFilter might be what you're looking for here. The former turns a pixel to black or white based on a luminance threshold you set (the default is 50%). The latter takes the local average luminance into account when applying this threshold, which can produce better results for text on pages of a book.
Usage of these filters on a UIImage is fairly simple:
UIImage *inputImage = [UIImage imageNamed:@"book.jpg"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
UIImage *quickFilteredImage = [thresholdFilter imageByFilteringImage:inputImage];
These can be applied to a live camera feed and photos taken by the camera, as well.
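The adaptive variant is used the same way; blurRadiusInPixels controls the size of the local neighborhood used for the average (the value here is just an example):
GPUImageAdaptiveThresholdFilter *adaptiveFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
adaptiveFilter.blurRadiusInPixels = 4.0; // local area used to compute the average luminance
UIImage *adaptiveImage = [adaptiveFilter imageByFilteringImage:inputImage];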
You can use Core Image to process your image to black & white.
Use CIEdgeWork; this will convert your image to black and white.
For more information on Core Image programming, visit:
https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html#//apple_ref/doc/uid/TP30001185-CH3-TPXREF101
The code you are looking for is probably this:
CIContext *context = [CIContext contextWithOptions:nil]; // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL]; // 2
CIFilter *filter = [CIFilter filterWithName:#"CIEdgeWork"]; // 3
[filter setValue:image forKey:kCIInputImgeKey];
[filter setValue:[NSNumber numberWithFloat:0.8f] forKey:#"InputIntensity"];
CIImage *result = [filter valueForKey:kCIOutputImageKey]; // 4
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent];
Here is some sample code; maybe it's helpful:
@implementation UIImage (GrayImage)
-(UIImage*)grayImage
{
int width = self.size.width;
int height = self.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
if (context == NULL) {
return nil;
}
CGContextDrawImage(context,CGRectMake(0, 0, width, height), self.CGImage);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *grayImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
return grayImage;
}
@end
I wrote it as a category on UIImage, but it does not support PNG images with transparent pixels; those areas will come out black.
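If you do need to handle PNGs with transparency, one option (a sketch, not tested against every case) is to fill the context with white before drawing, so transparent pixels come out white instead of black:
// right after creating the bitmap context, before CGContextDrawImage:
CGContextSetGrayFillColor(context, 1.0, 1.0); // white, fully opaque
CGContextFillRect(context, CGRectMake(0, 0, width, height));
CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);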
I don't know whether this sounds silly:
I have an image that will be added to a UIImageView or a UIButton. The image contains some text in red and the background is black. Is it possible to change the background color as well as the text color to custom colors?
If, after considering George's answer, you still need to modify the colours, you could try using Core Image filters.
The following example adjusts the hue of a UIImageView called screenImage by an angle given in angle.
- (void)rebuildFilterChain:(double)angle {
UIImage *uiImage = [UIImage imageNamed:@"background.jpg"];
CIImage *image = [CIImage imageWithCGImage:[uiImage CGImage]];
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
[hueAdjust setDefaults];
[hueAdjust setValue:image forKey: @"inputImage"];
[hueAdjust setValue: [NSNumber numberWithFloat:angle] forKey: @"inputAngle"];
self.resultImage = [hueAdjust valueForKey: @"outputImage"];
CIContext *context = [CIContext contextWithOptions:nil]; // a CIContext is needed to render the result
CGImageRef cgImage = [context createCGImage:self.resultImage fromRect:self.resultImage.extent];
screenImage.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
}
The full list of Core Image filters is on the Apple site.
Why do you use an image? If you have a background color, that means you are using a flat color, not an image. So you can set the background color of the button as well as the title color. If you are doing this just to have rounded corners, don't.
Just import <QuartzCore/QuartzCore.h>
and do this:
button.layer.cornerRadius = 8;
button.layer.masksToBounds = TRUE;
Hope this helps.
Cheers!