I've got a Gaussian blur which I'm applying in an app.
//Get a UIImage from the UIView
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the UIImage
CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:imageToBlur forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat:2] forKey:@"inputRadius"];
CIImage *resultImage = [gaussianBlurFilter valueForKey:@"outputImage"];
UIImage *endImage = [[UIImage alloc] initWithCIImage:resultImage];
//Place the UIImage in a UIImageView
newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
newView.image = endImage;
[self.view addSubview:newView];
It works great, but I want to be able to reverse it and return the view back to normal.
How can I do this? Simply removing the blurred view from its superview didn't work, and I've also tried setting various properties to nil with no luck.
Keep a pointer to viewImage in a property
@property (nonatomic, strong) UIImage *originalImage;
then after
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
add
self.originalImage = viewImage;
To revert your image:
newView.image = self.originalImage;
Applying the blur does not alter viewImage: you create a separate CIImage which gets blurred, and then make a new UIImage from the blurred CIImage.
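For example, a minimal sketch of switching back and forth (assuming the newView and originalImage names above; the blurredImage property here is a hypothetical place to keep the blurred copy):
// Hypothetical toggle; blurredImage is an assumed extra property holding endImage.
- (void)showBlurred:(BOOL)blurred {
newView.image = blurred ? self.blurredImage : self.originalImage;
}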
The first thing I noticed in your code is that you don't apply the filter in an async task. Blurring the image takes time, so if you don't want to freeze the main thread you should use:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
//blur the image in a second thread
dispatch_async(dispatch_get_main_queue(), ^{
//set the blurred image to your imageView in the main thread
});
});
To be able to restore the original, just put the blurred copy in another image view that appears over the original. In my case the original isn't an image view but the view itself, so I only add an image view on top of my view for the blurred image, and set its image to nil when I want to revert.
Finally, if you want to avoid a blink when you set the blurred image, you can animate the alpha to make a soft transition:
[self.blurredImageView setAlpha: 0.0]; //make the imageView invisible
[self.blurredImageView setImage:blurredImage];
//and after set the image, make it visible slowly.
[UIView animateWithDuration:0.5 delay:0.1
options:UIViewAnimationOptionCurveEaseInOut
animations:^{
[self.blurredImageView setAlpha: 1.0];
}
completion:nil];
Here are my complete methods:
- (void)makeBlurredScreenShot{
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *imageView = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *sourceImage = [CIImage imageWithCGImage:imageView.CGImage];
// Apply clamp filter:
// this is needed because CIGaussianBlur, when applied, produces
// a transparent border around the image
NSString *clampFilterName = @"CIAffineClamp";
CIFilter *clamp = [CIFilter filterWithName:clampFilterName];
if (!clamp)
return;
[clamp setValue:sourceImage forKey:kCIInputImageKey];
CIImage *clampResult = [clamp valueForKey:kCIOutputImageKey];
// Apply Gaussian Blur filter
NSString *gaussianBlurFilterName = @"CIGaussianBlur";
CIFilter *gaussianBlur = [CIFilter filterWithName:gaussianBlurFilterName];
if (!gaussianBlur)
return;
[gaussianBlur setValue:clampResult forKey:kCIInputImageKey];
[gaussianBlur setValue:[NSNumber numberWithFloat:8.0] forKey:@"inputRadius"];
CIImage *gaussianBlurResult = [gaussianBlur valueForKey:kCIOutputImageKey];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
CGImageRef cgImage = [context createCGImage:gaussianBlurResult fromRect:[sourceImage extent]];
UIImage *blurredImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
dispatch_async(dispatch_get_main_queue(), ^{
[self.blurredImageView setAlpha: 0.0];
[self.blurredImageView setImage:blurredImage];
[UIView animateWithDuration:0.5 delay:0.1
options:UIViewAnimationOptionCurveEaseInOut
animations:^{
[self.blurredImageView setAlpha: 1.0];
}
completion:nil];
});
});
}
- (void)removeBlurredScreenShot{
[UIView animateWithDuration:0.5 delay:0.1
options:UIViewAnimationOptionCurveEaseInOut
animations:^{
[self.blurredImageView setAlpha: 0.0];
}
completion:^(BOOL finished) {
[self.blurredImageView setImage:nil];
}];
}
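A hypothetical way to drive these two methods, e.g. from a toggle button action (the action name is just an example):
- (IBAction)toggleBlurTapped:(id)sender {
// if no blurred screenshot is currently shown, create one; otherwise fade it out
if (self.blurredImageView.image == nil) {
[self makeBlurredScreenShot];
} else {
[self removeBlurredScreenShot];
}
}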
Like I said in my previous comment, you could create a property/iVar that holds the image before the effect is applied and revert in case you want to undo.
_originalImage = viewImage; // keep a strong reference; UIImage is immutable, so the blur won't touch it
// Do your Blur Stuff
// Now somewhere down the line in your program, if you don't like the blur and the user would like to undo:
viewImage = _originalImage;
As per your comment on @HeWas's answer, you shouldn't blur the view completely. If the view is getting blurred, there's something else you are doing wrong elsewhere in your program.
Related
I am trying to add an image background to a generated Aztec code. So far I can get the Aztec code to generate, but I'm having trouble with the CIBlendWithMask filter and am not sure exactly what I'm doing wrong. I believe the user-selected background as kCIInputBackgroundImageKey is correct, and the Aztec output image as kCIInputImageKey is correct. I think where I'm going wrong is the kCIInputMaskImageKey, but I'm not exactly sure what I need to do; I thought the Aztec output would be a sufficient mask image. Do I need to select a color or something to get the background clipping to the Aztec image?
CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
CIFilter *colorFilter = [CIFilter filterWithName:@"CIFalseColor"];
[aztecFilter setValue:stringData forKey:@"inputMessage"];
[colorFilter setValue:aztecFilter.outputImage forKey:@"background"];
NSData* imageData = [[NSUserDefaults standardUserDefaults] objectForKey:@"usertheme"];
CIImage *image = [UIImage imageWithData:imageData].CIImage;
[colorFilter setValue:[CIColor colorWithCGColor:[[UIColor blackColor] CGColor]] forKey:@"inputColor0"];
[colorFilter setValue:[CIColor colorWithRed:1 green:1 blue:1 alpha:0] forKey:@"inputColor1"];
CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputImageKey];
[blendFilter setValue:image forKey:kCIInputBackgroundImageKey];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputMaskImageKey];
I'm trying to create something like this, but for Aztec codes instead of QR codes.
You may be over-thinking this a bit...
You can use the CIBlendWithMask filter with only a background image and a mask image.
So, if we start with a gradient image (you may be generating it on-the-fly):
Then generate the Aztec Code image (yellow outline is just to show the frame):
We can use that code image as the mask.
Here is sample code:
- (void)viewDidLoad {
[super viewDidLoad];
self.view.backgroundColor = [UIColor systemYellowColor];
// create a vertical stack view
UIStackView *sv = [UIStackView new];
sv.axis = UILayoutConstraintAxisVertical;
sv.spacing = 8;
sv.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:sv];
// add 3 image views to the stack view
for (int i = 0; i < 3; ++i) {
UIImageView *imgView = [UIImageView new];
[sv addArrangedSubview:imgView];
[imgView.widthAnchor constraintEqualToConstant:240].active = YES;
[imgView.heightAnchor constraintEqualToAnchor:imgView.widthAnchor].active = YES;
}
[self.view addSubview:sv];
[sv.centerXAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerXAnchor].active = YES;
[sv.centerYAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerYAnchor].active = YES;
// load a gradient image for the background
UIImage *gradientImage = [UIImage imageNamed:@"bkgGradient"];
// put it in the first image view
((UIImageView *)sv.arrangedSubviews[0]).image = gradientImage;
// create aztec filter
CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
// give it some string data
NSString *qrString = @"My string to encode";
NSData *stringData = [qrString dataUsingEncoding: NSUTF8StringEncoding];
[aztecFilter setValue:stringData forKey:@"inputMessage"];
// get the generated aztec image
CIImage *aztecCodeImage = aztecFilter.outputImage;
// scale it to match the background gradient image
float scaleX = gradientImage.size.width / aztecCodeImage.extent.size.width;
float scaleY = gradientImage.size.height / aztecCodeImage.extent.size.height;
aztecCodeImage = [[aztecCodeImage imageBySamplingNearest] imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
// convert to UIImage and set the middle image view
UIImage *scaledCodeImage = [UIImage imageWithCIImage:aztecCodeImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
((UIImageView *)sv.arrangedSubviews[1]).image = scaledCodeImage;
// create a blend with mask filter
CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
// set the background image
CIImage *bkgInput = [CIImage imageWithCGImage:[gradientImage CGImage]];
[blendFilter setValue:bkgInput forKey:kCIInputBackgroundImageKey];
// set the mask image
[blendFilter setValue:aztecCodeImage forKey:kCIInputMaskImageKey];
// get the blended CIImage
CIImage *output = [blendFilter outputImage];
// convert to UIImage
UIImage *blendedImage = [UIImage imageWithCIImage:output scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
// set the bottom image view to the result
((UIImageView *)sv.arrangedSubviews[2]).image = blendedImage;
}
which produces:
I am trying to apply dispatch_async to the drawing code I posted at https://stackoverflow.com/questions/34430468/while-profiling-with-instruments-i-see-a-lot-of-cpu-consuming-task-happening-w. I get the error: "No matching function for call to 'dispatch_async'". Since this is a memory-expensive operation, I'm trying to create a queue so the rendering happens in the background and, when the image is ready, hand it back to the main queue, because UI updates have to happen on the main thread. Please guide me on this; I am posting the whole code.
#pragma mark Blurring the image
- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
// Set up output context.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply gaussian blur filter with radius of 30
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"]; //30
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, self.view.frame, cgImage);
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, self.view.frame);
CGContextRestoreGState(outputContext);
dispatch_async(dispatch_get_main_queue(), ^{
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
})
});
// Output image is ready.
}
It throws the error on dispatch_async(dispatch_get_main_queue(), ...), i.e. when I try to get back onto the main thread, since UI work has to happen there. What am I missing?
See this answer to a similar question:
Is this Core Graphics code thread safe?
You start drawing on one thread, then finish it on another thread. That's a ticking time bomb.
In addition, the "return outputImage" performed on the main thread isn't going to do you any good, because there is nobody to receive that return value. You should do all your drawing in the same thread, extract the image, and then call something on the main thread that processes the complete image.
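As a rough sketch of that structure (the wrapper method and completion block here are hypothetical, not from the question), you could keep all the drawing inside one background block and hand the finished image back through a completion handler:
// Sketch: do all drawing on one background queue, then deliver the result
// to the main queue via a completion block instead of a return statement.
- (void)blurImage:(UIImage *)sourceImage completion:(void (^)(UIImage *blurredImage))completion {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
UIImage *result = [self blurWithCoreImage:sourceImage]; // a synchronous version that draws and returns on this thread
dispatch_async(dispatch_get_main_queue(), ^{
if (completion) completion(result); // UI work happens back on the main thread
});
});
}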
I think your code looks good, but the way you are using it may be wrong.
So please try it like below.
Create one method like this:
- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
// Set up output context.
// dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
// dispatch_async(queue, ^{
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply gaussian blur filter with radius of 30
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"]; //30
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, self.view.frame, cgImage);
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, self.view.frame);
CGContextRestoreGState(outputContext);
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
and use this method like below:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
UIImage *img = [self blurWithCoreImage:[UIImage imageNamed:@"imagename.png"]];
dispatch_async(dispatch_get_main_queue(), ^{
[self.view addSubview:[[UIImageView alloc] initWithImage:img]];
});
});
I just tried this for testing and it gave me the proper result, so give it a try.
Let me know if you face any issues. All the best.
I am trying to change a UIImage's saturation during an animation, but it seems that it is too heavy/slow to do it this way:
-(void) changeSaturation:(CGFloat)value{
CIContext * context = [CIContext contextWithOptions:nil];
CIFilter *colorControlsFilter = [CIFilter filterWithName:@"CIColorControls"];
CIImage *image = self.imageView.image.CIImage;
[colorControlsFilter setValue:image forKey:@"inputImage"];
[colorControlsFilter setValue:[NSNumber numberWithFloat:value] forKey:@"inputSaturation"];
CIImage *outputImage = [colorControlsFilter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage
fromRect:[outputImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:cgimg];
self.imageView.image = newImage;
CGImageRelease(cgimg);
}
This method is called every time the view is dragged by the user (as the drag distance changes, the saturation changes as well), which is obviously not the right way to do it, but I would like similar behavior.
I wonder if I can achieve this with a layer on top of the UIImageview.
Can anyone advise me on how to achieve my goal?
1) Calculate the filtered image in an async block:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
UIImage *filteredImage = /* your filter method */;
dispatch_async(dispatch_get_main_queue(), ^{
self.imageView.image = filteredImage;
});
});
2) To change the image smoothly, use Core Animation. Here self.layer is the top CALayer of your view:
- (UIImage*)changeSaturation:(CGFloat)value
{
//Your filter process
}
- (void)changeSaturation:(CGFloat)value duration:(CGFloat)duration
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSUInteger cadreCount = ceil(duration * 60);
CGFloat incrementValue = value / cadreCount;
NSMutableArray * mutArrayAnimationValues = [NSMutableArray new];
for (int cadreIndex = 0; cadreIndex <= cadreCount; cadreIndex++)
{
CGFloat currentValue = incrementValue * cadreIndex;
UIImage * imageForCurrentCadr = [self changeSaturation:currentValue];
[mutArrayAnimationValues addObject:(id)imageForCurrentCadr.CGImage];
}
dispatch_async(dispatch_get_main_queue(), ^{
self.layer.contents = (id)[mutArrayAnimationValues lastObject];
CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
[animation setValues:mutArrayAnimationValues];
[animation setDuration:duration];
[animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut]];
[animation setFillMode:kCAFillModeBoth];
[animation setRemovedOnCompletion:NO];
[self.layer addAnimation:animation forKey:@"imageAnimation"];
});
});
}
It turns out my main issue was that the view being dragged contained a very high-resolution image. My solution above works perfectly fine.
If you change the saturation between 0.0 and 1.0, you can add a black-and-white (grayscale) image above the normal image and change its alpha. That will be the most performant approach.
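A rough sketch of that idea (view and property names are placeholders; it assumes you pre-render a grayscale copy of the image once, up front):
// Grayscale overlay sits directly above the color image view; its alpha
// stands in for (1 - saturation), so no Core Image work per drag event.
self.grayscaleImageView.frame = self.imageView.frame;
self.grayscaleImageView.image = self.grayscaleImage; // pre-rendered black-and-white copy
[self.imageView.superview insertSubview:self.grayscaleImageView aboveSubview:self.imageView];
// later, as the drag distance changes (saturation in 0.0 ... 1.0):
self.grayscaleImageView.alpha = 1.0 - saturation;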
I'm looking at an Apple example that uses Core Image filters to adjust the hue/saturation/brightness of an image. In this case the input image has its properties adjusted, and another image is returned as a result of the filter operation. I'm interested in whether this transition can be animated (a step-by-step transition).
Is there a way for me to programmatically animate a black and white image having color slowly fading in?
I checked Apple's view programming guide, but I don't see anything about animating images or colors.
- (void)hueChanged:(id)sender
{
CGFloat hue = [hueSlider value];
CGFloat saturation = [saturationSlider value];
CGFloat brightness = [brightnessSlider value];
// Update labels
[hueLabel setText:[NSString stringWithFormat:NSLocalizedString(@"Hue: %f", @"Hue label format."), hue]];
[saturationLabel setText:[NSString stringWithFormat:NSLocalizedString(@"Saturation: %f", @"Saturation label format."), saturation]];
[brightnessLabel setText:[NSString stringWithFormat:NSLocalizedString(@"Brightness: %f", @"Brightness label format."), brightness]];
// Apply effects to image
dispatch_async(processingQueue, ^{
if (!self.colorControlsFilter) {
self.colorControlsFilter = [CIFilter filterWithName:@"CIColorControls"];
}
[self.colorControlsFilter setValue:self.baseCIImage forKey:kCIInputImageKey];
[self.colorControlsFilter setValue:@(saturation) forKey:@"inputSaturation"];
[self.colorControlsFilter setValue:@(brightness) forKey:@"inputBrightness"];
CIImage *coreImageOutputImage = [self.colorControlsFilter valueForKey:kCIOutputImageKey];
if (!self.hueAdjustFilter) {
self.hueAdjustFilter = [CIFilter filterWithName:@"CIHueAdjust"];
}
[self.hueAdjustFilter setValue:coreImageOutputImage forKey:kCIInputImageKey];
[self.hueAdjustFilter setValue:@(hue) forKey:@"inputAngle"];
coreImageOutputImage = [self.hueAdjustFilter valueForKey:kCIOutputImageKey];
CGRect rect = CGRectMake(0,0,self.image.size.width,self.image.size.height);
CGImageRef cgImage = [self.context createCGImage:coreImageOutputImage fromRect:rect];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
dispatch_async(dispatch_get_main_queue(), ^{
[imageView setImage:image];
});
});
// imageView.center = self.view.center;
// imageView.frame = self.view.frame;
}
For slow fade animations you can use Core Animation (via UIView block animations) in iOS:
theView.alpha = 0;
[UIView animateWithDuration:1.0
animations:^{
//Apply effects to image here
theView.center = midCenter;
theView.alpha = 1;
}
completion:^(BOOL finished){
[UIView animateWithDuration:1.0
animations:^{
//Apply effects to image here
}
completion:^(BOOL finished){
}];
}];
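Another option, not from the answer above: if you only need the color to fade in over the black-and-white version, a cross-dissolve transition on the image view does the whole job (imageView and colorImage are placeholder names):
// Cross-dissolve from the currently shown (grayscale) image to the colored one.
[UIView transitionWithView:imageView
duration:1.0
options:UIViewAnimationOptionTransitionCrossDissolve
animations:^{
imageView.image = colorImage;
}
completion:nil];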
The following method attempts to apply a Gaussian blur to an image, but it isn't doing anything. Can you tell me what is wrong? If you also know why it's wrong, that would help too. I am trying to learn about CALayer and QuartzCore.
Thanks
-(void)updateFavoriteRecipeImage{
[self.favoriteRecipeImage setImageWithURL:[NSURL URLWithString:self.profileVCModel.favoriteRecipeImageUrl] placeholderImage:[UIImage imageNamed:@"miNoImage"]];
//Set content mode
[self.favoriteRecipeImage setContentMode:UIViewContentModeScaleAspectFill];
self.favoriteRecipeImage.layer.masksToBounds = YES;
//Blur the image
CALayer *blurLayer = [CALayer layer];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setDefaults];
blurLayer.backgroundFilters = [NSArray arrayWithObject:blur];
[self.favoriteRecipeImage.layer addSublayer:blurLayer];
[self.favoriteRecipeImage setAlpha:0];
//Show image using fade
[UIView animateWithDuration:.3 animations:^{
//Load alpha
[self.favoriteRecipeImage setAlpha:1];
[self.favoriteRecipeImageMask setFrame:self.favoriteRecipeImage.frame];
}];
}
The documentation of the backgroundFilters property says this:
Special Considerations
This property is not supported on layers in iOS.
As of iOS 6.1, there is no public API for applying live filters to layers on iOS. You can write code to draw the underlying layers to a CGImage and then apply filters to that image and set it as your layer's background, but doing so is somewhat complex and isn't “live” (it doesn't update automatically if the underlying layers change).
Try something like this:
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"test.png"]];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
[blurFilter setValue:[NSNumber numberWithFloat:10.0f] forKey:@"inputRadius"];
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
self.bluredImageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // avoid leaking the CGImageRef returned by createCGImage
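If you specifically want the blur to reflect whatever favoriteRecipeImage is currently displaying (rather than a bundled file), one sketch of the approach described in the note above is to snapshot the layer into a UIImage first and feed that to the same filter chain:
// Sketch: render the image view's layer into a UIImage, then blur that snapshot.
UIGraphicsBeginImageContextWithOptions(self.favoriteRecipeImage.bounds.size, NO, 0.0);
[self.favoriteRecipeImage.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CIImage *snapshotInput = [[CIImage alloc] initWithImage:snapshot];
// ...then apply CIGaussianBlur to snapshotInput exactly as in the snippet above
// and assign the result back to the image view.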