I am trying to add an image background to a generated Aztec code. So far I can get the Aztec code to generate, but I'm having trouble with the CIBlendWithMask filter, and I'm not sure exactly what I'm doing wrong. I believe the user-selected background as kCIInputBackgroundImageKey is correct, and the Aztec output image as kCIInputImageKey is correct. I think where I'm going wrong is the kCIInputMaskImageKey, but I'm not sure what I need to do. I thought the Aztec output would be a sufficient mask image. Do I need to select the color or something to get the background clipping to the Aztec image?
CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
CIFilter *colorFilter = [CIFilter filterWithName:@"CIFalseColor"];
[aztecFilter setValue:stringData forKey:@"inputMessage"];
[colorFilter setValue:aztecFilter.outputImage forKey:@"background"];
NSData* imageData = [[NSUserDefaults standardUserDefaults] objectForKey:@"usertheme"];
CIImage *image = [UIImage imageWithData:imageData].CIImage;
[colorFilter setValue:[CIColor colorWithCGColor:[[UIColor blackColor] CGColor]] forKey:@"inputColor0"];
[colorFilter setValue:[CIColor colorWithRed:1 green:1 blue:1 alpha:0] forKey:@"inputColor1"];
CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputImageKey];
[blendFilter setValue:image forKey:kCIInputBackgroundImageKey];
[blendFilter setValue:colorFilter.outputImage forKey:kCIInputMaskImageKey];
I'm trying to create something like this, but for Aztec codes instead of QR codes.
You may be over-thinking this a bit...
You can use the CIBlendWithMask filter with only a background image and a mask image.
So, if we start with a gradient image (you may be generating it on-the-fly):
Then generate the Aztec Code image (yellow outline is just to show the frame):
We can use that code image as the mask.
Here is sample code:
- (void)viewDidLoad {
[super viewDidLoad];
self.view.backgroundColor = [UIColor systemYellowColor];
// create a vertical stack view
UIStackView *sv = [UIStackView new];
sv.axis = UILayoutConstraintAxisVertical;
sv.spacing = 8;
sv.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:sv];
// add 3 image views to the stack view
for (int i = 0; i < 3; ++i) {
UIImageView *imgView = [UIImageView new];
[sv addArrangedSubview:imgView];
[imgView.widthAnchor constraintEqualToConstant:240].active = YES;
[imgView.heightAnchor constraintEqualToAnchor:imgView.widthAnchor].active = YES;
}
[sv.centerXAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerXAnchor].active = YES;
[sv.centerYAnchor constraintEqualToAnchor:self.view.safeAreaLayoutGuide.centerYAnchor].active = YES;
// load a gradient image for the background
UIImage *gradientImage = [UIImage imageNamed:@"bkgGradient"];
// put it in the first image view
((UIImageView *)sv.arrangedSubviews[0]).image = gradientImage;
// create aztec filter
CIFilter *aztecFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
// give it some string data
NSString *aztecString = @"My string to encode";
NSData *stringData = [aztecString dataUsingEncoding:NSUTF8StringEncoding];
[aztecFilter setValue:stringData forKey:@"inputMessage"];
// get the generated aztec image
CIImage *aztecCodeImage = aztecFilter.outputImage;
// scale it to match the background gradient image
float scaleX = gradientImage.size.width / aztecCodeImage.extent.size.width;
float scaleY = gradientImage.size.height / aztecCodeImage.extent.size.height;
aztecCodeImage = [[aztecCodeImage imageBySamplingNearest] imageByApplyingTransform:CGAffineTransformMakeScale(scaleX, scaleY)];
// convert to UIImage and set the middle image view
UIImage *scaledCodeImage = [UIImage imageWithCIImage:aztecCodeImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
((UIImageView *)sv.arrangedSubviews[1]).image = scaledCodeImage;
// create a blend with mask filter
CIFilter *blendFilter = [CIFilter filterWithName:#"CIBlendWithMask"];
// set the background image
CIImage *bkgInput = [CIImage imageWithCGImage:[gradientImage CGImage]];
[blendFilter setValue:bkgInput forKey:kCIInputBackgroundImageKey];
// set the mask image
[blendFilter setValue:aztecCodeImage forKey:kCIInputMaskImageKey];
// get the blended CIImage
CIImage *output = [blendFilter outputImage];
// convert to UIImage
UIImage *blendedImage = [UIImage imageWithCIImage:output scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
// set the bottom image view to the result
((UIImageView *)sv.arrangedSubviews[2]).image = blendedImage;
}
which produces:
Related
I need to add a pixelated rectangular layer on a UIImage which can be undone, just like this:
I used this code, but it's not doing what I need:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Is there any built-in filter by Apple for iOS? Please guide me. Thanks
You can use GPUImage's GPUImagePixellateFilter https://github.com/BradLarson/GPUImage/blob/8811da388aed22e04ed54ca9a5a76791eeb40551/framework/Source/GPUImagePixellateFilter.h
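For illustration, a minimal sketch of how that filter might be used (assumptions: GPUImage is added to the project, and the image name and block-size value are placeholders):
#import "GPUImagePixellateFilter.h"
// Pixellate a UIImage with GPUImage (sketch)
UIImage *sourceImage = [UIImage imageNamed:@"photo"]; // placeholder image name
GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
// Block size as a fraction of the image width; larger value = bigger blocks
pixellateFilter.fractionalWidthOfAPixel = 0.05;
UIImage *pixellatedImage = [pixellateFilter imageByFilteringImage:sourceImage];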
You can use the GPUImage framework, but it's a lot better to use iOS's own Core Image filters. Easy coding :)
- (UIImage *)applyCIPixelateFilter:(UIImage*)fromImage withScale:(double)scale
{
/*
Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
Parameters
inputImage: A CIImage object whose display name is Image.
inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
Default value: [150 150]
inputScale: An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
Default value: 8.00
*/
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
CIVector *vector = [CIVector vectorWithX:fromImage.size.width / 2.0f Y:fromImage.size.height / 2.0f];
[filter setDefaults];
[filter setValue:vector forKey:@"inputCenter"];
[filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
[filter setValue:inputImage forKey:@"inputImage"];
CGImageRef cgiimage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:1.0f orientation:fromImage.imageOrientation];
CGImageRelease(cgiimage);
return newImage;
}
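Usage might look like this (a sketch; the image view and the scale value are just example placeholders):
UIImage *pixelated = [self applyCIPixelateFilter:self.imageView.image withScale:20.0];
self.imageView.image = pixelated;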
I'm trying to add a blur effect using a category.
+ (UIImage *)blurImageWithImage:(UIImage*) imageName withView:(UIView*)view {
UIImage *sourceImage = imageName;
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply gaussian blur filter with radius of 10
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
// Set up output context.
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, view.frame, cgImage);
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, view.frame);
CGContextRestoreGState(outputContext);
// Output image is ready.
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
then I call this function inside a UIView like this:
UIImage *image = [UIImage imageNamed:@"xxx"];
UIImageView *page = [[UIImageView alloc] initWithImage:[UIImage blurImageWithImage:image withView:self]];
If I add this function directly in the class it works, but not if I put it in a UIImage category.
I faced the same problem earlier, but thankfully I found the solution.
Please follow these steps. First, make sure your blur image function works fine.
1) In the category, add an instance method instead of a class method, e.g.:
- (UIImage *)blurImageWithImage:(UIImage*) imageName withView:(UIView*)view
2) Import the category in your VC.
3) Use the category, e.g.:
UIImage *image = [UIImage imageNamed:@"xxx"];
UIImageView *page = [[UIImageView alloc] initWithImage:[image blurImageWithImage:image withView:self]];
Let me know if this solution works for you.
Turns out the problem was that I forgot to add the "-" when doing the context translate.
So what I ended up doing is creating a class method.
Interface:
+ (UIImage *)blurImageWithImageName:(NSString*) imageName withView:(UIView*)view;
Implementation :
+ (UIImage *)blurImageWithImageName:(NSString*) imageName withView:(UIView*)view {
UIImage *sourceImage = [UIImage imageNamed:imageName];
CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
// Apply Affine-Clamp filter to stretch the image so that it does not
// look shrunken when gaussian blur is applied
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
// Apply gaussian blur filter with radius of 10
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:@10 forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];
// Set up output context.
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
// Invert image coordinates
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -view.frame.size.height);
// Draw base image.
CGContextDrawImage(outputContext, view.frame, cgImage);
// Apply white tint
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
CGContextFillRect(outputContext, view.frame);
CGContextRestoreGState(outputContext);
// Output image is ready.
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
I'm merging two images together using the CIFilter @"CIDarkenBlendMode". It works fine except for one thing: I want the images to be exactly aligned on top of each other regardless of the image size, but I am not able to achieve this. Do I have to create my own filter?
This is what I get:
This is what I want:
My merge-code:
-(void)mergeImagesWithCIImage:(UIImage*)image
{
CIImage *topImage = [[CIImage alloc]initWithImage:image];
CIImage *scaledImage = [self scaleImageWithCIImage:topImage];
CIImage *backgroundImage = [[CIImage alloc]initWithImage:self.vImage.image];
CIFilter *darkenFilter = [CIFilter filterWithName:@"CIDarkenBlendMode" keysAndValues:kCIInputImageKey, scaledImage,
@"inputBackgroundImage", backgroundImage, nil];
CIImage *filterOutputImage = darkenFilter.outputImage;
CIContext *ctx = [CIContext contextWithOptions:nil];
CGImageRef createdImage = [ctx createCGImage:filterOutputImage fromRect:filterOutputImage.extent];
UIImage *outputImage = [UIImage imageWithCGImage:createdImage];
CGImageRelease(createdImage);
createdImage = nil;
self.vImage.image = outputImage;
}
Instead of using a CIFilter I used:
[_image drawInRect:CGRectMake(centerX, centerY, _image.size.width, _image.size.height) blendMode:kCGBlendModeDarken alpha:0.8];
and centered the images.
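For completeness, a sketch of what that approach could look like as a whole (assuming, as in the question, that _image is the top image and self.vImage.image is the background; the method name is hypothetical):
- (void)mergeImagesWithDrawInRect:(UIImage *)image
{
    UIImage *background = self.vImage.image;
    UIGraphicsBeginImageContextWithOptions(background.size, NO, 0.0);
    // Draw the background at its natural size
    [background drawInRect:CGRectMake(0, 0, background.size.width, background.size.height)];
    // Center the top image over the background
    CGFloat centerX = (background.size.width - image.size.width) / 2.0;
    CGFloat centerY = (background.size.height - image.size.height) / 2.0;
    [image drawInRect:CGRectMake(centerX, centerY, image.size.width, image.size.height) blendMode:kCGBlendModeDarken alpha:0.8];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.vImage.image = merged;
}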
I'm using this solution to mask my UIImage with some alpha drawing:
masking an UIImage
The problem is that later I want to apply some CIFilters. However, when I change a value of the filter, my alpha gets lost from the UIImage. Do I have to re-apply the alpha channel to the output image each time after modifying the CIFilter? That would surely make the process much slower.
Samples of code: (each new paragraph is in another method)
// set the image
_image = [incomeImage imageWithMask:_mask]; // imageWithMask from method from link
[_myView.imageView setImage:_image];
// calculate ciimages
_inputCIImage = [[CIImage alloc] initWithCGImage:_image.CGImage options:nil];
_myView.imageView.image = _image;
_currentCIImage = _inputCIImage;
// change value in filter
[filter setValue:@(0.2f) forKey:@"someKey"];
[filter setValue:_inputCIImage forKey:kCIInputImageKey];
_currentCIImage = [filter outputImage];
CGImageRef img = [_context createCGImage:_currentCIImage fromRect:[_currentCIImage extent]];
UIImage *newImage = [UIImage imageWithCGImage:img];
You could do this using only CIFilters. Instead of using imageWithMask, you can use the CIBlendWithMask CIFilter. See the Apple CIFilter Reference.
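A minimal sketch of that idea, reusing the names from the question (incomeImage, _mask, and filter), and assuming the mask is available as a CGImage:
// Build the masked image entirely in Core Image, so the alpha survives later filters
CIImage *inputCIImage = [[CIImage alloc] initWithCGImage:incomeImage.CGImage options:nil];
CIImage *maskCIImage = [[CIImage alloc] initWithCGImage:_mask.CGImage options:nil];
// A clear background leaves the masked-out areas transparent
CIImage *clearBackground = [[CIImage imageWithColor:[CIColor colorWithRed:0 green:0 blue:0 alpha:0]] imageByCroppingToRect:inputCIImage.extent];
CIFilter *maskFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
[maskFilter setValue:inputCIImage forKey:kCIInputImageKey];
[maskFilter setValue:maskCIImage forKey:kCIInputMaskImageKey];
[maskFilter setValue:clearBackground forKey:kCIInputBackgroundImageKey];
CIImage *maskedCIImage = maskFilter.outputImage;
// Feed the masked CIImage into further filters instead of a flattened UIImage
[filter setValue:maskedCIImage forKey:kCIInputImageKey];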
The following method attempts to apply a Gaussian blur to an image, but it isn't doing anything. Can you please tell me what is wrong? If you also know why it's wrong, that would help too. I am trying to learn about CALayers and QuartzCore.
Thanks
-(void)updateFavoriteRecipeImage{
[self.favoriteRecipeImage setImageWithURL:[NSURL URLWithString:self.profileVCModel.favoriteRecipeImageUrl] placeholderImage:[UIImage imageNamed:@"miNoImage"]];
//Set content mode
[self.favoriteRecipeImage setContentMode:UIViewContentModeScaleAspectFill];
self.favoriteRecipeImage.layer.masksToBounds = YES;
//Blur the image
CALayer *blurLayer = [CALayer layer];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setDefaults];
blurLayer.backgroundFilters = [NSArray arrayWithObject:blur];
[self.favoriteRecipeImage.layer addSublayer:blurLayer];
[self.favoriteRecipeImage setAlpha:0];
//Show image using fade
[UIView animateWithDuration:.3 animations:^{
//Load alpha
[self.favoriteRecipeImage setAlpha:1];
[self.favoriteRecipeImageMask setFrame:self.favoriteRecipeImage.frame];
}];
}
The documentation of the backgroundFilters property says this:
Special Considerations
This property is not supported on layers in iOS.
As of iOS 6.1, there is no public API for applying live filters to layers on iOS. You can write code to draw the underlying layers to a CGImage and then apply filters to that image and set it as your layer's background, but doing so is somewhat complex and isn't “live” (it doesn't update automatically if the underlying layers change).
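If you do want to take that (non-live) route, a rough sketch could look like this, reusing the names from the question; you'd have to re-run it whenever the underlying content changes:
// Render the image view's layer into a bitmap snapshot
UIGraphicsBeginImageContextWithOptions(self.favoriteRecipeImage.bounds.size, NO, 0.0);
[self.favoriteRecipeImage.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Blur the snapshot with Core Image
CIImage *ciInput = [[CIImage alloc] initWithImage:snapshot];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:ciInput forKey:kCIInputImageKey];
[blur setValue:@10 forKey:@"inputRadius"];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef blurredRef = [ciContext createCGImage:blur.outputImage fromRect:ciInput.extent];
self.favoriteRecipeImage.image = [UIImage imageWithCGImage:blurredRef];
CGImageRelease(blurredRef);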
Try something like this:
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"test.png"]];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
[blurFilter setValue:[NSNumber numberWithFloat:10.0f] forKey:@"inputRadius"];
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CIContext *context = [CIContext contextWithOptions:nil];
// Create the CGImage explicitly so it can be released (avoids a leak)
CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
self.bluredImageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);