Applying border to image shape - iOS

In my application I have various images of different shapes, like a tree or a cloud (a sample image is attached).
I want to add a border to those shapes programmatically. For example, if the image is of a tree, I need to highlight the tree's shape.
I cannot use CALayer, as that would apply the border to the UIImageView rather than the shape itself.
Can anyone guide me on how to achieve this?

This can be achieved by using a series of CIFilters (see the images corresponding to the steps below). In my example, the base image is a color image with a transparent background and the mask is black and white.
1. Use CIEdges to detect edges from the mask.
2. Make the edges thicker by applying a disk maximum filter (CIMorphologyMaximum).
3. Convert the border image from black-and-white to transparent-and-white with CIMaskToAlpha.
4. Overlay the original image on top of the borders.
Full code below:
let base = CIImage(cgImage: baseImage.cgImage!)
let mask = CIImage(cgImage: maskImage.cgImage!)
// 1
let edges = mask.applyingFilter("CIEdges", parameters: [
    kCIInputIntensityKey: 1.0
])
// 2
let borderWidth = 0.02 * min(baseImage.size.width, baseImage.size.height)
let wideEdges = edges.applyingFilter("CIMorphologyMaximum", parameters: [
    kCIInputRadiusKey: borderWidth
])
// 3
let background = wideEdges.applyingFilter("CIMaskToAlpha")
// 4
let composited = base.composited(over: background)
// Convert back to UIImage
let context = CIContext(options: nil)
let cgImageRef = context.createCGImage(composited, from: composited.extent)!
return UIImage(cgImage: cgImageRef)

A simple option is to draw the image twice: first with a small scale applied to grow the image a little (that enlarged copy serves as the border), then the original on top. Use masking if the images aren't transparent (but are black and white).
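For illustration, here is a minimal Swift sketch of that idea (not from the original answer), assuming the image has a transparent background. The helper name, tint color, and inset are illustrative, withTintColor requires iOS 13+, and the original is inset slightly instead of growing the silhouette so nothing gets clipped at the canvas edges:
import UIKit

// Sketch only: draw a tinted silhouette of the image at full size, then the
// original slightly inset on top, so the silhouette shows through as a border.
func outlinedImage(from image: UIImage,
                   borderColor: UIColor = .red,
                   inset: CGFloat = 8) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        let full = CGRect(origin: .zero, size: image.size)
        // Silhouette in the border color (every opaque pixel gets the tint)
        image.withRenderingMode(.alwaysTemplate)
             .withTintColor(borderColor)
             .draw(in: full)
        // Original image, slightly smaller, drawn on top
        image.draw(in: full.insetBy(dx: inset, dy: inset))
    }
}
Note that a border produced this way is not perfectly uniform around an intricate shape, since it comes from scaling rather than from a true outline.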

I just did the same thing but with a white border. I created a mask with a white body and a 4px black stroke around the outside to give me the uniform border I want around my target image. The following takes advantage of Core Image filters to mask off a solid-color background (to be used as the border) and then to mask off and composite the target image.
// The two-tone mask image
UIImage *maskImage = [UIImage imageNamed: @"Mask"];
// Create a filler image of whatever color we want the border to be (in my case white)
UIGraphicsBeginImageContextWithOptions(maskImage.size, NO, maskImage.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, UIColor.whiteColor.CGColor);
CGContextFillRect(context, CGRectMake(0.f, 0.f, maskImage.size.width, maskImage.size.height));
UIImage *whiteImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Use Core Image to mask the colored background to the mask (the entire opaque region of the mask)
CIContext *ciContext = [CIContext contextWithOptions: nil];
CIFilter *filter = [CIFilter filterWithName: @"CIBlendWithAlphaMask"];
[filter setValue: [CIImage imageWithCGImage: whiteImage.CGImage]
          forKey: kCIInputImageKey];
[filter setValue: [CIImage imageWithCGImage: maskImage.CGImage]
          forKey: kCIInputMaskImageKey];
CIImage *whiteBackground = filter.outputImage;
// Scale the target image to the size of the mask (accounting for image scale)
// ** Uses NYXImageKit
image = [image scaleToSize: CGSizeMake(maskImage.size.width * maskImage.scale, maskImage.size.height * maskImage.scale)
                 usingMode: NYXResizeModeAspectFill];
// Finally use Core Image to create our image, using the masked white from above for our border
// and the inner (white) area of our mask image to mask the target image before compositing
filter = [CIFilter filterWithName: @"CIBlendWithMask"];
[filter setValue: [CIImage imageWithCGImage: image.CGImage]
          forKey: kCIInputImageKey];
[filter setValue: whiteBackground
          forKey: kCIInputBackgroundImageKey];
[filter setValue: [CIImage imageWithCGImage: maskImage.CGImage]
          forKey: kCIInputMaskImageKey];
image = [UIImage imageWithCGImage: [ciContext createCGImage: filter.outputImage
                                                   fromRect: [filter.outputImage extent]]];

You can apply a border to objects present in the image using the OpenCV framework.
Check this link: it detects the edges of an image and applies a border to them. I hope this gives you exactly the idea you want.
https://github.com/BloodAxe/OpenCV-Tutorial

Related

colorWithPattern for Core Image color

I am using CIAztecCodeGenerator to generate an Aztec code.
I'm trying to set a pattern instead of a solid color for its foreground color; however, it renders as blank/white. I was wondering if anyone knew what I am doing wrong.
[colorFilter setValue:[CIColor colorWithCGColor:[[UIColor colorWithPatternImage:image] CGColor]] forKey:@"inputColor0"];
It's a bit more complicated, but you can use a custom pattern by combining and blending the images in a certain way:
import CoreImage.CIFilterBuiltins // needed for using the type-safe filter interface
// ...
let patternImage = UIImage(named: "<image_name>")!
var patternInput = CIImage(cgImage: patternImage.cgImage!)
// potentially scale the pattern image
patternInput = patternInput.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
let codeFilter = CIFilter.aztecCodeGenerator()
codeFilter.message = "<message>".data(using: .utf8)!
var codeImage = codeFilter.outputImage!
// invert code so the actual code part is white
let colorInvertFilter = CIFilter.colorInvert()
colorInvertFilter.inputImage = codeImage
codeImage = colorInvertFilter.outputImage!
// potentially scale the barcode (using nearest sampling to retain sharp edges)
codeImage = codeImage.samplingNearest().transformed(by: CGAffineTransform(scaleX: 50, y: 50))
let blendFilter = CIFilter.blendWithMask()
blendFilter.inputImage = patternInput
// using white color as background here, but can be any (or transparent when alpha = 0)
blendFilter.backgroundImage = CIImage(color: CIColor.white).cropped(to: codeImage.extent)
blendFilter.maskImage = codeImage
let output = blendFilter.outputImage!
Objective-C version:
// The image containing the pattern you want to show over the Aztec code
CIImage *patternImage = [CIImage imageWithData:imageData];
// Potentially scale the pattern image, if necessary
patternImage = [patternImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
// Generate the Aztec code image
CIFilter *qrFilter = [CIFilter filterWithName:@"CIAztecCodeGenerator"];
[qrFilter setValue:stringData forKey:@"inputMessage"];
CIImage *codeImage = qrFilter.outputImage;
// Invert the code so the actual code part is white (used for masking below)
codeImage = [codeImage imageByApplyingFilter:@"CIColorInvert"];
// Potentially scale the Aztec code (using nearest sampling to retain sharp edges)
codeImage = [[codeImage imageBySamplingNearest] imageByApplyingTransform:CGAffineTransformMakeScale(50, 50)];
// The background color for your Aztec code; basically a solid-color image of the same size as the code
CIImage *background = [[CIImage imageWithColor:[CIColor whiteColor]] imageByCroppingToRect:codeImage.extent];

CIFilter *blendFilter = [CIFilter filterWithName:@"CIBlendWithMask"];
[blendFilter setValue:patternImage forKey:@"inputImage"]; // the pattern image is in the foreground
[blendFilter setValue:background forKey:@"inputBackgroundImage"]; // solid color behind the code, but could be any color or image
[blendFilter setValue:codeImage forKey:@"inputMaskImage"]; // use the Aztec code as a mask for the pattern image over the background
CIImage *output = blendFilter.outputImage;
Note that the numbers in the two scaling steps depend on how large you want to display the code and how the pattern image should be scaled above the code.
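For instance, in the Swift version above, the factor can be derived from the size at which the code will be displayed rather than hard-coded (an illustrative sketch; targetSize is an assumed value, not part of the original answer):
let targetSize = CGSize(width: 300, height: 300) // assumed display size in pixels
// The generator's output is tiny (roughly one point per module), so scale it up to the
// target size, keeping nearest-neighbor sampling so the edges stay sharp.
let scale = targetSize.width / codeImage.extent.width
codeImage = codeImage.samplingNearest()
    .transformed(by: CGAffineTransform(scaleX: scale, y: scale))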

White pixels around iOS Image using CIFilter

I add a picture frame (an image with a transparent background) around an existing UIImage and save it all as one image. On the simulator, everything looks great. However, on the device, it adds some white pixels around some areas of the frame's image. Here is my code:
- (void)applyFilter {
    NSLog(@"Running");
    UIImage *borderImage = [UIImage imageNamed:@"IMG_8055.PNG"];
    NSData *dataFromImage = UIImageJPEGRepresentation(self.imgView.image, 1);
    CIImage *beginImage = [CIImage imageWithData:dataFromImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *border = [CIImage imageWithData:UIImagePNGRepresentation(borderImage)];
    border = [border imageByApplyingTransform:CGAffineTransformMakeScale(beginImage.extent.size.width/border.extent.size.width, beginImage.extent.size.height/border.extent.size.height)];
    CIFilter *filter = [CIFilter filterWithName:@"CISourceOverCompositing"]; // @"CISoftLightBlendMode"
    [filter setDefaults];
    [filter setValue:border forKey:@"inputImage"];
    [filter setValue:beginImage forKey:@"inputBackgroundImage"];
    CIImage *outputImage = [filter valueForKey:@"outputImage"];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    self.imgView.image = newImg;
}
Here is the resulting image:
The frame image used in the picture looks like this:
Here is a screenshot of the frame image in Photoshop, showing that those pixels are not present in the PNG.
The issue is that if you look at your image, the pixels immediately adjacent to the musical notes are apparently not transparent. And notice that the white pixels appearing in the final image aren't just the occasional stray pixel; they appear in square blocks.
This sort of squared-off pixel noise is a telltale sign of JPEG artifacts. It's hard to say what's causing it, because the image you added to this question is a JPEG (which doesn't support transparency). I assume you must have a PNG version of this backdrop? You might have to share that with us to confirm this diagnosis.
But the bottom line is that you need to carefully examine the original image and the transparency of those pixels that appear as white noise. As you create and manipulate these images, avoid JPEG formats, because JPEG loses transparency information and introduces compression artifacts; PNG files are generally safer. See the illustrative snippet below.
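As a small illustration of that advice (a Swift sketch, not taken from the question's code; the image names are assumptions): build the CIImage straight from the CGImage instead of re-encoding through JPEG data, and use PNG if you do need a data representation, so transparency survives.
import UIKit
import CoreImage

// Assumed inputs: the photo being framed and the PNG frame image.
let photo = UIImage(named: "photo")!
let frame = UIImage(named: "IMG_8055")!

// No lossy JPEG round trip: wrap the existing bitmaps directly.
let background = CIImage(cgImage: photo.cgImage!)
let border = CIImage(cgImage: frame.cgImage!)

// If an intermediate data representation is really needed, prefer PNG so alpha is preserved.
let frameData = frame.pngData()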

How can you apply a filter on a UIImage as if it is sprayed on a brick wall?

I want to blend an image with a background image, as if it were sprayed on a wall. How can I obtain a realistic blend? I have tried adjusting alpha, but it doesn't give good results. I am quite new to this Core Image stuff. Please help.
Something similar to this:
http://designshack.net/wp-content/uploads/texturetricks-14.jpg
I googled some but had no luck; I don't even know exactly what I am looking for, or how to search for it.
Here is how to blend two images together.
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
UIImage *image = [UIImage imageNamed:@"top.png"];
CGSize newSize = CGSizeMake(width, height); // your desired output size
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This kind of works. I found a smiley face image - white on black - and a brick background.
The first job is to create CIImage versions of the pair. I pass the smiley face through a CIMaskToAlpha filter to make the black transparent:
let brick = CIImage(image: UIImage(named: "brick.jpg")!)!
let smiley = CIImage(image: UIImage(named: "smiley.jpg")!)!
    .imageByApplyingFilter("CIMaskToAlpha", withInputParameters: nil)
Then composite the pair together with CISubtractBlendMode:
let composite = brick
    .imageByApplyingFilter("CISubtractBlendMode",
        withInputParameters: [kCIInputBackgroundImageKey: smiley])
The final result is almost there:
The subtract blend will only really work with white artwork.
Here's another approach: using CISoftLightBlendMode for the blend, bumping up the gamma to brighten the effect and then compositing that over the original brickwork using the smiley as a mask:
let composite = smiley
    .imageByApplyingFilter("CISoftLightBlendMode",
        withInputParameters: [kCIInputBackgroundImageKey: brick])
    .imageByApplyingFilter("CIGammaAdjust",
        withInputParameters: ["inputPower": 0.5])
    .imageByApplyingFilter("CIBlendWithMask",
        withInputParameters: [kCIInputBackgroundImageKey: brick, kCIInputMaskImageKey: smiley])
Which gives:
This approach is nice because you can control the paint whiteness with the gamma power.
Simon

How to perform image processing in iOS

I have an image named myImage.jpg
I have to perform the following operations on the image and put the final result in an imageView in iOS. The requirements are:
1. Resize to 262%.
2. Duplicate the layer.
3. Flip it vertically.
4. Align the duplicate layer to the bottom of the first layer.
5. Apply a Gaussian blur (value 9) to the duplicate layer.
6. Move the blurred (duplicated) layer 47px upward.
7. Add a layer mask to the blurred layer.
8. Apply a gradient from black to white to the mask layer.
How can I do these operations on the image? I have done the vertical flip using
- (UIImage *)flipImage:(UIImage *)image {
    UIImage *flippedImage = [UIImage imageWithCGImage:image.CGImage scale:image.scale orientation:UIImageOrientationDownMirrored];
    return flippedImage;
}
and the blur with the following code:
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:3.0f] forKey:@"inputRadius"];
How do I create the mask and gradient, and how do I combine these images?
I solved the problem by generating separate images for the original image, the flipped blurred image, and the top and bottom gradients, then combining them in a single context created with UIGraphicsBeginImageContext(size). A rough sketch of that kind of compositing is below.
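Here is an illustrative Swift sketch of the compositing (not the original poster's code; the helper name, the reflection height, and the 0.6 starting alpha are assumptions, and the blur step is left to a separate CIGaussianBlur pass if desired):
import UIKit

// Sketch: original image on top, a vertically flipped copy below it,
// faded out with a vertical alpha gradient (the "layer mask" step).
func imageWithReflection(_ image: UIImage, reflectionHeight: CGFloat) -> UIImage {
    let size = CGSize(width: image.size.width,
                      height: image.size.height + reflectionHeight)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        let ctx = context.cgContext

        // 1. Original image in the top portion.
        image.draw(at: .zero)

        // 2. Flipped copy (blur it beforehand if you need the blurred look).
        let flipped = UIImage(cgImage: image.cgImage!,
                              scale: image.scale,
                              orientation: .downMirrored)

        // 3. Draw the flipped copy into the bottom strip, then fade it with a gradient
        //    using the destination-in blend mode (alpha goes from 0.6 down to 0).
        let reflectionRect = CGRect(x: 0, y: image.size.height,
                                    width: image.size.width, height: reflectionHeight)
        ctx.saveGState()
        ctx.clip(to: reflectionRect)
        flipped.draw(at: CGPoint(x: 0, y: image.size.height))
        ctx.setBlendMode(.destinationIn)
        let gradient = CGGradient(colorsSpace: nil,
                                  colors: [UIColor.black.withAlphaComponent(0.6).cgColor,
                                           UIColor.clear.cgColor] as CFArray,
                                  locations: [0, 1])!
        ctx.drawLinearGradient(gradient,
                               start: CGPoint(x: 0, y: reflectionRect.minY),
                               end: CGPoint(x: 0, y: reflectionRect.maxY),
                               options: [])
        ctx.restoreGState()
    }
}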

I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

The code is as below:
CIImage *imageToBlur = [CIImage imageWithCGImage: self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName: @"CIGaussianBlur" keysAndValues: kCIInputImageKey, imageToBlur, @"inputRadius", [NSNumber numberWithFloat: 10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage: outputImage];
For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).
Any advice is appreciated.
This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So when you apply the blur filter to it, Core Image calculates that, to show every last pixel of the blur, it would need a larger canvas. That estimated size needed to draw the entire image is called the "extent". In essence, every pixel is getting "fatter", which means the final extent will be bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine.
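If the goal is simply to get the blurred result back at the original size, one common pattern (an illustrative Swift sketch, with sourceImage standing in for your UIImage) is to clamp the input before blurring and crop the output back to the input's extent:
import UIKit
import CoreImage

let input = CIImage(cgImage: sourceImage.cgImage!) // sourceImage: your original UIImage

// Clamping extends the edge pixels outward so the blur has data to sample beyond the
// borders (avoiding dark, faded edges); cropping then trims the result to the input size.
let blurred = input.clampedToExtent()
    .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10.0])
    .cropped(to: input.extent)

let context = CIContext()
let cgImage = context.createCGImage(blurred, from: blurred.extent)!
let result = UIImage(cgImage: cgImage)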
