When merging two images, CISourceOverCompositing loses alpha for the watermark - iOS

I am merging two images using the Core Image filter CISourceOverCompositing.
The issue is that the alpha/opacity of the watermark/foreground image is lost if its applied opacity is less than one.
I applied the alpha value like this:
CGFloat rgba[4] = {0.0, 0.0, 0.0, alphaVal};
CIFilter *colorMatrix = [CIFilter filterWithName:@"CIColorMatrix"];
I think the issue is that I need to adjust the alpha of the CIImage through some calculation, since the CIImage working space is not the same as UIImage's.
I want that type of calculation for the CIImage's RGBA values, as in:
Core Image filter CISourceOverCompositing not appearing as expected with alpha overlay
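For reference, a minimal sketch of how that fade could be expressed before compositing, assuming illustrative names watermarkCIImage and backgroundCIImage (Core Image works with premultiplied alpha, so all four channels are scaled by alphaVal):
CIFilter *colorMatrix = [CIFilter filterWithName:@"CIColorMatrix"];
[colorMatrix setValue:watermarkCIImage forKey:kCIInputImageKey];
// Scale R, G, B and A together so the premultiplied image stays consistent.
[colorMatrix setValue:[CIVector vectorWithX:alphaVal Y:0 Z:0 W:0] forKey:@"inputRVector"];
[colorMatrix setValue:[CIVector vectorWithX:0 Y:alphaVal Z:0 W:0] forKey:@"inputGVector"];
[colorMatrix setValue:[CIVector vectorWithX:0 Y:0 Z:alphaVal W:0] forKey:@"inputBVector"];
[colorMatrix setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:alphaVal] forKey:@"inputAVector"];
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:colorMatrix.outputImage forKey:kCIInputImageKey];
[composite setValue:backgroundCIImage forKey:kCIInputBackgroundImageKey];
CIImage *merged = composite.outputImage;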

Related

CIImage (iOS): Adding a 3x3 convolution after a monochrome filter somehow restores color

I'm converting a CIImage to monochrome, clipping it with CICrop,
and running a Sobel kernel to detect edges; the #if section at the bottom is
used to display the result.
CIImage *ci = [[CIImage alloc] initWithCGImage:uiImage.CGImage];
CIImage *gray = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:
                 @"inputImage", ci,
                 @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
                 nil].outputImage;
CGRect rect = [ci extent];
rect.origin = CGPointZero;
CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width * 0.2, rect.size.height);
CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width * 0.2 W:rect.size.height];
CIImage *left = [gray imageByCroppingToRect:cropRectLeft];
CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
[cropFilter setValue:left forKey:@"inputImage"];
[cropFilter setValue:cropRect forKey:@"inputRectangle"];
// The Sobel convolution will produce an image that is 0.5,0.5,0.5,0.5 wherever the image is flat.
// On edges the image will contain values that deviate from that based on the strength and
// direction of the edge.
const double g = 1.;
const CGFloat weights[] = { 1*g, 0, -1*g,
                            2*g, 0, -2*g,
                            1*g, 0, -1*g };
left = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
        @"inputImage", cropFilter.outputImage,
        @"inputWeights", [CIVector vectorWithValues:weights count:9],
        @"inputBias", @0.5,
        nil].outputImage;
#define VISUALHELP 1
#if VISUALHELP
CGImageRef imageRefLeft = [gcicontext createCGImage:left fromRect:cropRectLeft];
CGContextDrawImage(context, cropRectLeft, imageRefLeft);
CGImageRelease(imageRefLeft);
#endif
Now, whenever the 3x3 convolution is not part of the CIImage pipeline,
the portion of the image I run edge detection on shows up gray,
but whenever the CIConvolution3X3 step is part of the processing pipeline
the colors magically come back. This happens no matter
whether I use CIColorMonochrome or CIPhotoEffectMono beforehand to remove the color.
Any ideas how to keep the color out all the way to the bottom of the pipeline?
Thanks.
UPDATE: Not surprisingly, running a crude custom monochrome kernel such as this one
kernel vec4 gray(sampler image)
{
    vec4 s = sample(image, samplerCoord(image));
    float r = (s.r * .299 + s.g * .587 + s.b * 0.114) * s.a;
    s = vec4(r, r, r, 1);
    return s;
}
instead of using the standard mono filters from Apple results in exactly the same
issue, with the color coming back when the 3x3 convolution is part of my Core Image pipeline.
The issue is that Core Image convolution operations (e.g. CIConvolution3X3, CIConvolution5X5, and CIGaussianBlur) operate on all four channels of the input image. This means that, in your code example, the resulting alpha channel will be 0.5 where you probably want it to be 1.0. Try adding a simple kernel after the convolution to set the alpha back to 1.
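A minimal sketch of such an alpha-restoring kernel, using the same CIKernel string syntax as the custom kernel above (variable names are illustrative):
NSString *kernelSrc =
    @"kernel vec4 opaque(sampler image) {"
     "    vec4 p = sample(image, samplerCoord(image));"
     "    return vec4(p.rgb, 1.0);"
     "}";
CIKernel *opaqueKernel = [CIKernel kernelWithString:kernelSrc];
// Apply after the convolution; the ROI mapping is 1:1, no extra pixels are read.
left = [opaqueKernel applyWithExtent:left.extent
                         roiCallback:^CGRect(int index, CGRect destRect) { return destRect; }
                           arguments:@[left]];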
To follow up: I gave up on Core Image for this task. It seems that using two instances of CIFilter or CIKernel causes a conflict. Something in the Core Image
internals seems to manipulate GLES state incorrectly, so reverse engineering what went wrong ends up costlier than using something other than Core Image (and custom CI filters only work on iOS 8 anyway).
GPUImage seems less buggy and easier to service/debug (no affiliation on my part).

Lanczos scale not working when scaleKey is greater than some value

I have this code:
CIImage *input_ciimage = [CIImage imageWithCGImage:self.CGImage];
CIImage *output_ciimage =
    [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
      kCIInputImageKey, input_ciimage,
      kCIInputScaleKey, [NSNumber numberWithFloat:0.72], // [NSNumber numberWithFloat:800.0 / self.size.width],
      nil] outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef output_cgimage = [context createCGImage:output_ciimage
fromRect:[output_ciimage extent]];
UIImage *output_uiimage;
output_uiimage = [UIImage imageWithCGImage:output_cgimage
scale:1.0 orientation:self.imageOrientation];
CGImageRelease(output_cgimage);
return output_uiimage;
So, when scaleKey is greater than some value, output_uiimage is a black image.
In my case, if the value of kCIInputScaleKey is greater than 0.52 the result is a black image. When I rotate the image by 90 degrees I get the same result, but the threshold is 0.72 (not 0.52).
What's wrong: is it the library, or a mistake in my code?
I have an iPhone 4, iOS 7.1.2, and Xcode 6.0, if that matters.
This is what Apple said:
This scenario exposes a bug in Core Image. The bug occurs when rendering requires an intermediate buffer that has a dimension greater than the GPU texture limits (4096) AND the input image fits into these limits. This happens with any filter that is performing a convolution (blur, lanczos) on an input image that has width or height close to the GL texture limit.
Note: the render is successful if one of the dimensions of the input image is increased to 4097.
Replacing CILanczosScaleTransform with CIAffineTransform (lower quality) or resizing the image with CG are possible workarounds for the provided sample code.
I've updated Bug report after request from Apple's engineers. They answer:
We believe that the issue is with the Core Image Lanczos filter that
occurs at certain downsample scale factors. We hope to fix this issue
in the future.
The filter should work well with downsample factors that are powers of 2 (i.e.
1/2, 1/4, 1/8). So, we would recommend limiting your downsample to
these values and then using AffineTransform to scale up or down
further if required.
We are now closing this bug report.
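A minimal sketch of that recommendation, reusing input_ciimage from the question (the 0.72 target is just the example value, and the factor splitting below is an assumption on my part):
// Lanczos down by the nearest power-of-two factor, then finish with an affine transform.
CGFloat targetScale  = 0.72;
CGFloat lanczosScale = pow(2.0, floor(log2(targetScale)));  // 0.5 for a 0.72 target
CGFloat remainder    = targetScale / lanczosScale;          // 1.44
CIImage *halfSize = [[CIFilter filterWithName:@"CILanczosScaleTransform" keysAndValues:
                      kCIInputImageKey, input_ciimage,
                      kCIInputScaleKey, @(lanczosScale),
                      nil] outputImage];
CIImage *scaled = [halfSize imageByApplyingTransform:
                   CGAffineTransformMakeScale(remainder, remainder)];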

How to perform image processing in iOS

I have an image named myImage.jpg.
I have to apply the following operations to the image and put the final result in an image view in iOS. The requirements are:
Resize to 262%.
Duplicate the layer.
Flip the duplicate vertically.
Align the duplicate layer to the bottom of the first layer.
Apply a Gaussian blur (value 9) to the duplicate layer.
Move the blurred (duplicated) layer 47 px upward.
Add a layer mask to the blurred layer.
Apply a gradient to the mask layer from black to white.
How do I perform these operations on the image? I have done the vertical flip using
- (UIImage *)flipImage:(UIImage *)image {
    UIImage *flippedImage = [UIImage imageWithCGImage:image.CGImage scale:image.scale orientation:UIImageOrientationDownMirrored];
    return flippedImage;
}
and the blur with the following code:
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:3.0f] forKey:@"inputRadius"];
How do I create the mask and the gradient, and how do I combine these images?
I solved the problem by generating separate images for the original image, the flipped blurred image, and the top and bottom gradients, and then combining them using the
UIGraphicsBeginImageContext(size)
method.
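A minimal sketch of that combining step (the variable names, sizes, and the 47-point offset are illustrative; it assumes the pieces were already rendered as UIImages):
// Draw the pre-rendered pieces into one context and grab the result.
CGSize finalSize = CGSizeMake(originalImage.size.width,
                              originalImage.size.height * 2.0 - 47.0);
UIGraphicsBeginImageContextWithOptions(finalSize, NO, 0.0);
// Original image at the top.
[originalImage drawAtPoint:CGPointZero];
// Flipped, blurred copy below it, pulled 47 points upward.
[flippedBlurredImage drawAtPoint:CGPointMake(0, originalImage.size.height - 47.0)];
// Gradient image drawn over the blurred region.
[gradientImage drawAtPoint:CGPointMake(0, originalImage.size.height - 47.0)];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();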

Applying border to image shape

In my application I have various images of different shapes, like a tree or a cloud (a sample image is attached).
I want to add a border to those shapes programmatically. For example, if the image is of a tree, I need to highlight the tree's shape.
I cannot use CALayer, as that would apply the border to the UIImageView.
Can anyone guide me on how to achieve this?
This can be achieved by using a series of CIFilters. See the images corresponding to the steps below. In my example the base image is a color image with a transparent background and the mask is black and white.
1. Use CIEdges to detect edges from the mask.
2. Make the edges thicker by applying a disk maximum filter (CIMorphologyMaximum).
3. Convert the borders image from black-and-white to transparent-and-white with CIMaskToAlpha.
4. Overlay the original image on top of the borders.
Full code below:
let base = CIImage(cgImage: baseImage.cgImage!)
let mask = CIImage(cgImage: maskImage.cgImage!)
// 1
let edges = mask.applyingFilter("CIEdges", parameters: [
    kCIInputIntensityKey: 1.0
])
// 2
let borderWidth = 0.02 * min(baseImage.size.width, baseImage.size.height)
let wideEdges = edges.applyingFilter("CIMorphologyMaximum", parameters: [
    kCIInputRadiusKey: borderWidth
])
// 3
let background = wideEdges.applyingFilter("CIMaskToAlpha")
// 4
let composited = base.composited(over: background)
// Convert back to UIImage
let context = CIContext(options: nil)
let cgImageRef = context.createCGImage(composited, from: composited.extent)!
return UIImage(cgImage: cgImageRef)
A simple option is to draw the image twice: first with a small scale applied to grow the image a little, then again at its normal size on top. Use masking instead if the images aren't transparent (but are black and white).
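A minimal sketch of that draw-twice idea, assuming a UIImage named image with a transparent background (the tint step is my addition so the enlarged first pass reads as a visible border):
CGFloat borderWidth = 4.0;
CGRect full = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Pass 1: draw the image slightly enlarged, then tint its opaque pixels white.
[image drawInRect:CGRectInset(full, -borderWidth, -borderWidth)];
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);
CGContextSetFillColorWithColor(ctx, UIColor.whiteColor.CGColor);
CGContextFillRect(ctx, full);
// Pass 2: draw the image at its normal size over the tinted silhouette.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
[image drawInRect:full];
UIImage *bordered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();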
I just did the same thing, but with a white border. I created a mask with a white body and a 4 px black stroke around the outside to give me the uniform border I want around my target image. The following takes advantage of Core Image filters to mask off a solid color background (to be used as the border) and then to mask off and composite the target image.
// The two-tone mask image
UIImage *maskImage = [UIImage imageNamed: @"Mask"];
// Create a filler image of whatever color we want the border to be (in my case white)
UIGraphicsBeginImageContextWithOptions(maskImage.size, NO, maskImage.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, UIColor.whiteColor.CGColor);
CGContextFillRect(context, CGRectMake(0.f, 0.f, maskImage.size.width, maskImage.size.height));
UIImage *whiteImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Use Core Image to mask the colored background to the mask (the entire opaque region of the mask)
CIContext *ciContext = [CIContext contextWithOptions: nil];
CIFilter *filter = [CIFilter filterWithName: @"CIBlendWithAlphaMask"];
[filter setValue: [CIImage imageWithCGImage: whiteImage.CGImage]
          forKey: kCIInputImageKey];
[filter setValue: [CIImage imageWithCGImage: maskImage.CGImage]
          forKey: kCIInputMaskImageKey];
CIImage *whiteBackground = filter.outputImage;
// Scale the target image to the size of the mask (accounting for image scale)
// ** Uses NYXImageKit
image = [image scaleToSize: CGSizeMake(maskImage.size.width * maskImage.scale, maskImage.size.height * maskImage.scale)
                 usingMode: NYXResizeModeAspectFill];
// Finally use Core Image to create our image using the masked white from above for our border,
// and the inner (white) area of our mask image to mask the target image before compositing
filter = [CIFilter filterWithName: @"CIBlendWithMask"];
[filter setValue: [CIImage imageWithCGImage: image.CGImage]
          forKey: kCIInputImageKey];
[filter setValue: whiteBackground
          forKey: kCIInputBackgroundImageKey];
[filter setValue: [CIImage imageWithCGImage: maskImage.CGImage]
          forKey: kCIInputMaskImageKey];
image = [UIImage imageWithCGImage: [ciContext createCGImage: filter.outputImage
                                                   fromRect: [filter.outputImage extent]]];
You can also apply a border to the objects in an image using the OpenCV framework.
Check this link: it detects the edges of an image and applies a border to them. I hope this gives you exactly the idea you want.
https://github.com/BloodAxe/OpenCV-Tutorial

I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

The code is as follows:
CIImage *imageToBlur = [CIImage imageWithCGImage:self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:
                        kCIInputImageKey, imageToBlur,
                        @"inputRadius", [NSNumber numberWithFloat:10.0],
                        nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage:outputImage];
For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).
Any advice is appreciated.
This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So, when you apply the blur filter to it, Core Image calculates that to show every last pixel of your blur you would need a larger canvas. That estimated size needed to draw the entire image is called the "extent". In essence, every pixel is getting "fatter", which means that the final extent will be bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine.
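If you want the output to stay the same size as the input, one option (a minimal sketch, reusing imageToBlur and outputImage from the code above) is to crop the blurred result back to the input's extent before rendering:
// Crop the blurred output back to the original image's extent, then render
// exactly that rect so the resulting CGImage matches the input size.
CIImage *cropped = [outputImage imageByCroppingToRect:imageToBlur.extent];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [ciContext createCGImage:cropped fromRect:imageToBlur.extent];
UIImage *sameSizeResult = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);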
