Unexpected result of CISourceOverCompositing when alpha is involved (iOS)

When trying to place an image with 60% alpha over another image with 100% alpha on iOS using Core Image, I got a result I didn't expect. If I take the two images and place scene_2_480p over scene_480p like this:
let back: CIImage = loadImage("scene_480p", type: "jpg")
let front: CIImage = loadImage("scene_2_480p", type: "png")
let composeFilter: CIFilter = CIFilter(name: "CISourceOverCompositing")!
composeFilter.setDefaults()
composeFilter.setValue(front, forKey: kCIInputImageKey)
composeFilter.setValue(back, forKey: kCIInputBackgroundImageKey)
let result: CIImage = composeFilter.outputImage!
I get this:
If I do the same with gimp, and place the same two images on two overlapping layers I get:
The result is close, but not the same. Can anyone explain why the results differ and how to get exactly the same result as GIMP?
These are the original images I used:

I'm still not able to answer the "why" question, but by using this approach it is possible to get the correct result, with the proper alpha value. The scale must be set to 1.0 to get the same result.
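One likely explanation for the mismatch (an assumption on my part, not confirmed in the thread) is that Core Image composites in a linear working color space while GIMP blends gamma-encoded sRGB values, which shifts semi-transparent pixels slightly. Assuming "this" above refers to drawing the images with UIKit instead of CISourceOverCompositing, a minimal sketch could look like the following; composeWithUIKit and the renderer setup are illustrative, and the key detail from the answer is the 1.0 scale:
import UIKit

// Sketch of a UIKit-based composite (names are illustrative, not from the thread).
func composeWithUIKit(front: UIImage, back: UIImage) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1.0 // must be 1.0, per the answer above
    let renderer = UIGraphicsImageRenderer(size: back.size, format: format)
    return renderer.image { _ in
        back.draw(in: CGRect(origin: .zero, size: back.size))
        front.draw(in: CGRect(origin: .zero, size: back.size)) // front keeps its 60% per-pixel alpha
    }
}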

Related

Find and crop largest interior bounding box of image

I built an optical hardware rig that produces stereo images, and I'm developing a helper application for it. With this equipment I shoot an object from 3 different angles and split the photo into 3 different image variables. This is what the photos look like after I correct the perspective distortion with CIPerspectiveTransform. There are redundant areas visible in the images, and I do not use those areas.
Perspective corrected image: https://i.imgur.com/ACJgaIy.gif
I focus the images by dragging, and after focusing I try to get the intersection areas. I can get the intersection of the 3 images of different sizes and shapes with the CISourceInCompositing filter. However, the resulting images come out in irregular shapes. Because of the proportional scaling I use while focusing, the images also contain transparent areas. You can download and test this image: https://i.imgur.com/uo8Srvv.png
Composited image: https://i.imgur.com/OY3owts.png
Composited animated image: https://i.imgur.com/M8JOdxR.gif
func intersectImages(inputImage: UIImage, backgroundImage: UIImage) -> UIImage {
    if let currentFilter = CIFilter(name: "CISourceInCompositing") {
        let inputImageCi = CIImage(image: inputImage)
        let backgroundImageCi = CIImage(image: backgroundImage)
        currentFilter.setValue(inputImageCi, forKey: "inputImage")
        currentFilter.setValue(backgroundImageCi, forKey: "inputBackgroundImage")
        let context = CIContext()
        if let outputImage = currentFilter.outputImage {
            if let extent = backgroundImageCi?.extent {
                if let cgOutputImage = context.createCGImage(outputImage, from: extent) {
                    return UIImage(cgImage: cgOutputImage)
                }
            }
        }
    }
    return UIImage()
}
The problem I'm stuck on is: is it possible to extract the intersection areas as rectangles, either while computing the intersections or afterwards? I haven't been able to come up with a solution. I'm trying to end up with the green-framed photo shared below as the final result.
Target image: https://i.imgur.com/18htpjm.png
Target image (animated): https://i.imgur.com/fMcElGy.gif
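One possible direction (a sketch under my own assumptions, not something from the original post): render the composited result's alpha channel into a bitmap, mark which pixels are fully opaque, and search for the largest all-opaque rectangle with the classic row-by-row "largest rectangle in a histogram" method, then crop the CIImage to that rectangle. The helper below only covers the rectangle search; largestOpaqueRect and the [[Bool]] mask are illustrative names, not an existing API.
import CoreGraphics

// mask[row][column] is true where the composited pixel is fully opaque.
func largestOpaqueRect(in mask: [[Bool]]) -> CGRect? {
    guard let width = mask.first?.count, width > 0 else { return nil }
    var heights = [Int](repeating: 0, count: width) // consecutive opaque pixels ending at the current row
    var best = (area: 0, rect: CGRect.zero)

    for (row, line) in mask.enumerated() {
        for x in 0..<width { heights[x] = line[x] ? heights[x] + 1 : 0 }

        // Largest rectangle in the current histogram, via a monotonic stack.
        var stack: [Int] = []
        for x in 0...width {
            let h = x < width ? heights[x] : 0 // sentinel 0 flushes the stack at the end
            while let top = stack.last, heights[top] >= h {
                stack.removeLast()
                let height = heights[top]
                let left = stack.last.map { $0 + 1 } ?? 0
                let area = height * (x - left)
                if area > best.area {
                    best = (area, CGRect(x: left, y: row - height + 1,
                                         width: x - left, height: height))
                }
            }
            stack.append(x)
        }
    }
    return best.area > 0 ? best.rect : nil
}
The returned rectangle is in top-down bitmap coordinates, so it still has to be flipped and scaled back into the CIImage coordinate space before calling cropped(to:).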

CIAdditionCompositing giving incorrect effect

I am trying to create an image by averaging several other images. To achieve this, I first darken each image by a factor determined by the number of images I am averaging (1/N for N images):
func darkenImage(by multiplier: CGFloat) -> CIImage? {
    let divImage = CIImage(color: CIColor(red: multiplier, green: multiplier, blue: multiplier))
    let divImageResized = divImage.cropped(to: self.extent) // Crop the multiplier color to the same size as the image being darkened
    if let divFilter = CIFilter(name: "CIMultiplyBlendMode",
                                parameters: ["inputImage": self, "inputBackgroundImage": divImageResized]) {
        return divFilter.outputImage
    }
    print("Failed to darken image")
    return nil
}
After this I take each darkened image and add them together (add images 1 and 2 together, then add the result to image 3, and so on):
func blend(with image: CIImage, blendMode: BlendMode) -> CIImage? {
    if let filter = CIFilter(name: blendMode.format) { // blendMode.format is "CIAdditionCompositing"
        filter.setDefaults()
        filter.setValue(self, forKey: "inputImage")
        filter.setValue(image, forKey: "inputBackgroundImage")
        return filter.outputImage
    }
    return nil
}
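Putting the two helpers together, the averaging loop looks roughly like this (a condensed sketch; `images` is assumed to hold the input CIImages and `.addition` is assumed to be the BlendMode case whose format is "CIAdditionCompositing"):
let n = CGFloat(images.count)
let darkened = images.compactMap { $0.darkenImage(by: 1.0 / n) } // darken each by 1/N
let average = darkened.dropFirst().reduce(darkened.first) { result, next in
    result?.blend(with: next, blendMode: .addition)              // then add them all together
}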
This code executes and produces a new image, but the more images I average together, the darker the shadows get. The highlights stay about the same brightness as in each of the individual images, but the darker parts just get darker and darker. Does anyone know what could be wrong?
Original image:
Average of 2 images:
Average of 8 images:
Average of 20 images:
To reduce the number of potential issues I have also tried darkening the images beforehand in Lightroom and just applying the CIAdditionCompositing filter. This gives the same result, which makes me think that CIAdditionCompositing may not simply be adding up pixel values but using a slightly different algorithm; however, I haven't found any documentation on this. I have also tried changing the darkening multiplier to see if I made a calculation error, but if I darken the images less, the highlights become overexposed when the images are added together again.
This may come a little late, but here is what I found.
First try
I suspected the problem was that the color values are not linear (they are gamma-encoded), just like Ken Thomases mentioned. Unfortunately, converting all images to linear with the "CISRGBToneCurveToLinear" filter and, after all images had been stacked, converting back with "CILinearToSRGBToneCurve" did not solve the issue.
Solution
Using CIExposureAdjust to halve the exposure each time after adding two images did solve the issue. Halving the exposure corresponds to decreasing the f-stop by one step, so the exposure value (EV) needs to be -1.
Additionally, I added intermediate images, because I sometimes run into trouble on my old phone when the filter stack in a CIImage gets too big:
if let evFilter = CIFilter(name: "CIExposureAdjust",
                           parameters: ["inputImage": self, "inputEV": NSNumber(value: -1)]) {
    return evFilter.outputImage?.insertingIntermediate()
}
P.S.: Please note that to get a correct result, the images need to be added pairwise and halved in exposure so that each image ends up with the same weight in the final result. Simply adding the next image in line and reducing the exposure afterwards will always give the most recently added image 50% of the weight in the overall result; see the sketch below.
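A minimal sketch of that balanced pairwise reduction (my own illustration; the helper name averageEqually is made up). For a power-of-two count every input ends up with exactly equal weight; an odd leftover is simply carried up, which slightly skews the weights:
import CoreImage

func averageEqually(_ images: [CIImage]) -> CIImage? {
    var level = images
    while level.count > 1 {
        var next: [CIImage] = []
        var i = 0
        while i + 1 < level.count {
            // Add a pair, then halve the exposure so the pair's average is carried on.
            guard let sum = CIFilter(name: "CIAdditionCompositing",
                                     parameters: [kCIInputImageKey: level[i],
                                                  kCIInputBackgroundImageKey: level[i + 1]])?.outputImage,
                  let half = CIFilter(name: "CIExposureAdjust",
                                      parameters: [kCIInputImageKey: sum,
                                                   kCIInputEVKey: NSNumber(value: -1)])?.outputImage
            else { return nil }
            next.append(half.insertingIntermediate()) // keep the filter graph small, as above
            i += 2
        }
        if i < level.count { next.append(level[i]) } // odd leftover carried up unchanged
        level = next
    }
    return level.first
}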

CIRadialGradient reduces image size

After applying CIRadialGradient to my image it gets reduced in width by about 20%.
guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)
guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    print("CIMaskedVariableBlur does not exist")
    return
}
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
guard let selectivelyFocusedCIImage = maskedVariableBlur.outputImage else {
    print("Setting maskedVariableBlur failed")
    return
}
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage)
To clarify, bgImage is a UIImageView.
Why does this happen and how do I fix it?
Without RadialMask:
With RadialMask:
The difference is that on my physical iPhone the smaller image is aligned to the left.
I tend to explicitly state how big the image is by using a CIContext and creating a specifically sized CGImage instead of simply using UIImage(ciImage:). Try this, assuming your input CIImage is called coreImage:
let ciCtx = CIContext()
let cgImage = ciCtx.createCGImage(selectivelyFocusedCIImage, from: coreImage.extent)
let uiImage = UIImage(cgImage: cgImage!)
A few notes....
(1) I pulled this code out from an app I'm wrapping up. This is untested code (including the forced-unwrap), but the concept of what I'm doing is solid.
(2) You don't explain much about what you are trying to do, but when I see a variable named selectivelyFocusedCIImage I get concerned that you may be trying to use Core Image in a more interactive way than "just" creating one image. If you want "near real-time" performance, render the CIImage into either a GLKView (deprecated as of iOS 12) or an MTKView instead of a UIImageView; a minimal sketch of the MTKView route follows these notes. A UIImageView only uses the CPU, whereas the other two use the GPU.
(3) Finally, a word of warning on CIContexts: they are expensive to create! Usually you can structure your code so that there's only one context shared by everything in your app.
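Here is the minimal MTKView sketch mentioned in note (2). The class name CIImageRenderer is illustrative, and it assumes a device with Metal support:
import CoreImage
import MetalKit

// Draws a CIImage into an MTKView so rendering stays on the GPU.
final class CIImageRenderer: NSObject, MTKViewDelegate {
    private let device = MTLCreateSystemDefaultDevice()!
    private lazy var queue = device.makeCommandQueue()!
    private lazy var ciContext = CIContext(mtlDevice: device)
    var image: CIImage?

    func attach(to view: MTKView) {
        view.device = device
        view.framebufferOnly = false // Core Image must be able to write into the drawable
        view.delegate = self
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let image = image,
              let drawable = view.currentDrawable,
              let buffer = queue.makeCommandBuffer() else { return }
        ciContext.render(image,
                         to: drawable.texture,
                         commandBuffer: buffer,
                         bounds: CGRect(origin: .zero, size: view.drawableSize),
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
    }
}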
Look up the documentation; it's a mask that is being applied to the image:
Docs: CIRadialGradient
The different sizes are caused by the kernel size of the blur filter:
The blur filter needs to sample a region around each pixel. Since there are no pixels beyond the image bounds, Core Image reduces the extent of the result image by half the kernel size (the blur radius) to signal that, for those pixels, there is not enough information for a proper blur.
However, you can tell Core Image to treat the border pixels as extending infinitely in all directions so that the blur filter gets enough information even on the edges of the image. Afterwards you can crop the result back to the original dimension.
In your code, just change the following two lines:
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage.cropped(to: coreImage.extent))

Apply Core Image Filter (CIBumpDistortion) to only one part of an image + change radius of selection and intensity of CIFilter

I would like to copy some of the features displayed here:
So I would like the user to apply a CIBumpDistortion filter to an image and let him choose
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
I read some previously asked questions, but they were not really helpful, and some of the solutions sounded far from user-friendly (e.g. cropping the needed part and then reapplying it to the old image). I hope I am not asking for too much at once. Objective-C would be preferred, but any help/hint would be much appreciated! Thank you in advance!
I wrote a demo (iPad) project that lets you apply most supported CIFilters. It interrogates each filter for the parameters it needs and has built-in support for float values as well as points and colors. For the bump distortion filter it lets you select a center point, a radius, and an input scale.
The project is called CIFilterTest. You can download the project from Github at this link: https://github.com/DuncanMC/CIFilterTest
There is quite a bit of housekeeping in the app to support the general-purpose ability to use any supported filter, but it should give you enough information to implement your own bump filter as you're asking to do.
The approach I worked out for applying a filter without it rendering outside the bounds of the original image is: first apply a clamp filter (CIAffineClamp) set to the identity transform, feed the output of that into your "target" filter (the bump distortion filter in this case), and then feed that output into a crop filter (CICrop) with the crop bounds set to the original image size.
The method to look for in the sample project is called showImage, in ViewController.m
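For reference, a Swift sketch of that clamp, filter, crop chain (the sample project itself is Objective-C; the helper name and the use of clampedToExtent(), the shorthand for CIAffineClamp with an identity transform, are my own choices):
import CoreGraphics
import CoreImage

func bumpDistort(_ input: CIImage, center: CGPoint, radius: CGFloat, scale: CGFloat) -> CIImage? {
    // 1. Clamp: extend the border pixels infinitely so the distortion has data to sample.
    let clamped = input.clampedToExtent()

    // 2. Target filter: the bump distortion itself.
    guard let bump = CIFilter(name: "CIBumpDistortion", parameters: [
        kCIInputImageKey: clamped,
        kCIInputCenterKey: CIVector(x: center.x, y: center.y),
        kCIInputRadiusKey: radius,
        kCIInputScaleKey: scale
    ])?.outputImage else { return nil }

    // 3. Crop back to the original image bounds.
    return bump.cropped(to: input.extent)
}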
You wrote:
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
Well, CIBumpDistortion has those attributes:
inputCenter is the center of the effect
inputRadius is the size of the circle selection
inputScale is the intensity
Simon
To show the bump:
You have to pass the touch location (kCIInputCenterKey) on the image together with the radius (the white circle in your case):
func appleBumpDistort(toImage currentImage: UIImage, radius: Float, intensity: Float) -> UIImage? {
    let context = CIContext()
    guard let currentFilter = CIFilter(name: "CIBumpDistortion"),
          let beginImage = CIImage(image: currentImage) else { return nil }
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(radius, forKey: kCIInputRadiusKey)
    currentFilter.setValue(intensity, forKey: kCIInputScaleKey)
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2),
                           forKey: kCIInputCenterKey)
    guard let image = currentFilter.outputImage else { return nil }
    if let cgimg = context.createCGImage(image, from: image.extent) {
        return UIImage(cgImage: cgimg)
    }
    return nil
}

Large Image Compositing on iOS in Swift

Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, and repeatedly add an additional one on top at a specified opacity, while outputting the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability, I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop iteration, I can get to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage
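One detail that may be worth trying (my suggestion, not an answer from the thread): wrap each pass of the render loop in an autoreleasepool so that the intermediate CGImage and the context's temporary buffers can be released before the next pass instead of piling up until the whole loop finishes. A sketch of the same loop, using the variables above with the Core Image calls written in current Swift names:
for i in 1...9 {
    autoreleasepool {
        // Fade the next image to 1/(i+1) opacity and composite it over the stack.
        let alphaTop = ciImageArray[i].applyingFilter("CIColorMatrix", parameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])
        let composed = alphaTop.composited(over: stackBuffer) // CISourceOverCompositing
        // Render and read back inside the pool so this iteration's CGImage is freed promptly.
        stackRender = context.createCGImage(composed, from: stackBuffer.extent)
        stackBuffer = CIImage(cgImage: stackRender)
    }
}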
