Cropping a CIImage from a touch point - iOS

I have a CIImage, and when I touch a point on the display (shown in the red box, let's say x: 10, y: 20), I need to crop that part of the image.
Since it is a CIImage, the coordinate system has its origin at the bottom (y increases from bottom to top).
So my question is: when I select a point on the image (say, the red box shown on the image), how can I crop the image in relation to the CIImage?
Note: since the CIImage is flipped vertically, I basically want to know how to convert my touch point (x: 10, y: 20) so that it selects the correct point in the CIImage. Hope I made the question clear.
Update
let myCropFilter = CIFilter(name: "CICrop")
myCropFilter!.setValue(myInputImage, forKey: kCIInputImageKey)
myCropFilter!.setValue(CIVector(x: 10, y: 20, z: 100, w: 300), forKey: "inputRectangle")
let myOutputImage : CIImage = myCropFilter!.outputImage!
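As a rough sketch of the conversion (assuming the touch point is already in the image's pixel coordinates, i.e. no extra view-to-image scaling, and using a hypothetical cropSize for the red box), the y coordinate can be flipped against the image extent like this:
// Sketch: convert a UIKit point (top-left origin, y grows downward) into
// Core Image coordinates (bottom-left origin, y grows upward).
// touchPoint and cropSize are hypothetical values for illustration.
let touchPoint = CGPoint(x: 10, y: 20)
let cropSize = CGSize(width: 100, height: 300)

let extent = myInputImage.extent
// Flip y: the distance from the top in UIKit becomes the distance from the bottom in Core Image.
let ciOriginY = extent.height - touchPoint.y - cropSize.height
let cropRect = CGRect(x: touchPoint.x, y: ciOriginY,
                      width: cropSize.width, height: cropSize.height)

// cropped(to:) does the same job as the CICrop filter above.
let myCroppedImage = myInputImage.cropped(to: cropRect)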

Related

Vertical edge detection with convolution giving transparent image as result with Swift

I am currently trying to write a function which takes an image and applies a 3x3 matrix to filter the vertical edges. For that I am using Core Image's CIConvolution3X3 filter, passing the matrix used to detect vertical edges in Sobel edge detection.
Here's the code:
func verticalEdgeFilter() -> UIImage {
    let inputUIImage = UIImage(named: imageName)!
    let inputCIImage = CIImage(image: inputUIImage)
    let context = CIContext()
    let weights: [CGFloat] = [1.0, 0.0, -1.0,
                              2.0, 0.0, -2.0,
                              1.0, 0.0, -1.0]

    let verticalFilter = CIFilter.convolution3X3()
    verticalFilter.inputImage = inputCIImage
    verticalFilter.weights = CIVector(values: weights, count: 9)

    if let output = verticalFilter.outputImage {
        if let cgimg = context.createCGImage(output, from: output.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }

    print("returning original")
    return inputUIImage
}
Now as a result I always get an almost fully transparent image with a 2-pixel border, like this one:
Original
Screenshot of the result (border on the left side)
Am I missing something obvious? The images are only transparent if the center value of the matrix is 0. If I try the same kernel on some web page, it at least leads to a usable result. Setting a bias also just crashes the whole thing, which I don't understand.
I also checked Apple's documentation on this, as well as the CIFilter web page, but I'm not getting anywhere, so I would really appreciate it if someone could help me with this or tell me an alternative way of doing this in Swift :)
Applying this convolution matrix to a fully opaque image will inevitably produce a fully transparent output. This is because the sum of the kernel values is 0, so after multiplying the 9 neighboring pixels by the weights and summing them up, you get 0 in the alpha component of the result. There are two ways to deal with it:
Make the output opaque by using the settingAlphaOne(in:) CIImage helper method.
Use the CIConvolutionRGB3X3 filter, which leaves the alpha component alone and applies the kernel to the RGB components only.
As for the 2-pixel border, it's also expected: when the kernel is applied to pixels at the border it still samples all 9 pixels, and some of them fall outside the image boundary (exactly 2 pixels away from the border on each side). These non-existent pixels contribute as transparent black (all-zero) pixels.
To get rid of the border:
Clamp the image to its extent to produce an infinite image in which the border pixels are repeated infinitely beyond the border. You can either use the CIClamp filter or the CIImage helper function clampedToExtent().
Apply the convolution filter.
Crop the resulting image to the input image's extent. You can use the cropped(to:) CIImage helper function for that.
With these changes, here is how your code could look:
func verticalEdgeFilter() -> UIImage {
    let inputUIImage = UIImage(named: imageName)!
    let inputCIImage = CIImage(image: inputUIImage)!
    let context = CIContext()
    let weights: [CGFloat] = [1.0, 0.0, -1.0,
                              2.0, 0.0, -2.0,
                              1.0, 0.0, -1.0]

    let verticalFilter = CIFilter.convolution3X3()
    // Clamp so the border pixels are repeated beyond the image extent.
    verticalFilter.inputImage = inputCIImage.clampedToExtent()
    verticalFilter.weights = CIVector(values: weights, count: 9)

    if var output = verticalFilter.outputImage {
        // Crop back to the original extent and make the alpha channel opaque.
        output = output
            .cropped(to: inputCIImage.extent)
            .settingAlphaOne(in: inputCIImage.extent)
        if let cgimg = context.createCGImage(output, from: output.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }

    print("returning original")
    return inputUIImage
}
If you use convolutionRGB3X3 instead of convolution3X3 you don't need to do settingAlphaOne.
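For reference, a minimal sketch of that variant might look like this (assuming an OS version that provides the RGB convolution filters via CIFilter.convolutionRGB3X3(); weights and inputCIImage are the same as above):
// RGB-only convolution: the alpha channel is left untouched,
// so settingAlphaOne(in:) is not needed.
let rgbFilter = CIFilter.convolutionRGB3X3()
rgbFilter.inputImage = inputCIImage.clampedToExtent()
rgbFilter.weights = CIVector(values: weights, count: 9)

let rgbOutput = rgbFilter.outputImage?
    .cropped(to: inputCIImage.extent)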
BTW, if you want to play with convolution filters, as well as any of the other 250 or so built-in CIFilters, check out this app I just published: https://apps.apple.com/us/app/filter-magic/id1594986951

How to apply CIVignette as if CIImage were square?

I have a 1080x1920 CIImage and wish to apply CIVignette as if the image were square (to mimic a camera lens).
I'm new to CoreImage and am wondering how to temporarily change the extent of my CIImage to 1920x1920. (But I'm not sure if this is needed at all.)
I could copy-paste two narrow slivers of the original image left and right and CICrop afterwards, but this seems hacky.
Any ideas?
You can use a combination of clampedToExtent (which causes the image to repeat its border pixels infinitely) and cropped to make the image square. Then you can apply the vignette and crop the result back to the original extent:
// "crop" to square, placing the image in the middle
let longerSize = max(inputImage.extent.width, inputImage.extent.height)
let xOffset = (longerSize - inputImage.extent.width) / 2.0
let yOffset = (longerSize - inputImage.extent.height) / 2.0
let squared = inputImage.clampedToExtent().cropped(to: CGRect(x: -xOffset, y: -yOffset, width: longerSize, height: longerSize))
// apply vignette
let vignetteFilter = CIFilter(name: "CIVignette")!
vignetteFilter.setValue(squared, forKey: kCIInputImageKey)
vignetteFilter.setValue(1.0, forKey: kCIInputIntensityKey)
vignetteFilter.setValue(1.0, forKey: kCIInputRadiusKey)
let withVignette = vignetteFilter.outputImage!
// crop back to original extent
let output = withVignette.cropped(to: inputImage.extent)
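If you then need a UIImage (e.g. for a UIImageView), a minimal rendering sketch could look like this; the CIContext is assumed to be created once and reused, since it is an expensive object:
// Render the cropped CIImage into a CGImage, then wrap it in a UIImage.
let context = CIContext() // ideally created once and reused
if let cgImage = context.createCGImage(output, from: output.extent) {
    let uiImage = UIImage(cgImage: cgImage)
    // e.g. assign uiImage to an image view
}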

CIDepthBlurEffect Failed to render part of the image because of the memory requirement

CIDepthBlurEffect normally works on my phone with iOS 12.4, but it gives the following errors when I try to set the "inputFocusRect" parameter. Also, no error is produced when I set y: 0; however, the image is not processed. Any idea? Thanks.
Failed to render 12192768 pixels because a CIKernel's ROI function did not allow tiling.
Failed to render part of the image because memory requirement of -1 too big.
Here is the code:
if let filter = CIFilter(name: "CIDepthBlurEffect",
                         parameters: [kCIInputImageKey: mainImage,
                                      kCIInputDisparityImageKey: disparityMap]) {
    filter.setValue(0.1, forKey: "inputAperture")
    filter.setValue(0.1, forKey: "inputScaleFactor")
    filter.setValue(CIVector(x: 0, y: 100, z: 100, w: 100), forKey: "inputFocusRect") // works without this line
    let result = filter.outputImage
    self.imageView.image = UIImage(ciImage: result!)
}
This may not be the solution, but I noticed two things that you could try; please report back the results:
As matt pointed out, you should properly render the image before using it in an image view. You need a CIContext for that (see below).
There seems to be a special constructor for the depth blur effect filter that is also tied to a CIContext.
Here's some code you can try:
// create this only once and re-use it when possible, it's an expensive object
let ciContext = CIContext()

let filter = ciContext.depthBlurEffectFilter(for: mainImage,
                                             disparityImage: disparityMap,
                                             portraitEffectsMatte: nil,
                                             // the orientation of your input image
                                             orientation: CGImagePropertyOrientation.up,
                                             options: nil)!
filter.setValue(0.1, forKey: "inputAperture")
filter.setValue(0.1, forKey: "inputScaleFactor")
filter.setValue(CIVector(x: 0, y: 100, z: 100, w: 100), forKey: "inputFocusRect")

let result = ciContext.createCGImage(filter.outputImage!, from: mainImage.extent)!
self.imageView.image = UIImage(cgImage: result)
Would be interesting to hear if this helps in any way.

Antialiased pixels from Swift code are converted to unwanted black pixels by OpenCV

I am trying to make some pixels transparent in Swift by enabling anti-aliasing and clearing the path pixels to transparent. For further processing I send the UIImage to OpenCV (C++), which converts the edge of the path to black pixels.
I want to remove those unwanted black pixels. How can I remove the black pixels generated by OpenCV?
Even in OpenCV, just converting the image to a Mat and from the Mat back to a UIImage causes the same problem.
Swift code:
guard let imageSize = image.size else {
    return
}
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1.0)
guard let context = UIGraphicsGetCurrentContext() else {
    return
}
context.setShouldAntialias(true)
context.setAllowsAntialiasing(true)
context.setShouldSubpixelQuantizeFonts(true)
context.interpolationQuality = .high

imgCanvas.image?.draw(in: CGRect(x: 0, y: 0, width: (imgCanvas.image?.size.width)!, height: (imgCanvas.image?.size.height)!))

bezeirPath = UIBezierPath()
bezeirPath.move(to: fromPoint)
bezeirPath.addLine(to: toPoint)
bezeirPath.lineWidth = (CGFloat(widthOfLine) * scaleX) / scrollView.zoomScale
bezeirPath.lineCapStyle = CGLineCap.round
bezeirPath.lineJoinStyle = CGLineJoin.round
bezeirPath.flatness = 0.0
bezeirPath.miterLimit = 0.0
bezeirPath.usesEvenOddFillRule = true

UIColor.white.setStroke()
bezeirPath.stroke(with: .clear, alpha: 0)
bezeirPath.close()
bezeirPath.fill()
UIColor.clear.set()
context.addPath(bezeirPath.cgPath)

let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
OpenCV code:
Mat source;
UIImageToMat(originalImage, source, true);
return MatToUIImage(source);
I have tried various ways to solve this issue, looking at different sources, but none worked. I have been trying to solve this for the past 3 days, so if anybody has even a clue related to this issue, that would be helpful!
I would supply the RGB image and the alpha mask as separate images to OpenCV. Draw your mask into a single-channel image:
guard let imageSize = image.size else {
    return
}
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1.0)
guard let context = UIGraphicsGetCurrentContext() else {
    return
}
context.setShouldAntialias(true)
context.setAllowsAntialiasing(true)
context.setShouldSubpixelQuantizeFonts(true)
context.interpolationQuality = .high

// fill the whole mask with black first
context.setFillColor(UIColor.black.cgColor)
context.addRect(CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
context.drawPath(using: .fill)

// then draw the path in white
bezeirPath = UIBezierPath()
bezeirPath.move(to: fromPoint)
bezeirPath.addLine(to: toPoint)
bezeirPath.lineWidth = (CGFloat(widthOfLine) * scaleX) / scrollView.zoomScale
bezeirPath.lineCapStyle = CGLineCap.round
bezeirPath.lineJoinStyle = CGLineJoin.round
bezeirPath.flatness = 0.0
bezeirPath.miterLimit = 0.0
bezeirPath.usesEvenOddFillRule = true

UIColor.white.setStroke()
bezeirPath.stroke()
bezeirPath.close()
bezeirPath.fill()

let maskImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then in OpenCV, you can apply the alpha mask to your RGB image.
Mat source, sourceMask;
UIImageToMat(image, source, true);
UIImageToMat(maskImage, sourceMask, true);
If your image isn't RGB, you can convert it:
cvtColor(source, source, CV_RGBA2RGB);
If your mask isn't single-channel, you can convert it:
cvtColor(sourceMask, sourceMask, CV_RGBA2GRAY);
Then split the RGB image into channels:
Mat rgb[3];
split(source, rgb);
Then create an RGBA image from the RGB channels and the alpha channel:
Mat imgBGRA;
vector<Mat> channels = {rgb[0], rgb[1], rgb[2], sourceMask};
merge(channels, imgBGRA);
Since your mask was created with anti-aliasing, the image created above will also have anti-aliased alpha.
OpenCV can also help you find edges using the Canny edge detector; you can refer to this link for an implementation:
OpenCV's Canny Edge Detection in C++
Furthermore, I would suggest doing a bit of research on your own before asking questions here.

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and I could pin the image view to the corners of the screen. However, nothing changes with this approach, and I am not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description:
The captured photo and a screenshot of the result, where the empty spacing is a result of the image shrinking.
Try something like this. Replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8,
                                                                          kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage, and using a CIContext, write a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
This could also happen if a CIFilter outputs an image with dimensions different from the input image (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Resources