Strange CoreImage cropping issue encountered - iOS

I have a strange issue: after I crop a photo from my photo library, it cannot be displayed by the app. I get this error after running this code:
self.correctedImageView.image = UIImage(ciImage: correctedImage)
[api] -[CIContext(CIRenderDestination) _startTaskToRender:toDestination:forPrepareRender:error:] The image extent and destination extent do not intersect.
Here is the code I used to crop and display. (inputImage is CIImage)
let imageSize = inputImage.extent.size
let correctedImage = inputImage
    .cropped(to: textObvBox.boundingBox.scaled(to: imageSize))

DispatchQueue.main.async {
    self.correctedImageView.image = UIImage(ciImage: correctedImage)
}
More info: debug-printing the extents of inputImage and correctedImage gives:
Printing description of self.inputImage: <CIImage: 0x1c42047a0 extent [0 0 3024 4032]>
crop [430 3955 31 32] extent=[430 3955 31 32]
affine [0 -1 1 0 0 4032] extent=[0 0 3024 4032] opaque
affine [1 0 0 -1 0 3024] extent=[0 0 4032 3024] opaque
colormatch "sRGB IEC61966-2.1"_to_workingspace extent=[0 0 4032 3024] opaque
IOSurface 0x1c4204790(501) seed:1 YCC420f 601 alpha_one extent=[0 0 4032 3024] opaque
Funny thing is that when I put a breakpoint in Xcode, I was able to preview the cropped image properly. I'm not sure what this extent thing is for a CIImage, but UIImageView doesn't like it when I assign the cropped image to it. Any idea what this extent does?

I ran into the same problem you describe. Due to some weird behavior in UIKit / Core Image, I needed to convert the CIImage to a CGImage first. I noticed it only happened when I had applied some filters to the CIImage, as shown below.
let image = /* my CIImage */
let goodImage = UIImage(ciImage: image)
uiImageView.image = goodImage // works great!
let image = /* my CIImage after applying 5-10 filters */
let badImage = UIImage(ciImage: image)
uiImageView.image = badImage // empty view!
This is how I solved it.
let ciContext = CIContext()
// createCGImage returns an optional; handle failure appropriately in real code.
let cgImage = ciContext.createCGImage(image, from: image.extent)!
uiImageView.image = UIImage(cgImage: cgImage) // works great!!!
As other commenters have stated, beware of creating a CIContext too often; it's an expensive operation.
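For example, here is a minimal sketch of that reuse idea (the ImageRendering type and sharedCIContext name are mine, not from the answer above):
import UIKit
import CoreImage

// Hypothetical helper: create the expensive CIContext once and reuse it.
enum ImageRendering {
    static let sharedCIContext = CIContext()

    // Renders a CIImage into a CGImage-backed UIImage, so a UIImageView can
    // display it regardless of the CIImage's extent origin.
    static func uiImage(from ciImage: CIImage) -> UIImage? {
        guard let cgImage = sharedCIContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}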

Extent in CIImage can be a bit of a pain, especially when working with cropping and translation filters. Effectively, it lets the new image's position relate directly to the part of the old image it was taken from. If you crop the center out of an image, you'll find the extent's origin is non-zero, and you'll get an issue like this. I believe the feature is intended for things such as an art application that supports layers.
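A quick, made-up illustration of that behavior (not from the original post):
import CoreImage

// Crop the center out of a 100x100 solid-color image; the crop keeps its
// position in the original's coordinate space, so the extent origin is (40, 40).
let source = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))
let center = source.cropped(to: CGRect(x: 40, y: 40, width: 20, height: 20))
print(center.extent) // (40.0, 40.0, 20.0, 20.0) - the origin is not zero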
As noted above, you CAN convert the CIImage into a CGImage and from there to a UIImage, but this is slow and wasteful. It works because a CGImage has no notion of extent, while a CIImage-backed UIImage keeps it. However, as noted, it requires creating a CIContext, which has a lot of overhead.
There is, fortunately, a better way. Simply create a CIAffineTransform filter and populate it with an affine transform equal to the negative of the extent's origin, thus:
let transformFilter = CIFilter(name: "CIAffineTransform")!
let translate = CGAffineTransform(translationX: -image.extent.minX, y: -image.extent.minY)
let value = NSValue(cgAffineTransform: translate)
transformFilter.setValue(value, forKey: kCIInputTransformKey)
transformFilter.setValue(image, forKey: kCIInputImageKey)
let newImage = transformFilter.outputImage
Now, newImage should be identical to image but with an extent origin of zero. You can then pass this directly to UIImage(ciImage:). I timed this and found it to be immensely faster than creating a CIContext and making a CGImage. For the sake of interest:
CGImage Method: 0.135 seconds.
CIFilter Method: 0.00002 seconds.
(Running on iPhone XS Max)
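For what it's worth, the same origin-zero translation can also be written without constructing a CIFilter by hand, using CIImage's transformed(by:). A minimal sketch, reusing the correctedImage variable from the question:
// Shift the cropped image so its extent origin becomes (0, 0).
let zeroOriginImage = correctedImage.transformed(
    by: CGAffineTransform(translationX: -correctedImage.extent.minX,
                          y: -correctedImage.extent.minY))
// zeroOriginImage can now be wrapped in UIImage(ciImage:) and assigned to a UIImageView.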

You can use UIGraphicsImageRenderer to draw your image on iOS 10 or later.
let img: CIImage = /* my ciimage */
let renderer = UIGraphicsImageRenderer(size: img.extent.size)
uiImageView.image = renderer.image { context in
    UIImage(ciImage: img).draw(in: CGRect(origin: .zero, size: img.extent.size))
}

Related

Vertical edge detection with convolution giving transparent image as result with Swift

I am currently trying to write a function which takes an image and applies a 3x3 matrix to filter out vertical edges. For that I am using Core Image's CIConvolution3X3 and passing the matrix used to detect vertical edges in Sobel edge detection.
Here's the code:
func verticalEdgeFilter() -> UIImage {
    let inputUIImage = UIImage(named: imageName)!
    let inputCIImage = CIImage(image: inputUIImage)
    let context = CIContext()
    let weights: [CGFloat] = [1.0, 0.0, -1.0,
                              2.0, 0.0, -2.0,
                              1.0, 0.0, -1.0]

    let verticalFilter = CIFilter.convolution3X3()
    verticalFilter.inputImage = inputCIImage
    verticalFilter.weights = CIVector(values: weights, count: 9)

    if let output = verticalFilter.outputImage {
        if let cgimg = context.createCGImage(output, from: output.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }
    print("returning original")
    return inputUIImage
}
Now, as a result, I always get an almost fully transparent image with a 2-pixel border, like this one:
Original
Screenshot of the result (border on the left side)
Am I missing something obvious? The images are only transparent when the center value of the matrix is 0. But if I try the same kernel on some web page, it at least leads to a usable result. Setting a bias also just crashes the whole thing, which I don't understand.
I also checked Apple's documentation on this, as well as the CIFilter web page, but I'm not getting anywhere, so I would really appreciate it if someone could help me with this or tell me an alternative way of doing this in Swift :)
Applying this convolution matrix to a fully opaque image will inevitably produce a fully transparent output. This is because the total sum of the kernel values is 0, so after multiplying the 9 neighboring pixels by the weights and summing them up, you get 0 in the alpha component of the result. There are two ways to deal with it:
Make output opaque by using settingAlphaOne(in:) CIImage helper method.
Use CIConvolutionRGB3X3 filter that leaves the alpha component alone and applies the kernel to RGB components only.
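To see the alpha arithmetic from the explanation above concretely (a small made-up check, not from the original answer):
import CoreGraphics

// For a fully opaque region every sampled alpha is 1, so the convolved alpha is
// simply the sum of the kernel weights:
let weights: [CGFloat] = [1, 0, -1,
                          2, 0, -2,
                          1, 0, -1]
let convolvedAlpha = weights.reduce(0, +) // 1 + 2 + 1 - 1 - 2 - 1 = 0 -> fully transparent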
As for the 2-pixel border, it's also expected: when the kernel is applied to pixels at the border, it still samples all 9 neighbors, and some of them fall outside the image boundary (within 2 pixels of the border on each side). These non-existent pixels contribute as transparent black pixels (0x00000000).
To get rid of the border:
Clamp the image to its extent to produce an infinite image in which the border pixels are repeated to infinity. You can use either the CIAffineClamp filter or the CIImage helper function clampedToExtent().
Apply the convolution filter.
Crop the resulting image to the input image's extent. You can use the cropped(to:) CIImage helper function for that.
With these changes, here is how your code could look:
func verticalEdgeFilter() -> UIImage {
    let inputUIImage = UIImage(named: imageName)!
    let inputCIImage = CIImage(image: inputUIImage)!
    let context = CIContext()
    let weights: [CGFloat] = [1.0, 0.0, -1.0,
                              2.0, 0.0, -2.0,
                              1.0, 0.0, -1.0]

    let verticalFilter = CIFilter.convolution3X3()
    verticalFilter.inputImage = inputCIImage.clampedToExtent()
    verticalFilter.weights = CIVector(values: weights, count: 9)

    if var output = verticalFilter.outputImage {
        output = output
            .cropped(to: inputCIImage.extent)
            .settingAlphaOne(in: inputCIImage.extent)

        if let cgimg = context.createCGImage(output, from: output.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }
    print("returning original")
    return inputUIImage
}
If you use convolutionRGB3X3 instead of convolution3X3, you don't need the settingAlphaOne step.
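A minimal sketch of that variant, assuming the CIFilterBuiltins helper convolutionRGB3X3() is available (the underlying CIConvolutionRGB3X3 filter requires iOS 15 or later); it reuses inputCIImage and weights from the code above:
import CoreImage.CIFilterBuiltins

// RGB-only convolution: the alpha channel is left untouched, so no
// settingAlphaOne(in:) step is needed afterwards.
let rgbFilter = CIFilter.convolutionRGB3X3()
rgbFilter.inputImage = inputCIImage.clampedToExtent()
rgbFilter.weights = CIVector(values: weights, count: 9)
let rgbOutput = rgbFilter.outputImage?.cropped(to: inputCIImage.extent)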
BTW, if you want to play with convolution filters, or any other of the 250 available CIFilters, check out this app I just published: https://apps.apple.com/us/app/filter-magic/id1594986951

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation

    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture Description:
Photo captured, and a screenshot of the image; the empty spacing is a result of the image shrinking.
Try something like this; replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8,
                                                                          kCIInputIntensityKey: 1.00])
    let context = CIContext()
    // Render the filtered CIImage into a CGImage the size of the *input* extent.
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage's size is the same as a CIImage's extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage's extent instead. Usually, though, they are the same.
Last, a suggestion: if you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView rather than UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
This could also happen if a CIFilter outputs an image with dimensions different from the input image's (e.g. with CIPixellate).
In that case, simply tell the CIContext to render the image into a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs carry no orientation data flag) and its height is greater than its width, then upon applying certain distortion filters the image rotates and presents itself as if it were a landscape image.
I found the technique below somewhere in the many tabs I had open, and it seems to do exactly what I want: it uses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the warning I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
//global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }

    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)

    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage: (output), scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately, they turn images like the ones described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages, converting the CIImage to a UIImage, applying the extension, and then trying to convert it back to a CIImage or just loading the UIImage into the imageView for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" warning.
It turned out that Xcode was auto-populating the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for three days. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the if let output in the OP.

Swift: 'Compacting' a cropped CGImage

When cropping a CGImage in Swift 3 (using the cropping(to:) method), the original CGImage is referenced by the cropped version - both according to the documentation and according to what the Allocations instrument shows me.
I am placing the cropped CGImage objects on an undo stack, so having the original versions retained 'costs' me about 21 MB of memory per undo element.
Since there is no obvious way to 'compact' a cropped CGImage and make it independent of the original, I currently do something similar to the following (without all the force unwrapping):
let croppedImage = original.cropping(to: rect)!
let data = UIImagePNGRepresentation(UIImage(cgImage: croppedImage))!
let compactedCroppedImage = UIImage(data: data)!.cgImage!
This works perfectly, and now each undo snapshot takes up only the amount of memory that it is supposed to.
My question is: Is there a better / faster way to achieve this?
Your code involves PNG compression and decompression, which can be avoided. Just create an offscreen bitmap of the target size, draw the original image into it, and use that as the new image:
// Note: original is used as a UIImage here (size, draw(in:)); if you are starting
// from a CGImage, wrap it in a UIImage first.
UIGraphicsBeginImageContext(rect.size)
let targetRect = CGRect(x: -rect.origin.x, y: -rect.origin.y,
                        width: original.size.width, height: original.size.height)
original.draw(in: targetRect)
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Note: The result is slightly different if you don't have integral coordinates.
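If you are targeting iOS 10 or later, the same redraw can be expressed with UIGraphicsImageRenderer. A sketch, assuming a pixel-for-pixel crop is wanted (scale pinned to 1) and using a hypothetical compactedCrop helper name:
import UIKit

// Re-draws the cropped region into a fresh bitmap so the result no longer
// retains the full original image.
func compactedCrop(of original: UIImage, to rect: CGRect) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // 1 point == 1 pixel, matching the CGImage-based cropping
    let renderer = UIGraphicsImageRenderer(size: rect.size, format: format)
    return renderer.image { _ in
        // Shift the drawing so only the requested rect lands in the new bitmap.
        original.draw(in: CGRect(x: -rect.origin.x, y: -rect.origin.y,
                                 width: original.size.width, height: original.size.height))
    }
}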

Swift: UIGraphicsBeginImageContextWithOptions scale factor set to 0 but not applied

I used to resize an image with the following code, and it used to work just fine regarding the scale factor. Now, with Swift 3, I can't figure out why the scale factor is not taken into account. The image is resized, but the scale factor is not applied. Do you know why?
let layer = self.imageview.layer
UIGraphicsBeginImageContextWithOptions(layer.bounds.size, true, 0)
layer.render(in: UIGraphicsGetCurrentContext()!)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
print("SCALED IMAGE SIZE IS \(scaledImage!.size)")
print(scaledImage!.scale)
For example, if I take a screenshot on an iPhone 5, the image size will be 320x568. I used to get 640x1136 with the exact same code. What can cause the scale factor not to be applied?
When I print the scale of the image, it prints 1, 2, or 3 based on the device resolution, but it is not applied to the image taken from the context.
scaledImage!.size does not return the image size in pixels; UIImage.size is measured in points.
CGImageGetWidth and CGImageGetHeight (the cgImage.width and cgImage.height properties in Swift 3) return the size in pixels,
that is, image.size * image.scale.
If you want to test it out, first import CoreGraphics:
let imageSize = scaledImage!.size                      // (320, 568) - points
let imageWidthInPixel = scaledImage!.cgImage!.width    // 640
let imageHeightInPixel = scaledImage!.cgImage!.height  // 1136
