CIRadialGradient reduces image size - iOS

After applying CIRadialGradient to my image it gets reduced in width by about 20%.
guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)
guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    print("CIMaskedVariableBlur does not exist")
    return
}
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
guard let selectivelyFocusedCIImage = maskedVariableBlur.outputImage else {
    print("Setting maskedVariableBlur failed")
    return
}
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage)
To clarify, bgImage is a UIImageView.
Why does this happen and how do I fix it?
Without RadialMask:
With RadialMask:
With the difference that on my physical iPhone the smaller image is aligned to the left.

I tend to explicitly state how big the image is by using a CIContext and creating a specifically sized CGImage instead of simply using UIImage(ciImage:). Try this, assuming your input CIImage is called coreImage:
let ciCtx = CIContext()
let cgImg = ciCtx.createCGImage(selectivelyFocusedCIImage, from: coreImage.extent)
let uiImage = UIImage(cgImage: cgImg!)
A few notes....
(1) I pulled this code out from an app I'm wrapping up. This is untested code (including the forced-unwrap), but the concept of what I'm doing is solid.
(2) You don't explain much of what you are trying to do, but when I see a variable named selectivelyFocusedCIImage I get concerned that you may be trying to use Core Image in a more interactive way than "just" creating one image. If you want "near real-time" performance, render the CIImage in either a GLKView (deprecated as of iOS 12) or an MTKView instead of a UIImageView. The first two use the GPU, whereas a UIImageView only uses the CPU.
(3) Finally, a word of warning on CIContexts - they are expensive to create! Usually you can code it such that there's only one context shared by everything in your app.
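For instance, a minimal sketch of sharing a single context (the wrapper type and its name are my own illustration, not code from the answer above):
import CoreImage

// One CIContext for the whole app: expensive to create, cheap to reuse.
enum SharedRendering {
    static let ciContext = CIContext()
}

// Usage (illustrative):
// let cgImage = SharedRendering.ciContext.createCGImage(ciImage, from: ciImage.extent)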

Look up the documentation; it's a mask that's being applied to the image:
Docs: CIRadialGradient

The different sizes are caused by the kernel size of the blur filter:
The blur filter needs to sample a region around each pixel. Since there are no pixels beyond the image bounds, Core Image reduces the extent of the result image by half the kernel size (the blur radius) to signal that for those pixels there is not enough information for a proper blur.
However, you can tell Core Image to treat the border pixels as extending infinitely in all directions so that the blur filter gets enough information even at the edges of the image. Afterwards you can crop the result back to the original dimensions.
In your code, just change the following two lines:
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage.cropped(to: coreImage.extent))
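Combining this fix with the explicit-size rendering from the first answer, an untested sketch of the whole pipeline (reusing the question's variable names) could look like this:
guard let image = bgImage.image, let cgimg = image.cgImage,
      let radialMask = CIFilter(name: "CIRadialGradient"),
      let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    return
}
let coreImage = CIImage(cgImage: cgimg)

// Clamp so the blur can sample pixels beyond the original edges...
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")

guard let selectivelyFocusedCIImage = maskedVariableBlur.outputImage else {
    return
}

// ...then crop back to the original extent and render at that explicit size.
let ciCtx = CIContext() // ideally created once and shared
if let cgOutput = ciCtx.createCGImage(selectivelyFocusedCIImage.cropped(to: coreImage.extent),
                                      from: coreImage.extent) {
    bgImage.image = UIImage(cgImage: cgOutput)
}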

Related

Find and crop largest interior bounding box of image

I made an optical hardware device that I can get stereo images from, and I'm developing a helper application for it. With this equipment I shoot an object from 3 different angles and split the photo into 3 different image variables. This is what the photos become when I correct the distortions caused by perspective with CIPerspectiveTransform. There are redundant areas visible in the images, and I do not use these areas.
Perspective corrected image: https://i.imgur.com/ACJgaIy.gif
I focus the images by dragging, and after focusing I try to get the intersection areas. I can get the intersection areas of the 3 images of different sizes and shapes with the CISourceInCompositing filter. However, the resulting images come out in irregular formats. Due to the proportional operations I use when focusing, the images also contain transparent areas. You can download and test this image. https://i.imgur.com/uo8Srvv.png
Composited image: https://i.imgur.com/OY3owts.png
Composited animated image: https://i.imgur.com/M8JOdxR.gif
func intersectImages(inputImage: UIImage, backgroundImage: UIImage) -> UIImage {
    if let currentFilter = CIFilter(name: "CISourceInCompositing") {
        let inputImageCi = CIImage(image: inputImage)
        let backgroundImageCi = CIImage(image: backgroundImage)
        currentFilter.setValue(inputImageCi, forKey: "inputImage")
        currentFilter.setValue(backgroundImageCi, forKey: "inputBackgroundImage")
        let context = CIContext()
        if let outputImage = currentFilter.outputImage {
            if let extent = backgroundImageCi?.extent {
                if let cgOutputImage = context.createCGImage(outputImage, from: extent) {
                    return UIImage(cgImage: cgOutputImage)
                }
            }
        }
    }
    return UIImage()
}
The problem I'm stuck with is: is it possible to extract the images as rectangles, either while getting these intersection areas or after the intersection operations? I couldn't come up with any solution. I'm trying to get the green-framed photo I shared as the final result.
Target image: https://i.imgur.com/18htpjm.png
Target image animated https://i.imgur.com/fMcElGy.gif

Swift UIImage .jpegData() and .pngData() changes image size

I am using Swift's Vision framework for deep learning and want to upload the input image to the backend using a REST API - for which I am converting my UIImage to MultipartFormData using the jpegData() and pngData() functions that Swift natively offers.
I use session.sessionPreset = .vga640x480 to specify the image size in my app for processing.
I was seeing a different image size in the backend - which I was able to confirm in the app, because the UIImage recreated from the image data is of a different size.
This is how I convert image to multipartData -
let multipartData = MultipartFormData()
if let imageData = self.image?.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}
This is what I see in Xcode debugger -
The following looks intuitive, but manifests the behavior you describe, whereby one ends up with a Data representation of the image with an incorrect scale and pixel size:
let ciImage = CIImage(cvImageBuffer: pixelBuffer) // 640×480
let image = UIImage(ciImage: ciImage) // says it is 640×480 with scale of 1
guard let data = image.pngData() else { ... } // but if you extract `Data` and then recreate image from that, the size will be off by a multiple of your device’s scale
However, if you create it via a CGImage, you will get the right result:
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage)
You asked:
If my image is 640×480 points with scale 2, would my deep learning model still take the same time to process it as a 1280×960 image with scale 1?
There is no difference, as far as the model goes, between 640×480pt @ 2× and 1280×960pt @ 1×.
The real question is whether 640×480pt @ 2× is better than 640×480pt @ 1×: in that case, the model will undoubtedly generate better results, though possibly more slowly, with higher-resolution images (though at 2×, the asset is roughly four times larger and slower to upload; on a 3× device, it will be roughly nine times larger).
But if you look at the larger asset generated by the direct CIImage » UIImage process, you can see that it did not really capture a 1280×960 snapshot; it captured 640×480 and upscaled it (with some smoothing), so you do not actually have a more detailed asset to work with, and it is unlikely to generate better results. So you will pay the penalty of the larger asset, but likely without any benefit.
If you need better results with larger images, I would change the preset to a higher resolution, but still avoid the scale-based adjustment by using the CIContext/CGImage-based snippet shared above.
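For example, a sketch of raising the capture preset (the session name and the chosen preset are assumptions, not from the original question):
import AVFoundation

func raiseCaptureResolution(of session: AVCaptureSession) {
    // Capture more pixels up front instead of relying on UIImage scale.
    if session.canSetSessionPreset(.hd1920x1080) {
        session.sessionPreset = .hd1920x1080
    }
}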

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I was able to get the original image's size and orientation, the image would scale accordingly and I could pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description:
The captured photo and a screenshot of the result, where the empty spacing is a result of the image shrinking.
Try something like this, replacing your method with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvement you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
This could also happen if a CIFilter outputs an image with dimensions different from those of the input image (e.g. with CIPixellate).
In that case, simply tell the CIContext to render the image into a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Using GaussianBlur on image in viewDidLoad blocks UI

I'm creating a blur effect using the function below in viewDidLoad of a view controller:
func applyBlurEffect(image: UIImage){
    let imageToBlur = CIImage(image: image)!
    let blurfilter = CIFilter(name: "CIGaussianBlur")!
    blurfilter.setValue(10, forKey: kCIInputRadiusKey)
    blurfilter.setValue(imageToBlur, forKey: "inputImage")
    let resultImage = blurfilter.value(forKey: "outputImage") as! CIImage
    let croppedImage: CIImage = resultImage.cropping(to: CGRect(x: 0, y: 0, width: imageToBlur.extent.size.width, height: imageToBlur.extent.size.height))
    let context = CIContext(options: nil)
    let blurredImage = UIImage(cgImage: context.createCGImage(croppedImage, from: croppedImage.extent)!)
    self.backImage.image = blurredImage
}
But this piece of code blocks the UI, and the view controller opens after 3-4 seconds of lag. I don't want to present the UI without the blur effect, but I also don't want the user to wait 3-4 seconds while the view controller opens.
Please suggest an optimal solution for this problem.
GPUImage (https://github.com/BradLarson/GPUImage) blur works much faster than the Core Image one:
extension UIImage {
    func imageWithGaussianBlur() -> UIImage? {
        let source = GPUImagePicture(image: self)
        let gaussianFilter = GPUImageGaussianBlurFilter()
        gaussianFilter.blurRadiusInPixels = 2.2
        source?.addTarget(gaussianFilter)
        gaussianFilter.useNextFrameForImageCapture()
        source?.processImage()
        return gaussianFilter.imageFromCurrentFramebuffer()
    }
}
However, a small delay is still possible (it depends on the image size), so if you can't preprocess the image before the view loads, I'd suggest resizing the image first, blurring and displaying the resulting thumbnail, and then, after the original image has been processed in a background queue, replacing the thumbnail with the blurred original.
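A rough sketch of that thumbnail idea (the target size is an arbitrary example; imageWithGaussianBlur() is the extension above):
import UIKit

func downscaled(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    // Draw the image into a smaller context; blurring the small result is much cheaper.
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// let thumbnail = downscaled(image, to: CGSize(width: 200, height: 150))
// backImage.image = thumbnail.imageWithGaussianBlur() // show this immediately
// ...then blur the full-size image on a background queue and swap it in.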
Core Image Programming Guide
Performance Best Practices
Follow these practices for best performance:
Don’t create a CIContext object every time you render. Contexts store a lot of state information; it’s more efficient to reuse them.
Evaluate whether your app needs color management. Don’t use it unless you need it. See Does Your App Need Color Management?
Avoid Core Animation animations while rendering CIImage objects with a GPU context. If you need to use both simultaneously, you can set up both to use the CPU.
Make sure images don’t exceed CPU and GPU limits. Image size limits for CIContext objects differ depending on whether Core Image uses the CPU or GPU. Check the limit by using the methods inputImageMaximumSize and outputImageMaximumSize.
Use smaller images when possible. Performance scales with the number of output pixels. You can have Core Image render into a smaller view, texture, or framebuffer. Allow Core Animation to upscale to display size.
Use Core Graphics or Image I/O functions to crop or downsample, such as the functions CGImageCreateWithImageInRect or CGImageSourceCreateThumbnailAtIndex (see the sketch after this list).
The UIImageView class works best with static images. If your app needs to get the best performance, use lower-level APIs.
Avoid unnecessary texture transfers between the CPU and GPU. Render to a rectangle that is the same size as the source image before applying a contents scale factor.
Consider using simpler filters that can produce results similar to algorithmic filters. For example, CIColorCube can produce output similar to CISepiaTone, and do so more efficiently.
Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.
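As an illustration of the crop/downsample point above, here is a sketch of Image I/O downsampling with CGImageSourceCreateThumbnailAtIndex (the function name and parameters are mine, not from the guide):
import UIKit
import ImageIO

func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    // Don't decode the full image just to read it from disk.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}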
Have a look at Brad Larson's GPUImage as well; you might want to use it. See this answer: https://stackoverflow.com/a/12336118/1378447
Could you present the view controller with the original image, perform the blur on a background thread, and use a nice effect to replace the image once the blurred one is ready?
Also, maybe you could use a UIVisualEffectView and see if performance is better?
Apple also released an example a while ago where they use UIImageEffects to perform a blur. It is written in Objective-C, but you could easily use it from Swift: https://developer.apple.com/library/content/samplecode/UIImageEffects/Listings/UIImageEffects_UIImageEffects_h.html
Make use of dispatch queues. This one worked for me:
func applyBlurEffect(image: UIImage){
    DispatchQueue.global(qos: .userInitiated).async {
        let imageToBlur = CIImage(image: image)!
        let blurfilter = CIFilter(name: "CIGaussianBlur")!
        blurfilter.setValue(10, forKey: kCIInputRadiusKey)
        blurfilter.setValue(imageToBlur, forKey: "inputImage")
        let resultImage = blurfilter.value(forKey: "outputImage") as! CIImage
        let croppedImage: CIImage = resultImage.cropping(to: CGRect(x: 0, y: 0, width: imageToBlur.extent.size.width, height: imageToBlur.extent.size.height))
        let context = CIContext(options: nil)
        let blurredImage = UIImage(cgImage: context.createCGImage(croppedImage, from: croppedImage.extent)!)
        DispatchQueue.main.async {
            self.backImage.image = blurredImage
        }
    }
}
But this method will still take 3-4 seconds before the image becomes blurred (though it won't block the loading of the other UI content). If you don't want that delay either, applying a UIBlurEffect to the image view will produce a similar effect:
func applyBlurEffect(image: UIImage){
    self.profileImageView.backgroundColor = UIColor.clear
    let blurEffect = UIBlurEffect(style: .extraLight)
    let blurEffectView = UIVisualEffectView(effect: blurEffect)
    blurEffectView.frame = self.backImage.bounds
    blurEffectView.alpha = 0.5
    blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight] // for supporting device rotation
    self.backImage.addSubview(blurEffectView)
}
By changing the blur effect style to .light or .dark and the alpha value from 0 to 1, you can get your desired effect.

Apply Core Image Filter (CIBumpDistortion) to only one part of an image + change radius of selection and intensity of CIFilter

I would like to copy some of the features displayed here:
So I would like the user to apply a CIBumpDistortion filter to an image and let him choose
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
I read some previously asked questions, but they were not really helpful, and some of the solutions sounded far from user-friendly (e.g. cropping the needed part and then reapplying it to the old image). I hope I am not asking for too much at once. Objective-C would be preferred, but any help/hint would be much appreciated! Thank you in advance!
I wrote a demo (iPad) project that lets you apply most supported CIFilters. It interrogates each filter for the parameters it needs and has built-in support for float values as well as points and colors. For the bump distortion filter it lets you select a center point, a radius, and an input scale.
The project is called CIFilterTest. You can download the project from Github at this link: https://github.com/DuncanMC/CIFilterTest
There is quite a bit of housekeeping in the app to support the general-purpose ability to use any supported filter, but it should give you enough information to implement your own bump filter as you're asking to do.
The approach I worked out for applying a filter and getting it to render without extending outside the bounds of the original image is to first apply a clamp filter (CIAffineClamp) to the image, set to the identity transform; take the output of that filter and feed it into the input of your "target" filter (the bump distortion filter in this case); and then feed that output into a crop filter (CICrop) whose bounds are set to the original image size.
The method to look for in the sample project is called showImage, in ViewController.m
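For reference, here is a compact, hypothetical Swift sketch of that clamp → filter → crop chain (clampedToExtent() and cropped(to:) are the convenience equivalents of CIAffineClamp with an identity transform and CICrop; the parameter values are only illustrative):
import CoreImage

func bumpDistorted(_ input: CIImage, center: CGPoint, radius: Double, scale: Double) -> CIImage {
    let clamped = input.clampedToExtent() // CIAffineClamp with an identity transform
    let bumped = clamped.applyingFilter("CIBumpDistortion", parameters: [
        kCIInputCenterKey: CIVector(x: center.x, y: center.y),
        kCIInputRadiusKey: radius,
        kCIInputScaleKey: scale
    ])
    return bumped.cropped(to: input.extent) // CICrop back to the original size
}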
You wrote:
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
Well, CIBumpDistortion has these attributes:
inputCenter is the center of the effect
inputRadius is the size of the circle selection
inputScale is the intensity
Simon
To show the bump:
You have to pass the location (kCIInputCenterKey) on the image along with the radius (the white circle in your case).
func appleBumpDistort(toImage currentImage: UIImage, radius: Float, intensity: Float) -> UIImage? {
    let context = CIContext()
    guard let currentFilter = CIFilter(name: "CIBumpDistortion"),
          let beginImage = CIImage(image: currentImage) else { return nil }
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(radius, forKey: kCIInputRadiusKey)
    currentFilter.setValue(intensity, forKey: kCIInputScaleKey)
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
    guard let image = currentFilter.outputImage else { return nil }
    if let cgimg = context.createCGImage(image, from: image.extent) {
        let processedImage = UIImage(cgImage: cgimg)
        return processedImage
    }
    return nil
}
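If you want the bump to follow the user's touch instead of the image center, one small, hypothetical helper (assuming the image is displayed at a 1:1 point-to-pixel mapping) is to convert the UIKit touch point, whose y-axis points down, into Core Image coordinates, whose y-axis points up:
// Hypothetical helper, not part of the answer above.
func filterCenter(for touchPoint: CGPoint, in image: UIImage) -> CIVector {
    return CIVector(x: touchPoint.x, y: image.size.height - touchPoint.y)
}

// currentFilter.setValue(filterCenter(for: touchPoint, in: currentImage), forKey: kCIInputCenterKey)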
