GaussianBlur image with scaleAspectFill - iOS

I want to apply a Gaussian blur to an image, but I also want to keep my image view's scaleAspectFill content mode.
I am blurring my image with the following code:
func getImageWithBlur(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")
    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
But this is not working with scaleAspectFill mode.
Both image views use the same image, but when I blur the second one, as you can see, it adds space at the top and bottom. What should I do so the blurred image fits properly as well?

When you apply a CIGaussianBlur filter, the resulting image is larger than the original, because the blur extends outward past the edges of the image.
To get back an image at the original size, you need to use the original image extent.
Note, though, the blur is applied both inside and outside the edge, so if you clip only to the original extent, the edge will effectively "fade out". To avoid the edges altogether, you'll need to clip farther in.
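You can observe the size change by logging the extents before and after the filter. This is a minimal sketch (the asset name is hypothetical, and the exact output extent depends on the radius):

```swift
import UIKit
import CoreImage

// Minimal sketch: CIGaussianBlur enlarges the output extent past the
// original image bounds. "someImage" is a hypothetical asset name.
if let image = UIImage(named: "someImage"),
   let input = CIImage(image: image),
   let filter = CIFilter(name: "CIGaussianBlur") {
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(6.5, forKey: kCIInputRadiusKey)
    if let output = filter.outputImage {
        print("input extent: \(input.extent)")   // origin at (0, 0)
        print("output extent: \(output.extent)") // larger, with a negative origin
    }
}
```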
Here is an example, using a UIImage extension to blur either with or without blurred edges:
extension UIImage {
    func blurredImageWithBlurredEdges(inputRadius: CGFloat) -> UIImage? {
        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }
        // UIKit and UIImageView's .contentMode don't play well with a
        // CIImage-only image, so back the returned UIImage with a CGImage
        let context = CIContext()
        // crop to the original extent, because the blur enlarged the image
        guard let final = context.createCGImage(output, from: beginImage.extent) else {
            return nil
        }
        return UIImage(cgImage: final)
    }
    func blurredImageWithClippedEdges(inputRadius: CGFloat) -> UIImage? {
        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }
        // UIKit and UIImageView's .contentMode don't play well with a
        // CIImage-only image, so back the returned UIImage with a CGImage
        let context = CIContext()
        // crop because the blur changed the size of the image
        // to clear the blurred edges, use a fromRect that is
        // the original image extent insetBy (negative) 1/2 of the new extent origins
        let newExtent = beginImage.extent.insetBy(dx: -output.extent.origin.x * 0.5, dy: -output.extent.origin.y * 0.5)
        guard let final = context.createCGImage(output, from: newExtent) else {
            return nil
        }
        return UIImage(cgImage: final)
    }
}
And here is an example view controller showing how to use it, and the different results:
class BlurTestViewController: UIViewController {
    let imgViewA = UIImageView()
    let imgViewB = UIImageView()
    let imgViewC = UIImageView()
    override func viewDidLoad() {
        super.viewDidLoad()
        let stackView = UIStackView()
        stackView.axis = .vertical
        stackView.alignment = .fill
        stackView.distribution = .fillEqually
        stackView.spacing = 8
        stackView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stackView)
        NSLayoutConstraint.activate([
            stackView.widthAnchor.constraint(equalToConstant: 200.0),
            stackView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
        [imgViewA, imgViewB, imgViewC].forEach { v in
            v.backgroundColor = .red
            v.contentMode = .scaleAspectFill
            v.clipsToBounds = true
            // square image views (1:1 ratio)
            v.heightAnchor.constraint(equalTo: v.widthAnchor, multiplier: 1.0).isActive = true
            stackView.addArrangedSubview(v)
        }
    }
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard let imgA = UIImage(named: "bkg640x360") else {
            fatalError("Could not load image!")
        }
        guard let imgB = imgA.blurredImageWithBlurredEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Blurred Edges")
        }
        guard let imgC = imgA.blurredImageWithClippedEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Clipped Edges")
        }
        imgViewA.image = imgA
        imgViewB.image = imgB
        imgViewC.image = imgC
    }
}
Using this original 640x360 image, with 200 x 200 image views:
We get this output:
Also worth mentioning - although I'm sure you've already noticed - these functions run very slowly on the simulator, but very quickly on an actual device.

I believe your issue is that the convolution kernel of the CIFilter is creating additional data as it applies the blur to the edges of the image. The CIContext isn't a strictly bounded space and is able to use area around the image to fully process all output. So rather than using output.extent in createCGImage, use the size of the input image (converted to a CGRect).
To account for the blurred alpha channel along the image edge, you can use the CIImage.unpremultiplyingAlpha().settingAlphaOne() methods to flatten the image before returning.
func getImageWithBlur(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else { return nil }
    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")
    let rect = CGRect(x: 0.0, y: 0.0, width: image.size.width, height: image.size.height)
    guard let output = currentFilter.outputImage?.unpremultiplyingAlpha().settingAlphaOne(in: rect) else { return nil }
    guard let cgimg = context.createCGImage(output, from: rect) else { return nil }
    print("image.size: \(image.size)")
    print("output.extent: \(output.extent)")
    return UIImage(cgImage: cgimg)
}

Related

UIGraphicsImageRenderer mirrors image after applying filter

I'm trying to apply filters to images.
Applying the filter works, but it mirrors the image vertically.
The bottom row of images calls the filter function after init.
The main image at the top gets the filter applied after pressing one at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer.
You can directly get the image from CIContext.
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    let filteredImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return filteredImage
}

Applying CIGaussianBlur to UIImage not working properly

I want to apply a blur effect to a UIImage as a slider's value changes.
I am using the CIGaussianBlur filter to blur the image.
The code is as follows:
func applyBlurFilter(aCIImage: CIImage, val: CGFloat) -> UIImage {
    let clampFilter = CIFilter(name: "CIAffineClamp")
    clampFilter?.setDefaults()
    clampFilter?.setValue(aCIImage, forKey: kCIInputImageKey)
    let blurFilter = CIFilter(name: "CIGaussianBlur")
    blurFilter?.setValue(clampFilter?.outputImage, forKey: kCIInputImageKey)
    blurFilter?.setValue(val, forKey: kCIInputRadiusKey)
    let rect = aCIImage.extent
    if let output = blurFilter?.outputImage {
        if let cgimg = self.context.createCGImage(output, from: rect) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }
    return image ?? self.image
}
Note: I've also tried the code below, using the CICrop filter:
func applyBlurFilter(beginImage: CIImage, value: Float) -> UIImage? {
    let currentFilter = CIFilter(name: "CIGaussianBlur")
    currentFilter?.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter?.setValue(value, forKey: kCIInputRadiusKey)
    let cropFilter = CIFilter(name: "CICrop")
    cropFilter?.setValue(currentFilter!.outputImage, forKey: kCIInputImageKey)
    cropFilter?.setValue(CIVector(cgRect: beginImage.extent), forKey: "inputRectangle")
    let output = cropFilter?.outputImage
    let context = CIContext(options: nil)
    let cgimg = context.createCGImage(output!, from: beginImage.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    return processedImage
}
The code works perfectly with some images, but with bigger images, applying the blur filter makes the image's right edge transparent, which I don't want.
Note: I am running this on a device.
What am I doing wrong here? I have no idea.
The image whose right edge gets transparent:
Result after applying the Gaussian blur to the above image:
Thanks!!
Well, you're doing something wrong somewhere. The absolute best advice I can give you in your career is to create a small test project to experiment in whenever you hit such an issue - I've done this for 15 years in the Apple world, and it's been of enormous help.
I created a project here so you don't have to (this time). I downloaded the image and placed it in an image view, and it looked perfect (as expected). I then used your code (except I had to create a context and guess at radius values), then ran it. The image looks perfect with a blur of 0, 5, 10, and 25.
Obviously the issue is something else you are doing. What I suggest is that you keep adding to the test project until you find which step is the problem (the context? other image processing?).
This is the entirety of my code:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let im1 = UIImage(named: "Image.jpg")!
        let cim = CIImage(image: im1)!
        let im2 = applyBlurFilter(aCIImage: cim, val: 25)
        let iv = UIImageView(image: im2)
        iv.contentMode = .scaleToFill
        self.view.addSubview(iv)
    }
    func applyBlurFilter(aCIImage: CIImage, val: CGFloat) -> UIImage {
        let clampFilter = CIFilter(name: "CIAffineClamp")
        clampFilter?.setDefaults()
        clampFilter?.setValue(aCIImage, forKey: kCIInputImageKey)
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setValue(clampFilter?.outputImage, forKey: kCIInputImageKey)
        blurFilter?.setValue(val, forKey: kCIInputRadiusKey)
        let rect = aCIImage.extent
        if let output = blurFilter?.outputImage {
            let context = CIContext(options: nil)
            if let cgimg = context.createCGImage(output, from: rect) {
                let processedImage = UIImage(cgImage: cgimg)
                return processedImage
            }
        }
        fatalError()
    }
}

CIFilters to Show Clipped Highlights and Shadows

In an image editing app, I am trying to show clipped highlights and shadows using CIFilters.
Filter List
I know there isn't a single filter that does this directly; it will have to be a combination of a few.
Any ideas? Thanks in advance.
CIFilters can be chained one after another. For example, I can first adjust the shadows on an image and then crop it.
func addFilters(toImage image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let ciImage = CIImage(cgImage: cgImage)
    // add the shadow-adjust filter
    guard let shadowAdjust = CIFilter(name: "CIHighlightShadowAdjust") else { return nil }
    shadowAdjust.setValue(ciImage, forKey: kCIInputImageKey)
    shadowAdjust.setValue(1, forKey: "inputHighlightAmount")
    shadowAdjust.setValue(3, forKey: "inputShadowAmount")
    guard let output1 = shadowAdjust.outputImage else { return nil }
    // now crop the adjusted output to 80%
    guard let crop = CIFilter(name: "CICrop") else { return nil }
    crop.setValue(output1, forKey: kCIInputImageKey)
    crop.setValue(CIVector(x: output1.extent.origin.x, y: output1.extent.origin.y, z: output1.extent.size.width * 0.8, w: output1.extent.size.height * 0.8), forKey: "inputRectangle")
    guard let output2 = crop.outputImage else { return UIImage(ciImage: output1) }
    // the image will be cropped and shadow-adjusted
    return UIImage(ciImage: output2)
}

createCGImage returns nil when attempting to rotate/crop an image using CIFilter

I am working on applying multiple CIFilters to an image but I keep getting a nil result when I apply the second filter. I've created crop and rotate functions as follows:
func crop(_ image: CIImage) -> CIImage? {
    let cropRectangle = CGRect(x: 0, y: 0, width: 0.5 * image.extent.width, height: 0.5 * image.extent.height)
    guard let filter = CIFilter(name: "CICrop") else { print("Could not create filter."); return nil }
    filter.setValue(image, forKey: "inputImage")
    filter.setValue(cropRectangle, forKey: "inputRectangle")
    return filter.outputImage
}
func rotate(image: CIImage, rotation: CGFloat) -> CIImage? {
    guard let filter = CIFilter(name: "CIAffineTransform") else { print("Unable to generate filter"); return nil }
    let rotationTransform = CGAffineTransform(rotationAngle: rotation)
    filter.setValue(image, forKey: "inputImage")
    filter.setValue(rotationTransform, forKey: "inputTransform")
    return filter.outputImage
}
If I apply crop and then rotation, context.createCGImage works fine, but when I apply rotate and then crop, it returns nil. I have checked the .extent of the CIImage I am attempting to crop to make sure the crop rectangle is within its bounds. Open to ideas. Here's my call to the two functions above:
let context = CIContext(options: nil)
guard let ciImage = CIImage(image: #imageLiteral(resourceName: "sample3")) else {fatalError("Error on image generation!")}
guard let ciRotated = self.rotate(image: ciImage, rotation: CGFloat(Double.pi*3/2)) else {print("Could not rotate.");return}
guard let ciCropped = self.crop(ciRotated) else {print("Error cropping.");return}
guard let final = context.createCGImage(ciCropped, from: ciCropped.extent) else {print("Error on CG gen.");return}
The problem was that the origin of the rotated image was no longer at (0, 0), so the crop rectangle was actually out of bounds, making the crop function return a 0 x 0 image. I added a translation to move the origin back to (0, 0) and everything worked.
Similar issue here: Core Image: after using CICrop, applying a compositing filter doesn't line up
The translation function I created:
func translation(image: CIImage, x: CGFloat, y: CGFloat) -> CIImage? {
    guard let filter = CIFilter(name: "CIAffineTransform") else { print("Unable to generate filter"); return nil }
    let translate = CGAffineTransform(translationX: x, y: y)
    filter.setValue(image, forKey: "inputImage")
    filter.setValue(translate, forKey: "inputTransform")
    return filter.outputImage
}
Final call to this example:
let context = CIContext(options: nil)
guard let ciImage = CIImage(image: #imageLiteral(resourceName: "sample3")) else {fatalError("Error on image generation!")}
guard let ciRotated = self.rotate(image: ciImage, rotation: CGFloat(Double.pi*3/2)) else {print("Could not rotate.");return}
guard let ciTranslated = self.translation(image: ciRotated, x: 0, y: ciRotated.extent.height) else {print("Unable to translate."); return}
guard let ciCropped = self.crop(ciTranslated) else {print("Error cropping.");return}
guard let final = context.createCGImage(ciCropped, from: ciCropped.extent) else {print("Error on CG gen.");return}

Any way to speed this UILabel blur code?

Here is the code, and it is really slow: it takes seconds to render about 25 labels.
extension UILabel {
    func deBlur() {
        for subview in self.subviews {
            if subview.tag == 99999 {
                subview.removeFromSuperview()
            }
        }
    }
    func blur() {
        let blurRadius: CGFloat = 5.1
        UIGraphicsBeginImageContext(bounds.size)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setDefaults()
        let imageToBlur = CIImage(cgImage: (image?.cgImage)!)
        blurFilter?.setValue(imageToBlur, forKey: kCIInputImageKey)
        blurFilter?.setValue(blurRadius, forKey: "inputRadius")
        let outputImage: CIImage? = blurFilter?.outputImage
        let context = CIContext(options: nil)
        let cgimg = context.createCGImage(outputImage!, from: (outputImage?.extent)!)
        layer.contents = cgimg!
    }
}
Any image / UIGraphics gurus know why this is so sloooow?
UPDATE: This line of code is the culprit. However, it is also needed to create the blur effect.
let cgimg = UILabel.context.createCGImage(outputImage!, from: (outputImage?.extent)!)
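A likely contributor to the slowness is that `blur()` creates a brand-new `CIContext` on every call; context creation is expensive, while rendering through an existing context is comparatively cheap. A minimal sketch of one common mitigation - creating the context once as a static property and reusing it for every label - assuming the rest of the original `blur()` logic stays the same (the `fastBlur` name is illustrative, not from the original post):

```swift
import UIKit
import CoreImage

extension UILabel {
    // Creating a CIContext is expensive; make one and reuse it for every label.
    private static let sharedBlurContext = CIContext(options: nil)

    func fastBlur(radius: CGFloat = 5.1) {
        // Snapshot the label's layer into a UIImage
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        layer.render(in: ctx)
        let snapshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        guard let cgSnapshot = snapshot?.cgImage else { return }
        let input = CIImage(cgImage: cgSnapshot)
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setValue(input, forKey: kCIInputImageKey)
        blurFilter?.setValue(radius, forKey: kCIInputRadiusKey)

        // Render through the shared context instead of a fresh one per call
        guard let output = blurFilter?.outputImage,
              let blurred = UILabel.sharedBlurContext.createCGImage(output, from: output.extent) else { return }
        layer.contents = blurred
    }
}
```

Whether this alone brings the time down to acceptable levels would need measuring, but it removes the per-label context allocation that the update above identifies as the culprit.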
