In a photo app (no video), I have a number of built-in and custom Metal CIFilters chained together in a class like so (I've left out the lines to set filter parameters, other than the input image):
var colorControlsFilter = CIFilter(name: "CIColorControls")!
var highlightShadowFilter = CIFilter(name: "CIHighlightShadowAdjust")!

func filter(image imageData: Data) -> UIImage?
{
    var outputImage: CIImage?
    let rawFilter = CIFilter(imageData: imageData, options: nil)
    outputImage = rawFilter?.outputImage
    colorControlsFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = colorControlsFilter.outputImage
    highlightShadowFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = highlightShadowFilter.outputImage
    // ... further filters chained the same way ...
    if let ciImage = outputImage
    {
        return renderImage(ciImage: ciImage)
    }
    return nil
}
func renderImage(ciImage: CIImage) -> UIImage?
{
    var outputImage: UIImage?
    let size = ciImage.extent.size
    UIGraphicsBeginImageContext(size)
    if let context = UIGraphicsGetCurrentContext()
    {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
Processing takes about a second.
Is this way of linking together output to input of the filters the most efficient? Or more generally: What performance optimisations could I do?
You should use a CIContext to render the image:
var context = CIContext() // create this once and re-use it for each image

func render(image ciImage: CIImage) -> UIImage? {
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    return cgImage.map(UIImage.init)
}
It's important to create the CIContext only once: it is expensive to create because it holds and caches all the (Metal) resources needed for rendering the image.
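On the chaining itself: passing one filter's outputImage into the next filter's input is already the right approach, since a CIImage is just a lightweight recipe and nothing is rendered until the context draws. For reference, a slightly more compact sketch of the same chain using CIImage.applyingFilter(_:parameters:), with placeholder parameter values:

let context = CIContext() // shared, created once

func filter(image imageData: Data) -> UIImage? {
    guard var image = CIFilter(imageData: imageData, options: nil)?.outputImage else { return nil }
    // Each call only appends to the lazy filter graph; no rendering happens yet.
    image = image.applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: 1.1])
    image = image.applyingFilter("CIHighlightShadowAdjust", parameters: ["inputHighlightAmount": 0.8])
    // Rendering happens exactly once, here:
    guard let cgImage = context.createCGImage(image, from: image.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}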
I'm trying to apply filters on images.
Applying the filter works great, but it mirrors the image vertically.
The bottom row of images calls the filter function after init.
The main image at the top gets the filter applied after pressing one at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer.
You can directly get the image from CIContext.
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
}
Is there an option to replicate func masking(_ mask: CGImage) -> CGImage? from Core Graphics using Core Image and one of its CIFilters? I've tried CIBlendWithMask and CIBlendWithAlphaMask without success. Most importantly, I need to preserve the alpha channel, so the image should be masked such that where the mask is black the image shows, and where the mask is white it becomes transparent.
My masking code:
extension UIImage {
    func masked(by mask: UIImage) -> UIImage {
        guard let maskRef = mask.cgImage,
              let selfRef = cgImage,
              let dataProvider = maskRef.dataProvider,
              let mask = CGImage(
                  maskWidth: maskRef.width,
                  height: maskRef.height,
                  bitsPerComponent: maskRef.bitsPerComponent,
                  bitsPerPixel: maskRef.bitsPerPixel,
                  bytesPerRow: maskRef.bytesPerRow,
                  provider: dataProvider,
                  decode: nil,
                  shouldInterpolate: false),
              let masked = selfRef.masking(mask) else {
            fatalError("couldn't create mask!")
        }
        let maskedImage = UIImage(cgImage: masked)
        return maskedImage
    }
}
I found a solution. The key is to use a mask image in grayscale format; RGB and other formats will not work. The resizing step can be removed if the image and mask have the same size. It works as an extension, and could be done as a subclass of CIImage too. Enjoy :)
Of course, it can be adapted as an extension on UIImage, CIImage, or CIFilter, however you like.
extension CGImage {
    func masked(by cgMask: CGImage) -> CIImage {
        let selfCI = CIImage(cgImage: self)
        let maskCI = CIImage(cgImage: cgMask)

        // Convert the grayscale mask to an alpha mask (white -> opaque, black -> transparent).
        let maskFilter = CIFilter(name: "CIMaskToAlpha")
        maskFilter?.setValue(maskCI, forKey: "inputImage")

        // Scale the mask to the image's size (can be skipped if they already match).
        let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
        scaleFilter?.setValue(maskFilter?.outputImage, forKey: "inputImage")
        scaleFilter?.setValue(selfCI.extent.height / maskCI.extent.height, forKey: "inputScale")

        // Only the background image is set, so opaque (white) mask areas come out
        // transparent, and transparent (black) areas show the image.
        let filter: CIFilter! = CIFilter(name: "CIBlendWithAlphaMask")
        filter.setValue(selfCI, forKey: "inputBackgroundImage")
        filter.setValue(scaleFilter?.outputImage, forKey: "inputMaskImage")
        return filter.outputImage!
    }
}
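A minimal usage sketch (rendering back to a UIImage via a shared CIContext; the function name is mine):

let context = CIContext()

func maskedImage(from image: UIImage, mask: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage, let cgMask = mask.cgImage else { return nil }
    let maskedCI = cgImage.masked(by: cgMask)
    guard let output = context.createCGImage(maskedCI, from: maskedCI.extent) else { return nil }
    return UIImage(cgImage: output)
}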
You need to create your own custom CIFilter. The tutorials:
Apple Docs
Apple Docs 2
Raywenderlich.com Tutorial
This is not trivial, but it pays off when you learn how to do it :)
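For reference, a minimal sketch of such a custom filter using the older CIKernel string language (the filter and kernel names are made up; for the question above, inverting the mask before CIMaskToAlpha would flip the black/white semantics):

class InvertMaskFilter: CIFilter {
    var inputImage: CIImage?

    // Inverts a grayscale mask: black becomes white and vice versa.
    static let kernel = CIColorKernel(source:
        "kernel vec4 invertMask(__sample s) { return vec4(vec3(1.0 - s.r), s.a); }"
    )!

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        return InvertMaskFilter.kernel.apply(extent: input.extent, arguments: [input])
    }
}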
I want to apply a blur effect to a UIImage as a slider's value changes.
I am using the CIGaussianBlur filter to blur the image.
The code is as follows
func applyBlurFilter(aCIImage: CIImage, val: CGFloat) -> UIImage {
    // CIAffineClamp extends the edge pixels infinitely, so the blur
    // doesn't fade to transparent at the borders.
    let clampFilter = CIFilter(name: "CIAffineClamp")
    clampFilter?.setDefaults()
    clampFilter?.setValue(aCIImage, forKey: kCIInputImageKey)

    let blurFilter = CIFilter(name: "CIGaussianBlur")
    blurFilter?.setValue(clampFilter?.outputImage, forKey: kCIInputImageKey)
    blurFilter?.setValue(val, forKey: kCIInputRadiusKey)

    let rect = aCIImage.extent
    if let output = blurFilter?.outputImage {
        if let cgimg = self.context.createCGImage(output, from: rect) {
            let processedImage = UIImage(cgImage: cgimg)
            return processedImage
        }
    }
    return image ?? self.image // `context` and `image` are properties of the enclosing class
}
Note: I've also tried the below code using CICrop filter
func applyBlurFilter(beginImage: CIImage, value: Float) -> UIImage? {
    let currentFilter = CIFilter(name: "CIGaussianBlur")
    currentFilter?.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter?.setValue(value, forKey: kCIInputRadiusKey)

    // Crop the blurred output back to the original extent.
    let cropFilter = CIFilter(name: "CICrop")
    cropFilter?.setValue(currentFilter?.outputImage, forKey: kCIInputImageKey)
    cropFilter?.setValue(CIVector(cgRect: beginImage.extent), forKey: "inputRectangle")

    let output = cropFilter?.outputImage
    let context = CIContext(options: nil)
    let cgimg = context.createCGImage(output!, from: beginImage.extent)
    return UIImage(cgImage: cgimg!)
}
The code works perfectly with some images, but with bigger images, the right edge of the image becomes transparent after applying the blur filter, which I don't want.
Note: I am running this on a device.
I have no idea what I'm doing wrong here.
The image whose right edge becomes transparent
Result after applying GaussianBlur to the above image
Thanks!!
Well, you're doing something wrong somewhere. The absolute best advice I can give you in your career is to create a small test project to experiment when you have such an issue - I've done this for 15 years in the Apple world, and it's been of enormous help.
I created a project here so you don't have to (this time). I downloaded the image, placed it in a UIImageView, and it looked perfect (as expected). I then used your code (except I had to create a context and guess at radius values) and ran it. The image looks perfect with a blur of 0, 5, 10, and 25.
Obviously the issue is something else you are doing. What I suggest is that you keep adding to the test project until you can find which step is the problem (the context? other image processing?)
This is the entirety of my code:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let im1 = UIImage(named: "Image.jpg")!
        let cim = CIImage(image: im1)!
        let im2 = applyBlurFilter(aCIImage: cim, val: 25)
        let iv = UIImageView(image: im2)
        iv.contentMode = .scaleToFill
        self.view.addSubview(iv)
    }

    func applyBlurFilter(aCIImage: CIImage, val: CGFloat) -> UIImage {
        let clampFilter = CIFilter(name: "CIAffineClamp")
        clampFilter?.setDefaults()
        clampFilter?.setValue(aCIImage, forKey: kCIInputImageKey)
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setValue(clampFilter?.outputImage, forKey: kCIInputImageKey)
        blurFilter?.setValue(val, forKey: kCIInputRadiusKey)
        let rect = aCIImage.extent
        if let output = blurFilter?.outputImage {
            let context = CIContext(options: nil)
            if let cgimg = context.createCGImage(output, from: rect) {
                let processedImage = UIImage(cgImage: cgimg)
                return processedImage
            }
        }
        fatalError()
    }
}
Here is the code and it is really slow, like seconds slow to render about 25 labels.
extension UILabel {
    func deBlur() {
        for subview in self.subviews {
            if subview.tag == 99999 {
                subview.removeFromSuperview()
            }
        }
    }

    func blur() {
        let blurRadius: CGFloat = 5.1

        // Snapshot the label into a UIImage.
        UIGraphicsBeginImageContext(bounds.size)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // Blur the snapshot with Core Image.
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setDefaults()
        let imageToBlur = CIImage(cgImage: (image?.cgImage)!)
        blurFilter?.setValue(imageToBlur, forKey: kCIInputImageKey)
        blurFilter?.setValue(blurRadius, forKey: "inputRadius")
        let outputImage: CIImage? = blurFilter?.outputImage

        // A new CIContext per call is expensive (see the update below).
        let context = CIContext(options: nil)
        let cgimg = context.createCGImage(outputImage!, from: (outputImage?.extent)!)
        layer.contents = cgimg!
    }
}
Any image / UIGraphics gurus know why this is so sloooow?
UPDATE: This line of code is the culprit. However, it is also needed to create the blur effect.
let cgimg = UILabel.context.createCGImage(outputImage!, from: (outputImage?.extent)!)
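createCGImage is where Core Image actually evaluates the whole filter graph, so it will always be the most expensive line. The usual mitigations are reusing a single CIContext (as the static UILabel.context in the update suggests) and moving the rendering off the main thread. A sketch of the latter, assuming a shared static context:

extension UILabel {
    static let sharedCIContext = CIContext() // assumed shared context, created once

    func blurAsync(radius: CGFloat = 5.1) {
        // Snapshot on the main thread (a UIKit requirement).
        UIGraphicsBeginImageContext(bounds.size)
        guard let gc = UIGraphicsGetCurrentContext() else { return }
        layer.render(in: gc)
        let snapshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        guard let cgSnapshot = snapshot?.cgImage else { return }

        // Do the expensive Core Image rendering off the main thread.
        DispatchQueue.global(qos: .userInitiated).async {
            let input = CIImage(cgImage: cgSnapshot)
            let blurred = input.applyingFilter("CIGaussianBlur",
                                               parameters: [kCIInputRadiusKey: radius])
            guard let result = UILabel.sharedCIContext.createCGImage(blurred, from: input.extent)
            else { return }
            DispatchQueue.main.async {
                self.layer.contents = result
            }
        }
    }
}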
I am trying to generate QR Code using iOS Core Image API:
func createQRForString(#data: NSData) -> CIImage! {
    var qrFilter = CIFilter(name: "CIQRCodeGenerator")
    qrFilter.setValue(data, forKey: "inputMessage")
    qrFilter.setValue("H", forKey: "inputCorrectionLevel")
    return qrFilter.outputImage
}

func createNonInterpolatedImageFromCIImage(image: CIImage, withScale scale: CGFloat) -> UIImage {
    let cgImage = CIContext(options: nil).createCGImage(image, fromRect: image.extent())
    UIGraphicsBeginImageContext(CGSizeMake(image.extent().size.width * scale, image.extent().size.height * scale))
    let context = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(context, kCGInterpolationNone)
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return scaledImage
}
And the following code in viewDidLoad method :
let data = "Hello World".dataUsingEncoding(NSUTF8StringEncoding)
if let image = createQRForString(data: data!) {
    let uiimage = createNonInterpolatedImageFromCIImage(image, withScale: 1.0)
    imageView.image = uiimage
} else {
    println("Error loading image")
}
But it neither prints "Error" nor shows qr code in the imageView.
Here is the solution:
override func viewDidLoad() {
    super.viewDidLoad()
    self.imgView.image = generateCode()
}

func generateCode() -> UIImage {
    let filter = CIFilter(name: "CIQRCodeGenerator")
    let data = "Hello World".dataUsingEncoding(NSUTF8StringEncoding)
    filter.setValue("H", forKey: "inputCorrectionLevel")
    filter.setValue(data, forKey: "inputMessage")
    let outputImage = filter.outputImage
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
    let image = UIImage(CGImage: cgImage, scale: 1.0, orientation: UIImageOrientation.Up)
    let resized = resizeImage(image!, withQuality: kCGInterpolationNone, rate: 5.0)
    return resized
}

func resizeImage(image: UIImage, withQuality quality: CGInterpolationQuality, rate: CGFloat) -> UIImage {
    let width = image.size.width * rate
    let height = image.size.height * rate
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, 0)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(context, quality)
    image.drawInRect(CGRectMake(0, 0, width, height))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized
}