CIFilter is too slow on Xcode simulators - iOS

I'm building a photo editing app that allows users to change the saturation, contrast, and brightness of an image with sliders. I'm doing so by applying a CIFilter to the UIImage. Here's the code for the adjustment:
extension UIImage {
    func withAdjustments(context: CIContext, saturation: CGFloat, contrast: CGFloat, brightness: CGFloat) -> UIImage {
        guard let cgImage = self.cgImage else { return self }
        guard let filter = CIFilter(name: "CIColorControls") else { return self }
        filter.setValue(CIImage(cgImage: cgImage), forKey: kCIInputImageKey)
        filter.setValue(saturation, forKey: kCIInputSaturationKey)
        filter.setValue(contrast, forKey: kCIInputContrastKey)
        filter.setValue(brightness, forKey: kCIInputBrightnessKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? CIImage else { return self }
        guard let newCgImage = context.createCGImage(result, from: result.extent) else { return self }
        return UIImage(cgImage: newCgImage, scale: 1, orientation: imageOrientation)
    }
}
And in my view:
struct EditScreen: View {
    let context = CIContext()
    @State var saturation: Double = 1
    @State var contrast: Double = 1
    @State var brightness: Double = 0
    ...
    var body: some View {
        ...
        Image(uiImage: inputImage!.withAdjustments(
            context: context,
            saturation: saturation,
            contrast: contrast,
            brightness: brightness)).resizable().scaledToFit()
        ...
Those @State properties are modified with Sliders:
Slider(value: $contrast, in: 0...3, step: 0.01)
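Worth noting: withAdjustments rebuilds the CIImage and the CIFilter on every slider tick. A variant that caches both would cut that per-frame work; here's a sketch (ImageAdjuster is a hypothetical wrapper, not from the original code):
// Sketch: cache the CIImage and filter so slider changes only re-render.
final class ImageAdjuster {
    private let context = CIContext()
    private let filter = CIFilter(name: "CIColorControls")!
    private let baseImage: CIImage

    init?(image: UIImage) {
        guard let cgImage = image.cgImage else { return nil }
        baseImage = CIImage(cgImage: cgImage)
    }

    func adjusted(saturation: CGFloat, contrast: CGFloat, brightness: CGFloat) -> UIImage? {
        filter.setValue(baseImage, forKey: kCIInputImageKey)
        filter.setValue(saturation, forKey: kCIInputSaturationKey)
        filter.setValue(contrast, forKey: kCIInputContrastKey)
        filter.setValue(brightness, forKey: kCIInputBrightnessKey)
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}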
It works, but it is laggy and performs poorly on the Xcode simulator. I haven't been able to try it on a physical device yet, but it seems to me that it would run fine there. However, I need it to work properly on the simulator, as I don't own a physical device to work with every day. Are there any simulator settings that may be limiting its capabilities and slowing it down?

Related

How do you apply Core Image filters to an onscreen image using Swift/MacOS or iOS and Core Image

Photos' editing adjustments provide a realtime view of the applied adjustments as they are applied. I wasn't able to find any samples of how to do this. All the examples seem to show that you apply the filters through a pipeline of sorts and then take the resulting image and update the screen with the result. See the code below.
Photos seems to show the adjustment applied to the onscreen image. How do they achieve this?
func editImage(inputImage: CGImage) {
    DispatchQueue.global().async {
        let beginImage = CIImage(cgImage: inputImage)
        guard let exposureOutput = self.exposureFilter(beginImage, ev: self.brightness) else {
            return
        }
        guard let vibranceOutput = self.vibranceFilter(exposureOutput, amount: self.vibranceAmount) else {
            return
        }
        guard let unsharpMaskOutput = self.unsharpMaskFilter(vibranceOutput, intensity: self.unsharpMaskIntensity, radius: self.unsharpMaskRadius) else {
            return
        }
        guard let sharpnessOutput = self.sharpenFilter(unsharpMaskOutput, sharpness: self.unsharpMaskIntensity) else {
            return
        }
        if let cgimg = self.context.createCGImage(sharpnessOutput, from: vibranceOutput.extent) {
            DispatchQueue.main.async {
                self.cgImage = cgimg
            }
        }
    }
}
OK, I just found the answer: use MTKView. It's working fine, except for getting the image to fill the view correctly!
For the benefit of others, here are the basics... I have yet to figure out how to position the image correctly in the view, but I can see the filter applied in realtime!
class ViewController: NSViewController, MTKViewDelegate {
    ....
    @objc dynamic var cgImage: CGImage? {
        didSet {
            if let cgimg = cgImage {
                ciImage = CIImage(cgImage: cgimg)
            }
        }
    }
    var ciImage: CIImage?

    // Metal resources
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var textureLoader: MTKTextureLoader!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        let metalView = MTKView()
        metalView.translatesAutoresizingMaskIntoConstraints = false
        self.imageView.addSubview(metalView)
        NSLayoutConstraint.activate([
            metalView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            metalView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            metalView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            metalView.topAnchor.constraint(equalTo: view.topAnchor)
        ])
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        metalView.delegate = self
        metalView.device = device
        metalView.framebufferOnly = false
        context = CIContext()
        textureLoader = MTKTextureLoader(device: device)
    }

    public func draw(in view: MTKView) {
        if let ciImage = self.ciImage {
            if let currentDrawable = view.currentDrawable {
                let commandBuffer = commandQueue.makeCommandBuffer()
                let inputImage = ciImage
                exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
                exposureFilter.setValue(ev, forKey: kCIInputEVKey)
                context.render(exposureFilter.outputImage!,
                               to: currentDrawable.texture,
                               commandBuffer: commandBuffer,
                               bounds: CGRect(origin: .zero, size: view.drawableSize),
                               colorSpace: colorSpace)
                commandBuffer?.present(currentDrawable)
                commandBuffer?.commit()
            }
        }
    }
}
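As for getting the image to fill the view: one common approach (a sketch, not from the original answer; it assumes aspect-fit is what's wanted) is to scale and center the CIImage to the drawable inside draw(in:):
// Inside draw(in:), replacing the context.render call above:
let output = exposureFilter.outputImage!
let drawableSize = view.drawableSize

// Aspect-fit scale factor
let scale = min(drawableSize.width / output.extent.width,
                drawableSize.height / output.extent.height)
let scaled = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

// Center the scaled image in the drawable
let dx = (drawableSize.width - scaled.extent.width) / 2 - scaled.extent.origin.x
let dy = (drawableSize.height - scaled.extent.height) / 2 - scaled.extent.origin.y
let centered = scaled.transformed(by: CGAffineTransform(translationX: dx, y: dy))

context.render(centered,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: CGRect(origin: .zero, size: drawableSize),
               colorSpace: colorSpace)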

Simple CIFilter Passthru with CGImage conversion returns black pixels

The following code:
let skView = SKView()
let scene = SKScene()
override func viewDidLoad() {
super.viewDidLoad()
self.scene.scaleMode = .resizeFill
self.skView.presentScene(self.scene)
self.scene.backgroundColor = UIColor.black
self.view.addSubview(skView)
self.scene.shouldEnableEffects = true
let sprite = SKSpriteNode(imageNamed: "NAME_THAT_PIC")
sprite.position = CGPoint(x: 300, y: 400)
let effectNode = SKEffectNode()
effectNode.filter = MyFilter()
effectNode.addChild(sprite)
will call this custom filter, which does nothing but create a CGImage from a CIImage, correctly invoking context.createCGImage() as reported by many people (CIImages are not pixel buffers).
MyFilter is reduced to a simple repro test:
class MyFilter: CIFilter {
    var inputImage: CIImage?
    var inputImageRect: CGRect? {
        guard let image = self.inputImage else {
            return nil
        }
        return image.extent
    }

    public override init() {
        super.init()
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override open var outputImage: CIImage? {
        guard let inputImage = self.inputImage else {
            return nil
        }
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(inputImage, from: inputImageRect!)
        // ... DO SOMETHING WITH CGIMAGE DATA ...
        return CIImage(cgImage: cgImage!)
    }
}
If I replace MyFilter() with a built-in filter, it works and shows the altered image, so the view controller code is fine. If instead I return inputImage directly from the output call, it also works and the image passed in is displayed.
When I dump the CGImage, the dimensions of the image are correct, but every pixel is black.
I tried creating a UIImage using UIImage(cgImage: cgImage!), but the same thing happens.
What is causing the pixels not to be loaded into the CGImage I generate from the inputImage?

Pixellating a UIImage returns UIImage with a different size

I'm using an extension to pixellate my images like the following:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
The problem is that the image represented by self here does not have the same size as the one I create using UIImage(ciImage: output).
For example, using that code:
print("image.size BEFORE : \(image.size)")
if let imagePixellated = image.pixellated(scale: 48) {
image = imagePixellated
print("image.size AFTER : \(image.size)")
}
will print:
image.size BEFORE : (400.0, 298.0)
image.size AFTER : (848.0, 644.0)
Not the same size and not the same ratio.
Any idea why?
EDIT:
I added some prints in the extension as follows:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    print("UIIMAGE : \(self.size)")
    print("ciImage.extent.size : \(ciImage.extent.size)")
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    print("output : \(output.extent.size)")
    return UIImage(ciImage: output)
}
And here are the outputs:
UIIMAGE : (250.0, 166.5)
ciImage.extent.size : (500.0, 333.0)
output : (548.0, 381.0)
You have two problems:
1. self.size is measured in points. self's size in pixels is actually self.size multiplied by self.scale.
2. The CIPixellate filter changes the bounds of its image.
To fix problem one, you can simply set the scale property of the returned UIImage to be the same as self.scale:
return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
But you'll find this still isn't quite right. That's because of problem two. For problem two, the simplest solution is to crop the output CIImage:
// Must use self.scale, to disambiguate from the scale parameter
let floatScale = CGFloat(self.scale)
let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
let cropRect = CGRect(origin: CGPoint.zero, size: pixelSize)
guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
This will give you an image of the size you want.
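Putting both fixes together, the whole method would look like this (same cropping(to:) API as the snippet above):
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self),
          let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    // Must use self.scale, to disambiguate from the scale parameter
    let floatScale = CGFloat(self.scale)
    let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
    let cropRect = CGRect(origin: .zero, size: pixelSize)
    guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
    return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
}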
Now, your next question may be, "why is there a thin, dark border around my pixellated images?" Good question! But ask a new question for that.

Adding multiple effects to a photo

How can I add several effects to a picture? I have the following code that adds an effect to a photo:
func applyEffects(name: String, n: Float) {
    filter.setValue(self.cImage, forKeyPath: kCIInputImageKey)
    filter.setValue(n, forKeyPath: name)
    let result = filter.value(forKey: kCIOutputImageKey) as! CIImage
    let cgImage = CIContext(options: nil).createCGImage(result, from: result.extent)
    self.customImage = UIImage(cgImage: cgImage!)
}

func brightness(n: Float) {
    self.applyEffects(name: kCIInputBrightnessKey, n: n)
}

func contrast(n: Float) {
    self.applyEffects(name: kCIInputContrastKey, n: n)
}

func saturation(n: Float) {
    self.applyEffects(name: kCIInputSaturationKey, n: n)
}
But when I want to apply the second effect, the first one disappears. How can I overlay two or more effects on each other?
I'm assuming you are using CIColorControls as your filter.
You need to pass all three values into your call:
// Defaults per the CIColorControls documentation: saturation 1, brightness 0, contrast 1
var brightness: Float = 0
var contrast: Float = 1
var saturation: Float = 1
func applyEffects() {
    filter.setValue(self.cImage, forKeyPath: kCIInputImageKey)
    filter.setValue(brightness, forKeyPath: kCIInputBrightnessKey)
    filter.setValue(contrast, forKeyPath: kCIInputContrastKey)
    filter.setValue(saturation, forKeyPath: kCIInputSaturationKey)
    let result = filter.value(forKey: kCIOutputImageKey) as! CIImage
    let cgImage = CIContext(options: nil).createCGImage(result, from: result.extent)
    self.customImage = UIImage(cgImage: cgImage!)
}

func brightness(n: Float) {
    brightness = n
    applyEffects()
}

func contrast(n: Float) {
    contrast = n
    applyEffects()
}

func saturation(n: Float) {
    saturation = n
    applyEffects()
}
A suggestion:
If you are trying to use "real-time" updating via UISliders, use a GLKView and send in the CIImage directly. It uses the GPU, and performance on a device is greatly increased. You can always create a UIImage for saving, messaging, etc.
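(GLKit has since been deprecated in favor of Metal, but for reference, a minimal sketch of the GLKView pattern, with hypothetical names, looks like this:)
import GLKit
import CoreImage

// Sketch: a GLKView-backed preview that renders a CIImage on the GPU.
class FilterPreviewController: UIViewController, GLKViewDelegate {
    let eaglContext = EAGLContext(api: .openGLES2)!
    lazy var ciContext = CIContext(eaglContext: eaglContext)
    var glkView: GLKView!
    var ciImage: CIImage? {
        didSet { glkView?.setNeedsDisplay() }   // redraw when the image changes
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        glkView = GLKView(frame: view.bounds, context: eaglContext)
        glkView.delegate = self
        glkView.enableSetNeedsDisplay = true
        view.addSubview(glkView)
    }

    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        guard let image = ciImage else { return }
        // drawableWidth/drawableHeight are in pixels, not points
        let destination = CGRect(x: 0, y: 0,
                                 width: view.drawableWidth, height: view.drawableHeight)
        ciContext.draw(image, in: destination, from: image.extent)
    }
}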

How can I convert a UIImage to grayscale in Swift using CIFilter?

I am building a scanner component for an iOS app. So far I have the result image cropped and in the correct perspective.
Now I need to turn the color image into a black-and-white "scanned" document.
I tried "CIPhotoEffectNoir", but it is more grayscale than truly black and white. I want a full-contrast image with 100% black and 100% white.
How can I achieve that?
Thanks
You can use CIColorControls and set the contrast key kCIInputContrastKey to increase the black/white contrast, as follows:
Xcode 9 • Swift 4
extension String {
    static let colorControls = "CIColorControls"
}

extension UIImage {
    var coreImage: CIImage? { return CIImage(image: self) }
}

extension CIImage {
    var uiImage: UIImage? { return UIImage(ciImage: self) }
    func applying(contrast value: NSNumber) -> CIImage? {
        return applyingFilter(.colorControls, parameters: [kCIInputContrastKey: value])
    }
    func renderedImage() -> UIImage? {
        guard let image = uiImage else { return nil }
        return UIGraphicsImageRenderer(size: image.size,
                                       format: image.imageRendererFormat).image { _ in
            image.draw(in: CGRect(origin: .zero, size: image.size))
        }
    }
}
let url = URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!
do {
    if let coreImage = UIImage(data: try Data(contentsOf: url))?.coreImage,
       let increasedContrast = coreImage.applying(contrast: 1.5) {
        imageView.image = increasedContrast.uiImage
        // if you need to convert your image to data (JPEG/PNG), you would need to render the CIImage using the renderedImage method on CIImage
    }
} catch {
    print(error)
}
To convert from color to grayscale, you can set the saturation key kCIInputSaturationKey to zero:
extension CIImage {
    func applying(saturation value: NSNumber) -> CIImage? {
        return applyingFilter(.colorControls, parameters: [kCIInputSaturationKey: value])
    }
    var grayscale: CIImage? { return applying(saturation: 0) }
}
let url = URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!
do {
    if let coreImage = UIImage(data: try Data(contentsOf: url))?.coreImage,
       let grayscale = coreImage.grayscale {
        // use grayscale image here
        imageView.image = grayscale.uiImage
    }
} catch {
    print(error)
}
Desaturating will convert your image to grayscale. Increasing the contrast will then push those grays out to the extremes, i.e. black and white.
You can use CIColorControls:
let ciImage = CIImage(image: image)!
let blackAndWhiteImage = ciImage.applyingFilter("CIColorControls", withInputParameters: ["inputSaturation": 0, "inputContrast": 5])
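To put the result on screen, wrap it back in a UIImage (imageView here is whatever image view you are using):
imageView.image = UIImage(ciImage: blackAndWhiteImage)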
(The original answer included example images: the original, the result with inputContrast = 1 (the default), and the result with inputContrast = 5.)
In Swift 5.1 I have written an extension method for macOS that also converts to and from NSImage. It uses saturation and contrast to convert the image, and I have abstracted a black-and-white helper func.
extension NSImage {
    func blackAndWhite() -> NSImage? {
        return applying(saturation: 0, inputContrast: 5, image: self)
    }

    func applying(saturation value: NSNumber, inputContrast inputContrastValue: NSNumber, image: NSImage) -> NSImage? {
        let ciImage = CIImage(data: image.tiffRepresentation!)!
        let blackAndWhiteImage = ciImage.applyingFilter("CIColorControls", parameters: ["inputSaturation": value, "inputContrast": inputContrastValue])
        let rep = NSCIImageRep(ciImage: blackAndWhiteImage)
        let nsImage = NSImage(size: rep.size)
        nsImage.addRepresentation(rep)
        return nsImage
    }
}
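Usage is then a one-liner (the image name here is hypothetical):
let scanned = NSImage(named: "document")?.blackAndWhite()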
