CIFilter applied to an image not working - swift - ios

I am trying to apply a CIFilter to an image that is already displayed in my imageView in my single-view application, but the filter does not appear to have been applied. Can someone please advise? The displayed image appears exactly the same as the input.
SWIFT:
let filter = CIFilter.init(name: "CIHueAdjust")
let context = CIContext()
var extent: CGRect!
var scaleFactor: CGFloat!

@IBOutlet weak var img: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    let ciImage = CIImage.init(image: img.image!)
    filter?.setDefaults()
    filter?.setValue(ciImage, forKeyPath: kCIInputImageKey)
    let result = filter?.outputImage
    print("result: \(result)")
    let image = UIImage.init(cgImage: context.createCGImage(result!, from: result!.extent)!)
    img.image = image
}
CONSOLE:
result: Optional(<CIImage: 0x1c001a7a0 extent [0 0 2016 1512]>
affine [1 0 0 -1 0 1512] extent=[0 0 2016 1512] opaque
colormatch "sRGB IEC61966-2.1"_to_workingspace extent=[0 0 2016 1512] opaque
IOSurface 0x1c401a780(169) seed:1 YCC420f 601 alpha_one extent=[0 0 2016 1512] opaque

It's in the comments, but here's the full (and better formatted) answer on how to set up a call to CIHueAdjust, using Swift 4. The key point is that CIHueAdjust's default inputAngle is 0, so calling only setDefaults() leaves the output identical to the input; you need to set a non-zero angle (in radians):
let filter = CIFilter(name: "CIHueAdjust")
let context = CIContext()
var extent: CGRect!
var scaleFactor: CGFloat!

@IBOutlet weak var img: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    let ciImage = CIImage(image: img.image!)
    // Note: you may use kCIInputImageKey for "inputImage"
    filter?.setValue(ciImage, forKey: "inputImage")
    filter?.setValue(Float(1), forKey: "inputAngle")
    let result = filter?.outputImage
    let image = UIImage(cgImage: context.createCGImage(result!, from: result!.extent)!)
    img.image = image
}
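For safety, here is a sketch of the same call with the force-unwraps replaced by guards (same context, img, and filter name as above):

guard let filter = CIFilter(name: "CIHueAdjust"),
      let inputImage = img.image,
      let ciImage = CIImage(image: inputImage) else { return }
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(Float(1), forKey: "inputAngle") // angle is in radians
if let result = filter.outputImage,
   let cgImage = context.createCGImage(result, from: result.extent) {
    img.image = UIImage(cgImage: cgImage)
}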

Related

How do you apply Core Image filters to an onscreen image using Swift/MacOS or iOS and Core Image

Photos' editing adjustments provide a realtime view of the adjustments as they are applied. I wasn't able to find any samples of how to do this. All the examples seem to apply the filters through a pipeline of sorts, take the resulting image, and update the screen with the result. See the code below.
Photos seems to show the adjustment applied to the onscreen image. How do they achieve this?
func editImage(inputImage: CGImage) {
    DispatchQueue.global().async {
        let beginImage = CIImage(cgImage: inputImage)
        guard let exposureOutput = self.exposureFilter(beginImage, ev: self.brightness) else {
            return
        }
        guard let vibranceOutput = self.vibranceFilter(exposureOutput, amount: self.vibranceAmount) else {
            return
        }
        guard let unsharpMaskOutput = self.unsharpMaskFilter(vibranceOutput, intensity: self.unsharpMaskIntensity, radius: self.unsharpMaskRadius) else {
            return
        }
        guard let sharpnessOutput = self.sharpenFilter(unsharpMaskOutput, sharpness: self.unsharpMaskIntensity) else {
            return
        }
        if let cgimg = self.context.createCGImage(sharpnessOutput, from: vibranceOutput.extent) {
            DispatchQueue.main.async {
                self.cgImage = cgimg
            }
        }
    }
}
OK, I just found the answer - use MTKView, which is working fine except for getting the image to fill the view correctly!
For the benefit of others, here are the basics... I have yet to figure out how to position the image correctly in the view - but I can see the filter applied in realtime!
class ViewController: NSViewController, MTKViewDelegate {
    ....
    @objc dynamic var cgImage: CGImage? {
        didSet {
            if let cgimg = cgImage {
                ciImage = CIImage(cgImage: cgimg)
            }
        }
    }
    var ciImage: CIImage?

    // Metal resources
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var textureLoader: MTKTextureLoader!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        let metalView = MTKView()
        metalView.translatesAutoresizingMaskIntoConstraints = false
        self.imageView.addSubview(metalView)
        NSLayoutConstraint.activate([
            metalView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            metalView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            metalView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            metalView.topAnchor.constraint(equalTo: view.topAnchor)
        ])
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        metalView.delegate = self
        metalView.device = device
        metalView.framebufferOnly = false
        context = CIContext()
        textureLoader = MTKTextureLoader(device: device)
    }

    public func draw(in view: MTKView) {
        if let ciImage = self.ciImage {
            if let currentDrawable = view.currentDrawable {
                let commandBuffer = commandQueue.makeCommandBuffer()
                let inputImage = ciImage
                exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
                exposureFilter.setValue(ev, forKey: kCIInputEVKey)
                context.render(exposureFilter.outputImage!,
                               to: currentDrawable.texture,
                               commandBuffer: commandBuffer,
                               bounds: CGRect(origin: .zero, size: view.drawableSize),
                               colorSpace: colorSpace)
                commandBuffer?.present(currentDrawable)
                commandBuffer?.commit()
            }
        }
    }
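For the open question of making the image fill the view, here is a minimal sketch of one approach (my assumption, not part of the original answer): aspect-fit the CIImage into the drawable before rendering, applying a helper like this to exposureFilter.outputImage! using view.drawableSize:

// Scale and center a CIImage inside a target size (e.g. view.drawableSize).
func aspectFit(_ image: CIImage, into size: CGSize) -> CIImage {
    let scale = min(size.width / image.extent.width,
                    size.height / image.extent.height)
    let scaled = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // Translate so the scaled image is centered in the target rect.
    let dx = (size.width - scaled.extent.width) / 2 - scaled.extent.origin.x
    let dy = (size.height - scaled.extent.height) / 2 - scaled.extent.origin.y
    return scaled.transformed(by: CGAffineTransform(translationX: dx, y: dy))
}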

Render a MTIImage

Please don't judge me, I'm just learning Swift.
Recently I installed the MetalPetal framework and I followed the instructions:
https://github.com/MetalPetal/MetalPetal#example-code
But I get an error because of MTIContext. Maybe I have to declare something more for MetalPetal?
My code:
import UIKit
import MetalPetal
import CoreGraphics

class ViewController: UIViewController {

    @IBOutlet weak var image1: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        weak var image: UIImage?
        image = image1.image
        var ciImage = CIImage(image: image!)
        var cgImage1 = convertCIImageToCGImage(inputImage: ciImage!)
        let imageFromCGImage = MTIImage(cgImage: cgImage1!)
        let inputImage = imageFromCGImage
        let filter = MTISaturationFilter()
        filter.saturation = 1
        filter.inputImage = inputImage
        let outputImage = filter.outputImage
        let context = MTIContext()
        do {
            try context.render(outputImage, to: pixelBuffer)
            var image3: CIImage? = try context.makeCIImage(from: outputImage!)
            //context.makeCIImage(from: image)
            //context.makeCGImage(from: image)
        } catch {
            print(error)
        }
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(inputImage, from: inputImage.extent) {
            return cgImage
        }
        return nil
    }
}
@YuAo
Input Image
A UIImage is backed by either an underlying Quartz image (retrievable with cgImage) or an underlying Core Image (retrievable from UIImage with ciImage).
MTIImage offers constructors for both types.
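For example, a minimal sketch (assuming a UIImage named uiImage; MTIImage(cgImage:) appears in the code in this thread, while the CIImage-based initializer is my reading of the MetalPetal headers):

var mtiImage: MTIImage?
if let cgImage = uiImage.cgImage {
    mtiImage = MTIImage(cgImage: cgImage)
} else if let ciImage = uiImage.ciImage {
    mtiImage = MTIImage(ciImage: ciImage)
}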
MTIContext
An MTIContext must be initialized with a device, which can be retrieved by calling MTLCreateSystemDefaultDevice().
Rendering
Rendering to a pixel buffer is not needed; we can get the result by calling makeCGImage.
Test
I've taken your source code above and adapted it slightly to the points mentioned.
I added a second UIImageView to show the result of the filtering, and I changed the saturation to 0 to verify that the filter works.
If the GPU or shaders are involved, it makes sense to test on a real device rather than on the simulator.
The result looks like this: in the upper area you see the original JPG, in the lower area the filter is applied.
Swift
The simplified Swift code that produces this result looks like this:
override func viewDidLoad() {
    super.viewDidLoad()

    guard let image = UIImage(named: "regensburg.jpg") else { return }
    guard let cgImage = image.cgImage else { return }

    imageView1.image = image

    let filter = MTISaturationFilter()
    filter.saturation = 0
    filter.inputImage = MTIImage(cgImage: cgImage)

    if let device = MTLCreateSystemDefaultDevice(),
       let outputImage = filter.outputImage {
        do {
            let context = try MTIContext(device: device)
            let filteredImage = try context.makeCGImage(from: outputImage)
            imageView2.image = UIImage(cgImage: filteredImage)
        } catch {
            print(error)
        }
    }
}
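Note that MTIContext(device:) is a throwing initializer (hence the try and the do/catch) and requires a device argument; the parameterless MTIContext() in the question is presumably what triggered the original error.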

Simple CIFilter Passthru with CGImage conversion returns black pixels

The following code:
let skView = SKView()
let scene = SKScene()

override func viewDidLoad() {
    super.viewDidLoad()
    self.scene.scaleMode = .resizeFill
    self.skView.presentScene(self.scene)
    self.scene.backgroundColor = UIColor.black
    self.view.addSubview(skView)
    self.scene.shouldEnableEffects = true
    let sprite = SKSpriteNode(imageNamed: "NAME_THAT_PIC")
    sprite.position = CGPoint(x: 300, y: 400)
    let effectNode = SKEffectNode()
    effectNode.filter = MyFilter()
    effectNode.addChild(sprite)
will call this custom filter, which does nothing but create a CGImage from a CIImage, correctly invoking context.createCGImage() as reported by many people (CIImages are not pixel-buffered).
MyFilter is reduced to a simple repro test:
class MyFilter: CIFilter {
    var inputImage: CIImage?
    var inputImageRect: CGRect? {
        guard let image = self.inputImage else {
            return nil
        }
        return image.extent
    }

    public override init() {
        super.init()
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override open var outputImage: CIImage? {
        guard let inputImage = self.inputImage else {
            return nil
        }
        let context = CIContext(options: nil)
        let cgImage = context.createCGImage(inputImage, from: inputImageRect!)
        // ... DO SOMETHING WITH CGIMAGE DATA ...
        return CIImage(cgImage: cgImage!)
    }
}
If I replace MyFilter() with a built-in filter, it works and shows the altered image, so the view controller code works. If instead I return inputImage directly from the filter's outputImage, it works and the image passed in is displayed.
When I dump the CGImage, the dimensions are correct but every pixel is black.
I tried creating a UIImage using UIImage(cgImage: cgImage!), but the same happens.
What is causing pixels not to be loaded in the CGImage I generate from the inputImage?

Issues with cropping an image

I am having difficulties with cropping an image; two things are happening that I wish to improve:
1) The quality of the photo degrades once it is cropped.
2) The orientation is not correct after the photo is taken.
In summary:
The photo quality after cropping is not up to standard, and when the image appears in the image view it is rotated 90 degrees; why is this occurring? I am trying to crop an image based on the view of the captured stream.
Here is the cropping of the image:
func crop(capture: UIImage) -> UIImage {
    let crop = cameraView.bounds
    //CGRect(x: 0, y: 0, width: 50, height: 50)
    let cgImage = capture.cgImage!.cropping(to: crop)
    let image: UIImage = UIImage(cgImage: cgImage!)
    return image
}
Here is where I am calling the crop:
func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let photoSampleBuffer = photoSampleBuffer {
        let photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        var imageTaken = UIImage(data: photoData!)
        //post photo
        let croppedImage = self.crop(capture: imageTaken!)
        imageTaken = croppedImage
        self.imageView.image = imageTaken
        // UIImageWriteToSavedPhotosAlbum(imageTaken!, nil, nil, nil)
    }
}
And here is the whole class:
import UIKit
import AVFoundation

class CameraVC: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, AVCapturePhotoCaptureDelegate {

    var captureSession: AVCaptureSession?
    var stillImageOutput: AVCapturePhotoOutput?
    var previewLayer: AVCaptureVideoPreviewLayer?

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet var cameraView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer?.frame = cameraView.bounds
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        captureSession = AVCaptureSession()
        captureSession?.sessionPreset = AVCaptureSessionPreset1920x1080
        stillImageOutput = AVCapturePhotoOutput()
        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if (captureSession?.canAddInput(input))! {
                captureSession?.addInput(input)
                if (captureSession?.canAddOutput(stillImageOutput) != nil) {
                    captureSession?.addOutput(stillImageOutput)
                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                    cameraView.layer.addSublayer(previewLayer!)
                    captureSession?.startRunning()
                    let captureVideoLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer.init(session: captureSession!)
                    captureVideoLayer.frame = self.cameraView.bounds
                    captureVideoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    self.cameraView.layer.addSublayer(captureVideoLayer)
                }
            }
        } catch {
            print("An error has occurred")
        }
    }

    @IBAction func takePhoto(_ sender: UIButton) {
        didPressTakePhoto()
    }

    func didPressTakePhoto() {
        if let videoConnection = stillImageOutput?.connection(withMediaType: AVMediaTypeVideo) {
            videoConnection.videoOrientation = AVCaptureVideoOrientation.portrait
            let settingsForCapture = AVCapturePhotoSettings()
            settingsForCapture.flashMode = .auto
            settingsForCapture.isAutoStillImageStabilizationEnabled = true
            settingsForCapture.isHighResolutionPhotoEnabled = false
            stillImageOutput?.capturePhoto(with: settingsForCapture, delegate: self)
        }
    }

    func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            let photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
            var imageTaken = UIImage(data: photoData!)
            //post photo
            let croppedImage = self.crop(capture: imageTaken!)
            imageTaken = croppedImage
            self.imageView.image = imageTaken
            // UIImageWriteToSavedPhotosAlbum(imageTaken!, nil, nil, nil)
        }
    }

    func crop(capture: UIImage) -> UIImage {
        let crop = cameraView.bounds
        //CGRect(x: 0, y: 0, width: 50, height: 50)
        let cgImage = capture.cgImage!.cropping(to: crop)
        let image: UIImage = UIImage(cgImage: cgImage!)
        return image
    }
}
An alternative is to use a Core Image filter called CIPerspectiveCorrection. Since it uses a CIImage - which isn't a true image but a "recipe" for an image - it doesn't suffer from degradation.
Basically, turn your UIImage/CGImage into a CIImage, pick any 4 points in it, and crop. It needn't be a parallelogram (or CGRect), just 4 points. There are two differences of note when using CI filters:
Instead of using a CGRect, you use a CIVector. A vector can have 2, 3, 4, even more parameters depending on the filter. In this case you want 4 CIVectors with 2 parameters each, corresponding to top left (TL), top right (TR), bottom left (BL), and bottom right (BR).
CI images have their point of origin (X/Y == 0/0) at their bottom left, not top left. This basically means your Y coordinate is upside down from CG or UI images.
Here's some sample code. First, some sample declarations, including a CI context:
let uiTL = CGPoint(x: 50, y: 50)
let uiTR = CGPoint(x: 75, y: 75)
let uiBL = CGPoint(x: 100, y: 300)
let uiBR = CGPoint(x: 25, y: 200)
var ciImage: CIImage!
var ctx: CIContext!

@IBOutlet weak var imageView: UIImageView!
In viewDidLoad we set the context and get our CIImage from the UIImageView:
override func viewDidLoad() {
    super.viewDidLoad()
    ctx = CIContext(options: nil)
    ciImage = CIImage(image: imageView.image!)
}
UIImageViews have a frame (a CGRect), and UIImages have a size (a CGSize). CIImages have an extent, which is basically your CGSize. But remember, the Y axis is flipped, and an infinite extent is possible! (This isn't the case for a UIImage source, though.) Here are some helper functions to convert things:
func createScaledPoint(_ pt: CGPoint) -> CGPoint {
    let x = (pt.x / imageView.frame.width) * ciImage.extent.width
    let y = (pt.y / imageView.frame.height) * ciImage.extent.height
    return CGPoint(x: x, y: y)
}

func createVector(_ point: CGPoint) -> CIVector {
    return CIVector(x: point.x, y: ciImage.extent.height - point.y)
}

func createPoint(_ vector: CGPoint) -> CGPoint {
    return CGPoint(x: vector.x, y: ciImage.extent.height - vector.y)
}
Here's the actual call to CIPerspectiveCorrection. If I remember correctly, a change in Swift 3 is to use AnyObject here. While more strongly-typed variables worked in previous versions of Swift, they cause crashes now:
func doPerspectiveCorrection(
    _ image: CIImage,
    context: CIContext,
    topLeft: AnyObject,
    topRight: AnyObject,
    bottomRight: AnyObject,
    bottomLeft: AnyObject)
    -> UIImage {
    let filter = CIFilter(name: "CIPerspectiveCorrection")
    filter?.setValue(topLeft, forKey: "inputTopLeft")
    filter?.setValue(topRight, forKey: "inputTopRight")
    filter?.setValue(bottomRight, forKey: "inputBottomRight")
    filter?.setValue(bottomLeft, forKey: "inputBottomLeft")
    filter!.setValue(image, forKey: kCIInputImageKey)
    let cgImage = context.createCGImage((filter?.outputImage)!, from: (filter?.outputImage!.extent)!)
    return UIImage(cgImage: cgImage!)
}
Now that we have our CIImage, we create the four CIVectors. In this sample project I hard-coded the 4 CGPoints and chose to create the CIVectors in viewWillLayoutSubviews, the earliest point where I have the UI frames:
override func viewWillLayoutSubviews() {
    let ciTL = createVector(createScaledPoint(uiTL))
    let ciTR = createVector(createScaledPoint(uiTR))
    let ciBR = createVector(createScaledPoint(uiBR))
    let ciBL = createVector(createScaledPoint(uiBL))
    imageView.image = doPerspectiveCorrection(CIImage(image: imageView.image!)!,
                                              context: ctx,
                                              topLeft: ciTL,
                                              topRight: ciTR,
                                              bottomRight: ciBR,
                                              bottomLeft: ciBL)
}
If you put this code into a project, load your image into a UIImageView, figure out which 4 CGPoints you want, and run it, you should see your cropped image. Good luck!
Adding to @dfd's answer: if you just don't want to use a fancy Core Image filter, follow the workaround here.
The quality of the photo after I crop it is poor?
Though you use the highest possible session preset, AVCaptureSessionPreset1920x1080, you are converting the sample buffer to JPEG format and then cropping it, so obviously there will be some loss of quality.
To get a nicer-quality cropped image, try the session preset AVCaptureSessionPresetHigh to let AVFoundation decide on high-quality video, and dngPhotoDataRepresentation(forRawSampleBuffer:previewPhotoSampleBuffer:) to get the image in DNG format.
The view and the orientation are not correct after I take the photo?
The didFinishProcessingPhotoSampleBuffer delegate gives you a sample buffer that is always rotated 90 degrees to the left, so you may want to rotate it 90 degrees to the right yourself.
Init your UIImage with the cgImage, orientation .right, and scale 1.0:
init(cgImage: CGImage, scale: CGFloat, orientation: UIImageOrientation)
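For example, a minimal sketch inside the capture delegate above (reusing the question's variable names; the exact integration point is my assumption):

if let data = photoData, let cgImage = UIImage(data: data)?.cgImage {
    // Rebuild the image rotated 90 degrees right to undo the buffer's left rotation.
    let imageTaken = UIImage(cgImage: cgImage, scale: 1.0, orientation: .right)
    self.imageView.image = imageTaken
}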

Increase/decrease brightness of image using UISlider?

I am building an iOS app based on image operations.
I want to increase and decrease the brightness of an image with a slider.
I have used this code:
@IBOutlet var imageView: UIImageView!
@IBOutlet var uiSlider: UISlider!

override func viewDidLoad() {
    super.viewDidLoad()
    var image = UIImage(named: "54715869.jpg")
    imageView.image = image
    uiSlider.minimumValue = -0.2
    uiSlider.maximumValue = 0.2
    uiSlider.value = 0.0
    uiSlider.maximumTrackTintColor = UIColor(red: 0.1, green: 0.7, blue: 0, alpha: 1)
    uiSlider.minimumTrackTintColor = UIColor.blackColor()
    uiSlider.addTarget(self, action: "brightnesssliderMove:", forControlEvents: UIControlEvents.TouchUpInside)
    uiSlider.addTarget(self, action: "brightnesssliderMove:", forControlEvents: UIControlEvents.TouchUpOutside)
}

func brightnesssliderMove(sender: UISlider) {
    var filter = CIFilter(name: "CIColorControls")
    filter.setValue(NSNumber(float: sender.value), forKey: "inputBrightness")
    var image = self.imageView.image
    var rawimgData = CIImage(image: image)
    filter.setValue(rawimgData, forKey: "inputImage")
    var outpuImage = filter.valueForKey("outputImage")
    imageView.image = UIImage(CIImage: outpuImage as CIImage)
}
Now my question: when I move the slider it increases the brightness of the image, but only the first time I change the slider position.
When I change the position of the slider again, I get this error:
fatal error: unexpectedly found nil while unwrapping an Optional value.
The error occurs at this line:
imageView.image = UIImage(CIImage: outpuImage as CIImage)
This time rawimgData comes back nil.
I found the answer to my question (the second pass fails because the filtered image placed in the image view is backed by a CIImage rather than a CGImage, so CIImage(image:) returns nil for it). Here is how I did the coding:
import CoreImage

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    var aCIImage = CIImage()
    var contrastFilter: CIFilter!
    var brightnessFilter: CIFilter!
    var context = CIContext()
    var outputImage = CIImage()
    var newUIImage = UIImage()

    override func viewDidLoad() {
        super.viewDidLoad()
        var aUIImage = imageView.image
        var aCGImage = aUIImage?.CGImage
        aCIImage = CIImage(CGImage: aCGImage)
        context = CIContext(options: nil)
        contrastFilter = CIFilter(name: "CIColorControls")
        contrastFilter.setValue(aCIImage, forKey: "inputImage")
        brightnessFilter = CIFilter(name: "CIColorControls")
        brightnessFilter.setValue(aCIImage, forKey: "inputImage")
    }

    func sliderContrastValueChanged(sender: UISlider) {
        contrastFilter.setValue(NSNumber(float: sender.value), forKey: "inputContrast")
        outputImage = contrastFilter.outputImage
        var cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())
        newUIImage = UIImage(CGImage: cgimg)!
        imageView.image = newUIImage
    }

    func sliderValueChanged(sender: UISlider) {
        brightnessFilter.setValue(NSNumber(float: sender.value), forKey: "inputBrightness")
        outputImage = brightnessFilter.outputImage
        let imageRef = context.createCGImage(outputImage, fromRect: outputImage.extent())
        newUIImage = UIImage(CGImage: imageRef)!
        imageView.image = newUIImage
    }
}
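For completeness, a hypothetical wiring of the two sliders in viewDidLoad (the outlet names contrastSlider and brightnessSlider are assumptions), in the same Swift era as the code above:

contrastSlider.addTarget(self, action: "sliderContrastValueChanged:", forControlEvents: UIControlEvents.ValueChanged)
brightnessSlider.addTarget(self, action: "sliderValueChanged:", forControlEvents: UIControlEvents.ValueChanged)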
You can use AlamofireImage; here's an example of how you can do it:
@IBAction func brightnessChanged(sender: UISlider) {
    let filterParameters = ["inputBrightness": sender.value]
    imageView.image = originalImage.af_imageWithAppliedCoreImageFilter("CIColorControls", filterParameters: filterParameters)
}
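Note that the filter is applied to originalImage (a stored copy of the unfiltered image) on every change; re-filtering the image already sitting in the image view is what led to the nil crash described in the question.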
