Swift Core Image filter over filtered image - iOS

My problem is as follows: I made a simple app with a UIViewController and a UIView subclass (FilterView). On my view I added a UIButton and a UIImageView. What I want is that when you push the button, a sepia filter is applied to the image:
    func sepiaButtonClicked(sender: UIButton) {
        let context = CIContext(options: nil)
        let image = CIImage(image: theView.imageView.image)
        let filter = CIFilter(name: "CISepiaTone", withInputParameters: [
            kCIInputImageKey: image,
            kCIInputIntensityKey: NSNumber(double: 0.5)
        ])
        let imageWithFilter = filter.outputImage
        theView.imageView.image = UIImage(CIImage: imageWithFilter)
    }
theView refers to the FilterView through this computed property:
    var theView: FilterView {
        get {
            return view as! FilterView
        }
    }
Now when I push the button, the filter is applied as I wanted, but pressing it again afterwards gives a fatal error: 'unexpectedly found nil while unwrapping an Optional value'. I think it is the image (the one I pass for kCIInputImageKey).
Can anyone explain why this is happening? I can't figure out the difference between the first and second click on the button. As I see it, this code just replaces the UIImage with the new one, so it should be ready to be triggered again.
Thx in advance,
Pieter-Jan De Bruyne

Try this. Rendering the output through a CGImage gives the new UIImage a CGImage backing, which CIImage(image:) needs, so the filter can be applied again on the next tap:

    func sepiaButtonClicked(sender: UIButton) {
        let currentImage = theView.imageView.image
        let inputImage = CIImage(image: currentImage)
        let filter = CIFilter(name: "CISepiaTone")
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(0.5, forKey: kCIInputIntensityKey)
        let context = CIContext(options: nil)
        let imageWithFilter = filter.outputImage
        // Rendering to a CGImage (instead of wrapping the CIImage directly)
        // is what keeps the button working on repeated taps.
        let newOutputImage = context.createCGImage(imageWithFilter, fromRect: imageWithFilter.extent())
        theView.imageView.image = UIImage(CGImage: newOutputImage)
    }

Related

Getting error [UIImage extent]: unrecognized selector sent to instance

I'm trying to apply a radial blur to my UIImageView but when I try this I get the error
[UIImage extent]: unrecognized selector sent to instance
The code I'm using is from the example on:
https://developer.apple.com/documentation/coreimage/selectively_focusing_on_an_image
    let h = bgImage.image!.size.height
    let w = bgImage.image!.size.width

    guard let radialMask = CIFilter(name: "CIRadialGradient") else {
        return
    }
    let imageCenter = CIVector(x: 0.55 * w, y: 0.6 * h)
    radialMask.setValue(imageCenter, forKey: kCIInputCenterKey)
    radialMask.setValue(0.2 * h, forKey: "inputRadius0")
    radialMask.setValue(0.3 * h, forKey: "inputRadius1")
    radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 0),
                        forKey: "inputColor0")
    radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 1),
                        forKey: "inputColor1")

    guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
        return
    }
    maskedVariableBlur.setValue(bgImage.image, forKey: kCIInputImageKey)
    maskedVariableBlur.setValue(10, forKey: kCIInputRadiusKey)
    maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
    let selectivelyFocusedCIImage = maskedVariableBlur.outputImage
In which bgImage is a UIImageView
What am I doing wrong here?
You need

    guard let image = bgImage.image, let cgimg = image.cgImage else {
        print("imageView doesn't have an image!")
        return
    }

and then

    let coreImage = CIImage(cgImage: cgimg)
    maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)

because kCIInputImageKey expects a CIImage, not a UIImage.
I see two issues.
- One is that you're force-unwrapping the optional:

    let h = bgImage.image!.size.height
    let w = bgImage.image!.size.width

Please use guard here to avoid unexpected crashes.
- The second issue is that bgImage.image is a UIImage, while kCIInputImageKey expects a CIImage, so convert it first (for example with CIImage(image:)).

Please refer to the similar post below. I hope this helps:
Unrecognized selector sent to UIImage?

How to remove the border/drop shadow from a UIImageView?

I've been generating QR Codes using the CIQRCodeGenerator CIFilter and it works very well:
But when I resize the UIImageView and generate again
    @IBAction func sizeSliderValueChanged(_ sender: UISlider) {
        qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sender.value), y: CGFloat(sender.value))
    }
I get a weird Border/DropShadow around the image sometimes:
How can I prevent it from appearing at all times or remove it altogether?
I have no idea what it is exactly, a border, a dropShadow or a Mask, as I'm new to Swift/iOS.
Thanks in advance!
PS. I didn't post any of the QR-Code generating code as it's pretty boilerplate and can be found in many tutorials out there, but let me know if you need it
EDIT:
code to generate the QR Code Image
    private func generateQRCode(from string: String) -> UIImage? {
        let data = string.data(using: String.Encoding.ascii)
        guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
            return nil
        }
        filter.setValue(data, forKey: "inputMessage")
        guard let qrEncodedImage = filter.outputImage else {
            return nil
        }
        let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
        let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
        let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
        if let outputImage = filter.outputImage?.applying(transform) {
            return UIImage(ciImage: outputImage)
        }
        return nil
    }
Code for button pressed
    @IBAction func generateCodeButtonPressed(_ sender: CustomButton) {
        if codeTextField.text == "" {
            return
        }
        let newEncodedMessage = codeTextField.text!
        let encodedImage: UIImage = generateQRCode(from: newEncodedMessage)!
        qrImageView.image = encodedImage
        qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sizeSlider.value), y: CGFloat(sizeSlider.value))
        qrImageView.layer.minificationFilter = kCAFilterNearest
        qrImageView.layer.magnificationFilter = kCAFilterNearest
    }
It’s a little hard to be sure without the code you’re using to generate the image for the image view, but that looks like a resizing artifact: the CIImage may be black or transparent outside the edges of the QR code, and when the image view size doesn’t match the image’s intended size, the edges get fuzzy and either the image outside its boundaries or the image view’s background color starts bleeding in. You might be able to fix it by setting the image view layer’s minification/magnification filters to “nearest neighbor”, like so:
    imageView.layer.minificationFilter = kCAFilterNearest
    imageView.layer.magnificationFilter = kCAFilterNearest
Update, from seeing the code you added: you’re currently resizing the image twice, first with the call to applying(transform) and then by setting a transform on the image view itself. I suspect the first resize is adding the blurriness, which the minification/magnification filter I suggested earlier then can’t fix. Try shortening generateQRCode to this:
    private func generateQRCode(from string: String) -> UIImage? {
        let data = string.data(using: String.Encoding.ascii)
        guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
            return nil
        }
        filter.setValue(data, forKey: "inputMessage")
        if let qrEncodedImage = filter.outputImage {
            return UIImage(ciImage: qrEncodedImage)
        }
        return nil
    }
I think the problem here is that you resize it to "non-square" (your scaleX isn't always the same as scaleY), while the QR code is always square, so both sides should get the same scale factor to produce a non-blurred image.
Something like:
    let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
    let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
    let scale = max(scaleX, scaleY)
    let transform = CGAffineTransform(scaleX: scale, y: scale)
will make sure you get a "non-bordered/non-blurred/square" UIImage.
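To make the intent concrete, here is the same computation as a small standalone sketch; the function name uniformScale and the CGSize-based signature are illustrative, not from the original answer:

```swift
import Foundation

// Uniform scale for a square QR code: take the larger of the two
// per-axis factors so both axes use the same factor. This keeps the
// code square (aspect-fill), overflowing the shorter axis of the
// target rather than stretching and blurring.
func uniformScale(target: CGSize, codeExtent: CGSize) -> CGFloat {
    return max(target.width / codeExtent.width,
               target.height / codeExtent.height)
}
```

For example, a 25×25 code extent scaled into a 200×100 view gets a factor of 8 on both axes, so the scaled code is 200×200 and stays square.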
I guess the issue is with the image (png) file, not with your UIImageView. Try another image, and I hope it will work!

Pixellating a UIImage returns UIImage with a different size

I'm using an extension to pixellate my images like the following:
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)
        guard let output = filter.outputImage else { return nil }
        return UIImage(ciImage: output)
    }
The problem is that the image represented by self here does not have the same size as the one I create using UIImage(ciImage: output).
For example, using that code:
    print("image.size BEFORE : \(image.size)")
    if let imagePixellated = image.pixellated(scale: 48) {
        image = imagePixellated
        print("image.size AFTER : \(image.size)")
    }
will print:
image.size BEFORE : (400.0, 298.0)
image.size AFTER : (848.0, 644.0)
Not the same size and not the same ratio.
Any idea why?
EDIT:
I added some prints in the extension as follows:
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
        print("UIIMAGE : \(self.size)")
        print("ciImage.extent.size : \(ciImage.extent.size)")
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)
        guard let output = filter.outputImage else { return nil }
        print("output : \(output.extent.size)")
        return UIImage(ciImage: output)
    }
And here are the outputs:
UIIMAGE : (250.0, 166.5)
ciImage.extent.size : (500.0, 333.0)
output : (548.0, 381.0)
You have two problems:
- self.size is measured in points. self's size in pixels is actually self.size multiplied by self.scale.
- The CIPixellate filter changes the bounds of its image.
To fix problem one, you can simply set the scale property of the returned UIImage to be the same as self.scale:
    return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
But you'll find this still isn't quite right. That's because of problem two. For problem two, the simplest solution is to crop the output CIImage:
    // Must use self.scale, to disambiguate from the scale parameter
    let floatScale = CGFloat(self.scale)
    let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
    let cropRect = CGRect(origin: CGPoint.zero, size: pixelSize)
    guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
This will give you an image of the size you want.
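The point-to-pixel arithmetic can be checked independently of Core Image; this sketch (the helper name pixelCropRect is mine, for illustration) reproduces it with plain geometry types:

```swift
import Foundation

// Convert a UIImage's point size plus its screen scale into the
// pixel-space rect used to crop the filter output back to the
// original bounds.
func pixelCropRect(pointSize: CGSize, scale: CGFloat) -> CGRect {
    return CGRect(x: 0, y: 0,
                  width: pointSize.width * scale,
                  height: pointSize.height * scale)
}
```

For the sizes printed in the question, a 250×166.5-point image at scale 2 maps to a 500×333-pixel crop rect, matching the ciImage.extent.size the asker logged.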
Now, your next question may be, "why is there a thin, dark border around my pixellated images?" Good question! But ask a new question for that.

This class is not key value coding-compliant for the key 'CIAttributeTypeRectangle.'

I am learning how to program and I am experimenting with the camera. I've been trying to apply a CICrop filter to an image, but it crashes each time. Here is the code:
    let Rectangle = CIVector(x: view.center.x, y: view.center.y, z: view.bounds.height, w: view.bounds.width)
    let filter = CIFilter(name: "CICrop")
    let ciContext = CIContext(options: nil)
    filter.setDefaults()
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(Rectangle, forKey: kCIAttributeTypeRectangle)
    let originalOrientation: UIImageOrientation = imageView.image!.imageOrientation
    let originalScale = imageView.image!.scale
    let cgImage = ciContext.createCGImage(filter.outputImage, fromRect: inputImage.extent())
    imageView.image = UIImage(CGImage: cgImage, scale: originalScale, orientation: originalOrientation)
It constantly crashes on this line: filter.setValue(Rectangle, forKey: kCIAttributeTypeRectangle)
Can somebody help me with what's going on? Also, please provide code in the answer because, like I said before, I'm still trying to learn. Thank you!
The "not key value coding-compliant" error is often because you have an outlet that is no longer attached to your view controller. Right-click on your imageView in the storyboard and see if you have any referencing outlets that are outdated (i.e., removed from your code but not removed from the storyboard object).
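In this particular snippet, though, the crash comes from the key itself: kCIAttributeTypeRectangle is the attribute-metadata string "CIAttributeTypeRectangle", not an input parameter name, so setValue(_:forKey:) raises the KVC exception. CICrop takes its rectangle under the key "inputRectangle", as a CIVector laid out [x, y, width, height]; note too that the question's vector puts the view's center in x/y and height in z, whereas CICrop expects the origin in x/y, width in z, and height in w. A sketch of that layout (cropVectorComponents is a hypothetical helper, not part of Core Image):

```swift
import Foundation

// CICrop's "inputRectangle" CIVector is [x, y, width, height];
// this helper flattens a CGRect into that component order.
func cropVectorComponents(for rect: CGRect) -> [CGFloat] {
    return [rect.origin.x, rect.origin.y, rect.size.width, rect.size.height]
}

// In the filter code, CIVector(cgRect:) builds the same layout directly:
//   filter.setValue(CIVector(cgRect: rect), forKey: "inputRectangle")
```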

Select portion of the UIImage with given color

I would like to implement a feature that allows the user to select a given color in the image and replace it with transparency. Ideally it would work similarly to Pixelmator, https://www.youtube.com/watch?v=taXGaQC0JBg where the user can pick colors, see which portions of the image are currently selected, and use a slider to set the color tolerance.
My primary suspect for replacing the colors is the CGImageCreateWithMaskingColors() function. Perhaps CIColorCube might also do the job.
I am not sure how to proceed with visualizing the selection of the colors. Any tips will be welcome!
thank you,
Janusz
EDIT:
I am moving very slowly, but I've made some progress. I am using the CGImageCreateWithMaskingColors function to mask out the unwanted colors:
    func imageWithMaskingColors() {
        // get the backing CGImage of the UIImage
        let image: CGImageRef = self.inputImage.image!.CGImage
        let maskingColors: [CGFloat] = [0, 200, 0, 255, 0, 255]
        let newciimage = CGImageCreateWithMaskingColors(image, maskingColors)
        let newImage = UIImage(CGImage: newciimage)
        self.outputImage.image = newImage

        let w = CGFloat(CGImageGetWidth(newciimage))
        let h = CGFloat(CGImageGetHeight(newciimage))
        let size = CGSizeMake(w, h)
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        let context = UIGraphicsGetCurrentContext()
        newImage?.drawInRect(CGRectMake(0, 0, w, h))
        let result = UIGraphicsGetImageFromCurrentImageContext()
        self.inputImage1 = result.CGImage
        UIGraphicsEndImageContext()
    }
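The maskingColors array is read by CGImageCreateWithMaskingColors as {min, max} pairs per color component (0-255 for an 8-bit RGB image), which is exactly what the tolerance slider from the question would drive. A small sketch of that mapping (maskingColorRanges is a hypothetical helper name):

```swift
import Foundation

// Build the {min, max} component ranges that
// CGImageCreateWithMaskingColors expects, from a picked color and a
// tolerance, clamping to the 0...255 range of an 8-bit image.
func maskingColorRanges(red: CGFloat, green: CGFloat, blue: CGFloat,
                        tolerance: CGFloat) -> [CGFloat] {
    return [red, green, blue].flatMap { component in
        [max(0, component - tolerance), min(255, component + tolerance)]
    }
}
```

Picking pure green (0, 255, 0) with a tolerance of 55, for example, yields [0, 55, 200, 255, 0, 55], a drop-in replacement for the hard-coded maskingColors array above.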
In the next step I am applying a CISourceOutCompositing CIFilter to get the selected area that was removed in the last step:
    @IBAction func blendMode() {
        let context = CIContext(options: nil)
        let inputImage: CIImage = CIImage(CGImage: self.inputImage1)
        var filter = CIFilter(name: "CISourceOutCompositing")
        println(inputImage.debugDescription)
        // mix it with black
        let fileURL = NSBundle.mainBundle().URLForResource("black", withExtension: "jpg")
        var backgroundImage = CIImage(contentsOfURL: fileURL)
        filter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
        filter.setValue(backgroundImage, forKey: kCIInputImageKey)
        println(backgroundImage.debugDescription)
        let outputImage = filter.outputImage
        println(outputImage.debugDescription)
        let cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())
        blendImage1 = cgimg
        let newImage = UIImage(CGImage: cgimg)
        self.outputImage.image = newImage
    }
In the next step I would like to add a dashed stroke to the borders and remove the fill color of the selected image (the black tiger).
I applied a GPUImage CannyEdgeDetectionFilter to the image, but it didn't give me satisfying results (a black image):
    let gpaPicture = GPUImagePicture(CGImage: blendImage1)
    let canny = GPUImageCannyEdgeDetectionFilter()
    //canny.upperThreshold = CGFloat(1125)
    canny.lowerThreshold = CGFloat(1)
    gpaPicture.addTarget(canny)
    canny.useNextFrameForImageCapture()
    gpaPicture.processImage()
    let gpuResult = canny.imageByFilteringImage(UIImage(CGImage: blendImage1))
