Getting error [UIImage extent]: unrecognized selector sent to instance

I'm trying to apply a radial blur to my UIImageView but when I try this I get the error
[UIImage extent]: unrecognized selector sent to instance
The code I'm using is from the example on:
https://developer.apple.com/documentation/coreimage/selectively_focusing_on_an_image
let h = bgImage.image!.size.height
let w = bgImage.image!.size.width
guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
let imageCenter = CIVector(x: 0.55 * w, y: 0.6 * h)
radialMask.setValue(imageCenter, forKey: kCIInputCenterKey)
radialMask.setValue(0.2 * h, forKey: "inputRadius0")
radialMask.setValue(0.3 * h, forKey: "inputRadius1")
radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 0),
                    forKey: "inputColor0")
radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 1),
                    forKey: "inputColor1")
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    return
}
maskedVariableBlur.setValue(bgImage.image, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(10, forKey: kCIInputRadiusKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
let selectivelyFocusedCIImage = maskedVariableBlur.outputImage
where bgImage is a UIImageView.
What am I doing wrong here?

You need

guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)

because kCIInputImageKey expects a CIImage, not a UIImage.
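To display the result you also have to render the output CIImage back into a UIImage; a minimal sketch, assuming bgImage is the UIImageView from the question:

// Render through a CIContext so the resulting UIImage has a real bitmap
// backing; using the input image's extent keeps the original size.
if let output = maskedVariableBlur.outputImage {
    let context = CIContext(options: nil)
    if let rendered = context.createCGImage(output, from: coreImage.extent) {
        bgImage.image = UIImage(cgImage: rendered)
    }
}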

I see two issues.
- One is that you're force-unwrapping the optional:

let h = bgImage.image!.size.height
let w = bgImage.image!.size.width

Please use guard here to avoid unexpected crashes, as in the sketch below.
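A minimal guard-based version might read:

// Bail out early instead of crashing when the image view is empty
guard let image = bgImage.image else { return }
let h = image.size.height
let w = image.size.width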
- The second issue is that the filter is handed a UIImage; Core Image filters expect a CIImage, so use something like bgImage.image?.ciImage, or create a CIImage from the UIImage first.
Please refer to the similar post below; I hope it helps.
Unrecognized selector sent to UIImage?

Related

Change the contrast and brightness of an image using a slider

I am trying to change the contrast and brightness of an image using a slider, which I have created programmatically. When I try to vary the contrast with the slider, I get an error like:

reason: '-[UISlider floatValue]: unrecognized selector sent to instance 0x103c4ffa0'

func viewforslide(){
    vieew.frame = CGRect(x: 10, y: view.frame.size.height - 180, width: self.view.frame.size.width - 20, height: 40)
    vieew.backgroundColor = UIColor.lightGray
    vieew.layer.cornerRadius = vieew.frame.size.height / 2
    view.addSubview(vieew)
    createslider()
}

func createslider(){
    var sliderDemo = UISlider(frame: CGRect(x: 15, y: 5, width: vieew.frame.size.width - 30, height: 30))
    sliderDemo.minimumValue = 0.0
    sliderDemo.maximumValue = 1000.0
    sliderDemo.isContinuous = true
    sliderDemo.tintColor = UIColor.black
    sliderDemo.value = 500.0
    sliderDemo.addTarget(self, action: #selector(_sldComponentChangedValue), for: .valueChanged)
    vieew.addSubview(sliderDemo)
}

@IBAction func _sldComponentChangedValue(sender: UISlider) {
    // Set value to the nearest int
    sender.setValue(Float(roundf(sender.value)), animated: false)
    let newvalforslider = sender
    print("\(newvalforslider)")
    let displayinPercentage: Int = Int((sender.value / 200) * 10000)
    // contrastValueLabel.text = ("\(displayinPercentage)")
    self.imageView.image = results.enhancedImage
    let beginImage = CIImage(image: self.imageView.image!)
    let filter = CIFilter(name: "CIColorControls")
    filter?.setValue(beginImage, forKey: kCIInputImageKey)
    filter?.setValue(sender.value, forKey: kCIInputContrastKey)
    var filteredImage = filter?.outputImage
    var context = CIContext(options: nil)
    imageView.image = UIImage(cgImage: context.createCGImage(filteredImage!, from: (filteredImage?.extent)!)!)
    var sliderValue = sender.value
}
If anyone can help me with this, that would be great. Thanks in advance.
When adding the target method to sliderDemo you passed the selector (_sldComponentChangedValue), which is not right: your actual method receives an argument, so the method names differ, and when you slide you get a crash saying "unrecognized selector sent to instance".
Instead, do as follows:
sliderDemo.addTarget(self, action: #selector(_sldComponentChangedValue(sender:)), for: .valueChanged);
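For reference, a minimal sketch of a handler whose signature matches that selector (it must also be visible to the Objective-C runtime, which @objc or @IBAction provides):

@objc func _sldComponentChangedValue(sender: UISlider) {
    // Round to the nearest integer value, as in the question
    sender.setValue(Float(roundf(sender.value)), animated: false)
    print("slider value: \(sender.value)")
}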
@anisha If the previous answer seems too complicated, try this code instead:
func increaseContrast(_ image: UIImage) -> UIImage {
    let inputImage = CIImage(image: image)!
    let parameters = [
        "inputContrast": NSNumber(value: 2) // set how much contrast you want
    ]
    let outputImage = inputImage.applyingFilter("CIColorControls", parameters: parameters)
    let context = CIContext(options: nil)
    let img = context.createCGImage(outputImage, from: outputImage.extent)!
    return UIImage(cgImage: img)
}
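You could then apply it directly, e.g. (assuming imageView is your UIImageView):

// Re-assign the filtered result back to the image view
if let current = imageView.image {
    imageView.image = increaseContrast(current)
}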

Pixellating a UIImage returns UIImage with a different size

I'm using an extension to pixellate my images like the following:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
The problem is that the image represented by self here does not have the same size as the one I create using UIImage(ciImage: output).
For example, using that code:
print("image.size BEFORE : \(image.size)")
if let imagePixellated = image.pixellated(scale: 48) {
image = imagePixellated
print("image.size AFTER : \(image.size)")
}
will print:
image.size BEFORE : (400.0, 298.0)
image.size AFTER : (848.0, 644.0)
Not the same size and not the same ratio.
Any idea why?
EDIT:
I added some prints in the extension as following:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    print("UIIMAGE : \(self.size)")
    print("ciImage.extent.size : \(ciImage.extent.size)")
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    print("output : \(output.extent.size)")
    return UIImage(ciImage: output)
}
And here are the outputs:
UIIMAGE : (250.0, 166.5)
ciImage.extent.size : (500.0, 333.0)
output : (548.0, 381.0)
You have two problems:
self.size is measured in points. self's size in pixels is actually self.size multiplied by self.scale.
The CIPixellate filter changes the bounds of its image.
To fix problem one, you can simply set the scale property of the returned UIImage to be the same as self.scale:
return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
But you'll find this still isn't quite right. That's because of problem two. For problem two, the simplest solution is to crop the output CIImage:
// Must use self.scale, to disambiguate from the scale parameter
let floatScale = CGFloat(self.scale)
let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
let cropRect = CGRect(origin: CGPoint.zero, size: pixelSize)
guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
This will give you an image of the size you want.
Now, your next question may be, "why is there a thin, dark border around my pixellated images?" Good question! But ask a new question for that.
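Putting both fixes together, a sketch of the corrected extension (using the same Swift 3 API as the question) might look like this:

extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
            let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)

        // Fix two: crop the output back to the input's size in pixels.
        let floatScale = CGFloat(self.scale)
        let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
        let cropRect = CGRect(origin: .zero, size: pixelSize)
        guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }

        // Fix one: preserve the original scale and orientation.
        return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
    }
}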

Swift 3 - How do I improve image quality for Tesseract?

I am using Swift 3 to build a mobile app that allows the user to take a picture and run Tesseract OCR over the resulting image.
However, I've been trying to increase the quality of the scan, and it doesn't seem to be helping much. I've segmented the photo into a more "zoomed in" region that I want to recognize, and I even tried making it black and white. Are there any strategies for "enhancing" or optimizing the picture quality/size so that Tesseract can recognize it better? Thanks!
tesseract.image = // the camera photo here
tesseract.recognize()
print(tesseract.recognizedText)
I got these errors and have no idea what to do:
Error in pixCreateHeader: depth must be {1, 2, 4, 8, 16, 24, 32}
Error in pixCreateNoInit: pixd not made
Error in pixCreate: pixd not made
Error in pixGetData: pix not defined
Error in pixGetWpl: pix not defined
2017-03-11 22:22:30.019717 ProjectName[34247:8754102] Cannot convert image to Pix with bpp = 64
Error in pixSetYRes: pix not defined
Error in pixGetDimensions: pix not defined
Error in pixGetColormap: pix not defined
Error in pixClone: pixs not defined
Error in pixGetDepth: pix not defined
Error in pixGetWpl: pix not defined
Error in pixGetYRes: pix not defined
Please call SetImage before attempting recognition.Please call SetImage before attempting recognition.2017-03-11 22:22:30.026605 EOB-Reader[34247:8754102] No recognized text. Check that -[Tesseract setImage:] is passed an image bigger than 0x0.
I've been using Tesseract fairly successfully in Swift 3 using the following:
func performImageRecognition(_ image: UIImage) {
    let tesseract = G8Tesseract(language: "eng")
    var textFromImage: String?
    tesseract?.engineMode = .tesseractCubeCombined
    tesseract?.pageSegmentationMode = .singleBlock
    tesseract?.image = image
    tesseract?.recognize()
    textFromImage = tesseract?.recognizedText
    print(textFromImage!)
}
I also found that pre-processing the image helped. I added the following extension to UIImage:
import UIKit
import CoreImage

extension UIImage {

    func toGrayScale() -> UIImage {
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
        currentFilter!.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        let output = currentFilter!.outputImage
        let cgimg = context.createCGImage(output!, from: output!.extent)
        return UIImage(cgImage: cgimg!)
    }

    func binarise() -> UIImage {
        let glContext = EAGLContext(api: .openGLES2)!
        let ciContext = CIContext(eaglContext: glContext, options: [kCIContextOutputColorSpace: NSNull()])
        let filter = CIFilter(name: "CIPhotoEffectMono")
        filter!.setValue(CIImage(image: self), forKey: "inputImage")
        let outputImage = filter!.outputImage
        let cgimg = ciContext.createCGImage(outputImage!, from: (outputImage?.extent)!)
        return UIImage(cgImage: cgimg!)
    }

    func scaleImage() -> UIImage {
        let maxDimension: CGFloat = 640
        var scaledSize = CGSize(width: maxDimension, height: maxDimension)
        var scaleFactor: CGFloat
        if self.size.width > self.size.height {
            scaleFactor = self.size.height / self.size.width
            scaledSize.width = maxDimension
            scaledSize.height = scaledSize.width * scaleFactor
        } else {
            scaleFactor = self.size.width / self.size.height
            scaledSize.height = maxDimension
            scaledSize.width = scaledSize.height * scaleFactor
        }
        UIGraphicsBeginImageContext(scaledSize)
        self.draw(in: CGRect(x: 0, y: 0, width: scaledSize.width, height: scaledSize.height))
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }

    func orientate(img: UIImage) -> UIImage {
        if img.imageOrientation == UIImageOrientation.up {
            return img
        }
        UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
        let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
        img.draw(in: rect)
        let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
And then called this before passing the image to performImageRecognition
func processImage() {
    self.imageView.image! = self.imageView.image!.toGrayScale()
    self.imageView.image! = self.imageView.image!.binarise()
    self.imageView.image! = self.imageView.image!.scaleImage()
}
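Putting it together, the call sequence might look like this (assuming imageView holds the camera photo):

// Pre-process the displayed image, then run OCR on it
processImage()
performImageRecognition(imageView.image!)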
Hope this helps

Swift Core Image filter over filtered image

My problem is as follows: I made a simple app with a UIViewController and a UIView (FilterView). On my view I added a UIButton and a UIImageView. What I want is that when you push the button, a sepia filter is applied to the image:
func sepiaButtonClicked( sender: UIButton ){
    let context = CIContext(options: nil)
    let image = CIImage(image: theView.imageView.image)
    let filter = CIFilter(name: "CISepiaTone", withInputParameters: [
        kCIInputImageKey : image,
        kCIInputIntensityKey : NSNumber(double: 0.5)
    ])
    let imageWithFilter = filter.outputImage
    theView.imageView.image = UIImage(CIImage: imageWithFilter)
}
theView refers to the UIView with this piece of code on top
var theView: FilterView {
    get {
        return view as! FilterView
    }
}
Now when I push the button the filter is applied as I wanted, but if I press it again afterwards it gives the fatal error 'unexpectedly found nil while unwrapping an Optional value'. I think this is the image (the one I pass for kCIInputImageKey).
Can anyone explain why this is happening? I can't figure out the difference between the first and the second click on the button. As I see it, this code just replaces the UIImage with the new one, and it should be ready to be triggered again?
Thx in advance,
Pieter-Jan De Bruyne
Try this. A UIImage created with UIImage(CIImage:) has no CGImage backing, so on the second press CIImage(image:) finds nil; rendering the filter output through a CIContext into a CGImage avoids that:
func sepiaButtonClicked( sender: UIButton ){
    let currentImage = self.imageView.image
    let inputImage = CIImage(image: currentImage)
    let filter = CIFilter(name: "CISepiaTone")
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(0.5, forKey: kCIInputIntensityKey)
    let context = CIContext(options: nil)
    let imageWithFilter = filter.outputImage
    let newOutputImage = context.createCGImage(imageWithFilter, fromRect: imageWithFilter.extent())
    imageView.image = UIImage(CGImage: newOutputImage)
}

Select portion of the UIImage with given color

I would like to implement a feature that allows the user to select a given color from the image and replace it with a transparent color. Ideally it would work similarly to Pixelmator, https://www.youtube.com/watch?v=taXGaQC0JBg where the user can select colors, see which portions of the image are currently selected, and use a slider to set the tolerance of the colors.
My primary suspect for replacing the colors is the CGImageCreateWithMaskingColors() function. Perhaps CIColorCube might also do the job.
I am not sure how to proceed with visualizing the selection of the colors. Any tips will be welcome!
thank you,
Janusz
EDIT:
I am moving very slowly, but I have made some progress. I am using the CGImageCreateWithMaskingColors function to remove the unwanted colors:
func imageWithMaskingColors(){
    // get the backing CGImage of the UIImage
    let image: CGImageRef = self.inputImage.image!.CGImage
    let maskingColors: [CGFloat] = [0, 200, 0, 255, 0, 255]
    let newCGImage = CGImageCreateWithMaskingColors(image, maskingColors)
    let newImage = UIImage(CGImage: newCGImage)
    self.outputImage.image = newImage
    let w = CGFloat(CGImageGetWidth(newCGImage))
    let h = CGFloat(CGImageGetHeight(newCGImage))
    let size = CGSizeMake(w, h)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    newImage?.drawInRect(CGRectMake(0, 0, w, h))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    self.inputImage1 = result.CGImage
    UIGraphicsEndImageContext()
}
In the next step I apply a CISourceOutCompositing CIFilter to get the selected area that was removed in the last step:
@IBAction func blendMode(){
    let context = CIContext(options: nil)
    let inputImage: CIImage = CIImage(CGImage: self.inputImage1)
    var filter = CIFilter(name: "CISourceOutCompositing")
    println(inputImage.debugDescription)
    // mix it with black
    let fileURL = NSBundle.mainBundle().URLForResource("black", withExtension: "jpg")
    var backgroundImage = CIImage(contentsOfURL: fileURL)
    filter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
    filter.setValue(backgroundImage, forKey: kCIInputImageKey)
    println(backgroundImage.debugDescription)
    let outputImage = filter.outputImage
    println(outputImage.debugDescription)
    let cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())
    blendImage1 = cgimg
    let newImage = UIImage(CGImage: cgimg)
    self.outputImage.image = newImage
}
In the next step I would like to add a dashed stroke to the borders and remove the fill color of the selected image (the black tiger). I applied GPUImage's CannyEdgeDetectionFilter to the image, but it didn't give me satisfying results (a black image):
let gpaPicture = GPUImagePicture(CGImage: blendImage1)
let canny = GPUImageCannyEdgeDetectionFilter()
//canny.upperThreshold = CGFloat(1125)
canny.lowerThreshold = CGFloat(1)
gpaPicture.addTarget(canny)
canny.useNextFrameForImageCapture()
gpaPicture.processImage()
let gpuResult = canny.imageByFilteringImage(UIImage(CGImage:blendImage1))
