Change the contrast and brightness of an image using a slider - iOS

I am trying to change the contrast and brightness of an image using a slider, which I have created programmatically. When I vary the contrast with the slider, I get an error like:
reason: '-[UISlider floatValue]: unrecognized selector sent to instance 0x103c4ffa0'

func viewforslide() {
    vieew.frame = CGRect(x: 10, y: view.frame.size.height - 180, width: self.view.frame.size.width - 20, height: 40)
    vieew.backgroundColor = UIColor.lightGray
    vieew.layer.cornerRadius = vieew.frame.size.height / 2
    view.addSubview(vieew)
    createslider()
}

func createslider() {
    let sliderDemo = UISlider(frame: CGRect(x: 15, y: 5, width: vieew.frame.size.width - 30, height: 30))
    sliderDemo.minimumValue = 0.0
    sliderDemo.maximumValue = 1000.0
    sliderDemo.isContinuous = true
    sliderDemo.tintColor = UIColor.black
    sliderDemo.value = 500.0
    sliderDemo.addTarget(self, action: #selector(_sldComponentChangedValue), for: .valueChanged)
    vieew.addSubview(sliderDemo)
}

@IBAction func _sldComponentChangedValue(sender: UISlider) {
    // Set value to the nearest int
    sender.setValue(Float(roundf(sender.value)), animated: false)
    let newvalforslider = sender.value
    print("\(newvalforslider)")
    let displayinPercentage: Int = Int((sender.value / 200) * 10000)
    // contrastValueLabel.text = ("\(displayinPercentage)")
    self.imageView.image = results.enhancedImage
    let beginImage = CIImage(image: self.imageView.image!)
    let filter = CIFilter(name: "CIColorControls")
    filter?.setValue(beginImage, forKey: kCIInputImageKey)
    filter?.setValue(sender.value, forKey: kCIInputContrastKey)
    let filteredImage = filter?.outputImage
    let context = CIContext(options: nil)
    imageView.image = UIImage(cgImage: context.createCGImage(filteredImage!, from: (filteredImage?.extent)!)!)
    let sliderValue = sender.value
}
If anyone can help me with this, it would be great. Thanks in advance.

When adding the target to sliderDemo you passed the selector #selector(_sldComponentChangedValue), which is not right: your actual method takes an argument, so the selector name differs from the method's. That is why you get a crash saying "unrecognized selector sent to instance" when you slide.
Instead, do the following:
sliderDemo.addTarget(self, action: #selector(_sldComponentChangedValue(sender:)), for: .valueChanged)
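For completeness, here is a minimal, self-contained sketch of the corrected wiring (the class and method names here are placeholders, not the asker's actual code); note the @objc attribute, which exposes the handler to the Objective-C runtime so the target-action mechanism can dispatch to it:

import UIKit

class SliderDemoViewController: UIViewController {
    let sliderDemo = UISlider(frame: CGRect(x: 15, y: 100, width: 300, height: 30))

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(sliderDemo)
        // The selector must match the method's full signature, including the argument label.
        sliderDemo.addTarget(self, action: #selector(sliderChanged(sender:)), for: .valueChanged)
    }

    @objc func sliderChanged(sender: UISlider) {
        // Called on every value change; sender carries the current value.
        print("slider value: \(sender.value)")
    }
}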

@anisha If you find the previous answer complicated, try this code instead:
func increaseContrast(_ image: UIImage) -> UIImage {
    let inputImage = CIImage(image: image)!
    let parameters = [
        "inputContrast": NSNumber(value: 2) // set how much contrast you want
    ]
    let outputImage = inputImage.applyingFilter("CIColorControls", parameters: parameters)
    let context = CIContext(options: nil)
    let img = context.createCGImage(outputImage, from: outputImage.extent)!
    return UIImage(cgImage: img)
}
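If you drive this from a slider, re-filter the untouched original each time rather than the image currently on screen; otherwise the adjustment compounds with every move. A minimal sketch, assuming originalImage is a stored property holding the unfiltered image and the function is generalized to take the slider's value:

func adjustContrast(_ image: UIImage, contrast: Float) -> UIImage {
    let inputImage = CIImage(image: image)!
    let outputImage = inputImage.applyingFilter("CIColorControls",
                                                parameters: ["inputContrast": NSNumber(value: contrast)])
    let context = CIContext(options: nil)
    let img = context.createCGImage(outputImage, from: outputImage.extent)!
    return UIImage(cgImage: img)
}

@objc func contrastSliderChanged(sender: UISlider) {
    // Always start from the pristine original, not imageView.image.
    imageView.image = adjustContrast(originalImage, contrast: sender.value)
}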

Related

How do you apply Core Image filters to an onscreen image using Swift/MacOS or iOS and Core Image

Photos' editing adjustments provide a realtime view of the adjustments as they are applied. I wasn't able to find any samples of how to do this. All the examples seem to show that you apply the filters through a pipeline of sorts and then take the resulting image and update the screen with the result. See the code below.
Photos seems to show the adjustment applied to the onscreen image. How do they achieve this?
func editImage(inputImage: CGImage) {
    DispatchQueue.global().async {
        let beginImage = CIImage(cgImage: inputImage)
        guard let exposureOutput = self.exposureFilter(beginImage, ev: self.brightness) else {
            return
        }
        guard let vibranceOutput = self.vibranceFilter(exposureOutput, amount: self.vibranceAmount) else {
            return
        }
        guard let unsharpMaskOutput = self.unsharpMaskFilter(vibranceOutput, intensity: self.unsharpMaskIntensity, radius: self.unsharpMaskRadius) else {
            return
        }
        guard let sharpnessOutput = self.sharpenFilter(unsharpMaskOutput, sharpness: self.unsharpMaskIntensity) else {
            return
        }
        if let cgimg = self.context.createCGImage(sharpnessOutput, from: vibranceOutput.extent) {
            DispatchQueue.main.async {
                self.cgImage = cgimg
            }
        }
    }
}
OK, I just found the answer - use MTKView, which works fine except for getting the image to fill the view correctly!
For the benefit of others, here are the basics... I have yet to figure out how to position the image correctly in the view, but I can see the filter applied in realtime!
class ViewController: NSViewController, MTKViewDelegate {
    ....
    @objc dynamic var cgImage: CGImage? {
        didSet {
            if let cgimg = cgImage {
                ciImage = CIImage(cgImage: cgimg)
            }
        }
    }
    var ciImage: CIImage?

    // Metal resources
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var textureLoader: MTKTextureLoader!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do view setup here.
        let metalView = MTKView()
        metalView.translatesAutoresizingMaskIntoConstraints = false
        self.imageView.addSubview(metalView)
        NSLayoutConstraint.activate([
            metalView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            metalView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            metalView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            metalView.topAnchor.constraint(equalTo: view.topAnchor)
        ])
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        metalView.delegate = self
        metalView.device = device
        metalView.framebufferOnly = false
        context = CIContext()
        textureLoader = MTKTextureLoader(device: device)
    }

    public func draw(in view: MTKView) {
        if let ciImage = self.ciImage {
            if let currentDrawable = view.currentDrawable {
                let commandBuffer = commandQueue.makeCommandBuffer()
                let inputImage = ciImage
                exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
                exposureFilter.setValue(ev, forKey: kCIInputEVKey)
                context.render(exposureFilter.outputImage!,
                               to: currentDrawable.texture,
                               commandBuffer: commandBuffer,
                               bounds: CGRect(origin: .zero, size: view.drawableSize),
                               colorSpace: colorSpace)
                commandBuffer?.present(currentDrawable)
                commandBuffer?.commit()
            }
        }
    }
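On the image-fill problem mentioned above: one common approach (a sketch, not from the original post) is to transform the CIImage to aspect-fit the drawable before rendering, instead of rendering the unscaled image into the full drawable bounds:

// Hypothetical helper: scale and center a CIImage within the drawable.
func aspectFit(_ image: CIImage, into drawableSize: CGSize) -> CIImage {
    let scale = min(drawableSize.width / image.extent.width,
                    drawableSize.height / image.extent.height)
    let scaled = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // Translate so the scaled image is centered in the drawable.
    let dx = (drawableSize.width - scaled.extent.width) / 2 - scaled.extent.origin.x
    let dy = (drawableSize.height - scaled.extent.height) / 2 - scaled.extent.origin.y
    return scaled.transformed(by: CGAffineTransform(translationX: dx, y: dy))
}

You would then render aspectFit(exposureFilter.outputImage!, into: view.drawableSize) in draw(in:) rather than the raw filter output.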

Getting error [UIImage extent]: unrecognized selector sent to instance

I'm trying to apply a radial blur to my UIImageView, but when I try this I get the error:
[UIImage extent]: unrecognized selector sent to instance
The code I'm using is from the example at:
https://developer.apple.com/documentation/coreimage/selectively_focusing_on_an_image
let h = bgImage.image!.size.height
let w = bgImage.image!.size.width

guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
let imageCenter = CIVector(x: 0.55 * w, y: 0.6 * h)
radialMask.setValue(imageCenter, forKey: kCIInputCenterKey)
radialMask.setValue(0.2 * h, forKey: "inputRadius0")
radialMask.setValue(0.3 * h, forKey: "inputRadius1")
radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 0), forKey: "inputColor0")
radialMask.setValue(CIColor(red: 0, green: 1, blue: 0, alpha: 1), forKey: "inputColor1")

guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    return
}
maskedVariableBlur.setValue(bgImage.image, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(10, forKey: kCIInputRadiusKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
let selectivelyFocusedCIImage = maskedVariableBlur.outputImage
where bgImage is a UIImageView.
What am I doing wrong here?
You need

guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}

followed by

let coreImage = CIImage(cgImage: cgimg)
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)

because the filter's kCIInputImageKey expects a CIImage, not a UIImage.
I see two issues:
- First, you are force-unwrapping the optional:
let h = bgImage.image!.size.height
let w = bgImage.image!.size.width
Please use guard here to avoid unexpected crashes.
- Second, bgImage.image!.size is the UIImage's size; here you should be working with the underlying CIImage's dimensions, e.g. bgImage.image?.ciImage and its extent.
Please refer to the similar post below. I hope this helps:
Unrecognized selector sent to UIImage?
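Putting both points together, a guarded version of the failing section might look like this (a sketch; it keeps the question's radialMask setup and only changes how the input image is obtained and passed to the blur filter):

guard let uiImage = bgImage.image, let cgimg = uiImage.cgImage else {
    print("imageView doesn't have a CGImage-backed image")
    return
}
let inputCIImage = CIImage(cgImage: cgimg)
let h = uiImage.size.height
let w = uiImage.size.width
// ... build radialMask exactly as in the question ...
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    return
}
maskedVariableBlur.setValue(inputCIImage, forKey: kCIInputImageKey) // a CIImage, not a UIImage
maskedVariableBlur.setValue(10, forKey: kCIInputRadiusKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
let selectivelyFocusedCIImage = maskedVariableBlur.outputImage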

Pixellating a UIImage returns UIImage with a different size

I'm using an extension to pixellate my images like the following:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
The problem is that the image represented by self here does not have the same size as the one I create using UIImage(ciImage: output).
For example, using that code:
print("image.size BEFORE : \(image.size)")
if let imagePixellated = image.pixellated(scale: 48) {
image = imagePixellated
print("image.size AFTER : \(image.size)")
}
will print:
image.size BEFORE : (400.0, 298.0)
image.size AFTER : (848.0, 644.0)
Not the same size and not the same ratio.
Any idea why?
EDIT:
I added some prints in the extension, as follows:
func pixellated(scale: Int = 8) -> UIImage? {
    guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
    print("UIIMAGE : \(self.size)")
    print("ciImage.extent.size : \(ciImage.extent.size)")
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    print("output : \(output.extent.size)")
    return UIImage(ciImage: output)
}
And here are the outputs:
UIIMAGE : (250.0, 166.5)
ciImage.extent.size : (500.0, 333.0)
output : (548.0, 381.0)
You have two problems:
- self.size is measured in points. self's size in pixels is actually self.size multiplied by self.scale.
- The CIPixellate filter changes the bounds of its image.
To fix problem one, you can simply set the scale property of the returned UIImage to be the same as self.scale:
return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
But you'll find this still isn't quite right. That's because of problem two. For problem two, the simplest solution is to crop the output CIImage:
// Must use self.scale, to disambiguate from the scale parameter
let floatScale = CGFloat(self.scale)
let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
let cropRect = CGRect(origin: CGPoint.zero, size: pixelSize)
guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
This will give you an image of the size you want.
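Putting both fixes together, the whole extension might look like this (a sketch combining the two corrections above, keeping the question's Swift 3-era cropping(to:) API):

extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
            let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)

        // Fix two: crop the output back to the input's pixel size,
        // because CIPixellate changes the bounds of its image.
        let floatScale = CGFloat(self.scale) // self.scale, not the scale parameter
        let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
        let cropRect = CGRect(origin: .zero, size: pixelSize)
        guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }

        // Fix one: carry over scale and orientation so points match again.
        return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
    }
}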
Now, your next question may be, "why is there a thin, dark border around my pixellated images?" Good question! But ask a new question for that.

Swift 3 - How do I improve image quality for Tesseract?

I am using Swift 3 to build a mobile app that allows the user to take a picture and run Tesseract OCR over the resulting image.
However, I've been trying to increase the quality of the scan and it doesn't seem to be helping much. I've segmented the photo into a more "zoomed in" region that I want to recognize, and even tried making it black and white. Are there any strategies for "enhancing" or optimizing the picture quality/size so that Tesseract can recognize it better? Thanks!
tesseract.image = // the camera photo here
tesseract.recognize()
print(tesseract.recognizedText)
I got these errors and have no idea what to do:
Error in pixCreateHeader: depth must be {1, 2, 4, 8, 16, 24, 32}
Error in pixCreateNoInit: pixd not made
Error in pixCreate: pixd not made
Error in pixGetData: pix not defined
Error in pixGetWpl: pix not defined
2017-03-11 22:22:30.019717 ProjectName[34247:8754102] Cannot convert image to Pix with bpp = 64
Error in pixSetYRes: pix not defined
Error in pixGetDimensions: pix not defined
Error in pixGetColormap: pix not defined
Error in pixClone: pixs not defined
Error in pixGetDepth: pix not defined
Error in pixGetWpl: pix not defined
Error in pixGetYRes: pix not defined
Please call SetImage before attempting recognition.Please call SetImage before attempting recognition.2017-03-11 22:22:30.026605 EOB-Reader[34247:8754102] No recognized text. Check that -[Tesseract setImage:] is passed an image bigger than 0x0.
I've been using Tesseract fairly successfully in Swift 3 using the following:
func performImageRecognition(_ image: UIImage) {
    let tesseract = G8Tesseract(language: "eng")
    var textFromImage: String?
    tesseract?.engineMode = .tesseractCubeCombined
    tesseract?.pageSegmentationMode = .singleBlock
    tesseract?.image = image
    tesseract?.recognize()
    textFromImage = tesseract?.recognizedText
    print(textFromImage!)
}
I also found that pre-processing the image helped. I added the following extension to UIImage:
import UIKit
import CoreImage

extension UIImage {
    // Convert to grayscale using the CIPhotoEffectNoir filter.
    func toGrayScale() -> UIImage {
        let greyImage = UIImageView()
        greyImage.image = self
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
        currentFilter!.setValue(CIImage(image: greyImage.image!), forKey: kCIInputImageKey)
        let output = currentFilter!.outputImage
        let cgimg = context.createCGImage(output!, from: output!.extent)
        let processedImage = UIImage(cgImage: cgimg!)
        greyImage.image = processedImage
        return greyImage.image!
    }

    // Binarise (high-contrast monochrome) using CIPhotoEffectMono.
    func binarise() -> UIImage {
        let glContext = EAGLContext(api: .openGLES2)!
        let ciContext = CIContext(eaglContext: glContext, options: [kCIContextOutputColorSpace: NSNull()])
        let filter = CIFilter(name: "CIPhotoEffectMono")
        filter!.setValue(CIImage(image: self), forKey: "inputImage")
        let outputImage = filter!.outputImage
        let cgimg = ciContext.createCGImage(outputImage!, from: (outputImage?.extent)!)
        return UIImage(cgImage: cgimg!)
    }

    // Scale so the longest side is 640, preserving aspect ratio.
    func scaleImage() -> UIImage {
        let maxDimension: CGFloat = 640
        var scaledSize = CGSize(width: maxDimension, height: maxDimension)
        var scaleFactor: CGFloat
        if self.size.width > self.size.height {
            scaleFactor = self.size.height / self.size.width
            scaledSize.width = maxDimension
            scaledSize.height = scaledSize.width * scaleFactor
        } else {
            scaleFactor = self.size.width / self.size.height
            scaledSize.height = maxDimension
            scaledSize.width = scaledSize.height * scaleFactor
        }
        UIGraphicsBeginImageContext(scaledSize)
        self.draw(in: CGRect(x: 0, y: 0, width: scaledSize.width, height: scaledSize.height))
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }

    // Redraw the image so its orientation metadata is normalized to .up.
    func orientate(img: UIImage) -> UIImage {
        if img.imageOrientation == UIImageOrientation.up {
            return img
        }
        UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
        let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
        img.draw(in: rect)
        let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return normalizedImage
    }
}
And then called this before passing the image to performImageRecognition:
func processImage() {
    self.imageView.image! = self.imageView.image!.toGrayScale()
    self.imageView.image! = self.imageView.image!.binarise()
    self.imageView.image! = self.imageView.image!.scaleImage()
}
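One small wrinkle in the extension above: orientate(img:) takes the image as a parameter even though it is declared as an instance method. If you prefer a uniform call style with the other helpers, an instance-method variant (a sketch, same logic applied to self) would be:

func orientated() -> UIImage {
    if imageOrientation == .up {
        return self
    }
    // Redraw into a context to bake the orientation into the pixels.
    UIGraphicsBeginImageContextWithOptions(size, false, scale)
    draw(in: CGRect(origin: .zero, size: size))
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return normalizedImage
}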
Hope this helps

Increase/decrease brightness of image using UISlider?

I am building an iOS app based on image operations.
I want to increase and decrease the brightness of an image with a slider.
I have used this code:
@IBOutlet var imageView: UIImageView!
@IBOutlet var uiSlider: UISlider!

override func viewDidLoad() {
    super.viewDidLoad()
    var image = UIImage(named: "54715869.jpg")
    imageView.image = image
    uiSlider.minimumValue = -0.2
    uiSlider.maximumValue = 0.2
    uiSlider.value = 0.0
    uiSlider.maximumTrackTintColor = UIColor(red: 0.1, green: 0.7, blue: 0, alpha: 1)
    uiSlider.minimumTrackTintColor = UIColor.blackColor()
    uiSlider.addTarget(self, action: "brightnesssliderMove:", forControlEvents: UIControlEvents.TouchUpInside)
    uiSlider.addTarget(self, action: "brightnesssliderMove:", forControlEvents: UIControlEvents.TouchUpOutside)
}
func brightnesssliderMove(sender: UISlider) {
    var filter = CIFilter(name: "CIColorControls")
    filter.setValue(NSNumber(float: sender.value), forKey: "inputBrightness")
    var image = self.imageView.image
    var rawimgData = CIImage(image: image)
    filter.setValue(rawimgData, forKey: "inputImage")
    var outpuImage = filter.valueForKey("outputImage")
    imageView.image = UIImage(CIImage: outpuImage as CIImage)
}
Now my question: when I increase the slider value it does increase the brightness of the image, but only the first time I change the slider position.
When I change the position of the slider again, I get this error:
fatal error: unexpectedly found nil while unwrapping an Optional value.
The error occurs at this line:
imageView.image = UIImage(CIImage: outpuImage as CIImage)
This time rawimgData comes back nil.
I found the answer to my question. Here is how I did it:
import CoreImage

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    var aCIImage = CIImage()
    var contrastFilter: CIFilter!
    var brightnessFilter: CIFilter!
    var context = CIContext()
    var outputImage = CIImage()
    var newUIImage = UIImage()

    override func viewDidLoad() {
        super.viewDidLoad()
        var aUIImage = imageView.image
        var aCGImage = aUIImage?.CGImage
        aCIImage = CIImage(CGImage: aCGImage)
        context = CIContext(options: nil)
        contrastFilter = CIFilter(name: "CIColorControls")
        contrastFilter.setValue(aCIImage, forKey: "inputImage")
        brightnessFilter = CIFilter(name: "CIColorControls")
        brightnessFilter.setValue(aCIImage, forKey: "inputImage")
    }

    func sliderContrastValueChanged(sender: UISlider) {
        contrastFilter.setValue(NSNumber(float: sender.value), forKey: "inputContrast")
        outputImage = contrastFilter.outputImage
        var cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())
        newUIImage = UIImage(CGImage: cgimg)!
        imageView.image = newUIImage
    }

    func sliderValueChanged(sender: UISlider) {
        brightnessFilter.setValue(NSNumber(float: sender.value), forKey: "inputBrightness")
        outputImage = brightnessFilter.outputImage
        let imageRef = context.createCGImage(outputImage, fromRect: outputImage.extent())
        newUIImage = UIImage(CGImage: imageRef)!
        imageView.image = newUIImage
    }
}
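The code above omits the slider wiring. A minimal sketch, assuming two sliders (contrastSlider and brightnessSlider) connected as outlets, in the same era's target-action style as the question:

// In viewDidLoad, after the filters are set up:
contrastSlider.addTarget(self, action: "sliderContrastValueChanged:",
    forControlEvents: UIControlEvents.ValueChanged)
brightnessSlider.addTarget(self, action: "sliderValueChanged:",
    forControlEvents: UIControlEvents.ValueChanged)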
You can use AlamofireImage; here's an example of how you can do it:
@IBAction func brightnessChanged(sender: UISlider) {
    let filterParameters = ["inputBrightness": sender.value]
    imageView.image = originalImage.af_imageWithAppliedCoreImageFilter("CIColorControls", filterParameters: filterParameters)
}
