How to remove the border/drop shadow from a UIImageView? - ios

I've been generating QR Codes using the CIQRCodeGenerator CIFilter and it works very well:
But when I resize the UIImageView and generate again
@IBAction func sizeSliderValueChanged(_ sender: UISlider) {
qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sender.value), y: CGFloat(sender.value))
}
I get a weird Border/DropShadow around the image sometimes:
How can I prevent it from appearing, or remove it altogether?
I have no idea what it is exactly (a border, a drop shadow, or a mask), as I'm new to Swift/iOS.
Thanks in advance!
PS. I didn't post the QR-code generating code as it's pretty boilerplate and can be found in many tutorials, but let me know if you need it.
EDIT:
code to generate the QR Code Image
private func generateQRCode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
return nil
}
filter.setValue(data, forKey: "inputMessage")
guard let qrEncodedImage = filter.outputImage else {
return nil
}
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let transform = CGAffineTransform(scaleX: scaleX, y: scaleY )
if let outputImage = filter.outputImage?.applying(transform) {
return UIImage(ciImage: outputImage)
}
return nil
}
Code for button pressed
@IBAction func generateCodeButtonPressed(_ sender: CustomButton) {
if codeTextField.text == "" {
return
}
let newEncodedMessage = codeTextField.text!
let encodedImage: UIImage = generateQRCode(from: newEncodedMessage)!
qrImageView.image = encodedImage
qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sizeSlider.value), y: CGFloat(sizeSlider.value))
qrImageView.layer.minificationFilter = kCAFilterNearest
qrImageView.layer.magnificationFilter = kCAFilterNearest
}

It's a little hard to be sure without seeing the code you use to generate the image for the image view, but that looks like a resizing artifact. The CIImage may be black or transparent outside the edges of the QR code, and when the image view's size doesn't match the image's intended size, the edges get fuzzy: either the image outside its boundaries or the image view's background color starts bleeding in. You might be able to fix it by setting the image view layer's minification and magnification filters to nearest neighbor, like so:
imageView.layer.minificationFilter = kCAFilterNearest
imageView.layer.magnificationFilter = kCAFilterNearest
Update, having seen the code you added: you're currently resizing the image twice, first with the call to applying(transform) and then by setting a transform on the image view itself. I suspect the first resize adds the blurriness, which the minification/magnification filters I suggested earlier then can't fix. Try shortening generateQRCode to this:
private func generateQRCode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
return nil
}
filter.setValue(data, forKey: "inputMessage")
if let qrEncodedImage = filter.outputImage {
return UIImage(ciImage: qrEncodedImage)
}
return nil
}
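If you do prefer to scale the CIImage itself rather than the view, one approach is to scale by a whole-number factor so the QR modules land exactly on pixel boundaries. A sketch, not the asker's code; the function name and sideLength parameter are mine:

```swift
import UIKit
import CoreImage

// Sketch: scale the QR output by an integer factor so module edges stay on
// pixel boundaries and never get interpolated into a fuzzy border.
private func crispQRCode(from string: String, sideLength: CGFloat) -> UIImage? {
    guard let data = string.data(using: .ascii),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let output = filter.outputImage else { return nil }
    // An integer scale avoids the fractional-pixel edges that can look like a border
    let scale = max(1, floor(sideLength / output.extent.width))
    let scaled = output.applying(CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}
```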

I think the problem here is that you resize it to be non-square (your scaleX isn't always the same as scaleY), while a QR code is always square, so both sides need the same scale factor to get a non-blurred image.
Something like:
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let scale = max(scaleX, scaleY)
let transform = CGAffineTransform(scaleX: scale, y: scale)
will make sure you get a square, non-bordered, non-blurred UIImage.

I guess the issue is with the image (PNG) file, not with your UIImageView. Try another image and I hope it will work!

Related

Removing a colour from a UIImage

I have an image:
As you can clearly see, the barcode doesn't fit in very well with the UI :/
I thought a potential fix for this was to "green screen" out the black in the image, leaving only the white part of the barcode.
The barcode itself is generated on the fly.
func generateBarcode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
filter.setValue(data, forKey: "inputMessage")
let transform = CGAffineTransform(scaleX: 3, y: 3)
if let output = filter.outputImage?.applying(transform) {
let invertFilter = CIFilter(name: "CIColorInvert")!
invertFilter.setValue(output, forKey: kCIInputImageKey)
return UIImage(ciImage: (invertFilter.outputImage?.applying(transform))!) //TODO: Remove force unwrap
}
}
return nil
}
Now I've heard I can use a "CIColorCube" filter but haven't been able to work out to use it.
Is removing the black part possible? And, if so, would you be able to help me out?
Thanks
There is a filter (CIMaskToAlpha) for taking gray images and using the gray level as an alpha value. For black and white images, this makes black transparent and white opaque-white, which I think is what you want.
func generateBarcode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
filter.setValue(data, forKey: "inputMessage")
let transform = CGAffineTransform(scaleX: 3, y: 3)
if let outputBarcode = filter.outputImage?.applying(transform) {
let invertFilter = CIFilter(name: "CIColorInvert")!
invertFilter.setValue(outputBarcode, forKey: kCIInputImageKey)
if let outputInvert = invertFilter.outputImage {
let mask = CIFilter(name: "CIMaskToAlpha")!
mask.setValue(outputInvert, forKey: kCIInputImageKey)
// The 3x scale is already applied above; reapplying it at each step would compound the scale
return UIImage(ciImage: mask.outputImage!) //TODO: Remove force unwrap
}
}
}
return nil
}
If you put the resulting image on a background that is blue, you will see just the white bars.
PS: When you say "green screen", what you mean (in imaging language) is the pixel is transparent. This is represented by the alpha component of the color (R, G, B, A). This filter sets each pixel to white, but uses the original color to set the alpha. White (which is 1.0) is a fully opaque alpha, and Black (which is 0.0) is fully transparent. If you had other gray levels, those pixels would be semi-transparent white.
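For what it's worth, the same chain can be written with CIImage's applyingFilter(_:withInputParameters:) convenience. A sketch, assuming `barcode` is the already-scaled CIImage from the generator filter:

```swift
import CoreImage
import UIKit

// Sketch: invert, then turn the gray level into alpha, without re-running
// the scale transform at each step. `barcode` is an assumed name for the
// scaled output of the CICode128BarcodeGenerator filter.
func transparentBarcode(from barcode: CIImage) -> UIImage {
    let transparent = barcode
        .applyingFilter("CIColorInvert", withInputParameters: nil)
        .applyingFilter("CIMaskToAlpha", withInputParameters: nil)
    return UIImage(ciImage: transparent)
}
```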

Pixellating a UIImage returns UIImage with a different size

I'm using an extension to pixellate my images like the following:
func pixellated(scale: Int = 8) -> UIImage? {
guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(scale, forKey: kCIInputScaleKey)
guard let output = filter.outputImage else { return nil }
return UIImage(ciImage: output)
}
The problem is that the image represented by self here doesn't have the same size as the one I create using UIImage(ciImage: output).
For example, using that code:
print("image.size BEFORE : \(image.size)")
if let imagePixellated = image.pixellated(scale: 48) {
image = imagePixellated
print("image.size AFTER : \(image.size)")
}
will print:
image.size BEFORE : (400.0, 298.0)
image.size AFTER : (848.0, 644.0)
Not the same size and not the same ratio.
Any idea why?
EDIT:
I added some prints in the extension as following:
func pixellated(scale: Int = 8) -> UIImage? {
guard let ciImage = CIImage(image: self), let filter = CIFilter(name: "CIPixellate") else { return nil }
print("UIIMAGE : \(self.size)")
print("ciImage.extent.size : \(ciImage.extent.size)")
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(scale, forKey: kCIInputScaleKey)
guard let output = filter.outputImage else { return nil }
print("output : \(output.extent.size)")
return UIImage(ciImage: output)
}
And here are the outputs:
UIIMAGE : (250.0, 166.5)
ciImage.extent.size : (500.0, 333.0)
output : (548.0, 381.0)
You have two problems:
self.size is measured in points. self's size in pixels is actually self.size multiplied by self.scale.
The CIPixellate filter changes the bounds of its image.
To fix problem one, you can simply set the scale property of the returned UIImage to be the same as self.scale:
return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
But you'll find this still isn't quite right. That's because of problem two. For problem two, the simplest solution is to crop the output CIImage:
// Must use self.scale, to disambiguate from the scale parameter
let floatScale = CGFloat(self.scale)
let pixelSize = CGSize(width: size.width * floatScale, height: size.height * floatScale)
let cropRect = CGRect(origin: CGPoint.zero, size: pixelSize)
guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
This will give you an image of the size you want.
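Putting both fixes together, the full extension might look like this (a sketch under the same assumptions as the answer above):

```swift
import UIKit
import CoreImage

extension UIImage {
    // Sketch: pixellate, then fix both problems from the answer:
    // 1. crop the expanded extent back to the original pixel size,
    // 2. carry the points-per-pixel scale and orientation over to the result.
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
              let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)
        // self.size is in points; multiply by self.scale to get pixels
        let pixelSize = CGSize(width: size.width * self.scale,
                               height: size.height * self.scale)
        let cropRect = CGRect(origin: .zero, size: pixelSize)
        guard let output = filter.outputImage?.cropping(to: cropRect) else { return nil }
        return UIImage(ciImage: output, scale: self.scale, orientation: imageOrientation)
    }
}
```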
Now, your next question may be, "why is there a thin, dark border around my pixellated images?" Good question! But ask a new question for that.

Trying to make the kCIInputCenterKey for image filters based on touch location and The y coordinate is flipped

I would appreciate some help converting my X and Y touch coordinates into a proper CIVector. The code below is everything I'm using to make a touch set the kCIInputCenterKey coordinate for the bump distortion. It somewhat works, but the Y coordinate is flipped: X is correct, but if I touch the top of the image, the filter is applied to the opposite, lower part of the image, while retaining the correct X-axis location.
var xCord:CGFloat = 0.0
var yCord:CGFloat = 0.0
func didTapImage(gesture: UIGestureRecognizer) {
let point = gesture.location(in: gesture.view)
print(point)
xCord = point.x
yCord = point.y
print ("\(point) and x\(xCord) and \(yCord)")
}
#IBAction func filter(_ sender: Any) {
guard let image = self.imageView.image?.cgImage else { return }
let openGLContext = EAGLContext(api: .openGLES3)
let context = CIContext(eaglContext: openGLContext!)
let ciImage = CIImage(cgImage: image)
let filter = CIFilter(name: "CIBumpDistortion")
filter?.setValue(ciImage, forKey: kCIInputImageKey)
filter?.setValue((CIVector(x: xCord, y: yCord)), forKey: kCIInputCenterKey)
filter?.setValue(300.0, forKey: kCIInputRadiusKey)
filter?.setValue(2.50, forKey: kCIInputScaleKey)
centerScrollViewContents()
if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage{
self.imageView.image = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
}
}
The problem is that yCord ends up flipped somewhere between being a CGFloat and becoming a CIVector. When I try to correct this with something like:
filter?.setValue((CIVector(x: xCord, y: -yCord)), forKey: kCIInputCenterKey)
or
filter?.setValue((CIVector(x: xCord, y: yCord * (-1))), forKey: kCIInputCenterKey)
It causes the entire image to jump up or down in the image view and the filter doesn't get applied to it anywhere. Not sure where to go from here since the value doesn't want to be flipped with simple math.
Any help would be greatly appreciated!
I found the solution.
To flip the Y correctly, all I had to do was:
filter?.setValue((CIVector(x: xCord, y: CGFloat(image.height) - yCord)), forKey: kCIInputCenterKey)
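More generally, UIKit uses a top-left origin while Core Image uses a bottom-left origin, so the conversion is just a subtraction from the image height. A sketch; the helper name is mine:

```swift
import CoreGraphics

// Sketch: convert a UIKit point (origin at top-left) to Core Image
// coordinates (origin at bottom-left). `imageHeight` must be in the same
// units as `point` (here, pixels of the filtered image).
func coreImagePoint(from point: CGPoint, imageHeight: CGFloat) -> CGPoint {
    return CGPoint(x: point.x, y: imageHeight - point.y)
}
```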

Swift Core Image filter over filtered image

My problem is as follows: I made a simple app with a UIViewController and a UIView (FilterView). On my view I added a UIButton and a UIImageView. What I want is that when you push the button, a sepia filter is applied to the image:
func sepiaButtonClicked( sender:UIButton ){
let context = CIContext(options: nil)
let image = CIImage(image: theView.imageView.image)
let filter = CIFilter(name: "CISepiaTone", withInputParameters: [
kCIInputImageKey : image,
kCIInputIntensityKey : NSNumber(double: 0.5)
])
let imageWithFilter = filter.outputImage
theView.imageView.image = UIImage(CIImage: imageWithFilter)
}
theView refers to the UIView with this piece of code on top
var theView:FilterView {
get {
return view as! FilterView
}
}
Now when I push the button, the filter is applied as I wanted, but if I press it again afterwards it gives a fatal error: 'unexpectedly found nil while unwrapping an Optional value'. I think this is the image (the one I pass for kCIInputImageKey).
Can anyone explain why this is happening? I can't figure out the difference between the first and second click on the button. As I see it, this code just replaces the UIImage with the new one, so it should be ready to be triggered again?
Thx in advance,
Pieter-Jan De Bruyne
Try this. The crash most likely happens because after the first tap the image view holds a CIImage-backed UIImage, and CIImage(image:) returns nil for those; rendering through a CIContext produces a CGImage-backed UIImage that can be filtered again:
func sepiaButtonClicked( sender:UIButton ){
let currentImage = self.imageView.image
let inputImage = CIImage(image: currentImage!)
let filter = CIFilter(name: "CISepiaTone")
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(0.5, forKey: kCIInputIntensityKey)
let context = CIContext(options: nil)
let imageWithFilter = filter.outputImage
// Rendering through the context yields a CGImage-backed UIImage,
// which CIImage(image:) can read again on the next tap
let newOutputImage = context.createCGImage(imageWithFilter, fromRect: imageWithFilter.extent())
imageView.image = UIImage(CGImage: newOutputImage)
}
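An alternative sketch that sidesteps the issue entirely: keep the untouched original in a property and always filter from that, so a CIImage-backed UIImage is never fed back into CIImage(image:). Property and function names are mine, written in Swift 3 style rather than the question's Swift 2:

```swift
import UIKit
import CoreImage

class FilterViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!
    // Sketch: the pristine source image, assumed to be set once (e.g. in viewDidLoad)
    var originalImage: UIImage?

    func applySepia(intensity: Float = 0.5) {
        guard let original = originalImage,
              let input = CIImage(image: original),
              let filter = CIFilter(name: "CISepiaTone") else { return }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(intensity, forKey: kCIInputIntensityKey)
        let context = CIContext(options: nil)
        // Render to CGImage so repeated taps keep working
        guard let output = filter.outputImage,
              let rendered = context.createCGImage(output, from: output.extent) else { return }
        imageView.image = UIImage(cgImage: rendered)
    }
}
```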

Custom CIFilter subclass returns image with scale out of whack

I'm new to writing CIFilters, and I'm stuck on this problem. Here is my source image being displayed in a UIImageView with contentMode set to Aspect Fit:
Here is the image returned from my CIFilter object being displayed in the same UIImageView:
I've tried copying over the original scale and orientation from my source image to the UIImage being constructed from the CIImage returned from the filter with no luck.
What might be causing this?
I'm thinking I'm doing something wrong in my CIFilter subclass. I'm starting to suspect my outputImage method:
func outputImage() -> CIImage? {
if let inputImage = inputImage {
if let kernel = kernel {
let args = [inputImage as AnyObject]
// Domain of definition: the input extent, expanded by 1px on each side
let dod = inputImage.extent().rectByInsetting(dx: -1, dy: -1)
return kernel.applyWithExtent(dod, roiCallback: {
(index, rect) in
return rect.rectByInsetting(dx: -1, dy: -1)
}, arguments: args)
}
}
return nil
}
