I would like to pixelate and unpixelate a UIImage or a UIImageView using Swift, but I have no idea how to do that.
Maybe using effects, layers, or something like that?
This is a very easy task on iOS.
Pixelation
You can use the CIPixellate Core Image filter.
func pixellated(image: UIImage) -> UIImage? {
    guard
        let ciImage = CIImage(image: image),
        let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
Result
The default inputScale value is 8, but you can increase or decrease the effect by setting the parameter manually.
filter.setValue(8, forKey: "inputScale")
// ^
// change this
Extension
You can also define the following extension:
extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard
            let ciImage = CIImage(image: self),
            let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: "inputImage")
        filter.setValue(scale, forKey: "inputScale")
        guard let output = filter.outputImage else { return nil }
        return UIImage(ciImage: output)
    }
}
Unpixelation
The mechanism is exactly the same; you just need to use a different filter. You can find the full list of filters here (but check which parameters are available/required for each filter). I think CIGaussianBlur can do the job.
Of course, don't expect to feed in a low-resolution, super-pixellated image and get a high-definition one back. That technology is available only in The X-Files. :D
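To make that concrete, here is a sketch of that idea as another UIImage extension. Note the `unpixellated` name and the default radius are my own choices, not an Apple API, and the blur only softens the pixel blocks — it does not recover detail:

```swift
import UIKit
import CoreImage

extension UIImage {
    /// Softens pixelation by blurring; it cannot restore lost detail.
    func unpixellated(radius: CGFloat = 4) -> UIImage? {
        guard
            let ciImage = CIImage(image: self),
            let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter.outputImage else { return nil }
        // The blur grows the image's extent, so crop back to the original rect.
        return UIImage(ciImage: output.cropped(to: ciImage.extent))
    }
}
```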
The Mushroom image has been taken from here.
Related
I use this code to blur my UIImage
extension UIImage {
    func blurred(radius: CGFloat) -> UIImage {
        let ciContext = CIContext(options: nil)
        guard let cgImage = cgImage else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        guard let ciFilter = CIFilter(name: "CIGaussianBlur") else { return self }
        ciFilter.setValue(inputImage, forKey: kCIInputImageKey)
        ciFilter.setValue(radius, forKey: "inputRadius")
        guard let resultImage = ciFilter.value(forKey: kCIOutputImageKey) as? CIImage else { return self }
        guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
        return UIImage(cgImage: cgImage2)
    }
}
But it takes too long to return an image from this operation. This single line takes about 2 seconds:
guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
I have not tested it on a real device, but I'm not sure the code is efficient.
That code looks fine-ish, though you should cache the image it returns rather than calling it repeatedly if at all possible; as Matt points out in the comments below, you should also use a shared CIContext rather than setting a new one up every time.
The performance issue you’re seeing is due to the simulator having very different performance characteristics from real hardware. It sounds like Core Image is either using the simulator’s emulated OpenGL ES interface (which is slow) or the CPU (which is slower). Testing it on an iOS device will give you a much better idea of the performance you should expect.
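As a sketch of that advice, here is the same extension restructured around a single shared CIContext. The `static` cache is my own arrangement, not a prescribed pattern; the point is simply that the context is created once rather than on every call:

```swift
import UIKit
import CoreImage

extension UIImage {
    // Creating a CIContext is expensive; share one across all calls.
    private static let sharedCIContext = CIContext(options: nil)

    func blurredShared(radius: CGFloat) -> UIImage {
        guard
            let cgImage = cgImage,
            let filter = CIFilter(name: "CIGaussianBlur") else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard
            let output = filter.outputImage,
            // Render with the shared context, cropped to the input's extent.
            let result = UIImage.sharedCIContext.createCGImage(output, from: inputImage.extent)
        else { return self }
        return UIImage(cgImage: result)
    }
}
```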
I want to remove a black shadow border around the image when applying a blur filter.
Please review the screenshots attached below. The blur function works correctly, but I want to remove the black shadow. I only want to blur the image; I don't want to apply any color effect along with the blur. Please let me know what I missed...
Here I have uploaded the images, due to my low reputation points:
https://drive.google.com/open?id=1KtVgqRXOmIEQXh9IMyWNAlariL0hcJBN
https://drive.google.com/open?id=1l2eLq7VwFPb3-SfIokW0Ijhk2jqUvjlU
Here is my function to apply a blur effect to a particular image:
Parameters:
doBlurImage - the main image to blur
imageBlurValue - blur value from 0 to 50 (CGFloat)
func makeBlurImage(doBlurImage : UIImage, imageBlurValue : CGFloat) -> UIImage {
    let beginImage = CIImage(image: doBlurImage)
    let currentFilter = CIFilter(name: "CIGaussianBlur")
    currentFilter!.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter!.setValue(imageBlurValue, forKey: kCIInputRadiusKey)
    let cropFilter = CIFilter(name: "CICrop")
    cropFilter!.setValue(currentFilter!.outputImage, forKey: kCIInputImageKey)
    cropFilter!.setValue(CIVector(cgRect: beginImage!.extent), forKey: "inputRectangle")
    let output = cropFilter!.outputImage
    return UIImage(ciImage: output!)
}
I found a different way to fix this problem.
Apple says:
Applying a clamp effect before the blur filter avoids edge softening by making the original image opaque in all directions.
So we should apply the CIAffineClamp filter to avoid the black shadow. The clampedToExtent() function returns a new image created by making the pixel colors along its edges extend infinitely in all directions, and it is already a member of the CIImage class, so we can use it without writing any extra function.
So the implementation of the solution will look like this:
fileprivate final func blurImage(image: UIImage?, blurAmount: CGFloat, completionHandler: @escaping (UIImage?) -> Void) {
    guard let inputImage = image else {
        print("Input image is null!")
        completionHandler(nil); return
    }
    guard let ciImage = CIImage(image: inputImage) else {
        print("Cannot create ci image from ui image!")
        completionHandler(nil); return
    }
    let blurFilter = CIFilter(name: "CIGaussianBlur")
    blurFilter?.setValue(ciImage.clampedToExtent(), forKey: kCIInputImageKey)
    blurFilter?.setValue(blurAmount, forKey: kCIInputRadiusKey)
    guard let openGLES3 = EAGLContext(api: .openGLES3) else {
        print("Cannot create openGLES3 context!")
        completionHandler(nil); return
    }
    let context = CIContext(eaglContext: openGLES3)
    guard let ciImageResult = blurFilter?.outputImage else {
        print("Cannot get output image from filter!")
        completionHandler(nil); return
    }
    guard let resultImage = context.createCGImage(ciImageResult, from: ciImage.extent) else {
        print("Cannot create output image from filtered image extent!")
        completionHandler(nil); return
    }
    completionHandler(UIImage(cgImage: resultImage))
}
Note: Creating a context is expensive, so you may want to create it outside of your function.
These are the possible options for generating a blur effect in iOS:
CIGaussianBlur will generate a blur effect based on the content of the image.
UIVisualEffectView will generate a blur effect based on the style of the UIVisualEffectView. The blur styles in UIVisualEffectView are
.extraLight, .light, .dark, .extraDark (tvOS only), .regular, and .prominent.
Suggested option - GPUImage - you can achieve the best blur effect using the GPUImage processing library.
Blur effect using GPUImage:
var resultImage = UIImage()
let gaussianBlur = GaussianBlur()
gaussianBlur.blurRadiusInPixels = Float(ImageBlurValue)
let pictureInput = PictureInput(image: YourImage)
let pictureOutput = PictureOutput()
pictureOutput.imageAvailableCallback = { image in
    print("Process completed")
    resultImage = image
}
pictureInput --> gaussianBlur --> pictureOutput
pictureInput.processImage(synchronously: true)
pictureInput.removeAllTargets()
return resultImage
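For completeness, the UIVisualEffectView option mentioned above can be sketched like this (the function name and parameter names are placeholders of my own):

```swift
import UIKit

// Overlay a system blur on top of an existing image view.
// Note this blurs the rendered view, not the underlying UIImage data.
func addBlurOverlay(to imageView: UIImageView, style: UIBlurEffect.Style = .regular) {
    let blurView = UIVisualEffectView(effect: UIBlurEffect(style: style))
    blurView.frame = imageView.bounds
    blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    imageView.addSubview(blurView)
}
```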
Happy Coding!...:)
I've been having this problem for a while now and looked at dozens of answers here and can't seem to find anything that helps.
Scenario
I am generating a QR Code on the iOS side of my app and want this QR code to be sent to the WatchKit Extension that I am currently developing.
How I am generating the QR Code
func createQR(with string: String) -> UIImage? {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // Set the data to the contact data.
        filter.setValue(string, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            return UIImage(ciImage: codeImage)
        }
    }
    return nil
}
What I want next
I want to get the data from the QR image so that I can send it to the Apple Watch app, like so:
let data = UIImagePNGRepresentation(QRCodeImage);
But this always returns nil, because there is no image data backing the output from the filter.
Note: I know there is no data associated with the CIImage because it hasn't been rendered - it's just the output of the filter. I don't know how to get around this because I'm pretty new to image processing and such. :/
What I've Tried
Creating a cgImage from the filter.outputImage
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // Set the data to the contact data.
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.QRCode = UIImage(cgImage: cgImage)
            }
        }
    }
}
But this doesn't seem to work, because the image in the view is blank.
Creating a blank CIImage as Input Image
func update(with string: String) {
    let blankCiImage = CIImage(color: .white) // This probably isn't right...
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        filter.setValue(blankCiImage, forKey: kCIInputImageKey)
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.contactCode = UIImage(cgImage: cgImage)
                print(self.contactCode!)
                print(UIImagePNGRepresentation(self.contactCode!))
            }
        }
    }
}
This doesn't work either - my thought was to start with a blank image and then apply the filter on top of it, but I'm probably not doing this right.
My Goal
Literally, just to get the data from the generated QR code. Most threads suggest UIImage(ciImage: output), but this doesn't have any backing data.
If anyone could help me out with this, that'd be great. And any explanation on how it works would be wonderful too.
Edit: I don't believe this is the same as the marked duplicate - the marked duplicate is about editing an existing image using CI filters and getting that data, whereas this is about an image created solely through a CI filter, with no input image - QR codes. The other answer did not fully relate.
You have a couple of issues in your code. You need to convert your string to Data using the .isoLatin1 string encoding before passing it to the filter. Another issue is that to convert your CIImage to data you need to redraw/render the CIImage, and to prevent the image from blurring when scaled you need to apply a transform that increases its size:
extension StringProtocol {
    var qrCode: UIImage? {
        guard
            let data = data(using: .isoLatin1),
            let outputImage = CIFilter(name: "CIQRCodeGenerator",
                                       parameters: ["inputMessage": data, "inputCorrectionLevel": "M"])?.outputImage
        else { return nil }
        let size = outputImage.extent.integral
        let output = CGSize(width: 250, height: 250)
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        return UIGraphicsImageRenderer(size: output, format: format).image { _ in
            outputImage
                .transformed(by: .init(scaleX: output.width / size.width, y: output.height / size.height))
                .image
                .draw(in: .init(origin: .zero, size: output))
        }
    }
}

extension CIImage {
    var image: UIImage { .init(ciImage: self) }
}
Playground testing:
let link = "https://stackoverflow.com/questions/51178573/swift-image-data-from-ciimage-qr-code-how-to-render-cifilter-output?noredirect=1"
let image = link.qrCode!
let data = image.jpegData(compressionQuality: 1) // 154785 bytes
In an image editing app, I am trying to show clipped highlights and shadows using CIFilters.
Filter List
I know there isn't a single filter for this; it will have to be a combination of a few filters together.
Any ideas? Thanks in advance.
CIFilters can be chained one after another. For example, I can first adjust the shadows on an image and then crop it.
func addFilters(toImage image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let ciImage = CIImage(cgImage: cgImage)

    // Add the shadow-adjust filter.
    guard let shadowAdjust = CIFilter(name: "CIHighlightShadowAdjust") else { return nil }
    shadowAdjust.setValue(ciImage, forKey: kCIInputImageKey)
    shadowAdjust.setValue(1, forKey: "inputHighlightAmount")
    shadowAdjust.setValue(3, forKey: "inputShadowAmount")
    guard let output1 = shadowAdjust.outputImage else { return nil }

    // Now crop the result to 80%. Note: the crop filter must take the
    // previous filter's output as its input, not the original image.
    guard let crop = CIFilter(name: "CICrop") else { return nil }
    crop.setValue(output1, forKey: kCIInputImageKey)
    crop.setValue(CIVector(x: output1.extent.origin.x, y: output1.extent.origin.y, z: output1.extent.size.width * 0.8, w: output1.extent.size.height * 0.8), forKey: "inputRectangle")
    guard let output2 = crop.outputImage else { return UIImage(ciImage: output1) }

    // The image is now cropped and shadow-adjusted.
    return UIImage(ciImage: output2)
}
I want to implement an image downscaling algorithm for iOS. After reading that Core Image's CILanczosScaleTransform was a great fit for it, I implemented it the following way:
public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Couldn't create CIImage from image in input")
    }
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        fatalError("The filter CILanczosScaleTransform is unavailable on this device.")
    }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else {
        fatalError("No output on filter.")
    }
    let context = CIContext(options: nil) // defined here so the function is self-contained
    guard let cgImage = context.createCGImage(result, from: result.extent) else {
        fatalError("Couldn't create CG Image")
    }
    return UIImage(cgImage: cgImage)
}
It works well, but I get a classic border artifact, probably due to the pixel-neighborhood basis of the algorithm. I couldn't find anything in Apple's docs about this. Is there something smarter than rendering a bigger image and then cropping the border to solve this issue?
You can use clampedToExtent() (imageByClampingToExtent in Objective-C). The documentation says:
Calling this method ... creates an image of infinite extent by repeating
pixel colors from the edges of the original image.
You could use it like this:
...
guard let ciImage = CIImage(image: image)?.clampedToExtent() else {
fatalError("Couldn't create CIImage from image in input")
}
See more information here: Apple Doc for clampedToExtent
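One caveat worth noting: after clamping, the filter's output has infinite extent, so you must render from a finite rectangle rather than from result.extent. A sketch of the full resize under that approach (the function name is mine, and the target rectangle is derived from the scaled original extent):

```swift
import UIKit
import CoreImage

func resizedWithoutBorderArtifact(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    guard targetWidth > 0, let ciImage = CIImage(image: image) else { return nil }
    let scale = targetWidth / image.size.width
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    // Clamp first so Lanczos samples extended edge pixels instead of transparent ones.
    filter.setValue(ciImage.clampedToExtent(), forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    // The clamped output is infinite; render only the scaled original rectangle.
    let targetRect = CGRect(x: 0, y: 0,
                            width: ciImage.extent.width * scale,
                            height: ciImage.extent.height * scale)
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(output, from: targetRect) else { return nil }
    return UIImage(cgImage: cgImage)
}
```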