Using GPUImage2 with CIFilters - iOS

I'm using GPUImage along with CIFilter to process an image. However, this is proving to be extremely memory intensive, and I'm trying to find a better solution. I'm mainly using CIFilter, but GPUImage2 provides an AdaptiveThreshold filter that CIFilter doesn't supply. As a result, I must use both in my project. Is there a way to transfer the outputImage of CIFilter to GPUImage2 without too much overhead? Or is there a way to port a GPUImage2 filter to CIFilter, or vice versa? Here's some example code I'm using now:
let openGLContext = EAGLContext(api: .openGLES2)
let context = CIContext(eaglContext: openGLContext!)
guard let cgFilteredImage = context.createCGImage(noiseReductionOutput, from: noiseReductionOutput.extent) else {
    completionHandler(nil)
    return
}
let correctlyRotatedImage = UIImage(cgImage: cgFilteredImage, scale: imageToProcess.scale, orientation: imageToProcess.imageOrientation)

//
// GPUImage
//
let orientation = ImageOrientation.orientation(for: correctlyRotatedImage.imageOrientation)
let pictureInput = PictureInput(image: correctlyRotatedImage, orientation: orientation)
let processedImage = PictureOutput()
pictureInput --> adaptiveThreshold --> processedImage
processedImage.imageAvailableCallback = { image in
    completionHandler(image)
}
pictureInput.processImage(synchronously: true)
Additionally, I'm trying to use CIFilter in the most efficient way possible by chaining filters like this:
let filter = CIFilter(name: "Filter_Name_Here")!
guard let filterOutput = filter.outputImage else {
    completionHandler(nil)
    return
}
let secondFilter = CIFilter(name: "Filter_Name_Here")!
secondFilter.setValuesForKeys([kCIInputImageKey: filterOutput])
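As a side note, Core Image can also chain filters without constructing CIFilter objects by hand, via CIImage's applyingFilter(_:parameters:); each call extends a lazy recipe, and nothing is rendered until the final context draw. A minimal sketch, assuming inputCIImage is your starting CIImage and context is the CIContext from above (the filter names here are placeholders):

// Each applyingFilter call returns a lazy CIImage; no pixels
// are processed until the single createCGImage render below.
let chained = inputCIImage
    .applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 1.1])
    .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 4.0])
// Render exactly once at the end.
let renderedImage = context.createCGImage(chained, from: inputCIImage.extent)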

Related

Blur image performance issues on iOS simulator 10.2.1

I use this code to blur my UIImage
extension UIImage {
    func blurred(radius: CGFloat) -> UIImage {
        let ciContext = CIContext(options: nil)
        guard let cgImage = cgImage else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        guard let ciFilter = CIFilter(name: "CIGaussianBlur") else { return self }
        ciFilter.setValue(inputImage, forKey: kCIInputImageKey)
        ciFilter.setValue(radius, forKey: "inputRadius")
        guard let resultImage = ciFilter.value(forKey: kCIOutputImageKey) as? CIImage else { return self }
        guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
        return UIImage(cgImage: cgImage2)
    }
}
But it takes very long to return an image from this operation.
This line alone takes about 2 seconds:
guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
I have not tested it on a real device yet, and I'm not sure whether the code is efficient.
That code looks fine-ish, though you should cache the image it returns rather than calling it repeatedly if at all possible; as Matt points out in the comments below, you should also use a shared CIContext rather than setting a new one up every time.
The performance issue you’re seeing is due to the simulator having very different performance characteristics from real hardware. It sounds like Core Image is either using the simulator’s emulated OpenGL ES interface (which is slow) or the CPU (which is slower). Testing it on an iOS device will give you a much better idea of the performance you should expect.
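For the shared-context point, a minimal sketch (ImageFilterContext is a hypothetical name, not a framework type):

import CoreImage

// Create one CIContext for the whole app and reuse it;
// context creation is expensive, rendering with it is not.
enum ImageFilterContext {
    static let shared = CIContext(options: nil)
}

Inside blurred(radius:) you would then replace the per-call CIContext(options: nil) with ImageFilterContext.shared.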

How to Remove Black Shadow Rectangle from Image when Applying Blur Filter in iOS?

I want to remove the black shadow border around an image when applying a blur filter.
Please review the screenshots attached below. The blur function works correctly, but I want to remove the black shadow. I only want to blur the image; I don't want to apply any color effect with the blur. Please let me know what I've missed.
I've uploaded the images externally due to low reputation:
https://drive.google.com/open?id=1KtVgqRXOmIEQXh9IMyWNAlariL0hcJBN
https://drive.google.com/open?id=1l2eLq7VwFPb3-SfIokW0Ijhk2jqUvjlU
Here is my function to apply a blur effect to a particular image:
Parameters:
doBlurImage - the main image to blur
imageBlurValue - the blur value, a Float from 0 to 50
func makeBlurImage(doBlurImage: UIImage, imageBlurValue: CGFloat) -> UIImage {
    let beginImage = CIImage(image: doBlurImage)
    let currentFilter = CIFilter(name: "CIGaussianBlur")
    currentFilter!.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter!.setValue(imageBlurValue, forKey: kCIInputRadiusKey)
    let cropFilter = CIFilter(name: "CICrop")
    cropFilter!.setValue(currentFilter!.outputImage, forKey: kCIInputImageKey)
    cropFilter!.setValue(CIVector(cgRect: beginImage!.extent), forKey: "inputRectangle")
    let output = cropFilter!.outputImage
    return UIImage(ciImage: output!)
}
I found a different way to fix this problem.
Apple says:
Applying a clamp effect before the blur filter avoids edge softening by making the original image opaque in all directions.
So we should apply the CIAffineClamp filter to avoid the black shadow. The clampedToExtent() function returns a new image created by making the pixel colors along its edges extend infinitely in all directions, and since it's already a member of the CIImage class we can use it without writing any extra function.
So the implementation of the solution looks like this:
fileprivate final func blurImage(image: UIImage?, blurAmount: CGFloat, completionHandler: @escaping (UIImage?) -> Void) {
    guard let inputImage = image else {
        print("Input image is null!")
        completionHandler(nil); return
    }
    guard let ciImage = CIImage(image: inputImage) else {
        print("Cannot create ci image from ui image!")
        completionHandler(nil); return
    }
    let blurFilter = CIFilter(name: "CIGaussianBlur")
    blurFilter?.setValue(ciImage.clampedToExtent(), forKey: kCIInputImageKey)
    blurFilter?.setValue(blurAmount, forKey: kCIInputRadiusKey)
    guard let openGLES3 = EAGLContext(api: .openGLES3) else {
        print("Cannot create openGLES3 context!")
        completionHandler(nil); return
    }
    let context = CIContext(eaglContext: openGLES3)
    guard let ciImageResult = blurFilter?.outputImage else {
        print("Cannot get output image from filter!")
        completionHandler(nil); return
    }
    guard let resultImage = context.createCGImage(ciImageResult, from: ciImage.extent) else {
        print("Cannot create output image from filtered image extent!")
        completionHandler(nil); return
    }
    completionHandler(UIImage(cgImage: resultImage))
}
Note: Creating a context is expensive, so you may want to create it outside of this function.
These are the main options for generating a blur effect on iOS:
CIGaussianBlur generates a blur by processing the image's pixel data through Core Image.
UIVisualEffectView generates a live blur overlay based on its style; the available blur styles are .extraLight, .light, .dark, .extraDark (tvOS only), .regular, and .prominent (see the sketch after the GPUImage example below).
Suggested option - GPUImage - you can achieve the best blur effect using the GPUImage processing library.
Blur effect using GPUImage:
var resultImage = UIImage()
let gaussianBlur = GaussianBlur()
gaussianBlur.blurRadiusInPixels = Float(ImageBlurValue)
let pictureInput = PictureInput(image: YourImage)
let pictureOutput = PictureOutput()
pictureOutput.imageAvailableCallback = { image in
    print("Process completed")
    resultImage = image
}
pictureInput --> gaussianBlur --> pictureOutput
pictureInput.processImage(synchronously: true)
pictureInput.removeAllTargets()
return resultImage
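And as a sketch of the UIVisualEffectView option mentioned above (assuming an existing imageView; this overlays a live blur rather than producing a new UIImage):

let blurEffect = UIBlurEffect(style: .regular)
let blurView = UIVisualEffectView(effect: blurEffect)
// Cover the image view; the system blurs whatever is rendered beneath it.
blurView.frame = imageView.bounds
blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
imageView.addSubview(blurView)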
Happy Coding!...:)

Swift - Image Data From CIImage QR Code / How to render CIFilter Output

I've been having this problem for a while now and looked at dozens of answers here and can't seem to find anything that helps.
Scenario
I am generating a QR Code on the iOS side of my app and want this QR code to be sent to the WatchKit Extension that I am currently developing.
How I am generating the QR Code
func createQR(with string: String) -> UIImage? {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(string, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            return UIImage(ciImage: codeImage)
        }
    }
    return nil
}
What I want next
I want to get the data from the QR image so that I can send it to the Apple Watch app, like so:
let data = UIImagePNGRepresentation(QRCodeImage);
But this always returns nil, because there is no image data backing the output from the filter.
Note: I know that there is no data associated with the CIImage because it hasn't been rendered; it's just the output recipe from the filter. I don't know how to get around this because I'm pretty new to image processing. :/
What I've Tried
Creating a cgImage from the filter.outputImage
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        // set the data to the contact data
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.QRCode = UIImage(cgImage: cgImage)
            }
        }
    }
}
But this doesn't seem to work, because the image on the view is blank.
Creating a blank CIImage as Input Image
func update(with string: String) {
    let blankCiImage = CIImage(color: .white) // This probably isn't right...
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        filter.setValue(blankCiImage, forKey: kCIInputImageKey)
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.contactCode = UIImage(cgImage: cgImage)
                print(self.contactCode!)
                print(UIImagePNGRepresentation(self.contactCode!))
            }
        }
    }
}
This doesn't work either - my thought was to add a blank image to it and then do the filter on top of it, but I am probably not doing this right.
My Goal
Literally, just to get the data from the generated QR code. Most threads suggest UIImage(ciImage: output), but this doesn't have any backing data.
If anyone could help me out with this, that'd be great. And any explanation on how it works would be wonderful too.
Edit: I don't believe this is the same as the marked duplicate. The marked duplicate is about editing an existing image using CIFilters and getting that data, while this is about an image that is solely created through a CIFilter with no input image - QR codes. The other answer did not fully relate.
You have a couple of issues in your code. You need to convert your string to Data using the .isoLatin1 string encoding before passing it to the filter. Another issue is that to convert your CIImage to data you need to redraw/render the CIImage, and to prevent blurring when scaled you need to apply a transform to the image to increase its size:
extension StringProtocol {
    var qrCode: UIImage? {
        guard
            let data = data(using: .isoLatin1),
            let outputImage = CIFilter(name: "CIQRCodeGenerator",
                                       parameters: ["inputMessage": data, "inputCorrectionLevel": "M"])?.outputImage
        else { return nil }
        let size = outputImage.extent.integral
        let output = CGSize(width: 250, height: 250)
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        return UIGraphicsImageRenderer(size: output, format: format).image { _ in
            outputImage
                .transformed(by: .init(scaleX: output.width / size.width, y: output.height / size.height))
                .image
                .draw(in: .init(origin: .zero, size: output))
        }
    }
}

extension CIImage {
    var image: UIImage { .init(ciImage: self) }
}
Playground testing:
let link = "https://stackoverflow.com/questions/51178573/swift-image-data-from-ciimage-qr-code-how-to-render-cifilter-output?noredirect=1"
let image = link.qrCode!
let data = image.jpegData(compressionQuality: 1) // 154785 bytes
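If all you need is the raw data to send to the watch, a shorter alternative sketch renders through a CIContext instead of UIGraphicsImageRenderer, so the resulting UIImage has a CGImage backing and PNG encoding works (sharedContext is an assumed app-owned CIContext, and the scale factor of 10 is arbitrary):

let sharedContext = CIContext(options: nil)

func qrPNGData(from string: String) -> Data? {
    guard let data = string.data(using: .isoLatin1),
          let output = CIFilter(name: "CIQRCodeGenerator",
                                parameters: ["inputMessage": data])?.outputImage
    else { return nil }
    // Scale up first: the raw code renders at one point per QR module.
    let scaled = output.transformed(by: CGAffineTransform(scaleX: 10, y: 10))
    guard let cgImage = sharedContext.createCGImage(scaled, from: scaled.extent) else { return nil }
    return UIImage(cgImage: cgImage).pngData()
}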

How can I fix a Core Image's CILanczosScaleTransform filter border artifact?

I want to implement an image downscaling algorithm for iOS. After reading that Core Image's CILanczosScaleTransform was a great fit for it, I implemented it the following way:
public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Couldn't create CIImage from image in input")
    }
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        fatalError("The filter CILanczosScaleTransform is unavailable on this device.")
    }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else {
        fatalError("No output on filter.")
    }
    guard let cgImage = context.createCGImage(result, from: result.extent) else {
        fatalError("Couldn't create CG Image")
    }
    return UIImage(cgImage: cgImage)
}
It works well, but I get a classic border artifact, probably due to the pixel-neighborhood basis of the algorithm. I couldn't find anything in Apple's docs about this. Is there something smarter than rendering a bigger image and then cropping the border to solve this issue?
You can use clampedToExtent() (called imageByClampingToExtent in Objective-C).
Calling this method ... creates an image of infinite extent by repeating
pixel colors from the edges of the original image.
You could use it like this:
...
guard let ciImage = CIImage(image: image)?.clampedToExtent() else {
    fatalError("Couldn't create CIImage from image in input")
}
See more information in the Apple doc for clampedToExtent.
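One caveat: after clampedToExtent(), the filter output has infinite extent, so rendering result.extent will no longer work. Crop the render rect back to the scaled size instead; a sketch of the changed tail of resizeImage(_:targetWidth:), reusing the scale and context from the question:

// result.extent is now infinite, so render only the finite
// rect of the scaled image.
let targetRect = CGRect(x: 0, y: 0,
                        width: targetWidth,
                        height: image.size.height * CGFloat(scale))
guard let cgImage = context.createCGImage(result, from: targetRect) else {
    fatalError("Couldn't create CG Image")
}
return UIImage(cgImage: cgImage)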

Get UIImage from function

I wrote a function that applies a filter to a photo. Why does the function always return nil?
func getImage() -> UIImage? {
    let openGLContext = EAGLContext(api: .openGLES2)
    let context = CIContext(eaglContext: openGLContext!)
    let coreImage = UIImage(cgImage: image.cgImage!)
    filter.setValue(coreImage, forKey: kCIInputImageKey)
    filter.setValue(1, forKey: kCIInputContrastKey)
    if let outputImage = filter.value(forKey: kCIOutputImageKey) as? CIImage {
        let output = context.createCGImage(outputImage, from: outputImage.extent)
        return UIImage(cgImage: output!)
    }
    return nil
}
When working with Core Image on iOS, you have to consider Core Image classes such as CIFilter, CIContext, CIVector, and CIColor, and the input image should be a CIImage, which holds the image data; it can be created from a UIImage. In your code you pass a UIImage as the filter input, which is why the filter never produces a CIImage output.
Note: You haven't mentioned your filter definition or which custom filter you are using. From your code I can see you are trying to change the contrast of an input image, so I am testing the code using the CIColorControls filter to adjust the contrast.
func getImage(inputImage: UIImage) -> UIImage? {
    let openGLContext = EAGLContext(api: .openGLES2)
    let context = CIContext(eaglContext: openGLContext!)
    let filter = CIFilter(name: "CIColorControls")
    let coreImage = CIImage(image: inputImage)
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    filter?.setValue(5, forKey: kCIInputContrastKey)
    if let outputImage = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
        let output = context.createCGImage(outputImage, from: outputImage.extent)
        return UIImage(cgImage: output!)
    }
    return nil
}
You can call the above func like below.
filterImage.image = getImage(inputImage: filterImage.image!)
You never enter the if in that case. Maybe filter.value returns nil, or what it returns isn't a CIImage. Extract it like this and set a breakpoint at the if, or add a print in between:
let value = filter.value(forKey: kCIOutputImageKey)
if let outputImage = value as? CIImage {
    let output = context.createCGImage(outputImage, from: outputImage.extent)
    return UIImage(cgImage: output!)
}
If you need more information, you have to share the filter definition and usage code.
