Blur image performance issues on iOS Simulator 10.2.1

I use this code to blur my UIImage:

extension UIImage {
    func blurred(radius: CGFloat) -> UIImage {
        let ciContext = CIContext(options: nil)
        guard let cgImage = cgImage else { return self }
        let inputImage = CIImage(cgImage: cgImage)
        guard let ciFilter = CIFilter(name: "CIGaussianBlur") else { return self }
        ciFilter.setValue(inputImage, forKey: kCIInputImageKey)
        ciFilter.setValue(radius, forKey: "inputRadius")
        guard let resultImage = ciFilter.value(forKey: kCIOutputImageKey) as? CIImage else { return self }
        guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }
        return UIImage(cgImage: cgImage2)
    }
}
But it takes very long to return an image. In fact, this single line takes about 2 seconds:

    guard let cgImage2 = ciContext.createCGImage(resultImage, from: inputImage.extent) else { return self }

I haven't tested it on a real device yet, but I'm not sure the code is efficient either.

That code looks fine-ish, though you should cache the returned image rather than calling this method repeatedly if at all possible. You should also use a shared CIContext rather than setting up a new one on every call, since creating a CIContext is expensive.

The performance issue you're seeing comes from the simulator having very different performance characteristics from real hardware. Core Image is either using the simulator's emulated OpenGL ES interface (which is slow) or falling back to the CPU (which is slower). Testing on an actual iOS device will give you a much better idea of the performance to expect.
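A minimal sketch of the shared-context approach (the `sharedCIContext` name is my own, not from the answer):

```swift
import UIKit
import CoreImage

// One context for the whole app; creating a CIContext is expensive,
// so reuse it instead of building a new one per call.
private let sharedCIContext = CIContext(options: nil)

extension UIImage {
    func blurred(radius: CGFloat) -> UIImage {
        guard let cgImage = cgImage,
              let filter = CIFilter(name: "CIGaussianBlur") else { return self }
        let input = CIImage(cgImage: cgImage)
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter.outputImage,
              let result = sharedCIContext.createCGImage(output, from: input.extent)
        else { return self }
        return UIImage(cgImage: result)
    }
}
```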

Related

Swift - Image Data From CIImage QR Code / How to render CIFilter Output

I've been having this problem for a while now and looked at dozens of answers here and can't seem to find anything that helps.
Scenario
I am generating a QR Code on the iOS side of my app and want this QR code to be sent to the WatchKit Extension that I am currently developing.
How I am generating the QR Code
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        //set the data to the contact data
        filter.setValue(string, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            return UIImage(ciImage: codeImage);
        }
    }
}
What I want next
I want to get the data from the QR image so that I can send it to the Apple Watch app, like so:
let data = UIImagePNGRepresentation(QRCodeImage);
But this always returns nil, because there is no bitmap data backing the output from the filter.
Note: I know there is no data associated with the CIImage because it hasn't been rendered; it's just the recipe output from the filter. I don't know how to get around this because I'm pretty new to image processing and such. :/
What I've Tried
Creating a cgImage from the filter.outputImage
func createQR(with string: String) {
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        //set the data to the contact data
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.QRCode = UIImage(cgImage: cgImage)
            }
        }
    }
}
But this doesn't seem to work, because the image on the view is blank.
Creating a blank CIImage as Input Image
func update(with string: String) {
    let blankCiImage = CIImage(color: .white) //This probably isn't right...
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(contactData, forKey: "inputMessage")
        filter.setValue("L", forKey: "inputCorrectionLevel")
        filter.setValue(blankCiImage, forKey: kCIInputImageKey)
        if let codeImage = filter.outputImage {
            let context = CIContext(options: nil)
            if let cgImage = context.createCGImage(codeImage, from: codeImage.extent) {
                self.contactCode = UIImage(cgImage: cgImage)
                print(self.contactCode!)
                print(UIImagePNGRepresentation(self.contactCode!))
            }
        }
    }
}
This doesn't work either - my thought was to add a blank image to it and then do the filter on top of it, but I am probably not doing this right.
My Goal
Literally, just to get the data from the generated QR code. Most threads suggest UIImage(ciImage: output), but that image doesn't have any backing data.
If anyone could help me out with this, that'd be great. And any explanation on how it works would be wonderful too.
Edit: I don't believe this is the same as the marked duplicate. The duplicate is about editing an existing image using CI filters and getting that data, while this is about an image created solely through a CI filter with no input image (QR codes). The other answer did not fully apply.
You have a couple of issues in your code. First, you need to convert your string to Data using the isoLatin1 encoding before passing it to the filter. Second, to get data out of your CIImage you need to render (redraw) it, and to prevent the image from blurring when scaled up, you need to apply a transform that increases its size before rendering:
extension StringProtocol {
    var qrCode: UIImage? {
        guard
            let data = data(using: .isoLatin1),
            let outputImage = CIFilter(name: "CIQRCodeGenerator",
                parameters: ["inputMessage": data, "inputCorrectionLevel": "M"])?.outputImage
        else { return nil }
        let size = outputImage.extent.integral
        let output = CGSize(width: 250, height: 250)
        let format = UIGraphicsImageRendererFormat()
        format.scale = UIScreen.main.scale
        return UIGraphicsImageRenderer(size: output, format: format).image { _ in
            outputImage
                .transformed(by: .init(scaleX: output.width / size.width, y: output.height / size.height))
                .image
                .draw(in: .init(origin: .zero, size: output))
        }
    }
}

extension CIImage {
    var image: UIImage { .init(ciImage: self) }
}
Playground testing:

let link = "https://stackoverflow.com/questions/51178573/swift-image-data-from-ciimage-qr-code-how-to-render-cifilter-output?noredirect=1"
let image = link.qrCode!
let data = image.jpegData(compressionQuality: 1)   // 154785 bytes

How can I fix a Core Image's CILanczosScaleTransform filter border artifact?

I want to implement an image downscaling algorithm for iOS. After reading that Core Images's CILanczosScaleTransform was a great fit for it, I implemented it the following way:
public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else {
        fatalError("Couldn't create CIImage from image in input")
    }
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else {
        fatalError("The filter CILanczosScaleTransform is unavailable on this device.")
    }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else {
        fatalError("No output on filter.")
    }
    // `context` is a CIContext stored elsewhere in the class
    guard let cgImage = context.createCGImage(result, from: result.extent) else {
        fatalError("Couldn't create CG Image")
    }
    return UIImage(cgImage: cgImage)
}
It works well, but I get a classic border artifact, probably due to the pixel-neighborhood nature of the algorithm. I couldn't find anything about this in Apple's documentation. Is there something smarter than rendering a bigger image and then cropping the border?
You can use clampedToExtent() (imageByClampingToExtent in Objective-C):

    Calling this method ... creates an image of infinite extent by repeating pixel colors from the edges of the original image.

You could use it like this:

...
guard let ciImage = CIImage(image: image)?.clampedToExtent() else {
    fatalError("Couldn't create CIImage from image in input")
}

See Apple's documentation for clampedToExtent for more information.
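Putting the two together, a sketch of the full downscale with clamping. Note that after clamping, the filter output has infinite extent, so you must render only the scaled rect of the original image rather than result.extent; the `lanczosContext` name is my own:

```swift
import UIKit
import CoreImage

private let lanczosContext = CIContext(options: nil)

public func resizeImage(_ image: UIImage, targetWidth: CGFloat) -> UIImage? {
    assert(targetWidth > 0.0)
    let scale = Double(targetWidth) / Double(image.size.width)
    guard let ciImage = CIImage(image: image) else { return nil }
    // Clamp so Lanczos samples edge pixels instead of transparent black
    let clamped = ciImage.clampedToExtent()
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(clamped, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)
    guard let result = filter.outputImage else { return nil }
    // The clamped output is infinite; render only the scaled original rect
    let targetRect = ciImage.extent.applying(
        CGAffineTransform(scaleX: CGFloat(scale), y: CGFloat(scale)))
    guard let cgImage = lanczosContext.createCGImage(result, from: targetRect) else { return nil }
    return UIImage(cgImage: cgImage)
}
```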

Using GPUImage2 with CIFilters

I'm using GPUImage along with CIFilter to process an image. However, this is proving to be extremely memory intensive, and I'm trying to find a better solution. I'm mainly using CIFilter, but GPUImage2 provides an AdaptiveThreshold filter that CIFilter doesn't supply. As a result, I must use both in my project. Is there a way to transfer the outputImage of CIFilter to GPUImage2 without too much overhead? Or is there a way to port a GPUImage2 filter to CIFilter or vice versa. Here's some example code I'm using now:
let openGLContext = EAGLContext(api: .openGLES2)
let context = CIContext(eaglContext: openGLContext!)

guard let cgFilteredImage = context.createCGImage(noiseReductionOutput, from: noiseReductionOutput.extent) else {
    completionHandler(nil)
    return
}
let correctlyRotatedImage = UIImage(cgImage: cgFilteredImage, scale: imageToProcess.scale, orientation: imageToProcess.imageOrientation)

//
// GPUImage
//
let orientation = ImageOrientation.orientation(for: correctlyRotatedImage.imageOrientation)
let pictureInput = PictureInput(image: correctlyRotatedImage, orientation: orientation)
let processedImage = PictureOutput()
pictureInput --> adaptiveThreshold --> processedImage
processedImage.imageAvailableCallback = { image in
    completionHandler(image)
}
pictureInput.processImage(synchronously: true)
Additionally, I'm trying to use CIFilter in the most efficient way possible by chaining filters like this:
let filter = CIFilter(name: "Filter_Name_Here")!
guard let filterOutput = filter.outputImage else {
    completionHandler(nil)
    return
}
let secondFilter = CIFilter(name: "Filter_Name_Here")!
secondFilter.setValuesForKeys([kCIInputImageKey: filterOutput])
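As a sketch of the chaining idea (filter names here are stand-ins, like the question's placeholders): CIImage's applyingFilter(_:parameters:) lets you chain filters without rendering any intermediate bitmaps; only the final createCGImage call does the actual GPU work.

```swift
import CoreImage

// Chain filters lazily: each step just extends the recipe, no rendering yet.
// The specific filters and parameters here are illustrative only.
func chained(_ input: CIImage, context: CIContext) -> CGImage? {
    let output = input
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 4.0])
        .applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 1.2])
    // A single render at the end produces the bitmap
    return context.createCGImage(output, from: input.extent)
}
```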

Swift CoreImage Retaining Memory

I'm using ReactiveCocoa and CoreImage/Video to process videos, CoreImage to transform and filter each CMSampleBuffer from the video and ReactiveCocoa to process the buffers in sequence.
My filtering function is fairly simple; all I'm doing is detecting if there's a face in the given image, and cropping the image to the face bounds.
static func process(input: CIImage) -> SignalProducer<CIImage?, Types.Error> {
    return SignalProducer { observer, disposable in
        let context = CIContext()
        let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: nil)
        guard let firstFeature = detector.featuresInImage(input, options: [CIDetectorImageOrientation: NSNumber(integer: 6)]).first else {
            observer.sendNext(nil)
            observer.sendCompleted()
            return
        }
        let cropFilter = CIFilter(name: "CICrop")
        let cropRect: CIVector = CIVector(CGRect: firstFeature.bounds)
        cropFilter?.setValue(input, forKey: "inputImage")
        cropFilter?.setValue(cropRect, forKey: "inputRectangle")
        guard let output = cropFilter?.outputImage else {
            observer.sendNext(nil)
            observer.sendCompleted()
            return
        }
        observer.sendNext(output)
        observer.sendCompleted()
        disposable.addDisposable {
            cropFilter?.setValue(nil, forKey: "inputImage")
        }
    }
}
However, somewhere in this function memory is being unintentionally retained. If I wrap the inner SignalProducer block in an autoreleasepool, everything works fine and my memory usage never gets above 50 MB. But if I don't, memory jumps from 30 MB to 200 MB and the app crashes.
What's being retained, and which line is causing it?
EDIT:
I discovered shortly thereafter that it was this line:
guard let firstFeature = detector.featuresInImage(input, options: [CIDetectorImageOrientation: NSNumber(integer: 6)]).first else {
    observer.sendNext(nil)
    observer.sendCompleted()
    return
}
Although as to why it's this line, I have no clue. Could this perhaps be a bug in CoreImage itself?
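For reference, the autoreleasepool fix described in the question boils down to draining autoreleased Core Image objects per invocation instead of letting them accumulate. A self-contained sketch using the modern CIDetector API (the function name is mine):

```swift
import CoreImage

// CIDetector results and intermediate buffers are autoreleased; without
// an explicit pool they pile up until a long-lived enclosing pool drains.
func detectAndCrop(_ input: CIImage, detector: CIDetector) -> CIImage? {
    return autoreleasepool {
        guard let feature = detector.features(in: input).first else { return nil }
        return input.cropped(to: feature.bounds)
    }
}
```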

How to pixelate and unpixelate a UIImage or UIImageView?

I would like to pixelate and unpixelate a UIImage or a UIImageView using Swift, but I have no idea how to do that.
Maybe using effects, layers, or something like that?
This is a very easy task on iOS.
Pixelation
You can use the CIPixellate Core Image filter.
func pixellated(image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIPixellate") else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    guard let output = filter.outputImage else { return nil }
    return UIImage(ciImage: output)
}
The default inputScale value is 8, but you can strengthen or weaken the effect by setting the parameter manually:

filter.setValue(8, forKey: "inputScale") // ← change this
Extension
You can also define the following extension
extension UIImage {
    func pixellated(scale: Int = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
              let filter = CIFilter(name: "CIPixellate") else { return nil }
        filter.setValue(ciImage, forKey: "inputImage")
        filter.setValue(scale, forKey: "inputScale")
        guard let output = filter.outputImage else { return nil }
        return UIImage(ciImage: output)
    }
}
Unpixelation
The mechanism is exactly the same; you just need a different filter. You can find the full list of filters here (but check which parameters are available/required for each filter). I think CIGaussianBlur can do the job.
Of course, don't expect to feed in a low-resolution, heavily pixellated image and get a high-definition one back. That technology is available only in The X-Files. :D
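Following the same extension style, a blur-based "unpixelate" might look like this (a sketch; the default radius of 8 is an arbitrary choice of mine):

```swift
import UIKit
import CoreImage

extension UIImage {
    func unpixellated(radius: CGFloat = 8) -> UIImage? {
        guard let ciImage = CIImage(image: self),
              let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = filter.outputImage else { return nil }
        return UIImage(ciImage: output)
    }
}
```

This only softens the blocky edges; the lost detail cannot be recovered.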