Using GaussianBlur on an image in viewDidLoad blocks the UI - iOS

I'm creating a blur effect using the function below in viewDidLoad of a view controller:
func applyBlurEffect(image: UIImage) {
    let imageToBlur = CIImage(image: image)!
    let blurfilter = CIFilter(name: "CIGaussianBlur")!
    blurfilter.setValue(10, forKey: kCIInputRadiusKey)
    blurfilter.setValue(imageToBlur, forKey: "inputImage")
    let resultImage = blurfilter.value(forKey: "outputImage") as! CIImage
    let croppedImage: CIImage = resultImage.cropping(to: CGRect(x: 0, y: 0, width: imageToBlur.extent.size.width, height: imageToBlur.extent.size.height))
    let context = CIContext(options: nil)
    let blurredImage = UIImage(cgImage: context.createCGImage(croppedImage, from: croppedImage.extent)!)
    self.backImage.image = blurredImage
}
But this code blocks the UI, and the view controller opens only after 3-4 seconds of lag. I don't want to present the UI without the blur effect, but I also don't want the user to wait 3-4 seconds while the view controller opens.
What is the optimal solution for this problem?

GPUImage (https://github.com/BradLarson/GPUImage) blur is much faster than the Core Image one:
extension UIImage {
    func imageWithGaussianBlur() -> UIImage? {
        let source = GPUImagePicture(image: self)
        let gaussianFilter = GPUImageGaussianBlurFilter()
        gaussianFilter.blurRadiusInPixels = 2.2
        source?.addTarget(gaussianFilter)
        gaussianFilter.useNextFrameForImageCapture()
        source?.processImage()
        return gaussianFilter.imageFromCurrentFramebuffer()
    }
}
However, a small delay is still possible (depending on image size), so if you can't preprocess the image before the view loads, I'd suggest resizing the image first, blurring and displaying the resulting thumbnail, and then, once the original image has been processed on a background queue, replacing the thumbnail with the blurred original; a sketch of that approach follows.
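A minimal sketch of that thumbnail-first strategy using plain Core Image (no GPUImage); backImage is assumed to be the view controller's image view, and the resize factor, radii, and helper names are illustrative:

import UIKit
import CoreImage

// Reuse one context for all rendering; creating CIContexts is expensive.
private let sharedCIContext = CIContext()

extension UIImage {
    // Illustrative helper: clamp so the blur has data past the edges, then crop back.
    func blurred(radius: Double) -> UIImage? {
        guard let input = CIImage(image: self) else { return nil }
        let output = input.clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: radius])
            .cropped(to: input.extent)
        guard let cgImage = sharedCIContext.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }

    // Illustrative helper: quick downscale for the placeholder thumbnail.
    func resized(by factor: CGFloat) -> UIImage {
        let newSize = CGSize(width: size.width * factor, height: size.height * factor)
        return UIGraphicsImageRenderer(size: newSize).image { _ in
            draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}

func applyBlurEffect(image: UIImage, into backImage: UIImageView) {
    // 1. Blur a small thumbnail right away; this is cheap and hides the wait.
    backImage.image = image.resized(by: 0.125).blurred(radius: 3)

    // 2. Blur the full-size image off the main thread and swap it in when ready.
    DispatchQueue.global(qos: .userInitiated).async {
        let fullBlur = image.blurred(radius: 10)
        DispatchQueue.main.async {
            backImage.image = fullBlur
        }
    }
}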

Core Image Programming Guide
Performance Best Practices
Follow these practices for best performance:
Don't create a CIContext object every time you render. Contexts store a lot of state information; it's more efficient to reuse them (see the sketch after this list).
Evaluate whether your app needs color management. Don't use it unless you need it. See Does Your App Need Color Management?
Avoid Core Animation animations while rendering CIImage objects with a GPU context. If you need to use both simultaneously, you can set up both to use the CPU.
Make sure images don't exceed CPU and GPU limits. Image size limits for CIContext objects differ depending on whether Core Image uses the CPU or GPU. Check the limit by using the methods inputImageMaximumSize and outputImageMaximumSize.
Use smaller images when possible. Performance scales with the number of output pixels. You can have Core Image render into a smaller view, texture, or framebuffer. Allow Core Animation to upscale to display size.
Use Core Graphics or Image I/O functions to crop or downsample, such as the functions CGImageCreateWithImageInRect or CGImageSourceCreateThumbnailAtIndex.
The UIImageView class works best with static images. If your app needs to get the best performance, use lower-level APIs.
Avoid unnecessary texture transfers between the CPU and GPU. Render to a rectangle that is the same size as the source image before applying a contents scale factor.
Consider using simpler filters that can produce results similar to algorithmic filters. For example, CIColorCube can produce output similar to CISepiaTone, and do so more efficiently.
Take advantage of the support for YUV images in iOS 6.0 and later. Camera pixel buffers are natively YUV, but most image processing algorithms expect RGBA data. There is a cost to converting between the two. Core Image supports reading YUV from CVPixelBuffer objects and applying the appropriate color transform.
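As a sketch of the first point, one pattern is to hang a single CIContext off a small helper and route every render through it (SharedRenderer is just an illustrative name, not an Apple API):

import CoreImage
import UIKit

// One context for the whole app; creating one per render wastes time and memory.
enum SharedRenderer {
    static let context = CIContext()

    static func render(_ ciImage: CIImage) -> UIImage? {
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}

// Usage: every filter result goes through the same context, e.g.
// let blurred = someCIImage.applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10])
// imageView.image = SharedRenderer.render(blurred)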
Have a look at Brad Larson's GPUImage as well; you might want to use it. See this answer: https://stackoverflow.com/a/12336118/1378447

Could you present the view controller with the original image, perform the blur on a background thread, and use a nice transition to replace the image once the blurred one is ready?
Also, maybe you could use a UIVisualEffectView and see if performance is better.
A while ago Apple also released an example that uses UIImageEffects to perform a blur. It is written in Obj-C, but you could easily use it from Swift: https://developer.apple.com/library/content/samplecode/UIImageEffects/Listings/UIImageEffects_UIImageEffects_h.html
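A minimal sketch of the first suggestion: present the sharp image immediately, blur off the main thread, then cross-dissolve. makeBlurredCopy here is just a stand-in using Core Image; swap in GPUImage or UIImageEffects if you prefer.

import UIKit
import CoreImage

private let blurContext = CIContext()  // reuse; contexts are expensive to create

// Placeholder blur; replace with GPUImage or UIImageEffects if you prefer.
func makeBlurredCopy(of image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    let output = input.clampedToExtent()
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10])
        .cropped(to: input.extent)
    return blurContext.createCGImage(output, from: output.extent).map { UIImage(cgImage: $0) }
}

func showAndBlurLater(original: UIImage, in imageView: UIImageView) {
    imageView.image = original  // present immediately with the sharp image
    DispatchQueue.global(qos: .userInitiated).async {
        let blurred = makeBlurredCopy(of: original)
        DispatchQueue.main.async {
            // Cross-dissolve so the swap looks intentional rather than like a pop-in.
            UIView.transition(with: imageView, duration: 0.3,
                              options: .transitionCrossDissolve,
                              animations: { imageView.image = blurred })
        }
    }
}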

Make use of dispatch queues. This one worked for me:
func applyBlurEffect(image: UIImage) {
    DispatchQueue.global(qos: .userInitiated).async {
        let imageToBlur = CIImage(image: image)!
        let blurfilter = CIFilter(name: "CIGaussianBlur")!
        blurfilter.setValue(10, forKey: kCIInputRadiusKey)
        blurfilter.setValue(imageToBlur, forKey: "inputImage")
        let resultImage = blurfilter.value(forKey: "outputImage") as! CIImage
        let croppedImage: CIImage = resultImage.cropping(to: CGRect(x: 0, y: 0, width: imageToBlur.extent.size.width, height: imageToBlur.extent.size.height))
        let context = CIContext(options: nil)
        let blurredImage = UIImage(cgImage: context.createCGImage(croppedImage, from: croppedImage.extent)!)
        DispatchQueue.main.async {
            self.backImage.image = blurredImage
        }
    }
}
But this method still takes 3-4 seconds before the image becomes blurred (although it won't block the loading of other UI content). If you don't want that delay either, applying a UIBlurEffect to the image view will produce a similar effect:
func applyBlurEffect(image: UIImage) {
    self.profileImageView.backgroundColor = UIColor.clear
    let blurEffect = UIBlurEffect(style: .extraLight)
    let blurEffectView = UIVisualEffectView(effect: blurEffect)
    blurEffectView.frame = self.backImage.bounds
    blurEffectView.alpha = 0.5
    blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight] // for supporting device rotation
    self.backImage.addSubview(blurEffectView)
}
By changing the blur effect style to .light or .dark and the alpha value between 0 and 1, you can get the effect you want.

Related

CIRadialGradient reduces image size

After applying CIRadialGradient to my image it gets reduced in width by about 20%.
guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)
guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    print("CIMaskedVariableBlur does not exist")
    return
}
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
guard let selectivelyFocusedCIImage = maskedVariableBlur.outputImage else {
    print("Setting maskedVariableBlur failed")
    return
}
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage)
To clarify, bgImage is a UIImageView.
Why does this happen and how do I fix it?
Without RadialMask:
With RadialMask:
With the difference that on my physical iPhone the smaller image is aligned to the left.
I tend to explicitly state how big the image is by using a CIContext and creating a specifically sized CGImage instead of simply using UIImage(ciImage:). Try this, assuming your input image is called coreImage:
let ciCtx = CIContext()
let cgImg = ciCtx.createCGImage(selectivelyFocusedCIImage, from: coreImage.extent)
let uiImage = UIImage(cgImage: cgImg!)
A few notes....
(1) I pulled this code out from an app I'm wrapping up. This is untested code (including the forced-unwrap), but the concept of what I'm doing is solid.
(2) You don't explain a lot of what you are trying to do, but when I see a variable named selectivelyFocusedCIImage I get concerned that you may be trying to use Core Image in a more interactive way than "just" creating one image. If you want "near real-time" performance, render the CIImage in either a (deprecated as of iOS 12) GLKView or an MTKView instead of a UIImageView. The latter (UIImageView) only uses the CPU, whereas the former two use the GPU.
(3) Finally, a word of warning on CIContexts - they are expensive to create! Usually you can code it such that there's only one context shared by everything in your app.
Look up the documentation; it's a mask that is being applied to the image:
Docs: CIRadialGradient
The different sizes are caused by the kernel size of the blur filter:
The blur filter needs to sample a region around each pixel. Since there are no pixels beyond the image bounds, Core Image reduces the extent of the result image by half the kernel size (blur radius) to signal that for those pixels there is not enough information for a proper blur.
However, you can tell Core Image to treat the border pixels as extending infinitely in all directions so that the blur filter gets enough information even on the edges of the image. Afterwards you can crop the result back to the original dimension.
In your code, just change the following two lines:
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage.cropped(to: coreImage.extent))
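Put together, a minimal sketch of that clamp-then-crop flow, rendered through a CIContext so the output has a definite pixel size (the context handling and names here are illustrative, not the asker's exact code):

import CoreImage
import UIKit

let ciContext = CIContext()  // create once, reuse

func selectivelyBlurred(_ image: UIImage) -> UIImage? {
    guard let cgImg = image.cgImage else { return nil }
    let coreImage = CIImage(cgImage: cgImg)

    guard let radialMask = CIFilter(name: "CIRadialGradient"),
          let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else { return nil }

    // Clamp so the blur kernel has pixels to sample beyond the edges...
    maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
    maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")

    guard let output = maskedVariableBlur.outputImage else { return nil }

    // ...then crop back to the original extent and render at exactly that size.
    let cropped = output.cropped(to: coreImage.extent)
    guard let result = ciContext.createCGImage(cropped, from: coreImage.extent) else { return nil }
    return UIImage(cgImage: result)
}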

Confusion About CIContext, OpenGL and Metal (SWIFT). Does CIContext use CPU or GPU by default?

So I'm making an app where some of the main features revolve around applying CIFilters to images.
let context = CIContext()
let context = CIContext(eaglContext: EAGLContext(api: .openGLES3)!)
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
All of these give me about the same CPU usage (70%) in my CameraViewController, where I apply filters to frames and update the image view. All of these seem to work exactly the same way, which makes me think I am missing some vital piece of information.
For example, using AVFoundation I get each frame from the camera, apply the filters, and update the image view with the new image.
let context = CIContext()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    connection.isVideoMirrored = !cameraModeIsBack

    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    let sharpenFilter = CIFilter(name: "CISharpenLuminance")
    let saturationFilter = CIFilter(name: "CIColorControls")
    let contrastFilter = CIFilter(name: "CIColorControls")
    let pixellateFilter = CIFilter(name: "CIPixellate")

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    var cameraImage = CIImage(cvImageBuffer: pixelBuffer!)

    saturationFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    saturationFilter?.setValue(saturationValue, forKey: "inputSaturation")
    var cgImage = context.createCGImage((saturationFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    sharpenFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    sharpenFilter?.setValue(sharpnessValue, forKey: kCIInputSharpnessKey)
    cgImage = context.createCGImage((sharpenFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    contrastFilter?.setValue(cameraImage, forKey: "inputImage")
    contrastFilter?.setValue(contrastValue, forKey: "inputContrast")
    cgImage = context.createCGImage((contrastFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    pixellateFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(pixelateValue, forKey: kCIInputScaleKey)
    cgImage = context.createCGImage((pixellateFilter?.outputImage!)!, from: cameraImage.extent)!
    applyChanges(image: cgImage)
}
Another example is how I apply changes just to a normal image (I use sliders for all of this)
func imagePixelate(sliderValue: CGFloat) {
    let cgImg = image?.cgImage
    let ciImg = CIImage(cgImage: cgImg!)
    let pixellateFilter = CIFilter(name: "CIPixellate")
    pixellateFilter?.setValue(ciImg, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(sliderValue, forKey: kCIInputScaleKey)
    let outputCIImg = pixellateFilter?.outputImage!
    let outputCGImg = context.createCGImage(outputCIImg!, from: (outputCIImg?.extent)!)
    let outputUIImg = UIImage(cgImage: outputCGImg!, scale: (originalImage?.scale)!, orientation: originalOrientation!)
    imageSource[0] = ImageSource(image: outputUIImg)
    slideshow.setImageInputs(imageSource)
    currentFilteredImage = outputUIImg
}
So pretty much:
Create CgImg from UiImg
Create CiImg from CgImg
Use context to apply filter and translate back to UiImg
Update whatever view with new UiImg
This runs well on my iPhone X and surprisingly well on my iPhone 6 as well. Since my app is pretty much complete I'm looking to optimize it as much as possible. I've looked through a lot of documentation on using OpenGL and Metal to do stuff as well but can't seem to figure out how to start.
I always thought I was running these processes on the CPU, but creating the context with OpenGL or Metal provided no improvement. Do I need to be using a MetalKit view or a GLKit view (EAGLContext seems to be completely deprecated)? How do I translate this over? The Apple documentation seems to be lacklustre.
I started making this a comment, but I think since WWDC'18 this works best as an answer. I'll edit as others more an expert than I comment, and am willing to delete the entire answer if that's the proper thing to do.
You are on the right track - utilize the GPU when you can and it's a good fit. CoreImage and Metal, while "low-level" technologies that "usually" use the GPU, can use the CPU if that is desired. CoreGraphics? It renders things using the CPU.
Images. A UIImage and a CGImage are actual images. A CIImage however, isn't. The best way to think of it is a "recipe" for an image.
I typically - for now, I'll explain in a moment - stick to CoreImage, CIFilters, CIImages, and GLKViews when working with filters. Using a GLKView against a CIImage means using OpenGL and a single CIContext and EAGLContext. It offers almost as good performance as using MetalKit or MTKViews.
As for using UIKit and its UIImage and UIImageView, I only do that when needed - saving/sharing/uploading, whatever. Stick to the GPU until then.
....
Here's where it starts getting complicated.
Metal is an Apple proprietary API. Since they own the hardware - including the CPU and GPU - they've optimized it for them. Its "pipeline" is somewhat different than OpenGL's. Nothing major, just different.
Until WWDC'18, using GLKit, including GLKView, was fine. But all things OpenGL were deprecated, and Apple is moving things to Metal. While the performance gain (for now) isn't that great, for something new you may be best off using MTKView, Metal, and CIContext.
Look at the answer @matt gave here for a nice way to use MTKViews.
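For reference, a minimal sketch of that MTKView route: a Metal-backed CIContext renders a CIImage into the view's drawable. The class and property names are illustrative, and aspect/orientation handling is omitted.

import UIKit
import MetalKit
import CoreImage

final class FilteredImageView: MTKView {
    private let commandQueue: MTLCommandQueue
    private let ciContext: CIContext

    // The CIImage to display; set this from your filter chain.
    var image: CIImage? { didSet { setNeedsDisplay() } }

    init(device: MTLDevice, frame: CGRect) {
        commandQueue = device.makeCommandQueue()!
        ciContext = CIContext(mtlDevice: device)
        super.init(frame: frame, device: device)
        framebufferOnly = false        // Core Image needs to write into the drawable's texture
        isPaused = true
        enableSetNeedsDisplay = true   // draw on demand instead of every frame
    }

    required init(coder: NSCoder) { fatalError("init(coder:) is not supported in this sketch") }

    override func draw(_ rect: CGRect) {
        guard let image = image,
              let drawable = currentDrawable,
              let buffer = commandQueue.makeCommandBuffer() else { return }

        // Naively scale the image to the drawable's pixel size (aspect/orientation handling omitted).
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scaled = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

        ciContext.render(scaled,
                         to: drawable.texture,
                         commandBuffer: buffer,
                         bounds: CGRect(origin: .zero, size: drawableSize),
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
    }
}

// Usage (e.g. in a view controller):
// let preview = FilteredImageView(device: MTLCreateSystemDefaultDevice()!, frame: view.bounds)
// preview.image = someFilteredCIImage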
Some independent points:
Profile your app to figure out where it's spending that CPU time.
If the graphics work is actually not very hard — that is, if your app isn't GPU bound — optimizing the GPU work may not help overall performance.
Try to avoid moving data back and forth between the CPU and GPU. Don't keep creating CGImages for each filter output. A major feature of Core Image is the ability to chain filters without rendering each one, then render all of the effects at once (see the sketch after these points). Also, as dfd says in his answer, rendering directly to the screen rather than creating a UIImage to display in an image view would be better.
Avoid redundant work. Don't recreate your CIFilter objects every time. If the parameters haven't changed, don't reconfigure them every time.
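A minimal sketch of the chaining and reuse points applied to per-frame filtering like the code above: create the context and filters once, pass CIImages from filter to filter, and render a single CGImage at the end. The parameter values here are illustrative.

import CoreImage

// Created once, outside the capture callback.
let ciContext = CIContext()
let saturationFilter = CIFilter(name: "CIColorControls")!
let sharpenFilter = CIFilter(name: "CISharpenLuminance")!
let pixellateFilter = CIFilter(name: "CIPixellate")!

func filteredFrame(from cameraImage: CIImage) -> CGImage? {
    // Chain CIImages directly; no intermediate CGImage round-trips.
    saturationFilter.setValue(cameraImage, forKey: kCIInputImageKey)
    saturationFilter.setValue(1.2, forKey: kCIInputSaturationKey)

    sharpenFilter.setValue(saturationFilter.outputImage, forKey: kCIInputImageKey)
    sharpenFilter.setValue(0.5, forKey: kCIInputSharpnessKey)

    pixellateFilter.setValue(sharpenFilter.outputImage, forKey: kCIInputImageKey)
    pixellateFilter.setValue(8, forKey: kCIInputScaleKey)

    guard let output = pixellateFilter.outputImage else { return nil }
    // One render for the whole chain.
    return ciContext.createCGImage(output, from: cameraImage.extent)
}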

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changed with this approach, and I'm not aware of a way to properly scale the image to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture Description:
The photo captured, and a screenshot of the image where the empty spacing is the result of the image shrinking.
Try something like this, replacing your applyBloom() with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image) // image is from UIImageView
    let ciOutputImage = ciInputImage?.applyingFilter("CIBloom",
                                                     withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage!, from: ciInputImage!.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage, and using a CIContext, write a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
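A minimal sketch of that GLKView route (deprecated as of iOS 12, as noted elsewhere, but still illustrative; the class and initializer names are assumptions):

import UIKit
import GLKit
import CoreImage

final class CIPreviewView: GLKView {
    private let ciContext: CIContext

    // Set this from your filter chain; the view redraws with the new recipe.
    var image: CIImage? { didSet { setNeedsDisplay() } }

    init(frame: CGRect, eaglContext: EAGLContext) {
        ciContext = CIContext(eaglContext: eaglContext)
        super.init(frame: frame, context: eaglContext)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported in this sketch") }

    override func draw(_ rect: CGRect) {
        guard let image = image else { return }
        // drawableWidth/drawableHeight are in pixels, not points.
        let destination = CGRect(x: 0, y: 0, width: drawableWidth, height: drawableHeight)
        ciContext.draw(image, in: destination, from: image.extent)
    }
}

// Usage:
// let preview = CIPreviewView(frame: view.bounds, eaglContext: EAGLContext(api: .openGLES2)!)
// preview.image = filteredCIImage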
This could also happen if a CIFilter outputs an image with dimensions different from those of the input image (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Using CIFilter to create an image causes very slow rendering on iOS

I use CIFilter to create a UIImage and then add it to a UIImageView. Creating the image is really fast and I can add it to the image view, but the whole UI freezes for a few seconds until it shows the filtered image. I checked that the CIFilter call is fast; I think the slowness is caused by the UIImageView rendering the image. Why is it so slow if the image has already been created? Below is the code that creates a filtered version of an image.
func photoEffectChrome() -> CIFilter {
    let filter = CIFilter(name: "CIPhotoEffectChrome")!
    return filter
}

func outputImage(filter: CIFilter, originalImage: UIImage) -> UIImage {
    print(filter)
    let inputImage = CIImage(image: originalImage)
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    let cgImage = context!.createCGImage(filter.outputImage!, fromRect: (filter.outputImage?.extent)!)
    return UIImage(CGImage: cgImage, scale: 1, orientation: originalImage.imageOrientation)
}
The calls to the above methods happen on a background thread, and then I use the method below to add it to a scroll view.
dispatch_async(dispatch_get_main_queue(), {
    self.filterScrollView.addSubview(uiView)
})
If I comment out the "self.filterScrollView.addSubview(uiView)" line, the UI runs smoothly. Why does rendering the image take so long? More specifically, this happens on the Simulator; it works much faster when running on a device.
But the whole UI freezes for a few seconds until it shows the filtered image
A problem of this sort suggests that the method in which you set the UIImageView's image (not shown in your question) is being called on a background thread. You mustn't do that.
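A minimal sketch of the fix, in current Swift syntax: keep the filter and render work on a background queue, but set the image view's image only on the main queue. filteredImageView and makeFilteredImage(from:) are placeholders for the question's image view and filtering code, and this is assumed to live inside the view controller.

func showFiltered(originalImage: UIImage) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Heavy work (CIFilter + CIContext render) stays off the main thread.
        let filtered = self.makeFilteredImage(from: originalImage)
        DispatchQueue.main.async {
            // Anything that touches a UIView happens on the main thread only.
            self.filteredImageView.image = filtered
        }
    }
}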

Large Image Compositing on iOS in Swift

Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, and repeatedly add an additional one on top at a specified opacity, while outputting the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability, I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop, I can get to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1 / CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage
