I'm creating an app that requires real-time application of filters to images. Converting the UIImage to a CIImage and applying the filters are both extremely fast operations, yet it takes too long to convert the resulting CIImage back to a CGImageRef and display the image (about 1/5 of a second, which is a lot when editing needs to feel real-time).
The image is about 2500 by 2500 pixels, which is most likely part of the problem.
Currently, I'm using
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've looked into using CIContext.drawImage():
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
Yet I can't find any solid documentation on exactly how this is done, or whether it would be any faster.
Is there any faster way to display a CIImage to the screen (either in a UIImageView, or directly on a CALayer)? I would like to avoid decreasing the image quality too much, because this may be noticeable to the user.
It may be worth considering Metal and displaying with a MTKView.
You'll need a Metal device which can be created with MTLCreateSystemDefaultDevice(). That's used to create a command queue and Core Image context. Both these objects are persistent and quite expensive to instantiate, so ideally should be created once:
lazy var commandQueue: MTLCommandQueue =
{
    return self.device!.newCommandQueue()
}()

lazy var ciContext: CIContext =
{
    return CIContext(MTLDevice: self.device!)
}()
You'll also need a color space:
let colorSpace = CGColorSpaceCreateDeviceRGB()!
When it comes to rendering a CIImage, you'll need to create a short lived command buffer:
let commandBuffer = commandQueue.commandBuffer()
You'll want to render your CIImage (let's call it image) to the currentDrawable?.texture of a MTKView. If that's bound to targetTexture, the rendering syntax is:
ciContext.render(image,
    toMTLTexture: targetTexture,
    commandBuffer: commandBuffer,
    bounds: image.extent,
    colorSpace: colorSpace)

commandBuffer.presentDrawable(currentDrawable!)
commandBuffer.commit()
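For context, here is a rough sketch of how those pieces might sit inside an MTKViewDelegate's draw(in:) callback, written in current Swift syntax (it assumes the MTKView's framebufferOnly is set to false, and that commandQueue, ciContext, colorSpace and image are the objects described above):

func draw(in view: MTKView) {
    // grab the drawable and a short-lived command buffer for this frame
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // render the filtered CIImage straight into the drawable's texture
    ciContext.render(image,
                     to: drawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: image.extent,
                     colorSpace: colorSpace)

    commandBuffer.present(drawable)
    commandBuffer.commit()
}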
I have a working version here.
Hope that helps!
Simon
I ended up using the context.drawImage(image, inRect: destinationRect, fromRect: image.extent) method. Here's the image view class that I created:
import Foundation
//GLKit must be linked and imported
import GLKit

class CIImageView: GLKView {

    var image: CIImage?
    var ciContext: CIContext?

    //initialize with the frame, and CIImage to be displayed
    //(or nil, if the image will be set using .setRenderImage)
    init(frame: CGRect, image: CIImage?) {
        super.init(frame: frame, context: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
        self.image = image
        //Set the current context to the EAGLContext created in the super.init call
        EAGLContext.setCurrentContext(self.context)
        //create a CIContext from the EAGLContext
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //for usage in Storyboards
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
        EAGLContext.setCurrentContext(self.context)
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //set the current image to image
    func setRenderImage(image: CIImage) {
        self.image = image
        //tell the processor that the view needs to be redrawn using drawRect()
        self.setNeedsDisplay()
    }

    //called automatically when the view is drawn
    override func drawRect(rect: CGRect) {
        //unwrap the current CIImage
        if let image = self.image {
            //multiply the frame by the screen's scale (ratio of points : pixels),
            //because the following .drawImage() call uses pixels, not points
            let scale = UIScreen.mainScreen().scale
            let newFrame = CGRectMake(rect.minX, rect.minY, rect.width * scale, rect.height * scale)
            //draw the image
            self.ciContext?.drawImage(
                image,
                inRect: newFrame,
                fromRect: image.extent
            )
        }
    }
}
Then, to use it, simply
let myFrame: CGRect //frame in self.view where the image should be displayed
let myImage: CIImage //CIImage with applied filters
let imageView: CIImageView = CIImageView(frame: myFrame, image: myImage)
self.view.addSubview(imageView)
Resizing the UIImage to the screen size before converting it to a CIImage also helps. It speeds things up a lot in the case of high quality images. Just make sure to use the full-size image when actually saving it.
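As an illustration of that downscaling tip (a sketch only, written against current UIKit syntax; downscaled and originalUIImage are hypothetical names), the UIImage can be resized with UIGraphicsImageRenderer before it is turned into a CIImage:

// hypothetical helper: downscale a UIImage before filtering it
func downscaled(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// usage: filter a downscaled copy for the live preview, keep the original for saving
let previewInput = CIImage(image: downscaled(originalUIImage, to: view.bounds.size))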
That's it! Then, to update the image in the view:
imageView.setRenderImage(newCIImage)
//note that imageView.image = newCIImage won't work because
//the view won't be redrawn
You can use GLKView and render, as you said, with context.drawImage():
let glView = GLKView(frame: superview.bounds, context: EAGLContext(API: .OpenGLES2))
let context = CIContext(EAGLContext: glView.context)
After your processing, render the image:
glView.bindDrawable()
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
glView.display()
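The destinationRect in that call isn't defined above; one reasonable choice (a sketch, and note that GLKView's drawable size is in pixels, not points) is an aspect-fit rectangle inside the view's drawable:

// hypothetical aspect-fit destination rect in the GLKView's pixel space
let drawableSize = CGSize(width: CGFloat(glView.drawableWidth),
                          height: CGFloat(glView.drawableHeight))
let fitScale = min(drawableSize.width / image.extent.width,
                   drawableSize.height / image.extent.height)
let fittedSize = CGSize(width: image.extent.width * fitScale,
                        height: image.extent.height * fitScale)
let destinationRect = CGRect(x: (drawableSize.width - fittedSize.width) / 2,
                             y: (drawableSize.height - fittedSize.height) / 2,
                             width: fittedSize.width,
                             height: fittedSize.height)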
That is a pretty big image, so that's definitely part of it. I'd recommend looking at GPUImage for doing single-image filters. You can skip using Core Image altogether.
let inputImage:UIImage = //... some image
let stillImageSource = GPUImagePicture(image: inputImage)
let filter = GPUImageSepiaFilter()
stillImageSource.addTarget(filter)
filter.useNextFrameForImageCapture()
stillImageSource.processImage()
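The snippet stops before reading the result back; with GPUImage the filtered still image is typically captured from the filter's framebuffer afterwards, roughly like this:

// read the filtered result back as a UIImage
let filteredImage = filter.imageFromCurrentFramebuffer()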
Related
Here is my problem: I want to display a pixel buffer that I calculated in an MTKView. I searched for MTLTexture, MTLBuffer and other Metal objects, but I can't find any way to just present a pixel buffer.
Every tutorial I've seen is about presenting 3D objects with vertex and fragment shaders.
I think the buffer has to be presented within the draw(in:) delegate function (maybe with an MTLRenderCommandEncoder), but again, I can't find any information about this.
I hope I'm not asking an obvious question.
Thanks
Welcome!
I recommend you use Core Image for rendering the content of the pixel buffer into the view. This requires the least manual Metal setup.
Set up the MTKView and some required objects as follows (assuming you have a view controller and a storyboard set up):
import UIKit
import CoreImage
import MetalKit

class PreviewViewController: UIViewController {

    @IBOutlet weak var metalView: MTKView!

    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var ciContext: CIContext!

    var pixelBuffer: CVPixelBuffer?

    override func viewDidLoad() {
        super.viewDidLoad()

        self.device = MTLCreateSystemDefaultDevice()
        self.commandQueue = self.device.makeCommandQueue()

        self.metalView.delegate = self
        self.metalView.device = self.device
        // this allows us to render into the view's drawable
        self.metalView.framebufferOnly = false

        self.ciContext = CIContext(mtlDevice: self.device)
    }

}
In the delegate method you use Core Image to transform the pixel buffer to fit the contents of the view (this is a bonus, adapt it to your use case) and render it using the CIContext:
extension PreviewViewController: MTKViewDelegate {

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // required by MTKViewDelegate; nothing to do here
    }

    func draw(in view: MTKView) {
        guard let pixelBuffer = self.pixelBuffer,
              let currentDrawable = view.currentDrawable,
              let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }

        // turn the pixel buffer into a CIImage so we can use Core Image for rendering into the view
        let image = CIImage(cvPixelBuffer: pixelBuffer)

        // bonus: transform the image to aspect-fit the view's bounds
        let drawableSize = view.drawableSize
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scale = min(scaleX, scaleY)
        let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        // center in the view
        let originX = max(drawableSize.width - scaledImage.extent.size.width, 0) / 2
        let originY = max(drawableSize.height - scaledImage.extent.size.height, 0) / 2
        let centeredImage = scaledImage.transformed(by: CGAffineTransform(translationX: originX, y: originY))

        // Create a render destination that allows to lazily fetch the target texture,
        // which allows the encoder to process all CI commands _before_ the texture is actually available.
        // This gives a nice speed boost because the CPU doesn't need to wait for the GPU to finish
        // before starting to encode the next frame.
        // Also note that we don't pass a command buffer here, because according to Apple:
        // "Rendering to a CIRenderDestination initialized with a commandBuffer requires encoding all
        // the commands to render an image into the specified buffer. This may impact system responsiveness
        // and may result in higher memory usage if the image requires many passes to render."
        let destination = CIRenderDestination(width: Int(drawableSize.width),
                                              height: Int(drawableSize.height),
                                              pixelFormat: view.colorPixelFormat,
                                              commandBuffer: nil,
                                              mtlTextureProvider: { () -> MTLTexture in
                                                  return currentDrawable.texture
                                              })

        // render into the view's drawable
        let _ = try! self.ciContext.startTask(toRender: centeredImage, to: destination)

        // present the drawable
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }

}
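How the pixelBuffer property gets filled is outside the scope of the answer; one hypothetical way to push new frames into the view (the display(_:) method here is not part of the original code) would be:

func display(_ buffer: CVPixelBuffer) {
    self.pixelBuffer = buffer
    // only needed if the view is configured with enableSetNeedsDisplay = true;
    // otherwise MTKView redraws continuously on its own
    self.metalView.setNeedsDisplay()
}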
There is a slightly simpler way to render into the drawable texture instead of using CIRenderDestination, but the approach above is recommended if you want to achieve high frame rates (see the comment in the code).
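For reference, that simpler variant (a sketch; it skips CIRenderDestination and renders straight into the drawable's texture inside the same draw(in:) method, at the cost of some pipelining) would look roughly like:

guard let currentDrawable = view.currentDrawable,
      let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }
// render the (already transformed) CIImage directly into the drawable's texture
self.ciContext.render(centeredImage,
                      to: currentDrawable.texture,
                      commandBuffer: commandBuffer,
                      bounds: CGRect(origin: .zero, size: view.drawableSize),
                      colorSpace: CGColorSpaceCreateDeviceRGB())
commandBuffer.present(currentDrawable)
commandBuffer.commit()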
I think I found a solution: https://developer.apple.com/documentation/metal/creating_and_sampling_textures?language=objc.
In this example, they show how to render an image to a Metal view, using just a few vertices and a fragment shader to render the texture onto a 2D square.
I'll go from there. Not sure if there isn't a better (simpler?) way to do that, but I guess that's how Metal wants us to do this.
Since the documentation on SwiftUI isn't great yet, I wanted to ask how I can convert an Image to a UIImage, or how to convert an Image to pngData/jpgData.
let image = Image(systemName: "circle.fill")
let UIImage = image as UIImage
There is no direct way of converting an Image to a UIImage. Instead, you should treat the Image as a View and try to convert that View to a UIImage.
Image conforms to View, so we already have the View we need; now we just need to convert that View to a UIImage.
We need two components to achieve this: first, a function to change our Image/View to a UIView, and second, one to change the UIView we created to a UIImage.
For more convenience, both functions are declared as extensions to their appropriate types.
import SwiftUI
import UIKit

extension View {
    // This function changes our View to UIView, then calls another function
    // to convert the newly-made UIView to a UIImage.
    public func asUIImage() -> UIImage {
        let controller = UIHostingController(rootView: self)

        controller.view.frame = CGRect(x: 0, y: CGFloat(Int.max), width: 1, height: 1)
        UIApplication.shared.windows.first!.rootViewController?.view.addSubview(controller.view)

        let size = controller.sizeThatFits(in: UIScreen.main.bounds.size)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        controller.view.sizeToFit()

        // here is the call to the function that converts UIView to UIImage: `.asUIImage()`
        let image = controller.view.asUIImage()
        controller.view.removeFromSuperview()
        return image
    }
}

extension UIView {
    // This is the function to convert UIView to UIImage
    public func asUIImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
How To Use?
let image: Image = Image("MyImageName") // Create an Image anyhow you want
let uiImage: UIImage = image.asUIImage() // Works Perfectly
Bonus
As I said, we are treating the Image as a View. In the process, we don't use any specific features of Image; the only thing that matters is that our Image is a View (conforms to the View protocol).
This means that with this method, you can not only convert an Image to a UIImage, but you can also convert any View to a UIImage.
var myView: some View {
// create the view here
}
let uiImage = myView.asUIImage() // Works Perfectly
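Since the question also asks about pngData/jpgData: once you have the UIImage, the standard UIKit accessors give you the encoded data:

let pngData = uiImage.pngData()
let jpegData = uiImage.jpegData(compressionQuality: 0.9)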
Such a thing is not possible with SwiftUI, and I bet it never will be. It goes against the whole framework concept. However, you can do:
let uiImage = UIImage(systemName: "circle.fill")!
let image = Image(uiImage: uiImage)
I want to detect a ball and have an AR model interact with it. I used OpenCV for ball detection and send the center of the ball, which I can use in hitTest to get coordinates in the sceneView. I have been converting the CVPixelBuffer to a UIImage using the following function:
static func convertToUIImage(buffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    let temporaryContext = CIContext(options: nil)
    if let temporaryImage = temporaryContext.createCGImage(
        ciImage,
        from: CGRect(x: 0, y: 0,
                     width: CVPixelBufferGetWidth(buffer),
                     height: CVPixelBufferGetHeight(buffer))) {
        let capturedImage = UIImage(cgImage: temporaryImage)
        return capturedImage
    }
    return nil
}
This gave me a rotated image.
Then I found out about changing the orientation using:
let capturedImage = UIImage(cgImage: temporaryImage, scale: 1.0, orientation: .right)
While this gave the correct orientation while the device is in portrait, rotating the device to landscape again gave a rotated image.
Now I am thinking about handling it using viewWillTransition. But before that I want to know:
If there is other way around to convert image with correct orientation?
Why does this happen?
1. Is there another way to convert the image with the correct orientation?
You may try to use snapshot() of ARSCNView (inherited from SCNView), which:
Draws the contents of the view and returns them as a new image object
so if you have an object like:
@IBOutlet var arkitSceneView: ARSCNView!
you only need to do this:
let imageFromArkitScene:UIImage? = arkitSceneView.snapshot()
2. Why does this happen?
It's because the CVPixelBuffer comes from ARFrame, which is:
captured (continuously) from the device camera, by the running AR session.
Well, since the camera orientation does not change with the rotation of the device (they are separate), to adjust the orientation of your frame to the current view, you should re-orient the image captured from your camera by applying the affine transform returned by displayTransform(for:viewportSize:):
Returns an affine transform for converting between normalized image coordinates and a coordinate space appropriate for rendering the camera image onscreen.
You may find good documentation here; usage example:
let orient = UIApplication.shared.statusBarOrientation
let viewportSize = yourSceneView.bounds.size
let transform = frame.displayTransform(for: orient, viewportSize: viewportSize).inverted()
var finalImage = CIImage(cvPixelBuffer: pixelBuffer).transformed(by: transform)
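If you then need a UIImage again for the OpenCV step, you can render the re-oriented CIImage through a CIContext; a sketch (ideally reusing one long-lived context rather than creating one per frame):

// assumes a long-lived context somewhere, e.g. let ciContext = CIContext()
if let cgImage = ciContext.createCGImage(finalImage, from: finalImage.extent) {
    let orientedImage = UIImage(cgImage: cgImage)
    // feed orientedImage to the ball detector
}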
Is there a simple way to render a rotated tiled image as a view background? Something to the effect of UIColor(patternImage:) but where the image is rotated at a certain angle?
There is no simple way to achieve this, at least not in vanilla Swift. I would use another UIView as a subview for our original view, set its background to a tiled image and add a CGAffineTransform to that particular view.
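A rough sketch of that subview idea (hypothetical names; the tile view is deliberately oversized so the rotation doesn't expose its edges):

// container is the view that should get the rotated, tiled background
let tileView = UIView(frame: container.bounds.insetBy(dx: -container.bounds.width,
                                                      dy: -container.bounds.height))
tileView.backgroundColor = UIColor(patternImage: UIImage(named: "sample")!)
tileView.transform = CGAffineTransform(rotationAngle: .pi / 6)
container.insertSubview(tileView, at: 0)
container.clipsToBounds = true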
Turns out Core Image filter CIAffineTile does exactly what I want.
extension UIImage {
    func tile(angle: CGFloat) -> CIImage? {
        return CIImage(image: self)?.applyingFilter(
            "CIAffineTile",
            withInputParameters: [
                kCIInputTransformKey: NSValue(
                    cgAffineTransform: CGAffineTransform(rotationAngle: angle)
                )
            ]
        )
    }
}
This function creates a CIImage with infinite extent, which can be cropped and converted to a real image.
let v = UIImageView()
// ...
let source = UIImage(named: "sample")!
let tiled = source.tile(angle: CGFloat.pi / 6)!
let result = UIImage(ciImage: tiled.cropping(to: v.bounds))
v.image = result
I have a UIView canvas of which I would like to save a screenshot, including its subviews (which are the colorful shapes), to the camera roll when I press the UIBarButton shareBarButton. This works on the simulator; however, it does not produce the image the way I'd like:
Ideally, what I would like the snapshot to look like (except without the carrier, time, and battery status at the top of the screen).
What the snapshot in the camera roll actually looks like.
I want the snapshot to look exactly the way it looks on the iPhone screen. So if part of a shape goes beyond the screen, the snapshot will capture only the part of the shape that is visible on screen. I also want the snapshot to have the size of the canvas, which is basically the size of the view except a slightly shorter height:
canvas = UIView(frame: CGRectMake(0, 0, view.bounds.height, view.bounds.height-toolbar.bounds.height))
If someone could tell what I'm doing wrong in creating the snapshot that would be greatly appreciated!
My code:
func share(sender: UIBarButtonItem) {
    let masterpiece = canvas.snapshotViewAfterScreenUpdates(true)
    let image = snapshot(masterpiece)
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}

func snapshot(masterpiece: UIView) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(masterpiece.bounds.size, false, UIScreen.mainScreen().scale)
    masterpiece.drawViewHierarchyInRect(masterpiece.bounds, afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
In the first instance, I would try snapshotting the UIWindow to see if that solves your issue.
Here is a UIWindow extension I use (not specifically for camera work) - try that.
import UIKit

extension UIWindow {
    func capture() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, self.opaque, UIScreen.mainScreen().scale)
        self.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
I call it like:
let window: UIWindow! = UIApplication.sharedApplication().keyWindow
let windowImage = window.capture()
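If only the canvas area should end up in the saved photo, the window capture can then be cropped to the canvas frame; a sketch in current Swift syntax (canvas is the view from the question):

// convert the canvas frame to window coordinates, then to pixels
let cropRect = window.convert(canvas.frame, from: canvas.superview)
let scale = windowImage.scale
let pixelRect = CGRect(x: cropRect.origin.x * scale,
                       y: cropRect.origin.y * scale,
                       width: cropRect.width * scale,
                       height: cropRect.height * scale)
if let croppedCGImage = windowImage.cgImage?.cropping(to: pixelRect) {
    let canvasImage = UIImage(cgImage: croppedCGImage, scale: scale, orientation: .up)
    UIImageWriteToSavedPhotosAlbum(canvasImage, nil, nil, nil)
}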