CIImage display: MTKView vs GLKView performance - iOS

I have a series of UIImages (made from incoming JPEG data from a server) that I wish to render using MTKView. The problem is that it is too slow compared to GLKView: there is a lot of buffering and delay when I have a series of images to display in MTKView, but no delay in GLKView.
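For reference, the frames are assumed to arrive roughly like this; jpegData and metalImageView are hypothetical names, just to sketch the path from server bytes to the view:

    // Sketch of the assumed frame path: server JPEG Data -> CIImage -> MTKView subclass
    func handleIncomingFrame(_ jpegData: Data) {
        guard let frame = CIImage(data: jpegData) else { return } // decode the JPEG into a CIImage
        metalImageView.displayCoreImage(frame)                    // hand it to the view below
    }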
Here is the MTKView display code:
    private lazy var context: CIContext = {
        return CIContext(mtlDevice: self.device!, options: [CIContextOption.workingColorSpace: NSNull()])
    }()

    var ciImg: CIImage? {
        didSet {
            syncQueue.sync {
                internalCoreImage = ciImg
            }
        }
    }

    func displayCoreImage(_ ciImage: CIImage) {
        self.ciImg = ciImage
    }

    override func draw(_ rect: CGRect) {
        var ciImage: CIImage?
        syncQueue.sync {
            ciImage = internalCoreImage
        }
        drawCIImage(ciImage) // pass the copy read under the lock, not the property
    }

    func drawCIImage(_ ciImage: CIImage?) {
        guard let image = ciImage,
              let currentDrawable = currentDrawable,
              let commandBuffer = commandQueue?.makeCommandBuffer()
        else {
            return
        }
        // Scale the image to fill the drawable, then render it into the drawable's texture.
        let currentTexture = currentDrawable.texture
        let drawingBounds = CGRect(origin: .zero, size: drawableSize)
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
        context.render(scaledImage, to: currentTexture, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
And here is the code for the GLKView, which is lag-free and fast:
    private var videoPreviewView: GLKView!
    private var eaglContext: EAGLContext!
    private var context: CIContext!

    override init(frame: CGRect) {
        super.init(frame: frame)
        initCommon()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        initCommon()
    }

    func initCommon() {
        eaglContext = EAGLContext(api: .openGLES3)!
        videoPreviewView = GLKView(frame: self.bounds, context: eaglContext)
        context = CIContext(eaglContext: eaglContext, options: nil)
        self.addSubview(videoPreviewView)
        videoPreviewView.bindDrawable()
        videoPreviewView.clipsToBounds = true
        videoPreviewView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    }
    func displayCoreImage(_ ciImage: CIImage) {
        let sourceExtent = ciImage.extent
        let sourceAspect = sourceExtent.size.width / sourceExtent.size.height
        let videoPreviewWidth = CGFloat(videoPreviewView.drawableWidth)
        let videoPreviewHeight = CGFloat(videoPreviewView.drawableHeight)
        let previewAspect = videoPreviewWidth / videoPreviewHeight

        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        var drawRect = sourceExtent
        if sourceAspect > previewAspect {
            // use full height of the video image, and center crop the width
            drawRect.origin.x = drawRect.origin.x + (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
            drawRect.size.width = drawRect.size.height * previewAspect
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y = drawRect.origin.y + (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
            drawRect.size.height = drawRect.size.width / previewAspect
        }

        var videoRect = CGRect(x: 0, y: 0, width: videoPreviewWidth, height: videoPreviewHeight)
        if sourceAspect < previewAspect {
            // use full height of the view, and letterbox the width
            videoRect.origin.x += (videoRect.size.width - videoRect.size.height * sourceAspect) / 2.0
            videoRect.size.width = videoRect.size.height * sourceAspect
        } else {
            // use full width of the view, and letterbox the height
            videoRect.origin.y += (videoRect.size.height - videoRect.size.width / sourceAspect) / 2.0
            videoRect.size.height = videoRect.size.width / sourceAspect
        }

        videoPreviewView.bindDrawable()
        if eaglContext != EAGLContext.current() {
            EAGLContext.setCurrent(eaglContext)
        }

        // clear the GLKView to black
        glClearColor(0, 0, 0, 1)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

        context.draw(ciImage, in: videoRect, from: sourceExtent)
        videoPreviewView.display()
    }
I really want to find out where the bottleneck is in the Metal code. Is Metal not capable of displaying 640x360 UIImages 20 times per second?
EDIT: Setting the colorPixelFormat of the MTKView to rgba16Float solves the delay issue, but the reproduced colors are not accurate, so this seems like a color space conversion issue with Core Image. But how does GLKView render so fast without delay, when MTKView does not?
EDIT2: Setting the colorPixelFormat of the MTKView to bgra10_xr mostly solves the delay issue. But the problem is that we cannot use the CIRenderDestination API with this pixel format.
I am still wondering how GLKView/CIContext renders the images so quickly without any delay, while in MTKView we need to set colorPixelFormat to bgra10_xr to increase performance. And setting bgra10_xr on an iPad mini 2 causes a crash:
-[MTLRenderPipelineDescriptorInternal validateWithDevice:], line 2590: error 'pixelFormat, for color render target(0), is not a valid MTLPixelFormat.
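For reference, here is a minimal sketch of the pixel-format configuration discussed above, with a fallback for older GPUs that do not support the extended-range format (the .apple3 family check is an assumption; consult the Metal feature set tables for the exact requirement):

    let metalView = MTKView(frame: frame, device: MTLCreateSystemDefaultDevice())
    metalView.framebufferOnly = false // required so Core Image can write to the drawable
    if metalView.device?.supportsFamily(.apple3) == true {
        metalView.colorPixelFormat = .bgra10_xr   // the fast path discussed above
    } else {
        metalView.colorPixelFormat = .bgra8Unorm  // safe default for older GPUs (e.g. iPad mini 2)
    }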

Related

MTKView is blurry - samplingNearest() does not appear to work

I'm using a MTKView to display some pixel art, but it shows up blurry.
Here is the really weird part: I took a screenshot to show you all what it looks like, but the screenshot is perfectly sharp! Yet the contents of the MTKView are blurry. Here's the screenshot, and a simulation of what it looks like in the app:
Note the test pattern displayed in the app is 32 x 32 pixels.
When switching from one app to this one, the view is briefly sharp, before instantly becoming blurry.
I suspect this has something to do with anti-aliasing, but I can't seem to find a way to turn it off. Here is my code:
    import UIKit
    import MetalKit

    class ViewController: UIViewController, MTKViewDelegate {
        var metalView: MTKView!
        var image: CIImage!
        var commandQueue: MTLCommandQueue!
        var context: CIContext!

        override func viewDidLoad() {
            super.viewDidLoad()
            setup()
            layout()
        }

        func setup() {
            guard let image = loadTestPattern() else { return }
            self.image = image
            let metalView = MTKView(frame: CGRect(origin: CGPoint.zero, size: image.extent.size))
            metalView.device = MTLCreateSystemDefaultDevice()
            metalView.delegate = self
            metalView.framebufferOnly = false
            metalView.isPaused = true
            metalView.enableSetNeedsDisplay = true
            commandQueue = metalView.device?.makeCommandQueue()
            context = CIContext(mtlDevice: metalView.device!)
            self.metalView = metalView
            view.addSubview(metalView)
        }

        func layout() {
            let size = image.extent.size
            metalView.translatesAutoresizingMaskIntoConstraints = false
            NSLayoutConstraint.activate([
                metalView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
                metalView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
                metalView.widthAnchor.constraint(equalToConstant: size.width),
                metalView.heightAnchor.constraint(equalToConstant: size.height),
            ])
            let viewBounds = view.bounds.size
            let scale = min(viewBounds.width / size.width, viewBounds.height / size.height)
            metalView.layer.magnificationFilter = CALayerContentsFilter.nearest
            metalView.transform = metalView.transform.scaledBy(x: floor(scale * 0.8), y: floor(scale * 0.8))
        }

        func loadTestPattern() -> CIImage? {
            guard let uiImage = UIImage(named: "TestPattern_32.png") else { return nil }
            guard let image = CIImage(image: uiImage) else { return nil }
            return image
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let image = self.image else { return }
            if let currentDrawable = view.currentDrawable,
               let commandBuffer = self.commandQueue.makeCommandBuffer() {
                let drawableSize = view.drawableSize
                let scaleX = drawableSize.width / image.extent.width
                let scaleY = drawableSize.height / image.extent.height
                let scale = min(scaleX, scaleY)
                let scaledImage = image.samplingNearest().transformed(by: CGAffineTransform(scaleX: scale, y: scale))
                let destination = CIRenderDestination(width: Int(drawableSize.width),
                                                      height: Int(drawableSize.height),
                                                      pixelFormat: view.colorPixelFormat,
                                                      commandBuffer: nil,
                                                      mtlTextureProvider: { () -> MTLTexture in currentDrawable.texture })
                try! self.context.startTask(toRender: scaledImage, to: destination)
                commandBuffer.present(currentDrawable)
                commandBuffer.commit()
            }
        }
    }
Any ideas on what is going on?
Edit 01:
Some additional clues: I attached a pinch gesture recognizer to the MTKView, and printed how much it's being scaled by. Up to a scale factor of approximately 31-32, it appears to be using a linear filter, but beyond 31 or 32, nearest filtering takes over.
Clue #2: Problem disappears when MTKView is replaced with a standard UIImageView.
I'm not sure why that is.
You can find out how to turn multisampling anti-aliasing on and off in How to use multisampling with an MTKView? Just set .sampleCount = 1. However, your problem doesn't look MSAA-related.
My only other idea: I'd check the framebuffer sizes in the Metal Debugger in Xcode. Sometimes (depending on the content scale factor of your device) the framebuffer can be stretched. E.g. if you have a device with a virtual resolution of 100x100 and a content scale factor of 2, the physical resolution would be 200x200; in that case a 100x100 framebuffer will be stretched by the system. This may happen with implicit linear filtering, instead of the nearest filtering you set for the main render pass. Screenshots can be taken at 1:1 resolution, so the system stretching doesn't happen there.
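As a quick check along those lines, here is a minimal sketch (assuming a metalView property) that disables MSAA and makes sure the drawable matches the screen's physical pixel size, so no implicit stretching occurs:

    metalView.sampleCount = 1                                // no MSAA
    metalView.contentScaleFactor = UIScreen.main.nativeScale // render at physical resolution
    // With autoResizeDrawable (the default), MTKView now sizes its drawable to
    // bounds * contentScaleFactor, so the framebuffer is never stretched.
    print(metalView.drawableSize, metalView.bounds.size)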

Achieve same CIFilter effect on different sizes of same image

I'm building a photo editor, and to keep performance good I filter a small version of the image first; when the user wants to export it, I then filter the higher-resolution image.
I'm using the CIGaussianBlur filter, but I can't achieve the same results for different resolutions of the same image.
This is my code:
    import UIKit
    import MetalKit
    import CoreImage.CIFilterBuiltins

    class ViewController: UIViewController {
        var originalImage = UIImage()
        var previewImageView = UIImageView()
        var previewCIImage: CIImage!
        var scaleFactor = CGFloat()
        let blurFilter = CIFilter.gaussianBlur()
        var blurSlider = UISlider()
        var blurRadius = Float()
        var metalView: MTKView!       // the view that renders the filtered preview
        let context = CIContext()     // used when exporting the full-size image

        override func viewDidLoad() {
            super.viewDidLoad()
            previewImageView.image = originalImage.scalePreservingAspectRatio(targetSize: previewImageView.frame.size)
            previewCIImage = CIImage(image: previewImageView.image!)
            // Get the scale factor
            scaleFactor = originalImage.getScaleFactor(targetSize: previewImageView.frame.size)
            blurSlider.addTarget(self, action: #selector(blurChanged(slider:)), for: .valueChanged)
        }

        @objc func blurChanged(slider: UISlider) {
            blurRadius = slider.value
            let scaledRadius = blurRadius * Float(scaleFactor)
            blurFilter.radius = scaledRadius
            metalView.setNeedsDisplay() // redraw the preview
        }

        func exportFullSizeImage() -> UIImage {
            let inputImage = CIImage(image: originalImage)!
            blurFilter.inputImage = inputImage.clampedToExtent()
            // Assuming scaleFactor is 1.0 for the unscaled image
            let scaledRadius = blurRadius * 1.0
            blurFilter.radius = scaledRadius
            let output = blurFilter.outputImage!
            let outputCGImage = context.createCGImage(output, from: output.extent)
            return UIImage(cgImage: outputCGImage!)
        }
    }
    extension UIImage {
        func scalePreservingAspectRatio(targetSize: CGSize) -> UIImage {
            let widthRatio = targetSize.width / size.width
            let heightRatio = targetSize.height / size.height
            let scaleFactor = min(widthRatio, heightRatio)
            let scaledImageSize = CGSize(
                width: size.width * scaleFactor,
                height: size.height * scaleFactor
            )
            let renderer = UIGraphicsImageRenderer(size: scaledImageSize)
            let scaledImage = renderer.image { _ in
                self.draw(in: CGRect(origin: .zero, size: scaledImageSize))
            }
            return scaledImage
        }

        func getScaleFactor(targetSize: CGSize) -> CGFloat {
            let widthRatio = targetSize.width / size.width
            let heightRatio = targetSize.height / size.height
            return min(widthRatio, heightRatio)
        }
    }
Here's the output of the small version of the image (preview image):
And here's the output of the full size image (unscaled image):
The results are clearly different; the full-size/unscaled image has more blur. I need to achieve the same blur effect on both images.
I've found two similar questions: Output of CIFilter has different effect for different sizes of same image and How to achieve same CIFilter effect on multiple sizes of same image
I know the scale factor of the resized image, that's maybe useful to get an answer.
The parameter scaling from the linked answer should work for all images, regardless of their aspect ratio. The important part is that you apply the scale factor to both images, the preview and the export.
Alternatively, since you have the scale factor of the resized image, you can use that to scale the parameter (instead of using the image size):
// assuming scaleFactor is 1.0 for the unscaled image
let scaledRadius = radius * scaleFactor
filter.setValue(scaledRadius, forKey: "inputRadius")
Please also note that not every parameter of every filter needs scaling to achieve consistency across different image sizes. Usually, only parameters that describe some kind of effect radius or size need scaling.
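To make that concrete, here is a small sketch of one way to keep the preview and the export consistent (the helper name and the clampedToExtent/cropped calls are my own additions, not from the question):

    import CoreImage.CIFilterBuiltins

    // Applies the blur so a given slider value looks the same at any resolution.
    // scaleFactor is the image's size relative to the original (1.0 for the original).
    func blurred(_ image: CIImage, radius: Float, scaleFactor: CGFloat) -> CIImage? {
        let filter = CIFilter.gaussianBlur()
        filter.inputImage = image.clampedToExtent() // avoid dark, blurred-in edges
        filter.radius = radius * Float(scaleFactor) // the radius scales with the image
        return filter.outputImage?.cropped(to: image.extent)
    }

    // The preview uses the preview's scale factor; the export uses 1.0:
    let preview = blurred(previewCIImage, radius: blurRadius, scaleFactor: scaleFactor)
    let export = blurred(CIImage(image: originalImage)!, radius: blurRadius, scaleFactor: 1.0)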

MTKView glitches/strobing while using a custom blur filter written in Metal

I am using CADisplayLink to live-filter an image and show it in an MTKView. All filters work fine until I try the blur filter: during that filter the MTKView sometimes starts strobing, glitching, or just showing a black screen on some frames instead of the actual result image.
I have three interesting observations:
1) There is no such problem when I display the result image in a UIImageView, so the filter itself is not the cause of the problem.
2) If I switch back from blur to any other filter, the same problem starts happening in those filters too, but ONLY after I have used the blur filter first.
3) The glitching slowly fades away the more I use the app. It even occurs less and less across app launches.
Code for the MTKView:
    import GLKit
    import UIKit
    import MetalKit
    import QuartzCore

    class MetalImageView: MTKView {
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        lazy var commandQueue: MTLCommandQueue = { [unowned self] in
            return self.device!.makeCommandQueue()!
        }()

        lazy var ciContext: CIContext = { [unowned self] in
            return CIContext(mtlDevice: self.device!)
        }()

        override init(frame frameRect: CGRect, device: MTLDevice?) {
            super.init(frame: frameRect,
                       device: device ?? MTLCreateSystemDefaultDevice())
            if super.device == nil {
                fatalError("Device doesn't support Metal")
            }
            framebufferOnly = false
        }

        required init(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // from tutorial
        private func setup() {
            framebufferOnly = false
            isPaused = false
            enableSetNeedsDisplay = false
        }

        /// The image to display
        var image: CIImage? {
            didSet {
            }
        }

        override func draw() {
            guard let image = image,
                  let targetTexture = currentDrawable?.texture else {
                return
            }
            let commandBuffer = commandQueue.makeCommandBuffer()
            let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)
            let originX = image.extent.origin.x
            let originY = image.extent.origin.y
            let scaleX = drawableSize.width / image.extent.width
            let scaleY = drawableSize.height / image.extent.height
            let scale = min(scaleX, scaleY)
            let scaledImage = image
                .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
                .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
            ciContext.render(scaledImage,
                             to: targetTexture,
                             commandBuffer: commandBuffer,
                             bounds: bounds,
                             colorSpace: colorSpace)
            commandBuffer!.present(currentDrawable!)
            commandBuffer!.commit()
            super.draw()
        }
    }

    extension CGRect {
        func aspectFitInRect(target: CGRect) -> CGRect {
            let scale: CGFloat = {
                let scale = target.width / self.width
                return self.height * scale <= target.height ? scale : target.height / self.height
            }()
            let width = self.width * scale
            let height = self.height * scale
            let x = target.midX - width / 2
            let y = target.midY - height / 2
            return CGRect(x: x, y: y, width: width, height: height)
        }
    }
The code for the blur filter in Metal:
    float4 zoneBlur(sampler src, float time, float4 touch) {
        float focusPower = 2.0;
        int focusDetail = 10;
        float2 uv = src.coord();
        float2 fingerPos;
        float2 size = src.size();
        if (touch.x == 0 || touch.y == 0) {
            fingerPos = float2(0.5, 0.5);
        } else {
            fingerPos = touch.xy / size.xy;
        }
        float2 focus = uv - fingerPos;
        float4 outColor;
        outColor = float4(0, 0, 0, 1);
        // Accumulate samples along the line from the pixel toward the touch point.
        for (int i = 0; i < focusDetail; i++) {
            float power = 1.0 - focusPower * (1.0 / size.x) * float(i);
            outColor.rgb += src.sample(focus * power + fingerPos).rgb;
        }
        outColor.rgb *= 1.0 / float(focusDetail);
        return outColor;
    }
I wonder what could cause such odd behaviour?
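For context, a rough sketch of how a Metal CIKernel like zoneBlur is typically loaded and applied; this assumes the .metal source was compiled with -fcikernel (and linked with -cikernel) so it ends up in default.metallib, and the ROI padding value is a placeholder:

    import CoreImage

    let zoneBlurKernel: CIKernel = {
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "zoneBlur", fromMetalLibraryData: data)
    }()

    func applyZoneBlur(to image: CIImage, time: Float, touch: CGPoint) -> CIImage? {
        return zoneBlurKernel.apply(
            extent: image.extent,
            roiCallback: { _, rect in rect.insetBy(dx: -50, dy: -50) }, // kernel samples away from the pixel
            arguments: [image, time, CIVector(x: touch.x, y: touch.y, z: 0, w: 0)]
        )
    }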

Metal View (MTKView) Drawing Size Issue

Here I have an MTKView running a simple CIFilter live on a camera feed. This works fine.
Issue
On older devices' selfie cameras, such as the iPhone 5 and iPad Air, the feed gets drawn in a smaller area. UPDATE: I found out that the CMSampleBuffer fed to the MTKView is smaller in size when this happens. I guess the texture needs to be scaled up in each update?
    import UIKit
    import MetalPerformanceShaders
    import MetalKit
    import AVFoundation

    final class MetalObject: NSObject, MTKViewDelegate {
        private var metalBufferView: MTKView?
        private var metalDevice = MTLCreateSystemDefaultDevice()
        private var metalCommandQueue: MTLCommandQueue!
        private var metalSourceTexture: MTLTexture?
        private var context: CIContext?
        private var filter: CIFilter?
        // (other properties such as orientationNumber, showFilter, and colorSpace
        // are elided from this excerpt)

        init(with frame: CGRect, filterType: Int, scaledUp: Bool) {
            super.init()
            self.metalCommandQueue = self.metalDevice!.makeCommandQueue()
            self.metalBufferView = MTKView(frame: frame, device: self.metalDevice)
            self.metalBufferView!.framebufferOnly = false
            self.metalBufferView!.isPaused = true
            self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale
            self.metalBufferView!.delegate = self
            self.context = CIContext()
        }

        final func update(sampleBuffer: CMSampleBuffer) {
            var textureCache: CVMetalTextureCache?
            CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.metalDevice!, nil, &textureCache)
            var cameraTexture: CVMetalTexture?
            guard let cameraTextureCache = textureCache,
                  let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }
            let cameraTextureWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
            let cameraTextureHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
            CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                      cameraTextureCache,
                                                      pixelBuffer,
                                                      nil,
                                                      MTLPixelFormat.bgra8Unorm,
                                                      cameraTextureWidth,
                                                      cameraTextureHeight,
                                                      0,
                                                      &cameraTexture)
            if let cameraTexture = cameraTexture,
               let metalTexture = CVMetalTextureGetTexture(cameraTexture) {
                self.metalSourceTexture = metalTexture
                self.metalBufferView!.draw()
            }
        }

        //MARK: - Metal View Delegate
        final func draw(in view: MTKView) {
            guard let currentDrawable = self.metalBufferView!.currentDrawable,
                  let sourceTexture = self.metalSourceTexture
            else { return }
            let commandBuffer = self.metalCommandQueue!.makeCommandBuffer()
            var inputImage = CIImage(mtlTexture: sourceTexture)!.applyingOrientation(self.orientationNumber)
            if self.showFilter {
                self.filter!.setValue(inputImage, forKey: kCIInputImageKey)
                inputImage = filter!.outputImage!
            }
            self.context!.render(inputImage, to: currentDrawable.texture, commandBuffer: commandBuffer, bounds: inputImage.extent, colorSpace: self.colorSpace!)
            commandBuffer!.present(currentDrawable)
            commandBuffer!.commit()
        }

        final func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        }
    }
Observations
Only happens on the selfie cameras of older devices.
Selfie cameras on newer devices are fine.
When the issue occurs, new content gets drawn in a smaller area (gravitating towards the top left), with old content from the back camera still remaining outside of the new content.
Constraints and the sizing/placement of the Metal view are fine.
Setting self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale solves the weird scaling issue on Plus devices.
It looks like the resolution of the front (selfie) camera on older devices is lower, so you'll need to scale the video up if you want it to use the full width or height. Since you're already using CIContext and Metal, you can simply instruct the rendering call to draw the image to whatever rectangle you like.
In your draw method, you execute
    self.context!.render(inputImage,
                         to: currentDrawable.texture,
                         commandBuffer: commandBuffer,
                         bounds: inputImage.extent,
                         colorSpace: self.colorSpace!)
The bounds argument is the destination rectangle in which the image will be rendered. Currently, you are using the image extent, which means the image will not be scaled.
To scale the video up, use the display rectangle instead. You can simply use your metalBufferView.bounds since this will be the size of your display view. You'll end up with
    self.context!.render(inputImage,
                         to: currentDrawable.texture,
                         commandBuffer: commandBuffer,
                         bounds: self.metalBufferView.bounds,
                         colorSpace: self.colorSpace!)
If the image and the view have different aspect ratios (aspect ratio = width/height), then you'll have to compute the correct size so that the image's aspect ratio is preserved. In Swift, that looks something like this:
    var dest = self.metalBufferView!.bounds
    let imageSize = inputImage.extent.size
    let viewSize = dest.size
    let imageAspect = imageSize.width / imageSize.height
    let viewAspect = viewSize.width / viewSize.height
    if imageAspect > viewAspect {
        // the image is wider than the view, adjust the height
        dest.size.height = dest.size.width / imageAspect
        // center the wide image vertically
        dest.origin.y = (viewSize.height - dest.size.height) / 2
    } else {
        // the image is taller than the view, adjust the width
        dest.size.width = imageAspect * dest.size.height
        // center the tall image horizontally
        dest.origin.x = (viewSize.width - dest.size.width) / 2
    }
Hope this is useful, please let me know if anything doesn't work or clarification would be helpful.

How to convert UIView to UIImage with high resolution?

There have been several discussions regarding how to convert a UIView to a UIImage, either using view.drawHierarchy(in:) or view.layer.render(in:). However, even if I set the scale to the device scale, the result is still pretty low resolution. I wonder if there's a way to convert a UIView to a UIImage with high resolution and quality?
You need to set the correct content scale on each subview.
    extension UIView {
        func scale(by scale: CGFloat) {
            self.contentScaleFactor = scale
            for subview in self.subviews {
                subview.scale(by: scale)
            }
        }

        func getImage(scale: CGFloat? = nil) -> UIImage {
            let newScale = scale ?? UIScreen.main.scale
            self.scale(by: newScale)
            let format = UIGraphicsImageRendererFormat()
            format.scale = newScale
            let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: format)
            let image = renderer.image { rendererContext in
                self.layer.render(in: rendererContext.cgContext)
            }
            return image
        }
    }
To create your image:
let image = yourView.getImage()
