MTKView is blurry - samplingNearest() does not appear to work - iOS

I'm using a MTKView to display some pixel art, but it shows up blurry.
Here is the really weird part: I took a screenshot to show you all what it looks like, but the screenshot is perfectly sharp! Yet the contents of the MTKView are blurry. Here's the screenshot, and a simulation of what it looks like in the app:
Note the test pattern displayed in the app is 32 x 32 pixels.
When switching from one app to this one, the view is briefly sharp, before instantly becoming blurry.
I suspect this has something to do with anti-aliasing, but I can't seem to find a way to turn it off. Here is my code:
import UIKit
import MetalKit

class ViewController: UIViewController, MTKViewDelegate {

    var metalView: MTKView!
    var image: CIImage!
    var commandQueue: MTLCommandQueue!
    var context: CIContext!

    override func viewDidLoad() {
        super.viewDidLoad()
        setup()
        layout()
    }

    func setup() {
        guard let image = loadTestPattern() else { return }
        self.image = image

        let metalView = MTKView(frame: CGRect(origin: CGPoint.zero, size: image.extent.size))
        metalView.device = MTLCreateSystemDefaultDevice()
        metalView.delegate = self
        metalView.framebufferOnly = false
        metalView.isPaused = true
        metalView.enableSetNeedsDisplay = true

        commandQueue = metalView.device?.makeCommandQueue()
        context = CIContext(mtlDevice: metalView.device!)

        self.metalView = metalView
        view.addSubview(metalView)
    }

    func layout() {
        let size = image.extent.size
        metalView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            metalView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            metalView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            metalView.widthAnchor.constraint(equalToConstant: size.width),
            metalView.heightAnchor.constraint(equalToConstant: size.height),
        ])

        let viewBounds = view.bounds.size
        let scale = min(viewBounds.width / size.width, viewBounds.height / size.height)
        metalView.layer.magnificationFilter = CALayerContentsFilter.nearest
        metalView.transform = metalView.transform.scaledBy(x: floor(scale * 0.8), y: floor(scale * 0.8))
    }

    func loadTestPattern() -> CIImage? {
        guard let uiImage = UIImage(named: "TestPattern_32.png") else { return nil }
        guard let image = CIImage(image: uiImage) else { return nil }
        return image
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let image = self.image else { return }

        if let currentDrawable = view.currentDrawable,
           let commandBuffer = self.commandQueue.makeCommandBuffer() {
            let drawableSize = view.drawableSize

            let scaleX = drawableSize.width / image.extent.width
            let scaleY = drawableSize.height / image.extent.height
            let scale = min(scaleX, scaleY)
            let scaledImage = image.samplingNearest().transformed(by: CGAffineTransform(scaleX: scale, y: scale))

            let destination = CIRenderDestination(width: Int(drawableSize.width),
                                                  height: Int(drawableSize.height),
                                                  pixelFormat: view.colorPixelFormat,
                                                  commandBuffer: nil,
                                                  mtlTextureProvider: { () -> MTLTexture in
                                                      return currentDrawable.texture
                                                  })

            try! self.context.startTask(toRender: scaledImage, to: destination)

            commandBuffer.present(currentDrawable)
            commandBuffer.commit()
        }
    }
}
Any ideas on what is going on?
Edit 01:
Some additional clues: I attached a pinch gesture recognizer to the MTKView, and printed how much it's being scaled by. Up to a scale factor of approximately 31-32, it appears to be using a linear filter, but beyond 31 or 32, nearest filtering takes over.
Clue #2: Problem disappears when MTKView is replaced with a standard UIImageView.
I'm not sure why that is.

You can find how to turn multisampling anti-aliasing on and off in "How to use multisampling with an MTKView?"
Just set .sampleCount = 1. However, your problem doesn't look MSAA-related.
My only other idea: I'd check the framebuffer sizes in the Metal Debugger in Xcode. Sometimes (depending on the contentScale factor of your device) the framebuffer can be stretched. E.g. you have a device with a virtual resolution of 100x100 and a content scale factor of 2. The physical resolution would then be 200x200, and a 100x100 framebuffer will be stretched by the system. This may happen with implicit linear filtering, instead of the nearest filtering you set for the main render pass. For screenshots the system can use 1:1 resolution, so the stretching doesn't happen.
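If you want to rule the stretching theory in or out, here is a minimal diagnostic sketch (assuming the metalView property from the question; this is not from the original answer):
// Render at the screen's native scale so the system doesn't stretch the
// framebuffer with implicit linear filtering.
metalView.contentScaleFactor = UIScreen.main.nativeScale
// Nearest-neighbor magnification when the layer is scaled up by the transform.
metalView.layer.magnificationFilter = .nearest
// If nothing is being stretched, drawableSize should equal
// bounds.size multiplied by contentScaleFactor.
print("bounds:", metalView.bounds.size, "drawableSize:", metalView.drawableSize)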

Related

Transforming ARFrame#capturedImage to view size

When using the ARSessionDelegate to process the raw camera image in ARKit...
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let currentFrame = session.currentFrame else { return }
    let capturedImage = currentFrame.capturedImage
    debugPrint("Display size", UIScreen.main.bounds.size)
    debugPrint("Camera frame resolution", CVPixelBufferGetWidth(capturedImage), CVPixelBufferGetHeight(capturedImage))
    // ...
}
... as documented, the camera image data doesn't match the screen size, for example, on iPhone X I get:
Display size: 375x812pt
Camera resolution: 1920x1440px
Now there is the displayTransform(for:viewportSize:) API to transform camera coordinates to view coordinates. When using the API like this:
let ciimage = CIImage(cvImageBuffer: capturedImage)
let transform = currentFrame.displayTransform(for: .portrait, viewportSize: UIScreen.main.bounds.size)
var transformedImage = ciimage.transformed(by: transform)
debugPrint("Transformed size", transformedImage.extent.size)
I get a size of 2340x1920, which seems incorrect; the result should have an aspect ratio of 375:812 (~0.46). What am I missing here, and what's the correct way to use this API to transform the camera image to an image "as displayed by ARSCNView"?
(Example project: ARKitCameraImage)
This turned out to be quite complicated because displayTransform(for:viewportSize:) expects normalized image coordinates; it seems you have to flip the coordinates only in portrait mode, and the image needs to be not only transformed but also cropped. The following code does the trick for me. Suggestions on how to improve this would be appreciated.
guard let frame = session.currentFrame else { return }
let imageBuffer = frame.capturedImage
let imageSize = CGSize(width: CVPixelBufferGetWidth(imageBuffer), height: CVPixelBufferGetHeight(imageBuffer))
let viewPort = sceneView.bounds
let viewPortSize = sceneView.bounds.size
let interfaceOrientation: UIInterfaceOrientation
if #available(iOS 13.0, *) {
    interfaceOrientation = self.sceneView.window!.windowScene!.interfaceOrientation
} else {
    interfaceOrientation = UIApplication.shared.statusBarOrientation
}
let image = CIImage(cvImageBuffer: imageBuffer)
// The camera image doesn't match the view rotation and aspect ratio
// Transform the image:
// 1) Convert to "normalized image coordinates"
let normalizeTransform = CGAffineTransform(scaleX: 1.0/imageSize.width, y: 1.0/imageSize.height)
// 2) Flip the Y axis (for some mysterious reason this is only necessary in portrait mode)
let flipTransform = (interfaceOrientation.isPortrait) ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1) : .identity
// 3) Apply the transformation provided by ARFrame
// This transformation converts:
// - From Normalized image coordinates (Normalized image coordinates range from (0,0) in the upper left corner of the image to (1,1) in the lower right corner)
// - To view coordinates ("a coordinate space appropriate for rendering the camera image onscreen")
// See also: https://developer.apple.com/documentation/arkit/arframe/2923543-displaytransform
let displayTransform = frame.displayTransform(for: interfaceOrientation, viewportSize: viewPortSize)
// 4) Convert to view size
let toViewPortTransform = CGAffineTransform(scaleX: viewPortSize.width, y: viewPortSize.height)
// Transform the image and crop it to the viewport
let transformedImage = image.transformed(by: normalizeTransform.concatenating(flipTransform).concatenating(displayTransform).concatenating(toViewPortTransform)).cropped(to: viewPort)
Thank you so much for your answer! I was working on this for a week.
Here's an alternative way to do it without messing with the orientation. Instead of using the capturedImage property you can use a snapshot of the screen.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let image = CIImage(image: sceneView.snapshot()) else { return }
    let imageSize = image.extent.size

    // Convert to "normalized image coordinates"
    let resize = CGAffineTransform(scaleX: 1.0 / imageSize.width, y: 1.0 / imageSize.height)
    // Convert to view size
    let viewSize = CGAffineTransform(scaleX: sceneView.bounds.size.width, y: sceneView.bounds.size.height)
    // Transform image
    let editedImage = image.transformed(by: resize.concatenating(viewSize)).cropped(to: sceneView.bounds)
    sceneView.scene.background.contents = context.createCGImage(editedImage, from: editedImage.extent)
}

CIImage display MTKView vs GLKView performance

I have a series of UIImages (made from incoming JPEG data from a server) that I wish to render using MTKView. The problem is that it is too slow compared to GLKView. There is a lot of buffering and delay when I have a series of images to display in MTKView, but no delay with GLKView.
Here is MTKView display code:
private lazy var context: CIContext = {
    return CIContext(mtlDevice: self.device!, options: [CIContextOption.workingColorSpace: NSNull()])
}()

var ciImg: CIImage? {
    didSet {
        syncQueue.sync {
            internalCoreImage = ciImg
        }
    }
}

func displayCoreImage(_ ciImage: CIImage) {
    self.ciImg = ciImage
}

override func draw(_ rect: CGRect) {
    var ciImage: CIImage?
    syncQueue.sync {
        ciImage = internalCoreImage
    }
    drawCIImage(ciImage) // use the copy read under syncQueue, not the property
}

func drawCIImage(_ ciImage: CIImage?) {
    guard let image = ciImage,
          let currentDrawable = currentDrawable,
          let commandBuffer = commandQueue?.makeCommandBuffer()
    else {
        return
    }
    let currentTexture = currentDrawable.texture
    let drawingBounds = CGRect(origin: .zero, size: drawableSize)

    let scaleX = drawableSize.width / image.extent.width
    let scaleY = drawableSize.height / image.extent.height
    let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    context.render(scaledImage, to: currentTexture, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
And here is code for GLKView which is lag free and fast:
private var videoPreviewView: GLKView!
private var eaglContext: EAGLContext!
private var context: CIContext!

override init(frame: CGRect) {
    super.init(frame: frame)
    initCommon()
}

required init?(coder: NSCoder) {
    super.init(coder: coder)
    initCommon()
}

func initCommon() {
    eaglContext = EAGLContext(api: .openGLES3)!
    videoPreviewView = GLKView(frame: self.bounds, context: eaglContext)
    context = CIContext(eaglContext: eaglContext, options: nil)
    self.addSubview(videoPreviewView)
    videoPreviewView.bindDrawable()
    videoPreviewView.clipsToBounds = true
    videoPreviewView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
}

func displayCoreImage(_ ciImage: CIImage) {
    let sourceExtent = ciImage.extent
    let sourceAspect = sourceExtent.size.width / sourceExtent.size.height
    let videoPreviewWidth = CGFloat(videoPreviewView.drawableWidth)
    let videoPreviewHeight = CGFloat(videoPreviewView.drawableHeight)
    let previewAspect = videoPreviewWidth / videoPreviewHeight

    // we want to maintain the aspect ratio of the screen size, so we clip the video image
    var drawRect = sourceExtent
    if sourceAspect > previewAspect {
        // use full height of the video image, and center crop the width
        drawRect.origin.x = drawRect.origin.x + (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
        drawRect.size.width = drawRect.size.height * previewAspect
    } else {
        // use full width of the video image, and center crop the height
        drawRect.origin.y = drawRect.origin.y + (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
        drawRect.size.height = drawRect.size.width / previewAspect
    }

    var videoRect = CGRect(x: 0, y: 0, width: videoPreviewWidth, height: videoPreviewHeight)
    if sourceAspect < previewAspect {
        // use full height of the video image, and center crop the width
        videoRect.origin.x += (videoRect.size.width - videoRect.size.height * sourceAspect) / 2.0
        videoRect.size.width = videoRect.size.height * sourceAspect
    } else {
        // use full width of the video image, and center crop the height
        videoRect.origin.y += (videoRect.size.height - videoRect.size.width / sourceAspect) / 2.0
        videoRect.size.height = videoRect.size.width / sourceAspect
    }

    videoPreviewView.bindDrawable()
    if eaglContext != EAGLContext.current() {
        EAGLContext.setCurrent(eaglContext)
    }

    // clear eagl view to black
    glClearColor(0, 0, 0, 1)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

    context.draw(ciImage, in: videoRect, from: sourceExtent)
    videoPreviewView.display()
}
I really want to find out where the bottleneck is in the Metal code. Is Metal not capable of displaying 640x360 UIImages 20 times per second?
EDIT: Setting the colorPixelFormat of the MTKView to rgba16Float solves the delay issue, but the reproduced colors are not accurate, so it seems like a color space conversion issue with Core Image. But how does GLKView render so quickly without any delay when MTKView doesn't?
EDIT 2: Setting the colorPixelFormat of the MTKView to bgra10_xr mostly solves the delay issue. But the problem is we cannot use the CIRenderDestination API with this pixel format.
I'm still wondering how GLKView/CIContext renders the images so quickly without any delay, while with MTKView we need to set colorPixelFormat to bgra10_xr to get acceptable performance. And setting bgra10_xr on an iPad mini 2 causes a crash:
-[MTLRenderPipelineDescriptorInternal validateWithDevice:], line 2590: error 'pixelFormat, for color render target(0), is not a valid MTLPixelFormat.
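For reference, the two configurations mentioned in the edits amount to settings like the following inside the MTKView subclass (a sketch of the settings described above, not a recommendation):
// From EDIT: half-float format; fast, but colors were reported as inaccurate.
colorPixelFormat = .rgba16Float
// From EDIT 2: extended-range 10-bit BGRA; fast, but not usable with
// CIRenderDestination, and it crashes on older GPUs such as the iPad mini 2.
colorPixelFormat = .bgra10_xr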

Take a screenshot with background image - iOS

I want to take a screenshot that contains both the text and the image. However, when I take a screenshot, I only see the text but not the image.
I think the problem is that clearContainerView only contains the text but not the image. I can't put the image inside of clearContainerView because I want the image to stretch across the entire screen... and I want the text centered between the title and tab bar (as shown with the green square above).
My code and pictures are below:
This is my current layout in Storyboard:
This is what I want a screenshot of:
This is the screenshot that I get:
This is my code:
@IBOutlet weak var clearContainerView: UIView!

@IBAction func takeScreenshotTapped(_ sender: UIButton) {
    let screenshot = clearContainerView.screenshot()
    print(screenshot)
}

extension UIView {
    func screenshot() -> UIImage {
        let image = UIGraphicsImageRenderer(size: bounds.size).image { _ in
            drawHierarchy(in: CGRect(origin: .zero, size: bounds.size), afterScreenUpdates: true)
        }
        return image
    }
}
Any suggestions on how to do this?
You can use the following method on your controller's view to get a snapshot view of the portion that covers clearContainerView. Then you can use that view object and take a screenshot of it.
resizableSnapshotView(from:afterScreenUpdates:withCapInsets:)
You have to pass the rect, which is your clearContainerView's frame. You can pass zero insets in case you don't want any stretchable content. It returns a view object which will contain your image view portion plus your complete clearContainerView. Then you can use the returned view and take its screenshot.
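A minimal sketch of that approach (assuming the clearContainerView outlet from the question):
// Snapshot the region of the controller's view covered by clearContainerView,
// including whatever is drawn behind it (such as the background image).
if let snapshotView = view.resizableSnapshotView(from: clearContainerView.frame,
                                                 afterScreenUpdates: true,
                                                 withCapInsets: .zero) {
    // snapshotView contains everything visible in that rect; add it to a view
    // hierarchy (or render it with a helper like screenshot() above) to get a UIImage.
    view.addSubview(snapshotView)
}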
I tried with the following.
My original view.
The screenshot
Use this extension.
//USAGE
let image = self.view.snapshot(of:self.<your view>.frame)
Here "your view" should be the base view from the hierarchy or your can simply use
let image = self.view.snapshot(of:self.view.frame)
Extension
// UIView screenshot
extension UIView {

    /// Create snapshot
    ///
    /// - parameter rect: The `CGRect` of the portion of the view to return. If `nil` (or omitted),
    ///                   return snapshot of the whole view.
    ///
    /// - returns: Returns `UIImage` of the specified portion of the view.
    func snapshot(of rect: CGRect? = nil) -> UIImage? {
        // snapshot entire view
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        drawHierarchy(in: bounds, afterScreenUpdates: true)
        let wholeImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // if no `rect` provided, return image of whole view
        guard let image = wholeImage, let rect = rect else { return wholeImage }

        // otherwise, grab specified `rect` of image
        let scale = image.scale
        let scaledRect = CGRect(x: rect.origin.x * scale, y: rect.origin.y * scale, width: rect.size.width * scale, height: rect.size.height * scale)
        guard let cgImage = image.cgImage?.cropping(to: scaledRect) else { return nil }
        return UIImage(cgImage: cgImage, scale: scale, orientation: .up)
    }
}
Main Code:
let frame = containerView.frame
let x: CGFloat = 0
let y = frame.minY.pointsToPixel()
let width = frame.width.pointsToPixel()
let height = frame.height.pointsToPixel()
let rect = CGRect(x: x, y: y, width: width, height: height)
let image = cropImage(image: view.screenshot(), toRect: rect)

extension UIView {
    func screenshot() -> UIImage {
        let image = UIGraphicsImageRenderer(size: bounds.size).image { _ in
            drawHierarchy(in: CGRect(origin: .zero, size: bounds.size), afterScreenUpdates: true)
        }
        return image
    }
}

public extension CGFloat {
    func pointsToPixel() -> CGFloat {
        return self * UIScreen.main.scale
    }
}
output:
After Screenshot:
What I've done: take a screenshot of the whole view with your method and then crop the image by converting CGPoints to pixels.
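The cropImage(image:toRect:) helper isn't shown in this answer; a plausible implementation, assuming the rect is already expressed in pixels as above, could be:
// Hypothetical helper: crop a UIImage to a rect given in pixels.
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage? {
    guard let cgImage = image.cgImage?.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}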
You can use the code given below to capture the screenshot. It will capture the whole window, not just a particular view. But take care of one thing: if you don't want your "Title", "Button" and "Tab Bar" in the screenshot, then you need to hide them before UIGraphicsBeginImageContextWithOptions and show them again after UIGraphicsEndImageContext.
func takeScreenshot() -> UIImage? {
    var screenshotImage: UIImage?
    let layer = UIApplication.shared.keyWindow!.layer
    let scale = UIScreen.main.scale

    // Hide your title label, button and tab bar here
    UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    layer.render(in: context)
    screenshotImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // Unhide your title label, button and tab bar here

    return screenshotImage
}

@IBAction func takeScreenshotTapped(_ sender: UIButton) {
    let screenshot = takeScreenshot()
    print(screenshot)
}
Everything is fine. I have made a demo for your expected output.
You just need to change your view hierarchy like this:
As per your description, the image is outside of clearContainerView.
Code
class VC: UIViewController {

    @IBOutlet weak var mainVw: UIView!
    @IBOutlet weak var clearContainerView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func takeImage(_ sender: Any) {
        if let VC_1 = self.storyboard?.instantiateViewController(withIdentifier: "VC1") as? VC1 {
            VC_1.img = mainVw.screenshot()
            self.present(VC_1, animated: false, completion: nil)
        }
    }
}
Output :
Try this!
Helper Function
struct UIGraphicsDrawImageHelper {

    static func drawImage(from image: UIImageView) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(image.bounds.size, image.isOpaque, 0.0)
        image.drawHierarchy(in: image.bounds, afterScreenUpdates: false)
        let renderImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return renderImage
    }
}
calling it
guard let image = UIGraphicsDrawImageHelper.drawImage(from: qrcodeImageView) else { return }
The image view (the clear container view here) is the view whose contents you want to snapshot; you can change the parameter to a UIView or whatever view you need.
Hope this helps.

Where do I properly initialize my MTLCommandBuffer?

After debugging using the GPU Capture button, a warning is displayed "Your application created a MTLBuffer object during GPU work. Create buffers at load time for best performance."
The only thing related to a MTLBuffer in my code is the creation of a MTLCommandBuffer every time draw is called:
override func draw(_ rect: CGRect) {
    let commandBuffer = commandQueue.makeCommandBuffer()

    guard var image = image,
          let targetTexture: MTLTexture = currentDrawable?.texture else {
        return
    }

    let customDrawableSize: CGSize = drawableSize
    let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)

    let originX = image.extent.origin.x
    let originY = image.extent.origin.y
    let scaleX = customDrawableSize.width / image.extent.width
    let scaleY = customDrawableSize.height / image.extent.height
    let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

    image = image
        .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
        .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

    ciContext.render(image,
                     to: targetTexture,
                     commandBuffer: commandBuffer,
                     bounds: bounds,
                     colorSpace: colorSpace)

    commandBuffer?.present(currentDrawable!)
    commandBuffer?.commit()
}
My first thought was to move that to a different scope, where I can define my command buffer as a variable, and then make it equal commandQueue.makeCommandBuffer() when the frame is initialized. This immediately crashes the application.
I'm not sure how to initialize this properly without a warning or crash. The MTLCommandQueue is a lazy var.
Here are the changes that makes it crash:
class MetalImageView: MTKView {

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!
    var commandBuffer: MTLCommandBuffer?

    lazy var commandQueue: MTLCommandQueue = {
        [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    ...

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30

        commandBuffer = commandQueue.makeCommandBuffer()
    }
Then I of course remove the definition of commandBuffer in my draw function.
This warning is related to MTLBuffers, not MTLCommandBuffers.
You should certainly not proactively create a command buffer in your initializer. Create command buffers precisely when you're about to encode GPU work (i.e. as you were doing initially, in your draw method).
As for the diagnostic message, it's probably the case that Core Image is creating a temporary buffer on your behalf when rendering your image. There isn't much you can do about this, but depending on the size of the buffer and the frequency of drawing, it probably isn't a big deal.
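In other words, keep the long-lived objects (device, command queue, CIContext) as properties and request a fresh command buffer inside draw, right before encoding. A minimal sketch of that pattern (assuming the image, ciContext and colorSpace properties from the question):
override func draw(_ rect: CGRect) {
    // Create the command buffer here, immediately before encoding GPU work.
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let drawable = currentDrawable,
          let image = image else { return }

    ciContext.render(image,
                     to: drawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: CGRect(origin: .zero, size: drawableSize),
                     colorSpace: colorSpace)

    commandBuffer.present(drawable)
    commandBuffer.commit()
}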

Metal View (MTKView) Drawing Size Issue

Here I have an MTKView running a simple CIFilter live on the camera feed. This works fine.
Issue
On the selfie cameras of older devices, such as the iPhone 5 and iPad Air, the feed gets drawn in a smaller area. UPDATE: I found out that the CMSampleBuffer fed to the MTKView is smaller when this happens. I guess the texture in each update needs to be scaled up?
import UIKit
import MetalPerformanceShaders
import MetalKit
import AVFoundation

final class MetalObject: NSObject, MTKViewDelegate {

    private var metalBufferView: MTKView?
    private var metalDevice = MTLCreateSystemDefaultDevice()
    private var metalCommandQueue: MTLCommandQueue!
    private var metalSourceTexture: MTLTexture?
    private var context: CIContext?
    private var filter: CIFilter?

    init(with frame: CGRect, filterType: Int, scaledUp: Bool) {
        super.init()

        self.metalCommandQueue = self.metalDevice!.makeCommandQueue()

        self.metalBufferView = MTKView(frame: frame, device: self.metalDevice)
        self.metalBufferView!.framebufferOnly = false
        self.metalBufferView!.isPaused = true
        self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale
        self.metalBufferView!.delegate = self

        self.context = CIContext()
    }

    final func update(sampleBuffer: CMSampleBuffer) {
        var textureCache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.metalDevice!, nil, &textureCache)

        var cameraTexture: CVMetalTexture?

        guard
            let cameraTextureCache = textureCache,
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
        }

        let cameraTextureWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let cameraTextureHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)

        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                  cameraTextureCache,
                                                  pixelBuffer,
                                                  nil,
                                                  MTLPixelFormat.bgra8Unorm,
                                                  cameraTextureWidth,
                                                  cameraTextureHeight,
                                                  0,
                                                  &cameraTexture)

        if let cameraTexture = cameraTexture,
           let metalTexture = CVMetalTextureGetTexture(cameraTexture) {
            self.metalSourceTexture = metalTexture
            self.metalBufferView!.draw()
        }
    }

    //MARK: - Metal View Delegate

    final func draw(in view: MTKView) {
        guard let currentDrawable = self.metalBufferView!.currentDrawable,
              let sourceTexture = self.metalSourceTexture
        else { return }

        let commandBuffer = self.metalCommandQueue!.makeCommandBuffer()

        var inputImage = CIImage(mtlTexture: sourceTexture)!.applyingOrientation(self.orientationNumber)
        if self.showFilter {
            self.filter!.setValue(inputImage, forKey: kCIInputImageKey)
            inputImage = filter!.outputImage!
        }

        self.context!.render(inputImage, to: currentDrawable.texture, commandBuffer: commandBuffer, bounds: inputImage.extent, colorSpace: self.colorSpace!)

        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }

    final func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    }
}
Observations
Only happens on selfie cameras of older devices
Selfie cameras on newer devices are fine
When the issue occurs, new content gets drawn in a smaller area (gravitating towards the top left), with old content from the back camera still remaining outside of the new content.
Constraints and the sizing/placement of Metal View is fine.
Setting self.metalBufferView!.contentScaleFactor = UIScreen.main.nativeScale solves the weird scaling issue on Plus devices.
It looks like the resolution of the front (selfie) camera on older devices is lower, so you'll need to scale the video up if you want it to use the full width or height. Since you're already using CIContext and Metal, you can simply instruct the rendering call to draw the image to whatever rectangle you like.
In your draw method, you execute
self.context!.render(inputImage,
to: currentDrawable.texture,
commandBuffer: commandBuffer,
bounds: inputImage.extent,
colorSpace: self.colorSpace!)
The bounds argument is the destination rectangle in which the image will be rendered. Currently, you are using the image extent, which means the image will not be scaled.
To scale the video up, use the display rectangle instead. You can simply use your metalBufferView.bounds since this will be the size of your display view. You'll end up with
self.context!.render(inputImage,
to: currentDrawable.texture,
commandBuffer: commandBuffer,
bounds: self.metalBufferView.bounds,
colorSpace: self.colorSpace!)
If the image and the view are different aspect ratios (width/height is the aspect ratio), then you'll have to compute the correct size such that the image's aspect ratio is preserved. To do this, you'll end up with code like this:
var dest = self.metalBufferView.bounds
let imageSize = inputImage.extent.size
let viewSize = dest.size

let imageAspect = imageSize.width / imageSize.height
let viewAspect = viewSize.width / viewSize.height

if imageAspect > viewAspect {
    // the image is wider than the view, adjust height
    dest.size.height = (1 / imageAspect) * dest.size.width
} else {
    // the image is taller than the view, adjust the width
    dest.size.width = imageAspect * dest.size.height
    // center the tall image
    dest.origin.x = (viewSize.width - dest.size.width) / 2
}
Hope this is useful, please let me know if anything doesn't work or clarification would be helpful.
