Render Pass Descriptor in Metal - iOS

I am new to Metal. How many MTLRenderPassDescriptor objects can we create in a single application? Is there any limit?
I want to use three MTLRenderPassDescriptor objects. Is that possible? With the first, I need to draw quads according to the finger position, like offscreen rendering; it produces an output texture.
I want to pass that output texture to a second render pass descriptor to process its color, which is also similar to offscreen rendering. This produces a new output texture.
Then I want to present the new output texture to the screen. Is that possible? When I create three render pass descriptors, only two are available.
The first offscreen render pass works fine:
let renderPass = MTLRenderPassDescriptor()
renderPass.colorAttachments[0].texture = ssTexture
renderPass.colorAttachments[0].loadAction = .clear
renderPass.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0)
renderPass.colorAttachments[0].storeAction = .store

let commandBuffer = commandQueue.makeCommandBuffer()
var commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPass)
for scene in scenesTool {
    scene.render(commandEncoder: commandEncoder!)
}
commandEncoder?.endEncoding()
This second render pass descriptor is not available in Debug either:
let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssCurrentTexture
renderPassWC.colorAttachments[0].loadAction = .clear
renderPassWC.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0)
renderPassWC.colorAttachments[0].storeAction = .store

let commandBufferWC = commandQueue.makeCommandBuffer()
let commandEncoderWC = commandBufferWC?.makeRenderCommandEncoder(descriptor: renderPassWC)
for scene in scenesWCTool {
    print(" msbksadbdksabfkasbd")
    scene.render(commandEncoder: commandEncoderWC!)
}
commandEncoderWC?.endEncoding()
The third render pass descriptor works fine:
let descriptor = view.currentRenderPassDescriptor
commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: descriptor!)
for x in canvasTemporaryScenes {
    x.updateCanvas(texture: ssTexture!)
    x.render(commandEncoder: commandEncoder!)
}
commandEncoder?.endEncoding()

guard let drawable = view.currentDrawable else { return }
commandBuffer?.present(drawable)
commandBuffer?.commit()
commandBuffer?.waitUntilCompleted()

MTLRenderPassDescriptor objects are cheap to create, unlike some other kinds of objects in Metal (e.g. pipeline state objects). You are expected to create descriptors any time you need them. You don't need to keep them for a long period of time. Just recreate them each time you are going to create a render command encoder. For example, creating multiple render pass descriptors per frame is perfectly normal. There's no specific limit on the number you can create.
I have no idea what you mean by "When I create three renderpass descriptors only two is available" and "This renderpass descriptor is not available in Debug". What do you mean by "available" in these sentences? Explain exactly what results you want and expect and what's actually happening that's different than that.
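For reference, here is a minimal sketch of the three-pass setup, reusing the names from the question (processedTexture is an assumed intermediate texture, not from the original code). It encodes all three passes into a single command buffer and commits once; note that a pass's output only becomes observable after the command buffer that contains it has been committed:

```swift
// Sketch only: three render passes in one command buffer.
guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }

// Pass 1: draw quads offscreen into ssTexture.
let pass1 = MTLRenderPassDescriptor()
pass1.colorAttachments[0].texture = ssTexture
pass1.colorAttachments[0].loadAction = .clear
pass1.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
pass1.colorAttachments[0].storeAction = .store
if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass1) {
    for scene in scenesTool { scene.render(commandEncoder: encoder) }
    encoder.endEncoding()
}

// Pass 2: color-process ssTexture into a second offscreen texture.
let pass2 = MTLRenderPassDescriptor()
pass2.colorAttachments[0].texture = processedTexture // its shader samples ssTexture
pass2.colorAttachments[0].loadAction = .clear
pass2.colorAttachments[0].storeAction = .store
if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass2) {
    for scene in scenesWCTool { scene.render(commandEncoder: encoder) }
    encoder.endEncoding()
}

// Pass 3: draw the processed texture to the drawable.
if let pass3 = view.currentRenderPassDescriptor,
   let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass3),
   let drawable = view.currentDrawable {
    for x in canvasTemporaryScenes {
        x.updateCanvas(texture: processedTexture)
        x.render(commandEncoder: encoder)
    }
    encoder.endEncoding()
    commandBuffer.present(drawable)
}
commandBuffer.commit()
```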

Related

3D viewer for iOS using MetalKit and Swift - Depth doesn’t work

I'm using Metal with Swift to build a 3D viewer for iOS, and I have some issues making depth work. For now, I can draw and render a single shape correctly in 3D (like a simple square plane (4 triangles, 2 per face) or a tetrahedron (4 triangles)).
However, when I try to draw 2 shapes together, the depth between these two shapes doesn't work. For example, a plane is placed at Z = 0, behind a tetra placed at Z > 0. If I look at this scene from the back (camera somewhere at Z < 0), it's OK. But when I look at it from the front (camera somewhere at Z > 0), it doesn't work: the plane is drawn in front of the tetra even though it is placed behind it.
I think the plane is always drawn on screen before the tetra (no matter the camera position) because drawPrimitives is called for the plane before it is called for the tetra. However, I was expecting the depth and stencil settings to handle that properly.
I don't know whether depth isn't working because the depth texture, stencil state, and so on are not correctly set, or because each shape is drawn in a different drawPrimitives call.
In other words, do I have to draw all shapes in a single drawPrimitives call to make depth work? The point of these multiple drawPrimitives calls is to handle a different primitive type per shape (triangle, line, ...).
This is how I set up the depth stencil state, the depth texture, and the render pipeline:
init() {
    // some miscellaneous initialisation …
    // …
    // all MTL stuff:
    commandQueue = device.makeCommandQueue()

    // Stencil descriptor
    let depthStencilDescriptor = MTLDepthStencilDescriptor()
    depthStencilDescriptor.depthCompareFunction = .less
    depthStencilDescriptor.isDepthWriteEnabled = true
    depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)!

    // Library and pipeline descriptor & state
    let library = try! device.makeLibrary(source: shaders, options: nil)
    // Our vertex function name
    let vertexFunction = library.makeFunction(name: "basic_vertex_function")
    // Our fragment function name
    let fragmentFunction = library.makeFunction(name: "basic_fragment_function")
    // Create basic descriptor
    let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
    // Attach the pixel format that is the same as the MetalView's
    renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
    renderPipelineDescriptor.stencilAttachmentPixelFormat = .depth32Float_stencil8
    //renderPipelineDescriptor.stencilAttachmentPixelFormat = .stencil8
    // Attach the shader functions
    renderPipelineDescriptor.vertexFunction = vertexFunction
    renderPipelineDescriptor.fragmentFunction = fragmentFunction
    // Try to create the render pipeline state
    do {
        renderPipelineState = try device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
    } catch {
        print(error.localizedDescription)
    }

    // Depth texture
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: 576, height: 723, mipmapped: false)
    desc.storageMode = .private
    desc.usage = .pixelFormatView
    depthTexture = device.makeTexture(descriptor: desc)!

    // Uniforms buffer
    modelMatrix = Matrix4()
    modelMatrix.multiplyLeft(worldMatrix)
    uniformBuffer = device.makeBuffer(length: MemoryLayout<Float>.stride * 16 * 2, options: [])
    let bufferPointer = uniformBuffer.contents()
    memcpy(bufferPointer, &modelMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
    memcpy(bufferPointer + MemoryLayout<Float>.stride * 16, &projectionMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
}
And the draw function :
func draw(in view: MTKView) {
    // Create render pass descriptor
    guard let drawable = view.currentDrawable,
          let renderPassDescriptor = view.currentRenderPassDescriptor else {
        return
    }
    renderPassDescriptor.depthAttachment.texture = depthTexture
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    //renderPassDescriptor.depthAttachment.loadAction = .load
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.storeAction = .store

    // Create a command buffer from the commandQueue
    let commandBuffer = commandQueue.makeCommandBuffer()
    let commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
    commandEncoder?.setRenderPipelineState(renderPipelineState)
    commandEncoder?.setFrontFacing(.counterClockwise)
    commandEncoder?.setCullMode(.back)
    commandEncoder?.setDepthStencilState(depthStencilState)

    // Draw all obj in objects
    // objects = array of Object; each object describes the vertices and primitive type of a shape
    // objects[0] = plane, objects[1] = tetra
    for obj in objects {
        createVertexBuffers(device: view.device!, vertices: obj.vertices)
        commandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        commandEncoder?.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        commandEncoder?.drawPrimitives(type: obj.primitive, vertexStart: 0, vertexCount: obj.vertices.count)
    }
    commandEncoder?.endEncoding()
    commandBuffer?.present(drawable)
    commandBuffer?.commit()
}
Does anyone have an idea of what is wrong or missing?
Any advice is welcome!
Edited 09/23/2022: Code updated
A few things off the top of my head:
First
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float_stencil8, width: 576, height: 723, mipmapped: false)
Second
renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
Notice the pixelFormat should be the same in both places, and since you seem to be using the stencil test as well, depth32Float_stencil8 will be perfect.
Third
Another thing you seem to be missing is clearing the depth texture before every render pass.
You should set the load action of the depth attachment to .clear, like this:
renderPassDescriptor.depthAttachment.loadAction = .clear
Fourth (subjective to your use case)
If none of the above works, you might need to discard fragments with alpha = 0 in your fragment function by calling discard_fragment() when the color you are returning has alpha 0.
Also note for future:
Ideally you want the depth texture to be fresh and empty when each new frame starts rendering (first draw call of a render pass), and then reuse it for subsequent draw calls in the same render pass by setting load action .load and store action .store.
For example, assuming you have 3 draw calls in one frame, say drawing a triangle, a rectangle, and a sphere, your depth attachment setup should look like this:
Frame 1 starts:
    First draw (triangle):   loadAction: .clear, storeAction: .store
    Second draw (rectangle): loadAction: .load,  storeAction: .store
    Third draw (sphere):     loadAction: .load,  storeAction: .store / .dontCare
Frame 2 starts (notice you clear the depth buffer for the first draw call of the new frame):
    First draw (triangle):   loadAction: .clear, storeAction: .store
    Second draw (rectangle): loadAction: .load,  storeAction: .store
    Third draw (sphere):     loadAction: .load,  storeAction: .store / .dontCare
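That per-pass setup can be sketched in Swift like this (the helper and its parameter names are assumptions for illustration; depthTexture is shared by all passes in the frame):

```swift
// Sketch: build a render pass descriptor whose depth attachment is cleared only
// on the first pass of a frame and reused (.load) by every later pass.
func makePassDescriptor(colorTexture: MTLTexture,
                        depthTexture: MTLTexture,
                        isFirstPassOfFrame: Bool) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = colorTexture
    pass.depthAttachment.texture = depthTexture
    pass.depthAttachment.clearDepth = 1.0
    // Clear depth at the start of the frame; subsequent passes load existing depth.
    pass.depthAttachment.loadAction = isFirstPassOfFrame ? .clear : .load
    pass.depthAttachment.storeAction = .store
    return pass
}
```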
Your depth texture's pixel format is not correct; try changing it to MTLPixelFormatDepth32Float or MTLPixelFormatDepth32Float_Stencil8.

MTKView frequently displaying scrambled MTLTextures

I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing below as a keyframe:
During playback, sometimes the drawing will display exactly as intended, but sometimes, it will display like this:
Note the distorted portion of the picture. (The undistorted portion constitutes a static background image that's not part of the keyframe in question)
I describe below how I create individual MTLTextures from the MTKView's currentDrawable. Because of color depth issues I won't go into, the process may seem a little roundabout.
1. I first get a CGImage of the subsection of the screen that constitutes a keyframe.
2. I use that CGImage to create an MTLTexture tied to the MTKView's device.
3. I store that MTLTexture in an mtlTextureStructure that holds the MTLTexture and the keyframe's bounding box (which I'll need later).
4. Lastly, I append it to an array of mtlTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I get it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MetalTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architecture design point, I'm overlooking something that is corrupting my data stored in the keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap out arrays of keyframes to arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
p.s. adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String: Any]
    let bboxStrokeScaledFlippedY = CGRect(x: bbox.origin.x * self.viewContentScaleFactor,
                                          y: (self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor,
                                          width: bbox.width * self.viewContentScaleFactor,
                                          height: bbox.height * self.viewContentScaleFactor)
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!
        .oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage, from: bboxStrokeScaledFlippedY,
                                              format: CIFormat.RGBA8, colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox:copyMode:)
CGImageToMTLTexture:
func CGImageToMTLTexture(cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    //   let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to do additional processing which messes with the resulting texture/colorspace.
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = .shaderRead
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }

    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w = \(width) | h = \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: rowBytes)
    return stampTexture
} // end of func CGImageToMTLTexture(cgImage:)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
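Concretely, in the question's CGImageToMTLTexture, the copy computes bytesPerRow as width * 4, but CGImage rows may be padded. A sketch of the fix (same names as the question's code):

```swift
// Use the CGImage's actual row stride instead of assuming width * bytesPerPixel,
// since CGImage rows can be padded for alignment.
let dstData: CFData = (cgImage.dataProvider!.data)!
let pixelData = CFDataGetBytePtr(dstData)
let region = MTLRegionMake2D(0, 0, cgImage.width, cgImage.height)
stampTexture.replace(region: region,
                     mipmapLevel: 0,
                     withBytes: pixelData!,
                     bytesPerRow: cgImage.bytesPerRow) // not width * 4
```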

Called CAMetalLayer nextDrawable earlier than needed

The GPU frame capture is warning:
your application called CAMetalLayer nextDrawable earlier than needed
I am calling present(drawable) after all my other GPU calls have been made, directly before committing the command buffer:
guard let commandBuffer = commandQueue.makeCommandBuffer(),
      let computeBuffer = commandQueue.makeCommandBuffer(),
      let descriptor = view.currentRenderPassDescriptor else { return }
...
// First run the compute kernel
guard let computeEncoder = computeBuffer.makeComputeCommandEncoder() else { return }
computeEncoder.setComputePipelineState(computePipelineState)
dispatchThreads(particleCount: particleCount)
computeEncoder.endEncoding()
computeBuffer.commit()
// I need to wait here because I need values from a computed buffer first,
// which is also why I am not just using a single pipeline descriptor
computeBuffer.waitUntilCompleted()

// Next, render the computed particles with a vertex shader to a texture
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor0)!
renderEncoder.setRenderPipelineState(renderPipelineState)
...
renderEncoder.endEncoding()

// Draw the texture (created above) using a vertex shader:
let renderTexture = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
renderTexture?.setRenderPipelineState(renderCanvasPipelineState)
...
renderTexture?.endEncoding()

// Finally, present the drawable and commit the command buffer:
guard let drawable = view.currentDrawable else { return }
commandBuffer.present(drawable)
commandBuffer.commit()
I don't see how it is possible to request the currentDrawable any later. Am I doing something wrong or sub-optimal?
I started looking into the issue because most frames (basically doing the same thing), the wait for the current drawable is about 3-10ms. Occasionally the wait is 35-45ms.
I see a number of recommendations to use presentDrawable instead of present, but that does not seem to be an option with Swift.
Is there any way to request the current drawable when it is needed and make the warning go away?
You're calling view.currentRenderPassDescriptor at the top of the code you listed. The render pass descriptor has to have a reference to the drawable's texture as a color attachment, which implies obtaining the drawable and asking for its texture. So that line requests the drawable.
Don't obtain the render pass descriptor until just before you create the render command encoder that uses it (let renderTexture = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)).
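A sketch of that reordering, keeping the names from the question (the offscreen pass uses its own descriptor0, so only the final on-screen pass ever needs the drawable):

```swift
guard let commandBuffer = commandQueue.makeCommandBuffer(),
      let computeBuffer = commandQueue.makeCommandBuffer() else { return }

// Compute pass and offscreen render pass exactly as before;
// neither of them touches the drawable.
// ...

// Only now, just before creating the on-screen encoder, ask for the
// render pass descriptor (which implicitly obtains the drawable):
guard let descriptor = view.currentRenderPassDescriptor,
      let renderTexture = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor),
      let drawable = view.currentDrawable else { return }
renderTexture.setRenderPipelineState(renderCanvasPipelineState)
// ... encode the on-screen draw ...
renderTexture.endEncoding()

commandBuffer.present(drawable)
commandBuffer.commit()
```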

How to load an existing texture using MTLRenderPassDescriptor

I have an image in the asset catalog, which I convert to an MTLTexture. I want to pass this texture to the shader function, draw into it, and add a smudge feature to the passed texture using shader functions.
Currently I am passing the texture in an MTLRenderPassDescriptor like below:
let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .load
renderPassWC.colorAttachments[0].storeAction = .store
When I edit the texture in the shader function, e.g. moving a pixel of ssTexture to an adjacent pixel, the movement doesn't stop, because the operations I perform in the shader functions keep being applied to the accumulated texture every draw cycle.
So rather than a loadAction of .load, I thought .clear might be an option, but the texture I passed becomes clear when I change the code as below:
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .clear
renderPassWC.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
Is there any possible way to pass the image texture while still clearing the accumulated edits?
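One possible approach (a sketch, not an accepted answer; imageTexture is an assumed name for a pristine texture made from the asset image) is to keep the original image untouched and restore ssTexture from it with a blit at the start of each frame, then use .load in the render pass so the shader always starts from a fresh copy:

```swift
// Sketch: restore ssTexture from the untouched image texture each frame,
// so shader edits do not accumulate across draw cycles.
guard let commandBuffer = commandQueue.makeCommandBuffer(),
      let blit = commandBuffer.makeBlitCommandEncoder() else { return }
blit.copy(from: imageTexture, // pristine texture created from the asset image
          sourceSlice: 0, sourceLevel: 0,
          sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
          sourceSize: MTLSize(width: imageTexture.width, height: imageTexture.height, depth: 1),
          to: ssTexture,
          destinationSlice: 0, destinationLevel: 0,
          destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
blit.endEncoding()

let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .load // keeps the freshly blitted image
renderPassWC.colorAttachments[0].storeAction = .store
```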

Off Screen Rendering

In offscreen rendering in Metal:
let textureDescriptors = MTLTextureDescriptor()
textureDescriptors.textureType = MTLTextureType.type2D
let screenRatio = UIScreen.main.scale
textureDescriptors.width = Int((DrawingManager.shared.size?.width)!) * Int(screenRatio)
textureDescriptors.height = Int((DrawingManager.shared.size?.height)!) * Int(screenRatio)
textureDescriptors.pixelFormat = .bgra8Unorm
textureDescriptors.storageMode = .shared
textureDescriptors.usage = [.renderTarget, .shaderRead]
ssTexture = device.makeTexture(descriptor: textureDescriptors)
ssTexture.label = "ssTexture"
Here the texture starts out cleared. Is it possible to load an image into the texture, and is it possible to render that image texture in the draw method?
let renderPass = MTLRenderPassDescriptor()
renderPass.colorAttachments[0].loadAction = .clear
renderPass.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
renderPass.colorAttachments[0].texture = ssTexture
renderPass.colorAttachments[0].storeAction = .store
I'm not sure what you're asking.
There's MTKTextureLoader for creating textures initialized with the contents of an image.
You can use the replace(region:...) methods of MTLTexture to fill all or part of a texture with image data.
You can use MTLBlitCommandEncoder to copy data from one texture to (all or part of) another or from a buffer to a texture.
You can draw to a texture or write to it from a compute shader.
It's a general-purpose API. There are many ways to do the things you seem to be asking. What have you tried? In what way did those attempts fail to achieve what you want?
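For instance, the MTKTextureLoader route might look like this (a sketch; the asset name "brushImage" is hypothetical, and the usage flags match the question's render-target texture):

```swift
import MetalKit

// Sketch: load an image from the asset catalog into a texture that can be
// both rendered to and sampled, similar to ssTexture in the question.
let loader = MTKTextureLoader(device: device)
let options: [MTKTextureLoader.Option: Any] = [
    .textureUsage: NSNumber(value: MTLTextureUsage([.renderTarget, .shaderRead]).rawValue),
    .textureStorageMode: NSNumber(value: MTLStorageMode.shared.rawValue),
    .SRGB: false
]
ssTexture = try? loader.newTexture(name: "brushImage",
                                   scaleFactor: UIScreen.main.scale,
                                   bundle: .main,
                                   options: options)
```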
