How to load an existing texture using MTLRenderPassDescriptor - iOS

I have an image in my asset catalog, which I convert to an MTLTexture. I want to pass this texture to the shader function, keep appending to it, and add a smudge feature to the passed texture using shader functions.
Currently I am passing the texture through an MTLRenderPassDescriptor like below:
let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .load
renderPassWC.colorAttachments[0].storeAction = .store
When I edit the texture in the shader function, for example moving a pixel of ssTexture to an adjacent pixel, the movement never stops, because the operations I do in the shader functions keep being applied to the accumulated texture every draw cycle.
So rather than a loadAction of .load, I feel .clear would be an option, but the texture I pass becomes blank when I change the code as below:
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .clear
renderPassWC.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
Is there any possible way to pass the image texture while using .clear?

Related

3D viewer for iOS using MetalKit and Swift - Depth doesn’t work

I'm using Metal with Swift to build a 3D viewer for iOS and I have some issues making the depth work. So far, I can draw and render a single shape correctly in 3D (like a simple square plane (4 triangles, 2 for each face) or a tetrahedron (4 triangles)).
However, when I try to draw 2 shapes together, the depth between these two shapes doesn't work. For example, a plane is placed at Z = 0 behind a tetra which is placed at Z > 0. If I look at this scene from the back (camera placed somewhere at Z < 0), it's OK. But when I look at this scene from the front (camera placed somewhere at Z > 0), it doesn't work: the plane is drawn on top of the tetra even though it is placed behind it.
I think the plane is always drawn to the screen before the tetra (no matter the position of the camera) because the call to drawPrimitives for the plane comes before the call for the tetra. However, I was thinking that the depth and stencil settings would deal with that properly.
I don't know if the depth isn't working because the depth texture, stencil state and so on are not correctly set, or because each shape is drawn in a different call to drawPrimitives.
In other words, do I have to draw all shapes in the same call to drawPrimitives to make depth work? The idea behind the multiple calls to drawPrimitives is to handle a different primitive type for each shape (triangle, line, …).
This is how I set up the depth stencil state, the depth texture and the render pipeline:
init() {
    // some miscellaneous initialisation …
    // …
    // all MTL stuff :
    commandQueue = device.makeCommandQueue()

    // Stencil descriptor
    let depthStencilDescriptor = MTLDepthStencilDescriptor()
    depthStencilDescriptor.depthCompareFunction = .less
    depthStencilDescriptor.isDepthWriteEnabled = true
    depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)!

    // Library and pipeline descriptor & state
    let library = try! device.makeLibrary(source: shaders, options: nil)
    // Our vertex function name
    let vertexFunction = library.makeFunction(name: "basic_vertex_function")
    // Our fragment function name
    let fragmentFunction = library.makeFunction(name: "basic_fragment_function")
    // Create basic descriptor
    let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
    // Attach the pixel format that is the same as the MetalView
    renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
    renderPipelineDescriptor.stencilAttachmentPixelFormat = .depth32Float_stencil8
    //renderPipelineDescriptor.stencilAttachmentPixelFormat = .stencil8
    // Attach the shader functions
    renderPipelineDescriptor.vertexFunction = vertexFunction
    renderPipelineDescriptor.fragmentFunction = fragmentFunction
    // Try to update the state of the renderPipeline
    do {
        renderPipelineState = try device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
    } catch {
        print(error.localizedDescription)
    }

    // Depth Texture
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: 576, height: 723, mipmapped: false)
    desc.storageMode = .private
    desc.usage = .pixelFormatView
    depthTexture = device.makeTexture(descriptor: desc)!

    // Uniforms buffer
    modelMatrix = Matrix4()
    modelMatrix.multiplyLeft(worldMatrix)
    uniformBuffer = device.makeBuffer(length: MemoryLayout<Float>.stride * 16 * 2, options: [])
    let bufferPointer = uniformBuffer.contents()
    memcpy(bufferPointer, &modelMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
    memcpy(bufferPointer + MemoryLayout<Float>.stride * 16, &projectionMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
}
And the draw function:
func draw(in view: MTKView) {
    // create render pass descriptor
    guard let drawable = view.currentDrawable,
          let renderPassDescriptor = view.currentRenderPassDescriptor else {
        return
    }
    renderPassDescriptor.depthAttachment.texture = depthTexture
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    //renderPassDescriptor.depthAttachment.loadAction = .load
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.storeAction = .store

    // Create a buffer from the commandQueue
    let commandBuffer = commandQueue.makeCommandBuffer()
    let commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
    commandEncoder?.setRenderPipelineState(renderPipelineState)
    commandEncoder?.setFrontFacing(.counterClockwise)
    commandEncoder?.setCullMode(.back)
    commandEncoder?.setDepthStencilState(depthStencilState)

    // Draw all obj in objects
    // objects = array of Object; each object describing vertices and primitive type of a shape
    // objects[0] = Plane, objects[1] = Tetra
    for obj in objects {
        createVertexBuffers(device: view.device!, vertices: obj.vertices)
        commandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        commandEncoder?.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        commandEncoder?.drawPrimitives(type: obj.primitive, vertexStart: 0, vertexCount: obj.vertices.count)
    }

    commandEncoder?.endEncoding()
    commandBuffer?.present(drawable)
    commandBuffer?.commit()
}
Does anyone have an idea of what is wrong or missing?
Any advice is welcome!
Edited 09/23/2022: Code updated
A few things off the top of my head:
First
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float_stencil8, width: 576, height: 723, mipmapped: false)
Second
renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
Notice the pixelFormat should be the same in both places, and since you seem to be using the stencil test as well, depth32Float_stencil8 will be perfect.
Third
Another thing you seem to be missing is clearing the depth texture before every render pass.
So you should set the load action of the depth attachment to .clear, like this:
renderPassDescriptor.depthAttachment.loadAction = .clear
Fourth (subject to your use case)
If none of the above works, you might need to discard fragments with alpha = 0 in your fragment function by calling discard_fragment() when the color you are returning has alpha 0.
Also note for future:
Ideally you want the depth texture to be fresh and empty when every new frame starts rendering (the first draw call of a render pass), and then reuse it for subsequent draw calls in the same render pass by setting the load action to .load and the store action to .store.
For example, assuming you have 3 draw calls in one frame, say drawing a triangle, a rectangle and a sphere, then your depth attachment setup should be like this:
Frame 1 starts:
First draw (triangle): loadAction: .clear, storeAction: .store
Second draw (rectangle): loadAction: .load, storeAction: .store
Third draw (sphere): loadAction: .load, storeAction: .store / .dontCare
Frame 2 starts (notice you clear the depth buffer for the first draw call of the new frame):
First draw (triangle): loadAction: .clear, storeAction: .store
Second draw (rectangle): loadAction: .load, storeAction: .store
Third draw (sphere): loadAction: .load, storeAction: .store / .dontCare
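Putting the pixel-format and clear points together, a corrected depth setup could look roughly like this (an untested sketch; the 576x723 size is taken from your code, and the stencil attachment reuses the same texture because your pipeline declares a stencil format):
// Depth/stencil texture whose format matches the pipeline's depth/stencil attachment formats.
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float_stencil8,
                                                         width: 576,
                                                         height: 723,
                                                         mipmapped: false)
depthDesc.storageMode = .private
depthDesc.usage = .renderTarget          // it is only rendered to, so .renderTarget is enough
depthTexture = device.makeTexture(descriptor: depthDesc)!

// In draw(in:), clear it at the start of every frame and attach it to both
// the depth and the stencil attachment (the pipeline declares both formats).
renderPassDescriptor.depthAttachment.texture = depthTexture
renderPassDescriptor.depthAttachment.clearDepth = 1.0
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .dontCare     // .store only if a later pass reads it
renderPassDescriptor.stencilAttachment.texture = depthTexture
renderPassDescriptor.stencilAttachment.clearStencil = 0
renderPassDescriptor.stencilAttachment.loadAction = .clear
renderPassDescriptor.stencilAttachment.storeAction = .dontCare
(Since you already render through an MTKView, another option is to set the view's depthStencilPixelFormat to .depth32Float_stencil8 and let the view create and attach the depth/stencil texture for you.)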
Your depth texture pixel format is not correct; try changing it to MTLPixelFormatDepth32Float or MTLPixelFormatDepth32Float_Stencil8.

Face texture from ARKit

I am running a face tracking configuration in ARKit with SceneKit. In each frame I can access the camera feed via the snapshot property or the capturedImage buffer. I have also been able to map each face vertex to the image coordinate space and add some UIView helpers (1-point squares) to display all the face vertices on the screen in real time, like this:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceGeometry = node.geometry as? ARSCNFaceGeometry,
          let anchorFace = anchor as? ARFaceAnchor,
          anchorFace.isTracked
    else { return }

    let vertices = anchorFace.geometry.vertices
    for (index, vertex) in vertices.enumerated() {
        let vertex = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let xVertex = CGFloat(vertex.x)
        let yVertex = CGFloat(vertex.y)
        let newPosition = CGPoint(x: xVertex, y: yVertex)
        // Here I update the position of each UIView on the screen with the newly calculated
        // vertex position; I have an array of views that matches the vertex count, which is
        // consistent across sessions.
    }
}
Since the UV coordinates are also constant across sessions, I am trying to draw, for each pixel that lies over the face mesh, its corresponding position in the UV texture, so that after some iterations I can save a person's face texture to a file.
I have come to some theoretical solutions, like creating a CGPath for each triangle and asking, for each pixel, whether it is contained in that triangle. If it is, I would create a triangular image by cropping a rectangle and then applying a triangle mask obtained from the points projected by the triangle vertices into image coordinates. In this fashion I can obtain a triangular image that then has to be mapped onto the corresponding triangle in the UV texture (like skewing it in place). Then, in a 1024x1024 UIView, I would add each triangle image as a UIImageView subview, and finally encode that UIView as a PNG. This sounds like a lot of work, especially the part of matching the cropped triangle with the corresponding triangle in the UV texture.
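To illustrate that idea, the per-triangle test I have in mind would be something like this (an untested sketch; sceneView and node are the same objects used above):
import ARKit
import SceneKit

// Untested sketch: project one face-mesh triangle into screen space and build a CGPath
// for it, so individual pixels can be tested for containment.
func screenSpacePath(for triangle: [SIMD3<Float>], of node: SCNNode, in sceneView: ARSCNView) -> CGPath {
    let projected = triangle.map { vertex -> CGPoint in
        let world = node.convertPosition(SCNVector3(vertex), to: nil)
        let screen = sceneView.projectPoint(world)
        return CGPoint(x: CGFloat(screen.x), y: CGFloat(screen.y))
    }
    let path = CGMutablePath()
    path.addLines(between: projected)
    path.closeSubpath()
    return path
}

// Usage: does a given pixel belong to this triangle?
// let isInside = screenSpacePath(for: tri, of: node, in: sceneView).contains(CGPoint(x: px, y: py))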
In the Apple demo project there is an image that shows what that UV texture looks like; if you edit this image and add some colors, they will show up on the face. But I need the other way around: from what I am seeing in the camera feed, create a texture of your face. In the same demo project there is an example that does exactly what I need, but with a shader, and with no clues on how to extract the texture to a file. The shader code looks like this:
/*
<samplecode>
<abstract>
SceneKit shader (geometry) modifier for texture mapping ARKit camera video onto the face.
</abstract>
</samplecode>
*/
#pragma arguments
float4x4 displayTransform // from ARFrame.displayTransform(for:viewportSize:)
#pragma body
// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;
// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;
// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;
// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;
// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
/**
* MARK: Post-process special effects
*/
Honestly I do not have much experience with shaders, so any help would be appreciated in translating the shader into more Cocoa Touch Swift code. Right now I am not thinking about performance yet, so doing it on the CPU, in a background thread or offline, is fine. In any case I will have to choose the right frames to avoid skewed samples, preferring triangles with good information over ones with barely a few stretched pixels (for example by checking whether the normal of the triangle points toward the camera before sampling it), and maybe add other UI helpers that make the user turn their face so the whole face gets sampled correctly.
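To be concrete, my understanding is that a CPU-side equivalent of that shader would do something like this per vertex (an untested sketch; the frame, viewport size, orientation and anchor transform are assumptions on my side):
import ARKit
import UIKit
import simd

// Untested sketch: map one face-mesh vertex (in the face anchor's model space) to normalized
// image/UV coordinates, mirroring the steps of the geometry modifier above.
func imageSpaceUV(for vertex: SIMD3<Float>,
                  modelMatrix: simd_float4x4,            // e.g. faceAnchor.transform
                  frame: ARFrame,
                  viewportSize: CGSize,
                  orientation: UIInterfaceOrientation) -> CGPoint {
    let camera = frame.camera
    let viewMatrix = camera.viewMatrix(for: orientation)
    let projectionMatrix = camera.projectionMatrix(for: orientation,
                                                   viewportSize: viewportSize,
                                                   zNear: 0.001,
                                                   zFar: 1000)
    // Model -> camera -> clip space, then the perspective divide.
    var clip = projectionMatrix * viewMatrix * modelMatrix * SIMD4<Float>(vertex, 1)
    clip /= clip.w
    // Clip space [-1, 1] -> texture space [0, 1], with the Y flip done in the shader.
    var uv = CGPoint(x: CGFloat(clip.x) * 0.5 + 0.5,
                     y: 1.0 - (CGFloat(clip.y) * 0.5 + 0.5))
    // Apply ARKit's display transform, as the shader does with displayTransform.
    uv = uv.applying(frame.displayTransform(for: orientation, viewportSize: viewportSize))
    return uv
}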
I have already checked this post and this post but cannot get it to work.
This app does exactly what I need, but it does not seem to be using ARKit.
Thanks.

Off Screen Rendering

For off-screen rendering in Metal, I create the texture like this:
let textureDescriptors = MTLTextureDescriptor()
textureDescriptors.textureType = MTLTextureType.type2D
let screenRatio = UIScreen.main.scale
textureDescriptors.width = Int((DrawingManager.shared.size?.width)!) * Int(screenRatio)
textureDescriptors.height = Int((DrawingManager.shared.size?.height)!) * Int(screenRatio)
textureDescriptors.pixelFormat = .bgra8Unorm
textureDescriptors.storageMode = .shared
textureDescriptors.usage = [.renderTarget, .shaderRead]
ssTexture = device.makeTexture(descriptor: textureDescriptors)
ssTexture.label = "ssTexture"
Here the texture is in a clear color. Is it possible to load an image texture, and is it possible to render that image texture in the draw method?
let renderPass = MTLRenderPassDescriptor()
renderPass.colorAttachments[0].loadAction = .clear
renderPass.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
renderPass.colorAttachments[0].texture = ssTexture
renderPass.colorAttachments[0].storeAction = .store
I'm not sure what you're asking.
There's MTKTextureLoader (in the MetalKit framework) for creating textures initialized with the contents of an image.
You can use the replace(region:...) methods of MTLTexture to fill all or part of a texture with image data.
You can use MTLBlitCommandEncoder to copy data from one texture to (all or part of) another or from a buffer to a texture.
You can draw to a texture or write to it from a compute shader.
It's a general-purpose API. There are many ways to do the things you seem to be asking. What have you tried? In what way did those attempts fail to achieve what you want?
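For example, the first two of those could be sketched roughly like this (untested; the asset name and pixelData buffer are placeholders, device and ssTexture are the objects from your code, and error handling is omitted):
import MetalKit
import UIKit

// Option 1: create a texture initialized with an asset-catalog image via MTKTextureLoader.
let loader = MTKTextureLoader(device: device)
let imageTexture = try loader.newTexture(name: "MyImage",           // placeholder asset name
                                         scaleFactor: UIScreen.main.scale,
                                         bundle: nil,
                                         options: nil)

// Option 2: copy raw BGRA8 bytes into an existing texture (e.g. ssTexture) with replace(region:...).
let bytesPerRow = 4 * ssTexture.width
pixelData.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in         // pixelData: Data holding width * height BGRA pixels
    ssTexture.replace(region: MTLRegionMake2D(0, 0, ssTexture.width, ssTexture.height),
                      mipmapLevel: 0,
                      withBytes: raw.baseAddress!,
                      bytesPerRow: bytesPerRow)
}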

How can I get normal shading on a SKSpriteNode with a custom shader?

I've been doing some work in SpriteKit, and I can't seem to get custom shaders and the pseudo 3D lighting effects from a normal texture to work at the same time.
I have a pair of PNG textures, representing a shape with its basic coloring, and a normal map of the same image. If I create an SKSpriteNode, using those textures and add a light to the scene, I see the bumpiness and beveled edges I expect.
cactus = SKSpriteNode(imageNamed: "Saguaro.png")
cactus.normalTexture = SKTexture(imageNamed: "Saguaro_n")
cactus.position = sceneCenter
cactus.lightingBitMask = 1
light = SKLightNode()
light.position = CGPoint.zero
light.lightColor = UIColor.white
light.isEnabled = true
light.categoryBitMask = 1
light.ambientColor = UIColor.white
light.falloff = 0.3
If, however, I add a custom shader, I just get the flat colors of the texture image (code below).
// Assign a shader to the SpriteNode
// Shader loaded from a file with code below
cactus.shader = myShader
// Shader code
void main(void) {
gl_FragColor = SKDefaultShading();
}
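For reference, myShader is created from a file like this (the file name is a placeholder); if the normal map had to be sampled manually inside the shader, I assume it could be handed over as a texture uniform:
import SpriteKit

// Placeholder file name; loading the custom fragment shader and attaching it.
let myShader = SKShader(fileNamed: "CustomShading.fsh")
// Hypothetical uniform, in case the normal map needs to be sampled manually in the shader.
myShader.uniforms = [SKUniform(name: "u_normal_map", texture: SKTexture(imageNamed: "Saguaro_n"))]
cactus.shader = myShader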
Is there something I can do in the shader to use the built-in lighting effects from the normal map? I'm not too familiar with writing custom fragment shaders, so perhaps there's something obvious I'm not doing.

Rendering MTLTexture on MTKView is not keeping aspect ratio

I have a texture that's 1080x1920 pixels, and I'm trying to render it on an MTKView that doesn't have the same aspect ratio (e.g. iPad/iPhone X full screen).
This is how I'm rendering the texture for the MTKView:
private func render(_ texture: MTLTexture, withCommandBuffer commandBuffer: MTLCommandBuffer, device: MTLDevice) {
    guard let currentRenderPassDescriptor = metalView?.currentRenderPassDescriptor,
          let currentDrawable = metalView?.currentDrawable,
          let renderPipelineState = renderPipelineState,
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor) else {
        semaphore.signal()
        return
    }

    encoder.pushDebugGroup("RenderFrame")
    encoder.setRenderPipelineState(renderPipelineState)
    encoder.setFragmentTexture(texture, index: 0)
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    encoder.popDebugGroup()
    encoder.endEncoding()

    // Called after the command buffer is scheduled
    commandBuffer.addScheduledHandler { [weak self] _ in
        guard let strongSelf = self else {
            return
        }
        strongSelf.didRender(texture: texture)
        strongSelf.semaphore.signal()
    }
    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
I want the texture to be rendered like .scaleAspectFill on a UIView. I'm trying to learn Metal, so I'm not sure where I should be looking to do this (the .metal file, the pipeline, the view itself, the encoder, etc.).
Thanks!
Edit: Here is the shader code:
#include <metal_stdlib>
using namespace metal;

typedef struct {
    float4 renderedCoordinate [[position]];
    float2 textureCoordinate;
} TextureMappingVertex;

vertex TextureMappingVertex mapTexture(unsigned int vertex_id [[ vertex_id ]]) {
    float4x4 renderedCoordinates = float4x4(float4( -1.0, -1.0, 0.0, 1.0 ),
                                            float4(  1.0, -1.0, 0.0, 1.0 ),
                                            float4( -1.0,  1.0, 0.0, 1.0 ),
                                            float4(  1.0,  1.0, 0.0, 1.0 ));
    float4x2 textureCoordinates = float4x2(float2( 0.0, 1.0 ),
                                           float2( 1.0, 1.0 ),
                                           float2( 0.0, 0.0 ),
                                           float2( 1.0, 0.0 ));
    TextureMappingVertex outVertex;
    outVertex.renderedCoordinate = renderedCoordinates[vertex_id];
    outVertex.textureCoordinate = textureCoordinates[vertex_id];
    return outVertex;
}

fragment half4 displayTexture(TextureMappingVertex mappingVertex [[ stage_in ]],
                              texture2d<float, access::sample> texture [[ texture(0) ]]) {
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    return half4(texture.sample(s, mappingVertex.textureCoordinate));
}
A few general things to start with when dealing with Metal textures or Metal in general:
You should take into account the difference between points and pixels, refer to the documentation here. The frame property of a UIView subclass (as MTKView is one) always gives you the width and the height of the view in points.
The mapping from points to actual pixels is controlled through the contentScaleFactor option. The MTKView automatically selects a texture with a fitting aspect ratio that matches the actual pixels of your device. For example, the underlying texture of a MTKView on the iPhone X would have a resolution of 2436 x 1125 (the actual display size in pixels). This is documented here: "The MTKView class automatically supports native screen scale. By default, the size of the view’s current drawable is always guaranteed to match the size of the view itself."
As documented here, the .scaleAspectFill option "scale[s] the content to fill the size of the view. Some portion of the content may be clipped to fill the view’s bounds". You want to simulate this behavior.
Rendering with Metal is nothing more than "drawing" to the resolve texture, which is automatically set by the MTKView. However, you still have full control and could do it on your own by manually creating textures and setting them in your renderPassDescriptor. But you don't need to care about this right now. The single thing you should care about is which part of the 1080x1920 texture you want to render where in your resolve texture (which might have a different aspect ratio). We want to fully fill ("scaleAspectFill") the resolve texture, so we leave the renderedCoordinates in your vertex shader as they are. They define a rectangle over the whole resolve texture, which means the fragment shader is called for every single pixel in the resolve texture. In the following, we will simply change the texture coordinates.
Let's define the aspect ratio as ratio = width / height, the resolve texture as r_tex and the texture you want to render as tex.
So assuming your resolve texture does not have the same aspect ratio, there are two possible scenarios:
The aspect ratio of your texture that you want to render is larger than the aspect ratio of your resolve texture (the texture Metal renders to), that means the texture you want to render has a larger width than the resolve texture. In this case we leave the y values of the coordinate as they are. The x values of texture coordinates will be changed:
x_left = 0 + ((tex.width - r_tex.width) / 2.0)
x_right = tex.width - ((tex.width - r_tex.width) / 2.0)
These values must be normalized, because the texture sampler needs coordinates in the range from 0 to 1:
x_left = x_left / tex.width
x_right = x_right / tex.width
We have our new texture coordinates:
topLeft = float2(x_left,0)
topRight = float2(x_right,0)
bottomLeft = float2(x_left,1)
bottomRight = float2(x_right,1)
This will have the effect that nothing of the top or the bottom of your texture will be cut off, but some outer parts at the left and right side will be clipped, i.e. not visible.
The aspect ratio of the texture that you want to render is smaller than the aspect ratio of your resolve texture. The procedure is the same as in the first scenario, but this time we change the y coordinates instead.
This should render your texture so that the resolve texture is completely filled and the aspect ratio of your texture is maintained on the x-axis; maintaining the y-axis works similarly. Additionally, you have to check which side of the texture is larger/smaller and incorporate this into your calculation. This will clip parts of your texture, just as scaleAspectFill does. Be aware that the above solution is untested, but I hope it is helpful. Be sure to visit the Metal Best Practices documentation from time to time; it's very helpful for getting the basic concepts right. Have fun with Metal!
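If you want to compute those coordinates in app code and hand them to the vertex shader (for example through a buffer or setVertexBytes) instead of hard-coding them, an untested Swift sketch of the calculation could look like this:
import CoreGraphics
import simd

// Untested sketch: texture coordinates that crop the source texture so it "aspect-fills"
// the drawable, following the calculation described above.
func aspectFillTextureCoordinates(textureSize: CGSize, drawableSize: CGSize) -> [SIMD2<Float>] {
    let textureRatio = textureSize.width / textureSize.height
    let drawableRatio = drawableSize.width / drawableSize.height

    var xRange: ClosedRange<CGFloat> = 0...1
    var yRange: ClosedRange<CGFloat> = 0...1

    if textureRatio > drawableRatio {
        // Texture is relatively wider than the drawable: clip left and right.
        let visibleFraction = drawableRatio / textureRatio
        let inset = (1 - visibleFraction) / 2
        xRange = inset...(1 - inset)
    } else {
        // Texture is relatively taller than the drawable: clip top and bottom.
        let visibleFraction = textureRatio / drawableRatio
        let inset = (1 - visibleFraction) / 2
        yRange = inset...(1 - inset)
    }

    // Same vertex order as the hard-coded textureCoordinates matrix in the question's shader.
    return [SIMD2(Float(xRange.lowerBound), Float(yRange.upperBound)),
            SIMD2(Float(xRange.upperBound), Float(yRange.upperBound)),
            SIMD2(Float(xRange.lowerBound), Float(yRange.lowerBound)),
            SIMD2(Float(xRange.upperBound), Float(yRange.lowerBound))]
}
These four float2 values would replace the hard-coded textureCoordinates matrix in the vertex shader.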
So your vertex shader pretty directly dictates that the source texture be stretched to the dimensions of the viewport. You are rendering a quad that fills the viewport, because its coordinates are at the extremes ([-1, 1]) of the Normalized Device Coordinate system in the horizontal and vertical directions.
And you are mapping the source texture corner-to-corner over that same range. That's because you specify the extremes of texture coordinate space ([0, 1]) for the texture coordinates.
There are various approaches to achieve what you want. You could pass the vertex coordinates in to the shader via a buffer, instead of hard-coding them. That way, you can compute the appropriate values in app code. You'd compute the desired destination coordinates in the render target, expressed in NDC. So, conceptually, something like left_ndc = (left_pixel / target_width) * 2 - 1, etc.
Alternatively, and probably easier, you can leave the shader as-is and change the viewport for the draw operation to target only the appropriate portion of the render target.
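For the buffer-based approach, the pixel-to-NDC conversion could be sketched like this (untested; destRect is the destination rectangle you computed in render-target pixels):
import CoreGraphics
import simd

// Untested sketch: convert a destination rectangle, given in render-target pixels,
// into an NDC quad using left_ndc = (left_pixel / target_width) * 2 - 1, etc.
func ndcQuad(for destRect: CGRect, targetSize: CGSize) -> [SIMD4<Float>] {
    func toNDCx(_ x: CGFloat) -> Float { Float(x / targetSize.width) * 2 - 1 }
    func toNDCy(_ y: CGFloat) -> Float { 1 - Float(y / targetSize.height) * 2 }   // pixel origin is top-left, NDC +Y is up

    let left = toNDCx(destRect.minX), right = toNDCx(destRect.maxX)
    let top = toNDCy(destRect.minY), bottom = toNDCy(destRect.maxY)

    // Same triangle-strip order as the hard-coded quad in the vertex shader.
    return [SIMD4(left,  bottom, 0, 1),
            SIMD4(right, bottom, 0, 1),
            SIMD4(left,  top,    0, 1),
            SIMD4(right, top,    0, 1)]
}
The resulting positions would then be passed to the vertex shader with setVertexBytes(_:length:index:) or a vertex buffer in place of the hard-coded renderedCoordinates.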
