I have been trying to use texture2d_array for live filters in my Metal application, but I'm not getting the proper result.
I'm creating the texture array like this:
Code: class MetalTextureArray
class MetalTextureArray {
private(set) var arrayTexture: MTLTexture
private var width: Int
private var height: Int
init(_ width: Int, _ height: Int, _ arrayLength: Int, _ device: MTLDevice) {
self.width = width
self.height = height
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type2DArray
textureDescriptor.pixelFormat = .bgra8Unorm
textureDescriptor.width = width
textureDescriptor.height = height
textureDescriptor.arrayLength = arrayLength
arrayTexture = device.makeTexture(descriptor: textureDescriptor)
}
func append(_ texture: MTLTexture) -> Bool {
if let bytes = texture.buffer?.contents() {
let region = MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0), size: MTLSize(width: width, height: height, depth: 1))
arrayTexture.replace(region: region, mipmapLevel: 0, withBytes: bytes, bytesPerRow: texture.bufferBytesPerRow)
return true
}
return false
}
}
I'm encoding this texture array into the render encoder like this:
Code:
let textureArray = MetalTextureArray.init(firstTexture!.width, firstTexture!.height, colorTextures.count, device)
_ = textureArray.append(colorTextures[0].texture)
_ = textureArray.append(colorTextures[1].texture)
_ = textureArray.append(colorTextures[2].texture)
_ = textureArray.append(colorTextures[3].texture)
_ = textureArray.append(colorTextures[4].texture)
renderEncoder.setFragmentTexture(textureArray.arrayTexture, at: 1)
Finally I'm accessing the texture2d_array in the fragment shader like this,
Code:
struct RasterizerData {
float4 clipSpacePosition [[position]];
float2 textureCoordinate;
};
fragment float4 multipleShader(RasterizerData in [[stage_in]],
texture2d<half> colorTexture [[ texture(0) ]],
texture2d_array<half> texture2D [[ texture(1) ]])
{
constexpr sampler textureSampler (mag_filter::linear,
min_filter::linear,
s_address::repeat,
t_address::repeat,
r_address::repeat);
// Sample the texture and return the color to colorSample
half4 colorSample = colorTexture.sample (textureSampler, in.textureCoordinate);
float4 outputColor;
half red = texture2D.sample(textureSampler, in.textureCoordinate, 2).r;
half green = texture2D.sample(textureSampler, in.textureCoordinate, 2).g;
half blue = texture2D.sample(textureSampler, in.textureCoordinate, 2).b;
outputColor = float4(colorSample.r * red, colorSample.g * green, colorSample.b * blue, colorSample.a);
// We return the color of the texture
return outputColor;
}
The textures I'm appending to the texture array are extracted from an ACV curve file, and each is 256 × 1 in size.
In the line half red = texture2D.sample(textureSampler, in.textureCoordinate, 2).r; I passed 2 as the last argument because I assumed it was the index of the texture to be accessed, but I don't know what it actually means.
After doing all this I'm getting a black screen. I have other fragment shaders and all of them work fine; only this one gives a black screen. I think calls such as half blue = texture2D.sample(textureSampler, in.textureCoordinate, 2).b are returning 0 for all of the red, green, and blue values.
Edit 1:
As suggested, I used a blit command encoder to copy the textures, and still no result.
My code goes here.
My MetalTextureArray class has some modifications.
The append method now goes like this:
func append(_ texture: MTLTexture) -> Bool {
self.blitCommandEncoder.copy(from: texture, sourceSlice: 0, sourceLevel: 0, sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0), sourceSize: MTLSize(width: texture.width, height: texture.height, depth: 1), to: self.arrayTexture, destinationSlice: count, destinationLevel: 0, destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
count += 1
return true
}
And I'm appending the textures like this:
let textureArray = MetalTextureArray.init(256, 1, colorTextures.count, device, blitCommandEncoder: blitcommandEncoder)
for (index, filter) in colorTextures!.enumerated() {
_ = textureArray.append(colorTextures[index].texture)
}
renderEncoder.setFragmentTexture(textureArray.arrayTexture, at: 1)
My shader code goes like this
fragment float4 multipleShader(RasterizerData in [[stage_in]],
texture2d<half> colorTexture [[ texture(0) ]],
texture2d_array<float> textureArray [[texture(1)]],
const device struct SliceDataSource &sliceData [[ buffer(2) ]])
{
constexpr sampler textureSampler (mag_filter::linear,
min_filter::linear);
// Sample the texture and return the color to colorSample
half4 colorSample = colorTexture.sample (textureSampler, in.textureCoordinate);
float4 outputColor = float4(0,0,0,0);
int slice = 1;
float red = textureArray.sample(textureSampler, in.textureCoordinate, slice).r;
float blue = textureArray.sample(textureSampler, in.textureCoordinate, slice).b;
float green = textureArray.sample(textureSampler, in.textureCoordinate, slice).g;
outputColor = float4(colorSample.r * red, colorSample.g * green, colorSample.b * blue, colorSample.a);
// We return the color of the texture
return outputColor;
}
I still get a black screen.
In the call textureArray.sample(textureSampler, in.textureCoordinate, slice), what is the third parameter? I assumed it was an index and passed an arbitrary index to fetch one of the textures. Is that correct?
Edit 2:
I was finally able to implement the suggestion: by calling endEncoding before the next encoder is created, I got a correct result with the ACV negative filter.
Can someone advise?
Thanks.
There's a difference between an array of textures and an array texture. It sounds to me like you just want an array of textures. In that case, you should not use texture2d_array; you should use array<texture2d<half>, 5> texture_array [[texture(1)]].
In the app code, you can either use multiple calls to setFragmentTexture() to assign textures to sequential indexes or you can use setFragmentTextures() to assign a bunch of textures to a range of indexes all at once.
In the shader code, you'd use array subscripting syntax to refer to the individual textures in the array (e.g. texture_array[2]).
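For illustration, here is a minimal sketch of that approach. It reuses the RasterizerData struct and the colorTextures array from the question and assumes five filter textures; treat it as a sketch rather than a drop-in implementation.
Shader side:
fragment float4 multipleShader(RasterizerData in [[stage_in]],
                               texture2d<half> colorTexture [[ texture(0) ]],
                               array<texture2d<half>, 5> filterTextures [[ texture(1) ]])
{
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
    // Subscript the array to pick one of the five filter textures.
    half4 filterSample = filterTextures[2].sample(textureSampler, in.textureCoordinate);
    return float4(colorSample * filterSample);
}
App side, binding each texture to sequential indexes starting at 1:
for (i, colorTexture) in colorTextures.enumerated() {
    renderEncoder.setFragmentTexture(colorTexture.texture, at: 1 + i)
}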
If you really do want to use an array texture, then you probably need to change your append() method. First, if the texture argument was not created with the makeTexture(descriptor:offset:bytesPerRow:) method of MTLBuffer, then texture.buffer will always be nil. That is, textures only have associated buffers if they were originally created from a buffer. To copy from texture to texture, you should use a blit command encoder and its copy(from:sourceSlice:sourceLevel:sourceOrigin:sourceSize:to:destinationSlice:destinationLevel:destinationOrigin:) method.
Second, if you want to replace the texture data for a specific slice (array element) of the array texture, you need to pass that slice index as an argument to the replace() method. For that, you'd need to use the replace(region:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:) method, not the replace(region:mipmapLevel:withBytes:bytesPerRow:) method you're currently using. Your current code is just replacing the first slice over and over (assuming the source textures really are associated with a buffer).
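If you go that route and the pixel data is available in CPU memory, a slice-aware append could look roughly like this (a sketch only; it assumes the count property from the edited class tracks the next free slice, and that bytes/bytesPerRow come from wherever you decode the ACV data):
func append(bytes: UnsafeRawPointer, bytesPerRow: Int) -> Bool {
    let region = MTLRegionMake2D(0, 0, width, height)
    // Write the image into the next free slice of the array texture.
    arrayTexture.replace(region: region,
                         mipmapLevel: 0,
                         slice: count,
                         withBytes: bytes,
                         bytesPerRow: bytesPerRow,
                         bytesPerImage: bytesPerRow * height)
    count += 1
    return true
}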
Related
I'm using ARKit with scene reconstruction and need to render the captured scene geometry in Metal. I can access this geometry through ARMeshAnchor.geometry, which is an ARMeshGeometry. However, when I try rendering it using my custom Metal rendering pipeline, nothing renders and I get a bunch of errors like this:
Invalid device load executing vertex function "myVertex" encoder: "0", draw: 3, at offset 4688
Here's a highly simplified version of my code that I've been using for debugging:
struct InOut {
float4 position [[position]];
};
vertex InOut myVertex(
uint vid [[vertex_id]],
const constant float3* vertexArray [[buffer(0)]])
{
InOut out;
const float3 in = vertexArray[vid];
out.position = float4(in, 1);
return out;
}
fragment float4 myFragment(InOut in [[stage_in]]) {
return float4(1, 0, 0, 1);
}
// Setup MTLRenderPipelineDescriptor
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm
pipelineDescriptor.sampleCount = 1
pipelineDescriptor.vertexFunction = defaultLibrary.makeFunction(name: "myVertex")
pipelineDescriptor.fragmentFunction = defaultLibrary.makeFunction(name: "myFragment")
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD3<Float>>.stride
pipelineDescriptor.vertexDescriptor = vertexDescriptor
func render(arMesh: ARMeshAnchor) {
// snip... — Setting up command buffers
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
renderEncoder.setViewport(MTLViewport(originX: 0, originY: 0, width: 512, height: 512, znear: 0, zfar: 1))
renderEncoder.setRenderPipelineState(renderPipelineState)
let vertices = arMesh.geometry.vertices
let faces = arMesh.geometry.faces
renderEncoder.setVertexBuffer(vertices.buffer, offset: 0, index: 0)
renderEncoder.drawIndexedPrimitives(type: .triangle, indexCount: faces.count * 3, indexType: .uint32, indexBuffer: faces.buffer, indexBufferOffset: 0)
renderEncoder.endEncoding()
// snip... — Clean up
}
I can't figure out why this code causes the Metal exception. It stops throwing if I cap vid in the shader to around 100, but it still doesn't draw anything properly.
What's going on here? Why does my code produce an error and how can I fix it?
The problem here is the alignment/packing of the vertex data.
Each vertex in ARMeshGeometry.vertices consists of 3 float components, for a total size of 12 bytes. The code above assumes that this means the data is a float3 / SIMD3<Float>; however, the vertices from ARMeshGeometry are actually tightly packed. So while SIMD3<Float> has a stride of 16, the actual vertex data has a stride of 12.
The larger stride of float3 (16) versus the actual stride of the elements in the vertices buffer (12) results in Metal trying to access data off the end of the vertices buffer, producing the error.
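You can check the mismatch quickly in Swift (just a verification snippet, not part of the fix):
import simd
// SIMD3<Float> is padded to 16 bytes per element...
print(MemoryLayout<SIMD3<Float>>.stride)   // 16
// ...while the packed vertex data is 3 * 4 = 12 bytes per vertex.
print(MemoryLayout<Float>.stride * 3)      // 12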
There are two important fixes here:
Make sure the MTLVertexDescriptor has the correct stride:
let exampleMeshGeometry: ARMeshGeometry = ...
vertexDescriptor.layouts[0].stride = exampleMeshGeometry.vertices.stride
In the shader, use packed_float3 instead of float3
vertex InOut myVertex(
uint vid [[vertex_id]],
const constant packed_float3* vertexArray [[buffer(0)]])
{
...
}
After fixing these issues, you should be able to properly transfer ARMeshGeometry buffers to your Metal shader.
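Putting the two fixes together, a rough sketch of the corrected encoding path might look like this (the surrounding setup is assumed from the question; the property names come from ARGeometrySource and ARGeometryElement):
let vertices = arMesh.geometry.vertices   // ARGeometrySource with packed float3 data
let faces = arMesh.geometry.faces         // ARGeometryElement with 32-bit indices
vertexDescriptor.layouts[0].stride = vertices.stride   // 12 here, not MemoryLayout<SIMD3<Float>>.stride
renderEncoder.setVertexBuffer(vertices.buffer, offset: vertices.offset, index: 0)
renderEncoder.drawIndexedPrimitives(type: .triangle,
                                    indexCount: faces.count * faces.indexCountPerPrimitive,
                                    indexType: .uint32,
                                    indexBuffer: faces.buffer,
                                    indexBufferOffset: 0)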
I am rendering point fragments from a buffer with this call:
renderEncoder.drawPrimitives(type: .point,
vertexStart: 0,
vertexCount: 1,
instanceCount: emitter.currentParticles)
emitter.currentParticles is the total number of particles in the buffer. Is it possible to somehow draw only a portion of the buffer?
I have tried this, but it draws the first half of the buffer:
renderEncoder.drawPrimitives(type: .point,
vertexStart: emitter.currentParticles / 2,
vertexCount: 1,
instanceCount: emitter.currentParticles / 2)
In fact, it seems that vertexStart has no effect. I can seemingly set it to any value, and it still starts at 0.
Edit:
Pipeline configuration:
private func buildParticlePipelineStates() {
do {
guard let library = Renderer.device.makeDefaultLibrary(),
let function = library.makeFunction(name: "compute") else { return }
// particle update pipeline state
particlesPipelineState = try Renderer.device.makeComputePipelineState(function: function)
// render pipeline state
let vertexFunction = library.makeFunction(name: "vertex_particle")
let fragmentFunction = library.makeFunction(name: "fragment_particle")
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = vertexFunction
descriptor.fragmentFunction = fragmentFunction
descriptor.colorAttachments[0].pixelFormat = renderPixelFormat
descriptor.colorAttachments[0].isBlendingEnabled = true
descriptor.colorAttachments[0].rgbBlendOperation = .add
descriptor.colorAttachments[0].alphaBlendOperation = .add
descriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
descriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
renderPipelineState = try Renderer.device.makeRenderPipelineState(descriptor: descriptor)
} catch let error {
print(error.localizedDescription)
}
}
Vertex shader:
struct VertexOut {
float4 position [[ position ]];
float point_size [[ point_size ]];
float4 color;
};
vertex VertexOut vertex_particle(constant float2 &size [[buffer(0)]],
device Particle *particles [[buffer(1)]],
constant float2 &emitterPosition [[ buffer(2) ]],
uint instance [[instance_id]])
{
VertexOut out;
float2 position = particles[instance].position + emitterPosition;
out.position.xy = position.xy / size * 2.0 - 1.0;
out.position.z = 0;
out.position.w = 1;
out.point_size = particles[instance].size * particles[instance].scale;
out.color = particles[instance].color;
return out;
}
fragment float4 fragment_particle(VertexOut in [[ stage_in ]],
texture2d<float> particleTexture [[ texture(0) ]],
float2 point [[ point_coord ]]) {
constexpr sampler default_sampler;
float4 color = particleTexture.sample(default_sampler, point);
if ((color.a < 0.01) || (in.color.a < 0.01)) {
discard_fragment();
}
color = float4(in.color.xyz, 0.2 * color.a * in.color.a);
return color;
}
You're not using a vertex descriptor nor a [[stage_in]] parameter for your vertex shader. So, Metal is not fetching/gathering vertex data for you. You're just indexing into a buffer that's laid out with your vertex data already in the format you want. That's fine. See my answer here for more info about vertex descriptor.
Given that, though, the vertexStart parameter of the draw call only affects the value of a parameter to your vertex function with the [[vertex_id]] attribute. Your vertex function doesn't have such a parameter, let alone use it. Instead it uses an [[instance_id]] parameter to index into the vertex data buffer. You can read another of my answers here for a quick primer on draw calls and how they result in calls to your vertex shader function.
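As a quick illustration of that mapping (a generic sketch, not tied to this particular app):
// Each draw below runs the vertex function vertexCount * instanceCount times.
// vertexStart sets the first [[vertex_id]]; baseInstance sets the first [[instance_id]].
renderEncoder.drawPrimitives(type: .point, vertexStart: 100, vertexCount: 50)
// -> [[vertex_id]] runs 100...149, [[instance_id]] is 0 for every invocation.
renderEncoder.drawPrimitives(type: .point, vertexStart: 0, vertexCount: 1,
                             instanceCount: 50, baseInstance: 100)
// -> [[vertex_id]] is 0 for every invocation, [[instance_id]] runs 100...149.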
There are a couple of ways you could change things to draw only half of the points. You could change the draw call you use to:
renderEncoder.drawPrimitives(type: .point,
vertexStart: 0,
vertexCount: 1,
instanceCount: emitter.currentParticles / 2,
baseInstance: emitter.currentParticles / 2)
This would not require any changes to the vertex shader. It just changes the range of values fed to the instance parameter. However, since it doesn't seem like this is really a case of instancing, I recommend that you change the shader and your draw call. For the shader, rename the instance parameter to vertex or vid and change its attribute from [[instance_id]] to [[vertex_id]]. Then, change the draw call to:
renderEncoder.drawPrimitives(type: .point,
vertexStart: emitter.currentParticles / 2,
vertexCount: emitter.currentParticles / 2)
In truth, they basically behave the same way in this case, but the latter better represents what you're doing (and the draw call is simpler, which is nice).
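For reference, the renamed vertex function would look something like this (only the parameter and its uses change; everything else stays the same as in the question):
vertex VertexOut vertex_particle(constant float2 &size [[buffer(0)]],
                                 device Particle *particles [[buffer(1)]],
                                 constant float2 &emitterPosition [[ buffer(2) ]],
                                 uint vid [[vertex_id]])
{
    VertexOut out;
    float2 position = particles[vid].position + emitterPosition;
    out.position.xy = position.xy / size * 2.0 - 1.0;
    out.position.z = 0;
    out.position.w = 1;
    out.point_size = particles[vid].size * particles[vid].scale;
    out.color = particles[vid].color;
    return out;
}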
So, I'm trying to render a cube with a 3D texture. The texture contains 3 slices of 3 different colors: red, green, and blue. Each slice consists of 4 pixels of the same color. That works fine: https://imgur.com/a/a5oXi
private func makeTexture() {
let width = 2
let height = 2
let depth = 3
let byteSize = 4
let bytesPerRow = byteSize * width
let bytesPerImage = bytesPerRow * height
let blue: UInt32 = 0x000000FF
let green: UInt32 = 0xFF00FF00
let red: UInt32 = 0x00FF0000
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.pixelFormat = .bgra8Unorm
textureDescriptor.width = width
textureDescriptor.height = height
textureDescriptor.depth = depth
textureDescriptor.textureType = .type3D
let image = UnsafeMutableRawPointer.allocate(bytes: width*height*depth*byteSize, alignedTo: 1)
image.storeBytes(of: red, toByteOffset: 0, as: UInt32.self)
image.storeBytes(of: red, toByteOffset: 4, as: UInt32.self)
image.storeBytes(of: red, toByteOffset: 8, as: UInt32.self)
image.storeBytes(of: red, toByteOffset: 12, as: UInt32.self)
image.storeBytes(of: green, toByteOffset: 16, as: UInt32.self)
image.storeBytes(of: green, toByteOffset: 20, as: UInt32.self)
image.storeBytes(of: green, toByteOffset: 24, as: UInt32.self)
image.storeBytes(of: green, toByteOffset: 28, as: UInt32.self)
image.storeBytes(of: blue, toByteOffset: 32, as: UInt32.self)
image.storeBytes(of: blue, toByteOffset: 36, as: UInt32.self)
image.storeBytes(of: blue, toByteOffset: 40, as: UInt32.self)
image.storeBytes(of: blue, toByteOffset: 44, as: UInt32.self)
texture = device?.makeTexture(descriptor: textureDescriptor)
let region = MTLRegionMake3D(0, 0, 0, width, height, depth)
texture?.replace(region: region,
mipmapLevel: 0,
slice: 0,
withBytes: image,
bytesPerRow: bytesPerRow,
bytesPerImage: bytesPerImage)
}
fragment shader code:
struct VertexOut{
float4 position [[position]];
float3 textureCoordinate;
};
fragment half4 basic_fragment(VertexOut in [[stage_in]],
texture3d<half> colorTexture [[ texture(0) ]]) {
constexpr sampler textureSampler (mag_filter::nearest,
min_filter::nearest);
// Sample the texture to obtain a color
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
// We return the color of the texture
return colorSample;
}
Then I want to make the red and blue slices transparent, so I set their alphas to 0:
let blue: UInt32 = 0x000000FF
let green: UInt32 = 0xFF00FF00
let red: UInt32 = 0x00FF0000
The fragment shader now contains:
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
if (colorSample.a <= 0)
discard_fragment();
I expect to see a green cross-section, but I just see the green edges: https://imgur.com/a/yGQdQ
There is nothing inside the cube, and I don't even see the back edges because cullMode is set to .front.
Can I draw and see the texture within the object, so I can see its insides? I haven't found a way so far. When I set the texture type to 3D, shouldn't it calculate the color for each pixel of the 3D object, not just the edges? Maybe it does, but doesn't display it?
No, 3D textures don't get you that.
There is no 3D object, there are just triangles (which you provide). Those are 2D objects, although they are positioned within 3D space. Metal does not try to figure out what solid object you're trying to draw by extrapolating from the triangles you tell it to draw. No common 3D-drawing API does that. It's not generally possible. Among other things, keep in mind that you don't even have to give all of the triangles to Metal together; they could be split across draw calls.
There is no "inside" to any object, as far as Metal knows, just points, lines, and triangles. If you want to render the inside of an object, you have to model that. For a slice of a cube, you have to compute the new surfaces of the "exposed inside" and pass triangles to Metal to draw that.
A 3D texture is just a texture that you can sample with a 3D coordinate. Note that the decision about what fragments to draw has already been made before your fragment shader is called and Metal doesn't even know you'll be using a 3D texture at the time it makes those decisions.
I'm drawing 2 different vertex buffers in Metal, one with a texture (ignoring the vertex color data) and the other without a texture (drawing purely the vertex color data):
let commandBuffer = self.commandQueue.makeCommandBuffer()
let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: rpd)
//render first buffer with texture
commandEncoder.setRenderPipelineState(self.rps)
commandEncoder.setVertexBuffer(self.vertexBuffer1, offset: 0, at: 0)
commandEncoder.setVertexBuffer(self.uniformBuffer, offset: 0, at: 1)
commandEncoder.setFragmentTexture(self.texture, at: 0)
commandEncoder.setFragmentSamplerState(self.samplerState, at: 0)
commandEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: count1, instanceCount: 1)
//render second buffer without texture
commandEncoder.setRenderPipelineState(self.rps)
commandEncoder.setVertexBuffer(self.vertexBuffer2, offset: 0, at: 0)
commandEncoder.setVertexBuffer(self.uniformBuffer, offset: 0, at: 1)
commandEncoder.setFragmentTexture(nil, at: 0)
commandEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: count2, instanceCount: 1)
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
The shader looks like this:
#include <metal_stdlib>
using namespace metal;
struct Vertex {
float4 position [[position]];
float4 color;
float4 texCoord;
};
struct Uniforms {
float4x4 modelMatrix;
};
vertex Vertex vertex_func(constant Vertex *vertices [[buffer(0)]],
constant Uniforms &uniforms [[buffer(1)]],
uint vid [[vertex_id]])
{
float4x4 matrix = uniforms.modelMatrix;
Vertex in = vertices[vid];
Vertex out;
out.position = matrix * float4(in.position);
out.color = in.color;
out.texCoord = in.texCoord;
return out;
}
fragment float4 fragment_func(Vertex vert [[stage_in]],
texture2d<float> tex2D [[ texture(0) ]],
sampler sampler2D [[ sampler(0) ]]) {
if (vert.color[0] == 0 && vert.color[1] == 0 && vert.color[2] == 0) {
//texture color
return tex2D.sample(sampler2D, float2(vert.texCoord[0],vert.texCoord[1]));
}
else {
//color color
return vert.color;
}
}
Is there a better way of doing this? Any vertex that I want to use the texture for, I set its color to black; the shader checks whether the color is black and, if so, uses the texture, otherwise it uses the vertex color.
Also, is there a way to blend the colored polys and textured polys together using a multiply function if they overlap on the screen? It seems like MTLBlendOperation only has options for add/subtract/min/max, no multiply?
Another way to do this would be to have two different fragment functions, one that renders textured fragments and another one that deals with coloured vertices.
First you would need to create two different MTLRenderPipelineState objects at load time:
let desc = MTLRenderPipelineDescriptor()
/* ...load all other settings in the descriptor... */
// Load the common vertex function.
desc.vertexFunction = library.makeFunction(name: "vertex_func")
// First create the one associated to the textured fragment function.
desc.fragmentFunction = library.makeFunction(name: "fragment_func_textured")
let texturedRPS = try! device.makeRenderPipelineState(descriptor: desc)
// Then modify the descriptor to create the state associated with the untextured fragment function.
desc.fragmentFunction = library.makeFunction(name: "fragment_func_untextured")
let untexturedRPS = try! device.makeRenderPipelineState(descriptor: desc)
Then at render time, before encoding the draw commands of a textured object you set the textured state, and before encoding the draw commands of an untextured object you switch to the untextured one. Like this:
//render first buffer with texture
commandEncoder.setRenderPipelineState(texturedRPS) // Set the textured state
commandEncoder.setVertexBuffer(self.vertexBuffer1, offset: 0, at: 0)
commandEncoder.setVertexBuffer(self.uniformBuffer, offset: 0, at: 1)
commandEncoder.setFragmentTexture(self.texture, at: 0)
commandEncoder.setFragmentSamplerState(self.samplerState, at: 0)
commandEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: count1, instanceCount: 1)
//render second buffer without texture
commandEncoder.setRenderPipelineState(untexturedRPS) // Set the untextured state
commandEncoder.setVertexBuffer(self.vertexBuffer2, offset: 0, at: 0)
commandEncoder.setVertexBuffer(self.uniformBuffer, offset: 0, at: 1)
// No need to set the fragment texture as we don't need it in the fragment function.
// commandEncoder.setFragmentTexture(nil, at: 0)
commandEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: count2, instanceCount: 1)
No change is required for the vertex function, but you need to split the fragment function into two:
fragment float4 fragment_func_textured(Vertex vert [[stage_in]],
texture2d<float> tex2D [[ texture(0) ]],
sampler sampler2D [[ sampler(0) ]]) {
//texture color
return tex2D.sample(sampler2D, float2(vert.texCoord[0],vert.texCoord[1]));
}
fragment float4 fragment_func_untextured(Vertex vert [[stage_in]]) {
//color color
return vert.color;
}
You could even go ahead and have two different vertex functions that output two different vertex structures in order to save a few bytes. In fact the textured fragment function only needs the texCoord field and not the color, while the untextured function is the other way around.
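For illustration, the two slimmer output structs could look roughly like this (the struct names here are made up for the example):
struct TexturedVertexOut {
    float4 position [[position]];
    float4 texCoord;   // the textured path only needs texture coordinates
};
struct ColoredVertexOut {
    float4 position [[position]];
    float4 color;      // the untextured path only needs the vertex color
};
Each pipeline state would then pair the matching vertex and fragment functions, e.g. the vertex function returning TexturedVertexOut with fragment_func_textured.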
EDIT: You can use this fragment function to use both the texture color and the vertex color:
fragment float4 fragment_func_blended(Vertex vert [[stage_in]],
texture2d<float> tex2D [[ texture(0) ]],
sampler sampler2D [[ sampler(0) ]]) {
// texture color
float4 texture_sample = tex2D.sample(sampler2D, float2(vert.texCoord[0],vert.texCoord[1]));
// vertex color
float4 vertex_sample = vert.color;
// Blend the two together
float4 blended = texture_sample * vertex_sample;
// Or use another blending operation.
// float4 blended = mix(texture_sample, vertex_sample, mix_factor);
// Where mix_factor is in the range 0.0 to 1.0.
return blended;
}
I'm attempting to write an augmented reality app using SceneKit, and I need accurate 3D points from the current rendered frame, given a 2D pixel and depth, using SCNSceneRenderer's unprojectPoint method. This requires an x, y, and z, where x and y are a pixel coordinate and z is normally a value read from the depth buffer at that frame.
The SCNView's delegate has this method to render the depth frame:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
renderDepthFrame()
}
func renderDepthFrame(){
// setup our viewport
let viewport: CGRect = CGRect(x: 0, y: 0, width: Double(SettingsModel.model.width), height: Double(SettingsModel.model.height))
// depth pass descriptor
let renderPassDescriptor = MTLRenderPassDescriptor()
let depthDescriptor: MTLTextureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: MTLPixelFormat.depth32Float, width: Int(SettingsModel.model.width), height: Int(SettingsModel.model.height), mipmapped: false)
let depthTex = scnView!.device!.makeTexture(descriptor: depthDescriptor)
depthTex.label = "Depth Texture"
renderPassDescriptor.depthAttachment.texture = depthTex
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.clearDepth = 1.0
renderPassDescriptor.depthAttachment.storeAction = .store
let commandBuffer = commandQueue.makeCommandBuffer()
scnRenderer.scene = scene
scnRenderer.pointOfView = scnView.pointOfView!
scnRenderer!.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
// setup our depth buffer so the cpu can access it
let depthImageBuffer: MTLBuffer = scnView!.device!.makeBuffer(length: depthTex.width * depthTex.height*4, options: .storageModeShared)
depthImageBuffer.label = "Depth Buffer"
let blitCommandEncoder: MTLBlitCommandEncoder = commandBuffer.makeBlitCommandEncoder()
blitCommandEncoder.copy(from: renderPassDescriptor.depthAttachment.texture!, sourceSlice: 0, sourceLevel: 0, sourceOrigin: MTLOriginMake(0, 0, 0), sourceSize: MTLSizeMake(Int(SettingsModel.model.width), Int(SettingsModel.model.height), 1), to: depthImageBuffer, destinationOffset: 0, destinationBytesPerRow: 4*Int(SettingsModel.model.width), destinationBytesPerImage: 4*Int(SettingsModel.model.width)*Int(SettingsModel.model.height))
blitCommandEncoder.endEncoding()
commandBuffer.addCompletedHandler({(buffer) -> Void in
let rawPointer: UnsafeMutableRawPointer = UnsafeMutableRawPointer(mutating: depthImageBuffer.contents())
let typedPointer: UnsafeMutablePointer<Float> = rawPointer.assumingMemoryBound(to: Float.self)
self.currentMap = Array(UnsafeBufferPointer(start: typedPointer, count: Int(SettingsModel.model.width)*Int(SettingsModel.model.height)))
})
commandBuffer.commit()
}
This works. I get depth values between 0 and 1. The problem is that I can't use them in the unprojectPoint because they don't appear to be scaled the same as the initial pass, despite using the same SCNScene and SCNCamera.
My questions:
Is there any way to get the depth values directly from SceneKit SCNView's main pass without having to do an extra pass with a separate SCNRenderer?
Why don't the depth values in my pass match the values I get from doing a hit test and then unprojecting? The depth values from my pass go from 0.78 to 0.94. The depth values in the hit test range from 0.89 to 0.97, which curiously enough, matches the OpenGL depth values of the scene when I rendered it in Python.
My hunch is this is a difference in viewports and SceneKit is doing something to scale the depth values from -1 to 1 just like OpenGL.
EDIT: And in case you're wondering, I can't use the hitTest method directly. It's too slow for what I'm trying to achieve.
SceneKit uses a log-scale reverse Z-buffer by default. You can disable the reverse Z-buffer quite easily (scnView.usesReverseZ = false), but mapping the log depth to the [0, 1] range with a linear distribution requires access to the depth buffer, the far clipping distance, and the near clipping distance. Here is the process for taking a non-reverse-Z log depth to a linearly distributed depth in the range [0, 1]:
float delogDepth(float depth, float nearClip, float farClip) {
// The depth buffer is in Log Format. Probably a 24bit float depth with 8 for stencil.
// https://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
// We need to undo the log format.
// https://stackoverflow.com/questions/18182139/logarithmic-depth-buffer-linearization
float logTuneConstant = nearClip / farClip;
float deloggedDepth = ((pow(logTuneConstant * farClip + 1.0, depth) - 1.0) / logTuneConstant) / farClip;
// The values are going to hover around a particular range. Linearize that distribution.
// This part may not be necessary, depending on how you will use the depth.
// http://glampert.com/2014/01-26/visualizing-the-depth-buffer/
float negativeOneOneDepth = deloggedDepth * 2.0 - 1.0;
float zeroOneDepth = ((2.0 * nearClip) / (farClip + nearClip - negativeOneOneDepth * (farClip - nearClip)));
return zeroOneDepth;
}
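A hedged sketch of how this might feed into unprojectPoint on the Swift side (it assumes delogDepth has been ported to Swift with the same signature, that currentMap and SettingsModel come from the code in the question, and that the clipping values are taken from the view's current camera):
let camera = scnView.pointOfView!.camera!
// Read the raw depth sample for the pixel of interest back from the blit result.
let rawDepth = currentMap[y * Int(SettingsModel.model.width) + x]
let linearDepth = delogDepth(rawDepth, Float(camera.zNear), Float(camera.zFar))
// Unproject the 2D pixel plus linearized depth back into scene space.
let worldPoint = scnView.unprojectPoint(SCNVector3Make(Float(x), Float(y), linearDepth))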
As a workaround, I switched to OpenGL ES and read the depth buffer by adding a fragment shader (via SCNShadable) that packs the depth value into the RGBA renderbuffer.
See here for more info: http://concord-consortium.github.io/lab/experiments/webgl-gpgpu/webgl.html
I understand this is a valid approach as it is used in shadow mapping quite often on OpenGL ES devices and WebGL, but this feels hacky to me and I shouldn't have to do this. I would still be interested in another answer if someone can figure out Metal's viewport transformation.