I add the stock smoke SCNParticleSystem to a node in a scene in an ARSCNView. This works as expected.
I use ARSCNView.snapshot() to capture the image and process it before drawing it in the MTKView draw() method.
I then call removeAllParticleSystems() on the main thread on the node with the particle system, and remove the node from the scene with removeFromParentNode().
I then add other nodes to the scene and eventually the app crashes with the error validateFunctionArguments:3469: failed assertion 'Vertex Function(uberparticle_vert): missing buffer binding at index 19 for vertexBuffer.1[0].'
The All Exceptions breakpoint often stops on the ARSCNView.snapshot() call.
Why is this crashing?
What does the error mean?
How should I be adding and removing particle systems from scenes in an ARSCNView?
UPDATE:
I hooked up the MTKView subclass I use from here to a working ARKit demo with a particle system, and the same Vertex Function crash occurs.
Does that mean the issue is with the passthrough vertex shader function?
Why are the particle systems treated differently?
Below are the shader functions. Thanks.
#include <metal_stdlib>
using namespace metal;

// Vertex input/output structure for passing results from vertex shader to fragment shader
struct VertexIO
{
    float4 position [[position]];
    float2 textureCoord [[user(texturecoord)]];
};

// Vertex shader for a textured quad
vertex VertexIO vertexPassThrough(device packed_float4 *pPosition [[ buffer(0) ]],
                                  device packed_float2 *pTexCoords [[ buffer(1) ]],
                                  uint vid [[ vertex_id ]])
{
    VertexIO outVertex;
    outVertex.position = pPosition[vid];
    outVertex.textureCoord = pTexCoords[vid];
    return outVertex;
}

// Fragment shader for a textured quad
fragment half4 fragmentPassThrough(VertexIO inputFragment [[ stage_in ]],
                                   texture2d<half> inputTexture [[ texture(0) ]],
                                   sampler samplr [[ sampler(0) ]])
{
    return inputTexture.sample(samplr, inputFragment.textureCoord);
}
Related
I am trying to add a smudge effect to my paint brush project. To achieve that, I think I need to sample the current results (stored in paintedTexture) at the coordinates where the brush stroke starts and pass them to the fragment shader.
I have a vertex shader such as:
vertex VertexOut vertex_particle(device Particle *particles [[ buffer(0) ]],
                                 constant RenderParticleParameters *params [[ buffer(1) ]],
                                 texture2d<half> imageTexture [[ texture(0) ]],
                                 texture2d<half> paintedTexture [[ texture(1) ]],
                                 uint instance [[ instance_id ]])
{
    VertexOut out;
And a fragment shader such as:
fragment half4 fragment_particle(VertexOut in [[ stage_in ]],
                                 half4 existingColor [[ color(0) ]],
                                 texture2d<half> brushTexture [[ texture(0) ]],
                                 float2 point [[ point_coord ]]) {
Is it possible to create a clipped texture from the paintedTexture and send it to the fragment shader?
paintedTexture holds the current results that have been painted to the canvas. I would like to create a new texture from paintedTexture covering the same area as the brush texture and pass it to the fragment shader.
The existingColor [[color(0)]] in the fragment shader is of no use, since it is the current color, not the color at the beginning of the stroke. If I use existingColor, it behaves like transparency (or a transfer mode, depending on what math is used to combine it with the new color).
If I am barking up the wrong tree, any suggestions on how to achieve a smudging effect with Metal would potentially be acceptable answers.
Update: I tried using a texture2d in the VertexOut struct:
struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    texture2d<half> paintedTexture;
};
But it fails to compile with the error:
vertex function has invalid return type 'VertexOut'
It doesn't seem possible to have an array in the VertexOut struct either (which isn't nearly as ideal as a texture, but it could be a path forward):
struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    half4 paintedPixels[65536];
};
Gives me the error:
type 'VertexOut' is not valid for attribute 'stage_in'
It's not possible for shaders to create textures. They could fill an existing one, but I don't think that's what you want or need here.
I would expect you to pass paintedTexture to the fragment shader and use the vertex shader to note where in that texture to sample. So, just coordinates.
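For illustration, here is a minimal sketch of that idea. The smudgeCoord field, the startUV member on Particle, the texture binding indices, and the blend math are all illustrative assumptions, not names or values from the original project; the point is only that paintedTexture moves to the fragment stage and the vertex shader supplies coordinates:

struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    float2 smudgeCoord;   // where in paintedTexture to sample (hypothetical field)
};

vertex VertexOut vertex_particle(device Particle *particles [[ buffer(0) ]],
                                 constant RenderParticleParameters *params [[ buffer(1) ]],
                                 uint instance [[ instance_id ]])
{
    VertexOut out;
    // ... compute out.position and out.point_size as before ...
    // Hypothetical: derive normalized coordinates into paintedTexture from the
    // stroke's starting position carried in the particle data.
    out.smudgeCoord = particles[instance].startUV;
    return out;
}

fragment half4 fragment_particle(VertexOut in [[ stage_in ]],
                                 texture2d<half> brushTexture [[ texture(0) ]],
                                 texture2d<half> paintedTexture [[ texture(1) ]],
                                 float2 point [[ point_coord ]])
{
    constexpr sampler s(filter::linear, address::clamp_to_edge);
    // Sample the canvas at the coordinates the vertex shader chose, then combine
    // with the brush however the smudge effect should behave.
    half4 pickedUp = paintedTexture.sample(s, in.smudgeCoord);
    half4 brush = brushTexture.sample(s, point);
    return pickedUp * brush.a;
}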
I'd like to save a depth buffer to a texture in Metal, but nothing I've tried seems to work.
_renderPassDesc.colorAttachments[1].clearColor = MTLClearColorMake(0.f, 0.f, 0.f, 1.f);
[self createTextureFor:_renderPassDesc.colorAttachments[1]
                  size:screenSize
            withDevice:_device
                format:MTLPixelFormatRGBA16Float];

_renderPassDesc.depthAttachment.loadAction = MTLLoadActionClear;
_renderPassDesc.depthAttachment.storeAction = MTLStoreActionStore;
_renderPassDesc.depthAttachment.texture = self.depthTexture;
_renderPassDesc.depthAttachment.clearDepth = 1.0;
When I pass depthTexture into my shader (which works fine with data from my other textures), all I get is red pixels.
As I change clearDepth to values closer to zero, I get darker shades of red. Perhaps I'm somehow not sampling the texture correctly in my shader?
fragment float4 cubeFrag(ColorInOut in [[ stage_in ]],
                         texture2d<float> albedo [[ texture(0) ]],
                         texture2d<float> normals [[ texture(1) ]],
                         texture2d<float> albedo2 [[ texture(2) ]],
                         texture2d<float> normals2 [[ texture(3) ]],
                         texture2d<float> lightData [[ texture(4) ]],
                         texture2d<float> depth [[ texture(5) ]])
{
    constexpr sampler texSampler(min_filter::linear, mag_filter::linear);
    return depth.sample(texSampler, in.texCoord).rgba;
}
Use depth2d<float> instead of texture2d<float> as the argument type, and read a float from the depth texture: float val = depth.sample(texSampler, in.texCoord);
OK, it turns out that I just needed to use depth2d instead of texture2d:
depth2d<float> depth [[ texture(5) ]])
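For reference, a minimal sketch of what the corrected fragment might look like (the other texture arguments are omitted here for brevity, so the host-side bindings would differ; depth2d's sample returns a single float):

fragment float4 cubeFrag(ColorInOut in [[ stage_in ]],
                         depth2d<float> depth [[ texture(5) ]])
{
    constexpr sampler texSampler(min_filter::linear, mag_filter::linear);
    float d = depth.sample(texSampler, in.texCoord);   // single depth value
    return float4(d, d, d, 1.0);                       // visualize as grayscale
}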
My renderer supports 2 vertex formats:
typedef struct
{
    packed_float3 position;
    packed_float2 texcoord;
    packed_float3 normal;
    packed_float4 tangent;
    packed_float4 color;
} vertex_ptntc;

typedef struct
{
    packed_float3 position;
    packed_float2 texcoord;
    packed_float4 color;
} vertex_ptc;
The signature of one of my shader library's vertex shaders is as follows:
vertex ColorInOut unlit_vertex(device vertex_ptc* vertex_array [[ buffer(0) ]],
                               constant uniforms_t& uniforms [[ buffer(1) ]],
                               unsigned int vid [[ vertex_id ]])
Some of the meshes rendered by this shader will use one format and some will use the other. How do I support both formats? This shader only uses the attributes in vertex_ptc. Do I have to write another vertex shader?
When defining the shader function argument as an array of structures (as you're doing), the structure definition in the shader vertex function must match the exact shape and size of the actual structures in the buffer (including padding).
Have you considered defining the input in terms of the [[stage_in]] qualifier, and vertex descriptors? This will allow you to massage the vertex input on a shader-by-shader basis, by using an [[attribute(n)]] qualifier on each element of the structure declared for each shader function. You would define a vertex descriptor for each structure.
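As an illustration of that approach, here is a rough sketch of what the shader side might look like if unlit_vertex is switched to [[stage_in]]. The struct name and attribute indices are arbitrary choices, not from the original code; on the CPU side, each vertex format (vertex_ptc, vertex_ptntc) would get its own MTLVertexDescriptor mapping those attribute indices to its offsets and stride:

// Only the attributes this shader actually uses; the extra attributes present
// in vertex_ptntc (normal, tangent) simply are not declared here.
struct UnlitVertexIn
{
    float3 position [[ attribute(0) ]];
    float2 texcoord [[ attribute(1) ]];
    float4 color    [[ attribute(2) ]];
};

vertex ColorInOut unlit_vertex(UnlitVertexIn in [[ stage_in ]],
                               constant uniforms_t& uniforms [[ buffer(1) ]])
{
    ColorInOut out;
    // ... same body as before, reading in.position / in.texcoord / in.color
    //     instead of vertex_array[vid] ...
    return out;
}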
I'm making a sprite batcher that can deal with more than one texture per batch. A sprite's data is stored in a single large uniforms buffer, which is sent to the GPU as soon as the sprite batch is filled up.

I first tried assuming there would be 16 textures, some of which would go unused, and having the fragment shader pick the right texture based on the textureID sent through the instance uniforms. This yielded roughly 60 fps with 800-1000 sprites on an iPhone 5s. I then tested with a single texture and got a satisfying 2000 sprites at 60 fps.

Knowing I would still need to be able to swap textures, I decided to use texture arrays and bind one texture with 16 slices. If I render using texture index 0, the fps is just as it was with the single-slice texture. Once I delve into further slices, however, performance drops massively.
Here is the shader:
struct VertexIn {
    packed_float2 position [[ attribute(0) ]];
    packed_float2 texCoord [[ attribute(1) ]];
};

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
    uint iid;
};

struct InstanceUniforms {
    float3x2 transformMatrix;
    float2 uv;
    float2 uvLengths;
    float textureID;
};
vertex VertexOut spriteVertexShader(const device VertexIn *vertex_array [[ buffer(0) ]],
                                    const device InstanceUniforms *instancedUniforms [[ buffer(1) ]],
                                    uint vid [[ vertex_id ]],
                                    uint iid [[ instance_id ]]) {
    VertexIn vertexIn = vertex_array[vid];
    InstanceUniforms instanceUniforms = instancedUniforms[iid];

    VertexOut vertexOut;
    vertexOut.position = float4(instanceUniforms.transformMatrix * float3(vertexIn.position, 1.0), 0.0, 1.0);
    vertexOut.texCoord = instanceUniforms.uv + vertexIn.texCoord * instanceUniforms.uvLengths;
    vertexOut.iid = iid;
    return vertexOut;
}
fragment float4 spriteFragmentShader(VertexOut interpolated [[ stage_in ]],
                                     const device InstanceUniforms *instancedUniforms [[ buffer(0) ]],
                                     texture2d_array<float> tex [[ texture(0) ]],
                                     sampler sampler2D [[ sampler(0) ]],
                                     float4 dst_color [[ color(0) ]]) {
    InstanceUniforms instanceUniforms = instancedUniforms[interpolated.iid];
    float2 texCoord = interpolated.texCoord;
    return tex.sample(sampler2D, texCoord, uint(instanceUniforms.textureID));
}
Everything is working exactly as expected until I use a texture slice greater than 0. I am using instanced rendering. All sprites share the same vertex and index buffer.
In iOS 8 there was a problem with the division of floats in Metal that prevented proper texture projection, which I solved.
Today I discovered that texture projection on iOS 9 is broken again, although I'm not sure why.
The results of warping a texture on the CPU (with OpenCV) and on the GPU are not the same. You can see this on your iPhone if you run this example app (it already includes the fix for iOS 8) on iOS 9.
The expected CPU warp is colored red, while the GPU warp done by Metal is colored green, so where they overlap they are yellow. Ideally you should not see green or red, but only shades of yellow.
Can you:
confirm the problem exists on your end;
give any advice on anything that might be wrong?
The shader code is:
struct VertexInOut
{
    float4 position [[ position ]];
    float3 warpedTexCoords;
    float3 originalTexCoords;
};

vertex VertexInOut warpVertex(uint vid [[ vertex_id ]],
                              device float4 *positions [[ buffer(0) ]],
                              device float3 *texCoords [[ buffer(1) ]])
{
    VertexInOut v;
    v.position = positions[vid];

    // example homography
    simd::float3x3 h = {
        {1.03140473, 0.0778113901, 0.000169219566},
        {0.0342947133, 1.06025684, 0.000459250761},
        {-0.0364957005, -38.3375587, 0.818259298}
    };

    v.warpedTexCoords = h * texCoords[vid];
    v.originalTexCoords = texCoords[vid];
    return v;
}
fragment half4 warpFragment(VertexInOut inFrag [[ stage_in ]],
                            texture2d<half, access::sample> original [[ texture(0) ]],
                            texture2d<half, access::sample> cpuWarped [[ texture(1) ]])
{
    constexpr sampler s(coord::pixel, filter::linear, address::clamp_to_zero);
    half4 gpuWarpedPixel = half4(original.sample(s, inFrag.warpedTexCoords.xy * (1.0 / inFrag.warpedTexCoords.z)).r, 0, 0, 255);
    half4 cpuWarpedPixel = half4(0, cpuWarped.sample(s, inFrag.originalTexCoords.xy).r, 0, 255);
    return (gpuWarpedPixel + cpuWarpedPixel) * 0.5;
}
Do not ask me why, but if I multiply the warped coordinates by 1.00005, or any number close to 1.0, it is fixed (apart from very tiny details). See the last commit in the example app repo.
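For illustration, a minimal sketch of the fragment shader above with that workaround applied; the exact placement of the 1.00005 factor (here, after the perspective divide) is my assumption based on the description:

fragment half4 warpFragment(VertexInOut inFrag [[ stage_in ]],
                            texture2d<half, access::sample> original [[ texture(0) ]],
                            texture2d<half, access::sample> cpuWarped [[ texture(1) ]])
{
    constexpr sampler s(coord::pixel, filter::linear, address::clamp_to_zero);
    // Workaround described above: nudge the perspective-divided coordinates by
    // a factor very close to 1.0 before sampling.
    float2 warpedXY = inFrag.warpedTexCoords.xy * (1.0 / inFrag.warpedTexCoords.z) * 1.00005;
    half4 gpuWarpedPixel = half4(original.sample(s, warpedXY).r, 0, 0, 255);
    half4 cpuWarpedPixel = half4(0, cpuWarped.sample(s, inFrag.originalTexCoords.xy).r, 0, 255);
    return (gpuWarpedPixel + cpuWarpedPixel) * 0.5;
}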