I have noticed that textures have a field named "value" in all the shaders, but it seems that this field is never used.
Example :
tDiffuse: { type: "t", value: 0, texture: null },
What is the purpose of this field?
Thanks
It is used.
It defines which slot a texture is bound to. A slot is a place where a shader program can access the texture through samplers. This is indeed mostly 0 because we use only one texture in a shader. But if we want multiple textures to be accessed in the shader, then the value needs to be set to whichever slot each texture should occupy.
Like you can see here:
https://github.com/gero3/three.js/blob/master/src/renderers/WebGLShaders.js#L1392-1397
map is the first texture. (slot 0)
envMap is the second texture. (slot 1)
lightMap is the third texture. (slot 2)
For every texture you use in the shader program, you must choose a new slot.
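For example, a minimal sketch of a uniforms block that binds two textures (the tNoise name is just illustrative, not from the original shader):

uniforms: {
    tDiffuse: { type: "t", value: 0, texture: null }, // sampler slot 0
    tNoise:   { type: "t", value: 1, texture: null }  // sampler slot 1
}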
As you might know, the Metal Shading Language allows a few ways to read pixel data from a texture2d in a kernel function. It can be either a simple read(short2 coord) or sample(float2 coord, [different additional parameters]). But I noticed that when it comes to writing something into a texture, there's only the write method.
And the problem here is that the sample method allows sampling from a certain mipmap level, which is very convenient. The developer just needs to create a sampler with a mipFilter and use normalized coordinates.
But what if I want to write into a certain mipmap level of the texture? The thing is that the write method doesn't have a mipmap parameter the way the sample method does, and I cannot find any alternative for that.
I'm pretty sure there should be a way to choose the mipmap level for writing data to a texture, because the Metal Performance Shaders framework has solutions where mipmaps of textures are populated.
Thanks in advance!
You can do this with texture views.
The purpose of texture views is to reinterpret the contents of a base texture by selecting a subset of its levels and slices and potentially reading/writing its pixels in a different (but compatible) pixel format.
The -newTextureViewWithPixelFormat:textureType:levels:slices: method on the MTLTexture protocol returns a new instance of id<MTLTexture> that has the first level specified in the levels range as its base mip level. By creating one view per mip level you wish to write to, you can "target" each level in the original texture.
For example, to create a texture view on the second mip level of a 2D texture, you might call the method like this:
id<MTLTexture> viewTexture =
    [baseTexture newTextureViewWithPixelFormat:baseTexture.pixelFormat
                                   textureType:baseTexture.textureType
                                        levels:NSMakeRange(1, 1)
                                        slices:NSMakeRange(0, 1)];
When binding this new texture as an argument, its mip level 0 will correspond to mip level 1 of its base texture. You can therefore use the ordinary texture write function in a shader to write to the selected mip level:
myShaderTexture.write(color, coords);
I'm using a Metal shader to draw many particles onto the screen. Each particle has its own position (which can change), and often two particles have the same position. How can I check whether the texture2d I write into does not yet have a pixel at a certain position? (I want to make sure that I only draw a particle at a certain position if no particle has been drawn there yet, because I get ugly flickering if many particles are drawn at the same position.)
I've tried outTexture.read(particlePosition), but this obviously doesn't work, because of the texture access qualifier, which is access::write.
Is there a way I can have read and write access to a texture2d at the same time? (If there isn't, how could I still solve my problem?)
There are several approaches that could work here. In concurrent systems programming, what you're talking about is termed first-write wins.
1) If the particles only need to preclude other particles from being drawn (and aren't potentially obscured by other elements in the scene in the same render pass), you can write a special value to the depth buffer to signify that a fragment has already been written to a particular coordinate. For example, you'd turn on the depth test (with a depth compare function of Less and depth writes enabled), clear the depth buffer to some distant value (like 1.0), and then write a value of 0.0 to the depth buffer in the fragment function. Any subsequent write to a given pixel will fail the depth test and will not be drawn.
2) Use framebuffer read-back. On iOS, Metal allows you to read from the currently-bound primary renderbuffer by attributing a parameter to your fragment function with [[color(0)]]. This parameter will contain the current color value in the renderbuffer, which you can test against to determine whether it has been written to. This does require you to clear the texture to a predetermined color that will never otherwise be produced by your fragment function, so it is more limited than the above approach, and possibly less performant.
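To make approach 1 concrete, here is a minimal Metal Shading Language sketch (the struct and function names are illustrative, not from the question): the fragment function outputs a constant depth of 0.0, so with the buffer cleared to 1.0 and a Less compare function, only the first fragment at each pixel passes.

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

struct FragmentOut {
    float4 color [[color(0)]];
    float  depth [[depth(any)]];
};

fragment FragmentOut particleFragment(VertexOut in [[stage_in]]) {
    FragmentOut out;
    out.color = in.color;
    out.depth = 0.0; // first fragment at this pixel passes the Less test; later ones fail
    return out;
}

And a similarly hedged sketch of approach 2 (iOS framebuffer fetch), reusing the VertexOut struct above and assuming the render target is cleared to transparent black:

fragment float4 particleFragmentFB(VertexOut in [[stage_in]],
                                   float4 current [[color(0)]]) {
    const float4 clearColor = float4(0.0, 0.0, 0.0, 0.0); // a value the shader never outputs
    if (any(current != clearColor)) {
        return current; // pixel already written: keep the existing color
    }
    return in.color;    // first write wins
}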
All of the above applies whether you're rendering to a drawable's texture for direct presentation to the screen, or to some offscreen texture.
To answer the read-and-write part: you can specify read/write access for the output texture like this:
texture2d<float, access::read_write> outTexture [[texture(1)]],
Also, your texture descriptor must specify the usage:
textureDescriptor?.usage = [.shaderRead, .shaderWrite]
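Putting the two together, a rough kernel sketch (the texture, buffer, and clear-color details are assumptions, not from the question) that only writes a particle if the pixel still holds the clear color. Note that the read-then-write check is not atomic across threads, so two threads hitting the same pixel in the same dispatch can still race:

kernel void drawParticles(texture2d<float, access::read_write> outTexture [[texture(1)]],
                          constant float2 *positions [[buffer(0)]],
                          uint id [[thread_position_in_grid]])
{
    uint2 coord = uint2(positions[id]);
    float4 existing = outTexture.read(coord);
    if (existing.a == 0.0) { // nothing drawn here yet (texture cleared to alpha 0)
        outTexture.write(float4(1.0), coord);
    }
}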
I am attempting to write a fragment shader for the app that I am working on. I pass my uniform into the shader, which works, but it applies to the entire object. I want to be able to modify the object pixel by pixel. So my code now is:
let shader = SKShader( fileNamed: "Shader.fsh" );
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
m_image.shader = shader;
Here the uniform "value" will be the same for all pixels. But, for example, let's say I want to change "value" to "0.0" after a certain number of pixels are drawn. So, for example:
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
// 100 pixels are drawn
shader.addUniform( SKUniform( name: "value", float: 0.0 ) );
Is this even possible with SKShader? Would this have to be done in the shader source?
One idea I was thinking of was using an array uniform but it doesn't appear that SKShader allows this.
Thanks for any help in advance.
In general, the word uniform means unchanging — something that's the same in all cases or situations. Such is the way of shader uniforms: even though the shader code runs independently (and in parallel) for each pixel in a rendered image, the value of a uniform variable input to the shader is the same across all pixels.
While you could, in theory, pass an array of values into the shader representing the colors for every pixel, that's essentially the same as passing an image (or just setting a texture image on the sprite)... at that point you're using a shader for nothing.
Instead, you typically want your GLSL(ish*) code to, if it's doing anything based on pixel location, find out the pixel coordinates it's writing to and calculate a result based on that. In a shader for SKShader, you get pixel coordinates from the vec2 v_tex_coord shader variable.
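For example, a minimal sketch of a Shader.fsh that derives a per-pixel value from the texture coordinate instead of a single uniform (the 0.5 split is purely illustrative):

void main() {
    vec4 color = texture2D(u_texture, v_tex_coord); // current pixel of the sprite's texture
    float value = v_tex_coord.x < 0.5 ? 1.0 : 0.0;  // left half gets 1.0, right half 0.0
    gl_FragColor = color * value;
}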
(This looks like a decent tutorial (with links to others) for getting started on SpriteKit shaders. If you follow other tutorials or shader code libraries for help doing cool stuff with pixel shaders, you'll find ideas and algorithms you can reuse, but the ways they find the current output pixel will be different. In a shader for SpriteKit, you can usually safely replace gl_FragCoord with v_tex_coord.)
* SKShader doesn't use actual GLSL per se. It actually uses a subset of GLSL that automatically translates to appropriate GPU code for the device/renderer in use.
I have a C++ DirectX 11 renderer that I have been writing.
I have written a COLLADA 1.4.1 loader to import COLLADA data for use in supporting skeletal animations.
I'm validating the loader at this point (and I've supported COLLADA before in another renderer I've written previously using different technology) and I'm running into a problem matching up COLLADA with DX10/11.
I have 3 separate vertex buffers of data:
A vertex buffer of Unique vertex positions.
A vertex buffer of Unique normals.
A vertex buffer of Unique texture coordinates.
These vertex buffers have different lengths (positions has 2910 elements, normals has more than 9000, and texture coordinates has roughly 3200).
COLLADA provides a triangle list which gives me the indices into each of these arrays for a given triangle (verbose and oddly done at first, but ultimately it becomes simple once you've worked with it.)
Knowing that DX10/11 support multiple vertex buffers, I figured I would be filling the DX10/11 index buffer with indices into each of these buffers *and* (this is the important part) these indices could be different for a given point of a triangle.
In other words, I could set the three vertex buffers, set the correct input layout, and then in the index buffer I would put the equivalent of:
l_aIndexBuffer[ NumberOfTriangles * 3 ]
for( i = 0; i < NumberOfTriangles; i++ )
{
    l_aIndexBufferData.add( triangle[i].Point1.PositionIndex )
    l_aIndexBufferData.add( triangle[i].Point1.NormalIndex )
    l_aIndexBufferData.add( triangle[i].Point1.TextureCoordinateIndex )
}
The documentation regarding using multiple vertex buffers in DirectX doesn't seem to give any information about how this affects the index buffer (more on this later.)
Running the code that way yielded strange rendering results where I could see the mesh I had being drawn intermittently correctly (strange polygons, but about a third of the points were in the correct place - hint - hint).
I figured I'd screwed up my data or my indices at this point (yesterday), so I painstakingly validated it all, and then figured I was screwing up my input or something else. I eliminated this by using the values from the normal and texture buffers to alternately set the color value used by the pixel shader; the colors were correct, so I wasn't suffering a padding issue.
Ultimately I came to the conclusion that DX10/11 must expect the data ordered in a different fashion, so I tried storing the indices in this fashion:
indices.add( Point1Position index )
indices.add( Point2Position index )
indices.add( Point3Position index )
indices.add( Point1Normal index )
indices.add( Point2Normal index )
indices.add( Point3Normal index )
indices.add( Point1TexCoord index )
indices.add( Point2TexCoord index )
indices.add( Point3TexCoord index )
Oddly enough, this yielded a rendered mesh that looked 1/3 correct - hint - hint.
I then surmised that maybe DX10/DX11 wanted the indices stored 'by vertex buffer' meaning that I would add all the position indices for all the triangles first, then all the normal indices for all the triangles, then all the texture coordinate indices for all the triangles.
This yielded another 1/3 correct (looking) mesh.
This made me think - well, surely DX10/11 wouldn't provide you with the ability to stream from multiple vertex buffers and then actually expect only one index per triangle point?
Only including indices into the vertex buffer of positions yields a properly rendered mesh that unfortunately uses the wrong normals and texture coordinates.
It appears that putting the normal and texture coordinate indices into the index buffer caused erroneous drawing over the properly rendered mesh.
Is this the expected behavior?
Multiple vertex buffers, one index buffer, and the index buffer can only have a single index per point of a triangle?
That really just doesn't make sense to me.
Help!
The very first thing that comes to my head:
All hardware that supports compute shaders (which is almost all DirectX 10 and higher hardware) also supports ByteAddressBuffers, and most of it supports StructuredBuffers. So you can bind your arrays as SRVs and have random access to any of their elements in shaders.
Something like this (not tested, just pseudocode):
// Indices passed as a vertex buffer to the shader.
// Think of them as "references" to the real data.
struct VS_INPUT
{
    uint posidx : POSINDEX;   // the semantic names are illustrative
    uint noridx : NORINDEX;
    uint texidx : TEXINDEX;
};

// The real vertex data.
// You pass it as structured buffers (similar to textures).
StructuredBuffer<float3> pos : register (t0);
StructuredBuffer<float3> nor : register (t1);
StructuredBuffer<float2> tex : register (t2);

VS_OUTPUT main(VS_INPUT indices)
{
    // In the shader you read the data for the current vertex.
    float3 position = pos[indices.posidx];
    float3 normal   = nor[indices.noridx];
    float2 texcoord = tex[indices.texidx];
    // here you do something
}
Let's call that the "compute shader approach". You must use the DirectX 11 API.
Also, you can bind your indices in the same fashion and do some magic in shaders. In this case you need to find out the current index ID. Probably you can take it from SV_VertexID.
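A rough, untested sketch of that variant (the buffer layout, register numbers, and missing transform are assumptions): the per-vertex index triplets live in their own structured buffer and are fetched with SV_VertexID, so no vertex buffer is bound at all:

struct IndexTriplet
{
    uint posidx;
    uint noridx;
    uint texidx;
};

StructuredBuffer<IndexTriplet> tri : register (t3);
StructuredBuffer<float3>       pos : register (t0);

float4 main(uint vid : SV_VertexID) : SV_POSITION
{
    IndexTriplet it = tri[vid];        // look up this vertex's index triplet
    float3 position = pos[it.posidx];  // then fetch the actual attribute data
    return float4(position, 1.0f);     // apply your world-view-projection transform in a real shader
}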
And you can probably work around these buffers and bind the data some other way (DirectX 9 compatible texture sampling! O_o).
Hope it helps!
How do you implement per-instance textures, vertex shaders, and pixel shaders in the same vertex buffer and/or DeviceContext?
I am just trying to find the most efficient way to have different pixel shaders used by the same type of mesh, but colored differently. For example, I would like square and triangle models in the vertex buffer, and for the vertex/pixel/etc. shaders to act differently based on instance data. (If the instance data somehow includes "dead", the shaders that draw opaque shapes with solid colors rather than gradients are used.)
Given:
1. Different model templates in the vertex buffer: Square and Triangle (more eventually).
2. Instance buffer with [n] instances of type Square and/or Triangle, etc.
Guesses (things I am trying to research to do this):
A. Can I add a Texture, VertexShader, or PixelShader ID to the buffer data so that HLSL or the input assembler can determine which shader to use at draw time?
B. Can I "Set" multiple Pixel and Vertex Shaders into the DeviceContext, and how do I tell DirectX to "switch" the Vertex Shader that is loaded at render time?
C. How many Shaders of each type, (Vertex, Pixel, Hull, etc), can I associate with model definitions/meshes in the default Vertex Buffer?
D. Can I use some sort of Shader Selector in HLSL?
Related C++ Code
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
NS::ThrowIfFailed(
    result = NS::DeviceManager::Device->CreateInputLayout(
        NS::ModelRenderer::InitialElementDescription,
        2,
        vertexShaderFile->Data,
        vertexShaderFile->Length,
        &NS::ModelRenderer::StaticInputLayout
    )
);
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
DeviceManager::DeviceContext->IASetInputLayout(ModelRenderer::StaticInputLayout.Get());
DeviceManager::DeviceContext->VSSetShader(ModelRenderer::StaticVertexShader.Get(), nullptr, 0);
DeviceManager::DeviceContext->PSSetShader(ModelRenderer::StaticPixelShader.Get(), nullptr, 0);
How do I add a Texture, VertexShader or PixelShader ID to the buffer
data so that HLSL or the InputAssembly can determine which Shader to
use at draw time?
You can't assign a pixel shader ID to a buffer; that's not how the pipeline works.
A/ You can bind only one vertex shader and one pixel shader in a device context at a time, and that defines your pipeline. Draw your geometry using this shader, then switch to another vertex/pixel shader as needed and draw the next geometry...
B/ You can use different shaders with the same model, but that's done on the CPU using VSSetShader, PSSetShader, as sketched after this list...
C/ No, for the same reason as in B (shaders are set on the CPU).
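As a rough sketch of what that looks like in a draw loop (the shader, layout, and count names are illustrative, not from the question):

// Bind the shared layout once, then draw each group with its own shaders.
context->IASetInputLayout(sharedLayout.Get());

// "Alive" instances: gradient shaders.
context->VSSetShader(gradientVS.Get(), nullptr, 0);
context->PSSetShader(gradientPS.Get(), nullptr, 0);
context->DrawIndexedInstanced(indexCount, aliveInstanceCount, 0, 0, 0);

// Switch shaders on the CPU, then draw the "dead" instances with solid colors.
context->VSSetShader(solidVS.Get(), nullptr, 0);
context->PSSetShader(solidPS.Get(), nullptr, 0);
context->DrawIndexedInstanced(indexCount, deadInstanceCount, 0, 0, aliveInstanceCount);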
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
If you don't specify a vertex shader, the pipeline will consider that you're drawing "null" geometry, which is actually possible (and very fun) but a bit out of scope here. If you provide geometry, you need to pass the vertex shader bytecode so the runtime can match your geometry layout to the vertex input layout. You can of course create several input layouts by calling the function several times (once per vertex shader/geometry in the worst case, but if two models/vertex shaders share the same layout you can reuse it).
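For instance, a sketch of creating one layout per vertex shader from the same element description (all names here are assumed):

// The same element description, validated against each vertex shader's input signature.
device->CreateInputLayout(elementDesc, elementCount,
                          gradientVSBytecode, gradientVSBytecodeLength,
                          &gradientLayout);

device->CreateInputLayout(elementDesc, elementCount,
                          solidVSBytecode, solidVSBytecodeLength,
                          &solidLayout);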
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
You bind everything you need (vertex/pixel shaders, vertex/index buffers, input layout) and call Draw (or DrawInstanced).