I have two different textures: one is a color texture and the other is a plain alpha image, and I want to mask the color texture with the alpha texture. How can I do that in the Metal Shading Language? One texture is 128×128 and the other is 256×256; I want the masked result at 128×128.
fragment float4 fragmentShaderone(VertexOut params [[stage_in]],
                                  texture2d<float, access::sample> srcTexture [[texture(0)]],
                                  texture2d<float, access::sample> maskTexture [[texture(1)]])
{
    constexpr sampler defaultSampler;
    // Sample the source texture and tint it; sample() already returns float4.
    float4 srcColor = srcTexture.sample(defaultSampler, params.textureCoordinates) * float4(1, 0, 0, 40.0 / 255.0);
    float4 maskColor = maskTexture.sample(defaultSampler, params.textureCoordinates);
    return srcColor * maskColor;
}
Here, when sampling, I use the same texture coordinates for both the mask and the source image.
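Because sample() uses normalized coordinates in [0, 1], the same coordinates address both textures correctly even though one is 128×128 and the other 256×256; the output size is determined by the render target, not by the input textures. On the host side, binding the two textures might look like this minimal Swift sketch (renderEncoder, srcTexture and maskTexture are assumed to exist):

    // Minimal sketch: bind the textures at the indices the shader declares.
    renderEncoder.setFragmentTexture(srcTexture, index: 0)   // [[texture(0)]], 128x128
    renderEncoder.setFragmentTexture(maskTexture, index: 1)  // [[texture(1)]], 256x256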
I have an MTLTexture in RGBA8Unorm format, and a screen texture (in MTKView) in BGRA8Unorm format (reversed component order). In a Metal shader, when I sample from a texture using sample(), I get a float4; when I write to a texture, I also write a float4. It seems that inside shader code, float4 always represents the components in RGBA order, regardless of the texture's original format ([0] for red, [1] for green, [2] for blue, and [3] for alpha). Is my conclusion correct that the meaning of the components of the sampled/written float4 is always the same inside the shader, regardless of the texture's storage format?
UPDATE: I use the following code to write to a texture with RGBA8Unorm format:
kernel void
computeColourMap(constant Uniforms &uniforms [[buffer(0)]],
                 constant array<float, 120> &amps [[buffer(1)]],
                 constant array<float, 120> &red [[buffer(2)]],
                 constant array<float, 120> &green [[buffer(3)]],
                 constant array<float, 120> &blue [[buffer(4)]],
                 texture2d<float, access::write> output [[texture(0)]],
                 uint2 id [[thread_position_in_grid]])
{
    // Skip threads that fall outside the texture bounds.
    if (id.x >= output.get_width() || id.y >= output.get_height()) {
        return;
    }
    uint i = id.x % 120;
    float4 col(0, 0, 0, 1);
    col.x += amps[i] * red[i];
    col.y += amps[i] * green[i];
    col.z += amps[i] * blue[i];
    output.write(col, id);
}
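For completeness, dispatching this kernel from Swift might look like the following sketch; the bounds check in the kernel makes the rounded-up grid safe, and names such as pipelineState and the individual buffers are assumptions:

    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipelineState)
    encoder.setBuffer(uniformsBuffer, offset: 0, index: 0)
    encoder.setBuffer(ampsBuffer, offset: 0, index: 1)
    encoder.setBuffer(redBuffer, offset: 0, index: 2)
    encoder.setBuffer(greenBuffer, offset: 0, index: 3)
    encoder.setBuffer(blueBuffer, offset: 0, index: 4)
    encoder.setTexture(outputTexture, index: 0)
    // One thread per pixel, rounded up to whole 8x8 threadgroups.
    let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
    let groups = MTLSize(width: (outputTexture.width + 7) / 8,
                         height: (outputTexture.height + 7) / 8,
                         depth: 1)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()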
I then use the following shaders for the rendering stage:
vertex VertexOut
vertexShader(const device VertexIn *vertexArray [[buffer(0)]],
             unsigned int vid [[vertex_id]])
{
    VertexIn vertex_in = vertexArray[vid];
    VertexOut vertex_out;
    vertex_out.position = vertex_in.position;
    vertex_out.textureCoord = vertex_in.textureCoord;
    return vertex_out;
}
fragment float4
fragmentShader(VertexOut interpolated [[stage_in]],
               texture2d<float> colorTexture [[texture(0)]])
{
    constexpr sampler nearestSampler(filter::nearest);
    const float4 colorSample = colorTexture.sample(nearestSampler,
                                                   interpolated.textureCoord);
    return colorSample;
}
where the colorTexture passed into the fragment shader is the one I generated in RGBA8Unorm format, and in Swift I have:
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.vertexFunction = library.makeFunction(name: "vertexShader")!
renderPipelineDescriptor.fragmentFunction = library.makeFunction(name: "fragmentShader")!
renderPipelineDescriptor.colorAttachments[0].pixelFormat = colorPixelFormat
The colorPixelFormat of the MTKView is BGRA8Unorm (reversed relative to my texture), which is not the same as my texture's format, but the colours on the screen come out correct.
UPDATE 2: a further hint that, within a shader, the colour represented by a float4 is always ordered RGBA is that the float4 type has accessors called v.r, v.g, v.b, v.rgb, etc.
The vector always has 4 components, but the type of the components is not necessarily float. When you declare a texture, you specify the component type as a template argument (texture2d<float ...> in your code).
For example, from Metal Shading Language Specification v2.1, section 5.10.1:
The following member functions can be used to sample from a 1D texture.

Tv sample(sampler s, float coord) const

Tv is a 4-component vector type based on the templated type used to declare the texture type. If T is float, Tv is float4. If T is half, Tv is half4. If T is int, Tv is int4. If T is uint, Tv is uint4. If T is short, Tv is short4 and if T is ushort, Tv is ushort4.
The same Tv type is used in the declaration of write(). The functions for other texture types are documented in a similar manner.
And, yes, component .r always contains the red component (if present), etc. And [0] always corresponds to .r (or .x).
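Put differently, the pixel format only describes how the components are laid out in memory; the conversion between that storage order and the RGBA-ordered vector the shader sees happens automatically on every sample and write. That is why rendering your RGBA8Unorm texture into a BGRA8Unorm drawable still produces correct colours.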
vertex Vertex
line_vertex_main(device Vertex *vertices [[buffer(0)]],
                 constant Uniforms &uniforms [[buffer(1)]],
                 uint vid [[vertex_id]])
{
    float4x4 matrix = uniforms.matrix;
    Vertex in = vertices[vid];
    Vertex out;
    out.position = matrix * float4(in.position);
    out.color = in.color;
    return out;
}
fragment float4
line_fragment_main(Vertex inVertex [[stage_in]])
{
    return inVertex.color;
}
The color is incorrect: color(0.9, 0.6, 0, 0.4) in Metal is transformed into a strange color.
Left is correct; right is drawn with Metal.
The color is correct when I draw Metal triangles with an opaque color (alpha = 1).
Right is drawn with Metal.
Your blending mode is not configured. You can configure blending on your MTLRenderPipelineDescriptor.
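For example, a minimal sketch of a standard source-over alpha configuration (renderPipelineDescriptor is assumed to be your MTLRenderPipelineDescriptor):

    let attachment = renderPipelineDescriptor.colorAttachments[0]!
    attachment.isBlendingEnabled = true
    // Classic source-over blending: src * srcAlpha + dst * (1 - srcAlpha).
    attachment.rgbBlendOperation = .add
    attachment.alphaBlendOperation = .add
    attachment.sourceRGBBlendFactor = .sourceAlpha
    attachment.sourceAlphaBlendFactor = .sourceAlpha
    attachment.destinationRGBBlendFactor = .oneMinusSourceAlpha
    attachment.destinationAlphaBlendFactor = .oneMinusSourceAlpha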
I render an image in two steps, which requires two pairs of vertex and fragment shaders. In the second step, the second fragment shader needs the framebuffer rendered by the first pair. How can I get that framebuffer, or the color at a specific coordinate?
I have read the following answers:
iOS Metal Shader - Texture read and write access?
How to chain filters in Metal for iOS?
fragment float4 second_fragment(VertexOut vert [[stage_in]],
                                texture2d<float> tex [[texture(0)]],
                                float4 framebuffer [[color(0)]])
{
    constexpr sampler basicSampler;
    float4 textureColor = tex.sample(basicSampler, vert.texCoor);
    return textureColor;
}
The [[color(0)]] qualifier only reads the color at the current fragment's own coordinate. I need to know the color at arbitrary other coordinates.
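One approach, consistent with the linked answers: render the first pass into an offscreen MTLTexture instead of the drawable, then bind that texture as an input to the second pass, whose fragment shader can sample it at any coordinate. A minimal sketch, where offscreenTexture (created with .renderTarget and .shaderRead usage), viewPassDescriptor and commandBuffer are assumptions:

    // Pass 1: render into the offscreen texture and keep the result.
    let firstPass = MTLRenderPassDescriptor()
    firstPass.colorAttachments[0].texture = offscreenTexture
    firstPass.colorAttachments[0].loadAction = .clear
    firstPass.colorAttachments[0].storeAction = .store
    let firstEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: firstPass)!
    // ... draw with the first vertex/fragment pair ...
    firstEncoder.endEncoding()

    // Pass 2: bind the result of pass 1 so second_fragment can sample it anywhere.
    let secondEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: viewPassDescriptor)!
    secondEncoder.setFragmentTexture(offscreenTexture, index: 0)
    // ... draw with the second vertex/fragment pair ...
    secondEncoder.endEncoding()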
I generate a simple 2D grid with a triangle strip representing a water surface. The first generated vertex has position [0,0] and the last one [1,1]. For my water simulation I need to store the current positions of the vertices in a texture and then sample those values in the next frame to get the previous state of the water surface.
So I created the texture with one pixel per vertex. For example, for a 10x10 vertex grid I use a texture with 10x10 pixels, and I set this texture as a render target to render all the vertex data into it.
According to MSDN Coordinate Systems, if I use the current positions of the vertices in the grid (bottom-left at [0;0], top-right at [1;1]), the rendered texture looks like this:
So I need to do some conversion to NDC. I convert it in a vertex shader like this:
[vertex.x * 2 - 1; vertex.y * 2 - 1]
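For example, x = 0 maps to -1, x = 0.5 maps to 0, and x = 1 maps to +1, so the corners of the grid land exactly on the corners of NDC.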
Consider this 3x3 grid:
Now the grid is stretched to the whole texture size. Texture coordinates differ from NDC, and apparently I can use the original grid coordinates (before conversion) to sample the texture and read back the previous values (positions) of the vertices.
Here is a sample of my vertex/pixel shader code:
This vertex shader converts the coordinates and sends them to the pixel shader with the SV_POSITION semantic (which describes the pixel location).
struct VertexInput
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};
struct VertexOutput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};
// Converts coordinates from the [0,1] range (origin at 0,0) to NDC [-1,1].
float2 toNDC(float2 px)
{
    return float2(px.x * 2 - 1, px.y * 2 - 1);
}
VertexOutput main(VertexInput input)
{
    VertexOutput output;
    float2 ndc = toNDC(float2(input.pos.x, input.pos.z));
    output.pos = float4(ndc, 1, 1);
    output.tex = float2(input.pos.x, input.pos.z);
    return output;
}
And here's the pixel shader, which saves the values from the vertex shader at the pixel location defined by SV_POSITION.
struct PixelInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};
float4 main(PixelInput input) : SV_TARGET
{
    return float4(input.tex.x, input.tex.y, 0, 1);
}
And now we're finally getting to my problem! I use the graphics debugger in Visual Studio 2012, which lets me inspect the rendered texture and its values. I would expect the pixel at location [0,1] (in the texel coordinate system) to hold the value [0,0] (or [0,0,0,1] to be precise, in RGBA format), but it seems the final pixel value is interpolated between 3 vertices, so I get the wrong value for that vertex.
Screenshot from VS graphics debugger:
Rendered 3x3 texture ([0;1] location in texel coordinate system):
Values from vertex and pixel shader:
How can I render the exact value from the vertex shader into the texture for a given pixel?
I am pretty new to computer graphics and Direct3D 11, so please excuse my deficiencies.
I'm using the rastertek framework for terrain generation. I've got the terrain rendered from the vertex shader, but I don't know how to calculate the normals in the shader. There's a function call in one of the classes that generates the normals from the terrain, but this only works if the terrain is generated on the CPU. Here's the code for the vertex shader I'm using:
////////////////////////////////////////////////////////////////////////////////
// Filename: terrain.vs
////////////////////////////////////////////////////////////////////////////////
#include "terrain.fx"
/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};
//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float3 normal : NORMAL;
};
struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal : NORMAL;
};
////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType TerrainVertexShader(VertexInputType input)
{
    PixelInputType output;
    // Displace the vertex by the procedural terrain height.
    input.position.y = input.position.y + terrain(input.position.x, input.position.z);
    // Make the position a proper 4-component vector (w = 1) for the matrix math.
    input.position.w = 1.0f;
    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
    // Transform the normal vector by the world matrix only.
    output.normal = mul(input.normal, (float3x3)worldMatrix);
    // Normalize the normal vector.
    output.normal = normalize(output.normal);
    return output;
}
Your big problem with generating normals in a shader is that you need knowledge of the surrounding vertices. This is something you can overcome with a geometry shader, but not with a vertex shader.

A simple way to calculate a vertex normal is to take the polygon normals of all polys that touch the vertex you are looking at (for each face, cross-product the vectors formed by 2 of its edges), then add them up and normalise. So if you haven't got access to a geometry shader, the only real solution is to use the CPU. Even then this is not the best way to calculate vertex normals; you may still find it better to use a more complex algorithm yet, and that will give you even more problems! So yeah, CPU or geometry shader: those are, basically, your options.
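For reference, here is a minimal CPU-side sketch of that accumulate-and-normalise approach (written in Swift with simd purely for illustration; positions and indices, three per triangle, are assumed inputs, and the same idea translates directly to C++):

    import simd

    func vertexNormals(positions: [SIMD3<Float>], indices: [Int]) -> [SIMD3<Float>] {
        var normals = [SIMD3<Float>](repeating: .zero, count: positions.count)
        // Add each face normal into the normal of each of its three vertices.
        for t in stride(from: 0, to: indices.count, by: 3) {
            let (i0, i1, i2) = (indices[t], indices[t + 1], indices[t + 2])
            let edge1 = positions[i1] - positions[i0]
            let edge2 = positions[i2] - positions[i0]
            let faceNormal = simd_cross(edge1, edge2) // unnormalised: larger faces weigh more
            normals[i0] += faceNormal
            normals[i1] += faceNormal
            normals[i2] += faceNormal
        }
        // Normalise the accumulated sums (every vertex is assumed to touch a face).
        return normals.map { simd_normalize($0) }
    }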