Overlapping Metal point primitives and blending

I am rendering point primitives that partially overlap. The fragment shader shades part of each point primitive's square as transparent, leaving a solid center circle. A point primitive that does not overlap any other point primitive shades as expected: the transparent areas of the square show the background.
When such a point primitive overlaps another point primitive, the behavior is unexpected. Specifically, the transparent area does not show the opaque color of the underlying point primitive; instead it clears it and shows the background color.
In other words, point primitives with a higher vertex ID clear the previously shaded fragments of point primitives with a lower vertex ID.
The points primitives are encoded like this:
command_encoder.drawPrimitives(.Point, vertexStart: 0, vertexCount: stroke.count)
And the vertex and fragment shaders look like this:
vertex OutVertex vertex_func(constant InVertex* vertex_array [[buffer(0)]],
                             constant Uniforms& uniforms [[buffer(1)]],
                             uint vid [[vertex_id]]) {
    OutVertex out;
    InVertex in = vertex_array[vid];
    // transform vertex into NDC space
    out.position = uniforms.projection * float4(in.position.x, in.position.y, 0, 1);
    out.pointSize = 60;
    return out;
}

fragment float4 fragment_func(OutVertex vert [[stage_in]], float2 uv [[point_coord]]) {
    // remap point_coord from [0, 1] to [-1, 1] and fade alpha toward the edge of the circle
    float2 uvPos = uv;
    uvPos.x -= 0.5f;
    uvPos.y -= 0.5f;
    uvPos *= 2.0f;
    float dist = sqrt(uvPos.x*uvPos.x + uvPos.y*uvPos.y);
    float circleAlpha = 1.0f - dist;
    half4 color = half4(0.0f, 0.0f, 1.0f, 1.0f);
    color *= circleAlpha;
    return float4(color.r, color.g, color.b, circleAlpha);
}
The result looks like this:
I would like to find out how to prevent the clearing of the overlapping transparent areas while maintaining the transparency against the background.
Thank you.
Update 5/25/16:
I was able to turn on fixed function blending with these additions to my render pipeline descriptor:
rpld.colorAttachments[0].blendingEnabled = true
rpld.colorAttachments[0].rgbBlendOperation = .Add
rpld.colorAttachments[0].alphaBlendOperation = .Add
rpld.colorAttachments[0].sourceRGBBlendFactor = .One
rpld.colorAttachments[0].sourceAlphaBlendFactor = .One
rpld.colorAttachments[0].destinationRGBBlendFactor = .OneMinusSourceAlpha
rpld.colorAttachments[0].destinationAlphaBlendFactor = .OneMinusSourceAlpha
And now things look as expected (the fragment shader already returns premultiplied color via color *= circleAlpha, so a source factor of .One with a destination factor of .OneMinusSourceAlpha is the standard premultiplied-alpha "over" blend):

Related

Color is incorrect when drawing Metal triangles with an alpha color such as (0.9, 0.6, 0, 0.4)

vertex Vertex
line_vertex_main(device Vertex *vertices [[buffer(0)]],
                 constant Uniforms &uniforms [[buffer(1)]],
                 uint vid [[vertex_id]])
{
    float4x4 matrix = uniforms.matrix;
    Vertex in = vertices[vid];
    Vertex out;
    out.position = matrix * float4(in.position);
    out.color = in.color;
    return out;
}

fragment float4
line_fragment_main(Vertex inVertex [[stage_in]])
{
    return inVertex.color;
}
The color is incorrect. A color of (0.9, 0.6, 0, 0.4) in Metal is transformed into a strange color:
Left is correct; right is drawn with Metal.
The color is correct when drawing Metal triangles with a color that has no alpha:
Right is drawn with Metal.
Your blending mode is not configured. You can configure blending on your MTLRenderPipelineDescriptor.
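A minimal sketch, assuming a hypothetical render pipeline descriptor named rpld and the same Swift 2-era API used earlier on this page. Because the shader passes the vertex color through without premultiplying, classic source-over blending uses .SourceAlpha rather than .One as the source factor:
// Hypothetical descriptor name; classic source-over alpha blending
// for non-premultiplied colors such as (0.9, 0.6, 0, 0.4).
rpld.colorAttachments[0].blendingEnabled = true
rpld.colorAttachments[0].rgbBlendOperation = .Add
rpld.colorAttachments[0].alphaBlendOperation = .Add
rpld.colorAttachments[0].sourceRGBBlendFactor = .SourceAlpha
rpld.colorAttachments[0].sourceAlphaBlendFactor = .SourceAlpha
rpld.colorAttachments[0].destinationRGBBlendFactor = .OneMinusSourceAlpha
rpld.colorAttachments[0].destinationAlphaBlendFactor = .OneMinusSourceAlpha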

How to understand Exponential Shadow Mapping in HLSL with Directional Light?

I've been trying to understand how ESM works. I have regular shadow mapping in place (occluded/not occluded) in a deferred rendering pipeline and am trying to use ESM instead.
I've tried to adapt this from Cansin:
http://homepage.lnu.se/staff/tblma/Deferred Rendering in XNA 4.pdf
But as he does not use directional lights, I may have a misunderstanding. This is basically my approach to adapting it to directional lights:
Create ShadowMap:
float4 PS(VSO input) : COLOR0
{
    float depth = input.Position2D.z / input.Position2D.w;
    return exp(depth);
}
I am using an orthographic projection matrix (same near/far clip as the actual camera), as I do with regular shadow mapping. Position2D is in screen space; because it's a directional light, Z is always the distance from the surface to the light, or am I wrong?
Get the shadow factor: basically as with regular shadow mapping, I transform into light/screen space and read the depth from the shadow map:
float4 Position = 1;
Position.xy = input.ScreenPosition.xy;
Position.z = Depth; // saved depth from gbuffer
Position = mul(Position, InverseViewProjection);
Position /= Position.w;
float4 LightScreenPos = mul(Position, LightViewProjection);
LightScreenPos /= LightScreenPos.w;
float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
float shadowDepth = tex2D(sampler_shadow, LUV).r;
float shadow = shadowDepth * exp(-10 * LightScreenPos.z);
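For reference, the ESM formulation I am trying to reproduce, written out as a sketch. Here c is an assumed sharpness constant, and the same c has to be used both when writing exp(c * depth) into the shadow map and when reconstructing the factor at lookup time:
// visibility = exp(c * occluderDepth) * exp(-c * receiverDepth)
//            = exp(c * (occluderDepth - receiverDepth)), clamped to [0, 1]
float c = 10.0f;                                   // assumed sharpness constant
float expOccluder = tex2D(sampler_shadow, LUV).r;  // shadow map stores exp(c * depth)
float visibility = saturate(expOccluder * exp(-c * LightScreenPos.z));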
Is my thinking fundamentally wrong?

Metal fragment shader uv coordinates change when reading in vertex color

I was playing with applying dithering to a simple colored quad and found a strange issue. I have a fragment shader that should calculate dithering at some uv and return a dithered color. This works fine on a textured quad, but strangely, when I access color data from my inVertex, the uv coordinates change to some bizarre values, and the y value seems to be mapped to the x axis. I'll try to illustrate what happens when I change things around in the fragment shader code.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv/2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return color;
    }
}
Produces the following result:
The left side of the image shows my gradient quad; notice that if (uv.y < 0.67) maps to x values in the image 🤔.
If I change this fragment shader and nothing else in the code, like so, where I return float4(0, 0, 1, 0) instead of inVertex.color, the uv coordinates are mapped correctly.
fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]],
                                          float2 uv [[point_coord]]) {
    float4 color = inVertex.color;
    uv = (uv/2) + 0.5;
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    else {
        return float4(0, 0, 1, 0); //return color;
    }
}
Produces this (correct) result:
I think I can hack around this problem by applying a 1x1 texture to the gradient and using texture coordinates, but I'd really like to know what is happening here: is this a bug or a feature that I don't understand?
Why are you using [[point_coord]]? What do you think it represents?
Unless you're drawing point primitives, you shouldn't be using that. Since you're drawing a "quad", and given the screenshots, I assume you're not drawing point primitives.
I suspect [[point_coord]] is simply undefined and subject to random-ish behavior when you're drawing triangles. The randomness is apparently affected by the specifics (such as stack layout) of the fragment shader.
You should either be using [[position]] and scaling by the window size or using an interpolated field within your ColoredVertex struct to carry "texture" coordinates.
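For example, a sketch of the second option, assuming a hypothetical uv field is added to ColoredVertex and written by the vertex shader so it gets interpolated per fragment:
struct ColoredVertex {
    float4 position [[position]];
    float4 color;
    float2 uv;   // hypothetical: set in the vertex shader, interpolated across the triangle
};

fragment float4 fragment_colored_dithered(ColoredVertex inVertex [[stage_in]]) {
    float2 uv = inVertex.uv;   // well-defined for triangles, unlike [[point_coord]]
    if (uv.y < 0.67) {
        return float4(uv.y, 0, 0, 1);
    }
    return inVertex.color;
}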

Linear Depth to World Position

I have the following fragment and vertex shaders.
HLSL code:
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}

// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR)
{
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help constructing the shader that gives the world-space position for the depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord);
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
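Since the question asks for world position, note that the function above returns a view-space position. A minimal sketch of going the rest of the way, assuming a hypothetical g_matInvView constant holding the inverse of the view matrix (and the same row-vector mul convention as above):
// Hypothetical inverse-view matrix, set from the application.
float4x4 g_matInvView;

float3 WSPositionFromDepth(float2 vTexCoord)
{
    // Reconstruct in view space first, then undo the view transform.
    float3 vPositionVS = VSPositionFromDepth(vTexCoord);
    float4 vPositionWS = mul(float4(vPositionVS, 1.0f), g_matInvView);
    return vPositionWS.xyz;
}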
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.

Direct3D 11 not rasterizing any vertices

I'm trying to render a simple triangle on screen using Direct3D 11, but nothing shows up. Here are my vertices:
SimpleVertex vertices[ 3 ] =
{
    { XMFLOAT3( -1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3( 1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3( -1.0f, 1.0f, 0.0f ) },
};
The expected output is a triangle with one point in the top left corner of the screen, one point in the top right corner of the screen, and one point in the bottom left corner of the screen. However, nothing is being rendered anywhere.
I'm not performing any matrix transformations, and the vertex shader just passes the input directly to the output. Everything seems to be set up correctly, and when I use the graphics debugger in Visual Studio 2012, the correct vertex positions are being passed to the vertex shader. However, it skips directly from the vertex shader stage to the output merger stage in the pipeline. I assume this means that nothing is being sent to the pixel shader, which would in turn mean that the vertices are being discarded in the rasterizer stage. Why is this happening?
Here is my rasterizer state:
D3D11_RASTERIZER_DESC rasterizerDesc;
rasterizerDesc.AntialiasedLineEnable = false;
rasterizerDesc.CullMode = D3D11_CULL_NONE;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = false;
rasterizerDesc.MultisampleEnable = false;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;
And my viewport (width/height are the window client area matching my back buffer, which are set to 1024x576 in my test setup):
D3D11_VIEWPORT viewport;
viewport.Height = static_cast< float >( height );
viewport.MaxDepth = 1.0f;
viewport.MinDepth = 0.0f;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = static_cast< float >( width );
Can anyone see what is making the rasterizer stage drop my vertices? Or are there any other parts of my D3D setup that could be causing this?
I found this on the internet. It took absolutely ages to load, so I copied and pasted it here; I have highlighted an interesting point in bold.
The D3D_OVERLOADS constructors defined in row 11 offer a convenient way for C++ programmers to create transformed and lit vertices with D3DTLVERTEX.
_D3DTLVERTEX(const D3DVECTOR& v, float _rhw, D3DCOLOR _color,
             D3DCOLOR _specular, float _tu, float _tv)
{
    sx = v.x;
    sy = v.y;
    sz = v.z;
    rhw = _rhw;
    color = _color;
    specular = _specular;
    tu = _tu;
    tv = _tv;
}
The system requires a vertex position that has already been transformed. So the x and y values must be in screen coordinates, and z must be the depth value of the pixel, which could be used in a z-buffer (we won't use a z-buffer here). Z values can range from 0.0 to 1.0, where 0.0 is the closest possible position to the viewer, and 1.0 is the farthest position still visible within the viewing area. Immediately following the position, transformed and lit vertices must include an RHW (reciprocal of homogeneous W) value.
Before rasterizing the vertices, they have to be converted from homogeneous vertices to non-homogeneous vertices, because the rasterizer expects them this way. Direct3D converts the homogeneous vertices to non-homogeneous vertices by dividing the x-, y-, and z-coordinates by the w-coordinate, and produces an RHW value by inverting the w-coordinate. This is only done for vertices which are transformed and lit by Direct3D.
The RHW value is used in multiple ways: for calculating fog, for performing perspective-correct texture mapping, and for w-buffering (an alternate form of depth buffering).
With D3D_OVERLOADS defined, D3DVECTOR is declared as
_D3DVECTOR(D3DVALUE _x, D3DVALUE _y, D3DVALUE _z);
D3DVALUE is the fundamental Direct3D fractional data type. It's declared in d3dtypes.h as
typedef float D3DVALUE, *LPD3DVALUE;
The source shows that the x and y values for the D3DVECTOR are always 0.0f (this will be changed in InitDeviceObjects()). rhw is always 0.5f, color is 0xfffffff, and specular is set to 0. Only the tu1 and tv1 values differ between the four vertices. These are the coordinates of the background texture.
In order to map texels onto primitives, Direct3D requires a uniform address range for all texels in all textures. Therefore, it uses a generic addressing scheme in which all texel addresses are in the range of 0.0 to 1.0 inclusive.
If, instead, you decide to assign texture coordinates to make Direct3D use the bottom half of the texture, the texture coordinates your application would assign to the vertices of the primitive in this example are (0.0,0.0), (1.0,0.0), (1.0,0.5), and (0.0,0.5). Direct3D will apply the bottom half of the texture as the background.
Note: By assigning texture coordinates outside that range, you can create certain special texturing effects.
You will find the declaration of D3DTextr_CreateTextureFromFile() in the Framework source in d3dtextr.cpp. It creates a local bitmap from a passed file. Textures could be created from *.bmp and *.tga files. Textures are managed in the framework in a linked list, which holds the info per texture, called texture container.
struct TextureContainer
{
    TextureContainer* m_pNext;          // Linked list ptr
    TCHAR m_strName[80];                // Name of texture (doubles as image filename)
    DWORD m_dwWidth;
    DWORD m_dwHeight;
    DWORD m_dwStage;                    // Texture stage (for multitexture devices)
    DWORD m_dwBPP;
    DWORD m_dwFlags;
    BOOL m_bHasAlpha;
    LPDIRECTDRAWSURFACE7 m_pddsSurface; // Surface of the texture
    HBITMAP m_hbmBitmap;                // Bitmap containing texture image
    DWORD* m_pRGBAData;
public:
    HRESULT LoadImageData();
    HRESULT LoadBitmapFile( TCHAR* strPathname );
    HRESULT LoadTargaFile( TCHAR* strPathname );
    HRESULT Restore( LPDIRECT3DDEVICE7 pd3dDevice );
    HRESULT CopyBitmapToSurface();
    HRESULT CopyRGBADataToSurface();
    TextureContainer( TCHAR* strName, DWORD dwStage, DWORD dwFlags );
    ~TextureContainer();
};
The problem was actually in my rendering logic. I set the stride of the vertex buffer to 0 instead of the size of my vertex struct. Changed that, and it renders just fine!
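For illustration, a sketch of the fix, assuming the buffer is bound with IASetVertexBuffers and using the hypothetical names immediateContext and vertexBuffer:
// The stride must be the size of one vertex, not 0.
UINT stride = sizeof( SimpleVertex );
UINT offset = 0;
immediateContext->IASetVertexBuffers( 0, 1, &vertexBuffer, &stride, &offset );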
