Punching alpha-filled holes into render-to-textures in Three.js - webgl

I am using render-to-texture to do postprocessing and then blending several 2D layers together.
Currently I am using a stencil mask to make "holes" in render-to-texture targets, leaving some of the areas transparent. However, this is a little cumbersome in my case. I'd rather skip the stencil mask and just use normal polygon-fill operations to draw the holes.
What methods exist for rendering "fill to alpha 0.0" areas in the scene? I.e. the existing render-to-texture destination alpha value would be ignored and simply replaced with a 0.0 value. I assume you can set OpenGL mode bits (how?) so that this can be done without the need for a custom fragment shader.
I already know how to set the depth mask to ignore mode, so I can redraw over the top of the existing polygons.

You just have to use the THREE.NoBlending blending mode in the material of the polygons you draw to make the holes. The material should be a ShaderMaterial so you can write the desired alpha, like here:
var r = 0.5;
var g = 0;
var b = 0;
var a = 0.8;
var material = new THREE.ShaderMaterial( {
    uniforms: {
        col: { type: "v4", value: new THREE.Vector4( r, g, b, a ) }
    },
    fragmentShader: "uniform vec4 col; void main() {\n\tgl_FragColor = col;\n}",
    side: THREE.DoubleSide
} );
material.transparent = true;
material.blending = THREE.NoBlending;
(Note that the DoubleSide parameter is not related to the problem but it is useful sometimes.)
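For completeness, here is a minimal sketch of how this material might be used to punch a hole; the renderer, camera, rtScene and rtTarget names are placeholders for your own render-to-texture setup, not part of the original code:
material.uniforms.col.value.set( 0, 0, 0, 0 ); // write alpha 0.0 into the target
material.depthTest = false;                    // draw over the existing polygons
var hole = new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), material );
rtScene.add( hole );
renderer.render( rtScene, camera, rtTarget );  // with NoBlending, destination alpha is replaced, not blended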

Related

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthographic projection) into my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
    0,    // MostDetailedMip
    -1,   // MipLevels (all)
    0,    // PlaneSlice
    0.0f  // ResourceMinLODClamp
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
    DXGI_FORMAT_R32_TYPELESS,
    D3D12_SRV_DIMENSION_TEXTURE2D,
    D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(), &srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&resourceDesc,
D3D12_RESOURCE_STATE_DEPTH_WRITE,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&texDesc,
D3D12_RESOURCE_STATE_GENERIC_READ,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices for the shadow map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture via ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written to m_lightDepthTexture. But simply sampling this texture in my main pass shows that the values are all 1.0f, so nothing actually happened in my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant in my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
You can easily correct the sphere's winding order by switching the order of any two vertices of every triangle, so wherever you have p1, p2, p3 you would write, for example, p1, p3, p2.
You will also need to check the matrix multiplication order in your vertex shaders. I didn't check it in detail, but it's inconsistent, and I believe it's the cause of the sphere appearing black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.
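For reference, here is a minimal sketch of the cull-mode test suggested above; shadowPsoDesc and m_shadowPso are hypothetical names for your shadow-pass pipeline state:
D3D12_GRAPHICS_PIPELINE_STATE_DESC shadowPsoDesc = {};
// ... root signature, shaders, input layout and DSV format as in your existing setup ...
shadowPsoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
shadowPsoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE; // rasterize both windings
ThrowIfFailed(m_device->CreateGraphicsPipelineState(&shadowPsoDesc, IID_PPV_ARGS(&m_shadowPso)));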

How can I adjust the transparency with DirectX 11?

I'm trying to make a transparent object like colored glass or water, and I succeeded in making it, but I don't know how to adjust its transparency. Am I trying to do the impossible?
The scene is rendered with a simple calculation of color and lighting; no texture mapping is used in this program.
I tried changing the blend state description, the blend factor, and the sample mask, but I'm not sure that was right.
Here is my blend state description:
BlendStateDesc.AlphaToCoverageEnable = false;
BlendStateDesc.IndependentBlendEnable = false;
BlendStateDesc.RenderTarget[0].BlendEnable = true;
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_DEST_COLOR;
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
and setting
float bf[] = { 0.f,0.f,0.f,0.f };
pDeviceContext->OMSetBlendState(m_pd3dBlendState, bf, 0xffffffff);
I will assume this is DX11 and that the part you are missing is actually in your shader.
In your pixel shader, you return a float4 RGBA value. The "a" (alpha) value, between 0 and 1, controls how much transparency is applied in the blend. Note, however, that with SrcBlend = D3D11_BLEND_DEST_COLOR and DestBlend = D3D11_BLEND_ZERO, the alpha you output never enters the color calculation; the standard choice for adjustable transparency is SrcBlend = D3D11_BLEND_SRC_ALPHA and DestBlend = D3D11_BLEND_INV_SRC_ALPHA. If you could post your pixel shader code, that would help confirm/deny the solution.
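As a hedged sketch, the conventional alpha-blending setup replaces the DEST_COLOR/ZERO pair above like this; the pixel shader's returned alpha then directly scales how transparent the surface appears:
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;      // weight source by its alpha
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA; // keep (1 - alpha) of the background
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
// In the pixel shader: return float4(litColor, 0.3f); a smaller alpha gives a
// more transparent surface; feed it from a constant buffer to adjust at runtime.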

Fragment shader output interferes with conditional statement

Context: I'm doing all of the following using OpenGLES 2 on iOS 11
While implementing different blend modes used to blend two textures together I came across a weird issue that I managed to reduce to the following:
I'm trying to blend the following two textures together, only using the fragment shader and not the OpenGL blend functions or equations. GL_BLEND is disabled.
Bottom - dst:
Top - src:
(The bottom image is the same as the top image but rotated and blended onto an opaque white background using "normal" (as in Photoshop 'normal') blending)
In order to do the blending I use the
#extension GL_EXT_shader_framebuffer_fetch
extension, so that in my fragment shader I can write:
void main()
{
highp vec4 dstColor = gl_LastFragData[0];
highp vec4 srcColor = texture2D(textureUnit, textureCoordinateInterpolated);
gl_FragColor = blend(srcColor, dstColor);
}
The blend function doesn't perform any blending itself; it only chooses the correct blend implementation to apply, based on a uniform blendMode integer value. In this case, the first texture gets drawn with an already-tested normal blending function, and then the second texture gets drawn on top with the following blendTest function:
Now here's where the problem comes in:
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    highp float threshold = 0.7; // arbitrary
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    //return vec4(0.6, g, 0.0, 1.0); // no yellow lines (Case 1)
    return vec4(0.8, g, 0.0, 1.0); // shows yellow lines (Case 2)
}
This is the output I would expect (made in Photoshop):
So red everywhere and green/yellow in the areas where both textures contain an amount of red that is larger than the arbitrary threshold.
However, the results I get are for some reason dependent on the output value I choose for the red component (0.6 or 0.8) and none of these outputs matches the expected one.
Here's what I see (The grey border is just the background):
Case 1:
Case 2:
So to summarize: if I return a red value that is larger than the threshold, e.g.
return vec4(0.8, g, 0.0, 1.0);
I see vertical yellow lines, whereas if the red component is less than the threshold there will be no yellow/green in the result whatsoever.
Question:
Why does the output of my fragment shader determine whether or not the conditional statement is executed and even then, why do I end up with green vertical lines instead of green boxes (which indicates that the dstColor is not being read properly)?
Does it have to do with the extension that I'm using?
I also want to point out that the textures are both being loaded in and bound properly; I can see them just fine if I return the individual texture info without blending. Even with a normal blending function that I've implemented, everything works as expected.
I found out what the problem was (and I realize that it's not something anyone could have known from just reading the question):
There is an additional fully transparent texture being drawn between the two textures you can see above, which I had forgotten about.
Instead of accounting for that and just returning the dstColor in case the srcColor alpha is 0, the transparent texture's color information (which is (0.0, 0.0, 0.0, 0.0)) was being used when blending, therefore altering the framebuffer content.
Both the transparent texture and the final texture were drawn with the blendTest function, so the output of the first function call was then being read in when blending the final texture.
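In code, the fix amounts to an early-out before blending; blendGuarded below is a hypothetical wrapper around the existing blend dispatch:
highp vec4 blendGuarded(highp vec4 srcColor, highp vec4 dstColor) {
    if (srcColor.a == 0.0) {
        return dstColor; // fully transparent source: leave the framebuffer untouched
    }
    return blend(srcColor, dstColor); // dispatch to the selected blend mode as before
}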

Displacement Map UV Mapping?

Summary
I'm trying to apply a displacement map (height map) to a rather simple object (a hexagonal plane) and I'm having some unexpected results. I am using grayscale, and as such I was under the impression my height map should only affect the Z values of my mesh. However, the displacement map I've created stretches the mesh across the X and Y planes. Furthermore, it doesn't seem to use the UV mapping I've created, to which all other textures are successfully applied.
Model and UV Map
Here are reference images of my hexagonal mesh and its corresponding UV map in Blender.
Diffuse and Displacement Textures
These are the diffuse and displacement map textures I am applying to my mesh through Three.JS.
Renders
When I render the plane without a displacement map, you can see that the hexagonal plane stays within the lines. However, when I add the displacement map it clearly affects the X and Y positions of the vertices rather than affecting only the Z, expanding the plane well over the lines.
Code
Here's the relevant Three.js code:
// Textures
var diffuseTexture = THREE.ImageUtils.loadTexture('diffuse.png', null, loaded);
var displacementTexture = THREE.ImageUtils.loadTexture('displacement.png', null, loaded);
// Terrain Uniform
var terrainShader = THREE.ShaderTerrain["terrain"];
var uniformsTerrain = THREE.UniformsUtils.clone(terrainShader.uniforms);
//uniformsTerrain["tNormal"].value = null;
//uniformsTerrain["uNormalScale"].value = 1;
uniformsTerrain["tDisplacement"].value = displacementTexture;
uniformsTerrain["uDisplacementScale"].value = 1;
uniformsTerrain[ "tDiffuse1" ].value = diffuseTexture;
//uniformsTerrain[ "tDetail" ].value = null;
uniformsTerrain[ "enableDiffuse1" ].value = true;
//uniformsTerrain[ "enableDiffuse2" ].value = true;
//uniformsTerrain[ "enableSpecular" ].value = true;
//uniformsTerrain[ "uDiffuseColor" ].value.setHex(0xcccccc);
//uniformsTerrain[ "uSpecularColor" ].value.setHex(0xff0000);
//uniformsTerrain[ "uAmbientColor" ].value.setHex(0x0000cc);
//uniformsTerrain[ "uShininess" ].value = 3;
//uniformsTerrain[ "uRepeatOverlay" ].value.set(6, 6);
// Terrain Material
var material = new THREE.ShaderMaterial({
    uniforms: uniformsTerrain,
    vertexShader: terrainShader.vertexShader,
    fragmentShader: terrainShader.fragmentShader,
    lights: true,
    fog: true
});
// Load Tile
var loader = new THREE.JSONLoader();
loader.load('models/hextile.js', function(g) {
    //g.computeFaceNormals();
    //g.computeVertexNormals();
    g.computeTangents();
    g.materials[0] = material;
    tile = new THREE.Mesh(g, new THREE.MeshFaceMaterial());
    scene.add(tile);
});
Hypothesis
I'm currently juggling three possibilities as to why this could be going wrong:
The UV map is not applying to my displacement map.
I've made the displacement map incorrectly.
I've missed a crucial step in the process that would lock the displacement to Z-only.
And of course, secret option #4 which is none of the above and I just really have no idea what I'm doing. Or any mixture of the aforementioned.
Live Example
You can view a live example here.
If anybody with more knowledge on the subject could guide me I'd be very grateful!
Edit 1: As per suggestion, I've commented out computeFaceNormals() and computeVertexNormals(). While it did make a slight improvement, the mesh is still being warped.
In your terrain material, set wireframe = true, and you will be able to see what is happening.
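A one-line sketch of that check, applied to the material from the question:
material.wireframe = true; // reveals how the vertices move under the displacement map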
Your code and textures are basically fine. The problem occurs when you compute vertex normals in the loader callback function.
The computed vertex normals for the outer ring of your geometry point somewhat outward. This is most likely because in computeVertexNormals() they are computed by averaging the face normals of each neighboring face, and the face normals of the "sides" of your model (the black part) are averaged into the vertex normal calculation for those vertices that make up the outer ring of the "cap".
As a result, the outer ring of the "cap" expands outward under the displacement map.
EDIT: Sure enough, straight from your model, the vertex normals of the outer ring point outward. The vertex normals for the inner rings are all parallel. Perhaps Blender is using the same logic to generate vertex normals as computeVertexNormals() does.
The problem is how your object is constructed, because the displacement happens along the normal vector.
The code is here:
https://github.com/mrdoob/three.js/blob/master/examples/js/ShaderTerrain.js#L348-350
"vec3 dv = texture2D( tDisplacement, uvBase ).xyz;",
This takes the RGB vector of the displacement texture.
"float df = uDisplacementScale * dv.x + uDisplacementBias;",
This takes only the red value of the vector, because uDisplacementScale is normally 1.0 and uDisplacementBias is 0.0.
"vec3 displacedPosition = normal * df + position;",
This displaces the position along the normal vector.
So to solve it, you either update the normals or the shader.
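If you take the shader route, one sketch of the change is to replace the displacedPosition line quoted above so the offset is applied along a fixed axis instead of the vertex normal (here the local Z axis; adjust to your model's up axis):
"vec3 displacedPosition = vec3( 0.0, 0.0, 1.0 ) * df + position;",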

DrawUserPrimitives<VertexPositionTexture> complains about Color0 missing for vertex shader

First of all, I am new to XNA and the way GPU works and how it cooperates with the XNA (or DirectX) API.
I have a polygon to draw using the SpriteBatch. I'm triangulating the polygon, and creating a VertexPositionTexture array to hold the vertices. I set the vertices (and just for simplicity, set the texture offset vector to zero), and try to draw the primitives, but I get this error:
The current vertex declaration does not include all the elements required by the current vertex shader. Color0 is missing.
Here is my code, I've double checked my vectors from triangulation, they are fine:
VertexPositionTexture[] vertices = new VertexPositionTexture[triangulationResult.Count * 3];
int ctr = 0;
foreach (var item in triangulationResult)
{
    foreach (var point in item.Vertices)
    {
        vertices[ctr++] = new VertexPositionTexture(new Vector3(point.X, point.Y, 0), Vector2.Zero);
    }
}
sb.GraphicsDevice.DrawUserPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
What am I possibly doing wrong here?
Your shader is expecting a Color in the vertex stream, so you have to use VertexPositionColorTexture or change your shader.
It seems that you are not using any custom shader. If the active shader is the one used by SpriteBatch, you won't be able to draw it correctly.
VertexPositionColorTexture[] vertices = new VertexPositionColorTexture[triangulationResult.Count * 3];
int ctr = 0;
foreach (var item in triangulationResult)
{
    foreach (var point in item.Vertices)
    {
        vertices[ctr++] = new VertexPositionColorTexture(new Vector3(point.X, point.Y, 0), Color.White, Vector2.Zero);
    }
}
sb.GraphicsDevice.DrawUserPrimitives<VertexPositionColorTexture>(PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
Use BasicEffect if you are drawing polygons (MSDN tutorial). You should only use SpriteBatch for sprite drawing (i.e., using its Draw methods).
The vertex element type that BasicEffect requires will depend on what settings you apply to it.
To use a vertex element type without a colour component (like VertexPositionTexture), set BasicEffect.VertexColorEnabled to false.
Or alternately, use a vertex element type that supplies a colour, such as VertexPositionColorTexture.
If you want to create a BasicEffect that has the same coordinate system as SpriteBatch, see this answer or this blog post.
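A minimal sketch of that approach, reusing the vertices array from the question; the texture assignment is illustrative and the projection mirrors SpriteBatch's pixel coordinates:
BasicEffect effect = new BasicEffect(sb.GraphicsDevice);
effect.VertexColorEnabled = false; // matches VertexPositionTexture (no Color0 element)
effect.TextureEnabled = true;
effect.Texture = myTexture;        // hypothetical Texture2D
effect.Projection = Matrix.CreateOrthographicOffCenter(
    0, sb.GraphicsDevice.Viewport.Width,
    sb.GraphicsDevice.Viewport.Height, 0, 0, 1);
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    sb.GraphicsDevice.DrawUserPrimitives<VertexPositionTexture>(
        PrimitiveType.TriangleList, vertices, 0, triangulationResult.Count);
}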
