How can I adjust the transparency with DirectX 11?

I'm trying to make a transparent object, like colored glass or water, and I succeeded in making it,
but I don't know how to adjust its transparency. Am I trying to do the impossible?
The scene is rendered with a simple calculation of color and lighting,
and no texture mapping is used in this program.
I tried changing the blend state desc, the blend factor, and the sample mask,
but I'm not sure it was right.
Here is my blend state desc:
BlendStateDesc.AlphaToCoverageEnable = false;
BlendStateDesc.IndependentBlendEnable = false;
BlendStateDesc.RenderTarget[0].BlendEnable = true;
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_DEST_COLOR;
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
and here is how I set it:
float bf[] = { 0.f,0.f,0.f,0.f };
pDeviceContext->OMSetBlendState(m_pd3dBlendState, bf, 0xffffffff);

I will assume this is DX11 and that the part you are missing is actually in your shader.
In your pixel shader, you will be returning a float4 RGBA value. With an alpha-based blend mode set up, the "a" component, which ranges from 0 to 1, is the source alpha the blend calculation uses, so it is what controls the transparency. If you could post your pixel shader code, that would help confirm or rule out this solution.
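As a side note, the blend desc posted above multiplies the destination color by the source color (SrcBlend = D3D11_BLEND_DEST_COLOR, DestBlend = D3D11_BLEND_ZERO), a mode in which neither the shader's alpha nor the blend factor passed to OMSetBlendState takes part, which would explain why changing bf had no visible effect. One way to make the opacity adjustable without touching the shader is to blend with the blend factor instead; a minimal sketch against the same desc and variable names:
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_BLEND_FACTOR;      // weight source color by the blend factor
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_BLEND_FACTOR; // weight destination by 1 - factor
// ...recreate m_pd3dBlendState from the modified desc, then pick any opacity t at bind time:
float t = 0.5f; // 0.0f = fully transparent, 1.0f = fully opaque
float bf[] = { t, t, t, t };
pDeviceContext->OMSetBlendState(m_pd3dBlendState, bf, 0xffffffff);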

Related

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthographic projection) into my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
0,
-1,
0,
0.0f
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
DXGI_FORMAT_R32_TYPELESS,
D3D12_SRV_DIMENSION_TEXTURE2D,
D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(),&srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&resourceDesc,
D3D12_RESOURCE_STATE_DEPTH_WRITE,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&texDesc,
D3D12_RESOURCE_STATE_GENERIC_READ,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices for my shadow map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture as the depth target via ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written to m_lightDepthTexture. But simply sampling this texture in my main pass shows that the values are all 1.0f, so nothing actually happened in my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant in my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be in the winding order of the vertices of your sphere. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
You can easily correct your sphere's winding order by switching the order of any two vertices of every triangle, so wherever you have p1,p2,p3 you just write it, for example, as p1,p3,p2.
You will also need to check the matrix multiplication order in your vertex shaders. I haven't checked it in detail, but it is inconsistent, and I believe it is the cause of the sphere appearing black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.
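For the suggested cull-mode check, a minimal sketch (shadowPsoDesc and m_shadowPso are hypothetical names for your shadow-pass PSO description and state object):
// Disable culling so triangles are rasterized regardless of winding order:
shadowPsoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;
ThrowIfFailed(m_device->CreateGraphicsPipelineState(&shadowPsoDesc, IID_PPV_ARGS(&m_shadowPso)));
// If the shadow map now fills with sensible depths, the winding order is the culprit.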

What is the 'alpha' value of a pixel shader?

Hi there. These days I am in the process of making a 2D game using the DirectX 11 API,
and it has come to the point where I need to use a transparency effect.
So I have a green background and one footprint in the middle.
Simply by setting nothing but the alpha value of the returned color in the pixel shader, I had a bit of success, but the problem is that it doesn't work for the white color.
This is the pixel shader code:
cbuffer CB_TRANSPARENCY : register(b0)
{
float tp;
};
Texture2D footprint : register(t0);
SamplerState samplerState : register(s0);
struct PS_INPUT
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
float4 main(PS_INPUT input) : SV_Target
{
float3 texColor = footprint.Sample(samplerState, input.tex).xyz;
return float4(texColor, tp);
}
Is there something that I missed?
Or should I use some blend state thing?
Any help would be appreciated.
[edit] Here's something to add: the alpha value actually doesn't do anything without a blending setup; it's just one more variable to be used in any custom calculation.
In my project I was using the SpriteBatch and SpriteFont classes for rendering fonts on screen,
so I guess that inside the SpriteBatch class there might be a blend state under the hood that blends out the black color, which is why I got this effect without setting my own blend state.
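If those classes are the DirectX Tool Kit's SpriteBatch/SpriteFont, that guess is right: SpriteBatch sets a premultiplied-alpha blend state internally when it begins a batch. Assuming the toolkit is in use, its CommonStates helper exposes comparable blend states you can bind for your own draws, e.g. (p_device and p_device_context as in the answer below):
#include <CommonStates.h>
// Bind a straight (non-premultiplied) alpha blend state from the toolkit:
DirectX::CommonStates states(p_device);
p_device_context->OMSetBlendState(states.NonPremultiplied(), nullptr, 0xFFFFFFFF);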
Yes, you need to create a blend state with an appropriate alpha processing mode and then make sure that the created blend state is attached to the output-merger stage of the rendering pipeline prior to drawing:
D3D11_BLEND_DESC blendStateDesc{};
blendStateDesc.AlphaToCoverageEnable = FALSE;
blendStateDesc.IndependentBlendEnable = FALSE;
blendStateDesc.RenderTarget[0].BlendEnable = TRUE;
blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;      // weight the new color by its alpha
blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA; // weight the existing color by 1 - alpha
blendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
blendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
if(not SUCCEEDED(p_device->CreateBlendState(&blendStateDesc, &blendState)))
{
std::abort();
}
p_device_context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
//draw calls...

Displacement Map UV Mapping?

Summary
I'm trying to apply a displacement map (height map) to a rather simple object (a hexagonal plane) and I'm having some unexpected results. I am using grayscale and, as such, I was under the impression my height map should only affect the Z values of my mesh. However, the displacement map I've created stretches the mesh across the X and Y planes. Furthermore, it doesn't seem to use the UV mapping I've created, which all other textures are successfully applied with.
Model and UV Map
Here are reference images of my hexagonal mesh and its corresponding UV map in Blender.
Diffuse and Displacement Textures
These are the diffuse and displacement map textures I am applying to my mesh through Three.JS.
Renders
When I render the plane without a displacement map, you can see that the hexagonal plane stays within the lines. However, when I add the displacement map it clearly affects the X and Y positions of the vertices rather than affecting only the Z, expanding the plane well over the lines.
Code
Here's the relevant Three.js code:
// Textures
var diffuseTexture = THREE.ImageUtils.loadTexture('diffuse.png', null, loaded);
var displacementTexture = THREE.ImageUtils.loadTexture('displacement.png', null, loaded);
// Terrain Uniform
var terrainShader = THREE.ShaderTerrain["terrain"];
var uniformsTerrain = THREE.UniformsUtils.clone(terrainShader.uniforms);
//uniformsTerrain["tNormal"].value = null;
//uniformsTerrain["uNormalScale"].value = 1;
uniformsTerrain["tDisplacement"].value = displacementTexture;
uniformsTerrain["uDisplacementScale"].value = 1;
uniformsTerrain[ "tDiffuse1" ].value = diffuseTexture;
//uniformsTerrain[ "tDetail" ].value = null;
uniformsTerrain[ "enableDiffuse1" ].value = true;
//uniformsTerrain[ "enableDiffuse2" ].value = true;
//uniformsTerrain[ "enableSpecular" ].value = true;
//uniformsTerrain[ "uDiffuseColor" ].value.setHex(0xcccccc);
//uniformsTerrain[ "uSpecularColor" ].value.setHex(0xff0000);
//uniformsTerrain[ "uAmbientColor" ].value.setHex(0x0000cc);
//uniformsTerrain[ "uShininess" ].value = 3;
//uniformsTerrain[ "uRepeatOverlay" ].value.set(6, 6);
// Terrain Material
var material = new THREE.ShaderMaterial({
uniforms:uniformsTerrain,
vertexShader:terrainShader.vertexShader,
fragmentShader:terrainShader.fragmentShader,
lights:true,
fog:true
});
// Load Tile
var loader = new THREE.JSONLoader();
loader.load('models/hextile.js', function(g) {
//g.computeFaceNormals();
//g.computeVertexNormals();
g.computeTangents();
g.materials[0] = material;
tile = new THREE.Mesh(g, new THREE.MeshFaceMaterial());
scene.add(tile);
});
Hypothesis
I'm currently juggling three possibilities as to why this could be going wrong:
The UV map is not applying to my displacement map.
I've made the displacement map incorrectly.
I've missed a crucial step in the process that would lock the displacement to Z-only.
And of course, secret option #4 which is none of the above and I just really have no idea what I'm doing. Or any mixture of the aforementioned.
Live Example
You can view a live example here.
If anybody with more knowledge on the subject could guide me I'd be very grateful!
Edit 1: As per suggestion, I've commented out computeFaceNormals() and computeVertexNormals(). While it did make a slight improvement, the mesh is still being warped.
In your terrain material, set wireframe = true, and you will be able to see what is happening.
Your code and textures are basically fine. The problem occurs when you compute vertex normals in the loader callback function.
The computed vertex normals for the outer ring of your geometry point somewhat outward. This is most likely because computeVertexNormals() computes them by averaging the face normals of the neighboring faces, and the face normals of the "sides" of your model (the black part) are averaged into the vertex normal calculation for the vertices that make up the outer ring of the "cap".
As a result, the outer ring of the "cap" expands outward under the displacement map.
EDIT: Sure enough, straight from your model, the vertex normals of the outer ring point outward. The vertex normals for the inner rings are all parallel. Perhaps Blender is using the same logic to generate vertex normals as computeVertexNormals() does.
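To make the averaging concrete, here is a small illustrative sketch in C++ (not Three.js source; the names are made up) of what computeVertexNormals() effectively does at a rim vertex shared by a flat cap face and a vertical side face:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Scale a vector to unit length.
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

int main() {
    Vec3 capNormal  = { 0.0f, 0.0f, 1.0f }; // face normal of the flat "cap"
    Vec3 sideNormal = { 1.0f, 0.0f, 0.0f }; // face normal of the black "side"
    // Vertex normal = normalized average of the adjacent face normals:
    Vec3 rimNormal = normalize({ capNormal.x + sideNormal.x,
                                 capNormal.y + sideNormal.y,
                                 capNormal.z + sideNormal.z });
    // Prints 0.707 0.000 0.707: displacement along this normal pushes the
    // outer ring sideways in X as well as up in Z.
    std::printf("%.3f %.3f %.3f\n", rimNormal.x, rimNormal.y, rimNormal.z);
}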
The problem is how your object is constructed, because the displacement happens along the normal vector.
The shader code is here:
https://github.com/mrdoob/three.js/blob/master/examples/js/ShaderTerrain.js#L348-350
"vec3 dv = texture2D( tDisplacement, uvBase ).xyz;",
This takes the RGB vector from the displacement texture.
"float df = uDisplacementScale * dv.x + uDisplacementBias;",
This takes only the red value of the vector, because uDisplacementScale is normally 1.0 and uDisplacementBias is 0.0.
"vec3 displacedPosition = normal * df + position;",
This displaces the position along the normal vector.
So to solve it, you either update the normals or the shader.
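For example, to lock the displacement to Z only, the quoted line could displace along a fixed axis instead of the normal (a sketch adapted from the line above, assuming +Z is "up" for your plane):
"vec3 displacedPosition = vec3( 0.0, 0.0, 1.0 ) * df + position;",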

Punching alpha-filled holes into render-to-textures in Three.js

I am using render-to-texture to do postprocessing and then blending several 2D layers together.
Currently I am using a stencil mask to make "holes" in the render-to-texture targets, leaving some of the areas transparent. However, this is a little cumbersome in my case. I'd rather ignore the stencil mask and just use normal polygon-fill operations to draw the holes.
What kinds of methods exist for rendering "fill to alpha 0.0" areas in the scene? I.e., the existing render-to-texture destination alpha value would be ignored and simply replaced with 0.0. I assume you can set OpenGL mode bits (how?) so that this can be done without the need for a custom fragment shader.
I already know how to set the depth mask to ignore mode, so I can redraw over the top of the existing polygons.
You just have to use the THREE.NoBlending blending mode in the material of the polygons you draw to make the holes. The material should be a ShaderMaterial so you can write the desired alpha, like here:
var r = 0.5;
var g = 0;
var b = 0;
var a = 0.8;
var material = new THREE.ShaderMaterial( {
uniforms: {
col: { type: "v4", value: new THREE.Vector4( r, g, b, a ) }
},
fragmentShader: "uniform vec4 col; void main() {\n\tgl_FragColor = col;\n}",
side: THREE.DoubleSide
} );
material.transparent = true;
material.blending = THREE.NoBlending;
(Note that the DoubleSide parameter is not related to the problem but it is useful sometimes.)

In DirectX 9, how to get zbuffer to work properly?

I've been unsuccessful at getting a simple cube geometry with shading turned on to display correctly.
This is c# code, but the values are being passed through SlimDX directly to C++ code.
pParams.BackBufferWidth = 0;
pParams.BackBufferHeight = 0;
pParams.BackBufferCount = 1;
pParams.BackBufferFormat = Format::X8R8G8B8;
pParams.Multisample = MultisampleType::None;
pParams.MultisampleQuality = 0;
pParams.DeviceWindowHandle = this.Handle;
pParams.Windowed = true;
pParams.AutoDepthStencilFormat = Format.D24X8;
pParams.EnableAutoDepthStencil = true;
pParams.PresentFlags = PresentFlags.None;
pParams.FullScreenRefreshRateInHertz = 0;
pParams.PresentationInterval = PresentInterval.Immediate;
pParams.SwapEffect = SwapEffect.Discard;
... are the values in the PresentParameters struct used to set up my Direct3D9 Device object.
During a rendering, SetRenderState is called as follows:
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 10000.0f, 0);
this.D3DDevice.SetRenderState(RenderState.Ambient, false);
this.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
this.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
this.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);
this.D3DDevice.BeginScene();
Again, this is passed through to C++ code, which marshals the values into calls a C++ programmer would not fear.
The primitives are diffuse colored vertices (D3DFVF_XYZ | D3DFVF_DIFFUSE). The wireframe view looks like this:
wireframe view http://gallery.me.com/robert.perkins/100045/z-fightingwireframe/web.jpg
The nearer pair of larger triangles is the near face of a cube.
The filled view looks like this:
full view 1 http://gallery.me.com/robert.perkins/100045/Z-fighting/web.jpg
Or this, on a subsequent rendering call:
full view 2 http://gallery.me.com/robert.perkins/100045/zfight2/web.jpg
I'm not sure how to fix this. Where should I begin looking?
Edit: The camera projection matrix looks about like this for one of the frames:
{[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
And, the view matrix looks like this:
camera.ViewMatrix = {[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
Clear the Z-Buffer to 1.0f not 10000.0f.
From the Clear docs in the SDK:
[in] Clear the depth buffer to this new z value, which ranges from 0 to 1.
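That is, the question's own call with only the depth value fixed:
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 1.0f, 0);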
It may also be useful to see your projection matrix and viewport settings ...
Edit: How do you build that projection matrix? You have set zNear to 0 and zFar to 1. Try setting your zNear to 0.001f and zFar to 1000.0f and see whether that helps you at all...
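A sketch of what that could look like with SlimDX's Matrix.PerspectiveFovLH helper (the field-of-view, aspect-ratio, and camera variable names here are assumptions):
// Build a perspective projection with non-degenerate near/far planes:
camera.ProjectionMatrix = Matrix.PerspectiveFovLH(
    (float)Math.PI / 4.0f, // 45-degree vertical field of view
    aspectRatio,           // backbuffer width / height
    0.001f,                // zNear, per the suggestion above
    1000.0f);              // zFar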
A hunch: Try enabling the Z-Buffer before you clear.
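In terms of the question's own calls, that reordering would be (a sketch, with the depth clear value also fixed to 1.0f):
this.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
this.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 1.0f, 0);
this.D3DDevice.BeginScene();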
