DirectX: Antialiasing doesn't work

I just want to enable antialiasing in DirectX 9, but it doesn't seem to do much, and the text drawn with ID3DXFont::DrawText(...) looks jagged too.
Here is the initialization part:
pDirect3D = Direct3DCreate9(D3D_SDK_VERSION);
memset(&presentParameters, 0, sizeof(_D3DPRESENT_PARAMETERS_));
presentParameters.BackBufferCount = 1;
presentParameters.BackBufferWidth = 800;
presentParameters.BackBufferHeight = 500;
presentParameters.MultiSampleType = D3DMULTISAMPLE_NONMASKABLE;
presentParameters.MultiSampleQuality = 2;
presentParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
presentParameters.hDeviceWindow = hWnd;
presentParameters.Flags = 0;
presentParameters.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
presentParameters.PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
presentParameters.BackBufferFormat = D3DFMT_R5G6B5;
presentParameters.EnableAutoDepthStencil = TRUE;
presentParameters.AutoDepthStencilFormat = D3DFMT_D16;
presentParameters.Windowed = TRUE;
pDirect3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &presentParameters, &pDevice);
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);
ShowWindow(hWnd, nCmdShow);
UpdateWindow(hWnd);
Is there something I am doing wrong?

First, text isn't anti-aliased by multi-sampling. Second, a MultiSampleQuality of 2 is barely noticeable; try 4 or 8, then toggle multi-sampling on and off and watch the jagged edges to confirm it is actually taking effect.
You should check out the AntiAlias sample provided in the DirectX SDK for details about setting this up properly.

I am creating text with meshes (D3DXCreateTextW), and I notice a significant difference with multi-sampling, even at low quality levels. With any kind of multi-sampling the text and other lines are smooth, whereas without it they are jagged.
Use CheckDeviceMultiSampleType to confirm that your video card does accept the type and level that you are requesting.
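For instance, a minimal check along these lines (a sketch reusing pDirect3D and presentParameters from the question) would validate the settings before CreateDevice:
DWORD qualityLevels = 0;
if (SUCCEEDED(pDirect3D->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        presentParameters.BackBufferFormat,
        TRUE,                              // windowed
        D3DMULTISAMPLE_NONMASKABLE,
        &qualityLevels)))
{
    // MultiSampleQuality must lie in [0, qualityLevels - 1].
    presentParameters.MultiSampleType = D3DMULTISAMPLE_NONMASKABLE;
    presentParameters.MultiSampleQuality = qualityLevels - 1;
}
else
{
    // Fall back to no multi-sampling if the mode is unsupported.
    presentParameters.MultiSampleType = D3DMULTISAMPLE_NONE;
    presentParameters.MultiSampleQuality = 0;
}
The same check should also be run against the depth-stencil format (D3DFMT_D16 here), since the auto depth-stencil surface must support the same multi-sample settings as the back buffer.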

Related

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthographic projection) into my scene, but when I follow the (very incomplete) steps in Frank Luna's DX12 book, I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
0,    // MostDetailedMip
-1,   // MipLevels (-1 = all remaining levels)
0,    // PlaneSlice
0.0f  // ResourceMinLODClamp
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
DXGI_FORMAT_R32_TYPELESS,
D3D12_SRV_DIMENSION_TEXTURE2D,
D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(), &srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&resourceDesc,
D3D12_RESOURCE_STATE_DEPTH_WRITE,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&texDesc,
D3D12_RESOURCE_STATE_GENERIC_READ,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices for my shadow map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture via ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written to it. But simply sampling this texture in my main pass shows that the values are all 1.0f, so nothing actually happened in my shadow pass!
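For reference, the shape I expect such a depth-only shadow pass to take is roughly this (a sketch using the question's names; m_commandList and the second DSV handle, called shadowDsv here, are assumed rather than taken from the question):
// The depth texture must be in DEPTH_WRITE state while rendered to,
// and back in a readable state before the main pass samples it.
CD3DX12_RESOURCE_BARRIER toWrite = CD3DX12_RESOURCE_BARRIER::Transition(
    m_lightDepthTexture.Get(),
    D3D12_RESOURCE_STATE_GENERIC_READ,
    D3D12_RESOURCE_STATE_DEPTH_WRITE);
m_commandList->ResourceBarrier(1, &toWrite);
// Depth-only pass: no render targets, just the shadow map's DSV.
m_commandList->OMSetRenderTargets(0, nullptr, FALSE, &shadowDsv);
m_commandList->ClearDepthStencilView(shadowDsv, D3D12_CLEAR_FLAG_DEPTH, 1.0f, 0, 0, nullptr);
// ... draw the scene with the shadow PSO ...
CD3DX12_RESOURCE_BARRIER toRead = CD3DX12_RESOURCE_BARRIER::Transition(
    m_lightDepthTexture.Get(),
    D3D12_RESOURCE_STATE_DEPTH_WRITE,
    D3D12_RESOURCE_STATE_GENERIC_READ);
m_commandList->ResourceBarrier(1, &toRead);
Since the texture is created in D3D12_RESOURCE_STATE_GENERIC_READ, a missing transition into DEPTH_WRITE before the pass (and back afterwards) would be one way to end up sampling nothing but the cleared 1.0f values.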
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 stencil bits are irrelevant to my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be in the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
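For instance, assuming a D3D12_GRAPHICS_PIPELINE_STATE_DESC named shadowPsoDesc for the shadow pass (the names here are illustrative, not from the question):
// Turn off culling on the shadow PSO so winding order cannot reject triangles.
shadowPsoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;
ThrowIfFailed(m_device->CreateGraphicsPipelineState(&shadowPsoDesc, IID_PPV_ARGS(&m_shadowPso)));
If the shadow map then contains sensible depths, the winding order is confirmed as the culprit.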
You can easily correct the sphere's winding order by swapping any two vertices of every triangle: wherever you have p1,p2,p3, write, for example, p1,p3,p2.
You will also need to check the matrix multiplication order in your vertex shaders. I didn't check it in detail, but it's inconsistent, and I believe it's the reason the sphere will appear black once you fix the issue above. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.

ID3D11Device::CreateTexture2D fails with E_INVALIDARG in NV12 format for certain texture heights

I have the following texture description:
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 1920;
texDesc.Height = 953;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_NV12;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.CPUAccessFlags = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = (D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE);
texDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
And I want to create the texture using the description with ID3D11Device::CreateTexture2D:
HRESULT hr = _pDevice->CreateTexture2D(&texDesc, 0, _ppTexOutput);
With the description given, hr is always E_INVALIDARG.
But it all works if texDesc.Height is set to, for example, 954. Also, for every height the texture is created successfully if texDesc.Format is set to DXGI_FORMAT_B8G8R8A8_UNORM.
Is there something about the DXGI_FORMAT_NV12 format that doesn't support certain texture heights/widths? Should I just use heights that are divisible by 2? Or is there a more complicated rule behind this?
Yes, that format requires that both width and height are even. See here for reference. It explicitly says that for format DXGI_FORMAT_NV12:
Width and height must be even.
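In practice that means rounding the requested dimensions to an even value before creating the texture, for example (width and height here are hypothetical input values):
// NV12 stores chroma at half resolution in both dimensions, so width and
// height must be even; round up to the next even value before creating.
UINT alignedWidth  = (width  + 1) & ~1u;
UINT alignedHeight = (height + 1) & ~1u;
texDesc.Width  = alignedWidth;
texDesc.Height = alignedHeight;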
If you had the debug layer enabled, as Simon Mourier said in the comments, you would already know this. I strongly advise you to enable it, since it makes debugging in DirectX a lot easier.
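Enabling it is a one-flag change at device creation; a minimal sketch (the surrounding D3D11CreateDevice call is standard boilerplate, not the asker's code):
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // turn on the debug layer in debug builds
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(
    nullptr,                          // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                          // no software rasterizer module
    flags,
    nullptr, 0,                       // default feature levels
    D3D11_SDK_VERSION,
    &device,
    nullptr,                          // actual feature level not needed here
    &context);
With the debug layer active, the E_INVALIDARG above comes with a human-readable explanation in the debugger output window.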

How can I adjust the transparency with DirectX 11?

I'm trying to make a transparent object, like colored glass or water, and I have succeeded in making it, but I don't know how to adjust its transparency. Am I trying to do the impossible?
The scene is rendered with a simple calculation of color and lighting; no texture mapping is used in this program.
I tried changing the blend state description, the blend factor, and the sample mask, but I'm not sure it was right.
Here is my blend state description:
BlendStateDesc.AlphaToCoverageEnable = false;
BlendStateDesc.IndependentBlendEnable = false;
BlendStateDesc.RenderTarget[0].BlendEnable = true;
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_DEST_COLOR;
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
and here is how I set it:
float bf[] = { 0.f,0.f,0.f,0.f };
pDeviceContext->OMSetBlendState(m_pd3dBlendState, bf, 0xffffffff);
I will assume this is DX11 and that the part you are missing is actually in your shader.
In your pixel shader, you will be returning a float4 (RGBA). The "a" value controls how much the color contributes to the blend calculation; it lies between 0 and 1. If you could post your pixel shader code, that would help confirm or rule out this solution.
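Note also that the blend state in the question (DEST_COLOR/ZERO) multiplies the source color into the destination and ignores alpha entirely. If the goal is alpha-controlled transparency, the conventional setup looks like this (a sketch of the standard alpha-blend configuration, not the asker's exact code; device and context are assumed):
D3D11_BLEND_DESC bd = {};
bd.RenderTarget[0].BlendEnable = TRUE;
// result = src.rgb * src.a + dst.rgb * (1 - src.a)
bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
ID3D11BlendState* alphaBlend = nullptr;
device->CreateBlendState(&bd, &alphaBlend);
float blendFactor[4] = { 0.f, 0.f, 0.f, 0.f };
context->OMSetBlendState(alphaBlend, blendFactor, 0xffffffff);
With this state, the alpha returned by the pixel shader directly controls the object's transparency.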

No output in second RenderTarget when blend state is set

I have a strange issue when using two RenderTargets in SharpDX, using DX11.
I am rendering a set of objects that can be layered, and am using blend modes to achieve partial transparency. Rendering is done to two render targets in a single pass, with the second render target being used as a colour picker - I simply render the object ID (integer) to this second target and retrieve the object ID from the texture under the mouse after rendering.
The issue I am getting is frustrating, as it does not happen on all computers. In fact, it doesn't happen on any of our development machines but has been reported in the wild - typically on machines with integrated Intel (HD) graphics. On these computers, no output is generated in the second render target. We have been able to reproduce the problem on a laptop here, and if we don't set the blend state, then the issue is resolved. Obviously this isn't a fix, since we need the blending.
The texture descriptions for the main render target (0) and the colour picking target look like this:
var desc = new Texture2DDescription
{
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
Format = Format.B8G8R8A8_UNorm,
Width = width,
Height = height,
MipLevels = 1,
SampleDescription = sampleDesc,
Usage = ResourceUsage.Default,
OptionFlags = RenderTargetOptionFlags,
CpuAccessFlags = CpuAccessFlags.None,
ArraySize = 1
};
var colourPickerDesc = new Texture2DDescription
{
BindFlags = BindFlags.RenderTarget,
Format = Format.R32_SInt,
Width = width,
Height = height,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Default,
OptionFlags = ResourceOptionFlags.None,
CpuAccessFlags = CpuAccessFlags.None,
ArraySize = 1,
};
The blend state is set like this:
var blendStateDescription = new BlendStateDescription { AlphaToCoverageEnable = false };
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
_blendState = new BlendState(_device, blendStateDescription);
and is applied at the start of rendering. I have tried explicitly setting IsBlendEnabled to false for RenderTarget[1] but it makes no difference.
Any help on this would be most welcome - ultimately, we may have to resort to making two render passes but that would be annoying.
I have now resolved this issue, although exactly how or why the "fix" works is not entirely clear. Hat-tip to VTT for pointing me to the IndependentBlendEnable flag in the BlendStateDescription. Setting that flag on its own (to true), along with RenderTarget[1].IsBlendEnabled = false, was not enough. What worked in the end was filling in a complete set of values for RenderTarget[1], along with the aforementioned flags. Presumably all the other values in the second RenderTarget would be ignored, as blend is disabled, but for some reason they need to be populated. As mentioned before, this problem only appears on certain graphics cards, so I have no idea if this is the correct behaviour or just a quirk of those cards.
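Expressed in plain D3D11 C++ terms (the project itself uses SharpDX/C#; this sketch merely restates the working configuration described above), the description that finally worked corresponds to:
D3D11_BLEND_DESC bd = {};
bd.AlphaToCoverageEnable = FALSE;
bd.IndependentBlendEnable = TRUE;             // per-target blend settings
bd.RenderTarget[0].BlendEnable = TRUE;        // normal alpha blending on target 0
bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
bd.RenderTarget[1].BlendEnable = FALSE;       // no blending on the picking target...
bd.RenderTarget[1].SrcBlend = D3D11_BLEND_ONE;        // ...but fill in valid values
bd.RenderTarget[1].DestBlend = D3D11_BLEND_ZERO;      // anyway; some drivers seem
bd.RenderTarget[1].BlendOp = D3D11_BLEND_OP_ADD;      // to require them
bd.RenderTarget[1].SrcBlendAlpha = D3D11_BLEND_ONE;
bd.RenderTarget[1].DestBlendAlpha = D3D11_BLEND_ZERO;
bd.RenderTarget[1].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bd.RenderTarget[1].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;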

In DirectX 9, how to get zbuffer to work properly?

I've been unsuccessful at getting a simple cube geometry with shading turned on to display correctly.
This is C# code, but the values are being passed through SlimDX directly to C++ code.
pParams.BackBufferWidth = 0;
pParams.BackBufferHeight = 0;
pParams.BackBufferCount = 1;
pParams.BackBufferFormat = Format::X8R8G8B8;
pParams.Multisample = MultisampleType::None;
pParams.MultisampleQuality = 0;
pParams.DeviceWindowHandle = this.Handle;
pParams.Windowed = true;
pParams.AutoDepthStencilFormat = Format.D24X8;
pParams.EnableAutoDepthStencil = true;
pParams.PresentFlags = PresentFlags.None;
pParams.FullScreenRefreshRateInHertz = 0;
pParams.PresentationInterval = PresentInterval.Immediate;
pParams.SwapEffect = SwapEffect.Discard;
... are the values in the PresentParameters struct used to set up my Direct3D9 Device object.
During a rendering, SetRenderState is called as follows:
this.D3DDevice.Clear(ClearFlags.Target | ClearFlags.ZBuffer, this.BackColor, 10000.0f, 0);
this.D3DDevice.SetRenderState(RenderState.Ambient, false);
this.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
this.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
this.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);
this.D3DDevice.BeginScene();
Again, this is passed through to C++ code, which marshals the values into calls that a C++ programmer would not fear.
The primitives are diffuse colored vertices (D3DFVF_XYZ | D3DFVF_DIFFUSE). The wireframe view looks like this:
wireframe view http://gallery.me.com/robert.perkins/100045/z-fightingwireframe/web.jpg
The nearer pair of larger triangles is the near face of a cube.
The filled view looks like this:
full view 1 http://gallery.me.com/robert.perkins/100045/Z-fighting/web.jpg
Or this, on a subsequent rendering call:
full view 2 http://gallery.me.com/robert.perkins/100045/zfight2/web.jpg
I'm not sure how to fix this. Where should I begin looking?
Edit: The camera projection matrix looks about like this for one of the frames:
{[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
And, the view matrix looks like this:
camera.ViewMatrix = {[[M11:0.6281456 M12:0.7659309 M13:0.1370506 M14:0]
[M21:0.7705086 M22:-0.5877584 M23:-0.2466911 M24:0]
[M31:-0.1083957 M32:0.2605566 M33:-0.9593542 M34:0]
[M41:-3.225646 M42:-1.096823 M43:20.91392 M44:1]]}
Clear the Z-buffer to 1.0f, not 10000.0f.
From the Clear docs in the SDK:
[in] Clear the depth buffer to this new z value, which ranges from 0 to 1.
It may also be useful to see your projection matrix and viewport settings ...
Edit: How do you build that projection matrix? You have set zNear to 0 and zFar to 1. Try setting your zNear to 0.001f and zFar to 1000.0f and see whether that helps you at all...
A hunch: Try enabling the Z-Buffer before you clear.
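Putting those two suggestions together in plain D3D9 C++ (the question's code is SlimDX/C#; pDevice and the clear color are illustrative):
// Enable depth testing first, then clear the depth buffer to 1.0f (the far plane).
pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
pDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
pDevice->Clear(0, NULL,
               D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
               D3DCOLOR_XRGB(0, 0, 0),   // back color, illustrative
               1.0f,                     // depth values must be in [0, 1]
               0);                       // stencil
pDevice->BeginScene();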
