ID3D11Device::CreateTexture2D fails with E_INVALIDARG for the NV12 format at certain texture heights - directx

I have the following texture description:
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 1920;
texDesc.Height = 953;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_NV12;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.CPUAccessFlags = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = (D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE);
texDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
And I want to create the texture using the description with ID3D11Device::CreateTexture2D:
HRESULT hr = _pDevice->CreateTexture2D(&texDesc, 0, _ppTexOutput);
With the description given, hr is always E_INVALIDARG.
But it all works if texDesc.Height is set to, for example, 954. The texture is also created successfully for every height if texDesc.Format is set to DXGI_FORMAT_B8G8R8A8_UNORM.
Is there something about the DXGI_FORMAT_NV12 format that doesn't support certain texture heights/widths? Should I just use heights divisible by 2? Or is there a more complicated rule behind this?

Yes, that format requires both width and height to be even. See the DXGI_FORMAT documentation for reference; it explicitly says for DXGI_FORMAT_NV12:
Width and height must be even.
If you had the debug layer enabled, as Simon Mourier said in the comments, you would already know this. I strongly advise you to enable it, since it makes debugging in DirectX a lot easier.
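A minimal sketch of both points (the device and context variable names here are illustrative, not from the question): create the device with the debug layer so E_INVALIDARG comes with an explanation in the debug output, and round the NV12 dimensions up to even values.
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; // needs the SDK layers / Graphics Tools installed
#endif
ID3D11Device* pDevice = nullptr;
ID3D11DeviceContext* pContext = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
    nullptr, 0, D3D11_SDK_VERSION, &pDevice, nullptr, &pContext);
// NV12 requires even width and height, so round the requested size up.
texDesc.Width = (texDesc.Width + 1) & ~1u;
texDesc.Height = (texDesc.Height + 1) & ~1u; // 953 -> 954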

Related

iOS Raw PixelFormat rggb has 2 channels?

I'm trying to get pure raw data from AVFoundation.
I figured out that my device supports the pixel format CV14BayerRggb.
Following the documentation, I got a raw image file, which is good.
However, I want the byte array from the AVCapturePhoto in the photoOutput delegate callback.
There is one thing I can't understand!
When I look deep into the pixel format description of rggb,
<CVPixelBuffer 0x283b51ad0 width=4032 height=3024 bytesPerRow=8064 pixelFormat=rgg4 iosurface=0x280e59970 poolName=RENDERED_RAW attributes={
PixelFormatDescription = {
BitsPerBlock = 16;
BitsPerComponent = 8;
ContainsAlpha = 0;
ContainsGrayscale = 0;
ContainsRGB = 1;
ContainsYCbCr = 0;
FillExtendedPixelsCallback = {length = 24, bytes = 0x0000000000000000948536ba010000000000000000000000};
PixelFormat = 1919379252;
};
Since this is after the Bayer filter, I know each pixel carries one color value.
Demosaicing then fills in the other color values by interpolating the color data of neighboring pixels.
But why, at this point, is BitsPerBlock (BitsPerPixel) 16?
That would be 2 channels.
Is there anything I misunderstand about the Bayer filter?

(DX12 Shadow Mapping) Depth buffer is always filled with 1

I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthogonal projection) into my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
0,
-1,
0,
0.0f
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
DXGI_FORMAT_R32_TYPELESS,
D3D12_SRV_DIMENSION_TEXTURE2D,
D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(),&srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&resourceDesc,
D3D12_RESOURCE_STATE_DEPTH_WRITE,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&texDesc,
D3D12_RESOURCE_STATE_GENERIC_READ,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices with my map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture with ID3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written onto m_lightDepthTexture. But simply sampling this texture in my main pass proves that the values are actually 1.0f. So nothing actually happened in my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant to my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be in the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO, as in the sketch below.
You can easily correct the sphere's winding order by switching the order of any two vertices of every triangle, so wherever you have p1,p2,p3 you write it, for example, as p1,p3,p2.
You will also need to check the matrix multiplication order in your vertex shaders. I didn't check it in detail, but it's inconsistent, and I believe it is why the sphere will appear black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.
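A minimal sketch of that quick check, assuming a shadow-pass PSO built by copying the main pass description (basePsoDesc and m_shadowPso are hypothetical names, not from the question):
D3D12_GRAPHICS_PIPELINE_STATE_DESC shadowPsoDesc = basePsoDesc; // hypothetical copy of your main PSO desc
shadowPsoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;  // rasterize both faces to rule out winding order
shadowPsoDesc.NumRenderTargets = 0;                             // depth-only pass
shadowPsoDesc.RTVFormats[0] = DXGI_FORMAT_UNKNOWN;
shadowPsoDesc.DSVFormat = DXGI_FORMAT_D32_FLOAT;                // matches the DSV created above
ThrowIfFailed(m_device->CreateGraphicsPipelineState(&shadowPsoDesc, IID_PPV_ARGS(&m_shadowPso)));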

No output in second RenderTarget when blend state is set

I have a strange issue when using two RenderTargets in SharpDX, using DX11.
I am rendering a set of objects that can be layered, and am using blend modes to achieve partial transparency. Rendering is done to two render targets in a single pass, with the second render target being used as a colour picker - I simply render the object ID (integer) to this second target and retrieve the object ID from the texture under the mouse after rendering.
The issue I am getting is frustrating, as it does not happen on all computers. In fact, it doesn't happen on any of our development machines but has been reported in the wild - typically on machines with integrated Intel (HD) graphics. On these computers, no output is generated in the second render target. We have been able to reproduce the problem on a laptop here, and if we don't set the blend state, then the issue is resolved. Obviously this isn't a fix, since we need the blending.
The texture descriptions for the main render target (0) and the colour picking target look like this:
var desc = new Texture2DDescription
{
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
Format = Format.B8G8R8A8_UNorm,
Width = width,
Height = height,
MipLevels = 1,
SampleDescription = sampleDesc,
Usage = ResourceUsage.Default,
OptionFlags = RenderTargetOptionFlags,
CpuAccessFlags = CpuAccessFlags.None,
ArraySize = 1
};
var colourPickerDesc = new Texture2DDescription
{
BindFlags = BindFlags.RenderTarget,
Format = Format.R32_SInt,
Width = width,
Height = height,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Default,
OptionFlags = ResourceOptionFlags.None,
CpuAccessFlags = CpuAccessFlags.None,
ArraySize = 1,
};
The blend state is set like this:
var blendStateDescription = new BlendStateDescription { AlphaToCoverageEnable = false };
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
_blendState = new BlendState(_device, blendStateDescription);
and is applied at the start of rendering. I have tried explicitly setting IsBlendEnabled to false for RenderTarget[1] but it makes no difference.
Any help on this would be most welcome - ultimately, we may have to resort to making two render passes but that would be annoying.
I have now resolved this issue, although exactly how or why the "fix" works is not entirely clear. Hat-tip to VTT for pointing me to the IndependentBlendEnable flag in the BlendStateDescription. Setting that flag on its own (to true), along with RenderTarget[1].IsBlendEnabled = false, was not enough. What worked in the end was filling in a complete set of values for RenderTarget[1], along with the aforementioned flags. Presumably all the other values in the second RenderTarget should be ignored, as blend is disabled, but for some reason they need to be populated. As mentioned before, this problem only appears on certain graphics cards, so I have no idea whether this is the correct behaviour or just a quirk of those cards.
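For reference, a minimal sketch of that workaround, written here as the native D3D11 equivalent of the SharpDX code above (the RenderTarget[1] values are illustrative, and device is an assumed ID3D11Device*):
// Enable independent blend, keep target 1 unblended, but still fill in all of its fields.
D3D11_BLEND_DESC bd = {};
bd.AlphaToCoverageEnable = FALSE;
bd.IndependentBlendEnable = TRUE;
bd.RenderTarget[0].BlendEnable = TRUE;
bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
bd.RenderTarget[1].BlendEnable = FALSE;        // no blending on the picking target
bd.RenderTarget[1].SrcBlend = D3D11_BLEND_ONE; // still populated, per the workaround
bd.RenderTarget[1].DestBlend = D3D11_BLEND_ZERO;
bd.RenderTarget[1].BlendOp = D3D11_BLEND_OP_ADD;
bd.RenderTarget[1].SrcBlendAlpha = D3D11_BLEND_ONE;
bd.RenderTarget[1].DestBlendAlpha = D3D11_BLEND_ZERO;
bd.RenderTarget[1].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bd.RenderTarget[1].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
ID3D11BlendState* blendState = nullptr;
HRESULT hr = device->CreateBlendState(&bd, &blendState);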

View GPU Memory / View Texture2D memory space for debugging

I've got a question about a pixel shader I am trying to implement. Here is what I currently do (this is just for debugging and trying to figure stuff out):
int3 loc;
loc.x = (int)(In.TextureUV.x * resolution_XY.x);
loc.y = (int)(In.TextureUV.x * resolution_XY.x);
loc.z = 0;
float4 r = g_txDiffuse.Load(loc);
return float4(r.x, r.y, r.z, 1);
The problem is, this is always (0, 0, 0, 1).
The texture buffer is created:
D3D11_TEXTURE2D_DESC tDesc;
tDesc.Height = 480;
tDesc.Width = 640;
tDesc.Usage = D3D11_USAGE_DYNAMIC;
tDesc.MipLevels = 1;
tDesc.ArraySize = 1;
tDesc.SampleDesc.Count = 1;
tDesc.SampleDesc.Quality = 0;
tDesc.Format = DXGI_FORMAT_R8_UINT;
tDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
tDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
tDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&tDesc, NULL, &g_pCurrentImage));
I upload the texture (which should be a live display at the end) via:
D3D11_MAPPED_SUBRESOURCE resource;
pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
memcpy( resource.pData, g_Images.GetData(), g_Images.GetDataSize() );
pd3dImmediateContext->Unmap( g_pCurrentImage, 0 );
I've checked the resource.pData, the data in there is a valid 8bit monochrome image. I made sure the data coming from the camera is 8bit monochrome 640x480.
There are a few things I don't fully understand:
If I run the Map / memcpy / Unmap routine every frame, the driver will ultimately crash and the system becomes unresponsive. Is there a different way to update a complete texture every frame that should be used instead?
The texture I uploaded is 8-bit, so why does Texture2D.Load() return a float4? Do I have to use a different method to access the texture data? I tried to Sample it, but that didn't work either. Would I have to use an int buffer or something instead?
Is there a way to debug the GPU memory, to check whether the memcpy worked in the first place?
The Map, memcpy, Unmap really ought not to crash unless you are trying to copy too much data into the texture. It would be interesting to know what GetDataSize() returns. Does it equal 307,200? If it's more than that, there lies your problem.
Texture2D returns a float4 because that's what you've asked for; if you write float r = g_txDiffuse.Load( ... ) you get a single float back. The 8 bits get extended to a normalised float as part of the load process. Are you sure, by the way, that your calculation of loc is correct? As you have it now, loc.x and loc.y will always be the same.
You can debug what's going on with DirectX using PIX. It's a great tool and I highly recommend you familiarise yourself with it.
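On the copy itself, one thing worth noting (an addition, not part of the original answer): the mapped RowPitch can be larger than the 640-byte row width, in which case a flat memcpy puts rows in the wrong places. A minimal sketch of a row-by-row update, reusing the names from the question:
// Copy one row at a time so each 640-byte source row lands at the start of the
// corresponding destination row, regardless of any RowPitch padding.
// A tightly packed 8-bit 640x480 image is 640 * 480 = 307,200 bytes in total.
const UINT rowBytes = 640;
const UINT rows = 480;
D3D11_MAPPED_SUBRESOURCE resource;
HRESULT hr = pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
if (SUCCEEDED(hr))
{
    const BYTE* src = static_cast<const BYTE*>(g_Images.GetData());
    BYTE* dst = static_cast<BYTE*>(resource.pData);
    for (UINT row = 0; row < rows; ++row)
    {
        memcpy(dst + row * resource.RowPitch, src + row * rowBytes, rowBytes);
    }
    pd3dImmediateContext->Unmap(g_pCurrentImage, 0);
}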

DirectX: Antialiasing doesn't work

I just want to enable antialiasing in DirectX 9, but it doesn't seem to do much, and the text drawn with ID3DXFont::DrawText(...) looks jagged too.
Here is the initialization part:
pDirect3D = Direct3DCreate9( D3D_SDK_VERSION);
memset(&presentParameters, 0, sizeof(_D3DPRESENT_PARAMETERS_));
presentParameters.BackBufferCount = 1;
presentParameters.BackBufferWidth = 800;
presentParameters.BackBufferHeight = 500;
presentParameters.MultiSampleType = D3DMULTISAMPLE_NONMASKABLE;
presentParameters.MultiSampleQuality = 2;
presentParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
presentParameters.hDeviceWindow = hWnd;
presentParameters.Flags = 0;
presentParameters.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
presentParameters.PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
presentParameters.BackBufferFormat = D3DFMT_R5G6B5;
presentParameters.EnableAutoDepthStencil = TRUE;
presentParameters.AutoDepthStencilFormat = D3DFMT_D16;
presentParameters.Windowed = TRUE;
pDirect3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,D3DCREATE_SOFTWARE_VERTEXPROCESSING, &presentParameters, &pDevice);
pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);
Is there something I am doing wrong?
ShowWindow(hWnd, nCmdShow);
UpdateWindow(hWnd);
First, text isn't anti-aliased by multi-sampling; secondly, a MultiSampleQuality of 2 is barely noticeable. Try 4 or 8 to make sure the effect is achieved, then try toggling it and watch the jagged edges.
You should checkout the AntiAlias sample provided in the DirectX SDK for details about setting this up properly.
I am creating text with meshes (D3DXCreateTextW), and I notice a significant difference when MultiSampling, even at low quality levels. With any kind of MultiSampling, the text and other lines are smooth, whereas they are jagged without MultiSampling.
Use CheckDeviceMultiSampleType to confirm that your video card does accept the type and level that you are requesting.
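A minimal sketch of that check using the back buffer format from the code above (a sketch, assuming the same pDirect3D and presentParameters variables):
// Ask the adapter how many non-maskable quality levels it supports for this
// back buffer format, then clamp MultiSampleQuality to the allowed range.
DWORD qualityLevels = 0;
HRESULT hr = pDirect3D->CheckDeviceMultiSampleType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
    D3DFMT_R5G6B5, TRUE /* windowed */, D3DMULTISAMPLE_NONMASKABLE, &qualityLevels);
if (SUCCEEDED(hr) && qualityLevels > 0)
{
    presentParameters.MultiSampleType = D3DMULTISAMPLE_NONMASKABLE;
    presentParameters.MultiSampleQuality = qualityLevels - 1; // highest supported quality level
}
else
{
    presentParameters.MultiSampleType = D3DMULTISAMPLE_NONE;  // fall back to no MSAA
    presentParameters.MultiSampleQuality = 0;
}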
