CreateTexture2D returns black image - directx

I am trying to make a desktop recorder, but all I get is a black screen, and I have no clue why.
I tried DX9 first: the back-buffer method gives the same black result, while the front-buffer method does capture frames correctly, but it's too slow (33 ms per frame, almost all of it spent in GetFrontBuffer).
So I decided to try DX11. There are no error returns, no errors when creating the swap chain and device, and frames are in fact being captured (I measure the time and FPS, so something is going on), but they are all black, as if they were coming not from the desktop but from somewhere else.
This is the capture method:
if(contains_errors()){return;}
m_swap_chain->GetBuffer(0, __uuidof(IDXGIResource), (void**)&m_back_buffer_ptr);
return_if_null(m_back_buffer_ptr);
HRESULT hr = m_back_buffer_ptr->QueryInterface(__uuidof(ID3D11Resource), (void**)&m_back_buffer_data);
return_if_failed(hr);
hr = m_swap_chain->GetDevice(__uuidof(ID3D11Device), (void**)&m_device);
return_if_failed(hr);
hr = m_swap_chain->GetDesc(&m_desc);
return_if_failed(hr);
ID3D11Texture2D* texture = nullptr;
hr = m_device->CreateTexture2D(&m_tex_desc, 0, &texture);
return_if_failed(hr);
ID3D11DeviceContext* context = nullptr;
m_device->GetImmediateContext(&context);
return_if_null(context);
context->CopyResource(texture, m_back_buffer_data);
D3D11_MAPPED_SUBRESOURCE map_subres = {0, 0, 0};
hr = context->Map(texture, 0, D3D11_MAP_READ, 0, &map_subres);
return_if_failed(hr);
if(m_current_frame == 0)
{
m_current_frame = new BYTE[map_subres.DepthPitch];
}
memcpy(m_current_frame, map_subres.pData, map_subres.DepthPitch);
context->Unmap(texture, 0); // unmap before releasing, or the next frame's Map will fail
texture->Release();
context->Release();
m_device->Release();
This is the texture desc setup:
ZeroMemory(&m_tex_desc, sizeof(m_tex_desc));
m_tex_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
m_tex_desc.Width = m_desc.BufferDesc.Width;
m_tex_desc.Height = m_desc.BufferDesc.Height;
m_tex_desc.MipLevels = 1;
m_tex_desc.ArraySize = 1;
m_tex_desc.SampleDesc.Count = 1;
m_tex_desc.Usage = D3D11_USAGE_STAGING;
m_tex_desc.BindFlags = 0;
m_tex_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
m_tex_desc.MiscFlags = 0;
This is the swap chain desc:
m_desc.BufferDesc.Width = 1366;
m_desc.BufferDesc.Height = 768;
m_desc.BufferDesc.RefreshRate.Numerator = 1;
m_desc.BufferDesc.RefreshRate.Denominator = 60;
m_desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
m_desc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
m_desc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
m_desc.SampleDesc.Count = 2;
m_desc.SampleDesc.Quality = 0;
m_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
m_desc.BufferCount = 1;
m_desc.OutputWindow = (HWND)m_dx_win->winId();
m_desc.Windowed = true;
m_desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
m_desc.Flags = 0;
Class members:
private:
IDXGISwapChain* m_swap_chain = 0;
ID3D11DeviceContext* m_context = 0;
Dx_Output_Window* m_dx_win = 0;
IDXGIResource* m_back_buffer_ptr = 0;
ID3D11Resource* m_back_buffer_data = 0;
ID3D11Device* m_device = 0;
D3D_FEATURE_LEVEL m_selected_feature;
DXGI_SWAP_CHAIN_DESC m_desc;
D3D11_TEXTURE2D_DESC m_tex_desc = {};
I looked up basically every resource I could, but I could not find any info on why everything succeeds yet the image is all black. I thought maybe there was something up with the display, but no: I took the raw data and displayed the values, and every pixel was exactly 0, which is the color black.
For the "m_desc.OutputWindow = (HWND)m_dx_win->winId();" line I also tried using GetDesktopWindow() instead, but it doesn't change anything; in fact I got some warnings instead.

Related

Resize D3D11Texture2D DirectX 11

I would like to resize a D3D11Texture2D to make it smaller, for example scale a 1920x1080 texture down to 1280x720.
Just so you know, I'm not drawing at all; I just want to get the byte buffer scaled. Here is my code:
if (mRealTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.BindFlags = 0;
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.Usage = D3D11_USAGE_STAGING;
description.MiscFlags = 0;
hr = mDevice->CreateTexture2D(&description, NULL, &mRealTexture);
if (FAILED(hr)) {
if (mRealTexture) {
mRealTexture->Release();
mRealTexture = nullptr;
}
return NULL;
}
}
mImmediateContext->CopyResource(mRealTexture, texture2D);
if (mScaledTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
description.Width = 1440;
description.Height = 585;
description.MipLevels = 4;
description.ArraySize = 1;
description.SampleDesc.Count = 1;
description.SampleDesc.Quality = 0;
description.Usage = D3D11_USAGE_DEFAULT;
hr = mDevice->CreateTexture2D(&description, NULL, &mScaledTexture);
if (FAILED(hr)) {
if (mScaledTexture) {
mScaledTexture->Release();
mScaledTexture = nullptr;
}
return NULL;
}
}
// I want to copy mRealTexture onto mScaledTexture, map the scaled texture, and get the buffer.
Thanks for the help.
Having thought about this, you are left with only a couple of options:
1 - You create a viewport etc. and render to your target size with a full-screen quad, which I get the feeling you don't want to do.
2 - You roll your own scaling, which isn't too bad, and scale the texture as you copy the data from one buffer to another.
Option 2 isn't too bad. The roughest scaling would be to read single points based on the scaling ratio; a more accurate version would average a number of samples over a weighted grid (the weights need to be recalculated for each pixel you visit on your target). A sketch of the simple version follows.
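A minimal nearest-point sketch, assuming tightly packed 32-bit RGBA pixels with pitches given in bytes (all names here are illustrative, not from the question's code):

#include <cstdint>

// Nearest-point scale of a 32-bit RGBA buffer; pitches are in bytes.
void scale_nearest(const uint8_t* src, int src_w, int src_h, int src_pitch,
                   uint8_t* dst, int dst_w, int dst_h, int dst_pitch)
{
    for (int y = 0; y < dst_h; ++y)
    {
        const int sy = y * src_h / dst_h; // nearest source row
        const uint32_t* src_row = reinterpret_cast<const uint32_t*>(src + sy * src_pitch);
        uint32_t* dst_row = reinterpret_cast<uint32_t*>(dst + y * dst_pitch);
        for (int x = 0; x < dst_w; ++x)
            dst_row[x] = src_row[x * src_w / dst_w]; // nearest source column
    }
}

The weighted-grid version replaces the single read with an average over all the source pixels that each target pixel covers.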

Segmentation Fault in vkCmdBlitImage

There is a segmentation fault in vkCmdBlitImage. According to Valgrind, it is an invalid read of size 8 with the address being 0x48. Disabling layers does not fix the problem.
The driver used is the Nvidia Linux driver version 364.19. The GPU is a GeForce GTX 970.
Relevant code:
VkImageCreateInfo img_info;
img_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
img_info.pNext = NULL;
img_info.flags = 0;
img_info.imageType = VK_IMAGE_TYPE_2D;
img_info.format = VK_FORMAT_R8G8B8A8_UNORM;
img_info.extent = (VkExtent3D){info.width, info.height, 1};
img_info.mipLevels = 1;
img_info.arrayLayers = 1;
img_info.samples = VK_SAMPLE_COUNT_1_BIT;
img_info.tiling = VK_IMAGE_TILING_LINEAR;
img_info.usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
img_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
img_info.queueFamilyIndexCount = 0;
img_info.pQueueFamilyIndices = NULL;
img_info.initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED;
VkImage src_image;
VKR(vkCreateImage(info.device, &img_info, NULL, &src_image));
VkMemoryRequirements src_req;
vkGetImageMemoryRequirements(info.device, src_image, &src_req);
VkDeviceMemory src_mem = create_memory(info.physical_device, info.device,
src_req.memoryTypeBits, src_req.size,
true); //The true makes it create host-visible memory.
vkBindImageMemory(info.device, src_image, src_mem, 0);
VkImageSubresource src_subres;
src_subres.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
src_subres.mipLevel = 0;
src_subres.arrayLayer = 0;
VkSubresourceLayout src_subres_layout;
vkGetImageSubresourceLayout(info.device, src_image, &src_subres, &src_subres_layout);
uint8_t* src_data = NULL;
VKR(vkMapMemory(info.device, src_mem, src_subres_layout.offset, src_subres_layout.rowPitch*info.height, 0, (void**)&src_data));
//Code that initialized src_data
VkMappedMemoryRange range;
range.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
range.pNext = NULL;
range.memory = src_mem;
range.offset = src_subres_layout.offset;
range.size = src_subres_layout.rowPitch * info.height;
VKR(vkFlushMappedMemoryRanges(info.device, 1, &range));
vkUnmapMemory(info.device, src_mem);
VkCommandBufferAllocateInfo alloc_info;
alloc_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
alloc_info.pNext = NULL;
alloc_info.commandPool = info.cmd_pool;
alloc_info.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
alloc_info.commandBufferCount = 1;
VkCommandBuffer cmd_buf;
VKR(vkAllocateCommandBuffers(info.device, &alloc_info, &cmd_buf));
VkCommandBufferBeginInfo begin_cmd_buf_info;
begin_cmd_buf_info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin_cmd_buf_info.pNext = NULL;
begin_cmd_buf_info.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
begin_cmd_buf_info.pInheritanceInfo = NULL;
vkBeginCommandBuffer(cmd_buf, &begin_cmd_buf_info);
image_barrier(VK_IMAGE_ASPECT_COLOR_BIT, cmd_buf, VK_ACCESS_HOST_WRITE_BIT,
VK_ACCESS_TRANSFER_READ_BIT, VK_IMAGE_LAYOUT_PREINITIALIZED,
VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, VK_PIPELINE_STAGE_HOST_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT, src_image);
image_barrier(VK_IMAGE_ASPECT_COLOR_BIT, cmd_buf, info.dst_img_access,
VK_ACCESS_TRANSFER_WRITE_BIT, info.dst_img_layout,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
VK_PIPELINE_STAGE_TRANSFER_BIT, info.dst_image);
VkImageBlit region;
region.srcSubresource = (VkImageSubresourceLayers){VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
region.srcOffsets[0] = (VkOffset3D){0, 0, 0};
region.srcOffsets[1] = (VkOffset3D){info.width, info.height, 1};
region.dstSubresource = (VkImageSubresourceLayers){VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
region.dstOffsets[0] = (VkOffset3D){0, 0, 0};
region.dstOffsets[1] = (VkOffset3D){info.width, info.height, 1};
vkCmdBlitImage(cmd_buf, src_image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, info.dst_image, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region, VK_FILTER_NEAREST);
vkEndCommandBuffer(cmd_buf);
The rest of the code is found at https://gitlab.com/pendingchaos/WIP29/tree/00f348f2ef588e5f724fcb1f695e7692128cac4c/src.
Cut down output of vulkaninfo can be found at http://pastebin.com/JaHqCy98.
Your synchronization seems improper for the job. Your barriers discard the preinitialized image and do no synchronization (due to the dst=BOTTOM).
Let me put together something that should work just fine with your computationally demanding 4x4 image processing:
image_barrier( VK_IMAGE_ASPECT_COLOR_BIT, cmd_buf,
VK_ACCESS_HOST_WRITE_BIT, VK_ACCESS_TRANSFER_READ_BIT,
VK_IMAGE_LAYOUT_PREINITIALIZED, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
src_image);
image_barrier( VK_IMAGE_ASPECT_COLOR_BIT, cmd_buf,
info.dst_img_access, VK_ACCESS_TRANSFER_WRITE_BIT,
info.dst_img_layout, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
info.dst_image);
BTW:
the amount should be a VkDeviceSize, not a size_t, in create_memory()
vkBindImageMemory(), vkBeginCommandBuffer() and vkEndCommandBuffer() all return a VkResult and should perhaps be wrapped in your VKR too
if you rewrite the whole aspect of the image, you can use oldLayout=VK_IMAGE_LAYOUT_UNDEFINED to discard the old data (more efficient!)
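For reference, the image_barrier() helper is not shown in the question. A plausible implementation, inferred from its call sites (aspect, command buffer, src/dst access, old/new layout, src/dst stage, image) and assuming a single-mip, single-layer image as created above:

// Sketch of the image_barrier() helper as implied by its call sites;
// assumes the whole image is one mip level and one array layer,
// matching the VkImageCreateInfo in the question.
static void image_barrier(VkImageAspectFlags aspect, VkCommandBuffer cmd_buf,
                          VkAccessFlags src_access, VkAccessFlags dst_access,
                          VkImageLayout old_layout, VkImageLayout new_layout,
                          VkPipelineStageFlags src_stage, VkPipelineStageFlags dst_stage,
                          VkImage image)
{
    VkImageMemoryBarrier barrier;
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.pNext = NULL;
    barrier.srcAccessMask = src_access;
    barrier.dstAccessMask = dst_access;
    barrier.oldLayout = old_layout;
    barrier.newLayout = new_layout;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = image;
    barrier.subresourceRange.aspectMask = aspect;
    barrier.subresourceRange.baseMipLevel = 0;
    barrier.subresourceRange.levelCount = 1;
    barrier.subresourceRange.baseArrayLayer = 0;
    barrier.subresourceRange.layerCount = 1;
    vkCmdPipelineBarrier(cmd_buf, src_stage, dst_stage, 0,
                         0, NULL, 0, NULL, 1, &barrier);
}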

Depth stencil buffer not working directx11

OK, I've tried everything at this point and I'm really lost...
ID3D11Texture2D* depthStencilTexture;
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory (&depthTexDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthTexDesc.Width = set->mapSettings["SCREEN_WIDTH"];
depthTexDesc.Height = set->mapSettings["SCREEN_HEIGHT"];
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
depthTexDesc.MiscFlags = 0;
mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
D3D11_DEPTH_STENCIL_DESC dsDesc;
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;//LESS
// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR; //INCR
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR; //DECR
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Create depth stencil state
mDevice->CreateDepthStencilState(&dsDesc, &mDepthStencilState);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory (&depthStencilViewDesc, sizeof(depthStencilViewDesc));
depthStencilViewDesc.Format = depthTexDesc.Format;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
mDevice->CreateDepthStencilView(depthStencilTexture, &depthStencilViewDesc, &mDepthStencilView);
mDeviceContext->OMSetDepthStencilState(mDepthStencilState, 1);
and then afterwards I call:
mDeviceContext->OMSetRenderTargets(1, &mTargetView, mDepthStencilView);
Obviously I clear before every frame:
mDeviceContext->ClearRenderTargetView(mTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 1.0f));
mDeviceContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );
and still it just keeps the last pixel drawn, with no depth testing...
(screenshot)
PS: I've checked the rasterizer state and it is correctly drawing only the front faces.
Any help, anyone?
Check your HRESULTs - the call to CreateTexture2D is almost certainly failing because you have specified CPU_ACCESS flags on a DEFAULT texture. Since you never check any errors or pointers, this just propagates NULL to all your depth objects, effectively disabling depth testing.
You can also catch errors like this by enabling the D3D debug layer: add D3D11_CREATE_DEVICE_DEBUG to the flags on D3D11CreateDevice. If you had done this, you would have seen the following debug spew:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DEFAULT Resource cannot have any CPUAccessFlags set. The following CPUAccessFlags bits cannot be set in this case: D3D11_CPU_ACCESS_READ (1), D3D11_CPU_ACCESS_WRITE (1). [ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
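The fix, then, is simply to drop the CPU access flags from the DEFAULT-usage texture. A corrected sketch of the description above (width/height stand in for the map-settings lookups):

// DEFAULT-usage resources live in GPU memory only; CPUAccessFlags must be 0.
D3D11_TEXTURE2D_DESC depthTexDesc = {};
depthTexDesc.Width = width;   // e.g. set->mapSettings["SCREEN_WIDTH"]
depthTexDesc.Height = height; // e.g. set->mapSettings["SCREEN_HEIGHT"]
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthTexDesc.CPUAccessFlags = 0; // the fix
HRESULT hr = mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
if (FAILED(hr)) { /* stop here instead of propagating NULL to the DSV */ }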

Creating a Texture2D array with CPU write access

I'm trying to create a Texture2D array with CPU write access. In detail, the code I'm using looks like this:
D3D10_TEXTURE2D_DESC texDesc;
texDesc.Width = 32;
texDesc.Height = 32;
texDesc.MipLevels = 1;
texDesc.ArraySize = 2;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_DYNAMIC;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
texDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.MiscFlags = 0;
ID3D10Texture2D* texture2D;
CHECK_SUCCESS(device->CreateTexture2D(&texDesc, NULL, &texture2D));
This, however, fails with E_INVALIDARG. Creating a 3D texture with the same dimensions (i.e. 32x32x2) and parameters works well, but is not desired in this case.
Does anyone know why this setup would not be valid?
You need to provide an array of D3D10_SUBRESOURCE_DATA of length texDesc.ArraySize (one entry per array slice) as the second parameter of CreateTexture2D(); see the sketch below.
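A sketch, under the assumption that slice0 and slice1 are hypothetical 32x32 buffers of R32G32B32_FLOAT texels:

// Illustrative: one D3D10_SUBRESOURCE_DATA entry per subresource
// (ArraySize * MipLevels entries; here 2 slices x 1 mip).
// slice0 and slice1 are hypothetical 32x32 R32G32B32_FLOAT pixel buffers.
D3D10_SUBRESOURCE_DATA initData[2];
initData[0].pSysMem = slice0;
initData[0].SysMemPitch = 32 * 3 * sizeof(float); // bytes per row
initData[0].SysMemSlicePitch = 0;                 // unused for 2D textures
initData[1].pSysMem = slice1;
initData[1].SysMemPitch = 32 * 3 * sizeof(float);
initData[1].SysMemSlicePitch = 0;
CHECK_SUCCESS(device->CreateTexture2D(&texDesc, initData, &texture2D));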

Return error when trying to copy the render target's backbuffer

I have a WDDM user-mode display driver for DX9. I would like to dump the render target's back buffer to a bmp file. Since the render target resource is not lockable, I have to create a resource from a system buffer, bitblt from the render target to the system buffer, and then save the system buffer to the bmp file. However, the bitblt call always returns the error code E_FAIL. I also tried calling pfnCaptureToSysMem, which returned the same error code. Anything wrong here?
D3DDDI_SURFACEINFO nfo;
nfo.Depth = 0;
nfo.Width = GetRenderSize().cx;
nfo.Height = GetRenderSize().cy;
nfo.pSysMem = NULL;
nfo.SysMemPitch = 0;
nfo.SysMemSlicePitch = 0;
D3DDDIARG_CREATERESOURCE resource;
resource.Format = D3DDDIFMT_A8R8G8B8;
resource.Pool = D3DDDIPOOL_SYSTEMMEM;
resource.MultisampleType = D3DDDIMULTISAMPLE_NONE;
resource.MultisampleQuality = 0;
resource.pSurfList = &nfo;
resource.SurfCount = 1;
resource.MipLevels = 1;
resource.Fvf = 0;
resource.VidPnSourceId = 0;
resource.RefreshRate.Numerator = 0;
resource.RefreshRate.Denominator = 0;
resource.hResource = NULL;
resource.Flags.Value = 0;
resource.Flags.Texture = 1;
resource.Flags.Dynamic = 1;
resource.Rotation = D3DDDI_ROTATION_IDENTITY;
HRESULT hr = m_pDevice->m_deviceFuncs.pfnCreateResource(m_pDevice->GetDrv(), &resource);
HANDLE hSysSpace = resource.hResource;
D3DDDIARG_BLT blt;
blt.hSrcResource = m_pDevice->m_hRenderTarget;
blt.hDstResource = hSysSpace;
blt.SrcRect.left = 0;
blt.SrcRect.top = 0;
blt.SrcRect.right = GetRenderSize().cx;
blt.SrcRect.bottom = GetRenderSize().cy;
blt.DstRect = blt.SrcRect;
blt.DstSubResourceIndex = 0;
blt.SrcSubResourceIndex = 0;
blt.Flags.Value = 0;
blt.ColorKey = 0;
hr = m_pDevice->m_deviceFuncs.pfnBlt(m_pDevice, &blt);
You are on the right track, but I think you can use the DirectX functions for this.
In order to copy the render target from video memory to system memory you should use the IDirect3DDevice9::GetRenderTargetData() function.
This function requires that the destination surface is an offscreen plain surface created with pool D3DPOOL_SYSTEMMEM. This surface also must have the same dimensions as the render target (no stretching allowed). Use IDirect3DDevice9::CreateOffscreenPlain() to create this surface.
Then this surface can be locked and the color data can be accessed by the CPU.
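A minimal sketch of that approach, assuming an IDirect3DDevice9* named device (error handling elided):

// Copy the current render target into a lockable SYSTEMMEM surface.
IDirect3DSurface9* rt = NULL;
IDirect3DSurface9* sysmem = NULL;
device->GetRenderTarget(0, &rt);
D3DSURFACE_DESC desc;
rt->GetDesc(&desc);
device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format,
                                    D3DPOOL_SYSTEMMEM, &sysmem, NULL);
device->GetRenderTargetData(rt, sysmem); // video -> system memory, same size, no stretch
D3DLOCKED_RECT lr;
sysmem->LockRect(&lr, NULL, D3DLOCK_READONLY);
// lr.pBits / lr.Pitch now give the CPU the pixel data for the bmp dump
sysmem->UnlockRect();
sysmem->Release();
rt->Release();

If linking against D3DX is acceptable, D3DXSaveSurfaceToFile() can also write the surface straight to a bmp file.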
