I'm puzzled why this isn't working.
I'm trying to add texture data to each of the cube texture's faces. For some reason, only the first (+X) face works. The MSDN documentation is quite sparse, but it looks like this should do the trick:
// mip-level 0 data
// R8G8B8A8 texture
uint32_t sizeWidth = textureWidth * sizeof(uint8_t) * 4;
if (isCubeTexture)
{
for (uint32_t index = 0; index < gCubemapNumTextures; ++index)
{
const uint32_t subResourceID = D3D11CalcSubresource(0, index, 1);
context->UpdateSubresource(mTexture, subResourceID, NULL, &textureData.at(sizeWidth * textureHeight * index), sizeWidth, 0);
}
}
When debugging and looking at the faces, it's all just black except the first face, which seems to load fine. So obviously I am doing something wrong; how do you properly upload cube texture data to all the faces?
EDIT: the following parameters are used to create the texture:
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(D3D11_TEXTURE2D_DESC));
textureDesc.Width = textureWidth;
textureDesc.Height = textureHeight;
textureDesc.ArraySize = isCubeTexture ? gCubemapNumTextures : 1;
if (isSRGB)
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
else
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
if (isCubeTexture)
textureDesc.MiscFlags |= D3D11_RESOURCE_MISC_TEXTURECUBE;
DXCALL(device->CreateTexture2D(&textureDesc, NULL, &mTexture));
Then, after uploading the data, I generate the mip chain like this:
context->GenerateMips(mShaderResourceView);
And again, it works fine, but only for the first (+X) face.
You create the texture with "0" mip levels by virtue of zeroing out the texture description. Zero means "full mip chain please", which means more than 1 mip (unless your texture is 1x1).
Your call to D3D11CalcSubresource passes a third argument of '1', suggesting only one mip, which appears not to be true. Be sure to pass the correct number of mips to this helper function or it won't calculate the correct subresource index.
You can get the mip count by calling GetDesc() after the texture has been created.
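For example, a minimal sketch of the upload loop with that fix applied, reusing the variables from the question; the mip count is queried from the created texture rather than hard-coded:

D3D11_TEXTURE2D_DESC createdDesc;
mTexture->GetDesc(&createdDesc); // createdDesc.MipLevels now holds the real mip count

for (uint32_t index = 0; index < gCubemapNumTextures; ++index)
{
    // Subresource index of mip 0 for cube face 'index'
    const uint32_t subResourceID = D3D11CalcSubresource(0, index, createdDesc.MipLevels);
    context->UpdateSubresource(mTexture, subResourceID, NULL,
        &textureData.at(sizeWidth * textureHeight * index), sizeWidth, 0);
}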
I would like to resize a D3D11Texture2D to make it smaller. For example, I have a texture at 1920x1080 and I would like to scale it down to 1280x720.
Just so you know, I'm not drawing at all; I just want to get the byte buffer scaled. Here is my code:
if (mRealTexture == nullptr) {
    D3D11_TEXTURE2D_DESC description;
    texture2D->GetDesc(&description);
    description.BindFlags = 0;
    description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
    description.Usage = D3D11_USAGE_STAGING;
    description.MiscFlags = 0;
    hr = mDevice->CreateTexture2D(&description, NULL, &mRealTexture);
    if (FAILED(hr)) {
        if (mRealTexture) {
            mRealTexture->Release();
            mRealTexture = nullptr;
        }
        return NULL;
    }
}
mImmediateContext->CopyResource(mRealTexture, texture2D);

if (mScaledTexture == nullptr) {
    D3D11_TEXTURE2D_DESC description;
    texture2D->GetDesc(&description);
    description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
    description.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    description.Width = 1440;
    description.Height = 585;
    description.MipLevels = 4;
    description.ArraySize = 1;
    description.SampleDesc.Count = 1;
    description.SampleDesc.Quality = 0;
    description.Usage = D3D11_USAGE_DEFAULT;
    hr = mDevice->CreateTexture2D(&description, NULL, &mScaledTexture);
    if (FAILED(hr)) {
        if (mScaledTexture) {
            mScaledTexture->Release();
            mScaledTexture = nullptr;
        }
        return NULL;
    }
}
// I want to copy mRealTexture onto mScaledTexture, map the scaled texture and get the buffer.
Thanks for the help.
Having thought about this, you are left with only a couple of options:
1 - You create a viewport etc. and render to your target size with a full-screen quad, which I get the feeling you don't want to do.
2 - You roll your own scaling, which isn't too bad, and scale the texture as you copy the data from one buffer to another.
Option 2 isn't too bad. The roughest scaling would read single points based on the scaling ratio, but a more accurate version would average a number of samples based on a weighted grid (the weights need to be recalculated for each pixel you visit on your target).
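For what it's worth, here is a minimal sketch of option 2 using the roughest (nearest-point) scaling. It assumes tightly packed 32-bit RGBA buffers and illustrative names; a real version should honor the RowPitch returned when you map the staging texture.

// Nearest-point downscale; src and dst are tightly packed RGBA8 buffers.
void ScaleNearest(const uint32_t* src, uint32_t srcW, uint32_t srcH,
                  uint32_t* dst, uint32_t dstW, uint32_t dstH)
{
    for (uint32_t y = 0; y < dstH; ++y)
    {
        const uint32_t srcY = y * srcH / dstH; // nearest source row
        for (uint32_t x = 0; x < dstW; ++x)
        {
            const uint32_t srcX = x * srcW / dstW; // nearest source column
            dst[y * dstW + x] = src[srcY * srcW + srcX];
        }
    }
}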
My program is a DirectX program that draws a container cube with smaller cubes inside it; these smaller cubes fall over time. I hope you understand what I mean.
The program isn't complete yet. It should draw only the container, but it draws nothing; only the background color is visible. I have only included what I think is needed.
These are the routines that initialize the program:
bool Game::init(HINSTANCE hinst, HWND _hw) {
    Directx11::init(hinst, _hw);
    return LoadContent();
}
Directx11::init()
bool Directx11::init(HINSTANCE hinst, HWND hw) {
    _hinst = hinst;
    _hwnd = hw;
    RECT rc;
    GetClientRect(_hwnd, &rc);
    height = rc.bottom - rc.top;
    width = rc.right - rc.left;
    UINT flags = 0;
#ifdef _DEBUG
    flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
    HR(D3D11CreateDevice(0, _driverType, 0, flags, 0, 0, D3D11_SDK_VERSION, &d3dDevice, &_featureLevel, &d3dDeviceContext));
    if (d3dDevice == 0 || d3dDeviceContext == 0)
        return 0;
    DXGI_SWAP_CHAIN_DESC sdesc;
    ZeroMemory(&sdesc, sizeof(DXGI_SWAP_CHAIN_DESC));
    sdesc.Windowed = true;
    sdesc.BufferCount = 1;
    sdesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sdesc.BufferDesc.Height = height;
    sdesc.BufferDesc.Width = width;
    sdesc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    sdesc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    sdesc.OutputWindow = _hwnd;
    sdesc.BufferDesc.RefreshRate.Denominator = 1;
    sdesc.BufferDesc.RefreshRate.Numerator = 60;
    sdesc.Flags = 0;
    sdesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    if (m4xMsaaEnable) {
        sdesc.SampleDesc.Count = 4;
        sdesc.SampleDesc.Quality = m4xMsaaQuality - 1;
    } else {
        sdesc.SampleDesc.Count = 1;
        sdesc.SampleDesc.Quality = 0;
    }
    IDXGIDevice* Device = 0;
    HR(d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&Device)));
    IDXGIAdapter* Ad = 0;
    HR(Device->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&Ad)));
    IDXGIFactory* fac = 0;
    HR(Ad->GetParent(__uuidof(IDXGIFactory), reinterpret_cast<void**>(&fac)));
    fac->CreateSwapChain(d3dDevice, &sdesc, &swapchain);
    ReleaseCOM(Device);
    ReleaseCOM(Ad);
    ReleaseCOM(fac);
    ID3D11Texture2D* back = 0;
    HR(swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&back)));
    HR(d3dDevice->CreateRenderTargetView(back, 0, &RenderTarget));
    D3D11_TEXTURE2D_DESC Tdesc;
    ZeroMemory(&Tdesc, sizeof(D3D11_TEXTURE2D_DESC));
    Tdesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    Tdesc.ArraySize = 1;
    Tdesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    Tdesc.Height = height;
    Tdesc.Width = width;
    Tdesc.Usage = D3D11_USAGE_DEFAULT;
    Tdesc.MipLevels = 1;
    if (m4xMsaaEnable) {
        Tdesc.SampleDesc.Count = 4;
        Tdesc.SampleDesc.Quality = m4xMsaaQuality - 1;
    } else {
        Tdesc.SampleDesc.Count = 1;
        Tdesc.SampleDesc.Quality = 0;
    }
    HR(d3dDevice->CreateTexture2D(&Tdesc, 0, &depthview));
    HR(d3dDevice->CreateDepthStencilView(depthview, 0, &depth));
    d3dDeviceContext->OMSetRenderTargets(1, &RenderTarget, depth);
    D3D11_VIEWPORT vp;
    vp.TopLeftX = 0.0f;
    vp.TopLeftY = 0.0f;
    vp.Width = static_cast<float>(width);
    vp.Height = static_cast<float>(height);
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    d3dDeviceContext->RSSetViewports(1, &vp);
    return true;
}
SetBuild() prepares the matrices inside the container for the smaller cubes. I didn't program it to draw the smaller cubes yet.
And this is the function that draws the scene:
void Game::Render() {
    d3dDeviceContext->ClearRenderTargetView(RenderTarget, reinterpret_cast<const float*>(&Colors::LightSteelBlue));
    d3dDeviceContext->ClearDepthStencilView(depth, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
    d3dDeviceContext->IASetInputLayout(_layout);
    d3dDeviceContext->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    d3dDeviceContext->IASetIndexBuffer(indices, DXGI_FORMAT_R32_UINT, 0);
    UINT strides = sizeof(Vertex), off = 0;
    d3dDeviceContext->IASetVertexBuffers(0, 1, &vertices, &strides, &off);
    D3DX11_TECHNIQUE_DESC des;
    Tech->GetDesc(&des);
    Floor* Lookup; /* used to look up inside the matrices structure (Floor contains XMMATRIX Pieces[9]) */
    std::vector<XMFLOAT4X4> filled; // saves the matrices of the smaller cubes
    XMMATRIX V = XMLoadFloat4x4(&View), P = XMLoadFloat4x4(&Proj);
    XMMATRIX vp = V * P;
    XMMATRIX wvp;
    for (UINT i = 0; i < des.Passes; i++)
    {
        d3dDeviceContext->RSSetState(BuildRast);
        wvp = XMLoadFloat4x4(&(B.Memory[0].Pieces[0])) * vp; // loading the matrix at translation (0,0,0)
        HR(ShadeMat->SetMatrix(reinterpret_cast<float*>(&wvp)));
        HR(Tech->GetPassByIndex(i)->Apply(0, d3dDeviceContext));
        d3dDeviceContext->DrawIndexed(build_ind_count, build_ind_index, build_vers_index);
        d3dDeviceContext->RSSetState(PieseRast);
        UINT r1 = B.GetSize(), r2 = filled.size();
        for (UINT j = 0; j < r1; j++)
        {
            Lookup = &B.Memory[j];
            for (UINT r = 0; r < Lookup->filledindeces.size(); r++)
            {
                filled.push_back(Lookup->Pieces[Lookup->filledindeces[r]]);
            }
        }
        for (UINT j = 0; j < r2; j++)
        {
            ShadeMat->SetMatrix(reinterpret_cast<const float*>(&filled[i]));
            Tech->GetPassByIndex(i)->Apply(0, d3dDeviceContext);
            d3dDeviceContext->DrawIndexed(piese_ind_count, piese_ind_index, piese_vers_index);
        }
    }
    HR(swapchain->Present(0, 0));
}
Thanks in advance.
One bug in your program appears to be that you're using i, the index of the current pass, as an index into the filled vector, when you should apparently be using j.
Another apparent bug is that in the loop where you are supposed to be iterating over the elements of filled, you're not iterating over all of them. The value r2 is set to the size of filled before you append anything to it during that pass. During the first pass this means that nothing will be drawn by this loop. If your technique only has one pass then this means that the second DrawIndexed call in your code will never be executed.
It also appears you should only be adding matrices to filled once, regardless of the number of passes the technique has. You should consider whether your code is actually meant to work with techniques with multiple passes.
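As a sketch of how you could restructure it following the points above (using the names from your code, and assuming a single filled list is what you want): build filled once before the pass loop, and index it with j.

// Build the list of matrices once, outside the pass loop.
std::vector<XMFLOAT4X4> filled;
for (UINT j = 0; j < B.GetSize(); j++)
{
    Floor* lookup = &B.Memory[j];
    for (UINT r = 0; r < lookup->filledindeces.size(); r++)
        filled.push_back(lookup->Pieces[lookup->filledindeces[r]]);
}
for (UINT i = 0; i < des.Passes; i++)
{
    // ... draw the container as before ...
    for (UINT j = 0; j < filled.size(); j++)
    {
        ShadeMat->SetMatrix(reinterpret_cast<const float*>(&filled[j])); // j, not i
        Tech->GetPassByIndex(i)->Apply(0, d3dDeviceContext);
        d3dDeviceContext->DrawIndexed(piese_ind_count, piese_ind_index, piese_vers_index);
    }
}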
OK, I've tried everything at this point and I'm really lost...
ID3D11Texture2D* depthStencilTexture;
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory (&depthTexDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthTexDesc.Width = set->mapSettings["SCREEN_WIDTH"];
depthTexDesc.Height = set->mapSettings["SCREEN_HEIGHT"];
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
depthTexDesc.MiscFlags = 0;
mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
D3D11_DEPTH_STENCIL_DESC dsDesc;
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;//LESS
// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR; //INCR
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR; //DECR
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Create depth stencil state
mDevice->CreateDepthStencilState(&dsDesc, &mDepthStencilState);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory (&depthStencilViewDesc, sizeof(depthStencilViewDesc));
depthStencilViewDesc.Format = depthTexDesc.Format;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
mDevice->CreateDepthStencilView(depthStencilTexture, &depthStencilViewDesc, &mDepthStencilView);
mDeviceContext->OMSetDepthStencilState(mDepthStencilState, 1);
And then afterwards I call:
mDeviceContext->OMSetRenderTargets(1, &mTargetView, mDepthStencilView);
Obviously I clear before every frame:
mDeviceContext->ClearRenderTargetView(mTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 1.0f));
mDeviceContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );
And still it just keeps the last pixel drawn, with no depth testing.
PS: I've checked the rasterizer, and it is correctly drawing only the front faces.
Any help, anyone?
Check your HRESULTs - the call to CreateTexture2D is almost certainly failing because you have specified CPU_ACCESS flags on a DEFAULT texture. Since you never check any errors or pointers, this just propagates NULL to all your depth objects, effectively disabling depth testing.
You can also catch errors like this by enabling the D3D debug layer: add D3D11_CREATE_DEVICE_DEBUG to the flags on D3D11CreateDevice. If you had done this, you would see the following debug spew:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DEFAULT Resource cannot have any CPUAccessFlags set. The following CPUAccessFlags bits cannot be set in this case: D3D11_CPU_ACCESS_READ (1), D3D11_CPU_ACCESS_WRITE (1). [ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
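A sketch of the fix, keeping the rest of your setup as-is: clear the CPU access flags on the DEFAULT-usage texture and actually check the HRESULT instead of ignoring it.

depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.CPUAccessFlags = 0; // DEFAULT resources cannot have CPU access flags

HRESULT hr = mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
if (FAILED(hr))
{
    // Bail out instead of silently propagating NULL to all the depth objects.
    return false;
}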
I have code like this:
Mat img = Highgui.imread(inFile);
Mat templ = Highgui.imread(templateFile);
int result_cols = img.cols() - templ.cols() + 1;
int result_rows = img.rows() - templ.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
Imgproc.matchTemplate(img, templ, result, Imgproc.TM_CCOEFF);
/////Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
for (int i = 0; i < result_rows; i++)
    for (int j = 0; j < result_cols; j++)
        if (result.get(i, j)[0] > ?)
            // match!
I need to parse the input image to find multiple occurrences of the template image. I want to have a result like this:
result[0][0]= 15%
result[0][1]= 17%
result[x][y]= 47%
If I use TM_CCOEFF, all results are in [-xxxxxxxx.xxx, +xxxxxxxx.xxx].
If I use TM_SQDIFF, all results are xxxxxxxx.xxx.
If I use TM_CCORR, all results are xxxxxxxx.xxx.
How can I detect a match or a mismatch? What is the right condition in the if?
If I normalize the matrix, the application sets one value to 1 and I can't detect whether the template isn't present in the image at all (all mismatches).
Thanks in advance.
You can append "_NORMED" to the method names (for instance: CV_TM_CCOEFF_NORMED in C++; could be slightly different in Java) to get a sensible value for your purpose.
By 'sensible', I mean that you will get values in the range of 0 to 1 which can be multiplied by 100 for your purpose.
Note: for CV_TM_SQDIFF_NORMED, the result will be in the range 0 to 1, and you will have to subtract the value from 1 in order to make sense of it, because the lowest value is the best match for this method.
Tip: you can use the Java equivalent of minMaxLoc() in order to get the minimum and maximum values. It's very useful when used in conjunction with matchTemplate. I believe minMaxLoc() is located in the Core class.
Here's a C++ implementation:
matchTemplate(input_mat, template_mat, result_mat, method_NORMED);

double minVal, maxVal;
double percentage;
Point minLoc, maxLoc;
minMaxLoc(result_mat, &minVal, &maxVal, &minLoc, &maxLoc, Mat());

if (method_NORMED == CV_TM_SQDIFF_NORMED)
{
    percentage = 1 - minVal; // lowest value is the best match for SQDIFF
}
else
{
    percentage = maxVal;
}
Useful C++ docs:
Match template description along with available methods: http://docs.opencv.org/modules/imgproc/doc/object_detection.html
MinMaxLoc documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=minmaxloc#minmaxloc
Another approach would be background differencing: you can observe the distortion.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class BackgroundDifference {
    public static void main(String[] arg) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat model = Highgui.imread("e:\\answers\\template.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        Mat scene = Highgui.imread("e:\\answers\\front7.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        Mat diff = new Mat();
        Core.absdiff(model, scene, diff);
        Imgproc.threshold(diff, diff, 15, 255, Imgproc.THRESH_BINARY);
        int distortion = Core.countNonZero(diff);
        System.out.println("distortion:" + distortion);
        Highgui.imwrite("e:\\answers\\diff.jpg", diff);
    }
}
I'm trying to create a Texture2D array with CPU write access. In detail, the code I'm using looks like this:
D3D10_TEXTURE2D_DESC texDesc;
texDesc.Width = 32;
texDesc.Height = 32;
texDesc.MipLevels = 1;
texDesc.ArraySize = 2;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_DYNAMIC;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
texDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.MiscFlags = 0;
ID3D10Texture2D* texture2D;
CHECK_SUCCESS(device->CreateTexture2D(&texDesc, NULL, &texture2D));
This, however, fails with E_INVALIDARG. Creating a 3D texture with the same dimensions (i.e. 32x32x2) and parameters works well, but is not desired in this case.
Does anyone know why this setup would not be valid?
You need to provide an array of D3D10_SUBRESOURCE_DATA of length texDesc.ArraySize as input for the second parameter of CreateTexture2D().
For more help see this example.
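For example, a minimal sketch following that suggestion (the slice contents and names here are illustrative; each entry's SysMemPitch is the byte width of one row of R32G32B32_FLOAT texels):

#include <vector>

std::vector<float> slice0(32 * 32 * 3, 0.0f); // array slice 0, 3 floats per texel
std::vector<float> slice1(32 * 32 * 3, 1.0f); // array slice 1

D3D10_SUBRESOURCE_DATA initData[2] = {};
initData[0].pSysMem = slice0.data();
initData[0].SysMemPitch = 32 * 3 * sizeof(float); // bytes per row
initData[1].pSysMem = slice1.data();
initData[1].SysMemPitch = 32 * 3 * sizeof(float);

CHECK_SUCCESS(device->CreateTexture2D(&texDesc, initData, &texture2D));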