DirectX Device::CreateTexture2D() crashes when called with all 3 params

I am trying to render a 2D texture from an opencv::Mat to a DX11 texture, and then to a shader resource afterwards. The problem is that the program crashes on Device::CreateTexture2D() and I can't figure out why. I have researched the whole day and just don't see what's wrong here.
Furthermore, the problem does not seem to be the cv::Mat as the resource, because I also tried this example: D3D11 CreateTexture2D in 32 bit format, with the chess example as the resource for the texture... and the function still crashes when called with the 3rd parameter.
I found others who had problems with that function; sometimes the cause was that SysMemPitch was not set for 2D textures, but unfortunately that's not the case here.
Error Output: First-chance exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Unhandled exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Here is the relevant code:
bool Texture::InitCameraStream(ID3D11Device* device, ARiftControl* arift_control)
{
  D3D11_TEXTURE2D_DESC td;
  ZeroMemory(&td, sizeof(td));
  td.ArraySize = 1;
  td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
  td.Usage = D3D11_USAGE_DYNAMIC;
  td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
  td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
  td.Height = arift_control->picture_1_.size().height;
  td.Width = arift_control->picture_1_.size().width;
  td.MipLevels = 1;
  td.MiscFlags = 0;
  td.SampleDesc.Count = 1;
  td.SampleDesc.Quality = 0;

  D3D11_SUBRESOURCE_DATA srInitData;
  srInitData.pSysMem = arift_control->picture_1_.ptr();
  srInitData.SysMemPitch = arift_control->picture_1_.size().width * 4;

  ID3D11Texture2D* tex = 0;
  // Validation-only call: with NULL as the third parameter, CreateTexture2D
  // returns S_FALSE if the description and initial data are valid.
  if (device->CreateTexture2D(&td, &srInitData, NULL) == S_FALSE)
  {
    std::cerr << "Texture Description: OK" << std::endl << "Subresource: OK" << std::endl;
  }
  if (FAILED(device->CreateTexture2D(&td, &srInitData, &tex)))
  {
    std::cerr << "Error: Texture could not be created!" << std::endl;
    return false;
  }

  // Create the shader-resource view
  D3D11_SHADER_RESOURCE_VIEW_DESC srDesc;
  srDesc.Format = td.Format;
  srDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
  srDesc.Texture2D.MostDetailedMip = 0;
  srDesc.Texture2D.MipLevels = 1;
  if (FAILED(device->CreateShaderResourceView(tex, &srDesc, &texture_)))
  {
    std::cerr << "Can't create Shader Resource View" << std::endl;
    return false;
  }
  return true;
}
CreateTexture2D returns S_FALSE when the first two parameters are valid and 0 is passed as the 3rd parameter. In my case it does return S_FALSE the first time, so the debug output appears. When calling CreateTexture2D with the 3rd parameter (the texture COM object), it crashes. I have absolutely no idea anymore.
Furthermore, I tried to set up debugging with DirectX and followed this tutorial: http://blog.rthand.com/post/2010/10/25/Capture-DirectX-1011-debug-output-to-Visual-Studio.aspx - but I can't see a "Debug" window in my project properties in Visual Studio 2013. So I still get an "igd10iumd32.pdb not loaded" window after the program crashes.
Edit: at least I was able to fix the issue with the additional D3D debug output for now. In my Visual Studio 2013 I had to set the following: Project Properties -> Debugging -> Debug Type -> Mixed to get the additional D3D logs :)
Can anyone help here? It's really frustrating to be stuck on that single function the whole day.
Many thanks!
Max

Your input texture data passed in D3D11_SUBRESOURCE_DATA is not sufficiently sized. In your comment you said that the input image is 900x1600 and that the link is a JPEG. However, you are telling D3D that the data format is DXGI_FORMAT_B8G8R8A8_UNORM. JPEG is a compressed format, so the data stream will be smaller than it would be in BGRA format. When your driver (igd10iumd32.dll) attempts to read this input stream, it crashes because the buffer is not as large as you told D3D it was.
You can use D3DX11CreateTextureFromFile to load JPEG data. There are also some free image conversion libraries you can use to convert the JPEG into a D3D-native format.
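If you want to keep uploading from a cv::Mat instead, the key point is that the buffer you hand to CreateTexture2D must be decoded BGRA pixel data of Width * Height * 4 bytes. A minimal sketch, assuming picture_1_ holds a decoded 8-bit BGR image (not the raw JPEG byte stream); MakeInitData is a hypothetical helper, not part of the original code:
#include <d3d11.h>
#include <opencv2/imgproc/imgproc.hpp>

// Hypothetical helper (sketch): fill a D3D11_SUBRESOURCE_DATA from a decoded BGR cv::Mat.
// 'bgra' must stay alive until CreateTexture2D has consumed the returned struct.
D3D11_SUBRESOURCE_DATA MakeInitData(const cv::Mat& frameBgr, cv::Mat& bgra)
{
    cv::cvtColor(frameBgr, bgra, cv::COLOR_BGR2BGRA);    // expand 3 channels to 4 (BGRA)

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = bgra.data;                            // decoded pixels, not JPEG bytes
    init.SysMemPitch = static_cast<UINT>(bgra.step);     // bytes per row (>= width * 4)
    return init;
}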

Related

How to send clusters in separate nodes (ROS, PCL)

Hi, I'm new to the Point Cloud Library. I'm trying to show the clustering result points in RViz or the PCL viewer, but nothing shows up. I also realized that my data shows nothing when I subscribe to it and cout it. I hope you can help with my problem, thanks.
This is my code for the clustering and publishing callback:
void cloudReceive(const sensor_msgs::PointCloud2ConstPtr& inputMsg){
    mutex_lock.lock();
    pcl::fromROSMsg(*inputMsg, *inputCloud);
    cout << inputCloud << endl;   // note: this prints the shared pointer, not the cloud contents

    pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGB>);
    tree->setInputCloud(inputCloud);

    std::vector<pcl::PointIndices> cluster_indices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
    ec.setClusterTolerance(0.03); // 3 cm
    ec.setMinClusterSize(200);    // min points
    ec.setMaxClusterSize(1000);   // max points
    ec.setSearchMethod(tree);
    ec.setInputCloud(inputCloud);
    ec.extract(cluster_indices);

    if (cluster_indices.size() > 0){
        std::vector<pcl::PointIndices>::const_iterator it;
        int i = 0;
        for (it = cluster_indices.begin(); it != cluster_indices.end(); ++it){
            if (i >= 10)
                break;
            cloud_cluster[i]->points.clear();
            std::vector<int>::const_iterator idx_it;
            for (idx_it = it->indices.begin(); idx_it != it->indices.end(); idx_it++)
                cloud_cluster[i]->points.push_back(inputCloud->points[*idx_it]);
            cloud_cluster[i]->width = cloud_cluster[i]->points.size();
            // cloud_cluster[i]->height = 1;
            // cloud_cluster[i]->is_dense = true;
            cout << "PointCloud representing the Cluster: " << cloud_cluster[i]->points.size() << " data points" << endl;

            std::stringstream ss;
            ss << "cobaa_pipecom2_cluster_" << i << ".pcd";
            writer.write<pcl::PointXYZRGB> (ss.str(), *cloud_cluster[i], false);

            pcl::toROSMsg(*cloud_cluster[i], outputMsg);
            // cout << "data = " << outputMsg << endl;
            cloud_cluster[i]->header.frame_id = FRAME_ID;
            pclpub[i++].publish(outputMsg);
            // i++;
        }
    }
    else
        ROS_INFO_STREAM("0 clusters extracted\n");

    mutex_lock.unlock();   // don't forget to release the lock taken at the top of the callback
}
And this is the main function:
int main(int argc, char** argv){
    for (int z = 0; z < 10; z++) {
        // std::cout << " - clustering/" << z << std::endl;
        cloud_cluster[z] = pcl::PointCloud<pcl::PointXYZRGB>::Ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
        cloud_cluster[z]->height = 1;
        cloud_cluster[z]->is_dense = true;
        // cloud_cluster[z]->header.frame_id = FRAME_ID;
    }

    ros::init(argc, argv, "clustering");
    ros::NodeHandlePtr nh(new ros::NodeHandle());
    pclsub = nh->subscribe("/pclsegmen", 1, cloudReceive);

    std::string pub_str("clustering/0");
    for (int z = 0; z < 10; z++) {
        pub_str[11] = z + 48; // 48 = 0 (ASCII)
        // z++;
        pclpub[z] = nh->advertise <sensor_msgs::PointCloud2> (pub_str, 1);
    }
    // pclpub = nh->advertise<sensor_msgs::PointCloud2>("/pclcluster",1);

    ros::spin();
}
This isn't an exact answer, but I think it addresses your issue and may ease your debugging.
RViz can directly subscribe to a published point cloud - the one I'm assuming you're trying to see in the cloudReceive callback. If you set the Frame to whichever frame it's being published in and add the cloud from the available topics, you should see the points. (That's easier than trying to rebroadcast it as different topics.)
Also, I recommend looking at the rostopic command-line tool. You can do rostopic list to check whether the topic is being published, rostopic bw to see if it's really publishing the expected volume of data (e.g. bytes vs. kilobytes vs. megabytes), rostopic hz to see how frequently (if ever) it's publishing, and (briefly) rostopic echo to look at the data itself. (I'm assuming from your question that it's more an issue with the data coming into your node.)
If you're having trouble not with the data coming into the node, nor with the visualization of point cloud data in general, but with the transformed data that's supposed to come out of the node, I would check that the clustering worked and reduce your code to just one publisher publishing something, as in the sketch below. You may be doing something weird, like messing up your pointers. You could also turn on stronger compilation warnings for your node with -Wall -Wextra -Werror or step through its execution via gdb (launch-prefix="xterm -e gdb --args").
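For reference, a minimal single-publisher test node could look like this (a sketch only; the "clustering/test" topic and the "map" frame are placeholders, not names from your code):
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

// Publishes one hard-coded point at 1 Hz so the publish -> RViz path
// can be verified independently of the clustering code.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "single_cluster_test");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("clustering/test", 1);

    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    pcl::PointXYZRGB pt;
    pt.x = 0.0f; pt.y = 0.0f; pt.z = 0.0f;
    pt.r = 255;  pt.g = 0;    pt.b = 0;      // one red point at the origin
    cloud.push_back(pt);

    sensor_msgs::PointCloud2 msg;
    pcl::toROSMsg(cloud, msg);
    msg.header.frame_id = "map";             // must match the Fixed Frame chosen in RViz

    ros::Rate rate(1.0);
    while (ros::ok())
    {
        msg.header.stamp = ros::Time::now();
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}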
The solution was to build the topic name with boost::lexical_cast instead of the ASCII offset. Thanks for your response, I hope this can help others.
for (int z = 0; z < CLOUD_QTD; z++) {
    // pub_str[11] = z + 48;
    std::string topicName = "/pclcluster/" + boost::lexical_cast<std::string>(z);
    global::pub[z] = n.advertise <sensor_msgs::PointCloud2> (topicName, 1);
}

Error when creating a 2D texture with dynamic usage

I am pretty new to DirectX and I am stumbling over resource handling.
First, I created a texture that I can read/write on the GPU, and it worked well.
Now, as you can see in my code, I wanted to read this texture on the CPU as well (reading from the application side), so I changed the usage from USAGE_DEFAULT to USAGE_DYNAMIC.
Microsoft::WRL::ComPtr<ID3D11Texture2D> outputTexture;

D3D11_TEXTURE2D_DESC outputTex_desc;
outputTex_desc.Format = DXGI_FORMAT_R32_FLOAT;
outputTex_desc.Width = 3;
outputTex_desc.Height = 3;
outputTex_desc.MipLevels = 1;
outputTex_desc.ArraySize = 1;
outputTex_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
                           D3D11_BIND_SHADER_RESOURCE;
outputTex_desc.SampleDesc.Count = msCount;
outputTex_desc.SampleDesc.Quality = msQuality;
outputTex_desc.Usage = D3D11_USAGE_DYNAMIC;
outputTex_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
outputTex_desc.MiscFlags = 0;

// CREATE 'TEXTURE'
device->CreateTexture2D( // FAIL HERE !!!
    &outputTex_desc,
    nullptr,
    outputTexture.GetAddressOf());
// CREATE 'SRV'
...
// CREATE 'UAV'
...
It starts failing exactly when it executes device->CreateTexture2D().
Any advice would be amazing.
The first two steps in debugging any Direct3D 11 program are:
Make sure you are checking every HRESULT for either success (SUCCEEDED macro) or failure (FAILED macro). If it is safe to ignore the return value at runtime, then the function returns void. See ThrowIfFailed.
Enable the Direct3D debug device and look for debug output (a.k.a. use D3D11_CREATE_DEVICE_DEBUG in your Debug configuration).
If you enable the Direct3D debug device, you'll get detailed information on why the API returns failure codes in many cases.
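For reference, here is a minimal sketch of step 2, enabling the debug layer only in debug builds (the driver type and default feature levels here are just illustrative choices):
#include <d3d11.h>
#include <wrl/client.h>

UINT creationFlags = 0;
#ifdef _DEBUG
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;   // enable the Direct3D debug layer
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hr = D3D11CreateDevice(
    nullptr,                        // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                        // no software rasterizer module
    creationFlags,
    nullptr, 0,                     // default feature levels
    D3D11_SDK_VERSION,
    device.GetAddressOf(),
    nullptr,                        // returned feature level (not needed here)
    context.GetAddressOf());
if (FAILED(hr)) { /* handle the error */ }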
If you did that, you'd see the error for this code:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DYNAMIC Resource
may only have the D3D11_CPU_ACCESS_WRITE CPUAccessFlags set.
[ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
In order to READ it on the CPU, you must copy it to a D3D11_USAGE_STAGING resource first. For example source code on doing that, see the ScreenGrab code from the DirectX Tool Kit.
You don't mention what your msCount or msQuality values are here. I assumed 1, and 0 respectively. If you use any other values, you'll get:
D3D11 ERROR: ID3D11Device::CreateTexture2D: Multisampling is not supported
with the D3D11_BIND_UNORDERED_ACCESS BindFlag. SampleDesc.Count must be 1
and SampleDesc.Quality must be 0.
[ STATE_CREATION ERROR #99: CREATETEXTURE2D_INVALIDBINDFLAGS]
In order to access a texture from the CPU (read mode), you need to create a separate staging texture, then copy your texture into it.
These are the flags for the GPU-only texture (please note I force the sample count to 1, as UAV access is not allowed for multisampled textures):
D3D11_TEXTURE2D_DESC gpuTexDesc;
gpuTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gpuTexDesc.Width = 3;
gpuTexDesc.Height = 3;
gpuTexDesc.MipLevels = 1;
gpuTexDesc.ArraySize = 1;
gpuTexDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
D3D11_BIND_SHADER_RESOURCE;
gpuTexDesc.SampleDesc.Count = 1;
gpuTexDesc.SampleDesc.Quality = 0;
gpuTexDesc.Usage = D3D11_USAGE_DEFAULT;
gpuTexDesc.CPUAccessFlags = 0; // no CPU access
gpuTexDesc.MiscFlags = 0;
Then you create a second texture for reading
D3D11_TEXTURE2D_DESC readTexDesc;
readTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
readTexDesc.Width = 3;
readTexDesc.Height = 3;
readTexDesc.MipLevels = 1;
readTexDesc.ArraySize = 1;
readTexDesc.BindFlags = 0; //No bind flags allowed for staging
readTexDesc.SampleDesc.Count = 1;
readTexDesc.SampleDesc.Quality = 0;
readTexDesc.Usage = D3D11_USAGE_STAGING; //need staging flag for read
readTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
readTexDesc.MiscFlags = 0;
then you can use CopyResource:
deviceContext->CopyResource(readTex, gpuTex);
Once done, you can finally access texture data for reading using Map
D3D11_MAPPED_SUBRESOURCE MappedResource;
deviceContext->Map(readTex, 0, D3D11_MAP_READ, 0, &MappedResource);
MappedResource gives you access to the data on the CPU. Once you are done processing, don't forget to Unmap the resource.
deviceContext->Unmap(readTex, 0);
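One detail to keep in mind when reading the mapped data: rows are laid out MappedResource.RowPitch bytes apart, which can be larger than width * sizeof(float). A sketch of reading back the 3x3 R32_FLOAT texture from above, to be placed between the Map and Unmap calls (variable names are illustrative):
// Copy the mapped 3x3 DXGI_FORMAT_R32_FLOAT texture into a flat array.
// pData points at row 0; each row starts RowPitch bytes after the previous one.
const UINT width = 3, height = 3;
float values[9];

const BYTE* src = static_cast<const BYTE*>(MappedResource.pData);
for (UINT y = 0; y < height; ++y)
{
    const float* row = reinterpret_cast<const float*>(src + y * MappedResource.RowPitch);
    for (UINT x = 0; x < width; ++x)
        values[y * width + x] = row[x];
}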

OpenCV VideoCapture set function does not change frame rate FPS?

Good afternoon everybody,
In OpenCV I'm having trouble getting the VideoCapture's set function to change the frame rate. Here is my test program, using the "768x576.avi" test video file from the "C:\OpenCV-3.1.0\opencv\sources\samples\data" directory
// VideoCaptureSetTest.cpp

#include<opencv2/core/core.hpp>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>

#include<iostream>
#include<conio.h>           // it may be necessary to change or remove this line if not using Windows

///////////////////////////////////////////////////////////////////////////////////////////////////
int main() {

    cv::VideoCapture capVideo;
    cv::Mat imgFrame;

    capVideo.open("768x576.avi");

    if (!capVideo.isOpened()) {                                 // if unable to open video file
        std::cout << "error reading video file" << std::endl << std::endl;   // show error message
        _getch();           // it may be necessary to change or remove this line if not using Windows
        return(0);                                              // and exit program
    }

    char chCheckForEscKey = 0;

    double dblFPS = capVideo.get(CV_CAP_PROP_FPS);
    std::cout << "1st time - dblFPS = " << dblFPS << "\n";      // this prints "10" to std out as expected

    bool blnSetReturnValue = capVideo.set(CV_CAP_PROP_FPS, 5);

    dblFPS = capVideo.get(CV_CAP_PROP_FPS);                     // re-read the property after the set call
    std::cout << "2nd time - dblFPS = " << dblFPS << "\n";      // this also prints "10", why not "5" as expected ??!!
    std::cout << "blnSetReturnValue = " << blnSetReturnValue << std::endl;   // this prints "0" (i.e. false)

    while (chCheckForEscKey != 27) {
        capVideo.read(imgFrame);
        if (imgFrame.empty()) break;
        cv::imshow("imgFrame", imgFrame);
        chCheckForEscKey = cv::waitKey(1);
    }

    return(0);
}
I'm using a very standard setup here, OpenCV 3.1.0, Visual Studio 2015, Windows 10, and the .avi file I'm testing with is the one that ships with OpenCV.
No matter what I attempt to set the FPS to, it stays at 10 and the set function always returns false.
Yes, I'm aware of the hack fix of setting the cv::waitKey() parameter to a larger value to achieve a certain delay, but this would then be computer-dependent and I may need to run video on various computers in the future so this is not an option.
Am I doing something wrong? Is the VideoCapture::set function known not to work in some cases? Has anybody else experienced the same results? I checked the OpenCV issue tracker and did not find anything to this effect. Is this a bug I should raise a ticket for? If anybody has some experience with this, please advise.
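For what it's worth, one way to make the waitKey pacing less machine-dependent is to derive the delay from the file's reported FPS and the time actually spent on each frame, instead of hard-coding a value. This is only a sketch of that idea, not a fix for set(CV_CAP_PROP_FPS) itself:
#include <opencv2/opencv.hpp>
#include <chrono>
#include <algorithm>

// Pace playback at the file's native FPS by measuring per-frame processing time.
int main() {
    cv::VideoCapture cap("768x576.avi");
    if (!cap.isOpened()) return 0;

    const double fps = cap.get(cv::CAP_PROP_FPS);               // 10 for this file
    const int framePeriodMs = static_cast<int>(1000.0 / fps);

    cv::Mat frame;
    while (true) {
        auto start = std::chrono::steady_clock::now();
        if (!cap.read(frame) || frame.empty()) break;
        cv::imshow("imgFrame", frame);                           // ... per-frame work goes here ...
        auto elapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start).count();
        // Wait out the remainder of the frame period (at least 1 ms so imshow refreshes).
        int delay = std::max(1, framePeriodMs - static_cast<int>(elapsedMs));
        if (cv::waitKey(delay) == 27) break;                     // Esc to quit
    }
    return 0;
}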

cvExtractSURF doesn't work when useProvidedKeypoints = true

So, I'm trying to extract some SURF keypoints, but I want to impose my own key points, so I set the last parameter, "useProvidedKeypoints", to true.
Also, when I create my KeyPoint I use the default constructor (so some default values there). I only change the point "pt" and the octave, which I set to 3.
I'm using the C++ interface to SURF, but I know that the problem is right at cvExtractSURF, because I copied that part of the code into mine to help me debug.
When I call that function with the last parameter set to true, I get this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left in a couple of details, like the layer_id stuff, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
    int layer_id = b_beg->layer_id;
    json_point_info_coord &jpic = b_beg->coord;
    jpic.feature_id = features[layer_id].keypoints.size();

    KeyPoint kp;
    kp.octave = 3;
    kp.pt.x = jpic.x;
    kp.pt.y = jpic.y;
    features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
    debug_msg("extract_features #4.1");
    cv::detail::ImageFeatures &cdif = features[i];
    Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
    debug_msg("extract_features #4.2");
    vector<float> descriptors;
    debug_msg("extract_features #4.3");
    surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
    debug_msg("extract_features #4.4");
    cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
    debug_msg("extract_features #4.5");
    gray_image.release();
    debug_msg("extract_features #4.6");
    images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes after #4.3 in the debug messages!
Hope that helps!
EDIT 2:
I replaced part of the code with cv::SurfDescriptorExtractor. I replaced everything from 4.3 to 4.5 with the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So now there's still a bug, but it's located somewhere else and not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles. The descriptors argument should be a cv::Mat, not a vector.
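For illustration, the call with a cv::Mat output would look roughly like this (a sketch against the OpenCV 2.4 nonfree SURF interface; in older 2.x trunk builds SURF lived in features2d instead):
#include <opencv2/nonfree/features2d.hpp>   // cv::SURF in OpenCV 2.4.x

cv::SURF surf(300, 3, 4);
cv::Mat descriptors;                         // cv::Mat instead of std::vector<float>

// useProvidedKeypoints = true: descriptors are computed at cdif.keypoints as given.
surf(gray_image, cv::Mat(), cdif.keypoints, descriptors, true);

// The result already has one row per keypoint, so no reshape is needed.
cdif.descriptors = descriptors;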

XNA 3.1: Preserving the depth buffer before it is cleared

I'm trying to get around XNA 3.1's automatic clearing of the depth buffer when switching render targets by copying the IDirect3DSurface9 from the depth buffer before the render targets are switched, then restoring the depth buffer at a later point.
In the code, the getDepthBuffer method is a pointer to the IDirect3DDevice9 GetDepthStencilBuffer function. The pointer to that method seems to be correct, but when I try to get the IDirect3DSurface9 pointer, the call fails with an exception (0x8876086C - D3DERR_INVALIDCALL). The surfacePtr pointer ends up pointing to 0x00000000.
Any idea why it is not working? And any ideas on how to fix it?
Here's the code:
public static unsafe Texture2D GetDepthStencilBuffer(GraphicsDevice g)
{
    if (g.DepthStencilBuffer.Format != DepthFormat.Depth24Stencil8)
    {
        return null;
    }
    Texture2D t2d = new Texture2D(g, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, 1, TextureUsage.None, SurfaceFormat.Color);

    FieldInfo f = typeof(GraphicsDevice).GetField("pComPtr", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
    object o = f.GetValue(g);
    void* devicePtr = Pointer.Unbox(f.GetValue(g));
    void* getDepthPtr = AccessVTable(devicePtr, 160);
    void* surfacePtr;

    var getDepthBuffer = (GetDepthStencilBufferDelegate)Marshal.GetDelegateForFunctionPointer(new IntPtr(getDepthPtr), typeof(GetDepthStencilBufferDelegate));
    var rv = getDepthBuffer(&surfacePtr);

    SetData(t2d, 0, surfacePtr, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, (uint)(g.DepthStencilBuffer.Width / 4), D3DFORMAT.D24S8);

    Marshal.Release(new IntPtr(devicePtr));
    Marshal.Release(new IntPtr(getDepthPtr));
    Marshal.Release(new IntPtr(surfacePtr));
    return t2d;
}
XNA 3.1 will not clear your depth-stencil buffer upon changing render targets; it will, however, resolve it (so that it's unusable for further depth tests) if you are not careful with your render target changes.
For example:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(null)
SetRenderTarget(someOtherRenderTarget)
Will cause the depth-stencil buffer to be resolved, but the following will not:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(someOtherRenderTarget)
I'm unsure why this happens in XNA 3.1 (and earlier versions), but ever since figuring that out I've been able to keep the same depth-stencil buffer alive through many render target changes, even Clear operations, as long as the clear specifies ClearOptions.Target only.
