DirectX 11 ID3D11Device::CreateTexture2D with initial data fails - directx

So far I have followed a tutorial that calls CreateTexture2D() with NULL initial data and then
uses UpdateSubresource() to load the data, and everything works fine. But UpdateSubresource() is a device context function, so I tried to call CreateTexture2D() with initial data in one shot, and it fails.
The code below shows the original version that works fine:
tex_desc.Height = h;
tex_desc.Width = w;
tex_desc.MipLevels = 0;
tex_desc.ArraySize = 1;
tex_desc.Format= DXGI_FORMAT_R8G8B8A8_UNORM;
tex_desc.SampleDesc.Count = 1;
tex_desc.SampleDesc.Quality = 0;
tex_desc.Usage= D3D11_USAGE_DEFAULT;
tex_desc.BindFlags= D3D11_BIND_SHADER_RESOURCE|D3D11_BIND_RENDER_TARGET;
tex_desc.CPUAccessFlags= 0;
tex_desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
row_pitch = (w * 4) * sizeof(unsigned char);
hres = _dv->CreateTexture2D(&tex_desc, NULL, &m_tex);
HR_FAIL(hres, "create 2D texture");
_dc->UpdateSubresource(m_tex, 0, NULL, data, row_pitch, 0);
Then I changed it to:
tex_desc.Height = h;
tex_desc.Width = w;
tex_desc.MipLevels = 0;
tex_desc.ArraySize = 1;
tex_desc.Format= DXGI_FORMAT_R8G8B8A8_UNORM;
tex_desc.SampleDesc.Count = 1;
tex_desc.SampleDesc.Quality = 0;
tex_desc.Usage= D3D11_USAGE_DEFAULT;
tex_desc.BindFlags= D3D11_BIND_SHADER_RESOURCE|D3D11_BIND_RENDER_TARGET;
tex_desc.CPUAccessFlags= 0;
tex_desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
row_pitch = (w * 4) * sizeof(unsigned char);
D3D11_SUBRESOURCE_DATA sdata; // <<===changed
sdata.pSysMem = data; // <<===changed
sdata.SysMemPitch = row_pitch; // <<===changed
sdata.SysMemSlicePitch = (w*h*4)*sizeof(unsigned char); // <<===changed
hres = _dv->CreateTexture2D(&tex_desc, &sdata, &m_tex); // <<===changed
HR_FAIL(hres, "testing create texture 2d");
Then it generates this error:
testing create texture 2d fail
E_INVALIDARG : An invalid parameter was passed to the returning function.
It says some parameter is invalid, but I don't see why the arguments I passed are invalid.
UPDATE:
So I followed the links to set up the debug device for DirectX. Here's what I got:
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[4].SysMemPitch cannot be 0 [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[6].pSysMem cannot be NULL. [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[7].pSysMem cannot be NULL. [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
Exception thrown at 0x750BC54F in directx11_test.exe: Microsoft C++ exception: _com_error at memory location 0x002BF3D0.
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[2].pSysMem cannot be NULL. [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[4].SysMemPitch cannot be 0 [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[6].pSysMem cannot be NULL. [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[7].pSysMem cannot be NULL. [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
So it says that the Create function tries to read the initial data as an array, while I only passed a single element as the argument.
One thing to note is that I only get the debug output in the Release configuration (I manually define _DEBUG in the code). In the Debug configuration, an access violation exception is triggered, so I don't get the D3D debug messages.
My workaround is to pass an array of subresources big enough to cover every possible mip level, which prevents the access violation:
D3D11_SUBRESOURCE_DATA sdata[20];
int idx = 0;
while (row_pitch > 0)
{
sdata[idx].pSysMem = data;
sdata[idx].SysMemPitch = row_pitch;
row_pitch >>= 1;
idx++;
}
hres = _dv->CreateTexture2D(&tex_desc, sdata, &m_tex);
HR_FAIL(hres, "testing create texture 2d");
But I think this is unwise, since I still need to generate the mipmaps and that function is on the device context anyway.

I believe the reason it is failing is that you have requested auto-gen mipmaps (D3D11_RESOURCE_MISC_GENERATE_MIPS), which you can't use initial data to set up. You have to load the top level through the device context instead.
For a full implementation of auto-gen mipmaps, as well as using initData for other scenarios, see WICTextureLoader in the DirectX Tool Kit.
If you used the Direct3D debug device, it would have provided a lot more detail in the output window when returning E_INVALIDARG.
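For reference, a minimal sketch of that pattern, reusing the question's variables (_dv, _dc, tex_desc, data, row_pitch, m_tex and the HR_FAIL macro are all assumed to be the same as above): create the texture with no initial data, create a shader resource view over the full mip chain, upload only the top mip through the context, then let the GPU generate the rest.
ID3D11ShaderResourceView* m_srv = nullptr;
// MipLevels = 0 plus D3D11_RESOURCE_MISC_GENERATE_MIPS reserves the full mip chain,
// but no initial data can be supplied for it.
hres = _dv->CreateTexture2D(&tex_desc, NULL, &m_tex);
HR_FAIL(hres, "create 2D texture");
D3D11_SHADER_RESOURCE_VIEW_DESC srv_desc = {};
srv_desc.Format = tex_desc.Format;
srv_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srv_desc.Texture2D.MostDetailedMip = 0;
srv_desc.Texture2D.MipLevels = (UINT)-1; // view covers all mip levels
hres = _dv->CreateShaderResourceView(m_tex, &srv_desc, &m_srv);
HR_FAIL(hres, "create shader resource view");
// Upload only the top-level mip, then build the chain on the GPU.
_dc->UpdateSubresource(m_tex, 0, NULL, data, row_pitch, 0);
_dc->GenerateMips(m_srv);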

I think the problem is that you assign a non-zero value to the D3D11_SUBRESOURCE_DATA::SysMemSlicePitch field, which only has meaning for 3D textures and should be set to 0 for other texture types. In the first example you correctly pass 0 as the value of the corresponding parameter when invoking UpdateSubresource.
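In code form that suggestion is just the following (a sketch against the question's variables; note the answer above points at the auto-gen mip chain as the more likely cause):
D3D11_SUBRESOURCE_DATA sdata = {}; // zero-initializes SysMemSlicePitch as well
sdata.pSysMem = data;
sdata.SysMemPitch = row_pitch;
sdata.SysMemSlicePitch = 0; // only meaningful for 3D textures
hres = _dv->CreateTexture2D(&tex_desc, &sdata, &m_tex);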

Related

error when creating 2d texture with dynamic format

I am pretty new to DirectX and I am stumbling over resource handling.
First, I created a texture that I can read/write on the GPU, and it worked well.
Now, as you can see in my code, I wanted to read this texture on the CPU as well (from the application side), so I changed the usage from USAGE_DEFAULT to USAGE_DYNAMIC.
Microsoft::WRL::ComPtr<ID3D11Texture2D> outputTexture;
D3D11_TEXTURE2D_DESC outputTex_desc;
outputTex_desc.Format = DXGI_FORMAT_R32_FLOAT;
outputTex_desc.Width = 3;
outputTex_desc.Height = 3;
outputTex_desc.MipLevels = 1;
outputTex_desc.ArraySize = 1;
outputTex_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
D3D11_BIND_SHADER_RESOURCE;
outputTex_desc.SampleDesc.Count = msCount;
outputTex_desc.SampleDesc.Quality = msQuality;
outputTex_desc.Usage = D3D11_USAGE_DYNAMIC;
outputTex_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
outputTex_desc.MiscFlags = 0;
// CREATE 'TEXTURE'
device->CreateTexture2D( // FAIL HERE !!!
&outputTex_desc,
nullptr,
outputTexture.GetAddressOf());
// CREATE 'SRV'
...
// CREATE 'UAV'
...
It starts failing exactly when it executes device->CreateTexture2D().
Any advice would be amazing.
The first two steps in debugging any Direct3D 11 program are:
Make sure you are checking every HRESULT for either success (SUCCEEDED macro) or failure (FAILED macro). If it is safe to ignore the return value at runtime, then the function returns void. See ThrowIfFailed.
Enable the Direct3D debug device and look for debug output (a.k.a. use D3D11_CREATE_DEVICE_DEBUG in your Debug configuration).
If you enable the Direct3D debug device, you'll get detailed information on why the API returns failure codes in many cases.
If you did that, you'd see the error for this code:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DYNAMIC Resource
may only have the D3D11_CPU_ACCESS_WRITE CPUAccessFlags set.
[ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
In order to READ it on the CPU, you must copy to a D3D11_USAGE_STAGING Resource first. For example source code on doing that, see the ScreenGrab code from DirectX Tool Kit.
You don't mention what your msCount or msQuality values are here. I assumed 1, and 0 respectively. If you use any other values, you'll get:
D3D11 ERROR: ID3D11Device::CreateTexture2D: Multisampling is not supported
with the D3D11_BIND_UNORDERED_ACCESS BindFlag. SampleDesc.Count must be 1
and SampleDesc.Quality must be 0.
[ STATE_CREATION ERROR #99: CREATETEXTURE2D_INVALIDBINDFLAGS]
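For reference, a minimal sketch of step 2 above: creating the device with D3D11_CREATE_DEVICE_DEBUG in debug builds. The default adapter and feature levels used here are assumptions of the sketch, not something taken from the question.
UINT createFlags = 0;
#if defined(_DEBUG)
createFlags |= D3D11_CREATE_DEVICE_DEBUG; // turns on the SDK layers and the debug spew
#endif
Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                    // no software rasterizer module
    createFlags,
    nullptr, 0,                 // default feature levels
    D3D11_SDK_VERSION,
    device.GetAddressOf(),
    nullptr,                    // returned feature level not needed here
    context.GetAddressOf());
if (FAILED(hr)) { /* fall back to createFlags without DEBUG, or abort */ }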
In order to access a texture from the CPU (read mode), you need to create a separate staging texture, then copy your texture into it.
These are the flags for the GPU-only texture (please note I force the sample count to 1, as UAV access is not allowed for multisampled textures):
D3D11_TEXTURE2D_DESC gpuTexDesc;
gpuTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gpuTexDesc.Width = 3;
gpuTexDesc.Height = 3;
gpuTexDesc.MipLevels = 1;
gpuTexDesc.ArraySize = 1;
gpuTexDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
D3D11_BIND_SHADER_RESOURCE;
gpuTexDesc.SampleDesc.Count = 1;
gpuTexDesc.SampleDesc.Quality = 0;
gpuTexDesc.Usage = D3D11_USAGE_DEFAULT;
gpuTexDesc.CPUAccessFlags = 0; // no CPU access for the GPU-only texture
gpuTexDesc.MiscFlags = 0;
Then you create a second texture for reading
D3D11_TEXTURE2D_DESC readTexDesc;
readTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
readTexDesc.Width = 3;
readTexDesc.Height = 3;
readTexDesc.MipLevels = 1;
readTexDesc.ArraySize = 1;
readTexDesc.BindFlags = 0; //No bind flags allowed for staging
readTexDesc.SampleDesc.Count = 1;
readTexDesc.SampleDesc.Quality = 0;
readTexDesc.Usage = D3D11_USAGE_STAGING; //need staging flag for read
readTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
readTexDesc.MiscFlags = 0;
then you can use CopyResource:
deviceContext->CopyResource(readTex, gpuTex);
Once done, you can finally access texture data for reading using Map
D3D11_MAPPED_SUBRESOURCE MappedResource;
deviceContext->Map(readTex, 0, D3D11_MAP_READ, 0, &MappedResource);
MappedResource gives you access to the data on the CPU; once you're done processing, don't forget to Unmap the resource.
deviceContext->Unmap(readTex, 0);
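Note that the mapped memory uses the driver's RowPitch, which can be larger than width * bytes-per-pixel, so it is safest to copy row by row. A small sketch for the 3x3 R32_FLOAT texture above (assumes <vector> and <cstring> are included):
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = deviceContext->Map(readTex, 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr))
{
    const int width = 3, height = 3; // matches readTexDesc above
    std::vector<float> pixels(width * height);
    const unsigned char* src = static_cast<const unsigned char*>(mapped.pData);
    for (int y = 0; y < height; ++y)
    {
        // RowPitch is the driver-chosen stride; it may be padded past width * sizeof(float)
        memcpy(&pixels[y * width], src + y * mapped.RowPitch, width * sizeof(float));
    }
    deviceContext->Unmap(readTex, 0);
}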

Assembly explanation about stereoCalibrate error with OutputArray::Create assertion error

I came across an error while executing stereoCalibrate in OpenCV 2.4.11, which says:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == Size(cols, rows)) in cv::_OutputArray::create,
I think this must be some size mismatch between these parameters, so I went through them one by one, but the error is still there. I hope someone could spot the error from the assembly code below. Here is the method call in my code:
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cv::Size(1920,1080), m.rvec, m.tvec, m.evec, m.fvec,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5)
,CV_CALIB_FIX_INTRINSIC + CV_CALIB_USE_INTRINSIC_GUESS
);
In my code, m.rvec is (3,3,CV_64F), m.tvec is (3,1,CV_64F), and m.evec and m.fvec are not preallocated, which is the same as in the stereoCalibrate example. intr.cameraMatrix is (3,3,CV_64F) and intr.distCoeffs is (8,1,CV_64F); objPoints is computed from the checkerboard and stores the 3D positions of the corners, with all z values set to zero.
After reading the advice from @Josh, I modified the code to pass plain output Mat objects in CV_64F, but it still throws this assertion:
cv::Mat R, t, e, f;
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cali0.imgSize, R, t, e, f,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));
Finally I solved this problem. As a reminder: make sure the camera parameters you pass in are not const....
Why go for assembly? OpenCV is open source and you can check the code you're calling here: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3523
If you get assertion failures in OpenCV, it's usually because you've passed a matrix with an incorrect shape. OpenCV is extremely picky. The assertion failure is on an OutputArray, so checking the function signature, there are four possible culprits:
OutputArray _Rmat, OutputArray _Tmat, OutputArray _Emat, OutputArray _Fmat
The sizing is done inside cv::stereoCalibrate here:
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3550
_Rmat.create(3, 3, rtype);
_Tmat.create(3, 1, rtype);
<-- snipped -->
if( _Emat.needed() )
{
_Emat.create(3, 3, rtype);
p_matE = &(c_matE = _Emat.getMat());
}
if( _Fmat.needed() )
{
_Fmat.create(3, 3, rtype);
p_matF = &(c_matF = _Fmat.getMat());
}
The assertion is being triggered in one of these calls, the code is here:
https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L2241
Try passing in plain Mat objects without preallocating their shape.
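A minimal sketch of that, combined with the asker's own resolution that the camera parameters must not be const (cameraMatrix and distCoeffs are InputOutputArray parameters, so they must be writable); the .clone() calls here are just one way to guarantee that and are an assumption of the sketch:
cv::Mat cameraMatrix0 = camera0.intr.cameraMatrix.clone(); // writable, non-const copies
cv::Mat distCoeffs0 = camera0.intr.distCoeffs.clone();
cv::Mat cameraMatrix1 = camera1.intr.cameraMatrix.clone();
cv::Mat distCoeffs1 = camera1.intr.distCoeffs.clone();
cv::Mat R, T, E, F; // left empty: stereoCalibrate allocates them itself
double rms = cv::stereoCalibrate(
    objPoints, cali0.imgPoints, cali1.imgPoints,
    cameraMatrix0, distCoeffs0,
    cameraMatrix1, distCoeffs1,
    cali0.imgSize, R, T, E, F,
    cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5),
    CV_CALIB_FIX_INTRINSIC);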

DirectX Device::CreateTexture2D() crashes when calling with all 3 params

I am trying to render a 2D texture from an OpenCV cv::Mat into a texture in DX11, and afterwards into a shader resource. The problem is that the program crashes on Device::CreateTexture2D() and I can't figure out why. I researched the whole day and I just don't see what's wrong here.
Furthermore, the problem does not seem to be the cv::Mat as the resource, because I also tried this example: D3D11 CreateTexture2D in 32 bit format, with the chess example as the resource for the texture... and the function still crashes when called with the 3rd parameter.
I found others who had problems with that function; sometimes the problem was that SysMemPitch was not set for 2D textures, but unfortunately that's not the case here.
Error Output: First-chance exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
Unhandled exception at 0x692EF11E (igd10iumd32.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x03438000.
here is the relevant code:
bool Texture::InitCameraStream(ID3D11Device* device, ARiftControl* arift_control)
{
D3D11_TEXTURE2D_DESC td;
ZeroMemory(&td, sizeof(td));
td.ArraySize = 1;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
td.Usage = D3D11_USAGE_DYNAMIC;
td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
td.Height = arift_control->picture_1_.size().height;
td.Width = arift_control->picture_1_.size().width;
td.MipLevels = 1;
td.MiscFlags = 0;
td.SampleDesc.Count = 1;
td.SampleDesc.Quality = 0;
D3D11_SUBRESOURCE_DATA srInitData;
srInitData.pSysMem = arift_control->picture_1_.ptr();
srInitData.SysMemPitch = arift_control->picture_1_.size().width * 4;
ID3D11Texture2D* tex = 0;
if ((device->CreateTexture2D(&td, &srInitData, NULL) == S_FALSE));
{
std::cerr << "Texture Description: OK " << std::endl << "Subresource: OK" << std::endl;
}
if (FAILED(device->CreateTexture2D(&td, &srInitData, &tex)));
{
std::cerr << "Error: Texture could not be created! "<< std::endl;
return false;
}
// Create the shader-resource view
D3D10_SHADER_RESOURCE_VIEW_DESC srDesc;
srDesc.Format = td.Format;
srDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
srDesc.Texture2D.MostDetailedMip = 0;
srDesc.Texture2D.MipLevels = 1;
if (FAILED(device->CreateShaderResourceView(tex, NULL, &texture_)));
{
std::cerr << "Can't create Shader Resource View" << std::endl;
return false;
}
return true;
}
CreateTexture2D returns S_FALSE when the first two parameters are valid and 0 is passed as the 3rd parameter. So in my case it also returns S_FALSE the first time, and the debug output appears. When calling CreateTexture2D with the 3rd parameter (the texture COM object), it crashes. I have absolutely no idea anymore.
Furthermore, I tried to set up debugging with DirectX and followed this tutorial: http://blog.rthand.com/post/2010/10/25/Capture-DirectX-1011-debug-output-to-Visual-Studio.aspx - but I can't see a "Debug" window in my project properties in Visual Studio 2013. So I still get the "igd10iumd32.pdb not loaded" window after the program crashes.
Edit: at least I could fix the issue with the additional D3D debug output for now. In my Visual Studio 2013 I had to set the following: Project Properties -> Debugging -> Debug Type -> Mixed to get the additional D3D logs :)
Can anyone help here? It's really frustrating, just getting stuck on that single function the whole day ..
Many thanks!
Max
Your input texture data passed in D3D11_SUBRESOURCE_DATA is not sufficiently sized. In your comment, you said that the input image data is 900x1600 and the link is a JPEG. However, you are telling D3D that the data format is DXGI_FORMAT_B8G8R8A8_UNORM. JPEG is a compressed format, so the data stream will be smaller than it would be in BGRA format. When your driver (igd10iumd32.dll) attempts to read this input stream, it crashes because the buffer is not as large as you told D3D it was.
You can use D3DX11CreateTextureFromFile to load JPEG data. There are also some free image conversion libraries you can use to convert the JPEG into a D3D natively compatible format.
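Since the question already holds the image in a cv::Mat, one possible fix (a sketch, assuming picture_1_ is a decoded 8-bit 3-channel BGR image rather than a raw JPEG stream) is to convert it to BGRA with OpenCV and take the row pitch from Mat::step:
cv::Mat bgra;
cv::cvtColor(arift_control->picture_1_, bgra, CV_BGR2BGRA); // buffer is now width * height * 4 bytes
D3D11_SUBRESOURCE_DATA srInitData = {};
srInitData.pSysMem = bgra.data;
srInitData.SysMemPitch = static_cast<UINT>(bgra.step); // bytes per row, including any padding
srInitData.SysMemSlicePitch = 0;
ID3D11Texture2D* tex = nullptr;
HRESULT hr = device->CreateTexture2D(&td, &srInitData, &tex);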

Depth stencil buffer not working directx11

OK, I've tried everything at this point and I'm really lost...
ID3D11Texture2D* depthStencilTexture;
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory (&depthTexDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthTexDesc.Width = set->mapSettings["SCREEN_WIDTH"];
depthTexDesc.Height = set->mapSettings["SCREEN_HEIGHT"];
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
depthTexDesc.MiscFlags = 0;
mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
D3D11_DEPTH_STENCIL_DESC dsDesc;
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;//LESS
// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR; //INCR
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR; //DECR
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Create depth stencil state
mDevice->CreateDepthStencilState(&dsDesc, &mDepthStencilState);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory (&depthStencilViewDesc, sizeof(depthStencilViewDesc));
depthStencilViewDesc.Format = depthTexDesc.Format;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
mDevice->CreateDepthStencilView(depthStencilTexture, &depthStencilViewDesc, &mDepthStencilView);
mDeviceContext->OMSetDepthStencilState(mDepthStencilState, 1);
And then afterwards I call:
mDeviceContext->OMSetRenderTargets(1, &mTargetView, mDepthStencilView);
Obviously I clear both views before every frame:
mDeviceContext->ClearRenderTargetView(mTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 1.0f));
mDeviceContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );
And still it just keeps the last pixel drawn, with no depth testing...
screenshot
PS: I've checked the rasterizer and it is correctly drawing only the front faces.
Any help, anyone?
Check your HRESULTs - the call to CreateTexture2D is almost certainly failing because you have specified CPU_ACCESS flags on a DEFAULT texture. Since you never check any errors or pointers, this just propagates NULL to all your depth objects, effectively disabling depth testing.
You can also catch errors like this by enabling D3D debug layers, by adding D3D11_CREATE_DEVICE_DEBUG to the flags on D3D11CreateDevice. If you had done this, you would see the following debug spew:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DEFAULT
Resource cannot have any CPUAccessFlags set. The following
CPUAccessFlags bits cannot be set in this case: D3D11_CPU_ACCESS_READ
(1), D3D11_CPU_ACCESS_WRITE (1). [ STATE_CREATION ERROR #98:
CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
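For reference, a sketch of the corrected creation from the question's code, per the error above: drop the CPU access flags on the DEFAULT resource and check the HRESULT before going on.
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.CPUAccessFlags = 0; // DEFAULT resources cannot have any CPU access flags
HRESULT hr = mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
if (FAILED(hr))
{
    // Stop here instead of passing a NULL texture on to CreateDepthStencilView.
    return false; // assumes this lives in a function returning bool, as a sketch
}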

cvExtractSURF don't work when useProvidedKeypoints = true

So, I'm trying to extract some SURF keypoints, but I want to impose my own keypoints! So I set the last parameter, "useProvidedKeypoints", to true.
Also, when I create my KeyPoint I use the default constructor (so some default values there); I only change the point "pt" and the octave, which I set to 3.
I'm using the C++ interface with SURF, but I know the problem is right at cvExtractSURF, because I copied that part of the code into mine to help me debug.
When I call that function with the last parameter set to true, I get this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left out a few details, like the layer_id stuff, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
int layer_id = b_beg->layer_id;
json_point_info_coord &jpic = b_beg->coord;
jpic.feature_id = features[layer_id].keypoints.size();
KeyPoint kp;
kp.octave = 3;
kp.pt.x = jpic.x;
kp.pt.y = jpic.y;
features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
debug_msg("extract_features #4.1");
cv::detail::ImageFeatures &cdif = features[i];
Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
debug_msg("extract_features #4.2");
vector<float> descriptors;
debug_msg("extract_features #4.3");
surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
debug_msg("extract_features #4.4");
cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
debug_msg("extract_features #4.5");
gray_image.release();
debug_msg("extract_features #4.6");
images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes after #4.3 in the debug message!
Hope that helps!
EDIT 2:
I replaced part of it with cv::SurfDescriptorExtractor. I replaced everything from 4.3 to 4.5 with the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So now there's still a bug, but it's located somewhere else, not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles. The descriptors argument should be a cv::Mat, not a vector.
