According to the document, SIFT object could use the function below to compute descriptors for multiple images:
virtual void compute (InputArrayOfArrays images, std::vector< std::vector< KeyPoint > > &keypoints, OutputArrayOfArrays descriptors)
I'm trying to compute the SIFT descriptors for multiple images with the code below:
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors;
f2d->compute(train_imgs, train_keypoints, train_descriptors);
It compiles under Mac OS 10.10.5 with OpenCV 3, but terminates during execution with this error:
libc++abi.dylib: terminating with uncaught exception of type std::length_error: vector
Alternatively, if I change the type of train_descriptors to Mat (instead of vector<Mat>), it still fails during execution, with a different error:
OpenCV Error: Assertion failed (_descriptors.kind() == _InputArray::STD_VECTOR_MAT) in compute, file /tmp/opencv320151228-32931-2p5ggk/opencv-3.1.0/modules/features2d/src/feature2d.cpp, line 126 libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv320151228-32931-2p5ggk/opencv-3.1.0/modules/features2d/src/feature2d.cpp:126: error: (-215) _descriptors.kind() == _InputArray::STD_VECTOR_MAT in function compute
What type of train_descriptors should I use to make this code compile and run correctly?
Could anyone tell me what the difference is between vector<Mat> and OutputArrayOfArrays?
Your code
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors;
f2d->compute(train_imgs, train_keypoints, train_descriptors);
works well if train_imgs is a vector<Mat>.
You don't need to create a vector of 50000 elements; simply use vector<Mat> train_descriptors;.
OutputArrayOfArrays, like InputArray, OutputArray, and the rest of that family, is an abstraction layer that OpenCV uses so that a single function signature can accept both cv::Mat and std::vector arguments. You should never use these classes explicitly. From the OpenCV docs:
The class is designed solely for passing parameters. That is, normally you should not declare class members, local and global variables of this type.
Also, note that OutputArrayOfArrays is just a typedef of OutputArray.
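To see why the kind() assertion in the error above fires, here is a simplified sketch of the proxy mechanism — not OpenCV's actual implementation; FakeMat, OutputArrayLike, and computeLike are made-up names for illustration:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for cv::Mat, just to keep the sketch self-contained.
struct FakeMat { std::string name; };

// Minimal analogue of OpenCV's _OutputArray proxy: it records whether it
// wraps a single matrix or a vector of matrices, so one function signature
// can accept either.
class OutputArrayLike {
public:
    enum Kind { SINGLE_MAT, STD_VECTOR_MAT };

    OutputArrayLike(FakeMat& m) : kind_(SINGLE_MAT), mat_(&m), vec_(nullptr) {}
    OutputArrayLike(std::vector<FakeMat>& v)
        : kind_(STD_VECTOR_MAT), mat_(nullptr), vec_(&v) {}

    Kind kind() const { return kind_; }

    // A function that needs one descriptor matrix per image asserts on the
    // kind, just as compute() asserts
    // _descriptors.kind() == _InputArray::STD_VECTOR_MAT.
    std::vector<FakeMat>& getVecRef() {
        assert(kind_ == STD_VECTOR_MAT);
        return *vec_;
    }

private:
    Kind kind_;
    FakeMat* mat_;
    std::vector<FakeMat>* vec_;
};

// Sketch of a compute()-like function: it requires a vector of matrices,
// one output slot per input image.
void computeLike(OutputArrayLike out, int nImages) {
    out.getVecRef().resize(nImages);
}
```

Passing a single Mat where the function expects one output per image trips the kind check, which is exactly the STD_VECTOR_MAT assertion in the question.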
This code could work I guess:
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors = vector<Mat>(5e4);
f2d->compute(train_imgs, train_keypoints, train_descriptors);
Related
I have a matrix D which I would like to set to zero where another matrix T is zero and keep intact otherwise. In numpy, I'd do this:
D[T==0]=0;
but with cv::Mat, not sure how to do it. I tried this:
D&=(T!=0);
with this result:
OpenCV Error: Assertion failed (type == B.type()) in gemm, file /build/opencv-AuXD2R/opencv-3.3.1/modules/core/src/matmul.cpp, line 1558
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-AuXD2R/opencv-3.3.1/modules/core/src/matmul.cpp:1558: error: (-215) type == B.type() in function gemm
Is the problem that I am mixing numeric types? I then also tried this (D is CV_32F as well; I verified this by outputting T.type(), which is 5):
cv::Mat TnotZero;
cv::Mat(T!=0).convertTo(TnotZero,CV_32F);
D&=TnotZero;
with the same result.
What is the solution?
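For what it's worth, cv::Mat has a masked setTo, so the numpy one-liner can be expressed directly as D.setTo(0, T == 0); with no type juggling. The element-wise logic that performs, sketched in plain C++ on flat float buffers for illustration:

```cpp
#include <cstddef>
#include <vector>

// Zero out D wherever T is zero, leaving all other elements intact --
// the element-wise logic behind D.setTo(0, T == 0) on float matrices.
void zeroWhereZero(std::vector<float>& D, const std::vector<float>& T) {
    for (std::size_t i = 0; i < D.size() && i < T.size(); ++i) {
        if (T[i] == 0.0f) {
            D[i] = 0.0f;
        }
    }
}
```

The bitwise route fails because T != 0 yields a CV_8U mask, and bitwise operations require both operands to have the same type.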
I wrote this C++ code using OpenCV for an AND operation; I also tried bitwise_and and cvAnd, but it didn't work. I'm sure there are no syntax errors, but when I run it, it throws an exception.
the code:
IplImage* result1 = cvCreateImage( cvGetSize(v_plane), 8, 3 );
cvAdd(h_plane, s_plane, result1,NULL);
h_plane, s_plane, and result1 must ALL be of the same format.
Same size
Same depth
Same number of channels
cvConvertImage() can be helpful here.
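The three conditions above can be captured in a small compatibility check; a plain-C++ sketch (the ImageFormat struct is made up for illustration):

```cpp
// Describes the format of an image the way cvAdd-style operations see it.
struct ImageFormat {
    int width;
    int height;
    int depth;     // bits per channel, e.g. 8
    int channels;  // e.g. 1 for a single H or S plane, 3 for BGR
};

// cvAdd requires all operands to match in size, depth, and channel count;
// this mirrors that check.
bool sameFormat(const ImageFormat& a, const ImageFormat& b) {
    return a.width == b.width && a.height == b.height &&
           a.depth == b.depth && a.channels == b.channels;
}
```

In the code above, h_plane and s_plane are presumably single-channel planes, while result1 was created with 3 channels; creating it with cvCreateImage(cvGetSize(v_plane), 8, 1) would satisfy the check.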
I'm working on a project which implement face detection algorithm on CUDA platform.
Currently I'd like to access an element of a GpuMat instance.
I have tried the following conventional methods:
Reasoning by analogy with cv::Mat: GpuMat doesn't have an at<T>() method.
I have tried using CV_MAT_ELEM, and I receive an error.
Here is my code on FaceDetection.cu file:
int DetectFacesGPU(cv::gpu::GpuMat * sumMat ,cv::gpu::GpuMat * qSumMat , float factor)
{
//
int i = CV_MAT_ELEM(*sumMat,int ,0,0);
//
I receive an error
Error 29 error : expression must have class type C:\Users\Shahar\Dropbox\OpenCV2.3\OpenCV2.3\FaceDetectionLatest\FaceDetectionCuda\FaceDetectionLatest\FaceDetection.cu 139
You have to download it to a cv::Mat and then access it with the standard way.
I think downloading it is as simple as:
cv::Mat mat;
sumMat->download(mat); // or something like that.
From the definition of CV_MAT_ELEM, CV_MAT_ELEM(*sumMat, int, 0, 0); is expanded to:
(*(int*)CV_MAT_ELEM_PTR_FAST(*sumMat, 0, 0, 4));
and in turn expanded to:
(*(int*)(assert( 0 < (*sumMat).rows && 0 < (*sumMat).cols ),
(*sumMat).data.ptr + (size_t)(*sumMat).step*0 + 4*0));
Assuming that your DetectFacesGPU function is defined as device code, the above statement is faulty because you can't dereference sumMat in device code. By doing so you're accessing host memory from device code. You have to be careful, because the GpuMat object itself is allocated in host memory; it is only the actual matrix data that is allocated in device memory.
So GpuMat class itself is, in general, not passed to kernel. To access individual pixels of GpuMat in kernel, you have to pass the pointer referencing the actual matrix data (which is in device memory) to the kernel and do pointer arithmetic on that pointer.
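The arithmetic such a kernel performs looks like the following. GpuMat rows are pitch-padded, so element (row, col) lives at data + row*step + col*sizeof(T); this host-side C++ demonstrates the same indexing on a pitched buffer (the names are illustrative, and on the device you would do this inside a __global__ function with data and step passed in as kernel arguments):

```cpp
#include <cstring>
#include <vector>

// Read element (row, col) from a pitched, row-padded int buffer -- the same
// pointer arithmetic a CUDA kernel performs on GpuMat's data/step pair.
// 'step' is the row pitch in bytes, which may exceed cols * sizeof(int).
int elemAt(const unsigned char* data, std::size_t step, int row, int col) {
    const unsigned char* rowPtr = data + static_cast<std::size_t>(row) * step;
    int value;
    std::memcpy(&value, rowPtr + static_cast<std::size_t>(col) * sizeof(int),
                sizeof(int));
    return value;
}
```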
I need to resize an IplImage and convert it into a CvMat of different depth, this is the code I've written so far:
void cvResize2(IplImage *imgSrc, IplImage *imgDst)
{
IplImage *imgTemp;
imgTemp = cvCreateImage( cvGetSize( imgSrc ), IPL_DEPTH_64F, 1 );
cvScale( imgSrc, imgTemp, 1/255., 0.0 );
cvResize( imgTemp, imgDst );
}
The source image is grayscale, the destination one is 64F bit deep. cvScale only scales between images of same size, hence the temp image.
The program raises the following exception when invoking cvResize:
OpenCV Error: Assertion failed (func != 0) in resize, file /tmp/buildd/opencv-2.1.0/src/cv/cvimgwarp.cpp, line 1488
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/buildd/opencv-2.1.0/src/cv/cvimgwarp.cpp:1488: error: (-215) func != 0 in function resize
I can't figure out why; I've checked that the images respect the imposed conditions:
src: 512x384, 8 depth
tmp: 512x384, 64 depth
dst: 64x64, 64 depth
Any clues?
Thanks in advance
You may have found a bug. I can reproduce it on my end, too (Ubuntu 64-bit, OpenCV-2.1.0). If you use 32-bit floating point precision, it works, but crashes with 64-bit floats. My recommendation is to update your OpenCV to the most recent version and see if the problem goes away. If not, then build the library in debug mode and step through the function that is throwing the assertion. From looking at the culprit source in cvimgwarp.cpp, it looks like it's unable to find an interpolation method to use for the destination image.
I'm trying to implement the fourier transformation in frequency domain.
I used getOptimalDFTSize accordingly, and I copied the image and mask, in bigger images, suitable for fourier transformation. I used the sample code from here as a reference.
Now, I have to separate the real and imaginary parts and perform pixelwise multiplication of the image's imaginary part with the mask's imaginary part, and likewise for the real parts. But when I try to do so, I get the following error message:
OpenCV Error: Assertion failed (type == srcB.type() && srcA.size() == srcB.size()) in mulSpectrums, file /build/buildd/opencv-2.1.0/src/cxcore/cxdxt.cpp, line 1855
/build/buildd/opencv-2.1.0/src/cxcore/cxdxt.cpp:1855: error: (-215) type == srcB.type() && srcA.size() == srcB.size() in function mulSpectrums
The code is following:
//fourier transfromation of real and imaginary part
Mat complex_image, real_image, complex_mask, real_mask;
cv::dft(new_image, complex_image, DFT_COMPLEX_OUTPUT);
cv::dft(new_image, real_image, DFT_REAL_OUTPUT);
cv::dft(new_mask, complex_mask, DFT_COMPLEX_OUTPUT);
cv::dft(new_mask, real_mask, DFT_REAL_OUTPUT);
//pixelwise multiplication
Mat multiplied_complex, multiplied_real;
cv::mulSpectrums(complex_image, complex_mask, multiplied_complex, DFT_COMPLEX_OUTPUT );
cv::mulSpectrums(real_image, real_mask, multiplied_real, DFT_REAL_OUTPUT);
What am I doing wrong here?
Image and mask should have the same size (width and height) and, most probably the problem here, the same type. In particular, dft with DFT_COMPLEX_OUTPUT produces a two-channel (interleaved complex) matrix, while DFT_REAL_OUTPUT produces a single-channel one. So if the types differ, you need to convert one of the operands so that they match.
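Once the two spectra do match, mulSpectrums is just a per-pixel complex multiplication of equally sized, equally typed matrices; a plain-C++ sketch of that operation using std::complex:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Per-element complex multiplication of two spectra of equal length --
// the operation mulSpectrums performs on two matching complex matrices.
std::vector<std::complex<float>> mulSpectra(
        const std::vector<std::complex<float>>& a,
        const std::vector<std::complex<float>>& b) {
    std::vector<std::complex<float>> out(a.size());
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        out[i] = a[i] * b[i];  // (re*re - im*im, re*im + im*re)
    }
    return out;
}
```

In practice, taking DFT_COMPLEX_OUTPUT of both the image and the mask and calling mulSpectrums once on the two complex spectra avoids mixing single- and two-channel outputs; there is no need for separate real and imaginary dft calls.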