Compile error in CV_MAT_ELEM - opencv

As a result of a call to estimateRigidTransform() I get a cv::Mat object named "trans". To retrieve its contained matrix I try to access its elements this way:
for (i = 0; i < 2; i++)
    for (j = 0; j < 3; j++)
    {
        mtx[j][i] = CV_MAT_ELEM(trans, double, i, j);
    }
Unfortunately with VS2010 I get a compiler error
error C2228: left of '.ptr' must have class/struct/union
for the line with CV_MAT_ELEM. When I unwrap this macro I find something like
(mat).data.ptr + (size_t)(mat).step*(row) + (pix_size)*(col))
When I remove the ".ptr" behind (mat).data it compiles. But I can't imagine that's the solution (or can't imagine that's a bug and I'm the only one who noticed it). So what could be wrong here really?
Thanks!

You don't access the mat elements like this. For a traversal see my other answer here with sample code:
color matrix traversal
or see the opencv refman for grayscale Mat:
Mat M; // should be grayscale
int cols = M.cols, rows = M.rows;
for (int i = 0; i < rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for (int j = 0; j < cols; j++)
    {
        Mi[j]; // is the matrix element.
    }
}

Just an addendum from my side: meanwhile I found that CV_MAT_ELEM expects a CvMat structure (the OpenCV C interface), not a cv::Mat (the C++ interface). That's why I get this funny error. Converting a cv::Mat to a CvMat can be done simply by casting/assigning it to a CvMat. Funny confusion between the C and C++ interfaces in OpenCV...
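A minimal sketch of that conversion, assuming trans is the 2x3 CV_64F matrix returned by estimateRigidTransform() (the mtx array here is just illustrative):

CvMat transC = trans;  // header conversion only, no pixel data is copied
double mtx[3][2];
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 3; j++)
        mtx[j][i] = CV_MAT_ELEM(transC, double, i, j);

With the C++ interface alone you can skip the macro entirely and read trans.at<double>(i, j) instead.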

Related

Assertion Failure with cv::Size

I'm new to OpenCV and I'm stuck on a problem while declaring a Mat instance.
#include <opencv2\opencv.hpp>

int main(int argc, char *argv[]) {
    cv::Mat before = cv::imread("./irene.jpg", CV_LOAD_IMAGE_COLOR);
    cv::imshow("before", before);
    cv::waitKey(0);

    cv::Mat after(cv::Size(before.rows, before.cols), CV_8UC3);
    for (int y = 0; y < before.rows; y++) {
        for (int x = 0; x < before.cols; x++) {
            after.at<cv::Vec3b>(y, x)[0] = before.at<cv::Vec3b>(y, x)[2];
            after.at<cv::Vec3b>(y, x)[1] = before.at<cv::Vec3b>(y, x)[1];
            after.at<cv::Vec3b>(y, x)[2] = before.at<cv::Vec3b>(y, x)[0];
        }
    }
    cv::imshow("after", after);
    cv::waitKey(0);
    return 0;
}
Just a simple code for changing color of an image.
The problem is that when I use cv::Size() I get an assertion failure at runtime. It builds okay, but when I press a key to move past the first waitKey(), it throws an exception.
What gets printed on the console is:
OpenCV(3.4.3) Error: Assertion failed ((unsigned)(i1 * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels())) in cv::Mat::at, file c:\opencv343\opencv\build\include\opencv2\core\mat.inl.hpp, line 1101
I'm certain the exception is thrown because of the Size() struct, because if I change Size(before.rows, before.cols) to just before.rows, before.cols, it works fine!
I cannot figure out what is wrong; all the tutorials suggest the code I tried is fine.
This exception occurred because I passed the width and height to Size() in the wrong order.
The right order is cv::Size(width, height), i.e. cv::Size(cols, rows).
It was not a problem in many of the tutorials because they usually use square images, where the order makes no difference.
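A short sketch of the corrected constructor for the code above (only the argument order changes):

cv::Mat after(cv::Size(before.cols, before.rows), CV_8UC3);   // cv::Size is (width, height)
// equivalently, the (rows, cols) constructor avoids the issue altogether:
// cv::Mat after(before.rows, before.cols, CV_8UC3);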

Checking the value of a CvMat element during iteration

I'm using CV_MAT_ELEM to access the value of a CvMat without any problem, but when I use it in a for loop it gives me an error (assertion failed).
for (int i = 0; i <= direction->cols; i++) {
    for (int j = 0; j <= direction->rows; j++) {
        if (CV_MAT_ELEM(*direction, float, i, j) < 22.0) {
            CV_MAT_ELEM(*direction, float, i, j) = 0;
        }
    }
}
You are trying to access some pixels that are not within the image's range.
Try to change the "<=" in
for (int i = 0; i <= direction->cols; i++) {
for (int j = 0; j <= direction->rows; j++) {
to "<":
for (int i = 0; i < direction->cols; i++) {
for (int j = 0; j < direction->rows; j++) {
P.S.: As #berak commented, you are still using the old OpenCV C API, i.e. IplImage/CvMat and CV_MAT_ELEM. Try to use the new C++ API, i.e. cv::Mat and at(), instead.
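A rough sketch of the same loop with the C++ API, assuming direction is a single-channel CV_32F CvMat* (cv::cvarrToMat wraps the existing data without copying, so the original matrix is modified in place):

cv::Mat directionMat = cv::cvarrToMat(direction);
for (int row = 0; row < directionMat.rows; row++) {
    for (int col = 0; col < directionMat.cols; col++) {
        if (directionMat.at<float>(row, col) < 22.0f)
            directionMat.at<float>(row, col) = 0.0f;
    }
}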

show vector elements in listbox

I would like to display the elements of a vector in a listbox. However, I'm constantly getting an error:
error C2664: 'System::Windows::Forms::ListBox::ObjectCollection::Add' : cannot convert parameter 1 from 'std::basic_string<_Elem,_Traits,_Alloc>' to 'System::Object ^'
I am using Windows Forms in C++/CLI.
This is the code:
for (size_t z = 0; z < container.size(); z++) {
    listBox_name->Items->Add(container[z]);
}
Based on the error message, your vector is a vector of std::string. Use marshal_as to convert the std::string to a managed String^ that the managed list box can accept.
for (size_t z = 0; z < container.size(); z++) {
    listBox_name->Items->Add(marshal_as<String^>(container[z]));
}
If you find you're doing this a lot, consider changing your vector of std::string to be a fully-managed type, such as List<String^>^.

OpenCV 2.4.5: FLANN and hierarchicalClustering

I have recently started porting an application to a new platform which runs OpenCV 2.4.5.
Part of my code which uses OpenCV's implementation of FLANN to do hierarchical clustering no longer compiles.
The code is as follows:
cv::Mat mergedFeatures = cvCreateMat(descriptorTotal, descriptorDims, CV_32F);
int counter = 0;
for (uint j = 0; j < ImageFeatures.size(); j++) {
    cv::Mat features = ImageFeatures[j];
    for (int k = 0; k < features.rows; k++) {
        cv::Mat roi = mergedFeatures.row(counter);
        features.row(k).copyTo(roi);
        counter++;
    }
}

cv::Mat centers = cvCreateMat(1000, descriptorDims, CV_32FC1);

cv::flann::KMeansIndexParams k_params = cv::flann::KMeansIndexParams();
cv::flann::AutotunedIndexParams atp = cv::flann::AutotunedIndexParams();

int numClusters = cv::flann::hierarchicalClustering<float, float>(mergedFeatures, centers, k_params);
The error that I am getting (in Eclipse) is that cv::flann::hierarchicalClustering has invalid arguments and that none of the candidates for this function match.
Can someone explain how I suddenly seem to be calling this method incorrectly?
Many thanks for any help.
I fixed the problem myself.
Instead of accepting:
cv::flann::KMeansIndexParams k_params
the hierarchicalClustering function actually needs:
cvflann::KMeansIndexParams k_params
It is rather a confusing namespace convention with the FLANN library in OpenCV and I had just overlooked the namespace difference in what the compiler error was telling me.
It is all working now. The KMeansIndexParams type is present in both namespaces and I guess that the hierarchicalClustering method has changed very slightly from OpenCV 2.3 to 2.4.5.
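For reference, a sketch of the corrected call with the same mergedFeatures and centers Mats as above; only the namespace of the index params changes:

cvflann::KMeansIndexParams k_params = cvflann::KMeansIndexParams();   // cvflann::, not cv::flann::
int numClusters = cv::flann::hierarchicalClustering<float, float>(mergedFeatures, centers, k_params);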

Bug in OpenCV2.3 cv::split() function. Identical values in all 3 channels

After spending a couple of days trying to figure out why the OpenCV DFT would give identical results for all three channels, I ended up concluding that there might be a bug in the split() function that OpenCV provides for splitting an input image into 3 single-channel images.
std::vector<cv::Mat> rgbChannels(3,cv::Mat(inputImage.size(),CV_64FC1));
cv::split(inputImage, rgbChannels);
After saving the image values onto disk and using a file differencing tool, I found out that all values in the split channels were identical.
Have I done something wrong?
My workaround was the following function. But that also gave me the exact same values, hinting that somehow the vector was not being handled correctly by OpenCV.
std::vector<cv::Mat> SplitImage(cv::Mat inputImage)
{
    // copy original in BGR order
    std::vector<cv::Mat> splittedImage(3, cv::Mat(inputImage.size(), CV_64FC1));
    cv::Mat tempImage(inputImage.size(), CV_64FC1);
    for (int row = 0; row < inputImage.size().height; row++)
    {
        for (int col = 0; col < inputImage.size().width; col++)
        {
            splittedImage[0].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[0];
            splittedImage[1].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[1];
            splittedImage[2].at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[2];
        }
    }
    return splittedImage;
}
And finally I wrote the following to solve the problem:
std::vector<cv::Mat> SplitImage(cv::Mat inputImage)
{
    // copy original in BGR order
    std::vector<cv::Mat> splittedImage(3, cv::Mat(inputImage.size(), CV_64FC1));
    std::vector<cv::Mat>::iterator it;
    it = splittedImage.begin();
    for (int channelNo = 0; channelNo < inputImage.channels(); channelNo++)
    {
        cv::Mat tempImage(inputImage.size(), CV_64FC1);
        for (int row = 0; row < inputImage.size().height; row++)
        {
            for (int col = 0; col < inputImage.size().width; col++)
            {
                tempImage.at<double>(row, col) = inputImage.at<cv::Vec3d>(row, col)[channelNo];
            }
        }
        it = splittedImage.insert(it, tempImage);
        it++;
    }
    return splittedImage;
}
Has anyone had a problem with split() function or have I done something wrong?
It is not a bug in OpenCV but there is a problem with your code.
The following line does not create a vector of 3 different Mats:
std::vector<cv::Mat> rgbChannels(3,cv::Mat(inputImage.size(),CV_64FC1));
Instead, this line produces a vector of 3 Mat headers sharing the same memory. It works this way because the Mat copy constructor does not make a deep copy - it just increments an internal reference counter.
Just change your code to the following to solve your problem:
std::vector<cv::Mat> rgbChannels(3);
cv::split(inputImage, rgbChannels);
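To illustrate why the original initialization misbehaves (this check is not from the original post, just a quick way to see the sharing), the three headers produced by the vector fill constructor all point at the same pixel buffer:

std::vector<cv::Mat> shared(3, cv::Mat(inputImage.size(), CV_64FC1));
// shared[0].data, shared[1].data and shared[2].data are all the same pointer here,
// so writing into any one "channel" overwrites the other two.

With the empty vector from the fix above, cv::split() allocates three independent planes itself.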
