I am doing a 6-DoF transformation with the RANSAC provided in OpenCV, and I now want to convert two cv::Mat matrices to an Eigen::Isometry3d, but I couldn't find good examples for this.
e.g.
cv::Mat rot;
cv::Mat trsl;
// rot is a 3-by-3 rotation matrix and trsl is a 3-by-1 translation vector.
Eigen::Isometry3d trsf;
trsf.rotation = rot;
trsf.translation = trsl; // I know trsf exposes these two parts, but this doesn't seem to be the correct way to compose them.
Can anyone give me a hand? Thanks.
Essentially, you need an Eigen::Map to read the OpenCV data and store it into the parts of your trsf:
typedef Eigen::Matrix<double, 3, 3, Eigen::RowMajor> RMatrix3d;
Eigen::Isometry3d trsf = Eigen::Isometry3d::Identity(); // initialize so the last row of the underlying 4x4 matrix is set
trsf.linear() = RMatrix3d::Map(reinterpret_cast<const double*>(rot.data));
trsf.translation() = Eigen::Vector3d::Map(reinterpret_cast<const double*>(trsl.data));
You need to be sure that rot and trsl actually hold double data (perhaps consider using cv::Mat_<double> instead).
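If you'd rather copy than map, OpenCV also ships an Eigen interop header with cv2eigen(); a minimal sketch, assuming rot and trsl hold CV_64F data:

#include <opencv2/core/eigen.hpp>

Eigen::Matrix3d R;
Eigen::Vector3d t;
cv::cv2eigen(rot, R);    // copies the 3x3 rotation
cv::cv2eigen(trsl, t);   // copies the 3x1 translation

Eigen::Isometry3d trsf = Eigen::Isometry3d::Identity();
trsf.linear() = R;
trsf.translation() = t;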
I want to multiply each element of a Vec3 by 2 in OpenCV, as we do in Matlab simply with ".*". I searched a lot but didn't find a command for this. Is there such a command in OpenCV or not? Thanks in advance for any help.
This answer would suggest that in C++ you can simply use the * operator.
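A minimal sketch of that (assuming a float Mat; the same operator works for Vec3f):

cv::Mat v = (cv::Mat_<float>(3, 1) << 1, 2, 3);
cv::Mat scaled = 2 * v;  // every element multiplied by 2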
If you are using Java I don't think this is possible, you can only multiply a Mat by another Mat.
So you would need to create a new Mat instance of the same size and type, initialised with the scalar value you want to multiply by.
You can easily create a function to do this:
public Mat multiplyScalar(Mat m, double i)
{
    return m.mul(new Mat((int)m.size().height, (int)m.size().width, m.type(), new Scalar(i)));
}
Then x = multiplyScalar(x, 5); will multiply each element by 5.
I'm trying to use cvPerspectiveTransform to transform four 2D points. I already have the transformation matrix (3x3) from cvFindHomography, but I can't figure out what kind of structure to pass in without running into errors.
Would anybody be so kind as to show me how to do it with these points?
x:y
0:0
640:0
0:480
640:480
I'm using OpenCV 2.4.0 on Win.
This is one way to initialize your matrices correctly. It's probably not the most elegant, but it works:
CvMat* input  = cvCreateMat(1, 4, CV_32FC2);   // 1x4, 2 channels: interleaved (x,y) pairs
CvMat* output = cvCreateMat(1, 4, CV_32FC2);
float data[8] = {0,0, 640,0, 0,480, 640,480};  // the four corners in x,y order
for (int i = 0; i < 8; i++)
{
    input->data.fl[i] = data[i];
}
cvPerspectiveTransform(input, output, matrix_from_cvFindHomography);
The C++ API offers a more intuitive implementation. Many OpenCV functions, like perspectiveTransform, accept vectors of points as inputs, which can be initialized in this manner:
std::vector<cv::Point2f> inputs;
std::vector<cv::Point2f> outputs;
inputs.push_back(cv::Point2f(0,0));
inputs.push_back(cv::Point2f(640,0));
inputs.push_back(cv::Point2f(0,480));
inputs.push_back(cv::Point2f(640,480));
cv::perspectiveTransform(inputs, outputs, matrix_from_findHomography);
Assuming you have a 3x3 cv::Mat of floats, you can convert it as follows (if you want double, change all the f's to d's):
cv::Matx33f transform(your_cv_Mat);
cv::Matx31f pt1(0, 0, 1);    // points in homogeneous coordinates (x, y, 1)
cv::Matx31f pt2(640, 0, 1);
...
pt1 = transform*pt1;
pt2 = transform*pt2;
...
Make sure you normalize by the third coordinate; read up on homogeneous coordinates if that does not make sense:
pt1 *= 1/pt1(2);
pt2 *= 1/pt2(2);
...
cv::Point2f final_pt1(pt1(0),pt1(1));
cv::Point2f final_pt2(pt2(0),pt2(1));
You do not need to do this with Matx; it will work with cv::Mat just as well. Personally, I like Matx for working with transforms because its size and type are easier to keep track of, and its contents can be viewed more easily in the debugger.
I have two programs. One accepts an image as a matrix and does processing like tracking objects using contour detection. The second takes the image as an array (IplImage) and counts the number of objects. But I want to merge these programs so I can count as well as track the objects. How can I merge them?
In the following code, left is a CvMat and left1 is an IplImage. In this way you can manually convert a CvMat to an IplImage:
for (int y = 0; y < height1; y++)
{
    // source row in the CvMat, destination row in the IplImage
    uchar* leftdata  = (uchar*)(left->data.ptr + y * left->step);
    uchar* left1data = (uchar*)(left1->imageData + y * left1->widthStep);
    for (int x = 0; x < width1; x++)
        left1data[x] = leftdata[x];
}
Or see this related question: How to convert a Mat variable type in an IplImage variable type in OpenCV 2.0?
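If you only need an IplImage view of the same data (no copy), the old C API has cvGetImage, which fills in an IplImage header for an existing CvMat; a minimal sketch:

IplImage header;
IplImage* left1 = cvGetImage(left, &header);  // left1 shares the pixel data of left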
Can anyone help me with how to get the absolute value of a complex matrix? The matrix contains the real values in one channel and the imaginary values in another channel. If possible, please give me an example.
Thanks in advance,
Arangarajan
Let's assume you have 2 components, X and Y: two matrices of the same size and type. In your case they hold the real/imaginary values.
// n rows, m cols, type float; we assume the following matrices are filled
cv::Mat X(n,m,CV_32F);
cv::Mat Y(n,m,CV_32F);
You can compute the absolute value of each complex number like this:
// create a new matrix for the result
cv::Mat A(n, m, CV_32F, cv::Scalar(0.0));
for (int i = 0; i < n; i++) {
    // pointers to the values of row i
    const float* rowi_x = X.ptr<float>(i);
    const float* rowi_y = Y.ptr<float>(i);
    float* rowi_a = A.ptr<float>(i);
    for (int j = 0; j < m; j++) {  // j < m, not j <= m, to stay in bounds
        rowi_a[j] = std::sqrt(rowi_x[j]*rowi_x[j] + rowi_y[j]*rowi_y[j]);
    }
}
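For what it's worth, OpenCV already provides this as a one-liner via cv::magnitude; a minimal sketch:

cv::Mat A;
cv::magnitude(X, Y, A);  // A(i,j) = sqrt(X(i,j)^2 + Y(i,j)^2)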
If you look in OpenCV's phasecorr.cpp module, there's a function called magSpectrums that does this already and also handles conjugate-symmetric packed DFT results. I don't think it's exposed in a header file, but it's easy enough to copy. If you care about speed, make sure you compile with any available SIMD options turned on, because they can make a big difference for this calculation.
I am trying to compute the DFT of a single-channel image, and since cvDft expects complex values, I was advised to merge the original image with another image of all zeros, so that the latter is treated as the imaginary part.
My problem comes when using the cvMerge function:
Mat tmp = imread(filename, 0);
if (tmp.empty())
{
    cout << "Usage: dft <image_name>" << endl;
    return -1;
}
Mat Result(tmp.rows, tmp.cols, CV_64F, 2);
Mat tmp1(tmp.rows, tmp.cols, CV_64F, 0);
Mat image(tmp.rows, tmp.cols, CV_64F, 2);
cvMerge(tmp, tmp1, image);
It gives me the following error: cannot convert cv::Mat to CvArr.
Anyone could help me? thanks!
1) It seems like you're mixing up two different styles of OpenCV code:
cv::Mat (Mat) is a C++ class from the new version of OpenCV, while cvMerge is a C function from the old version.
Instead of using cvMerge, use merge.
2) You're trying to merge a matrix (tmp) of type CV_8U (probably) with a CV_64F one.
Use convertTo to get tmp as CV_64F.
3) Why are your Result and image Mats (the destination Mats) initialized to cv::Scalar(2)? I think you're misusing the constructor parameters; see here for more info.
4) Your image Mat is a single-channel Mat, but you wanted it as a 2-channel Mat (as mentioned in the question); change the declaration to
Mat image(tmp.rows,tmp.cols,CV_64FC2);
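Putting the four points together, here is a minimal sketch of the corrected C++ flow (assuming filename is defined as in the question):

cv::Mat tmp = cv::imread(filename, 0);              // single-channel CV_8U image
cv::Mat real;
tmp.convertTo(real, CV_64F);                        // 2) promote to CV_64F
cv::Mat imag = cv::Mat::zeros(tmp.size(), CV_64F);  // all-zero imaginary part
cv::Mat planes[] = { real, imag };
cv::Mat image;
cv::merge(planes, 2, image);                        // 1) & 4) two-channel complex input
cv::dft(image, image);                              // in-place DFT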