I have the following code:
Mat image = cvLoadImage("our path ");
How do I convert the result to a two-dimensional array of uchar? Please help me understand how to read it as bytes and put it in the array.
Can you also tell me how to display the result as an array? I am having trouble coding that part.
First, cvLoadImage() returns IplImage*, not Mat. It is implicitly converted to Mat, so you can use the result, but the underlying image won't be released properly, leading to a memory leak. You should use imread instead.
As for the question itself, you can use the ptr function. For example:
uchar* p = image.ptr<uchar>(i);
p now points to the i-th row of image. You can work with it as a usual array of uchar (read, change, copy, and so on).
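For example, a minimal sketch putting both suggestions together (the file path is a placeholder, and the image is loaded as grayscale so each element is a single uchar):

#include <opencv2/opencv.hpp>

int main()
{
    // Load as a single-channel (grayscale) image so each pixel is one uchar.
    cv::Mat image = cv::imread("path/to/image.png", 0);
    if (image.empty())
        return -1;

    for (int i = 0; i < image.rows; ++i)
    {
        uchar* p = image.ptr<uchar>(i);   // pointer to the i-th row
        for (int j = 0; j < image.cols; ++j)
        {
            uchar value = p[j];           // read pixel (i, j)
            p[j] = 255 - value;           // ...or modify it in place
        }
    }
    return 0;
}

Since the rows of a Mat may be padded, use ptr row by row rather than assuming the whole buffer is contiguous.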
I am doing a 6-DoF transformation with the RANSAC implementation in OpenCV, and I now want to convert two cv::Mat matrices to an Eigen::Isometry3d, but I couldn't find good examples of this.
e.g.
cv::Mat rot;
cv::Mat trsl;
// rot is 3-by-3 and trsl is a 3-by-1 vector.
Eigen::Isometry3d trsf;
trsf.rotation = rot;
trsf.translation = trsl; // I know trsf has these two parts, but this doesn't seem to be the correct way to combine them.
Can anyone give me a hand? Thanks.
Essentially, you need an Eigen::Map to read the OpenCV data and store it into the parts of your trsf:
typedef Eigen::Matrix<double, 3, 3, Eigen::RowMajor> RMatrix3d;
Eigen::Isometry3d trsf;
trsf.linear() = RMatrix3d::Map(reinterpret_cast<const double*>(rot.data));
trsf.translation() = Eigen::Vector3d::Map(reinterpret_cast<const double*>(trsl.data));
You need to be sure that rot and trsl indeed hold double data (perhaps consider using cv::Mat_<double> instead).
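Putting it together, a minimal sketch assuming rot and trsl hold contiguous double data (the helper name cvToIsometry is just illustrative):

#include <opencv2/core/core.hpp>
#include <Eigen/Geometry>

typedef Eigen::Matrix<double, 3, 3, Eigen::RowMajor> RMatrix3d;

Eigen::Isometry3d cvToIsometry(const cv::Mat_<double>& rot, const cv::Mat_<double>& trsl)
{
    Eigen::Isometry3d trsf = Eigen::Isometry3d::Identity();
    // Map the OpenCV buffers directly; cv::Mat stores doubles row-major, hence RMatrix3d.
    trsf.linear() = RMatrix3d::Map(reinterpret_cast<const double*>(rot.data));
    trsf.translation() = Eigen::Vector3d::Map(reinterpret_cast<const double*>(trsl.data));
    return trsf;
}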
I want to create a 1-D array of exactly 100 values, and at each index store the index of another array. If I use std::vector<int16_t> someVector, how do I ensure that someVector has exactly 100 values, and then add, say, the first value at location 48, like someVector[48] = 29322, and so on?
As an alternative I tried creating a 1-D Mat with Mat someArray(1,100,CV_16UC1,Scalar(9999)). Now when I try to retrieve the value at index 48 using int retrievedValue = someArray.row(0).col(48), it says it cannot convert from Mat to int.
I realize I'm doing something crazy for something very simple, but please help.
When you initialize vector you can set its size:
std::vector<int16_t> someVector(100);
This way it will be created as an array of 100 elements. But don't forget that the size of a vector can still change later.
As for Mat, operations like row() or col() give you a sub-matrix of the initial matrix, so the code you wrote returns a 1x1 matrix, not a short. If you want to access a single element of the matrix, it should be:
int retrievedValue = someArray.at<ushort>(0,48);
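For example, a minimal sketch showing both approaches side by side (reusing the index and value from the question):

#include <opencv2/core/core.hpp>
#include <vector>
#include <stdint.h>

int main()
{
    // Fixed-size vector of 100 elements, value-initialized to 0.
    std::vector<int16_t> someVector(100);
    someVector[48] = 29322;               // store a value at index 48

    // Equivalent 1x100 Mat, initialized to 9999.
    cv::Mat someArray(1, 100, CV_16UC1, cv::Scalar(9999));
    someArray.at<ushort>(0, 48) = 29322;  // write element (0, 48)
    int retrievedValue = someArray.at<ushort>(0, 48);

    return retrievedValue == 29322 ? 0 : 1;
}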
I am new to OpenCV, so please help me solve this basic query. I am trying to find the maximum value of a Mat variable. I tried to use max_element and minMaxLoc, but I end up with errors, as the function keeps saying the data type doesn't match. I checked it over and over again, but without success. Here is my code.
abs_dst is the Mat variable:
double *estimate,*min;
CvPoint *minLoc,*maxLoc;
Size s = abs_dst.size();
int rows = s.height;
int cols = s.width;
double imagearray[rows][cols] = abs_dst.data();
minMaxLoc(imagearray,min,estimate,minLoc,maxLoc);
I even tried passing the Mat variable abs_dst directly, but have not succeeded. There is an optional input mask argument, which I have ignored, as I do not need it.
Do the following:
Point[] Mat_To_Point = Your_Mat_Variable.toArray();
Then you can sort your array.
I think I got the answer. Thanks for your efforts. The problem is that minMaxLoc doesn't take RGB image arrays, as they have 3 channels. I had to convert abs_dst to grayscale.
Secondly,
it is not
CvPoint *minLoc,maxLoc;
it is
Point *minLoc,*maxLoc;
I need not convert it to an array, as converting to grayscale directly gives me a single-channel image, which is enough for minMaxLoc. My apologies for my own mistakes, and thanks once again for your efforts.
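For reference, a minimal sketch of the corrected usage (the helper name maxOfImage is only illustrative, and the input is assumed to be a BGR image):

#include <opencv2/opencv.hpp>

// Returns the maximum value of a BGR image by converting it to grayscale first.
double maxOfImage(const cv::Mat& abs_dst)
{
    cv::Mat gray;
    cv::cvtColor(abs_dst, gray, CV_BGR2GRAY);   // minMaxLoc needs a single-channel input

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(gray, &minVal, &maxVal, &minLoc, &maxLoc);
    return maxVal;                              // maxLoc holds its position if needed
}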
I have two programs: one accepts an image as a matrix (Mat) and does processing such as tracking objects using contour detection; the second takes the image as an array (IplImage) and counts the number of objects. I want to merge these programs so that I can both count and track the objects. How can I merge them?
In the following code, left is a CvMat* and left1 is an IplImage*. In this way you can manually copy a CvMat into an IplImage:
for (int y = 0; y < height1; y++)
{
    uchar* leftdata = (uchar*)(left->data.ptr + y*left->step);           // row y of the CvMat
    uchar* left1data = (uchar*)(left1->imageData + y*left1->widthStep);  // row y of the IplImage
    for (int x = 0; x < width1; x++)
        left1data[x] = leftdata[x];
}
Or here is another link: How to convert a Mat variable type in an IplImage variable type in OpenCV 2.0?
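As an alternative sketch, assuming the OpenCV 2.x C++ API is available, both conversions can also be done through headers that share the pixel data, so no element-by-element copy is needed:

#include <opencv2/opencv.hpp>

cv::Mat mat = cv::imread("path/to/image.png");   // placeholder path

// Mat -> IplImage header (shares the same pixel data, no copy)
IplImage ipl = mat;

// IplImage -> Mat header (also shares data; pass copyData=true to get a deep copy)
cv::Mat back = cv::cvarrToMat(&ipl);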
I am trying to compute the DFT of a single-channel image, and since cvDft expects complex values, I was advised to merge the original image with another image of all 0's so that the latter is treated as the imaginary part.
My problem comes when using the cvMerge function:
Mat tmp = imread(filename,0);
if( tmp.empty() )
{
    cout << "Usage: dft <image_name>" << endl;
    return -1;
}
Mat Result(tmp.rows,tmp.cols,CV_64F,2);
Mat tmp1(tmp.rows,tmp.cols,CV_64F, 0);
Mat image(tmp.rows,tmp.cols,CV_64F,2);
cvMerge(tmp,tmp1,image);
It gives me the following error: cannot convert cv::Mat to CvArr.
Could anyone help me? Thanks!
1) It seems like you're mixing up two different styles of OpenCV code:
cv::Mat (i.e. Mat) is a C++ class from the new version of OpenCV, while cvMerge is a C function from the old version.
Instead of using cvMerge, use merge.
2) You're trying to merge a matrix (tmp), probably of type CV_8U, with a CV_64F one.
Use convertTo to get tmp as CV_64F.
3) Why are your Result and image Mats (the destination Mats) initialized to cv::Scalar(2)? I think you're misusing the constructor parameters. See here for more info.
4) Your image Mat is a single-channel Mat, but you want it to be a 2-channel Mat (as mentioned in the question). Change the declaration to
Mat image(tmp.rows,tmp.cols,CV_64FC2);
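Putting these points together, a minimal sketch of how the preparation could look (the file name is a placeholder, not the original program's):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat tmp = cv::imread("image.png", 0);           // placeholder name, loaded as grayscale
    if (tmp.empty())
    {
        std::cout << "Usage: dft <image_name>" << std::endl;
        return -1;
    }

    cv::Mat real;
    tmp.convertTo(real, CV_64F);                         // 2) bring the data to CV_64F
    cv::Mat imag = cv::Mat::zeros(tmp.size(), CV_64F);   // all-zero imaginary part

    cv::Mat planes[] = { real, imag };
    cv::Mat image;                                       // 4) becomes a 2-channel CV_64FC2 Mat
    cv::merge(planes, 2, image);                         // 1) C++ merge instead of cvMerge

    cv::dft(image, image);                               // in-place complex-to-complex DFT
    return 0;
}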