OpenCV Error: Bad argument <Unknown array type> in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697

I'm currently trying to rectify stereo cameras to create a disparity map. Unfortunately, I'm having trouble getting past the stereo rectification step because I keep receiving the error
"OpenCV Error: Bad argument in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697."
The process is complicated by the fact that I'm not the one who calibrated the cameras, nor do I have access to the cameras used to record the videos. I was given all of the calibration parameters (intrinsics, distortion coefficients, rotation matrix, and translation vector). As you can see below, I've tried to turn these directly into CvMats and use them that way, but I get an error when I try to actually use them.
Thanks in advance.
CvMat li, lm, ri, rm, r, t, Rl, Rr, Pl, Pr;
double init_li[3][3] =
{ {477.984984743, 0, 316.17458671},
{0, 476.861945645, 253.45073026},
{0, 0 ,1} };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[3][3] =
{{478.640315323, 0, 299.957994781},
{0, 477.898896505, 251.665771947},
{0, 0, 1}};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[3][3] =
{{0.999973709051976, 0.00129700728791757, -0.00713435189275776},
{-0.00132096594266573, 0.999993501087837, -0.00335452397041856},
{0.00712995468519435, 0.00336386001267643, 0.99996892361313}};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cvInitMatHeader(&li, 3, 3, CV_64FC1, init_li);
cvInitMatHeader(&lm, 5, 1, CV_64FC1, init_lm);
cvInitMatHeader(&ri, 3, 3, CV_64FC1, init_ri);
cvInitMatHeader(&rm, 5, 1, CV_64FC1, init_rm);
cvInitMatHeader(&r, 3, 3, CV_64FC1, init_r);
cvInitMatHeader(&t, 3, 1, CV_64FC1, init_t);
cvInitMatHeader(&Rl, 3, 3, CV_64FC1); // note: headers only; no data is allocated
cvInitMatHeader(&Rr, 3, 3, CV_64FC1);
cvInitMatHeader(&Pl, 3, 4, CV_64FC1);
cvInitMatHeader(&Pr, 3, 4, CV_64FC1);
//frame is a cv::Mat holding the first frame of the video.
CvSize imageSize = frame.size();
imageSize.width /= 2;
//IT BREAKS HERE
cvStereoRectify(&li, &ri, &lm, &rm, imageSize, &r, &t, &Rl, &Rr, &Pl, &Pr);

So, you've been bitten by the C API? Why don't you just turn your back on it?
Use the C++ API whenever possible; please don't start learning OpenCV with the old (1.0), deprecated API!
double init_li[9] =
{ 477.984984743, 0, 316.17458671,
0, 476.861945645, 253.45073026,
0, 0 ,1 };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[9] =
{ 478.640315323, 0, 299.957994781,
0, 477.898896505, 251.665771947,
0, 0, 1};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[9] =
{ 0.999973709051976, 0.00129700728791757, -0.00713435189275776,
-0.00132096594266573, 0.999993501087837, -0.00335452397041856,
0.00712995468519435, 0.00336386001267643, 0.99996892361313};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cv::Mat li(3, 3, CV_64FC1, init_li);
cv::Mat lm(5, 1, CV_64FC1, init_lm);
cv::Mat ri(3, 3, CV_64FC1, init_ri);
cv::Mat rm(5, 1, CV_64FC1, init_rm);
cv::Mat r(3, 3, CV_64FC1, init_r);
cv::Mat t(3, 1, CV_64FC1, init_t);
cv::Mat Rl, Rr, Pl, Pr, Q; // note: outputs need no pre-allocation.
//frame is a cv::Mat holding the first frame of the video.
cv::Size imageSize = frame.size();
imageSize.width /= 2;
//IT won't break HERE
// note: in the C++ API the order is cam1, dist1, cam2, dist2, and Q is required
cv::stereoRectify(li, lm, ri, rm, imageSize, r, t, Rl, Rr, Pl, Pr, Q);
// no need ever to release or care about anything

Ok, so I figured out the answer. The problem was that I had only initialized headers for Rl, Rr, Pl, and Pr, but no memory was allocated for the data itself. I was able to fix it as follows:
double init_Rl[3][3];
double init_Rr[3][3];
double init_Pl[3][4];
double init_Pr[3][4];
cvInitMatHeader(&Rl, 3, 3, CV_64FC1, init_Rl);
cvInitMatHeader(&Rr, 3, 3, CV_64FC1, init_Rr);
cvInitMatHeader(&Pl, 3, 4, CV_64FC1, init_Pl);
cvInitMatHeader(&Pr, 3, 4, CV_64FC1, init_Pr);
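Alternatively (a sketch, in which the matrices become CvMat* instead of CvMat), cvCreateMat allocates the header and the data together, though C-API matrices then have to be released manually:
CvMat* Rl = cvCreateMat(3, 3, CV_64FC1);
CvMat* Rr = cvCreateMat(3, 3, CV_64FC1);
CvMat* Pl = cvCreateMat(3, 4, CV_64FC1);
CvMat* Pr = cvCreateMat(3, 4, CV_64FC1);
// ... use them (pass Rl, Rr, Pl, Pr directly, without &), then:
cvReleaseMat(&Rl); cvReleaseMat(&Rr);
cvReleaseMat(&Pl); cvReleaseMat(&Pr);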
That said, I have a theory that I might have been able to use cv::stereoRectify with cv::Mats as parameters, which would have made life much easier. I don't know whether cv::stereoRectify exists, but it seems that versions of many of the other C functions are in the cv namespace. In case it's hard to tell, I'm very new to OpenCV.

Related

OpenCV fisheye::projectPoints assertion failed

I want to project a single point (-1450, -1660) onto an image.
I am using OpenCV 4.0.1 with C++.
I have the camera matrix and distortion coefficients, and my code is:
vector <Point3f> inputpoints;
Point3f myPoint;
myPoint.x = -1450;
myPoint.y = -1660;
myPoint.z = 0;
inputpoints.push_back(myPoint);
vector<Point2f> outputpoints;
vector<Point3f> tvec;
tvec.push_back(Point3f(0, 0, 0));
vector<Point3f> rvec;
rvec.push_back(Point3f(0, 0, 0));
double mydata[9] = { 3.3202343554882879e+02, 1., 6.4337059696010670e+02, 0, 3.3196938477610536e+02, 5.3844814394773562e+02, 0., 0., 1. };
Mat mycameraMatrix = Mat(3, 3, CV_64F, mydata);
double mydata2[4] = { -1.1129472191078109e-03, 4.9443845791693870e-02,
-7.2244333582166609e-03, -1.7309984187889034e-03 };
Mat mydiscoff = Mat(4, 1, CV_64F, mydata2);
Mat newCamMat1 = Mat(3, 3, CV_64F);
cv::fisheye::projectPoints(inputpoints, rvec, tvec, mycameraMatrix, mydiscoff, outputpoints);
when I run the program I get this exception
OpenCV(4.0.1) Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in cv::debug_build_guard::_OutputArray::create, file c:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\matrix_wrap.cpp, line 1395
I changed the type of the camera matrix and distortion coefficients to CV_32F, but I still got the same error. I'm a real beginner in OpenCV, so can anyone tell me what caused this exception?
I know the rvec should be 3x3, but I just followed someone else's code, which said it can be written this way.
Okay, the problem was that projectPoints and fisheye::projectPoints differ in the order of their parameters, so I was passing arguments in the order that belongs to projectPoints.
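For reference, a minimal sketch of the corrected call using the variables above (in fisheye::projectPoints the output image points come second, right after the object points, before rvec and tvec):
cv::fisheye::projectPoints(inputpoints, outputpoints, rvec, tvec, mycameraMatrix, mydiscoff);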

How compute divergence and gradient of image in OpenCV?

I know that to implement the following
I would use this code:
Mat o_k;
Mat Lapl;
double lambda;
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Lapl = 1.0 - 2.0*lambda*Lapl;
However, I am trying to implement in OpenCV the following equation:
I know the div, or divergence, term would be like this, right?
int ksize = parser.get<int>("ksize");
int scale = parser.get<int>("scale");
int delta = parser.get<int>("delta");
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
div = sobelx + sobely;
Where res is the result of the term in parentheses. But how do I get the term in parentheses?
Or am I doing this wrong? Would div above actually be equal to the gradient of res? If so, then how do I get the divergence?
EDIT:
According to this link, the magnitude can also be computed as mag = abs(x) + abs(y): https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html#sobel-derivatives
And since the divergence of a gradient is the Laplacian, would the code below be equivalent to the 2nd equation?
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
convertScaleAbs( sobelx, abs_grad_x );
convertScaleAbs( sobely, abs_grad_y );
/// Total Gradient (approximate)
Mat mag;
addWeighted( abs_grad_x, 1, abs_grad_y, 1, 0, mag);
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Mat top;
top = lambda * Lapl;
Mat result;
divide(top, mag, result);
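For reference, if the second equation's parenthesized term is the normalized gradient (as in the usual total-variation formulation, div(∇res/|∇res|)), a minimal sketch would differentiate the components of the vector field separately and then sum them, assuming res is CV_64F:
Mat gx, gy, gmag;
Sobel(res, gx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, gy, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
magnitude(gx, gy, gmag); // |grad res|
gmag += 1e-8; // avoid division by zero
Mat nx = gx / gmag, ny = gy / gmag; // normalized gradient field
Mat dnx_dx, dny_dy;
Sobel(nx, dnx_dx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(ny, dny_dy, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
Mat divergence = dnx_dx + dny_dy; // div(f) = dfx/dx + dfy/dy
Note that sobelx + sobely by itself is just the sum of the two first derivatives of res, not the divergence of its gradient (that would be the Laplacian, as the edit suggests).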

OpenCV: Why projectPoints() is giving me weird results?

Why is my code snippet giving me weird results for projected points?
//Generate the one 3D Point which i want to project onto 2D plane
vector<Point3d> points_3d;
points_3d.push_back(Point3d(10, 10, 100));
Mat points3d = Mat(points_3d);
//Generate the identity matrix and zero vector for rotation matrix and translation vector
Mat rvec = (Mat_<double>(3, 3) << (1, 0, 0, 0, 1, 0, 0, 0, 1));
Mat tvec = (Mat_<double>(3, 1) << (0, 0, 0));
//Generate a camera intrinsic matrix
Mat K = (Mat_<double>(3,3)
<< (1000, 0, 50,
0, 1000, 50,
0, 0, 1));
//Project the 3D Point onto 2D plane
Mat points_2d;
projectPoints(points_3d, rvec, tvec, K, Mat(), points_2d);
//Output
cout << points_2d;
I get as projected 2D Point
points_2d = (-1.708699427820658e+024, -9.673395654445999e-026)
If I calculate it on paper on my own, I'm expecting the point points_2d = (150, 150), using the standard pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy.
Add cv::Rodrigues(InputArray src, OutputArray dst, OutputArray jacobian = noArray()). OpenCV uses a rotation vector inside the calculation instead of a rotation matrix. The Rodrigues transformation lets you convert a rotation vector to a matrix and a matrix to a vector. Below I've attached part of your code with one line added.
//Generate the identity matrix and zero vector for rotation matrix and translation vector
Mat rvec,rMat = (Mat_<double>(3, 3) << (1, 0, 0, 0, 1, 0, 0, 0, 1));
Rodrigues(rMat,rvec); //here
Mat tvec = (Mat_<double>(3, 1) << (0, 0, 0));
And it should work properly. It is also better to define the distortion coefficients explicitly as
Mat dist = Mat::zeros(8, 1, CV_32F);
EDIT:
One more remark: you have a little syntax error in the matrix initialization:
cv::Mat rvec,rMat = (cv::Mat_<double>(3, 3) << /* ( */1, 0, 0, 0, 1, 0, 0, 0, 1); //you had error here
cv::Rodrigues(rMat, rvec);
cv::Mat tvec = (cv::Mat_<double>(3, 1) <</* ( */ 0, 0, 0); //and here
It works on my computer after changes.
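For completeness, a minimal sketch of the corrected snippet (same point, pose, and K as in the question), which should print approximately [150, 150]:
std::vector<cv::Point3d> points_3d;
points_3d.push_back(cv::Point3d(10, 10, 100));
cv::Mat rMat = cv::Mat::eye(3, 3, CV_64F);
cv::Mat rvec;
cv::Rodrigues(rMat, rvec); // identity rotation -> zero rotation vector
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 50, 0, 1000, 50, 0, 0, 1);
std::vector<cv::Point2d> points_2d;
cv::projectPoints(points_3d, rvec, tvec, K, cv::Mat(), points_2d);
std::cout << points_2d[0]; // expect [150, 150]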

Kalman Filter : some doubts

I have several questions:
In the example given in the OpenCV documentation:
/* generate measurement */
cvMatMulAdd( kalman->measurement_matrix, state, measurement, measurement );
Is this correct?
In the tutorial An Introduction to the Kalman Filter by Welch and Bishop, Equation 1.2 says measurement = H*state + measurement noise.
These two don't seem to be the same.
I was trying to implement bouncing-ball tracking for a single ball.
I tried the following (please point out if I am doing it incorrectly).
For the measurement I am measuring two things: (a) x and (b) y of the centroid of the ball.
I am only mentioning the lines that differ from the example given in the OpenCV documentation.
CvKalman* kalman = cvCreateKalman( 5, 2, 0 );
const float A[] = { 1, 0, 1, 0, 0,
0, 1, 0, 1, 0,
0, 0, 1, 0, 0,
0, 0, 0, 1, 1,
0, 0, 0, 0, 1};
CvMat* state = cvCreateMat( 5, 1, CV_32FC1 );
CvMat* measurement = cvCreateMat( 2, 1, CV_32FC1 );
//initialize the state of kalman filter
state->data.fl[0] = mean_c;
state->data.fl[1] = mean_r;
state->data.fl[2] = mean_c - prev_mean_c;
state->data.fl[3] = mean_r - prev_mean_r;
state->data.fl[4] = 9.81;
After initialization, this is the line that crashes:
cvMatMulAdd( kalman->transition_matrix, state,
kalman->process_noise_cov, state );
In that line the example just uses the variable measurement to store noise. See the previous line:
cvRandArr( &rng, measurement, CV_RAND_NORMAL, cvRealScalar(0),cvRealScalar(sqrt(kalman->measurement_noise_cov->data.fl[0])) );
You should change the dimensions of the H matrix as well. It must be 2 by 5 to make it possible to calculate H*state + measurement noise. You probably get an error in the line
memcpy( cvkalman->measurement_matrix->data.fl, H, sizeof(H));
because in the initial example cvkalman->measurement_matrix and H are allocated as 4 by 4 matrices, and you shrank cvkalman->measurement_matrix only, to 2 by 5 (4*4 is more than 2*5), so the memcpy writes past the end of the matrix data.
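For the 5-state, 2-measurement setup in the question (measuring the x and y of the centroid), a matching measurement matrix would be 2 by 5; a sketch, using the question's kalman variable:
const float H[] = { 1, 0, 0, 0, 0,
0, 1, 0, 0, 0 };
memcpy( kalman->measurement_matrix->data.fl, H, sizeof(H) ); // 2x5: picks x and y out of the state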

Where in my code have I broken the Mat equivalence rule?

I'm trying to achieve background subtraction in OpenCV 2.2 using the cv namespace (Qt 4.7). I have the following code, which compiles fine, but at runtime the program breaks because one Mat doesn't match the other, and I can't find where; I'm currently going through the API reference to try and find it.
cvtColor( mcolImage, mcolImage, CV_BGR2RGB);
cvtColor( mcolImage, gscaleImage, CV_RGB2GRAY);
acc = Mat(Size(440,320), CV_32FC3);
accSQ = Mat(Size(440,320), CV_32FC3);
//we accumulate into a Mat to get an frames average
Mat avg;
accumulateWeighted(gscaleImage, acc, 3.0, Mat());
accumulateSquare(gscaleImage, accSQ, Mat());
multiply(acc, acc, avg, 1);
Mat sigma, sigmaSQRT;
subtract(accSQ, avg, sigmaSQRT, Mat());
sqrt(sigmaSQRT, sigma); //Holds the standard deviation
Mat fgImage; //hold the foreground image
cv::absdiff(avg,gscaleImage, fgImage);
//GaussianBlur(gscaleImage, gscaleImage, Size(7,7), 2, 2 );
Mat buff ;
//convert to black and white
threshold(fgImage, buff, 75, 100, THRESH_BINARY); // note: signature is (src, dst, thresh, maxval, type)
dilate(buff, buff, Mat(3, 3, CV_8UC1), Point(-1, -1), 1, BORDER_CONSTANT, Scalar(1.0, 1.0, 1.0, 0));
erode(buff, buff, Mat(3, 3, CV_8UC1), Point(-1, -1), 1, BORDER_CONSTANT, Scalar(1.0, 1.0, 1.0, 0));
//rectangle(gscaleImage, cvPoint(100, 300), cvPoint(200, 100), cvScalar(255, 255, 255, 0), 1);
QImage colImagetmp((uchar*)mcolImage.data, mcolImage.cols, mcolImage.rows, mcolImage.step,
QImage::Format_RGB888 ); //Colour
QImage gscaleImagetmp ((uchar*)gscaleImage.data, gscaleImage.cols, gscaleImage.rows, gscaleImage.step,
QImage::Format_Indexed8); //Greyscale. I hope
QImage bwImagetmp((uchar*)buff.data, buff.cols, buff.rows, buff.step,
QImage::Format_Indexed8);
//Setup a colour table for the greyscale image
QVector<QRgb> colorTable;
for (int i = 0; i < 256; i++) colorTable.push_back(qRgb(i, i, i));
bwImagetmp.setColorTable(colorTable);
gscaleImagetmp.setColorTable(colorTable);
ui.intDisplay->setPixmap(QPixmap::fromImage(bwImagetmp));
ui.bwDisplay->setPixmap(QPixmap::fromImage(gscaleImagetmp));
ui.colDisplay->setPixmap( QPixmap::fromImage(colImagetmp ));
Thanks in advance for the help.
Edit:
After going through the code I found that absdiff(avg, gscaleImage, fgImage); is where the program is crashing. I think it may be crashing on the second parameter, but I'm not sure.
I solved it (I think) by declaring a new temporary Mat and converting that specifically (using avg.convertTo()) to match the gscaleImage type and size.
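For what it's worth, a minimal sketch of that fix as I understand it (an assumption on my part: the accumulators should be CV_32FC1 to match the single-channel grayscale input, since accumulateWeighted wants matching channel counts and convertTo changes depth but not channels):
acc = Mat::zeros(Size(440, 320), CV_32FC1);
accSQ = Mat::zeros(Size(440, 320), CV_32FC1);
// ... accumulate and compute avg as above, then:
Mat avg8u;
avg.convertTo(avg8u, CV_8U); // absdiff requires matching type and size
absdiff(avg8u, gscaleImage, fgImage);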
