I have some problems computing the rectification of stereo pairs with OpenCV: stereoCalibrate returns a high RMS error and I obtain badly rectified pairs.
I tried both my own rectification program and the stereo_calib.cpp sample provided with OpenCV; both return similar RMS errors. I also ran my program on the sample stereo pairs in opencv/samples/cpp and got correctly rectified images, so I believe the problem is in the way I take the stereo pictures. Is that possible?
I use the stereo camera of an HTC Evo 3D (a 3D smartphone) to take pictures of a chessboard pattern. I have varied the number and the set of input pictures, but the smallest stereoCalibrate RMS I got was around 1.5, and the rectified images are totally wrong.
Is there any "suggested" way to take a set of pictures for calibration?
Thanks, Andrea
Check the detected chessboard pattern pairs (e.g. with drawChessboardCorners). In some cases the order of the points differs between the two images (see the figure: in the top image the pattern is recognised from right to left, in the bottom image from left to right). When that happens, the points no longer correspond to each other, which leads to a high average reprojection error in stereoCalibrate. At the same time, checking the images of each camera individually with calibrateCamera can yield a very low RMS for the intrinsics, because the differing order between the stereo images doesn't matter there.
I solved the problem by checking the found chessboard patterns and removing the image pairs where the pattern was detected in a different order. In my case this improved the RMS from 15 (with only one or a few wrong pairs) to less than 1. In cases where pairs are corrupted more often, or with higher-resolution images (more pixels), I would guess the error gets much higher.
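A minimal sketch of such a check (assuming corners_l and corners_r are the vectors returned by findChessboardCorners for the same view, and the two cameras are roughly aligned): compare the direction of the board diagonal in both images and drop the pair if they point in opposite directions.

bool sameCornerOrder( const std::vector<cv::Point2f>& cl,
                      const std::vector<cv::Point2f>& cr )
{
    // Vector from the first to the last detected corner in each image.
    cv::Point2f dl = cl.back() - cl.front();
    cv::Point2f dr = cr.back() - cr.front();
    // If the diagonals point in opposite directions, the detection order
    // is reversed in one image and the pair should be discarded.
    return dl.dot(dr) > 0;
}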
Another solution could be to use ArUco or ChArUco patterns, where the order shouldn't be ambiguous, but I haven't tested that approach.
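An untested sketch of how that could look with the opencv_contrib aruco module (the board geometry, dictionary and image name below are placeholder choices): each interpolated ChArUco corner carries a board-wide ID, so left/right correspondences can be matched by ID instead of by detection order.

#include <opencv2/aruco/charuco.hpp>

cv::Mat image = cv::imread("charuco_left.png");   // placeholder file name
cv::Ptr<cv::aruco::Dictionary> dict =
    cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
cv::Ptr<cv::aruco::CharucoBoard> board =
    cv::aruco::CharucoBoard::create(5, 7, 0.04f, 0.02f, dict);  // placeholder geometry

std::vector<int> markerIds;
std::vector<std::vector<cv::Point2f> > markerCorners;
cv::aruco::detectMarkers(image, dict, markerCorners, markerIds);

std::vector<cv::Point2f> charucoCorners;
std::vector<int> charucoIds;   // unique per board corner, identical in both views
cv::aruco::interpolateCornersCharuco(markerCorners, markerIds, image, board,
                                     charucoCorners, charucoIds);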
Check out the frequent mistakes guide here to see if you have made any of these common errors.
http://www.cvlibs.net/software/calibration/mistakes.php
Also, when calibrating a stereo camera, you may want to first calibrate each camera on its own, and then calibrate them as a stereo pair given their pre-estimated camera matrices.
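A minimal sketch of that two-stage approach (all variable names here are hypothetical; the criteria-before-flags argument order matches the older 2.4-style API used elsewhere on this page):

#include <opencv2/opencv.hpp>
using namespace cv;

void twoStageStereoCalibration( const std::vector<std::vector<Point3f> >& objPts,
                                const std::vector<std::vector<Point2f> >& imgPtsL,
                                const std::vector<std::vector<Point2f> >& imgPtsR,
                                Size imgSize )
{
    Mat K_l, dist_l, K_r, dist_r, R, T, E, F;
    std::vector<Mat> rvecs, tvecs;

    // Stage 1: estimate each camera's intrinsics on its own.
    calibrateCamera( objPts, imgPtsL, imgSize, K_l, dist_l, rvecs, tvecs );
    calibrateCamera( objPts, imgPtsR, imgSize, K_r, dist_r, rvecs, tvecs );

    // Stage 2: keep the intrinsics fixed and solve only for the extrinsics.
    stereoCalibrate( objPts, imgPtsL, imgPtsR, K_l, dist_l, K_r, dist_r, imgSize,
                     R, T, E, F,
                     TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-6 ),
                     CV_CALIB_FIX_INTRINSIC );
}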
Another alternative toolbox, besides cvlibs (linked above), is here:
http://www.vision.caltech.edu/bouguetj/calib_doc/
cheers
I take 9 calibration pictures of a chessboard with 12x18 inner corners, fix k3 = 0 to rule out fisheye-style distortion, and refine the found corners to subpixel positions. The minimum error I achieved is around 0.2 at a resolution of 640x480. I suggest looking up cornerSubPix(), TermCriteria and the flags of stereoCalibrate() in the OpenCV docs. The code looks like this:
namedWindow( "Webcaml", CV_WINDOW_AUTOSIZE );
namedWindow( "Webcamr", CV_WINDOW_AUTOSIZE );
int successes=0;
int n_boards=9;
while(successes<n_boards)
{
video_l.read( frame_l );
video_r.read( frame_r );
if( framenumber++ % board_dt == 0 && framenumber != 0)
{
bool patternfoundl = findChessboardCorners( frame_l, Size( board_w, board_h ), corners_l, CV_CALIB_CB_FILTER_QUADS + CV_CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK );
bool patternfoundr = findChessboardCorners( frame_r, Size( board_w, board_h ), corners_r, CV_CALIB_CB_FILTER_QUADS + CV_CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK );
if(patternfoundl && patternfoundr)
{
cvtColor(frame_l,frame_l_grey,CV_RGB2GRAY);
cvtColor(frame_r,frame_r_grey,CV_RGB2GRAY);
cornerSubPix(frame_l_grey,corners_l,Size(6,6),Size(-1,-1),TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER,50000000000,0.0000000000001));
cornerSubPix(frame_r_grey,corners_r,Size(6,6),Size(-1,-1),TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER,50000000000,0.0000000000001));
}
drawChessboardCorners( frame_l, Size( board_w, board_h ), corners_l, patternfoundl );
drawChessboardCorners( frame_r, Size( board_w, board_h ), corners_r, patternfoundr );
imshow( "Webcaml", frame_l );
imshow( "Webcamr", frame_r );
if( corners_l.size() == (board_w*board_h) && corners_r.size() == (board_w*board_h) )
{
cornervector_l.push_back( corners_l );
cornervector_r.push_back( corners_r );
point3dvector.push_back( point3d );
successes++;
int c = cvWaitKey( 1000 );
}
}
else
{
imshow( "Webcaml", frame_l);
imshow( "Webcamr", frame_r);
}
char c = cvWaitKey( 1 );
if( c == 27 )break;
}
destroyAllWindows();
Size imageSize( video_r.get( CV_CAP_PROP_FRAME_WIDTH ), video_r.get( CV_CAP_PROP_FRAME_HEIGHT ) );
rms_l = calibrateCamera( point3dvector, cornervector_l, imageSize,
    intrinsics_l, distortion_l, rvecs_l, tvecs_l, CV_CALIB_FIX_K3,
    cvTermCriteria( CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, DBL_EPSILON ) );
rms_r = calibrateCamera( point3dvector, cornervector_r, imageSize,
    intrinsics_r, distortion_r, rvecs_r, tvecs_r, CV_CALIB_FIX_K3,
    cvTermCriteria( CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, DBL_EPSILON ) );
cout << "intrinsic_l = " << endl << format( intrinsics_l, "C" ) << endl << endl;
cout << "intrinsic_r = " << endl << format( intrinsics_r, "C" ) << endl << endl;
cout << "distortion_l = " << endl << format( distortion_l, "C" ) << endl << endl;
cout << "distortion_r = " << endl << format( distortion_r, "C" ) << endl << endl;
cout << "tvecs_l = " << endl << format( tvecs_l[0], "C" ) << endl << endl;
cout << "rvecs_l = " << endl << format( rvecs_l[0], "C" ) << endl << endl;
// Keep the per-camera intrinsics fixed while estimating the extrinsics.
double rms_stereo = stereoCalibrate( point3dvector, cornervector_l, cornervector_r,
    intrinsics_l, distortion_l, intrinsics_r, distortion_r, imageSize, R, T, E, F,
    TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON ),
    CV_CALIB_FIX_K3 + CV_CALIB_FIX_INTRINSIC );
// Convert R to a rotation vector only for printing; stereoRectify below
// still needs the 3x3 rotation matrix.
Mat Rvec;
Rodrigues( R, Rvec );
cout << "R (rotation vector) = " << endl << format( Rvec, "C" ) << endl << endl;
cout << "T = " << endl << format( T, "C" ) << endl << endl;
cout << "E = " << endl << format( E, "C" ) << endl << endl;
cout << "F = " << endl << format( F, "C" ) << endl << endl;
cout << "RMS error l, r, stereo: " << rms_l << ", " << rms_r << ", " << rms_stereo << endl;
stereoRectify( intrinsics_l, distortion_l, intrinsics_r, distortion_r, imageSize, R, T,
    rectify_l, rectify_r, projection_l, projection_r, Q );
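To actually produce the rectified image pair from the stereoRectify results, a remap step is still needed. A minimal sketch, reusing the variables above:

// Build the per-pixel undistort+rectify maps once, then remap each frame.
Mat map1_l, map2_l, map1_r, map2_r;
initUndistortRectifyMap( intrinsics_l, distortion_l, rectify_l, projection_l,
                         imageSize, CV_16SC2, map1_l, map2_l );
initUndistortRectifyMap( intrinsics_r, distortion_r, rectify_r, projection_r,
                         imageSize, CV_16SC2, map1_r, map2_r );
Mat rect_l, rect_r;
remap( frame_l, rect_l, map1_l, map2_l, INTER_LINEAR );
remap( frame_r, rect_r, map1_r, map2_r, INTER_LINEAR );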
Hi, I did some more research and found this post on OpenCV Answers.
The phone seems to adjust the intrinsic matrix continuously by changing focus, so I am unable to obtain a good calibration.
I am doing an image stitching project using OpenCV. I have the homography H1 between img1 and img2, and the homography H2 between img2 and img3. Now I need to compute the homography between img1 and img3, but simply multiplying H1*H2 is not working.
Any ideas how to calculate the new homography between img1 and img3?
For me, computing H1 * H2 works well and gives the right results.
Here, H1 = H2_1, since it warps from image 2 to image 1.
H2 = H3_2, since it warps from image 3 to image 2.
Therefore H1 * H2 = H3_1, since it warps from image 3 to image 1.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    try {
        // H2_1 warps from image 2 to image 1, H3_2 from image 3 to image 2.
        cv::Mat H2_1 = (cv::Mat_<double>(3, 3) <<
            1.0e+00, 0.0e+00, 1.8e+03,
            0.0e+00, 1.0e+00, 5.0e+01,
            0.0e+00, 0.0e+00, 1.0e+00);
        cv::Mat H3_2 = (cv::Mat_<double>(3, 3) <<
            0.949534471, -0.00581765975, 66.7917766,
            -0.0113810490, 0.981450515, -0.672362563,
            -0.000147974585, 0.00000992770511, 1.0);
        // The composition warps directly from image 3 to image 1.
        cv::Mat H3_1 = H2_1 * H3_2;

        cv::Point2f p3(500, 500);
        std::vector<cv::Point2f> src3;
        src3.push_back(p3);

        // Two-step transform: image 3 -> image 2 -> image 1.
        std::vector<cv::Point2f> dst2;
        cv::perspectiveTransform(src3, dst2, H3_2);
        std::cout << "p3: " << p3 << std::endl;
        std::cout << "p2: " << dst2[0] << std::endl;
        std::vector<cv::Point2f> dst1;
        cv::perspectiveTransform(dst2, dst1, H2_1);
        std::cout << "p1 from p2: " << dst1[0] << std::endl;

        // One-step transform with the composed homography; this should
        // print the same point as "p1 from p2".
        dst1.clear();
        cv::perspectiveTransform(src3, dst1, H3_1);
        std::cout << "p1 from p3: " << dst1[0] << std::endl;
    }
    catch (std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
    std::cin.get();
    return 0;
}
results:
I need to access the real part of a specific element of a cv::Mat that contains std::complex<double>'s.
The OpenCV documentation shows how to create a complex cv::Mat_ here (search the page for the keyword "complex"; the first occurrence of that word is where the example is).
Here is my attempt:
Mat B = Mat_<std::complex<double> >(3, 3);
cout << B.depth() << ", " << B.channels() << endl;  // prints: 6, 2 (CV_64F depth, 2 channels)
B.at<double>(0, 0) = 0;
cout << "B(0,0) = " << B.at<double>(0, 0).real();   // compile error: double has no .real()
The Mat is filled with the type std::complex<double>, but you're requesting a double when you write B.at<double>(0, 0); the return type is double, which doesn't have a .real() method. Instead you need to retrieve the complex type which your Mat holds:
cout << "B(0,0) = " << B.at<std::complex<double> >(0, 0).real();
B(0,0) = 0
If you want to set an imaginary part, you need to actually write a complex value into the matrix; assigning through at<double> only sets the real part:
B.at<double>(0, 0) = 2;
cout << "B(0,0) = " << B.at<std::complex<double> >(0, 0);
B(0,0) = (2,0)
B.at<std::complex<double> >(0, 0) = std::complex<double> (2, 1);
cout << "B(0,0) = " << B.at<std::complex<double> >(0, 0);
B(0,0) = (2,1)
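As a side note, std::complex<double> is laid out as two doubles, so such a Mat is equivalent to a two-channel CV_64FC2 matrix and the same element can also be read as a cv::Vec2d. A small sketch:

Mat C = Mat_<std::complex<double> >(3, 3, std::complex<double>(2, 1));
cout << (C.type() == CV_64FC2) << endl;   // 1: two double channels per element
Vec2d v = C.at<Vec2d>(0, 0);              // v[0] = real part, v[1] = imaginary part
cout << v[0] << ", " << v[1] << endl;     // 2, 1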
I think the following example shows a bug in warpAffine (OpenCV 3.1 with precompiled Win64 DLLs):
Mat x(1, 20, CV_32FC1);
for (int iCol(0); iCol < x.cols; iCol++) { x.col(iCol).setTo(iCol); }  // x = [0, 1, ..., 19]

Mat crop;
Point2d c(10., 0.);
double scale(1.3);
int cropSz(11);
// Inverse map: output pixel i samples input position c.x + (i - cropSz/2)*scale.
double vals[6] = { scale, 0.0, c.x - (cropSz/2)*scale, 0.0, scale, c.y };
Mat map(2, 3, CV_64FC1, vals);
warpAffine(x, crop, map, Size(cropSz, 1), WARP_INVERSE_MAP | INTER_LINEAR);

float dx = (crop.at<float>(0, crop.cols-1) - crop.at<float>(0, 0)) / (crop.cols-1);

// Build the analytically expected result for comparison.
Mat constGrad = crop.clone().setTo(0);
for (int iCol(0); iCol < constGrad.cols; iCol++) {
    constGrad.col(iCol) = c.x + (iCol - cropSz/2)*scale;
}

Mat diff = crop - constGrad;
double err = norm(diff, NORM_INF);
if (err > 1e-4) {
    cout << "Problem:" << endl;
    cout << "computed output: " << crop << endl;
    cout << "expected output: " << constGrad << endl;
    cout << "difference: " << diff << endl;
    // Finite-difference x-derivative of the computed output.
    Mat dxImg;
    Mat dxFilt(1, 2, CV_32FC1);
    dxFilt.at<float>(0) = -1.0f;
    dxFilt.at<float>(1) = 1.0f;
    filter2D(crop, dxImg, crop.depth(), dxFilt);
    cout << "x-derivative in computed output: " << dxImg(Rect(1, 0, 10, 1)) << endl;
    cout << "Note: We expect a constant difference of 1.3" << endl;
}
Here is the program output:
Problem:
computed output: [3.5, 4.8125, 6.09375, 7.40625, 8.6875, 10, 11.3125, 12.59375, 13.90625, 15.1875, 16.5]
expected output: [3.5, 4.8000002, 6.0999999, 7.4000001, 8.6999998, 10, 11.3, 12.6, 13.9, 15.2, 16.5]
difference: [0, 0.012499809, -0.0062499046, 0.0062499046, -0.012499809, 0, 0.012499809, -0.0062503815, 0.0062503815, -0.012499809, 0]
x-derivative in computed output: [1.3125, 1.28125, 1.3125, 1.28125, 1.3125, 1.3125, 1.28125, 1.3125, 1.28125, 1.3125]
Note: We expect a constant difference of 1.3
I create an image with entries 0, 1, 2, ..., n-1 and cut out a region around (10, 0) at scale 1.3. I also create the expected image constGrad. However, the two are not the same. Moreover, since the input image has a constant derivative in the x-direction and the mapping is affine, I would also expect a constant gradient in the resulting image.
The problem is not a boundary issue; the same happens in the interior of an image. It is also not related to WARP_INVERSE_MAP.
Is this a known issue? Any comments on this?
I am trying to get a pixel value before doing some calculations. When I run the code below at location (10,10) I get the answers below, and I am not sure why the color value differs from the others. When I do the same at location (0,0) I get the same answer from all four methods. The image is black and white but has 3 channels. Or is it that for 3 channels I should always use Vec3b?
char imageName[] = "0001.png";
cv::Mat img = cv::imread(imageName);
cout << "Image rows is: " << img.rows << " cols is : " << img.cols << endl;
cout << "Image channels " << img.channels() << endl;
cout << "Image depth " << img.depth() << endl;   // CV_8U == 0

// For a 3-channel image, at<uchar>(10, 10) reads raw byte 10 of row 10,
// i.e. one channel of pixel (10, 3) -- not pixel (10, 10).
cout << (int)img.at<uchar>(10, 10) << endl;
cout << (int)img.ptr(10)[10] << endl;            // the same raw byte
Scalar intensity = img.at<uchar>(10, 10);
cout << intensity << endl;

// Per-pixel access for a 3-channel 8-bit image: one Vec3b per pixel.
Vec3b color = img.at<Vec3b>(10, 10);
cout << color << endl;
output:
Image rows is: 96 cols is : 72
Image channels 3
Image depth 0
103
103
[103, 0, 0, 0]
[164, 164, 164]
An analog framegrabber's image can be read via OpenCV only after the grabber's demo application has opened the device; otherwise the result is a black image.
The following debug code
qDebug() << "Brightness" << cap->get(CV_CAP_PROP_BRIGHTNESS);
qDebug() << "Contrast " << cap->get(CV_CAP_PROP_CONTRAST);
qDebug() << "Saturation" << cap->get(CV_CAP_PROP_SATURATION);
qDebug() << "Hue " << cap->get(CV_CAP_PROP_HUE);
qDebug() << "Gain " << cap->get(CV_CAP_PROP_GAIN);
qDebug() << "Exposure " << cap->get(CV_CAP_PROP_EXPOSURE);
qDebug() << "Width " << cap->get(CV_CAP_PROP_FRAME_WIDTH);
qDebug() << "Height " << cap->get(CV_CAP_PROP_FRAME_HEIGHT);
outputs
Brightness 5000
Contrast 5000
Saturation 4000
Hue 5000
Gain -8.58993e+08
Exposure -1
Width 720
Height 576
Of course these settings look defective, but they are identical when the device is opened successfully after it has been accessed by the grabber's demo application.
I suppose this is a driver issue where certain device settings are required that OpenCV cannot access, including invalid default settings (gain, exposure). What lower-level methods could be used to find out about and write those settings?
It seems that the camera has not actually finished initializing when OpenCV tries to take the image.
For me, querying a couple more frames usually helps, like this:
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
if ( capture ) {
    IplImage* frame = cvQueryFrame( capture );
    // Grab and discard a few warm-up frames so the device can settle.
    for ( int i = 0; i < 10; i++ )
    {
        frame = cvQueryFrame( capture );
    }
    cvSaveImage( "mypic.jpg", frame );
    cvReleaseCapture( &capture );
}