I am currently reading this book. The author wrote a code snippet on page 83 in order to (if I understand it correctly) calculate the element-wise power of two matrices. But I think the code doesn't fulfill its purpose, because the matrix dst does not contain the element-wise powers after execution.
Here is the original code:
const Mat* arrays[] = { src1, src2, dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
for( size_t j = 0; j < it.size; j++ )
{
ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
}
}
Since the parameter of the constructor of cv::NAryMatIterator is const cv::Mat **, I think changing the values in the matrix dst is not allowed.
I tried to assign ptrs[2][j] back into dst but failed to compute the correct indices of dst. My questions are as follows:
Is there a convenient method for the matrix element-wise power, like A .^ B in Matlab?
Is there a way to use cv::NAryMatIterator to achieve this goal? If not, what is the most efficient way to implement it?
You can get this working by converting src1, src2 and dst to float (CV_32F) matrices, because the code treats them as such in float* ptrs[3];.
An alternative implementation using the OpenCV functions log, multiply and exp is given at the end.
As an example for your second question:
Mat src1 = (Mat_<int>(3, 3) <<
1, 2, 3,
4, 5, 6,
7, 8, 9);
Mat src2 = (Mat_<uchar>(3, 3) <<
1, 2, 3,
1, 2, 3,
1, 2, 3);
Mat dst = (Mat_<float>(3, 3) <<
1, 2, 3,
4, 5, 6,
7, 8, 9);
src1.convertTo(src1, CV_32F);
src2.convertTo(src2, CV_32F);
cout << "before\n";
cout << dst << endl;
const Mat* arrays[] = { &src1, &src2, &dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
for( size_t j = 0; j < it.size; j++ )
{
ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
}
}
cout << "after\n";
cout << dst << endl;
outputs
before
[1, 2, 3;
4, 5, 6;
7, 8, 9]
after
[1, 4, 27;
4, 25, 216;
7, 64, 729]
If you remove the src1.convertTo(src1, CV_32F); or src2.convertTo(src2, CV_32F);, you won't get the desired result. Try it.
If this is a separate function, don't place the convertTo calls inside it, as they modify the matrices in place, which could affect later operations. Instead, use convertTo on temporary Mats, like
Mat src132f, src232f, dst32f;
src1.convertTo(src132f, CV_32F);
src2.convertTo(src232f, CV_32F);
dst.convertTo(dst32f, CV_32F);
pow_mat(&src132f, &src232f, &dst32f); /* or whatever the name */
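For reference, a minimal sketch of what such a pow_mat could look like (the name and pointer signature just follow the call above; it assumes all three Mats are already CV_32F and have the same dimensions):
#include <cmath>
#include <opencv2/core/core.hpp>

// Hypothetical pow_mat: per-element power, dst = src1 .^ src2.
// Assumes all three Mats are CV_32F with identical dimensions.
void pow_mat(const cv::Mat* src1, const cv::Mat* src2, cv::Mat* dst)
{
    CV_Assert(src1->type() == CV_32F && src2->type() == CV_32F &&
              dst->type() == CV_32F);
    const cv::Mat* arrays[] = { src1, src2, dst, 0 };
    float* ptrs[3];
    cv::NAryMatIterator it(arrays, (uchar**)ptrs);
    for (size_t i = 0; i < it.nplanes; i++, ++it)
        for (size_t j = 0; j < it.size; j++)
            ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
}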
As for your first question, I'm not aware of such a function (cv::pow only takes a scalar exponent). But you can try something like
Mat tmp;
cv::log(src1, tmp);
cv::multiply(src2, tmp, dst);
cv::exp(dst, dst);
using the relation that c = a^b is equivalent to c = e^(b·ln(a)). Here, the matrices should have type CV_32F or CV_64F. This produces
[1, 4, 27.000002;
4, 25.000002, 216.00002;
6.9999995, 64, 729.00006]
for the example above.
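The small deviations from the exact integer results (27.000002 and so on) presumably come from single-precision rounding in log and exp; if they matter, computing in CV_64F instead of CV_32F should reduce them.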
I'm trying to reproduce the behavior of the method projectPoints() from OpenCV.
In the two images below, the red/green/blue axes are obtained with OpenCV's method, whereas the magenta/yellow/cyan axes are obtained with my own method:
image1
image2
With my method, the axes seem to have the correct orientation, but the translations are incorrect.
Here is my code :
void drawVector(float x, float y, float z, float r, float g, float b, cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
//Origin = (0, 0, 0, 1)
cv::Mat origin(4, 1, CV_64FC1, double(0));
origin.at<double>(3, 0) = 1;
//End = (x, y, z, 1)
cv::Mat end(4, 1, CV_64FC1, double(1));
end.at<double>(0, 0) = x; end.at<double>(1, 0) = y; end.at<double>(2, 0) = z;
//multiplies transformation matrix by camera matrix
cv::Mat mat = cameraMatrix * pose.colRange(0, 4).rowRange(0, 3);
//projects points
origin = mat * origin;
end = mat * end;
//draws corresponding line
cv::line(dst, cv::Point(origin.at<double>(0, 0), origin.at<double>(1, 0)),
cv::Point(end.at<double>(0, 0), end.at<double>(1, 0)),
CV_RGB(255 * r, 255 * g, 255 * b));
}
void drawVector_withProjectPointsMethod(float x, float y, float z, float r, float g, float b, cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
std::vector<cv::Point3f> points;
std::vector<cv::Point2f> projectedPoints;
//fills input array with 2 points
points.push_back(cv::Point3f(0, 0, 0));
points.push_back(cv::Point3f(x, y, z));
//Gets rotation vector thanks to cv::Rodrigues() method.
cv::Mat rvec;
cv::Rodrigues(pose.colRange(0, 3).rowRange(0, 3), rvec);
//projects points using cv::projectPoints method
cv::projectPoints(points, rvec, pose.colRange(3, 4).rowRange(0, 3), cameraMatrix, std::vector<double>(), projectedPoints);
//draws corresponding line
cv::line(dst, projectedPoints[0], projectedPoints[1],
CV_RGB(255 * r, 255 * g, 255 * b));
}
void drawAxis(cv::Mat &pose, cv::Mat &cameraMatrix, cv::Mat &dst) {
drawVector(0.1, 0, 0, 1, 1, 0, pose, cameraMatrix, dst);
drawVector(0, 0.1, 0, 0, 1, 1, pose, cameraMatrix, dst);
drawVector(0, 0, 0.1, 1, 0, 1, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0.1, 0, 0, 1, 0, 0, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0, 0.1, 0, 0, 1, 0, pose, cameraMatrix, dst);
drawVector_withProjectPointsMethod(0, 0, 0.1, 0, 0, 1, pose, cameraMatrix, dst);
}
What am I doing wrong?
I just forgot to divide the resulting points by their last component after projection:
Given the camera matrix which served to take an image, the projection of any point (x, y, z, 1) in 3D space onto that image is computed as follows:
//point3D has 4 components (x, y, z, w); point2D has 3 (x, y, z).
point2D = cameraMatrix * point3D;
//then we have to divide the first 2 components of point2D by the third.
point2D /= point2D.z;
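Applied to the drawVector function above, the fix is a sketch like this (only the projection and drawing part changes):
//projects points
origin = mat * origin;
end = mat * end;
//perspective division: scale so the third (homogeneous) component is 1
origin /= origin.at<double>(2, 0);
end /= end.at<double>(2, 0);
//draws corresponding line, now in proper pixel coordinates
cv::line(dst, cv::Point(origin.at<double>(0, 0), origin.at<double>(1, 0)),
    cv::Point(end.at<double>(0, 0), end.at<double>(1, 0)),
    CV_RGB(255 * r, 255 * g, 255 * b));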
OpenCV Version 3.2.0
I am reading Bradski and trying out the different single-channel cv::Mat constructors.
Can someone please tell me why some of the constructors below do not work?
float data1[6] = {1,2,3,4,5,6};
float data2[6] = {10,20,30,40,50,60};
float data3[6] = {100,200,300,400,500,600};
cv::Mat mat1(3,4,CV_32FC1); //OK
cv::Mat mat2(3,4,CV_32FC1,cv::Scalar(33.3)); //OK
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float)); //OK
cv::Mat mat4(cv::Size(3,4),CV_32FC1); //OK
cv::Mat mat5(cv::Size(3,4),CV_32FC1,cv::Scalar(66.6)); //OK
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float)); //OK
int sz[] = {8, 8, 8};
cv::Mat bigCube1(3, sz, CV_32FC1); // OK
cv::Mat bigCube2(3, sz, CV_32FC1, cv::Scalar::all(99)); // OK
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4); // Not OK, How to initialise a 3D from data?
std::cout << mat1 << std::endl << mat2 << std::endl << mat3 << std::endl << mat4 << std::endl << mat5 << std::endl << mat6 << std::endl; // OK
std::cout << bigCube1.at<float>(10,10,10) << std::endl << bigCube2.at<float>(10,10,10) << std::endl; // OK
cv::Mat img_rgb = cv::imread("lena.jpg", CV_LOAD_IMAGE_COLOR);
std::vector<cv::Range> ranges(3, cv::Range(2,3));
cv::Mat roiRange( img_rgb, cv::Range(100, 300), cv::Range(0, 512)); //OK
cv::Mat roiRect( img_rgb, cv::Rect(0,100,512,200)); // OK
cv::Mat roiRangeMultiple( bigCube1, ranges); // OK
cv::namedWindow("range", CV_WINDOW_AUTOSIZE);
imshow("range", roiRange); // OK
cv::namedWindow("rect", CV_WINDOW_AUTOSIZE);
imshow("rect", roiRect); // OK
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
cv::waitKey(0);
The corresponding outputs are:
[4.6634629e-10, 0, 0, 0;
0, 0, 0, 0;
127.62516, 2.8025969e-45, 0, 0]
[33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999]
[1, 2, 3, 4;
2, 3, 4, 5;
3, 4, 5, 6]
[0, 0, 0;
0, 0, 0;
0, 0, 0;
0, 0, 0]
[66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998]
[10, 20, 30;
20, 30, 40;
30, 40, 50;
40, 50, 60]
0 // bigCube1
99 // bigCube2
And then the corresponding output for lena.jpg is the cropped version from the Range and Rect constructors. I don't know how to use the ranges, though.
Multiple issues.
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float));
This will crash in debug mode, failing an assertion. Even though this is not in the documentation, the step size must be at least the length of a row (i.e. rows must not overlap).
The correct code for this scenario would be something like:
float data1[12] = { 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6 };
cv::Mat mat3(3, 4, CV_32FC1, data1, 4 * sizeof(float));
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float));
Similar situation as in the previous case. Also note that this produces a differently shaped array -- the previous one had 3 rows and 4 columns, while this one has 4 rows and 3 columns (see the docs for cv::Size).
float data2[12] = { 10, 20, 30, 40, 20, 30, 40, 50, 30, 40, 50, 60 };
cv::Mat mat6(cv::Size(3, 4), CV_32FC1, data2, 3 * sizeof(float));
cv::Mat bigCube1(3, sz, CV_8UC1);
std::cout << bigCube1 << std::endl;
Formatting of arrays with more than 2 dimensions is not supported.
You can test that the array was correctly created by manually printing all the values:
for (auto const& v : cv::Mat1b(bigCube2)) {
std::cout << uint(v) << " ";
}
std::cout << "\n";
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4);
The problem here is the last parameter. From the docs:
cv::Mat::Mat(int ndims,
const int * sizes,
int type,
void * data,
const size_t * steps = 0
)
steps - Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous.
You're not passing an array of steps as the last parameter (only a single integer), you don't pass enough data, and the rows would again overlap.
One way to do this would be something like:
float data3[8 * 8 * 8];
// Populate the data with sequence 0..511
std::iota(std::begin(data3), std::end(data3), 0.0f);
int sz[] = { 8, 8, 8 };
size_t steps[] = { 8 * 8 * sizeof(float), 8 * sizeof(float) };
cv::Mat bigCube3(3, sz, CV_32FC1, data3, steps);
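(Note that std::iota requires the <numeric> header.) With the data populated as above, you can check that the layout is correct: element (i, j, k) should equal i*64 + j*8 + k, for example
std::cout << bigCube3.at<float>(1, 2, 3) << std::endl; // prints 83 (= 1*64 + 2*8 + 3)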
cv::Mat bigCube1(3, sz, CV_8UC1);
// ...
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
The data type is CV_8UC1, so each element is an unsigned char. That means you shouldn't be extracting float values from it; your expectation is incorrect. (Now I see you changed the code in your question.)
Also, note that with cv::Range, "start is an inclusive left boundary of the range and end is an exclusive right boundary of the range". Since you extract cv::Range(2,3) along each axis, the resulting Mat is 1 x 1 x 1, so you're accessing elements out of range (again, this would trigger an assertion in debug mode).
std::cout << roiRangeMultiple.at<unsigned char>(0,0,0);
After the change you have the correct type. However, notice that you never initialize bigCube1. You most likely get 0.0f as a result, which prints as 0. You can try this yourself: just execute std::cout << 0.0f; and see.
I have two different images (Image A and Image B) whose histograms (histImage and histImage1) I have already computed.
Now I want the histogram of Image A to become the histogram of Image B, so that Image B gets colors similar to Image A.
The code is as follows:
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main( )
{
Mat src, dst, src1;
/// Load image
src = imread("ImageA", 1 ); // Image A
src1 = imread("ImageB", 1 ); // Image B
if( !src.data )
{ return -1; }
/// Separate the images into 3 planes ( B, G and R )
vector<Mat> bgr_planes;
vector<Mat> bgr_planes1;
split( src, bgr_planes );
split( src1, bgr_planes1 );
/// Establish the number of bins
int histSize = 256;
/// Set the ranges ( for B,G,R )
float range[] = { 0, 256 } ;
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat b_hist, g_hist, r_hist; //ImageA
Mat b_hist1, g_hist1, r_hist1; //ImageB
/// Compute the histograms of Image A
calcHist( &bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes[2], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
/// Compute the histograms of Image B
calcHist( &bgr_planes1[0], 1, 0, Mat(), b_hist1, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes1[1], 1, 0, Mat(), g_hist1, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &bgr_planes1[2], 1, 0, Mat(), r_hist1, 1, &histSize, &histRange, uniform, accumulate );
// Draw the histograms for B, G and R
int hist_w = 512; int hist_h = 400; //Image A
int bin_w = cvRound( (double) hist_w/histSize ); //Image A
int hist_w1 = 512; int hist_h1 = 400; //Image B
int bin_w1 = cvRound( (double) hist_w1/histSize );//Image B
Mat histImage( hist_h, hist_w, CV_8UC3, Scalar( 0,0,0) ); //ImageA
Mat histImage1( hist_h1, hist_w1, CV_8UC3, Scalar( 0,0,0) ); //ImageB
/// Normalize the result to [ 0, histImage.rows ] ImageA
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
/// Normalize the result to [ 0, histImage1.rows ] ImageB
normalize(b_hist1, b_hist1, 0, histImage1.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist1, g_hist1, 0, histImage1.rows, NORM_MINMAX, -1, Mat() );
normalize(r_hist1, r_hist1, 0, histImage1.rows, NORM_MINMAX, -1, Mat() );
/// Draw for each channel ImageA
for( int i = 1; i < histSize; i++ )
{
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(b_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(b_hist.at<float>(i)) ),
Scalar( 255, 0, 0), 2, 8, 0 );
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(g_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(g_hist.at<float>(i)) ),
Scalar( 0, 255, 0), 2, 8, 0 );
line( histImage, Point( bin_w*(i-1), hist_h - cvRound(r_hist.at<float>(i-1)) ) ,
Point( bin_w*(i), hist_h - cvRound(r_hist.at<float>(i)) ),
Scalar( 0, 0, 255), 2, 8, 0 );
}
////////////////////////////////////////////////////
/// Draw for each channel ImageB
for( int i = 1; i < histSize; i++ )
{
line( histImage1, Point( bin_w1*(i-1), hist_h1 - cvRound(b_hist1.at<float>(i-1)) ) ,
Point( bin_w1*(i), hist_h1 - cvRound(b_hist1.at<float>(i)) ),
Scalar( 255, 0, 0), 2, 8, 0 );
line( histImage1, Point( bin_w1*(i-1), hist_h1 - cvRound(g_hist1.at<float>(i-1)) ) ,
Point( bin_w1*(i), hist_h1 - cvRound(g_hist1.at<float>(i)) ),
Scalar( 0, 255, 0), 2, 8, 0 );
line( histImage1, Point( bin_w1*(i-1), hist_h1 - cvRound(r_hist1.at<float>(i-1)) ) ,
Point( bin_w1*(i), hist_h1 - cvRound(r_hist1.at<float>(i)) ),
Scalar( 0, 0, 255), 2, 8, 0 );
}
/////////////////////////////////////////////////////
/// Display
namedWindow("calcHist", CV_WINDOW_AUTOSIZE );
imshow("face ", histImage ); //Histogram of Image A
/// Display
namedWindow("calcHist1", CV_WINDOW_AUTOSIZE );
imshow("body ", histImage1 ); //Histogram of Image B
waitKey(0);
return 0;
}
One way to swap the histograms would be to follow the methodology used in histogram equalisation; a code sketch is given after the steps below.
Compute the histograms (H1 and H2) respectively for the two images (I1 and I2) and normalise them (already done in your code).
Compute the cumulative histograms - also called cumulative distribution functions - C1 and C2 corresponding to H1 and H2 as explained here.
Substitute new values for every pixel in I1 using the cumulative histogram C2 as explained here.
Do the same for every pixel in I2, using cumulative histogram C1.
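For illustration, here is a minimal per-channel sketch of steps 2-4 (makeCdf and matchChannel are placeholder names; it assumes 256-bin histograms taken from calcHist before any normalize call, because the CDFs must be built from raw bin counts, not from values scaled for drawing):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Build a [0,1]-scaled cumulative histogram from a raw 256-bin
// histogram (CV_32F, as produced by calcHist).
void makeCdf(const cv::Mat& hist, float cdf[256])
{
    float sum = 0.f;
    for (int i = 0; i < 256; i++) { sum += hist.at<float>(i); cdf[i] = sum; }
    for (int i = 0; i < 256; i++) cdf[i] /= sum;
}

// Remap one 8-bit channel so its histogram approximates the one
// described by cdfRef: for each source level i, pick the smallest
// reference level j with cdfRef[j] >= cdfSrc[i].
cv::Mat matchChannel(const cv::Mat& src, const float cdfSrc[256], const float cdfRef[256])
{
    uchar lutData[256];
    for (int i = 0, j = 0; i < 256; i++)
    {
        while (j < 255 && cdfRef[j] < cdfSrc[i]) j++;
        lutData[i] = (uchar)j;
    }
    cv::Mat lut(1, 256, CV_8U, lutData);
    cv::Mat dst;
    cv::LUT(src, lut, dst);
    return dst;
}
Running matchChannel on each plane of Image B, with Image B's CDF as cdfSrc and Image A's CDF as cdfRef, and then merging the three results with cv::merge, should give Image B a color distribution close to Image A's.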
So I am having problems with OpenCV. I used the sample code from the book "Learning OpenCV". I got the code to compute all of the intrinsics and extrinsics of the two cameras, but when I go to remap the images, all I get is a blank image. I use 6 images from both cameras, with a 9x6 chessboard. The input file alternates between left and right images (the lr = i % 2 made me think that).
Below is my code. I only added the cvRemap() call towards the end.
#undef _GLIBCXX_DEBUG
#include <opencv\cv.h>
#include <opencv\cxmisc.h>
#include <opencv\highgui.h>
#include <vector>
#include <string>
#include <algorithm>
#include <stdio.h>
#include <ctype.h>
#include <Windows.h>
using namespace std;
//
// Given a list of chessboard images, the number of corners (nx, ny)
// on the chessboards, and a flag: useCalibrated for calibrated (0) or
// uncalibrated (1: use cvStereoCalibrate(), 2: compute fundamental
// matrix separately) stereo. Calibrate the cameras and display the
// rectified results along with the computed disparity images.
//
static void
StereoCalib(const char* imageList, int useUncalibrated)
{
IplImage* L_img1 = cvLoadImage("bad1.bmp");
IplImage* R_img1 = cvLoadImage("good1.bmp");
IplImage* fixed_L = cvCloneImage(L_img1);
IplImage* fixed_R = cvCloneImage(R_img1);
CvRect roi1, roi2;
int nx = 0, ny = 0;
int displayCorners = 1;
int showUndistorted = 1;
bool isVerticalStereo = false; //OpenCV can handle left-right
//or up-down camera arrangements
const int maxScale = 1;
const float squareSize = 1.f; //Set this to your actual square size
FILE* f = fopen(imageList, "rt");
int i, j, lr, nframes = 0, n, N = 0;
vector<string> imageNames[2];
vector<CvPoint3D32f> objectPoints;
vector<CvPoint2D32f> points[2];
vector<CvPoint2D32f> temp_points[2];
vector<int> npoints;
//vector<uchar> active[2];
int is_found[2] = {0, 0};
vector<CvPoint2D32f> temp;
CvSize imageSize = {0,0};
// ARRAY AND VECTOR STORAGE:
double M1[3][3], M2[3][3], D1[5], D2[5];
double R[3][3], T[3], E[3][3], F[3][3];
double Q[4][4];
CvMat _M1 = cvMat(3, 3, CV_64F, M1 );
CvMat _M2 = cvMat(3, 3, CV_64F, M2 );
CvMat _D1 = cvMat(1, 5, CV_64F, D1 );
CvMat _D2 = cvMat(1, 5, CV_64F, D2 );
CvMat _R = cvMat(3, 3, CV_64F, R );
CvMat _T = cvMat(3, 1, CV_64F, T );
CvMat _E = cvMat(3, 3, CV_64F, E );
CvMat _F = cvMat(3, 3, CV_64F, F );
CvMat _Q = cvMat(4, 4, CV_64FC1, Q);
char buf[1024];
if( displayCorners )
cvNamedWindow( "corners", 1 );
// READ IN THE LIST OF CHESSBOARDS:
if( !f )
{
fprintf(stderr, "can not open file %s\n", imageList );
Sleep(2000);
return;
}
if( !fgets(buf, sizeof(buf)-3, f) || sscanf(buf, "%d%d", &nx, &ny) != 2 )
return;
n = nx*ny;
temp.resize(n);
temp_points[0].resize(n);
temp_points[1].resize(n);
for(i=0;;i++)
{
int count = 0, result=0;
lr = i % 2;
vector<CvPoint2D32f>& pts = temp_points[lr];//points[lr];
if( !fgets( buf, sizeof(buf)-3, f ))
break;
size_t len = strlen(buf);
while( len > 0 && isspace(buf[len-1]))
buf[--len] = '\0';
if( buf[0] == '#')
continue;
IplImage* img = cvLoadImage( buf, 0 );
if( !img )
break;
imageSize = cvGetSize(img);
imageNames[lr].push_back(buf);
//FIND CHESSBOARDS AND CORNERS THEREIN:
for( int s = 1; s <= maxScale; s++ )
{
IplImage* timg = img;
if( s > 1 )
{
timg = cvCreateImage(
cvSize(img->width*s,img->height*s),
img->depth, img->nChannels
);
cvResize( img, timg, CV_INTER_CUBIC );
}
result = cvFindChessboardCorners(
timg, cvSize(nx, ny),
&temp[0], &count,
CV_CALIB_CB_ADAPTIVE_THRESH |
CV_CALIB_CB_NORMALIZE_IMAGE
);
if( timg != img )
cvReleaseImage( &timg );
if( result || s == maxScale )
for( j = 0; j < count; j++ )
{
temp[j].x /= s;
temp[j].y /= s;
}
if( result )
break;
}
if( displayCorners )
{
printf("%s\n", buf);
IplImage* cimg = cvCreateImage( imageSize, 8, 3 );
cvCvtColor( img, cimg, CV_GRAY2BGR );
cvDrawChessboardCorners(
cimg, cvSize(nx, ny), &temp[0],
count, result
);
IplImage* cimg1 = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
cvResize(cimg, cimg1);
cvShowImage( "corners", cimg1 );
cvReleaseImage( &cimg );
cvReleaseImage( &cimg1 );
int c = cvWaitKey(1000);
if( c == 27 || c == 'q' || c == 'Q' ) //Allow ESC to quit
exit(-1);
}
else
putchar('.');
//N = pts.size();
//pts.resize(N + n, cvPoint2D32f(0,0));
//active[lr].push_back((uchar)result);
is_found[lr] = result > 0 ? 1 : 0;
//assert( result != 0 );
if( result )
{
//Calibration will suffer without subpixel interpolation
cvFindCornerSubPix(
img, &temp[0], count,
cvSize(11, 11), cvSize(-1,-1),
cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 30, 0.01)
);
copy( temp.begin(), temp.end(), pts.begin() );
}
cvReleaseImage( &img );
if(lr)
{
if(is_found[0] == 1 && is_found[1] == 1)
{
assert(temp_points[0].size() == temp_points[1].size());
int current_size = points[0].size();
points[0].resize(current_size + temp_points[0].size(), cvPoint2D32f(0.0, 0.0));
points[1].resize(current_size + temp_points[1].size(), cvPoint2D32f(0.0, 0.0));
copy(temp_points[0].begin(), temp_points[0].end(), points[0].begin() + current_size);
copy(temp_points[1].begin(), temp_points[1].end(), points[1].begin() + current_size);
nframes++;
printf("Pair successfully detected...\n");
}
is_found[0] = 0;
is_found[1] = 0;
}
}
fclose(f);
printf("\n");
// HARVEST CHESSBOARD 3D OBJECT POINT LIST:
objectPoints.resize(nframes*n);
for( i = 0; i < ny; i++ )
for( j = 0; j < nx; j++ )
objectPoints[i*nx + j] = cvPoint3D32f(i*squareSize, j*squareSize, 0);
for( i = 1; i < nframes; i++ )
copy(
objectPoints.begin(), objectPoints.begin() + n,
objectPoints.begin() + i*n
);
npoints.resize(nframes,n);
N = nframes*n;
CvMat _objectPoints = cvMat(1, N, CV_32FC3, &objectPoints[0] );
CvMat _imagePoints1 = cvMat(1, N, CV_32FC2, &points[0][0] );
CvMat _imagePoints2 = cvMat(1, N, CV_32FC2, &points[1][0] );
CvMat _npoints = cvMat(1, npoints.size(), CV_32S, &npoints[0] );
cvSetIdentity(&_M1);
cvSetIdentity(&_M2);
cvZero(&_D1);
cvZero(&_D2);
// CALIBRATE THE STEREO CAMERAS
printf("Running stereo calibration ...");
fflush(stdout);
cvStereoCalibrate(
&_objectPoints, &_imagePoints1,
&_imagePoints2, &_npoints,
&_M1, &_D1, &_M2, &_D2,
imageSize, &_R, &_T, &_E, &_F,
cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
CV_CALIB_FIX_ASPECT_RATIO +
CV_CALIB_ZERO_TANGENT_DIST +
CV_CALIB_SAME_FOCAL_LENGTH +
CV_CALIB_FIX_K3
);
printf(" done\n");
// CALIBRATION QUALITY CHECK
// because the output fundamental matrix implicitly
// includes all the output information,
// we can check the quality of calibration using the
// epipolar geometry constraint: m2^t*F*m1=0
vector<CvPoint3D32f> lines[2];
points[0].resize(N);
points[1].resize(N);
_imagePoints1 = cvMat(1, N, CV_32FC2, &points[0][0] );
_imagePoints2 = cvMat(1, N, CV_32FC2, &points[1][0] );
lines[0].resize(N);
lines[1].resize(N);
CvMat _L1 = cvMat(1, N, CV_32FC3, &lines[0][0]);
CvMat _L2 = cvMat(1, N, CV_32FC3, &lines[1][0]);
//Always work in undistorted space
cvUndistortPoints(
&_imagePoints1, &_imagePoints1,
&_M1, &_D1, 0, &_M1
);
cvUndistortPoints(
&_imagePoints2, &_imagePoints2,
&_M2, &_D2, 0, &_M2
);
cvComputeCorrespondEpilines( &_imagePoints1, 1, &_F, &_L1 );
cvComputeCorrespondEpilines( &_imagePoints2, 2, &_F, &_L2 );
double avgErr = 0;
for( i = 0; i < N; i++ )
{
double err =
fabs(
points[0][i].x*lines[1][i].x +
points[0][i].y*lines[1][i].y + lines[1][i].z
) +
fabs(
points[1][i].x*lines[0][i].x +
points[1][i].y*lines[0][i].y + lines[0][i].z
);
avgErr += err;
}
printf( "avg err = %g\n", avgErr/(nframes*n) );
// save intrinsic parameters
CvFileStorage* fstorage = cvOpenFileStorage("intrinsics.yml", NULL, CV_STORAGE_WRITE);
cvWrite(fstorage, "M1", &_M1);
cvWrite(fstorage, "D1", &_D1);
cvWrite(fstorage, "M2", &_M2);
cvWrite(fstorage, "D2", &_D2);
cvReleaseFileStorage(&fstorage);
//COMPUTE AND DISPLAY RECTIFICATION
if( showUndistorted )
{
CvMat* mx1 = cvCreateMat( imageSize.height, imageSize.width, CV_32F );
CvMat* my1 = cvCreateMat( imageSize.height, imageSize.width, CV_32F );
CvMat* mx2 = cvCreateMat( imageSize.height, imageSize.width, CV_32F );
CvMat* my2 = cvCreateMat( imageSize.height, imageSize.width, CV_32F );
CvMat* img1r = cvCreateMat( imageSize.height, imageSize.width, CV_8U );
CvMat* img2r = cvCreateMat( imageSize.height, imageSize.width, CV_8U );
CvMat* disp = cvCreateMat( imageSize.height, imageSize.width, CV_16S );
double R1[3][3], R2[3][3], P1[3][4], P2[3][4];
CvMat _R1 = cvMat(3, 3, CV_64F, R1);
CvMat _R2 = cvMat(3, 3, CV_64F, R2);
// IF BY CALIBRATED (BOUGUET'S METHOD)
if( useUncalibrated == 0 )
{
CvMat _P1 = cvMat(3, 4, CV_64F, P1);
CvMat _P2 = cvMat(3, 4, CV_64F, P2);
cvStereoRectify(
&_M1, &_M2, &_D1, &_D2, imageSize,
&_R, &_T,
&_R1, &_R2, &_P1, &_P2, &_Q,
CV_CALIB_ZERO_DISPARITY,
1, imageSize, &roi1, &roi2
);
CvFileStorage* file = cvOpenFileStorage("extrinsics.yml", NULL, CV_STORAGE_WRITE);
cvWrite(file, "R", &_R);
cvWrite(file, "T", &_T);
cvWrite(file, "R1", &_R1);
cvWrite(file, "R2", &_R2);
cvWrite(file, "P1", &_P1);
cvWrite(file, "P2", &_P2);
cvWrite(file, "Q", &_Q);
cvReleaseFileStorage(&file);
isVerticalStereo = fabs(P2[1][3]) > fabs(P2[0][3]);
if(!isVerticalStereo)
roi2.x += imageSize.width;
else
roi2.y += imageSize.height;
//Precompute maps for cvRemap()
cvNamedWindow( "Original" );
cvNamedWindow( "Fixed" );
cvInitUndistortRectifyMap(&_M1,&_D1,&_R1,&_P1,mx1,my1);
cvInitUndistortRectifyMap(&_M2,&_D2,&_R2,&_P2,mx2,my2);
cvRemap(R_img1, fixed_R, mx2, my2);
cvShowImage("Original", R_img1);
cvShowImage("Fixed", fixed_R);
while(1){
int c = cvWaitKey(15);
if(c == 'p') {
c = 0;
while(c != 'p' && c != 27) {
c = cvWaitKey(250);
}
}
if(c == 27)
break;
}// end while
}
//OR ELSE HARTLEY'S METHOD
else if( useUncalibrated == 1 || useUncalibrated == 2 )
// use intrinsic parameters of each camera, but
// compute the rectification transformation directly
// from the fundamental matrix
{
double H1[3][3], H2[3][3], iM[3][3];
CvMat _H1 = cvMat(3, 3, CV_64F, H1);
CvMat _H2 = cvMat(3, 3, CV_64F, H2);
CvMat _iM = cvMat(3, 3, CV_64F, iM);
//Just to show you could have independently used F
if( useUncalibrated == 2 )
cvFindFundamentalMat(&_imagePoints1, &_imagePoints2, &_F);
cvStereoRectifyUncalibrated(
&_imagePoints1, &_imagePoints2, &_F,
imageSize,
&_H1, &_H2, 3
);
cvInvert(&_M1, &_iM);
cvMatMul(&_H1, &_M1, &_R1);
cvMatMul(&_iM, &_R1, &_R1);
cvInvert(&_M2, &_iM);
cvMatMul(&_H2, &_M2, &_R2);
cvMatMul(&_iM, &_R2, &_R2);
//Precompute map for cvRemap()
cvInitUndistortRectifyMap(&_M1,&_D1,&_R1,&_M1,mx1,my1);
cvInitUndistortRectifyMap(&_M2,&_D2,&_R2,&_M2,mx2,my2);
}
else
assert(0);
cvReleaseMat( &mx1 );
cvReleaseMat( &my1 );
cvReleaseMat( &mx2 );
cvReleaseMat( &my2 );
cvReleaseMat( &img1r );
cvReleaseMat( &img2r );
cvReleaseMat( &disp );
}
}
int main(int argc, char** argv)
{
StereoCalib(argc > 1 ? argv[1] : "stereo_calib.txt", 0);
return 0;
}
Below are the extrinsic matrices obtained from the program.
R: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 9.9997887582765532e-001, 4.2746998112201760e-003,
-4.8964109286960510e-003, -4.1317666335754111e-003,
9.9957553950354616e-001, 2.8838677686057253e-002,
5.0176092857428471e-003, -2.8817837665560161e-002,
9.9957208635962669e-001 ]
T: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [ -8.3141294302865210e-001, -3.2181226087457654e-001,
-4.5924165239318537e-001 ]
R1: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 8.3000228682826938e-001, 3.1110786082949388e-001,
4.6293423160308594e-001, -3.1818678207964091e-001,
9.4578880995670123e-001, -6.5120647036789381e-002,
-4.5809756119155060e-001, -9.3249267508025396e-002,
8.8399728423766677e-001 ]
R2: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 8.2904793019998391e-001, 3.2089684317297251e-001,
4.5793530708249980e-001, -3.1381823995200708e-001,
9.4482404014772625e-001, -9.3944906367255512e-002,
-4.6281491084940990e-001, -6.5823621903907531e-002,
8.8400769741835628e-001 ]
P1: !!opencv-matrix
rows: 3
cols: 4
dt: d
data: [ -4.4953673002726404e+001, 0., -1.3375267505645752e+001, 0.,
0., -4.4953673002726404e+001, 2.4430860614776611e+002, 0., 0., 0.,
1., 0. ]
P2: !!opencv-matrix
rows: 3
cols: 4
dt: d
data: [ -4.4953673002726404e+001, 0., -1.3375267505645752e+001,
4.5081911684079330e+001, 0., -4.4953673002726404e+001,
2.4430860614776611e+002, 0., 0., 0., 1., 0. ]
And the intrinsic parameters found are as follows.
M1: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 4.3107336978610317e+002, 0., 3.4686501809547735e+002, 0.,
4.3107336978610317e+002, 1.9221944996848421e+002, 0., 0., 1. ]
D1: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [ -1.6825480517169825e-001, 1.0756945282000266e-001, 0., 0., 0. ]
M2: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ 4.3107336978610317e+002, 0., 3.5310162800332756e+002, 0.,
4.3107336978610317e+002, 1.8963116073129768e+002, 0., 0., 1. ]
D2: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [ -1.9546177300030809e-001, 1.7624631189915094e-001, 0., 0., 0. ]
Any help would be much appreciated. I am not very experienced with OpenCV, and I have a hard time wrapping my head around what most of the functions are even doing.
I think I found the answer. After much experimenting, it seemed that the CV_CALIB_SAME_FOCAL_LENGTH flag for cvStereoCalibrate caused my output images to come out warped or not work at all. Also, I took many more chessboard pictures with a larger chessboard, and this improved my results quite a bit.
Hope this helps anyone in the future.
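For reference, this is the cvStereoCalibrate call from above with that flag removed; everything else stays the same:
cvStereoCalibrate(
    &_objectPoints, &_imagePoints1,
    &_imagePoints2, &_npoints,
    &_M1, &_D1, &_M2, &_D2,
    imageSize, &_R, &_T, &_E, &_F,
    cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
    CV_CALIB_FIX_ASPECT_RATIO +
    CV_CALIB_ZERO_TANGENT_DIST +
    CV_CALIB_FIX_K3 // CV_CALIB_SAME_FOCAL_LENGTH removed
);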