Using getOptimalNewCameraMatrix to recover the whole original image - OpenCV

This is my program output:
[image: program output]
and this is my distorted image:
[image: distorted input image]
To recover all of the original image's pixels, I used getOptimalNewCameraMatrix, but the output is terrible. My intrinsics and distortion coefficients are definitely correct.
Here is my program:
Mat src = imread("E:\\40_office\\distorted_bot\\0.jpg");
Size newsize(2280,3072);
cout << src.rows;
namedWindow("", WINDOW_AUTOSIZE);
imshow("", src);
waitKey();
Mat mapx,mapy;//size(x,y)
Mat cameraMatrix = (Mat_<double>(3, 3) << 1224.1, 0, 761.7497, 0, 1209.8, 1043.6, 0, 0, 1);
//Mat cameraMatrix = (Mat_<double>(3, 3) << 1224.1, 0, 1141.7497, 0, 1209.8, 1442.6, 0, 0, 1);
//Mat cameraMatrix = (Mat_<double>(3, 3) << 1209.8, 0, 1043.6, 0, 1224.1, 761.7497, 0, 0, 1);
Mat distCoeffs = (Mat_<double>(5, 1) << -0.3649, 0.1451, -0.0273, -0.000035214,0.0012);
double alpha = 1;
Mat newcameramatrix = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, src.size(), alpha, newsize,0);
cout << newcameramatrix<<"src.size"<<src.size();
fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs,Mat(), newcameramatrix ,newsize, CV_16SC2, mapx,mapy);
Mat newimage=Mat(newsize,CV_8UC3);
remap(src, newimage, mapx, mapy, INTER_LINEAR);
imwrite("C:\\Users\\wk\\Desktop\\1_cali100.jpg", newimage);
return;
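One observation worth adding (mine, not from the original post): the 5-element distCoeffs above follows the standard pinhole distortion model, whereas the fisheye namespace expects its own 4-coefficient model, so pairing fisheye::initUndistortRectifyMap with these coefficients can easily produce garbage output. A minimal sketch of the non-fisheye mapping path, assuming the parameters came from an ordinary (non-fisheye) calibration:
Mat mapx2, mapy2;
// Build the maps with the plain pinhole-model function instead of the fisheye one
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newcameramatrix,
                        newsize, CV_16SC2, mapx2, mapy2);
Mat undistorted;
remap(src, undistorted, mapx2, mapy2, INTER_LINEAR);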

Related

What are the different ways of constructing cv::Mat?

OpenCV Version 3.2.0
I am reading Bradski and trying out the different cv::Mat constructors (single channel).
Can someone please tell me why some of the constructors do not work?
float data1[6] = {1,2,3,4,5,6};
float data2[6] = {10,20,30,40,50,60};
float data3[6] = {100,200,300,400,500,600};
cv::Mat mat1(3,4,CV_32FC1); //OK
cv::Mat mat2(3,4,CV_32FC1,cv::Scalar(33.3)); //OK
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float)); //OK
cv::Mat mat4(cv::Size(3,4),CV_32FC1); //OK
cv::Mat mat5(cv::Size(3,4),CV_32FC1,cv::Scalar(66.6)); //OK
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float)); //OK
int sz[] = {8, 8, 8};
cv::Mat bigCube1(3, sz, CV_32FC1); // OK
cv::Mat bigCube2(3, sz, CV_32FC1, cv::Scalar::all(99)); // OK
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4); // Not OK. How to initialise a 3D Mat from data?
std::cout << mat1 << std::endl << mat2 << std::endl << mat3 << std::endl << mat4 << std::endl << mat5 << std::endl << mat6 << std::endl; // OK
std::cout << bigCube1.at<float>(10,10,10) << std::endl << bigCube2.at<float>(10,10,10) << std::endl; // OK
cv::Mat img_rgb = cv::imread("lena.jpg", CV_LOAD_IMAGE_COLOR);
std::vector<cv::Range> ranges(3, cv::Range(2,3));
cv::Mat roiRange( img_rgb, cv::Range(100, 300), cv::Range(0, 512)); //OK
cv::Mat roiRect( img_rgb, cv::Rect(0,100,512,200)); // OK
cv::Mat roiRangeMultiple( bigCube1, ranges); // OK
cv::namedWindow("range", CV_WINDOW_AUTOSIZE);
imshow("range", roiRange); // OK
cv::namedWindow("rect", CV_WINDOW_AUTOSIZE);
imshow("rect", roiRect); // OK
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
cv::waitKey(0);
The corresponding outputs are:
[4.6634629e-10, 0, 0, 0;
0, 0, 0, 0;
127.62516, 2.8025969e-45, 0, 0]
[33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999]
[1, 2, 3, 4;
2, 3, 4, 5;
3, 4, 5, 6]
[0, 0, 0;
0, 0, 0;
0, 0, 0;
0, 0, 0]
[66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998]
[10, 20, 30;
20, 30, 40;
30, 40, 50;
40, 50, 60]
0 // bigCube1
99 // bigCube2
And then the corresponding output for lena.jpg is the cropped version from Range and Rect. I don't know how to use the ranges, though.
Multiple issues.
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float));
This will crash in debug mode, failing an assertion. Even though this is not in the documentation, the step size must be at least the length of a row (i.e. no overlap is allowed).
The correct code for this scenario would be something like:
float data1[12] = { 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6 };
cv::Mat mat3(3, 4, CV_32FC1, data1, 4 * sizeof(float));
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float));
Similar situation as in previous case. Also note that this produces a differently shaped array -- previous was 3 rows, 4 columns, this one has 4 rows and 3 columns (see docs of cv::Size).
float data2[12] = { 10, 20, 30, 40, 20, 30, 40, 50, 30, 40, 50, 60 };
cv::Mat mat6(cv::Size(3, 4), CV_32FC1, data2, 3 * sizeof(float));
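As a quick illustration of the cv::Size convention (width first, then height), the shape of mat6 can be checked directly:
std::cout << mat6.rows << " x " << mat6.cols << std::endl; // prints "4 x 3"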
cv::Mat bigCube1(3, sz, CV_8UC1);
std::cout << bigCube1 << std::endl;
Formatting of arrays with more than 2 dimensions is not supported.
You can test that the array was correctly created by manually printing all the values:
for (auto const& v : cv::Mat1b(bigCube2)) {
    std::cout << uint(v) << " ";
}
std::cout << "\n";
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4);
The problem here is the last parameter. From the docs
cv::Mat::Mat(int            ndims,
             const int *    sizes,
             int            type,
             void *         data,
             const size_t * steps = 0)
steps - Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous.
There are three problems here:
- you're not passing an array of steps as the last parameter (only a single integer),
- you don't pass enough data,
- and the rows would again overlap.
One way to do this would be something like (note that std::iota needs <numeric>):
float data3[8 * 8 * 8];
// Populate the data with the sequence 0..511
std::iota(std::begin(data3), std::end(data3), 0.0f);
int sz[] = { 8, 8, 8 };
size_t steps[] = { 8 * 8 * sizeof(float), 8 * sizeof(float) };
cv::Mat bigCube3(3, sz, CV_32FC1, data3, steps);
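To convince yourself the steps are laid out correctly, a spot-check (my addition): with the iota fill above, the element at index (z, y, x) = (1, 2, 3) should be 1*64 + 2*8 + 3 = 83.
std::cout << bigCube3.at<float>(1, 2, 3) << std::endl; // expected: 83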
cv::Mat bigCube1(3, sz, CV_8UC1);
// ...
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
The data type is CV_8UC1, so each element is an unsigned char. That means you shouldn't be extracting float values from it. Your expectation is incorrect. (Now I see you changed the code in your question).
Also, note that with cv::Range "start is an inclusive left boundary of the range and end is an exclusive right boundary of the range". Since you extract cv::Range(2,3) in each axis, the resulting Mat is 1 x 1 x 1. Hence, you're accessing elements out of range (again, this would trigger a debug mode assertion).
std::cout << roiRangeMultiple.at<unsigned char>(0,0,0);
After the change you have the correct type. However, notice that you never initialize bigCube1. You will most likely get 0.0f as a result, which prints as 0. You can try this yourself: just execute std::cout << 0.0f; and see.
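A small self-contained sketch (hypothetical values, my addition) that initializes the cube first, so the in-range access prints something meaningful:
cv::Mat cube(3, sz, CV_8UC1, cv::Scalar::all(7));
std::vector<cv::Range> r(3, cv::Range(2, 3)); // 1 x 1 x 1 ROI
cv::Mat roi(cube, r);
std::cout << int(roi.at<unsigned char>(0, 0, 0)) << std::endl; // prints 7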

Image rectification using camera intrinsic and extrinsic parameters

I want to rectify stereo images using intrinsic and extrinsic camera parameters obtained from PhotoModeler software.
I wrote the code (adapting this gist: https://gist.github.com/anonymous/6586653) and determined the relative rotation and translation parameters, but when I feed in the images the results are not as expected, and although I tried to find the error I couldn't.
Your help is really appreciated.
I couldn't upload all the images, so I have put the input images and the results at this link:
https://www.dropbox.com/s/5tmj9rk91tkrot4/RECTIFICATION_TEST_DATA.docx?dl=0
The code is:
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <iomanip>
#include <fstream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    // Mat img1 = imread("E:\\12_0628.tif", 1);
    // Mat img2 = imread("E:\\12_0629.tif", 1);
    Mat img1 = imread("E:\\DSC_0483.JPG");
    Mat img2 = imread("E:\\DSC_0484.JPG");
    // EXTERIOR ORIENTATION FOR THE 1ST IMAGE
    double omega1 = -172.672440, phi1 = -80.168311, kappa1 = 163.005082, tx1 = -35.100000, ty1 = -56.700000, tz1 = -59.300000;
    // EXTERIOR ORIENTATION FOR THE 2ND IMAGE
    double omega2 = 27.576999, phi2 = -67.089920, kappa2 = 2.826051, tx2 = -37.600000, ty2 = -18.600000, tz2 = -41.700000;
    // Convert all angles to radians
    omega1 = omega1 * CV_PI / 180.;
    phi1 = phi1 * CV_PI / 180.;
    kappa1 = kappa1 * CV_PI / 180.;
    omega2 = omega2 * CV_PI / 180.;
    phi2 = phi2 * CV_PI / 180.;
    kappa2 = kappa2 * CV_PI / 180.;
    // Rotation matrices of the 1st image
    Mat RX1 = (Mat_<double>(3, 3) <<
        1, 0, 0,
        0, cos(omega1), sin(omega1),
        0, -sin(omega1), cos(omega1));
    Mat RY1 = (Mat_<double>(3, 3) <<
        cos(phi1), 0, -sin(phi1),
        0, 1, 0,
        sin(phi1), 0, cos(phi1));
    Mat RZ1 = (Mat_<double>(3, 3) <<
        cos(kappa1), sin(kappa1), 0,
        -sin(kappa1), cos(kappa1), 0,
        0, 0, 1);
    // Composed rotation matrix with (RX, RY, RZ)
    Mat R1 = RX1 * RY1 * RZ1;
    Mat T1 = (Mat_<double>(3, 1) << tx1, ty1, tz1);
    // Rotation matrices of the 2nd image
    Mat RX2 = (Mat_<double>(3, 3) <<
        1, 0, 0,
        0, cos(omega2), sin(omega2),
        0, -sin(omega2), cos(omega2));
    Mat RY2 = (Mat_<double>(3, 3) <<
        cos(phi2), 0, -sin(phi2),
        0, 1, 0,
        sin(phi2), 0, cos(phi2));
    Mat RZ2 = (Mat_<double>(3, 3) <<
        cos(kappa2), sin(kappa2), 0,
        -sin(kappa2), cos(kappa2), 0,
        0, 0, 1);
    // Composed rotation matrix with (RX, RY, RZ)
    Mat R2 = RX2 * RY2 * RZ2;
    Mat T2 = (Mat_<double>(3, 1) << tx2, ty2, tz2);
    double f = 2284.; // focal length of the Nikon D40 in pixels, equivalent to 18 mm
    double w = (double)img1.cols;
    double h = (double)img1.rows;
    Mat M = (Mat_<double>(3, 3) << // camera matrix
        f, 0., w / 2,
        0., f, h / 2,
        0., 0., 1.);
    Mat D = (Mat_<double>(5, 1) << 0, 0, 0, 0, 0.); // distortion coefficients
    // Relative pose of the 2nd camera with respect to the 1st
    Mat R1inv = R1.inv();
    Mat Rrel = R2 * R1inv;
    Mat Trel = (-1 * Rrel) * T1 + T2;
    Mat T = (Mat_<double>(3, 1) << // translation matrix (not used below)
        -2376.6,
        -740.0,
        229.0);
    cout << img1.size() << endl;
    cout << img2.size() << endl;
    // R1 and R2 are reused here as stereoRectify's output rectification rotations
    Mat P1, P2, Q;
    stereoRectify(M, D, M, D, img1.size(), Rrel, Trel, // the input data
        R1, R2, P1, P2, Q);                            // the output data
    Mat map1x, map1y, map2x, map2y;
    Mat imgdst1, imgdst2;
    initUndistortRectifyMap(M, D, R1, P1, img1.size(), CV_32FC1, map1x, map1y);
    initUndistortRectifyMap(M, D, R2, P2, img1.size(), CV_32FC1, map2x, map2y);
    remap(img1, imgdst1, map1x, map1y, INTER_LINEAR, BORDER_CONSTANT, Scalar());
    remap(img2, imgdst2, map2x, map2y, INTER_LINEAR, BORDER_CONSTANT, Scalar());
    namedWindow("image1");
    namedWindow("image2");
    imshow("image1", imgdst1);
    imshow("image2", imgdst2);
    // imwrite("DSC_0906_rect.jpg", imgdst1);
    // imwrite("DSC_0913_rect.jpg", imgdst2);
    imwrite("E:\\Researches\\2016-2017_res\\2_8_epipolar_geometry\\temp_image\\output1.bmp", imgdst1);
    imwrite("E:\\Researches\\2016-2017_res\\2_8_epipolar_geometry\\temp_image\\output2.bmp", imgdst2);
    waitKey();
    return 0;
}
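A quick sanity check for a rectification like this (my own suggestion, not part of the original post) is to draw horizontal scanlines across the rectified pair; if the rectification is correct, corresponding features must lie on the same image row:
Mat side;
hconcat(imgdst1, imgdst2, side); // both maps were built with img1.size(), so row counts match
for (int y = 0; y < side.rows; y += 50)
    line(side, Point(0, y), Point(side.cols - 1, y), Scalar(0, 255, 0), 1);
imshow("rectified pair", side);
waitKey();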

3D rotation matrix between two 3D points

I have 2 known 3D points OC1 and OC2, which are the origins of two axis frames in space, and I need to compute the 3D rotation matrix between them.
I know that using R1 & T1 I can get to OC1, and using R2 & T2 I can get to OC2, but I need to compute the 3D rotation matrix between OC1 and OC2. I just thought of this rule:
oMc1 = (R1 | T1) and oMc2 = (R2 | T2), and what I want is:
c1Mc2 = (oMc1)^-1 * oMc2
So I tried to implement it and here is my code:
vector <Point3f> listOfPointsOnTable;
cout << "******** DATA *******" << endl;
listOfPointsOnTable.push_back(Point3f(0,0,0));
listOfPointsOnTable.push_back(Point3f(100,0,0));
listOfPointsOnTable.push_back(Point3f(100,100,0));
listOfPointsOnTable.push_back(Point3f(0,100,0));
cout << endl << "Scene points :" << endl;
for (int i = 0; i < listOfPointsOnTable.size(); i++)
{
    cout << listOfPointsOnTable[i] << endl;
}
//Define the optical center of each camera
Point3f centreOfC1 = Point3f(23,0,50);
Point3f centreOfC2 = Point3f(0,42,20);
cout << endl << "Center Of C1: " << centreOfC1 << " , Center of C2 : " << centreOfC2 << endl;
//Define the translation and rotation between main axis and the camera 1 axis
Mat translationOfC1 = (Mat_<double>(3, 1) << (0-centreOfC1.x), (0-centreOfC1.y), (0-centreOfC1.z));
float rotxC1 = 0, rotyC1 = 0, rotzC1 = -45;
int focaleC1 = 2;
Mat rotationOfC1 = rotation3D(rotxC1, rotyC1,rotzC1);
cout << endl << "Translation from default axis to C1: " << translationOfC1 << endl;
cout << "Rotation from default axis to C1: " << rotationOfC1 << endl;
Mat transformationToC1 = buildTransformationMatrix(rotationOfC1, translationOfC1);
cout << "Transformation from default axis to C1: " << transformationToC1 << endl << endl;
//Define the translation and rotation between main axis and the camera 2 axis
Mat translationOfC2 = (Mat_<double>(3, 1) << (0-centreOfC2.x), (0-centreOfC2.y), (0-centreOfC2.z));
float rotxC2 = 0, rotyC2 = 0, rotzC2 = -90;
int focaleC2 = 2;
Mat rotationOfC2 = rotation3D(rotxC2, rotyC2,rotzC2);
cout << endl << "Translation from default axis to C2: " << translationOfC2 << endl;
cout << "Rotation from default axis to C2: " << rotationOfC2 << endl;
Mat transformationToC2 = buildTransformationMatrix(rotationOfC2, translationOfC2);
cout << "Transformation from default axis to C2: " << transformationToC2 << endl << endl;
Mat centreOfC2InMat = (Mat_<double>(3, 1) << centreOfC2.x, centreOfC2.y, centreOfC2.z);
Mat centreOfC2InCamera1 = rotationOfC1 * centreOfC2InMat + translationOfC1;
Mat translationBetweenC1AndC2 = -centreOfC2InCamera1;
cout << endl << "****Translation from C2 to C1" << endl;
cout << translationBetweenC1AndC2 << endl;
Mat centreOfC1InMat = (Mat_<double>(3, 1) << centreOfC1.x, centreOfC1.y, centreOfC1.z);
Mat centreOfC1InCamera2 = rotationOfC2 * centreOfC1InMat + translationOfC2;
Mat translationBetweenC2AndC1 = -centreOfC1InCamera2;
cout << "****Translation from C1 to C2" << endl;
cout << translationBetweenC2AndC1 << endl;
cout << "Tran1-1 * Trans2 = " << transformationToC1.inv() * transformationToC2 << endl;
cout << "Tran2-1 * Trans1 = " << transformationToC2.inv() * transformationToC1 << endl;
Mat rotation3D(float alpha, float beta, float gamma) // angles in degrees
{
    // Rotation matrices around the X, Y, and Z axes
    double alphaInRadian = alpha * M_PI / 180.0;
    double betaInRadian = beta * M_PI / 180.0;
    double gammaInRadian = gamma * M_PI / 180.0;
    Mat RX = (Mat_<double>(3, 3) <<
        1, 0, 0,
        0, cosf(alphaInRadian), sinf(alphaInRadian),
        0, -sinf(alphaInRadian), cosf(alphaInRadian));
    Mat RY = (Mat_<double>(3, 3) <<
        cosf(betaInRadian), 0, sinf(betaInRadian),
        0, 1, 0,
        -sinf(betaInRadian), 0, cosf(betaInRadian));
    Mat RZ = (Mat_<double>(3, 3) <<
        cosf(gammaInRadian), sinf(gammaInRadian), 0,
        -sinf(gammaInRadian), cosf(gammaInRadian), 0,
        0, 0, 1);
    // Composed rotation matrix with (RX, RY, RZ)
    Mat R = RX * RY * RZ;
    return R;
}
Mat buildTransformationMatrix(Mat rotation, Mat translation)
{
    // Pack the 3x3 rotation and 3x1 translation into a 4x4 homogeneous transform
    Mat transformation = (Mat_<double>(4, 4) <<
        rotation.at<double>(0, 0), rotation.at<double>(0, 1), rotation.at<double>(0, 2), translation.at<double>(0, 0),
        rotation.at<double>(1, 0), rotation.at<double>(1, 1), rotation.at<double>(1, 2), translation.at<double>(1, 0),
        rotation.at<double>(2, 0), rotation.at<double>(2, 1), rotation.at<double>(2, 2), translation.at<double>(2, 0),
        0, 0, 0, 1);
    return transformation;
}
Here is the output:
//Origin of 3 axis
O(0,0,0), OC1 (23, 0, 50), OC2 (0, 42, 20)
Translation from default axis to OC1: [-23;
0;
-50]
Rotation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0;
0.7071067690849304, 0.7071067690849304, 0;
0, 0, 1]
Trans1 = Transformation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0, -23;
0.7071067690849304, 0.7071067690849304, 0, 0;
0, 0, 1, -50;
0, 0, 0, 1]
Translation from default axis to OC2: [0;
-42;
-20]
Rotation from default axis to OC2: [-4.371138828673793e-08, -1, 0;
1, -4.371138828673793e-08, 0;
0, 0, 1]
Trans2 = Transformation from default axis to OC2: [-4.371138828673793e-08, -1, 0, 0;
1, -4.371138828673793e-08, 0, -42;
0, 0, 1, -20;
0, 0, 0, 1]
(Trans1)-1 * (Trans2) = [0.7071067623795453, -0.7071068241967844, 0, -13.43502907247513;
0.7071068241967844, 0.7071067623795453, 0, -45.96194156373071;
0, 0, 1, 30;
0, 0, 0, 1]
(Trans2)-1 * (Trans1) = [0.7071067381763105, 0.7071067999935476, 0, 42.00000100536185;
-0.7071067999935475, 0.7071067381763104, -0, 22.99999816412165;
0, 0, 1, -30;
0, 0, 0, 1]
//Calculation of translation between OC1 and OC2:
****Translation from C2 to C1
[52.69848430156708;
-29.69848430156708;
30]
****Translation from C1 to C2
[1.005361930594972e-06;
19;
-30]
As you can see above, the 4th column of (Trans1)-1 * (Trans2) is equal neither to the translation from C2->C1 nor to the translation from C1->C2.
This makes me think that c1Mc2 = (oMc1)^-1 * oMc2 does not give what I want, but I don't really understand why. Is there any other way to get what I want?
The rotation matrix of Oc1 is by definition the components of the local XYZ axes:
      | Xc1[0] Yc1[0] Zc1[0] |
R_1 = | Xc1[1] Yc1[1] Zc1[1] |
      | Xc1[2] Yc1[2] Zc1[2] |
and similarly for R_2 of Oc2.
If the relative rotation between them is R then you can define
R_2 = R_1*R
and thus:
R = transpose(R_1)*R_2
That is all you need.
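In OpenCV terms, the answer's formula is a one-liner; a sketch using the rotationOfC1 and rotationOfC2 matrices from the question's listing (for a rotation matrix, the transpose equals the inverse):
Mat R = rotationOfC1.t() * rotationOfC2; // relative rotation between the two frames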

Heap corruption error in findContours

I have a heap corruption error because of the cv::findContours function. I need help figuring out a solution to this problem.
int GetEndPoints(cv::Mat image)
{
    cv::Mat imgBW = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::cvtColor(image, imgBW, CV_BGR2GRAY);
    std::cout << std::endl << imgBW.channels();
    cv::threshold(imgBW, imgBW, 150, 255, cv::THRESH_BINARY);
    cv::namedWindow("image", 0);
    cv::imshow("image", imgBW);
    std::vector<std::vector<cv::Point>> contours;
    cv::Mat image1 = image.clone();
    // Find the contours
    std::cout << std::endl << imgBW.channels();
    cv::findContours(imgBW, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    cv::drawContours(image1, contours, -1, cv::Scalar(255, 0, 0), 2, 8);
    cv::namedWindow("contours", 0);
    cv::imshow("contours", image1);
    cv::waitKey();
    return 0;
}
You should check the number of contours you found. I think you get the error when there are no contours in the contours variable.
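A sketch of that guard, assuming the crash indeed comes from the empty case:
cv::findContours(imgBW, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (!contours.empty())
    cv::drawContours(image1, contours, -1, cv::Scalar(255, 0, 0), 2, 8);
else
    std::cout << std::endl << "no contours found";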

What's the difference between Mat::clone and Mat::copyTo?

I know copyTo can handle a mask. But when a mask is not needed, can I use both interchangeably?
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-clone
Actually, they are NOT the same even without a mask.
The major difference is that when the destination matrix and the source matrix have the same type and size, copyTo will not change the address of the destination matrix, while clone will always allocate a new address for the destination matrix.
This is important when the destination matrix is copied using the copy assignment operator before copyTo or clone. For example:
Using copyTo:
Mat mat1 = Mat::ones(1, 5, CV_32F);
Mat mat2 = mat1;
Mat mat3 = Mat::zeros(1, 5, CV_32F);
mat3.copyTo(mat1);
cout << mat1 << endl;
cout << mat2 << endl;
Output:
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]
Using clone:
Mat mat1 = Mat::ones(1, 5, CV_32F);
Mat mat2 = mat1;
Mat mat3 = Mat::zeros(1, 5, CV_32F);
mat1 = mat3.clone();
cout << mat1 << endl;
cout << mat2 << endl;
Output:
[0, 0, 0, 0, 0]
[1, 1, 1, 1, 1]
This is the implementation of Mat::clone() function:
inline Mat Mat::clone() const
{
    Mat m;
    copyTo(m);
    return m;
}
So, as #rotating_image mentioned, if you don't provide a mask for the copyTo() function, it's the same as clone().
Mat::copyTo is for when you already have a destination cv::Mat that may already be allocated with the right data size. Mat::clone is a convenience for when you know you have to allocate a new cv::Mat.
copyTo doesn't reallocate the destination when it already has the right size and type, which makes it faster.
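A quick way to see the address behaviour described above (my own illustration) is to compare the data pointers:
Mat a = Mat::ones(1, 5, CV_32F);
Mat b = Mat::zeros(1, 5, CV_32F);
uchar* before = a.data;
b.copyTo(a);                        // same size and type: buffer reused
cout << (a.data == before) << endl; // prints 1
a = b.clone();                      // clone always allocates fresh memory
cout << (a.data == before) << endl; // prints 0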
