What's the difference between Mat::clone and Mat::copyTo? - opencv

I know copyTo can handle a mask. But when a mask is not needed, can I use both equally?
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-clone

Actually, they are NOT the same even without a mask.
The major difference is that when the destination matrix and the source matrix have the same type and size, copyTo will not change the address of the destination matrix, while clone always allocates a new address for the destination matrix.
This is important when the destination matrix was copied using the copy assignment operator before calling copyTo or clone. For example,
Using copyTo:
Mat mat1 = Mat::ones(1, 5, CV_32F);
Mat mat2 = mat1;                    // mat2 shares the same data buffer as mat1
Mat mat3 = Mat::zeros(1, 5, CV_32F);
mat3.copyTo(mat1);                  // writes into mat1's existing buffer, so mat2 changes too
cout << mat1 << endl;
cout << mat2 << endl;
Output:
[0, 0, 0, 0, 0]
[0, 0, 0, 0, 0]
Using clone:
Mat mat1 = Mat::ones(1, 5, CV_32F);
Mat mat2 = mat1;                    // mat2 shares the same data buffer as mat1
Mat mat3 = Mat::zeros(1, 5, CV_32F);
mat1 = mat3.clone();                // mat1 now points to a freshly allocated buffer; mat2 is untouched
cout << mat1 << endl;
cout << mat2 << endl;
Output:
[0, 0, 0, 0, 0]
[1, 1, 1, 1, 1]

This is the implementation of Mat::clone() function:
inline Mat Mat::clone() const
{
    Mat m;
    copyTo(m);
    return m;
}
So, as @rotating_image mentioned, if you don't provide a mask to the copyTo() function, it performs the same copy as clone().

Mat::copyTo is for when you already have a destination cv::Mat that may be (or is) already allocated with the right data size. Mat::clone is a convenience for when you know you have to allocate a new cv::Mat.

When the destination already has the right size and type, copyTo doesn't allocate new memory on the heap, which is faster.
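To make the allocation difference concrete, here is a minimal sketch (the frame loop and the name process_frames are hypothetical, not from this thread): reusing one preallocated destination with copyTo costs a heap allocation only once, while clone() allocates a fresh buffer on every iteration.

#include <opencv2/core.hpp>
#include <vector>

void process_frames(const std::vector<cv::Mat>& frames)  // hypothetical helper
{
    cv::Mat scratch;  // allocated once, on the first copyTo
    for (const cv::Mat& f : frames)
    {
        f.copyTo(scratch);            // after the first frame, reuses scratch's buffer
        // ... modify scratch in place without touching f ...

        cv::Mat snapshot = f.clone(); // by contrast, this allocates a new buffer every time
        (void)snapshot;
    }
}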

Related

Using getOptimalNewCameraMatrix to recover the whole original image

(Images omitted: my program output and my distorted input image.)
To recover the original image's whole pixel area I used getOptimalNewCameraMatrix, but the output is terrible; my intrinsics and distortion coefficients are definitely correct.
Here is my program:
Mat src = imread("E:\\40_office\\distorted_bot\\0.jpg");
Size newsize(2280,3072);
cout << src.rows;
namedWindow("", WINDOW_AUTOSIZE);
imshow("", src);
waitKey();
Mat mapx,mapy;//size(x,y)
Mat cameraMatrix = (Mat_<double>(3, 3) << 1224.1, 0, 761.7497, 0, 1209.8, 1043.6, 0, 0, 1);
//Mat cameraMatrix = (Mat_<double>(3, 3) << 1224.1, 0, 1141.7497, 0, 1209.8, 1442.6, 0, 0, 1);
//Mat cameraMatrix = (Mat_<double>(3, 3) << 1209.8, 0, 1043.6, 0, 1224.1, 761.7497, 0, 0, 1);
Mat distCoeffs = (Mat_<double>(5, 1) << -0.3649, 0.1451, -0.0273, -0.000035214,0.0012);
double alpha = 1;
Mat newcameramatrix = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, src.size(), alpha, newsize,0);
cout << newcameramatrix<<"src.size"<<src.size();
fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs,Mat(), newcameramatrix ,newsize, CV_16SC2, mapx,mapy);
Mat newimage=Mat(newsize,CV_8UC3);
remap(src, newimage, mapx, mapy, INTER_LINEAR);
imwrite("C:\\Users\\wk\\Desktop\\1_cali100.jpg", newimage);
return;
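One thing worth flagging (my observation, not something confirmed in the thread): the code builds a 5-element distortion vector for the standard pinhole model, but then hands it to fisheye::initUndistortRectifyMap, which expects the 4-coefficient fisheye model; getOptimalNewCameraMatrix also belongs to the pinhole API. A sketch of the matching pinhole-model call, assuming the coefficients came from cv::calibrateCamera rather than cv::fisheye::calibrate:

// Pinhole-model remap, consistent with the 5-element distCoeffs above
Mat newcameramatrix = getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, src.size(), alpha, newsize, 0);
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), newcameramatrix, newsize, CV_16SC2, mapx, mapy);
remap(src, newimage, mapx, mapy, INTER_LINEAR);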

Element-wise power using OpenCV

I am currently reading this book. The author wrote a code snippet on page 83 in order to (if I understand it correctly) calculate the element-wise power of two matrices. But I think the code doesn't fulfill its purpose, because the matrix dst does not contain the element-wise powers after execution.
Here is the original code:
const Mat* arrays[] = { src1, src2, dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
    for( size_t j = 0; j < it.size; j++ )
    {
        ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
    }
}
Since the parameter of the constructor of cv::NAryMatIterator is const cv::Mat **, I think changing values in the matrix dst is not allowed.
I tried to assign ptrs[2][j] back into dst, but failed to get the correct indices of dst. My questions are as follows:
Is there a convenient method for the matrix element-wise power, like A .^ B in Matlab?
Is there a way to use cv::NAryMatIterator to achieve this goal? If no, then what is the most efficient way to implement it?
You can get this working by converting src1, src2 and dst to float (CV_32F) matrices, because the code treats them as float data in float* ptrs[3];.
An alternative implementation using opencv functions log, multiply and exp is given at the end.
As an example for your 2nd question,
Mat src1 = (Mat_<int>(3, 3) <<
    1, 2, 3,
    4, 5, 6,
    7, 8, 9);
Mat src2 = (Mat_<uchar>(3, 3) <<
    1, 2, 3,
    1, 2, 3,
    1, 2, 3);
Mat dst = (Mat_<float>(3, 3) <<
    1, 2, 3,
    4, 5, 6,
    7, 8, 9);
src1.convertTo(src1, CV_32F);
src2.convertTo(src2, CV_32F);

cout << "before\n";
cout << dst << endl;

const Mat* arrays[] = { &src1, &src2, &dst, 0 };
float* ptrs[3];
NAryMatIterator it(arrays, (uchar**)ptrs);
for( size_t i = 0; i < it.nplanes; i++, ++it )
{
    for( size_t j = 0; j < it.size; j++ )
    {
        ptrs[2][j] = std::pow(ptrs[0][j], ptrs[1][j]);
    }
}
cout << "after\n";
cout << dst << endl;
outputs
before
[1, 2, 3;
4, 5, 6;
7, 8, 9]
after
[1, 4, 27;
4, 25, 216;
7, 64, 729]
If you remove the src1.convertTo(src1, CV_32F); or src2.convertTo(src2, CV_32F);, you won't get the desired result. Try it.
If this is a separate function, don't place the convertTo calls within the function, as they modify the image representation, which could affect later operations. Instead, use convertTo on temporary Mats, like
Mat src132f, src232f, dst32f;
src1.convertTo(src132f, CV_32F);
src2.convertTo(src232f, CV_32F);
dst.convertTo(dst32f, CV_32F);
pow_mat(&src132f, &src232f, &dst32f); /* or whatever the name */
As for your first question, I'm not aware of such function. But you can try something like
Mat tmp;
cv::log(src1, tmp);
cv::multiply(src2, tmp, dst);
cv::exp(dst, dst);
using the relation that c = a^b is equivalent to c = e^(b·ln(a)). Here, the matrices should have type 32F or 64F. This produces
[1, 4, 27.000002;
4, 25.000002, 216.00002;
6.9999995, 64, 729.00006]
for the example above.
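As a side note (an addition of mine, not part of the original answer): when the exponent is a single scalar rather than a per-element matrix, OpenCV already provides cv::pow, so the iterator machinery is only needed for the matrix-exponent case.

// Scalar exponent: raises every element of src to the power 3
Mat src = (Mat_<float>(1, 3) << 1, 2, 3), dst;
cv::pow(src, 3.0, dst);  // dst = [1, 8, 27]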

What are the different ways of constructing cv::Mat?

OpenCV Version 3.2.0
I am reading Bradski and trying out the different cv::Mat constructors (single channel).
Can someone please tell me why some of the constructors do not work?
float data1[6] = {1,2,3,4,5,6};
float data2[6] = {10,20,30,40,50,60};
float data3[6] = {100,200,300,400,500,600};
cv::Mat mat1(3,4,CV_32FC1); //OK
cv::Mat mat2(3,4,CV_32FC1,cv::Scalar(33.3)); //OK
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float)); //OK
cv::Mat mat4(cv::Size(3,4),CV_32FC1); //OK
cv::Mat mat5(cv::Size(3,4),CV_32FC1,cv::Scalar(66.6)); //OK
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float)); //OK
int sz[] = {8, 8, 8};
cv::Mat bigCube1(3, sz, CV_32FC1); // OK
cv::Mat bigCube2(3, sz, CV_32FC1, cv::Scalar::all(99)); // OK
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4); // Not OK, How to initialise a 3D from data?
std::cout << mat1 << std::endl << mat2 << std::endl << mat3 << std::endl << mat4 << std::endl << mat5 << std::endl << mat6 << std::endl; // OK
std::cout << bigCube1.at<float>(10,10,10) << std::endl << bigCube2.at<float>(10,10,10) << std::endl; // OK
cv::Mat img_rgb = cv::imread("lena.jpg", CV_LOAD_IMAGE_COLOR);
std::vector<cv::Range> ranges(3, cv::Range(2,3));
cv::Mat roiRange( img_rgb, cv::Range(100, 300), cv::Range(0, 512)); //OK
cv::Mat roiRect( img_rgb, cv::Rect(0,100,512,200)); // OK
cv::Mat roiRangeMultiple( bigCube1, ranges); // OK
cv::namedWindow("range", CV_WINDOW_AUTOSIZE);
imshow("range", roiRange); // OK
cv::namedWindow("rect", CV_WINDOW_AUTOSIZE);
imshow("rect", roiRect); // OK
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
cv::waitKey(0);
The corresponding outputs are:
[4.6634629e-10, 0, 0, 0;
0, 0, 0, 0;
127.62516, 2.8025969e-45, 0, 0]
[33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999;
33.299999, 33.299999, 33.299999, 33.299999]
[1, 2, 3, 4;
2, 3, 4, 5;
3, 4, 5, 6]
[0, 0, 0;
0, 0, 0;
0, 0, 0;
0, 0, 0]
[66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998;
66.599998, 66.599998, 66.599998]
[10, 20, 30;
20, 30, 40;
30, 40, 50;
40, 50, 60]
0 // bigCube1
99 // bigCube2
And then the corresponding outputs for lena.jpg are the cropped versions from Range and Rect. I don't know how to use the ranges though.
Multiple issues.
cv::Mat mat3(3,4,CV_32FC1,data1,sizeof(float));
This will crash in debug mode, failing an assertion. Even though this is not in the documentation, the step size must be at least the length of a row (i.e. no overlap is allowed).
The correct code for this scenario would be something like:
float data1[12] = { 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6 };
cv::Mat mat3(3, 4, CV_32FC1, data1, 4 * sizeof(float));
cv::Mat mat6(cv::Size(3,4),CV_32FC1,data2,sizeof(float));
Similar situation as in the previous case. Also note that this produces a differently shaped array: the previous one was 3 rows by 4 columns, while this one has 4 rows and 3 columns (see the docs of cv::Size).
float data2[12] = { 10, 20, 30, 40, 20, 30, 40, 50, 30, 40, 50, 60 };
cv::Mat mat6(cv::Size(3, 4), CV_32FC1, data2, 3 * sizeof(float));
cv::Mat bigCube1(3, sz, CV_8UC1);
std::cout << bigCube1 << std::endl;
Formatting of arrays with more than 2 dimensions is not supported.
You can test that the array was correctly created by manually printing all the values:
for (auto const& v : cv::Mat1b(bigCube2)) {
    std::cout << uint(v) << " ";
}
std::cout << "\n";
cv::Mat bigCube3(3, sz, CV_32FC1, data3, 4);
The problem here is the last parameter. From the docs:
cv::Mat::Mat(int            ndims,
             const int *    sizes,
             int            type,
             void *         data,
             const size_t * steps = 0)
steps - Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous.
You're not passing an array of steps as the last parameter (only a single integer), you don't pass enough data, and the rows would again overlap.
One way to do this would be something like
float data3[8 * 8 * 8];
// Populate the data with the sequence 0..511 (std::iota lives in <numeric>)
std::iota(std::begin(data3), std::end(data3), 0.0f);

int sz[] = { 8, 8, 8 };
size_t steps[] = { 8 * 8 * sizeof(float), 8 * sizeof(float) };
cv::Mat bigCube3(3, sz, CV_32FC1, data3, steps);
cv::Mat bigCube1(3, sz, CV_8UC1);
// ...
std::cout << roiRangeMultiple.at<float>(0,1,1); // Not OK. Expecting a float value as answer
The data type is CV_8UC1, so each element is an unsigned char. That means you shouldn't be extracting float values from it; your expectation is incorrect. (Now I see you changed the code in your question.)
Also, note that with cv::Range "start is an inclusive left boundary of the range and end is an exclusive right boundary of the range". Since you extract cv::Range(2,3) in each axis, the resulting Mat is 1 x 1 x 1. Hence, you're accessing elements out of range (again, this would trigger a debug mode assertion).
std::cout << roiRangeMultiple.at<unsigned char>(0,0,0);
After the change you have the correct type. However, notice that you never initialize bigCube1. You most likely get 0.0f as a result, which prints as 0. You can try this yourself: just execute std::cout << 0.0f; and see.

Bug in cv::warpAffine?

I think the following example shows a bug in warpAffine (OpenCV 3.1 with precompiled Win64 DLLs):
Mat x(1, 20, CV_32FC1);
for (int iCol(0); iCol < x.cols; iCol++) { x.col(iCol).setTo(iCol); }

Mat crop;
Point2d c(10., 0.);
double scale(1.3);
int cropSz(11);
double vals[6] = { scale, 0.0, c.x - (cropSz/2)*scale, 0.0, scale, c.y };
Mat map(2, 3, CV_64FC1, vals);
warpAffine(x, crop, map, Size(cropSz, 1), WARP_INVERSE_MAP | INTER_LINEAR);

float dx = (crop.at<float>(0, crop.cols-1) - crop.at<float>(0, 0)) / (crop.cols-1);
Mat constGrad = crop.clone().setTo(0);
for (int iCol(0); iCol < constGrad.cols; iCol++) {
    constGrad.col(iCol) = c.x + (iCol - cropSz/2)*scale;
}
Mat diff = crop - constGrad;
double err = norm(diff, NORM_INF);
if (err > 1e-4) {
    cout << "Problem:" << endl;
    cout << "computed output: " << crop << endl;
    cout << "expected output: " << constGrad << endl;
    cout << "difference: " << diff << endl;

    Mat dxImg;
    Mat dxFilt(1, 2, CV_32FC1);
    dxFilt.at<float>(0) = -1.0f;
    dxFilt.at<float>(1) = 1.0f;
    filter2D(crop, dxImg, crop.depth(), dxFilt);
    cout << "x-derivative in computed output: " << dxImg(Rect(1, 0, 10, 1)) << endl;
    cout << "Note: We expect a constant difference of 1.3" << endl;
}
Here is the program output:
Problem:
computed output: [3.5, 4.8125, 6.09375, 7.40625, 8.6875, 10, 11.3125, 12.59375, 13.90625, 15.1875, 16.5]
expected output: [3.5, 4.8000002, 6.0999999, 7.4000001, 8.6999998, 10, 11.3, 12.6, 13.9, 15.2, 16.5]
difference: [0, 0.012499809, -0.0062499046, 0.0062499046, -0.012499809, 0, 0.012499809, -0.0062503815, 0.0062503815, -0.012499809, 0]
x-derivative in computed output: [1.3125, 1.28125, 1.3125, 1.28125, 1.3125, 1.3125, 1.28125, 1.3125, 1.28125, 1.3125]
Note: We expect a constant difference of 1.3
I create an image with entries 0, 1, 2, ..., n-1, and cut out a region around (10, 0) at scale 1.3. I also create the expected image, constGrad. However, they are not the same. Moreover, since the input image has a constant derivative in the x-direction and the mapping is affine, I expect a constant gradient in the resulting image as well.
The problem is not a boundary effect; the same thing happens in the interior of an image. It's also not related to WARP_INVERSE_MAP.
Is this a known issue? Any comments on this?
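For what it's worth, a plausible explanation (my assumption, not confirmed in this thread): warpAffine's INTER_LINEAR path computes source coordinates in fixed point with 5 fractional bits (INTER_BITS in the OpenCV sources), i.e. a resolution of 1/32 pixel. Quantizing the ideal sample positions to 1/32 reproduces the computed output exactly, since the input image is the ramp f(p) = p:

#include <cmath>
#include <cstdio>

int main()
{
    // Ideal source x-coordinates of the crop: c.x + (iCol - cropSz/2)*scale,
    // rounded to the nearest 1/32 pixel as a 5-fractional-bit fixed-point path would.
    const double scale = 1.3, cx = 10.0;
    const int cropSz = 11;
    for (int iCol = 0; iCol < cropSz; iCol++)
    {
        double ideal = cx + (iCol - cropSz / 2) * scale;
        double quantized = std::round(ideal * 32.0) / 32.0;
        std::printf("%g -> %g\n", ideal, quantized);  // e.g. 4.8 -> 4.8125, matching the output above
    }
    return 0;
}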

Heap corruption error in findContours

I have a heap corruption error because of the cv::findContours function. I need help figuring out a solution to this problem.
int GetEndPoints(cv::Mat image)
{
    cv::Mat imgBW = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::cvtColor(image, imgBW, CV_BGR2GRAY);
    std::cout << std::endl << imgBW.channels();
    cv::threshold(imgBW, imgBW, 150, 255, cv::THRESH_BINARY);
    cv::namedWindow("image", 0);
    cv::imshow("image", imgBW);

    std::vector<std::vector<cv::Point>> contours;
    cv::Mat image1 = image.clone();

    // Find the contours
    std::cout << std::endl << imgBW.channels();
    cv::findContours(imgBW, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    cv::drawContours(image1, contours, -1, cv::Scalar(255, 0, 0), 2, 8);
    cv::namedWindow("contours", 0);
    cv::imshow("contours", image1);
    cv::waitKey();
    return 0;
}
You should check the number of contours you found. I think you get the error when there are no contours in the contours variable.
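A minimal sketch of that guard (my addition; note that heap corruption around findContours is also commonly caused by mixing debug and release builds or mismatched OpenCV runtime DLLs, which is worth ruling out separately):

cv::findContours(imgBW, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
if (!contours.empty())
{
    cv::drawContours(image1, contours, -1, cv::Scalar(255, 0, 0), 2, 8);
}
else
{
    std::cout << "no contours found" << std::endl;
}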
