Adding a scalar to a Mat object - opencv

So I'm trying to add a scalar value to all elements of a Mat object in OpenCV, but for raw_t_ubit8 and raw_t_ubit16 types I get wrong results. Here's the code:
Mat A;
//Initialize Mat A;
A = A + 0.1;
The result of the addition is exactly the same matrix as before. This problem does not occur when I add scalars to raw_t_real types of matrices. By raw_t_ubit8 I mean the depth is CV_8UC1.

If, as you mentioned in the comments, the values contained in the matrix are scaled to fit the integer domain 0..255, then you should also scale the scalar value you add. Namely:
A = A + cv::Scalar(round(0.1 * 255) );
Or even better:
A += cv::Scalar(round(0.1 * 255) );
Note that cv::Scalar, as pointed out in the comments by Miki, is in any case made from a double (it's a cv::Scalar_<double>).
The rounding could be omitted, but then you leave the choice of how to convert your double into an integer to the function implementation.
You should also check what happens when the values saturate.
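For instance, a minimal sketch of what saturation looks like here (assuming OpenCV's saturate_cast semantics for CV_8UC1 arithmetic):
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    cv::Mat1b A = (cv::Mat1b(1,3) << 0, 200, 250);
    A += cv::Scalar(cvRound(0.1 * 255)); // adds round(0.1*255) = 26 to every element
    // values that would exceed 255 saturate instead of wrapping around:
    std::cout << A << std::endl; // [26, 226, 255]
    return 0;
}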
Documentation for OpenCV matrix expressions.

As stated in the comments and in @Antonio's answer, you can't add 0.1 to an integer.
If you are using CV_8UC1 matrices but want to work with floating point values, you should multiply by 255.
Mat1b A; // <-- type CV_8UC1
...
A += 0.1 * 255;
If the result of the operation needs to be cast, as in this case, then ultimately saturate_cast is called.
This is equivalent to @Antonio's answer, but it results in cleaner code (at least for me).
The same code path is used whether you add a double or a Scalar. A Scalar object will be created either way, using:
template<typename _Tp> inline
Scalar_<_Tp>::Scalar_(_Tp v0)
{
    this->val[0] = v0;
    this->val[1] = this->val[2] = this->val[3] = 0;
}
However, if you need to add exactly 0.1 to your matrix (and not scale it by 255), you need to convert your matrix to CV_32FC1:
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int, char** argv)
{
    Mat1b A = (Mat1b(3,3) << 1,2,3,4,5,6,7,8,9);
    Mat1f F;
    A.convertTo(F, CV_32FC1);
    F += 0.1;
    return 0;
}
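If you later need the result back in 8-bit, convertTo applies saturate_cast element-wise; a small sketch continuing from the code above:
Mat1b B;
F.convertTo(B, CV_8UC1); // each float is rounded and saturated back to 0..255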

Related

Copy Vector of Contour Points into Mat

I am using OpenCV 3.1 with VS2012 C++/CLI.
I have stored the result of a findContours call into:
std::vector<std::vector<Point>> Contours;
Thus, Contours[0] is a vector of the contour points of the first contour.
Contours[1] is a vector of the contour points of the second contour, etc.
Now I want to load one of the contours into a Mat. Based on Convert Mat to vector <float> and Vector<float> to mat in opencv, I thought something like this would work:
Mat testMat=Mat(Images->Contours[0].size(),2,CV_32FC1);
memcpy(testMat.data,Images->Contours[0].data(),Images->Contours[0].size()*CV_32FC1);
I specified two columns because each underlying point must be composed of both an X coordinate and a Y coordinate, and each of those should be a float. However, when I access the Mat elements, I can see that the first element is not the underlying data but the total number of contour points.
Any help on the right way to accomplish this is appreciated.
You can do that with:
Mat testMat = Mat(Images->Contours[0]).reshape(1);
Now testMat is of type CV_32SC1, i.e. of int. If you need float, you can convert:
testMat.convertTo(testMat, CV_32F);
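Both steps can also be combined into one expression; a sketch reusing the question's variable names:
Mat testMat;
Mat(Images->Contours[0]).reshape(1).convertTo(testMat, CV_32F);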
Some more details and variants...
You can simply use the Mat constructor that accepts a std::vector:
vector<Point> v = { {0,1}, {2,3}, {4,5} };
Mat m(v);
With this, you get a 2-channel matrix that shares its underlying data with v. This means that if you change a value in v, the values in m change as well.
v[0].x = 7; // also 'm' changes
If you want a deep copy of the values, so that changes in v are not reflected in m, you can use clone:
Mat m2 = Mat(v).clone();
Your matrices are of type CV_32SC2, i.e. 2-channel matrices of int (because Point uses int; use Point2f for float). If you want a 2-column single-channel matrix you can use reshape:
Mat m3 = m2.reshape(1);
If you want to convert to float type, you need to use convertTo:
Mat m4;
m2.convertTo(m4, CV_32F);
Here is some working code as a proof of concept:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
    vector<Point> v = { {0,1}, {2,3}, {4,5} };
    // changes in v affect m
    Mat m(v);
    // changes in v don't affect m2
    Mat m2 = Mat(v).clone();
    // m is changed
    v[0].x = 7;
    // m3 is a 2-column single-channel matrix
    Mat m3 = m2.reshape(1);
    // m4 is a matrix of floats
    Mat m4;
    m2.convertTo(m4, CV_32F);
    return 0;
}

incorrect size of vector when converting Mat to Vector

I'm trying to convert the result Mat of matchTemplate with the following code (which was found at: this question):
void convertmatVec(const cv::Mat& m, std::vector<uchar>& v) {
    if (m.isContinuous()) {
        v.assign(m.datastart, m.dataend);
    }
    else {
        printf("failed to convert / not continuous");
        return;
    }
}
and when I check the size of the output, it's not the same as the product of result's rows and columns. Another Mat created with:
cv::Mat test(result.size(),false);
test.setTo(cv::Scalar(255));
and then converted does show a vector size equal to that product.
So my question is: how can I get result's data so I can process it further? I'm assuming the size of the vector should be the same as the product, which it clearly isn't.
EDIT1: Added templateMatching code
void matchTemplatenoRotation(cv::Mat src, cv::Mat templ) {
    cv::Mat img_display, result;
    src.copyTo(img_display);
    int result_cols = src.cols - templ.cols + 1;
    int result_rows = src.rows - templ.rows + 1;
    result.create(result_rows, result_cols, CV_32FC1);
    cv::matchTemplate(src, templ, result, CV_TM_SQDIFF_NORMED);
    cv::normalize(result, result, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());
    cv::Point minLoc, maxLoc;
    double minVal, maxVal;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
    cv::Point matchLoc = minLoc;
    //end of template matching
}
EDIT2, follow-up question:
How come when I create another Mat with the following code:
cv::Mat test(cv::Size(result.cols, result.rows),true);
test.setTo(cv::Scalar(255));
//cv::imshow("test3", test);
std::vector<float> testVector;
convertmatVec(test, testVector);
the vector size is again not what I expect?
You have 4 times the expected number of elements in your vector because your matrix is of type CV_32FC1. If you look at the types of m.datastart and m.dataend you will see that they are uchar* and not float* as you expect.
To correct this, change v.assign(m.datastart, m.dataend); to v.assign((float*)m.datastart, (float*)m.dataend);. And you will need to pass a vector<float> instead of a vector<uchar>.
Of course, your conversion function will then only work for float type matrices. You could add some tests to detect the type of the matrix inside the function.
To answer your follow-up question, it appears that you have the inverse problem. You are passing a CV_8U type matrix to a function that expects a CV_32F type one. Use your old conversion function for 8-bit matrices and use the fix I suggested for 32-bit floating-point matrices. You can also add a test inside the conversion function to automatically choose the right conversion. I also advise you to read a bit about the OpenCV Mat class to understand better what type of data is in your matrices.
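For example, a hedged sketch of such a type-aware variant for the CV_32FC1 case (the function name is mine, not the question's; it also handles non-continuous matrices row by row):
#include <opencv2/opencv.hpp>
#include <vector>
void convertmatVecFloat(const cv::Mat& m, std::vector<float>& v) {
    CV_Assert(m.type() == CV_32FC1); // only single-channel float input
    if (m.isContinuous()) {
        v.assign((const float*)m.datastart, (const float*)m.dataend);
    } else {
        v.clear();
        for (int r = 0; r < m.rows; ++r)
            v.insert(v.end(), m.ptr<float>(r), m.ptr<float>(r) + m.cols);
    }
}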
It looks like your const cv::Mat& m has four bytes per element (CV_32F) rather than one, hence the factor of four.
Another thing to consider: if this Mat is the result of a matchTemplate(), then you should be using a vector<float>, instead of vector<uchar>.

Mat and Vec_ types multiplication

Is there any easy way to multiply a Mat and a Vec_, provided that they have compatible sizes? E.g.:
Mat_<double> M = Mat(3,3,CV_32F);
Vec3f V=(1,2,3);
result = M*V //?
Maybe there is some easy method of creating row (or col) Mat based on Vec3?
You can't just multiply Mat and Vec (or, more generally, Matx) objects directly. Cast the Vec object to Mat:
Mat_<float> M = Mat::eye(3,3,CV_32F);
Vec3f V(1,2,3);
Mat result = M*Mat(V);
Also, I noticed errors in your code: when constructing M, the type CV_32F corresponds to float elements, not double; and Vec3f V=(1,2,3) does not initialize the vector you expect (the parentheses form a comma expression), so it should be Vec3f V(1,2,3). Both are corrected in my code example.
Hope that it helps.
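Regarding the sub-question about creating a row or column Mat from a Vec3: cv::Mat has a constructor taking a Vec, so a sketch along those lines, continuing from the answer's code above:
Vec3f V(1, 2, 3);
Mat col = Mat(V);       // 3x1 column matrix (deep copy by default)
Mat row = Mat(V).t();   // 1x3 row matrix
Vec3f result = Mat(M * Mat(V)); // the 3x1 product converts back to a Vec3f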

Size of Matrix OpenCV

I know this might be very rudimentary, but I am new to OpenCV. Could you please tell me how to obtain the size of a matrix in OpenCV? I googled and I am still searching, but if any of you know the answer, please help me.
Size as in number of rows and columns.
And is there a way to directly obtain the maximum value of a 2D matrix?
cv::Mat mat;
int rows = mat.rows;
int cols = mat.cols;
cv::Size s = mat.size();
rows = s.height;
cols = s.width;
Note that apart from rows and columns, a matrix has a number of channels and a type. When the type is known, the channels can act as an extra dimension, as in CV_8UC3, so you would address an element as
uchar a = M.at<Vec3b>(y, x)[i];
So the size in terms of elements of the elementary type is M.rows * M.cols * M.channels().
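For example, a small sketch (M.total() counts rows*cols, so M.total()*M.channels() gives the same number):
Mat M(480, 640, CV_8UC3);
size_t n = (size_t)M.rows * M.cols * M.channels(); // 480*640*3 = 921600
// equivalently: M.total() * M.channels()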
To find the max element one can use
Mat src;
double minVal, maxVal;
minMaxLoc(src, &minVal, &maxVal);
For 2D matrix:
mat.rows – Number of rows in a 2D array.
mat.cols – Number of columns in a 2D array.
Or:
C++: Size Mat::size() const
The method returns a matrix size: Size(cols, rows). When the matrix is more than 2-dimensional, the returned size is (-1, -1).
For a multidimensional matrix, you need to use:
int thisSizes[3] = {2, 3, 4};
cv::Mat mat3D(3, thisSizes, CV_32FC1);
// mat3D.size tells the size of the matrix
// mat3D.size[0] = 2;
// mat3D.size[1] = 3;
// mat3D.size[2] = 4;
Note: here 2 is for the z axis, 3 for the y axis, and 4 for the x axis. By x, y, z is meant the order of the dimensions; the x index changes the fastest.
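Element access follows the same dimension order; a small sketch using the matrix above:
float val = mat3D.at<float>(1, 2, 3); // indices run along size[0], size[1], size[2]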
If you are using the Python wrappers, then (assuming your matrix name is mat):
mat.shape gives you a tuple of the form (height, width, channels)
mat.size gives you the total number of elements in the array
Sample Code:
import cv2
mat = cv2.imread('sample.png')
height, width, channel = mat.shape[:3]
size = mat.size
A complete C++ code example that may be helpful for beginners:
#include <iostream>
#include <string>
#include "opencv/highgui.h"
using namespace std;
using namespace cv;
int main()
{
    cv::Mat M(102,201,CV_8UC1);
    int rows = M.rows;
    int cols = M.cols;
    cout<<rows<<" "<<cols<<endl;
    cv::Size sz = M.size();
    rows = sz.height;
    cols = sz.width;
    cout<<rows<<" "<<cols<<endl;
    cout<<sz<<endl;
    return 0;
}

OpenCV::solvePNP() - Assertion failed

I am trying to get the pose of the camera with the help of solvePnP() from OpenCV.
After running my program I get the following errors:
OpenCV Error: Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in solvePnP, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.2/modules/calib3d/src/solvepnp.cpp, line 55
libc++abi.dylib: terminate called throwing an exception
I tried to search for how to solve these errors, but unfortunately I couldn't resolve them!
Here is my code, all comment/help is much appreciated:
enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType)
{
    corners.clear();
    switch(patternType)
    {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
        break;
    case ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
        break;
    }
}
int main(int argc, char* argv[])
{
    float squareSize = 50.f;
    Pattern calibrationPattern = CHESSBOARD;
    //vector<Point2f> boardCorners;
    vector<vector<Point2f> > imagePoints(1);
    vector<vector<Point3f> > boardPoints(1);
    Size boardSize;
    boardSize.width = 9;
    boardSize.height = 6;
    vector<Mat> intrinsics, distortion;
    string filename = "out_camera_xml.xml";
    FileStorage fs(filename, FileStorage::READ);
    fs["camera_matrix"] >> intrinsics;
    fs["distortion_coefficients"] >> distortion;
    fs.release();
    vector<Mat> rvec, tvec;
    Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // I have to pass an image here
    bool found = findChessboardCorners(img, boardSize, imagePoints[0], CV_CALIB_CB_ADAPTIVE_THRESH);
    calcBoardCornerPositions(boardSize, squareSize, boardPoints[0], calibrationPattern);
    boardPoints.resize(imagePoints.size(),boardPoints[0]);
    //***Debug start***
    cout << imagePoints.size() << endl << boardPoints.size() << endl << intrinsics.size() << endl << distortion.size() << endl;
    //***Debug end***
    solvePnP(Mat(boardPoints), Mat(imagePoints), intrinsics, distortion, rvec, tvec);
    for(int i=0; i<rvec.size(); i++) {
        cout << rvec[i] << endl;
    }
    return 0;
}
EDIT (some debug info):
I debugged it row by row and stepped into all of the functions. I get the assertion failure inside solvePnP(...). Below is what I see when I step into the solvePnP function: it jumps over the first if statement, if(vec.empty()), goes into the second if statement, if( !copyData ), and when it executes the last line, datalimit = dataend = datastart + rows*step[0], it jumps back to the first if statement and returns; then I get the assertion failed error.
template<typename _Tp> inline Mat::Mat(const vector<_Tp>& vec, bool copyData)
    : flags(MAGIC_VAL | DataType<_Tp>::type | CV_MAT_CONT_FLAG),
      dims(2), rows((int)vec.size()), cols(1), data(0), refcount(0),
      datastart(0), dataend(0), allocator(0), size(&rows)
{
    if(vec.empty())
        return;
    if( !copyData )
    {
        step[0] = step[1] = sizeof(_Tp);
        data = datastart = (uchar*)&vec[0];
        datalimit = dataend = datastart + rows*step[0];
    }
    else
        Mat((int)vec.size(), 1, DataType<_Tp>::type, (uchar*)&vec[0]).copyTo(*this);
}
Step into the function in a debugger and see exactly which assertion is failing. (Probably it requires values in double (CV_64F) rather than float.)
OpenCV's new InputArray wrapper is supposed to allow you to call functions with any shape of Mat, vector of points, etc., and it will sort it out. But a lot of functions assume a particular input format or have obsolete assertions enforcing a particular format.
The stereo/calibration functions are the worst for requiring a specific layout, and frequently successive operations require a different layout.
The types don't seem right; at least in the code that worked for me I used different types (as described in the documentation):
objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can also be passed here.
imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can also be passed here.
cameraMatrix – Input camera matrix A = [fx 0 cx; 0 fy cy; 0 0 1].
distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec – Output rotation vector (see Rodrigues()) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec – Output translation vector.
useExtrinsicGuess – If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
Documentation from here.
vector<Mat> rvec, tvec should be Mat rvec, tvec instead.
vector<vector<Point2f> > imagePoints(1) should be vector<Point2f> imagePoints(1) instead.
vector<vector<Point3f> > boardPoints(1) should be vector<Point3f> boardPoints(1) instead.
Note: I encountered the exact same problem, and this worked for me (it is a little bit confusing since calibrateCamera uses vectors). I haven't tried it for imagePoints or boardPoints (but as documented in the link above, vector<vector<...>> should work; I thought I'd better mention it), but for rvec and tvec I tried it myself.
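Putting these changes together, a hedged sketch of the corrected setup (note the question also reads camera_matrix and distortion_coefficients into vector<Mat>; reading each into a single Mat is presumably what was intended):
Mat intrinsics, distortion;
fs["camera_matrix"] >> intrinsics;
fs["distortion_coefficients"] >> distortion;
Mat rvec, tvec;
solvePnP(boardPoints[0], imagePoints[0], intrinsics, distortion, rvec, tvec);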
I ran into exactly the same problem with solvePnP and OpenCV 3. I tried to isolate the problem in a single test case. It seems passing a std::vector to cv::InputArray does not do what is expected. The following small test works with OpenCV 2.4.9 but not with 3.2.
And this is exactly the problem when passing a std::vector of points to solvePnP: it causes the assert at line 63 in solvepnp.cpp to fail!
Generating a cv::Mat out of the vector list before passing it to solvePnP works.
//create list with 3 points
std::vector<cv::Point3f> vectorList;
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
//to input array
cv::InputArray inputArray(vectorList);
cv::Mat mat = inputArray.getMat();
cv::Mat matDirect = cv::Mat(vectorList);
LOG_INFO("Size vector: %d mat: %d matDirect: %d", vectorList.size(), mat.checkVector(3, CV_32F), matDirect.checkVector(3, CV_32F));
QVERIFY(vectorList.size() == mat.checkVector(3, CV_32F));
Result opencv 2.4.9 macos:
TestObject: OpenCV
Size vector: 3 mat: 3 matDirect: 3
Result opencv 3.2 win64:
TestObject: OpenCV
Size vector: 3 mat: 9740 matDirect: 3
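So the workaround mentioned above is to wrap explicitly before the call; a sketch where objectPoints/imagePoints stand in for your own point vectors and cameraMatrix/distCoeffs for your calibration data:
cv::Mat rvec, tvec;
cv::solvePnP(cv::Mat(objectPoints), cv::Mat(imagePoints), cameraMatrix, distCoeffs, rvec, tvec);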
I faced the same issue. In my case (in Python), I converted the input array to float type, and it worked fine afterwards.
