Size of Matrix OpenCV - opencv

I know this might be very rudimentary, but I am new to OpenCV. Could you please tell me how to obtain the size of a matrix in OpenCV? I have googled and am still searching, but if any of you know the answer, please help me.
Size as in number of rows and columns.
And is there a way to directly obtain the maximum value of a 2D matrix?

cv::Mat mat;
int rows = mat.rows;
int cols = mat.cols;
cv::Size s = mat.size();
rows = s.height;
cols = s.width;

Note that apart from rows and columns, a Mat also has a number of channels and a type. When the type is known, the channels can act as an extra dimension, as in CV_8UC3, so you would address an element of the matrix as
uchar a = M.at<Vec3b>(y, x)[i];
So the size in terms of elements of the elementary type is M.rows * M.cols * M.channels().
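As a quick sanity check, a minimal sketch (assuming the usual OpenCV includes; the concrete size is just an example) using Mat::total(), which counts rows * cols for a 2-D matrix:
cv::Mat M(480, 640, CV_8UC3);
size_t elems = M.total() * M.channels();   // 480 * 640 * 3
CV_Assert(elems == static_cast<size_t>(M.rows) * M.cols * M.channels());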
To find the max element one can use
Mat src;
double minVal, maxVal;
minMaxLoc(src, &minVal, &maxVal);
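For completeness, a minimal sketch that also retrieves where the extrema occur (note that minMaxLoc expects a single-channel matrix; the sample values are arbitrary):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat src = (cv::Mat_<float>(2, 3) << 1, 5, 3, 9, 2, 7);
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    // works on single-channel arrays; reshape() or split() multi-channel data first
    cv::minMaxLoc(src, &minVal, &maxVal, &minLoc, &maxLoc);
    std::cout << "max " << maxVal << " at " << maxLoc << std::endl;   // max 9 at [0, 1]
    return 0;
}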

For 2D matrix:
mat.rows – Number of rows in a 2D array.
mat.cols – Number of columns in a 2D array.
Or:
C++: Size Mat::size() const
The method returns a matrix size: Size(cols, rows). When the matrix is more than 2-dimensional, the returned size is (-1, -1).
For a multidimensional matrix, you need to use
int thisSizes[3] = {2, 3, 4};
cv::Mat mat3D(3, thisSizes, CV_32FC1);
// mat3D.size tells the size of the matrix
// mat3D.size[0] = 2;
// mat3D.size[1] = 3;
// mat3D.size[2] = 4;
Note that here 2 is the size along the z axis, 3 along the y axis, and 4 along the x axis. By x, y, z we mean the order of the dimensions: the x index changes the fastest.
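A minimal sketch of indexing such a 3-D matrix (assuming the usual OpenCV includes): the indices are given in the same dimension order, so the call below addresses z = 1, y = 2, x = 3.
int sizes[3] = { 2, 3, 4 };
cv::Mat mat3D(3, sizes, CV_32FC1, cv::Scalar(0));
mat3D.at<float>(1, 2, 3) = 42.f;   // last index (x) varies fastest in memory
std::cout << mat3D.size[0] << " x " << mat3D.size[1]
          << " x " << mat3D.size[2] << std::endl;   // 2 x 3 x 4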

If you are using the Python wrappers, then (assuming your matrix name is mat):
mat.shape gives you a tuple of the form (height, width, channels)
mat.size gives you the total number of elements (height * width * channels)
Sample Code:
import cv2
mat = cv2.imread('sample.png')
height, width, channel = mat.shape[:3]
size = mat.size

A complete C++ code example that may be helpful for beginners:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
    cv::Mat M(102, 201, CV_8UC1);

    int rows = M.rows;
    int cols = M.cols;
    cout << rows << " " << cols << endl;   // 102 201

    cv::Size sz = M.size();
    rows = sz.height;
    cols = sz.width;
    cout << rows << " " << cols << endl;   // 102 201

    cout << sz << endl;                    // [201 x 102]
    return 0;
}

Related

Copy Vector of Contour Points into Mat

I am using OpenCV 3.1 with VS2012 C++/CLI.
I have stored the result of a findContours call into:
std::vector<std::vector<Point>> Contours;
Thus, Contours[0] is a vector of the contour points of the first contour.
Contours[1] is a vector of the contour points of the second contour, etc.
Now I want to load one of the contours into a Mat. Based on Convert Mat to vector<float> and Vector<float> to Mat in OpenCV, I thought something like this would work.
Mat testMat=Mat(Images->Contours[0].size(),2,CV_32FC1);
memcpy(testMat.data,Images->Contours[0].data(),Images->Contours[0].size()*CV_32FC1);
I specified two columns because each underlying point must be composed of both an X value and a Y value, and each of those should be a float. However, when I access the Mat elements, I can see that the first element is not the underlying data but the total number of contour points.
Any help on the right way to accomplish this is appreciated.
You can do that with:
Mat testMat = Mat(Images->Contours[0]).reshape(1);
Now testMat is of type CV_32SC1, aka of int. If you need float you can:
testMat.convertTo(testMat, CV_32F);
Some more details and variants...
You can simply use the Mat constructor that accepts a std::vector:
vector<Point> v = { {0,1}, {2,3}, {4,5} };
Mat m(v);
With this, you get a 2-channel matrix whose underlying data is the data in v. This means that if you change a value in v, the values in m change as well.
v[0].x = 7; // also 'm' changes
If you want a deep copy of the values, so that changes in v are not reflected in m, you can use clone:
Mat m2 = Mat(v).clone();
Your matrices are of type CV_32SC2, i.e. 2-channel matrices of int (because Point uses int; use Point2f for float). If you want a 2-column single channel matrix you can use reshape:
Mat m3 = m2.reshape(1);
If you want to convert to float type, you need to use convertTo:
Mat m4;
m2.convertTo(m4, CV_32F);
Here some working code as a proof of concept:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
    vector<Point> v = { {0,1}, {2,3}, {4,5} };

    // changes in v affect m (the Mat header points to the vector's data)
    Mat m(v);

    // changes in v don't affect m2 (deep copy)
    Mat m2 = Mat(v).clone();

    // m is changed
    v[0].x = 7;

    // m3 is a 2-column single channel matrix
    Mat m3 = m2.reshape(1);

    // m4 is a matrix of floats
    Mat m4;
    m2.convertTo(m4, CV_32F);

    return 0;
}

Adding a scalar to a Mat object

So I'm trying to add a scalar value to all elements of a Mat object in OpenCV; however, for raw_t_ubit8 and raw_t_ubit16 types I get wrong results. Here's the code.
Mat A;
//Initialize Mat A;
A = A + 0.1;
The result of the addition is exactly the same as the initial matrix. This problem does not occur when I try to add scalars to raw_t_real types of matrices. By raw_t_ubit8 I mean the depth is CV_8UC1.
If, as you mentioned in the comments, the contained values are scaled in the matrix to fit the integer domain 0..255, then you should also scale the scalar value you sum. Namely:
A = A + cv::Scalar(round(0.1 * 255) );
Or even better:
A += cv::Scalar(round(0.1 * 255) );
Note that cv::Scalar, as pointed out in the comments by Miki, is in any case made from a double (it's a cv::Scalar_<double>).
The rounding could be omitted, but then you leave the choice on how to convert your double into integer to the function implementation.
You should also check what happens when the values saturate.
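For example, a minimal sketch of the saturation behaviour (assuming the usual OpenCV includes): values that would overflow CV_8U are clipped to 255 rather than wrapped around.
cv::Mat1b A = (cv::Mat1b(1, 3) << 10, 200, 250);
A += cv::Scalar(30);
// A is now [ 40, 230, 255 ]: 250 + 30 saturates to 255 instead of wrapping to 24
std::cout << A << std::endl;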
Documentation for OpenCV matrix expressions.
As stated in the comments and in #Antonio's answer, you can't add 0.1 to an integer.
If you are using CV_8UC1 matrices but want to work with floating point values, you should multiply by 255.
Mat1b A; // <-- type CV_8UC1
...
A += 0.1 * 255;
If the result of the operation needs to be cast, as in this case, then ultimately saturate_cast is called.
This is equivalent to #Antonio's answer, but it results in cleaner code (at least for me).
The same code will be used whether you sum a double or a Scalar; a Scalar object will be created in both cases using:
template<typename _Tp> inline
Scalar_<_Tp>::Scalar_(_Tp v0)
{
    this->val[0] = v0;
    this->val[1] = this->val[2] = this->val[3] = 0;
}
However if you need to sum exactly 0.1 to your matrix (and not to scale it by 255), you need to convert your matrix to CV_32FC1:
#include <opencv2/opencv.hpp>
using namespace cv;
int main(int, char** argv)
{
    Mat1b A = (Mat1b(3,3) << 1, 2, 3, 4, 5, 6, 7, 8, 9);
    Mat1f F;
    A.convertTo(F, CV_32FC1);
    F += 0.1;
    return 0;
}

Get values from OpenCV Histogram

I have what should be a simple exercise in OpenCV, but can't seem to get it working. I'm trying to determine the density of edges in a section of an image. This is the process I follow:
1. pull subimage from image
2. use Canny to find edges in subImage
3. threshold to create binary image
4. create histogram for binary image
5. get number of pixels in binary image that are "on" (255)
6. calculate "edge density" as numPixelsOn/totalPixels
I've checked the results of 1, 2, and 3 above, and they seem OK. Steps 4 and 5 seem to be giving me trouble.
Here's my code for calculating the histogram:
int histSize = 256; // bin size
float range[] = { 0, 256} ;
const float* histRange = { range };
bool uniform = true;
bool accumulate = false;
Mat hist;
/// Compute the histograms:
calcHist( &gray, 1, 0, Mat(), hist, 1, &histSize, &histRange, uniform, accumulate );
This doesn't seem to be working. When I check hist after calling calcHist, it has no data (i.e. data == 0)... or maybe I don't understand what I'm looking at.
Now for accessing the "bins" in the histogram, I've tried a number of things. First I tried this:
uchar* p;
p = hist.ptr<uchar>(0);
double edgePixels = p[255];
I also tried to use:
cvQueryHistValue_1D(hist,255); // #include <opencv2/legacy/compat.hpp>
This wouldn't compile. Gave 2 errors: 'cv::Mat' does not have an overloaded member 'operator ->', and 'bins': is not a member of 'cv::Mat'
I guess I need some help on this.
There is an error in your 3rd parameter, channels: it should be an array, so you should call it like this:
int histSize = 256; // bin size
float range[] = { 0, 256} ;
const float* histRange = { range };
bool uniform = true;
bool accumulate = false;
Mat hist;
int channels[] = {0};
/// Compute the histograms:
calcHist( &gray, 1, channels, Mat(), hist, 1, &histSize, &histRange, uniform, accumulate );
You should also call:
hist.at<float>(0);
to get your value. OpenCV stores the bin counts as floats; that is why you get 0 when reading with uchar: a uchar is smaller than a float, and the stored numbers are small enough that their first bytes are zero.
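Putting the pieces together for the original goal (edge density), a minimal sketch under the assumption that gray is the thresholded binary edge image (only 0 and 255), as in the calcHist call above; bin 255 then holds the number of "on" pixels:
// bin 255 holds the count of "on" pixels in the binary image
float onPixels = hist.at<float>(255);
double totalPixels = static_cast<double>(gray.rows) * gray.cols;
double edgeDensity = onPixels / totalPixels;
// an equivalent shortcut without a histogram:
// double edgeDensity = cv::countNonZero(gray) / totalPixels;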

OpenCV vector to Mat but not element->row

There is a very simple way to construct a Mat from a vector...just by doing:
vector<int> myVector;
Mat myMatFromVector(myVector,true); //the boolean is to define if you want to copy the data
The problem with this constructor is that each vector element is placed in its own row of the matrix. What I want is each element of my vector placed in its own column of the matrix.
As is:
vector<int> = [1,2,3,4]
Matrix = [1;2;3;4]
I want:
vector<int> = [1,2,3,4]
Matrix = [1,2,3,4]
Either specify the shape and type of the Matrix and pass the vector data
// constructor for matrix headers pointing to user-allocated data
Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP);
Mat(Size _size, int _type, void* _data, size_t _step=AUTO_STEP);
Or call reshape on the Mat to swap the number of rows and columns (doesn't change any data):
// creates alternative matrix header for the same data, with different
// number of channels and/or different number of rows. see cvReshape.
Mat reshape(int _cn, int _rows=0) const;
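A minimal sketch of both options (the sample data is arbitrary; note that both resulting Mats still point at the vector's data, so clone() if you need an independent copy):
std::vector<int> v = { 1, 2, 3, 4 };
// option 1: wrap the vector's data in a 1 x N single-channel header (no copy)
cv::Mat row1(1, static_cast<int>(v.size()), CV_32SC1, v.data());
// option 2: build the usual N x 1 Mat, then reshape it to one row (no copy either)
cv::Mat row2 = cv::Mat(v).reshape(1, 1);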
The matrix formed by reflecting a matrix through its main diagonal (i.e. interchanging the rows and columns) is called the transpose. Using OpenCV, you can easily obtain the transpose of a matrix A as:
Mat A;
Mat A_transpose = A.t();
If A is [1; 2; 3; 4], A_transpose will be [1, 2, 3, 4] as required.
So, you could either create a transposed copy of your matrix after converting it from the vector, or you could create it easily when subsequently required in your calculations.
Mat A, B;
Mat answer = A.t() * B;

Sum of each column opencv

In Matlab, If A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV?
Using cvReduce has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
More information is available in the OpenCV documentation.
EDIT after 3 years:
The proper function for this is cv::reduce.
Reduces a matrix to a vector.
The function reduce reduces the matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained. For example, the function can be used to compute horizontal and vertical projections of a raster image. In case of REDUCE_MAX and REDUCE_MIN, the output image should have the same type as the source one. In case of REDUCE_SUM and REDUCE_AVG, the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes.
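A minimal sketch with the C++ API (assuming a single-channel 8-bit input; CV_32S is requested for the output so the sums don't saturate):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = (cv::Mat_<uchar>(2, 3) << 1, 2, 3, 4, 5, 6);
    cv::Mat colSum;
    // dim = 0 collapses the rows, producing a single row of per-column sums
    cv::reduce(img, colSum, 0, cv::REDUCE_SUM, CV_32S);
    std::cout << colSum << std::endl;   // [5, 7, 9]
    return 0;
}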
OLD:
I've used the ROI method: move an ROI with the height of the image and a width of 1 from left to right and calculate the mean of each column.
Mat src = imread(filename, 0);
vector<int> graph( src.cols );
for (int c = 0; c < src.cols - 1; c++)
{
    Mat roi = src( Rect( c, 0, 1, src.rows ) );
    graph[c] = int(mean(roi)[0]);
}

Mat mgraph( 260, src.cols + 10, CV_8UC3 );
for (int c = 0; c < src.cols - 1; c++)
{
    line( mgraph, Point(c+5, 0), Point(c+5, graph[c]), Scalar(255,0,0), 1, CV_AA );
}

imshow("mgraph", mgraph);
imshow("source", src);
EDIT:
Just out of curiosity, I've tried resize to height 1 and the result was almost the same:
Mat test;
cv::resize(src, test, Size( src.cols, 1 ));
Mat mgraph1( 260, src.cols + 10, CV_8UC3 );
for (int c = 0; c < test.cols; c++)
{
    graph[c] = test.at<uchar>(0, c);
}
for (int c = 0; c < src.cols - 1; c++)
{
    line( mgraph1, Point(c+5, 0), Point(c+5, graph[c]), Scalar(255,255,0), 1, CV_AA );
}
imshow("mgraph1", mgraph1);
cvSum respects ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My C++ has got a little rusty, so I won't provide a code example, though the last time I did this I used OpenCVSharp and it worked fine. However, I'm not sure how efficient this method is.
My math skills are getting rusty too, but shouldn't it be possible to sum all elements in columns in a matrix by multiplying it by a vector of 1s?
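That does work: left-multiplying by a 1 x rows row of ones produces the per-column sums. A minimal sketch (Mat multiplication requires a floating-point type, so an 8-bit image would need convertTo first; the sample values are arbitrary):
cv::Mat A = (cv::Mat_<float>(2, 3) << 1, 2, 3, 4, 5, 6);
cv::Mat onesRow = cv::Mat::ones(1, A.rows, CV_32F);
cv::Mat colSums = onesRow * A;   // 1 x 3 row: [5, 7, 9]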
For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
uint result[image->width];                        // variable-length array: a compiler extension
memset(result, 0, sizeof(uint) * image->width);   // zero the whole buffer (sizeof(uint), not sizeof(uchar))
for (int col = 0; col < image->width; col++) {
    for (int row = 0; row < image->height; row++) {
        result[col] += imageData[row * imgStep + col];
    }
}
// your desired vector is in result
