Convert several 1D Mat to a single 2D Mat in OpenCV [duplicate]

I have three matrices, each of size 4x1. I want to copy all of these matrices to another matrix of size 4x3 and call it R. Is there a smart way to do it?

You can just use hconcat for horizontal concatenation. You can use it on a pair of matrices, e.g. hconcat( mat1, mat2, R ), or apply it directly to a vector or array of matrices.
Here's a sample code:
vector<Mat> matrices = {
Mat(4, 1, CV_8UC1, Scalar(1)),
Mat(4, 1, CV_8UC1, Scalar(2)),
Mat(4, 1, CV_8UC1, Scalar(3)),
};
Mat R;
hconcat( matrices, R );
cout << R << endl;
Here's the result:
[1, 2, 3;
1, 2, 3;
1, 2, 3;
1, 2, 3]
Similarly, if you want to do it vertically (stack by rows), use vconcat.
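For completeness, here is a minimal sketch of the vertical case (not part of the original answer): with 1x4 row inputs, vconcat stacks them into a 3x4 matrix.
vector<Mat> rows = {
Mat(1, 4, CV_8UC1, Scalar(1)),
Mat(1, 4, CV_8UC1, Scalar(2)),
Mat(1, 4, CV_8UC1, Scalar(3)),
};
Mat S;
vconcat( rows, S ); // S is 3x4: each input becomes one row of S
cout << S << endl;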

You can use
Mat R(3, 4, CV_32F); // [3 rows x 4 cols] with float values
mat1.copyTo(R.row(0)); // note: this assumes mat1 is a 1x4 row; for the 4x1 matrices from the question, copy Mat(mat1.t()) instead
mat2.copyTo(R.row(1));
mat3.copyTo(R.row(2));
or
Mat R(4, 3, CV_32F); // [4 rows x 3 cols] with float values
mat1.copyTo(R.col(0));
mat2.copyTo(R.col(1));
mat3.copyTo(R.col(2));
Alternatively, as @sub_o suggested, you can also use hconcat()/vconcat() to concatenate matrices.

For those using OpenCV in Python: if you have arrays A, B, and C and want an array D that is the horizontal concatenation of them:
D = cv2.hconcat((A, B, C))
There is also a vconcat method.

Related

How to train an SVM on multiple types of features

I want to train an SVM on a data set consisting of features p1, p2, p3. p1 is a vector, while p2 and p3 are integers. For example, p1=[1,2,3], p2=4, p3=5.
X=[p1, p2, p3], but since p1 is itself a vector, X=[ [ 1 , 2 , 3 ], 4 , 5 ], and Y is the output label.
But X can't be passed in this form to
clf.fit(X,Y)
It gives an error like the one below, meaning X cannot be accepted in this form:
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
You basically have two options:
1. Convert your data to a regular format and run a typical SVM kernel. In your case, if p1 always has 3 elements, just flatten the representation, so [[1,2,3],4,5] becomes [1,2,3,4,5] and you are good to go (see the sketch after this list).
2. Implement your own custom kernel function that treats each part separately. Since the sum of two kernels is still a kernel, you can for example define K(x, y) = K([p1, p2, p3], [q1, q2, q3]) := K1(p1, q1) + K2([p2,p3], [q2,q3]). Both K1 and K2 now work on regular vectors, so you can define them however you like and use their sum as your "joint" kernel function. This approach is more complex, but gives you much more freedom in how you deal with your composite data.
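To illustrate option 1 concretely, here is a small C++/OpenCV sketch of the flattening idea (the question uses Python and scikit-learn, so treat this only as an illustration; the values are taken from the question):
// Flatten [p1, p2, p3] into a single 1x5 feature row.
Mat p1 = (Mat_<float>(1, 3) << 1, 2, 3); // the vector feature
Mat p2 = (Mat_<float>(1, 1) << 4);       // first scalar feature
Mat p3 = (Mat_<float>(1, 1) << 5);       // second scalar feature
Mat sample;
hconcat(vector<Mat>{p1, p2, p3}, sample); // sample = [1, 2, 3, 4, 5]
// Each training example becomes one such row of the training matrix.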
Here is a simple example
#include <opencv2/ml.hpp>
#include <iostream>
using namespace cv;
using namespace cv::ml;
using namespace std;
// Set up training data
int labels[4] = { -1, -1, 1, 1}; //Negative and Positive class
Mat labelsMat(4, 1, CV_32SC1, labels);
//training data inputs
float a1 = 1, a2 = 2; //negative
float b1 = 2, b2 = 1; //negative
float c1 = 3, c2 = 4; //positive
float d1 = 4, d2 = 3; //positive
float trainingData[4][2] = {{ a1, a2 },{ b1, b2 },{ c1, c2 },{ d1, d2 }};
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
// Set up SVM's parameters
Ptr<SVM> svm = SVM::create();
svm->setType(SVM::C_SVC);
svm->setKernel(SVM::RBF);
svm->setC(10);
svm->setGamma(0.01);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 500, 1e-6));
// Train the SVM with given parameters
Ptr<TrainData> td = TrainData::create(trainingDataMat, ROW_SAMPLE, labelsMat);
svm->train(td);
To test
float t1 = 2, t2 = 2;
float testing[1][2] = { { t1,t2 } };
Mat testData(1, 2, CV_32FC1, testing);
Mat results;
svm->predict(testData, results);
Mat vec;
results.copyTo(vec);
cout << vec << endl; // predicted label for the test sample

cvCalibrateCamera2 - how to properly define rotation matrix?

I am trying to use cvCalibrateCamera2, but I get an error saying that the rotation matrix is not properly defined:
...calibration.cpp:1495: error: (-5) the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views
I have already tried all the possibilities mentioned in that message, but I still get the error.
My code:
CvMat *object_points = cvCreateMat((int)pp.object_points.size(), 1, CV_32FC3);
CvMat *image_points = cvCreateMat((int)pp.image_points.size(), 1, CV_32FC2);
const CvMat point_counts = cvMat((int)pp.point_counts.size(), 1, CV_32SC1, &pp.point_counts[0]);
for (size_t i=0; i<pp.object_points.size(); i++)
{
object_points->data.fl[i*3+0] = (float)pp.object_points[i].x;
object_points->data.fl[i*3+1] = (float)pp.object_points[i].y;
object_points->data.fl[i*3+2] = (float)pp.object_points[i].z;
image_points->data.fl[i*2+0] = (float)pp.image_points[i].x;
image_points->data.fl[i*2+1] = (float)pp.image_points[i].y;
}
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
cvCalibrateCamera2(object_points, image_points, &point_counts,
cvSize(pp.width, pp.height), camera->m_calib_K,
camera->m_calib_D, tempR, &tempData->m_calib_T,
CV_CALIB_USE_INTRINSIC_GUESS);
// camera->calib_T is defined as:
// double m_calib_T_data[3];
// cvMat(3, 1, CV_64F, camera->m_calib_T_data);
I thought that the rotation output used by cvCalibrateCamera2 should be 1x3 (I then want to use the Rodrigues function to get the 3x3 matrix), but it doesn't work, and neither does any other combination mentioned in the error.
Any ideas?
I am using OpenCV 2.4.0 (maybe there is a bug in that method, but for various reasons I can't use a later version of OpenCV).
I think the error message is fairly clear. I am not very confident with the old C API, but I know it requires the output arrays to be allocated with the right shape.
The problem with the line
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
is that tempR must have one 1x3 row for each of the N views you use. With that in mind, the message becomes clear:
...calibration.cpp:1495: error: (-5) the output array of rotation
vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9
array, where n is the number of views
You must create tempR like this, where N is the number of views:
CvMat* tempR = cvCreateMat(N, 3, CV_32F);
N is the number of calibration views (images); in the code above it corresponds to the number of entries in pp.point_counts, which holds one point count per view, as in the sketch below.
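A rough sketch reusing the asker's variable names (pp is their own structure, so treat this as illustrative only):
// Allocate one 1x3 rotation vector per view.
int N = (int)pp.point_counts.size(); // point_counts has one entry per view
CvMat* tempR = cvCreateMat(N, 3, CV_32F);
// The translation output passed to cvCalibrateCamera2 typically needs the
// same N-row layout, so it may have to be resized in the same way.
CvMat* tempT = cvCreateMat(N, 3, CV_32F);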

OpenCV bitwise_or just adds up two images' pixel value

I tried to use bitwise_or on two CV_8UC1 images, but the result is not what I expected.
In my case, every pixel of image_1 is set to 2 and every pixel of image_2 is set to 3. The output of bitwise_or is an image with every pixel set to 5, while what I expected is every pixel to be 2|3, which I thought should be 2.
Can someone tell me why?
The result of the bitwise or operation between 2 and 3 is 3. You can check it like this:
cout<<(2|3); // The result will be 3
Also, if you do a bitwise_or on two matrices that have all pixels 2 and 3 respectively, you should get a matrix with all its pixels set to 3, like in this example:
Mat m1 = Mat(3, 3, CV_8UC1, Scalar(2));
Mat m2 = Mat(3, 3, CV_8UC1, Scalar(3));
Mat r;
bitwise_or(m1, m2, r);
cout<<r;
Result:
[3, 3, 3;
3, 3, 3;
3, 3, 3]
Do you want to add the two images? If this is the case, you can simply use the + operator, like this:
Mat m1 = Mat(3, 3, CV_8UC1, Scalar(2));
Mat m2 = Mat(3, 3, CV_8UC1, Scalar(3));
Mat r = m1+m2;
cout<<r;
Result:
[5, 5, 5;
5, 5, 5;
5, 5, 5]
Note that the bitwise or is not the same as taking the maximum in general (nor is the bitwise and the same as the minimum); it just happens that 2|3 equals max(2, 3) here.
If the element-wise maximum is what you actually want, OpenCV provides a cv::max() function that calculates it for two matrices of the same size. Here is an example:
Mat a = Mat::ones(3, 3, CV_8UC1) * 2;
Mat b = Mat::ones(3, 3, CV_8UC1) * 100;
cout<<a<<endl<<b<<endl;
Mat max = cv::max(a, b);
cout<<max;
The result is:
a=[2, 2, 2;
2, 2, 2;
2, 2, 2]
b=[100, 100, 100;
100, 100, 100;
100, 100, 100]
max=[100, 100, 100;
100, 100, 100;
100, 100, 100]

OpenCV multiply scalar and matrix

I have been trying to achieve something which should be pretty trivial, and is trivial in Matlab.
Using methods of OpenCV, I want to simply achieve something such as:
cv::Mat sample = [4 5 6; 4 2 5; 1 4 2];
sample = 5*sample;
After which sample should just be:
[20 25 30; 20 10 25; 5 20 10]
I have tried scaleAdd, mul, and multiply; none of them allows a scalar multiplier, and they all require a matrix of the same "size and type". In this scenario I could create a matrix of ones and then use the scale parameter, but that seems very roundabout.
Any direct simple method would be great!
OpenCV does in fact support multiplication by a scalar value with overloaded operator*. You might need to initialize the matrix correctly, though.
float data[] = {1, 2, 3,
4, 5, 6,
7, 8, 9};
cv::Mat m(3, 3, CV_32FC1, data);
m = 3*m; // This works just fine
If you are mainly interested in mathematical operations, cv::Matx is a little easier to work with:
cv::Matx33f mx(1,2,3,
4,5,6,
7,8,9);
mx *= 4; // This works too
For java there is no operator overloading, but the Mat object provides the functionality with a convertTo method.
Mat dst= new Mat(src.rows(),src.cols(),src.type());
src.convertTo(dst,-1,scale,offset);
Doc on this method is here
For big Mats you should use forEach. If C++11 is available (note that m must be of type CV_64F to iterate over it as double):
m.forEach<double>([&](double& element, const int position[]) -> void
{
element *= 5;
}
);
Something like this:
Mat m = (Mat_<float>(3, 3)<<
1, 2, 3,
4, 5, 6,
7, 8, 9)*5;
Mat A = ...; // source matrix (data elided)
Mat B = new Mat(); // destination matrix
Scalar alpha = new Scalar(5); // scale factor
Core.multiply(A, alpha, B);

OpenCV vector to Mat but not element->row

There is a very simple way to construct a Mat from a vector...just by doing:
vector<int> myVector;
Mat myMatFromVector(myVector,true); //the boolean is to define if you want to copy the data
The problem with this constructor is that each element of the vector is placed in its own row of the matrix. What I want is each element of my vector to be placed in its own column of the matrix.
As is:
vector<int> = [1,2,3,4]
Matrix = [1;2;3;4]
I want:
vector<int> = [1,2,3,4]
Matrix = [1,2,3,4]
Either specify the shape and type of the matrix and pass the vector data:
// constructor for matrix headers pointing to user-allocated data
Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP);
Mat(Size _size, int _type, void* _data, size_t _step=AUTO_STEP);
Or call reshape on the Mat to swap the number of rows and columns (it doesn't change any data):
// creates alternative matrix header for the same data, with different
// number of channels and/or different number of rows. see cvReshape.
Mat reshape(int _cn, int _rows=0) const;
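For example, a small sketch assuming a vector<int> as in the question; both routes give a 1xN row matrix:
vector<int> v = {1, 2, 3, 4};
// Option 1: wrap the vector's data directly in a 1xN header (no copy).
Mat rowHeader(1, (int)v.size(), CV_32SC1, v.data());
// Option 2: copy into an Nx1 Mat, then reshape it into a single row
// (reshape only changes the header, not the data).
Mat rowCopy = Mat(v, true).reshape(1, 1); // [1, 2, 3, 4]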
The matrix formed by reflecting a matrix through its main diagonal (i.e. interchanging the rows and columns) is called the transpose. Using OpenCV, you can easily obtain the transpose of a matrix A as:
Mat A;
Mat A_transpose = A.t();
If A is [1; 2; 3; 4], A_transpose will be [1, 2, 3, 4] as required.
So, you could either create a transposed copy of your matrix after converting it from the vector, or you could create it easily when subsequently required in your calculations.
Mat A, B;
Mat answer = A.t() * B;
