How can I assign values to an OpenCV matrix Mat?

For example, I have a 2-by-3 matrix [1, 0, 5; 1, 0, -5] and a Mat trans_mat(2, 3, CV_32FC1).
How can I assign those values to the trans_mat matrix?

Mat trans_mat( 2, 3, CV_32FC1);
trans_mat = (Mat_<float>(2, 3) << 1, 0, 5, 1, 0, -5);
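If the values already live in a plain array, another option (a minimal sketch, not the only way; trans_mat2 is just an illustrative name) is to wrap that array in a Mat, or to set individual elements with at<float>():
float data[2][3] = { {1, 0, 5}, {1, 0, -5} };
Mat trans_mat2 = Mat(2, 3, CV_32FC1, data).clone(); // clone() so the Mat owns its own copy of the data
// or set single elements directly:
trans_mat.at<float>(0, 2) = 5.f;
trans_mat.at<float>(1, 2) = -5.f;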

Related

How to compute the divergence and gradient of an image in OpenCV?

I know that to implement the following equation I would use this code:
Mat o_k;
Mat Lapl;
double lambda;
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Lapl = 1.0 - 2.0*lambda*Lapl;
However, I am trying to implement in OpenCV the following equation:
I know the div, or divergence, term would be like this, right?
int ksize = parser.get<int>("ksize");
int scale = parser.get<int>("scale");
int delta = parser.get<int>("delta");
Mat sobelx, sobely, div;
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
div = sobelx + sobely;
where res is the result of the term in parentheses. But how do I get the term in parentheses?
Or am I doing this wrong? Would div above actually be equal to the gradient of res? If so, then how do I get the divergence?
EDIT:
According to this link, the magnitude can also be computed as mag = abs(x) + abs(y): https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html#sobel-derivatives
And since the div of a gradient is the Laplacian, would the below code be equivalent to the 2nd equation?
Mat sobelx, sobely, abs_grad_x, abs_grad_y;
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
convertScaleAbs( sobelx, abs_grad_x );
convertScaleAbs( sobely, abs_grad_y );
/// Total Gradient (approximate)
Mat mag;
addWeighted( abs_grad_x, 1, abs_grad_y, 1, 0, mag);
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Mat top;
top = lambda * Lapl;
Mat result;
divide(top, mag, result);
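For what it's worth, the divergence is defined for a vector field rather than a scalar image: for a field (px, py) it is d(px)/dx + d(py)/dy, so sobelx + sobely of a scalar image is the sum of its gradient components, not a divergence. A minimal sketch, assuming the field in question is the gradient of res (in which case its divergence reduces to the Laplacian, consistent with the edit above):
Mat px, py, dpx_dx, dpy_dy, divergence;
Sobel(res, px, CV_64F, 1, 0, ksize);   // px = d(res)/dx
Sobel(res, py, CV_64F, 0, 1, ksize);   // py = d(res)/dy
Sobel(px, dpx_dx, CV_64F, 1, 0, ksize); // d(px)/dx
Sobel(py, dpy_dy, CV_64F, 0, 1, ksize); // d(py)/dy
divergence = dpx_dx + dpy_dy;          // agrees with Laplacian(res) up to the choice of derivative kernel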

OpenCV: subtract same BGR values from all pixels

I have some BGR image:
cv::Mat image;
I want to subtract from all the pixels in the image the vector:
[10, 103, 196]
Meaning that the blue channel for all the pixels will be reduced by 10, the green by 103 and the red by 196.
Is there a standard way to do that, or should I run for loops over all the channels and all the pixels?
Suppose we have an image in which all channels are filled with zeros and, for instance, its dimensions are 2x3:
cv::Mat image = cv::Mat::zeros(2,3,CV_32SC3);
The output will be:
[0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0]
Then, if we want to add or subtract a constant per channel, we can use cv::Scalar.
1- Suppose we want to add 3 to the blue channel:
image = image + Scalar(3,0,0); // the result is the same as image = image + 3;
With the above code our matrix is now:
[3, 0, 0, 3, 0, 0, 3, 0, 0;
3, 0, 0, 3, 0, 0, 3, 0, 0]
2- If you want to add to another channel you can use the second, third (or fourth) argument of cv::Scalar, like below:
image = image + Scalar(3,2,-3);
The output will be:
[3, 2, -3, 3, 2, -3, 3, 2, -3;
3, 2, -3, 3, 2, -3, 3, 2, -3]
Using cv::subtract
cv::Mat image = cv::Mat::zeros(2,3,CV_32SC3);
subtract(image,Scalar(2,3,1),image);
Output:
[-2, -3, -1, -2, -3, -1, -2, -3, -1;
-2, -3, -1, -2, -3, -1, -2, -3, -1]
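Applied to the BGR vector from the original question, a one-liner along these lines should work (for an 8-bit image the results saturate at 0 instead of going negative):
// subtract 10 from blue, 103 from green, 196 from red, for every pixel
cv::subtract(image, cv::Scalar(10, 103, 196), image);
// or equivalently
image -= cv::Scalar(10, 103, 196);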

OpenCV bitwise_or just adds up two images' pixel value

I tried to use bitwise_or on two CV_8UC1 images, but the result is not what I expected.
In my case, every pixel value in image_1 is set to 2 and every pixel value in image_2 is set to 3. The output of bitwise_or is an image with every pixel value set to 5, while what I expected was every pixel value to be 2|3, which should be 2.
Can someone tell me why?
The result of the bitwise or operation between 2 and 3 is 3. You can check it like this:
cout<<(2|3); // The result will be 3
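Looking at the bit patterns makes this clearer; a small standalone check (plain C++, no OpenCV involved):
// requires #include <bitset> and #include <iostream>
// 2 = 0010 and 3 = 0011 in binary, so OR-ing them bit by bit gives 0011 = 3
std::cout << std::bitset<4>(2) << " | " << std::bitset<4>(3)
          << " = " << std::bitset<4>(2 | 3); // prints 0010 | 0011 = 0011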
Also, if you do a bitwise_or on two matrices that have all pixels 2 and 3 respectively, you should get a matrix with all its pixels set to 3, like in this example:
Mat m1 = Mat(3, 3, CV_8UC1, Scalar(2));
Mat m2 = Mat(3, 3, CV_8UC1, Scalar(3));
Mat r;
bitwise_or(m1, m2, r);
cout<<r;
Result:
[3, 3, 3;
3, 3, 3;
3, 3, 3]
Do you want to add the two images? If this is the case, you can simply use the + operator, like this:
Mat m1 = Mat(3, 3, CV_8UC1, Scalar(2));
Mat m2 = Mat(3, 3, CV_8UC1, Scalar(3));
Mat r = m1+m2;
cout<<r;
Result:
[5, 5, 5;
5, 5, 5;
5, 5, 5]
For single-bit (0/1) values, the equivalent of the OR operation is the maximum operation (and the equivalent of AND is the minimum). Note that this does not hold for arbitrary integers, e.g. 1|2 = 3 while max(1,2) = 2.
If this is what you want, OpenCV provides a cv::max() function that calculates the elementwise maximum from two matrices of the same size. Here is an example:
Mat a = Mat::ones(3, 3, CV_8UC1) * 2;
Mat b = Mat::ones(3, 3, CV_8UC1) * 100;
cout<<a<<endl<<b<<endl;
Mat max = cv::max(a, b);
cout<<max;
The result is:
a=[2, 2, 2;
2, 2, 2;
2, 2, 2]
b=[100, 100, 100;
100, 100, 100;
100, 100, 100]
max=[100, 100, 100;
100, 100, 100;
100, 100, 100]

OpenCV Error: Bad argument <Unknown array type> in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697

I'm currently trying to rectify stereo cameras to create a disparity map. Unfortunately, I'm having trouble getting past the stereo rectification step because I keep receiving the error
"OpenCV Error: Bad argument in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697."
The process is complicated by the fact that I'm not the one who calibrated the cameras, nor do I have access to the cameras used to record the videos. I was given all of the calibration parameters (intrinsics, distortion coefficients, rotation matrix, and translation vector). As you can see, I've tried to turn these directly into CvMats and use them that way, but I get an error when I try to actually use them.
Thanks in advance.
CvMat li, lm, ri, rm, r, t, Rl, Rr, Pl, Pr;
double init_li[3][3] =
{ {477.984984743, 0, 316.17458671},
{0, 476.861945645, 253.45073026},
{0, 0 ,1} };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[3][3] =
{{478.640315323, 0, 299.957994781},
{0, 477.898896505, 251.665771947},
{0, 0, 1}};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[3][3] =
{{0.999973709051976, 0.00129700728791757, -0.00713435189275776},
{-0.00132096594266573, 0.999993501087837, -0.00335452397041856},
{0.00712995468519435, 0.00336386001267643, 0.99996892361313}};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cvInitMatHeader(&li, 3, 3, CV_64FC1, init_li);
cvInitMatHeader(&lm, 5, 1, CV_64FC1, init_lm);
cvInitMatHeader(&ri, 3, 3, CV_64FC1, init_ri);
cvInitMatHeader(&rm, 5, 1, CV_64FC1, init_rm);
cvInitMatHeader(&r, 3, 3, CV_64FC1, init_r);
cvInitMatHeader(&t, 3, 1, CV_64FC1, init_t);
cvInitMatHeader(&Rl, 3,3, CV_64FC1);
cvInitMatHeader(&Rr, 3,3, CV_64FC1);
cvInitMatHeader(&Pl, 3,4, CV_64FC1);
cvInitMatHeader(&Pr, 3,4, CV_64FC1);
// frame is a cv::Mat holding the first frame of the video.
CvSize imageSize = frame.size();
imageSize.width /= 2;
//IT BREAKS HERE
cvStereoRectify(&li, &ri, &lm, &rm, imageSize, &r, &t, &Rl, &Rr, &Pl, &Pr);
So, you've been bitten by the C API? Why don't you just turn your back on it?
Use the C++ API whenever possible; please don't start learning OpenCV with the old (1.0), deprecated API!
double init_li[9] =
{ 477.984984743, 0, 316.17458671,
0, 476.861945645, 253.45073026,
0, 0 ,1 };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[9] =
{ 478.640315323, 0, 299.957994781,
0, 477.898896505, 251.665771947,
0, 0, 1};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[9] =
{ 0.999973709051976, 0.00129700728791757, -0.00713435189275776,
-0.00132096594266573, 0.999993501087837, -0.00335452397041856,
0.00712995468519435, 0.00336386001267643, 0.99996892361313};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cv::Mat li(3, 3, CV_64FC1, init_li);
cv::Mat lm(5, 1, CV_64FC1, init_lm);
cv::Mat ri(3, 3, CV_64FC1, init_ri);
cv::Mat rm(5, 1, CV_64FC1, init_rm);
cv::Mat r(3, 3, CV_64FC1, init_r);
cv::Mat t(3, 1, CV_64FC1, init_t);
cv::Mat Rl, Rr, Pl, Pr, Q; // note: the output matrices need no initialization.
// frame is a cv::Mat holding the first frame of the video.
cv::Size imageSize = frame.size();
imageSize.width /= 2;
//IT won't break HERE
// note the argument order of the C++ API: camera matrix and distortion
// coefficients are interleaved, and the disparity-to-depth matrix Q is required.
cv::stereoRectify(li, lm, ri, rm, imageSize, r, t, Rl, Rr, Pl, Pr, Q);
// no need ever to release or care about anything
Ok, so I figured out the answer. The problem was that I had only initialized headers for Rl, Rr, Pl, and Pr, but no memory was allocated for the data itself. I was able to fix it as follows:
double init_Rl[3][3];
double init_Rr[3][3];
double init_Pl[3][4];
double init_Pr[3][4];
cvInitMatHeader(&Rl, 3,3, CV_64FC1, init_Rl);
cvInitMatHeader(&Rr, 3,3, CV_64FC1, init_Rr);
cvInitMatHeader(&Pl, 3,4, CV_64FC1, init_Pl);
cvInitMatHeader(&Pr, 3,4, CV_64FC1, init_Pr);
Although I have a theory that I might have been able to use cv::stereoRectify with cv::Mats as parameters, which would have made life much easier. I don't know whether cv::stereoRectify exists, but it seems that C++ versions of many of the other C functions live in the cv namespace. In case it's hard to tell, I'm very new to OpenCV.
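As an aside, another common way to get allocated output matrices while staying with the C API is cvCreateMat, which allocates both the header and the data (the variables then become CvMat* rather than CvMat and must be released with cvReleaseMat); a minimal sketch:
CvMat* Rl = cvCreateMat(3, 3, CV_64FC1);
CvMat* Rr = cvCreateMat(3, 3, CV_64FC1);
CvMat* Pl = cvCreateMat(3, 4, CV_64FC1);
CvMat* Pr = cvCreateMat(3, 4, CV_64FC1);
cvStereoRectify(&li, &ri, &lm, &rm, imageSize, &r, &t, Rl, Rr, Pl, Pr);
// ... use the results ...
cvReleaseMat(&Rl); cvReleaseMat(&Rr); cvReleaseMat(&Pl); cvReleaseMat(&Pr);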

Multiply a CvMat* and a number

I'm working with some filters in OpenCV and don't know how to multiply the kernel by a number (1/5) in this example:
CvMat* kernel=0;
IplImage* dst = cvCreateImage(cvGetSize( entrada ), IPL_DEPTH_8U, 3);
kernel = cvCreateMat(3, 3,CV_32FC1);
cvSet2D( kernel, 0, 0, cvRealScalar(1));
cvSet2D( kernel, 0, 1, cvRealScalar(1));
cvSet2D( kernel, 0, 2, cvRealScalar(1));
cvSet2D( kernel, 1, 0, cvRealScalar(1));
cvSet2D( kernel, 1, 1, cvRealScalar(2));
cvSet2D( kernel, 1, 2, cvRealScalar(1));
cvSet2D( kernel, 2, 0, cvRealScalar(1));
cvSet2D( kernel, 2, 1, cvRealScalar(1));
cvSet2D( kernel, 2, 2, cvRealScalar(1));
// Kernel used for high-pass filtering:
// 1 1 1
// 1 2 1
// 1 1 1
cvFilter2D(entrada, dst, kernel, cvPoint(-1,-1));
What about cvScale(src, dst, scale), with scale being the number and src/dst whatever matrix you want to multiply? If you want to multiply the kernel, what about just initializing the kernel with the already-multiplied values?
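Applied to the kernel above, a minimal sketch of the first suggestion (cvScale is an alias for cvConvertScale, and scaling a CV_32FC1 matrix in place should be fine):
cvScale(kernel, kernel, 1.0 / 5.0);                // multiply every kernel element by 1/5
cvFilter2D(entrada, dst, kernel, cvPoint(-1, -1)); // then run the filter as before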
