I have a non-square matrix in OpenCV.
I want to calculate its rank.
I understood you need to do an SVD decomposition and count the rows of one of its parts, or something? Not sure...
I could really use a code example in OpenCV (C/C++), because there is too much room for me to make errors...
I found this thread... opencv calculate matrix rank
But it has no code example...
So if there is no code example, maybe you could explain the steps to find the rank of a non-square matrix in OpenCV?
As mentioned here, you need to find the number of non-zero singular values. So, first compute the singular values with an SVD decomposition, and then count the non-zero ones. You may need to apply a small threshold to account for numeric errors:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Your matrix
    Mat1d M = (Mat1d(4, 5) << 1, 0, 0, 0, 2,
                              0, 0, 3, 0, 0,
                              0, 0, 0, 0, 0,
                              0, 2, 0, 0, 0);

    // Compute the SVD
    Mat1d w, u, vt;
    SVD::compute(M, w, u, vt);

    // w holds the singular values.
    // Find the non-zero ones, using a small threshold
    // to account for numeric errors.
    Mat1b nonZeroSingularValues = w > 0.0001;

    // The rank is the number of non-zero singular values
    int rank = countNonZero(nonZeroSingularValues);

    return 0;
}
I am trying to create a 2D perspective transform matrix from individual components like translation, rotation, scale, and shear. But in the end the matrix does not produce a true perspective effect like the image below. I think I am missing some component in the code that I wrote to create the matrix. Could someone help me add the missing components and their formulation in the function below? I have used the OpenCV library for my code.
cv::Mat getPerspMatrix2D( double rz, double s, double tx, double ty, double shx, double shy )
{
    // Rotation about the z axis
    cv::Mat R = (cv::Mat_<double>(3,3) <<
        cos(rz), -sin(rz), 0,
        sin(rz),  cos(rz), 0,
              0,        0, 1);

    // Uniform scale
    cv::Mat S = (cv::Mat_<double>(3,3) <<
        s, 0, 0,
        0, s, 0,
        0, 0, 1);

    // Shear
    cv::Mat Sh = (cv::Mat_<double>(3,3) <<
          1, shx, 0,
        shy,   1, 0,
          0,   0, 1);

    // Translation
    cv::Mat T = (cv::Mat_<double>(3,3) <<
        1, 0, tx,
        0, 1, ty,
        0, 0, 1);

    return T * Sh * S * R;
}
Keywords are homography and 8 DOF. As taken from 1 and 2, there exist two coefficients for perspective transformation, but a second step is needed to compute the result. I'm not familiar with OpenCV, but I hope to answer your question, a bit late, in a basic way ;-)
Step 1
You can imagine lx as describing a vanishing point on the x axis. The image shows a31 = lx = 1; lx = 100 gives less transformation, and for lx = 0 the vanishing point is infinitely far away, which means no perspective transform (the identity matrix).

     [ 1  0  0]
PL = [ 0  1  0]
     [lx ly  1]

lx/ly are the perspective foreshortening parameters.
Step 2
When you apply a right-hand matrix multiplication P x [u; v; 1], you will notice that the last value of the result is sometimes other than 1. For an affine transformation it is always 1; for a perspective projection it is not. In this second step the result is scaled to make the last coefficient 1. This is a part of the effect.
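A minimal sketch of that normalization, assuming a 3x3 cv::Mat P of doubles and an input point (u, v) (the names are illustrative):

cv::Mat p = P * (cv::Mat_<double>(3, 1) << u, v, 1);
double w = p.at<double>(2);       // last coefficient, often != 1 for perspective
double x = p.at<double>(0) / w;   // scale so that the last
double y = p.at<double>(1) / w;   // coefficient becomes 1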
Your Example Image
Image' = P4 x P3 x P2 x P1 x Image
I would translate the center of the blue rectangle to the origin: tx = -w/2 and ty = -h/2 (= P1).
Apply the projective projection with ly = h (to put both sides at an angle) (= P2).
Eventually translate back so that all points are located in one quadrant (= P3).
Eventually scale to the desired size (= P4).
The normalization from Step 2 of the perspective projection can be done right after the projection or at the end; a sketch of the whole composition follows below.
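A minimal sketch of that composition, assuming cv::warpPerspective applies the combined matrix; w and h are the blue rectangle's size, and the ly value is an illustrative assumption to be tuned:

double ly = 0.001; // foreshortening strength (assumed value)
cv::Mat P1 = (cv::Mat_<double>(3,3) << 1, 0, -w/2.0,   0, 1, -h/2.0,   0, 0, 1);
cv::Mat P2 = (cv::Mat_<double>(3,3) << 1, 0, 0,        0, 1, 0,        0, ly, 1);
cv::Mat P3 = (cv::Mat_<double>(3,3) << 1, 0, w/2.0,    0, 1, h/2.0,    0, 0, 1);
cv::Mat P  = P3 * P2 * P1; // add a scale P4 on the left if needed
cv::warpPerspective(image, result, P, image.size()); // does the division by w internally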
I am trying to pre-multiply a homography matrix before I send it to the warpPerspective function, but I cannot figure out how to do this. I am trying to use gemm for multiplying the matrices. Also, how do you specify an element (like HomOffset(0,0)) of a matrix object and then multiply it by a scalar? I have been reading the OpenCV documentation but did not come across this. The code is below. Thanks in advance.
cv::Mat Hom = cv::findHomography(scene, obj, CV_RANSAC);
cv::Mat HomOffset[3][3] = {
    { 1, 0, 25 },
    { 0, 1, 25 },
    { 0, 0, 1 }
};
The declaration of HomOffset gives the error: conversion from int to cv::Mat is ambiguous.
gemm(Hom, HomOffset, 1, 0, 0, H);
And I get multiple errors for the gemm function.
You need to assign your matrix's (HomOffset) values correctly: declare it as a single cv::Mat, not an array of them, and set the elements with the at operator (see it here).
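A minimal sketch (the offset values come from the question; findHomography returns a CV_64F matrix):

cv::Mat HomOffset = cv::Mat::eye(3, 3, CV_64F);
HomOffset.at<double>(0, 2) = 25;  // set a single element with at<>()
HomOffset.at<double>(1, 2) = 25;
cv::Mat H = Hom * HomOffset;      // operator* works here; gemm is not required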
I've just tested Sobel using the C API and the C++ API, but why are the results different? All the parameters I used are the same.
Output - using C API
Output - using C++ API
C API :
/// Generate grad_x
grad_x = cvCreateImage(cvGetSize(grayImg), IPL_DEPTH_16S, 1);
abs_grad_x = cvCreateImage(cvGetSize(grayImg), 8, 1);
/// Gradient X
cvSobel(grayImg, grad_x, 1, 0, 3);
cvConvertScaleAbs(grad_x, abs_grad_x);
cvThreshold(abs_grad_x, abs_grad_x, 0, 255, CV_THRESH_BINARY|CV_THRESH_OTSU);
C++ API :
cv::Mat img_sobel;
cv::Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, BORDER_DEFAULT);
Mat img_threshold;
threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);
There is only one reason for the different results: the data type!
In the C version, you are creating grad_x with IPL_DEPTH_16S depth, so each pixel has a short data type. This increases the precision of the results you get when calling cvSobel: cvSobel can accommodate a wider range of values (-32768 to 32767) in the result grad_x.
In the C++ version, you are not pre-allocating the matrix, and you specify the destination type CV_8U. cv::Sobel internally creates a destination matrix of type CV_8U, calculates the results, and then clamps them to the range of the destination data type, i.e. 0 to 255. So all the negative values become 0.
To get the same results in the C version, change IPL_DEPTH_16S to IPL_DEPTH_8U.
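Alternatively, if you would rather keep the extra precision and make the C++ version match the 16-bit C pipeline, here is a sketch mirroring the C code above:

cv::Mat grad_x, abs_grad_x, img_threshold;
cv::Sobel(img_gray, grad_x, CV_16S, 1, 0, 3);  // keep negative gradients
cv::convertScaleAbs(grad_x, abs_grad_x);       // |gradient|, scaled back to 8 bits
cv::threshold(abs_grad_x, img_threshold, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);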
Change the last line of the C++ code from
threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);
to
threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU | CV_THRESH_BINARY);
I'm trying to create my own Sobel edge detection based on the gx and gy matrices below, applied to the three channels in my code further down.
[[0,1,2],
[-1,0,1],
[-2,-1,0]]
and
[[-2,-1,0],
[-1,0,1],
[0,1,2]]
I edited the variables j and i in my code further down, but it is not working. How can I create a Sobel edge detection on those three channels?
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdlib> // rand(), srand()

void salt(cv::Mat &image, int n) {
    int i, j;
    for (int k = 0; k < n; k++) {
        // rand() is the MFC random number generator
        i = rand() % image.cols;
        j = rand() % image.rows;
        if (image.channels() == 1) { // gray-level image
            image.at<uchar>(j, i) = 255;
        } else if (image.channels() == 3) { // color image
            image.at<cv::Vec3b>(j, i)[0] = 255;
            image.at<cv::Vec3b>(j-1, i-1)[1] = 255;
            image.at<cv::Vec3b>(j, i-1)[2] = 255;
        }
    }
}

int main()
{
    srand(cv::getTickCount()); // init random number generator
    cv::Mat image = cv::imread("space.jpg", 0);
    salt(image, 3000);
    cv::namedWindow("Image");
    cv::imshow("Image", image);
    cv::imwrite("salted.bmp", image);
    cv::waitKey(5000);
    return 0;
}
I'm a little confused by the question, because it asks about Sobel filters, but the function you provided adds noise to an image.
To start with, here is the Sobel function, which will call the classic sobel functions (that will calculate dx and dy gradients).
Secondly, there is the more generic filter2D, which will let you apply an arbitrary kernel (like the ones you created in the question).
Lastly, if you want to apply a different kernel to each channel or band, you can do as the filter2D documentation implies: call split on the image, call filter2D on each channel, and then combine the values into a single image using the matrix operators; a sketch follows below.
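A minimal sketch of that per-channel approach, assuming a 3-channel cv::Mat named image and using the first kernel from the question (swap in a different kernel per band as needed):

std::vector<cv::Mat> bands;
cv::split(image, bands);  // one single-channel Mat per band
cv::Mat gx = (cv::Mat_<float>(3, 3) <<  0,  1, 2,
                                       -1,  0, 1,
                                       -2, -1, 0);
for (size_t b = 0; b < bands.size(); ++b)
    cv::filter2D(bands[b], bands[b], CV_32F, gx);  // filter each band
cv::Mat filtered;
cv::merge(bands, filtered);  // recombine into a single 3-channel image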
The most complicated thing I think you could be asking is how to find the locations of the salt you added to the image, and the answer would be to make a kernel for each band like so:
band 0:
[[ 0, 0, 0],
[ 0, 1, 0],
[ 0, 0, 0]]
band 1:
[[ 1, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]]
band 2:
[[ 0, 1, 0],
[ 0, 0, 0],
[ 0, 0, 0]]
Be sure to put the anchor in the center of the kernel, at (1,1).
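For example, when calling filter2D you can pass the anchor explicitly (the variable names here are illustrative):

cv::filter2D(bands[1], response, CV_32F, band1Kernel, cv::Point(1, 1));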
I have several questions:
In the example given in the OpenCV documentation:
/* generate measurement */
cvMatMulAdd( kalman->measurement_matrix, state, measurement, measurement );
Is this correct?
In the tutorial An Introduction to the Kalman Filter by Welch and Bishop, Equation 1.2 says measurement = H*state + measurement noise.
These don't seem to be the same.
I was trying to implement bouncing ball tracking for a single ball.
I tried the following (please point out if I am doing it incorrectly):
For the measurement I am measuring two things: (a) the x and (b) the y coordinate of the centroid of the ball.
I am only mentioning the lines that differ from the example given in the OpenCV documentation.
CvKalman* kalman = cvCreateKalman( 5, 2, 0 );
const float A[] = { 1, 0, 1, 0, 0,
0, 1, 0, 1, 0,
0, 0, 1, 0, 0,
0, 0, 0, 1, 1,
0, 0, 0, 0, 1};
CvMat* state = cvCreateMat( 5, 1, CV_32FC1 );
CvMat* measurement = cvCreateMat( 2, 1, CV_32FC1 );
// initialize the state of the Kalman filter
state->data.fl[0] = mean_c;
state->data.fl[1] = mean_r;
state->data.fl[2] = mean_c - prev_mean_c;
state->data.fl[3] = mean_r - prev_mean_r;
state->data.fl[4] = 9.81;
After initialization, this is the line that crashes:
cvMatMulAdd( kalman->transition_matrix, state,
kalman->process_noise_cov, state );
In that line they just use the variable measurement to store the noise; see the preceding line:
cvRandArr( &rng, measurement, CV_RAND_NORMAL, cvRealScalar(0),cvRealScalar(sqrt(kalman->measurement_noise_cov->data.fl[0])) );
You should change the dimensions of the H matrix as well. It must be 2 by 5 (measurements by state variables) to make it possible to calculate H*state + measurement noise. You probably get an error in the line
memcpy( cvkalman->measurement_matrix->data.fl, H, sizeof(H));
because in the initial example cvkalman->measurement_matrix and H are allocated as 4 by 4 matrices, and you decreased the dimensions of cvkalman->measurement_matrix only, to 2 by 5 (4*4 is more than 2*5).
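As an illustrative sketch (the identity-like layout is an assumption for measuring only the x and y components of the state):

// For cvCreateKalman(5, 2, 0), measurement_matrix is 2x5:
// 2 measurements (x, y) by 5 state variables.
const float H[] = { 1, 0, 0, 0, 0,
                    0, 1, 0, 0, 0 };
memcpy( kalman->measurement_matrix->data.fl, H, sizeof(H) );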