Center of gravity of optical flow cluster - opencv

I need to find the center of gravity of the optical flow vectors. I applied the OpenCV Lucas Kanade function and can visually see the optical flow vectors. Now how do I cluster these vectors and find their center of gravity? Finding the location where the flow vectors are clustered is what I want to achieve.
I get the vectors as Point2f previous points and next points. I am not sure how to cluster these vectors. If I use the kmeans function, what should be the structure of the Mat samples?
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers);
Thanks.

It depends on what results you want to achieve. If you want to cluster pixels that move in the same way, you should compute the motion as the difference between the next and previous points. The code would look like this:
std::vector<cv::Point2f> prevPts, currPts;
... run lucas kanade ...
// samples: one row per tracked point, two columns holding the x and y displacement
cv::Mat samples((int)prevPts.size(), 2, CV_32FC1);
for(unsigned int n = 0; n < prevPts.size(); n++)
{
    samples.at<float>(n, 0) = currPts[n].x - prevPts[n].x;
    samples.at<float>(n, 1) = currPts[n].y - prevPts[n].y;
}
... run clustering
This is a global approach. But in most cases you also need to take the position into account. Then you have to consider other segmentation methods, or add the position as additional dimensions.
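To make that concrete, here is a minimal sketch (reusing prevPts/currPts from the snippet above) that builds samples with the positions as two extra columns, runs kmeans, and takes the centre of gravity of the largest cluster. clusterCount and attempts are placeholders you would tune for your scene:
int clusterCount = 3, attempts = 5;   // placeholders
cv::Mat samples((int)prevPts.size(), 4, CV_32FC1);
for(unsigned int n = 0; n < prevPts.size(); n++)
{
    samples.at<float>(n, 0) = currPts[n].x - prevPts[n].x;   // motion
    samples.at<float>(n, 1) = currPts[n].y - prevPts[n].y;
    samples.at<float>(n, 2) = prevPts[n].x;                  // position (consider scaling these
    samples.at<float>(n, 3) = prevPts[n].y;                  //  so they don't dominate the motion)
}
cv::Mat labels, centers;
cv::kmeans(samples, clusterCount, labels,
           cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.0001),
           attempts, cv::KMEANS_PP_CENTERS, centers);
// Centre of gravity of the largest cluster = mean position of its points.
std::vector<cv::Point2f> cog(clusterCount, cv::Point2f(0, 0));
std::vector<int> counts(clusterCount, 0);
for(int n = 0; n < samples.rows; n++)
{
    int c = labels.at<int>(n);
    cog[c] += prevPts[n];
    counts[c]++;
}
int best = 0;
for(int c = 1; c < clusterCount; c++)
    if(counts[c] > counts[best]) best = c;
if(counts[best] > 0)
    cog[best] *= 1.0f / counts[best];   // cog[best] is the centre of gravity you are after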

Related

stereo rectification with measured extrinsic parameters

I am trying to rectify two sequences of images for stereo matching. The usual approach of using stereoCalibrate() with a checkerboard pattern is not of use to me, since I am only working with the footage.
What I have is the correct calibration data of the individual cameras (camera matrix and distortion parameters) as well as measurements of their distance and angle between each other.
How can I construct the rotation matrix and translation vector needed for stereoRectify()?
The naive approach of using
Mat T = (Mat_<double>(3,1) << distance, 0, 0);
Mat R = (Mat_<double>(3,3) << cos(angle), 0, sin(angle), 0, 1, 0, -sin(angle), 0, cos(angle));
resulted in a heavily warped image. Do these matrices need to relate to a different origin point I am not aware of? Or do I need to convert the distance/angle values somehow to make them dependent on the pixel size?
Any help would be appreciated.
It's not clear whether you have enough information about the camera poses to perform an accurate rectification.
Both T and R are measured in 3D, but in your case:
T is one-dimensional (along the x-axis only), which means that you are confident that the two cameras are perfectly aligned along the other axes (in particular, you have less than 1 pixel of error on the y axis, i.e. a few microns by today's standards);
R leaves the Y coordinates untouched, so all you have is a rotation around this axis; does that match your experimental setup?
Finally, you need to check that the units you are using for the translation and rotation are consistent with the units of the intrinsic data.
If it is feasible, you can check your results by finding some matching points between the two cameras and proceeding to a projective calibration: accurate knowledge of the 3D positions of the calibration points is required for metric reconstruction only. Other tasks rely on the essential or fundamental matrices, which can be computed from image-to-image point correspondences.
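For reference, a minimal sketch of that last suggestion, assuming you already have matched points between the two views (pts1/pts2 are placeholders for your own matches):
std::vector<cv::Point2f> pts1, pts2;   // matched pixel coordinates from the two cameras
// ... fill pts1 and pts2 from your feature matching ...
cv::Mat F = cv::findFundamentalMat(pts1, pts2, CV_FM_RANSAC, 3.0, 0.99);
// With known intrinsics K1 and K2, the essential matrix follows as E = K2.t() * F * K1.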
If intrinsics and extrinsics are known, I recommend this method: http://link.springer.com/article/10.1007/s001380050120#page-1
It is easy to implement. Basically you rotate the right camera until both cameras have the same orientation, meaning both share a common R. The epipoles are then mapped to infinity and you get epipolar lines parallel to the image x-axis.
The first row of the new R (x) is simply the baseline, i.e. the difference of the two camera centers. The second row (y) is the cross product of the baseline with the old left z-axis. The third row (z) is the cross product of the first two rows.
Finally you need to compute a 3x3 homography as described in the link above and use warpPerspective() to get a rectified version.
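A condensed sketch of that construction (not the paper's exact code; K1/K2 are the intrinsics, R1/R2 and c1/c2 the measured rotations and centres of the two cameras, all CV_64F, and the sign of the new y-axis may need flipping for your setup):
void rectifyFromPose(const cv::Mat& K1, const cv::Mat& R1, const cv::Vec3d& c1,
                     const cv::Mat& K2, const cv::Mat& R2, const cv::Vec3d& c2,
                     cv::Mat& H1, cv::Mat& H2)
{
    cv::Vec3d x = cv::normalize(c2 - c1);                        // new x-axis: the baseline
    cv::Vec3d zOld(R1.at<double>(2,0), R1.at<double>(2,1), R1.at<double>(2,2));
    cv::Vec3d y = cv::normalize(zOld.cross(x));                  // new y-axis
    cv::Vec3d z = x.cross(y);                                    // new z-axis
    cv::Mat Rnew = (cv::Mat_<double>(3,3) << x[0], x[1], x[2],
                                             y[0], y[1], y[2],
                                             z[0], z[1], z[2]);
    // Homographies that map the original images into the common rectified frame.
    H1 = K1 * Rnew * R1.t() * K1.inv();
    H2 = K2 * Rnew * R2.t() * K2.inv();
}
// Then: cv::warpPerspective(img1, rect1, H1, img1.size()); and the same for the right image.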

Rotation and Position Tracking with OpenCV and Optical Flow

I would like to track the rotation and translation of an object in OpenCV using Optical Flow. So far I've got something like this:
Call goodFeaturesToTrack to find initial features
Call calcOpticalFlowPyrLK to track the movement of feature points
Call findHomography to find how the points in image A moved to image B
Call perspectiveTransform to move points based on the homography
Call solvePnPRansac to find the rotation matrix and translation vector
At this point I'm trying to take the difference between the rotation and translation Mat between images and add them to an initial Rotation and Translation Matrix.
cv::solvePnPRansac(pattern.points3d, _points2d, calibration.getIntrinsic(), calibration.getDistorsion(), raux, taux);
raux.convertTo(Rvec, CV_32F);
taux.convertTo(Tvec, CV_32F);
cv::Mat_<float> rotMat(3, 3);
cv::Rodrigues(Rvec, rotMat);
cv::Mat_<float> transDiff = _prevTranslation - Tvec;
cv::Mat_<float> rotDiff = _prevRotation - rotMat;
_absRotation += rotDiff;
_absTranslation += transDiff;
The problem with this approach is that the translation vector doesn't follow the images. It tends to stay around
[0.02 0.2 -1.5]
and doesn't stray far from this position.
Thanks.

optical flow for moving object: few points

I'm trying to do something like this:
http://www.youtube.com/watch?feature=player_embedded&v=MIYt1yNwoZU
and I'm on the right track; it works fairly well after two hours of coding. But I have some questions:
I'm using OpenCV 2.4 and there are several options around (see here). Which one is the best? Lucas-Kanade with some automatic feature detection? Or is a simple global orientation enough? Or even a Kalman filter? For now I'm using the dense Farneback algorithm, and I think it is the first (i.e. simplest) option, but maybe it is not the best one.
After calculating optical flow on the image (scaled down by a factor of 2, because computing optical flow is expensive), I take the average of the vectors: a plain average, summing all of them and dividing by the number of vectors, with a nested for loop over the flow Mat. Is there a better way?
Point2f average_motion(0,0);
float n = 1;
for(int y = 0; y < flow.rows; y += step)
    for(int x = 0; x < flow.cols; x += step) {
        const Point2f& fxy = flow.at<Point2f>(y, x);
        if( std::abs(fxy.x) > threshold || std::abs(fxy.y) > threshold ) {
            average_motion += fxy;
            n++;
        }
    }
average_motion *= 1/n;
cout << average_motion << endl;
I'm moving the rects, but the right/left movement seems a little bit weird, while the up/down movement works really nicely. Can someone explain why?
Translating is OK, but I'm stuck on rotation. If I have the average vector, how can I get the angle in degrees? I've tried the angle between the average vector and the X axis, but it does not work well. Any hints?
Right now I'm drawing with the OpenCV drawing API, but since 2.4 there is also OpenGL support, which should be nice, but I can't find any examples of it.
The best approach for optical flow is using a Kalman filter to predict the movement, so you can project the patches in those directions and reduce the search area for the next frame, increasing computational speed.
The bad news is that it is a difficult task to make a Kalman filter track properly.
I would propose using the Lucas-Kanade method because it is quite fast. Or you could use the GPU implementation of RLOF, which is similar to Lucas-Kanade. Do not estimate a dense motion vector field; just estimate motion vectors on a grid (e.g. every 5th pixel), which saves a lot of runtime. Or seed the features to track from the rectangles you want to move. To move your rectangle, it would be more elegant to estimate a transformation matrix, e.g. affine or perspective, via cv::getAffineTransform or cv::getPerspectiveTransform. The affine transformation contains translation, rotation and scaling, and the perspective one also contains shearing. (For both, RANSAC is a good estimator.) The new positions of the rectangle points can then be easily computed by a matrix operation:
[x, y, 1] = Matrix * [x_old, y_old, 1], see the OpenCV documentation.
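As a rough illustration of that last point (this is a sketch, not the exact functions named above: cv::findHomography with RANSAC stands in for cv::getPerspectiveTransform, since it accepts more than four point pairs):
std::vector<cv::Point2f> prevPts, currPts;              // tracked points from previous/current frame
std::vector<cv::Point2f> corners(4), movedCorners(4);   // your rectangle's corners
// ... run the tracker and fill prevPts, currPts and corners ...
cv::Mat H = cv::findHomography(prevPts, currPts, CV_RANSAC, 3.0);
cv::perspectiveTransform(corners, movedCorners, H);
// movedCorners now holds the rectangle after translation/rotation/scaling/shear,
// i.e. [x, y, w] = H * [x_old, y_old, 1] followed by division by w.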

Reshaping noisy coin into a circle form

I'm doing coin detection using JavaCV (an OpenCV wrapper), but I have a little problem when the coins are connected. If I try to erode them to separate the coins, they lose their circular shape, and if I try to count the pixels inside each coin there can be problems, so that some connected coins are miscounted as one bigger coin. What I want to do is first reshape them into circles (with the same radius as the coin) and then count the pixels inside them.
Here is my thresholded image:
And here is the eroded image:
Any suggestions? Or is there any better way to break bridges between coins?
It looks similar to a problem I recently had to separate bacterial colonies growing on agar plates.
I performed a distance transform on the thresholded image (in your case you will need to invert it).
Then I found the peaks of the distance map (by calculating the difference between the dilated distance map and the distance map and finding the zero values).
Then, I assumed each peak to be the centre of a circle (coin) and the value of the peak in the distance map to be the radius of the circle.
Here is the result of your image after this pipeline:
I am new to OpenCV and C++, so my code is probably very messy, but here is what I did:
int main( int argc, char** argv ){
cv::Mat objects, distance,peaks,results;
std::vector<std::vector<cv::Point> > contours;
objects=cv::imread("CUfWj.jpg");
objects.copyTo(results);
cv::cvtColor(objects, objects, CV_BGR2GRAY);
//THIS IS THE LINE TO BLUR THE IMAGE CF COMMENTS OF THIS POST
cv::blur( objects,objects,cv::Size(3,3));
cv::threshold(objects,objects,125,255,cv::THRESH_BINARY_INV);
/*Applies a distance transform to "objects".
* The result is saved in "distance" */
cv::distanceTransform(objects,distance,CV_DIST_L2,CV_DIST_MASK_5);
/* In order to find the local maxima, "distance"
* is subtracted from the result of the dilatation of
* "distance". All the peaks keep the save value */
cv::dilate(distance,peaks,cv::Mat(),cv::Point(-1,-1),3);
cv::dilate(objects,objects,cv::Mat(),cv::Point(-1,-1),3);
/* Now all the peaks should be exactly 0*/
peaks=peaks-distance;
/* And the non-peaks 255*/
cv::threshold(peaks,peaks,0,255,cv::THRESH_BINARY);
peaks.convertTo(peaks,CV_8U);
/* Only the zero values of "peaks" that are non-zero
* in "objects" are the real peaks*/
cv::bitwise_xor(peaks,objects,peaks);
/* Peaks that are less than 2 pixels apart
* are merged by dilation */
cv::dilate(peaks,peaks,cv::Mat(),cv::Point(-1,-1),1);
/* In order to map the peaks, findContours() is used.
* The results are stored in "contours" */
cv::findContours(peaks, contours, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
/* The next steps are applied only if, at least,
* one contour exists */
cv::imwrite("CUfWj2.jpg",peaks);
if(contours.size()>0){
/* Defines vectors to store the moments of the peaks, the center
* and the theoretical circles of the objects of interest*/
std::vector <cv::Moments> moms(contours.size());
std::vector <cv::Point> centers(contours.size());
std::vector<cv::Vec3f> circles(contours.size());
float rad,x,y;
/* Calculates the moments of each peak and then the center of the peak,
* which is approximately the center of each object of interest*/
for(unsigned int i=0;i<contours.size();i++) {
moms[i]= cv::moments(contours[i]);
centers[i]= cv::Point(moms[i].m10/moms[i].m00,moms[i].m01/moms[i].m00);
x= (float) (centers[i].x);
y= (float) (centers[i].y);
if(x>0 && y>0){
rad= (float) (distance.at<float>((int)y,(int)x)+1);
circles[i][0]= x;
circles[i][1]= y;
circles[i][2]= rad;
cv::circle(results,centers[i],rad+1,cv::Scalar( 255, 0,0 ), 2, 4, 0 );
}
}
cv::imwrite("CUfWj2.jpg",results);
}
return 1;
}
You don't need to erode, just a good set of params for cvHoughCircles():
The code used to generate this image came from my other post: Detecting Circles, with these parameters:
CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT, 1, gray->height/12, 80, 26);
OpenCV has a function called HoughCircles() that can be applied to your case without separating the different circles. Can you call it from JavaCV? If so, it will do what you want (detecting and counting circles), bypassing your separation problem.
The main point is to detect the circles accurately without separating them first. Other algorithms (such as template matching) can be used instead of the generalized Hough transform, but you have to take into account the different sizes of the coins.
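For reference, the C++ interface looks roughly like this; the numbers mirror the cvHoughCircles call above and are placeholders to tune:
cv::Mat gray = cv::imread("coins.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);   // smoothing reduces false circles
std::vector<cv::Vec3f> circles;                       // each entry is (x, y, radius)
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1,
                 gray.rows / 12,                      // minimum distance between centres
                 80, 26);                             // Canny / accumulator thresholds
// circles.size() is then the coin count; no erosion or separation step is needed.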
The usual approach for erosion-based object recognition is to label the connected regions in the eroded image and then re-grow them until they match the regions in the original image. Hough circles is a better idea in your case, though.
For the joined coins, I recommend applying morphological operations to classify areas as "definitely coin" and "definitely not coin", applying a distance transform, and then running the watershed to determine the boundaries. This scenario is actually the demonstration example for the watershed algorithm in OpenCV; perhaps it was created in response to this question.
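A condensed sketch of that pipeline, loosely following the OpenCV watershed demo (the thresholds and the background-seed position are illustrative, not tuned for this image):
cv::Mat img = cv::imread("coins.jpg");                        // colour input for watershed
cv::Mat gray, bin, dist, sureFg;
cv::cvtColor(img, gray, CV_BGR2GRAY);
cv::threshold(gray, bin, 125, 255, cv::THRESH_BINARY_INV);    // coins become white
cv::distanceTransform(bin, dist, CV_DIST_L2, CV_DIST_MASK_5);
cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);
cv::threshold(dist, sureFg, 0.6, 1.0, cv::THRESH_BINARY);     // "definitely coin" cores
sureFg.convertTo(sureFg, CV_8U, 255);
// Each core becomes a labelled seed for the watershed.
std::vector<std::vector<cv::Point> > contours;
cv::findContours(sureFg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Mat markers = cv::Mat::zeros(bin.size(), CV_32S);
for(size_t i = 0; i < contours.size(); i++)
    cv::drawContours(markers, contours, (int)i, cv::Scalar((int)i + 1), -1);
cv::circle(markers, cv::Point(5, 5), 3, cv::Scalar(255), -1); // background seed ("not coin")
cv::watershed(img, markers);                                  // boundary pixels get label -1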

OpenCV extrinsic camera from feature points

How do I retrieve the rotation matrix, the translation vector and maybe some scaling factors of each camera using OpenCV when I have pictures of an object from the view of each of these cameras? For every picture I have the image coordinates of several feature points. Not all feature points are visible in all of the pictures.
I want to map the computed 3D coordinates of the feature points of the object to a slightly different object to align the shape of the second object to the first object.
I heard it is possible using cv::calibrateCamera(...), but I can't quite get through it...
Does someone have experiences with that kind of problem?
I was confronted with the same problem as you in OpenCV. I had a stereo image pair and wanted to compute the external parameters of the cameras and the world coordinates of all observed points. This problem has been treated here:
Berthold K. P. Horn. Relative orientation revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700
However, I wasn't able to find a suitable implementation of this problem (perhaps you will find one). Due to time constraints I couldn't work through all the maths in the paper and implement it myself, so I came up with a quick-and-dirty solution that works for me. I will explain what I did to solve it:
Assume we have two cameras, where the first camera has external parameters RT = Matx::eye(). Now make a guess about the rotation R of the second camera. For every pair of image points observed in both images, we compute the directions of their corresponding rays in world coordinates and store them in a 2D array dirs (EDIT: the internal camera parameters are assumed to be known). We can do this since we assume that we know the orientation of every camera. Now we build an overdetermined linear system AC = 0, where C is the centre of the second camera. Here is the function to compute A:
// Note: Array<Vec3d, 2> and toVec() below are the author's own helpers, not OpenCV types.
Mat buildA(Matx<double, 3, 3> &R, Array<Vec3d, 2> dirs)
{
CV_Assert(dirs.size(0) == 2);
int pointCount = dirs.size(1);
Mat A(pointCount, 3, DataType<double>::type);
Vec3d *a = (Vec3d *)A.data;
for (int i = 0; i < pointCount; i++)
{
a[i] = dirs(0, i).cross(toVec(R*dirs(1, i)));
double length = norm(a[i]);
if (length == 0.0)
{
CV_Assert(false);
}
else
{
a[i] *= (1.0/length);
}
}
return A;
}
Then calling cv::SVD::solveZ(A) will give you the least-squares solution of norm 1 to this system. This way, you obtain the centre (and hence the translation) of the second camera for the guessed rotation. However, since I just made a guess about the rotation of the second camera, I make several guesses about it (parameterized using a 3x1 vector omega from which I compute the rotation matrix using cv::Rodrigues) and then refine the guess by solving the system AC = 0 repeatedly in a Levenberg-Marquardt optimizer with a numeric Jacobian. It works for me, but it is a bit dirty, so if you have time, I encourage you to implement what is explained in the paper.
EDIT:
Here is the routine in the Levenberg-Marquardt optimizer for evaluating the vector of residues:
void Stereo::eval(Mat &X, Mat &residues, Mat &weights)
{
Matx<double, 3, 3> R2Ref = getRot(X); // Map the 3x1 euler angle to a rotation matrix
Mat A = buildA(R2Ref, _dirs); // Compute the A matrix that measures the distance between ray pairs
Vec3d c;
Mat cMat(c, false);
SVD::solveZ(A, cMat); // Find the optimum camera centre of the second camera at distance 1 from the first camera
residues = A*cMat; // Compute the output vector whose length we are minimizing
weights.setTo(1.0);
}
By the way, I searched a little more on the internet and found some other code that could be useful for computing the relative orientation between cameras. I haven't tried any code yet, but it seems useful:
http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html
http://lear.inrialpes.fr/people/triggs/src/
http://www.maths.lth.se/vision/downloads/
Are these static cameras which you wish to calibrate for future use as a stereo pair? In that case you would want to use the cv::stereoCalibrate() function. OpenCV ships with some sample code, including stereo_calib.cpp, which may be worth investigating.
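For completeness, a minimal sketch of that call (parameter order as in the OpenCV 2.4 API; the point lists are placeholders for your detected correspondences):
std::vector<std::vector<cv::Point3f> > objectPoints;          // known 3D points per view
std::vector<std::vector<cv::Point2f> > imagePoints1, imagePoints2;
cv::Mat K1, D1, K2, D2, R, T, E, F;
cv::Size imageSize;                                           // size of the input images
// ... fill the point lists and the individual calibrations K1, D1, K2, D2 ...
double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                 K1, D1, K2, D2, imageSize, R, T, E, F,
                                 cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 1e-5),
                                 CV_CALIB_FIX_INTRINSIC);     // keeps the per-camera intrinsics fixed
// R and T are then the rotation and translation between the two cameras.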
