Remove outliers from Lucas-Kanade optical flow - opencv

There are similar questions on SO, but I didn't find the answer I wanted. I need to implement a robust optical flow in order to track features on a (detected) face. I use goodFeaturesToTrack/SURF (I haven't yet decided which is best) to get the initial features.
My question is how can I remove the outliers generated from optical flow? Is RANSAC a valid option for this and if so, how can you combine it with calcOpticalFlowPyrLK?
I also thought of rejecting the features whose displacement is bigger than a threshold, but it's just an idea and I don't really know how to implement it (how to choose the threshold, whether I should compute the mean displacement, etc.). So, which approach is best?

RANSAC is a good and robust option if you have a model that you expect your motion to conform to.
In general LK is local flow and does not have to conform to any (global) motion model, so in many cases RANSAC is inappropriate.
For general flow you might consider:
Symmetric flow: an LK flow from A to B should give the same result as an independent LK flow from B to A (a sketch of this check follows the list).
Motion bounds: use domain-specific knowledge to, e.g., remove motions that are too big, too sparse, or too different from their neighbors.
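For example, here is a minimal sketch of the symmetric check with OpenCV's pyramidal LK, assuming grayscale frames img_a, img_b and float32 points pts_a of shape (N, 1, 2), e.g. from cv2.goodFeaturesToTrack; the 1-pixel threshold is an arbitrary choice:

    import cv2
    import numpy as np

    def symmetric_lk(img_a, img_b, pts_a, fb_thresh=1.0):
        # Track forward A -> B, then track the results back B -> A.
        lk = dict(winSize=(21, 21), maxLevel=3,
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        pts_b, st_fwd, _ = cv2.calcOpticalFlowPyrLK(img_a, img_b, pts_a, None, **lk)
        pts_a_back, st_bwd, _ = cv2.calcOpticalFlowPyrLK(img_b, img_a, pts_b, None, **lk)
        # Keep only points that return close to their starting position.
        fb_err = np.linalg.norm(pts_a - pts_a_back, axis=2).ravel()
        good = (st_fwd.ravel() == 1) & (st_bwd.ravel() == 1) & (fb_err < fb_thresh)
        return pts_a[good], pts_b[good]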

If you used a grid of flow points instead of feature detection, you could assess each flow point by comparing its result with the surrounding flow points. If the distance to the surrounding vectors is too big, you could eliminate it. But doing this with irregular features is rather expensive.
If you do continuous tracking (of the same features) over several frames, you could also add a temporal smoothness assumption, e.g. the tracking vector from frame N to N+1 is likely to be very similar to the vectors from N-1 to N and from N+1 to N+2.
Generally, it always makes sense to eliminate suspicious vectors based on the criteria already mentioned above:
- vectors which are very long
- vectors with high error
- tracking points with poor gradient (already excluded, if you use corner detection for the features)
RANSAC would only work if you are interested in one rather global motion, e.g. the movement of the head. But I guess that's not what you are interested in (otherwise you could probably also just take the mean of all vectors).
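As an illustration of those checks (not from the answer itself), one could filter the output of cv2.calcOpticalFlowPyrLK by its status flag, its per-point error, and a displacement bound derived from the median motion; the error threshold and the factor of 3 are arbitrary assumptions:

    import cv2
    import numpy as np

    def filter_tracks(prev_pts, next_pts, status, err, err_thresh=20.0):
        ok = status.ravel() == 1                                     # tracking succeeded
        err = err.ravel()
        disp = np.linalg.norm(next_pts - prev_pts, axis=2).ravel()
        med = np.median(disp[ok]) if ok.any() else 0.0
        keep = ok & (err < err_thresh) & (disp < 3.0 * med + 2.0)    # reject very long vectors
        return prev_pts[keep], next_pts[keep]

    # next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    # prev_good, next_good = filter_tracks(prev_pts, next_pts, status, err)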

Related

OpenCV - Feature Matching vs Optical Flow

I am interested in making a motion tracking app using OpenCV, and there has been a wealth of information available online. However, I am a tad confused between feature matching and tracking features using a sparse optical flow algorithm such as Lucas-Kanade. With that in mind, I have the following questions:
What is the main difference between the two (feature matching and optical flow) if I have specified a region of pixels to track? I'm not interested in tracking in real time, if that helps clear up any assumptions.
In addition, since I'm not doing real time tracking, is it a better idea to use dense optical flow (Farneback) to keep track of the pixels in my specified region of interest?
Thank you.
I would like to add a few thoughts about that theme since I found this a very interesting question too.
As said before, feature matching is a technique based on:
A feature detection step which returns a set of so-called feature points. These feature points are located at positions with salient image structures, e.g. corner-like structures if you use FAST or blob-like structures if you use SIFT or SURF.
The second step is the matching: the association of feature points extracted from two different images. The matching is based on local visual descriptors, e.g. histograms of gradients or binary patterns, that are extracted locally around the feature positions. The descriptor is a feature vector, and associated feature point pairs are those with minimal feature vector distance.
Most feature matching methods are scale and rotation invariant and are robust to changes in illumination (e.g. caused by shadows or different contrast). Thus these methods can be applied to image sequences, but they are more often used to align image pairs captured from different views or with different devices. The disadvantage of feature matching methods is that it is hard to control where matches arise, and the resulting feature pairs (which in an image sequence are motion vectors) are in general very sparse. In addition, the subpixel accuracy of matching approaches is very limited, as most detectors are quantized to integer positions.
From my experience, the main advantage of feature matching approaches is that they can compute very large motions/displacements.
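A minimal sketch of those two steps in OpenCV, using ORB detection/description and brute-force Hamming matching with a ratio test (file names are placeholders):

    import cv2

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)   # step 1: detect and describe
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)    # step 2: associate descriptors
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test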
OpenCV offers some feature matching methods, but there are many more recent, faster and more accurate approaches available online, e.g.:
DeepMatching, which relies on a deep, hierarchical matching architecture and is often used to initialize optical flow methods to help them deal with long-range motions.
StereoScan, which is a very fast approach originally proposed for visual odometry.
Optical flow methods, in contrast, rely on minimizing the brightness constancy error together with additional constraints, e.g. smoothness. Thus they derive motion vectors from the spatial and temporal image gradients of a sequence of consecutive frames, and they are better suited to image sequences than to image pairs captured from very different viewpoints. The main challenges in estimating motion with optical flow are large motions, occlusions, strong illumination changes, changes in the appearance of the objects and, not least, runtime. However, optical flow methods can be highly accurate and compute dense motion fields that respect the shared motion boundaries of the objects in a scene.
However, the accuracy of different optical flow methods varies a lot. Local methods such as the PLK (pyramidal Lucas-Kanade) are in general less accurate but allow you to compute only preselected motion vectors, and can thus be very fast. (In recent years we have done some research to improve the accuracy of the local approach; see here for further information.)
The main OpenCV trunk offers global approaches such as Farnebäck's method, but this is a quite outdated approach. Try the OpenCV contrib trunk, which has more recent methods. To get a good overview of the most recent methods, take a look at the public optical flow benchmarks. There you will find code and implementations as well, e.g.:
MPI-Sintel optical flow benchmark
KITTI 2012 optical flow benchmark. Both offer links, e.g. to git repositories or source code, for some newer methods such as FlowFields.
But from my point of view, I would not reject a specific approach (matching or optical flow) at an early stage. Try as many of the implementations available online as possible and see which is best for your application.
Feature matching uses the feature descriptors to match features with one another (usually) using a nearest neighbor search in the feature descriptor space. The basic idea is you have descriptor vectors, and the same feature in two images should be near each other in the descriptor space, so you just match that way.
Optical flow algorithms do not look at a descriptor space; instead, they look at pixel patches around features and try to match those patches. If you're familiar with dense optical flow, sparse optical flow just does dense optical flow, but on small patches of the image around feature points. Thus optical flow assumes brightness constancy, that is, that pixel brightness doesn't change between frames. Also, since you're looking at neighboring pixels, you need to assume that points neighboring your features move similarly to your features. Finally, since it's using a dense flow algorithm on small patches, the points cannot move very far in the image from the original feature location. If they do, then the pyramid-resolution approach is recommended, where you scale down the image first, so that what once was a 16-pixel translation is now a 2-pixel translation, and then you scale back up, using the found transformation as your prior.
So feature matching algorithms are all-in-all far better when it comes to using templates where the scale is not exactly the same, or if there's a perspective difference between the image and the template, or if the transformations are large. However, your matches are only as good as your feature detector is exact. With optical flow algorithms, as long as they're looking in the right spot, the transformations can be really, really precise. Both are somewhat computationally expensive: optical flow algorithms are iterative, which makes them expensive (and although you'd think the pyramid approach would add cost by running on more images, it can actually make it faster in some cases to reach the desired accuracy), and nearest neighbor searches are also expensive. Optical flow algorithms, on the other hand, can work really well when the transformations are small, but anything in your scene that messes with your lighting, or a few incorrect pixels (say, even minor occlusion), can really throw them off.
Which one to use definitely depends on the project. For a project I worked on with satellite imagery, I used dense optical flow because the images of desert terrain I was working with did not have precise enough features (in location) and different feature descriptors happen to look relatively similar so searching that feature space wasn't giving tons of great matches. In this case, optical flow was the better method. However, if you were doing image alignment on satellite imagery of a city where buildings can occlude parts of the scene, there are a lot of features that will stay matched and give a better result.
The OpenCV Lucas-Kanade tutorial doesn't give a whole lot of insight but should get your code moving in the right direction with the above in mind.
key-point matching = sparse optical flow
KLT tracking is a good example of sparse flow; see the demo LKDemo.cpp (it had a Python wrapper example too, I can't remember it now).
For a dense example, see samples/python/opt_flow.py, which uses Farnebäck's method.
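For reference, a small sketch of what that sample does with Farnebäck's method (prev_gray and gray are consecutive grayscale frames; the parameter values follow the commonly used defaults and are not tuned):

    import cv2

    # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # flow has shape (H, W, 2) with per-pixel (dx, dy); magnitude and angle:
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])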
You are right in being confused... The entire world is confused about this terribly simple topic. A lot of the reason is that people believe Lucas-Kanade to be sparse flow (due to a terribly badly named and commented example in OpenCV: LKdemo, which should be called KLTDemo).

Right order of doing feature selection, PCA and normalization?

I know that feature selection helps me remove features that may have low contribution. I know that PCA helps reduce possibly correlated features into one, reducing the dimensions. I know that normalization transforms features to the same scale.
But is there a recommended order to do these three steps? Logically I would think that I should weed out bad features by feature selection first, followed by normalizing them, and finally use PCA to reduce dimensions and make the features as independent from each other as possible.
Is this logic correct?
Bonus question - are there any more things to do (preprocess or transform)
to the features before feeding them into the estimator?
If I were building a classifier of some sort, I would personally use this order:
Normalization
PCA
Feature Selection
Normalization: You would do normalization first to get the data into reasonable bounds. If you have data (x, y) where x ranges from -1000 to +1000 and y ranges from -1 to +1, you can see that any distance metric would automatically say a change in y is less significant than a change in x, and we don't know that is the case yet. So we want to normalize our data.
PCA: Uses the eigenvalue decomposition of the data's covariance matrix to find an orthogonal basis that describes the variance in the data points. If you have 4 characteristics, PCA can show you that only 2 of them really differentiate data points, which brings us to the last step.
Feature Selection: once you have a coordinate space that better describes your data, you can select which features are salient. Typically you'd use the largest eigenvalues (EVs) and their corresponding eigenvectors from PCA for your representation. Since larger EVs mean there is more variance in that direction of the data, you get more granularity in isolating features. This is a good way to reduce the number of dimensions of your problem.
Of course this could change from problem to problem, but it is simply a generic guide.
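As a sketch of that order with scikit-learn (the library choice, the component/feature counts and the final classifier are all assumptions for illustration):

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    pipe = Pipeline([
        ("normalize", StandardScaler()),           # 1. normalization
        ("pca", PCA(n_components=10)),             # 2. PCA
        ("select", SelectKBest(f_classif, k=5)),   # 3. feature selection
        ("clf", SVC()),                            # downstream estimator
    ])
    # pipe.fit(X_train, y_train); pipe.predict(X_test)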
Generally speaking, Normalization is needed before PCA.
The key to the problem is the order of feature selection, and it depends on the method of feature selection.
A simple form of feature selection is to check whether the variance or standard deviation of a feature is small. If these values are relatively small, the feature may not help the classifier. But if you normalize before doing this, the standard deviation and variance become smaller (generally less than 1), which results in very small differences in std or var between the different features. If you use zero-mean normalization, the mean of every feature will equal 0 and its std will equal 1. At this point, it might be bad to do normalization before feature selection.
Feature selection is flexible, and there are many ways to select features. The order of feature selection should be chosen according to the actual situation.
Good answers here. One point needs to be highlighted: PCA is a form of dimensionality reduction. It will find a lower-dimensional linear subspace that approximates the data well. When the axes of this subspace align with the features one started with, it also leads to interpretable feature selection. Otherwise, feature selection after PCA will lead to features that are linear combinations of the original set of features, and these are difficult to interpret in terms of the original features.

Structure from Motion with Optical Flow

Let's say I have a video from a drive recorder. I want to construct the recorded scene's point cloud using the structure-from-motion technique. First I need to track some points.
Which algorithm can yield a better result? By using the sparse optical flow (Kanade-Lucas-Tomasi tracker) or the dense optical flow (Farneback)? I have experimented a bit but cannot really decide. Each one of them has their own strengths and weaknesses.
The ultimate target is to get the point cloud of the recorded cars in the scene. By using sparse optical flow, I can track the interesting points of the cars, but it would be quite unpredictable. One solution is to make some kind of grid over the image and force the tracker to track one interesting point in each grid cell, but I think this would be quite hard.
By using dense flow, I can get the movement of every pixel, but the problem is that it cannot really detect the motion of cars that move only a little. Also, I doubt that the flow of every pixel yielded by the algorithm is that accurate. Plus, with this I believe I can only get the pixel movement between two frames (unlike with sparse optical flow, where I can get multiple coordinates of the same interesting point along time t).
Your title indicates SfM, which includes pose estimation;
tracking is only the first step (matching). If you want a point cloud from video (a very hard task), the first thing I would think of is bundle adjustment, which also works for MVE.
Nevertheless, for video we can do more: as frames are very close to each other, we can use a faster algorithm (optical flow) rather than SIFT matching, extract the F matrix from it, and then:
E = K^T * F * K   (the essential matrix from the fundamental matrix and the camera matrix K)
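A rough sketch of that step in OpenCV (all names are placeholders; pts1/pts2 are the tracked point pairs and K is the camera matrix):

    import cv2

    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K                                   # essential matrix from F and the intrinsics
    inl1 = pts1[mask.ravel() == 1]
    inl2 = pts2[mask.ravel() == 1]
    _, R, t, _ = cv2.recoverPose(E, inl1, inl2, K)    # relative pose, translation up to scale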
Back to your original question, which is better:
1) dense optical flow, or
2) sparse optical flow?
Apparently you are working offline, so speed is not important, but I would recommend the sparse one.
Update
For 3D reconstruction the dense flow may seem more attractive, but as you said it's rarely robust, so you can use sparse flow but add as many points as you want to make it semi-dense.
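One way to sketch this semi-dense idea is to seed a regular grid of points and track it with pyramidal LK (the grid step and LK parameters are arbitrary choices):

    import cv2
    import numpy as np

    h, w = prev_gray.shape
    ys, xs = np.mgrid[8:h:8, 8:w:8]                          # one point every 8 pixels
    grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(np.float32).reshape(-1, 1, 2)

    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, grid, None,
                                                winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    tracked_src, tracked_dst = grid[ok], nxt[ok]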
To name but a few methods that can do this: MonoSLAM or ORB-SLAM.
Final Update
Use semi-dense flow as I wrote earlier, but SfM always assumes static objects (no movement), otherwise it will never work.
In practice, using all the pixels in the image was long avoided for 3D reconstruction (outside of direct methods), and SIFT was always the praised way of detecting and matching features. More recently, all the pixels have been used in different kinds of estimation, for example in methods like Direct Sparse Odometry and LSD-SLAM, known as direct methods.

Shape features from canny edge detection

I am trying to implement Canny edge detection (found here: Canny edge) to differentiate objects based on their shapes. I would like to know: what are the features? I need to find a score/metric so that I can define a probability from information such as the mean of the shape. The purpose is to differentiate between objects of different shapes. So, let's assume that the mean shape (x) of Object1 and Object2 is x1, x2 and the standard deviation (s) is s1, s2 respectively. From what do I calculate this information, and how do I find it?
The Canny algorithm is an edge detector. It searches for high frequencies in the image by computing the magnitude of the derivatives in the x and y directions. At the end you have the contours of objects. What you are trying to do is classify objects, and using Canny alone does not sound like the right way to do it. I am not saying you cannot build features out of edges, but it might perform poorly.
In order to achieve what you want, you first need to identify which features are important for you. You mentioned shape, but is color a good feature for the class of objects you are trying to find? Your pictures show very colorful objects. Are you only trying to distinguish one object from the other (considering the images display only the object of interest), or do you want to locate them in the image? Does the image contain only one object or multiple ones?
I will give you some direction regarding feature modeling.
If color is a strong cue for your objects, you could model your features using histogram information: compute n bins for each object and store the distribution over the bins as a feature vector. You can use HOG.
Another possible (naive) solution is to compute the colors of all patches (e.g. 7x7) belonging to each object and then compute the histogram over patches instead of single pixels.
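For illustration, a per-channel color histogram feature could look like this (the HSV conversion and the 16-bin count are assumptions):

    import cv2
    import numpy as np

    def color_hist_feature(bgr_patch, bins=16):
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        feats = []
        for ch, max_val in enumerate((180, 256, 256)):        # H, S, V ranges in OpenCV
            hist = cv2.calcHist([hsv], [ch], None, [bins], [0, max_val])
            feats.append(hist.ravel())
        feat = np.concatenate(feats)
        return feat / (feat.sum() + 1e-9)                     # normalize to a distribution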
If you are not satisfied with color information and you would like to differentiate objects by comparing information in their neighborhood, you can use local binary patterns, which might be enough for the type of information you have.
Once you have decided on the important features and modeled them, you can go for classification (which determines which object you are seeing given a certain feature).
A probabilistic framework tries to estimate the posterior probability P(X|C), i.e. the probability of being object X given that we observed C (C could be your feature), and this is very powerful. You might consider reading about Maximum Likelihood Estimation and Maximum A Posteriori estimation. Also, a Naive Bayes classifier is a simple off-the-shelf algorithm available in OpenCV that you could use.
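As a sketch of that classification step, here is a version with scikit-learn's GaussianNB as a stand-in (OpenCV's ml module has its own Normal Bayes classifier); training_images, training_labels and test_image are placeholders, and the features come from the histogram sketch above:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X = np.vstack([color_hist_feature(img) for img in training_images])   # one feature vector per image
    y = np.array(training_labels)

    clf = GaussianNB().fit(X, y)
    probs = clf.predict_proba(color_hist_feature(test_image).reshape(1, -1))   # class posteriors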
You could use many other algorithms, such as SVM, Boost, Decision Trees, Neural Networks and so on. Bag of visual words is also a nice alternative.
If you are interested in how to separate the object of interest from the background, you are talking about image segmentation; you can look at k-means or, more powerfully, graph cuts techniques. Of course you can always segment first and then classify the segmented blobs.
Samuel

How is a homography calculated?

I am having quite a bit of trouble understanding the workings of plane to plane homography. In particular I would like to know how the opencv method works.
Is it like ray tracing? How does a homogeneous coordinate differ from a scale*vector?
Everything I read talks like you already know what they're talking about, so it's hard to grasp!
Googling homography estimation returns this as the first link (at least to me):
http://cseweb.ucsd.edu/classes/wi07/cse252a/homography_estimation/homography_estimation.pdf. And definitely this is a poor description and a lot has been omitted. If you want to learn these concepts reading a good book like Multiple View Geometry in Computer Vision would be far better than reading some short articles. Often these short articles have several serious mistakes, so be careful.
In short, a cost function is defined and the parameters (the elements of the homography matrix) that minimize this cost function are the answer we are looking for. A meaningful cost function is geometric, that is, it has a geometric interpretation. For the homography case, we want to find H such that, by transforming points from one image to the other, the distance between all the points and their correspondences is minimal. This geometric function is nonlinear, which means: 1) an iterative method should generally be used to solve it; 2) an initial starting point is required for the iterative method. Here algebraic cost functions enter. These cost functions have no meaningful/geometric interpretation. Designing them is often more of an art, and for a given problem you can usually find several algebraic cost functions with different properties. The benefit of algebraic costs is that they lead to linear optimization problems, hence a closed-form solution exists (that is, a one-shot, non-iterative method). But the downside is that the found solution is not optimal. Therefore, the general approach is to first optimize an algebraic cost and then use that solution as the starting point for an iterative geometric optimization. Now if you google these cost functions for homography you will find how they are usually defined.
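To make the algebraic step concrete, here is a minimal DLT sketch that builds the 2N x 9 system A h = 0 from point correspondences and takes the null vector via SVD (in practice the points should first be normalized, as the book describes, for numerical stability):

    import numpy as np

    def homography_dlt(src, dst):
        # src, dst: (N, 2) arrays of corresponding points, N >= 4
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.asarray(rows, dtype=float)
        _, _, Vt = np.linalg.svd(A)
        H = Vt[-1].reshape(3, 3)            # right singular vector of the smallest singular value
        return H / H[2, 2]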
In case you want to know which method is used in OpenCV, you simply need to have a look at the code:
http://code.opencv.org/projects/opencv/repository/entry/trunk/opencv/modules/calib3d/src/fundam.cpp#L81
This is the algebraic cost function, DLT, defined in the book mentioned above; if you google "homography DLT" you should find some relevant documents. And then here:
http://code.opencv.org/projects/opencv/repository/entry/trunk/opencv/modules/calib3d/src/fundam.cpp#L165
An iterative procedure minimizes the geometric cost function. It seems the Gauss-Newton method is implemented:
http://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
All the above discussion assumes you have correspondences between the two images. If some points are matched to incorrect points in the other image, then you have outliers, and the results of the mentioned methods would be completely off. Robust (outlier-resistant) methods enter here. OpenCV gives you two options: 1. RANSAC, 2. LMeDS. Google is your friend here.
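For completeness, the robust estimation is a one-liner in OpenCV (src_pts and dst_pts are the matched point sets, shape (N, 1, 2), float32; the reprojection threshold is an arbitrary choice):

    import cv2

    H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=3.0)
    # Pass cv2.LMEDS instead of cv2.RANSAC for the least-median-of-squares option.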
Hope that helps.
To answer your question we need to address 4 different questions:
1. Define homography.
2. See what happens when noise or outliers are present.
3. Find an approximate solution.
4. Refine it.
A homography is a 3x3 matrix that maps 2D points. The mapping is linear in homogeneous coordinates: [x2, y2, 1]’ ~ H * [x1, y1, 1]’, where ‘ means transpose (to write column vectors as rows) and ~ means that the mapping is defined only up to scale. It is easier to see in Cartesian coordinates (multiplying the nominator and denominator by the same factor doesn’t change the result):
x2 = (h11*x1 + h12*y1 + h13)/(h31*x1 + h32*y1 + h33)
y2 = (h21*x1 + h22*y1 + h23)/(h31*x1 + h32*y1 + h33)
You can see that in Cartesian coordinates the mapping is non-linear, but for now just keep this in mind.
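A tiny sketch that makes the up-to-scale mapping concrete (H and the point are made-up values):

    import numpy as np
    import cv2

    H = np.array([[1.1, 0.02, 5.0],
                  [0.01, 0.95, -3.0],
                  [1e-4, 2e-4, 1.0]])
    p1 = np.array([100.0, 50.0, 1.0])            # homogeneous [x1, y1, 1]

    p2h = H @ p1                                  # result is only defined up to scale
    x2, y2 = p2h[0] / p2h[2], p2h[1] / p2h[2]     # the Cartesian formulas above

    # Same thing with OpenCV (expects points of shape (N, 1, 2)):
    p2 = cv2.perspectiveTransform(np.array([[[100.0, 50.0]]]), H)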
We can easily solve the former set of linear equations in homogeneous coordinates using least-squares linear algebra methods (see DLT - Direct Linear Transform), but this unfortunately only minimizes an algebraic error in the homography parameters. People care more about another kind of error - namely the error that shifts points around in the Cartesian coordinate system. If there is no noise and no outliers, the two errors can be identical. However, the presence of noise requires us to minimize the residuals in Cartesian coordinates (residuals are just squared differences between the left and right sides of the Cartesian equations). On top of that, the presence of outliers requires us to use a robust method such as RANSAC. It selects the best set of inliers and rejects a few outliers to make sure they don’t contaminate our solution.
Since RANSAC finds correct inliers by random trial and error over many iterations, we need a really fast way to compute the homography, and this is the linear approximation that minimizes the parameters' error (the wrong metric) but is otherwise close enough to the final solution (which minimizes squared point coordinate residuals - the right metric). We use the linear solution as a guess for further non-linear optimization.
The final step is to use our initial guess (the solution of the linear system that minimized the error in the homography parameters) to solve the non-linear equations (which minimize a sum of squared pixel errors). The reason to use squared residuals instead of, for example, their absolute values is that in the Gaussian formula (which describes the noise) we have a squared exponent, exp(-(x-mu)^2 / (2*sigma^2)), so (skipping some probability formulas) the maximum likelihood solution requires squared residuals.
In order to perform non-linear optimization one typically employs the Levenberg-Marquardt method. But as a first approximation one can just use gradient descent (note that the gradient points uphill, but we are looking for a minimum, thus we go against it, hence the minus sign below). In a nutshell, we go through a set of iterations 1..t..N, selecting the homography parameters at iteration t as param(t) = param(t-1) - k * gradient, where gradient = d_cost/d_param.
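A toy sketch of that update rule, minimizing the sum of squared Cartesian residuals over the 8 free parameters (h33 fixed to 1) with a numerical gradient; the step size and iteration count are arbitrary, and in practice Levenberg-Marquardt converges far better:

    import numpy as np

    def project(h, pts):                              # h: 8 parameters, pts: (N, 2)
        H = np.append(h, 1.0).reshape(3, 3)
        q = np.c_[pts, np.ones(len(pts))] @ H.T
        return q[:, :2] / q[:, 2:3]

    def cost(h, src, dst):
        return np.sum((project(h, src) - dst) ** 2)   # sum of squared pixel residuals

    def refine(h0, src, dst, k=1e-9, iters=200, eps=1e-6):
        h = h0.copy()
        for _ in range(iters):
            grad = np.array([(cost(h + eps * e, src, dst) - cost(h - eps * e, src, dst)) / (2 * eps)
                             for e in np.eye(8)])
            h = h - k * grad                          # param(t) = param(t-1) - k * gradient
        return np.append(h, 1.0).reshape(3, 3)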
Bonus material: to further reduce the noise in your homography you can try a few tricks: reduce the search space for points (start tracking your points); use different features (lines, conics, etc., which are also transformed by a homography but possibly have a higher SNR); reject impossible homographies to speed up RANSAC (e.g. those that correspond to ‘impossible’ point movements); use a low-pass filter for small changes in homographies that may be attributed to noise.
