How does KLT work in OpenCV?

I am curious about the logic behind KLT in OpenCV.
From what I know so far, the images passed to the optical flow functions in OpenCV are first converted to grayscale.
What I am wondering is this: when running the algorithm, we need a set of features for the computation. What features does OpenCV's optical flow method use?
Thank you :)

There are two types of optical flow: dense and sparse.
Dense methods find the flow for all pixels, while sparse methods find the flow only for selected points.
The selected points may be user specified, or computed automatically using any of the feature detectors available in OpenCV. The most common choice is goodFeaturesToTrack, which finds corners using cornerHarris or cornerMinEigenVal.
The feature list is then passed to the KLT tracker, calcOpticalFlowPyrLK.
A feature can be any point in the image; the most common features are corners and edges.
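For illustration, here is a minimal sketch of that pipeline in Python (the file name video.mp4 and all parameter values are placeholders, not prescribed by the answer): corners are detected with goodFeaturesToTrack and then tracked with calcOpticalFlowPyrLK.

import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Shi-Tomasi corners (cornerMinEigenVal based) as the sparse feature set
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: estimate where each feature point moved to
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    for n, o in zip(good_new.reshape(-1, 2), good_old.reshape(-1, 2)):
        cv2.line(frame, tuple(map(int, o)), tuple(map(int, n)), (0, 255, 0), 2)
    cv2.imshow("KLT", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()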

Related

How to detect hand palm and its orientation (like facing outwards)?

I am working on a hand detection project. There are many good projects on the web for this, but what I need is detection of a specific hand pose: a fully open palm with the whole palm facing outwards, like the image below:
The first hand faces inwards, so it should not be detected, while the right one faces outwards and should be detected. I can already detect the hand with OpenCV, but how can I tell the hand's orientation?
Matching against the forehand (the front of the hand) is a texture classification problem, a classic pattern recognition task. I suggest you try one of the following methods:
Gabor filters: good for detecting orientation and pixel intensities (the forehand has distinctive texture). OpenCV has the getGaborKernel function; its most important parameters are theta (orientation) and lambd (frequency). To keep it simple, you can apply this to a cropped zone of the palm (since you have already detected it, it is easy to crop, for example, the thumb or a rectangular zone around the center of gravity). Then you can convolve it with a small database of images of the same zone to get a matching rate, or you can use an SVM classifier trained on a set of images by building the training matrix SVM needs (check this question), this paper. A minimal sketch of the Gabor step follows this list.
Local Binary Patterns (LBP): an important feature descriptor used for texture matching. You can apply it to the whole palm image or to a cropped zone or finger. It is easy to use in OpenCV, and plenty of tutorials with code are available for this method. I recommend reading this paper on Invariant Texture Classification with Local Binary Patterns. Here is a good tutorial.
Haralick Texture: I've read that it works very well when a set of features quantifies the entire image (global feature descriptors). It is not implemented in OpenCV but is easy to implement; check this useful tutorial.
Training models: I've already suggested an SVM classifier; coupled with one of the descriptors above it can work very well.
OpenCV also has an interesting FaceRecognizer class for face recognition. It could be worth reusing it with palm images instead of face images (resize and rotate to get a canonical palm pose). The class offers three models; one of them is Local Binary Patterns Histograms, which is recommended for texture recognition. You could also try the other models (Eigenfaces and Fisherfaces); check this tutorial.
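To make the Gabor suggestion concrete, here is a minimal sketch (the file name palm_crop.png and all parameter values are placeholders): it builds a small bank of Gabor kernels at several orientations with getGaborKernel and collects the mean filter response per orientation as a crude feature vector that could be fed to an SVM.

import cv2
import numpy as np

# palm_crop.png is a placeholder for the cropped palm zone mentioned above
palm_patch = cv2.imread("palm_crop.png", cv2.IMREAD_GRAYSCALE)

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):  # 8 orientations
    # theta = orientation, lambd = wavelength (frequency); the key parameters
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    filtered = cv2.filter2D(palm_patch, cv2.CV_32F, kernel)
    responses.append(float(filtered.mean()))  # crude per-orientation energy

feature_vector = np.array(responses)  # could be one SVM training sample
print(feature_vector)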
If you go for a MacGyver approach, notice that the left hand has bones sticking out in a certain direction, while the right hand shows all the finger lines and a few lines in the palm.
These lines are always roughly the same, so you could try to detect them with OpenCV edge detection or Hough lines, as shown in the sketch below. Because the lines are dark, you might even be able to threshold them out. Then gather information from those lines, such as angles and regressions, see which features you can collect, and train a simple decision tree.
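A rough illustration of that idea (the file name and thresholds are placeholders): Canny edges followed by a probabilistic Hough transform, collecting line angles as candidate features.

import cv2
import numpy as np

palm = cv2.imread("palm_roi.png", cv2.IMREAD_GRAYSCALE)  # placeholder cropped hand region

edges = cv2.Canny(palm, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=5)

angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
# angles (plus line counts, lengths, ...) would be the features for a simple decision tree
print(angles)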
That was assuming you do not have enough data. If you do, go for deep learning: take a basic InceptionV3 model and retrain the last dense layer to classify between two classes with a softmax, or to predict the probability of the hand being up/down with a sigmoid. Check this link; TensorFlow has your back on the training, with ready-to-run code.
Questions? Ask away
Take a look at what Leap Motion has done with the Oculus Rift. I'm not sure what they use internally to segment hand poses, but there is a paper that produces hand poses effectively. If you have a stereo camera setup, you can use that paper's methods: https://arxiv.org/pdf/1610.07214.pdf.
The only promising solutions I've seen for a mono camera train on large datasets.
Use a Haar cascade classifier.
You can get the classifier model file and then use it here.
Just search Google for 'Haar cascade detection of palm', or use the code below.
import cv2

cam = cv2.VideoCapture(0)
# Pre-trained palm cascade (path to the downloaded XML file)
ccfr2 = cv2.CascadeClassifier('haar-cascade-files-master/palm.xml')

while True:
    retval, image = cam.read()
    if not retval:
        break
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Detect palms; tune scaleFactor/minNeighbors for your camera
    palms = ccfr2.detectMultiScale(grey, scaleFactor=1.05, minNeighbors=3)
    for x, y, w, h in palms:
        image = cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 2)
    cv2.imshow("Window", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
Best of Luck for your experience using HaarCascade.

OpenCV - Feature Matching vs Optical Flow

I am interested in making a motion tracking app using OpenCV, and there has been a wealth of information available online. However, I am a tad confused between feature matching and tracking features using a sparse optical flow algorithm such as Lucas-Kanade. With that in mind, I have the following questions:
What is the main difference between the two (feature matching and optical flow) if I have specified a region of pixels to track? I'm not interested in tracking in real time, if that helps clear up any assumptions.
In addition, since I'm not doing real time tracking, is it a better idea to use dense optical flow (Farneback) to keep track of the pixels in my specified region of interest?
Thank you.
I would like to add a few thoughts about this topic, since I find it a very interesting question too.
As said before, feature matching is a technique based on:
A feature detection step, which returns a set of so-called feature points. These feature points are located at positions with salient image structure, e.g. corner-like structures when you are using FAST or blob-like structures if you are using SIFT or SURF.
The second step is the matching: the association of feature points extracted from two different images. Matching is based on local visual descriptors, e.g. histograms of gradients or binary patterns, extracted around the feature positions. The descriptor is a feature vector, and associated feature point pairs are the pairs with minimal feature vector distance.
Most feature matching methods are scale and rotation invariant and are robust to changes in illumination (e.g. caused by shadows or different contrast). These methods can be applied to image sequences, but are more often used to align image pairs captured from different views or with different devices. The disadvantage of feature matching methods is that you cannot control where the matches appear, and that the feature pairs (which in an image sequence are motion vectors) are in general very sparse. In addition, the subpixel accuracy of matching approaches is very limited, since most detectors are confined to integer positions.
From my experience, the main advantage of feature matching approaches is that they can handle very large motions/displacements.
OpenCV offers some feature matching methods, but there are many more recent, faster and more accurate approaches available online, e.g.:
DeepMatching, which relies on deep learning and is often used to initialize optical flow methods to help them deal with long-range motions.
StereoScan, a very fast approach originally proposed for visual odometry.
Optical flow methods, in contrast, rely on minimizing the brightness constancy term plus additional constraints, e.g. smoothness. They derive motion vectors from spatial and temporal image gradients of a sequence of consecutive frames, so they are better suited to image sequences than to image pairs captured from very different viewpoints. The main challenges in motion estimation with optical flow are large motions, occlusion, strong illumination changes, changes in the appearance of objects and, above all, runtime. However, optical flow methods can be highly accurate and compute dense motion fields that respect the motion boundaries of the objects in a scene.
However, the accuracy of different optical flow methods varies widely. Local methods such as PLK (pyramidal Lucas-Kanade) are in general less accurate, but allow computing motion vectors only at preselected positions and can thus be very fast. (In recent years we have done some research to improve the accuracy of the local approach; see here for further information.)
The main OpenCV trunk offers global approaches such as Farnebäck's, but this is a rather outdated approach. Try the OpenCV contrib trunk, which contains more recent methods. To get a good overview of the most recent methods, take a look at the public optical flow benchmarks, where you will also find code and implementations, e.g.:
MPI-Sintel optical flow benchmark
KITTI 2012 optical flow benchmark. Both offer links, e.g. to Git repositories or source code, for some newer methods such as FlowFields.
From my point of view, I would not reject a specific approach (matching or optical flow) at an early stage. Try as many of the available online implementations as possible and see which works best for your application.
Feature matching uses the feature descriptors to match features with one another, usually with a nearest neighbor search in the feature descriptor space. The basic idea is that you have descriptor vectors, and the same feature in two images should be near each other in descriptor space, so you just match that way.
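A small sketch of what that looks like in OpenCV (the image names are placeholders; ORB and a brute-force matcher stand in for whatever detector/descriptor you prefer):

import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder image pair
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Nearest-neighbour search in descriptor space (Hamming distance for binary ORB descriptors)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if its best neighbour is clearly closer than the second best
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "matches kept out of", len(matches))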
Optical flow algorithms do not look at a descriptor space; instead they look at pixel patches around the features and try to match those patches. If you're familiar with dense optical flow, sparse optical flow just runs dense optical flow on small patches of the image around the feature points. Optical flow therefore assumes brightness constancy, that is, that pixel brightness doesn't change between frames. Also, since you're looking at neighboring pixels, you need to assume that neighboring points move similarly to your feature. Finally, since a dense flow algorithm is used on small patches, the points cannot move very far from the original feature location. If they do, the pyramid-resolution approach is recommended: you scale the image down first, so what was a 16 pixel translation becomes a 2 pixel translation, and then you scale back up, using the found transformation as your prior.
So feature matching algorithms are all-in-all far better when it comes to using templates where the scale is not exactly the same, or if there's a perspective difference between the image and the template, or if the transformations are large. However, your matches are only as good as your feature detector is exact. With optical flow algorithms, as long as they are looking in the right spot, the transformations can be really, really precise. Both are somewhat computationally expensive; optical flow algorithms are iterative, which makes them expensive (although you'd think the pyramid approach adds cost by running on more images, it can actually be faster in some cases to reach the desired accuracy), and nearest neighbor searches are also expensive. Optical flow algorithms, on the other hand, can work really well when the transformations are small, but anything in your scene that messes with the lighting, or incorrect pixels (say, even minor occlusion), can really throw them off.
Which one to use definitely depends on the project. For a project I worked on with satellite imagery, I used dense optical flow because the images of desert terrain I was working with did not have precise enough features (in location) and different feature descriptors happen to look relatively similar so searching that feature space wasn't giving tons of great matches. In this case, optical flow was the better method. However, if you were doing image alignment on satellite imagery of a city where buildings can occlude parts of the scene, there are a lot of features that will stay matched and give a better result.
The OpenCV Lucas-Kanade tutorial doesn't give a whole lot of insight but should get your code moving in the right direction with the above in mind.
key-point matching = sparse optical flow
KLT tracking is a good example of sparse flow; see the demo LKDemo.cpp (there was a Python wrapper example too, but I can't remember it now).
For a dense example, see samples/python/opt_flow.py, which uses Farnebäck's method.
You are right to be confused... The entire world is confused about this terribly simple topic. A lot of the reason is that people believe Lucas-Kanade to be sparse flow (due to a badly named and commented example in OpenCV: LKdemo, which should be called KLTDemo).

How to do grid-based (dense) optical flow on a masked image?

I am trying to track multiple people using a video camera. I do not want to use blob segmentation techniques.
What I want to do:
Perform background subtraction to obtain a mask isolating the people's motion.
Perform grid-based optical flow on those areas.
What would be my best bet?
I am struggling to implement this. I have tried blob detection and also some sparse optical flow examples, but sparse didn't really work for me because I wasn't getting enough feature points from goodFeaturesToTrack(). I would like to end up with at least 20 trackable points per person, which is why I think a grid-based method would suit me better. I will use the motion vectors obtained to classify different people (possibly clustering on magnitude and direction?).
I am using opencv3 with Python 3.5 - but am still quite noobish in this field.
Would appreciate some guidance immensely!
For sparse optical flow (in OpenCV, the pyramidal Lucas-Kanade method) you are not required to use goodFeaturesToTrack to get the positions.
The calcOpticalFlowPyrLK function lets you estimate the motion at predefined positions, and these can be supplied by you.
So just initialize a grid of cv::Point2f yourself, e.g. create a list of points, set the positions to the grid points that fall on your blobs, and run calcOpticalFlowPyrLK().
The idea of goodFeaturesToTrack is that it gives you the points where the calcOpticalFlowPyrLK() result is more likely to be accurate, namely image locations with corner-like structure. But in my experience this does not always give the optimal feature point set; I prefer to use regular grids as feature point sets. A minimal sketch of building such a grid is below.
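This sketch assumes a placeholder video file, builds the foreground mask with a background subtractor, keeps only the grid points on the mask, and tracks them with calcOpticalFlowPyrLK (the file name and grid spacing are placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture("people.mp4")  # placeholder video
bg = cv2.createBackgroundSubtractorMOG2()

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
bg.apply(prev)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mask = bg.apply(frame)  # foreground mask from background subtraction

# Build a regular grid of candidate points
step = 8  # grid spacing in pixels (placeholder value)
ys, xs = np.mgrid[step // 2:gray.shape[0]:step, step // 2:gray.shape[1]:step]
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(np.float32)

# Keep only the grid points that fall on the foreground mask
on_mask = mask[grid[:, 1].astype(int), grid[:, 0].astype(int)] > 0
pts = grid[on_mask].reshape(-1, 1, 2)

new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
flow = (new_pts - pts)[status.flatten() == 1]  # motion vectors of tracked points
print(flow.shape)

cap.release()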

Can we use Lucas-Kanade optical flow (OpenCV) for color-based detection or contour object tracking?

From what I have studied, LK optical flow can be achieved using three functions:
cvgoodfeaturestotrack
cvfindcornerSubPix
calcOpticalFlowPyrLK
Is there any possibility of tracking objects using color or contours?
I'm a little bit confused about the exact meaning of your question.
Here are answers to what I can interpret from your statement:
Q: Can the cvgoodfeaturestotrack, cvfindcornerSubPix, calcOpticalFlowPyrLK methods be used on a color image directly?
A: No. Convert to grayscale first.
++++++
Q: Can Lucas Kanade optical flow be used for tracking a particular color?
A: No. Possibly not, using the existing library functions/algorithms in openCV. Probably a research topic. Go through http://robots.stanford.edu/cs223b04/algo_tracking.pdf
The very first line of the paper assumes the two input images are 2D and grayscale. Try going through the available literature and see if you can tweak the algorithm to include color information. You may want to consult additional resources like this: http://www.dca.ufrn.br/~adelardo/artigos/SAC08.pdf
+++++++++++
Q: Can optical flow be used for tracking a particular color?
A: Yes, with dense optical flow tracking (as opposed to sparse optical flow, i.e. LK tracking). You can use the OpenCV function calcOpticalFlowFarneback.
++++++
There are other, simpler methods if you want to implement this from scratch and robustness isn't one of your primary concerns.
Try thresholding the input image for your target color ----> find the largest blob ----> obtain the centroid of that blob ----> track that centroid across consecutive frames. A rough sketch is below.
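This sketch assumes a webcam and a red-ish target; the HSV range is a placeholder you would tune for your own color.

import cv2

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Threshold for the target color (placeholder range)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    # [-2] keeps this working with both the OpenCV 3 and 4 return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        blob = max(contours, key=cv2.contourArea)  # largest blob
        m = cv2.moments(blob)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid
            cv2.circle(frame, (cx, cy), 5, (255, 255, 255), -1)
    cv2.imshow("color tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()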

What is the difference between sparse and dense optical flow?

Lots of resources say that there are two types of optical flow algorithms, and that Lucas-Kanade is a sparse technique, but I can't find the meanings of sparse and dense. Can someone tell me the difference between dense and sparse optical flow?
The short explanation is that sparse techniques only need to process some pixels from the whole image, while dense techniques process all the pixels. Dense techniques are slower but can be more accurate; in my experience, Lucas-Kanade accuracy might be enough for real-time applications. An example of a dense optical flow algorithm (the most popular) is Gunnar Farnebäck's optical flow.
To get an overview of the flow quality, look at a benchmark page, e.g. the KITTI or the Middlebury dataset.
Sparse optical flow gives you the flow vectors of some "interesting features" within the image.
Dense optical flow attempts to give you the flow all over the image - up to a flow vector per pixel.
First of all, Lucas-Kanade is NOT a sparse optical flow technique. The reason so many believe it is, is a widespread misunderstanding. The misconception became an accepted truth because the very first implementation of Lucas-Kanade in OpenCV was labelled as SPARSE, and still is to this day. The arguments for why Lucas-Kanade should be called sparse apply to any dense flow algorithm. If you insist that Lucas-Kanade is sparse, then all flow algorithms are sparse and there is no point in distinguishing them.
Sparse flow is the same as point tracking; dense flow consists of vectors over the whole frame, indicating estimates of motion at fixed positions.
You can read more about all of this in this tutorial that I wrote, where I also show how Lucas-Kanade is just as dense as any other algorithm out there (although not as accurate).
Sparse optical flow - the Lucas-Kanade method computes optical flow for a sparse feature set (e.g. corners detected using the Shi-Tomasi algorithm).
Dense optical flow - Gunnar Farnebäck's algorithm computes the optical flow for all the points in the frame. This is explained in "Two-Frame Motion Estimation Based on Polynomial Expansion" by Gunnar Farnebäck (2003).
An example implementation can be found in the OpenCV documentation here; a minimal sketch is also given below.
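For completeness, a minimal sketch of dense flow with calcOpticalFlowFarneback (the video file name is a placeholder and the parameter values roughly follow those used in the OpenCV samples):

import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")  # placeholder input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # One flow vector per pixel: flow[y, x] = (dx, dy)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Visualise the magnitude only, for brevity
    vis = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow("dense flow magnitude", vis)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()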
Sparse optical flow works on features (edges, corners, etc.). Dense optical flow is designed to work on all the pixels. The advantage of the first is that it is generally faster, while the second gives estimates for many more pixels.
Sparse optical flow gives you the velocity vectors for some interesting (corner) points; these points are extracted beforehand using algorithms like Shi-Tomasi, Harris, etc. The extracted points are passed into your optical flow function along with the current image and the next image. A good optical flow pipeline should compute the flow in the forward direction using those corner points and also track backwards to cross-check that it follows the same points (see the sketch below).
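A minimal sketch of such a forward-backward check with calcOpticalFlowPyrLK (prev_gray, gray and pts are assumed to already exist, e.g. from the earlier examples, and the 1.0 pixel threshold is a placeholder):

import cv2
import numpy as np

# Assumed inputs: prev_gray, gray (consecutive grayscale frames) and
# pts (Nx1x2 float32 corner points, e.g. from cv2.goodFeaturesToTrack)
fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)   # forward
bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, fwd, None)   # backward

# Trust a track only if tracking forward and then back lands near the start point
fb_error = np.linalg.norm((pts - bwd).reshape(-1, 2), axis=1)
good = (st_f.flatten() == 1) & (st_b.flatten() == 1) & (fb_error < 1.0)
reliable_old = pts[good]
reliable_new = fwd[good]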
On the other hand, dense optical flow is described here: http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/flowChapter05.pdf
