I am pretty new to OpenCV, and I have read some tutorials and project source code. It seems that people tend to do object detection based on color.
I wonder if there is a way to do object detection based on the curve of the edge.
For example, I have a known object whose edge is like A, and I know that the edge of this object in different images is quite similar but not identical, like B. Now suppose B is in picture C; my goal is to find the contour or edge of B in C by searching C for a shape similar to A.
So is there any method of contour detection or edge detection based on searching for a shape similar to a known one?
I think the default method in OpenCV is matchShapes, but I have used a generic method that works rather well:
For each contour, take the centroid.
Starting from the contour point highest above the centroid, take the distance to the centroid.
Move a step along the contour (for example 1/100 of the length of the contour) and take the distance again.
After all the steps, you have an array of distances.
Compare the arrays, for example using cosine similarity.
You can make this method scale and rotation invariant.
If you want scale invariance, normalize the array before comparison.
If you want rotation invariance, shift the array and find the best match over all shifts. A minimal sketch follows.
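As an illustration, here is a minimal Python/NumPy sketch of this signature method; the step count, the use of the mean of the contour points as the centroid, and the function names are my own choices, not a fixed recipe:

```python
import numpy as np

def signature(contour, n_steps=100):
    """Distance-to-centroid profile sampled at n_steps points along the contour."""
    pts = contour.reshape(-1, 2).astype(float)  # e.g. one contour from cv2.findContours
    centroid = pts.mean(axis=0)                 # mean of the points approximates the centroid
    # Start from the topmost contour point (the one highest above the centroid).
    pts = np.roll(pts, -int(np.argmin(pts[:, 1])), axis=0)
    idx = np.linspace(0, len(pts) - 1, n_steps).astype(int)
    d = np.linalg.norm(pts[idx] - centroid, axis=1)
    return d / np.linalg.norm(d)  # normalising makes the comparison scale invariant

def best_similarity(sig_a, sig_b):
    """Cosine similarity of two signatures, maximised over all cyclic shifts
    so that the comparison is rotation invariant."""
    return max(float(sig_a @ np.roll(sig_b, s)) for s in range(len(sig_b)))
```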
I have a non-planar object with 9 points of known dimensions in 3D, i.e. the lengths of all sides are known. Now, given a 2D projection of this shape, I want to reconstruct its 3D model. I basically want to recover the shape of this object in the real world, i.e. the angles between the different sides in 3D. For example, given all the dimensions of every part of a table and a 2D image, I'm trying to reconstruct its 3D model.
I've read about homographies, perspective transforms, Procrustes analysis, and the fundamental/essential matrix so far, but haven't found a solution that applies here. I'm new to this, so I might have missed something. Any direction on this will be really helpful.
In your question, you mention that you want to achieve this using only a single view of the object. In that case, homographies or essential/fundamental matrices won't help you, because these require at least two views of the scene to make sense. If you don't have any priors on the shape of the objects that you want to reconstruct, the key information you'll be missing is (relative) depth, and in that case I think these are the two possible solutions:
Leverage a learning algorithm. There is a rich literature on 6-DoF object pose estimation with deep networks; see this paper for example. You won't have to deal with depth directly if you use them, since those networks are trained end to end to estimate a pose in SE(3).
Add many more images and use a dense photometric SLAM/SfM pipeline, such as ElasticFusion. However, in that case you will need to segment the resulting models, since the estimate they produce is of the entire environment, which can be difficult depending on the scene.
However, as you mentioned in your comment, it is possible to reconstruct the model up to scale if you have very strong priors on its geometry. In the case of a planar object (a cuboid is just an extension of that), you can use this simple algorithm (which is more or less what they do here; there are other methods, but I find them a bit messy, equation-wise):
// Let's note A,B,C,D the rectangle in 3D that we are after, such that
// AB is parallel with CD. Let's also note a,b,c,d their respective
// reprojections in the image, i.e. a=KA where K is the calibration matrix, and so on.
1) Compute the common vanishing point of AB and CD. This is just the intersection
of ab and cd in the image plane. Let's call it v_1.
2) Do the same for the two other edges, i.e. bc and da. Let's call this
vanishing point v_2.
3) Now you can compute the vanishing line, which is just
crossproduct(v_1, v_2), i.e. the line l going through both v_1 and v_2. This gives
you the orientation of your plane: its normal is N = K^T * l.
4) All you need to find now is the boundaries of the rectangle. To do
that, just consider any plane with normal N that doesn't go through
the camera center, and find the intersections of the back-projected rays
K^{-1}a, K^{-1}b, K^{-1}c, K^{-1}d with that plane. A sketch follows below.
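To make these steps concrete, here is a minimal NumPy sketch; the calibration matrix and corner pixel coordinates are made-up placeholders, and the plane is arbitrarily fixed at N . X = 1, since the reconstruction is only up to scale anyway:

```python
import numpy as np

# Hypothetical calibration matrix and corner pixels a, b, c, d (homogeneous).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
a, b, c, d = [np.array([x, y, 1.0])
              for x, y in [(100, 80), (500, 90), (520, 400), (90, 380)]]

# Steps 1-2: in homogeneous coordinates, the line through two points is their
# cross product, and the intersection of two lines is again a cross product.
v1 = np.cross(np.cross(a, b), np.cross(c, d))  # vanishing point of AB and CD
v2 = np.cross(np.cross(b, c), np.cross(d, a))  # vanishing point of BC and DA

# Step 3: vanishing line of the plane, and the plane normal N = K^T l.
l = np.cross(v1, v2)
N = K.T @ l
N /= np.linalg.norm(N)

# Step 4: intersect the back-projected rays with the plane N . X = 1.
K_inv = np.linalg.inv(K)
corners_3d = []
for x in (a, b, c, d):
    ray = K_inv @ x                      # direction of the ray through pixel x
    corners_3d.append(ray / (N @ ray))   # scaled so that N . X = 1
```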
If you need a refresher on vanishing points and lines, I suggest you take a look at pages 213 and 216 of Hartley-Zisserman's book.
I have a few doubts about how to approach my goal. I have an outdoor camera that is recording people, and I want to draw an ellipse on every person.
Right now, I extract the feature points of the people from the frame (using a mask so that the feature points fall only on the people), set up an EM algorithm, and train it with my samples (the extracted feature points). The number of clusters is twice the number of people in the image (which I obtain before starting the EM algorithm using other methods, such as pixel counting with a codebook).
My question is:
(a) Do I train it only on the first frame and then use predict on the following frames? or
(b) Do I train it with the feature points of every frame?
Right now I am doing option (b) (I don't use predict) because I don't really know how to use predict.
If I do (a), can you help me with it, and after that, with drawing the ellipses? If I do (b), can you help me draw one ellipse per person? Right now I get several ellipses for the same person from the cov, mean, etc. (one for an arm, for example).
What I want to achieve is this paper using the Gaussian model: Link
If you would draw bounding boxes rather than ellipses, you could use the function groupRectangles to merge the different bounding boxes.
But, more importantly: for people detection, you can simply use OpenCV's person detector (based on HOG) or the latent SVM detector with the person model.
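For example, a minimal sketch with the built-in HOG people detector (the frame source and the parameters are placeholders):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")  # or a frame grabbed from your camera
# Each rect is (x, y, w, h); weights are the SVM confidence scores.
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```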
You should do (b) anyway, because otherwise you'll be matching the keypoints to the clusters (persons) of the first frame; after a few seconds this would no longer be relevant.
It seems reasonable to assume that the change from frame to frame is not going to be overwhelming, so reusing the result of the training on frame N-1 as the seed for training on frame N is likely to converge faster than running EM from scratch on each frame.
To draw the ellipses, you can build on the Gaussian mixture example in the Python bindings:
https://github.com/opencv/opencv/blob/master/samples/python/gaussian_mix.py
Note that if you use a diagonal covariance matrix, your ellipses are going to be aligned "straight" (their axes aligned with the X and Y axes of the frame), so you can skip the calculation of the ellipse's angle.
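For drawing one ellipse per cluster, a small sketch along these lines should work (assuming a trained cv2.ml.EM model; the 2-sigma scaling and the helper name are arbitrary choices):

```python
import cv2
import numpy as np

def draw_gaussian_ellipse(img, mean, cov, n_std=2.0, color=(0, 255, 0)):
    """Draw the n_std-sigma ellipse of a 2D Gaussian given its mean and covariance."""
    # Eigen-decomposition of the 2x2 covariance gives axis lengths and orientation.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = eigvals.argsort()[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    angle = float(np.degrees(np.arctan2(eigvecs[1, 0], eigvecs[0, 0])))
    axes = (int(n_std * np.sqrt(eigvals[0])), int(n_std * np.sqrt(eigvals[1])))
    cv2.ellipse(img, (int(mean[0]), int(mean[1])), axes, angle, 0, 360, color, 2)

# After training, e.g. em = cv2.ml.EM_create(); em.trainEM(samples),
# em.getMeans() and em.getCovs() give one mean/covariance per cluster:
# for mean, cov in zip(em.getMeans(), em.getCovs()):
#     draw_gaussian_ellipse(frame, mean, cov)
```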
I am a newbie to OpenCV and am trying to implement the shape context descriptor outlined in the slides at http://www.cs.utexas.edu/~grauman/courses/spring2008/slides/ShapeContexts425.pdf
I found the edge points of the shape using the Canny edge detector for the first part of step 1. Now I need to calculate the Euclidean distance from each edge point to every other one. Rather than using for-loops to find the distance between each and every pair of points, is there an OpenCV function that does this step more efficiently?
Finding all pairwise distances between a set of points is not a standard operation, and I don't think you will find anything like it in OpenCV. But it is very easy to compute by hand: given two points a and b, you can calculate the distance between them as cv::norm(a - b), as described here.
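If you happen to be in Python, the whole distance matrix can also be computed in one vectorised NumPy expression (the file name and the Canny thresholds below are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
edges = cv2.Canny(img, 100, 200)
pts = np.argwhere(edges > 0).astype(np.float64)      # N x 2 (row, col) edge points

# Broadcasting computes all N x N pairwise Euclidean distances at once.
diff = pts[:, None, :] - pts[None, :, :]             # shape (N, N, 2)
dists = np.sqrt((diff ** 2).sum(axis=-1))            # dists[i, j] = ||pts[i] - pts[j]||
```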
You might want to use the matchShapes function. However, it operates on image moments, not on the shape descriptor you mentioned.
I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches, but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes, I was hoping there might be a simpler approach.
By saying you don't want to do any training, I am guessing you mean you don't want to do any feature matching. Feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and along with RANSAC it would be applicable to your problem for generating and verifying pose hypotheses.
The simplest approach is template matching, but it may be too computationally expensive (this depends on your use case). In template matching you simply loop over the possible locations, orientations, and scales of the object and check how well the template (a cloud that looks like an L or a T at that location, orientation, and scale) matches; alternatively, you sample possible locations, orientations, and scales randomly. The checking of the template can be made fairly fast if your points are organised (or if you organise them, e.g. by converting them into pixels); a brute-force sketch appears after the Hough explanation below.
If this is too slow, there are many methods for making template matching faster, and I would recommend the Generalised Hough Transform.
Here, before starting the search, you loop over the boundary of the shape you are looking for (T or L). For each point on the boundary you record the gradient direction, the angle at that point between the gradient direction and the line to the origin of the object template, and the distance to the origin. You add these to a table (let us call it Table A) for each boundary point, and you end up with a table that maps a gradient direction to the set of possible locations of the object's origin.
Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B) where each pixel contains a number representing the votes for the object being at that location. Then for each point in the target image (point cloud) you check the gradient, find the set of possible object locations recorded in Table A for that gradient, and add one vote to all the corresponding object locations in Table B (the Hough space).
This is a very terse explanation, but knowing to look for template matching and the Generalised Hough Transform, you will be able to find better explanations on the web, e.g. the Wikipedia pages for Template Matching and the Hough Transform.
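For the template matching route, a brute-force sketch over rotations might look like this (the file names, the 5-degree step, and the similarity measure are arbitrary choices; the template is assumed to be smaller than the scene image):

```python
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)          # rendered point cloud
template = cv2.imread("template_L.png", cv2.IMREAD_GRAYSCALE)  # the L/T shape

h, w = template.shape
best = (-1.0, None, None)  # (score, location, angle)
for angle in range(0, 360, 5):
    # Rotate the template, then correlate it against every scene location.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    result = cv2.matchTemplate(scene, rotated, cv2.TM_CCORR_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)
    if score > best[0]:
        best = (score, loc, angle)
print("best score %.3f at %s, angle %d" % best)
```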
You may need to:
1- Extract some features from the image in which you are looking for the object.
2- Extract another set of features from an image of the object itself.
3- Match the features (this is possible using methods like SIFT).
4- When you find a match, apply the RANSAC algorithm; it provides you with a transformation matrix (including translation and rotation information).
To use SIFT, start from here; it is actually one of the best source codes written for SIFT, and it includes the RANSAC algorithm, so you do not need to implement it yourself. A sketch of the pipeline follows below.
You can read about RANSAC here.
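The linked code is a C implementation; if you would rather stay inside OpenCV, a rough Python sketch of the same feature-match-then-RANSAC pipeline (hypothetical file names, the usual 0.75 ratio test) could be:

```python
import cv2
import numpy as np

obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # image of the object alone
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # image to search in

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(obj, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Match descriptors and keep the good matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# RANSAC inside findHomography estimates the transformation (needs >= 4 good
# matches); the mask flags which matches are inliers.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```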
Two common ways of detecting shapes (L, T, ...) in your 2D point cloud data are OpenCV and the Point Cloud Library. I'll explain the steps you may take to detect those shapes in OpenCV. You can use the following three methods; selecting the right one depends on the shape (size, area, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step is converting your points to a grayscale Mat object; by doing that, you basically make an image of your 2D point cloud data, so you can use the other OpenCV functions. Then you may smooth the image to reduce noise; the result is a somewhat blurry image that still contains the real edges. If your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
The next step is choosing the method. If the shape is just a set of orthogonal lines (such as L or T), you can use the Hough Line Transform to detect the lines; after detection, you can loop over pairs of lines and calculate the dot product of their direction vectors (since the lines are orthogonal, the result should be close to 0). You can find more information about the Hough Line Transform here. A minimal sketch of this check follows.
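Something like this, assuming the point cloud has already been rendered into a grayscale image (the Canny/Hough thresholds and the 0.1 tolerance are placeholder values):

```python
import cv2
import numpy as np

img = cv2.imread("cloud.png", cv2.IMREAD_GRAYSCALE)  # rendered point cloud
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)

def unit_direction(seg):
    x1, y1, x2, y2 = seg
    v = np.array([x2 - x1, y2 - y1], dtype=float)
    return v / np.linalg.norm(v)

# Orthogonal segments have a (near-)zero dot product between their directions.
if lines is not None:
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            dot = abs(unit_direction(lines[i][0]) @ unit_direction(lines[j][0]))
            if dot < 0.1:
                print("roughly perpendicular pair:", lines[i][0], lines[j][0])
```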
Another way is detecting your shape using template matching. Basically, you make a template of your shape (L or T) and pass it to the matchTemplate function. Note that the size of the template should match the scale of the shape in your image; otherwise you may need to resize the image. More information about the algorithm can be found here.
If the shapes enclose an area, you can find the contours of the shape using findContours; it gives you the polygons around the shapes you want to detect. For instance, if your shape is an L, its contour approximates a polygon with roughly 6 vertices. You can also apply other filters along with findContours, such as a threshold on the area of the shape, as in the sketch below.
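A short sketch of that contour filter (OpenCV 4 return convention; the approximation epsilon, vertex count, and area threshold are illustrative only):

```python
import cv2

img = cv2.imread("cloud.png", cv2.IMREAD_GRAYSCALE)  # rendered point cloud
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    # Approximate the contour by a polygon; an L-shape has roughly 6 vertices.
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    if len(approx) == 6 and cv2.contourArea(cnt) > 100:
        print("candidate L-shape at", cv2.boundingRect(cnt))
```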
I'm creating a part scanner in C that pulls all possibilities for scanned parts as images in a directory. My code currently fetches all images from that directory and dumps them into a vector. I then produce groups of contours for all the images. The program then enters a while loop where it continuously grabs images from a webcam and generates contours for those as well. I have set up a jig for the part to rest on, so orientation and size are not a concern; however, I don't want to have to calibrate the machine, so there may be movement between the template images and the part images taken.
What is the best way to compare the contours? I have tried several methods, including matchTemplate without contours, but if you take a look at the two parts below, you can see that they are very close to each other, so matchShapes and matchTemplate can't distinguish between them the way I was using them. I'm also not sure how to use cvMatchShapes: it works when loading the images directly into matchShapes, but the results are inconclusive. I think contours are the way to go; I'm just not sure how to implement the comparison phase. Any help would be great.
You can view the templates here: http://www.cryogendesign.com/partDetection.html
If you are ready to do it yourself, one approach is to compute a "distance image": assign every pixel the smallest Euclidean distance to the contour taken as the reference. See http://en.wikipedia.org/wiki/Distance_transform.
Using this distance image, you can quickly compute the average distance of a new contour to the reference one (for every contour pixel, read the distance from the distance image). The average distance gives you an indication of the goodness of fit and will let you find the best match among a set of reference templates.
If the parts have some freedom of movement, the situation is a bit harder: before computing the average distance, you must fit the new contour to the reference one by applying a suitable transform (translation, rotation, possibly scaling) and finding the parameters that minimize... the average distance. A sketch of the distance-image scoring follows.
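A minimal sketch of the scoring step, assuming both contours are available as binary images with contour pixels set to 255 (the file names are placeholders):

```python
import cv2
import numpy as np

ref = cv2.imread("reference_contour.png", cv2.IMREAD_GRAYSCALE)
new = cv2.imread("new_contour.png", cv2.IMREAD_GRAYSCALE)

# Distance image: distanceTransform measures the distance to the nearest zero
# pixel, so invert the mask to get the distance to the nearest contour pixel.
dist = cv2.distanceTransform(cv2.bitwise_not(ref), cv2.DIST_L2, 3)

# Average distance of the new contour's pixels to the reference contour:
# lower means a better fit.
ys, xs = np.nonzero(new)
score = dist[ys, xs].mean()
print("average distance to reference: %.2f" % score)
```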
You can calculate the chamfer distance between the two contours:

$d_{cham}(T, E, x) = \frac{1}{|T|} \sum_{t \in T} \min_{e \in E} \lVert (t + x) - e \rVert$

T and E are the sets of edge points of the template and the image, and x is the reference offset at which you compare the two sets of edges, so for each x you get a different value.
DT is the distance transform of an image; it lets you evaluate the inner minimum by a single lookup, so that $d_{cham}(T, E, x) = \frac{1}{|T|} \sum_{t \in T} DT_E(t + x)$. Matlab provides the algorithm here.
If you want a more detailed version of how to calculate the chamfer distance, take a look here.