How to compare two contours of a binary pattern image? - opencv

I'm creating a part scanner in C that pulls all possibilities for scanned parts as images in a directory. My code currently fetches all images from that directory and dumps them into a vector. I then produce groups of contours for all the images. The program then falls into a while loop where it constantly grabs images from a webcam and generates contours for those as well. I have set up a jig for the part to rest on, so orientation and size are not a concern; however, I don't want to have to calibrate the machine, so there may be some movement between the template images and the part images taken.
What is the best way to compare the contours? I have tried several methods, including matchTemplate without contours, but if you take a look at the two parts below, you can see that they are very close to each other, so matchShapes and matchTemplate can't distinguish between them the way I was using them. I'm also not sure how to use cvMatchShapes. It works when I just load the images directly into cvMatchShapes, but the results are inconclusive. I think that contours are the way to go; I'm just not sure how to go about implementing the comparison phase. Any help would be great.
You can view the templates here: http://www.cryogendesign.com/partDetection.html

If you are ready for do-it-yourself, one approach could be to compute a "distance image" (assign every pixel the smallest Euclidean distance to the contour taken as the reference). See http://en.wikipedia.org/wiki/Distance_transform.
Using this distance image, you can quickly compute the average distance of a new contour to the reference one (for every contour pixel, get the distance from the distance image). The average distance gives you an indication of the goodness-of-fit and will let you find the best match to a set of reference templates.
If the parts have some moving freedom, the situation is a bit harder: before computing the average distance, you must fit the new contour to the reference one. You will need to apply a suitable transform (translation, rotation, possibly scaling), and find the parameters that will minimize... the average distance.
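A minimal sketch of this approach in Python with OpenCV (the file name and the OpenCV 4-style findContours return value are assumptions; adapt them to your setup):

import cv2
import numpy as np

# Load the reference part and extract its contour ("reference.png" is a placeholder name).
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
_, ref_bin = cv2.threshold(ref, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(ref_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Distance image: every pixel holds the Euclidean distance to the nearest contour pixel.
contour_img = np.zeros_like(ref_bin)
cv2.drawContours(contour_img, contours, -1, 255, 1)
dist_img = cv2.distanceTransform(255 - contour_img, cv2.DIST_L2, 3)

def average_distance(query_contour, dist_img):
    # Mean distance of a query contour's points to the reference contour.
    pts = query_contour.reshape(-1, 2)              # contour points are stored as (x, y)
    return float(np.mean(dist_img[pts[:, 1], pts[:, 0]]))

The reference template with the smallest average distance is then the best match.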

You can calculate the chamfer distance between the two contours:
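One common way to write it, using the notation explained below, is:

d_{cham}(x) = \frac{1}{|T|} \sum_{t \in T} \min_{e \in E} \lVert (t + x) - e \rVert = \frac{1}{|T|} \sum_{t \in T} DT_E(t + x)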
T and E are the sets of edges of the template and the image, and x is the reference point at which you start comparing the two sets of edges, so for each x you get a different value.
DT is the distance transform of an image. Matlab provides the algorithm here.
If you want a more detailed version of how to calculate the chamfer distance, take a look here.

Related

OpenCV "coordinate specific" match template

I have an image where I need to detect an object as fast as possible. I also know that I only need to detect the object closest to the center.
AFAIK OpenCV's MatchTemplate works somewhat like this (pseudocode):
for(x in width):
    for(y in height):
        value = calcSimilarity(inputImage, searchedImage, x, y)
        matched[x][y] = value
After that, I have to loop through the resulting image and find the point closest to the center, which is all quite a waste.
So I'm wondering if I can do something like:
coordsGen = new CoordsGen() // a class that generates specific coords for me
while(!coordsGen.stop):
    x, y = coordsGen.next()
    value = calcSimilarity(inputImage, searchedImage, x, y)
    if(value > threshold)
        return x, y
Basically what I need here is the calcSimilarity function. This would allow me to optimize the process greatly.
There are many choices of similarity scoring methods for template matching in general.*
OpenCV has 3 available template matching modes:
Sum of square differences (Euclidean distance)
Cross-correlation
Pearson correlation coefficient
And in OpenCV each of those three have normed/scaled versions as well:
Normalized sum of square differences
Normalized cross-correlation
Normalized Pearson correlation coefficient
You can see the actual formulas used in the OpenCV docs under TemplateMatchModes though these agree with the general formulas you can find everywhere for the above methods.
You can code the template matching yourself instead of using OpenCV. However, note that OpenCV is optimized for these operations and in general is blazing fast at template matching. OpenCV uses a DFT to perform some of these computations to reduce the computational load. For example, see:
Why is opencv's Template Matching ... so fast?
OpenCV Sum of squared differences speed
You can also use OpenCV's minMaxLoc() to find the minimum/maximum value instead of looping through the result yourself. Also, you didn't specify how you're accessing your values, but not all lookup methods are as fast as others. See How to scan images to see the fastest Mat access operations. Spoiler: raw pointers.
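For reference, the standard call pattern looks like this in Python (the file names are placeholders):

import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Normalized Pearson correlation coefficient; the result is a float32 similarity map.
result = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)

# minMaxLoc finds the extrema without a hand-written loop.
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
top_left = max_loc    # best fit for CCORR/CCOEFF methods; use min_loc for SQDIFF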
The main speedup your optimization would look to give you is early termination of the function. However, I don't think you'll achieve faster times in general by coding it yourself, unless there's a significantly smaller subset of the original image that the template is usually in.
A better method to reduce search time if your images are very big would be to use a pyramid resolution approach. Basically, make the template and search images 1/2 your original image size, then 1/2 of that, 1/2 of that, and so on. Then you start the template matching on a small 1/16 or whatever sized image and find the general location of the template. Then you do the same for the next image size up, but you only search a small subset around where your template was at the previous scale. Then each time you grow the image size closer to the original, you're only looking for small differences of a few pixels to nail down the position more accurately. The general location is first found with the smallest scaled image, which only takes a fraction of the time to find compared to the original image size, and then you simply refine it by scaling up.
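A rough sketch of that coarse-to-fine idea (grayscale inputs assumed; the number of levels and the 8-pixel search margin are arbitrary choices to tune, and border handling is omitted for brevity):

import cv2

def pyramid_match(img, templ, levels=4, margin=8):
    # Build image and template pyramids, then reorder them coarsest-first.
    imgs, templs = [img], [templ]
    for _ in range(levels - 1):
        imgs.append(cv2.pyrDown(imgs[-1]))
        templs.append(cv2.pyrDown(templs[-1]))
    imgs, templs = imgs[::-1], templs[::-1]

    x = y = 0
    for i, (im, tp) in enumerate(zip(imgs, templs)):
        if i == 0:
            x0 = y0 = 0
            roi = im                                    # full search at the coarsest level
        else:
            x, y = 2 * x, 2 * y                         # propagate the estimate up one level
            x0, y0 = max(0, x - margin), max(0, y - margin)
            roi = im[y0:y0 + tp.shape[0] + 2 * margin,
                     x0:x0 + tp.shape[1] + 2 * margin]  # small window around the estimate
        res = cv2.matchTemplate(roi, tp, cv2.TM_CCOEFF_NORMED)
        _, _, _, loc = cv2.minMaxLoc(res)
        x, y = x0 + loc[0], y0 + loc[1]
    return x, y     # top-left corner of the best match at full resolution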
* Note that OpenCV doesn't include other template matching methods which you may see elsewhere. In particular, OpenCV has a sum of square differences method but no sum of absolute differences method. Phase differences are also used as a similarity metric, but don't exist in OpenCV. Either way, cross-correlation and sum of square differences are both extremely common in image processing and, unless you have a special image domain, should work fine.

comparing two Binary images in opencv

I have two binary images of a hand which are almost the same. How should I compare them to know whether they represent almost the same shape or not? I have tried finding the Euclidean distance between the two images, but it does not give the correct answer if the image is slightly changed, moved to the left or right, or slightly decreased in size. I have also tried HOG descriptors in OpenCV, but I am still unable to get the correct answer when I compare more than one image. What is the best way to compare two binary images based on shape, or on some other feature, to find nearly matching images regardless of the size of the image? Links to the images are http://postimg.org/image/w20tuuzmv/ and http://postimg.org/image/jndr4br9x/
I think that Generalized Hough transform might be a good solution for you. Here is a tutorial about it.
Alternatively, you can try to cut the hand out of one image (just use the contour bounding rect) and then use it as a template to search for in the second image using the template matching technique - you can read more about it here. When you find the point with the highest correlation value, you need to decide whether it is big enough - you will have to find the threshold on your own.
Are the images just rotated, translated and scaled? If so, you could compute the principal components of the images using PCA, then rotate the images so that the first component points in a fixed direction (e.g. always vertical). You could then compute the centroids of the images and translate them so that they are always in the same position (e.g. the center of the image). To always use the same scale, you could resize the images so that the sum of the distances between each white pixel and the centroid is the same in both images. Now it's easy to compare the images, for example score = np.sum(A==B)
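A sketch of that normalization in Python with OpenCV and NumPy (the canonical image size and target spread are arbitrary, the scale here is fixed via the mean distance to the centroid, and the sign of the first principal axis is ambiguous, so you may need to try both 180-degree orientations):

import cv2
import numpy as np

def normalize(binary, out_size=256):
    # 'binary' is a 0/255 uint8 mask; map it into a canonical rotation, position and scale.
    ys, xs = np.nonzero(binary)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    mean, eigvecs = cv2.PCACompute(pts, np.empty(0))
    cx, cy = float(mean[0, 0]), float(mean[0, 1])

    # Rotate the first principal axis to vertical.
    angle = np.degrees(np.arctan2(eigvecs[0, 1], eigvecs[0, 0])) - 90.0

    # Scale so the mean distance of white pixels to the centroid is fixed.
    spread = np.mean(np.linalg.norm(pts - mean, axis=1))
    scale = (out_size / 6.0) / spread

    M = cv2.getRotationMatrix2D((cx, cy), angle, scale)
    M[:, 2] += np.array([out_size / 2 - cx, out_size / 2 - cy])   # centroid -> image center
    return cv2.warpAffine(binary, M, (out_size, out_size), flags=cv2.INTER_NEAREST)

A = normalize(imgA)          # imgA, imgB: the two binary hand masks (placeholders)
B = normalize(imgB)
score = np.sum(A == B)       # higher score = more similar, as suggested above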

Finding simple shapes in 2D point clouds

I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes I was hoping that there might be a simpler approach.
By saying you don't want to do any training I am guessing that you mean you don't want to do any feature matching; feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and would be applicable along with RANSAC to your problem for guessing and verifying good hypotheses about object pose.
The simplest approach is template matching, but this may be too computationally complex (it depends on your use case). In template matching you simply loop over the possible locations of the object and its possible orientations and scales, and check how well the template (a cloud that looks like an L or a T at that location, orientation and scale) matches (or you sample possible locations, orientations and scales randomly). Checking the template could be made fairly fast if your points are organised (or you organise them, e.g. by converting them into pixels).
If this is too slow there are many methods for making template matching faster and I would recommend to you the Generalised Hough Transform.
Here, before starting the search for templates, you loop over the boundary of the shape you are looking for (T or L), and for each point on its boundary you look at the gradient direction, and then record the angle between the gradient direction and the direction to the origin of the object template, along with the distance to the origin. You add that to a table (let us call it Table A) for each boundary point, and you end up with a table that maps a gradient direction to the set of possible locations of the origin of the object. Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B), where each pixel contains a number representing the number of votes for the object at that location. Then for each point in the target image (point cloud) you check the gradient, find the set of possible object locations from Table A for that gradient, and add one vote for each of the corresponding object locations in Table B (the Hough space).
This is a very terse explanation but knowing to look for Template Matching and Generalised Hough transform you will be able to find better explanations on the web. E.g. Look at the Wikipedia pages for Template Matching and Hough Transform.
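A bare-bones, translation-only version of that R-table voting scheme (a sketch only: it assumes both template and target have already been rasterized into binary edge images, and it stores the offset vectors to the origin directly, which is equivalent to storing the angle and distance described above):

import cv2
import numpy as np

def build_r_table(template_edges, origin, n_bins=36):
    # Table A: map a quantized gradient direction to the list of offsets to the origin.
    gx = cv2.Sobel(template_edges, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(template_edges, cv2.CV_32F, 0, 1)
    table = [[] for _ in range(n_bins)]
    ys, xs = np.nonzero(template_edges)
    for x, y in zip(xs, ys):
        theta = np.arctan2(gy[y, x], gx[y, x])
        b = int((theta + np.pi) / (2 * np.pi) * n_bins) % n_bins
        table[b].append((origin[0] - x, origin[1] - y))
    return table

def vote(target_edges, table, n_bins=36):
    # Table B: accumulate votes for possible origin locations.
    gx = cv2.Sobel(target_edges, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(target_edges, cv2.CV_32F, 0, 1)
    acc = np.zeros(target_edges.shape, dtype=np.int32)
    ys, xs = np.nonzero(target_edges)
    for x, y in zip(xs, ys):
        theta = np.arctan2(gy[y, x], gx[y, x])
        b = int((theta + np.pi) / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in table[b]:
            ox, oy = x + dx, y + dy
            if 0 <= oy < acc.shape[0] and 0 <= ox < acc.shape[1]:
                acc[oy, ox] += 1
    return acc   # the peak of acc is the most likely object origin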
You may need to:
1- Extract some features from the image inside which you are looking for the object.
2- Extract another set of features from the image of the object.
3- Match the features (this is possible using methods like SIFT).
4- When you find a match, apply the RANSAC algorithm; it provides you with a transformation matrix (including translation and rotation information). See the sketch below.
For using SIFT, start from here. It is actually one of the best source codes written for SIFT. It includes the RANSAC algorithm, so you do not need to implement it yourself.
You can read about RANSAC here.
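With a reasonably recent OpenCV build (4.4+, where SIFT is in the main package), steps 1-4 look roughly like this; the file names are placeholders:

import cv2
import numpy as np

img_scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
img_obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_obj, None)     # features of the object (step 2)
kp2, des2 = sift.detectAndCompute(img_scene, None)   # features of the scene (step 1)

# Step 3: match descriptors with a ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Step 4: RANSAC estimates the transformation and rejects outliers.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # H encodes the object's pose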
Two common ways of detecting the shapes (L, T, ...) in your 2D pointcloud data would be to use OpenCV or the Point Cloud Library. I'll explain the steps you may take for detecting those shapes in OpenCV. In order to do that, you can use the following 3 methods; the selection of the right method depends on the shape (size, area of the shape, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step would be converting your points to a grayscale Mat object; by doing that you basically make an image of your 2D pointcloud data, so you can use other OpenCV functions. Then you may smooth the image in order to reduce the noise, and the result would be a somewhat blurry image which still contains the real edges; if your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
The next step would be choosing the method. If the shape is just some orthogonal lines (such as L or T), you can use the Hough Line Transformation to detect the lines; after detection, you can loop over the lines and calculate the dot product of their direction vectors (since they are orthogonal, the result should be 0). You can find more information about the Hough Line Transformation here.
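For example, with the probabilistic variant HoughLinesP you can check orthogonality directly on the segment direction vectors (the Canny and Hough parameters below are guesses to tune, and 'blurred' stands for the smoothed image from the previous step):

import cv2
import numpy as np

edges = cv2.Canny(blurred, 50, 150)          # 'blurred' is the smoothed image from above
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=5)

if lines is not None:
    dirs = []
    for x1, y1, x2, y2 in lines[:, 0]:
        v = np.array([x2 - x1, y2 - y1], dtype=float)
        dirs.append(v / np.linalg.norm(v))
    # Orthogonal segments have a (near-)zero dot product.
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            if abs(np.dot(dirs[i], dirs[j])) < 0.1:
                print("segments", i, "and", j, "are roughly perpendicular")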
Another way would be detecting your shape using Template Matching. Basically, you should make a template of your shape (L or T) and use it in the matchTemplate function. You should consider that the size of the template should be on the order of your image size; otherwise you may need to resize your image. More information about the algorithm can be found here.
If the shapes enclose areas, you can find the contours of the shape using findContours and approximate each contour as a polygon; the number of sides of that polygon characterizes the shape you want to detect. For instance, if your shape is an L, its polygon would have roughly 6 lines. Also, you can use some other filters along with findContours, such as calculating the area of the shape.
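A short sketch of that contour route (OpenCV 4-style return values; 'binary' is the thresholded image, and the area bound and 6-vertex check are illustrative, not fixed rules):

import cv2

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 100:                       # discard tiny noise blobs
        continue
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    if len(approx) == 6:                 # an L-shaped outline has about 6 corners
        print("candidate L shape, area:", area)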

Contour analysis in LabVIEW Vision Development

I have images containing the same element. I want to detect the contours of the element in both images and compute the contour distances.
For debugging, I'm drawing the points that are taken as corresponding, to visualize which points are used to compute the distances.
Unfortunately, it seems that almost the same points are taken on the template image as on the target image. I thought that it should compute distances between corresponding points of the two images, so if the contour is rotated the distances would be large.
My question is: how are the points chosen to compute the distances? What is wrong with my code? The LabVIEW documentation mentions nothing about the controls I use.
I'm adding a VI to test it and check whether my code is OK or not -> Link
I'm not adding any images, as the point is not to solve my specific case but to figure out how LabVIEW works.
The answer appeared in the thread referenced in the comment. Link again: http://forums.ni.com/t5/Machine-Vision/Contour-analysis/td-p/2138766
To sum up and answer this question:
Compute Contour Distance locates the template contour on the target image using a contour matching algorithm (based on Geometric Pattern Matching). The matching algorithm takes care of shift, rotation, scale and occlusion. Once the match is found, a refinement algorithm generates accurate correspondences between template contour points and target contour points. After completing the one-to-one correspondence, the distance is calculated.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing a, in my opinion, rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (probably better, because the photo this one originated from was quite poorly lit):
(Note how the Template was scaled to kinda fit the size of the image)
The ultimate goal is a tool which determines whether the user shows a thumbs up/thumbs down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, which angle the hand is showing). I know the direction in which the thumb is pointing in the pattern, so if I find the pattern that looks identical, I also have the angle.
I am working with OpenCV (with Python bindings) and already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think that a smaller pattern with a smaller white area fits fully into the white area of a picture, thus creating the best matching factor, although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original image (with the thumb) from it. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to its original approximate size. Subtract this fist-only blob from the original blob to get the thumb blob (a sketch of this follows after the list). Then use the thumb blob direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction.
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofing Mi.
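A minimal sketch of that erode-then-dilate idea (the 3x3 kernel and the iteration count are guesses you would tune to the hand size, and the moment-based centroids assume both blobs end up non-empty):

import cv2
import numpy as np

def thumb_blob(hand_mask, n_iter=10):
    # Isolate the thumb by eroding it away and subtracting the regrown fist.
    kernel = np.ones((3, 3), np.uint8)
    fist = cv2.erode(hand_mask, kernel, iterations=n_iter)    # thumb disappears
    fist = cv2.dilate(fist, kernel, iterations=n_iter)        # fist grows back
    thumb = cv2.subtract(hand_mask, fist)                     # what's left is the thumb

    # Direction: vector from the fist's center of mass to the thumb's.
    m_f, m_t = cv2.moments(fist), cv2.moments(thumb)
    c_f = np.array([m_f["m10"] / m_f["m00"], m_f["m01"] / m_f["m00"]])
    c_t = np.array([m_t["m10"] / m_t["m00"], m_t["m01"] / m_t["m00"]])
    angle = np.degrees(np.arctan2(c_t[1] - c_f[1], c_t[0] - c_f[0]))
    return thumb, angle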
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains a value that specifies how well the template fits the image at that location. If you use CV_TM_SQDIFF then the lowest value in the result array is the location of the best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit values you find for the different template images, and the template image that has the best fit overall is the one you want to select.
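In code, matching a bank of rotated templates and comparing their best-fit scores might look like this (TM_CCOEFF_NORMED is chosen here so the scores are comparable across templates; the 15-degree step is arbitrary):

import cv2
import numpy as np

def best_rotation(image, template, step=15):
    # Match a bank of rotated templates and return the best angle and its score.
    h, w = template.shape
    center = (w / 2, h / 2)
    best = (None, -1.0)
    for angle in range(0, 360, step):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, M, (w, h))
        res = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(res)
        if score > best[1]:
            best = (angle, score)
    return best   # (angle in degrees, correlation score)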
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar feature based classifiers, or one of the others available in OpenCV.
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
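A rough implementation of that idea on a downsampled mask (Floyd-Warshall is O(n^3), so the downsampling factor is there to keep the number of sampled white pixels small; the factor of 8 is arbitrary and should be increased for large masks):

import cv2
import numpy as np

def thumb_axis(mask, scale=8):
    # Find the two white points with the longest shortest path going through white.
    small = cv2.resize(mask, None, fx=1 / scale, fy=1 / scale,
                       interpolation=cv2.INTER_NEAREST)
    ys, xs = np.nonzero(small)
    pts = list(zip(xs, ys))
    n = len(pts)

    # Adjacency: 8-neighbouring white pixels are connected with Euclidean edge cost.
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    index = {p: i for i, p in enumerate(pts)}
    for i, (x, y) in enumerate(pts):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                j = index.get((x + dx, y + dy))
                if j is not None and j != i:
                    D[i, j] = np.hypot(dx, dy)

    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])

    # Ignore disconnected pairs (inf) when looking for the maximum geodesic distance.
    i, j = np.unravel_index(np.argmax(np.where(np.isinf(D), -1, D)), D.shape)
    p1, p2 = np.array(pts[i]) * scale, np.array(pts[j]) * scale
    return p1, p2   # endpoints of the longest geodesic, back in original coordinates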
