OpenCV: comparing simple images with small difference

I have a bunch of "simple" images and I want to check whether they are similar to each other. I compare them using template matching (cv::matchTemplate) and the results are quite good.
Now I want to fine-tune my program and I face a problem. For example, I have two images which look very much alike. The only differences are that one has a thicker line and the digit in front of the item is different. When both images are small, a one-pixel difference in line thickness makes a big difference in the template-matching result. When the line thicknesses are the same and the only difference is the front digit, I get a template-matching result of about 0.98 with CV_TM_CCORR_NORMED on a successful match. When the line thickness differs, the result is more like 0.95.
I cannot lower my threshold below 0.98 because some other similar images have the same line thickness.
Here are example images:
So what options do I have?
I have tried:
dilate the original and template
erode also both
morphologyEx both
calculating keypoints and comparing them
finding corners
But no big success yet. Are these images too simple for detecting "good features" to work?
Any help is very welcome.
Thank you!
EDIT:
Here are some other example images. The ones my program considers similar are placed in the same zip folder.
ZIP

A possible way might be to thin the two images so that every line is one pixel wide, since the differing thickness is the main cause of your similarity problem.
The procedure would be to first binarize/threshold the images, then apply a thinning operation to both, so that both now have the same 1 px stroke width. Then use the usual template matching that worked well for you before.
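For reference, here is a minimal Python sketch of that pipeline. It assumes dark strokes on a light background and that the opencv-contrib-python package is installed (it provides cv2.ximgproc.thinning); the file names are placeholders.

import cv2

# File names are placeholders.
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
tpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Binarize; THRESH_BINARY_INV assumes dark strokes on a light background.
_, img_bin = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
_, tpl_bin = cv2.threshold(tpl, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Thin both images down to 1 px strokes (opencv-contrib function).
img_thin = cv2.ximgproc.thinning(img_bin)
tpl_thin = cv2.ximgproc.thinning(tpl_bin)

# Template matching as before, now insensitive to the original stroke thickness.
res = cv2.matchTemplate(img_thin, tpl_thin, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print("best match score:", max_val, "at", max_loc)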
In case you'd like more details on the thinning/skeletonization of binary images here are a few OpenCV implementations posted on various discussion forums and OpenCV groups:
OpenCV code for thinning (Guo and Hall algo, works with CvMat inputs)
The JR Parker implementation using OpenCV
Possibly more efficient code here (uses OpenCV optimized access methods a lot, however most of the page is in Japanese!)
And lastly a brief overview of thinning in case you're interested.

You need something more elementary here; there isn't much reason to go for fancy methods. Your figures are already binary, and their shapes are very similar overall.
One initial idea: consider the upper points and bottom points in an image and form an upper hull and a bottom hull (simply a hull, not a convex hull or anything else). A point is an upper point (resp. bottom point) if, for a given column i, it is the first non-background point found when scanning column i from the top (bottom) of the image. Also, your image is mostly a single connected component (in some cases there are separate vertical bars, but that is fine), so you can easily discard small components. This step is important for your situation because some of your figures contain noise that is irrelevant to the rest of the image. Treating a connected component with fewer than 100 points as small, these are the hulls you get for the respective images included in the question:
The blue line indicates the upper hull, the green line the bottom hull. If it is not apparent: when we consider the regional maxima and regional minima of these hulls, we obtain the same count in both images. Furthermore, they are all very close except for some displacement along the y axis. If we take the mean x position of the extrema and plot the lines of both images together, we get the following figure. Here the blue and green lines belong to the second image, and the red and cyan lines to the first. Red dots mark the mean x coordinate of some regional minima, and blue dots the same for regional maxima (these are our points of interest). (The following image has been resized for better visualization.)
As you can see, you get many nearly overlapping points without doing anything. We can do even less, i.e. not even care about this overlap, and classify your images in the trivial way: an image a and an image b belong to the same class if they have the same number of regional maxima in the upper hull, the same number of regional minima in the upper hull, the same number of regional maxima in the bottom hull, and the same number of regional minima in the bottom hull. Doing this for all your images, everything is grouped correctly except for the following situation:
In this case we have only 3 maxima and 3 minima for the upper hull in the first image, while there are 4 maxima and 4 minima for the second. Below are the plots of the hulls and the points of interest obtained:
As you can notice, in the second upper hull there are two extrema very close together. Smoothing this curve eliminates both extrema, making the images match under the trivial method. Also, note that if you draw a rectangle around your images, this method will say they are all equal. In that case you will want to compare multiple hulls, discarding the points in the current hull and constructing new ones. Nevertheless, this method is able to group all your images correctly, given that they are all very simple and mostly noise-free.
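If it helps, here is a rough Python sketch of the hull idea, assuming dark figures on a light background; the extrema counting is deliberately crude (plateaus are collapsed into a single extremum) and may need smoothing or tuning for your figures.

import cv2
import numpy as np
from scipy.signal import argrelextrema

def count_extrema(curve, comparator):
    idx = argrelextrema(curve, comparator, order=3)[0]
    if len(idx) == 0:
        return 0
    # Collapse runs of adjacent indices (plateaus) into a single extremum.
    return int(np.sum(np.diff(idx) > 1) + 1)

def hull_signature(path, min_component=100):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, fg = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    # Discard small connected components (noise), as described above.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_component:
            fg[labels == i] = 0
    ys, xs = np.nonzero(fg)
    cols = np.unique(xs)
    # Upper hull: first foreground row per column; bottom hull: last one.
    upper = np.array([ys[xs == c].min() for c in cols], dtype=float)
    bottom = np.array([ys[xs == c].max() for c in cols], dtype=float)
    return (count_extrema(upper, np.greater_equal),
            count_extrema(upper, np.less_equal),
            count_extrema(bottom, np.greater_equal),
            count_extrema(bottom, np.less_equal))

# Trivial classification: two images belong to the same class if the four counts match.
# print(hull_signature("first.png") == hull_signature("second.png"))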

From what I can tell, the difficulty is when the shape is the same and only the size is different. A simple hack approach could be:
- subtract the images, then erode. If the shapes are the same but one is slightly bigger, subtracting will leave only the edges, which will be thin and will vanish under erosion like noise.
Somewhat more formal would be to take the contours, then the approximating polygons, and do an invariant comparison (Hu moments etc.).
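A hedged Python sketch of both ideas follows; it assumes same-size binary-ish images, placeholder file names, and the OpenCV 4 return signature of findContours.

import cv2
import numpy as np

a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)   # must have the same size as a

# Idea 1: subtract, then erode. If the shapes differ only in thickness/size,
# the difference is a thin edge that erosion removes like noise.
diff = cv2.absdiff(a, b)
kernel = np.ones((3, 3), np.uint8)
residue = cv2.erode(diff, kernel, iterations=1)
print("pixels left after erosion:", cv2.countNonZero(residue))

# Idea 2: contour + Hu-moment comparison (scale/rotation invariant).
_, a_bin = cv2.threshold(a, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
_, b_bin = cv2.threshold(b, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
ca, _ = cv2.findContours(a_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cb, _ = cv2.findContours(b_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Compare the largest contour of each image; a lower score means more similar.
ca_max = max(ca, key=cv2.contourArea)
cb_max = max(cb, key=cv2.contourArea)
print("Hu-moment distance:", cv2.matchShapes(ca_max, cb_max, cv2.CONTOURS_MATCH_I1, 0))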

Related

Edge/structure matching for image registration

I am working on image registration between LWIR & RGB images. I am able to extract the edges from both images.
RGB_Edges, LWIR_Edges
Now, I want to match the edges of these images to calculate homography.
I tried to match each edge of the RGB image with the LWIR image separately using template matching (OpenCV), but it didn't work.
Therefore, can anyone please suggest some methods to match the edges/structures of both images that would help compute the homography?
I will really appreciate any suggestion/help.
Thanks.
These two images are already fairly well aligned.
Due to the large thickness and irregularity of the edges, I doubt you can do much better.
If you have the option of operator supervision, point at corresponding points in the two images (four pairs are enough for a homography).
For an automated approach, you can try to thin the strokes and then find (approximate) line segments in both images. For each of a number of segments in one image, find the segment in the other image that is (approximately) parallel, close, and facing it with a significant overlap. You can expect that these segments are in correspondence.
Next, you can obtain corresponding points by forming the intersections between some segments in each image (take segments that are close but as perpendicular as possible).
As this procedure will suffer from outliers, model fitting by RANSAC is probably a good option.
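For that last step, cv2.findHomography with the RANSAC flag does the outlier rejection for you. A minimal sketch, with made-up placeholder correspondences standing in for the segment intersections:

import cv2
import numpy as np

# Placeholder correspondences from segment intersections (replace with real data).
rgb_pts  = np.float32([[10, 10], [200, 15], [195, 180], [12, 175], [100, 90]])
lwir_pts = np.float32([[14, 12], [205, 18], [198, 184], [15, 178], [103, 94]])

# RANSAC discards wrong pairings while fitting the homography (threshold in pixels).
H, inlier_mask = cv2.findHomography(rgb_pts, lwir_pts, cv2.RANSAC, 3.0)
print("homography:\n", H)
print("inliers:", int(inlier_mask.sum()), "of", len(rgb_pts))

# To check the alignment, warp the RGB edge map onto the LWIR frame, e.g.:
# warped = cv2.warpPerspective(rgb_edges, H, (lwir_edges.shape[1], lwir_edges.shape[0]))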

Extract coordinates from image file

How do I get an array of coordinates of a (drawn) line in an image? The coordinates should be relative to the image borders. Input: *.img. Output: an array of coordinates (with a fixed step). Is there any 3rd-party software to do this? Assume a high contrast difference, for example a white background and a black line, or red and green, etc.
Example:
Oh, you mean non-straight lines. You need to define a "line". Intuitively, you might mean a connected area of the image with a high aspect ratio between the length of its medial axis and the distance between the medial axis and the edges (i.e. relatively long and narrow, even if it winds around). Possible approach:
Threshold or select by color. Perhaps select by color based on a histogram of colors, or posterize as described here: Adobe Photoshop-style posterization and OpenCV, then call scipy.ndimage.measurements.label()
For each area above, skeletonize. Helpful tutorial: "Skeletonization using OpenCV-Python". However, you will likely need the distance to the edges as well, so use skimage.morphology.medial_axis(..., return_distance=True)
Do some kind of cleanup/filtering on the skeleton to remove short branches, etc. Thinking about your particular use, and assuming your lines don't loop around, you can just find the longest single path in the skeleton. This is also where you can decide whether a shape is a "line" or not, based on how long the longest path in its skeleton is relative to the distance to the edges. Not sure how to best do that in OpenCV, but "Analyze Skeleton" in Fiji/ImageJ will let you filter by branch length.
What is left is the most elongated medial axis of the original "line" shape. You can resample that to some step that you prefer, or fit it with a spline, etc.
Due to the nature of what you want to do, it is hard to come up with a sample code that will work on a range of images. This is likely to require some careful tuning. I recommend using a small set of images (corpus), running any version of your algo on them and checking the results manually until it is pretty good, then trying it on a large corpus.
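That said, here is a rough sketch of steps 1-2 and the final resampling, assuming a dark line on a white background; the file name is a placeholder, and note that the output points are not ordered along the line (ordering them requires tracing the skeleton).

import cv2
import numpy as np
from scipy.ndimage import label
from skimage.morphology import medial_axis

img = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
# High-contrast assumption: dark line on a white background.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

labels, n = label(mask > 0)
for i in range(1, n + 1):
    blob = labels == i
    # Skeleton plus the distance to the nearest edge for every pixel.
    skel, dist = medial_axis(blob, return_distance=True)
    ys, xs = np.nonzero(skel)
    if len(xs) == 0:
        continue
    # Crude "line-ness" heuristic: skeleton much longer than the stroke half-width.
    if len(xs) > 10 * dist[skel].mean():
        coords = np.column_stack([xs, ys])   # (x, y) relative to the image borders
        coords = coords[::5]                 # fixed-step resampling (unordered!)
        print("line", i, "->", len(coords), "points")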
EDIT: Original answer, only works for straight lines:
You probably want to use the Hough transform (OpenCV tutorial).
Python sample code: Horizontal Line detection with OpenCV
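A minimal probabilistic Hough sketch along those lines (the thresholds are assumptions and will need tuning for your images):

import cv2
import numpy as np

img = cv2.imread("lines.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
# The probabilistic Hough transform returns the end points of detected segments.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=30, maxLineGap=5)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print((x1, y1), "->", (x2, y2))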
EDIT: Related question with sample code to skeletonize: How can I get a full medial-axis line with its perpendicular lines crossing it?

pattern recognition between two very different images

So, my problem is that I have to find common points between two images of a microchip. Here's an example of two images:
Between these two images, we can clearly see some common patterns, like the wires on the bottom right of the first image that can be found in relatively the same place in the second image. Also, the sort of white Z shape in the first image can be seen in the second image; it's a bit harder to spot, but it's there.
I tried to match them with SURF (OpenCV) and found no common points at all. I tried to apply some filters on both images, like edge detection, thresholding, and other filters that I could find in GIMP, but whatever I tried, no common points were ever found.
I'd like to know if you have any ideas for solving this problem. My suggestion right now would be to manually match key features in both images with line segments, but preferably it should be automated.
A solution that uses OpenCV would be preferable, but I'm looking for any possible suggestion. In OpenCV, all the pattern matching situations I have seen were problems far more obvious than this one: no difference in color, and so on.
Unless real-time performance is required, try a simple approach to test whether rotation can be automated:
Circuit boards like the ones in the images are often based on perpendicular straight line segments. Hence you can "despeckle" and remove stuff like coffee stains by finding line segments.
Think about creating a kernel that has a line of dark pixels on one side and bright pixels on the other. Convolve it with the image (or cross-correlate it) to identify all pixels that have a bright/dark sequence which is nearly vertical or horizontal.
You may interlace to speed things up.
Edges of stains and speckles may survive this if you also want angles close to 45° represented!
The resulting image can be interpreted as a sparse point cloud.
You can now use RANSAC or other similar approaches to describe many of the remaining correlations as line segments.
* Use a 2-point line segment as the input model for RANSAC; degrade if small.
* Determine infinite lines that have many inliers.
* Use growth or binning approaches to segment the lines.
benefits:
High likelihood that the line segment representations correspond to circuitry actually present in the image. With a 2-point description of segments, possible transforms are easy.
Easy interpretation of the data, as it can be overlaid in OpenCV.
Rotation should be easy to find, as the rotation that aligns the most found lines with the horizontal and/or vertical axes.
Apply the rotation.
Repeat for both images.
Now you can determine the best translation between the images by a simple x,y cross-correlation.
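A rough Python sketch of those last two steps, using HoughLinesP segments to estimate the dominant rotation and cv2.phaseCorrelate as a stand-in for the plain x,y cross-correlation; the file names, thresholds, and angle sign convention are assumptions to check.

import cv2
import numpy as np

def dominant_rotation(gray):
    """Angle (degrees) that best aligns the detected segments with the x/y axes."""
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=5)
    if segs is None:
        return 0.0
    angles = []
    for x1, y1, x2, y2 in segs[:, 0]:
        a = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 90.0
        angles.append(a if a <= 45 else a - 90)   # signed offset from the nearest axis
    return float(np.median(angles))

def deskew(gray):
    # NOTE: the sign of the angle may need flipping depending on the image orientation.
    angle = dominant_rotation(gray)
    center = (gray.shape[1] / 2, gray.shape[0] / 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))

a = deskew(cv2.imread("board1.png", cv2.IMREAD_GRAYSCALE))
b = deskew(cv2.imread("board2.png", cv2.IMREAD_GRAYSCALE))
b = cv2.resize(b, (a.shape[1], a.shape[0]))   # phaseCorrelate needs equal sizes

# Translation between the deskewed images via correlation in the Fourier domain.
(dx, dy), _ = cv2.phaseCorrelate(np.float32(a), np.float32(b))
print("rotation-corrected shift:", dx, dy)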
If the top image is always of that quality (quasi bilevel patterns, easy edge detection), I would try a good geometric matching algorithm (such as Cognex or Halcon), training with the top image and searching the bottom one.
Maybe it is worth compensating for rotation first (I hope there is no scaling). You could do that by determining the dominant edge direction, possibly using a Hough transform, or, much better, by careful mechanical alignment of the sensors.
Anyway, chances of success are low, this is a difficult problem.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing what is, in my opinion, a rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (probably better, because the photo this one originated from was quite poorly lit):
(Note how the template was scaled to roughly fit the size of the image.)
The ultimate goal is a tool which determines whether the user shows a thumbs up/thumbs down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern which looks identical I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far it's not working reliably.
I can only guess why MatchTemplate fails, but I think that a smaller pattern with a smaller white area fits entirely inside the white area of the picture, thus producing the best matching score even though it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV that I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original image (with the thumb) from it. The difference will be the area between the thumb and the fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to approximately its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction.
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might also use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
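As an illustration of the erode-N/dilate-N idea above, a rough Python sketch; the input is assumed to be an already binarized hand mask, N must be tuned to the hand size, and the moment-based direction is only a crude stand-in for the Feret diameter.

import cv2
import numpy as np

hand = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)   # white blob = hand (assumption)
_, blob = cv2.threshold(hand, 127, 255, cv2.THRESH_BINARY)

# Morphological opening with enough iterations to erase the thin thumb,
# leaving (approximately) the fist. N depends on the hand size; tune it.
N = 15
kernel = np.ones((3, 3), np.uint8)
fist = cv2.dilate(cv2.erode(blob, kernel, iterations=N), kernel, iterations=N)

# The thumb is what the opening removed.
thumb = cv2.subtract(blob, fist)

# Direction: compare the thumb's center of mass to the fist's.
m_fist, m_thumb = cv2.moments(fist), cv2.moments(thumb)
if m_thumb["m00"] == 0 or m_fist["m00"] == 0:
    raise ValueError("opening removed nothing (or everything); adjust N")
fx, fy = m_fist["m10"] / m_fist["m00"], m_fist["m01"] / m_fist["m00"]
tx, ty = m_thumb["m10"] / m_thumb["m00"], m_thumb["m01"] / m_thumb["m00"]
angle = np.degrees(np.arctan2(fy - ty, tx - fx))   # ~90 = thumb up, ~-90 = thumb down
print("thumb direction: %.1f degrees" % angle)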
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array from MatchTemplate contains a value that specifies how good the fit of the template in the image is at that location. If you use CV_TM_SQDIFF then the lowest value in the result array is the location of the best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit value you find for each template image, and the template image with the best fit overall is the one you want to select.
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
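To illustrate, a small sketch that brute-forces rotations of a single template and keeps the best normalized score; scaling could be added as an inner loop the same way, file names are placeholders, and rotation may clip the template corners unless you pad it first.

import cv2
import numpy as np

img = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("thumb_template.png", cv2.IMREAD_GRAYSCALE)

best = (-1.0, None, None)   # (score, angle, location)
for angle in range(0, 360, 15):
    center = (template.shape[1] / 2, template.shape[0] / 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(template, M, (template.shape[1], template.shape[0]))
    res = cv2.matchTemplate(img, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if max_val > best[0]:
        best = (max_val, angle, max_loc)

print("best score %.3f at angle %s, location %s" % best)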
For the more advanced stuff, check out SIFT, Haar feature based classifiers, or one of the others available in OpenCV
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
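A rough sketch of that idea with SciPy's Floyd-Warshall; the graph construction here just connects nearby samples with a distance cutoff, and a more careful version would verify that each connecting segment stays inside the white area.

import cv2
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import floyd_warshall

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE) > 127
ys, xs = np.nonzero(mask)

# Subsample the white pixels to keep the all-pairs computation small.
idx = np.random.choice(len(xs), size=min(300, len(xs)), replace=False)
pts = np.column_stack([xs[idx], ys[idx]]).astype(float)

# Connect samples that are close; np.inf marks non-edges for floyd_warshall.
d = cdist(pts, pts)
graph = np.where(d < 15, d, np.inf)

sp = floyd_warshall(graph)
sp[~np.isfinite(sp)] = -1
i, j = np.unravel_index(np.argmax(sp), sp.shape)

# The two samples with the longest geodesic between them; the thumb points along this line.
dx, dy = pts[j] - pts[i]
print("direction: %.1f degrees" % np.degrees(np.arctan2(-dy, dx)))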

How do I find Waldo with Mathematica?

This was bugging me over the weekend: What is a good way to solve those Where's Waldo? ['Wally' outside of North America] puzzles, using Mathematica (image-processing and other functionality)?
Here is what I have so far, a function which reduces the visual complexity a little bit by dimming
some of the non-red colors:
whereIsWaldo[url_] := Module[{waldo, waldo2, waldoMask},
  waldo = Import[url];
  waldo2 = Image[ImageData[waldo] /. {
      {r_, g_, b_} /; Not[r > .7 && g < .3 && b < .3] :> {0, 0, 0},
      {r_, g_, b_} /; (r > .7 && g < .3 && b < .3) :> {1, 1, 1}}];
  waldoMask = Closing[waldo2, 4];
  ImageCompose[waldo, {waldoMask, .5}]
]
And an example of a URL where this 'works':
whereIsWaldo["http://www.findwaldo.com/fankit/graphics/IntlManOfLiterature/Scenes/DepartmentStore.jpg"]
(Waldo is by the cash register):
I've found Waldo!
How I've done it
First, I'm filtering out all colours that aren't red
waldo = Import["http://www.findwaldo.com/fankit/graphics/IntlManOfLiterature/Scenes/DepartmentStore.jpg"];
red = Fold[ImageSubtract, #[[1]], Rest[#]] &@ColorSeparate[waldo];
Next, I'm calculating the correlation of this image with a simple black and white pattern to find the red and white transitions in the shirt.
corr = ImageCorrelate[red,
  Image@Join[ConstantArray[1, {2, 4}], ConstantArray[0, {2, 4}]],
  NormalizedSquaredEuclideanDistance];
I use Binarize to pick out the pixels in the image with a sufficiently high correlation and draw white circles around them to emphasize them, using Dilation
pos = Dilation[ColorNegate[Binarize[corr, .12]], DiskMatrix[30]];
I had to play around a little with the level. If the level is too high, too many false positives are picked out.
Finally I'm combining this result with the original image to get the result above
found = ImageMultiply[waldo, ImageAdd[ColorConvert[pos, "GrayLevel"], .5]]
My guess at a "bulletproof way to do this" (think CIA finding Waldo in any satellite image any time, not just a single image without competing elements, like striped shirts)... I would train a Boltzmann machine on many images of Waldo - all variations of him sitting, standing, occluded, etc.; shirt, hat, camera, and all the works. You don't need a large corpus of Waldos (maybe 3-5 will be enough), but the more the better.
This will assign clouds of probabilities to various elements occurring in whatever the correct arrangement is, and then establish (via segmentation) what an average object size is and fragment the source image into cells of objects which most resemble individual people (considering possible occlusions and pose changes). Since Waldo pictures usually include a LOT of people at about the same scale, this should be a very easy task. Then feed these segments to the pre-trained Boltzmann machine. It will give you the probability of each one being Waldo. Take the one with the highest probability.
This is how OCR, ZIP code readers, and strokeless handwriting recognition work today. Basically you know the answer is there, you know more or less what it should look like, and everything else may have common elements but is definitely "not it", so you don't bother with the "not it"s; you just look at the likelihood of "it" among all possible "it"s you've seen before (in ZIP codes, for example, you'd train a BM for just 1s, just 2s, just 3s, etc., then feed each digit to each machine and pick the one with the most confidence). This works a lot better than a single neural network learning the features of all numbers.
I agree with @GregoryKlopper that the right way to solve the general problem of finding Waldo (or any object of interest) in an arbitrary image would be to train a supervised machine learning classifier. Using many positive and negative labeled examples, an algorithm such as Support Vector Machine, Boosted Decision Stump or Boltzmann Machine could likely be trained to achieve high accuracy on this problem. Mathematica even includes these algorithms in its Machine Learning Framework.
The two challenges with training a Waldo classifier would be:
Determining the right image feature transform. This is where @Heike's answer would be useful: a red filter and a striped pattern detector (e.g., wavelet or DCT decomposition) would be a good way to turn raw pixels into a format that the classification algorithm could learn from. A block-based decomposition that assesses all subsections of the image would also be required ... but this is made easier by the fact that Waldo is a) always roughly the same size and b) always present exactly once in each image.
Obtaining enough training examples. SVMs work best with at least 100 examples of each class. Commercial applications of boosting (e.g., the face-focusing in digital cameras) are trained on millions of positive and negative examples.
A quick Google image search turns up some good data -- I'm going to have a go at collecting some training examples and coding this up right now!
However, even a machine learning approach (or the rule-based approach suggested by @iND) will struggle for an image like the Land of Waldos!
I don't know Mathematica . . . too bad. But I like the answer above, for the most part.
Still there is a major flaw in relying on the stripes alone to glean the answer (I personally don't have a problem with one manual adjustment). There is an example (listed by Brett Champion, here) presented which shows that they, at times, break up the shirt pattern. So then it becomes a more complex pattern.
I would try an approach of shape id and colors, along with spatial relations. Much like face recognition, you could look for geometric patterns at certain ratios from each other. The caveat is that usually one or more of those shapes is occluded.
Get a white balance on the image, and a red balance from the image. I believe Waldo is always the same value/hue, but the image may be from a scan, or a bad copy. Then always refer to an array of the colors that Waldo actually is: red, white, dark brown, blue, peach, {shoe color}.
There is a shirt pattern, and also the pants, glasses, hair, face, shoes and hat that define Waldo. Also, relative to other people in the image, Waldo is on the skinny side.
So, find random people to obtain the height of people in this pic. Measure the average height of a bunch of things at random points in the image (a simple outline will produce quite a few individual people). If each thing is not within some standard deviation of the others, it is ignored for now. Compare the average of the heights to the image's height. If the ratio is too great (e.g., 1:2, 1:4, or similarly close), then try again. Run it 10(?) times to make sure that the samples are all pretty close together, excluding any average that is outside some standard deviation. Possible in Mathematica?
This is your Waldo size. Waldo is skinny, so you are looking for something like 5:1 or 6:1 (or whatever) ht:wd. However, this is not sufficient. If Waldo is partially hidden, the height could change. So, you are looking for a block of red-white that is ~2:1. But there have to be more indicators.
Waldo has glasses. Search for two circles 0.5:1 above the red-white.
Blue pants. Any amount of blue at the same width within any distance between the end of the red-white and the distance to his feet. Note that he wears his shirt short, so the feet are not too close.
The hat. Red-white any distance up to twice the top of his head. Note that it must have dark hair below, and probably glasses.
Long sleeves. red-white at some angle from the main red-white.
Dark hair.
Shoe color. I don't know the color.
Any of those could apply. These are also negative checks against similar people in the pic -- e.g., #2 negates wearing a red-white apron (too close to shoes), #5 eliminates light colored hair. Also, shape is only one indicator for each of these tests . . . color alone within the specified distance can give good results.
This will narrow down the areas to process.
Storing these results will produce a set of areas that should have Waldo in it. Exclude all other areas (e.g., for each area, select a circle twice as big as the average person size), and then run the process that @Heike laid out with removing all but red, and so on.
Any thoughts on how to code this?
Edit:
Thoughts on how to code this . . . exclude all areas but Waldo red, skeletonize the red areas, and prune them down to a single point. Do the same for Waldo hair brown, Waldo pants blue, Waldo shoe color. For Waldo skin color, exclude, then find the outline.
Next, exclude non-red, dilate (a lot) all the red areas, then skeletonize and prune. This part will give a list of possible Waldo center points. This will be the marker to compare all other Waldo color sections to.
From here, using the skeletonized red areas (not the dilated ones), count the lines in each area. If there is the correct number (four, right?), this is certainly a possible area. If not, I guess just exclude it (as being a Waldo center . . . it may still be his hat).
Then check if there is a face shape above, a hair point above, pants point below, shoe points below, and so on.
No code yet -- still reading the docs.
I have a quick solution for finding Waldo using OpenCV.
I used the template matching function available in OpenCV to find Waldo.
To do this a template is needed. So I cropped Waldo from the original image and used it as a template.
Next I called the cv2.matchTemplate() function along with the normalized correlation coefficient as the method used. It returned a high probability at a single region as shown in white below (somewhere in the top left region):
The position of the most probable region was found using the cv2.minMaxLoc() function, which I then used to draw a rectangle to highlight Waldo:
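A minimal Python version of those steps might look like this; the file names are placeholders, and the template is the crop of Waldo mentioned above.

import cv2

scene = cv2.imread("wheres_waldo.jpg")
waldo = cv2.imread("waldo_template.jpg")     # cropped from the original image

# Normalized correlation coefficient, as described above.
res = cv2.matchTemplate(scene, waldo, cv2.TM_CCOEFF_NORMED)
_, max_val, _, top_left = cv2.minMaxLoc(res)

# Draw a rectangle around the best-scoring location.
h, w = waldo.shape[:2]
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(scene, top_left, bottom_right, (0, 0, 255), 2)
cv2.imwrite("found_waldo.jpg", scene)
print("match confidence: %.3f" % max_val)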
