Comparing two binary images in OpenCV

I have two binary images of a hand which are almost the same. How should I compare them to know whether they represent nearly the same shape or not? I have tried finding the Euclidean distance between the two images, but it does not give the correct answer if the image is slightly changed, shifted to the left or right, or slightly reduced in size. I have also tried HOG descriptors in OpenCV, but I am still unable to get the correct answer when I compare more than one image. What is the best way to compare two binary images based on shape, or on any other feature, to find nearly matching images without considering the size of the image? Links to the images are http://postimg.org/image/w20tuuzmv/ and http://postimg.org/image/jndr4br9x/

I think that the Generalized Hough transform might be a good solution for you. Here is a tutorial about it.
Alternatively, you can try to cut the hand out of one image (just use the contour's bounding rect) and then use it as a template to search for in the second image using the template matching technique (you can read more about it here). When you find the point with the highest correlation value, you need to decide whether it is high enough; you will have to find a suitable threshold on your own.
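A minimal sketch of the template matching idea with the OpenCV Python bindings, assuming the two binary hand images are already on disk; the file names and the 0.7 threshold are placeholders, not values from the question:

import cv2

# Load both binary hand images (file names are placeholders).
img1 = cv2.imread("hand1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("hand2.png", cv2.IMREAD_GRAYSCALE)

# Cut the hand out of the first image using its contour's bounding rect.
contours, _ = cv2.findContours(img1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
template = img1[y:y + h, x:x + w]

# Slide the template over the second image and keep the best correlation.
result = cv2.matchTemplate(img2, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# The decision threshold has to be tuned on your own data.
print("best correlation:", max_val, "at", max_loc)
print("match" if max_val > 0.7 else "no match")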

Are the images just rotated, translated and scaled? If so, you could compute the principal components of the images using PCA, then rotate each image so that its first component points in a fixed direction (e.g. always vertical). You could then compute the centroid of each image and translate the images so that the centroids always end up in the same position (e.g. the center of the image). To use the same scale, you could resize the images so that the sum of the distances between each white pixel and the centroid is the same in both images. After that it is easy to compare the images, for example score = np.sum(A == B).
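A rough sketch of that normalization, assuming both inputs are binary images with the hand in white; the rotation sign and the 180-degree ambiguity of PCA are glossed over, so treat it as an illustration rather than a finished implementation:

import cv2
import numpy as np

def normalize(img, size=300):
    # Coordinates of the white pixels and their centroid.
    ys, xs = np.nonzero(img)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    centroid = pts.mean(axis=0)

    # First principal component gives the main axis; rotate it to vertical.
    # (PCA leaves a 180-degree ambiguity which this sketch ignores.)
    _, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    main_axis = eigvecs[:, -1]
    angle = np.degrees(np.arctan2(main_axis[1], main_axis[0])) - 90

    # Scale so the mean white-pixel-to-centroid distance is a fixed value.
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    scale = (size / 4) / mean_dist

    # Rotate and scale about the centroid, then move the centroid to the image center.
    M = cv2.getRotationMatrix2D((float(centroid[0]), float(centroid[1])), angle, scale)
    M[:, 2] += np.array([size / 2, size / 2]) - centroid
    return cv2.warpAffine(img, M, (size, size), flags=cv2.INTER_NEAREST)

A = normalize(cv2.imread("hand1.png", cv2.IMREAD_GRAYSCALE))
B = normalize(cv2.imread("hand2.png", cv2.IMREAD_GRAYSCALE))
score = np.sum(A == B)  # higher means more similar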

Related

How to identify changes in two images of same object

I have two images which I know represent the exact same object. In the picture below, they are referred to as Reference and Match.
The image Match can undergo the following transformations compared to Reference:
The object may have changed its appearance locally by addition (e.g. dirt or lettering added to the side) or omission (e.g. the side mirror has been removed).
It may be stretched or reduced in size horizontally only (it is not resized in the vertical direction).
Portions of Reference image are not present in Match (shaded in red in Reference Image).
Question: How can the regions which have "changed" in the ways mentioned above be identified?
Idea#1: Dynamic Time Warping seems like a good candidate once the beginning and end of Match image (numbered 1 and 3 in the image) are aligned with corresponding columns in Reference Image, but I am not sure how to proceed.
Idea#2: Match SIFT features across images. The tessellation produced by feature point locations breaks up the image into non-uniform tiles. Use feature correspondences across images to determine which tiles to match across images. Use a similarity measure to figure out any changes.
You might want to consider an iterative registration algorithm. Basically you want to perform optimization to find the parameters of the transform, in your case horizontal scaling and horizontal translation. Once you optimize the parameters you will have the transformation between the two images, transform one to match the other, and can then use a subtraction to identify the regions with differences.
For registration take a look at the ITK library.
You can probably do a gradient descent optimization using mutual information as the metric. ITK has a number of different transforms that will capture translation and scaling. The code should run quickly on sample images like the ones you show.
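If you would rather stay inside OpenCV than pull in ITK, a rough sketch of the same idea (iterative optimization of an affine warp, using the ECC criterion instead of mutual information) could look like this; the file names and the difference threshold of 40 are placeholders:

import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("match.png", cv2.IMREAD_GRAYSCALE)

# Start from the identity warp; MOTION_AFFINE covers horizontal scaling and
# translation (with a bit more freedom than strictly needed).
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria)

# Warp the Match image onto the Reference frame and subtract to see changes.
aligned = cv2.warpAffine(mov, warp, (ref.shape[1], ref.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
diff = cv2.absdiff(ref, aligned)
_, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)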

Image Registration by Manual marking of corresponding points using OpenCV

I have a processed binary image of dimension 300x300. This processed image contains a few objects (a person or a vehicle).
I also have another RGB image of the same scene, of dimension 640x480. It is taken from a different position.
Note: the two cameras are not the same.
I can detect objects to some extent in the first image using background subtraction. I want to detect the corresponding objects in the 2nd image. I went through the OpenCV functions
getAffineTransform
getPerspectiveTransform
findHomography
estimateRigidTransform
All these functions require corresponding points (coordinates) in the two images.
In the 1st binary image, I only have the information that an object is present; it does not have features exactly matching those of the second (RGB) image.
I thought conventional feature matching to determine corresponding control points (which could be used to estimate the transformation parameters) was not feasible, because I do not think I can detect and match features between a binary image and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?
The solution I tried is more of a manual marking of points to estimate the transformation parameters (please correct me if I am wrong):
Note: neither camera moves.
Manually marked rectangles around objects in the processed (binary) image
Noted down the coordinates of the rectangles
Manually marked rectangles around objects in the 2nd RGB image
Noted down the coordinates of the rectangles
Repeated the above steps for different samples of the 1st binary and 2nd RGB images
Now that I have some 20 corresponding points, I used them in the function as:
findHomography(src_pts, dst_pts, 0);
So once I detect an object in the 1st image,
I draw a bounding box around it,
transform the coordinates of its vertices using the transformation found above,
and finally draw a box in the 2nd RGB image with the transformed coordinates as vertices.
But this does not place the box in the 2nd RGB image exactly over the person/object; instead it is drawn somewhere else. Even though I take several sample pairs of binary and RGB images and use many corresponding points to estimate the transformation parameters, they do not seem to be accurate enough.
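For reference, a sketch of how that pipeline looks with the Python bindings, using RANSAC so that badly marked pairs are treated as outliers; the point lists, file name, detected box and the 5 px reprojection threshold are all placeholders:

import cv2
import numpy as np

# Manually marked correspondences (binary-image coordinates on the left,
# RGB-image coordinates on the right); replace with your ~20 real pairs.
src_points = [[10, 20], [200, 30], [180, 250], [15, 240], [100, 120]]
dst_points = [[45, 60], [430, 75], [400, 420], [50, 400], [230, 220]]
src_pts = np.float32(src_points).reshape(-1, 1, 2)
dst_pts = np.float32(dst_points).reshape(-1, 1, 2)

# RANSAC fits the homography on random subsets and keeps the model with the
# most inliers; ransacReprojThreshold (5 px here) is the maximum reprojection
# error for a pair to still count as an inlier. CV_LMEDS is a similar robust
# estimator based on the least median of squares.
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Map a bounding box detected in the binary image into the RGB image.
rgb_image = cv2.imread("scene_rgb.png")
x, y, w, h = 50, 60, 40, 80  # placeholder detection
corners = np.float32([[x, y], [x + w, y], [x + w, y + h],
                      [x, y + h]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(corners, H)
cv2.polylines(rgb_image, [np.int32(projected)], True, (0, 255, 0), 2)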
What is the meaning of the CV_RANSAC and CV_LMEDS options and of ransacReprojThreshold, and how do I use them?
Is my approach good? What should I modify/do to make the registration accurate?
Any alternative approach to be used?
I'm fairly new to OpenCV myself, but my suggestions would be:
Seeing as you have the objects identified in the first image, I shouldn't think it would be hard to get keypoints and extract features? (or maybe you have this already?)
Identify features in the 2nd image
Match the features using OpenCV FlannBasedMatcher or similar
Highlight matching features in 2nd image or whatever you want to do.
I'd hope that because all your features in the first image should be positives (you know they are the features you want), it'll be relatively straightforward to get accurate matches.
Like I said, I'm new to this so the ideas may need some elaboration.
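A rough sketch of the matching steps above with ORB features and a FLANN matcher (LSH index, since ORB descriptors are binary); whether enough keypoints can be found on a binary mask is an open question, so treat this purely as a starting point (file names are placeholders):

import cv2

img1 = cv2.imread("binary_objects.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_rgb.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary (ORB) descriptors in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN with an LSH index is the usual setup for binary descriptors.
index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Keep only distinctive matches (Lowe's ratio test), then visualize them.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
out = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
cv2.imwrite("matches.png", out)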
It might be a little late to answer this and the asker might not see it, but if the 1st image is originally grayscale then this could be done:
1.) 2nd image ----> grayscale ------> gray2ndimg
2.) Find point-to-point correspondences between gray1stimg and gray2ndimg by matching features.

How to compare two contours of a binary pattern image?

I'm creating a part scanner in C that pulls all possibilities for scanned parts as images in a directory. My code currently fetches all images from that directory and dumps them into a vector. I then produce groups of contours for all the images. The program then falls into a while loop where it constantly grabs images from a webcam, and generates contours for those as well. I have set up a jig for the part to rest on, so orientation and size are not a concern, however I don't want to have to calibrate the machine, so there may be movement between the template images and the part images taken.
What is the best way to compare the contours? I have tried several methods, including matchTemplate without contours, but if you take a look at the two parts below you can see that they are very close to each other, so matchShapes and matchTemplate can't distinguish between them the way I was using them. I'm also not sure how to use cvMatchShapes. It works when just loading the images directly into matchShapes, but the results are inconclusive. I think that contours are the way to go; I'm just not sure how to go about implementing the comparison phase. Any help would be great.
You can view the templates here: http://www.cryogendesign.com/partDetection.html
If you are ready to do it yourself, one approach could be to compute a "distance image" (assign every pixel the smallest Euclidean distance to the contour taken as the reference). See http://en.wikipedia.org/wiki/Distance_transform.
Using this distance image, you can quickly compute the average distance of a new contour to the reference one (for every contour pixel, get the distance from the distance image). The average distance gives you an indication of the goodness-of-fit and will let you find the best match to a set of reference templates.
If the parts have some moving freedom, the situation is a bit harder: before computing the average distance, you must fit the new contour to the reference one. You will need to apply a suitable transform (translation, rotation, possibly scaling), and find the parameters that will minimize... the average distance.
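A small sketch of the distance-image idea with OpenCV, assuming the reference part and the candidate part are binary images of the same size with the part in white (file names are placeholders):

import cv2
import numpy as np

ref = cv2.imread("template_part.png", cv2.IMREAD_GRAYSCALE)
cand = cv2.imread("camera_part.png", cv2.IMREAD_GRAYSCALE)

def contour_mask(binary):
    # Image that is 255 exactly on the outer contour pixels.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    mask = np.zeros_like(binary)
    cv2.drawContours(mask, contours, -1, 255, 1)
    return mask

ref_contour = contour_mask(ref)
cand_contour = contour_mask(cand)

# Distance image: every pixel holds the Euclidean distance to the nearest
# reference contour pixel (distanceTransform measures distance to zero pixels,
# so the contour has to be the zero set).
dist = cv2.distanceTransform(cv2.bitwise_not(ref_contour), cv2.DIST_L2, 3)

# Average distance of the candidate contour to the reference contour.
ys, xs = np.nonzero(cand_contour)
avg_dist = dist[ys, xs].mean()
print("average contour distance:", avg_dist)  # lower means a better fit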
You can calculate the chamfer distance between the two contours:
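(The formula, which appears to have been an image in the original post, is presumably the standard chamfer score

d_{\text{chamfer}}(x) = \frac{1}{|T|} \sum_{t \in T} DT(t + x)

i.e. for a given shift x, the average of the distance-transform values of the image edges at every shifted template edge point t + x.)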
T and E are the sets of edges of the template and the image, and x is the reference point at which you start comparing the two sets of edges. So for each x you get a different value.
DT is the distance transform of an image. Matlab provides the algorithm here.
If you want a more detailed version of how to calculate the chamfer distance, take a look here.

Shape/Pattern Matching Approach in Computer Vision

I am currently facing a problem which is, in my opinion, rather common and should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (or probably better, because the photo this one originated from was quite poorly lit):
(Note how the Template was scaled to kinda fit the size of the image)
The ultimate goal is a tool which determines whether the user is showing a thumbs up/thumbs down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture most closely (or, more precisely, which angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical I also have the angle.
I am working with OpenCV (with Python bindings) and have already tried cvMatchTemplate and MatchShapes, but so far it's not really working reliably.
I can only guess why MatchTemplate failed, but I think that a smaller pattern with a smaller white area fits entirely inside the white area of a picture, thus producing the best matching score even though it is obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original image with the thumb. The difference will be the area between the thumb and fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to its original approximate size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine the direction.
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might also use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofing Mi.
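As an illustration of the erode/dilate idea above, a minimal sketch assuming a binary mask with the hand in white; the kernel size and the number of iterations N are guesses that would need tuning:

import cv2
import numpy as np

hand = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
N = 6  # increase until the thumb disappears after erosion

# Erode until the thin thumb vanishes, then dilate the fist back to size.
fist = cv2.dilate(cv2.erode(hand, kernel, iterations=N), kernel, iterations=N)

# The thumb is what remains when the fist is subtracted from the original blob.
thumb = cv2.subtract(hand, fist)

# Direction: vector from the fist's center of mass to the thumb's center of mass.
def centroid(mask):
    m = cv2.moments(mask, binaryImage=True)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

direction = centroid(thumb) - centroid(fist)
angle = np.degrees(np.arctan2(direction[1], direction[0]))
print("thumb angle (degrees, image coordinates):", angle)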
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains a value that specifies how well the template fits the image at that location. If you use CV_TM_SQDIFF then the lowest value in the result array is the location of best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels then you can compare the best-fit values you find for all the different template images, and the template image that has the best fit overall is the one you want to select.
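A sketch of that search over rotated templates, assuming the template is a binary mask smaller than the scene image and already scaled to roughly the right size; the 15-degree step and the file names are arbitrary:

import cv2
import numpy as np

scene = cv2.imread("hand_scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("thumbs_up_template.png", cv2.IMREAD_GRAYSCALE)

best_score, best_angle, best_loc = -1.0, None, None
h, w = template.shape

for angle in range(0, 360, 15):
    # Rotate the template about its center (content near the corners may get clipped).
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))

    result = cv2.matchTemplate(scene, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_angle, best_loc = max_val, angle, max_loc

print("best match: angle", best_angle, "score", best_score, "at", best_loc)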
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar-feature-based classifiers, or one of the others available in OpenCV.
I think you can get excellent results if you just compute the two points that have the furthest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
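A rough sketch of that idea: sample points from the white area, connect nearby samples whose midpoint is still white, and run Floyd-Warshall on the resulting graph. The sample count, the 15 px connection radius and the file name are all guesses:

import cv2
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from scipy.spatial.distance import cdist

hand = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)

# Subsample the white pixels to keep the graph small (Floyd-Warshall is O(n^3)).
ys, xs = np.nonzero(hand)
idx = np.random.choice(len(xs), size=min(300, len(xs)), replace=False)
pts = np.column_stack([xs[idx], ys[idx]]).astype(float)

# Connect pairs that are close together and whose midpoint is still white,
# as a cheap test that the edge stays inside the hand.
dists = cdist(pts, pts)
graph = np.where(dists < 15, dists, 0)
mid = ((pts[:, None, :] + pts[None, :, :]) / 2).astype(int)
inside = hand[mid[..., 1], mid[..., 0]] > 0
graph = np.where(inside, graph, 0)

# All-pairs shortest paths through white; the largest finite entry is the
# "furthest shortest path", and its endpoints give the thumb direction.
sp = floyd_warshall(graph, directed=False)
sp[~np.isfinite(sp)] = -1
i, j = np.unravel_index(np.argmax(sp), sp.shape)
direction = pts[j] - pts[i]
angle = np.degrees(np.arctan2(direction[1], direction[0]))
print("endpoints:", pts[i], pts[j], "angle:", angle)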

Image Comparison

What is an efficient way to compare two images in Visual C?
Also, in which format do the images have to be stored (bmp, gif, jpeg, ...)?
Please provide some suggestions.
If the images you are trying to compare have distinctive characteristics that you are trying to differentiate, then PCA is an excellent way to go. The question of which file format you need is really irrelevant; you need to load the image into the program as an array of numbers and do the analysis.
Your question opens a can of worms in terms of complexity.
If you want to compare two images to check whether they are exactly the same, you can compute an md5 hash of each file (after removing any metadata that could distort the result).
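For the bitwise case, a trivial sketch that hashes the decoded pixel buffer instead of the file, so container-level metadata is ignored (file names are placeholders):

import cv2
import hashlib

def pixel_md5(path):
    # Hash the decoded pixel data rather than the raw file bytes.
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    return hashlib.md5(img.tobytes()).hexdigest()

same = pixel_md5("a.png") == pixel_md5("b.png")
print("bitwise identical pixels:", same)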
If you want to compare whether they look the same, then it's a completely different story altogether. "Look the same" is meant in a very loose sense (e.g. they are exactly the same image but stored in two different file formats). For this you need more advanced algorithms, which will give you a probability that two images are the same. Not being an expert in the field, I would try the following "invented out of my head" algorithm:
take an arbitrary set of pixel points from the image.
for each pixel "grow" a polygon out of the surrounding pixels which are near in color (according to HSV colorspace)
do the same for the other image
for each polygon of one image, check the geometrical similitude with all the other polygons in the other image, and pick the highest value. Divide this value by the area of the polygon (to normalize).
create a vector out of the highest values obtained
the higher is the norm of this vector, the higher is the chance that the two images are the same.
This algorithm should be insensitive to color drift and image rotation. Maybe also scaling (you normalize against the area). But I restate: not an expert, there's probably much better, and it could make kittens cry.
I did something similar to detect movement from a MJPEG stream and record images only when movement occurs.
For each decoded image, I compared to the previous using the following method.
Resize the image to effectively thumbnail size (I resized fairly hi-res images down by a factor of ten)
Compare the brightness of each pixel to the previous image and flag if it is much lighter or darker (threshold value 1)
Once you've done that for each pixel, you can use the count of different pixels to determine whether the image is the same or different (threshold value 2)
Then it was just a matter of tuning the two threshold values.
I did the comparisons using System.Drawing.Bitmap, but as my source images were JPEGs, there was some artifacting.
It's a nice simple way to compare images for differences if you're going to roll it yourself.
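A rough Python/OpenCV equivalent of that approach (the original was written against System.Drawing.Bitmap); both threshold values and the file names are placeholders to be tuned:

import cv2
import numpy as np

def is_different(prev_path, curr_path,
                 pixel_threshold=25,   # threshold value 1: per-pixel brightness change
                 count_threshold=50):  # threshold value 2: how many pixels must change
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)

    # Shrink to a thumbnail (factor of ten) to suppress noise and speed things up.
    small_prev = cv2.resize(prev, None, fx=0.1, fy=0.1, interpolation=cv2.INTER_AREA)
    small_curr = cv2.resize(curr, None, fx=0.1, fy=0.1, interpolation=cv2.INTER_AREA)

    # Flag pixels that got much lighter or darker, then count them.
    diff = cv2.absdiff(small_prev, small_curr)
    changed = int(np.count_nonzero(diff > pixel_threshold))
    return changed > count_threshold

print(is_different("frame_0001.jpg", "frame_0002.jpg"))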
If you want to determine if 2 images are the same perceptually, I believe the best way to do it is using an Image Hashing algorithm. You'd compute the hash of both images and you'd be able to use the hashes to get a confidence rating of how much they match.
One that I've had some success with is pHash, though I don't know how easy it would be to use with Visual C. Searching for "Geometric Hashing" or "Image Hashing" might be helpful.
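Not pHash itself, but a minimal average-hash sketch in the same spirit; the 8x8 hash size and the Hamming-distance cutoff of 10 are arbitrary:

import cv2
import numpy as np

def average_hash(path, hash_size=8):
    # Shrink to hash_size x hash_size, then set bits for pixels above the mean.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(img, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

dist = hamming(average_hash("a.jpg"), average_hash("b.jpg"))  # placeholder names
print("perceptually similar" if dist < 10 else "different", "- distance:", dist)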
Testing for strict identity is simple: Just compare every pixel in source image A to the corresponding pixel value in image B. If all pixels are identical, the images are identical.
But I guess you don't want this kind of strict identity. You probably want images to be considered "identical" even if certain transformations have been applied to image B. Examples of these transformations might be:
changing image brightness globally (for every pixel)
changing image brightness locally (for every pixel in a certain area)
changing image saturation globally or locally
gamma correction
applying some kind of filter to the image (e.g. blurring, sharpening)
changing the size of the image
rotation
e.g. printing an image and scanning it again would probably include all of the above.
In a nutshell, you have to decide which transformations you want to treat as "identical" and then find image measures that are invariant to those transformations. (Alternatively, you could try to revert the transformations, but that's not possible if a transformation removes information from the image, e.g. blurring or clipping.)
