I am coding a program in OpenCV where I want to adjust camera position. Is there any metric in OpenCV to measure the amount of perspective in two images? How can a homography be used to quantify the degree of perspective in two images? The method that comes to my mind is to run edge detection and compare the parallel edge sizes but that method is prone to errors.
As a first solution I'd recommend maximizing the distance between the image of the line at infinity and the center of your picture.
Identify at least two pairs of lines that are parallel in the original scene. Intersect the lines of each pair to get one vanishing point per pair, then connect the resulting points: that line is the image of the line at infinity. Best do all of this in homogeneous coordinates so you won't have to worry about pairs that are still parallel in the transformed version (their intersection is then simply a point at infinity). Compute the distance between the center of the image and that line, possibly normalizing by the image resolution (e.g. the diagonal) to make the result invariant to resampling. The result will be infinite for an image obtained from a pure affine transformation, so the larger that value, the closer you are to the affine scenario.
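A minimal numpy sketch of that metric, assuming you already have two pairs of image points lying on lines that are parallel in the scene (all point values and function names below are illustrative):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (via cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines (again a cross product)."""
    return np.cross(l1, l2)

def affineness(pair_a, pair_b, image_center):
    """Distance from the image center to the vanishing line.

    pair_a, pair_b: each is ((p1, p2), (q1, q2)) -- two imaged lines
    that are parallel in the scene. Larger value = closer to a purely
    affine (perspective-free) view; inf for the degenerate affine case.
    """
    vp1 = vanishing_point(line_through(*pair_a[0]), line_through(*pair_a[1]))
    vp2 = vanishing_point(line_through(*pair_b[0]), line_through(*pair_b[1]))
    a, b, c = np.cross(vp1, vp2)      # the vanishing line
    norm = np.hypot(a, b)
    if norm < 1e-12:                  # line at infinity -> affine image
        return np.inf
    cx, cy = image_center
    return abs(a * cx + b * cy + c) / norm

# Example with hypothetical corners of a table seen in perspective:
pair_a = (((0, 0), (100, 10)), ((0, 50), (100, 55)))
pair_b = (((0, 0), (0, 50)), ((100, 10), (100, 55)))
print(affineness(pair_a, pair_b, image_center=(320, 240)))
```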
Related
I have 2 binary images (black/white). They have the same content but might slightly differ in rotation/translation and scale of the content (text).
How do I get a simple measure for the similarity of 2 images in OpenCV?
The operation needs to be as fast as possible (live).
Examples:
A:
B:
You can use LogPolarFFT registration algorithm to register the images, then compare them using similarity check (PSNR or SSIM).
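For the comparison step, a minimal sketch (assuming the two images have already been registered and have the same size; filenames are placeholders):

```python
import cv2
from skimage.metrics import structural_similarity

a = cv2.imread("A.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("B.png", cv2.IMREAD_GRAYSCALE)

psnr = cv2.PSNR(a, b)                # higher = more similar
ssim = structural_similarity(a, b)   # 1.0 = identical
print(psnr, ssim)
```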
You need to remove the scale and rotation.
To remove the rotation, take the PCA, which gives you the primary axis, and rotate both images so the primary axis is along the x-axis (use a shear rotate). To remove the scale, either simply take a bounding box or, if there's a bit of noise in there, take the area and scale one image until the areas are equal. Just sample pixel centres to scale (a bit icky, but it's hard to scale a binary image nicely).
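A rough Python/OpenCV sketch of that normalization (using warpAffine rather than a shear-based rotate, and assuming a white-on-black binary input):

```python
import cv2
import numpy as np

def normalize_binary(img):
    """img: uint8 binary image, foreground (text) = 255."""
    # PCA on the coordinates of the foreground pixels
    ys, xs = np.nonzero(img)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    mean, eigvecs = cv2.PCACompute(pts, mean=np.empty(0))
    # Angle of the primary axis, then rotate it onto the x-axis
    angle = np.degrees(np.arctan2(eigvecs[0, 1], eigvecs[0, 0]))
    h, w = img.shape
    center = (float(mean[0, 0]), float(mean[0, 1]))
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_NEAREST)
    # Crop to the bounding box so scale can be compared/normalized
    x, y, bw, bh = cv2.boundingRect(cv2.findNonZero(rotated))
    return rotated[y:y + bh, x:x + bw]
```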
I should put some support for this in the binary image library. You might find the material there helpful:
http://malcolmmclean.github.io/binaryimagelibrary/
One way I can think of is to find keypoints using SIFT/SURF, calculate the homography between the two images, and warp one image according to the calculated homography (so as to fix rotation and translation). Then you can simply calculate the similarity in terms of SAD (sum of absolute differences).
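A sketch of that pipeline (assuming OpenCV >= 4.4, where SIFT lives in the main module; filenames are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("A.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("B.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Match with Lowe's ratio test
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(d1, d2, k=2)
           if m.distance < 0.75 * n.distance]

# findHomography needs at least 4 good matches
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img1 onto img2 and score with the sum of absolute differences
h, w = img2.shape
warped = cv2.warpPerspective(img1, H, (w, h))
print("SAD:", np.sum(cv2.absdiff(warped, img2)))
```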
You need a rotation- and scale-invariant approach, and a threshold-based area segmentation before feature extraction. I suggest the following steps:
1/ Binary threshold & scan line algorithm can be used to segment specific text line area.
2/ After segmentation you should adjust the rotation using a warpAffine transformation. See this example, and the sketch after the links below.
3/ On adjusted image you can apply SIFT or BRISK or SURF features to get features
4/ Use template matching approach to match or generate similarity or distance score.
See the following link for more detail:
scale and rotation Template matching
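A hedged sketch of steps 1/ and 2/: an Otsu threshold to isolate the text, plus a minAreaRect-based deskew, which is one common way to estimate the rotation (the scan-line segmentation itself isn't reproduced, and the filename is a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("text.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Estimate the skew from the rotated bounding box of the ink pixels.
# Note: minAreaRect's angle convention differs across OpenCV versions;
# the unwrap below assumes the >= 4.5 convention (angle in (0, 90]).
center, size, angle = cv2.minAreaRect(cv2.findNonZero(binary))
if angle > 45:
    angle -= 90

h, w = img.shape
M = cv2.getRotationMatrix2D(center, angle, 1.0)
deskewed = cv2.warpAffine(binary, M, (w, h), flags=cv2.INTER_NEAREST)
```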
What are the possible fast ways to detect circles in an image?
For ex:
I have an image with one big circle that has 6 small circles inside it.
I need to find the big circle without using Hough Circles (OpenCV).
Standard algorithms to find circles are Hough (which jamk mentioned in the comments) and RANSAC. Parameterizing these algorithms will set a baseline speed for your application.
http://en.wikipedia.org/wiki/Hough_transform
http://en.wikipedia.org/wiki/RANSAC
To speed up these algorithms, you can look at your collection of images and decide whether limiting the search ranges will help speed up the search. That's straightforward enough: only search within a reasonable range for the radius. Since they take edge points as inputs, you can also look at methods to reduce the number of edge points checked.
However, there are a few other tricks to speed up processing.
Carefully set the range or ranges over which radii are checked. For example, you might not simply check from the smallest possible radius to the largest possible radius, but instead you might split the search into two different ranges: from radius R1 to R2, and then from radius R3 to R4.
Ditch the Canny edge detection in favor of the fastest possible edge detection your application can tolerate. (You can ditch Canny for lots of applications.)
Preprocess your image of edge points to eliminate outliers. The appropriate algorithm to eliminate outliers will be specific to your image set, but you'll probably be able to find an algorithm that eliminates obvious outliers and thereby saves some search time in the more expensive circle fit algorithms.
If your circles are very well defined, and all or nearly all points are present, figure out how you might match only a quarter circle or semicircle instead of a full circle.
Long story short: start with a complete implementation and benchmark it, then gradually tighten up parameter settings and limit search ranges while ensuring that you can still find circles for your application and your image set.
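As a starting point, here's a bare-bones RANSAC circle fit over edge points (a sketch, not production code); the radius-range parameters are exactly the knobs the tips above suggest tightening:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcenter/radius of three points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def ransac_circle(points, r_min, r_max, iters=500, tol=2.0):
    """points: (N, 2) array of edge coordinates; r_min/r_max restrict
    the radius search range, one of the main speed levers."""
    rng = np.random.default_rng()
    best, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        circ = circle_from_3pts(*sample)
        if circ is None or not (r_min <= circ[2] <= r_max):
            continue
        ux, uy, r = circ
        dist = np.abs(np.hypot(points[:, 0] - ux, points[:, 1] - uy) - r)
        inliers = int(np.sum(dist < tol))
        if inliers > best_inliers:
            best, best_inliers = circ, inliers
    return best
```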
If your images are amenable to scaling, then one possibility is to create an image pyramid of images at different scales: 1/2 scale, 1/4 scale, 1/8 scale, etc. You'll need an edge-preserving scaling method at smaller scales.
Once you have your image pyramid, try the following:
Find circles at the very smallest scale. The image will be small and the range of possible radii will be limited, so this should be a quick operation.
If you find a circle using the initial fit at the small scale, improve the fit by testing in the next larger scale image -OR- go ahead and search in the full scale image.
Check the next largest scale. Circles that weren't visible in the smaller scale image may suddenly "appear" in the current scale.
Repeat the steps above through all scales in the image.
Image scaling will be a fast operation, and you can see that if at least one of your circles is present in a smaller scale image you should be able to reduce the total number of cycles by performing a rough circle fit in the small scale image and then optimizing the fit for those edge points alone in the full scale image.
Edge-preserving scaling can also make it possible to use correlation-type tools to find circles, but being able to do so depends on the content of your images, including the noise, how completely edge points represent circles, and so on.
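A sketch of the coarse-to-fine search, reusing the ransac_circle helper sketched earlier (Canny is used here just as a stand-in for whatever edge detector you settle on):

```python
import cv2
import numpy as np

def edge_points(img):
    edges = cv2.Canny(img, 50, 150)      # or a cheaper detector
    return np.column_stack(np.nonzero(edges)[::-1]).astype(np.float64)

def coarse_to_fine_circle(img, levels=3, r_min=10, r_max=200):
    # Build the pyramid: each level halves the resolution
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    scale = 2 ** levels
    coarse = ransac_circle(edge_points(pyramid[-1]),
                           r_min / scale, r_max / scale)
    if coarse is None:
        return None
    ux, uy, r = [v * scale for v in coarse]
    # Refine: keep only full-res edge points near the scaled-up circle
    pts = edge_points(img)
    d = np.abs(np.hypot(pts[:, 0] - ux, pts[:, 1] - uy) - r)
    near = pts[d < 0.1 * r]
    if len(near) < 3:
        return (ux, uy, r)
    return ransac_circle(near, 0.8 * r, 1.2 * r)
```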
Maybe detect contours and check their properties, e.g. try cv::isContourConvex. Another way could be to use the eigenvalues of the covariance matrix and check whether the first eccentricity of the contour's representative ellipse is ~0.
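A small sketch of that check, using fitEllipse as a stand-in for the covariance-eigenvalue computation (filename and thresholds are illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if len(c) < 5:                      # fitEllipse needs >= 5 points
        continue
    approx = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
    if not cv2.isContourConvex(approx):
        continue
    (cx, cy), (ax1, ax2), _ = cv2.fitEllipse(c)
    ecc = np.sqrt(1.0 - (min(ax1, ax2) / max(ax1, ax2)) ** 2)
    if ecc < 0.3:                       # ~0 means nearly circular
        print("circle candidate at", (cx, cy))
```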
I'm creating a part scanner in C that pulls all possibilities for scanned parts as images in a directory. My code currently fetches all images from that directory and dumps them into a vector. I then produce groups of contours for all the images. The program then falls into a while loop where it constantly grabs images from a webcam, and generates contours for those as well. I have set up a jig for the part to rest on, so orientation and size are not a concern, however I don't want to have to calibrate the machine, so there may be movement between the template images and the part images taken.
What is the best way to compare the contours? I have tried several methods including matchTemplate without contours, but if you take a look at the two parts below, you can see that these two are very close to each other, so matchShapes and matchTemplate can't distinguish between them the way I was using them. I'm also not sure how to use cvMatchShapes. It works with just loading the images directly into match shapes, but the results are inconclusive. I think that contours is the way to go, I'm just not sure of how to go about implementing the comparison phase. Any help would be great.
You can view the templates here: http://www.cryogendesign.com/partDetection.html
If you are ready for do-it-yourself, one approach could be to compute a "distance image" (assign every pixel the smallest Euclidean distance to the contour taken as the reference). See http://en.wikipedia.org/wiki/Distance_transform.
Using this distance image, you can quickly compute the average distance of a new contour to the reference one (for every contour pixel, get the distance from the distance image). The average distance gives you an indication of the goodness-of-fit and will let you find the best match to a set of reference templates.
If the parts have some moving freedom, the situation is a bit harder: before computing the average distance, you must fit the new contour to the reference one. You will need to apply a suitable transform (translation, rotation, possibly scaling), and find the parameters that will minimize... the average distance.
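A minimal sketch of the distance-image scoring (alignment of the new contour, if needed, is assumed to have been done already):

```python
import cv2
import numpy as np

def make_distance_image(ref_contour_img):
    """ref_contour_img: uint8, contour pixels = 255, background = 0."""
    # distanceTransform measures distance to the nearest zero pixel,
    # so invert: the contour pixels must become the zeros.
    inverted = cv2.bitwise_not(ref_contour_img)
    return cv2.distanceTransform(inverted, cv2.DIST_L2, 3)

def avg_distance(dist_img, new_contour_img):
    """Mean distance of the new contour's pixels to the reference."""
    ys, xs = np.nonzero(new_contour_img)
    return float(dist_img[ys, xs].mean())

# Build one distance image per template; the template with the lowest
# average distance is the best match.
```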
You can calculate the chamfer distance between the two contours:

d_cham(x) = (1/|T|) * Σ_{t ∈ T} min_{e ∈ E} ||(t + x) − e||

T and E are the sets of edge points of the template and the image, and x is the reference point (offset) at which you compare the two sets of edges. So for each x you get a different value.
DT is the distance transform of an image. MATLAB provides the algorithm here.
If you want a more detailed version of how to calculate the chamfer distance, take a look here.
I have some scanned images where the scanner appears to have introduced a certain kind of noise that I've not encountered before. I would like to find a way to remove it automatically. The noise looks like high-frequency vertical shear. In other words, a horizontal line that should look like ------------ shows up as /\/\/\/\/\/\/\/\/\, where the amplitude and frequency of the shear seem pretty regular.
Can someone suggest a way of doing the following steps?
Given an image, identify the frequency and amplitude of the shear noise. One can assume that it is always vertical and the characteristic frequency is higher than other frequencies that naturally appear in the image.
Given the above parameters, apply an opposite, vertical, periodic shear to the image to cancel this noise.
It would also be helpful to know how these could be implemented using the tools implemented by a freely available image processing package. (Netpbm, ImageMagick, Gimp, some Python library are some examples.)
Update: Here's a sample from an image with this kind of distortion. Actually, this sample shows that the shear amplitude need not be uniform throughout the image. :-(
The original images are higher resolution (600 dpi).
My solution to the problem would be to convert the image to the frequency domain using the FFT. The result will be two matrices: the image signal amplitude and the image signal phase. These two matrices should have the same dimensions as the input image.
Now, you should use the amplitude matrix to detect a spike in the area that corresponds to the noise frequency. Note that the top-left corner of this matrix corresponds to low-frequency components and the bottom-right to high frequencies.
After you have identified the spike, you should set the corresponding coefficients (amplitude matrix entries) to zero. After you apply the inverse FFT you should get the input image without the noise.
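A hedged numpy sketch of that notch filtering; here the spike is found automatically as the strongest non-DC peak of the shifted spectrum, but on real scans you may prefer to pick it by inspecting the spectrum yourself:

```python
import numpy as np

def remove_periodic_noise(img, notch_radius=3):
    """img: 2D uint8 grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    mag = np.abs(f)
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mag[cy - 5:cy + 6, cx - 5:cx + 6] = 0    # mask out the DC region
    sy, sx = np.unravel_index(np.argmax(mag), mag.shape)
    # Zero the spike and its conjugate-symmetric twin
    for y, x in [(sy, sx), (2 * cy - sy, 2 * cx - sx)]:
        f[max(0, y - notch_radius):y + notch_radius + 1,
          max(0, x - notch_radius):x + notch_radius + 1] = 0
    out = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    return np.clip(out, 0, 255).astype(np.uint8)
```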
Please provide an example image for a more concrete (practical) solution to your problem.
You could use a Hough fit or RANSAC to fit lines first. For Hough to work you may need to "smear" the points using Gaussian blur or morphological dilation so that you get more hits for a given (rho, theta) line in parameter space.
Once you have line fits, you can determine the relative distance of the original points to each line. From that spatial information you can use FFT to help find a "best fit" spatial frequency and then shift pixels up/down accordingly.
As a first take, you might even skip FFT and use more of a brute force method:
Find the best fit lines using Hough or RANSAC.
Determine the orientation of the lines.
Sampling perpendicular to the (nominally) horizontal lines, find each point's offset from the closest best-fit line along that column.
If the points along one sample are on average a distance +N away from their best fit lines, shift all the pixels in that column (or along that perpendicular sample) by -N.
This sort of technique should work if the shear is consistent along a vertical sample, but not necessarily from left to right. If the shear is always exactly vertical, then finding horizontal lines should be relatively easy.
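A rough sketch of that brute-force correction, assuming the shear is constant within each pixel column and that you already have the row at which a fitted (nominally horizontal) line should sit, e.g. from a Hough fit:

```python
import numpy as np

def deshear_columns(img, line_row):
    """img: grayscale page as a 2D array; line_row: target row of the
    fitted horizontal line."""
    out = img.copy()
    ink = img < 128                      # dark (ink) pixels
    for x in range(img.shape[1]):
        rows = np.nonzero(ink[:, x])[0]
        if rows.size == 0:
            continue
        # This column's vertical offset from the fitted line
        offset = int(round(np.median(rows) - line_row))
        out[:, x] = np.roll(img[:, x], -offset)
    return out
```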
Judging from your sample image, it looks as though the shear may be consistent across a horizontal line segment between a 3-way or 4-way intersection with a nominally vertical line segment. You could use corner detectors or other methods to find these intersections to limit the extent over which a pixel shifting operation takes place.
A technique I posted here is another way to find horizontal stretches of dark pixels in case they don't fall on a line:
Is there an efficient algorithm for segmentation of handwritten text?
All that aside, is there a chance you could have the scanner fixed?
I'm trying to implement user-assisted edge detection using OpenCV.
Assume you have an image in which we need to find a polygonal shape. For the sake of discussion, let's say we need to find the top of a rectangular table in a picture. The user will click on the four corners of the table to help us narrow things down. Connecting those four points gives us a polygon, or four vectors.
But the user is not very accurate when clicking on those corners. So I'd like to use edge information from the image to increase the accuracy.
I'm using a Canny edge detector with a fairly high threshold to determine important edges in my image (more precisely, I'm scaling down, blurring, converting to grayscale, then running Canny). How can I compute whether a vector aligns with an edge in my image? If I have a way to compute "alignment", my overall algorithm comes down to perturbing the locations of the four corner points and computing the total "alignment" of my polygon with the edges in the image, until I find an optimum.
What is a good way to define and compute this "alignment" metric?
You may want to try FindContours to detect your table or any other contour. Then build a contour from the user's input points as well. After this you can read about Contour Moments, with which you can compare contours. Compare all the contours from the image with the one built from the user's points and then select the closest match.
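A small sketch of that comparison; matchShapes compares contours via their Hu moments, and user_points/filename are placeholders for the clicked corners and your input image:

```python
import cv2
import numpy as np

# Placeholder inputs: the four clicked corners and a Canny edge image
user_points = [(120, 80), (420, 90), (430, 300), (110, 310)]
edges = cv2.Canny(cv2.imread("table.png", cv2.IMREAD_GRAYSCALE), 50, 150)

user_contour = np.array(user_points, dtype=np.int32).reshape(-1, 1, 2)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                               cv2.CHAIN_APPROX_SIMPLE)
# Lowest matchShapes score = closest match to the user's polygon
best = min(contours,
           key=lambda c: cv2.matchShapes(user_contour, c,
                                         cv2.CONTOURS_MATCH_I1, 0.0))
```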