I am currently working on SIFT; I have generated the difference-of-Gaussian and the extrema image layers. Can anyone explain to me how to use the Hessian matrix to eliminate low-contrast keypoints?
A good keypoint is a corner. This idea comes first from the Harris corner work and the Good Features to Track (KLT) papers, and was later emphasized by the Mikolajczyk and Schmid paper.
Intuitively, a corner is a good feature because it is an intersection of two lines, whereas a single line segment can slide along its own direction, making its localization less accurate.
A line segment is an edge, i.e., a first-order derivative (gradient). A corner is an edge that changes direction abruptly. That change is measured by a second-order derivative, hence the use of the Hessian matrix, which contains the values of the directional second derivatives.
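For concreteness, here is a minimal numpy sketch of the edge-response test from Lowe's SIFT paper: build the 2x2 Hessian of the DoG layer at the keypoint from finite differences and reject the keypoint when the ratio of principal curvatures exceeds r (Lowe suggests r = 10). The function name and indexing here are illustrative, not from any particular library.

```python
import numpy as np

def is_edge_like(dog, y, x, r=10.0):
    """Return True if the keypoint at (y, x) in the DoG layer `dog`
    should be rejected as an edge response (principal curvature test)."""
    # 2x2 Hessian from finite differences at the keypoint
    dxx = dog[y, x + 1] - 2 * dog[y, x] + dog[y, x - 1]
    dyy = dog[y + 1, x] - 2 * dog[y, x] + dog[y - 1, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    # det <= 0 means the curvatures have different signs: discard.
    return det <= 0 or tr * tr / det >= (r + 1) ** 2 / r
```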
I have a problem at hand where I need to detect/predict the coordinates of the hinge point or axis of rotation point using image processing. The image is as shown below:
I've used a method where I start by tracking the circular movement (in an arc) of a few feature points in an RoI around the default hinge coordinates (entered manually in a configuration file). This circular motion of the tracked points happens around the vertical axis that passes through the hinge point. I tracked each point from its initial position until the connecting bar made a particular angle (15°/20°) with the y-axis, drew secants between the start and end positions of the same point, and drew their perpendicular bisectors, which should ideally pass through the centre of the (concentric) circles, i.e., the hinge point.
Eg: y-intercepts calculated for each point:
H0: (322, 42)
H1: (322, 64) (within tolerance, closest to GT)
H2: (322, 48)
H_avg: (322, 52)
H_groundtruth (x, y): (322, 61)
We need an accuracy or tolerance of +/- 3 pixels.
Now, the issues we faced in going from this ideal scenario to a practical working version are:
Different tracked points give different potential hinge points (different dots on the vertical yellow line), a few of which are very close to the ground truth (yellow circle), but their weighted average (big green circle) goes off the mark. Quite frankly, this is a problem of plenty: we do get a candidate potentially closest to the ground truth, but we cannot tell which of these points is the closest, since we're not allowed to use the default hinge coordinates (entered manually) from the config file.
One solution could be to use a framework already implemented for image registration, such as elastix. If you configure it for a rigid registration, you can get the transformation matrix and, from it, the center of rotation.
The problem here is that only one part of your image is moving. Before doing the registration, I would simply mask the region of interest, computing the mask from the subtraction of the two images so as to keep only the part where something actually moved.
Such an approach can reach subpixel accuracy. You could also repeat it for multiple angles and average the results. As an alternative to averaging, you could use the RANSAC algorithm to determine which hinge points are off (outliers) and exclude them.
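If you'd rather stay inside OpenCV, here is a minimal sketch of the same idea (not the elastix API): estimate a rigid-ish 2D transform from your tracked point pairs with OpenCV's RANSAC-based estimator, then solve for the transform's fixed point, which is the rotation center. The pts0/pts1 arrays are hypothetical inputs from your tracker.

```python
import numpy as np
import cv2

# pts0, pts1: Nx2 float32 arrays of the same tracked features before
# and after the bar swings (hypothetical inputs from your tracker).
M, inliers = cv2.estimateAffinePartial2D(
    pts0, pts1, method=cv2.RANSAC, ransacReprojThreshold=1.0)

# M maps x -> R @ x + t (rotation + uniform scale + translation).
R, t = M[:, :2], M[:, 2]

# The hinge is the fixed point c of the motion: c = R @ c + t,
# i.e. (I - R) @ c = t, solvable whenever the rotation angle is nonzero.
hinge = np.linalg.solve(np.eye(2) - R, t)
print(hinge)  # sub-pixel estimate of the hinge point
```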
Here is an example of how to do a simple rigid transformation with elastix.
I hope this helps!
I intended this as only a comment, but it ended up significantly over the character limit:
The problem from an accuracy perspective (sorry, couldn't resist) seems to be that you're trying to use a planar Euclidean geometry technique to solve a projective geometry problem.
Those feature tracks are only circular arcs in 3D world space. They're actually (noisy) elliptical arcs in 2D image pixel space due to the projection.
Your hinge rotation axis isn't a single pixel either, unless your camera's optical axis is exactly aligned with the hinge axis. If that's not the case (as the perspective in the photo you added suggests), then your hinge axis is actually a line in pixel space, not a point, and different heights for the different tracks in model space will be 'centered' around different pixels on that line. So asking for +/- 3 pixel hinge 'point' accuracy is ill-defined, as is measuring angles in pixel space in any way that doesn't account for perspective.
I only mention these details because you seem focused on measuring accurately. Those kinds of 2D approximations are fine for many applications, but high accuracy and precision from a single camera (if that's really what you need) requires better 3D scene understanding. (Or you could train a deep network with a bunch of labeled ground-truth images and let it figure out the mappings.)
Now maybe you don't need such high accuracy for your application after all. In that case, simple affine geometry techniques like the one mentioned in the other answer might work well enough.
EDIT: To further clarify, in case my question is not clear:
Input: The image below
Output: The points on edge 1, the points on edge 2, the points on edge 3, and the points on edge 4. (I do not have a problem finding contours. I am just unable to separate the points that lie on each of the four edges. I want to group those points into four separate edges so that I can fit four separate curves to them)
My problem here is to detect points and fit separate curves to each of the curved edges of objects like what is shown below (The image shown is one example. The actual shape of each object is different, but there will be either a sharp corner or change in slope from one edge to another):
One way to approach this is to separate out the points/pixels on each edge (the four lines in the above example) and fit polynomials on each of them. By searching a little bit, I learnt that Hough Transform is available for detecting straight edges in OpenCV, but not for curved edges. I also tried detecting contours, but it does not separate out edges of a closed shape. The main criterion for an edge to be considered separate from an adjacent one is that there is a sharp change in slope.
Could anyone give me ideas on how to achieve this? I prefer using C++ with OpenCV due to the other modules of my task.
What you are trying to do is essentially to find high curvature points in the outline. There are several methods for curvature estimation. Some are based on local derivatives of the intensity, and some are based on the arrangement of the pixels along the curve. This problem is very close to that of corner detection.
You may be interested in the following references: "A Comparative Study on 2D Curvature Estimators" by Simon Hermann and Reinhard Klette, or "Curvature estimation in noisy curves" by Thanh Phuong Nguyen and Isabelle Debled-Rennesson. Note that there is a large literature on the topic, as curvature estimation in the digital domain is difficult because it involves second-order derivatives.
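If you want something quick to try, here is a rough k-cosine sketch of the pixel-arrangement family of estimators: at each contour point, measure the angle between the vectors to its k-th neighbours along the contour, and flag points where the contour folds sharply. The values of k and the angle threshold are tuning assumptions, and the same logic translates directly to C++/OpenCV.

```python
import numpy as np

def corner_indices(contour, k=7, angle_thresh_deg=120.0):
    """Flag high-curvature points on a closed contour.
    contour: one entry from cv2.findContours, shape (N, 1, 2)."""
    pts = contour.reshape(-1, 2).astype(np.float64)
    n = len(pts)
    angles = np.empty(n)
    for i in range(n):
        a = pts[(i - k) % n] - pts[i]   # vector to the k-th point back
        b = pts[(i + k) % n] - pts[i]   # vector to the k-th point ahead
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        angles[i] = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    # Straight stretches give ~180 degrees; sharp corners give much less.
    # (A real implementation would also suppress non-maxima.)
    return np.where(angles < angle_thresh_deg)[0]
```

Splitting the contour at the returned indices then gives you one point set per edge, each of which you can fit a polynomial to.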
I'm looking for an efficient way of selecting a relatively large portion of points (2D Euclidian graph) that are the furthest away from the center. This resembles the convex hull, but would include (many) more points. Further criteria:
The number of points in the selection/set ("K") must be within a specified range. Most likely it won't be very narrow, but it must work for different ranges (e.g., 0.01*N < K < 0.05*N as well as 0.1*N < K < 0.2*N).
The algorithm must be able to balance distance from the center and "local density". If there are dense areas near the upper part of the graph range, but sparse areas near the lower part, then the algorithm must make sure to select some points from the lower part even if they are closer to the center than the points in the upper region. (See example below)
Bonus: rather than simple distance from center, taking into account distance to a specific point (or both a point and the center) would be perfect.
My attempts so far have focused on "pigeonholing" (divide the graph into CxR boxes and assign points to boxes based on their coordinates), then selecting "outer" boxes until we have sufficient points in the set. However, I haven't been successful at balancing the selection (dense regions get over-selected because of the fixed box size), nor at using a selected point as the reference instead of (only) the center.
I've (poorly) drawn an example: the red dots are the points, and the green shape is an example of what I want (outside the green = selected). For sparse regions, the bounding shape comes closer to the center to find suitable points (but doesn't necessarily find any if they're too close to the center). The yellow box is an example of what my pigeonholing-based algorithm does. Even when trying to adjust for sparser regions, it doesn't manage well.
Any and all ideas are welcome!
I don't think there are any standard algorithms that will give you what you want. You're going to have to get creative. Assuming your points are embedded in 2D Euclidean space here are some ideas:
Iteratively compute several convex hulls. For example, compute the convex hull, keep the points that are part of the convex hull, then compute another convex hull ignoring the points from the original convex hull. Continue to do this until you have a sufficient number of points, essentially plucking off points on the perimeter for each iteration. The only problem with this approach is that it will not work well for concavities in your data set (e.g., the one on the bottom of your sample you posted).
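A rough scipy sketch of this peeling loop (the function name and the k_min stopping rule are assumptions):

```python
import numpy as np
from scipy.spatial import ConvexHull

def peel_hulls(points, k_min):
    """Repeatedly take the convex hull and remove its vertices
    until at least k_min points have been selected."""
    remaining = points.copy()
    selected = []
    while len(selected) < k_min and len(remaining) >= 3:
        hull = ConvexHull(remaining)             # outermost layer
        selected.extend(remaining[hull.vertices])
        remaining = np.delete(remaining, hull.vertices, axis=0)
    return np.array(selected)
```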
Fit a Gaussian to your data and keep everything > N standard deviations away from the mean (where N is a value you'd have to choose). This should work pretty well if your data is Gaussian. If it isn't, you could always model it with several Gaussians (instead of one) and keep points with a joint probability less than some threshold. Using multiple Gaussians will probably handle concavities decently. References:
http://en.wikipedia.org/wiki/Gaussian_function
How to fit a gaussian to data in matlab/octave?
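A minimal sketch of the single-Gaussian variant, using the Mahalanobis distance so that "N standard deviations" is well defined in 2D (n_sigma is the value you'd have to choose):

```python
import numpy as np

def outer_points(points, n_sigma=2.0):
    """Keep points more than n_sigma standard deviations from the
    mean of a single Gaussian fitted to the (N, 2) point array."""
    mu = points.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(points.T))
    d = points - mu
    # Squared Mahalanobis distance of every point at once
    m2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return points[m2 > n_sigma ** 2]
```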
Use kernel density estimation. If you create a kernel density surface, you can slice it at some height (e.g., turning it into a plateau), giving you a perimeter shape (the shape of the plateau) around the points. The trick is to slice it at the right location: with a bad choice you could end up with no points outside the shape, but with the right one you could easily get the green shape you drew (choosing that slice point wisely may be difficult). The big drawback of this approach is that it is very computationally expensive. More information:
http://en.wikipedia.org/wiki/Multivariate_kernel_density_estimation
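A sketch of the slicing idea with scipy's gaussian_kde; slicing at a density quantile instead of an absolute height is an assumption made here because it makes it easier to hit a target K:

```python
import numpy as np
from scipy.stats import gaussian_kde

def outside_plateau(points, quantile=0.15):
    """Select points whose estimated density falls below the given
    quantile, i.e. the points left outside the sliced plateau."""
    kde = gaussian_kde(points.T)      # fit a KDE to the (N, 2) cloud
    density = kde(points.T)           # density evaluated at each point
    cutoff = np.quantile(density, quantile)
    return points[density < cutoff]
```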
Use alpha shapes to get a general shape that wraps tightly around the outside perimeter of the point set, then erode the shape a little to force some points outside of it. I don't have a lot of experience with alpha shapes, but this approach will also be quite computationally expensive. More info:
http://doc.cgal.org/latest/Alpha_shapes_2/index.html
I have some scanned images where the scanner appears to have introduced a kind of noise that I've not encountered before. I would like to find a way to remove it automatically. The noise looks like high-frequency vertical shear. In other words, a horizontal line that should look like ------------ shows up as /\/\/\/\/\/\/\/\/\, where the amplitude and frequency of the shear seem fairly regular.
Can someone suggest a way of doing the following steps?
Given an image, identify the frequency and amplitude of the shear noise. One can assume that it is always vertical and the characteristic frequency is higher than other frequencies that naturally appear in the image.
Given the above parameters, apply an opposite, vertical, periodic shear to the image to cancel this noise.
It would also be helpful to know how these could be implemented using the tools implemented by a freely available image processing package. (Netpbm, ImageMagick, Gimp, some Python library are some examples.)
Update: Here's a sample from an image with this kind of distortion. Actually, this sample shows that the shear amplitude need not be uniform throughout the image. :-(
The original images are higher resolution (600 dpi).
My solution to the problem would be to convert the image to the frequency domain using the FFT. The result will be two matrices: the image signal amplitude and the image signal phase. These two matrices have the same dimensions as the input image.
Now, you should use the amplitude matrix to detect a spike in the area that corresponds to the noise frequency. Note that the top-left corner of this matrix corresponds to the low-frequency components and the bottom-right to the high frequencies.
After you have identified the spike, set the corresponding coefficients (amplitude matrix entries) to zero. After you apply the inverse FFT, you should get the input image without the noise.
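A hedged numpy/OpenCV sketch of that pipeline; the file names are hypothetical, and locating the spike with a simple argmax (after blanking the low-frequency neighbourhood) is a simplification you may need to replace with manual inspection of the amplitude image:

```python
import numpy as np
import cv2

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
F = np.fft.fftshift(np.fft.fft2(img))        # zero frequency at the center
mag = np.abs(F)                               # amplitude matrix

# Blank the DC neighbourhood, then take the strongest remaining spike.
cy, cx = np.array(F.shape) // 2
mag[cy - 10:cy + 10, cx - 10:cx + 10] = 0
sy, sx = np.unravel_index(np.argmax(mag), mag.shape)

# Zero a small notch at the spike and at its symmetric counterpart.
for y, x in [(sy, sx), (2 * cy - sy, 2 * cx - sx)]:
    F[y - 2:y + 3, x - 2:x + 3] = 0

clean = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
cv2.imwrite('clean.png', np.clip(clean, 0, 255).astype(np.uint8))
```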
Please provide an example image for a more concrete (practical) solution to your problem.
You could use a Hough fit or RANSAC to fit lines first. For Hough to work you may need to "smear" the points using Gaussian blur or morphological dilation so that you get more hits for a given (rho, theta) line in parameter space.
Once you have line fits, you can determine the relative distance of the original points to each line. From that spatial information you can use an FFT to help find a "best fit" spatial frequency and then shift pixels up/down accordingly.
As a first take, you might even skip FFT and use more of a brute force method:
Find the best fit lines using Hough or RANSAC.
Determine the orientation of the lines.
Sampling perpendicular to the (nominally) horizontal lines, find the offsets of the points along that column relative to the closest best-fit lines.
If the points along one sample are on average a distance +N away from their best fit lines, shift all the pixels in that column (or along that perpendicular sample) by -N.
This sort of technique should work if the shear is consistent along a vertical sample, but not necessarily from left to right. If the shear is always exactly vertical, then finding horizontal lines should be relatively easy.
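As a hedged sketch of those steps, simplified to a single nominally horizontal line (the Canny/Hough thresholds, the search window, and the file names are all assumptions to tune):

```python
import numpy as np
import cv2

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(img, 50, 150)

# Steps 1-2: fit near-horizontal lines (assumes at least one is found).
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=5)
x1, y1, x2, y2 = lines[0][0]
y0 = (y1 + y2) // 2            # treat the line as horizontal at row y0

# Steps 3-4: per column, find the dark pixel nearest the line and
# shift the whole column so it lands back on the line.
half = 5                       # vertical search window, an assumption
out = np.empty_like(img)
for x in range(img.shape[1]):
    window = img[y0 - half:y0 + half + 1, x]
    offset = int(np.argmin(window)) - half   # +N below the line, -N above
    out[:, x] = np.roll(img[:, x], -offset)  # undo the shear in this column
cv2.imwrite('deshear.png', out)
```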
Judging from your sample image, it looks as though the shear may be consistent across a horizontal line segment between a 3-way or 4-way intersection with a nominally vertical line segment. You could use corner detectors or other methods to find these intersections to limit the extent over which a pixel shifting operation takes place.
A technique I posted here is another way to find horizontal stretches of dark pixels in case they don't fall on a line:
Is there an efficient algorithm for segmentation of handwritten text?
All that aside, is there a chance you could have the scanner fixed?
I'm trying to implement user-assisted edge detection using OpenCV.
Assume you have an image in which we need to find a polygonal shape. For the sake of discussion, let's say we need to find the top of a rectangular table in a picture. The user will click on the four corners of the table to help us narrow things down. Connecting those four points gives us a polygon, or four vectors.
But the user is not very accurate when clicking on those corners. So I'd like to use edge information from the image to increase the accuracy.
I'm using a Canny edge detector with a fairly high threshold to determine important edges in my image (more precisely, I scale down, blur, convert to grayscale, then run Canny). How can I compute whether a vector aligns with an edge in my image? If I have a way to compute "alignment", my overall algorithm comes down to perturbing the locations of the four corner points and computing the total "alignment" of my polygon with the edges in the image until I find an optimum.
What is a good way to define and compute this "alignment" metric?
You may want to try FindContours to detect your table or any other contour, then build a contour from the user-input points as well. After this, you can read about contour moments, with which you can compare contours: compare all the contours from the image with the one built from the user points and select the closest match.
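A minimal OpenCV sketch of that comparison, scoring each detected contour against the click polygon with cv2.matchShapes (Hu-moment distance); the file name and click coordinates are hypothetical:

```python
import numpy as np
import cv2

img = cv2.imread('table.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(img, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                               cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature

# Contour built from the four user clicks (made-up coordinates).
clicks = np.array([[120, 80], [400, 95], [390, 300], [110, 290]],
                  dtype=np.int32).reshape(-1, 1, 2)

# Lower matchShapes score = more similar shape; keep the best match.
best = min(contours,
           key=lambda c: cv2.matchShapes(c, clicks,
                                         cv2.CONTOURS_MATCH_I1, 0))
```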