This is similar to a question here, but I couldn't use its solution. I'm working in Julia, and I'm not sure how that approach works in this case. Here is the image I'm working on:
Due to the curve in the line and its rough edges, it's not straightforward to get the width 'd' of the line. I don't know how to find the orthogonal line at each point and get the distances d1, d2, .... I plan to take their average later to get an estimate of the line width. Any tips? Thanks
Using a distance transform gets the distance from the neighbouring pixels:

dist = 1 .- distance_transform(feature_transform(Gray.(line_image) .> 0.5));

Looking up the closest pixel with a greater distance value leads towards the pixel of interest. Taking the Euclidean distance from an index on the centreline (skeleton) to the nearest edge of the line and multiplying it by 2 gives the width of the line.
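For reference, a minimal sketch of the same idea with OpenCV in C++ rather than Julia (the file name, the binarization threshold, and using the single distance-transform maximum instead of an average over the skeleton are all assumptions of this sketch):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Assumes the line is the bright foreground; invert the threshold if it is dark.
    cv::Mat img = cv::imread("line.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bin;
    cv::threshold(img, bin, 127, 255, cv::THRESH_BINARY);

    // Distance from every line pixel to the nearest background pixel.
    cv::Mat dist;
    cv::distanceTransform(bin, dist, cv::DIST_L2, 3);

    // Along the centreline the distance peaks at roughly half the line width,
    // so 2 * max gives a first estimate; averaging 2 * dist over the skeleton
    // pixels, as described above, gives a more robust value.
    double maxDist = 0.0;
    cv::minMaxLoc(dist, nullptr, &maxDist);
    std::cout << "estimated width ~ " << 2.0 * maxDist << " px\n";
    return 0;
}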
Say I have the lines shown in the image below, represented in polar coordinate format (rho and theta). These lines are the output of OpenCV's HoughLines function after some post processing. (Sorry I'm not allowed to embed images yet.)
What I want to do is, given any one line, find all of the lines that are perpendicular to that line, as shown in the second image below.
I understand how to do this with Cartesian lines, but I'm having trouble wrapping my mind around what properties of rho and theta two lines would have to have in order to be perpendicular, although I do understand the fundamentals of how polar lines work. Sorry if this is elementary stuff, but I'm having trouble finding any explanation of this online. Do I need to first convert the lines to Cartesian coordinates, or is there some simpler way to do this? Any help would be much appreciated, thanks!
To get perpendicular lines in polar coordinates, you simply take the theta of the first line and find all lines whose theta differs from it by +/-90°.
You have to normalize the angles to be within 0°-360° (or some other range) when comparing them.
So if line 1 has an angle line1.Theta, then the angle to another line is a = (line2.Theta - line1.Theta), and you want all lines where a is close to -90°, 90°, 270°, -270°, ..., depending on how you normalize your angles.
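As a rough illustration, here is a small C++ helper (not from any library; the 5° tolerance and the radian inputs are assumptions of this sketch) that applies this test to two theta values, folding the angle difference into [0°, 180°) so that +/-90° and +/-270° all count as perpendicular:

#include <cmath>

// Returns true when two Hough lines, given by their theta values in radians,
// are perpendicular within tolDeg degrees.
bool arePerpendicular(double theta1, double theta2, double tolDeg = 5.0) {
    const double PI = 3.14159265358979323846;
    double diffDeg = std::fabs(theta1 - theta2) * 180.0 / PI;
    diffDeg = std::fmod(diffDeg, 180.0);   // fold 270 deg -> 90 deg, 180 deg -> 0 deg, etc.
    return std::fabs(diffDeg - 90.0) < tolDeg;
}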
In the below picture, I have the 2D locations of the green points and I want to calculate the locations of the red points, or, as an intermediate step, I want to calculate the locations of the blue points. All in 2D.
Of course, I do not only want to find those locations for the picture above. In the end, I want an automated algorithm which takes a set of checkerboard corner points to calculate the outer corners.
I need the resulting coordinates to be as accurate as possible, so I think that I need a solution which does not only take the outer green points into account, but which also uses all the other green points' locations to calculate a best fit for the outer corners (red or blue).
If OpenCV can do this, please point me into that direction.
In general, if all you have is the detection of some, but not all, of the inner corners, the problem cannot be solved. This is because the configuration is invariant to translation: shifting the physical checkerboard by whole squares would produce the same detected corner positions in the image, but they would correspond to different physical corners.
Further, the configuration is also invariant to rotations by 180 deg in the checkerboard plane and, unless you are careful to distinguish between the colors of the squares adjacent to each corner, to rotations by 90 deg and to reflections with respect to the center and the midlines.
This means that, in addition to detecting the corners, you need to extract from the image some features of the physical checkerboard that can be used to break the above invariance. The simplest break is to detect all 9 corners of one row and one column, or at least their end-corners. They can be used directly to rectify the image by imposing the condition that their lines be at 90 deg angle. However, this may turn out to be impossible due to occlusions or detector failure, and more sophisticated methods may be necessary.
For example, you can try to directly detect the chessboard edges, i.e. the fat black lines at the boundary. One way to do that, for example, would be to detect the letters and numbers nearby, and use those locations to constrain a line detector to nearby areas.
By the way, if the photo you posted is just a red herring, and you are interested in detecting general checkerboard-like patterns, and can control the kind of pattern, there are way more robust methods of doing it. My personal favorite is the "known 2D crossratios" pattern of Matsunaga and Kanatani.
I solved it robustly, but not accurately, with the following solution:
Find lines with at least 3 green points closely matching the line. (thin red lines in pic)
Keep bounding lines: From these lines, keep those with points only to one side of the line or very close to the line.
Filter bounding lines: From the bounding lines, take the 4 best ones/those with most points on them. (bold white lines in pic)
Calculate the intersections of the 4 remaining bounding lines (none of the lines are perfectly parallel, so this results in 6 intersections, of which we want only 4).
From the intersections, repeatedly remove the one farthest from the average position of the intersections until only 4 of them are left (a small sketch of this pruning step follows below).
That's the 4 blue points.
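As a sketch of the pruning in the last two steps (a standalone helper, not part of OpenCV; it assumes the 6 pairwise intersections have already been computed as cv::Point2f values):

#include <opencv2/core.hpp>
#include <vector>

// Repeatedly drop the intersection farthest from the centroid of the
// remaining candidates until only the 4 corner candidates are left.
std::vector<cv::Point2f> keepFourClosestToCentroid(std::vector<cv::Point2f> pts) {
    while (pts.size() > 4) {
        // Centroid of the remaining candidates.
        float mx = 0.f, my = 0.f;
        for (const auto& p : pts) { mx += p.x; my += p.y; }
        mx /= pts.size();
        my /= pts.size();

        // Find and erase the point farthest from the centroid.
        size_t worst = 0;
        float worstDist2 = -1.f;
        for (size_t i = 0; i < pts.size(); ++i) {
            float dx = pts[i].x - mx, dy = pts[i].y - my;
            float d2 = dx * dx + dy * dy;
            if (d2 > worstDist2) { worstDist2 = d2; worst = i; }
        }
        pts.erase(pts.begin() + worst);
    }
    return pts;
}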
You can then feed these 4 points into OpenCV's getPerspectiveTransform function to find a perspective transform (a.k.a. a homography):
// Requires <opencv2/imgproc.hpp> and <algorithm>; assumes `using namespace cv;`.
// getPerspectiveTransform needs exactly 4 point pairs.
std::vector<Point2f> detectedCorners = CheckDet::getOuterCheckerboardCorners(srcImg);
std::vector<Point2f> srcPoints(detectedCorners.begin(),
                               detectedCorners.begin() + std::min<size_t>(4, detectedCorners.size()));

// Destination quad: the board corners mapped to the inner 6/8 of a 400x400 image.
int dstImgSize = 400;
std::vector<Point2f> dstPoints = {
    Point2f(dstImgSize * 1/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 7/8),
    Point2f(dstImgSize * 1/8, dstImgSize * 7/8)
};

Mat m = getPerspectiveTransform(srcPoints, dstPoints);
For our example image, the input and output of getPerspectiveTransform look like this:
input
(349.1, 383.9) -> ( 50.0, 50.0)
(588.9, 243.3) -> (350.0, 50.0)
(787.9, 404.4) -> (350.0, 350.0)
(506.0, 593.1) -> ( 50.0, 350.0)
output
( 1.6 -1.1 -43.8 )
( 1.4 2.4 -1323.8 )
( 0.0 0.0 1.0 )
You can then transform the image's perspective to board coordinates:
Mat plainBoardImg;
warpPerspective(srcImg, plainBoardImg, m, Size(dstImgSize, dstImgSize));
Results in the following image:
For my project, the red points that you can see on the board in the question are not needed anymore, but I'm sure they can be calculated easily from the homography by inverting it and then using the inverse to back-transform the points (0, 0), (0, dstImgSize), (dstImgSize, dstImgSize), and (dstImgSize, 0).
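A possible sketch of that back-transformation, reusing m and dstImgSize from the code above (the helper name is made up):

#include <opencv2/core.hpp>
#include <vector>

// Map the four corners of the rectified board image back into the source
// image using the inverse homography.
std::vector<cv::Point2f> boardCornersInSourceImage(const cv::Mat& m, int dstImgSize) {
    std::vector<cv::Point2f> boardCorners = {
        {0.f, 0.f},
        {0.f, (float)dstImgSize},
        {(float)dstImgSize, (float)dstImgSize},
        {(float)dstImgSize, 0.f}
    };
    std::vector<cv::Point2f> srcCorners;
    cv::perspectiveTransform(boardCorners, srcCorners, m.inv());
    return srcCorners;
}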
The algorithm works surprisingly reliably; however, it does not use all the available information, because it only uses the outer points (those connected by the white lines). It does not use any data from the inner points for additional accuracy. I would still like to find an even better solution that uses the inner points' data as well.
Given a set of points on an image, I want to detect groups of aligned points as shown in the figure:
How can I do this? Any help will be appreciated.
This is a good potential application of the Hough Transform. The Hough space for lines is (r, \theta), where r is the distance from the origin to the closest point on the line and \theta is the line's orientation.
Each point in x-y space becomes a sinusoid in Hough space as shown in the Wiki article.
The places where all the sinusoids intersect corresponds to a single line that passes through all the points. If the points are not perfectly colinear, the intersection will be "fuzzy".
The simplest algorithm to fit lines to points is to make a rectangular (r, \theta) accumulator array, initially set to zero. Then trace a sinusoid for each point into this discrete (r, \theta) space, incrementing each accumulator element it passes through by a fixed amount. Find prospective line fits by looking for large array elements; the element coordinates give (r, \theta) for the fit.
Tracing the sinusoid is straightforward. If you have T accumulator bins on the \theta axis, then each corresponds to an angle k\pi/T for some 0 <= k < T. For each k in this range, calculate the distance from the origin to the closest point of a line with this orientation passing through the point; this gives an r value. If there are R bins on the r axis and rMax is the maximum value of r, then increment bin (floor(r/rMax * R), k).
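A minimal sketch of that accumulator loop in C++ (the point type, the bin counts T and R, the rMax bound, and voting by +1 per point are assumptions of this sketch):

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Build an R x T accumulator: each point votes along its sinusoid, and large
// cells afterwards indicate candidate lines (row = r bin, column = theta bin).
std::vector<std::vector<int>> houghAccumulator(const std::vector<Pt>& points,
                                               int T, int R, double rMax) {
    const double PI = 3.14159265358979323846;
    std::vector<std::vector<int>> acc(R, std::vector<int>(T, 0));
    for (const auto& p : points) {
        for (int k = 0; k < T; ++k) {
            double theta = k * PI / T;
            // Signed distance from the origin to the line with this
            // orientation passing through the point.
            double r = p.x * std::cos(theta) + p.y * std::sin(theta);
            if (r >= 0.0 && r < rMax) {   // this sketch simply ignores negative r
                int rBin = (int)std::floor(r / rMax * R);
                ++acc[rBin][k];
            }
        }
    }
    return acc;
}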
As a start, you can try this:
List all lines that can be formed by selecting any two of these points (n(n-1)/2 lines for n points).
For any two of these lines, check whether they are aligned (i.e. their slopes differ by less than, say, 10 degrees).
For each aligned pair of lines, you can easily check whether other points also lie on these lines. Those points are the aligned points you need.
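A rough brute-force sketch of this idea in C++ (the point type and the distance tolerance are assumptions; duplicate groups are not merged here): for every line through a pair of points, collect all points lying close to it and keep groups of at least 3:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Return index groups of points that are (approximately) aligned.
std::vector<std::vector<int>> alignedGroups(const std::vector<Pt>& pts, double tol) {
    std::vector<std::vector<int>> groups;
    for (size_t i = 0; i < pts.size(); ++i) {
        for (size_t j = i + 1; j < pts.size(); ++j) {
            // Line through pts[i] and pts[j] in implicit form a*x + b*y + c = 0.
            double a = pts[j].y - pts[i].y;
            double b = pts[i].x - pts[j].x;
            double c = -(a * pts[i].x + b * pts[i].y);
            double norm = std::hypot(a, b);
            if (norm == 0.0) continue;   // coincident points

            std::vector<int> group;
            for (size_t k = 0; k < pts.size(); ++k) {
                double dist = std::fabs(a * pts[k].x + b * pts[k].y + c) / norm;
                if (dist < tol) group.push_back((int)k);
            }
            if (group.size() >= 3) groups.push_back(group);
        }
    }
    return groups;
}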
I'm trying to implement rectangle detection using the Hough transform, based on this paper.
I programmed it in Matlab, but after detecting the pairs of parallel lines and the orthogonal pairs, I must find the intersections of these pairs. My question concerns these two-line intersections in Hough space:
I found the intersection points by solving four systems of equations. Do these intersection points lie in Cartesian or polar coordinate space?
For those of you wondering about the paper, it's:
Rectangle Detection based on a Windowed Hough Transform by Cláudio Rosito Jung and Rodrigo Schramm.
Now, according to the paper, the intersection points are expressed in polar coordinates; obviously your implementation may be different (the only way to tell is to show us your code).
Assuming you are being consistent with the paper's notation, your peaks should be expressed as pairs P_k = (\rho_k, \theta_k), each with an accumulator count C_k.
You must then perform the peak pairing given by equation (3) in section 4.3:
|\theta_i - \theta_j| < T_\theta   and   |C_i - C_j| < T_L (C_i + C_j) / 2
where T_\theta represents the angular threshold corresponding to parallel lines
and T_L is the normalized threshold corresponding to lines of similar length.
The accuracy of the Hough space should be dependent on two main factors.
The first is the resolution of the accumulator: the accumulator maps onto Hough space, and looping through the accumulator array requires that the accumulator divide Hough space into a discrete grid.
The second factor affecting accuracy in linear Hough space is the location of the origin in the original image. Consider for a moment what happens if you do a sweep of \theta for any given change in \rho. Near the origin, one of these sweeps will cover far fewer pixels than a sweep out near the edges of the image. As a consequence, near the edges of the image you need a much higher \rho-\theta resolution in your accumulator to achieve the same level of accuracy when transforming back to Cartesian.
The problem with increasing the resolution, of course, is that you will need more computational power and memory. Also, if you uniformly increase the accumulator resolution, you waste resolution near the origin where it is not needed.
Some ideas to help with this:
- Place the origin right at the center of the image, as opposed to using the natural bottom-left or top-left of the image in code.
- Try using the closest image you can get to a square. The more elongated an image is for a given area, the more pronounced the resolution trap becomes at the edges.
- Try dividing your image into 4/9/16 etc. different accumulators, each with an origin in the center of that sub-image. It will require a little overhead to link the results of each accumulator together for rectangle detection, but it should help spread the resolution more evenly.
- The ultimate solution would be to increase the resolution linearly depending on the distance from the origin. This can be achieved using the circle equation (x - a)^2 + (y - b)^2 = \rho^2, where x, y are the current pixel, a, b are your chosen origin, and \rho is the radius. Once the radius is known, adjust your accumulator resolution accordingly. You will have to keep track of the center of each \rho-\theta bin for transforming back to Cartesian.
The link to the referenced paper does not work, but if you used the standard Hough transform, then the four intersection points will be expressed in Cartesian coordinates. In fact, the four lines detected with the Hough transform will be expressed using the "normal parametrization":
rho = x cos(theta) + y sin(theta)
so you will have four pairs (rho_i, theta_i) that identify your four lines. After checking for orthogonality (for example, just by comparing the angles theta_i), you solve four systems of equations, each of the form:
rho_j = x cos(theta_j) + y sin(theta_j)
rho_k = x cos(theta_k) + y sin(theta_k)
where x and y are the unknowns that represent the Cartesian coordinates of the intersection point.
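For illustration, a small sketch of solving one such 2x2 system by Cramer's rule (plain C++, no OpenCV; the struct and the parallel-line tolerance are assumptions):

#include <cmath>

struct Intersection { double x, y; bool ok; };

// Intersection of two lines given in normal form rho = x*cos(theta) + y*sin(theta).
Intersection intersectLines(double rho1, double theta1, double rho2, double theta2) {
    double a = std::cos(theta1), b = std::sin(theta1);
    double c = std::cos(theta2), d = std::sin(theta2);
    double det = a * d - b * c;
    if (std::fabs(det) < 1e-9) return {0.0, 0.0, false};   // (nearly) parallel lines
    double x = (rho1 * d - rho2 * b) / det;   // Cramer's rule
    double y = (rho2 * a - rho1 * c) / det;
    return {x, y, true};
}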
I am not a mathematician. I am willing to stand corrected...
From Hough [2]: "... any line on the xy plane can be described as p = x cos(theta) + y sin(theta). In this representation, p is the normal distance and theta is the normal angle of a straight line. ... In practical applications, the angles theta and distances p are quantized, and we obtain an array C(p, theta)."
From the CRC Standard Math Tables, Analytic Geometry, "Polar Coordinates in a Plane" section ...
Such an ordered pair of numbers (r, theta) are called polar coordinates of the point p.
Straight lines: let p = distance of line from O, w = counterclockwise angle from OX to the perpendicular through O to the line. Normal form: r cos(theta - w) = p.
From this I conclude that the points lie in polar coordinate space.
(This is a follow-up from this previous question).
I was able to successfully use OpenCV / Hough transforms to detect lines in pictures (scanned text); at first it would detect many many lines (at least one line per line of text), but by adjusting the 'threshold' parameter via trial-and-error, it now only detects "real" lines.
(The 'threshold' parameter is dependent on image size, which is a bit of a problem if one has to deal with images of different resolutions, but that's another story).
My problem is that the Hough transform sometimes detects two lines where there is only one; those two lines are very near one another and (apparently) parallel.
=> How can I identify that two lines are almost parallel and very near one another? (so that I can keep only one).
If you use the standard or multi-scale Hough, you will end up with the rho and theta coordinates of the lines in polar coordinates. Rho is the distance to the origin, and theta is normally the angle between the detected line and the Y axis. Without looking into the details of the Hough transform in OpenCV, this is a general rule in those coordinates: two lines will be almost parallel and very near one another when:
- their thetas are nearly identical AND their rhos are nearly identical
OR
- their thetas are near 180 degrees apart AND their rhos are near each other's negative
I hope that makes sense.
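For illustration, a small sketch of that test in C++ (the tolerance values are arbitrary assumptions, and theta is assumed to be in radians as returned by cv::HoughLines):

#include <cmath>

// Decide whether two Hough lines (rho, theta) describe nearly the same line.
bool nearlySameLine(float rho1, float theta1, float rho2, float theta2,
                    float rhoTol = 10.0f, float thetaTol = 0.05f) {
    const float PI = 3.14159265f;
    // Case 1: thetas nearly identical and rhos nearly identical.
    if (std::fabs(theta1 - theta2) < thetaTol && std::fabs(rho1 - rho2) < rhoTol)
        return true;
    // Case 2: thetas near 180 degrees apart and rhos near each other's negative.
    if (std::fabs(std::fabs(theta1 - theta2) - PI) < thetaTol && std::fabs(rho1 + rho2) < rhoTol)
        return true;
    return false;
}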
That's interesting about the theta being the angle between the line and the y-axis.
Generally, the rho and theta values are visualized as follows: theta is the angle from the x-axis to the perpendicular dropped from the origin onto the line in question, and rho is the length of that perpendicular. Thus, theta = 90 and rho = 20 would mean a horizontal line 20 pixels up from the origin.
A nice image is shown in this Hough Transform question.