How to find perpendicular lines in polar coordinates? - opencv

Say I have the lines shown in the image below, represented in polar coordinate format (rho and theta). These lines are the output of OpenCV's HoughLines function after some post processing. (Sorry I'm not allowed to embed images yet.)
What I want to do is, given any one line, find all of the lines that are perpendicular to that line, as shown in the second image below.
I understand how to do this with Cartesian lines, but I'm having trouble working out what relationship the rho and theta values of two lines must have for the lines to be perpendicular, even though I understand the fundamentals of how polar lines work. Sorry if this is elementary stuff, but I'm having trouble finding any explanation of this online. Do I need to convert the lines to Cartesian coordinates first, or is there a simpler way to do this? Any help would be much appreciated, thanks!

To get perpendicular lines in polar coordinates, you simply take the theta of the first line and find all lines whose theta equals the first theta ± 90°.
You have to normalize the angles to be within 0°-360° (or some other range) when comparing them.
So if line 1 has angle line1.Theta, then the angle to another line is a = line2.Theta - line1.Theta, and you want all lines where a is close to -90°, 90°, 270°, -270°, ..., depending on how you normalize your angles.
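A minimal C++ sketch of that check (assuming the lines come straight from cv::HoughLines as (rho, theta) pairs with theta in radians; the 5° tolerance is just an illustrative value):

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // Collect all lines whose theta differs from a reference line's theta by
    // roughly 90 degrees. Lines are (rho, theta) pairs as given by cv::HoughLines.
    std::vector<cv::Vec2f> perpendicularTo(const cv::Vec2f& ref,
                                           const std::vector<cv::Vec2f>& lines,
                                           float tolRad = 0.0873f /* ~5 degrees */)
    {
        std::vector<cv::Vec2f> result;
        for (const cv::Vec2f& l : lines) {
            // Fold the angle difference into [0, pi), then test against pi/2.
            float d = std::fmod(std::fabs(l[1] - ref[1]), static_cast<float>(CV_PI));
            if (std::fabs(d - static_cast<float>(CV_PI) / 2.0f) < tolRad)
                result.push_back(l);
        }
        return result;
    }

Since OpenCV returns theta in [0, π), folding the difference into [0, π) already covers the -90°/270° cases mentioned above.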

Related

Compute Tangent vector over 2D Points

I have computed the contours of an object in an image. Now I have a 2D array, each element representing the X and Y coordinates of a contour point.
Now, I want to compute a tangent vector over each point and angle between them (contour point and tangent vector).
My points are ordered, i.e. p[i+1,] is next to p[i,], and my path is closed, i.e. p[0] is next to p[N-1] (if I consider N points). The image of contour points is attached below.
I have done a lot of searching but never found any clue. Any help would be highly appreciated. Thanks.
The trivial way is:
Tangent[i] = Normalize(Contour[i+1] - Contour[i-1])
You would simply need to take care of boundary conditions, if any!
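For example, a sketch in C++ with OpenCV types, assuming the contour is an ordered, closed list of cv::Point2f (the function name is illustrative):

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // Central-difference tangent for a closed, ordered contour. The modulo
    // indexing handles the wrap-around at both ends of the point list.
    std::vector<cv::Point2f> contourTangents(const std::vector<cv::Point2f>& contour)
    {
        const int n = static_cast<int>(contour.size());
        std::vector<cv::Point2f> tangents(n);
        for (int i = 0; i < n; ++i) {
            cv::Point2f d = contour[(i + 1) % n] - contour[(i - 1 + n) % n];
            float len = std::sqrt(d.x * d.x + d.y * d.y);
            tangents[i] = (len > 1e-6f) ? d * (1.0f / len) : cv::Point2f(0.f, 0.f);
        }
        return tangents;
    }

The angle at each point can then be obtained with std::atan2(t.y, t.x) on the corresponding tangent.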

How to group letters in OpenCV knowing their RotatedRects?

I have an image with letters, for example like this:
It's a binary image obtained from previous image-processing stages, and I know the boundingRect and RotatedRect of every letter, but these letters are not grouped into words yet. It is worth mentioning that a RotatedRect can be returned from minAreaRect() or fitEllipse(), as shown here and here. In my case the RotatedRects look like this:
Blue rectangles are obtained from minAreaRect and red ones from fitEllipse. They give slightly different boxes (center, width, height, angle), but the biggest difference is in the angle values: in the first case the angle ranges from -90 to 0 degrees, in the second from 0 to 180 degrees. My problem is: how do I group these letters into words, based on the parameters of their RotatedRects? I can check the angle of every RotatedRect and also measure the distance between the centers of any two RotatedRects. With simple assumptions about the direction of the text and the distance between letters, my grouping algorithm works. But in more complicated cases I run into problems. For example, in the image below there are a few groups of text with different directions, different angles and different distances between letters.
Problems arise when a letter from one word is close to a letter from another word, and when the angle of a RotatedRect inside a given word differs noticeably from the angles of its neighbours. What would be the best way to connect the letters into the right words?
First, you need to define a metric. It could be a Euclidean 3D distance, for example, defined as ||(delta_x, delta_y, delta_angle)||, where delta_x and delta_y are the distances between the rectangle centers along the x and y coordinates, and delta_angle is the difference in angular orientation.
In short, your rectangles become 3D data points with coordinates (x, y, angle).
Once you have defined this, you can run a clustering algorithm on the data. DBSCAN should work well here. Check this article for example: link; it may help you choose a clustering algorithm.
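A sketch of such a metric in C++, assuming each letter is available as a cv::RotatedRect; the angle weight is an assumption you would have to tune, since pixel distances and degrees live on very different scales:

    #include <opencv2/core.hpp>
    #include <cmath>

    // Weighted distance between two letters described by cv::RotatedRect:
    // Euclidean distance between the centers plus a penalty on the angle
    // difference, which is folded into [0, 90] degrees.
    float rectDistance(const cv::RotatedRect& a, const cv::RotatedRect& b,
                       float angleWeight = 5.0f)
    {
        float dx = a.center.x - b.center.x;
        float dy = a.center.y - b.center.y;
        float da = std::fmod(std::fabs(a.angle - b.angle), 180.0f);
        if (da > 90.0f) da = 180.0f - da;
        float wa = angleWeight * da;
        return std::sqrt(dx * dx + dy * dy + wa * wa);
    }

DBSCAN (or whichever clustering algorithm you choose) would then operate on the pairwise distances this function produces.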
I extended the aforementioned metric with a few other elements related to the geometric properties of letters and words (distances, angles, areas, the ratio of neighbouring letters' areas, etc.) and now it works fine. Thanks for the suggestion.

Why is the angle of parallel lines not the same? opencv c++

I detected lane lines in OpenCV and calculated their angles (shown by the red lines in the image). Although they appear to have almost the same angle, the angles calculated by the program show quite a difference, with the left line's angle always greater than the right's.
I am using arctan(slope) to find angles.
Is it due to the fact that the y-axis in the Mat image is inverted?
I am trying to detect the difference in the lane-line angles in order to distinguish turns from straight road. How can I achieve this? Right now I can't, because the lines do not have the same (but opposite) angle on a straight road.
Below is the image.
The difference of the two angles is not close to zero, because the lines are not parallel in 2D, simple as that. You are comparing angles of 2D lines in the image plane!
What you want to do is check how close the sum of the angles is to zero, i.e. fabs(angle1 + angle2). You probably also want to check that fabs(angle1) and fabs(angle2) are within a specific range.
Furthermore, you shouldn't use slopes, since the slope of a vertical line is infinite. You probably have a 2D direction vector for each line at some point. Either use atan2(dy, dx) to compute the angle of each line, or stick with the direction vectors; in the latter case, add the normalized direction vectors and compare the angle of the sum to the vector (0, 1), which represents the vertical direction.
Be aware that all this assumes that the camera points into the direction of the (straight) lane.
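A sketch of that check, assuming each lane line is available as two endpoints in image coordinates (the Segment struct and both thresholds are placeholders, not anything from the original code):

    #include <opencv2/core.hpp>   // only for CV_PI
    #include <cmath>

    struct Segment { double x1, y1, x2, y2; };   // hypothetical line representation

    // Angle of the line's direction vector, folded into (-pi/2, pi/2] so the
    // order of the two endpoints does not matter.
    double lineAngle(const Segment& s)
    {
        double a = std::atan2(s.y2 - s.y1, s.x2 - s.x1);
        if (a >  CV_PI / 2) a -= CV_PI;
        if (a <= -CV_PI / 2) a += CV_PI;
        return a;
    }

    // On a straight road the two image-plane angles should be roughly opposite,
    // so their sum is near zero while each individual angle stays well away
    // from horizontal. Both thresholds are in radians.
    bool looksStraight(const Segment& left, const Segment& right,
                       double sumTol = 0.1, double minAbs = 0.5)
    {
        double a1 = lineAngle(left), a2 = lineAngle(right);
        return std::fabs(a1 + a2) < sumTol &&
               std::fabs(a1) > minAbs && std::fabs(a2) > minAbs;
    }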

How to determine the DirectionVector of a Line?

I have a programming problem in the context of geometric shape recognition (rectangles, ovals, etc.).
In this context, if I have a simple line, from say (x1,y1) to (x2,y2), made up of a series of points (x-y pairs),
how would I calculate the DIRECTION VECTOR for this line? I understand the math behind it, but I find the algorithm provided by my client a bit vague. I'm stuck at step 3 of this algorithm.
The following is the algorithm (in English as opposed to pseudocode), as provided by my client.
1) Break the points that make up a "stroke" or "line" into sets of X points (where by default X = 20; we will adjust) = a PointSet
2) For each PointSet, find the EndPoint (average of the points at the ends) for the first and last Y points (where by default Y = X/5).
3) Find the DirectionVector of the PointSet = subtract the CentrePoints
4) For each pair of PointSets, find the AngleChange = the angle between the DirectionVectors of the PointSets.
and so on...
I am trying to figure out what step (3) means...
Any help would be deeply appreciated, folks! Thanks in advance.
If the segment from (x1,y1) to (x2,y2) is short, then you can approximate its direction vector simply by: (x2-x1)*i + (y2-y1)*j.
Otherwise, you could use PCA to estimate the direction vector as the principal axis of the individual points forming the segment.
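For instance, a sketch using OpenCV's cv::PCA, assuming the points of one segment (or PointSet) are available as cv::Point2f; the function name is illustrative, not part of the client's specification:

    #include <opencv2/core.hpp>
    #include <vector>

    // Estimate the direction of a point set as its first principal axis.
    cv::Point2f directionVector(const std::vector<cv::Point2f>& points)
    {
        cv::Mat data(static_cast<int>(points.size()), 2, CV_32F);
        for (int i = 0; i < data.rows; ++i) {
            data.at<float>(i, 0) = points[i].x;
            data.at<float>(i, 1) = points[i].y;
        }
        // Keep only the first principal component (the dominant direction).
        cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, 1);
        return cv::Point2f(pca.eigenvectors.at<float>(0, 0),
                           pca.eigenvectors.at<float>(0, 1));
    }

Note that the sign of the principal axis is arbitrary, so compare directions modulo 180° when you later compute the AngleChange.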

How to test proximity of lines (Hough transform) in OpenCV

(This is a follow-up from this previous question).
I was able to successfully use OpenCV / Hough transforms to detect lines in pictures (scanned text); at first it detected many, many lines (at least one per line of text), but by adjusting the 'threshold' parameter via trial and error, it now only detects "real" lines.
(The 'threshold' parameter is dependent on image size, which is a bit of a problem if one has to deal with images of different resolutions, but that's another story).
My problem is that the Hough transform sometimes detects two lines where there is only one; those two lines are very near one another and (apparently) parallel.
=> How can I identify that two lines are almost parallel and very near one another? (so that I can keep only one).
If you use the standard or multi-scale Hough, you will end up with the rho and theta coordinates of the lines in polar coordinates. Rho is the distance to the origin, and theta is normally the angle between the detected line and the Y axis. Without looking into the details of the Hough transform in OpenCV, this is a general rule in those coordinates: two lines will be almost parallel and very near one another when:
- their thetas are nearly identical AND their rhos are nearly identical
OR
- their thetas are near 180 degrees apart AND their rhos are near each other's negative
I hope that makes sense.
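For example, a sketch of that test in C++, for (rho, theta) pairs as returned by cv::HoughLines (theta in radians); both tolerances are placeholders you would tune for your image resolution:

    #include <opencv2/core.hpp>
    #include <cmath>

    // True when two (rho, theta) lines are almost the same physical line:
    // either both coordinates nearly match, or the thetas are ~180 degrees
    // apart and the rhos are roughly each other's negative.
    bool nearlySameLine(const cv::Vec2f& a, const cv::Vec2f& b,
                        float rhoTol = 10.0f, float thetaTol = 0.05f)
    {
        float dTheta = std::fabs(a[1] - b[1]);
        float dRho   = std::fabs(a[0] - b[0]);
        if (dTheta < thetaTol && dRho < rhoTol)
            return true;
        if (std::fabs(dTheta - static_cast<float>(CV_PI)) < thetaTol &&
            std::fabs(a[0] + b[0]) < rhoTol)
            return true;
        return false;
    }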
That's interesting about the theta being the angle between the line and the y-axis.
Generally, theta is visualized as the angle from the x-axis to the line that is perpendicular to the line in question, and rho is the length of that perpendicular. Thus, theta = 90° and rho = 20 would mean a horizontal line 20 pixels up from the origin.
A nice image is shown on Hough Transform question
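For reference, the usual way to turn a (rho, theta) pair into two drawable endpoints looks roughly like this (the 1000-pixel extension is an arbitrary choice, large enough to span a typical image):

    #include <opencv2/core.hpp>
    #include <cmath>

    // Convert a (rho, theta) pair into two far-apart endpoints for drawing.
    // (x0, y0) is the foot of the perpendicular dropped from the origin onto
    // the line; the line itself runs at right angles to that perpendicular.
    void polarToSegment(float rho, float theta, cv::Point& p1, cv::Point& p2)
    {
        double a = std::cos(theta), b = std::sin(theta);
        double x0 = a * rho, y0 = b * rho;   // closest point on the line to the origin
        p1 = cv::Point(cvRound(x0 - 1000 * b), cvRound(y0 + 1000 * a));
        p2 = cv::Point(cvRound(x0 + 1000 * b), cvRound(y0 - 1000 * a));
    }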
