I have a set of lines that have been transformed using a perspective transformation.
The information I know about these lines is:
They are lines, not line segments (no length, start point, or end point is known).
They are all parallel
Distances between them are unknown and vary from pair to pair.
So, to make it clear again: I do not know the blue lines; I have only the green ones. I do not even know the homography matrix that was applied.
Question:
I need a method, a measurement, an algorithm, or even a hint about the condition that all the green lines must satisfy.
For example, if I add this red line to the set:
It is obvious that the red line could not have existed in the set of lines before the transformation was applied, so it is of course noise.
So I need a measurement that gives a positive response when applied to the green lines, and a negative response (or at least a lower confidence) when the red line is added to the green set.
P.S. OpenCV is available and preferred.
If they were parallel before the perspective projection, all the lines should intersect in the same vanishing point. I would compute this point using your green lines (maybe this is helpful), and if the distance from your red line to the vanishing point is too big, the line can be rejected.
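A minimal sketch of that test, assuming each line is given as coefficients (a, b, c) of ax + by + c = 0 and using NumPy (the function names and tolerance are my own choices): the common vanishing point is the null vector of the stacked line coefficients, recoverable via SVD.

    import numpy as np

    def fit_vanishing_point(lines):
        # lines: (N, 3) array of (a, b, c) coefficients, one line per row.
        # The shared vanishing point p minimizes |L p|, so take the right
        # singular vector with the smallest singular value.
        L = np.asarray(lines, dtype=float)
        _, _, vt = np.linalg.svd(L)
        return vt[-1]  # homogeneous coordinates (x, y, w)

    def distance_to_point(line, vp):
        a, b, c = line
        x, y, w = vp
        if abs(w) < 1e-12:
            # Point at infinity: the lines stayed parallel, so this measures
            # how far the candidate's direction deviates, not a pixel distance.
            return abs(a * x + b * y)
        return abs(a * x / w + b * y / w + c) / np.hypot(a, b)

Fit the point on the green lines alone, then reject a candidate whose distance_to_point exceeds a threshold of a few pixels; the threshold depends on how noisy your line detection is. The smallest singular value also serves as a rough confidence score for the whole set meeting in one point.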
Related
I have an algorithm that simply goes through a number of corners and finds those which are parallel. My problem, as shown below, is that I sometimes get false positives.
To eliminate these I was going to check whether both points fell onto a single Hough line, but this would be quite computationally intensive, and I was wondering whether anyone had any simpler ideas.
Thanks.
OK, based on the comments, this should be fixable. When you detect a pair of parallel lines, get the equation of the line through the two corners you used to construct it; this line will be of the form y = mx + c. Then, for every y coordinate between the two points, compute the corresponding x coordinate. This gives you the set of all pixels that the line segment covers. Go through these pixels and check whether the intensity at each one is closer to black than to white. If a majority of the pixels in the set are blackish, it's a line; if not, it's probably not a line.
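A minimal sketch of that check, assuming a single-channel grayscale image and corners as (x, y) tuples (the threshold of 128 and the majority fraction are arbitrary choices); sampling with np.linspace also sidesteps the vertical-line case where y = mx + c breaks down:

    import numpy as np

    def is_dark_segment(gray, p0, p1, dark_thresh=128, majority=0.5):
        # Sample roughly one pixel per unit of length along the segment.
        n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1])) + 1
        xs = np.linspace(p0[0], p1[0], n).round().astype(int)
        ys = np.linspace(p0[1], p1[1], n).round().astype(int)
        samples = gray[ys, xs]
        # Accept the pair only if most sampled pixels are closer to black.
        return np.mean(samples < dark_thresh) >= majority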
I have to calculate the slope (or angle) of every detectable line in the image, and, if possible, even detect changes in the slope of a line.
I've performed a 2D Fourier transform, so I know the neighborhood average angle in every area (blocks of 64x64 px). I've even tried a Hough transform, but neither Sobel nor Prewitt edge detection seems to detect these lines appropriately.
Please note that some of the lines cross each other, and some aren't straight.
Is there a method to detect the slope of each line? Or to detect these lines so that a useful Hough transform can be performed?
If you need the full image I can upload it somewhere.
Image
Greetings Adamek,
I hope it is not too late. Here are some quick ideas:
1) Using a Hough transform to detect lines is a good idea as a first step.
2) The second step would be some kind of labeling, to really know which lines there are. The most difficult problem to address is probably how to determine the start and end of each line and to separate potentially connected ones. Search for the 'labeling' keyword in this context; that should give some results.
3) Afterwards, having the start and end points, I would
a) calculate a regression line for each line if you need more exact data for further analysis, or
b) just compute slope and intercept via f(x) = mx + n, where m is the slope and n the intercept. Given two points in 2D, this is easily done as follows:
// slope from the two endpoints (two-point form)
slope = (yRight - yLeft) / (xRight - xLeft);
// intercept: average of the estimates from the left and right endpoints
m_oIntercept = ((yLeft - slope*xLeft) + (yRight - slope*xRight)) * 0.5;
And don't forget to test for fabs(xRight - xLeft) < eps beforehand, to avoid division by zero.
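If you go with option (a), a minimal sketch using NumPy's least-squares fit over all pixels labeled as one line (the coordinates here are made-up example data):

    import numpy as np

    # xs, ys: coordinates of all pixels labeled as belonging to one line
    xs = np.array([10.0, 12.0, 15.0, 20.0, 24.0])
    ys = np.array([5.0, 6.0, 8.0, 10.0, 12.0])
    slope, intercept = np.polyfit(xs, ys, 1)  # degree-1 least-squares fit
    angle = np.degrees(np.arctan(slope))      # slope expressed as an angle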
Hope that helps,
G.
How can I tell how many lines there are in a photo using OpenCV?
In the example below there are 3:
http://www.uploadimage.co.uk/images/1511642.jpg
Thanks
Try dilating the white portion of the image first. This will make the black strip thinner. Once it is comparatively thin, you can use the Hough lines transform, which returns an array of the lines it finds. The task is then as simple as counting the number of elements in the array. Of course, you will have to do a fair amount of trial and error in passing appropriate parameters to the Hough transform, lest it return closely spaced lines that represent the same region of the image. You will probably also need to group the lines based on slope and intercept.
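A minimal sketch of that pipeline (the file name, kernel size, and Hough parameters are placeholders you will have to tune for your image):

    import cv2
    import numpy as np

    gray = cv2.imread("strips.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    _, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    bw = cv2.dilate(bw, np.ones((5, 5), np.uint8))  # grow white, thin the black strips
    edges = cv2.Canny(bw, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    print(0 if lines is None else len(lines))

Counting the raw HoughLinesP output will usually over-count, since each strip can produce several nearby detections; merging lines whose slope and intercept are close is the grouping step mentioned above.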
Are there any implementations or papers that modify the Hough transform to detect the width of line segments? Hough space maxima can be used to determine potential lines, and line segments are groups of pixels that are on the line for sufficient intervals. After doing that, I'm trying to determine the width of each line segment.
All I've been able to find thus far is this poster:
http://www.cse.cuhk.edu.hk/~lyu/staff/SongJQ/poster_47_song_j.pdf
Depending on whether you are willing to spend some money, there is a package called Halcon that has the kind of things you are after.
For example http://www.mvtec.com/download/reference/lines_gauss.html (that's not a Hough transform, but the main package does have those as well).
I used Google to find a paper called "Extraction of Curved Lines from Images" which mentions line width (I can't get the link to work either).
If you have a binary mask for each line segment, could you take the maximum of the distance transform on that segment? It tells you how far the center of the line is from the edge, so the width should be 2*max(distanceTransform(segment)) - 1 for odd widths and 2*max(distanceTransform(segment)) for even widths.
OpenCV has an implementation of this method here. It also has HoughLinesP to detect line segments, but it sounds like you already have that worked out.
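A minimal sketch, assuming each segment is available as an 8-bit binary mask with the segment's pixels set to 255 (building that mask from your Hough output is up to you):

    import cv2

    def segment_width(mask):
        # Distance from every foreground pixel to the nearest background pixel.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        # The centerline is furthest from the edge; see the odd/even caveat above.
        return 2.0 * dist.max() - 1.0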
I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, each of which is defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form one point, but until then they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, a point may pass through the outer edge of the shape, resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea, but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
Finding the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that can be found with Google.
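A minimal sketch of that idea (the function and parameter names are my own; amount is how many pixels each point moves toward the center):

    import numpy as np

    def contract(points, amount):
        # points: (N, 2) array of the ordered selection points.
        pts = np.asarray(points, dtype=float)
        center = pts.mean(axis=0)                 # center of mass of the points
        vecs = center - pts                       # vectors pointing inward
        norms = np.linalg.norm(vecs, axis=1, keepdims=True)
        return pts + vecs / np.maximum(norms, 1e-9) * amount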
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
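For the inside/outside test, OpenCV's pointPolygonTest already handles arbitrary (including non-convex) polygons; a minimal sketch:

    import cv2
    import numpy as np

    def inside(shape_points, x, y):
        # shape_points: ordered (N, 2) points defining the shape's boundary.
        contour = np.asarray(shape_points, dtype=np.float32).reshape(-1, 1, 2)
        # Returns +1 inside, -1 outside, 0 exactly on the boundary.
        return cv2.pointPolygonTest(contour, (float(x), float(y)), False) > 0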
Now, obviously, every single one of your actual points will return false, because they're all on the border (by definition). However, you now have a matrix of points around each point of interest that defines where "inside" and "outside" are. Average all of the "inside" points and move your actual point along the vector from itself toward that average. You could play with different erosion kernels to see what works best.
You could even use a kernel with floating-point weights instead of either/or values, which would affect the average calculation in proportion to the weights. With this you could approximate a circular kernel with a low number of points. Try the simpler method first.
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths required; no need to muck about with normalizing vectors.
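A minimal sketch of those four steps (the names are my own):

    import numpy as np

    def shrink(points, scale):
        # points: (N, 2) selection points; scale: factor in the range 0.0..1.0.
        pts = np.asarray(points, dtype=float)
        center = pts.mean(axis=0)      # 1) find the selection center
        mapped = pts - center          # 2) map so the center is at (0, 0)
        scaled = mapped * scale        # 3) scale X and Y
        return scaled + center         # 4) remap to the original coordinates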