Are there any implementations or papers that modify the Hough transform to detect the width of line segments? Hough space maxima can be used to identify candidate lines, and line segments are then the groups of pixels that lie on a line over sufficiently long intervals. Having done that, I'm trying to determine the width of each line segment.
All I've been able to find thus far is this poster:
http://www.cse.cuhk.edu.hk/~lyu/staff/SongJQ/poster_47_song_j.pdf
If you are willing to spend some money, there is a package called Halcon that has the kind of tools you are after.
For example, http://www.mvtec.com/download/reference/lines_gauss.html (that's not a Hough transform, but the main package does have those as well).
I used Google to find a paper called "Extraction of Curved Lines from Images" which mentions line width (I can't get the link to work either).
If you have a binary mask for each line segment, could you take the maximum of the distance transform over that segment? It should tell you how far the center of the line is from the edge; the width should be 2*max(distanceTransform(segment)) - 1 for odd widths and 2*max(distanceTransform(segment)) for even widths.
OpenCV has an implementation of this method here. It also has HoughLinesP to detect line segments, but it sounds like you already have that worked out.
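For illustration, here is a minimal Python/OpenCV sketch of that idea, assuming the segment is available as an 8-bit binary mask (the function and variable names are mine):

    import cv2
    import numpy as np

    def estimate_width(segment_mask):
        # L2 distance from each foreground pixel to the nearest background pixel
        dist = cv2.distanceTransform(segment_mask, cv2.DIST_L2, 5)
        m = dist.max()  # reached on the centerline of the stroke
        # Per the formula above: 2*m - 1 for odd stroke widths, 2*m for even ones.
        return 2 * m - 1, 2 * m

    mask = np.zeros((50, 100), np.uint8)
    mask[20:27, 10:90] = 255        # a horizontal bar 7 pixels thick
    print(estimate_width(mask))     # roughly (7.0, 8.0); the odd case applies here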
Suppose you have an image like this:
How can you measure the combined length of all the lines in this image?
I have tried (naively) skeletonising the image and then counting the number of pixels. However, this gives inaccurate results, as diagonal steps are actually longer than vertical/horizontal ones.
My other idea is to generate a chain code for all the line segments and then use something like Freeman's method to measure the length from the chain code. However, generating the chain code is going to be quite tricky, I think, as chain codes usually start and stop at the same point, and that won't work for the grid shape.
Am I missing something obvious here? Is there an easier way to do this?
As far as I can see, the strokes are 3 pixels wide, so dividing the number of black pixels by three isn't too bad an approximation.
Alternatively, use a thinning algorithm to reduce the width to a single pixel (8-connectivity), then seed-fill the whole outline using a simple recursive 8-way fill, counting the lateral and diagonal moves separately. In the end the length is given by L + D√2.
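A rough Python sketch of that recipe, assuming scikit-image is available for the thinning step (the names and the neighbour-scanning scheme are mine):

    import numpy as np
    from skimage.morphology import skeletonize

    def skeleton_length(binary):
        """Approximate total curve length as L + D*sqrt(2) over a 1-px skeleton."""
        skel = skeletonize(binary > 0)
        h, w = skel.shape
        L = D = 0
        ys, xs = np.nonzero(skel)
        for y, x in zip(ys, xs):
            # Count each 8-neighbour link exactly once by only looking
            # "forward": right, down, down-left, down-right.
            if x + 1 < w and skel[y, x + 1]:
                L += 1
            if y + 1 < h:
                if skel[y + 1, x]:
                    L += 1
                if x > 0 and skel[y + 1, x - 1]:
                    D += 1
                if x + 1 < w and skel[y + 1, x + 1]:
                    D += 1
        return L + D * np.sqrt(2)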
There is an ellipse in the picture, as shown below.
I have got the points of the contour using OpenCV. But as you can see in the picture, because the resolution is low, there are straight-line artifacts on the contour. How can I fit it into a smooth curve like the blue line?
One method to solve your problem is to vectorize your shape (moving from the simple intensity space to a vector space).
I am not aware of the state of the art in this field. However, from what I learned in school, I can suggest this solution.
Bézier curves: you can try to model your shape using a simple Bézier curve. This is not a hard operation; you can Google for dozens of implementations. Then you can resize it as much as you want, and afterwards render it back to a simple image.
Be aware that you may also use splines instead of Bézier curves.
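As an illustration of the Bézier idea, here is a minimal numpy sketch that least-squares-fits a single cubic Bézier to ordered contour points; note that a closed ellipse needs several such pieces (typically four), so you would split the contour first. All names here are mine:

    import numpy as np

    def fit_cubic_bezier(pts):
        """Least-squares fit of one cubic Bezier to ordered 2-D points (N x 2)."""
        pts = np.asarray(pts, float)
        # Chord-length parameterisation in [0, 1].
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
        t = d / d[-1]
        # Bernstein basis matrix of the cubic.
        B = np.column_stack([(1 - t) ** 3,
                             3 * t * (1 - t) ** 2,
                             3 * t ** 2 * (1 - t),
                             t ** 3])
        # Solve B @ P = pts for the 4 x 2 matrix of control points P.
        P, *_ = np.linalg.lstsq(B, pts, rcond=None)
        return P

    def eval_bezier(P, t):
        """Render the curve back at parameters t, at any resolution you like."""
        t = np.asarray(t, float)[:, None]
        return ((1 - t) ** 3 * P[0] + 3 * t * (1 - t) ** 2 * P[1]
                + 3 * t ** 2 * (1 - t) * P[2] + t ** 3 * P[3])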
Another method would be simpler but less efficient. Since you mentioned OpenCV, you can apply cv::fitEllipse to the points. Be aware that this returns a RotatedRect which contains the ellipse. You can infer your ellipse simply like this:
Center = center of the RotatedRect.
Longest radius = the line that passes through the center and intersects the two short sides of the RotatedRect.
Smallest radius = the line that passes through the center and intersects the two long sides of the RotatedRect.
Once you have the ellipse parameters, you can resize the ellipse as you want and repaint it at the desired size using cv::ellipse (see the sketch below).
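A minimal Python sketch of this, assuming contour is a point set from cv2.findContours with at least five points (the scale factor and canvas size are arbitrary):

    import cv2
    import numpy as np

    def smooth_with_ellipse(contour, scale=2.0, canvas_size=(400, 400)):
        # fitEllipse returns a RotatedRect: ((cx, cy), (w, h), angle),
        # where w and h are the full side lengths of the rectangle.
        (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
        canvas = np.zeros(canvas_size, np.uint8)
        # Repaint at any size: the axes are half the RotatedRect's sides,
        # multiplied by whatever scale you want.
        cv2.ellipse(canvas,
                    (int(cx * scale), int(cy * scale)),
                    (int(w / 2 * scale), int(h / 2 * scale)),
                    angle, 0, 360, 255, 1)
        return canvas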
I know that this is a pseudo-answer. However, I think everything here is easy to apply. If you face any problems implementing it, just leave me a comment.
I have set of lines that have been transformed using a perspective transformation.
The information I know about these lines is:
They are lines, not line segments (no length, start point, or end point is known).
They are all parallel.
The distances between them are unknown and vary from pair to pair.
So, to make it clear again: I do not know the blue lines; I have just the green ones. I do not even know the homography matrix that was applied.
Question:
I need a method, a measurement, an algorithm, or even a hint about the condition that all the green lines must satisfy.
For example, if I add this red line to the set:
It is obvious that the red line could not have existed in the set of lines before the transformation was applied, so it is of course noise.
So I need a measurement that, applied to the green lines, gives a positive response, and that, when the red line is added to the green set, gives a negative response, or at least a lower confidence.
P.S. OpenCV is available and preferred.
If they were parallel before the perspective projection, all the lines should intersect in the same vanishing point. I would say you should compute this point using your green lines (maybe this is helpful), and if the distance from your red line to the vanishing point is too big, the line can be rejected.
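For example, a least-squares sketch in Python, assuming each line is given in normalized homogeneous form (a, b, c) with a² + b² = 1, so the line is ax + by + c = 0 (names and the tolerance are mine):

    import numpy as np

    def vanishing_point(lines):
        """Least-squares intersection of lines given as rows (a, b, c),
        each normalised so a*a + b*b == 1."""
        lines = np.asarray(lines, float)
        A, rhs = lines[:, :2], -lines[:, 2]
        vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return vp  # the point minimising the sum of squared point-line distances

    def residual(line, vp):
        a, b, c = line
        return abs(a * vp[0] + b * vp[1] + c)  # a distance, since (a, b) is unit

    # Accept a candidate line only if residual(line, vp) is below a tolerance
    # of your choosing. Note that near-parallel image lines put the vanishing
    # point close to infinity and make the system ill-conditioned, so guard
    # for that case.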
I want to draw a line with dynamic width, as shown in the attached picture. What would be the best approach for this?
Updated:
My task is to draw a line as the finger moves, with the line width changing as the swipe speed changes; the two (line width and finger swipe speed) are directly proportional.
Since the image you posted doesn't have any consistent height-width proportion to calculate and vary, I doubt this can be achieved exactly as shown.
As another solution, you can draw a line of fixed width, say 2 pixels, and based on the drawn length inflate the width of the line up to the midpoint, then deflate it again from the midpoint to the end point.
You need to watch the difference between x coordinates; otherwise, if a sine wave with tall peaks is drawn, the line widths will overlap each other.
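To illustrate the speed-proportional idea outside iOS, here is a minimal Python/OpenCV sketch that draws each sampled segment with a thickness proportional to the swipe speed (the proportionality constant, clipping bounds, and canvas size are all assumptions):

    import cv2
    import numpy as np

    def draw_variable_width(points, times, k=0.05, max_w=25):
        """points: (x, y) touch samples; times: matching timestamps in seconds."""
        canvas = np.full((400, 600), 255, np.uint8)
        for i in range(1, len(points)):
            (x0, y0), (x1, y1) = points[i - 1], points[i]
            dt = max(times[i] - times[i - 1], 1e-6)
            speed = np.hypot(x1 - x0, y1 - y0) / dt        # pixels per second
            w = int(np.clip(k * speed, 1, max_w))          # width ∝ speed
            cv2.line(canvas, (int(x0), int(y0)), (int(x1), int(y1)), 0, w)
        return canvas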
Edited: this link might be of interest then. You can modify it according to your needs; it's in cocos2d.
There is no direct support for variable-thickness curves in iOS (or Mac OS, for that matter). The cocos2d project looks like a good approach.
There is also no support for soft-edged curves whose edges are feathered to transparent. I've thought about implementing an approach similar to the one outlined in the Cocos link using OpenGL. This would be a good application for a vertex shader, since it would take advantage of the parallel vertex processing and vector math available in shaders.
Take a look at this article, Smooth Freehand Drawing. It might be helpful.
You can manipulate the control points of
[path addCurveToPoint:pts[3] controlPoint1:pts[1] controlPoint2:pts[2]];
and fill the area between the two Bézier paths. I am not sure if it will work, but you can try it if you don't find anything else.
Is there a way to tell how many lines there are in a photo using OpenCV?
In the example below there are 3:
http://www.uploadimage.co.uk/images/1511642.jpg
Thanks
Try dilating the white portion of the image first. This will make the black strips thinner. Once they are comparatively thin, you can use the Hough lines transform, which returns an array of the lines it finds. The task is then as simple as counting the number of elements in that array. Of course, you will have to do a lot of trial and error in passing appropriate parameters to the Hough transform, lest it return several closely spaced lines that represent the same region in your image. You will probably also need to group the lines based on slope and intercept.
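A minimal Python sketch of that pipeline, assuming the posted image has been saved locally; every threshold here is a guess you would have to tune:

    import cv2
    import numpy as np

    img = cv2.imread("1511642.jpg", cv2.IMREAD_GRAYSCALE)
    # Dilating the white regions thins the black strips.
    thin = cv2.dilate(img, np.ones((5, 5), np.uint8))
    edges = cv2.Canny(thin, 50, 150)
    detected = cv2.HoughLines(edges, 1, np.pi / 180, 100)

    # One physical strip often yields several closely spaced Hough maxima,
    # so merge detections whose (rho, theta) nearly coincide.
    groups = []
    if detected is not None:
        for rho, theta in detected[:, 0]:
            if not any(abs(rho - r) < 20 and abs(theta - t) < 0.1
                       for r, t in groups):
                groups.append((rho, theta))
    print(len(groups), "lines found")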