Get edge co-ordinates after edge detection (Canny) - opencv

I have been working with OpenCV for a fairly short time. I have performed Canny edge detection on an image, followed by dilation to further separate the object (in my case a square) from the background.
My problem now is to identify graspable regions in 2D using an algorithm that requires the coordinates of the points along those edges. Is there any way I can use OpenCV to get the coordinates of the corners, so I can find the equations of the lines forming the edges of the square? I know the size of the square. My problem involves 2D coordinate geometry, hence the need for coordinates.
I can provide the image after edge detection and dilation if need be. Any help would be much appreciated.

Just offering a second method - not guaranteed to work.
Step 1: extract connected components and their contours. This can be applied after the Canny edge detection step.
FindContours
Step 2: if the contours are a fairly good approximation of a square, you can use their bounding boxes directly.
BoundingRect - if the rectangles are always upright (not rotated)
MinAreaRect - if the rectangles are rotated.
The reason for the disclaimer is that this only works on very clean results, without any broken edges or gaps in the Canny output. If you need a more robust way of finding rectangles, a Hough transform will be necessary.

You could use the corner detectors provided in OpenCV, like Harris or corner eigenvalues (GoodFeaturesToTrack). Here's an example of that along with full-fledged code.
If other features are also throwing up corners, you may need to use connected-component analysis.

Related

Detect missing squares in color calibration chart opencv python

I am currently working on a method to extract colors from a Macbeth color chart. So far I have had moderate success by thresholding and then extracting square contours. Sadly though, colors that are too close to each other either mix together or do not get detected.
The code in its current form:
https://pastebin.com/mNi0TcDE
The image before any processing
After thresholding, you can see that there are areas where the lines are incomplete because the color differences are too small. I have tried to use dilation to mitigate these issues, and it does work to a degree, but not well enough to detect all the squares.
Image after thresholding
This results in the following contours being detected
Detected contours
I have tried using:
Hough lines; sadly, no lines were detected here.
Centroids of contours, but I was unable to find a way to use the centroids to draw lines and detect the centers of the missing contours.
Corner detection; corners were found, but I could not find a practical way to put them to use.
Can anyone point me in the right direction?
Thanks in advance,
Emil
Hmm, if your goal is color calibration, you really do not need to detect the squares in their entirety. A 10x10 sample near the center of each physical square's image will give you 100 color samples, which is plenty for any reasonable calibration procedure.
There are many ways to approach this problem. If you can guarantee that the chart will cover the image, you could even just do k-means clustering, since you know in advance the exact number of clusters you seek.
If you insist on using geometry, I'd do template matching in scale + angle space: it is reasonable to assume that the chart will be mostly front-facing and only slightly rotated, so you only need to estimate the scale and a small rotation about the axis orthogonal to the chart.

Detect non-closed contour on opencv

I'm doing a computer vision project for automatic card detection.
I need to separate the card from the background. I have applied Canny edge detection, using the automatic parameter settings from this:
Automatic calculation of low and high thresholds for the Canny operation in opencv
The result is excellent. However, sometimes the Canny output is not perfect, like this.
I have applied cvFindContours to detect the box. However, due to a "hole" on the upper side, OpenCV fails to detect the contour.
How do I tune cvFindContours to detect the contour, or should I tune the Canny edge detection instead?
There are multiple possible solutions.
The simplest one may be:
If FindContours does not find a closed contour, repeat the Canny filter with a slightly decreased low threshold until you find a closed contour. If the closed contour has roughly the right size and shape, it is a card. The answer linked by Haris explains how to check whether a contour is closed.
Another rather simple solution:
Don't apply Canny to the image at all. Execute findContours on the Otsu-thresholded image. Optionally, use morphological opening and closing on the thresholded image to remove noise before findContours.
FindContours does not need an edge image; it is usually executed on a thresholded image. I don't know your source image, so I cannot say how well this would work, but you would definitely avoid the problem of holes in the shape.
If the source image does not allow this, then the following may help:
Use watershed to separate the card from the background: use a high threshold to get some seed pixels that are definitely foreground and a low threshold to get pixels that are definitely background, then grow those two seeds using cv::watershed().
If the background in that image is the same color as the card, then the previous two methods may not work so well. In that case, your best bet may be the solution suggested by Micka:
Use the Hough transform to find the 4 most prominent lines in the image, and form a rectangle with these 4 lines.

trapezoid fitting in OpenCV

I am using OpenCV to do segmentation with methods like GrabCut and watershed, then findContours to obtain the contour. The actual contour I would like to obtain is a trapezoid, and the functions approxPolyDP and convexHull cannot produce it. Can somebody give me some hints? Maybe there are methods other than segmentation to obtain it? I can think of edge detection with methods like Canny, but the result is not good because of the unconstrained background: a lot of segments would have to be connected, and that is rather hard.
The sample images are attached (the first one shows a human shoulder). I would like to find the contour and the locations where the contour/edge changes direction, i.e. the human shoulders. As in the second image, the right corner point can change, resulting in a trapezoid.
1.jpg: original image
2.jpg: the contour is labelled by hand
3.jpg: fitted lines
https://drive.google.com/folderview?id=0ByQ8kRZEPlqXUUZyaGtpSkJDeXc&usp=sharing
Thanks.

Correlating a vector with edges in an image

I'm trying to implement user-assisted edge detection using OpenCV.
Assume you have an image in which we need to find a polygonal shape. For the sake of discussion, let's say we need to find the top of a rectangular table in a picture. The user will click on the four corners of the table to help us narrow things down. Connecting those four points gives us a polygon, or four vectors.
But the user is not very accurate when clicking on those corners. So I'd like to use edge information from the image to increase the accuracy.
I'm using a Canny edge detector with a fairly high threshold to determine the important edges in my image (more precisely, I scale down, blur, convert to grayscale, then run Canny). How can I compute whether a vector aligns with an edge in my image? If I have a way to compute "alignment", my overall algorithm comes down to perturbing the locations of the four corner points and computing the total "alignment" of my polygon with the edges in the image, until I find an optimum.
What is a good way to define and compute this "alignment" metric?
You may want to try FindContours to detect your table (or any other contour), and then build a contour from the user-supplied points as well. After this, you can read about contour moments, by which you can compare contours: compare all the contours from the image with the one built from the user's points, then select the closest match.

OpenCV detect corners

I'm using OpenCV on the iPhone. I want to find a Sudoku in a photo.
I started with some Gaussian blur, adaptive thresholding, inverting the image, and dilation.
Then I used findContours and drawContours to isolate the Sudoku grid.
Then I used the Hough transform to find lines, and what I need to do now is find the corners of the grid. The Sudoku photo may be taken at an angle, so I need to find the corners so I can crop and warp the image correctly.
This is how two different photos may look. One is pretty straight and one in an angle:
Probabilistic Hough
http://img96.imageshack.us/i/skrmavbild20110424kl101.png/
http://img846.imageshack.us/i/skrmavbild20110424kl101.png/
(Standard Hough comes in a comment. I can't post more than two links)
So, what would be the best approach to find those corners? And which of the two transforms is easier to use?
Best Regards
Linus
Why not use OpenCV's corner detection? Take a look at cvCornerHarris().
Alternatively, take a look at cvGoodFeaturesToTrack(). It's the Swiss Army Knife of feature detection and can be configured to use the Harris corner detector (among others).
I suggest the following approach. First, find all intersections of the lines. It helps to separate the lines into "horizontal" and "vertical" by angle (i.e. find the two major directions of the lines). Then find the convex hull of the acquired points. Now you have the corners plus some points on the boundaries. You can remove the latter by analysing the angle between neighbouring points on the convex hull: corners will have an angle of about 90 degrees, and points on the boundaries about 180 degrees.
