Trapezoid fitting in OpenCV

I am using OpenCV to do segmentation with methods like GrabCut and watershed, and then findContours to obtain the contour. The actual contour I would like to obtain is a trapezoid, and approxPolyDP and convexHull cannot do this. Can somebody give me some hints? Maybe there are other methods, rather than segmentation, to obtain it? I can think of edge detection with something like Canny, but the result is not good because of the unconstrained background: a lot of segments have to be connected, and that is quite hard.
A sample image is attached (the first one, a human shoulder). I would like to find the contour and the locations where the contour/edge changes direction, that is, the human shoulders. As in the second image, the right corner point can move, resulting in a trapezoid.
1.jpg: original image
2.jpg: the contour is labelled by hand
3.jpg: fitted lines
https://drive.google.com/folderview?id=0ByQ8kRZEPlqXUUZyaGtpSkJDeXc&usp=sharing
Thanks.
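For reference, a minimal sketch of the pipeline described above (GrabCut segmentation, then findContours and approxPolyDP). The file name, rectangle, and iteration count are placeholders, and the final approxPolyDP step illustrates the limitation mentioned: nothing constrains its output to be a trapezoid.

```python
import cv2
import numpy as np

img = cv2.imread("1.jpg")  # the original shoulder image from the linked folder

# Rough rectangle around the person; the coordinates are placeholders.
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the segmentation mask.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

# OpenCV 4.x return signature; older versions also return the image.
contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
body = max(contours, key=cv2.contourArea)

# approxPolyDP simplifies the contour, but nothing forces the result to be a
# trapezoid, which is exactly the limitation described above.
poly = cv2.approxPolyDP(body, 0.01 * cv2.arcLength(body, True), True)
```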

Related

Any way to get strongest edge local to a contour line using cv2 or scikit-image?

I am working on accurately segmenting objects from an image.
I have found contour lines by using a simple rectangular prism in HSV space as a color filter (followed by some morphological operations on the resulting mask to clean up noise). I found this approach to be better than applying Canny edge detection to the whole image, as that just picked up a lot of other edges I don't care about.
Is there a way to refine the contour line I have extracted so that it clips to the strongest local edge, somewhat like Adobe Photoshop's smart cropping utility?
Here's an image of what I mean
You can see a boundary between the sky blue and the gray. The dark blue is a drawn-on contour. I'd like to somehow clip this to the nearby edge. It also looks like there are other lines in the gray region, so I think the algorithm should do some sort of more global optimisation to ensure that the "clipping" action doesn't jump randomly between my boundary of interest and the nearby lines.
Here are some ideas to try:
Morphological snakes: https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_morphsnakes.html
Active contours: https://scikit-image.org/docs/dev/auto_examples/edges/plot_active_contours.html
Whatever livewire is doing under the hood: https://github.com/PyIFT/livewire-gui
Based on this comment, the last one is the most useful.
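For the active-contour idea, here is a minimal sketch using scikit-image's active_contour. The file name and the circular initial snake are placeholders; in practice the initial snake would be the (row, col) points of the contour extracted from the HSV mask, and alpha/beta/w_edge need tuning so the snake locks onto the nearby edge rather than jumping to the other lines in the gray region.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = rgb2gray(io.imread("scene.png"))  # hypothetical file name

# Stand-in initial contour: a circle of (row, col) points.
# Replace this with the contour you extracted from the HSV mask.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(s), 150 + 60 * np.cos(s)])

# The snake is pulled toward strong edges (w_edge) while staying smooth
# (alpha, beta); smoothing the image first makes the edge attraction basin wider.
snake = active_contour(
    gaussian(img, sigma=2),
    init,
    alpha=0.015, beta=10, gamma=0.001, w_edge=1, w_line=0,
)
print(snake.shape)  # refined (row, col) contour points
```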

OpenCV: keep only contours with an area larger than, e.g., 200 px

I use background subtraction to detect a hand. http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html
Then I would like to outline just the hand, to get rid of imperfections from the background.
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
But that method finds all the contours. I would like to keep only those contours whose area is larger than, for example, 200 px. How can I do this? Is there a better method to obtain just the hand in the picture?
There are two contour properties from which you can extract this information (a minimal area-filtering sketch follows the list):
1. contourArea
2. arcLength
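A minimal sketch of filtering by area, assuming a binary foreground mask from the background subtractor (the file name is a placeholder) and the OpenCV 4.x findContours return signature:

```python
import cv2

# Binary foreground mask produced by the background subtractor (assumed).
fgmask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only contours whose area exceeds 200 px (presumably the hand).
big = [c for c in contours if cv2.contourArea(c) > 200]

out = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, big, -1, (0, 255, 0), 2)
cv2.imwrite("hand_contours.png", out)
```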
Another way to focus only on the hand is to train a Haar cascade for hand detection.
Please refer to the similar question asked before.

Detect non-closed contour on opencv

I'm doing computer vision project for automatic card detection.
I need to separate the card from the background. I have applied Canny edge detection, using the automatic parameter settings from this:
Automatic calculation of low and high thresholds for the Canny operation in opencv
The result is excellent. However, sometimes the Canny output is not perfect, like this.
I have applied cvFindContour to detect the box. However, due to a "hole" on the upper side, OpenCV failed to detect the contour.
How do I tune cvFindContour to detect the contour, or should I tune the Canny edge detection instead?
There are multiple possible solutions.
The simplest one may be:
If findContours does not find a closed contour, repeat the Canny filter with a slightly decreased low_threshold until you find a closed contour. If the closed contour has roughly the right size and shape, it is a card. The answer linked by Haris explains how to check whether a contour is closed.
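A rough sketch of that loop, assuming the OpenCV 4.x findContours signature. The thresholds are placeholders, and the "closed card" test here is only a simple stand-in heuristic (a 4-vertex polygon of plausible area), not the exact check from the linked answer:

```python
import cv2
import numpy as np

img = cv2.imread("card.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

def find_card(gray, high=200, low_start=100, low_min=10, step=10):
    """Lower the Canny low threshold until a card-like closed contour appears."""
    for low in range(low_start, low_min - 1, -step):
        edges = cv2.Canny(gray, low, high)
        # Close small gaps so the outline has a chance of being one contour.
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            # Heuristic "closed card" test: 4 corners and a plausible area.
            if len(approx) == 4 and cv2.contourArea(approx) > 0.1 * gray.size:
                return approx, low
    return None, None

card, used_low = find_card(img)
print(used_low)
```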
Another rather simple solution:
Don't apply Canny to the image at all. Execute findContours on the Otsu-thresholded image. Optionally, use morphological opening and closing on the thresholded image to remove noise before findContours.
findContours does not need an edge image; it is usually run on a thresholded image. I don't know your source image, so I cannot say how well this would work, but you would definitely avoid the problem of holes in the shape.
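A minimal sketch of this Otsu-plus-morphology route (the file name and kernel size are placeholders):

```python
import cv2
import numpy as np

gray = cv2.imread("card.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Otsu picks the threshold automatically; invert the flag if the card is darker
# than the background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Opening removes small speckles, closing fills small holes inside the card.
kernel = np.ones((5, 5), np.uint8)
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
card = max(contours, key=cv2.contourArea)  # assume the card is the largest blob
```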
If the source image does not allow this, then the following may help:
Use watershed to separate the card from the background: use a high threshold to get seed pixels that are definitely foreground and a low threshold to get pixels that are definitely background, then grow those two seeds using cv::watershed().
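A minimal sketch of the seeded watershed idea; the 200/50 seed thresholds are placeholders and assume the card is brighter than the background:

```python
import cv2
import numpy as np

img = cv2.imread("card.jpg")  # hypothetical input (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Sure foreground: very bright pixels; sure background: very dark pixels.
_, sure_fg = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
_, not_bg = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)

markers = np.zeros(gray.shape, np.int32)
markers[sure_fg == 255] = 2   # seed label for the card
markers[not_bg == 0] = 1      # seed label for the background
# Everything still 0 is "unknown" and will be assigned by watershed.

cv2.watershed(img, markers)
card_mask = np.uint8(markers == 2) * 255
```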
If the background in that image is the same color as the card, then the previous two methods may not work so well. In that case, your best bet may be the solution suggested by Micka:
Use the Hough transform to find the 4 most prominent lines in the image and form a rectangle from these 4 lines.
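A rough sketch of the Hough-lines idea, assuming at least four strong lines are found (HoughLines returns lines ordered by vote count); the card corners are taken from pairwise intersections of the strongest lines, and the Canny and vote thresholds are placeholders:

```python
import cv2
import numpy as np
from itertools import combinations

img = cv2.imread("card.jpg")  # hypothetical input
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)

lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)

def intersection(l1, l2):
    """Intersect two lines in (rho, theta) form; returns None if near-parallel."""
    (r1, t1), (r2, t2) = l1[0], l2[0]
    a = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < 1e-6:
        return None
    x, y = np.linalg.solve(a, np.array([r1, r2]))
    return int(x), int(y)

# The four card corners should be among the pairwise intersections
# of the four strongest lines.
corners = []
for l1, l2 in combinations(lines[:4], 2):
    p = intersection(l1, l2)
    if p is not None:
        corners.append(p)
print(corners)
```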

Finding a grid in an image

Given a match-3 game screenshot (for example http://www.gameplay3.com/images/games/jewel-quest-ii-01S.jpg), what would be the correct way to find the bounding box of the grid (the table of tiles)? The board doesn't have to be a perfect rectangle (as can be seen in the screenshot), but each cell is completely square.
I've tried several games and found that there are some per-game image transformations that can be done to enhance the tiles inside the grid (for example, in this game it's enough to take the V channel of HSV color space). Then I can enlarge the tiles so that they overlap, find the largest contour of the image, and get the bounding box from it.
The problem with the above approach is that every game (or even level within the same game) may need a different transformation to get hold of the tiles. So the question is: is there a standard way to enhance either the tiles inside the grid or the grid's lines? (I've tried finding lines with the Hough transform, but although the grid seems pretty visible to the eye, Hough doesn't find it.)
Also, what if the screenshot is obtained using a phone camera instead of taking a screenshot of a desktop? From my experience, captured images have less well-defined colors (depending on the lighting) and can also be distorted a little, as there is no way to hold the phone exactly in front of the screen.
I would go with the following approach for a screenshot:
Find edges in the image using, for example, a Canny-like edge detector.
Perform a Hough line transform. This should work quite nicely on the edge image.
If you have some information about the size of the tiles, you could eliminate false-positive lines using some sort of spatial model of the grid (e.g. lines having only a small angle to the x/y axes of the image, and/or the distance/angle of tile borders). A rough sketch of steps 1-3 follows this list.
Identify tile borders under the found Hough lines by looking for edges detected by Canny under/next to those lines.
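A rough sketch of steps 1-3, assuming a hypothetical screenshot file; the Canny and HoughLinesP parameters are placeholders to tune per game:

```python
import cv2
import numpy as np

img = cv2.imread("board.jpg")  # hypothetical screenshot
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform; threshold/minLineLength are placeholders.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)

grid_lines = []
for x1, y1, x2, y2 in lines[:, 0]:
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
    # Keep only lines close to the image axes (step 3 above).
    if angle < 5 or angle > 175 or abs(angle - 90) < 5:
        grid_lines.append((x1, y1, x2, y2))

# The bounding box of the surviving lines approximates the grid.
if grid_lines:
    xs = [v for l in grid_lines for v in (l[0], l[2])]
    ys = [v for l in grid_lines for v in (l[1], l[3])]
    print(min(xs), min(ys), max(xs), max(ys))
```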
Which implementation of the Hough transform did you use? How did you preprocess the image?
Another approach would be some sort of machine learning. As you are working in OpenCV, you could use a Haar-like feature detector. An example of face detection using Haar-like features can be found here:
OpenCV Haar Face Detector example
Another machine learning approach would be a Histogram of Oriented Gradients (HOG) detector in combination with a Support Vector Machine (SVM). An example is located here:
HOG example
You can find general information about HOG detection at:
HOG detection

Get edge co-ordinates after edge detection (Canny)

I have been working with OpenCV for a fairly short time. I have performed Canny edge detection on an image, followed by dilation, to further separate the object (in my case a square) from the background.
My problem now is to identify graspable regions in 2D using an algorithm that requires me to handle the co-ordinates of the points on those edges. Is there any way I can use OpenCV to get the co-ordinates of the corners, so I can find the equations of the lines forming the edges of the square? I know the size of the square. My problem involves 2D co-ordinate geometry, hence the need for co-ordinates.
I can provide the image after edge detection and dilation if need be. Help would be appreciated a lot.
Just offering a second method - not guaranteed to work.
Step 1: extract connected components and their contours. This can be applied after the Canny edge detection step.
FindContours
Step 2: if the contours are a fairly good approximation of a square, you can use their bounding box directly.
BoundingRect - if the rectangles are always upright (not rotated)
MinAreaRect - if the rectangles are rotated.
The reason for the disclaimer is that this only works on very clean results, without any broken edges or gaps in the Canny output. If you need a more robust way of finding rectangles, a Hough transform will be necessary.
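A minimal sketch of this contour-plus-bounding-box route, assuming the Canny-plus-dilation result is saved as a binary image (the file name is a placeholder) and the OpenCV 4.x findContours signature:

```python
import cv2

edges = cv2.imread("square_edges.png", cv2.IMREAD_GRAYSCALE)  # Canny + dilation result

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
square = max(contours, key=cv2.contourArea)

# Upright case: x, y, w, h give the four corners directly.
x, y, w, h = cv2.boundingRect(square)
upright_corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]

# Rotated case: minAreaRect returns (center, (w, h), angle); boxPoints gives corners.
rect = cv2.minAreaRect(square)
rotated_corners = cv2.boxPoints(rect).astype(int)
print(upright_corners, rotated_corners)
```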
You could use the corner detectors provided in OpenCV, like Harris or the corner eigenvalue methods. Here's an example of that along with full-fledged code.
In case other features also throw up corners, you may need to go in for connected component analysis.
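A minimal sketch of the corner-detector route using goodFeaturesToTrack, which wraps the Harris and eigenvalue corner measures; the file name and parameters are placeholders, and maxCorners=4 assumes the square's corners are the strongest ones in the image:

```python
import cv2

gray = cv2.imread("square.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Returns point coordinates directly, which is what the 2D geometry needs.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01,
                                  minDistance=20, useHarrisDetector=True)
for x, y in corners.reshape(-1, 2):
    print(float(x), float(y))
```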
