I'm trying to extract the geometries of the papers in the image below, but I'm having some trouble grabbing the contours. I don't know which threshold algorithm to use (here I used a static threshold = 10, which is probably not ideal).
And as you can see, I can get the correct number of images, but I can't get the proper bounds using this method.
Simply applying Otsu just doesn't work; it doesn't capture the geometries.
I assume I need to apply some edge detection, but I'm not sure what to do once I apply Canny or some other edge detector.
I also tried Sobel in both directions (+ve and -ve in x and y), but I'm unsure how to extract the contours from there.
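For reference, the static-threshold pipeline described above is roughly this (a simplified sketch, assuming OpenCV 4.x; the filename and the approxPolyDP epsilon are placeholders):

```python
import cv2

# "papers.png" is a placeholder for the image above.
img = cv2.imread("papers.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Static threshold = 10, as mentioned above (probably not ideal).
_, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

# Grab external contours and take the convex hull of the approximated polygons.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hulls = []
for c in contours:
    approx = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
    hulls.append(cv2.convexHull(approx))

cv2.drawContours(img, hulls, -1, (0, 255, 0), 2)
cv2.imshow("convex hulls", img)
cv2.waitKey()
```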
How do I grab these contours?
Below are some previews of the images at each step of the process, ending with the final convex hull results.
**Original Image** **Sharpened**
**Dilate, Sharpen, Erode, Sharpen** **Convex Hulls of Approximated Polygons (which don't fully capture the desired regions)**
Sorry in advance about the horrible formatting; I have no idea how to make images smaller or title them nicely on SO.
I am trying to detect the regions of traffic signs. Using OpenCV, my approach is as follows:
The color image:
Using Tan-Triggs preprocessing to get rid of the illumination variance:
Equalize histogram:
And binarize (Cv2.Threshold(blobs, blobs, 127, 255, ThresholdTypes.BinaryInv)):
Iterate over each blob using ConnectedComponents and get the mean color value using the blob as a mask. If it is a red color, then it may be a red sign.
Then get the contours of this blob using FindContours.
Simplify the contours using ApproxPolyDP and check the points of each contour (a rough sketch of this check follows the list below):
If 3 points and the triangle shape is acceptable --> candidate for a triangle sign
If 4 points and the shape is acceptable --> candidate
If more than 4 points, the BBox dimensions are acceptable, and most of the points lie on the fitted ellipse (FitEllipse) --> candidate
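Roughly, that check looks like this (sketched in Python rather than the C# used above; the approximation epsilon and the area-based ellipse test are simplifications, not the exact criteria):

```python
import cv2
import numpy as np

def classify(contour):
    # Simplify the contour; the 2% epsilon is a typical guess, not a fixed rule.
    approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
    if len(approx) == 3:
        return "triangle candidate"
    if len(approx) == 4:
        return "rectangle candidate"
    if len(approx) > 4 and len(contour) >= 5:
        # Rough stand-in for "most points lie on the fitted ellipse":
        # compare the blob area against the fitted ellipse's area.
        (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
        ellipse_area = np.pi * w * h / 4.0
        if ellipse_area > 0 and abs(cv2.contourArea(contour) - ellipse_area) / ellipse_area < 0.15:
            return "circle/ellipse candidate"
    return "rejected"
```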
This approach works for the separated blobs in the binary image, like the circular 100 km sign in my example. However, if there is a connection to outside objects, like the bottom-left part of the triangle in the binary image, it fails.
That is because the mean value of this blob is far from red!
Using erosion helps in some cases, but makes it worse in many of the other images.
Using different threshold values for the binarization also works for some, but fails on many, just like the erosion.
Using HoughCircles is just very slow, and I couldn't manage to get good results by playing with the parameters.
I have tried using matchShapes but couldn't get good results.
Can anybody show me another way to achieve what I want (with a reasonable computational time)?
Any information, or code in any language, is welcome.
Edit:
Using the circularity measure (C = P^2 / (4πA)) or the approach I have described above, triangle and ellipse shapes can be found when they are separated. However, when the contour looks like this, for example:
I could not find a robust way to extract the triangle piece. If I could, I would check the mean color and decide if it's a red sign candidate.
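For reference, the circularity measure above can be computed per contour like this (a small Python helper, assuming an OpenCV contour as input):

```python
import cv2
import numpy as np

def circularity(contour):
    # C = P^2 / (4 * pi * A): 1 for a perfect circle, larger for other shapes.
    perimeter = cv2.arcLength(contour, True)
    area = cv2.contourArea(contour)
    return float("inf") if area == 0 else (perimeter ** 2) / (4 * np.pi * area)
```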
Sorry, I don't have the kudos to comment, but can't you use the red colour?
```python
import cv2
import numpy as np

img = cv2.imread("ms0QB.png")
grey = np.zeros(img.shape[:2], np.uint8)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# red hue wraps around 0 in OpenCV's 0-179 hue range
mask = np.logical_or(hsv[:, :, 0] > 160, hsv[:, :, 0] < 10)
grey[mask] = 255
cv2.imshow("160<hue<182", grey)
cv2.waitKey()
```
I am trying to detect the ROI for a fixed repetitive pattern in an image using OpenCV in C++.
The ROI I am trying to find is shown with a red boundary in the picture:
I tried Canny edge detection after blurring, but it detects the edges of the vertical/horizontal black and white lines. This is not what I am trying to detect.
What is the best approach to my problem?
Since you're starting with a binary image, you could use
findContours()
to get the contours for the individual strips. Since there are a couple of solitary pixels from noise, you should then filter for size using
contourArea(contour)
and merge the points of all contours meeting your size criteria into a combined contour. Then get the bounding box for the combined contour:
boundingRect(combinedContour)
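Putting those calls together, the sketch below shows one way it might look (in Python for brevity; the filename and the area threshold are placeholders to tune):

```python
import cv2
import numpy as np

# "strips.png" stands in for the binary image described above.
binary = cv2.imread("strips.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Drop the solitary noise pixels by filtering on area (threshold is a guess to tune).
kept = [c for c in contours if cv2.contourArea(c) > 50]

# Merge the surviving points into one combined contour, then take its bounding box.
combined = np.vstack(kept)
x, y, w, h = cv2.boundingRect(combined)
print(x, y, w, h)
```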
Does anyone have an idea how to get the size and the position of an object? The object is detected in a binary image as white pixels:
For example: Detected / Original
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/segmentation/2_sal/0_12_12171.jpg
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/comparison/orig/0_12_12171.jpg
I know about the cvMoments method, but I don't know how to use it in this case.
By the way: how can I make my mask cleaner?
Simple algorithm:
Delete small areas of white pixels using morphological operations (erosion).
Use findContours to find all contours.
Use countNonZero or contourArea to find the area of each contour.
Cycle through all points of each contour and find their mean. This will be the center of the contour (see the sketch after this list).
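A minimal Python sketch of these steps (the kernel size and filename are guesses to adjust):

```python
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # the binary segmentation

# 1. Erode away small specks of white pixels (kernel size is a guess to tune).
clean = cv2.erode(mask, np.ones((5, 5), np.uint8))

# 2. Find all contours.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # 3. Area of this contour.
    area = cv2.contourArea(c)
    # 4. Mean of the contour points as the center.
    cx, cy = c.reshape(-1, 2).mean(axis=0)
    print(area, (cx, cy))
```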
If the object is a tree, you should delete small areas by using morphology, as Astor wrote.
An alternative for finding the mass and the mass center is using moments:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
m00, as the documentation says, is the mass.
There are also formulas for the mass center.
This approach works when only your object remains in the image after segmentation.
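For example, with such a segmented binary mask as input (a sketch; the filename is a placeholder):

```python
import cv2

# Assumes the segmented binary image contains only the object of interest.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

m = cv2.moments(mask, binaryImage=True)
mass = m["m00"]            # m00: the "mass" (here, the white-pixel count)
cx = m["m10"] / m["m00"]   # mass center x
cy = m["m01"] / m["m00"]   # mass center y
print(mass, (cx, cy))
```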
I'm new to image processing and I'm working on detecting lines in a document image. I read the theory of the Hough line transform, but I can't see why I must use Canny before calling that function in OpenCV, as many tutorials say. What's the point of finding edges in this case? The fact is that if I don't use Canny or thresholding before HoughLines(), the results will be very messy. I hope someone will explain the reason to me.
2 of the tutorials I've read:
Imgproc Feature Detection
Hough Line Transform
Short Answer
cvCanny is used to detect edges, as well as to increase contrast and remove image noise.
HoughLines, which uses the Hough Transform, is used to determine whether those edges are lines or not. The Hough Transform requires edges to be detected well in order to be efficient and provide meaningful results.
Long Answer
The Limitations of the Hough Transform are described in more detail on Wikipedia.
The efficiency of the Hough Transform relies on the bins of accumulated votes being distinct, e.g. a direct contrast between a pixel and its surrounding neighbours, or, if using a mask region, between a pixel region and its surrounding regions. If all pixels had similar accumulated values, nothing would stand out as a line or circle. This leads to the reduction of colour (colour to grayscale, grayscale to black and white) in order to increase contrast.
The number of parameters to the Hough Transform also increases the spread of votes in the bins and increases the complexity of the transform, which means that normally only lines and circles are reliably detected with it, as they can be described by only two or three parameters.
The edges need to be detected well before running the Hough Transform, otherwise its efficiency suffers further. Also, noisy images don't work well with the Hough transform unless the noise is removed beforehand.
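As a concrete illustration of that edge-then-Hough pipeline (a sketch; the filename and every threshold are placeholders to tune):

```python
import cv2
import numpy as np

img = cv2.imread("document.png")               # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge detection first: the Hough accumulator only receives votes from edge pixels.
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough; the vote threshold and length parameters are guesses to tune.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=50, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
```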
First of all, to detect lines you need to work on a boolean (binary) image matrix, meaning the color is either black or white; there's no grayscale.
HoughLines()'s requirement to work properly is to have this kind of image as input. That's the reason you have to use Canny or Threshold: to convert the colored image matrix into a boolean one.
Hough transformation
A line in a picture is actually an edge. The Hough transform scans the whole image and uses a transformation that converts the Cartesian coordinates of all white pixels into polar coordinates; the black pixels are left out. So you won't be able to get a line if you don't detect edges first, because HoughLines() doesn't know how to behave when there's grayscale.
Theoretically, you are correct. Finding edges is not absolutely required for the Hough Line algorithm to work.
The way the Hough works is basically that it takes every point and connects it to every other point, and whatever points have the most lines going through them, those lines stay. For this, we need points. The Canny creates those points. Theoretically you could use any sort of filter - isolate all blue or purple points and connect them, whatever - but edges work well.
The Hough also does not weight its lines or points. To the Hough, an image is binary - made up of either 1s or 0s, points or not points. There is no need for grayscale, and the Canny conveniently returns binary images.
Thus the Canny is, in practice, always part of the Hough.
It is all about processing binary data:
complex data -> (a binary data, b binary data, c binary data, ...) (using canny(), sobel(), etc.)
a binary data -> function1() (using houghlines())
b binary data -> function2()
c binary data -> function3() ..
a binary data -X-> function2() ..
complex data -X-> function1() ..
HTH
I'd like to know the best strategy to compare a group of contours (which are in fact edges resulting from Canny edge detection) from two pictures, in order to know which pair is most alike.
I have this image:
http://i55.tinypic.com/10fe1y8.jpg
And I would like to know how I can calculate which one of these fits it best:
http://i56.tinypic.com/zmxd13.jpg
(it should be the one on the right)
Is there any way to compare the contours as a whole?
I can easily rotate the images but I don't know what functions to use in order to calculate that the reference image on the right is the best fit.
Here is what I've already tried using OpenCV:
matchShapes function - I tried this function using two grayscale images, and I always get the same result for every comparison image, and the value seems wrong, as it is 0.0002.
What I realized about matchShapes (though I'm not sure it's the correct assumption) is that the function works with pairs of contours and not full images. Now this is a problem, because although I have the contours of the images I want to compare, there are hundreds of them and I don't know which ones should be "paired up".
So I also tried to compare all the contours of the first image against the other two with a for loop, but I might be comparing, for example, the contour of the 5 against the circle contour of the two reference images instead of the contour of the 2.
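For reference, that brute-force comparison is roughly this (a sketch; the filenames, the Otsu binarization, and the match method are assumptions):

```python
import cv2

def contours_of(path):
    # Otsu binarization here is just one choice; any binarization would do.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return contours

query = contours_of("query.png")            # placeholder filenames
reference = contours_of("reference.png")

# Every contour of one image against every contour of the other;
# lower matchShapes scores mean more similar shapes.
for a in query:
    for b in reference:
        score = cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
```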
I also tried the simple cv::compare function and matchTemplate, neither with success.
Well, for this you have a couple of options depending on how robust you need your approach to be.
Simple Solutions (with assumptions):
For these methods, I'm assuming the images you supplied are what you are working with (i.e., the objects are already segmented and approximately the same scale). Also, you will need to correct the rotation (at least in a coarse manner). You might do something like iteratively rotating the comparison image every 10, 30, 60, or 90 degrees, or whatever coarseness you feel you can get away with.
For example,
```
bestMetric = infinity
for(degrees = 10; degrees < 360; degrees += 10)
    coinRot = rotate(compareCoin, degrees)
    // you could also try Cosine Similarity, or even matchTemplate here.
    metric = SAD(coinRot, targetCoin)
    if(metric < bestMetric)   // SAD: lower means a better match
        bestMetric = metric
        coinRotation = degrees
```
Sum of Absolute Differences (SAD): This will allow you to quickly compare the images once you have determined an approximate rotation angle.
Cosine Similarity: This operates a bit differently by treating each image as a 1D vector and then computing the high-dimensional angle between the two vectors. The better the match, the smaller the angle will be.
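A minimal NumPy sketch of both metrics (assuming the two images are already the same size):

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences: lower means a better match.
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def cosine_similarity(a, b):
    # Treat each image as a 1-D vector; values closer to 1 mean a smaller angle.
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
```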
Complex Solutions (possibly more robust):
These solutions will be more complex to implement, but will probably yield more robust classifications.
Hausdorff Distance: This answer will give you an introduction to using this method. This solution will probably also need the rotation correction to work properly.
Fourier-Mellin Transform: This method is an extension of Phase Correlation, which can extract the rotation, scale, and translation (RST) transform between two images.
Feature Detection and Extraction: This method involves detecting "robust" (i.e., scale and/or rotation invariant) features in the image and comparing them against a set of target features with RANSAC, LMedS, or simple least squares. OpenCV has a couple of samples using this technique in matcher_simple.cpp and matching_to_many_images.cpp. NOTE: With this method you will probably not want to binarize the image, so there are more detectable features available.
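A bare-bones sketch of that feature-based route using ORB (one possible detector choice; the geometric verification with RANSAC is omitted, and the filenames are placeholders):

```python
import cv2

# Placeholder filenames; grayscale inputs (not binarized, as noted above).
img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# ORB is one free scale/rotation-tolerant detector + descriptor choice.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching; more / stronger matches suggest a better fit.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), [m.distance for m in matches[:10]])
```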