I am trying to find a way to determine whether a contour is closed or not, but I am using findContours rather than cvFindContours, so I don't have the flags.
Any idea how to do it?
By the way, I was asked to find the number of loops in the contour (meaning how many times it crosses itself).
Is it possible that a single contour will have loops?
If so, any idea of how to find how many there are?
Thanks,
Tamir.
I think you can't detect self-intersecting contours directly with cvFindContours. However, if the curve does intersect itself, cvFindContours will return several contours for it; if the contour has one intersection, for example a contour shaped like the digit "8", cvFindContours returns three contours: the two circles and the large outer outline. I think you could also use graph theory for this task: create a graph whose vertices are the pixels lying on the contour and whose edges connect neighbouring pixels in the image; then you can find all the loops in that graph.
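For example, here is a minimal sketch of the first observation, counting loops via the contour hierarchy instead of building a graph; the synthetic "8" image and the use of RETR_CCOMP are my assumptions, not part of the original question:

import cv2
import numpy as np

# Synthetic stand-in for the input: a figure "8" drawn as a white curve on black.
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 60), 40, 255, 3)
cv2.circle(img, (100, 140), 40, 255, 3)

# RETR_CCOMP builds a two-level hierarchy: outer boundaries and the holes inside
# them. (OpenCV 3.x returns three values here; this assumes OpenCV 4.x.)
contours, hierarchy = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# Each loop of the curve encloses one background hole, so the contours that have
# a parent (the hole contours) give the number of loops.
loops = sum(1 for h in hierarchy[0] if h[3] != -1)
print(loops)  # 2 for the "8"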
I'm dealing with an image and I need your help. After a lot of image processing I get this from a microscopic image. This is my pre-final thresholded image:
As you can see there's a big C in the upper left corner. This should not be an open blob, it must be a closed one.
How can I achieve that without modifying the rest? I was thinking of applying a convex hull to that contour, but I don't know how to apply it to that contour and only that one, without touching the others.
I mean, maybe there's a "measure" I can use to isolate this contour from the rest. Maybe a way to tell how convex/concave it is, or how big the "hole" it delimits is.
In future work some other unclosed contours may appear that I'll need to close, so don't focus on this particular case; I'll need something I can use or adapt to other similar cases.
Thanks in advance!
While Jeru's answer is correct on the part where you want to close the contour once you have identified it, I think the OP also wants to know how to automatically identify the "C" blob without having to find out manually that it's the 29th contour.
Hence, I propose a method to identify it: compute the centroid of each shape and check whether that centroid lies inside the shape. It should be the case for blobs (circles) but not for "C"s.
import sys
import cv2
import numpy as np

img = cv2.imread(your_image, 0)
if img is None:
    sys.exit("No input image")  # good practice

res = np.copy(img)  # just for visualisation purposes

# finding the connected components (each blob)
output = cv2.connectedComponentsWithStats(img, 8)

# the centroid is sort of the "center of mass" of the object
centroids = output[3]

# taking out the background
centroids = centroids[1:]

# for each centroid, check whether it lies inside the object
for centroid in centroids:
    if img[int(centroid[1]), int(centroid[0])] == 0:
        print(centroid)
        # save it somewhere, then do what Jeru Luke proposes
    # an image with the centroids, to visualize
    res[int(centroid[1]), int(centroid[0])] = 100
This works for your case (I tried it out), but a caveat: it may not work for every "C" shape, especially "fatter" ones, as their centroid could well lie inside them. I think there may be a better measure of convexity, as you say; at least looking for such a measure seems like the right direction to me.
Maybe you can try something like computing the convex hull of each of your objects (without modifying your input image), then measure the ratio between the object's area and the area of its convex hull; if that ratio is below a certain threshold, classify it as a "C" shape and modify it accordingly.
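A rough sketch of that ratio test (often called solidity) might look like this; the `contours` list and the 0.5 threshold are assumptions you would have to tune:

import cv2

# For each contour, compare its own area with the area of its convex hull.
# A closed blob fills most of its hull (solidity near 1); a thin "C" does not.
for i, cnt in enumerate(contours):
    hull_area = cv2.contourArea(cv2.convexHull(cnt))
    if hull_area == 0:
        continue
    solidity = cv2.contourArea(cnt) / hull_area
    if solidity < 0.5:  # threshold is an assumption
        print(f"contour {i} looks like an open 'C' shape")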
I have a solution.
First, I found and drew contours on the threshold image given by you.
In the image, I figured out that the 29th contour is the one with the C. Hence I colored every contour apart from the 29th black; only the contour containing the C was left in white.
Code:
#---- finding all contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#---- Turning all contours black
cv2.drawContours(im1, contours, -1, (0,0,0), -1)
#---- Turning contour of interest alone white
cv2.drawContours(im1, contours, 29, (255, 255, 255), -1)
You are left with the blob of interest
Having isolated the required blob, I then performed morphological closing using the ellipse kernel for a certain number of iterations.
#---- Morphological closing with an elliptical kernel
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (30, 30))
closing = cv2.morphologyEx(im1, cv2.MORPH_CLOSE, k)
cv2.imshow("closed_img", closing)
The ball is now in your court! I learnt something as well! Have fun.
I have some really simple images, from which I would like to extract the longest contour.
An example image would be like this one:
I am using the exact same sample code from OpenCV's tutorial page, with one difference: I set the threshold to a fixed number, namely 100.
The main line is this one:
cv::findContours(cannyOutput, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
After I call the above function, I iterate through the found contours and check which one is the longest, then I save the longest one. By longest I mean the one with the most points.
In some cases, like in the above example image, the longest contour is doubled. To make it clearer what I mean by "doubled", here is a visualization of the found contour:
So I tried to figure out myself why this is happening by reading the OpenCV docs for findContours, but I still can't understand the real reason.
What I managed to find out: if I change from CV_RETR_TREE to CV_RETR_EXTERNAL, I don't get the doubled contour.
So my questions would be:
What is the reason behind the doubled contour and why does CV_RETR_EXTERNAL solve the problem?
Getting the contour with the most points doesn't necessarily mean it is the longest one, right? Due to the CV_CHAIN_APPROX_SIMPLE flag. Would CV_CHAIN_APPROX_NONE solve this problem, for example?
Q: What is the reason behind the doubled contour and why does CV_RETR_EXTERNAL solve the problem?
A: OpenCV findContours in CV_RETR_LIST (or CV_RETR_TREE) mode outputs, for a line like in your case, both the inner and the outer contour. CV_RETR_EXTERNAL, as described in the docs, outputs only the "extreme outer contours". Note that the outer contour is not necessarily the longest one. I would recommend you loop through all the contours given by the CV_RETR_LIST mode and do your calculation on each.
Q: Getting the contour with the most points doesn't necessarily mean it is the longest one, right? Due to the CV_CHAIN_APPROX_SIMPLE flag. Would CV_CHAIN_APPROX_NONE solve this problem, for example?
A: The first part is true if your findContours approximation method is anything other than CV_CHAIN_APPROX_NONE. It is also true that CV_CHAIN_APPROX_NONE will solve this problem, as it will "store absolutely all the contour points", but you can also sum all the distances between the stored points if you prefer to use another approximation method.
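As an illustration of that last point, here is a small sketch in Python (your snippet is C++, and `canny_output` is assumed to be your edge image): even with CHAIN_APPROX_SIMPLE you can rank contours by true length, since cv::arcLength sums the distances between consecutive stored points.

import cv2

# Retrieve every contour (RETR_LIST) and rank by geometric length rather than
# by number of points; arcLength sums consecutive point-to-point distances, so
# the reduced point count of CHAIN_APPROX_SIMPLE does not bias the result.
# (OpenCV 4.x return signature assumed.)
contours, _ = cv2.findContours(canny_output, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
longest = max(contours, key=lambda c: cv2.arcLength(c, False))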
Using OpenCV's findContours() I have a list of contours in an image. I'm interested only in the straight lines, so if they are too 'squiggly' they should be rejected. The question is how to evaluate how straight each contour is?
I looked at fitLine(), but there doesn't appear to be a goodness-of-fit measure returned. I could evaluate this myself using the returned line.
I looked at arcLength() with the aim to compare this to the bounding rectangle dimensions, but even for somewhat straight lines, the arc length can be relatively long if the contour points are dense.
I could find the convex hull and compare to the bounding rectangle dimensions, but I'd have to analyze the convexity defects.
Is there a moment that would be useful here?
Find the contours as you are doing now
Find the straight lines in the image using HoughLines()
Compute the overlap between the contours and the straight lines (a rough sketch of this step follows below)
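A loose sketch of that last step (the edge image `edges`, the contour `cnt`, and the HoughLines parameters are all assumptions): rasterize the contour and the detected lines into masks and measure how much of the contour lies on a straight line.

import cv2
import numpy as np

# Detect straight lines in the edge image.
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)

# Draw the contour of interest into one mask ...
contour_mask = np.zeros(edges.shape[:2], np.uint8)
cv2.drawContours(contour_mask, [cnt], -1, 255, 1)

# ... and the detected lines, slightly thickened, into another.
line_mask = np.zeros_like(contour_mask)
if lines is not None:
    for rho, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 + 2000 * (-b)), int(y0 + 2000 * a))
        p2 = (int(x0 - 2000 * (-b)), int(y0 - 2000 * a))
        cv2.line(line_mask, p1, p2, 255, 3)

# Fraction of the contour's pixels that coincide with a straight line.
overlap = cv2.countNonZero(cv2.bitwise_and(contour_mask, line_mask))
coverage = overlap / max(cv2.countNonZero(contour_mask), 1)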
Take two points on your contour (with, for instance, cv::approxPolyDP) and compute their absolute distance. Then go through the contour points between the two points and add up all the distances. If the difference between the distance along the contour and the absolute distance is bigger than a certain threshold, you can reject the contour.
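As a loose sketch of this idea (the contour `cnt`, the choice of the two points, and the 1.2 threshold are all assumptions to adapt):

import cv2
import numpy as np

pts = cnt[:, 0, :].astype(float)  # contour as an (N, 2) array of points

# Two points on the contour: here simply the first point and the one halfway
# along (approxPolyDP vertices would be a more principled choice).
i, j = 0, len(pts) // 2
straight = np.linalg.norm(pts[j] - pts[i])

# Length walked along the contour between the same two points.
along = np.sum(np.linalg.norm(np.diff(pts[i:j + 1], axis=0), axis=1))

# A contour that wanders is much longer along its path than the direct distance.
if straight > 0 and along / straight > 1.2:
    print("too squiggly, reject")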
The function findContours() already approximates contours with line segments. Each contour is represented by a list of points around it. For your purpose, simply computing the distances between each pair of consecutive points in the contour would give you all the line segment lengths.
Here is an example:
import numpy as np

c = cnts[0]
# d is the points in contour c shifted by one with wraparound (numpy.roll)
d = np.roll(c, 1, axis=0)
# length of each segment between consecutive contour points
np.linalg.norm(c - d, axis=-1)
After using cv::Canny(), it seems that there are some non-closed curves in the image. So my question is, how will cv::contourArea() deal with them? Does it compute the area by closing the curve first, or does it just ignore them?
From the contourArea reference:
Calculates the contour area
So it just calculates the area of the contour treated as a polygon (via Green's formula, implicitly joining the last point back to the first); it does not do anything special for non-closed curves, and the result can differ from the number of pixels in the shape.
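A minimal sketch illustrating this (the three points are a made-up example, not from the question): contourArea applies Green's formula to the point sequence, so an open polyline is treated as if it were closed.

import cv2
import numpy as np

# An open "L"-shaped polyline of three points.
open_curve = np.array([[[0, 0]], [[10, 0]], [[10, 10]]], dtype=np.int32)

# Green's formula over the points, with the last point implicitly joined back
# to the first, i.e. the area of the implied triangle (0,0)-(10,0)-(10,10).
print(cv2.contourArea(open_curve))  # 50.0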
I found contours in two images of the same object and I want to find the displacement and rotation of this object. I've tried using the rotated bounding boxes of these contours and then their angles and center points, but the rotation of a bounding box doesn't describe the contour rotation correctly, because it is the same for angles a, a+90, a+180, etc. degrees.
Is there any other good way to find the rotation and displacement of contours? Maybe some use of the convex hull or convexity defects? I've read about matching contours in Learning OpenCV, but it hasn't helped. Could someone give an example?
//edit:
Maybe there is some way to use something similar to Freeman chain codes for this? But I can't figure out an algorithm at the moment. Making a chain of angles between consecutive points and then checking for a matching sequence isn't working well...
If the object has convexity defects, then you could choose one defect and make a vector from the centroid of the first contour to that defect.
Then you could check the defects in the second contour and match the one that you used before. Again, make a vector from the centroid of the contour to the matched defect.
From this you get 2 segments (vectors) from which you could obtain a displacement and a rotation.
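In code, a hedged sketch of that last step (the contour centroids `c1`, `c2` and the matched defect points `d1`, `d2` are assumed to be 2D numpy arrays you have already computed):

import numpy as np

# Vector from each contour centroid to its matched convexity defect.
v1 = d1 - c1
v2 = d2 - c2

# Rotation is the angle between the two vectors, displacement the shift
# between the two contour centroids.
rotation = np.degrees(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))
displacement = c2 - c1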