Given a binary image with multiple objects in it, I would like to enclose each object in a contour. Then I would like to calculate the area inside each object, followed by the area inside its contour. Any ideas how to do this?
Use the OpenCV findContours() function for the contours, contourArea() for the contour area, and the OpenCV moments (the Moments class) to calculate the object area.
See these pages from the OpenCV documentation site:
Contours
Contour Features
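A minimal C++ sketch of that approach; the file name and threshold value here are placeholders, not from the question:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Load the binary image (the path is a placeholder).
    cv::Mat binary = cv::imread("objects.png", cv::IMREAD_GRAYSCALE);
    cv::threshold(binary, binary, 127, 255, cv::THRESH_BINARY);

    // One outer contour per object.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Area enclosed by the contour polygon.
        double areaOfContour = cv::contourArea(contours[i]);

        // Object area: count the white pixels of the original image that fall
        // inside this contour, using moments of the masked image (m00 = pixel count).
        cv::Mat mask = cv::Mat::zeros(binary.size(), CV_8UC1);
        cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
        cv::Mat objectPixels;
        cv::bitwise_and(binary, mask, objectPixels);
        cv::Moments m = cv::moments(objectPixels, true);

        std::cout << "object " << i << ": contour area = " << areaOfContour
                  << ", object area = " << m.m00 << std::endl;
    }
    return 0;
}
```

For solid objects the two numbers are nearly equal; they differ when the object has holes or a ragged boundary that the contour polygon smooths over.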
I'm trying to extract the geometries of the papers in the image below, but I'm having some trouble grabbing the contours. I don't know which threshold algorithm to use (here I used a static threshold of 10, which is probably not ideal).
And as you can see, I can get the correct number of images, but I can't get the proper bounds using this method.
Simply applying Otsu just doesn't work; it doesn't capture the geometries.
I assume I need to apply some edge detection, but I'm not sure what to do once I apply Canny or some other edge detector.
I also tried Sobel in both directions (positive and negative in x and y), but I'm unsure how to extract contours from that.
How do I grab these contours?
Below are some previews of the intermediate images leading up to the final convex hull results.
**Original Image** **Sharpened**
**Dilate, Sharpen, Erode, Sharpen** **Convex Hulls of Approximated Polygons (which don't fully capture the desired regions)**
Sorry in advance about the horrible formatting; I have no idea how to make images smaller or title them nicely on Stack Overflow.
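Not an answer, but for reference, here is a rough sketch of the Canny-then-hull pipeline described above; the Canny thresholds, kernel size, and area cutoff are guesses that would need tuning for the actual images:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("papers.jpg", cv::IMREAD_GRAYSCALE);   // placeholder path

    // Edge detection instead of a global threshold (50/150 are guessed thresholds).
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);

    // Close gaps so each paper's outline becomes one connected component.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::dilate(edges, edges, kernel);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Keep the convex hull of every sufficiently large contour.
    std::vector<std::vector<cv::Point>> hulls;
    for (const auto& c : contours)
    {
        if (cv::contourArea(c) < 1000.0) continue;    // arbitrary area cutoff
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);
        hulls.push_back(hull);
    }

    // Draw the hulls on a color copy for inspection.
    cv::Mat result;
    cv::cvtColor(img, result, cv::COLOR_GRAY2BGR);
    cv::drawContours(result, hulls, -1, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("hulls.png", result);
    return 0;
}
```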
I am new to OpenCV and I was trying to extract the region bound by the largest contour. It may be a simple question, but I am not able to figure it out. I tried googling too, without any luck.
I would:
Use contourArea() to find the largest closed contour.
Use boundingRect() to get the bounds of that contour.
Draw the contour using drawContours() (with thickness set to -1 to fill the contour) and use this as a mask.
Use the mask to set all pixels in the original image not in the ROI to (0,0,0).
Use the bounding rectangle to extract just that area from the original image.
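A C++ sketch of those steps; the input path and the Otsu thresholding used to get a binary image are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.jpg");                 // placeholder path
    cv::Mat gray, binary;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;

    // 1. Largest contour by area.
    int largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = (int)i;

    // 2. Bounding rectangle of that contour.
    cv::Rect box = cv::boundingRect(contours[largest]);

    // 3. Filled contour as a mask.
    cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
    cv::drawContours(mask, contours, largest, cv::Scalar(255), -1);

    // 4. Zero out everything outside the mask.
    cv::Mat masked;
    src.copyTo(masked, mask);

    // 5. Crop to the bounding rectangle.
    cv::Mat roi = masked(box).clone();
    cv::imwrite("roi.png", roi);
    return 0;
}
```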
Here is a good explanation of what you want to develop.
Basically you have to:
apply threshold to a copy of the original image;
use findContours; the output is a vector<vector<Point>> that stores the contours;
iterate on contours to find the largest.
I'm searching for a way to extract the "inner" contours of a binary image with OpenCV. I know that findContours extracts contours, but I need the silhouette pixels that belong to the thresholded object in my binary image, not the outer contours.
Here is a made-up image that better illustrates what I'm looking for: I am searching for a method to extract the red contour.
I already tried a naive approach: copying the original binary image, shrinking the copy by 2 pixels on each side, filling the edges with black pixels, and then using findContours, but the outcome is not satisfying.
You could just run findContours() on the negative image.
Little update:
I solved it with OpenCV erosion and dilation.
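For what it's worth, a minimal sketch of that erosion idea (the file name and kernel size are assumptions): eroding by one pixel and subtracting leaves exactly the object pixels that touch the background, i.e. the inner border.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat binary = cv::imread("blob.png", cv::IMREAD_GRAYSCALE);   // placeholder path
    cv::threshold(binary, binary, 127, 255, cv::THRESH_BINARY);

    // Erode by one pixel and subtract: what remains is the inner border,
    // the object pixels adjacent to the background.
    cv::Mat eroded, innerBorder;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(binary, eroded, kernel);
    cv::subtract(binary, eroded, innerBorder);

    cv::imwrite("inner_border.png", innerBorder);
    return 0;
}
```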
I would like to use matchShapes() function to find an object inside a query image.
Let's say I have a model image of a book; I want to extract its shape and then try to find this book (its shape) inside another image.
I have googled a lot but couldn't find any real example of how to use matchShapes to achieve this. The documentation is lacking. Can someone give a little example in C++?
Thanks a lot! (Note: I know I can use SIFT/ORB etc., but I want to use matchShapes().)
Step 1: Detect the contour of the book and store it in a vector<Point>.
Step 2: Detect contours in the other image.
Step 3: Iterate over the detected contours and match the shape from Step 1 against each contour found in the other image. You have detected vector<vector<Point>> contours; iterating over them, you pass the model vector<Point> from Step 1 and each vector<Point> from contours to the matchShapes() function. See my answer here for how to use matchShapes().
Note that the book must have the same shape in the other image as in the model image. It can only be rotated, displaced, or scaled.
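A rough C++ sketch of those three steps; the file names, the Otsu thresholding, and the 0.1 similarity cutoff are assumptions, not part of the question:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Return the largest contour of a grayscale image after Otsu thresholding.
static std::vector<cv::Point> largestContour(const cv::Mat& gray)
{
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return {};
    int best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best]))
            best = (int)i;
    return contours[best];
}

int main()
{
    cv::Mat model = cv::imread("book_model.png", cv::IMREAD_GRAYSCALE);  // placeholder
    cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);       // placeholder

    // Step 1: contour of the book in the model image.
    std::vector<cv::Point> bookShape = largestContour(model);

    // Step 2: all contours in the query image.
    cv::Mat binary;
    cv::threshold(scene, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Step 3: compare each contour against the model shape; lower score = better match.
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double score = cv::matchShapes(bookShape, contours[i], cv::CONTOURS_MATCH_I1, 0.0);
        if (score < 0.1)   // arbitrary cutoff
            std::cout << "contour " << i << " looks like the book (score " << score << ")\n";
    }
    return 0;
}
```

matchShapes compares Hu-moment invariants, which is why it tolerates rotation, translation, and scale but not a genuinely different silhouette.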
Does someone have an idea how to get the size and the position of an object? The object is detected in a binary image as white pixels:
For example: Detected / Original
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/segmentation/2_sal/0_12_12171.jpg
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/comparison/orig/0_12_12171.jpg
I know about the cvMoments method, but I don't know how to use it in this case.
By the way: how can I make my mask cleaner?
Simple algorithm:
Delete small areas of white pixels using morphological operations (erosion).
Use findContours to find all contours.
Use countNonZero or contourArea to find the area of each contour.
Cycle through all points of each contour and find their mean. This will be the center of the contour.
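A sketch of that algorithm in C++; the file name and the erosion kernel size are assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);   // placeholder path
    cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);

    // 1. Remove small white specks with an erosion (kernel size is a guess).
    cv::erode(mask, mask, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

    // 2. Find all contours.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    for (const auto& c : contours)
    {
        // 3. Area of this contour.
        double area = cv::contourArea(c);

        // 4. Mean of the contour points as an estimate of the center.
        double sx = 0, sy = 0;
        for (const cv::Point& p : c) { sx += p.x; sy += p.y; }
        double cx = sx / c.size();
        double cy = sy / c.size();

        std::cout << "area = " << area << ", center = (" << cx << ", " << cy << ")\n";
    }
    return 0;
}
```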
If the object is the tree, you should delete the small areas using morphology, as Astor wrote.
An alternative for finding the mass and the mass center is using moments:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
m00, as the documentation says, is the mass.
There are also formulas for the mass center.
This approach works when only your object remains in the image after segmentation.
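As a small illustration (assuming, as stated, that only the object remains in the mask after segmentation; the file name is a placeholder):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);   // placeholder path
    cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);

    // Moments of the binary mask: m00 is the mass (white-pixel count),
    // and (m10/m00, m01/m00) is the mass center.
    cv::Moments m = cv::moments(mask, true);
    double area = m.m00;
    cv::Point2d center(m.m10 / m.m00, m.m01 / m.m00);

    std::cout << "mass = " << area
              << ", center = (" << center.x << ", " << center.y << ")\n";
    return 0;
}
```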