I would like to use the matchShapes() function to find an object inside a query image.
Let's say I have a model image of a book; I want to extract its shape and then try to find this book (its shape) inside another image.
I have googled a lot but couldn't find any real example of how to use matchShapes() to achieve this, and the documentation is lacking. Can someone give a little example in C++?
Thanks a lot! (Note: I know I could use SIFT/ORB etc., but I want to use matchShapes().)
Step 1: Detect the contour of the book and store it in a vector<Point>.
Step 2: Detect contours in the other image.
Step 3: Iterate over the detected contours and match the shape from Step 1 against each contour from the other image. You have the detected vector<vector<Point>> contours; iterating over them, you pass the model vector<Point> from Step 1 and each vector<Point> from contours to the matchShapes() function. See my answer here on how to use matchShapes(); a sketch also follows the note below.
Note that the book must have the same shape in the other image as in the model image; it can only be rotated, translated, or scaled.
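Here is a minimal C++ sketch of those three steps, using recent OpenCV constant names. The file names and the threshold are placeholders, and I assume the book is the largest contour in the model image. matchShapes() returns a dissimilarity score, so lower is better (0 would be a perfect match):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cfloat>
#include <iostream>
#include <vector>

int main()
{
    // Load model and query images as grayscale (paths are placeholders).
    cv::Mat model = cv::imread("book_model.png", cv::IMREAD_GRAYSCALE);
    cv::Mat query = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // Binarize both images so findContours gets clean input.
    cv::Mat modelBin, queryBin;
    cv::threshold(model, modelBin, 128, 255, cv::THRESH_BINARY);
    cv::threshold(query, queryBin, 128, 255, cv::THRESH_BINARY);

    // Step 1: contour of the book (assumed to be the largest in the model image).
    std::vector<std::vector<cv::Point>> modelContours;
    cv::findContours(modelBin, modelContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (modelContours.empty()) return 1;
    auto book = std::max_element(modelContours.begin(), modelContours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });

    // Step 2: all contours of the other image.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(queryBin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Step 3: match the model contour against each detected contour.
    double bestScore = DBL_MAX;
    int bestIdx = -1;
    for (int i = 0; i < (int)contours.size(); ++i) {
        double score = cv::matchShapes(*book, contours[i], cv::CONTOURS_MATCH_I1, 0);
        if (score < bestScore) { bestScore = score; bestIdx = i; }
    }
    std::cout << "Best match: contour " << bestIdx << ", score " << bestScore << std::endl;
    return 0;
}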
I'm working on a stitching project and need to find the best stitching seam, but I don't understand the function below. I have already looked at the illustration in the OpenCV documentation, but I find it unclear.
seam_finder = new detail::GraphCutSeamFinder(GraphCutSeamFinderBase::COST_COLOR);
seam_finder->find(images_warped_f, corners, masks_warped);
Can someone help me and tell me the meaning of images_warped_f and corners? Thank you so much!
Every image (cv::Mat) has a type code. You can check here for more information. images_warped_f is the vector of warped images converted to type 5, i.e. CV_32F (floating point) depth.
corners is the vector of top-left corner points of the images you are trying to stitch. If you use the SphericalWarper to warp your images, the warp() function performs a spherical projection and returns the top-left corner of the resulting image. You can check here for reference.
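To make this concrete, here is a hedged sketch of how both variables are typically produced just before the seam finder runs. The function name findSeams, the warp scale, and the assumption that per-image camera matrices and rotations are already estimated are all illustrative:

#include <opencv2/opencv.hpp>
#include <opencv2/stitching/detail/seam_finders.hpp>
#include <opencv2/stitching/detail/warpers.hpp>
#include <vector>

void findSeams(const std::vector<cv::Mat>& images,
               const std::vector<cv::Mat>& Ks,  // 3x3 CV_32F camera matrices
               const std::vector<cv::Mat>& Rs,  // 3x3 CV_32F rotation matrices
               std::vector<cv::UMat>& masks_warped)
{
    cv::detail::SphericalWarper warper(1000.0f); // focal-length scale (assumed)

    std::vector<cv::UMat> images_warped_f(images.size());
    std::vector<cv::Point> corners(images.size());

    for (size_t i = 0; i < images.size(); ++i) {
        cv::Mat warped;
        // warp() projects the image and returns its top-left corner in the
        // common panorama coordinate system -- this is what "corners" holds.
        corners[i] = warper.warp(images[i], Ks[i], Rs[i],
                                 cv::INTER_LINEAR, cv::BORDER_REFLECT, warped);
        // The seam finder expects float images ("type 5", i.e. CV_32F depth).
        warped.convertTo(images_warped_f[i], CV_32F);
    }

    cv::detail::GraphCutSeamFinder seam_finder(
        cv::detail::GraphCutSeamFinderBase::COST_COLOR);
    seam_finder.find(images_warped_f, corners, masks_warped);
}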
I have some really simple images, from which I would like to extract the longest contour.
An example image would be like this one:
I am using the exact same sample code from OpenCV's tutorial page, with one difference: I set the threshold to a fixed number, namely 100.
The main line is this one:
cv::findContours(cannyOutput, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
After I call the above function I iterate through the found contours and check which one is the longest, then I save the longest one. By longest I mean the one with the most points.
In some cases, like in the above example image, the longest contour is doubled. To make it clearer what I mean by "doubled", here is a visualized image of the found contour:
So I tried to figure out why this is happening by reading the OpenCV docs for findContours, but I still can't understand the real reason.
What I did manage to work out: if I change CV_RETR_TREE to CV_RETR_EXTERNAL, I don't get the doubled contour.
So my questions would be:
What is the reason behind the doubled contour and why does CV_RETR_EXTERNAL solve the problem?
Getting the contour with the most points doesn't necessarily mean it is the longest, right? Due to the CV_CHAIN_APPROX_SIMPLE flag. Would CV_CHAIN_APPROX_NONE solve this problem, for example?
Q: What is the reason behind the doubled contour and why does CV_RETR_EXTERNAL solve the problem?
A: OpenCV findContours, in modes like CV_RETR_LIST or CV_RETR_TREE, outputs, for a thin line as in your case, both the inner and the outer contour. CV_RETR_EXTERNAL, as described in the docs, outputs only the "extreme outer contours". Note that the outer contour is not necessarily the longest one. I would recommend looping through all the contours returned in CV_RETR_LIST mode and doing your calculation on each.
Q: Getting the contour with the most points doesn't necessarily mean it is the longest, right? Due to the CV_CHAIN_APPROX_SIMPLE flag. Would CV_CHAIN_APPROX_NONE solve this problem, for example?
A: The first part is true: if your findContours approximation method is anything other than CV_CHAIN_APPROX_NONE, the contour with the most points is not necessarily the longest. It is also true that CV_CHAIN_APPROX_NONE will solve this problem, as it will "store absolutely all the contour points", but you can also sum the distances between consecutive points (see the sketch below) if you prefer any other approximation method.
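For example, here is a sketch that measures true contour length with cv::arcLength(), which sums the distances between consecutive points for you (the file name and Canny thresholds are placeholders, and I use recent OpenCV constant names):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::Mat cannyOutput;
    cv::Canny(src, cannyOutput, 100, 200);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(cannyOutput, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // Geometric length is independent of how many points represent the contour.
    double bestLen = 0.0;
    int bestIdx = -1;
    for (int i = 0; i < (int)contours.size(); ++i) {
        double len = cv::arcLength(contours[i], false); // false: open curve
        if (len > bestLen) { bestLen = len; bestIdx = i; }
    }
    // contours[bestIdx] is now the geometrically longest contour.
    return 0;
}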
I am learning JavaCV and want to extract part of images dynamically based on color.
As identification, I am outlining the region I need to extract with a color. Is there any way I can extract the ROI based on the color outline? Any help appreciated.
Here is the Sample Image
It is quite simple. Since your figure has 4 corners, you ought to follow these steps (a sketch follows the list).
1. Identify the orientation of the image and store the points in a MatOfPoint2f in a specific order
(clockwise or anticlockwise; for this you can use Math.atan2(p1(y)-centerpoint(y), p1(x)-centerpoint(x)) and then sort the points according to the result. Find the center point by averaging all the x-coords and y-coords, or any method you prefer).
2. Create a MatOfPoint2f containing the corner coordinates of the size you want for the resulting cropped image.
3. Use Imgproc.getPerspectiveTransform() to compute the transform matrix for the crop.
4. Finally, use Imgproc.warpPerspective() to obtain the desired output.
As for detecting the colored border of the ROI, the best way to go is to threshold the image using a specific color range, so as to extract only the part of the spectrum that is required.
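The question uses JavaCV, but it wraps the same OpenCV functions, so here is a sketch of the four steps in C++. It assumes the 4 corner points have already been detected (for example, by thresholding the outline color with inRange() and running findContours()/approxPolyDP()), and the angle-based ordering may need its starting corner adjusted for strongly rotated quads:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

cv::Mat cropQuad(const cv::Mat& src, std::vector<cv::Point2f> corners,
                 int outW, int outH)
{
    // Step 1: order the corners consistently, by angle around their center.
    cv::Point2f center(0, 0);
    for (const auto& p : corners) center += p;
    center *= 1.0f / static_cast<float>(corners.size());
    std::sort(corners.begin(), corners.end(),
        [center](const cv::Point2f& a, const cv::Point2f& b) {
            return std::atan2(a.y - center.y, a.x - center.x) <
                   std::atan2(b.y - center.y, b.x - center.x);
        });

    // Step 2: corners of the destination image, in the matching order
    // (top-left, top-right, bottom-right, bottom-left).
    std::vector<cv::Point2f> dst = {
        {0.0f, 0.0f}, {outW - 1.0f, 0.0f},
        {outW - 1.0f, outH - 1.0f}, {0.0f, outH - 1.0f}
    };

    // Step 3: compute the perspective transform.
    cv::Mat M = cv::getPerspectiveTransform(corners, dst);

    // Step 4: warp to obtain the cropped, rectified result.
    cv::Mat out;
    cv::warpPerspective(src, out, M, cv::Size(outW, outH));
    return out;
}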
I am new to OpenCV and I was trying to extract the region bound by largest contour. It may be a simple question, but I am not able to figure it out. I tried googling too, without any luck.
I would:
Use contourArea() to find the largest closed contour.
Use boundingRect() to get the bounds of that contour.
Draw the contour using drawContours() (with thickness set to -1 to fill the contour) and use this as a mask.
Use the mask to set all pixels in the original image outside the ROI to (0,0,0).
Use the bounding rectangle to extract just that area from the original image.
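A minimal sketch of those steps (the file names and the threshold are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.png");
    cv::Mat gray, bin;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 100, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;

    // Largest contour by area.
    int largest = 0;
    for (int i = 1; i < (int)contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = i;

    // Draw the filled contour as a mask; thickness -1 fills the interior.
    cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
    cv::drawContours(mask, contours, largest, cv::Scalar(255), -1);

    // Black out everything outside the mask, then crop to the bounding box.
    cv::Mat masked;
    src.copyTo(masked, mask);
    cv::Rect box = cv::boundingRect(contours[largest]);
    cv::Mat roi = masked(box).clone();

    cv::imwrite("roi.png", roi);
    return 0;
}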
What you want to develop is well explained here.
Basically you have to:
apply threshold to a copy of the original image;
use findContours -> the output is a vector<vector<Point>> that stores the contours;
iterate over the contours to find the largest.
Does someone have an idea how to get the size and position of an object? The object is detected as white pixels in a binary image:
For example: Detected / Original
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/segmentation/2_sal/0_12_12171.jpg
http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/Images/comparison/orig/0_12_12171.jpg
I know about the CvMoments method, but I don't know how to use it in this case.
By the way: how can I make my mask cleaner?
Simple algorithm:
Delete small areas of white pixels using morphological operations (erosion).
Use findContours to find all contours.
Use countNonZero or contourArea to find the area of each contour.
Cycle through all the points of each contour and take their mean; this will be the center of the contour.
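A minimal sketch of this algorithm (the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // 1. Erode to delete small areas of white pixels.
    cv::erode(mask, mask, cv::Mat());

    // 2. Find all contours.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        // 3. Area of this contour.
        double area = cv::contourArea(c);

        // 4. Mean of the contour points as the center.
        cv::Point2f center(0, 0);
        for (const auto& p : c) center += cv::Point2f(p);
        center *= 1.0f / static_cast<float>(c.size());

        std::cout << "area: " << area << ", center: " << center << std::endl;
    }
    return 0;
}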
If the object is a tree, you should delete small areas using morphology, as Astor wrote.
An alternative way of finding the mass and the mass center is to use moments:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
m00, as the doc says, is the mass.
There are also formulas for the mass center: it is (m10/m00, m01/m00).
This approach works when only your object remains in the image after segmentation.
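A minimal sketch of the moments approach on the binary mask (the file name is a placeholder); the morphological opening at the start is also one answer to the side question about cleaning up the mask:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // Remove small white specks: erosion followed by dilation (opening).
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);

    // For a binary image, m00 is the mass (the white area),
    // and (m10/m00, m01/m00) is the mass center.
    cv::Moments m = cv::moments(mask, true); // true: treat the image as binary
    double area = m.m00;
    cv::Point2d center(m.m10 / m.m00, m.m01 / m.m00);
    std::cout << "area: " << area << ", center: " << center << std::endl;
    return 0;
}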