This is my first question here, thank you for reading it.
I am trying to count the number of inner contours inside a contour.
I found a nice tutorial showing how to use h_next and v_next
http://jmpelletier.com/a-simple-opencv-tutorial/
The problem is I use Mat and not IplImage.
I tried to convert it with:
Mat *oimg;
IplImage img = *oimg;
But I get an error when calling cvFindContours.
I also tried using findContours, which is built to work with Mat, by going through the hierarchy, but it didn't work.
I'm using C++ and OpenCV 2.0.
Thanks a lot,
Tamir.
Instead of converting the cv::Mat to an IplImage to use the C API, I suggest directly using the C++ counterpart of cvFindContours(): cv::findContours(). Instead of building a true tree data structure, it stores the hierarchy flattened into two vectors:
cv::Mat image = // ...
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(image, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE);
Check the C++ API documentation for instructions on how to interpret hierarchy (emphasis mine):
hierarchy – The optional output vector that will contain information about the image topology. It will have as many elements as the number of contours. For each contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] will be set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for some contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
Switching between the C and C++ API in the same codebase really hurts readability. I suggest only using the C API if the functionality you need is missing from the C++ API.
Related
I am new to image processing and am trying to get the contours of the apples in these images. To do so, I use OpenCV, but I do not get a proper contour detection. I want the algorithm to also be able to get contours of other objects, so not limited to apples (= circles).
Original picture
If I follow the instructions, there are 4 steps to be taken.
Open the image file
Convert the file to grayscale
Do some processing (blur, erode, dilate, you name it)
Get the contours
The first point that confuses me is the grayscale conversion.
I did:
Mat image;
Mat HSVimage;
Mat Grayimage;
image = imread(imageName, IMREAD_COLOR); // Read the file
cvtColor(image, HSVimage, COLOR_BGR2HSV);
Mat chan[3];
split(HSVimage, chan);
Grayimage = chan[2];
First question:
Is this a correct choice, or should I just read the file in grayscale, or use YUV?
I only use 1 channel of the HSV, is this correct?
I tried a lot of processing methods, but there are so many I lost track. The best result I got was when I used a threshold and an adaptiveThreshold.
threshold(Grayimage, Grayimage,49, 0, THRESH_TOZERO);
adaptiveThreshold(Grayimage, Tresholdimage, 256, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 23, 4);
The result I get is:
result after processing
But findContours does not find a closed object. So I get:
contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(Tresholdimage, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
I tried Hough circles,
vector<Vec3f> circles;
HoughCircles(Tresholdimage, circles, HOUGH_GRADIENT, 2.0, 70);
and I got:
hough circles ok
So I was a happy man, but as soon as I tried the code on another picture, I got:
second picture original
hough circles wrong
I can experiment with the HoughCircles function; I see there are a lot of possibilities, but I can only detect circles with it, so it is not my first choice.
I am a newbie at this. My questions are:
Is this a correct way, or is it better to use techniques like blob detection or ML to find the objects in the picture?
If it is a correct way, what functions should I use to get better results?
Regards,
Peter
I have used the excellent answer to the question here:
How to detect bullet holes on the target using python
I have verified that it works in both Python 2 and 3.6, but I would like to use the concept in an iOS application written in Objective C(++). This is my attempt at translating it. Ultimately, I need it to work with an image taken by the camera, so I don't want to use imread, but I've checked that this makes no difference.
UIImage *nsi = [UIImage imageNamed:@"CANDX.jpg"];
cv::Mat original;
UIImageToMat(nsi, original);
cv::Mat thresholded;
cv::inRange(original, cv::Scalar(40,40,40), cv::Scalar(160,160,160), thresholded);
cv::Mat kernel = cv::Mat::ones(10, 10, CV_64FC1);
cv::Mat opening;
cv::morphologyEx(thresholded, opening, cv::MORPH_OPEN, kernel);
vector<vector<cv::Point>> contours;
cv::findContours(opening, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
The call to inRange, with the same values as the Python version, gives a completely black image. Indeed, it is impossible to pick values for lower- and upper-bounds that do not result in this outcome. I've tried converting the image to HSV and using HSV values for lower- and upper-bound. This makes a slight difference in that I can get some vaguely recognisable outcomes, but nothing like the useful result I should be getting.
If I substitute the 'thresholded' image from the answer and comment out the inRange call, the morphology and findContours calls work okay.
Am I doing something wrong in setting up the inRange call?
As you mention in the comments, the data type of original is CV_8UC4 -- i.e. it's a 4 channel image. However, in your call to cv::inRange, you provide ranges for only 3 channels.
cv::Scalar represents a 4-element vector. When you call the constructor with only 3 values, a default value of 0 is used for the 4-th element.
Hence, your call to inRange is actually equivalent to this:
cv::inRange(original, cv::Scalar(40,40,40,0), cv::Scalar(160,160,160,0), thresholded);
You're looking only for pixels that have the alpha channel set to 0 (fully transparent). Since the image came from a camera, it's highly unlikely there will be any transparent pixels -- the alpha channel is probably just all 255s.
There are 2 options to solve this:
Drop the unneeded alpha channel. One way to do this is to use cv::cvtColor, e.g.
cv::cvtColor(original, original, cv::COLOR_BGRA2BGR);
Specify desired range for all the channels, e.g.
cv::inRange(original, cv::Scalar(40,40,40,0), cv::Scalar(160,160,160,255), thresholded);
I have code in Python and I am porting it to C++. I am getting a weird issue with the drawContours function in OpenCV C++.
self.contours[i] = cv2.convexHull(self.contours[i])
cv2.drawContours(self.segments[object], [self.contours[i]], 0, 255, -1)
this is the function call in python and the value -1 for the thickness parameter is used for filling the contour and the result looks like
I am doing exactly the same in C++,
cv::convexHull(cv::Mat(contour), hull);
cv::drawContours(this->objectSegments[currentObject], cv::Mat(hull), -1, 255, -1);
but this is the resulting image:
(please look carefully to see the convex hull points; they are not easily visible). I am getting only the points and not the filled polygon. I also tried using fillPoly, like
cv::fillPoly(this->objectSegments[currentObject],cv::Mat(hull),255);
but it doesn't help.
Please help me fix the issue. I am sure that I am missing something very trivial but couldn't spot it.
The function drawContours() expects to receive a sequence of contours, each contour being a "vector of points".
The expression cv::Mat(hull) you use as a parameter returns the matrix in incorrect format, with each point being treated as a separate contour -- that's why you see only a few pixels.
According to the documentation of cv::Mat::Mat(const std::vector<_Tp>& vec) the vector passed into the constructor is used in the following manner:
STL vector whose elements form the matrix. The matrix has a single column and the number of rows equal to the number of vector elements.
Considering this, you have two options:
Transpose the matrix you've created (using cv::Mat::t())
Just use a vector of vectors of Points directly
The following sample shows how to use the vector directly:
cv::Mat output_image; // Work image
typedef std::vector<cv::Point> point_vector;
typedef std::vector<point_vector> contour_vector;
// Create with 1 "contour" for our convex hull
contour_vector hulls(1);
// Initialize the contour with the convex hull points
cv::convexHull(cv::Mat(contour), hulls[0]);
// And draw that single contour, filled
cv::drawContours(output_image, hulls, 0, 255, -1);
I'm currently using OpenCV for detecting blobs in a binary image. I'd like to erase small lines without changing the big objects.
Here's an example: The original image is
And I want to convert it into the following
"Opening" didn't work, because when applying it the edges of the triangle were cut off. Is there any other method for removing the lines, without losing information of the big triangle?
Use erosion to remove such noise.
The code looks like:
Mat src; // load source
Mat dst; // destination image
Mat element = getStructuringElement(MORPH_RECT, Size(5, 5), Point(-1, -1)); // kernel performing the erosion
erode(src, dst, element);
Edit
Adding @Bull's comment here as it is the more appropriate method: he suggests that erosion followed by dilation will get you very close to what you want.
I am converting Python OpenCV code to Emgu.
In Python, the function findContours can return the hierarchy:
hierarchy – Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
Unfortunately, in Emgu I can't return such an array from the findContours function. Is there any equivalent for this?
If you choose CV_RETR_TREE as retrieval type, the Contour<Point> that is returned will contain a hierarchical tree structure.
This image from here shows how you can navigate in the hierarchy using h_next and v_next pointers in OpenCV (i.e. HNext and VNext in Emgu CV).
In this way, you can get the whole hierarchy.