I am new to image processing and am trying to get the contours of the apples in these images. To do so, I use OpenCV, but I do not get a proper contour detection. I also want the algorithm to be able to get the contours of other objects, so it should not be limited to apples (= circles).
Original picture
If I follow the instructions, there are 4 steps to be taken.
Open the image file
Convert the file to grayscale
Do some processing (blur, erode, dilate, you name it)
Get the contours
The first point that confuses me is the grayscale conversion.
I did:
Mat image;
Mat HSVimage;
Mat Grayimage;
image = imread(imageName, IMREAD_COLOR); // Read the file
cvtColor(image, HSVimage, COLOR_BGR2HSV);
Mat chan[3];
split(HSVimage, chan);
Grayimage = chan[2]; // chan[2] is the V (value) channel of HSV
First question:
Is this the correct choice, or should I just read the file in grayscale, or use YUV?
I only use one channel of the HSV image; is this correct?
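For comparison, here is a rough sketch of the two alternatives mentioned above (a direct grayscale conversion and the Y channel of YUV); this is not from the original post, and the variable names grayDirect and grayFromY are made up:
// Option A: plain grayscale conversion (weighted sum of B, G, R)
Mat grayDirect;
cvtColor(image, grayDirect, COLOR_BGR2GRAY);
// Option B: Y (luma) channel of YUV
Mat YUVimage;
cvtColor(image, YUVimage, COLOR_BGR2YUV);
Mat yuvChan[3];
split(YUVimage, yuvChan);
Mat grayFromY = yuvChan[0];
// For reference: chan[2] above is the V (value) channel of HSV,
// which is max(B, G, R) per pixel, so highlights stay bright.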
I tried a lot of processing methods, but there are so many that I lost track. The best result I got was when I used a threshold and an adaptiveThreshold.
threshold(Grayimage, Grayimage, 49, 0, THRESH_TOZERO); // the maxval argument is ignored for THRESH_TOZERO
Mat Tresholdimage;
adaptiveThreshold(Grayimage, Tresholdimage, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 23, 4);
The result I get is:
result after processing
But findContours does not find a closed object. So I get:
contours
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(Tresholdimage, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
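(Not from the original post: one quick way to see what findContours actually returned is to draw every contour and print its area, assuming the usual OpenCV includes and using namespace cv/std:)
Mat contourVis = Mat::zeros(Tresholdimage.size(), CV_8UC3);
for (size_t i = 0; i < contours.size(); i++)
{
    drawContours(contourVis, contours, (int)i, Scalar(0, 255, 0), 1);
    cout << "contour " << i << " area: " << contourArea(contours[i]) << endl;
}
imshow("found contours", contourVis);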
I tried Hough circles,
vector<Vec3f> circles;
HoughCircles(Tresholdimage, circles, HOUGH_GRADIENT, 2.0, 70);
and I got:
hough circles ok
So I was a happy man, but as soon as I tried the code on another picture, I got:
second picture original
hough circles wrong
I can experiment with the HoughCircles function; I see there are a lot of possibilities, but I can only detect circles with it, so it is not my first choice.
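For reference, these are the extra HoughCircles parameters that can be experimented with (a sketch only; the values below are placeholders, not tuned for these pictures, and HoughCircles is usually run on a blurred grayscale image rather than a thresholded one):
vector<Vec3f> circles;
HoughCircles(Grayimage, circles, HOUGH_GRADIENT,
             2.0,   // dp: inverse ratio of accumulator resolution to image resolution
             70,    // minDist: minimum distance between detected centres
             100,   // param1: upper threshold of the internal Canny edge detector
             60,    // param2: accumulator threshold (lower value = more circles)
             20,    // minRadius
             200);  // maxRadius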
I am a newbie at this. My questions are:
Is this a correct way, or is it better to use techniques like blob detection or ML to find the objects in the picture?
If it is a correct way, what functions should I use to get better results?
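(For context, "blob detection" in OpenCV usually refers to SimpleBlobDetector. A minimal sketch, not from the original post, assuming the features2d module is available and using placeholder parameter values:)
SimpleBlobDetector::Params params;
params.filterByArea = true;
params.minArea = 500;                 // ignore small speckles
params.filterByCircularity = false;   // keep non-circular objects too
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
vector<KeyPoint> keypoints;
detector->detect(Grayimage, keypoints);
drawKeypoints(image, keypoints, image, Scalar(0, 0, 255),
              DrawMatchesFlags::DRAW_RICH_KEYPOINTS);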
Regards,
Peter
Related
I've been working with text recognition on a dataset of images. I want to segment the characters of the image using connected components and by finding contours of a thresholded image. However, many of the characters are merged with each other and with other components in the image.
Can you give me some ideas for separating them? Thanks for the help!
Below are some examples, and part of my code:
Mat placa_contornos = processContourns(img_placa_adaptativeTreshold_mean);
vector<vector<Point>> contours_placa;
findContours(placa_contornos,
             contours_placa,
             CV_RETR_EXTERNAL,      // external contours only ("externos")
             CV_CHAIN_APPROX_NONE);
vector<vector<Point> >::iterator itc = contours_placa.begin();
while (itc != contours_placa.end()) {
    // Create bounding rect of object
    Rect mr = boundingRect(Mat(*itc));
    rectangle(imagem_placa_cor, mr, Scalar(0, 255, 0));
    ++itc;
}
imshow("placa con rectangles", imagem_placa_cor);
Results examples
original image, binarized image, result
I would try to erode the binary image more to see if that helps. You may also want to try fixing the skew and then removing the bottom line that connects the letters.
Also, this might be relevant: Recognize the characters of license plate
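A rough sketch of the first suggestion (erode a bit more, then estimate and remove the skew); this is not tested on the images above, and "binarized" stands for the thresholded plate image (white characters on black):
// Erode with a small rectangular kernel to thin the bridges between characters
Mat eroded;
erode(binarized, eroded, getStructuringElement(MORPH_RECT, Size(3, 3)));
// Estimate the skew from the minimum-area rectangle around all foreground pixels
vector<Point> fgPixels;
findNonZero(eroded, fgPixels);
RotatedRect box = minAreaRect(fgPixels);
double angle = (box.angle < -45.0) ? box.angle + 90.0 : box.angle;
// Rotate the image so the text baseline becomes horizontal
Mat rot = getRotationMatrix2D(box.center, angle, 1.0);
Mat deskewed;
warpAffine(eroded, deskewed, rot, eroded.size(), INTER_NEAREST);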
You can try an opening operation on your thresholded image to get rid of the noise. You can adjust the kernel size based on your needs.
// Get a 7x7 rectangular kernel
Mat element = getStructuringElement(MORPH_RECT, Size(7, 7), Point(1, 1));
// Apply the morphological opening
Mat result;
morphologyEx(placa_contornos, result, CV_MORPH_OPEN, element);
It gives the following intermediate output on your thresholded image, I guess it would improve your detection.
I am trying to use OpenCV via Visual C++ to extract the contours of an image. I was able to do that using the OpenCV tutorial for findContours.
findContours works in two steps:
Detect edges using the Canny edge detector.
Feed the output of Canny to findContours.
I want to try out the same with 'Structured Forest Edge Detection' (Zitnick et al.). I am able to extract the edges and display them, but when I try to feed the output to findContours, I get a 'cv::Exception at memory location 0x0020EE9C' error (see code below). What am I doing wrong?
Mat src = imread("image.jpg");
src.convertTo(src, CV_32F, 1.0 / 255.0);
Mat edges(src.size(), src.type());
Ptr<StructuredEdgeDetection> pDollar = createStructuredEdgeDetection("model.yml.gz");
pDollar->detectEdges(src, edges);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(edges, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
After pDollar->detectEdges(src, edges), the edges image is of type CV_32F; you must convert it to an 8-bit single-channel image before calling findContours.
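A sketch of that conversion (the threshold value of 50 is arbitrary and would need tuning):
// edges is CV_32F in [0, 1]; scale it to 8-bit and binarize before findContours
Mat edges8u;
edges.convertTo(edges8u, CV_8U, 255.0);
threshold(edges8u, edges8u, 50, 255, THRESH_BINARY);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(edges8u, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));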
This question has been annoying me for over 2 weeks.
My goal is to analyze a set of products stored in cartons on a shelf.
Right now, I have tried using the following methods from the OpenCV Python module: findContours, Canny, HoughLines, cv2.HoughLinesP, but I can't find the resulting grid.
My goal is to check whether the products have filled up the carton.
Here is the original image: http://postimg.org/image/hyz1jpd7p/7a4dd87c/
My first step is to use closing transformation:
closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)  # 'opening' and 'kernel' come from earlier steps not shown here
This gives me the contours (I don't have enough reputation to post this URL; the image is similar to the last image below, but without the red lines!).
Finally, the question is: how could I find the carton grid (i.e., the products in it, one by one)?
I have added the red lines in the image below.
Please give me some hints, thank you very much!
Red lines: http://postimg.org/image/6i0di4gsx/
I've played a little bit with the input and found a way to extract basically the grid with HoughLinesP after thresholding the Hue channel.
Edit: I'm using C++, but similar Python methods should be available, I guess.
cv::Mat image = cv::imread("box1.png");
cv::Mat output; image.copyTo(output);
cv::Mat hsv;
cv::cvtColor(image, hsv, CV_BGR2HSV);
std::vector<cv::Mat> hsv_channels;
cv::split(hsv, hsv_channels);
// thresholding here is a little sloppy, maybe you have to use some smarter way
cv::Mat h_thres = hsv_channels[0] < 50;
// unfortunately, HoughLinesP couldn't detect all the lines when they were too wide
// to make this part more robust I would suggest a ridge detection on the distance-transformed image instead of 'some erodes after a dilate'
cv::dilate(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());
cv::erode(h_thres, h_thres, cv::Mat());
std::vector<cv::Vec4i> lines;
cv::HoughLinesP( h_thres, lines, 1, CV_PI/(4*180.0), 50, image.cols/4, 10 );
for( size_t i = 0; i < lines.size(); i++ )
{
    cv::line( output, cv::Point(lines[i][0], lines[i][1]),
              cv::Point(lines[i][2], lines[i][3]), cv::Scalar(155,255,155), 1, 8 );
}
Here are the images:
Hue channel after HSV conversion:
Thresholded hue channel:
Output:
Maybe someone else has an idea how to improve the HoughLinesP result without those erode steps...
Hope this method helps you a bit and you can improve it further to use it for your needs.
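For the ridge-detection idea mentioned in the comments of the code above, a rough sketch (my own addition, untested on these images, assuming OpenCV 3+ constant names) could look like this:
// Distance transform of the thresholded hue channel (white = grid lines)
cv::Mat dist;
cv::distanceTransform(h_thres, dist, cv::DIST_L2, 3);
// Crude ridge detection: keep only pixels that are local maxima of the distance map
cv::Mat dilated, ridges;
cv::dilate(dist, dilated, cv::Mat());
cv::compare(dist, dilated, ridges, cv::CMP_GE);   // 255 where dist equals its local maximum
ridges.setTo(0, dist < 1.0f);                     // drop the background
// 'ridges' (CV_8U) could then replace the eroded image as input to HoughLinesP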
I am writing code that draws a circle, a line and a rectangle in a single-channel blank image. After that I find the contours in the image, and I get all the contours correctly. But after finding the contours, my source image gets distorted. Why is this happening? Can anyone help me solve it? My code looks like below.
using namespace cv;
using namespace std;
int main()
{
    Mat dst = Mat::zeros(480, 480, CV_8UC1);
    Mat draw = Mat::zeros(480, 480, CV_8UC1);

    line(draw, Point(100,100), Point(150,150), Scalar(255,0,0), 1, 8, 0);
    rectangle(draw, Rect(200,300,10,15), Scalar(255,0,0), 1, 8, 0);
    circle(draw, Point(80,80), 20, Scalar(255,0,0), 1, 8, 0);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours( draw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    for( int i = 0; i < contours.size(); i++ )
    {
        Scalar color( 255,255,255);
        drawContours( dst, contours, i, color, 1, 8, hierarchy );
    }

    imshow( "Components", dst );
    imshow( "draw", draw );
    waitKey(0);
}
Source image
Distorted source after finding contour
The documentation clearly states that the source image is altered when using findContours.
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours
See first note.
If you need the source image, you have to run findContours on a copy.
try using
findContours( draw.clone(), contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
For me, the second image looks like what I would expect as a result from an edge detection algorithm. My guess is the findContours function overwrites the original image with the result found.
Have a look here.
I think the problem is that you are expecting a perfect plot from findContours, and it gives you an ugly drawing.
findContours is not going to give you an exact plot of your figures. You must use drawContours in order to generate a proper image.
Look at the reference here: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours
You can see that the first parameter is an input/output array, so the function uses the same array to open, modify and save the image. That's why you are getting a distorted image.
In addition, see the parameter explanations. For the first parameter it says: "The function modifies the image while extracting the contours."
I haven't worked a lot with findContours, but I never had a clear image of what I wanted. I always have to use drawContours to get a nice plot of it.
Otherwise you can use the Canny function, which is going to give you the edges instead of the contours.
I have problems getting the contours of an object in my picture(s).
In order to delete all noise, I use adjustROI() and Canny().
I also tried erode() and dilate(), Laplacian(), GaussianBlur(), Sobel()... and I even found this code snippet to sharpen a picture:
GaussianBlur(src, dst_gaussian, Size(0, 0), 3);
addWeighted(src, 1.5, dst_gaussian, -0.5, 0, dst);
But my result is always the same: my object is filled with black and white (like noise on a TV screen), so that it is impossible to get the contours with findContours(). (findContours() finds a million contours, but not the one of the whole object; I check this with drawContours().)
I use C++ and I load my picture as a grayscale Mat (for Canny it has to be grayscale). My object has a different shape in every picture, but it is always around the middle of the picture.
I either need to find a way to get a better-coloured object through image processing, but I don't know what else to try, or a way to fill the object with colour after image processing (without having its contours, because the contours are what I want in the end).
Any ideas are welcome. Thank you in advance.
I found a solution that works in most cases. I fill my object using the probabilistic Hough transform HoughLinesP().
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI/180, 80, 30, 10);
for(size_t i = 0; i < lines.size(); i++)
{
    line(color_dst, Point(lines[i][0], lines[i][1]), Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8);
}
This is from some sample code the OpenCV documentation provides.
After using some edge-detection algorithm (like Canny()), the probabilistic Hough transform finds objects in binary pictures. The algorithm finds lines which, if drawn, represent the whole object. Of course, some of the parameters have to be adapted for the kind of picture.
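For completeness (not part of the original answer), dst and color_dst above could be produced along these lines, assuming src is the grayscale input and current OpenCV constant names:
Mat dst, color_dst;
Canny(src, dst, 50, 200, 3);               // edge map; the thresholds are example values
cvtColor(dst, color_dst, COLOR_GRAY2BGR);  // colour copy to draw the red lines on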
I'm not sure if this will work on every picture or every object, but in my case, it does.