I have some very basic code that uses the standard HoughCircles function in OpenCV to detect a circle. My problem is that my images are generated by a simulation algorithm: for each image, a radius r is drawn randomly between 5 and 10 (real numbers), and then, for each of 360 degrees, a point is plotted at a distance randomly within ±15% of r using the equation of a circle. (A sample image is attached.)
http://imgur.com/a/iIZ1N
Now, using the HoughCircles function, I was able to detect a circle of approximately the right radius by manually playing with the parameters (via trackbars, inspired by a GitHub project of the same nature), but I want to automate the process because I have over 1,000 images to run this on. Is there a better way to do this? I would highly appreciate any suggestions, as I am a beginner in image processing with a physics rather than a CS background.
A rough sample of my code (without the trackbars etc.) is below:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("C:\\Users\\walee\\Documents\\MATLAB\\plot_2 .jpeg", 0);
    Mat cimg, copy;
    copy = img.clone(); // clone() makes a deep copy; plain assignment only copies the header
    medianBlur(img, img, 5);
    GaussianBlur(img, img, Size(1, 5), 1.1, 0);
    cvtColor(img, cimg, COLOR_GRAY2BGR);
    vector<Vec3f> circles;
    HoughCircles(img, circles, HOUGH_GRADIENT, 1, 10, 94, 57, 120, 250);
    for (size_t i = 0; i < circles.size(); i++)
    {
        Vec3i c = circles[i];
        circle(cimg, Point(c[0], c[1]), c[2], Scalar(0, 0, 255), 1, LINE_AA); // detected circle
        circle(cimg, Point(c[0], c[1]), 2, Scalar(0, 255, 0), 1, LINE_AA);    // its centre
    }
    imshow("detected circles", cimg);
    waitKey();
    return 0;
}
If all your images have the same nature (black axes and points forming circles), I would suggest the following (a sketch of the pipeline is given after the list):
1) remove the axes by finding the black elements and replacing them with the background colour
2) invert the colours to get a black background
3) perform a morphological closing to fill the gaps and create more solid points
4) (optional) if the density of the points is high, apply another morphological operation, an erosion, to thin the band of points (erosion, not dilation, shrinks the white regions)
5) apply the Hough circle transform
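A minimal sketch of that pipeline, assuming OpenCV 3.x; the file name, the axis threshold (50), the kernel size and all HoughCircles parameters are placeholders to tune on your own data:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;

int main()
{
    Mat img = imread("plot.png", IMREAD_GRAYSCALE); // hypothetical file name

    // 1) the axes are the darkest pixels: paint everything darker than ~50 white
    img.setTo(255, img < 50);

    // 2) invert so the data points become white on a black background
    Mat inv;
    bitwise_not(img, inv);

    // 3) close the gaps between neighbouring points
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    morphologyEx(inv, inv, MORPH_CLOSE, kernel);

    // 4) (optional) erode to thin the resulting band of points
    erode(inv, inv, kernel);

    // 5) Hough circle transform; all parameter values are guesses to tune
    std::vector<Vec3f> circles;
    HoughCircles(inv, circles, HOUGH_GRADIENT, 1, inv.rows / 4, 100, 30, 0, 0);

    for (const Vec3f& c : circles)
        std::cout << "centre (" << c[0] << ", " << c[1]
                  << ") radius " << c[2] << std::endl;
    return 0;
}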
I am working on a program that extracts the region containing given text from a frame using OpenCV. Once the region is found, it should be blurred.
The text displayed in a frame is known in advance; it is always horizontal and its colour is white.
But I don't know where in the frame the text is displayed; its position changes occasionally.
How can I extract the area (x, y, width, height) of the text with OpenCV?
Are there any tools to do this?
I have attached two sample frames. You can see the 8-character hexadecimal code under the game mark.
sample 1: a case of complex color background
sample 2: a case of single color background
Please advise, thanks.
There are several resources on the web. One approach is to detect text by finding clusters of close edge elements (link). In summary, you first detect edges in your image with cv::Canny, cv::Sobel, or any other edge detection method (try out which works best in your case), then binarize the result with a threshold.
To remove artifacts you can apply a rank filter. Then you merge the letters with morphological operations (cv::morphologyEx); you could try a dilation or a closing. A closing will close the space between the letters and merge them without changing their size too much. You have to play with the kernel size and shape. Now you detect contours with cv::findContours, do a polygon approximation, and compute the bounding rect of each contour.
To keep just the right contours, test for a plausible size (e.g. if (contours[i].size() > 100)). Then you can order the found fields according to this article, where it is explained in detail.
This is the code from the first post:
#include "opencv2/opencv.hpp"
std::vector<cv::Rect> detectLetters(cv::Mat img)
{
std::vector<cv::Rect> boundRect;
cv::Mat img_gray, img_sobel, img_threshold, element;
cvtColor(img, img_gray, CV_BGR2GRAY);
cv::Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
cv::threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);
element = getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3) );
cv::morphologyEx(img_threshold, img_threshold, CV_MOP_CLOSE, element); //Does the trick
std::vector< std::vector< cv::Point> > contours;
cv::findContours(img_threshold, contours, 0, 1);
std::vector<std::vector<cv::Point> > contours_poly( contours.size() );
for( int i = 0; i < contours.size(); i++ )
if (contours[i].size()>100)
{
cv::approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
cv::Rect appRect( boundingRect( cv::Mat(contours_poly[i]) ));
if (appRect.width>appRect.height)
boundRect.push_back(appRect);
}
return boundRect;
}
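To cover the blurring part of the question: once detectLetters() returns the candidate boxes, each region can be blurred in place. A minimal sketch (the file name and the kernel size are assumptions):

#include "opencv2/opencv.hpp"

// relies on detectLetters() from the snippet above
int main()
{
    cv::Mat frame = cv::imread("frame.png"); // hypothetical file name
    std::vector<cv::Rect> boxes = detectLetters(frame);
    for (const cv::Rect& box : boxes)
    {
        // the ROI header shares frame's pixels, so blurring it edits the frame
        cv::Mat roi = frame(box);
        cv::GaussianBlur(roi, roi, cv::Size(31, 31), 0); // kernel size is a guess
    }
    cv::imwrite("frame_blurred.png", frame);
    return 0;
}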
I want to improve my project, which is designed for object detection.
First, to get my working image I use absdiff; next I apply the operations shown in my code below:
cv::threshold(subtractionResultEdges, threshold, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Sobel(threshold, sobel, CV_32F, 1, 0);
minMaxLoc(sobel, &minVal, &maxVal);
sobel.convertTo(sobel, CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));
dilate(subtractionResultEdges, subtractionResultEdges, verticalStructreMat, Point(-1, -1));
erode(subtractionResultEdges, filteredResult, verticalStructreMat, Point(-1, -1));
Canny(filteredResult, filteredResult, 33, 100, 3);
My last operation is findContours(canny_output, *contours, *hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
These are my results after each operation, plus the foreground obtained with the accumulate function (20 frames):
foreground:
http://j71i.imgup.net/foregroundc3dc.PNG
subtraction:
http://p81i.imgup.net/subtractio2866.PNG
Sobel:
http://g51i.imgup.net/sobela1fb.PNG
threshold:
http://p46i.imgup.net/treshold14c9.PNG
dilate, erode and Canny:
http://q68i.imgup.net/canny2e1a.PNG
findContours:
http://v76i.imgup.net/contours6845.PNG
Background is also obtained from accumulate function.
Could you help me get better corner or contour detection? I need it to determine the object size in pixels.
Thanks in advance!
Use a larger kernel for the dilate/erode part, maybe (11, 11) or even bigger, or alternatively run multiple iterations (this can be set as a parameter). This should connect the individual parts of your detected object better, leaving you with fewer contours.
To calculate the area, you can then use contourArea(); see the sketch below.
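A sketch of both suggestions, assuming OpenCV 3.x constants; the (11, 11) kernel and the 3 iterations are starting points to experiment with, not tuned values:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// stronger morphology before contour extraction, then the area in pixels
void measureObjects(const cv::Mat& subtractionResult)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(11, 11));
    cv::Mat closed, edges;
    cv::dilate(subtractionResult, closed, kernel, cv::Point(-1, -1), 3);
    cv::erode(closed, closed, kernel, cv::Point(-1, -1), 3);
    cv::Canny(closed, edges, 33, 100, 3);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++)
        std::cout << "object " << i << ": area "
                  << cv::contourArea(contours[i]) << " px" << std::endl;
}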
We're currently trying to detect the object regions in images of medical instruments using the methods available in OpenCV (C++ version). An example image is shown below:
Here are the steps we're following:
Converting the image to grayscale
Applying a median filter
Finding edges with a Sobel filter
Converting the result to a binary image using a threshold of 25
Skeletonizing the image to make sure we have neat edges
Finding the X largest connected components
This approach works perfectly for image 1, and here is the result:
The yellow borders are the connected components detected.
The rectangles are just to highlight the presence of a connected component.
To get understandable results, we simply removed the connected components that lie completely inside another one, so the end result is something like this:
So far everything was fine, but another sample image, shown below, complicated our work.
Having a small light green towel under the objects yields this image:
After filtering the regions as we did earlier, we got this:
Obviously, it is not what we need; we're expecting something like this:
I'm thinking about clustering the closest connected components found (somehow!) so we can minimize the impact of the towel, but I don't know yet if that's doable or whether someone has tried something like this before. Also, does anyone have a better idea for overcoming this kind of problem?
Thanks in advance.
Here's what I tried.
In the images, the background is mostly greenish and the area of the background is considerably larger than that of the foreground. So, if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins having smaller values are set to zero. This way we'll most probably retain the greenish (higher value) bins and discard other colors. Then backproject this histogram. The backprojection will highlight these greenish regions in the image.
Backprojection:
Then threshold this backprojection. This gives us the background.
Background (after some morphological filtering):
Invert the background to get foreground.
Foreground (after some morphological filtering):
Then find the contours of the foreground.
I think this gives a reasonable segmentation, and using this as mask you may be able to use a segmentation like GrabCut to refine the boundaries (I haven't tried this yet).
EDIT:
I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.
Contours:
GrabCut segmentation using the foreground as mask:
I'm using the OpenCV C API for the histogram processing part.
// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");
// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);
int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};
CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_ELLIPSE); // C-API name; MORPH_ELLIPSE has the same value
cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);
// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);
// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);
// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);
// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);
// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);
// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_CLOSE, 2);
// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_OPEN, 2);
cvErode(bw, bw, kernel, 1);
// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);
// grabcut
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);
Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat gcfg = mask == GC_PR_FGD;
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for(int idx = 0; idx < contours.size(); idx++)
{
drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}
// cleanup ...
UPDATE: We can do the above using the C++ interface as shown below.
const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};
Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;
Mat im = imread("bFly6.jpg");
im.convertTo(im32fc3, CV_32FC3);
// colour histogram and its backprojection; the dominant (background) colours
// produce the highest backprojection values
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);
// zero out the low backprojection values (likely foreground), then binarize
// to obtain the background mask
double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);
// morphological clean-up of the background, then invert to get the foreground
kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);
backprBw = 255 - backprBw;
morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);
// seed GrabCut with the foreground mask and let it refine the boundaries
Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);
Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat fg = mask == GC_PR_FGD;
I would consider a few options. My assumption is that the camera does not move. I haven't used the images or written any code, so this is mostly from experience.
Rather than just looking for edges, try separating the background using a segmentation algorithm. Mixture of Gaussians (MoG) background subtraction can help with this: given a set of images of the same region (i.e. video), you can cancel out the regions which are persistent, so new items such as instruments will pop out. Connected components can then be used on the resulting blobs; a sketch is below.
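A minimal sketch of that idea, assuming OpenCV 3.x (where the subtractor is created with createBackgroundSubtractorMOG2) and a hypothetical input video:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("table.avi"); // hypothetical video of the scene
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 16.0, false); // history, varThreshold, no shadows
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        // persistent regions are learned as background; new items pop out white
        mog2->apply(frame, fgMask);
    }
    // feed the final fgMask to connectedComponents() or findContours()
    return 0;
}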
I would look at segmentation algorithms to see if you can optimize the conditions to make this work for you. One major item is to make sure your camera is stable, or that you stabilize the images yourself in pre-processing.
I would consider using interest points to identify regions in the image with a lot of new material. Given that the background is relatively plain, small objects such as needles will create a bunch of interest points, while the towel should be much more sparse. Overlaying the detected interest points on the connected component footprint gives you a "density" metric which you can then threshold: if a connected component has a large ratio of interest points to its area, it is an interesting object (a sketch follows the next paragraph).
On this note, you can even clean up the connected component footprint by using a Convex Hull to prune the objects you have detected. This may help situations such as a medical instrument casting a shadow on the towel which stretches the component region. This is a guess, but interest points can definitely give you more information than just edges.
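A sketch of that density test, using ORB keypoints (any detector would do); the threshold of 0.001 keypoints per pixel is a made-up value to calibrate on real data:

#include <opencv2/opencv.hpp>
#include <vector>

// keep only the contours that are dense in interest points
std::vector<std::vector<cv::Point> > keepBusyComponents(
    const cv::Mat& gray,
    const std::vector<std::vector<cv::Point> >& contours)
{
    std::vector<cv::KeyPoint> keypoints;
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000); // up to 2000 keypoints, an assumption
    orb->detect(gray, keypoints);

    std::vector<std::vector<cv::Point> > kept;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        if (area <= 0)
            continue;
        int hits = 0;
        for (size_t k = 0; k < keypoints.size(); k++)
            if (cv::pointPolygonTest(contours[i], keypoints[k].pt, false) >= 0)
                hits++;
        if (hits / area > 0.001) // dense in keypoints => interesting object
            kept.push_back(contours[i]);
    }
    return kept;
}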
Finally, given that you have a stable background with clear objects in view, I would take a look at Bag-of-Features (BoF) to see if you can detect each individual object in the image. This may be useful since there seems to be a consistent pattern to the objects in these images. You can build a big database of images of needles, gauze, scissors, etc.; BoF, which is available in OpenCV, will then find those candidates for you. You can also mix it with the other operations you are doing to compare results.
Bag of Features using OpenCV
http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O
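For orientation, here is a rough sketch of building a BoF vocabulary with OpenCV's BOWKMeansTrainer and BOWImgDescriptorExtractor. It assumes OpenCV >= 4.4 (where SIFT lives in the main module); the file names and the vocabulary size are made up:

#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    cv::BOWKMeansTrainer trainer(100); // 100 visual words, an assumption

    // add SIFT descriptors from every training image (needles, gauze, ...)
    std::vector<std::string> files = {"needle1.png", "gauze1.png"}; // hypothetical
    for (size_t i = 0; i < files.size(); i++)
    {
        cv::Mat img = cv::imread(files[i], cv::IMREAD_GRAYSCALE);
        std::vector<cv::KeyPoint> kps;
        cv::Mat desc;
        sift->detectAndCompute(img, cv::noArray(), kps, desc);
        if (!desc.empty())
            trainer.add(desc);
    }
    cv::Mat vocabulary = trainer.cluster(); // k-means over all descriptors

    // describe a query image as a histogram over the vocabulary
    cv::BOWImgDescriptorExtractor bow(sift, cv::DescriptorMatcher::create("FlannBased"));
    bow.setVocabulary(vocabulary);
    cv::Mat query = cv::imread("scene.png", cv::IMREAD_GRAYSCALE); // hypothetical
    std::vector<cv::KeyPoint> kps;
    cv::Mat bofDescriptor;
    sift->detect(query, kps);
    bow.compute(query, kps, bofDescriptor); // feed this to a classifier (e.g. SVM)
    return 0;
}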
I would also suggest an addition to your initial version: you can skip any contour whose region has a width or height greater than half the image width or height.
// take the bounding rect of the contour
Rect rect = Imgproc.boundingRect(contours.get(i));
if (rect.width < inputImageWidth / 2 && rect.height < inputImageHeight / 2) {
    // then continue to draw it or use it for the next steps
}
I am trying to identify the contour around this black polygon, and I need access to those points, but it doesn't work for me. This is the input image:
However, the following code does not give the expected result.
CanvasFrame cnvs=new CanvasFrame("Polygon");
cnvs.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
CvMemStorage storage=CvMemStorage.create();
CvSeq squares = new CvContour();
squares = cvCreateSeq(0, sizeof(CvContour.class), sizeof(CvSeq.class), storage);
String path="project/Test/img/black.png";
IplImage src = cvLoadImage(path);
IplImage gry=cvCreateImage(cvGetSize(src),IPL_DEPTH_8U,1);
cvCvtColor(src, gry, CV_BGR2GRAY);
cvThreshold(gry, gry, 230, 255, CV_THRESH_BINARY_INV);
cnvs.showImage(gry);
cvFindContours(gry, storage, squares, Loader.sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
CvSeq ss=null;
CvSeq tmp=null;
int ii=0;
for (ss=squares; ss!=null; ss=ss.h_next()) {
tmp=cvApproxPoly(ss, sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, 8, 0);
System.out.println("index "+ii+" points "+tmp.total()+" area "+cvContourArea(ss, CV_WHOLE_SEQ, 0));
cvDrawContours(src, ss, CvScalar.RED, CV_RGB(248, 18, 18), 1, -1, 8);
//drawPoly(src, tmp);
}
IplConvKernel mat=cvCreateStructuringElementEx(7, 7, 3, 3, CV_SHAPE_RECT, null);
cvDilate(src, src, mat, CV_C);
cvErode(src, src, mat, CV_C);
cnvs.showImage(src);
saveImage("nw.png", src);
But when I check the output, it gives only:
index 0 points 8 area 20179.0
That means it identifies only 8 points of the polygon, although it should be 12.
Can someone please explain the problem with this code?
This is the output image:
The cvApproxPoly() function uses the Ramer–Douglas–Peucker algorithm for curve approximation. The purpose of the algorithm is to find a similar curve with fewer points. The algorithm takes two inputs:
a list of points (vertices),
the approximation accuracy.
Briefly, the greater the approximation accuracy value, the higher the chance that a point will be omitted from the approximated curve (please refer to the Wikipedia article, especially the animation). In your function call:
cvApproxPoly(ss, sizeof(CvContour.class), storage, CV_POLY_APPROX_DP, 8, 0);
the 5th parameter is the approximation accuracy. If you don't want to reduce the number of vertices, the value should be small (for this example, values around 1 give exactly 12 vertices, i.e. effectively no approximation). A minimal example is below.
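A minimal illustration with the C++ API (the JavaCV call is analogous); the file path matches the one in the question:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("project/Test/img/black.png", cv::IMREAD_GRAYSCALE);
    cv::threshold(gray, gray, 230, 255, cv::THRESH_BINARY_INV);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(gray, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contours[i], poly, 1.0, true); // accuracy 1 keeps all 12 vertices
        std::cout << "contour " << i << ": " << poly.size() << " vertices" << std::endl;
    }
    return 0;
}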