I am trying to set up my program to threshold for a colour (in BGR format). I have not fully decided which colour I will be looking for yet. I would also like the program to record how many pixels it has detected of that colour. My code so far is below, but it is not working.
#include "cv.h"
#include "highgui.h"
int main()
{
// Initialize capturing live feed from the camera
CvCapture* capture = 0;
capture = cvCaptureFromCAM(0);
// Couldn't get a device? Throw an error and quit
if(!capture)
{
printf("Could not initialize capturing...\n");
return -1;
}
// The two windows we'll be using
cvNamedWindow("video");
cvNamedWindow("thresh");
// An infinite loop
while(true)
{
// Will hold a frame captured from the camera
IplImage* frame = 0;
frame = cvQueryFrame(capture);
// If we couldn't grab a frame... quit
if(!frame)
break;
//create image where threshloded image will be stored
IplImage* imgThreshed = cvCreateImage(cvGetSize(frame), 8, 1);
//i want to keep it BGR format. Im not sure what colour i will be looking for yet. this can be easily changed
cvInRangeS(frame, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
//show the original feed and thresholded feed
cvShowImage("thresh", imgThreshed);
cvShowImage("video", frame);
// Wait for a keypress
int c = cvWaitKey(10);
if(c!=-1)
{
// If pressed, break out of the loop
break;
}
cvReleaseImage(&imgThreshed);
}
cvReleaseCapture(&capture);
return 0;
}
To threshold for a color:
1) Convert the image to HSV.
2) Then apply cvInRangeS.
3) Once you have the thresholded image, you can count the number of white pixels in it.
Try this tutorial to track yellow color: Tracking colored objects in OpenCV
I can tell you how to do it in both Python and C++, both with and without converting to HSV.
C++ Version (Converting to HSV)
Convert the image into an HSV image:
// Convert the image into an HSV image
IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
cvCvtColor(img, imgHSV, CV_BGR2HSV);
Create a new image that will hold the thresholded image:
IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
Do the actual thresholding using cvInRangeS:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
Here, imgHSV is the reference image, and the two cvScalars represent the lower and upper bounds of values that are yellowish in colour. (These bounds should work in almost all conditions. If they don't, try experimenting with the last two values.)
Consider any pixel. If all three values of that pixel (H, S and V, in that order) lie within the stated ranges, imgThreshed gets a value of 255 at that corresponding pixel. This is repeated for all pixels. So what you finally get is a thresholded image.
Use countNonZero to count the number of white pixels in the thresholded image.
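For instance, counting with the C API used in the snippets above could look like this (a minimal sketch; it assumes imgThreshed is the single-channel mask produced by cvInRangeS):

    int whitePixels = cvCountNonZero(imgThreshed);   // number of non-zero (white) pixels in the mask
    printf("Matching pixels: %d\n", whitePixels);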
Python Version (Without converting to HSV):
Create the lower and upper boundaries of the range you are interested in, in NumPy array format (note: you need import numpy as np):
lower = np.array((a,b,c), dtype = "uint8")
upper = np.array((x,y,z), dtype = "uint8")
In the above (a,b,c) is the lower bound and (x,y,z) is the upper bound.
Get the mask for the pixels that satisfy the range:
mask = cv2.inRange(image, lower, upper)
In the above, image is the image on which you want to work.
Count the number of white pixels that are present in the mask using countNonZero:
yellowpixels = cv2.countNonZero(mask)
print "Number of Yellow pixels are %d" % (yellowpixels)
Sources:
http://srikanthvidyasagar.blogspot.com/2016/01/tracking-colored-objects-in-opencv.html
http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
count number of black pixels in an image in Python with OpenCV
I'm wondering how to plot a 2d histogram of an HSV Mat in opencv c++. My current code attempting to display it fails miserably. I've looked around on how to plot histograms and all the ones I've found were those plotting them as independent 1d histograms.
Here's my current output with the number of hue bins being 30 and saturation bins being 32:
Here's another output with the number of hue bins being 7 and saturation bins being 5:
I would like it to look more like the result here
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
I also noticed whenever I do cout << Hist.size it gives me 50x50. Am I to understand that just means the first dimension of the array is 250 in size?
Also, how does one sort the histogram from highest to lowest (or vice versa) value frequency? That is another problem I am trying to solve.
My current function is as follows.
void Perform_Hist(Mat& MeanShift, Mat& Pyramid_Result, Mat& BackProj){
    Mat HSV, Hist;
    int histSize[] = {hbins, sbins};
    int channels[] = {0, 1};
    float hranges[] = {0, 180};
    float sranges[] = {0, 256};
    const float* ranges[] = {hranges, sranges};

    cvtColor(MeanShift, HSV, CV_BGR2HSV);
    Mat PyrGray = Pyramid_Result.clone();

    calcHist(&HSV, 1, channels, Mat(), Hist, 2, histSize, ranges, true, false);
    normalize(Hist, Hist, 0, 255, NORM_MINMAX, -1, Mat());
    invert(Hist, Hist, 1);
    calcBackProject(&PyrGray, 1, channels, Hist, BackProj, ranges, 1, true);

    double maxVal = 0; minMaxLoc(Hist, 0, &maxVal, 0, 0);
    int scale = 10;
    Mat histImage = Mat::zeros(sbins*scale, hbins*10, CV_8UC3);
    for(int i = 1; i < hbins * sbins; i++){
        line(histImage,
             Point(hbins*sbins*(i-1), sbins - cvRound(Hist.at<float>(i-1))),
             Point(hbins*sbins*(i-1), sbins - cvRound(Hist.at<float>(i))),
             Scalar(255,0,0), 2, 8, 0);
    }
    imshow (HISTOGRAM, histImage);
}
Did you mean something like this?
It is an HSV histogram shown as a 3D graph.
V is ignored to get down to 3D (otherwise it would be a 4D graph ...)
if yes then this is how to do it (I do not use OpenCV so adjust it to your needs):
convert source image to HSV
compute histogram ignoring V value
all colors with the same H,S are considered as a single color, no matter what the V is
you could ignore any other channel instead, but V looks like the best choice
draw the graph
first draw an ellipse with a darker color (the HSV base disc)
then for each dot take the corresponding histogram value and draw a vertical line with a brighter color. The line length is proportional to the histogram value
Here is the C++ code I did this with:
picture pic0,pic1,pic2,zed;
int his[65536];
DWORD w;
int h,s,v,x,y,z,i,n;
double r,a;
color c;
// compute histogram (ignore v)
pic2=pic0; // copy input image pic0 to pic2
pic2.rgb2hsv(); // convert to HSV
for (x=0;x<65536;x++) his[x]=0; // clear histogram
for (y=0;y<pic2.ys;y++) // compute it
    for (x=0;x<pic2.xs;x++)
    {
        c=pic2.p[y][x];
        h=c.db[picture::_h];
        s=c.db[picture::_s];
        w=h+(s<<8); // form 16 bit number from 24bit HSV color
        his[w]++; // update color usage count ...
    }
for (n=0,x=0;x<65536;x++) if (n<his[x]) n=his[x]; // max probability
// draw the colored HSV base plane and histogram
zed =pic1; zed .clear(999); // zed buffer for 3D
pic1.clear(0); // image of histogram
for (h=0;h<255;h++)
    for (s=0;s<255;s++)
    {
        c.db[picture::_h]=h;
        c.db[picture::_s]=s;
        c.db[picture::_v]=100; // HSV base darker
        c.db[picture::_a]=0;
        x=pic1.xs>>1; // HSV base disc position centers on the bottom
        y=pic1.ys-100;
        a=2.0*M_PI*double(h)/256.0; // disc -> x,y
        r=double(s)/256.0;
        x+=120.0*r*cos(a); // ellipse for 3D illusion
        y+= 50.0*r*sin(a);
        z=-y;
        if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } x++;
        if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } y++;
        if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } x--;
        if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } y--;
        w=h+(s<<8); // get histogram index for this color
        i=((pic1.ys-150)*his[w])/n;
        c.db[picture::_v]=255; // histogram brighter
        for (;(i>0)&&(y>0);i--,y--)
        {
            if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } x++;
            if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } y++;
            if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } x--;
            if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; } y--;
        }
    }
pic1.hsv2rgb(); // convert to RGB to see correct colors
input image is pic0 (rose), output image is pic1 (histogram graph)
pic2 is the pic0 converted to HSV for histogram computation
zed is the Zed buffer for 3D display avoiding Z sorting ...
I use my own picture class for images so some members are:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
rgb2hsv() and hsv2rgb() ... guess what it does :)
[edit1] your 2D histogram
It looks like you have the colors coded into a 2D array. One axis is H and the other is S. So you need to compute the H,S values from the array address. If it is linear then for HSV[i][j]:
H=h0+(h1-h0)*i/maxi
S=s0+(s1-s0)*j/maxj
or i,j reversed
h0,h1,s0,s1 are the color ranges
maxi,maxj are the array size
As you can see, you also discard V like me, so now you have H,S for each cell in the histogram 2D array, where the cell value is the probability. Now if you want to draw an image you need to know how to output this (as a 2D graph, 3D, mapping, ...). For an unsorted 2D graph, draw a graph where:
x=i+maxi*j
y=HSV[i][j]
color=(H,S,V=200);
If you want to sort it, then just compute the x axis differently, or loop the 2D array in sorted order and simply increment x.
[edit2] update of code and some images
I have repaired the C++ code above (wrong Z value sign, changed the Z buffer condition, and added bigger points for nicer output). Your 2D array colors can look like this:
Where one axis/index is H, the other is S, and Value is fixed (I chose 200). If your axes are swapped then just mirror it by y=x, I think ...
The color sorting is really just the order in which you pick all the colors from the array. For example:
v=200; x=0;
for (h=0;h<256;h++)
    for (s=0;s<256;s++,x++)
    {
        y=HSV[h][s];
        // here draw line (x,0)->(x,y) by color hsv2rgb(h,s,v);
    }
This is the incrementing way. You can compute x from H,S instead to achieve a different sorting, or swap the fors (x++ must be in the inner loop).
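If you want this with OpenCV types, here is a rough sketch of the incrementing layout (only a sketch: it assumes Hist is the hbins x sbins CV_32F matrix produced by calcHist in the question, with H spanning 0-180 and S spanning 0-256, and that the cv namespace is in scope; each bar is coloured with its own (H,S) at a fixed V of 200):

    double maxVal = 0;
    minMaxLoc(Hist, 0, &maxVal, 0, 0);
    if (maxVal <= 0) maxVal = 1;                // guard against an empty histogram

    int barWidth = 3, height = 200;
    Mat plot = Mat::zeros(height, hbins * sbins * barWidth, CV_8UC3);

    for (int h = 0; h < hbins; h++)
        for (int s = 0; s < sbins; s++)
        {
            float binVal = Hist.at<float>(h, s);
            int barHeight = cvRound(binVal * height / maxVal);
            int x = (h * sbins + s) * barWidth; // incrementing x over all (H,S) cells

            // colour of the bar = the (H,S) this bin represents, with a fixed V
            Mat hsvPixel(1, 1, CV_8UC3, Scalar(h * 180 / hbins, s * 255 / sbins, 200));
            Mat bgrPixel;
            cvtColor(hsvPixel, bgrPixel, CV_HSV2BGR);
            Vec3b bgr = bgrPixel.at<Vec3b>(0, 0);

            rectangle(plot, Point(x, height - barHeight), Point(x + barWidth - 1, height - 1),
                      Scalar(bgr[0], bgr[1], bgr[2]), CV_FILLED);
        }
    imshow("HS histogram", plot);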
If you want RGB histogram plot instead see:
how to plot rgb color histogram of image with objective c
I have got a mask calculated with grabCut (which calculates the foreground). I want to extract only the background, leaving the foreground transparent. I managed to extract the foreground (with the background transparent) using the following code. How is it possible to do the opposite?
int border = 20;
int border2 = border + border;
cv::Rect rectangle(border,border,image.cols-border2,image.rows-border2);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel;
cv::grabCut(image, // input image
result, // segmentation result
rectangle,// rectangle containing foreground
bgModel,fgModel, // models
1, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
cv::imwrite(argv[2], foreground);
cv::imwrite(argv[3], image);
Mat dst;//(src.rows,src.cols,CV_8UC4);
Mat tmp,alpha;
cvtColor(foreground,tmp,CV_BGR2GRAY);
threshold(tmp,alpha,100,255,THRESH_BINARY);
Mat rgb[3];
split(foreground,rgb);
Mat rgba[4]={rgb[0],rgb[1],rgb[2],alpha};
merge(rgba,4,dst);
imwrite("dst.png",dst);
Basically I think I've got to change these lines:
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
How is it possible to select the rest of the image, i.e. the opposite of result?
Just invert your mask as in:
cv::Mat background(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(background, ~result); // fg pixels not copied
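If the background also needs to be saved with the foreground transparent (mirroring the alpha-merge the question already does for the foreground), one possible sketch, reusing the question's variable names and using the inverted mask directly as the alpha channel, would be:

    cv::Mat background(image.size(), CV_8UC3, cv::Scalar(255,255,255));
    image.copyTo(background, ~result);           // fg pixels not copied

    cv::Mat alpha = ~result;                     // 255 = background kept, 0 = foreground transparent
    cv::Mat rgb[3];
    cv::split(background, rgb);
    cv::Mat rgba[4] = { rgb[0], rgb[1], rgb[2], alpha };
    cv::Mat dst;
    cv::merge(rgba, 4, dst);
    cv::imwrite("background.png", dst);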
I know that there are a hundred topics about my question all over the web, but I would like to ask specifically about my problem, because I have tried almost all solutions without any success.
I am trying to count circles in an image (yes, I have already tried Hough circles, but due to light reflections on my object it is, I think, not very robust).
Then I tried to create a classifier (no success; I think there are not enough features, so the detection is not good).
I have also tried HSV conversion and tried to find my object by color (again I had some problems because of the light and the variations of colors).
As you can see in the image, there are 8 caps and I would like to be able to count them.
Using all of these methods I was able to detect the objects in an image (because I was optimizing all the function parameters for that specific image), but as soon as I load a new, similar image the results were disappointing.
Please follow this link to see the image.
Below you can find parts of everything I have tried:
1. Hough circles
import cv2
import cv2.cv as cv
import numpy as np

img = cv2.imread('frame71.jpg', 0)   # read as grayscale (HoughCircles needs a single-channel image)
if img is None:
    print "There is no image file. Quitting..."
    quit()
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)   # colour copy for drawing

circles = cv2.HoughCircles(img, cv.CV_HOUGH_GRADIENT, 3, 50,
                           param1=55, param2=125, minRadius=25, maxRadius=45)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
print len(circles[0, :])
cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
2. HSV Transform, color detection
def image_process(frame, h_low, s_low, v_low, h_up, s_up, v_up, ksize):
    temp = ksize
    if(temp%2==1):          # median blur kernel size must be odd
        ksize = temp
    else:
        ksize = temp+1
    #if(True):
    #    return frame
    #thresh = frame
    #try:
    #TODO: optimize this part of the code as much as possible
    try:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([h_low, s_low, v_low], np.uint8)
        upper = np.array([h_up, s_up, v_up], np.uint8)   # was [h_up, s_up, h_up]; v_up looks like the intended value
        mask = cv2.inRange(hsv, lower, upper)
        res = cv2.bitwise_and(hsv, hsv, mask=mask)
        thresh = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
        #thresh = cv2.threshold(res, 50, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.threshold(thresh, 50, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.medianBlur(thresh, ksize)
    except Exception as inst:
        print type(inst)
    #cv2.imshow('thresh', thresh)
    return thresh
3. Cascade classifier
img = cv2.imread('frame405.jpg', 1)
cap_cascade = cv2.CascadeClassifier('haar_30_17_16_stage.xml')
caps = cap_cascade.detectMultiScale(img, 1.3, 5)
#print caps
for (x,y,w,h) in caps:
    cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0),2)
    #cv2.rectangle(img, (10,10),(100,100),(0,255,255),4)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
quit()
Regarding training the classifier, I really used a lot of variations of images, samples, negatives and positives, number of stages, w and h, but the results were not very accurate.
Finally, I would like to know from your experience which is the best method I should follow, and I will stick with that in order to optimize my detection. Keep in mind that all images are similar but NOT identical. There are some differences due to light, movement, etc.
Thank you in advance.
I did some experiments with the sample image. I'm posting my results, and if you find them useful, you can improve and optimize further. Here are the steps:
downsample the image
perform morphological opening
find Hough circles
cluster the circles by radii (bottle circles should get the same label)
filter the circles by a radius threshold
you can also cluster circles by their center x and y coordinates (I haven't done this)
prepare a mask from the filtered circles and extract the possible bottles region
cluster this region by color
Code is in C++. I'm attaching my results.
Mat im = imread(INPUT_FOLDER_PATH + string("frame71.jpg"));
Mat small;
int kernelSize = 9; // try with different kernel sizes. 5 onwards gives good results
pyrDown(im, small); // downsample the image
Mat morph;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(kernelSize, kernelSize));
morphologyEx(small, morph, MORPH_OPEN, kernel); // open
Mat gray;
cvtColor(morph, gray, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/8.0); // find circles
// -------------------------------------------------------
// cluster the circles by radii. similarly you can cluster them by center x and y for further filtering
Mat circ = Mat(circles);
Mat data[3];
split(circ, data);
Mat labels, centers;
kmeans(data[2], 2, labels, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS, centers);
// -------------------------------------------------------
Mat rgb;
small.copyTo(rgb);
//cvtColor(gray, rgb, CV_GRAY2BGR);
Mat mask = Mat::zeros(Size(gray.cols, gray.rows), CV_8U);
for(size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    float r = centers.at<float>(labels.at<int>(i));
    if (r > 30.0f && r < 45.0f) // filter circles by radius (values are based on the sample image)
    {
        // just for display
        circle(rgb, center, 3, Scalar(0,255,0), -1, 8, 0);
        circle(rgb, center, radius, Scalar(0,0,255), 3, 8, 0);
        // prepare a mask
        circle(mask, center, radius, Scalar(255,255,255), -1, 8, 0);
    }
}
// use each filtered circle as a mask and extract the region from original downsampled image
Mat rgb2;
small.copyTo(rgb2, mask);
// cluster the masked region by color
Mat rgb32fc3, lbl;
rgb2.convertTo(rgb32fc3, CV_32FC3);
int imsize[] = {rgb32fc3.rows, rgb32fc3.cols};
Mat color = rgb32fc3.reshape(1, rgb32fc3.rows*rgb32fc3.cols);
kmeans(color, 4, lbl, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS);
Mat lbl2d = lbl.reshape(1, 2, imsize);
Mat lbldisp;
lbl2d.convertTo(lbldisp, CV_8U, 50);
Mat lblColor;
applyColorMap(lbldisp, lblColor, COLORMAP_JET);
Results:
Filtered circles:
Masked:
Segmented:
Hello, finally I think I found a way to count the caps on the bottles.
Read the image
Teach (find the correct values for the HSV upper/lower limits)
Select the desired color (using HSV and a mask)
Find contours on the masked image
Find the minimum enclosing circles for the contours
Reject all circles beyond the thresholds
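A rough C++ sketch of those steps (only a sketch, assuming the usual OpenCV headers and namespaces; the HSV bounds are placeholders that the "teach" step would have to provide, and the radius thresholds are guesses):

    Mat img = imread("frame71.jpg");
    Mat hsv, mask;
    cvtColor(img, hsv, CV_BGR2HSV);
    inRange(hsv, Scalar(20, 100, 100), Scalar(30, 255, 255), mask);   // select the desired colour

    vector<vector<Point> > contours;
    findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    int capCount = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        Point2f center;
        float radius;
        minEnclosingCircle(contours[i], center, radius);
        if (radius > 25 && radius < 45)      // reject circles outside the expected size
        {
            capCount++;
            circle(img, Point(cvRound(center.x), cvRound(center.y)), cvRound(radius), Scalar(0, 0, 255), 2);
        }
    }
    cout << "Caps found: " << capCount << endl;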
I have also ordered a polarizing filter, which I think will reduce glare a lot. I am open to suggestions for further improvement (robustness and speed). Both robustness and speed are crucial for my application.
Thank you.
I'm using OpenNI for a project with a Kinect sensor. I'd like to color the user's pixels given by the depth map. At the moment I have pixels that go from white to black, but I want them to go from red to black. I've tried alpha blending, but the only result is that I get pixels from pink to black, because adding (with addWeighted) red + white = pink.
This is my current code:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC3, Scalar(255,0,0));
Mat red_body; // = Mat::zeros(240,320, CV_8UC3);
red.copyTo(red_body, device.getUserMask());
addWeighted(red_body, 0.8, layers, 0.5, 0.0, layers);
where device.getDepth() returns a cv::Mat with the depth map and device.getUserMask() returns a cv::Mat with the user pixels (only white pixels)
Any advice?
EDIT:
One more thing:
Thanks to Sammy's answer I've done it. But actually I don't have values exactly from 0 to 255, but from (for example) 123 to 220.
I'm going to find the minimum and maximum via a simple for loop (is there a better way?), and how can I map my values from min-max to 0-255?
First, OpenCV's default color format is BGR not RGB. So, your code for creating the red image should be
Mat red = Mat(240,320, CV_8UC3, Scalar(0,0,255));
For a red-to-black color map, you can use element-wise multiplication instead of alpha blending:
Mat out = red_body.mul(layers, 1.0/255);
You can find the min and max values of a matrix M using
double minVal, maxVal;
minMaxLoc(M, &minVal, &maxVal, 0, 0);
You can then subtract the minimum value and scale with a factor:
double factor = 255.0/(maxVal - minVal);
M = factor*(M - minVal)
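Equivalently, convertTo can apply the same linear mapping in place, so no manual loop is needed (a small sketch of the scaling step above):

    double minVal, maxVal;
    minMaxLoc(M, &minVal, &maxVal, 0, 0);
    double factor = 255.0/(maxVal - minVal);
    M.convertTo(M, -1, factor, -minVal*factor);   // M = factor*M - factor*minVal, same type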
Kinda clumsy and slow, but maybe split layers, copy red_body (make it a one channel Mat, not 3) to the red channel, merge them back into layers?
Get the same effect, but much faster (in place) with reshape:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC1, Scalar(255)); // One channel
Mat red_body;
red.copyTo(red_body, device.getUserMask());
Mat flatLayer = layers.reshape(1,240*320); // presumed dimensions of layer
red_body.reshape(0,240*320).copyTo(flatLayer.col(2)); // col(2) is the red channel in BGR order
// layers now has the red from red_body
I have a problem with filling white holes inside black coins so that I can have only 0-255 binary images with filled black coins. I have used a median filter to accomplish it, but in that case the connection bridge between coins grows and it becomes impossible to recognize them after several erosions... So I need a simple floodFill-like method in OpenCV.
Here is my image with holes:
EDIT: The floodFill-like function must fill holes in big components without prompting for X, Y coordinates as a seed...
EDIT: I tried to use the cvDrawContours function, but it doesn't fill contours inside bigger ones.
Here is my code:
CvMemStorage mem = cvCreateMemStorage(0);
CvSeq contours = new CvSeq();
CvSeq ptr = new CvSeq();
int sizeofCvContour = Loader.sizeof(CvContour.class);
cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);
int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
System.out.println("The num of contours: "+numOfContours); //prints 87, ok
Random rand = new Random();
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
    CvScalar color = CV_RGB( randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
    cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
}
CanvasFrame canvas6 = new CanvasFrame("drawContours");
canvas6.showImage(gray);
Result: (you can see black holes inside each coin)
There are two methods to do this:
1) Contour Filling:
First, invert the image, find contours in the inverted image, fill them, and invert back.
des = cv2.bitwise_not(gray)
contour,hier = cv2.findContours(des,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    cv2.drawContours(des,[cnt],0,255,-1)
gray = cv2.bitwise_not(des)
Resulting image:
2) Image Opening:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
res = cv2.morphologyEx(gray,cv2.MORPH_OPEN,kernel)
The resulting image is as follows:
You can see there is not much difference between the two cases.
NB: gray is the grayscale image; all code is in OpenCV-Python.
Reference: OpenCV Morphological Transformations
A simple dilate and erode would close the gaps fairly well, I imagine. I think maybe this is what you're looking for.
A more robust solution would be to do an edge detect on the whole image, and then a hough transform for circles. A quick google shows there are code samples available in various languages for size invariant detection of circles using a hough transform, so hopefully that will give you something to go on.
The benefit of using the hough transform is that the algorithm will actually give you an estimate of the size and location of every circle, so you can rebuild an ideal image based on that model. It should also be very robust to overlap, especially considering the quality of the input image here (i.e. less worry about false positives, so can lower the threshold for results).
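As a rough illustration of that suggestion (a sketch only; src is assumed to be the coin image, and the radius range and Canny/accumulator thresholds would need tuning for the actual data):

    Mat gray;
    cvtColor(src, gray, CV_BGR2GRAY);
    GaussianBlur(gray, gray, Size(9, 9), 2, 2);    // smooth to reduce spurious edges

    vector<Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2,              // inverse accumulator resolution
                 gray.rows / 8,  // minimum distance between circle centres
                 200, 100,       // Canny high threshold, accumulator threshold
                 20, 80);        // min/max radius in pixels (guesses)

    // each detection gives a centre and radius, so an "ideal" image can be rebuilt from the model
    Mat ideal = Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < circles.size(); i++)
        circle(ideal, Point(cvRound(circles[i][0]), cvRound(circles[i][1])),
               cvRound(circles[i][2]), Scalar(255), CV_FILLED);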
You might be looking for the Fillhole transformation, an application of morphological image reconstruction.
This transformation will fill the holes in your coins, though at the cost of also filling all holes between groups of adjacent coins. The Hough-space or opening-based solutions suggested by the other posters will probably give you better high-level recognition results.
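For reference, a common way to approximate such a fill-holes operation in OpenCV is the flood-fill-from-the-border trick (a sketch, assuming binary is an 8-bit image with white objects and that pixel (0,0) belongs to the background):

    Mat filled = binary.clone();
    floodFill(filled, Point(0, 0), Scalar(255));   // flood the outer background with white
    Mat holes;
    bitwise_not(filled, holes);                    // whatever was not reached = the holes
    Mat result = binary | holes;                   // original objects with their holes filled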
In case someone is looking for the cpp implementation -
std::vector<std::vector<cv::Point> > contours_vector;
cv::findContours(input_image, contours_vector, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
cv::Mat contourImage(input_image.size(), CV_8UC1, cv::Scalar(0));
for ( ushort contour_index = 0; contour_index < contours_vector.size(); contour_index++) {
    cv::drawContours(contourImage, contours_vector, contour_index, cv::Scalar(255), -1);
}
cv::imshow("con", contourImage);
cv::waitKey(0);
Try using the cvFindContours() function. You can use it to find connected components. With the right parameters this function returns a list with the contours of each connected component.
Find the contours which represent a hole. Then use cvDrawContours() to fill the selected contours with the foreground color, thereby closing the holes.
I think if the objects are touching or crowded, there will be some problems using the contours and the mathematical morphology opening.
Instead, the following simple solution was found and tested. It works very well, not only for these images but for other images as well.
Here are the steps (optimized), as seen at http://blogs.mathworks.com/steve/2008/08/05/filling-small-holes/
Let I be the input image:
1. filled_I = floodfill(I). // fill every hole in the image.
2. inverted_I = invert(I). // invert the image.
3. holes_I = filled_I AND inverted_I. // finds all holes
4. cc_list = connectedcomponent(holes_I) // list of all connected components in holes_I.
5. holes_I = remove(cc_list, holes_I, smallholes_threshold_size) // remove all holes from holes_I having size > smallholes_threshold_size.
6. out_I = I OR holes_I. // fill only the small holes.
In short, the algorithm is just to find all holes, remove the big ones, then write only the small ones back onto the original image.
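A rough OpenCV translation of those steps could look like the sketch below. It is only a sketch: binary is assumed to be an 8-bit image with white objects whose pixel (0,0) is background, the hole-size threshold is an arbitrary guess, steps 1-3 are collapsed into a single flood fill from a corner, and connectedComponentsWithStats requires OpenCV 3 or later:

    // steps 1-3: get all holes by flooding the outer background and inverting
    Mat filled = binary.clone();
    floodFill(filled, Point(0, 0), Scalar(255));
    Mat allHoles;
    bitwise_not(filled, allHoles);

    // steps 4-5: keep only the small holes
    Mat labels, stats, centroids;
    int n = connectedComponentsWithStats(allHoles, labels, stats, centroids, 8);
    Mat smallHoles = Mat::zeros(binary.size(), CV_8UC1);
    int smallholes_threshold_size = 100;           // assumed threshold, in pixels
    for (int i = 1; i < n; i++)                    // label 0 is the background
        if (stats.at<int>(i, CC_STAT_AREA) <= smallholes_threshold_size)
            smallHoles.setTo(255, labels == i);

    // step 6: write only the small holes back onto the original image
    Mat out = binary | smallHoles;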
I've been looking around the internet to find a proper imfill function (like the one in Matlab) but working in C with OpenCV. After some research, I finally came up with a solution:
IplImage* imfill(IplImage* src)
{
    CvScalar white = CV_RGB( 255, 255, 255 );
    IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;
    cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
    cvZero( dst );
    for( ; contour != 0; contour = contour->h_next )
    {
        cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
    }
    IplImage* bin_imgFilled = cvCreateImage(cvGetSize(src), 8, 1);
    cvInRangeS(dst, white, white, bin_imgFilled);
    return bin_imgFilled;
}
For this: Original Binary Image
Result is: Final Binary Image
The trick is in the parameter settings of the cvDrawContours function:
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
dst = destination image
contour = pointer to the first contour
white = color used to fill the contour
0 = maximal level for drawn contours. If 0, only the contour itself is drawn
CV_FILLED = thickness of the lines the contours are drawn with. If it is negative (for example, CV_FILLED, which equals -1), the contour interiors are drawn.
More info in the OpenCV documentation.
There is probably a way to get "dst" directly as a binary image but I couldn't find how to use the cvDrawContours function with binary values.
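One possible variant (untested, just a sketch) is to create dst as a single-channel image and draw with cvScalarAll, which removes the extra cvInRangeS pass; the drawing functions accept single-channel images, so this should behave the same as the version above:

    IplImage* imfill_binary(IplImage* src)
    {
        IplImage* dst = cvCreateImage(cvGetSize(src), 8, 1);   // single-channel output
        CvMemStorage* storage = cvCreateMemStorage(0);
        CvSeq* contour = 0;

        cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
        cvZero(dst);

        for( ; contour != 0; contour = contour->h_next )
            cvDrawContours(dst, contour, cvScalarAll(255), cvScalarAll(255), 0, CV_FILLED);

        cvReleaseMemStorage(&storage);
        return dst;
    }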