I am trying to use the HoughCircles method to detect circles in an image, but it seems that this method cannot detect all circles when they share almost the same centre. For example, if I have 3 circles with almost the same centre, it detects them as a single circle. Please suggest a way around this to find all the circles. Here is the source image:
I might be wrong in my assumption about the HoughCircles method.
Thanks in advance.
The reason is that when you call HoughCircles you have to specify the minimum distance between the detected circle centers. Circles with the same centre, as you describe, have a distance of zero between them, so in this case you should set the minimum distance parameter to almost 0.
void cv::HoughCircles ( InputArray image,
OutputArray circles,
int method,
double dp,
double minDist, // Set this parameter to almost zero, since exactly 0 is not accepted.
double param1 = 100,
double param2 = 100,
int minRadius = 0,
int maxRadius = 0
)
When I tried with these parameters:
HoughCircles( input, output, CV_HOUGH_GRADIENT, 1, 0.5, 60, 30, 1, 200 );
I get this:
Edit: After some more experimentation I got 12 circles, but in reality there are 18 (not counting the circles at the edges). The reason could be the image quality. Here are my code and result:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
    /// Load the source image and convert it to gray
    Mat src_gray, dst, src = imread("/ur/source/image/image.jpg", 1);
    imshow("Source", src);

    int i = 50;
    bilateralFilter(src, dst, i, i * 2, i / 2);
    imshow("Output", dst);
    cvtColor(dst, src_gray, CV_BGR2GRAY);

    vector<Vec3f> circles;
    /// Apply the Hough Transform to find the circles
    HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1, 0.01, 80, 55, 0, 100);

    Mat zero_mask = Mat::zeros(src.rows, src.cols, CV_8UC3);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // circle center
        circle(zero_mask, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        // circle outline
        circle(zero_mask, center, radius, Scalar(0, 0, 255), 1, 8, 0);
    }

    cout << circles.size() << endl;
    imshow("Output2", src_gray);
    imshow("outt", zero_mask);
    waitKey(0);
    return 0;
}
Output:
Related
The picture is above. I am using OpenCV to process it and I have tried the Hough Transform, but failed. I have also found that it is very hard to set the relevant parameters for the Hough Transform.
The codes are as following:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
    Mat srcImg = imread("srccenter.bmp");
    Mat greyImg;
    cvtColor(srcImg, greyImg, COLOR_BGR2GRAY);

    std::vector<cv::Vec3f> circles;
    /// Apply the Hough Transform to find the circles
    HoughCircles(greyImg, circles, CV_HOUGH_GRADIENT, 1, 10, 100, 20, 0, 0);

    /// Draw the circles detected
    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(srcImg, center, 3, Scalar(0, 255, 255), -1);
        circle(srcImg, center, radius, Scalar(0, 255, 0), 1);
    }

    namedWindow("srcImg", WINDOW_NORMAL);
    imshow("srcImg", srcImg);
    waitKey(0);
    return 0;
}
But the result is that I cannot detect any circle.
How can I detect the inner circle?
Do you have any good ideas?
You need to change the min_dist parameter to nearly zero. This parameter is the minimum distance between detected centers, and in your case the centers of the circles are very close together.
You should also change param_1, the threshold of the internal Canny edge detector.
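If it helps, here is a minimal sketch of those two changes applied to your code. The blur step and the exact values (minDist = 0.1, param1 = 50, param2 = 30) are illustrative starting points to tune, not verified settings for your image:

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    // Same input file as in the question; adjust the path as needed.
    Mat srcImg = imread("srccenter.bmp");
    Mat greyImg;
    cvtColor(srcImg, greyImg, COLOR_BGR2GRAY);

    // A light blur usually helps the internal Canny stage produce clean edges.
    GaussianBlur(greyImg, greyImg, Size(5, 5), 1.5);

    std::vector<cv::Vec3f> circles;
    // minDist = 0.1 (nearly zero) lets concentric centers coexist;
    // param1 = 50 lowers the Canny threshold so the weaker inner edge survives;
    // param2 = 30 lowers the accumulator threshold.
    HoughCircles(greyImg, circles, CV_HOUGH_GRADIENT, 1, 0.1, 50, 30, 0, 0);

    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(srcImg, center, radius, Scalar(0, 255, 0), 1);
    }

    namedWindow("srcImg", WINDOW_NORMAL);
    imshow("srcImg", srcImg);
    waitKey(0);
    return 0;
}

From there, raising param2 trims false positives and lowering it recovers missed circles.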
I am currently trying to black out everything outside a circle.
I am drawing the circle using the following lines of code:
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound rounds a floating-point value to the nearest integer
int radius = cvRound(circles[i][2]); // [i][0] = x, [i][1] = y, [i][2] = radius
circle( image, center, 3, cv::Scalar(0,255,0), -1, 8, 0 ); // draw a small dot at the circle center; the next line draws the actual circle
circle( image, center, radius, cv::Scalar(0,0,255), 3, 8, 0 ); // circle(img, center, radius, color, thickness=1, lineType=8, shift=0)
What is the best approach to painting everything outside the circle black, given that I have the radius and the center of my circle?
Does OpenCV provide an easy mechanism for doing this, or should I iterate through all the pixels of my image and color them black or not depending on their position?
Thanks to Abid for the hint, I ended up with this approach. Everything works fine:
cv::Mat src = someMethodThatReturnsSrcImage(); // src image
cv::Mat maskedImage; // stores the masked image
std::vector<cv::Vec3f> circles = someMethodThatReturnsCircles(src);

cv::Mat mask(src.size(), src.type()); // create a Mat with the same dimensions as src
mask.setTo(cv::Scalar(0,0,0));        // fill it with black

// Add all found circles to the mask
for( size_t i = 0; i < circles.size(); i++ ) // iterate through all detected circles
{
    cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); // cvRound rounds a floating-point value to the nearest integer
    int radius = cvRound(circles[i][2]); // [i][0] = x, [i][1] = y, [i][2] = radius
    cv::circle( mask, center, radius, cv::Scalar(255,255,255), -1, 8, 0 ); // thickness -1 draws a filled white circle
}

src.copyTo(maskedImage, mask); // copy src into maskedImage only where the mask is non-zero
You can make the background any color you want (note that OpenCV images are BGR, so the Scalar channel order is blue, green, red):
image = cv::Scalar(blue_value, green_value, red_value);
and then draw your circles.
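For example, a minimal self-contained sketch; the canvas size, colors, and circle geometry here are arbitrary placeholders:

#include <opencv2/opencv.hpp>

int main()
{
    // A blank 400x400 canvas; Scalar is in BGR order, so this fills with blue.
    cv::Mat image(400, 400, CV_8UC3, cv::Scalar(255, 0, 0));

    // Example circle; in practice the center and radius come from HoughCircles.
    cv::circle(image, cv::Point(200, 200), 100, cv::Scalar(0, 0, 255), 3, 8, 0);

    cv::imshow("colored background", image);
    cv::waitKey(0);
    return 0;
}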
I think the comment just under your question is the best solution.
I made a modified version of your code for a 5 MP image from a fisheye camera. This image also needs everything outside a circle made black.
#include <Windows.h>
#include <Vfw.h>
#include <string>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace std;
using namespace cv;

int _tmain(int argc, _TCHAR* argv[])
{
    cv::Mat im_source_non_square = cv::imread("D:/FishLib/sample_02.bmp", CV_LOAD_IMAGE_COLOR);
    cv::namedWindow("Image", CV_WINDOW_FREERATIO);
    cv::imshow("Image", im_source_non_square);

    int m_nCenterX = 1280;
    int m_nCenterY = 960;
    int m_nRadius = 916;

    Mat im_mask = im_source_non_square.clone();
    im_mask.setTo(cv::Scalar(0,0,0));
    // A negative thickness draws a filled white circle into the mask
    circle( im_mask, cv::Point(m_nCenterX, m_nCenterY), m_nRadius, cv::Scalar(255,255,255), -3, 8, 0 );
    cv::namedWindow("Mask image", CV_WINDOW_FREERATIO);
    cv::imshow("Mask image", im_mask);

    Mat im_source_circle;
    cv::bitwise_and(im_source_non_square, im_mask, im_source_circle);
    cv::namedWindow("Combined image", CV_WINDOW_FREERATIO);
    cv::imshow("Combined image", im_source_circle);
    cv::waitKey(0);
    return 0;
}
I just tried your code snippet and it works.
Also, if you want the background to be a color other than black: according to the OpenCV docs for copyTo, the destination Mat is only (re)initialized if needed, so you can pre-fill it before the copy:
cv::Mat maskedImage(src.size(), src.type()); // stores the masked image
maskedImage.setTo(cv::Scalar(0,0,255));      // set the background color to red (BGR order)
I am doing some detection work using OpenCV, and I need to use the distance transform. However, the distance transform function in OpenCV gives me an image that is exactly the same as the image I use as the source. Does anyone know what I am doing wrong? Here is the portion of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
// the OpenCV image is now in "depthImage"
IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);
//smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);
//do canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);
//invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);
//calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In a nutshell, I grab the image from the Kinect, turn it into a one-channel image, smooth it, run the Canny edge detector, invert the values, and then do the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
I believe the key here is that the two images only look the same. Here is a small program I wrote to show the difference:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    Mat before = imread("qrcode.png", 0);

    Mat dist;
    distanceTransform(before, dist, CV_DIST_L2, 3);

    imshow("before", before);
    imshow("non-normalized", dist);

    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("normalized", dist);

    waitKey();
    return 0;
}
In the non-normalized image, you see this:
which doesn't really look like it changed anything. But the distance steps are very small compared to the overall range of values [0, 255], and since imshow converts the image from 32-bit float to 8 bits for display, we can't see the differences. So let's normalize it...
Now we get this:
The values themselves should be correct, but when displayed you will need to normalize the image to see the difference.
EDIT:
Here is a small 10x10 sample from the upper-left corner of the dist matrix, showing that the values are in fact different:
[10.954346, 10.540054, 10.125763, 9.7114716, 9.2971802, 8.8828888, 8.4685974, 8.054306, 7.6400146, 7.6400146;
10.540054, 9.5850525, 9.1707611, 8.7564697, 8.3421783, 7.927887, 7.5135956, 7.0993042, 6.6850128, 6.6850128;
10.125763, 9.1707611, 8.2157593, 7.8014679, 7.3871765, 6.9728851, 6.5585938, 6.1443024, 5.730011, 5.730011;
9.7114716, 8.7564697, 7.8014679, 6.8464661, 6.4321747, 6.0178833, 5.6035919, 5.1893005, 4.7750092, 4.7750092;
9.2971802, 8.3421783, 7.3871765, 6.4321747, 5.4771729, 5.0628815, 4.6485901, 4.2342987, 3.8200073, 3.8200073;
8.8828888, 7.927887, 6.9728851, 6.0178833, 5.0628815, 4.1078796, 3.6935883, 3.2792969, 2.8650055, 2.8650055;
8.4685974, 7.5135956, 6.5585938, 5.6035919, 4.6485901, 3.6935883, 2.7385864, 2.324295, 1.9100037, 1.9100037;
8.054306, 7.0993042, 6.1443024, 5.1893005, 4.2342987, 3.2792969, 2.324295, 1.3692932, 0.95500183, 0.95500183;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0]
I just figured this one out.
The OpenCV distanceTransform
Calculates the distance to the closest zero pixel for each pixel of
the source image.
so it expects your edge image to be inverted, with the edges as the zero pixels.
All you need to do is invert your edge image:
edges = 255 - edges;
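For reference, here is a minimal sketch of the corrected pipeline in the C++ API; "input.png" is a placeholder path, and the Canny thresholds are simply the 100/200 pair from the question:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // Load as single-channel grayscale.
    Mat gray = imread("input.png", 0);

    Mat edges;
    Canny(gray, edges, 100, 200);

    // Invert: Canny marks edges as 255, but distanceTransform measures the
    // distance to the nearest *zero* pixel, so the edges must become zeros.
    edges = 255 - edges;   // equivalently: bitwise_not(edges, edges)

    Mat dist;
    distanceTransform(edges, dist, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // Normalize only for display; keep the raw float distances for processing.
    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("distance transform", dist);
    waitKey();
    return 0;
}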
You can print these values with the following code, placed before the normalize call (std::setw requires <iomanip>):
for (int x = 0; x < 10; x++)
{
    cout << endl;
    for (int y = 0; y < 10; y++)
        cout << std::setw(10) << dist.at<float>(x, y);
}
Mat formats
Input: CV_8U
Dist: CV_32F
Normalized: CV_8U
normalize(Mat_dist, Mat_norm, 0, 255, NORM_MINMAX, CV_8U);
If you want to visualize the result, you need to normalize to the range 0...255 and not to 0...1, or everything will seem black. Using imshow() on an image scaled to 0...1 will work for display, but it may cause problems in the next processing steps. At least it did in my case.
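A small sketch illustrating both display paths; the synthetic dist matrix here merely stands in for a real distanceTransform result:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    // Synthetic stand-in for a CV_32F distanceTransform result.
    Mat dist(200, 200, CV_32F);
    for (int y = 0; y < dist.rows; y++)
        for (int x = 0; x < dist.cols; x++)
            dist.at<float>(y, x) = (float)(x + y);   // values 0..398

    Mat norm8u, norm32f;
    // Path 1: 8-bit output in 0..255 - safe for imshow and later 8-bit steps.
    normalize(dist, norm8u, 0, 255, NORM_MINMAX, CV_8U);
    // Path 2: float output in 0..1 - imshow multiplies floats by 255 for
    // display, so it looks the same on screen, but downstream code that
    // expects 0..255 will see almost-black values.
    normalize(dist, norm32f, 0.0, 1.0, NORM_MINMAX);

    imshow("8-bit 0..255", norm8u);
    imshow("float 0..1", norm32f);
    waitKey();
    return 0;
}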
I am trying to detect solid circles using OpenCV. The example code from the OpenCV documentation seems unable to detect solid white circles. How would I modify that code to work for solid white circles? Can you explain why it does not work for them?
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace cv;
/** @function main */
int main(int argc, char** argv)
{
    Mat src, src_gray;

    /// Read the image
    src = imread( argv[1], 1 );
    if( !src.data )
    { return -1; }

    /// Convert it to gray
    cvtColor( src, src_gray, CV_BGR2GRAY );

    /// Reduce the noise so we avoid false circle detection
    GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );

    vector<Vec3f> circles;

    /// Apply the Hough Transform to find the circles
    HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0 );

    /// Draw the circles detected
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // circle center
        circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // circle outline
        circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }

    /// Show your results
    namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
    imshow( "Hough Circle Transform Demo", src );

    waitKey(0);
    return 0;
}
I would post images, but I don't have enough stack overflow street cred yet. Sorry!
You should extract the edges first; edges are what the Hough transform actually detects. Add a Canny step before HoughCircles.
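A sketch of that change applied to the tutorial code; the lowered accumulator threshold (param2 = 50 instead of the tutorial's 100) is a guess to compensate for the sparser edge input, not a verified value:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <vector>

using namespace cv;

int main(int argc, char** argv)
{
    Mat src = imread(argv[1], 1);
    if (!src.data) return -1;

    Mat src_gray, edges;
    cvtColor(src, src_gray, CV_BGR2GRAY);
    GaussianBlur(src_gray, src_gray, Size(9, 9), 2, 2);

    // Extract edges explicitly so solid (filled) circles present a clean
    // outline to the Hough stage.
    Canny(src_gray, edges, 100, 200);

    std::vector<Vec3f> circles;
    HoughCircles(edges, circles, CV_HOUGH_GRADIENT, 1, edges.rows / 8, 200, 50, 0, 0);

    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);
    }

    imshow("detected", src);
    waitKey(0);
    return 0;
}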
Today I wrote a program for detecting circles with the Hough Transform, using OpenCV in C.
The program inputs 3 images; each image contains a fixed small circle and a big circle in a variable position. The program recognizes both circles and marks their centres. Now what I want is for the output image to display the (x,y) coordinates of the centre of the bigger circle with respect to the centre of the fixed smaller circle. Here is the code for 'circle.cpp':
#include <cv.h>
#include <highgui.h>
#include <math.h>
int main(int argc, char** argv)
{
    IplImage* img;
    int n = 3;
    char input[21], output[21];

    for(int l = 1; l <= n; l++)
    {
        sprintf(input, "Frame%d.jpg", l); // input images
        if( (img = cvLoadImage(input)) != 0 )
        {
            IplImage* gray = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
            IplImage* canny = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
            IplImage* rgbcanny = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
            CvMemStorage* storage = cvCreateMemStorage(0);

            cvCvtColor( img, gray, CV_BGR2GRAY );
            cvSmooth( gray, gray, CV_GAUSSIAN, 9, 9 ); // smooth it, otherwise a lot of false circles may be detected
            cvCanny( gray, canny, 50, 100, 3 );

            CvSeq* circles = cvHoughCircles( canny, storage, CV_HOUGH_GRADIENT, 2, gray->height/4, 200, 100 );

            cvCvtColor( canny, rgbcanny, CV_GRAY2BGR );
            for( int i = 0; i < circles->total; i++ )
            {
                float* p = (float*)cvGetSeqElem( circles, i );
                cvCircle( rgbcanny, cvPoint(cvRound(p[0]), cvRound(p[1])), 3, CV_RGB(0,255,0), -1, 8, 0 );
                cvCircle( rgbcanny, cvPoint(cvRound(p[0]), cvRound(p[1])), cvRound(p[2]), CV_RGB(255,0,0), 3, 8, 0 );
            }

            cvNamedWindow( "circles", 1 );
            cvShowImage( "circles", rgbcanny );

            // Save the output images
            sprintf(output, "circle%d.jpg", l);
            cvSaveImage(output, rgbcanny);
            cvWaitKey(0);
        }
    }
    return 0;
}
And here are the input and output images:
Please suggest what changes I should make in the code to display the desired (x,y) coordinates. Thanks a lot :)
Before you show the image, use cvPutText to add the desired text. The parameters of this function are self-explanatory. The font should be initialized using cvInitFont.
When you calculate the relative coordinates, keep in mind that in OpenCV, the coordinate system is like this
-----> x
|
|
v
y
just in case you are interested in showing the relative coordinates in a system whose axes point in a different direction.
You should check that the Hough transform has detected exactly two circles. If so, all the data you need is in the circles variable. If (xa, ya) are the coordinates of the bigger circle's centre and (xb, yb) those of the smaller one, the relative coordinates are (xa - xb, ya - yb).
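Putting the two answers together, here is a sketch meant to slot into your loop just before cvShowImage; circles and rgbcanny are the variables from your code, and it assumes exactly two detections, told apart by radius:

// Sketch only: assumes the Hough step found exactly two circles.
if( circles->total == 2 )
{
    float* c0 = (float*)cvGetSeqElem( circles, 0 );
    float* c1 = (float*)cvGetSeqElem( circles, 1 );
    // Element layout is {x, y, radius}; pick the bigger circle by radius.
    float* big   = (c0[2] > c1[2]) ? c0 : c1;
    float* small = (c0[2] > c1[2]) ? c1 : c0;

    // Relative coordinates of the big centre w.r.t. the small (fixed) centre
    int relX = cvRound(big[0] - small[0]);
    int relY = cvRound(big[1] - small[1]);

    char text[32];
    sprintf(text, "(%d, %d)", relX, relY);

    CvFont font;
    cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.6, 0.6, 0, 1, 8);
    cvPutText(rgbcanny, text, cvPoint(cvRound(big[0]) + 5, cvRound(big[1]) - 5),
              &font, CV_RGB(0,255,0));
}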