After processing an image by converting it to grayscale and then blurring it, I'm trying to apply a Hough Circle Transformation with these parameters:
CV_HOUGH_GRADIENT
dp = 1
min_dist = 1
param_1 = 70
param_2 = 100
min_radius = 0
max_radius = 0
Here is one of the many images I've tried:
http://i.stack.imgur.com/JGRiM.jpg
But the algorithm fails to recognise the ball even with relaxed parameters.
(When I try it with an image of a circle created in GIMP it works fine)
I agree with krzych.
I had it working effortlessly with:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img, img2;
    std::vector<cv::Vec3f> circles;
    img = cv::imread("JGRiM.jpg", 1);
    // smooth while keeping edges sharp, which helps the Hough gradient method
    cv::bilateralFilter(img, img2, 15, 1000, 1000);
    cv::cvtColor(img2, img2, CV_BGR2GRAY);
    cv::HoughCircles(img2, circles, CV_HOUGH_GRADIENT, 1, 300, 50, 10);
    if (!circles.empty())  // guard against no detections before indexing
        cv::circle(img2, cv::Point(circles[0][0], circles[0][1]), circles[0][2], cv::Scalar(126), 2);
    cv::imshow("test", img2);
    cv::waitKey(0);
    cv::imwrite("test.jpg", img2);
    return 0;
}
Good luck :)
Check the Canny output of your images first. From this Canny output it is possible to detect the ball with a very small param_2, along with many false circles on the image. (For example, I used param_2 = 10, and with the ball centre specified to eliminate the false circles it works.)
Try to help the Hough Circle Transform. The task is to segment the ball from the other elements. In your image the problem is the line; you could try to segment the ball by colour, for example, as in the sketch below.
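As a rough illustration of the colour-segmentation idea, here is a minimal OpenCV-Python sketch; the HSV range is a hypothetical placeholder and would need tuning to the actual ball colour:

import cv2
import numpy as np

img = cv2.imread("JGRiM.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# hypothetical HSV range -- tune lower/upper to the ball's real colour
lower = np.array([20, 80, 80])
upper = np.array([35, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.medianBlur(mask, 5)  # suppress speckle before the transform

# run the circle transform on the segmented mask instead of the full image
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=300,
                           param1=50, param2=10)
if circles is not None:
    x, y, r = circles[0][0]
    cv2.circle(img, (int(x), int(y)), int(r), (0, 0, 255), 2)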
Actually, I am a noob at computer vision. Sorry in advance.
I want to detect the edges of a tram lane. Mostly the code works well, but sometimes it cannot even draw a line, and I don't know why.
The cropped_Image function just crops the polygonal area of the current frame.
The display_lines function draws lines whose absolute angle is between 30 and 90 degrees; it uses cv2.line to draw them.
Here is the code:
import cv2
import numpy as np

# cap, region_of_interest, display_lines and HoughBundler are defined elsewhere
_, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # convert the image to gray so it has one channel
blur = cv2.GaussianBlur(gray, (1, 1), 0)  # to reduce noise in the grayscale image
canny = cv2.Canny(blur, 150, 200, apertureSize=3)
cropped_image = region_of_interest(canny)  # simply, it crops the bottom of the image
lines = cv2.HoughLinesP(cropped_image, 1, np.pi / 180, 100, np.array([]),
                        minLineLength=5, maxLineGap=5)
hough_bundler = HoughBundler()
lines_merged = hough_bundler.process_lines(lines, cropped_image)
line_image = display_lines(frame, lines_merged)
combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
cv2.imshow('test', combo_image)
To see it: HoughBundler
Expected: expected img
Canny: canny img of wrong result
Result: wrong result
First of all, I'd start by fixing the cv2.GaussianBlur() line. You've used a 1x1 kernel, which doesn't do anything; you need at least a 3x3 kernel. Look into how convolutions are applied if you want to know why a 1x1 filter has no effect.
Secondly, I would play with the Canny aperture size to suit my needs. Also, after edge detection you can use cv2.dilate() with a 3x3 or 5x5 kernel so that you don't get broken lines in the image. A sketch of both follows below.
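A minimal sketch of both suggestions, assuming frame is the current capture as in the question; the kernel sizes and thresholds are illustrative, not tuned:

import cv2
import numpy as np

gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)

# a 3x3 (or larger, odd-sized) kernel actually smooths; (1, 1) is a no-op
blur = cv2.GaussianBlur(gray, (3, 3), 0)
canny = cv2.Canny(blur, 150, 200, apertureSize=3)

# grow the white edge pixels so small gaps in the detected lines get bridged
kernel = np.ones((3, 3), np.uint8)
canny = cv2.dilate(canny, kernel, iterations=1)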
I am completely new to OpenCV, but while googling I came to know about object detection and edge detection. Still, I am not able to figure out the proper way to detect an image inside a screenshot.
For example, if I pass a source image with a photo inside it, like the one below, I need to extract that photo from the source image.
EDIT
After following @Amitay Nachmani's answer, I tried to implement the following code up to step 4.
-(UIImage*)processImage:(UIImage*)sourceImage{
    cv::Mat processMat;
    UIImageToMat(sourceImage, processMat);

    cv::Mat grayImage;
    cvtColor(processMat, grayImage, CV_BGR2GRAY);

    cv::Mat cannyImage;
    cv::Canny(grayImage, cannyImage, 0, 50);

    std::vector<cv::Vec2f> lines;
    cv::HoughLines(cannyImage, lines, 1, CV_PI/180, 300);

    size_t sizeOfLine = lines.size();
    for(size_t i = 0; i < sizeOfLine; i++){
        float rho = lines[i][0], theta = lines[i][1];
        if(rho == 0){
            cv::Point pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a*rho, y0 = b*rho;
            pt1.x = cvRound(x0 + 1000*(-b));
            pt1.y = cvRound(y0 + 1000*(a));
            pt2.x = cvRound(x0 - 1000*(-b));
            pt2.y = cvRound(y0 - 1000*(a));
            cv::line(cannyImage, pt1, pt2, cv::Scalar(255,0,0), 2);
        }
    }
    UIImage *result = MatToUIImage(cannyImage);
    return result;
}
The above code generated the following image.
EDIT 2
I revised the code by replacing the condition
if(rho==0) with if(theta==0)
This resulted in the image below.
But still, what should I do next? I am a bit confused about the next steps.
I am not completely sure, but did you try the template matching technique?
If you are using C++ with OpenCV:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
I hope this will be helpful for finding the cross-correlation between the template (your source image) and your test image (the screenshot).
In the link above you will find a complete example of how to apply and draw template matching; a short sketch is also included below.
Hope this helps.
Cheers.
Unai.
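For reference, a minimal OpenCV-Python sketch of the template matching idea from that tutorial; the file names are placeholders:

import cv2

screenshot = cv2.imread("screenshot.png")    # image to search in (placeholder)
template = cv2.imread("photo_template.png")  # image to search for (placeholder)
th, tw = template.shape[:2]

# normalized cross-correlation; the best match is the maximum response
result = cv2.matchTemplate(screenshot, template, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# draw the matched region on the screenshot
top_left = max_loc
bottom_right = (top_left[0] + tw, top_left[1] + th)
cv2.rectangle(screenshot, top_left, bottom_right, (0, 255, 0), 2)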
I completely agree with the post below; this is the best solution. Unfortunately, I guess that @Mrug's development will be targeted at smartphone devices, and Canny edge detection and the Hough line transform are computationally very expensive on those platforms.
Maybe you can use Sobel derivatives, which are designed to compute horizontal and vertical derivatives.
These links may help you:
Sobel derivatives
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html
Canny edge detectors
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
Hough transform:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html
If you know that the image is always between the second horizontal line and the third, I would do the following (a code sketch follows after the list):
1. Convert to grayscale (OpenCV cvtColor()).
2. Run Canny edge detection (OpenCV Canny()).
3. Find lines using the Hough transform (OpenCV HoughLines(), http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html).
4. Take only the 4 most prominent horizontal lines (to take the horizontal ones you need theta = 90 degrees).
5. Sort the lines you find according to the y coordinate.
6. Crop the image between the second and third line.
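A rough OpenCV-Python sketch of those steps; the file name, Hough threshold and theta tolerance are illustrative:

import cv2
import numpy as np

img = cv2.imread("screenshot.png")                   # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # step 1
edges = cv2.Canny(gray, 0, 50)                       # step 2
lines = cv2.HoughLines(edges, 1, np.pi / 180, 300)   # step 3

# step 4: keep the 4 strongest near-horizontal lines (theta close to 90 degrees);
# HoughLines returns lines sorted by accumulator votes
horizontal = [l[0] for l in lines
              if abs(l[0][1] - np.pi / 2) < np.deg2rad(2)][:4]

# step 5: for horizontal lines, rho is approximately the y coordinate
horizontal.sort(key=lambda l: l[0])
ys = [int(l[0]) for l in horizontal]

# step 6: crop between the second and third line
cropped = img[ys[1]:ys[2], :]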
Ok here's my scenario. I built a paper detection app that finds a piece of paper in an image. It doesn't work 100% of the time, given white balance and focusing changes, so if we found a sheet of paper in frame 1, we want to show a border around it in frame 2 (in reality there can be a gap of many frames), even if we didn't find it in frame 2. In order to do so, we keep the old image and the old 4-point contour from frame 1.
Then, in frame N, where we did not find a convex contour, we want to transform the old contour using an affine transformation that we compute with estimateRigidTransform.
I am 100% positive that my math is slightly off, but I'm not sure where:
// new image = 3200 x 6400
// old image = 3200 x 6400
// old contour = contour found in old image, same scale
vector<cv::Point> transformContourWithNewImage(Mat &newImage, Mat &oldImage, vector<cv::Point> oldContour) {
    CGFloat ratio = newImage.size().height / 500.0;
    cv::Size outputSize = cv::Size(newImage.size().width / ratio, 500);

    // shrink images down so that computations are cheaper
    Mat image_copy;
    resize(newImage, image_copy, outputSize);
    Mat oldImage_copy;
    resize(oldImage, oldImage_copy, outputSize);

    cv::Mat transform = estimateRigidTransform(image_copy, oldImage_copy, false);

    vector<cv::Point> transformedPoints;
    cv::transform(oldContour, transformedPoints, transform);
    return transformedPoints;
}
I think that I need to scale the transform, since it was computed on a smaller image than the one the contour vector refers to. I also get a crash saying my transform Mat has the wrong number of rows/cols.
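A minimal sketch of the scaling idea the question describes, in OpenCV-Python for brevity: if the 2x3 affine was estimated on images shrunk by ratio, its linear part carries over unchanged and only the translation column needs rescaling (the variable names here are hypothetical):

import cv2
import numpy as np

def scale_transform_to_full_res(transform, ratio):
    # If x_small = x_full / ratio and x'_small = A @ x_small + t, then
    # x'_full = ratio * x'_small = A @ x_full + ratio * t: the linear part
    # is unchanged, only the translation column scales.
    full = transform.copy()
    full[:, 2] *= ratio
    return full

# hypothetical usage with the shrunk image pair from the question:
# transform = cv2.estimateRigidTransform(small_new, small_old, False)
# full_transform = scale_transform_to_full_res(transform, ratio)
# cv2.transform also wants float points, which may explain the crash:
# pts = old_contour.reshape(-1, 1, 2).astype(np.float32)
# warped = cv2.transform(pts, full_transform)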
I want to find the corner position in a blurred image with a corner inside it, like the following example:
I can make sure that only one corner is inside the image, and I assume that
the corner is part of a black and white chessboard.
How can I detect the cross position with OpenCV?
Thanks!
Usually you can determine the corner using the gradient:
Gx = im[i][j+1] - im[i][j-1]; Gy = im[i+1][j] - im[i-1][j];
G^2 = Gx^2 + Gy^2;
theta = atan2(Gy, Gx);
As your image is blurred, you should compute the gradient at a larger scale:
Gx = im[i][j+delta] - im[i][j-delta]; Gy = im[i+delta][j] - im[i-delta][j];
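A small NumPy sketch of this large-scale gradient, assuming im is the blurred image loaded as a grayscale float array:

import cv2
import numpy as np

im = cv2.imread("corner.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
delta = 50  # large offset to cope with the blur

# central differences at offset delta, expressed as whole-array slices
Gx = im[:, 2 * delta:] - im[:, :-2 * delta]
Gy = im[2 * delta:, :] - im[:-2 * delta, :]

# crop both to a common region before combining
Gx = Gx[delta:-delta, :]
Gy = Gy[:, delta:-delta]

norm = np.sqrt(Gx**2 + Gy**2)   # gradient norm
theta = np.arctan2(Gy, Gx)      # gradient direction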
Here is the result that I obtained for delta = 50:
The gradient norm (multiplied by 20)
gradient norm http://imageshack.us/scaled/thumb/822/xdpp.jpg
The gradient direction:
gradient direction http://imageshack.us/scaled/thumb/844/h6zp.jpg
Another solution:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("c:/data/corner.jpg");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, CV_THRESH_BINARY);

    // sample the white pixels on a coarse grid
    int step = 15;
    std::vector<Point> points;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));

    // fit a rotated rectangle
    RotatedRect box = minAreaRect(Mat(points));
    //circle(img, box.center, 2, Scalar(255,0,0), -1);

    // invert it, fit again and take the average of the centers
    // (may not be needed if a 'good' threshold is found)
    Point p1 = Point(box.center.x, box.center.y);
    points.clear();
    gray = 255 - gray;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    box = minAreaRect(Mat(points));
    Point p2 = Point(box.center.x, box.center.y);
    //circle(img, p2, 2, Scalar(0,255,0), -1);

    circle(img, Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2), 3, Scalar(0,0,255), -1);
    imshow("img", img);
    waitKey();
    return 0;
}
Rather than working right away at a ridiculously large scale, as suggested by others, I recommend downsizing first (which has the effect of deblurring), doing one pass of Harris to find the corner, then upscaling its position and running cornerSubPix at full resolution with a large window (large enough to encompass the obvious saddle point of the intensity).
In this way you get the best of both worlds: fast detection to initialize the refinement, and accurate refinement given the original imagery. A sketch follows below.
See also this other relevant answer
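A hedged OpenCV-Python sketch of that pipeline; the scale factor and refinement window are illustrative:

import cv2
import numpy as np

img = cv2.imread("corner.jpg", cv2.IMREAD_GRAYSCALE)
scale = 8  # illustrative downsizing factor; doubles as a deblur

small = cv2.resize(img, None, fx=1.0 / scale, fy=1.0 / scale,
                   interpolation=cv2.INTER_AREA)

# one pass of Harris-based detection on the small image
corners = cv2.goodFeaturesToTrack(small, maxCorners=1, qualityLevel=0.01,
                                  minDistance=10, useHarrisDetector=True)

# upscale the detected position, then refine it at full resolution
pt = (corners[0, 0] * scale).reshape(1, 1, 2).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.01)
cv2.cornerSubPix(img, pt, winSize=(51, 51), zeroZone=(-1, -1),
                 criteria=criteria)
print("refined corner:", pt[0, 0])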
I have a problem with filling the white holes inside black coins, so that I end up with a 0-255 binary image containing only filled black coins. I used a median filter to accomplish it, but in that case the connection bridges between coins grow and it becomes impossible to separate them after several erosions... So I need a simple floodFill-like method in OpenCV.
Here is my image with holes:
EDIT: A floodFill-like function must fill the holes in big components without prompting for X, Y coordinates as a seed...
EDIT: I tried to use the cvDrawContours function, but it doesn't fill the contours inside bigger ones.
Here is my code:
CvMemStorage mem = cvCreateMemStorage(0);
CvSeq contours = new CvSeq();
CvSeq ptr = new CvSeq();
int sizeofCvContour = Loader.sizeof(CvContour.class);

cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);
int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
System.out.println("The num of contours: " + numOfContours); // prints 87, ok

// draw each contour filled with a random colour
Random rand = new Random();
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
    CvScalar color = CV_RGB(randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
    cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
}

CanvasFrame canvas6 = new CanvasFrame("drawContours");
canvas6.showImage(gray);
Result: (you can see black holes inside each coin)
There are two methods to do this:
1) Contour Filling:
First, invert the image, find the contours, fill each of them with white, and invert back.
des = cv2.bitwise_not(gray)
contour, hier = cv2.findContours(des, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    cv2.drawContours(des, [cnt], 0, 255, -1)  # thickness -1 fills the contour interior
gray = cv2.bitwise_not(des)
Resulting image:
2) Image Opening:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
res = cv2.morphologyEx(gray,cv2.MORPH_OPEN,kernel)
The resulting image is as follows:
You can see there is not much difference between the two results.
NB: gray is the grayscale image; all code is in OpenCV-Python.
Reference: OpenCV Morphological Transformations
A simple dilate and erode would close the gaps fairly well, I imagine. I think maybe this is what you're looking for.
A more robust solution would be to do an edge detect on the whole image, and then a Hough transform for circles. A quick Google search shows there are code samples available in various languages for size-invariant detection of circles using a Hough transform, so hopefully that will give you something to go on.
The benefit of using the Hough transform is that the algorithm will actually give you an estimate of the size and location of every circle, so you can rebuild an ideal image based on that model. It should also be very robust to overlap, especially considering the quality of the input image here (i.e. less worry about false positives, so you can lower the threshold for results). A rough sketch of both steps follows below.
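A rough OpenCV-Python sketch of both steps, the dilate-then-erode closing and the size-invariant circle search; the kernel size and Hough parameters are illustrative:

import cv2
import numpy as np

gray = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# dilate then erode (a morphological closing) to seal the small gaps
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# Hough circle transform: every hit gives an estimated center and radius,
# from which an ideal model of the coins can be rebuilt
circles = cv2.HoughCircles(closed, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=30, minRadius=10, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(closed, (x, y), r, 128, 2)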
You might be looking for the Fillhole transformation, an application of morphological image reconstruction.
This transformation will fill the holes in your coins, though at the cost of also filling the holes between groups of adjacent coins. The Hough-space or opening-based solutions suggested by the other posters will probably give you better high-level recognition results. A sketch of the reconstruction-based fill is below.
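A sketch of the Fillhole transformation via morphological reconstruction, in OpenCV-Python; binary is assumed to be a 0/255 image with white foreground:

import cv2
import numpy as np

def fill_holes(binary):
    # reconstruction mask: the complement of the input
    mask = cv2.bitwise_not(binary)

    # marker: only the border pixels of the mask, i.e. background
    # regions that touch the image edge
    marker = np.zeros_like(mask)
    marker[0, :], marker[-1, :] = mask[0, :], mask[-1, :]
    marker[:, 0], marker[:, -1] = mask[:, 0], mask[:, -1]

    # geodesic dilation until stability: reconstructs the background
    # reachable from the border; whatever is left unreached is a hole
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    while True:
        prev = marker
        marker = cv2.min(cv2.dilate(marker, kernel), mask)
        if (marker == prev).all():
            return cv2.bitwise_not(marker)  # complement = holes filled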
In case someone is looking for the C++ implementation:
std::vector<std::vector<cv::Point> > contours_vector;
cv::findContours(input_image, contours_vector, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

cv::Mat contourImage(input_image.size(), CV_8UC1, cv::Scalar(0));
for (ushort contour_index = 0; contour_index < contours_vector.size(); contour_index++) {
    cv::drawContours(contourImage, contours_vector, contour_index, cv::Scalar(255), -1);
}

cv::imshow("con", contourImage);
cv::waitKey(0);
Try using the cvFindContours() function. You can use it to find connected components; with the right parameters, this function returns a list with the contours of each connected component.
Find the contours which represent holes, then use cvDrawContours() to fill the selected contours with the foreground colour, thereby closing the holes.
I think that if the objects are touching or crowded, there will be problems using the contours and the mathematical morphology opening.
Instead, the following simple solution was found and tested. It works very well, not only for these images but also for any other images.
Here are the steps (optimized), as seen at http://blogs.mathworks.com/steve/2008/08/05/filling-small-holes/ (a code sketch follows after the list):
Let I be the input image:
1. filled_I = floodfill(I) // fill every hole in the image
2. inverted_I = invert(I)
3. holes_I = filled_I AND inverted_I // finds all holes
4. cc_list = connectedcomponents(holes_I) // list of all connected components in holes_I
5. holes_I = remove(cc_list, holes_I, smallholes_threshold_size) // remove all holes from holes_I having size > smallholes_threshold_size
6. out_I = I OR holes_I // fill only the small holes
In short, the algorithm just finds all the holes, removes the big ones, and then writes only the small ones back onto the original image.
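A hedged OpenCV-Python sketch of those steps; the threshold value is illustrative, and binary is assumed to be a 0/255 image with white foreground:

import cv2
import numpy as np

def fill_small_holes(binary, smallholes_threshold_size=100):
    h, w = binary.shape

    # step 1: flood-fill from the border (assumes the top-left pixel is
    # background); afterwards 'flood' contains the foreground plus all
    # border-connected background, i.e. everything except the holes
    flood = binary.copy()
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 255)

    # steps 2-3: the holes are exactly what the flood fill could not reach
    holes = cv2.bitwise_not(flood)

    # steps 4-5: keep only the small holes via connected components
    n, labels, stats, _ = cv2.connectedComponentsWithStats(holes, connectivity=8)
    small = np.zeros_like(holes)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] <= smallholes_threshold_size:
            small[labels == i] = 255

    # step 6: write only the small holes back onto the original image
    return cv2.bitwise_or(binary, small)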
I've been looking around the internet to find a proper imfill function (like the one in MATLAB) working in C with OpenCV. After some research, I finally came up with a solution:
IplImage* imfill(IplImage* src)
{
    CvScalar white = CV_RGB(255, 255, 255);

    IplImage* dst = cvCreateImage(cvGetSize(src), 8, 3);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;

    cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    cvZero(dst);

    for (; contour != 0; contour = contour->h_next)
    {
        cvDrawContours(dst, contour, white, white, 0, CV_FILLED);
    }

    IplImage* bin_imgFilled = cvCreateImage(cvGetSize(src), 8, 1);
    cvInRangeS(dst, white, white, bin_imgFilled);

    return bin_imgFilled;
}
For this: Original Binary Image
Result is: Final Binary Image
The trick is in the parameter settings of the cvDrawContours function:
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
dst = destination image
contour = pointer to the first contour
white = color used to fill the contour
0 = maximal level for drawn contours; if 0, only the contour itself is drawn
CV_FILLED = thickness of the lines the contours are drawn with; if it is negative (for example, CV_FILLED), the contour interiors are drawn
More info in the OpenCV documentation.
There is probably a way to get "dst" directly as a binary image, but I couldn't find how to use the cvDrawContours function with binary values.
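For what it's worth, a hedged sketch of the same idea with the cv2 Python API, where drawing into a single-channel image with the scalar 255 yields the binary result directly (the file name is a placeholder, and the OpenCV 4.x return signature of findContours is assumed):

import cv2
import numpy as np

src = cv2.imread("binary_input.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(src, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# single-channel destination: drawing with 255 gives a binary image directly,
# so no cvInRangeS-style conversion step is needed
dst = np.zeros(src.shape, np.uint8)
cv2.drawContours(dst, contours, -1, 255, cv2.FILLED)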