OpenCV - HoughCircles gives 5-10% inaccuracy

Using EmguCV with C# in MS VS 2015. The goal is to recognize black circles on a white sheet (with some dust). The circle radii are about 80 pixels, and there are no adjoining circles.
Given that IntPtr ptrImg points to a byte[] holding a grayscale image (8 bits per sample, one channel).
Here is the code for circle detection:
Mat mat = new Mat(height, width, DepthType.Cv8U, 1, ptrImg, width); // wrap the raw buffer; step = width for 8-bit, 1-channel
CvInvoke.FastNlMeansDenoising(mat, mat, 20); // filter strength h = 20
return CvInvoke.HoughCircles(mat, HoughType.Gradient, 2.0, 120.0, 90, 60, 60, 100); // dp=2, minDist=120, param1=90, param2=60, radius 60..100
In fact, some circles are detected fine, but others have a glitch: the detected radius differs from the real radius by about 5-7 pixels, and the detected boundary coincides with the real boundary on one side but misses it on the opposite side.
What am I doing wrong? Should I play with dp, param1, and param2? What should I do with them?
P.S. If I remove the denoising and add binarization by threshold instead, the situation is no better.
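A note on those parameters (my illustration, not an answer from the thread): in the C++ API the equivalent call is sketched below. dp is the inverse ratio of the accumulator resolution to the image resolution, so dp = 2 votes into an accumulator at half the image resolution, which by itself can cost a few pixels of radius accuracy; dp = 1 is worth trying. param1 is the upper Canny threshold and param2 is the accumulator vote threshold.
std::vector<cv::Vec3f> circles;          // each entry is (x, y, radius)
// dp = 1: accumulator at full image resolution (dp = 2 halves it)
cv::HoughCircles(mat, circles, cv::HOUGH_GRADIENT,
                 1.0 /*dp*/, 120.0 /*minDist*/,
                 90 /*param1*/, 60 /*param2*/,
                 60 /*minRadius*/, 100 /*maxRadius*/);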

Related

Extract dark contour

I want to extract the darker contours from images with OpenCV. I have tried using a simple threshold such as the one below (C++):
cv::threshold(gray, output, threshold, 255, THRESH_BINARY_INV);
I can iterate the threshold, let's say from 50 to 200, and then get the darker contours in the middle for images with a clear distinction such as this.
Here is the result of the threshold.
But if the contour is near the border, the threshold fails because the pixel values are almost the same, as in this image, for example.
What I want to ask is: is there any technique in OpenCV that can extract a darker contour in the middle of the image even when the contour reaches the border and has almost the same pixel values as the border?
(Updated)
After thresholding, the darker contour in the middle overlaps with the top border.
This makes me fail to extract characters such as the first two "SS".
I think you can simply add an edge-preserving smoothing step to solve this:
#include <opencv2/opencv.hpp>
using namespace cv;

// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// edge-preserving smoothing
Mat filteredImg;
bilateralFilter(inputImg, filteredImg, 5, 60, 20);
// compute Laplacian (16-bit signed, so negative responses survive)
Mat laplaceImg;
Laplacian(filteredImg, laplaceImg, CV_16S, 1);
// threshold (the resulting mask holds 0/1 values)
Mat resImg;
threshold(laplaceImg, resImg, 10, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
This will give you the following result:
Regards,
I think using the Laplacian could partially solve your problem:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// compute Laplacian (16-bit signed, so negative responses survive)
Mat laplaceImg;
Laplacian(inputImg, laplaceImg, CV_16S, 1);
// threshold (the resulting mask holds 0/1 values)
Mat resImg;
threshold(laplaceImg, resImg, 30, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
Using this code you should obtain something like this:
You can then play with the final threshold value and with the Laplacian kernel size.
You will probably have to remove small artifacts after this operation.
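For instance (a sketch of mine, not part of the original answer), small connected components can be filtered out by area; this assumes resImg is the 0/1 mask produced above:
// drop connected components smaller than minArea
Mat bin8u;
resImg.convertTo(bin8u, CV_8U, 255);     // connectedComponents expects 8-bit input
Mat labels, stats, centroids;
int n = connectedComponentsWithStats(bin8u, labels, stats, centroids, 8);
Mat cleaned = Mat::zeros(bin8u.size(), CV_8U);
int minArea = 20;                        // tune to the expected stroke size
for (int i = 1; i < n; ++i)              // label 0 is the background
    if (stats.at<int>(i, CC_STAT_AREA) >= minArea)
        cleaned.setTo(255, labels == i);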
Regards

Can't determine document edges from camera with OpenCV

I need to find the edges of a document that is in the user's hands.
1) Original image from camera:
2) Then I convert the image to grayscale:
3) Then I blur it:
4) Then I find edges in the image using Canny:
5) And apply dilate:
As you can see in the last image, the contour around the map is torn and not fully determined. What is my error, and how can I solve the problem so that the outline of the document is determined completely?
This is the code I use:
final Mat mat = new Mat();
sourceMat.copyTo(mat);
//convert the image to grayscale
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_BGR2GRAY);
//blur to enhance edge detection
Imgproc.GaussianBlur(mat, mat, new Size(5, 5), 0);
if (isClicked) saveImageFromMat(mat, "blur", "blur");
//detect edges using Canny
int thresh = 128;
Imgproc.Canny(mat, mat, thresh, thresh * 2);
//dilate helps to connect nearby line segments
Imgproc.dilate(mat, mat,
        Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3)),
        new Point(-1, -1),      // anchor at the kernel center
        2,                      // iterations
        Core.BORDER_REPLICATE,  // borderType (the literal 1 in the original)
        new Scalar(1));         // borderValue
This answer is based on my above comment. If someone is holding the document, you cannot see the edge that is behind the user's hand. So, any method for detecting the outline of the document must be robust to some missing parts of the edge.
I suggest using a variant of the Hough transform to detect the document. The Wikipedia article about the Hough transform makes it sound quite scary (as Wikipedia often does with mathematical subjects), but don't be discouraged; it is actually not too difficult to understand or implement.
The original Hough transform detected straight lines in images. As explained in this OpenCV tutorial, any straight line in an image can be defined by 2 parameters: an angle θ and a distance r of the line from the origin. So you quantize these 2 parameters, and create a 2D array with one cell for every possible line that could be present in your image. (The finer the quantization you use, the larger the array you will need, but the more accurate the position of the found lines will be.) Initialize the array to zeros. Then, for every pixel that is part of an edge detected by Canny, you determine every line (θ,r) that the pixel could be part of, and increment the corresponding bin. After processing all pixels, you will have, for each bin, a count of how many pixels were detected on the line corresponding to that bin. Counts which are high enough probably represent real lines in the image, even if parts of the line are missing. So you just scan through the bins to find bins which exceed the threshold.
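As a concrete illustration (mine, not the answerer's), OpenCV's built-in line detector implements exactly this voting scheme; fed the Canny edge image, it returns the (r, θ) pairs whose bins collected enough votes:
std::vector<cv::Vec2f> lines;            // each entry is (r, theta)
// 1 px distance resolution, 1 degree angular resolution,
// keep only bins with at least 150 votes
cv::HoughLines(edges, lines, 1, CV_PI / 180, 150);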
OpenCV contains Hough detectors for straight lines and circles, but not for rectangles. You could either use the line detector and check for 4 lines that form the edges of your document, or you could write your own Hough detector for rectangles, perhaps using the paper Jung 2004 for inspiration. Rectangles have at least 5 degrees of freedom (2D position, scale, aspect ratio, and rotation angle), and the memory requirement for a 5D array obviously goes up pretty fast. But since the range of each parameter is limited (i.e., the document's aspect ratio is known, and you can assume the document will be well centered and not rotated much), it is probably feasible.

Simulate cataract vision in OpenCV

I'm working on a project to design a low vision aid. What image processing operation can simulate cataract vision for a normal eye using OpenCV?
It would be useful if you described the symptoms of the cataract and what happens to the retinal images, since not all the people here are experts in computer vision and eye diseases at the same time. If a retinal image gets out of focus and gets a yellow tint, you can use the OpenCV blur() function and also boost the RGB values toward yellow a bit. If there are different degrees of blur across the visual field, I recommend using integral images; see this post.
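A minimal sketch of that blur-plus-tint suggestion (my code; retina, the kernel size, and the 0.8 blue gain are assumptions to be tuned):
// defocus, then tint yellow by damping the blue channel (BGR order)
cv::Mat blurred, tinted;
cv::blur(retina, blurred, cv::Size(15, 15));
cv::transform(blurred, tinted, cv::Matx33f(0.8f, 0,    0,
                                           0,    1.0f, 0,
                                           0,    0,    1.0f));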
I guess there are at least three operations to do: add noise, blur, whiten:
Rect rect2(0, 0, w/2, h);                 // ROI in the source image I4
Mat src = I4(rect2).clone();
Mat Mnoise(h, w/2, CV_8UC3);
randn(Mnoise, 100, 50);                   // Gaussian noise: mean 100, stddev 50
src = src*0.5 + Mnoise*0.5;               // add noise
Mat Mblur;
blur(src, Mblur, Size(12, 12));           // blur
Rect rect3(w, 0, w/2, h);                 // a second region of interest
Mat Mblurw = Mblur*0.8 + Scalar(255, 255, 255)*0.2; // whiten

Image processing techniques to stand out white tape on the floor with opencv

I have the following image:
And I'd like to obtain a thresholded image where only the tape is white and the whole background is black. So far I've tried this:
Mat image = Highgui.imread("C:/bezier/0.JPG");
Mat byn = new Mat();
Imgproc.cvtColor(image, byn, Imgproc.COLOR_BGR2GRAY);
Mat thresh = new Mat();
// apply filters
Imgproc.blur(byn, byn, new Size(2, 2));
Imgproc.threshold(byn, thresh, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Imgproc.erode(thresh, thresh, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4)));
But I obtain this image, which is far from what I want:
The tape will always be the same color (white) and width (about 2 cm). Any idea? Thanks.
Let's see what you know:
The tape has lower contrast
The tape is lighter than the background
If you know the scale of the picture, you can run adaptive thresholds on two levels. Let's say that the width of the tape is 100 pixels:
Reject a pixel whose brightness is outside +/- x of the average brightness in the 50x50 (maybe smaller, but not larger) window surrounding it, AND
Reject a pixel whose brightness is smaller than y + the average brightness in the 100x100 (maybe larger, but not smaller) window surrounding it.
You should also experiment a bit, trying both mean and median as definitions of "average" for each threshold.
From there on you should have a much better-defined image, and you can remove all but the largest contour (presumably the trail).
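A rough sketch of that two-window test in C++ (my illustration, not the answerer's code; x, y, and both window sizes are the tunables described above, and the mean stands in for "average"):
cv::Mat gray;                                              // grayscale input
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
cv::Mat grayF, mean50, mean100;
gray.convertTo(grayF, CV_32F);
cv::boxFilter(grayF, mean50, CV_32F, cv::Size(50, 50));    // local mean, small window
cv::boxFilter(grayF, mean100, CV_32F, cv::Size(100, 100)); // local mean, large window
float x = 10.0f, y = 20.0f;                                // tunable margins
cv::Mat diff = cv::abs(grayF - mean50);
cv::Mat rule1 = diff <= x;                                 // within +/- x of the 50x50 mean
cv::Mat upper = mean100 + y;
cv::Mat rule2 = grayF >= upper;                            // at least y above the 100x100 mean
cv::Mat tapeMask = rule1 & rule2;                          // pixels passing both tests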
I think you are not taking advantage of the fact that the tape is white (and the floor is a shade of brown).
Rather than converting to grayscale with cvtColor(src, dst, Imgproc.COLOR_BGR2GRAY), try using a custom operation that penalizes saturation... maybe something like converting to HSV and letting G = V * (1-S).
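A minimal sketch of that G = V * (1-S) map (my code, written in C++ for brevity; image is the BGR input):
cv::Mat hsv;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> ch;
cv::split(hsv, ch);                      // ch[0] = H, ch[1] = S, ch[2] = V
cv::Mat S, V;
ch[1].convertTo(S, CV_32F, 1.0 / 255.0); // scale S and V to [0, 1]
ch[2].convertTo(V, CV_32F, 1.0 / 255.0);
cv::Mat oneMinusS = 1.0 - S;
cv::Mat G = V.mul(oneMinusS);            // bright AND unsaturated -> high value
G.convertTo(G, CV_8U, 255.0);            // back to 8-bit for thresholding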

How to detect when an image is out of focus?

At times our optical inspection system gets out of focus, which results in nonsensical measurements. I've been tasked with developing an 'out of focus' detector, which will be used to drive the Z axis of the camera system. The images available to me are BMP.
I'm looking for approaches and algorithms to investigate. For instance, should I be isolating features and measuring conformance or could edge detection be used?
This is the in-focus image:
And this is the out-of-focus image:
The key is that the in-focus image has much stronger gradients and sharper features.
So what I suggest is to apply a Laplacian-of-Gaussian (LoG) filter and then look at the distribution of the pixel values in the result. The plot below shows this idea applied to your images, where black refers to the out-of-focus image and red to the in-focus one. The in-focus image has many more high values (because it has more sharp gradients).
When you have the histograms, you can distinguish one from the other by comparing, e.g., the 90th percentiles of the distributions (which are sensitive to the tails).
For the out-of-focus image it is 7, and for the in-focus image it is 13.6 (roughly twice as large).
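A minimal sketch of this measure (my code; the answer itself only showed the plot), using a Gaussian blur followed by a Laplacian as the LoG filter:
cv::Mat gray = cv::imread("frame.bmp", cv::IMREAD_GRAYSCALE); // hypothetical filename
cv::Mat smooth, logResp;
cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 0);
cv::Laplacian(smooth, logResp, CV_32F, 3);
logResp = cv::abs(logResp);
cv::Mat flat = logResp.reshape(1, 1).clone();   // flatten to a single row
cv::sort(flat, flat, cv::SORT_EVERY_ROW | cv::SORT_ASCENDING);
float p90 = flat.at<float>(0, (int)(0.9 * (flat.cols - 1))); // 90th percentile
// larger p90 -> sharper image (13.6 vs 7 in the example above)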
A quick and dirty version of the contrast algorithm is to sum the differences between adjacent pixels: a higher sum means more contrast.
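In OpenCV terms that could look like this (my sketch; gray is the 8-bit grayscale input):
cv::Mat d;
cv::absdiff(gray(cv::Range::all(), cv::Range(1, gray.cols)),
            gray(cv::Range::all(), cv::Range(0, gray.cols - 1)), d);
double contrast = cv::sum(d)[0];        // higher sum -> more contrast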
This is what I do in OpenCV to detect focus quality:
Mat grad;
int scale = 1;
int delta = 0;
int ddepth = CV_8U;
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel(matFromSensor, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
/// Gradient Y
Sobel(matFromSensor, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
cv::Scalar mu, sigma;
cv::meanStdDev(grad, /* mean */ mu, /* stddev */ sigma);
focusMeasure = mu.val[0] * mu.val[0]; // squared mean gradient magnitude
