At times our optical inspection system goes out of focus, which results in nonsensical measurements. I've been tasked with developing an 'out of focus' detector that will be used to drive the Z axis of the camera system. The images available to me are BMPs.
I'm looking for approaches and algorithms to investigate. For instance, should I be isolating features and measuring conformance, or could edge detection be used?
This is the in focus image:
And this is the out of focus image:
The key is that the in-focus image has much stronger gradients and sharper features.
So what I suggest is to apply a Gaussian Laplace filter and then look at the distribution of pixel values in the result. The plot below shows this idea applied to your images, where black refers to the out-of-focus image and red to the one in focus. The in-focus one has many more high values (because the image has more sharp gradients).
Once you have the histograms, you can distinguish one from the other by comparing, e.g., the 90th percentiles of the distributions (which are sensitive to the tails).
For the out-of-focus image the 90th percentile is 7, and for the in-focus image it is 13.6 (roughly a factor of two).
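The idea above can be sketched in a few lines. This is a stand-in, not the original experiment: it uses a plain 3x3 Laplacian instead of a Gaussian Laplace filter, tiny synthetic images instead of your BMPs, and a nearest-rank percentile:

```python
def laplacian_abs(img):
    """Absolute 3x3 Laplacian response at each interior pixel of a 2-D list."""
    h, w = len(img), len(img[0])
    return [abs(img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                - 4 * img[y][x])
            for y in range(1, h - 1) for x in range(1, w - 1)]

def percentile(vals, p):
    """Nearest-rank percentile of a list of numbers."""
    s = sorted(vals)
    return s[min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1))))]

# A hard vertical step edge ("in focus") vs. the same edge spread out ("blurred").
sharp   = [[0] * 5 + [255] * 5 for _ in range(10)]
blurred = [[0, 0, 32, 64, 96, 159, 191, 223, 255, 255] for _ in range(10)]

p_sharp = percentile(laplacian_abs(sharp), 90)
p_blur  = percentile(laplacian_abs(blurred), 90)
print(p_sharp, p_blur)  # → 255 32: the sharp image has the much heavier tail
```

A Z-axis driver would then compare this percentile across camera positions and move toward the maximum.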
A quick and dirty version of the contrast algorithm is to sum the absolute differences between adjacent pixels: a higher sum means more contrast.
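A minimal sketch of that quick-and-dirty measure, on toy 2-D lists (the images and values are illustrative):

```python
def contrast_score(img):
    """Sum of absolute differences between horizontally and vertically
    adjacent pixels; a higher score means more contrast / sharper focus."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(img[y][x] - img[y][x + 1])
            if y + 1 < h:
                total += abs(img[y][x] - img[y + 1][x])
    return total

checker = [[0, 255, 0, 255], [255, 0, 255, 0]]  # maximal local contrast
flat    = [[128] * 4, [128] * 4]                # no contrast at all
print(contrast_score(checker), contrast_score(flat))  # → 2550 0
```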
That's what I do in OpenCV to detect the focus quality:
Mat grad;
int scale = 1;
int delta = 0;
int ddepth = CV_16S; // signed depth so negative gradients aren't clipped to zero
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel(matFromSensor, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
/// Gradient Y
Sobel(matFromSensor, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
/// Back to 8-bit absolute values, then average the two directions
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
/// Squared mean gradient magnitude as the focus measure
cv::Scalar mu, sigma;
cv::meanStdDev(grad, /* mean */ mu, /* stdev */ sigma);
focusMeasure = mu.val[0] * mu.val[0];
I want to extract the darker contours from images with OpenCV. I have tried using a simple threshold such as the one below (C++):
cv::threshold(gray, output, threshold, 255, THRESH_BINARY_INV);
I can iterate the threshold, let's say from 50 to 200,
and then get the darker contours in the middle
for images with a clear distinction such as this
here is the result of the threshold
But if the contours are near the border, the threshold fails because the pixel values are almost the same,
for example in this image.
What I want to ask is: is there any technique in OpenCV that can extract a darker contour in the middle of the image even when the contour reaches the border and has almost the same pixel values as the border?
(Updated)
After thresholding, the darker contour in the middle overlaps with the top border.
This makes me fail to extract characters such as the first two "SS".
I think you can simply add an edge-preserving smoothing step to solve this:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
Mat filteredImg;
bilateralFilter(inputImg, filteredImg, 5, 60, 20);
// compute laplacian
Mat laplaceImg;
Laplacian(filteredImg, laplaceImg, CV_16S, 1);
// threshold
Mat resImg;
threshold(laplaceImg, resImg, 10, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
This will give you the following result:
Regards,
I think using a Laplacian could partially solve your problem:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// compute laplacian
Mat laplaceImg;
Laplacian(inputImg, laplaceImg, CV_16S, 1);
Mat resImg;
threshold(laplaceImg, resImg, 30, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
Using this code you should obtain something like the following result:
You can then play with the final threshold value and with the Laplacian kernel size.
You will probably have to remove small artifacts after this operation.
Regards
Using EmguCV with C#, MS VS 2015. The goal is to recognize black circles on a white sheet (with some dust). The circle radii are about 80 pixels, and there are no adjoining circles.
Given that IntPtr ptrImg contains a byte[] with a grayscale image (8 bits per sample, one channel),
here's the code for circle detection:
Mat mat = new Mat(height, width, DepthType.Cv8U, 1, ptrImg, width);
CvInvoke.FastNlMeansDenoising(mat, mat, 20);
return CvInvoke.HoughCircles(mat, HoughType.Gradient, 2.0, 120.0, 90, 60, 60, 100);
In fact, some of the circles are detected OK, but others have a glitch: the detected radius differs from the real radius by about 5-7 pixels, and the detected boundary coincides with the real boundary on one side while missing on the opposite side.
What am I doing wrong? Maybe I have to play with dp, param1 and param2? What should I do with them?
P.S. If I remove the denoising but add binarization by threshold, the situation isn't any better:
This picture:
shows two photos captured by a camera from black photographic paper. The cross is marked by a laser. The left one shows a 9-pixel noise pattern,
which gets in the way of the auto-focus process.
Background:
My boss asked me to improve the auto-focus algorithm of a camera to a higher precision (say from 1 mm to 0.01 mm). This auto-focus process is a preparation stage for laser marking.
The original algorithm uses Sobel to calculate sharpness and compares the sharpness of photos taken at consecutive camera distances to see which one corresponds to the distance nearest to the focal length.
Sobel(im_gray, grad_x, CV_16S, 1, 0); convertScaleAbs(grad_x, abs_grad_x);
Sobel(im_gray, grad_y, CV_16S, 0, 1); convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
for (int i = 0; i < grad.rows; i++)
    for (int j = 0; j < grad.cols; j++)
        sharpness += grad.at<unsigned char>(i, j);
This algorithm works fine for a complicated photo (with higher brightness and more info): despite the noise, the sharpness value changes monotonically.
But for a simple photo (with less brightness and less info), the sharpness value doesn't change monotonically.
I first noticed that brightness variance gets in the way of calculating the correct sharpness, so I used histogram equalization (I already tried "BrightnessAndContrastAuto", which didn't work), and it improves the result to some extent.
equalizeHist(im_gray, im_gray);
After inspecting the problematic sharpness values, I realized the noise is another interfering factor. So I used GaussianBlur, with both a 3x3 and a 5x5 kernel, to denoise (I already tried "fastNlMeansDenoising", which didn't work) before the histogram equalization. Still there are problematic sharpness values (certain values break the monotonic trend):
GaussianBlur(im_gray, im_gray, Size(5, 5), 0);
Z Pos Sharpness
-0.2 41.5362
-0.18 41.73
-0.16 41.9194
-0.14 42.2535
-0.12 42.4438
-0.1 42.9528
-0.08 **42.6879**
-0.06 43.4243
-0.04 43.7608
-0.02 43.9139
0 44.1061
0.02 44.3472
0.04 44.7846
0.06 44.9305
0.08 45.0761
0.1 **44.8107**
0.12 45.1979
0.14 45.7114
0.16 45.9627
0.18 46.2388
0.2 46.6344
To sum up, my current algorithm is as follows:
GaussianBlur(im_gray, im_gray, Size(5, 5), 0);
equalizeHist(im_gray, im_gray);
Sobel(im_gray, grad_x, CV_16S, 1, 0); convertScaleAbs(grad_x, abs_grad_x);
Sobel(im_gray, grad_y, CV_16S, 0, 1); convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
for (int i = 0; i < grad.rows; i++)
    for (int j = 0; j < grad.cols; j++)
        sharpness += grad.at<unsigned char>(i, j);
Question: Could someone tell me how I can remove the noise by adjusting the sigma or size parameter of GaussianBlur, or by using another denoising algorithm?
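Not from the thread, but one idea that may help with the brightness side of the problem: divide the gradient sum by the mean intensity, which cancels any global brightness scaling exactly (equalizeHist only approximates this). A pure-Python sketch, with simple finite differences standing in for Sobel and a toy image:

```python
def norm_sharpness(img):
    """Sum of |finite-difference gradients| divided by mean intensity:
    a brightness-scale-invariant stand-in for the Sobel sharpness sum."""
    h, w = len(img), len(img[0])
    grad = sum(abs(img[y][x + 1] - img[y][x])
               for y in range(h) for x in range(w - 1))
    grad += sum(abs(img[y + 1][x] - img[y][x])
                for y in range(h - 1) for x in range(w))
    mean = sum(sum(row) for row in img) / float(h * w)
    return grad / mean if mean else 0.0

# Same toy scene at full and at half brightness: the measure is unchanged,
# so a global brightness fluctuation cannot break the monotonic trend.
base   = [[10, 10, 50, 50], [10, 10, 50, 50]]
darker = [[v // 2 for v in row] for row in base]
print(norm_sharpness(base) == norm_sharpness(darker))  # → True
```

This does not fix the noise itself, but it removes one of the two interfering factors before you tune the GaussianBlur parameters.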
Additional background: according to the comments, I should clarify where I got this set of pictures. They are not the raw output of the camera; they come from software that assists laser marking. This software has a child window showing the grayscale real-time image from the camera, and it has the following features: 1. move the camera to a certain position; 2. adjust the brightness and contrast of the image; 3. save the image. When I capture the series of images, I first fix the brightness and contrast settings, then move the camera in the Z direction step by step, clicking "save the image" after each move. The image shown in the window is stored in a series of .bmp files.
So, in short, I captured images that were themselves captured by software. The raw image is already processed for grayscale, brightness and contrast. I will add the new algorithm to this software once it's done, and then the input to the algorithm will be the raw output of the camera; currently I don't have access to that lower-level interface. And I believe this processing won't prevent coping with the time-varying brightness and noise.
However, the processing by the software is one factor interfering with the sharpness algorithm: it sets the "easy or hard mode". With high brightness and contrast settings, the original Sobel-only algorithm works fine. With low brightness and contrast settings, the picture shows less information, and the time-varying brightness and noise take over. These are different from the software's brightness and contrast settings, which are a fixed pipeline; they are intrinsic features of the image. In other words, with the brightness and position settings fixed, the image shown in the window changes by itself in brightness and noise, whether randomly or at a certain frequency. So when I "save the image", the brightness and noise variance creeps in.
The two pictures at the top are two .bmp pictures captured at adjacent Z positions, 0.02 mm apart. I expect them to change only in sharpness, but the left one let the demon in and is reluctant to reveal its true self.
I'm working on a project to design a low-vision aid. What image processing operation can simulate cataract vision for a normal eye using OpenCV?
It would be useful if you described the symptoms of the cataract and what happens to the retinal images, since not all the people here are experts in computer vision and eye diseases at the same time. If a retinal image gets out of focus and gets a yellow tint, you can use the OpenCV blur() function and also boost the RGB values towards yellow a bit. If there are different degrees of blur across the visual field, I recommend using integral images; see this post.
I guess there are at least three operations to do: add noise, blur, and whiten:
// I4 is the input image, w x h, CV_8UC3; take its left half as the source
Rect rect2(0, 0, w/2, h);
Mat src = I4(rect2).clone();
// add Gaussian noise (mean 100, stddev 50)
Mat Mnoise(h, w/2, CV_8UC3);
randn(Mnoise, 100, 50);
src = src*0.5 + Mnoise*0.5;
// blur
Mat Mblur;
blur(src, Mblur, Size(12, 12));
// whiten: blend 20% white into the blurred image
Mat Mblurw = Mblur*0.8 + Scalar(255, 255, 255)*0.2;
I have the following image:
And I'd like to obtain a thresholded image where only the tape is white and the whole background is black. So far I've tried this:
Mat image = Highgui.imread("C:/bezier/0.JPG");
Mat byn = new Mat();
Imgproc.cvtColor(image, byn, Imgproc.COLOR_BGR2GRAY);
Mat thresh = new Mat();
// apply filters
Imgproc.blur(byn, byn, new Size(2, 2));
Imgproc.threshold(byn, thresh, 0, 255, Imgproc.THRESH_BINARY+Imgproc.THRESH_OTSU);
Imgproc.erode(thresh, thresh, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(4, 4)));
But I obtain this image, which is far from what I want:
The tape will always be of the same color (white) and width (about 2 cm). Any ideas? Thanks
Let's see what you know:
The tape has lower contrast.
The tape is lighter than the background.
If you know the scale of the picture, you can run adaptive thresholds on two levels. Let's say the width of the tape is 100 pixels:
Reject a pixel whose brightness is outside +/- x of the average brightness in the 50x50 (maybe smaller, but not larger) window surrounding it, AND
Reject a pixel whose brightness is smaller than y plus the average brightness in the 100x100 (maybe larger, but not smaller) window surrounding it.
You should also experiment a bit, trying both the mean and the median as the definition of "average" for each threshold.
From there on you should have a much better-defined image, and you can remove all but the largest contour (presumably the trail).
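The two rejection rules above can be sketched directly. Everything here is illustrative: the windows are shrunk to 3x3 and 5x5 for a toy image, and x_tol / y_off are made-up tolerances standing in for x and y:

```python
def window_mean(img, cy, cx, r):
    """Mean brightness in the (2r+1)x(2r+1) window around (cy, cx), clipped at borders."""
    h, w = len(img), len(img[0])
    vals = [img[y][x]
            for y in range(max(0, cy - r), min(h, cy + r + 1))
            for x in range(max(0, cx - r), min(w, cx + r + 1))]
    return sum(vals) / float(len(vals))

def tape_mask(img, x_tol, y_off, r_small=1, r_large=2):
    """Keep a pixel only if it is within x_tol of the small-window mean
    (low local contrast) AND at least y_off above the large-window mean
    (lighter than the wider surround)."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for cy in range(h):
        for cx in range(w):
            p = img[cy][cx]
            if (abs(p - window_mean(img, cy, cx, r_small)) <= x_tol
                    and p >= window_mean(img, cy, cx, r_large) + y_off):
                mask[cy][cx] = 1
    return mask

# Toy scene: a bright "tape" stripe (200) down the middle of a darker floor (50).
img = [[200 if 2 <= x <= 4 else 50 for x in range(7)] for _ in range(5)]
mask = tape_mask(img, x_tol=30, y_off=20)
print(mask[2][3], mask[2][0])  # → 1 0  (tape centre kept, floor rejected)
```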
I think you are not taking advantage of the fact that the tape is white (and the floor is in a shade of brown).
Rather than converting to grayscale with cvtColor(src, dst, Imgproc.COLOR_BGR2GRAY), try using a custom operation that penalizes saturation... maybe something like converting to HSV and letting G = V * (1-S).
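A sketch of that suggestion, assuming 0-255 BGR triples and a toy two-pixel image. Note that V * (1-S), with V = max(R,G,B) and S = (V - min)/V, algebraically simplifies to the per-pixel channel minimum, which darkens saturated (brown) pixels while leaving near-white pixels bright:

```python
def desaturating_gray(bgr_img):
    """Custom grayscale G = V * (1 - S); equivalent to min(B, G, R) per pixel."""
    out = []
    for row in bgr_img:
        out_row = []
        for (b, g, r) in row:
            v = max(b, g, r)                                      # HSV value
            s = 0.0 if v == 0 else (v - min(b, g, r)) / float(v)  # HSV saturation
            out_row.append(int(round(v * (1.0 - s))))             # == min(b, g, r)
        out.append(out_row)
    return out

white_tape  = (250, 250, 245)   # nearly unsaturated -> stays bright
brown_floor = (40, 90, 150)     # saturated -> strongly darkened
print(desaturating_gray([[white_tape, brown_floor]]))  # → [[245, 40]]
```

A plain global threshold on this custom gray should then separate the tape much more cleanly than Otsu on the standard luminance.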