Is there a way to detect if an image is blurry? [closed] - image-processing
I was wondering if there is a way to determine if an image is blurry or not by analyzing the image data.
Another very simple way to estimate the sharpness of an image is to use a Laplace (or LoG) filter and simply pick the maximum value. Using a robust measure like a 99.9% quantile is probably better if you expect noise (i.e., picking the Nth-highest contrast instead of the highest contrast). If you expect varying image brightness, you should also include a preprocessing step to normalize brightness/contrast (e.g., histogram equalization).
I've implemented Simon's suggestion and this one in Mathematica, and tried it on a few test images:
The first test blurs the test images using a Gaussian filter with a varying kernel size, then calculates the FFT of the blurred image and takes the average of the 90% highest frequencies:
testFft[img_] := Table[
(
blurred = GaussianFilter[img, r];
fft = Fourier[ImageData[blurred]];
{w, h} = Dimensions[fft];
windowSize = Round[w/2.1];
Mean[Flatten[(Abs[
fft[[w/2 - windowSize ;; w/2 + windowSize,
h/2 - windowSize ;; h/2 + windowSize]]])]]
), {r, 0, 10, 0.5}]
Result in a logarithmic plot:
The 5 lines represent the 5 test images, the X axis represents the Gaussian filter radius. The graphs are decreasing, so the FFT is a good measure for sharpness.
This is the code for the "highest LoG" blurriness estimator: it simply applies a LoG filter and returns the brightest pixel in the filter result:
testLaplacian[img_] := Table[
(
blurred = GaussianFilter[img, r];
Max[Flatten[ImageData[LaplacianGaussianFilter[blurred, 1]]]]
), {r, 0, 10, 0.5}]
Result in a logarithmic plot:
The spread for the un-blurred images is a little better here (2.5 vs 3.3), mainly because this method only uses the strongest contrast in the image, while the FFT is essentially a mean over the whole image. The functions are also decreasing faster, so it might be easier to set a "blurry" threshold.
Yes, it is. Compute the Fast Fourier Transform and analyse the result. The Fourier transform tells you which frequencies are present in the image. If there is a low amount of high frequencies, then the image is blurry.
Defining the terms 'low' and 'high' is up to you.
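As a rough sketch of one possible metric along these lines (my own illustration in Python/numpy, not part of the original answer; the cutoff fraction is an arbitrary assumption you would need to tune):
import numpy as np

def fft_high_freq_ratio(gray, cutoff=0.25):
    # Fraction of spectral energy above a cutoff frequency.
    # 'cutoff' is a radius, as a fraction of the half-spectrum,
    # separating "low" from "high" frequencies.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    y, x = np.ogrid[:h, :w]
    r = np.sqrt(((y - h / 2) / (h / 2)) ** 2 + ((x - w / 2) / (w / 2)) ** 2)
    return spectrum[r > cutoff].sum() / spectrum.sum()  # lower -> blurrier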
Edit:
As stated in the comments, if you want a single float representing the blurriness of a given image, you have to work out a suitable metric.
nikie's answer provides such a metric. Convolve the image with a Laplacian kernel:
   1
1 -4 1
   1
And use a robust maximum metric on the output to get a number you can use for thresholding. Try to avoid smoothing the images too much before computing the Laplacian, because you will only find out that a smoothed image is indeed blurry :-).
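For reference, a small Python/scipy sketch of this robust-maximum metric (my own illustration; the 99.9 quantile is the tunable parameter mentioned in nikie's answer):
import numpy as np
from scipy.ndimage import convolve

def robust_laplacian_max(gray, q=99.9):
    # Convolve with the Laplacian kernel and take a high quantile of the
    # absolute response instead of the true maximum, to resist noise.
    kernel = np.array([[0, 1, 0],
                       [1, -4, 1],
                       [0, 1, 0]], dtype=np.float64)
    response = np.abs(convolve(gray.astype(np.float64), kernel))
    return np.percentile(response, q)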
During some work with an auto-focus lens, I came across this very useful set of algorithms for detecting image focus. It's implemented in MATLAB, but most of the functions are quite easy to port to OpenCV with filter2D.
It's basically a survey implementation of many focus measurement algorithms. If you want to read the original papers, references to the authors of the algorithms are provided in the code. The 2012 paper by Pertuz et al., Analysis of focus measure operators for shape from focus (SFF), gives a great review of all of these measures as well as their performance (both in terms of speed and accuracy as applied to SFF).
EDIT: Added MATLAB code just in case the link dies.
function FM = fmeasure(Image, Measure, ROI)
%This function measures the relative degree of focus of
%an image. It may be invoked as:
%
% FM = fmeasure(Image, Method, ROI)
%
%Where
% Image, is a grayscale image and FM is the computed
% focus value.
% Method, is the focus measure algorithm as a string.
% see 'operators.txt' for a list of focus
% measure methods.
% ROI, Image ROI as a rectangle [xo yo width height].
% if an empty argument is passed, the whole
% image is processed.
%
% Said Pertuz
% Abr/2010
if ~isempty(ROI)
Image = imcrop(Image, ROI);
end
WSize = 15; % Size of local window (only some operators)
switch upper(Measure)
case 'ACMO' % Absolute Central Moment (Shirvaikar2004)
if ~isinteger(Image), Image = im2uint8(Image);
end
FM = AcMomentum(Image);
case 'BREN' % Brenner's (Santos97)
[M N] = size(Image);
DH = Image;
DV = Image;
DH(1:M-2,:) = diff(Image,2,1);
DV(:,1:N-2) = diff(Image,2,2);
FM = max(DH, DV);
FM = FM.^2;
FM = mean2(FM);
case 'CONT' % Image contrast (Nanda2001)
ImContrast = inline('sum(abs(x(:)-x(5)))');
FM = nlfilter(Image, [3 3], ImContrast);
FM = mean2(FM);
case 'CURV' % Image Curvature (Helmli2001)
if ~isinteger(Image), Image = im2uint8(Image);
end
M1 = [-1 0 1;-1 0 1;-1 0 1];
M2 = [1 0 1;1 0 1;1 0 1];
P0 = imfilter(Image, M1, 'replicate', 'conv')/6;
P1 = imfilter(Image, M1', 'replicate', 'conv')/6;
P2 = 3*imfilter(Image, M2, 'replicate', 'conv')/10 ...
-imfilter(Image, M2', 'replicate', 'conv')/5;
P3 = -imfilter(Image, M2, 'replicate', 'conv')/5 ...
+3*imfilter(Image, M2', 'replicate', 'conv')/10;
FM = abs(P0) + abs(P1) + abs(P2) + abs(P3);
FM = mean2(FM);
case 'DCTE' % DCT energy ratio (Shen2006)
FM = nlfilter(Image, [8 8], @DctRatio);
FM = mean2(FM);
case 'DCTR' % DCT reduced energy ratio (Lee2009)
FM = nlfilter(Image, [8 8], @ReRatio);
FM = mean2(FM);
case 'GDER' % Gaussian derivative (Geusebroek2000)
N = floor(WSize/2);
sig = N/2.5;
[x,y] = meshgrid(-N:N, -N:N);
G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
Gx = -x.*G/(sig^2);Gx = Gx/sum(Gx(:));
Gy = -y.*G/(sig^2);Gy = Gy/sum(Gy(:));
Rx = imfilter(double(Image), Gx, 'conv', 'replicate');
Ry = imfilter(double(Image), Gy, 'conv', 'replicate');
FM = Rx.^2+Ry.^2;
FM = mean2(FM);
case 'GLVA' % Graylevel variance (Krotkov86)
FM = std2(Image);
case 'GLLV' %Graylevel local variance (Pech2000)
LVar = stdfilt(Image, ones(WSize,WSize)).^2;
FM = std2(LVar)^2;
case 'GLVN' % Normalized GLV (Santos97)
FM = std2(Image)^2/mean2(Image);
case 'GRAE' % Energy of gradient (Subbarao92a)
Ix = Image;
Iy = Image;
Iy(1:end-1,:) = diff(Image, 1, 1);
Ix(:,1:end-1) = diff(Image, 1, 2);
FM = Ix.^2 + Iy.^2;
FM = mean2(FM);
case 'GRAT' % Thresholded gradient (Santos97)
Th = 0; %Threshold
Ix = Image;
Iy = Image;
Iy(1:end-1,:) = diff(Image, 1, 1);
Ix(:,1:end-1) = diff(Image, 1, 2);
FM = max(abs(Ix), abs(Iy));
FM(FM<Th)=0;
FM = sum(FM(:))/sum(sum(FM~=0));
case 'GRAS' % Squared gradient (Eskicioglu95)
Ix = diff(Image, 1, 2);
FM = Ix.^2;
FM = mean2(FM);
case 'HELM' %Helmli's mean method (Helmli2001)
MEANF = fspecial('average',[WSize WSize]);
U = imfilter(Image, MEANF, 'replicate');
R1 = U./Image;
R1(Image==0)=1;
index = (U>Image);
FM = 1./R1;
FM(index) = R1(index);
FM = mean2(FM);
case 'HISE' % Histogram entropy (Krotkov86)
FM = entropy(Image);
case 'HISR' % Histogram range (Firestone91)
FM = max(Image(:))-min(Image(:));
case 'LAPE' % Energy of laplacian (Subbarao92a)
LAP = fspecial('laplacian');
FM = imfilter(Image, LAP, 'replicate', 'conv');
FM = mean2(FM.^2);
case 'LAPM' % Modified Laplacian (Nayar89)
M = [-1 2 -1];
Lx = imfilter(Image, M, 'replicate', 'conv');
Ly = imfilter(Image, M', 'replicate', 'conv');
FM = abs(Lx) + abs(Ly);
FM = mean2(FM);
case 'LAPV' % Variance of laplacian (Pech2000)
LAP = fspecial('laplacian');
ILAP = imfilter(Image, LAP, 'replicate', 'conv');
FM = std2(ILAP)^2;
case 'LAPD' % Diagonal laplacian (Thelen2009)
M1 = [-1 2 -1];
M2 = [0 0 -1;0 2 0;-1 0 0]/sqrt(2);
M3 = [-1 0 0;0 2 0;0 0 -1]/sqrt(2);
F1 = imfilter(Image, M1, 'replicate', 'conv');
F2 = imfilter(Image, M2, 'replicate', 'conv');
F3 = imfilter(Image, M3, 'replicate', 'conv');
F4 = imfilter(Image, M1', 'replicate', 'conv');
FM = abs(F1) + abs(F2) + abs(F3) + abs(F4);
FM = mean2(FM);
case 'SFIL' %Steerable filters (Minhas2009)
% Angles = [0 45 90 135 180 225 270 315];
N = floor(WSize/2);
sig = N/2.5;
[x,y] = meshgrid(-N:N, -N:N);
G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
Gx = -x.*G/(sig^2);Gx = Gx/sum(Gx(:));
Gy = -y.*G/(sig^2);Gy = Gy/sum(Gy(:));
R(:,:,1) = imfilter(double(Image), Gx, 'conv', 'replicate');
R(:,:,2) = imfilter(double(Image), Gy, 'conv', 'replicate');
R(:,:,3) = cosd(45)*R(:,:,1)+sind(45)*R(:,:,2);
R(:,:,4) = cosd(135)*R(:,:,1)+sind(135)*R(:,:,2);
R(:,:,5) = cosd(180)*R(:,:,1)+sind(180)*R(:,:,2);
R(:,:,6) = cosd(225)*R(:,:,1)+sind(225)*R(:,:,2);
R(:,:,7) = cosd(270)*R(:,:,1)+sind(270)*R(:,:,2);
R(:,:,8) = cosd(315)*R(:,:,1)+sind(315)*R(:,:,2);
FM = max(R,[],3);
FM = mean2(FM);
case 'SFRQ' % Spatial frequency (Eskicioglu95)
Ix = Image;
Iy = Image;
Ix(:,1:end-1) = diff(Image, 1, 2);
Iy(1:end-1,:) = diff(Image, 1, 1);
FM = mean2(sqrt(double(Iy.^2+Ix.^2)));
case 'TENG'% Tenengrad (Krotkov86)
Sx = fspecial('sobel');
Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
FM = Gx.^2 + Gy.^2;
FM = mean2(FM);
case 'TENV' % Tenengrad variance (Pech2000)
Sx = fspecial('sobel');
Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
G = Gx.^2 + Gy.^2;
FM = std2(G)^2;
case 'VOLA' % Vollath's correlation (Santos97)
Image = double(Image);
I1 = Image; I1(1:end-1,:) = Image(2:end,:);
I2 = Image; I2(1:end-2,:) = Image(3:end,:);
Image = Image.*(I1-I2);
FM = mean2(Image);
case 'WAVS' %Sum of Wavelet coeffs (Yang2003)
[C,S] = wavedec2(Image, 1, 'db6');
H = wrcoef2('h', C, S, 'db6', 1);
V = wrcoef2('v', C, S, 'db6', 1);
D = wrcoef2('d', C, S, 'db6', 1);
FM = abs(H) + abs(V) + abs(D);
FM = mean2(FM);
case 'WAVV' %Variance of Wav...(Yang2003)
[C,S] = wavedec2(Image, 1, 'db6');
H = abs(wrcoef2('h', C, S, 'db6', 1));
V = abs(wrcoef2('v', C, S, 'db6', 1));
D = abs(wrcoef2('d', C, S, 'db6', 1));
FM = std2(H)^2+std2(V)^2+std2(D)^2;
case 'WAVR'
[C,S] = wavedec2(Image, 3, 'db6');
H = abs(wrcoef2('h', C, S, 'db6', 1));
V = abs(wrcoef2('v', C, S, 'db6', 1));
D = abs(wrcoef2('d', C, S, 'db6', 1));
A1 = abs(wrcoef2('a', C, S, 'db6', 1));
A2 = abs(wrcoef2('a', C, S, 'db6', 2));
A3 = abs(wrcoef2('a', C, S, 'db6', 3));
A = A1 + A2 + A3;
WH = H.^2 + V.^2 + D.^2;
WH = mean2(WH);
WL = mean2(A);
FM = WH/WL;
otherwise
error('Unknown measure %s',upper(Measure))
end
end
%************************************************************************
function fm = AcMomentum(Image)
[M N] = size(Image);
Hist = imhist(Image)/(M*N);
Hist = abs((0:255)-255*mean2(Image))'.*Hist;
fm = sum(Hist);
end
%******************************************************************
function fm = DctRatio(M)
MT = dct2(M).^2;
fm = (sum(MT(:))-MT(1,1))/MT(1,1);
end
%************************************************************************
function fm = ReRatio(M)
M = dct2(M);
fm = (M(1,2)^2+M(1,3)^2+M(2,1)^2+M(2,2)^2+M(3,1)^2)/(M(1,1)^2);
end
%******************************************************************
A few examples of OpenCV versions:
// OpenCV port of 'LAPM' algorithm (Nayar89)
double modifiedLaplacian(const cv::Mat& src)
{
cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
cv::Mat Lx;
cv::sepFilter2D(src, Lx, CV_64F, M, G);
cv::Mat Ly;
cv::sepFilter2D(src, Ly, CV_64F, G, M);
cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
}
// OpenCV port of 'LAPV' algorithm (Pech2000)
double varianceOfLaplacian(const cv::Mat& src)
{
cv::Mat lap;
cv::Laplacian(src, lap, CV_64F);
cv::Scalar mu, sigma;
cv::meanStdDev(lap, mu, sigma);
double focusMeasure = sigma.val[0]*sigma.val[0];
return focusMeasure;
}
// OpenCV port of 'TENG' algorithm (Krotkov86)
double tenengrad(const cv::Mat& src, int ksize)
{
cv::Mat Gx, Gy;
cv::Sobel(src, Gx, CV_64F, 1, 0, ksize);
cv::Sobel(src, Gy, CV_64F, 0, 1, ksize);
cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
}
// OpenCV port of 'GLVN' algorithm (Santos97)
double normalizedGraylevelVariance(const cv::Mat& src)
{
cv::Scalar mu, sigma;
cv::meanStdDev(src, mu, sigma);
double focusMeasure = (sigma.val[0]*sigma.val[0]) / mu.val[0];
return focusMeasure;
}
No guarantees on whether or not these measures are the best choice for your problem, but if you track down the papers associated with these measures, they may give you more insight. Hope you find the code useful! I know I did.
Building off of nikie's answer: it's straightforward to implement the Laplacian-based method with OpenCV:
short GetSharpness(char* data, unsigned int width, unsigned int height)
{
// assumes that your image is already in planar YUV or 8-bit greyscale
IplImage* in = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,1);
IplImage* out = cvCreateImage(cvSize(width,height),IPL_DEPTH_16S,1);
memcpy(in->imageData,data,width*height);
// aperture size of 1 corresponds to the correct matrix
cvLaplace(in, out, 1);
short maxLap = -32767;
short* imgData = (short*)out->imageData;
for(int i =0;i<(out->imageSize/2);i++)
{
if(imgData[i] > maxLap) maxLap = imgData[i];
}
cvReleaseImage(&in);
cvReleaseImage(&out);
return maxLap;
}
This returns a short indicating the maximum sharpness detected which, based on my tests on real-world samples, is a pretty good indicator of whether a camera is in focus or not. Not surprisingly, normal values are scene-dependent, but much less so than with the FFT method, which had too high a false-positive rate to be useful in my application.
I came up with a totally different solution.
I needed to analyse video still frames to find the sharpest one in every (X) frames. This way, I would detect motion blur and/or out of focus images.
I ended up using Canny Edge detection and I got VERY VERY good results with almost every kind of video (with nikie's method, I had problems with digitalized VHS videos and heavy interlaced videos).
I optimized the performance by setting a region of interest (ROI) on the original image.
Using EmguCV :
//Convert image using Canny
using (Image<Gray, byte> imgCanny = imgOrig.Canny(225, 175))
{
//Count the number of pixel representing an edge
int nCountCanny = imgCanny.CountNonzero()[0];
//Compute a sharpness grade:
//< 1.5 = blurred, in movement
//from 1.5 to 6 = acceptable
//> 6 = stable, sharp
double dSharpness = (nCountCanny * 1000.0 / (imgCanny.Cols * imgCanny.Rows));
}
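For those not using EmguCV, a roughly equivalent sketch in plain Python/cv2 (my own port; it assumes the same Canny thresholds, and the sharpness grades above apply to its return value):
import cv2

def canny_sharpness(img_bgr, t1=225, t2=175):
    # Edge-pixel density per 1000 pixels, mirroring the snippet above.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, t1, t2)
    return cv2.countNonZero(edges) * 1000.0 / (edges.shape[0] * edges.shape[1])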
Thanks nikie for that great Laplace suggestion.
OpenCV docs pointed me in the same direction:
Using Python, cv2 (OpenCV 2.4.10), and numpy:
import cv2
import numpy
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
numpy.max(cv2.convertScaleAbs(cv2.Laplacian(gray, 3)))
The result is between 0 and 255. I found anything over about 200 is very much in focus, and by 100 it's noticeably blurry. The max never really gets much below 20, even if the image is completely blurred.
One way which I'm currently using measures the spread of edges in the image. Look for this paper:
@ARTICLE{Marziliano04perceptualblur,
author = {Pina Marziliano and Frederic Dufaux and Stefan Winkler and Touradj Ebrahimi},
title = {Perceptual blur and ringing metrics: Application to JPEG2000},
journal = {Signal Processing: Image Communication},
year = {2004},
pages = {163--172} }
It's usually behind a paywall but I've seen some free copies around. Basically, they locate vertical edges in an image, and then measure how wide those edges are. Averaging the width gives the final blur estimation result for the image. Wider edges correspond to blurry images, and vice versa.
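A heavily simplified sketch of that idea in Python/numpy (my own illustration, not the paper's exact algorithm; the gradient threshold and the monotonic-run rule for the edge extent are assumptions):
import numpy as np

def mean_edge_width(gray, grad_thresh=30):
    g = gray.astype(np.int32)
    gx = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]           # horizontal gradient
    widths = []
    ys, xs = np.where(np.abs(gx) > grad_thresh)  # strong vertical edges
    for y, x in zip(ys, xs):
        row, sign = g[y], 1 if gx[y, x] > 0 else -1
        left = x
        while left > 0 and sign * (row[left] - row[left - 1]) > 0:
            left -= 1                            # walk out to the local extremum
        right = x
        while right < len(row) - 1 and sign * (row[right + 1] - row[right]) > 0:
            right += 1
        widths.append(right - left)              # edge width in pixels
    return float(np.mean(widths)) if widths else 0.0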
This problem belongs to the field of no-reference image quality estimation. If you look it up on Google Scholar, you'll get plenty of useful references.
EDIT
Here's a plot of the blur estimates obtained for the 5 images in nikie's post. Higher values correspond to greater blur. I used a fixed-size 11x11 Gaussian filter and varied the standard deviation (using imagemagick's convert command to obtain the blurred images).
If you compare images of different sizes, don't forget to normalize by the image width, since larger images will have wider edges.
Finally, a significant problem is distinguishing between artistic blur and undesired blur (caused by focus miss, compression, relative motion of the subject to the camera), but that is beyond simple approaches like this one. For an example of artistic blur, have a look at the Lenna image: Lenna's reflection in the mirror is blurry, but her face is perfectly in focus. This contributes to a higher blur estimate for the Lenna image.
Answers above elucidated many things, but I think it is useful to make a conceptual distinction.
What if you take a perfectly on-focus picture of a blurred image?
The blur detection problem is only well posed when you have a reference. If you need to design, e.g., an auto-focus system, you compare a sequence of images taken with different degrees of blurring, or smoothing, and you try to find the point of minimum blurring within this set. In other words, you need to cross-reference the various images using one of the techniques illustrated above (basically, with various possible levels of refinement in the approach, looking for the one image with the highest high-frequency content).
I tried the solution based on the Laplacian filter from this post. It didn't help me. So I tried the solution from this post, and it was good for my case (but it is slow):
import cv2

image = cv2.imread("test.jpeg")
height, width = image.shape[:2]
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

def px(x, y):
    return int(gray[y, x])

sum = 0
for x in range(width-1):
    for y in range(height):
        sum += abs(px(x, y) - px(x+1, y))
The less blurred image has the higher sum value!
You can also trade accuracy for speed by changing the step; e.g., this part:
for x in range(width - 1):
you can replace with this one
for x in range(0, width - 1, 10):
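If the loop is still too slow, the same sum can be computed in one vectorized numpy call (a sketch, assuming gray is the array from the snippet above):
import numpy as np

def gradient_sum(gray):
    # Equivalent to the double loop: sum of |I(x, y) - I(x+1, y)|.
    return int(np.abs(np.diff(gray.astype(np.int64), axis=1)).sum())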
MATLAB code for two methods that have been published in highly regarded journals (IEEE Transactions on Image Processing) is available here: https://ivulab.asu.edu/software
Check the CPBDM and JNBM algorithms. If you look at the code, it's not very hard to port, and incidentally it is based on Marziliano's method as the basic feature.
I implemented this using the FFT in MATLAB and checked the histograms of the FFT, computing the mean and std; a curve fit could also be done:
fa = abs(fftshift(fft2(sharp_img)));
fb = abs(fftshift(fft2(blured_img)));
f1=20*log10(0.001+fa);
f2=20*log10(0.001+fb);
figure,imagesc(f1);title('org')
figure,imagesc(f2);title('blur')
figure,hist(f1(:),100);title('org')
figure,hist(f2(:),100);title('blur')
mf1=mean(f1(:));
mf2=mean(f2(:));
mfd1=median(f1(:));
mfd2=median(f2(:));
sf1=std(f1(:));
sf2=std(f2(:));
That's what I do in OpenCV to detect focus quality in a region:
Mat grad;
int scale = 1;
int delta = 0;
int ddepth = CV_8U;
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel(matFromSensor, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
/// Gradient Y
Sobel(matFromSensor, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
cv::Scalar mu, sigma;
cv::meanStdDev(grad, /* mean */ mu, /*stdev*/ sigma);
double focusMeasure = mu.val[0] * mu.val[0];