I am working on an OCR project, and during preprocessing some red stamps need to be removed so that the text near the stamps can be detected. I have tried a lot of methods (like changing pixel values, or thresholding the red channel) but they all failed.
Any suggestions are highly appreciated.
Python, C++, Java or what? Since you didn't state the OpenCV implementation you are using, I'm giving my answer in C++.
An option is to use the HSV color space to filter out the range of red values that defines the seal. My approach is to use the CMYK color space to filter everything except the black (or dark) text. It should do a pretty good job on printed media, which is your case.
//read input image:
std::string imageName = "C://opencvImages//seal.png";
cv::Mat imageInput = cv::imread( imageName );
Now, perform the CMYK conversion. OpenCV does not support this operation out of the box, so bear with me; I provide the helper function at the end of this post.
//CMYK conversion:
std::vector<cv::Mat> cmyk;
cmyk = rgb2cmyk( imageInput );
//This is the Black channel:
cv::Mat blackChannel = cmyk[3].clone();
This is the image of the black channel; it is nice how everything that is not black (or dark) practically disappears!
Now, optionally, enhance the result by applying a brightness and contrast adjustment. The goal is just to separate the text from the background a little better; we want well-defined pixel distributions to get a nice binary image.
//Brightness and contrast adjustment:
float alpha = 2.0;
float beta = -50.0;
contrastBrightnessAdjustment( blackChannel, alpha, beta );
OpenCV can actually do this linear transform via cv::Mat::convertTo, but implementing it yourself is also very easy. Hold on a little bit, and let me show you the result of this operation:
Nice. Let's Otsu-threshold this bad boy to get a nice binary image containing the clean text:
cv::Mat binaryImage;
cv::threshold( blackChannel, binaryImage, 0, 255, cv::THRESH_OTSU );
This is what you get:
Now, the RGB to CMYK conversion function. I'm using the following implementation. The function receives an RGB image and returns a vector containing each of the CMYK channels
std::vector<cv::Mat> rgb2cmyk( cv::Mat& inputImage ){
    std::vector<cv::Mat> cmyk;
    for (int i = 0; i < 4; i++) {
        cmyk.push_back( cv::Mat( inputImage.size(), CV_8UC1 ) );
    }
    // OpenCV stores channels in BGR order:
    std::vector<cv::Mat> inputRGB;
    cv::split( inputImage, inputRGB );
    for (int i = 0; i < inputImage.rows; i++) {
        for (int j = 0; j < inputImage.cols; j++) {
            float r = (int)inputRGB[2].at<uchar>(i, j) / 255.;
            float g = (int)inputRGB[1].at<uchar>(i, j) / 255.;
            float b = (int)inputRGB[0].at<uchar>(i, j) / 255.;
            float k = std::min(std::min(1-r, 1-g), 1-b);
            // Guard against division by zero on pure black pixels (k == 1):
            float d = (k < 1.f) ? (1.f - k) : 1.f;
            cmyk[0].at<uchar>(i, j) = (1 - r - k) / d * 255.;
            cmyk[1].at<uchar>(i, j) = (1 - g - k) / d * 255.;
            cmyk[2].at<uchar>(i, j) = (1 - b - k) / d * 255.;
            cmyk[3].at<uchar>(i, j) = k * 255.;
        }
    }
    return cmyk;
}
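Incidentally, the K channel has a closed form: K = 1 - max(R, G, B) for channels scaled to [0, 1], so if you happen to be on Python the whole extraction collapses to a couple of NumPy lines. A minimal sketch, assuming a BGR uint8 image as loaded by cv2.imread:

import cv2
import numpy as np

img = cv2.imread("seal.png")                           # BGR, uint8
blackChannel = (255 - img.max(axis=2)).astype(np.uint8) # K = 255 - max(B, G, R)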
And the contrastBrightnessAdjustment function is this, implemented with OpenCV's matrix iterators. The function receives a grayscale image and applies the linear transformation via the alpha and beta parameters:
void contrastBrightnessAdjustment( cv::Mat inputImage, float alpha, float beta ){
    // The input is single-channel (CV_8UC1), so iterate over uchar pixels:
    cv::MatIterator_<uchar> it, end;
    for (it = inputImage.begin<uchar>(), end = inputImage.end<uchar>(); it != end; ++it) {
        *it = cv::saturate_cast<uchar>( alpha * (*it) + beta );
    }
}
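And if you want to try the HSV alternative mentioned at the beginning, here is a rough Python sketch. The hue/saturation bounds below are assumptions you would tune for your particular stamp; note that red wraps around hue = 0 in OpenCV's 0..179 hue range, hence the two masks:

import cv2
import numpy as np

img = cv2.imread("seal.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask1 = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))     # low reds
mask2 = cv2.inRange(hsv, (170, 70, 50), (179, 255, 255))  # high reds
redMask = cv2.bitwise_or(mask1, mask2)
img[redMask > 0] = (255, 255, 255)  # paint the stamp white so OCR ignores it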
I was wondering if there is a way to determine if an image is blurry or not by analyzing the image data.
Another very simple way to estimate the sharpness of an image is to use a Laplace (or LoG) filter and simply pick the maximum value. Using a robust measure like a 99.9% quantile is probably better if you expect noise (i.e. picking the Nth-highest contrast instead of the highest contrast.) If you expect varying image brightness, you should also include a preprocessing step to normalize image brightness/contrast (e.g. histogram equalization).
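A rough OpenCV/Python sketch of that estimator, with the 99.9% quantile standing in for the raw maximum (treat the exact percentile as a tunable assumption):

import cv2
import numpy as np

def sharpness(gray):
    # Gaussian blur followed by a Laplacian approximates a LoG filter;
    # then take a robust "maximum" of the response.
    log = cv2.Laplacian(cv2.GaussianBlur(gray, (3, 3), 0), cv2.CV_64F)
    return np.percentile(np.abs(log), 99.9)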
I've implemented Simon's suggestion and this one in Mathematica, and tried it on a few test images:
The first test blurs the test images using a Gaussian filter with a varying kernel size, then calculates the FFT of the blurred image and takes the average of the 90% highest frequencies:
testFft[img_] := Table[
(
blurred = GaussianFilter[img, r];
fft = Fourier[ImageData[blurred]];
{w, h} = Dimensions[fft];
windowSize = Round[w/2.1];
Mean[Flatten[(Abs[
fft[[w/2 - windowSize ;; w/2 + windowSize,
h/2 - windowSize ;; h/2 + windowSize]]])]]
), {r, 0, 10, 0.5}]
Result in a logarithmic plot:
The 5 lines represent the 5 test images, the X axis represents the Gaussian filter radius. The graphs are decreasing, so the FFT is a good measure for sharpness.
This is the code for the "highest LoG" blurriness estimator: it simply applies a LoG filter and returns the brightest pixel in the filter result:
testLaplacian[img_] := Table[
 (
  blurred = GaussianFilter[img, r];
  Max[Flatten[ImageData[LaplacianGaussianFilter[blurred, 1]]]]
 ), {r, 0, 10, 0.5}]
Result in a logarithmic plot:
The spread for the un-blurred images is a little better here (2.5 vs 3.3), mainly because this method only uses the strongest contrast in the image, while the FFT is essentially a mean over the whole image. The functions are also decreasing faster, so it might be easier to set a "blurry" threshold.
Yes, it is. Compute the Fast Fourier Transform and analyse the result. The Fourier transform tells you which frequencies are present in the image. If there is a low amount of high frequencies, then the image is blurry.
Defining the terms 'low' and 'high' is up to you.
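One possible metric along those lines, sketched in Python (the cutoff radius is an arbitrary assumption you would have to tune): measure the fraction of spectral energy that lies outside a low-frequency disc around the DC component.

import numpy as np

def high_freq_ratio(gray, cutoff=0.1):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = mag.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)        # distance from the spectrum centre
    low = mag[r < cutoff * min(h, w)].sum()   # energy inside the low-frequency disc
    return 1.0 - low / mag.sum()              # higher ratio = sharper image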
Edit:
As stated in the comments, if you want a single float representing the blurryness of a given image, you have to work out a suitable metric.
nikie's answer provide such a metric. Convolve the image with a Laplacian kernel:
0  1  0
1 -4  1
0  1  0
And use a robust maximum metric on the output to get a number you can use for thresholding. Try to avoid smoothing the images too much before computing the Laplacian, because you will only find out that a smoothed image is indeed blurry :-).
During some work with an auto-focus lens, I came across this very useful set of algorithms for detecting image focus. It's implemented in MATLAB, but most of the functions are quite easy to port to OpenCV with filter2D.
It's basically a survey implementation of many focus measurement algorithms. If you want to read the original papers, references to the authors of the algorithms are provided in the code. The 2012 paper by Pertuz et al., Analysis of focus measure operators for shape from focus (SFF), gives a great review of all of these measures as well as their performance (both in terms of speed and accuracy as applied to SFF).
EDIT: Added MATLAB code just in case the link dies.
function FM = fmeasure(Image, Measure, ROI)
%This function measures the relative degree of focus of
%an image. It may be invoked as:
%
%   FM = fmeasure(Image, Method, ROI)
%
%Where
%   Image,  is a grayscale image and FM is the computed
%           focus value.
%   Method, is the focus measure algorithm as a string.
%           see 'operators.txt' for a list of focus
%           measure methods.
%   ROI,    Image ROI as a rectangle [xo yo width height].
%           if an empty argument is passed, the whole
%           image is processed.
%
%  Said Pertuz
%  Abr/2010

if ~isempty(ROI)
    Image = imcrop(Image, ROI);
end

WSize = 15; % Size of local window (only some operators)

switch upper(Measure)
    case 'ACMO' % Absolute Central Moment (Shirvaikar2004)
        if ~isinteger(Image), Image = im2uint8(Image);
        end
        FM = AcMomentum(Image);

    case 'BREN' % Brenner's (Santos97)
        [M N] = size(Image);
        DH = Image;
        DV = Image;
        DH(1:M-2,:) = diff(Image,2,1);
        DV(:,1:N-2) = diff(Image,2,2);
        FM = max(DH, DV);
        FM = FM.^2;
        FM = mean2(FM);

    case 'CONT' % Image contrast (Nanda2001)
        ImContrast = inline('sum(abs(x(:)-x(5)))');
        FM = nlfilter(Image, [3 3], ImContrast);
        FM = mean2(FM);

    case 'CURV' % Image Curvature (Helmli2001)
        if ~isinteger(Image), Image = im2uint8(Image);
        end
        M1 = [-1 0 1;-1 0 1;-1 0 1];
        M2 = [1 0 1;1 0 1;1 0 1];
        P0 = imfilter(Image, M1, 'replicate', 'conv')/6;
        P1 = imfilter(Image, M1', 'replicate', 'conv')/6;
        P2 = 3*imfilter(Image, M2, 'replicate', 'conv')/10 ...
            -imfilter(Image, M2', 'replicate', 'conv')/5;
        P3 = -imfilter(Image, M2, 'replicate', 'conv')/5 ...
            +3*imfilter(Image, M2, 'replicate', 'conv')/10;
        FM = abs(P0) + abs(P1) + abs(P2) + abs(P3);
        FM = mean2(FM);

    case 'DCTE' % DCT energy ratio (Shen2006)
        FM = nlfilter(Image, [8 8], @DctRatio);
        FM = mean2(FM);

    case 'DCTR' % DCT reduced energy ratio (Lee2009)
        FM = nlfilter(Image, [8 8], @ReRatio);
        FM = mean2(FM);

    case 'GDER' % Gaussian derivative (Geusebroek2000)
        N = floor(WSize/2);
        sig = N/2.5;
        [x,y] = meshgrid(-N:N, -N:N);
        G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
        Gx = -x.*G/(sig^2); Gx = Gx/sum(Gx(:));
        Gy = -y.*G/(sig^2); Gy = Gy/sum(Gy(:));
        Rx = imfilter(double(Image), Gx, 'conv', 'replicate');
        Ry = imfilter(double(Image), Gy, 'conv', 'replicate');
        FM = Rx.^2+Ry.^2;
        FM = mean2(FM);

    case 'GLVA' % Graylevel variance (Krotkov86)
        FM = std2(Image);

    case 'GLLV' % Graylevel local variance (Pech2000)
        LVar = stdfilt(Image, ones(WSize,WSize)).^2;
        FM = std2(LVar)^2;

    case 'GLVN' % Normalized GLV (Santos97)
        FM = std2(Image)^2/mean2(Image);

    case 'GRAE' % Energy of gradient (Subbarao92a)
        Ix = Image;
        Iy = Image;
        Iy(1:end-1,:) = diff(Image, 1, 1);
        Ix(:,1:end-1) = diff(Image, 1, 2);
        FM = Ix.^2 + Iy.^2;
        FM = mean2(FM);

    case 'GRAT' % Thresholded gradient (Santos97)
        Th = 0; %Threshold
        Ix = Image;
        Iy = Image;
        Iy(1:end-1,:) = diff(Image, 1, 1);
        Ix(:,1:end-1) = diff(Image, 1, 2);
        FM = max(abs(Ix), abs(Iy));
        FM(FM<Th)=0;
        FM = sum(FM(:))/sum(sum(FM~=0));

    case 'GRAS' % Squared gradient (Eskicioglu95)
        Ix = diff(Image, 1, 2);
        FM = Ix.^2;
        FM = mean2(FM);

    case 'HELM' % Helmli's mean method (Helmli2001)
        MEANF = fspecial('average',[WSize WSize]);
        U = imfilter(Image, MEANF, 'replicate');
        R1 = U./Image;
        R1(Image==0) = 1;
        index = (U>Image);
        FM = 1./R1;
        FM(index) = R1(index);
        FM = mean2(FM);

    case 'HISE' % Histogram entropy (Krotkov86)
        FM = entropy(Image);

    case 'HISR' % Histogram range (Firestone91)
        FM = max(Image(:))-min(Image(:));

    case 'LAPE' % Energy of laplacian (Subbarao92a)
        LAP = fspecial('laplacian');
        FM = imfilter(Image, LAP, 'replicate', 'conv');
        FM = mean2(FM.^2);

    case 'LAPM' % Modified Laplacian (Nayar89)
        M = [-1 2 -1];
        Lx = imfilter(Image, M, 'replicate', 'conv');
        Ly = imfilter(Image, M', 'replicate', 'conv');
        FM = abs(Lx) + abs(Ly);
        FM = mean2(FM);

    case 'LAPV' % Variance of laplacian (Pech2000)
        LAP = fspecial('laplacian');
        ILAP = imfilter(Image, LAP, 'replicate', 'conv');
        FM = std2(ILAP)^2;

    case 'LAPD' % Diagonal laplacian (Thelen2009)
        M1 = [-1 2 -1];
        M2 = [0 0 -1;0 2 0;-1 0 0]/sqrt(2);
        M3 = [-1 0 0;0 2 0;0 0 -1]/sqrt(2);
        F1 = imfilter(Image, M1, 'replicate', 'conv');
        F2 = imfilter(Image, M2, 'replicate', 'conv');
        F3 = imfilter(Image, M3, 'replicate', 'conv');
        F4 = imfilter(Image, M1', 'replicate', 'conv');
        FM = abs(F1) + abs(F2) + abs(F3) + abs(F4);
        FM = mean2(FM);

    case 'SFIL' % Steerable filters (Minhas2009)
        % Angles = [0 45 90 135 180 225 270 315];
        N = floor(WSize/2);
        sig = N/2.5;
        [x,y] = meshgrid(-N:N, -N:N);
        G = exp(-(x.^2+y.^2)/(2*sig^2))/(2*pi*sig);
        Gx = -x.*G/(sig^2); Gx = Gx/sum(Gx(:));
        Gy = -y.*G/(sig^2); Gy = Gy/sum(Gy(:));
        R(:,:,1) = imfilter(double(Image), Gx, 'conv', 'replicate');
        R(:,:,2) = imfilter(double(Image), Gy, 'conv', 'replicate');
        R(:,:,3) = cosd(45)*R(:,:,1)+sind(45)*R(:,:,2);
        R(:,:,4) = cosd(135)*R(:,:,1)+sind(135)*R(:,:,2);
        R(:,:,5) = cosd(180)*R(:,:,1)+sind(180)*R(:,:,2);
        R(:,:,6) = cosd(225)*R(:,:,1)+sind(225)*R(:,:,2);
        R(:,:,7) = cosd(270)*R(:,:,1)+sind(270)*R(:,:,2);
        R(:,:,8) = cosd(315)*R(:,:,1)+sind(315)*R(:,:,2); % was R(:,:,7) twice, overwriting the 270-degree response
        FM = max(R,[],3);
        FM = mean2(FM);

    case 'SFRQ' % Spatial frequency (Eskicioglu95)
        Ix = Image;
        Iy = Image;
        Ix(:,1:end-1) = diff(Image, 1, 2);
        Iy(1:end-1,:) = diff(Image, 1, 1);
        FM = mean2(sqrt(double(Iy.^2+Ix.^2)));

    case 'TENG' % Tenengrad (Krotkov86)
        Sx = fspecial('sobel');
        Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
        Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
        FM = Gx.^2 + Gy.^2;
        FM = mean2(FM);

    case 'TENV' % Tenengrad variance (Pech2000)
        Sx = fspecial('sobel');
        Gx = imfilter(double(Image), Sx, 'replicate', 'conv');
        Gy = imfilter(double(Image), Sx', 'replicate', 'conv');
        G = Gx.^2 + Gy.^2;
        FM = std2(G)^2;

    case 'VOLA' % Vollath's correlation (Santos97)
        Image = double(Image);
        I1 = Image; I1(1:end-1,:) = Image(2:end,:);
        I2 = Image; I2(1:end-2,:) = Image(3:end,:);
        Image = Image.*(I1-I2);
        FM = mean2(Image);

    case 'WAVS' % Sum of Wavelet coeffs (Yang2003)
        [C,S] = wavedec2(Image, 1, 'db6');
        H = wrcoef2('h', C, S, 'db6', 1);
        V = wrcoef2('v', C, S, 'db6', 1);
        D = wrcoef2('d', C, S, 'db6', 1);
        FM = abs(H) + abs(V) + abs(D);
        FM = mean2(FM);

    case 'WAVV' % Variance of Wav...(Yang2003)
        [C,S] = wavedec2(Image, 1, 'db6');
        H = abs(wrcoef2('h', C, S, 'db6', 1));
        V = abs(wrcoef2('v', C, S, 'db6', 1));
        D = abs(wrcoef2('d', C, S, 'db6', 1));
        FM = std2(H)^2+std2(V)+std2(D);

    case 'WAVR'
        [C,S] = wavedec2(Image, 3, 'db6');
        H = abs(wrcoef2('h', C, S, 'db6', 1));
        V = abs(wrcoef2('v', C, S, 'db6', 1));
        D = abs(wrcoef2('d', C, S, 'db6', 1));
        A1 = abs(wrcoef2('a', C, S, 'db6', 1));
        A2 = abs(wrcoef2('a', C, S, 'db6', 2));
        A3 = abs(wrcoef2('a', C, S, 'db6', 3));
        A = A1 + A2 + A3;
        WH = H.^2 + V.^2 + D.^2;
        WH = mean2(WH);
        WL = mean2(A);
        FM = WH/WL;

    otherwise
        error('Unknown measure %s',upper(Measure))
end
end

%************************************************************************
function fm = AcMomentum(Image)
[M N] = size(Image);
Hist = imhist(Image)/(M*N);
Hist = abs((0:255)-255*mean2(Image))'.*Hist;
fm = sum(Hist);
end

%******************************************************************
function fm = DctRatio(M)
MT = dct2(M).^2;
fm = (sum(MT(:))-MT(1,1))/MT(1,1);
end

%************************************************************************
function fm = ReRatio(M)
M = dct2(M);
fm = (M(1,2)^2+M(1,3)^2+M(2,1)^2+M(2,2)^2+M(3,1)^2)/(M(1,1)^2);
end
%******************************************************************
A few examples of OpenCV versions:
// OpenCV port of 'LAPM' algorithm (Nayar89)
double modifiedLaplacian(const cv::Mat& src)
{
cv::Mat M = (cv::Mat_<double>(3, 1) << -1, 2, -1);
cv::Mat G = cv::getGaussianKernel(3, -1, CV_64F);
cv::Mat Lx;
cv::sepFilter2D(src, Lx, CV_64F, M, G);
cv::Mat Ly;
cv::sepFilter2D(src, Ly, CV_64F, G, M);
cv::Mat FM = cv::abs(Lx) + cv::abs(Ly);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
}
// OpenCV port of 'LAPV' algorithm (Pech2000)
double varianceOfLaplacian(const cv::Mat& src)
{
cv::Mat lap;
cv::Laplacian(src, lap, CV_64F);
cv::Scalar mu, sigma;
cv::meanStdDev(lap, mu, sigma);
double focusMeasure = sigma.val[0]*sigma.val[0];
return focusMeasure;
}
// OpenCV port of 'TENG' algorithm (Krotkov86)
double tenengrad(const cv::Mat& src, int ksize)
{
cv::Mat Gx, Gy;
cv::Sobel(src, Gx, CV_64F, 1, 0, ksize);
cv::Sobel(src, Gy, CV_64F, 0, 1, ksize);
cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
double focusMeasure = cv::mean(FM).val[0];
return focusMeasure;
}
// OpenCV port of 'GLVN' algorithm (Santos97)
double normalizedGraylevelVariance(const cv::Mat& src)
{
cv::Scalar mu, sigma;
cv::meanStdDev(src, mu, sigma);
double focusMeasure = (sigma.val[0]*sigma.val[0]) / mu.val[0];
return focusMeasure;
}
No guarantees on whether or not these measures are the best choice for your problem, but if you track down the papers associated with these measures, they may give you more insight. Hope you find the code useful! I know I did.
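For Python users, two of the ports above (LAPV and TENG) reduce to a few lines of cv2; a sketch, which should follow the same definitions:

import cv2

def variance_of_laplacian(gray):   # 'LAPV' (Pech2000)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad(gray, ksize=3):      # 'TENG' (Krotkov86)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)
    return (gx * gx + gy * gy).mean()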
Building off of nikie's answer: it's straightforward to implement the Laplacian-based method with OpenCV:
short GetSharpness(char* data, unsigned int width, unsigned int height)
{
// assumes that your image is already in planar YUV or 8-bit greyscale
IplImage* in = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,1);
IplImage* out = cvCreateImage(cvSize(width,height),IPL_DEPTH_16S,1);
memcpy(in->imageData,data,width*height);
// aperture size of 1 corresponds to the correct matrix
cvLaplace(in, out, 1);
short maxLap = -32767;
short* imgData = (short*)out->imageData;
for(int i =0;i<(out->imageSize/2);i++)
{
if(imgData[i] > maxLap) maxLap = imgData[i];
}
cvReleaseImage(&in);
cvReleaseImage(&out);
return maxLap;
}
Will return a short indicating the maximum sharpness detected, which, based on my tests on real-world samples, is a pretty good indicator of whether a camera is in focus or not. Not surprisingly, normal values are scene-dependent, but much less so than the FFT method, which has too high a false-positive rate to be useful in my application.
I came up with a totally different solution.
I needed to analyse video still frames to find the sharpest one in every (X) frames. This way, I would detect motion blur and/or out-of-focus images.
I ended up using Canny edge detection and I got VERY VERY good results with almost every kind of video (with nikie's method, I had problems with digitized VHS videos and heavily interlaced videos).
I optimized the performance by setting a region of interest (ROI) on the original image.
Using EmguCV :
//Convert image using Canny
using (Image<Gray, byte> imgCanny = imgOrig.Canny(225, 175))
{
//Count the number of pixel representing an edge
int nCountCanny = imgCanny.CountNonzero()[0];
//Compute a sharpness grade:
//< 1.5 = blurred, in movement
//1.5 to 6 = acceptable
//> 6 =stable, sharp
double dSharpness = (nCountCanny * 1000.0 / (imgCanny.Cols * imgCanny.Rows));
}
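For reference, the same edge-density idea is a few lines in Python/cv2 as well; a sketch carrying over the 225/175 thresholds from the EmguCV call above:

import cv2

def sharpness_grade(gray):
    edges = cv2.Canny(gray, 225, 175)
    # edge pixels per 1000 pixels, as in the EmguCV version
    return cv2.countNonZero(edges) * 1000.0 / (edges.shape[0] * edges.shape[1])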
Thanks nikie for that great Laplace suggestion.
OpenCV docs pointed me in the same direction:
Using Python, cv2 (OpenCV 2.4.10), and numpy...
import cv2
import numpy
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
numpy.max(cv2.convertScaleAbs(cv2.Laplacian(gray, 3)))
The result is between 0 and 255. I found anything over 200ish is very much in focus, and by 100, it's noticeably blurry. The max never really gets much under 20 even if the image is completely blurred.
One way which I'm currently using measures the spread of edges in the image. Look for this paper:
@ARTICLE{Marziliano04perceptualblur,
  author  = {Pina Marziliano and Frederic Dufaux and Stefan Winkler and Touradj Ebrahimi},
  title   = {Perceptual blur and ringing metrics: Application to JPEG2000},
  journal = {Signal Processing: Image Communication},
  year    = {2004},
  pages   = {163--172}
}
It's usually behind a paywall but I've seen some free copies around. Basically, they locate vertical edges in an image, and then measure how wide those edges are. Averaging the width gives the final blur estimation result for the image. Wider edges correspond to blurry images, and vice versa.
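To make the idea concrete, here is a heavily simplified Python sketch (not the authors' exact algorithm): for each strong vertical-edge pixel, walk left and right along the row while the intensity ramp continues, and record the ramp's extent as the edge width.

import cv2
import numpy as np

def mean_edge_width(gray, edge_thresh=40):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # strong gx marks vertical edges
    g = gray.astype(int)
    widths = []
    for y, x in zip(*np.where(np.abs(gx) > edge_thresh)):
        sign = 1 if gx[y, x] > 0 else -1
        left, right = x, x
        while left > 0 and sign * (g[y, left] - g[y, left - 1]) > 0:
            left -= 1
        while right < g.shape[1] - 1 and sign * (g[y, right + 1] - g[y, right]) > 0:
            right += 1
        widths.append(right - left)
    return np.mean(widths) if widths else 0.0  # wider = blurrier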
This problem belongs to the field of no-reference image quality estimation. If you look it up on Google Scholar, you'll get plenty of useful references.
EDIT
Here's a plot of the blur estimates obtained for the 5 images in nikie's post. Higher values correspond to greater blur. I used a fixed-size 11x11 Gaussian filter and varied the standard deviation (using ImageMagick's convert command to obtain the blurred images).
If you compare images of different sizes, don't forget to normalize by the image width, since larger images will have wider edges.
Finally, a significant problem is distinguishing between artistic blur and undesired blur (caused by focus miss, compression, relative motion of the subject to the camera), but that is beyond simple approaches like this one. For an example of artistic blur, have a look at the Lenna image: Lenna's reflection in the mirror is blurry, but her face is perfectly in focus. This contributes to a higher blur estimate for the Lenna image.
Answers above elucidated many things, but I think it is useful to make a conceptual distinction.
What if you take a perfectly on-focus picture of a blurred image?
The blur detection problem is only well posed when you have a reference. If you need to design, e.g., an auto-focus system, you compare a sequence of images taken with different degrees of blurring, or smoothing, and you try to find the point of minimum blurring within this set. In other words, you need to cross-reference the various images using one of the techniques illustrated above (basically, with various possible levels of refinement in the approach, looking for the one image with the highest high-frequency content).
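In code, that relative comparison can be as simple as scoring every candidate with one of the measures above and keeping the best one; a sketch using variance of the Laplacian as the score:

import cv2

def sharpest(frames):
    # frames: iterable of grayscale images; returns the least blurred one
    return max(frames, key=lambda f: cv2.Laplacian(f, cv2.CV_64F).var())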
I tried the Laplacian-filter-based solution from this post. It didn't help me. So I tried the solution from this post, and it was good for my case (but it is slow):
import cv2

image = cv2.imread("test.jpeg")
height, width = image.shape[:2]
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

def px(x, y):
    return int(gray[y, x])

sum = 0
for x in range(width - 1):
    for y in range(height):
        sum += abs(px(x, y) - px(x + 1, y))
The less blurred image has the higher sum value!
You can also tune speed vs. accuracy by changing the step, e.g. replacing this line:
for x in range(width - 1):
with this one:
for x in range(0, width - 1, 10):
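If speed matters more, the metric also vectorizes completely; the following NumPy line should match the full-resolution loop above exactly ('gray' as in the snippet above):

import numpy as np

total = np.abs(np.diff(gray.astype(np.int64), axis=1)).sum()  # sum of |px(x,y) - px(x+1,y)|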
MATLAB code for two methods that have been published in highly regarded journals (IEEE Transactions on Image Processing) is available here: https://ivulab.asu.edu/software
Check the CPBDM and JNBM algorithms. If you look at the code, it's not very hard to port, and incidentally it is based on Marziliano's method as its basic feature.
I implemented this using the FFT in MATLAB: check the histogram of the FFT and compute its mean and standard deviation (a curve fit could also be done):
fa = abs(fftshift(fft2(sharp_img)));  % fft2 for 2-D images (plain fft would operate column-wise)
fb = abs(fftshift(fft2(blured_img)));
f1=20*log10(0.001+fa);
f2=20*log10(0.001+fb);
figure,imagesc(f1);title('org')
figure,imagesc(f2);title('blur')
figure,hist(f1(:),100);title('org')
figure,hist(f2(:),100);title('blur')
mf1=mean(f1(:));
mf2=mean(f2(:));
mfd1=median(f1(:));
mfd2=median(f2(:));
sf1=std(f1(:));
sf2=std(f2(:));
This is what I do in OpenCV to detect focus quality in a region:
Mat grad;
int scale = 1;
int delta = 0;
int ddepth = CV_8U;
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel(matFromSensor, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT);
/// Gradient Y
Sobel(matFromSensor, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs(grad_x, abs_grad_x);
convertScaleAbs(grad_y, abs_grad_y);
addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
cv::Scalar mu, sigma;
cv::meanStdDev(grad, /* mean */ mu, /*stdev*/ sigma);
double focusMeasure = mu.val[0] * mu.val[0];
I have been trying to write my own convolution operator instead of using the built-in one that comes with Java. I applied the built-in convolution operator with a Gaussian filter to this image:
link
and got this image:
link
Now I run the same image through my code:
public static int convolve(BufferedImage a, int x, int y){
    int red = 0, green = 0, blue = 0;
    float[] matrix = {
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
        0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
    };
    for(int i = x; i < x+3; i++){
        for(int j = y; j < y+3; j++){
            int color = a.getRGB(i, j);
            red += Math.round(((color >> 16) & 0xff) * matrix[(i-x)*3 + j-y]);
            green += Math.round(((color >> 8) & 0xff) * matrix[(i-x)*3 + j-y]);
            blue += Math.round(((color >> 0) & 0xff) * matrix[(i-x)*3 + j-y]);
        }
    }
    return (a.getRGB(x, y) & 0xFF000000) | (red << 16) | (green << 8) | (blue);
}
And the result I got is this:
link
Also, how do I optimize the code that I wrote? The built-in convolution operator takes 1-2 seconds, while my code, even though it is not yet serving the exact purpose it is supposed to, takes 5-7 seconds!
I accidentally rotated my source image while uploading. So please ignore that.
First of all, you are needlessly (and wrongly) converting your result from float to int at each iteration of the loop. Your red, green and blue should be of type float and should be cast back to integer only after the convolution (when converted back to RGB):
float red = 0.0f, green = 0.0f, blue = 0.0f;
for(int i = x; i < x+3; i++){
    for(int j = y; j < y+3; j++){
        int color = a.getRGB(i, j);
        red += ((color >> 16) & 0xff) * matrix[(i-x)*3 + j-y];
        green += ((color >> 8) & 0xff) * matrix[(i-x)*3 + j-y];
        blue += ((color >> 0) & 0xff) * matrix[(i-x)*3 + j-y];
    }
}
return (a.getRGB(x, y) & 0xFF000000) | (((int)red) << 16) | (((int)green) << 8) | ((int)blue);
The bleeding of colors in your result is caused because your coefficients in matrix are wrong:
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f +
0.2196956447338621f + 0.28209479177387814f + 0.2196956447338621f +
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f =
1.8452741
The sum of the coefficients in a blurring convolution matrix should be 1.0. When you apply this matrix to an image you may get colors that are over 255. When that happens the channels "bleed" into the next channel (blue to green, etc.).
A completely green image with this matrix would result in:
green = 255 * 1.8452741 ~= 471 = 0x01D7;
rgb = 0xFF01D700;
Which is a less intense green with a hint of red.
You can fix that by dividing the coefficients by 1.8452741, but you want to make sure that:
(int)(255.0f * (sum of coefficients)) = 255
If not you need to add a check which limits the size of channels to 255 and don't let them wrap around. E.g.:
if (red > 255.0f)
red = 255.0f;
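To illustrate the normalization (a hypothetical Python sketch; the arithmetic is the same in Java):

matrix = [0.1710991401561097, 0.2196956447338621, 0.1710991401561097,
          0.2196956447338621, 0.28209479177387814, 0.2196956447338621,
          0.1710991401561097, 0.2196956447338621, 0.1710991401561097]
s = sum(matrix)                       # 1.8452741...
normalized = [c / s for c in matrix]  # now sums to 1.0, so channels cannot overflow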
Regarding efficiency/optimization:
It could be that the difference in speed may be explained by this needless casting and the calls to Math.round, but a more likely candidate is the way you are accessing the image. I'm not familiar enough with BufferedImage and Raster to advise you on the most efficient way to access the underlying image buffer.
Does anyone know how adjustment layers work in Photoshop? I need to generate a result image given a source image and the HSL values from a Hue/Saturation adjustment layer. Converting to RGB and then multiplying with the source color does not work.
Or is it possible to replace the Hue/Saturation adjustment layer with normal layers with appropriately set blending modes (Multiply, Screen, Hue, Saturation, Color, Luminosity, ...)?
If so then how?
Thanks
I've reverse-engineered the computation for when the "Colorize" checkbox is checked. All of the code below is pseudo-code.
The inputs are:
hueRGB, which is an RGB color for HSV(photoshop_hue, 100, 100).ToRGB()
saturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
lightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
value, which is the pixel.ToHSV().Value, scaled into 0..1 range.
The method to colorize a single pixel:
color = blend2(rgb(128, 128, 128), hueRGB, saturation);
if (lightness <= -1)
    return black;
else if (lightness >= 1)
    return white;
else if (lightness >= 0)
    return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
else
    return blend3(black, color, white, 2 * (1 + lightness) * (value) - 1)
Where blend2 and blend3 are:
blend2(left, right, pos):
    return rgb(left.R * (1-pos) + right.R * pos, same for green, same for blue)

blend3(left, main, right, pos):
    if (pos < 0)
        return blend2(left, main, pos + 1)
    else if (pos > 0)
        return blend2(main, right, pos)
    else
        return main
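A direct Python transcription of the pseudo-code, in case it helps (a sketch; hue_rgb, saturation, lightness and value are prepared exactly as in the input definitions above):

def blend2(left, right, pos):
    return tuple(l * (1 - pos) + r * pos for l, r in zip(left, right))

def blend3(left, main, right, pos):
    if pos < 0:
        return blend2(left, main, pos + 1)
    if pos > 0:
        return blend2(main, right, pos)
    return main

def colorize(hue_rgb, saturation, lightness, value):
    black, white = (0.0, 0.0, 0.0), (255.0, 255.0, 255.0)
    color = blend2((128.0, 128.0, 128.0), hue_rgb, saturation)
    if lightness <= -1:
        return black
    if lightness >= 1:
        return white
    if lightness >= 0:
        return blend3(black, color, white, 2 * (1 - lightness) * (value - 1) + 1)
    return blend3(black, color, white, 2 * (1 + lightness) * value - 1)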
I have figured out how Lightness works.
The input brightness parameter b is in [0, 2]; the output is c (a color channel).
if(b<1) c = b * c;
else c = c + (b-1) * (1-c);
Some tests:
b = 0 >>> c = 0 // black
b = 1 >>> c = c // same color
b = 2 >>> c = 1 // white
However, if you choose some interval (e.g. Reds instead of Master), Lightness behaves completely differently, more like Saturation.
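For the Master channel case, the formula is trivial to apply per channel; a small Python sketch with c in [0, 1]:

def apply_lightness(c, b):
    # b in [0, 2]; b < 1 darkens toward black, b > 1 blends toward white
    return b * c if b < 1 else c + (b - 1) * (1 - c)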
Photoshop, dunno. But the theory is usually: The RGB image is converted to HSL/HSV by the particular layer's internal methods; each pixel's HSL is then modified according to the specified parameters, and the so-obtained result is being provided back (for displaying) in RGB.
PaintShopPro7 used to split up the H space (assuming a range of 0..360) into discrete increments of 30° (IIRC), so if you bumped only the "yellows", only pixels whose H component was valued 45..75 would be considered for manipulation.
reds 345..15, oranges 15..45, yellows 45..75, yellowgreen 75..105, greens 105..135, etc.
if (h >= 45 && h < 75)
    s += s * yellow_percent;
There are alternative possibilities, such as applying a falloff filter, as in:
/* For h=60, let m=1... and linearly fall off to m=0 at h=45 and h=75. */
m = 1 - abs(h - 60) / 15;
if (m < 0)
    m = 0;
s += s * yellow_percent * m;
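Combining the band check with the falloff, a small Python sketch of the yellow-band saturation boost (the 60/15 centre and width are the same assumptions as above):

def boost_yellows(h, s, yellow_percent):
    # h in degrees (0..360), s in [0, 1]
    m = max(0.0, 1 - abs(h - 60) / 15)  # 1 at h=60, fading to 0 at h=45 and h=75
    return s + s * yellow_percent * m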
Hello, I wrote a colorize shader and my equation is as follows:
inputRGB is the source image, which should be monochrome ((r+g+b) * 0.333)
colorRGB is your destination color
finalRGB is the result
pseudo code:
finalRGB = inputRGB * (colorRGB + inputRGB * 0.5);
I think it's fast and efficient
I translated @Roman Starkov's solution to Java in case anyone needs it, but for some reason it didn't work so well. Then I started reading a bit and found that the solution is very simple; there are 2 things that have to be done:
When changing the hue or saturation, replace only the hue and saturation of the original image; the lightness stays as it was in the original image. This blend method is called "luminosity" blend mode (section 10.2.4):
https://www.w3.org/TR/compositing-1/#backdrop
When changing the lightness in Photoshop, the slider indicates what percentage we need to add or subtract to/from the original lightness in order to get to white or black in HSL.
For example:
If the original pixel has 0.7 lightness and the lightness slider = 20,
then we need 0.3 more lightness in order to get to 1,
so we add to the original pixel's lightness: 0.7 + 0.2*0.3;
this will be the new blended lightness value for the new pixel.
@Roman Starkov's solution, Java implementation:
//newHue, which is photoshop_hue (i.e. 0..360)
//newSaturation, which is photoshop_saturation / 100.0 (i.e. 0..1)
//newLightness, which is photoshop_lightness / 100.0 (i.e. -1..1)
//returns rgb int array of new color
private static int[] colorizeSinglePixel(int originalPixel, int newHue, float newSaturation, float newLightness)
{
    float[] originalPixelHSV = new float[3];
    Color.colorToHSV(originalPixel, originalPixelHSV);
    float originalPixelLightness = originalPixelHSV[2];

    // Android's Color.HSVToColor expects S and V in [0, 1], not percent:
    float[] hueRGB_HSV = {newHue, 1.0f, 1.0f};
    int[] hueRGB = {Color.red(Color.HSVToColor(hueRGB_HSV)),
                    Color.green(Color.HSVToColor(hueRGB_HSV)),
                    Color.blue(Color.HSVToColor(hueRGB_HSV))};

    int[] color = blend2(new int[]{128, 128, 128}, hueRGB, newSaturation);
    int[] blackColor = {Color.red(Color.BLACK), Color.green(Color.BLACK), Color.blue(Color.BLACK)};
    int[] whiteColor = {Color.red(Color.WHITE), Color.green(Color.WHITE), Color.blue(Color.WHITE)};

    if (newLightness <= -1) {
        return blackColor;
    } else if (newLightness >= 1) {
        return whiteColor;
    } else if (newLightness >= 0) {
        // Keep pos as a float; truncating it to int breaks the blend.
        return blend3(blackColor, color, whiteColor, 2 * (1 - newLightness) * (originalPixelLightness - 1) + 1);
    } else {
        return blend3(blackColor, color, whiteColor, 2 * (1 + newLightness) * originalPixelLightness - 1);
    }
}

private static int[] blend2(int[] left, int[] right, float pos)
{
    return new int[]{(int) (left[0] * (1 - pos) + right[0] * pos),
                     (int) (left[1] * (1 - pos) + right[1] * pos),
                     (int) (left[2] * (1 - pos) + right[2] * pos)};
}

private static int[] blend3(int[] left, int[] main, int[] right, float pos)
{
    if (pos < 0) {
        return blend2(left, main, pos + 1);
    } else if (pos > 0) {
        return blend2(main, right, pos);
    } else {
        return main;
    }
}
When the “Colorize” checkbox is checked, the lightness of the underlying layer is combined with the values of the Hue and Saturation sliders and converted from HSL to RGB according to the equations at https://en.wikipedia.org/wiki/HSL_and_HSV#From_HSL . (The Lightness slider just remaps the lightness to a subset of the scale as you can see from watching the histogram; the effect is pretty awful and I don’t see why anyone would ever use it.)
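So, per pixel, the colorize output can be sketched in a couple of lines with Python's colorsys, assuming hue in degrees, saturation in [0, 1], and the underlying layer's lightness in [0, 1]:

import colorsys

def colorize_pixel(lightness, hue_deg, saturation):
    r, g, b = colorsys.hls_to_rgb(hue_deg / 360.0, lightness, saturation)
    return int(r * 255), int(g * 255), int(b * 255)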