I'm trying to translate a Photoshop setting for sharpening images to GraphicsMagick. I found this helpful article:
https://redskiesatnight.com/2005/04/06/sharpening-using-image-magick/
The problem is that if I use the Photoshop-equivalent values explained in the article in GraphicsMagick, the images are not as sharp and clear as in Photoshop.
For example, I use these settings in Photoshop:
Strength: 500%
Radius: 2.0 Pixel
Threshold: 8
In the article the parameters are explained like this:
The radius parameter
The radius parameter specifies (official documentation)
“the radius of the Gaussian, in pixels, not counting the center pixel”
Unsharp masking, like many other image-processing filters, is a
convolution kernel operation. The filter processes the image pixel by
pixel. For each pixel it examines a block of pixels surrounding it
(the kernel) and does some calculations on them to render the output
pixel value. The radius parameter determines which pixels surrounding
the center pixel will be considered in the convolution kernel: (think
of a circle) the larger the radius, the more pixels that will need to
be processed for each individual pixel.
Image Magick’s radius is similar to the same parameter in Photoshop,
GIMP and other image editors. In practical terms, it affects the size
of the “halos” created to increase contrast along the edges in the
image, increasing acutance and thus apparent sharpness.
How do you know how big of a radius to use? It depends on your output
target resolution, for one thing. It also depends on your personal
preferences, as well as the specific needs of the image at hand. As
far as the resolution issue goes, the GIMP User Manual recommends that
unsharp mask radius be set as follows:
radius = (output ppi / 30) * 0.2
Which is very similar to another commonly found rule of thumb:
radius = output ppi / 150
So for a monitor with 72 PPI resolution, you'd use a radius of approximately 0.5; if you're targeting a printer
at 300 PPI you'd use a value of 2.0. Use these as a starting point;
different images have different sharpening requirements, and
individual preference is also a consideration. [Aside: there are a few
postings around the net (including some referenced in this article)
that suggest that Image Magick accepts, but does not honor, fractional
radii; that is, if you specify a radius of 0.5 or 1.2 it is rounded,
or defaults to an integer, or is silently ignored, etc. This is not
true, at least as of version 5.4.7, which is the one that I am using
as I write this article. You can easily see for yourself by doing
something like the following:
$ convert -unsharp 1.2x1.2+5+0 test.tif testo1.tif
$ convert -unsharp 1.4x1.4+5+0 test.tif testo2.tif
$ composite -compose difference testo1.tif testo2.tif diff.tif
$ display diff.tif
you can also load
them into the GIMP or Photoshop into different layers and change the
blend mode to “Difference”; the resulting image is not black (you may
need to look closely for a 0.2 difference in radius). No, this
mistaken impression likely comes from the fact that there is a
relationship between the radius and sigma parameters, and if you do
not specify sigma properly in relation to the radius, the radius may
indeed be changed, or at least not work as expected. Read on for more
on this.]
Please note that the default radius (if you do not specify anything)
is 0, a special value which tells the unsharp mask algorithm to
“select an appropriate value for the radius”!.
The sigma parameter
The sigma parameter specifies (official documentation)
“the standard deviation of the Gaussian, in pixels”
This is the most confusing parameter of the four, probably because it
is “invisible” in other implementations of unsharp masking, and it is
most sparsely documented. The best explanation I have found for it
came from a google search that unearthed an archived mailing list
thread which had the following snippet:
Comparing the results of
convert -unsharp 1.2x1+4+0 test test1.2x1+4+0
and
convert -unsharp 30x1+4+0 test test30x1+4+0
results in no significant differences but the latter takes approx. 50 times
longer to complete.
That is not surprising. A radius of 30 involves on the order of 61x61
input pixels in the convolution of each output pixel. A radius of 1.2
involves 3x3 or 5x5 pixels.
Please can anybody give me any hints, what 'sigma' means?
It describes the relative weight of pixels as a function of their
distance from the center of the convolution kernel. For small sigma,
the outer pixels have little weight. Another important clue comes from
the documentation for the -unsharp option to convert (emphasis mine):
The -unsharp option sharpens an image. We convolve the image with a
Gaussian operator of the given radius and standard deviation (sigma).
For reasonable results, radius should be larger than sigma. Use a
radius of 0 to have the method select a suitable radius.
Combining the two clues provides some good insight: sigma is a
parameter that gives you some control over how the strength (given by
the amount parameter) of the sharpening is “graduated” or lessened as
you radiate away from a given pixel at the center of the convolution
matrix to the limit defined by the radius. My testing confirms this
inferred conclusion, namely that a bigger sigma causes more pronounced
sharpening for a given radius. That is why the poster in the mailing
list question (above) did not see any significant difference in the
sharpening even though he was using an amount of 400% (!!) and a
threshold of 0%; with a sigma of only 1.0, the strength of the filter
falls off too rapidly to be noticed despite the large difference in
radius between the two invocations. This is also why the man page says
“for reasonable results, radius should be larger than sigma”; if it is
not, then the sigma parameter does not have a graduated effect, as
designed, to “soften” the halos toward their edges; instead it simply
applies the amount evenly to the edge of the radius (which may be what
you want in some circumstances). A general rule of thumb for choosing
sigma might be:
if radius < 1, then sigma = radius
else sigma = sqrt(radius)
Summary: choose your radius first, then choose a sigma smaller than or equal to
that. Experimentation will yield the best results. Please note that
the default sigma (if you do not specify anything) is 1.0. This is the
main culprit for why most people don’t see as much effect with Image
Magick’s unsharp mask operator as they do with other implementations
of unsharp mask if they are using a larger radius: unless you bump up
this parameter you are not getting the full benefit of the larger
radius!
[Aside: you might be wondering what happens if sigma is specified
larger than the radius. The answer, as the documentation states, is
that the result may not be “reasonable”. In my testing, the usual
result is that the sharpening is extended at the specified amount to
the edge of the specified radius, and larger values of sigma have
little if any effect. In some cases (e.g. for radius < 1) specifying a
larger sigma increased the effective radius (e.g. to 1); this may be
the result of a “sanity check” on the parameters in the code. In any
case, keep in mind that the algorithm is designed for sigma to be less
than or equal to the radius, and results may be unexpected if used
otherwise.]
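As a quick, non-authoritative illustration of the two rules of thumb quoted above (radius derived from the output PPI, sigma derived from the radius), here is a minimal Python sketch; the function names are made up for illustration and only the formulas come from the article:

import math

def suggested_radius(output_ppi):
    # Rule of thumb from the article: radius = output ppi / 150
    return output_ppi / 150.0

def suggested_sigma(radius):
    # Article's rule: sigma = radius for radius < 1, else sqrt(radius)
    return radius if radius < 1 else math.sqrt(radius)

for ppi in (72, 300):
    r = suggested_radius(ppi)
    print(ppi, "ppi -> radius", r, "sigma", round(suggested_sigma(r), 2))
    # 72 ppi  -> radius 0.48, sigma 0.48
    # 300 ppi -> radius 2.0,  sigma 1.41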
The amount parameter
The amount parameter specifies (official documentation)
“the percentage of the difference between the original and the blur
image that is added back into the original”
The amount parameter can be thought of as the “strength” of the
sharpening: higher values increase contrast in the halos more than
lower ones. Very large values may cause highlights on halos to blow
out or shadows on them to block up. If this happens, consider using a
lower amount with a higher radius instead.
amount should be specified as a decimal value indicating the
percentage. So, for example, if in Photoshop you would use an amount
of 170 (170%), in Image Magick you would use 1.7.
Please note that the default amount (if you do not specify anything)
is 1.0 (i.e. 100%).
The threshold parameter
The threshold parameter specifies (official documentation)
“as a fraction of MaxRGB, needed to apply the difference amount”
The threshold specifies a minimum amount of difference between the
center pixel and the surrounding pixels in the convolution kernel
necessary to apply the local contrast enhancement. Increasing this
value causes the algorithm to become less sensitive to differences
that may define edges. Specifying a positive threshold is often used
to avoid sharpening smooth areas that may contain noise (e.g. an area
of blue sky). If you have a noisy image, strongly consider raising the
threshold, or using some kind of smart sharpening technique instead.
The threshold parameter should be specified as a decimal value
indicating this percentage. This is different than GIMP or Photoshop,
which both specify the threshold in actual pixel levels between 0 and
the maximum (for 8-bit images, 255).
Please note that the default threshold (if you do not specify
anything) is 0.05 (i.e. 5%; this corresponds to a threshold of .05 *
255 = 12-13 in Photoshop). Photoshop uses a default threshold of 0
(i.e. no threshold) and the unsharp masking is applied evenly
throughout the image. If that is what you want you will need to
specify a 0.0 value for Image Magick’s threshold. This is undoubtedly
another source of confusion regarding Image Magick’s sharpening
algorithm.
So I did that and came up with this command:
gm convert file1.jpg -unsharp 2x1.41+5+0.03 file1_2x1.41+5+0.03.jpg
But as I said, the images do not come out as sharp as in Photoshop. We also experimented with a lot of other values, but without good results. So is it possible to do Photoshop-style sharpening with GraphicsMagick, or is it just not a good library for this? The main reason we can't simply use Photoshop for sharpening is that we want to process the images on our Linux server, and Photoshop does not really run well on Linux.
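For reference, here is a minimal Python sketch of how the article's conversion rules would map the Photoshop settings above (500%, radius 2.0 px, threshold 8) to a gm -unsharp argument; the file names and the gm invocation are placeholders, and whether the result visually matches Photoshop is not guaranteed. Note in particular that a Photoshop threshold of 8 maps to 8/255 ≈ 0.031, and 500% maps to 5.0:

import math
import subprocess

def photoshop_to_gm_unsharp(amount_percent, radius_px, threshold_level, max_level=255):
    sigma = radius_px if radius_px < 1 else math.sqrt(radius_px)   # article's rule of thumb
    amount = amount_percent / 100.0                                # 500% -> 5.0
    threshold = threshold_level / float(max_level)                 # 8 -> ~0.031
    return "{:g}x{:.2f}+{:g}+{:.3f}".format(radius_px, sigma, amount, threshold)

args = photoshop_to_gm_unsharp(500, 2.0, 8)
print(args)  # 2x1.41+5+0.031
# Hypothetical invocation; adjust file names to your own pipeline:
subprocess.run(["gm", "convert", "file1.jpg", "-unsharp", args, "file1_sharpened.jpg"], check=True)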
Related
I can clearly understand how bilinear interpolation works when upscaling an image, e.g. filling in values by taking the 4 nearest neighbours, but I can't understand how it works when downscaling an image. It would mean a lot to me if someone could clarify this for me.
Scaling an image requires mapping pixels from the input to pixels on the output. If those pixel coordinates don't map to an integer, interpolation is required to estimate what the pixel value would have been. The "Bi" part of bilinear means it's linear interpolation applied in two dimensions independently. If for example output pixel 2,3 needs to come from input coordinates 1.5,7.2 you would interpolate in the X direction by taking 0.5 of each of the pixels at 1.0 and 2.0, then interpolate in the Y direction by taking 0.8 of the pixel at 7.0 and 0.2 of the pixel at 8.0. Usually these operations are combined into a single set of equations, but they can be applied separately if needed.
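To make the two 1-D interpolations in that worked example explicit (output pixel coming from input coordinates 1.5, 7.2), here is a minimal numpy sketch; the toy array and coordinates are made up for illustration:

import numpy as np

def bilinear_sample(img, x, y):
    # img is indexed as img[row, col], i.e. img[y, x]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # interpolate in X on the two surrounding rows, then in Y between those results
    top    = (1 - fx) * img[y0,     x0] + fx * img[y0,     x0 + 1]
    bottom = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom

img = np.arange(100, dtype=float).reshape(10, 10)  # toy 10x10 image
print(bilinear_sample(img, 1.5, 7.2))  # 0.5 of columns 1 and 2, then 0.8/0.2 of rows 7 and 8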
Bilinear is a poor choice for downscaling because it leads to aliasing artifacts. Aliasing happens when the source contains spatial frequencies beyond the Nyquist limit of the new, lower sampling rate, so high-frequency detail turns into low-frequency artifacts. You can minimize this by blurring the image before you downscale it, or you can choose an interpolation algorithm that incorporates some low-pass filtering.
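A minimal OpenCV (Python) sketch of the two options just mentioned: pre-blurring before a bilinear downscale, versus letting the resampler do the low-pass filtering (INTER_AREA). The file name, scale factor, and the sigma rule are placeholders/assumptions, not a prescription:

import cv2

img = cv2.imread("input.jpg")           # placeholder file name
scale = 0.25                            # downscale to 25%

# Option 1: low-pass filter first, then bilinear resize
sigma = 1.0 / (2.0 * scale)             # rough rule: blur more for stronger downscaling
blurred = cv2.GaussianBlur(img, (0, 0), sigma)
small_1 = cv2.resize(blurred, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)

# Option 2: use an interpolator that already averages the source area
small_2 = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)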
When using the -canny option of the Imagemagick convert tool, what do these arguments refer to ?
-canny radiusxsigma{+lower-percent}{+upper-percent}
The documentation (https://www.imagemagick.org/script/command-line-options.php#canny) gives examples of what an increase or decrease of the percentages may result in, but I can't find the exact meaning of radiusxsigma and its relation to the two numbers following it (i.e. 10% and 30% in the doc example).
Might be worth jumping over to Wikipedia's Canny edge detector article.
The documentation assumes you are already familiar with the Gaussian function.
Both radius and sigma are user-defined constants; they are perhaps best described by the GaussianBlurImage method header documentation (quoted below):
GaussianBlurImage() blurs an image. We convolve the image with a
Gaussian operator of the given radius and standard deviation (sigma).
For reasonable results, the radius should be larger than sigma. Use a
radius of 0 and GaussianBlurImage() selects a suitable radius for you
The format of the GaussianBlurImage method is:
Image *GaussianBlurImage(const Image *image, const double radius,
  const double sigma, ExceptionInfo *exception)
A description of each parameter follows:
image: the image.
radius: the radius of the Gaussian, in pixels, not counting the center pixel.
sigma: the standard deviation of the Gaussian, in pixels.
exception: return any errors or warnings in this structure.
Better hands-on docs w/ examples here.
Now for the last two options...
{+lower-percent}{+upper-percent}
They are the lower and upper bounds of a threshold, defining an "envelope" or "range", if you will. They make up the hysteresis used to track edges.
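If it helps to see the hysteresis part in isolation, here is a minimal sketch using OpenCV's Canny in Python (not ImageMagick) purely as an analogue: the two numbers play the same role as the lower/upper percentages, except that OpenCV takes them as absolute gradient values rather than percentages of the intensity range. The file names and threshold values are placeholders:

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
img = cv2.GaussianBlur(img, (0, 0), 1.0)             # the radius x sigma smoothing step

# Edges stronger than the upper threshold are kept; weaker edges are kept
# only if they connect to a strong edge (hysteresis tracking).
edges = cv2.Canny(img, 50, 150)
cv2.imwrite("edges.png", edges)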
I am trying to blur a ROI in an image using a Gaussian filter and the ImageJ software.
I am getting the desired result with a blur radius of 9 in ImageJ.
Now I am trying to write the corresponding OpenCV C++ application to do the same operations I did with ImageJ.
The Gaussian Blur signature in openCV is as below:
C++: void GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT )
What are the sigmaX and sigmaY values corresponding to an ImageJ blur radius of 9?
I tried many resources such as:
Blur Radius
but I am not getting the same results with OpenCV.
Could you please elaborate on how the results are "not the same" ?
The blur radius in ImageJ is defined as "'Radius' means the radius of decay to exp(-0.5) ~ 61%, i.e. the standard deviation sigma of the Gaussian" (coming from ImageJ documentation : https://imagej.nih.gov/ij/developer/api/ij/plugin/filter/GaussianBlur.html#GaussianBlur--)
I see no reason why it should not be implemented the same way in OpenCV.
However, I also observe these differences between ImageJ and OpenCV gaussian blur.
While for the moment I have no solution to make these absolutely the same, I managed to get them closer, and I can see one potential difference and one definite difference in implementation:
Kernel size (potential difference):
Are you aware that kernel size and Gaussian radius are two different things? Kernel size is the size of the kernel applied to the image (3*3, 5*5, etc.), but inside this kernel a Gaussian with any radius can theoretically exist. However, the kernel size is often chosen such that on the kernel borders the Gaussian function has decayed to about zero.
This being said, ImageJ automatically chooses the kernel size for you depending on the radius you chose, in order to fulfill the "Gaussian decays to zero on borders" condition. The OpenCV function also does that if you set sigma to your desired radius and ksize to zero. The question is "do they both do it the same way?".
ImageJ's implementation of this is trickier than you might think: "In ImageJ, the size of the kernel actually used depends on the accuracy
needed: With sigma=1, for 16-bit and float images the kernel is 9 pixels
wide (which gives 9x9 for a 2D image), but for 8-bit or RGB images it is
only 7 pixels wide because there is no need for a very high accuracy if
there are only 256 different values. For large values of sigma, the situation is more complex: For sigma >=8, the data are first downscaled, then the Gaussian Blur is applied, and interpolation is used for upscaling to the original number of data points. The downscaling and interpolation algorithms are specially designed for best accuracy.", etc etc (coming from the "ImageJ forum", I can't post the link since I don't have enough reputation, but just google this quote if you want the source)
I do not know if OpenCV does such operations or if it computes the kernel size differently, thus giving different results. (couldn't find it with Google).
Borders (difference for sure): As you probably know, the Gaussian filter goes over every pixel in the image and computes a new value for this pixel based on its neighbors. But what about the pixels close to the borders, where the Gaussian kernel is wider than their distance from the image's border? How do the algorithms handle them? By inspecting my images closer, I found that the main differences between the OCV implementation and the IJ one were on the border pixels.
Well it turns out ImageJ and OpenCV handle these pixels differently :
ImageJ gaussian, "Like all convolution operations in ImageJ, it assumes that out-of-image pixels have a value equal to the nearest edge pixel." (from same ImageJ doc than above).
However, OpenCV lets you choose other options, and the default one, called BORDER_DEFAULT in the OpenCV call, is BORDER_REFLECT_101 (http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html) (at least I think it is; it is the default border for another method using borders, so I would assume it is also the default for the Gaussian). BORDER_REFLECT_101 sort of "mirrors" the borders (gfedcb|abcdefgh, see link).
To get closer to ImageJ (aaaaaaaa|abcdefgh), pass borderType=BORDER_REPLICATE instead of BORDER_DEFAULT. With this, I get closer results between the two implementations (though not exactly the same; I will keep investigating and edit my answer if I find more clues).
[Note: I am working in Python 2.7 (not C++) and OpenCV 3, but I don't think it has an impact on this problem.]
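Putting the two points together (sigma = ImageJ radius, ksize chosen automatically, and replicated borders), a minimal OpenCV-Python sketch would be something like the following; the file names are placeholders, and whether it matches ImageJ pixel for pixel is not guaranteed, for the kernel-size reasons discussed above:

import cv2

img = cv2.imread("roi.png")  # placeholder file name

# ImageJ's "radius" is the Gaussian sigma; ksize=(0,0) lets OpenCV derive the
# kernel size from sigma, and BORDER_REPLICATE mimics ImageJ's edge handling.
blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=9, sigmaY=9,
                           borderType=cv2.BORDER_REPLICATE)
cv2.imwrite("roi_blurred.png", blurred)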
I am looking for a "very" simple way to check whether an image bitmap is blurry. I do not need an accurate and complicated algorithm involving FFT, wavelets, etc. Just a very simple idea, even if it is not accurate.
I thought of computing the average Euclidean distance between pixel (x,y) and pixel (x+1,y), considering their RGB components, and then using a threshold, but it works very badly. Any other ideas?
Don't calculate the average differences between adjacent pixels.
Even when a photograph is perfectly in focus, it can still contain large areas of uniform colour, like the sky for example. These will push down the average difference and mask the details you're interested in. What you really want to find is the maximum difference value.
Also, to speed things up, I wouldn't bother checking every pixel in the image. You should get reasonable results by checking along a grid of horizontal and vertical lines spaced, say, 10 pixels apart.
Here are the results of some tests with PHP's GD graphics functions using an image from Wikimedia Commons (Bokeh_Ipomea.jpg). The Sharpness values are simply the maximum pixel difference values as a percentage of 255 (I only looked in the green channel; you should probably convert to greyscale first). The numbers underneath show how long it took to process the image.
If you want them, here are the source images I used:
original
slightly blurred
blurred
Update:
There's a problem with this algorithm in that it relies on the image having a fairly high level of contrast as well as sharp focused edges. It can be improved by finding the maximum pixel difference (maxdiff), and finding the overall range of pixel values in a small area centred on this location (range). The sharpness is then calculated as follows:
sharpness = (maxdiff / (offset + range)) * (1.0 + offset / 255) * 100%
where offset is a parameter that reduces the effects of very small edges so that background noise does not affect the results significantly. (I used a value of 15.)
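A minimal Python/numpy sketch of this metric (sparse grid sampling, maximum adjacent-pixel difference, and the offset/range correction above); the step, offset, and 9×9 window mirror the values mentioned in the answer, while the rest is just one possible way to code it:

import numpy as np

def sharpness(gray, step=10, offset=15, win=4):
    # gray: 2-D uint8 array (e.g. the green channel or a greyscale conversion)
    g = gray.astype(np.int32)
    # horizontal and vertical differences, sampled along a grid of lines
    dx = np.abs(np.diff(g[::step, :], axis=1))
    dy = np.abs(np.diff(g[:, ::step], axis=0))
    maxdiff = max(dx.max(), dy.max())
    # locate the strongest edge and measure the value range in a 9x9 area around it
    if dx.max() >= dy.max():
        r, c = np.unravel_index(dx.argmax(), dx.shape)
        y, x = r * step, c
    else:
        r, c = np.unravel_index(dy.argmax(), dy.shape)
        y, x = r, c * step
    patch = g[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1]
    rng = int(patch.max() - patch.min())
    return (maxdiff / (offset + rng)) * (1.0 + offset / 255.0) * 100.0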
This produces fairly good results. Anything with a sharpness of less than 40% is probably out of focus. Here are some examples (the locations of the maximum pixel difference and the 9×9 local search areas are also shown for reference):
(source)
(source)
(source)
(source)
The results still aren't perfect, though. Subjects that are inherently blurry will always result in a low sharpness value:
(source)
Bokeh effects can produce sharp edges from point sources of light, even when they are completely out of focus:
(source)
You commented that you want to be able to reject user-submitted photos that are out of focus. Since this technique isn't perfect, I would suggest that you instead notify the user if an image appears blurry instead of rejecting it altogether.
I suppose that, philosophically speaking, all natural images are blurry... How blurry, and to what extent, depends on your application. Broadly speaking, the blurriness or sharpness of an image can be measured in various ways. As a first easy attempt I would check the energy of the image, defined as the normalised sum of the squared pixel values:
E = (1/N) * Σ I², where I is the image and N the number of pixels (defined for grayscale)
First you may apply a Laplacian of Gaussian (LoG) filter to detect the "energetic" areas of the image and then check the energy. The blurry image should show considerably lower energy.
See an example in MATLAB using a typical grayscale lena image:
This is the original image
This is the blurry image, blurred with gaussian noise
This is the LoG image of the original
And this is the LoG image of the blurry one
If you just compute the energy of the two LoG images you get:
E_original = 1265 and E_blurred = 88
which is a huge amount of difference...
Then you just have to select a threshold to judge which amount of energy is good for your application...
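A minimal Python sketch of the same idea (LoG filter, then normalised energy), using scipy instead of MATLAB; the sigma and file names are placeholders, and the absolute energy values will differ from the MATLAB example above because of the normalisation:

import cv2
import numpy as np
from scipy import ndimage

def log_energy(path, sigma=2.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
    log = ndimage.gaussian_laplace(gray, sigma=sigma)   # Laplacian of Gaussian
    return np.sum(log ** 2) / log.size                  # normalised energy

# A sharp image should score noticeably higher than a blurred copy of it:
print(log_energy("sharp.png"), log_energy("blurred.png"))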
calculate the average L1-distance of adjacent pixels:
N1=1/(2*N_pixel) * sum( abs(p(x,y)-p(x-1,y)) + abs(p(x,y)-p(x,y-1)) )
then the average L2 distance:
N2= 1/(2*N_pixel) * sum( (p(x,y)-p(x-1,y))^2 + (p(x,y)-p(x,y-1))^2 )
then the ratio N2 / (N1*N1) is a measure of blurriness. This is for grayscale images; for color images you do this for each channel separately.
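A minimal numpy version of this measure for a single-channel image; the formulas are taken directly from the answer (the normalisation by 2*N_pixel is approximated slightly at the borders), and loading the image is left to you:

import numpy as np

def blurriness(gray):
    g = gray.astype(np.float64)
    dx = g[:, 1:] - g[:, :-1]          # p(x,y) - p(x-1,y)
    dy = g[1:, :] - g[:-1, :]          # p(x,y) - p(x,y-1)
    n = 2.0 * g.size                   # approximately 2 * N_pixel
    n1 = (np.abs(dx).sum() + np.abs(dy).sum()) / n
    n2 = ((dx ** 2).sum() + (dy ** 2).sum()) / n
    return n2 / (n1 * n1)              # the ratio proposed as the blurriness measure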
Given an image (like the one below) I need to convert it into a binary image (black and white pixels only). This sounds easy enough, and I have tried two thresholding functions. The problem is that I can't get perfect edges using either of these functions. Any help would be greatly appreciated.
The filters I have tried are the Euclidean distance in the RGB and HSV spaces.
Sample image:
Here it is after running an RGB threshold filter (at 40%; it shows more artefacts after this).
Here it is after running an HSV threshold filter (at 30% the paths become barely visible, but are clearly unusable because of the noise).
The code I am using is pretty straightforward. Change the input image to the appropriate color space and check the Euclidean distance from the black color.
sqrt(R*R + G*G + B*B)
since I am comparing with black (0, 0, 0)
Your problem appears to be the variation in lighting over the scanned image which suggests that a locally adaptive thresholding method would give you better results.
The Sauvola method calculates the value of a binarized pixel based on the mean and standard deviation of pixels in a window of the original image. This means that if an area of the image is generally darker (or lighter) the threshold will be adjusted for that area and (likely) give you fewer dark splotches or washed-out lines in the binarized image.
http://www.mediateam.oulu.fi/publications/pdf/24.p
I also found a method by Shafait et al. that implements the Sauvola method with greater time efficiency. The drawback is that you have to compute two integral images of the original, one at 8 bits per pixel and the other potentially at 64 bits per pixel, which might present a problem with memory constraints.
http://www.dfki.uni-kl.de/~shafait/papers/Shafait-efficient-binarization-SPIE08.pdf
I haven't tried either of these methods, but they do look promising. I found Java implementations of both with a cursory Google search.
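If you want to try Sauvola without implementing the paper yourself, scikit-image ships an implementation; here is a minimal sketch (the window size and k are just typical starting values to tune, and the file names are placeholders):

import cv2
from skimage.filters import threshold_sauvola

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
t = threshold_sauvola(gray, window_size=25, k=0.2)   # per-pixel threshold map
binary = (gray > t).astype("uint8") * 255            # dark strokes become 0
cv2.imwrite("scan_binarized.png", binary)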
Running an adaptive threshold over the V channel in the HSV color space should produce brilliant results. The best results come with a window size larger than 11x11; don't forget to choose a negative value for the threshold constant.
Adaptive thresholding basically is:
if (Pixel value + constant > Average pixel value in the window around the pixel )
Pixel_Binary = 1;
else
Pixel_Binary = 0;
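For reference, a minimal OpenCV-Python sketch of this on the V channel; the window size and the constant are just starting values to tune (a negative C corresponds to the negative threshold constant recommended above), and the file names are placeholders:

import cv2

img = cv2.imread("map.png")                      # placeholder file name
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]

# blockSize is the window (must be odd, larger than 11 as suggested above);
# C plays the role of the constant in the pseudocode.
binary = cv2.adaptiveThreshold(v, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, -10)
cv2.imwrite("map_binary.png", binary)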
Due to the noise and the illumination variation you may need an adaptive local thresholding, thanks to Beaker for his answer too.
Therefore, I tried the following steps:
Convert it to grayscale.
Do mean or median local thresholding. I used 10 for the window size and 10 for the intercept constant and got this image (smaller values might also work):
Please refer to http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm if you need more
information on these techniques.
To make sure the thresholding was working fine, I skeletonized it to see if there is a line break. This skeleton may be the one needed for further processing.
To get rid of the remaining noise you can just find the longest connected component in the skeletonized image.
Thank you.
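A minimal scikit-image sketch of those last two steps (skeletonize, then keep the largest connected component); the binary input is assumed to come from the local thresholding step above, with the foreground paths as True:

import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import label

def clean_skeleton(binary):
    skel = skeletonize(binary)              # thin the paths to 1-pixel lines
    labels = label(skel, connectivity=2)    # group connected pixels
    if labels.max() == 0:
        return skel
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                            # ignore the background label
    return labels == sizes.argmax()         # keep only the largest component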
You probably want to do this as a three-step operation.
Use leveling, not just thresholding: take the input and scale the intensities (gamma correct) with parameters that simply dull the mid tones, without removing the darks or the lights (your RGB threshold is too strong, for instance; you lost some of your lines).
Edge-detect the resulting image using a small kernel convolution (5x5 for binary images should be more than enough). Use a simple [1 2 3 2 1 ; 2 3 4 3 2 ; 3 4 5 4 3 ; 2 3 4 3 2 ; 1 2 3 2 1] kernel (normalised); see the sketch after this list.
Threshold the resulting image. You should now have a much better binary image.
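A rough OpenCV-Python sketch of the three steps as described; the gamma value, the threshold level, and the file names are only illustrative starting points to tune, and the kernel is the one given in step 2:

import cv2
import numpy as np

img = cv2.imread("paths.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# 1. leveling: gamma-correct to dull the mid tones without clipping darks/lights
leveled = np.power(img, 1.5)

# 2. convolve with the (normalised) 5x5 kernel from step 2
k = np.array([[1, 2, 3, 2, 1],
              [2, 3, 4, 3, 2],
              [3, 4, 5, 4, 3],
              [2, 3, 4, 3, 2],
              [1, 2, 3, 2, 1]], dtype=np.float32)
filtered = cv2.filter2D(leveled, -1, k / k.sum())

# 3. threshold the result
_, binary = cv2.threshold(filtered, 0.5, 255, cv2.THRESH_BINARY)
cv2.imwrite("paths_binary.png", binary.astype(np.uint8))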
You could try a black top-hat transform. This involves subtracting the image from the closing of the image. I used a structuring element window size of 11 and a constant threshold of 0.1 (25.5 on a 0-255 scale).
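A minimal OpenCV-Python sketch of that step (11x11 structuring element, then a fixed threshold of roughly 0.1 of full scale, i.e. about 25 on 0-255); the file names are placeholders:

import cv2

gray = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# black top-hat = closing(image) - image; it highlights dark structures
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# constant threshold of about 0.1 of full scale
_, binary = cv2.threshold(blackhat, 25, 255, cv2.THRESH_BINARY)
cv2.imwrite("map_paths.png", binary)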
You should get something like:
Which you can then easily threshold:
Best of luck.