Image Processing - Do the PSNR and SSIM metrics show smoothing (noise reduction) quality? - image-processing

For my Image Processing class project, I am filtering an image with various filter algorithms (bilateral filter, NL-Means, etc.) and trying to compare the results as the parameters change. I came across the PSNR and SSIM metrics for measuring filter quality, but I could not fully understand what the values mean. Can anybody help me with the following:
Does a higher PSNR value mean higher quality smoothing (getting rid of noise)?
Should the SSIM value be close to 1 in order to have high-quality smoothing?
Are there any other metrics or methods to measure smoothing quality?
I am really confused. Any help will be highly appreciated. Thank you.

With respect to an ideal result image, PSNR is computed from the mean squared reconstruction error after denoising; a higher PSNR means a smaller error, i.e. more noise removed. However, as a least-squares measure, it is slightly biased towards over-smoothed (i.e. blurry) results: an algorithm that removes not only the noise but also part of the texture can still obtain a good score.
SSIM was developed as a reconstruction-quality metric that also takes into account the similarity of the edges (high-frequency content) between the denoised image and the ideal one. To obtain a good SSIM score, an algorithm needs to remove the noise while also preserving the edges of the objects.
Hence, SSIM looks like a "better" quality measure, but it is more complicated to compute: the formula produces one value per pixel (or per local window), which is then averaged, whereas PSNR directly gives you a single value for the whole image.
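If it helps, here is a minimal sketch of how both metrics can be computed with scikit-image (assuming its skimage.metrics module is available and that clean, noisy and denoised are same-size grayscale float images; the random images below are only placeholders for your data):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.random((128, 128))                                  # stand-in for the ideal image
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)  # noisy observation
denoised = noisy                                                # replace with your filter output

# PSNR: higher is better; derived from the mean squared error w.r.t. the ideal image.
psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)

# SSIM: closer to 1 is better; full=True also returns the per-pixel similarity map.
ssim, ssim_map = structural_similarity(clean, denoised, data_range=1.0, full=True)

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")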

Expanding on @sansuiso's answer:
There are a lot of other image quality measures you can use to evaluate the de-noising capability of various filters, in your case NL-means, the bilateral filter, etc.
Here is a chart that demonstrates the various parameters that could be used
Yes, and the higher the PSNR, the better the de-noising capability.
Here is a paper where you can find the details regarding these parameters, and the MATLAB code can be found here.
PSNR is the standard for evaluating reconstructed image quality and is an important feature.
A larger NAE (Normalized Absolute Error) value means the image is of poorer quality.
A larger SC (Structural Content) value likewise indicates poorer quality.
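For reference, here is a rough numpy sketch of these measures as they are commonly defined in the image-quality literature (the paper mentioned above may use slightly different formulations, so treat this as an illustration only; ref is the ideal image and test the filtered one):

import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    # Higher PSNR (in dB) means a smaller reconstruction error.
    return 10 * np.log10(peak ** 2 / mse(ref, test))

def nae(ref, test):
    # Normalized Absolute Error: larger values indicate poorer quality.
    return np.sum(np.abs(ref.astype(float) - test.astype(float))) / np.sum(np.abs(ref.astype(float)))

def sc(ref, test):
    # Structural Content: larger values indicate poorer quality.
    return np.sum(ref.astype(float) ** 2) / np.sum(test.astype(float) ** 2)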

Regarding this article:
http://icpr2010.org/pdfs/icpr2010_WeAT8.44.pdf
I found that PSNR can be derived from SSIM and vice-versa, and that PSNR is more sensitive to additive Gaussian noise than SSIM. On the other hand, the two metrics behave almost identically with respect to Gaussian blur and in discriminating quality.

Related

Semantic Segmentation: How to evaluate the influence of noise on the effectiveness and robustness of medical image segmentation?

I have done some pre-processing, including N4 bias correction, noise removal and scaling, on 3D medical MRIs, and I was asked one question:
How to evaluate the influence of noise on the effectiveness and robustness of the medical image segmentation? When the image structure is affected by various kinds of noise, the extracted features deteriorate. This effect should be taken into account when assessing the method's effectiveness for different noise intensities.
How to evaluate the effect of the noise, and how to justify the noise removal method used in the scientific manuscript?
I don't know if this will be helpful, but I did something similar once in class with nuclear magnetic resonance.
In that case we used the Shepp-Logan phantom with the FFT, then added noise to the picture (by adding random numbers with a Gaussian distribution).
When you transform the image back to the phantom you can see the effects of the noise and sometimes artifacts (mostly due to the FFT algorithm and the window function chosen).
What I did was check the mean colour value in the image before and after; then, on the edges of the phantom (the skull), you can see how clear the transition from white to black (and vice versa) is.
This can easily be tested in MATLAB with the phantom. Once you reach the accuracy you need, you can apply the chosen algorithm to real images.
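A small sketch of that experiment, assuming scikit-image is available (it ships the Shepp-Logan phantom); the Gaussian filter below is only a stand-in for whatever denoiser you want to test:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.filters import gaussian
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(42)
phantom = shepp_logan_phantom()                               # float image in [0, 1]
noisy = np.clip(phantom + rng.normal(0, 0.05, phantom.shape), 0, 1)
denoised = gaussian(noisy, sigma=1)                           # stand-in for your denoiser

print("PSNR noisy   :", peak_signal_noise_ratio(phantom, noisy, data_range=1.0))
print("PSNR denoised:", peak_signal_noise_ratio(phantom, denoised, data_range=1.0))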

Image Processing: PSNR and MSE alternative method?

Assuming random Gaussian noise on an image, how would one tell which denoise method is the best quantitatively?
A lot of papers use MSE & PSNR. However, a lower MSE could also mean that not enough noise has been removed, so I don't think MSE and PSNR are really the best way to tell.
A table of the PSNR between the original image and the image after each denoising algorithm has been applied should be a good way to quantitatively analyze the results of the various methods. You could also calculate a deltaPSNR between the result and the noisy image.
If you have an original image without noise, you can add noise to it and compute the PSNR of the noisy image against the original. Then, after denoising, compute the PSNR of the denoised image against the original again. This final PSNR tells you how close each result is to the original.
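A hedged sketch of that comparison (assuming scikit-image; methods is a dictionary you fill in, mapping a name to a denoising callable such as a bilateral or NL-means wrapper):

from skimage.metrics import peak_signal_noise_ratio

def compare_denoisers(original, noisy, methods, data_range=1.0):
    psnr_noisy = peak_signal_noise_ratio(original, noisy, data_range=data_range)
    for name, denoise in methods.items():
        result = denoise(noisy)
        psnr_result = peak_signal_noise_ratio(original, result, data_range=data_range)
        delta = psnr_result - psnr_noisy      # improvement over the noisy input
        print(f"{name:20s} PSNR = {psnr_result:6.2f} dB  (delta = {delta:+.2f} dB)")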

Effect of variance (sigma) in Gaussian smoothing

I know about the Gaussian, variance, and image blurring, and I think I understand the concept of variance in Gaussian blur, but I am still not 100% sure.
I just want to know the role of sigma (the variance) in Gaussian smoothing. What happens when the value of sigma is increased for the same window size, and why does it happen?
It would be really helpful if somebody could point me to some good literature about it. (I have already tried a few sources but couldn't find what I am looking for.)
Major confusion:
Higher frequency-> details (e.g. noise),
Lower Frequency-> kind of overview of the image.
By increasing sigma, I thought we would be allowing more of the higher frequencies, so the image should become more detailed; but the opposite happens: when we increase sigma, the image becomes more blurry.
I think this should be explained in the following steps, first from the signal-processing point of view:
A Gaussian filter is a low-pass filter. Low-pass filters, as their name implies, pass (keep) low frequencies. When we look at the image in the frequency domain, the highest frequencies occur at the edges, i.e. places where there is a large change in intensity.
The role of sigma in the Gaussian filter is to control the spread of the weights around the mean: the larger the sigma, the more variance (spread) is allowed around the mean, and the smaller the sigma, the less.
Filtering in the spatial domain is done through convolution, which simply means that we apply a kernel at every pixel of the image. One rule holds for smoothing kernels: their weights have to sum to one (otherwise the overall brightness of the image would change).
Now putting it all together: when we apply a Gaussian filter to an image, we are doing low-pass filtering. But as you know this happens in the discrete domain (image pixels), so we have to sample (quantize) the continuous Gaussian in order to build a Gaussian kernel. In this quantization step, a Gaussian with a small sigma has the sharpest peak, so most of the weight is concentrated at the centre pixel and little is given to its neighbours; with a large sigma, the weight spreads out over the neighbours, which is what produces the stronger blur.
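A minimal numpy sketch of that quantization step (the window radius is an arbitrary choice here): a 1-D Gaussian sampled on a fixed window and normalized so its weights sum to one. With a small sigma the weight is concentrated at the centre; with a large sigma it spreads over the neighbours.

import numpy as np

def gaussian_kernel_1d(sigma, radius=3):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()                        # smoothing kernels are normalized to sum to 1

print(np.round(gaussian_kernel_1d(0.5), 3))   # sharp peak: almost all weight at the centre
print(np.round(gaussian_kernel_1d(2.0), 3))   # flatter: weight shared with neighbours -> more blur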
In terms of natural image statistics: scientists in this field have shown that our visual system behaves somewhat like a Gaussian filter in its response to images. For example, look at a broad scene without paying attention to a specific point: you see a scene with lots of things in it, but the details are not clear. Now look at one specific point in that scene: you see details you did not see before. This is where sigma comes in: when you increase sigma you are looking at the broad scene without attending to the details, and when you decrease it you get more detail.
I think Wikipedia can help more than me: Low Pass Filters, Gaussian Blur.
Put simply, increasing sigma casts a broader net over the neighbouring pixels and decreases the relative weight of the pixels nearest the pixel of interest, i.e. it makes a blurrier image.
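A quick sketch of that effect, assuming SciPy is available; the random image is just a placeholder, and the standard deviation of the result is used as a crude measure of how much detail survives each blur:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((256, 256))                # stand-in for your image

for sigma in (0.5, 1.0, 2.0, 4.0):
    blurred = gaussian_filter(image, sigma=sigma)
    print(f"sigma={sigma}: remaining detail (std) = {blurred.std():.4f}")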

Image blending modes for HDR images

The blending modes Screen, Color Dodge, Soft Light, etc., as in Photoshop, each have their own math that works for the range 0-1. I wonder how these blend modes work for HDR images?
Thanks
I am not familiar with Photoshop and its filters, but here is a general explanation of the math behind HDR blending.
Suppose you have 3 images (low light, medium, and over-exposed). You want to average these images, but a plain (I1+I2+I3)/3 is a naive way to do it. You want to give a higher weight to the image that captures more information in a given area.
So basically you average the images with a weight factor and there are different types of algorithms to calculate the weights. Here are few:
The simplest is to use the STD (standard deviation). For each pixel in each image, calculate the standard deviation of its 3x3 neighbourhood (9 pixels) and use that std as the weight:
HDR_pixel(i,j) = I1(i,j)*stdI1(i,j) + I2(i,j)*stdI2(i,j) + I3(i,j)*stdI3(i,j), normalized by the sum of the three weights (stdI1 + stdI2 + stdI3) so that it remains a weighted average.
Why is std used? Because a high std means a high variation in pixel intensity, which means that image captured more information in that area.
Instead of the STD you can use an entropy filter, edge detection, or any other measure that represents how much information is encoded around the given pixel.
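A rough sketch of that std-weighted averaging, assuming the exposures are aligned grayscale float arrays of the same shape and SciPy is available; the local std is computed over a small window via E[x^2] - E[x]^2, and the window size is an arbitrary choice:

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size=3):
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0))

def fuse_exposures(images, size=3, eps=1e-8):
    weights = [local_std(img, size) + eps for img in images]   # eps avoids division by zero
    total = sum(weights)
    # Weighted average, normalized by the sum of the weights at each pixel.
    return sum(img * w for img, w in zip(images, weights)) / total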
There are also slower but better ways to do HDR. Usually it is done with some kind of multi-scale or frequency transform (e.g. wavelets or the Fourier transform): each image is converted to Fourier space (coefficients of the frequencies), and then, for each frequency, the maximal coefficient of the 3 images is taken.
You can even combine the std-filter method and the transform approach: for example, decompose the image into different frequency bands, smooth the lower frequencies heavily and take the plain average (I1+I2+I3)/3 there, but for the high frequencies use less smoothing and the std-weighted average. Smoothing the lower frequencies more is called 'blending'; it is heavily used when stitching two images of different light exposure into a panorama.
Look at this image: http://magazine.magix.com/en/wp-content/uploads/2012/05/Panorama-3.jpg
You can clearly see that the sky has a different colour in each image, but since the sky is very low frequency (almost no information and no small objects), it is heavily smoothed and averaged, allowing a gentle stitch.
Hope that answers your question

Determine if an image needs contrasting automatically in OpenCV

OpenCV has a handy cvEqualizeHist() function that works great on faded/low-contrast images.
However, when an already high-contrast image is given, the result is a low-contrast one. I understand the reason: the histogram is already distributed fairly evenly.
Question is - how do I get to know the difference between a low-contrast and a high-contrast image?
I'm operating on grayscale images and setting their contrast properly so that thresholding them won't delete the text I'm supposed to extract (that's a different story).
Suggestions are welcome, especially on how to find out whether the majority of the pixels in the image are light gray (which means that histogram equalization should be performed).
Please help!
EDIT: Thanks everyone for the many informative answers. The standard deviation calculation was sufficient for my requirements, so I'm taking that as the answer to my query.
You can probably just use a simple statistical measure of the image to determine whether an image has sufficient contrast. The variance of the image would probably be a good starting point. If the variance is below a certain threshold (to be empirically determined) then you can consider it to be "low contrast".
If you're adjusting contrast just so you can threshold later on, you may be able to avoid the contrast-adjustment step entirely by setting your threshold adaptively with Otsu's method.
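A hedged sketch of both suggestions, assuming a grayscale 8-bit image loaded with OpenCV; the file name and the contrast threshold are placeholders to be tuned for your data:

import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input file

# 1) Simple statistical check: equalize only if the spread of intensities is small.
if img.std() < 40:                                     # empirical threshold
    img = cv2.equalizeHist(img)

# 2) Or skip the contrast step and let Otsu pick the threshold adaptively.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)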
If you're still interested in finding out the image contrast, then read on.
While there are a number of different ways to calculate "contrast", these metrics are often applied locally rather than to the entire image, to make the result more sensitive to image content:
Divide the image into adjacent, non-overlapping neighborhoods.
Pick neighborhood sizes approximately equal to the size of the features in your image (e.g. if your main feature is horizontal text, make the neighborhoods tall enough to capture two lines of text, and just as wide).
Apply the metric to each neighborhood individually
Threshold the metric result to separate low and high variance blocks. This will prevent such things as large, blank areas of page skewing your contrast estimates.
From there, you can use a number of features to determine contrast:
The proportion of high metric blocks to low metric blocks
High metric block mean
Intensity distance between the high and low metric blocks (using means, modes, etc)
This may serve as a better indication of image contrast than global image variance alone. Here's why:
[Image 1: stddev 50.6]  [Image 2: stddev 7.9]
The two images are both perfectly in contrast (the grey background is just there to make it obvious that each is an image), but their standard deviations (and thus variances) are completely different.
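A sketch of the block-based procedure above: split the image into non-overlapping blocks, compute the standard deviation of each, discard near-blank blocks, and summarize. The block size and the low-variance threshold are placeholders to be tuned:

import numpy as np

def block_contrast_features(img, block=32, low_var_thresh=5.0):
    h, w = img.shape
    stds, means = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block].astype(float)
            stds.append(patch.std())
            means.append(patch.mean())
    stds, means = np.array(stds), np.array(means)
    high = stds >= low_var_thresh                      # ignore large blank areas
    return {
        "high_block_fraction": high.mean(),            # proportion of high-metric blocks
        "high_block_mean_std": stds[high].mean() if high.any() else 0.0,
        "intensity_spread": (means[high].max() - means[high].min()) if high.any() else 0.0,
    }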
Calculate the cumulative histogram of the image.
Fit a linear regression to the cumulative histogram in the form y(x) = A*x + B.
Calculate the RMSE of real_cumulative_frequency(x) - y(x).
If that RMSE is close to zero, the image is already equalized. (This works because, for equalized images, the cumulative histogram must be approximately linear.)
The idea is taken from here.
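A numpy sketch of those steps, assuming a grayscale 8-bit image:

import numpy as np

def equalization_rmse(img, bins=256):
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    cumulative = np.cumsum(hist).astype(float)
    x = np.arange(bins)
    a, b = np.polyfit(x, cumulative, 1)                # linear fit y(x) = A*x + B
    rmse = np.sqrt(np.mean((cumulative - (a * x + b)) ** 2))
    return rmse / cumulative[-1]                       # normalize by the total pixel count

# Values close to zero mean the cumulative histogram is nearly linear,
# i.e. the image is already (approximately) equalized.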
EDIT:
I've illustrated this approach in my blog (C example code included).
There is support for this in skimage: skimage.exposure.is_low_contrast (see the reference).
Example:
>>> import numpy as np
>>> from skimage.exposure import is_low_contrast
>>> image = np.linspace(0, 0.04, 100)
>>> is_low_contrast(image)
True
>>> image[-1] = 1
>>> is_low_contrast(image)
True
>>> is_low_contrast(image, upper_percentile=100)
False
