NDVI value range - Does NDVI always lie between -1 and 1?

I am working with Landsat 8 images. I need to calculate NDVI and display it, and for display purposes I need to scale the NDVI values to the 0 to 255 range.
NDVI is basically (p(4)-p(3))/(p(4)+p(3)), where p(4) and p(3) are the reflectances corresponding to band 4 and band 3 respectively. These reflectance values can be calculated from the metadata available when downloading Landsat imagery.
Now, the metadata file also lists the maximum and minimum possible reflectance values for the given scene.
Consider the case where p(4) takes the maximum value 1.2107 and p(3) takes the minimum value -0.09998. The NDVI comes out to (1.2107-(-0.09998))/(1.2107+(-0.09998)) = 1.18.
The literature I've read says NDVI always falls between -1 and 1, but that clearly doesn't hold for the case above. So what range of NDVI values should I scale to 0 to 255 for display?

The input values are expected to fall in the range 0 to 1, as they are the ratio of reflected radiation to incoming radiation, where 1 represents perfect reflection and 0 represents total absorption. So if a plant reflects 20% in band 3 and 80% in band 4, your values should be p3 = 0.2 and p4 = 0.8.
On a purely intuitive level, consider b3 and b4 varying between 0 and 1. If b3 and b4 are equal (both fully reflecting, say), the numerator is zero and so is the NDVI. The case b3 = 1 and b4 = 0 is the opposite of what vegetation does (real vegetation reflects more in b4 than in b3), and it leads to NDVI = -1. Finally, the case we expect for vegetation, where something reflects nothing in b3 (b3 = 0) and reflects perfectly in b4 (b4 = 1), gives the classic vegetation signature of NDVI = 1, or at least a relatively high value.
Your values are clearly in a different range, so you should investigate their units: reflectances should not have any units, as they are themselves pure ratios of reflected to incoming radiation.
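As a minimal sketch of the NDVI computation and the display scaling, assuming the bands are already valid reflectances in [0, 1] (the clipping to [-1, 1] and the example array values are my own choices, not part of the original question):

import numpy as np

def ndvi_to_display(nir, red):
    """Compute NDVI from two reflectance arrays and map it to 0-255 for display."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    ndvi = np.zeros_like(denom)
    # Avoid division by zero where both bands are zero.
    np.divide(nir - red, denom, out=ndvi, where=denom != 0)
    # Physically meaningful reflectances keep NDVI in [-1, 1];
    # clip anything outside (e.g. from calibration artefacts).
    ndvi = np.clip(ndvi, -1.0, 1.0)
    # Linearly map [-1, 1] to [0, 255].
    return ((ndvi + 1.0) * 127.5).astype(np.uint8)

nir = np.array([[0.8, 0.5], [0.3, 0.1]])
red = np.array([[0.2, 0.4], [0.3, 0.2]])
print(ndvi_to_display(nir, red))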

Related

What is mean removal in the RGB to YCbCr transform?

Some research authors say: "First of all, the mean values of the three color components R, G, and B are removed to reduce the internal precision requirement of subsequent operations. Then, the YCbCr transform is used to concentrate most of the image energy into the Y component and reduce the correlation among the R, G, and B components. Therefore, the Y component can be precisely quantified, while the Cb and Cr components can be roughly quantified, so as to achieve the purpose of compression without too much impact on the quality of reconstructed images."
Can someone explain the mean-removal part?
Removing the mean value of the R component means finding the mean (average) value of the R component and subtracting that from each R value. So if, for example, the R values were
204 204 192 200
then the mean would be 200. So you would adjust the values by subtracting 200 from each, yielding
4, 4, -8, 0
These values are smaller in magnitude than the original numbers, so the internal precision required to represent them is less.
(nb: this only helps if the values are not uniformly distributed across the available range already. But it doesn't hurt in any event, and most real world images don't have values that are uniformly distributed across the available range).
By removing the mean, you reduce the range of magnitudes needed.
To take an extreme example: if all pixels have the same value, whatever it is, removing the mean will convert everything to 0.
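A tiny sketch of that mean-removal step, using the example values above (doing it per colour channel is my own illustration):

import numpy as np

r = np.array([204, 204, 192, 200], dtype=np.int16)
# Subtract the channel mean; the result is small, signed values.
r_mean = int(round(r.mean()))      # 200
r_centered = r - r_mean            # [ 4  4 -8  0]
print(r_mean, r_centered)
# A decoder adds the stored mean back to reconstruct the original values.
print(r_centered + r_mean)         # [204 204 192 200]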

How do you handle negative pixel values after filtering?

I have an 8-bit image and I want to filter it with a matrix for edge detection. My kernel matrix is
0 1 0
1 -4 1
0 1 0
For some pixels it gives me a negative value. What am I supposed to do with them?
Your kernel is a Laplace filter. Applying it to an image yields a finite difference approximation to the Laplacian operator. The Laplace operator is not an edge detector by itself.
But you can use it as a building block for an edge detector: you need to detect the zero crossings to find edges (this is the Marr-Hildreth edge detector). To find zero crossings, you need to have negative values.
You can also use the Laplace filtered image to sharpen your image. If you subtract it from the original image, the result will be an image with sharper edges and a much crisper feel. For this, negative values are important too.
For both these applications, clamping the result of the operation, as suggested in the other answer, is wrong. That clamping sets all negative values to 0. This means there are no more zero crossings to find, so you can't find edges, and for the sharpening it means that one side of each edge will not be sharpened.
So, the best thing to do with the result of the Laplace filter is preserve the values as they are. Use a signed 16-bit integer type to store your results (I actually prefer using floating-point types, it simplifies a lot of things).
On the other hand, if you want to display the result of the Laplace filter on a screen, you will have to do something sensible with the pixel values. A common choice is to add 128 to each pixel. This shifts zero to a mid-grey value, shows negative values as darker and positive values as lighter. After adding 128, values above 255 and below 0 can be clipped. You can also compress the values further if you want to avoid clipping, for example laplace / 2 + 128.
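As a rough sketch of that workflow (the filenames are placeholders of my own, and OpenCV's filter2D is just one way to apply the kernel):

import cv2
import numpy as np

kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=np.float32)

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # 8-bit input
# Keep the signed result: request a 16-bit signed output depth.
lap = cv2.filter2D(img, cv2.CV_16S, kernel)

# For further processing (zero crossings, sharpening) use `lap` as-is.
# Only for display: shift zero to mid-grey and clip to 0..255.
display = np.clip(lap // 2 + 128, 0, 255).astype(np.uint8)
cv2.imwrite("laplace_display.png", display)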
Out-of-range values are extremely common in JPEG. One handles them by clamping:
If X < 0 then X := 0 ;
If X > 255 then X := 255 ;

Explaining Cross Correlation and Normalization for OpenCV's matchTemplate

My boss and I disagree about what is going on with the CV_TM_CCORR_NORMED method for matchTemplate() in OpenCV.
Can you please explain what is happening here, especially the square-root part of the equation?
Correlation measures the similarity of two signals, vectors, etc. Suppose you have the vectors
template = [0 1 0 0 1 0], A = [0 1 1 1 0 0], B = [1 0 0 0 0 1]
If you correlate A and B with the template to see which one is more similar, you will find that A is more similar to the template than B, because its 1's sit at the corresponding indices. In other words, the more non-zero elements coincide, the larger the correlation between the vectors.
In grayscale images the values are in the range 0-255. Let's try that:
template = [10 250 36 30], A = [10 250 36 30], B = [220 251 240 210]
Here A is identical to the template, yet the plain correlation between B and the template is larger than between A and the template. The denominator in the normalized cross-correlation formula solves this problem: if you check the formula below, you can see that the denominator for B against the template will be much bigger than for A against the template.
The formula for TM_CCORR_NORMED, as stated in the OpenCV documentation, is:
R(x,y) = sum_{x',y'} [ T(x',y') * I(x+x', y+y') ] / sqrt( sum_{x',y'} T(x',y')^2 * sum_{x',y'} I(x+x', y+y')^2 )
In practice, if you use plain cross-correlation and part of the image is brighter, the correlation between that part and your template will be larger regardless of content. If you use normalized cross-correlation instead, you get a better result.
Think of the formula this way: before multiplying element by element, you normalize the two matrices. By dividing each one by the square root of the sum of its squared elements, you remove the gain; if all elements are large, the divisor is large too.
Put differently, you are dividing by a measure of the overall magnitude of the patch. If a pixel lies in a brighter area, its neighbouring pixel values will be high, and dividing by the magnitude of that neighbourhood removes the illumination effect. In image processing the pixel values are always positive, but a general 2D matrix may contain negative values, and squaring ignores the sign.
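A quick sketch of that effect using the vectors above (the helper is my own single-offset version of the TM_CCORR_NORMED expression, not OpenCV's implementation):

import numpy as np

def ccorr_normed(t, a):
    """Normalized cross-correlation of two equally sized patches."""
    t = t.astype(np.float64)
    a = a.astype(np.float64)
    return (t * a).sum() / np.sqrt((t * t).sum() * (a * a).sum())

template = np.array([10, 250, 36, 30])
A = np.array([10, 250, 36, 30])
B = np.array([220, 251, 240, 210])

# Plain correlation favours the brighter patch B ...
print((template * A).sum(), (template * B).sum())
# ... but the normalized score is 1.0 for the identical patch A.
print(ccorr_normed(template, A), ccorr_normed(template, B))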

Adjustable sharpen color matrix

I am using the .NET AForge libraries to sharpen an image. The "Sharpen" filter uses the following matrix:
0 -1 0
-1 5 -1
0 -1 0
This does sharpen the image, but I need to sharpen it more aggressively, controlled by a numeric range, let's say 1-100.
Using AForge, how do I transform this matrix with numbers 1 through 100, where 1 is barely noticeable and 100 is very noticeable?
Thanks in advance!
The one property of a filter like this that must be maintained is that all the values sum to 1. You can subtract 1 from the middle value, multiply by some constant, then add 1 back to the middle, and the result will still be scaled properly. Play around with the range (100 is almost certainly too large) until you find something that works.
You might also try using a larger filter matrix, or one that has values in the corners as well.
I would also suggest looking at the GaussianSharpen class and adjusting the sigma value.
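The kernel arithmetic is independent of AForge (which is .NET); here is a rough NumPy sketch of the scaling idea, where the 1-100 range and the divisor that maps it to a multiplier are my own arbitrary choices:

import numpy as np

base = np.array([[ 0, -1,  0],
                 [-1,  5, -1],
                 [ 0, -1,  0]], dtype=np.float64)

identity = np.zeros_like(base)
identity[1, 1] = 1.0

def sharpen_kernel(strength):
    """Scale the high-pass part of the kernel; the coefficients still sum to 1."""
    amount = strength / 25.0           # 1..100 -> 0.04..4, an arbitrary mapping
    return identity + amount * (base - identity)

print(sharpen_kernel(25))              # the original kernel
print(sharpen_kernel(100).sum())       # still sums to 1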

How to normalize Difference of Gaussian Image pixels with negative values?

In the context of image processing for edge detection, or in my case a basic SIFT implementation:
When taking the difference of two Gaussian-blurred images, you are bound to get pixels whose difference is negative (the originals are between 0 and 255, so the difference can lie anywhere between -255 and 255). What is the normal approach to 'fixing' this? Taking the absolute value does not seem correct in this situation.
There are two different approaches depending on what you want to do with the output.
The first is to offset the output by 128 (optionally scaling it first, e.g. dividing by 2), so that differences around zero map to mid-grey; anything still outside 0 to 255 is then clipped.
The second is to clamp negative values so that they all become zero.
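A short sketch of both options for 8-bit inputs (halving the difference before offsetting is my own choice to reduce clipping):

import numpy as np

def dog_offset(img_a, img_b):
    """Difference of Gaussians remapped to 0..255 around mid-grey."""
    diff = img_a.astype(np.int16) - img_b.astype(np.int16)   # range -255..255
    return np.clip(diff // 2 + 128, 0, 255).astype(np.uint8)

def dog_clamped(img_a, img_b):
    """Difference of Gaussians with negative values clamped to zero."""
    diff = img_a.astype(np.int16) - img_b.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

For the actual SIFT pipeline you would keep the signed (or floating-point) difference and only remap it for display, much as with the Laplace filter discussed above.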
