I've read several image processing books and websites, but I'm still not sure of the true definition of the term "energy" in image processing. I've found several definitions, but sometimes they just don't match.
When we say "energy" in Image processing, what are we implying?
The energy is a measure of the localized change of the image.
Energy gets a bunch of different names and appears in a lot of different contexts, but it tends to refer to the same thing: the rate of change in the color/brightness/magnitude of the pixels over local areas. This is especially true for the edges of the things inside the image, and because of the nature of compression these areas are the hardest to compress, so it's a solid guess that they are more important; they are often edges or quick gradients. The contexts differ, but they refer to the same thing.
The seam carving algorithm uses a determination of energy (the gradient magnitude) to find the seams that will be least noticed if removed. JPEG represents the local cluster of pixels relative to the energy of the first coefficient. The snake algorithm uses it to find the local contoured edge of an object in the image. So there are a lot of different definitions, but they all refer to the sort of oomph of the image, whether that's the sum of the local pixels' squared absolute brightness, the hard-to-compress bits in a JPEG, the edges in Canny edge detection, or the gradient magnitude:
The important bit is that energy is where the stuff is.
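If it helps to make that concrete, here is a minimal sketch (my own illustration, not taken from any of those algorithms) of a gradient-magnitude energy map in Python with NumPy/SciPy, assuming a grayscale image stored as a 2-D float array:

    # A minimal sketch of a gradient-magnitude energy map (the kind of energy
    # seam carving uses). Assumes a grayscale image as a 2-D float numpy array.
    import numpy as np
    from scipy import ndimage

    def energy_map(gray):
        """Per-pixel energy = magnitude of the intensity gradient."""
        dx = ndimage.sobel(gray, axis=1)   # horizontal rate of change
        dy = ndimage.sobel(gray, axis=0)   # vertical rate of change
        return np.hypot(dx, dy)            # high values at edges / fast gradients

    # Example: flat regions get ~0 energy, the step edge gets high energy.
    img = np.zeros((4, 6))
    img[:, 3:] = 1.0
    print(energy_map(img))

Flat regions come out with energy near zero while the step edge lights up, which is exactly the "where the stuff is" intuition.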
More broadly, the energy of an image is the sum of the distances, in some quality, between the pixels of some locality.
For example, we can take the sum of the LAB Delta E 2000 colour distances within a properly weighted 2D Gaussian kernel. Here the distances are summed together, the locality is defined by the Gaussian kernel, the quality is colour, and the distance is the LAB Delta E formula from the year 2000. (Errata: this previously claimed the E stood for Euclidean; the standard Delta E distance is Euclidean, but the 94 and 2000 formulas are not strictly Euclidean, and the 'E' actually stands for Empfindung, German for "sensation".) We could also add up, over a local 3x3 kernel, the local differences in brightness, or squared brightness, etc. The point is to measure the localized change of the image.
In this example, local is defined by a 2D Gaussian kernel and the colour distance by the LAB Delta E 2000 formula.
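As a rough sketch of that idea (with plain brightness differences standing in for the LAB Delta E 2000 colour distance, just to keep it short; SciPy assumed):

    # Local energy as Gaussian-weighted variation in brightness: how much each
    # pixel's neighbourhood deviates from its (Gaussian-weighted) local mean.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_energy(gray, sigma=2.0):
        mean = gaussian_filter(gray, sigma)
        mean_sq = gaussian_filter(gray ** 2, sigma)
        return np.maximum(mean_sq - mean ** 2, 0.0)  # clamp tiny negatives from rounding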
If you took an image, moved all the pixels around, and sorted them by color for some reason, you would reduce the energy of the image. You could take a collection of 50% black pixels and 50% white pixels and arrange them as random noise for maximal energy, or put them on two sides of the image for minimum energy. Likewise, if you had 100% white pixels, the energy would be 0 no matter how you arranged them.
It depends on the context, but in general, in signal processing, "energy" corresponds to the mean squared value of the signal (typically measured with respect to the global mean value). This concept is usually associated with Parseval's theorem, which allows us to think of the total energy as distributed along "frequencies" (and so one can say, for example, that an image has most of its energy concentrated in low frequencies).
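A quick numerical illustration of that Parseval view (a NumPy sketch, nothing image-specific):

    # The signal's energy (sum of squared values) equals the energy summed
    # over frequencies, up to the DFT normalization factor.
    import numpy as np

    x = np.random.randn(1024)                      # any 1-D signal
    energy_time = np.sum(x ** 2)                   # energy in the signal domain
    X = np.fft.fft(x)
    energy_freq = np.sum(np.abs(X) ** 2) / len(x)  # same energy, distributed over frequencies
    print(np.allclose(energy_time, energy_freq))   # True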
Another, related, use is in image transforms: for example, the DCT transform (the basis of the JPEG compression method) transforms a block of pixels (an 8x8 image) into a matrix of transformed coefficients; for typical images it turns out that, while the original 8x8 image has its energy evenly distributed among the 64 pixels, the transformed block has its energy concentrated in the upper-left "pixels" (which, again, correspond to "low frequencies", in an analogous sense).
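A small sketch of that energy-compaction effect on a single 8x8 block (SciPy's DCT, orthonormal so total energy is preserved; this is only an illustration, not the actual JPEG pipeline):

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        # Orthonormal 2-D DCT: apply the 1-D DCT along rows, then columns.
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    # A smooth-ish 8x8 block (a gentle gradient), typical of natural image content.
    x, y = np.meshgrid(np.arange(8), np.arange(8))
    block = 100 + 5 * x + 3 * y + np.random.randn(8, 8)

    coeffs = dct2(block)
    energy = coeffs ** 2
    # Nearly all of the energy ends up in the top-left (low-frequency) coefficients.
    print(energy.sum(), energy[:2, :2].sum())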
Energy is a fairly loose term used to describe any user defined function (in the image domain).
The motivation for using the term "energy" is that typical object detection/segmentation tasks are posed as an energy minimization problem. We define an energy that captures the solution we desire and perform gradient descent to find its lowest value, resulting in a solution for the image segmentation.
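As a toy illustration of that framing (my own example, not any particular paper's model): denoise a 1-D signal u by minimizing E(u) = sum (u - f)^2 + lam * sum (u[i+1] - u[i])^2, a data term plus a smoothness term, with plain gradient descent:

    import numpy as np

    # Noisy step signal f; we look for a smoother u that still stays close to f.
    f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * np.random.randn(100)
    u = f.copy()
    lam, step = 2.0, 0.05

    for _ in range(300):
        data_grad = 2 * (u - f)                                           # pulls u toward the observations
        smooth_grad = 2 * lam * (2 * u - np.roll(u, 1) - np.roll(u, -1))  # penalizes jumps (periodic boundary for brevity)
        u -= step * (data_grad + smooth_grad)

After the loop, u is a smoothed estimate at which the energy is (locally) minimal; segmentation energies work the same way, just with different data and smoothness terms.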
There is more than one definition of "energy" in image processing, so it depends on the context of where it was used.
Energy is used to describe a measure of "information" when formulating an operation under a probability framework such as MAP (maximum a posteriori) estimation in conjunction with Markov Random Fields. Sometimes the energy is a negative measure to be minimised and sometimes it is a positive measure to be maximised.
If you consider that (for natural images captured by cameras) light is energy, you may call the value of a pixel on some channel its energy.
However, I think that by energy the books are referring to the spectral density. From wikipedia:
The energy spectral density describes how the energy (or variance) of a signal or a time series is distributed with frequency
http://en.wikipedia.org/wiki/Spectral_density
Going back to my chemistry - Energy and Entropy are closely related terms. And Entropy and Randomness are also closely related. So in Image Processing, Energy might be similar to Randomness. For example, a picture of a plain wall has low energy, while the picture of a city taken from a helicopter might have high energy.
Image "energy" should be inversely proportional to Shannon entropy of image. But as already said image energy is loosely coupled term, it is better use "compressibility" term instead. That is - high image "energy" should correspond to high image compressibility.
http://lcni.uoregon.edu/~mark/Stat_mech/thermodynamic_entropy_and_information.html
Energy is like the "information present in the image". Compression of images causes energy loss. I guess it's something like that.
Energy is defined based on a normalized histogram of the image. Energy shows how the gray levels are distributed: when the number of gray levels is low, the energy is high.
The snake algorithm is an image processing technique used to determine the contour of an object. The snake is nothing but a vector of (x, y) points with some constraints; its final goal is to surround the object and describe its shape (contour), and then to track or represent the object by its shape.
The algorithm has two kinds of energies, internal and external.
Internal energy (IE), the snake's own energy, is a user-defined energy which acts on the snake (internally) to impose constraints on the smoothness of the snake. Without such a force, the snake would end up taking the exact shape of the object, which is not desirable, because the exact shape of an object is very difficult to obtain due to light conditions, image quality, noise, etc.
The external energy (EE) arises from the data (the image intensities): it is the absolute difference of the intensities in the x and y directions (the intensity gradient) multiplied by -1, so it can be summed with the internal energy, because the total energy must be minimized. The total energy over all of the snake points should be minimized. Ideally this happens at edges, because the gradient, and hence the EE, is maximal there; since it is multiplied by -1, the total energy of the snake around the nearest object is minimized, and the algorithm converges to a solution, which is hopefully the true contour of the studied object.
Because this algorithm relies on the EE, which is high not only on edges but also on noisy points, the snake algorithm sometimes does not converge to an optimal solution; that is why it is an approximate, greedy algorithm.
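For reference, a minimal sketch of the external energy term described above (negative gradient magnitude, with a small blur to tame noise; a grayscale float image and SciPy are assumed):

    import numpy as np
    from scipy import ndimage

    def external_energy(gray, blur_sigma=1.0):
        smoothed = ndimage.gaussian_filter(gray, blur_sigma)  # tame noise a little
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        return -np.hypot(gx, gy)  # most negative on edges, so minimizing it pulls the snake toward them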
I found this in an image processing book:
Energy: S_N = sum (from b=0 to b=L-1) of |P(b)|^2

where P(b) = N(b) / M, M is the total number of pixels in a neighborhood window centered about (j,k), and N(b) is the number of pixels of amplitude b in the same window.

It may give a better understanding if we compare this equation with the entropy:

Entropy: S_E = - sum (from b=0 to b=L-1) of P(b) * log2(P(b))
source: Pp. 538~539 Digital Image Processing written by William K. Pratt (4th edition)
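A sketch of those two measures in Python, computed from a normalized histogram (here over the whole image rather than a local window centered about (j,k), just to keep it short):

    import numpy as np

    def histogram_energy_entropy(gray_u8, levels=256):
        counts, _ = np.histogram(gray_u8, bins=levels, range=(0, levels))
        p = counts / counts.sum()            # P(b) = N(b) / M
        energy = np.sum(p ** 2)              # high when a few gray levels dominate
        nz = p[p > 0]
        entropy = -np.sum(nz * np.log2(nz))  # high when gray levels are spread out
        return energy, entropy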
For my current imaging project, which is rendering a diffuse light source, I'd like to consider energy as light energy, or radiation energy. The question I had initially: does an RGB "pixel value" represent light energy? It could be tested using a light intensity meter and generating successive screens with gray pixel values (n, n, n) for n = 0..255. According to the MATLAB forum, the radiated energy of one greyscale pixel is proportional to its pixel value, but it will vary slightly from pixel to pixel.
There is another assumption regarding energy: while performing the forward ray tracing, I accumulate a ray count at each sampled position hit. This ray count is, or preferably should be, proportional to the radiation energy that would hit the target at this position. In order to compare it to actual photographs, I'd have to normalize the ray count to some pixel value range (?). I enclose an example below; the energy source is a diffuse light emitter inside a dark cylinder.
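The normalization I have in mind is just a linear rescale of the accumulated counts into 0..255 (an assumption on my part, not validated against a real photograph; a proper comparison would probably also have to account for the camera's response curve / gamma):

    import numpy as np

    def counts_to_pixels(ray_counts):
        counts = ray_counts.astype(np.float64)
        if counts.max() > 0:
            counts = counts / counts.max()        # 0..1, proportional to hit density
        return np.round(counts * 255).astype(np.uint8)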
Energy in signal processing is the integral of the squared signal within the signal boundaries. The analogy for two-dimensional signals is that you can square the pixel values and sum over all the pixels.
Image energy can also be calculated in MATLAB from the gray-level co-occurrence matrix (note that this GLCM "energy" is a texture-uniformity measure, not the sum of squared pixel values):

    glcm = graycomatrix(i1);                  % gray-level co-occurrence matrix of image i1
    stats = graycoprops(glcm, {'Energy'});    % stats.Energy holds the GLCM energy
As far as I know, there are some functions in the CMOS image sensor ISP (Image Signal Processor).
Specifically, I'd like to know the difference between binning and sub-sampling. I think their purpose is the same: to reduce image size.
However, I'm not sure why both of these functions exist.
What is their purpose?
Binning and sub-sampling both reduce the image size, as you suspected, but they focus on different things. Let's tackle each one separately.
Binning
Binning in image processing deals primarily with quantization. The closest thing I can think of is related to what is known as data binning. Basically, consider breaking up your image into distinct (non-overlapping) M x N tiles, where M and N are the rows and columns of a tile and M and N should be much smaller than the rows and columns of the image.
If you consider any grid of M x N pixels, all of those pixels get replaced with a representative colour. This representative colour can be calculated in many ways... the average is a popular method. Binning is performed primarily as a data pre-processing technique to reduce the effects of minor observation errors. It effectively reduces the amount of information representing the image, and so it certainly reduces the image size by reducing the number of unique colours that represent the image.
In addition, binning the data may also reduce the impact of noise from the CMOS sensor on the final processed image, but at the cost of a lower dynamic range of colours.
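A rough sketch of that tile-based binning in Python (the tile mean as the representative colour, and the image dimensions assumed divisible by the tile size):

    import numpy as np

    def bin_image(gray, m, n):
        """Replace every m x n tile with its mean value (the image keeps its dimensions)."""
        h, w = gray.shape
        tiles = gray.reshape(h // m, m, w // n, n)
        means = tiles.mean(axis=(1, 3))                       # one representative value per tile
        return np.repeat(np.repeat(means, m, axis=0), n, axis=1)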
Sub-sampling
Sub-sampling in image processing mostly deals with image resizing; it's also called image scaling. The goal is to take an image and reduce its dimensions so that you get a smaller image as a result. Binning keeps the image the same size (i.e. the same dimensions as the original) while reducing the number of colours, which ultimately reduces the amount of space the image takes up; sub-sampling reduces the image size by removing information altogether. Usually when you subsample, you also interpolate or smooth the image so that you reduce aliasing (see the sketch below).
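Here is that sketch, assuming SciPy: blur first, then keep every k-th pixel, so high frequencies don't alias.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def subsample(gray, k=2):
        smoothed = gaussian_filter(gray, sigma=k / 2.0)  # rough anti-alias blur
        return smoothed[::k, ::k]                        # image becomes ~1/k the size per axis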
Sub-sampling has another application in video processing, especially in MPEG, where video is encoded in YCbCr. Y is the luminance while Cb and Cr are the chrominance pair. We tend to notice changes in luminance rather than chrominance; specifically, the human visual system has poorer acuity for colour information than for luminance / intensity, and so the chrominance is subsampled to reduce the amount of space taken up by the video. Usually the chrominance channels are filtered and then subsampled to 1/2 or even 1/4 of the resolution of the intensity. Even with a rather high subsampling rate, we don't notice any difference in perceived image quality.
This is obviously a rather rough introduction on the differences between them both, but I hope this gives you enough of what you're after for your purposes.
Good luck!
I know about Gaussians, variance, and image blurring, and I think I understand the concept of variance in Gaussian blur, but I'm still not 100% sure.
I just want to know the role of sigma (or variance) in Gaussian smoothing. I mean, what happens when we increase the value of sigma for the same window size, and why does it happen?
It would be really helpful if somebody could point me to some nice literature about it. (I already tried a few sources but couldn't find what I am looking for.)
Major confusion:
Higher frequency-> details (e.g. noise),
Lower Frequency-> kind of overview of the image.
By increasing sigma, we are allowing some higher frequencies... so we should get more detail with increasing frequency, but the opposite is the case: when we increase sigma, the image becomes more blurry.
I think this should be explained in the following steps, first from the signal processing point of view:
A Gaussian filter is a low-pass filter. Low-pass filters, as their name implies, pass (keep) low frequencies. When we look at an image in the frequency domain, the highest frequencies occur at edges (places where there is a large change in intensity).
The role of sigma in the Gaussian filter is to control the variation around its mean value: as sigma becomes larger, more variation is allowed around the mean, and as sigma becomes smaller, less variation is allowed.
Filtering in the spatial domain is done through convolution; it simply means that we apply a kernel to every pixel in the image. There is a rule for smoothing kernels such as the Gaussian: their weights have to sum to one, so the overall brightness is preserved (it is derivative/high-pass kernels whose weights sum to zero).
Now, putting it all together: when we apply a Gaussian filter to an image, we are doing low-pass filtering. But as you know, this happens in the discrete domain (image pixels), so we have to quantize our Gaussian filter in order to make a Gaussian kernel. In this quantization step, when the Gaussian filter has a small sigma it has the steepest peak, so more of the weight is concentrated at the center and less around it; with a large sigma the weight is spread over the neighbours, and the result is more blurring.
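A small sketch to see that effect of sigma on the quantized kernel weights (plain NumPy, 1-D for readability; the 2-D kernel is just the outer product of this with itself):

    import numpy as np

    def gaussian_kernel_1d(sigma, size=7):
        x = np.arange(size) - size // 2
        k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        return k / k.sum()                        # smoothing kernel: weights sum to 1

    print(np.round(gaussian_kernel_1d(0.5), 3))   # sharp peak: almost all weight at the center
    print(np.round(gaussian_kernel_1d(3.0), 3))   # flat: weight spread over the neighbours

With sigma = 0.5 almost all the weight sits at the center sample, so each output pixel stays close to its original value; with sigma = 3 the weight is spread over the neighbours, which is why the result gets blurrier.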
In the sense of natural image statistics: scientists in this field have shown that our visual system acts as a kind of Gaussian filter in its responses to images. For example, take in a broad scene without paying attention to a specific point: you see a broad scene with lots of things in it, but the details are not clear. Now look at a specific point in that scene: you see details you didn't see before. Sigma plays the same role here: when you increase sigma you are looking at the broad scene without paying attention to the details, and when you decrease it you get more detail.
I think Wikipedia can help more than me: Low Pass Filters, Gaussian Blur.
Put simply, increasing the sigma term casts a broader net over the neighboring pixels and decreases the relative impact of the pixels nearest the pixel of interest, i.e. it makes a blurrier image.
I wish to use Stein's Unbiased Risk Estimate (SURE) for denoising signals.
I have a 1-Dimensional signal. I am using wavelets to decompose the signal into multiple levels of approximate and detail coefficients.
For denoising the original signal, do I need to apply thresholding to every level of detail coefficients, or will doing it on the last level of detail coefficients do the job?
Thresholding is usually applied to all the frequencies of a signal because the procedure exploits the fact that the wavelet transform maps white noise (purely random, uncorrelated, constant power spectral density noise) in the signal domain to white noise in the transform domain, so the noise is spread across all the different frequencies. Thus, while signal energy becomes concentrated into fewer coefficients in the transform domain, noise energy does not. Other noise types with different spectral properties will map differently, and this is where the selection of the thresholding procedure becomes important.
Thresholding only the highest decomposition level (lowest frequencies) while leaving the lower levels (higher frequencies) untouched sounds a little strange if you want to reconstruct the signal.
However, you could also extract a level and denoise its related range of frequencies (e.g. from level 1 to level 2) if there is a particular range of frequencies you are interested in.
Regarding the thresholding function, be aware in any case that SURE gives different results depending on the types of noise the signal contains. For example, it will reduce the distribution of white noise in horizontal components but will only decrease large amplitudes. For signals where, together with white noise, you may have other noise colours like random walk and flicker noise, SURE is not an effective procedure.
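For what it's worth, here is a sketch (using PyWavelets) of thresholding every level of detail coefficients, which is the usual approach. The threshold here is the simple universal (VisuShrink) threshold as a stand-in; with SURE you would instead compute the threshold per level from the coefficients themselves. The signal is assumed to be long enough for the chosen decomposition level:

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet='db4', level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise std estimated from finest details
        thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold (stand-in for SURE)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)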
The blending modes Screen, Color Dodge, Soft Light, etc., like in Photoshop, each have their own math that works for the range 0-1. I wonder how these blend modes work for HDR images?
Thanks
I am not familiar with Photoshop and its filters, but here is a general explanation of the math behind HDR processing.
Suppose you have 3 images (low, medium, and over-exposed). You want to average those images, but (I1+I2+I3)/3 is a naive way to do it. You want to give a higher weight to the image that captures more information in a given area.
So basically you average the images with a weight factor, and there are different algorithms to calculate the weights. Here are a few:
The simplest one uses the STD (standard deviation). For each pixel in each image, calculate the standard deviation of its 3x3 neighbourhood (9 neighbours) and use that std as the weight:
HDR_pixel(i,j) = (I1(i,j)*stdI1(i,j) + I2(i,j)*stdI2(i,j) + I3(i,j)*stdI3(i,j)) / (stdI1(i,j) + stdI2(i,j) + stdI3(i,j)) (dividing by the sum of the weights so it stays a weighted average).
Why is std used? When the std is high there is a large variation in pixel intensity, which means more information was captured by that image in that area.
Instead of STD you can use an entropy filter, edge detection, or anything else that represents how much information is encoded around the given pixel.
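A sketch of that std-weighted average, assuming three aligned grayscale float exposures and SciPy for the local statistics:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(img, size=3):
        # std over the size x size neighbourhood via E[x^2] - E[x]^2
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    def fuse(exposures, eps=1e-6):
        weights = [local_std(img) + eps for img in exposures]  # eps avoids division by zero in flat areas
        total = sum(weights)
        return sum(w * img for w, img in zip(weights, exposures)) / total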
There are also slower but better ways to do HDR. Usually it is done with some kind of multi-scale or frequency transformation, for example the Fourier transform: each image is converted to Fourier space (coefficients per frequency), and then, for each frequency, the maximal coefficient of the 3 images is taken.
You can even combine the std filter and the transform methods. For example, break the image into different frequency bands; for the lower frequencies, smooth heavily and take a simple average (I1+I2+I3)/3, but for the high frequencies use less smoothing and a std-weighted average. Smoothing the lower frequencies more is called 'blending'; it is heavily used when stitching 2 images of different exposure into a panorama.
Look at this image: http://magazine.magix.com/en/wp-content/uploads/2012/05/Panorama-3.jpg
You can clearly see that the sky gets a different colour in each image, but since the sky is very low frequency (almost no information and no small objects) it is heavily smoothed and averaged, allowing gentle stitching.
Hope that answers your question
OpenCV has a handy cvEqualizeHist() function that works great on faded/low-contrast images.
However when an already high-contrast image is given, the result is a low-contrast one. I got the reason - the histogram being distributed evenly and stuff.
Question is - how do I get to know the difference between a low-contrast and a high-contrast image?
I'm operating on grayscale images and setting their contrast properly so that thresholding them won't delete the text I'm supposed to extract (that's a different story).
Suggestions welcome - especially on how to find out if the majority of the pixels in the image are light gray (which would mean that histogram equalization should be performed).
Please help!
EDIT: thanks everyone for many informative answers. But the standard deviation calculation was sufficient for my requirements and hence I'm taking that to be the answer to my query.
You can probably just use a simple statistical measure of the image to determine whether it has sufficient contrast. The variance (or standard deviation) of the image would be a good starting point: if it is below a certain threshold (to be determined empirically), you can consider the image "low contrast".
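A sketch of that with OpenCV/NumPy (the threshold value is something you would have to tune empirically for your images):

    import cv2
    import numpy as np

    def maybe_equalize(gray, std_threshold=40.0):
        if np.std(gray) < std_threshold:     # low spread of intensities -> low contrast
            return cv2.equalizeHist(gray)    # gray must be 8-bit, single channel
        return gray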
If you're adjusting contrast just so you can threshold later on, you may be able to skip the contrast adjustment step altogether by setting your threshold adaptively using Otsu's method.
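For example (the filename is just a placeholder; Otsu expects an 8-bit grayscale image):

    import cv2

    gray = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)
    # Otsu picks the threshold from the histogram automatically; 0 is ignored.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)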
If you're still interested in finding out the image contrast, then read on.
There are a number of different ways to calculate "contrast". Often, these metrics are applied locally rather than to the entire image, to make the result more sensitive to image content:
Divide the image into adjacent, non-overlapping neighborhoods.
Pick neighborhood sizes that approximate the size of the features in your image (e.g. if your main feature is horizontal text, make the neighborhoods tall enough to capture 2 lines of text, and about as wide).
Apply the metric to each neighborhood individually
Threshold the metric result to separate low- and high-variance blocks. This will prevent things such as large, blank areas of the page from skewing your contrast estimates.
From there, you can use a number of features to determine contrast:
The proportion of high metric blocks to low metric blocks
High metric block mean
Intensity distance between the high and low metric blocks (using means, modes, etc)
This may serve as a better indication of image contrast than global image variance alone. Here's why:
[image 1] (stddev: 50.6)
[image 2] (stddev: 7.9)
The two images are perfectly in contrast (the grey background is just there to make it obvious it's an image), but their standard deviations (and thus variance) are completely different.
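A rough sketch of the block-based approach (NumPy only; the block size and the "low metric" cut-off are placeholders you would tune for your content, and std is used as the per-block metric):

    import numpy as np

    def block_contrast_features(gray, bh=32, bw=32, low_cut=25.0):
        h, w = gray.shape
        # Crop to a whole number of blocks, then split into bh x bw tiles.
        blocks = gray[:h - h % bh, :w - w % bw].reshape(h // bh, bh, w // bw, bw)
        stds = blocks.std(axis=(1, 3)).ravel()          # one metric value per block
        high = stds[stds >= low_cut]
        low = stds[stds < low_cut]
        ratio = len(high) / max(len(low), 1)            # proportion of high- to low-metric blocks
        high_mean = high.mean() if len(high) else 0.0   # mean metric of the high blocks
        return ratio, high_mean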
Calculate cumulative histogram of image.
Make linear regression of cumulative histogram in the form y(x) = A*x + B.
Calculate RMSE of real_cumulative_frequency(x)-y(x).
If that RMSE is close to zero, the image is already equalized. (That means that for equalized images the cumulative histogram must be linear.)
Idea is taken from here.
EDIT:
I've illustrated this approach in my blog (C example code included).
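The same idea as a NumPy sketch (the blog post has C example code; this is just the steps above):

    import numpy as np

    def equalization_rmse(gray_u8):
        counts, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
        cum = np.cumsum(counts) / counts.sum()             # real cumulative frequency, 0..1
        x = np.arange(256)
        a, b = np.polyfit(x, cum, 1)                       # linear regression y(x) = A*x + B
        return np.sqrt(np.mean((cum - (a * x + b)) ** 2))  # RMSE near 0 -> already equalized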
There is support for this in skimage: skimage.exposure.is_low_contrast (see the reference documentation).
Example:
>>> import numpy as np
>>> from skimage.exposure import is_low_contrast
>>> image = np.linspace(0, 0.04, 100)
>>> is_low_contrast(image)
True
>>> image[-1] = 1
>>> is_low_contrast(image)
True
>>> is_low_contrast(image, upper_percentile=100)
False