Can I detect the intensity or the amount of light in a picture?
For example, I have some pictures which were captured in the morning, in the afternoon and shortly before sunset, and I want to know the amount of light in each.
I just need an idea of how to do it. I also have access to the camera gain, exposure and other parameters.
The camera I am using is the ZED Camera.
I understand the formula which converts from RGB space to luminance space as stated here, but I'm not sure whether it's an efficient solution or not.
You seem not to be answering my questions in the comments section for some reason, so I am not sure exactly what you are trying to do, but it seems to be along the lines of determining the general brightness of the sky or some such.
So, firstly, you need to determine the average brightness/lightness within your image. For this step, you can convert to Lab or HSL colourspace using cvtColor() and then get the mean value of the L channel over the entire image using mean() or meanStdDev().
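A minimal sketch of that first step with OpenCV's Python bindings, assuming an 8-bit BGR input (the file name is a placeholder):

    import cv2

    img = cv2.imread("frame.png")                 # 8-bit BGR image (placeholder path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)    # L is scaled to 0..255 for 8-bit images
    mean_L = cv2.mean(lab)[0]                     # average lightness over the entire image
    print("mean lightness:", mean_L)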
Now, and I guess this is what your question is actually about, you need to correct for the exposure since the exposure may vary between two images. So, the three things that affect exposure are ISO (a.k.a. film sensitivity), lens aperture and length of exposure.
Basically, every f-stop of aperture change represents a halving or doubling of the area of the lens aperture, and therefore a doubling or halving of the amount of light that hits the sensor. So f4 lets in 2x the light of f5.6, which lets in twice the light of f8, and so on. Notice that the f-numbers of successive full stops differ by a factor of sqrt(2).
Likewise with ISO: each time the ISO (i.e. the sensitivity) doubles, the recorded exposure doubles.
Likewise with the exposure time: half a second is twice as long as a quarter of a second.
So, basically, you have your mean Lightness value, and you need to correct it for aperture, ISO and exposure duration. In effect, you must normalise your images to a standard aperture, ISO and exposure time. For every stop of aperture your image differs from your standard, you must double or halve the mean Lightness. For every ISO step your image differs from your standard, you must double or halve the mean Lightness. And whenever your exposure duration differs from your standardised, normalised duration, you must multiply the mean Lightness by the ratio of your standard duration to the current image's duration.
Then your mean lightnesses will be comparable with one another.
(Filters will also affect exposure, but I presume you are not adding or removing filters between exposures.)
In answer to your comment, I have never seen anyone write a formula as such, but my comments amount to this:
L * 2^aperture
--------------
ISO * time
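A small sketch of that correction in Python. The reference settings, the example numbers, and the reading of "aperture" as full stops relative to the reference f-number are assumptions, not part of the answer above:

    import math

    def normalised_lightness(mean_L, f_number, iso, shutter_s,
                             ref_f=8.0, ref_iso=100, ref_shutter=1/100):
        # Each full stop changes the f-number by sqrt(2) and the light by a factor of 2.
        stops = 2 * math.log2(ref_f / f_number)
        light_ratio = (2 ** stops) * (iso / ref_iso) * (shutter_s / ref_shutter)
        return mean_L / light_ratio   # divide out the extra (or missing) exposure

    # f/4, ISO 400, 1/50 s gathers 4 * 4 * 2 = 32x the light of f/8, ISO 100, 1/100 s,
    # so its mean Lightness is divided by 32 before being compared.
    print(normalised_lightness(120.0, f_number=4.0, iso=400, shutter_s=1/50))

Note that this treats the mean Lightness as roughly linear in the amount of light; any gamma applied to the L channel will make the comparison approximate.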
In order to measure the luminance of an image, I would suggest working with the LAB color space. The L channel (lightness) represents the amount of light present in the image.
A few merits:
Since the L channel deals with the light intensity of the image, modifying it enhances the image.
Research studies also say that the L channel closely resembles the way we humans perceive light intensity in the real world.
All you can hope to measure with your camera and some images is the relative luminance, even if you have the camera settings.
https://en.wikipedia.org/wiki/Relative_luminance
If you want to know the amount of light in absolute radiometric units, you're going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see:
Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf
I am working on an application that does motion detection. Basically, I take a picture of the background of the scene I am watching as a reference, and I compare it to the frames I get from the camera.
I use this website to implement my app. The frames are strongly affected by changes in luminosity and so on. According to it, I can improve the result of the differential computation using tonal equalization techniques; search for "tonal registration" on that page for more information.
I am working with HSV images.
Based on my research, I found histogram equalization, but it does not appear to yield good results. I thought about using HDR, but I don't know whether it is really relevant given that I am working on a real-time video surveillance system (no time to "waste"; I cannot afford to capture frames at different exposures).
Which technique would allow me to make the photometric range of my images as consistent as possible from frame to frame?
I once participated in the development of a surveillance system that had a similar problem. Instead of histogram equalization it used a different approach. The algorithm was as follows:
For every pixel:
- Calculate the average luminosity around the pixel within some window (e.g. a 50-pixel square).
- Calculate a multiplier as some fixed value (like 128 for byte-sized components) divided by the average from the previous step. Account for possible division by zero here, so Multiplier = 128 / (Average + small value).
- Multiply the pixel luminosity by the multiplier.
The averages can be calculated incrementally, in one pass over the whole frame and in real time. This removed luminosity variations from clouds, changing weather, etc. rather effectively.
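A rough sketch of that per-pixel normalisation with OpenCV; the window size, the target value of 128 and the epsilon follow the description above, but the box filter as the averaging method is my assumption:

    import cv2
    import numpy as np

    def normalise_luminosity(gray, window=51, target=128.0, eps=1.0):
        local_avg = cv2.blur(gray.astype(np.float32), (window, window))  # average around each pixel
        multiplier = target / (local_avg + eps)                          # avoid division by zero
        return np.clip(gray * multiplier, 0, 255).astype(np.uint8)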
I am doing a project on stereo vision; basically, the system should estimate distances in real time to avoid collisions. The thing is, I am not able to decide on the proper baseline value. In the formula, what is the value of the disparity?
Look at the formula that relates the baseline to the disparity:
D = focal * Baseline / z
where the focal length is in pixels, the Baseline is in mm, and z, a distance along the optical axis, is also in mm. Pick your Baseline so that you have a few pixels of disparity at the longest working distance. Also keep in mind that although a long Baseline will accomplish this, at closer distances you will have a larger dead zone where the cameras' fields of view do not overlap enough for a meaningful disparity calculation.
Also, when selecting the resolution of your images, don't go too high, since stereo processing is very intensive and higher resolutions may have stronger noise. Typically people don't use color in stereo matching for the same reason. For your task, an algorithm that uses gray VGA images and runs at 20 fps or more with a Baseline of 40-60 cm may be a reasonable choice, given a vehicle speed below 40 mph.
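A small sketch of that sizing calculation; the focal length, working distance and minimum disparity below are assumptions chosen to roughly match the numbers above:

    def min_baseline_mm(focal_px, max_distance_mm, min_disparity_px):
        # From D = focal * Baseline / z  =>  Baseline = D * z / focal
        return min_disparity_px * max_distance_mm / focal_px

    # e.g. ~700 px focal length for a VGA sensor, 50 m working distance, 7 px of disparity
    print(min_baseline_mm(focal_px=700.0, max_distance_mm=50000.0, min_disparity_px=7.0))  # 500 mm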
I want to measure the light intensity of microscope images with a BW camera attached to the microscope. My purpose is to compare particular images with each other concerning their brightness. I'm neither interested in measuring absolute light intensity nor in units.
I think the function should use the exposure and some brightness-related metric (e.g. thresholded histogram width or pixel-value mean).
My first attempt: 1/exposure * brightness works for smaller exposure ranges.
The exposure is a real number in [0.001..0.6]; the brightness is a natural number in [0..255].
Is there a formula for calculating the light intensity received by camera having these two figures?
Many thanks for suggestions!
P.S.:
Currently I estimate the intensity using fuzzy-logic. It works, but the calibration is not flexible.
EDIT:
I've got additional information from the camera manufacturer: the response to light is linear when the pixel values are within the range 50-200.
You say "I'm neither interested in measuring absolute light intensity nor in units.". So I guess you only want to answer questions like: "The light source in this image was shining N-times as bright as in this other image: what is N?".
Of course estimating an answer to such a question from images makes sense only if everything else stays (approximately) the same: microscope, camera, transmission (or reflection) of the imaged sample, etc. Is this the case?
If the content of the images is approximately the same, I'd just start by comparing image-wide statistics: the ratio of the median/average/n-th quantile intensities, and see if there is a common shift. Be careful if your images are 8-bit per channel: you will probably have to linearize them by removing whatever gamma compression was applied before computing the stats.
As you noticed, however, things get more complicated when the variation in exposure increases, probably because of nonlinear effects (cutoff at the lower end or saturation at the higher end).
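A sketch of that statistic comparison; the gamma value of 2.2 is an assumption, and the camera's actual transfer curve should be used instead if it is known:

    import numpy as np

    def relative_brightness(img_a, img_b, gamma=2.2):
        lin_a = (img_a.astype(np.float64) / 255.0) ** gamma   # undo the gamma compression
        lin_b = (img_b.astype(np.float64) / 255.0) ** gamma
        return np.median(lin_a) / np.median(lin_b)            # "a was N times as bright as b"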
The exposure might be in seconds or some unit of time. Then your first attempt should be right:
1/exposure * brightness
A possible problem might be the flash. If it flashes during 10ms and your exposure is <10ms there is no problem, but if your exposure is >10ms, then the result will be similar to 10ms (depending on how much light there is in the room apart from the flash).
I would take several pictures of the same object while changing the exposure. Plot brightness vs. exposure: it should be a straight line; if not, the shape might give you some clues. If at some point the line flattens, the flash probably does not last long enough.
A flash can have other issues too, like bad synchronization with the exposure, or non-uniform brightness over the exposure (it might take a fraction of a millisecond to reach maximum brightness, for example).
If there is no flash, gamma correction could be the problem, as has been suggested. In any case, a plot of brightness vs. exposure may help.
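A sketch of that diagnostic plot, assuming matplotlib is available; the exposure series and the brightness values below are purely hypothetical:

    import matplotlib.pyplot as plt

    exposures  = [0.001, 0.005, 0.02, 0.1, 0.3, 0.6]   # seconds (hypothetical series)
    brightness = [5, 24, 95, 180, 230, 250]            # mean pixel value measured per shot

    plt.plot(exposures, brightness, "o-")
    plt.xlabel("exposure time (s)")
    plt.ylabel("mean brightness")
    plt.show()

A curve that flattens at long exposures points to sensor saturation or to the flash being shorter than the exposure.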
I've read several Image Processing books and websites, but I'm still not sure of the true definition of the term "energy" in Image Processing. I've found several definitions, but sometimes they just don't match.
When we say "energy" in Image processing, what are we implying?
The energy is a measure of the localized change of the image.
Energy goes by a bunch of different names and appears in a lot of different contexts, but it tends to refer to the same thing: the rate of change in the color/brightness/magnitude of the pixels over local areas. This is especially true for the edges of the things inside the image, and because of the nature of compression, these areas are the hardest to compress; it's therefore a solid guess that they are the more important parts, often edges or quick gradients. The contexts differ, but they refer to the same thing.
The seam carving algorithm uses an energy measure (the gradient magnitude) to find the seams that would be least noticed if removed. JPEG represents a local cluster of pixels relative to the energy of the first one. The Snake algorithm uses it to find the local contoured edge of a thing in the image. So there are a lot of different definitions, but they all refer to the sort of oomph of the image, whether that's the sum of the local pixels in terms of the square of absolute brightness, the hard bits to compress in a JPEG, the edges in Canny edge detection, or the gradient magnitude:
The important bit is that energy is where the stuff is.
More broadly, the energy of an image is the sum of the distances, in some quality, between the pixels of some locality.
We can take the sum of the LabDE2000 color distances within a properly weighted 2D Gaussian kernel. Here the distances are summed together, the locality is defined by a Gaussian kernel, the quality is color, and the distance is the Lab Delta E formula from the year 2000. (Errata: this previously claimed the E stood for Euclidean; the standard Delta E distance is Euclidean, but the 94 and 2000 formulas are not strictly Euclidean, and the 'E' stands for Empfindung, German for "sensation".) We could also add up a local 3x3 kernel of the differences in brightness, or of the square of brightness, etc. We need to measure the localized change of the image.
In this example, locality is defined by a 2D Gaussian kernel and the color distance by the LabDE2000 formula.
If you took an image and, for some reason, sorted all of its pixels by color, you would reduce the energy of the image. You could take a collection of 50% black pixels and 50% white pixels and arrange them as random noise for maximal energy, or place them on two sides of the image for minimal energy. Likewise, if you had 100% white pixels, the energy would be 0 no matter how you arranged them.
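A quick sketch of that black-and-white example, using the summed absolute differences between neighbouring pixels as a simple stand-in for "energy":

    import numpy as np

    def local_change_energy(img):
        img = img.astype(np.float64)
        return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()

    halves = np.zeros((100, 100)); halves[:, 50:] = 255                 # two solid halves
    noise = np.random.permutation(halves.ravel()).reshape(halves.shape) # same pixels, shuffled

    print(local_change_energy(halves), local_change_energy(noise))      # halves << noise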
It depends on the context, but in general, in Signal Processing, "energy" corresponds to the mean squared value of the signal (typically measured with respect to the global mean value). This concept is usually associated with Parseval's theorem, which allows us to think of the total energy as distributed along "frequencies" (and so one can say, for example, that an image has most of its energy concentrated in low frequencies).
Another, related, use is in image transforms: for example, the DCT transform (the basis of the JPEG compression method) transforms a block of pixels (an 8x8 sub-image) into a matrix of transformed coefficients. For typical images it turns out that, while the original 8x8 block has its energy evenly distributed among the 64 pixels, the transformed block has its energy concentrated in the upper-left "pixels" (which, again, correspond to "low frequencies", in some analogous sense).
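A small sketch of that energy-compaction effect, assuming SciPy is available; the smooth synthetic 8x8 block stands in for a typical image tile:

    import numpy as np
    from scipy.fft import dctn

    block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 gradient
    coeffs = dctn(block, norm="ortho")                     # orthonormal DCT preserves total energy

    total = np.sum(coeffs ** 2)
    low_freq = np.sum(coeffs[:2, :2] ** 2)                 # upper-left "low frequency" corner
    print(low_freq / total)                                # close to 1 for smooth content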
Energy is a fairly loose term used to describe any user-defined function (in the image domain).
The motivation for using the term 'energy' is that typical object detection/segmentation tasks are posed as an energy minimization problem: we define an energy that captures the solution we desire and perform gradient descent to compute its lowest value, resulting in a solution for the image segmentation.
There is more than one definition of "energy" in image processing, so it depends on the context of where it was used.
Energy is used to describe a measure of "information" when formulating an operation under a probability framework such as MAP (maximum a posteriori) estimation in conjunction with Markov Random Fields. Sometimes the energy is a negative measure to be minimised and sometimes it is a positive measure to be maximised.
If you consider that (for natural images captured by cameras) light is a form of energy, you may call the value of a pixel in some channel its energy.
However, I think that by energy the books are referring to the spectral density. From wikipedia:
The energy spectral density describes how the energy (or variance) of a signal or a time series is distributed with frequency
http://en.wikipedia.org/wiki/Spectral_density
Going back to my chemistry - Energy and Entropy are closely related terms. And Entropy and Randomness are also closely related. So in Image Processing, Energy might be similar to Randomness. For example, a picture of a plain wall has low energy, while the picture of a city taken from a helicopter might have high energy.
Image "energy" should be inversely proportional to Shannon entropy of image. But as already said image energy is loosely coupled term, it is better use "compressibility" term instead. That is - high image "energy" should correspond to high image compressibility.
http://lcni.uoregon.edu/~mark/Stat_mech/thermodynamic_entropy_and_information.html
Energy is like the "information present in the image". Compression of images causes energy loss. I guess it's something like that.
Energy is defined based on a normalized histogram of the image. Energy shows how the gray levels are distributed: when the number of gray levels is low, the energy is high.
The Snake algorithm is an image processing technique used to determine the contour of an object. The snake is nothing but a vector of (x, y) points with some constraints; its final goal is to surround the object and describe its shape (contour), and then to track or represent the object by its shape.
The algorithm has two kinds of energies, internal and external.
The internal energy (IE, the snake's energy) is a user-defined energy which acts on the snake (internally) to impose constraints on the smoothness of the snake. Without such a force, the snake's shape would end up exactly matching the shape of the object, which is not desirable, because the exact shape of an object is very difficult to obtain due to lighting conditions, image quality, noise, etc.
The external energy (EE) arises from the data (the image intensities); it is nothing but the absolute difference of the intensities in the x and y directions (the intensity gradient) multiplied by -1, so that it can be summed with the internal energy, because the total energy must be minimized. The total energy over all of the snake's points should be minimized. Ideally this happens at edges, because the gradient (and hence EE before the sign flip) is maximal there; since it is multiplied by -1, the total energy of the snake around the nearest object is minimized, and thus the algorithm converges to a solution, which is hopefully the true contour of the studied object.
Because this algorithm relies on EE, which is high not only on edges but also on noisy points, the snake sometimes does not converge to an optimal solution; that is why it is an approximate, greedy algorithm.
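As a minimal sketch, the external energy term described above can be realized with an image gradient; the Sobel operator below is my assumption for "the intensity gradient":

    import cv2
    import numpy as np

    def external_energy(gray):
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)   # intensity gradient in x
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)   # intensity gradient in y
        return -(np.abs(gx) + np.abs(gy))        # minimal (most attractive) on strong edges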
I found this in an image processing book:
Energy: S_N = sum (from b=0 to b=L-1) of [P(b)]^2
where P(b) = N(b) / M, M is the total number of pixels in a neighborhood window centered about (j,k), and N(b) is the number of pixels of amplitude b in the same window.
It may give a better understanding if we see this equation together with entropy:
Entropy: S_E = - sum (from b=0 to b=L-1) of P(b) * log2(P(b))
Source: pp. 538-539, Digital Image Processing, William K. Pratt (4th edition).
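A small sketch of those two definitions, computed over a whole grayscale image rather than a sliding neighborhood window (that simplification is mine):

    import numpy as np

    def histogram_energy_and_entropy(gray, levels=256):
        # gray: 8-bit grayscale image as a NumPy integer array
        counts = np.bincount(gray.ravel(), minlength=levels)
        p = counts / counts.sum()                           # P(b) = N(b) / M
        energy = np.sum(p ** 2)                             # S_N
        p_nonzero = p[p > 0]
        entropy = -np.sum(p_nonzero * np.log2(p_nonzero))   # S_E
        return energy, entropy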
For my current imaging project, which is rendering a diffuse light source, I'd like to consider energy as light energy, or radiation energy. The question I had initially: does an RGB "pixel value" represent light energy? It could be tested using a light intensity meter and generating successive screens with gray pixel values (n, n, n) for n in 0..255. According to a MATLAB forum post, the radiated energy of one grayscale pixel is always proportional to its pixel value, but it will vary slightly from pixel to pixel.
There is another assumption regarding energy: while performing forward ray tracing, I obtain a ray count at each sampled position that is hit. This ray count is, or preferably should be, proportional to the radiation energy that would hit the target at this position. In order to compare it to actual photographs, I'd have to normalize the ray count to some pixel value range (?). I enclose an example below; the energy source is a diffuse light emitter inside a dark cylinder.
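One plausible normalisation of the ray-count map to an 8-bit range, as a sketch; the linear scaling is an assumption, and matching real photographs may require the camera's response curve instead:

    import numpy as np

    def ray_counts_to_pixels(counts):
        counts = counts.astype(np.float64)
        return np.round(255.0 * counts / counts.max()).astype(np.uint8)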
Energy in signal processing is the integral of the squared signal over the signal's boundaries. The analogy for two-dimensional signals is that you can square the pixel values and sum over all the pixels.
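A one-line sketch of that analogy for a 2D image array:

    import numpy as np

    def signal_energy(img):
        return np.sum(img.astype(np.float64) ** 2)   # sum of squared pixel values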
In MATLAB, image energy (in the texture sense) is calculated from a gray-level co-occurrence matrix rather than from the image directly:
glcm = graycomatrix(i1);
image_energy = graycoprops(glcm, {'Energy'})