Measure luminance of the light from a camera - opencv

Can I detect the intensity or the amount of light in a picture?
For example,
I have some pictures captured in the morning, in the afternoon and just before sunset, and I want to know the amount of light in each.
I just need an idea of how to do it. I also have access to the camera's gain, exposure and other parameters.
The camera which I am using is the ZED Camera.
I understand the formula that converts from RGB space to luminance space, as stated here, but I'm not sure whether it's an efficient solution.

You seem not to be answering my question in the comments section for some reason, so I am not sure what you are trying to do, but it seems to be along the lines of determining the general brightness of the sky or some such.
So, firstly, you need to determine the average brightness/lightness within your image. For this step, you can convert to Lab or HSL colourspace using cvtColor() and then get the mean value of the L channel over the entire image using mean() or meanStdDev().
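As a rough illustration, here is a minimal Python/OpenCV sketch of that averaging step (the input filename is a placeholder):

    import cv2

    img = cv2.imread("frame.png")                # 8-bit BGR image (placeholder name)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)   # convert to Lab colourspace
    mean_L = cv2.mean(lab)[0]                    # mean of the L channel, 0..255
    print("mean lightness:", mean_L)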
Now, and I guess this is what your question is actually about, you need to correct for the exposure since the exposure may vary between two images. So, the three things that affect exposure are ISO (a.k.a. film sensitivity), lens aperture and length of exposure.
Basically, every f-stop of aperture change represents a halving or doubling of the area of the lens aperture, and therefore a doubling or halving of the amount of light that hits the sensor. So, f4 lets in 2x the light of f5.6, which in turn lets in twice the light of f8, and so on. Notice that the f-numbers of consecutive full stops differ by a factor of sqrt(2).
Likewise with ISO, each time the ISO doubles (or the sensitivity doubles) the amount of light doubles.
Likewise with the exposure time: half a second is twice as long as a quarter of a second.
So, basically, you have your mean Lightness value and you need to correct it for aperture, ISO and exposure duration. In effect, you must normalise your images to a standard aperture, ISO and exposure time. For every stop your image's aperture differs from the standard, double or halve the mean Lightness. For every doubling or halving of ISO relative to the standard, halve or double the mean Lightness. And for the exposure duration, multiply the mean Lightness by the ratio of the standard duration to the current image's duration.
Then your mean lightnesses will be comparable with one another.
(Filters will also affect exposure, but I presume you are not adding or removing filters between exposures.)
In answer to your comment, I have never seen anyone write a formula as such, but my comments amount to this:
L * 2^aperture
--------------
ISO * time
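As a hedged sketch of that normalisation in Python (treating the aperture term in stops relative to a reference; the reference values below are arbitrary examples, not part of the answer):

    import math

    def normalise_lightness(mean_L, f_number, iso, time_s,
                            ref_f=5.6, ref_iso=100, ref_time=1/125):
        # full stops of aperture difference: f-numbers change by sqrt(2) per stop
        stops = 2 * math.log2(f_number / ref_f)
        corrected = mean_L * (2 ** stops)      # a smaller aperture gathered less light
        corrected *= ref_iso / iso             # undo the sensitivity difference
        corrected *= ref_time / time_s         # undo the shutter-time difference
        return corrected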

In order to measure the luminance of an image, I would suggest working with the LAB color space. The L channel (light) represents the amount of light present in the image.
A few merits:
- Since the L channel deals with the light intensity of the image, modifying it enhances the image.
- Research also suggests that the L channel closely resembles the way we humans perceive light intensity in the real world.
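For example, a common enhancement along these lines (my own sketch, not taken from the answer) is to run CLAHE on the L channel only and convert back:

    import cv2

    img = cv2.imread("input.png")                     # placeholder filename
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    L, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(L), a, b))           # enhance lightness only
    enhanced = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)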

All you can hope to measure with your camera and some images is the relative luminance, even if you have the camera settings.
https://en.wikipedia.org/wiki/Relative_luminance
If you want to know the amount of light in absolute radiometric units, you're going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see:
Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf

Related

calculate auto exposure in OpenCV

I have software working with an industrial-grade camera with a manual lens (focus and aperture are set manually).
I can control the camera's exposure time and gain.
I did some histogram analysis to check the exposure of the image.
Now I am looking for a method to convert the mean grayscale intensity into an exposure value.
The goal is to calculate an exposure time for a fixed aperture setting under the current lighting conditions. Since the exposure value is Ev = Av + Tv (Av: aperture value, i.e. f-stops; Tv: time value, i.e. exposure time), I hope there is some conversion from grayscale intensity into exposure value.
I would like to give you my solution.
What I have found out is that normal cameras are not capable of measuring brightness. In general, an image sensor cannot distinguish between bright colours such as white and genuinely bright lighting conditions.
Anyway, I implemented a histogram that measures the grayscale intensity. The mean value is then extracted and scaled to a range of 256. The goal is to have a mean value of 128.
So I use the measured histogram mean value as input for a PI controller which controls the exposure time.
This way I create a link between the histogram mean value and the exposure time.
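A hedged sketch of that loop, assuming a grayscale frame and a camera whose exposure time you set yourself; the gains and limits are made-up values to tune:

    import numpy as np

    KP, KI = 0.002, 0.0005        # PI gains (tune for your camera)
    TARGET = 128.0                # desired histogram mean
    integral = 0.0
    exposure_ms = 10.0

    def update_exposure(gray_frame):
        global integral, exposure_ms
        mean = float(np.mean(gray_frame))          # histogram mean of the frame
        error = TARGET - mean
        integral += error
        exposure_ms += KP * error + KI * integral
        exposure_ms = float(np.clip(exposure_ms, 0.05, 100.0))  # keep in a sane range
        return exposure_ms     # hand this to your camera's exposure setter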
I think you may want to consider the histogram as providing the dynamic range required. Ansel Adams, and others, sometimes referred to this as Zones; your 18% is Zone V (5). If your image's dynamic range is clipping at the high value (255) or at the minimum (0), then you may need 2 or more images... one with less Ev (a higher f-number or less exposure) so the highlights are unclipped (not "blown out", in photo speak), and another shot with more exposure to ensure that shadow detail is not lost (not "blocked", in photo speak).
If you have an image that is within the 1-255 range, then you could rescale the image to have a mean value of 18%, or have some calibration from light reading to exposure or Ev.
Generally you want the histogram both for computing the mean (EV or crest factor) and as a way to find the min/max, in order to determine whether you need more exposure/gain or less.
Of course, if your picture is flat with respect to brightness, then 128 is perfect. If there are a few bright sources and in general a "normal scene", then a mean value closer to 18% is statistically better (~46).
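A rough sketch of that check (the 0.5% clipping thresholds are arbitrary example values):

    import numpy as np

    def exposure_report(gray):
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        total = gray.size
        blown = hist[255] / total > 0.005     # highlights clipped ("blown out")
        blocked = hist[0] / total > 0.005     # shadows clipped ("blocked")
        # 18 % reflectance as an 8-bit target (~46, as mentioned above)
        return {"mean": gray.mean(), "target": 0.18 * 255,
                "blown": blown, "blocked": blocked}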

Removing sun light reflection from images of IR camera in realtime OpenCV application

I am developing a speed estimation and vehicle counting application with OpenCV, and I use an IR camera.
I am facing a problem with sunlight reflections, which cause vertical white regions or lines in the images and have a bad effect on my vehicle detection.
I want an approach with very high speed, because it is a real-time application.
The vertical streak defect in those images is called "blooming"; it happens when one or a few wells in a CCD saturate to the point that they spill charge over neighboring wells in the same column. In addition, you have "regular" saturation with no blooming around the area of the reflection.
If you can, the best solution is to control the exposure (faster shutter time, or close lens iris if you have one). This will reduce but not eliminate blooming occurrence.
Blooming will always occur in a constant direction (vertical or horizontal, depending on your image orientation), and will normally fill one or a few entire contiguous columns. So you can cheaply detect it by heavily subsampling in the opposite dimension and looking for maxima that repeat in the same column. E.g., in your images, you could look for saturated maxima in the same column over 10 rows or so spread over the image height.
Once you detect the blooming columns, you can follow them in a small band around them to try to locate the saturated area. Note that saturation does not necessarily imply values at the end of the dynamic range (e.g. 255 for an 8-bit image). Your sensor could be completely saturated at values that the A/D conversion maps to, say, 252. Saturation simply means that the image response becomes constant with respect to the input luminance.
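A sketch of that cheap detection, assuming an 8-bit grayscale frame; the saturation level, number of sampled rows and vote threshold are assumptions to tune per sensor:

    import numpy as np

    def find_blooming_columns(gray, sat_level=250, n_rows=10, min_hits=8):
        rows = np.linspace(0, gray.shape[0] - 1, n_rows).astype(int)
        hits = (gray[rows, :] >= sat_level).sum(axis=0)  # saturated samples per column
        return np.where(hits >= min_hits)[0]             # columns saturated in most sampled rows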
The easiest solution (to me) is a hardware solution. If you can modify the physical camera setup, add a polarizing filter to the lens of the camera. You don't even need an (expensive) camera-specific filter; adding a simple sheet of polarizing film is good enough. Here is one site I found by googling "polarizing film". You will have to play with the orientation, but with this mounted position most surfaces are at the same angle and the glare will be polarized near horizontal, so you should find a position that works well in most situations.
I've used this method before, and the best part is that it adds no extra algorithmic complexity or lag, especially for mounted cameras where all surfaces are at nearly the same angle. This won't help you process the images you currently have, but it will help in acquiring and processing future images.

Tonal equalization on different images

I am working on an application doing motion detection. Basically, I take a reference picture of the background of the scene I am watching, and I compare it to the frame I get from the camera.
I used this website to implement my app. The frames are highly disturbed by changes in luminosity and so on... According to it, I can improve the result of the difference computation using tonal equalization techniques (search for "tonal registration" on the page for more information).
I am working with HSV images.
Based on my research, I found histogram equalization, but it does not appear to yield good results. I thought about using HDR, but I don't know if it is really relevant, given that I am working on a real-time video surveillance system (no time to "waste", and I cannot afford to capture frames at different exposures).
Which technique could allow me to bring the photometric ranges of my images as close together as possible?
I once participated in the development of a surveillance system that had a similar problem. Instead of histogram equalization it used a different approach. The algorithm was as follows:
For every pixel:
- Calculate the average luminosity around the pixel within some window (e.g. a square of 50 pixels).
- Calculate a multiplier as some fixed value (like 128 for byte-sized components) divided by the average from the previous step. Account for possible division by zero here, so Multiplier = 128 / (Average + small value).
- Multiply the pixel's luminosity by the multiplier.
The averages can be calculated incrementally in one pass over the whole frame and in real time. This removed luminosity variations from clouds, changing weather etc. rather effectively.
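A minimal sketch of that algorithm, using a box filter for the local average (the 50-pixel window and the 128 target come from the description above; apply it to your luminosity channel):

    import cv2
    import numpy as np

    def normalise_luminosity(lum, window=50, target=128.0):
        local_avg = cv2.blur(lum.astype(np.float32), (window, window))
        out = lum.astype(np.float32) * (target / (local_avg + 1.0))  # avoid division by zero
        return np.clip(out, 0, 255).astype(np.uint8)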

Light intensity as a function of exposure and image brightness

I want to measure the light intensity of microscope images with a BW camera attached to the microscope. My purpose is to compare particular images with each other concerning their brightness. I'm neither interested in measuring absolute light intensity nor in units.
I think the function should use the exposure and some brightness-related metric (e.g. thresholded histogram width or pixel-value mean).
My first attempt, 1/exposure * brightness, works for smaller exposure ranges.
The exposure is a real number in [0.001..0.6]; the brightness is a natural number in [0..255].
Is there a formula for calculating the light intensity received by camera having these two figures?
Many thanks for suggestions!
P.S.:
Currently I estimate the intensity using fuzzy-logic. It works, but the calibration is not flexible.
EDIT:
I've got additional information from the camera manufacturer: the response to light is linear when the pixel values are within the range 50-200.
You say "I'm neither interested in measuring absolute light intensity nor in units.". So I guess you only want to answer questions like: "The light source in this image was shining N-times as bright as in this other image: what is N?".
Of course estimating an answer to such a question from images makes sense only if everything else stays (approximately) the same: microscope, camera, transmission (or reflection) of the imaged sample, etc. Is this the case?
If the content of the images is approximately the same, I'd just start by comparing image-wide statistics: the ratio of the median/average/n-th quantile intensities, and see if there is a common shift. Be careful if your images are 8 bits per channel: you will probably have to linearize them by removing whatever gamma compression was applied before computing the stats.
As you noticed, however, things get more complicated as the variation in exposure increases, probably because of nonlinear effects (cutoff at the lower end or saturation at the higher end).
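A sketch of that simple statistics comparison, before worrying about the nonlinear effects; it assumes 8-bit grayscale images and approximates the gamma compression with a plain exponent of 2.2:

    import numpy as np

    def brightness_ratio(img_a, img_b, gamma=2.2):
        lin_a = (img_a / 255.0) ** gamma        # undo the assumed gamma compression
        lin_b = (img_b / 255.0) ** gamma
        return np.median(lin_a) / np.median(lin_b)   # "N times as bright" estimate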
The exposure might be in seconds or some unit of time. Then your first attempt should be right:
1/exposure * brightness
A possible problem might be the flash. If it flashes for 10 ms and your exposure is <10 ms there is no problem, but if your exposure is >10 ms, then the result will be similar to that at 10 ms (depending on how much light there is in the room apart from the flash).
I would take several pictures of the same object while changing the exposure. Plot brightness vs. exposure: it should be a straight line; if not, the shape might give you some clues. If at some point the line flattens, the flash probably does not last long enough.
A flash can have more issues, like bad synchronization with the exposure or non-uniform brightness over the exposure (it might take a fraction of a millisecond to reach maximum brightness, for example).
If there is no flash, gamma correction could be the problem, as has been suggested. In any case, a plot of brightness vs. exposure may help.
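A sketch of that check, with placeholder measurements; shoot the same scene at several exposure times and look for where the curve stops being a straight line:

    import numpy as np
    import matplotlib.pyplot as plt

    exposures = np.array([0.001, 0.002, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6])  # seconds
    brightness = np.array([5, 10, 24, 49, 180, 230, 252, 255])              # placeholder means

    plt.plot(exposures, brightness, "o-")
    plt.xlabel("exposure time [s]")
    plt.ylabel("mean pixel value")
    plt.show()   # flattening at the top indicates saturation or a too-short flash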

What exactly is the need for gamma correction?

I have problems to fully understand the need for gamma correction. I hope you guys can help me.
Let’s assume we want to display 256 neighboring pixels. These pixels should form a smooth gradient from black to white. To denote their colors, we use linear gray values from 0..255. Due to the non-linearity of the human eye, the monitor must not simply turn these values into linear luminance values. If the neighboring pixels had the luminance values (1/256)*I_max, (2/256)*I_max, et cetera, we would perceive too-large differences in brightness between two pixels in the darker area (the gradient would not be smooth).
Fortunately, a monitor has the reciprocal non-linearity to the human eye. That means, if we put linear gray values 0..255 into the frame buffer, then the monitor turns them into non-linear luminance values x^gamma. However, as our eye is non-linear the other way round, we perceive a smooth linear gradient. The non-linearity of the monitor and the one of our eye cancel each other out.
So, why do we need the gamma correction? I have read in books that we always want the monitor to produce linear luminance values. According to them, the non-linearity of the monitor must be compensated before writing the gray values to the frame buffer. That is done by the gamma correction. However, my problem here is that - as far as I understand it - we would not perceive linear brightness values (i.e. we would not perceive a smooth, steady gradient) when the monitor produces linear luminance values.
As far as I see it, it would be just perfect if we put linear gray values into the frame buffer. The monitor turns these values into non-linear luminance values, and our eye perceives linear brightness values again, because the eye is non-linear in the reciprocal way. There would be no need to gamma-correct the gray values in the frame buffer and no need to force the monitor to produce linear luminance values.
What is wrong with my way of looking at these things?
Thanks
Allow me to 'resurrect' this question, since I am struggling with similar questions right now and I think I have found the answer. It may be useful for someone else. Or I might be wrong and someone could tell me :)
I think there is nothing wrong with your way of thinking. Thing is, you don't need to gamma-correct all the time, if you know what you are doing. It depends on what you want to achieve. Let's see two different cases.
A) Light simulation (AKA rendering). You have a diffuse surface with a light pointing towards it. Then, the light's intensity is doubled.
Well. Let’s see what happens in the real world in such a situation. Assuming a purely diffuse surface, the intensity of the reflected light is going to be the surface's albedo multiplied by the incoming light intensity and the cosine of the angle between the incoming light and the normal. Whatever. The point is, when the incoming light intensity is doubled, the reflected light intensity will be doubled too. This is why light transport is said to be a linear process. Funny enough, you will not perceive the surface as twice as bright, because our perception is nonlinear (this is modelled by the so-called Stevens' power law). Put again: in the real world the reflected light is doubled, but you do not perceive it as twice as bright.
Now, how would we simulate this? Well, if we have an sRGB texture with the surface's albedo, we would need to linearize it (by removing the gamma correction, which means applying the 2.2 gamma). Now that it is linear, and we have the light intensity, we can use the formula above to compute the reflected light intensity. Since we are in a linear space, doubling the light intensity doubles the output, just as in the real world. Now we gamma-correct our results. Because of this, when the screen displays the rendered image, it will apply its gamma and so the overall response will be linear, meaning that the intensity of the light emitted by the screen will be twice as much when we simulate the twice-as-powerful light as when we simulate the first one. So the light that arrives at your eyes from your screen will have double the intensity. Exactly as it would happen if you were looking at the real surface with real lights affecting it. You will not perceive the second render as twice as bright, of course, but, again, as we said earlier, this is exactly what would happen in the real situation. The same behavior in the real world and in the simulation means that the simulation (the render) was correct :)
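A toy sketch of case A, approximating the sRGB curve with a plain 2.2 gamma (values normalized to 0..1):

    def shade(albedo_srgb, light_intensity, gamma=2.2):
        albedo_lin = albedo_srgb ** gamma             # decode the texture to linear
        reflected = albedo_lin * light_intensity      # linear light transport
        return min(reflected, 1.0) ** (1.0 / gamma)   # re-encode for the display

    mid = shade(0.5, 1.0)
    bright = shade(0.5, 2.0)   # twice the light in, twice the linear light out of the screen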
B) A different case is precisely when you want a gradient that 'looks' (AKA is perceived as) linear.
Since you want the nonlinear response of the screen to cancel out our nonlinear visual perception, you can skip gamma correction altogether (as you suggest). Or, more accurately, keep operating in linear space and gamma-correcting, but create your gradient not with consecutive pixel values (1, 2, 3...255), which would be perceived nonlinearly (because of Stevens' law), but with values transformed by the inverse of our perceptual brightness response (that is, applying an exponent of 1/0.5 = 2 to the normalized values; this is the reciprocal of Stevens' exponent for brightness).
As a matter of fact, if you look at a gamma-corrected linear gradient such as the one at http://scanline.ca/gradients/, you do not perceive it as linear at all: you see far more variation in the lower intensities than in the higher ones (as expected).
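A small sketch of case B, assuming a brightness exponent of 0.5 (Stevens) and a display gamma of 2.2:

    import numpy as np

    t = np.linspace(0.0, 1.0, 256)
    linear = t ** (1 / 0.5)                  # reciprocal of the assumed brightness exponent
    framebuffer = (linear ** (1 / 2.2) * 255).astype(np.uint8)   # gamma-correct for display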
Well, at least this is my current understanding of the topic. I hope it helps anyone. And again, please, please, if it is wrong I would be really grateful if someone could point it out...
The problem is really when doing color calculations. For example, if you are blending two colors, you need to use the linear intensities to do the calculations. To actually display the proper result, you then have to convert the linear intensities back to the gamma-corrected intensities.
How your eyes perceive the intensities isn't relevant. To do color calculations correctly, they have to be done based on the physical principles of optics, which relies on linear luminance values. Once you have calculated a color, you want those luminance values to be output by your monitor, regardless of how it is perceived, so you have to compensate for the fact that the monitor doesn't directly produce the colors that you want.
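For instance, averaging two gray values in linear light (again approximating the display with a 2.2 gamma) gives a noticeably different result from averaging the encoded values:

    def blend(a_enc, b_enc, gamma=2.2):
        a_lin, b_lin = a_enc ** gamma, b_enc ** gamma     # decode to linear intensities
        return ((a_lin + b_lin) / 2) ** (1 / gamma)       # re-encode the blended result

    print(blend(0.0, 1.0))   # ~0.73, brighter than the naive (0.0 + 1.0) / 2 = 0.5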
To actually answer the question of what is wrong with your way of looking at this: there is nothing really wrong with it. It WOULD be great to have a linear framebuffer, but, as you say, it's definitely not great to have an 8-bit linear frame buffer.
The fact that 8 bits are so easy to handle is pretty much the only justification for gamma-compressed frame buffers and color notations (think HTML's #888: wouldn't it be uncool to have to use #333 for middle gray instead of #888?).
About the monitor: you want to be able to predict its response to your input, and you know from sRGB what it should be. Normally that's all you need to know. Some people think it's "correct" or something if the monitor produces "linear" output, which can be simulated if you compensate for the monitor's gamma. I advise steering clear of such a setup, which breaks all the apps that (correctly and sanely) assume standard gamma, in favour of un-breaking ill-conceived linearity-assuming apps. Don't do that. Instead, fix the apps or dump them.
