I am building an application that does motion detection. Basically, I take a reference picture of the background of the scene I am watching and compare it to the frames I get from the camera.
I am following this website to implement my app. The frames are strongly affected by changes in luminosity and similar effects. According to it, I can improve the result of the difference computation using tonal equalization techniques; search for "tonal registration" on that page for more information.
I am working with HSV images.
Based on my research, I found histogram equalization, but it does not seem to yield good results. I thought about using HDR, but I don't know whether it is really relevant, given that I am working on a real-time video surveillance system (no time to waste, and I cannot afford to capture frames at different exposures).
Which technique would allow me to bring the photometric ranges of my images as close together as possible?
I once participated in the development of a surveillance system that had a similar problem. Instead of histogram equalization it used a different approach. The algorithm was as follows:
For every pixel:
- Calculate the average luminosity around the pixel within some window (e.g. a 50x50 pixel square).
- Calculate a multiplier as some fixed value (like 128 for byte-sized components) divided by the average from the previous step. Account for possible division by zero here, so Multiplier = 128 / (Average + small value).
- Multiply the pixel luminosity by the multiplier.
The averages can be calculated incrementally in one pass over the whole frame, so this works in real time. It removed luminosity variations from clouds, changing weather, etc. rather effectively.
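A minimal Python/OpenCV sketch of this scheme (the 50-pixel window and the target value of 128 come from the answer; the function name and the clipping are illustrative):

```python
import cv2
import numpy as np

def normalize_luminosity(gray, window=50, target=128.0, eps=1e-3):
    """Divide each pixel by its local average so the result hovers around `target`."""
    # The box filter computes the local mean in one pass over the frame,
    # which is the incremental, real-time computation the answer describes.
    local_avg = cv2.boxFilter(gray.astype(np.float32), ddepth=-1, ksize=(window, window))
    multiplier = target / (local_avg + eps)   # eps guards against division by zero
    out = gray.astype(np.float32) * multiplier
    return np.clip(out, 0, 255).astype(np.uint8)
```

Since the asker works in HSV, this would be applied to the V channel of each frame before computing the difference.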
Can I detect the intensity or the amount of light in a picture?
For example, I have some pictures captured in the morning, in the afternoon, and just before sunset, and I want to know the amount of light in each.
I just need an idea of how to do it. I also have access to the camera gain, exposure, and other parameters.
The camera which I am using is the ZED Camera.
I understand the formula that converts from RGB space to luminance space, as stated here, but I'm not sure whether it's an efficient solution.
You don't seem to be answering my questions in the comments section for some reason, so I am not sure exactly what you are trying to do, but it seems to be along the lines of determining the general brightness of the sky or something similar.
So, firstly, you need to determine the average brightness/lightness within your image. For this step, you can convert to Lab or HSL colourspace using cvtColor() and then get the mean value of the L channel over the entire image using mean() or meanStdDev().
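A minimal sketch of that step in Python/OpenCV ('frame.png' is a placeholder filename):

```python
import cv2

img = cv2.imread('frame.png')                  # placeholder filename
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)     # convert to Lab as suggested
L_mean = cv2.mean(lab)[0]                      # mean lightness over the whole image
# Note: for 8-bit images OpenCV scales L to 0..255 rather than 0..100.
```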
Now, and I guess this is what your question is actually about, you need to correct for the exposure since the exposure may vary between two images. So, the three things that affect exposure are ISO (a.k.a. film sensitivity), lens aperture and length of exposure.
Basically, every f-stop of aperture change represents a halving or doubling of the area of the lens aperture, and therefore a doubling or halving of the amount of light that hits the sensor. So f4 lets in 2x the light of f5.6, which lets in twice the light of f8, and so on. Notice that the f-numbers of successive full stops differ by a factor of sqrt(2).
Likewise with ISO: each time the ISO (sensitivity) doubles, the recorded brightness doubles.
Likewise with the time of the exposure: 1/2 a second is twice as long as a quarter of a second.
So, basically, you have your mean Lightness value and you need to correct it for aperture, ISO, and exposure duration. In effect, you must normalise your images to a standard aperture, ISO, and exposure time. For every stop your image differs from your standard, you must double or halve the mean Lightness. For every ISO step your image differs from your standard, you must double or halve the mean Lightness. And where the exposure duration differs from your standard duration, you must multiply the mean Lightness by the ratio of the standard duration to the current image's duration.
Then your mean lightnesses will be comparable with one another.
(Filters will also affect exposure, but I presume you are not adding or removing filters between exposures.)
In answer to your comment, I have never seen anyone write a formula as such, but my comments amount to this:
L * 2^aperture / (ISO * time)
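One way to read that as code, assuming "aperture" means the APEX aperture value Av = 2 * log2(f-number); that interpretation is mine, not the answer's:

```python
import math

def normalized_lightness(L_mean, f_number, iso, shutter_s):
    """Scale a mean lightness to a common exposure so images are comparable.

    Implements L * 2^Av / (ISO * time), where Av = 2 * log2(f_number)
    is the APEX aperture value (each full stop changes Av by 1).
    """
    Av = 2.0 * math.log2(f_number)
    return L_mean * (2.0 ** Av) / (iso * shutter_s)
```

Only the ratios of these normalised values between two images are meaningful, not the absolute numbers.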
In order to measure the luminance of an image, I would suggest working with the LAB color space. The L channel (lightness) represents the amount of light present in the image.
A few merits:
- Since the L channel encodes the light intensity of the image, modifying it alone enhances the image without distorting the colors.
- Research also suggests that the L channel closely matches the way humans perceive light intensity in the real world.
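As an illustration of the first point, a common enhancement is to modify the L channel alone, for example with CLAHE, and merge the result back; the choice of CLAHE here is my own example, not something this answer prescribes:

```python
import cv2

img = cv2.imread('input.png')                        # placeholder filename
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
L_eq = clahe.apply(L)                                # equalize lightness only
enhanced = cv2.cvtColor(cv2.merge((L_eq, a, b)), cv2.COLOR_LAB2BGR)
```

Because only L changes, the color information in the a and b channels is left untouched.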
All you can hope to measure with your camera and some images is relative luminance, even if you have the camera settings.
https://en.wikipedia.org/wiki/Relative_luminance
If you want to know the amount of light in absolute radiometric units, you're going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see:
Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf
Hey, I need a 320x240 8-bit grayscale image for a computer vision algorithm (ORB feature tracking). The Raspicam driver I'm using can provide different image sizes, but the different sizes are achieved by cropping, not by downsampling in the driver. As my environment is not ideally lit, the image is quite dark and noisy. So I had the idea of taking a 640x480 image and downsampling it to 320x240 by combining each 2x2 block of pixels into one. Normally I would of course divide the sum by 4 to get the correct average, but what would be the effect of dividing by two, or even one (assuming 99% of the intensity values are not bigger than 64, i.e. 256/4)? Wouldn't that simulate the effect of larger CCD cells, which could gather more light in less time?
The first tests I did showed some pretty good results: I detected more features and could follow them better between frames.
Here, you are not taking the proper average of the 2x2 blocks (dividing by 4). Say you have two blocks whose intensities differ by Delta-I. If you divide the intensities by a larger number, the intensity difference shrinks, and vice versa for a smaller number.
When you divide the difference (Delta-I) by 2 instead of 4, you are in effect increasing the contrast (the intensity difference between background and foreground). As you mentioned that your image is poorly illuminated, dividing by a smaller number increases the contrast, which improves tracking. This approach falls under contrast enhancement and is a variation of linear contrast enhancement.
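A NumPy sketch of the binning being discussed, with the divisor as a parameter (the function name and the clipping are illustrative):

```python
import numpy as np

def bin2x2(img, divisor=2):
    """Sum each 2x2 block of a grayscale image and divide by `divisor`.

    divisor=4 gives the true average; divisor=2 doubles the values, which
    stretches the contrast as long as the block sums stay below 255
    (the question's assumption that most intensities are under 64).
    """
    h, w = img.shape
    blocks = img[:h // 2 * 2, :w // 2 * 2].astype(np.uint16)
    blocks = blocks.reshape(h // 2, 2, w // 2, 2)
    summed = blocks.sum(axis=(1, 3))          # 2x2 block sums
    return np.clip(summed // divisor, 0, 255).astype(np.uint8)
```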
I have software working with an industrial-grade camera with a manual lens (focus and aperture are set manually).
I can control the camera's exposure time and gain.
I did some histogram analysis to check the exposure of the image.
Now I am looking for a method to convert the mean grayscale intensity into an exposure value.
The goal is to calculate an exposure time for a fixed aperture setting and the current lighting conditions. Since the exposure value is Ev = Av + Tv (Av: aperture value, i.e. f-stops; Tv: time value, i.e. exposure time), I hope that there is some conversion from grayscale intensity into exposure value.
I would like to give you my solution.
What I have found out is that normal cameras are not capable of measuring brightness: in general an image sensor cannot distinguish between bright colors such as white and genuinely bright lighting conditions.
Anyway, I implemented a histogram that measures the grayscale intensity. From it the mean value is extracted and scaled to a range of 256. The goal is a mean value of 128.
So I use the measured histogram mean value as the input to a PI controller, which controls the exposure time. This way I create a link between the histogram mean value and the exposure time.
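A minimal sketch of such a controller in Python; the gains, limits, and the multiplicative update are illustrative guesses, not the original implementation:

```python
import numpy as np

class ExposureController:
    """PI controller driving the exposure time toward a histogram mean of 128."""

    def __init__(self, kp=0.002, ki=0.0005, t_min=1e-4, t_max=0.1):
        self.kp, self.ki = kp, ki
        self.t_min, self.t_max = t_min, t_max
        self.integral = 0.0

    def update(self, gray_frame, exposure_s):
        mean = float(np.mean(gray_frame))   # histogram mean, scaled 0..255
        error = 128.0 - mean                # setpoint is mid-gray
        self.integral += error
        # Multiplicative correction keeps the exposure strictly positive.
        exposure_s *= 1.0 + self.kp * error + self.ki * self.integral
        return min(max(exposure_s, self.t_min), self.t_max)
```

Each new frame you would call `update()` with the current frame and exposure time, then write the returned value back to the camera.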
I think you may want to consider the histogram as providing the required dynamic range. Ansel Adams, among others, sometimes referred to this in terms of Zones; your 18% gray is Zone V (5). If your image's dynamic range is clipping at the high value (255) or at the minimum (0), then you may need two or more images: one with less Ev, at a higher f-number or shorter exposure, so that highlights are not clipped ("blown out", in photo speak), and another shot with more exposure to ensure that shadow detail is not lost ("blocked", in photo speak).
If your image is within the 1-255 range, then you could rescale it to have a mean value of 18% gray, or use some calibration from a light reading to exposure or Ev.
Generally you want the histogram both for computing the mean (Ev or crest factor) and as a way to find the min/max, in order to determine whether you need more exposure/gain or less.
Of course, if your picture is flat with respect to brightness, then 128 is perfect. If there are a few bright sources in an otherwise "normal" scene, then a mean value closer to 18% gray (~46 out of 255) is statistically better.
I want to measure the light intensity of microscope images with a B/W camera attached to the microscope. My purpose is to compare particular images with each other in terms of brightness. I'm neither interested in measuring absolute light intensity nor in units.
I think the function should use the exposure and some brightness-related metric (e.g. thresholded histogram width or pixel-value mean).
My first attempt, 1/exposure * brightness, works for smaller exposure ranges.
The exposure is a real number in [0.001..0.6]; the brightness is a natural number in [0..255].
Is there a formula for calculating the light intensity received by the camera from these two figures?
Many thanks for suggestions!
P.S.:
Currently I estimate the intensity using fuzzy logic. It works, but the calibration is not flexible.
EDIT:
I've got additional information from the camera manufacturer: the response to light is linear when the pixel values are within the range 50-200.
You say "I'm neither interested in measuring absolute light intensity nor in units.". So I guess you only want to answer questions like: "The light source in this image was shining N-times as bright as in this other image: what is N?".
Of course estimating an answer to such a question from images makes sense only if everything else stays (approximately) the same: microscope, camera, transmission (or reflection) of the imaged sample, etc. Is this the case?
If the content of the images is approximately the same, I'd just start by comparing image-wide statistics: the ratio of the median/average/n-th quantile intensities, and see if there is a common shift. Be careful if your images are 8-bit per channel: you will probably have to linearize them by undoing whatever gamma compression was applied before computing the stats.
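A sketch of that comparison in NumPy, assuming a gamma of 2.2 (your pipeline's actual gamma may differ):

```python
import numpy as np

def brightness_ratio(img_a, img_b, gamma=2.2, q=0.5):
    """Estimate how many times brighter img_a is than img_b.

    Undoes an assumed gamma of 2.2 before comparing; q=0.5 compares
    medians, but other quantiles work the same way.
    """
    lin_a = (img_a.astype(np.float64) / 255.0) ** gamma
    lin_b = (img_b.astype(np.float64) / 255.0) ** gamma
    return np.quantile(lin_a, q) / np.quantile(lin_b, q)
```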
As you noticed, however, things get more complicated as the variation in exposure increases, probably because of nonlinear effects (cutoff at the lower end or saturation at the higher end).
The exposure might be in seconds or some unit of time. Then your first attempt should be right:
1/exposure * brightness
A possible problem might be the flash. If it flashes for 10 ms and your exposure is <10 ms there is no problem, but if your exposure is >10 ms, then the result will be similar to that at 10 ms (depending on how much light there is in the room apart from the flash).
I would take several pictures of the same object while changing the exposure. Plot brightness vs. exposure: it should be a straight diagonal line; if not, the shape might give you some clues. If at some point the line flattens, the flash probably does not last long enough.
A flash can have more issues, like bad synchronization with the exposure or non-uniform brightness throughout the exposure (it might take a fraction of a millisecond to reach maximum brightness, for example).
If there is no flash, gamma correction could be the problem, as has been suggested. In any case, a plot of brightness vs. exposure may help.
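A quick sketch of that diagnostic plot in Python; the exposure and brightness numbers below are placeholders, not real measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

# Mean brightness measured from several shots of the same object at
# increasing exposure times -- replace with your own measurements.
exposures = np.array([0.001, 0.002, 0.005, 0.01, 0.02, 0.05])
brightness = np.array([12, 25, 60, 118, 190, 240])

plt.plot(exposures, brightness, 'o-')
plt.xlabel('exposure time (s)')
plt.ylabel('mean brightness (0-255)')
plt.show()
```

A straight line through the origin means the sensor is responding linearly; a flattening curve points to flash duration, gamma, or saturation effects.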
In my application I am getting images (captured by a high-speed camera) containing projections of some light sources on the screen.
1- My first task is to plot a PDF or intensity distribution for the light intensity, which should come out bell-shaped or Gaussian, since the light intensity will be maximum at the center and diminish towards the edges. Like this (just an example, not my exact case):
In the worst case I will have a series of light sources illuminated simultaneously. In such cases I should theoretically get overlapping bell or Gaussian curves, somewhat like this:
How do I plot such a curve given the images of the light projection (like the one in the figure)?
2- After the Gaussian curve is drawn, the next job is to analyze it, e.g. to find the width and height of the curve. How do I go about this?
I need an executable for this application, so a solution in MATLAB or a similar tool is not acceptable to my client. I also want the solution to work in real time or near real time.
I guess OpenCV can be used here, but before I start I would like to hear the opinions of the image processing gurus on this forum. Especially for step 1 above, I need some input.
Any pointers here?
Regards,
Heshsham
Note: Image is taken from http://pentileblog.com.
To get the 1D Gaussian out of the 2D one, you can do a couple of things depending on what you want exactly.
- You could sum over every column of the image;
- You could find the local maximum in intensity and copy the intensity profile of that row of the image only;
- You could threshold the image (in case your maximum will be saturated and therefore a plateau), determine the center of gravity of the remaining blob, and copy that row's intensity profile;
- You could threshold, find contours, determine multiple local maxima, and grab multiple intensity profiles if the application calls for it (e.g. if the blobs are not horizontally aligned).
To get the height and width, it's pretty easy: just find the maximum and the points left and right of it where the curve drops to half of the maximum. The distance between those two points is the full width at half maximum; for a Gaussian, the standard deviation is that distance divided by approximately 2.35 (wikipedia link).
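A NumPy sketch of that measurement (the function name is illustrative):

```python
import numpy as np

def fwhm_and_height(profile):
    """Height and full width at half maximum of a 1D intensity profile."""
    profile = np.asarray(profile, dtype=np.float64)
    peak = int(np.argmax(profile))
    height = profile[peak]
    half = height / 2.0
    # Walk left and right from the peak until the curve drops to half max.
    left = peak
    while left > 0 and profile[left] > half:
        left -= 1
    right = peak
    while right < len(profile) - 1 and profile[right] > half:
        right += 1
    fwhm = right - left
    sigma = fwhm / 2.35        # FWHM ~ 2.35 * sigma for a Gaussian
    return height, fwhm, sigma
```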
Well I solved it:
The algorithm is as follows:
1- Use cvSampleLine to read a particular line (row) of the image.
2- Use cvMinMaxLoc to find the maximum pixel value in that line.
3- Note which of these lines has the highest pixel value. Let's say line no. 150.
4- Plot the pixel values for line 150 (a modern-API sketch follows below).
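cvSampleLine and cvMinMaxLoc belong to OpenCV's legacy C API; with the modern Python bindings the same steps can be sketched like this ('light.png' is a placeholder filename):

```python
import cv2

gray = cv2.imread('light.png', cv2.IMREAD_GRAYSCALE)   # placeholder filename
# Steps 2-3: find the row containing the brightest pixel.
_, max_val, _, max_loc = cv2.minMaxLoc(gray)
brightest_row = max_loc[1]                             # maxLoc is (x, y)
# Steps 1 and 4: take that row as the intensity profile to plot.
profile = gray[brightest_row, :]
```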
I used MATLAB for verifying my results and graphs, and the OpenCV result is exactly the same.
Thanks for your suggestions guys.