I am applying some Gaussian noise to an image. I think this type of noise is most similar to the sensor noise one could expect from a rubbish camera (?).
My question is: for a 3-channel image, is the same noise value applied to all three channels of each pixel, i.e.
noise = gaussian_value()
pixel = (r+noise, g+noise, b+noise)
This effectively just changes the overall brightness of the pixel.
Or is a separate noise value applied to each channel of the pixel, i.e.
r_noise = gaussian_value()
g_noise = gaussian_value()
b_noise = gaussian_value()
pixel = (r+r_noise, g+g_noise, b+b_noise)
Or is a random channel chosen for each pixel and noise applied only to it, i.e.
noise = gaussian_value()
pixel[randint(0,2)] += noise
Which of these methods most accurately models the type of noise I am after (i.e. sensor noise)? I also believe that most cameras do not have separate sensors for each channel of a pixel, but instead interpolate colour values from the surrounding pixels; if that is the case, does it affect the answer?
If your goal is to simulate the noise from a real sensor, you should start with an image from a real camera. Take a defocused picture of a gray card and subtract the average value of a large block around each pixel from the pixel value itself - that should give you pure noise, which you can analyze. Depending on your requirements you might even be able to use this saved noise directly, either by overlaying it or by choosing a random starting point and incrementing through it.
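If you want to analyze that noise numerically, here is a rough sketch of the gray-card approach, assuming OpenCV, NumPy and SciPy are available; the filename and the 31x31 averaging window are arbitrary choices, not something prescribed above.

import cv2
import numpy as np
from scipy.ndimage import uniform_filter

card = cv2.imread("gray_card.png").astype(np.float32)  # hypothetical defocused gray-card shot
local_mean = uniform_filter(card, size=(31, 31, 1))    # average of a large block around each pixel
noise = card - local_mean                              # what remains should be (mostly) sensor noise

flat = noise.reshape(-1, 3)
print("per-channel std dev:", flat.std(axis=0))
# Channel correlations near 1.0 suggest one shared noise value per pixel (your first option);
# correlations near 0 suggest independent noise per channel (your second option).
print("channel correlation:\n", np.corrcoef(flat.T))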
I am looking for a workflow that would clean (and possibly straighten) old and badly scanned images of musical scores (like the one below).
I've tried denoising, Hough filters, and ImageMagick geometry filters, and I am struggling to identify the sequence of filters that removes the scanner noise/bias.
Just some quick ideas:
Remove grayscale noise: Apply a low-pass filter on intensity (keep the darks), since the music is darker than a lot of the noise. The remaining noise is mostly vertical lines.
Rotate image: Sum the grayscale values for each column of the image. You'll get a vector with the total pixel lightness in each column. Use gradient descent or a search over the rotation of the image (within some bounds, like +/-15deg) to maximize the variance of that vector. The idea is that the vertical noise lines indicate vertical alignment, so we want the columns of the image to align with these noise lines (= maximized variance). (A rough code sketch of this and a simplified version of the next idea follows this list.)
Remove vertical line noise: After rotation, take the median value of each column. The greater the distance (squared difference) of a pixel from that median darkness, the more confident we are that it shows its true color (e.g. a pure white or black pixel where the vertical noise was gray). Since the noise is non-white, you could try blending this distance with the whiteness of the median for an alternative confidence metric. Ideally, I think you'd train a 7x7x2 convolution filter here (the 2 channels being the pixel value and its distance from the median) to estimate the true value of the pixel. That would be a minimal machine-learning approach, not a full-fledged NN. However, given your lack of training data, we'll have to come up with our own heuristic for the true pixel value. You will likely need to play around with it, but here's what I think might work:
Set some confidence threshold; above that threshold we take the value as is. Below the threshold, set the pixel to white (the expected binary pixel value for the entire page).
For all values below the threshold, take the maximum-confidence value within a +/-2 pixel L1 distance (i.e. a 5x5 convolution) as that pixel's value. It seems like features are separated by at least 2 pixels, but for lower resolutions that window size may need to be adjusted. Since white pixels may end up being more confident overall, you could experiment with prioritizing darker pixels (increase their confidence somehow).
Clamp the image contrast and maybe run another low-pass filter.
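Here is a rough Python sketch of the rotation search and a simplified version of the vertical-line suppression (only the threshold-to-white step, not the 5x5 confidence fill), assuming OpenCV and NumPy; the filename, angle step and confidence threshold are guesses you would tune.

import cv2
import numpy as np

def column_variance(gray, angle_deg):
    # Rotate the scan and return the variance of the per-column intensity sums.
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(gray, M, (w, h), borderValue=255)
    col_sums = rotated.astype(np.float32).sum(axis=0)
    return col_sums.var(), rotated

def straighten(gray, bounds=(-15, 15), step=0.25):
    # Brute-force search (instead of gradient descent) for the rotation
    # that maximises the column variance.
    best_var, best_img = -1.0, gray
    for angle in np.arange(bounds[0], bounds[1] + step, step):
        var, rotated = column_variance(gray, angle)
        if var > best_var:
            best_var, best_img = var, rotated
    return best_img

def suppress_vertical_lines(gray, conf_threshold=600.0):
    # Pixels close to their column median (low confidence) are set to white.
    col_median = np.median(gray, axis=0)
    confidence = (gray.astype(np.float32) - col_median) ** 2
    cleaned = gray.copy()
    cleaned[confidence < conf_threshold] = 255
    return cleaned

img = cv2.imread("score.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
out = suppress_vertical_lines(straighten(img))
cv2.imwrite("score_cleaned.png", out)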
In MeshLab, you can use the Quality Mapper to map some qualities (values) of your mesh points to specific colors. The QualityMapperDialog offers an equalizer function that has three bars and affects the histogram of the quality. What is this effect, and how do the three bars affect it?
MeshLab uses "quality" to mean a scalar value you can attach to each vertex or face of the mesh. Examples of such qualities are:
Area of each triangle.
ShapeFactor of each triangle (how it differs from an equilateral triangle).
Euclidean distance from each vertex to a given point.
Geodesic distance from each vertex to a given point.
Distance from each vertex to nearest border.
In general, it can be any measure you can compute per vertex/face. For example, you can create a vertex quality using the relative luminance formula (0.2126*R + 0.7152*G + 0.0722*B).
You can tell if your mesh has a quality attached when the info panel shows the flags "VQ" (Vertex Quality) or "FQ" (Face Quality). To represent all those quality values, MeshLab offers you several options:
A histogram, available in the option Render->Show Quality Histogram
Contour lines, available in the option Render->Show Quality Contour
A mapping from scalar quality values to RGB values in vertex/faces.
For example, this is the Pythagoras mesh with the "Distance to border" quality represented both as a histogram and as RGB values on the vertices.
The Edit->Quality Mapper dialog allows the user to control the RGB color assigned to each scalar value. There are two panels there:
The "Transfer Function" panel lets you choose the color (R,G,B values) to each value inside your quality window of interest.
The "Equalizer" panel lets you to define that quality window of interest. The three bars allows you to define the lower and upper quality values in which you are interested, and also the main value of interest inside that interval in which you want to focus.
Here is the default window of interest, which maps all the quality values to the complete RGB ramp.
Here we use a window of interest for the qualities, so the ramp maps only the interval (25.43 .. 45.57).
Here we use the same window of interest, but focus on values around quality 43.00. Half of the ramp will be below value 43 and half of the ramp above 43. It may be easier to understand if you look at the "Gamma correction" graph... the input values are adapted to follow that red curve, which deforms the qualities so that 25.43 -> 0%, 45.57 -> 100% and 43.00 -> 50%.
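To get a feel for what the middle bar does, here is a toy Python illustration using the numbers from this example. It assumes a simple power-law ("gamma") curve, which may not be exactly what MeshLab implements internally.

import numpy as np

q_min, q_mid, q_max = 25.43, 43.00, 45.57  # the three equalizer handles

def remap(quality):
    # Clip the quality to the window of interest and normalise it to [0, 1].
    t = np.clip((quality - q_min) / (q_max - q_min), 0.0, 1.0)
    # Choose the exponent so that q_mid lands at 50% of the colour ramp.
    gamma = np.log(0.5) / np.log((q_mid - q_min) / (q_max - q_min))
    return t ** gamma

print(remap(np.array([q_min, q_mid, q_max])))  # -> [0.0, 0.5, 1.0]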
I would like to know the difference between contrast stretching and histogram equalization.
I have tried both using OpenCV and observed the results, but I still have not understood the main differences between the two techniques. Any insights would be much appreciated.
Let's define contrast first.
Contrast is a measure of the "range" of an image, i.e. how spread out its intensities are. It has many formal definitions; one famous one is Michelson's:
contrast = (Imax - Imin) / (Imax + Imin)
Contrast is strongly tied to an image's overall visual quality. Ideally, we'd like images to use the entire range of values available to them.
Contrast stretching and histogram equalisation have the same goal: making the image use the entire range of values available to it.
But they use different techniques.
Contrast stretching works like a mapping: it maps the minimum intensity in the image to the minimum value of the output range (84 ==> 0 in the example above). In the same way, it maps the maximum intensity in the image to the maximum value of the output range (153 ==> 255 in the example above).
This is why contrast stretching is unreliable: if the image already contains even two pixels with intensities 0 and 255, it is totally useless.
However, a better approach is histogram equalisation, which uses the probability distribution of the intensities. You can learn the steps here.
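For a quick side-by-side, here is a minimal sketch of both operations with OpenCV and NumPy, assuming an 8-bit grayscale image; "input.png" is just a placeholder.

import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Contrast stretching: linearly map [min, max] of the image onto [0, 255].
lo, hi = float(gray.min()), float(gray.max())
stretched = ((gray.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
# cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX) does the same thing.

# Histogram equalisation: remap intensities through their cumulative distribution.
equalized = cv2.equalizeHist(gray)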
I came across the following points after some reading.
Contrast stretching is all about increasing the difference between the maximum intensity value in an image and the minimum one. All the rest of the intensity values are spread out within this range.
Histogram equalization is about modifying the intensity values of all the pixels in the image such that the histogram is "flattened" (in reality, the histogram can't be exactly flattened, there would be some peaks and some valleys, but that's a practical problem).
In contrast stretching, there exists a one-to-one relationship of the intensity values between the source image and the target image i.e., the original image can be restored from the contrast-stretched image.
However, once histogram equalization is performed, there is no way of getting back the original image.
In Histogram equalization, you want to flatten the histogram into a uniform distribution.
In contrast stretching, you manipulate the entire range of intensity values, similar to what you do in normalization.
Contrast stretching is a linear normalization that stretches an arbitrary interval of the intensities of an image and fits the interval to an another arbitrary interval (usually the target interval is the possible minimum and maximum of the image, like 0 and 255).
Histogram equalization is a nonlinear normalization that stretches the regions of the histogram with abundant intensities and compresses the regions with sparse intensities.
I think that contrast stretching broadens the histogram of the image intensity levels, so the intensity around the range of input may be mapped to the full intensity range.
Histogram equalization, on the other hand, maps all of the pixels to the full range according to the cumulative distribution function or probability.
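For reference, that CDF remapping takes only a few lines of NumPy. This is a simplified sketch of the standard formula; the random array just stands in for a real 8-bit grayscale image.

import numpy as np

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a real image

hist = np.bincount(gray.ravel(), minlength=256)        # intensity histogram
cdf = hist.cumsum().astype(np.float64)                 # cumulative distribution
cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # normalise the CDF to [0, 1]
equalized = (cdf[gray] * 255).astype(np.uint8)         # remap every pixel through the CDF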
Contrast is the difference between maximum and minimum pixel intensity.
Both methods are used to enhance contrast, more precisely, adjusting image intensities to enhance contrast.
During histogram equalization the overall shape of the histogram changes, whereas in contrast stretching the overall shape of the histogram remains the same.
I'm getting all pixels' RGB values from the picture into
R=[],
G=[],
B=[]
arrays. The arrays contain 8-bit [0-255] values. I need to use the Fourier transform to compress the image with a lossy method.
Fourier Transform:
X_k = Σ_{n=0}^{N-1} x_n * e^(-j*2π*k*n/N)
N will be the number of pixels and n is the array index i. What will k and the imaginary j be?
Can I implement this equation in a programming language and get a compressed image file?
Or do I need to apply the transform to something other than the RGB values?
First off, yes, you should convert from RGB to a luminance space, such as YCbCr. The human eye has higher resolution in luminance (Y) than in the color channels, so you can decimate the colors much more than the luminance for the same level of loss. It is common to begin by reducing the resolution of the Cb and Cr channels by a factor of two in both directions, reducing the size of the color channels by a factor of four. (Look up Chroma Subsampling.)
Second, you should use a discrete cosine transform (DCT), which is effectively the real part of the discrete Fourier transform of the samples shifted over one-half step. What is done in JPEG is to break the image up into 8x8 blocks for each channel and do a DCT on every column and row of each block. Then the DC component is in the upper left corner, and the AC components increase in frequency as you go down and to the right. You can use whatever block size you like, though the overall computation time of the DCT will go up with the size, and the artifacts from the lossy step will have a broader reach.
Now you can make it lossy by quantizing the resulting coefficients, more so in the higher frequencies. The result will generally have lots of small and zero coefficients, which is then highly compressible with run-length and Huffman coding.
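Here is a rough sketch of that blockwise DCT plus quantisation step, assuming SciPy and NumPy. The 8x8 block size follows the JPEG description above, but the flat quantisation step q is an arbitrary stand-in for JPEG's quantisation tables.

import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=20):
    coeffs = dctn(block.astype(np.float32), norm="ortho")  # 2-D DCT-II of the block
    return np.round(coeffs / q)                            # lossy step: quantise the coefficients

def decompress_block(qcoeffs, q=20):
    return idctn(qcoeffs * q, norm="ortho")                # dequantise and invert the DCT

def process_channel(channel, q=20):
    # Round-trip one channel (e.g. the Y plane of YCbCr) through 8x8 block DCTs.
    h, w = channel.shape
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = channel[y:y+8, x:x+8]
            out[y:y+8, x:x+8] = decompress_block(compress_block(block, q), q)
    return np.clip(out, 0, 255).astype(np.uint8)

The quantised coefficients (mostly zeros and small values at the higher frequencies) are what you would then run-length and Huffman encode.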
I am looking for a "very" simple way to check if an image bitmap is blur. I do not need accurate and complicate algorithm which involves fft, wavelet, etc. Just a very simple idea even if it is not accurate.
I've thought of computing the average Euclidean distance between pixel (x,y) and pixel (x+1,y), considering their RGB components, and then using a threshold, but it works very badly. Any other idea?
Don't calculate the average differences between adjacent pixels.
Even when a photograph is perfectly in focus, it can still contain large areas of uniform colour, like the sky for example. These will push down the average difference and mask the details you're interested in. What you really want to find is the maximum difference value.
Also, to speed things up, I wouldn't bother checking every pixel in the image. You should get reasonable results by checking along a grid of horizontal and vertical lines spaced, say, 10 pixels apart.
Here are the results of some tests with PHP's GD graphics functions using an image from Wikimedia Commons (Bokeh_Ipomea.jpg). The Sharpness values are simply the maximum pixel difference values as a percentage of 255 (I only looked in the green channel; you should probably convert to greyscale first). The numbers underneath show how long it took to process the image.
If you want them, here are the source images I used:
original
slightly blurred
blurred
Update:
There's a problem with this algorithm in that it relies on the image having a fairly high level of contrast as well as sharply focused edges. It can be improved by finding the maximum pixel difference (maxdiff), and finding the overall range of pixel values in a small area centred on this location (range). The sharpness is then calculated as follows:
sharpness = (maxdiff / (offset + range)) * (1.0 + offset / 255) * 100%
where offset is a parameter that reduces the effects of very small edges so that background noise does not affect the results significantly. (I used a value of 15.)
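As a concrete illustration, here is a minimal NumPy sketch of this improved measure for a grayscale image. The 10-pixel grid spacing, the 9x9 window and offset = 15 follow the description above; everything else is an implementation guess.

import numpy as np

def sharpness(gray, spacing=10, offset=15):
    g = gray.astype(np.int32)

    # Maximum difference between adjacent pixels along a sparse grid of lines.
    hdiff = np.abs(np.diff(g[::spacing, :], axis=1))   # along horizontal lines
    vdiff = np.abs(np.diff(g[:, ::spacing], axis=0))   # along vertical lines
    maxdiff = max(hdiff.max(), vdiff.max())

    # Locate the maximum difference and measure the 9x9 local intensity range there.
    if hdiff.max() >= vdiff.max():
        r, c = np.unravel_index(hdiff.argmax(), hdiff.shape)
        y, x = r * spacing, c
    else:
        r, c = np.unravel_index(vdiff.argmax(), vdiff.shape)
        y, x = r, c * spacing
    window = g[max(0, y - 4):y + 5, max(0, x - 4):x + 5]
    local_range = int(window.max() - window.min())

    return (maxdiff / (offset + local_range)) * (1.0 + offset / 255) * 100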
This produces fairly good results. Anything with a sharpness of less than 40% is probably out of focus. Here are some examples (the locations of the maximum pixel difference and the 9×9 local search areas are also shown for reference):
The results still aren't perfect, though. Subjects that are inherently blurry will always result in a low sharpness value:
Bokeh effects can produce sharp edges from point sources of light, even when they are completely out of focus:
You commented that you want to be able to reject user-submitted photos that are out of focus. Since this technique isn't perfect, I would suggest notifying the user when an image appears blurry rather than rejecting it outright.
I suppose that, philosophically speaking, all natural images are blurry... How blurry, and to what extent, is something that depends on your application. Broadly speaking, the blurriness or sharpness of an image can be measured in various ways. As a first easy attempt I would check the energy of the image, defined as the normalised sum of the squared pixel values:
E = (1/N) * Σ I², where I is the image and N the number of pixels (defined for grayscale)
First you may apply a Laplacian of Gaussian (LoG) filter to detect the "energetic" areas of the image and then check the energy. The blurry image should show considerably lower energy.
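A minimal sketch of this idea in Python (rather than MATLAB), assuming SciPy; the sigma of the LoG filter is a guess you would tune.

import numpy as np
from scipy.ndimage import gaussian_laplace

def log_energy(gray, sigma=2.0):
    log_img = gaussian_laplace(gray.astype(np.float32), sigma=sigma)  # LoG response
    return np.sum(log_img ** 2) / log_img.size                        # normalised energy

A sharp image should give a noticeably larger value than a blurred copy of itself.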
See an example in MATLAB using a typical grayscale lena image:
This is the original image
This is the blurry image, blurred with a Gaussian filter
This is the LoG image of the original
And this is the LoG image of the blurry one
If you just compute the energy of the two LoG images you get:
E_or = 1265 (original) and E_bl = 88 (blurred)
which is a huge difference...
Then you just have to select a threshold to judge which amount of energy is good for your application...
Calculate the average L1 distance of adjacent pixels:
N1=1/(2*N_pixel) * sum( abs(p(x,y)-p(x-1,y)) + abs(p(x,y)-p(x,y-1)) )
then the average L2 distance:
N2= 1/(2*N_pixel) * sum( (p(x,y)-p(x-1,y))^2 + (p(x,y)-p(x,y-1))^2 )
then the ratio N2 / (N1*N1) is a measure of blurriness. This is for grayscale images; for color, do it for each channel separately.
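A direct NumPy translation of these formulas for a grayscale array (nothing here beyond what the formulas state):

import numpy as np

def blurriness(gray):
    g = gray.astype(np.float64)
    dx = np.abs(g[:, 1:] - g[:, :-1])   # horizontal neighbour differences
    dy = np.abs(g[1:, :] - g[:-1, :])   # vertical neighbour differences
    n_pixel = g.size
    n1 = (dx.sum() + dy.sum()) / (2 * n_pixel)
    n2 = ((dx ** 2).sum() + (dy ** 2).sum()) / (2 * n_pixel)
    return n2 / (n1 * n1)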