How to implement despeckle in OpenCV? - image-processing

If histogram equalization is done on a poorly contrasted image, its features become more visible, but a large amount of grain/speckle/noise also appears. Using the blurring functions already available in OpenCV is not desirable - I'll be doing text detection on the image later on, and the letters would become unrecognizable.
So what are the preprocessing techniques that should be applied?

Standard blur techniques that convolve the image with a kernel (e.g. Gaussian blur, box filter, etc) act as a low-pass filter and distort the high-frequency text. If you have not done so already, try cv::bilateralFilter() or cv::medianBlur(). If neither of these algorithms work, you should look into other edge-preserving smoothing algorithms.
If you imagine the image as a three-dimensional surface (x, y, intensity), traditional filtering replaces the value of each pixel with the weighted average of all pixels in a circle centered on it. Bilateral filtering does the same, but uses a three-dimensional sphere centered at the pixel, so pixels whose intensities differ greatly fall outside the sphere and do not contribute; since a well-defined edge looks like a steep cliff in this surface, pixel values near the edge stay largely unchanged. You can get a more detailed explanation of the bilateral filter and some sample output here.
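A minimal sketch of the two suggestions above, using the Python bindings (the answer names the C++ functions); the file names, filter diameter, sigma values, and kernel size are illustrative placeholders that would need tuning for a real text image.

```python
import cv2

# Load the equalized, speckled image in grayscale (path is a placeholder).
img = cv2.imread("equalized.png", cv2.IMREAD_GRAYSCALE)

# Edge-preserving smoothing: averages only over pixels that are both spatially
# close and similar in intensity, so letter strokes survive.
bilateral = cv2.bilateralFilter(img, 9, 75, 75)

# Median filtering: good at removing salt-and-pepper speckle while keeping
# sharp edges; 3x3 is a conservative starting point.
median = cv2.medianBlur(img, 3)

cv2.imwrite("bilateral.png", bilateral)
cv2.imwrite("median.png", median)
```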

Related

Which feature descriptors to use and why?

I would like to compute the position and orientation of a camera in a civil aircraft cockpit.
I use LEDs as fixed points, and my plan is to store each LED's X, Y, Z position.
How can I detect and identify my LEDs on my images? Which feature descriptor and feature point extractor should I use?
How should I modify my image prior to feature detection?
I would like to keep it efficient.
Now, having found the solution to my problem, I realize the question might have been too generic.
Anyway, to help other people searching for this, I am going to describe my answer.
Using combinations of OpenCV functions, I create masks in which areas where the LEDs could be are white and the rest of the image is black. These functions include Core.inRange, Imgproc.dilate, and Imgproc.erode. With Imgproc.findContours I filter out contours that are too large or too small, and Core.bitwise_and and Core.bitwise_not are used to combine masks.
The masks are computed from an image in the HSV color space as input.
Having these masks with potential LED areas, I compute color histograms of the intensity-normalized RGB colors (hue did not work well enough for me). These histograms are trained and normalized using a set of annotated input images and represent my descriptor.
I match the trained descriptor against the ones computed at runtime using histogram intersection.
This gives me distance measures. Using a threshold on these measures, the measures themselves, and knowledge of the geometric positions of the real-life LEDs, I translate the patches into a graph, which helps me find the longest chain of potential LEDs.
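A rough Python sketch of the pipeline described above (the answer itself uses the Java bindings); the file names, HSV thresholds, morphology kernel, contour-area limits, and the "trained" histogram are all placeholders, and the final graph step is omitted.

```python
import cv2
import numpy as np

img = cv2.imread("cockpit.png")                        # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold in HSV so that potential LED areas come out white
# (the bounds below are made-up placeholders, not calibrated values).
mask = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))

# Clean up the mask with morphology, as described above.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(mask, kernel)
mask = cv2.erode(mask, kernel)

# Filter out contours that are too large or too small (OpenCV 4.x signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if 10 < cv2.contourArea(c) < 500]

def color_hist(patch):
    # Simplified: a normalized BGR histogram (the answer uses
    # intensity-normalized RGB instead).
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# "trained" would come from annotated training images; here it is a placeholder.
trained = np.zeros(8 * 8 * 8, np.float32)

for c in candidates:
    x, y, w, h = cv2.boundingRect(c)
    score = cv2.compareHist(color_hist(img[y:y + h, x:x + w]), trained,
                            cv2.HISTCMP_INTERSECT)
    # Higher intersection means a better match; thresholding and the graph
    # step that chains the LEDs are application-specific and omitted here.
```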

Given an image, how to extract (detect) blurred (or focused) regions?

A blurred (or focused) region could be an arbitrary shape (not just a rectangle).
Simple input/output example:
(Any algorithm, command, or software is acceptable.)
Thank you!
The variance is much higher in focused regions because blurring acts as a spatial low-pass filter.
But if the object shows no variance (a uniform object like a wall or the sky), you are completely lost and will not see any difference at all between the blurred and focused regions.
Therefore your question cannot be answered for every kind of image. It's impossible.
What I would do:
High-pass filter your image (edge detection, absolute differences between neighboring pixels, Laplacian, or local variances, ...); see the sketch after this list.
Smooth and apply threshold.
Apply prior knowledge about the imaging optics, if any exists. An example would be knowing that the focused region must be round and in the center, ...
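A small Python sketch of these three steps, using the absolute Laplacian as the high-pass response; the file name, smoothing size, and threshold are arbitrary placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

# 1) High-pass filter: magnitude of the Laplacian as a sharpness measure.
response = np.abs(cv2.Laplacian(img, cv2.CV_64F))

# 2) Smooth the response so it describes regions rather than single pixels,
#    then threshold it (threshold value is arbitrary here).
smoothed = cv2.GaussianBlur(response, (31, 31), 0)
focused_mask = (smoothed > 5.0).astype(np.uint8) * 255

# 3) Any prior knowledge about the optics (e.g. the focused region is roughly
#    round and central) would be applied to focused_mask at this point.
cv2.imwrite("focused_regions.png", focused_mask)
```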

How to calculate whether an image has noise and geometric distortion or not?

I need to make an iPhone application that calculates noise, geometric deformation, and other distortions in an image. How do I do this? I have done some image processing with OpenCV on the iPhone, but I don't know how to calculate these parameters.
1) How to calculate noise in an image?
2) What is geometric deformation, and how do I calculate the geometric deformation of an image?
3) Are geometric deformation and distortion the same parameter in terms of image filtering? Or are there other distortions I should calculate to determine whether an image is of good quality?
Input: my image is a face image from a live video stream.
I advise you to read some literature about image processing, for example Gonzalez & Woods.
1) The simplest method of noise estimation from a single image is to compute the standard deviation between the image and a smoothed copy of it. For smoothing I recommend a simple median filter over a 3x3 window (or larger). The median is insensitive to outliers, so salt-and-pepper noise won't skew the statistics.
For overexposed or underexposed images this method can give bad results; in that case you can compute the FFT of the image and use the high-frequency components for noise estimation.
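A minimal Python sketch of the single-image estimate from point 1; the file name is a placeholder, and the result is simply the standard deviation of the residual.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

# Smooth with a 3x3 median filter, which is robust to salt-and-pepper outliers.
smoothed = cv2.medianBlur(img, 3)

# The residual between the image and its smoothed copy is mostly noise;
# its standard deviation serves as a simple noise estimate.
residual = img.astype(np.float64) - smoothed.astype(np.float64)
print("estimated noise sigma:", residual.std())
```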
2), 3) Calculating geometric deformation is possible only if you know what should be in the image. For example, if you use a test target (optical etalon) with a square grid, you can find the lines in your image (for example with a Canny edge detector) and compute distortion, astigmatism, and some other aberrations. This can also be done if you are sure the image contains some straight lines.
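As a hedged illustration of the straight-line idea above, the Python sketch below only performs the edge and line detection; fitting an actual lens-distortion model to the detected edges is beyond this snippet, and the file name and Canny/Hough parameters are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("grid_target.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Find the edges of the (supposedly straight) grid lines.
edges = cv2.Canny(img, 50, 150)

# Fit straight segments to the edges; thresholds here are placeholders.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=100, maxLineGap=5)

# With an undistorted lens, long grid edges come out as long straight segments;
# strong barrel or pincushion distortion bends them, so they are broken into
# many short pieces or missed. A real measurement would fit a lens-distortion
# model to the detected edge points.
print("straight segments found:", 0 if lines is None else len(lines))
```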
Defocus can be estimated by analyzing edges in the image or with the help of a wavelet transform.
There are many other methods for image analysis as well. For example, by analyzing a color image you can estimate chromatic aberration, and so on.
But I repeat: in the general case these measurements are impossible; each of them applies only in particular cases.
Read about image quality: there is no standard for this term. In each particular case you can use one or more simple characteristics to decide whether the image is good or not.
In your case I'd advise you to take a lot of photos with different kinds of artifacts and quality, then do a simple analysis of their statistics, wavelet decompositions, and the correlation of the R, G, and B components. By the way, to make the analysis of a color image less sensitive to brightness I recommend working in the HSV color space (but to estimate chromatic aberration you need to work with the RGB components directly).

2D subimage detection in OpenCV

What's the most sensible algorithm, or combination of algorithms, to be using from OpenCV for the following problem:
I have a set of small 2D images. I want to detect the locations of these subimages in a larger image.
The subimages are usually around 32x32 pixels, and the larger image is around 400x400.
The subimages are not always square, and as such contain an alpha channel.
Optionally, the larger image may be grainy, compressed, rotated in 3D, or otherwise slightly distorted.
I have tried cvMatchTemplate, with very poor results (it is difficult to match correctly, and there are large numbers of false positives with all match methods). Some of the problems come from the fact that OpenCV can't seem to handle alpha channels in template matching.
I have tried a manual search, which seems to work better, and can include the alpha channel, but is very slow.
Thanks for any help.
cvMatchTemplate uses an MSE-like metric (SQDIFF/SQDIFF_NORMED) for the matching. This kind of metric penalizes different alpha values severely (due to the square in the equation). Have you tried normalized cross-correlation? It is known to model linear variations in pixel intensities better.
If NCC does not do the job, you will need to transform the images into a space where intensity differences do not have much effect, e.g. compute an edge-strength image (Canny, Sobel, etc.) and run cvMatchTemplate on those images.
Considering the large difference in scale between the images (~10x), an image pyramid will have to be employed to figure out the correct scale for the matching. I recommend starting with a coarse scale estimate and propagating it up the pyramid.
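A hedged Python sketch combining both suggestions (normalized cross-correlation on edge-strength images); it uses the modern cv2.matchTemplate rather than the old cvMatchTemplate, and the file names and Canny thresholds are placeholders.

```python
import cv2

scene = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)      # ~400x400, placeholder
template = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)   # ~32x32, placeholder

# Match on edge-strength images so absolute intensity differences matter less.
scene_edges = cv2.Canny(scene, 50, 150)
tmpl_edges = cv2.Canny(template, 50, 150)

# Normalized cross-correlation instead of the squared-difference metric.
result = cv2.matchTemplate(scene_edges, tmpl_edges, cv2.TM_CCORR_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match at", max_loc, "score", max_val)

# If the template may appear at a different scale, repeat the match over a
# small pyramid of resized templates and keep the best-scoring result.
```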
What you need is something like SIFT or SURF.
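For completeness, a minimal Python sketch of feature-based matching with SIFT (available in OpenCV's main module since 4.4); file names are placeholders, and very small (~32x32) templates may yield too few keypoints for this to work reliably.

```python
import cv2

scene = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths
template = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Brute-force match the descriptors and keep only matches that pass
# Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print("good matches:", len(good))
```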

Gaussian blur and convolution kernels

I do not understand what a convolution kernel is or how I would apply a convolution matrix to the pixels in an image (I am talking about performing a Gaussian blur on an image).
Could I also get an explanation of how to create a kernel for a Gaussian blur operation?
I am reading this article but I cannot seem to understand how things are done...
Thanks to anyone who takes time to explain this to me :),
ExtremeCoder
The basic idea is that each new pixel of the image is created from a weighted average of the pixels close to it (imagine drawing a circle around the pixel).
For each pixel in the image you create a little square around it. Let's say you take the 8 neighbors of a pixel (including the diagonals, though that distinction does not matter here), and perform a weighted average of them to get the new value of the middle pixel.
In the Gaussian blur case this breaks down into two one-dimensional operations. For each pixel, take some number of pixels next to it in the row direction only, multiply the pixel values by weights computed from the Gaussian distribution (or, if you are doing this for a visual effect rather than a scientific reason, the weights can be anything that looks good), and sum them up. Another way to look at it is that the pixels form one vector and the weights form another, and you are taking their dot product. Repeat this process in the column direction as a separate pass.
A convolution kernel is a matrix of values that specify how the neighborhood of a pixel contributes to that pixel's state in the final image. There's a fair description of the basics here. A Gaussian blur is a convolution that uses a rather ugly-looking (you've seen the Wikipedia page) function to compute the convolution kernel to pass over the image. You'll find an example kernel for a Gaussian on that Wikipedia page.
The point of all the math is to produce a soft blur that resembles the scatter pattern produced by a mesh screen placed between the viewer and the image. You can think of the 'size' (the standard deviation) of the Gaussian as being related to the distance between the image and the screen.
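A short Python sketch of the separable two-pass blur described above, building the 1-D kernel directly from the Gaussian formula; the sigma, radius, and file name are illustrative choices.

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

# Build a 1-D Gaussian kernel from w(x) ~ exp(-x^2 / (2*sigma^2)), then
# normalize so the weights sum to 1 (otherwise the image brightens or darkens).
sigma = 2.0
radius = 4                                             # kernel size = 2*radius + 1
x = np.arange(-radius, radius + 1, dtype=np.float64)
kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
kernel /= kernel.sum()
kernel = kernel.reshape(-1, 1)                         # column vector

# Because the 2-D Gaussian is separable, blur rows then columns with the same
# 1-D kernel: two cheap passes instead of one expensive 2-D convolution.
blurred = cv2.sepFilter2D(img, -1, kernel, kernel)

# OpenCV's one-call equivalent, for comparison.
reference = cv2.GaussianBlur(img, (2 * radius + 1, 2 * radius + 1), sigma)
```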
Here's an awesome tool, if you don't want to calculate it all by yourself (like me):
http://www.embege.com/gauss/
EDIT
Since the link seems to be broken now, here's a link to archive.org:
http://web.archive.org/web/20150217075657/http://www.embege.com/gauss
