What's the most sensible algorithm, or combination of algorithms, to be using from OpenCV for the following problem:
I have a set of small 2D images. I want to detect the locations of these subimages in a larger image.
The subimages are usually around 32x32 pixels, and the larger image is around 400x400.
The subimages are not always square, and as such contain an alpha channel.
Optionally, the larger image may be grainy, compressed, rotated in 3D, or otherwise slightly distorted.
I have tried cvMatchTemplate, with very poor results (it is difficult to match correctly, and there are large numbers of false positives with all match methods). Some of the problems come from the fact that OpenCV can't seem to handle alpha channels in template matching.
I have tried a manual search, which seems to work better and can include the alpha channel, but is very slow.
Thanks for any help.
cvMatchTemplate uses an MSE-like (SQDIFF/SQDIFF_NORMED) metric for the matching. This kind of metric penalizes differing alpha values severely (due to the square in the equation). Have you tried normalized cross-correlation? It is known to model linear variations in pixel intensity better.
If NCC does not do the job, you will need to transform the images into a space where the intensity differences do not have much effect, e.g. compute an edge-strength image (Canny, Sobel, etc.) and run cvMatchTemplate on those images.
Considering the large difference in scale between the images (~10x), an image pyramid will have to be employed to figure out the correct scale for the matching. I recommend you start with a coarse scale estimate (2^(1/x), x being the correct scale) and propagate it up the pyramid.
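If you want to try this while still respecting the alpha channel, here is a rough sketch along these lines (Python bindings assumed; the file names, scale range and threshold are placeholders, and masked matching requires an OpenCV build where matchTemplate accepts a mask):

    import cv2
    import numpy as np

    scene = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)
    templ_bgra = cv2.imread("template.png", cv2.IMREAD_UNCHANGED)   # keep alpha
    templ = cv2.cvtColor(templ_bgra[:, :, :3], cv2.COLOR_BGR2GRAY)
    mask = templ_bgra[:, :, 3]                                      # alpha as match mask

    best = None
    for scale in np.linspace(0.5, 2.0, 13):                         # crude scale search
        t = cv2.resize(templ, None, fx=scale, fy=scale)
        m = cv2.resize(mask, None, fx=scale, fy=scale)
        if t.shape[0] > scene.shape[0] or t.shape[1] > scene.shape[1]:
            continue
        # TM_CCORR_NORMED is one of the methods that supports a mask,
        # so fully transparent template pixels are ignored
        res = cv2.matchTemplate(scene, t, cv2.TM_CCORR_NORMED, mask=m)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if best is None or max_val > best[0]:
            best = (max_val, max_loc, scale)

    print("best score %.3f at %s, scale %.2f" % best)

The same loop can be run on edge-strength images instead of the raw intensities; only the inputs change.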
What you need is something like SIFT or SURF.
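As a rough sketch of that feature-based approach (Python bindings assumed; SIFT lives in the main cv2 module in recent releases, older builds need the contrib package, and the file names and ratio threshold are placeholders):

    import cv2
    import numpy as np

    img_small = cv2.imread("subimage.png", cv2.IMREAD_GRAYSCALE)
    img_large = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_small, None)
    kp2, des2 = sift.detectAndCompute(img_large, None)

    # Lowe's ratio test discards ambiguous matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    if len(good) >= 4:
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # The homography maps the subimage into the larger image,
        # tolerating the mild 3D rotation/distortion you describe
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print("inliers:", int(inliers.sum()))

Be aware that a 32x32 template may only yield a handful of keypoints, so the match can be fragile for very small or low-texture subimages.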
My understanding: the SIFT descriptor uses a histogram of gradient orientations calculated from a 16x16 neighbourhood of pixels.
A 16x16 area in a large image can be a very small region, e.g. 1/10 of one hair on a cat's paw, whereas when you resize the target image to a small size, the 16x16 neighbourhood around the same key point can be a large part of the image, e.g. the cat's whole paw.
It doesn't make sense to me to compare the original image with the resized image using the SIFT descriptor.
Can anyone tell me what's wrong with my understanding?
This is a rough description, but should give you an understanding of the approach.
One of the stages of SIFT is to create a pyramid of scales of the image: it scales the image down and smooths it using a low-pass filter.
The feature detector then works by finding features that have a peak response not only in image space, but in scale space too. This means it finds the scale at which the feature produces the highest response.
Then the descriptor is calculated at that scale. So when you use a smaller/larger version of the image, it should still find the same scale for the feature.
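A quick way to see this in practice (a rough sketch; any test image will do): detect keypoints on an image and on a half-size copy and compare the reported keypoint sizes. Corresponding features simply come out one octave lower.

    import cv2

    img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)   # any test image
    half = cv2.resize(img, None, fx=0.5, fy=0.5)

    sift = cv2.SIFT_create()
    kp_full = sift.detect(img, None)
    kp_half = sift.detect(half, None)

    # kp.size is the diameter of the neighbourhood the descriptor is built from;
    # it grows with the detected scale, it is not a fixed 16x16 pixel window.
    print(sorted(k.size for k in kp_full)[-5:])
    print(sorted(k.size for k in kp_half)[-5:])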
I would like to compute the position and orientation of a camera in a civil aircraft cockpit.
I use LEDs as fixed points. My plan is to store their X, Y, Z positions associated with each LED.
How can I detect and identify my LEDs in my images? Which feature descriptor and feature point extractor should I use?
How should I modify my image prior to feature detection?
I would like to stay efficient.
Now, after having found the solution to my problem, I realize the question might have been too generic.
Anyway, to help other people googling for this, I am going to describe my answer.
With combinations of OpenCV functions I create masks in which the areas where the LEDs could be are white and the rest of the image is black. These functions are, for example, Core.inRange, Imgproc.dilate and Imgproc.erode. With Imgproc.findContours I also filter out contours that are too large or too small, and Core.bitwise_and and Core.bitwise_not are used to combine masks.
The masks are computed from an image in the HSV color space.
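The answer above uses the Java bindings; here is a minimal sketch of the same mask pipeline in Python, assuming OpenCV 4.x (where findContours returns two values). The HSV bounds and contour-area limits are made-up placeholders that would have to be tuned to the actual LEDs:

    import cv2
    import numpy as np

    frame = cv2.imread("cockpit.png")                     # placeholder input
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Keep bright, reasonably saturated pixels
    mask = cv2.inRange(hsv, (0, 80, 200), (179, 255, 255))

    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel)                        # drop single-pixel noise
    mask = cv2.dilate(mask, kernel, iterations=2)         # close up the blobs

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    led_mask = np.zeros_like(mask)
    for c in contours:
        if 10 < cv2.contourArea(c) < 500:                 # reject too-small/too-large blobs
            cv2.drawContours(led_mask, [c], -1, 255, cv2.FILLED)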
Having these masks with potential LED areas, I compute color histograms of the intensity-normalized RGB colors (hue alone did not work well enough for me). These histograms are trained and normalized using a set of annotated input images and represent my descriptor.
I match the trained descriptor against the ones computed in the application using histogram intersection.
This gives me distance measures. Using a threshold on these measures, the measures themselves, and the knowledge of the geometric positions of the real-life LEDs, I translate the patches into a graph, which helps me find the longest chain of potential LEDs.
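A sketch of the histogram part, assuming Python and cv2.compareHist with the intersection metric (the rg-chromaticity histogram below is a rough stand-in for the intensity-normalized descriptor described above, and the bin count is arbitrary):

    import cv2
    import numpy as np

    def rg_histogram(patch_bgr, mask=None, bins=16):
        # Histogram over intensity-normalized r,g chromaticity
        f = patch_bgr.astype(np.float32) + 1e-6
        s = f.sum(axis=2, keepdims=True)
        rg = f[:, :, 2:0:-1] / s                  # normalized R and G channels
        hist = cv2.calcHist([rg], [0, 1], mask, [bins, bins], [0, 1, 0, 1])
        return cv2.normalize(hist, hist).flatten()

    # trained_hist would come from the annotated patches, candidate_hist from a
    # masked region in the live image:
    # score = cv2.compareHist(trained_hist, candidate_hist, cv2.HISTCMP_INTERSECT)
    # accept the candidate when the intersection score exceeds a tuned threshold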
I need to make an iPhone application which needs to calculate noise, geometric deformation and other distortions in an image. How do I do this? I have done some image processing with OpenCV on the iPhone, but I don't know how to calculate these parameters.
1) How do I calculate the noise in an image?
2) What is geometric deformation and how do I calculate the geometric deformation of an image?
3) Are geometric deformation and distortion the same parameter in terms of image filtering? Or are there other distortions I can calculate to decide whether an image is of good quality or not?
Input: my image is a face image from a live video stream.
I advise you to read some literature about image processing, for example Gonzalez & Woods.
1) The simplest method of estimating noise from a single image is to compute the standard deviation between the image and a smoothed copy of it. For the smoothing I recommend a simple median filter over a 3x3 (or larger) window; the median is insensitive to outliers, so salt-and-pepper noise won't skew the statistics.
For overexposed or underexposed images such a method can give bad results; in that case you can compute the FFT of the image and use the high-frequency components for noise estimation.
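A minimal sketch of the single-image estimate (Python assumed; the file name is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Smooth with a 3x3 median filter; the median ignores salt-and-pepper outliers
    smoothed = cv2.medianBlur(img, 3)

    # The standard deviation of the residual is a crude estimate of the noise level
    residual = img.astype(np.float32) - smoothed.astype(np.float32)
    print("estimated noise sigma: %.2f grey levels" % residual.std())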
2), 3) Calculating geometric deformation is possible only if you know what should be in the image. For example, if you use a mire (an optical etalon) with a quadratic grid, you can find lines in your image (for example with a Canny edge detector) and compute distortion, astigmatism and some other aberrations. This can also be done if you are sure that the image contains some straight lines.
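For the line-finding step, one possible sketch (assuming an image of such a grid target; the Canny thresholds and Hough parameters are placeholders to tune):

    import cv2
    import numpy as np

    img = cv2.imread("grid_target.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # Fit straight segments to the edge map; on a distorted image the detected
    # grid lines will deviate from these ideal straight segments.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=5)

If your target is a chessboard-like grid, OpenCV's camera-calibration functions (findChessboardCorners / calibrateCamera) can estimate the distortion coefficients for you directly.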
Defocus can be estimated by analysing edges in the image or with the help of an image wavelet transform.
There are also many other methods for image analysis. For example, by analysing a colour image you can estimate chromatic aberration, and so on.
But I repeat: in the general case these operations are impossible; they all apply only to particular cases.
Read about image quality: there is no standard for this term; in every particular case you can use one or more simple characteristics to decide whether an image is good or not.
In your case I'd advise you to take a lot of photos with different kinds of artefacts and quality, then do a simple analysis of their statistics, wavelet decompositions and R-G-B component correlations. By the way, to make the analysis of a colour image less sensitive to its brightness, I recommend working in the HSV colour space (but to estimate chromatic aberration you need to work with the RGB components directly).
The blending modes Screen, Color Dodge, Soft Light, etc., as in Photoshop, each have their own math that works for the 0-1 range. I wonder how these blend modes work for HDR images?
Thanks
I am not familiar with Photoshop and its filters, but here is a general explanation of the math behind combining exposures for HDR.
Suppose you have 3 images (low, medium and over-exposed). You want to average those images, but (I1+I2+I3)/3 is a naive way to do it. You want to give a higher weight to the image that captures more information in a given area.
So basically you average the images with a weight factor, and there are different types of algorithms to calculate the weights. Here are a few:
The simplest one uses the STD (standard deviation). For each pixel in each image, calculate the standard deviation of its 3x3 neighbourhood and use it as the weight, normalizing by the sum of the weights:
HDR(i,j) = (I1(i,j)*stdI1(i,j) + I2(i,j)*stdI2(i,j) + I3(i,j)*stdI3(i,j)) / (stdI1(i,j) + stdI2(i,j) + stdI3(i,j))
Why the std? When the std is high there is a high variation in pixel intensity, which means more information was captured by that image.
Instead of the STD you can use an entropy filter, edge detection, or any other measure of how much information is encoded around the given pixel.
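A minimal sketch of the std-weighted average (Python assumed; grayscale inputs and placeholder file names, with the weighted sum normalized as above):

    import cv2
    import numpy as np

    def local_std(img, ksize=3):
        # Standard deviation of each pixel's ksize x ksize neighbourhood
        f = img.astype(np.float32)
        mean = cv2.blur(f, (ksize, ksize))
        mean_sq = cv2.blur(f * f, (ksize, ksize))
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0))

    # Three exposures of the same, already aligned, scene
    imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32)
            for p in ("under.png", "mid.png", "over.png")]

    weights = [local_std(i) + 1e-6 for i in imgs]          # epsilon avoids division by zero
    fused = sum(w * i for w, i in zip(weights, imgs)) / sum(weights)
    cv2.imwrite("fused.png", np.clip(fused, 0, 255).astype(np.uint8))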
There are also slower but better ways to do HDR. Usually it is done with some kind of frequency or wavelet transform, for example the Fourier transform: each image is converted to Fourier space (coefficients of the frequencies), and then, for each frequency, the maximal coefficient of the 3 images is taken.
You can even combine the std-filter method with frequency transforms. For example, break the image into different frequency bands, smooth the lower frequencies and take the simple average (I1+I2+I3)/3, but for the high frequencies use less smoothing and an std-weighted average. Smoothing the lower frequencies more heavily is called 'blending'; it is heavily used when stitching two images with different exposures into a panorama.
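To make the frequency-split idea concrete, here is a rough sketch (not Photoshop's algorithm; a plain Gaussian low/high split with placeholder file names and an arbitrary sigma): the low band is simply averaged, while for the high band the strongest detail coefficient per pixel is kept.

    import cv2
    import numpy as np

    imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32)
            for p in ("under.png", "mid.png", "over.png")]

    lows = [cv2.GaussianBlur(i, (0, 0), sigmaX=8) for i in imgs]   # low-frequency bands
    highs = np.stack([i - l for i, l in zip(imgs, lows)])          # high-frequency detail

    low_fused = sum(lows) / len(lows)            # heavy smoothing/averaging for smooth areas
    idx = np.abs(highs).argmax(axis=0)           # per pixel, keep the strongest detail
    high_fused = np.take_along_axis(highs, idx[None], axis=0)[0]

    result = np.clip(low_fused + high_fused, 0, 255).astype(np.uint8)
    cv2.imwrite("fused_bands.png", result)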
Look at this image: http://magazine.magix.com/en/wp-content/uploads/2012/05/Panorama-3.jpg
You can clearly see that the sky has a different colour in each image, but since the sky is very low frequency (almost no information and no small objects) it is heavily smoothed and averaged, which allows gentle stitching.
Hope that answers your question
If histogram equalization is done on a poorly contrasted image, its features become more visible. However, there is also a large amount of grain/speckle/noise. Using the blurring functions already available in OpenCV is not desirable - I'll be doing text detection on the image later on and the letters would become unrecognizable.
So what preprocessing techniques should be applied?
Standard blur techniques that convolve the image with a kernel (e.g. Gaussian blur, box filter, etc) act as a low-pass filter and distort the high-frequency text. If you have not done so already, try cv::bilateralFilter() or cv::medianBlur(). If neither of these algorithms work, you should look into other edge-preserving smoothing algorithms.
If you imagine the image as a three-dimensional surface (x, y and intensity), traditional filtering replaces the value of each pixel with the weighted average of all pixels within a circle centered on the pixel. Bilateral filtering does the same, but uses a three-dimensional sphere centered at the pixel, so only pixels that are also close in intensity contribute. Since a well-defined edge separates two plateaus in this surface, the sphere around a pixel contains mostly points from the pixel's own side of the edge, so the edge is preserved. You can get a more detailed explanation of the bilateral filter and some sample output here.
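For completeness, the two calls mentioned above as they might be used after equalization (Python assumed; the parameter values are starting points to tune, not recommendations):

    import cv2

    img = cv2.imread("scanned_text.png", cv2.IMREAD_GRAYSCALE)   # placeholder
    eq = cv2.equalizeHist(img)

    # Edge-preserving alternatives to a plain Gaussian blur
    den_bilateral = cv2.bilateralFilter(eq, d=9, sigmaColor=75, sigmaSpace=75)
    den_median = cv2.medianBlur(eq, 3)

cv2.fastNlMeansDenoising is another option worth trying if the speckle is strong, since it also tends to leave text strokes intact.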