I am evaluating the template matching algorithm to differentiate similar and dissimilar objects. What I found is confusing: I was under the impression that template matching is a method which compares raw pixel intensity values, so when the pixel values vary I expected template matching to report a lower match percentage.
I have a template and a search image with the same shape and size, differing only in color (images attached). When I do template matching, surprisingly I get a match percentage greater than 90%.
img = cv2.imread('./images/searchtest.png', cv2.IMREAD_COLOR)
template = cv2.imread('./images/template.png', cv2.IMREAD_COLOR)
res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_val)
Template Image :
Search Image :
Can someone give me insight into why this is happening? I have even tried this in HSV color space, on the full BGR image, the full HSV image, the individual B, G, R channels and the individual H, S, V channels. In all cases I get a high match percentage.
Any help would be really appreciated.
res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
There are various methods you can use to find templates, e.g. cv2.TM_CCOEFF, cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR, cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED.
You can look at their equations here:
https://docs.opencv.org/2.4/modules/imgproc/doc/object_detection.html
I think that if you want your template matching not to match shapes of a different colour, you should use cv2.TM_SQDIFF or maybe cv2.TM_CCOEFF_NORMED.
The correlation methods give the best match at the maximum value, while the squared-difference methods give the best match at the minimum value. So when you have the exact shape and size, just not the same color, you will get a high correlation value (see the equations in the link above).
Concept:
Suppose X = (X_1, X_2, ..., X_n) and Y = (Y_1, Y_2, ..., Y_n) satisfy Y_i = a * X_i for all i and some positive constant a. Then
sum(X_i * Y_i) = a * sum((X_i)^2) = sqrt(sum((X_i)^2)) * sqrt(sum((a * X_i)^2)),
and therefore sum(X_i * Y_i) / (sqrt(sum((X_i)^2)) * sqrt(sum((Y_i)^2))) = 1.
In your case, X represents your template image, which has almost only two colors: the background is black (0) and the foreground is a constant c. Y represents the ROI of your search image, which also has almost only two colors: the background is 0 and the foreground is another constant d. So with a = d/c the concept above is satisfied, and if we use cv2.TM_CCORR_NORMED, a result near 1 is exactly what we should expect.
As for cv2.TM_CCOEFF_NORMED, if Y_i = a * X_i + b for all i, some constant b and some positive constant a, then the correlation coefficient between X and Y is 1 (basic statistics). So if we use cv2.TM_CCOEFF_NORMED, a result near 1 is again what we should expect.
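To see this in practice, here is a minimal sketch (reusing the file names from the question) that runs the same template/search pair through several matching methods, so you can compare how cv2.TM_SQDIFF_NORMED reacts to the color change versus the correlation-based methods:
import cv2

img = cv2.imread('./images/searchtest.png', cv2.IMREAD_COLOR)
template = cv2.imread('./images/template.png', cv2.IMREAD_COLOR)

# For SQDIFF-type methods the best match is the minimum; for the others it is the maximum.
for method in (cv2.TM_SQDIFF_NORMED, cv2.TM_CCORR_NORMED, cv2.TM_CCOEFF_NORMED):
    res = cv2.matchTemplate(img, template, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    best = min_val if method == cv2.TM_SQDIFF_NORMED else max_val
    print(method, best)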
Related
I am analyzing a very large number of images and extracting the dominant color codes.
I want to group them into ranges of generic color names, like Green, Dark Green, Light Green, Blue, Dark Blue, Light Blue and so on.
I am looking for a language-agnostic way to implement something myself; if there are examples I can look into in order to achieve this, I would be more than grateful.
In the machine learning field, what you want to do is called classification, in which the goal is to assign one of the class labels (colors) to each of the observations (images).
To do this, classes must be pre-defined. Suppose these are the colors we want to assign to images:
To determine the dominant color of an image, the distance between each of its pixels and all the colors in the table must be calculated. Note that this distance is calculated in RGB color space. To calculate the distance between the ij-th pixel of the image and the k-th color of the table, the following equation can be used:
d_ijk = sqrt((r_ij-r_k)^2+(g_ij-g_k)^2+(b_ij-b_k)^2)
In the next step, for each pixel, the closest color in the table is selected. This is the concept used to compress an image using indexed colors (except that here the palette is the same for all images and is not calculated for each to minimize the difference between the original and the indexed image). Now, as #jairoar pointed out, we can get the histogram of the image (not to be confused with RGB histogram or intensity histogram), and determine the color that has the most repetition.
To show the result of these steps, I used random crops of this work of art! of mine:
This is how images look, before and after indexing (left: original, right: indexed):
And these are most repeated colors (left: indexed, right: dominant color):
But since you said the number of images is large, you should know that these calculations are relatively time consuming. The good news is that there are ways to increase the performance. For example, instead of using the Euclidean distance (formula above), you can use the City Block or Chebyshev distance. You can also calculate the distance for only a fraction of the pixels instead of all the pixels in an image. For this purpose, you can first scale the image to a much smaller size (for example, 32 by 32) and perform the calculations on the pixels of this reduced image. If you decide to resize images, don't bother with bilinear or bicubic interpolation; it isn't worth the extra computation. Instead, go for nearest neighbor, which actually performs a rectangular lattice sampling of the original image.
Although the mentioned changes will greatly increase the speed of the calculations, nothing good comes for free. This is a trade-off of performance versus accuracy. For example, in the previous two pictures, we see that the image which was initially recognized as orange (code 20) is recognized as pink (code 26) after resizing.
To determine the parameters of the algorithm (distance measure, reduced image size and scaling algorithm), first perform the classification on a number of images with the highest possible accuracy and keep the results as ground truth. Then, with multiple experiments, obtain a combination of parameters whose classification error does not exceed a maximum tolerable value.
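As a rough sketch of the steps above (the palette here is a small made-up example, not the full color table; the resize to 32x32 is the speed-up mentioned earlier), the nearest-color assignment and dominant-color count could look like this in Python:
import cv2
import numpy as np

# Hypothetical palette (name, RGB); replace with your own pre-defined colors.
palette = {
    "black": (0, 0, 0), "white": (255, 255, 255),
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
}
names = list(palette.keys())
colors = np.array(list(palette.values()), dtype=np.float32)           # (K, 3)

image = cv2.imread("image.png", cv2.IMREAD_COLOR)
image = cv2.resize(image, (32, 32), interpolation=cv2.INTER_NEAREST)  # speed-up from above
pixels = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).reshape(-1, 3).astype(np.float32)

# Euclidean distance from every pixel to every palette color, then nearest color per pixel.
dists = np.linalg.norm(pixels[:, None, :] - colors[None, :, :], axis=2)  # (N, K)
nearest = np.argmin(dists, axis=1)

# Dominant color = the most frequent nearest palette entry.
dominant = np.bincount(nearest, minlength=len(names)).argmax()
print(names[dominant])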
#saastn's fantastic answer assumes you have a set of pre-defined colors that you want to sort your images into. The implementation is easier if you just want to classify each image to one color out of a set of X equidistant colors, a la histogram.
To summarize: round the color of each pixel in the image to the nearest color out of a set of equidistant color bins. This reduces the precision of your colors down to however many colors you desire. Then count all of the colors in the image and select the most frequent color as your classification for that image.
Here is my implementation of this in Python:
import cv2
import numpy as np
#Set this to the number of colors that you want to classify the images to
number_of_colors = 8
#Verify that the number of colors chosen is between the minimum possible and maximum possible for an RGB image.
assert 8 <= number_of_colors <= 16777216
#Get the cube root of the number of colors to determine how many bins to split each channel into.
number_of_values_per_channel = number_of_colors ** ( 1 / 3 )
#We will divide each pixel by its maximum value divided by the number of bins we want to divide the values into (minus one for the zero bin).
divisor = 255 / (number_of_values_per_channel - 1)
#load the image and convert it to float32 for greater precision. cv2 loads the image in BGR (as opposed to RGB) format.
image = cv2.imread("image.png", cv2.IMREAD_COLOR).astype(np.float32)
#Divide each pixel by the divisor defined above, round to the nearest bin, then convert float32 back to uint8.
image = np.round(image / divisor).astype(np.uint8)
#Flatten the columns and rows into just one column per channel so that it will be easier to compare the columns across the channels.
image = image.reshape(-1, image.shape[2])
#Find and count matching rows (pixels), where each row consists of three values spread across three channels (Blue column, Green column, Red column).
uniques = np.unique(image, axis=0, return_counts=True)
#The first of the two arrays returned by np.unique is an array comprising all of the unique colors.
colors = uniques[0]
#The second of the two arrays returned by np.unique is an array comprising the counts of all of the unique colors.
color_counts = uniques[1]
#Get the index of the color with the greatest frequency
most_common_color_index = np.argmax(color_counts)
#Get the color that was the most common
most_common_color = colors[most_common_color_index]
#Multiply the channel values by the divisor to return the values to a range between 0 and 255
most_common_color = most_common_color * divisor
#If you want to name each color, you could also build a lookup table keyed by the binned BGR values
#(most_common_color before rescaling) that maps each possible bin combination to a name.
print(most_common_color)
While learning an image denoising technique based on the bilateral filter, I encountered this tutorial, which provides the full list of arguments used to run OpenCV's bilateralFilter function. What I see is slightly confusing, because there is no explanation of a mathematical rule relating the diameter value to the two sigma arguments. So, when picking specific arguments to pass into that function, I can hardly tell which diameter corresponds to a particular pair of sigma values.
Does a dependency exist between the two deviations and the diameter? If my inference is correct, what equation (perhaps given in the OpenCV documentation) should be referred to when applying the bilateral filter in a program-based solution?
According to the documentation, the bilateralFilter function in OpenCV takes a parameter d, the neighborhood diameter, as well as a parameter sigmaSpace, the spatial sigma. They can be selected separately, but if d "is non-positive, it is computed from sigmaSpace." For more details we need to look at the source code:
if( d <= 0 )
radius = cvRound(sigma_space*1.5);
else
radius = d/2;
radius = MAX(radius, 1);
d = radius*2 + 1;
That is, if d is not positive, the radius is taken as 1.5 times sigmaSpace, so the diameter is roughly 3 times sigmaSpace. d is also always forced to be odd, so that there is a central pixel in the neighborhood.
Note that the other sigma, sigmaColor, is unrelated to the spatial size of the filter.
In general, if one chooses a sigmaSpace that is too large for the given d, then the Gaussian kernel will be cut off in a way that makes it not appear like a Gaussian, and lose its nice filtering properties (see for example here for an explanation). If it is taken too small for the given d, then many pixels in the neighborhood will always have a near-zero weight, meaning that computational work is wasted. The default value is rather small (one typically uses a radius of 3 times sigma for Gaussian filtering), but it is still quite reasonable given the computational cost of the bilateral filter (a smaller neighborhood is cheaper).
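As a quick sanity check, here is a minimal Python sketch (noisy.png is a placeholder file name) that reproduces the rule from the source snippet above and compares letting OpenCV derive the diameter with specifying it explicitly:
import cv2

sigma_space = 5.0
sigma_color = 50.0

# Same rule as the C++ snippet above: radius = round(1.5 * sigmaSpace), d = 2 * radius + 1.
radius = max(int(round(1.5 * sigma_space)), 1)
d_from_sigma = 2 * radius + 1
print(d_from_sigma)

img = cv2.imread("noisy.png", cv2.IMREAD_COLOR)
# Passing d=0 lets OpenCV compute the diameter from sigmaSpace as in the snippet above.
filtered_auto = cv2.bilateralFilter(img, 0, sigma_color, sigma_space)
filtered_manual = cv2.bilateralFilter(img, d_from_sigma, sigma_color, sigma_space)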
These two values (d and sigma) are totally unrelated to each other. Sigma determines the values of the kernel's pixels, but d determines the size of the kernel.
For example consider this Gaussian filter with sigma=1:
It's a filter kernel, and as you can see, the pixel values of the kernel depend only on sigma (the 3x3 matrix in the middle is equal in both kernels), but reducing the size of the kernel (or reducing the diameter) makes the outer pixels ineffective without affecting the values of the middle pixels.
And now if you change the sigma (with k=3), the kernel is still 3x3 but the pixel values will be different.
cv2.dct doesn't do the conversion properly in OpenCV.
imf = np.float32(block)
dct = cv2.dct(imf)
[[154,123,123,123,123,123,123,136],
[192,180,136,154,154,154,136,110],
[254,198,154,154,180,154,123,123],
[239,180,136,180,180,166,123,123],
[180,154,136,167,166,149,136,136],
[128,136,123,136,154,180,198,154],
[123,105,110,149,136,136,180,166],
[110,136,123,123,123,136,154,136]]
This is a block of an image. When converting it with the code shown above, this should be the result (per the sample block in https://www.math.cuhk.edu.hk/~lmlui/dct.pdf):
[162.3 , 40.6, 20.0...
[30.5 , 108.4...
but instead I found this result:
[1186.3 , 40.6, 20.0...
[30.5, 108.4 ....
The DCT is working fine. The difference between what you got and what you expect is because that particular example actually does the DCT on M instead of on the original image, I. In this case, as the paper shows, M = I - 128. The only difference in your example is that you don't subtract off that piece, so the values are all larger. In a cosine or Fourier transform, the first coefficient (the "DC offset" as it is sometimes called) has a higher value because your image values are just greater. But that's why all the other coefficients are the same. If you take an image and simply add some amount to or subtract some amount from the entire image equally, the coefficients of the transform will be the same, except the very first one.
From the standard definition of the DCT:
X_k = sum over n = 0..N-1 of x_n * cos((pi / N) * (n + 1/2) * k), for k = 0, ..., N-1.
You can see here that for the first coefficient with k = 0, that inside the cosine function, you just get 0, and cos(0) = 1. Thus, X_0 as it's shown in this picture is just the sum of all the x_n values. Generally this value may be scaled by something relating to N so that it's something like an average. When doing so, it relates back to the X_0 term being a "DC offset" which you'll see described as the "mean value of the signal," or in other words, how far the signal is from 0. This is super useful to have as one of the cosine/Fourier transform coefficients as it then can completely describe a signal; all the other coefficients describe the frequency content and so they say nothing about how far the values are from 0, but the first coefficient, the DC offset, does tell you the shift!
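A quick way to reproduce both sets of numbers (a minimal sketch using the 8x8 block from the question) is to run the DCT on the raw block and on the level-shifted block M = I - 128:
import numpy as np
import cv2

block = np.array([
    [154,123,123,123,123,123,123,136],
    [192,180,136,154,154,154,136,110],
    [254,198,154,154,180,154,123,123],
    [239,180,136,180,180,166,123,123],
    [180,154,136,167,166,149,136,136],
    [128,136,123,136,154,180,198,154],
    [123,105,110,149,136,136,180,166],
    [110,136,123,123,123,136,154,136]], dtype=np.float32)

# DCT of the raw block: large DC coefficient (the ~1186 value); the other coefficients are unchanged.
print(cv2.dct(block)[0, 0])
# DCT of the level-shifted block M = I - 128, as in the paper: DC coefficient around 162.
print(cv2.dct(block - 128)[0, 0])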
I have a hypothetical question about image processing:
Supposing we have a grayscale image of size 2x2 which can be represented by an integer matrix (intensity values) with the same dimensions:
(050, 150)
(100, 250)
After applying some mathematical function (it can be any mathematical function), the values changed, for example:
(550, 825)
(990, 1120)
Is there any way that I can represent this matrix as an image again (considering that the pixels intensity range is 0-255)?
One option I can think of is to 'normalize' these values by finding the lowest value and subtracting it from each value:
(0, 275)
(440, 570)
Then, finding the highest value and mapping it to 255, for example:
(0, 123)
(197, 255)
I'm not sure if this approach makes sense (or is a good way to represent the original image).
Anyway, this is just a conceptual question; I'm not trying to implement it, so I don't have any code to show.
Is there any way that I can represent this matrix as an image again (considering that the pixels intensity range is 0-255)?
Oh yes, we can.
The issue is with a colorspace-mapping.
It is not just a translation from an unknown range <A, B>; it also has to respect a certain, reasonable relationship between the two colorspace ranges, the latter (the target) of which is the stated integer <0, 255> bound.
Given that many 2x2 matrices get produced by some unknown process, their colorspace transcoding ought to keep some rationale: if all were put side by side, the transcoding used should be "non-local" (having some global anchor for a globally equalised normalisation of the individual colorspace-transcoded values), so as not to destroy any phenomenon that was observed in the original colorspace of the 4096 x 4096 source imagery but would be torn apart by a merely locally-normalised 2x2 transcoding. Local normalisation leads to incoherent target colorspaces: the globally observable visual phenomenon will not be visible in the set of target 2x2 sub-views, precisely because of the incompatible colorspace transcodings; a new kind of non-linear disorder is introduced by the globally uncoordinated colorspace transcoding, and the initial information value of the original is lost.
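As a minimal sketch of the min-max rescaling described in the question (pure NumPy; the global anchor <0, 2000> below is a made-up example of the "non-local" variant discussed above):
import numpy as np

m = np.array([[550, 825],
              [990, 1120]], dtype=np.float64)

# Local min-max normalisation of a single tile to the 0-255 range.
lo, hi = m.min(), m.max()
local = np.round((m - lo) * 255.0 / (hi - lo)).astype(np.uint8)
print(local)   # [[  0 123] [197 255]]

# Global variant: one anchor (min/max over the whole set of tiles) so every tile is mapped consistently.
global_lo, global_hi = 0.0, 2000.0   # hypothetical global range
global_mapped = np.round((m - global_lo) * 255.0 / (global_hi - global_lo)).astype(np.uint8)
print(global_mapped)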
When applying a Gaussian blur to an image, typically the sigma is a parameter (examples include Matlab and ImageJ).
How does one know what sigma should be? Is there a mathematical way to figure out an optimal sigma? In my case, I have some objects in images that are bright compared to the background, and I need to find them computationally. I am going to apply a Gaussian filter to make the center of these objects even brighter, which hopefully facilitates finding them. How can I determine the optimal sigma for this?
There's no formula to determine it for you; the optimal sigma will depend on image factors - primarily the resolution of the image and the size of your objects in it (in pixels).
Also, note that Gaussian filters aren't actually meant to brighten anything; you might want to look into contrast maximization techniques - sounds like something as simple as histogram stretching could work well for you.
edit: More explanation - sigma basically controls how "fat" your kernel function is going to be; higher sigma values blur over a wider radius. Since you're working with images, bigger sigma also forces you to use a larger kernel matrix to capture enough of the function's energy. For your specific case, you want your kernel to be big enough to cover most of the object (so that it's blurred enough), but not so large that it starts overlapping multiple neighboring objects at a time - so actually, object separation is also a factor along with size.
Since you mentioned MATLAB - you can take a look at various gaussian kernels with different parameters using the fspecial('gaussian', hsize, sigma) function, where hsize is the size of the kernel and sigma is, well, sigma. Try varying the parameters to see how it changes.
I use this convention as a rule of thumb: if k is the size of the kernel, then sigma = (k-1)/6. This is because roughly 99.7% of the mass of a Gaussian pdf lies within a width of 6*sigma (±3*sigma).
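A minimal sketch of that rule of thumb in OpenCV (the file name blobs.png is a placeholder): choose the kernel size so it roughly covers one object, derive sigma from it, or pass only sigma and let the kernel size follow:
import cv2

img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)

k = 31                      # kernel size chosen to roughly cover one object (must be odd)
sigma = (k - 1) / 6.0       # rule of thumb: sigma = (k - 1) / 6

blurred = cv2.GaussianBlur(img, (k, k), sigma)

# Alternatively, pass ksize = (0, 0) and let OpenCV derive the kernel size from sigma.
blurred_auto = cv2.GaussianBlur(img, (0, 0), sigma)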
You have to find a min/max of a function G(X, sigma), where X is the set of your observations (in your case, your image grayscale values). This function can be anything that maintains the "order" of the intensities of the image; for example, this can be done with the 1st derivative of the image (as G):
fil = fspecial('sobel');
im = imfilter(I,fil);
imagesc(im);
colormap(gray);
This gives you the result of the first derivative of the image. Now you want to find the best sigma by maximizing G(X, sigma); that means you try a few sigmas (say, in increasing order) until you reach the one that makes G maximal. This can also be done with the second derivative.
Given that the central value of the kernel equals 1, the dimension that guarantees the outermost value is less than a limit (e.g. 1/100) is as follows:
double limit = 1.0 / 100.0;
size = static_cast<int>(2 * std::ceil(sqrt(-2.0 * sigma * sigma * log(limit))));
if (size % 2 == 0)
{
size++;
}
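For reference, a quick check of that formula in Python (a sketch under the same assumption that the central kernel value is 1), just to see the sizes it produces:
import math

def kernel_size_for(sigma, limit=1.0 / 100.0):
    # The Gaussian value exp(-r^2 / (2 * sigma^2)) drops below `limit` at r = sqrt(-2 * sigma^2 * ln(limit)).
    size = int(2 * math.ceil(math.sqrt(-2.0 * sigma * sigma * math.log(limit))))
    if size % 2 == 0:
        size += 1        # force an odd size so the kernel has a central pixel
    return size

print(kernel_size_for(1.0))  # 9
print(kernel_size_for(2.0))  # 15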