how to get the result in template matching code? - iOS

I am a beginner in computer vision. I am currently working on a project to find the match between two images using matchTemplate in iOS. The problem I am facing is finding a way to determine whether the two images match or not, although matchTemplate itself is working well. I thought of taking a percentage from the result matrix, but I did not know how and could not find a way. minMaxLoc also did not work for me.
If anyone can help me or give me an idea I would really appreciate it, because I am at a desperate point now.
Here is the code:
UIImage* image1 = [UIImage imageNamed:@"1.png"];
UIImage* image2 = [UIImage imageNamed:@"Image002.png"];
// Convert UIImage* to cv::Mat
UIImageToMat(image1, MatImage1);
UIImageToMat(image2, MatImage2);
MatImage1.resize(100 , 180);
MatImage2.resize(100 , 180);
if (!MatImage1.empty())
{
    // Convert the images to grayscale
    // we can also use BGRA2GRAY : Blue, Green, Red and Alpha (opacity)
    cv::cvtColor(MatImage1, grayImage1, cv::COLOR_BGRA2GRAY);
    cv::cvtColor(MatImage2, grayImage2, cv::COLOR_BGRA2GRAY);
}
/// Create the result matrix (cv::Mat::create takes rows, cols, type)
int result_cols = grayImage1.cols;
int result_rows = grayImage1.rows;
result.create( result_rows, result_cols, CV_32FC1 );
/// Do the matching and normalize
matchTemplate( grayImage1, grayImage2, result, CV_TM_SQDIFF_NORMED);
// Normalize
normalize( result, result, 0, 100, cv::NORM_MINMAX, -1 );
// Threshold
cv::threshold(result, result, 30, 0, CV_THRESH_TOZERO);

The intent of matchTemplate(...) is that the template is usually smaller than the image. The template is then moved across the image as a sliding window and a 'matching score' is calculated in some way e.g. using cross-correlation or squared difference.
So if the input image is 10x10 and the template is 3x3, the template is first positioned so that its top left corner sits at the top left corner of the image (the centre of the template is at pixel (1,1), assuming we index from 0). The matching score is calculated, then the template slides one pixel to the right so its centre is at (2,1) and we match again. When the template's centre reaches (8,1) we slide it down to start the next row at (1,2) and repeat.
The output of this process is an 8x8 matrix where the value at each position represents the matching score for the template placed at that point. In general the output has size (W-w+1) x (H-h+1), where WxH is the size of the image and wxh is the size of the template.
You can then use minMaxLoc to find the highest and lowest scores in the output matrix and, depending on the matching method you use, one of these will mark the most likely location of the template within the image.
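For instance, here is a minimal sketch of that behaviour (shown in Python/cv2 for brevity, with a made-up 10x10 image and a 3x3 template; the same functions exist in the C++ API used on iOS):
import cv2
import numpy as np

img = np.random.randint(0, 256, (10, 10), dtype=np.uint8)  # 10x10 "image"
tpl = img[3:6, 4:7].copy()                                  # 3x3 "template" cut out of it

res = cv2.matchTemplate(img, tpl, cv2.TM_SQDIFF_NORMED)
print(res.shape)  # (8, 8), i.e. (H-h+1, W-w+1)

min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# For SQDIFF the minimum is the best match; here min_val is 0 and min_loc is (4, 3),
# the (x, y) of the template's top-left corner in the image.
print(min_val, min_loc)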
Now you are resizing your template and image to the same size:
MatImage1.resize(100 , 180);
MatImage2.resize(100 , 180);
which means that there is only one place that the template can be located within the image and your output matrix should be a 1x1 grid.
You are also using
CV_TM_SQDIFF_NORMED
which is the normalised squared difference. For this score, lower is better: the closer the value in your 1x1 output matrix is to 0, the closer the match between your template and your image.
Given that template and image are both 100x180, you can work out the maximum value this score could take: for the plain CV_TM_SQDIFF it would be 100x180x255^2 (the entire image black and the template white, or vice versa), while the _NORMED variant you are using already scales the score so that 0 is a perfect match. This should help you work out a sensible threshold below which you would say your template matched the image.
Since you only have a 1x1 output, though, there is little value in normalising or thresholding the result.
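The simplest thing, then, is to read out that single value and compare it against a threshold yourself. Below is a rough sketch of the idea (written in Python/cv2 for brevity; in your Objective-C++ code you would read the value with result.at<float>(0,0) instead). The 0.3 cut-off and the percentage conversion are only illustrative guesses, not calibrated values:
import cv2

gray1 = cv2.imread("1.png", cv2.IMREAD_GRAYSCALE)
gray2 = cv2.imread("Image002.png", cv2.IMREAD_GRAYSCALE)
gray1 = cv2.resize(gray1, (100, 180))
gray2 = cv2.resize(gray2, (100, 180))

res = cv2.matchTemplate(gray1, gray2, cv2.TM_SQDIFF_NORMED)
score = float(res[0, 0])                 # equal-sized inputs -> a single value

percent = (1.0 - min(score, 1.0)) * 100  # rough "similarity percentage"
THRESHOLD = 0.3                          # illustrative cut-off, tune on your own data
print("match" if score < THRESHOLD else "no match", score, percent)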

Related

Perspective Transform using paper

I've got an image from a phone camera which has a paper inside it. In the image there are also some coordinates marked, to get the distance between them. Since the aspect ratio of the paper is known in advance (0.7072135785007072), I want to correct the distortion so that the whole image looks as if it's taken from the top view. I collect the four corners of the paper and apply OpenCV getPerspectiveTransform as follows:
import math
import cv2
import numpy as np

pts1 = np.float32([[ 717.,  664.],
                   [1112.,  660.],
                   [1117., 1239.],
                   [ 730., 1238.]])
ratio = 0.7072135785007072
cardH = math.sqrt((pts1[2][0]-pts1[1][0])*(pts1[2][0]-pts1[1][0])+(pts1[2][1]-pts1[1][1])*(pts1[2][1]-pts1[1][1]))
cardW = ratio*cardH
pts2 = np.float32([[pts1[0][0], pts1[0][1]], [pts1[0][0]+cardW, pts1[0][1]], [pts1[0][0]+cardW, pts1[0][1]+cardH], [pts1[0][0], pts1[0][1]+cardH]])
M = cv2.getPerspectiveTransform(pts1, pts2)
with this matrix M I'm transforming the whole image as follows:
transformed = np.zeros((image.shape[1], image.shape[0]), dtype=np.uint8)
dst = cv2.warpPerspective(image, M, transformed.shape)  # dsize is (width, height)
_ = cv2.rectangle(dst, (int(pts2[0][0]), int(pts2[0][1])), (int(pts2[2][0]), int(pts2[2][1])), (0, 255, 0), 2)
The problem with this is that it corrects the perspective of the paper but distorts the overall image, and I don't know why. The input image is this and the corresponding output image is this. In the input image points M and O are aligned horizontally, but to my surprise, after transforming the overall image, points M and O are no longer aligned horizontally. Why is that happening?

Template matching behavior on Color

I am evaluating the template matching algorithm to differentiate similar and dissimilar objects. What I found is confusing: I had the impression that template matching is a method which compares raw pixel intensity values, so when the pixel values vary I expected it to give a lower match percentage.
I have a template and a search image with the same shape and size, differing only in colour (images attached). When I did template matching, surprisingly, I got a match percentage greater than 90%.
import cv2

img = cv2.imread('./images/searchtest.png', cv2.IMREAD_COLOR)
template = cv2.imread('./images/template.png', cv2.IMREAD_COLOR)
res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_val)
Template Image :
Search Image :
Can someone give me some insight into why this is happening? I have even tried this in HSV colour space, the full BGR image, the full HSV image, the individual B, G, R channels and the individual H, S, V channels. In all cases I am getting a good percentage.
Any help would be really appreciated.
res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
There are various methods you can use for template matching, e.g. cv2.TM_CCOEFF, cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR, cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED.
You can look into their equation here:
https://docs.opencv.org/2.4/modules/imgproc/doc/object_detection.html
From what I think, if you want your template matching not to match shapes of different colours, then you should use cv2.TM_SQDIFF or maybe cv2.TM_CCOEFF_NORMED.
The correlation terms give a match at the maximum value, and the squared difference terms give a match at the minimum value. So if you have the exact shape and size, just not the same colour, you will get a high correlation value (see the equations in the link above).
Concept:
Suppose X = (X_1, X_2, ..., X_n) and Y = (Y_1, Y_2, ..., Y_n) satisfy Y_i = a * X_i for all i and some positive constant a. Then
sum(X_i * Y_i) = a * sum(X_i^2) = sqrt(sum(X_i^2)) * sqrt(sum((a * X_i)^2)),
therefore sum(X_i * Y_i) / (sqrt(sum(X_i^2)) * sqrt(sum(Y_i^2))) = 1.
In your case, X represents your template image, which has almost only two colours: the background is black (0) and the foreground colour is a constant c. Y represents the ROI of your image, which also has almost only two colours: background 0 and foreground another constant d. So a = d/c satisfies the condition above, and if we use cv2.TM_CCORR_NORMED we get a result near 1, as expected.
As for cv2.TM_CCOEFF_NORMED: if Y_i = a * X_i + b for all i, some constant b and some positive constant a, then the correlation coefficient between X and Y is 1 (basic statistics). So if we use cv2.TM_CCOEFF_NORMED we also get a result near 1, as expected.
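As a quick numerical check of this identity, here is a small NumPy sketch (with made-up uniform patches, not the images from the question) showing that the normalised cross-correlation of two single-colour patches is 1 regardless of their intensities:
import numpy as np

X = np.full((10, 10), 100.0)  # "template" foreground intensity c = 100
Y = np.full((10, 10), 180.0)  # "search patch" foreground intensity d = 180, i.e. Y = (d/c) * X

# Normalised cross-correlation, as in cv2.TM_CCORR_NORMED
ncc = (X * Y).sum() / (np.sqrt((X ** 2).sum()) * np.sqrt((Y ** 2).sum()))
print(ncc)  # 1.0 (up to floating point error)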

Remove Boxes/rectangles from image

I have the following image.
this image
I would like to remove the orange boxes/rectangles around the numbers and keep the original image clean, without any orange grid/rectangle.
Below is my current code, but it does not remove them.
Mat src = Imgcodecs.imread("enveloppe.jpg", Imgcodecs.CV_LOAD_IMAGE_COLOR);
Mat hsvMat = new Mat();
Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
Scalar lowerThreshold = new Scalar(0, 50, 50);
Scalar upperThreshold = new Scalar(25, 255, 255);
Mat mask = new Mat();
Core.inRange(hsvMat, lowerThreshold, upperThreshold, mask);
//src.setTo(new Scalar(255, 255, 255), mask);
What should I do next?
How can I remove the orange boxes/rectangles from the original images?
Update:
For information, the mask contains exactly all the boxes/rectangles that I want to remove. I don't know how to use this mask to remove the boxes/rectangles from the source (src) image as if they were not present.
This is what I did to solve the problem. I solved the problem in C++ and I used OpenCV.
Part 1: Find box candidates
Firstly I wanted to isolate the signal that was specific to the red channel. I split the image into three channels. I then subtracted the blue channel from the red channel and the green channel from the red channel. After that I subtracted the two subtraction results from one another. The final subtraction result is shown in the image below.
using namespace cv;
using namespace std;
Mat src_rgb = imread("image.jpg");
std::vector<Mat> channels;
split(src_rgb, channels);
Mat diff_rb, diff_rg;
subtract(channels[2], channels[0], diff_rb);
subtract(channels[2], channels[1], diff_rg);
Mat diff;
subtract(diff_rb, diff_rg, diff);
My next goal was to divide the parts of the obtained image into separate "groups". To do that, I smoothed the image a little bit with a Gaussian filter. Then I applied a threshold to obtain a binary image; finally I looked for external contours within that image.
GaussianBlur(diff, diff, cv::Size(11, 11), 2.0, 2.0);
threshold(diff, diff, 5, 255, THRESH_BINARY);
vector<vector<Point>> contours;
findContours(diff, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
Click to see subtraction result, Gaussian blurred image, thresholded image and detected contours.
Part 2: Inspect box candidates
After that, I had to estimate whether the interior of each contour contained a number or something else. I made the assumption that numbers will always be printed with black ink and that they will have sharp edges. Therefore I took the red channel image (channels[2] in the code below), applied just a little bit of Gaussian smoothing and convolved it with a Laplacian operator.
Mat blurred_ch2;
GaussianBlur(channels[2], blurred_ch2, cv::Size(7, 7), 1, 1);
Mat laplace_result;
Laplacian(blurred_ch2, laplace_result, -1, 1);
I then took the resulting image and applied the following procedure to every contour separately. I computed the standard deviation of the pixel values within the contour interior. The standard deviation was high inside the contours that surrounded numbers, and it was low inside the two contours that surrounded the dog's head and the letters on top of the stamp.
That is why I could apply a standard deviation threshold. The standard deviation was roughly twice as large for contours containing numbers, so this was an easy way to select only the contours that contained numbers. Then I drew the contour interior mask. I used erosion and subtraction to obtain the "box edge mask".
The final step was fairly easy. I computed an estimate of the average pixel value near the box on every channel of the image. Then I changed all pixel values under the "box edge mask" to those values on every channel. After I repeated that procedure for every box contour, I merged all three channels into one.
Mat mask(src_rgb.size(), CV_8UC1);
for (int i = 0; i < contours.size(); ++i)
{
mask.setTo(0);
drawContours(mask, contours, i, cv::Scalar(200), -1);
Scalar mean, stdev;
meanStdDev(laplace_result, mean, stdev, mask);
if (stdev.val[0] < 10.0) continue;
Mat eroded;
erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 6);
subtract(mask, eroded, mask);
for (int c = 0; c < src_rgb.channels(); ++c)
{
erode(mask, eroded, cv::Mat());
subtract(mask, eroded, eroded);
Scalar mean, stdev;
meanStdDev(channels[c], mean, stdev, eroded);
channels[c].setTo(mean, mask);
}
}
Mat final_result;
merge(channels, final_result);
imshow("Final Result", final_result);
Click to see red channel of the image, the result of convolution with Laplacian operator, drawn mask of the box edges and the final result.
Please note
This code is far from being optimal, especially the last loop does quite a lot of unnecessary work. But I think that in this case readability is more important (and the author of the question did not request an optimized solution anyway).
Looking towards more general solution
After I posted the initial reply, the author of the question noted that the digits can be of any color and their edges are not necessarily sharp. That means that above procedure can fail because of various reasons. I altered the input image so that it contains different kinds of numbers (click to see the image) and you can run my algorithm on this input and analyze what goes wrong.
The way I see it, one of these approaches is needed (or perhaps a mixture of both) to obtain a more "general" solution:
concentrate only on rectangle shape and color (confirm that the box candidate is really an orange box and remove it regardless of what is inside)
concentrate on numbers only (run a proper number detection algorithm inside the interior of every box candidate; if it contains a single number, remove the box)
I will give a trivial example of the first approach. If you can assume that orange box size will always be the same, just check the box size instead of standard deviation of the signal in the last loop of the algorithm:
Rect rect = boundingRect(contours[i]);
float area = rect.area();
if (area < 1000 || area > 1200) continue;
Warning: the actual area of the rectangles is around 600 px^2, but I took into account the Gaussian blurring, which caused the contour to expand. Please also note that if you use this approach you no longer need to perform the blurring or Laplace operations used for the contour-interior check.
You can also add other simple constraints to that condition; the ratio between width and height is the first one that comes to my mind, as in the sketch below. Geometric properties can also be a good option (right angles, straight edges, convexity ...).
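For illustration only, a small sketch of what such extra checks could look like (written in Python/cv2 here; the area window is taken from the snippet above and the aspect-ratio bounds are hypothetical, to be tuned on your own images):
import cv2

def looks_like_box(contour):
    x, y, w, h = cv2.boundingRect(contour)
    area = w * h
    if not (1000 <= area <= 1200):    # size window, as in the snippet above
        return False
    ratio = w / float(h)
    if not (0.8 <= ratio <= 1.25):    # hypothetical bounds: boxes are roughly square
        return False
    return True

# usage: box_contours = [c for c in contours if looks_like_box(c)]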

OpenCV Template matching against video

Assuming I have a template image and I am searching for a match in a video, what is the measure to look for?
From OpenCV tutorial here
1. loc = np.where( res >= threshold) gives me a numpy array. How do I interpret it on a scale of 1-100, where 100 refers to an exact match, 80 refers to an 80% match, and so on?
2. I am not clear on the min/max values. What do the rectangle coordinates denote?
# Apply template Matching
res = cv2.matchTemplate(img,template,method)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
I'm not too familiar with Python, but I have worked with template matching and OpenCV.
Performing a template match produces a results matrix - called res in your example.
Depending on the template matching method used, the brightest/darkest (max/min) points on this result matrix are your best matches.
In your example the method cv2.TM_CCOEFF_NORMED is used, which normalises the result matrix values so that 1 is a perfect match.
You can then iterate over your result matrix points and only store those points which pass a certain threshold; in the example they use 0.8, which is roughly equivalent to an 80% match.
The last step involves marking each match onto the drawing by using the rectangle drawing function which works as follows:
Rectangle(img, pt1, pt2, color, thickness=1, lineType=8, shift=0)
img - image matrix, the picture you want to draw on
pt1 - Top left point of the rectangle (x,y)
pt2 - Bottom right point of the rectangle (x,y)
color - Line colour (BGR format)
I answered a similar question here and provided an example that might be of some help to you too.
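For reference, here is a small sketch of the loop described above, based on the linked tutorial (the file names are placeholders and the 0.8 threshold is the tutorial's example value), including a rough conversion of the score to a 0-100 scale:
import cv2
import numpy as np

img = cv2.imread("frame.png")        # e.g. one frame grabbed from the video
template = cv2.imread("template.png")
h, w = template.shape[:2]

res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)     # positions of all matches scoring >= 0.8

for pt in zip(*loc[::-1]):           # loc is (rows, cols); reversed it becomes (x, y)
    score_percent = res[pt[1], pt[0]] * 100  # rough 0-100 "match percentage"
    cv2.rectangle(img, pt, (pt[0] + w, pt[1] + h), (0, 255, 0), 2)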

Determining the average distance of pixels (to the centre of an image) in OpenCV

I'm trying to figure out how to do the following calculation in OpenCV.
Assuming a binary image (black/white):
Average distance of white pixels from the centre of the image. An image with most of its white pixels near the edges will have a high score, whereas an image with most white pixels near the centre will have a low score.
I know how to do this manually with loops, but since I'm working in Java I'd rather offload it to a set of high-performance OpenCV calls which are native.
Thanks
distanceTransform() is almost what you want. Unfortunately, it only calculates distance to the nearest black pixel, which means the data must be massaged a little bit. The image needs to contain only a single black pixel at the center for distanceTransform() to work properly.
My method is as follows:
Set all black pixels to an intermediate value
Set the center pixel to black
Call distanceTransform() on the modified image
Calculate the mean distance via mean(), using the white pixels in the binary image as a mask
Example code is below. It's in C++, but you should be able to get the idea:
cv::Mat img; // binary image
img.setTo(128, img == 0);
img.at<uchar>(img.rows/2, img.cols/2) = 0; // Set center point to zero
cv::Mat dist;
cv::distanceTransform(img, dist, CV_DIST_L2, 3); // Can be tweaked for desired accuracy
cv::Scalar val = cv::mean(dist, img == 255);
double mean = val[0];
With that said, I recommend you test whether this method is actually any faster than iterating in a loop. This method does a fair bit more processing than necessary to accommodate the API call.
