I am new to OpenCV. I want to write a demo that uses OpenCV feature matching to find all images similar to a query image within a set of images.
As a first step, I want to compare just two images. I am using the code from this question:
openCV: creating a match of features, meaning of output array, java
I tested this code with two similar images, one of which is a rotated copy of the other. I then tried two completely different images, such as lena.jpeg vs. car.png (any car image).
I found that the algorithm still returns a matches matrix even for those two completely different images.
My question is: how can I tell that one case is a pair of similar images and the other is not, after I have obtained this matrix:
flannDescriptorMatcher.match(descriptor1, descriptor2, matches.get(0));
I don't want to draw the matching points between the two images; I want a factor that distinguishes the two cases.
Thanks!
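One common way to get such a factor (not shown in the question, but a standard technique) is Lowe's ratio test on the output of knnMatch with k=2: for a genuinely similar pair, the best match distance is much smaller than the second-best. A minimal sketch of just the filtering logic, with hypothetical distance values standing in for real DMatch results:

```python
# Sketch of the ratio-test filtering step (hypothetical distance values).
# With OpenCV you would get these pairs from flannMatcher.knnMatch(desc1, desc2, k=2),
# where each entry holds the distances to the two nearest descriptors.

def good_matches(knn_distances, ratio=0.75):
    """Keep a match only when the best distance is clearly smaller
    than the second-best (Lowe's ratio test)."""
    return [best for best, second in knn_distances if best < ratio * second]

# Similar images: best distances are much smaller than the second-best.
similar = [(30.0, 210.0), (25.0, 190.0), (40.0, 230.0)]
# Different images: best and second-best distances are close together.
different = [(180.0, 200.0), (175.0, 190.0), (160.0, 170.0)]

print(len(good_matches(similar)))    # many matches survive
print(len(good_matches(different)))  # few or none survive
```

The count of surviving matches is the "factor" you can threshold: many good matches means similar, few or none means different.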
Given an image, how can I build an algorithm, using classical image-processing techniques, that identifies the sections where no products are present? I also need to produce a bounding box, with coordinates, for each empty space where products are missing.
I need to accomplish this with OpenCV, and I cannot use deep learning here.
I have used a Canny edge detector, and the empty spaces are well identified by it.
Should I run contour detection on the results of the Canny edge detector?
Any help would be appreciated.
Yes, you can do that with classical image-processing techniques.
Use two images.
One is a reference image with empty shelves.
The second is the current image.
Subtract the first image from the second.
The result will show you which shelves are empty.
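The subtraction idea above can be sketched with plain NumPy arrays standing in for the two grayscale frames (with real images you would load them and use cv2.absdiff instead; the array contents here are made up):

```python
import numpy as np

# Sketch of the background-subtraction idea using tiny synthetic "images".
empty_shelf = np.zeros((4, 6), dtype=np.uint8)   # reference: empty shelves
current = empty_shelf.copy()
current[1:3, 1:3] = 200                          # a "product" occupies this region

# Absolute difference between the frames (cast to int to avoid uint8 wrap-around).
diff = np.abs(current.astype(int) - empty_shelf.astype(int))

# Pixels that changed are occupied; everything else is still empty shelf.
occupied = diff > 50

print(occupied.sum())  # area (in pixels) covered by products
```

Thresholding the difference and then extracting contours/bounding boxes of the *unchanged* regions would give the empty-space coordinates the question asks for.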
I'm using OpenCV ORB to check whether two images are similar. ORB is efficient and gives me the best results most of the time. But in some cases, ORB's output is not satisfactory. I'm using the distance parameter, obtained from knnMatch, to identify similar images.
My logic: if the range of distance values starts from a small value, then the images are similar.
My code is available in this link
After comparison, the result says that Image2 and Image3 are similar to Image1.
Should I change this distance-dependent logic? Would an approach combining machine learning with OpenCV ORB be a solution?
I have done a project similar to yours, and I also ran into issues with ORB. ORB is good at matching keypoints, and I found it relatively reliable when used the way you describe, sorting matches by distance.
However, if you want to determine how similar two whole images are, rather than just their keypoints, then instead of counting how many keypoint matches you have, try comparing the distances between different keypoints within one image to the distances between the corresponding points in the other image.
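One way to sketch this idea (with hypothetical keypoint coordinates, not taken from the post): build the matrix of pairwise distances between the matched keypoints in each image and correlate the two patterns. Geometrically consistent matches give a correlation near 1; accidental matches do not.

```python
import numpy as np

def pairwise_distances(points):
    """All pairwise Euclidean distances between the given 2D points."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def structure_similarity(pts1, pts2):
    """Correlation between the two distance patterns; near 1.0 means the
    matched keypoints have consistent geometry in both images."""
    d1 = pairwise_distances(pts1).ravel()
    d2 = pairwise_distances(pts2).ravel()
    mask = (d1 > 0) | (d2 > 0)      # drop the zero diagonal entries
    return float(np.corrcoef(d1[mask], d2[mask])[0, 1])

# Hypothetical matched keypoint coordinates: image 2 is a shifted copy of image 1,
# so the inter-keypoint geometry is identical.
pts1 = [(10, 10), (50, 10), (30, 40), (70, 60)]
pts2 = [(15, 12), (55, 12), (35, 42), (75, 62)]
print(structure_similarity(pts1, pts2))  # close to 1.0
```

Using correlation makes the score invariant to translation and (approximately) to uniform scale, which a raw match count is not.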
I want to estimate the transformation matrix between two images taken of the same scene from different positions.
I tried two methods:
First method related links:
https://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html
https://docs.opencv.org/3.3.0/dc/d2c/tutorial_real_time_pose.html
I extract keypoints and descriptors, match them between the two images to find corresponding points, and then use findHomography to compute the matrix. However, this image-matching approach doesn't work well, even though I applied the various techniques mentioned in the links above.
For the second method, I tried estimateRigidTransform. However, it returns an empty matrix for the following two example images.
The documentation of the function says: "Two raster images. In this case, the function first finds some features in the src image and finds the corresponding features in dst image. After that, the problem is reduced to the first case." So it seems to use ideas similar to the first method.
My questions:
1. Why does estimateRigidTransform return an empty matrix for such similar images?
2. Is there a better method for computing the transformation matrix between similar images taken of the same scene from different positions? For example, can I skip the feature detection and matching steps?
Thanks.
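On question 2: if you already have corresponding points (from matching, manual selection, or any other source), a 4-parameter similarity/rigid transform can be fitted directly with linear least squares, sidestepping estimateRigidTransform's internal feature detection. A minimal NumPy sketch on synthetic, noise-free points with a known ground-truth transform:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Fit the 2x3 similarity transform [a -b tx; b a ty] mapping src -> dst
    by solving a linear least-squares system built from the correspondences."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        rows.append([x, -y, 1, 0]); rhs.append(u)   # u = a*x - b*y + tx
        rows.append([y,  x, 0, 1]); rhs.append(v)   # v = b*x + a*y + ty
    a, b, tx, ty = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

# Synthetic ground truth: 10-degree rotation plus a translation.
theta = np.deg2rad(10)
true_M = np.array([[np.cos(theta), -np.sin(theta),  5.0],
                   [np.sin(theta),  np.cos(theta), -3.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [80, 60]], float)
dst = (true_M[:, :2] @ src.T).T + true_M[:, 2]

M = estimate_similarity(src, dst)
print(np.round(M, 3))  # recovers the rotation + translation
```

With real, noisy matches you would wrap this in RANSAC (as findHomography does with its RANSAC flag) to reject the outlier correspondences that are likely the cause of the failures described above.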
I want to detect or recognize a specific object in an image. First, let me describe what I have done. I tried to detect a logo, e.g. the Google logo. I have the original image of the logo, but the images I need to process are taken with different cameras, from different angles, at different distances, and from different screens (wide screens, such as in a cinema).
I am using OpenCV 3 to check whether this logo appears in these images. I have tried OpenCV's SURF and SIFT functions, the NORM_L2 norm for comparing two images, template matching, and also an SVM (which was slow and detected incorrectly), as well as some other OpenCV functions, but none of them worked well enough. I then wrote my own algorithm, which works better than the functions above, but it still cannot satisfy the requirements.
Now my question is: is there a better way to detect a specific object in an image? For example, what should the first, second, and subsequent steps be?
I recently stumbled upon a SIFT implementation for C#. I thought it would be great fun to play around with it, so that's what I did.
The implementation generates a set of "interest points" for any given image. How would I actually use this information to compare two images?
What I'm after is a single "value of similarity". Can that be generated out of the two sets of interest points of the two images?
You need to run SIFT on both images so that you get interest points (let's call them keypoints) in both images.
After that, you need to find matches between the keypoints of the two images. OpenCV implements algorithms for that purpose.
The value of similarity can then be computed from the number of matches. You can consider the images the same if you get more than 4 matched points; with at least 4 correspondences you can also calculate the relative rotation between them.
You can use the number of matches as a similarity metric.
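A minimal sketch of turning a match count into a single similarity value, normalized by the smaller keypoint set so it stays in [0, 1] (the counts below are hypothetical, standing in for the results of a real SIFT + matcher run):

```python
# Sketch: a single "value of similarity" from the match count.
# In a real pipeline, num_good_matches would come from matching the two
# SIFT keypoint sets (e.g. with a brute-force or FLANN matcher).

def similarity(num_good_matches, num_kp1, num_kp2):
    """Fraction of keypoints that found a good match, in [0, 1]."""
    denom = min(num_kp1, num_kp2)
    return num_good_matches / denom if denom else 0.0

print(similarity(120, 400, 350))  # two views of the same scene: high score
print(similarity(5, 400, 350))    # unrelated images: low score
```

Normalizing by the keypoint count matters because a detailed image pair can produce more raw matches than a plain one, even when it is less similar.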