How to match features in only a part of one of two given images? - opencv

I have two images.
After finding the keypoints and descriptors, I want to search for matches to the features of image 1 in only a particular part of image 2.
Can I achieve this through the matchesMask parameter?
Or, is there any other method?
Please let me know.
P.S. - I am using the FAST detector, ORB extractor and BFMatcher as of now.

I would copy the "particular part of image 2" in another matrix, and use it for the detection / matching.
For instance, if you wanted to create a matrix pointing to the region of "image2" defined by the first 5 columns and 10 rows, you could do:
cv::Mat subMatrix = image2.colRange(0, 5).rowRange(0, 10);
And then you would use subMatrix for the matching.
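An alternative that avoids copying data and keeps the keypoint coordinates in the full-image frame is to pass a detection mask to detectAndCompute, so features are only detected and described inside the region of interest. A minimal Python sketch, assuming a placeholder file name and the same ROI as above:
import cv2
import numpy as np

img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# The mask must be 8-bit, single channel and non-zero where features are allowed.
mask = np.zeros(img2.shape[:2], dtype=np.uint8)
mask[0:10, 0:5] = 255  # same region as the colRange/rowRange example above

orb = cv2.ORB_create()
kp2, des2 = orb.detectAndCompute(img2, mask)

# kp2 coordinates are already in image-2 coordinates, so no offset correction
# is needed before matching against the descriptors of image 1.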

Related

Is ORB+BFMatcher a good match for recognizing repetitive images (with slight variations)?

I need to recognize images with hand-written numerals with known values. The physical objects carrying the numbers are always identical but come with slight variations of position/scale/lighting. There are about 100 of them, each about 100x500 px in size.
In the first pass, the code should "learn" possible inputs, and then recognize them (classify them as being close to one of the "training" images) when they come again.
I was mostly following the Feature Matching Python-OpenCV tutorial.
The input images are analyzed first, and the keypoints & descriptors are remembered in the orbTrained list:
import cv2
import collections

ORBTrained = collections.namedtuple('ORBTrained', ['kp', 'des', 'img'])
orbTrained = []
for img in trainingImgs:
    z2 = preprocessImg(img)
    orb = cv2.ORB_create(nfeatures=400, patchSize=30, edgeThreshold=0)
    kp, des = orb.detectAndCompute(z2, None)
    orbTrained.append(ORBTrained(kp=kp, des=des, img=z2))
    z3 = cv2.drawKeypoints(z2, kp, None, color=(0, 255, 0), flags=0)
A typical result of this first stage looks like this:
Then, in the next loop, for each real input image, cycle through all training images to see which one matches best:
ORBMatch = collections.namedtuple('ORBMatch', ['dist', 'match', 'train'])
for img in inputImgs:
    z2 = preprocessNum(img)
    orb = cv2.ORB_create(nfeatures=400, patchSize=30, edgeThreshold=0)
    kp, des = orb.detectAndCompute(z2, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    mm = []
    for train in orbTrained:
        m = bf.match(des, train.des)
        dist = sum([m_.distance for m_ in m])
        mm.append(ORBMatch(dist=dist, match=m, train=train))
    # sort matching images based on score
    mm.sort(key=lambda m: m.dist)
    print([m.dist for m in mm[:5]])
    best = mm[0]
    best.match.sort(key=lambda x: x.distance)  # sort matches in the best match
    z3 = cv2.drawMatches(z2, kp, best.train.img, best.train.kp, best.match[:50], None, flags=2)
The result I get is nonsensical, and consistently so (only when I run it with pixel-identical input is the result correct):
What is the problem? Am I completely misunderstanding what to do, or do I just need to tune some parameters?
First, are you sure you're not reinventing the wheel by creating your own OCR library? There are many free frameworks, some of which support training with custom character sets.
Second, you should understand what feature matching is. It finds similar small areas but isn't aware of other feature pairs, so it will match similar corners of characters, not the characters themselves. You might experiment with a larger patchSize so that a patch covers at least half of a digit.
You can minimize false pairs by running feature detection on a single digit at a time, using thresholding and contours to find the character bounds (see the sketch below).
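A minimal sketch of that per-digit segmentation idea, assuming OpenCV 4's findContours return signature and a placeholder input file; the area filter is arbitrary:
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
# Otsu threshold, inverted so the (dark) digits become foreground
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

digitRois = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 50:  # drop tiny specks
        digitRois.append(gray[y:y + h, x:x + w])
# each ROI in digitRois can now be matched against the training images separately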
If the text isn't rotated, using a rotation-invariant feature descriptor such as ORB isn't the best option; try a rotation-variant descriptor, such as FAST.
According to the authors of the ORB and ORB-SLAM papers, ORB is invariant to rotation and scale "in a certain range". Maybe you should first test matching with only small scale or rotation changes.

Image Analysis: sift / harris / affine / RANSAC

I am not sure if this falls under the criteria of a proper question, but still, I would like to give it a shot.
I am looking for a library or function that takes two sets of SIFT descriptors in the form of a file (or a matrix) of [number_of_keypoints][feature_0...feature_127] - meaning 128 features per keypoint - and allows comparison of images (I am using the Harris-Affine algorithm to extract them: http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/extract_features2.tar.gz ).
I am interested in a method that finds mutual nearest neighbours and accepts the number of keypoints in the neighbourhood and a success ratio.
E.g.
Let's say I have two files with keypoints described by SIFT descriptors (image_1.sift, image_2.sift). I would like the method to accept the number of keypoints in the neighbourhood and a match ratio, where the match ratio means, in pseudocode:
For each keypoint in image_1
    Pick 50 nearest neighbours from image_1 -> List<KeyPoints> neighbours_1
    For each keypoint in image_2
        Pick 50 nearest neighbours from image_2 -> List<KeyPoints> neighbours_2
        int numberOfMatches = 0;
        foreach(neighbour in neighbours_1)
        {
            if(neighbour == neighbours_2.Find(neighbour))
                numberOfMatches++;
        }
The ratio is the number of matches to the number of keypoints taken into consideration.
For example FindMutualKeypoints(image_1, image_2, 50, 0.7)
It can be a C#, Java, Python or MATLAB implementation. I don't work with image analysis on a regular basis, and before I start writing my own implementation I assumed there is probably one out there already. I am having trouble finding the correct terms in English (they seem quite different from those in my mother tongue), which is probably why I could not find it yet.
I think OpenCV is the way to go.
Here is an example for it: link
It uses SURF descriptors, but you can also use SIFT.
You then call the FLANN matcher, which also gives you information about the quality of the matches.
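A minimal Python sketch of that approach, with placeholder file names; it assumes a recent OpenCV build where SIFT is exposed as cv2.SIFT_create() (older 3.x builds had it in opencv-contrib as cv2.xfeatures2d.SIFT_create()):
import cv2

img1 = cv2.imread("image_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image_2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with KD-trees, suitable for float descriptors such as SIFT
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
knn = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test; m.distance is the match quality mentioned above
good = [m for m, n in knn if m.distance < 0.7 * n.distance]

# Mutual nearest neighbours can be approximated by matching in both directions
# and keeping the pairs that agree, or by using cv2.BFMatcher(crossCheck=True).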

Feed a complex-valued image into Neural network (tensorflow)

I'm working on a project which tries to "learn" the relationship between a set of around 10k complex-valued input images (amplitude/phase; real/imag) and a real-valued output vector with 48 entries. This output vector is not a set of labels but a set of numbers representing the best parameters for optimizing the visual impression of the given complex-valued image. These parameters are generated by an algorithm. It's possible that there is some noise in the data (coming from the images and from the algorithm which generates the parameter vector).
Those parameters more or less depend on the FFT (fast Fourier transform) of the input image. Therefore I was thinking of feeding the network (5 hidden layers, but the architecture shouldn't matter right now) with a 1D-reshaped version of FFT(complexImage) - some pseudocode:
// discretize spectrum
obj_ft = fftshift(fft2(object));
obj_real_2d = real(obj_ft);
obj_imag_2d = imag(obj_ft);
// convert 2D in 1D rows
obj_real_1d = reshape(obj_real_2d, 1, []);
obj_imag_1d = reshape(obj_imag_2d, 1, []);
// create complex variable for 1d object and concat
obj_complx_1d(index, :) = [obj_real_1d obj_imag_1d];
opt_param_1D(index, :) = get_opt_param(object);
I was wondering if there is a better approach for feeding complex-valued images into a deep network. I'd like to avoid the use of complex gradients, because it shouldn't really be necessary: I'm "just" trying to find a "black box" which outputs the optimized parameters after inserting a new image.
TensorFlow gets the input obj_complx_1d and the output vector opt_param_1D for training.
There are several ways you can treat complex signals as input.
Use a transform to make them into 'images'. Short-time Fourier transforms are used to make spectrograms, which are 2D, with the x-axis being time and the y-axis frequency. If you have complex input data, you may choose to simply look at the magnitude spectrum or the power spectral density of your transformed data.
Something else I've seen in practice is to treat the in-phase and quadrature (real/imaginary) channels separately in the early layers of the network, and to operate across both in the higher layers. In the early layers your network will learn the characteristics of each channel; in the higher layers it will learn the relationship between the I/Q channels.
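As a rough illustration of that branch-then-merge layout (not the answer author's code), here is a sketch using the Keras functional API; the input sizes, layer widths and the 48-entry output are placeholders matching the question:
import tensorflow as tf

# Early layers: each channel (in-phase / quadrature) is processed on its own
in_i = tf.keras.Input(shape=(4096,), name="in_phase")    # real part, flattened
in_q = tf.keras.Input(shape=(4096,), name="quadrature")  # imaginary part, flattened
hi = tf.keras.layers.Dense(256, activation="relu")(in_i)
hq = tf.keras.layers.Dense(256, activation="relu")(in_q)

# Higher layers: operate across both channels
merged = tf.keras.layers.Concatenate()([hi, hq])
h = tf.keras.layers.Dense(128, activation="relu")(merged)
out = tf.keras.layers.Dense(48)(h)  # 48 real-valued output parameters

model = tf.keras.Model(inputs=[in_i, in_q], outputs=out)
model.compile(optimizer="adam", loss="mse")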
These guys do a lot with complex signals and neural nets. In particular, check out 'Convolutional Radio Modulation Recognition Networks':
https://radioml.com/research/
The simplest way to feed complex-valued numbers without using complex gradients in your model is to represent the complex values in a different form. The two main options are:
Magnitude/Angle components
Real/Imaginary components
I'll show this idea using magnitude/angle components. Assume you have a 2D numpy array representing an image with shape = (WIDTH, HEIGHT):
import numpy as np
kSpace = np.fft.ifftshift(np.fft.fft2(img))
This would give you a 2D complex array. You can then transform the array into a real-valued array with two channels:
data = np.dstack((np.abs(kSpace), np.angle(kSpace)))
This array will be a numpy array with shape = (WIDTH, HEIGHT, 2) and represents one complex-valued image. For a set of images, concatenate them to get an array of shape = (NUM_IMAGES, WIDTH, HEIGHT, 2); see the sketch below.
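For a whole dataset, one possible way to build that array (images here is a placeholder list of 2D arrays, and the helper name is hypothetical):
import numpy as np

def to_mag_angle(img):
    # magnitude/angle representation of the shifted 2D FFT, as above
    kSpace = np.fft.ifftshift(np.fft.fft2(img))
    return np.dstack((np.abs(kSpace), np.angle(kSpace)))

# shape = (NUM_IMAGES, WIDTH, HEIGHT, 2)
dataset = np.stack([to_mag_angle(img) for img in images])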
I made a simple example of using TensorFlow to learn a Fourier transform with a simple neural network. You can find this example at https://github.com/michaelmendoza/learning-tensorflow

Different matching results for opencv's descriptor_extractor_matcher when loading data from file

I am using the following code from the descriptor_extractor_matcher.cpp sample to compute the descriptors of img1 (Mat descriptors01), write them to disk and load them back (Mat descriptors1). (Same steps for the keypoints, but the code is much the same...)
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create( argv[2] );
...
Mat descriptors01;
descriptorExtractor->compute( img1, keypoints1, descriptors01 ); // compute descriptors
FileStorage storage("test.yml", FileStorage::WRITE); //save it to disc
storage << "blub" << descriptors01;
storage.release();
Mat descriptors1;
FileStorage storage1("test.yml", FileStorage::READ); // load it again
storage1["blub"] >> descriptors1;
storage1.release();
The keypoints & descriptors for image 2 are computed and used without saving and loading.
I am using only the loaded data (keypoints & descriptors) for image 1 for the matching, so for the descriptors: descriptors1.
Now here is the thing: if I compare the cases
A) Using the code above for computing, storing and loading;
B) Using only loaded data (without computing and store it again)
for the matching, I get different results, as you can see in the pictures for the keypoints as well as for the matched descriptors. I would have expected no differences... What am I missing here? Must I compare two images directly; can I not compare an image against a stored set of keypoints and its descriptors?
Of course I'm using the same values for [detectorType] [descriptorType] [matcherType] [matcherFilterType] [image1] [image2] [ransacReprojThreshold], by the way ;)
Thanks a lot!
UPDATE:
It seems the issue depends on the descriptor. Working with loaded descriptors works for SIFT and SURF, but not for ORB and others. Images: results with the different descriptors for cases A and B:
Try repeating A or B individually and see if the results come out the same. I suspect they won't, and I say that because: #1 your object of interest has poor texture, which results in poor descriptors; #2 the viewpoint change between the two images is huge, which leads to non-repeatability even for the best descriptors like SIFT.
Now comes the part of how to solve this repeatability issue: #1 use some threshold on the norm of the descriptor so that only very strong features are used for matching; #2 use the epipolar constraint along with RANSAC to filter out wrong matches (see the sketch after the images below). I am attaching two images to show how strongly the filter affects the correspondences.
Using SURF to find correspondence between the two images (two images in red-cyan colormap)
After filtering the images using RANSAC using epipolar constraint.
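A minimal Python sketch of that RANSAC + epipolar-constraint filtering; kp1, kp2 and matches are assumed to come from an earlier detection/matching step, and the threshold values are placeholders:
import cv2
import numpy as np

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC estimate of the fundamental matrix; the mask marks epipolar inliers
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
good_matches = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]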
Feel free to comment and discuss further over this issue. :-)

OpenCV: How to get inlier points using findHomography()/findFundamentalMat() and RANSAC

OpenCV does not provide a RANSAC function per se, or at least not in a form that you can just call and be done with (e.g. cv::ransac(...)). All functions/methods that are able to use RANSAC have a flag that enables it. However, this is not always enough if you actually want to do something else with the inliers RANSAC computes after you have estimated a homography or fundamental matrix, for example create a nice plot of the points in Octave or similar software/libraries, apply additional algorithms on the remaining set of filtered matches, etc.
After matching two images one gets a vector of matches, along with the two sets of keypoints (one per image) that were used in the matching process. Using the matches and keypoints we create two vectors of points (e.g. cv::Point2f) and pass these to findHomography(). From this and this posts I discovered how exactly the inliers are marked using a mask that we pass to that function: each row inside the mask relates to an inlier/outlier. However, I was unable to figure out how to use the row-index information with my two sets of points. Looking at OpenCV's source code didn't get me very far. In findFundamentalMat() (similar to findHomography() when it comes to its signature and the mask part) they use compressPoints(), which seems to somehow combine the two input sets (source and destination points) into one. While testing to determine the nature of the mask, I tried 2 sets of matched points (converted from cv::KeyPoint to cv::Point2f - a standard procedure). Each set contains 300 points, so in total we have 600 points. The returned mask contains 300 rows (the values are not important for the topic at hand).
EDIT: While writing this I discovered the answer (see below) but decided to post the question anyway, in case someone needs this information as soon as possible and in compact form. Note that we still need one of OpenCV's functions that supports RANSAC. So if you have a set of points but no intention of computing a homography or fundamental matrix, this is obviously not the way, and I dare say that I was unable to find anything useful in OpenCV's API that can help avoid this obstacle; therefore you need to use an external library.
The solution is actually quite trivial. As we know, each row in our mask tells us whether we have an inlier or an outlier. However, we have 2 sets of points as input, so how exactly does a row containing a single value represent two points? The nature of this indexing became clear while thinking about how those two sets of points actually get into findHomography() (in my case I was computing the homography between two images). Both sets have an equal number of points for the simple reason that they are extracted from the matches between our pair of images. This means that a row in our mask is the index of the points in the two sets and also the index in the vector of matches for the two images. I have successfully managed to manually refer to a small subset of matched points based on this, and the results are as expected. It is important that you don't alter the order of your matches or the 2D points you have extracted from them using the keypoints referenced in each cv::DMatch. Below you can see a simple example for a single pair of inliers.
for(int i = 0; i < matchesObjectScene.size(); ++i)
{
    // extract points from keypoints based on matches
    pointsObject.push_back(keypointsObject.at(matchesObjectScene.at(i).queryIdx).pt);
    pointsScene.push_back(keypointsScene.at(matchesObjectScene.at(i).trainIdx).pt);
}
// compute homography using RANSAC
cv::Mat mask;
cv::Mat H = cv::findHomography(pointsObject, pointsScene, CV_RANSAC, ransacThreshold, mask);
In the example above if we print some inlier
int maskRow = 10;
std::cout << "POINTS: object(" << pointsObject.at(maskRow).x << "," << pointsObject.at(maskRow).y << ") - scene(" << pointsScene.at(maskRow).x << "," << pointsScene.at(maskRow).y << ")" << std::endl;
and then again but this time using our keypoints (can also be done with the extracted 2D points)
std::cout << "POINTS (via match-set): object(" << keypointsObject.at(matchesObjectScene.at(maskRow).queryIdx).pt.x << "," << keypointsObject.at(matchesObjectScene.at(maskRow).queryIdx).pt.y << ") - scene(" << keypointsScene.at(matchesObjectScene.at(maskRow).trainIdx).pt.x << "," << keypointsScene.at(matchesObjectScene.at(maskRow).trainIdx).pt.y << ")" << std::endl;
we actually get the same output:
POINTS: object(462,199) - scene(485,49)
POINTS (via match-set): object(462,199) - scene(485,49)
To tell whether a match is an actual inlier, we simply have to check whether the current row in the mask contains a zero or a non-zero value:
if((unsigned int)mask.at<uchar>(maskRow))
// store match or keypoints or points somewhere where you can access them later
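The same inlier bookkeeping in Python, as a sketch; kp_obj, kp_scene and matches are assumed to come from an earlier detection/matching step, and the reprojection threshold is a placeholder:
import cv2
import numpy as np

pts_obj = np.float32([kp_obj[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_scene = np.float32([kp_scene[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(pts_obj, pts_scene, cv2.RANSAC, 3.0)

# mask has one row per match; non-zero means inlier, so the row index maps
# straight back into the matches / points vectors, exactly as described above
inlier_matches = [m for m, keep in zip(matches, mask.ravel()) if keep]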
On a different note: it may not be possible for RANSAC to exist as a standalone function in OpenCV because RANSAC is an abstract technique for rejecting outliers. RANSAC relies on a base model for performing the outlier rejection, and that base model is very generic - it could be anything (not necessarily points that have some relationship among themselves). This could be the reason why RANSAC only exists as a feature of other functions that perform specific tasks with a defined scope, like findHomography, findFundamentalMat, etc.
