BRISK feature detector detects zero keypoints - opencv

The BRISK detector shown below gives me no keypoints. Can somebody please suggest what the problem might be?
I will try to explain what I am doing below with some of the code.
#include "opencv2/features2d/features2d.hpp"
using namespace cv;
using namespace std;
Ptr<FeatureDetector> detector;
detector = FeatureDetector::create("BRISK");
// the filename is given some path
Mat img = imread(filename, 0);
CV_Assert( !img.empty() );
vector<KeyPoint> kp;
detector->detect(img, kp);
So, when I debug this and check the keypoint vector (kp), it says <0 items>.
With similar code, other detectors like ORB, SIFT or SURF work as intended!
Can somebody please suggest a solution?
I am using OpenCV 2.4.9 with Qt Creator 2.5.2.
Thanks

Okay, I got it myself!
For anyone interested: the default parameter values for BriskFeatureDetector, i.e. octaves = 3 and thresh = 30, didn't give me any keypoints at all. But when I changed octaves to 0, as shown in the original author's demo that uses BRISK's AGAST detector, it gave me a considerable number of keypoints.
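For reference, a minimal sketch of that change (assuming the OpenCV 2.4 constructor BRISK(thresh, octaves, patternScale), which lets you bypass the string factory):

#include "opencv2/features2d/features2d.hpp"
#include <vector>
using namespace cv;

// octaves = 0 runs single-scale (AGAST-style) detection only; thresh keeps its default of 30
Ptr<FeatureDetector> detector(new BRISK(30, 0));
std::vector<KeyPoint> kp;
detector->detect(img, kp);   // img is the grayscale Mat loaded as before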
Thanks and enjoy!

Related

Determining Image similarity when images have varying factors. Image Analysis

Greetings. For the past week (or more) I've been struggling with a problem.
Scenario:
I am developing an app which will allow an expert to create a recipe using a provided image of something to be used as a base. The recipe consists of areas of interest. The program's purpose is to let non-experts use it by providing images similar to that original; the software then cross-checks the different areas of interest between the recipe image and the provided image.
One use-case scenario could be banknotes. The expert would select an area on a good picture of a banknote that is genuine, and then the user would provide the software with images of banknotes that need to be checked. So illumination, as well as the capturing device, could be different.
I don't want you guys to delve into the nature of comparing banknotes, that's another monster to tackle and I got it covered for the most part.
My Problem:
Initially I shrink one of the two pictures to the size of the smaller one.
So now we are dealing with pictures having the same size. (I actually perform the shrinking to the areas of interest and not the whole picture, but that shouldn't matter.)
I have tried different methodologies to compare these parts, but each one had its limitations due to the nature of the images. Illumination might be different, the provided image might have some sort of contamination, etc.
What have I tried:
Simple image similarity comparison using RGB difference.
Problem is the provided image could be totally different but the colours could be similar, so I would get high percentages on "totally" different banknotes.
SSIM on RGB images.
Would give a really low percentage of similarity on all channels.
SSIM after using a Sobel filter.
Again a low percentage of similarity.
I used SSIM from both scikit-image in Python and from OpenCV.
Feature matching with FLANN.
Couldn't find a good way to use the detected matches to extract a similarity score (one possible approach is sketched after this list).
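One rough way to turn feature matches into a score (a sketch, assuming the OpenCV 3.x ORB/BFMatcher API; the 1000-feature limit, the 0.75 ratio and the normalisation are arbitrary choices) is to count the matches that survive Lowe's ratio test:

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <algorithm>
#include <vector>

// Sketch: similarity as the fraction of ORB matches that pass Lowe's ratio test.
double matchSimilarity(const cv::Mat& imgA, const cv::Mat& imgB)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb->detectAndCompute(imgA, cv::noArray(), kpA, descA);
    orb->detectAndCompute(imgB, cv::noArray(), kpB, descB);
    if (descA.empty() || descB.empty())
        return 0.0;

    // Hamming brute-force matcher; FLANN with an LSH index would also work for ORB.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(descA, descB, knn, 2);

    int good = 0;
    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
            ++good;

    // Normalise by the smaller keypoint set so the score stays roughly in [0, 100].
    return 100.0 * good / std::min(kpA.size(), kpB.size());
}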
Basically I am guessing that I need to use various methods and algorithms to achieve the best result. My gut tells me that I will need to combine RGB comparison results with a methodology that will:
Perform some form of edge detection, like Sobel.
Compare the results based on shape matching or something similar (a rough sketch of this idea follows below).
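A minimal sketch of that combination (OpenCV 3.x assumed; the Sobel-magnitude threshold of 50 and the use of only the largest contour are arbitrary, hypothetical choices):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch: Sobel edge magnitude -> binary edge map -> largest contour.
static std::vector<cv::Point> mainContour(const cv::Mat& gray)
{
    cv::Mat gx, gy, mag, edges;
    cv::Sobel(gray, gx, CV_32F, 1, 0);
    cv::Sobel(gray, gy, CV_32F, 0, 1);
    cv::magnitude(gx, gy, mag);
    mag.convertTo(edges, CV_8U);
    cv::threshold(edges, edges, 50, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return std::vector<cv::Point>();

    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best]))
            best = i;
    return contours[best];
}

// Lower return value means more similar shapes (matchShapes compares Hu moments,
// which are fairly insensitive to illumination once you are working on edges).
double edgeShapeDistance(const cv::Mat& grayA, const cv::Mat& grayB)
{
    return cv::matchShapes(mainContour(grayA), mainContour(grayB),
                           cv::CONTOURS_MATCH_I2, 0.0);
}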
I am an image analysis newbie and I also tried to find a way to compare the Sobel products of the provided images, using the mean and std calculations from OpenCV; however, I either did it wrong or the results I got were useless anyway. I calculated the Euclidean distance between the vectors that resulted from the mean and std calculations, but I could not use the results, mainly because I couldn't see how they related between images.
I am not providing the code I used, firstly because I scrapped some of it, and secondly because I am not looking for a code solution but a methodology or some direction to study material. (I've read a shitload of papers already.)
Finally, I am not trying to detect similar images, but, given two images, to extract the similarity between them, trying to bypass small differences created by illumination or paper distortion etc.
I would also like to say that I tested all the methods by providing the same image twice, and I would get 100% similarity, so I didn't totally fuck it up.
Is what I am trying even possible without some sort of training set to teach the software what the acceptable variants of the image are? (Again, I have no idea if that even makes sense :D)
I think you can try feature matching, e.g. the SURF algorithm with FLANN:
https://docs.opencv.org/3.3.0/dc/dc3/tutorial_py_matcher.html
http://www.coldvision.io/2016/06/27/object-detection-surf-knn-flann-opencv-3-x-cuda/
Example of Feature Detection using SURF : https://docs.opencv.org/3.0-beta/doc/tutorials/features2d/feature_detection/feature_detection.html
#include <stdio.h>
#include <iostream>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/highgui.hpp"

using namespace cv;
using namespace cv::xfeatures2d;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_1 = imread( argv[1], IMREAD_GRAYSCALE );
  Mat img_2 = imread( argv[2], IMREAD_GRAYSCALE );

  if( !img_1.data || !img_2.data )
  { std::cout << " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 400;
  Ptr<SURF> detector = SURF::create( minHessian );

  std::vector<KeyPoint> keypoints_1, keypoints_2;
  detector->detect( img_1, keypoints_1 );
  detector->detect( img_2, keypoints_2 );

  //-- Draw keypoints
  Mat img_keypoints_1; Mat img_keypoints_2;
  drawKeypoints( img_1, keypoints_1, img_keypoints_1, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
  drawKeypoints( img_2, keypoints_2, img_keypoints_2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );

  //-- Show detected (drawn) keypoints
  imshow("Keypoints 1", img_keypoints_1 );
  imshow("Keypoints 2", img_keypoints_2 );

  waitKey(0);
  return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_detector <img1> <img2>" << std::endl; }
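The tutorial above only detects and draws keypoints. A rough continuation (same SURF/xfeatures2d setup; this snippet is meant to be placed after the two detect calls and reuses detector, img_1, img_2 and the keypoint vectors from above; the 3 * min_dist cut-off is an arbitrary choice) that extracts descriptors and matches them with FLANN could look like this:

//-- Step 2 (sketch): compute SURF descriptors for the keypoints detected above
Mat descriptors_1, descriptors_2;
detector->compute( img_1, keypoints_1, descriptors_1 );
detector->compute( img_2, keypoints_2, descriptors_2 );

//-- Step 3 (sketch): match the descriptors with FLANN and keep the closest ones
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match( descriptors_1, descriptors_2, matches );

double min_dist = 1e9;
for( size_t i = 0; i < matches.size(); i++ )
    if( matches[i].distance < min_dist ) min_dist = matches[i].distance;

std::vector<DMatch> good_matches;
for( size_t i = 0; i < matches.size(); i++ )
    if( matches[i].distance <= 3 * min_dist )   // loose, hypothetical cut-off
        good_matches.push_back( matches[i] );

Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, good_matches, img_matches );
imshow( "Good Matches", img_matches );
waitKey(0);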
OK, after some digging around, this is what I came up with:
#!/usr/bin/env python
import numpy as np
import cv2
import sys
import matplotlib.image as mpimg
from skimage import io
from skimage import measure
import time

s = 0
imgA = cv2.imread(sys.argv[1])
imgB = cv2.imread(sys.argv[2])
#imgA = cv2.imread('imageA.bmp')
#imgB = cv2.imread('imageB.bmp')

imgA = cv2.cvtColor(imgA, cv2.COLOR_BGR2GRAY)
imgB = cv2.cvtColor(imgB, cv2.COLOR_BGR2GRAY)

ret, imgA = cv2.threshold(imgA, 127, 255, 0)
ret, imgB = cv2.threshold(imgB, 127, 255, 0)

imgAContours, contoursA, hierarchyA = cv2.findContours(imgA, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
imgBContours, contoursB, hierarchyB = cv2.findContours(imgB, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

imgAContours = cv2.drawContours(imgAContours, contoursA, -1, (0, 0, 0), 1)
imgBContours = cv2.drawContours(imgBContours, contoursB, -1, (0, 0, 0), 1)

imgAContours = cv2.medianBlur(imgAContours, 5)
imgBContours = cv2.medianBlur(imgBContours, 5)

#s = 100 * 1/(1+cv2.matchShapes(imgAContours,imgBContours,cv2.CONTOURS_MATCH_I2,0.0))
#s = measure.compare_ssim(imgAContours,imgBContours)
#equality = np.equal(imgAContours,imgBContours)

# pixel-by-pixel boolean comparison of the two contour images
total = 0.0
sum = 0.0
for x in range(len(imgAContours)):
    for y in range(len(imgAContours[x])):
        total += 1
        t = imgAContours[x, y] == imgBContours[x, y]
        if t:
            sum += 1

s = (sum / total) * 100
print(s)
Basically I preprocess the two images as simply as possible, then I find the contours. The matchShapes function from OpenCV was not giving me the results I wanted.
So I create two images using the information from the contours, and then I apply a median blur filter.
Currently, I am doing a simple boolean check pixel by pixel. However, I am planning to change this in the future to make it smarter, probably with some array math.
If anyone has any suggestions, they are welcome.

Extract point descriptors from small images using OpenCV

I am trying to extract different point descriptors (SIFT, SURF, ORB, BRIEF, ...) to build a Bag of Visual Words. The problem seems to be that I am using very small images: 12x60 px.
Using a dense detector I am able to get some keypoints, but when extracting the descriptors, no data is extracted.
Here is the code :
vector<KeyPoint> points;
Mat descriptor; // descriptor of the current image
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f,1,0.1f,6,0,true,false));
image = imread(filename, 0);
roi = Mat(image,Rect(0,0,12,60));
detector->detect(roi,points);
extractor->compute(roi,points,descriptor);
cout << descriptor << endl;
The result is [] (with BRIEF and ORB) and a segfault (with SURF and SIFT).
Does anyone have a clue on how to densely extract point descriptors from small images in OpenCV?
Thanks for your help.
Indeed, I finally managed to work my way to a solution. Thanks for the help.
I am now using an ORB descriptor extractor with explicitly initialised parameters instead of a default-constructed one, e.g.:
Ptr<DescriptorExtractor> extractor(new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));
I had to explore the OpenCV documentation thoroughly before finding the answer to my problem: the ORB documentation.
Also, if people are using the dense point extractor, they should be aware that after the descriptor computation they may have fewer keypoints than were produced by the keypoint detector. The descriptor computation removes any keypoints for which it cannot get the data.
BRIEF and ORB use a 32x32 patch to get the descriptor. Since it doesn't fit in your image, they remove those keypoints (to avoid returning keypoints without a descriptor).
In the case of SURF and SIFT, they can use smaller patches, but that depends on the scale provided by the keypoint. In this case, I guess they have to use a bigger patch and the same thing as before happens. I don't know why you get a segfault, though; maybe the SIFT/SURF descriptor extractors don't check that keypoints are inside the image boundaries, as the BRIEF/ORB ones do.
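A quick way to see this pruning in action (a sketch reusing the 2.4-style dense detector and BRIEF extractor from the question) is to print the keypoint count before and after compute():

#include "opencv2/features2d/features2d.hpp"
#include <iostream>
#include <vector>
using namespace cv;

// Sketch (OpenCV 2.4-style API): compute() silently drops keypoints whose
// 32x32 BRIEF patch would fall outside a small image such as a 12x60 px ROI.
void showPruning(const Mat& roi)
{
    Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f, 1, 0.1f, 6, 0, true, false));
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");

    std::vector<KeyPoint> points;
    detector->detect(roi, points);
    std::cout << "keypoints detected: " << points.size() << std::endl;

    Mat descriptors;
    extractor->compute(roi, points, descriptors);   // prunes border keypoints in place
    std::cout << "keypoints left after compute: " << points.size()
              << ", descriptor rows: " << descriptors.rows << std::endl;
}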

OpenCV FREAK: Fast Retina KeyPoint descriptor

I am developing an application which involves the use of FREAK descriptors, which were just released in OpenCV 2.4.2.
In the documentation only two functions appear:
The class constructor
A confusing method selectPairs()
I want to use my own detector and then call the FREAK descriptor passing the keypoints detected but I don't understand clearly how the class works.
Question:
Do I strictly need to use selectPairs()? Is it enough to just call FREAK.compute()? I don't really understand what selectPairs is for.
I just flicked through the paper and saw in paragraph 4.2 that the authors set up a method to select the pairs of receptive fields to evaluate in their descriptor, as taking all possible pairs would be too much of a burden. The selectPairs() function lets you recompute this set of pairs.
Afterwards I read the documentation, which points exactly to this paragraph of the original article. Also, a few comments in the documentation tell you that there is an already available, offline-learned set of pairs that is ready to use with the FREAK descriptor. So I guess, at least for a start, you could just use the precomputed pairs and pass the list of KeyPoints obtained from your detector as an argument to FREAK.compute.
If your results are disappointing, you could try the keypoint selection method used in the original paper (paragraph 2.1), then ultimately learn your own set of pairs.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "cv.h"
#include "highgui.h"
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/flann/flann.hpp>
#include <opencv2/legacy/legacy.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat image1, image2;
    image1 = imread("C:\\lena.jpg", 0);
    image2 = imread("C:\\lena1.bmp", 0);

    vector<KeyPoint> keypointsA, keypointsB;
    Mat descriptorsA, descriptorsB;
    std::vector<DMatch> matches;

    OrbFeatureDetector detector(400);
    FREAK extractor;
    BruteForceMatcher<Hamming> matcher;

    detector.detect(image1, keypointsA);
    detector.detect(image2, keypointsB);

    extractor.compute(image1, keypointsA, descriptorsA);
    extractor.compute(image2, keypointsB, descriptorsB);

    matcher.match(descriptorsA, descriptorsB, matches);

    int nofmatches = 30;
    nth_element(matches.begin(), matches.begin() + nofmatches, matches.end());
    matches.erase(matches.begin() + nofmatches + 1, matches.end());

    Mat imgMatch;
    drawMatches(image1, keypointsA, image2, keypointsB, matches, imgMatch);
    imshow("matches", imgMatch);
    waitKey(0);

    return 0;
}
This is a simple application to match points in two images. I have used ORB to detect keypoints and FREAK as the descriptor for those keypoints, then brute-force matching to find the corresponding points in the two images. I have taken the top 30 points with the best match. Hope this helps you somewhat.

Generate local features For each keypoint by using SIFT

I have an image and I want to locate keypoints using the SIFT detector and group them; then I want to generate local features for each keypoint using SIFT. Would you please help me with how I can do this? Please give me any suggestions.
I really appreciate your help
I'm not sure that I understand what you mean, but if you extract SIFT features from an image, you automatically get the feature descriptor, which is used to compare features to each other. Of course, you also get the feature location, size, direction and Hessian value with it.
You can group those features by their position in the image, but there is currently no way that I'm aware of to compare those groups, since they may be locally related but can have wildly different feature descriptors.
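If you do want to group keypoints spatially, a rough sketch (the cluster count K is an arbitrary, hypothetical choice, and the descriptors themselves are untouched) is to run k-means on the keypoint coordinates:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>
using namespace cv;

// Sketch: cluster keypoints by image position with k-means; returns one label per keypoint.
std::vector<int> groupKeypointsByPosition(const std::vector<KeyPoint>& keypoints, int K = 5)
{
    if ((int)keypoints.size() < K)
        return std::vector<int>(keypoints.size(), 0);   // too few points to cluster

    Mat pts((int)keypoints.size(), 2, CV_32F);
    for (int i = 0; i < (int)keypoints.size(); ++i)
    {
        pts.at<float>(i, 0) = keypoints[i].pt.x;
        pts.at<float>(i, 1) = keypoints[i].pt.y;
    }

    Mat labels, centers;
    kmeans(pts, K, labels,
           TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 20, 1.0),
           3, KMEANS_PP_CENTERS, centers);
    return std::vector<int>(labels.begin<int>(), labels.end<int>());
}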
Also I would suggest SURF. It is faster and not patent encumbered.
Have a look at the examples from OpenCV if you want specific instructions on how to retrieve and compare descriptors.
If you are using OpenCV, here are the commands to do it; if you are using MATLAB, see the link MATCHING_using surf.
USING OPENCV::
// you can change the parameters for your requirement
double hessianThreshold=200;
int octaves=3;
int octaveLayers=4;
bool upright=false;
vector<KeyPoint>keypoints;
//The detector detects the keypoints in an image here image is RGBIMAGE of Mat type
SurfFeatureDetector detector( hessianThreshold, octaves, octaveLayers, upright );
detector.detect(RGB_IMAGE, keypoints);
//The extractor computesthe local features around the keypoints
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( last_ref, keypoints, descriptors);
// all the key points local features are stored in rows one after another in descriptors matrix...
Hope it is useful:)

Improve matching accuracy of cvMatchShapes in OpenCV

I tried using cvMatchShapes() to match two marker patterns. As you can see at Best way to count number of "White Blobs" in a Thresholded IplImage in OpenCV 2.3.0, the source has poor image quality.
I'm not satisfied with the results returned from that function; most of the time it gives incorrect matches. How can I use this function (or some more suitable function) to do effective matching?
Note: My fallback solution is to change marker pattern to have fairly big/clearly visible shapes. Please visit the above link to see my current marker pattern.
EDIT
I found this comprehensive comparison of various feature detection algorithms implemented in OpenCV: http://computer-vision-talks.com/2011/01/comparison-of-the-opencvs-feature-detection-algorithms-2 . According to that, FAST seems to be a good choice.
I'd give +1 to anyone who can share a good tutorial for implementing FAST (or else STAR/SURF/SIFT) in OpenCV. I'm unable to google it; Google thinks FAST means fast as in speed :(
Here is the FAST inventor's website. FAST stands for Features from Accelerated Segment Test. Here is a short Wikipedia entry on AST based algorithms. Also, here is a good survey of the different feature detectors currently in use today.
FAST is actually already implemented by OpenCV if you would like to use their implementation.
EDIT: Here is a short example I created to show you how to use the FAST detector:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    Mat far = imread("far.jpg", 0);
    Mat near = imread("near.jpg", 0);

    Ptr<FeatureDetector> detector = FeatureDetector::create("FAST");

    vector<KeyPoint> farPoints;
    detector->detect(far, farPoints);

    Mat farColor;
    cvtColor(far, farColor, CV_GRAY2BGR);
    drawKeypoints(farColor, farPoints, farColor, Scalar(255, 0, 0), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    imshow("farColor", farColor);
    imwrite("farPoints.jpg", farColor);

    vector<KeyPoint> nearPoints;
    detector->detect(near, nearPoints);

    Mat nearColor;
    cvtColor(near, nearColor, CV_GRAY2BGR);
    drawKeypoints(nearColor, nearPoints, nearColor, Scalar(0, 255, 0), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    imshow("nearColor", nearColor);
    imwrite("nearPoints.jpg", nearColor);

    waitKey();

    return 0;
}
This code finds the following feature points for the far and near imagery:
As you can see, the near image has many more features, but it looks like the same basic structure is detected with the far image. So, you should be able to match these. Have a look at the descriptor_extractor_matcher.cpp. That should get you started.
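For the matching step itself, a rough continuation of the example above (2.4-style API; BRIEF is just one arbitrary choice of descriptor for the FAST keypoints, and the variables far, near, farPoints and nearPoints come from the code above) could be:

//-- sketch: describe the FAST keypoints with BRIEF and match them
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Mat farDescriptors, nearDescriptors;
extractor->compute(far, farPoints, farDescriptors);
extractor->compute(near, nearPoints, nearDescriptors);

BFMatcher matcher(NORM_HAMMING, true);      // cross-check for slightly cleaner matches
vector<DMatch> matches;
matcher.match(farDescriptors, nearDescriptors, matches);

Mat imgMatches;
drawMatches(far, farPoints, near, nearPoints, matches, imgMatches);
imshow("matches", imgMatches);
imwrite("matches.jpg", imgMatches);
waitKey();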
Hope that helps!
