I'm implementing a Bag-of-Words image classifier using OpenCV. Initially I tested SURF descriptors extracted at SURF keypoints. I've heard that Dense SIFT (or PHOW) descriptors can work better for my purposes, so I tried them too.
To my surprise, they performed significantly worse, actually almost 10 times worse. What could I be doing wrong? I'm using DenseFeatureDetector from OpenCV to get keypoints. I'm extracting about 5000 descriptors per image from 9 layers and clustering them into 500 clusters.
Should I try PHOW descriptors from the VLFeat library? Also, I can't use the chi-square kernel (recommended in many papers) with OpenCV's SVM implementation. Is this crucial to classifier quality? Should I try another library?
Another question concerns scale invariance: I suspect it can be affected by dense feature extraction. Am I right?
It depends on the problem. You should try different techniques to find out which one works best on your problem. PHOW is usually very useful when you need to classify any kind of scene.
You should know that PHOW is a little different from plain Dense SIFT. I used VLFeat's PHOW a few years ago, and looking at the code, it just calls dense SIFT at several bin sizes, with some smoothing. That multi-scale extraction is one clue to how it gains some invariance to scale.
Also, in my experiments I used libsvm, and histogram intersection turned out to be the best kernel for me. By default, chi-square and histogram intersection kernels are included neither in libsvm nor in OpenCV's SVM (which is based on libsvm). You have to decide whether to try them. I can tell you that the RBF kernel achieved nearly 90% accuracy, whereas histogram intersection reached 93% and chi-square 91%, but those results are from my particular experiments. You should start with RBF and auto-tuned parameters, and see if that is enough.
Summarizing: it all depends on your particular experiments. If you use Dense SIFT, you could try increasing the number of clusters and calling Dense SIFT at several scales (I recommend the PHOW way).
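As a rough illustration of the precomputed-kernel route, here is a minimal sketch using scikit-learn's SVC instead of OpenCV's SVM (the random histograms are only there to make the snippet runnable; replace them with your real BoW histograms and labels):

import numpy as np
from sklearn.svm import SVC

def histogram_intersection(A, B):
    # K[i, j] = sum_k min(A[i, k], B[j, k]) for BoW histograms A and B
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

# dummy 500-bin histograms just to make the sketch runnable
rng = np.random.RandomState(0)
train_hists, train_labels = rng.rand(40, 500), rng.randint(0, 2, 40)
test_hists = rng.rand(10, 500)

svm = SVC(kernel="precomputed", C=1.0)
svm.fit(histogram_intersection(train_hists, train_hists), train_labels)
predictions = svm.predict(histogram_intersection(test_hists, train_hists))

The same precomputed-kernel trick works for chi-square; only the kernel function changes.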
EDIT: I was looking at OpenCV's DenseFeatureDetector, and maybe you could start with
m_detector = new DenseFeatureDetector(4, 4, 1.5);
knowing that VLFeat's PHOW uses [4 6 8 10] as bin sizes.
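For reference, a rough Python equivalent of that multi-scale dense extraction (assuming OpenCV 2.4.x with the nonfree module, where the Dense detector and SIFT are exposed through the factory functions; the parameter values are only a starting point):

import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# dense keypoints at several scales, roughly mimicking PHOW's multiple bin sizes
dense = cv2.FeatureDetector_create("Dense")
dense.setDouble("initFeatureScale", 4)   # smallest scale
dense.setInt("featureScaleLevels", 4)    # number of scales
dense.setDouble("featureScaleMul", 1.5)  # multiplier between scales

sift = cv2.DescriptorExtractor_create("SIFT")
keypoints = dense.detect(img)
keypoints, descriptors = sift.compute(img, keypoints)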
Related
Is there a way I can implement face recognition using OpenCV? I tried to use LBPH and train with one image. It gives a confidence score, but I am not sure how accurate this is to use for verification.
My question is how can I create a face recognition system that tells me how similar the two faces are/if they are the same person or not using OpenCV. It doesn't seem like the confidence score is an accurate measure, if I'm doing this correctly.
Also, is a higher confidence score better?
Thanks
OpenCV 3 currently supports the following algorithms for face recognition:
- Eigenfaces (see createEigenFaceRecognizer())
- Fisherfaces (see createFisherFaceRecognizer())
- Local Binary Patterns Histograms (see createLBPHFaceRecognizer())
The confidence score from these algorithms is a similarity measure between faces, but these methods are really old and perform poorly. I'd suggest you try this article: http://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf
Basically you need to download the trained Caffe model from here: http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_caffe.tar.gz
Use OpenCV to run this network as shown in this example:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html#gsc.tab=0
Then collect the fc8 feature layer (4096 floats) from the Caffe network, and compute your similarity as the L2 norm between the two fc8 vectors computed for your faces.
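A rough sketch of that distance computation in Python, assuming an OpenCV build (3.3+) where cv2.dnn can read Caffe models; the file names, input size, mean values and output layer name are assumptions you will need to adapt to the VGG Face model:

import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("VGG_FACE_deploy.prototxt", "VGG_FACE.caffemodel")

def face_descriptor(path):
    # faces should be detected and cropped beforehand; here the whole image is used
    img = cv2.imread(path)
    blob = cv2.dnn.blobFromImage(img, 1.0, (224, 224), (104.0, 117.0, 123.0))
    net.setInput(blob)
    return net.forward("fc8").flatten()   # feature vector, as suggested above

d1 = face_descriptor("face1.jpg")
d2 = face_descriptor("face2.jpg")
distance = np.linalg.norm(d1 - d2)        # smaller L2 distance = more similar faces
print(distance)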
I'm developing an algorithm to classify different types of dogs based off of image data. The steps of the algorithm are:
Go through all training images, detect image features (e.g. SURF), and extract descriptors. Collect all descriptors for all images.
Cluster the collected image descriptors and find k "words" (centroids) within the collection.
Iterate through all images again, extract SURF descriptors, and match each extracted descriptor with the closest "word" found via clustering.
Represent each image as a histogram of the words found in clustering.
Feed these image representations (feature vectors) to a classifier and train it (a condensed sketch of this pipeline follows).
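In OpenCV terms, a condensed sketch of these steps could look roughly like this (assuming a build where SURF and the BOW helpers are exposed to Python; train_paths is a placeholder for the list of training images):

import cv2

surf = cv2.SURF(400)                     # OpenCV 2.4 nonfree module
bow_trainer = cv2.BOWKMeansTrainer(500)  # k = number of "words"

# steps 1-2: collect descriptors from all training images and cluster them
for path in train_paths:
    img = cv2.imread(path, 0)
    kp, desc = surf.detectAndCompute(img, None)
    bow_trainer.add(desc)
vocabulary = bow_trainer.cluster()

# steps 3-4: represent each image as a histogram over the vocabulary
bow_extractor = cv2.BOWImgDescriptorExtractor(surf, cv2.BFMatcher(cv2.NORM_L2))
bow_extractor.setVocabulary(vocabulary)
histograms = []
for path in train_paths:
    img = cv2.imread(path, 0)
    histograms.append(bow_extractor.compute(img, surf.detect(img)))

# step 5: feed the histograms to a classifier (e.g. an SVM) and train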
Now, I have run into a bit of a problem. Finding the "words" within the collection of image descriptors is a very important step. Due to the random nature of clustering, different clusters are found each time I run my program. The unfortunate result is that sometimes the accuracy of my classifier will be very good, and other times, very bad. I have chalked this up to the clustering algorithm finding "good" words sometimes, and "bad" words other times.
Does anyone know how I can prevent the clustering algorithm from finding "bad" words? Currently I just cluster several times and take the mean accuracy of my classifier, but there must be a better way.
Thanks for taking the time to read through this, and thank you for your help!
EDIT:
I am not using KMeans for classification; I am using a Support Vector Machine for classification. I am using KMeans for finding image descriptor "words", and then using these words to create histograms which describe each image. These histograms serve as feature vectors that are fed to the Support Vector Machine for classification.
There are many possible ways of making clustering repeatable:
The most basic way of dealing with k-means randomness is simply running it multiple times and selecting the best result (the one that minimizes the within-cluster distances / maximizes the between-cluster distance); see the sketch after this list.
You can use a fixed initialization for your data instead of a random one. There are many heuristics for initializing k-means, or you can at least reduce the variance by using an algorithm like k-means++.
Use a modification of k-means that guarantees a global minimum of a regularized objective, e.g. convex k-means.
Use a different clustering method that is deterministic, e.g. Data Nets.
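A minimal sketch of the first point (repeated restarts, keeping the run with the lowest within-cluster distance), using scikit-learn's KMeans; the random matrix is a stand-in for your real SURF descriptors:

import numpy as np
from sklearn.cluster import KMeans

descriptors = np.random.rand(5000, 64)   # stand-in for the collected SURF descriptors
best_model, best_inertia = None, np.inf

for seed in range(10):                   # several random restarts
    km = KMeans(n_clusters=100, n_init=1, random_state=seed).fit(descriptors)
    if km.inertia_ < best_inertia:       # inertia = sum of within-cluster distances
        best_model, best_inertia = km, km.inertia_

words = best_model.cluster_centers_      # the visual vocabulary

Note that KMeans's own n_init parameter does this restarting internally, and init="k-means++" (the default) addresses the second point.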
I would offer two possible suggestions, in addition to those provided.
K-means optimises an objective related to the distance between cluster points and their centroids, whereas you care about classification accuracy. Depending on the computational cost, a simple brute-force approach is to induce multiple clusterings on a subset of your training data and evaluate the performance of each on some held-out development set for the task you care about, then use the highest performing variant as the final model. I don't like the use of non-random initialisation, because it only avoids the randomness rather than finding the true global minimum of the objective, and your chosen initialisation may be useless and just produce consistently bad classifiers.
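A hedged sketch of that brute-force selection, picking the vocabulary whose downstream classifier does best on a held-out set; all_descriptors, build_histograms() and the train/dev splits are placeholders for your own code:

from sklearn.cluster import KMeans
from sklearn.svm import SVC

best_vocab, best_acc = None, 0.0
for seed in range(5):                                 # several candidate clusterings
    km = KMeans(n_clusters=100, random_state=seed).fit(all_descriptors)
    vocab = km.cluster_centers_
    X_train = build_histograms(train_images, vocab)   # placeholder BoW encoding step
    X_dev = build_histograms(dev_images, vocab)
    clf = SVC().fit(X_train, y_train)
    acc = clf.score(X_dev, y_dev)                     # select on the metric you care about
    if acc > best_acc:
        best_vocab, best_acc = vocab, acc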
The other approach, which is much harder, is to view the k-means step as a dimensionality reduction to enable classification, and incorporate this into the classifier directly. If you use a deep neural net, the layer(s) closest to the input are essentially dimensionality reducers in the same way as the k-means clustering you induce: the difference is their weights are set wrt the error of the net on the classification problem, rather than some unrelated intermediate step. The downside is that this is much closer to a current research problem: training deep nets is hard. You could start with a standard one-hidden-layer architecture (with binary activations on the hidden layer, and using cross-entropy loss on the output layer with outputs coded as one-of-n categories), and attempt to add layers incrementally, but as far as I'm aware standard training algorithms start to behave poorly beyond the single hidden layer, so you'd need to investigate layer-wise training to initialise, or some of the Hessian-Free stuff coming out of Geoff Hinton's group in Toronto.
That is actually an important problem with the bag-of-words approach, and one worth stating prominently. SIFT data may actually not have k-means clusters at all. However, due to the nature of the algorithm, k-means will always produce k clusters. One of the things to test with k-means is to validate that the results are stable: if you get a completely different result each time, they are not much better than random.
Nevertheless, if you just want to get some working results, you can fix the dictionary once and choose one that works well.
Or you might look into more advanced clustering (in particular one that is more robust wrt. noise!)
I am new to HOG. I am using OpenCV 2.4.4 and Visual Studio 2010, and I am running the sample peopledetect.cpp from the package; it compiles and runs, but I want to understand the source code in detail. In peopledetect.cpp, is the HOG descriptor constructed and already trained for people detection, with 3780-dimensional vectors fed into an SVM classifier? When I try to debug peopledetect.cpp, I can only find that HOGDescriptor creates the HOG descriptor and detector; I basically don't understand what the HOGDescriptor API does. As far as I can see, peopledetect.cpp doesn't go through the steps of HOG processing, it just loads already-trained vectors into the SVM classifier to decide people / no people. Am I wrong? There is no documentation about this.
Can anyone please explain this briefly?
The implementation of the people detection algorithm in OpenCV is based on HOG descriptors as features and an SVM as the classifier.
1. A training database (positive samples = person, negative samples = non-person) is used to learn the SVM parameters (it computes and stores the support vectors). Cross-validation is also performed (I assume) to optimize the soft-margin parameter C and the kernel parameters (it could be a linear kernel).
2. To detect people in test video data, peopledetect.cpp loads the pre-learnt SVM, computes the HOG descriptors at different positions and scales, then merges the windows with high detection scores (outputs of the binary SVM classifier).
Here is a good paper (INRIA) to start with.
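For orientation, the same pre-trained detector can be run in a few lines of Python; the parameter values mirror the defaults used by peopledetect.cpp:

import cv2

hog = cv2.HOGDescriptor()   # default 64x128 people-detection window
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("pedestrians.jpg")
found, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(32, 32), scale=1.05)
for (x, y, w, h) in found:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)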
Coming to a clearer answer: peopledetect.cpp does go through all the HOG steps.
Digging deeper made this clearer to me. Basically, if you debug it, peopledetect.cpp goes through these steps.
Initially the image is divided into several scales; scale0 (1.05) is the coefficient by which the detection window grows between levels. For each scale of the image, features are extracted from the window and a classifier window is run, following the scale-space pyramid method as above. So it is a pretty big and very expensive computational process, and the OpenCV team has tried to parallelise it over the scales.
I was baffled before about why I was not able to debug / step through the code: parallel_for_(Range(0, (int)levelScale.size()), HOGInvoker()) creates several threads, where each thread works on one scale (how many threads are created depends on the machine).
Because of this I was not able to debug, so what I did was freeze all the threads and debug only the main thread. For each scale of the image the HOG processing steps are carried out as follows.
Here in peopledetect.cpp the HOG computation and the classifier window are kind of combined: in a single window (64x128) both feature extraction and running the classifier take place. After this is done for each scale of the image, a number of pedestrian windows of different scales are often associated with the same region; these are grouped using the groupRectangles() function.
Training the SVM consists of finding the parameters of the maximum margin between positive and negative samples.
If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right?
Yes. These coefficients are extracted from training databases, which you don't have. The SVM stores only the support vectors, which are sufficient to characterise the margin. See the dual form of the linear SVM, for example.
a number of pedestrian windows of different scales are often associated with the region
True. A merging function is applied. Different methods (such as groupRectangles(..)) are available (see here) and take as arguments the parameters given to detectMultiScale(..).
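A small illustration of that merging step (the rectangles are made up):

import cv2

# three overlapping hypotheses for the same person plus one isolated detection
rects = [[10, 10, 64, 128], [12, 8, 64, 128], [9, 11, 64, 128], [300, 40, 64, 128]]
grouped, weights = cv2.groupRectangles(rects, 1, 0.2)
print(grouped)   # the overlapping cluster is merged into one box, the isolated detection is dropped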
What I understood from different papers is that HOG feature extraction is done on several positive and negative images, and the extracted features are fed to a linear SVM to train it. So peopledetect.cpp uses this trained linear SVM; the feature extraction and training are not done by peopledetect.cpp itself, i.e. HOGDescriptor::getDefaultPeopleDetector() consists of the coefficients of the classifier trained for people detection. The features extracted from a HOG detection window (64x128) give a total length of 3780 (4 cells x 9 bins x 7 x 15 blocks = 3780), and these features are then used to train a linear SVM classifier. If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right? How do we get these coefficients?
But the HOG descriptors are known to contain redundant information because of the different detection window sizes used. So when the SVM classifier classifies a region as "pedestrian", a number of pedestrian windows of different scales are often associated with that region. What peopledetect.cpp mainly does is (hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);) scan the detection window across the image at all positions and scales, then run conventional non-maximum suppression on the output pyramid to detect object instances.
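The 3780 figure can be sanity-checked directly, since a default-constructed HOGDescriptor uses exactly the 64x128 people-detection window described above:

import cv2

hog = cv2.HOGDescriptor()          # default people-detection parameters
print(hog.getDescriptorSize())     # 3780 = 4 cells x 9 bins x 7 x 15 blocks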
I got a memory error when I was running the DBSCAN algorithm from scikit-learn.
My data is about 20000 x 10000; it's a binary matrix.
(Maybe it's not suitable to use DBSCAN with such a matrix. I'm a beginner in machine learning. I just want to find a clustering method that doesn't need an initial cluster number.)
Anyway, I found the sparse matrix and feature extraction modules of scikit-learn.
http://scikit-learn.org/dev/modules/feature_extraction.html
http://docs.scipy.org/doc/scipy/reference/sparse.html
But I still have no idea how to use them. In DBSCAN's documentation, there is no mention of using a sparse matrix. Is it not allowed?
If anyone knows how to use a sparse matrix in DBSCAN, please tell me.
Or you can tell me a more suitable clustering method.
The scikit implementation of DBSCAN is, unfortunately, very naive. It needs to be rewritten to take indexing (ball trees etc.) into account.
As of now, it will apparently insist on computing a complete distance matrix, which wastes a lot of memory.
May I suggest that you just reimplement DBSCAN yourself? It's fairly easy; there is good pseudocode, e.g. on Wikipedia and in the original publication. It should be just a few lines, and you can then easily take advantage of your data representation. E.g. if you already have a similarity graph in a sparse representation, it's usually fairly trivial to do a "range query" (i.e. use only the edges that satisfy your distance threshold).
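A compact sketch of that idea, using scikit-learn's radius_neighbors_graph as the sparse "range query" (it never builds a dense distance matrix); eps, min_pts and the random test matrix are placeholders:

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.neighbors import radius_neighbors_graph

def dbscan_sparse(X, eps, min_pts):
    # one sparse range query per point: neighbors[i] lists the points within eps of i
    graph = radius_neighbors_graph(X, eps, mode="connectivity", include_self=True).tolil()
    neighbors = graph.rows
    labels = np.full(X.shape[0], -1)            # -1 = noise / not yet assigned
    cluster = 0
    for i in range(X.shape[0]):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                            # already assigned, or not a core point
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:                            # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

X = sparse_random(2000, 100, density=0.05, format="csr", random_state=0)
print(dbscan_sparse(X, eps=1.0, min_pts=5))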
Here is an issue on the scikit-learn GitHub where they talk about improving the implementation. A user reports that his version using the ball tree is 50x faster (which doesn't surprise me; I've seen similar speedups with indexes before, and it will likely become more pronounced as the data set size grows further).
Update: the DBSCAN version in scikit-learn has received substantial improvements since this answer was written.
You can pass a distance matrix to DBSCAN, so assuming X is your sample matrix, the following should work:
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import euclidean_distances

D = euclidean_distances(X, X)
db = DBSCAN(metric="precomputed").fit(D)
However, the matrix D will be even larger than X: n_samples² entries. With sparse matrices, k-means is probably the best option.
(DBSCAN may seem attractive because it doesn't need a pre-determined number of clusters, but it trades that for two parameters that you have to tune. It's mostly applicable in settings where the samples are points in space and you know how close you want those points to be to be in the same cluster, or when you have a black box distance metric that scikit-learn doesn't support.)
Yes, since version 0.16.1.
Here's a commit for a test:
https://github.com/scikit-learn/scikit-learn/commit/494b8e574337e510bcb6fd0c941e390371ef1879
Sklearn's DBSCAN algorithm doesn't take sparse arrays. However, KMeans and spectral clustering do, so you can try these. More on sklearn's clustering methods: http://scikit-learn.org/stable/modules/clustering.html
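For example, MiniBatchKMeans (a memory-friendly k-means variant) happily accepts a scipy CSR matrix; the random matrix below just mimics a 20000 x 10000 binary matrix:

import scipy.sparse as sp
from sklearn.cluster import MiniBatchKMeans

# random sparse binary matrix standing in for the real data
X = sp.random(20000, 10000, density=0.01, format="csr", random_state=0)
X.data[:] = 1.0

km = MiniBatchKMeans(n_clusters=50, random_state=0).fit(X)
print(km.labels_[:20])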
I am trying to use FREAK in OpenCV to detect features and extract descriptors, then build my BOW vocabulary and, for each image, use the vocabulary to match with BOW. You know, the whole thing. I know BOW can be used with other descriptors like SIFT or SURF, but it is not clear to me whether FREAK descriptors, which are binary, can be used with BOW. More specifically, when OpenCV builds a BOW vocabulary, it uses k-means clustering, and it is not clear to me what distance function the k-means algorithm uses. For binary descriptors like FREAK, Hamming distance seems to be the only choice.
It looks to me like OpenCV's k-means only uses Euclidean distance when calculating distances, bummer. Looks like I have to build my own k-means and my own vocabulary matching. Any smart people out there know a workaround?
Thanks!
I read in a paper that FREAK is not easy to use. Here is the excerpt from the paper: "....These algorithms cannot be easily used in many retrieval algorithms because they must be compared with a Hamming distance, which is not easily adapted to accelerated search structures such as vocabulary trees or Approximate Nearest Neighbors (ANN)...."
(ORB, FREAK and BRISK)
FREAK works with locality-sensitive hashing. You can use it with FLANN (Fast Library for Approximate Nearest Neighbors), which is included in OpenCV.
For the BOW, only the first 5, 6, 7, or 8 bytes of the descriptor might be sufficient to construct the tree.
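A small sketch of matching FREAK descriptors through FLANN's LSH index (which works in Hamming space); it assumes OpenCV 2.4.x where FREAK is exposed via the factory functions, and the LSH parameters are typical values, not tuned:

import cv2

img1 = cv2.imread("img1.jpg", 0)
img2 = cv2.imread("img2.jpg", 0)

# detect keypoints with any detector, describe them with FREAK (binary descriptors)
detector = cv2.FastFeatureDetector()
freak = cv2.DescriptorExtractor_create("FREAK")
kp1, des1 = freak.compute(img1, detector.detect(img1))
kp2, des2 = freak.compute(img2, detector.detect(img2))

# FLANN with an LSH index handles binary descriptors with Hamming distance
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1)
matcher = cv2.FlannBasedMatcher(index_params, dict(checks=50))
matches = matcher.knnMatch(des1, des2, k=2)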