I have extracted the features using the open-source OpenCV library.
I have done these steps using these two functions:
SiftFeatureDetector
SiftDescriptorExtractor
From the descriptor extractor I got a 128*128 matrix, and I think I will use
this matrix to train on the features...
What I'm confused about is the following:
when I want to train on the features,
I should use a matrix with one row per feature, where every single row contains the information about that feature... so it might be a matrix of
number of features * 6
For example, I got 344 features in an image, and I got a 128*128 matrix for the descriptor, which is the matrix I need in order to train on my features.
But, as I mentioned, I'm only getting a 128*128 matrix... so what's the problem?
And what exactly should I use to train on later?
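For reference, here is a minimal sketch of those two calls (OpenCV 2.4.x, which needs the nonfree module; the image name is a placeholder). The descriptor Mat should come back with one 128-float row per keypoint, so 344 keypoints would give a 344*128 matrix rather than 128*128:

#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>  // SIFT lives in nonfree in 2.4.x
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("image.png", 0);  // SIFT works on grayscale

    cv::SiftFeatureDetector detector;
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(img, keypoints);

    cv::SiftDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(img, keypoints, descriptors);

    // One row per keypoint, 128 columns per row.
    std::cout << descriptors.rows << " x " << descriptors.cols << std::endl;
    return 0;
}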
Have you looked at the descriptor_extractor_matcher.cpp, or the matcher_simple.cpp samples from OpenCV? Also, could you post the code you are using to detect the features?
Is there a way I can implement face recognition using OpenCV? I tried to use LBPH and train with one image. It gives a confidence score, but I am not sure how accurate that is for verification.
My question is: how can I create a face recognition system with OpenCV that tells me how similar two faces are, or whether they are the same person? The confidence score doesn't seem to be an accurate measure, if I'm doing this correctly.
Also, is a higher confidence score better?
Thanks
OpenCV 3 currently supports the following algorithms for face recognition:
- Eigenfaces (see createEigenFaceRecognizer())
- Fisherfaces (see createFisherFaceRecognizer())
- Local Binary Patterns Histograms (see createLBPHFaceRecognizer())
The confidence score returned by these algorithms is a distance-based similarity measure between faces, so a lower value means a better match, but these methods are really old and perform poorly. I'd suggest you try this article: http://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf
Basically you need to download the trained Caffe model from here: http://www.robots.ox.ac.uk/~vgg/software/vgg_face/src/vgg_face_caffe.tar.gz
Use OpenCV to run this classifier as shown in this example:
http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html#gsc.tab=0
Then collect the fc8 feature layer (4096 floats) from the Caffe network, and compute your similarity as the L2 norm between the two fc8 layers calculated for your faces.
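A minimal sketch of that pipeline, assuming the dnn module from OpenCV 3.3+. The layer name "fc8" comes from the description above; the 224x224 input size and the mean values are my assumptions based on the VGG Face deploy files, so verify them against the downloaded prototxt:

#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

cv::Mat faceDescriptor(cv::dnn::Net& net, const cv::Mat& face)
{
    // VGG nets expect a fixed-size BGR input with the (approximate)
    // training mean subtracted.
    cv::Mat blob = cv::dnn::blobFromImage(face, 1.0, cv::Size(224, 224),
                                          cv::Scalar(93.594, 104.762, 129.186));
    net.setInput(blob);
    return net.forward("fc8").clone();  // feature vector for this face
}

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("VGG_FACE_deploy.prototxt",
                                                 "VGG_FACE.caffemodel");
    cv::Mat f1 = faceDescriptor(net, cv::imread("face1.jpg"));
    cv::Mat f2 = faceDescriptor(net, cv::imread("face2.jpg"));
    // A smaller L2 distance means more similar faces.
    std::cout << "distance = " << cv::norm(f1, f2, cv::NORM_L2) << std::endl;
    return 0;
}

In practice you would detect and crop the faces first, and the distance threshold for deciding "same person" has to be tuned on your own data.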
Hello everybody. I am doing a project that consists of detecting objects using a Kinect, with SVM and ANN machine learning. If possible, could you give me the names of SVM and ANN libraries that come with a graphical tool? I only want to train the ANN with that library, save it as .xml, and then load the .xml with OpenCV!
SVM is a classifier that classifies samples based on their feature vectors. So, your task is to convert the images into feature vectors that the SVM can use for training and testing.
OK, to create feature vectors from your images there are several possibilities, and I am going to mention some very common techniques:
A very easy method is to create a normalized hue histogram of each of your images. Let's say you have created a hue histogram with 5 bins. Depending on your image's colors there will be some values in these 5 bins, say { 0.32 0.56 0 0 0.12 }. This is then one input vector with 5 dimensions (i.e. the number of bins). You do the same procedure for all training samples, and later for each test image too (a code sketch for this follows below).
Extract some features from your input samples (e.g. using SIFT or SURF), compute their descriptors, and then use these descriptors as the input to your SVM for training.
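As a sketch of the first technique, computing a normalized 5-bin hue histogram with cv::calcHist might look like this (the bin count and file name are just for illustration):

#include <opencv2/opencv.hpp>

cv::Mat hueHistogram(const cv::Mat& bgr, int bins)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    int channels[] = {0};               // hue channel only
    int histSize[] = {bins};
    float hueRange[] = {0, 180};        // OpenCV stores hue in [0, 180)
    const float* ranges[] = {hueRange};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);  // make the bins sum to 1
    return hist.reshape(1, 1);          // one row = one SVM sample
}

int main()
{
    cv::Mat img = cv::imread("sample.png");
    cv::Mat featureRow = hueHistogram(img, 5);  // e.g. { 0.32 0.56 0 0 0.12 }
    return 0;
}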
I am new to HOG. I am using OpenCV 2.4.4 and Visual Studio 2010; I am running the sample peopledetect.cpp from the package and it compiles and runs, but I want to understand the source code in detail.
In peopledetect.cpp, is the HOG descriptor constructed/already trained for people detection, i.e. are 3780 vectors fed into the SVM classifier? When I try to debug peopledetect.cpp, I can only find that HOGDescriptor creates the HOG descriptor and detector. I basically don't understand what this HOGDescriptor API does, as I see that peopledetect.cpp doesn't go through the steps of HOG processing; it loads the already trained vectors into the SVM classifier to detect people/no people. Am I wrong? There is no documentation about this.
Can anyone please explain this briefly?
The implementation of the people detection algorithm in OpenCV is based on HOG descriptors as features and an SVM as the classifier.
1. A training database (positive samples of people, negative samples of non-people) is used to learn the SVM parameters (it computes and stores the support vectors). Cross-validation is also performed (I assume) to optimize the soft-margin parameter C and the kernel parameters (it could be a linear kernel).
2. To detect people in test video data, peopledetect.cpp loads the pre-learnt SVM, computes HOG descriptors at different positions and scales, and then merges the windows with high detection scores (outputs of the binary SVM classifier).
Here is a good paper (INRIA) to start with.
To give a clearer answer: peopledetect.cpp does go through all the HOG steps.
Digging deeper made this much clearer to me. Basically, if you debug peopledetect.cpp, it goes through these steps.
Initially the image is divided into several scales; scale0 (1.05) is the coefficient for the detection window increase between scales. For each scale of the image, features are extracted from a window and a classifier window is run over it, following the scale-space pyramid method as above. So it is a pretty big and computationally very expensive process, which is why the OpenCV team has tried to parallelise it across scales.
I was baffled before about why I was not able to debug/step through these stages. The call parallel_for_(Range(0, (int)levelScale.size()), HOGInvoker()) creates several threads, where each thread works on one scale (depending on how many threads are created).
Because of this I was not able to debug; what I did was freeze all the threads and debug only the main thread. The HOG processing steps are then repeated for the different scales of the image.
Here in peopledetect.cpp the HOG and classifier windows are effectively combined: in a single window (64x128) both feature extraction and running the classifier take place, and this is done for each scale of the image. A number of pedestrian windows at different scales are often associated with the same region; these are grouped using the groupRectangles() function.
Training an SVM consists of finding the parameters of the maximum margin between positive and negative samples.
If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right?
Yes. These coefficients are extracted from training databases; you don't have the databases themselves. The SVM stores only the support vectors, which are sufficient to characterise the margin. See the dual form of the linear SVM, for example.
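To spell out that dual form (standard SVM math, not anything OpenCV-specific): only the support vectors get non-zero weights \alpha_i, and for a linear kernel they collapse into a single weight vector,

w = \sum_{i \in \mathrm{SV}} \alpha_i y_i x_i, \qquad f(x) = \operatorname{sign}(w^\top x + b)

so however many millions of training features went in, the stored model for the 64x128 window is just the 3780-dimensional w plus the bias b, which is essentially what HOGDescriptor::getDefaultPeopleDetector() returns.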
a number of pedestrian windows of different scales are often associated with the region
True. A merging function is applied. Different methods (such as groupRectangles(..)) are available (see here) and take as arguments the parameters given to detectMultiScale(..).
What I understood from different papers is that feature extraction using HOG is done on several positive and negative images, and the extracted features are fed to a linear SVM to train it. So peopledetect.cpp uses this pre-trained linear SVM; the feature extraction process is not done by peopledetect.cpp itself, i.e. HOGDescriptor::getDefaultPeopleDetector() consists of the coefficients of the classifier trained for people detection. The features extracted from one HOG detection window (64x128) give a total length of 3780 (4 cells x 9 bins x 7 x 15 blocks = 3780), and these features are then used to train the linear SVM classifier. If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right? How do we get these coefficients?
But the HOG descriptors are known to contain redundant information because of the different detection window sizes being used. So when the SVM classifier classifies a region as "pedestrian", a number of pedestrian windows of different scales are often associated with that region. What peopledetect.cpp mainly does is call hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2); the detection window is scanned across the image at all positions and scales, and conventional non-maximum suppression is run on the output pyramid to detect object instances.
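Condensed into a few lines, that is essentially all peopledetect.cpp does; here is a sketch with the same parameters quoted above (the file names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::HOGDescriptor hog;  // default 64x128 people-detection window
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::Mat img = cv::imread("street.jpg");
    std::vector<cv::Rect> found;
    // winStride 8x8, padding 32x32, scale step 1.05, grouping threshold 2.
    hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

    // Overlapping windows from different scales have already been merged
    // internally (groupRectangles-style grouping).
    for (size_t i = 0; i < found.size(); i++)
        cv::rectangle(img, found[i], cv::Scalar(0, 255, 0), 2);
    cv::imwrite("detections.jpg", img);
    return 0;
}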
I want to train on my own data and use the HOG algorithm to detect pedestrians.
Right now I can use defaultHog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector()); in OpenCV for detection, but the results are not very good on my test video. So I want to train on my own database.
I have prepared 1000+ positive samples and 1000+ negative samples. They are cropped to size 50 * 100, and I have made the list files.
I have read some tutorials on the internet, but they are all complex, sometimes abstruse. Most of them analyze the source code and the algorithm of HOG, with only a few examples and shallow analysis.
Some instructions show that libsvm\windows\svm-train.exe can be used for training. Can anyone give an example based on 1000+ positive samples of size 50*100?
For example, as with haartraining, can we do it from OpenCV, i.e. run haartraining.exe -a -b with some parameters and get a *.xml as a result, which is then used for people detection?
Or is there any other method for training and detection?
I prefer to know how to use it and the detailed procedure; the details of the algorithm are not important to me, I just want to implement it.
If anyone knows about this, please give me some tips.
I provided some sample code and instructions to start training your own HOG descriptor using OpenCV:
See https://github.com/DaHoC/trainHOG/wiki/trainHOG-Tutorial.
The algorithm is indeed too complex to explain in brief; the basic idea, however, is to (a rough code sketch of the first two steps follows after the list):
Extract HOG features from negative and positive sample images of identical size and type.
Use the extracted feature vectors along with their respective classes to train an SVM classifier. In this step you can use svm-train.exe with a generated file of the correct format containing the feature vectors and their classes (or directly include and call the libsvm library in your sources).
The resulting SVM model and support vectors are collapsed into a single descriptor vector that can be used with the OpenCV detector.
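As a rough sketch of steps 1 and 2 using OpenCV's own CvSVM instead of svm-train.exe (the directory layout, file name patterns, and sample counts here are made up; turning the trained model into the single detector vector for setSVMDetector is the extra step covered in the tutorial linked above):

#include <opencv2/opencv.hpp>
#include <vector>

// Compute one HOG feature row for a sample already cropped to the window size.
static cv::Mat hogRow(const cv::HOGDescriptor& hog, const cv::Mat& img)
{
    std::vector<float> desc;
    hog.compute(img, desc);
    return cv::Mat(desc, true).reshape(1, 1);  // deep copy as a 1 x N row
}

static void addSamples(const cv::HOGDescriptor& hog, const char* pattern,
                       float label, cv::Mat& trainData, cv::Mat& labels)
{
    for (int i = 0; i < 1000; i++)  // adjust to your sample count
    {
        cv::Mat img = cv::imread(cv::format(pattern, i), 0);
        if (img.empty())
            continue;
        cv::resize(img, img, hog.winSize);  // samples must match the HOG window
        trainData.push_back(hogRow(hog, img));
        labels.push_back(label);
    }
}

int main()
{
    // 64x128 window, 16x16 blocks, 8x8 block stride, 8x8 cells, 9 bins.
    cv::HOGDescriptor hog(cv::Size(64, 128), cv::Size(16, 16),
                          cv::Size(8, 8), cv::Size(8, 8), 9);

    cv::Mat trainData, labels;
    addSamples(hog, "pos/%04d.png", 1.0f, trainData, labels);   // positives
    addSamples(hog, "neg/%04d.png", -1.0f, trainData, labels);  // negatives

    CvSVMParams params;
    params.svm_type = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 10000, 1e-6);

    CvSVM svm;
    svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);
    svm.save("hog_svm.xml");  // reload later with svm.load("hog_svm.xml")
    return 0;
}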
Best regards
I am using CvSVM to classify only two types of facial expressions. I used LBP (Local Binary Pattern) based histograms to extract features from the images, and trained using CvSVM::train(data_mat, labels_mat, Mat(), Mat(), params), where:
data_mat is of size 200x3452, containing the normalized (0-1) feature histograms of 200 samples in row-major form, with 3452 features each (depending on the number of neighbourhood points)
labels_mat is the corresponding label matrix, containing only the two values 0 and 1.
The parameters are:
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.C = 0.01;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, (int)1e7, 1e-7);
The problems are:
while testing I get very bad results (around 10%-30% accuracy), even after trying different kernels and the train_auto() function;
CvSVM::predict(test_data_mat, true) gives 'NaN' output.
I will greatly appreciate any help with this; it's got me stumped.
I suppose that your classes are hard to separate (or non-separable) linearly in the feature space you are using.
It may be better to apply PCA to your dataset before the classifier training step,
and to estimate the effective dimensionality of the problem.
Also, I think it would be useful to test your dataset with other classifiers.
You can adapt the standard OpenCV example points_classifier.cpp for this purpose.
It includes a lot of different classifiers with a similar interface that you can play with.
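A short sketch of the PCA suggestion with the 2.x API used in the question (the random matrix just stands in for your 200x3452 data_mat, and 50 components is an arbitrary starting point):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Stand-in for the 200 x 3452 LBP-histogram matrix from the question.
    cv::Mat data_mat(200, 3452, CV_32F);
    cv::randu(data_mat, cv::Scalar(0), cv::Scalar(1));

    // Keep the 50 leading principal components; with only 200 samples the
    // effective dimensionality cannot exceed 200 anyway.
    cv::PCA pca(data_mat, cv::Mat(), CV_PCA_DATA_AS_ROW, 50);
    cv::Mat reduced = pca.project(data_mat);
    std::cout << "reduced: " << reduced.rows << " x " << reduced.cols << std::endl;

    // Train the SVM on 'reduced' instead of 'data_mat', and remember to
    // pca.project() every test histogram before calling predict().
    return 0;
}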
The SVM's generalization power is low. First reduce your data dimensionality by principal component analysis, then change your SVM kernel type to RBF.