I have been working on training a pedestrian detection classifier based on HOG features. So far I have done the following:
a) Extracted HOG features of all files, i.e. positive and negative, and saved those features with a label, i.e. +1 for positive and -1 for negative, in a file.
b) Downloaded svmlight and extracted the binaries, i.e. svm_learn and svm_classify.
c) Passed the training file (the features file) to the svm_learn binary, which produced a model file for me.
d) Passed the test file to the svm_classify binary and got the results in a predictions file.
Now my question is: what do I do next, and how? I think I know that I now need to use that model file, not the predictions file, in OpenCV to detect pedestrians in video, but somewhere I read that OpenCV uses only one support vector, and I got 295 SVs. So how do I convert the model into the proper format and use it, and are there any further compulsory steps?
I do appreciate your kindness!
It is not true that OpenCV (presumably you are talking about CvSVM) uses only one support vector. As pointed out by QED, what OpenCV does do is optimize a linear SVM down to one support vector. I think the idea here is that the support vectors define the classification margin, but to do the actual classification only the separating hyperplane is needed, and that can be defined by a single vector.
Since you have an svmlight model file, and CvSVM can't read that, you have the following options:
train a CvSVM and save the model as a CvStatModel file, which you can load later to get the support vectors (see the sketch below).
write some code to convert an svmlight model file into a CvStatModel file (but for this you have to understand both formats).
get the source for svmlight, specifically the part that reads the model file, and integrate it into your OpenCV application.
You may use LIBSVM instead, but really you are then faced with the same problems as svmlight.
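For the first option, a minimal sketch using the OpenCV 2.4 CvSVM API (the file name and matrix contents here are placeholders):

    #include <opencv2/core/core.hpp>
    #include <opencv2/ml/ml.hpp>

    // trainData: one HOG feature vector per row (CV_32FC1),
    // labels: one +1.0f / -1.0f response per row (CV_32FC1)
    void trainAndSaveSvm(const cv::Mat& trainData, const cv::Mat& labels)
    {
        CvSVMParams params;
        params.svm_type    = CvSVM::C_SVC;
        params.kernel_type = CvSVM::LINEAR;
        params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 10000, 1e-6);

        CvSVM svm;
        svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);
        svm.save("pedestrian_svm.xml"); // reload later with svm.load(...)
    }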
For ideas on how to convert the support vectors so that you can use them with the HOG detector, see Training custom SVM to use with HOGDescriptor in OpenCV.
Hello everybody. I am doing a project that consists of detecting objects using a Kinect together with SVM and ANN machine learning. If possible, I would like the names of SVM and ANN libraries that come with a graphical tool, because I only want to train the ANN with that library, save it as .xml, and then load the .xml with OpenCV!
An SVM is a classifier used to classify samples based upon their feature vectors. So your task is to convert the images into feature vectors that can be used by the SVM for its training and testing.
OK, to create feature vectors from your images there are several possibilities, and I am going to mention some very common techniques:
A very easy method is to create a normalized hue histogram of each of your images. Let's say you have created a hue histogram with 5 bins. Depending on your image's colors, there will be some values in these 5 bins, say { 0.32 0.56 0 0 0.12 }. This is now one input vector with 5 dimensions (i.e. the number of bins). You have to do the same procedure for all training samples, and then for the test image too.
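A minimal sketch of that first technique, assuming the OpenCV 2.4 C++ API (the bin count 5 is just the example's value):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    // Build a normalized 5-bin hue histogram as one training vector.
    cv::Mat hueHistogram(const cv::Mat& bgr)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, CV_BGR2HSV);

        int histSize = 5;                    // number of bins
        float hueRange[] = { 0, 180 };       // hue range for 8-bit HSV in OpenCV
        const float* ranges[] = { hueRange };
        int channels[] = { 0 };              // hue channel only

        cv::Mat hist;
        cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
        cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1); // bins sum to 1
        return hist;                         // 5x1 CV_32F, e.g. {0.32 0.56 0 0 0.12}
    }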
Extract some features from your input samples (e.g. using SIFT or SURF) and then create their descriptors with SIFT/SURF. You can then use these descriptors as the input to your SVM for training.
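And a sketch of the SIFT variant (assuming OpenCV 2.4, where SIFT lives in the nonfree module):

    #include <opencv2/core/core.hpp>
    #include <opencv2/nonfree/features2d.hpp> // SIFT is in nonfree in OpenCV 2.4

    // Detect keypoints and compute their SIFT descriptors.
    cv::Mat siftDescriptors(const cv::Mat& img)
    {
        cv::SiftFeatureDetector detector;
        cv::SiftDescriptorExtractor extractor;

        std::vector<cv::KeyPoint> keypoints;
        detector.detect(img, keypoints);

        cv::Mat descriptors; // N x 128: one 128-dimensional row per keypoint
        extractor.compute(img, keypoints, descriptors);
        return descriptors;
    }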
I am new to HOG. I am using OpenCV 2.4.4 and Visual Studio 2010. I am running the sample peopledetect.cpp from the package, and it compiles and runs, but I want to understand the source code in detail. In peopledetect.cpp, is the HOG descriptor constructed/already trained for people detection, with 3780 vectors fed into the SVM classifier? When I try to debug peopledetect.cpp, I can only find that HOGDescriptor creates the HOG descriptor and detector; I basically don't understand what the HOGDescriptor API does. As far as I can see, peopledetect.cpp doesn't go through the steps of HOG processing; it loads already-trained vectors into the SVM classifier to detect people/no people. Am I wrong? There is no documentation about this.
Can anyone please explain this briefly?
The implementation of the people detection algorithm in OpenCV is based on HOG descriptors as features and an SVM as classifier.
1. A training database (positive samples of people, negative samples of non-people) is used to learn the SVM parameters (it computes and stores the support vectors). Cross-validation is also performed (I assume) to optimize the soft-margin parameter C and the kernel parameters (it could be a linear kernel).
2. To detect people in test video data, peopledetect.cpp loads the pre-learnt SVM, computes the HOG descriptors at different positions and scales, and then merges the windows with high detection scores (outputs of the binary SVM classifier); see the sketch below.
Here is a good paper (inria) to start with.
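To make step 2 concrete, here is a minimal sketch of the detection side with the OpenCV 2.4 API (the image file names are placeholders; the detectMultiScale arguments are the ones peopledetect.cpp uses):

    #include <opencv2/objdetect/objdetect.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/core/core.hpp>

    int main()
    {
        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        cv::Mat img = cv::imread("frame.png"); // placeholder test frame

        std::vector<cv::Rect> found;
        // win stride 8x8, padding 32x32, scale step 1.05, grouping threshold 2
        hog.detectMultiScale(img, found, 0, cv::Size(8,8), cv::Size(32,32), 1.05, 2);

        for (size_t i = 0; i < found.size(); ++i)
            cv::rectangle(img, found[i], cv::Scalar(0, 255, 0), 2);
        cv::imwrite("detections.png", img);
        return 0;
    }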
Coming to a clearer answer: peopledetect.cpp does go through all the HOG steps.
Digging deeper made this clearer to me. Basically, if you debug peopledetect.cpp, it goes through these steps.
Initially the image is divided into several scales; scale0 (1.05) is the coefficient by which the detection window is increased. For each scale of the image, features are extracted from a window and a classifier window is run; as above, it follows the scale-space pyramid method. So it is a pretty big, very expensive computational process, and the OpenCV team has tried to parallelise it across scales.
I was baffled before about why I was not able to debug/step through the code. This parallel_for_(Range(0, (int)levelScale.size()), HOGInvoker()) creates several threads, where each thread works on one scale, depending on how many threads are created.
Because of this I was not able to debug; what I did was freeze all the threads and debug only the main thread. For the different scales of the image, the HOG processing steps are as follows.
Here in peopledetect.cpp the HOG and classifier windows are more or less combined: in a single window (64x128), both feature extraction and running the classifier take place. After this is done for each scale of the image, a number of pedestrian windows of different scales are often associated with a region; these are grouped using the groupRectangles() function.
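For illustration, the grouping step on its own (a minimal sketch; the raw detections are assumed to come from the per-scale classifier runs):

    std::vector<cv::Rect> found; // raw detections collected across all scales
    // Merge overlapping windows: keep clusters of at least 2 rectangles,
    // with a relative position/size tolerance (eps) of 0.2.
    cv::groupRectangles(found, 2, 0.2);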
Training an SVM consists of finding the parameters of the maximum margin between the positive and negative samples.
If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right?
Yes. These coefficients are extracted from training databases. You don't have them. The SVM stores only the support vectors, which are sufficient to characterise the margin. See the dual form of the linear SVM, for example.
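Concretely, in the dual form the separating hyperplane is built from the support vectors alone:

    w = \sum_{i \in \mathrm{SV}} \alpha_i y_i x_i, \qquad f(x) = \operatorname{sign}(w^\top x + b)

so only the training points x_i with non-zero \alpha_i (the support vectors) ever need to be stored.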
a number of pedestrian windows of different scales are often associated with the region
True. A merging function is applied. Different methods (such as groupRectangles(..)) are available (see here) and take as arguments the parameters passed to detectMultiScale(..).
What I understood from different papers is that feature extraction using HOG is done on several positive and negative images, and these extracted features are fed to a linear SVM to train it. So peopledetect.cpp uses this trained linear SVM; the feature extraction for training is not done by peopledetect.cpp, i.e. HOGDescriptor::getDefaultPeopleDetector() consists of the coefficients of the classifier trained for people detection. The features extracted from a HOG detection window (64x128) give a total of 3780 values (4 cells x 9 bins x 7 x 15 blocks = 3780). These features are then used to train a linear SVM classifier. If the same feature extraction is done for 1000+ negative and positive samples, there must be millions of features, right? How do we get these coefficients?
But HOG descriptors are known to contain redundant information because of the different detection window sizes being used. So when the SVM classifier classifies a region as "pedestrian", a number of pedestrian windows of different scales are often associated with that region. What peopledetect.cpp mainly does is hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2); the detection window is scanned across the image at all positions and scales, and conventional non-maximum suppression is run on the output pyramid to detect object instances.
I want to train on my own data and use the HOG algorithm to detect pedestrians.
Right now I can use defaultHog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector()); in OpenCV for detection, but the results on my test video are not very good. So I want to train on my own database.
I have prepared 1000+ positive samples and 1000+ negative samples. They are cropped to size 50 x 100, and I have made the list files.
I have read some tutorials on the internet, but they are all so complex, sometimes abstruse. Most of them analyze the source code and the algorithm of HOG, with only a few examples and little plain explanation.
Some instructions show that libsvm\windows\svm-train.exe can be used for training. Can anyone give an example based on 1000+ positive 50x100 samples?
For example, like haartraining, can we do it from OpenCV, e.g. run haartraining.exe –a –b with some parameters and get an *.xml as a result, which will then be used for people detection?
Or is there any other method for training and detection?
I would prefer to know how to use it and the detailed procedure. The details of the algorithm are not important to me; I just want to implement it.
If anyone knows about it, please give me some tips.
I provided some sample code and instructions to start training your own HOG descriptor using OpenCV:
See https://github.com/DaHoC/trainHOG/wiki/trainHOG-Tutorial.
The algorithm is indeed too complex to describe briefly; the basic idea, however, is to:
Extract HOG features from negative and positive sample images of identical size and type.
Use the extracted feature vectors along with their respective classes to train an SVM classifier. In this step you can use svm-train.exe with a generated file of the correct format containing the feature vectors and their classes (or directly include and call the libsvm library in your sources).
The resulting SVM model and support vectors are collapsed into a single descriptor vector that can be used with the OpenCV detector, as shown in the sketch below.
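A minimal sketch of that last step, assuming you have already parsed the support vectors, the products alpha_i * y_i, and the bias out of your trained model file (all names here are placeholders, and sign conventions differ between SVM libraries, so verify against yours):

    // Placeholders parsed from the model file:
    //   std::vector<std::vector<double> > sv; // support vectors
    //   std::vector<double> alphaY;           // alpha_i * y_i per support vector
    //   double b;                             // bias / rho

    // Collapse a trained linear SVM into the single detecting vector
    // expected by cv::HOGDescriptor::setSVMDetector().
    size_t dim = sv[0].size(); // feature dimension, e.g. 3780
    std::vector<float> detector(dim + 1, 0.f);
    for (size_t i = 0; i < sv.size(); ++i)
        for (size_t d = 0; d < dim; ++d)
            detector[d] += (float)(alphaY[i] * sv[i][d]); // w = sum alpha_i*y_i*x_i
    detector[dim] = (float)(-b); // bias appended as the last element

    cv::HOGDescriptor hog;
    hog.setSVMDetector(detector);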
Best regards
I am working with SVM-light. I would like to use SVM-light to train a classifier for object detection. I figured out the syntax to start a training run:
svm_learn example2/train_induction.dat example2/model
My problem: how can I build the "train_induction.dat" from a set of positive and negative pictures?
There are two parts to this question:
What feature representation should I use for object detection in images with SVMs?
How do I create an SVM-light data file with (whatever feature representation)?
For an intro to the first question, see Wikipedia's outline. Bag-of-words models based on SIFT (or sometimes SURF or HOG) features are fairly standard.
For the second, it depends a lot on what language / libraries you want to use. The features can be extracted from the images using something like OpenCV, vlfeat, or many others. You can then convert those features to the SVM-light format as described on the SVM-light homepage (no anchors on that page; search for "The input file").
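For illustration, a minimal C++ sketch that writes one labelled feature vector per line in the SVM-light input format, i.e. the label followed by 1-based index:value pairs (the function name is a placeholder):

    #include <cstdio>
    #include <vector>

    // Append one sample as: "<+1|-1> 1:<v1> 2:<v2> ... N:<vN>\n"
    void writeSvmLightSample(std::FILE* f, int label, const std::vector<float>& feat)
    {
        std::fprintf(f, "%+d", label);
        for (size_t i = 0; i < feat.size(); ++i)
            std::fprintf(f, " %u:%g", (unsigned)(i + 1), feat[i]);
        std::fprintf(f, "\n");
    }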
If you update with what language and library you want to use, we can give more specific advice.
I have extracted the features by using the OpenCV open-source library.
I have done these steps by using these 2 classes:
SiftFeatureDetector
SiftDescriptorExtractor
from which I got a 128*128 matrix of descriptors, which I think I will use to train on the features...
What I'm confused about is the following: when I want to train on the features, I should use a matrix with one row per feature, where every single row contains the information about that feature. So it might be a matrix of
number of features * 6.
For example, I got 344 features in an image, and I got a 128*128 matrix for the descriptors, which I need in order to train on my features.
But, as I mentioned, I'm just getting a 128*128 matrix, so what's the problem?
And what should I use for training later on?
Have you looked at the descriptor_extractor_matcher.cpp or matcher_simple.cpp samples from OpenCV? Also, could you post the code you are using to detect the features?
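For what it's worth, part of the confusion may be that SIFT gives one 128-dimensional row per keypoint (an N x 128 matrix, with N varying per image), while the SVM wants one fixed-length row per image. One common way to bridge that, hinted at in the bag-of-words answer above, is to cluster the descriptors into a vocabulary; a minimal sketch with the OpenCV 2.4 bag-of-words classes (the vocabulary size 100 and file name are arbitrary placeholders):

    // 1) Pool SIFT descriptors from all training images and build a vocabulary.
    cv::BOWKMeansTrainer bowTrainer(100);      // 100 visual words (arbitrary)
    // for each training image: bowTrainer.add(descriptors); // N x 128 each
    cv::Mat vocabulary = bowTrainer.cluster();

    // 2) Describe any image as a fixed-length histogram over the vocabulary.
    cv::Ptr<cv::DescriptorExtractor> extractor(new cv::SiftDescriptorExtractor);
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("FlannBased");
    cv::BOWImgDescriptorExtractor bowExtractor(extractor, matcher);
    bowExtractor.setVocabulary(vocabulary);

    cv::Mat img = cv::imread("test.png");  // placeholder image
    std::vector<cv::KeyPoint> keypoints;   // from SiftFeatureDetector::detect()
    cv::Mat bowDescriptor;                 // 1 x 100: one row per image for the SVM
    bowExtractor.compute(img, keypoints, bowDescriptor);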