I am trying to train an AdaBoost classifier using the OpenCV library for visual pedestrian detection.
I've come across the notion that AdaBoost allows the selection of the most relevant features, meaning that if I harvest 50,000 features from images and then use them to train a classifier, at the end of the training process I would be able to select, for example, the best 2,000 out of those 50,000.
This would then allow me to harvest only those 2,000 during the actual detection process, for the sake of speed.
Is this even true? Or is this a misconception?
If true, is it possible to do this using the OpenCV library?
Best regards
Yes, this is true. That's exactly what boosting is all about.
Please check the OpenCV documentation on training a cascade of boosted classifiers.
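For illustration, here is a minimal sketch of the idea using scikit-learn's AdaBoostClassifier rather than OpenCV (an assumption on my part: OpenCV's boosted training does the same selection internally, but scikit-learn exposes the feature importances directly). The random data stands in for your harvested image features:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# X: one row per training window, one column per harvested feature;
# y: 1 for pedestrian, 0 for background (placeholder random data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))   # stand-in for your 50,000 features
y = rng.integers(0, 2, size=500)

# Each boosting round picks the weak learner (a decision stump by
# default) that best reduces the weighted error, so only the features
# those stumps actually use matter at detection time.
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Rank features by how much the ensemble relied on them, then keep
# only the top ones for the fast detection pass.
top_k = 50
best = np.argsort(clf.feature_importances_)[::-1][:top_k]
print("indices of the most relevant features:", best)
```

At detection time you would then compute only the selected features for each window, which is exactly the speed-up you describe.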
Does it make any sense to perform feature extraction on images using, e.g., OpenCV, then use Caffe for classification of those features?
I am asking this as opposed to the traditional way of passing the images directly to Caffe, and letting Caffe do the extraction and classification procedures.
Yes, it does make sense, but it may not be the first thing you want to try:
If you have already extracted hand-crafted features that are suitable for your domain, there is a good chance you'll get satisfactory results by using an easier-to-use machine learning tool (e.g. libsvm).
Caffe can be used in many different ways with your features. If they are low-level features (e.g. Histogram of Oriented Gradients), then several convolutional layers may be able to extract the appropriate mid-level features for your problem. You may also use Caffe as an alternative non-linear classifier (instead of an SVM). You have the freedom to try (too) many things, but my advice is to first try a machine learning method with a smaller hyperparameter space, especially if you're new to neural nets and Caffe.
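To make the first suggestion concrete, here is a rough sketch of that easier route: OpenCV HOG features fed into a linear SVM via scikit-learn (the file names and labels are placeholders, not from your setup):

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# OpenCV's default HOG descriptor works on 64x128 windows and yields
# a 3780-dimensional feature vector per window.
hog = cv2.HOGDescriptor()

def extract(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (64, 128))
    return hog.compute(img).flatten()

# Placeholder file lists; substitute your own labeled images.
pos_files = ["pos_0.png", "pos_1.png"]
neg_files = ["neg_0.png", "neg_1.png"]

X = np.array([extract(f) for f in pos_files + neg_files])
y = np.array([1] * len(pos_files) + [0] * len(neg_files))

# A linear SVM on hand-crafted features is a strong, easy-to-tune
# baseline before reaching for Caffe.
clf = LinearSVC()
clf.fit(X, y)
```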
Caffe is a tool for training and evaluating deep neural networks. It is quite a versatile tool allowing for both deep convolutional nets as well as other architectures.
Of course it can be used to process pre-computed image features.
I'm using an OpenCV Haar classifier in my work, but I keep reading conflicting reports on whether the OpenCV Haar classifier is an SVM or not. Can anyone clarify whether it uses an SVM? And if it does not, what advantages does the Haar method offer over an SVM approach?
SVM and boosting (AdaBoost, GentleBoost, etc.) are feature classification strategies/algorithms. Support Vector Machines solve a complex optimization problem, often using kernel functions, which allow us to separate samples by working in a much higher-dimensional feature space. Boosting, on the other hand, is a strategy based on combining lots of "cheap" classifiers in a smart way, which leads to very fast classification. Those weak classifiers can even be SVMs.
Haar-like features are a type of feature computed from integral images, and they are very well suited to computer vision problems.
That is, you can combine Haar features with either of the two classification schemes.
It isn't an SVM. Here is the documentation:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#haar-feature-based-cascade-classifier-for-object-detection
It uses boosting (supporting AdaBoost and a variety of other similar methods, all based on boosting).
The important difference is evaluation speed: cascade classifiers, with their stage-based boosting algorithms, allow very fast evaluation with high accuracy (and in particular support training with many negatives), hitting a better balance point than an SVM for this particular application.
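For reference, a minimal sketch of how such a trained boosted Haar cascade is typically run from OpenCV's Python bindings (the cascade and image file names are placeholders):

```python
import cv2

# Load a trained boosted Haar cascade (an XML file produced by
# opencv_traincascade, or one of the cascades shipped with OpenCV).
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

img = cv2.imread("scene.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Each detection window passes through the boosted stages; most
# windows are rejected by the first few cheap stages, which is where
# the cascade's speed comes from.
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in hits:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```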
I've built an algorithm for pedestrian detection using OpenCV tools. To perform classification I use a boosted classifier trained with the CvBoost class.
The problem with this implementation is that I need to feed my classifier the whole set of features I used for training. This makes the algorithm extremely slow, so much so that each image takes around 20 seconds to be fully analysed.
I need a different detection structure, and OpenCV has a Soft Cascade class that seems to be exactly what I need. Its basic principle is that there is no need to examine all the features of a testing sample, since a detector can reject most negative samples using a small number of features (I sketch the principle below). The problem is that I have no idea how to train one given a fully labeled set of negative and positive examples.
I can find no information about this online, so I am looking for any tips you can give me on how to use this soft cascade for classification.
Best regards
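To make the principle concrete, this is roughly the behaviour I am after; a toy sketch of stage-wise early rejection, not the actual OpenCV Soft Cascade API (the per-stage scores and thresholds are placeholders):

```python
def soft_cascade_score(weak_scores, rejection_thresholds):
    """Evaluate weak classifiers one at a time and bail out early.

    weak_scores: per-stage scores h_t(x) for the current window
    rejection_thresholds: trained threshold r_t for each partial sum
    Returns the accumulated score, or None if the window is rejected.
    """
    total = 0.0
    for score, threshold in zip(weak_scores, rejection_thresholds):
        total += score
        if total < threshold:
            return None  # rejected after only a few features
    return total  # survived all stages: a candidate detection
```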
I have two dependent continuous variables and I want to use their combined values to predict the value of a third, binary variable. How do I go about discretizing/categorizing the values? I am not looking for clustering algorithms; I'm specifically interested in obtaining 'meaningful' discrete categories I can subsequently use in a Bayesian classifier.
Pointers to papers, books, online courses, all very much appreciated!
That is the essence of machine learning, and one of its most studied problems.
Least-squares regression, logistic regression, SVMs, and random forests are widely used for this type of problem, which is called binary classification.
If your goal is to pragmatically classify your data, several libraries are available, like scikit-learn in Python and Weka in Java. They have great documentation.
But if you want to understand the intrinsics of machine learning, just search (here or on Google) for machine learning resources.
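For instance, a minimal scikit-learn sketch of the pragmatic route (the arrays are placeholders for your two continuous predictors and the binary outcome):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: the two continuous predictors; y: the binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Logistic regression handles the continuous inputs directly,
# with no discretization step needed.
clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())
```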
If you wanted to be a real nerd, you could generate a bunch of different possible discretizations, train a classifier on each, characterize the discretizations by their features, and see which discretizations work best (a sketch of this idea follows below).
In general, discretizing is more of an art, and depends on having a good understanding of what the input variable ranges mean.
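A rough sketch of that search with scikit-learn, assuming KBinsDiscretizer candidates scored by cross-validating a categorical naive Bayes (the bin counts and strategies are arbitrary choices, and the data is a placeholder):

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # two continuous predictors
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # binary target

# Try several candidate discretizations and keep the one that helps
# the Bayesian classifier most under cross-validation.
best = None
for strategy in ("uniform", "quantile", "kmeans"):
    for n_bins in (3, 5, 8):
        model = make_pipeline(
            KBinsDiscretizer(n_bins=n_bins, encode="ordinal",
                             strategy=strategy),
            # min_categories keeps predict safe if a fold leaves
            # some bins empty during training.
            CategoricalNB(min_categories=n_bins),
        )
        score = cross_val_score(model, X, y, cv=5).mean()
        if best is None or score > best[0]:
            best = (score, strategy, n_bins)

print("best discretization (score, strategy, bins):", best)
```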
I am doing a project where I have neural networks (or other algorithms) play each other in poker. After each win or loss, I want the neural network (or other algorithm) to update in response to the error of the loss (how this is calculated is unimportant here).
Weka is very nice and I don't want to reinvent the wheel. However, Weka's API seems primarily designed to train from a dataset. Game playing doesn't use a dataset; rather, the network plays, and then I want it to update itself based on its loss.
Is it possible to use the Weka API to update a network on a single instance, rather than on a dataset, and to do this over and over again? Am I thinking about this right?
The other idea I want to implement is to use a genetic algorithm to update the weights of a neural network, instead of the backpropagation algorithm. As far as I can tell, there is no way to manually specify the weights of a neural network in Weka. This, of course, is vital if a genetic algorithm is to be used for this purpose.
Please help :) Thank you.
Normally, Weka learning algorithms are batch learning algorithms. What you need is an incremental classifier.
From the Weka docs:
Most classifiers need to see all the data before they can be trained, e.g., J48 or SMO. But there are also schemes that can be trained in an incremental fashion, not just in batch mode. All classifiers implementing the weka.classifiers.UpdateableClassifier interface are able to process data in such a way.
See the UpdateableClassifier interface for the list of classifiers that implement it.
You may also look at MOA (Massive Online Analysis), a tool closely related to Weka; all of its classifiers are incremental, due to the constraints of online learning.
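The incremental pattern itself looks like this; as an illustration I'll use scikit-learn's partial_fit, which plays the same role as Weka's updateClassifier(Instance) (the game-state features and outcomes below are placeholders):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Same train-then-update loop as an UpdateableClassifier in Weka:
# build the model once, then feed it one instance per played hand.
clf = SGDClassifier()
classes = np.array([0, 1])  # lose / win

rng = np.random.default_rng(0)
for hand in range(1000):
    features = rng.normal(size=(1, 10))  # placeholder game state
    outcome = int(rng.integers(0, 2))    # placeholder result
    clf.partial_fit(features, [outcome], classes=classes)
```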
Weka, as far as I can tell, does not do online learning (which is what you're asking about).
It might be better to investigate using competitive analysis for your game.
You may have to reinvent the wheel here. I don't think it's a bad use of time.
I'm currently implementing a learning classifier system, which is pretty simple. I'd also advise looking into these kinds of algorithms. There is an implementation on the internet, but I still prefer to code my own.