How to implement multiple kernels in LIBSVM?

I wish to improve the accuracy of my SVM classification results by using multiple kernel learning (e.g., RBF and polynomial). Are there any resources to help me figure out how to implement this in LIBSVM? The resources I have found only present MKL results, which is not what I need; I need implementation guidance.
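LIBSVM has no built-in MKL, but it does accept user-supplied Gram matrices (kernel type -t 4, precomputed), so a simple middle ground is to fix the kernel weights yourself and feed it a combined matrix. Below is a minimal sketch using scikit-learn's SVC (which wraps LIBSVM) with a precomputed convex combination of an RBF and a polynomial kernel; the weight w and the kernel parameters are assumed fixed hyperparameters here, not learned as in true MKL.

```python
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def combined_kernel(A, B, w=0.5, gamma=0.1, degree=3):
    # Convex combination of RBF and polynomial kernels; a convex
    # combination of valid kernels is itself a valid kernel.
    return (w * rbf_kernel(A, B, gamma=gamma)
            + (1 - w) * polynomial_kernel(A, B, degree=degree))

X, y = make_classification(n_samples=300, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_tr, X_tr), y_tr)           # train on the Gram matrix
print(clf.score(combined_kernel(X_te, X_tr), y_te))  # test rows vs. training columns
```

Tune w (and gamma/degree) by cross-validation. The same idea works with plain command-line LIBSVM: write the combined matrix out in its precomputed-kernel file format and train with -t 4.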

Related

Is there any subspace clustering method for dealing with numerical attributes?

I am trying to apply some clustering method to my datasets (with numerical dimensions), but I'm convinced that the features have different weights for different clusters. I read that there is an approach called soft subspace clustering that tries to identify the clusters and the weights of the features for each cluster simultaneously. However, the algorithms that I have found are applied only to categorical data.
I am trying to identify a soft subspace clustering algorithm for numerical data. Do you know of any, or how I can adapt methods originally designed for categorical data to numerical data? (I think it would be necessary to propose some way of measuring the relevance of each numerical feature in each cluster.)
Yes, there are dozens of subspace clustering algorithms.
You'll need to do a proper literature review; this is too broad to cover in a Q&A format like Stack Overflow. Look for (surprise) "subspace clustering", but also include "biclustering", for example.
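For the numerical case specifically, the core idea can be sketched in a few lines: alternate between assigning points with per-cluster weighted distances and updating each cluster's feature weights from the within-cluster dispersion of each feature. The toy sketch below is written in the spirit of entropy-weighted k-means; it is an illustration of the idea, not any particular published implementation, and gamma is an assumed smoothing parameter.

```python
import numpy as np

def soft_subspace_kmeans(X, k, gamma=1.0, n_iter=20, seed=0):
    """Toy soft subspace clustering for numerical data: each cluster keeps
    its own feature-weight vector, updated from the within-cluster
    dispersion of each feature."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    weights = np.full((k, d), 1.0 / d)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assign each point using its cluster's weighted squared distance.
        dists = np.stack([((X - centers[j]) ** 2 * weights[j]).sum(axis=1)
                          for j in range(k)], axis=1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            centers[j] = members.mean(axis=0)
            # Features that are tight within the cluster get higher weight;
            # shifting by the minimum keeps exp() from underflowing.
            disp = ((members - centers[j]) ** 2).sum(axis=0)
            w = np.exp(-(disp - disp.min()) / gamma)
            weights[j] = w / w.sum()
    return labels, centers, weights
```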

Why use support vector machines?

I have some questions about SVM:
1- Why use SVM? In other words, what motivated its development?
2- What is the state of the art (2017)?
3- What improvements have been made?
SVMs work very well. In many applications, they are still among the best-performing algorithms.
We've seen progress in particular on linear SVMs, which can be trained much faster than kernel SVMs (see the timing sketch after the next answer).
Read more literature; don't expect an exhaustive answer in this Q&A format, and show more effort on your part.
SVMs are most commonly used for classification problems where labeled data is available (supervised learning), and they are useful for modeling with limited data. For problems with unlabeled data (unsupervised learning), support vector clustering is a commonly employed algorithm. SVMs are most natural for binary classification, since the basic formulation learns a single decision boundary between two classes; multiclass problems require decompositions such as one-vs-one or one-vs-rest. Your 2nd and 3rd questions are very ambiguous (and need lots of work!), but suffice it to say that SVMs have found wide-ranging applicability in medical data science. Here's a link to explore more about this: Applications of Support Vector Machine (SVM) Learning in Cancer Genomics
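To make the linear-vs-kernel point above concrete, here is a small timing sketch with scikit-learn; the dataset and sizes are arbitrary placeholders. LinearSVC uses a dedicated linear solver (liblinear) and typically trains far faster than the kernelized SVC on larger datasets.

```python
import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)

for name, clf in [("kernel SVM (RBF)", SVC(kernel="rbf")),
                  ("linear SVM", LinearSVC())]:
    t0 = time.perf_counter()
    clf.fit(X, y)
    print(f"{name}: trained in {time.perf_counter() - t0:.2f}s")
```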

OpenCV Haar classifier - is it an SVM?

I'm using an OpenCV Haar classifier in my work, but I keep reading conflicting reports on whether the OpenCV Haar classifier is an SVM or not. Can anyone clarify whether it uses an SVM? Also, if it does not, what advantages does the Haar method offer over an SVM approach?
SVM and boosting (AdaBoost, GentleBoost, etc.) are feature classification strategies/algorithms. Support vector machines solve a complex optimization problem, often using kernel functions which allow us to separate samples by working in a much higher-dimensional feature space. Boosting, on the other hand, is a strategy based on combining lots of "cheap" classifiers in a smart way, which leads to very fast classification. Those weak classifiers can even be SVMs.
Haar-like features are a kind of feature computed from integral images and are very well suited to computer vision problems.
That is, you can combine Haar features with either of the two classification schemes.
It isn't an SVM. Here is the documentation:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#haar-feature-based-cascade-classifier-for-object-detection
It uses boosting (supporting AdaBoost and a variety of other similar methods -- all based on boosting).
The important difference is evaluation speed: cascade classifiers and their stage-based boosting algorithms allow very fast evaluation and high accuracy (and in particular support training with many negatives), at a better speed/accuracy balance point than an SVM for this particular application.
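For reference, this is roughly how the boosted Haar cascade is used from OpenCV's Python bindings; the image path is a placeholder, and the pretrained cascade file ships with OpenCV under cv2.data.haarcascades.

```python
import cv2

# Pretrained boosted Haar cascade shipped with OpenCV (not an SVM).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the cascade over the image at several scales;
# early stages reject most windows cheaply, which is what makes it fast.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```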

Sweep through all machine learning classifiers?

I'm using Weka to perform classification, clustering, and some regression on a few large data sets. I'm currently trying out all the classifiers (decision tree, SVM, naive Bayes, etc.).
Is there a way (in Weka or other machine learning toolkit) to sweep through all the available classifier algorithms to find the one that produces the best cross-validated accuracy or other metric?
I'd like to find the best clustering algorithm, too, for my other clustering problem; perhaps finding the lowest sum-of-squared-error?
Isn't that some kind of overfitting, too? Trying tons of classifiers, and choosing the best?
Also note that preprocessing is usually very important, and different classifiers may need different preprocessing; and each classifier has in turn a dozen or so parameters...
Same for clustering: don't choose a clustering algorithm by some metric, because if you choose e.g. "lowest sum-of-squares", k-means will win. Not because it is better, but because it is more overfit to your evaluation method: k-means directly optimizes the sum-of-squares. The results may be poor on other metrics, but on SSQ they are, by design, a local optimum.
Data mining is not something you can automate to a push-button level.
It's a skill that requires experience on how to preprocess, choose algorithms, adjust parameters and evaluate the actual outcome. Otherwise, you'd have some software on the market where you just feed your data and get the optimal classifier out.
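That said, a simple sweep is easy to script; the important part, given the overfitting concern above, is to keep a held-out test set that plays no role in picking the winner. A minimal sketch with scikit-learn, where the candidate list and dataset are arbitrary placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
# Hold out a final test set so the "winner" is judged on data
# that was never used for model selection.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM (RBF)": SVC(),
    "naive Bayes": GaussianNB(),
}
scores = {name: cross_val_score(clf, X_dev, y_dev, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("best on CV:", best, "held-out accuracy:",
      candidates[best].fit(X_dev, y_dev).score(X_test, y_test))
```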

How to categorize continuous data?

I have two dependent continuous variables and I want to use their combined values to predict the value of a third, binary variable. How do I go about discretizing/categorizing the values? I am not looking for clustering algorithms; I'm specifically interested in obtaining 'meaningful' discrete categories I can subsequently use in a Bayesian classifier.
Pointers to papers, books, online courses, all very much appreciated!
That is the essence of machine learning, and one of the most studied problems.
Least-squares regression, logistic regression, SVMs, and random forests are widely used for this type of problem, which is called binary classification.
If your goal is to pragmatically classify your data, several libraries are available, like scikit-learn in Python and Weka in Java. They have great documentation.
But if you want to understand the intrinsics of machine learning, just search (here or on Google) for machine learning resources.
If you wanted to be a real nerd, you could generate a bunch of different possible discretizations, train a classifier on each, then characterize the discretizations by features, run a classifier on that, and see what sorts of discretizations are best!? (A sketch of the first step follows below.)
In general, discretizing is more of an art, and it requires a good understanding of what the input variable ranges mean.
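As a concrete starting point, the "generate candidate discretizations and let the classifier judge" idea can be scripted directly. A sketch with scikit-learn, where the bin counts and quantile strategy are arbitrary choices, and one-hot-encoded bins feed a Bernoulli naive Bayes classifier:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Two continuous predictors, one binary target, as in the question.
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Score each candidate discretization by how well the downstream
# Bayesian classifier performs under cross-validation.
for n_bins in (3, 5, 10):
    pipe = make_pipeline(
        KBinsDiscretizer(n_bins=n_bins, encode="onehot-dense",
                         strategy="quantile"),
        BernoulliNB())
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{n_bins} bins: CV accuracy {score:.3f}")
```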
