How do I train an HMM classifier with multiple classes? - machine-learning

I have trained an HMM for each class separately and selected the highest-likelihood model given the observation sequence.
However, my requirement is to build a single HMM for all classes. Each class can have multiple hidden states S1, ..., Sn. For example, if I have three classes, then the total number of hidden states is 3n. How do I train this kind of classifier?
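For reference, the per-class approach described in the first paragraph typically looks like the following. This is a minimal sketch assuming hmmlearn's GaussianHMM and continuous observation vectors; the library choice, state count, and data shapes are assumptions, not from the question:

```python
import numpy as np
from hmmlearn import hmm

def train_per_class_hmms(sequences_by_class, n_states=4):
    """Fit one HMM per class. sequences_by_class maps a class label to a
    list of (T_i, n_features) observation arrays."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.concatenate(seqs)          # stack all sequences of this class
        lengths = [len(s) for s in seqs]  # mark the sequence boundaries
        m = hmm.GaussianHMM(n_components=n_states, n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```

A single combined HMM with 3n states can then be assembled by placing each class's n-state transition matrix as a block on the diagonal of a 3n x 3n transition matrix (with zero or near-zero cross-class transition probabilities); checking which block the Viterbi path stays in then identifies the class.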

Related

How do I test a CNN model on fewer classes than the training set in Python

I have tropical storm images and am trying to build a model to categorize the storm category. I am trying to predict the last storm's stage using images from previous storms, but the last storm covers only 5 categories while the training set has 7 (basically, I split the dataset so that the last storm is the test set and the earlier storms are the training set). So my question is: are there any methods to predict fewer classes than there are training classes?
In my opinion, it does not matter if your test set contains fewer categories than the training set, as long as the 5 categories that you care about can still be predicted by the model. When the model produces a prediction for a given test sample, you can sort the predicted classes and take only the class with the highest predicted probability (or the top 3, for example) and ignore the rest.
Otherwise, I would suggest training your model only on the classes you care about (5), using only the training samples for those classes.
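As a concrete illustration of the first suggestion, you can mask out the scores of the classes that never appear in the test set before taking the argmax. A minimal sketch, assuming the model outputs a 7-way probability vector per sample; the array names and placeholder values are invented for illustration:

```python
import numpy as np

# probs: (n_samples, 7) softmax outputs from the 7-class model
probs = np.random.rand(10, 7)               # placeholder predictions
probs /= probs.sum(axis=1, keepdims=True)   # normalize rows to sum to 1

keep = np.array([0, 1, 2, 3, 4])  # indices of the 5 classes in the test set
masked = probs[:, keep]           # drop the scores of the other 2 classes
pred = keep[np.argmax(masked, axis=1)]  # argmax over the kept classes only
```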

Is it possible to have a class feature with several values?

I have a dataset in which the class has several values. For example, a dataset of face recognition where the class could be a tuple (man, old, Chinese).
Is it possible to have such data, if yes what ML classifier should I use?
I believe this question should be moved to another platform, such as https://datascience.stackexchange.com/
What you are asking for is called multi-label classification.
In multi-label classification tasks, the model is trained to output the probabilities or likelihoods of more than one label for a given sample.
You can either use multi-label classification, or you can use one binary classifier per attribute: one binary classifier to predict man vs. woman, another for old vs. young, and so on. But be careful that your labels are semantically consistent. For example, if you have labels like "sky" and "outdoor", the binary classifiers may be noisy if the labels were not carefully assigned: if a sample has the "sky" label but not the "outdoor" label, that inconsistency will introduce noise during training.
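A minimal multi-label sketch using scikit-learn's OneVsRestClassifier over a binary indicator matrix; the toy data and the mapping of columns to attributes (man?, old?, Chinese?) are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Toy data: 6 samples with 4 features each; the columns of Y are the
# binary attributes from the question (e.g. man?, old?, Chinese?).
X = np.random.rand(6, 4)
Y = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 1, 1]])

# Fits one independent binary classifier per label column.
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, Y)
print(clf.predict(X[:2]))  # (2, 3) array of 0/1 predictions per attribute
```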

Can anyone give me some pointers on using an SVM for user recognition with keystroke timing?

I am trying to perform user identification using keystroke dynamics. The data consists of the timing of individual keystrokes. I am using an SVM for binary classification. How can I train this for multiple users?
I have keystroke timings for many users; for example, for “hello”: h->16, e->10, l->30, o->20 (times in seconds). Therefore I do not have a binary class (+1 positive, -1 negative).
An SVM is a binary classifier. However, SVMs do give you a confidence score (a function of the distance from the separating hyperplane), so you can use this information in one of two popular ways to convert a binary classifier into a multiclass classifier: One-vs-All and One-vs-One.
See this article on how to use SVMs in a multiclass setting.
For example, in the One-vs-All setting, for each class you split the training data into samples that belong to that class and samples that belong to any other class, and then fit an SVM on that data. In the end you have k classifiers for k classes. You then run your test data through all k classifiers and return the class with the highest confidence score.
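A minimal sketch of the One-vs-All scheme with scikit-learn, which performs the per-class splitting internally. The data shapes are placeholders; in the keystroke setting each row would be a vector of timing features and each label a user id:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X = np.random.rand(100, 5)        # placeholder keystroke-timing features
y = np.random.randint(0, 4, 100)  # placeholder user ids: 4 users = 4 classes

# Fits one binary SVM per user (that user vs. all other users).
clf = OneVsRestClassifier(LinearSVC())
clf.fit(X, y)

scores = clf.decision_function(X[:3])           # (3, 4) confidence scores
pred = clf.classes_[np.argmax(scores, axis=1)]  # highest-scoring user wins
```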

Training DeepBelief Network to recognize multiple categories?

The learning example of the DeepBelief framework demonstrates how to train a neural network to recognize one object category. The training function, jpcnn_train(), does not take a category-label parameter.
However, in the DeepBelief simple example, the given neural network can categorize multiple object categories. Is there a way to do that kind of training through DeepBelief? Or should I look into Caffe and use that instead, since DeepBelief is based on Caffe?
Based on their documentation, in particular the docs for the functions jpcnn_train and jpcnn_predict, DeepBelief does not appear to support multiclass classification with custom labels out of the box. It does seem to support multiclass classification for the ImageNet labels.
However, you can train multiple predictors (here's how to train one), one per custom class, and then choose the class whose predictor outputs the highest value.
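In code, that workaround reduces to an argmax over the per-class predictor outputs. A minimal sketch, assuming a hypothetical Python wrapper that exposes each trained jpcnn predictor as a callable returning a score (jpcnn itself is a C API, so this wrapper is an assumption, not part of DeepBelief):

```python
def classify(predictors, image):
    """predictors: dict mapping class name -> callable that returns the
    trained per-class predictor's score for an image (a hypothetical
    wrapper around jpcnn_predict). Returns the best-scoring class."""
    return max(predictors, key=lambda name: predictors[name](image))
```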

Late fusion step of classification using libLinear

I am working on a classification task that uses libLinear.
I have trained two models on two different feature sets to make predictions for a query input.
I want to use late fusion to combine the results of the two models, so I modified the liblinear code so that I can get the decision scores for the different classes. I now have two sets of scores to determine which class the query should be assigned to.
Is there a standard way to do this "late fusion", or should I just add the two scores for each class and choose the class with the highest score as the candidate?
The standard way to combine multiple classifiers is a weighted sum of the scores of the individual classifiers. Of course, you then have the problem of specifying the weight coefficients. There are several possibilities (see the sketch after this list):
set weights uniformly
set weights proportional to performance of classifier
train a new classifier which takes the scores as input
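A minimal sketch of the weighted-sum option, assuming each liblinear model yields one decision score per class for the query; the score vectors and weights are placeholders:

```python
import numpy as np

# Decision scores for one query from the two liblinear models,
# one entry per class (placeholder values).
scores_a = np.array([0.3, 1.2, -0.5])
scores_b = np.array([0.8, 0.1, 0.4])

w_a, w_b = 0.5, 0.5           # uniform weights; tune on a held-out set
fused = w_a * scores_a + w_b * scores_b
pred = int(np.argmax(fused))  # class with the highest fused score
```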
