I am trying out a basic Variational Autoencoder (VAE) architecture in my project, but this generative model is used for words rather than images. I came across the term "sampling from the normal distribution". What does this sampling mean, and what is its purpose?
The normal distribution is a continuous probability distribution.
Sampling from a normal distribution means drawing a concrete value (or a set of values) at random according to that distribution's density.
This sampling is generally achieved by simple algorithms such as the Box-Muller transform. NumPy and CUDA both provide facilities that can generate N values from a given normal distribution.
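For concreteness, here is a minimal NumPy sketch of both routes mentioned above (the function name box_muller is just illustrative):

```python
import numpy as np

def box_muller(n, mu=0.0, sigma=1.0, seed=None):
    """Draw n samples from N(mu, sigma^2) via the Box-Muller transform."""
    rng = np.random.default_rng(seed)
    u1 = rng.uniform(size=n)   # two independent uniform(0, 1) draws
    u2 = rng.uniform(size=n)
    # 1 - u1 lies in (0, 1], which avoids log(0)
    z = np.sqrt(-2.0 * np.log(1.0 - u1)) * np.cos(2.0 * np.pi * u2)  # standard normal draws
    return mu + sigma * z      # shift and scale to the requested mean and std

print(box_muller(5))                                # five concrete values from N(0, 1)
print(np.random.default_rng().normal(0.0, 1.0, 5))  # NumPy's built-in equivalent
```

Each call returns different concrete numbers, but over many draws their histogram follows the bell curve of the chosen normal distribution.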
I recently found a model that classifies Iris flowers based on the size of their leaves. There are three types of flower as the target (dependent variable). As far as I know, categorical data should be encoded so that it can be used in machine learning, yet in this model the data is used directly, without an encoding step.
Can anyone explain when encoding should be used? Thank you in advance!
Relevant question - encoding of continuous feature variables.
The Iris data were originally published by Fisher together with his linear discriminant classifier.
Generally, a distinction is made between:
Real-valued classifiers
Discrete-feature classifiers
Linear discriminant analysis and quadratic discriminant analysis are real-valued classifiers; trying to add discrete variables as extra inputs does not work. Special procedures have been developed for working with indicator variables (the name used in statistics) in discriminant analysis. The k-nearest-neighbour classifier also really only works well with real-valued feature variables.
The naive Bayes classifier is most commonly used for classification problems with discrete features. When you don't want to assume conditional independence between the feature variables, the multinomial classifier can be applied to discrete features. A classifier service that does all of this for you in one go is insight classifiers.
Neural networks and support vector machines can combine real-valued and discrete features. My advice is to use one separate input node for each discrete outcome - don't use a single input node fed with values like (0: small, 1: minor, 2: medium, 3: larger, 4: big). A one-input-node-per-outcome encoding will improve your training result and yield better test-set performance; a small sketch follows below.
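A minimal sketch of that one-node-per-outcome (one-hot) encoding in plain NumPy; the category names are just the ones from the example above:

```python
import numpy as np

categories = ["small", "minor", "medium", "larger", "big"]
values = ["small", "big", "medium", "small", "larger"]   # hypothetical feature column

# One input node (column) per outcome instead of a single column coded 0..4.
index = {c: i for i, c in enumerate(categories)}
one_hot = np.zeros((len(values), len(categories)))
for row, v in enumerate(values):
    one_hot[row, index[v]] = 1.0

print(one_hot)   # "small" -> [1, 0, 0, 0, 0], "big" -> [0, 0, 0, 0, 1], ...
```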
The random forest classifier also combines real-valued and discrete features seamlessly.
My final advice is to train and compare on a test set at least four different types of classifiers, as there is no such thing as a universally best type of classifier.
I am a beginner in machine learning, so any help or suggestion would be greatly appreciated.
I have read that putting weights on features before predicting is a very bad idea. But what if a few features need to be weighted?
In a classification problem, let's say it is commonly accepted that age is the most important feature; how do I give more weight to this feature? I was thinking of normalizing it to a variance of 1.5 or 2 (with the other features at variance 1), so that this feature carries more weight. Is this fundamentally wrong? If so, is there another method?
Does it affect classification and regression problems differently?
If we are talking specifically about random forests (as you tagged), then you can use the Weighted Subspace Random Forest algorithm (the wsrf package in R). The algorithm determines a weight for each variable and then uses these weights while building the model.
The informativeness of a variable with respect to the class is measured by an information gain ratio. The measure is used as the probability of that variable being selected for inclusion in the variable subspace when splitting a specific node during the tree building process. Therefore, variables with higher values by the measure are more likely to be chosen as candidates during variable selection and a stronger tree can be built.
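Since wsrf itself is an R package, the following is only a rough Python sketch of the weighting idea, with scikit-learn's mutual information score used as a stand-in for the information gain ratio:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)

# Score each variable against the class, then turn the scores into the
# probability of that variable being selected for a split's feature subspace.
scores = mutual_info_classif(X, y, random_state=0)
probs = scores / scores.sum()

rng = np.random.default_rng(0)
subspace = rng.choice(X.shape[1], size=2, replace=False, p=probs)
print(probs, subspace)   # more informative variables are chosen more often
```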
Generally, if a feature is more important than the others and the model is expressive enough, then given enough training samples your model will automatically give it more importance by optimizing the weight matrices: back-propagation computes a partial derivative for every connection, so the network learns on its own to rely more on that feature. If instead of normalizing the feature you scale it to a larger range, you may overstate its importance.
In practice, a neural network works best if the inputs are centered and white, meaning their covariance is diagonal and their mean is the zero vector. This improves optimization of the neural net, since the hidden activation functions don't saturate as quickly and thus don't give you near-zero gradients early in learning.
If you do scale just one feature up by a small amount, it may or may not have the desired effect, but saturated gradients become more likely, so this is usually avoided.
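A minimal sketch of that preprocessing, plus the manual up-weighting the question asks about (the column index and scale factor are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # hypothetical feature matrix

# Centre each feature and give it unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Manually emphasising one feature, e.g. "age" in column 0:
age_column = 0
X_weighted = X_std.copy()
X_weighted[:, age_column] *= np.sqrt(1.5)     # its variance becomes ~1.5 instead of 1
```

Keep in mind the caveat above: pushing one input onto a larger scale makes saturated activations and near-zero gradients more likely.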
I am trying to fit a finite Gaussian mixture model with unknown means and covariances using Stan. I am aware that, since HMC can't sample from discrete distributions, the marginalization technique is usually used to infer mixture parameters in Stan. However, for my application, I need the data cluster assignments. What is the best way to infer them in Stan? Suggestions will be appreciated.
Chapter 13 of the Stan User Manual discusses this in some detail. In short, you can calculate (in the generated quantities block) the posterior probability that an observation falls in each of a finite number of categories and then use that vector of probabilities to draw a category realization from the categorical distribution.
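For illustration only, here is the same computation written in Python rather than in a Stan generated quantities block, for a single posterior draw of a two-component univariate mixture (all numbers are made up):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

x = np.array([-2.1, 0.3, 3.8])     # observations
theta = np.array([0.5, 0.5])       # one posterior draw of the mixture weights
mu = np.array([-2.0, 4.0])         # ... of the component means
sigma = np.array([1.0, 1.0])       # ... of the component standard deviations

# p(z_i = k | x_i) is proportional to theta_k * Normal(x_i | mu_k, sigma_k).
log_w = np.log(theta) + norm.logpdf(x[:, None], loc=mu, scale=sigma)   # shape (N, K)
probs = np.exp(log_w - log_w.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Draw a category realization for each observation from those probabilities.
z = np.array([rng.choice(len(theta), p=p) for p in probs])
print(probs, z)
```

Doing this for every posterior draw gives you posterior samples of the cluster assignments themselves.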
Suppose I have a training set made of (x, y) samples.
To apply a generative algorithm, let's say Gaussian discriminant analysis, must I assume that
p(x|y) ~ Normal(mu, sigma) for every possible sigma,
or do I just need to know whether x ~ Normal(mu, sigma) given y?
How can I evaluate whether p(x|y) follows a multivariate normal distribution well enough (up to a threshold) for me to use a generative algorithm?
That's a lot of questions.
To apply a generative algorithm, let's say Gaussian discriminant analysis, must I assume that p(x|y) ~ Normal(mu, sigma) for every possible sigma?
No, you must assume it holds for some (mu, sigma) pair. In practice you won't know what mu and sigma are, so you'll need to either estimate them (frequentist maximum likelihood or maximum a posteriori estimates) or, even better, incorporate the uncertainty about your parameter estimates into the predictions (the Bayesian approach).
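As a concrete illustration of the frequentist route, a minimal sketch of the per-class maximum likelihood estimates (the function name is just illustrative):

```python
import numpy as np

def fit_gaussian_per_class(X, y):
    """ML estimates of mu and Sigma for each class-conditional p(x | y = k)."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu_k = Xk.mean(axis=0)                            # sample mean
        sigma_k = np.cov(Xk, rowvar=False, bias=True)     # ML covariance estimate
        params[k] = (mu_k, sigma_k)
    return params
```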
How can I evaluate if p(x|y) follows a multivariate Normal distribution?
Classically, using a goodness-of-fit test. If the dimensionality of x is more than a handful, though, this won't work: standard tests count items in bins, and the number of bins you need in high dimensions is astronomical, so the expected counts are very low.
A better idea is to ask: what are my options for modelling the (conditional) distribution of x? You can compare these options using model-comparison techniques. Read up on model checking and model comparison.
Finally, your last point:
well enough (up to a threshold) for me to use a generative algorithm?
The paradox of many generative methods, including for example Fisher's linear discriminant analysis and the naive Bayes classifier, is that the classifier can work very well even though the model is a poor fit for the data. There's no particularly sound reason why this should be the case, but many have observed it to be empirically true. Whether it works can be checked much more easily than whether the assumed distribution explains the data well: just split your data into training and test sets and find out!
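A minimal sketch of that check with scikit-learn (any labelled dataset and any generative classifier would do; Iris and LDA are just placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit the generative classifier on the training split and score it on held-out data.
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```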
I am a novice in machine learning. I have read about the HMM, but I still have a few questions:
When applying an HMM for machine learning, how can the initial, emission, and transition probabilities be obtained?
Currently I have a set of values (consisting of the angles of a hand, which I would like to classify via an HMM); what should my first step be?
I know that there are three problems for an HMM (forward-backward, Baum-Welch, and Viterbi), but what should I do with my data?
In the literature that I have read, I have never encountered the use of distribution functions within an HMM, yet the constructor that JaHMM uses for an HMM takes:
number of states
Probability Distribution Function factory
Constructor Description:
Creates a new HMM. Each state has the same pi value and the transition probabilities are all equal.
Parameters:
nbStates The (strictly positive) number of states of the HMM.
opdfFactory A pdf generator that is used to build the pdfs associated to each state.
What is this used for? And how can I use it?
Thank you
You have to somehow model and learn the initial, emission, and transition probabilities such that they represent your data.
In the case of discrete distributions and not too many variables/states, you can obtain them from maximum likelihood fitting, or you can train a discriminative classifier that gives you a probability estimate, such as random forests or naive Bayes. For continuous distributions, have a look at Gaussian processes or any other regression method, such as Gaussian mixture models or regression forests. A small counting sketch for the discrete case follows below.
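This is only a minimal sketch of those maximum likelihood estimates, assuming you have training sequences whose hidden states are known (all numbers are made up):

```python
import numpy as np

states = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2]]   # hypothetical state sequences
obs    = [[3, 1, 0, 2, 2], [3, 0, 1, 2, 1]]   # hypothetical discretised observations
n_states, n_symbols = 3, 4

init = np.zeros(n_states)                      # initial-state counts
trans = np.zeros((n_states, n_states))         # transition counts
emit = np.zeros((n_states, n_symbols))         # emission counts

for s_seq, o_seq in zip(states, obs):
    init[s_seq[0]] += 1
    for t in range(len(s_seq)):
        emit[s_seq[t], o_seq[t]] += 1
        if t > 0:
            trans[s_seq[t - 1], s_seq[t]] += 1

# Normalise the counts into probabilities.
init /= init.sum()
trans /= trans.sum(axis=1, keepdims=True)
emit /= emit.sum(axis=1, keepdims=True)
```

If the hidden states are not known, this counting is replaced by the Baum-Welch (EM) algorithm mentioned in the question.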
Regarding your second and third questions: they are too general and fuzzy to be answered here. You should refer to the following books: "Pattern Recognition and Machine Learning" by Bishop and "Probabilistic Graphical Models" by Koller and Friedman.