Predict different values from the same inputs in a linear regression model?

Let's suppose I have a noisy 2D data set where a person looking at the data could easily draw a straight line through it so that the mean squared error is minimized.
The model of the line has the form y = mx + b, where x is the input value, y is the predicted value of the model, and m and b are variables trained to minimize the cost.
My question is: if we plug some input x1 into the model, it will always output the same number, regardless of how scattered the data is. How can a model like this predict different values from the same input?
Maybe this could be done by taking all the errors from the model line to the points, forming a distribution from them, taking the expected value of that distribution and then adding that value to y?

If the data is 2D and can be perfectly modeled with a straight line, then there is no data-based or statistical reason not to claim that the process is fully deterministic, and you should output a single value.
However, if you have many more dimensions, or your fit is not perfect (the error is minimised but not 0), then what you are after is either predicting a distribution of values or at least confidence bounds. There are many probabilistic models that can model a distribution over the outputs rather than a single value. In particular, linear regression does this: it assumes a Gaussian error around your predictions, so once you obtain the MSE "A" you can draw predictions from N(mx + b, A), which, as you can easily see, degenerates to the deterministic model when A = 0. These predictions are optimal in expectation, and they are simply your way of "simulating observations" according to the model. There are also meta-methods: if you treat your predictor as a black box, you can train multiple models on subsets of the data and treat their predictions as samples to fit a distribution (again, for simplicity, it could be a single Gaussian).
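A minimal sketch of that sampling idea, using synthetic and purely illustrative data:
import numpy as np

# Fit y = m*x + b by least squares, estimate the residual spread, then draw
# stochastic predictions from N(m*x1 + b, A) for a single input x1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 2.0 + rng.normal(scale=1.5, size=200)

m, b = np.polyfit(x, y, deg=1)
residuals = y - (m * x + b)
sigma = residuals.std()            # square root of the MSE "A" above

x1 = 5.0
samples = m * x1 + b + rng.normal(scale=sigma, size=10)
print(samples)                     # ten different "predictions" for the same x1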

Related

What does the y-axis of a Partial Dependence Plot (PDP) for binary classification mean?

I am not sure what the y-axis of my PDP implies. Is it the probability of my target being 1 (binary classification) or something else?
If you do the partial dependence plot of column a and you want to interpret the y value at x = 0.0, the y-axis value represents the average probability of class 1, computed by:
changing the value of column a in all rows of your dataset to 0.0
predicting all the changed rows with your fitted model
averaging the probabilities given by the model
I may not be good at explaining, but you can read more about PDP at https://christophm.github.io/interpretable-ml-book/pdp.html. Hope this helps :)
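Here is a minimal sketch of those three steps, assuming a fitted sklearn-style classifier with predict_proba; the names model, X, col and grid are placeholders:
import numpy as np

def partial_dependence(model, X, col, grid):
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, col] = value                      # set column a to this value in every row
        proba = model.predict_proba(X_mod)[:, 1]   # probability of class 1 per row
        averaged.append(proba.mean())              # average over the whole dataset
    return np.array(averaged)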
Generally speaking, we can produce a classifier from a function, f, that produces a real-valued output, plus a threshold. We call the output an 'activation'. If the activation meets the threshold condition, then we say the class is detected:
is_class := ( f(x0, x1, ...) > threshold )
and
activation = f(x0, x1, ...)
PDP plots simply show activation values as they change in response to changes in an input value (we ignore the threshold). That is, we might plot:
f(x0, x, x2, x3, ...)
as a single input x varies. Typically, we hold the others constant, although we can also plot in 2d and 3d.
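A small sketch of that kind of sweep, where f is a stand-in (an assumption, not a specific library call) for any real-valued scoring function:
import numpy as np

def activation_sweep(f, reference, index, grid):
    activations = []
    for value in grid:
        point = np.array(reference, dtype=float)
        point[index] = value           # vary only the chosen input
        activations.append(f(point))   # record the activation; the threshold is ignored
    return np.array(activations)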
Sometimes we're interested in:
how a single input changes the activation
how multiple inputs independently change the activation
how multiple activations change based on different inputs, and so on.
Strictly speaking, we need not even be talking about a classifier when looking at PDP plots. Any function that produces a real-valued output (an activation) in response to one or more real-valued feature inputs that we can vary allows us to produce PDP plots.
Classifier activations need not be, and often should not be, interpreted as probabilities, as others have written. In very many cases such an interpretation is simply incorrect. Nevertheless, the analysis of the activation levels is of interest to us, independently of whether the activations represent probabilities: in PDP plots we can see, for example, which feature values produce strong changes - more horizontal plots may imply a worthless feature.
Similarly, in ROC plots, we explicitly examine the true-positive and false-positive detection rates that result from varying the threshold on the activation values.
In both cases, there's no necessity that the classifier produce probabilities as its activation.
Interpretation of PDP plots is fraught with dangers. At a minimum, you need to be clear about what is being held constant as an input feature is varied. Were the other features set to zero (a good choice for linear models)? Did we set them to their most common values in the test set? Or to the most common values for a known class in a sample? Without this information, the vertical axis may be less helpful.
Knowing that an activation is a probability also doesn't seem to be especially helpful in PDP plots -- you can't expect the area under the curve to sum to one. Perhaps the most useful thing you might find is error cases, where the output probabilities are not in the range 0..1.

K Nearest Neighbour Classifier - random state for train test split leads to different accuracy scores

I'm fairly new to data analysis and machine learning. I've been carrying out some KNN classification analysis on a breast cancer dataset in Python's sklearn module. I have the following code which attempts to find the optimal k for classification of a target variable.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
breast_cancer_data = load_breast_cancer()
training_data, validation_data, training_labels, validation_labels = train_test_split(breast_cancer_data.data, breast_cancer_data.target, test_size = 0.2, random_state = 40)
results = []
for k in range(1,101):
    classifier = KNeighborsClassifier(n_neighbors = k)
    classifier.fit(training_data, training_labels)
    results.append(classifier.score(validation_data, validation_labels))
k_list = range(1,101)
plt.plot(k_list, results)
plt.ylim(0.85,0.99)
plt.xlabel("k")
plt.ylabel("Accuracy")
plt.title("Breast Cancer Classifier Accuracy")
plt.show()
The code loops through 1 to 100 and generates 100 KNN models with 'k' set to incremental values in the range 1 to 100. The performance of each of those models is saved to a list and a plot is generated showing 'k' on the x-axis and model performance on the y-axis.
The problem I have is that when I change the random_state parameter when splitting the data into training and testing partitions, this results in completely different plots, indicating varying model performance for different 'k' values for different dataset partitions.
For me this makes it difficult to decide which 'k' is optimal as the algorithm performs differently for different 'k's using different random states. Surely this doesn't mean that, for this particular dataset, 'k' is arbitrary? Can anyone help shed some light on this?
Thanks in anticipation
This is completely expected. When you do the train-test split, you are effectively sampling from your original population. This means that when you fit a model, any statistic (such as a model parameter estimate, or a model score) will itself be a sample estimate taken from some distribution. What you really want is a confidence interval around this score, and the easiest way to get that is to repeat the sampling and remeasure the score.
But you have to be very careful how you do this. Here are some robust options:
1. Cross Validation
The most common solution to this problem is to use K-fold cross-validation. In order not to confuse this K with the k from kNN, I'm going to use a capital letter for cross-validation (but bear in mind this is not standard nomenclature). This is a scheme that does the suggestion above but without a target leak. Instead of creating many splits at random, you split the data into K parts (called folds). You then train K models, each time on K-1 folds of the data, leaving aside a different fold as your test set each time. Now each model is independent and without a target leak. It turns out that the mean of whatever success score you use from these K models on their K separate test sets is a good estimate of the performance of training a model with those hyperparameters on the whole set. So now you should get a more stable score for each of your different values of k (small k, for kNN) and you can choose a final k this way.
Some extra notes:
Accuracy is a bad measure of classification performance. Look at scores like precision and recall, AUROC, or F1.
Don't try to program CV yourself; use sklearn's GridSearchCV (see the sketch after these notes).
If you are doing any preprocessing on your data that calculates some sort of state from the data, that needs to be done on only the training data in each fold. For example, if you are scaling your data, you can't include the test data when you fit the scaler. You need to fit (and transform) the scaler on the training data and then use that same scaler to transform your test data (don't fit it again). To get this to work inside CV you need to use sklearn Pipelines. This is very important; make sure you understand it.
You might get more stability if you stratify your train-test-split based on the output class. See the stratify argument on train_test_split.
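As a rough sketch of that GridSearchCV-plus-Pipeline combination on the same breast cancer data (the scoring choice here is just one of the alternatives mentioned above):
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Scaling lives inside the pipeline, so each CV fold fits the scaler on its
# own training folds only (no leakage into the held-out fold).
pipe = Pipeline([("scale", StandardScaler()),
                 ("knn", KNeighborsClassifier())])
search = GridSearchCV(pipe,
                      param_grid={"knn__n_neighbors": list(range(1, 101))},
                      scoring="f1",
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)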
Note that CV is the industry standard and that's what you should do, but there are other options:
2. Bootstrapping
You can read about this in detail in An Introduction to Statistical Learning, section 5.2 (p. 187), with examples in section 5.3.4.
The idea is to take your training set and draw a random sample from it with replacement. This means you end up with some repeated records. You take this new training set, train a model and then score it on the records that didn't make it into the bootstrapped sample (often called out-of-bag samples). You repeat this process multiple times. You can now get a distribution of your score (e.g. accuracy) which you can use to choose your hyperparameter, rather than just the point estimate you were using before.
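A rough sketch of that procedure, assuming X and y are NumPy arrays (the kNN model and the number of rounds are arbitrary choices):
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def bootstrap_scores(X, y, n_rounds=200, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    scores = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)          # sample rows with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # rows never drawn: out-of-bag
        if len(oob) == 0:
            continue
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[idx], y[idx])
        scores.append(clf.score(X[oob], y[oob]))  # score on the out-of-bag rows
    return np.array(scores)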
3. Making sure your validation set is representative of your test set
Jeremy Howard has a very interesting suggestion on how to calibrate your validation set to be a good representation of your test set. You only need to watch about 5 minutes from where that link starts. The idea is to split into three sets (which you should be doing anyway to choose a hyperparameter like k), train a bunch of very different but simple, quick models on your train set and then score them on both your validation and test sets. It is OK to use the test set here because these aren't real models that will influence your final model. Then plot the validation scores vs the test scores. They should fall roughly on a straight line (the y=x line). If they do, this means the validation set and test set are both either good or bad, i.e. performance on the validation set is representative of performance on the test set. If they don't fall on this straight line, it means the model scores you get from your validation set are not indicative of the score you'll get on unseen data, and thus you can't use that split to train a sensible model.
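A rough sketch of that check on the breast cancer data; the particular quick models here are just examples:
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Three-way split, then score several quick, deliberately different models on
# both the validation and test sets and look for points near the y = x line.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

models = [DummyClassifier(strategy="most_frequent"),
          LogisticRegression(max_iter=5000),
          GaussianNB(),
          DecisionTreeClassifier(max_depth=3)]

val_scores, test_scores = [], []
for model in models:
    model.fit(X_train, y_train)
    val_scores.append(model.score(X_val, y_val))
    test_scores.append(model.score(X_test, y_test))

plt.scatter(val_scores, test_scores)
plt.plot([0, 1], [0, 1])            # the y = x reference line
plt.xlabel("validation score")
plt.ylabel("test score")
plt.show()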
4. Get a larger data set
This is obviously not very practical for your situation but I thought I'd mention it for completeness. As your sample size increases, your standard error drops (i.e. you can get tighter bounds on your confidence intervals). But you'll need more training and more test data. While you might not have access to that here, it's worth keeping in mind for real world situations where you can assess the trade-off of the cost of gathering new data vs the desired accuracy in assessing your model performance (and probably the performance itself too).
This "behavior" is to be expected. Of course you get different results, when training and test is split differently.
You can approach the problem statistically, by repeating each 'k' several times with new train-validation-splits. Then take the median performance for each k. Or even better: look at the performance distribution and the median. A narrow performance distribution for a given 'k' is also a good sign that the 'k' is chosen well.
Afterwards you can use the test set to test your model
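A small sketch of that repeated-split approach (the number of repeats, 20, is arbitrary):
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Score each k over several random splits and keep the median score.
X, y = load_breast_cancer(return_X_y=True)
median_scores = []
for k in range(1, 101):
    scores = []
    for seed in range(20):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        scores.append(clf.score(X_val, y_val))
    median_scores.append(np.median(scores))
best_k = int(np.argmax(median_scores)) + 1   # +1 because k started at 1
print(best_k)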

What is 'fit' in Machine learning?

What is 'fit' in machine learning? I noticed in some cases it is a synonym for training.
Can someone please explain in layman's terms?
A machine learning model is typically specified with some functional form that includes parameters.
An example is a line intended to model data that has an outcome variable y that can be described in terms of a feature x. In that case, the functional form would be:
y = mx + b
Fitting the model means finding values for m and b that are in accordance with the training data, which is a set of points (x1, y1), (x2, y2), ..., (xN, yN). It may not be possible to choose m and b such that the line passes through all training data points, but some loss function can be defined to describe a well-fit line. The fitting algorithm's purpose is to minimize that loss function. In the case of line fitting, the loss could be the total distance of the training data points to the line, but it may be more mathematically convenient to use the total squared distance of the training data points to the line.
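A minimal sketch of such a least-squares fit, using synthetic points scattered around a known line:
import numpy as np

# Synthetic, purely illustrative training data around the line y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)

# Closed-form least-squares solution: m and b minimize the total squared
# vertical distance from the training points to the line.
m = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b = y.mean() - m * x.mean()
print(m, b)   # should come out close to 2 and 1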
In general, a model can be more complex than a line and include many parameters. For some models, the number of parameters is not fixed and can change as part of the fitting process. The features and the outcome variable can be discrete, continuous, and/or multidimensional. For unsupervised problems, there is no outcome variable.
In all these cases, fitting is still analogous to the line example above, where an algorithm is run to find model parameters that in some sense explain the training data. This often involves running some optimization procedure.
A model that is well-fit to the training data may not be well-fit to other non-training data, even if the other data is sampled from the same distribution as the training data. A technique called regularization can be used to address this issue.

How can I get the joint probability of new image and latent variable from generative model?

I just learned that a generative model tries to learn p(x|z)p(z) = p(x,z).
But after studying some sample code of generative models such as VAEs and GANs, I found that the output of the model is the generated image x, which is a 2D matrix.
In my understanding, the content of the matrix represents the probability of each pixel given the latent variable; is this right?
If so, is it possible to get the joint probability p(x,z) of the latent variable z and a whole image x from a generative model?
Thanks!
What a generative model is trying to learn is just p(x). p(x|z) = 1 if g(z) = x and 0 otherwise, because GANs and VAEs are deterministic mappings and therefore have a 100% chance of mapping to the same target given the same input.
Extracting the probability of x is not an easy task though, and depends on the approach. With GANs you can approximate this by sampling from the model. E.g. you sample 1000 images and see how often an image occurred. Then this image has a probability of occurrences / 1000. By the law of large numbers you will eventually recover the actual probability distribution of your generator this way.
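A toy sketch of that frequency-counting idea, using a made-up generator over a two-point "image" space (real image spaces are far too large for this to be taken literally):
import numpy as np
from collections import Counter

# `generate` stands in for sampling z ~ p(z) and pushing it through a
# deterministic generator g; here the output space has only two points so
# the counting is easy to see.
rng = np.random.default_rng(0)

def generate():
    z = rng.normal()          # latent sample z ~ N(0, 1)
    return int(z > 0)         # deterministic mapping g(z) into {0, 1}

samples = [generate() for _ in range(1000)]
counts = Counter(samples)
p_hat = {x: n / len(samples) for x, n in counts.items()}
print(p_hat)                  # approaches the generator's true p(x) as the sample grows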
If you want an exact way to calculate probabilities, you can use flow networks like GLOW or RealNVP, which optimize log(p(x)) directly and have a way to recover p(x).

Can a machine learning model provide information about mean and standard deviation of data on which it was trained?

Consider a parametric binary classifier (such as logistic regression, an SVM, etc.) trained on a dataset (say containing two features, e.g. blood pressure and cholesterol level). The dataset is thrown away and the trained model can only be used as a black box (no tweaks can be made and no inside information can be gathered from the trained model). Only a set of data points can be provided and their labels predicted.
Is it possible to get information about the mean and/or standard deviation and/or range of the features of the dataset on which this model was trained? If yes, how? If no, why not?
Thank you for your response! :)
An SVM does not provide any information about the data statistics; it is a maximum-margin classifier and it finds the best separating hyperplane between the two classes in the feature space, as a linear combination of "support vectors". If you use kernel functions, then this combination lives in the kernel space, not even in the original feature space. An SVM has no straightforward probabilistic interpretation whatsoever.
Logistic regression is a discriminative classifier and models the conditional probability p(y|x,w), where y is your label, x is your data and w are the weights. After maximum-likelihood training you are left with w, which is again a discriminator (a hyperplane) in the feature space, so you do not recover the feature statistics.
The following can be considered. Use a Gaussian classifier. Assume that your class is produced by the prior class probability p(y), and then a class-conditional density p(x|y,w) produces your data. By Bayes' rule you then have p(y|x,w) = p(y)p(x|y,w)/p(x). If you define the class-conditional density p(x|y,w) as Gaussian, its parameter set w will consist of the mean vector m and covariance matrix C of x, assuming it is produced by class y. But remember that this will only work under the assumption that the current data vector belongs to a specific class. Conditioned on w, a better option for the mean vector would be E[x|w], the expectation of x with respect to p(x|w). It comes down to a weighted average of the mean vectors for the classes y=0 and y=1, weighted by their prior class probabilities. The same should work for the covariance as well, but it needs to be derived properly; I am not 100% sure right now.
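A small sketch of that weighted average, with made-up priors and class-conditional means (all numbers are purely illustrative):
import numpy as np

# Hypothetical class priors p(y) and class-conditional means E[x|y] for two
# features (blood pressure, cholesterol level).
prior = {0: 0.6, 1: 0.4}
mean = {0: np.array([120.0, 180.0]),
        1: np.array([140.0, 240.0])}

# E[x|w] = sum over y of p(y) * E[x|y,w]
overall_mean = sum(prior[y] * mean[y] for y in prior)
print(overall_mean)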
