I have recently been studying machine learning through the Coursera ML course, and some questions came up while learning about the cost function with regularization.
Please share your advice if you have any ideas.
If I have a large enough amount of training data, I think regularization might reduce accuracy, because the model can already learn a reliable, well-generalized output from the training set alone, without regularization. How can I decide whether or not I should use regularization?
Suppose we have a model of the form w3*x3 + w2*x2 + w1*x1 + w0, and x3 is the feature that particularly causes overfitting, i.e. it has more outliers. In this situation, applying regularization seems somewhat unreasonable, because it affects every weight. Do you know a better approach I could use in this case?
What is the best way to choose the value of lambda? I guess the simplest approach is to run training multiple times with different lambda values and compare the resulting accuracies, but that is clearly inefficient when the training set is huge. I would like to know how you choose an appropriate lambda value.
Thanks for reading!
It's a bad idea to make guesses before you evaluate your model on validation data. When you talk about 'accuracy' in your question, which accuracy do you mean? Training-set accuracy is not very useful for estimating how good your model is. Regularization is normally desirable for many families of ML algorithms; in the case of linear regression, it is definitely worth doing. The only question is how much of it to apply, i.e. the value of the lambda parameter. You might also want to try L1 instead of L2. Read this.
In machine learning, questions like this are normally answered using data. Try a model, investigate how it behaves, try different solutions for the issues you observe.
Read this: How to calculate the regularization parameter in linear regression
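Here is a minimal sketch of that "try different values and compare on held-out data" workflow, assuming scikit-learn (not mentioned in the thread) is available; RidgeCV tries L2 and LassoCV tries L1, each picking its own lambda (called alpha there) by cross-validation. The synthetic dataset is only a placeholder.

# Compare L2 (Ridge) and L1 (Lasso) regularization, selecting lambda by CV.
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each model tries several regularization strengths and keeps the best one
# according to internal cross-validation on the training data.
ridge = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0]).fit(X_train, y_train)
lasso = LassoCV(alphas=[0.01, 0.1, 1.0, 10.0], cv=5).fit(X_train, y_train)

print("Ridge: alpha =", ridge.alpha_, "test R2 =", ridge.score(X_test, y_test))
print("Lasso: alpha =", lasso.alpha_, "test R2 =", lasso.score(X_test, y_test))

If the best model ends up with a very small alpha, that is itself evidence that your data are plentiful enough that regularization barely matters.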
I have always been using the R² score metric. I know there are several other evaluation metrics out there, and I have read several articles about them, but since I'm still a beginner in machine learning I'm still confused about the following:
When to use each of them. Does it depend on the case? If so, please give me an example.
I read this article and it said the R² score is not straightforward and that we need other measures of model performance. Does that mean we need more than one evaluation metric to get better insight into our model's performance?
Is it advisable to measure model performance with just one evaluation metric?
The same article said that knowing the distribution of our data and our business goal helps us choose appropriate metrics. What does that mean?
For each metric, how do we know whether the model is 'good' enough?
There are different evaluation metrics for regression problems, such as the ones below.
Mean Squared Error (MSE)
Root Mean Squared Error (RMSE)
Mean Absolute Error (MAE)
R² or Coefficient of Determination
Mean Square Percentage Error (MSPE)
and so on.
As you mentioned, which of them to use depends on your problem type, on what you want to measure, and on the distribution of your data.
To decide, you need to understand how these metrics evaluate the model. You can check the definitions and pros/cons of the evaluation metrics in this nice blog post.
R² shows how much of the variation in your target variable is explained by the independent variables. A good model can give an R² close to 1.0, but that does not mean it has to. Models with a low R² can still achieve a low MSE. So, to assess the predictive power of your model, it is better to use MSE, RMSE, or other metrics alongside R².
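As a small illustration of using several metrics side by side, here is a sketch that computes MSE, RMSE, MAE, and R² for the same predictions; it assumes scikit-learn, and the numbers are invented.

import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = np.array([3.0, 5.0, 2.5, 7.0, 4.5])   # made-up ground truth
y_pred = np.array([2.8, 5.4, 2.9, 6.1, 4.3])   # made-up predictions

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}")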
No, you can use multiple evaluation metrics. The important thing is that if you compare two models, you need to use the same test dataset and the same evaluation metrics.
For example, if you want to penalize large errors heavily, you can use MSE, because it measures the average squared error of the predictions. On the other hand, if your data contain many outliers, MSE will put too much weight on those examples.
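A toy illustration of that last point, using only NumPy and invented numbers: a single badly predicted example inflates MSE far more than MAE.

import numpy as np

y_true = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
y_good = np.array([10.5, 11.5, 11.0, 12.5, 12.5])   # small errors everywhere
y_outlier = y_good.copy()
y_outlier[-1] = 30.0                                 # one prediction far off

for name, y_pred in [("no outlier", y_good), ("one outlier", y_outlier)]:
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    print(f"{name}: MSE={mse:.2f}, MAE={mae:.2f}")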
What counts as a good model changes with the complexity of your problem. For example, if you train a model that predicts heads or tails and it achieves 49% accuracy, that is not good enough, because the baseline for this problem is 50%. For a different problem, 49% accuracy might be perfectly adequate. So, in summary, it depends on your problem, and you need to define or estimate a human or baseline threshold.
I read this line today:
Every regression gets better with the addition of more features or variables... But adding more features increases complexity and reduces interpretability of the model as well.
I am unable to understand what interpretability is (I searched Google but still did not get it).
Please help, thank you.
I would say that interpretability in a regression problem means you can explain the result of your model to non-statisticians / domain experts.
For example, suppose you try to predict people's height from many variables, including sex. If you use linear regression, you will be able to say that the model adds 20 cm (again, just as an example) to the predicted height if the person is a man (compared to a woman). The domain expert can understand the relationship between the explanatory variable and the predicted result without understanding statistics or how linear regression works.
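If it helps, here is a hypothetical sketch of that idea with scikit-learn; the dataset is invented and only meant to show how the coefficients can be read off and explained.

import numpy as np
from sklearn.linear_model import LinearRegression

# Features: [is_male (0 or 1), age in years]; target: height in cm (made up).
X = np.array([[1, 30], [0, 28], [1, 45], [0, 50], [1, 22], [0, 35]])
y = np.array([182, 165, 179, 162, 184, 168])

model = LinearRegression().fit(X, y)
print("extra cm predicted for a man:", round(model.coef_[0], 1))
print("extra cm predicted per year of age:", round(model.coef_[1], 2))

The coefficient on the sex feature is exactly the kind of statement ("the model adds about X cm if the person is a man") that a domain expert can understand directly.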
In addition, I disagree with the claim that adding more features or variables always improves a regression.
What is a "better" regression? An improvement in the chosen metric? On the training set or the test set? On its own, "better regression" doesn't mean anything.
If we take a better regression to be one that predicts the target better on new data, then more variables do not always improve predictive power, especially when there is no regularization, when an added feature leaks future information, or in many other cases.
I wonder if there is any way to verify the correctness of my results after applying some data mining algorithms to a dataset. When I say data mining algorithms, I am talking about the basic ones.
If you have many examples, a simple way is to split the available data into three partitions:
training data (around 50%-60% of available examples, randomly chosen);
validation data (20%-25%);
test data (20%-25%).
Training data are used to fit the parameters of the data mining algorithm.
With validation data you can compare models/algorithms/parameters and choose a winner.
Test data give you an estimate of the winner's performance in the "real world", because they are independent (during the training/validation phase you never make any choice based on the test data).
Anyway, there are many schemes, and probably the best place to delve deeper into the matter is http://stats.stackexchange.com
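A minimal sketch of such a three-way split (here 60/20/20), assuming scikit-learn; the synthetic dataset is just a stand-in for your real data.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First split off the training portion, then halve the remainder
# into validation and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 600 200 200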
There are several ways to demonstrate the correctness of your results. First, you have to choose performance criteria:
Accuracy of algorithm
Standard Deviation of results
Computation time
Based on the chosen criterion, you have to adopt a different mechanism to demonstrate the correctness of your algorithm.
1. Accuracy of algorithm
For this you have to understand which points can be questioned when you claim that your algorithm's accuracy is XY.WZ%.
First question: is your algorithm giving better results because of over-fitting?
To check whether your algorithm is over-fitting, you can divide your data into three parts:
training data
validation data
testing data
By doing so, if you get good testing results you can be reasonably confident that your algorithm did not over-fit; a big difference between training and testing accuracy is a sign of over-fitting.
What if you find out that your algorithm over-fits?
You can use regularization techniques, which keep the weight coefficients small and help prevent over-fitting. You can learn more about this in the machine learning lectures by Andrew Ng on Coursera.
Second question: is your dataset fairly chosen?
Suppose you have 100 examples and you divide them into a 50-30-20 split (training-validation-testing). The question then is which 50 go into training, which 30 into validation, and so on. For different selections of these subsets you will get different accuracy values, so you should take 5-10 different splits and report the average of the results. This technique is known as cross-validation.
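A quick sketch of the averaging idea with k-fold cross-validation, assuming scikit-learn; the classifier and the synthetic dataset are only placeholders.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# 5-fold CV: five different train/test partitions, five accuracy values.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean(), "+/-", scores.std())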
Another way to support the correctness of your algorithm is to provide a confusion matrix in the case of multiclass classification, or sensitivity and specificity in the case of binary classification; you can look at their Wikipedia pages.
2. Standard deviation of results
If your algorithm is based on random population generation or on heuristics, then you will most likely get a different solution on each run. In this case, you should report the standard deviation over multiple runs on the same dataset with the same parameter settings.
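For example, a report could look like the sketch below; run_algorithm is a hypothetical stand-in for one run of your randomized method.

import numpy as np

def run_algorithm(seed):
    # Placeholder for one run of a stochastic/heuristic algorithm:
    # here it just returns a noisy score around 0.85 for illustration.
    return 0.85 + 0.02 * np.random.default_rng(seed).standard_normal()

scores = np.array([run_algorithm(seed) for seed in range(10)])
print(f"score = {scores.mean():.3f} +/- {scores.std(ddof=1):.3f} over {len(scores)} runs")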
3. Computation time of algorithm
This might not be important in every case, but if you are comparing your algorithm with others, you should also compare computation times. This has nothing to do with the correctness of your algorithm, but it does give a fuller picture of how your algorithm compares.
What good are proven results?
At most you will be able to prove that your implementation matches some theoretical mathematical model, or that an approximative algorithm approximates this mathematical model.
But in practice, real data will not satisfy your mathematical assumptions anyway.
Often, the best proof is: does it work?
That is, on real, unseen data. Not on the data that you used to choose your parameters, because then you are prone to overfitting.
I am new to machine learning. My problem is to make a machine select a university for a student according to his location and area of interest, i.e. it should select a university in the same city as the student's address. I am confused about the choice of algorithm; can I use the perceptron algorithm for this task?
There are no hard rules as to which machine learning algorithm is the best for which task. Your best bet is to try several and see which one achieves the best results. You can use the Weka toolkit, which implements a lot of different machine learning algorithms. And yes, you can use the perceptron algorithm for your problem -- but that is not to say that you would achieve good results with it.
From your description it sounds like the problem you're trying to solve doesn't really require machine learning. If all you want to do is match a student with the closest university that offers a course in the student's area of interest, you can do this without any learning.
I second the earlier remark that you probably don't need machine learning if the student has to live in the same area as the university. If you do want to use an ML-style approach, it would be best to think first about what data you would start with. One idea is to represent each university as a vector with one feature per subject/area, then compute its distance to an "ideal" feature vector for the student and pick the university that minimizes this distance.
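A hypothetical sketch of that distance idea, with invented names and numbers: each university is a vector of subject strengths, and we pick the one closest to the student's ideal vector.

import numpy as np

universities = {
    "University A": np.array([0.9, 0.2, 0.4]),   # e.g. [CS, biology, arts]
    "University B": np.array([0.3, 0.8, 0.5]),
    "University C": np.array([0.6, 0.6, 0.9]),
}
student_profile = np.array([0.8, 0.3, 0.5])      # the student's ideal vector

best = min(universities,
           key=lambda name: np.linalg.norm(universities[name] - student_profile))
print("closest match:", best)

Note that this is plain nearest-neighbour matching; there is still no learning involved unless you have labelled examples of good student-university pairings.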
The first and foremost thing you need is a labeled dataset.
It sounds like the problem could be framed as an ML problem; however, you first need a set of positive and negative examples to train from.
How big is your dataset? What features do you have available? Once you answer these questions you can select an algorithm that best fits the characteristics of your data.
I would suggest thinking of this problem as a set of if-else rules, which is what a decision tree resembles. You can take the student's location and area of interest as the conditions of if/else-if statements and then suggest a university. Since it is a direct mapping of inputs to outputs, a rule-based solution would work and no learning is required here.
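A toy illustration of that rule-based mapping (all names invented), just to show that it is an ordinary lookup rather than a learning problem:

catalog = {
    ("City A", "computer science"): "University X",
    ("City A", "medicine"): "University Y",
    ("City B", "computer science"): "University Z",
}

def suggest_university(city, interest):
    # Direct mapping of (location, area of interest) to a university.
    return catalog.get((city, interest), "no match found")

print(suggest_university("City A", "computer science"))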
Maybe you can use a recommender system or a clustering approach; you can look more deeply into techniques like collaborative filtering (recommender systems) or k-means (clustering). But again, as others have said, first you need data to learn from, and your problem may well be solvable without ML.
Well, there is no straightforward, sure-shot answer to this question. The answer depends on many factors, such as the problem statement, the kind of output you want, the type and size of the data, the available computation time, the number of features, and the number of observations, to name a few.
Size of the training data
Accuracy and/or Interpretability of the output
The accuracy of a model refers to how close its predicted response value for a given observation is to the true response value for that observation. A highly interpretable algorithm (a restrictive model such as linear regression) means that one can easily understand how any individual predictor is associated with the response, while flexible models tend to give higher accuracy at the cost of lower interpretability.
Speed or Training time
Higher accuracy typically means longer training time, and algorithms also need more time to train on large datasets. In real-world applications, the choice of algorithm is predominantly driven by these two factors.
Algorithms like Naïve Bayes and linear and logistic regression are easy to implement and quick to run. Algorithms like SVMs, which involve parameter tuning, neural networks with long convergence times, and random forests need much more time to train.
Linearity
Many algorithms work on the assumption that classes can be separated by a straight line (or its higher-dimensional analog). Examples include logistic regression and support vector machines. Linear regression algorithms assume that data trends follow a straight line. If the data are linear, these algorithms perform quite well.
Number of features
The dataset may have a large number of features, not all of which are relevant or significant. For certain types of data, such as genetic or textual data, the number of features can be very large compared to the number of data points.
When we fit a set of points with a high-degree polynomial in a linear regression setup, we use regularization to prevent overfitting, and we include a lambda parameter in the cost function. This lambda then appears in the update of the theta parameters in the gradient descent algorithm.
My question is how do we calculate this lambda regularization parameter?
The regularization parameter (lambda) is an input to your model, so what you probably want to know is how to select the value of lambda. The regularization parameter reduces overfitting, which reduces the variance of your estimated regression parameters; however, it does this at the expense of adding bias to your estimate. Increasing lambda results in less overfitting but also greater bias. So the real question is "How much bias are you willing to tolerate in your estimate?"
One approach you can take is to randomly subsample your data a number of times and look at the variation in your estimate. Then repeat the process for a slightly larger value of lambda to see how it affects the variability of your estimate. Keep in mind that whatever value of lambda you decide is appropriate for your subsampled data, you can likely use a smaller value to achieve comparable regularization on the full data set.
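A rough sketch of that subsampling idea, assuming scikit-learn and a synthetic dataset: for each candidate lambda, refit a ridge regression on several random subsamples and see how much the coefficients vary.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=1000, n_features=10, noise=20.0, random_state=0)
rng = np.random.default_rng(0)

for lam in [0.01, 1.0, 100.0]:
    coefs = []
    for _ in range(20):
        idx = rng.choice(len(X), size=200, replace=False)   # random subsample
        coefs.append(Ridge(alpha=lam).fit(X[idx], y[idx]).coef_)
    spread = np.std(coefs, axis=0).mean()
    print(f"lambda={lam}: mean coefficient std across subsamples = {spread:.3f}")

Larger lambda should shrink the variability of the estimates; how much extra bias you accept in exchange is the judgment call described above.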
CLOSED FORM (TIKHONOV) VERSUS GRADIENT DESCENT
Hi! Nice explanations of the intuitive and mathematical approaches above. I just wanted to add some specifics that, while not "problem-solving" in themselves, may help to speed up and bring some consistency to the process of finding a good regularization hyperparameter.
I assume that you are talking about L2 (a.k.a. "weight decay") regularization, linearly weighted by the lambda term, and that you are optimizing the weights of your model either with the closed-form Tikhonov equation (highly recommended for low-dimensional linear regression models) or with some variant of gradient descent with backpropagation, and that in this context you want to choose the value of lambda that provides the best generalization ability.
CLOSED FORM (TIKHONOV)
If you are able to go the Tikhonov way with your model (Andrew Ng says under 10k dimensions, but that suggestion is at least 5 years old), Wikipedia - determination of the Tikhonov factor offers an interesting closed-form solution, which has been proven to provide the optimal value. But this solution probably raises implementation issues (time complexity / numerical stability) that I'm not aware of, because there is no mainstream algorithm for it. This 2016 paper looks very promising, though, and may be worth a try if you really have to squeeze the most out of your linear model.
For a quicker prototype implementation, this 2015 Python package seems to deal with it iteratively; you could let it optimize and then extract the final value of lambda:
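For reference, the basic closed-form (Tikhonov/ridge) solution itself is short; here is a NumPy sketch of w = (X^T X + lambda*I)^(-1) X^T y for a fixed lambda. Choosing that lambda is of course the hard part discussed here; the data below are synthetic.

import numpy as np

def ridge_closed_form(X, y, lam):
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)   # solve() is more stable than an explicit inverse

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(100)

print(ridge_closed_form(X, y, lam=0.1))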
In this new innovative method, we have derived an iterative approach to solving the general Tikhonov regularization problem, which converges to the noiseless solution, does not depend strongly on the choice of lambda, and yet still avoids the inversion problem.
And from the GitHub README of the project:
InverseProblem.invert(A, be, k, l) #this will invert your A matrix, where be is noisy be, k is the no. of iterations, and lambda is your dampening effect (best set to 1)
GRADIENT DESCENT
All links of this part are from Michael Nielsen's amazing online book "Neural Networks and Deep Learning", recommended reading!
For this approach there seems to be even less to say: the cost function is usually non-convex, the optimization is performed numerically, and the performance of the model is measured by some form of cross-validation (see Overfitting and Regularization and why does regularization help reduce overfitting if you haven't had enough of that). But even when cross-validating, Nielsen suggests something: you may want to look at this detailed explanation of how L2 regularization provides a weight-decaying effect, but the summary is that the effect is inversely proportional to the number of samples n, so when calculating the gradient descent equation with the L2 term,
just use backpropagation, as usual, and then add (λ/n)*w to the partial derivative of all the weight terms.
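As a bare-bones illustration of that update (for plain linear regression rather than a full network, and with invented data): the L2 term simply adds (lambda/n)*w to the gradient of every weight, while the bias is usually left unregularized.

import numpy as np

def l2_gradient_step(w, b, X, y, lam, lr):
    n = len(y)
    error = X @ w + b - y
    grad_w = X.T @ error / n + (lam / n) * w   # data gradient + weight decay term
    grad_b = error.mean()                      # bias is not decayed
    return w - lr * grad_w, b - lr * grad_b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(200)

w, b = np.zeros(3), 0.0
for _ in range(500):
    w, b = l2_gradient_step(w, b, X, y, lam=0.1, lr=0.1)
print(w, b)   # close to the true weights, slightly shrunk by the decay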
And his conclusion is that, when wanting a similar regularization effect with a different number of samples, lambda has to be changed proportionally:
we need to modify the regularization parameter. The reason is because the size n of the training set has changed from n=1000 to n=50000, and this changes the weight decay factor 1−learning_rate*(λ/n). If we continued to use λ=0.1 that would mean much less weight decay, and thus much less of a regularization effect. We compensate by changing to λ=5.0.
This is only useful when applying the same model to different amounts of the same data, but I think it opens the door to some intuition about how it should work and, more importantly, speeds up the hyperparametrization process by allowing you to fine-tune lambda on smaller subsets and then scale up.
For choosing exact values, he suggests in his conclusions on how to choose a neural network's hyperparameters a purely empirical approach: start with 1 and then progressively multiply or divide by 10 until you find the proper order of magnitude, and then do a local search within that region. In the comments of this related SE question, the user Brian Borchers also suggests a very well-known method that may be useful for that local search:
Take small subsets of the training and validation sets (to be able to make many of them in a reasonable amount of time)
Starting with λ=0 and increasing it by small amounts within some region, perform a quick training and validation of the model and plot both loss functions (a rough sketch of this sweep is given after the list of observations below)
You will observe three things:
The CV loss function will be consistently higher than the training one, since your model is optimized for the training data exclusively (EDIT: After some time I've seen a MNIST case where adding L2 helped the CV loss decrease faster than the training one until convergence. Probably due to the ridiculous consistency of the data and a suboptimal hyperparametrization though).
The training loss function will have its minimum for λ=0, and then increase with the regularization, since preventing the model from optimally fitting the training data is exactly what regularization does.
The CV loss function will start high at λ=0, then decrease, and then start increasing again at some point (EDIT: this assuming that the setup is able to overfit for λ=0, i.e. the model has enough power and no other regularization means are heavily applied).
The optimal value of λ will probably be somewhere around the minimum of the CV loss function; it may also depend a little on how the training loss function looks. See the picture for a possible (but not the only) representation of this: instead of "model complexity", interpret the x axis as λ being zero at the right and increasing towards the left.
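Here is a rough sketch of that sweep, assuming scikit-learn and ridge regression as the model; real use would plot both curves (left as a comment) and read off the region where the validation loss bottoms out.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=50, noise=15.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

lambdas = np.logspace(-3, 3, 13)
train_loss, val_loss = [], []
for lam in lambdas:
    model = Ridge(alpha=lam).fit(X_tr, y_tr)
    train_loss.append(mean_squared_error(y_tr, model.predict(X_tr)))
    val_loss.append(mean_squared_error(y_val, model.predict(X_val)))

for lam, tr, va in zip(lambdas, train_loss, val_loss):
    print(f"lambda={lam:9.3f}  train MSE={tr:10.2f}  val MSE={va:10.2f}")
# import matplotlib.pyplot as plt
# plt.plot(lambdas, train_loss); plt.plot(lambdas, val_loss); plt.xscale("log"); plt.show()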
Hope this helps! Cheers,
Andres
The cross-validation described above is a method used often in machine learning. However, choosing a reliable and safe regularization parameter is still a very active topic of research in mathematics.
If you need some ideas (and have access to a decent university library) you can have a look at this paper:
http://www.sciencedirect.com/science/article/pii/S0378475411000607