Suppose we have a set of inputs (named x1, x2, ..., xn) that give us the output y. The goal is to predict y from values of x1...xn that have not been seen yet. It's clear to me that this problem can be modelled as a regression problem in the realm of machine learning.
However, let's say data keeps coming. I'm able to predict y from x1...xn, and I'm able to check afterwards whether or not that prediction was a good one. If it was, everything is fine. On the other hand, I would like to update my model whenever a prediction deviates a lot from the real y. The only way I can see to do this is to insert the new data into my training set and train the regression algorithm again. Two problems arise from that. First, it may cost more than I can afford to recompute my model from scratch from time to time. Second, I may already have so much data in my training set that newly arriving data has negligible influence; yet the new data might be more important than the older data due to the nature of my problem.
It seems that a good solution would be to compute a kind of continuous regression that is more influenced by the new data than by the older data. I have searched for such an approach but have not found anything relevant. Perhaps I'm looking in the wrong direction. Does anyone have a clue on how to do this?
If you want to consider the newer data more important, you have to use weights. In scikit-learn (if you use this library) this is the sample_weight argument of the fit() function.
Weights can be defined, for example, as 1 / (time elapsed since the observation), so that older samples get smaller weights.
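For example, a minimal scikit-learn sketch of such recency weighting (X, y and the time stamps t below are placeholders for your own data):

import numpy as np
from sklearn.linear_model import LinearRegression

# X: (n_samples, n_features), y: (n_samples,), t: observation times (larger = newer)
X = np.random.rand(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(200)
t = np.arange(200)

# Weight each sample by the inverse of its age; the newest sample gets weight 1.
age = t.max() - t + 1
weights = 1.0 / age

model = LinearRegression()
model.fit(X, y, sample_weight=weights)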
Now about the second problem. If the recalculation takes too much time, you can cut your observations and use only the latest ones. Fit your model once on the whole data and once on the fresh data plus some part of the old data, and check how much the weights change. I suspect that if you really have a dependence between {x_i} and {y}, you don't need the whole dataset.
Otherwise you can use weights again, but this time you will weight the weights of the models themselves:
model for old data: w1*x1 + w2*x2 + ...
model for new data: ~w1*x1 + ~w2*x2 + ...
common model: (w1*a1_1 + ~w1*a1_2)*x1 + (w2*a2_1 + ~w2*a2_2)*x2 + ...
Here a1_1, a2_1 are the weights for the old model, a1_2, a2_2 for the new one; w1, w2 are the coefficients of the old model and ~w1, ~w2 of the new one.
The parameters {a} can be estimated as in the first part (by hand), but you can also create another linear model to estimate them. My advice: don't use non-linear regression for {a}, you will overfit.
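A rough numpy sketch of that blending (the coefficients below are placeholders chosen by hand, as discussed above):

import numpy as np

# Coefficients of the model fitted on old data and of the model fitted on new data (placeholders).
w_old = np.array([0.8, -1.1, 0.3])
w_new = np.array([1.0, -0.9, 0.4])

# Per-feature blend parameters {a}; here set by hand to favour the newer model.
a_old = np.full_like(w_old, 0.3)
a_new = np.full_like(w_new, 0.7)

# Common model: (w1*a1_1 + ~w1*a1_2)*x1 + (w2*a2_1 + ~w2*a2_2)*x2 + ...
w_common = a_old * w_old + a_new * w_new

x = np.array([1.0, 2.0, 3.0])  # one input vector
prediction = w_common @ x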
How exactly are the theta parameters (AKA weights) updated after the errors have been calculated?
I do not know how to write LaTeX here, so I will show pictures (from Andrew Ng's machine learning course) of what I know has to be done before the theta/weights can be updated.
What is done next using the calculated delta "accumulator" matrix, or rather, how do I apply the accumulator matrix to update my weights?
In the course Andrew feeds the computations shown below into an optimizer such as fminunc, which does the parameter updating automatically, but I want to understand the updating step that is hidden away by the optimizer. I want to know how to compute the update manually.
Do I simply apply New Weights = Old Weights - (learning rate x D matrix) (from step 5 below)?
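In code, I imagine the update would look something like this (a minimal sketch, assuming D is the averaged gradient for one layer and alpha is the learning rate):

import numpy as np

alpha = 0.01                      # learning rate (assumed)
theta = np.random.randn(5, 4)     # weight matrix for one layer (placeholder shape)
D = np.random.randn(5, 4)         # accumulated/averaged gradient for the same layer

# One plain gradient-descent step: move the weights against the gradient.
theta = theta - alpha * D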
I am new to machine learning. I have a question regarding polynomial regression using one feature.
My understanding is that if there is one input feature, we can create a hypothesis function by taking the squares and cubes of that feature.
Suppose x1 is the input feature and our hypothesis function becomes something like this:
h_theta(x) = theta0 + theta1*x1 + theta2*x1^2 + theta3*x1^3
My question is: what is the use case for such a scenario? For what type of data will this type of hypothesis function help?
This scenario is for simple curve-fitting problems. For example, you might have a spring and want to know how far the spring stretches as a function of how much force you apply (the spring needn't be a linear spring obeying Hooke's law). You could build a model by collecting a bunch of measurements of different forces applied to the spring (measured in Newtons) and the resulting spring extension (also called displacement) in centimeters. You could then build a model of the form F(x) = theta_1 * x + theta_2 * x^3 + theta_3 * x^5 and fit the three theta parameters. You could of course do this with any other single-variable problem (height vs. age, weight vs. blood pressure, current vs. voltage). In practice, though, you generally have many more than just one input variable.
Also worth pointing out that the transformations needn't be polynomial in the input variable (x in this case). You could just as well try logs, square roots, exponentials etc. If you're asking why it is always a parameter times a function of the input variable, this is more of a modeling choice than anything (specifically a linear model, since it's linear in theta). It does not have to be this way; it is a simple assumption that restricts the class of functions. Linear models also satisfy some intuitive statistical properties which further justify their use (see here).
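For illustration, a minimal numpy sketch of fitting the spring model above by least squares (the force/extension data here is synthetic, only to show the mechanics):

import numpy as np

# Synthetic spring data: extension x (cm) and measured force F (N).
x = np.linspace(0.0, 5.0, 50)
F = 2.0 * x + 0.1 * x**3 + 0.002 * x**5 + 0.5 * np.random.randn(50)

# Design matrix with the chosen basis functions x, x^3, x^5 (the model stays linear in the thetas).
A = np.column_stack([x, x**3, x**5])
theta, *_ = np.linalg.lstsq(A, F, rcond=None)

predicted = A @ theta   # fitted force at each measured extension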
I have a dataset X whose shape is (1741, 61). Using logistic regression with cross-validation I was getting around 62-65% accuracy for each split (cv=5).
I thought that if I made the data quadratic, the accuracy was supposed to increase. However, I'm getting the opposite effect (each cross-validation split now scores in the 40s, percentage-wise), so I'm presuming I'm doing something wrong when trying to make the data quadratic?
Here is the code I'm using,
from sklearn import preprocessing
X_scaled = preprocessing.scale(X)
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(3)
poly_x = poly.fit_transform(X_scaled)
from sklearn.linear_model import LogisticRegression  # import missing in the original snippet
classifier = LogisticRegression(penalty='l2', max_iter=200)
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in newer scikit-learn versions
cross_val_score(classifier, poly_x, y, cv=5)
array([ 0.46418338, 0.4269341 , 0.49425287, 0.58908046, 0.60518732])
Which makes me suspect I'm doing something wrong.
I also tried transforming the raw data into quadratic form first and then scaling it with preprocessing.scale, but that produced the following warning:
UserWarning: Numerical issues were encountered when centering the data and might not be solved. Dataset may contain too large values. You may need to prescale your features.
So I didn't bother going this route.
The other thing that's bothering me is the speed of the polynomial computations. cross_val_score takes around a couple of hours to output the scores when using polynomial features. Is there any way to speed this up? I have an Intel i5-6500 CPU with 16 GB of RAM, running Windows 7.
Thank you.
Have you tried using MinMaxScaler instead of the standard scaler? The standard scaler outputs values both above and below 0, so you run into a situation where a value scaled to -0.1 and one scaled to 0.1 have the same squared value, despite not really being similar at all. Intuitively this would seem to be something that would lower the score of a polynomial fit. That said, I haven't tested this; it's just my intuition. Furthermore, be careful with polynomial fits. I suggest reading this answer to "Why use regularization in polynomial regression instead of lowering the degree?". It's a great explanation and will likely introduce you to some new techniques. As an aside, @MatthewDrury is an excellent teacher and I recommend reading all of his answers and blog posts.
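For example, a minimal sketch of swapping in MinMaxScaler before building the polynomial features (the rest of your pipeline stays the same):

from sklearn.preprocessing import MinMaxScaler

# Scale every feature into [0, 1], so squaring never maps dissimilar values onto the same result.
X_scaled = MinMaxScaler().fit_transform(X)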
There is a statement that "the accuracy is supposed to increase" with polynomial features. That is true only if the polynomial features bring the model closer to the original data-generating process. Polynomial features, especially ones that make every feature interact and polynomial, may move the model further from the data-generating process; hence worse results may be appropriate.
By using a degree-3 polynomial in scikit-learn, the X matrix went from (1741, 61) to (1741, 41664), which is significantly more columns than rows.
41k+ columns will take longer to solve. You should be looking at feature selection methods. As Grr says, investigate lowering the polynomial degree. Try L1 regularization, grouped lasso, RFE, or Bayesian methods. Try SMEs (subject matter experts who may be able to identify specific features that are likely to be polynomial). Plot the data to see which features may interact or are best treated as polynomial.
I have not looked at it for a while, but I recall discussions on hierarchically well-formulated models (whether you can remove x1 but keep the x1 * x2 interaction). That is probably worth investigating if your model behaves best with an ill-formulated hierarchy.
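As one concrete example of the L1 suggestion above, a minimal scikit-learn sketch that keeps only the polynomial features an L1-penalized logistic regression finds useful (poly_x and y refer to the arrays from the question):

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# L1-penalized logistic regression drives many coefficients to exactly zero.
l1_model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)

# Keep only the features that end up with non-zero coefficients.
selector = SelectFromModel(l1_model)
poly_x_reduced = selector.fit_transform(poly_x, y)
print(poly_x_reduced.shape)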
I am using LSTM neural networks (stateful) for time series prediction.
I'm hoping that the stateful LSTM can capture the hidden patterns and make a satisfactory prediction (the physical law that causes the variation of the time series is not clear).
I have a time series X with a length of 1500 (actual observational data), and my purpose is to predict the next 100 values.
I suppose predicting the next 10 will be more promising than predicting the next 100 (is that right?).
So, I prepare the training data like this (always using 100 values to predict the next 10; x_n denotes the n-th element in X):
shape of trainX: [140, 100, 1]
shape of trainY: [140, 10, 1]
---
0: [x_0, x_1, ..., x_99] -> [x_100, x_101, ..., x_109]
1: [x_10, x_11, ..., x_109] -> [x_110, x_111, ..., x_119]
2: [x_20, x_21, ..., x_119] -> [x_120, x_121, ..., x_129]
...
139: [x_1390, x_1391, ..., x_1489] -> [x_1490, x_1491, ..., x_1499]
---
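A minimal numpy sketch of this windowing (assuming X is a 1-D array of length 1500) could look like:

import numpy as np

X = np.random.randn(1500)   # placeholder for the actual observed series

window, horizon, stride = 100, 10, 10
samples, targets = [], []
for start in range(0, len(X) - window - horizon + 1, stride):
    samples.append(X[start:start + window])
    targets.append(X[start + window:start + window + horizon])

trainX = np.array(samples)[:, :, None]   # shape (140, 100, 1)
trainY = np.array(targets)[:, :, None]   # shape (140, 10, 1)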
After the training, I want to use the model to predict the next 10 values [x_1500 - x_1509] with [x_1400 - x_1499], and then predict the next 10 values [x_1510 - x_1519] with [x_1410 - x_1509].
Is this the right way?
After a lot of reading of documents and examples, I can train a model and make the prediction, but the results are not satisfactory.
To validate the method, I assume that the last 100 values (x_1400 - x_1499) are unknown, remove them from trainX and trainY, then train a model and predict them. Lastly, I compare the predicted values with the observed values.
Any suggestions or comments will be appreciated.
The time series looks like this:
Your question is really complex. Before I try to answer it, I'll share my doubts about whether it is sensible to use an LSTM for your task at all. You want to apply a really advanced model (LSTMs are capable of learning really complex patterns) to a time series that seems to be relatively simple. Moreover, you have a really small amount of data. To be honest, I would try simpler and easier methods first (like ARMA or ARIMA).
To answer your question: your approach seems reasonable. Other reasonable options are predicting all 100 steps at once or, e.g., 50 steps twice. With 10 steps you might run into error accumulation, but it can still be a good method.
As I mentioned earlier, I would rather try an easier ML method for this task, but if you really want to use an LSTM you can tackle the problem in the following way:
Define hyperparameters such as the number of steps ahead you want to predict and the size of the input fed to the network.
Use e.g. grid search to find the best values of these hyperparameters, evaluating each setup with k-fold cross-validation.
Retrain the final model using the best hyperparameter setup.
You have a relatively small amount of data, so you can easily find the best hyperparameter values. This will also show you whether your approach is good or not: simply check the results provided by the best setup.
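For example, a rough sketch of such a grid search (build_and_evaluate is a hypothetical stand-in for your own LSTM training plus k-fold evaluation code):

import itertools
import numpy as np

def build_and_evaluate(series, window, horizon, n_folds=5):
    # Hypothetical helper: build windows of the given size, train the model on each
    # training fold and return the average validation error over the folds.
    fold_errors = []
    for fold in range(n_folds):
        fold_errors.append(np.random.rand())   # placeholder for the real fold error
    return np.mean(fold_errors)

series = np.random.randn(1500)                           # placeholder for the observed series
grid = itertools.product([50, 100, 200], [5, 10, 20])    # candidate (input size, steps ahead)

best = min(grid, key=lambda p: build_and_evaluate(series, *p))
print('best (input size, steps ahead):', best)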
While following the Coursera Machine Learning class, I wanted to test what I learned on another dataset and plot the learning curve for different algorithms.
I (quite randomly) chose the Online News Popularity Data Set, and tried to apply a linear regression to it.
Note: I'm aware it's probably a bad choice, but I wanted to start with linear regression to see later how other models would fit better.
I trained a linear regression and plotted the following learning curve:
This result is particularly surprising to me, so I have some questions about it:
Is this curve even remotely possible or is my code necessarily flawed?
If it is correct, how can the training error grow so quickly when adding new training examples? How can the cross validation error be lower than the train error?
If it is not, any hint to where I made a mistake?
Here's my code (Octave / Matlab) just in case:
Plot:
lambda = 0;
startPoint = 5000;
stepSize = 500;
[error_train, error_val] = ...
learningCurve([ones(mTrain, 1) X_train], y_train, ...
[ones(size(X_val, 1), 1) X_val], y_val, ...
lambda, startPoint, stepSize);
plot(error_train(:,1),error_train(:,2),error_val(:,1),error_val(:,2))
title('Learning curve for linear regression')
legend('Train', 'Cross Validation')
xlabel('Number of training examples')
ylabel('Error')
Learning curve:
S = ['Reg with '];
for i = startPoint:stepSize:m
temp_X = X(1:i,:);
temp_y = y(1:i);
% Initialize Theta
initial_theta = zeros(size(X, 2), 1);
% Create "short hand" for the cost function to be minimized
costFunction = @(t) linearRegCostFunction(X, y, t, lambda);
% Now, costFunction is a function that takes in only one argument
options = optimset('MaxIter', 50, 'GradObj', 'on');
% Minimize using fmincg
theta = fmincg(costFunction, initial_theta, options);
[J, grad] = linearRegCostFunction(temp_X, temp_y, theta, 0);
error_train = [error_train; [i J]];
[J, grad] = linearRegCostFunction(Xval, yval, theta, 0);
error_val = [error_val; [i J]];
fprintf('%s %6i examples \r', S, i);
fflush(stdout);
end
Edit: if I shuffle the whole dataset before splitting train/validation and building the learning curve, I get very different results, like the 3 following:
Note: the training set size is always around 24k examples, and the validation set around 8k examples.
Is this curve even remotely possible or is my code necessarily flawed?
It's possible, but not very likely. You might be picking the hard-to-predict instances for the training set and the easy ones for the test set all the time. Make sure you shuffle your data and use 10-fold cross-validation.
Even if you do all this, it is still possible for it to happen, without necessarily indicating a problem in the methodology or the implementation.
If it is correct, how can the training error grow so quickly when adding new training examples? How can the cross validation error be lower than the train error?
Let's assume that your data can only be properly fitted by a 3rd-degree polynomial, and you're using linear regression. This means that the more data you add, the more obvious it becomes that your model is inadequate (higher training error). Now, if you choose few instances for the test set, the error will be smaller, because linear vs. 3rd-degree might not show a big difference on too few test instances for this particular problem.
For example, if you do some regression on 2D points, and you always pick 2 points for your test set, you will always have 0 error for linear regression. An extreme example, but you get the idea.
How big is your test set?
Also, make sure that your test set remains constant throughout the plotting of the learning curves. Only the train set should increase.
If it is not, any hint to where I made a mistake?
Your test set might not be large enough, or your train and test sets might not be properly randomized. You should shuffle the data and use 10-fold cross-validation.
You might want to also try to find other research regarding that data set. What results are other people getting?
Regarding the update
That makes a bit more sense, I think. Test error is generally higher now. However, those errors look huge to me. Probably the most important information this gives you is that linear regression is very bad at fitting this data.
Once more, I suggest you use 10-fold cross-validation for the learning curves. Think of it as averaging all of your current plots into one. Also shuffle the data before running the process.
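For instance, a minimal scikit-learn sketch of a shuffled, 10-fold cross-validated learning curve (shown in Python rather than Octave; X and y here are placeholders standing in for the dataset):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, learning_curve

X = np.random.rand(32000, 58)   # placeholder for the Online News Popularity features
y = np.random.rand(32000)       # placeholder target

# 10 shuffled folds; each point on the curve is averaged over the folds.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
train_sizes, train_scores, val_scores = learning_curve(
    LinearRegression(), X, y, cv=cv,
    train_sizes=np.linspace(0.1, 1.0, 10),
    scoring='neg_mean_squared_error')

print(-train_scores.mean(axis=1))   # average training error per training-set size
print(-val_scores.mean(axis=1))     # average validation error per training-set size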