Time series forecasting using the caret package in R

I am trying to forecast economic data using the caret package. Is there a way to predict values for the coming years?
library(mlbench)
library(caret)
library(pROC)
library(caTools)
library(ROCR)

# Rolling-origin (time slice) resampling: 36 periods of initial training data,
# a 12-period forecast horizon, and a growing (non-fixed) window
myTimeControl <- trainControl(method = "timeslice", initialWindow = 36,
                              horizon = 12, fixedWindow = FALSE,
                              allowParallel = TRUE, classProbs = TRUE,
                              summaryFunction = twoClassSummary,
                              verboseIter = TRUE)

# Random forest, tuned on ROC AUC
modelRF <- train(as.factor(class) ~ ., data = TestData, method = "rf",
                 metric = "ROC", ntree = 1000,
                 preProc = c("center", "scale"),
                 trControl = myTimeControl)
How can I predict the class for the coming years?

You'll need to call the predict method on the fitted model, passing the x data (the predictor values) for the periods you want to forecast.

For forecasting a time series, first recast it as a regression problem; then you can use extreme gradient boosting, which is slow to train but very accurate. I have applied many models for time series forecasting, such as ARIMA, Prophet, Holt-Winters, and exponential smoothing, but my forecasting accuracy was best with extreme gradient boosting, even though it is a regression model.
train(QTY ~ BILLDATE1,
      data = train1,
      method = "xgbDART",
      preProc = c("center", "scale"))

Tree-based models tend to be weak predictors of time-indexed economic data, although they have some success with panel data. You will do better with support vector machines or LSTMs for time-indexed economic forecasts.

Related

TCN model result seems to be offset by one time step

I have been training a TCN model on a temperature dataset and have tested it on the small amount of data available. It is a bivariate model: the inputs are maximum and minimum temperature, and the target is maximum temperature. I want to know whether this is overfitting or whether the graph is right (graph here).
Below is my model
# Imports assumed (not shown in the original): tf.keras plus the keras-tcn package
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tcn import TCN

i = Input(shape=(lookback_window, 2))  # two inputs: max and min temperature
m = TCN(nb_filters=64)(i)
m = Dense(3, activation='linear')(m)
model = Model(inputs=[i], outputs=[m])
model.summary()
The summary of the model is given here
To evaluate accuracy during the epochs I use:
acc = sklearn.metrics.accuracy_score(y_true, y_pred)

LSTM time series forecast for multiple time series (single model vs. individual model performance)

I would like to use a deep (stacked) LSTM model to forecast disk utilisation for all my clusters. What I am finding is that an individual model for a single cluster's time series gives more accurate forecasts than a single model trained on all the time series together. The single model seems to average out the forecasts, even though I include the cluster ID as a feature passed to the model.
My training-sequence logic is given below; each time series has two features: ['utilisation', 'clusterID'].
import numpy as np

def create_dataset(sequences, n_steps_in, n_steps_out):
    X, y = [], []
    for i in range(len(sequences)):
        # find the end of this pattern
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out - 1
        # check if we are beyond the dataset
        if out_end_ix > len(sequences):
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequences[i:end_ix, :], sequences[end_ix - 1:out_end_ix, 0]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
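For context, a minimal sketch of how create_dataset might be called, using a toy utilisation/cluster-ID array (the data and shapes here are purely illustrative, not from the question):

import numpy as np

# Hypothetical toy series: 100 time steps, columns = [utilisation, clusterID]
toy_series = np.column_stack([np.random.rand(100), np.ones(100)])

# Use 24 past steps to predict the next 12 utilisation values
X, y = create_dataset(toy_series, 24, 12)
print(X.shape, y.shape)  # (66, 24, 2) (66, 12) with these settings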
- **Model**
trainX = trainX.reshape((trainX.shape[0], trainX.shape[1], features))
trainY = trainY.reshape((trainY.shape[0], trainY.shape[1]))
input = Input(batch_input_shape=(batch_size, look_back, features), name='input', dtype='float32')
optimizer = keras.optimizers.RMSprop(learning_rate=0.0001, rho=0.9, epsilon=None, decay=0.0)
model = Sequential()
model.add(Bidirectional(LSTM(800, return_sequences=True, return_state=False,
                             input_shape=(look_back, features), stateful=stateful)))
model.add(LSTM(800, return_sequences=True, return_state=False, stateful=stateful))
# 'attention' is a custom layer and 'rmse' a custom metric, defined elsewhere in my code
model.add(attention(return_sequences=False))
model.add(Dense(units=out_num))
model.compile(metrics=[rmse], run_eagerly=True, loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, batch_size=5000, epochs=100, verbose=2, validation_split=0.1)
- **And the prediction logic is as follows**
testX, testY = create_dataset(predictList.values,look_back,out_num)
reshaped_test=np.reshape(testX[0],(1,look_back,2))
futureStepPredict = model.predict(reshaped_test)
I was expecting that an LSTM model could be trained on multiple time series at once and, given the cluster ID as input, produce individual forecasts with almost the same accuracy as a per-cluster model. Is this a wrong expectation?

PyTorch: Predicting future values with LSTM

I'm currently working on building an LSTM model to forecast time-series data using PyTorch. I used lag features to pass the previous n steps as inputs to train the network. I split the data into three sets (train, validation, test) and used the first two to train the model. My validation function takes data from the validation set and calculates the predicted values by passing it to the LSTM model using the DataLoader and TensorDataset classes. Initially, I got pretty good results, with R2 values in the region of 0.85-0.95.
However, I have an uneasy feeling about whether this validation function is also suitable for testing my model's performance, because it takes the actual X values (the time-lag features) from the DataLoader to predict the y^ values (predicted target values), instead of feeding the predicted y^ values back in as features for the next prediction. This seems far from reality, where the model has no clue about the real values of the previous time steps, especially if you forecast time-series data over longer periods, say 3-6 months.
I'm currently a bit puzzled about tackling this issue and defining a function to predict future values relying on the model's values rather than the actual values in the test set. I have the following function predict, which makes a one-step prediction, but I haven't really figured out how to predict the whole test dataset using DataLoader.
def predict(self, x):
    # convert row to data
    x = x.to(device)
    # make prediction
    yhat = self.model(x)
    # retrieve numpy array
    yhat = yhat.to(device).detach().numpy()
    return yhat
You can find how I split and load my datasets, my constructor for the LSTM model, and the validation function below. If you need more information, please do not hesitate to reach out to me.
Splitting and Loading Datasets
def create_tensor_datasets(X_train_arr, X_val_arr, X_test_arr, y_train_arr, y_val_arr, y_test_arr):
    train_features = torch.Tensor(X_train_arr)
    train_targets = torch.Tensor(y_train_arr)
    val_features = torch.Tensor(X_val_arr)
    val_targets = torch.Tensor(y_val_arr)
    test_features = torch.Tensor(X_test_arr)
    test_targets = torch.Tensor(y_test_arr)
    train = TensorDataset(train_features, train_targets)
    val = TensorDataset(val_features, val_targets)
    test = TensorDataset(test_features, test_targets)
    return train, val, test

def load_tensor_datasets(train, val, test, batch_size=64, shuffle=False, drop_last=True):
    train_loader = DataLoader(train, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last)
    val_loader = DataLoader(val, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last)
    test_loader = DataLoader(test, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last)
    return train_loader, val_loader, test_loader
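A minimal sketch of wiring these two helpers together, assuming the *_arr arrays already exist from an earlier train/validation/test split (not shown in the question):

# Hypothetical wiring of the two helpers above
train, val, test = create_tensor_datasets(X_train_arr, X_val_arr, X_test_arr,
                                          y_train_arr, y_val_arr, y_test_arr)
train_loader, val_loader, test_loader = load_tensor_datasets(train, val, test,
                                                             batch_size=64)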
LSTM class
class LSTMModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, dropout_prob):
        super(LSTMModel, self).__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        self.lstm = nn.LSTM(
            input_dim, hidden_dim, layer_dim, batch_first=True, dropout=dropout_prob
        )
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x, future=False):
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
        out = out[:, -1, :]
        out = self.fc(out)
        return out
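For completeness, a hedged sketch of instantiating the class above; the hyperparameter values below are placeholders, not the ones used in the question:

import torch
import torch.nn as nn

# Placeholder hyperparameters, purely illustrative
model = LSTMModel(input_dim=8, hidden_dim=64, layer_dim=2, output_dim=1, dropout_prob=0.2)

# A dummy batch: 16 sequences, 10 time steps, 8 lag features each
dummy = torch.randn(16, 10, 8)
print(model(dummy).shape)  # torch.Size([16, 1])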
Validation (defined within a trainer class)
def validation(self, val_loader, batch_size, n_features):
    with torch.no_grad():
        predictions = []
        values = []
        for x_val, y_val in val_loader:
            x_val = x_val.view([batch_size, -1, n_features]).to(device)
            y_val = y_val.to(device)
            self.model.eval()
            yhat = self.model(x_val)
            predictions.append(yhat.cpu().detach().numpy())
            values.append(y_val.cpu().detach().numpy())
    return predictions, values
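To connect this with the R2 figures mentioned above, here is a small sketch (assuming scikit-learn is available) of turning the batched outputs of validation() into a single score:

import numpy as np
from sklearn.metrics import r2_score

# predictions and values are the lists of per-batch arrays returned by validation()
y_pred = np.concatenate([p.ravel() for p in predictions])
y_true = np.concatenate([v.ravel() for v in values])
print("validation R2:", r2_score(y_true, y_pred))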
I've finally found a way to forecast values based on predicted values from the earlier observations. As expected, the predictions were rather accurate in the short-term, slightly becoming worse in the long term. It is not so surprising that the future predictions digress over time, as they no longer depend on the actual values. Reflecting on my results and the discussions I had on the topic, here are my take-aways:
In real-life cases, the real values can be retrieved and fed into the model at each step of the prediction (be it weekly, daily, or hourly), so that the next step can be predicted with the actual values from the previous step. So, testing performance based on the actual values from the test set may somewhat reflect the real performance of a model that is maintained regularly.
However, for predicting future values in the long term, forecasting, if you will, you need to make either multiple one-step predictions or multi-step predictions that span over the time period you wish to forecast.
Making multiple one-step predictions based on the values predicted by the model yields plausible results in the short term. As the forecasting period increases, the predictions become less accurate and therefore less fit for the purpose of forecasting.
To make multiple one-step predictions and update the input after each prediction, we have to work our way through the dataset one by one, as if we are going through a for-loop over the test set. Not surprisingly, this makes us lose all the computational advantages that matrix operations and mini-batch training provide us.
An alternative could be predicting sequences of values, instead of predicting the next value only, say using RNNs with multi-dimensional output with many-to-many or seq-to-seq structure. They are likely to be more difficult to train and less flexible to make predictions for different time periods. An encoder-decoder structure may prove useful for solving this, though I have not implemented it by myself.
You can find the code for my function that forecasts the next n_steps based on the last row of the dataset X (time-lag features) and y (target value). To iterate over each row in my dataset, I would set batch_size to 1 and n_features to the number of lagged observations.
def forecast(self, X, y, batch_size=1, n_features=1, n_steps=100):
    predictions = []
    X = torch.roll(X, shifts=1, dims=2)
    X[..., -1, 0] = y.item(0)
    with torch.no_grad():
        self.model.eval()
        for _ in range(n_steps):
            X = X.view([batch_size, -1, n_features]).to(device)
            yhat = self.model(X)
            yhat = yhat.to(device).detach().numpy()
            X = torch.roll(X, shifts=1, dims=2)
            X[..., -1, 0] = yhat.item(0)
            predictions.append(yhat)
    return predictions
The following line shifts the values along the last dimension (dims=2) of the tensor by one, so that a tensor [[[x1, x2, x3, ..., xn]]] becomes [[[xn, x1, x2, ..., x(n-1)]]].
X = torch.roll(X, shifts=1, dims=2)
And the line below selects the first element of the last dimension of the 3-D tensor and sets it to the predicted value stored in the NumPy array yhat, [[x(n+1)]]. The new input tensor then becomes [[[x(n+1), x1, x2, ..., x(n-1)]]].
X[..., -1, 0] = yhat.item(0)
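A quick way to convince yourself of this behaviour is to try torch.roll and the indexing assignment on a tiny tensor:

import torch

x = torch.tensor([[[1., 2., 3., 4.]]])    # shape (1, 1, 4)
rolled = torch.roll(x, shifts=1, dims=2)  # tensor([[[4., 1., 2., 3.]]])
rolled[..., -1, 0] = 99.0                 # overwrite the wrapped-around value
print(rolled)                             # tensor([[[99., 1., 2., 3.]]])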
Recently, I've decided to put together the things I had learned and the things I would have liked to know earlier. If you'd like to have a look, you can find the links down below. I hope you'll find them useful. Feel free to comment or reach out to me if you agree or disagree with any of the remarks I made above.
Building RNN, LSTM, and GRU for time series using PyTorch
Predicting future values with RNN, LSTM, and GRU using PyTorch

How to get only the predictions with a probability greater than x

I used a random forest to classify texts into certain categories. When I used my test data I got an accuracy of 0.98, but with another data set the overall accuracy drops to 0.7. I think most rows are still predicted with high confidence.
So now I want to show only the predicted categories with a high confidence.
RandomForestClassifier gives me a column "probability", which is an array of probabilities. How do I get the actual probability of the chosen prediction?
val randomForrest = new RandomForestClassifier()
.setLabelCol(labelIndexer.getOutputCol)
.setFeaturesCol(vectorAssembler.getOutputCol)
.setProbabilityCol("probability")
.setSeed(123)
.setPredictionCol("prediction")
I eventually came up with the following UDF to get the best prediction together with its probability.
If there is a more convenient way, please comment.
def getBestPrediction = udf((rawPrediction: org.apache.spark.ml.linalg.Vector,
                             probability: org.apache.spark.ml.linalg.Vector) => {
  // index of the most probable class and its probability
  val bestPrediction = probability.argmax
  val bestProbability = probability(bestPrediction)
  (bestPrediction, bestProbability)
})

Low confidence score in SVC for example from training set

Here is my code for the SVC classifier.
vectorizer = TfidfVectorizer(lowercase=False)
train_vectors = vectorizer.fit_transform(training_data)
classifier_linear = svm.LinearSVC()
clf = CalibratedClassifierCV(classifier_linear)
linear_svc_model = clf.fit(train_vectors, train_labels)
training_data here is a list of English sentences and train_labels holds the associated labels. I do the usual stopword removal and some preprocessing before creating the final version of training_data. Here is my testing code:
test_lables = ["no"]
test_vectors = vectorizer.transform(test_lables)
prediction_linear = clf.predict_proba(test_vectors)
counter = 0
class_probability = {}
lables = []
for item in train_labels:
if item in lables:
continue
else:
lables.append(item)
for val in np.nditer(prediction_linear):
new_val = val.item(0)
class_probability[lables[counter]] = new_val
counter = counter + 1
sorted_class_probability = sorted(class_probability.items(), key=operator.itemgetter(1), reverse=True)
print(sorted_class_probability)
Now, when I run the code with a phrase that is already in the training set (the word 'no' in this case), it is identified correctly, but the confidence score is below 0.9. The output is as follows:
[('no', 0.8474342514152964), ('hi', 0.06830103628879058), ('thanks', 0.03070201906552546), ('confused', 0.02647134535600733), ('ok', 0.015857384248465656), ('yes', 0.005961945963546264), ('bye', 0.005272017662368208)]
From what I've read online, the confidence score for data already in the training set is usually close to 1, and the rest are negligible. What can I do to get a better confidence score? Should I be worried that if I add more classes, the confidence score will dip further and it will be difficult for me to point out one clear standout class?
As long as your scores help you classify your inputs correctly, you shouldn't worry at all. If anything, if your confidence on the input already in your training data is too high, that probably means your method has overfit to the data, and cannot generalize to the unseen data.
However, you can tune the complexity of your method by changing the penalization parameters. In the case of a LinearSVC, you have both the penalty and the C parameter. Try different values of those two and observe the effect. Make sure you also observe the effect on an unseen test set.
Just a note that the values of C should be on an exponential scale, e.g. [0.001, 0.01, 0.1, 1, 10, 100, 1000], for you to see meaningful effects.
The SGDClassifier may be relevant to your case if you're interested in such linear models and tuning your parameters.
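As a rough sketch of that kind of tuning (exponentially spaced C values evaluated by cross-validation; the vectors and labels reuse the names from the question, and on older scikit-learn versions the parameter path is base_estimator__C rather than estimator__C):

from sklearn.model_selection import GridSearchCV
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

# Exponentially spaced C values, as suggested above
param_grid = {"estimator__C": [0.001, 0.01, 0.1, 1, 10, 100, 1000]}

search = GridSearchCV(CalibratedClassifierCV(LinearSVC()), param_grid, cv=5)
search.fit(train_vectors, train_labels)  # the vectors/labels from the question
print(search.best_params_)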
