Prophet with Multiprocessing? - time-series

My problem is time-series anomaly detection, and I use the Facebook Prophet library. I have a function called "fit_predict_model" and 90 different dataframes that I keep in a dictionary, i.e. 90 different models, so training takes a long time. I wanted to use multiprocessing to train faster, but I am getting a memory error. How can I solve this problem?
from multiprocessing import Pool, cpu_count
from prophet import Prophet  # older versions: from fbprophet import Prophet

def fit_predict_model(dataframe, model_name, interval_width=0.95, changepoint_range=0.88):
    model = Prophet(yearly_seasonality=False, daily_seasonality=True,
                    seasonality_mode="multiplicative",
                    interval_width=interval_width,
                    changepoint_range=changepoint_range)
    model = model.fit(dataframe)
    forecast = model.predict(dataframe)  # predict over the training frame
    return forecast

pred = {}

def run(key):
    pred[key] = fit_predict_model(train[key], model_name=key)

pool = Pool(cpu_count())
pool.map(run, list(train.keys()))
pool.close()
pool.join()
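One likely cause and a possible fix (a minimal, untested sketch): worker processes do not share memory with the parent, so writing into the global pred dict inside run never reaches the parent process. Returning the forecast from the worker and letting Pool.map collect the results avoids that, and capping the pool size plus recycling workers with maxtasksperchild keeps per-process memory in check, since each worker holds a full Prophet/Stan model while fitting:

from multiprocessing import Pool, cpu_count

def run(key):
    # Return the result instead of mutating a global dict;
    # `train` and `fit_predict_model` are the ones defined above.
    return key, fit_predict_model(train[key], model_name=key)

if __name__ == "__main__":
    # A smaller pool and maxtasksperchild=1 trade speed for memory:
    # each worker is replaced after one model, releasing its memory.
    with Pool(processes=min(4, cpu_count()), maxtasksperchild=1) as pool:
        pred = dict(pool.map(run, list(train.keys())))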

Related

How to retrieve a fold while subsampling for class imbalance using caret

I have a class imbalance issue with my data, and following the guide here, I'm trying to downsample during cross-validation. I'd like to be able to inspect individual folds after running the model, but when I attempt to pull an individual fold from the training set I don't see the balanced classes I expected. Does the downsampling occur after the creation of the fold? If so, how would I retrieve those indices? A sample:
library(caret)

set.seed(2969)
imbal_train <- twoClassSim(10000, intercept = -20, linearVars = 20)

tr_ctrl <- trainControl(method = "cv", number = 5,
                        classProbs = TRUE,
                        p = 0.5,
                        summaryFunction = twoClassSummary,
                        sampling = "down")

testModel <- train(Class ~ .,
                   data = imbal_train,
                   method = "rf",
                   metric = "ROC",
                   trControl = tr_ctrl)

fold1 <- imbal_train[testModel$control$index$Fold1, ]
table(fold1$Class)
#> Class1 Class2
#>   7529    472

Any way to efficiently stack/ensemble pre-trained models for image classification?

I am trying to stack a few pre-trained models by taking the last hidden layer of each model, concatenating the layers together, and feeding the result into a meta-learner model (e.g. XGBoost).
I am running into the big problem of having to process each image in my dataset multiple times, since each base model requires a different preprocessing method. This makes training take far too long to be feasible. Is there any way to work around this?
For example:
model_1, processor_1 = pretrained_model(), pretrained_processor()
model_2, processor_2 = pretrained_model2(), pretrained_processor2()

for img in images:
    input_1 = processor_1(img)
    input_2 = processor_2(img)
    out_1 = model_1(input_1)
    out_2 = model_2(input_2)
    torch.cat((out_1, out_2), dim=1)  # concatenates hidden representations to feed into another model
Here's a recommendation if you want to process your images faster:
Note: I did not test this out
import torch
import torch.nn as nn

# Wrap both base models in a single module so one forward pass
# returns the concatenated hidden representations.
class StackedModel(nn.Module):
    def __init__(self, model1, model2):
        super(StackedModel, self).__init__()
        self.model1 = model1
        self.model2 = model2

    def forward(self, inputs_1, inputs_2):
        out_1 = self.model1(inputs_1)
        out_2 = self.model2(inputs_2)
        return torch.cat((out_1, out_2), dim=1)

# Init model
model = StackedModel(model_1, model_2)

# Preprocess each image once, then run the models on larger batches,
# assuming you have spare GPU memory.
stacked_preproc1 = []
stacked_preproc2 = []
max_batch_size = 16
total_output = []

for index, img in enumerate(images, start=1):
    stacked_preproc1.append(processor_1(img))
    stacked_preproc2.append(processor_2(img))
    if index % max_batch_size == 0:
        batch_1 = torch.stack(stacked_preproc1)
        batch_2 = torch.stack(stacked_preproc2)
        total_output.append(model(batch_1, batch_2))
        # Reset the buffers for the next batch
        stacked_preproc1 = []
        stacked_preproc2 = []

# Flush any leftover images that didn't fill a complete batch
if stacked_preproc1:
    total_output.append(model(torch.stack(stacked_preproc1),
                              torch.stack(stacked_preproc2)))
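A small follow-up on the design (my suggestion, not part of the original answer): if the base models are used only as frozen feature extractors, running the batched forward pass in eval() mode and under torch.no_grad() avoids storing activations for backprop, which saves a lot of memory:

# Assumption: model1 and model2 are frozen feature extractors
model.eval()
with torch.no_grad():
    features = model(batch_1, batch_2)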

How do I use TimeSeriesSplit in xgb.cv?

I am trying to run XGBoost for time-series analysis. This is my code, which I have used elsewhere:
import xgboost as xgb

# n_estimators, max_depth, min_child_weight, reg_alpha, X_train and
# y_train are defined elsewhere.
xgb1 = xgb.XGBRegressor(learning_rate=0.1, n_estimators=n_estimators,
                        max_depth=max_depth, min_child_weight=min_child_weight,
                        gamma=0, subsample=0.8, colsample_bytree=0.8,
                        reg_alpha=reg_alpha, objective='reg:squarederror',
                        nthread=4, scale_pos_weight=1, seed=27)
xgb_param = xgb1.get_xgb_params()
dmatrix = xgb.DMatrix(data=X_train, label=y_train)

cv_folds = 5
early_stopping_rounds = 50

cvresults = xgb.cv(dtrain=dmatrix, params=xgb_param,
                   num_boost_round=xgb1.get_params()['n_estimators'],
                   nfold=cv_folds, metrics='rmse',
                   early_stopping_rounds=early_stopping_rounds)
The obvious issue here is that I want to cross-validate time-series data, so I can't use cv_folds = 5.
(How) can I use the TimeSeriesSplit function within xgb.cv?
thanks,
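One approach (an untested sketch): xgb.cv accepts explicit fold indices through its folds argument, which overrides nfold, so you can generate expanding-window splits with scikit-learn's TimeSeriesSplit and pass them in directly:

from sklearn.model_selection import TimeSeriesSplit

# Build expanding-window (train, test) index pairs over the training data
tscv = TimeSeriesSplit(n_splits=5)
folds = list(tscv.split(X_train))

# Passing `folds` overrides `nfold`, so each boosting round is evaluated
# on time-ordered splits instead of random folds.
cvresults = xgb.cv(dtrain=dmatrix, params=xgb_param,
                   num_boost_round=xgb1.get_params()['n_estimators'],
                   folds=folds, metrics='rmse',
                   early_stopping_rounds=early_stopping_rounds)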

Find the best pipeline model using CrossValidator and ParamGridBuilder

I have an acceptable model, but I would like to improve it by tuning its parameters in a Spark ML Pipeline with CrossValidator and ParamGridBuilder.
As the Estimator I will use the existing pipeline.
I don't know what to put in the ParamMaps; I don't understand them.
As the Evaluator I will use the RegressionEvaluator created previously.
I'm going to use 5 folds, with a list of 10 different tree depth values.
How can I select and show the best model, i.e. the one with the lowest RMSE?
Current example:
from pyspark.ml import Pipeline
from pyspark.ml.regression import DecisionTreeRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator

dt = DecisionTreeRegressor()
dt.setPredictionCol("Predicted_PE")
dt.setMaxBins(100)
dt.setFeaturesCol("features")
dt.setLabelCol("PE")
dt.setMaxDepth(8)

pipeline = Pipeline(stages=[vectorizer, dt])
model = pipeline.fit(trainingSetDF)

predictions = model.transform(testSetDF)  # testSetDF: the held-out data
regEval = RegressionEvaluator(predictionCol="Predicted_PE", labelCol="PE", metricName="rmse")
rmse = regEval.evaluate(predictions)
print("Root Mean Squared Error: %.2f" % rmse)
Root Mean Squared Error: 3.60
What I need:
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

dt2 = DecisionTreeRegressor()
dt2.setPredictionCol("Predicted_PE")
dt2.setMaxBins(100)
dt2.setFeaturesCol("features")
dt2.setLabelCol("PE")
dt2.setMaxDepth(10)

pipeline2 = Pipeline(stages=[vectorizer, dt2])
model2 = pipeline2.fit(trainingSetDF)
regEval2 = RegressionEvaluator(predictionCol="Predicted_PE", labelCol="PE", metricName="rmse")

paramGrid = ParamGridBuilder().build()  # ??????
crossval = CrossValidator(estimator=pipeline2, estimatorParamMaps=paramGrid,
                          evaluator=regEval2, numFolds=5)  # ?????
rmse2 = regEval2.evaluate(predictions)
#bestPipeline = ????
#bestLRModel = ????
#bestParams = ????
print("Root Mean Squared Error: %.2f" % rmse2)
Root Mean Squared Error: 3.60  # the same?
You need to call .fit() on the crossval object with your training data to create the CV model; that runs the cross-validation. Then you get the best model (according to your evaluator metric) from it, e.g.:
cvModel = crossval.fit(trainingData)
myBestModel = cvModel.bestModel
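To fill in the ParamMaps question above, here is an untested sketch of what the grid and the best-model extraction could look like, reusing dt2, pipeline2 and regEval2 from the question (testSetDF is assumed to be a held-out DataFrame):

from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Grid over 10 tree depths, as described in the question
paramGrid = (ParamGridBuilder()
             .addGrid(dt2.maxDepth, list(range(1, 11)))
             .build())

crossval = CrossValidator(estimator=pipeline2,
                          estimatorParamMaps=paramGrid,
                          evaluator=regEval2,
                          numFolds=5)
cvModel = crossval.fit(trainingSetDF)

# RegressionEvaluator with "rmse" is a smaller-is-better metric, so
# bestModel is the pipeline with the lowest cross-validated RMSE.
bestPipeline = cvModel.bestModel        # fitted PipelineModel with the winning params
bestDTModel = bestPipeline.stages[-1]   # the DecisionTreeRegressionModel stage
print("Best maxDepth:", bestDTModel.getOrDefault("maxDepth"))

predictions2 = cvModel.transform(testSetDF)
print("Root Mean Squared Error: %.2f" % regEval2.evaluate(predictions2))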

Tensorflow LSTM PTB Example - Understanding forward and backward pass

Right now I am going through the TensorFlow example on LSTMs, where they use the PTB dataset to create an LSTM network capable of predicting the next word. I've spent a lot of time trying to understand the code and have a good grasp of most of it, but there is one function I don't fully understand:
import numpy as np

def run_epoch(session, model, eval_op=None, verbose=False):
    """Runs the model on the given data."""
    costs = 0.0
    iters = 0
    state = session.run(model.initial_state)

    fetches = {
        "cost": model.cost,
        "final_state": model.final_state,
    }
    if eval_op is not None:
        fetches["eval_op"] = eval_op

    for step in range(model.input.epoch_size):
        feed_dict = {}
        # Feed the previous step's final LSTM state in as this step's initial state
        for i, (c, h) in enumerate(model.initial_state):
            feed_dict[c] = state[i].c
            feed_dict[h] = state[i].h

        vals = session.run(fetches, feed_dict)
        cost = vals["cost"]
        state = vals["final_state"]

        costs += cost
        iters += model.input.num_steps

    return np.exp(costs / iters)
My confusion is this: each time through the outer loop I believe we have processed batch_size * num_steps words, done the forward propagation, and done the backward propagation. But how, in the next iteration, do we know to start with the 36th word of each batch if num_steps = 35? I suspect it is some change in an attribute of the model class on each iteration, but I cannot figure it out. Thanks for your help.
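What advances the data is the input pipeline, not run_epoch: in the tutorial's reader.py, ptb_producer reshapes the corpus into a [batch_size, batch_len] matrix and enqueues consecutive num_steps-wide column slices, so each session.run in the loop fetches the next slice while the final LSTM state is fed back in. A rough NumPy sketch of that slicing (ptb_slices is my name, not the tutorial's):

import numpy as np

# Sketch of the PTB input layout: step t covers columns
# [t*num_steps, (t+1)*num_steps), so step t starts exactly where
# step t-1 ended (e.g. word 36 when num_steps = 35).
def ptb_slices(word_ids, batch_size, num_steps):
    batch_len = len(word_ids) // batch_size
    data = np.reshape(word_ids[:batch_size * batch_len],
                      (batch_size, batch_len))
    epoch_size = (batch_len - 1) // num_steps
    for t in range(epoch_size):
        x = data[:, t * num_steps:(t + 1) * num_steps]          # inputs
        y = data[:, t * num_steps + 1:(t + 1) * num_steps + 1]  # next-word targets
        yield x, y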
