How to forecast multivariate time series using a transformer model in tsai

Please tell me about using tsai models with multivariate inputs.
I am trying to build a multivariate time series forecasting model using a transformer model from tsai.
The Forecasting section of the documentation below says "univariate or multivariate time series input," but I can only find examples of univariate forecasting, not multivariate forecasting.
https://timeseriesai.github.io/tsai/#forecasting
When I trained a model on multivariate data and ran inference, the result oscillated unnaturally.
Could you show me an example of a multivariate forecast?
Or should I use the following Multivariate Regression approach instead of Forecasting?
https://timeseriesai.github.io/tsai/#multivariate-regression
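As far as I understand tsai's conventions, the key for multivariate forecasting is to frame the data so that both the input windows and the targets contain all variables, with samples shaped (n_samples, n_vars, n_steps). tsai's own SlidingWindow utility and TSForecaster are meant to do this framing for you; the snippet below is only a minimal plain-numpy sketch of the idea (the helper name make_forecast_windows is my own, and the toy data is synthetic):

```python
import numpy as np

def make_forecast_windows(data, lookback, horizon):
    """Slice a (timesteps, n_vars) array into supervised samples.

    X: (n_samples, n_vars, lookback)  -- inputs contain every variable
    y: (n_samples, n_vars, horizon)   -- targets contain every variable too
    """
    X, y = [], []
    for i in range(len(data) - lookback - horizon + 1):
        X.append(data[i:i + lookback].T)                      # vars x lookback
        y.append(data[i + lookback:i + lookback + horizon].T)  # vars x horizon
    return np.stack(X), np.stack(y)

# toy example: 100 timesteps of 3 parallel variables
data = np.random.randn(100, 3)
X, y = make_forecast_windows(data, lookback=24, horizon=8)
print(X.shape, y.shape)  # (69, 3, 24) (69, 3, 8)
```

If the targets are framed with only one variable, the model is effectively doing univariate forecasting; a multivariate y prepared this way should also clarify whether Forecasting (predict future steps of the same variables) or Multivariate Regression (predict a separate scalar/vector target) fits your problem.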

Related

auto_arima and SARIMAX give different forecasts with the same hyperparameters

I am trying to forecast on a dataset, using pmdarima's auto_arima to find the best hyperparameters. Using those same hyperparameters, I am separately fitting a statsmodels SARIMAX model to check the method underlying auto_arima. But SARIMAX gives forecasts far different from auto_arima's. As I understand it, auto_arima uses SARIMAX as the underlying model after selecting the best hyperparameters, so it should produce the same forecast as the SARIMAX model. I am using Python to build the models and create the forecasts.
Auto ARIMA results:
[Auto ARIMA model summary]
[Auto ARIMA graph]
SARIMAX results:
[SARIMAX model summary]
[SARIMAX graph]
Am I missing something? Any help would be appreciated. Thanks
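A common source of this discrepancy is the intercept/trend specification: as far as I know, auto_arima includes an intercept by default when the series is not fully differenced, while a hand-built statsmodels SARIMAX defaults to trend=None. Check the auto_arima summary for an intercept term and, if one is present, pass trend='c' to SARIMAX (and also confirm that order, seasonal_order, and any differencing match exactly). The toy sketch below (plain numpy, synthetic AR(1) data, no pmdarima/statsmodels involved) shows how far long-horizon forecasts can drift apart when the intercept is dropped:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic AR(1) with long-run mean 10: y_t = 4.0 + 0.6*y_{t-1} + noise
y = np.empty(300)
y[0] = 10.0
for t in range(1, 300):
    y[t] = 4.0 + 0.6 * y[t - 1] + rng.normal(scale=0.5)

Y, X = y[1:], y[:-1]

# fit with an intercept (analogous to auto_arima's default)
A = np.column_stack([np.ones_like(X), X])
c, phi = np.linalg.lstsq(A, Y, rcond=None)[0]

# fit through the origin (analogous to SARIMAX's default trend=None)
phi0 = np.linalg.lstsq(X[:, None], Y, rcond=None)[0][0]

# iterate 200 steps ahead to expose the long-horizon divergence
f_with, f_without = y[-1], y[-1]
for _ in range(200):
    f_with = c + phi * f_with        # converges to the fitted mean ~10
    f_without = phi0 * f_without     # slowly decays toward 0

print(round(float(f_with), 2), round(float(f_without), 2))
```

The with-intercept forecast settles near the series mean, while the no-intercept fit drifts away from it; the same mechanism can explain an auto_arima/SARIMAX mismatch under otherwise identical orders.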

How to predict the stock price using the pattern of other stocks?

I have three months' worth of stock prices before and after certain events (bio-clinical success, dividends, M&A, etc.).
I want to analyze the trend after a specific event using these data and, based on this, analyze the trend of new stocks awaiting a similar event.
But I'm not sure which algorithm to use.
Which algorithm should I use: LSTM, ARIMA, or something else?
I would recommend starting with something simple like linear regression. It is used to find trends in data, and it is a very simple algorithm that requires little advanced math compared to the alternatives.
In linear regression, relationships are modeled using linear predictor functions whose unknown parameters are estimated from the data; such models are called linear models. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.
That said, you are free to choose whichever algorithm you want to use.
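As a concrete starting point, fitting a straight line to a post-event price window is a one-liner with numpy's least-squares polynomial fit (the prices below are made-up illustrative data, not real quotes):

```python
import numpy as np

# hypothetical daily closing prices for the days after an event
prices = np.array([100, 101, 103, 102, 105, 107, 108, 110, 109, 112], float)
t = np.arange(len(prices))

# least-squares fit of price = intercept + slope * t
slope, intercept = np.polyfit(t, prices, 1)

print(f"trend slope: {slope:.2f} per day")
```

A positive slope indicates an upward post-event trend; comparing slopes across many events of the same type gives a simple, interpretable baseline before reaching for LSTM or ARIMA.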

MultiOutputRegressor out of sample forecast horizon

I am trying to produce out-of-sample forecasts over a horizon with supervised models. Furthermore, they are multi-output, since many univariate time series run in parallel. How can I avoid using X_test samples when predicting with this type of model?
The code looks like this (any other regressor, e.g. RF or AdaBoost, could be substituted):
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor

multioutputregressor = MultiOutputRegressor(
    xgb.XGBRegressor(objective='reg:squarederror', verbosity=1)
).fit(X_train, y_train)
y_multirf1 = multioutputregressor.predict(X_test)
Here I need to forecast univariate data. Besides, it looks like 'time' is the only exogenous variable available, but it feels wrong to feed it in as X (train/test). Are there any special models for supervised forecasting with out-of-sample predictions?
Thanks.
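The usual way to get out-of-sample predictions without an X_test is the reduction strategy: use lagged values of the series itself as features, then forecast recursively, feeding each prediction back in as the next input. Below is a minimal sketch with plain numpy and a linear model (the function name recursive_forecast is my own; the same idea applies per-series inside a MultiOutputRegressor, and libraries such as sktime wrap it for arbitrary regressors):

```python
import numpy as np

def recursive_forecast(series, n_lags, horizon):
    """Fit a linear model on lagged values, then forecast out of sample
    by feeding each prediction back in as the next input."""
    # supervised matrix: each row is [1, y_{t-n_lags}, ..., y_{t-1}]
    rows = [series[i:i + n_lags] for i in range(len(series) - n_lags)]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = series[n_lags:]
    coef = np.linalg.lstsq(X, y, rcond=None)[0]

    history = list(series[-n_lags:])
    preds = []
    for _ in range(horizon):
        x = np.concatenate([[1.0], history[-n_lags:]])
        yhat = float(x @ coef)
        preds.append(yhat)
        history.append(yhat)       # the prediction becomes a future input
    return preds

series = np.sin(np.linspace(0, 8 * np.pi, 200))  # toy seasonal series
preds = recursive_forecast(series, n_lags=12, horizon=24)
print(len(preds))
```

This removes the need to invent X_test rows: everything the model sees at forecast time is either history or its own earlier predictions. The trade-off is that errors compound over the horizon, which is the usual cost of recursive forecasting.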

How do I update a trained model (weka.classifiers.functions.MultilayerPerceptron) with new training data in Weka?

I would like to load a model I trained before and then update this model with new training data. But I found this task hard to accomplish.
I have learnt from Weka Wiki that
Classifiers implementing the weka.classifiers.UpdateableClassifier interface can be trained incrementally.
However, the regression model I trained uses the weka.classifiers.functions.MultilayerPerceptron classifier, which does not implement UpdateableClassifier.
I then checked the Weka API, and it turns out that no regression classifier implements UpdateableClassifier.
How can I train a regression model in Weka, and then update the model later with new training data after loading the model?
I have some data mining experience in Weka as well as in scikit-learn and R. Updatable regression models do not exist in Weka as far as I know (in scikit-learn, by contrast, estimators that implement partial_fit, such as SGDRegressor, can be trained incrementally). Some R libraries also support updating regression models (take a look at this linear regression model for example: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/update.html), so if you are free to switch data mining tools this might help you out.
If you need to stick to Weka, then I'm afraid you would probably need to implement such a model yourself; but since I'm not a complete Weka expert, please check with the people on the Weka mailing list (http://weka.wikispaces.com/Weka+Mailing+List).
The SGD classifier implementation in Weka supports multiple loss functions. Among them are two loss functions meant for linear regression: the epsilon-insensitive and Huber loss functions.
Therefore one can use a linear regression model trained with SGD, as long as either of these two loss functions is used to minimize training error.
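If switching tools is an option, the same SGD-with-a-regression-loss idea is available in scikit-learn, where partial_fit updates an already-trained model with new batches instead of retraining from scratch. A minimal sketch on synthetic data (SGDRegressor with Huber loss; the coefficients and batch sizes here are arbitrary):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(42)
true_coef = np.array([1.0, -2.0, 0.5])

# initial training batch
X1 = rng.normal(size=(200, 3))
y1 = X1 @ true_coef + rng.normal(scale=0.1, size=200)

model = SGDRegressor(loss="huber", random_state=0)
for _ in range(20):              # several passes over the first batch
    model.partial_fit(X1, y1)

# later: new data arrives -- update the existing model incrementally
X2 = rng.normal(size=(50, 3))
y2 = X2 @ true_coef + rng.normal(scale=0.1, size=50)
model.partial_fit(X2, y2)

print(model.coef_.shape)
```

The model object can be pickled between the two fitting stages, which mirrors the load-then-update workflow asked about for Weka.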

Multinomial logistic regression steps in SPSS

I have data suited to multinomial logistic regression, but I don't know how to formulate the model to predict my Y.
How do I perform multinomial logistic regression using SPSS?
How does the stepwise method work?
There are plenty of examples of annotated output for SPSS multinomial logistic regression:
UCLA example
My own list of links and resources
The stepwise method provides a data-driven approach to selecting your predictor variables. In general, the decision between data-driven and direct-entry or hierarchical approaches depends on whether you want to test theory (i.e., direct entry or hierarchical) or simply optimise prediction (i.e., stepwise and related methods).
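As a rough illustration of the same data-driven idea outside SPSS: scikit-learn's SequentialFeatureSelector performs forward stepwise selection, at each step adding the predictor that most improves cross-validated accuracy. Note this is a different criterion from the likelihood-ratio or score tests SPSS uses, so treat it only as a sketch of the mechanism:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 3 classes, 4 candidate predictors

# forward stepwise: greedily add predictors, scored by 5-fold CV accuracy
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=2,
    direction="forward",
    cv=5,
)
selector.fit(X, y)
print(selector.get_support())  # boolean mask of the selected predictors
```

Backward elimination works the same way with direction="backward"; either way, remember that stepwise-selected models optimise prediction on the data at hand and are not a substitute for theory-driven model specification.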