ValueError: Only TF native optimizers are supported in Eager mode

I applied eager execution to my code using:
import tensorflow as tf
tf.enable_eager_execution()
tf.executing_eagerly()
But when I used the Adam optimizer:
single_step_model = tf.keras.models.Sequential()
single_step_model.add(tf.keras.layers.LSTM(32, input_shape=x_train.shape[-2:]))
single_step_model.add(tf.keras.layers.Dense(1))
single_step_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mae')
I got this error:
ValueError: Only TF native optimizers are supported in Eager mode
Please help me guys

Use a tf.train optimizer, not a tf.keras.optimizers one, e.g.:
single_step_model = tf.keras.models.Sequential()
single_step_model.add(tf.keras.layers.LSTM(32, input_shape=x_train.shape[-2:]))
single_step_model.add(tf.keras.layers.Dense(1))
single_step_model.compile(optimizer=tf.train.AdamOptimizer(), loss='mae')
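As a side note: this restriction applies to TF 1.x with eager execution enabled manually. In TF 2.x, eager execution is on by default and tf.keras optimizers work directly; a minimal sketch, assuming TF 2.x (the input shape here is a hypothetical placeholder):
```
import tensorflow as tf  # TF 2.x: eager execution is on by default

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 7)),  # hypothetical input shape
    tf.keras.layers.Dense(1),
])
# tf.keras.optimizers.Adam works fine under TF 2.x eager execution
model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mae')
```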

Related

Exception: The passed model is not callable and cannot be analyzed directly with the given masker

I am dealing with a regression problem, and I used StackingRegressor to train the data and then make predictions on the test set. For model explainability purposes, I used SHAP as follows:
import xgboost
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import StackingRegressor
import shap

# train a model
X, y = shap.datasets.boston()
stkr = StackingRegressor(
    estimators=[('xgbr', xgboost.XGBRegressor()), ('rfr', RandomForestRegressor())],
    final_estimator=xgboost.XGBRegressor(),
    cv=3
)
model = stkr.fit(X, y)
explainer = shap.Explainer(model)
shap_values = explainer(X)
shap.summary_plot(shap_values, X)
After running this code, I get the following error:
Exception: The passed model is not callable and cannot be analyzed directly with the given masker! Model: StackingRegressor
I have no idea why I get this error for StackingRegressor, while the same code runs without any problem if I replace StackingRegressor with RandomForestRegressor or XGBRegressor.
Does anyone have any idea?
I have had the same issue with a different model. The solution that worked for me was to use KernelExplainer instead of Explainer, and to pass the model's predict function instead of the model itself. Note that to get the SHAP values you then need to call KernelExplainer.shap_values().
So I think this should work:
explainer = shap.KernelExplainer(model.predict, X)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
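KernelExplainer can be slow on the full dataset, so a common pattern is to hand it a background sample. A sketch applied to the StackingRegressor above (the sample size of 100 is an arbitrary choice):
```
# model is the fitted StackingRegressor from the question
background = shap.sample(X, 100)  # subsample the background data to keep KernelExplainer fast
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
```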
Which version of shap are you using? I just hit this error and fixed it by upgrading from version 0.39.0 to 0.40.0. Not sure if that helps.

Does PyTorch seed affect dropout layers?

I came across the idea of seeding my neural network for reproducible results, and was wondering whether PyTorch seeding affects dropout layers, and what the proper way to seed my training/testing is.
I'm reading the documentation here, and wondering if just placing these lines will be enough:
torch.manual_seed(1)
torch.cuda.manual_seed(1)
You can easily answer your question with a few lines of code:
import torch
from torch import nn
dropout = nn.Dropout(0.5)
torch.manual_seed(9999)
a = dropout(torch.ones(1000))
torch.manual_seed(9999)
b = dropout(torch.ones(1000))
print(sum(abs(a - b)))
# > tensor(0.)
Yes, using manual_seed is enough.
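One related caveat, for completeness: dropout is only active in training mode. Calling .eval() on the module (or the whole model) turns it into a no-op, so seeding dropout only matters during training:
```
dropout.eval()                 # switch to evaluation mode
c = dropout(torch.ones(1000))  # dropout is now the identity function
print((c == 1).all())
# > tensor(True)
```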
Actually, it depends on your device.
On CPU, torch.manual_seed(1) is enough.
On CUDA, you also need:
torch.cuda.manual_seed(1)
torch.backends.cudnn.deterministic = True
Lastly, the following code makes sure the results are reproducible across Python, NumPy, and PyTorch:
import random
import numpy
import torch

def setup_seed(seed):
    random.seed(seed)
    numpy.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True

setup_seed(42)
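If you also load data with a multi-worker DataLoader, the PyTorch reproducibility notes recommend seeding each worker and the loader's generator as well; a sketch along those lines (dataset is a placeholder, not defined here):
```
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # derive a per-worker seed from the main process seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(42)

# dataset is a placeholder for your torch.utils.data.Dataset
loader = DataLoader(dataset, batch_size=32, num_workers=4,
                    worker_init_fn=seed_worker, generator=g)
```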

Get predicted values using cross_validate()

I have the following code, which performs 5-fold cross-validation and returns several metric values:
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

iris = load_iris()
clf = SVC()
scoring = {'acc': 'accuracy',
           'prec_macro': 'precision_macro',
           'rec_micro': 'recall_macro'}
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring,
                        cv=5, return_train_score=True)
I want to know if this can be modified to print the predicted values for each fold.
If you're using sklearn, you can use cross_val_predict:
from sklearn.model_selection import cross_val_predict
y_pred = cross_val_predict(clf, X, y, cv=5)
cross_val_score gives the score for each fold, while cross_val_predict gives the prediction for each sample, computed in the fold where that sample was in the test set.
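Applied to the code in the question, a minimal sketch (cross_val_predict stitches together one out-of-fold prediction per sample across the 5 folds):
```
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

iris = load_iris()
clf = SVC()
y_pred = cross_val_predict(clf, iris.data, iris.target, cv=5)
print(y_pred.shape)  # one out-of-fold prediction per sample: (150,)
```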
Since I also needed this feature in scikit-learn, I hacked the code in my own sklearn fork.
If you still need this, you can find it on my GitHub, on the branch group_cv:
https://github.com/robbisg/scikit-learn/tree/group_cv
The modified cross_validate function is here:
https://github.com/robbisg/scikit-learn/blob/group_cv/sklearn/model_selection/_validation.py
You need to call cross_validate with return_predictions=True.
Hope this helps.
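For what it's worth, stock scikit-learn (>= 0.20) can also get you per-fold predictions without a fork: pass return_estimator=True to cross_validate and reuse the same splitter to predict on each test fold. A sketch under those assumptions:
```
from sklearn.model_selection import StratifiedKFold, cross_validate

cv = StratifiedKFold(n_splits=5)
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring,
                        cv=cv, return_train_score=True, return_estimator=True)
# reuse the same splits to get each fold's test-set predictions
for est, (train_idx, test_idx) in zip(scores['estimator'],
                                      cv.split(iris.data, iris.target)):
    fold_pred = est.predict(iris.data[test_idx])
```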

CUDA_ERROR_OUT_OF_MEMORY: How to activate multiple GPUs from Keras in Tensorflow

I am running a large model on TensorFlow using Keras, and toward the end of training the Jupyter notebook kernel stops, and in the command line I see the following error:
2017-08-07 12:18:57.819952: E tensorflow/stream_executor/cuda/cuda_driver.cc:955] failed to alloc 34359738368 bytes on host: CUDA_ERROR_OUT_OF_MEMORY
This I guess is simple enough: I am running out of memory. I have 4 NVIDIA 1080 Ti GPUs. I know that TF uses only one GPU unless specified otherwise. Therefore, I have 2 questions:
1. Is there a good working example of how to utilise all GPUs in Keras?
2. In Keras, it seems it is possible to set gpu_options.allow_growth=True, but I cannot see exactly how to do this (I understand this is being a help-vampire, but I am completely new to DL on GPUs).
See CUDA_ERROR_OUT_OF_MEMORY in tensorflow.
See this official Keras blog post.
Try this:
import keras.backend as K
config = K.tf.ConfigProto()
config.gpu_options.allow_growth = True
session = K.tf.Session(config=config)
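For the multi-GPU part of the question, standalone Keras 2.x shipped a multi_gpu_model utility (since removed in favor of tf.distribute.MirroredStrategy). A sketch, assuming Keras 2.x and the 4 GPUs mentioned above (loss/optimizer are placeholders):
```
from keras.utils import multi_gpu_model

# model is your already-built Keras model; loss and optimizer are placeholders
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(loss='mae', optimizer='adam')
```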

Multi-output regression in XGBoost

Is it possible to train a model in XGBoost that has multiple continuous outputs (multi-output regression)?
What would be the objective used to train such a model?
Thanks in advance for any suggestions
My suggestion is to use sklearn.multioutput.MultiOutputRegressor as a wrapper around xgb.XGBRegressor. MultiOutputRegressor trains one regressor per target, and only requires that the regressor implements fit and predict, which xgboost happens to support.
import numpy as np
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor

# get some noised linear data
X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:linear')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))  # ~[0.004, 0.003, 0.005]
This is probably the easiest way to regress multi-dimensional targets using xgboost, as you would not need to change any other part of your code (if you were using the sklearn API originally).
However, this method does not leverage any possible relations between the targets; you can try to design a customized objective function to achieve that.
Multiple output regression is now available in the nightly build of XGBoost, and will be included in XGBoost 1.6.0.
See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/multioutput_regression.py for an example.
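Based on that demo, a minimal sketch of the native support (assuming XGBoost >= 1.6; the demo uses the hist tree method):
```
import numpy as np
import xgboost as xgb

X = np.random.random((1000, 10))
y = X @ np.random.random((10, 3))  # 3 continuous targets

# XGBoost >= 1.6: the sklearn wrapper accepts a 2D y directly
reg = xgb.XGBRegressor(tree_method="hist", n_estimators=64)
reg.fit(X, y)
print(reg.predict(X).shape)  # (1000, 3)
```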
The code above generates a warning, reg:linear is now deprecated in favor of reg:squarederror, so here is an updated version of @ComeOnGetMe's answer:
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor
# get some noised linear data
X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:squarederror')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))
Out:
[2.00592697e-05 1.50084441e-05 2.01412247e-05]
I would place a comment but I lack the reputation. In addition to @Jesse Anderson's answer: to install the most recent version, select the top link from here:
https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/list.html?prefix=master/
Make sure to select the one for your operating system.
Use pip install to install the wheel, e.g. for macOS:
pip install https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/master/xgboost-1.6.0.dev0%2B4d81c741e91c7660648f02d77b61ede33cef8c8d-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl
You can use linear regression, random forest regressors, and some other related algorithms in scikit-learn to produce multi-output regression. I am not sure about XGBoost; the gradient boosting regressor in scikit-learn does not allow multiple outputs. For those asking when this may be necessary: one example is forecasting multiple steps of a time series ahead.
Based on the above discussion, I have extended the univariate XGBoostLSS to a multivariate framework called Multi-Target XGBoostLSS Regression that models multiple targets and their dependencies in a probabilistic regression setting. Code follows soon.
