How to disable keras warnings?

When I run the following code:
init = "he_uniform"
inp = Input(shape=(1800, 1))
norm1 = BatchNormalization(mode=0)(inp)
conv1 = Convolution1D(16, 5, border_mode='same', init=init, activation="relu")(norm1)
pool1 = MaxPooling1D(pool_size=3)(conv1)
norm2 = BatchNormalization(mode=0)(pool1)
flat1 = Flatten()(norm2)
dens1 = Dense(128, init=init, activation="relu")(flat1)
#norm3 = BatchNormalization(mode=0)(dens1)
output = Dense(2, init=init, activation="softmax")(dens1)
from keras.models import *
model = Model(input=[inp], output=output)
I had the warnings:
/root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/ipykernel/__main__.py:4: UserWarning: Update your `BatchNormalization` call to the Keras 2 API: `BatchNormalization()`
/root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/ipykernel/__main__.py:5: UserWarning: Update your `Conv1D` call to the Keras 2 API: `Conv1D(16, 5, activation="relu", kernel_initializer="he_uniform", padding="same")`
/root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/ipykernel/__main__.py:7: UserWarning: Update your `BatchNormalization` call to the Keras 2 API: `BatchNormalization()`
/root/miniconda/envs/jupyterhub_py3/lib/python3.4/site-packages/ipykernel/__main__.py:9: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(128, activation="relu", kernel_initializer="he_uniform")`
The following approach did not help.
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
How can I disable these warnings?

You can silence TensorFlow's logging with an environment variable. From within Python:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
The warnings indicate that you are using Keras 1.x code with a Keras 2.x installation. It might be easier to fix the calls than to ignore them.
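For reference, here is a sketch of the same model ported to the Keras 2 API, using the call signatures the warnings themselves suggest (kernel_initializer, padding, and a bare BatchNormalization()); the inputs/outputs keyword spelling for Model is the Keras 2 form:
from keras.layers import Input, BatchNormalization, Conv1D, MaxPooling1D, Flatten, Dense
from keras.models import Model

init = "he_uniform"
inp = Input(shape=(1800, 1))
norm1 = BatchNormalization()(inp)  # mode argument removed in Keras 2
conv1 = Conv1D(16, 5, padding='same', kernel_initializer=init, activation="relu")(norm1)
pool1 = MaxPooling1D(pool_size=3)(conv1)
norm2 = BatchNormalization()(pool1)
flat1 = Flatten()(norm2)
dens1 = Dense(128, kernel_initializer=init, activation="relu")(flat1)
output = Dense(2, kernel_initializer=init, activation="softmax")(dens1)
model = Model(inputs=[inp], outputs=output)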

You can use this code at the top of your main.py:
import warnings

def warn(*args, **kwargs):
    pass

warnings.warn = warn
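Note that this replaces warnings.warn for the entire process, so it silences every Python warning, not just the Keras API messages.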

Related

SHAP Deep Explainer does not work for LSTM: "Attribute Error: 'Deep' object has no attribute 'masker'"

We use Keras to construct our LSTM model as follows:
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
#make LSTM model architecture
model2 = Sequential()
model2.add(LSTM(100, return_sequences = True))
model2.add(LSTM(50, return_sequences = True))
model2.add(LSTM(10))
model2.add(Dense(1))
model2.compile(loss='mae', optimizer='adam')
The above model is successfully trained and working, and we need SHAP to explain the output of the LSTM model.
We attempt to use SHAP as follows:
import shap
explainer = shap.DeepExplainer(model2,x_train_appended)
shap_values = explainer(x_train_appended)
Executing the above 3 lines throws the following error:
In [56]: import shap
...: explainer = shap.DeepExplainer(model2, x_train_appended)
...: shap_values = explainer(x_train_appended)
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(49586, 1, 23), dtype=float32, numpy=
array([[[0.40824828, 0.02564103, 0.03370786, ..., 0.4494382 ,
0.43333334, 0.59210527]],
[[0. , 0.06410257, 0.05617978, ..., 0.4494382 ,
0.43333334, 0.59210527]],
[[0.5400617 , 0.06410257, 0.06741573, ..., 0.4494382 ,
0.43333334, 0.59210527]],
...,
[[0.5400617 , 0.01282051, 0.05617978, ..., 0.07865169,
0.01111111, 0.05263158]],
[[0. , 0.02564103, 0.05617978, ..., 0.07865169,
0.01111111, 0.05263158]],
[[0. , 0.02564103, 0.05617978, ..., 0.07865169,
0.01111111, 0.05263158]]], dtype=float32)>]
Consider rewriting this model with the Functional API.
Traceback (most recent call last):
File "", line 3, in
shap_values = explainer(x_train_appended)
File "/home/kiton/.local/lib/python3.8/site-packages/shap/explainers/_explainer.py", line 207, in call
if issubclass(type(self.masker), maskers.OutputComposite) and len(args)==2:
AttributeError: 'Deep' object has no attribute 'masker'
Did anyone run into a similar issue when using SHAP Deep Explainer? Am I doing something wrong here? Any feedback is appreciated. Thanks a lot for your time and help in advance!
Could this error be related to using Keras? Perhaps building a model with TensorFlow or PyTorch directly would solve the problem?
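Not an answer, but the TensorFlow warning above explicitly suggests rewriting the model with the Functional API. A minimal sketch of the same stack in that style, assuming each sample has shape (1, 23) as in the logged input tensor, would be:
from keras.models import Model
from keras.layers import Input, Dense, LSTM

# explicit per-sample input shape, taken from the (49586, 1, 23) tensor in the warning
inputs = Input(shape=(1, 23))
x = LSTM(100, return_sequences=True)(inputs)
x = LSTM(50, return_sequences=True)(x)
x = LSTM(10)(x)
outputs = Dense(1)(x)
model2 = Model(inputs=inputs, outputs=outputs)
model2.compile(loss='mae', optimizer='adam')
Whether this also resolves the DeepExplainer AttributeError is a separate question.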

Why does using X[0] in MNIST classifier code give me an error?

I was learning to do classification with the MNIST dataset, and I got an error that I am not able to figure out. I have done a lot of Google searches and cannot resolve it; maybe you are an expert and can help me. Here is the code:
>>> from sklearn.datasets import fetch_openml
>>> mnist = fetch_openml('mnist_784', version=1)
>>> mnist.keys()
output:
dict_keys(['data', 'target', 'frame', 'categories', 'feature_names', 'target_names', 'DESCR', 'details', 'url'])
>>> X, y = mnist["data"], mnist["target"]
>>> X.shape
output:(70000, 784)
>>> y.shape
output:(70000,)
>>> X[0]
output:KeyError Traceback (most recent call last)
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2897 try:
-> 2898 return self._engine.get_loc(casted_key)
2899 except KeyError as err:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 0
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
<ipython-input-10-19c40ecbd036> in <module>
----> 1 X[0]
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2904 if self.columns.nlevels > 1:
2905 return self._getitem_multilevel(key)
-> 2906 indexer = self.columns.get_loc(key)
2907 if is_integer(indexer):
2908 indexer = [indexer]
c:\users\khush\appdata\local\programs\python\python39\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2898 return self._engine.get_loc(casted_key)
2899 except KeyError as err:
-> 2900 raise KeyError(key) from err
2901
2902 if tolerance is not None:
KeyError: 0
Please answer; it may be a silly mistake because I am a beginner in ML. It would be really helpful if you gave me a hint as well.
The API of fetch_openml changed between versions. In earlier versions it returned a numpy.ndarray. Since 0.24.0 (December 2020), the as_frame argument of fetch_openml defaults to 'auto' (instead of False, as it did earlier), which gives you a pandas.DataFrame for the MNIST data. You can force the data to be read as a numpy.ndarray by setting as_frame=False. See the fetch_openml reference.
I was also facing the same problem.
scikit-learn: 0.24.0
matplotlib: 3.3.3
Python: 3.9.1
I used the below code to resolve the issue.
import matplotlib as mpl
import matplotlib.pyplot as plt
# instead of some_digit = X[0]
some_digit = X.to_numpy()[0]
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()
You don't need to downgrade your scikit-learn library if you follow the code below:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version= 1, as_frame= False)
mnist.keys()
If you instead load the dataset as a DataFrame, you have two ways to access the images:
Transform the DataFrame into an array
# Transform the dataframe into an array. Check the first value
some_digit = X.to_numpy()[0]
# Reshape it to (28, 28). Note: 28 x 28 = 784; if the vector length doesn't
# match, the reshape fails and you cannot show the image
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()
Transform the row
# Transform the row of your choosing into an array
some_digit = X.iloc[0,:].values
# Reshape it to (28, 28). Note: 28 x 28 = 784; if the vector length doesn't
# match, the reshape fails and you cannot show the image
some_digit_image = some_digit.reshape(28,28)
plt.imshow(some_digit_image,cmap="binary")
plt.axis("off")
plt.show()

Google Cloud ML exited with a non-zero status of 245 when training

I tried to train my model on Google Cloud ML using this sample code:
import keras
from keras import optimizers
from keras import losses
from keras import metrics
from keras.models import Model, Sequential
from keras.layers import Dense, Lambda, RepeatVector, TimeDistributed
import numpy as np
def test():
    model = Sequential()
    model.add(Dense(2, input_shape=(3,)))
    model.add(RepeatVector(3))
    model.add(TimeDistributed(Dense(3)))
    model.compile(loss=losses.MSE,
                  optimizer=optimizers.RMSprop(lr=0.0001),
                  metrics=[metrics.categorical_accuracy],
                  sample_weight_mode='temporal')
    x = np.random.random((1, 3))
    y = np.random.random((1, 3, 3))
    model.train_on_batch(x, y)

if __name__ == '__main__':
    test()
and I got this error:
The replica master 0 exited with a non-zero status of 245. Termination reason: Error.
The detailed error output is big, so I'm pasting it here in a pastebin.
Note this output:
Module raised an exception for failing to call a subprocess Command '['python', '-m', u'trainer.test', '--job-dir', u'gs://my_test_bucket_keras/s_27_100630']' returned non-zero exit status -11.
I guess Google Cloud will run your code with an extra parameter called --job-dir, so perhaps you can try adding the following to your example code:
import ...
import argparse

def test():
    model = Sequential()
    model.add(Dense(2, input_shape=(3,)))
    model.add(RepeatVector(3))
    model.add(TimeDistributed(Dense(3)))
    model.compile(loss=losses.MSE,
                  optimizer=optimizers.RMSprop(lr=0.0001),
                  metrics=[metrics.categorical_accuracy],
                  sample_weight_mode='temporal')
    x = np.random.random((1, 3))
    y = np.random.random((1, 3, 3))
    model.train_on_batch(x, y)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Input Arguments
    parser.add_argument(
        '--job-dir',
        help='GCS location to write checkpoints and export models',
        required=True
    )
    args = parser.parse_args()
    arguments = args.__dict__
    test()
    # test(**arguments)  # or if you want to use this job_dir parameter in your code
Not 100% sure this will work but I think you can give it a try.
Also, I have a post here doing something similar; perhaps you can take a look there as well.
The problem is resolved. All I had to do was use TensorFlow 1.1.0 instead of the default 1.0.1.
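For reference, one common way to pin the TensorFlow version for a Cloud ML training job is to declare it in the trainer package's setup.py (or to pick a different --runtime-version when submitting the job). A minimal sketch, where the package name 'trainer' and the layout are assumptions rather than details from this post:
from setuptools import setup, find_packages

setup(
    name='trainer',                          # assumed package name
    version='0.1',
    packages=find_packages(),
    install_requires=['tensorflow==1.1.0'],  # pin the version mentioned above
)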

TypeError: 'Tensor' object is not callable

I'm trying to display the output of each layer of the convolutional neural network.
The backend I'm using is TensorFlow.
Here is the code:
import ....
from keras import backend as K
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape = (1,28,28)))
convout1 = Activation('relu')
model.add(convout1)
(X_train, y_train), (X_test, y_test) = mnist_dataset = mnist.load_data("mnist.pkl")
reshaped = X_train.reshape(X_train.shape[0], 1, X_train.shape[1], X_train.shape[2])
from random import randint
img_to_visualize = randint(0, len(X_train) - 1)
# Generate function to visualize first layer
# ERROR HERE
convout1_f = K.function([model.input(train=False)], convout1.get_output(train=False)) #ERROR HERE
convolutions = convout1_f(reshaped[img_to_visualize: img_to_visualize+1])
The full Error is:
convout1_f = K.function([model.input(train=False)], convout1.get_output(train=False))
TypeError: 'Tensor' object is not callable
Any comment or suggestion is highly appreciated. Thank you.
Both the get_output and get_input methods return either a Theano or a TensorFlow tensor. It's not callable because of the nature of these objects.
In order to compile a function you should provide only layer tensors and a special Keras tensor called learning_phase, which sets the mode in which your model should be called.
Following this answer your function should look like this:
convout1_f = K.function([model.input, K.learning_phase()], convout1.get_output)
Remember that you need to pass either True or False for the learning phase when calling your function, in order to run the model computations in either training or test mode.
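For example, with the snippet from the question (the reshaped array and img_to_visualize index come from the question's code), the call would look roughly like:
# second element is the learning phase: False/0 = test mode, True/1 = training mode
convolutions = convout1_f([reshaped[img_to_visualize: img_to_visualize + 1], False])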

Probable issue with LSTM in lasagne

With a simple constructor for the LSTM, as given in the tutorial, and an input of shape (batch, sequence, 1), one would expect to see an output of shape (batch, sequence, num_units).
But regardless of the num_units passed during construction, the output has the same shape as the input.
Following is the minimal code to replicate this issue:
import lasagne
import theano
import theano.tensor as T
import numpy as np
num_batches= 20
sequence_length= 100
data_dim= 1
train_data_3= np.random.rand(num_batches,sequence_length,data_dim).astype(theano.config.floatX)
#As in the tutorial
forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
l_lstm = lasagne.layers.LSTMLayer(
    (num_batches, sequence_length, data_dim),
    num_units=8,
    forgetgate=forget_gate
)
lstm_in= T.tensor3(name='x', dtype=theano.config.floatX)
lstm_out = lasagne.layers.get_output(l_lstm, {l_lstm:lstm_in})
f = theano.function([lstm_in], lstm_out)
lstm_output_np= f(train_data_3)
lstm_output_np.shape
#= (20, 100, 1)
An unqualified LSTM (I mean one in its default mode) should produce one output for each unit, right?
The code was run on Kaixhin's CUDA Lasagne Docker image.
What gives?
Thanks!
You can fix that by using a lasagne.layers.InputLayer
import lasagne
import theano
import theano.tensor as T
import numpy as np
num_batches= 20
sequence_length= 100
data_dim= 1
train_data_3= np.random.rand(num_batches,sequence_length,data_dim).astype(theano.config.floatX)
#As in the tutorial
forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
input_layer = lasagne.layers.InputLayer(shape=(num_batches,                 # <-- change
                                               sequence_length, data_dim))  # <-- change
l_lstm = lasagne.layers.LSTMLayer(input_layer,  # <-- change
                                  num_units=8,
                                  forgetgate=forget_gate
                                  )
lstm_in = T.tensor3(name='x', dtype=theano.config.floatX)
lstm_out = lasagne.layers.get_output(l_lstm, lstm_in)  # <-- change
f = theano.function([lstm_in], lstm_out)
lstm_output_np = f(train_data_3)
print(lstm_output_np.shape)
If you feed your input into the input_layer, it is no longer ambiguous, so you do not even need to specify which layer the input is supposed to go to. Directly specifying a shape and feeding the tensor3 into the LSTMLayer does not work.
