I was doing Facial Expression Recognition in a Kaggle kernel and everything was going smoothly, but suddenly the following code started giving an error.
import tensorflow as tf
x = tf.placeholder(shape = [None, image_pixels], dtype = tf.float32)
y = tf.placeholder(shape = [None, labels_count], dtype = tf.float32)
AttributeError: module 'tensorflow' has no attribute 'placeholder'
I have tried many alternatives available on the internet, such as using
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
this instead of
import tensorflow as tf
But it was all in vain. Kindly help me out here.
Change tf.placeholder to tf.compat.v1.placeholder, for example
x = tf.placeholder(shape = [None, image_pixels], dtype = tf.float32)
becomes
x = tf.compat.v1.placeholder(shape = [None, image_pixels], dtype = tf.float32)
However, there may then be a RuntimeError caused by eager execution, which is enabled by default in TensorFlow 2.x. Add tf.compat.v1.disable_eager_execution() right after the import, like this:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
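Putting it together, here is a minimal sketch of the fixed setup. The values of image_pixels and labels_count are assumptions here (the common FER2013 dataset uses 48x48 grayscale images and 7 expression classes):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
image_pixels = 48 * 48   # assumed: 48x48 grayscale inputs
labels_count = 7         # assumed: 7 expression classes
x = tf.compat.v1.placeholder(shape=[None, image_pixels], dtype=tf.float32)
y = tf.compat.v1.placeholder(shape=[None, labels_count], dtype=tf.float32)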
Related
I've built my own neural network model, trained it, and got 99.58% accuracy. But I am facing a problem with plotting the confusion matrix. There are some examples available for flow_from_directory, but none exist for image_dataset_from_directory. Can anyone help me?
See the post "How to plot confusion matrix for prefetched dataset in Tensorflow", which uses
true_categories = tf.concat([y for x, y in val_ds], axis=0)
to get the true labels for the validation set. Then you can plot the confusion matrix with something like this:
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(true_categories, predicted_id)
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(1,1,1)
sns.set(font_scale=1.4) #for label size
sns.heatmap(cm, annot=True, annot_kws={"size": 12},
            cbar=False, cmap='Purples')
ax1.set_ylabel('True Values',fontsize=14)
ax1.set_xlabel('Predicted Values',fontsize=14)
plt.show()
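Note that predicted_id is assumed to already hold the predicted class indices, in the same order as true_categories (so val_ds should not be shuffled). With a softmax model it could be built, for example, as
predicted_id = tf.argmax(model.predict(val_ds), axis=1)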
Here is the code I wrote to assemble the confusion matrix.
Note:
validation_dataset is a tf.data.Dataset variable.
I created it with validation_dataset = tf.keras.preprocessing.image_dataset_from_directory().
import tensorflow as tf
y_true = []
y_pred = []
for x, y in validation_dataset:
    # labels are one-hot encoded, so recover the class index
    y = tf.argmax(y, axis=1)
    y_true.append(y)
    y_pred.append(tf.argmax(model.predict(x), axis=1))
y_pred = tf.concat(y_pred, axis=0)
y_true = tf.concat(y_true, axis=0)
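With y_true and y_pred assembled, the matrix itself can then be computed, for example, with
cm = tf.math.confusion_matrix(y_true, y_pred).numpy()
and plotted the same way as above.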
I am trying to run predictions with my pretrained model. I have a total of 40 classes, and it is showing me the predictions as tiny numbers in scientific notation. I want to choose the maximum from them and classify using if-else, but it is giving me the above error!
from keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import imagenet_utils
import tensorflow as tf
import numpy as np
def prepare_image(file):
    img_path = '/content/drive/MyDrive/test imgs/'
    img = image.load_img(img_path + file, target_size=(224, 224))
    img_array = image.img_to_array(img)
    img_array_expanded_dims = np.expand_dims(img_array, axis=0)
    return tf.keras.applications.inception_resnet_v2.preprocess_input(img_array_expanded_dims)
model = tf.keras.models.load_model("CNN ResNet.h5")
preprocessed_image = prepare_image('mcd.jpg')
predictions = model.predict(preprocessed_image)
print(predictions)
highest = np.argmax(predictions, axis=1)
print("Highest position : ", highest)
if highest == 0:
    print("This class is Acer")
To allow using a Keras model as part of standard TensorFlow operations, I create the model using a specific placeholder for the input.
However, when trying to do model.predict, I get an error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [100,84,84,4]
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[100,84,84,4], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
My code is given below:
from keras.layers import Convolution2D, Dense, Input
from keras.models import Model
from keras.optimizers import Nadam
from keras.losses import mean_absolute_error
from keras.activations import relu
import tensorflow as tf
import numpy as np
import gym
state_size = [100, 84, 84, 4]
input_tensor = tf.placeholder(dtype=tf.float32, shape=state_size)
inputL = Input(tensor=input_tensor)
h1 = Convolution2D(filters=32, kernel_size=(5,5), strides=(4,4), activation=relu) (inputL)
h2 = Convolution2D(filters=64, kernel_size=(3,3), strides=(2,2), activation=relu) (h1)
h3 = Convolution2D(filters=64, kernel_size=(3,3), activation=relu) (h2)
h4 = Dense(512, activation=relu) (h3)
out = Dense(18) (h4)
model = Model(inputL, out)
opt = Nadam()
disc_rate=0.99
sess = tf.Session()
dummy_input = np.ones(shape=state_size)
model.compile(opt, mean_absolute_error)
writer = tf.summary.FileWriter('./my_graph', sess.graph)
writer.close()
print(out)
print(model.predict({input_tensor: dummy_input}))
I have also tried feeding the input directly (no dictionary, just the value) with the same exception. I can, however, get the model to work like this:
print(sess.run( model.output, {input_tensor: dummy_input }))
Is there a way for me to still use normal Keras .predict method?
The following works (we need to initialize global variables):
sess.run(tf.global_variables_initializer()) # initialize
print(sess.run([model.output], feed_dict={input_tensor: dummy_input}))
I am trying to make my MultinomialNB work. I use CountVectorizer on my training and test sets, and of course there are different words in both sets. So I see why the error
ValueError: dimension mismatch
occurs, but I don't know how to fix it. I tried CountVectorizer().transform instead of CountVectorizer().fit_transform, as was suggested in another post (SciPy and scikit-learn - ValueError: Dimension mismatch), but that just gives me
NotFittedError: CountVectorizer - Vocabulary wasn't fitted.
How can I use CountVectorizer correctly?
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
import sklearn.feature_extraction
df = data
y = df["meal_parent_category"]
X = df['name_cleaned']
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3)
X_train = CountVectorizer().fit_transform(X_train)
X_test = CountVectorizer().fit_transform(X_test)
algo = MultinomialNB()
algo.fit(X_train,y_train)
y_pred = algo.predict(X_test)
print(classification_report(y_test,y_pred))
Ok, so after asking this question I figured it out :)
Here is the solution with vocabulary and such:
df = train
y = df["meal_parent_category_cleaned"]
X = df['name_cleaned']
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3)
vectorizer_train = CountVectorizer()
X_train = vectorizer_train.fit_transform(X_train)
vectorizer_test = CountVectorizer(vocabulary=vectorizer_train.vocabulary_)
X_test = vectorizer_test.transform(X_test)
algo = MultinomialNB()
algo.fit(X_train,y_train)
y_pred = algo.predict(X_test)
print(classification_report(y_test,y_pred))
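Equivalently, you can fit a single vectorizer on the training data and reuse it to transform the test data, which avoids building the vocabulary twice:
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(X_train)
X_test = vectorizer.transform(X_test)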
I'm new to TensorFlow & ML and following this example:
https://www.tensorflow.org/get_started/tflearn
It works very well until I change the hidden_units parameter here:
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir="/tmp/iris_model")
When I try anything else, for example hidden_units=[20, 40, 20] or hidden_units=[20], it throws an error.
I tried to find out on my own, but unsuccessfully so far, and thought someone here could help.
The question is: how do I choose the number of hidden layers for DNNClassifier, and why do my two examples above not work?
Here is a full code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import urllib
import tensorflow as tf
import numpy as np
IRIS_TRAINING = "iris_training.csv"
IRIS_TRAINING_URL = "http://download.tensorflow.org/data/iris_training.csv"
IRIS_TEST = "iris_test.csv"
IRIS_TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
if not os.path.exists(IRIS_TRAINING):
    raw = urllib.request.urlopen(IRIS_TRAINING_URL).read()
    with open(IRIS_TRAINING, 'wb') as f:
        f.write(raw)
if not os.path.exists(IRIS_TEST):
    raw = urllib.request.urlopen(IRIS_TEST_URL).read()
    with open(IRIS_TEST, 'wb') as f:
        f.write(raw)
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TEST,
    target_dtype=np.int,
    features_dtype=np.float32)
# Specify that all features have real-value data
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir="/tmp/iris_model")
# Define the training inputs
def get_train_inputs():
    x = tf.constant(training_set.data)
    y = tf.constant(training_set.target)
    return x, y
# Fit model.
classifier.fit(input_fn=get_train_inputs, steps=2000)
# Define the test inputs
def get_test_inputs():
    x = tf.constant(test_set.data)
    y = tf.constant(test_set.target)
    return x, y
# Evaluate accuracy.
accuracy_score = classifier.evaluate(input_fn=get_test_inputs,
                                     steps=1)["accuracy"]
print("\nTest Accuracy: {0:f}\n".format(accuracy_score))
Found it: if model_dir is not specified, then the model works just fine with new hidden_units.
The reason is that /tmp/iris_model still contains a checkpoint saved with the old [10, 20, 10] architecture, and restoring it into a network with different hidden_units fails. Omitting model_dir (or pointing it at a fresh, empty directory) avoids the stale checkpoint.
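A minimal sketch of the fresh-directory variant (tempfile.mkdtemp() here is just one way to get an empty directory):
import tempfile
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[20, 40, 20],
                                            n_classes=3,
                                            model_dir=tempfile.mkdtemp())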