This is the CNN model classifier that I have created:
from keras.models import Sequential, Model
from keras.layers import Convolution2D, MaxPooling2D, Flatten

classifier = Sequential()
classifier.add(Convolution2D(32, 3, 3, input_shape=(256, 256, 3), activation="relu"))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
Now I want to find the weights of each layer after passing an image through the model. So I got this code:
inputs = classifier.input
outputs = [classifier.layers[i].output for i in range(len(classifier.layers))]
model = Model(inputs, outputs)
all_layers_predictions = model.predict(test_image)
all_layers_predictions
But I am getting this output:
[screenshot of the output]
I don't understand what exactly the problem is. Why are the values not getting printed? I also tried:
for layer in model.layers:
    weights = layer.get_weights()
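As a sanity check, here is a minimal sketch (assuming test_image is a preprocessed NumPy batch of shape (1, 256, 256, 3)) that prints each layer's outputs and weights explicitly rather than relying on the notebook display:

# Print each layer's output shape and weight shapes explicitly.
all_layers_predictions = model.predict(test_image)
for layer, out in zip(classifier.layers, all_layers_predictions):
    print(layer.name, "output shape:", out.shape)
    for w in layer.get_weights():  # empty list for layers with no weights
        print("  weight shape:", w.shape)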
There is one input variable and one output variable; however, each data point of the input/output variable is a vector. The size of each input vector is 141x1 and the size of each output vector is 400x1. I have attached the input data file (Ivec.xls) and the output data file (Ovec.xls): data link
For training:
Input vectors: Ivec(:,1:9) and output vectors: Ovec(:,1:9)
For testing:
Input vector: Ivec(:,10) and the predicted_Ovec10 can be compared with Ovec(:,10)
to know the performance of the model.
How to create a regression model from this?
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout

dataset_Ivec = pd.read_excel(r'Ivec.xls', header=None)
dataset_Ovec = pd.read_excel(r'Ovec.xls', header=None)
dataset_Ivec_numpy = dataset_Ivec.to_numpy()
dataset_Ovec_numpy = dataset_Ovec.to_numpy()
X_train = dataset_Ivec_numpy[:,:-1]
y_train = dataset_Ovec_numpy[:,:-1]
X_test = dataset_Ivec_numpy[:,9]
y_test = dataset_Ovec_numpy[:,9]
model = Sequential()
model.add(Dense(activation="relu", input_dim=X_train.shape[0], units = X_train.shape[1], kernel_initializer="uniform"))
model.add(Dropout(0.285))
model.add(Dense(activation="linear", input_dim=y_train.shape[1], units = X_train.shape[1], kernel_initializer="uniform"))
model.compile(optimizer="adagrad", loss="mean_squared_error", metrics=["accuracy"])
# model = baseline_model()
result = model.fit(X_train, y_train, batch_size=2, epochs=20, validation_data=(X_test, y_test))
I tried to write this code; however, it is too confusing for me, and I have been stuck for many days now. Please, can someone help?
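For reference, a minimal sketch of one way to set this up, assuming Ivec.xls is 141 rows x 10 columns and Ovec.xls is 400 rows x 10 columns, so each matrix must be transposed to give Keras one sample per row (the hidden layer size here is an arbitrary choice):

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

# Transpose so that each of the 10 columns becomes one sample (row).
X = pd.read_excel(r'Ivec.xls', header=None).to_numpy().T  # shape (10, 141)
y = pd.read_excel(r'Ovec.xls', header=None).to_numpy().T  # shape (10, 400)

X_train, y_train = X[:9], y[:9]    # Ivec(:,1:9) / Ovec(:,1:9)
X_test, y_test = X[9:10], y[9:10]  # Ivec(:,10) / Ovec(:,10)

model = Sequential()
model.add(Dense(200, activation='relu', input_dim=141))  # 200 units: arbitrary
model.add(Dense(400, activation='linear'))               # one unit per output element
model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(X_train, y_train, batch_size=2, epochs=20)
predicted_Ovec10 = model.predict(X_test)  # compare with y_test

Note that accuracy is a classification metric and is not meaningful for a regression like this, which is why it is omitted from compile.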
I am on the learning curve of neural networks, using Keras to forecast the next value based on a specified window of previous values. Here is my code:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, Dense
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.preprocessing.sequence import TimeseriesGenerator

scaler = StandardScaler()
scaler.fit(train)
scaled_train = scaler.transform(train)
scaled_test = scaler.transform(test)
# define generator
n_input = 47
n_features = 1
generator = TimeseriesGenerator(scaled_train, scaled_train, length = n_input, batch_size=12)
initializer = tf.keras.initializers.GlorotNormal()
model = Sequential()
model.add(LSTM(12,activation = 'relu', input_shape = (n_input, n_features),kernel_initializer = initializer))
model.add(Dense(1))
model.compile(optimizer = 'adam', loss = 'mae')
The problem is that every time I retrain the model without making any changes, its performance changes. Normally it should only change when I change something, e.g. add new hidden layers or change the activation function.
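Retraining changes the result because the initial weights are drawn from fresh random numbers on every run; the model definition is unchanged, but its starting point is not. Fixing the seeds before building the model makes runs repeatable; a minimal sketch:

import random
import numpy as np
import tensorflow as tf

# Fix every random source before building/training so each
# retraining run starts from the same initial weights.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)

# The initializer can also be seeded directly:
initializer = tf.keras.initializers.GlorotNormal(seed=42)

Some GPU ops are still non-deterministic, so small run-to-run differences can remain.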
I am trying to use a MobileNet model but I am facing the issue mentioned above. I don't know whether it occurs due to train_test_split or something else. The architecture is shown below. Can I use model.fit instead of model.fit_generator here?
from glob import glob
from keras.applications.mobilenet import MobileNet
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

mobilenet = MobileNet(input_shape=(224,224,3), weights='imagenet', include_top=False)
# don't train existing weights
for layer in mobilenet.layers:
layer.trainable = False
folders = glob('/content/drive/MyDrive/AllClasses/*')
print("Total number of classes are",len(folders))
x = Flatten()(mobilenet.output)
prediction = Dense(len(folders), activation='softmax')(x)
model = Model(inputs=mobilenet.input, outputs=prediction)
model.summary()
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
dataset = ImageDataGenerator(rescale=1./255)
dataset = dataset.flow_from_directory('/content/drive/MyDrive/AllClasses',target_size=(224, 224),batch_size=32,class_mode='categorical',color_mode='grayscale')
train_data, test_data = train_test_split(dataset,random_state=42, test_size=0.20,shuffle=True)
r = model.fit(train_data,validation_data=(test_data),epochs=5)
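On the two questions above: train_test_split expects indexable arrays, so it cannot meaningfully split the iterator returned by flow_from_directory; and yes, in TF 2.x model.fit accepts generators directly (fit_generator is deprecated). A sketch of the usual alternative, using the generator's own validation_split (assuming the same directory layout):

datagen = ImageDataGenerator(rescale=1./255, validation_split=0.20)

# Keep color_mode='rgb' (the default): MobileNet built with
# input_shape=(224, 224, 3) expects 3 channels, so 'grayscale' would fail.
train_data = datagen.flow_from_directory('/content/drive/MyDrive/AllClasses',
                                         target_size=(224, 224), batch_size=32,
                                         class_mode='categorical', subset='training')
test_data = datagen.flow_from_directory('/content/drive/MyDrive/AllClasses',
                                        target_size=(224, 224), batch_size=32,
                                        class_mode='categorical', subset='validation')

r = model.fit(train_data, validation_data=test_data, epochs=5)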
I have the code for a CNN model, but I need the output of each and every layer of my model, and my testing image is passed after compiling the model. So is there a way to see the output of each layer of my CNN model by taking my test image as input?
from keras.models import Sequential, Model
from keras.layers import Convolution2D, MaxPooling2D, Flatten

classifier = Sequential()
classifier.add(Convolution2D(32, 3, 3, input_shape=(64, 64, 3), activation="relu"))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Convolution2D(64, 3, 3, activation="relu"))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
classifier.add(Flatten())
My test function is this:
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('spot.png',target_size = (64,64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image,axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] == 0:
    prediction = 'mango_powder'
else:
    prediction = 'mango_spot'
print(prediction)
Make a model that outputs all layers:
inputs = classifier.input
outputs = [classifier.layers[i].output for i in range(len(classifier.layers))]
model = Model(inputs, outputs)
Use this model to predict with the same inputs you would use in classifier:
all_layers_predictions = model.predict(images)
Here, all_layers_predictions will be a list with the outputs of each layer.
You might need to ignore the first layer (i = 0) if the input layer appears in classifier.summary().
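For example, with the model above (a small sketch, assuming images is a preprocessed batch of shape (n, 64, 64, 3)):

all_layers_predictions = model.predict(images)
for layer, out in zip(classifier.layers, all_layers_predictions):
    print(layer.name, out.shape)
# e.g. the first entry holds the feature maps of the first convolution,
# shape (n, 62, 62, 32) for a 64x64 input and a 3x3 valid convolution.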
I am working on a binary classification problem in Keras. The loss function I use is binary_crossentropy and the metric is metrics=['accuracy']. Since the two classes are imbalanced, I use class_weight='auto' when I fit the training data set to the model.
To see the performance, I print out the accuracy with
print(GNN.model.test_on_batch([test_sample_1, test_sample_2], test_label)[1])
The output is 0.973. But this result is different when I use the following lines to get the prediction accuracy:
predict_label = GNN.model.predict([test_sample_1, test_sample_2])
rounded = predict_label.round(1)
print((rounded == test_label).sum() / float(rounded.shape[0]))
which is 0.953.
So I am wondering how metrics=['accuracy'] evaluates the model performance and why the results are different.
For details, I attached the model summary below.
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, concatenate

input_size = self.n_feature
encoder_size = 2000
dropout_rate = 0.5
X1 = Input(shape=(input_size, ), name='input_1')
X2 = Input(shape=(input_size, ), name='input_2')
encoder = Sequential()
encoder.add(Dropout(dropout_rate, input_shape=(input_size, )))
encoder.add(Dense(encoder_size, activation='tanh'))
encoded_1 = encoder(X1)
encoded_2 = encoder(X2)
merged = concatenate([encoded_1, encoded_2])
comparer = Sequential()
comparer.add(Dropout(dropout_rate, input_shape=(encoder_size * 2, )))
comparer.add(Dense(500, activation='relu'))
comparer.add(Dropout(dropout_rate))
comparer.add(Dense(200, activation='relu'))
comparer.add(Dropout(dropout_rate))
comparer.add(Dense(1, activation='sigmoid'))
Y = comparer(merged)
model = Model(inputs=[X1, X2], outputs=Y)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
self.model = model
And I train the model with:
self.hist = self.model.fit(
    x=[train_sample_1, train_sample_2],
    y=train_label,
    class_weight='auto',
    validation_split=0.1,
    batch_size=batch_size,
    epochs=epochs,
    callbacks=callbacks)
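For reference, Keras's binary accuracy rounds each prediction to the nearest integer (i.e. a 0.5 threshold) before comparing with the label, whereas predict_label.round(1) above rounds to one decimal place, so a prediction only matches a 0/1 label when it is (roughly) below 0.05 or above 0.95. A NumPy sketch of what metrics=['accuracy'] computes here:

import numpy as np

# binary_accuracy = mean(round(y_pred) == y_true), i.e. threshold at 0.5
predict_label = GNN.model.predict([test_sample_1, test_sample_2])
keras_style_acc = np.mean(np.round(predict_label) == test_label)
print(keras_style_acc)  # should be close to test_on_batch(...)[1]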