In scikit-learn, how many neurons are in the output layer? As stated here, you can only specify the hidden layers and their neurons, but nothing about the output layer, so I am not sure how scikit-learn implements the output layer.
Does it make sense to use a softmax activation function for an output layer that has only a single neuron?
Test:
Setup:
In [227]: %paste
clf = MLPClassifier()
m = 10**3
n = 64
df = pd.DataFrame(np.random.randint(100, size=(m, n))).add_prefix('x') \
       .assign(y=np.random.choice([-1, 1], m))
X_train, X_test, y_train, y_test = \
    train_test_split(df.drop('y', axis=1), df['y'], test_size=0.2, random_state=33)
clf.fit(X_train, y_train)
## -- End pasted text --
Out[227]:
MLPClassifier(activation='relu', alpha=0.0001, batch_size='auto', beta_1=0.9,
beta_2=0.999, early_stopping=False, epsilon=1e-08,
hidden_layer_sizes=(100,), learning_rate='constant',
learning_rate_init=0.001, max_iter=200, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=None,
shuffle=True, solver='adam', tol=0.0001, validation_fraction=0.1,
verbose=False, warm_start=False)
Number of outputs:
In [229]: clf.n_outputs_
Out[229]: 1
Number of layers:
In [228]: clf.n_layers_
Out[228]: 3
The number of iterations the solver has run:
In [230]: clf.n_iter_
Out[230]: 60
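The fitted estimator also exposes the output activation that was chosen, via the `out_activation_` attribute. A minimal sketch on synthetic binary-labelled data (the low `max_iter` is just to keep the run short, so a ConvergenceWarning is expected):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# synthetic binary problem with labels -1/1, as in the test above
rng = np.random.RandomState(0)
X = rng.randint(100, size=(100, 4)).astype(float)
y = rng.choice([-1, 1], 100)

clf = MLPClassifier(max_iter=10).fit(X, y)
print(clf.n_outputs_)       # 1 output neuron for a binary problem
print(clf.out_activation_)  # 'logistic' for binary classification
```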
Here is an excerpt of the source code where the activation function for the output layer will be chosen:
# Output for regression
if not is_classifier(self):
    self.out_activation_ = 'identity'
# Output for multi class
elif self._label_binarizer.y_type_ == 'multiclass':
    self.out_activation_ = 'softmax'
# Output for binary class and multi-label
else:
    self.out_activation_ = 'logistic'
UPDATE: MLPClassifier binarizes labels internally (in a one-vs-all fashion), so the logistic output should also work well with labels that differ from [0, 1]:
if not incremental:
    self._label_binarizer = LabelBinarizer()
    self._label_binarizer.fit(y)
    self.classes_ = self._label_binarizer.classes_
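As a minimal sketch of what that binarization does: LabelBinarizer maps the -1/1 labels used in the test above onto 0/1 before the logistic output is fit.

```python
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
# labels -1/1, as generated by np.random.choice([-1, 1], m) above
binarized = lb.fit_transform([-1, 1, 1, -1])
print(lb.classes_)        # [-1  1]
print(binarized.ravel())  # [0 1 1 0]
```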
Related
I wrote code for kNN using sklearn and then compared the predictions against WEKA's kNN. The comparison was done on 10 test-set predictions; only a single one shows a large difference (>1.5), while all the others are exactly the same. So I am not sure whether my code is working correctly. Here is my code:
import pandas as pd
from scipy import stats
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split, cross_val_predict, LeaveOneOut

df = pd.read_csv('xxxx.csv')
X = df.drop(['Name', 'activity'], axis=1)
y = df['activity']
Xstd = StandardScaler().fit_transform(X)
x_train, x_test, y_train, y_test = train_test_split(Xstd, y, test_size=0.2,
                                                    shuffle=False, random_state=None)
print(x_train.shape, x_test.shape)
X_train_trans = x_train
X_test_trans = x_test

for i in range(2, 3):
    knn_regressor = KNeighborsRegressor(n_neighbors=i, algorithm='brute',
                                        weights='uniform', metric='euclidean',
                                        n_jobs=1, p=2)
    CV_pred_train = cross_val_predict(knn_regressor, X_train_trans, y_train,
                                      n_jobs=-1, verbose=0, cv=LeaveOneOut())
    print("LOO Q2: ", metrics.r2_score(y_train, CV_pred_train).round(2))

    # Train/test predictions
    knn_regressor.fit(X_train_trans, y_train)
    train_r2 = knn_regressor.score(X_train_trans, y_train)
    y_train_pred = knn_regressor.predict(X_train_trans).round(3)
    train_r2_1 = metrics.r2_score(y_train, y_train_pred)
    y_test_pred = knn_regressor.predict(X_test_trans).round(3)
    train_r = stats.pearsonr(y_train, y_train_pred)
    abs_error_train = (y_train - y_train_pred)
    train_predictions = pd.DataFrame({'Actual': y_train,
                                      'Predicted': y_train_pred,
                                      'error': abs_error_train.round(3)})
    MAE_train = metrics.mean_absolute_error(y_train, y_train_pred)
    abs_error_test = (y_test_pred - y_test)
    test_predictions = pd.DataFrame({'Actual': y_test,
                                     'Predicted': y_test_pred,
                                     'error': abs_error_test.round(3)})
    test_r = stats.pearsonr(y_test, y_test_pred)
    test_r2 = metrics.r2_score(y_test, y_test_pred)
    MAE_test = metrics.mean_absolute_error(y_test, y_test_pred).round(3)
    print(test_predictions)
The train-set statistics are almost the same in both the sklearn and WEKA kNN.
The sklearn predictions are:
Actual  Predicted  error
6.00 5.285 -0.715
5.44 5.135 -0.305
6.92 6.995 0.075
7.28 7.005 -0.275
5.96 6.440 0.480
7.96 7.150 -0.810
7.30 6.660 -0.640
6.68 7.200 0.520
***4.60 6.950 2.350***
and the WEKA predictions are:
Actual  Predicted  error
6 5.285 -0.715
5.44 5.135 -0.305
6.92 6.995 0.075
7.28 7.005 -0.275
5.96 6.44 0.48
7.96 7.15 -0.81
7.3 6.66 -0.64
6.68 7.2 0.52
***4.6 5.285 0.685***
Parameters used in both algorithms: k = 2, brute force for the distance calculation, Euclidean metric.
Any suggestions for the difference?
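One way to diagnose a single divergent prediction is to look at the neighbors each implementation actually picks: with k = 2 and standardized features, distance ties are possible, and the two libraries may break them differently. A minimal sketch (on synthetic stand-in data, since the real csv is not shown) of inspecting the chosen neighbors via `kneighbors`:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# synthetic stand-in for the real standardized data
rng = np.random.RandomState(0)
X_train = rng.rand(20, 3)
y_train = rng.rand(20)
X_query = rng.rand(1, 3)

knn = KNeighborsRegressor(n_neighbors=2, algorithm='brute', metric='euclidean')
knn.fit(X_train, y_train)

# distances and indices of the 2 neighbors used for each query point;
# comparing this list against WEKA's nearest neighbors shows whether
# a tie was broken differently on the one divergent sample
dist, idx = knn.kneighbors(X_query)
print(dist, idx)

# with uniform weights, the prediction is just the mean of the neighbors' targets
print(knn.predict(X_query)[0], y_train[idx[0]].mean())
```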
I am training a neural network to classify between two classes. I got a training set from the internet (100k examples) and generated a small dataset (600 examples) from our internal experiment, which I use as the validation set, so that a model learned on the open-domain data can be adjusted to our domain.
I trained the model and ran a couple of experiments, but the problem is that I am only getting around 60% validation-set accuracy. I need suggestions on how to improve it.
Experiment No. 1
Code:
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

model_without_emb = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=160,
                              weights=[embeddings_matrix], trainable=True),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model_without_emb.compile(loss='binary_crossentropy', optimizer='adam',
                          metrics=['accuracy'])
model_without_emb.summary()

es = EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=25)
num_epochs = 50
history_without_emb = model_without_emb.fit(train_sequences, train_labels,
                                            epochs=num_epochs,
                                            validation_data=(test_sequences, test_labels),
                                            verbose=1, callbacks=[es])
Experiment 2 (reducing the number of units)
Code:
model_without_emb = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=160,
                              weights=[embeddings_matrix], trainable=False),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Conv1D(6, 3, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(6),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
opt = tf.compat.v1.keras.optimizers.Adam(learning_rate=0.000005, beta_1=0.9,
                                         beta_2=0.999, epsilon=1e-07,
                                         decay=0.0, amsgrad=True)
model_without_emb.compile(loss='binary_crossentropy', optimizer=opt,
                          metrics=['accuracy'])
Experiment 3 (removing the convolution layer)
Code:
model_without_emb = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=160,
                              weights=[embeddings_matrix], trainable=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
opt = tf.compat.v1.keras.optimizers.Adam(learning_rate=0.000005, beta_1=0.9,
                                         beta_2=0.999, epsilon=1e-07,
                                         decay=0.0, amsgrad=True)
model_without_emb.compile(loss='binary_crossentropy', optimizer=opt,
                          metrics=['accuracy'])
I have built a neural network for performing regression. However, if I perform cross-validation before making predictions, the output changes. Below are the graphs with and without cross-validation.
With Cross Validation
Without Cross Validation
The code that I use for cross-validation:
import numpy as np
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold, cross_val_score

epoch = 5000
n_cols = X_train.shape[1]

def baseline_model():
    model = Sequential()
    model.add(Dense(3, activation='sigmoid', input_shape=(n_cols,)))
    model.add(Dense(1, activation='linear'))
    model.compile(optimizer='adam', loss='mse')
    return model

estimator = KerasRegressor(build_fn=baseline_model, epochs=epoch, batch_size=16, verbose=0)
kfold = KFold(n_splits=5)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print("Results: %.10f (%.10f) MSE" % (results.mean(), results.std()))
print("RMSE:", np.sqrt(abs(results.mean())))
print(results)
For prediction:
epoch = 5000
n_cols = X_train.shape[1]

def modelling():
    model = Sequential()
    model.add(Dense(4, activation='tanh', input_shape=(n_cols,)))
    model.add(Dense(1, activation='linear'))
    model.compile(optimizer='adam', loss='mse')
    return model

model = modelling()
history = model.fit(X_train, y_train, epochs=epoch, validation_split=0.3,
                    batch_size=16, verbose=0)
Using Keras with the TensorFlow backend.
That's the essence of cross-validation. Instead of one evaluation, it yields the mean and standard deviation of many evaluations. In your example you are using a 5-split KFold, which means you learn on 4/5 of the training data and test on the remaining 1/5, five times.
Cross-validation is used to check that your model is not overfitting.
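A minimal sketch of what those five folds look like: each iteration holds out a different 1/5 of the data for evaluation.

```python
import numpy as np
from sklearn.model_selection import KFold

# 50 toy samples, split the same way cross_val_score splits them above
X = np.arange(100).reshape(50, 2)
kfold = KFold(n_splits=5)

for train_idx, test_idx in kfold.split(X):
    # 4/5 of the data (40 samples) for training, the remaining 1/5 (10) for evaluation
    print(len(train_idx), len(test_idx))
```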
I have a CNN that saves the bottleneck features of the training and test data using the VGG16 architecture, then feeds the features into my custom fully connected layers to classify the images.
from keras import applications, optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Flatten, Dropout, Dense
from keras.callbacks import EarlyStopping, ModelCheckpoint

# create data augmentations for the training set; helps reduce overfitting and find more features
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
# use ImageDataGenerator to load validation images; data augmentation is not
# necessary for the validation process
val_datagen = ImageDataGenerator(rescale=1./255)

# load the VGG16 model, pretrained on the ImageNet database
model = applications.VGG16(include_top=False, weights='imagenet')

# generator to load images into the NN
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)

# total number of images used for training data
num_train = len(train_generator.filenames)

# save features to a numpy array file so the features do not overload memory
bottleneck_features_train = model.predict_generator(train_generator, num_train // batch_size)

val_generator = val_datagen.flow_from_directory(
    val_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
num_val = len(val_generator.filenames)
bottleneck_features_validation = model.predict_generator(val_generator, num_val // batch_size)
# used to retrieve the labels of the images
label_datagen = ImageDataGenerator(rescale=1./255)

# generators can create class labels for each image in either directory
train_label_generator = label_datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)

# total number of images used for training data
num_train = len(train_label_generator.filenames)

# load features from VGG16 and pair each image with its corresponding label
# (0 for normal, 1 for pneumonia)
# train_data = np.load('xray/bottleneck_features_train.npy')

# get the class labels generated by train_label_generator
train_labels = train_label_generator.classes

val_label_generator = label_datagen.flow_from_directory(
    val_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode=None,
    shuffle=False)
num_val = len(val_label_generator.filenames)
# val_data = np.load('xray/bottleneck_features_validation.npy')
val_labels = val_label_generator.classes
# create fully connected layers, replacing the ones cut off from the VGG16 model
model = Sequential()
# converts the model's expected input dimensions to the same shape as the bottleneck feature arrays
model.add(Flatten(input_shape=bottleneck_features_train.shape[1:]))
# ignores a fraction of input neurons so they do not become co-dependent on each other;
# helps prevent overfitting
model.add(Dropout(0.7))
# normal fully connected layer with relu activation: replaces all negative inputs with 0
# and does not fire the neuron, creating a lighter network
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.7))
# output layer to classify 0 or 1
model.add(Dense(1, activation='sigmoid'))

# compile the model and specify which optimizer and loss function to use;
# the optimizer updates the weights toward optimal values; the Adam optimizer maintains
# separate learning rates for each weight and updates them accordingly;
# the cross-entropy loss measures the model's ability to correctly classify 0 or 1
model.compile(optimizer=optimizers.Adam(lr=0.0007), loss='binary_crossentropy', metrics=['accuracy'])

# used to stop training if the NN shows no improvement for 5 epochs
early_stop = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=5, verbose=1)
# checks each epoch as it runs and saves the weight file from the model with the lowest validation loss
checkpointer = ModelCheckpoint(filepath=top_model_weights_dir, verbose=1, save_best_only=True)

# fit the model to the data
history = model.fit(bottleneck_features_train, train_labels,
                    epochs=epochs,
                    batch_size=batch_size,
                    callbacks=[early_stop, checkpointer],
                    verbose=2,
                    validation_data=(bottleneck_features_validation, val_labels))
After calling train_top_model(), the CNN gets an 86% accuracy after around 10 epochs.
However, when I try implementing this architecture by building the fully connected layers directly on top of the VGG16 layers, the network gets stuck at a val_acc of 0.5000 and basically does not train. Are there any issues with the code?
epochs = 10
batch_size = 20

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)
num_train = len(train_generator.filenames)

val_generator = val_datagen.flow_from_directory(
    val_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)
num_val = len(val_generator.filenames)

base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_width, img_height, 3))
x = base_model.output
x = Flatten()(x)
x = Dropout(0.7)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.7)(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)

for layer in model.layers[:19]:
    layer.trainable = False

checkpointer = ModelCheckpoint(filepath=top_model_weights_dir, verbose=1, save_best_only=True)
model.compile(optimizer=optimizers.Adam(lr=0.0007), loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit_generator(train_generator,
                              steps_per_epoch=(num_train // batch_size),
                              validation_data=val_generator,
                              validation_steps=(num_val // batch_size),
                              callbacks=[checkpointer],
                              verbose=1,
                              epochs=epochs)
The reason is that in the second approach you have not frozen the VGG16 layers; in other words, you are training the whole network, whereas in the first approach you were only training the weights of your fully connected layers.
Use something like this:
for layer in base_model.layers[:end_layer]:
    layer.trainable = False
where end_layer is the index of the last layer you are importing from the pretrained model.
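As a minimal, self-contained sketch of the idea (using a small stand-in base model with hypothetical layer sizes instead of VGG16, to avoid downloading the ImageNet weights): freezing the base layers removes their weights from the trainable set, which you can verify by counting trainable weights after compiling.

```python
import tensorflow as tf

# small stand-in for the pretrained base (hypothetical sizes, not VGG16)
inputs = tf.keras.Input(shape=(4,))
h = tf.keras.layers.Dense(8, activation='relu', name='base_1')(inputs)
h = tf.keras.layers.Dense(8, activation='relu', name='base_2')(h)
base_model = tf.keras.Model(inputs=inputs, outputs=h)

# new head on top of the base, as in the question
out = tf.keras.layers.Dense(1, activation='sigmoid', name='head')(base_model.output)
model = tf.keras.Model(inputs=base_model.input, outputs=out)

# freeze every layer of the base; only the head remains trainable
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='adam', loss='binary_crossentropy')
# only the head's kernel and bias are left in the trainable set
print(len(model.trainable_weights))  # 2
```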
I have finished a PyTorch MLP model for the MNIST dataset, but got two different results: 0.90+ accuracy when using MNIST dataset from PyTorch, but ~0.10 accuracy when using MNIST dataset from Keras.
Below is my code; dependencies: PyTorch 0.3.0.post4, Keras 2.1.3, TensorFlow backend 1.4.1 (GPU version).
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import torch as pt
import torchvision as ptv
from keras.datasets import mnist
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
# training data from PyTorch
train_set = ptv.datasets.MNIST("./data/mnist/train", train=True, transform=ptv.transforms.ToTensor(), download=True)
test_set = ptv.datasets.MNIST("./data/mnist/test", train=False, transform=ptv.transforms.ToTensor(), download=True)
train_dataset = DataLoader(train_set, batch_size=100, shuffle=True)
test_dataset = DataLoader(test_set, batch_size=10000, shuffle=True)
class MLP(pt.nn.Module):
    """The multi-layer perceptron."""
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = pt.nn.Linear(784, 512)
        self.fc2 = pt.nn.Linear(512, 128)
        self.fc3 = pt.nn.Linear(128, 10)
        self.use_gpu = True

    def forward(self, din):
        din = din.view(-1, 28 * 28)
        dout = F.relu(self.fc1(din))
        dout = F.relu(self.fc2(dout))
        # return F.softmax(self.fc3(dout))
        return self.fc3(dout)
model = MLP().cuda()
print(model)
# loss func and optim
optimizer = pt.optim.SGD(model.parameters(), lr=1)
criterion = pt.nn.CrossEntropyLoss().cuda()
def evaluate_acc(pred, label):
    pred = pred.cpu().data.numpy()
    label = label.cpu().data.numpy()
    test_np = (np.argmax(pred, 1) == label)
    test_np = np.float32(test_np)
    return np.mean(test_np)
def evaluate_loader(loader):
    print("evaluating ...")
    accuracy_list = []
    for i, (inputs, labels) in enumerate(loader):
        inputs = pt.autograd.Variable(inputs).cuda()
        labels = pt.autograd.Variable(labels).cuda()
        outputs = model(inputs)
        accuracy_list.append(evaluate_acc(outputs, labels))
    print(sum(accuracy_list) / len(accuracy_list))
def training(d, epochs):
    for x in range(epochs):
        for i, data in enumerate(d):
            optimizer.zero_grad()
            (inputs, labels) = data
            inputs = pt.autograd.Variable(inputs).cuda()
            labels = pt.autograd.Variable(labels).cuda()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            if i % 200 == 0:
                print(i, ":", evaluate_acc(outputs, labels))
# Training MLP for 4 epochs with MNIST dataset from PyTorch
training(train_dataset, 4)
# The accuracy is ~0.96.
evaluate_loader(test_dataset)
print("###########################################################")
def load_mnist():
    (x, y), (x_test, y_test) = mnist.load_data()
    x = x.reshape((-1, 1, 28, 28)).astype(np.float32)
    x_test = x_test.reshape((-1, 1, 28, 28)).astype(np.float32)
    y = y.astype(np.int64)
    y_test = y_test.astype(np.int64)
    print("x.shape", x.shape, "y.shape", y.shape,
          "\nx_test.shape", x_test.shape, "y_test.shape", y_test.shape)
    return x, y, x_test, y_test

class TMPDataset(Dataset):
    """Dataset for loading the Keras MNIST dataset."""
    def __init__(self, a, b):
        self.x = a
        self.y = b

    def __getitem__(self, item):
        return self.x[item], self.y[item]

    def __len__(self):
        return len(self.y)
x_train, y_train, x_test, y_test = load_mnist()
# Create dataloader for MNIST dataset from Keras.
test_loader = DataLoader(TMPDataset(x_test, y_test), num_workers=1, batch_size=10000)
train_loader = DataLoader(TMPDataset(x_train, y_train), shuffle=True, batch_size=100)
# Evaluate the MLP trained on the PyTorch dataset; the accuracy is ~0.96.
evaluate_loader(test_loader)
evaluate_loader(train_loader)
model = MLP().cuda()
print(model)
optimizer = pt.optim.SGD(model.parameters(), lr=1)
criterion = pt.nn.CrossEntropyLoss().cuda()
# Train now on MNIST dataset from Keras.
training(train_loader, 4)
# Evaluate the trained model on the MNIST dataset from Keras; the resulting performance is ~0.10...
evaluate_loader(test_loader)
evaluate_loader(train_loader)
I checked some samples from the Keras MNIST dataset and found no error.
I am wondering what is wrong with the dataset.
The code runs without error; run it to see the results.
The MNIST data coming from Keras are not normalized; following the Keras MNIST MLP example, you should do it manually, i.e. you should include the following in your load_mnist() function:
x /= 255
x_test /= 255
Not sure about PyTorch, but it would seem that the MNIST data from their own utility functions come already normalized (as is the case with Tensorflow - see the third point in my answer here).
A 10% accuracy (i.e. equivalent to random guessing) in the case of non-normalized input data is perfectly consistent with that.
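A minimal sketch of the fix (using a random uint8-range array as a stand-in for the raw MNIST images, which are pixel values in 0-255):

```python
import numpy as np

# stand-in for mnist.load_data() output: raw pixel values in 0..255,
# cast to float32 as in the question's load_mnist()
x = np.random.randint(0, 256, size=(10, 28, 28)).astype(np.float32)

x /= 255  # scale to [0, 1], matching what PyTorch's ToTensor() transform does

print(x.min(), x.max())  # both now within [0, 1]
```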