How to use validation monitor in Softmax classifier in tensorflow - machine-learning

I just edited https://github.com/tensorflow/tensorflow/blob/r0.10/tensorflow/examples/tutorials/mnist/mnist_softmax.py to enable logging by using a validation monitor:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
sess = tf.InteractiveSession()
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
validation_metrics = {"accuracy": tf.contrib.metrics.streaming_accuracy,
                      "precision": tf.contrib.metrics.streaming_precision,
                      "recall": tf.contrib.metrics.streaming_recall}
validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
    mnist.test.images,
    mnist.test.labels,
    every_n_steps=50, metrics=validation_metrics,
    early_stopping_metric="loss",
    early_stopping_metric_minimize=True,
    early_stopping_rounds=200)
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Train
tf.initialize_all_variables().run()
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
But I am confused about how to set up validation_monitor in this program. I have learned that with DNNClassifier, the validation_monitor is used in the following way:
# Fit model.
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000, monitors=[validation_monitor])
So how can I use validation_monitor with this softmax classifier?

I don't think there's an easy way to do that, since ValidationMonitor is part of tf.contrib, i.e. contributed code that is not supported by the core TensorFlow team. So unless you are using some higher-level API from tf.contrib (like DNNClassifier), you might not be able to simply pass a ValidationMonitor instance to an optimizer's minimize method.
I believe your options are:
Check how DNNClassifier's fit method is implemented and utilise the same approach by manually handling a ValidationMonitor instance in your graph and session.
Implement your own validation routine for logging and/or early stopping, or whatever it is you intend to use ValidationMonitor for (see the sketch below).
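For the second option, here is a minimal sketch of a hand-rolled validation loop (it reuses x, y_, y, train_step and mnist from the question; the early-stopping thresholds and variable names such as best_acc and patience are made up for illustration):
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

best_acc = 0.0
patience = 200            # steps without improvement before stopping
last_improvement = 0
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})
    if i % 50 == 0:
        # evaluate on the held-out validation split every 50 steps
        val_acc = accuracy.eval({x: mnist.validation.images,
                                 y_: mnist.validation.labels})
        print("step %d, validation accuracy %.4f" % (i, val_acc))
        if val_acc > best_acc:
            best_acc = val_acc
            last_improvement = i
        elif i - last_improvement > patience:
            print("no improvement for %d steps, stopping early" % patience)
            break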

Related

Target size (torch.Size([32, 9])) must be the same as input size (torch.Size([32, 10]))

I have 10 classes. I have a model such as:
import torch  # needed for torch.manual_seed below
import torch.nn as nn
from brevitas.nn import QuantLinear, QuantReLU

# Setting seeds for reproducibility
torch.manual_seed(0)

model = nn.Sequential(
    QuantLinear(input_size, hidden1, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden1),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden1, hidden2, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden2),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden2, hidden3, bias=True, weight_bit_width=weight_bit_width),
    nn.BatchNorm1d(hidden3),
    nn.Dropout(0.5),
    QuantReLU(bit_width=act_bit_width),
    QuantLinear(hidden3, num_classes, bias=True, weight_bit_width=weight_bit_width)
)
model.to(device)
and I have defined my training phase as:
def train(model, train_loader, optimizer, criterion):
    losses = []
    # ensure model is in training mode
    model.train()
    for i, data in enumerate(train_loader, 0):
        inputs, target = data['pointcloud'].to(device).float(), data['category'].to(device)
        target = torch.nn.functional.one_hot(target)
        optimizer.zero_grad()
        # forward pass
        output = model(inputs)
        loss = criterion(output, target.float())
        # backward pass + run optimizer to update weights
        loss.backward()
        optimizer.step()
        # keep track of loss value
        losses.append(loss.data.cpu().numpy())
    return losses
When I run the training code:
import numpy as np
from sklearn.metrics import accuracy_score
from tqdm import tqdm, trange

# Setting seeds for reproducibility
torch.manual_seed(0)
np.random.seed(0)

running_loss = []
running_test_acc = []
t = trange(num_epochs, desc="Training loss", leave=True)
for epoch in t:
    loss_epoch = train(model, train_loader, optimizer, criterion)
    test_acc = test(model, valid_loader)
    t.set_description("Training loss = %f test accuracy = %f" % (np.mean(loss_epoch), test_acc))
    t.refresh()  # to show the update immediately
    running_loss.append(loss_epoch)
    running_test_acc.append(test_acc)
I get an error as:
Target size (torch.Size([32, 9])) must be the same as input size
(torch.Size([32, 10]))
Please help me figure out what the solution could be. I added the one-hot encoding because I have seen solutions like that before.
The error is pretty straightforward: the criterion (which you didn't show in the code) expects the input and the target arguments to have the same size, but they don't.
The problem is that you're calling torch.nn.functional.one_hot(target) without telling it how many classes you need for the one-hot encoding; the number of classes is then inferred as the largest value in target plus 1 (see: https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html). You should change it to torch.nn.functional.one_hot(target, num_classes=10).
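A minimal sketch of that inference behaviour and of the fix (the batch values are made up for illustration):
import torch
import torch.nn.functional as F

# A batch whose largest label happens to be 8, i.e. class 9 is absent.
target = torch.tensor([0, 3, 8, 2])

# num_classes is inferred as max(target) + 1 = 9, giving shape (4, 9),
# which no longer matches a model output of shape (batch, 10).
print(F.one_hot(target).shape)                  # torch.Size([4, 9])

# Passing num_classes explicitly always yields 10 columns.
print(F.one_hot(target, num_classes=10).shape)  # torch.Size([4, 10])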

After hyperparameter tuning accuracy remains the same

I was trying to tune the hyperparameters, but after I did, the accuracy score has not changed at all. What am I doing wrong?
# Log reg
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

logreg = LogisticRegression(C=0.3326530612244898, max_iter=100, tol=0.01)
logreg.fit(X_train, y_train)

y_pred = logreg.predict(X_test)

print('Accuracy of log reg is: ', logreg.score(X_test, y_test))

confusion_matrix(y_test, y_pred)
# 0.9181286549707602 - accuracy before tuning
Output:
Accuracy of log reg is: 0.9181286549707602
array([[ 54,   9],
       [  5, 103]])
Here is my GridSearchCV code:
import numpy as np  # needed for np.linspace below
from sklearn.model_selection import GridSearchCV

params = {'tol': [0.01, 0.001, 0.0001],
          'max_iter': [100, 150, 200],
          'C': np.linspace(1, 20) / 10}
grid_model = GridSearchCV(logreg, param_grid=params, cv=5)
grid_model_result = grid_model.fit(X_train, y_train)
print(grid_model_result.best_score_, grid_model_result.best_params_)
Output:
0.8867405063291139 {'C': 0.3326530612244898, 'max_iter': 100, 'tol': 0.01}
The problem was that in the first chunk you evaluated the model's performance on the test set, while the best_score_ reported by GridSearchCV is the mean cross-validation score on the training data after hyperparameter optimization, so the two numbers are not directly comparable.
The code below shows that both procedures, when used to predict the test set labels, perform equally well in terms of accuracy (~0.93).
Note that you might want to consider a hyperparameter grid with other solvers and a larger range of max_iter, because I obtained convergence warnings.
# Load packages
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
# Load the dataset and split in X and y
df = pd.read_csv('Breast_cancer_data.csv')
X = df.iloc[:, 0:5]
y = df.iloc[:, 5]
# Perform train and test split (80/20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Initialize a model
Log = LogisticRegression(n_jobs=-1)
# Initialize a parameter grid
params = [{'tol': [0.01, 0.001, 0.0001],
           'max_iter': [100, 150, 200],
           'C': np.linspace(1, 20) / 10}]
# Perform GridSearchCV and store the best parameters
grid_model = GridSearchCV(Log,param_grid=params,cv=5)
grid_model_result = grid_model.fit(X_train,y_train)
best_param = grid_model_result.best_params_
# This step is only to prove that both procedures actually result in the same accuracy score
Log2 = LogisticRegression(C=best_param['C'], max_iter=best_param['max_iter'], tol=best_param['tol'], n_jobs=-1)
Log2.fit(X_train, y_train)
# Perform two predictions: one straight from the GridSearchCV object, the other with the best params entered manually
y_pred1 = grid_model_result.best_estimator_.predict(X_test)
y_pred2 = Log2.predict(X_test)
# Compare the accuracy scores and see that both are the same
print("Accuracy:",metrics.accuracy_score(y_test, y_pred1))
print("Accuracy:",metrics.accuracy_score(y_test, y_pred2))

Precision recall curve when results of estimator known

I have the results of an estimator running on X, as well as the ground truth, and I want to use plot_precision_recall_curve. But that requires passing in the estimator and X, which I can't do: the estimator is very complex and resides in another system. What should I do? (It would be nice to have a version of plot_precision_recall_curve that takes y_pred and y_true ...)
You can use precision_recall_curve, which accepts y_true and the predicted scores and returns precision, recall, and thresholds. These can then be used to compute f1_score and auc, and to plot the curve manually.
This is an example:
# precision-recall curve and f1
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import f1_score
from sklearn.metrics import auc
from matplotlib import pyplot
# generate 2 class dataset
X, y = make_classification(n_samples=1000, n_classes=2, random_state=1)
# split into train/test sets
trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.5, random_state=2)
# fit a model
model = LogisticRegression(solver='lbfgs')
model.fit(trainX, trainy)
# predict probabilities
lr_probs = model.predict_proba(testX)
# keep probabilities for the positive outcome only
lr_probs = lr_probs[:, 1]
# predict class values
yhat = model.predict(testX)
lr_precision, lr_recall, _ = precision_recall_curve(testy, lr_probs)
lr_f1, lr_auc = f1_score(testy, yhat), auc(lr_recall, lr_precision)
# summarize scores
print('Logistic: f1=%.3f auc=%.3f' % (lr_f1, lr_auc))
# plot the precision-recall curves
no_skill = len(testy[testy==1]) / len(testy)
pyplot.plot([0, 1], [no_skill, no_skill], linestyle='--', label='No Skill')
pyplot.plot(lr_recall, lr_precision, marker='.', label='Logistic')
# axis labels
pyplot.xlabel('Recall')
pyplot.ylabel('Precision')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
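As a side note, and an assumption about newer scikit-learn versions (roughly 1.0 and later) rather than part of the original answer: PrecisionRecallDisplay.from_predictions plots the curve directly from y_true and the predicted scores, without needing the estimator:
# Sketch assuming scikit-learn >= 1.0: plot straight from predictions.
from sklearn.metrics import PrecisionRecallDisplay
from matplotlib import pyplot

PrecisionRecallDisplay.from_predictions(testy, lr_probs, name='Logistic')
pyplot.show()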

Why is 10-fold cross-validation even faster than a single fit when using LGB?

I am using LGB to handle a machine learning task. But I found that when I use the sklearn API cross_val_score with cv=10, the time cost is less than a single fit. I split the dataset using train_test_split, then fit an LGBMClassifier on the training set. The time cost of the latter is much more than the former. Why?
Sorry for my bad English.
Environment: Python 3.5, scikit-learn 0.20.3, lightgbm 2.2.3
Intel Xeon CPU E5-2650 v4
Memory 128GB
X = train_df.drop(['uId', 'age'], axis=1)
Y = train_df.loc[:, 'age']
X_test = test_df.drop(['uId'], axis=1)
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.1,
                                                  stratify=Y)
# (1809000, 12) (1809000,) (201000, 12) (201000,) (502500, 12)
print(X_train.shape, Y_train.shape, X_val.shape, Y_val.shape, X_test.shape)
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
import time
lgb = LGBMClassifier(n_jobs=-1)
tic = time.time()
scores = cross_val_score(lgb, X, Y,
                         scoring='accuracy', cv=10, n_jobs=-1)
toc = time.time()
# 0.3738402985074627 0.0009231167322574765 300.1487271785736
print(np.mean(scores), np.std(scores), toc-tic)
tic = time.time()
lgb.fit(X_train, Y_train)
toc = time.time()
# 0.3751492537313433 472.1763586997986 (is much more than 300)
print(accuracy_score(Y_val, lgb.predict(X_val)), toc-tic)
Sorry, I found the answer. The LightGBM documentation says: 'for the best speed, set this to the number of real CPU cores, not the number of threads'. So setting n_jobs=-1 is not the best choice.
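A minimal sketch of that advice (assuming the E5-2650 v4 exposes 12 physical cores; check the actual count on your machine), reusing X, Y and the imports from the question:
# Give LightGBM the physical cores and keep cross_val_score single-threaded,
# so the two levels of parallelism do not oversubscribe the CPU.
lgb = LGBMClassifier(n_jobs=12)   # number of real cores, not hyper-threads
scores = cross_val_score(lgb, X, Y, scoring='accuracy', cv=10, n_jobs=1)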

Why different intermediate layer output of CNN in keras?

I am using this code to perform an experiment. I want to use the intermediate representation of the layer just before the fully connected (last) layer of the CNN.
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb
# set parameters:
max_features = 5000
maxlen = 400
batch_size = 100
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 100
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
                    embedding_dims,
                    input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
                 kernel_size,
                 padding='valid',
                 activation='relu',
                 strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))#<======== I need output after this.
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam', metrics=['accuracy'])
To get the intermediate representation of the penultimate layer, I used the following code.
CODE1
from keras import backend as K

get_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                              [model.layers[6].output])
# output in test mode = 0
layer_output_test = get_layer_output([x_test, 0])[0]
# output in train mode = 1
layer_output_train = get_layer_output([x_train, 1])[0]
print(layer_output_train)
print(layer_output_train.shape)
CODE2
def get_activations(model, layer, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()],
                                 [model.layers[layer].output])
    activations = get_activations([X_batch, 1])
    return activations
import numpy as np
X_train=np.array(get_activations(model=model,layer=6, X_batch=x_train)[0], dtype=np.float32)
print(X_train)
print(X_train.shape)
Which one is correct, given that I get different output from the two snippets above? I want to multiply the correct output by weights and optimise it with a custom optimiser.
If you pass 1 as the K.learning_phase() value, you will get different results every time, because training mode keeps the Dropout layers active and they drop units at random. Apart from that, both snippets compute the same thing.
Using a higher level approach, you can do this:
from keras.models import Model
newModel = Model(model.inputs,model.layers[6].output)
Do whatever you want with newModel. You can train it (and affect the original model), and use it to predict values.
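For example, a small usage sketch (assuming the model and data from the question; layer 6 is the ReLU after the 250-unit Dense layer, so the extracted features have width hidden_dims = 250):
# Inference-mode features from the sub-model defined above.
features = newModel.predict(x_test, batch_size=batch_size)
print(features.shape)  # (len(x_test), 250)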
