Performance metrics with stratified cross-validation - machine-learning

I have implemented stratified cross-validation for a multiclass imbalanced dataset. I'm unable to calculate the average of each performance metric, such as precision, recall, etc.
skf = StratifiedKFold(n_splits=10)
lst_accu_stratified = []
lst_pre_stratified = []

for train_index, test_index in skf.split(X, y):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    dt.fit(X_train, y_train)
    # score = dt.score(X_test, y_test)
    lst_accu_stratified.append(dt.score(X_test, y_test))
    lst_pre_stratified.append(cross_val_score(dt, X_test, y_test, scoring='precision_weighted'))

# Print the output.
print('List of possible accuracy:', lst_accu_stratified)
print('\nOverall Accuracy:', mean(lst_accu_stratified) * 100, '%')
print('\nStandard Deviation is:', stdev(lst_accu_stratified))
# print('For Fold {} the accuracy is {}'.format(str(fold_no), score))

# Print the output.
print('List of possible pre:', lst_pre_stratified)
print('\nOverall pre:', mean(lst_accu_stratified) * 100, '%')
print('\nStandard Deviation pre is:', stdev(lst_pre_stratified))
The error it gives me is shown below. How can I calculate the weighted average of precision, recall, F1-score, etc.?
List of possible accuracy: [0.9835704251505708, 0.9982440134988939, 0.9977959341848186, 0.998433740776025, 0.9956645298800277, 0.9979089632009818, 0.9963467259802279, 0.9915873778373425, 0.998042168066752, 0.9966494834956786]
Maximum Accuracy That can be obtained from this model is: 99.8433740776025 %
Minimum Accuracy: 98.35704251505707 %
Overall Accuracy: 99.54243362071318 %
Standard Deviation is: 0.00463447140029694
List of possible pre: [array([0.97498825, 0.99018204, 0.99331666, 0.99531447, 0.99747649]), array([0.99927386, 0.99946234, 0.99961679, 0.99508796, 0.99812511]), array([0.99728536, 0.99963718, 0.99672916, 0.99722338, 0.99667466]), array([0.99927476, 0.99969985, 0.99953702, 0.99964024, 0.99130982]), array([0.99740928, 0.99812025, 0.99882627, 0.99528916, 0.99861694]), array([0.99963661, 0.99965769, 0.99830287, 0.99917217, 0.99826977]), array([0.99796059, 0.99895001, 0.99828052, 0.99729752, 0.99235937]), array([0.99247648, 0.99593567, 0.99934078, 0.99971908, 0.9985657 ]), array([0.99834264, 0.99890637, 0.99885082, 0.99938173, 0.99332253]), array([0.99848042, 0.99981842, 0.99947901, 0.99730492, 0.99834253])]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-132-03190719070c> in <module>
28 # Print the output.
29 print('List of possible pre:', lst_pre_stratified)
---> 30 print('\nMaximum Accuracy That can be obtained from this model is:', max(lst_pre_stratified)*100, '%')
31 print('\nMinimum Accuracy:', min(lst_pre_stratified)*100, '%')
32 print('\nOverall pre:',mean(lst_accu_stratified)*100, '%')
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Can anyone help with how to calculate precision, AUC, etc.?
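One way to get per-fold weighted metrics is to predict on each held-out fold and score those predictions directly, instead of calling cross_val_score again inside the loop (that call runs a second, nested cross-validation on the test fold and returns an array per fold, which is why max() and stdev() fail on the resulting list of arrays). A minimal sketch, assuming dt, X and y as in the code above; AUC additionally needs class probabilities, so it is shown with predict_proba and will only work if dt exposes that method (scikit-learn decision trees do):

from statistics import mean, stdev
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=10)
prec_lst, rec_lst, f1_lst, auc_lst = [], [], [], []

for train_index, test_index in skf.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    dt.fit(X_train, y_train)
    y_pred = dt.predict(X_test)
    # 'weighted' averages the per-class scores by class support, which suits imbalance.
    prec_lst.append(precision_score(y_test, y_pred, average='weighted'))
    rec_lst.append(recall_score(y_test, y_pred, average='weighted'))
    f1_lst.append(f1_score(y_test, y_pred, average='weighted'))
    # Multiclass AUC is computed one-vs-rest from class probabilities.
    auc_lst.append(roc_auc_score(y_test, dt.predict_proba(X_test),
                                 multi_class='ovr', average='weighted'))

print('Mean weighted precision:', mean(prec_lst), '+/-', stdev(prec_lst))
print('Mean weighted recall   :', mean(rec_lst))
print('Mean weighted F1       :', mean(f1_lst))
print('Mean weighted AUC      :', mean(auc_lst))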

Related

ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput targets

I am trying to calculate the F1 score for a multi-class classification problem using the Cifar10 dataset. I am importing the f1 metric from the sklearn library. However, I keep getting the following error message:
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput targets
Below is my function for testing the model on my validation set. Would someone be able to explain how to calculate F1 when performing multi-class classification? I am getting quite confused.
#torch.no_grad()
def valid_function(model, optimizer, val_loader):
    model.eval()
    val_loss = 0.0
    val_accu = 0.0
    f_one = []
    for i, (x_val, y_val) in enumerate(val_loader):
        x_val, y_val = x_val.to(device), y_val.to(device)
        val_pred = model(x_val)
        loss = criterion(val_pred, y_val)
        val_loss += loss.item()
        val_accu += accuracy(val_pred, y_val)
        f_one.append(f1_score(y_val.cpu(), val_pred.cpu()))
    val_loss /= len(val_loader)
    val_accu /= len(val_loader)
    print('Val Loss: %.3f | Val Accuracy: %.3f' % (val_loss, val_accu))
    return val_loss, val_accu
The problem is here:
val_pred = model(x_val)
model(x_val) returns raw per-class scores (shape [batch_size, num_classes]), while f1_score expects discrete class labels. Convert the output to predicted class indices before computing the metric. For example, in your case:
val_pred = torch.argmax(model(x_val), dim=1)
Also pass an averaging mode suited to multi-class targets, e.g. f1_score(y_val.cpu(), val_pred.cpu(), average='macro').
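For completeness, a small sketch of how the per-batch scores could be aggregated, using the names from the question's valid_function (the original collects f_one but never averages or returns it); note that averaging per-batch F1 only approximates the F1 over the whole validation set:

import numpy as np
import torch
from sklearn.metrics import f1_score

# Inside the validation loop (names as in the question):
pred_labels = torch.argmax(model(x_val), dim=1)  # class indices, shape [batch]
f_one.append(f1_score(y_val.cpu(), pred_labels.cpu(), average='macro'))

# After the loop, report the mean F1 over all validation batches:
mean_f1 = float(np.mean(f_one))
print('Val macro-F1: %.3f' % mean_f1)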

Why is my accuracy decreasing with increasing K in KNN algorithm?

I am working on the Pima Indians dataset and using KNN as my classification algorithm. In order to find the right k I am using KFold CV. However, as the value of k increases, the accuracy decreases.
knn_train = train_data.copy()
knn_y = knn_train['Outcome']
knn_train.drop('Outcome', axis=1, inplace=True)

acc_score = []
avg_score_lst = []
n_neighs_lst = []

for k in range(50):
    kfold = KFold(n_splits=5, random_state=23, shuffle=True)
    model = KNeighborsClassifier(n_neighbors=k+1)
    for train_index, test_index in kfold.split(knn_train):
        X_train, X_test = knn_train.iloc[train_index,:], knn_train.iloc[test_index,:]
        y_train, y_test = knn_y.iloc[train_index], knn_y.iloc[test_index]
        model.fit(X_train, y_train)
        preds = model.predict(X_test)
        acc = accuracy_score(y_test, preds)
        acc_score.append(acc)
    avg_acc_score = mean(acc_score)
    avg_score_lst.append(avg_acc_score)
    n_neighs_lst.append(model.n_neighbors)

sns.lineplot(x=n_neighs_lst, y=avg_score_lst)
plt.show()
Accuracy vs k graph
Predictions are made by aggregating over the k nearest neighbours, so a larger k pulls in points that are further away. That works against the principle behind kNN - that nearer neighbours are more likely to share the same class (or local density).
There is normally an optimum k, which you can find using cross-validation - not too big and not too small. However, it depends on your data - it's not impossible for a k of 1 to be optimal.
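A minimal sketch of such a sweep, assuming knn_train and knn_y as prepared in the question; it scores each k on its own set of folds and averages only that k's fold accuracies:

import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

kfold = KFold(n_splits=5, random_state=23, shuffle=True)
ks = list(range(1, 51))
avg_score_lst = []
for k in ks:
    model = KNeighborsClassifier(n_neighbors=k)
    # 5-fold accuracy for this k only, then averaged.
    scores = cross_val_score(model, knn_train, knn_y, cv=kfold, scoring='accuracy')
    avg_score_lst.append(scores.mean())

sns.lineplot(x=ks, y=avg_score_lst)
plt.xlabel('n_neighbors (k)')
plt.ylabel('mean CV accuracy')
plt.show()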

Neural net loss exponentially rises after first propagation

I am training a neural network on video frames (converted to greyscale) to output a tensor with two values. The first iteration always produces an acceptable loss (mean squared error generally between 15 and 40), followed by an exponential rise in the second pass, after which the loss becomes infinite.
The net is quite vanilla:
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(100 * 291, 29100),
            nn.ReLU(),
            nn.Linear(29100, 29100),
            nn.ReLU(),
            nn.Linear(29100, 2),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
As is the training loop:
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to("cpu"), y.to("cpu")
        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)
        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
Example of loss function growth:
ITERATION 1
prediction: tensor([[-1.2239, -8.2337]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0321, 0.0325]])
loss: tensor(34.9545, grad_fn=<MseLossBackward>)
ITERATION 2
prediction: tensor([[ 314636.5625, 2063098.2500]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0330, 0.0323]])
loss: tensor(2.1777e+12, grad_fn=<MseLossBackward>)
ITERATION 3
prediction: tensor([[-8.0924e+22, -5.3062e+23]], grad_fn=<AddmmBackward>)
actual: tensor([[0.0334, 0.0317]])
loss: tensor(inf, grad_fn=<MseLossBackward>)
Here is an example of the video data: each frame is a 291x100 greyscale image, and there are 1100 of them in the training dataset:
dataset.video_frames.size()
> torch.Size([1100, 100, 291])
dataset.video_frames[0]
> tensor([[21., 29., 28., ..., 33., 27., 26.],
[22., 27., 25., ..., 25., 25., 30.],
[23., 26., 26., ..., 24., 24., 28.],
...,
[24., 33., 31., ..., 41., 40., 42.],
[26., 34., 31., ..., 26., 20., 22.],
[25., 32., 32., ..., 21., 20., 18.]])
And the labeled training data:
dataset.y.size()
> torch.Size([1100, 2])
dataset.y[0]
> tensor([0.0335, 0.0315], dtype=torch.float)
I've fiddled with the learning rate and the number of hidden layers, and nothing seems to keep the loss from going to infinity.
Properly scaling the inputs is crucial for training. Weight initialization schemes assume the inputs are scaled in a particular way (roughly zero mean and unit variance); feeding raw 0-255 pixel values breaks that assumption, makes the first gradient step very large, and is consistent with the divergence you see.
See this part of a lecture on weight initialization and how critical it is for proper convergence.
More details on the mathematical analysis of the influence of weight initialization can be found in Sec. 2 of this paper:
Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun, "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification" (ICCV 2015).
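As a concrete illustration, a minimal sketch of scaling the raw pixel values before training, assuming dataset.video_frames as shown above (0-255 greyscale); standardized inputs keep the first forward pass, and therefore the first gradient step, at a reasonable magnitude:

import torch

frames = dataset.video_frames.float()
# Map 0-255 pixels to [0, 1], then standardize to roughly zero mean, unit variance.
frames = frames / 255.0
frames = (frames - frames.mean()) / (frames.std() + 1e-8)
dataset.video_frames = frames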

Facing this error while classifying images (10 classes) with ResNet50 in PyTorch. My code is:

This is the code I am implementing: I am using a subset of the Caltech256 dataset to classify images of 10 different kinds of animals. It covers the dataset preparation, data augmentation and then the steps to build the classifier.
def train_and_validate(model, loss_criterion, optimizer, epochs=25):
    '''
    Function to train and validate
    Parameters
        :param model: Model to train and validate
        :param loss_criterion: Loss Criterion to minimize
        :param optimizer: Optimizer for computing gradients
        :param epochs: Number of epochs (default=25)
    Returns
        model: Trained Model with best validation accuracy
        history: (dict object): Having training loss, accuracy and validation loss, accuracy
    '''
    start = time.time()
    history = []
    best_acc = 0.0

    for epoch in range(epochs):
        epoch_start = time.time()
        print("Epoch: {}/{}".format(epoch+1, epochs))

        # Set to training mode
        model.train()

        # Loss and Accuracy within the epoch
        train_loss = 0.0
        train_acc = 0.0
        valid_loss = 0.0
        valid_acc = 0.0

        for i, (inputs, labels) in enumerate(train_data_loader):
            inputs = inputs.to(device)
            labels = labels.to(device)
            # Clean existing gradients
            optimizer.zero_grad()
            # Forward pass - compute outputs on input data using the model
            outputs = model(inputs)
            # Compute loss
            loss = loss_criterion(outputs, labels)
            # Backpropagate the gradients
            loss.backward()
            # Update the parameters
            optimizer.step()
            # Compute the total loss for the batch and add it to train_loss
            train_loss += loss.item() * inputs.size(0)
            # Compute the accuracy
            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))
            # Convert correct_counts to float and then compute the mean
            acc = torch.mean(correct_counts.type(torch.FloatTensor))
            # Compute total accuracy in the whole batch and add to train_acc
            train_acc += acc.item() * inputs.size(0)
            #print("Batch number: {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}".format(i, loss.item(), acc.item()))

        # Validation - No gradient tracking needed
        with torch.no_grad():
            # Set to evaluation mode
            model.eval()
            # Validation loop
            for j, (inputs, labels) in enumerate(valid_data_loader):
                inputs = inputs.to(device)
                labels = labels.to(device)
                # Forward pass - compute outputs on input data using the model
                outputs = model(inputs)
                # Compute loss
                loss = loss_criterion(outputs, labels)
                # Compute the total loss for the batch and add it to valid_loss
                valid_loss += loss.item() * inputs.size(0)
                # Calculate validation accuracy
                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))
                # Convert correct_counts to float and then compute the mean
                acc = torch.mean(correct_counts.type(torch.FloatTensor))
                # Compute total accuracy in the whole batch and add to valid_acc
                valid_acc += acc.item() * inputs.size(0)
                #print("Validation Batch number: {:03d}, Validation: Loss: {:.4f}, Accuracy: {:.4f}".format(j, loss.item(), acc.item()))

        # Find average training loss and training accuracy
        avg_train_loss = train_loss/train_data_size
        avg_train_acc = train_acc/train_data_size

        # Find average validation loss and validation accuracy
        avg_valid_loss = valid_loss/valid_data_size
        avg_valid_acc = valid_acc/valid_data_size

        history.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])

        epoch_end = time.time()
        print("Epoch : {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}%, \n\t\tValidation : Loss : {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(epoch, avg_train_loss, avg_train_acc*100, avg_valid_loss, avg_valid_acc*100, epoch_end-epoch_start))

        # Save if the model has best accuracy till now
        torch.save(model, dataset+'_model_'+str(epoch)+'.pt')

    return model, history

# Load pretrained ResNet50 Model
resnet50 = models.resnet50(pretrained=True)
#resnet50 = resnet50.to('cuda:0')

# Freeze model parameters
for param in resnet50.parameters():
    param.requires_grad = False

# Change the final layer of ResNet50 Model for Transfer Learning
fc_inputs = resnet50.fc.in_features

resnet50.fc = nn.Sequential(
    nn.Linear(fc_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, num_classes),  # Since 10 possible outputs
    nn.LogSoftmax(dim=1)  # For using NLLLoss()
)

# Convert model to be used on GPU
# resnet50 = resnet50.to('cuda:0')
The error is this:
RuntimeError                              Traceback (most recent call last)
in ()
      6 # Train the model for 25 epochs
      7 num_epochs = 30
----> 8 trained_model, history = train_and_validate(resnet50, loss_func, optimizer, num_epochs)
      9
     10 torch.save(history, dataset+'_history.pt')

in train_and_validate(model, loss_criterion, optimizer, epochs)
     43
     44     # Compute loss
---> 45     loss = loss_criterion(outputs, labels)
     46
     47     # Backpropagate the gradients

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    202
    203     def forward(self, input, target):
--> 204         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
    205
    206

~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\Users\builder\AppData\Local\Temp\pip-req-build-0i480kur\aten\src\THNN/generic/ClassNLLCriterion.c:97
This happens when there are either incorrect labels in your dataset, or the labels are 1-indexed (instead of 0-indexed). As the error message says, cur_target must be smaller than the total number of classes (10). To verify the issue, check the maximum and minimum label in your dataset. If the data is indeed 1-indexed, just subtract one from all annotations and you should be fine.
Note: another possible reason is that there are some -1 labels in the data. Some (especially older) datasets use -1 to indicate a wrong/dubious label. If you find such labels, just discard them.
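A quick way to run that check, assuming the labels come out of train_data_loader as integer tensors (names follow the question's code):

import torch

# Collect every label once and inspect the range.
all_labels = torch.cat([labels for _, labels in train_data_loader])
print('min label:', all_labels.min().item())
print('max label:', all_labels.max().item())

# With num_classes = 10, valid targets are 0..9.
# If the labels run 1..10, subtract one everywhere (e.g. labels = labels - 1).
# If some labels are -1 (dubious/unknown), drop those samples instead.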

Size Mismatch using pytorch when trying to train data

I am really new to PyTorch and am just trying to use my own dataset to build a simple linear regression model. I am only using the numeric values as inputs.
I have imported the data from the CSV:
dataset = pd.read_csv('mlb_games_overview.csv')
I have split the data into four parts X_train, X_test, y_train, y_test
X = dataset.drop(['date', 'team', 'runs', 'win'], 1)
y = dataset['win']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=True)
I have converted the data to PyTorch tensors:
X_train = torch.from_numpy(np.array(X_train))
X_test = torch.from_numpy(np.array(X_test))
y_train = torch.from_numpy(np.array(y_train))
y_test = torch.from_numpy(np.array(y_test))
I have created a LinearRegressionModel
class LinearRegressionModel(torch.nn.Module):
    def __init__(self):
        super(LinearRegressionModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred
I have initialized the optimizer and the loss function
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
Now when I start to train, I get the runtime size-mismatch error:
EPOCHS = 500
for epoch in range(EPOCHS):
    pred_y = model(X_train)  # RUNTIME ERROR HERE
    loss = criterion(pred_y, y_train)
    optimizer.zero_grad()  # zero out gradients to update parameters correctly
    loss.backward()        # backpropagation
    optimizer.step()       # update weights
    print('epoch {}, loss {}'.format(epoch, loss.data[0]))
Error Log:
RuntimeError                              Traceback (most recent call last)
<ipython-input-40-c0474231d515> in <module>
      1 EPOCHS = 500
      2 for epoch in range(EPOCHS):
----> 3     pred_y = model(X_train)
      4     loss = criterion(pred_y, y_train)
      5     optimizer.zero_grad()  # zero out gradients to update parameters correctly

RuntimeError: size mismatch, m1: [3540 x 8], m2: [1 x 1] at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensorMath.cpp:752
In your linear regression model, you have:
self.linear = torch.nn.Linear(1, 1)
But your training data (X_train) shape is 3540 x 8, which means you have 8 features representing each input example. So you should define the linear layer as follows:
self.linear = torch.nn.Linear(8, 1)
A linear layer in PyTorch has two parameters, W and b. If you set in_features to 8 and out_features to 1, then the shape of the W matrix will be 1 x 8 and the length of the b vector will be 1.
Since your training data has shape 3540 x 8, the layer can then perform the following operation:
linear_out = X_train @ W.T + b   # [3540 x 8] @ [8 x 1] + [1] -> [3540 x 1]
I hope this clarifies your confusion.
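To make that concrete, a minimal sketch of the corrected model, assuming X_train as prepared above (shape [3540, 8]); note that tensors built from a float64 DataFrame may also need an explicit .float(), since nn.Linear works in float32:

import torch

class LinearRegressionModel(torch.nn.Module):
    def __init__(self, in_features):
        super(LinearRegressionModel, self).__init__()
        # in_features inputs per example, one predicted output.
        self.linear = torch.nn.Linear(in_features, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearRegressionModel(in_features=X_train.shape[1])  # in_features == 8
pred_y = model(X_train.float())  # output shape [3540, 1]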
