I am new to PyTorch and reinforcement learning, so I am practicing with the gym library, but I cannot get my model to train. I have narrowed it down to my grads being None, but I cannot figure out why. I suspect it has something to do with the way I calculate the reward/loss, but I cannot pinpoint the problem.
I read that it might be because my parameters are detached from the computational graph, and I have tried everything to prevent that without any luck. I don't think I fully understand the computational graph... Code:
for episode in range(n_episodes):
    done = False
    obs, reward, done, info = env.reset()
    rewards = torch.tensor(0, dtype=torch.float32, requires_grad=True)

    while not done:
        observation = torch.from_numpy(obs)
        observation.requires_grad = True
        y_pred = model(observation)
        obs, reward, done, info = env.step(int(y_pred.clone().detach().numpy()[0]))
        rewards.cat(torch.tensor(-reward, requires_grad=True))

    optimizer.zero_grad()
    loss = rewards.sum()
    loss.retain_grad()
    loss.backward()
    optimizer.step()
My model looks like this:
class Network(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(Network, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, output_dim),
            nn.ReLU(),
            nn.Softmax(dim=0))

    def forward(self, x):
        return self.model(x)
I printed list(model.parameters())[0].grad is None and found that it is always True.
How can I fix this problem?
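To make the issue concrete, here is a minimal standalone sketch (with a made-up 4-dimensional observation and 2 actions, not my actual environment) of what I suspect is happening: a reward tensor created with requires_grad=True is a fresh leaf that has no connection to the model output, so backward never reaches the model parameters.

import torch
import torch.nn as nn

# Tiny stand-in for my policy network (hypothetical sizes, just for illustration).
model = nn.Sequential(nn.Linear(4, 2), nn.Softmax(dim=0))

obs = torch.randn(4)
y_pred = model(obs)
print(y_pred.grad_fn)        # set (e.g. <SoftmaxBackward0>): connected to the model

reward = 1.0                 # plain Python float coming back from env.step()
r = torch.tensor(-reward, requires_grad=True)
print(r.grad_fn)             # None: a new leaf tensor, not connected to the model

r.backward()                 # backward has no path to the model...
print(model[0].weight.grad)  # ...so this prints None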
I'm trying to implement the Learner object and its steps, and I'm facing an issue with the loss.backward() function: it raises an AttributeError: 'NoneType' object has no attribute 'data'.
The entire process works when I follow Chapter 04, MNIST Basics. However, implementing it within a class raises this error. Could anybody guide me on why this occurs and how to fix it?
Here's the code below:
class Basic_Optim:
    def __init__(self, params, lr):
        self.params = list(params)
        self.lr = lr

    def step(self):
        for p in self.params:
            p.data -= self.lr * p.grad.data

    def zero(self):
        for p in self.params:
            p.grad = None

class Learner_self:
    def __init__(self, train, valid, model, loss, metric, params, lr):
        self.x = train
        self.y = valid
        self.model = model
        self.loss = loss
        self.metric = metric
        self.opt_func = Basic_Optim(params, lr)

    def fit(self, epochs):
        for epoch in range(epochs):
            self.train_data()
            score = self.valid_data()
            print(score, end=' | ')

    def train_data(self):
        for x, y in self.x:
            preds = self.model(x)
            loss = self.loss(preds, y)
            loss_b = loss.backward()
            print(f'Loss: {loss:.4f}, Loss Backward: {loss_b}')
            self.opt_func.step()
            self.opt_func.zero()

    def valid_data(self):
        accuracy = [self.metric(xb, yb) for xb, yb in self.y]
        return round(torch.stack(accuracy).mean().item(), 4)

learn = Learner_self(dl, valid_dl, simple_net, mnist_loss, metric=batch_accuracy,
                     params=linear_model.parameters(), lr=1)
learn.fit(10)
The print statement inside train_data outputs Loss: 0.0516, Loss Backward: None, and then the AttributeError shared above is raised.
Please let me know if you want any more details. Every other function, such as mnist_loss, batch_accuracy, and simple_net, is exactly the same as in the book.
Thank you in advance.
It seems like your optimizer and your trainer do not work on the same model.
You have model=simple_net, while the parameters given to the optimizer are those of a different model: params=linear_model.parameters().
Try passing params=simple_net.parameters() -- that is, make sure the trainer's params are those of model.
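For example, assuming everything else stays the same, the corrected call would look like this:

learn = Learner_self(dl, valid_dl, simple_net, mnist_loss, metric=batch_accuracy,
                     params=simple_net.parameters(), lr=1)   # same model as above
learn.fit(10)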
I'm running into a roadblock in my learning about NLP. I'm working on a beginner's Kaggle competition classifying tweets as "disaster" or "not disaster". I started out by repurposing a simple network from a PyTorch tutorial, composed of nn.EmbeddingBag and nn.Linear layers, and saw decent results during both training and inference:
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
The loss function is BCEWithLogits, by the way.
I decided to up my game and throw an LSTM into the mix. I took a deep dive into padded/packed sequences and think I understand them pretty well. After reading around and thinking about it, I came to the conclusion that I should grab the final non-padded hidden state of each sequence's output from the LSTM. That's what I tried below:
My attempt at upping my game:
class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_size, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_size, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, num_class)

    def forward(self, padded_seq, lengths):
        # embedding layer
        embedded_padded = self.embedding(padded_seq)
        packed_output = pack_padded_sequence(embedded_padded, lengths, batch_first=True)
        # lstm layer
        output, _ = self.lstm(packed_output)
        padded_output, lengths = pad_packed_sequence(output, batch_first=True)
        # get hidden state of final non-padded sequence element:
        h_n = []
        for seq, length in zip(padded_output, lengths):
            h_n.append(seq[length - 1, :])
        lstm_out = torch.stack(h_n)
        # linear layers
        out = self.fc1(lstm_out)
        return out
This morning, I ported my notebook over to an IDE and ran the debugger and confirmed that h_n is indeed the final hidden state of each sequence, not including padding.
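In case it is useful, the same gathering step can also be written without the Python loop; a small sketch on dummy data (made-up sizes):

import torch

# Dummy padded LSTM output: batch of 3 sequences, max length 5, hidden size 4.
padded_output = torch.randn(3, 5, 4)
lengths = torch.tensor([5, 3, 2])

# For each sequence in the batch, pick the hidden state at index length - 1.
batch_idx = torch.arange(padded_output.size(0))
lstm_out = padded_output[batch_idx, lengths - 1]   # shape (3, 4)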
So everything runs/trains without error but my loss never decreases when I use batch size > 1.
With batch_size = 8 the loss never decreases; with batch_size = 1 it does (loss plots not shown).
My Question
I would have expected this LSTM setup to perform much better on this simple task. So I'm wondering "Where have I gone wrong?"
Additional Information: Training Code
def train_one_epoch(model, opt, criterion, lr, trainloader):
    model.to(device)
    model.train()
    running_tl = 0
    for (label, data, lengths) in trainloader:
        opt.zero_grad()
        label = label.reshape(label.size()[0], 1)
        output = model(data, lengths)
        loss = criterion(output, label)
        running_tl += loss.item()
        loss.backward()
        opt.step()
    return running_tl

def validate_one_epoch(model, opt, criterion, lr, validloader):
    running_vl = 0
    model.eval()
    with torch.no_grad():
        for (label, data, lengths) in validloader:
            label = label.reshape(label.shape[0], 1)
            output = model(data, lengths)
            loss = criterion(output, label)
            running_vl += loss.item()
    return running_vl

def train_model(model, opt, criterion, epochs, trainload, testload=None, lr=1e-3):
    avg_tl_per_epoch = []
    avg_vl_per_epoch = []
    for e in trange(epochs):
        running_tl = train_one_epoch(model, opt, criterion, lr, trainload)
        avg_tl_per_epoch.append(running_tl / len(trainload))
        if testload:
            running_vl = validate_one_epoch(model, opt, criterion, lr, testload)
            avg_vl_per_epoch.append(running_vl / len(testload))
    return avg_tl_per_epoch, avg_vl_per_epoch
I think your model should look like this:
class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_size, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_size, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, num_class)

    def forward(self, padded_seq, lengths):
        # embedding layer
        embedded_padded = self.embedding(padded_seq)
        packed_output = pack_padded_sequence(embedded_padded, lengths, batch_first=True)
        # lstm layer: h_n already holds the final (non-padded) hidden state of each sequence
        _, (h_n, _) = self.lstm(packed_output)
        # linear layer on the last layer's final hidden state
        out = self.fc1(h_n[-1])
        return out
By default, when you feed the LSTM a packed sequence, it already returns the final (non-padded) hidden state of each sequence as h_n, so there is no need to unpack the output and gather the last steps by hand.
Also, depending on the number of examples, the simple embedding + linear model might work better, as it needs less data to converge. Since your data consists of tweets (very short texts), the sequential aspect of the text might not be so important.
You have not provided the code for preprocessing your data. With text, good preprocessing is crucial, and I recommend taking a look at the PyTorch tutorial "NLP From Scratch: Translation with a Sequence to Sequence Network and Attention".
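As a quick sanity check of the model above on dummy data (made-up sizes; note that pack_padded_sequence with its default enforce_sorted=True expects lengths sorted in decreasing order and kept on the CPU):

import torch

# Vocab of 100 tokens, embedding dim 16, hidden size 32, 1 output logit.
model = TextClassificationModel(vocab_size=100, embed_dim=16, hidden_size=32, num_class=1)

padded_seq = torch.randint(1, 100, (8, 20))            # batch of 8 padded sequences, max length 20
lengths = torch.tensor([20, 18, 15, 12, 9, 7, 5, 3])   # sorted in decreasing order

out = model(padded_seq, lengths)
print(out.shape)   # torch.Size([8, 1]): one logit per tweet, ready for BCEWithLogitsLoss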
I'm using a LSTM model to predict BABA stock price using this dataset: "/kaggle/input/price-volume-data-for-all-us-stocks-etfs/Data/Stocks/baba.us.txt".
I'm not sure why my model is not learning and the y_test_prediction is so different from the actual y_test. I really appreciate your help as I'm beginning to learn machine learning. Thank you!
I have scaled the data with MinMaxScaler before splitting it. This is how I split the data:
x_train, y_train, x_test, y_test = [], [], [], []
lags = 3

for t in range(len(train_data) - lags - 1):
    x_train.append(train_data[t:(t + lags), :])
    y_train.append(train_data[(t + lags), :])

for t in range(len(test_data) - lags - 1):
    x_test.append(test_data[t:(t + lags), :])
    y_test.append(test_data[(t + lags), :])

x_train = torch.FloatTensor(np.array(x_train))
y_train = torch.FloatTensor(np.array(y_train))
x_test = torch.FloatTensor(np.array(x_test))
y_test = torch.FloatTensor(np.array(y_test))

x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
This is my LSTM model:
input_dim = 1
hidden_layer_dim = 32
num_layers = 1
output_dim = 1

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_layer_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.input_dim = input_dim
        self.hidden_layer_dim = hidden_layer_dim
        self.num_layers = num_layers
        self.output_dim = output_dim
        self.lstm = nn.LSTM(input_dim, hidden_layer_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_layer_dim, output_dim)

    def forward(self, x):
        # initial hidden state & cell state as zeros
        h0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_layer_dim))
        c0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_layer_dim))
        # lstm output with hidden and cell state
        output, (hn, cn) = self.lstm(x, (h0, c0))
        # get hidden state to be passed to dense layer
        hn = hn.view(-1, self.hidden_layer_dim)
        output = self.fc(hn)
        return output
This is my training:
num_epochs = 100
learning_rate = 0.01

model = LSTM(input_dim, hidden_layer_dim, num_layers, output_dim)

loss = torch.nn.MSELoss()  # mean-squared error for regression
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
hist = np.zeros(num_epochs)

# train model
for epoch in range(num_epochs):
    outputs = model(x_train)
    optimizer.zero_grad()

    # get loss function
    loss_fn = loss(outputs, y_train.view(1, -1))
    hist[epoch] = loss_fn.item()

    loss_fn.backward()
    optimizer.step()

    if epoch % 10 == 0:
        print("Epoch: %d, loss: %1.5f" % (epoch, hist[epoch]))
Training loss and prediction vs. actual plots (not shown).
You are initialising the hidden state every time forward is called, which might cause problems with backprop. You do not even have to initialise it; PyTorch takes care of that for you (h0 and c0 default to zeros). You can check this implementation for the details. Also, as a side note, you might want to take a look at PyTorch DataLoaders (just an easier way to make splits and batches).
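For example, a minimal sketch of the same model without the manual initialisation (same layer sizes as above; it also reads the last layer's final hidden state directly instead of reshaping hn):

import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_layer_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_layer_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_layer_dim, output_dim)

    def forward(self, x):
        # nn.LSTM defaults the hidden and cell state to zeros when none are passed,
        # so there is no need to build h0/c0 (or wrap them in the deprecated Variable).
        output, (hn, cn) = self.lstm(x)
        # hn has shape (num_layers, batch, hidden); use the last layer's final state.
        return self.fc(hn[-1])

# Quick shape check on dummy data: batch of 4, sequence length 3, 1 feature.
model = LSTM(input_dim=1, hidden_layer_dim=32, num_layers=1, output_dim=1)
print(model(torch.randn(4, 3, 1)).shape)   # torch.Size([4, 1])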
After training the network I noticed that the accuracy goes up and down. Initially I thought it was caused by the learning rate, but that is set to quite a small value. (Accuracy plot not shown.)
My network (in PyTorch) looks as follows:
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.layer3 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(2)
        )
        self.fc1 = nn.Linear(17 * 17 * 64, 512)
        self.fc2 = nn.Linear(512, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = out.view(out.size(0), -1)
        out = self.relu(self.fc1(out))
        out = self.fc2(out)
        out = torch.sigmoid(out)
        return out
I am using RMSprop as the optimizer and BCELoss as the criterion. The learning rate is set to 0.001.
Here is the training process:
epochs = 15
itr = 1
p_itr = 100
model.train()
total_loss = 0
loss_list = []
acc_list = []

for epoch in range(epochs):
    for samples, labels in train_loader:
        samples, labels = samples.to(device), labels.to(device)
        optimizer.zero_grad()
        output = model(samples)
        labels = labels.unsqueeze(-1)
        labels = labels.float()
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
        scheduler.step()
        if itr % p_itr == 0:
            pred = torch.argmax(output, dim=1)
            correct = pred.eq(labels)
            acc = torch.mean(correct.float())
            print('[Epoch {}/{}] Iteration {} -> Train Loss: {:.4f}, Accuracy: {:.3f}'.format(epoch+1, epochs, itr, total_loss/p_itr, acc))
            loss_list.append(total_loss/p_itr)
            acc_list.append(acc)
            total_loss = 0
        itr += 1
My dataset is quite small - 2000 train and 1000 validation (binary classification 0/1). I wanted to do the 80/20 split but I was asked to keep it like that. I was thinking that the architecture might be too complex for such a small dataset.
Any hints on what may cause such jumps in the training process?
Your code here is wrong: pred = torch.argmax(output, dim=1)
This line is used for multiclass classification with cross-entropy loss.
Your task is binary classification, so the pred values are wrong. Change it to:
if itr % p_itr == 0:
    pred = torch.round(output)
    ....
You can also try switching your optimizer between Adam, SGD, and RMSprop to find the one that helps your model converge faster.
Also change the forward() function:
def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.layer3(out)
    out = out.view(out.size(0), -1)
    out = self.relu(self.fc1(out))
    out = self.fc2(out)
    return self.sigmoid(out)  # your forward is OK too, but this is cleaner
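For instance, on some made-up outputs and labels, the corrected accuracy computation would look like this:

import torch

output = torch.tensor([[0.91], [0.32], [0.75], [0.05]])   # sigmoid outputs from the model
labels = torch.tensor([[1.0], [0.0], [0.0], [0.0]])

pred = torch.round(output)              # threshold the probabilities at 0.5
acc = pred.eq(labels).float().mean()    # fraction of correct predictions
print(acc)                              # tensor(0.7500)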
I'm new to the PyTorch framework (coming from Theano and Tensorflow mainly).
I've followed the introduction tutorial and read the Classifying Names with a Character-Level RNN one.
I'm now trying to adapt it to a character-level LSTM model in order to gain some practical experience with the framework.
Basically, I feed the model sequences of character indices and give it as target the same sequence, shifted by one step into the future.
However, I can't overfit a simple training example, and I don't see what I did wrong.
If someone can spot my mistake it would be very helpful.
Here is my code:
class LSTMTxtGen(nn.Module):
    def __init__(self, hidden_dim, n_layer, vocab_size):
        super(LSTMTxtGen, self).__init__()
        self.n_layer = n_layer
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.lstm = nn.LSTM(vocab_size, hidden_dim, n_layer, batch_first=True)
        # The linear layer that maps from hidden state space to tag space
        #self.hidden = self.init_hidden()

    def init_hidden(self, batch_size):
        # Before we've done anything, we don't have any hidden state.
        # Refer to the PyTorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        return (autograd.Variable(torch.zeros(self.n_layer, batch_size,
                                              self.hidden_dim)),
                autograd.Variable(torch.zeros(self.n_layer, batch_size,
                                              self.hidden_dim)))

    def forward(self, seqs):
        self.hidden = self.init_hidden(seqs.size()[0])
        lstm_out, self.hidden = self.lstm(seqs, self.hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        lstm_out = nn.Linear(lstm_out.size(1), self.vocab_size)(lstm_out)
        return lstm_out
model = LSTMTxtGen(
    hidden_dim=50,
    n_layer=3,
    vocab_size=44,
)
print(model)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adamax(model.parameters())

G = Data.batch_generator(5, 100)
batch_per_epoch, to_idx, to_char = next(G)
X, Y = next(G)

for epoch in range(10):
    losses = []
    for batch_count in range(batch_per_epoch):
        model.zero_grad()

        #mode.hidden = model.init_hidden()
        #X, Y = next(G)
        X = autograd.Variable(torch.from_numpy(X))
        Y = autograd.Variable(torch.from_numpy(Y))

        preds = model(X)

        loss = criterion(preds.view(-1, model.vocab_size), Y.view(-1))
        loss.backward()
        optimizer.step()
        losses.append(loss)

        if (batch_count % 20 == 0):
            print('Loss: ', losses[-1])