I would like to have all the predictions of a random forest from the {ranger} package stored in an mlr3 Prediction object, and then use the predictions of the individual trees as features for another learner.
The following code leads to the error message below in R.
Code:
library("mlr3")
library("mlr3learners")
task = tsk("iris")
learner = lrn("classif.ranger", predict.all = TRUE)
# Train
train_set = sample(task$nrow, 0.8 * task$nrow)
test_set = setdiff(seq_len(task$nrow), train_set)
learner$train(task, row_ids = train_set)
# Prediction
prediction = learner$predict(task, row_ids = test_set)
print(prediction)
Error:
Error in check_prediction_data.PredictionDataClassif(pdata) :
Assertion on 'as_factor(pdata$response, levels = lvls)' failed: Must
have length 30, but has length 15000.
Can someone help me to solve this issue?
My seq2seq model seems to only learn to produce sequences of popular words like:
"i don't . i don't . i don't . i don't . i don't"
I think that might be due to a lack of actual data flow between encoder and decoder.
That happens whether I use encoder.init_hidden() or encoder_hidden.detach().
If I use neither, I get an error:
"RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward."
If I try to use retain_graph=True, I get another error:
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [256, 768]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True)."
This seems to be a very common use case, but despite reading similar questions and the documentation, and experimenting, I cannot solve it.
Am I missing something obvious?
encoder = Encoder(embedding_dim, hidden_size, max_seq_len, num_layers, vocab.len(), word_embeddings).to(device)
decoder = Decoder(embedding_dim, hidden_size, num_layers, vocab.len()).to(device)

loss_function = nn.CrossEntropyLoss(ignore_index=0)  # ignore the padding index
optimizer = optim.SGD(params=list(encoder.parameters()) + list(decoder.parameters()), lr=learn_rate)

encoder.train()
decoder.train()

encoder_hidden = encoder.init_hidden()

for epoch in range(num_epochs):
    epoch_loss = 0
    num_samples = 0
    j = 0
    for prompts, responses in train_data_loader:
        #encoder_hidden = encoder.init_hidden()  # new tensor of zeroes
        encoder_hidden = encoder_hidden.detach()

        optimizer.zero_grad()

        encoder_output, encoder_hidden = encoder(prompts, encoder_hidden)
        decoder_hidden = encoder.transform_hidden(encoder_hidden)

        batch_size = responses.size(0)
        decoder_input = torch.tensor([[SOS_TOKEN]] * batch_size, device=device)
        decoder_outputs = []

        sequence_length = responses.shape[1]
        for i in range(sequence_length):
            word_index = responses[:, i:i+1]
            # the decoder's returned hidden state is discarded; decoder_hidden stays fixed
            decoder_output, _ = decoder(decoder_input, decoder_hidden)
            decoder_outputs.append(decoder_output)
            decoder_input = word_index  # teacher forcing: feed the ground-truth token

        decoder_outputs_t = torch.cat(decoder_outputs, dim=1)
        decoder_outputs_t = decoder_outputs_t.permute(0, 2, 1)  # move class dim to position 1 for CrossEntropyLoss

        loss = loss_function(decoder_outputs_t, responses)
        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()
        num_samples += 1
        j += 1

    mean_loss = epoch_loss / num_samples
I'm currently working on a Naive Bayes sentiment analysis program, but I'm not quite sure how to determine its accuracy. My code is:
x = df["Text"]
y = df["Mood"]
test_size = 1785
x_train = x[:-test_size]
y_train = y[:-test_size]
x_test = x[-test_size:]
y_test = y[-test_size:]
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(x_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = MultinomialNB().fit(X_train_tfidf, y_train)
print(clf.predict(count_vect.transform(["Random text"])))
The prediction works just fine for a sentence that I give it; however, I want to run it on the 20% held out from my dataset (x_test and y_test) and calculate the accuracy. I'm not quite sure how to approach this. Any help would be appreciated.
I've also tried the following:
predictions = clf.predict(x_test)
print(accuracy_score(y_test, predictions))
Which gives me the following error:
ValueError: could not convert string to float: "A sentence from the dataset"
Before using predictions = clf.predict(x_test), please also convert the test set to a numeric representation:
x_test = count_vect.transform(x_test).toarray()
You can find a step-by-step guide to this [here].
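Putting the pieces together, here is a minimal sketch (it assumes count_vect, tfidf_transformer, clf, x_test, and y_test are the objects already fitted and split in the question, and it also applies the same tf-idf transform that was used at training time):
from sklearn.metrics import accuracy_score

# Apply the SAME fitted vectorizer and tf-idf transformer to the held-out text
# (transform only, no re-fitting), mirroring the training preprocessing.
X_test_counts = count_vect.transform(x_test)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)

# Predict on the transformed test set and compare against the true labels.
predictions = clf.predict(X_test_tfidf)
print(accuracy_score(y_test, predictions))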
I am trying to use a MobileNet model but am facing the above-mentioned issue. I don't know whether it is occurring due to train_test_split or something else. The architecture is shown below.
Can I use model.fit instead of model.fit_generator here?
mobilenet = MobileNet(input_shape=(224, 224, 3), weights='imagenet', include_top=False)

# don't train existing weights
for layer in mobilenet.layers:
    layer.trainable = False

folders = glob('/content/drive/MyDrive/AllClasses/*')
print("Total number of classes are", len(folders))

x = Flatten()(mobilenet.output)
prediction = Dense(len(folders), activation='softmax')(x)

model = Model(inputs=mobilenet.input, outputs=prediction)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

dataset = ImageDataGenerator(rescale=1./255)
dataset = dataset.flow_from_directory('/content/drive/MyDrive/AllClasses',
                                      target_size=(224, 224),
                                      batch_size=32,
                                      class_mode='categorical',
                                      color_mode='grayscale')

train_data, test_data = train_test_split(dataset, random_state=42, test_size=0.20, shuffle=True)

r = model.fit(train_data, validation_data=(test_data), epochs=5)
I am trying to hyperparameter-tune XGBoostClassifier using Hyperopt, but I am facing an error. Please find below the code that I am using, as well as the error:
Step_1: Objective Function
import csv
import numpy as np
import xgboost as xgb
from hyperopt import STATUS_OK
from timeit import default_timer as timer

MAX_EVALS = 200
N_FOLDS = 10

def objective(params, n_folds = N_FOLDS):
    """Objective function for XGBoost Hyperparameter Optimization"""

    # Keep track of evals
    global ITERATION
    ITERATION += 1

    # # Retrieve the subsample if present otherwise set to 1.0
    # subsample = params['boosting_type'].get('subsample', 1.0)
    # # Extract the boosting type
    # params['boosting_type'] = params['boosting_type']['boosting_type']
    # params['subsample'] = subsample

    # Make sure parameters that need to be integers are integers
    for parameter_name in ['max_depth', 'colsample_bytree', 'min_child_weight']:
        params[parameter_name] = int(params[parameter_name])

    start = timer()

    # Perform n_folds cross validation
    cv_results = xgb.cv(params, train_set, num_boost_round = 10000,
                        nfold = n_folds, early_stopping_rounds = 100,
                        metrics = 'auc', seed = 50)

    run_time = timer() - start

    # Extract the best score
    best_score = np.max(cv_results['auc-mean'])

    # Loss must be minimized
    loss = 1 - best_score

    # Boosting rounds that returned the highest cv score
    n_estimators = int(np.argmax(cv_results['auc-mean']) + 1)

    # Write to the csv file ('a' means append)
    of_connection = open(out_file, 'a')
    writer = csv.writer(of_connection)
    writer.writerow([loss, params, ITERATION, n_estimators, run_time])

    # Dictionary with information for evaluation
    return {'loss': loss, 'params': params, 'iteration': ITERATION,
            'estimators': n_estimators, 'train_time': run_time,
            'status': STATUS_OK}
I have defined the sample space and the optimization algorithm as well. While running Hyperopt, I am encountering this error below. The error is in the objective function.
Error: KeyError: 'auc-mean'
<ipython-input-62-8d4e97f16929> in objective(params, n_folds)
25 run_time = timer() - start
26 # Extract the best score
---> 27 best_score = np.max(cv_results['auc-mean'])
28 # Loss must be minimized
29 loss = 1 - best_score
First, print cv_results and see which keys exist.
In the example notebook below, the keys were 'test-auc-mean' and 'train-auc-mean'.
See cell 5 here:
https://www.kaggle.com/tilii7/bayesian-optimization-of-xgboost-parameters
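For example, a quick way to check (a minimal sketch, assuming params, train_set, and n_folds are already defined as in the question):
# Run the same cross validation outside the objective and inspect the result.
cv_results = xgb.cv(params, train_set, num_boost_round=10000,
                    nfold=n_folds, early_stopping_rounds=100,
                    metrics='auc', seed=50)

# cv_results is a pandas DataFrame; its columns show the exact keys,
# e.g. 'train-auc-mean', 'train-auc-std', 'test-auc-mean', 'test-auc-std'.
print(cv_results.columns.tolist())
print(cv_results.head())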
#avvinci is correct. Let me explain it further.
cv_results = xgb.cv(params, train_set, num_boost_round = 10000,
                    nfold = n_folds, early_stopping_rounds = 100,
                    metrics = 'auc', seed = 50)
This is XGBoost cross validation, and it returns the evaluation history. The history is essentially a pandas DataFrame. The column names in the DataFrame depend on what is passed in as the train, test, and eval sets.
best_score = np.max(cv_results['auc-mean'])
Here you are looking for the best AUC in the evaluation history, but its columns are called 'test-auc-mean' and 'train-auc-mean', as #avvinci suggested. The column name 'auc-mean' does not exist, so it throws a KeyError. Use 'train-auc-mean' for the best AUC on the training folds, or 'test-auc-mean' for the best AUC on the test folds.
If you are in doubt, just run that cross validation outside the objective function and call head() on cv_results.
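Concretely, the fix inside the objective function would look like this (a sketch, assuming you want the cross-validated score on the test folds):
# Use the column that xgb.cv actually produces for the held-out folds.
best_score = np.max(cv_results['test-auc-mean'])

# Loss must be minimized
loss = 1 - best_score

# Boosting round that produced the best cross-validated AUC
n_estimators = int(np.argmax(cv_results['test-auc-mean']) + 1)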
I moved my model and data to the same device, but it still raises an error like this:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
The following is my training code; I hope you can help. Thanks!
def train(train_img_path, train_label_path, pths_path, interval, log_file):
    file_num = len(os.listdir(train_img_path))
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    net = EAST(extractor=extractor, geometry_mode=geometry_mode, pretrained=True)
    net = net.to(device)

    trainset = custom_dataset(train_img_path, train_label_path)
    train_loader = data.DataLoader(trainset, batch_size=batch_size,
                                   shuffle=True, num_workers=num_workers, drop_last=True)

    optimizer = optim.SGD(net.parameters(), lr=initial_lr, momentum=momentum, weight_decay=weight_decay_sgd)
    criterion = Loss(weight_geo, weight_angle, geometry_mode="RBOX")

    net.train()
    epoch_loss = 0.

    for epoch in range(max_epoch):
        epoch_time = time.time()
        for i, (img, score_gt, geo_gt, ignored_map) in enumerate(train_loader):
            start_time = time.time()
            img, score_gt, geo_gt, ignored_map = img.to(device), score_gt.to(device), \
                geo_gt.to(device), ignored_map.to(device)
            score_pred, geo_pred = net(img)
            total_loss, score_loss, loss_AABB, loss_angle = criterion(score_pred, geo_pred, score_gt, geo_gt, ignored_map)
            epoch_loss += total_loss.item()
            optimizer.zero_grad()
            total_loss.backward()
            optimizer.step()
I suspect your loss function has some internal parameters of its own; therefore, you should also move it to the device:
criterion = Loss(weight_geo, weight_angle, geometry_mode="RBOX").to(device)
It would be easier to spot the error if you provided a full traceback indicating exactly which line caused the error.
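In the meantime, a quick way to locate the mismatch (a minimal sketch; it assumes Loss is an nn.Module whose internal tensors are registered as parameters or buffers, which the question's code does not show):
# Print the device of every parameter and buffer; anything still reporting
# "cpu" is what triggers the input/weight type mismatch on a CUDA input.
for name, p in list(net.named_parameters()) + list(criterion.named_parameters()):
    print(name, p.device)
for name, b in list(net.named_buffers()) + list(criterion.named_buffers()):
    print(name, b.device)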