RandomizedSearchCV with xgboost classifier is taking too long

I am trying to run this code on my dataset, but it's taking too long (it has been running since yesterday, and my dataset is not that large).
# A parameter grid for XGBoost
params = {
    'min_child_weight': [1, 5, 10],
    'gamma': [0.5, 1, 1.5, 2, 5],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.6, 0.8, 1.0],
    'max_depth': [3, 4, 5]
}

xgb_classifier = XGBClassifier(learning_rate=0.02, n_estimators=600,
                               objective='binary:logistic', silent=True, nthread=1)

folds = 3
param_comb = 5

skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=1001)

random_search = RandomizedSearchCV(xgb_classifier, param_distributions=params,
                                   n_iter=param_comb, scoring='roc_auc', n_jobs=4,
                                   cv=skf.split(trainx, trainy), verbose=3,
                                   random_state=1001)

random_search.fit(trainx, trainy)
I am new to the field. Any ideas?
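One quick diagnostic (my addition, not part of the original question): time a single fit of the base classifier. The search above trains folds * param_comb = 15 models, each with 600 trees on a single thread (nthread=1), so one timed fit gives a rough estimate of the total:
import time

t0 = time.time()
xgb_classifier.fit(trainx, trainy)  # one 600-tree model, single-threaded
one_fit = time.time() - t0
# 3 folds x 5 candidates = 15 fits, ignoring n_jobs=4 parallelism and the final refit
print('one fit: %.0f s, rough search estimate: %.0f s' % (one_fit, 15 * one_fit))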

Related

Early stopping in XGBoost classification and RandomizedSearchCV

When I try to combine RandomizedSearchCV with early stopping to reduce overfitting, I get this error:
py:372: FitFailedWarning:
300 fits failed out of a total of 300.
The score on these train-test partitions for these parameters will be set to nan.
If these failures are not expected, you can try to debug them by setting error_score='raise'.
The code I am trying is like this:
params_dist = {'min_child_weight': [0.1, 1, 5, 10, 50],
               'colsample_bytree': np.arange(0.5, 1.0, 0.1),
               'gamma': [0.5, 1, 1.5, 2, 5],
               'subsample': np.arange(0.5, 1.0, 0.1),
               'max_depth': range(3, 21, 3),
               'learning_rate': [0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 1],
               'n_estimators': [50, 100, 250, 500, 750, 1000],
               'reg_alpha': [0.0001, 0.001, 0.1, 1],
               'reg_lambda': [0.0001, 0.001, 0.1, 1]}

model_with_earlyStopping = xgb.XGBClassifier(objective='binary:logistic',
                                             eval_metric="error",
                                             early_stopping_rounds=13,
                                             seed=42)

random_search = model_selection.RandomizedSearchCV(model_with_earlyStopping,
                                                   param_distributions=params_dist,
                                                   scoring='roc_auc',
                                                   n_jobs=-1,
                                                   verbose=0,
                                                   cv=3,
                                                   random_state=1001,
                                                   n_iter=100)
The code worked fine without early stopping; I am looking for a way to combine the two methods.
Can anyone help me fix it?
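A likely cause (my note, not part of the original post): once early_stopping_rounds is set on the estimator, XGBoost's fit requires an eval_set, and RandomizedSearchCV never passes one, so all 300 fits fail (setting error_score='raise' will surface the underlying exception). One workaround is to route a fixed validation set through the search's fit parameters (a minimal sketch; X_val/y_val is an assumed held-out split, and every CV fold early-stops against this same holdout):
from sklearn.model_selection import train_test_split

# Hypothetical holdout used only for early stopping.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

random_search.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)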

Guidance needed - GridSearchCV returns parameters that decrease the accuracy of the XGBoost model

I am playing around with XGBClassifier and tuning it with GridSearchCV. I first created the variable xgbc:
xgbc = xgb.XGBClassifier()
xgbc.fit(X_train, y_train)  # fit with the default parameters
I didn't set any parameters, as I wanted to see the default model's performance. This gave me accuracy_score = 85.65%, recall_score = 77.91%, and roc_auc_score = 84.21%, using the following lines of code:
print("Accuracy: ", accuracy_score(y_test, xgbc.predict(X_test)))
print("Recall: ", recall_score(y_test, xgbc.predict(X_test)))
print("ROC_AUC: ", roc_auc_score(y_test, xgbc.predict(X_test)))
Next, I used GridSearchCV to try to tune the parameters, like this:
Setting up the parameter dictionary:
xgbc_params = {'max_depth': [5, 6, 7],                      # 6
               'learning_rate': [0.25, 0.300000012, 0.35],  # 0.300000012
               'gamma': [0, 0.001, 0.1],                    # 0
               'reg_lambda': [0.8, 0.95, 1],                # 1
               'scale_pos_weight': [0, 1, 2],               # 1
               'n_estimators': [95, 100, 105]}              # 100
(The numbers after the # are the default values, which gave me the above scores.)
And now run the GridSearchCV like this:
xgbc_grid = GridSearchCV(xgbc, param_grid=xgbc_params, scoring=make_scorer(accuracy_score), cv=10, n_jobs=-1)
Next, fit this to the training data:
xgbc_grid.fit(X_train, y_train, verbose=1, early_stopping_rounds=10, eval_metric='aucpr', eval_set=[(X_test, y_test)])
Finally, run the metrics again:
print("Best Reg estimators: ", xgbc_grid.best_params_)
print("Accuracy: ", accuracy_score(y_test, xgbc_grid.predict(X_test)))
print("Recall: ", recall_score(y_test, xgbc_grid.predict(X_test)))
print("ROC_AUC: ", roc_auc_score(y_test, xgbc_grid.predict(X_test)))
Now, the scores change: accuracy_score = 0.8340807174887892, recall_score = 0.7325581395348837 and roc_auc_score = 0.8420896282464777. Also, here is the best_params_ result:
Best Reg estimators: {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 5, 'n_estimators': 95, 'reg_lambda': 0.8, 'scale_pos_weight': 1}
Here is my problem:
The parameter values that GridSearchCV returns through xgbc_grid.best_params_ are not optimal for test-set accuracy, since the accuracy score decreases. Can you please help me figure out why this is happening?
In the parameter dictionary above, I have provided the default values. If I restrict each parameter to its single default value, e.g. 'max_depth': [6], I get the 85% accuracy. However, as soon as I add other values, like 'max_depth': [5, 6, 7], GridSearchCV returns parameters that do not give the highest accuracy score. Full details below:
Base Reg estimators (acc = 85%): {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 6, 'n_estimators': 100, 'reg_lambda': 1, 'scale_pos_weight': 1}
Best Reg estimators (acc = 83%): {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 5, 'n_estimators': 95, 'reg_lambda': 0.8, 'scale_pos_weight': 1}
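A likely explanation (my note, not from the original post): GridSearchCV picks the combination with the best cross-validated accuracy on the training folds, not the best accuracy on X_test, so the tuned model can score slightly lower on the held-out test set. The cross-validated scores that drove the selection can be inspected like this (a minimal sketch):
import pandas as pd

cv_df = pd.DataFrame(xgbc_grid.cv_results_)
cols = ['params', 'mean_test_score', 'std_test_score', 'rank_test_score']
print(cv_df[cols].sort_values('rank_test_score').head(10))
If the default combination's cross-validated score is within one standard deviation of the winner's, the difference is likely noise rather than a tuning failure.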

CatBoost hyperparameter search

I want to include the default hyperparameter values in a randomized search; how can I do it? (the per_float_feature_quantization param here)
grid = {'learning_rate': [0.1, 0.16, 0.2],
        'depth': [4, 6, 10],
        'l2_leaf_reg': [1, 3, 5, 7, 9],
        'iterations': [800, 1000, 1500, 2000],
        'bagging_temperature': [1, 2, 3, 4, 5],
        'border_count': [128, 256, 512],
        'grow_policy': ['SymmetricTree', 'Depthwise'],
        'per_float_feature_quantization': [None, '3:border_count=1024']}

model = CatBoostClassifier(loss_function='MultiClass',
                           custom_metric='Accuracy',
                           eval_metric='TotalF1',
                           od_type='Iter',
                           od_wait=40,
                           task_type="GPU",
                           devices='0:1',
                           random_seed=42,
                           cat_features=cat_features)

randomized_search_result = model.randomized_search(grid, X=X, y=y)
And I got:
CatBoostError: library/cpp/json/writer/json_value.cpp:499: Not a map
There is an error in one or more of the parameters of your grid. Commenting them out one by one should help you identify it; a quick elimination loop is sketched below.
As a side note, Optuna recently released support for CatBoost, in case you want to try that instead of a grid search. See Optuna's documentation for a CatBoost example.
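The elimination idea in code (a sketch of mine, not from the original answer; the iterations override is a hypothetical shortcut to keep each probe cheap):
for key in list(grid):
    trial = {k: v for k, v in grid.items() if k != key}
    trial['iterations'] = [10]  # hypothetical override so each probe trains quickly
    try:
        model.randomized_search(trial, X=X, y=y, n_iter=1, verbose=False)
        print('search succeeds without %r -- likely culprit' % key)
    except Exception as exc:
        print('still fails without %r: %s' % (key, exc))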

Visualizing clustering results

After using k-means I have 3 clusters.
I used 10 features (marks) in k-means for this data set.
I understand that we can't draw a 10-D chart, but how can I visualize these clusters?
Should I separate the data by 2 or 3 features instead of all 10?
Which axes should I use in my case?
For drawing I'm using JS and highcharts.js on the client side.
Example code (included just to meet the Stack Overflow requirement); in reality every point has 10 coordinates:
const kmeans = require('ml-kmeans');

let data = [[1, 1, 1, 1, 1], [1, 2, 1, 1, 1], [-1, -1, -1, 1, 1], [-1, -1, -1.5, 1, 1]];
let centers = [[1, 2, 1, 1, 1], [-1, -1, -1, 1, 1]];

let ans = kmeans(data, 2, { initialization: centers });
console.log(ans);
/* KMeansResult
{
  clusters: [ 0, 0, 1, 1 ],
  centroids:
   [ { centroid: [ 1, 1.5, 1, 1, 1 ], error: 0.25, size: 2 },
     { centroid: [ -1, -1, -1.25, 1, 1 ], error: 0.0625, size: 2 } ],
  converged: true,
  iterations: 1
}
*/
Use your favorite generic visualization approach; clusterings do not have very special requirements. For example (a PCA sketch follows this list):
Scatterplot matrix
Dimensionality reduction with PCA
t-SNE embeddings
MDS
UMAP
Boxplots
Violin plots
...
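As a concrete starting point, here is a minimal Python/scikit-learn sketch of the PCA option (assumes X is the (n_samples, 10) feature matrix and labels holds the k-means cluster assignments; for highcharts.js you would compute the 2-D coordinates server-side and ship them to the client):
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

coords = PCA(n_components=2).fit_transform(X)  # project 10-D points to 2-D
plt.scatter(coords[:, 0], coords[:, 1], c=labels)  # one colour per cluster
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()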

Why is my convolutional autoencoder not training properly?

Why is my convolutional autoencoder not converging properly? I have a very simple layer stack.
Encoder: Conv/ReLU (kernel 7x7, stride 1, padding 0) => MaxPool (kernel 2x2, stride 2) => Conv/ReLU (kernel 5x5, stride 1, padding 0) => MaxPool (kernel 2x2, stride 2)
Decoder: Nearest-neighbour upsampling => Deconv/ReLU => Nearest-neighbour upsampling => Deconv/ReLU
Training images are of size 30x30x1.
I tried to train it with 1,000 images over 1,000 epochs, but the error (MSE) is still 120.
BATCH_SIZE = 100
IMAGE_SIZE = 30
NUM_CHANNELS = 1
num_images = 1000

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def encoder(X, w, w2, wd, wd2):
    # Encoder: 30x30x1 -> conv 7x7 VALID -> 24x24x32 -> pool -> 12x12x32
    l1a = tf.nn.relu(tf.nn.conv2d(X, w, strides=[1, 1, 1, 1], padding='VALID'))
    l1 = tf.nn.max_pool(l1a, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # conv 5x5 VALID -> 8x8x64 -> pool -> 4x4x64
    l2a = tf.nn.relu(tf.nn.conv2d(l1, w2, strides=[1, 1, 1, 1], padding='VALID'))
    l2 = tf.nn.max_pool(l2a, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Decoder: nearest-neighbour upsample to 8x8, deconv back to 12x12x32
    l1da = tf.image.resize_images(l2, 8, 8, 1, align_corners=False)
    output_shapel1d = tf.convert_to_tensor([BATCH_SIZE, 12, 12, 32], dtype=tf.int32)
    l1d = tf.nn.relu(tf.nn.conv2d_transpose(l1da, wd, output_shapel1d, strides=[1, 1, 1, 1], padding='VALID'))
    # upsample to 24x24, deconv back to 30x30x1
    l2da = tf.image.resize_images(l1d, 24, 24, 1, align_corners=False)
    output_shapel2d = tf.convert_to_tensor([BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS], dtype=tf.int32)
    l2d = tf.nn.relu(tf.nn.conv2d_transpose(l2da, wd2, output_shapel2d, strides=[1, 1, 1, 1], padding='VALID'))
    return l2d

complete_image = extract_data(0, 1000)  # image loading, defined in the full gist
trX = complete_image[0:900]
trY = trX
teX = complete_image[900:1000]
teY = teX

X = tf.placeholder("float", [BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS])
Y = tf.placeholder("float", [BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS])

w = init_weights([7, 7, 1, 32])
w2 = init_weights([5, 5, 32, 64])
wd = init_weights([5, 5, 32, 64])
wd2 = init_weights([7, 7, 1, 32])

py_x = encoder(X, w, w2, wd, wd2)

cost = tf.reduce_mean(tf.squared_difference(py_x, Y, name=None))
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = py_x

global_step = tf.Variable(0, name='global_step', trainable=False)
saver = tf.train.Saver()

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    start = global_step.eval()  # get last global_step
    print "Start from:", start
    if FLAGS.output == "train":  # FLAGS is defined in the full script
        for i in range(start, 500):
            total_epoch_cost = 0.0  # running cost for this epoch
            training_batch = zip(range(0, num_images - BATCH_SIZE, BATCH_SIZE),
                                 range(BATCH_SIZE, num_images - BATCH_SIZE, BATCH_SIZE))
            for start, end in training_batch:
                sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
                total_epoch_cost += sess.run(cost, feed_dict={X: trX[start:end], Y: trY[start:end]})
            # note: this averages over BATCH_SIZE, not over the number of batches
            avg_epoch_cost = total_epoch_cost / BATCH_SIZE
            print "cost during epoch %d is %f" % (i, avg_epoch_cost)
I have added the complete code in this gist with slight modifications. I am training this with around 10,000 images, and the error after 488 epochs is 74.8.
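One thing worth checking (my suggestion, not from the original post): if extract_data returns raw pixel values in [0, 255], an MSE around 120 largely reflects the scale of the targets, and rescaling to [0, 1] both shrinks the loss and tends to stabilise training. A minimal sketch, assuming complete_image holds raw pixels:
# Rescale inputs (and therefore reconstruction targets) to [0, 1].
trX = trX.astype('float32') / 255.0
teX = teX.astype('float32') / 255.0
trY, teY = trX, teX
The final ReLU already restricts outputs to be non-negative, which is consistent with [0, 1] targets.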
