Amazon SageMaker object detection: expected number of batches error - machine-learning

I am getting the following error while training my model for more than 1 epoch:
[02/06/2019 13:37:08 WARNING 140231582721856] Expected number of batches: 15, did not match the number of batches processed: 16. This may happen when some images or annotations are invalid and cannot be parsed. Please check the dataset and ensure it follows the format in the documentation.
[02/06/2019 13:37:08 INFO 140231582721856] #quality_metric: host=algo-1, epoch=24, batch=16 train cross_entropy <loss>=(nan)
[02/06/2019 13:37:08 INFO 140231582721856] #quality_metric: host=algo-1, epoch=24, batch=16 train smooth_l1 <loss>=(nan)
[02/06/2019 13:37:08 INFO 140231582721856] Round of batches complete
[02/06/2019 13:37:08 INFO 140231582721856] Updated the metrics
[02/06/2019 13:37:08 INFO 140231582721856] #quality_metric: host=algo-1, epoch=24, validation mAP <score>=(0.0)
[02/06/2019 13:37:08 INFO 140231582721856] #progress_metric: host=algo-1, completed 83 % of epochs
#metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Number of Batches Since Last Reset": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Number of Records Since Last Reset": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Total Batches Seen": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Total Records Seen": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Max Records Seen Between Resets": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Reset Count": {"count": 1, "max": 25, "sum": 25.0, "min": 25}}, "EndTime": 1549460228.963195, "Dimensions": {"Host": "algo-1", "Meta": "training_data_iter", "Operation": "training", "Algorithm": "AWS/Object Detection", "epoch": 24}, "StartTime": 1549460224.644808}
Following is the code that I used for the estimator:
od_model = sagemaker.estimator.Estimator(training_image,
                                         role,
                                         train_instance_count=1,
                                         train_instance_type='ml.p3.8xlarge',
                                         train_volume_size=500,
                                         train_max_run=300000,
                                         input_mode='File',
                                         output_path=s3_output_location,
                                         sagemaker_session=sess)
And for the hyperparameters:
od_model.set_hyperparameters(base_network='resnet-50',
                             use_pretrained_model=0,
                             num_classes=1,
                             mini_batch_size=32,
                             epochs=30,
                             learning_rate=0.001,
                             lr_scheduler_step='3,6',
                             lr_scheduler_factor=0.1,
                             optimizer='sgd',
                             momentum=0.9,
                             weight_decay=0.0005,
                             overlap_threshold=0.5,
                             nms_threshold=0.45,
                             image_shape=512,
                             label_width=360,
                             num_training_samples=500)
Bounding boxes do appear if I keep epochs at 1, although that model does not detect properly and creates boxes everywhere. With the above code (30 epochs), the final model does not create any bounding boxes at all.

Both training losses (cross_entropy and smooth_l1) are NaN and the validation mAP is 0, which means the model was not trained properly. Try tuning the learning_rate and mini_batch_size hyperparameters. For a small dataset (500 images), you should also use transfer learning by setting use_pretrained_model=1.
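For illustration, here is a minimal sketch of the adjusted call. The concrete values below (pretrained base, smaller batch, lower learning rate, later scheduler steps) are only starting-point assumptions to tune from, not verified settings:

od_model.set_hyperparameters(base_network='resnet-50',
                             use_pretrained_model=1,   # transfer learning from the pretrained base network
                             num_classes=1,
                             mini_batch_size=16,       # smaller batches for a 500-image dataset (assumption)
                             epochs=30,
                             learning_rate=0.0001,     # lower rate to avoid NaN losses (assumption)
                             lr_scheduler_step='10,20',
                             lr_scheduler_factor=0.1,
                             optimizer='sgd',
                             momentum=0.9,
                             weight_decay=0.0005,
                             overlap_threshold=0.5,
                             nms_threshold=0.45,
                             image_shape=512,
                             label_width=360,
                             num_training_samples=500)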

Related

Guidance needed - GridSearchCV returns parameters that decrease the accuracy of the XGBoost model

I am playing around with the XGBClassifier and tuning it with GridSearchCV. I first created the variable xgbc:
xgbc = xgb.XGBClassifier()
I didn't use any parameters, as I wanted to see the default model's performance. This gave me accuracy_score = 85.65%, recall_score = 77.91% and roc_auc_score = 84.21%, using the following lines of code:
print("Accuracy: ", accuracy_score(y_test, xgbc.predict(X_test)))
print("Recall: ", recall_score(y_test, xgbc.predict(X_test)))
print("ROC_AUC: ", roc_auc_score(y_test, xgbc.predict(X_test)))
Next, I used GridSearchCV to try to tune the parameters, like this:
Setting up the parameter dictionary:
xgbc_params = {'max_depth': [5, 6, 7],                     #6
               'learning_rate': [0.25, 0.300000012, 0.35], #0.300000012
               'gamma': [0, 0.001, 0.1],                   #0
               'reg_lambda': [0.8, 0.95, 1],               #1
               'scale_pos_weight': [0, 1, 2],              #1
               'n_estimators': [95, 100, 105]}             #100
(The numbers after the # are the default values, which gave me the above scores.)
And now run the GridSearchCV like this:
xgbc_grid = GridSearchCV(xgbc, param_grid=xgbc_params, scoring = make_scorer(accuracy_score), cv = 10, n_jobs = -1)
Next, fit this to the training data:
xgbc_grid.fit(X_train, y_train, verbose = 1, early_stopping_rounds = 10, eval_metric = 'aucpr', eval_set = [(X_test, y_test)])
Finally, run the metrics again:
print("Best Reg estimators: ", xgbc_grid.best_params_)
print("Accuracy: ", accuracy_score(y_test, xgbc_grid.predict(X_test)))
print("Recall: ", recall_score(y_test, xgbc_grid.predict(X_test)))
print("ROC_AUC: ", roc_auc_score(y_test, xgbc_grid.predict(X_test)))
Now, the scores change: accuracy_score = 0.8340807174887892, recall_score = 0.7325581395348837 and roc_auc_score = 0.8420896282464777. Also, here is the best_params_ result:
Best Reg estimators: {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 5, 'n_estimators': 95, 'reg_lambda': 0.8, 'scale_pos_weight': 1}
Here is my problem:
The parameter values that GridSearchCV returns through xgbc_grid.best_params_ are not optimal for accuracy, since the accuracy score decreases. Can you please help me figure out why this is happening?
In the parameter dictionary above, I have provided the default values. If I set the parameters to only these single values, e.g. 'max_depth': [6], then I get the 85% accuracy. However, as soon as I add other values, like 'max_depth': [5, 6, 7], GridSearchCV returns parameters that do not give the highest accuracy score. Full details below:
Base Reg estimators (acc = 85%): {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 5, 'n_estimators': 95, 'reg_lambda': 0.8, 'scale_pos_weight': 1}
Best Reg estimators (acc = 83%): {'gamma': 0, 'learning_rate': 0.35, 'max_depth': 6, 'n_estimators': 100, 'reg_lambda': 1, 'scale_pos_weight': 1}
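As a side note for debugging (a minimal sketch, assuming the fitted xgbc_grid and the train/test split above): GridSearchCV picks best_params_ by the mean cross-validated accuracy on the training folds, stored in best_score_ and cv_results_, not by the score on X_test, so the two numbers are worth comparing directly:

import pandas as pd  # assumption: pandas is available for the summary table

# Score GridSearchCV actually optimised (mean CV accuracy on the training folds)
print("Mean CV accuracy of best_params_:", xgbc_grid.best_score_)
# Held-out test accuracy of the refitted best estimator
print("Test accuracy of best_params_:   ", accuracy_score(y_test, xgbc_grid.predict(X_test)))

# Per-candidate CV results, to see how close the candidates are to each other
cv_table = pd.DataFrame(xgbc_grid.cv_results_)[['params', 'mean_test_score', 'rank_test_score']]
print(cv_table.sort_values('rank_test_score').head())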

How to feed key-value features (aggregated data) to LSTM?

I have the following time-series aggregated input for an LSTM-based model:
x(0): {y(0,0): {a(0,0), b(0,0)}, y(0,1): {a(0,1), b(0,1)}, ..., y(0,n): {a(0,n), b(0,n)}}
x(1): {y(1,0): {a(1,0), b(1,0)}, y(1,1): {a(1,1), b(1,1)}, ..., y(1,n): {a(1,n), b(1,n)}}
...
x(m): {y(m,0): {a(m,0), b(m,0)}, y(m,1): {a(m,1), b(m,1)}, ..., y(m,n): {a(m,n), b(m,n)}}
where x(m) is a timestep, and a(m,n) and b(m,n) are features aggregated by the non-temporal sequential key y(m,n), which might be 0...1,000.
Example:
0: {90: {4, 4.2}, 91: {6, 0.2}, 92: {1, 0.4}, 93: {12, 11.2}}
1: {103: {1, 0.2}}
2: {100: {3, 0.1}, 101: {0.4, 4}}
Where 90-93, 103, and 100-101 are aggregation keys.
How can I feed this kind of input to LSTM?
Another approach would be to use non-aggregated data. In that case, I'd get the proper input for LSTM. Example:
Aggregated input:
0: {100: {3, 0.1}, 101: {0.4, 4}}
Original input:
0: 100, 1, 0.05
1: 101, 0.2, 2
2: 100, 1, 0
3: 100, 1, 0.05
4: 101, 0.2, 2
But in that case, the aggregation would be lost, and the whole purpose of aggregation is to minimize the number of timesteps, so that I get 500 timesteps instead of e.g. 40,000, which is impossible to feed to an LSTM. If you have any ideas, I'd appreciate it.
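For concreteness, here is a minimal sketch of one possible way to turn the aggregated example above into a dense array an LSTM layer could accept; zero-padding the variable number of keys per timestep to a fixed size is my own assumption, not something prescribed by the question:

import numpy as np

# Aggregated example from above: timestep -> {key: (a, b)}
data = {
    0: {90: (4, 4.2), 91: (6, 0.2), 92: (1, 0.4), 93: (12, 11.2)},
    1: {103: (1, 0.2)},
    2: {100: (3, 0.1), 101: (0.4, 4)},
}

# One possible dense representation: (timesteps, max_keys_per_step, 3),
# where each slot holds (key, a, b) and missing slots are zero-padded.
max_keys = max(len(step) for step in data.values())
tensor = np.zeros((len(data), max_keys, 3))
for t, step in data.items():
    for i, (key, (a, b)) in enumerate(sorted(step.items())):
        tensor[t, i] = (key, a, b)

print(tensor.shape)  # (3, 4, 3) -> timesteps x padded keys x (key, a, b)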

Data shuffling for Image Classification

I want to develop a CNN model to identify 24 hand signs in American Sign Language. I created a custom dataset that contains 3,000 images for each hand sign, i.e., 72,000 images in the entire dataset.
For training the model, I would be using an 80-20 dataset split (2,400 images per hand sign in the training set and 600 images per hand sign in the validation set).
My question is:
Should I randomly shuffle the images when creating the dataset? And why?
Based on my previous experience, it led to the validation loss being lower than the training loss, and the validation accuracy higher than the training accuracy. Check this link.
Random shuffling of data is a standard procedure in all machine learning pipelines, and image classification is not an exception; its purpose is to break possible biases during data preparation - e.g. putting all the cat images first and then the dog ones in a cat/dog classification dataset.
Take for example the famous iris dataset:
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
y
# result:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
As you can clearly see, the dataset has been prepared in such a way that the first 50 samples are all of label 0, the next 50 of label 1, and the last 50 of label 2. Try to perform a 5-fold cross validation in such a dataset without shuffling and you'll find most of your folds containing only a single label; try a 3-fold CV, and all your folds will include only one label. Bad... BTW, it's not just a theoretical possibility, it has actually happened.
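A minimal sketch of that 3-fold case, so you can see it for yourself:

from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
import numpy as np

X, y = load_iris(return_X_y=True)
# Without shuffling, each test fold is one contiguous block of 50 samples,
# i.e. exactly one class.
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=3).split(X)):
    print("fold", fold, "test labels:", np.unique(y[test_idx]))
# fold 0 test labels: [0]
# fold 1 test labels: [1]
# fold 2 test labels: [2]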
Even if no such bias exists, shuffling never hurts, so we do it always just to be on the safe side (you never know...).
Based on my previous experience, it led to the validation loss being lower than the training loss, and the validation accuracy higher than the training accuracy. Check this link.
As noted in the answer there, it is highly unlikely that this was due to shuffling. Data shuffling is not anything sophisticated - essentially, it is just the equivalent of shuffling a deck of cards; it may have happened once that you insisted on "better" shuffling and subsequently you ended up with a straight flush hand, but obviously this was not due to the "better" shuffling of the cards.
Here is my two cents on the topic.
First of all, make sure to extract a test set that has an equal number of samples for each hand sign (hand sign #1 - 500 samples, hand sign #2 - 500 samples, and so on).
I think this is referred to as stratified sampling.
When it comes to the training set, there is no huge mistake in shuffling the entire set. However, when splitting the training set into training and validation sets, make sure that the validation set is a good enough representation of the test set.
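A minimal sketch of such a stratified split (the names X and y for the images and sign labels are my assumptions):

from sklearn.model_selection import train_test_split

# stratify=y keeps the class proportions identical in both parts,
# so every hand sign is equally represented in train and validation.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)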
One of my personal experiences with shuffling:
After splitting the training set into training and validation sets, the validation set turned out to be very easy to predict. Therefore, I saw good learning metric values. However, the performance of the model on the test set was horrible.

Visualizing clustering results

After using k-means I have 3 clusters.
I've used 10 features (marks) in k-means for this data set.
I understand that we can't draw a 10D chart, but how can I visualize these clusters?
Should I separate the data by 2 or 3 features instead of 10?
Which axes should I use in my case?
For drawing I'm using JS and highcharts.js on the client side.
Example code (just to satisfy the Stack Overflow requirement); in reality I have 10 coordinates for every point:
const kmeans = require('ml-kmeans');
let data = [[1, 1, 1, 1, 1], [1, 2, 1, 1, 1], [-1, -1, -1, 1, 1], [-1, -1, -1.5, 1, 1]];
let centers = [[1, 2, 1, 1, 1], [-1, -1, -1, 1, 1]];
let ans = kmeans(data, 2, { initialization: centers });
console.log(ans);
/* KMeansResult
{
  clusters: [ 0, 0, 1, 1, 1 ],
  centroids:
   [ { centroid: [ 1, 1.5, 1, 1, 1 ], error: 0.25, size: 2 },
     { centroid: [ -1, -1, -1.25, 1, 1 ], error: 0.0625, size: 2 } ],
  converged: true, iterations: 1
}
*/
Use your favorite generic visualization approach; clusterings do not have very special requirements. For example (see the PCA sketch after this list):
Scatterplot matrix
Dimensionality reduction with PCA
tSNE embeddings
MDS
UMAP
Boxplots
Violin plots
...
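As one concrete option from that list, a minimal sketch in Python (scikit-learn and matplotlib assumed; porting the same idea to Highcharts is just one scatter series per cluster):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Assumption: X is the (n_samples, 10) feature matrix used for k-means;
# random data stands in for it here.
X = np.random.rand(200, 10)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Project the 10-D points onto the first 2 principal components for plotting.
coords = PCA(n_components=2).fit_transform(X)
plt.scatter(coords[:, 0], coords[:, 1], c=labels)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.title('k-means clusters in PCA space')
plt.show()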

glTF 2.0 BoxTextured sample

I am trying to understand the data in the BoxTextured model for the TEXCOORD_0 accessor.
As seen in the capture, the data seems correct for POSITION and NORMALS, but why are the values in the TEXCOORD_0 accessor not in the range "min": [ 0.0, 0.0 ] to "max": [ 1.0, 1.0 ], but instead have "max": [ 6.0, 1.0 ]?
{
  "bufferView": 2,
  "byteOffset": 0,
  "componentType": 5126,
  "count": 24,
  "max": [ 6.0, 1.0 ],
  "min": [ 0.0, 0.0 ],
  "type": "VEC2"
}
Should those be normalized?
The texture I applied is totally wrong: Rendered with uv test texture.
Where is my misunderstanding?
Thank you.
(I know I have a problem with my face orientation, but that's another problem.)
The 6.0 comes from the number of faces on the cube. Note that the sampler specifies REPEAT (10497):
"samplers": [
{
"magFilter": 9729,
"minFilter": 9986,
"wrapS": 10497,
"wrapT": 10497
}
],
so the image will be tiled repeatedly. It's just a simple way to get the logo rendered on all six faces of the cube.
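As a tiny illustration (not part of the sample itself), REPEAT effectively keeps only the fractional part of the coordinate, so values outside [0, 1] still land inside the texture:

import math

# GL_REPEAT wrapping: only the fractional part of the coordinate is used,
# so u = 0.0, 1.0 and 6.0 all sample the same spot of the texture.
for u in [0.0, 0.5, 1.0, 2.5, 6.0]:
    print(u, "->", u - math.floor(u))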
