Examples for CNTK Learners - machine-learning

I have been going through Microsoft's Python CNTK tutorials for version 2.0 Beta 9. I haven't found good documentation with examples of recommended values to pass to the different learners available. I have been able to get the following learners working on the CNTK 103: Part B - Feed Forward Network with MNIST tutorial:
lr_per_minibatch=learning_rate_schedule(0.2, UnitType.minibatch)
trainer = Trainer(z, ce, pe, sgd(z.parameters, lr=lr_per_minibatch))
lr_per_minibatch=learning_rate_schedule(0.2, UnitType.minibatch)
trainer = Trainer(z, ce, pe, adagrad(z.parameters, lr=lr_per_minibatch))
lr_per_minibatch=learning_rate_schedule(0.05, UnitType.minibatch)
trainer = Trainer(z, ce, pe, adam_sgd(z.parameters, lr=lr_per_minibatch, momentum=momentum_as_time_constant_schedule(700) ))
lr_per_minibatch=learning_rate_schedule(0.2, UnitType.minibatch)
trainer = Trainer(z, ce, pe, nesterov(z.parameters, lr=lr_per_minibatch, momentum=momentum_as_time_constant_schedule(700) ))
lr_per_minibatch=learning_rate_schedule(0.1, UnitType.minibatch)
trainer = Trainer(z, ce, pe, rmsprop(z.parameters, lr=lr_per_minibatch, gamma=0.90, inc=0.03, dec=0.03, max=0.1, min=0.1 ))
These work, but does anyone have good examples of recommended values for the parameters each learner takes?

For the current learners the best parameters depend on the data and the problem you are solving, so it is very hard to give good general recommendations. One typical piece of advice is that if a learning rate works, then all smaller learning rates will also work, but you will have to train longer (i.e. do more sweeps over the data).
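As a rough sketch of that advice (reusing z, ce, pe and the learner functions from the question's snippets; get_train_minibatch() is a hypothetical helper standing in for the tutorial's minibatch reader):
# Sketch only: pair smaller learning rates with more sweeps over the data.
for lr, num_sweeps in [(0.2, 10), (0.1, 20), (0.05, 40)]:
    lr_schedule = learning_rate_schedule(lr, UnitType.minibatch)
    trainer = Trainer(z, ce, pe, sgd(z.parameters, lr=lr_schedule))
    for sweep in range(num_sweeps):
        for batch in get_train_minibatch():  # hypothetical reader helper
            trainer.train_minibatch(batch)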

Related

When do I use scoring vs metrics to evaluate ML performance

Hi, what is the basic difference between 'scoring' and 'metrics'? Both are used to measure performance, but how do they differ?
For example, in the cross-validation snippet below, 'neg_mean_squared_error' is used for scoring:
# Assumes `array` holds the data as a NumPy array (features in columns 0-12, target in column 13).
from sklearn import model_selection
from sklearn.linear_model import LinearRegression

X = array[:, 0:13]
Y = array[:, 13]
seed = 7
# shuffle=True is needed for random_state to take effect in recent scikit-learn
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
model = LinearRegression()
scoring = 'neg_mean_squared_error'
results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("MSE: %.3f (%.3f)" % (results.mean(), results.std()))
but in the xgboost example below I am using metrics='rmse':
import xgboost as xgb

cmatrix = xgb.DMatrix(data=X, label=y)
params = {'objective': 'reg:linear', 'max_depth': 3}
cv_results = xgb.cv(dtrain=cmatrix, params=params, nfold=3, num_boost_round=5, metrics='rmse', as_pandas=True, seed=123)
print(cv_results)
how do they differ?
They don't; they are just different terms for the same thing.
To be very precise, scoring is the process by which one measures model performance according to some metric (or score). scikit-learn's choice of name for the scoring argument (as in your first snippet) is rather unfortunate (it actually implies a scoring function), since MSE and its variants (negative MSE, RMSE) are metrics or scores. But practically speaking, as your example snippets show, the two terms are used as synonyms and are frequently used interchangeably.
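To make the equivalence concrete, here is a small sketch reusing X, Y, and kfold from your first snippet (assuming a recent scikit-learn): scoring='neg_mean_squared_error' is just the negative of the MSE, and its square root is the same RMSE quantity that metrics='rmse' reports in xgb.cv.
import numpy as np
from sklearn import model_selection
from sklearn.linear_model import LinearRegression

scores = model_selection.cross_val_score(
    LinearRegression(), X, Y, cv=kfold, scoring='neg_mean_squared_error')
mse = -scores.mean()     # flip the sign: scikit-learn maximizes scores
rmse = np.sqrt(mse)      # same quantity xgb.cv reports under metrics='rmse'
print("MSE: %.3f, RMSE: %.3f" % (mse, rmse))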
The real distinction of interest here is not between "score" and "metric", but between loss (often referred to as cost) and metrics such as the accuracy (for classification problems); this is often a source of confusion among new users. You may find my answers in the following threads useful (ignore the Keras mentions in some titles, the answers are generally applicable):
Loss & accuracy - Are these reasonable learning curves?
How does Keras evaluate the accuracy?
Optimizing for accuracy instead of loss in Keras model

How many hours of training does it take to get decent error in House Prices Dataset using Neural Network

I'm new to Machine Learning and I'm trying to implement linear regression using Keras on this dataset: https://www.kaggle.com/harlfoxem/housesalesprediction . Although I think classical machine learning would be more suited to this problem, I want to use a neural network to learn about it. I have done feature selection and removed some features with high correlation with each other, and now have 8 features left. I have normalized my features, but not the labels. I have read that neural networks generally take time to train; I just want to ask this question to prevent me from investing further time in a model that might not work. Right now, I am training a model with this design:
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

model = Sequential()
model.add(Dense(10, input_shape=(10,), activation=LeakyReLU()))
model.add(Dense(7, activation=LeakyReLU()))
model.add(Dense(1))
model.compile(optimizer="adam", loss="mean_squared_error", metrics=["mean_squared_error"])
and right now, it's been 13,000 epochs and 8 hours, and I'm still getting:
loss: 66127403415.9417 - mean_squared_error: 66127421440.0000 - val_loss: 75086529026.4872 - val_mean_squared_error: 75086495744.0000
Although I can see that the loss has been slowly improving (it started at about 300 billion). So how many hours of training does it take to get a decent error on this dataset? Am I on the right track?

sklearn High score with low performance

Could you kindly help me decide whether I'm hitting a bug or whether the problem is in my implementation?
I have a data set with 5 features and 2000+ observations and I use SVR to do regression tests and select parameters with grid search. If I don't scale my data, then I get a best score of close to zero, but if I do scale it, the best score is around 0.90.
When I manually test the data, it predicts wrong values totally randomly. How can this be? I expect the best score to show how well the trained model validated on held-out data during cross-validation. I suppose I should not get a high score if my model cannot generalize well, should I? Could this be a bug?
Scikit-learn version is 0.19.1 (from the Ubuntu Linux 18.04 x64 LTS package).
Python version is 3.6.7.
Would it be worth upgrading with pip? Any further ideas? Thank you.
Edit: see the following code, which produces a high score but still generalizes badly. Even though this is regression, the score should reflect the difference between the predicted values and the test values:
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# x, y: feature matrix and target used throughout the question
C_range = 2.0 ** np.arange(-5, 15, 2)
gamma_range = 2.0 ** np.arange(-5, 15, 2)
parameters = {"kernel": ["rbf"], "C": C_range, "gamma": gamma_range}
estimator = svm.SVR()
clf = GridSearchCV(estimator, parameters, cv=3, n_jobs=-1, verbose=0)
clf.fit(x, y)
print(clf.best_score_)
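One common way to keep the scaling consistent between the grid search and later manual predictions is to put the scaler and the SVR into a single Pipeline. A minimal sketch reusing x and y from above (StandardScaler and make_pipeline are standard scikit-learn utilities; the parameter ranges are just the ones from the question):
import numpy as np
from sklearn import svm
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

# Scaling is fit inside each CV fold and re-applied automatically at predict time.
pipe = make_pipeline(StandardScaler(), svm.SVR())
param_grid = {
    "svr__kernel": ["rbf"],
    "svr__C": 2.0 ** np.arange(-5, 15, 2),
    "svr__gamma": 2.0 ** np.arange(-5, 15, 2),
}
search = GridSearchCV(pipe, param_grid, cv=3, n_jobs=-1)
search.fit(x, y)
print(search.best_score_)
# search.predict(new_samples) now scales new samples with the same fitted scaler.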

Improving boosting model ,reducing Root mean square error

Hi, I am solving a regression problem. My data set consists of 13 features and 550,068 rows. I tried different models and found that boosting algorithms (i.e. xgboost, catboost, lightgbm) perform well on that big data set. Here is the code:
import numpy as np
import lightgbm as lgb
from sklearn.metrics import mean_squared_error

gbm = lgb.LGBMRegressor(objective='regression', num_leaves=100, learning_rate=0.2, n_estimators=1500)
gbm.fit(x_train, y_train,
        eval_set=[(x_test, y_test)],
        eval_metric='l2_root',
        early_stopping_rounds=10)
y_pred = gbm.predict(x_test, num_iteration=gbm.best_iteration_)
accuracy = round(gbm.score(x_train, y_train) * 100, 2)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
import xgboost as xgb

boost_params = {'eval_metric': 'rmse'}
xgb0 = xgb.XGBRegressor(
    max_depth=8,
    learning_rate=0.1,
    n_estimators=1500,
    objective='reg:linear',
    gamma=0,
    min_child_weight=1,
    subsample=1,
    colsample_bytree=1,
    scale_pos_weight=1,
    seed=27,
    **boost_params)
xgb0.fit(x_train, y_train)
accuracyxgboost = round(xgb0.score(x_train, y_train) * 100, 2)
predict_xgboost = xgb0.predict(x_test)
msexgboost = mean_squared_error(y_test, predict_xgboost)
rmsexgboost = np.sqrt(msexgboost)
from catboost import Pool, CatBoostRegressor

train_pool = Pool(x_train, y_train)
cbm0 = CatBoostRegressor(rsm=0.8, depth=7, learning_rate=0.1,
                         eval_metric='RMSE')
cbm0.fit(train_pool)
test_pool = Pool(x_test)
predict_cat = cbm0.predict(test_pool)
acc_cat = round(cbm0.score(x_train, y_train) * 100, 2)
msecat = mean_squared_error(y_test, predict_cat)
rmsecat = np.sqrt(msecat)
Using the above models I am getting RMSE values of about 2850. Now I want to improve my model performance by reducing the root mean squared error. How can I improve my model performance? As I am new to boosting algorithms, which parameters affect the models, and how can I do hyperparameter tuning for those algorithms (xgboost, catboost, lightgbm)? I am using Windows 10 and an Intel i5 7th generation CPU.
Out of the 3 tools you have tried, CatBoost provides an edge in categorical feature processing (it could also be faster, but I did not see a benchmark demonstrating it, and it does not seem to dominate on Kaggle, so most likely it is not as quick as LightGBM, but I might be wrong in that hypothesis). So I would use it if I had many categorical features in my sample. The other two (LightGBM and XGBoost) provide very similar functionality, and I would suggest choosing one of them and sticking to it. At the moment it seems that LightGBM outperforms XGBoost in training time on CPU while providing very comparable prediction accuracy. See for example the GBM-perf benchmark on GitHub or this in-depth analysis. If you have GPUs available, then XGBoost actually seems to be preferable, judging by the benchmark above.
In general, you can improve your model performance in several ways:
train longer (if early stopping was not triggered, that means there is still room for improvement; if it was, then you cannot improve further by training the chosen model longer with the chosen hyper-parameters)
optimise hyper-parameters (see below)
choose a different model. There is no single silver bullet for all problems. Typically GBMs work very well on large samples of structured data, but for some classes of problems (e.g. linear dependence) it is hard for a GBM to learn how to generalise, as it might require very many splits. So it might be that for your problem a linear model, an SVM or something else will do better out of the box.
Since we have narrowed it down to 2 options, I cannot advise on CatBoost hyper-parameter optimisation, as I have no hands-on experience with it yet. But for LightGBM tuning you can read the official LightGBM docs and these instructions in one of the issues. There are very many good examples of hyper-parameter tuning for LightGBM; I can quickly dig out my kernel on Kaggle: see here. I do not claim it to be perfect, but it is something that is easy for me to find :)
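As a rough illustration (the parameter ranges below are just plausible starting points, not recommendations for your data, and it assumes a scikit-learn version recent enough to have the 'neg_root_mean_squared_error' scorer), a minimal LightGBM tuning sketch with RandomizedSearchCV, reusing x_train and y_train from the question:
import lightgbm as lgb
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    'num_leaves': [31, 63, 127, 255],
    'learning_rate': [0.01, 0.05, 0.1, 0.2],
    'n_estimators': [500, 1000, 1500],
    'min_child_samples': [10, 20, 50],
    'colsample_bytree': [0.7, 0.8, 1.0],
}
search = RandomizedSearchCV(
    lgb.LGBMRegressor(objective='regression'),
    param_distributions,
    n_iter=30,
    scoring='neg_root_mean_squared_error',
    cv=3,
    random_state=42,
    n_jobs=-1)
search.fit(x_train, y_train)
print(search.best_params_, -search.best_score_)  # best RMSE found in CV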
If you are using Intel CPU, then try Intel XGBoost. Intel has powered several optimizations for XGBoost to accelerate gradient boosting models and improve its training and inference capabilities. Also, please check out the article, https://www.intel.com/content/www/us/en/developer/articles/technical/easy-introduction-xgboost-for-intel-architecture.html#gs.q4c6p6 on how to use XGBoost with Intel optimizations.
You can try lasso or ridge; these methods could improve performance.
For hyper-parameter tuning, you can use loops: iterate over candidate values and check where you get the lowest RMSE.
You can also try stacked ensemble techniques.
If you use R, try the h2o.ai package; it gives good results.

multi output regression in xgboost

Is it possible to train a model in XGBoost that has multiple continuous outputs (multi-output regression)?
What would be the objective for training such a model?
Thanks in advance for any suggestions.
My suggestion is to use sklearn.multioutput.MultiOutputRegressor as a wrapper of xgb.XGBRegressor. MultiOutputRegressor trains one regressor per target and only requires that the regressor implements fit and predict, which xgboost happens to support.
import numpy as np
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor

# get some noised linear data
X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:linear')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))  # 0.004, 0.003, 0.005
This is probably the easiest way to regress multi-dimension targets using xgboost as you would not need to change any other part of your code (if you were using the sklearn API originally).
However this method does not leverage any possible relation between targets. But you can try to design a customized objective function to achieve that.
Multiple output regression is now available in the nightly build of XGBoost, and will be included in XGBoost 1.6.0.
See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/multioutput_regression.py for an example.
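For reference, a minimal sketch of that native route (assuming XGBoost >= 1.6, where the scikit-learn wrapper accepts a 2-D target directly; the data is synthetic, just for illustration):
import numpy as np
import xgboost as xgb

X = np.random.random((1000, 10))
y = np.dot(X, np.random.random((10, 3)))  # 3 continuous targets

# With XGBoost >= 1.6 a single XGBRegressor can fit a 2-D target.
reg = xgb.XGBRegressor(tree_method="hist", objective="reg:squarederror")
reg.fit(X, y)
print(reg.predict(X).shape)  # (1000, 3)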
The accepted approach generates a warning: reg:linear is now deprecated in favor of reg:squarederror, so here is an updated version based on @ComeOnGetMe's answer:
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor
# get some noised linear data
X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:squarederror')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))
Out:
[2.00592697e-05 1.50084441e-05 2.01412247e-05]
I would place a comment but I lack the reputation. In addition to @Jesse Anderson's answer: to install the most recent version, select the top link from here:
https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/list.html?prefix=master/
Make sure to select the one for your operating system.
Use pip install to install the wheel, e.g. for macOS:
pip install https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/master/xgboost-1.6.0.dev0%2B4d81c741e91c7660648f02d77b61ede33cef8c8d-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl
You can use linear regression, random forest regressors, and some other related algorithms in scikit-learn to produce multi-output regression. Not sure about XGBoost. The gradient boosting regressor in scikit-learn does not allow multiple outputs. For people who asked when this may be necessary: one example would be forecasting multiple steps of a time series ahead.
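A small sketch of that scikit-learn route (RandomForestRegressor handles a 2-D target natively; the data below is synthetic, just for illustration):
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.random((1000, 10))
y = np.dot(X, np.random.random((10, 3)))  # 3 targets

# RandomForestRegressor supports multi-output regression out of the box.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:5]).shape)  # (5, 3)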
Based on the above discussion, I have extended the univariate XGBoostLSS to a multivariate framework called Multi-Target XGBoostLSS Regression that models multiple targets and their dependencies in a probabilistic regression setting. Code follows soon.
