XGBoost error with multi-class classification

I get an error when running XGBoost multi-class classification. I set my label y as numeric, but it doesn't work.
My code:
xgb_train = xgb.DMatrix(data=data.matrix(trainX), label=trainY)
xgb_test = xgb.DMatrix(data=data.matrix(testX), label=testY)
watchlist = list(train=xgb_train, test=xgb_test)
xgmodel = xgb.train(data=xgb_train,
                    objective = "multi:softmax",
                    num_class = 7,
                    max.depth = 4,
                    nrounds = 6,
                    eta = 0.3,
                    gamma = 0,
                    min_child_weight = 1,
                    verbose = 1,
                    watchlist = watchlist,
                    eval_metric = 'mae')
Error in xgb.iter.update(bst$handle, dtrain, iteration - 1, obj) :
[13:48:41] amalgamation/../src/objective/multiclass_obj.cu:123: SoftmaxMultiClassObj: label must be in [0, num_class).
Stack trace:
[bt] (0) 1 xgboost.so 0x000000013a6a5ff4 dmlc::LogMessageFatal::~LogMessageFatal() + 116
[bt] (1) 2 xgboost.so 0x000000013a7d6efd xgboost::obj::SoftmaxMultiClassObj::GetGradient(xgboost::HostDeviceVector<float> const&, xgboost::MetaInfo const&, int, xgboost::HostDeviceVector<xgboost::detail::GradientPairInternal<float> >*) + 1069
[bt] (2) 3 xgboost.so 0x000000013a77c514 xgboost::LearnerImpl::UpdateOneIter(int, std::__1::shared_ptr<xgboost::DMatrix>) + 788
[bt] (3) 4 xgboost.so 0x000000013a73ff2c XGBoosterUpdateOneIter + 140
[bt] (4) 5 xgboost.so 0x000000013a6a28c3 XGBoosterUpdateOneIter_R + 67
[bt] (5) 6 libR.dylib 0x000000010e50da82 R_doDotCall + 1458
[bt] (6) 7 libR.dyli

Well, I found the answer.
My label contains 7 classes, ranging from 3 to 9. XGBoost expects class labels starting from 0, so I subtracted 3 from the whole label column, and the model ran successfully.
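A minimal sketch of that remapping (shown in Python with NumPy for illustration; in R it is the analogous subtraction on the label vector):
import numpy as np

# hypothetical labels in {3, ..., 9}
train_y = np.array([3, 5, 9, 4, 7, 6, 8])
train_y = train_y - train_y.min()  # now in {0, ..., 6}, valid for num_class = 7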

Related

Error using BayesSearchCV from skopt on RandomForestClassifier

This is the code that reproduces the error:
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from scipy.stats import loguniform
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
import numpy as np
X, y = load_iris(return_X_y=True)

grid = {
    'LogisticRegression': {
        'C': loguniform.rvs(0.1, 10000, size=50),
        'solver': ['lbfgs', 'saga'],
        'penalty': ['l2'],
        'warm_start': [False, True],
        'class_weight': [None, 'balanced'],
        'max_iter': [100, 1000],
        'n_jobs': [10]
    },
    'RandomForestClassifier': {
        'n_estimators': np.random.randint(5, 200, size=10),
        'criterion': ['gini', 'entropy'],
        'max_depth': np.random.randint(5, 50, size=10),
        'min_samples_split': np.random.randint(5, 50, size=10),
        'min_samples_leaf': np.random.randint(5, 50, size=10),
        'max_features': loguniform.rvs(0.2, 1.0, size=5),
        'n_jobs': [10]
    }
}

tuner_params = {
    'cv': 2,
    'n_jobs': 10,
    'scoring': 'roc_auc_ovr',
    'return_train_score': True,
    'refit': True,
    'n_iter': 3
}

clf = 'LogisticRegression'
search_cv = BayesSearchCV(estimator=eval(clf)(), search_spaces=grid[clf], **tuner_params)
search_cv.fit(X, y)

clf = 'RandomForestClassifier'
search_cv = BayesSearchCV(estimator=eval(clf)(), search_spaces=grid[clf], **tuner_params)
search_cv.fit(X, y)
Using BayesSearchCV on LogisticRegression as the classifier gives no error, while using RandomForestClassifier gives the following error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 search_cv = BayesSearchCV( estimator = eval(clf)(), search_spaces = grid[clf], **tuner_params)
----> 2 search_cv.fit(X,y)
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/searchcv.py:466, in BayesSearchCV.fit(self, X, y, groups, callback, **fit_params)
463 else:
464 self.optimizer_kwargs_ = dict(self.optimizer_kwargs)
--> 466 super().fit(X=X, y=y, groups=groups, **fit_params)
468 # BaseSearchCV never ranked train scores,
469 # but apparently we used to ship this (back-compat)
470 if self.return_train_score:
File ~/.conda/envs/meth/lib/python3.9/site-packages/sklearn/model_selection/_search.py:875, in BaseSearchCV.fit(self, X, y, groups, **fit_params)
869 results = self._format_results(
870 all_candidate_params, n_splits, all_out, all_more_results
871 )
873 return results
--> 875 self._run_search(evaluate_candidates)
877 # multimetric is determined here because in the case of a callable
878 # self.scoring the return type is only known after calling
879 first_test_score = all_out[0]["test_scores"]
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/searchcv.py:512, in BayesSearchCV._run_search(self, evaluate_candidates)
508 while n_iter > 0:
509 # when n_iter < n_points points left for evaluation
510 n_points_adjusted = min(n_iter, n_points)
--> 512 optim_result = self._step(
513 search_space, optimizer,
514 evaluate_candidates, n_points=n_points_adjusted
515 )
516 n_iter -= n_points
518 if eval_callbacks(callbacks, optim_result):
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/searchcv.py:400, in BayesSearchCV._step(self, search_space, optimizer, evaluate_candidates, n_points)
397 """Generate n_jobs parameters and evaluate them in parallel.
398 """
399 # get parameter values to evaluate
--> 400 params = optimizer.ask(n_points=n_points)
402 # convert parameters to python native types
403 params = [[np.array(v).item() for v in p] for p in params]
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/optimizer/optimizer.py:395, in Optimizer.ask(self, n_points, strategy)
393 X = []
394 for i in range(n_points):
--> 395 x = opt.ask()
396 X.append(x)
398 ti_available = "ps" in self.acq_func and len(opt.yi) > 0
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/optimizer/optimizer.py:367, in Optimizer.ask(self, n_points, strategy)
336 """Query point or multiple points at which objective should be evaluated.
337
338 n_points : int or None, default: None
(...)
364
365 """
366 if n_points is None:
--> 367 return self._ask()
369 supported_strategies = ["cl_min", "cl_mean", "cl_max"]
371 if not (isinstance(n_points, int) and n_points > 0):
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/optimizer/optimizer.py:434, in Optimizer._ask(self)
430 if self._n_initial_points > 0 or self.base_estimator_ is None:
431 # this will not make a copy of `self.rng` and hence keep advancing
432 # our random state.
433 if self._initial_samples is None:
--> 434 return self.space.rvs(random_state=self.rng)[0]
435 else:
436 # The samples are evaluated starting form initial_samples[0]
437 return self._initial_samples[
438 len(self._initial_samples) - self._n_initial_points]
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/space.py:900, in Space.rvs(self, n_samples, random_state)
897 columns = []
899 for dim in self.dimensions:
--> 900 columns.append(dim.rvs(n_samples=n_samples, random_state=rng))
902 # Transpose
903 return _transpose_list_array(columns)
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/space.py:698, in Categorical.rvs(self, n_samples, random_state)
696 return self.inverse_transform([(choices)])
697 elif self.transform_ == "normalize":
--> 698 return self.inverse_transform(list(choices))
699 else:
700 return [self.categories[c] for c in choices]
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/space.py:685, in Categorical.inverse_transform(self, Xt)
680 """Inverse transform samples from the warped space back into the
681 original space.
682 """
683 # The concatenation of all transformed dimensions makes Xt to be
684 # of type float, hence the required cast back to int.
--> 685 inv_transform = super(Categorical, self).inverse_transform(Xt)
686 if isinstance(inv_transform, list):
687 inv_transform = np.array(inv_transform)
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/space.py:168, in Dimension.inverse_transform(self, Xt)
164 def inverse_transform(self, Xt):
165 """Inverse transform samples from the warped space back into the
166 original space.
167 """
--> 168 return self.transformer.inverse_transform(Xt)
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/transformers.py:309, in Pipeline.inverse_transform(self, X)
307 def inverse_transform(self, X):
308 for transformer in self.transformers[::-1]:
--> 309 X = transformer.inverse_transform(X)
310 return X
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/transformers.py:216, in LabelEncoder.inverse_transform(self, Xt)
214 else:
215 Xt = np.asarray(Xt)
--> 216 return [
217 self.inverse_mapping_[int(np.round(i))] for i in Xt
218 ]
File ~/.conda/envs/meth/lib/python3.9/site-packages/skopt/space/transformers.py:217, in <listcomp>(.0)
214 else:
215 Xt = np.asarray(Xt)
216 return [
--> 217 self.inverse_mapping_[int(np.round(i))] for i in Xt
218 ]
KeyError: 9
My versions:
python: 3.9.12
sklearn: 1.1.1
skopt: 0.9.0
The same error happens when using XGBClassifier or GradientBoostingClassifier, while there is no error using SVC or KNeighborsClassifier.
I believe this is related to how skopt encodes the hyperparameter space: duplicate points in the randomly generated lists seem to be required to trigger the error, though sometimes the fit succeeds regardless. Either the duplicates collide, or they cause the search space to be processed incorrectly.
At least the issue stopped reproducing for me after changing all the random lists to list(range(...)), as sketched below. It might be worth a bug report.
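For example, a version of the random-forest search space without randomly generated (and possibly duplicated) values might look like this (a sketch; the step sizes are illustrative):
grid['RandomForestClassifier'] = {
    'n_estimators': list(range(5, 200, 20)),
    'criterion': ['gini', 'entropy'],
    'max_depth': list(range(5, 50, 5)),
    'min_samples_split': list(range(5, 50, 5)),
    'min_samples_leaf': list(range(5, 50, 5)),
    'max_features': [0.2, 0.4, 0.6, 0.8, 1.0],
    'n_jobs': [10]
}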

Define a function : Fibonacci Sequence

Define a function to implement Fibonacci Sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34. Please use the function output first 20 figures of Fibonacci Sequence.
Here is a Python implementation (fib(5000) prints every Fibonacci number below 5000, which happens to be exactly the first 20 terms starting from 0):
def fib(n):
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

fib(5000)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181
A recursive implementation with memoization:
memo = [-1] * 21
memo[0] = 0
memo[1] = 1
print(memo[0], end=' ')
print(memo[1], end=' ')

def fibrec(n):
    if memo[n] == -1:
        memo[n] = fibrec(n - 2) + fibrec(n - 1)
        print(memo[n], end=' ')
    return memo[n]

fibrec(20)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765
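Since the task asks for the first 20 terms specifically, a count-based variant avoids hard-coding the 5000 bound (a sketch; it starts from 1, 1 as in the question's sequence):
def fib_first(n):
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(*fib_first(20))
# 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765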

Decode UDP message with LUA

I'm relatively new to lua and programming in general (self taught), so please be gentle!
Anyway, I wrote a lua script to read a UDP message from a game. The structure of the message is:
DATAxXXXXaaaaBBBBccccDDDDeeeeFFFFggggHHHH
DATAx = a 4-letter ID (DATA) followed by a control character x
XXXX = an integer giving the group of the data (the groups are known)
aaaa...HHHH = 8 single-precision floating-point numbers
Those eight numbers are the ones I need to decode.
If I print the message as received, it's something like:
DATA*{V???A?A?...etc.
Using string.byte(), I'm getting a stream of bytes like this (I have "formatted" the bytes to reflect the structure above):
68 65 84 65/42/20 0 0 0/237 222 28 66/189 59 182 65/107 42 41 65/33 173 79 63/0 0 128 63/146 41 41 65/0 0 30 66/0 0 184 65
The first 5 bytes are of course the DATA*. The next 4 are the 20th group of data. The remaining bytes are the ones I need to decode; they correspond to these values:
237 222 28 66 = 39.218
189 59 182 65 = 22.779
107 42 41 65 = 10.573
33 173 79 63 = 0.8114
0 0 128 63 = 1.0000
146 41 41 65 = 10.573
0 0 30 66 = 39.500
0 0 184 65 = 23.000
I've found C# code that does the decoding with BitConverter.ToSingle(), but I haven't found anything similar for Lua.
Any idea?
What Lua version do you have?
This code works in Lua 5.3
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
-- Read two float values starting from position 10 in the string
print(string.unpack("<ff", str, 10)) --> 39.217700958252 22.779169082642 18
-- 18 (third returned value) is the next position in the string
For Lua 5.1 you have to write a special function (or borrow one from François Perrad's git repo):
local function binary_to_float(str, pos)
    local b1, b2, b3, b4 = str:byte(pos, pos+3)
    local sign = b4 > 0x7F and -1 or 1
    local expo = (b4 % 0x80) * 2 + math.floor(b3 / 0x80)
    local mant = ((b3 % 0x80) * 0x100 + b2) * 0x100 + b1
    local n
    if mant + expo == 0 then
        n = sign * 0.0
    elseif expo == 0xFF then
        n = (mant == 0 and sign or 0) / 0
    else
        n = sign * (1 + mant / 0x800000) * 2.0^(expo - 0x7F)
    end
    return n
end
local str = "DATA*\20\0\0\0\237\222\28\66\189\59\182\65..."
print(binary_to_float(str, 10)) --> 39.217700958252
print(binary_to_float(str, 14)) --> 22.779169082642
It's the little-endian byte order of an IEEE-754 single-precision value:
E.g., 0 0 128 63 is:
00111111 10000000 00000000 00000000
(63) (128) (0) (0)
Understanding why that equals 1 requires the very basics of IEEE-754 representation, namely its use of an exponent and a mantissa. See here to start.
See @Egor's answer above for how to use string.unpack() in Lua 5.3 and for one possible implementation you could use in earlier versions.
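For a quick cross-check outside Lua, the same little-endian single-precision decode can be reproduced in Python with the standard struct module (a sketch using byte values from the question):
import struct

# '<f' = little-endian IEEE-754 single precision
print(struct.unpack('<f', bytes([0, 0, 128, 63]))[0])     # 1.0
print(struct.unpack('<f', bytes([237, 222, 28, 66]))[0])  # ~39.21770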

LSTM predicting time series yields odd results

I'm trying to predict time series data for the next few days by looking at the past few days, using Keras. My label data consists of target values for multiple future days, so the regression model has multiple output neurons (the "direct approach" to time series forecasting).
Here is the test data with predictions for 10 days, using 60 days of history:
[Figure: 10-day prediction for test data]
As you can see, the predicted values for all future days are about the same. I've spent quite some time on this, and must admit that I'm probably missing something about LSTMs...
Here is the training data with predictions:
[Figure: 10-day prediction for training data]
In order to confirm that I'm preparing data properly, I've created a "tracking data set" which I used to visualize data transformations. Here it is...
Data set:
Open,High,Low,Close,Volume,OpenInt
111,112,113,114,115,0
121,122,123,124,125,0
131,132,133,134,135,0
141,142,143,144,145,0
151,152,153,154,155,0
161,162,163,164,165,0
171,172,173,174,175,0
181,182,183,184,185,0
191,192,193,194,195,0
201,202,203,204,205,0
211,212,213,214,215,0
221,222,223,224,225,0
231,232,233,234,235,0
241,242,243,244,245,0
251,252,253,254,255,0
261,262,263,264,265,0
271,272,273,274,275,0
281,282,283,284,285,0
291,292,293,294,295,0
Training set, using 2 days of history to predict 3 days of future values (I tried different numbers of history and future days, and it all makes sense to me), without feature scaling so the transformations stay visible:
X train (6, 2, 5)
[[[111 112 113 114 115]
[121 122 123 124 125]]
[[121 122 123 124 125]
[131 132 133 134 135]]
[[131 132 133 134 135]
[141 142 143 144 145]]
[[141 142 143 144 145]
[151 152 153 154 155]]
[[151 152 153 154 155]
[161 162 163 164 165]]
[[161 162 163 164 165]
[171 172 173 174 175]]]
Y train (6, 3)
[[131 141 151]
[141 151 161]
[151 161 171]
[161 171 181]
[171 181 191]
[181 191 201]]
Test set
X test (6, 2, 5)
[[[201 202 203 204 205]
[211 212 213 214 215]]
[[211 212 213 214 215]
[221 222 223 224 225]]
[[221 222 223 224 225]
[231 232 233 234 235]]
[[231 232 233 234 235]
[241 242 243 244 245]]
[[241 242 243 244 245]
[251 252 253 254 255]]
[[251 252 253 254 255]
[261 262 263 264 265]]]
Y test (6, 3)
[[221 231 241]
[231 241 251]
[241 251 261]
[251 261 271]
[261 271 281]
[271 281 291]]
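For reference, here is a minimal sketch of the windowing that produces arrays with these shapes (a hypothetical helper; it assumes a 2-D array data of shape (days, features) and that the target is the first column, as in the tracking set above):
import numpy as np

def make_windows(data, history=2, horizon=3, target_col=0):
    X, y = [], []
    for i in range(len(data) - history - horizon + 1):
        X.append(data[i:i + history])  # the past rows, all features
        y.append(data[i + history:i + history + horizon, target_col])  # the next target values
    return np.array(X), np.array(y)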
Model:
def CreateRegressor(self,
                    optimizer='adam',
                    activation='tanh',  # RNN activation
                    init_mode='glorot_uniform',
                    hidden_neurons=50,
                    dropout_rate=0.0,
                    weight_constraint=0,
                    stateful=False,
                    # SGD parameters
                    learn_rate=0.01,
                    momentum=0):
    kernel_constraint = maxnorm(weight_constraint) if weight_constraint > 0 else None
    model = Sequential()
    model.add(LSTM(units=hidden_neurons, activation=activation, kernel_initializer=init_mode,
                   kernel_constraint=kernel_constraint, return_sequences=True,
                   input_shape=(self.X_train.shape[1], self.X_train.shape[2]), stateful=stateful))
    model.add(Dropout(dropout_rate))
    model.add(LSTM(units=hidden_neurons, activation=activation, kernel_initializer=init_mode,
                   kernel_constraint=kernel_constraint, return_sequences=True, stateful=stateful))
    model.add(Dropout(dropout_rate))
    model.add(LSTM(units=hidden_neurons, activation=activation, kernel_initializer=init_mode,
                   kernel_constraint=kernel_constraint, return_sequences=True, stateful=stateful))
    model.add(Dropout(dropout_rate))
    model.add(LSTM(units=hidden_neurons, activation=activation, kernel_initializer=init_mode,
                   kernel_constraint=kernel_constraint, return_sequences=False, stateful=stateful))
    model.add(Dropout(dropout_rate))
    model.add(Dense(units=self.y_train.shape[1]))
    if optimizer == 'SGD':
        optimizer = SGD(lr=learn_rate, momentum=momentum)
    model.compile(optimizer=optimizer, loss='mean_squared_error')
    return model
...which I create with these params:
self.CreateRegressor(optimizer = 'adam', hidden_neurons = 100)
... and then fit like this:
self.regressor.fit(self.X_train, self.y_train, epochs=100, batch_size=32)
... and predict:
y_pred = self.regressor.predict(X_test)
... or
y_pred_train = self.regressor.predict(X_train)
What am I missing?

Trying to simulate a neural network in MATLAB by myself

I tried to create a neural network to estimate y = x^2, so I created a fitting neural network and gave it some samples of inputs and outputs. I then tried to rebuild this network in C++, but the result is different from what I expected.
With the following inputs:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1
-2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71
and the following outputs:
0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400
441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296
1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500
2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096
4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144
169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841
900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849
1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249
3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041
I used the fitting tool network, with matrix rows. Training is 70%, validation 15%, and testing 15%. The number of hidden neurons is two. Then on the command line I ran:
purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})
Other information :
My net.b{1} is: -1.16610230053776 1.16667147712026
My net.b{2} is: 51.3266249426358
And net.IW{1} is: 0.344272596370387 0.344111217766824
net.LW{2} is: 31.7635369693519 -31.8082184881063
When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?
I found the Stack Overflow post Neural network in MATLAB that describes a problem like mine, but with a small difference: in that problem the ranges of the input and output are the same, whereas in my problem they are not. That solution says I need to scale the results, but how can I scale mine?
You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans =
'removeconstantrows' 'mapminmax'
>> net.outputs{2}.processFcns
ans =
'removeconstantrows' 'mapminmax'
The second preprocessing function applied to both input/output is mapminmax with the following parameters:
>> net.inputs{1}.processParams{2}
ans =
ymin: -1
ymax: 1
>> net.outputs{2}.processParams{2}
ans =
ymin: -1
ymax: 1
to map both into the range [-1,1] (prior to training).
This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.
One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
Also pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (this is probably also affected by the randomness aspect I mentioned).
Here is an example to illustrate the idea (adapted from another post of mine):
%%# data
x = linspace(-71,71,200); %# 1D input
y_model = x.^2; %# model
y = y_model + 10*randn(size(x)).*x; %# add some noise
%%# create ANN, train, simulate
net = fitnet(2); %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);
%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on
%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);
%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} ); %# output layer
%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);
%# compare against MATLAB output
max( abs(out - y_hat) ) %# this should be zero (or in the order of `eps`)
I opted to use the mapminmax function, but you could have done it manually as well. The formula is a pretty simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
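As a quick numeric sketch of that mapping (shown in Python; the function names are made up for illustration), mapping x = 3 from the question's input range [-71, 71] into [-1, 1] and back:
def map_minmax(x, xmin, xmax, ymin=-1.0, ymax=1.0):
    # forward mapping: [xmin, xmax] -> [ymin, ymax]
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

def map_minmax_inv(y, xmin, xmax, ymin=-1.0, ymax=1.0):
    # inverse mapping: [ymin, ymax] -> [xmin, xmax]
    return (y - ymin) * (xmax - xmin) / (ymax - ymin) + xmin

scaled = map_minmax(3, -71, 71)                  # ~0.0423
print(scaled, map_minmax_inv(scaled, -71, 71))   # recovers 3.0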
