Testing accuracy more than training accuracy - machine-learning

I am building a tuned random forest model for multiclass classification, and I'm getting the following results:
Training accuracy (AUC): 0.9921996
Testing accuracy (AUC): 0.992237664
I saw a related question on this site, and the common answer seems to be that the dataset must be small and the model got lucky. But in my case I have about 300k training data points and 100k testing data points, and my classes are well balanced:
> summary(train$Bucket)
         0    1 TO 30 121 TO 150 151 TO 180 181 TO 365   31 TO 60 366 TO 540 541 TO 730   61 TO 90
    166034      32922       4168       4070      15268      23092       8794       6927      22559
     730 +  91 TO 120
     20311      11222
> summary(test$Bucket)
         0    1 TO 30 121 TO 150 151 TO 180 181 TO 365   31 TO 60 366 TO 540 541 TO 730   61 TO 90
     55344      10974       1389       1356       5090       7698       2932       2309       7520
     730 +  91 TO 120
      6770       3741
Is it possible for a model to fit this well on such a large test set? Is there anything I can do to cross-verify that my model is indeed fitting this well?
My complete code:
library(caTools)  # sample.split
library(mlr)      # makeClassifTask, tuneParams, ...

# 75/25 stratified split on the target
split = sample.split(Book2$Bucket, SplitRatio = 0.75)
train = subset(Book2, split == TRUE)
test  = subset(Book2, split == FALSE)

traintask <- makeClassifTask(data = train, target = "Bucket")
rf <- makeLearner("classif.randomForest")

# hyperparameter search space
params <- makeParamSet(makeIntegerParam("mtry", lower = 2, upper = 10),
                       makeIntegerParam("nodesize", lower = 10, upper = 50))
# set validation strategy
rdesc <- makeResampleDesc("CV", iters = 5L)
# set optimization technique
ctrl <- makeTuneControlRandom(maxit = 5L)
# start tuning
tune <- tuneParams(learner = rf, task = traintask, resampling = rdesc,
                   measures = list(acc), par.set = params, control = ctrl, show.info = TRUE)
rf.tree <- setHyperPars(rf, par.vals = tune$x)
tune$y

# train on the full training set and evaluate on the held-out test set
r <- train(rf.tree, traintask)
getLearnerModel(r)
testtask <- makeClassifTask(data = test, target = "Bucket")
rfpred <- predict(r, testtask)
performance(rfpred, measures = list(mmce, acc))

The difference is on the order of 1e-4; nothing is wrong. It is ordinary statistical fluctuation (variance of the estimate), nothing to worry about: a gap of 0.0001 on 100,000 test points corresponds to about 0.0001 * 100,000 = 10 samples, i.e. 10 samples out of 100k.
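For a quick sanity check you can put a number on that fluctuation: the binomial standard error of an accuracy estimate on n points is sqrt(p(1-p)/n). A minimal sketch in R, using the figures from the question (p_hat and n_test are names introduced here):

p_hat  <- 0.9922                          # observed test accuracy
n_test <- 100000                          # test set size
se <- sqrt(p_hat * (1 - p_hat) / n_test)  # ~0.00028
abs(0.992237664 - 0.9921996) < se         # TRUE: the train/test gap is well within one SE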

Related

Decision Tree - Exporting image via Graphviz error

I'm trying to build a decision tree using grid search and a pipeline, but I get an error when I try to export the image using graphviz. I looked online and couldn't find anything; one potential problem would have been if I hadn't used the best_estimator_ instance, but I did in this case.
Everything works (getting accuracy and other metrics) except the graph-export part.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn import tree
from sklearn.tree import export_graphviz

def TreeOpt(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
    std_scl = StandardScaler()
    dec_tree = tree.DecisionTreeClassifier()
    pipe = Pipeline(steps=[('std_slc', std_scl),
                           ('dec_tree', dec_tree)])
    # grid over split criterion and tree depth
    criterion = ['gini', 'entropy']
    max_depth = list(range(1, 15))
    parameters = dict(dec_tree__criterion=criterion,
                      dec_tree__max_depth=max_depth)
    tree_gs = GridSearchCV(pipe, parameters)
    tree_gs.fit(X_train, y_train)
    export_graphviz(
        tree_gs.best_estimator_,
        out_file="dec_tree.dot",
        feature_names=None,
        class_names=None,
        filled=True)
But I get:
<ipython-input-2-bb91ec6ba0d9> in <module>
37 filled=True)
38
---> 39 DecTreeOptimizer(X = df.drop(['quality'], axis=1), y = df.quality)
40
<ipython-input-2-bb91ec6ba0d9> in DecTreeOptimizer(X, y)
30 print("Best score: " + str(tree_GS.best_score_))
31
---> 32 export_graphviz(
33 tree_GS.best_estimator_,
34 out_file=("dec_tree.dot"),
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\tree\_export.py in export_graphviz(decision_tree, out_file, max_depth, feature_names, class_names, label, filled, leaves_parallel, impurity, node_ids, proportion, rotate, rounded, special_characters, precision)
767 """
768
--> 769 check_is_fitted(decision_tree)
770 own_file = False
771 return_string = False
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~\AppData\Local\Programs\Python\Python39\lib\site-packages\sklearn\utils\validation.py in check_is_fitted(estimator, attributes, msg, all_or_any)
1096
1097 if not attrs:
-> 1098 raise NotFittedError(msg % {'name': type(estimator).__name__})
1099
1100
NotFittedError: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
After long searches, I finally found the answer here: Plot best decision tree with pipeline and GridsearchCV.
The best_estimator_ attribute returns the whole fitted Pipeline rather than the tree itself, so I just had to index into it: best_estimator_[1] (and then I found a whole other lot of problems with my code, but that's part 2).
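For completeness, a minimal sketch of the fix, reusing the names from the question (the named_steps lookup is an equivalent alternative):

# best_estimator_ is the fitted Pipeline; grab the tree step before exporting
best_tree = tree_gs.best_estimator_[1]
# equivalently: best_tree = tree_gs.best_estimator_.named_steps['dec_tree']
export_graphviz(
    best_tree,
    out_file="dec_tree.dot",
    filled=True)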
I will leave this here in case anyone else is going to need it. Cheers!

Tensorflow. Switching from BasicRNNCell to LSTMCell

I have built an RNN with BasicRNNCell. Now I want to use LSTMCell, but the switch does not seem trivial. What should I change?
First I define all the placeholders and variables:
import tensorflow as tf
import numpy as np

X_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length, embedding_size])
Y_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
init_state = tf.placeholder(tf.float32, [batch_size, state_size])
W = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)
W2 = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b2 = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)
Then I unstack the labels:
labels_series = tf.transpose(Y_placeholder)
labels_series = tf.unstack(Y_placeholder, axis=1)  # overwrites the line above
inputs_series = X_placeholder
Then I define my RNN:
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
The error that I get is:
InvalidArgumentError Traceback (most recent call last)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
669 node_def_str, input_shapes, input_tensors, input_tensors_as_shapes,
--> 670 status)
671 except errors.InvalidArgumentError as err:
/home/deepnlp2017/anaconda3/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
468 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 469 pywrap_tensorflow.TF_GetCode(status))
470 finally:
InvalidArgumentError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-19-2ac617f4dde4> in <module>()
4 #cell = tf.contrib.rnn.BasicRNNCell(state_size)
5 cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
----> 6 states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in dynamic_rnn(cell, inputs, sequence_length, initial_state, dtype, parallel_iterations, swap_memory, time_major, scope)
543 swap_memory=swap_memory,
544 sequence_length=sequence_length,
--> 545 dtype=dtype)
546
547 # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _dynamic_rnn_loop(cell, inputs, initial_state, parallel_iterations, swap_memory, sequence_length, dtype)
710 loop_vars=(time, output_ta, state),
711 parallel_iterations=parallel_iterations,
--> 712 swap_memory=swap_memory)
713
714 # Unpack final output if not using output tuples.
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name)
2624 context = WhileContext(parallel_iterations, back_prop, swap_memory, name)
2625 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, context)
-> 2626 result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
2627 return result
2628
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants)
2457 self.Enter()
2458 original_body_result, exit_vars = self._BuildLoop(
-> 2459 pred, body, original_loop_vars, loop_vars, shape_invariants)
2460 finally:
2461 self.Exit()
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
2407 structure=original_loop_vars,
2408 flat_sequence=vars_for_body_with_tensor_arrays)
-> 2409 body_result = body(*packed_vars_for_body)
2410 if not nest.is_sequence(body_result):
2411 body_result = [body_result]
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _time_step(time, output_ta_t, state)
695 skip_conditionals=True)
696 else:
--> 697 (output, new_state) = call_cell()
698
699 # Pack state if using state tuples
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in <lambda>()
681
682 input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
--> 683 call_cell = lambda: cell(input_t, state)
684
685 if sequence_length is not None:
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py in __call__(self, inputs, state, scope)
182 i, j, f, o = array_ops.split(value=concat, num_or_size_splits=4, axis=1)
183
--> 184 new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
185 self._activation(j))
186 new_h = self._activation(new_c) * sigmoid(o)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
882 if not isinstance(y, sparse_tensor.SparseTensor):
883 y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
--> 884 return func(x, y, name=name)
885
886 def binary_op_wrapper_sparse(sp_x, y):
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
1103 is_tensor_y = isinstance(y, ops.Tensor)
1104 if is_tensor_y:
-> 1105 return gen_math_ops._mul(x, y, name=name)
1106 else:
1107 assert isinstance(y, sparse_tensor.SparseTensor) # Case: Dense * Sparse.
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py in _mul(x, y, name)
1623 A `Tensor`. Has the same type as `x`.
1624 """
-> 1625 result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
1626 return result
1627
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in apply_op(self, op_type_name, name, **keywords)
761 op = g.create_op(op_type_name, inputs, output_types, name=scope,
762 input_types=input_types, attrs=attr_protos,
--> 763 op_def=op_def)
764 if output_structure:
765 outputs = op.outputs
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
2395 original_op=self._default_original_op, op_def=op_def)
2396 if compute_shapes:
-> 2397 set_shapes_for_outputs(ret)
2398 self._add_op(ret)
2399 self._record_op_seen_by_control_dependencies(ret)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
1755 shape_func = _call_cpp_shape_fn_and_require_op
1756
-> 1757 shapes = shape_func(op)
1758 if shapes is None:
1759 raise RuntimeError(
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in call_with_requiring(op)
1705
1706 def call_with_requiring(op):
-> 1707 return call_cpp_shape_fn(op, require_shape_fn=True)
1708
1709 _call_cpp_shape_fn_and_require_op = call_with_requiring
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in call_cpp_shape_fn(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
608 res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
609 input_tensors_as_shapes_needed,
--> 610 debug_python_shape_fn, require_shape_fn)
611 if not isinstance(res, dict):
612 # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
673 missing_shape_fn = True
674 else:
--> 675 raise ValueError(err.message)
676
677 if missing_shape_fn:
ValueError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].
You should consider including the error trace; otherwise it is hard (or impossible) to help.
I reproduced the situation and found that the issue was coming from state unpacking, i.e. the line c, h = state.
Try setting state_is_tuple to False, i.e.
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=False)
I'm not sure why this is happening. Are you loading a previous model? What is your TensorFlow version?
More information on TensorFlow RNN cells:
I would suggest you take a look at the WildML post, section "RNN CELLS, WRAPPERS AND MULTI-LAYER RNNS".
It states that:
BasicRNNCell – A vanilla RNN cell.
GRUCell – A Gated Recurrent Unit cell.
BasicLSTMCell – An LSTM cell based on Recurrent Neural Network Regularization. No peephole connection or cell clipping.
LSTMCell – A more complex LSTM cell that allows for optional peephole connections and cell clipping.
MultiRNNCell – A wrapper to combine multiple cells into a multi-layer cell.
DropoutWrapper – A wrapper to add dropout to input and/or output connections of a cell.
Given this, I would suggest you switch from BasicRNNCell to BasicLSTMCell. "Basic" here means "use it unless you know what you are doing". If you want to try LSTMs without going into the details, that's the way to go: just replace the cell and voilà!
If not, share some of your code and the error.
Hope it helps.
The problem seems to be with the init_state variable.
A basic RNN cell has a single state tensor, while an LSTM cell has both a cell state and a hidden state. Specifying state_is_tuple=False concatenates the two into one tensor, thereby doubling the size of what you declared for init_state. That is exactly what the error reports: shapes [32,50] vs [32,100], i.e. state_size = 50 against 2 * state_size = 100.
To avoid this, use the cell's built-in zero_state method, which initializes the state with the correct size without you having to worry about the difference:
init_state = cell.zero_state(batch_size, dtype)
Of course, this will have to be placed after the line where the cell is built.
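Putting it together, a minimal sketch of the fixed graph definition (TF 1.x contrib API, reusing batch_size, state_size, and inputs_series from the question):

# Build the cell first, then let it create a correctly sized initial state
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=False)
init_state = cell.zero_state(batch_size, tf.float32)  # shape [batch_size, 2 * state_size]
states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state=init_state)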

Recursive daily forecast

I am doing a recursive one-step-ahead daily forecast with different time series models for 2010. For example:
library(xts)       # xts()
library(forecast)  # forecast()

set.seed(1096)
Datum <- seq(as.Date("2008/1/1"), as.Date("2010/12/31"), "days")
r <- rnorm(1096)
y <- xts(r, order.by = Datum)
List.y <- vector(mode = "list", length = 365L)
for (i in 1:365) {
  # grow the estimation window by one day, refit, forecast one step ahead
  window.y <- window(y[, 1], end = as.Date("2009-12-30") + i)
  fit.y <- arima(window.y, order = c(5, 0, 0))
  List.y[[i]] <- forecast(fit.y, h = 1)
}
The list looks like this:
List.y
[[1]]
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
732 -0.0506346 -1.333437 1.232168 -2.012511 1.911242
[[2]]
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
733 0.03905936 -1.242889 1.321008 -1.921511 1.99963
....
[[365]]
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
1096 0.09242849 -1.1794 1.364257 -1.852665 2.037522
And now I want to extract only the point forecast for each period [1]–[365] so I can work with the forecast data. However, I am not sure how to do this.
I tried
sa = sapply(List.y[1:365], `[`, 4)
but then I only get this:
$mean
Time Series:
Start = 732
End = 732
Frequency = 1
[1] -0.0506346
$mean
Time Series:
Start = 733
End = 733
Frequency = 1
[1] 0.03905936
...
$mean
Time Series:
Start = 1096
End = 1096
Frequency = 1
[1] 0.09242849
but I want all 365 [1] values in a numeric vector or something, so I can work with the data.
Just use this: sa2 = as.numeric(sa). sa2 will be a numeric vector of the forecast means.
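Alternatively, the point forecasts can be pulled straight out of the list in one step (a sketch using the List.y from the question; point_forecasts is a name introduced here):

# each element of List.y is a forecast object; $mean holds the point forecast
point_forecasts <- sapply(List.y, function(f) as.numeric(f$mean))
head(point_forecasts)  # numeric vector of length 365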

Extracting sampled Time Points

I have a MATLAB curve (concentration vs. time) from which I would like to extract concentration values at 17 different time samples.
The time points, in minutes, are:
t = 0, 0.25, 0.50, 1, 1.5, 2, 3, 4, 9, 14, 19, 24, 29, 34, 39, 44, 49
Following is the function I have written to plot the curve:
function c_t = output_function_constrainedK2(t, a1, a2, a3, b1, b2, b3, td, tmax, k1, k2, k3)
    % piecewise model: input term convolved with an exponential kernel
    K_1 = (k1*k2)/(k2+k3);
    K_2 = (k1*k3)/(k2+k3);
    DV_free = k1/(k2+k3);

    c_t = zeros(size(t));

    % rising segment between the delay td and the peak tmax
    ind = (t > td) & (t < tmax);
    c_t(ind) = conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)), ...
                    (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    % tri-exponential decay after the peak
    ind = (t >= tmax);
    c_t(ind) = conv((a1*exp(-b1*(t(ind) - tmax)) + a2*exp(-b2*(t(ind) - tmax))) ...
                    + a3*exp(-b3*(t(ind) - tmax)), ...
                    (K_1*exp(-(k2+k3)*t(ind)+K_2)), 'same');

    plot(t, c_t);
    axis([0 50 0 1400]);
    xlabel('Time [mins]');
    ylabel('Concentration [MBq]');
    title('Model: Constrained K2');
end
If possible, kindly suggest how I could alter the above function so that I can obtain concentration values at the 17 time points stated above.
These are the input values I used to produce the curve:
output_function_constrainedK2(0:0.1:50, 2501, 18500, 65000, 0.5, 0.7, 0.3, 3, 8, 0.014, 0.051, 0.07)
This will give you the concentration values at the time points you wanted. You will have to put it inside the output_function_constrainedK2 function so that you can access the variables t and c_t:
T = [0 0.25 0.50 1 1.5 2 3 4 9 14 19 24 29 34 39 44 49];
concentration = interp1(t, c_t, T)
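Alternatively, since the function already returns c_t, the sampling can be done from the caller without modifying the function (a sketch; conc_at_T is a name introduced here):

% evaluate the model on a fine grid, then interpolate at the 17 sample times
t = 0:0.1:50;
c_t = output_function_constrainedK2(t, 2501, 18500, 65000, 0.5, 0.7, 0.3, 3, 8, 0.014, 0.051, 0.07);
T = [0 0.25 0.50 1 1.5 2 3 4 9 14 19 24 29 34 39 44 49];
conc_at_T = interp1(t, c_t, T)  % concentration at each requested time point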

Negative probability in GMM

I am confused. I have tested a program with the following MATLAB code:
% gaussmix / gaussmixp are from the VOICEBOX toolbox

% train a 2-component GMM on the "train" features
feature_train = [1 1 2 1.2 1 1 700 709 708 699 678];
No_of_Clusters = 2;
No_of_Iterations = 10;
[m, v, w] = gaussmix(feature_train, [], No_of_Iterations, No_of_Clusters);

% train a 3-component GMM on the "ubm" features
feature_ubm = [1000 1001 1002 1002 1000 1060 70 79 78 99 78 23 32 33 23 22 30];
No_of_Clusters = 3;
No_of_Iterations = 10;
[mubm, vubm, wubm] = gaussmix(feature_ubm, [], No_of_Iterations, No_of_Clusters);

% score the test features under both models (lp_* are log-probabilities)
feature_test = [2 2 2.2 3 1 600 650 750 800 658];
[lp_train, rp, kh, kp] = gaussmixp(feature_test, m, v, w);
[lp_ubm, rp, kh, kp] = gaussmixp(feature_test, mubm, vubm, wubm);
However, the result puzzles me, because feature_test should be classified as feature_train, not feature_ubm. As you can see below, the probability under feature_ubm seems higher than under feature_train!?
Can anyone explain to me what the problem is?
Is the problem related to the gaussmixp and gaussmix MATLAB functions?
sum(lp_ubm)
ans =
-3.4108e+06
sum(lp_train)
ans =
-1.8658e+05
You see exactly the opposite. Although the ubm value is larger in absolute terms, these are log-probabilities and therefore negative numbers, so
sum(lp_train) > sum(lp_ubm)
hence
P(test|train) > P(test|ubm)
So your test chunk is correctly classified as train, not as ubm.
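To spell the comparison out with the numbers from the question (a minimal sketch; the decision rule simply compares the two log-likelihood sums):

% less negative log-likelihood = higher likelihood:
%   sum(lp_train) = -1.8658e+05  >  sum(lp_ubm) = -3.4108e+06
if sum(lp_train) > sum(lp_ubm)
    disp('test is classified as train')
else
    disp('test is classified as ubm')
end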
