Decision Tree Performance, ML - machine-learning

If we don't give any constraints such as max_depth or a minimum number of samples per node, can a decision tree always reach 0 training error? Or does it depend on the dataset? What about the dataset shown?
Edit: it is possible to have a split that results in lower accuracy than the parent node, right? According to decision tree theory, it should stop splitting there, even if the end result after several more splits could be good. Am I correct?

A decision tree will always find a split that improves the accuracy/score.
For example, I've built a decision tree on data similar to yours:
A decision tree can get to 100% accuracy on any data set where there are no 2 samples with the same feature values but different labels.
This is one reason why decision trees tend to overfit, especially with many features or with categorical data that has many possible values.
Indeed, sometimes we prevent a split in a node if the improvement created by the split is not high enough. This is problematic because some relationships, like y = x_1 XOR x_2, cannot be learned by trees with this limitation: no single split improves purity on its own, even though a couple of splits together classify the data perfectly.
So commonly, a tree doesn't stop because it cannot improve the model on the training data.
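As a concrete illustration of the XOR point (a minimal sketch assuming scikit-learn and numpy; the 0.1 threshold is arbitrary): neither single-feature split reduces impurity on its own, so requiring every split to show a minimum improvement blocks the tree entirely, while an unrestricted tree expresses XOR with two levels of splits.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# XOR: y = x1 xor x2 -- no single axis-aligned split reduces impurity
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 25)  # 100 samples
y = X[:, 0] ^ X[:, 1]

# Requiring every split to give a noticeable impurity decrease blocks the
# first (useless-looking) split, so the tree cannot learn XOR at all.
blocked = DecisionTreeClassifier(min_impurity_decrease=0.1, random_state=0).fit(X, y)
print(blocked.get_depth(), blocked.score(X, y))   # depth 0, ~50% accuracy

# Without that restriction, two levels of splits express XOR exactly.
free = DecisionTreeClassifier(random_state=0).fit(X, y)
print(free.get_depth(), free.score(X, y))         # depth 2, 100% accuracy
```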
The reason you don't see trees with 100% accuracy is that we use techniques to reduce overfitting, such as:
Tree pruning, like this relatively new example. This basically means that you build your entire tree, but then go back and prune nodes that did not contribute enough to the model's performance.
Using a gain ratio instead of the raw gain for the splits. The gain is normalized by the split's intrinsic information, so an even 50%-50% split has to earn a larger raw gain than a skewed 10%-90% split before it is preferred.
Setting hyperparameters, such as max_depth and min_samples_leaf, to prevent the tree from splitting too much (a short sketch of this follows the list).
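Here is the kind of sketch referred to above (assuming scikit-learn; the random data is only there to show the mechanics and is not taken from the question): with no two identical rows, an unconstrained tree memorizes pure-noise labels perfectly, while max_depth and min_samples_leaf stop it early.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # no two rows are identical
y = rng.integers(0, 2, size=500)        # labels are pure noise

# Unconstrained: keeps splitting until every leaf is pure.
full = DecisionTreeClassifier(random_state=0).fit(X, y)
print(full.score(X, y))                  # 1.0 on the training data

# Constrained: the usual anti-overfitting hyperparameters stop it early.
small = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20,
                               random_state=0).fit(X, y)
print(small.get_depth(), small.score(X, y))   # shallow tree, training accuracy well below 1.0
```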

Related

Why does a random forest with n_estimators equal to 1 sometimes perform worse than a decision tree?

Why does a random forest with n_estimators equal to 1 sometimes perform worse than a decision tree, even after setting bootstrap to False?
While trying different machine learning models for predicting credit card default rates, I tried random forest and decision tree, but random forest seemed to perform worse. I then tried a random forest with only 1 tree, which is supposed to be the same as a decision tree, but it still performed worse.
A specific answer to your observations depends on the implementations of the decision tree (DT) and random forest (RF) methods that you're using. That said, these are the three most likely reasons:
bootstrapping: Although you mention that you set that to False, in its most general form an RF uses two forms of randomization: bootstrapping of the dataset and random subsampling of the features. Perhaps the setting only controls one of these. Even if both of these are off, some RF implementations have other parameters that control the number of attributes considered for each split of the tree and how they are selected.
tree hyperparameters: Related to the previous point, the other aspect to check is whether all of the other tree hyperparameters are the same. Tree depth, number of points per leaf node, etc.: these would all have to be matched to make the methods directly comparable.
growing method: Lastly, it is important to remember that trees are learned via indirect/heuristic losses that are often greedily optimized. Accordingly, there are different algorithms to grow the trees (e.g., C4.5), and the DT and RF implementations may be using different approaches.
If all of these match, then the differences should really be minor. If there are still differences (i.e., "in some cases"), these may be due to randomness in initialization and the greedy learning schemes, which can lead to suboptimal trees. That is the main motivation for RFs, in which ensemble diversity is used to mitigate these issues.
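If the library in question is scikit-learn (an assumption; the question doesn't say), the first two points translate roughly into the following sketch: a single-tree forest only becomes comparable to a plain decision tree once row bootstrapping is off and max_features covers all features, with the remaining tree hyperparameters left identical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

dt = DecisionTreeClassifier(random_state=0).fit(X, y)

# n_estimators=1 alone is NOT enough: by default the forest still
# bootstraps rows and considers only sqrt(n_features) at each split.
rf_default = RandomForestClassifier(n_estimators=1, random_state=0).fit(X, y)

# Turning off both sources of randomness makes the single tree comparable.
rf_aligned = RandomForestClassifier(n_estimators=1, bootstrap=False,
                                    max_features=None, random_state=0).fit(X, y)

print(dt.score(X, y), rf_default.score(X, y), rf_aligned.score(X, y))
```

Even with these aligned, small differences can remain because of the random tie-breaking among equally good splits mentioned above.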

Why can't we initiate the root node randomly in Decision Trees?

I just got into learning about Decision Trees. So the questions might be a bit silly.
The idea of selecting the root node is a bit confusing. Why can't we randomly select the root node? The only difference it seems to make is that it would make the Decision Tree deeper and more complex, but it would get the same result eventually.
Also, as an extension of the feature selection process in Decision Trees, why can't we use something as simple as the correlation between features and target, or a Chi-Square test, to figure out which feature to start with?
Why can't we randomly select the root node?
We can, but the same question then applies to its child nodes, and to the children of those children, and so on...
The only difference it seems to make is that it would make the Decision Tree deeper and more complex, but it would get the same result eventually.
The more complex the tree is, the higher variance it will have, which means two things:
small changes in the training dataset can greatly affect the shape of the tree
it overfits the training set
Neither of these is good, and even if you make a sensible choice at each step, based on entropy or the Gini impurity index, you will probably still end up with a larger tree than you would like (a minimal sketch of this greedy choice follows). Yes, that tree might have good accuracy on the training set, but it will probably overfit it.
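The sketch below makes that "sensible choice at each step" concrete (plain numpy, a toy binary-classification example with binary features; the helper functions are made up for illustration): compute the Gini impurity decrease for a split on each feature and take the best one as the root, instead of picking it at random.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

def gini_decrease(X, y, feature):
    """Impurity decrease from splitting on a binary feature."""
    left, right = y[X[:, feature] == 0], y[X[:, feature] == 1]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
    return gini(y) - weighted

# Toy data: feature 0 is informative, feature 1 is noise.
X = np.array([[0, 0], [0, 1], [0, 0], [0, 1],
              [1, 1], [1, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

scores = [float(gini_decrease(X, y, f)) for f in range(X.shape[1])]
print(scores)                               # [0.5, 0.0] -- feature 0 wins
best_root = int(np.argmax(scores))
print("root splits on feature", best_root)  # feature 0, not a random pick
```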
Most of the algorithms that use decision trees have their own ways to combat this variance, in one way or another. If you consider the plain decision tree algorithm itself, the way to reduce the variance is to first grow the tree and then prune it afterwards to make it smaller and less prone to overfitting. Random forests address it by averaging over a large number of trees while randomly restricting which predictors can be considered for a split every time that decision has to be made.
So, randomly picking the root node will lead to the same result eventually, but only on the training set, and only once the overfitting is so extreme that the tree simply predicts every training sample with 100% accuracy. But the more the tree overfits the training set, the less accurate it will generally be on a test set, and we care about accuracy on the test set, not on the training set.

What does depth of decision tree depend on?

Below is a parameter for DecisionTreeClassifier: max_depth
http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
max_depth : int or None, optional (default=None)
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
I always thought that the depth of a decision tree should be equal to or less than the number of features (attributes) of a given dataset. What if we find pure classes before reaching the value given for that parameter? Does it stop splitting, or does it keep splitting until the given depth?
Is it possible to use the same attribute at two different levels of a decision tree while splitting?
If the number of features is very high, a decision tree can grow very, very large. To answer your question: yes, it will stop splitting a node once that node is pure.
This is another reason decision trees tend to overfit.
You would typically use the max_depth parameter when you are using a random forest, which does not select all features for any specific tree; the trees are therefore not all expected to grow to the maximum possible depth, which would in turn require pruning. Decision trees are weak learners, and in a random forest they participate in voting, with max_depth limiting how complex each one can get. More details about the relationship between RFs and DTs can easily be found online; a range of articles has been published.
So, generally, you would want to use max_depth when you have a large number of features. Also, in actual applications you would usually use a random forest rather than a decision tree alone.
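A quick way to see the "stops at pure leaves" behaviour (a sketch assuming scikit-learn, on a synthetic dataset): give the tree a generous max_depth, check how deep it actually grew, and count how often each feature is used, which also shows that the same (continuous) attribute can be split on more than once.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)

# max_depth is only an upper bound: splitting stops earlier at pure leaves.
tree = DecisionTreeClassifier(max_depth=50, random_state=0).fit(X, y)
print("allowed depth: 50, actual depth:", tree.get_depth())

# Count how many internal nodes split on each feature -- continuous
# features are typically reused at several levels of the same tree.
used = tree.tree_.feature[tree.tree_.feature >= 0]   # -2 marks leaf nodes
print("splits per feature:", np.bincount(used, minlength=X.shape[1]))
```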

Best Learning model for high numerical dimension data? (with Rapidminer)

I have a dataset of approx. 4800 rows with 22 attributes, all numerical, describing mostly the geometry of rock / minerals, and 3 different classes.
I tried out cross-validation with a k-NN model inside it, with k = 7 and Numerical Measure -> Canberra Distance as the parameters, and I got a performance of 82.53% and 0.673 kappa. Is that result representative for the dataset? I mean, 82% is quite OK.
Before doing this, I evaluated the best subset of attributes with a decision table, and I got 6 different attributes out of that.
The problem is, you still don't learn much from that kind of model, like instance-based k-NN. Can I get any more insight from k-NN? I don't know how to visualize the clusters in that high-dimensional space in RapidMiner; is that somehow possible?
I tried a decision tree on the data, but I got too many branches (300 or so) and it all looked too messy. The problem is that all numerical attributes have about the same mean and distribution, so it's hard to get a distinct subset of meaningful attributes...
Ideally, the staff wants to "learn" something about the data, but my impression is that you cannot learn much meaningful from this data; what works best are "black box" learning models like neural nets, SVMs, and other instance-based models...
How should I proceed?
Welcome to the world of machine learning! This sounds like a classic real-world case: we want to make firm conclusions, but the data rows don't cooperate. :-)
Your goal is vague: "learn something"? I'm taking this to mean that you're investigating, hoping to find quantitative discriminations among the three classes.
First of all, I highly recommend Principal Component Analysis (PCA): find out whether you can eliminate some of these attributes by automated matrix operations, rather than a hand-built decision table. I expect that the messy branches are due to unfortunate choice of factors; decision trees work very hard at over-fitting. :-)
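The question is about RapidMiner, but the same PCA check is easy to express elsewhere; here is a minimal scikit-learn sketch (the array is stand-in random data with roughly the described shape, 4800 rows by 22 attributes) showing how to read the explained variance and decide how many components are worth keeping.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the real data: ~4800 rows, 22 numerical attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(4800, 22))

# Scale first: PCA is driven by variance, so units matter.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA().fit(X_scaled)
cumulative = np.cumsum(pca.explained_variance_ratio_)
print(np.round(cumulative, 3))

# Keep only as many components as needed for, say, 95% of the variance.
n_keep = int(np.searchsorted(cumulative, 0.95) + 1)
print("components kept:", n_keep)
```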
How clean are the separations of the data sets? Since you already used k-NN, I'm hopeful that you have dense clusters with gaps. If so, perhaps spectral clustering would help; these methods are good at classifying data based on the gaps between clusters, even if the cluster shapes aren't spherical. Interpretation depends on having someone on staff who can read eigenvectors and work out what the values mean.
Try a multi-class SVM. Start with 3 classes, but increase if necessary until your 3 expected classes appear. (Sometimes you get one tiny outlier class, and then two major ones get combined.) The resulting kernel functions and the placement of the gaps can teach you something about your data.
Try the Naive Bayes family, especially if you observe that the features come from a Gaussian or Bernoulli distribution.
As a holistic approach, try a neural net, but use something to visualize the neurons and weights. Letting the human visual cortex play with relationships can help extract subtle relationships.

Multivariate Decision Tree learner

A lot of univariate decision tree learner implementations (C4.5, etc.) exist, but does anyone actually know of multivariate decision tree learning algorithms?
Bennett and Blue's A Support Vector Machine Approach to Decision Trees does multivariate splits by using embedded SVMs for each decision in the tree.
Similarly, in Multicategory classification via discrete support vector machines (2009), Orsenigo and Vercellis embed a multicategory variant of discrete support vector machines (DSVM) into the decision tree nodes.
The CART algorithm for decision trees can be made multivariate. CART is a binary splitting algorithm, as opposed to C4.5, which creates a branch per unique value of a discrete attribute. The same algorithm is used in MARS, and for missing values too.
To create a multivariate tree, you compute the best split at each node, but instead of throwing away all the splits that weren't the best, you keep a portion of them (maybe all). Then you evaluate each data point's attributes against each of the potential splits at that node, weighted by rank: the first split (the one with the maximum gain) is weighted 1, the next highest-gain split is weighted by some fraction < 1.0, and so on, with the weights decreasing as the gain of the splits decreases. That number is then compared to the same calculation for the nodes within the left child; if it is above that number, go left, otherwise go right. That's a pretty rough description, but that's a multivariate split for decision trees.
Yes, there are some, such as OC1, but they are less common than ones which make univariate splits. Adding multivariate splits expands the search space enormously. As a sort of compromise, I have seen some logical learners which simply calculate linear discriminant functions and add them to the candidate variable list.
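As a rough illustration of that compromise (a sketch assuming scikit-learn; it is not OC1 or any of the methods cited above): fit a linear discriminant over all features, append its output as one extra candidate variable, and let an ordinary univariate tree split on it, which effectively gives the tree a single multivariate (oblique) split direction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)

# Learn a linear discriminant direction w.x + b over all features...
lda = LinearDiscriminantAnalysis().fit(X, y)
lin_feature = lda.decision_function(X).reshape(-1, 1)

# ...and offer it to the tree as one extra candidate variable.
X_aug = np.hstack([X, lin_feature])

plain = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
oblique = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_aug, y)

print("axis-aligned only:", plain.score(X, y))
print("with linear-discriminant feature:", oblique.score(X_aug, y))
root_feature = oblique.tree_.feature[0]
print("root splits on the augmented feature?", root_feature == X.shape[1])
```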
