How to view a regression tree? [duplicate] - machine-learning

This question already has answers here:
How to visualize a Regression Tree in Python
(4 answers)
Visualizing decision tree in scikit-learn
(11 answers)
Python Decision Tree GraphViz
(3 answers)
Visualizing a decision tree (example from scikit-learn)
(2 answers)
interpreting Graphviz output for decision tree regression
(1 answer)
Closed 2 years ago.
I want to visualize my decision tree applied to regression (the same plot worked for classification). What is going wrong such that only the values appear, but not the tree structure itself?

For a simpler approach, try:
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
.
.
.
clf = DecisionTreeClassifier()
clf.fit(X, y)
tree.plot_tree(clf)
The plot_tree() function uses matplotlib to draw the tree.
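The same call works for a regression tree; here is a minimal sketch (assuming X and y are your regression data and matplotlib is installed):
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor

reg = DecisionTreeRegressor(max_depth=3)  # limiting the depth keeps the plot readable
reg.fit(X, y)                             # X, y: your regression data (assumed defined)

plt.figure(figsize=(12, 6))
tree.plot_tree(reg, filled=True)
plt.show()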
For a fancier approach, you can plot the tree using graphviz. Check out this article about this topic:
https://towardsdatascience.com/visualizing-decision-trees-with-python-scikit-learn-graphviz-matplotlib-1c50b4aa68dc
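As a rough sketch of the graphviz route (this assumes the graphviz Python package is installed; clf is the fitted tree from above):
import graphviz
from sklearn.tree import export_graphviz

dot_data = export_graphviz(clf, out_file=None, filled=True, rounded=True)
graphviz.Source(dot_data).render("tree", format="png")  # writes tree.png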

Related

XGBoost decision tree machine learning

I got this figure when I used the XGBoost regressor for a large dataset (3 MB); the decision tree has too many details and is not clear.
Please what is the solution?

Does the bagging of bagging models make sense? [duplicate]

This question already has answers here:
Execution time of AdaBoost with SVM base classifier
(2 answers)
using random forest as base classifier with adaboost
(2 answers)
Closed 1 year ago.
Would it make sense to apply bagging of random forests? For example:
brf = BaggingRegressor(base_estimator=RandomForestModel,
                       n_estimators=10,
                       max_samples=1.0,
                       bootstrap=True,  # samples are drawn with replacement
                       n_jobs=n_jobs,
                       random_state=random_state).fit(X_train, Y_train)
Or, in a stacking model, would it make sense to use a Random Forest as the base/final estimator?

Why does xgboost show features in the feature importance plot that don't appear in the last tree?

I have a data set with 7 features.
I'm running xgboost and plotting the last tree:
plot_tree(model, num_trees=model.n_estimators - 1)
This last tree contains only 2 features.
To my understanding, the last tree (tree number = n_estimators - 1) is the tree used for prediction (and it's the only one used for prediction).
I'm plotting the feature importance:
plot_importance(model)
This plot shows 7 features.
Why do we see all 7 features and not just 2? (I'm asking because the last tree uses 2 features, not 7.)
To my understanding, the feature importance should be calculated from the last tree, since that is the tree used for prediction. Is that true?
xgboost uses all of the trees for prediction (each tree's leaves add some value to the final prediction). Therefore you see all 7 features in the importance plot.
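One way to convince yourself of this is to compare the full prediction with the prediction from the last tree alone. This is only a sketch; it assumes your fitted scikit-learn-style model and feature matrix X, and an xgboost version whose predict() accepts iteration_range:
import numpy as np

n = model.n_estimators
full_pred = model.predict(X)                              # every tree contributes
last_only = model.predict(X, iteration_range=(n - 1, n))  # only the last tree

# The two differ: the last tree only adds a small correction on top of the previous
# trees, so features used anywhere in the ensemble show up in plot_importance.
print(np.allclose(full_pred, last_only))  # typically False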
I recommend watching this video tutorial on how xgboost works:
XGBoost Part 2 (of 4): Classification

What if I train a classifier two times? [duplicate]

This question already has an answer here:
How can I train an SVM classifier from sklearn multiple times in Python?
(1 answer)
Closed 4 years ago.
If I train a classifier two times, like:
clf.fit(X,y)
clf.fit(X,y)
Will it overwrite the existing classifier or will it just train it one time?
Yes, clf will be fit with the last data you try to fit it with. See the answer here https://stackoverflow.com/a/28884168/9458191 for more information.
Whenever you call .fit(...) on a classifier, it will only retain the new fit, essentially overwriting any previous training.
If you are using an entirely different dataset, the resulting classifier will obviously be different than it was before the second .fit(...) call. If you are using the same dataset, then the classifier may or may not be any different. Some classifiers are deterministic in training; if that is the case, the result should not change. Some classifiers are non-deterministic, however, and those could produce different results during the second training.
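A minimal sketch of the overwrite behaviour (X1/y1 and X2/y2 are hypothetical datasets; scikit-learn assumed):
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X1, y1)  # model reflects X1, y1
clf.fit(X2, y2)  # previous fit is discarded; model now reflects only X2, y2

# Predictions after the second call depend only on X2, y2;
# fit() never accumulates training across calls.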

Identifying the most useful words in differentiating between classes [duplicate]

This question already has answers here:
How to get most informative features for scikit-learn classifiers?
(9 answers)
Closed 1 year ago.
Is it possible to use tfidf (tfidfvectorizer in Python) to figure out which words are most important when trying to distinguish between two text classes (i.e., positive or negative sentiment, etc.)? For example, which words were most important for identifying the positive class, and then separately, which were most useful for identifying the negative class?
You can let scikit-learn do the heavy lifting: train a random forest on your binary-labeled tf-idf data, extract the classifier's feature importance ranking, and use it to get the most important words:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier()
clf.fit(data, labels)  # data is the tf-idf matrix produced by your vectorizer
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]  # feature indices, most important first
feature_names = vectorizer.get_feature_names()  # get_feature_names_out() on newer scikit-learn
top_words = []
for i in range(100):
    top_words.append(feature_names[indices[i]])
Note that this will only tell you which words are the most important, not what they say about each class. To see what a word says about each class, you can classify the individual words and look at their predicted class.
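For example, a minimal sketch of that idea, reusing the fitted vectorizer and clf from above:
for word in top_words[:20]:
    word_vec = vectorizer.transform([word])  # tf-idf vector containing just that word
    print(word, clf.predict(word_vec)[0])    # which class the word alone points to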
Another option is to take all positive/negative data samples, remove the word you are trying to understand from them, and see how this affects the classification of the sample.
