Manual categorization vs. Decision Tree categorization in classification - machine-learning

I'm encountering a problem while working on a classification task in a Kaggle competition.
I am trying to re-categorize numeric and categorical variables into other categories manually (based on plotting and observing their distributions), and the Decision Tree graph does not seem to do as good a job of categorizing them. Should I categorize them manually to get a better classification accuracy result?
Basically, the question is: can Decision Tree, Random Forest, and Gradient Boosted trees do this job well enough that my manual re-categorization has no merit, and there is no need to manually categorize and feature engineer in this way?
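The kind of comparison I have in mind, as a minimal sketch (the feature, bin edges, and data here are made up for illustration), is to cross-validate the same tree on the raw numeric feature and on a manually binned version:

import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: one numeric feature and a noisy binary target.
rng = np.random.default_rng(0)
X_num = rng.normal(size=(500, 1))
y = (X_num[:, 0] + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

# Manual categorization: bin the feature by hand (bin edges picked by eye
# from the distribution -- purely illustrative).
X_binned = pd.cut(X_num[:, 0], bins=[-np.inf, -1.0, 0.5, np.inf], labels=False).reshape(-1, 1)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("raw feature:", cross_val_score(tree, X_num, y, cv=5).mean())
print("manual bins:", cross_val_score(tree, X_binned, y, cv=5).mean())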

Related

When is AdaBoost better than XGBoost on some data combinations?

My name is Eslam, a master's student in Egypt; my thesis is in the field of educational data mining. I used the AdaBoost and XGBoost techniques in my predictive model to predict students' success rate based on the Open Learning Analytics dataset (OLAD).
The idea behind the analysis is trying various techniques (including ensemble and non-ensemble techniques) on different combinations of features, and interesting results showed up.
Results: (figure not reproduced here)
The question is: why do some techniques perform better than others on specific feature combinations, especially Random Forest, XGBoost, and AdaBoost?
An ML model can achieve different results depending on the kind of space your data lives in and the kind of function you want to approximate. You can expect an SVM to achieve the highest score on data that is naturally embedded in a Hilbert space. On the other hand, if the data does not fit this kind of space (e.g. many categorical, unordered features), you can expect boosted-tree methods to outperform the SVM.
However, if I understood correctly that 'Decision Tree Accuracy' in the picture refers to a single decision tree, then I believe your tests were done on small data sets, or your boosting and Random Forest models were incorrectly parametrized.
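One quick sanity check for the parametrization/sample-size concern is to cross-validate each model on each feature combination. A minimal sketch (synthetic data, placeholder feature groups, and scikit-learn's GradientBoostingClassifier standing in for XGBoost):

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Placeholder feature groups -- substitute the OLAD column indices you actually used.
feature_groups = {
    "demographics": [0, 1, 2, 3],
    "clicks": [4, 5, 6, 7, 8],
    "all": list(range(20)),
}

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=300, random_state=0),
    "GradBoost": GradientBoostingClassifier(random_state=0),  # stand-in for XGBoost
}

for group, cols in feature_groups.items():
    for name, model in models.items():
        score = cross_val_score(model, X[:, cols], y, cv=5).mean()
        print(f"{group:12s} {name:12s} {score:.3f}")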

Decision Tree Uniqueness sklearn

I have some questions regarding decision tree and random forest classifiers.
Question 1: Is a trained Decision Tree unique?
I believe it should be unique, since it maximizes information gain at each split. Now, if it is unique, why is there a random_state parameter in the decision tree classifier? Since it is unique, it will be reproducible every time, so there should be no need for random_state.
Question 2: What does a decision tree actually predict?
While going through the random forest algorithm, I read that it averages the probability of each class from its individual trees. But as far as I know, a decision tree predicts a class, not the probability of each class.
Even without checking out the code, you will see this note in the docs:
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.
For splitter='best', this is happening here:
# Draw a feature at random
f_j = rand_int(n_drawn_constants, f_i - n_found_constants,
               random_state)
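The practical consequence is that once random_state is fixed, repeated fits on the same data draw the same feature permutations and therefore build the same tree. A minimal check (iris used only as a stand-in dataset):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Two fits with the same fixed random_state: the feature permutation drawn
# at each split is reproduced, so the resulting trees are identical.
t1 = DecisionTreeClassifier(random_state=0).fit(X, y)
t2 = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(t1) == export_text(t2))  # True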
And for your other question, read this:
...
Just build the tree so that the leaves contain not just a single class estimate, but also a probability estimate as well. This could be done simply by running any standard decision tree algorithm, and running a bunch of data through it and counting what portion of the time the predicted label was correct in each leaf; this is what sklearn does. These are sometimes called "probability estimation trees," and though they don't give perfect probability estimates, they can be useful. There was a bunch of work investigating them in the early '00s, sometimes with fancier approaches, but the simple one in sklearn is decent for use in forests.
...
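Concretely, in scikit-learn a single DecisionTreeClassifier already exposes this through predict_proba: it returns the fraction of training samples of each class in the leaf that a sample falls into, and predict is just the argmax of that. A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A deliberately shallow tree so that the leaves stay impure.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# For each sample, predict_proba is the per-class fraction of training
# samples in its leaf; predict takes the argmax of those fractions.
print(tree.predict_proba(X[:2]))
print(tree.predict(X[:2]))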

Best way to classify labeled sentences from a set of documents

I have a classification problem and I need to figure out the best approach to solve it. I have a set of training documents, where some of the sentences and/or paragraphs within the documents are labeled with some tags. Not all sentences/paragraphs are labeled. A sentence or paragraph may have more than one tag/label. What I want to do is build a model where, given a new document, it will give me suggested labels for each of the sentences/paragraphs within the document. Ideally, it would only give me high-probability suggestions.
If I use something like nltk's NaiveBayesClassifier, it gives poor results; I think this is because it does not take into account the "unlabeled" sentences from the training documents, which will contain many of the same words and phrases as the labeled sentences. The documents are legal/financial in nature and are filled with legal/financial jargon, most of which should be discounted in the classification model.
Is there some better classification algorithm than Naive Bayes, or is there some way to push the unlabelled data into Naive Bayes, in addition to the labelled data from the training set?
Here's what I'd do to slightly modify your existing approach: train one binary classifier for each possible tag, applied to each sentence. Include all sentences not expressing that tag as negative examples for the tag (this implicitly uses the unlabelled examples). For a new test sentence, run all n classifiers and retain the tags scoring above some threshold as the labels for the new sentence.
I'd probably use something other than Naive Bayes. Logistic regression (MaxEnt) is the obvious choice if you want something probabilistic; SVMs are very strong if you don't care about probabilities (and I don't think you do at the moment).
This is really a sequence labelling task, and ideally you'd fold in predictions from nearby sentences too... but as far as I know, there's no principled extension to CRFs/StructSVM or other sequence tagging approaches that lets instances have multiple labels.
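A minimal sketch of the per-tag setup described above, using scikit-learn (the sentences, tags, and the 0.5 threshold are made-up placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data -- replace with your labelled sentences and tags.
sentences = [
    "The borrower shall repay the principal in full.",
    "Interest accrues at a fixed annual rate.",
    "This agreement is governed by the laws of New York.",
    "The lender may assign its rights under this agreement.",
]
tags = [["repayment"], ["interest"], ["governing_law"], ["assignment", "repayment"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)

# One logistic-regression classifier per tag; sentences not carrying a tag
# act as negative examples for that tag, as described above.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(sentences, Y)

# Keep only tags whose estimated probability clears a threshold.
probs = clf.predict_proba(["Repayment of principal is due on the maturity date."])[0]
threshold = 0.5  # tune on held-out data
print([tag for tag, p in zip(mlb.classes_, probs) if p >= threshold])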
is there some way to push the unlabelled data into naive bayes
Naive Bayes makes no distinction between "labeled" and "unlabeled" data; it builds simple conditional probabilities, in particular P(label|attributes) and P(no label|attributes), so its behaviour depends heavily on the processing pipeline used, but I highly doubt that it actually ignores the unlabelled parts. If it does so for some reason, and you do not want to modify the code, you can also introduce an artificial "no label" class for all remaining text segments.
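A minimal sketch of that artificial-label trick with nltk's NaiveBayesClassifier (the feature extractor and example sentences are placeholders):

import nltk

def features(sentence):
    # Placeholder feature extractor: bag of lowercased words.
    return {word.lower(): True for word in sentence.split()}

labelled = [("Interest accrues at a fixed rate.", "interest"),
            ("The borrower shall repay the principal.", "repayment")]
unlabelled = ["This section intentionally left blank.",
              "Capitalised terms have the meanings given below."]

# Give every unlabelled sentence the artificial class "no_label" so the
# classifier also learns what background jargon looks like.
train = [(features(s), tag) for s, tag in labelled]
train += [(features(s), "no_label") for s in unlabelled]

clf = nltk.NaiveBayesClassifier.train(train)
print(clf.classify(features("The principal shall be repaid in full.")))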
Is there some better classification algorithm than Naive Bayes
Yes, NB is in fact one of the most basic models, and there are dozens of better (stronger, more general) ones that achieve better results in text tagging, including:
Hidden Markov Models (HMM)
Conditional Random Fields (CRF)
and, in general, Probabilistic Graphical Models (PGMs)

Machine Learning Algorithm selection

I am new to machine learning. My problem is to make a machine select a university for a student according to their location and area of interest, i.e. it should select a university in the same city as the student's address. I am confused about algorithm selection; can I use the perceptron algorithm for this task?
There are no hard rules as to which machine learning algorithm is the best for which task. Your best bet is to try several and see which one achieves the best results. You can use the Weka toolkit, which implements a lot of different machine learning algorithms. And yes, you can use the perceptron algorithm for your problem -- but that is not to say that you would achieve good results with it.
From your description it sounds like the problem you're trying to solve doesn't really require machine learning. If all you want to do is match a student with the closest university that offers a course in the student's area of interest, you can do this without any learning.
I second the first remark that you probably don't need machine learning if the student has to live in the same area as the university. If you want to use an ML algorithm, maybe it would be best to think about what data you have to start with. What comes to mind is representing each university as a vector with a feature for each subject/area it offers, then computing the distance from an 'ideal' feature vector for the student and minimizing that distance (a minimal sketch follows below).
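A minimal sketch of that distance idea (the subject features and vectors are invented for illustration):

import numpy as np

# Columns: [computer_science, biology, economics] -- purely illustrative.
universities = {
    "Uni A": np.array([1, 0, 1]),
    "Uni B": np.array([0, 1, 1]),
    "Uni C": np.array([1, 1, 0]),
}

# Student's ideal profile: interested in computer science and economics.
student = np.array([1, 0, 1])

# Pick the university whose feature vector minimizes the Euclidean distance.
best = min(universities, key=lambda name: np.linalg.norm(universities[name] - student))
print(best)  # "Uni A"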
The first and foremost thing you need is a labeled dataset.
It sounds like the problem could be framed as an ML problem; however, you first need a set of positive and negative examples to train from.
How big is your dataset? What features do you have available? Once you answer these questions you can select an algorithm that bests fits the features of your data.
I would suggest using decision trees for this problem, since a decision tree resembles a set of if-else rules. You can just take the student's location and area of interest as the conditions of if/else-if statements and then suggest a university. Since it is a direct mapping of inputs to outputs, a rule-based solution would work, and there is no learning required here.
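A minimal sketch of such a rule-based mapping (the city/interest/university entries are placeholders, not real offerings):

# Direct lookup: (city, area_of_interest) -> suggested university.
rules = {
    ("Cairo", "computer science"): "Cairo University",
    ("Cairo", "medicine"): "Ain Shams University",
    ("Alexandria", "engineering"): "Alexandria University",
}

def suggest(city, interest):
    # Return the mapped university, or a fallback when no rule matches.
    return rules.get((city, interest), "no match found")

print(suggest("Cairo", "computer science"))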
Maybe you can use a recommender system or a clustering approach; you can look more deeply into techniques like collaborative filtering (recommender systems) or k-means (clustering). But again, as others have said, you first need data to learn from, and your problem may well be solvable without ML.
Well, there is no straightforward and sure-shot answer to this question. The answer depends on many factors like the problem statement and the kind of output you want, type and size of the data, the available computational time, number of features, and observations in the data, to name a few.
Size of the training data
Accuracy and/or Interpretability of the output
Accuracy of a model means that the function predicts a response value for a given observation that is close to the true response value for that observation. A highly interpretable algorithm (a restrictive model like linear regression) means one can easily understand how any individual predictor is associated with the response, while flexible models give higher accuracy at the cost of lower interpretability.
Speed or Training time
Higher accuracy typically means higher training time. Also, algorithms require more time to train on large training data. In real-world applications, the choice of algorithm is driven by these two factors predominantly.
Algorithms like Naive Bayes and linear and logistic regression are easy to implement and quick to run. Algorithms like SVMs, which involve parameter tuning, neural networks with long convergence times, and random forests need much more time to train.
Linearity
Many algorithms work on the assumption that classes can be separated by a straight line (or its higher-dimensional analog). Examples include logistic regression and support vector machines. Linear regression algorithms assume that data trends follow a straight line. If the data is linear, these algorithms perform quite well.
Number of features
The dataset may have a large number of features, not all of which are relevant and significant. For certain types of data, such as genetic or textual data, the number of features can be very large compared to the number of data points.
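When the feature count is large relative to the number of samples, a common first step is to rank or shrink the feature set before fitting. A minimal scikit-learn sketch on synthetic 'wide' data:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic wide data: far more features than samples, as in genetics/text.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=10, random_state=0)

# Keep the 50 features with the highest mutual information, then fit an
# L2-regularized logistic regression on the reduced feature set.
model = make_pipeline(SelectKBest(mutual_info_classif, k=50), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())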

Weka machine learning: how to interpret a Naive Bayes classifier?

I am using the Explorer feature for classification. My .arff data file has 10 features with numeric and binary values (only the instance ID is nominal). I have about 16 instances. The class to predict is Yes/No. I have used Naive Bayes but I cannot interpret the results. Does anyone know how to interpret the results of a Naive Bayes classification?
Naive Bayes doesn't select any important features. As you mentioned, the result of training a Naive Bayes classifier is the mean and variance of every feature. The classification of new samples into 'Yes' or 'No' is based on whether the feature values of the sample best match the means and variances of the trained features for 'Yes' or for 'No'.
You could use other algorithms to find the most informative attributes. In that case you might want to use a decision tree classifier, e.g. J48 in WEKA (which is the open-source implementation of the C4.5 decision tree algorithm). The first node in the resulting decision tree tells you which feature has the most predictive power.
Even better (as stated by Rushdi Shams in the other post), Weka's Explorer offers purpose-built options to find the most useful attributes in a dataset. These options can be found under the Select attributes tab.
As Sicco said, NB cannot give you the best features. A decision tree is a good choice because the branching can sometimes tell you which feature is important, but not always. To handle simple to complex feature sets, you can use Weka's Select attributes tab. There you will find search methods and attribute evaluators; depending on your task, you can choose the one that best suits you. They will give you a ranking of the features (either from the training data or from k-fold cross-validation). Personally, I believe decision trees perform poorly if your model is overfitting; in that case, a ranking of features is the standard way to select the best ones. Most of the time I use the InfoGain evaluator with the Ranker search method. When your attributes are ranked from 1 to k, it is easy to figure out which features are needed and which are unnecessary.
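For readers working outside the Weka GUI, here is a rough scikit-learn analogue of both ideas (not Weka itself): GaussianNB exposes the per-class means and variances that the Weka Naive Bayes output lists, and mutual_info_classif gives an InfoGain-style attribute ranking. The attribute names theta_ and var_ are those used in recent scikit-learn versions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

# The 'model' Naive Bayes learns is just a mean and variance per feature per
# class -- this is what the Weka Naive Bayes output is listing.
nb = GaussianNB().fit(X, y)
print(nb.theta_.shape)  # (n_classes, n_features): per-class feature means
print(nb.var_.shape)    # (n_classes, n_features): per-class feature variances

# An information-gain-style ranking of attributes, similar in spirit to
# Weka's InfoGainAttributeEval with the Ranker search method.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]
print(ranking[:5])  # indices of the five most informative features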
