Feature selection for data with both continuous and categorical features? - machine-learning

How do I perform feature selection when I have both numerical and categorical features? Is it common to partition the features and explore correlation separately, e.g. select the categorical features and use Chi-square, then select the continuous features and use ANOVA?

In general - no. If you use information gain for feature selection, you first need to transform your continuous attributes into nominal attributes via discretization. The number of nominal values of an attribute can greatly affect the information gain, since the more nominal values an attribute has, the greater its chance of appearing to explain the target variable. Make sure you have roughly the same number of values for each attribute, and use the information gain ratio, which normalizes the information gain with respect to the number of values and their probabilities.
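A minimal sketch of that advice, assuming numpy and pandas and a purely hypothetical data frame (the gain_ratio helper below is written here for illustration, it is not a library function): equal-frequency binning keeps the value counts comparable across attributes, and the ratio divides the gain by the attribute's own split information.

```python
import numpy as np
import pandas as pd

def entropy(labels):
    """Shannon entropy (in bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(feature, target):
    """Information gain of `feature` w.r.t. `target`, normalized by the
    feature's own entropy (its split information)."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    cond = sum(w * entropy(target[feature == v]) for v, w in zip(values, weights))
    gain = entropy(target) - cond
    split_info = -(weights * np.log2(weights)).sum()
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical data with one continuous and one nominal attribute.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.lognormal(10, 1, 500),          # continuous
    "region": rng.choice(list("ABCD"), 500),      # nominal
    "target": rng.choice([0, 1], 500),
})

# Discretize the continuous column into equal-frequency bins so that every
# attribute ends up with roughly the same number of values.
df["income_binned"] = pd.qcut(df["income"], q=4, labels=False)

for col in ["income_binned", "region"]:
    print(col, gain_ratio(df[col].to_numpy(), df["target"].to_numpy()))
```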

Related

Feature Scaling of attributes

I used two features to train a classification model, say feature A and feature B. Feature A is more important than feature B. Feature A contains ordinal data, so I have label encoded it and its values range from 1 to 5. Feature B is also a categorical feature, and I have one-hot encoded it after label encoding.
Because of this encoding, feature A has values ranging from 1 to 5, whereas feature B becomes multiple columns whose values are either 0 or 1.
After training, my model is heavily skewed towards feature A because its values range from 1 to 5, and it pays very little attention to feature B.
If I scale the features with a standard scaler, feature A's values will lie roughly between -1 and 1, and after training feature B then plays a bigger role than feature A in the decision.
Is there a better way to scale both features so that feature A keeps an edge, but not so much of one that feature B is completely ignored?
Once you one-hot encode, you just have a set of features; the model does not know which of them came from A and which from B. You can then calculate feature importance, or run feature selection algorithms, to make the model more efficient.
However, if you feel feature A is more important, you can try scaling to limits other than -1 to 1 so that feature A carries more weight than feature B, or scale both correspondingly. But again, the model sees only a flat set of features, so try changing the model or its parameters rather than focusing on this kind of scaling to improve performance.
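A small sketch of the "just a set of features" point, assuming scikit-learn and a made-up data frame (the column names A, B, y and the data are purely illustrative): after encoding, the model only sees a flat set of columns, and you can inspect how much each one actually contributes.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.integers(1, 6, 300),                    # ordinal, already encoded 1-5
    "B": rng.choice(["red", "green", "blue"], 300),  # nominal
    "y": rng.integers(0, 2, 300),
})

pre = ColumnTransformer([
    ("A", "passthrough", ["A"]),                     # keep the ordinal codes as-is
    ("B", OneHotEncoder(handle_unknown="ignore"), ["B"]),
])

X = pre.fit_transform(df[["A", "B"]])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, df["y"])

# The model never knows which encoded column came from A or B; the feature
# importances tell you what it actually relied on.
for name, imp in zip(pre.get_feature_names_out(), model.feature_importances_):
    print(name, round(imp, 3))
```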

How to use ordered categorical variables in building ML models?

I am trying to build a logistic regression model and a lot of my features are ordered categorical variables. I think dummy variables may not be useful, as they treat each category with equal weight. So, do I need to treat ordered categorical variables as numerical?
Thanks in advance.
Ordered categorical values are termed "ordinal" attributes in data mining, where one value is less than or greater than another. You can treat these values either as nominal values or as continuous values (numbers); a small encoding sketch follows the list below.
Some pros and cons of treating them as numbers (continuous):
Pros:
This gives you a lot of flexibility in your choice of analysis and preserves the information in the ordering. More importantly to many analysts, it allows you to analyze the data easily.
Cons:
This approach requires the assumption that the numerical distance between each pair of adjacent categories is equal. If that assumption does not hold, you can choose interval sizes that suit your domain instead of equal spacing.
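For example, a minimal sketch with a hypothetical "satisfaction" column, assuming pandas and scikit-learn: OrdinalEncoder with an explicit category order treats the values as numbers and preserves the ordering, while plain dummy variables discard it.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"satisfaction": ["low", "medium", "high", "medium", "low"]})

# Treat the ordered categories as numbers, preserving the order explicitly.
enc = OrdinalEncoder(categories=[["low", "medium", "high"]])
df["satisfaction_num"] = enc.fit_transform(df[["satisfaction"]])

# Alternative: dummy variables, which treat every category as unordered.
dummies = pd.get_dummies(df["satisfaction"], prefix="satisfaction")
print(df, dummies, sep="\n\n")
```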

When classifying, does one need to normalize new incoming features when predicting on real data?

There are two data sets: the training set, and a new set of features whose labels are yet to be predicted.
I built a Random Forest classifier. Along the way I had to do two things:
Normalize continuous numeric features.
Perform a one-hot-encoding on the categorical ones.
Now I have two questions. When I am predicting labels for the new data:
Do I need to normalize the incoming features? (common sense tells me that yes :) ) If so, should I take the mean, max, min values for a specific feature from the training data set or should I somehow take into account the new values of the features?
How do I one-hot encode the new values of the features? Do I expand the dictionary of possible categories for a specific feature to take into account the possibly new values?
In my case I possess both data sets, so I could calculate all this stuff in advance, but what if I only had a classifier and a new data set?
I only have basic knowledge of the type of classifier and the normalization techniques you're using, but the general rule, which I think applies to your case as well, is the following.
Your classifier is not just a Random Forest classifier; that is only one step of the pipeline that acts as your actual classifier. The pipeline, your actual classifier, is what you describe:
Normalize continuous numeric features.
Perform a one-hot-encoding on the categorical ones.
Use a Random Forest Classifier on what you get from the first 2 steps.
This pipeline, which encompasses all three steps, is what you're actually using as your classifier.
Now, how does a classifier work?
You build some state based on the training data.
You use that state to make predictions on the test data.
So:
Do I need to normalize the incoming features? (common sense tells me that yes :) ) If so, should I take the mean, max, min values for a specific feature from the training data set or should I somehow take into account the new values of the features?
Your classifier normalizes the incoming features for the training data, so it will normalize those for unseen instances too. To do this, it must use the state it has built during training.
For example, if you were doing min-max scaling on your features, your state would store a min(f) and max(f) for each feature f. Then, during testing / prediction, you would do min-max scaling for each feature f using the stored min(f) and max(f) values.
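For instance, a tiny sketch with scikit-learn's MinMaxScaler and made-up numbers: the min/max learned on the training data are stored and reused for unseen values, rather than being recomputed on the new data.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0], [5.0], [9.0]])   # hypothetical training feature
X_new   = np.array([[3.0], [12.0]])         # unseen values at prediction time

scaler = MinMaxScaler().fit(X_train)        # state: data_min_ = 1, data_max_ = 9
print(scaler.transform(X_new))              # uses the *stored* min/max, so 12.0
                                            # maps above 1.0 instead of refitting
```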
I'm not sure what you mean by "normalize continuous numeric features". Do you mean discretization? If you build some state for this discretization during training, then you need to find a way to factor that in.
How do I one-hot encode the new values of the features? Do I expand the dictionary of possible categories for a specific feature to take into account the possibly new values?
Don't you know beforehand how many values each categorical feature can have? Usually you do (since categoricals are things like nationality, continent, etc. - things you know in advance). If you can get a value for a categorical feature that you haven't seen during training, it raises the question of whether you should even care about it. What good is a categorical value you've never trained on?
Maybe add an "unknown" category. Expanding the dictionary by a single "unknown" value should be fine; what good would more do if you've never trained on them?
What kind of categoricals do you have?
I could be wrong, but do you really need one-hot encoding? AFAIK, tree-based classifiers don't seem to benefit that much from it.
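Putting the pipeline idea together, here is a sketch with scikit-learn and a tiny made-up data set (column names and values are purely illustrative): the scaler and encoder are fitted on the training data only, and OneHotEncoder(handle_unknown="ignore") encodes a category never seen in training as an all-zero row, which is one way to get the "unknown" bucket mentioned above.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Hypothetical training data with one numeric and one categorical feature.
train = pd.DataFrame({
    "age": [23, 35, 47, 52],
    "continent": ["EU", "AS", "EU", "NA"],
    "label": [0, 1, 1, 0],
})

clf = Pipeline([
    ("prep", ColumnTransformer([
        ("num", MinMaxScaler(), ["age"]),
        # Unseen categories become an all-zero row instead of raising an error.
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["continent"]),
    ])),
    ("rf", RandomForestClassifier(random_state=0)),
])

clf.fit(train[["age", "continent"]], train["label"])

# New data: the scaler and encoder reuse the state learned on the training set.
new = pd.DataFrame({"age": [60], "continent": ["AF"]})   # "AF" was never seen
print(clf.predict(new))
```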

Decision tree with high cardinality attribute

I want to learn a decision tree for a discrete target attribute with a reasonable number of values (5).
However, there are discrete high-cardinality input attributes (thousands of different possible string values), and I wonder whether it makes sense to include them. Is there any policy on the maximum cardinality an attribute should have to be included when training a decision tree?
There is no maximum cardinality, no. Of course, you could omit values that do not actually appear in the data.
You will have to use an implementation (a decision tree or random decision forest) that handles multi-valued categorical features directly, rather than one that converts them to a series of binary indicator features.
For a categorical feature with N values there are 2^N - 2 possible decision rules on the feature, which is far too many to consider. The heuristic I have used is to compute the entropy of the target when you divide up the data by the N categorical feature values, order the values by that entropy, and then evaluate only the N - 1 rules you get by considering prefixes of that ordered list.
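A rough sketch of that heuristic, assuming numpy and pandas and synthetic integer-coded categories (best_prefix_split is written here only to illustrate the idea, it is not a library routine):

```python
import numpy as np
import pandas as pd

def entropy(y):
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_prefix_split(cat, y):
    """Order the categories by the entropy of the target within each category,
    then evaluate only the splits that send a prefix of that ordering left."""
    df = pd.DataFrame({"cat": cat, "y": y})
    order = (df.groupby("cat")["y"]
               .apply(lambda s: entropy(s.to_numpy()))
               .sort_values().index.tolist())
    base = entropy(df["y"].to_numpy())
    best_gain, best_left = 0.0, set()
    for k in range(1, len(order)):                    # the prefix rules
        left_vals = set(order[:k])
        mask = df["cat"].isin(left_vals).to_numpy()
        left, right = df["y"].to_numpy()[mask], df["y"].to_numpy()[~mask]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(df)
        if gain > best_gain:
            best_gain, best_left = gain, left_vals
    return best_gain, best_left

# Hypothetical high-cardinality feature (1000 distinct values), 5-class target.
rng = np.random.default_rng(0)
cat = rng.integers(0, 1000, 20000)
y = rng.integers(0, 5, 20000)
gain, left = best_prefix_split(cat, y)
print(gain, len(left))
```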

Information gain on non discrete dataset

Jiawei Han's book on Data Mining, 2nd edition (Attribute Selection Measures, pp. 297-300), explains how to calculate the information gain achieved by each attribute (age, income, credit_rating) with respect to the class (buys_computer: yes or no).
In that example, each of the attribute values is discrete; for example, age can be youth/middle-aged/senior, income can be high/low/medium, credit_rating fair/excellent, and so on.
I would like to know how the same information gain can be applied to attributes that take non-discrete values. For example, the income attribute could take any currency amount, like 100.68, 120.90, etc.
If there are 1000 students, there could be 1000 different amount values.
How can we apply information gain to non-discrete data? Any tutorial/sample example/video URL would be of great help.
When your target variable is discrete (categorical), you just calculate entropy over the empirical distribution of categories in the left/right split you're considering, and compare their weighted average to the entropy without the split.
For a continuous target variable, like income, this is defined analogously via differential entropy. For your purpose you would assume that the values in your set have a normal distribution and calculate the differential entropy accordingly. From Wikipedia, the differential entropy of a normal distribution with variance σ² is h(X) = (1/2) ln(2πeσ²).
That is, it is just a function of the variance of the values. Note that this is in nats, not bits of entropy. To compare it to the Shannon entropy above, you would have to convert, which is just a multiplication by a constant factor (1/ln 2).
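As a tiny numeric illustration with made-up values, assuming numpy, including the nats-to-bits conversion:

```python
import numpy as np

values = np.array([100.68, 120.90, 98.50, 250.00, 300.20])  # hypothetical incomes
sigma2 = values.var()

h_nats = 0.5 * np.log(2 * np.pi * np.e * sigma2)  # differential entropy of a Gaussian
h_bits = h_nats / np.log(2)                       # nats -> bits is a constant factor
print(h_nats, h_bits)
```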
The most common way to do splitting for a continuous variable (1-d) is to pick a threshold (from a discretized set of thresholds, or you can choose a prior). So you can compute the information gain for a continuous attribute by first sorting its values (you have to have an order) and then scanning them for the best threshold. http://dilekylmzr.files.wordpress.com/2011/09/data-mining_lecture9.ppt
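A short sketch of that sort-and-scan procedure, assuming numpy and made-up values (best_threshold is illustrative, not from a library): candidate thresholds are the midpoints between consecutive distinct sorted values, and each one is scored by its information gain.

```python
import numpy as np

def entropy(y):
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_threshold(x, y):
    """Sort the continuous attribute and scan candidate thresholds, keeping
    the one with the highest information gain."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    base = entropy(y)
    best_gain, best_t = 0.0, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        t = (x[i] + x[i - 1]) / 2
        left, right = y[:i], y[i:]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

# Hypothetical continuous income attribute with a binary class label.
income = np.array([100.68, 120.90, 98.5, 250.0, 300.2, 275.4, 130.0, 95.0])
label  = np.array([0, 0, 0, 1, 1, 1, 0, 0])
print(best_threshold(income, label))
```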
Example of using this technique in random forests
This technique is often used in random forests (and decision trees), so here are a few references on that.
More information on random forests and this technique can be found here: http://www.cs.ubc.ca/~nando/540-2013/lectures.html . Watch the lectures on YouTube, because the slides alone are not very informative. The lectures describe how body parts are matched using random forests in Kinect, which is quite interesting.
You can also look at https://research.microsoft.com/pubs/145347/bodypartrecognition.pdf - the original paper discussed in the lecture.
Note that for the information gain you can also use Gaussian entropy: it basically amounts to fitting a Gaussian to the data before and after the split.
