How to deal with missing values in K-means clustering? - machine-learning

I am working on customer segmentation based on customers' purchases across different product categories.
Below is a dummy representation of my data (the values are the percentage of total revenue per category that each customer purchased):
Image Link
As seen in the image link above, this dummy data has only a few 0s, but the original data has many. Therefore, using this data for k-means clustering does not produce any acceptable insights and skews the data towards the left.
Dropping the rows or averaging the missing data is misleading. :/

How you deal with missing values is your choice, and of course it will affect your clustering. There is no one "correct" way.
A few popular ways (sketched in code at the end of this answer):
Fill each column's missing values with the mean of that feature
Bootstrapping: select a random row and copy its value to fill the missing value
Closest neighbor: find the closest neighbor and fill in the missing values from it
Without seeing your full data and knowing what you're trying to do with the clustering, it's a bit hard to help. It depends on the case...
You can always do some feature extraction (e.g. PCA); maybe it will give some better insights.
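As a rough sketch of the mean and nearest-neighbor strategies above, assuming a pandas DataFrame of per-category revenue percentages (the column names here are made up) where missing purchases are stored as NaN:

```python
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.cluster import KMeans

# Hypothetical per-category revenue percentages; missing purchases are NaN.
df = pd.DataFrame({
    "electronics": [40.0, None, 10.0, 55.0],
    "groceries":   [None, 70.0, 60.0, 20.0],
    "clothing":    [60.0, 30.0, None, 25.0],
})

# Option 1: fill each column's missing values with that column's mean.
mean_imputed = SimpleImputer(strategy="mean").fit_transform(df)

# Option 2: fill each missing value from the closest rows (nearest neighbors).
knn_imputed = KNNImputer(n_neighbors=2).fit_transform(df)

# Cluster on whichever imputed matrix you prefer.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(knn_imputed)
print(labels)
```

Note that if your missing purchases are currently coded as 0 rather than NaN, you would need to convert them first, otherwise the imputers treat the 0s as real observations.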

Related

Machine learning: handling features which are supposed to have missing data

I am currently working on a project for my MSc and I am having this issue with the dataset. I don't have previous experience in machine learning; this is my first exposure.
While doing my EDA (Exploratory Data Analysis) on the dataset, I found a categorical feature with missing data, Province_State. This column has 52,360 missing values, which is 5.40%. I guess that is not too bad, and according to what I have learnt I should either impute these missing values or delete the column if I have reasonable justification.
My logical reasoning is that not every country has provinces, so it is quite normal that there are missing values. I don't see the point in imputing these missing values with a random value, because that is not logical and it will also introduce inaccuracy into the model: we cannot come up with a value that does not practically exist for that particular country.
I think I should do one of the following:
Impute all the missing values to a constant value such as -1 or "NotApplicable"
Remove the feature from the dataset
Please help me with a solution and thank you very much in advance.
(This dataset can be accessed from this link)
There are many ways to handle missing data. Deleting the whole column is not a good idea in most cases, because you will be discarding information. However, if you still want to delete the feature, perform univariate analysis on it first, see whether it is useful, and decide accordingly.
Instead of removing the feature, you can use any of the following ways (a short sketch of the constant-value option follows the list):
Impute missing values with Mean/Median.
Predict missing values.
Impute all the missing values to -1.
Use algorithms that support missing values.
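For the Province_State case specifically, imputing a constant such as "NotApplicable" is a one-liner in pandas. The column name matches the question; the file name and the one-hot encoding step are assumptions for illustration:

```python
import pandas as pd

df = pd.read_csv("covid_data.csv")  # hypothetical file name

# Countries without provinces simply have no value here, so a constant
# "NotApplicable" preserves the information that the field does not apply.
df["Province_State"] = df["Province_State"].fillna("NotApplicable")

# If the model needs numeric input, encode the category afterwards,
# e.g. with one-hot encoding:
df = pd.get_dummies(df, columns=["Province_State"])
```

This keeps the feature while making "no province" an explicit category rather than a gap the model has to guess about.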

What can we do with a dataset where 98 percent of the columns are null values?

I want to predict server downtime before it happens. To achieve this, I collected a lot of data from different data sources.
One of the data sources is metric data containing cpu-time, cpu-percentage, memory-usage, etc. However, the values in this dataset's columns are mostly null; 98% of the values in many of the columns are null.
What kind of data preparation technique can be used to prepare the data before applying it to a prediction algorithm?
I appreciate any help.
If I were in your situation, my first option would be to ignore this data source. There is too much missing data for it to be a relevant source of information for any ML algorithm.
That being said, if you still want to use this source of data, you will have to fill the gaps. Inferring the missing data from only 2% of observed values is hardly possible, but when you are dealing with more than 90% missing data I would advise having a look at Non-Negative Matrix Factorization (NMF) here.
A few versions of this algorithm are implemented in R. To get better results when inferring such a large amount of missing data, you could also read this paper, which combines time-series information (which could be your case) with NMF. I ran some tests with up to 95% missing data and the results were not too bad; hence, as discussed earlier, you could discard some of your data so that only 80% or 90% is missing, then apply NMF for time series.
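Scikit-learn's NMF does not accept missing values directly, so here is only a rough sketch of the general idea (not the method from the paper linked above): alternately fill the gaps and refactorize, keeping the observed entries fixed. The function name and parameters are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_impute(X, rank=5, n_iter=20):
    """Rough iterative NMF imputation: fill missing entries, factorize,
    then overwrite only the missing entries with the reconstruction."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)

    # Initialize missing entries with column means (data must be non-negative).
    col_means = np.nanmean(X, axis=0)
    col_means = np.where(np.isnan(col_means), 0.0, col_means)
    X_filled = np.where(mask, col_means, X)

    for _ in range(n_iter):
        model = NMF(n_components=rank, init="nndsvda", max_iter=500)
        W = model.fit_transform(X_filled)
        H = model.components_
        X_hat = W @ H
        # Keep the observed values, update only the missing ones.
        X_filled = np.where(mask, X_hat, X)
    return X_filled
```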
Normally various data imputation techniques can be applied, but in the case of 98% null values I don't think that would be a correct approach: you would be inferring the empty data from just 2% of available information, which would generate an enormous amount of bias in your data. I would go for an option like this: sort your rows in descending order by the number of non-null columns, so that the rows with the most observed values come first. Then determine a cutoff from the beginning of the sorted list such that, for example, only 20% of the data is missing in the selected subset, and apply data imputation to that subset only. Of course, this assumes that you will still have enough data points (rows) after the cutoff, which you may not, and that the data is not missing at random for each row (if the data is missing at random for each row, you cannot use this sorting method at all).
In any case, I can hardly see a concrete way to build a meaningful model with such a high proportion of missing data.
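A minimal pandas sketch of that sort-and-cutoff idea (the file name and the 20% threshold are assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical metric data (cpu_time, memory_usage, ...).
df = pd.read_csv("metrics.csv")

# Sort rows so the ones with the most observed (non-null) values come first.
df = df.loc[df.notna().sum(axis=1).sort_values(ascending=False).index]

# Find the longest prefix whose overall missing rate stays below 20%.
missing_per_row = df.isna().sum(axis=1).to_numpy()
cum_missing_rate = missing_per_row.cumsum() / (df.shape[1] * np.arange(1, len(df) + 1))
cutoff = int(np.searchsorted(cum_missing_rate, 0.20))

subset = df.iloc[:cutoff]  # impute only within this denser subset
```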
First, there can be many reasons why your data are null. For example, collecting those metrics may not have been planned in the previous version of the project; you then upgraded, but the change is not retroactive, so you only have access to data from the new version. In that case the 2% are perfectly fine data that simply represent very little compared to the total volume, because the new version has only been up for X days; and so on.
ANYWAY
Even if you have only 2% non-null data, that does not really matter by itself; what does matter is how much data those 2% represent. If it is 2% of 5 billion records, then it is enough to take "just" that 2% of non-null rows as training data and ignore the others!
Now, if the 2% represents only a few data points, then I really advise you NOT to fill the null values from them, because it will create enormous bias. Furthermore, it means your current process is not ready for a machine learning project => just adapt it to collect more data.

What does it mean to have zero mean in the data?

I'm trying to find ways to normalize my dataset (represented as a matrix with documents as rows and columns as features) and I came across a technique called feature scaling. I found a Wikipedia article on it here.
One of the methods listed is Standardization which says "Feature standardization makes the values of each feature in the data have zero-mean and unit-variance." What does that mean (no pun intended)?
In this method, "we subtract the mean from each feature. Then we divide the values (mean is already subtracted) of each feature by its standard deviation." When they say 'subtract the mean', is it the mean of the entire matrix or the mean of the column pertaining to that feature?
Also, if this feature scaling method is applied, does the mean not have to be subtracted from columns when performing Principal Component Analysis (PCA) on the data?
The basic idea is to apply a simple (and reversible) transformation to your dataset to make it easier to handle. You subtract a constant from each column and then divide each column by a (different) constant. Those constants are column-specific.
When they say 'subtract the mean', is it the mean of the entire matrix
or the mean of the column pertaining to that feature?
The mean of the column pertaining to that feature.
...does the mean not have to be subtracted from columns when performing Principal Component Analysis (PCA) on the data?
Correct. PCA requires data with a mean of zero, and usually this is enforced by subtracting the mean as a first step. If the mean has already been subtracted, that step is not required. There is also no harm in performing the "subtract the mean" operation twice, because the second time the mean is already zero, so nothing changes. Formally, we might say that standardization is idempotent.
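A tiny numpy example of both points (the mean and standard deviation are taken per column, and centering twice changes nothing):

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 60.0]])

# Standardize: subtract each column's mean, divide by each column's std.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # ~[0, 0]  -> zero mean per feature
print(X_std.std(axis=0))   # ~[1, 1]  -> unit variance per feature

# Subtracting the mean again changes nothing, since it is already zero.
X_again = X_std - X_std.mean(axis=0)
print(np.allclose(X_std, X_again))  # True
```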
From looking at the article, my understanding is that you would subtract the mean of that feature. This will give you a set of data for the feature that describes the same layout of the data but normalized.
Imagine you added data for a new feature. You're probably going to want the data for your original features to remain the same, and not be influenced by the new feature.
I guess you would still get a "standardized" range of values if you subtracted the mean of the whole data set, but that would be something different - you're probably more interested in how the data of a single feature lies around its mean.
You could also have a look (or ask the question) on math.stackexchange.com.

Prediction Algorithm for Basketball Stats

I'm working on a project where I need to predict future stats based on past stats of basketball players. I would like to be able to predict next season's statistics based on the statistics of the past three seasons (if there are three previous seasons to choose from). Does anyone have a suggestion for a good prediction algorithm I could use? The data is continuous and there can be anywhere between 5-14 dimensions (age, minutes, points, etc.)
Thanks!
Note: I'd really like to use the program Weka to do this.
Out of the box, random forest would likely give you a strong baseline, so I would start with this.
You can also try linear regression, which is a simple yet relatively effective method, but depending on the data it might require a bit more tweaking (for example transforming some of the input and/or output variables).
Gradient boosting regression is another strong predictor, but typically also needs more tweaking to work well.
All of these algorithms have Weka implementations.
There obviously isn't one correct answer, but for anyone looking to do something similar, I'll better describe my problem and the solution that I've found. I created a csv file where each row is a different season, and each column contains a different attribute. For each attribute that I would like to predict, I have the stats for the current season and then another column for the stats for the previous season. The first (rookie) season will have 0 for all 'previous season' columns. With this data set, I loaded it into Weka and used a Multilayer Perceptron with the test-option set to Cross-Validation. I set the number of folds to somewhere between 80-90% of the number of seasons available.
Finally, to predict the next season's statistics, you add one more row to the end and input the last-season values with "?" in the columns that you would like to predict. If anyone would like a deeper example, I'd be glad to provide one.
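The answer above uses Weka's Multilayer Perceptron through the GUI. Purely as an illustration of the same lagged-feature layout, here is a hypothetical scikit-learn sketch (the column names are made up) using the random forest baseline suggested in the first answer:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical per-season data: one row per player-season, with "prev_"
# columns holding the previous season's values (0 for rookie seasons).
df = pd.read_csv("seasons.csv")

feature_cols = ["age", "prev_minutes", "prev_points", "prev_rebounds"]
target_col = "points"

X, y = df[feature_cols], df[target_col]

# Random forest baseline, evaluated with cross-validation.
model = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean())

# To predict next season, fit on all rows and pass a new row whose "prev_"
# columns contain the current season's stats: model.fit(X, y); model.predict(new_row)
```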
I also think that if you truly want to create an accurate prediction, you have to look at player movement: if a player moves to a team with a losing record, do they increase their minutes and take on a larger role, which would inflate their stats, or do they move to a winning team for a lesser role, where they could see a decrease in stats?

What should I do when the training set contains some erroneous data in supervised classification?

I am working on a project that performs automatic text classification. I have a large data set like the one below:
Text | CategoryName
xxxxx... | AA
yyyyy... | BB
zzzzz... | AA
I will then use the above data set to train a classifier; once new text comes in, the classifier should label it with the correct CategoryName.
(text is natural language, size between 10-10000)
Now, the problem is that the original data set contains some incorrect data (e.g. a text that should be labeled as category AA is accidentally labeled as category BB), because the data was classified manually. I don't know which labels are wrong or what percentage of them is wrong, because I can't review all the data manually...
So my question is, what should I do?
Can I find the wrong labels via some automatic way?
How to increase precision and recall when new data coming?
How to evaluate the impact of wrong data? (since I don't know what percentage of the data is wrong)
Any other suggestions?
Obviously, there is no easy way to solve your problem - after all, why build a classifier if you already have a system that can detect wrong classifications.
Do you know how much the erroneous classifications affect your learning? If there are only a small percentage of them, they should not hurt the performance much. (Edit. Ah, apparently you don't. Anyway, I suggest you try it out - at least if you can identify a false result when you see one.)
Of course, you could always first train your system and then have it suggest classifications for the training data. This might help you identify (and correct) your faulty training data. This obviously depends on how much training data you have, and if it is sufficiently broad to allow your system to learn correct classification despite the faulty data.
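One way to make that concrete (a rough sketch; the texts and labels below are toy placeholders for your real data) is to get out-of-fold predictions on the training set and flag the rows where the model disagrees with the given label:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# Placeholder documents and (possibly noisy) labels; replace with your data.
texts = [
    "screen quality is excellent",
    "refund was never processed",
    "battery lasts two days",
    "camera takes sharp photos",
    "support never replied to my ticket",
    "billing error on my last invoice",
]
labels = np.array(["AA", "BB", "AA", "AA", "BB", "BB"])

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# Out-of-fold predictions: each row is predicted by a model that never saw it.
predicted = cross_val_predict(clf, texts, labels, cv=3)

# Rows where the classifier disagrees with the given label are worth reviewing.
suspicious = np.where(predicted != labels)[0]
print(suspicious)
```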
Can you review any of the data manually to find some mislabeled examples? If so, you might be able to train a second classifier to identify mislabeled data, assuming there is some kind of pattern to the mislabeling. It would be useful for you to know if mislabeling is a purely random process (it is just noise in the training data) or if mislabeling correlates with particular features of the data.
You can't evaluate the impact of mislabeled data on your specific data set if you have no estimate regarding what fraction of your training set is actually mislabeled. You mention in a comment that you have ~5M records. If you can correctly manually label a few hundred, you could train your classifier on that data set, then see how the classifier performs after introducing random mislabeling. You could do this multiple times with varying percentages of mislabeled data to see the impact on your classifier.
Qualitatively, having a significant quantity of mislabeled samples will increase the impact of overfitting, so it is even more important that you do not overfit your classifier to the data set. If you have a test data set (assuming it also suffers from mislabeling), then you might consider training your classifier to less-than-maximal classification accuracy on the test data set.
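A sketch of the noise-injection experiment described above, assuming `X` and `y` hold the few hundred manually verified records and `clf` is any scikit-learn classifier (all names here are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def noise_experiment(clf, X, y, noise_rates=(0.0, 0.05, 0.1, 0.2, 0.3), seed=0):
    """Flip a random fraction of labels and measure cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    classes = np.unique(y)
    scores = {}
    for rate in noise_rates:
        y_noisy = y.copy()
        flip = np.where(rng.random(len(y)) < rate)[0]
        for i in flip:
            # Replace the label with a random *different* class.
            y_noisy[i] = rng.choice(classes[classes != y[i]])
        scores[rate] = cross_val_score(clf, X, y_noisy, cv=5).mean()
    return scores
```

Plotting the returned scores against the noise rates gives a rough sense of how quickly your classifier degrades as mislabeling increases.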
People usually deal with the problem you are describing by having multiple annotators and computing their agreement (e.g. Fleiss' kappa). This is often seen as the upper bound on the performance of any classifier: if three people give you three different answers, you know the task is quite hard and your classifier stands no chance.
As a side note:
If you do not know how many of your records have been labelled incorrectly, you do not understand one of the key properties of the problem. Select 1,000 records at random and spend the day reviewing their labels to get an idea; it really is time well spent. For example, I found I can easily review 500 labelled tweets per hour. Health warning: it is very tedious, but a morning spent reviewing gives me a good idea of how distracted my annotators were. If 5% of the records are incorrect, it is not such a problem; if 50% are incorrect, you should go back to your boss and tell them it can't be done.
As another side note:
Someone mentioned active learning. I think it is worth looking into options from the literature, keeping in mind that labels might have to change; you said yourself that reviewing them all is hard.
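For the agreement computation mentioned at the start of this answer, statsmodels provides Fleiss' kappa; the rating matrix below is made up purely for illustration:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings from 3 annotators for 5 texts, encoded 0="AA", 1="BB".
ratings = np.array([
    [0, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
])

# aggregate_raters converts (items x raters) labels into (items x categories) counts.
counts, _ = aggregate_raters(ratings)
print(fleiss_kappa(counts))  # near 1 = strong agreement, near 0 = chance-level
```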
