I'm trying to find ways to normalize my dataset (represented as a matrix with documents as rows and features as columns) and I came across a technique called feature scaling. I found a Wikipedia article on it here.
One of the methods listed is Standardization which says "Feature standardization makes the values of each feature in the data have zero-mean and unit-variance." What does that mean (no pun intended)?
In this method, "we subtract the mean from each feature. Then we divide the values (mean is already subtracted) of each feature by its standard deviation." When they say 'subtract the mean', is it the mean of the entire matrix or the mean of the column pertaining to that feature?
Also, if this feature scaling method is applied, does the mean not have to be subtracted from columns when performing Principal Component Analysis (PCA) on the data?
The basic idea is to do a simple (and reversible) transformation on your dataset to make it easier to handle. You are subtracting a constant from each column and then dividing each column by a (different) constant. Those constants are column-specific.
When they say 'subtract the mean', is it the mean of the entire matrix
or the mean of the column pertaining to that feature?
The mean of the column pertaining to that feature.
...does the mean not have to be subtracted from columns when performing Principal Component Analysis (PCA) on the data?
Correct. PCA requires data with a mean of zero. Usually this is enforced by subtracting the mean as a first step. If the mean has already been subtracted, that step is not required. However, there is no harm in performing the "subtract the mean" operation twice, because the second time the mean will already be zero, so nothing will change. Formally, we might say that standardization is idempotent.
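A minimal sketch of this column-wise transformation with NumPy (the matrix here is made up for illustration):

import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])  # rows = documents, columns = features

X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # per-column mean and standard deviation

print(X_std.mean(axis=0))  # approximately 0 for every column
print(X_std.std(axis=0))   # 1 for every column

# Idempotence: standardizing already-standardized data changes nothing,
# because each column's mean is already 0 and its std is already 1.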
From looking at the article, my understanding is that you would subtract the mean of that feature. This will give you a set of data for the feature that describes the same layout of the data but normalized.
Imagine you added data for a new feature. You're probably going to want the data for your original features to remain the same, and not be influenced by the new feature.
I guess you would still get a "standardized" range of values if you subtracted the mean of the whole data set, but that would be something different - you're probably more interested in how the data of a single feature lies around its mean.
You could also have a look (or ask the question) on math.stackexchange.com.
I am working on customer segmentation based on customers' purchases across different product categories.
Below is a dummy representation of my data (the data is the percentage of total revenue per category that the customer purchased):
Image Link
As seen in the image linked above, although this sample has only a few 0s, the original data has many 0s. Therefore, using this data for k-means clustering does not produce any acceptable insights and skews the clusters towards the left.
Dropping the rows or averaging the missing data is misleading. :/
How to deal with missing values is your choice; it will of course impact your clustering. There is no one "correct" way.
A few popular ways:
Fill each column's missing values with the mean of that feature
Bootstrapping: select a random row and copy its value to fill the missing value
Nearest neighbour: find the closest neighbour and fill the missing values from it
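A minimal sketch of the first and last options using scikit-learn's imputers (the column names and values here are made up; in your case the frame would be the customer-by-category percentage table with NaN for missing values):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer

# made-up customer-by-category table with missing values
df = pd.DataFrame({"catA": [10.0, np.nan, 30.0, 25.0],
                   "catB": [np.nan, 50.0, 20.0, 5.0]})

mean_filled = SimpleImputer(strategy="mean").fit_transform(df)  # fill with column means
knn_filled = KNNImputer(n_neighbors=1).fit_transform(df)        # fill from the closest neighbour
print(mean_filled)
print(knn_filled)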
Without seeing your full data and what you're trying to do with the clustering, it's a bit hard to help. It depends on the case...
You can always do some feature extraction (e.g. PCA); maybe it will give you some better insights.
Before I dive into the question itself, I'll give a brief explanation of the data set and the problem.
The Data set
I have a data set of roughly 20000 records and I intend to use it to train a classifier which classifies a given record as 'Positive' or 'Negative'. The data set is also pretty imbalanced, with a 5:1 ratio favoring the 'Positive' side.
One of the features in the data set, 'Price', contains a monetary value (thus >= 0) and has a few missing values (about 200). When I analyzed the data set, all of the rows which had NaN for 'Price' were classified as 'Negative'.
The Problem
What would be the best strategy to impute this column? I came up with the following options:
1. I could drop these rows, but since all of them are from the 'Negative' class, that doesn't seem viable.
2. I could impute it with an extreme value such as -1000.00, since it is a monetary value. While that may work in this situation, it would not work if the feature could also take negative values, and I wish to learn a more generic approach to the problem.
3. I could impute it as normal with a strategy such as 'mean' or 'nearest neighbour', which could still affect performance, as the majority of the classes are 'Positive'.
4. I could add a new column called 'wasCompleted', which has a value of 1 if there was a value for the 'Price' feature and 0 if there wasn't, and still go with an option like (2) or (3), which would still not solve the issues within those strategies. A rough sketch of this option is shown below.
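To make option (4) concrete, this is roughly what I have in mind (a sketch assuming a pandas DataFrame df with the 'Price' column; the mean fill here is just a stand-in for whatever (2) or (3) would use):

import numpy as np
import pandas as pd

# made-up data with some missing prices
df = pd.DataFrame({"Price": [12.5, np.nan, 7.0, np.nan]})

df["wasCompleted"] = df["Price"].notna().astype(int)  # 1 if Price was present, 0 otherwise
df["Price"] = df["Price"].fillna(df["Price"].mean())  # then impute as in option (2) or (3)
print(df)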
Considering this scenario, what would be the best option for imputing these values?
There is at least one more option to consider:
Leave it as it is, and use an ML method which can handle missing values natively; this tends to work much better than any kind of imputation or creation of additional features. Such a method is, e.g., LightGBM.
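A minimal sketch, assuming the lightgbm Python package is installed; the features and labels are placeholders. LightGBM treats NaN as missing by default, so the 'Price' column can be passed in as-is:

import numpy as np
import lightgbm as lgb

# toy data: column 0 is 'Price' (with NaN for missing), column 1 is some other feature
X = np.array([[120.0, 1.0],
              [np.nan, 0.0],
              [89.5, 1.0],
              [np.nan, 0.0]] * 25)
y = np.array([1, 0, 1, 0] * 25)  # 1 = 'Positive', 0 = 'Negative'

model = lgb.LGBMClassifier(n_estimators=20)
model.fit(X, y)                # no imputation of the NaN values needed
print(model.predict(X[:4]))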
I understand that in the decision tree algorithm, when the split is decided, we choose the best split based on some criterion. And when you're looking for the best split, you have to iterate through some list of values. But it seems very computationally expensive to consider every value of the feature as a possible threshold (or, so called, cut point). Thus, there is a need for some heuristic for choosing these thresholds. For example, if we have a continuous feature and a categorical target (i.e. we are dealing with a classification problem), we can do the following: sort the dataset by the given feature and consider for splitting only the values where the target variable changes its value.
But what do you do if you have a regression task, i.e. both feature and target are continuous variables? I realize that I have to calculate, for example, the mean variance or mean median deviation in both branches for each split. But how do you decide from which values you're choosing your best split? People surely have come up with some optimal solution in order to avoid iterating over every value of the feature in the training set.
I've done some research, but most sources only focus on different criteria and on how you determine whether your split is suitable, which does not really answer my question.
I've found this question, but Predictor only suggests that it can be done using percentiles, and I think there is no guarantee that this is how it is really done in practice.
I've also found this question, but to me geledek's answer is not very clear (it seems to be copy-pasted from the presentation it refers to). I'm pretty much fine with Method 1, but I would really appreciate it if someone could explain Method 2 in more detail, or perhaps provide a different source or an explanation of your own.
UPD: I've also looked at the scikit-learn repo on GitHub and found this line. I can't quite understand the overall code, but it seems that this particular line implies that thresholds are chosen as the averages of neighboring feature values (which corresponds to the aforementioned Method 1 from the question above). Is that correct? I also don't understand this comment: # sum of halves is used to avoid infinite value. How exactly does dividing by two prevent us from getting infinite values? Don't you get infinity only when you divide by zero? Is dividing by two necessary because this way we get the average value (and not because we want to avoid infinities)?
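For reference, here is a small sketch of my understanding of Method 1 (and of what that scikit-learn line seems to do): candidate thresholds are the midpoints between consecutive distinct sorted feature values, and each candidate is scored here by the weighted variance of the target in the two branches (the toy arrays are made up):

import numpy as np

x = np.array([1.0, 1.0, 2.0, 4.0, 7.0])   # continuous feature
y = np.array([3.0, 2.9, 4.1, 8.0, 9.5])   # continuous target

order = np.argsort(x)
x_sorted, y_sorted = x[order], y[order]

best = None
for i in range(len(x_sorted) - 1):
    if x_sorted[i] == x_sorted[i + 1]:
        continue                                      # no threshold between equal values
    t = x_sorted[i] / 2.0 + x_sorted[i + 1] / 2.0     # midpoint ("sum of halves")
    left, right = y_sorted[: i + 1], y_sorted[i + 1 :]
    score = len(left) * left.var() + len(right) * right.var()  # weighted variance
    if best is None or score < best[0]:
        best = (score, t)

print(best)  # (weighted variance of the best split, chosen threshold)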
I have a set of 3-5 black box scoring functions that assign positive real value scores to candidates.
Each is decent at ranking the best candidate highest, but they don't always agree. I'd like to find out how to combine the scores into an optimal meta-score such that, among a pool of candidates, the one with the highest meta-score is usually the actual correct candidate.
So they are plain R^n vectors, but each dimension individually tends to have higher value for correct candidates. Naively I could just multiply the components, but I hope there's something more subtle to benefit from.
If the highest score is too low (or perhaps the two highest are too close), I just give up and say 'none'.
So for each trial, my input is a set of these score-vectors, and the output is which vector corresponds to the actual right answer, or 'none'. This is kind of like tech interviewing where a pool of candidates are interviewed by a few people who might have differing opinions but in general each tend to prefer the best candidate. My own application has an objective best candidate.
I'd like to maximize correct answers and minimize false positives.
More concretely, my training data might look like many instances of
{[0.2, 0.45, 1.37], [5.9, 0.02, 2], ...} -> i
where i is the ith candidate vector in the input set.
So I'd like to learn a function that tends to give the actual best candidate's score vector the highest meta-score among the input. There are no degrees of bestness; it's binary, right or wrong. However, it doesn't seem like traditional binary classification, because among an input set of vectors there can be at most one "classified" as right; the rest are wrong.
Thanks
Your problem doesn't exactly belong in the machine learning category. The multiplication method might work better. You can also try different statistical models for your output function.
ML problems, and more specifically classification problems, need training data from which your network can learn any existing patterns in the data and use them to assign a particular class to an input vector.
If you really want to use classification, then I think your problem can fit into the category of OnevsAll classification. You will need a network (or just a single output layer) with a number of cells/sigmoid units equal to your number of candidates (each representing one). Note that here your number of candidates will have to be fixed.
You can use your entire candidate vector as input to all the cells of your network. The output can be specified using one-hot encoding, i.e. 00100 if candidate no. 3 was the actual correct candidate, and in the case of no correct candidate the output will be 00000.
For this to work, you will need a big data set containing your candidate vectors and the corresponding actual correct candidates. For this data you will either need a labelling function (again, like multiplication) or you can assign the outputs yourself, in which case the system will learn how you classify the output given different inputs and will classify new data in the same way as you did. This way, it will maximize the number of correct outputs, but the definition of correct here will be how you classified the training data.
You can also use a different type of output where each cell of the output layer corresponds to one of your scoring functions, and 00001 means that the candidate your 5th scoring function selected was the right one. This way the number of candidates will not have to be fixed. But again, you will have to manually set the outputs of the training data for your network to learn from.
OnevsAll is a classification technique where there are multiple cells in the output layer and each performs binary classification between one of the classes and all the others. At the end, the sigmoid with the highest probability is assigned 1 and the rest zero.
Once your system has learned how you classify data through your training data, you can feed your new data in and it will give you output in the same way i.e. 01000 etc.
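A rough sketch of this idea, using scikit-learn's one-vs-rest wrapper rather than a neural network, under the same fixed-number-of-candidates assumption; the data here are random placeholders and class 0 stands for "none":

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

n_trials, n_candidates, n_scores = 200, 3, 3
X = rng.random((n_trials, n_candidates * n_scores))   # flattened candidate score vectors
y = rng.integers(0, n_candidates + 1, size=n_trials)  # 0 = "none", 1..3 = winning candidate

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:5]))  # predicted winning candidate (or 0 for "none") per trial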
I hope my answer was able to help you.:)
I am fairly new to statistics.
I ran an experiment and used a two-way ANOVA with repeated measures. The calculation was done in SPSS. In most papers I have seen, the F-value and the degrees of freedom were reported as well. Is it normal to report those values? If so, which values do I take from the SPSS output?
How do I interpret these values? What do they mean?
When does the F-value support a significant result and when not?
What are good values for the F-value and the degrees of freedom?
In some articles I also read about critical F-values; how do I get this value?
Most articles describe how to calculate these values but do not explain their meaning for the experiment.
Some clarification on these issues would be greatly appreciated.
My English is not very good, but I will try to answer your question.
The main purpose of ANOVA is to get statistical evidence for whether the measured groups have the same mean or not. So we set up a null hypothesis and an alternative hypothesis, then we apply a test statistic to the data. You can use ANOVA if the groups have the same variance (the squared standard deviation).
You need to test this first. This is a hypothesis test too: the null hypothesis is that the groups have the same variance, and the alternative hypothesis is that they don't.
You make your decision from the Sig. value: if the value is higher than 0.05, we usually accept the null hypothesis. If the variances are equal, we can use ANOVA. (I assume that the data follow the normal distribution.)
For the ANOVA itself, the null hypothesis is that the groups have equal means, and the alternative hypothesis is that at least one group has a different mean. You can make your decision from the Sig. value, as I said before: if the value is higher than 0.05, we accept the null hypothesis. The critical F-value is not important if you are calculating on a computer. You can build an acceptance interval from the lower and upper critical F-values, and if the F-value is inside the interval you accept the null hypothesis, but I have only used this method in statistics class. You don't need the F-value and the df in the report, because they don't explain anything on their own.
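A minimal sketch with SciPy for a plain one-way ANOVA (not the two-way repeated-measures design from the question, and the group data are made up), just to show where the F-value, the degrees of freedom, the Sig. (p-value) and the critical F-value come from; Levene's test is used here as one common check for equal variances:

from scipy import stats

g1 = [4.1, 5.0, 4.8, 5.3]
g2 = [5.9, 6.2, 5.7, 6.5]
g3 = [4.9, 5.1, 5.4, 5.0]

# Equal-variance check (null hypothesis: the group variances are equal)
print(stats.levene(g1, g2, g3))

# One-way ANOVA: F statistic and p-value (null hypothesis: the group means are equal)
f_value, p_value = stats.f_oneway(g1, g2, g3)

df_between = 3 - 1                     # k - 1 groups
df_within = 12 - 3                     # N - k observations
f_critical = stats.f.ppf(0.95, df_between, df_within)

print(f_value, p_value, f_critical)    # reject the null hypothesis if f_value > f_critical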