How to select features for clustering? - machine-learning

I have time-series data, which I have aggregated into 3 weeks and transposed into features.
Now I have features: A_week1, B_week1, C_week1, A_week2, B_week2, C_week2, and so on.
Some of the features are discrete, some continuous.
I am thinking of applying K-Means or DBSCAN.
How should I approach feature selection in such a situation?
Should I normalise the features? Should I introduce new ones that somehow link the periods together?

Since K-means and DBSCAN are unsupervised learning algorithms, feature selection for them essentially comes down to a search over feature subsets and parameters. You can evaluate the candidate clusterings with internal measures such as the Davies–Bouldin index or the silhouette coefficient, among others. If you're using Python, you can use scikit-learn's exhaustive grid search (GridSearchCV) to drive the search; see the scikit-learn documentation.
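As a rough illustration (a sketch only: the data below is a random placeholder for the week-aggregated features, and the parameter ranges are made up), here is how one might scan K-means and DBSCAN settings by hand and score each result with the internal measures mentioned above, using scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score, davies_bouldin_score

X = StandardScaler().fit_transform(np.random.rand(300, 9))  # placeholder: A/B/C over 3 weeks

# K-means: scan the number of clusters
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print("k =", k,
          "silhouette =", silhouette_score(X, labels),
          "Davies-Bouldin =", davies_bouldin_score(X, labels))

# DBSCAN: scan eps, skipping degenerate results (everything noise / a single cluster)
for eps in (0.3, 0.5, 0.8, 1.2):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    if len(set(labels)) > 1:
        print("eps =", eps, "silhouette =", silhouette_score(X, labels))
```

The same loop can be reused to compare candidate feature subsets: run it once per subset and keep the combination with the best internal scores.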

Formalize your problem, don't just hack some code.
K-means minimizes the sum of squares. If the features have different scales, they get different influence on the optimization. Therefore, you need to carefully choose the weights (scaling factors) of each variable to balance their importance the way you want (and note that a 2x scaling factor does not make the variable twice as important; because the error is squared, it makes it four times as important).
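As a small, hedged sketch of that point (the weights below are arbitrary, not a recommendation): multiplying a standardized column by w makes its contribution to the sum of squares w^2 times larger.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 3)            # placeholder data
weights = np.array([1.0, 2.0, 0.5])   # chosen by you; 2.0 means 4x the squared influence
X_weighted = StandardScaler().fit_transform(X) * weights
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_weighted)
```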
For DBSCAN, the distance is only a binary decision: close enough, or not. If you use the GDBSCAN version, this is easier to understand than with distances. With mixed variables, I would suggest using the maximum norm. Two objects are then close if they differ in each variable by at most "eps". You can set eps=1, and scale your variables such that a difference of 1 is "too big". For example, for discrete variables you may want to tolerate one or two discrete steps, but not three (see the sketch after the formula below).
Logically, it's easy to see that the maximum-distance threshold decomposes into a conjunction of one-variable clauses:
maxdistance(x, y) <= eps
<=>
for all i: |x_i - y_i| <= eps
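A minimal sketch of this eps = 1 / maximum-norm setup with scikit-learn (the per-variable tolerances are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 4)                    # placeholder mixed-scale data
tolerances = np.array([2.0, 0.5, 10.0, 1.0])  # "too big" difference per variable (assumed)
X_scaled = X / tolerances

# With the Chebyshev (maximum-norm) metric, two points are neighbours
# iff every variable differs by at most eps after this scaling.
labels = DBSCAN(eps=1.0, min_samples=5, metric="chebyshev").fit_predict(X_scaled)
```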

Related

Sklearn k-means clustering (weighted), determining optimum sample weight for each feature?

K-means clustering in sklearn, number of clusters is known in advance (it is 2).
There are multiple features. Feature values initially have no weights assigned, i.e. they are treated as equally weighted. However, the task is to assign custom weights to each feature in order to get the best possible cluster separation.
How to determine optimum sample weights (sample_weight) for each feature, in order to get best possible separation of the two clusters?
If this is not possible for k-means, or for sklearn, I am interested in any alternative clustering solution; the point is that I need a method for automatically determining appropriate weights for multivariate features, in order to maximize cluster separation.
In the meantime, I have implemented the following: clustering on each component (feature) separately, then calculating the silhouette score, Calinski–Harabasz score, Dunn score and inverse Davies–Bouldin score for each component separately, scaling those scores to the same magnitude, and finally applying PCA to reduce them to one value per feature. This produces a weight for each component, and the approach seems to give reasonable results. I suppose a better approach would be a full factorial experiment (DOE), but this simple approach produces satisfactory results as well.
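A rough sketch of that procedure (hypothetical data and k = 2; the score scaling and one-component PCA follow the description above, the Dunn index is omitted because it has no scikit-learn implementation, and the final rescaling is a judgment call):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

X = np.random.rand(300, 5)        # placeholder data, one column per feature
k = 2

scores = []
for j in range(X.shape[1]):
    col = X[:, [j]]               # cluster on this feature alone
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(col)
    scores.append([silhouette_score(col, labels),
                   calinski_harabasz_score(col, labels),
                   1.0 / davies_bouldin_score(col, labels)])   # inverse: higher is better

scores = MinMaxScaler().fit_transform(np.array(scores))        # same magnitude
weights = PCA(n_components=1).fit_transform(scores).ravel()    # one value per feature
# Note: the sign of the first principal component is arbitrary; check it against the raw scores.
weights = (weights - weights.min()) / (weights.max() - weights.min() + 1e-12)
print(weights)   # multiply the standardized feature columns by these before clustering
```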

Feature extraction for multiple sub-features

I would like to conduct some feature extraction (or clustering) for a dataset containing sub-features.
For example, the dataset is like the one below. The goal is to classify the type of robot using the data.
Samples : 100 robot samples [Robot 1, Robot 2, ..., Robot 100]
Classes : 2 types [Type A, Type B]
Variables : 6 parts, and 3 sub-features for each part (18 variables in total)
[Part1_weight, Part1_size, Part1_strength, ..., Part6_size, Part6_strength, Part6_weight]
I want to conduct feature extraction with [weight, size, strength], and use the extracted feature as a representative value for each part.
In short, my aim is to reduce the features to 6 - [Part1_total, Part2_total, ..., Part6_total] - and then classify the type of robot with those 6 features. So making a combined feature from 'weight', 'size', and 'strength' is the problem to solve.
First I thought of applying PCA (Principal Component Analysis), because it is one of the most popular feature extraction algorithms. But it considers all 18 features separately, so 'Part1_weight' can be considered more important than 'Part2_weight'. What I need to capture, however, is the importance of 'weights', 'sizes', and 'strengths' across samples, so PCA does not seem applicable.
Is there any suggested way to solve this problem?
If you want to have exactly one feature per part I see no other way than performing the feature reduction part-wise. However, there might be better choices than simple PCA. For example, if the parts are mostly solid, their weight is likely to correlate with the third power of the size, so you could take the cubic root of the weight or the cube of the size before performing the PCA. Alternatively, you can take a logarithm of both values, which again results in a linear dependency.
Of course, there are many more fancy transformations you could use. In statistics, the Box-Cox Transformation is used to achieve a normal-looking distribution of the data.
You should also consider normalising the transformed data before performing the PCA, i.e. subtracting the mean and dividing by the standard deviation of each variable. This removes the influence of the units of measurement, i.e. it won't matter whether you measure weight in kilograms, atomic units, or solar masses.
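A hedged sketch of that recipe, combining the part-wise reduction with the cube-root transform and standardization (column names follow the question's Part{n}_weight/size/strength scheme; the data is a random placeholder):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

cols = [f"Part{p}_{f}" for p in range(1, 7) for f in ("weight", "size", "strength")]
df = pd.DataFrame(np.random.rand(100, 18) + 0.1, columns=cols)   # 100 robots, 18 variables

reduced = {}
for p in range(1, 7):
    block = df[[f"Part{p}_weight", f"Part{p}_size", f"Part{p}_strength"]].copy()
    block[f"Part{p}_weight"] = np.cbrt(block[f"Part{p}_weight"])  # or log-transform both
    block = StandardScaler().fit_transform(block)                 # remove units of measurement
    reduced[f"Part{p}_total"] = PCA(n_components=1).fit_transform(block).ravel()

features = pd.DataFrame(reduced)   # 100 x 6: one combined feature per part
```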
If the part number makes parts different from one another (e.g. Part1 is different from Part2 even if their size, weight and strength are identical), you can do one PCA per part, using only that part's size, weight and strength as inputs.
Alternatively, if the order of the parts does not matter, you can do a single PCA using all (size, weight, strength) triples, without distinguishing them by part number.

If a dataset has multiple columns all in different formats, what would be the best approach to deal with such data?

Say a dataset has columns like length and width, which can be floats, and it can also have some binary elements (yes/no) or discrete numbers (categories transformed into numbers). Would it be wise to simply use all of these as features without worrying about their formats (or rather, the nature of the features)? When doing normalization, can we just normalize the discrete numbers the same way as continuous numbers? I'm really confused about dealing with multiple formats.
Yes, you can normalize discrete values, but it ought to have no real effect on learning: normalization is required when you are doing some form of similarity measurement, which is not the case for factor variables. There are some special cases like neural networks, which are sensitive to the scale of the inputs/outputs and the size of the weights (see the 'vanishing/exploding gradient' topic). It also makes sense if you are doing clustering on your data, since clustering uses some kind of distance measure and it is better to have all features on the same scale.
There is nothing special about categorical features, except that some learning methods are especially good at using categorical features, some at using real-valued features, and some are good at both.
My first choice for a mix of categorical and real-valued features would be tree-based methods (Random Forest or Gradient Boosting Machine), and my second would be ANNs.
Also, an extremely good approach to handling factors (categorical variables) is to convert them into a set of Boolean variables. For example, if you have a factor with five levels (1, 2, 3, 4 and 5), a good way to go is to convert it into 5 features, with a 1 in the column representing the corresponding level.
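A minimal sketch of that conversion (hypothetical five-level factor):

```python
import pandas as pd

df = pd.DataFrame({"level": [1, 3, 5, 2, 1, 4]})        # a factor with levels 1..5
dummies = pd.get_dummies(df["level"], prefix="level")   # one Boolean column per level
print(dummies)
```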

Reducing a matrix of feature vectors to a single, meaningful vector

I have matrices of feature vectors - 200 features long, in which the feature vectors within a matrix are temporally related, but I wish to reduce each matrix to a single, meaningful vector. I have applied PCA to the matrix in order to reduce its dimensionality to one with high variance, and am considering concatenating its rows together into one feature vector to summarize the data.
Is this a sensible approach, or are there better ways of achieving this?
So you have an n x 200 feature matrix, where n is your number of samples, and 200 features per sample, and each feature is temporally related to all others? Or you have individual feature matrices, one for each time point, and you want to run PCA on each of these individual feature matrices to find a single eigenvector for that time point, and then concatenate those together?
PCA seems more useful in the second case.
While this is doable, it may not be the best way to go about it, because you lose temporal sensitivity by collapsing together features from different times. Even if each feature in your final feature matrix represents a different time, most classifiers cannot learn that feature 2 follows feature 1, etc. So you lose the natural temporal ordering by doing this.
If you care about the temporal relationship between these features, you may want to take a look at recurrent neural networks, which allow you to feed information from t-1 into a node at the same time as feeding in your current features at t. In a sense they learn the relationship between the features at t-1 and t, which helps preserve temporal ordering. See this for an explanation: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
If you don't care about time and just want to group everything together, then yes PCA will help reduce your feature count. Ultimately it depends what type of information you think is more relevant to your problem.
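For the "second case" described above, a minimal sketch might look like this (shapes are assumed purely for illustration): one PCA per time point, keeping a single component each, then concatenating the per-time scores into one vector per sample.

```python
import numpy as np
from sklearn.decomposition import PCA

n_samples, n_times, n_features = 50, 10, 200
X = np.random.rand(n_samples, n_times, n_features)   # placeholder data

# One PCA per time point, one component each, then concatenate across time
per_time = [PCA(n_components=1).fit_transform(X[:, t, :]) for t in range(n_times)]
summary = np.hstack(per_time)                         # shape (n_samples, n_times)
```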

Clustering Method Selection in High-Dimension?

If the data to cluster are literally points (either 2D (x, y) or 3D (x, y, z)), choosing a clustering method is quite intuitive: because we can draw and visualize them, we have a better sense of which clustering method is suitable.
e.g.1 If my 2D data set is of the formation shown in the right top corner, I would know that K-means may not be a wise choice here, whereas DBSCAN seems like a better idea.
However, just as the scikit-learn website states:
While these examples give some intuition about the algorithms, this intuition might not apply to very high dimensional data.
AFAIK, in most practical problems we don't have such simple data. Most probably we have high-dimensional tuples, which cannot be visualized in that way.
e.g.2 I wish to cluster a data set where each point is represented as a 4-D tuple <characteristic1, characteristic2, characteristic3, characteristic4>. I CANNOT visualize it in a coordinate system and observe its distribution as before. So I will NOT be able to say that DBSCAN is superior to K-means in this case.
So my question:
How does one choose the suitable clustering method for such an "invisualizable" high-dimensional case?
"High-dimensional" in clustering probably starts at some 10-20 dimensions in dense data, and 1000+ dimensions in sparse data (e.g. text).
4 dimensions are not much of a problem, and can still be visualized; for example by using multiple 2d projections (or even 3d, using rotation); or using parallel coordinates. Here's a visualization of the 4-dimensional "iris" data set using a scatter plot matrix.
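A small sketch of how such plots can be produced for the iris data with pandas and matplotlib (assuming a reasonably recent scikit-learn for as_frame=True):

```python
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates, scatter_matrix
from sklearn.datasets import load_iris

df = load_iris(as_frame=True).frame.rename(columns={"target": "species"})

scatter_matrix(df.drop(columns="species"), figsize=(8, 8))   # all pairwise 2d projections
plt.show()

parallel_coordinates(df, "species")                          # one polyline per sample
plt.show()
```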
However, the first thing you still should do is spend a lot of time on preprocessing, and finding an appropriate distance function.
If you really need methods for high-dimensional data, have a look at subspace clustering and correlation clustering, e.g.
Kriegel, Hans-Peter, Peer Kröger, and Arthur Zimek. Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Transactions on Knowledge Discovery from Data (TKDD) 3.1 (2009): 1.
The authors of that survey also publish a software framework which includes many of these advanced clustering methods (not just k-means, but e.g. CASH, FourC, ERiC): ELKI
There are at least two common, generic approaches:
One can use a dimensionality reduction technique in order to actually visualize the high-dimensional data; there are dozens of popular solutions, including (but not limited to):
PCA - principal component analysis
SOM - self-organizing maps
Sammon's mapping
Autoencoder Neural Networks
KPCA - kernel principal component analysis
Isomap
After this, one either goes back to the original space and uses techniques that seem reasonable based on observations in the reduced space, or performs clustering in the reduced space itself. The first approach uses all available information, but can be misled by distortions induced by the reduction process, while the second ensures that your observations and choice are valid (as you reduce your problem to a nice 2d/3d one) but loses a lot of information due to the transformation used.
One tries many different algorithms and chooses the one with the best metrics (many clustering evaluation metrics have been proposed). This is a computationally expensive approach, but it has a lower bias (since reducing the dimensionality introduces changes that follow from the transformation used). A short sketch of both approaches is given below.
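A short, hedged sketch of both approaches on placeholder 4-D data (the algorithms, parameters and the silhouette metric are only examples):

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(np.random.rand(500, 4))   # placeholder 4-D data

# Approach 1: reduce to 2-D, inspect and/or cluster in the reduced space
X2 = PCA(n_components=2).fit_transform(X)
labels_reduced = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

# Approach 2: try several algorithms in the original space, compare an internal metric
for name, algo in [("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0)),
                   ("dbscan", DBSCAN(eps=0.5, min_samples=5)),
                   ("agglomerative", AgglomerativeClustering(n_clusters=3))]:
    labels = algo.fit_predict(X)
    if len(set(labels)) > 1:       # skip degenerate clusterings
        print(name, silhouette_score(X, labels))
```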
It is true that high-dimensional data cannot easily be visualized in a Euclidean coordinate plot, but it is not true that there are no visualization techniques for them.
In addition to this claim, I will add that with just 4 features (your dimensions) you can easily try the parallel coordinates visualization method. Or simply try a multivariate data analysis taking two features at a time (so 6 pairs in total) to figure out which relations occur between them (correlation and dependency, generally). Or you can even use a 3d space for three features at a time.
Then, how do you get some information from these visualizations? Well, it is not as easy as in a 2d/3d Euclidean space, but the point is to spot visually whether the data clusters into groups (e.g. near some values on an axis in a parallel coordinates diagram) and to think about whether the data is somehow separable (e.g. whether it forms regions like circles, or is linearly separable, in the scatter plots).
A little digression: the diagram you posted is not indicative of the power or capabilities of each algorithm for particular data distributions; it simply highlights the nature of some algorithms. For instance, k-means is able to separate only convex and ellipsoidal areas (and keep in mind that convexity and ellipsoids exist even in N dimensions). What I mean is that there is no rule saying: given the distributions depicted in this diagram, you have to choose the corresponding clustering algorithm.
I suggest using a data mining toolbox that lets you explore and visualize the data (and easily transform them, since you can change their topology with transformations, projections and reductions; check the other answer by lejlot for that), like Weka (plus you do not have to implement all the algorithms by yourself).
In the end I will point you to this resource for different cluster goodness and fitness measures, so you can compare the results from different algorithms.
I would also suggest soft subspace clustering, a pretty common approach nowadays, in which feature weights are added to find the most relevant features. You can use these weights to increase performance and to improve the BMU (best matching unit) calculation with Euclidean distance, for example.
