I'm trying to work out the right number of clusters needed to cluster some data.
I know this is possible using the Davies–Bouldin Index (DBI).
To use DBI, you compute it for each candidate number of clusters, and the number that minimizes the DBI is the right number of clusters.
The question is:
How do I know whether 2 clusters are better than 1 cluster using DBI? In other words, how can I compute the DBI when I have just one cluster?
Only considering the average DBI of all clusters is apparently not a good idea.
Certainly, increasing the number of clusters k without a penalty will always reduce the DBI of the resulting clustering, down to the extreme case of zero DBI when each data point is its own cluster (because each data point coincides with its own centroid).
How do I know whether 2 clusters are better than 1 cluster using DBI? How can I compute the DBI when I have just one cluster?
So it's hard to say which one is better if you only use the average DBI as the performance metric.
A good practical method is to use the Elbow method.
Another method looks at the percentage of variance explained as a function of the number of clusters: you should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data. More precisely, if you graph the percentage of variance explained by the clusters against the number of clusters, the first clusters will add a lot of information (explain a lot of variance), but at some point the marginal gain will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion".
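To make the DBI sweep and the elbow criterion above concrete, here is a minimal sketch using scikit-learn; the toy data and the range of k are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Illustrative toy data with 3 well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

inertias, dbis = {}, {}
for k in range(2, 8):  # DBI is undefined for k = 1
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_  # within-cluster sum of squares, for the elbow plot
    dbis[k] = davies_bouldin_score(X, km.labels_)

best_k = min(dbis, key=dbis.get)  # the k that minimizes the DBI
```

Plotting `inertias` against k gives the elbow curve; note that the DBI sweep has to start at k = 2, which is exactly the limitation raised in the question.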
Some other good alternatives with respect to choosing the optimal number of clusters:
Determining the number of clusters in a data set
How to define number of clusters in K-means clustering?
I am currently studying CluStream, and I have some doubts regarding the results. I will proceed to explain:
If the micro clusters are clustered using k-means, we all know that every micro cluster will belong to the closest macro cluster (computing the Euclidean distance between the centers).
Now, looking at the following sample result:
we can see that the macro clusters do not group all the micro clusters …
What does this mean? How should we treat the micro clusters that do not lie inside any macro cluster? Should I assign each micro cluster to its closest macro cluster in order to label them?
EDIT:
Checking the MOA source code on GitHub, I found that the macro cluster radius is calculated by multiplying the average deviation by the so-called 'radius factor' (whose value is fixed at 1.8). However, when I ask the macro clusters for their weights, if a huge time window is used and there is no fading component, I can see that the macro clusters summarize the information of all the points ... all the current micro clusters are considered! So, even if we see some micro clusters that stay outside the macro cluster spheres, we know that they belong to the closest one - it's k-means after all!
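As an aside, a minimal sketch of how a radius of this kind can be computed. The cluster-feature summary (count, linear sum, squared sum) follows the CluStream paper, but the function name and the wiring of the fixed factor here are illustrative, not MOA's actual code:

```python
import numpy as np

RADIUS_FACTOR = 1.8  # the fixed factor mentioned above

def micro_cluster_radius(n, ls, ss):
    """Radius = RADIUS_FACTOR * average per-dimension RMS deviation.

    n  -- number of points summarized by the cluster
    ls -- per-dimension linear sum of the points
    ss -- per-dimension sum of squares of the points
    """
    ls, ss = np.asarray(ls, float), np.asarray(ss, float)
    variance = ss / n - (ls / n) ** 2          # per-dimension variance
    deviation = np.sqrt(np.maximum(variance, 0.0))
    return RADIUS_FACTOR * deviation.mean()
```

The point of the 1.8 multiplier is that one RMS deviation around the center covers only part of the mass of a roughly Gaussian cluster; scaling it up makes the drawn sphere cover most (but, as observed above, not all) of the assigned micro clusters.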
So, I still have a question: why calculate the macro cluster radius that way? I mean, what does it represent? Shouldn't the algorithm return the labeled micro clusters instead?
Any feedback is welcome. TIA!
The key question is: what does the user need?
Labeling micro-clusters is okay, but what use is it to the user?
In most cases, all that people use of the k-means result is the cluster centers, because the entire objective of k-means is essentially "find the best k-point approximation to the data".
So likely all the information users of CluStream will use is the k current cluster centers, maybe the weight of each, and their age.
What is the general convention for the number of clusters k when performing k-means on the KDD99 dataset? Three different papers I read use three completely different values of k (25, 20 and 5). I would like to know the general opinion on this - for example, what should the range of k be?
Thanks
The K-means clustering algorithm is used to find groups which have not been explicitly labeled in the data.
In general there is no method for determining the exact value of K, but an estimate can be obtained.
One approach to finding K is to look at how the mean distance between data points and their cluster centroid changes as K varies.
The elbow method and the kernel method work more precisely, but the number of clusters can also depend on your problem. (Recommended)
And one quick approach is: take the square root of the number of data points divided by two and use that as the number of clusters.
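That rule of thumb is a one-liner; `rule_of_thumb_k` is a hypothetical helper name used here for illustration:

```python
import math

def rule_of_thumb_k(n_points):
    """Quick heuristic: k = sqrt(n / 2). A starting point, not an answer."""
    return max(1, round(math.sqrt(n_points / 2)))
```

For the KDD99 question above, note that this heuristic grows with dataset size, which is one reason different papers subsampling the data differently can end up with very different k.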
I have read some resources and I found out how hierarchical clustering works. However, when I compare it with k-means clustering, it seems to me that k-means really produces a specific number of clusters, whereas hierarchical analysis shows me how the samples can be clustered. What I mean is that I do not get a specific number of clusters in hierarchical clustering; I get only a scheme of how the clusters can be constituted and a picture of the relations between the samples.
Thus, I cannot understand where I can use this clustering method.
Hierarchical clustering (HC) is just another distance-based clustering method like k-means. The number of clusters can be roughly determined by cutting the dendrogram produced by HC. Determining the number of clusters in a data set is not an easy task for any clustering method, and usually depends on your application. Tuning the thresholds in HC may be more explicit and straightforward for researchers, especially for a very large data set. I think this question is also related.
In k-means clustering, k is a hyperparameter that you need to choose in order to divide your data points into clusters. In hierarchical clustering (let's take one type of hierarchical clustering, i.e. agglomerative), you first consider every point in your dataset as its own cluster, then repeatedly merge the two most similar clusters based on a similarity metric until you are left with a single cluster. I will explain this with an example.
Suppose you initially have 13 points (x_1, x_2, ..., x_13) in your dataset, so at the start you have 13 clusters. In the second step, say you get 7 clusters (x_1-x_2, x_4-x_5, x_6-x_8, x_3-x_7, x_11-x_12, x_10, x_13) based on the similarity between the points. In the third step, say you get 4 clusters (x_1-x_2-x_4-x_5, x_6-x_8-x_10, x_3-x_7-x_13, x_11-x_12). Continuing like this, you eventually reach a step where all the points in your dataset form one cluster, which is also the last step of the agglomerative clustering algorithm.
So in hierarchical clustering there is no k to fix up front: depending on your problem, if you want 7 clusters, stop at the second step; if you want 4 clusters, stop at the third step; and so on.
A practical advantage of hierarchical clustering is the possibility of visualizing the results using a dendrogram. If you don't know in advance what number of clusters you're looking for (as is often the case...), the dendrogram plot can help you choose k without creating separate clusterings. A dendrogram can also give great insight into the data structure and help identify outliers, etc. Hierarchical clustering is also deterministic, whereas k-means with random initialization can give you different results when run several times on the same data.
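A minimal sketch of this with SciPy: a single linkage matrix encodes the whole hierarchy, and cutting the same tree at different levels yields different cluster counts without re-clustering. The toy data is an illustrative assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs

# Illustrative toy data.
X, _ = make_blobs(n_samples=60, centers=3, random_state=0)

# One linkage matrix encodes the entire merge hierarchy.
Z = linkage(X, method="ward")

# Cut the same tree at different levels to get different numbers of clusters.
labels_3 = fcluster(Z, t=3, criterion="maxclust")
labels_7 = fcluster(Z, t=7, criterion="maxclust")

# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree,
# supporting the visual choice of k described above.
```

This is exactly the "stop at the second step / stop at the third step" idea from the previous answer, expressed as two cuts of one tree.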
Hope this helps.
I am running a k-means algorithm in R and trying to find the optimal number of clusters, k. Using the silhouette method, the gap statistic, and the elbow method, I determined that the optimal number of clusters is 2. While there are no predefined clusters for the business, I am concerned that k=2 is not very insightful, which leads me to a few questions.
1) What does an optimal k = 2 mean in terms of the data's natural clustering? Does this suggest that maybe there are no clear clusters, or that having no clusters at all would be better than any clustering?
2) At k = 2, the R-squared is low (.1). At k = 5, the R-squared is much better (.32). What are the exact trade-offs in selecting k = 5 knowing it's not optimal? Would it be that you can increase the number of clusters, but they may not be distinct enough?
3) My n = 1000; I have 100 variables to choose from, but selected only 5 based on domain knowledge. Would increasing the number of variables necessarily make the clustering better?
4) As a follow up to question 3, if a variable is introduced and lowers the R-squared, what does that say about the variable?
I am no expert, but I will try to answer as best I can:
1) Your optimal-cluster-number methods gave you k=2, which suggests there is clear clustering - the number is just low (2). To interpret this, use your knowledge of the domain: do 2 clusters make sense given your domain?
2) Yes, you're correct. The optimal solution in terms of R-squared is to have as many clusters as data points, but this isn't optimal in terms of why you're doing k-means. You're doing k-means to gain insight from the data - that is your primary goal. So if you choose k=5, your data will fit the 5 clusters better, but, as you say, there probably isn't much distinction between them, so you're not gaining any insight.
3) Not necessarily; in fact, adding variables blindly can make it worse. K-means operates in Euclidean space, so every variable is given an equal weighting in determining the clusters. If you add variables that are not relevant, their values will still distort the n-dimensional space, making your clusters worse.
4) (Double-check my logic here, I'm not 100% sure on this one.) If a variable is introduced with the same number of clusters and it lowers the R-squared, that suggests the variable does not share the cluster structure of the other variables: it adds within-cluster variance that the clustering cannot explain.
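For reference, a sketch of how the R-squared discussed in 2) and 4) can be computed from a k-means fit, as R² = 1 − SS_within / SS_total (the between-cluster share of total variance); the helper name and toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Illustrative toy data.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

def clustering_r2(X, k):
    """R^2 = 1 - SS_within / SS_total for a k-means clustering."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    ss_total = ((X - X.mean(axis=0)) ** 2).sum()
    return 1 - km.inertia_ / ss_total  # inertia_ is the within-cluster SS
```

Because SS_within can only shrink as k grows, this R² never decreases with k for a fixed feature set, which is exactly why a higher R² at k=5 does not by itself justify choosing k=5.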
I have run modularity (edge_weight / randomized) at a resolution of 1 at least 20 times on the same network. This is a network I created based on the following rule: two nodes are related if they have at least one item in common. Every time I run modularity I get a slightly different node distribution among communities. Additionally, I get 9 or 10 communities, but it is never consistent. Any comment or help is much appreciated.
I found a solution to my problem using consensus clustering. One way to get the optimal clusters, without having to solve for them in a high-dimensional space using spectral clustering, is to run the algorithm repeatedly until no further change in the partitions occurs. Here is the paper that describes it, with the complete explanation:
Andrea Lancichinetti & Santo Fortunato, "Consensus clustering in complex networks", Scientific Reports 2:336, DOI: 10.1038/srep00336
The consensus matrix. Let us suppose that we wish to combine nP partitions found by a clustering algorithm on a network with n vertices. The consensus matrix D is an n x n matrix, whose entry Dij indicates the number of partitions in which vertices i and j of the network were assigned to the same cluster, divided by the number of partitions nP. The matrix D is usually much denser than the adjacency matrix A of the original network, because in the consensus matrix there is an edge between any two vertices which have co-occurred in the same cluster at least once. On the other hand, the weights are large only for those vertices which are most frequently co-clustered, whereas low weights indicate that the vertices are probably at the boundary between different (real) clusters, so their classification in the same cluster is unlikely and essentially due to noise. We wish to maintain the large weights and to drop the low ones, therefore a filtering procedure is in order. Among other things, in the absence of filtering the consensus matrix would quickly grow into a very dense matrix, which would make the application of any clustering algorithm computationally expensive. We discard all entries of D below a threshold t. We stress that there might be some noisy vertices whose edges could all be below the threshold, and they would not be connected anymore. When this happens, we just connect them to their neighbors with highest weights, to keep the graph connected all along the procedure.
Next we apply the same clustering algorithm to D and produce another set of partitions, which is then used to construct a new consensus matrix D', as described above. The procedure is iterated until the consensus matrix turns into a block diagonal matrix Dfinal, whose weights equal 1 for vertices in the same block and 0 for vertices in different blocks. The matrix Dfinal delivers the community structure of the original network. In our calculations typically one iteration is sufficient to lead to stable results. We remark that in order to use the same clustering method all along, the latter has to be able to detect clusters in weighted networks, since the consensus matrix is weighted. This is a necessary constraint on the choice of the methods for which one could use the procedure proposed here. However, it is not a severe limitation, as most clustering algorithms in the literature can handle weighted networks or can be trivially extended to deal with them.
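A minimal sketch of one round of the consensus-matrix construction described in the excerpt; `consensus_matrix` is a hypothetical helper, the thresholding mirrors the filtering step, and the re-connection of isolated vertices is omitted for brevity:

```python
import numpy as np

def consensus_matrix(partitions, threshold=0.5):
    """Build the consensus matrix D from several partitions of the same vertices.

    partitions -- sequence of label vectors, shape (n_partitions, n_vertices);
    D[i, j] is the fraction of partitions placing i and j in the same cluster,
    with entries below `threshold` filtered out, as in the paper's procedure.
    """
    partitions = np.asarray(partitions)
    n_p, n = partitions.shape
    D = np.zeros((n, n))
    for labels in partitions:
        # 1 where vertices i and j share a label in this partition.
        D += (labels[:, None] == labels[None, :]).astype(float)
    D /= n_p
    D[D < threshold] = 0.0  # the filtering step: drop low co-occurrence weights
    return D
```

Feeding D back into the same (weighted) clustering algorithm and repeating until D becomes block diagonal is the iteration the excerpt describes, and it is what removes the run-to-run inconsistency of randomized modularity optimization.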
I think that the answer lies in the randomization step of the algorithm. You can find more details here:
https://github.com/gephi/gephi/wiki/Modularity
https://sites.google.com/site/findcommunities/
http://lanl.arxiv.org/abs/0803.0476