K-Medoids Cluster Analysis - machine-learning

What are some analysis functions which can be used on the K-Medoids algorithms?
My main aim is to compare results of 2 different clustering results in order to see which is better.
Can SSE (sum of squared errors) be applied to K-Medoids algorithm?

The original k-medoids publication discusses the measure ESS, along with several other measures such as average dissimilarity, maximum dissimilarity, and diameter, which may be more appropriate to use.
SSE is closely related to Euclidean distance, so it usually is not appropriate (unless, of course, you use Euclidean; but why would you use k-medoids then instead of k-means?)

ARI, NMI, and the Silhouette Coefficient can also be used to compare the results.
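For example, a minimal sketch with scikit-learn (the data and the two label vectors below are placeholders, and the Manhattan metric for the silhouette is only an assumption about the dissimilarity you clustered with):

    # Hypothetical comparison of two clustering results with scikit-learn.
    import numpy as np
    from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score, silhouette_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))             # placeholder data matrix
    labels_a = rng.integers(0, 3, size=100)   # placeholder clustering 1
    labels_b = rng.integers(0, 3, size=100)   # placeholder clustering 2

    # ARI and NMI measure how much the two labelings agree with each other.
    print("ARI:", adjusted_rand_score(labels_a, labels_b))
    print("NMI:", normalized_mutual_info_score(labels_a, labels_b))

    # The silhouette coefficient scores each clustering against the data itself;
    # with k-medoids you can pass the same dissimilarity you clustered with.
    print("Silhouette A:", silhouette_score(X, labels_a, metric="manhattan"))
    print("Silhouette B:", silhouette_score(X, labels_b, metric="manhattan"))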

Related

How to select features for clustering?

I have time-series data, which I have aggregated into 3 weeks and transposed into features.
Now I have features: A_week1, B_week1, C_week1, A_week2, B_week2, C_week2, and so on.
Some of the features are discrete, some continuous.
I am thinking of applying K-Means or DBSCAN.
How should I approach the feature selection in such situation?
Should I normalise the features? Should I introduce some new ones, that would somehow link periods together?
Since K-means and DBSCAN are unsupervised learning algorithms, feature selection for them is usually tied to a grid search: you try different feature subsets and parameter settings and evaluate the resulting clusterings with internal measures such as the Davies–Bouldin index or the Silhouette coefficient, among others. If you're using Python, you can use an exhaustive grid search to do this; see the scikit-learn documentation.
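As a rough sketch of that idea (note that scikit-learn's GridSearchCV expects a supervised scorer, so a plain parameter loop scored with internal indices is often simpler; the data and the parameter range below are placeholders):

    # Minimal parameter sweep for KMeans scored with internal indices.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, davies_bouldin_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = StandardScaler().fit_transform(rng.normal(size=(200, 6)))  # placeholder features

    results = []
    for k in range(2, 8):                                          # illustrative range of cluster counts
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        results.append((k,
                        silhouette_score(X, labels),       # higher is better
                        davies_bouldin_score(X, labels)))  # lower is better

    for k, sil, db in results:
        print(f"k={k}  silhouette={sil:.3f}  davies-bouldin={db:.3f}")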
Formalize your problem, don't just hack some code.
K-means minimizes the sum of squares. If the features have different scales, they get different influence on the optimization. Therefore, you need to carefully choose weights (scaling factors) for each variable to balance their importance the way you want (and note that a 2x scaling factor does not make the variable twice as important, because its contribution to the sum of squares grows quadratically).
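A minimal sketch of what such weighting could look like (the weight values here are made up; they are exactly the knob you have to choose):

    # Illustrative per-feature weighting before k-means: multiplying a
    # standardized column by w scales its squared-error contribution by w**2,
    # so a 2x factor makes that feature 4x as influential.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = rng.normal(size=(150, 3))            # placeholder features
    weights = np.array([1.0, 2.0, 0.5])      # hypothetical importance weights

    X_weighted = StandardScaler().fit_transform(X) * weights
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_weighted)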
For DBSCAN, the distance is only a binary decision: close enough, or not. If you use the GDBSCAN version, this is easier to understand than with distances. But with mixed variables, I would suggest to use the maximum norm. Two objects are then close if they differ in each variable by at most "eps". You can set eps=1, and scale your variables such that 1 is a "too big" difference. For example in discrete variables, you may want to tolerate one or two discrete steps, but not three.
Logically, it's easy to see that the maximum-distance threshold decomposes into a conjunction of one-variable clauses:
maxdistance(x, y) <= eps
    <=>
for all i: |x_i - y_i| <= eps
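A minimal DBSCAN sketch under these assumptions (the per-variable tolerances below are invented; the point is that after dividing by them, eps=1 with the Chebyshev/maximum norm means "differs by at most one tolerance unit in every variable"):

    # DBSCAN with the maximum norm: two points are neighbours only if they
    # differ by at most eps in every variable.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 4))                  # placeholder mixed-scale features
    tolerances = np.array([0.5, 0.5, 2.0, 3.0])    # hypothetical "too big" step per variable

    X_scaled = X / tolerances                      # now a difference of 1.0 means "one tolerance unit"
    labels = DBSCAN(eps=1.0, min_samples=5, metric="chebyshev").fit_predict(X_scaled)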

Image accuracy metric

What is an efficient and correct metric I can use to compare two images in matrix form? I have built a machine learning model which predicts an image and want to see how far off it is from the target, using a single number for easy comparison.
There are a lot of different methods you can use. I guess the most popular ones are:
Euclidean Distance
Chord Distance
Pearson’s Correlation Coefficient
Spearman Rank Coefficient
You can also read about these and other metrics (their main advantages and drawbacks) here: Image Registration - Principles, Tools and Methods / Author: Goshtasby, A. Ardeshir
DOI: 10.1007/978-1-4471-2458-0
Hope it helps.
Adding to the excellent start from Victor Oliveira Antonino, I suggest starting with either Pearson's correlation or cosine similarity. The rank coefficient isn't particularly applicable in this space; Euclidean and chord distance have properties that don't represent human interpretations of image similarity as well.
Each metric has advantages and disadvantages. When your application doesn't map readily to physical distance, Euclidean distance is unlikely to be the best choice.
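As a small illustration of this suggestion, here is a sketch that compares a predicted image with its target using Pearson's correlation and cosine similarity on the flattened pixel arrays (the images below are random placeholders):

    # Pearson correlation and cosine similarity between a predicted image and
    # its target, both given as 2-D arrays.  Flatten first so they are vectors.
    import numpy as np

    def pearson_similarity(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        return float(np.corrcoef(a, b)[0, 1])

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    target = np.random.rand(64, 64)                      # placeholder target image
    prediction = target + 0.1 * np.random.rand(64, 64)   # placeholder predicted image

    print("Pearson:", pearson_similarity(prediction, target))
    print("Cosine :", cosine_similarity(prediction, target))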

Difference between similarity strategies in Mahout recommenditembased

I am using mahout recommenditembased algorithm. What are the differences between all the --similarity Classes available? How to know what is the best choice for my application? These are my choices:
SIMILARITY_COOCCURRENCE
SIMILARITY_LOGLIKELIHOOD
SIMILARITY_TANIMOTO_COEFFICIENT
SIMILARITY_CITY_BLOCK
SIMILARITY_COSINE
SIMILARITY_PEARSON_CORRELATION
SIMILARITY_EUCLIDEAN_DISTANCE
What does each one mean?
I'm not familiar with all of them, but I can help with some.
Cooccurrence is how often two items occur with the same user. http://en.wikipedia.org/wiki/Co-occurrence
Log-Likelihood is the log of the probability that the item will be recommended given the characteristics you are recommending on. http://en.wikipedia.org/wiki/Log-likelihood
Not sure about Tanimoto.
City block is the distance between two instances if you assume you can only move around as in a checkerboard-style city. http://en.wikipedia.org/wiki/Taxicab_geometry
Cosine similarity is the cosine of the angle between the two feature vectors. http://en.wikipedia.org/wiki/Cosine_similarity
Pearson Correlation is covariance of the features normalized by their standard deviation. http://en.wikipedia.org/wiki/Pearson_correlation_coefficient
Euclidean distance is the standard straight line distance between two points. http://en.wikipedia.org/wiki/Euclidean_distance
To determine which is best for your application, you most likely need some intuition about your data and what it means. If your data has continuous-valued features, then something like Euclidean distance or Pearson correlation makes sense. If you have more discrete values, then something along the lines of city block or cosine similarity may make more sense.
Another option is to set up a cross-validation experiment where you see how well each similarity metric works to predict the desired output values and select the metric that works the best from the cross-validation results.
Tanimoto and Jaccard are similar; both are statistics used for comparing the similarity and diversity of sample sets.
https://en.wikipedia.org/wiki/Jaccard_index
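To make a few of the similarities listed above concrete, here is a plain-formula sketch on two binary "user liked this item" vectors; this only illustrates the formulas, not Mahout's implementation:

    # Plain-formula versions of some of the similarities discussed above,
    # applied to two binary preference vectors.
    import numpy as np

    x = np.array([1, 1, 0, 1, 0, 0, 1], dtype=float)
    y = np.array([1, 0, 0, 1, 1, 0, 1], dtype=float)

    tanimoto  = (x @ y) / (x @ x + y @ y - x @ y)   # Jaccard/Tanimoto on binary data
    cosine    = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    cityblock = np.abs(x - y).sum()                 # a distance, not a similarity
    euclidean = np.linalg.norm(x - y)

    print(tanimoto, cosine, cityblock, euclidean)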

What makes the distance measure in k-medoid "better" than k-means?

I am reading about the difference between k-means clustering and k-medoid clustering.
Supposedly there is an advantage to using the pairwise distance measure in the k-medoid algorithm, instead of the more familiar sum of squared Euclidean distance-type metric to evaluate variance that we find with k-means. And apparently this different distance metric somehow reduces noise and outliers.
I have seen this claim but I have yet to see any good reasoning as to the mathematics behind this claim.
What makes the pairwise distance measure commonly used in k-medoid better? More exactly, how does the lack of a squared term allow k-medoids to have the desirable properties associated with the concept of taking a median?
1. K-medoid is more flexible
First of all, you can use k-medoids with any similarity measure. K-means, however, may fail to converge - it really must only be used with distances that are consistent with the mean. For example, absolute Pearson correlation must not be used with k-means, but it works well with k-medoids.
2. Robustness of medoid
Secondly, the medoid as used by k-medoids is roughly comparable to the median (in fact, there also is k-medians, which is like K-means but for Manhattan distance). If you look up literature on the median, you will see plenty of explanations and examples why the median is more robust to outliers than the arithmetic mean. Essentially, these explanations and examples will also hold for the medoid. It is a more robust estimate of a representative point than the mean as used in k-means.
Consider this 1-dimensional example:
[1, 2, 3, 4, 100000]
Both the median and medoid of this set are 3. The mean is 20002.
Which do you think is more representative of the data set? The mean has the lower squared error, but assuming that there might be a measurement error in this data set ...
Technically, the notion of breakdown point is used in statistics. The median has a breakdown point of 50% (i.e. half of the data points can be incorrect, and the result is still unaffected), whereas the mean has a breakdown point of 0 (i.e. a single large observation can yield a bad estimate).
I do not have a proof, but I assume the medoid will have a similar breakdown point as the median.
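A quick numerical check of the 1-dimensional example above (the medoid is computed here as the data point minimizing the sum of absolute distances to all others):

    # Mean vs. median vs. medoid on [1, 2, 3, 4, 100000].
    import numpy as np

    data = np.array([1, 2, 3, 4, 100000], dtype=float)

    mean = data.mean()                                # 20002.0, dragged by the outlier
    median = np.median(data)                          # 3.0
    medoid = data[np.argmin(np.abs(data[:, None] - data[None, :]).sum(axis=1))]  # 3.0

    print(mean, median, medoid)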
3. k-medoids is much more expensive
That's the main drawback. Usually, PAM takes much longer to run than k-means. As it involves computing all pairwise distances, it is O(n^2 * k * i); whereas k-means runs in O(n * k * i), where usually k times the number of iterations satisfies k*i << n.
I think this has to do with the selection of the center for the cluster. k-means will select the "center" of the cluster, while k-medoid will select the "most centered" member of the cluster.
In a cluster with outliers (i.e. points far away from the other members of the cluster) k-means will place the center of the cluster towards the outliers, whereas k-medoid will select one of the more clustered members (the medoid) as the center.
It now depends on what you use clustering for. If you just wanted to classify a bunch of objects then you don't really care about where the center is; but if the clustering was used to train a decider which will now classify new objects based on those center points, then k-medoid will give you a center closer to where a human would place the center.
In wikipedia's words:
"It [k-medoid] is more robust to noise and outliers as compared to k-means because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances."
Here's an example:
Suppose you want to cluster on one dimension with k=2. One cluster has most of its members around 1000 and the other around -1000; but there is an outlier (or noise) at 100000.
It obviously belongs to the cluster around 1000, but k-means will pull the center point away from 1000 and towards 100000. This may even cause some of the members of the 1000 cluster (say, a member with value 500) to be assigned to the -1000 cluster.
k-medoid will select one of the members around 1000 as the medoid. It'll probably select one that is bigger than 1000, but it will not select the outlier.
Just a tiny note to add to Eli's answer: k-medoid is more robust to noise and outliers than k-means because the latter selects a cluster center that is mostly just a "virtual point", whereas the former chooses an actual object from the cluster.
Suppose you have five 2D points in one cluster with the coordinates (1,1), (1,2), (2,1), (2,2), and (100,100). If we don't consider object exchanges among the clusters, with k-means you will get the cluster center (21.2, 21.2), which is quite distracted by the point (100,100). However, k-medoid will choose the center among (1,1), (1,2), (2,1), and (2,2) according to its algorithm.
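The same comparison can be checked numerically; this sketch computes the centroid and the medoid (the point minimizing the sum of Euclidean distances to the others) for the five points above:

    # Centroid vs. medoid for the five 2-D points above.
    import numpy as np

    pts = np.array([[1, 1], [1, 2], [2, 1], [2, 2], [100, 100]], dtype=float)

    centroid = pts.mean(axis=0)                       # (21.2, 21.2), dragged by the outlier
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1).sum(axis=1)
    medoid = pts[np.argmin(dists)]                    # one of the four points near (1.5, 1.5)

    print(centroid, medoid)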
Here is a fun applet ( E.M. Mirkes, K-means and K-medoids applet. University of Leicester, 2011 ) with which you can randomly generate a dataset in the 2D plane and compare the k-medoid and k-means learning processes.

Performance Analysis of Clustering Algorithms

I have been given 2 data sets and want to perform cluster analysis for the sets using KNIME.
Once I have completed the clustering, I wish to carry out a performance comparison of 2 different clustering algorithms.
With regard to performance analysis of clustering algorithms, would this be a measure of time (algorithm time complexity and the time taken to perform the clustering of the data etc) or the validity of the output of the clusters? (or both)
Is there any other angle one could look at to identify the performance (or lack thereof) of a clustering algorithm?
Many thanks in advance,
T
It depends a lot on what data you have available.
A common way of measuring the performance is with respect to existing ("external") labels (albeit that would make more sense for classification than for clustering). There are around two dozen measures you can use for this.
When using an "internal" quality measure, make sure that it is independent of the algorithms. For example, k-means optimizes such a measure, and will always come out best when evaluating with respect to this measure.
There are two categories of clustering evaluation methods, and the choice depends on whether a ground truth is available. The first category is the extrinsic methods, which require the existence of a ground truth, and the other category is the intrinsic methods. In general, extrinsic methods try to assign a score to a clustering given the ground truth, whereas intrinsic methods evaluate a clustering by examining how well the clusters are separated and how compact they are.
For extrinsic methods (remember, you need to have a ground truth available), one option is to use the BCubed precision and recall metrics. The BCubed precision and recall metrics differ from traditional precision and recall in the sense that clustering is an unsupervised learning technique and therefore we do not know the labels of the clusters beforehand. For this reason, BCubed metrics evaluate the precision and recall for every object in a clustering of a given dataset according to the ground truth. The precision of an example is an indication of how many other examples in the same cluster belong to the same category as the example. The recall of an example reflects how many examples of the same category are assigned to the same cluster. Finally, we can combine these two metrics into one using the F measure.
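A minimal sketch of BCubed precision and recall (my own illustrative implementation, not code from the cited sources; the harmonic-mean combination at the end is one common way to fold them into a single score):

    # BCubed precision/recall: `clusters` are predicted cluster ids,
    # `truth` are ground-truth category ids, aligned by index.
    import numpy as np

    def bcubed(clusters, truth):
        clusters, truth = np.asarray(clusters), np.asarray(truth)
        precisions, recalls = [], []
        for i in range(len(clusters)):
            same_cluster = clusters == clusters[i]
            same_label = truth == truth[i]
            both = same_cluster & same_label
            precisions.append(both.sum() / same_cluster.sum())  # purity of i's cluster w.r.t. i
            recalls.append(both.sum() / same_label.sum())        # coverage of i's category
        p, r = float(np.mean(precisions)), float(np.mean(recalls))
        f = 2 * p * r / (p + r)                                   # harmonic-mean combination
        return p, r, f

    print(bcubed([0, 0, 1, 1, 1], [0, 0, 0, 1, 1]))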
Sources:
Data Mining Concepts and Techniques by Jiawei Han, Micheline, Kamber and Jian Pei
http://www.cs.utsa.edu/~qitian/seminar/Spring11/03_11_11/IR2009.pdf
My own experience in evaluating the performance of clustering
A simple approach for the extrinsic case, where a ground truth is available, is to use a distance metric between clusterings; the ground truth is simply considered to be a clustering itself. Two good measures to use are the Variation of Information by Meila and, in my humble opinion, the split-join distance by myself, also discussed by Meila. I do not recommend the Mirkin index or the Rand index - I've written more about this here on Stack Exchange.
These metrics can be split into two constituent parts, each representing the distance of one of the clusterings to the largest common subclustering. It is worthwhile to consider both parts; if the ground truth part (to common subclustering) is very small, it means that the tested clustering is close to a superclustering; if the other part is small it means that the tested clustering is close to the common subclustering and hence close to a subclustering of the ground truth. In both cases the clustering can be said to be compatible with the ground truth. For more information see the link above.
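For illustration, here is a small sketch of the Variation of Information, VI(U, V) = H(U) + H(V) - 2 I(U, V), treating the ground truth as a second clustering (computed in nats via scikit-learn's mutual_info_score; the label vectors are placeholders):

    # Variation of Information between two clusterings given as label arrays.
    import numpy as np
    from scipy.stats import entropy
    from sklearn.metrics import mutual_info_score

    def variation_of_information(u, v):
        h_u = entropy(np.bincount(u))      # entropy of clustering U (nats)
        h_v = entropy(np.bincount(v))      # entropy of clustering V (nats)
        return h_u + h_v - 2 * mutual_info_score(u, v)

    u = np.array([0, 0, 1, 1, 2, 2])       # tested clustering
    v = np.array([0, 0, 1, 1, 1, 1])       # ground truth treated as a clustering
    print(variation_of_information(u, v))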
There are several benchmarks for clustering algorithm evaluation with extrinsic quality measures (accuracy) and intrinsic measures (internal statistics of the formed clusters):
Clubmark demonstrated in ICDM'18
WebOCD, see description in the paper
Circulo
ParallelComMetric
CluSim
CoDAR (the sources might be acquired from the paper authors)
Selection of the appropriate benchmark depends on the kind of clustering algorithm (hard or soft clustering), the kind (pairwise relations, attributed datasets, or mixed) and size of the clustering data, the required evaluation metrics, and the admissible amount of supervision. The Clubmark paper describes the evaluation criteria in detail.
Clubmark was developed for the fully automatic parallel evaluation of many clustering algorithms (processing input data specified by pairwise relations) on many large datasets (millions and billions of clustering elements), evaluated mostly by accuracy metrics, while tracing resource consumption (processing and execution time, peak resident memory consumption, etc.).
But for a couple of algorithms on a couple of datasets even the manual evaluation is appropriate.