The difference between k and centroids in the k-means algorithm - machine-learning

I have a bit of confusion about the k-means clustering algorithm. The initial step in the algorithm is to provide centroid seeds before calculating the similarity matrix. Is there a difference between the number of clusters k and the number of centroids?

There is a centroid for each cluster. So no, there is no difference between the number of clusters k and the number of centroids.
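For illustration, here is a minimal sketch (scikit-learn, made-up toy data): fitting with n_clusters=3 produces exactly 3 centroids, one per cluster.

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)              # toy data
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_.shape)        # (3, 2) -> one 2-d centroid per cluster
print(len(np.unique(km.labels_)))       # 3 distinct cluster labels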

Related

Is there any reason to (not) L2-normalize vectors before using cosine similarity?

I was reading the paper "Improving Distributional Similarity
with Lessons Learned from Word Embeddings" by Levy et al., and while discussing their hyperparameters, they say:
Vector Normalization (nrm) As mentioned in Section 2, all vectors (i.e. W’s rows) are normalized to unit length (L2 normalization), rendering the dot product operation equivalent to cosine similarity.
I then recalled that the default for the sim2 vector similarity function in the R text2vec package is to L2-norm vectors first:
sim2(x, y = NULL, method = c("cosine", "jaccard"), norm = c("l2", "none"))
So I'm wondering what the motivation might be for this combination of normalizing and cosine similarity (both in terms of text2vec and in general). I tried to read up on the L2 norm, but mostly it comes up in the context of normalizing before using the Euclidean distance. Surprisingly, I could not find anything on whether the L2 norm is recommended for or against in the case of cosine similarity on word vector spaces/embeddings, and I don't quite have the math skills to work out the analytic differences.
So here is the question, meant in the context of word vector spaces learned from textual data (either plain co-occurrence matrices, possibly weighted by tf-idf, PPMI, etc., or embeddings like GloVe) and of calculating word similarity (the goal of course being to use a vector space + metric that best reflects real-world word similarities). In simple words, is there any reason to (not) use the L2 norm on a word-feature / term-co-occurrence matrix before calculating cosine similarity between the vectors/words?
If you want cosine similarity, you DON'T need to normalize to unit L2 norm first and then calculate cosine similarity: cosine similarity already normalizes the vectors and then takes the dot product of the two.
If you are calculating Euclidean distance, then you DO need to normalize if distance or vector length is not an important distinguishing factor. If vector length is a distinguishing factor, then don't normalize and calculate the Euclidean distance as it is.
text2vec handles everything automatically - it will make the rows have unit L2 norm and then call the dot product to calculate cosine similarity.
But if the matrix already has rows with unit L2 norm, the user can specify norm = "none" and sim2 will skip the first normalization step (which saves some computation).
I understand the confusion - I should probably remove the norm option (normalizing the matrix doesn't take much time).
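To see the equivalence concretely, here is a small sketch (plain numpy, made-up vectors): L2-normalizing first and then taking the dot product gives the same number as computing cosine similarity directly.

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.5, 1.0])

# cosine similarity computed directly
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# L2-normalize, then dot product
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)
dot_of_units = a_unit @ b_unit

print(np.isclose(cosine, dot_of_units))  # True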

Is there any unsupervised learning algorithm that does not require assigning k?

Traditional unsupervised learning approaches normally need the number of clusters (k) to be assigned before computing. But what if I do not know the exact value of k and want to leave it out of the algorithm? I mean, is there any unsupervised learning algorithm that does not need any k to be assigned, so that we get the number of clusters automatically?
Affinity propagation
DBSCAN
Mean shift
For more details, check scikit-learn docs here.
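As a quick sketch (scikit-learn, made-up toy data), DBSCAN takes no k at all; the number of clusters falls out of the density parameters eps and min_samples.

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(200, 2)                                   # toy data
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise
print(n_clusters)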
You could try to infer the number of clusters with metrics such as the Akaike information criterion, the Bayesian information criterion, the silhouette score, or the elbow method. I've also heard people talk about automatic clustering methods based on self-organizing maps (SOM), but you'd have to do your own research there.
In my experience it usually just boils down to exploring the data with manifold methods such as t-SNE and/or density based methods such as DBSCAN and then setting k either manually or with a suitable heuristic.
There is also hierarchical clustering, from graph theory. You can achieve clustering either bottom-up or top-down.
Bottom up
define distance metric (Euclidean, Manhattan...)
start with each point in its own cluster
merge closest two clusters
There are three ways to select the closest cluster:
complete link -> two clusters with the smallest maximum pairwise distance
single link -> two clusters with the smallest minimum pairwise distance
average link -> average distance between all pairwise distances
Single-linkage clustering can be solved with Kruskal's minimum spanning tree algorithm; however, while easy to understand, it runs in O(n^3). There is a variation of Prim's MST algorithm which can solve this in O(n^2).
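A rough sketch of the bottom-up approach (scipy, made-up toy points): build the linkage with single/complete/average linkage and cut the tree at a distance threshold, so the number of clusters is not fixed up front.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(50, 2)                          # toy points
Z = linkage(X, method='single')                    # or 'complete', 'average'

labels = fcluster(Z, t=0.2, criterion='distance')  # cut the dendrogram at distance 0.2
print(len(np.unique(labels)))                      # number of clusters found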
Top-down aka Divisive Analysis
Start with all points in the same cluster and divide clusters at each iteration.
There are other clustering algorithms which you can google, some already mentioned in other answers. I have not used the others, so I will leave them out.

SIFT features and classification of images?

I am new to image processing, and I want to extract image features in order to do some classification. I am having problems understanding the pipeline.
As far as I understand, I have images and I run the SIFT algorithm on them. This gives me a set of descriptors for each image; the number of descriptors varies, but each has a fixed length of 128.
I then proceed to cluster them, since it is not possible to apply algorithms to a varying number of features. For this, I stack up all the descriptors of all images and run the k-means algorithm with the desired number of clusters. What I get is k features of length 128.
Here is where I am confused: now that I have these new descriptors, what do I do with them? I don't understand how I can plug them into a classifier if they represent all images. Shouldn't each image have its own separate features to be fed into a classifier?
I am sure I did not understand the concept, but can anybody please clarify what happens after I get a k*128 matrix? What is fed into, for example, an SVM classifier, and how? How does this k-means result suffice to train a classifier?
Thanks!
EDIT: I might have confused keypoints and descriptors, sorry new to image processing!
You should look into the image classification/image retrieval approach known as 'bag of visual words' - it is extremely relevant. A bag of visual words is a fixed-length feature vector v which summarises the occurrences of the features in an image. This makes use of what is called a codebook (also called a dictionary, from historical uses in text retrieval), which in your case is built from your k-means clustering. To make v for a given image, the simplest approach is to assign v[j] the proportion of SIFT descriptors that are closest to the jth cluster centroid. This means the length of v is K, so it is independent of the number of SIFT features that are detected in the image.
Concretely, suppose you've done k-means clustering with K = 100. Let's use ci to denote the ith cluster centre. For SIFT, this would be a vector of size 128. Now, for a given input image, you make this vector v, which is of size 100 and initialized with zeros. You then extract features from the image and their corresponding descriptors. Let's say there are N descriptors, and we will call them d0, d1, ..., d(N-1), where dj is the jth descriptor. For each dj you compute the vector distance between it and the cluster centres c0, c1, ..., c99. You then take the cluster index k with the lowest distance to dj and increment: v[k] += 1. Note that this process can be parallelised very well, particularly on GPUs. Also, it can be faster to replace this process with approximate nearest neighbours, using e.g. the FLANN library.
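Here is a hedged sketch of that step (numpy + scipy, made-up shapes standing in for real SIFT output and real k-means centres): you build one fixed-length vector v per image, and those vectors, together with their image labels, are what you feed to the SVM.

import numpy as np
from scipy.spatial.distance import cdist

K = 100
centroids = np.random.rand(K, 128)       # stand-in for the K-means cluster centres
descriptors = np.random.rand(37, 128)    # stand-in for one image's SIFT descriptors (N = 37)

# distance from every descriptor to every centroid, then nearest-centroid index
nearest = cdist(descriptors, centroids).argmin(axis=1)

v = np.bincount(nearest, minlength=K).astype(float)
v /= v.sum()                             # proportions instead of raw counts

print(v.shape)                           # (100,) -> fixed length, whatever N was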

How to classify MNIST data set using k-means clustering?

I am applying k-means clustering to the MNIST dataset. How can I then predict the values of my test set according to this?
Well, k-means is an unsupervised technique, so technically speaking you don't use it to "classify" - i.e., a k-means model isn't supplied with labeled data (and if it is, it doesn't use the class labels), and moreover it doesn't return a prediction as a class label (e.g., "1").
So, to use k-means to predict the single digit encoded in a given data instance:
your k-means model is comprised of a set of centroids (I assume you chose 10 centroids to correspond to the digits 0-9 in base 10); each centroid represents the geometric center of one cluster - one cluster per digit
calculate the pairwise Euclidean distance (vector norm) between your unknown data point and each centroid in your k-means model (the centroid values from the final iteration, obviously)
the cluster whose centroid is the least distance from the unknown data point is the cluster to which the unknown data point belongs
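A hedged sketch of that procedure (scikit-learn; load_digits is used here as a small MNIST-like stand-in, and the cluster-to-digit mapping comes from a majority vote over the training labels, which the k-means model itself does not give you):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_train)

# map each cluster to the majority digit among the training points it contains
cluster_to_digit = np.array([
    np.bincount(y_train[km.labels_ == c]).argmax() for c in range(10)
])

# predict: nearest centroid -> that cluster's majority digit
y_pred = cluster_to_digit[km.predict(X_test)]
print((y_pred == y_test).mean())         # rough accuracy of this unsupervised pipeline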

Libsvm: should SVM normalization start from 0 or 0.001?

I am using libsvm for my document classification.
I use svm.h and svm.cc only in my project.
Its struct svm_problem requires an array of svm_node entries that are non-zero, i.e. a sparse representation.
I get a vector of tf-idf values that are, let's say, in the range [5,10]. If I normalize it to [0,1], all the 5's become 0.
Should I remove these zeros when sending the data to svm_train?
Would removing them not reduce the information and lead to poor results?
Should I start the normalization from 0.001 rather than 0?
And in general, for an SVM, does normalizing to [0,1] reduce information?
An SVM is not Naive Bayes: feature values are not counters but dimensions in a multidimensional real-valued space, and 0's carry exactly the same amount of information as 1's (which also answers your concern about removing 0 values - don't do it). There is no reason to ever normalize data to [0.001, 1] for an SVM.
The only issue here is that column-wise normalization is not a good idea for tf-idf, as it will degenerate your features to plain tf (for a particular i-th dimension, tf-idf is simply the tf value in [0,1] multiplied by a constant idf, so the normalization just multiplies by idf^-1). I would consider one of these alternative preprocessing methods (a rough sketch of the first follows the list):
normalizing each dimension, so it has mean 0 and variance 1
decorrelation by mapping x to C^(-1/2) x, where C is the data covariance matrix
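A rough sketch of the first option (scikit-learn's StandardScaler, with made-up dense features standing in for the tf-idf matrix; note that mean-centering a sparse matrix would destroy its sparsity):

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(100, 20) * 5 + 5      # stand-in for tf-idf features in [5, 10]
X_test = np.random.rand(20, 20) * 5 + 5

scaler = StandardScaler().fit(X_train)         # learn mean/variance on training data only
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)

print(X_train_std.mean(axis=0).round(3))       # ~0 per dimension
print(X_train_std.std(axis=0).round(3))        # ~1 per dimension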
