Find the optimum number of clusters (in hierarchical clustering) - SPSS

I am trying to do cluster analysis in SPSS. In R, we can use silhouette plots to determine the best number of clusters.
How can I find the optimum number of clusters using SPSS?
PS. I am new to SPSS.

You can use the STATS CLUS SIL command to generate silhouette plots and scores if that's specifically what you're after.
Sample syntax, using mostly default values, might look like this:
STATS CLUS SIL
CLUSTER=clus_var /* var w cluster classifications */
VARIABLES=pred_var1 TO pred_var10 /* vars used to form clusters */
NEXTBEST=nb_clus_var /* output var. holds next best classifications */
SILHOUETTE=s_value /* output var. holds silhouette scores */
DISSIMILARITY=EUCLID /* make sure this matches measure in kmeans */
MINKOWSKIPOWER=2
/OPTIONS MISSING=RESCALE RENUMBERORDINAL=NO
/OUTPUT HISTOGRAM=YES ORIENTATION=HORIZONTAL THREEDBAR=YES THREEDCOUNTS=NO .
Potentially helpful links:
IBM: Using the silhouette procedure to evaluate kmeans
stackoverflow: How to visualize the effect of running the k-means algorithm in SPSS
Stats.StackExchange: How to Calculate silhouette coefficient in SPSS for clustered data set?
On a side note, you might also consider the DISCRIMINANT command as another tool for evaluating the distinctiveness of your clusters.
DISCRIMINANT
/GROUPS=clus_var4 (1 4) /* assumes 4 cluster classifications */
/VARIABLES=pred_var1 TO pred_var10 /* vars used to form clusters */
/ANAL all
/METHOD = MAHAL
/PRIORS SIZE
/HISTORY = STEP
/ROTATE struct
/STATISTICS = CROSSVALID COEFF
/CLASSIFY = NONMISSING POOLED .
You can look at the cross-validated classification statistics in the output to see how often the predicted cluster classification matches the actual one.

Related

Derive the right k in k-means clustering (including k = 1) in pyspark

I want to check whether clustering would be helpful or not on my coordinates.
I'm dealing with trajectories and want to check whether all of them start in the same area (the trajectories themselves are different). The aim here is to characterise the most frequent departure points.
However, sometimes there is no need for clustering. I'm using k-means here. I had thought of using the silhouette score, but I don't see how it is mathematically defined for the case where there is only one cluster. DBSCAN would not be a good fit, as the densities are not similar across the clusters I want to build.
Would you have an idea for a kind of check between k=1 and k=3 to find the best split for my data? I'm dealing with coordinate data (latitude/longitude) where the starting point is not 100% fixed but can vary within 2 km around a kind of barycentre.
A simple extract with k=2:
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

# Assemble the coordinates into a single feature vector.
vecAssembler = VectorAssembler(inputCols=["lat", "lon"], outputCol="features")
df1 = vecAssembler.transform(df)

# Train a k-means model with k = 2.
kmeans = KMeans().setK(2).setSeed(1)
model = kmeans.fit(df1.select('features'))

# Make predictions and score the clustering with the silhouette.
transformed = model.transform(df1)
evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='features',
                                metricName='silhouette', distanceMeasure='squaredEuclidean')
evaluator.evaluate(transformed)
Is there a way to compute the k=1 case in PySpark, in order to derive the elbow or gap statistic?
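One possible check (a sketch only, assuming the df and df1 from the snippet above): spark.ml's KMeans requires k > 1, so the k=1 cost has to be computed by hand. Unlike the silhouette, the within-cluster sum of squares (WSS) is defined for k=1, so comparing WSS across k = 1..3 gives an elbow-style check. trainingCost on the model summary assumes Spark 2.4+; on older versions model.computeCost plays the same role.
from pyspark.sql import functions as F
from pyspark.ml.clustering import KMeans

# k = 1: WSS is simply the summed squared distance to the barycentre.
m = df.select(F.avg('lat').alias('mlat'), F.avg('lon').alias('mlon')).first()
wss = {1: df.select(F.sum(F.pow(F.col('lat') - m['mlat'], 2)
                          + F.pow(F.col('lon') - m['mlon'], 2)).alias('wss')).first()['wss']}

# k >= 2: reuse the fitted models' training cost.
for k in (2, 3):
    model = KMeans().setK(k).setSeed(1).fit(df1.select('features'))
    wss[k] = model.summary.trainingCost
print(wss)  # a sharp drop from k = 1 to k = 2 suggests clustering helps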

Convert dgeMatrix for downstream tasks

I am trying to cluster sentence embeddings based on the GloVe model from text2vec. I generated the embeddings using the GloVe model like so (I create the iterator, vocabulary, etc. in the standard way).
# create document term matrix
dtm = create_dtm(it, vectorizer)
# assign the word embeddings
common_terms = intersect(colnames(dtm), rownames(word_vectors) )
# normalise
dtm_averaged <- text2vec::normalize(dtm[, common_terms], "l1")
# compute average sentence embeddings
sentence_vectors = dtm_averaged %*% word_vectors[common_terms, ]
The resulting object is of class dgeMatrix, which as I understand it is equivalent to the base matrix class. The dgeMatrix class isn't used by many downstream tasks, so I would like to convert it. The object, however, is 6 GB large, and I have problems converting the matrix to a data frame or even to a text file for further processing.
Ideally, I'd use this matrix in Spark for further analysis such as k-means clustering. My question is: what would be the best strategy to use the matrix for downstream tasks?
a) Convert to matrix class or data frame
b) Write the matrix to file?
c) Something completely different
I run the models on Google Cloud on a machine with 32 GB RAM and 28 CPUs.
Thanks for your help.
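For what it's worth, a minimal sketch of option (b): assuming the dgeMatrix is first coerced with as.matrix() and written to CSV on the R side, the Spark side (Python here, file name hypothetical) would then be roughly:
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

# Load the exported embeddings and assemble all columns into a feature vector.
spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("sentence_vectors.csv", header=True, inferSchema=True)
features = VectorAssembler(inputCols=df.columns, outputCol="features").transform(df)
model = KMeans().setK(10).setSeed(1).fit(features.select("features"))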

Recommended values for OpenCV RTrees parameters

Any idea on the recommended parameters for OpenCV RTrees? I have read the documentation and I'm trying to apply it to the MNIST dataset, i.e. 60000 training images with 10000 testing images. I'm trying to optimize MaxDepth, MinSampleCount, setMaxCategories, and setPriors, e.g.:
Ptr<RTrees> model = RTrees::create();
/* Depth of the tree.
A low value will likely underfit and conversely
a high value will likely overfit.
The optimal value can be obtained using cross validation
or other suitable methods.
*/
model->setMaxDepth(?); // letter_recog.cpp uses 10
/* minimum samples required at a leaf node for it to be split.
A reasonable value is a small percentage of the total data e.g. 1%.
MNIST 70000 * 0.01 = 700
*/
model->setMinSampleCount(700?); // letter_recog.cpp uses 10
/* regression_accuracy – Termination criteria for regression trees.
If all absolute differences between an estimated value in a node and
values of train samples in this node are less than this parameter
then the node will not be split. */
model->setRegressionAccuracy(0); // I think this is already correct
/*
use_surrogates – If true then surrogate splits will be built.
These splits allow working with missing data and computing variable importance correctly.
To compute variable importance correctly, the surrogate splits must be enabled in
the training parameters, even if there is no missing data.
*/
model->setUseSurrogates(true); // I think this is already correct
/*
Cluster possible values of a categorical variable into K <= max_categories clusters
to find a suboptimal split. If a discrete variable, on which the training procedure
tries to make a split, takes more than max_categories values, the precise best subset
estimation may take a very long time because the algorithm is exponential.
Instead, many decision trees engines (including ML) try to find sub-optimal split
in this case by clustering all the samples into max_categories clusters that is
some categories are merged together. The clustering is applied only in n>2-class
classification problems for categorical variables with N > max_categories possible values.
In case of regression and 2-class classification the optimal split can be found
efficiently without employing clustering, thus the parameter is not used in these cases.
*/
model->setMaxCategories(?); // letter_recog.cpp uses 15
/*
priors – The array of a priori class probabilities, sorted by the class label value.
The parameter can be used to tune the decision tree preferences toward a certain class.
For example, if you want to detect some rare anomaly occurrence, the training base will
likely contain much more normal cases than anomalies, so a very good classification
performance will be achieved just by considering every case as normal.
To avoid this, the priors can be specified, where the anomaly probability is
artificially increased (up to 0.5 or even greater), so the weight of the misclassified
anomalies becomes much bigger, and the tree is adjusted properly. You can also think about
this parameter as weights of prediction categories which determine relative weights that
you give to misclassification. That is, if the weight of the first category is 1 and
the weight of the second category is 10, then each mistake in predicting the
second category is equivalent to making 10 mistakes in predicting the first category.
*/
model->setPriors(Mat()); // ?
/* If true then variable importance will be calculated and
then it can be retrieved by CvRTrees::get_var_importance().
*/
model->setCalculateVarImportance(true); // I think this is already correct
/*
The size of the randomly selected subset of features at each tree node and
that are used to find the best split(s). If you set it to 0 then the size
will be set to the square root of the total number of features.
*/
model->setActiveVarCount(0); // I think this is already correct
/*
CV_TERMCRIT_ITER Terminate learning by the max_num_of_trees_in_the_forest;
CV_TERMCRIT_EPS Terminate learning by the forest_accuracy;
CV_TERMCRIT_ITER | CV_TERMCRIT_EPS Use both termination criteria.
*/
model->setTermCriteria(TC(100,0.01f)); // TC is the TermCriteria helper from letter_recog.cpp; I think this is already correct
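There is no single "recommended" value for most of these; as the comments above keep saying, cross validation (or at least a hold-out set) is the usual way to pick them. A minimal tuning sketch using the Python bindings (assumed: X_train/y_train/X_val/y_val prepared from MNIST as float32 feature rows and int32 labels):
import cv2
import numpy as np

# Sketch: hold-out evaluation of one candidate maxDepth for cv2.ml.RTrees.
def accuracy(max_depth, X_train, y_train, X_val, y_val):
    model = cv2.ml.RTrees_create()
    model.setMaxDepth(max_depth)
    model.setMinSampleCount(10)
    model.setCalculateVarImportance(True)
    model.setActiveVarCount(0)  # 0 -> sqrt(number of features)
    model.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 100, 0.01))
    model.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
    _, preds = model.predict(X_val)
    return float(np.mean(preds.ravel() == y_val.ravel()))

# for d in (5, 10, 15, 20): print(d, accuracy(d, X_train, y_train, X_val, y_val))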

Encog query classification

I'm trying to process this dataset using Encog. In order to do so, I combined the outputs into one (I can't seem to figure out how to use multiple expected outputs, even though I unsuccessfully tried to manually train a NN with 4 output neurons) with the values: "disease1", "disease2", "none" and "both".
Starting from there, I used the analyst wizard on the CSV, and the automatic process trained a NN with the expected outputs. A peek at the file:
"field:1","field:2","field:3","field:4","field:5","field:6","field:7","Output:field:7"
40.5,yes,yes,yes,yes,no,both,both
41.2,no,yes,yes,no,yes,second,second
Now my problem is: how do I query it? I tried classification, but as far as I've understood, the result only gives me the values {0,1,2}, so there are two classes which I can't differentiate (both map to 0).
The same problem applies to the Iris example presented in the wiki. Also, how does Encog extrapolate from the output neuron values to the 0/1/2 results?
Edit: the solution I found was to use a separate network for disease 1 and disease 2, but I would really like to know whether it is possible to combine those into one.
You are correct, that you will need to combine the output column to a single value. Encog analyst will only classify to a single output column. That output column can have many different values. So normalizing the two output columns to none,first,second,both will work. If you use the underlying neural networks directly, you could actually train for two outputs each doing an independent classification. But for this discussion I will assume we are dealing with the analyst.
Are you querying the network using the workbench, or in code? By default Encog analyst encodes to the neural network using equilateral encoding. This results in a number of output neurons equal to n-1, where n is the number of classes. If you choose one-of-n encoding in the analyst wizard, then the regular classify method on the BasicNetwork will work, as it is only designed for one-of-n.
If you would like to query (in code) using equilateral, then you can use a method similar to the following. I am adding this to the next version of Encog.
/**
 * Used to classify a neural network that has been encoded using equilateral encoding.
 * This is the default for the Encog analyst. Equilateral encoding uses an output count
 * equal to the number of classes minus one.
 * @param input The input to the neural network.
 * @param high The high end of the normalization range, usually 1.
 * @param low The low end of the normalization range, usually -1 or 0.
 * @return The class that the input belongs to.
 */
public int classifyEquilateral(final MLData input, double high, double low) {
    MLData result = this.compute(input);
    Equilateral eq = new Equilateral(getOutputCount() + 1, high, low);
    return eq.decode(result.getData());
}
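For intuition about what eq.decode is doing (and how Encog gets from raw output neuron values to the 0/1/2 results): equilateral encoding places the n classes at n mutually equidistant points in n-1 dimensions, and decoding returns the index of the nearest point. An illustrative sketch (Python, not Encog's actual API; the vertex table is assumed given):
import numpy as np

# Illustrative only, not Encog code: decoding picks the class whose
# encoding vertex is closest to the raw network output.
def decode_equilateral(output, vertices):
    # vertices: (n_classes, n_classes - 1) array of the encoding points
    dists = np.linalg.norm(vertices - np.asarray(output), axis=1)
    return int(np.argmin(dists))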

Hierarchical clustering using FLANN in OpenCV

I'm trying to use the method hierarchicalClustering from OpenCV 2.4.2.
It runs without error, but the problem is that I don't understand the parameters it accepts, e.g. branching.
And I think that is what causes my problem: I always get just one cluster.
My input is a cv::Mat of LBPH features (for face detection) with 12 rows and 6272 columns.
No matter what the value of the branching factor is, I always get just one cluster, and its centroid is the mean of the rows of the input matrix groupped_one_person_features.
Could you advise?
Thanks a lot!
Here's the code:
cv::Mat groupped_one_person_features;
.... // fill groupped_one_person_features with data
int Nclusters=50;
cv::Mat centroids (Nclusters,Features.data[0][0].cols,CV_32FC1);
int count = cv::flann::hierarchicalClustering<cvflann::L1<float>>groupped_one_person_features,centroids,cvflann::KMeansIndexParams(2000,11,cvflann::FLANN_CENTERS_KMEANSPP));
First of all, you missed a parenthesis in your last line:
int count = cv::flann::hierarchicalClustering<cvflann::L1<float>>(groupped_one_person_features,centroids,cvflann::KMeansIndexParams(2000,11,cvflann::FLANN_CENTERS_KMEANSPP));
In the order, the parameters are (according to flann_base.hpp):
The points to be clustered
The computed cluster centers. Matrix should be preallocated and centers.rows is the number of clusters requested.
The clustering parameters
The distance to be used for clustering
Therefore, if you always get one cluster, it possibly means that your centroids matrix only has one row. Can you verify this?
The parameters of KMeansIndexParams are (according to kmeans_index.h):
branching factor: the number of children of a node in the tree
iterations: max iterations to perform in one kmeans clustering (kmeans tree)
centers_init: algorithm used for picking the initial cluster centers for kmeans tree
cb_index: cluster boundary index. Used when searching the kmeans tree