Normalization of PDFs in Jensen-Shannon and KL divergences

I am wondering why some people normalize the PDF values before calculating the JS or KL divergence in MATLAB?
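For context, KL and JS divergences are defined between probability distributions, so the inputs are expected to be non-negative and to sum to 1; normalizing the raw PDF/histogram values enforces exactly that. A minimal NumPy sketch of this normalization (my own illustration, not from the original question, which was about MATLAB):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as 1-D arrays."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Normalize so each input sums to 1 -- KL is only defined for
    # probability distributions, which is the step the question asks about.
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

def js_divergence(p, q):
    """Jensen-Shannon divergence built from two KL terms."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Raw (unnormalized) histogram counts: without normalization the result
# would depend on the total mass of each histogram, not just its shape.
hist_a = [10, 20, 30, 40]
hist_b = [40, 30, 20, 10]
print(js_divergence(hist_a, hist_b))
```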

Related

Converting Human Pose heatmaps into skeleton keypoints

I am using the HRNet estimator (SimpleHRNet) for extracting human poses and heatmaps.
I was processing a lot of videos to save pose keypoints and heatmaps, and after the processing I realized there is a major error in the pose keypoint files, but the heatmap files seem to be okay.
Is there a way to convert the heatmaps into pose keypoints in a fast and efficient manner? I just need a few ideas to implement.
I'm using the COCO keypoint format (17 keypoints), and the heatmaps are stored as .npy files, each containing heatmaps for all 17 keypoints (shape is 17, 96, 72).
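One common way to do this is to take the argmax of each joint's heatmap and rescale the peak location to image coordinates. A rough sketch under that assumption (the filename and the 384x288 image size are placeholders for whatever your pipeline actually uses):

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps, image_width, image_height):
    """Convert a (17, H, W) heatmap array into 17 (x, y, confidence) keypoints.

    Assumes each channel is a score map for one COCO joint; the peak
    location is taken as the keypoint and rescaled to image coordinates.
    """
    num_joints, hm_h, hm_w = heatmaps.shape
    keypoints = np.zeros((num_joints, 3), dtype=float)
    for j in range(num_joints):
        flat_idx = np.argmax(heatmaps[j])
        y, x = np.unravel_index(flat_idx, (hm_h, hm_w))
        keypoints[j, 0] = x * image_width / hm_w    # x in image coordinates
        keypoints[j, 1] = y * image_height / hm_h   # y in image coordinates
        keypoints[j, 2] = heatmaps[j, y, x]         # peak value as confidence
    return keypoints

# Example: one stored .npy file with shape (17, 96, 72); filename is hypothetical.
heatmaps = np.load("frame_0001.npy")
print(heatmaps_to_keypoints(heatmaps, image_width=288, image_height=384))
```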

Is there any reason to (not) L2-normalize vectors before using cosine similarity?

I was reading the paper "Improving Distributional Similarity with Lessons Learned from Word Embeddings" by Levy et al., and while discussing their hyperparameters, they say:
Vector Normalization (nrm) As mentioned in Section 2, all vectors (i.e. W’s rows) are normalized to unit length (L2 normalization), rendering the dot product operation equivalent to cosine similarity.
I then recalled that the default for the sim2 vector similarity function in the R text2vec package is to L2-norm vectors first:
sim2(x, y = NULL, method = c("cosine", "jaccard"), norm = c("l2", "none"))
So I'm wondering what the motivation might be for normalizing before computing cosine similarity (both in terms of text2vec and in general). I tried to read up on the L2 norm, but mostly it comes up in the context of normalizing before using the Euclidean distance. Surprisingly, I could not find anything on whether the L2 norm would be recommended for or against in the case of cosine similarity on word vector spaces/embeddings, and I don't quite have the math skills to work out the analytic differences.
So here is the question, meant in the context of word vector spaces learned from textual data (either plain co-occurrence matrices, possibly weighted by tf-idf, PPMI, etc., or embeddings like GloVe) and calculating word similarity (with the goal, of course, of using a vector space + metric that best reflects real-world word similarities): is there, in simple words, any reason to (not) use the L2 norm on a word-feature matrix/term-co-occurrence matrix before calculating cosine similarity between the vectors/words?
If you want to get cosine similarity, you DON'T need to normalize to unit L2 norm first and then calculate cosine similarity; cosine similarity already normalizes the vectors and then takes the dot product of the two.
If you are calculating Euclidean distance, then you DO need to normalize if distance or vector length is not an important distinguishing factor. If vector length is a distinguishing factor, then don't normalize, and calculate the Euclidean distance as it is.
text2vec handles everything automatically: it will make the rows have unit L2 norm and then call the dot product to calculate cosine similarity.
But if the matrix already has rows with unit L2 norm, the user can specify norm = "none" and sim2 will skip the first normalization step (which saves some computation).
I understand the confusion; probably I need to remove the norm option (it doesn't take much time to normalize the matrix).
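To make the equivalence concrete, here is a small NumPy check (my own illustration) that L2-normalizing the rows and then taking plain dot products gives the same numbers as computing cosine similarity directly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 50))  # 4 "word" vectors with 50 features each

# Cosine similarity computed directly
norms = np.linalg.norm(X, axis=1, keepdims=True)
cosine = (X @ X.T) / (norms @ norms.T)

# L2-normalize the rows first; then a plain dot product is cosine similarity
X_unit = X / norms
dot_of_normalized = X_unit @ X_unit.T

print(np.allclose(cosine, dot_of_normalized))  # True
```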

Document representation with pre-trained Word Vectors for Author Classification/Regression (GP)

I am trying to replicate the approach in https://arxiv.org/abs/1704.05513 to do Big 5 author classification on Facebook data (posts and Big 5 profiles are given).
After removing the stop words, I embed each word in the file with its pre-trained GloVe word vector. However, computing the average or coordinate-wise min/max word vector for each user and using those as input for a Gaussian Process/SVM gives me terrible results. Now I wonder what the paper means by:
Our method combines Word Embedding with Gaussian Processes. We extract the words from the users' tweets and average their Word Embedding representation into a single vector. The Gaussian Processes model then takes these vectors as an input for training and testing.
How else can I "average" the vectors to get decent results, and do they use some specific Gaussian Process?
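For reference, here is roughly what averaging the pre-trained vectors per user looks like (a sketch only; `glove` is assumed to be a dict-like word-to-vector mapping, and the tokenizer and stop-word list are stand-ins for whatever you actually use):

```python
import numpy as np

def user_vector(posts, glove, dim=300, stop_words=frozenset()):
    """Average the GloVe vectors of all in-vocabulary, non-stop words
    in a user's posts into a single document vector."""
    vectors = []
    for post in posts:
        for word in post.lower().split():          # naive tokenization
            if word in stop_words or word not in glove:
                continue
            vectors.append(glove[word])
    if not vectors:
        return np.zeros(dim)                       # user with no known words
    return np.mean(vectors, axis=0)

# The per-user vectors then become the rows of the design matrix that a
# Gaussian Process or SVM classifier/regressor is trained on.
```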

word2vec how to get words from vectors?

I use an ANN to predict words from words. The inputs and outputs are all word vectors. I do not know how to get words back from the output of the ANN. By the way, it's gensim I am using.
You can compute the cosine similarity of the vector with all other word vectors to find the nearest neighbors of your vector.
The nearest-neighbor search in an n-dimensional space can be brute force, or you can use libraries like FLANN, Annoy, or scikit-learn's KDTree to do it more efficiently.
Update: sharing a gist demonstrating the same:
https://gist.github.com/kampta/139f710ca91ed5fabaf9e6616d2c762b
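If the vectors come from a gensim model, the lookup can also be done directly with KeyedVectors.similar_by_vector, which performs the brute-force cosine search for you (a sketch assuming the gensim 4.x API and a toy model trained only for illustration):

```python
from gensim.models import Word2Vec

# Toy corpus just to have a trained model to query (gensim 4.x API).
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, seed=1)

# Suppose `predicted` is the vector your ANN produced; here we fake one.
predicted = model.wv["cat"]

# Nearest words to an arbitrary vector, ranked by cosine similarity.
print(model.wv.similar_by_vector(predicted, topn=3))
```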

Library for SVM and ANN with graphical toolkit

Well, hello everybody. I am doing a project that consists of detecting objects using a Kinect and SVM/ANN machine learning. Could you give me the names of libraries for SVM and ANN with a graphical tool? I only want to train the ANN with that library, save it as .xml, and then load the .xml with OpenCV.
An SVM is a classifier used to classify samples based on their feature vectors. So your task is to convert the images into feature vectors that can be used by the SVM for its training and testing.
To create feature vectors from your images there are several possibilities, and I am going to mention some very common techniques:
A very easy method is to create a normalized hue histogram of each of your images. Let's say you have created a hue histogram with 5 bins. Based on your image's colors there will be some values in these 5 bins; let's say the values look like {0.32, 0.56, 0, 0, 0.12}. This is now one input vector with 5 dimensions (i.e. the number of bins). You have to do the same procedure for all training samples, and then for the test image too.
Alternatively, extract some features from your input samples (e.g. with SIFT or SURF), then create their descriptors using SIFT/SURF. You can then use these descriptors as the input to your SVM for training.
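As a rough illustration of the hue-histogram idea combined with OpenCV's built-in SVM (which can be saved to and loaded from .xml, as the question asks), here is a sketch; the file names and labels are placeholders:

```python
import cv2
import numpy as np

def hue_histogram(image_bgr, bins=5):
    """Normalized hue histogram used as a 5-dimensional feature vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, None, norm_type=cv2.NORM_L1).flatten()

# Placeholder training data: (filename, class label) pairs.
training_items = [("object_a_01.png", 0), ("object_b_01.png", 1)]
features = np.array([hue_histogram(cv2.imread(f)) for f, _ in training_items],
                    dtype=np.float32)
labels = np.array([label for _, label in training_items], dtype=np.int32)

# OpenCV's SVM (cv2.ml) trains on the row-sample feature matrix.
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.train(features, cv2.ml.ROW_SAMPLE, labels)

svm.save("svm_model.xml")                  # save to XML as requested
loaded = cv2.ml.SVM_load("svm_model.xml")  # reload with OpenCV later
_, prediction = loaded.predict(features[:1])
print(prediction)
```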
