I have a word2vec model for every user, so I can see how the same word is represented across different users' models. Is there a more optimized way to compare the trained models than this?
from gensim.models import Word2Vec
import numpy as np

userAvec = Word2Vec.load('userAvec.w2v')
userBvec = Word2Vec.load('userBvec.w2v')

#for a word in both vocabs, perform dot product:
cosine_similarity = np.dot(userAvec.wv['president'], userBvec.wv['president']) / (
    np.linalg.norm(userAvec.wv['president']) * np.linalg.norm(userBvec.wv['president']))
Is this the best way to compare two models? Is there a stronger way to see how two models compare rather than word by word? Picture 1000 users/models, each with a similar number of words in the vocab.
There's a faulty assumption at the heart of your question.
If the models userAvec and userBvec were trained in separate sessions, on separate data, the calculated angle between userAvec['president'] and userBvec['president'] is, alone, essentially meaningless. There's randomness in the algorithm's initialization, and then in most modes of training (via things like negative sampling, frequent-word downsampling, and arbitrary reordering of training examples due to thread-scheduling variability). As a result, even repeated model-training with the exact same corpus and parameters can result in different coordinates for the same words.
It's only the relative distances/directions, among words that were co-trained in the same iterative process, that have significance.
So it might be interesting to compare whether the two models' lists of top-N similar words, for a particular word, are similar. But the raw value of the angle between the coordinates of the same word in alternate models isn't a meaningful measure.
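For example, here's a minimal sketch of that top-N comparison; the Jaccard-overlap measure is just one illustrative choice, not anything built into gensim:

def neighbor_overlap(modelA, modelB, word, topn=10):
    # Jaccard overlap between the two models' top-N neighbor sets for `word`;
    # assumes `word` is present in both vocabularies
    neighborsA = {w for w, _ in modelA.wv.most_similar(word, topn=topn)}
    neighborsB = {w for w, _ in modelB.wv.most_similar(word, topn=topn)}
    return len(neighborsA & neighborsB) / len(neighborsA | neighborsB)

#overlap = neighbor_overlap(userAvec, userBvec, 'president')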
Related
I have been searching and attempting to implement a word embedding model to predict similarity between words. I have a dataset made up of 3,550 company names; the idea is that the user can provide a new word (which would not be in the vocabulary) and calculate the similarity between the new name and the existing ones.
During preprocessing I got rid of stop words and punctuation (hyphens, dots, commas, etc.). In addition, I applied stemming and separated prefixes in the hope of getting more precision. Words such as BIOCHEMICAL then ended up as BIO CHEMIC, i.e. the word divided in two (prefix and stem).
The average company name is made up of 3 words, with the following frequency:
The tokens that are the result of preprocessing are sent to word2vec:
#window: Maximum distance between the current and predicted word within a sentence
#min_count: Ignores all words with total frequency lower than this.
#workers: Use these many worker threads to train the model
#sg: The training algorithm, either CBOW (0) or skip-gram (1). Default is 0 (CBOW).
word2vec_model = Word2Vec(prepWords, size=300, window=2, min_count=1, workers=7, sg=1)
After the model has included all the words in the vocab, the average sentence vector is calculated for each company name:
df['avg_vector'] = df.apply(lambda row: avg_sentence_vector(row, model=word2vec_model, num_features=300, index2word_set=set(word2vec_model.wv.index2word)).tolist(), axis=1)
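The avg_sentence_vector helper isn't shown in the post; here is a minimal sketch of what such a helper typically does, matching the call above (my own reconstruction, not the poster's exact code):

import numpy as np

def avg_sentence_vector(words, model, num_features, index2word_set):
    # average the vectors of the tokens that are in the model's vocabulary
    feature_vec = np.zeros(num_features, dtype='float32')
    n_words = 0
    for word in words:
        if word in index2word_set:
            n_words += 1
            feature_vec = np.add(feature_vec, model.wv[word])
    if n_words > 0:
        feature_vec = np.divide(feature_vec, n_words)
    return feature_vec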
Then, the vector is saved for further lookups:
##Saving name and vector values in file
df.to_csv('name-submission-vectors.csv',encoding='utf-8', index=False)
If a new company name is not included in the vocab after preprocessing (removing stop words and punctuation), I rebuild the model, recalculate the average sentence vectors, and save them again.
I have found this model is not working as expected. For example, asking for the words most similar to pet gives the following results:
ms=word2vec_model.most_similar('pet')
('fastfood', 0.20879755914211273)
('hammer', 0.20450574159622192)
('allur', 0.20118337869644165)
('wright', 0.20001833140850067)
('daili', 0.1990675926208496)
('mgt', 0.1908089816570282)
('mcintosh', 0.18571510910987854)
('autopart', 0.1729743778705597)
('metamorphosi', 0.16965581476688385)
('doak', 0.16890916228294373)
In the dataset I have words such as paws or petcare, yet other, unrelated words are forming relationships with the word pet.
This is the distribution of the nearest words to pet:
On the other hand, when I used GoogleNews-vectors-negative300.bin.gz, I could not add new words to the vocab, but the similarity between pet and the words around it was as expected:
ms=word2vec_model.most_similar('pet')
('pets', 0.771199643611908)
('Pet', 0.723974347114563)
('dog', 0.7164785265922546)
('puppy', 0.6972636580467224)
('cat', 0.6891531348228455)
('cats', 0.6719794869422913)
('pooch', 0.6579219102859497)
('Pets', 0.636363685131073)
('animal', 0.6338439583778381)
('dogs', 0.6224827170372009)
This is the distribution of the nearest words:
I would like to get your advice about the following:
Is this dataset appropriate to proceed with this model?
Is the length of the dataset enough to allow word2vec "learn" the relationships between the words?
What can I do to improve the model so that word2vec creates relationships of the same kind as GoogleNews, where for instance the word pet is correctly placed among similar words?
Is it feasible to implement another alternative such as fasttext considering the nature of the current dataset?
Do you know any public dataset that can be used along with the current dataset to create those relationships?
Thanks
3500 texts (company names) of just ~3 words each is only around 10k total training words, with a much smaller vocabulary of unique words.
That's very, very small for word2vec & related algorithms, which rely on lots of data, and sufficiently-varied data, to train-up useful vector arrangements.
You may be able to squeeze some meaningful training from limited data by using far more training epochs than the default epochs=5, and far smaller vectors than the default size=100. With those sorts of adjustments, you may start to see more meaningful most_similar() results.
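For example, a minimal sketch of those adjustments (gensim 4.x parameter names; older gensim used size= and iter=; the exact values are illustrative, not tuned):

from gensim.models import Word2Vec

# far smaller vectors and far more epochs than the defaults (100 and 5)
word2vec_model = Word2Vec(prepWords, vector_size=32, window=2, min_count=1, sg=1, epochs=200)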
But, it's unclear that word2vec, and specifically word2vec in your averaging-of-a-name's-words comparisons, is matched to your end goals.
Word2vec needs lots of data, doesn't look at subword units, and can't say anything about word-tokens not seen during training. An average-of-many-word-vectors can often work as an easy baseline for comparing multiword texts, but might also dilute some words' influence compared to other methods.
Things to consider might include:
Word2vec-related algorithms like FastText that also learn vectors for subword units, and can thus bootstrap not-so-bad guess vectors for words not seen in training. (But these are also data-hungry, and to use them on a small dataset you'd again want to reduce vector size, increase epochs, and additionally shrink the number of buckets used for subword learning; see the sketch after this list.)
More sophisticated comparisons of multi-word texts, like "Word Mover's Distance". (That can be quite expensive on longer texts, but for names/titles of just a few words may be practical.)
Finding more data that's compatible with your aims for a stronger model. A larger database of company names might help. If you just want your analysis to understand English words/roots, more generic training texts might work too.
For many purposes, a mere lexicographic comparison (edit distances, count of shared character n-grams) may be helpful too, though it won't detect all synonyms/semantically-similar words.
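A hedged gensim sketch of the FastText variant suggested in the first bullet (gensim 4.x parameter names; the specific values are illustrative, not tuned):

from gensim.models import FastText

# small vectors, many epochs, and far fewer subword buckets than the
# default bucket=2000000, to suit a tiny corpus
fasttext_model = FastText(prepWords, vector_size=32, window=2, min_count=1, sg=1, epochs=200, bucket=20000, min_n=3, max_n=6)

# subword units let it synthesize a guess-vector for an unseen token:
oov_vector = fasttext_model.wv['biochemical']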
Word2vec does not generalize to unseen words.
It does not even work well for words that are seen but rare. It really depends on having many, many examples of word usage. Furthermore, you need enough context to the left and right, but you only use company names, and these are too short. That is likely why your embeddings perform so poorly: too little data and too-short texts.
Hence, it is the wrong approach for you. Retraining the model with the new company name is not enough: you still only have one data point. You may as well leave out unseen words; word2vec cannot work better than that even if you retrain.
If you only want to compute similarity between words, probably you don't need to insert new words in your vocabulary.
I think you can also use FastText without the need to stem the words; it also computes vectors for unknown words.
From the FastText FAQ:
"One of the key features of fastText word representation is its ability to produce vectors for any words, even made-up ones. Indeed, fastText word vectors are built from vectors of substrings of characters contained in it. This allows to build vectors even for misspelled words or concatenation of words."
FastText seems to be useful for your purpose.
For your task, you can follow the FastText supervised tutorial.
If your corpus proves to be too small, you can build your model starting from available pretrained vectors (the pretrainedVectors parameter).
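A minimal sketch with the fasttext Python package (file names here are illustrative; dim must match the pretrained vectors):

import fasttext

# training file: one example per line, prefixed with __label__ tags,
# per the fastText supervised format
model = fasttext.train_supervised(input='company_names.train', dim=300, pretrainedVectors='wiki.en.vec')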
I'm working on a regression algorithm, in this case k-nearest neighbors, to predict the price of a product.
So I have a training set which has only one categorical feature, with 4 possible values. I've dealt with it using a one-of-K (one-hot) categorical encoding scheme, which means I now have 3 more columns in my Pandas DataFrame with a 0/1 depending on the value present.
The other features in the DataFrame are mostly numerical: distances, latitude/longitude coordinates for locations, and prices.
Should I standardize (Gaussian distribution with zero mean and unit variance) and normalize before or after the categorical encoding?
I'm thinking it might be beneficial to normalize after encoding, so that every feature is as important to the estimator as every other when measuring distances between neighbors, but I'm not really sure.
Seems like an open problem, so I'd like to answer even though it's late. I am also unsure how much the similarity between the vectors would be affected, but in my practical experience you should first encode your features and then scale them. I have tried the opposite with scikit-learn's preprocessing.StandardScaler(), and it doesn't work if your feature vectors do not have the same length: scaler.fit(X_train) yields ValueError: setting an array element with a sequence. I can see from your description that your data has a fixed number of features, but for generalization purposes (maybe you will have new features in the future?) it's good to assume that each data instance has a unique feature-vector length. For instance, I transform my text documents into word indices with Keras text_to_word_sequence (which gives me differing vector lengths), then I convert them to one-hot vectors, and then I standardize them. I have actually not seen a big improvement from the standardization.
I think you should also reconsider which of your features to standardize, as dummies might not need to be standardized; here it doesn't seem like the categorical attributes need any standardization or normalization. K-nearest neighbors is distance-based, so it can be affected by these preprocessing techniques. I would suggest trying either standardization or normalization and checking how different models react with your dataset and task.
After. Just imagine that your column contained strings, not numerical variables. You can't standardize strings, right? :)
But given what you wrote about the categories: if they are represented with values, I suppose there is some kind of ranking inside them, so you could probably use the raw column rather than one-hot encoding it. Just a thought.
You generally want to standardize all your features, so it would be done after the encoding (that is, assuming you want to standardize to begin with, considering that there are some machine learning algorithms that do not need standardized features to work well).
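For instance, a minimal scikit-learn sketch of encode-then-scale feeding a k-NN regressor (column names are illustrative, not from the question):

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neighbors import KNeighborsRegressor

# one-hot-encode the categorical column, scale the numeric ones, then fit k-NN;
# wrapping everything in a Pipeline keeps the fitting restricted to the training split
preprocess = ColumnTransformer([
    ('cat', OneHotEncoder(handle_unknown='ignore'), ['category']),
    ('num', StandardScaler(), ['latitude', 'longitude', 'distance']),
])
knn = Pipeline([
    ('prep', preprocess),
    ('model', KNeighborsRegressor(n_neighbors=5)),
])
#knn.fit(X_train, y_train); knn.predict(X_test)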
So there is 50/50 voting on whether to standardize the data or not.
I would suggest, given the positive effects (however small the improvement gains) and the absence of adverse effects, doing standardization before splitting and training the estimator.
I'm building a machine learning model where some columns are physical addresses (which I can translate into X/Y coordinates), but I'm a little bit confused about how this will be handled by the ML algorithm.
Is there a particular way to translate a geo location into columns for use in ML (classification and/or regression)?
Thanks in advance!
The choice of features would, in general, depend on what kind of relationship you anticipate between the features and the target variable. You are right in saying that post code number itself does not bear any relation to the target. Here the postcode is simply a string, or a category. What kind of model are you planning to use? Linear regression and Decision tree are two examples. These models capture relationships in different ways. As an example for a feature, you could compute the straight line distance between the source and destination, and use that in the model, since intuitively, the farther they are, the higher the transit time is likely to be. What else does the transit time depend on? See if you can relate the factors influencing the travel time to the information that you have, i.e., the postcodes / XY co-ordinates, in some way.
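As a concrete illustration of the straight-line-distance feature mentioned above (a hedged sketch; the function and column names are my own, not from the question):

import math

# haversine great-circle distance in km between two lat/long points,
# usable as a derived feature for the model
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

#df['distance_km'] = df.apply(lambda r: haversine_km(r.src_lat, r.src_lon, r.dst_lat, r.dst_lon), axis=1)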
This summarizes the answer we ended up with in the comments on the question:
This transformation from ZIP codes to geo-coordinates should not be seen as a "split" but only as a way to represent your data in a multidimensional way (in this case the dimension will be 2).
Machine learning algorithms exist for both unidimensional and multidimensional data. The two dimensions can be correlated or uncorrelated, depending on how you define the parameters of the model you choose afterwards.
Moreover, the correlation does not have to be explicitly set in most cases. Only an initial value may be useful, but many algorithms also rely on random initialization or other simple methods that estimate it from a subset of your data. So, for clarity's sake: if you model your data by a Gaussian, for example, then when estimating the parameters of this Gaussian, the covariance matrix will have non-diagonal terms that are non-zero, and these represent the data correlation. You only need to avoid assuming that the 2 dimensions are uncorrelated!
I am using HMMs for speech recognition of separate words. I have trained my HMMs on my database, and I calculate and compare likelihood probabilities for an incoming audio signal. The problem I have is that different words have different numbers of optimal states, which gives different numbers of search paths (number of search paths = states^observations), so the probabilities can't be compared. How do I normalize the effect of different numbers of states?
You need either a context-free grammar or a language model (usually a 3-gram probabilistic model) to recognize utterances rather than single words. Then you use an appropriate algorithm to calculate a score for each path. I strongly recommend taking a look at existing solutions like Kaldi or CMUSphinx.
I am trying to use relations between objects for a supervised learning task.
For example, given a text like "Cats eat fish", I would like to use the relation Cats-eat-fish as a feature for the learning task (namely, identifying the sense of a word). I thus would like to represent this relation numerically so that I could use it as a feature for learning a model. Any suggestions on how I could accomplish that? I was thinking of hashing it to an integer, but that could pose challenges: two relations that are semantically the same could have two very different hash values. I would ideally like two similar relations (for example, lives and resides) to hash to the same value. I guess I would also need to figure out if I could canonicalize relations before hashing.
Other approaches perhaps not using numerical features would also be useful. I am also wondering if there are graph based approaches to this problem.
I'd suggest making (very large numbers of) binary features for all possible relation types, and then possibly running some form of dimensionality reduction on the resulting (very sparse) feature space.
Another way to do this, which would reduce sparsity, would be to replace the bare words with entity types, for example [animal] eats [animal], or even [animate] eats [animate], and then use binary features in this space. You want to avoid mapping to numerical values on a single dimension because you'll impose spurious ordinal relations between features if you do.
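A minimal hedged sketch of the binary-features-plus-reduction idea (the relations and library choices here are illustrative, not prescriptive):

from sklearn.feature_extraction import DictVectorizer
from sklearn.decomposition import TruncatedSVD

# each relation becomes one binary feature; TruncatedSVD then reduces
# the very sparse space to a dense low-dimensional one
relations = [
    {'cat-eat-fish': 1},
    {'dog-eat-meat': 1},
    {'[animal]-eat-[animal]': 1},  # typed variant, as suggested above
]
X = DictVectorizer().fit_transform(relations)  # sparse binary matrix
X_dense = TruncatedSVD(n_components=2).fit_transform(X)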
How about representing verbs by features that express the typical words preceding the verb (usually the subject) and the typical words following it (usually the object)? Say you take the 500 most frequent words (or, even better, the most discriminating words); then each verb would be represented as a 1000-dimensional vector. Each feature in the vector can be binary (is the word present with frequency above a certain threshold or not), a pure count, or, probably best, a log count. Then you can run PCA to reduce the vector to some smaller dimension.
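A small hedged sketch of that context-vector idea, with placeholder counts standing in for real corpus statistics:

import numpy as np
from sklearn.decomposition import PCA

# for each verb, count how often each of the N most frequent words appears
# just before / just after it, log-scale the counts, then reduce with PCA
N = 500
# counts_before, counts_after: (n_verbs, N) arrays collected from a corpus;
# random placeholders here, for illustration only
counts_before = np.random.poisson(1.0, size=(10, N))
counts_after = np.random.poisson(1.0, size=(10, N))
verb_vecs = np.log1p(np.hstack([counts_before, counts_after]))  # (10, 1000)
verb_vecs_small = PCA(n_components=5).fit_transform(verb_vecs)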
The approach above is probabilistic, which might be good or bad depending on what you want. If you want to do it precisely, with a lot of manual input, then look at situation semantics.