I was able to fine-tune a language model using fastai. I would like to extract sentence embeddings from the fine-tuned model for sentence similarity. How do I get the encoder model embeddings? Also, can these embeddings be compared with a dot product, like embeddings from other models such as USE?
data_lm = TextLMDataBunch.from_df(train_df=se1, valid_df=se2, path="", text_cols='text')
learn = language_model_learner(data_lm, drop_mult=0.7, pretrained=True, arch=AWD_LSTM)
learn.fit_one_cycle(3, 1e-01)
My code is above; how can I get encodings from learn?
This should give you the encoder (which is an embedding layer):
learn.model[0].encoder
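For sentence similarity you then need to pool that into one vector per sentence. Below is a rough sketch, assuming fastai v1 and the data_lm / learn objects above: mean-pool the fine-tuned embedding layer over the tokens of each sentence. For a dot-product comparison you would want to normalize the vectors first, so cosine similarity is the safer choice. (A stronger approach would run the full encoder and pool its hidden states.)

import torch
import torch.nn.functional as F

embedding = learn.model[0].encoder  # the fine-tuned nn.Embedding layer

def sentence_vector(tokens):
    # tokens: a list of already-tokenized strings; numericalize with the LM vocab
    ids = torch.tensor(data_lm.vocab.numericalize(tokens))
    return embedding(ids).mean(dim=0)

v1 = sentence_vector(['the', 'cat', 'sat'])
v2 = sentence_vector(['a', 'cat', 'was', 'sitting'])
print(F.cosine_similarity(v1, v2, dim=0).item())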
I need some information.
I used this: https://towardsdatascience.com/improving-sentence-embeddings-with-bert-and-representation-learning-dfba6b444f6b to extract features, but I got word embeddings.
If I want sentence embeddings from a BERT model trained on my data, how can I do that?
Example: the sentence "I want running" --> a [1, 768] embedding array.
Thanks.
I can recommend several approaches. If you use HuggingFace, try the following:
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # the last hidden states are the first element of the output tuple
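Those last hidden states are per-token vectors of shape [1, seq_len, 768]. To get the [1, 768] sentence embedding you asked for, one common option (not the only one) is to mean-pool over the token dimension:

sentence_embedding = last_hidden_states.mean(dim=1)  # mean-pool token vectors -> shape [1, 768]
print(sentence_embedding.shape)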
I also invite you to try Sentence-Transformers. The project fine-tunes BERT / RoBERTa / DistilBERT / ALBERT / XLNet with a siamese or triplet network structure to produce semantically meaningful sentence embeddings. You can also use Flair to load Sentence-Transformer models.
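For example, a minimal sketch with the sentence-transformers package ('bert-base-nli-mean-tokens' is just one of its pretrained models; any other would work the same way):

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

st_model = SentenceTransformer('bert-base-nli-mean-tokens')
embeddings = st_model.encode(["I want running", "I like jogging"])  # shape: (2, 768)
print(cosine_similarity([embeddings[0]], [embeddings[1]]))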
Alternatively, you may try Flair's TransformerDocumentEmbeddings; see its examples.
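A minimal Flair sketch, assuming the flair package is installed (the 'bert-base-uncased' model name is just an example):

from flair.data import Sentence
from flair.embeddings import TransformerDocumentEmbeddings

doc_embedding = TransformerDocumentEmbeddings('bert-base-uncased')
sentence = Sentence('I want running')
doc_embedding.embed(sentence)
print(sentence.embedding.shape)  # a single fixed-size vector for the whole sentence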
I have a binary document classifier that uses a tf-idf representation of a training set of documents and applies logistic regression to it:
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

# tfidf is a TfidfVectorizer configured elsewhere
lr_tfidf = Pipeline([('vect', tfidf), ('clf', LogisticRegression(random_state=0))])
lr_tfidf.fit(X_train, y_train)
I saved the model with pickle and used it to classify new documents:
text_model = pickle.load(open('text_model.pkl', 'rb'))
results = text_model.predict_proba(new_document)
How can I get the representation (features + frequencies) used by the model for this new document without explicitly computing it?
EDIT: I am trying to explain better what I want to get.
When I use predict_proba, I guess that the new document is represented as a vector of term frequencies (according to the rules used in the stored model), and those frequencies are multiplied by the coefficients learnt by the logistic regression model to predict the class. Am I right? If yes, how can I get the terms and term frequencies of this new document, as used by predict_proba?
I am using sklearn v 0.19
As I understand from the comments, you need to access the TfidfVectorizer from inside the pipeline. This can be done easily:
tfidfVect = text_model.named_steps['vect']
Now you can use the transform() method of the vectorizer to get the tfidf values.
tfidf_vals = tfidfVect.transform(new_document)
tfidf_vals will be a sparse matrix with a single row containing the tf-idf values of the terms found in new_document. To check which terms are present in this matrix, use tfidfVect.get_feature_names().
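To see the actual terms together with their weights, you can map the nonzero entries of that row back to the vectorizer's feature names. A short sketch (assuming new_document was passed as a single-document collection, so tfidf_vals has exactly one row):

feature_names = tfidfVect.get_feature_names()
nonzero_cols = tfidf_vals.nonzero()[1]
# term -> tf-idf weight for the new document
term_weights = {feature_names[col]: tfidf_vals[0, col] for col in nonzero_cols}
print(term_weights)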
[Image: hybrid word-character model]
As shown in the image above, I need to create a hybrid encoder-decoder (seq2seq) network that takes both word and character embeddings as input.
As shown in the image, consider the sentence:
A cute cat
Hypothetically, the in-vocabulary words are:
a, cat
and the out-of-vocabulary word is:
cute
We feed the words a and cat as their respective embeddings, but since cute is out of vocabulary, we would normally feed it the embedding of a universal unknown token.
Instead, in this case I need to pass that out-of-vocabulary word (cute) through another seq2seq layer character by character to generate its embedding on the fly.
Both seq2seq layers must be trained jointly, end to end.
The following is a snippet of my code for the main encoder-decoder network, which takes word-based inputs, in Keras:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, RepeatVector, TimeDistributed, Dense, Activation

# Encoder: word embeddings initialized from pre-trained GloVe vectors
model = Sequential()
model.add(Embedding(X_vocab_len + y_vocab_len, 300, weights=[embedding_matrix],
                    input_length=X_max_len, mask_zero=True))
for i in range(num_layers):
    return_sequences = i != num_layers - 1
    model.add(LSTM(hidden_size, return_sequences=return_sequences))
model.add(RepeatVector(y_max_len))

# Creating decoder network
for _ in range(num_layers):
    model.add(LSTM(hidden_size, return_sequences=True))
model.add(TimeDistributed(Dense(y_vocab_len)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
Here X is the input sentence and y is the sentence to be generated. The vocabulary size is fixed to contain the frequent words; rare words are treated as out of vocabulary based on that size.
Here I created a Sequential model in Keras and initialized the Embedding layer with pre-trained GloVe vectors (embedding_matrix).
How do I model the input to achieve such a scenario?
The reference paper is :
http://aclweb.org/anthology/P/P16/P16-1100.pdf
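For illustration only, here is a minimal sketch of the character-level sub-encoder idea in Keras. All sizes and names are hypothetical, and wiring its output into the word-level model for out-of-vocabulary positions (and training both parts jointly) still requires a custom merge step, e.g. with the functional API:

from keras.models import Model
from keras.layers import Input, Embedding, LSTM

char_vocab_size = 50      # hypothetical: number of distinct characters
max_word_len = 15         # hypothetical: characters per word, zero-padded
word_embedding_dim = 300  # must match the word-level embedding size

# A small LSTM that reads a word character by character and emits a
# vector of the same size as a word embedding.
char_input = Input(shape=(max_word_len,), dtype='int32')
char_emb = Embedding(char_vocab_size, 64, mask_zero=True)(char_input)
word_vector = LSTM(word_embedding_dim)(char_emb)
char_encoder = Model(char_input, word_vector)
char_encoder.summary()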
I have a set of users and their content (one document per user, containing that user's tweets). I am planning to use a distributed vector representation of some size N for each user. One way is to take word vectors pre-trained on Twitter data and average them to get a distributed vector for a user. I am planning to use doc2vec for better results, but I am not quite sure whether I understood the DM model given in Distributed Representations of Sentences and Documents.
I understand that we assign one vector per paragraph, use it while predicting the next word, and then backpropagate the error to update the paragraph vector as well as the word vectors. How can this be used to predict the paragraph vector of a new paragraph?
Edit: Any toy gensim code to compute the paragraph vector of a new document would be appreciated.
The following code is based on gensim's doc2vec tutorial. We can instantiate and train a doc2vec model to generate embeddings of size 300 with a context window of size 10 as follows:
from gensim.models.doc2vec import Doc2Vec

# train_corpus is a list of gensim TaggedDocument objects
model = Doc2Vec(size=300, window=10, min_count=2, iter=64, workers=16)
model.build_vocab(train_corpus)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter)
Having trained our model, we can compute a vector for a new unseen document as follows:
import random

# test_corpus is a list of token lists for unseen documents
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
This returns a 300-dimensional representation of our test document and the top-N most similar documents from the training set, ranked by cosine similarity.
LSTM/RNN can be used for text generation.
This shows a way to use pre-trained GloVe word embeddings in a Keras model.
How can pre-trained Word2Vec word embeddings be used with a Keras LSTM model? This post did help.
How can I predict / generate the next word when the model is provided with a sequence of words as input?
Sample approach tried:
# Sample code to prepare word2vec word embeddings
import gensim

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]
sentences = [[word for word in document.lower().split()] for document in documents]
word_model = gensim.models.Word2Vec(sentences, size=200, min_count=1, window=5)
# Code tried to prepare LSTM model for word generation
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.models import Model, Sequential
from keras.layers import Dense, Activation

embedding_layer = Embedding(input_dim=word_model.syn0.shape[0],
                            output_dim=word_model.syn0.shape[1],
                            weights=[word_model.syn0])
model = Sequential()
model.add(embedding_layer)
model.add(LSTM(word_model.syn0.shape[1]))
model.add(Dense(word_model.syn0.shape[0]))
model.add(Activation('softmax'))
model.compile(optimizer='sgd', loss='mse')
Sample code / pseudocode to train the LSTM and predict will be appreciated.
I've created a gist with a simple generator that builds on top of your initial idea: it's an LSTM network wired to the pre-trained word2vec embeddings, trained to predict the next word in a sentence. The data is a list of abstracts from the arXiv website.
I'll highlight the most important parts here.
Gensim Word2Vec
Your code is fine, except for the number of iterations to train it. The default iter=5 seems rather low. Besides, it's definitely not the bottleneck -- LSTM training takes much longer. iter=100 looks better.
word_model = gensim.models.Word2Vec(sentences, size=100, min_count=1,
                                    window=5, iter=100)
pretrained_weights = word_model.wv.syn0
vocab_size, embedding_size = pretrained_weights.shape
print('Result embedding shape:', pretrained_weights.shape)
print('Checking similar words:')
for word in ['model', 'network', 'train', 'learn']:
    most_similar = ', '.join('%s (%.2f)' % (similar, dist)
                             for similar, dist in word_model.most_similar(word)[:8])
    print('  %s -> %s' % (word, most_similar))

def word2idx(word):
    return word_model.wv.vocab[word].index

def idx2word(idx):
    return word_model.wv.index2word[idx]
The resulting embedding matrix is saved in the pretrained_weights array, which has shape (vocab_size, embedding_size).
Keras model
Your code is almost correct, except for the loss function. Since the model predicts the next word, it's a classification task, hence the loss should be categorical_crossentropy or sparse_categorical_crossentropy. I've chosen the latter for efficiency reasons: this way it avoids one-hot encoding, which is pretty expensive for a big vocabulary.
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size,
                    weights=[pretrained_weights]))
model.add(LSTM(units=embedding_size))
model.add(Dense(units=vocab_size))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
Note that the pre-trained weights are passed via the weights argument.
Data preparation
In order to work with sparse_categorical_crossentropy loss, both sentences and labels must be word indices. Short sentences must be padded with zeros to the common length.
import numpy as np

# max_sentence_len is the length of the longest sentence in the corpus
train_x = np.zeros([len(sentences), max_sentence_len], dtype=np.int32)
train_y = np.zeros([len(sentences)], dtype=np.int32)
for i, sentence in enumerate(sentences):
    for t, word in enumerate(sentence[:-1]):
        train_x[i, t] = word2idx(word)
    train_y[i] = word2idx(sentence[-1])
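With the data in this shape, training is just a standard fit call (the batch size and epoch count below are only illustrative):

model.fit(train_x, train_y, batch_size=128, epochs=20)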
Sample generation
This is pretty straightforward: the model outputs a vector of probabilities, from which the next word is sampled and appended to the input. Note that the generated text is better and more diverse if the next word is sampled rather than picked as the argmax. The temperature-based random sampling I've used is described here.
def sample(preds, temperature=1.0):
    if temperature <= 0:
        return np.argmax(preds)
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

def generate_next(text, num_generated=10):
    word_idxs = [word2idx(word) for word in text.lower().split()]
    for i in range(num_generated):
        prediction = model.predict(x=np.array(word_idxs))
        idx = sample(prediction[-1], temperature=0.7)
        word_idxs.append(idx)
    return ' '.join(idx2word(idx) for idx in word_idxs)
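Generation then only needs a seed phrase, for example:

print(generate_next('deep convolutional'))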
Examples of generated text
deep convolutional... -> deep convolutional arithmetic initialization step unbiased effectiveness
simple and effective... -> simple and effective family of variables preventing compute automatically
a nonconvex... -> a nonconvex technique compared layer converges so independent onehidden markov
a... -> a function parameterization necessary both both intuitions with technique valpola utilizes
It doesn't make too much sense, but it is able to produce sentences that look at least grammatically sound (sometimes).
The link to the complete runnable script.