In my text-generation dataset, I have converted all infrequent words into the <ukn> token (unknown word), as suggested by most text-generation literature.
However, when training an RNN to take in part of a sentence as input and predict the rest of the sentence, I am not sure how I should stop the network from generating <ukn> tokens.
When the network encounters an unknown (infrequent) word in the training set, what should its output be?
Example:
Sentence: I went to the mall and bought a <ukn> and some groceries
Network input: I went to the mall and bought a
Current network output: <unk> and some groceries
Desired network output: ??? and some groceries
What should it be outputting instead of the <unk>?
I don't want to build a generator that outputs words it does not know.
An RNN gives you a probability distribution over the tokens that are most likely to appear next in your text. In your code you choose the token with the highest probability, in this case «unk».
Here you can simply omit the «ukn» token and take the next most likely token that the RNN suggests, based on the probability values it outputs.
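For example, assuming the RNN's output layer gives you a probability vector over the vocabulary and you know the index of the unknown token (the names below are illustrative, not from the question), a minimal sketch could look like this:

import numpy as np

def pick_next_token(probs, unk_id):
    """Return the most likely token id while skipping the unknown token.

    probs  -- 1-D NumPy array of next-token probabilities from the RNN
    unk_id -- vocabulary index of the <ukn>/<unk> token
    """
    masked = probs.copy()
    masked[unk_id] = 0.0           # mask out the unknown token
    return int(np.argmax(masked))  # next most likely real word

# To sample instead of taking the argmax, renormalise the masked vector:
# next_id = np.random.choice(len(masked), p=masked / masked.sum())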
I've seen <UNK> occasionally, but never <UKN>.
Even more common in word-embedding training is dropping rare words entirely, to keep vocabularies compact and to avoid having words without sufficient examples serve as 'noise' in the training of other words. (Folding them all into a single magic unknown-token – which then becomes more frequent than real tokens! – would just tend to throw a big unnatural pseudo-word with no clear meaning into every other word's contexts.)
So, I'm not sure it's accurate to describe this as "suggested by most text-generation literature". And to the extent it might be, wouldn't any source suggesting this then also suggest what-to-do when a prediction is the UNK token?
If your specific application needs a real known word instead – even when the NN has low confidence that the right word is any known word – it would seem you'd just read the next-best non-<UKN> prediction from the NN, as suggested by @petezurich's answer.
While working on tasks like text classification and QA, the original vocabulary generated from the corpus is usually too large, containing a lot of 'unimportant' words. The most popular ways I've seen to reduce the vocabulary size are discarding stop words and words with low frequencies.
For example, in gensim
gensim.utils.prune_vocab(vocab, min_reduce, trim_rule=None):
Remove all entries from the vocab dictionary with count smaller than min_reduce.
Modifies vocab in place, returns the sum of all counts that were pruned.
But in practice, setting the minimum count is empirical and does not seem very principled. I notice that the term frequencies of the words in the vocabulary often follow a long-tail distribution. Would it be a good approach to keep only the top-K words that account for X% (95%, 90%, 85%, ...) of the total term frequency? Or are there other sensible ways to reduce the vocabulary without seriously affecting the NLP task?
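As a rough illustration of both options (the count threshold via gensim's prune_vocab quoted above, and the coverage idea from the question), here is a sketch assuming a plain word-to-count dictionary; the counts and the 95% threshold are made up:

from collections import Counter
from gensim.utils import prune_vocab

word_counts = {"the": 1000, "cat": 40, "sat": 12, "zyzzyva": 1}

# Option 1: drop everything below a fixed count (prune_vocab modifies in place).
vocab = dict(word_counts)
pruned_total = prune_vocab(vocab, min_reduce=5)   # removes words with count < 5

# Option 2: keep the most frequent words until they cover X% of all tokens.
def coverage_vocab(counts, coverage=0.95):
    total = sum(counts.values())
    kept, running = [], 0
    for word, count in Counter(counts).most_common():
        kept.append(word)
        running += count
        if running / total >= coverage:
            break
    return set(kept)

print(coverage_vocab(word_counts, coverage=0.95))  # {'the', 'cat'} for these toy counts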
There are indeed a few recent developments that try to counteract this problem. The most notable one is probably subword units (also known as Byte Pair Encoding, or BPE), which you can think of as something similar to syllables in a word (but not the same!). A word like basketball could then be transformed into variants like bas ##ket ##ball or basket ##ball. Note that this is a constructed example and might not reflect the actually chosen subwords.
The idea itself is relatively old (an article from 1994), but has been recently popularized by Sennrich et al., and is basically used in every state-of-the-art NLP library that has to deal with large vocabularies.
The two biggest implementations of this idea are probably fastBPE and Google's SentencePiece.
With subword units, you basically have the freedom to pick a fixed vocabulary size, and the algorithm will then optimize towards a mix of word diversity and splitting "more complex words" into several pieces, such that your desired vocabulary size can cover any word in the corpus. For the exact algorithm, though, I highly recommend looking into the linked paper or implementations.
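As an illustration, here is a minimal SentencePiece sketch; the file names, vocabulary size of 8000 and BPE model type are assumptions, not values from the answer:

import sentencepiece as spm

# Train a subword model with a fixed vocabulary size on a raw-text corpus
# (one sentence per line in corpus.txt, an assumed file name).
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="subword",   # writes subword.model / subword.vocab
    vocab_size=8000,          # the fixed vocabulary size you choose
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="subword.model")
print(sp.encode("basketball", out_type=str))  # e.g. ['▁basket', 'ball']; the exact split depends on the corpus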
In general, the least-frequent words in your training data are also the safest to discard.
This is especially the case for 'word2vec' and similar algorithms. There may not be enough varied examples of the usage of each rare word to learn reliable representations – as opposed to weak/idiosyncratic representations based on the few not-necessarily-representative examples of their use that you do have.
Also, rare words won't recur as often in future texts, making their relative value in the model lower.
And, by the typical 'zipfian' distribution of word-frequencies in natural-language material, while each individual rare word only occurs a few times, altogether there are many such words. So just discarding words with one to a few instances will often significantly shrink the vocabulary (and thus overall model) by half or more.
Finally, it's been observed in 'word2vec' that when those intervening rare words – which are many in total number, though each individually has only limited-quality examples – are discarded, the quality of the surviving more-frequent word-vectors often improves. Those more-important words have fewer intervening lower-value 'noisy' words moving them out of each others' context windows, or pulling the model's weights in other directions via interleaved training examples.
(Similarly, in adequate corpuses, using more-aggressive frequent-word downsampling, as controlled by the sample parameter, can often increase word-vector quality while also speeding training – though with no savings in overall vocabulary size, as no words are totally eliminated by that setting.)
On the other hand, 'stop words' are insufficiently numerous to offer much vocabulary-size savings when discarded. Discard them, or not, based on whether their presence helps or hurts your later steps & final results – not to save a tiny amount of vocabulary-driven model space.
Note that for gensim's Word2Vec model, and related algorithms, in addition to the min_count parameter which discards all words appearing fewer times than that value, there is also the max_final_vocab parameter, which will dynamically choose whatever min_count is sufficient to achieve a final vocabulary size no larger than the max_final_vocab value.
So if you know you have the system memory to support a 1-million-word model, you don't have to use trial-and-error on min_count values to reach something near that: you can just specify max_final_vocab=1000000, min_count=1.
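A minimal sketch of that call with gensim 4.x parameter names; the toy corpus and vector size are placeholders:

from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "slept"]]  # stand-in for your real corpus

# With min_count=1, max_final_vocab picks whatever effective min_count keeps
# the surviving vocabulary at or under one million words.
model = Word2Vec(
    sentences=sentences,
    vector_size=100,
    min_count=1,
    max_final_vocab=1_000_000,
)
print(len(model.wv))  # final vocabulary size, at most 1,000,000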
(On the other hand, be careful with the max_vocab_size parameter. It should only be used to prevent the initial word-count survey from outgrowing available RAM, and thus should be set to the largest value your system can manage – far, far larger than whatever you'd like your actual final vocabulary size to be. That's because the max_vocab_size is enforced whenever the survey-in-progress reaches that size – not just at the end – and discards a lot of the smaller word counts, and then enforces a higher floor each time it's enforced. If this limit is hit at all, it means final counts will only be approximate – & the escalating floor means sometimes the running-vocabulary will be pruned to a mere 10% or so of the full max_vocab_size.)
You can significantly reduce vocabulary size via text pre-processing tailored to your learning task & domain. Some NLP techniques include:
Remove rare & frequent stop words. Not just from pre-defined lists but through learned thresholds, TF-IDF weights or superfluous part-of-speech removals.
Correct spelling/grammar/slang if your text is noisy or from different dialects of the same language.
Lemmatize words to remove tense & plurality variants if these relationships don't matter. ie: played, playing or plays -> play
Parametrize text with named entities whenever specific details aren't needed. ie: <PERSON> bought <MONEY> tickets to <LOCATION> for <DATE> (see the sketch after this list)
Disambiguate & perform synonym substitution to the most frequent usage of its interpretation. ie: bedrooms are spacious -> rooms are big
Simplify contractions & negations. ie: I don't dislike it -> I do not dislike it ~> I like it
Resolve co-references so that pronouns are made explicit. ie: John said he will go -> John said John will go
Dimensionality reduce with SVD to automatically capture equivalent phrases.
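As referenced in the list, here is a small sketch of the lemmatization and named-entity parametrization steps using spaCy; the en_core_web_sm model and the example sentence are assumptions:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John bought 200 dollars of tickets to Paris for Friday")

# Lemmatize: played, playing or plays -> play
lemmas = [tok.lemma_ for tok in doc]

# Parametrize named entities: replace each entity token with its label, e.g. <PERSON>
parametrized = [f"<{tok.ent_type_}>" if tok.ent_type_ else tok.text for tok in doc]
print(" ".join(parametrized))  # roughly: <PERSON> bought <MONEY> <MONEY> of tickets to <GPE> for <DATE>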
I am studying machine translation right now and I am interested in a question probing a bit more deeply into the internals of sentence representations.
Suppose we train an encoder-decoder Seq2Seq En-Fr translation system on parallel corpora, starting with pre-trained Eng and Fr word vectors. The system can use anything to form the sentence embedding (Transformers, LSTMs, etc.). The job of the Seq2Seq translation system is then to learn to build Eng sentence representations from Eng word vectors, to learn to build French sentence representations from French word vectors, and, by the linking of the encoder and decoder, to learn those two sentence representations in the same space.
After training the model, and encoding some English sentence with the model (Say, "This is not a pipe."), the sentence embedding in the joint representation space has some idea of the words 'this', 'is', 'not', 'a', 'pipe', etc and all their associations as well as the sequence in which they appear. (1)
When the decoder is run on the encoding, it is able to extract the aforementioned information – thanks to the load of corpora fed to it during training and the statistical associations between words – and output, correspondingly, 'Ceci', 'n', "'", 'est', 'pas', 'une', 'pipe', '(EOS)'. At each step, it extracts and outputs the next French word from the decoder hidden state and transforms the state so that the heuristically "most prominent" word to be decoded next can be found, and so on, until '(EOS)'.
My question is this: is there any interpretation of the last decoder hidden state, after (EOS) is output? Is it useful for anything else? Of course, an easy answer is "no; the model was trained to capture millions of lines of English text and process them until some word in conjunction with the hidden state produces (EOS), and the last decoder hidden state is simply that; everything not explicitly trained on is just noise, not signal".
But I'm wondering if there's anything more to this. What I'm trying to get at is: if you have a sentence embedding generated in English and have its meaning dumped out in French by the decoder model, does any residual meaning remain that is not translatable from English to French? Certainly, the last hidden state for any particular sentence's translation would be very hard to interpret, but how about in the aggregate (say, some aggregation of the last hidden states of every translated sentence that has the word 'French' in it, which means something slightly different in English because it can be paired with 'fries', etc. This is a silly example, but you can probably think of others exploiting cultural ambiguities that turn up in language.) Might this last embedding capture some statistical "uncertainty" or ambiguity about the translation (maybe of the possible English "meanings" and associations that could have ended up in French but didn't), or some other structural aspect of the language that might help us understand, say, how English is different from French?
What category do you think the answer to this falls into?
"There is no signal";
"There probably is some signal, but it would be very hard to extract because it depends on the mechanics of how the model was trained";
"There is a signal that can be reliably extracted, even if we have to aggregate over millions of examples"?
I'm not sure if this question makes sense at all, but I'm curious about the answer and whether any research has been done on this front. I ask out of plain, simple curiosity.
Notes:
I am aware that the last hidden state exists because it generates (EOS) in conjunction with the last word. That is its purpose; nothing else (?) makes it special. I'm wondering if we can get any more meaning out of it (even if that means transforming it, e.g. applying the decoder step to it one more time).
(1) (Of course, the ML model has no rich idea of 'concepts' as a human would, with all their associations to thoughts, experiences and feelings; to the ML model, the 'concept' only has associations with other words seen in the monolingual corpus used for the word-vector training and the bilingual corpus used for translation training.)
Answering my own question, but still interested in thoughts. I have a hunch the answer is "no", because the hidden-state embedding is generated with only two properties in mind: (1) to be 'closest' by cosine distance to the next output token out of all tokens in French, and (2) to produce the hidden state corresponding to the next word when the decoder transformation is applied to it. To give the last hidden state an interpretation other than 'it is the point on the 300-d (or whatever embedding dimension we are using) unit sphere closest by cosine distance to the French (EOS) token', we would have to apply (2) to it. But the training data never had any examples of anything following (EOS), so whatever we get if we apply the decoder transformation to the last hidden state was never learned and is simply random, depending on our model initialisations.
If we wanted to get some idea of how good a 'match' the English and French joint embedding space is, we should be looking at and comparing the test loss of various translations, not looking into the last hidden state. But I'm still interested in people's thoughts on the matter if anyone thinks differently.
According to my understanding (please correct me if I'm wrong), beam search is BFS that only explores the "graph" of possibilities down the b most likely options, where b is the beam size.
To score each option, especially for the work I'm doing, which is in the field of NLP, we basically compute the probability of a token given everything that comes before it.
This makes sense in a recurrent architecture, where you simply run your decoder on the best b first tokens to get the probabilities of the second token for each of the first tokens. Eventually you get sequences with probabilities, and you just pick the one with the highest probability.
However, in the Transformer architecture, where the model doesn't have that recurrence, the output is the entire probability for each word in the vocabulary, for each position in the sequence (batch size, max sequence length, vocab size). How do I interpret this output for Beam Search? I can get the encodings for the input sequence, but since there isn't that recurrence of using the previous output as input for the next token's decoding, how do I go about calculating the probability of all the possible sequences that stems from the best b tokens?
Beam search works exactly the same way as with recurrent models. The decoder is not recurrent (it is self-attentive), but it is still auto-regressive, i.e., generating a token is conditioned on previously generated tokens.
At training time, the self-attention is masked such that it only attends to words to the left of the word that is currently being generated. This simulates the setup you have at inference time, when you indeed only have the left context (because the right context has not been generated yet).
The only difference is that in the RNN decoder, you only use the last RNN state in every beam search step. With the Transformer, you always need to keep the entire hypothesis and do the self-attention over the entire left context.
Adding more information for your later question and for people who have the same question:
I guess what I really want to ask is: with an RNN architecture, I can feed the decoder the b tokens that are highest in probability to get the conditional probabilities of subsequent tokens. However, as I understand it from this tutorial: tensorflow.org/beta/tutorials/text/…, I can't really do that for the Transformer architecture. Is that right? The decoder takes in the encoder outputs, the two masks and the target -- what would I pass in for the target parameter?
The tutorial on the website you mentioned uses teacher forcing in the training stage, and it is possible to apply beam search to the decoder of a Transformer at test time.
Using beam search for modern architectures like Transformers in the training stage is not so popular. (Check this link for more info.)
Meanwhile, teacher forcing, as the tutorial uses in the training stage, offers you parallel computation and speeds up training when you are dealing with a large-vocabulary task.
As for testing such a decoder, you could try the following steps to do beam search (just offering a possibility based on my understanding; there may be better solutions). A rough code sketch follows the notes below.
First, instead of taking the entire ground-truth sequence as input for the decoder, you could provide only "[SOS]" and pad the remaining positions.
Although the output of your decoder is still [batch_size, max_sequence_len, vocab_size], only the (batch_size, 0, vocab_size) slice gives you useful information, and that is the first token your model generated. Select the top b tokens and append them to your "[SOS]" sequence. Now you have the sequences "[SOS] token(1,1)", ... , "[SOS] token(1,b)".
Second, use the above sequences as input for the decoder and search for the top b tokens among the b * vocab_size options. Append them to their corresponding sequences.
Repeat until the sequences meet some stopping condition (max_output_length or [EOS]).
P.S.: 1) [SOS] and [EOS] mean the start and end of the sequence.
2) token(i,j) means the j-th token among the top b tokens at the i-th position in the sequence.
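Here is a rough sketch of those steps; decode_step is a hypothetical stand-in for one forward pass of the Transformer decoder that returns next-token log-probabilities for a batch of prefixes (it is not a real API of any particular library):

import numpy as np

def beam_search(decode_step, sos_id, eos_id, beam_size, max_len):
    # decode_step(prefixes) -> array of shape (len(prefixes), vocab_size)
    # holding log-probabilities of the next token for each prefix.
    beams = [([sos_id], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_len):
        live = [(seq, score) for seq, score in beams if seq[-1] != eos_id]
        done = [(seq, score) for seq, score in beams if seq[-1] == eos_id]
        if not live:
            break
        log_probs = decode_step([seq for seq, _ in live])  # re-run on the full prefixes
        candidates = list(done)
        for (seq, score), row in zip(live, log_probs):
            for tok in np.argsort(row)[-beam_size:]:        # top-b next tokens
                candidates.append((seq + [int(tok)], score + float(row[tok])))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return max(beams, key=lambda c: c[1])[0]

Note that, unlike with an RNN, each call re-runs the decoder on the whole prefix, which is exactly the "keep the entire hypothesis" point from the earlier answer.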
I am reading about n-grams and I am wondering whether there is a case in practice when unigrams would be preferred over bigrams (or higher N-grams). As I understand it, the bigger N, the greater the complexity of calculating the probabilities and establishing the vector space. But apart from that, are there other reasons (e.g. related to the type of data)?
This boils down to data sparsity: as your n-gram length increases, the number of times you will see any given n-gram decreases. In the most extreme example, if you have a corpus where the maximum document length is n tokens and you are looking for an m-gram where m = n+1, you will, of course, have no data points at all, because it is simply not possible to have a sequence of that length in your data set. The more sparse your data set, the worse you can model it. For this reason, even though a higher-order n-gram model in theory contains more information about a word's context, it cannot easily generalize to other data sets (known as overfitting), because the number of events (i.e. n-grams) it has seen during training becomes progressively smaller as n increases. On the other hand, a lower-order model lacks contextual information and so may underfit your data.
For this reason, if you have a relatively large number of token types (i.e. the vocabulary of your text is very rich) but each of these types has a very low frequency, you may get better results with a lower-order n-gram model. Similarly, if your training data set is very small, you may do better with a lower-order n-gram model. However, assuming that you have enough data to avoid over-fitting, you get better separability of your data with a higher-order model.
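To make the sparsity point concrete, here is a small sketch that counts n-grams of increasing order on a made-up sentence; as n grows there are more distinct n-grams but fewer observations of each:

from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the mat because the cat was tired".split()
for n in (1, 2, 3):
    counts = ngram_counts(tokens, n)
    print(n, len(counts), sum(counts.values()) / len(counts))  # distinct n-grams, mean count per n-gram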
Usually, n-grams with n greater than 1 are better, as they carry more information about the context in general. However, sometimes unigrams are also calculated besides bigrams and trigrams and used as a fallback for them. Unigrams are also useful if you want higher recall than precision, for instance when searching for all possible uses of the verb "make".
Let's use statistical machine translation as an example:
Intuitively, the best scenario is that your model has seen the full sentence (let's say as a 6-gram) before and knows its translation as a whole. If this is not the case, you try to divide it into smaller n-grams, keeping in mind that the more you know about a word's surroundings, the better the translation. For example, if you want to translate "Tom Green" to German and you have seen the bigram, you will know it is a person's name and should remain as it is; but if your model never saw it, you would fall back to unigrams and translate "Tom" and "Green" separately. Thus "Green" would be translated as a color, to "Grün", and so on.
Also, in search, knowing more about the surrounding context makes the results more accurate.
Binarization is the act of transforming colorful features of an entity into vectors of numbers, most often binary vectors, to make good examples for classifier algorithms.
If we were to binarize the sentence "The cat ate the dog", we could start by assigning every word an ID (for example cat-1, ate-2, the-3, dog-4) and then simply replace each word by its ID, giving the vector <3,1,2,3,4>.
Given these IDs we could also create a binary vector by giving each word four possible slots and setting the slot corresponding to a specific word to one, giving the vector <0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,1>. The latter method is, as far as I know, commonly referred to as the bag-of-words method.
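To make the two encodings concrete, here is a small sketch using the same toy vocabulary; the per-position one-hot variant shown last is one way to read the longer binary vector, and the exact slot layout in the question may differ:

ids = {"cat": 1, "ate": 2, "the": 3, "dog": 4}
tokens = "the cat ate the dog".lower().split()

# ID vector from the example: [3, 1, 2, 3, 4]
id_vector = [ids[t] for t in tokens]

# Bag-of-words-style binary vector: one slot per vocabulary word,
# set to 1 if the word occurs anywhere in the sentence (order is lost).
bag = [0] * len(ids)
for t in tokens:
    bag[ids[t] - 1] = 1

# Per-position one-hot variant: each position gets its own block of slots,
# which preserves order at the cost of a much longer vector.
one_hot = []
for t in tokens:
    block = [0] * len(ids)
    block[ids[t] - 1] = 1
    one_hot.extend(block)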
Now for my question: what is the best binarization method when it comes to describing features for natural language processing in general, and transition-based dependency parsing (with Nivre's algorithm) in particular?
In this context, we do not want to encode the whole sentence, but rather the current state of the parse, for example the top word on the stack and the first word in the input queue. Since order is highly relevant, this rules out the bag-of-words method.
By best, I am referring to the method that makes the data the most intelligible for the classifier, without using up unnecessary memory. For example, I don't want a word bigram to use 400 million features for 20,000 unique words if only 2% of the bigrams actually occur.
Since the answer is also depending on the particular classifier, I am mostly interested in maximum entropy models (liblinear), support vector machines (libsvm) and perceptrons, but answers that apply to other models are also welcome.
This is actually a really complex question. The first decision you have to make is whether to lemmatize your input tokens (your words). If you do this, you dramatically decrease your type count, and your syntax parsing gets a lot less complicated. However, it takes a lot of work to lemmatize a token. Now, in a computer language, this task gets greatly reduced, as most languages separate keywords or variable names with a well defined set of symbols, like whitespace or a period or whatnot.
The second crucial decision is what you're going to do with the data post-facto. The "bag-of-words" method, in the binary form you've presented, ignores word order, which is completely fine if you're doing summarization of a text or maybe a Google-style search where you don't care where the words appear, as long as they appear. If, on the other hand, you're building something like a compiler or parser, order is very much important. You can use the token-vector approach (as in your second paragraph), or you can extend the bag-of-words approach such that each non-zero entry in the bag-of-words vector contains the linear index position of the token in the phrase.
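To illustrate that positional extension, a tiny sketch with the same toy vocabulary; storing the position of the last occurrence is just one simple choice, since the answer does not pin down how repeats are handled:

ids = {"cat": 1, "ate": 2, "the": 3, "dog": 4}
tokens = "the cat ate the dog".split()

# Each vocabulary slot stores the (1-based) position of the token's
# last occurrence in the phrase, or 0 if it is absent.
positional_bag = [0] * len(ids)
for pos, tok in enumerate(tokens, start=1):
    positional_bag[ids[tok] - 1] = pos

print(positional_bag)  # [2, 3, 4, 5]: cat at 2, ate at 3, 'the' last at 4, dog at 5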
Finally, if you're going to be building parse trees, there are obvious reasons why you'd want to go with the token-vector approach, as it's a big hassle to maintain sub-phrase ids for every word in the bag-of-words vector, but very easy to make "sub-vectors" in a token-vector. In fact, Eric Brill used a token-id sequence for his part-of-speech tagger, which is really neat.
Do you mind if I ask what specific task you're working on?
"Binarization is the act of transforming colorful features of an entity into vectors of numbers, most often binary vectors, to make good examples for classifier algorithms."
I have mostly come across numeric features that take values between 0 and 1 (not binary as you describe), representing the relevance of the particular feature in the vector (between 0% and 100%, where 1 represents 100%). A common example of this is tf-idf vectors: in the vector representing a document (or sentence), you have a value for each term in the entire vocabulary that indicates the relevance of that term for the represented document.
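For instance, a minimal tf-idf sketch with scikit-learn; the toy documents are made up, and get_feature_names_out assumes a recent scikit-learn version:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat ate the dog", "the dog slept"]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)         # sparse matrix: documents x vocabulary terms
print(vectorizer.get_feature_names_out())  # the term behind each column
print(X.toarray())                         # per-document relevance weights, not just 0/1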
As Mike already said in his reply, this is a complex problem in a wide field. In addition to his pointers, you might find it useful to look into some information retrieval techniques like the vector space model, vector space classification and latent semantic indexing as starting points. Also, the field of word sense disambiguation deals a lot with feature representation issues in NLP.
[Not a direct answer] It all depends on what you are trying to parse and then process, but for general short human-phrase processing (e.g. IVT) another method is to use neural networks to learn the patterns. This can be very accurate for smallish vocabularies.