I am confused about the meaning of paragraph matrix (D) in the structure of the PV-DBOW model (Doc2Vec).
Is the paragraph matrix the result of one-hot encoding of n input paragraph IDs?
Or is the paragraph matrix a randomly initialized weight matrix of shape (n x p), where n is the number of paragraph IDs and p is the vector dimension?
I've never found the original paper's diagrams very clear or helpful. (And, no one has been able to reproduce their claimed results in the 'concatenation' mode.)
But I can say, from familiarity with code implementations:
At the outset, every paragraph-ID gets a randomly-initialized vector. Thus, the model contains a matrix holding all of these vectors, of shape number_of_paragraph_ids x number_of_dimensions.
For backprop-training in PV-DBOW, each individual paragraph-vector (one row from the above matrix) is adjusted to better predict the matching-paragraph's individual constituent words.
While figuratively, that's sort of a 1-hot selection of a single paragraph choice, in the code it's just a lookup of the single correct row using a paragraph-ID key.
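Concretely, the "lookup instead of one-hot matmul" point can be sketched in NumPy (the sizes and the uniform init range here are just illustrative assumptions, not the exact values any particular implementation uses):

```python
import numpy as np

rng = np.random.default_rng(0)

num_paragraph_ids = 1000   # hypothetical corpus size
num_dimensions = 100       # hypothetical vector size

# Randomly initialized paragraph matrix D, one row per paragraph-ID.
D = rng.uniform(-0.5, 0.5, size=(num_paragraph_ids, num_dimensions))

# "Selecting" a paragraph is just a row lookup, not a one-hot matmul.
paragraph_id = 42
paragraph_vector = D[paragraph_id]   # shape: (num_dimensions,)

# The mathematically equivalent one-hot formulation:
one_hot = np.zeros(num_paragraph_ids)
one_hot[paragraph_id] = 1.0
assert np.allclose(one_hot @ D, paragraph_vector)
```

The lookup and the one-hot product select the same row; implementations use the lookup because it avoids a pointless multiplication by zeros.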
This Article describes the LambdaRank algorithm for information retrieval. In formula 8 page 6, the authors propose to multiply the gradient (lambda) by a term called |∆NDCG|.
I do understand that this term is the difference of two NDCGs when swapping two elements in the list:
the size of the change in NDCG (|∆NDCG|) given by swapping the rank positions of U1 and U2
(while leaving the rank positions of all other urls unchanged)
However, I do not understand which ordered list is considered when swapping U1 and U2. Is it the list ordered by the predictions from the model at the current iteration? Or is it the list ordered by the ground-truth labels of the documents? Or perhaps the list of predictions from the model at the previous iteration, as suggested by Tie-Yan Liu in his book Learning to Rank for Information Retrieval?
Short answer: It's the list ordered by the predictions from the model at the current iteration.
Let's see why it makes sense.
At each training iteration, we perform the following steps (these steps are standard for all machine learning algorithms, whether for classification, regression, or ranking tasks):
Calculate the scores s[i] = f(x[i]) returned by our model for each document i.
Calculate the gradients of the model's weights ∂C/∂w, back-propagated from RankNet's cost C. This gradient is the sum of all pairwise gradients ∂C[i, j]/∂w, calculated for each pair of documents (i, j).
Perform a gradient ascent step (i.e. w := w + u * ∂C/∂w, where u is the step size).
In "Speeding up RankNet" paragraph, the notion λ[i] was introduced as contributions of each document's computed scores (using the model weights at current iteration) to the overall gradient ∂C/∂w (at current iteration). If we order our list of documents by the scores from the model at current iteration, each λ[i] can be thought of as "arrows" attached to each document, the sign of which tells us to which direction, up or down, that document should be moved to increase NDCG. Again, NCDG is computed from the order, predicted by our model.
Now, the problem is that the lambdas λ[i, j] for every pair (i, j) contribute equally to the overall gradient. That means the ranking of documents below, say, the 100th position is given equal importance to the ranking of the top documents. This is not what we want: we should prioritize having relevant documents at the very top much more than having the correct order below the 100th position.
That's why we multiply each of those "arrows" by |∆NDCG|: to emphasize correct ranking at the top of the list more than at the bottom.
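To make the swap concrete, here is a small NumPy sketch of |∆NDCG| under the usual definitions (gain 2^rel - 1, discount log2(position + 1)); the function names and the toy relevance list are my own, not from the paper:

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain of a list of graded relevance labels."""
    positions = np.arange(1, len(relevances) + 1)
    return np.sum((2.0 ** np.asarray(relevances) - 1) / np.log2(positions + 1))

def delta_ndcg(relevances, i, j):
    """|∆NDCG| for swapping the documents at rank positions i and j
    (0-indexed) while leaving all other positions unchanged."""
    ideal = dcg(sorted(relevances, reverse=True))  # IDCG, for normalization
    swapped = list(relevances)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return abs(dcg(swapped) - dcg(relevances)) / ideal

# Documents ordered by the *model's current scores*; the values are the
# ground-truth relevance labels read off in that predicted order.
rels_in_predicted_order = [0, 2, 1, 0]
print(delta_ndcg(rels_in_predicted_order, 0, 1))
```

Note that the list passed in is ordered by the current model scores, which is exactly the point of the answer above: swapping two documents with equal labels yields |∆NDCG| = 0, so such pairs contribute nothing extra.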
I'm trying to implement DeepMind's DNC (from the Nature paper) in PyTorch 0.4.0.
When implementing the variant of LSTM they used I encountered some troubles with dimensions.
To simplify, suppose BATCH = 1.
The equations they list in the paper are these:
where [x;h] means a concatenation of x and h into one single vector, and i, f and o are column vectors.
My question is about how the state s_t is computed.
The second addend is obtained by multiplying i with a column vector, so the result is either a scalar (transpose i first, then take the dot product) or ill-defined (two column vectors multiplied together).
So the state results in a single scalar...
With the same reasoning the hidden state h_t is a scalar too, but it has to be a column vector.
Obviously I'm wrong somewhere, but I can't figure out where.
By looking at the Wikipedia LSTM article, I think I figured it out.
This is the formal description of a standard LSTM found in the article:
The circle represents the element-by-element (Hadamard) product.
By using this product in the corresponding parts of the DNC equations (for s_t and o_t), the dimensions work out.
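Under that reading, a minimal NumPy sketch of the gate updates (sizes and weight values are arbitrary, BATCH = 1 as above) shows that with element-wise products every intermediate stays a hidden_dim-sized vector:

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, hidden_dim = 4, 3          # hypothetical sizes
x = rng.standard_normal(input_dim)    # input x_t
h = rng.standard_normal(hidden_dim)   # previous hidden state h_{t-1}
s = rng.standard_normal(hidden_dim)   # previous cell state s_{t-1}

concat = np.concatenate([x, h])       # [x; h], shape (input_dim + hidden_dim,)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix and bias per gate; each maps [x; h] to hidden_dim.
W = {g: rng.standard_normal((hidden_dim, input_dim + hidden_dim)) for g in "ifos"}
b = {g: np.zeros(hidden_dim) for g in "ifos"}

i_t = sigmoid(W["i"] @ concat + b["i"])   # input gate, shape (hidden_dim,)
f_t = sigmoid(W["f"] @ concat + b["f"])   # forget gate
o_t = sigmoid(W["o"] @ concat + b["o"])   # output gate

# Element-wise (Hadamard) products keep everything (hidden_dim,)-shaped:
s_t = f_t * s + i_t * np.tanh(W["s"] @ concat + b["s"])
h_t = o_t * np.tanh(s_t)
```

With `*` (element-wise) instead of a matrix or dot product, s_t and h_t come out as column vectors of size hidden_dim, as required.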
The dimensions for the input data for LSTM are [Batch Size, Sequence Length, Input Dimension] in tensorflow.
What do Sequence Length and Input Dimension mean?
How do we assign the values to them if my input data is of the form :
[[[1.23] [2.24] [5.68] [9.54] [6.90] [7.74] [3.26]]] ?
LSTMs are a subclass of recurrent neural networks. Recurrent neural nets are by definition applied on sequential data, which without loss of generality means data samples that change over a time axis. A full history of a data sample is then described by the sample values over a finite time window, i.e. if your data live in an N-dimensional space and evolve over t-time steps, your input representation must be of shape (num_samples, t, N).
Your data does not fit the above description. I assume, however, that this representation means you have a scalar value x which evolves over 7 time instances, such that x[0] = 1.23, x[1] = 2.24, etc.
If that is the case, you need to reshape your input so that instead of a list of 7 elements, you have an array of shape (7, 1). Then your full data can be described by a 3rd-order tensor of shape (num_samples, 7, 1), which can be accepted by an LSTM.
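For example, with NumPy (treating your seven values as one sample):

```python
import numpy as np

# One scalar feature observed over 7 time steps (a single sample).
x = np.array([1.23, 2.24, 5.68, 9.54, 6.90, 7.74, 3.26])

# Reshape to (num_samples, seq_len, input_dim) = (1, 7, 1).
x_lstm = x.reshape(1, 7, 1)
print(x_lstm.shape)  # (1, 7, 1)
```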
Simply put, seq_len is the number of time steps that will be input into the LSTM network. Let's understand this with an example.
Suppose you are doing a sentiment classification using LSTM.
Your input sentence to the network is ["I hate to eat apples"]. Each token is fed as input at one time step, so here seq_len is the total number of tokens in the sentence, which is 5.
Coming to input_dim: as you might know, we can't feed words directly to the network; you need to encode those words as numbers. In PyTorch/TensorFlow, embedding layers are used for this, and we have to specify the embedding dimension.
Suppose your embedding dimension is 50. That means the embedding layer will take the index of each token and convert it into a vector representation of size 50. So the input_dim to the LSTM network becomes 50.
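A toy NumPy sketch of that lookup (the vocabulary and the random embedding matrix here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

sentence = ["I", "hate", "to", "eat", "apples"]
vocab = {word: idx for idx, word in enumerate(sentence)}  # toy vocabulary

embedding_dim = 50                      # hypothetical embedding size
embedding = rng.standard_normal((len(vocab), embedding_dim))

# Look up each token's index and fetch its 50-d vector:
indices = [vocab[w] for w in sentence]
inputs = embedding[indices]             # shape (seq_len, input_dim) = (5, 50)
print(inputs.shape)
```

Adding a batch axis in front gives the (1, 5, 50) tensor the LSTM expects: batch size 1, seq_len 5, input_dim 50.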
I've been trying to understand the process of skip-gram learning algorithm. There's this small detail that confuses me.
In the following graph (which is used in many articles and blogs to explain skip-gram), what do the multiple outputs mean? I mean, the input word is the same and the output matrix is the same. Then, when you calculate the output vector, which I believe is the probability distribution over all words appearing near the input word, it should be the same every time.
skipgram model
Hope someone can help me with this~
This article seems to explain it adequately: each "chunk" of the output represents the prediction of a word at one position in the context (the window of words before and after the input word in the text). The output is "really" a single vector, but the diagram is trying to make it clear that it corresponds to C instances of a word-vector, where C is the size of the context.
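A toy NumPy sketch of that idea (all sizes made up): one forward pass for a center word yields a single softmax distribution over the vocabulary, and the C outputs in the diagram are that same distribution compared against C different target words during training:

```python
import numpy as np

rng = np.random.default_rng(0)

V, d, C = 10, 5, 3                    # toy vocab size, embedding dim, context size
W_in = rng.standard_normal((V, d))    # input-to-hidden ("embedding") matrix
W_out = rng.standard_normal((d, V))   # hidden-to-output matrix

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

center_word = 4
hidden = W_in[center_word]            # hidden layer is just a row lookup
probs = softmax(hidden @ W_out)       # one distribution over the vocabulary

# The "C outputs" in the diagram are this same distribution, repeated once
# per context position; only the training targets differ per position.
outputs = np.tile(probs, (C, 1))
assert all(np.allclose(row, probs) for row in outputs)
```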
It's a diagram that's prone to misinterpretation. Each of the three outputs in that diagram should be considered the results for a different input (context) word.
Feed it word 1, and through the hidden layer, to the output layer, you'll get (size-of-vocabulary) V output values (at each node, assuming the easier-to-think-about negative-sampling mode) – the top results in the diagram. Feed it word 2, and you'll get the middle results. Feed it word 3, and you'll get the bottom results.
I have read in some papers (Tomas Mikolov...) that a better way of forming the vector for a sentence is to concatenate the word vectors.
But due to my clumsiness in mathematics, I am still not sure about the details.
For example, supposing that the dimension of a word vector is m, and that a sentence has n words:
what will be the correct result of the concatenation operation?
Is it a row vector of 1 x m*n, or a matrix of m x n?
There are at least three common ways to combine embedding vectors: (a) summing, (b) averaging, or (c) concatenating. So in your case, concatenating would give you a 1 x m*n row vector, where n is the number of words in the sentence. In the other two cases, the vector length stays at m. See gensim.models.doc2vec.Doc2Vec, with its dm_concat and dm_mean parameters - it allows you to use any of those three options [1,2].
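A quick NumPy illustration of the three options (m and n are toy values, and the random word vectors stand in for real embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 4, 3                                     # toy: vector dim m, n words
word_vectors = rng.standard_normal((n, m))      # one row per word in the sentence

concatenated = word_vectors.reshape(1, n * m)   # (c): a 1 x m*n row vector
summed = word_vectors.sum(axis=0)               # (a): length stays m
averaged = word_vectors.mean(axis=0)            # (b): length stays m

print(concatenated.shape, summed.shape, averaged.shape)
```

Note that only concatenation makes the sentence vector grow with n, which is why it is awkward for variable-length sentences.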
[1] http://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.LabeledLineSentence
[2] https://github.com/piskvorky/gensim/blob/develop/gensim/models/doc2vec.py