What do the parameters of DBpedia Spotlight mean?

I am interested in using DBpedia Spotlight. However, it requires values for the two parameters confidence and support. What do these two parameters really mean?
I want to identify the significant, prominent n-grams in a text. In that case, what is the usual recommendation (rule of thumb) for the confidence and support parameters?

When you ask DBpedia Spotlight to annotate text (finding entities/topics), it searches for n-grams that have URIs on DBpedia (n-grams that are Wikipedia titles). Those n-grams are called DBpedia resources.
Support: the resource prominence parameter. It helps you ignore unimportant or uninformative resources. When you set it to a value X, resources with fewer than X Wikipedia in-links are ignored and not returned to you.
Confidence: the disambiguation confidence parameter, a threshold between 0 and 1. Setting it high gives you more trustworthy annotations, but you risk losing some correct ones.
Choosing values of those (or any other) parameters depends on your use case.
Examples:
If you have a test set or gold standard for the type of n-grams you are interested in, you can tune your choice until the results are good enough according to your gold standard.
If you only care about retrieving the top-N n-grams to infer the topic of a text, you can set high values to get a few (mostly) correct n-grams and sort them by confidence.
If you want to get as many n-grams as possible and your task won't be affected or biased by mistakes, you can set low values.
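For instance, a call to the public Spotlight web service could look roughly like this (the endpoint URL and the JSON field names below are taken from the public demo service and may differ for your own installation):

import requests

def annotate(text, confidence=0.5, support=20):
    # Ask the Spotlight web service to annotate the text, passing the two
    # parameters discussed above.
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence, "support": support},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    # Each returned resource carries the matched surface form, the DBpedia URI
    # and a similarity score you can sort by.
    return resp.json().get("Resources", [])

for r in annotate("Berlin is the capital of Germany.", confidence=0.7, support=100):
    print(r["@surfaceForm"], r["@URI"], r["@similarityScore"])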

Nearest neighbor search vs Near duplicate detection

While searching for AI/ML and non-AI/ML solutions to the "near duplicate detection" problem (text, image, audio), I found that there is a similar (or identical?) problem, i.e. "nearest neighbor search", which also seems to be handled in exactly the same way as near duplicate detection. I am wondering whether there are any differences at all between these two problems or their solutions.
The two problem names seem semantically the same from an English perspective.
In a nearest neighbor search you have a set of elements and, given a reference element, you want to search for an element in the set that is the closest to the reference with respect to a given metric.
In near duplicate detection you have a set of elements and, given a reference element, you want to search for an element in the set that is the closest to being a duplicate of the reference with respect to a given metric.
Having said that, in the literature I usually see people use the latter name when the elements in the set are textual documents. In this case, one example algorithm consists of taking the set of windows of size k of each textual document (k-shingles) and comparing two documents using the Jaccard metric between their sets of k-shingles (the number of shingles the two documents have in common divided by the number of distinct shingles). To avoid calculating the Jaccard metric explicitly, there is a theorem: if you hash all the k-shingles to 64-bit integers (for example) and consider a random permutation from 64-bit integers to 64-bit integers, then, applying the permutation to the set of hashed k-shingles of each document, the probability that the smallest elements of the two sets of permuted values are equal is equal to the Jaccard metric between the two documents.
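As a rough sketch of the shingling and MinHash idea (plain Python, with a single random permutation; real implementations average over many permutations to estimate the Jaccard metric):

import random

def shingles(text, k=5):
    # All overlapping character windows of size k (k-shingles).
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# One random permutation of 64-bit integers (an affine map with an odd multiplier).
MASK = (1 << 64) - 1
a_coef, b_coef = random.getrandbits(64) | 1, random.getrandbits(64)

def perm(x):
    return (a_coef * x + b_coef) & MASK

doc1, doc2 = "near duplicate detection of documents", "near duplicate detection of images"
s1, s2 = shingles(doc1), shingles(doc2)
print(jaccard(s1, s2))
# The probability that these two minima coincide equals the Jaccard metric.
print(min(perm(hash(s) & MASK) for s in s1) == min(perm(hash(s) & MASK) for s in s2))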
On the other hand, I usually see people use the first name when the set of elements is a subset of R^n (for example). In this case, many techniques exist; some useful data structures are octrees and k-d trees.
That said, people also use vectorization techniques to convert other sets of elements into subsets of R^n, for example word2vec, signal2vec, etc.
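For the R^n case, a minimal nearest-neighbour query with a k-d tree (here via scipy.spatial.cKDTree, just as an illustration):

import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000, 8)               # 1000 elements of R^8
tree = cKDTree(points)                         # build the k-d tree index
reference = np.random.rand(8)
distance, index = tree.query(reference, k=1)   # closest element to the reference
print(index, distance)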

Are there good ways to reduce the size of a vocabulary in natural language processing?

When working on tasks like text classification and QA, the original vocabulary generated from the corpus is usually too large and contains a lot of 'unimportant' words.
For example, in gensim
gensim.utils.prune_vocab(vocab, min_reduce, trim_rule=None):
Remove all entries from the vocab dictionary with count smaller than min_reduce.
Modifies vocab in place, returns the sum of all counts that were pruned.
But in practice, setting the minimum count is empirical and does not seem very principled. I notice that the term frequency of each word in the vocabulary often follows a long-tail distribution. Is it a good idea to keep only the top-K words that account for X% (95%, 90%, 85%, ...) of the total term frequency? Or are there other sensible ways to reduce the vocabulary without seriously affecting the NLP task?
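To make the coverage idea concrete, this is roughly what I have in mind (toy counts, plain Python):

from collections import Counter

def vocab_covering(counts, coverage=0.95):
    # Keep the most frequent words whose cumulative count reaches the
    # requested share of the total term frequency.
    total = sum(counts.values())
    kept, cum = [], 0
    for word, c in counts.most_common():
        kept.append(word)
        cum += c
        if cum / total >= coverage:
            break
    return kept

counts = Counter("the cat sat on the mat the cat slept".split())
print(vocab_covering(counts, coverage=0.9))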
There are indeed a few recent developments that try to counteract this problem. The most notable one is probably subword units (also known as Byte Pair Encoding, or BPE), which you can imagine as a notion similar to syllables in a word (but not the same!). A word like basketball could then be transformed into variations like bas ##ket ##ball or basket ##ball. Note that this is a constructed example and might not reflect the subwords actually chosen.
The idea itself is relatively old (an article from 1994), but has been recently popularized by Sennrich et al., and is basically used in every state-of-the-art NLP library that has to deal with large vocabularies.
The two biggest implementations of this idea are probably fastBPE and Google's SentencePiece.
With subword units, you now basically have the freedom to choose a fixed vocabulary size, and the algorithm will then try to optimize a trade-off between word diversity and splitting "more complex words" into several pieces, such that your desired vocabulary size can cover any word in the corpus. For the exact algorithm, though, I highly recommend you look into the linked paper or implementations.
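As an illustration, training a BPE model with a fixed vocabulary size via SentencePiece might look like this (assuming the sentencepiece Python package and a plain-text corpus.txt; the parameters are only indicative):

import sentencepiece as spm

# Train a BPE model with a fixed vocabulary size; any word in the corpus can
# then be covered by some sequence of subword units.
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="bpe", vocab_size=8000, model_type="bpe"
)

sp = spm.SentencePieceProcessor(model_file="bpe.model")
print(sp.encode("basketball", out_type=str))   # e.g. ['▁basket', 'ball']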
In general, the least-frequent words in your training data are also the safest to discard.
This is especially the case for 'word2vec' and similar algorithms. There may not be enough varied examples of the usage of each rare word to learn reliable representations – as opposed to weak/idiosyncratic representations based on the few not-necessarily-representative examples of their use that you do have.
Also, rare words won't recur as often in future texts, making their relative value in the model less.
And, by the typical 'zipfian' distribution of word-frequencies in natural-language material, while each individual rare word only occurs a few times, altogether there are many such words. So just discarding words with one to a few instances will often significantly shrink the vocabulary (and thus overall model) by half or more.
Finally, it's been observed with 'word2vec' that when those intervening rare words – which are many in total number, though each individually has only limited-quality examples – are discarded, the quality of the surviving more-frequent word-vectors often improves. Those more-important words have fewer intervening lower-value 'noisy' words moving them out of each other's context windows, or pulling the model's weights in other directions via interleaved training examples.
(Similarly, in adequately large corpora, using more-aggressive frequent-word downsampling, as controlled by the sample parameter, can often increase word-vector quality while also speeding training – though with no savings in overall vocabulary size, as no words are totally eliminated by that setting.)
On the other hand, 'stop words' are insufficiently numerous to offer much vocabulary-size savings when discarded. Discard them, or not, based on whether their presence helps or hurts your later steps & final results – not to save a tiny amount of vocabulary-driven model space.
Note that for gensim's Word2Vec model, and related algorithms, in addition to the min_count parameter which discards all words appearing fewer times than that value, there is also the max_final_vocab parameter, which will dynamically choose whatever min_count is sufficient to achieve a final vocabulary size no larger than the max_final_vocab value.
So if you know you have the system memory to support a 1-million-word model, you don't have to use trial-and-error on min_count values to reach something near that: you can just specify max_final_vocab=1000000, min_count=1.
(On the other hand, be careful with the max_vocab_size parameter. It should only be used to prevent the initial word-count survey from outgrowing available RAM, and thus should be set to the largest value your system can manage – far, far larger than whatever you'd like your actual final vocabulary size to be. That's because the max_vocab_size is enforced whenever the survey-in-progress reaches that size – not just at the end – and discards a lot of the smaller word counts, and then enforces a higher floor each time it's enforced. If this limit is hit at all, it means final counts will only be approximate – & the escalating floor means sometimes the running-vocabulary will be pruned to a mere 10% or so of the full max_vocab_size.)
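For example (a gensim 4.x-style call; the tiny corpus below is just a stand-in for your own iterable of token lists):

from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]   # stand-in corpus

# Let gensim pick whatever min_count caps the final vocabulary at the requested
# size, while leaving max_vocab_size unset so the initial count survey stays exact.
model = Word2Vec(sentences=sentences, vector_size=50, min_count=1,
                 max_final_vocab=1_000_000)
print(len(model.wv))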
You can significantly reduce vocabulary size via text pre-processing tailored to your learning task & domain. Some NLP techniques include:
Remove rare & frequent stop words, not just from pre-defined lists but via learned thresholds, TF-IDF weights or superfluous part-of-speech removal.
Correct spelling/grammar/slang if your text is noisy or comes from different dialects of the same language.
Lemmatize words to remove tense & plurality variants if these relationships don't matter, e.g. played, playing or plays -> play.
Parametrize text with named entities whenever specific details aren't needed, e.g. <PERSON> bought <MONEY> tickets to <LOCATION> for <DATE>.
Disambiguate & substitute synonyms with the most frequent interpretation, e.g. bedrooms are spacious -> rooms are big.
Simplify contractions & negations, e.g. I don't dislike it -> I do not dislike it ~> I like it.
Resolve co-references so that pronouns are made explicit, e.g. John said he will go -> John said John will go.
Reduce dimensionality with SVD to automatically capture equivalent phrases.
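A rough sketch of two of these steps (lemmatization and named-entity parametrization) with spaCy, assuming the en_core_web_sm model is installed; details will vary with the pipeline you use:

import spacy

nlp = spacy.load("en_core_web_sm")

def normalize(text):
    doc = nlp(text)
    out = []
    for tok in doc:
        if tok.ent_type_:                    # replace named entities by their type
            out.append(f"<{tok.ent_type_}>")
        elif not tok.is_stop and not tok.is_punct:
            out.append(tok.lemma_.lower())   # lemmatize the remaining words
    return " ".join(out)

print(normalize("John bought 200 tickets to Paris for Friday."))
# e.g. "<PERSON> buy <CARDINAL> ticket <GPE> <DATE>"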

When are uni-grams more suitable than bi-grams (or higher N-grams)?

I am reading about n-grams and I am wondering whether there are cases in practice where uni-grams are preferred over bi-grams (or higher-order N-grams). As I understand it, the bigger N, the greater the complexity of calculating the probabilities and establishing the vector space. But apart from that, are there other reasons (e.g. related to the type of data)?
This boils down to data sparsity: as your n-gram length increases, the number of times you will see any given n-gram decreases. In the most extreme example, if you have a corpus where the maximum document length is n tokens and you are looking for an m-gram with m = n + 1, you will, of course, have no data points at all, because a sequence of that length simply cannot occur in your data set. The sparser your data set, the worse you can model it. For this reason, even though a higher-order n-gram model in theory contains more information about a word's context, it cannot easily generalize to other data sets (known as overfitting), because the number of events (i.e. n-grams) it has seen during training becomes progressively smaller as n increases. On the other hand, a lower-order model lacks contextual information and so may underfit your data.
For this reason, if you have a relatively large number of token types (i.e. the vocabulary of your text is very rich) but each of these types has a very low frequency, you may get better results with a lower-order n-gram model. Similarly, if your training data set is very small, you may do better with a lower-order n-gram model. However, assuming that you have enough data to avoid overfitting, you will get better separability of your data with a higher-order model.
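To see the sparsity effect concretely, counting n-grams of increasing order over the same toy text shows how quickly most events become singletons:

from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat and the dog sat on the rug".split()
for n in (1, 2, 3):
    counts = Counter(ngrams(tokens, n))
    singletons = sum(1 for c in counts.values() if c == 1)
    print(n, len(counts), "distinct n-grams,", singletons, "seen only once")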
Usually, n-grams with n greater than 1 are better, as they carry more information about the context in general. However, unigrams are sometimes computed alongside bigrams and trigrams and used as a fallback for them. Unigrams are also useful if you want high recall rather than precision, for instance when searching for all possible uses of the verb "make".
Let's use statistical machine translation as an example:
Intuitively, the best scenario is that your model has seen the full sentence (let's say a 6-gram) before and knows its translation as a whole. If this is not the case, you try to divide it into smaller n-grams, keeping in mind that the more information you have about a word's surroundings, the better the translation. For example, if you want to translate "Tom Green" to German and you have seen the bi-gram, you will know it is a person's name and should stay as it is; but if your model never saw it, you would fall back to unigrams and translate "Tom" and "Green" separately, so "Green" would be translated as the color "Grün", and so on.
Also, in search, knowing more about the surrounding context makes the results more accurate.
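A toy sketch of that back-off behaviour (the phrase tables below are made up purely for illustration):

# Made-up phrase tables for illustration only.
bigram_table = {("tom", "green"): "Tom Green"}        # known as a person name
unigram_table = {"tom": "Tom", "green": "grün"}       # "green" alone is the colour

def translate(tokens):
    out, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in bigram_table:              # prefer the larger context
            out.append(bigram_table[pair])
            i += 2
        else:                                 # fall back to unigrams
            out.append(unigram_table.get(tokens[i], tokens[i]))
            i += 1
    return " ".join(out)

print(translate("tom green".split()))   # Tom Green
print(translate("green".split()))       # grün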

Representing relations as features for supervised learning tasks

I am trying to use relations between objects for a supervised learning task.
For example, given a text like "Cats eat fish", I would like to use the relation Cats-eat-fish as a feature for the learning task (namely, identifying the sense of a word). I thus would like to represent this relation numerically so that I can use it as a feature for learning a model. Any suggestions on how I could accomplish that? I was thinking of hashing it to an integer, but that could pose challenges: two relations that are semantically the same could have two very different hash values. I would ideally like two similar relations (e.g. lives and resides) to hash to the same value. I guess I would also need to figure out whether I can canonicalize relations before hashing.
Other approaches, perhaps not using numerical features, would also be useful. I am also wondering if there are graph-based approaches to this problem.
I'd suggest making (very large numbers of) binary features for all possible relation types, and then possibly running some form of dimensionality reduction on the resulting (very sparse) feature space.
Another way to do this, which would reduce sparsity, would be to replace the bare words with entity types, for example [animal] eats [animal], or even [animate] eats [animate], and then use binary features in this space. You want to avoid mapping to numerical values on a single dimension because you'll impose spurious ordinal relations between features if you do.
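One way to sketch the binary-feature idea, here with scikit-learn's DictVectorizer and made-up, entity-typed relation features:

from sklearn.feature_extraction import DictVectorizer

# Each relation type becomes one sparse binary indicator feature; replacing the
# bare words with entity types ("[animal] eat [animal]") reduces sparsity.
examples = [
    {"rel=[animal]_eat_[animal]": 1},
    {"rel=[person]_reside_[location]": 1},
    {"rel=[person]_reside_[location]": 1, "rel=[animal]_eat_[animal]": 1},
]
vec = DictVectorizer(sparse=True)
X = vec.fit_transform(examples)
print(vec.get_feature_names_out())
print(X.toarray())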
How about representing verbs by features that express typical words preceding the verb (usually the subject) and typical words following it (usually the object)? Say you take the 500 most frequent words (or, even better, the most discriminating words); each verb would then be represented as a 1000-dimensional vector. Each feature in the vector can be binary (is the word present with frequency above a certain threshold or not), a raw count, or, probably best, a logarithm of the count. You can then run PCA to reduce the vector to a smaller dimension.
The approach above is probabilistic, which might be good or bad depending on what you want. If you want to do it precisely, with a lot of manual input, then look at situation semantics.
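A compressed sketch of the context-vector idea with scikit-learn (tiny made-up counts stand in for the 500 left-context and 500 right-context words):

import numpy as np
from sklearn.decomposition import PCA

# Rows: verbs; columns: counts of frequent words seen before / after the verb.
# In practice this would be 500 + 500 columns; 3 + 3 keeps the sketch small.
verbs = ["eat", "drink", "read"]
context_counts = np.array([
    [12, 3, 0,  9, 1, 0],
    [10, 4, 0,  7, 2, 0],
    [ 1, 0, 8,  0, 0, 11],
], dtype=float)

reduced = PCA(n_components=2).fit_transform(np.log1p(context_counts))  # log counts, then PCA
for verb, row in zip(verbs, reduced):
    print(verb, row)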

Binarization in Natural Language Processing

Binarization is the act of transforming colorful features of an entity into vectors of numbers, most often binary vectors, to make good examples for classifier algorithms.
If we were to binarize the sentence "The cat ate the dog", we could start by assigning every word an ID (for example cat-1, ate-2, the-3, dog-4) and then simply replace each word by its ID, giving the vector <3,1,2,3,4>.
Given these IDs, we could also create a binary vector by giving each word four possible slots and setting the slot corresponding to a specific word to one, giving the vector <0,0,1,0,1,0,0,0,0,1,0,0,0,0,0,1>. The latter method is, as far as I know, commonly referred to as the bag-of-words method.
Now for my question: what is the best binarization method when it comes to describing features for natural language processing in general, and transition-based dependency parsing (with Nivre's algorithm) in particular?
In this context, we do not want to encode the whole sentence, but rather the current state of the parse, for example the top word on the stack and the first word in the input queue. Since order is highly relevant, this rules out the bag-of-words method.
By best, I am referring to the method that makes the data the most intelligible for the classifier, without using up unnecessary memory. For example, I don't want a word bigram to use 400 million features for 20,000 unique words if only 2% of the bigrams actually exist.
Since the answer also depends on the particular classifier, I am mostly interested in maximum entropy models (liblinear), support vector machines (libsvm) and perceptrons, but answers that apply to other models are also welcome.
This is actually a really complex question. The first decision you have to make is whether to lemmatize your input tokens (your words). If you do this, you dramatically decrease your type count, and your syntax parsing gets a lot less complicated. However, it takes a lot of work to lemmatize a token. Now, in a computer language, this task gets greatly reduced, as most languages separate keywords or variable names with a well defined set of symbols, like whitespace or a period or whatnot.
The second crucial decision is what you're going to do with the data post-facto. The "bag-of-words" method, in the binary form you've presented, ignores word order, which is completely fine if you're doing summarization of a text or maybe a Google-style search where you don't care where the words appear, as long as they appear. If, on the other hand, you're building something like a compiler or parser, order is very much important. You can use the token-vector approach (as in your second paragraph), or you can extend the bag-of-words approach such that each non-zero entry in the bag-of-words vector contains the linear index position of the token in the phrase.
Finally, if you're going to be building parse trees, there are obvious reasons why you'd want to go with the token-vector approach, as it's a big hassle to maintain sub-phrase ids for every word in the bag-of-words vector, but very easy to make "sub-vectors" in a token-vector. In fact, Eric Brill used a token-id sequence for his part-of-speech tagger, which is really neat.
Do you mind if I ask what specific task you're working on?
"Binarization is the act of transforming colorful features of an entity into vectors of numbers, most often binary vectors, to make good examples for classifier algorithms."
I have mostly come across numeric features that take values between 0 and 1 (not binary as you describe), representing the relevance of the particular feature in the vector (between 0% and 100%, where 1 represents 100%). A common example of this is tf-idf vectors: in the vector representing a document (or sentence), you have a value for each term in the entire vocabulary that indicates the relevance of that term to the represented document.
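For instance, tf-idf document vectors of the kind described here can be produced with scikit-learn's TfidfVectorizer (just one possible tool):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["The cat ate the dog", "The dog sat on the mat"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)            # one row per document, one column per term
print(vec.get_feature_names_out())
print(X.toarray().round(2))            # values indicating each term's relevance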
As Mike already said in his reply, this is a complex problem in a wide field. In addition to his pointers, you might find it useful to look into some information retrieval techniques like the vector space model, vector space classification and latent semantic indexing as starting points. Also, the field of word sense disambiguation deals a lot with feature representation issues in NLP.
[Not a direct answer] It all depends on what you are trying to parse and then process, but for general short human phrase processing (e.g. IVT) another method is to use neural networks to learn the patterns. This can be very accurate for smallish vocabularies.
