I'm new to Named Entity Recognition and I'm having some trouble understanding what/how features are used for this task.
Some papers I've read so far mention features used, but don't really explain them, for example in
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, the following features are mentioned:
Main features used by the sixteen systems that participated in the
CoNLL-2003 shared task sorted by performance on the English test data.
Aff: affix information (n-grams); bag: bag of words; cas: global case
information; chu: chunk tags; doc: global document information; gaz:
gazetteers; lex: lexical features; ort: orthographic information; pat:
orthographic patterns (like Aa0); pos: part-of-speech tags; pre:
previously predicted NE tags; quo: flag signing that the word is
between quotes; tri: trigger words.
I'm a bit confused by some of these, however. For example:
isn't bag of words supposed to be a method to generate features (one for each word)? How can BOW itself be a feature? Or does this simply mean we have a feature for each word as in BOW, besides all the other features mentioned?
how can a gazetteer be a feature?
how can POS tags exactly be used as features? Don't we have a POS tag for each word? Isn't each object/instance a "text"?
what is global document information?
what is the feature trigger words?
I think all I need here is to look at an example table with each of these features as columns and see their values to understand how they really work, but so far I've failed to find an easy-to-read dataset.
Could someone please clarify or point me to some explanation or example of these features being used?
Here's a shot at some answers (and by the way the terminology on all this stuff is super overloaded).
isn't bag of words supposed to be a method to generate features (one for each word)? How can BOW itself be a feature? Or does this simply mean we have a feature for each word as in BOW, besides all the other features mentioned?
how can a gazetteer be a feature?
In my experience, BOW feature extraction is used to produce word features out of sentences. So IMO BOW is not one feature; it is a method of generating features out of a sentence (or whatever block of text you are using). Using n-grams can help with accounting for sequence, but BOW features amount to unordered bags of strings.
how can POS tags exactly be used as features? Don't we have a POS tag for each word?
POS tags are used as features because they can help with "word sense disambiguation" (at least on a theoretical level). For instance, the word "May" can be a person's name, a month of the year, or a poorly capitalized verb, but the POS tag can be the feature that makes that distinction. And yes, you can get a POS tag for each word, but unless you explicitly include those tags in your "feature space", the words themselves carry no information about their POS.
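For a quick sanity check, NLTK's tagger shows the kind of signal I mean (a minimal sketch; exact tags depend on which tagger model you have installed):

import nltk

# May require one-time downloads:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

sentence = "May I ask whether May met Mr. Smith in May?"
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# The tagger typically marks the first "May" as a modal (MD) and the name/month
# occurrences as proper nouns (NNP) - exactly the kind of extra per-token signal
# an NER classifier can consume as a feature.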
Isn't each object/instance a "text"?
If you mean what I think you mean, then this is true only if you have extracted object-instance "pairs" and stored them as features (an array of them derived from a string of tokens).
what is global document information?
I take this one to mean the following: most NLP tasks operate on a single sentence, while global document information is data from all the surrounding text in the entire document. For instance, suppose you are trying to extract geographic place names and disambiguate them, and you find the word Paris; which one is it? Well, if France is mentioned 5 sentences above, that increases the likelihood of it being Paris, France rather than Paris, Texas or, worst case, the person Paris Hilton. Global document information is also really important in what is called "coreference resolution", which is when you correlate a name to a pronoun reference (mapping a name mention to "he" or "she", etc.).
what is the feature trigger words?
Trigger words are specific tokens or sequences that, on their own, reliably signal a specific meaning. For instance, in sentiment analysis, curse words with exclamation marks often indicate negativity. There can be many permutations of this.
Anyway, my answers here are not perfect, and are prone to all manner of problems in human epistemology and inter-subjectivity, but that's the way I've been thinking about these things over the years I've been trying to solve problems with NLP.
Hopefully someone else will chime in, especially if I'm way off.
You should probably keep in mind that NER classifies each word/token separately, using features that are either internal or external clues. Internal clues take into account the word itself (morphology such as uppercase letters, whether the token is present in a dedicated lexicon, POS), while external ones rely on contextual information (previous and next words, document features).
isn't bag of words supposed to be a method to generate features (one
for each word)? How can BOW itself be a feature? Or does this simply
mean we have a feature for each word as in BOW, besides all the other
features mentioned?
Yes, BOW generates one feature per word, sometimes combined with feature selection methods that reduce the number of features taken into account (e.g. a minimum word frequency).
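As a sketch of what that looks like in practice (using scikit-learn here, which is my choice of tooling, not something mandated by the CoNLL systems):

from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Germany imported sheep from Britain",
    "Britain exported sheep to Germany",
    "Baghdad officials met in Baghdad",
]

# One feature per word; min_df=2 keeps only words that occur in at least 2 documents,
# which is the "minimum frequency" style of feature selection mentioned above.
vectorizer = CountVectorizer(min_df=2)
X = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())  # e.g. ['britain' 'germany' 'sheep']
print(X.toarray())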
how can a gazetteer be a feature?
A gazetteer may also generate one feature per word, but in most cases it enriches the data by labelling words or multi-word expressions (such as full proper names). It is an ambiguous step: "George Washington" will lead to two features: the entire "George Washington" as a celebrity and "Washington" on its own as a city.
how can POS tags exactly be used as features? Don't we have a POS tag
for each word? Isn't each object/instance a "text"?
For classifiers, each instance is a word. This is why sequence labelling methods (e.g. CRF) are used: they make it possible to leverage the previous and next words as additional contextual features when classifying the current word. Labelling a text then becomes a process of finding the most likely NE tag for each word in the sequence.
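To make that concrete, here is a minimal sketch of the kind of per-token feature dictionary such a classifier typically consumes (the helper function and the tiny gazetteer/trigger lists are made up for illustration):

CITY_GAZETTEER = {"washington", "paris", "baghdad"}   # toy gazetteer
TRIGGER_TITLES = {"mr", "mrs", "dr", "president"}     # toy trigger words

def token_features(tokens, pos_tags, i):
    """Internal and external clues for the token at position i."""
    word = tokens[i]
    return {
        # internal clues: the word itself and its shape
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "word.isupper": word.isupper(),
        "suffix3": word[-3:],
        "pos": pos_tags[i],
        "in_city_gazetteer": word.lower() in CITY_GAZETTEER,
        # external clues: the surrounding context
        "prev.word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        "prev.is_trigger": i > 0 and tokens[i - 1].lower().rstrip(".") in TRIGGER_TITLES,
    }

tokens = ["Mr.", "Smith", "visited", "Paris", "."]
pos_tags = ["NNP", "NNP", "VBD", "NNP", "."]
print(token_features(tokens, pos_tags, 1))   # features for "Smith"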
what is global document information?
This could be metadata (e.g. date, author), topics (full text categorization), coreference, etc.
what is the feature trigger words?
Triggers are external clues: contextual patterns that help disambiguation. For instance, "Mr" will be used as a feature that strongly suggests that the following tokens form a person's name.
I recently implemented an NER system in Python and I found the following features helpful (a couple of them are sketched below):
character-level ngrams (using CountVectorizer)
previous word features and labels (i.e. context)
viterbi or beam-search on label sequence probability
part of speech (pos), word-length, word-count, is_capitalized, is_stopword
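For instance, the character-level n-grams mentioned in the first item could be sketched like this (assuming scikit-learn; this is illustrative, not my production code):

from sklearn.feature_extraction.text import CountVectorizer

# char_wb builds character n-grams only from text inside word boundaries,
# which keeps the n-grams tied to individual tokens.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(["Washington", "Wellington", "Wilmington"])
print(vectorizer.get_feature_names_out()[:10])
print(X.shape)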
I have a list of 2 million podcast names. I also have a huge text corpus scraped from a subreddit (posts, comments, threads, etc.) where the podcasts from our list are mentioned a lot by the users. The task I'm trying to solve is to count the number of mentions of each name in our corpus. In other words, generate a dictionary of (name: count) pairs.
The challenge here is that most of these podcast names are several words long, e.g. "Utah's Noon News" or "Congress Hears Tech Policy Debates". However, the mentions which Reddit users make are often a crude substring of the original name, e.g. "Utah Noon" / "Utah New" or "Congress Tech Debates" / "Congress Hears Tech". This makes identifying names from the list quite difficult.
What I've Tried:
First, I processed and concatenated all the words in the original podcast names into a single word. For instance,
"Congress Hears Tech Policy Debates" -> "Congresshearstechpolicydebates"
As I traversed the subreddit corpus, whenever I found a named-entity or a potential podcast name, I processed its words like this,
"Congress Hears Tech" (assuming this is what I found in the corpora) -> "congresshearstech"
I compared this "congresshearstech" string to all the processed names in the podcast list. I make this comparison using scores calculated on word-spelling similarity. I did this using the difflib Python library. There are also similarity scores like Levenshtein and Hamming distance. Eventually, I credited the mention to the podcast name with the maximum similarity score to our corpus-found string.
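Roughly, the comparison step looks like this (a simplified sketch of the approach described above, using difflib; the real list is of course 2 million names long):

import difflib

def normalize(name):
    # "Congress Hears Tech Policy Debates" -> "congresshearstechpolicydebates"
    return "".join(name.lower().split())

podcast_names = ["Utah's Noon News", "Congress Hears Tech Policy Debates"]
normalized = {normalize(n): n for n in podcast_names}

def best_match(mention):
    key = normalize(mention)
    scored = [(difflib.SequenceMatcher(None, key, cand).ratio(), orig)
              for cand, orig in normalized.items()]
    return max(scored)   # (score, original podcast name)

print(best_match("Congress Hears Tech"))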
My problem:
The thing is, the above strategy is in fact working accurately. However, it's way too slow to run over the entire corpus, and my list of names is way too long. Can anyone please suggest a faster algorithm/data structure to compare so many names against such a huge corpus? Is there any deep-learning-based approach possible here? Something like training an LSTM on the 2 million podcast names, so that whenever a possible name is encountered, the trained model can output the closest spelling of any podcast from our list?
You may be able to use something like tf-idf and cosine similarity to solve this problem. I'm not familiar with any machine learning approach that would be helpful here.
This article gives a more detailed description of the process and links to some useful libraries. You should also read this article which describes a somewhat similar project to yours and includes information on improving performance. I'll describe the method as I understand it here.
tf-idf is an acronym meaning "term frequency inverse document frequency". Essentially, you look at a subset of text and find the frequency of the terms in your subset relative to the frequency of those terms in the entire corpus of text. Terms that are common in your subset and in the corpus as a whole will have a low value, whereas terms that are common in your subset but rare in the corpus would have a high value.
If you can compute the tf-idf for a "document" (or subset of text) you can turn a subset of text into a vector of tf-idf values. Once you have this vector you can use it to compute the cosine-similarity of your text subset with other subsets. Say, find the similarity of an excerpt from reddit with all of your titles. (There is a way to manage this so you aren't continuously checking each reddit excerpt against literally every title - see this post).
Once you can do this then I think the solution is to pick some value n, and scan through the reddit posts n words at a time doing the tf-idf / cosine similarity scan on your titles and marking matches when the cosine-similarity is higher than a certain value (you'll need to experiment with this to find what gives you a good result). Then, you decrement n and repeat until n is 0.
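Here is a minimal sketch of that matching step (my own illustration with scikit-learn; character n-grams are one reasonable choice of analyzer for near-duplicate name matching, but you should experiment with it):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["Utah's Noon News", "Congress Hears Tech Policy Debates"]

# Character n-gram tf-idf is forgiving of missing words and small spelling changes.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
title_matrix = vectorizer.fit_transform(titles)

def best_title(excerpt, threshold=0.5):
    sims = cosine_similarity(vectorizer.transform([excerpt]), title_matrix)[0]
    best = sims.argmax()
    return (titles[best], sims[best]) if sims[best] >= threshold else (None, sims[best])

print(best_title("congress hears tech"))
print(best_title("utah noon"))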
If exact text matching (with or without your whitespace-removal preprocessing) is sufficient, consider the Aho-Corasick string matching algorithm for detecting substring matches (i.e. the podcast names) in a body of text (i.e. the subreddit content). There are many implementations of this algorithm for Python, but ahocorapy has a good readme that summarizes how to use it on a dataset.
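A rough sketch of what that could look like with ahocorapy, based on my reading of its readme (double-check the exact API against the current documentation):

from ahocorapy.keywordtree import KeywordTree

podcast_names = ["utahs noon news", "congress hears tech policy debates"]

kwtree = KeywordTree(case_insensitive=True)
for name in podcast_names:
    kwtree.add(name)
kwtree.finalize()

text = "i was listening to congress hears tech policy debates yesterday"
counts = {}
# search_all yields (keyword, start_index) pairs for every match in the text
for keyword, start_idx in kwtree.search_all(text):
    counts[keyword] = counts.get(keyword, 0) + 1
print(counts)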
If fuzzy matching is a requirement (i.e. also matching when the mention text of the podcast name is not an exact match), then consider a fuzzy string-matching library like thefuzz (aka fuzzywuzzy), provided per-query-document operations offer sufficient performance. Another approach is to precompute n-grams from the substrings and accumulate the support counts across all n-grams for each document, as the fuzzyset package does.
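For example, something along these lines with thefuzz (a sketch; token_set_ratio is just one of the scorers it offers and may not be the best fit for your data):

from thefuzz import fuzz, process

podcast_names = ["Utah's Noon News", "Congress Hears Tech Policy Debates"]

mention = "Congress Tech Debates"
# token_set_ratio tolerates missing words and reordering to a degree.
best_name, score = process.extractOne(mention, podcast_names, scorer=fuzz.token_set_ratio)
print(best_name, score)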
If additional information about the podcasts is available in a knowledge base (i.e. more than just the name is known), then the problem is more like the general NLP task of entity linking but to a custom knowledge base (i.e. the podcast list). This is an area of active research and state of the art methods are discussed on NLP Progress here.
My Motivations
I'm trying to learn German and realized there's a confounding aspect of the structure of German: every noun has a gender, which in many cases seems unrelated to the noun itself.
Unlike in languages such as English, each noun takes a different definite article depending on its gender: der (masculine), die (feminine), and das (neuter). For example:
das Mädchen ("the girl"), der Rock ("the skirt"), die Hose ("the trousers/pants"). So, there seems to be no correlation between the gender assignment of nouns and their meanings.
The Data
I gathered up to 5000 German words, with 3 columns (das, der, die) for each word filled with 1s and 0s. So my data is already grouped via one-hot encoding, and I'm not trying to predict anything.
Why I'm here
I am clueless about where to start or how to approach this problem, as the concept of distance in clustering doesn't make sense to me in this setting. I can't think of a way to generate an understandable description of these clusters. The mixed data makes it impossible for me to think of hard-coded metrics for evaluation.
So, my question is:
I want to find some patterns, some characteristics of these words that make them fall into a specific cluster. I don't know if I'm making any sense, but some people have already managed to find some patterns (for example word endings, or that elongated objects tend to be masculine, etc.), and I believe ML/AI could do a far better job at this. Would it be possible for me to do something like this?
Some personal thoughts
While doing some (perhaps naive) research, I realized the potential options are decision trees and the COBWEB algorithm. I was also thinking I could scrape a few images (say 5) for every word, run some image classification, and inspect the intermediate layers of the NN to see whether any specific shapes are associated with a specific gender. In addition, I was wondering whether scraping Google n-gram viewer data for these words could help in any way. I couldn't think of a way to use NLP or its subdomains.
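For instance, something like the following is the kind of experiment I have in mind for the word-endings idea (a rough sketch with scikit-learn and a toy word list; my real table would replace the example words):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.pipeline import make_pipeline

# Toy data: (noun, article). The full 5000-word table would go here instead.
words    = ["Mädchen", "Rock", "Hose", "Freiheit", "Lehrer", "Blume"]
articles = ["das", "der", "die", "die", "der", "die"]

# Character n-grams of the word, so endings like "-heit" or "-chen" become features.
pipeline = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    DecisionTreeClassifier(max_depth=5, random_state=0),
)
pipeline.fit(words, articles)

# Inspect which character patterns the tree splits on.
vect, tree = pipeline.named_steps.values()
print(export_text(tree, feature_names=list(vect.get_feature_names_out())))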
Alternatives
If everything I just wrote sounds nonsensical, please suggest a way to make visual representations of my dataframe in Python (more like nodes and paths with images at the nodes, one for each cluster) so that I could make pictorial mind maps and try to learn them by heart.
The ultimate purpose is to make learning German simpler for myself and possibly for others
How can I extract a phrase from a sentence using a dataset which has a set of sentences and corresponding labels of the form:
Sentence1: I want to play cricket
Label1: play cricket
Sentence2: Need to wash my clothes
Label2: wash clothes
I have tried using chunking with NLTK, but I am not able to use the training data along with the chunks.
The "reminder paraphrases" you describe don't map exactly to other kinds of "phrases" with explicit software support.
For example, the gensim Phrases module uses a purely statistical approach to discover neighboring word-pairings that are so common, relative to the base rates of each word individually, that they might usefully be considered a combined unit. It might turn certain entities into phrases (eg: "New York" -> "New_York"), or repeated idioms (eg: "slacking off" -> "slacking_off"). But it'd only be neighboring-runs-of-words, and not the sort of contextual paraphrase you're seeking.
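For completeness, that statistical pairing looks roughly like this (a sketch; the min_count/threshold values are arbitrary toy settings):

from gensim.models.phrases import Phrases, Phraser

sentences = [
    ["i", "want", "to", "visit", "new", "york"],
    ["new", "york", "is", "busy"],
    ["she", "moved", "to", "new", "york"],
]

# Score neighboring word pairs by co-occurrence relative to their individual frequencies.
phrases = Phrases(sentences, min_count=1, threshold=1)
bigram = Phraser(phrases)
print(bigram[["i", "want", "to", "visit", "new", "york"]])
# -> ['i', 'want', 'to', 'visit', 'new_york']  (if the pair scores above the threshold)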
Similarly, libraries which are suitably grammar-aware to mark-up logical parts-of-speech (and inter-dependencies) also tend to simply group and label existing phrases in the text – not create simplified, imperative summaries like you desire.
Still, such libraries' output might help you work up your own rules of thumb. For example, it appears that in your examples so far, your desired "reminder paraphrase" is always one verb and one noun (that verb's object). So after part-of-speech tagging (as from NLTK or SpaCy), choosing the last verb (perhaps preferring verbs in present/imperative tense) and the noun phrase that follows it (perhaps stripped of other modifiers/prepositions) may do most of what you need.
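A rough sketch of that heuristic with spaCy (assuming the small English model is installed; the reminder_phrase helper is made up here, and the rule is only the verb-plus-object idea above, not a general solution):

import spacy

nlp = spacy.load("en_core_web_sm")   # python -m spacy download en_core_web_sm

def reminder_phrase(text):
    doc = nlp(text)
    verbs = [tok for tok in doc if tok.pos_ == "VERB"]
    if not verbs:
        return None
    verb = verbs[-1]                          # last verb in the sentence
    objects = [c for c in verb.children if c.dep_ in ("dobj", "obj")]
    return f"{verb.lemma_} {objects[0].text}" if objects else verb.lemma_

# Exact output depends on the parser version, but for simple imperatives it
# tends to recover the verb + object pair you're after.
print(reminder_phrase("I want to play cricket"))    # e.g. "play cricket"
print(reminder_phrase("Need to wash my clothes"))   # e.g. "wash clothes"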
Of course, more complicated examples would need better heuristics. And if the full range of texts you need to work on is very varied, finding a general approach might require many more (hundreds/thousands) of positive training examples: what you think the best paraphrase is, given certain texts. Then, you could consider a number of machine-learning methods that might be able to pick the right ~2 words from larger texts.
Researching published work for "paraphrasing", rather than just "phrase extraction", might also guide you to ideas, but I unfortunately don't know any ready-to-use paraphrasing libraries.
I am using Gensim to train Word2Vec. I know word similarities are determined by whether the words can replace each other and still make sense in a sentence. But can word similarities be used to extract relationships between entities?
Example:
I have a bunch of interview documents, and in each interview the interviewee always says the name of their manager. If I wanted to extract the name of the manager from these interview transcripts, could I just get a list of all human names in the document (using NLP), and assume that the name most similar to the word "manager" according to Word2Vec is most likely the manager?
Does this thought process make any sense with Word2Vec? If it doesn't, would the ML solution to this problem then be to input my word embeddings into a sequence to sequence model?
Yes, word-vector similarities & relative-arrangements can indicate relationships.
In the original Word2Vec paper, this was demonstrated by using word-vectors to solve word-analogies. The most famous example involves the analogy "'man' is to 'king' as 'woman' is to ?".
By starting with the word-vector for 'king', then subtracting the vector for 'man', and adding the vector for 'woman', you arrive at a new point in the coordinate system. And then, if you look for other words close to that new point, often the closest word will be queen. Essentially, the directions & distances have helped find a word that's related in a particular way – a gender-reversed equivalent.
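With Gensim this is the familiar most_similar call (a sketch assuming a large pretrained vector set, e.g. the word2vec-google-news-300 vectors available through gensim's downloader):

import gensim.downloader as api

# Roughly a 1.6 GB download the first time; any large pretrained KeyedVectors will do.
wv = api.load("word2vec-google-news-300")

# king - man + woman: 'queen' is typically the top hit.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Plain nearest neighbours of a role word, for the kind of probing described below.
print(wv.most_similar("manager", topn=5))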
And, in large news-based corpuses, famous names like 'Obama' or 'Bush' do wind up with vectors closer to their well-known job titles like 'president'. (There will be many contexts in such corpuses where the words appear immediately together – "President Obama today signed…" – or simply in similar roles – "The President appointed…" or "Obama appointed…", etc.)
However, I suspect that's less-likely to work with your 'manager' interview-transcripts example. Achieving meaningful word-to-word arrangements depends on lots of varied examples of the words in shared usage contexts. Strong vectors require large corpuses of millions to billions of words. So the transcripts with a single manager wouldn't likely be enough to get a good model – you'd need transcripts across many managers.
And in such a corpus each manager's name might not be strongly associated with just manager-like contexts. The same name(s) will be repeated when mentioning other roles as well, and transcripts may not refer to managerial action in helpful third-person ways that leave specific name-vectors well-positioned. (That is, there won't be clean expository statements like "John_Smith called a staff meeting" or "John_Smith cancelled the project", alongside others like "…manager John_Smith…" or "The manager cancelled the project".)
I am working on developing a tool for language identification of a given text i.e. given a sample text, identify the language (for e.g. English, Swedish, German, etc.) it is written in.
Now, the strategy I have decided to follow (based on a few references I have gathered) is as follows:
a) Create a character n-gram model (The value of n is decided based on certain heuristics and computations)
b) Use a machine learning classifier (such as Naive Bayes) to predict the language of the given text.
Now, the doubt I have is: is creating a character n-gram model necessary? That is, what disadvantage does a simple bag-of-words strategy have, i.e. if I use all the words possible in the respective language to create a prediction model, what could be the possible cases where it would fail?
This doubt arose because every reference document/research paper I've come across states that language identification is a very difficult task, yet the strategy of simply using the words of each language seems straightforward.
EDIT: One reason why n-grams should be preferred is that they make the model robust even if there are typos, as stated here. Can anyone point out more?
if I use all the words possible in the respective language to create a prediction model, what could be the possible cases where it would fail
Pretty much the same cases where a character n-gram model would fail. The problem is that you're not going to find appropriate statistics for all possible words.(*) Character n-gram statistics are easier to accumulate and more robust, even for text without typos: words in a language tend to follow the same spelling patterns. E.g. had you not found statistics for the Dutch word "uitbuiken" (a pretty rare word), the occurrence of the n-grams "uit", "bui" and "uik" would still be strong indicators of the text being Dutch.
(*) In agglutinative languages such as Turkish, new words can be formed by stringing morphemes together and the number of possible words is immense. Check the first few chapters of Jurafsky and Martin, or any undergraduate linguistics text, for interesting discussions on the possible number of words per language.
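To make the comparison concrete, the character n-gram variant of your plan is only a few lines with scikit-learn (a toy sketch; a real system would be trained on far more text per language):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "the quick brown fox jumps over the lazy dog",
    "this is a simple english sentence",
    "der schnelle braune fuchs springt über den faulen hund",
    "das ist ein einfacher deutscher satz",
    "de snelle bruine vos springt over de luie hond",
    "dit is een eenvoudige nederlandse zin",
]
train_labels = ["en", "en", "de", "de", "nl", "nl"]

# Character n-grams (1-3) + Naive Bayes, i.e. steps a) and b) from the question.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(train_texts, train_labels)
print(model.predict(["hij begint uit te buiken", "ein unbekanntes wort"]))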
Cavnar and Trenkle proposed a very simple yet efficient approach using character n-grams of variable length. Maybe you should try to implement it first and move to a more complex ML approach if the C&T approach doesn't meet your requirements.
Basically, the idea is to build a language model using only the X (e.g. X = 300) most frequent n-grams of variable length (e.g. 1 <= N <= 5). Doing so, you are very likely to capture most functional words/morphemes of the considered language... without any prior linguistic knowledge on that language!
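A bare-bones sketch of that profile idea (my simplified paraphrase of C&T's out-of-place measure, not a faithful reimplementation of the paper):

from collections import Counter

def ngram_profile(text, max_n=5, top_k=300):
    """Return the top_k most frequent character n-grams (1 <= n <= max_n), ranked."""
    text = " " + " ".join(text.lower().split()) + " "
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    return [gram for gram, _ in counts.most_common(top_k)]

def out_of_place(doc_profile, lang_profile):
    """Smaller distance = better match; missing n-grams get a maximum penalty."""
    ranks = {gram: i for i, gram in enumerate(lang_profile)}
    max_penalty = len(lang_profile)
    return sum(abs(i - ranks[gram]) if gram in ranks else max_penalty
               for i, gram in enumerate(doc_profile))

# Toy language profiles; real ones are built from sizeable training corpora.
profiles = {
    "en": ngram_profile("the cat sat on the mat and the dog ran off"),
    "nl": ngram_profile("de kat zat op de mat en de hond liep weg"),
}
doc = ngram_profile("de hond en de kat")
print(min(profiles, key=lambda lang: out_of_place(doc, profiles[lang])))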
Why would you choose character n-grams over a BoW approach? I think the notion of a character n-gram is pretty straightforward and applies to every written language. A word is a much more complex notion, which differs greatly from one language to another (consider languages with almost no spacing marks).
Reference: http://odur.let.rug.nl/~vannoord/TextCat/textcat.pdf
The performance really depends on your expected input. If you will be classifying multi-paragraph text all in one language, a functional words list (which your "bag of words" with pruning of hapaxes will quickly approximate) might well serve you perfectly, and could work better than n-grams.
There is significant overlap between individual words -- "of" could be Dutch or English; "and" is very common in English but also means "duck" in the Scandinavian languages, etc. But given enough input data, overlaps for individual stop words will not confuse your algorithm very often.
My anecdotal evidence is from using libtextcat on the Reuters multilingual newswire corpus. Many of the telegrams contain a lot of proper names, loan words etc. which throw off the n-gram classifier a lot of the time; whereas just examining the stop words would (in my humble estimation) produce much more stable results.
On the other hand, if you need to identify short, telegraphic utterances which might not be in your dictionary, a dictionary-based approach is obviously flawed. Note that many North European languages have very productive word formation by free compounding -- you see words like "tandborstställbrist" and "yhdyssanatauti" being coined left and right (and Finnish has agglutination on top -- "yhdyssanataudittomienkinkohan") which simply cannot be expected to be in a dictionary until somebody decides to use them.