Increase the keywords library - machine-learning

I have been trying Machine Learning algorithms for text classification, and it works pretty well. However, I also want to automatically add new keywords to my keyword library. For example, my current library looks like this:
[
  ["food", "eat", "drinks"],
  ["travel", "explore", "visit"],
  ["business", "work", "future"]
]
Now, I try to import a random string. For example:
importString = "I am Harish. I am a foodie. I do not like long journeys. I am an entrepreneur."
After importing the above string, I first remove all the stop words, and then I want the library to update itself automatically (without human help), like this:
[
  ["food", "eat", "drinks", "foodie"],
  ["travel", "explore", "visit", "journeys"],
  ["business", "work", "future", "entrepreneur"]
]
Is there any way (in the field of Machine Learning or Deep Learning) to accomplish this task?

You could just append the word to the list.
The exact syntax depends on the language you are using, but in Python it is:
my_list.append(item_to_add)
assuming your list is named my_list and the variable you want to append is item_to_add (avoid naming the list list, since that shadows Python's built-in type).
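For instance, on the nested library from the question it would look something like this (a minimal sketch; the name keyword_library is just for illustration, and deciding which sub-list a new word belongs to is exactly the part the question still leaves open):
# Keyword library from the question: one sub-list per topic.
keyword_library = [
    ["food", "eat", "drinks"],
    ["travel", "explore", "visit"],
    ["business", "work", "future"],
]
# Suppose some earlier step has decided that "foodie" belongs with the food words;
# adding it is then a single call on that sub-list.
keyword_library[0].append("foodie")
print(keyword_library[0])  # ['food', 'eat', 'drinks', 'foodie']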

Related

I have a dataset on which I want to do phrase extraction using NLP, but I am unable to do so.

How can I extract a phrase from a sentence using a dataset that has a set of sentences and corresponding labels, in the form:
Sentence 1: I want to play cricket
Label 1: play cricket
Sentence 2: Need to wash my clothes
Label 2: wash clothes
I have tried chunking with NLTK, but I am not able to use the training data along with the chunks.
The "reminder paraphrases" you describe don't map exactly to other kinds of "phrases" with explicit software support.
For example, the gensim Phrases module uses a purely statistical approach to discover neighboring word-pairings that are so common, relative to the base rates of each word individually, that they might usefully be considered a combined unit. It might turn certain entities into phrases (eg: "New York" -> "New_York"), or repeated idioms (eg: "slacking off" -> "slacking_off"). But it'd only be neighboring-runs-of-words, and not the sort of contextual paraphrase you're seeking.
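For illustration, a minimal sketch of that statistical pairing with gensim (the tiny corpus and the min_count/threshold values are purely illustrative; a real corpus needs far more text):
# Learn frequent neighbouring word-pairings from tokenized sentences.
from gensim.models.phrases import Phrases
sentences = [
    ["i", "visited", "new", "york", "last", "week"],
    ["new", "york", "is", "crowded"],
    ["she", "keeps", "slacking", "off"],
    ["stop", "slacking", "off", "at", "work"],
]
phrases = Phrases(sentences, min_count=1, threshold=1)
# Applying the model merges pairings it considers statistically significant.
print(phrases[["he", "moved", "to", "new", "york"]])
# e.g. ['he', 'moved', 'to', 'new_york']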
Similarly, libraries which are suitably grammar-aware to mark-up logical parts-of-speech (and inter-dependencies) also tend to simply group and label existing phrases in the text – not create simplified, imperative summaries like you desire.
Still, such libraries' output might help you work up your own rules of thumb. For example, it appears that in your examples so far, your desired "reminder paraphrase" is always one verb and one noun (that verb's object). So after using part-of-speech tagging (as from NLTK or spaCy), choosing the last verb (perhaps also preferring verbs in the present/imperative tense) and the following noun phrase (perhaps stripped of other modifiers/prepositions) may do most of what you need.
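As a rough sketch of that heuristic with NLTK's part-of-speech tagger (the reminder_phrase helper is only an illustration of the rule above, not a general solution; the tokenizer and tagger models may need an nltk.download first):
import nltk

def reminder_phrase(sentence):
    # Tag tokens, pick the last verb, then the first noun that follows it.
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    verb_idx = max((i for i, (_, tag) in enumerate(tagged) if tag.startswith("VB")),
                   default=None)
    if verb_idx is None:
        return None
    verb = tagged[verb_idx][0]
    noun = next((word for word, tag in tagged[verb_idx + 1:] if tag.startswith("NN")),
                None)
    return f"{verb} {noun}" if noun else verb

print(reminder_phrase("I want to play cricket"))   # expected: play cricket
print(reminder_phrase("Need to wash my clothes"))  # expected: wash clothes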
Of course, more complicated examples would need better heuristics. And if the full range of texts you need to work on is very varied, finding a general approach might require many more (hundreds/thousands) of positive training examples: what you think the best paraphrase is, given certain texts. Then, you could consider a number of machine-learning methods that might be able to pick the right ~2 words from larger texts.
Researching published work for "paraphrasing", rather than just "phrase extraction", might also guide you to ideas, but I unfortunately don't know any ready-to-use paraphrasing libraries.

Advantage of boost::bimap::unordered_set_of v/s std::unordered_set

I'm using a bidirectional map to link a list of names to a particular single name (for example, to correlate cities and countries). So, my definition of the type is something like:
using CitiesVsCountries = boost::bimap<boost::bimaps::unordered_set_of<std::string>, std::string>;
But one question intrigues me:
What's the advantage of using a boost::bimaps::unordered_set_of<std::string> vs. a simple std::unordered_set? The advantage of the bimap is clear (it avoids having to synchronize two maps by hand), but I can't really see what added value the Boost version of the unordered set provides, nor can I find any document detailing the difference.
Thanks a lot for your help.

How can I cluster similar types of sentences based on their context and extract keywords from them?

I wanted to cluster sentences based on their context and extract common keywords from sentences with similar contexts.
For example
1. I need to go to home
2. I am eating
3. He will be going home tomorrow
4. He is at restaurant
Sentences 1 and 3 would be similar, with keywords like go and home, and maybe their synonyms like travel and house.
A pre-existing API, such as IBM Watson, would be helpful.
This API actually does exactly what you are asking for (clustering sentences and giving keywords):
http://www.rxnlp.com/api-reference/cluster-sentences-api-reference/
Unfortunately, the algorithms used for the clustering and for generating the keywords are not available.
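If you would rather keep everything local instead of calling an external API, a common baseline is TF-IDF vectors plus KMeans, reading each cluster's keywords off its centre. This is only a sketch of that stand-in approach (it assumes scikit-learn 1.x), not the algorithm the API above uses:
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I need to go to home",
    "I am eating",
    "He will be going home tomorrow",
    "He is at restaurant",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    # Highest-weighted terms in the cluster centre act as its keywords.
    top = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    members = [s for s, label in zip(sentences, kmeans.labels_) if label == cluster]
    print("cluster", cluster, "keywords:", [terms[i] for i in top])
    print("  sentences:", members)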
Hope this helps.
You can use RapidMiner with the Text Processing Extension.
Insert each sentence in a separate file and put them all in a folder.
Add the operators and arrange a design like the one below.
Click on the Process Documents from files operator and, in the bar on the right side, choose "Edit list" on the "Text directories" field. Then choose the folder that contains your files.
Double-click on the Process Documents from files operator and, in the new window, add the operators as in the design below (just the ones you need).
Then run your process.

Algorithm for keyword/phrase trend search similar to Twitter trends

I wanted some ideas about building a tool that can scan text sentences (written in English) and build a keyword ranking based on the most frequent words or phrases within the texts.
This would be very similar to Twitter trends, wherein Twitter detects and reports the top 10 words within the tweets.
I have identified the high-level steps in the algorithm as follows:
1. Scan the text and remove all the common, frequent words (such as "the", "is", "are", "what", "at", etc.).
2. Add the remaining words to a hashmap. If the word is already in the map, increment its count.
3. To get the top 10 words, iterate through the hashmap and find the top 10 counts.
Steps 2 and 3 are straightforward, but in step 1 I do not know how to detect the important words within a text and separate them from the common words (prepositions, conjunctions, etc.).
Also, if I want to track phrases, what could be the approach?
For example, if I have a text saying "This honey is very good", I might want to track "honey" and "good", but I may also want to track the phrases "very good" or "honey is very good".
Any suggestions would be greatly appreciated.
Thanks in advance
For detecting phrases, I suggest using a chunker. You can use one provided by an NLP tool like OpenNLP or Stanford CoreNLP.
NOTE
"honey is very good" is not a phrase; it is a clause. "very good" is a phrase.
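If you happen to be working in Python rather than Java, NLTK's RegexpParser can play the same chunker role; a minimal sketch (the grammar is illustrative, not a full English grammar, and the tokenizer/tagger models may need an nltk.download first):
import nltk

sentence = "This honey is very good"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}   # noun phrase, e.g. "this honey"
  ADJP: {<RB.*>*<JJ>}       # adjective phrase, e.g. "very good"
"""
chunker = nltk.RegexpParser(grammar)
tree = chunker.parse(tagged)
for subtree in tree.subtrees(filter=lambda t: t.label() in ("NP", "ADJP")):
    print(subtree.label(), " ".join(word for word, tag in subtree.leaves()))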
In information retrieval systems, those common words are called stop words.
Actually, your step 1 would be quite similar to step 3 in the sense that you may want to constitute an absolute database of the most common words in the English language in the first place. Such a list is available easily on the internet (Wikipedia even has an article referencing the 100 most common words in the English language.) You can store those words in a hashmap and while scanning your text contents just ignore the common tokens.
If you don't trust Wikipedia and the already existing listing for common words, you can build your own database. For that purpose, just scan thousands of tweets (the more the better) and make your own frequency chart.
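Put together, the three steps are only a few lines; a minimal sketch with a hand-rolled stop-word set (in practice you would load the fuller list described above):
from collections import Counter

STOP_WORDS = {"the", "is", "are", "what", "at", "this", "very", "a", "an", "to"}

def top_keywords(texts, n=10):
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            if word not in STOP_WORDS:
                counts[word] += 1       # step 2: hashmap-style counting
    return counts.most_common(n)        # step 3: top-n by count

print(top_keywords(["This honey is very good", "good honey is rare"]))
# [('honey', 2), ('good', 2), ('rare', 1)]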
You're facing an n-gram-like problem.
Do not reinvent the wheel. What you want to do has been done thousands of times; just use existing libraries or pieces of code (check the External Links section of the n-gram Wikipedia page).
Check out the NLTK library. It has code that covers steps one, two, and three:
1. Removing common words can be done using its stopwords corpus or a stemmer.
2, 3. Getting the most common words can be done with FreqDist.
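For example, a minimal sketch with NLTK's stopwords corpus and FreqDist, extended with bigrams so that short phrases like "very good" get counted too (nltk.download('punkt') and nltk.download('stopwords') may be needed first):
import nltk
from nltk.corpus import stopwords

text = "This honey is very good. Good honey is hard to find."
tokens = [t.lower() for t in nltk.word_tokenize(text) if t.isalpha()]
stop = set(stopwords.words("english"))
# Step 1: drop stop words; steps 2-3: count and report the most frequent words.
content_words = [t for t in tokens if t not in stop]
print(nltk.FreqDist(content_words).most_common(10))
# Phrases: count two-word sequences over the raw token stream so that
# combinations like "very good" survive even if one word is a stop word.
print(nltk.FreqDist(nltk.bigrams(tokens)).most_common(5))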
Second, you can use tools from Stanford NLP for tracking your text.

Validate words against an English dictionary in Rails?

I've done some Google searching but couldn't find what I was looking for.
I'm developing a Scrabble-type word game in Rails, and was wondering if there is a simple way to validate that what the player inputs in the game is actually a word. They'd be typing the word out.
Is validation against some sort of English-language dictionary database loaded within the app the best way to solve this problem? If so, are there any libraries that offer this kind of functionality? If not, what would you suggest?
Thanks for your help!
You need two things:
a word list
some code
The word list is the tricky part. On most Unix systems there's a word list at /usr/share/dict/words or /usr/dict/words -- see http://en.wikipedia.org/wiki/Words_(Unix) for more details. The one on my Mac has 234,936 words in it. But they're not all valid Scrabble words. So you'd have to somehow acquire a Scrabble dictionary, make sure you have the right license to use it, and process it so it's a text file.
(Update: The word list for LetterPress is now open source, and available on GitHub.)
The code is no problem in the simple case. Here's a script I whipped up just now:
words = {}
File.open("/usr/share/dict/words") do |file|
  file.each do |line|
    words[line.strip] = true
  end
end
p words["magic"]
p words["saldkaj"]
This will output
true
nil
I leave it as an exercise for the reader to make it into a proper Words object. (Technically it's not a Dictionary since it has no definitions.) Or to use a DAWG instead of a hash, even though a hash is probably fine for your needs.
A piece of language-agnostic advice here: if you only care about the existence of a word (which, in such a case, you do), and you are planning to load the entire database into the application (which your query suggests you're considering), then a DAWG will enable you to check existence in O(n) time complexity, where n is the size of the word; dictionary size has no effect, so overall the lookup is essentially O(1). At the same time, it is a relatively minimal structure in terms of memory. Indeed, some insertions will actually reduce the size of the structure: a DAWG for "top, tap, taps, tops" has fewer nodes than one for "tops, tap".
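To see why the lookup cost depends only on the word's length, here is a minimal trie sketch (the idea is language-agnostic, shown here in Python; a real DAWG additionally merges shared suffixes, which this simplified structure does not):
class Trie:
    # Membership check walks one node per character, so the cost grows with
    # word length, not with the number of words stored.
    def __init__(self):
        self.children = {}
        self.is_word = False

    def add(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def __contains__(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

trie = Trie()
for w in ["top", "tap", "taps", "tops"]:
    trie.add(w)
print("tops" in trie)  # True
print("to" in trie)    # False (a prefix only, not a stored word)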
