How to extract nouns and substantives from a phrase?

I would like to extract nouns, substantives, and adjectives from a given text phrase. Is there an open-source Java library that does that? Does anyone know how to do it?
Basically, I was thinking of creating separate dictionaries for these categories (nouns, substantives, adjectives), then parsing the phrase, splitting it into tokens, and comparing each token against those dictionaries. But having a library that already does this would be great. Even better if it supports Brazilian Portuguese!
Thanks.

OpenNLP is a good Java library for this.
Have a look at this blog for setup; this blog clearly explains how to extract nouns, adjectives, and verbs.
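To make the tokenize-then-tag idea concrete, here is a rough sketch of the same pipeline in Python with NLTK (the question asks for Java; OpenNLP's TokenizerME and POSTaggerME play the same roles there, and OpenNLP also publishes pre-trained Portuguese models, which matters for the Brazilian Portuguese requirement):

# Sketch of the tokenize-then-POS-tag pipeline in Python with NLTK.
import nltk

nltk.download("punkt")                       # tokenizer data
nltk.download("averaged_perceptron_tagger")  # English POS tagger

tokens = nltk.word_tokenize("A large company needs a sustainable business model.")
tagged = nltk.pos_tag(tokens)  # [('A', 'DT'), ('large', 'JJ'), ...]

# Penn Treebank tags: NN* are nouns, JJ* are adjectives
nouns = [w for w, tag in tagged if tag.startswith("NN")]
adjectives = [w for w, tag in tagged if tag.startswith("JJ")]
print(nouns)       # ['company', 'business', 'model']
print(adjectives)  # ['large', 'sustainable']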
Hope this helps!

Related

Parse arbitrary text to produce dependency graph

How do I create a dependency graph (parse tree) for arbitrary sentences? Is there a predefined grammar for parsing English sentences using NLTK?
Example:
I want to make a parse tree for the sentence
“A large company needs a sustainable business model.”
which should look like this.
Please suggest how this can be done.
This question is a near-duplicate of 3125926. But I'll elaborate just a little on the answer given there.
I don't have personal experience with dependency parsing under NLTK, but according to the accepted answer, the integration with MaltParser is documented at http://nltk.googlecode.com/svn/trunk/doc/api/nltk.parse.malt.MaltParser-class.html
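If it helps to see the shape of it, the NLTK wrapper is roughly this (a sketch only: the nltk.parse.malt API has changed across NLTK versions, and the paths and model name below are placeholders for your own MaltParser download plus a pre-trained model such as engmalt.linear-1.7.mco):

# Rough sketch of driving MaltParser through nltk.parse.malt; assumes
# Java, a local MaltParser install, and a pre-trained English model.
from nltk.parse.malt import MaltParser

mp = MaltParser("/path/to/maltparser-1.7.2", "/path/to/engmalt.linear-1.7.mco")
sentence = "A large company needs a sustainable business model .".split()
graph = mp.parse_one(sentence)  # returns a DependencyGraph
print(graph.tree())             # render the dependency structure as a tree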
If for some reason MaltParser doesn't suit your needs, you might also take a look at MSTParser and the Stanford Parser. I think those three options are the best-known, and I expect one (or all) of them will work for you.
Note that the Stanford Parser includes routines to convert constituency trees to dependencies, and to convert between several of the standard dependency representations, so if you need a specific format, look at the format-conversion arguments to the edu.stanford.nlp.trees.EnglishGrammaticalStructure class.
e.g., to convert from constituency trees to basic dependencies:
java -cp stanford-parser.jar edu.stanford.nlp.trees.EnglishGrammaticalStructure -treeFile <input trees> -basic

Are there any libraries in iOS for identifying phonetically same sound

I am trying to build an iOS application. In one of the screens the user can type something in a search bar, and I have to take the same action for different spellings of the same word.
For example: the user can type "elephant", "alephant", or "elefant". I have to take the same action for all three words.
Is there any library that identifies these words as similar? I can't use a spellchecker, since I need this for languages other than English as well.
I did some research and found that there are phonetic algorithms like Text::Soundex for achieving this on the server side. Wondering if there are any libraries for iOS?
Thanks in advance!
A better alternative to Soundex would be Double Metaphone or, even better, Metaphone 3. You don't say what language you are using, but both of these algorithms are available in C++, C#, and Java.
There's no Soundex built into NSString, for example, but if that's what you want, it's fairly easy to implement. Here's an (albeit horribly formatted) Soundex NSString category from CocoaDev.
You could also use the Levenshtein distance algorithm to catch simple spelling errors. It's also easy to implement (read the Wikipedia article for the details), but here's an NSString category for that.
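For reference, the algorithm is just a few lines of dynamic programming; here's a sketch in Python, and the same table-based approach ports directly to an NSString category:

# Textbook dynamic-programming Levenshtein distance (row-by-row table).
def levenshtein(a, b):
    # previous[j] holds the edit distance between a[:i-1] and b[:j]
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

print(levenshtein("elephant", "alephant"))  # 1
print(levenshtein("elephant", "elefant"))   # 2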
Before you use these algorithms, normalize the input. There's the amazing CFStringTransform function in Core Foundation (see this great article about it on NSHipster, especially the last part about normalization) that can automatically transform input in different languages into normalized forms.
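To show what that normalization step buys you, here's the strip-combining-marks idea sketched with Python's unicodedata (kCFStringTransformStripCombiningMarks is the Core Foundation counterpart):

# Decompose characters (NFKD), then drop the combining marks, mirroring
# what kCFStringTransformStripCombiningMarks does on iOS.
import unicodedata

def strip_accents(text):
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents("éléphant"))  # -> "elephant"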

Is there a standard file naming convention for key-value pairs in filename?

I have multiple data files that are named after what they contain. For example:
machine-testM_pid-1234_key1-value1.log
There are keys and values separated by - and _. Is there a better syntax for this? Are there parsers that automatically read these kinds of files/filenames?
The idea here is that the filenames are human and machine readable.
8 years later...
I would suggest that you look into https://en.wikipedia.org/wiki/Query_string
They aren't pretty filenames, but you wouldn't have to reinvent the wheel for things like converting to a dict or JSON, since there are well-tested methods for parsing query strings, for example in the requests library.
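For example, a sketch of that round trip using only Python's standard urllib.parse (the metadata keys here are just the ones from the question's filename):

# Round-trip key-value metadata through a query-string style filename.
from urllib.parse import urlencode, parse_qs

meta = {"machine": "testM", "pid": "1234", "key1": "value1"}
filename = urlencode(meta) + ".log"  # 'machine=testM&pid=1234&key1=value1.log'

# Parsing it back: strip the extension, then reuse the standard parser.
parsed = parse_qs(filename[: -len(".log")])
print(parsed)  # {'machine': ['testM'], 'pid': ['1234'], 'key1': ['value1']}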
There's also a related question at https://unix.stackexchange.com/questions/44153/good-style-practices-for-separators-in-file-or-directory-names with some useful information, including ideas for more visible separators (_-_ and ___) that might work better in this case.
There seems to be no standard file-naming convention for key-value pairs.

How do you think the "Quick Add" feature in Google Calendar works?

I'm thinking about a project which might use functionality similar to how "Quick Add" parses natural language into something that can be understood with some level of semantics. I'm interested in understanding this better and wondered what your thoughts were on how it might be implemented.
If you're unfamiliar with what "Quick Add" is, check out Google's KB about it.
6/4/10 Update
Additional research on "Natural Language Processing" (NLP) yields results which are MUCH broader than what I feel is actually implemented in something like "Quick Add". Given that this feature expects specific types of input rather than true free-form text, I'm thinking this is a much narrower application of NLP. If anyone could suggest a narrower topic to research, rather than the entire breadth of NLP, it would be greatly appreciated.
That said, I've found a nice collection of resources about NLP including this great FAQ.
I would start by deciding on a standard way to represent all the information I'm interested in: event name, start/end time (and date), guest list, location. For example, I might use an XML notation like this:
<event>
<name>meet Sam</name>
<starttime>16:30 07/06/2010</starttime>
<endtime>17:30 07/06/2010</endtime>
</event>
I'd then aim to build up a corpus of diary entries about dates, annotated with their XML forms. How would I collect the data? Well, if I was Google, I'd probably have all sorts of ways. Since I'm me, I'd probably start by writing down all the ways I could think of to express this sort of stuff, then annotating it by hand. If I could add to this by going through friends' e-mails and whatnot, so much the better.
Now that I've got a corpus, it can serve as a set of unit tests. I need to code a parser to fit the tests. The parser should translate a string of natural language into the logical form of my annotation. First, it should split the string into its constituent words. This is called tokenising, and there is off-the-shelf software available to do it. (For example, see NLTK.) To interpret the words, I would look for patterns in the data: for example, text following 'at' or 'in' should be tagged as a location; 'for X minutes' means I need to add that number of minutes to the start time to get the end time. Statistical methods would probably be overkill here - it's best to create a series of hand-coded rules that express your own knowledge of how to interpret the words, phrases and constructions in this domain.
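As a toy illustration of those hand-coded rules (the patterns below are hypothetical stand-ins, nothing like Google's actual grammar):

# Toy rule-based "Quick Add" parser: each regex encodes one hand-coded
# rule of the kind described above. Illustrative only.
import re
from datetime import datetime, timedelta

def quick_add(text):
    event = {"name": text, "location": None, "start": None, "end": None}

    # "at <place>" / "in <place>" -> location
    m = re.search(r"\b(?:at|in)\s+(\w+)", text)
    if m:
        event["location"] = m.group(1)

    # "<h>:<mm>" with optional am/pm -> start time (today, for simplicity)
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\b", text)
    if m:
        hour = int(m.group(1)) + (12 if m.group(3) == "pm" and int(m.group(1)) < 12 else 0)
        minute = int(m.group(2) or 0)
        event["start"] = datetime.now().replace(hour=hour, minute=minute,
                                                second=0, microsecond=0)

    # "for X minutes" -> end = start + X minutes
    m = re.search(r"\bfor\s+(\d+)\s+minutes\b", text)
    if m and event["start"]:
        event["end"] = event["start"] + timedelta(minutes=int(m.group(1)))

    return event

# A real parser would also strip the matched spans out of the name.
print(quick_add("meet Sam at Starbucks 4:30pm for 60 minutes"))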
It would seem that there's really no narrow approach to this problem. I wanted to avoid having to pull along the entirety of NLP to figure out a solution, but I haven't found any alternative. I'll update this if I find a really great solution later.

How can I use NLP to parse recipe ingredients?

I need to parse recipe ingredients into amount, measurement, item, and description as applicable to the line, e.g. "1 cup flour", "the peel of 2 lemons", "1 cup packed brown sugar", etc. What would be the best way of doing this? I am interested in using Python for the project, so I am assuming NLTK is the best bet, but I am open to other languages.
I actually do this for my website, which is now part of an open source project for others to use.
I wrote a blog post on my techniques, enjoy!
http://blog.kitchenpc.com/2011/07/06/chef-watson/
The New York Times faced this problem when they were parsing their recipe archive. They used an NLP technique called a linear-chain conditional random field (CRF). This blog post provides a good overview:
"Extracting Structured Data From Recipes Using Conditional Random Fields"
They open-sourced their code, but quickly abandoned it. I maintain the most up-to-date version of it and I wrote a bit about how I modernized it.
If you're looking for a ready-made solution, several companies offer ingredient parsing as a service:
Zestful (full disclosure: I'm the author)
Spoonacular
Edamam
I guess this is a few years on, but I was thinking of doing something similar myself and came across this, so I thought I might have a stab at it in case it is useful to anyone else in future.
Even though you say you want to parse free text, most recipes have a pretty standard format for their ingredient lists: each ingredient is on a separate line, and the exact sentence structure is rarely all that important. The range of vocabulary is relatively small as well.
One way might be to check each line for words which might be nouns and words/symbols which express quantities. I think WordNet may help with telling whether a word is likely to be a noun, but I've not used it myself. Alternatively, you could use http://en.wikibooks.org/wiki/Cookbook:Ingredients as a word list, though again, I don't know exactly how comprehensive it is.
The other part is to recognise quantities. These come in a few different forms, but few enough that you could probably create a list of keywords. In particular, make sure you have good error reporting. If the program can't fully parse a line, get it to report back to you what that line is, along with what it has/hasn't recognised so you can adjust your keyword lists accordingly.
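A skeleton of that check-and-report loop might look like this (the unit and ingredient sets are tiny stand-ins you'd grow as unknown tokens come back):

# Skeleton of the check-and-report loop; UNITS and KNOWN_INGREDIENTS
# are tiny stand-in sets you would grow over time.
import re

UNITS = {"cup", "cups", "tsp", "tbsp", "oz", "g", "ml", "dash", "pinch"}
KNOWN_INGREDIENTS = {"flour", "sugar", "lemon", "lemons", "peel", "butter"}

def parse_line(line):
    tokens = re.findall(r"[\w/]+", line.lower())
    parsed = {"amount": None, "unit": None, "item": [], "unknown": []}
    for token in tokens:
        if re.fullmatch(r"\d+(?:/\d+)?", token):   # "1", "1/2", ...
            parsed["amount"] = token
        elif token in UNITS:
            parsed["unit"] = token
        elif token in KNOWN_INGREDIENTS:
            parsed["item"].append(token)
        else:
            parsed["unknown"].append(token)        # feed back into the lists
    return parsed

print(parse_line("1 cup packed brown sugar"))
# {'amount': '1', 'unit': 'cup', 'item': ['sugar'],
#  'unknown': ['packed', 'brown']}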
Aaanyway, I'm not guaranteeing any of this will work (and it's almost certain not to be 100% reliable), but that's how I'd start to approach the problem.
This is an incomplete answer, but you're looking at writing a free-text parser, which, as you know, is non-trivial :)
Some ways to cheat, using knowledge specific to cooking:
Construct lists of words for the "adjectives" and "verbs", and filter against them
measurement units form a closed set, using words and abbreviations like {L., c, cup, t, dash}
instructions -- cut, dice, cook, peel. Things that come after these are almost certain to be ingredients
Remember that you're mostly looking for nouns, and you can take a labeled list of non-nouns (from WordNet, for example) and filter against them.
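For instance, NLTK's WordNet interface makes that is-it-a-noun check a one-liner (assuming you've run nltk.download("wordnet") once):

# Ask WordNet whether a word has any noun senses, via NLTK's corpus reader.
from nltk.corpus import wordnet as wn

def can_be_noun(word):
    return bool(wn.synsets(word, pos=wn.NOUN))

print(can_be_noun("flour"))   # True
print(can_be_noun("packed"))  # False -- filter it out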
If you're more ambitious, you can look in the NLTK Book at the chapter on parsers.
Good luck! This sounds like a mostly doable project!
Can you be more specific about what your input is? If you just have input like this:
1 cup flour
2 lemon peels
1 cup packed brown sugar
It won't be too hard to parse it without using any NLP at all.
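For input that regular, a single regular expression per line gets you most of the way. A sketch, with a small stand-in list of units:

# One regex per line for "amount [unit] item"; the unit alternation is
# a small stand-in list you would extend.
import re

LINE = re.compile(r"(?P<amount>\d+(?:\s+\d/\d)?)\s+"
                  r"(?P<unit>(?:cups?|tsp|tbsp|oz|g|ml)\b)?\s*"
                  r"(?P<item>.+)")

for line in ["1 cup flour", "2 lemon peels", "1 cup packed brown sugar"]:
    print(LINE.match(line).groupdict())
# {'amount': '1', 'unit': 'cup', 'item': 'flour'}
# {'amount': '2', 'unit': None, 'item': 'lemon peels'}
# {'amount': '1', 'unit': 'cup', 'item': 'packed brown sugar'}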
