Intelligent text parsing and translation - parsing

What would be an intelligent way to store text so that it can be parsed and translated intelligently later on?
For example: "The employee is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself."
The above could be the generic text shown to the user prior to evaluation. If the user is male (say Shaun) or female (say Mary), the above text should be translated as follows:
Mary is outstanding as she can identify her own strengths and weaknesses and is comfortable with herself.
Shaun is outstanding as he can identify his own strengths and weaknesses and is comfortable with himself.
How do we store the evaluation criteria in the first place, with appropriate placeholders or tokens? (In the above case, "employee" should be replaced with the employee's name, and based on that person's gender the words he or she and himself or herself need to be substituted.)
Is there a mechanism to automatically translate the text given the above information?

The basic idea of doing something like this is called Mail Merge.
This page seems to discuss how to implement something like this in Ruby.
[Edit]
A Google search gave me this: http://freemarker.org/
I don't know much about this library, but it looks like what you need.

This is a very broad question in the field of Natural Language Processing, and there are numerous ways to go about it; the questions you asked seem too broad.
If I understand part of your question correctly, it could be done this way:
#variable{name} is outstanding as #gender{he/she} can identify #gender{his/hers} own strengths and weaknesses and is comfortable with #gender{himself/herself}.
Or:
#name is outstanding as #he can identify #his own strengths and weaknesses and is comfortable with #himself.
... if gender is the major problem.
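A minimal sketch of how the second template form above could be expanded, assuming the token names (#name, #he, #his, #himself) and the pronoun table are our own conventions rather than any particular library's:

    import re

    # Pronoun forms keyed by gender; purely illustrative.
    PRONOUNS = {
        "male":   {"he": "he",  "his": "his", "himself": "himself"},
        "female": {"he": "she", "his": "her", "himself": "herself"},
    }

    TEMPLATE = ("#name is outstanding as #he can identify #his own strengths "
                "and weaknesses and is comfortable with #himself.")

    def render(template, name, gender):
        values = dict(PRONOUNS[gender], name=name)
        # Try longer token names first and require a word boundary, so a short
        # token can never clip a longer one (e.g. #he inside #himself).
        token = re.compile("#(" + "|".join(sorted(values, key=len, reverse=True)) + r")\b")
        return token.sub(lambda m: values[m.group(1)], template)

    print(render(TEMPLATE, "Mary", "female"))   # Mary is outstanding as she can identify her own ...
    print(render(TEMPLATE, "Shaun", "male"))    # Shaun is outstanding as he can identify his own ...

Whether the tokens live in the stored string itself or in a separate column per evaluation criterion is mostly a storage detail; the substitution step stays the same.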

I have had some experience working with a tool called Grammatica, when building a custom Excel-like formula parsing and evaluation engine for user input. It may not be at the level of sophistication you're looking for, but it's a start. It basically uses many of the same concepts that popular compiler parsers employ. It's definitely worth checking out.

I agree with Kornel, this question is too broad. What you seem to be talking about is semantics, for which RDF and OWL can be a good starting point. Read about modeling semantics using markup, and you can work your way up from there.

Related

Spell checker that uses a language model

I am looking for a spell checker that can use a language model.
I know there are a lot of good spell checkers such as Hunspell; however, as far as I can see, it doesn't take context into account, so it is only a token-based spell checker.
For example:
I lick eating banana
Here, at the token level, there are no misspellings at all; all the words are correct, but the sentence has no meaning. A "smart" spell checker would recognize that "lick" is a correctly written word, but that the author probably meant "like", which would make the sentence meaningful.
I have a bunch of correctly written sentences in a specific domain, and I want to train a "smart" spell checker to learn a language model and recognize misspellings, so that it would recognize that even though "lick" is written correctly, the author meant "like".
I don't see that Hunspell has such a feature; can you suggest any other spell checker that could do this?
See "The Design of a Proofreading Software Service" by Raphael Mudge. He describes both the data sources (Wikipedia, blogs etc) and the algorithm (basically comparing probabilities) of his approach. The source of this system, After the Deadline, is available, but it's not actively maintained anymore.
One way to do this is via a character-based language model (rather than a word-based n-gram model). See my answer to Figuring out where to add punctuation in bad user generated content?. The problem you're describing is different, but you can apply a similar solution. And, as I noted there, the LingPipe tutorial is a pretty straightforward way of developing a proof-of-concept implementation.
One important difference - to capture more context, you may want to train a larger n-gram model than the one I recommended for punctuation restoration. Maybe 15-30 characters? You'll have to experiment a little there.
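For illustration, here is a toy character n-gram model in Python that compares candidate sentences by probability, in the spirit of the approach above. It is not LingPipe or After the Deadline; the order N, the add-one smoothing and the three-sentence corpus are placeholder choices.

    import math
    from collections import defaultdict

    N = 5  # order of the character model; larger orders capture more context

    def train(sentences, n=N):
        """Count character n-grams over correctly written in-domain sentences."""
        grams, contexts = defaultdict(int), defaultdict(int)
        for s in sentences:
            s = " " * (n - 1) + s.lower()
            for i in range(n - 1, len(s)):
                context, ch = s[i - n + 1:i], s[i]
                grams[(context, ch)] += 1
                contexts[context] += 1
        return grams, contexts

    def log_prob(sentence, grams, contexts, n=N, alphabet=100):
        """Log-probability of a sentence under the model, with add-one smoothing."""
        s = " " * (n - 1) + sentence.lower()
        return sum(math.log((grams[(s[i - n + 1:i], s[i])] + 1) /
                            (contexts[s[i - n + 1:i]] + alphabet))
                   for i in range(n - 1, len(s)))

    corpus = ["i like eating bananas", "we like eating apples", "they like eating fruit"]
    grams, contexts = train(corpus)
    for candidate in ["i lick eating banana", "i like eating banana"]:
        print(candidate, log_prob(candidate, grams, contexts))
    # With a real in-domain corpus, the "like" variant should score noticeably higher.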

Struggling with a model because of a lot of data

I'm building an app to write wine tasting notes, and I have to translate this tasting framework (only the first page) into a model.
It's a lot of data and I'm not sure how to proceed. I tried to sketch a possible solution in this spreadsheet.
What would you suggest to do? Should I create only one model (Wine) with a column for each wine characteristic?
Thanks!
P.S. I'm learning web development, sorry if my question sounds trivial.
Perhaps not a direct answer to your question, but here are a few notes nevertheless:
For something like wine tasting, I would go for a combination of selects, tags and simple free text.
This RailsCast should give you a good introduction to the acts-as-taggable-on gem for tagging.
One thing that may be handy when modelling your DB is to look at several wine tasting 'forms' that are already filled out and see if you can spot a pattern. Say for Palate/Acidity I would expect the value to be one of light/medium/high, whereas the Conclusion/Identity could be pretty much anything.
You'll also need to find the right balance between restricting and allowing user input. I'd expect your users to be happier with free text and to feel more restricted by select/radio boxes. On the other hand, it is always easier (for you) to change from a select box to free text than the other way around. Not to mention that searching is much easier with selects or tags.
I don't think your question is that trivial; I think you should simply try it and see, and design with the ability to change in mind.
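As a rough sketch of that split between constrained values, tags and free text (the app above is Rails, so take this Python outline as a language-agnostic pseudo-schema; the field names are made up, not taken from the actual tasting framework):

    from dataclasses import dataclass, field
    from enum import Enum

    class Acidity(Enum):        # constrained value: a natural fit for a select box
        LIGHT = "light"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class TastingNote:
        wine_name: str
        acidity: Acidity                                      # select/radio
        aroma_tags: list[str] = field(default_factory=list)   # tags, acts-as-taggable-on style
        conclusion: str = ""                                   # free text: could be anything

    note = TastingNote("Example Riesling", Acidity.HIGH,
                       ["citrus", "petrol"], "Young; would benefit from a few more years.")
    print(note)

Starting from something this small and migrating columns as the form settles is usually easier than modelling the whole framework up front.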

English query generation through machine translation systems

I'm working on a project to generate questions from sentences. Right now, I'm at a point where I can generate questions like:
"Angela Merkel is the chancelor of Germany." -> "Angela Merkel is who?"
Now, of course, I want the questions to look like "Who is...?" instead. Is there any easy way to do this that I haven't thought of yet?
My current idea would be to train an English (not-quite-question) -> English (question) translator, maybe using existing machine translation engines like Moses. Is this overkill? How much data would I need? Are there corpora that address this or a similar problem? Is using a general translation engine even appropriate for this task?
Check out Michael Heilman's dissertation Automatic Factual Question Generation from Text for background on question generation and to see what his approach to this problem looks like. You can find more by searching for research on "question generation". He mentions a corpus from Microsoft: the Microsoft Research Question-Answering Corpus.
I don't think that an approach based solely on (current) statistical machine translation approaches is going to work that well, since you're usually going to need a deeper syntactic analysis of the source sentence to do a good job of generating an appropriate question. For simple questions like your example, it's pretty easy to design syntactic tree transformations to generate the question, but it gets much trickier as soon as the sentences get a little more complicated.
Off the top of my head, if you restrict yourself to relatively simple questions, you could do a parse and then flip the elements around to get the question. How do you decide the question word, though? Who, What, Where, Why... for this you'll need a classifier that looks at the elements of the sentence. Angela Merkel should be easy to classify as a person/name, so she gets a 'Who'; Berlin should be in a dictionary of geos, so it gets a 'Where'.
I'm not sure about specific software, but I'd probably do it with NLTK, using a dependency parse and then whatever classification scheme you feel like.
Ultimately your success depends on how big your input and output space is. I'd go for the absolute simplest possible problem first.
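As a toy illustration of the parse-and-flip idea for the simplest copular sentences, here is a sketch using NLTK's tagger and named-entity chunker. The handled pattern ("<subject> is <rest>."), the entity-to-question-word table and the 'What' fallback are all assumptions, not a general solution:

    import nltk  # needs the punkt, averaged_perceptron_tagger, maxent_ne_chunker and words data

    WH_BY_ENTITY = {"PERSON": "Who", "GPE": "Where", "ORGANIZATION": "What"}

    def to_question(sentence):
        tokens = nltk.word_tokenize(sentence)
        if "is" not in tokens:
            return None  # only the "<subject> is <rest>." pattern is handled here
        split = tokens.index("is")
        subject, rest = tokens[:split], tokens[split + 1:]
        # Classify the subject to pick the question word.
        tree = nltk.ne_chunk(nltk.pos_tag(subject))
        labels = {t.label() for t in tree.subtrees()} - {"S"}
        wh = next((WH_BY_ENTITY[l] for l in labels if l in WH_BY_ENTITY), "What")
        return wh + " is " + " ".join(t for t in rest if t != ".") + "?"

    print(to_question("Angela Merkel is the chancellor of Germany."))
    # -> Who is the chancellor of Germany?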

Converting free-form English text to Spanish, what are the options?

I have an application that will be used by Spanish-speaking people as well as English-speaking people. I am using .resx files and localization to translate all the hard-coded text. I am also retrieving language-specific data from the database for some things that don't change often, like "Category Descriptions". Here is my question (I think I already know the answer): is there a way to translate free-form text entered by a user? For example, can a string entered and saved to the database in English be displayed in Spanish? One more issue is that these strings often contain engineering terms and technical abbreviations that I don't think could be translated with something like Google Translate. Is there anything else out there? I am thinking that this text can only be translated by a human with knowledge of the terminology and abbreviations used in this particular industry.
There are some online services such as Google Translate, as pointed to by Binary Worrier. However, one should bear in mind that none of these services gives accurate translations, because, as you wrote, translation is a very difficult matter. Current obstacles to good automated translation include lack of context.
This is a problem even for human translators. Ask a translator for a given sentence in another language. She'll answer: "OK, what do you mean by this word: X or Y? In which context? Who are you talking to? Is this a formal or informal tone?" etc.
This is especially true regarding localization where texts are usually very short. This increases the lack of context. Think of a simple menu item: "Load". Is it a name? Is it a verb? Damn, even a human translator needs more information. So don't expect a computer to solve the problem.
Of course, it all depends on the accuracy that you need and your users' tolerance for bad translations. Google Translate et al. are very successful because people prefer a bad translation to nothing at all.
If I were you, I'd make a few manual tests with typical texts in your DBs and see if the translation accuracy fits your needs.
BTW, I believe Google Translate is free for a reasonable amount of use. Basically, unless you want to translate the whole of Wikipedia every week, you should be on the safe side ;-)
You can hook into the Google Translate APIs and translate this stuff on the fly; I think there's a charge, though.
I have an answer from my users: have the users enter the strings in both English and Spanish and store them in the database, then display the correct strings based on the language of the browser. I still have a lot of grunt work to do filling out the .resx files and modifying all the words I need translated.
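A minimal sketch of that lookup (the app in question is .NET with .resx files; this Python snippet and its column names are only illustrative):

    def localized_text(row, browser_lang, default_lang="en"):
        """Pick the Spanish or English column from a row that stores both."""
        lang = "es" if browser_lang.lower().startswith("es") else default_lang
        # Fall back to the default language if the requested column is empty.
        return row.get(f"description_{lang}") or row[f"description_{default_lang}"]

    row = {"description_en": "Pressure relief valve",
           "description_es": "Válvula de alivio de presión"}
    print(localized_text(row, "es-MX"))  # -> Válvula de alivio de presión
    print(localized_text(row, "en-US"))  # -> Pressure relief valve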

Tool to parse text for possible Wikipedia links

Does a tool exist that can parse text and output that text, hyper-linked to Wikipedia entries for words of interest?
For example, I'd like a tool that could turn something like:
The most popular search algorithm on a sorted list is the binary search.
Into the same sentence, with phrases such as "search algorithm", "sorted list" and "binary search" hyperlinked to the corresponding Wikipedia entries.
It would be wonderful if Wikipedia had an API which would do this since they would be best equipped to determine what "words of interests" are.
In my example I simply linked all combinations which linked directly to an entry, except for "The" and "most".
There is a tool that does exactly what you're asking for.
http://wikify.appointment.at/
It's not perfect, but it works.
You have two separate problems to solve here:
1. Deciding which words should be linked
2. Determining if there's a suitable entry to link these words to
Now, (2) is simpler, though it's also somewhat problematic. Wikipedia seems to have an API that allows you to gather data efficiently, and they also allow "screen scraping". But there's a problem with disambiguation: sometimes you might not hit the entry you wanted. For example, "python" links to a disambiguation page, as it can be a programming language, a snake and a couple of other things.
(1) is much harder, though. You can take the "simple approach" and attempt to find links for all non-trivial nouns (or even noun/adjective pairs). Non-trivial here means omitting words like "friend", "word", "computer", etc.
But this would result in a plethora of links, which isn't convenient to read. It's really up to you to decide what's interesting in the text, and this depends a lot on the text itself. In an article for professional programmers, do you really want to link to "search algorithm" every time? But for beginners, perhaps you do.
To conclude, I strongly doubt there's a single general-purpose tool that will do the trick for you. But you surely have all the options at hand, and something need-specific can be coded without too much effort.
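As a sketch of part (2), here is one way to check whether a candidate phrase resolves to a Wikipedia entry through the public MediaWiki API (this uses the third-party requests package; the candidate phrases are hard-coded because part (1), choosing them, is the hard bit):

    import requests  # pip install requests

    API = "https://en.wikipedia.org/w/api.php"

    def has_article(phrase):
        """True if the phrase resolves (possibly via a redirect) to an existing page."""
        params = {"action": "query", "titles": phrase, "redirects": 1, "format": "json"}
        pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
        # Unknown titles come back with a page id of "-1" and a "missing" flag.
        return all("missing" not in page for page in pages.values())

    for phrase in ["binary search", "search algorithm", "zzz no such article zzz"]:
        print(phrase, has_article(phrase))

Disambiguation pages (the "python" problem above) would still need extra handling on top of this.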
Silviu Cucerzan of Microsoft Research tackled this problem. Well, not the problem of inserting the links, but the general issue of determining which entities are being mentioned in some piece of text. Fortunately for you, he used Wikipedia articles as his set of entities. His paper, "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", is available on his website. Direct link: pdf.
