At this link, http://nlp.stanford.edu/software/tokenizer.html, a document is processed and abbreviation dots are recognized differently from sentence-ending full stops. I want to know the logic or process used behind this. Please explain.
You can split your document into sentences (using Stanford or any other tool, e.g. this); clearly, the dots at the ends of sentences are full stops.
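For reference, here is a minimal sketch of sentence splitting with Stanford CoreNLP in Java. This is only an illustration, assuming a recent CoreNLP release and its models are on the classpath; the class name SentenceSplitDemo and the sample text are mine. The tokenize and ssplit annotators are the ones that decide whether a dot belongs to an abbreviation token or ends the sentence:
import java.util.Properties;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SentenceSplitDemo {
    public static void main(String[] args) {
        // Only tokenization and sentence splitting are needed for this task.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        CoreDocument doc = new CoreDocument("Dr. Smith arrived at 5 p.m. on Monday. He left early.");
        pipeline.annotate(doc);

        // Dots that survive inside tokens (Dr., p.m.) are abbreviation dots;
        // the dots that terminate the printed sentences are full stops.
        for (CoreSentence sentence : doc.sentences()) {
            System.out.println(sentence.text());
        }
    }
}
The same idea works with any sentence splitter: once the text is segmented, any dot that is not the sentence terminator can be treated as part of an abbreviation or a number.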
I wanted to cluster sentences based on their context and extract common keywords from similar context sentences.
For example:
1. I need to go to home
2. I am eating
3. He will be going home tomorrow
4. He is at restaurant
Sentences 1 and 3 will be similar, with keywords like go and home and maybe their synonyms like travel and house.
A pre-existing API would be helpful, for example something from IBM Watson.
This API actually does exactly what you are asking for (clustering sentences and giving keywords):
http://www.rxnlp.com/api-reference/cluster-sentences-api-reference/
Unfortunately, the algorithm used for the clustering and for generating the keywords is not available.
Hope this helps.
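If you want to prototype something yourself while you evaluate such APIs, a crude baseline is to treat each sentence as a bag of lowercased, stop-word-filtered words, group sentences whose word overlap is high enough, and report the words shared by a cluster as its keywords. The sketch below is only an illustration of that idea; the class name, stop-word list, and 0.2 threshold are arbitrary choices of mine, not the RxNLP algorithm:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NaiveSentenceClusterer {
    // A tiny stop-word list, purely illustrative.
    private static final Set<String> STOP = new HashSet<>(Arrays.asList(
            "i", "am", "is", "are", "to", "at", "he", "will", "be", "the"));

    private static Set<String> contentWords(String sentence) {
        Set<String> words = new HashSet<>();
        for (String w : sentence.toLowerCase().split("\\W+")) {
            if (!w.isEmpty() && !STOP.contains(w)) words.add(w);
        }
        return words;
    }

    // Jaccard overlap between the content-word sets of two sentences.
    private static double overlap(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        List<String> sentences = Arrays.asList(
                "I need to go to home",
                "I am eating",
                "He will be going home tomorrow",
                "He is at restaurant");

        // Greedy clustering: each sentence joins the first cluster it overlaps with enough.
        List<List<String>> clusters = new ArrayList<>();
        for (String s : sentences) {
            boolean placed = false;
            for (List<String> cluster : clusters) {
                if (overlap(contentWords(s), contentWords(cluster.get(0))) >= 0.2) {
                    cluster.add(s);
                    placed = true;
                    break;
                }
            }
            if (!placed) clusters.add(new ArrayList<>(Collections.singletonList(s)));
        }

        // The words shared by every sentence in a cluster stand in for its "keywords".
        for (List<String> cluster : clusters) {
            Set<String> shared = contentWords(cluster.get(0));
            for (String s : cluster) shared.retainAll(contentWords(s));
            System.out.println(cluster + " -> keywords: " + shared);
        }
    }
}
With a stemmer (go/going) and WordNet synonyms (home/house), as mentioned in the question, the overlap measure becomes much more forgiving.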
You can use RapidMiner with the Text Processing Extension.
Put each sentence in a separate file and put all the files in one folder.
Add the operators and make a design like the one below.
Click on the Process Documents from Files operator and, in the right-hand panel, choose "Edit List" on the "text directories" field. Then choose the folder that contains your files.
Double-click on the Process Documents from Files operator and, in the new window, add the operators as in the design below (just the ones you need).
Then run your process.
I am following a course titled Natural Language Processing on Coursera, and while the course is informative, I wonder whether its contents cater to what I am looking for. Basically, as a project, I want to implement a textual version of Cortana or Siri for now, i.e. the user can enter commands for the computer in natural language and they will be processed and translated into appropriate OS commands. My questions are:
What is generally the sequence of steps for the above applications, after processing the speech? Do they tag the text and then parse it, or do they take some other approach?
Under which application of NLP does it fall? Can someone cite some good resources for it? My only doubt is whether what I am following now will serve any important part of my goal or not.
What you want to create can be thought of as a carefully constrained chat-bot, except you are not attempting to hold a general conversation with the user, but to process specific natural language input and map it to specific commands or actions.
In essence, you need a tool that can pattern-match various user inputs, extracting or at least recognizing the important topic or subject elements, and then decide what to do with that data.
Rather than get into an abstract discussion of natural language processing, I'm going to make a recommendation instead. Use ChatScript. It is a free, open-source tool for creating chat-bots that recently took first place in the Loebner chat-bot competition, as it has done several times in the past:
http://chatscript.sourceforge.net/
The tool is written in C++, but you don't need to touch the source code to create NLP apps; just use the scripting language provided by the tool. Although initially written for chat-bots, it has expanded into an extremely programmer friendly tool for doing any kind of NLP app.
Most importantly, you are not boxed in by the philosophy of the tool or limited by the framework provided by the tool. It has all the power of most scripting languages, so you won't find yourself going most of the distance towards completing your app, only to find some crushing limitation during the last mile that defeats your app or at least cripples it severely.
It also includes a large number of ontologies that can significantly jump-start your development efforts, and its built-in pre-processor does part-of-speech parsing, input conformance, and many other tasks crucial to writing script that can easily be generalized to handle large variations in user input. It also has a full interface to the WordNet synset database. There are many other important features in ChatScript that make NLP development much easier, too many to list here. It can run on Linux or Windows as a server that can be accessed using a TCP/IP socket connection.
Here's a tiny and overly simplistic example of some ChatScript script code:
# Define the list of available devices in the user's household.
concept: ~available_devices( green_kitchen_lamp stove radio )
#! Turn on the green kitchen lamp.
#! Turn off that damn radio!
u: ( turn _[ on off ] *~2 _~available_devices )
# Save off the desired action found in the user's input. ON or OFF.
$action = _0
# Save off the name of the device the user wants to turn on or off.
$target_device = _1
# Launch the utility that turns devices on and off.
^system( devicemanager $action $target_device )
Above is a typical ChatScript rule. Your app will have many such rules. This rule is looking for commands from the user to turn various devices in the house on and off. The # character indicates a line is a comment. Here's a breakdown of the rule's head:
It consists of the prefix u:. This tells ChatScript that the rule accepts user input in either statement or question format.
It consists of the match pattern, which is the content between the parentheses. This match pattern looks for the word turn anywhere in the sentence. Next it looks for the desired user action. The square brackets tell ChatScript to match the word on or the word off. The underscore preceding the square brackets tells ChatScript to capture the matched text, the same way parentheses do in a regular expression. The *~2 token is a range-restricted wildcard. It tells ChatScript to allow up to 2 intervening words between the on/off choice and the concept set named ~available_devices.
~available_devices is a concept set. It is defined above the rule and contains the set of known devices the user can turn on and off. The underscore preceding the concept set name tells ChatScript to capture the name of the device the user specified in their input.
If the rule pattern matches the current user input, it "fires" and then the rule's body executes. The contents of this rule's body are fairly obvious, and the comments above each line should help you understand what the rule does if fired. It saves the desired action and the desired target device captured from the user's input into variables. (ChatScript variable names are preceded by a single or double dollar sign.) Then it shells out to the operating system to execute a program named devicemanager that will actually turn the desired device on or off.
I wanted to point out one of ChatScript's many features that make it a robust and industrial-strength NLP tool. If you look above the rule you will see two sentences prefixed by the characters #!. These are not comments but validation sentences. You can run ChatScript in verify mode. In verify mode it will find all the validation sentences in your scripts. It will then apply each validation sentence to the rule immediately following it. If the rule pattern does not match the validation sentence, an error message will be written to a log file. This makes each validation sentence a tiny, easy-to-implement unit test. So later, when you make changes to your script, you can run ChatScript in verify mode and see if you broke anything.
I use COBOL in my business, and I originally learned at a site that used PERFORM ... THRU.
Now I have changed jobs and the local programming standards are different, so I need to get used to PERFORM without THRU.
I also have a doubt about the use of the full-stop/period/point in a paragraph.
You can use the period to end sentences within a paragraph. I say this because I have read in places that the paragraph ends with the first period, and elsewhere I have seen the full-stop/period/point used differently from that.
I wrote the two structures below, which I understand should be equivalent. Is that so?
Structure A:
PERFORM 100-WRITING.
PERFORM 200-FINISH.
100-WRITING.
*-----------
DISPLAY "HI MY NAME IS FELIPE".
DISPLAY "THE WORLD IS GREAT".
DISPLAY "I DONT SPEAK ENGLISH".
200-FINISH.
*----------
DISPLAY "BYE BYE BABY".
DISPLAY "ESTO ES TODO".
Structure B:
PERFORM 100-WRITING.
PERFORM 200-FINISH.
100-WRITING.
*-----------
DISPLAY "HI MY NAME IS FELIPE"
DISPLAY "THE WORLD IS GREAT"
DISPLAY "I DONT SPEAK ENGLISH".
200-FINISH.
*----------
DISPLAY "BYE BYE BABY"
DISPLAY "ESTO ES TODO".
You are correct, the code you show is equivalent.
A paragraph or a SECTION must end with a full-stop/period/point. This means that the last thing (other than a comment or a blank line) in a paragraph or a SECTION must be a full-stop/period/point.
This does not mean that a full-stop/period/point actually ends a paragraph or SECTION. You are correct, a full-stop/period/point in a paragraph ends a sentence. A paragraph or SECTION must contain sentences only. Therefore a paragraph or SECTION must end with a full-stop/period/point because a sentence must end with one.
PERFORM 100-WRITING
PERFORM 200-FINISH
.
100-WRITING.
*-----------
DISPLAY "HI MY NAME IS FELIPE"
DISPLAY "THE WORLD IS GREAT"
DISPLAY "I DONT SPEAK ENGLISH"
.
200-FINISH.
*----------
DISPLAY "BYE BYE BABY"
DISPLAY "ESTO ES TODO"
.
Since the 1985 COBOL Standard, sentences are no longer really important in the PROCEDURE DIVISION, so they only create extra work if the coder makes them important.
I find it much more convenient to rewrite the code like the above. Don't attach the full-stop/period/point to anything; just put it on a line of its own. Then you can move around any of the lines in a paragraph/SECTION without worrying about whether the full-stop/period/point is in the correct place, because it is not actually attached to anything, so it never gets moved around until you need it for the next paragraph/SECTION.
However, your site coding standards may not allow you to do this. You could always attempt to get them changed for the better :-)
I've seen a number of QR codes that contain a URL but also have some extra text after it. Something like:
http://www.example.com Thanks for scanning this QR code!
I've experimented with using a number of different delimiting methods (several spaces, a question mark, two dashes, one or two returns) and all work to varying degrees on various scanning programs.
Some respect the space character, others respect the return. Some think the URL isn't a URL at all when I use a return. Long story short, it's all over the map how the various scanning programs (NeoReader, iNigma, Qrafter, Beetag, OptiScan, etc.) treat characters after a URL.
Is there any consensus on whether (a) this is even a good idea or not and (b) if so, what the 'correct' (best-practice) way to do it is? (I know I should go read the RFC for the exact definition of a URL, but since the reader programs are all over the map, I suspect they didn't read it either.)
You can make it work by converting the text message into a valid URL, while trying to keep it readable.
In your case it can be:
http://www.example.com?Thanks_for_scanning_this_QR_code
It's not perfect, but it can help on the web analytics side to distinguish QR code users.
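If you want to automate that conversion, here is a small Java sketch of the idea (the class and method names are just illustrative): replace whitespace with underscores for readability, then percent-encode whatever is still not URL-safe.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QrUrlMessage {
    // Turn a free-text message into a readable query string appended to the URL.
    static String withMessage(String baseUrl, String message) throws UnsupportedEncodingException {
        // Keep the message readable: whitespace becomes underscores.
        String readable = message.trim().replaceAll("\\s+", "_");
        // Percent-encode anything that is still not safe in a URL.
        return baseUrl + "?" + URLEncoder.encode(readable, "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(withMessage("http://www.example.com", "Thanks for scanning this QR code!"));
        // Prints: http://www.example.com?Thanks_for_scanning_this_QR_code%21
    }
}
URLEncoder leaves letters, digits, and underscores untouched, so the query stays human-readable while still being a legal URL.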
Spaces are definitely not part of a URL, so, in that sense a space definitely should delimit the end of a URL.
Taken as a whole, the entire string is not a URL, of course. So yes, it's asking for trouble.
As you've found, the empirical answer is that not every reader does what you want. Barcode Scanner for instance understands the split here, but does not prompt the user to launch the browser since the payload isn't a URL per se.
So: it's a bad idea.
Summary
I am trying to design a heuristic for matching up sentences in a translation (from the original language to the translated language) and would like guidance and tips. Perhaps there is a heuristic that already does something similar? So given two text files, I would like to be able to match up the sentences (so I can pick out a sentence and say this is the translation of that sentence).
Details
The input text would be translated novels, so I do not expect the translations to be literal, although using something like Google Translate might be a good way to test the accuracy of the heuristic.
To help me, I have a library that will gloss the contents of the translated text and give me the definitions of the words in the sentence. Other things I know:
Chapters and order are preserved; I know that the first sentence in chapter three will match with the first sentence in chapter three of the translation (Note, this is not strictly true; the first sentence might match up with the first two sentences, or even the second sentence)
I can calculate the overall size (characters, sentences, paragraphs), which could give me an idea of the average difference in sentence size (for example, the translation might be 30% longer).
Looking at some books I have, the translated version has about 30% more sentences than the original text.
Implementation
(if it matters)
I am planning to do this in Java - but I am not that fussed - any language will do.
I am not greatly concerned about speed.
I guess that, to be sure of the matches, some user feedback might be required, like saying "Yes, this sentence definitely matches with that sentence." This would give the heuristic some more ground to stand on. It would mean that the user would need a little proficiency in the languages.
Background
(for those interested)
The reason I want to make this is that I want it to assist with my foreign language study. I am studying Japanese and find it hard to find "good" material (where "good" is defined by what I like). There are already tools to do something similar with subtitles from videos (an easier task - using the timing information of the video). But nothing, as far as I know, for texts.
There are tools called "sentence aligners", used in NLP research, that do exactly what you want.
I recommend hunalign:
http://mokk.bme.hu/resources/hunalign/
and the MS sentence aligner:
http://research.microsoft.com/en-us/downloads/aafd5dcf-4dcc-49b2-8a22-f7055113e656/
Both are quite OK, but remember that nothing is perfect. Sentences that are too hard to be aligned will be dropped and some sentences may be wrongly aligned.
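If you also want a feel for what such aligners do internally, or a starting point for your own heuristic, the classic approach is length-based dynamic programming in the spirit of Gale and Church: sentences whose character lengths are roughly proportional (after accounting for that ~30% expansion you observed) are likely translations of each other. Below is a much-simplified Java sketch of that idea; it is not the hunalign algorithm, and the cost function, the flat skip penalty, and the allowed 1-1/1-2/2-1 moves are all simplifications of mine:
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class LengthBasedAligner {
    // Cost of aligning srcLen source characters against tgtLen target characters.
    // ratio is the expected target/source length ratio (e.g. 1.3 if the translation runs ~30% longer).
    static double cost(int srcLen, int tgtLen, double ratio) {
        return Math.abs(srcLen * ratio - tgtLen);
    }

    // Returns alignment "beads": each bead is {source sentence indices, target sentence indices}.
    static List<int[][]> align(List<String> src, List<String> tgt, double ratio) {
        int n = src.size(), m = tgt.size();
        double skip = 100.0; // flat penalty for leaving a sentence unaligned (tune as needed)
        double[][] dp = new double[n + 1][m + 1];
        int[][] move = new int[n + 1][m + 1]; // 1=1-1, 2=1-2, 3=2-1, 4=skip source, 5=skip target
        for (double[] row : dp) Arrays.fill(row, Double.POSITIVE_INFINITY);
        dp[0][0] = 0.0;
        for (int i = 0; i <= n; i++) {
            for (int j = 0; j <= m; j++) {
                if (i >= 1 && j >= 1) { // one source sentence matches one target sentence
                    double c = dp[i - 1][j - 1] + cost(src.get(i - 1).length(), tgt.get(j - 1).length(), ratio);
                    if (c < dp[i][j]) { dp[i][j] = c; move[i][j] = 1; }
                }
                if (i >= 1 && j >= 2) { // one source sentence was split into two target sentences
                    double c = dp[i - 1][j - 2] + cost(src.get(i - 1).length(),
                            tgt.get(j - 2).length() + tgt.get(j - 1).length(), ratio);
                    if (c < dp[i][j]) { dp[i][j] = c; move[i][j] = 2; }
                }
                if (i >= 2 && j >= 1) { // two source sentences were merged into one target sentence
                    double c = dp[i - 2][j - 1] + cost(src.get(i - 2).length() + src.get(i - 1).length(),
                            tgt.get(j - 1).length(), ratio);
                    if (c < dp[i][j]) { dp[i][j] = c; move[i][j] = 3; }
                }
                if (i >= 1) { // source sentence left unaligned
                    double c = dp[i - 1][j] + skip;
                    if (c < dp[i][j]) { dp[i][j] = c; move[i][j] = 4; }
                }
                if (j >= 1) { // target sentence left unaligned
                    double c = dp[i][j - 1] + skip;
                    if (c < dp[i][j]) { dp[i][j] = c; move[i][j] = 5; }
                }
            }
        }
        // Trace back from (n, m) to recover the beads in order.
        LinkedList<int[][]> beads = new LinkedList<>();
        int i = n, j = m;
        while (i > 0 || j > 0) {
            switch (move[i][j]) {
                case 1: beads.addFirst(new int[][]{{i - 1}, {j - 1}});        i -= 1; j -= 1; break;
                case 2: beads.addFirst(new int[][]{{i - 1}, {j - 2, j - 1}}); i -= 1; j -= 2; break;
                case 3: beads.addFirst(new int[][]{{i - 2, i - 1}, {j - 1}}); i -= 2; j -= 1; break;
                case 4: beads.addFirst(new int[][]{{i - 1}, {}});             i -= 1;         break;
                default: beads.addFirst(new int[][]{{}, {j - 1}});                    j -= 1; break;
            }
        }
        return beads;
    }

    public static void main(String[] args) {
        List<String> original = Arrays.asList(
                "A short sentence.",
                "A much longer sentence that the translator may well have split into two pieces.");
        List<String> translated = Arrays.asList(
                "A short sentence.",
                "A much longer sentence that the translator",
                "may well have split into two pieces.");
        for (int[][] bead : align(original, translated, 1.0)) {
            System.out.println(Arrays.toString(bead[0]) + " <-> " + Arrays.toString(bead[1]));
        }
    }
}
In practice you would fold lexical evidence into the cost as well (for example matches from your glossing library or a dictionary, which is roughly what hunalign adds on top of the length signal), and user confirmations could simply pin certain beads in place before running the dynamic program.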