I have read a lot of scientific papers and searched the internet, but I still could not find an appropriate answer to my question. I know what an ontology is, but my main question is: even if I create the ontology (in Protégé), where does the source code reside, given that an ontology is just a relationship model?
My second question is: do I have to write the code in Turtle/RDF-XML format by hand, or can it be generated automatically?
Thanks in advance.
I think your question is about “Semantic Web” and has nothing to do with PLC programming.
I can recommend the following book:
Semantic Web for the Working Ontologist
by Dean Allemang, James Hendler
Publisher: Morgan Kaufmann
Published: July 2011
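To the second question: in Protégé the ontology itself is the artifact; there is no separate "source code" to write. Protégé serializes the model to Turtle, RDF/XML, or OWL/XML automatically when you save. As a rough sketch, a saved Turtle file might look like this (all class and property names here are made-up examples, not from any particular ontology):

```turtle
@prefix :     <http://example.org/plant#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Pump     a owl:Class .
:Motor    a owl:Class .
:drivenBy a owl:ObjectProperty ;
          rdfs:domain :Pump ;
          rdfs:range  :Motor .
```

Application code then lives outside the ontology: a program loads this file through an RDF library and queries it.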
I have what I think is a simple question. I am trying to put together a question-answering system, and I am having trouble converting a natural-language question into a knowledge-graph triple. Here is an example of what I mean:
Assume I have a prebuilt knowledge graph with the relationship:
((Todd) -[:picked_up_by]-> (Jane))
How can I make this conversion:
"Who picked up Todd today?" -> ((Todd) -[:picked_up_by]-> (?))
I am aware that there is a field dedicated to "relation extraction", but I don't think this quite fits that problem; if I had to name it, "question triple extraction" would describe what I am trying to do.
Generally speaking, it looks like a relation extraction problem with your own custom relations. Since the question is quite generic, this is not a full answer, just some links.
Check out reading comprehension: there are projects on GitHub and a lecture by Christopher Manning.
Also, look up Semantic Role Labeling.
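For the toy pattern in the question, a rule-based sketch can get you started before reaching for full relation extraction. Here is a minimal Python illustration; the question pattern and the relation name are assumptions chosen to match the example above, not a general solution:

```python
import re

# Map a question template to the relation it asks about.
# Both the regex and the relation label are illustrative assumptions.
PATTERNS = [
    (re.compile(r"^who picked up (\w+)( today)?\?$", re.IGNORECASE),
     "picked_up_by"),
]

def question_to_triple(question):
    """Convert a natural-language question into a triple
    with a '?' placeholder for the unknown entity."""
    for pattern, relation in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            subject = match.group(1)
            return (subject, relation, "?")
    return None

print(question_to_triple("Who picked up Todd today?"))
# -> ('Todd', 'picked_up_by', '?')
```

The returned triple can then be matched against the graph, with the "?" slot filled by whatever entity the stored relationship points to. Real systems replace the regex list with a learned model, but the target representation is the same.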
I am using the CoreNLP library to find coreference in my text:
Tyson lives in New York City with his wife and their two children.
When I run this on the Stanford CoreNLP online demo it gives the correct output,
but when I run the same text on my machine it returns null on this line of code:
Map graph = document.get(CorefChainAnnotation.class);
Thank you
Look into this complete example: http://blog.pengyifan.com/resolve-coreference-using-stanford-corenlp/. I guess you are missing something (most likely one of the required annotators in the pipeline configuration), as I am unable to determine the exact reason from the code you provided.
Is there any online documentation explaining the tags output by the Stanford NLP parser?
I'm quite new to NLP, and it seems to me that tags like NN, VBZ, ... and relationships like poss, nsubj, ... follow some kind of standard, since I've seen the same output from other parsers.
Thanks a lot!
For grammatical dependencies (nsubj, poss...), you can read the official manual: http://nlp.stanford.edu/software/dependencies_manual.pdf
Tags like NN, VBZ... are part-of-speech tags. You can find info about them here: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html, or by googling "part-of-speech tags penn treebank"
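To make the two tag families concrete, here is a small Python lookup table covering only a handful of tags from those documents (descriptions paraphrased from the Penn Treebank tagset and the Stanford dependencies manual; this is far from the full inventory):

```python
# A small subset of Penn Treebank part-of-speech tags.
POS_TAGS = {
    "NN":  "noun, singular or mass",
    "NNS": "noun, plural",
    "VB":  "verb, base form",
    "VBZ": "verb, 3rd person singular present",
    "JJ":  "adjective",
    "DT":  "determiner",
}

# A small subset of Stanford typed-dependency labels.
DEPENDENCIES = {
    "nsubj": "nominal subject",
    "dobj":  "direct object",
    "poss":  "possession modifier",
    "amod":  "adjectival modifier",
}

print(POS_TAGS["VBZ"])        # verb, 3rd person singular present
print(DEPENDENCIES["nsubj"])  # nominal subject
```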
The answer given by @permanganate already provides the best (to my knowledge) list of part-of-speech tags defined in the Penn Treebank. For the dependency tags, however, I find the following Stanford Twiki page far more useful than the more commonly used manual:
Stanford Dependencies Twiki
It provides a neat representation of the entire hierarchy, followed by detailed examples of many tags that are not explained in the manual. I have found these illustrative examples to be very helpful, even when I am using other (non-Stanford) dependency parsers.
I use strings heavily in a project, so I am looking for a fast library for handling them. I think the Boyer-Moore algorithm is the best choice.
Is there a free solution for that?
You can consider the following resources implementing Boyer–Moore algorithm:
Boyer-Moore Horspool in Delphi 2010
Boyer-Moore-Horspool text searching
Search Components - Version 2.1
Boyer-moore, de la recherche efficace
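If you want to see the algorithm itself rather than a packaged component, the Boyer-Moore-Horspool variant (used by several of the resources above) is short enough to implement directly. A minimal sketch in Python, purely to illustrate the shift-table idea; the linked components are the Delphi implementations the question asks for:

```python
def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1.

    Boyer-Moore-Horspool: compare the window against the pattern and,
    on a mismatch, shift by the distance from the last occurrence of
    the window's final character to the end of the pattern.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Bad-character shift table; characters absent from the
    # pattern get the default shift of the full pattern length.
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)
    return -1

print(horspool_search("here is a simple example", "example"))  # 17
```

The preprocessing is O(pattern length), and on typical text the search skips most positions, which is what makes the Boyer-Moore family fast in practice.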
Edit:
The StringSimilarity package of theunknownones project is a good source for fuzzy and phonetic string comparison algorithms:
DamerauLevenshtein
Koelner Phonetik
SoundEx
Metaphone
DoubleMetaphone
NGram
Dice
JaroWinkler
NeedlemanWunch
SmithWatermanGotoh
MongeElkan
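To give a feel for what the first of those measures computes, here is a Python sketch of the Damerau-Levenshtein distance (the optimal string alignment variant); the StringSimilarity package provides the same family of measures in Delphi:

```python
def damerau_levenshtein(a, b):
    """Edit distance counting insertions, deletions, substitutions,
    and transpositions of adjacent characters (OSA variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1
                    and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("kitten", "sitting"))  # 3
print(damerau_levenshtein("ab", "ba"))           # 1
```

The transposition case is what separates this from plain Levenshtein: swapping two adjacent characters costs one edit instead of two, which matches how typos usually happen.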
CAUTION: this answers the comment rather than the question itself.
There is (or rather was, because it has since been abandoned) a Delphi unit named FastStrings which implements the Boyer–Moore string search algorithm with heavy use of inline assembler. Is this the one you are looking for?
As a side note: the project homepage is now defunct, as is the author's e-mail, so I find reuse (and modification and, naturally, any further development) of this code rather problematic given how restrictive the licensing terms are.
It's possible to do interesting things with what would ordinarily be thought of as typesetting languages. For example, you can construct the Mandelbrot set in PostScript.
It is suggested in this MathOverflow question that LaTeX may be Turing-complete. This implies the ability to write arbitrary programs (although it may not be easy!). Does anyone know of any concrete example of such a program in LaTeX, which does something highly unusual with the language?
In issue 13 of The Monad Reader, Stephen Hicks writes about implementing the solution to an ICFP contest (involving Mars rover navigation) in TeX, with copious use of macros. Amusingly, the solution's output when typeset is a postscript map of the rover's path.
Alternatively, Andrew Greene wrote a BASIC interpreter in TeX (more details). This may count as slightly perverse.
\def\K#1#2{#1}
\def\S#1#2#3{#1#3{#2#3}}
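As a self-contained sketch of why those two macros matter (note \K must return its first argument, as the K combinator requires), you can check the classic identity S K K = I by expansion:

```tex
\def\K#1#2{#1}
\def\S#1#2#3{#1#3{#2#3}}
% \S\K\K x expands to \K x{\K x}, which expands to x,
% so \S\K\K behaves as the identity combinator.
\message{[\S\K\K x]}  % logs [x]
\bye
```

Since the S and K combinators alone suffice for combinatory calculus, having both as pure expansion macros is a compact argument that TeX's macro expansion can encode arbitrary computation.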
The pgfmath library still amazes me. But on a more Turing-related note: it is possible to write an actual Turing machine in TeX, as per http://en.literateprograms.org/Turing_machine_simulator_(LaTeX). It's just a nifty way of using expansions in TeX.
PostScript is Turing complete as well; if you read the manual, you'll be amazed by its general programming capabilities (at least, I was).
I'm not sure if this qualifies as programming per se, but I've recently started doing something a bit like object-oriented programming in LaTeX. (You don't need to know any maths to follow this.) In recent papers, I've been writing about categories, which have objects and morphisms. Since there have been quite a few of those, I wanted a consistent style, so that, say, 𝒞 was a category with typical object C and typical morphism c. Then I'd also have 𝒟 with D and d.
So I define a "class", say "category" (you need to be a mathematician to understand the joke there), declare that C is an instance of this class, and then have access to \ccat, \cobj, \cmor, and so forth. The reason for not doing \cat{c}, \obj{c}, \mor{c}, and so forth, is that sometimes these categories have special names, and so after declaring the instance I can modify its name very easily: I simply redefine \ccat (well, actually \mathccat, since \ccat is a wrapper which selects \mathccat in math mode and \textccat in text mode). Of course, it's a little more complicated than the above suggests, and the OO stuff really comes in useful when I want to define a new category as a variant of an old one; it can even deal with the case where the old one doesn't exist yet.
Although it may not qualify as actual programming, I am using it in real papers and do find it useful; the other answers (so far) feel more like showing off the capabilities of LaTeX than like sensible solutions to practical problems.
I know of someone who wrote the answer to an ACM contest problem in LaTeX.