Are there any tools available for generating RDF from natural language? A list of RDFizers compiled by the SIMILE project only mentions one, the Monrai Cypher. Unfortunately, it seems to have been a proprietary tool developed by Monrai Technologies, which has since disappeared, and I can't find any download links. Has anyone seen anything similar?
You want some ontology learning and population tools.
This online article lists 4 different systems:
Text2Onto,
Abraxas,
KnowItAll,
OntoLearn
You may want to check out the book; it reviews several ontology learning tools as well:
Ontology learning from text: methods, evaluation and applications, by Paul Buitelaar, Philipp Cimiano, Bernardo Magnini
You might look into OpenCalais, Zemanta and Hakia, which all have nice APIs for extracting semantic data out of internet resources. I'm not familiar with Monrai Cypher, but these might help.
You could use the Python NLTK to parse the text and emit the RDF triples.
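For a rough idea of what that looks like, here is a minimal sketch combining NLTK with rdflib; the subject-verb-object heuristic, the example sentence and the http://example.org/ namespace are purely illustrative, and real ontology population would need proper NER and relation extraction.

```python
# Minimal, illustrative sketch: POS-tag a sentence with NLTK and emit one
# naive subject-predicate-object triple with rdflib (example namespace made up).
import nltk
from rdflib import Graph, Namespace

# One-time downloads (uncomment on first run):
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

EX = Namespace("http://example.org/")  # illustrative namespace

def naive_triple(sentence):
    """Very rough heuristic: first noun = subject, first verb = predicate,
    first noun after the verb = object."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    subj = pred = obj = None
    for word, tag in tagged:
        if subj is None and tag.startswith("NN"):
            subj = word
        elif subj is not None and pred is None and tag.startswith("VB"):
            pred = word
        elif pred is not None and tag.startswith("NN"):
            obj = word
            break
    if subj and pred and obj:
        return (EX[subj], EX[pred], EX[obj])
    return None

g = Graph()
triple = naive_triple("Berlin is the capital of Germany")
if triple:
    g.add(triple)
print(g.serialize(format="turtle"))  # returns a str in rdflib 6+
```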
My goal is to build an automated Knowledge Graph. I have decided to use Neo4j as my database. I am intending to load a JSON file from my local directory into Neo4j. The data I will be using are the Yelp datasets (the JSON files are quite large).
I have seen some Neo4j examples with GraphAware and OpenNLP. I read that Neo4j has good support for Java apps. I have also read that Neo4j supports Python (I am intending to use NLTK). Is it advisable to use Neo4j with Java (Maven/Gradle) and OpenNLP, or should I use it with py2neo and NLTK?
I am really sorry that I don't have any prior experience with these tools. Any advice or recommendation will be greatly appreciated. Thank you so much!
Welcome to Stack Overflow! Unfortunately, this question asks for suggestions/opinions, so it isn't appropriate for this forum.
However, this is an area I have worked in, so I can confidently say that Java (or Kotlin) is the best way to go for Neo4j. The reason is that Java is Neo4j's native language, and there is significantly more community support for questions and a wider choice of libraries.
However, NLTK is much more powerful than OpenNLP. So, if your use case is simple enough for OpenNLP, then a purely Java/Kotlin approach is a good fit. Alternatively, you can use Java as an interfacing layer for the stored graph but do the language work in Python with NLTK, feeding the results into the graph (roughly as sketched below). This would, of course, dramatically increase the complexity of your project.
Ultimately, the best approach depends on your exact use-case and which trade-offs make the most sense for you.
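For what the Python route could look like, here is a minimal sketch using py2neo and NLTK's default named-entity chunker; the connection URI, credentials, node labels and sample review text are all made up for illustration.

```python
# Illustrative sketch of the "Python + NLTK feeding the graph" route:
# extract named entities from a review text and store them in Neo4j via py2neo.
import nltk
from py2neo import Graph, Node, Relationship

# One-time downloads (uncomment on first run):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
# nltk.download("maxent_ne_chunker"); nltk.download("words")

# Hypothetical connection details.
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))

def entities(text):
    """Yield (name, label) pairs found by NLTK's default NE chunker."""
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))
    for subtree in tree:
        if hasattr(subtree, "label"):  # entity chunks are sub-trees
            yield " ".join(token for token, _ in subtree.leaves()), subtree.label()

review_text = "Great tacos at La Taqueria in San Francisco."
review = Node("Review", text=review_text)
graph.create(review)

for name, label in entities(review_text):
    entity = Node("Entity", name=name, type=label)
    graph.merge(entity, "Entity", "name")                     # avoid duplicate entities
    graph.create(Relationship(review, "MENTIONS", entity))    # link review to entity
```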
We've been looking into languages for an ML project at work. A colleague of mine is a big Common Lisp fan; however, I have some concerns. Are there any good/modern ML libraries for Common Lisp that people know of (something comparable to Weka)? Also, does anyone know of a good statistics library for CLisp?
Thanks to ABCL you can use Weka in your Common Lisp program.
There are some libraries indexed on CLiki; RCL, RCLG and cl-random in particular look interesting.
GSLL gives you access to GSL's statistics functions.
We extract various information from e-mails - flights, car rentals, hotels and more. The method is to extract the body of the mail, usually in HTML form but sometimes plain text, or to use the information in a PDF/Word/RTF attachment. We then apply regular expressions (sometimes in several steps) to pull out the information, which is provided in tabular form (you can think of a flight table, a hotel table, etc.). Note that even though we parse HTML, this is not web scraping.
Currently we are using QL2's WebQL engine, but we are looking to replace it for business reasons. Can you recommend another engine? It must run on Linux and be accessible from Java (a Java API would be best, but web services are a good solution as well). It must also support regular expressions for text extraction rather than relying solely on the HTML structure.
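To make the pipeline concrete, here is a deliberately simplified, hypothetical example (not our actual WebQL rules, and in Python only for brevity) of the HTML-to-text plus regex-to-table idea; the field names and pattern are made up.

```python
# Simplified illustration of the extraction described above: strip the HTML
# body to text, then apply a regular expression to pull flight fields into a
# tabular row. The pattern and field names are invented for this example.
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text content of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)
    def text(self):
        return " ".join(self.chunks)

FLIGHT_RE = re.compile(
    r"Flight\s+(?P<flight>[A-Z]{2}\d{2,4}).*?"
    r"from\s+(?P<origin>[A-Z]{3})\s+to\s+(?P<dest>[A-Z]{3})",
    re.DOTALL,
)

def extract_flights(html):
    parser = TextExtractor()
    parser.feed(html)
    return [m.groupdict() for m in FLIGHT_RE.finditer(parser.text())]

sample = "<html><body>Your Flight LH1234 from TLV to FRA is confirmed.</body></html>"
print(extract_flights(sample))  # [{'flight': 'LH1234', 'origin': 'TLV', 'dest': 'FRA'}]
```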
I recommend that you have a look at R. It has a large number of text mining packages: see the Natural Language Processing task view, and in particular the tm package. Here are some relevant links:
Paper about the package in the Journal of Statistical Software: http://www.jstatsoft.org/v25/i05/paper. The paper includes a nice example analysing postings from the R-devel mailing list (https://stat.ethz.ch/pipermail/r-devel/) from 2006.
Package homepage: http://cran.r-project.org/web/packages/tm/index.html
Look at the introductory vignette: http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
In addition, R provides many tools for parsing HTML or XML. Have a look at this question for an example using the RCurl and XML packages.
Edit: You can integrate R with Java using JRI. It's a very widely used package, with many examples. You can also see these related questions.
Have a look at:
LingPipe - LingPipe is a suite of Java libraries for the linguistic analysis of human language.
Lucene - Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java.
Just wanted to update - our final decision was to implement the parsing in Groovy, and to add some required functionality (HTML to text, PDF to text, whitespace cleanup, etc.) either by implementing it in Java or by relying on third-party libraries.
I use a custom parser built with Flex and C++ for similar purposes. I'd suggest you take a look at parser generators for Java (JavaCC, .jj files) and the javacc-faq. Nutch does it this way (see NutchAnalysis.jj).
What tools are available for metamodelling?
Especially for developing diagram editors; at the moment I am trying out Eclipse GMF.
I am wondering what other options are out there.
Is any comparison available?
Your question is simply too broad for a single answer - due to many aspects.
First, meta-modelling is not a well-defined term but a rather fuzzy area, covering everything from modelling models of models to approaches like MDA.
Second, there are numerous options to developing diagram editors - going the Eclipse way is surely a nice option.
To get you at least started in the Eclipse department:
have a look at MOF, the meta-modelling architecture from the OMG (the group that maintains UML)
from there, approach EMOF, a subset supported by the Eclipse Modeling Framework in the form of Ecore.
building something on top of GMF might indeed be a good idea, because that's the route existing diagram editors for the Eclipse platform take (e.g. Omondo's EclipseUML)
there are a lot of tools in the Eclipse environment that can utilize Ecore; GMF itself builds on top of Ecore.
Dia has an API for this - I was able to fairly trivially frig their UML editor into a basic ER modelling tool by changing the arrow styles. With a DB reverse-engineering tool I found on SourceForge (it took the schema and spat out Dia files) you could use this to document databases. While what I did was fairly trivial, the API was quite straightforward and it didn't take me long to work out how to make the change.
If you're of a mind to try out Smalltalk, there used to be a Smalltalk meta-CASE framework called DOME which does this sort of thing. If you download VisualWorks, DOME is one of the contributed packages.
GMF is a nice example. At the core of this sits EMF/Ecore, as computerkram says. Ecore is also used as the base of Eclipse's UML2. The prestige use case and proof of concept for GMF is certainly UML2 Tools.
Although generally a UML tool, I would look at StarUML. It supports additional modules beyond what are already built in. If it doesn't have what you need built in or as a module, I suppose you could make your own, but I don't know how difficult that is.
Meta-modeling is mostly done in Smalltalk.
You might want to take a look at MOOSE (http://moose.unibe.ch). There are a lot of tools being developed there for program understanding. Most are Smalltalk-based; there is also some Java and C++ work.
Two of the most impressive tools are CodeCity and Mondrian. CodeCity can visualize code development over time, Mondrian provides scriptable visualization technology.
And of course there is the classic HotDraw, which is also available in Java.
For web development there is also Magritte, providing meta-descriptions for Seaside.
I would strongly recommend you look into DSM (Domain-Specific Modeling) as a general topic; meta-modeling is directly related. There are Eclipse-based tools like GMF that currently require Java coding but integrate nicely with other Eclipse tools and UML. However, there are two other classes of tools out there:
1. MetaCase, which I would call a pure DSM tool, as it focuses on letting a developer/modeler create a usable graphical model without nearly as much coding. Additionally, it can be easily deployed for others to use. GMF and Microsoft's beta software factory/DSM tool fall into this category.
2. Pure meta-modeling tools, which are not intended for DSM tooling, code generation, and the like. I do not follow these tools as closely, as I am interested in applications that generate tooling for SMEs, domain experts, and others to use and contribute value to an active project, not in modeling for modeling's sake or just documentation and theory.
If you want to learn more about number 1, the tooling applications for DSMs/meta-modeling, then check out my post "DSMForum.org great resources, worth a look", or just navigate directly to DSMForum.org.
In case you are interested in something related to modelling rather than code generation, have a look at adoxx.org. As a metamodelling platform it provides functionality and mechanisms to quickly develop your own DSL and allows you to focus on the model's needs (business requirements, conceptual-level design/specification). There is an active community from academia and practice developing prototypical as well as commercial applications based on the platform. Could be interesting ...
I've been given a job of 'translating' one language into another. The source is too flexible (complex) for a simple line by line approach with regex. Where can I go to learn more about lexical analysis and parsers?
If you want to get "emotional" about the subject, pick up a copy of "The Dragon Book." It is usually the text in a compiler design course. It will definitely meet your need "learn more about lexical analysis and parsers" as well as a bunch of other fun stuff!
IMH(umble)O, save yourself an arm and/or leg and buy an older edition - it will fill your information desires.
Try ANTLR:
ANTLR, ANother Tool for Language Recognition, is a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions in a variety of target languages.
There's a book for it also.
Niklaus Wirth's book "Compiler Construction" (available as a free PDF)
http://www.google.com/search?q=wirth+compiler+construction
I've recently been working with PLY which is an implementation of lex and yacc in Python. It's quite easy to get started with it and there are some simple examples in the documentation.
Parsing can quickly become a very technical topic and you'll find that you probably won't need to know all the details of the parsing algorithm if you're using a parser builder like PLY.
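As a taste of how little code is needed, here is a minimal expression evaluator in the spirit of the calculator example from the PLY documentation; the token names and grammar are just an illustration.

```python
# Tiny PLY example: lex and parse arithmetic expressions and evaluate them.
import ply.lex as lex
import ply.yacc as yacc

# --- lexer ---------------------------------------------------------------
tokens = ("NUMBER", "PLUS", "TIMES", "LPAREN", "RPAREN")

t_PLUS = r"\+"
t_TIMES = r"\*"
t_LPAREN = r"\("
t_RPAREN = r"\)"
t_ignore = " \t"

def t_NUMBER(t):
    r"\d+"
    t.value = int(t.value)
    return t

def t_error(t):
    raise SyntaxError(f"Illegal character {t.value[0]!r}")

# --- parser --------------------------------------------------------------
precedence = (("left", "PLUS"), ("left", "TIMES"))

def p_expr_plus(p):
    "expr : expr PLUS expr"
    p[0] = p[1] + p[3]

def p_expr_times(p):
    "expr : expr TIMES expr"
    p[0] = p[1] * p[3]

def p_expr_group(p):
    "expr : LPAREN expr RPAREN"
    p[0] = p[2]

def p_expr_number(p):
    "expr : NUMBER"
    p[0] = p[1]

def p_error(p):
    raise SyntaxError("Syntax error in input")

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse("2 + 3 * (4 + 1)"))  # 17
```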
Lots of people have recommended books. For many these are much more useful in a structured environment with assignments and due dates and so forth. Even if not, having the material presented in a different way can help greatly.
(a) Have you considered going to a school with a decent CS curriculum?
(b) There are lots of online lectures, such as MIT's OpenCourseWare. Their EE/CS section has many courses that touch on parsing, though I can't see any on parsing per se. It's typically introduced in one of the first theory courses, as language classification and automata are at the heart of much of CS theory.
If you prefer Java-based tools, the Java Compiler Compiler, JavaCC, is a nice parser/scanner. It's config-file driven and will generate Java code that you can include in your program. I haven't used it in a couple of years though, so I'm not sure how the current version is. You can find out more here: https://javacc.dev.java.net/
Lexing/parsing + type checking + code generation is a great CS exercise; I would recommend it to anyone wanting a solid basis, so I'm all for the Dragon Book.
I found this site helpful:
Lex and YACC primer/HOWTO
The first time I used lex/yacc was for a relatively simple project. This tutorial was all I really needed. When I approached more complex projects later, the familiarity I had from this tutorial and a simple project allowed me to build something fancier.
After taking (quite) a few compilers classes, I've used both The Dragon Book and C&T. I think C&T does a far better job of making compiler construction digestible. Not to take anything away from The Dragon Book, but I think C&T is a far more practical book.
Also, if you like writing in Java, I recommend using JFlex and BYACC/J for your lexing and parsing needs.
Yet another textbook to consider is Programming Language Pragmatics. I prefer it over the Dragon book, but YMMV.
If you're using Perl, yet another tool to consider is Parse::RecDescent.
If you just need to do this translation once and don't know anything about compiler technology, I would suggest that you get as far as you can with some fairly simplistic translations and then fix it up by hand. Yes, it is a lot of work. But it is less work than learning a complex subject and coding up the right solution for one job. That said, you should still learn the subject, but don't let not knowing it be a roadblock to finishing your current project.
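As a rough sketch of that "simplistic translation, then fix by hand" idea, something like the following works: a table of regex rewrites plus a marker for anything it can't handle. The source and target syntaxes here are made up for illustration.

```python
# Sketch of the "get as far as you can mechanically" approach: a table of
# regex rewrites from an invented source syntax to an invented target syntax,
# flagging anything unhandled for manual follow-up.
import re

REWRITES = [
    (re.compile(r"^\s*PRINT\s+(.*)$"), r"print(\1)"),
    (re.compile(r"^\s*LET\s+(\w+)\s*=\s*(.*)$"), r"\1 = \2"),
    (re.compile(r"^\s*REM\s*(.*)$"), r"# \1"),
]

def translate_line(line):
    for pattern, template in REWRITES:
        if pattern.match(line):
            return pattern.sub(template, line)
    return f"# TODO(manual): {line}"  # leave a marker for hand fixing

source = ["LET x = 40 + 2", 'PRINT "x is"', "GOSUB 100"]
for line in source:
    print(translate_line(line))
```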
Parsing Techniques - A Practical Guide
By Dick Grune and Ceriel J.H. Jacobs
This book (freely available as PDF) gives an extensive overview of different parsing techniques/algorithms. If you really want to understand the different parsing algorithms, this IMO is a better reference than the Dragon Book (as Parsing Techniques focuses entirely on parsing, while the Dragon Book covers parsing only as one - although important - part of the compiler construction process).
flex and bison are the new lex and yacc though. The syntax for BNF is often derided for being a bit obtuse. Some have moved to ANTLR and Ragel for this reason.
If you're not doing much translation, you may want to pull off a one-off using multiline regexes with Perl or Ruby. Writing a compatible BNF grammar for an existing language is not a task to be taken lightly.
On the other hand, it is entirely possible to leverage any given language's .l and .y files if they are available as open source. Then, you could construct new code from an existing parse tree.