I have a set of football-related keywords, plus a data set of positive-sentiment words and one of negative-sentiment words. My requirement is to combine these, search social media to find real-time discussions and posts, run some statistical analysis, and reach some conclusions. The keywords and data sets are updated dynamically. Now my questions are:
What is the best practice for handling the three sets of data: an ontology structure or a well-structured database?
Can the data in an ontology be accessed from any programming language? Can I update or retrieve the data in an ontology using .NET, R, or any other language?
Thank you
Representing the related keywords as an ontology is a better idea than storing them in a database.
SPARQL can be used to access and search the ontology and retrieve related information.
Your system will be semantically rich if it is an ontology.
With a database, access time may be better, but it will not be semantically rich.
You may use Apache Jena, a free Java framework for creating and querying ontologies.
Python also has several libraries for working with ontologies.
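For example, a minimal sketch of loading and querying such an ontology from Python with the rdflib library might look like this (the file name, namespace, and class names are hypothetical):

    from rdflib import Graph

    # Load the (hypothetical) football keyword ontology into an in-memory RDF graph.
    g = Graph()
    g.parse("football_keywords.owl", format="xml")

    # SPARQL query: fetch every term modelled under a hypothetical ex:FootballTerm class.
    query = """
    PREFIX ex: <http://example.org/football#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?term ?label
    WHERE {
      ?term rdfs:subClassOf ex:FootballTerm ;
            rdfs:label ?label .
    }
    """

    for row in g.query(query):
        print(row.term, row.label)

Comparable libraries exist for other environments (for example dotNetRDF for .NET), so the ontology is not tied to a single programming language.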
I'm in the process of migrating a Neo4j database into Grakn for genomics and biological data. I have the files in CSV format, but I need an ETL tool to solve this problem in the simplest way.
I am following this Python migrator as a template:
https://blog.grakn.ai/loading-data-and-querying-knowledge-from-a-grakn-knowledge-graph-using-the-python-client-b764a476cda8
Am I correct in thinking this way -
Do nodes map to entities?
Do edges in neo4j map to relationships in Grakn?
Do labels map to attributes?
While it is possible to use a direct mapping of the property-graph model to the entity-relationship model (used by Grakn), it is highly likely that limitations and shortcomings of the property graph model will be transferred. This is why Grakn does not provide or encourage a completely general migration tool. Every Grakn knowledge graph should be powered by a thought-out model (i.e., schema) that is tailored to the intended domain.
To outline how one can easily (re)model a dataset in Grakn, the key is to create a schema that closely resembles how we perceive data in the real world in terms of things and their interactions. This easily maps onto the Entity-Relationship-Attribute model Grakn uses. It is common to iterate several times before settling on the final schema (though it can always be extended later).
Then we can:
ask intuitive questions (in the form of Graql queries) - using the defined Entities/Relationships/Attributes that map closely to our mental model
build an intelligent database that is capable of reasoning over data the same way we do, by adding logical, deductive rules that apply in our domain
I encourage you to check out this blog post on the challenges of working with graph databases, and for any domain-specific modelling questions head over to the Grakn community forum.
Good luck and welcome to Grakn!
If you map your property graph directly to Grakn, you will end up with relations that are most likely named as verbs connecting only two objects (one of which will appear to be a subject and the other an object). Grakn will be fine with this but, as mentioned previously, it may make leveraging all the goodness in Grakn more difficult. In particular, converting existing graph structures to hyperedges may take some significant reengineering. The good news is that the ETL would be straightforward.
A better solution would be to define your ideal schema first in Grakn (taking advantage of hyperedges), then fashion an ETL to populate that schema. In that case the ETL might be simple or complex, depending on how complex your original data is and how complex the new schema is.
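If it helps, here is a very rough sketch of what such an ETL could look like in Python. The keyspace, CSV columns, and Graql strings are hypothetical, and the client calls follow the 1.x grakn-client package, so adjust them to the client version you actually use:

    import csv
    from grakn.client import GraknClient  # grakn-client 1.x style API; adjust for your version

    def row_to_query(row):
        # Build a Graql insert statement for one CSV row
        # (hypothetical schema: a 'gene' entity with a 'gene-id' attribute).
        return f'insert $g isa gene, has gene-id "{row["gene_id"]}";'

    with GraknClient(uri="localhost:48555") as client:
        with client.session(keyspace="genomics") as session:
            with open("genes.csv") as f:
                for row in csv.DictReader(f):
                    with session.transaction().write() as tx:
                        tx.query(row_to_query(row))
                        tx.commit()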
I need to import the SNOMED CT ontology into a graph database, in this case Neo4j, though it could eventually be another choice.
However, I could not find a clear depiction of SNOMED CT's underlying relational data model to achieve this, or at least simplified SQL views that expose its entity relationships in a way that can be mapped to a graph database.
I would greatly appreciate any guidance or previous experiences with this matter.
Directly trying to serialise the relational data model is probably going to be quite difficult and will take you further away from your goal.
It is worth noting that SNOMED data is actually available in RDF format already. So you get a graph structure for "free".
For example, this project provides the data in RDF format, and putting RDF data into a graph is quite simple regardless of whether you choose Titan or Neo4j.
Side Note:
A colleague of mine has actually worked on importing SNOMED data into a Grakn graph, a semantic graph system we both work on. If you are interested you can check out his work here. Grakn is a semantic graph solution which runs on top of Titan.
If you are looking for a sample of how to model the Concepts, Descriptions and Relationships in a graph database, I have a sample project on GitHub that can upload the SNOMED data into a Neo4j database.
https://github.com/pradeepvemulakonda/Snomed
Before you go into the implementation details, I would suggest trying out the following SNOMED data browser:
http://ontoserver.csiro.au/shrimp/
Once you get a feel for the concepts and relationships, you can go through the implementation. You can use the following gist to understand how to query the uploaded concepts and relationships in Neo4j.
https://neo4j.com/graphgist/95f4f165-0172-4b3d-981b-edcbab2e0a4b#listing_category=health-care-and-science
SNOMED can be loaded into MySQL using the UMLS (Unified Medical Language System) released by the NIH. Once loaded, the table MRREL contains all the relations between SNOMED nodes. If you want to load it straight into Neo4j you can skip the MySQL step entirely and work directly with the UMLS RRF files. The RRF documentation is not great, but the files themselves are easy-to-parse tabular text.
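As a quick illustration of how simple the parsing is, a first pass over MRREL.RRF in Python could look like the sketch below; the pipe-delimited column positions are taken from the UMLS MRREL documentation, so double-check them against your release before relying on this:

    import csv

    # MRREL.RRF is pipe-delimited; relevant columns (per the UMLS docs):
    # 0 = CUI1, 3 = REL, 4 = CUI2, 7 = RELA, 10 = SAB
    snomed_edges = []
    with open("MRREL.RRF", encoding="utf-8") as f:
        for fields in csv.reader(f, delimiter="|"):
            if fields[10] == "SNOMEDCT_US":  # keep only SNOMED CT rows
                snomed_edges.append((fields[0], fields[7] or fields[3], fields[4]))

    print(f"{len(snomed_edges)} SNOMED CT relations loaded")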
There are in fact three tables: Concepts, Descriptions, and Relationships.
You'll find them described here:
https://confluence.ihtsdotools.org/display/DOCTIG/3.1.+Components
Most important are the relations between Relationships and Concepts, and between Descriptions and Concepts.
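As a rough sketch, loading the Relationships file into Neo4j with the official Python driver could look like the following; the tab-separated RF2 column names come from the IHTSDO documentation linked above, and the file name, credentials, and labels are placeholders:

    import csv
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Create source and destination Concept nodes and a typed relationship between them.
    CYPHER = """
    MERGE (s:Concept {sctid: $source})
    MERGE (d:Concept {sctid: $dest})
    MERGE (s)-[:RELATES {typeId: $type}]->(d)
    """

    with driver.session() as session, open("sct2_Relationship_Snapshot.txt", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["active"] == "1":  # skip inactive relationships
                session.run(CYPHER, source=row["sourceId"], dest=row["destinationId"], type=row["typeId"])

    driver.close()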
I know this question may not be suitable for SO, but please let it stay here for a while. Last time my question was moved to Cross Validated, it froze; no more views or feedback.
I came across a question that does not make much sense to me: how can IFC models be interrogated via NLP? Consider IFC models as semantically rich structured data. IFC defines an EXPRESS-based entity-relationship model consisting of entities organized into an object-based inheritance hierarchy. Examples of entities include building elements, geometry, and basic constructs.
How could NLP be used for this type of data? I don't see how NLP is relevant at all.
In general, I would suggest that using NLP techniques to "interrogate" already (quite formally) structured data like EXPRESS would be overkill at best and a time / maintenance sinkhole at worst. In general, the strengths of NLP (human language ambiguity resolution, coreference resolution, text summarization, textual entailment, etc.) are wholly unnecessary when you already have such an unambiguous encoding as this. If anything, you could imagine translating this schema directly into a Prolog application for direct logic queries, etc. (which is quite a different direction than NLP).
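Purely to illustrate how directly queryable the data already is, this is the kind of structured access you get with no NLP at all; the sketch uses the open-source ifcopenshell library and a hypothetical file name:

    import ifcopenshell

    # Open an IFC (EXPRESS/STEP) model and query it by entity type directly.
    model = ifcopenshell.open("building.ifc")

    walls = model.by_type("IfcWall")
    print(f"{len(walls)} walls in the model")

    for wall in walls:
        # Each entity exposes the attributes defined by the schema, e.g. GlobalId and Name.
        print(wall.GlobalId, wall.Name)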
I did some searches to try to find the references you may have been referring to. The only item I found was Extending Building Information Models Semiautomatically Using Semantic Natural Language Processing Techniques:
... the authors propose a new method for extending the IFC schema to incorporate CC-related information, in an objective and semiautomated manner. The method utilizes semantic natural language processing techniques and machine learning techniques to extract concepts from documents that are related to CC [compliance checking] (e.g., building codes) and match the extracted concepts to concepts in the IFC class hierarchy.
So in this example, at least, the authors are not "interrogating" the IFC schema with NLP, but rather using it to augment existing schemas with additional information extracted from human-readable text. This makes much more sense. If you want to post the actual URL or reference that contains the "NLP interrogation" phrase, I should be able to comment more specifically.
Edit:
The project grant abstract you referenced does not contain much in the way of details, but they have this sentence:
... The information embedded in the parametric 3D model is intended for facility or workplace management using appropriate software. However, this information also has the potential, when combined with IoT sensors and cognitive computing, to be utilised by healthcare professionals in Ambient Assisted Living (AAL) environments. This project will examine how as-constructed BIM models of healthcare facilities can be interrogated via natural language processing to support AAL. ...
I can only speculate on the following reason for possibly using an NLP framework for this purpose:
While BIM models include Industry Foundation Classes (IFCs) and aecXML, there are many dozens of other formats, many of them proprietary. Some are CAD-integrated and others are standalone. Rather than pay for many proprietary licenses (some of these enterprise products are quite expensive), and/or spend the time to develop proper structured query behavior for the various diverse file format specifications (which may not be publicly available in proprietary cases), the authors have chosen a more automated, general solution to extract the content they are looking for (which I assume must be textual or textual tags in nearly all cases). This would almost be akin to a search engine "scraping" websites and looking for key words or phrases and synonyms to them, etc. The upside is they don't have to explicitly code against all the different possible BIM file formats to get good coverage, nor pay out large sums of money. The downside is they open up new issues and considerations that come with NLP, including training, validation, supervision, etc. And NLP will never have the same level of accuracy you could obtain from a true structured query against a known schema.
I'm trying to get my head around two very different approaches to data sharing: OData and Semantic Web/Linked Data. Is there a good comparison of the two?
As I understand it, OData combines syndication/CRUD (AtomPub), serialisation formats (XML, JSON), a data model, a query language, and some semantics/conventions governing use of those existing technologies. It's primarily intended for exposing data from one system so that others can consume it.
Linked Data is a data model, a rigorous commitment to URIs, an (optional?) serialisation format (RDF/XML), but (correct me if I'm wrong) doesn't say anything about transport, CRUD, etc. It seems intended to allow inferencing across lots of little chunks of data drawn from a wide variety of sources. (Not something of major importance to us right now - we would be synchronising large slabs of data between a small number of sources, and wanting to preserve provenance information).
I'm interested in technologies for sharing data between certain data management platforms, some of which I work on directly. OData seems more appealing as it's very straightforward to explain to developers: implement this API, follow that Atom standard, serialise the data like this. We're already doing something very similar for one platform: sharing XML-serialised data on an Atom feed, with URL parameters used to filter.
By contrast, my past experiences working with RDF have given me an impression of brittle, opaque (massive slabs of RDF/XML), inaccessible (using SPARQL vs SQL) technology - but perhaps I'm confusing the experience of working with a triplestore like Jena with simply exposing an existing database via a linked data API.
Any pointers, comments etc on the differences and similarities between these two approaches in terms of scope, technologies, ease, future potential etc would be great.
I think discussing this in depth is not really what Stackoverflow is meant for, but just to give you some pointers to interesting discussions about differences and overlap:
Oh - it is data on the Web
Microsoft, OData and RDF
One of the key differences seems to be that OData has no means to link data from different sources to each other. Essentially, you're still stuck in a silo.
It might also be interesting to check out various attempts to convert data between the two approaches. See, among others, http://answers.semanticweb.com/questions/1298/has-anyone-written-a-mapping-from-odata-to-rdf .
OData may be easier, but it's not better by any means. SPARQL and RDF (forget RDF/XML; better to look at Turtle) satisfy everything OData offers, while providing many more cutting-edge features such as:
Federation Extensions
Linked Data
Reasoning and Inference (for the more brave)
Equally, the software supporting these standards is actually quite sophisticated. Most people interested in OData come from a Microsoft background, so take a look at dotNetRDF.
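To make the contrast concrete, here is roughly what consuming each looks like, shown in Python for brevity; the endpoint URLs and property names are placeholders, with plain requests for the OData call and the SPARQLWrapper package for the SPARQL call:

    import requests
    from SPARQLWrapper import SPARQLWrapper, JSON

    # OData: a REST call with query options encoded in the URL.
    odata = requests.get(
        "https://example.org/odata/Products",
        params={"$filter": "Price gt 20", "$format": "json"},
    )
    print(odata.json())

    # SPARQL: a declarative graph-pattern query against an endpoint.
    sparql = SPARQLWrapper("https://example.org/sparql")
    sparql.setQuery("""
        SELECT ?product ?price WHERE {
            ?product <http://example.org/schema/price> ?price .
            FILTER(?price > 20)
        }
    """)
    sparql.setReturnFormat(JSON)
    print(sparql.query().convert())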
Here's a comparison matrix:
http://uoccou.wordpress.com/2011/02/17/linked-data-odata-gdata-datarss-comparison-matrix/
Unfortunately the table formatting is pretty horrible, but the content is useful.
I am developing a web-based application using Rails. I am debating between using a Graph Database, such as InfoGrid, or a Document Database, such as MongoDB.
My application will need to store both small sets of data, such as a URL, and very large sets of data, such as Virtual Machines. This data will be tied to a single user.
I am interested in learning about people's experiences with either graph or document databases and why they would choose either option.
Thank you
I don't feel experienced enough with both worlds to answer your question properly and fully, but I have been using a document database for some time, so here are some personal hints.
Document databases are based on the concepts of keys, values, and static views, and are pretty good at finding a set of documents that have a particular value.
They don't conceptualize the relations between documents.
So if your software has to provide advanced "queries" where selection criteria act on several types of document, or if you simply need to perform a selection using several elements, the key/value concept is not appropriate.
There are also a number of other cases where document databases are inappropriate: presenting large datasets in "paged" tables, sortable on several columns, is one case where performance is low and disk space usage is huge.
So in many cases you'll have to perform "server side" processing to pick up the pieces, and with Rails, or any other Ruby-based framework, you might run into performance issues.
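To illustrate the access pattern, a single-value lookup in a document store is trivial, while anything resembling a join across document types has to be stitched together in application code; the sketch below uses pymongo (Python rather than Ruby, for brevity) with made-up database and field names:

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["app"]

    # Easy: find all documents of one type matching a particular value.
    vms = list(db.machines.find({"status": "running"}))

    # Harder: "join" against another document type by hand, in application code.
    owner_ids = {vm["owner_id"] for vm in vms}
    owners = {u["_id"]: u for u in db.users.find({"_id": {"$in": list(owner_ids)}})}
    for vm in vms:
        print(owners[vm["owner_id"]]["name"], vm["name"])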
Graph databases are based on the concept of a triplestore, meaning that they also conceptualize the relations between entities.
The graph can be traversed using the relations (and entity roles), and might be more convenient when performing searches across relation-structured data.
As I have no experience with graph databases, I'm not aware whether a graph database can be easily queried/traversed with several criteria; if a better-informed reader has such information, I'd really appreciate examples of such queries/traversals.
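For what it's worth, in a property graph such as Neo4j a traversal that combines several criteria across related entities can in principle be expressed as a single declarative query; the sketch below (hypothetical labels, properties, and credentials, sent through the official neo4j Python driver) is meant only as an illustration:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # One declarative pattern mixes criteria on users, their machines and the
    # datacenters those machines run in; the traversal follows the relations.
    QUERY = """
    MATCH (u:User)-[:OWNS]->(m:Machine)-[:HOSTED_IN]->(d:Datacenter)
    WHERE u.plan = 'premium' AND m.status = 'running' AND d.region = 'eu-west'
    RETURN u.name AS user, m.name AS machine, d.name AS datacenter
    """

    with driver.session() as session:
        for record in session.run(QUERY):
            print(record["user"], record["machine"], record["datacenter"])

    driver.close()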
I'm currently reading about InfoGrid and trying to figure out whether such databases could be handy for performing complex requests on a very large set of data, relations included.
From what I can read, InfoGrid should be considered a "data federator" able to search/mine the data from several sources (Stores), which can also be NoSQL databases such as Mongo.
Which means that you could use a Mongo store for updates and InfoGrid for data searching, and maybe spare a lot of CPU and disk when it comes to complex searches inside a NoSQL database.
Of course it might seem a little "overkill" if your app simply stores a large set of huge binary files in a database and all you need is to perform simple key queries and retrieve the result. In that case a NoSQL database such as Mongo or Couch would probably be handy.
Hope some of this helps ;)
When connecting related documents by edges, will you get a shallow or a deep graph? I think the answer to that question is important when deciding between graph DBs and document DBs. See Square Pegs and Round Holes in the NOSQL World by Jim Webber for thoughts along these lines.