Importing SNOMED CT into Neo4j

I need to import the SNOMED CT ontology into a graph database, in this case Neo4j, though it could eventually be another choice.
However, I could not find a clear depiction of SNOMED CT's underlying relational data model to work from, or at least simplified SQL views that expose the entity relationships in a way that can be mapped to a graph database.
I would greatly appreciate any guidance or previous experiences with this matter.

Directly trying to serialise the relational data model is probably going to be quite difficult and will take you further away from your goal.
It is worth noting that SNOMED data is actually available in RDF format already, so you get a graph structure for "free".
For example, this project provides the data in RDF format, and putting RDF data into a graph is quite simple regardless of your choice of Titan or Neo4j.
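As a rough illustration of "quite simple", here is a minimal sketch that parses an RDF file with rdflib and pushes the triples into Neo4j through the official Python driver. The file name, connection details and the :Resource/:RELATES naming are all placeholder assumptions, not part of any SNOMED distribution.

# Minimal sketch: RDF triples -> Neo4j. Assumes a local Neo4j instance;
# file name, credentials and label/relationship names are placeholders.
import rdflib
from neo4j import GraphDatabase

g = rdflib.Graph()
g.parse("snomed.rdf")  # rdflib infers the serialization from the file

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    for s, p, o in g:
        # For brevity, literal objects are flattened into Resource nodes too;
        # a fuller loader would branch on isinstance(o, rdflib.Literal).
        session.run(
            "MERGE (a:Resource {uri: $s}) "
            "MERGE (b:Resource {uri: $o}) "
            "MERGE (a)-[:RELATES {predicate: $p}]->(b)",
            s=str(s), p=str(p), o=str(o),
        )
driver.close()

Triple-by-triple MERGE is fine for a first experiment but slow at ontology scale; batching the triples through UNWIND is the usual next step.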
Side note:
A colleague of mine has actually worked on importing SNOMED data into a Grakn graph, a semantic graph system we both work on. If you're interested, you can check out his work here. Grakn runs on top of Titan.

If you are looking for a sample of how to model the Concepts, Descriptions and Relationships in a graph database, I have a sample project on GitHub that can upload the SNOMED data into a Neo4j database.
https://github.com/pradeepvemulakonda/Snomed
Before you go into the implementation details, I would suggest trying out the following SNOMED data browser at
http://ontoserver.csiro.au/shrimp/
Once you get a feel for the concepts and relationships, you can go through the implementation. You can use the following gist to understand how to query the uploaded concepts and relationships in Neo4j.
https://neo4j.com/graphgist/95f4f165-0172-4b3d-981b-edcbab2e0a4b#listing_category=health-care-and-science
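For example, once the data is uploaded, a lookup of a concept's parents might look like the sketch below. It assumes the import created (:Concept) nodes with sctid and fsn properties linked by ISA relationships - these names are assumptions, so check the project and gist for the actual labels and relationship types.

# Illustrative query sketch; the :Concept label, sctid/fsn properties and
# ISA relationship type are assumptions about the import, not a given.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (c:Concept {sctid: $sctid})-[:ISA]->(parent:Concept)
RETURN parent.sctid AS sctid, parent.fsn AS name
"""

with driver.session() as session:
    for record in session.run(query, sctid="22298006"):  # 22298006 = myocardial infarction
        print(record["sctid"], record["name"])
driver.close()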

SNOMED can be loaded into MySQL using the UMLS (Unified Medical Language System) releases from the NIH. Once loaded, the table MRREL contains all the relations between SNOMED nodes. If you want to load it into Neo4j right away, you can skip the MySQL step entirely and work directly with the UMLS RRF files. The RRF format documentation is not great, but the files themselves are easy to parse: tabular, pipe-delimited text, as sketched below.
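A hedged sketch of that direct route, assuming the published MRREL column layout (CUI1 at index 0, REL at 3, CUI2 at 4, RELA at 7, SAB at 10) and a local Neo4j instance; paths, credentials and the :Concept/:REL naming are placeholders.

# Sketch: load SNOMED CT rows from UMLS MRREL.RRF straight into Neo4j.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with open("MRREL.RRF", encoding="utf-8") as f, driver.session() as session:
    for line in f:
        fields = line.rstrip("\n").split("|")
        cui1, rel, cui2, rela, sab = fields[0], fields[3], fields[4], fields[7], fields[10]
        if sab != "SNOMEDCT_US":  # keep only rows sourced from SNOMED CT
            continue
        session.run(
            "MERGE (a:Concept {cui: $cui1}) "
            "MERGE (b:Concept {cui: $cui2}) "
            "MERGE (a)-[:REL {rel: $rel, rela: $rela}]->(b)",
            cui1=cui1, cui2=cui2, rel=rel, rela=rela,
        )
driver.close()

Row-by-row MERGE is fine for experimenting but slow at UMLS scale; for a full load you would batch with UNWIND or use neo4j-admin import.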

There are in fact three core tables: Concepts, Descriptions and Relationships.
You'll find them described here:
https://confluence.ihtsdotools.org/display/DOCTIG/3.1.+Components
Most important are the links between Relationships and Concepts, and between Descriptions and Concepts, as sketched below.
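In release-file (RF2) terms, each row of the Relationship file links a sourceId concept to a destinationId concept via a typeId concept, which maps directly onto a graph edge. A small sketch of reading the tab-delimited snapshot file - the file name is a placeholder:

# Each active row is one edge: (sourceId) -[typeId]-> (destinationId);
# e.g. typeId 116680003 is the "Is a" relationship.
import csv

with open("sct2_Relationship_Snapshot.txt", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        if row["active"] != "1":
            continue
        print(row["sourceId"], row["typeId"], row["destinationId"])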

Related

Migrating from Neo4j to Grakn

I'm in the process of migrating a Neo4j database into Grakn for genomics and biological data. I have the data in CSV files, but I need an ETL tool to solve this problem in the simplest way.
I am following this template Python migrator:
https://blog.grakn.ai/loading-data-and-querying-knowledge-from-a-grakn-knowledge-graph-using-the-python-client-b764a476cda8
Am I correct in thinking about it this way:
Do nodes map to entities?
Do edges in Neo4j map to relationships in Grakn?
Do labels map to attributes?
While it is possible to use a direct mapping of the property-graph model to the entity-relationship model (used by Grakn), it is highly likely that the limitations and shortcomings of the property graph model will be transferred. This is why Grakn does not provide or encourage a completely general migration tool. Every Grakn knowledge graph should be powered by a thought-out model (i.e. schema) that is tailored to the intended domain.
To outline how one can easily (re)model a dataset in Grakn, the key is to create a schema that closely resembles how we perceive data in the real world in terms of things and their interactions. This easily maps onto the Entity-Relationship-Attribute model Grakn uses. It is common to iterate several times before settling on the final schema (though it can always be extended later).
Then we can:
ask intuitive questions (in the form of Graql queries) - using the defined Entities/Relationships/Attributes that map closely to our mental model (see the sketch after this list)
build an intelligent database that is capable of reasoning over data the same way we do, by adding logical, deductive rules that apply in our domain
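As a hedged illustration of such a question, using the Grakn 1.x Python client from the linked post - the genomics keyspace and the gene/symbol names are invented for the example, and the client API differs between Grakn versions:

# Hypothetical: ask "which gene symbols exist?" in Graql via the 1.x client.
from grakn.client import GraknClient

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="genomics") as session:
        with session.transaction().read() as tx:
            for answer in tx.query('match $g isa gene, has symbol $s; get $s;'):
                print(answer.map().get("s").value())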
I encourage you to check out this blog post on the challenges of working with graph databases, and for any domain-specific modeling questions head over to the Grakn community forum.
Good luck and welcome to Grakn!
If you map your property graph directly to GRAKN, you will end up with relations that are most likely named as verbs connecting only two objects (one of which will appear to be a subject and the other an object). GRAKN will be fine with this but, as mentioned previously, it may make leveraging all the goodness in GRAKN more difficult. In particular, converting existing graph structures to hyperedges may take some significant reengineering. The good news is that the ETL would be straightforward.
A better solution would be to define your ideal schema first in GRAKN (taking advantage of hyperedges), then fashion an ETL to populate the schema. In such a case, the ETL might be simple or complex. It would depend on how complex your original data was and how complex the new schema was.
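A hedged sketch of that schema-first step - defining a ternary relation (a hyperedge) before writing any ETL. Graql syntax varies between Grakn versions (this follows the 1.x style), and every type and role name below is invented for illustration:

# Illustrative schema-first sketch: a ternary relation that a property
# graph would have to model as an intermediate node.
from grakn.client import GraknClient

SCHEMA = """
define

symbol sub attribute, datatype string;

gene sub entity, has symbol, plays associated-gene;
disease sub entity, has symbol, plays associated-disease;
study sub entity, plays supporting-study;

gene-disease-association sub relation,
    relates associated-gene,
    relates associated-disease,
    relates supporting-study;
"""

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="genomics") as session:
        with session.transaction().write() as tx:
            tx.query(SCHEMA)
            tx.commit()

The ETL then becomes a matter of inserting instances that fill those three roles, however simple or complex the source data is.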

Best practices for developing an application with Neo4J

I recently came across an application which uses Neo4j as the backend. In my experience with SQL and other key-value based databases, I have developed an understanding (which could be refined) that other databases store data and your application derives the information, while with Neo4j you store the information. This means that the logic of deriving the information is already captured in the model of Neo4j. I am not able to get my head around this, because now I cannot have logic that can be composed and, most importantly, tested with unit tests. I can have component-level tests using embedded Neo4j, but that's not the same. Can someone please help me understand the application development philosophy/methodology with Neo4j?
...other databases store data and your application derives the information while with Neo4j you store the information.
Hmmm... define data and define information. Mostly it goes: data is something that requires further processing to become information (that is, something informative - something you can derive conclusions or insights from).
Anyhow, I doubt this has anything to do with graph databases vs relational/aggregate databases. A database, as the name suggests, stores data.
This means that the logic of deriving the information is already captured in the model of NEO4J.
I'm not sure what you mean by "the logic... is already captured". Some queries are much easier with Neo4j+Cypher than with, say, SQL - like "find all the friends of my friends that live in Berlin" - but I would hardly relate this to 'logic'.
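For concreteness, that query might look like the following sketch against a toy schema (the :Person/:City labels and FRIEND/LIVES_IN relationship types are assumptions for illustration):

# "Friends of my friends that live in Berlin", via the Python driver.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (me:Person {name: $name})-[:FRIEND]-(:Person)-[:FRIEND]-(fof:Person),
      (fof)-[:LIVES_IN]->(:City {name: 'Berlin'})
WHERE fof <> me
RETURN DISTINCT fof.name AS name
"""

with driver.session() as session:
    for record in session.run(query, name="Alice"):
        print(record["name"])
driver.close()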
I cannot have logic that can be composed and most importantly something that can be tested with unit tests.
What do you mean by 'logic that can be composed'? And unit tests have nothing to do with this, I'm afraid - there's no logic being tested when you compare graph databases with other databases.
Can someone please help me understand the application development philosophy/methodology with Neo4j?
There's really not much to it. Neo4j is a database like any other; it just uses a different model from relational/aggregate databases.
To highlight two of its strengths:
No joins - that's a pain point with relational/aggregate databases, especially with complex queries. Essentially, nearly all systems involve a data model that is a graph (you only need one many-to-many relationship in your data model for that), and not using a graph database is a form of dimensionality reduction. The reason relational databases prevailed for so many years is nothing short of a set of historical coincidences.
Easier DB migrations - that comes from being a schema-less database. You reap the same benefits with any other schema-less database.
I strongly recommend you read the 'NOSQL Overview' appendix of the free book Graph Databases. It focuses on a lot of these points.

How to import an .rdf file into a Neo4j database?

I have an .rdf file which I used with Dgraph to import the data and subsequently run queries to get the relations in the Dgraph Ratel UI.
Now I need to include this in my web application, for which Dgraph doesn't have support (link). Hence I started looking at Neo4j.
Can anyone please help out with how to import an .rdf file into Neo4j, or, failing that, what the workaround is?
Thanks.
Labeled property graphs (LPG) and RDF graphs are different graph data models; see:
RDF Triple Stores vs. Labeled Property Graphs: What's the Difference? by Jesus Barrasa
Reification is a red herring by Bob DuCharme
Neo4j supports the LPG data model only. However, there is the neosemantics Neo4j plugin.
After installation, say:
CALL semantics.importRDF(<RDF_file_path>, <RDF_serialization_format>)
The mapping from RDF to LPG is described here.
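For example, a concrete call via the Python driver might look like the sketch below. The path and serialization format are placeholders, and note that newer neosemantics releases renamed the procedures (to the n10s.rdf.import.* namespace), so check the plugin docs for your version.

# Run the plugin's import procedure over Bolt; path/format are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    session.run(
        "CALL semantics.importRDF($path, $format)",
        path="file:///data/mydata.rdf", format="RDF/XML",
    )
driver.close()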
That said, I'd suggest using a proper triplestore for RDF data.

When developing web applications, when would you use a graph database versus a document database?

I am developing a web-based application using Rails. I am debating between using a Graph Database, such as InfoGrid, or a Document Database, such as MongoDB.
My application will need to store both small sets of data, such as a URL, and very large sets of data, such as Virtual Machines. This data will be tied to a single user.
I am interested in learning about people's experiences with either graph or document databases and why they would choose one over the other.
Thank you
I don't feel experienced enough with both worlds to answer your question properly and fully; however, I have been using a document database for some time, and here are some personal hints.
Document databases are based on a concept of key/value pairs and static views, and are pretty cool for finding a set of documents that have a particular value.
They don't conceptualize the relations between documents.
So if your software has to provide advanced "queries" where selection criteria act on several 'types of document', or if you simply need to perform a selection using several elements, the [key, value] concept is not appropriate.
There are also a number of other cases where document databases are inappropriate: presenting large datasets in "paged" tables, sortable on several columns, is one of the cases where performance is low and disk space usage is huge.
So in many cases you'll have to perform "server-side" processing in order to pick up the pieces, and with Rails, or any other Ruby-based framework, you might run into performance issues.
Graph databases are based on the concept of a triplestore, meaning that they also conceptualize the relations between entities.
The graph can be traversed using the relations (and entity roles), and might be more convenient when performing searches across relation-structured data.
As I have no experience with graph databases, I'm not aware whether a graph database can easily be queried/traversed with several criteria; however, if an advised reader has such information, I'd really appreciate any examples of such queries/traversals.
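(For readers with the same question: a multi-criteria traversal in Neo4j's Cypher might look like the following sketch. The :Author/:Post/:Tag/:City schema and relationship names are a toy assumption for illustration.)

# "Authors in a given city whose posts carry a given tag" - several
# criteria applied across related entities in one pattern.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (a:Author)-[:LIVES_IN]->(:City {name: $city}),
      (a)-[:WROTE]->(p:Post)-[:TAGGED]->(:Tag {name: $tag})
RETURN a.name AS author, count(p) AS posts
ORDER BY posts DESC
"""

with driver.session() as session:
    for record in session.run(query, city="Berlin", tag="neo4j"):
        print(record["author"], record["posts"])
driver.close()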
I'm currently reading about InfoGrid and trying to figure out whether such databases could be handy for performing complex requests on a very large set of data, relations included...
From what I can read, InfoGrid should be considered a "data federator", able to search/mine the data from several sources (Stores), which can also be a NoSQL database such as Mongo.
Which means that you could use a Mongo store for updates and InfoGrid for data searching, and maybe spare a lot of CPU and disk when it comes to complex searches inside a NoSQL database.
Of course it might seem a little overkill if your app simply stores a large set of huge binary files in a database and all you need is to perform simple key queries and retrieve the results. In that case, a NoSQL database such as Mongo or Couch would probably be handy.
Hope some of this helps ;)
When connecting related documents by edges, will you get a shallow or a deep graph? I think the answer to that question is important when deciding between graph DBs and document DBs. See Square Pegs and Round Holes in the NOSQL World by Jim Webber for thoughts along these lines.

Where can I find a real dataset anywhere online that I could try doing a data warehouse cube with?

I am studying data warehouses and I have to do one final project for my studies.
I am thinking about doing a cube for a data warehouse. Where can I find a real dataset anywhere online that I could try doing a cube with?
You can refer to this page to see how to convert a part of the Northwind database to a star schema for building cubes: Northwind Star Schema.
Here's an example on AdventureWorks - of course, it's prebuilt for SSAS, but I guess you could look at the underlying AdventureWorks DB and do the dimensional modeling yourself.
I think doing a DW on an existing popular dataset like Northwind or AdventureWorks is probably not a great idea, because so many people have done them already. Even Stack Overflow has had data mining done on it, but perhaps it would be a good candidate - I'm not sure what Brent's work actually comprised.
So if you are looking to do an original project, you might need to look further afield, if only to distinguish your work from previous work.
