1. Can Neo4j store RDF directly? We understand it can import RDF and export RDF, but how is the data stored internally?
We also understand that in Neo4j we can create property graphs and turn them into a KG using the available APOC procedures and algorithms. Is that the case, or are we missing anything?
2. We would like to understand how an entity will be tagged against an ontology in a Neo4j KG implementation.
You can import RDF data into Neo4j; however, it will not be stored in exactly that format. Using Neosemantics, the triples will be converted into a property graph.
Neosemantics can also convert the property graph data back into triples should that be required.
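As a rough illustration, an import with Neosemantics might look like the following (a minimal sketch assuming the n10s plugin is installed on a recent Neo4j version; the URL is a placeholder and the constraint/procedure syntax varies between Neo4j versions):

// n10s requires a uniqueness constraint on Resource URIs before importing (Neo4j 5 syntax shown)
CREATE CONSTRAINT n10s_unique_uri FOR (r:Resource) REQUIRE r.uri IS UNIQUE;

// initialise the graph configuration with default RDF-to-property-graph mapping settings
CALL n10s.graphconfig.init();

// fetch an RDF file and import it; triples become nodes, relationships and properties
CALL n10s.rdf.import.fetch("https://example.org/ontology.ttl", "Turtle");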
I'm new to Neo4j and currently attempting to migrate existing data into a Neo4j database. I have written a small program to convert the current data (in a bespoke format) into a large CREATE Cypher query for the initial population of the database. My first iteration has been to somewhat retain the structure of the existing object model, i.e. objects become nodes, the node type is the same as the object name in the current object model, and the members become properties (the member name is the property name). This is done for all fundamental types (and strings), and any member objects are decomposed in the same way as in the original object model.
This has been fine in terms of performance, and 13000+ line CREATE Cypher queries have been generated which can be executed through the web frontend/client. However, I believe the model is not ideal for a graph database, since there can be many properties. Instead I would like to decompose these 'fundamental' nodes (those whose members are fundamental types) into their own nodes, each relating to a more 'abstract' node which represents the higher-level object/class. This means each member becomes a node with a single (at first; it may grow) property, say { value: "42" }, or I could set the node type to the data type (i.e. integer). If my understanding is correct, this would also allow me to create relationships between the 'members' (since they are nodes and not properties), allowing greater freedom when expressing relationships between original members of different objects rather than just relating the parent objects to each other.
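For instance, the decomposed structure described above might look something like this (a hypothetical sketch; Widget and count are made-up names):

// the parent object becomes an 'abstract' node, and each member becomes its own node
CREATE (w:Widget)-[:HAS_MEMBER]->(:Member:Integer {name: 'count', value: '42'})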
The problem is that this now generates 144000+ line Cypher queries (and this isn't a large dataset in comparison to others), which the Neo4j client seems to balk at. The code highlighting in the query input box appears to work (i.e. it highlights correctly, which I assume implies it was parsed correctly and is a valid Cypher query), but when I come to run the query, I get the usual browser-not-responding message and then a stack overflow (no pun intended) error. What's more, the Neo4j client doesn't exit elegantly and always requires me to force-end the task, and the database sits at 2.5-3 GB of memory usage from what is effectively a small amount of data (144000 lines, roughly two thirds of which are relationships, so at most ~48000 nodes). Yet I read I should be able to deal with millions of nodes and relationships in milliseconds?
I have tried it with Firefox and Chrome. I am using Neo4j Community Edition on Windows 10. The SDK would initially be used with C# and C++. This research is in its initial stages, so I haven't used the SDK yet.
Is this a valid approach, i.e. initially populating the database via a CREATE query?
Also, is my approach of decomposing the data into fundamental types a good one, or are there issues which are likely to arise from it?
That is a very large Cypher query!!!
You would do much better to populate your database using LOAD CSV FROM... and supplying a CSV file containing the data you want to load.
For a detailed explanation, have a look at:
https://neo4j.com/developer/guide-import-csv/
(This page also discusses the batch loader for really large datasets.)
Since you are generating code for the Cypher query, I wouldn't imagine you would have too much trouble generating a CSV file.
(As an indication of performance, I have been loading a 1 million record CSV today into Neo4j running on my laptop in under two minutes.)
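For illustration, a minimal sketch of such an import (assuming hypothetical nodes.csv and rels.csv files with header rows, placed in the import directory):

// create one node per row of nodes.csv
LOAD CSV WITH HEADERS FROM 'file:///nodes.csv' AS row
CREATE (:Object {id: row.id, name: row.name});

// create relationships by matching the previously created nodes on their ids
LOAD CSV WITH HEADERS FROM 'file:///rels.csv' AS row
MATCH (a:Object {id: row.fromId}), (b:Object {id: row.toId})
CREATE (a)-[:RELATES_TO]->(b);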
How can I extract an entity relationship diagram from a graph database? I have all the required files that were created by my application.
You can use CALL db.schema for a graph representation of the graph data model. There are a few other procedures to get the properties, keys, indexes, etc., such as CALL db.indexes and CALL db.propertyKeys.
The APOC procedure library has a few relevant functions that might help to get a tabular layout - or develop it yourself in Excel from the labels, property keys, etc.
You can also build a data model using the Arrows tool.
Please reorient your thinking to use graph terms - the equivalent of an ER diagram would be a model built using the Arrows tool or the output of db.schema.
I used CALL db.schema.visualization to visualize the database schema. As https://stackoverflow.com/a/45357049/7924573 already said, in graph databases this is as close as you can get to an ER diagram. In the remote interface you can export it directly as, e.g., an .svg graphic.
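For reference, the introspection calls mentioned above can be run directly in the browser (a sketch; some of these procedures have been renamed or replaced by SHOW commands in newer Neo4j versions):

CALL db.schema.visualization();  // graph view of labels and relationship types
CALL db.labels();                // all node labels in the database
CALL db.propertyKeys();          // all property keys
CALL db.indexes();               // index information (SHOW INDEXES in newer versions)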
There are several built-in input/output formats in Giraph, but all of those formats support only numerical ids and values.
So is there a way to process a property graph such that both vertices and edges can have multiple keys and values, or anything close? I'm specifically interested in whether an edge can have attributes like timeCreated or type.
Also, is there some convention of using only numerical ids and data for faster processing? Specifically, is the property graph from a graph database usually filtered down to only ids and values before batch processing with Giraph?
At least from Neo4j, you can use the CSV export of node and relationship ids to generate the data for Giraph.
You can use something like this:
http://neo4j.com/blog/export-csv-from-neo4j-curl-cypher-jq/
and you can use neo4j-import to import that CSV data, or LOAD CSV for more advanced structural updates.
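As an illustration, the kind of query behind such an export could look like the following (a sketch; timeCreated is a placeholder property name):

// one row per relationship: numeric source/target ids plus the edge attributes to keep
MATCH (a)-[r]->(b)
RETURN id(a) AS sourceId, id(b) AS targetId, type(r) AS edgeType, r.timeCreated AS timeCreated;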
I'm new to ontologies and I'm using Protégé to create my own. I have imported the Dublin Core ontology (http://purl.org/dc/elements/1.1) into my own ontology using Protégé, but only annotations were included. Upon reading the Dublin Core documentation, there were classes and properties defined. How will I include these classes and properties in my ontology with the use of Protégé?
I think your problem is that you haven't looked inside the files. Dublin Core has many files. The one that you are referring to (elements) only contains annotations. You need to find the appropriate file. The DC terms file contains some properties and classes. You need to read the documentation and then import the file with the modelling elements that you require.
How can I store references to Java objects in Neo4j?
Neo4j basically stores primitives like integers and strings. See the full reference of what can be stored in the API docs for setProperty.
I managed to store objects in Neo4j by first marshalling them to XML (I'm using XStream for that) and then unmarshalling when querying.
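In graph terms, the result of that approach is just a string property holding the serialized XML (a sketch with a made-up class name and payload):

// the xml property stores the XStream output as a plain string; Neo4j treats it as opaque text
CREATE (o:SerializedObject {className: 'com.example.Foo', xml: '<foo><bar>42</bar></foo>'});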