I have two database instances: one remote and one local. I want to dump the local database and merge it into the remote one (the structure of nodes & relationships is the same). How can I do it? Will a simple neo4j-admin dump & load work?
This is not possible with the dump & load commands.
To do what you want, you need to export your local database as CSV files and then run LOAD CSV queries using the MERGE clause, as sketched below.
To export your database as CSV, take a look at the APOC export procedures: https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_export_import
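A minimal sketch of that approach, assuming a hypothetical Person label keyed by a uuid property and an export file named local-people.csv (adjust the labels, property keys and file locations to your own model; APOC must also be allowed to write files in your configuration):

// On the local instance: export the relevant nodes to CSV
CALL apoc.export.csv.query(
  "MATCH (p:Person) RETURN p.uuid AS uuid, p.name AS name",
  "local-people.csv", {});

// On the remote instance: merge the exported rows so existing remote data is kept
LOAD CSV WITH HEADERS FROM 'file:///local-people.csv' AS row
MERGE (p:Person {uuid: row.uuid})
SET p.name = row.name;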
I have the data below in my Neo4j database, which I want to insert into a MySQL table using JDBC.
"{""id"":7512,""labels"":[""person1""],""properties"":{""person1"":""Nishant"",""group_uuid"":""6b27c9c8-4d5b-4ebc-b8c2-667bb159e029""}}"
"{""id"":7513,""labels"":[""person1""],""properties"":{""person1"":""anish"",""group_uuid"":""6b27c9c8-4d5b-4ebc-b8c2-667bb159e029""}}"
"{""id"":7519,""labels"":[""person1""],""properties"":{""person1"":""nishant"",""group_uuid"":""6b27c9c8-4d5b-4ebc-b8c2-667bb159e029""}}"
"{""id"":7520,""labels"":[""person1""],""properties"":{""person1"":""xiaoyi"",""group_uuid"":""9d7d4bf6-6db6-4cf2-8186-d8d0621a58c5""}}"
"{""id"":7521,""labels"":[""person1""],""properties"":{""person1"":""pavan"",""group_uuid"":""3ddc954a-16f5-4c59-a94a-b262f9784211""}}"
"{""id"":7522,""labels"":[""person1""],""properties"":{""person1"":""jose"",""group_uuid"":""6b27c9c8-4d5b-4ebc-b8c2-667bb159e029""}}"
"{""id"":7523,""labels"":[""person1""],""properties"":{""person1"":""neil"",""group_uuid"":""9d7d4bf6-6db6-4cf2-8186-d8d0621a58c5""}}"
"{""id"":7524,""labels"":[""person1""],""properties"":{""person1"":""menish"",""group_uuid"":""9d7d4bf6-6db6-4cf2-8186-d8d0621a58c5""}}"
"{""id"":7525,""labels"":[""person1""],""properties"":{""person1"":""ankur"",""group_uuid"":""3ddc954a-16f5-4c59-a94a-b262f9784211""}}"
Desired output in the MySQL database table:
id,name,group_id
7525,ankur,3ddc954a-16f5-4c59-a94a-b262f9784211
7524,menish,9d7d4bf6-6db6-4cf2-8186-d8d0621a58c5
...
Since you did not provide much info in your question, here is a general approach for exporting from Neo4j to MySQL.
Execute a Cypher query using one of the APOC export-to-CSV procedures to export the data intended for the table to a CSV file (a sketch of this step follows the two steps below).
Import from the CSV file into MySQL. (E.g., here is a tutorial.)
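As an example of the export step, something along these lines should produce a CSV with exactly the id, name and group_id columns shown above, assuming the data really carries the person1 label and the person1/group_uuid properties (the file name person1.csv is only illustrative, and file export must be enabled for APOC):

// Export one row per person1 node, with the columns the MySQL table expects
CALL apoc.export.csv.query(
  "MATCH (p:person1) RETURN id(p) AS id, p.person1 AS name, p.group_uuid AS group_id",
  "person1.csv", {});

The resulting file can then be loaded into MySQL with its usual CSV import tooling, as in the linked tutorial.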
I have two cases:
Case 1: export part of the data in Neo4j database A to database B, e.g. the data with the label "Person" in database A; I want to export that "Person" data from A to B.
Case 2: export the whole of the data from A to B.
How should I deal with these two cases? Thanks.
APOC allows you to export the full graph or subgraphs into a Cypher file consisting of CREATE statements; see https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_export_to_cypher_script for details.
The other option would be to access the other database via the Neo4j JDBC driver and use apoc.load.jdbc to retrieve data from there.
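For the first option, a sketch of both cases using the APOC export-to-Cypher procedures (the file names are only examples, and writing files must be enabled in the APOC configuration):

// Case 1: export only the Person subgraph from database A
CALL apoc.export.cypher.query(
  "MATCH (p:Person) OPTIONAL MATCH (p)-[r]-(q:Person) RETURN p, r, q",
  "person-subgraph.cypher", {});

// Case 2: export the whole of database A
CALL apoc.export.cypher.all("whole-graph.cypher", {});

The generated .cypher files can then be replayed against database B (for example with neo4j-shell or cypher-shell, depending on the chosen export format).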
I have several separate, independent structures in the database. I need to back up each of these structures separately rather than doing a full backup of everything.
I am interested in whether there is a way to back up some specific part of the graph. I checked which backup strategies exist in the Neo4j documentation. There are incremental backups and full backups, but I could not find a way to extract and back up only some part of the graph, or some independent graph structure in the database.
Ideally I would define a Cypher query and get its result as the backup. For example, in most relational databases it is possible to extract/back up a separate table or dataset (depending on the database), so that is something I am looking to do in Neo4j too: define a node label, or some other criterion, and then do a backup.
You can use the experimental dump command along with the shell:
Example: dumping the User nodes to a users.cypher file that will contain all the Cypher statements for recreating the users later:
./bin/neo4j-shell -c 'dump MATCH (n:User) RETURN n;' > users.cypher
Related info in the documentation: http://neo4j.com/docs/stable/shell-commands.html#_dumping_the_database_or_cypher_statement_results
My database was affected by the bug in Neo4j 2.1.1 that tends to corrupt the database in areas where many nodes have been deleted. It turns out most of the relationships that were affected had been marked for deletion in my database. I have dumped the rest of the data using neo4j-shell with a single query. This gives a 1.5 GB Cypher file that I need to import into a mint database to get my data back into a healthy structure.
I have noticed that the dump file contains definitions for (1) schema, (2) nodes and (3) relationships. I have already removed the schema definitions from the file because they can be applied later on. Now the issue is that since the dump file uses a single series of identifiers for nodes during node creation (in the following format: _nodeid) and relationship creation, it seems that all CREATE statements (33,160,527 in my case) need to be run in a single transaction.
My first attempt to do so kept the server busy for 36 hours without results. I had neo4j-shell load the data directly into a new database directory instead of connecting to a server. The data files in the new database directory never showed any sign of receiving data, and the message log showed many messages indicating thread blocks.
I wonder what is the best way of getting this data back into the database? Should I load a specific config file? Do I need to allocate a large Java heap? What is the trick to have such a large dump file loaded into a database?
The dump command is not meant for larger-scale exports; there was originally a version that could handle them, but it was not included in the product.
If you still have the old database around, you can try a few things:
Contact Neo4j support to help you recover your data.
Use my store-utils to copy it over to a new db (it will skip all broken records).
Query the data with Cypher and export the results as CSV; you could use the shell import tools for that.
Then import your data from the CSV using either the shell tools again, the LOAD CSV command, or the batch importer (a LOAD CSV sketch follows this list).
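For the LOAD CSV route, here is a minimal sketch of the re-import step, assuming a hypothetical nodes.csv with id and name columns; USING PERIODIC COMMIT splits the work into smaller transactions, which also avoids the single-huge-transaction problem described above:

// Commit every 10,000 rows instead of importing everything in one transaction
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///nodes.csv' AS row
CREATE (n:User {id: row.id, name: row.name});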
Here is what I finally did:
First I identified all unaffected nodes and marked them with one specific label (let's say Carriable). This was a pretty easy process in my case because all the affected nodes had the same label, so I just excluded that specific label. I did not have to identify the affected relationships separately, because all the affected relationships were also connected to nodes with the affected label.
Then I exported the whole database except the affected nodes and relationships to GraphML using a single query (in neo4j-shell):
export-graphml -o /home/mah/full.gml -t -r match (n:Carriable) optional match (n)-[i]-(:Carriable) return n,i
This took about a half hour to yield a 4GB XML file.
Then I imported the entire GraphML back into a mint database:
JAVA_OPTS="-Xmx8G" neo4j-shell -c "import-graphml -c -t -b 10000 -i /home/mah/full.gml" -path /db/newneo
This took yet another half hour to accomplish.
Please note that I allocated more than sufficient Java heap memory (JAVA_OPTS="-Xmx8G"), imposed a particularly small batch size (-b 10000) and allowed the use of on-disk caching.
Finally, I removed the unnecessary "Carriable" label and recreated the constraints.
Everyone familiar with MySQL has likely used the mysqldump command, which can generate a file of SQL statements representing both the schema and data in a MySQL database.
These SQL text files are commonly used for many purposes: backups, seeding replicas, copying databases between installations (e.g., copying prod DBs to staging environments), and so on.
Is there a similar tool for Neo4j that can dump an entire graph into a text file of Cypher statements, that when executed on an empty database would reconstruct the original data?
Thanks.
In Neo4j version 2 (e.g. 2.0.0M3), using neo4j-shell, you can use the command
dump
which will create the Cypher statements (pretty much like mysqldump would do). To read the file back in, you can use
cat dump.cql | neo4j-shell
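To produce the dump.cql file in the first place, the dump command can also be run non-interactively and redirected to a file, along the lines of the example earlier in this thread (the file name is arbitrary):

./bin/neo4j-shell -c 'dump' > dump.cql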
Cypher is just a query language for Neo4j, just as SQL is for MySQL or other relational databases. If you wish to transfer the db, you just need to copy the folder containing the database files. Simple.
For example, my folder simple-graph contains all the db files. Just copy the folder and store it at some other location. You can directly start using it as:
GraphDatabaseService graphDb = new EmbeddedGraphDatabase(DB_PATH); // DB_PATH is the path to the new location
You can use the procedure apoc.export.cypher.all() to dump all the data in your database.
For example, you can dump the database into a single file called dump-file.cypher:
neo4j#neo4j> CALL apoc.export.cypher.all('dump-file.cypher');
For details of the procedure, please see the documentation: https://neo4j.com/labs/apoc/4.4/overview/apoc.export/apoc.export.cypher.all/.
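Note that in most APOC versions writing files to disk has to be enabled first (apoc.export.file.enabled=true in the APOC/Neo4j configuration). Once written, the dump file can be replayed into an empty database, for example by piping it through cypher-shell (the exact invocation depends on your setup and the export format used; the password below is a placeholder):

cat dump-file.cypher | bin/cypher-shell -u neo4j -p <your-password>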