I want to copy my entire Neo4j graph (all nodes and relationships) created so far from my local machine (Windows) to a VM (Windows). Is there any way I can achieve this? Please advise.
You can create a dump of the database and restore that dump in the VM. Alternatively, you can use APOC to export the node and relationship data and import it in your VM. For more information about the APOC export and import procedures, take a look at the documentation: https://neo4j.com/labs/apoc/4.3/export/
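For the dump route, the commands look roughly like this (a sketch assuming Neo4j 4.x with the default database name neo4j; the paths are placeholders, and Neo4j must be stopped on each machine while its command runs):

```shell
# On the local machine: create the dump.
neo4j-admin dump --database=neo4j --to=C:\backups\neo4j.dump

# Copy neo4j.dump to the VM, then load it there:
neo4j-admin load --from=C:\backups\neo4j.dump --database=neo4j --force
```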
The easiest way:
Stop the Neo4j service on both the local machine and the VM.
Copy the /data/ folder, with all its contents, to the same location on the VM.
Start the Neo4j service on the VM.
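On Windows, the copy step might look like this (the service name and paths are assumptions; adjust them to your installation, and run the same stop/start on the VM):

```shell
net stop neo4j
robocopy "C:\neo4j\data" "\\VM-HOST\neo4j\data" /E
net start neo4j
```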
I have imported my data into a new Neo4j database (instead of the standard graph.db) using the import tool. I want to switch to this database in the Neo4j web client. I used the Neo4j Docker image with a /var/lib/neo4j volume, but I can't find the config file to change the active database; even after I mount the conf directory specifically, this file doesn't get generated.
How can I switch active Neo4j database in web client or neo4j shell?
Here is the command with which I created neo4j container:
docker run --publish=7474:7474 --publish=7687:7687 --volume=/var/lib/neo4j/import:/var/lib/neo4j/import --env=NEO4J_dbms_allow__upgrade='true' --env=NEO4J_dbms_security_allow__csv__import__from__file__urls='true' neo4j:latest
You cannot change the active database of a live Neo4j instance.
The Enterprise edition does allow some values to be changed without rebooting; the keys that can be changed this way are listed in the online documentation, but dbms.active_database is not one of them.
Instead, you have a few options.
You can mount a /conf directory
The conf directory can be filled with configuration files that completely override the default ones. They are not generated by Neo4j: you must take an entire neo4j.conf file and place it in the directory, which is then mounted into the container. You can change whatever values you need in that file.
After the mapped directory is updated with the new file, you will need to restart your container (or restart Neo4j from within the container).
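For example, a minimal neo4j.conf placed in the mounted /conf directory could set the active database (the database name here is an example):

```
dbms.active_database=newgraph.db
```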
You can set the active database with an environment variable
Similar to how you've passed in the other environment variables, you can pass in other configuration options. If your new database is called newgraph.db and it resides in the same directory as graph.db, you need only pass in --env=NEO4J_dbms_active__database=newgraph.db. If it resides in a different directory, specify that directory with --env=NEO4J_dbms_directories_data=/path/to/new/data/dir.
As these are passed as environment variables, changing them requires starting a new container.
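Putting this together with your original invocation, the full command might look like this (newgraph.db is an example name):

```shell
docker run --publish=7474:7474 --publish=7687:7687 \
  --volume=/var/lib/neo4j/import:/var/lib/neo4j/import \
  --env=NEO4J_dbms_active__database=newgraph.db \
  neo4j:latest
```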
You could also build your own image.
The final, and perhaps most drastic, option is to create your own image based on Neo4j's image, with all the changes you need baked in. Typically this is not required, but if you want to clean up your invocation of docker and not keep mapped configuration directories around, this is the way to go. It also ensures that anybody who has your custom image needs no additional configuration; whether this is desirable is up to you and your deployment architecture.
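A minimal Dockerfile for such a custom image might look like this (a sketch; the official image's entrypoint reads overrides from /conf, and the local neo4j.conf is assumed to be your edited configuration file):

```dockerfile
FROM neo4j:latest
# Bake the customized configuration into the image so no volume mount is needed.
COPY neo4j.conf /conf/neo4j.conf
```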
A friend of mine and I are trying to develop a CorDapp for a financial use case. I can run the cordapp-tutorial and the demos; however, they only run on localhost.
We would like to create two "real" nodes. If I understood correctly, we should build two Corda nodes, with my PC as one node server and his PC as another, but how can we actually connect them over the internet? On Slack I was told to enable dev mode, but how do you enable it?
We have a corda.jar and the nodea.conf, but the part I don't really understand from the documentation is:
"Each node server by default must have a node.conf file in the current working directory. After first execution of the node server there will be many other configuration and persistence files created in this workspace directory. The directory can be overridden by the --base-directory= command line argument."
What is meant by the working directory?
I've read this documentation: Corda Nodes
Thanks to all, I think I will be asking a lot of questions in the near future :D
In Corda 3.1, you can use the network bootstrapper to create a dev-mode network of nodes running on two separate machines as follows:
Create the nodes by following the instructions here (e.g. by using gradlew deployNodes)
Navigate to the folder where the nodes were created (e.g. build/nodes)
Open the node.conf file of each node and change the localhost part of its p2pAddress to the IP address of the machine where the node will be run (e.g. p2pAddress="10.18.0.166:10007")
After making these changes, we need to redistribute the updated nodeInfo files to each node, so that they have the updated IP addresses for each node. Use the network bootstrapper tool to automatically update the files and have them distributed to each node:
java -jar network-bootstrapper.jar kotlin-source/build/nodes
Move the node folders to their individual machines (e.g. using a USB key). It is important that none of the nodes - including the notary - end up on more than one machine. Each computer should also have a copy of runnodes and runnodes.bat.
For example, you may end up with the following layout:
Machine 1: Notary, PartyA, runnodes, runnodes.bat
Machine 2: PartyB, PartyC, runnodes, runnodes.bat
After starting each node, the nodes will be able to see one another and agree ledger updates among themselves.
Warning
The bootstrapper must be run after the node.conf files have been modified, but before the nodes are distributed across machines. Otherwise, the nodes will not have the updated IP addresses for each node and will not be able to communicate.
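As an example of the node.conf edit in step 3, PartyA's file would end up containing a line like this (the IP address and port are examples):

```
p2pAddress="10.18.0.166:10007"
```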
Each of the nodes will have a node.conf file. To enable devMode, add this line to the node.conf file:
devMode=true
I am trying to use the neo4j-import tool to bulk import CSV files into a new Neo4j database. I can perform the import successfully, but for the dockerized Neo4j to recognize the new database I need to restart the container. Is there any way to do this without restarting it? I have tried the Cypher CSV import, but for some reason it doesn't handle the large dataset (~76k relationships).
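For reference, a typical invocation of the import tool described above looks roughly like this (the paths, file names, and database name are placeholders):

```shell
neo4j-admin import --database=graph.db \
  --nodes=/import/nodes.csv \
  --relationships=/import/rels.csv
```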
I am running two Neo4j instances on one server, but on different ports. I changed the port in each neo4j.conf file, and both instances run properly.
Now, when I execute a Cypher query from neo4j-shell to create a node in the second instance, it creates the node in the first instance.
I have not configured any database path, assuming it would use the default database path, i.e. data\databases\graph.db.
Please help me find my mistake.
neo4j-shell has a few options, including -port. You can point it at the port your second instance uses, or review properties such as path, file, and config.
Run ./neo4j-shell -help to see them.
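For example, if the second instance's shell port was changed to 1338 in its neo4j.conf, connect to it explicitly (the port number is an example):

```shell
./neo4j-shell -port 1338
```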
In Docker, is it possible to mount part of the host's filesystem as read-only in a container, with any writes to it going to the COW/UFS layer? Below is the use case I am looking at.
1) We have a proprietary product that takes forever to install, with lots of manual intervention. However, once the base installation is complete, the core files are almost never changed, as the product allows node-level configuration to be placed in a separate directory that just references the install base. Of course, if we need to update the core files, that will be done on the host. The core installation takes up about 8 GB of file space on the host machine.
The host core installation may be virtualized (VMWare or VirtualBox).
2) The core installation also writes its metadata to a database, and each created node writes additional metadata to it. If the DB installation is on the host, can Docker run the DB process in a container and reference the DB binaries and data partition as read-only, but write its changes to the data partition on the container's layer?
If it helps here is a sample relationship I am looking at:
-> Host is a VirtualBox running CentOS, and has the installation of proprietary product and its database.
-> Container A1 will spawn a database process based on the existing database state (empty except for the metadata made during installation).
-> Container A2 will spawn a product process, create the product node using the database offered by A1, and run the build, test, and deploy routines.
I need to spawn multiple node+database pairs on demand for continuous integration. The setup above should allow me to bring up a container pair for each isolated node needed by our development team. Theoretically I could mount the product base directory read/write, but I think some operations write data to it (e.g. logs) that I would rather have happen on the product process's layer instead.
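A sketch of such a container pair, assuming the install base lives under /opt/product on the host (all paths, image names, and container names here are hypothetical). One caveat: writes inside a bind mount go straight to the host, not to the container's COW layer, so the protection comes from mounting the install base read-only and directing all writes (logs, node state, DB data) to paths outside that mount, i.e. to the container layer or a dedicated writable volume:

```shell
# Database container: DB binaries read-only, data on a writable named volume.
docker run -d --name nodedb \
  -v /opt/product/db:/opt/product/db:ro \
  -v nodedb-data:/var/lib/productdb \
  mycompany/productdb

# Product container: install base read-only; node config and logs land on
# the container layer (or another writable volume), never on the host copy.
docker run -d --name node1 --link nodedb \
  -v /opt/product/core:/opt/product/core:ro \
  mycompany/productnode
```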
Thanks.