neo4j-shell example of running a Cypher script

I need to run a Cypher query against a Neo4j database from the command line (for batch scheduling purposes).
When I run this:
./neo4j-shell -file /usr/share/neo4j/scripts/query.cypher -path /usr/share/neo4j/neo4j-community-3.1.1/data/databases/graph.db
I get this error:
ERROR (-v for expanded information):
Error starting org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory, /usr/share/neo4j/neo4j-community-3.1.1/data/databases/graph.db
There is a running Neo4j instance on that database (localhost:7474). I need the script to perform queries against it.
NOTE: this is a split of the original question, for the sake of tidiness.

To execute (one or more) Cypher statements from a file while the neo4j server is running, you can use the APOC procedure apoc.cypher.runFile(file or url).
Since you mention "batch scheduling", the Job management and periodic execution APOC procedures may be helpful. Those procedures could, in turn, execute calls to apoc.cypher.runFile.
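As a rough sketch (assuming APOC is installed and apoc.import.file.enabled=true is set in neo4j.conf; the user, password and file path are placeholders for your own values), the call can be sent to the running server via cypher-shell:
echo 'CALL apoc.cypher.runFile("/usr/share/neo4j/scripts/query.cypher");' | bin/cypher-shell -u neo4j -p yourpassword
If you want the server itself to re-run the script on a schedule rather than relying on an external scheduler, the same CALL could be wrapped in apoc.periodic.repeat.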

Okay, I just spun up a fresh instance of Neo4j-community-3.1.1 today and ran into the exact same problem. Note that I had already created a database using the bulk import tool; if you have not, you may need to make a directory for the database (mkdir data/databases/graph.db) before using the shell.
I believe your problem might be that you have an instance of the Neo4j process running against the database you are trying to access.
For me, shutting down Neo4j, and then starting the shell with an explicit path worked:
cd /path/to/neo4j-community-3.1.1/
bin/neo4j stop ## assuming it is already running (may need a port specifier)
bin/neo4j-shell -path data/databases/graph.db
For some reason I thought you could have both the shell and the server running, but apparently that is not the case. Hopefully someone will correct me if I am wrong.
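Once you are done in the shell, the server can be brought back up the usual way (same paths as above):
bin/neo4j start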

Related

Error 403: "Flux query service disabled." But flux-enabled=true has been set in influxdb.conf

I have been using InfluxDB (server version 1.7.5) with the InfluxQL language for some time now. Unfortunately, InfluxQL does not allow me to perform any form of joins, so I need to use InfluxDB's new scripting language Flux instead.
The manual states that I have to enable Flux in /etc/influxdb/influxdb.conf by setting flux-enabled=true which I have done. I restarted the server to make sure I got the new settings and started the Influx Command Line tool with "-type=flux".
I then do get a different user interface than when I use InfluxQL. So far so good. I can also set and read variables etc. So I can set:
> dummy = 1
> dummy
1
However, when I try any form of query against the data, such as: from(bucket:"db_OxyFlux-test/autogen")
I always get
Error: Flux query service disabled. Verify flux-enabled=true in the [http] section of the InfluxDB config.
: 403 Forbidden
I found the Flux manual rather lacking in basic details of schema exploration, so I am not sure whether this is just an issue with my query raising the error or whether something else is going wrong. I tested this both on my own home machine and on our remote work server and I get the same results.
I have the same issue and the solution is to start influxd with the -config option:
influxd -config /etc/influxdb/influxdb.conf
Re: Vilix
Thank you. This led me in the right direction.
I realised that InfluxDB does not automatically read the config file (which is not very intuitive). But your solution also forces me to start the daemon by hand each time. After some more googling I used:
"sudo influxd config -config /etc/influxdb/influxdb.conf"
So hopefully now the daemon will start automatically each time on startup rather than me having to do this by hand.
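For reference, the error text points at the [http] section, so the relevant part of /etc/influxdb/influxdb.conf should look roughly like this (a sketch of that section only; the rest of the file stays as shipped):
[http]
  flux-enabled = true
On a typical systemd-based package install, restarting the service (sudo systemctl restart influxdb) should be enough for the daemon to pick this up, since the packaged unit normally starts influxd with -config /etc/influxdb/influxdb.conf already.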

How to Load Cypher File into Neo4j

I have generated a cypher file and want to load it into neo4j.
The only relevant documentation I could find was about loading csv's.
I also tried the shell but it seems to have no effect
cypher-shell.bat -uneo4j -pne04j < db.cql
Copy paste into localhost:7474/browser makes the browser unresponsive.
In the current Neo4j version you can use Cypher Shell to achieve your goal.
From the docs, Invoke Cypher Shell with a Cypher script from the command line:
$ cat db.cql | bin/cypher-shell -u yourneo4juser -p yourpassword
Note that this example is based on a Linux installation. If you are using Neo4j on Windows, you will need to adjust this command to your needs.
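On Windows, a rough equivalent (assuming the default install layout) is to pipe the file through type, or to use the --file option that more recent cypher-shell versions provide:
type db.cql | bin\cypher-shell.bat -u yourneo4juser -p yourpassword
bin\cypher-shell.bat -u yourneo4juser -p yourpassword --file db.cql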
Not sure when this was added, but in the current version (4.4) an alternative way to load a Cypher file into Neo4j (via the GUI) is to drag and drop the file onto the Neo4j Browser window, which then offers two options: either add it to the favorites or paste it into the editor.

If neo4j-shell is deprecated then how do I dump the contents of the database (for backup)

I've just been looking into how to back up the database and have found that neo4j-shell -c dump > my-db-dumb.cql looks like a good solution; it exports everything as a Cypher script that recreates the data when run (a bit like mysqldump for MySQL).
However, according to the official documentation, neo4j-shell has been deprecated in favour of cypher-shell, and I can't find the equivalent dump function for cypher-shell. Is there one? If not, what should I do instead of neo4j-shell -c dump? Or is there a better way of backing up the database (I have the community edition)? One advantage of the above solution is that you don't have to stop the database.
The most useful option is to shut down the database and then take a backup using the new neo4j-admin command.
If you cannot shut down the database, you can manually copy the "graph.db" directory somewhere else, and then run neo4j-shell with the -path option pointing at the new location. As far as version 3.1.1 is concerned, neo4j-shell is working perfectly.
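As a sketch of the first option (paths are placeholders, and the database must be stopped first), the dump and a later restore would look something like:
bin/neo4j-admin dump --database=graph.db --to=/backups/graph.db.dump
bin/neo4j-admin load --from=/backups/graph.db.dump --database=graph.db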

Change database.location from command line

Is there a way to specify a Neo4j database location at command invocation time, instead of via a file? So instead of putting the following in neo4j-server.properties:
org.neo4j.server.database.location=/path/to/db
Something like:
neo4j --db=/path/to/db
I'm still on Neo4j 2.1.6 but advice on any version is better than nothing.
My particular use case at this time is, my regular DB is having problems and I want to quickly spin up a blank database just to narrow down the problem to binaries or data (yes I've checked the log files!).
You could use sed to edit neo4j-server.properties before starting neo4j. Something like:
sed -i.bak s/org.neo4j.server.database.location=databases/org.neo4j.server.database.location=newdatabase/g neo4j-community-2.1.6/conf/neo4j-server.properties
You could create a simple script that takes the path as a parameter. So something like:
./start-neo.sh mytestdb where start-neo.sh is:
sed -i.bak "s/org.neo4j.server.database.location=databases/org.neo4j.server.database.location=$1/g" neo4j-community-2.1.6/conf/neo4j-server.properties
neo4j-community-2.1.6/bin/neo4j start
will set org.neo4j.server.database.location=mytestdb and then start Neo4j.
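For anyone on Neo4j 3.x, where neo4j-server.properties no longer exists, the equivalent setting is dbms.active_database in conf/neo4j.conf, and the same sed trick applies (untested sketch, which also handles the commented-out default line):
sed -i.bak "s/#*dbms.active_database=.*/dbms.active_database=$1/" conf/neo4j.conf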

how to swap solr core from shell

I have a Solr setup with two cores. I want to schedule a full import on the backend core (core1) frequently (e.g. every 5 minutes), and then swap it with the live, serving core (core0) from a shell command through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
This works fine. If I do a core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest updates instantly on search. But if I schedule this URL as a shell command, similar to the dataimport one, it doesn't do the swap.
Did you try with
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from the shell? Note that the quotes around the URL matter: without them the shell interprets the & characters as background-job separators, so the core and other parameters never reach Solr, which would also explain why the scheduled swap did nothing.
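If the goal is to drive both steps from a scheduler, a minimal (untested) sketch of a script that a cron job could call might look like this - the status check assumes the default DataImportHandler JSON response and may need adjusting for your Solr version:
#!/bin/sh
# trigger a full import on the backend core (core1)
curl -s "http://localhost:8080/solr/core1/dataimport?command=full-import" > /dev/null
sleep 5  # give DIH a moment to switch from idle to busy
# poll the DataImportHandler until it reports idle again
until curl -s "http://localhost:8080/solr/core1/dataimport?command=status&wt=json" | grep -q '"status":"idle"'; do
  sleep 10
done
# swap the freshly imported core with the serving core (core0)
curl -s "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0" > /dev/null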
There is a catch with SWAPs.
Apache Solr allows you to swap two cores around for non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, a core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to the name of the other core. Usually that property is used to give an alternative name to the core for when the directory naming conventions are not suitable.
The gotcha is - of course - that you still have two directories with right-looking names for the cores you see in the Admin UI. So, it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's - or even your own old - setup.
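To make that concrete: after a swap, the directory still named core1/ would contain a core.properties along the lines of (illustrative only):
name=core0
so the directory name and the core name shown in the Admin UI no longer match.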
