neo4j-shell cannot connect to Neo4j server

I'm using the Docker version of Neo4j (v3.1.0) and I'm having difficulties connecting to the Neo4j server using neo4j-shell.
After running an instance of the neo4j:3.1.0 Docker image, I run a bash inside the container:
$ docker exec -it neo4j /bin/bash
And from there I try to run the neo4j-shell like this:
/var/lib/neo4j/bin/neo4j-shell
But it errors:
$ /var/lib/neo4j/bin/neo4j-shell
ERROR (-v for expanded information):
Connection refused
-host Domain name or IP of host to connect to (default: localhost)
-port Port of host to connect to (default: 1337)
-name RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid Process ID to connect to
-c Command line to execute. After executing it the shell exits
-file File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly Connect in readonly mode (only for connecting with -path)
-path Points to a neo4j db path so that a local server can be started there
-config Points to a config file when starting a local server
Example arguments for remote:
-port 1337
-host 192.168.1.234 -port 1337 -name shell
-host localhost -readonly
...or no arguments for default values
Example arguments for local:
-path /path/to/db
-path /path/to/db -config /path/to/neo4j.config
-path /path/to/db -readonly
I also tried other hosts: localhost, 127.0.0.1, and 172.17.0.6 (the container IP). Since that didn't work, I listed the open ports in my container:
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::7687 :::* LISTEN
tcp 0 0 :::7473 :::* LISTEN
tcp 0 0 :::7474 :::* LISTEN
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
As you can see, port 1337 is not open! I've looked into the config file, and the line specifying the port is commented out, which means it should fall back to its default value (1337).
Can anyone help me connect to neo4j using neo4j-shell?
BTW, the neo4j server is up and running and I can use its web access through port :7474.

In 3.1 it seems the shell is not enabled by default.
You will need to pass your own configuration file with the shell enabled. In neo4j.conf, uncomment:
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
(I find the amount of work needed to change one value in Docker quite heavy, but yeah...)
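With the official Docker image, an arguably lighter option is to set that property through the image's environment-variable convention instead of shipping a whole config file. A minimal sketch, assuming the image maps NEO4J_dbms_shell_enabled to dbms.shell.enabled (check the image documentation for your version):
# start the container with the remote shell enabled (env-var name is an assumption)
docker run -d --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -e NEO4J_dbms_shell_enabled=true \
    neo4j:3.1.0
# then connect from inside the container
docker exec -it neo4j /var/lib/neo4j/bin/neo4j-shell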
Or use the new cypher-shell:
ikwattro@graphaware-team ~> docker ps -a | grep 'neo4j'
34b3c6718504 neo4j:3.1.0 "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 7473-7474/tcp, 7687/tcp compassionate_easley
2395bd0b1fe9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (143) 3 minutes ago cranky_goldstine
949feacbc0f9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (130) 5 minutes ago modest_boyd
c38572b078de neo4j:3.0.6-enterprise "/docker-entrypoint.s" 6 weeks ago Exited (0) 6 weeks ago fastfishpim_neo4j_1
ikwattro@graphaware-team ~> docker exec --interactive --tty compassionate_easley bin/cypher-shell
username: neo4j
password: *****
Connected to Neo4j 3.1.0 at bolt://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j>
NB: cypher-shell supports begin and commit:
neo4j> :begin
neo4j# create (n:Node);
Added 1 nodes, Added 1 labels
neo4j# :commit;
neo4j>
-
neo4j> :begin
neo4j# create (n:Person {name:"John"});
Added 1 nodes, Set 1 properties, Added 1 labels
neo4j# :rollback
neo4j> :commit
There is no open transaction to commit
neo4j>
http://neo4j.com/docs/operations-manual/current/tools/cypher-shell/
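For scripted use, cypher-shell also accepts credentials as flags and a query on stdin; roughly something like the following (a sketch, with the password as a placeholder; check bin/cypher-shell --help on your version for the exact flags):
echo "MATCH (n) RETURN count(n);" | \
    docker exec -i compassionate_easley bin/cypher-shell -u neo4j -p <your-password>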

Related

Docker | Bind for 0.0.0.0:80 failed | Port is already allocated

I've been trying all the existing commands for several hours and could not fix this problem.
I used everything covered in this article: Docker - Bind for 0.0.0.0:4000 failed: port is already allocated.
I currently have one container (from docker ps -a), while docker ps itself is empty:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ebb9289dfd1 dockware/dev:latest "/bin/bash /entrypoi…" 2 minutes ago Created TheGoodPartDocker
When I try docker-compose up -d I get the error:
ERROR: for TheGoodPartDocker Cannot start service shop: driver failed programming external connectivity on endpoint TheGoodPartDocker (3b59ebe9366bf1c4a848670c0812935def49656a88fa95be5c4a4be0d7d6f5e6): Bind for 0.0.0.0:80 failed: port is already allocated
I've tried to remove everything using: docker ps -aq | xargs docker stop | xargs docker rm
Or remove ports: fuser -k 80/tcp
even deleting networks:
sudo service docker stop
sudo rm -f /var/lib/docker/network/files/local-kv.db
or just manually shutting down, stopping, and removing the container:
docker-compose down
docker stop 5ebb9289dfd1
docker rm 5ebb9289dfd1
Here is also my netstat output (netstat | grep 80):
unix 3 [ ] STREAM CONNECTED 20680 /mnt/wslg/PulseAudioRDPSink
unix 3 [ ] STREAM CONNECTED 18044
unix 3 [ ] STREAM CONNECTED 32780
unix 3 [ ] STREAM CONNECTED 17805 /run/guest-services/procd.sock
And docker port TheGoodPartDocker gives me no result.
I also restarted my computer, but nothing works :(.
Thanks for helping
Obviously port 80 is already occupied by some other process. You need to stop that process before you start the container. To find the process, use ss:
$ ss -tulpn | grep 22
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1187,fd=3))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1187,fd=4))
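For the question's case the same check would target port 80; a sketch (the grep pattern may need adjusting to your ss output):
sudo ss -tulpn | grep ':80 '
# then stop the owning process or service before running docker-compose up again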

Error accessing Scylladb cluster outside docker container

I'm running Scylladb locally in a docker container and I want to access the cluster outside the docker container. That's when I'm getting the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my fastapi app that is outside the docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to Scylla running in a container from outside the container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of new parameters [src]:
--listen-address 0.0.0.0 for simplicity, so that Scylla inside the container accepts connections from any network
--broadcast-rpc-address 127.0.0.1, which is required if --listen-address is set to 0.0.0.0. We are going to forward port 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 makes port 9042 accessible on the host (local) machine.
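With that mapping in place, connectivity from the host can be sanity-checked before involving the driver at all, assuming cqlsh is installed locally:
cqlsh 127.0.0.1 9042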
Install the driver with pip3 install scylla-driver, as it has support for the darwin/arm64 architecture.
Write a simple python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

Failed to connect to http://localhost:8086, Please check your connection settings and ensure 'influxd' is running

I searched online but I don't see the solution. I have influx installed (InfluxDB shell version: v1.6.2), but it throws this error:
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection refused
Please check your connection settings and ensure 'influxd' is running.
Just a couple of things to check: make sure the service is running (use the service manager on your OS or the influxd command to check). Another test you can do is to use the actual machine IP address, http://<machine-ip>:8086, instead of localhost. It could be that access is restricted (iptables).
If none of that works, I would check out the discussion on this GitHub issue.
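For example, on a systemd-based install the check could look roughly like this (the service name and the /ping behaviour are assumptions for a typical 1.x setup):
sudo systemctl status influxdb        # is the daemon running?
curl -sI http://localhost:8086/ping   # a healthy 1.x server answers 204 No Content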
In my case on a Mac, I had to run influxd -config /usr/local/etc/influxdb.conf first before running influx.
I was facing the same challenge when I upgraded influxdb to 1.8.9, so I had to downgrade back to 1.8.5.
https://vibhubithar.medium.com/workaround-latest-version-of-influxdb-not-starting-on-raspberry-pi-buster-a8b5afa84fce
sudo apt update
sudo apt upgrade -y
wget https://s3.amazonaws.com/dl.influxdata.com/influxdb/releases/influxdb_1.8.5_armhf.deb
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb
sudo systemctl enable influxdb.service
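Note that the snippet above downloads the 1.8.5 package but does not show installing it; presumably a step along these lines belongs between the wget and the systemctl commands (hypothetical, adjust the filename to whatever wget fetched):
sudo dpkg -i influxdb_1.8.5_armhf.deb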
First, check whether the influxdb instance is running. If it is already running, you might need to kill the process:
ps -ef |grep influxdb
influxdb 5781 1 99 18:15 pts/0 00:00:22 /usr/bin/influxd -pidfile /var/run/influxdb/influxd.pid -config /etc/influxdb/influxdb.conf
pkill -f influxdb
Once the process is killed, there is a chance the port is still in use, which can be verified with the command shown below.
sudo netstat -tulpn | grep LISTEN |grep influx
root@db1:/usr/bin# sudo netstat -tulpn | grep LISTEN | grep influx
tcp 0 0 127.0.0.1:8088 0.0.0.0:* LISTEN 28558/influxd
tcp6 0 0 :::8086 :::* LISTEN 28558/influxd
root@db1:/usr/bin#
In the above example, kill process 28558 by issuing kill -9 28558.
Once the port is released, cd to the /etc/init.d directory and run the service as shown below.
root@jvision-db1:/etc/init.d# influx
The DB instance should come back, which can be verified with the ps -ef | grep influxdb command.
Also, cd to the /usr/bin directory and issue the command below to verify that the InfluxDB client is available.
root@db1:~# cd /usr/bin
root@db1:/usr/bin# ./influx
Connected to http://localhost:8086 version 1.7.9
InfluxDB shell version: 1.7.9
>
If your configuration contains:
bind-address = "10.0.0.32:8086"
then use:
$> influx -host 10.0.0.32
Connected to http://10.2.3.102:8086 version 1.8.10
InfluxDB shell version: 1.8.10
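For reference, that bind-address setting for the HTTP API lives in the [http] section of the InfluxDB 1.x configuration file (commonly /etc/influxdb/influxdb.conf, though the path can vary by install):
[http]
  bind-address = "10.0.0.32:8086"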

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
docker stop test-apache
docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And if the hard-coded port were the problem because it is unavailable, the error should look different; in that case, docker run should fail because the port can't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the IP address of the Docker host, keeping the port number 8083
Replacing localhost with the IP address of the Docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
apt-get update
apt-get install net-tools
host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway from the route output to get the IP address of the host. Note that this only works if the container's network default gateway actually is the host.
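A lighter variant that avoids installing net-tools is to read the default gateway from ip route, which is usually already available in the image (a sketch under the same assumption that the default gateway is the Docker host):
host=localhost
if grep -q '^1:name=systemd:/docker/' /proc/1/cgroup
then
    # the default route looks like "default via 172.17.0.1 dev eth0"
    host=$(ip route | awk '/^default/ {print $3}')
fi
curl -D - http://$host:8083/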
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first IP address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' nuance-apache))
host=${ips[0]}
curl http://$host/

Docker remote API doesn't restart after my computer restarts

Last week I struggled to make my Docker remote API work. As it is running on a VM, I had not restarted the VM since then. Today I finally restarted my VM and it is not working any more (docker and docker-compose work normally, but not the Docker remote API). My Docker init file, /etc/init/docker.conf, looks like this:
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
script
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
end script
# description "Docker daemon"
# start on (filesystem and net-device-up IFACE!=lo)
# stop on runlevel [!2345]
# limit nofile 524288 1048576
# limit nproc 524288 1048576
respawn
kill timeout 20
.....
.....
Last time I applied the settings indicated here.
I tried nmap to see if port 4243 is open.
ubuntu@ubuntu:~$ nmap 0.0.0.0 -p-
Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-12 23:49 CEST
Nmap scan report for 0.0.0.0
Host is up (0.000046s latency).
Not shown: 65531 closed ports
PORT STATE SERVICE
22/tcp open ssh
43978/tcp open unknown
44672/tcp open unknown
60366/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 1.11 seconds
As you can see, port 4243 is not open.
When I run:
ubuntu@ubuntu:~$ echo -e "GET /images/json HTTP/1.0\r\n" | nc -U
This is nc from the netcat-openbsd package. An alternative nc is available
in the netcat-traditional package.
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
I also ran this:
ubuntu@ubuntu:~$ sudo docker -H=tcp://0.0.0.0:4243 -d
flag provided but not defined: -d
See 'docker --help'.
I restarted my computer many times and tried a lot of things with no success.
I already have a group named docker, and my user is in it:
ubuntu@ubuntu:~$ groups $USER
ubuntu : ubuntu adm cdrom sudo dip plugdev lpadmin sambashare docker
Please tell me what is wrong.
Your startup script contains an invalid command:
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
Instead you need something like:
/usr/bin/docker daemon -H tcp://0.0.0.0:4243
As of 1.12, this is now (but docker daemon will still work):
/usr/bin/dockerd -H tcp://0.0.0.0:4243
Please note that this is opening a port that gives remote root access without any password to your docker host.
Anyone that wants to take over your machine can run docker run -v /:/target -H your.ip:4243 busybox /bin/sh to get a root shell with your filesystem mounted at /target. If you'd like to secure your host, follow this guide to setting up TLS certificates.
I finally found www.ivankrizsan.se and it is working fine now. Thanks to this guy (or girl) ;).
These settings work for me on Ubuntu 16.04. Here is how to do it:
Edit this file /lib/systemd/system/docker.service and replace the line ExecStart=/usr/bin/dockerd -H fd:// with
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:4243
Save the file
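On systemd-based systems, a reload of systemd is usually also needed after editing the unit file so that the change is picked up before the restart step below (a small addition, not part of the original answer):
sudo systemctl daemon-reload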
restart with: sudo service docker restart
Test with: curl http://localhost:4243/version
Result: you should see something like this:
{"Version":"1.11.0","ApiVersion":"1.23","GitCommit":"4dc5990","GoVersion" "go1.5.4","Os":"linux","Arch":"amd64","KernelVersion":"4.4.0-22-generic","BuildTime":"2016-04-13T18:38:59.968579007+00:00"}
Attention:
Remain aware that 0.0.0.0 is not good for security; for more security, you should use 127.0.0.1.
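For example, keeping the TCP socket on the loopback interface only, mirroring the ExecStart line above (a sketch; anything needing remote access would then have to tunnel in, e.g. over SSH):
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://127.0.0.1:4243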

Resources