I have a droplet on DigitalOcean with Neo4j installed and running. The result of service neo4j status is:
Loaded: loaded (/etc/init.d/neo4j; bad; vendor preset: enabled)
Active: active (exited) since Wed 2017-04-19 10:35:43 UTC; 21h ago
Docs: man:systemd-sysv-generator(8)
Process: 725 ExecStop=/etc/init.d/neo4j stop (code=exited, status=0/SUCCESS)
Process: 806 ExecStart=/etc/init.d/neo4j start (code=exited, status=0/SUCCESS)
Tasks: 0
Memory: 0B
CPU: 0
But when I run the Python code I get:
py2neo.packages.httpstream.http.SocketError: Connection refused
I have authentication disabled.
In the Graph call I explicitly referred to the db:
G=Graph('http://localhost:7474/db/data')
which results in the following error:
File "/usr/local/lib/python3.5/dist-packages/py2neo/packages/neo4j/v1/bolt.py", line 156, in _recv
raise ProtocolError("Server closed connection")
py2neo.packages.neo4j.v1.exceptions.ProtocolError: Server closed connection
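One way to narrow this down is to check whether anything is actually listening before involving py2neo at all; a plain socket probe distinguishes "server not listening" from an auth or driver problem. A minimal sketch (7474/7687 are Neo4j's default HTTP and Bolt ports, not verified against your config):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) otherwise
        return s.connect_ex((host, port)) == 0

# Neo4j's default HTTP and Bolt ports
for port in (7474, 7687):
    print(port, "open" if is_listening("localhost", port) else "closed/refused")
```

If both ports report closed, the daemon is not actually running despite the "active (exited)" status: for an init.d-generated unit that status only means the start script returned 0, not that the Java process is still alive (note Tasks: 0 and Memory: 0B in your output). Check the Neo4j logs or run the server in the foreground to see why it died.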
I just moved the Docker default storage location to a second disk by setting up /etc/docker/daemon.json as described in the documentation; so far so good.
The problem is that now I keep getting a bunch of volumes continuously being (re)mounted, and obviously it is really annoying.
So I tried to set up overlay2 in /etc/docker/daemon.json, but now Docker doesn't even start:
# sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
# systemctl status docker.service
× docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-12-15 11:06:36 CET; 10s ago
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 17614 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status>
Main PID: 17614 (code=exited, status=1/FAILURE)
CPU: 54ms
Dec 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Dec 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Stopped Docker Application Container Engine.
Dec 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Start request repeated too quickly.
Dec 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 15 11:06:36 sgratani-OptiPlex-7060 systemd[1]: Failed to start Docker Application Container Engine.
So for now I am giving up on overlay2, since having all the Docker images and containers on the second disk is more important than getting rid of a bunch of volumes being mounted continuously. But can anyone tell me where the problem is and whether there is a solution?
Update #1: strange permissions behaviour problem
Got a simple docker-compose.yml with a WordPress service (the official WP image) and a database service. When the Docker storage location is on the second disk instead of the default one, the database (its volume, maybe?) seems inaccessible:
wordpress keeps giving an error on the db connection
trying to run mysql interactively from the db service results in a login error with the root user
Obviously this is related to the Docker storage location, but I cannot find why, since the new location is created by Docker itself when it starts.
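For reference, moving the storage location normally comes down to the data-root key; a minimal /etc/docker/daemon.json combining it with overlay2 might look like this (the path /mnt/disk2/docker is illustrative, standing in for your actual second-disk mount point):

```json
{
  "data-root": "/mnt/disk2/docker",
  "storage-driver": "overlay2"
}
```

Note that overlay2 requires the backing filesystem of the new data-root to be ext4 or xfs with d_type support; if the second disk is formatted with something else, dockerd will refuse to start, which would explain a failure like the one above. The exact reason is printed by journalctl -u docker.service.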
I want to create a situation where Docker runs on a Raspberry Pi 4 while the SD card is read-only, using the overlay fs.
A database runs in the Docker container; the database's data is written to a USB stick (volume mapping).
When the overlayfs is activated (after a reboot, enabled via "sudo raspi-config"), Docker will not start up any more.
I followed the steps on https://docs.docker.com/storage/storagedriver/overlayfs-driver/
System information:
Linux raspberrypi 5.10.63-v8+ #1488 SMP PREEMPT Thu Nov 18 16:16:16 GMT 2021 aarch64 GNU/Linux
Docker information:
pi@raspberrypi:~ $ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 20.10.11
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
………
Status docker after restart:
pi@raspberrypi:~ $ sudo systemctl status docker.*
Warning: The unit file, source configuration file or drop-ins of docker.service changed on disk. Run 'systemctl daemon-reload' to reload units.
● docker.socket - Docker Socket for the API
Loaded: loaded (/lib/systemd/system/docker.socket; enabled; vendor preset: enabled)
Active: failed (Result: service-start-limit-hit) since Thu 2021-12-09 14:30:43 GMT; 1h 13min ago
Triggers: ● docker.service
Listen: /run/docker.sock (Stream)
CPU: 2ms
Dec 09 14:30:36 raspberrypi systemd[1]: Starting Docker Socket for the API.
Dec 09 14:30:36 raspberrypi systemd[1]: Listening on Docker Socket for the API.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-12-09 14:30:43 GMT; 1h 13min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 992 (code=exited, status=1/FAILURE)
CPU: 162ms
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Dec 09 14:30:43 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Start request repeated too quickly.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 09 14:30:43 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
Running the command given in docker.service with the additional overlay storage-driver flag:
pi@raspberrypi:~ $ sudo /usr/bin/dockerd --storage-driver=overlay -H fd:// --containerd=/run/containerd/containerd.sock
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay, from file: overlay2)
pi@raspberrypi:~ $ sudo /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
INFO[2021-12-09T14:34:31.667296985Z] Starting up
failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
Which steps am I missing to be able to run Docker with overlay fs, such that the SD-card in the Raspberry is read only?
Without the overlay fs active it all works as expected.
I ran into this issue as well and found a way around it. In summary, you can't run the default Docker FS driver (overlay2) on overlayfs. Fortunately, Docker supports other storage drivers, including fuse-overlayfs. Switching to this driver resolves the issue but there's one final catch. When Docker starts, it attempts to rename /var/lib/docker/runtimes and since overlayfs doesn't support renames of directories already in lower layers, it fails. If you simply rm -rf this directory while Docker is stopped and before you enable RPi's overlayfs, everything should work.
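The switch described above can be sketched as the following sequence (paths are the Docker defaults; the fuse-overlayfs package name is the Debian/Raspberry Pi OS one, so verify it for your distro before running anything):

```shell
# 1. Install the FUSE implementation of overlayfs
sudo apt-get install -y fuse-overlayfs

# 2. Stop Docker and select the fuse-overlayfs storage driver
sudo systemctl stop docker docker.socket
echo '{ "storage-driver": "fuse-overlayfs" }' | sudo tee /etc/docker/daemon.json

# 3. Remove the directory that overlayfs cannot rename (Docker recreates it)
sudo rm -rf /var/lib/docker/runtimes

# 4. Start Docker again, then enable the read-only overlay via raspi-config and reboot
sudo systemctl start docker
docker info --format '{{ .Driver }}'
```

Changing the storage driver makes existing images and containers invisible to Docker (they stay under the old driver's directory), so do this before building up state you care about, or re-pull afterwards.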
I cannot get memcached to run on my server.
This is what I tried so far:
% sudo systemctl start memcached # no output
% sudo systemctl status memcached.service
● memcached.service - memcached daemon
Loaded: loaded (/lib/systemd/system/memcached.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-02-16 17:45:09 CET; 4s ago
Process: 22725 ExecStart=/usr/share/memcached/scripts/systemd-memcached-wrapper /etc/memcached.conf (code=exited, status=71)
Main PID: 22725 (code=exited, status=71)
systemd[1]: Started memcached daemon.
systemd-memcached-wrapper[22725]: bind(): Cannot assign requested address
systemd-memcached-wrapper[22725]: failed to listen on TCP port 11211: Cannot assign requested address
systemd[1]: memcached.service: Main process exited, code=exited, status=71/n/a
systemd[1]: memcached.service: Unit entered failed state.
systemd[1]: memcached.service: Failed with result 'exit-code'.
I am running Ubuntu 16.04.6 LTS
How can I start my memcached service?
Have a look at /etc/memcached.conf; there might be a line like
-l xxx.xx.xx.xx
If you are trying to connect via localhost, just comment out that line.
If you are trying to connect from somewhere else, check the IP address for correctness.
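For context, the relevant fragment of /etc/memcached.conf looks roughly like this (the address is illustrative):

```
# -l <addr> tells memcached which interface to bind to.
# If <addr> is not assigned to any interface on this machine,
# bind() fails with "Cannot assign requested address" -- exactly
# the error shown in the systemd log above.
-l 127.0.0.1
```

After editing the file, apply the change with sudo systemctl restart memcached.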
Suddenly my Docker daemon stopped and never came back up. I'm running Docker on Linux raspberrypi 4.1.13-v7+. It worked until last week, when my Docker service suddenly stopped working, and I don't have a clue why.
My docker version is:
raspberrypi:~ $ docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.4.3
Git commit: 20f81dd
Built: Thu Mar 10 22:23:48 2016
OS/Arch: linux/arm
An error occurred trying to connect: Get http:///var/run/docker.sock/v1.22/version: read unix /var/run/docker.sock: connection reset by peer
Socket is ok:
● docker.socket - Docker Socket for the API
Loaded: loaded (/lib/systemd/system/docker.socket; disabled)
Active: active (listening) since Sat 2018-03-17 00:42:46 UTC; 6s ago
Listen: /var/run/docker.sock (Stream)
Looking at my service status, you can see the following log:
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: failed (Result: start-limit) since Sat 2018-03-17 00:05:52 UTC; 4min 55s ago
Docs: https://docs.docker.com
Process: 2891 ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
Main PID: 2891 (code=exited, status=1/FAILURE)
Mar 17 00:05:52 raspberrypi docker[2891]: time="2018-03-17T00:05:52.743474604Z" level=debug msg="ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.1)"
Mar 17 00:05:52 raspberrypi docker[2891]: time="2018-03-17T00:05:52.758090386Z" level=debug msg="ReleasePool(LocalDefault/172.17.0.0/16)"
Mar 17 00:05:52 raspberrypi docker[2891]: time="2018-03-17T00:05:52.772819345Z" level=debug msg="Cleaning up old shm/mqueue mounts: start."
Mar 17 00:05:52 raspberrypi docker[2891]: time="2018-03-17T00:05:52.773269239Z" level=fatal msg="Error starting daemon: Error initializing network controller: Error creating default \"bridge\" network: package not installed"
Mar 17 00:05:52 raspberrypi systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 17 00:05:52 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
I already tried this solution, but it didn't work for me.
How can I make my Docker service start again? It seems that a package is not installed, but when I try to load the bridge module:
raspberrypi:~ $ modprobe bridge
modprobe: FATAL: Module bridge not found.
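"Module bridge not found" on a Raspberry Pi usually means the running kernel no longer matches the module trees installed on disk (for example after a kernel upgrade without a reboot). A quick check, assuming nothing about your particular setup:

```shell
# The running kernel version...
uname -r
# ...should have a matching directory under /lib/modules,
# containing the bridge module that Docker needs
ls /lib/modules/ || echo "no module trees installed"
```

If the version printed by uname -r has no matching directory, reboot into the new kernel (or reinstall the matching kernel-module package) and then start Docker again.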
I have the following issue installing and provisioning my Kubernetes CoreOS-libvirt-based cluster.
When I log in on the master node, I see the following:
ssh core@192.168.10.1
Last login: Thu Dec 10 17:19:21 2015 from 192.168.10.254
CoreOS alpha (884.0.0)
Update Strategy: No Reboots
Failed Units: 1
kube-addons.service
To debug it, I run the following and receive:
core#kubernetes-master ~ $ systemctl status kube-addons.service
● kube-addons.service - Kubernetes addons
Loaded: loaded (/etc/systemd/system/kube-addons.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2015-12-10 16:41:06 UTC; 41min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 801 ExecStart=/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons (code=exited, status=1/FAILURE)
Process: 797 ExecStartPre=/bin/sleep 10 (code=exited, status=0/SUCCESS)
Process: 748 ExecStartPre=/bin/bash -c while [[ "$(curl -s http://127.0.0.1:8080/healthz)" != "ok" ]]; do sleep 1; done (code=exited, status=0/SUCCESS)
Main PID: 801 (code=exited, status=1/FAILURE)
Dec 10 16:40:53 kubernetes-master systemd[1]: Starting Kubernetes addons...
Dec 10 16:41:06 kubernetes-master kubectl[801]: replicationcontroller "skydns" created
Dec 10 16:41:06 kubernetes-master kubectl[801]: error validating "/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 16:41:06 kubernetes-master systemd[1]: Failed to start Kubernetes addons.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Unit entered failed state.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Failed with result 'exit-code'.
My etcd version is:
etcd --version
etcd version 0.4.9
But I also have etcd2:
etcd2 --version
etcd Version: 2.2.2
Git SHA: b4bddf6
Go Version: go1.4.3
Go OS/Arch: linux/amd64
And at the current moment it is the second one that is running:
ps aux | grep etcd
etcd 731 0.5 8.4 329788 42436 ? Ssl 16:40 0:16 /usr/bin/etcd2
root 874 0.4 7.4 59876 37804 ? Ssl 17:19 0:02 /opt/kubernetes/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd-servers=http://127.0.0.1:2379 --kubelet-port=10250 --service-cluster-ip-range=10.11.0.0/16
core 953 0.0 0.1 6740 876 pts/0 S+ 17:27 0:00 grep --colour=auto etcd
What causes the issue and how can I solve it?
Thank you.
The relevant log line is:
/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
You should fix whatever is invalid about that field, or turn validation off with --validate=false.
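In this particular case the field name itself is the problem: portalIP is the pre-v1 name that was renamed to clusterIP in the v1 API, so the manifest at /opt/kubernetes/addons/skydns-svc.yaml needs updating. A sketch of the relevant part (service name, namespace, and IP are illustrative, taken from typical skydns manifests rather than from your file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.11.0.10   # was: portalIP (pre-v1 field name)
  ports:
  - port: 53
    protocol: UDP
  selector:
    k8s-app: kube-dns
```

The chosen clusterIP must fall inside the --service-cluster-ip-range passed to kube-apiserver (10.11.0.0/16 in the ps output above).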