I can't get the Neo4j service to start on Ubuntu 16.04. A while ago I was running Ubuntu 14.04 and was able to install Neo4j just fine. Then I removed it, time passed, and I've since upgraded to Ubuntu 16.04; now I want to check out Neo4j again and it isn't installing. I have Java and everything else it asks for. I'm vaguely aware that Ubuntu changed its service launcher (from Upstart to systemd), and I think that might be the cause, but I don't know enough about either Ubuntu or Neo4j to know where to start debugging.
Can someone point me to logs to look at, or fill in any holes in my knowledge to help me?
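For anyone hunting for the logs, a minimal sketch of the usual places to look (the unit name and log path assume the stock Debian/Ubuntu packaging):
journalctl -u neo4j --no-pager        # systemd journal entries for the neo4j unit
tail -n 100 /var/log/neo4j/neo4j.log  # Neo4j's own log file, assuming the Debian package layout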
UPDATE
michael@Acer:~$ systemctl status neo4j
● neo4j.service - LSB: Neo4j Graph Database server
Loaded: loaded (/etc/init.d/neo4j; bad; vendor preset: enabled)
Active: active (running) since Tue 2016-09-27 13:56:05 MDT; 3 days ago
Docs: man:systemd-sysv-generator(8)
Tasks: 37
Memory: 120.3M
CPU: 46min 31.410s
CGroup: /system.slice/neo4j.service
└─17663 /usr/bin/java -cp /var/lib/neo4j/plugins:/etc/neo4j:/usr/share/neo4j/lib/*:/var/lib/neo4j/plugins/ * -server -XX:+UseG1GC -XX:-OmitStackTraceInFastThr
Oct 01 11:44:53 Acer systemd[1]: Started LSB: Neo4j Graph Database server.
Oct 01 11:45:00 Acer systemd[1]: Started LSB: Neo4j Graph Database server.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
Here is the listening-socket info from netstat:
netstat -ntlp | grep 17663
tcp6 0 0 127.0.0.1:7473 :::* LISTEN 17663/java
tcp6 0 0 127.0.0.1:7474 :::* LISTEN 17663/java
tcp6 0 0 127.0.0.1:1337 :::* LISTEN 17663/java
tcp6 0 0 :::42787 :::* LISTEN 17663/java
tcp6 0 0 127.0.0.1:7687 :::* LISTEN 17663/java
I figured it out! I had to allow non-local access to Neo4j.
In previous versions of Neo4j, the default installation allowed remote connections. Since I've always installed this on a headless server, I just assumed that was how it worked. In the new Neo4j 3.0, this is turned off by default, and you have to go into neo4j.conf and uncomment the appropriate lines in the networking section to allow remote connections. I made the config changes, rebooted the machine just for good measure, and everything started to work.
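For reference, a sketch of the kind of neo4j.conf change involved (exact key names vary across 3.x releases, so check the comments in your own file):
# /etc/neo4j/neo4j.conf — uncomment so the server accepts non-local connections
dbms.connectors.default_listen_address=0.0.0.0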
Please follow the steps below on your VPS, and the issue may reproduce. Replace the variable $vps_ip with your real VPS IP throughout.
wget https://saimei.ftp.acc.umu.se/debian-cd/current/amd64/iso-cd/debian-10.4.0-amd64-netinst.iso
transmission-create -o debian.torrent debian-10.4.0-amd64-netinst.iso
Create a trackerless torrent and show info on it:
transmission-show debian.torrent
Name: debian-10.4.0-amd64-netinst.iso
File: debian.torrent
GENERAL
Name: debian-10.4.0-amd64-netinst.iso
Hash: a7fbe3ac2451fc6f29562ff034fe099c998d945e
Created by: Transmission/2.92 (14714)
Created on: Mon Jun 8 00:04:33 2020
Piece Count: 2688
Piece Size: 128.0 KiB
Total Size: 352.3 MB
Privacy: Public torrent
TRACKERS
FILES
debian-10.4.0-amd64-netinst.iso (352.3 MB)
Open the port that transmission is running on on your VPS.
firewall-cmd --zone=public --add-port=51413/tcp --permanent
firewall-cmd --reload
Check it from your local PC.
sudo nmap $vps_ip -p51413
Host is up (0.24s latency).
PORT STATE SERVICE
51413/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 1.74 seconds
Add the torrent and seed it with transmission's default username and password on your VPS (or with your own, if you have already changed it):
transmission-remote -n "transmission:transmission" --add debian.torrent
localhost:9091/transmission/rpc/ responded: "success"
transmission-remote -n "transmission:transmission" --list
ID Done Have ETA Up Down Ratio Status Name
1 0% None Unknown 0.0 0.0 None Idle debian-10.4.0-amd64-netinst.iso
Sum: None 0.0 0.0
transmission-remote -n "transmission:transmission" -t 1 --start
localhost:9091/transmission/rpc/ responded: "success"
Copy the debian.torrent from your VPS to your local PC.
scp root@$vps_ip:/root/debian.torrent /tmp
Now try to download it on your local PC.
aria2c --enable-dht=true /tmp/debian.torrent
06/08 09:28:04 [NOTICE] Downloading 1 item(s)
06/08 09:28:04 [NOTICE] IPv4 DHT: listening on UDP port 6921
06/08 09:28:04 [NOTICE] IPv4 BitTorrent: listening on TCP port 6956
06/08 09:28:04 [NOTICE] IPv6 BitTorrent: listening on TCP port 6956
*** Download Progress Summary as of Mon Jun 8 09:29:04 2020 ***
===============================================================================
[#a34431 0B/336MiB(0%) CN:0 SD:0 DL:0B]
FILE: /tmp/debian-10.4.0-amd64-netinst.iso
-------------------------------------------------------------------------------
I waited about an hour, and the download progress stayed at 0%.
If you're using DHT, you have to open a UDP port in your firewall and then, depending on what you're doing, you can specify that port to aria2c. From the docs:
DHT uses UDP. Since aria2 doesn't configure firewalls or routers for port forwarding, it's up to you to do it manually.
$ aria2c --enable-dht --dht-listen-port=6881 file.torrent
See this page for some more examples of using DHT with aria2c.
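In the setup above, that means the VPS firewall also needs the UDP side of transmission's peer port open; a sketch reusing the firewalld commands from this thread (transmission uses the same port number for DHT over UDP):
firewall-cmd --zone=public --add-port=51413/udp --permanent   # UDP counterpart of the TCP rule added earlier
firewall-cmd --reload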
The command:
pi@raspberrypi:~ $ mosquitto
1566609792: mosquitto version 1.5.7 starting
1566609792: Using default config.
1566609792: Opening ipv4 listen socket on port 1883.
1566609792: Error: Address already in use
can be invoked to start mosquitto. Is there a better single command that can verify the broker is running? I would like to avoid using pub and sub commands to test, and instead use a simple query command. I would also like to avoid using the mosquitto command itself to determine whether the installation is active / running.
Assuming that you installed mosquitto using apt-get, it will have been set up as a systemd service, so:
service mosquitto status
will show if it's running or not:
● mosquitto.service - LSB: mosquitto MQTT v3.1 message broker
Loaded: loaded (/etc/init.d/mosquitto; generated; vendor preset: enabled)
Active: active (running) since Mon 2019-08-12 22:39:38 BST; 1 weeks 4 days ag
Docs: man:systemd-sysv-generator(8)
Process: 32183 ExecStop=/etc/init.d/mosquitto stop (code=exited, status=0/SUCC
Process: 32220 ExecStart=/etc/init.d/mosquitto start (code=exited, status=0/SU
CPU: 8min 53.255s
CGroup: /system.slice/mosquitto.service
└─32226 /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
Otherwise, plain ps will show if the process is running:
$ ps -efc | grep mosquitto
mosquit+ 32226 1 TS 19 Aug12 ? 00:08:53 /usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
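If you specifically want a single, scriptable query command, systemd's is-active check is a minimal option (assuming the same systemd service setup as above):
systemctl is-active mosquitto   # prints "active" and exits 0 when the broker is running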
This is the first time something like this has happened to me. I'm really scared.
I've been coding and testing a Django webapp on my laptop. The app is running on Docker, with docker-compose. Both the host and guest are Ubuntu 18.04. It consists of 3 images: Django+Gunicorn, Nginx and Postgres.
Nothing really fancy and it worked perfectly, until 5 minutes ago.
When I tried to refresh the page (accessible via 127.0.0.1) on Chrome Incognito, it got stuck loading. Same thing with curl. At the time, I was logged into the Django container (to run collectstatic whenever I needed it) and it was still running as usual.
I thought something was stuck somewhere so I tried to see if there's anything listening to the 80 port. Nothing really special:
tcp6 0 0 :::80 :::* LISTEN 10815/docker-proxy
So, wanting to get back to coding as fast as possible, I tried to (sudo) docker-compose down and then kill the containers, to no avail:
ERROR: for xxxxxxxx_nginx_1 Cannot kill container: e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b: Cannot kill container e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b: unknown error after kill: docker-runc did not terminate sucessfully: container_linux.go:393: signaling init process caused "permission denied"
: unknown
No problem, I thought, and I just stopped the docker service:
sudo systemctl stop docker
I refreshed the 127.0.0.1 page expecting to see a This site can’t be reached page ... only to see the webapp loading!
I tried to see what containers were running so I could stop them, but docker ps returned this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Which confirms the Docker service was down; systemctl status confirmed just that. I also checked whether the server-side code was running. It was. I also tried changing some frontend code, and it loaded the new version.
Can someone tell me what's going on, and how to stop this 'zombie' app from running?
Thanks!
EDIT
I just had the idea to run ps aux | grep docker and here's what I found:
root 1661 0.5 0.9 670260 74136 ? Ssl 17:47 1:15 dockerd -G docker --exec-root=/var/snap/docker/384/run/docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/var/snap/docker/384/run/docker.pid --config-file=/var/snap/docker/384/config/daemon.json --debug
root 2148 0.3 0.4 756640 34944 ? Ssl 17:47 0:47 docker-containerd --config /var/snap/docker/384/run/docker/containerd/containerd.toml
root 4105 0.0 0.0 7508 4112 ? Sl 17:48 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7709ab085e470228c120eff4c9b36590348dac483a40d9b107cfb8d62146e060 -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
root 10618 0.0 0.0 7508 4464 ? Sl 17:57 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/3a689a845ef012584e46d631c053ca0a00dbe34bb430f5e52a4de879c7efe966 -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
root 10815 0.0 0.0 425952 2956 ? Sl 17:58 0:07 /snap/docker/384/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.20.0.4 -container-port 80
root 10822 0.0 0.0 9172 5032 ? Sl 17:58 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
ahmed 26359 0.0 0.0 21536 1048 pts/5 S+ 21:52 0:00 grep --color=auto docker
EDIT 2
After manually killing some of the processes above, the situation is back to normal. But still, I'd love to get an explanation if there's one.
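A plausible explanation, going by the /var/snap/docker/... paths in the ps output above: this Docker was installed as a snap, so systemctl stop docker stopped a different (non-snap) unit while the snap's own daemon kept running. A sketch of how to check for and stop the snap version:
snap list | grep docker   # confirm Docker is installed as a snap
sudo snap stop docker     # stop the snap-managed daemon rather than the systemd unit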
When I print the list
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f15a180315d3 influxdb "/entrypoint.sh infl…" 2 hours ago Exited (128) 2 hours ago influxdb
7b753ba600df influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 2 hours ago nervous_fermi
2ddc5d9af400 influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 3 hours ago nostalgic_varahamihira
2e174a82d38d influxdb "/entrypoint.sh infl…" 3 hours ago Exited (0) 3 hours ago modest_mestorf
But if I try to restart
docker container restart influxdb
I get
Error response from daemon: Cannot restart container influxdb: driver failed programming external connectivity on endpoint influxdb (06ee4d738dffecd1a202840699a899286f4bbb88392e4eb227d65670108687a6): Error starting userland proxy: listen tcp 0.0.0.0:8086: bind: address already in use
netstat -nl -p tcp | grep 8086
tcp6 0 0 :::8086 :::* LISTEN 1985/influxd
How do I restart the docker container?
If I go for
docker kill influxdb
Error response from daemon: Cannot kill container: influxdb: Container f15a180315d38c2f5fac929b2d0b9be3e8ca2a09033648b5c5174c15a64c4d71 is not running
Problem
As indicated by the error message:
Error response from daemon: Cannot restart container influxdb: driver failed programming external connectivity on endpoint influxdb (06ee4d738dffecd1a202840699a899286f4bbb88392e4eb227d65670108687a6): Error starting userland proxy: listen tcp 0.0.0.0:8086: bind: address already in use
The port 8086 was already in use (hence the address already in use part) by another process, so the container could not run: it tried to start influxd, but failed because the port was already bound.
Additionally, the output of netstat provides the hint as to which process occupies the port:
netstat -nl -p tcp | grep 8086
tcp6 0 0 :::8086 :::* LISTEN 1985/influxd
(see the last part: 1985/influxd)
Solution
Kill the other process (but first check whether it is busy, and save any data before stopping it), e.g. using the kill command:
kill 1985
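Once the port is free, the restart from the question should go through (a quick sketch, reusing the container name from above):
docker container start influxdb   # or: docker container restart influxdb
docker container ls               # verify the container is Up and port 8086 is bound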
I feel so dumb admitting this, but I am struggling with the uWSGI tutorial for Django here.
My problem is that after making a test.py file as described in the tutorial, and running the command:
uwsgi --http :8000 --wsgi-file test.py
I go to port :8000 on the IP address of my VPS and the connection times out. I have been playing around with nginx and have been able to get the "Welcome to nginx" screen to show up successfully. The output on my terminal after starting uwsgi with the above command is:
*** Starting uWSGI 1.9.17.1 (64bit) on [Thu Oct 10 20:58:40 2013] ***
compiled with version: 4.6.3 on 10 October 2013 20:17:02
os: Linux-3.9.3-x86_64-linode33 #1 SMP Mon May 20 10:22:57 EDT 2013
nodename: Name
machine: x86_64
clock source: unix
detected number of CPU cores: 8
current working directory: /usr/local/uwsgi-tutorail/mytest
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7883
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :8000 fd 4
spawned uWSGI http 1 (pid: 18638)
uwsgi socket 0 bound to TCP address 127.0.0.1:52306 (port auto-assigned) fd 3
Python version: 2.7.3 (default, Sep 26 2013, 20:13:52) [GCC 4.6.3]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x26599f0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72792 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x26599f0 pid: 18637 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 18637, cores: 1)
I am a complete newb at uwsgi; any help would be greatly appreciated.
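One check that would have narrowed this down (a sketch, not from the original thread): test from the VPS itself, so the network path is ruled out:
curl -I http://127.0.0.1:8000/   # run on the VPS; a response here means uWSGI is serving and an external firewall is the likely culprit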
Not an elegant solution but I was able to "fix" the problem by rebuilding my VPS and starting from scratch.