After creating a systemd service to launch a Rails app, the service is failing with this error:
$ systemctl status evrserver
● evrserver.service - evr server boot
Loaded: loaded (/etc/systemd/system/evrserver.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2019-04-03 18:34:22 BST; 2min 51s ago
Process: 425 ExecStart=/home/pi/.rbenv/bin/rbenv bundle exec rails s -b 192.168.1.66 (code=exited, status=1/FAILURE)
CGroup: /system.slice/evrserver.service
Apr 03 18:34:22 raspberrypi systemd[1]: Failed to start evr server boot.
Apr 03 18:34:22 raspberrypi systemd[1]: evrserver.service: Unit entered failed state.
Apr 03 18:34:22 raspberrypi systemd[1]: evrserver.service: Failed with result 'exit-code'.
The setup is based on other online tutorials for starting a Rails app with systemd in an rbenv-managed environment, as noted here:
https://gist.github.com/arteezy/5d53d99f6ee617fae1f0db0576fdd418
https://mikewilliamson.wordpress.com/2015/08/26/running-a-rails-app-with-systemd-and-liking-it/
Here is the service file:
[Unit]
Description=evr server boot
After=network.target
After=local-fs.target
[Service]
Type=forking
User=pi
Group=pi
WorkingDirectory=/home/pi/evr
ExecStart=/home/pi/.rbenv/bin/rbenv bundle exec rails s -b 192.168.1.66
TimeoutSec=180
RestartSec=180s
Restart=always
[Install]
WantedBy=multi-user.target
This device has two other custom services that are working with no issue, both non-Rails processes. What am I missing here to get the Rails service running?
Ah, kind of funny: it's always the little things. I forgot an exec before bundle, and the type needs to be simple, not forking. Here is the corrected service file:
[Unit]
Description=evr server boot
After=network.target
After=local-fs.target
[Service]
Type=simple
User=pi
Group=pi
WorkingDirectory=/home/pi/evr
ExecStart=/home/pi/.rbenv/bin/rbenv exec bundle exec rails s -b 192.168.1.66
TimeoutSec=180
RestartSec=180s
Restart=always
[Install]
WantedBy=multi-user.target
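After updating the unit file, systemd needs to re-read it before the change takes effect; the usual sequence is roughly:
sudo systemctl daemon-reload
sudo systemctl restart evrserver.service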
And here are the successful logs:
● evrserver.service - evr server boot
Loaded: loaded (/etc/systemd/system/evrserver.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-04-04 04:05:45 BST; 8min ago
Main PID: 460 (ruby)
CGroup: /system.slice/evrserver.service
└─460 puma 3.12.0 (tcp://192.168.1.66:3000) [evr]
Apr 04 04:05:45 raspberrypi systemd[1]: Started evr server boot.
Apr 04 04:06:08 raspberrypi rbenv[460]: => Booting Puma
Apr 04 04:06:08 raspberrypi rbenv[460]: => Rails 6.0.0.beta3 application starting in development
Apr 04 04:06:08 raspberrypi rbenv[460]: => Run `rails server --help` for more startup options
Apr 04 04:13:51 raspberrypi rbenv[460]: Puma starting in single mode...
Apr 04 04:13:51 raspberrypi rbenv[460]: * Version 3.12.0 (ruby 2.6.2-p47), codename: Llamas in Pajamas
Apr 04 04:13:51 raspberrypi rbenv[460]: * Min threads: 5, max threads: 5
Apr 04 04:13:51 raspberrypi rbenv[460]: * Environment: development
Apr 04 04:13:51 raspberrypi rbenv[460]: * Listening on tcp://192.168.1.66:3000
Apr 04 04:13:51 raspberrypi rbenv[460]: Use Ctrl-C to stop
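To double-check from another machine on the LAN, a quick request against the address Puma reports listening on (192.168.1.66:3000 here) should return an HTTP response, e.g.:
curl -I http://192.168.1.66:3000/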
Related
I am running Docker on an Ubuntu server:
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
Docker had been running without problems for a year or so, but suddenly it is not available anymore:
root@srv-lab-t-427:/home/schm# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2023-02-05 06:43:29 UTC; 2min 43s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Process: 1478999 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/>
Main PID: 1478999 (code=exited, status=1/FAILURE)
Feb 05 06:43:29 srv-lab-t-427 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Feb 05 06:43:29 srv-lab-t-427 systemd[1]: Stopped Docker Application Container Engine.
Feb 05 06:43:29 srv-lab-t-427 systemd[1]: docker.service: Start request repeated too quickly.
Feb 05 06:43:29 srv-lab-t-427 systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 05 06:43:29 srv-lab-t-427 systemd[1]: Failed to start Docker Application Container Engine.
Any idea how to fix this?
I need to change the underlying storage for a Proxmox LXC Debian Buster container from RAW to ZFS. To do this, I restored a snapshot to ZFS storage. This is normally transparent to the OS in the container, but in this case Docker no longer starts.
The initial problem was that Docker wasn't starting, and after some digging around I found this:
# dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
INFO[2021-08-03T09:24:40.909844803Z] Starting up
...
ERRO[2021-08-03T09:24:56.914420548Z] failed to mount overlay: invalid argument storage-driver=overlay2
ERRO[2021-08-03T09:24:56.914439880Z] [graphdriver] prior storage driver overlay2 failed: driver not supported
failed to start daemon: error initializing graphdriver: driver not supported
How can I fix this?
EDIT:
I tried the suggested fix, but still no cigar:
root@mail:/var/log# systemctl status docker.service
* docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-10-09 10:05:49 UTC; 1min 23s ago
Docs: https://docs.docker.com
Process: 236 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 236 (code=exited, status=1/FAILURE)
Oct 09 10:05:49 mail systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Oct 09 10:05:49 mail systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 09 10:05:49 mail systemd[1]: Stopped Docker Application Container Engine.
Oct 09 10:05:49 mail systemd[1]: docker.service: Start request repeated too quickly.
Oct 09 10:05:49 mail systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 09 10:05:49 mail systemd[1]: Failed to start Docker Application Container Engine.
The linked page suggests creating a new zpool within the container. It seems a bit of overkill for that to be necessary, no?
Configure Docker to use zfs. Edit /etc/docker/daemon.json and set the storage-driver to zfs. If the file was empty before, it should now look like this:
{
"storage-driver": "zfs"
}
More details: https://docs.docker.com/storage/storagedriver/zfs-driver/
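After editing daemon.json, the daemon has to be restarted for the new driver to take effect, and docker info will confirm which driver is active; roughly:
sudo systemctl restart docker
docker info | grep -i 'storage driver'   # should now report: zfs
Note that switching storage drivers makes existing images and containers invisible to Docker until you switch the driver back, so expect to re-pull or rebuild them.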
We've been having issues where the Docker daemon occasionally stops responding on one of our Kubernetes systems, but systemd still thinks the service is running:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-04-15 20:40:57 UTC; 3 months 22 days ago
Docs: https://docs.docker.com
Main PID: 1281 (dockerd)
Tasks: 1409
Memory: 31.0G
CPU: 5d 17h 3min 4.758s
CGroup: /system.slice/docker.service
├─ 1281 /usr/bin/dockerd -H fd://
...
There isn't anything in journalctl -u docker or the syslog to indicate what the issue is, but the Docker daemon no longer responds to requests (docker ps just hangs). We are currently using the 17.03.2~ce-0~ubuntu-xenial package for Ubuntu 16.04, which has the following service unit:
cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I noticed that even though it is a Type=notify service, there isn't a WatchdogSec= defined in the service unit.
Does the Docker daemon support setting a watchdog timeout for sd_notify based health checks?
No. Currently the components/engine/cmd/dockerd/daemon_linux.go file only implements systemdDaemon.SdNotifyReady to notify systemd when the process has started. For watchdog support it would have to use something like SdWatchdogEnabled to continually send SdNotifyWatchdog = "WATCHDOG=1" notifications.
If you try to set WatchdogSec=60s on the docker.service unit, systemd will kill and restart the service because the daemon doesn't send the required watchdog notifications.
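One way to apply the setting for a quick test, without editing the packaged unit, is a drop-in override (the watchdog.conf file name below is arbitrary):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/watchdog.conf <<'EOF'
[Service]
WatchdogSec=60s
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker.service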
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-08 02:09:52 UTC; 50s ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: deactivating (stop-sigabrt) (Result: watchdog) since Thu 2019-08-08 02:10:02 UTC; 45ms ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: activating (start) since Thu 2019-08-08 02:10:04 UTC; 777ms ago
# Log entries:
Aug 08 02:09:14 kam1 systemd[1]: Starting Docker Application Container Engine...
Aug 08 02:09:15 kam1 systemd[1]: Started Docker Application Container Engine.
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Watchdog timeout (limit 60s)!
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Killing process 12383 (dockerd) with signal SIGABRT.
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Failed with result 'watchdog'.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 08 02:10:18 kam1 systemd[1]: Stopped Docker Application Container Engine.
Aug 08 02:10:18 kam1 systemd[1]: Starting Docker Application Container Engine...
I am running Docker on a Red Hat system with the devicemapper storage driver and a thin-pool device, just as recommended for production systems. Now when I want to reinstall Docker I need two steps:
1) remove the Docker data directory (in my case /area51/docker)
2) clear the thin-pool device
The Docker documentation states that when using devicemapper with the dm.metadatadev and dm.datadev options, the easiest way of cleaning devicemapper would be:
If setting up a new metadata pool it is required to be valid.
This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
Unfortunately, according to the documentation, dm.metadatadev is deprecated; it says to use dm.thinpooldev instead.
My thin pool has been created along the lines of this Docker instruction.
So, my setup now looks like this:
cat /etc/docker/daemon.json
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
"dm.basesize=18G"
]
}
Under the device-mapper directory I see the following thin-pool devices:
ls -l /dev/mapper/thinpool_VG_38401-thinpool*
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
So, after running Docker successfully, I tried to reinstall as described above and clear the thin pool by writing 4K of zeroes into the tmeta device and restarting Docker:
dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
systemctl start docker
And ended up with:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
Docs: https://docs.docker.com
Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 1566 (code=exited, status=1/FAILURE)
Memory: 236.0K
CGroup: /system.slice/docker.service
Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.
I assumed I could get around the 'unable to take ownership of thin-pool' error by rebooting. But after the reboot, trying to start Docker again gave the following error:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
Docs: https://docs.docker.com
Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 3180 (code=exited, status=1/FAILURE)
Memory: 37.9M
CGroup: /system.slice/docker.service
Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.
So, obviously writing zeroes into the tmeta device is not the right thing to do; it seems to destroy my thin-pool device.
Can anyone here tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
I'm trying to run Docker on Ubuntu 16.04 after a system reboot. I created a service for it, /etc/systemd/system/openvpnBOX.service:
[Unit]
Description=Openvpn Docker
[Service]
User=root
ExecStart=/etc/init/openvpn.conf
[Install]
WantedBy=multi-user.target
Alias=openvpnBOX.service
openvpn.conf:
#!/bin/bash
exec docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
When I run this service with sudo service openvpnBOX start, I see that the service runs, but when I reboot my system, the service fails to start:
"sudo service openvpnBOX status"
● openvpnBOX.service - Openvpn Docker
Loaded: loaded (/etc/systemd/system/openvpnBOX.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2017-10-01 21:35:48 SST; 2min 51s ago
Process: 1771 ExecStart=/etc/init/openvpn.conf (code=exited, status=1/FAILURE)
Main PID: 1771 (code=exited, status=1/FAILURE)
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: Started Openvpn Docker.
Oct 01 21:35:48 openvpn.conf[1771]: Error response from daemon: 404 page not found
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Start request repeated too quickly.
Oct 01 21:35:48 systemd[1]: Failed to start Openvpn Docker.
I can use "sudo docker run --restart=always --volumes-from ovpn-data -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn", but it doesn't solve my problem, because I would like to understand why my service doesn't work after a reboot.
Any idea?