Rails 6 + Capistrano - No such puma.sock file - ruby-on-rails

Please help: I have been struggling with a big problem for more than 10 hours.
Whenever I deploy my Rails application with Capistrano and Puma and then restart nginx, accessing the site fails with an error page.
When I check my nginx logs, I see the following errors:
2020/12/29 04:09:50 [crit] 9536#9536: *73 connect() to unix:///home/ubuntu/apps/my_app/shared/tmp/sockets/my_app-puma.sock failed (2: No such file or directory) while connecting to upstream, client: [CLIENT_ID], server: , request: "GET / HTTP/1.1", upstream: "http://unix:///home/ubuntu/apps/my_app/shared/tmp/sockets/my_app-puma.sock:/", host: "[MY_HOST]"
2020/12/29 04:09:50 [crit] 9536#9536: *73 connect() to unix:///home/ubuntu/apps/my_app/shared/tmp/sockets/my_app-puma.sock failed (2: No such file or directory) while connecting to upstream, client: [CLIENT_ID], server: , request: "GET / HTTP/1.1", upstream: "http://unix:///home/ubuntu/apps/my_app/shared/tmp/sockets/my_app-puma.sock:/500.html", host: "[MY_HOST]"
Thanks in advance for any help. I have spent over 10 hours trying to solve this missing ".sock" file problem and I can't figure it out.
Update 1:
Following a tutorial, I created a file named puma-website.service in /etc/systemd/system.
It contains:
[Unit]
After=network.target
[Service]
# Foreground process (do not use --daemon in ExecStart or config.rb)
Type=simple
# Preferably configure a non-privileged user
User=ubuntu
Group=ubuntu
# Specify the path to your puma application root
WorkingDirectory=/home/ubuntu/my_app/current
# Helpful for debugging socket activation, etc.
Environment=PUMA_DEBUG=1
#EnvironmentFile=/var/www/my-website.com/.env
# The command to start Puma
ExecStart=/home/ubuntu/.rbenv/shims/bundle exec puma -C /home/ubuntu/my_app/current/config/puma.rb
Restart=always
[Install]
WantedBy=multi-user.target
But I get an error:
:/etc/systemd/system$ sudo systemctl status puma-website.service
● puma-website.service
Loaded: loaded (/etc/systemd/system/puma-website.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2020-12-29 00:52:19 UTC; 12h ago
Process: 4316 ExecStart=/home/ubuntu/.rbenv/shims/bundle exec puma -C /home/ubuntu/my_app/current/config/puma.rb (code=exited, status=1/FAILURE
Main PID: 4316 (code=exited, status=1/FAILURE)
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Main process exited, code=exited, status=1/FAILURE
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Failed with result 'exit-code'.
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Service hold-off time over, scheduling restart.
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Scheduled restart job, restart counter is at 10.
Dec 29 00:52:19 MyIp systemd[1]: Stopped puma-website.service.
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Start request repeated too quickly.
Dec 29 00:52:19 MyIp systemd[1]: puma-website.service: Failed with result 'exit-code'.
Dec 29 00:52:19 MyIp systemd[1]: Failed to start puma-website.service.
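One thing worth checking: the unit above runs Puma from /home/ubuntu/my_app/current, while nginx looks for the socket under /home/ubuntu/apps/my_app/shared, so those paths need to agree. For reference, here is a minimal sketch of a config/puma.rb that binds Puma to the Unix socket nginx expects; the socket path is taken from the nginx log above, the worker/thread counts are placeholders, and it assumes the Capistrano shared directories tmp/sockets and tmp/pids already exist:
# config/puma.rb - minimal sketch, socket path taken from the nginx log above
app_dir = "/home/ubuntu/apps/my_app"
shared_dir = "#{app_dir}/shared"

# Bind to the same Unix socket the nginx upstream points at
bind "unix://#{shared_dir}/tmp/sockets/my_app-puma.sock"

# Keep Puma in the foreground (no daemonize), since the systemd unit uses
# Type=simple; track pid/state under the shared directory
pidfile "#{shared_dir}/tmp/pids/puma.pid"
state_path "#{shared_dir}/tmp/pids/puma.state"

environment "production"
workers 2
threads 1, 5
After editing the unit file or puma.rb, run sudo systemctl daemon-reload before restarting the service, and check journalctl -u puma-website.service to see the actual startup error behind status=1/FAILURE.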

Related

Warning: Stopping docker.service, but it can still be activated by: docker.socket

I've reinstalled Docker. When I try to start Docker, everything is fine:
# /etc/init.d/docker start
[ ok ] Starting docker (via systemctl): docker.service.
until I stop the Docker service and then restart it several times:
# /etc/init.d/docker stop
[....] Stopping docker (via systemctl): docker.serviceWarning: Stopping docker.service, but it can still be activated by:
docker.socket
. ok
Finally, I get an error:
# /etc/init.d/docker start
[....] Starting docker (via systemctl): docker.serviceJob for docker.service failed.
See "systemctl status docker.service" and "journalctl -xe" for details.
failed!
# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Sat 2017-11-25 20:04:20 CET; 2min 4s ago
Docs: https://docs.docker.com
Process: 12845 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=0/SUCCESS)
Main PID: 12845 (code=exited, status=0/SUCCESS)
CPU: 326ms
Nov 25 20:04:18 example.com systemd[1]: Started Docker Application Container Engine.
Nov 25 20:04:18 example.com dockerd[12845]: time="2017-11-25T20:04:18.191949863+01:00" level=inf
Nov 25 20:04:19 example.com systemd[1]: Stopping Docker Application Container Engine...
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.368990531+01:00" level=inf
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.37953454+01:00" level=info
Nov 25 20:04:20 example.com systemd[1]: Stopped Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Start request repeated too quickly.
Nov 25 20:04:21 example.com systemd[1]: Failed to start Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Unit entered failed state.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Failed with result 'start-limit-hit'.
I've installed Docker on Debian 9 Stretch.
Can anyone help me get rid of this warning and resolve the error "Failed with result 'start-limit-hit'"?
If Docker is triggered by the socket, simply stop (and later start) the socket as well:
sudo systemctl stop docker.socket
This is because in addition to the docker.service unit file, there is a docker.socket unit file... this is for socket activation. The warning means if you try to connect to the docker socket while the docker service is not running, then systemd will automatically start docker for you.
You can get rid of this by removing /lib/systemd/system/docker.socket... you may also need to remove -H fd:// from the docker.service unit file.
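As a sketch, here is the stop/start sequence with socket activation taken into account (unit names as in the question):
# Stop the socket first so systemd cannot re-activate the service behind your back
sudo systemctl stop docker.socket
sudo systemctl stop docker.service

# Later, bring both back up
sudo systemctl start docker.socket
sudo systemctl start docker.service

# To see whether the socket unit is what keeps re-triggering Docker
systemctl status docker.socket
Stopping the socket first also avoids the rapid stop/re-activate cycle that trips systemd's start-rate limit and produces the 'start-limit-hit' failure shown above.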

Can't start docker after reboot Ubuntu 16.04

I'm trying to run Docker on Ubuntu 16.04 after a system reboot. I created a service for it, /etc/systemd/system/openvpnBOX.service:
[Unit]
Description=Openvpn Docker
[Service]
User=root
ExecStart=/etc/init/openvpn.conf
[Install]
WantedBy=multi-user.target
Alias=openvpnBOX.service
openvpn.conf:
#!/bin/bash
exec docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
When I run this service with sudo service openvpnBOX start, I see that the service runs, but when I reboot my system, the service can't start:
"sudo service openvpnBOX status"
● openvpnBOX.service - Openvpn Docker
Loaded: loaded (/etc/systemd/system/openvpnBOX.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2017-10-01 21:35:48 SST; 2min 51s ago
Process: 1771 ExecStart=/etc/init/openvpn.conf (code=exited, status=1/FAILURE)
Main PID: 1771 (code=exited, status=1/FAILURE)
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: Started Openvpn Docker.
Oct 01 21:35:48 openvpn.conf[1771]: Error response from daemon: 404 page not found
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Start request repeated too quickly.
Oct 01 21:35:48 systemd[1]: Failed to start Openvpn Docker.
I can use sudo docker run --restart=always --volumes-from ovpn-data -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn, but it doesn't solve my problem, because I would like to understand why my service doesn't work after reboot.
Any idea?
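For what it's worth, here is a sketch of the unit with explicit ordering against the Docker daemon; the description, image, and ports are from the question, and the Requires/After lines are the usual way to keep a docker run wrapper from racing dockerd at boot (an assumption about the cause, since the failure happens only on reboot):
[Unit]
Description=Openvpn Docker
# Start only after the Docker daemon is fully up
Requires=docker.service
After=docker.service

[Service]
User=root
# Run the container in the foreground; --rm cleans it up on stop
ExecStart=/usr/bin/docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
Restart=on-failure

[Install]
WantedBy=multi-user.target
Running docker directly in ExecStart, instead of through the /etc/init/openvpn.conf wrapper script, also lets systemd supervise the actual container process.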

Apache2 is not working... (warning)

So I'm trying to deploy a Rails 4.2.5 app, and at the last step, when apache2 needs to be reloaded, it fails, and I don't have much information about it.
I browsed the web, but there aren't many answers about this, so I don't know what to do...
Active: active (exited) (Result: exit-code) since dim. 2016-01-31 03:01:31 CET; 9min ago
Process: 10766 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
Process: 10993 ExecReload=/etc/init.d/apache2 reload (code=exited, status=1/FAILURE)
Process: 10773 ExecStart=/etc/init.d/apache2 start (code=exited, status=0/SUCCESS)
janv. 31 03:08:47 vps240378.ovh.net systemd[1]: apache2.service: control process exited, code=exited status=1
janv. 31 03:08:47 vps240378.ovh.net systemd[1]: Reload failed for LSB: Apache2 web server.
janv. 31 03:10:52 vps240378.ovh.net apache2[10945]: Reloading web server: apache2 failed!
janv. 31 03:10:52 vps240378.ovh.net apache2[10945]: Apache2 is not running ... (warning).
janv. 31 03:10:52 vps240378.ovh.net systemd[1]: apache2.service: control process exited, code=exited status=1
janv. 31 03:10:52 vps240378.ovh.net systemd[1]: Reload failed for LSB: Apache2 web server.
janv. 31 03:11:23 vps240378.ovh.net apache2[10993]: Reloading web server: apache2 failed!
janv. 31 03:11:23 vps240378.ovh.net apache2[10993]: Apache2 is not running ... (warning).
janv. 31 03:11:23 vps240378.ovh.net systemd[1]: apache2.service: control process exited, code=exited status=1
janv. 31 03:11:23 vps240378.ovh.net systemd[1]: Reload failed for LSB: Apache2 web server.
Does anybody know how to deal with it?
Thanks
Looking at your top three lines, it looks like you are stopping Apache, trying to reload (which fails, because it's stopped), and then starting it again. Is that the issue? If so, don't stop Apache; just reload it. Reload is there so you don't have to restart.
Apologies if this is not the issue; it's just my first take.
Reload instructs the child processes to complete their current requests and stop, while new requests are served by your new configuration. If Apache is stopped, there's nothing to reload.
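As a quick sketch of that advice on a Debian-style system (apache2ctl and the apache2 service name are assumptions based on the distribution shown in the logs):
# Check the configuration first; a broken config also makes reload fail
sudo apache2ctl configtest

# Make sure Apache is running, then reload instead of stop/start
sudo systemctl start apache2
sudo systemctl reload apache2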

kube-addons.service failed on CoreOS-libvirt installation

I have the following issue installing and provisioning my Kubernetes CoreOS-libvirt-based cluster.
When I log in on the master node, I see the following:
ssh core@192.168.10.1
Last login: Thu Dec 10 17:19:21 2015 from 192.168.10.254
CoreOS alpha (884.0.0)
Update Strategy: No Reboots
Failed Units: 1
kube-addons.service
Trying to debug it, I run the following and receive:
core#kubernetes-master ~ $ systemctl status kube-addons.service
● kube-addons.service - Kubernetes addons
Loaded: loaded (/etc/systemd/system/kube-addons.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2015-12-10 16:41:06 UTC; 41min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 801 ExecStart=/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons (code=exited, status=1/FAILURE)
Process: 797 ExecStartPre=/bin/sleep 10 (code=exited, status=0/SUCCESS)
Process: 748 ExecStartPre=/bin/bash -c while [[ "$(curl -s http://127.0.0.1:8080/healthz)" != "ok" ]]; do sleep 1; done (code=exited, status=0/SUCCESS)
Main PID: 801 (code=exited, status=1/FAILURE)
Dec 10 16:40:53 kubernetes-master systemd[1]: Starting Kubernetes addons...
Dec 10 16:41:06 kubernetes-master kubectl[801]: replicationcontroller "skydns" created
Dec 10 16:41:06 kubernetes-master kubectl[801]: error validating "/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 16:41:06 kubernetes-master systemd[1]: Failed to start Kubernetes addons.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Unit entered failed state.
Dec 10 16:41:06 kubernetes-master systemd[1]: kube-addons.service: Failed with result 'exit-code'.
My etcd version is:
etcd --version
etcd version 0.4.9
But I also have etcd2:
etcd2 --version
etcd Version: 2.2.2
Git SHA: b4bddf6
Go Version: go1.4.3
Go OS/Arch: linux/amd64
And at the moment, the second one is the one running:
ps aux | grep etcd
etcd 731 0.5 8.4 329788 42436 ? Ssl 16:40 0:16 /usr/bin/etcd2
root 874 0.4 7.4 59876 37804 ? Ssl 17:19 0:02 /opt/kubernetes/bin/kube-apiserver --address=0.0.0.0 --port=8080 --etcd-servers=http://127.0.0.1:2379 --kubelet-port=10250 --service-cluster-ip-range=10.11.0.0/16
core 953 0.0 0.1 6740 876 pts/0 S+ 17:27 0:00 grep --colour=auto etcd
What causes the issue and how can I solve it?
Thank you.
The relevant log line is:
/opt/kubernetes/addons/skydns-svc.yaml": error validating data: found invalid field portalIP for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
You should figure out what's invalid about that portalIP field, or set the flag to turn validation off.
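As a sketch, the two options from the error message (the command line is the unit's own ExecStart; the portalIP-to-clusterIP rename is an assumption based on the v1 API, where ServiceSpec uses clusterIP):
# Option 1: skip validation, as the error message suggests
# (objects already created, such as the skydns replicationcontroller,
# may need to be deleted first to avoid AlreadyExists errors)
/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons --validate=false

# Option 2: edit /opt/kubernetes/addons/skydns-svc.yaml, renaming the
# portalIP field to clusterIP, then re-run without the flag
/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons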

After installing docker on centos7,Failed to start docker."Job for docker.service failed."

After executing yum install docker on CentOS 7, I want to start Docker by executing service docker start, but I see this error:
Redirecting to /bin/systemctl start docker.service
Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.
Then I execute systemctl status docker.service -l, and the error is:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since Sun 2015-03-15 03:49:49 EDT; 12min ago
Docs: http://docs.docker.com
Process: 11444 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 11444 (code=exited, status=1/FAILURE)
Mar 15 03:49:48 localhost.localdomain docker[11444]: 2015/03/15 03:49:48 docker daemon: 1.3.2 39fa2fa/1.3.2; execdriver: native; graphdriver:
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job serveapi(fd://)
Mar 15 03:49:48 localhost.localdomain docker[11444]: [info] Listening for HTTP on fd ()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] +job init_networkdriver()
Mar 15 03:49:48 localhost.localdomain docker[11444]: [a25f748b] -job init_networkdriver() = OK (0)
Mar 15 03:49:49 localhost.localdomain docker[11444]: 2015/03/15 03:49:49 write /var/lib/docker/init/dockerinit-1.3.2: no space left on device
Mar 15 03:49:49 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 03:49:49 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 03:49:49 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
I really have no idea; looking forward to your response. I would be very appreciative!
This error usually occurs because of a missing device-mapper-event-libs package.
# yum install device-mapper-event-libs
Thanks for Ben Whaley's advice. When I checked my disk space, it was indeed not enough. I extended my disk space and that solved the problem. This is the first time I've asked a question here, and it really helped. Thanks again.
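For anyone hitting the same "no space left on device" line, a quick check, as a sketch:
# Check free space on the filesystem that holds Docker's state
df -h /var/lib/docker

# See how much of that space Docker's directory itself is using
sudo du -sh /var/lib/docker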
I upgraded the CentOS 7 kernel from 3 to 4.
NOTE: I upgraded the kernel for other reasons as well; first try without upgrading the kernel.
Delete the docker folder under /var/lib.
Go to cd /etc/sysconfig.
Edit docker with vi (before editing, make a copy of docker as docker.org).
Find the line OPTIONS='--selinux-disabled --log-driver=journald' and remove --selinux-disabled, so it reads OPTIONS='--log-driver=journald'.
Un-comment # setsebool -P docker_transition_unconfined 1 to setsebool -P docker_transition_unconfined 1.
Reboot the machine, or just try docker start to check; it worked for me :) (see the sketch below)
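The same steps as shell commands, as a sketch (paths from the answer above; setsebool assumes the SELinux policy utilities are installed, and removing /var/lib/docker deletes all existing images and containers):
# Back up the daemon options, then drop --selinux-disabled
sudo cp /etc/sysconfig/docker /etc/sysconfig/docker.org
sudo sed -i 's/--selinux-disabled //' /etc/sysconfig/docker

# Allow Docker processes to run unconfined under SELinux
sudo setsebool -P docker_transition_unconfined 1

# Remove old Docker state (destroys existing images/containers!) and start fresh
sudo rm -rf /var/lib/docker
sudo systemctl start docker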
