Job for sidekiq.service failed on Ubuntu 20.04 deployment server - ruby-on-rails

I have tried to set up Sidekiq on Ubuntu.
This is my sidekiq.service file (written following this example):
[Unit]
Description=sidekiq
After=syslog.target network.target
[Service]
Type=notify
WatchdogSec=10
WorkingDirectory=/var/www/document-draft
# WorkingDirectory=/var/www/document-draft/current -> I also tried this
ExecStart=/bundle exec sidekiq -e production
# I have also tried these commands:
# ExecStart=/sudo bundle exec sidekiq -e production
# ExecStart=bundle exec sidekiq -e production
# ExecStart=/home/deploy/.rvm/gems/ruby-2.7.1/wrappers/bundle exec sidekiq -e production
# ExecStart=/home/deploy/.rvm/bin/rvm in /opt/myapp/current do bundle exec sidekiq -e production
Environment=MALLOC_ARENA_MAX=2
RestartSec=1
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
I'm using Ruby 2.7, and when I start the Sidekiq service
$ systemctl enable sidekiq
$ systemctl start sidekiq
I get this error
Job for sidekiq.service failed because the control process exited with error code.
See "systemctl status sidekiq.service" and "journalctl -xe" for details.
When I check the logs, I see this:
● sidekiq.service - sidekiq
Loaded: loaded (/lib/systemd/system/sidekiq.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-04-08 05:04:21 UTC; 9s ago
Process: 150072 ExecStart=/sudo bundle exec sidekiq -e production (code=exited, status=200/CHDIR)
Main PID: 150072 (code=exited, status=200/CHDIR)
Apr 08 05:04:19 ip-172-31-29-35 systemd[1]: sidekiq.service: Main process exited, code=exited, status=200/CHDIR
Apr 08 05:04:19 ip-172-31-29-35 systemd[1]: sidekiq.service: Failed with result 'exit-code'.
Apr 08 05:04:19 ip-172-31-29-35 systemd[1]: Failed to start sidekiq.
Apr 08 05:04:21 ip-172-31-29-35 systemd[1]: sidekiq.service: Scheduled restart job, restart counter is at 5.
Apr 08 05:04:21 ip-172-31-29-35 systemd[1]: Stopped sidekiq.
Apr 08 05:04:21 ip-172-31-29-35 systemd[1]: sidekiq.service: Start request repeated too quickly.
Apr 08 05:04:21 ip-172-31-29-35 systemd[1]: sidekiq.service: Failed with result 'exit-code'.
Apr 08 05:04:21 ip-172-31-29-35 systemd[1]: Failed to start sidekiq.
I'm confused about why I cannot start Sidekiq: my Gemfile includes the sidekiq gem, and I can successfully start it manually with any of the commands listed in the service file.
But my goal is to run it as a background service so it doesn't shut down.
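A note on the log above: status=200/CHDIR is systemd reporting that it could not change into the unit's WorkingDirectory before ExecStart ever ran. ExecStart must also begin with an absolute path to a real executable, which /bundle and /sudo are not. A quick sanity check, using the paths from the unit file above (the deploy user name is an assumption based on the rvm paths tried):
ls -ld /var/www/document-draft            # does the working directory exist?
su - deploy -c 'which bundle'             # the absolute path a login shell resolves
sudo journalctl -u sidekiq.service -n 50  # the unit's own log, not just -xe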

I was able to make it run by writing my sidekiq.service unit in /lib/systemd/system, using sudo vim /lib/systemd/system/sidekiq.service:
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available;
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
[Service]
# You may want to use
# Type=notify
# to ensure service is not marked as started before it actually did.
# Include sd_notify gem to send a message on sidekiq startup like
# Sidekiq.configure_server do |config|
# config.on(:startup) { SdNotify.ready }
# end
# to let systemd know when the service is actually started.
Type=simple
WorkingDirectory=/var/www/document-draft
# If you use rbenv:
ExecStart=/bin/bash -lc 'exec /home/ubuntu/.rbenv/shims/bundle exec sidekiq -e production'
# If you use rvm with a gemset (version and gemset name are examples):
# ExecStart=/bin/bash -lc 'exec /home/deploy/.rvm/wrappers/ruby-2.6.5@brentmark-portal/bundle exec sidekiq -e production'
# use `systemctl reload sidekiq` to send the quiet signal to Sidekiq
# at the start of your deploy process.
ExecReload=/usr/bin/kill -TSTP $MAINPID
User=ubuntu
Group=ubuntu
UMask=0002
# Greatly reduce Ruby memory fragmentation and heap usage
# https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
Environment=MALLOC_ARENA_MAX=2
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog
# StandardOutput=syslog
# StandardError=syslog
# ERROR: Logfile redirection was removed in Sidekiq 6.0, Sidekiq will only log to STDOUT
# StandardOutput=/var/www/sites/document-draft/log/sidekiq.log
# StandardError=/var/www/sites/document-draft/log/sidekiq.log
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
After that, run systemctl start sidekiq to start the service.
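A fuller bring-up sequence, including the daemon-reload systemd needs after any unit file change and a live log tail to confirm startup:
sudo systemctl daemon-reload
sudo systemctl enable --now sidekiq
systemctl status sidekiq
sudo journalctl -u sidekiq -f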

Related

I am unable to start puma.service using systemd

It had been running well until yesterday. Now, suddenly, I cannot get Puma running at all. I am currently using tmux to run Puma and my app, but it fails when I try to run the server using systemctl. Note: I updated rake and other gems, then had to revert, which gave me an error saying I had activated a new version of rake while using an older one. So I decided to install the gem 'rubygems-bundler'. Could this have been the cause of the issue? I have removed this gem now, but it still doesn't work.
Puma Version: 3.12.0
Here's the puma.service status:
puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2020-07-01 12:06:47 UTC; 1s ago
Process: 14640 ExecStart=/home/deploy/.rbenv/shims/puma -C config/puma.rb -p 9100 -e staging (code=exited, status=1/FAILURE)
Main PID: 14640 (code=exited, status=1/FAILURE)
systemd[1]: puma.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: puma.service: Unit entered failed state.
systemd[1]: puma.service: Failed with result 'exit-code'.
systemd[1]: puma.service: Service hold-off time over, scheduling restart.
systemd[1]: Stopped Puma HTTP Server.
systemd[1]: puma.service: Start request repeated too quickly.
systemd[1]: Failed to start Puma HTTP Server.
systemd[1]: puma.service: Unit entered failed state.
systemd[1]: puma.service: Failed with result 'start-limit-hit'.
My puma.rb file is:
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
bind "unix:///tmp/production-puma.sock"
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
# Worker specific setup for Rails 4.1+
ActiveRecord::Base.establish_connection
end
app_path = File.expand_path(File.dirname(__FILE__) + '/../')
stdout_redirect "/#{app_path}/log/puma.stdout.log", "/#{app_path}/log/puma.stderr.log"
Here is puma.service file
[Unit]
Description=Puma HTTP Server
After=network.target
# Uncomment for socket activation (see below)
Requires=puma.socket
[Service]
# Foreground process (do not use --daemon in ExecStart or config.rb)
Type=simple
#Type=forking
# Preferably configure a non-privileged user
User=deploy
# The path to the puma application root
# Also replace the "<WD>" place holders below with this path.
WorkingDirectory=/home/deploy/app
# Helpful for debugging socket activation, etc.
# Environment=PUMA_DEBUG=1
ExecStart=/home/deploy/.rbenv/shims/puma -C config/puma.rb -p 9100 -e staging
Restart=always
[Install]
WantedBy=multi-user.target
Ok. Fixed the issue. It was indeed caused by rubygems-bundler.
Uninstall rubygems-bundler with
gem uninstall rubygems-bundler
My exact issue was that I had not run the following command:
executable-hooks-uninstaller
Everything works now.
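For completeness, a plausible full recovery sequence on the server, using the unit name from above (the reset-failed step clears the 'start-limit-hit' state so systemd will accept a new start request):
gem uninstall rubygems-bundler
executable-hooks-uninstaller
sudo systemctl reset-failed puma.service
sudo systemctl restart puma.service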

`bundle: command not found` in systemd script

I'm trying to run a Rails server using the systemd script below:
[Unit]
Description=vue-chat-app
Requires=network.target
[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/var/www/vue-chat-app.lizardgizzards.com/vue-chat-app/backend-rails/
ExecStart=/bin/bash -lc 'bundle exec rails s -e production -b 0.0.0.0 -p 3010'
TimeoutSec=60s
RestartSec=30s
Restart=always
[Install]
WantedBy=multi-user.target
This is the output I get from journalctl -xe:
Sep 16 16:11:28 lab sudo[27454]: ubuntu : TTY=pts/6 ; PWD=/var/www/vue-chat-app.lizardgizzards.com/vue-chat-app/backend-rails ; USER=root ; COMMAND=/usr/bin/vi /etc/systemd/system/vue-chat-app.service
Sep 16 16:11:28 lab sudo[27454]: pam_unix(sudo:session): session opened for user root by ubuntu(uid=0)
Sep 16 16:11:46 lab sudo[27454]: pam_unix(sudo:session): session closed for user root
Sep 16 16:11:56 lab systemd[1]: vue-chat-app.service: Service hold-off time over, scheduling restart.
Sep 16 16:11:56 lab systemd[1]: vue-chat-app.service: Scheduled restart job, restart counter is at 69.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit vue-chat-app.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Sep 16 16:11:56 lab systemd[1]: Stopped vue-chat-app.
-- Subject: Unit vue-chat-app.service has finished shutting down
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit vue-chat-app.service has finished shutting down.
Sep 16 16:11:56 lab systemd[1]: Started vue-chat-app.
-- Subject: Unit vue-chat-app.service has finished start-up
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit vue-chat-app.service has finished starting up.
--
-- The start-up result is RESULT.
Sep 16 16:11:56 lab bash[27526]: /bin/bash: bundle: command not found
Sep 16 16:11:56 lab systemd[1]: vue-chat-app.service: Main process exited, code=exited, status=127/n/a
Sep 16 16:11:56 lab systemd[1]: vue-chat-app.service: Failed with result 'exit-code'.
I can run the command /bin/bash -lc 'bundle exec rails s -e production -b 0.0.0.0 -p 3010' just fine in my terminal as the user ubuntu, but for some reason it doesn't work as a systemd service.
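The usual culprit is that an interactive terminal gives bash -l a full login environment (HOME, rbenv/rvm initialization in the profile), while systemd starts from a nearly empty one, so the profile that puts bundle on the PATH may never run. One way to sidestep that, with the shim path below being an assumption to verify on your machine:
su - ubuntu -c 'which bundle'   # find the absolute path your login shell resolves
# then use that path directly in the unit, e.g.:
# ExecStart=/home/ubuntu/.rbenv/shims/bundle exec rails s -e production -b 0.0.0.0 -p 3010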

Warning: Stopping docker.service, but it can still be activated by: docker.socket

I've reinstalled Docker. When I try to start Docker, everything is fine:
# /etc/init.d/docker start
[ ok ] Starting docker (via systemctl): docker.service.
until I stop the Docker service and then try to restart it several times:
# /etc/init.d/docker stop
[....] Stopping docker (via systemctl): docker.serviceWarning: Stopping docker.service, but it can still be activated by:
docker.socket
. ok
Finally, I get this error:
# /etc/init.d/docker start
[....] Starting docker (via systemctl): docker.serviceJob for docker.service failed.
See "systemctl status docker.service" and "journalctl -xe" for details.
failed!
# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Sat 2017-11-25 20:04:20 CET; 2min 4s ago
Docs: https://docs.docker.com
Process: 12845 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=0/SUCCESS)
Main PID: 12845 (code=exited, status=0/SUCCESS)
CPU: 326ms
Nov 25 20:04:18 example.com systemd[1]: Started Docker Application Container Engine.
Nov 25 20:04:18 example.com dockerd[12845]: time="2017-11-25T20:04:18.191949863+01:00" level=inf
Nov 25 20:04:19 example.com systemd[1]: Stopping Docker Application Container Engine...
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.368990531+01:00" level=inf
Nov 25 20:04:19 example.com dockerd[12845]: time="2017-11-25T20:04:19.37953454+01:00" level=info
Nov 25 20:04:20 example.com systemd[1]: Stopped Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Start request repeated too quickly.
Nov 25 20:04:21 example.com systemd[1]: Failed to start Docker Application Container Engine.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Unit entered failed state.
Nov 25 20:04:21 example.com systemd[1]: docker.service: Failed with result 'start-limit-hit'.
I've installed Docker on Debian 9 Stretch.
Can anyone help me get rid of this warning and resolve the error "Failed with result 'start-limit-hit'"?
If Docker is being triggered by the socket, simply stop (and later start) the socket as well:
sudo systemctl stop docker.socket
This is because, in addition to the docker.service unit file, there is a docker.socket unit file, which is used for socket activation. The warning means that if you try to connect to the Docker socket while the Docker service is not running, systemd will automatically start Docker for you.
You can get rid of this by removing /lib/systemd/system/docker.socket; you may also need to remove -H fd:// from the docker.service unit file.
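A less destructive alternative than deleting the socket unit, if the goal is just a clean stop/start cycle, is to manage both units together and clear the rate-limit counter:
sudo systemctl stop docker.socket docker.service
sudo systemctl reset-failed docker.service   # clears the 'start-limit-hit' state
sudo systemctl start docker.service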

systemd init script for resque worker

I have an nginx web server running on an Ubuntu 16.04 server.
Now I am trying to build an init script for a Resque worker and scheduler for a Rails app.
I created a file resque-worker.service in "/etc/systemd/system/" and it looks like this:
[Unit]
Description=resque-worker for pageflow
[Service]
Type=forking
ExecStart=/home/pageflow/pageflow_daad/rake resque:scheduler QUEUE=* RAILS_ENV=production > /home/pageflow/pageflow_daad/log/resqueschedule.log &
[Install]
WantedBy=multi-user.target
For some reason, after executing "systemctl daemon-reload" and "systemctl start name.service", I get this error:
$ systemctl status resque-worker.service
● resque-worker.service - resque-worker for pageflow
Loaded: loaded (/etc/systemd/system/resque-worker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-05-17 15:52:38 CEST; 15s ago
Process: 28096 ExecStart=/home/pageflow/pageflow_daad/rake resque:scheduler QUEUE=* RAILS_ENV=production > /home/pageflow/pageflow_daad/log/resqueschedule.log & (code=exited, status=203/EXEC)
May 17 15:52:38 ostheim systemd[1]: Starting resque-worker for pageflow...
May 17 15:52:38 ostheim systemd[1]: resque-worker.service: Control process exited, code=exited status=203
May 17 15:52:38 ostheim systemd[1]: Failed to start resque-worker for pageflow.
May 17 15:52:38 ostheim systemd[1]: resque-worker.service: Unit entered failed state.
May 17 15:52:38 ostheim systemd[1]: resque-worker.service: Failed with result 'exit-code'.
In this case I used the root path of my Rails app for "/home/pageflow/pageflow_daad/rake".
Before that, when I tried the path of the rake binary itself, I got the error:
May 17 15:30:26 ostheim rake[26846]: rake aborted!
May 17 15:30:26 ostheim rake[26846]: ArgumentError: couldn't find HOME environment -- expanding `~'
I hope someone with more experience in this can help me out.
Thanks in advance and best regards,
Ronald
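For context, status=203/EXEC means systemd could not execute the ExecStart target: /home/pageflow/pageflow_daad/rake is the application directory, not an executable, and systemd does not pass ExecStart through a shell, so the > redirection and the trailing & are handed to the program as literal arguments rather than interpreted. Two quick checks (the user name is a guess from the path):
test -x /home/pageflow/pageflow_daad/rake && echo executable || echo not-executable
su - pageflow -c 'which rake'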
After studying several documentation sites and the documentation itself, I found a way to get this running. Just wanted to post this in case someone finds it helpful:
[Unit]
Description=resque-scheduler for pageflow
[Service]
User=yourUser
WorkingDirectory=/path/to/rails/app
# systemd does not run ExecStart through a shell, so no trailing '&' here
ExecStart=/path/to/executable/rake resque:scheduler
Environment=QUEUE=*
Environment=RAILS_ENV=production
[Install]
WantedBy=multi-user.target
With this script, a
systemctl daemon-reload
followed by a
systemctl start example.service
got the service running, and it runs like a charm.
I think there are two kinds of errors inside the systemd unit. Here is some advice:
1) Let systemd capture your application logs, then use journalctl to look at them:
journalctl -u resque-worker.service
2) Set the environment variables the systemd way:
[Unit]
Description=resque-worker for pageflow
[Service]
Type=forking
ExecStart=/home/pageflow/pageflow_daad/rake resque:scheduler
Environment=QUEUE=*
Environment=RAILS_ENV=production
[Install]
WantedBy=multi-user.target
Then,
systemctl daemon-reload
systemctl restart resque-worker.service
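To confirm the variables actually reached the unit after the reload, systemd can print what it loaded:
systemctl show -p Environment resque-worker.service
journalctl -u resque-worker.service -n 50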

run puma server as a service on centos 7 - no ruby found

There are many things I do not understand, so my question may be silly.
I want to run a Puma Rails server as a systemd service on CentOS 7, using Ruby installed via rvm.
My puma_test.service file is:
[Unit]
Description=Puma application server
After=network.target
[Service]
WorkingDirectory=/var/www/test_app
Environment=RAILS_ENV=development
PIDFile=/var/www/shared/pids/puma.pid
ExecStart=/usr/local/rvm/gems/ruby-2.2.1/gems/bundler-1.9.4/bin/bundle exec puma -e development -b unix:///var/www/shared/pids/puma.sock --pidfile /var/www/shared/pids/puma.pid
[Install]
WantedBy=multi-user.target
but when I run it, it does not work. I get this error (from journalctl):
kwi 18 22:56:15 vps150852.ovh.net systemd[1]: Starting Puma application server...
kwi 18 22:56:15 vps150852.ovh.net systemd[1]: Started Puma application server.
kwi 18 22:56:15 vps150852.ovh.net bundle[2072]: /usr/bin/env: ruby: No such file or directory
kwi 18 22:56:15 vps150852.ovh.net systemd[1]: puma_test.service: main process exited, code=exited, status=127/n/a
kwi 18 22:56:15 vps150852.ovh.net systemd[1]: Unit puma_test.service entered failed state.
When I run the following in /var/www/test_app:
/usr/local/rvm/gems/ruby-2.2.1/gems/bundler-1.9.4/bin/bundle exec puma -e development -b unix:///var/www/shared/pids/puma.sock --pidfile /var/www/shared/pids/puma.pid
everything works fine, but I am probably doing something wrong
Looks like you need to load rvm when you run your task: systemd does not run your command through a login bash shell, so your bashrc is never loaded and rvm's ruby never ends up on the PATH.
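A sketch of the two usual fixes, reusing the command from the question (verify the rvm paths on your machine; the wrapper path is an assumption):
# Option 1: run through a login shell so rvm's profile scripts are loaded
ExecStart=/bin/bash -lc 'exec bundle exec puma -e development -b unix:///var/www/shared/pids/puma.sock --pidfile /var/www/shared/pids/puma.pid'
# Option 2: use an rvm wrapper binary, which pins the right ruby without a shell
# ExecStart=/usr/local/rvm/wrappers/ruby-2.2.1/bundle exec puma -e development -b unix:///var/www/shared/pids/puma.sock --pidfile /var/www/shared/pids/puma.pid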
