I am using monit to monitor my web service. Everything worked fine until the monit process itself was killed by the system (the monit log showed the issue).
To have monit launched automatically at startup and restarted on failure, I added an upstart config for it (I am running Ubuntu 14.04) that looks like this:
# This is an upstart script to keep monit running.
# To install, disable the old way of doing things:
#
# /etc/init.d/monit stop && update-rc.d -f monit remove
#
# then put this script here:
#
# /etc/init/monit.conf
#
# and reload upstart configuration:
#
# initctl reload-configuration
#
# You can manually start and stop monit like this:
#
# start monit
# stop monit
#
description "Monit service manager"
limit core unlimited unlimited
start on runlevel [2345]
stop on starting rc RUNLEVEL=[016]
expect daemon
respawn
exec /usr/bin/monit -c /etc/monitrc
pre-stop exec /usr/bin/monit -c /etc/monitrc quit
When I reboot the system monit is not running.
$ sudo monit status
monit: Status not available -- the monit daemon is not running
How can I configure upstart to keep monit running and monitored?
I had the wrong path to the config file in the exec and pre-stop commands. The correct lines are:
exec /usr/bin/monit -c /etc/monit/monitrc
pre-stop exec /usr/bin/monit -c /etc/monit/monitrc quit
This starts monit as a daemon and restarts it when I kill it:
$ ps aux | grep monit
root 2173 0.0 0.1 104348 1332 ? Sl 04:13 0:00 /usr/bin/monit -c /etc/monit/monitrc
$ sudo kill -9 2173
$ ps aux | grep monit
root 2184 0.0 0.1 104348 1380 ? Sl 04:13 0:00 /usr/bin/monit -c /etc/monit/monitrc
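You can also confirm that upstart itself is tracking the respawned process. The output below is illustrative:
$ status monit
monit start/running, process 2184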
I don't know exactly how to reproduce this. It occurs on our production servers, but not every time.
After deploying, we issue bundle exec pumactl -S pids/puma.state -F config/puma.rb restart (on Ubuntu 14.04 and Ubuntu 16.04). But from time to time one of the puma servers hangs on restart for a very long time, and we can only kill -9 it and start it again. There is no clear pattern of when, or which server, will get stuck.
For example, 15 minutes after the restart command was issued, I still see:
$ ps -ef | grep puma
deployer 2535 6533 99 10:11 ? 08:33:16 puma: cluster worker 3: 6533
deployer 2910 6533 99 10:14 ? 08:31:22 puma: cluster worker 2: 6533
deployer 6533 1 0 01:08 ? 00:00:23 puma 3.8.2 (tcp://0.0.0.0:2801)
deployer 9973 9683 0 18:47 pts/0 00:00:00 grep --color=auto puma
If I run strace -p 6533 -q -f to attach to the puma process and look inside, I get this: https://gist.github.com/larryzhao/446234a3af91bec917119494f9bc2384
I just don't know where to look.
I am running ruby 2.3.4p301 (2017-03-30 revision 58214) [x86_64-linux] with Rails 5.0.5 and Puma 3.8.2.
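For anyone else debugging a hang like this, a couple of standard Linux tools besides strace can show where a stuck process is sitting (using the master PID 6533 from the ps output above):
$ sudo gdb -p 6533 -batch -ex 'thread apply all bt'   # C-level backtrace of every thread
$ ls -l /proc/6533/fd                                  # file descriptors the process holds open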
I deployed my project with Capistrano, but puma does not start after a server reboot.
I have to run cap production puma:start every time.
I tried this:
/etc/init.d/myscript
#!/bin/sh
/etc/init.d/puma_start.sh
puma_start.sh
#!/bin/bash
puma -C /root/project/shared/puma.rb
but I get this error:
/usr/local/rvm/rubies/ruby-2.3.3/lib/ruby/site_ruby/2.3.0/rubygems.rb:270:in `find_spec_for_exe': can't find gem puma (>= 0.a) (Gem::GemNotFoundException)
from /usr/local/rvm/rubies/ruby-2.3.3/lib/ruby/site_ruby/2.3.0/rubygems.rb:298:in `activate_bin_path'
from /usr/local/rvm/gems/ruby-2.3.3@project/bin/puma:22:in `<main>'
from /usr/local/rvm/gems/ruby-2.3.3@project/bin/ruby_executable_hooks:15:in `eval'
from /usr/local/rvm/gems/ruby-2.3.3@project/bin/ruby_executable_hooks:15:in `<main>'
If I run puma -C /root/project/shared/puma.rb in the console as root@host, it works fine.
I think I don't have the correct path to the puma gem.
How can I make puma start automatically after a server reboot?
Thank you.
Beginning with Ubuntu 16.04, the recommended approach is systemd (systemctl); before that I used upstart.
I wrote these instructions for myself; maybe they will be useful to someone.
https://gist.github.com/DSKonstantin/708f346f1cf62fb6d61bf6592e480781
Instructions:
Article: https://github.com/puma/puma/blob/master/docs/systemd.md
#1 nano /etc/systemd/system/puma.service
#2 paste in the puma.service file contents below
Commands:
# After installing or making changes to puma.service
systemctl daemon-reload
# Enable so it starts on boot
systemctl enable puma.service
# Initial start up.
systemctl start puma.service
# Check status
systemctl status puma.service
# A normal restart. Warning: listener sockets will be closed
# while a new puma process initializes.
systemctl restart puma.service
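If the service does not come up, the systemd journal usually shows why:
# Inspect the unit's logs
journalctl -u puma.service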
puma.service file
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=<path_to_project>/current
Environment=SECRET_KEY_BASE='<SECRET KEY>'
ExecStart=/usr/local/rvm/bin/rvm <ruby_version>@<gemset_name> do bundle exec puma -C <path_to_project>/shared/puma.rb --daemon
ExecStop=/usr/local/rvm/bin/rvm <ruby_version>@<gemset_name> do bundle exec pumactl -S <path_to_project>/shared/tmp/pids/puma.state -F <path_to_project>/shared/puma.rb stop
#Restart=always
Restart=on-failure
[Install]
WantedBy=multi-user.target
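Note that with --daemon the rvm wrapper exits once puma forks, so Type=simple leaves systemd tracking a process that has already exited, and Restart=on-failure cannot take effect. If systemctl status reports the unit inactive even though puma is running, a forking setup may help (an untested sketch; the pid path must match the one in your puma.rb):
# Alternative [Service] settings for a daemonized puma (sketch)
Type=forking
PIDFile=<path_to_project>/shared/tmp/pids/puma.pid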
I found this: http://codepany.com/blog/rails-5-puma-capistrano-nginx-jungle-upstart/
It helped me:
$ cd ~
$ wget https://raw.githubusercontent.com/puma/puma/master/tools/jungle/upstart/puma-manager.conf
$ wget https://raw.githubusercontent.com/puma/puma/master/tools/jungle/upstart/puma.conf
Open the downloaded puma.conf file and set your system's user account for setuid and setgid (in our case we use the root account, but a less-privileged account is recommended):
vim puma.conf
setuid root
setgid root
Copy the downloaded upstart files to /etc/init and create another puma.conf:
$ sudo cp puma.conf puma-manager.conf /etc/init
$ sudo touch /etc/puma.conf
Open /etc/puma.conf and add the path to your app:
/root/name_of_your_app/current
Open /etc/init/puma.conf and find the line that looks like
exec bundle exec puma -C /root/project/shared/puma.rb
then replace the path with the path to your puma.rb file.
Thank you
There's actually a pretty straightforward way of diagnosing and solving this problem:
1. Locate your rvm executable.
which rvm
In my case it was:
/usr/share/rvm/bin/rvm
...but yours may be different! So you have to do that first to find out where your executable is.
2. Find out what Ruby version your server is running.
ruby --version
For me it was 2.6.2. All you need is that version number. Nothing else.
3. Try something like what Konstantin recommended but do this instead:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/var/www/your/current
ExecStart=/usr/share/rvm/bin/rvm 2.6.2 do bundle exec pumactl -S /var/www/your/shared/tmp/pids/puma.state -F /var/www/your/shared/puma.rb start
ExecStop=/usr/share/rvm/bin/rvm 2.6.2 do bundle exec pumactl -S /var/www/your/shared/tmp/pids/puma.state -F /var/www/your/shared/puma.rb stop
# Restart=always
Restart=on-failure
[Install]
WantedBy=multi-user.target
4. Then it's a simple matter of:
systemctl daemon-reload
systemctl enable puma.service
systemctl start puma.service
systemctl status puma.service
Then that's it! Next time you boot your server, puma should boot fine.
I am using the sidekiq gem to run API calls in the background. I started sidekiq as a daemon process like this:
bundle exec sidekiq -d
Now I have made some changes in my method, so I want to restart sidekiq. I tried to kill sidekiq using the command below:
kill -9 process_id
but it's not working. I want to know the command to restart the sidekiq process. If you have any idea, please share it with me.
I also tried the command below:
sidekiqctl stop /path/to/pid file/pids/sidekiq.pid
Start:
$ bundle exec sidekiq -d -P tmp/sidekiq.pid -L log/sidekiq.log
where -d daemonizes, -P sets the pid file, and -L sets the log file.
Stop:
$ bundle exec sidekiqctl stop tmp/sidekiq.pid 0
Sidekiq shut down gracefully.
where 0 is the number of seconds to wait for Sidekiq to exit.
So after you find your PID, you can use the commands below. The first will stop the workers from getting new jobs and will let existing jobs complete:
kill -USR1 [PID]
after that, you can kill the process using:
kill -TERM [PID]
Also, there is a page on the sidekiq wiki about this, called Signals.
To find the PID you can use:
ps aux | grep sidekiq
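Putting it together, a quiet-then-stop sequence might look like this. Note that USR1 is the quiet signal for older Sidekiq versions; Sidekiq 5+ uses TSTP instead, so check the Signals page for your version:
# assuming sidekiq was started with -P tmp/sidekiq.pid
pid=$(cat tmp/sidekiq.pid)
kill -USR1 "$pid"   # quiet: finish running jobs, take no new ones (TSTP on Sidekiq 5+)
sleep 30            # give in-flight jobs time to finish
kill -TERM "$pid"   # orderly shutdown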
To keep the daemon running you should definitely have some good error handling in the HardWorker classes, but you can also use the command below to restart the sidekiq runners if they are not found in the system processes.
x=`ps aux | grep sidekiq | grep -v grep | awk '{print $2}'`; [ "$x" == "" ] && cd /path/to/deploy && bundle exec sidekiq -d -L /path/to/deploy/log/sidekiq.log -C /path/to/deploy/config/sidekiq.yml -e production
This basically looks for the PID using ps aux | grep sidekiq | grep -v grep | awk '{print $2}' and then stores it in variable x. Then, if it's empty, it will run a daemonized sidekiq process.
You can stick this guy into a cron job or something. But if your jobs are failing continually, you'll definitely want to figure out why.
EDIT: Added path to deploy from cron.
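For example, a crontab entry along these lines would re-check every five minutes (the ^sidekiq pattern matches the process title that sidekiq sets; the deploy path is a placeholder):
*/5 * * * * /bin/bash -l -c 'pgrep -f "^sidekiq " >/dev/null || (cd /path/to/deploy && bundle exec sidekiq -d -L log/sidekiq.log -e production)'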
Type the following command:
ps -ef | grep sidekiq
This gives the process id and other details of the sidekiq process running in the background.
Copy the process id and use the following command:
kill process_id
Use the following command to start sidekiq in the background again with the -d option:
bundle exec sidekiq -d -L log/sidekiq.log
systemctl {start,stop,restart} sidekiq
Use the command above with the appropriate action to start, stop, or restart sidekiq on the server,
e.g. systemctl restart sidekiq
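For that command to work there must be a unit file at /etc/systemd/system/sidekiq.service. The sidekiq repository ships an example; a minimal sketch (user and paths are placeholders) could look like:
[Unit]
Description=sidekiq background worker
After=network.target
[Service]
Type=simple
User=deployer
WorkingDirectory=/path/to/deploy
# run in the foreground; systemd supervises the process, so no -d here
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production'
Restart=on-failure
[Install]
WantedBy=multi-user.target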
I am running my Rails application using Ruby Enterprise Edition with unicorn as the app server. I run this command:
bundle exec unicorn -D -c /home/ubuntu/apps/st/config/unicorn.rb
I need this command to run soon after the system reboots or starts. I am running the app on an Ubuntu 10.04 LTS EC2 instance. I tried a couple of examples mentioned on other sites, but they are not working for me. Any pointers?
Try it as an upstart job. To do so, create a myapp.conf file in the directory /etc/init/ with the contents below:
description "myapp server"
start on runlevel [23]
stop on shutdown
exec sudo -u myuser sh -c "cd /path/to/my/app && bundle exec unicorn -D -c /home/ubuntu/apps/st/config/unicorn.rb"
respawn
After that, you should be able to start/stop/restart your app with the commands below:
start myapp
stop myapp
restart myapp
Use ps aux | grep myapp to check if your app is running.
You can use this file as a template: set the appropriate paths mentioned in it, make it executable, and symlink it into /etc/init.d/my_unicorn_server. Now you can start the server using:
sudo service my_unicorn_server start
Then you can run:
sudo update-rc.d my_unicorn_server defaults
to start the unicorn server automatically on system reboot.
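Concretely, the setup might look like this (the template path is hypothetical):
chmod +x /home/ubuntu/unicorn_init.sh
sudo ln -s /home/ubuntu/unicorn_init.sh /etc/init.d/my_unicorn_server
sudo service my_unicorn_server start
sudo update-rc.d my_unicorn_server defaults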
In my case I just wanted it quick, so I placed the startup command in /etc/rc.local as below. Note that I'm using RVM.
# By default this script does nothing.
cd <your project dir>
/usr/local/rvm/gems/ruby-2.2.1/wrappers/bundle exec unicorn -c <your project dir>/config/unicorn.conf -D
test -e /etc/ssh/ssh_host_dsa_key || dpkg-reconfigure openssh-server
exit 0
Make sure your startup command is above the exit 0 line. After you reboot, check whether it is running by hitting your application's URL directly or with ps aux | grep unicorn.
Note: previously I used Phusion Passenger, but I had trouble seeing its error log, so I moved back to unicorn. I also tried @warantesbr's answer without success, which I guess fails because my whole environment was set up using root access.
If you are using a unicorn_init script, you can configure a cron job to start the unicorn server on reboot:
crontab -e
and add
@reboot /bin/bash -l -c 'service unicorn_<your service name> start >> /<path to log file>/cron.log 2>&1'
Does anyone have any suggestions on how I might achieve a rolling restart of a process group using monit?
Thanks in advance,
fturtle
I'm not sure which server you are talking about, but I can give an example for thin, which supports rolling restarts itself (via the option onebyone: true).
So for monit you can use something like:
if ... then exec '/path/to/thin_restart.sh'
And thin_restart.sh would be something like:
source /path/to/scripts/rvm
rvm use some_ruby@your_gemset
thin -C thin.yml restart
And the contents of thin.yml would look like:
port: 1337
pid: tmp/pids/thin.pid
rackup: /path/to/config.ru
daemonize: true
servers: 2
onebyone: true
There are other ways to fine-tune these restarts based on pids. You can monitor pid files and restart only specific thin processes based on conditions, e.g.:
check process app-1337
  with pidfile /path/to/app.1337.pid
  start program = "thin -d -p 1337 start"
  stop program = "thin -d -p 1337 -P /path/to/thin.1337.pid stop"
  if cpu usage > 50% then restart
check process app-1338
  with pidfile /path/to/app.1338.pid
  start program = "thin -d -p 1338 start"
  stop program = "thin -d -p 1338 -P /path/to/thin.1338.pid stop"
  if cpu usage > 50% then restart
The other way would be to use the groups feature monit provides.
Extending the example above:
check process app-1337
  with pidfile /path/to/app.1337.pid
  group thin
  group thin-odd
  start program = "thin -d -p 1337 start"
  stop program = "thin -d -p 1337 -P /path/to/thin.1337.pid stop"
  if cpu usage > 50% then restart
check process app-1338
  with pidfile /path/to/app.1338.pid
  group thin
  group thin-even
  start program = "thin -d -p 1338 start"
  stop program = "thin -d -p 1338 -P /path/to/thin.1338.pid stop"
  if cpu usage > 50% then restart
check process app-1339
  with pidfile /path/to/app.1339.pid
  group thin
  group thin-odd
  start program = "thin -d -p 1339 start"
  stop program = "thin -d -p 1339 -P /path/to/thin.1339.pid stop"
  if cpu usage > 50% then restart
check process app-1340
  with pidfile /path/to/app.1340.pid
  group thin
  group thin-even
  start program = "thin -d -p 1340 start"
  stop program = "thin -d -p 1340 -P /path/to/thin.1340.pid stop"
  if cpu usage > 50% then restart
So now you can do the following to restart them all:
monit -g thin restart
Or, to achieve a sort of rolling restart, restart the odd ones and then the even ones.
To restart only the odd ones:
monit -g thin-odd restart
and to restart the even ones:
monit -g thin-even restart
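If you want a single command to drive the rolling restart, a small wrapper script can sequence the two groups (a sketch; tune the sleep to however long your instances need to come back up):
#!/bin/sh
# rolling restart: odd instances first, then even
monit -g thin-odd restart
sleep 30   # wait for the odd instances to come back up and pass their checks
monit -g thin-even restart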