Using Chef to restart Mongrel cluster via init.d script - ruby-on-rails

I'm using Chef to manage deployments of a Rails application running with Mongrel cluster.
My init.d file is very simple. Here's the case for a restart:
restart)
    sudo su -l myuser -c "cd /path/to/myapp/current && mongrel_rails cluster::restart"
    ;;
I can run service myapp restart as root with no issue. I can run mongrel_rails cluster::restart as myuser with no issue.
However, when I do a deployment through Chef, the tmp/pids/mongrel.port.pid files don't get cleaned up (causing all future restarts to fail).
Chef is simply doing the following to perform the restart:
service "myapp" do
action :restart
end
The init.d script is definitely being called as the logs all have the expected output (with the exception of exploding on the pid files, of course).
What am I missing?

As a workaround, I simply kill the mongrel processes before the init.d script is called. This still allows the init.d script to be used to start/stop the processes directly on the server, but handles the broken case where mongrel is already running and Chef tries to restart the service. Chef handles starting the service correctly as long as the .pid files don't already exist.
To do that, I included the following immediately before the service "myapp" do call:
ruby_block "stop mongrel" do
block do
ports = ["10031", "10032", "10033"].each do |port|
path = "/path/to/myapp/shared/pids/mongrel.#{port}.pid"
if File.exists?(path)
file = File.open(path, "r")
pid = file.read
file.close
system("kill #{pid}")
File.delete(path) if File.exists?(path)
end
end
end
end
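For reference, here is a minimal sketch of how the two resources sit together in the recipe. Chef runs resources in declaration order, so the cleanup block executes before the restart (same paths and ports as above; adjust them to your deployment):
ruby_block "stop mongrel" do
  block do
    # Kill any running mongrels and remove their stale pid files
    ["10031", "10032", "10033"].each do |port|
      path = "/path/to/myapp/shared/pids/mongrel.#{port}.pid"
      next unless File.exist?(path)
      system("kill #{File.read(path)}")
      File.delete(path) if File.exist?(path)
    end
  end
end
service "myapp" do
  action :restart
end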

Related

How to use supervisor to start/stop uwsgi (4 processes)

This is my CentOS systemd service file for uwsgi:
[Unit]
Description=uWSGI for uwsgi
After=syslog.target
[Service]
Restart=always
ExecStart=/usr/share/nginx/ENV/bin/uwsgi --ini /usr/share/nginx/ENV/config/uwsgi.ini
StandardError=syslog
KillSignal=SIGQUIT
Type=forking
PIDFile=/var/run/uwsgi.pid
[Install]
WantedBy=multi-user.target
I want to convert this to use supervisor to start/stop the uwsgi service, but I still haven't found a solution. Please help.
This is my supervisor.conf:
[program:wiarea-positioning]
command = /usr/share/nginx/ENV/bin/uwsgi --ini /usr/share/nginx/ENV/config/uwsgi.ini
stdout_logfile=/var/log/uwsgi.log
stderr_logfile=/var/log/uwsgi.log
;stopasgroup = true
stopsignal=QUIT
This is my uwsgi.ini
[uwsgi]
chdir = /usr/share/nginx/ENV/mysite
env = DJANGO_SETTINGS_MODULE=mysite.settings
module = mysite.wsgi:application
# the virtualenv
home = /usr/share/nginx/ENV
master = true
thunder-lock=true
processes = 4
pidfile = /var/run/uwsgi.pid
socket = 127.0.0.1:8001
daemonize = /var/log/uwsgi.log
vacuum = true
I think your problem (at least one of them) is this uwsgi.ini line:
daemonize = /var/log/uwsgi.log
Remember that supervisor basically just runs your command= command from the command line, and waits for it to exit. If it exits, supervisor runs the command again.
The uwsgi daemonize option breaks this, because it causes the main uwsgi command to start a background process and immediately exit. Supervisor doesn't know about the background process, so it assumes the command failed and tries to restart it repeatedly. You can confirm this is what's happening by looking at the log files in the /var/log/supervisor/ folder.
So, if you want to run uwsgi with supervisor, you need to remove the daemonize option. After that, you can try just running the command from the command line to confirm that uwsgi starts and stays in the foreground.
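To make that concrete, here is a sketch of the uwsgi.ini from above with only the daemonize line removed, so the master stays in the foreground and logging is left to the stdout_logfile/stderr_logfile settings in supervisor.conf:
[uwsgi]
chdir = /usr/share/nginx/ENV/mysite
env = DJANGO_SETTINGS_MODULE=mysite.settings
module = mysite.wsgi:application
# the virtualenv
home = /usr/share/nginx/ENV
master = true
thunder-lock=true
processes = 4
pidfile = /var/run/uwsgi.pid
socket = 127.0.0.1:8001
vacuum = true
With that in place, supervisorctl reread, supervisorctl update and supervisorctl start wiarea-positioning (or stop/restart) take over the role of the systemd unit.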

Sidekiq not running at startup of passenger server in Rails 4.1.6 app

I need Sidekiq to run once I start the server on our staging application. We moved to a different server instance on Rackspace to better mirror our production conditions.
The application is started with
passenger start --nginx-config-template nginx.conf.erb --address 127.0.0.1 -p 3002 --daemonize
The sidekiq files are as follows:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid root
setgid root
respawn
respawn limit 3 30
# TERM is sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM
instance $index
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv
exec /bin/bash <<EOT
# use syslog for logging
exec &> /dev/kmsg
# pull in system rbenv
export HOME=/root
source /etc/profile.d/rbenv.sh
cd /srv/monolith
exec bin/sidekiq -i ${index} -e staging
EOT
end script
and workers.conf
# /etc/init/workers.conf - manage a set of Sidekiqs
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See sidekiq.conf for how to manage a single Sidekiq instance.
#
# Use "stop workers" to stop all Sidekiq instances.
# Use "start workers" to start all instances.
# Use "restart workers" to restart all instances.
# Crazy, right?
#
description "manages the set of sidekiq processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Sidekiq processes you want
# to run on this machine
env NUM_WORKERS=2
pre-start script
for i in `seq 0 $((${NUM_WORKERS} - 1))`
do
start sidekiq index=$i
done
end script
When I go into the server and try service sidekiq start index=0 or service sidekiq status index=0, it can't find the service, but if I run bundle exec sidekiq -e staging, sidekiq starts up and runs through the job queue without problem. Unfortunately, as soon as I close the ssh session, sidekiq finds a way to kill itself.
How can I ensure sidekiq runs when I start the server, and that it restarts itself if something goes wrong, as the Upstart configuration is supposed to provide?
Thanks.
In order to run Sidekiq as a service, you should put a script called "sidekiq" in /etc/init.d, not /etc/init.
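The answer stops short of the script itself, so here is a minimal sketch of what an /etc/init.d/sidekiq wrapper could look like. It reuses the rbenv setup and /srv/monolith path from the question; the -d/-P/-L daemonization flags and sidekiqctl are assumptions that only hold for older Sidekiq releases that still ship them:
#!/bin/bash
# /etc/init.d/sidekiq -- minimal sketch, not a full LSB init script.
# Paths are taken from the question; adjust them for your deployment.
APP_DIR=/srv/monolith
PIDFILE=$APP_DIR/tmp/pids/sidekiq.pid
LOGFILE=$APP_DIR/log/sidekiq.log
case "$1" in
  start)
    # pull in system rbenv, as in the Upstart job above
    export HOME=/root
    source /etc/profile.d/rbenv.sh
    cd "$APP_DIR" && bundle exec sidekiq -e staging -d -P "$PIDFILE" -L "$LOGFILE"
    ;;
  stop)
    cd "$APP_DIR" && bundle exec sidekiqctl stop "$PIDFILE" 10
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
Register it with update-rc.d sidekiq defaults to get start-at-boot; note that a plain init.d script does not give you the automatic respawn that the Upstart jobs above provide.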

Apache + passenger - /tmp permission denied

I am trying to run Ruby on Rails under Passenger with Apache 2 on Fedora 19, and I get this error in the log:
[Tue Feb 25 09:37:52.367683 2014] [passenger:error] [pid 2779] ***
Passenger could not be initialized because of this error: Unable to
start the Phusion Passenger watchdog because it encountered the
following error during startup: Cannot change the directory
'/tmp/passenger.1.0.2779/generation-1/buffered_uploads' its UID to 48
and GID to 48: Operation not permitted (errno=1)
That directory (/tmp/passenger.1.0.2779) doesn't even exist. I think the problem is with SELinux. I've spent about 4 hours trying to solve it. Httpd is running under user apache and group apache. I tried:
cat /var/log/audit/audit.log | grep passenger | audit2allow -M passenger
semodule -i passenger.pp
but still nothing.
In your case, you should switch SELinux into Permissive mode first, then capture the audit log while starting Apache and running your application.
Once you can load the home page of your application, you can build a custom policy module from those logs.
Switch SELinux into Permissive mode and clean audit.log
]# setenforce 0
]# rm /var/log/audit/audit.log
]# service auditd restart
Restart Apache
]# service httpd restart
Try to open your application with a web browser
This should give more information about what is happening while your application is running.
Make a custom policy module to allow these actions
]# mkdir work
]# cd work
]# grep httpd /var/log/audit/audit.log | audit2allow -M passenger
]# ls
passenger.pp passenger.te
Load the passenger policy module into the current SELinux policy using the 'semodule' command:
]# semodule -i passenger.pp
]# setenforce 1
Restart Apache
]# service httpd restart
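If the application still fails once SELinux is back in enforcing mode, the same tools can be re-run to check for remaining passenger denials and rebuild the module (this just repeats the steps above, it is not a new procedure):
]# grep denied /var/log/audit/audit.log | grep passenger
]# grep httpd /var/log/audit/audit.log | audit2allow -M passenger
]# semodule -i passenger.pp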
References:
http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01
I ran into a similar error, with a startup failure about being unable to create a directory that did not exist (logs, not tmp, but the same sort of thing). I, too, battled with it for an hour and couldn't make sense of it. I created/deleted/chmod'ed the directory in many ways without success.
The fix for me was to change the parameters to passenger-start. Initially, my Docker container started passenger with:
exec bundle exec passenger start --auto --disable-security-update-check --min-instances 20 --max-pool-size 20 --max-request-queue-size 500
I removed all parameters, leaving just this:
exec bundle exec passenger start
At this point, passenger could create the log folder and file, and all was well. I could have restored the params at this point, but we decided they were not needed for the development environment so left them out moving ahead.
In hindsight, I have a hunch that I deleted the log directory while a file in it was still open, and the file system persisted that condition in some way. But that's just a hunch. Perhaps simply rebooting my Mac would have fixed it...

Unicorn failing to spawn workers on USR2 signal

I'm sending a USR2 signal to the master process in order to achieve zero downtime deploy with unicorn. After the old master is dead, I'm getting the following error:
adding listener failed addr=/path/to/unix_socket (in use)
unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize':
Address already in use - /path/to/unix_socket (Errno::EADDRINUSE)
The old master is killed in the before_fork block in the unicorn.rb config file. The process is started via upstart without the daemon (-D) option.
Any idea what's going on?
Well, turns out you have to run in daemonized mode (-D) if you want to be able to do zero downtime deployment. I changed a few things in my upstart script and now it works fine:
setuid username
pre-start exec unicorn_rails -E production -c /path/to/app/config/unicorn.rb -D
post-stop exec kill `cat /path/to/app/tmp/pids/unicorn.pid`
respawn
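The question mentions that the old master is killed from the before_fork block; for reference, this is a minimal sketch of the usual pattern in unicorn.rb (the .oldbin suffix is Unicorn's convention for the re-executed master's old pid file; adjust the pid path to your own setting):
before_fork do |server, worker|
  # Quit the old master once the new one starts forking workers
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # old master is already gone
    end
  end
end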

How to restart individual servers in thin cluster in rails 3.1 app

I have a thin cluster set up to start 3 servers:
/etc/thin/myapp.yml
...
wait: 30
servers: 3
daemonize: true
...
and then I use thin restart -C /etc/thin/myapp.yml to restart. However, I would like to restart the servers one at a time, to reduce downtime.
Is there a way to restart each server by pid number or location for example?
There is something better for you: try the --onebyone option.
You may also add the following line to your config file:
onebyone: true
Afterwards you will be able to restart your thin cluster without any downtime.
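For example, with the config path from the question:
thin restart -C /etc/thin/myapp.yml --onebyone
or, once onebyone: true is set in /etc/thin/myapp.yml, a plain thin restart -C /etc/thin/myapp.yml will roll the servers one at a time.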
I know the question has been answered, but I'd like to add the -o option to the mix.
So
thin restart -C /etc/thin/myapp.yml -o 3000
This will restart only the server running on port 3000. If, say, you have two other servers running on 3001 and 3002, they'll be left untouched.
-o works with start and stop commands too.
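For instance:
thin stop -C /etc/thin/myapp.yml -o 3001
thin start -C /etc/thin/myapp.yml -o 3001
stops and then starts just the server on port 3001, leaving the others running.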
