In /etc/thin/ I've got several yml files. When I run service thin stop -C /etc/thin/app.yml thin stops all applications, instead of only the one I specified.
How do I get thin to stop/start only the specified application?
UPDATE: Hmm, in /etc/init.d/thin there's this: $DAEMON restart --all $CONFIG_PATH. That explains a lot. Are there smarter init.d scripts? This is my script:
https://gist.github.com/1003131
See also:
Running Rails apps with thin as a service
You have to edit /etc/init.d/thin, adding a new action or modifying the "restart" action.
As you can see, --all $CONFIG_PATH sends the command to all thin instances.
Paste the init script somewhere and we can find a decent solution ;)
UPDATE:
Try adding the following lines just below this existing action:
restart)
    $DAEMON restart --all $CONFIG_PATH
    ;;
restart-single)
    $DAEMON restart -C $2
    ;;
stop-single)
    $DAEMON stop -C $2
    ;;
I haven't tried it, but it should work well. This is a really simple solution (no error checking); we've added two new actions that are called as:
service thin restart-single /etc/thin/your_app.yml
or
service thin stop-single /etc/thin/your_app.yml
let me know if it works ;)
cheers,
A.
I propose another solution (which I think fits thin better):
Replace the content of your /etc/init.d/thin file with my fixed version:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          thin
# Required-Start:    $local_fs $remote_fs
# Required-Stop:     $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      S 0 1 6
# Short-Description: thin initscript
# Description:       thin
### END INIT INFO

# Original author: Forrest Robertson
# Do NOT "set -e"

DAEMON=/usr/local/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# With no extra arguments, operate on every config in /etc/thin;
# with "-C <file>" ($2 and $3), operate on a single instance.
if [ "X$2" = X ] || [ "X$3" = X ]; then
  INSTANCES="--all $CONFIG_PATH"
else
  INSTANCES="-C $3"
fi

case "$1" in
  start)
    $DAEMON start $INSTANCES
    ;;
  stop)
    $DAEMON stop $INSTANCES
    ;;
  restart)
    $DAEMON restart $INSTANCES
    ;;
  *)
    echo "Usage: $SCRIPT_NAME {start|stop|restart} (-C config_file.yml)" >&2
    exit 3
    ;;
esac

:
Use service thin restart -C /etc/thin/my_website.yml. The same syntax works with the start, restart and stop commands. A plain thin restart (or start or stop, of course) would still affect all the registered instances.
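With the fixed script in place, per-instance control looks like this (my_website.yml standing in for whatever config you have under /etc/thin):

service thin restart -C /etc/thin/my_website.yml   # restarts just this app
service thin restart                               # still restarts every app in /etc/thin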
This is weird. I submitted a patch to the init script that ships with the gem itself, so that future installations would allow restarting a single instance:
restart-file)
    $DAEMON restart -C $2
    ;;
but the gem owner refused the merge and said you can use thin start -C /path/, which is weird, because I've tried that repeatedly and the script itself uses --all, so no single config is accepted.
I've also tried doing what he said, and as expected it restarted everything, since the script passes --all. Can anyone shed more light on this? https://github.com/macournoyer/thin/pull/176
Can you help me figure out why I sometimes (about 50:50) get:
webkit_server.NoX11Error: Cannot connect to X. You can try running with xvfb-run.
When I start the script in parallel as:
xvfb-run -a python script.py
You can reproduce this yourself like so:
for ((i=0; i<10; i++)); do
  xvfb-run -a xterm &
done
Of the 10 instances of xterm this starts, 9 of them will typically fail, exiting with the message Xvfb failed to start.
Looking at xvfb-run 1.0, it operates as follows:
# Find a free server number by looking at .X*-lock files in /tmp.
find_free_servernum() {
    # Sadly, the "local" keyword is not POSIX. Leave the next line commented in
    # the hope Debian Policy eventually changes to allow it in /bin/sh scripts
    # anyway.
    #local i

    i=$SERVERNUM
    while [ -f /tmp/.X$i-lock ]; do
        i=$(($i + 1))
    done
    echo $i
}
This is very bad practice: If two copies of find_free_servernum run at the same time, neither will be aware of the other, so they both can decide that the same number is available, even though only one of them will be able to use it.
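You can see the collision directly; a quick sketch, assuming you paste find_free_servernum from the snippet above into a bash shell (SERVERNUM is the starting display number that xvfb-run normally supplies):

SERVERNUM=99
( echo "A chose $(find_free_servernum)" ) &
( echo "B chose $(find_free_servernum)" ) &
wait
# Both print the same number: the function only checks for lock files,
# it never creates one, so concurrent callers happily pick the same display.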
So, to fix this, let's write our own code to find a free display number, instead of assuming that xvfb-run -a will work reliably:
#!/bin/bash
# allow settings to be updated via environment
: "${xvfb_lockdir:=$HOME/.xvfb-locks}"
: "${xvfb_display_min:=99}"
: "${xvfb_display_max:=599}"

# assuming only one user will use this, let's put the locks in our own home directory
# avoids vulnerability to symlink attacks.
mkdir -p -- "$xvfb_lockdir" || exit

i=$xvfb_display_min     # minimum display number
while (( i < xvfb_display_max )); do
  if [ -f "/tmp/.X$i-lock" ]; then               # still avoid an obvious open display
    (( ++i )); continue
  fi
  exec 5>"$xvfb_lockdir/$i" || continue          # open a lockfile
  if flock -x -n 5; then                         # try to lock it
    exec xvfb-run --server-num="$i" "$@" || exit # if locked, run xvfb-run
  fi
  (( i++ ))
done
If you save this script as xvfb-run-safe, you can then invoke:
xvfb-run-safe python script.py
...and not worry about race conditions so long as no other users on your system are also running xvfb.
This can be tested like so:
for ((i=0; i<10; i++)); do xvfb-run-safe xchat & done
...in which case all 10 instances correctly start up and run in the background, as opposed to:
for ((i=0; i<10; i++)); do xvfb-run -a xchat & done
...where, depending on your system's timing, nine out of ten will (typically) fail.
This question was asked in 2015.
In my version of xvfb (2:1.20.13-1ubuntu1~20.04.2), this problem has been fixed.
xvfb-run now looks at /tmp/.X*-lock to find an available display number and then runs Xvfb; if Xvfb fails to start, it picks another number and retries, up to 10 times.
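To check what your own system ships, something like this works on Debian/Ubuntu (the grep is only a heuristic; the exact wording of the retry loop varies between versions):

dpkg -s xvfb | grep -i '^Version'           # xvfb-run is packaged with xvfb
grep -n -i 'retr' "$(command -v xvfb-run)"  # look for the retry logic in the script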
My goal is zero-downtime deployments for an ecommerce app, and I'm trying to do this in the best way possible.
I'm doing this on an nginx/gunicorn/django setup, as well as an nginx/unicorn/rails setup on a separate server.
My strategy is to set preload_app=true in my gunicorn.py/unicorn.rb file, then reload by sending a USR2 signal to the PID running the server. This forks the process and its children, and a pre_fork/before_fork hook can pick up on this and send a subsequent QUIT signal to the old master.
Here's an example of what my pre_fork is doing in the gunicorn version:
# ...
import os
import signal

pidfile = '/opt/run/my-website/my-website.pid'

# socket doesn't come back after QUIT
bind = 'unix:/opt/run/my-website/my-website.socket'
# works, but I'd prefer the socket for security
# bind = 'localhost:8333'
# ...

def pre_fork(server, worker):
    old_pid_file = '/opt/run/my-website/my-website.pid.oldbin'
    if os.path.isfile(old_pid_file):
        with open(old_pid_file, 'r') as pid_contents:
            try:
                old_pid = int(pid_contents.read())
                if old_pid != server.pid:
                    os.kill(old_pid, signal.SIGQUIT)
            except Exception:
                pass
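For reference, the same handover can be driven by hand; a sketch using the pidfile path from the config above (the sleep is an arbitrary grace period, not gospel):

PIDFILE=/opt/run/my-website/my-website.pid
kill -USR2 "$(cat "$PIDFILE")"     # old master re-execs and renames its pidfile to .oldbin
sleep 5                            # give the new master and workers time to boot
[ -f "$PIDFILE.oldbin" ] && kill -QUIT "$(cat "$PIDFILE.oldbin")"   # retire the old master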
And here's a selection from my sysv script which performs the reload:
DESC="my website"
SITE_PATH="/opt/python/my-website"
ENV_PATH="/opt/env/my-website"
RUN_AS="myuser"
SETTINGS="my.settings"
STDOUT_LOG="/var/log/my-website/my-website-access.log"
STDERR_LOG="/var/log/my-website/my-website-error.log"
GUNICORN="/opt/env/my-website/bin/gunicorn.py"
CMD="$ENV_PATH/bin/python $SITE_PATH/manage.py run_gunicorn -c $GUNICORN >> $STDOUT_LOG 2>>$STDERR_LOG"

# $PID is defined earlier in the full script (this is only a selection)
sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

run () {
  if [ "$(id -un)" = "$RUN_AS" ]; then
    eval $1
  else
    su -c "$1" - $RUN_AS
  fi
}

reload () {
  echo "Reloading $DESC"
  sig USR2 && echo reloaded OK && exit 0
  echo >&2 "Couldn't reload, starting '$DESC' instead"
  run "$CMD"
}

action="$1"
case $action in
  reload)
    reload
    ;;
esac
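With that wired in (assuming the script is installed as /etc/init.d/my-website), a deploy then ends with:

sudo service my-website reload   # USR2 if a master is running, otherwise a fresh start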
I chose preload_app=true for the zero-downtime appeal. Since the workers have the app preloaded into memory, as long as I switch processes correctly it should produce a zero-downtime result. That's the thinking, anyway.
This works when I'm listening on a port, but I haven't been able to get it to work over a socket.
My questions are the following:
Is this how the rest of you are doing this?
Is there a better way, for example with HUP somehow? My understanding is you can't use preload_app=true with HUP though.
Is it possible to do this using a socket? My socket keeps going away on the QUIT and never comes back. My thinking is that a socket is more secure because you have to have access to the filesystem.
Is anyone doing this with upstart rather than sysv? I'd ideally like to do that and I saw an interesting way of accomplishing that by flocking the PID. It's a challenge with upstart because once the exec-fork from gunicorn/unicorn takes over, upstart is no longer monitoring the process it was originally managing and needs to be re-established somehow.
You should look at unicornherder from my colleagues at GDS, which is specifically designed to manage this:
Unicorn Herder is a utility designed to assist in the use of Upstart and similar supervisors with Unicorn.
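I haven't verified the exact flags, but going by its README the invocation is roughly the following; unicornherder stays in the foreground, so Upstart keeps tracking one stable PID while the USR2 handover happens underneath:

# flags and the config path here are assumptions - check unicornherder --help
unicornherder -u gunicorn -p /opt/run/my-website/my-website.pid -- -c /path/to/gunicorn_conf.py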
I have the following setup: an Ubuntu server with nginx + Passenger, a Postgres DB, Redis, and Sidekiq.
I have set up an Upstart job to keep an eye on the Sidekiq process.
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid deploy
setgid deploy
respawn
respawn limit 3 30
# TERM and USR1 are sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM USR1
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv
exec /bin/bash <<EOT
# use syslog for logging
exec &> /dev/kmsg
# pull in system rbenv
export HOME=/home/deploy
export RAILS_ENV=production
source /home/deploy/.rvm/scripts/rvm
cd /home/deploy/myapp/current
bundle exec sidekiq -c 25 -L log/sidekiq.log -P tmp/pids/sidekiq.pid -q default -q payment -e production
EOT
end script
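Assuming the job file is installed as /etc/init/sidekiq.conf, it's driven with the usual Upstart shortcuts:

sudo start sidekiq
sudo status sidekiq
sudo stop sidekiq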
This actually works: I can start, stop and restart the process, and I can see it on the Sidekiq web HTTP monitor. But when I run these workers, they don't all get processed. Here are both workers:
class EventFinishedWorker
  include Sidekiq::Worker

  def perform(event_id)
    event = Event.find(event_id)
    score_a = event.team_A_score
    score_b = event.team_B_score

    event.bets.each do |bet|
      if (score_a == bet.team_a_score) && (score_b == bet.team_b_score)
        bet.guessed = true
        bet.save
      end
    end
  end
end
and
class PayPlayersBetsWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :payment

  def perform(event_id)
    event = Event.find(event_id)
    bets = event.bets.where(guessed: true)
    total_winning_bets = bets.count

    unless total_winning_bets == 0
      pot = event.pot
      amount_to_pay = pot / total_winning_bets

      unless bets.empty?
        bets.each do |bet|
          account = bet.user.account
          user = User.find_by_id(bet.user.id)

          bet.payed = true
          bet.save

          account.wincoins += amount_to_pay
          account.accum_wincoins.increment(amount_to_pay)
          account.save

          user.set_score(user.current_score)
        end
      end
    end
  end
end
2014-05-27T18:48:34Z 5985 TID-7agy8 EventFinishedWorker JID-33ef5d7e7de51189d698c1e7 INFO: start
2014-05-27T18:48:34Z 5985 TID-5s4rg PayPlayersBetsWorker JID-e8473aa1bc59f0b958d23509 INFO: start
2014-05-27T18:48:34Z 5985 TID-5s4rg PayPlayersBetsWorker JID-e8473aa1bc59f0b958d23509 INFO: done: 0.07 sec
2014-05-27T18:48:35Z 5985 TID-7agy8 EventFinishedWorker JID-33ef5d7e7de51189d698c1e7 INFO: done: 0.112 sec
I get no errors running both worker processes, but ...
Checking my data, EventFinishedWorker does its job, no problems whatsoever. The second job, PayPlayersBetsWorker, doesn't work. But when I go into the console and do something like:
worker = PayPlayersBetsWorker.new
worker.perform(1)
it works, running the job flawlessly. I have also noticed that if I run Sidekiq directly from the console, not via Upstart, it works too:
bundle exec sidekiq -P tmp/pids/sidekiq.pid -q default -q payment -e production
This works. The problem seems to be running Sidekiq under Upstart. Help would be appreciated :D
It looks like you have a race condition between the two jobs: EventFinishedWorker hasn't finished marking bets as guessed by the time PayPlayersBetsWorker queries for them, so the query finds nothing and the job exits immediately. Your logs show exactly that: PayPlayersBetsWorker finished in 0.07 s, before EventFinishedWorker was done. One fix would be to chain them, e.g. enqueue PayPlayersBetsWorker at the end of EventFinishedWorker's perform.
I used the rubber gem to deploy my application on ec2.
I followed the instructions here: http://ramenlab.wordpress.com/2011/06/24/deploying-your-rails-app-to-aws-ec2-using-rubber/.
The process seems to finish successfully but when I try to use the app I keep getting 504 gateway time-out.
Why is this happening and how do I fix it?
Answer from Matthew Conway (reposted below): https://groups.google.com/forum/?fromgroups#!searchin/rubber-ec2/504/rubber-ec2/AtEoOf-T9M0/zgda0Fo1qeIJ
Note: Even with this code you need to do something like:
> cap deploy:update
> FILTER=app01,app02 cap deploy:restart
> FILTER=app03,app04 cap deploy:restart
I assume this is a Rails application? The Rails stack is notoriously slow to load, so this delay at load time is probably what you are seeing. Passenger was supposed to make this better with the zero-downtime feature in v3, but they seem to have reneged on that and will only offer it as part of some undefined paid version at some undefined point in the future.
What I do is have multiple app server instances and restart them serially, so that I can continue to serve traffic on one while the others are restarting. This doesn't work with a single instance, but most production setups need multiple instances for redundancy/reliability anyway. This isn't currently part of rubber, but I have deploy scripts set up for my app and will merge it in at some point - my config looks something like the below.
Matt
rubber-passenger.yml:
roles:
  passenger:
    rolling_restart_port: "#{passenger_listen_port}"
  web_tools:
    rolling_restart_port: "#{web_tools_port}"
deploy-apache.rb:
on :load do
  rubber.serial_task self, :serial_restart, :roles => [:app, :apache] do
    rsudo "service apache2 restart"
  end

  rubber.serial_task self, :serial_reload, :roles => [:app, :apache] do
    # remove file checked by haproxy to take server out of pool, wait some
    # secs for haproxy to realize it
    maybe_sleep = " && sleep 5" if RUBBER_ENV == 'production'
    rsudo "rm -f #{previous_release}/public/httpchk.txt #{current_release}/public/httpchk.txt#{maybe_sleep}"
    rsudo "if ! ps ax | grep -v grep | grep -c apache2 &> /dev/null; then service apache2 start; else service apache2 reload; fi"

    # Wait for passenger to startup before adding host back into haproxy pool
    logger.info "Waiting for passenger to startup"
    opts = get_host_options('rolling_restart_port') { |port| port.to_s }
    rsudo "while ! curl -s -f http://localhost:$CAPISTRANO:VAR$/ &> /dev/null; do echo .; done", opts

    # Touch the file so that haproxy adds this server back into the pool.
    rsudo "touch #{current_path}/public/httpchk.txt#{maybe_sleep}"
  end
end

after "deploy:restart", "rubber:apache:reload"

desc "Starts the apache web server"
task :start, :roles => :apache do
  rsudo "service apache2 start"
  opts = get_host_options('rolling_restart_port') { |port| port.to_s }
  rsudo "while ! curl -s -f http://localhost:$CAPISTRANO:VAR$/ &> /dev/null; do echo .; done", opts
  rsudo "touch #{current_path}/public/httpchk.txt"
end
I got the same error and solved the problem.
It was the haproxy timeout; haproxy is the load balancer installed by Rubber.
It is set to 30000 ms; you should raise it in the Rubber configuration file.
Good luck!
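If you're not sure which file sets it, grepping your project's generated rubber config for the default value is a quick way to find it (a sketch; file names vary across rubber versions):

grep -rn '30000' config/rubber/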
How can I run a YARD server on my production server?
Maybe with some task?
Deployed with Capistrano, using Passenger and nginx, and Jenkins (Hudson).
I found the simplest option to be just symlinking the generated doc folder from /public in my Rails app. You just need to be sure that the JS/CSS resources are accessible via the same path.
For example:
$ cd <railsapp>
$ ls
Gemfile
app/
..
public/
doc/ <- Folder that contains the html files generated by yard
$ cd public/
$ ln -s ../doc/ docs
This would serve your docs at /docs/index.html
The JavaScript-based search for classes/methods/files still works, since it runs client-side. The full-text search that appears at the top won't be available with this method, but I found the JavaScript-based search sufficient.
I use nginx and passenger, serving this tiny web app:
# ~/Documentation/config.ru
require 'rubygems'
require 'yard'

libraries = {}
gems = Gem.source_index.find_name('').each do |spec|
  libraries[spec.name] ||= []
  libraries[spec.name] << YARD::Server::LibraryVersion.new(spec.name, spec.version.to_s, nil, :gem)
end

run YARD::Server::RackAdapter.new libraries
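To smoke-test that config.ru without Passenger, you can serve it with rackup first (8808 is just YARD's customary port, picked here for familiarity):

rackup ~/Documentation/config.ru -p 8808
# then browse http://localhost:8808/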
Nginx virtual host:
# /opt/nginx/config/sites-enabled/gems.doc
server {
    listen 80;
    server_name gems.doc;
    root /Users/your-user/Documentation/yard/public;
    rails_env development;
    passenger_enabled on;
}
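Note that gems.doc is a made-up hostname, so give it a hosts entry before expecting the vhost to resolve (the nginx binary path is assumed from the /opt/nginx prefix above):

echo '127.0.0.1 gems.doc' | sudo tee -a /etc/hosts
sudo /opt/nginx/sbin/nginx -s reload   # pick up the new server block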
More in this post: http://makarius.me/offline-rails-ruby-jquery-and-gems-docs-with
I use this shell script:
#!/bin/sh

# or your process here
PROCESS='ruby */yard server'
PID=`pidof $PROCESS`

start() {
    yard server &
}

stop() {
    if [ "$PID" ]; then
        kill -KILL $PID
        echo 'yard is stopped'
    fi
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 [start|stop|restart]"
        ;;
esac
And in Hudson: yard doc && ./yard.sh restart.