I'm finding an awful lot of conflicting information about monitoring Unicorn, with people saying other config scripts are wrong and posting their own. There seems to be no canonical config that Just Works™.
I'm assuming preload_app and zero-downtime deployment are the main culprits. I'd love to have that, but for now I'm more interested in just getting monitoring running, period. So currently I have all those settings turned off.
Right now I'm using capistrano-unicorn, which is a really great gem.
It gives me all the Capistrano deploy hooks I need to reload Unicorn. The app has already been deployed successfully with it.
The main things I want to do now are...
a) Make sure unicorn starts up automatically on server failure/reboot
b) Monitor unicorn to restart workers that die/hang/whatever.
If I'm using this gem, what might be the best approach to complete my goals (keeping in mind I don't necessarily need zero downtime)?
Thanks
Engine Yard uses Monit, and it's a neat little utility that does exactly what you need!
Here is a Monit configuration for Unicorn:
check process unicorn
  with pidfile /path/to/unicorn.pid
  start program = "command to start unicorn"
    as uid yourUID and gid yourGID
  stop program = "command to stop unicorn"
    as uid yourUID and gid yourGID
  if mem > 255.0 MB for 2 cycles then restart
  if cpu > 100% for 2 cycles then restart

check process unicorn_worker0
  with pidfile /path/to/unicorn_worker_0.pid
  if mem > 255.0 MB for 2 cycles then exec "/bin/bash -c '/bin/kill -6 `cat /path/to/unicorn_worker_0.pid` && sleep 1'"
...
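If you're wondering what those placeholder start/stop commands might look like, here's a rough sketch assuming a Bundler-managed app (every path here is a placeholder):

start program = "/bin/bash -c 'cd /path/to/app && bundle exec unicorn -c config/unicorn.rb -E production -D'"
  as uid yourUID and gid yourGID
stop program = "/bin/bash -c '/bin/kill -QUIT `cat /path/to/unicorn.pid`'"
  as uid yourUID and gid yourGID

SIGQUIT asks the Unicorn master to finish in-flight requests and shut its workers down gracefully.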
I send mails with a delayed_job worker.
The web app runs on an EC2 instance with 2GB of RAM; somehow, the instance runs out of memory a while after booting.
I guess the root cause is delayed_job.
What's a good alternative to it?
Can I send the mail in Thread.new, so the user won't be blocked while the email is sent?
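For example, something like this (UserMailer is just an illustration; I realize the thread would die with the web process and any errors would go unreported):

# inside a controller action; the response returns while the thread delivers
Thread.new do
  UserMailer.welcome_email(@user).deliver
end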
Here's how I run the servers and workers on boot, using whenever:
every :reboot do
  command "cd #{PROJECT} ; git pull origin develop"
  command "cd #{PROJECT} ; memcached -vv"
  command "cd #{PROJECT} ; bundle exec rails runner 'Delayed::Backend::Mongoid::Job.create_indexes'"
  command "cd #{PROJECT} ; bundle exec rake jobs:work"
  command "cd #{PROJECT} ; bundle exec puma -C config/puma.rb"
  command "cd #{PROJECT} ; ruby app_periodic_tasks.rb"
end
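Whenever then writes those entries into the actual crontab for you; on the server (or from a deploy hook), run:

whenever --update-crontab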
Try Sidekiq, which seems much more solid.
There are a solid number of background processing tools in the Rails world, and Sidekiq and Resque would be the top ones on that list for me.
I've been using Sidekiq for the last 3 years at work and it's a phenomenal tool that has done wonders for our large application.
We use it on two worker boxes: one handles assembling, building, storing, and sending e-newsletters and the other handles our application's members (uploading lists, parsing them, validating/verifying emails, and the like).
The assembly worker never runs into memory issues. Once or twice a month, I'll restart Sidekiq just to freshen it up, but that's about it.
The member manager worker, which does far less than the assembly worker, needs its Sidekiq instance restarted every few days; otherwise I end up with a Sidekiq process that eats up a lot of memory and forces me to restart the service.
Sidekiq, however, isn't really the problem. It's my code ... there's a lot of I/O involved in managing members, and I'm certain it's something I'm not doing right rather than the tool itself.
So, the long and short of it for me is this: your background tool is important, but what you're doing in the code - related to the worker and elsewhere in the app - is more important (you can also check to make sure memcached and caching aren't filling up your RAM, etc.).
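For completeness, a minimal Sidekiq worker for the mail-sending case might look like this (class and mailer names are illustrative, not from any real app):

class MailDeliveryWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :mailers, :retry => 5

  # Pass ids, not objects: arguments are serialized to Redis as JSON.
  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome_email(user).deliver
  end
end

# Enqueue from a controller; this returns immediately.
MailDeliveryWorker.perform_async(user.id)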
I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better practice for running these daemons in the background? And are there any side effects when these tasks run in the background?
My solution to that is to use a mix of god, capistrano and whenever. A specific problem I have is that I want all app processes to run as my user, so init.d scripts are not an option (it could be done, but it's quite a painful mix of user switching and environment loading).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful; a minimal watch file is sketched after this list:
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect a process crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
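Here's that minimal watch file sketch (process names, paths, and thresholds are placeholders):

# config/app.god
God.watch do |w|
  w.name  = "resque-worker"
  w.group = "production"
  w.dir   = "/home/my_app/production/current"
  w.start = "bundle exec rake resque:work QUEUE='*'"
  w.keepalive(:memory_max => 250.megabytes) # restart when it grows past 250MB
end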
Whenever
You still have to start god on server restart. A good means of doing so is the user crontab. Most cron implementations have a special instruction called @reboot, which allows you to run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that allows easy management of the crontab, including generating the reboot command. While it's not absolutely necessary for achieving what I describe, it's really useful for its capistrano integration.
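Expressed in whenever's config/schedule.rb, the crontab line above would be something like this (paths are placeholders again):

every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end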
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano makes it easy to handle that: just ask god to restart the whole group (like god restart production) in a post-deploy capistrano task, and it will be handled seamlessly.
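As a sketch, that post-deploy hook could look like this in Capistrano 2 (the task name is arbitrary):

# config/deploy.rb
after "deploy:restart", "god:restart_group"

namespace :god do
  task :restart_group do
    run "god restart production"
  end
end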
Whenever's capistrano integration also ensures your crontab is always up to date, updating it whenever you change your config/schedule.rb file.
You can use something like foreman to manage these processes. You define process types and other settings in a Procfile, and you can start and manage them all from there.
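For instance, a Procfile covering the commands from the question might look like this (a sketch; sunspot:solr:run is the foreground variant of sunspot:solr:start):

solr: bundle exec rake sunspot:solr:run
redis: /usr/local/bin/redis-server /usr/local/etc/redis.conf
worker: bundle exec rake resque:work QUEUE='*'

Then foreman start runs them all in the foreground, and foreman export upstart /etc/init can generate init scripts so they come up on boot.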
Resque is currently showing me that I have a worker doing work on a queue. I shut that worker down in the middle of the queue (it's just for testing), and the worker is still showing as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to force-clear its working status.
What's the best way to update the count of workers that are currently up? (I have 2; the web UI reports 3.)
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
Also, on the command line, you can ask Redis for the running workers:
redis-cli keys "*worker:*"
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
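Alternatively, from a Rails console you can clear the registrations through Resque's own API. A hedged sketch (this unregisters every worker, so stop your real workers first and restart them afterwards):

# Remove all worker registrations from Redis.
Resque.workers.each { |worker| worker.unregister_worker }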
Try restarting the application.
For future reference, also have a look at https://github.com/resque/resque/issues/299
I'm using rufus-scheduler to run a process every day from a rails server. For testing purposes, let's say every 5 minutes. My code looks like this:
In config/initializers/task_scheduler.rb:
scheduler = Rufus::Scheduler::PlainScheduler.start_new

scheduler.every "10m", :first_in => '30s' do
  # Do stuff
end
I've also tried the cron format:
scheduler.cron '50 * * * *' do
  # stuff
end
for example, to get the process to run every hour at 50 minutes after the hour.
The infuriating part is that it works on my local machine. The process will run regularly and just work. It's only on my deployed-to-production app that the process will run once, and not repeat.
ps faux reveals that cron is running, Passenger is handling the spin-up of the Rails process, the site has been pinged again so it knows it should refresh, and production shows the changes in the code. The only thing that's different is that, without any warning or error, the scheduled task doesn't repeat.
Help!
You probably shouldn't run rufus-scheduler inside the Rails server itself, especially not with a multi-process framework like Passenger. Instead, you should run it in a separate daemon process.
My theory on what's happening:
Passenger starts up a ruby server process and uses it to fork off other servers to handle requests. But since rufus-scheduler runs its jobs in a separate thread from the main thread, the rufus thread is only alive in the original ruby process (ruby's fork only duplicates the thread that does the forking). This might seem like a good thing because it prevents multiple schedulers from running, but... Passenger may kill ruby processes under certain conditions - and if it kills the original, the scheduler thread is gone.
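A sketch of what that separate daemon process could look like, using the rufus-scheduler 3.x API (the question's code uses the older 2.x PlainScheduler):

# script/scheduler.rb -- run with: bundle exec ruby script/scheduler.rb
require File.expand_path('../../config/environment', __FILE__) # load Rails
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '10m', :first_in => '30s' do
  # Do stuff
end

scheduler.join # block so the process stays alive

You'd then keep that process alive with god/monit/bluepill, as described elsewhere on this page.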
Add the lines below to your Apache config (/etc/apache2/apache2.conf) and restart Apache:
RailsAppSpawnerIdleTime 0
PassengerMinInstances 1
Kelvin is right.
Passenger kills 'unnecessary' threads.
http://groups.google.com/group/rufus-ruby/search?group=rufus-ruby&q=passenger
I'm running a Rails app (Tracks, to be exact) with nginx. The Rails process that starts seems to persist indefinitely. Is it supposed to stop?
I have a low RAM allotment on my Shared Hosting and want to be able to kill the Rails process after, say, 10 minutes. Is there a way to do this in nginx or Passenger?
In the meantime, I'm running this bash script with cron every 10 minutes:
PID=$(ps ax | grep '[R]ails.*lytracks' | cut -f2 -d" " | head -n1)
if [ -n "$PID" ]; then
  kill -SIGUSR1 "$PID"
else
  echo Not running
fi
You can do that, but you shouldn't.
Rails (in production mode) does not normally leak memory, so restarting the process should have no effect.
A healthy rails app with reasonable load should stabilize at about 30-70MB RAM and stay there forever.
Restarting it every 10 minutes means that every 10 minutes some of your users will see a page that takes 20 seconds to load. Or fail to load at all.
You're trying to use Rails like you would use a CGI PHP script. It's not meant to work that way.
If you have memory leaks, you should try to find out what's causing them, then fix them.