Kill a Rails Process After Its Usefulness is Over

I'm running a Rails app (Tracks, to be exact) with nginx. The Rails process that starts seems to persist indefinitely. Is it supposed to stop at some point?
I have a low RAM allotment on my shared hosting plan and want to be able to kill the Rails process after, say, 10 minutes. Is there a way to do this in nginx or Passenger?
In the meantime, I'm running this bash script with cron every 10 minutes:
# The [R] in the pattern keeps grep from matching its own process
PID=$(ps ax | grep "[R]ails.*lytracks" | awk '{print $1}' | head -n1)
if [ -n "$PID" ]; then
  kill -SIGUSR1 "$PID"
else
  echo "Not running"
fi

You can do that, but you shouldn't.
Rails (in production mode) does not normally leak memory, so restarting the process should have no real effect.
A healthy rails app with reasonable load should stabilize at about 30-70MB RAM and stay there forever.
Restarting it every 10 minutes means that every 10 minutes some of your users will see a page that takes 20 seconds to load. Or fail to load at all.
You're trying to use Rails like you would use a CGI PHP script. It's not meant to do that.
If you do have memory leaks, you should try to find out what's causing them and then fix that.
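To get hard numbers before hunting for a leak, here is a minimal sketch of a Rack middleware that logs the process's resident memory after each request. It assumes a Linux host where /proc/self/status is readable, and the MemoryLogger name is just for illustration:
# app/middleware/memory_logger.rb
class MemoryLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    response = @app.call(env)
    # VmRSS is the resident set size in kB on Linux
    rss_kb = File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
    Rails.logger.info("RSS after #{env['PATH_INFO']}: #{rss_kb / 1024} MB")
    response
  end
end

# config/application.rb (require this file first, e.g. require_relative "../app/middleware/memory_logger"):
#   config.middleware.use MemoryLogger
A steadily climbing RSS points to a leak in your code or a gem; a flat line means restarting every 10 minutes buys you nothing.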

Related

Rails 4 - how is the "/tmp" cleaned?

I am generating a PDF document and storing it temporarily in the /tmp directory. Once the document is generated and stored there (I do this as a background job with Sidekiq), I upload it to Amazon S3 and delete it from /tmp.
What I noticed is that when a user generates a document while I am deploying new code to the server (using Capistrano), the generating/uploading process is interrupted.
I was wondering if this might be related to Sidekiq? It's running as an Upstart service on Ubuntu, so I don't think so.
Then I thought the problem might be that I am storing the document in the /tmp directory. How does that directory work? Is its whole content deleted when I do a new deployment with Capistrano?
EDIT:
The document generation usually takes 5-10 seconds, but the job runs on the default queue, so could the process be failing because there are too many jobs in that queue?
The /tmp directory should be cleaned only during server boot (as @Зелёный already commented). But your PDF generation / upload might just take too long and the process might get killed. This is documented here, and I quote from the docs:
sidekiqctl stop [pidfile] 60
This sends TERM, waits up to 60 seconds and then will kill -9 the Sidekiq process if it has not exited by then. Keep in mind the deadline timeout is the amount of time sidekiqctl will wait before running kill -9 on the Sidekiq process.
The details should show up in the console output during the Capistrano deployment, so if it's not a case of the process getting killed, please add that output to the question.
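If the job does get killed mid-run, keeping the file inside the job and letting Sidekiq retry the failure sidesteps the /tmp question entirely. A rough sketch of such a worker, assuming a hypothetical PdfGenerator, the aws-sdk-s3 gem, and placeholder bucket/key names:
require "tempfile"
require "aws-sdk-s3"

class PdfUploadJob
  include Sidekiq::Worker
  sidekiq_options queue: :default, retry: 3  # retry transient failures instead of losing the upload

  def perform(document_id)
    pdf_data = PdfGenerator.render(document_id)  # hypothetical generator
    Tempfile.create(["document-#{document_id}", ".pdf"]) do |file|
      file.binmode
      file.write(pdf_data)
      file.rewind
      Aws::S3::Client.new.put_object(
        bucket: "my-bucket",
        key: "documents/#{document_id}.pdf",
        body: file
      )
    end  # the tempfile is removed as soon as the block exits
  end
end
Because the file never needs to outlive the job, neither a /tmp sweep nor a deploy-time restart can leave half-finished artifacts behind.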

Memory leak on delayed_job: what's a better alternative on Rails?

I send mails with a delayed_job worker.
The web app runs on an EC2 instance with 2GB of RAM; somehow, the instance runs out of memory a while after booting.
I guess the root cause is delayed_job.
What's a good alternative for it?
Can I send the mail in Thread.new, so the user isn't blocked while the email is being sent?
Here's how I run the servers and the worker on boot:
every :reboot do
  command "cd #{PROJECT}; git pull origin develop"
  command "cd #{PROJECT}; memcached -vv"
  # create_indexes is a Ruby call, not a rake task, so run it through rails runner
  command "cd #{PROJECT}; bundle exec rails runner 'Delayed::Backend::Mongoid::Job.create_indexes'"
  command "cd #{PROJECT}; bundle exec rake jobs:work"
  command "cd #{PROJECT}; bundle exec puma -C config/puma.rb"
  command "cd #{PROJECT}; ruby app_periodic_tasks.rb"
end
Try Sidekiq, which seems much more solid.
There are a number of background processing tools in the Rails world, and Sidekiq and Resque would be at the top of that list for me.
I've been using Sidekiq for the last 3 years at work and it's a phenomenal tool that has done wonders for our large application.
We use it on two worker boxes: one handles assembling, building, storing, and sending e-newsletters and the other handles our application's members (uploading lists, parsing them, validating/verifying emails, and the like).
The assembly worker never runs into memory issues. Once or twice a month, I'll restart Sidekiq just to freshen it up but that's about it.
The member manager worker, which does far less than the assembly worker, requires its instance of Sidekiq to be restarted every few days. I end up with a Sidekiq worker/process that eats up a lot of memory and forces me to restart the service.
Sidekiq, however, isn't really the problem. It's my code ... there's a lot of I/O involved in managing members and I am certain that it's something I'm not doing right versus the tool itself.
So, the long and short of it for me is this: your background tool is important, but what you're doing in the code, in the worker and elsewhere in the app, is more important. (You can also check to make sure memcached and caching aren't filling up your RAM, etc.)
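As for the Thread.new idea: a thread spawned inside the web process dies whenever that process is recycled and gives you no retries, so a queue is the safer route. A minimal sketch of moving mail delivery through Sidekiq, assuming Rails 4.2+ (so Active Job and deliver_later are available) and an existing UserMailer with a welcome_email method; adjust the names to your app:
# config/application.rb
config.active_job.queue_adapter = :sidekiq

# wherever the mail is currently sent synchronously:
UserMailer.welcome_email(user).deliver_later  # enqueues the delivery instead of blocking the request
Sidekiq needs a Redis server and a sidekiq process started on boot (bundle exec sidekiq), much like the jobs:work line in the question.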

How do I monitor (non-zero-downtime) Unicorn?

I'm finding an awful lot of conflicting information about monitoring Unicorn, with people saying other config scripts are wrong and posting their own. There seems to be no main config that Just Works™.
I'm assuming preload_app and zero-downtime deployment are the main culprits. I'd love to have that, but for now I'm more interested in just getting monitoring running, period. So currently I have all those settings turned off.
Right now I'm using capistrano-unicorn, which is a really great gem.
It gives me all the Capistrano deploy hooks I need to reload Unicorn. The app has already deployed successfully with it.
The main thing I want to do now is...
a) Make sure unicorn starts up automatically on server failure/reboot
b) Monitor unicorn to restart workers that die/hang/whatever.
If I'm using this gem, what might be the best approach to complete my goals (keeping in mind I don't necessarily need zero downtime)?
Thanks
Engine Yard uses Monit, and it's a neat little utility that does exactly what you need!
Here is the configuration for unicorn:
check process unicorn
  with pidfile /path/to/unicorn.pid
  start program = "command to start unicorn"
    as uid yourUID and gid yourGID
  stop program = "command to stop unicorn"
    as uid yourUID and gid yourGID
  if mem > 255.0 MB for 2 cycles then restart
  if cpu > 100% for 2 cycles then restart

check process unicorn_worker0
  with pidfile /path/to/unicorn_worker_0.pid
  if mem > 255.0 MB for 2 cycles then exec "/bin/bash -c '/bin/kill -6 `cat /path/to/unicorn_worker_0.pid` && sleep 1'"
...
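Monit can only watch individual workers if each one writes its own pidfile, which Unicorn does not do by default. A small sketch of the relevant bit of config/unicorn.rb, with the paths as placeholders to match the checks above:
# config/unicorn.rb
pid "/path/to/unicorn.pid"

after_fork do |server, worker|
  # Write a per-worker pidfile so Monit's unicorn_workerN checks can find it
  File.write("/path/to/unicorn_worker_#{worker.nr}.pid", Process.pid)
end
For goal (a), point Monit's start program at whatever init/Upstart script you use to boot Unicorn; Monit itself starts at boot, notices the missing process, and brings Unicorn up.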

Override the 30-second timeout constant in a gem class

My thin server is timing out after 30 seconds. I would like to override DEFAULT_TIMEOUT in this Ruby file, changing it from 30 seconds to 120 seconds. How do I do that?
The code is here:
https://github.com/macournoyer/thin/blob/master/lib/thin/server.rb
I would like to override it without "already initialized constant" warnings.
See the help
➜ ~/app ✓ thin --help | grep timeout
-t, --timeout SEC Request or command timeout in sec (default: 30)
So you can change it from the command line when starting the server
➜ ~/app ✓ thin --timeout 60 start
or you can create a config file somewhere like /etc/thin/your_app.yml with something like this
---
timeout: 60
and then run thin, pointing it at this YAML file with
thin -C /etc/thin/your_app.yml start
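If you really do want to change the constant itself rather than pass the flag, the usual Ruby way to avoid the warning is to remove the constant before redefining it. A sketch, assuming (per the linked server.rb) that the constant lives on Thin::Server:
require "thin"

# Redefining a constant in place triggers "already initialized constant";
# removing it first keeps Ruby quiet.
Thin::Server.send(:remove_const, :DEFAULT_TIMEOUT)
Thin::Server.const_set(:DEFAULT_TIMEOUT, 120)
Bear in mind this only takes effect if it runs before thin reads the constant; since thin normally parses its options before loading your app, the --timeout flag or the YAML file above is the more reliable route.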
As a side note, you should consider whether increasing your timeout is really necessary. Typically, long-running requests should be queued up and run later through a service like delayed_job or resque.
After seeing your comment and learning that you're using Heroku, I suggest you read the documentation:
Occasionally a web request may hang or take an excessive amount of time to process by your application. When this happens the router will terminate the request if it takes longer than 30 seconds to complete. The timeout countdown begins when the request leaves the router. The request must then be processed in the dyno by your application, and then a response delivered back to the router within 30 seconds to avoid the timeout.
If you're on Heroku, I suggest even more strongly that you look into delayed_job, resque, or similar. You will need at least one worker running to handle the queue. HireFire is an excellent service that saves you money by only spinning up workers when your queue actually has jobs to process.
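A rough sketch of that queue-it-up approach with delayed_job, using a hypothetical ReportGenerator class for the slow work:
class ReportsController < ApplicationController
  def create
    # Enqueue the slow work instead of doing it inside the 30-second request window
    ReportGenerator.delay.build(params[:report_id])
    head :accepted  # respond immediately; the worker finishes in the background
  end
end
The client can then poll, or be notified, once the result is ready.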

Rufus-scheduler only running once on production

I'm using rufus-scheduler to run a process every day from a rails server. For testing purposes, let's say every 5 minutes. My code looks like this:
in config/initializers/task_scheduler.rb
scheduler = Rufus::Scheduler::PlainScheduler.start_new
scheduler.every "10m", :first_in => '30s' do
  # Do stuff
end
I've also tried the cron format:
scheduler.cron '50 * * * *' do
  # stuff
end
for example, to get the process to run every hour at 50 minutes after the hour.
The infuriating part is that it works on my local machine. The process will run regularly and just work. It's only on my deployed-to-production app that the process will run once, and not repeat.
ps faux reveals that cron is running, passenger is handling the spin-up of the rails process, the site has been pinged again so it knows it should refresh, and production shows the changes in the code. The only thing that's different is that, without a warning or error, the scheduled task doesn't repeat.
Help!
You probably shouldn't run rufus-scheduler in the rails server itself, especially not with a multi-process framework like passenger. Instead, you should run it in a daemon process.
My theory on what's happening:
Passenger starts up a ruby server process and uses it to fork off other servers to handle requests. But since rufus-scheduler runs its jobs in a separate thread from the main thread, the rufus thread is only alive in the original ruby process (ruby's fork only duplicates the thread that does the forking). This might seem like a good thing because it prevents multiple schedulers from running, but... Passenger may kill ruby processes under certain conditions - and if it kills the original, the scheduler thread is gone.
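A minimal sketch of that daemon approach, assuming rufus-scheduler 2.x (where PlainScheduler is available) and a standalone script, here called script/scheduler.rb as a placeholder, kept alive by cron's @reboot, Upstart, or Monit:
#!/usr/bin/env ruby
# Runs outside the Passenger-managed web processes
require File.expand_path("../../config/environment", __FILE__)

scheduler = Rufus::Scheduler::PlainScheduler.start_new

scheduler.every "10m", :first_in => '30s' do
  # Do stuff, with the full Rails environment loaded
end

scheduler.join  # keep the process in the foreground so the scheduler thread stays alive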
Add the lines below to your Apache config (/etc/apache2/apache2.conf) and restart your Apache server:
RailsAppSpawnerIdleTime 0
PassengerMinInstances 1
Kelvin is right.
Passenger kills 'unnecessary' threads.
http://groups.google.com/group/rufus-ruby/search?group=rufus-ruby&q=passenger
