What's the best way to run daemons when the Rails server is running - ruby-on-rails

I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better practice for running these daemons in the background? And are there any side effects when these tasks run in the background?

My solution is to use a mix of god, capistrano and whenever. A specific constraint I have is that I want all app processes to run as my application user, so init.d scripts are not an option (they could be made to work, but user switching and environment loading make them quite a pain).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful (see the sketch after this list):
running god alone will start all your processes (web server, background jobs, whatever)
it can detect that a process has crashed and restart it
you can group processes and batch-restart them (staging, production, background, devops, etc.)
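A minimal watch file sketch (the process name, group and deploy path here are purely illustrative):
# config/app.god
God.watch do |w|
  w.name  = "resque-worker"
  w.group = "production"
  w.dir   = "/home/deploy/my_app/current"            # hypothetical deploy path
  w.env   = { "RAILS_ENV" => "production" }
  w.start = "bundle exec rake resque:work QUEUE='*'"
  w.keepalive                                        # restart the worker whenever it dies
end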
Whenever
You still have to start god on server restart. A good way to do so is the user's crontab. Most cron implementations have a special string called @reboot, which allows you to run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that makes crontab management easy, including generating the reboot command. While it's not absolutely necessary for achieving what I describe, it's really useful for its capistrano integration.
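A sketch of the corresponding config/schedule.rb, assuming whenever's :reboot shortcut (the path is the same illustrative one as above):
# config/schedule.rb
every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end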
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background jobs code is not up to date, problems will arise.
Capistrano makes it easy to handle that: just ask god to restart the whole group (e.g. god restart production) in a post-deploy capistrano task, and it will be handled seamlessly.
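As a sketch, assuming Capistrano 2-style tasks, the post-deploy hook can be as small as:
# config/deploy.rb
namespace :god do
  desc "Restart every process in the production group"
  task :restart, :roles => :app do
    run "god restart production"
  end
end
after "deploy:restart", "god:restart"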
Whenever's capistrano integration also ensures your crontab is always up to date, updating it whenever you change your config/schedule.rb file.

You can use something like foreman to manage these processes. You define your process types in a Procfile and can then start and manage them all with a single command.

Related

Rails Active Job usage, or running a watcher thread automatically with Rails

It's nice to see Rails 4.2 ship with Active Job as a common interface for background jobs. But I can't find how to start a worker in the documentation. The docs still seem immature (e.g. the right version of Sneakers is only referred to in Rails' Gemfile), so I'm not sure whether the "running workers" part is simply not Active Job's responsibility or just not mentioned in the docs.
So with Active Job, do I still need to manually start job watcher processes like sidekiq or, in my case, rake sneakers:run? If so, where should I put these commands so that rails server runs these parallel tasks automatically in the development environment?
ActiveJob is just a common interface. You still need the backend gem, and you still need to launch it separately from your server (it is a separate process, which is the whole point).
Sample using resque:
In the Gemfile:
gem 'resque'
In the terminal, launching a worker:
bin/resque work
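To wire Active Job to that backend you also set the queue adapter and write your jobs against the Active Job API; a minimal sketch (class, queue and argument names are illustrative):
# config/application.rb (inside the Application class)
config.active_job.queue_adapter = :resque

# app/jobs/welcome_email_job.rb -- hypothetical job
class WelcomeEmailJob < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    # the slow work goes here, e.g. sending an email
  end
end
Enqueue it with WelcomeEmailJob.perform_later(user.id), and a Resque worker listening on that queue will pick it up.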
The case is similar when using Sidekiq, Delayed Job or something else.
If you want to launch the server & worker in a single command, you can create a short bash script for it, but I would advise not doing so: having two separate consoles helps you watch what is happening on each side (web app & worker).
A better solution would be to use the foreman gem to manage starting & stopping your process.
You can create a simple Procfile with the processes to start:
web: bundle exec rails s
job: bundle exec resque work
And then just start both using foreman:
foreman start
By default, foreman will interleave the logs of the processes in the console, but this can be configured.
You still have to run the job watcher threads yourself.

Rails: start script on server and then "leave"

I want to run a rake task on my application on Heroku that will take several hours. If I start it from the console on my laptop and then shut down my laptop, the script will stop (and I will have to shut down my laptop before the script is finished).
How do I start this rake task without it being "tied" to my laptop, i.e. so that it continues to run until it breaks or finishes?
heroku run rake update_feeds_complete --app myapp
is the script to run...
Advice on what commands etc. to Google would be helpful as well.
You can run commands with run:detached, close the terminal window, and they will continue to run. So you simply need to run:
heroku run:detached rake update_feeds_complete --app myapp
You might want to take a look at the scheduler, too. https://devcenter.heroku.com/articles/scheduler
Pretty handy for running rake tasks, even one-offs.
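The scheduler (and run:detached) just invokes a rake task you define; a rough sketch, where the task body is entirely hypothetical:
# lib/tasks/scheduler.rake
desc "Long-running feed update, invoked via heroku run:detached or the scheduler"
task :update_feeds_complete => :environment do
  # hypothetical work -- replace with your real feed-updating logic
  Feed.find_each { |feed| feed.refresh! }
end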
In general, I use resque and redis for jobs I want to run on Heroku. If it's a one-time job I may not, but if it's something I'm going to do regularly then having them installed is a great convenience.
redis installs through the 'Redis To Go' add-on with Heroku.
resque is a gem that's used for running background jobs of different kinds. There are lots of plugins for resque for sending email, for job scheduling, etc.
I'm not familiar with Heroku in particular, but in the past when I have run into this situation I used remote execution via SSH:
ssh username@your.server -f 'your command here'
This will open an ssh session, execute your command and then close the connection. But your script will keep running remotely.

How to autostart Rails delayed jobs?

I'm using delayed job to create job queues such as 'mailer'
For this to work I have to run this:
$ RAILS_ENV=development QUEUE=mailer rake jobs:work
But if the server crashes and is restarted, I need the worker to start running again automatically.
What would be the recommended way to deal with this?
You need to use a third-party tool like monit/bluepill/god/upstart to watch the process and restart it. I recommend the combination of foreman and upstart. See here: http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
Some time ago I wrote a patch for DelayedJob to reload the classes for every job in development mode. The same patch should work for your requirement as well.
betamatt's approach is definitely one way to do it if you have such a monitoring tool in place.
Another way to do it would simply be to add a script to your OS's startup which runs the RAILS_ENV=development QUEUE=mailer rake jobs:work command under a user who has the necessary permissions.
Here's an example of how to do it on Ubuntu using Upstart, but if you look up similar init.d methods, or whatever is relevant for your server OS, you'll find other ways. What you're looking for, basically, is "how to run a script on startup on [your OS name]", and then wrapping your command in an executable script.
I had the same issue with the application I am working on, so I wrote a rake task which runs every minute (as a cron job). When delayed_job starts, it creates a .pid file in the tmp folder. I use this to check for the existence of a delayed_job process; if the file doesn't exist, I run the start command from code.
delayed_job_status = File.file?("./tmp/pids/delayed_job.pid")
This checks for the existence of the process. If the pid file is not there, move on to the next statement:
bundle exec script/delayed_job start production
This will start delayed job
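Put together, the cron-driven check might look roughly like this (the task and file names are illustrative and mirror the commands above):
# lib/tasks/delayed_job_watchdog.rake
namespace :delayed_job do
  desc "Restart delayed_job if its pid file is missing (run from cron every minute)"
  task :watchdog => :environment do
    pid_file = Rails.root.join("tmp", "pids", "delayed_job.pid")
    unless File.file?(pid_file)
      system("bundle exec script/delayed_job start production")
    end
  end
end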
My solution was to create a bash script in the user's home directory, "delayed_job_startup.sh", which contains:
#!/bin/bash
cd /home/deploy/project/current/
RAILS_ENV=production bin/delayed_job start
and in /etc/rc.local I run this script as the deploy user:
su -s /bin/bash - deploy /home/deploy/delayed_job_startup.sh

Start up required additional services (resque, redis) with `rails server` command

I would like my development environment for Rails to automatically start redis and resque (and potentially in other projects, mongod, mysql-server etc.) for me, in the following cases:
When starting up the development server with rails server.
Additionally, it would be nice if the following cases detected already-running services and, if they are not running, started them too:
rake spec or rspec spec, when running tests.
When starting up a rails console.
When shutting down the rails server, the child services it started should be shut down too.
What is the correct place for such additional startup scripts?
And how do I avoid them being started in production too (where I run everything through /etc/init.d services)?
A lot of these services already come with rake tasks for starting and stopping them.
You can create a master rake task that does it all.
For example, with resque you get "rake resque:work", and with resque-scheduler "rake resque:scheduler", etc.
You can create a generic "start" task that depends on the rest (a sketch follows the commands below). Similarly, a "stop" task would shut everything down.
So you would do:
rake start # starts all associated processes
rake stop # stops them all
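A sketch of such an aggregate task file (the redis paths, and the BACKGROUND trick for daemonizing the resque worker, are assumptions to adapt to your setup):
# lib/tasks/services.rake
desc "Start all associated processes for development"
task :start do
  system("redis-server /usr/local/etc/redis.conf --daemonize yes")
  Rake::Task["sunspot:solr:start"].invoke
  system("QUEUE='*' BACKGROUND=yes bundle exec rake resque:work")   # BACKGROUND=yes asks resque to daemonize
end

desc "Stop them all"
task :stop do
  Rake::Task["sunspot:solr:stop"].invoke
  system("redis-cli shutdown")
end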
This is also very easy to use from Capistrano, when you end up deploying your code somewhere else. Rake tasks are very easy to call from Capistrano.
I think it's really better to do this in some external script. Doing it inside your rails server command can be really annoying to anyone who tries to run your code.
For example, a new developer who joins your project in a year may be disoriented if your rails server command launches a bunch of other applications in the background.
In the same vein, doing it that way means you have to maintain that code inside your Rails environment, which can be a little tricky. Maintaining an independent script can be more useful.
You can add your script to the script directory; that is good practice, unlike relying on commands that each developer has to remember to launch manually.

Delayed Jobs on Rails 2: any better way to run workers?

I finally got the DelayedJobs plugin working for Rails 2, and it does indeed work fine...as long as I run:
rake jobs:work
Just like the readme says, to be fair.
BUT, this doesn't fit my requirements... what kind of background task requires you to keep a shell open and a command running? That'd be like having to run script/server to start my rails app and never having the -d option that keeps it running even after I close my shell.
Is there ANY way to keep the workers processing jobs in the background, or in daemon mode, or whatever?
I had a ray of hope when I saw the
"You can also run by writing a simple script/job_runner, and invoking it externally"
line in the readme... but that just does the exact same thing the rake task does, you just call it a different way.
What I want:
I want to start my rails app, then start whatever will process the workers, and have BOTH of them run invisibly in the background, without the need for me to babysit it and keep the shell that started it running.
(My server is something I SSH into, so I don't want to keep the shell that SSHed into it running 24/7, especially since I like to turn off my local computer now and again.)
Is there any way to accomplish this?
You can make any *nix command run in the background by appending an & to it:
rake jobs:work &
Just make sure you exit the shell (or use the disown command) to detach the process from your login session... Otherwise, if your session disconnects, the processes you own will be killed with it.
Perhaps Beanstalkd and Stalker?
Beanstalk is a fast and easy way to queue background tasks. Stalker provides a nice wrapper interface for creating these jobs.
See the RailsCast on it for more information.
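Roughly, Stalker jobs look like this (a sketch; the job name and mailer call are illustrative):
# jobs.rb
require 'stalker'
include Stalker

job 'email.send' do |args|
  Mailer.deliver_welcome(args['to'])   # hypothetical mailer call
end

# enqueue from your app:  Stalker.enqueue('email.send', :to => user.email)
# run the worker with:    stalk jobs.rb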
Edit:
You could also run that rake task as a cron job, which would mean the server would run it periodically without you needing to be logged in.
Use the collectiveidea fork of delayed_job... It's more actively developed and has support for running the jobs in a daemon without any extra messing about.
My capistrano script calls
RAILS_ENV=production script/delayed_job start
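In Capistrano that can be a small post-deploy hook, sketched here in Capistrano 2 style:
# config/deploy.rb
namespace :delayed_job do
  task :restart, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=production script/delayed_job restart"
  end
end
after "deploy:restart", "delayed_job:restart"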
