I have a Rails server that runs flawlessly at the moment (Ruby Enterprise + Passenger + Apache).
It should also run some independent Ruby scripts (setting up localhost XML-RPC servers) in the background.
What is the best way to do this?
Thanks in advance!
Consider using Foreman. It allows you to specify your background processes in a simple, text-based Procfile and run them with foreman start.
If you are looking to start your web server and background scripts together with one command, and you're okay with using Passenger Standalone, your Procfile might look something like:
web: passenger start
rpc: ruby rpc_server.rb
worker: script/delayed_job
I've tried Starling/Workling and found them difficult to configure and keep running as compared to delayed_job. In any case, you'll need a process monitor like God or Monit to make sure whatever solution you choose remains running.
The link in the comments to another question (by Smar, thanks):
http://railscasts.com/episodes/127-rake-in-background
seemed to work fine for me. I didn't need Foreman or any other tool.
I just needed to add this to the Rakefile:
desc "Start some other jobs"
task :start_other_jobs do
system "ruby job1.rb &"
system "ruby job2.rb &"
end
(note the ampersand to make it run as a background task)
and then start them with
rake start_other_jobs
Easy, isn't it? :D
Related
It's nice to see Rails 4.2 come with Active Job as a common interface for background jobs. But I can't find how to start a worker in the documentation. It seems the documentation is still immature (e.g. the right version of Sneakers is only referred to in Rails' Gemfile), so I'm not sure whether the "running workers" part is simply out of Active Job's scope or just not mentioned in the docs.
So with Active Job, do I still need to manually start the worker processes, like sidekiq or, in my case, rake sneakers:run? If so, where should I put these commands so that rails server runs these parallel tasks automatically in the development environment?
ActiveJob is just a common interface. You still need the backend gem, and you still need to launch it separately from your server (it is a separate process, which is the whole point).
Sample using resque:
In the Gemfile:
gem 'resque'
In the terminal, launching a worker:
bin/resque work
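To show where Active Job fits in, here is a minimal sketch of wiring it to the Resque adapter and defining a job (the job class, queue and argument below are made-up examples, not something prescribed by the Rails docs):
# config/application.rb (inside the Application class): enqueue through Resque
config.active_job.queue_adapter = :resque

# app/jobs/cleanup_job.rb -- hypothetical job class
class CleanupJob < ActiveJob::Base
  queue_as :default

  def perform(record_id)
    # do the actual background work here
  end
end

# Enqueue from anywhere in the app; the Resque worker launched above picks it up:
CleanupJob.perform_later(42)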
The case is similar when using Sidekiq, Delayed Job or something else.
If you want to launch the server & worker in a single command, you can create a short bash script for it, but I would advise against it: having two separate consoles helps you watch what is happening on each side (web app & worker).
A better solution would be to use the foreman gem to manage starting & stopping your process.
You can create a simple Procfile with the processes to start:
web: bundle exec rails s
job: bundle exec resque work
And then just start both using foreman:
foreman start
By default, foreman will interleave the logs of the processes in the console, but this can be configured.
You still have to run the job thread watcher.
I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better way to run these daemons in the background? And are there any side effects when running these tasks in the background?
My solution is to use a mix of god, capistrano and whenever. A specific constraint I have is that I want all app processes to run as my app user, so init.d scripts are not really an option (it could be done, but user switching and environment loading make it quite a pain).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful (a minimal config sketch follows the list below):
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect a process crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
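To give an idea, a minimal app.god could look roughly like this (the watch name, group, directory and start command are just illustrative assumptions, not my actual config):
# config/app.god
God.watch do |w|
  w.name  = "my_app_worker"
  w.group = "production"
  w.dir   = "/home/my_app/production/current"
  w.start = "bundle exec rake jobs:work"
  w.keepalive
end
Running god -c config/app.god then starts and supervises everything declared in that file.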
Whenever
You still have to start god on server restart. A good way to do this is the user crontab. Most cron implementations have a special directive called @reboot, which lets you run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that makes managing the crontab easy, including generating the reboot command. While it's not strictly necessary for what I describe here, it's really useful for its capistrano integration.
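With whenever, the same reboot entry can be expressed in config/schedule.rb, roughly like this (the paths are the same assumptions as above):
# config/schedule.rb
every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end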
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano handles that easily: just ask god to restart the whole group (e.g. god restart production) in a post-deploy capistrano task, and it will be handled seamlessly.
Whenever's capistrano integration also ensures your crontab is always up to date, updating it if you changed your config/schedule.rb file.
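As an illustration, the post-deploy hook mentioned above can be as simple as this (Capistrano 2-style syntax; the group name matches the god config sketched earlier):
# config/deploy.rb
after "deploy:restart", "god:restart_group"

namespace :god do
  task :restart_group, :roles => :app do
    run "god restart production"
  end
end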
You can use something like foreman to manage these processes. You can define process types and other things in a Procfile and you can start and do whatever with them.
I have a chain of nginx + passenger for my rails app.
Now after each server restart I need to run this in a terminal, in the project folder:
rake ts:start
But how can I automate it, so that after each server restart Thinking Sphinx starts automatically without my running the command in a terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't figure out what to try; please help. How can I do this? Any advice?
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
  begin
    # Probe the ThinkingSphinx connection
    ThinkingSphinx.search "test", :populate => true
  rescue Mysql2::Error => err
    puts ">>> ThinkingSphinx is unavailable. Trying to start .."
    MyApp::Application.load_tasks
    Rake::Task['ts:start'].invoke
  end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start ThinkingSphinx whenever the server boots up. You can see it as a gist:
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
Put this script into /etc/init.d/thinking_sphinx
Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.
I would like my development environment for Rails to automatically start redis and resque (and potentially in other projects, mongod, mysql-server etc.) for me, in the following cases:
When starting up the development server rails server.
Additionally, it would be nice if the following cases detect already running services, and, if not running start them up too:
rake spec or rspec spec/, when running tests.
When starting up a rails console.
When shutting down the rails server, the started child-services should be shut down too.
What is the correct place for such additional startup scripts?
And how do I avoid them being started in production too (where I run everything through /etc/init.d services)?
A lot of these services already have rake tasks available.
You can create a master rake task that does it all.
For example, with resque, you get "rake resque:start" "rake resque:scheduler:start", etc.
You can create a generic "start" task that depends on the rest. Similarly, a "stop" task would shut everything down.
So you would do:
rake start # starts all associated processes
rake stop # stops them all
This is also very easy to use from Capistrano, when you end up deploying your code somewhere else. Rake tasks are very easy to call from Capistrano.
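A rough sketch of such an aggregate task, using the redis and resque processes from the question (the redis config path and the pkill pattern are assumptions):
# lib/tasks/services.rake
desc "Start all associated background processes for development"
task :start do
  system "redis-server /usr/local/etc/redis.conf &"
  system "rake resque:work QUEUE='*' &"
end

desc "Stop the background processes started by rake start"
task :stop do
  system "pkill -f 'resque:work'"   # assumes the worker is identifiable by its command line
  system "redis-cli shutdown"
end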
I think it's really better to do that in an external script. Doing it inside your rails server command can be really annoying for anyone who tries your code.
For example, a year from now a new developer might come to your project. He could be disoriented if your rails server command launched a bunch of other applications in the background.
Along the same lines, if you do that you have to maintain this code inside your Rails environment, which can be a little tricky. Maintaining an independent script can be more useful.
You can add your script to the script directory; that is good practice, rather than having a command silently do more than its documentation says.
I finally got the DelayedJob plugin working for Rails 2, and it does indeed work fine... as long as I run:
rake jobs:work
Just like the readme says, to be fair.
BUT, this doesn't fit my requirements... what kind of background task requires you to keep a shell open with a command running? That'd be like having to run script/server for my rails app without ever getting the -d option that keeps it running even after I close my shell.
Is there ANY way to keep the workers getting processed in the background, or in daemon mode, or whatever?
I had a ray of hope when I saw the
You can also run by writing a simple
script/job_runner, and invoking it
externally:
line in the readme... but that just does exactly the same thing the rake task does; you just call it a different way.
What I want:
I want to start my rails app, then start whatever will process the workers, and have BOTH of them run invisibly in the background, without the need for me to babysit it and keep the shell that started it running.
(My server is something I SSH into, so I don't want to have that shell that SSHed into it running 24/7 (especially since I like to turn off my local computer now and again)).
Is there any way to accomplish this?
You can make any *nix command run on the background by appending an & to its end:
rake jobs:work &
Just make sure you exit the shell (or use the disown command) to detach the process from your login session... Otherwise, if your session disconnects, the processes you own will be killed with it.
Perhaps Beanstalkd and Stalker?
Beanstalk is a fast and easy way to queue background tasks. Stalker provides a nice wrapper interface for creating these jobs.
See the railscast on it for more information
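If I remember Stalker's API correctly, defining a job looks roughly like this (the job name and argument are made up):
# jobs.rb
require 'stalker'
include Stalker

job 'email.send' do |args|
  # deliver the email for args['user_id'] here
end
You then enqueue with Stalker.enqueue('email.send', 'user_id' => 42) from the app, and work the queue with stalk jobs.rb.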
Edit:
You could also run that rake task as a cron job, which would mean the server would run it periodically without you needing to be logged in.
Use the collectiveidea fork of delayed_job... It's more actively developed and has support for running the jobs in a daemon without any extra messing about.
My capistrano script calls
RAILS_ENV=production script/delayed_job start
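For reference, the relevant part of such a capistrano recipe might look roughly like this (Capistrano 2-style syntax; the collectiveidea fork's script/delayed_job daemon supports start, stop and restart):
# config/deploy.rb
after "deploy:restart", "delayed_job:restart"

namespace :delayed_job do
  task :restart, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=production script/delayed_job restart"
  end
end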