I finally got the DelayedJobs plugin working for Rails 2, and it does indeed work fine...as long as I run:
rake jobs:work
Just like the readme says, to be fair.
BUT, this doesn't fit my requirements... what kind of background task requires you to have a shell open and a command running? That'd be like having to run script/server to start my Rails app without ever getting the -d option that keeps it running after I close my shell.
Is there ANY way to keep the workers getting processed in the background, in daemon mode, or whatever?
I had a ray of hope when I saw the
"You can also run by writing a simple script/job_runner, and invoking it externally."
line in the readme... but that just does the exact same thing the rake task does; you just call it a different way.
What I want:
I want to start my rails app, then start whatever will process the workers, and have BOTH of them run invisibly in the background, without the need for me to babysit it and keep the shell that started it running.
(My server is something I SSH into, so I don't want to keep the shell that SSHed into it running 24/7, especially since I like to turn off my local computer now and again.)
Is there any way to accomplish this?
You can make any *nix command run in the background by appending an & to its end:
rake jobs:work &
Just make sure you exit the shell (or use the disown command) to detach the process from your login session... Otherwise, if your session disconnects, the processes you own will be killed with it.
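For example, a minimal sketch (the log path is an assumption) that survives both logging out and SSH disconnects:
nohup rake jobs:work >> log/delayed_job.log 2>&1 &
disown
nohup makes the process ignore the hangup signal when the terminal goes away, and disown removes it from the shell's job table so you can exit the SSH session cleanly.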
Perhaps Beanstalkd and Stalker?
Beanstalk is a fast and easy way to queue background tasks. Stalker provides a nice wrapper interface for creating these jobs.
See the railscast on it for more information
Edit:
You could also run that rake task as a cron job, which would mean the server would run it periodically without you needing to be logged in.
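If your version of delayed_job also ships the jobs:workoff task (which processes the pending jobs and then exits), a crontab entry along these lines works for periodic processing; the schedule and paths here are placeholders:
*/10 * * * * /bin/bash -lc 'cd /path/to/app && RAILS_ENV=production rake jobs:workoff >> log/cron_jobs.log 2>&1'
The bash -l wrapper loads your login environment (RVM/rbenv, PATH), which a plain cron entry does not.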
Use the collectiveidea fork of delayed_job... It's more actively developed and has support for running the jobs in a daemon without any extra messing about.
My capistrano script calls
RAILS_ENV=production script/delayed_job start
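For reference, a hedged sketch of what that hook can look like in a Capistrano 2 style config/deploy.rb (this task is hand-rolled for illustration, not the recipe the fork bundles):
namespace :delayed_job do
  desc "Restart the delayed_job daemon after a deploy"
  task :restart, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=production script/delayed_job restart"
  end
end
after "deploy:restart", "delayed_job:restart"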
I want to start a process from inside a controller.
I've tried the usual
pid = fork do
code
end
Process.detach(pid)
But nothing happens. When I try eval(code) in the fork block, the code runs, but it's the actual Rails server/Puma process running it. This means that when I kill the process I also shut down the whole server.
I had some code before (which I lost) that worked, and I'm nearly sure it used exec or eval or something like that to create a process (and therefore returned a pid I could use to kill it later). I remember checking with ps that it was run by something Rails-related, but not the actual whole server.
Why isn't the fork do block enough for it to work? What's the way to do it?
And, for non-testing purposes and actual implementation, how can I make it run totally independent from the rails server?
You can execute a shell command from inside your Rails controller using exec.
I hope this is what you are looking for. The process you start on the system will be totally independent of the Rails server and will show up in ps output while it is running.
Documentation: http://ruby-doc.org/core-2.5.1/Kernel.html#method-i-exec
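One caveat worth adding: Kernel#exec replaces the calling process, so called directly in a controller it would take over the server process itself. It needs to run inside a forked child, along the lines of the question's own snippet (the command here is just a placeholder):
pid = fork do
  # exec replaces only this forked child, not the Rails server
  exec("ruby", "long_running_task.rb")
end
# detach so the finished child is reaped and never left as a zombie
Process.detach(pid)
Process.spawn does the fork-and-exec in one call and also returns a pid you can store if you need to kill the job later.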
I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better way to run these daemons in the background? And are there any side effects to running these tasks in the background?
My solution to that is to use a mix of god, capistrano and whenever. A specific constraint I have is that I want all app processes to run as the app user, so init.d scripts are not really an option (it could be done, but the user switching / environment loading is quite a pain).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful (a minimal config sketch follows the list below):
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect a process crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
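A minimal app.god sketch for a resque worker like the one in this question (the deploy path, group name, and environment are assumptions):
God.watch do |w|
  w.name  = "resque-worker"
  w.group = "production"
  w.dir   = "/home/deploy/my_app/current"                # assumed deploy path
  w.env   = { "RAILS_ENV" => "production", "QUEUE" => "*" }
  w.start = "bundle exec rake resque:work"               # foreground command, so god owns the pid
  w.keepalive                                            # restart the worker if it dies
end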
Whenever
You still have to start god on server restart. A good way to do so is to use the user's crontab. Most cron implementations have a special instruction called @reboot, which allows you to run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that allows easy management of your crontab, including generating the reboot command. While it's not absolutely necessary for achieving what I describe, it's really useful for its Capistrano integration.
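The corresponding entry in Whenever's config/schedule.rb would look roughly like this (the command is the same one as in the crontab line above):
every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end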
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano allows you to handle that easily: just ask god to restart the whole group (e.g. god restart production) in a post-deploy Capistrano task, and it will be handled seamlessly.
Whenever's Capistrano integration also ensures your crontab is always up to date, updating it if you change your config/schedule.rb file.
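A sketch of that post-deploy hook in Capistrano 2 syntax (the group name matches the god config above):
namespace :god do
  desc "Restart every god-managed process in the production group"
  task :restart, :roles => :app do
    run "god restart production"
  end
end
after "deploy:restart", "god:restart"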
You can use something like foreman to manage these processes. You define the process types and other settings in a Procfile, and foreman can start them and otherwise manage them for you.
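For the processes in the question, a Procfile sketch could look like the following. Foreman expects foreground processes, so this assumes your sunspot version provides sunspot:solr:run (which, unlike sunspot:solr:start, does not daemonize) and that redis.conf has daemonize no:
solr: bundle exec rake sunspot:solr:run
redis: redis-server /usr/local/etc/redis.conf
worker: bundle exec rake resque:work QUEUE='*'
foreman start then runs all three together and stops them again with a single Ctrl-C.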
I'm using delayed job to create job queues such as 'mailer'
For this to work I have to run this:
$ RAILS_ENV=development QUEUE=mailer rake jobs:work
But if the server crashes and is restarted, I need the worker to start running again automatically.
What would be the recommended way to deal with this?
You need to use a third-party service like monit/bluepill/god/upstart to watch the process and restart it. I recommend using the combination of foreman and upstart. See here: http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
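A hedged sketch of that combination, assuming a single worker process and made-up app/user names:
# Procfile
worker: bundle exec rake jobs:work QUEUE=mailer
# on the server, once:
sudo foreman export upstart /etc/init -a myapp -u deploy
sudo start myapp
The exported Upstart jobs respawn the worker if it dies and start it again when the machine boots.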
Some time ago I wrote a patch for DelayedJob to reload the classes for every job in development mode. The same patch should work for your requirement as well.
betamatt's approach is definitely one way to do it if you have such a monitoring tool in place.
Another way to do it would simply be to add a script to your OS's startup which runs the RAILS_ENV=development QUEUE=mailer rake jobs:work command under a user who has the necessary permissions.
Here's an example of how to do it on Ubuntu using Upstart, but if you look up similar init.d methods, or whatever is relevant for your server OS, you'll find other ways. What you're looking for, basically, is "How to run a script on startup [your OS name]", and then wrap your command in an executable script.
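For concreteness, a hedged Upstart sketch (the user, application path, and an Upstart recent enough for setuid/chdir are all assumptions):
# /etc/init/delayed_job.conf
description "delayed_job mailer worker"
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid deploy
chdir /var/www/myapp/current
exec /bin/bash -lc 'RAILS_ENV=development QUEUE=mailer rake jobs:work'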
I had the same issue with the application I am working on, so I wrote a rake task which runs every minute (as a cron job). When delayed_job starts, it creates a .pid file in the tmp folder. I used this to check for the existence of a delayed_job process. If the file doesn't exist, I run the console command through code.
delayed_job_status = File.file?("./tmp/pids/delayed_job.pid")
This will check for the existence of the process. If it returns false, go to the next statement:
bundle exec script/delayed_job start production
This will start delayed_job.
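Put together, the watchdog might look roughly like this (the task and file names are made up for the example):
# lib/tasks/delayed_job_watchdog.rake
namespace :delayed_job do
  desc "Start delayed_job if its pid file is missing"
  task :watchdog do
    delayed_job_status = File.file?("./tmp/pids/delayed_job.pid")
    unless delayed_job_status
      system("bundle exec script/delayed_job start production")
    end
  end
end
with a crontab entry such as (path is a placeholder):
*/1 * * * * /bin/bash -lc 'cd /path/to/app && bundle exec rake delayed_job:watchdog'
Keep in mind a stale pid file left over from a crash will make the check pass even though no worker is running.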
My solution was to create a bash script in the user's home directory, delayed_job_startup.sh, which contains:
#!/bin/bash
cd /home/deploy/project/current/
RAILS_ENV=production bin/delayed_job start
and in /etc/rc.local I run this script as my user:
su -s /bin/bash - deploy /home/deploy/delayed_job_startup.sh
I have a rake task which parses a streaming API and enters data into the database. The streaming API is a live feed, so the rake task should run continuously for the live data to enter the database. Once called, the rake task runs continuously and parses the data. I have started the rake task and it is running, but the problem is that if I close the terminal or reboot the server, the rake task will be stopped. So I want a script in Linux (something like the ones used to start or stop the Apache server) which does the following:
1. start the rake task by calling the rake command (rake parse:stream) from RAILS_ROOT (the application directory of the Rails app)
2. stop the rake task by killing the process.
3. start the rake task automatically when the server reboots.
I am not familiar with Linux scripts and I don't know where to start. I am using Ubuntu Server. Can anyone help me?
Here's an article that might also help you. It discusses various options for managing Ruby applications and their related processes:
http://michaelvanrooijen.com/articles/2011/06/08-managing-and-monitoring-your-ruby-application-with-foreman-and-upstart/
You need to run your script as a daemon. When I create this kind of startup script I usually make two files: one that stays in /etc/init.d and handles the start/stop/status/restart commands, and another one that actually does the job and gets called by the first script.
Here is one solution, and although the daemon script there is written in Perl, you only need to run a few command lines, so daemonizing a Perl script could do your job easily.
If you want, there are also ruby gems for daemonizing scripts, so you can write a script in ruby that does the rake tasks.
And if you want to go hardcore, there are solutions for writing bash scripts that can daemonize, but I'm not sure I would recommend a solution like that; at least I find them pretty difficult to use.
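A compressed sketch of that idea, squashed into a single /etc/init.d file for brevity (the paths, deploy user, and log location are assumptions):
#!/bin/bash
# /etc/init.d/parse_stream
APP_DIR=/home/deploy/myapp/current
PID_FILE=$APP_DIR/tmp/pids/parse_stream.pid

case "$1" in
  start)
    su - deploy -c "cd $APP_DIR; nohup rake parse:stream >> log/parse_stream.log 2>&1 & echo \$! > $PID_FILE"
    ;;
  stop)
    kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac
Running sudo update-rc.d parse_stream defaults then registers it so Ubuntu runs the start action at boot.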
Take a look at how Github's Resque project does it.
Essentially they create tasks for starting/restarting/stopping a particular task, in this case resque:work. Note that the restart_workers task simply invokes the other tasks, stop and start. It should be really easy to change this for what you want.
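A hedged sketch of the same pattern applied to the stream parser (this is not Resque's actual code; the task names, paths, and reliance on Ruby 1.9+ Process.spawn are assumptions):
namespace :stream do
  pid_file = "tmp/pids/parse_stream.pid"

  desc "Start the streaming parser in the background"
  task :start do
    pid = Process.spawn("rake parse:stream",
                        :out => "log/parse_stream.log",
                        :err => [:child, :out])
    Process.detach(pid)                         # don't leave a zombie behind
    File.open(pid_file, "w") { |f| f.write(pid) }
  end

  desc "Stop the streaming parser"
  task :stop do
    if File.exist?(pid_file)
      Process.kill("TERM", File.read(pid_file).to_i)
      File.delete(pid_file)
    end
  end

  desc "Restart the streaming parser"
  task :restart => [:stop, :start]
end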
I would like my development environment for Rails to automatically start redis and resque (and potentially in other projects, mongod, mysql-server etc.) for me, in the following cases:
When starting up the development server with rails server.
Additionally, it would be nice if the following cases detected already running services and, if they are not running, started them up too:
rake spec or rspec spec/, when running tests.
When starting up a rails console.
When shutting down the rails server, the started child-services should be shut down too.
What is the correct place for such additional startup scripts?
And how do I avoid them being started in production too (where I run everything through /etc/init.d services)?
A lot of these are available as rake tasks already.
You can create a master rake task that does it all.
For example, with resque you get rake resque:work, and resque-scheduler adds rake resque:scheduler, etc.
You can create a generic "start" task that depends on the rest. Similarly, a "stop" task would shut everything down.
So you would do:
rake start # starts all associated processes
rake stop # stops them all
This is also very easy to use from Capistrano, when you end up deploying your code somewhere else. Rake tasks are very easy to call from Capistrano.
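For concreteness, a hedged sketch of such a master task for the redis + resque case (the paths and the way each process is backgrounded are assumptions, not something the gems dictate):
# lib/tasks/daemons.rake
desc "Start redis and a resque worker in the background"
task :start do
  sh "nohup redis-server /usr/local/etc/redis.conf >> log/redis.log 2>&1 &"
  sh "nohup rake resque:work QUEUE='*' >> log/resque.log 2>&1 &"
end

desc "Stop them again"
task :stop do
  sh "redis-cli shutdown"
  sh "pkill -f 'resque:work' || true"    # crude; tracking pid files would be cleaner
end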
I think it's really better to do that in an external script. Doing it in your rails server command can be really annoying to anyone who tries your code.
For example, in a year a new developer may come to your project. They can be disoriented if your rails server command launches a bunch of other applications in the background.
In the same vein, if you do that you need to maintain this code in your Rails environment, which can be a little tricky. Maintaining an independent script can be more useful.
You can add your script to the script directory; that is good practice. Hiding this behavior behind a standard command whose documentation says nothing about it is not.