I'm using the delayed_job Ruby gem just fine.
It defaults to a single worker, so I've gone ahead and done:
script/delayed_job stop
script/delayed_job -n 5 start
to ensure there are 5 workers.
However, when I reboot (or when the system decides to reboot), the Rails app boots back up with only a single delayed_job worker.
How can I change the default number of workers? It doesn't seem to be listed at https://github.com/collectiveidea/delayed_job.
It turns out the mechanism which ensured delayed_job started up on reboot was being controlled in my scenario by the 'whenever' gem's config/schedule.rb.
The number of workers may also change if you're using Capistrano with the delayed_job recipes. You'll have to ensure both places agree on the same worker count.
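For reference, a minimal sketch of what such a schedule.rb entry might look like — the application path, environment and worker count here are assumptions, not taken from the question:

# config/schedule.rb
every :reboot do
  command "cd /path/to/app && RAILS_ENV=production script/delayed_job -n 5 start"
end

Whenever's every :reboot shortcut becomes an @reboot crontab entry, so the same -n 5 you pass by hand is what runs after a restart.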
Related
I'm trying to use cron in my application to send mails every week but I think it doesn't work on Windows.
Does anybody know of an equivalent to cron that works on Windows?
The Windows equivalent of Unix's cron is the Task Scheduler. You can configure your periodic task there.
Purely Ruby solution
If you want a purely Ruby solution look into:
rufus-scheduler - a pure-Ruby scheduler gem that also works on Windows (see the sketch after this list).
crono - an in-Rails cron scheduler, so it should work anywhere Rails runs.
Web services - there are plenty of free online services that will make a request to a given URL at specified intervals. This is basically a poor man's cron job.
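A minimal rufus-scheduler sketch, assuming the gem is in your Gemfile and that you have some mailer to call (WeeklyMailer here is a made-up name):

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# run every Monday at 08:00
scheduler.cron '0 8 * * mon' do
  WeeklyMailer.digest.deliver_now  # hypothetical mailer call
end

scheduler.join  # keep the process alive when run as a standalone script

In a Rails app you would typically put this in an initializer (without the join call, since the server process stays alive on its own).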
I recommend taking a look at Resque and the extension Resque-scheduler gems. You will need to have a resque scheduler process running with bundle exec rake resque:scheduler and at least one worker process running with QUEUE=* bundle exec rake resque:work.
If you want these services to run in the background as a windows service, you can do it with srvany.exe as described in this SO question.
The above assumes you are ok with installing Redis - a key-value store that is very popular among the Rails community as it can be easily used to support other Rails components such as caching and ActionCable, and it is awesome by itself for many multi-process use cases.
Resque is a queue system on top of Redis that allows you to define jobs that can be executed asynchronously in the background. When you run QUEUE=* bundle exec rake resque:work, a worker process runs constantly and polls the queue. Once a job is enqueued, an available worker pops it from the queue and starts working on it. This architecture is quite scalable, as you can have multiple workers listening to the queues if you'd like.
To define a job, you do this:
class MyWeeklyEmailSenderJob
  def self.perform
    # Your code to send weekly emails
  end
end
While you can enqueue this job to the queue yourself from anywhere (e.g. from a controller as a response to an action), in your case you want it to automatically be placed into the queue once a week. This is what Resque-scheduler is for. It allows you to configure a file such as app/config/resque_schedule.yml in which you can define which jobs should be enqueued in which time interval. For example:
send_weekly_emails:
  cron: "0 8 * * Mon"
  class: MyWeeklyEmailSenderJob
  queue: email_sender_queue
  description: "Send weekly emails"
Remember that the scheduler process has to be running (started with bundle exec rake resque:scheduler) in order for these scheduled jobs to be enqueued.
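As mentioned above, you can also enqueue the job manually from anywhere in your app, e.g. from a controller action. A minimal sketch; note that Resque.enqueue reads the target queue from a @queue variable on the job class, so you would add @queue = :email_sender_queue to the class shown earlier:

# somewhere in a controller or service object
Resque.enqueue(MyWeeklyEmailSenderJob)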
Thanks guys. Actually I tried the rufus-scheduler gem and it worked for me; I guess it's the best and easiest solution.
I experimented with a Rake task with Cron. I started with no Ruby processes, then the Cron job started and spawned one process. The highlighted process below is what is run by Cron, which is expected:
I wanted to check if any records were being written to the database. I ran rails c to enter the Rails console, and noticed that suddenly four other ruby processes showed up in my process list as above. Why would this happen? I think that running the console should create one other process and not four.
After quitting the console, I am left with three Ruby processes including the Rake task that is running.
I am using Rails 4.2.
It's not that I find this to be problematic, but I am curious why there would need to be more than one process for a REPL and then two leftover processes after the REPL is closed.
This is because of Spring, which has shipped with Rails by default for a little while now.
You might notice that the second time you run rails c it is a lot faster than the first time. This is because the first time you run a Spring-ified script your app is loaded as normal, and the loader then forks to run what you requested. The second time around, that loader process can simply fork again, so you get a much faster startup. This is possible because of the extra processes you have noticed.
You can see them by running
spring status
And you can kill them by running
spring stop
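If you want to bypass Spring for a single command while checking process counts, Spring honours the DISABLE_SPRING environment variable (standard Spring behaviour, not something specific to this setup):

DISABLE_SPRING=1 rails c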
It's nice to see Rails 4.2 ship with Active Job as a common interface for background jobs. But I can't find how to start a worker in the documentation. It seems the documentation is still immature (e.g. the right version of Sneakers is only referred to in Rails' Gemfile), so I'm not sure if the "running workers" part is missing from Active Job or just not mentioned in the docs.
So with Active Job, do I still need to manually start the job watcher processes like sidekiq or, in my case, rake sneakers:run? If so, where should I put these commands so that rails server runs these parallel tasks automatically in a development environment?
ActiveJob is just a common interface. You still need the backend gem, and you still need to launch it separately from your server (it is a separate process, which is the whole point).
Sample using resque:
In the Gemfile:
gem 'resque'
In the terminal, launching a worker:
bin/resque work
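In addition, Active Job needs to be pointed at the backend adapter. A minimal sketch assuming Resque as above; the job class name and argument are made up for illustration:

# config/application.rb
config.active_job.queue_adapter = :resque

# app/jobs/send_weekly_email_job.rb
class SendWeeklyEmailJob < ActiveJob::Base
  queue_as :default

  def perform(user_id)
    # look up the user and send the email
  end
end

# enqueue from anywhere in the app
SendWeeklyEmailJob.perform_later(42)

The worker process launched above is what actually picks up and runs these jobs (make sure it listens on the queue the job is placed in).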
The case is similar when using Sidekiq, Delayed Job or something else.
If you want to launch the server & worker in a single command, you can create a short bash script for it, but I would advise against doing so: having two separate consoles helps you watch what is happening on each side (web app & worker).
A better solution would be to use the foreman gem to manage starting & stopping your processes.
You can create a simple Procfile with the processes to start:
web: bundle exec rails s
job: bundle exec resque work
And then just start both using foreman:
foreman start
By default, foreman will interleave the logs of the process in the console, but this can be configured.
You still have to run the job watcher process yourself.
I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better practice for running these daemons in the background? And are there any side effects when these tasks run in the background?
My solution to that is to use a mix of god, capistrano and whenever. A specific problem I have is that I want all app processes to run as a regular user, so init.d scripts are not an option (they could be made to work, but user switching / environment loading is quite a pain).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful (a minimal config sketch follows the list below):
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect a process crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
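A hedged app.god sketch — the paths, group name and worker command here are assumptions, not taken from this answer:

# config/app.god
rails_root = "/home/my_app/production/current"

God.watch do |w|
  w.name  = "resque-worker"
  w.group = "production"
  w.dir   = rails_root
  w.start = "bundle exec rake resque:work QUEUE='*'"
  w.log   = "#{rails_root}/log/resque-worker.log"
  w.keepalive
end

Running god -c config/app.god then starts (and keeps alive) everything declared in that file.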
Whenever
You still have to start god on server restart. A good means to do so is the user crontab. Most cron implementations have a special instruction called @reboot, which allows you to run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that makes crontab management easy, including generating the reboot command. While it's not absolutely necessary for achieving what I describe, it's really useful for its capistrano integration.
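For illustration, the crontab line above could be generated from config/schedule.rb with something like this (the every :reboot shortcut is standard whenever syntax; the command simply mirrors the crontab entry shown earlier):

every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end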
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano allows you to handle that easily: just ask god to restart the whole group (like god restart production) in a post-deploy capistrano task, and it will be handled seamlessly.
Whenever's capistrano integration also ensures your crontab is always up to date, updating it if you change your config/schedule.rb file.
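A hedged sketch of how those two pieces might look; the answer does not specify a Capistrano version, so the Capistrano 3 task syntax below is an assumption:

# Capfile
require "whenever/capistrano"

# config/deploy.rb
namespace :deploy do
  desc "Restart background processes via god"
  task :restart_workers do
    on roles(:app) do
      execute "god restart production"
    end
  end
end
after "deploy:published", "deploy:restart_workers"

With Capistrano 2 the equivalent would be an after-deploy hook running the same god restart command.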
You can use something like foreman to manage these processes. You define the process types in a Procfile, and foreman can start, stop and manage them as a group.
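A Procfile sketch built from the commands in the question — note the foreground variants are assumptions (e.g. sunspot:solr:run instead of sunspot:solr:start), since foreman expects processes that stay in the foreground:

# Procfile
redis: /usr/local/bin/redis-server /usr/local/etc/redis.conf
solr: bundle exec rake sunspot:solr:run
worker: bundle exec rake resque:work QUEUE='*'

Then foreman start brings all three up together, and foreman export can generate init scripts if you want them started at boot.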
I have a rails app with the whenever gem installed to setup cron jobs which invoke various rake tasks. For reasons unbeknownst to me, each rake task gets invoked twice at precisely the same time. So my db backup task backs up the db twice at 4:00am.
Inspecting the crontab reveals correct syntax for all of the cron jobs, so I don't think this is an issue with the whenever gem configuring the cron jobs incorrectly. Also confusing: in both the staging and production environments I can invoke the tasks on the command line and they only run once.
Any thoughts on what would cause this? I'm at a complete loss troubleshooting wise.
The number of cron jobs that run depends on the number of application instances deployed on the server box, since whenever writes a separate block of entries for each one. Are you running two instances of the Rails application on the same server box?
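To check for this, you can inspect the generated crontab and look for two whenever-generated blocks (whenever marks its entries with identifier comments); whenever also provides a flag to clear a block by identifier — the identifier name below is made up:

crontab -l
whenever --clear-crontab my_app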