I have a Rails 3.x application with Resque. I run the resque command with:
nohup rake RAILS_ENV=production environment resque:work QUEUE='*' & >>/tmp/resque.log 2>> /tmp/resque.err.log
Every other day the process dies, but the two output files are always empty. Any other way of figuring out why the Resque process goes down?
Try the Pry console. It is similar to irb, only much more advanced.
You can put binding.pry inside your perform method, or preferably in a hook; it will open a Pry console from which you can debug. This has helped me a lot in similar situations.
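For illustration, a minimal sketch of what that looks like in a Resque job; MyJob and its argument are hypothetical names, and binding.pry needs an attached terminal, so run the worker in the foreground (not under nohup) while debugging:
require 'pry'

class MyJob
  @queue = :default

  def self.perform(record_id)
    binding.pry  # the worker process pauses here and opens an interactive Pry session
    # ... the actual job logic you want to inspect ...
  end
end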
I am a newbie to Ruby on Rails development.
Can someone please explain to me what this command line does: bundle exec rake jobs:work
I don't understand what a worker is or what the command does.
Can someone give me some examples?
Thank you
In Ruby, due to the GIL (global interpreter lock), only one Ruby thread runs at a time (multithreading is supported, but threads only run concurrently while doing IO). To work around this and make things asynchronous, people use sidekiq, delayedjob, etc.
A worker, in this terminology, is a separate background Ruby process that processes jobs, i.e. the tasks you put in the queue. If you use DelayedJob, bundle exec rake jobs:work starts such a process (other background-job gems use other commands).
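As a hedged sketch of that flow with DelayedJob — the User model and its method are just assumed names:
class User < ActiveRecord::Base
  def send_welcome_email
    # slow work (e.g. talking to an external mail server) that we don't want
    # to block the web request
  end
end

# In a controller or console: enqueue the call instead of running it inline.
user = User.find(1)
user.delay.send_welcome_email

A worker started with bundle exec rake jobs:work polls the delayed_jobs table and runs the queued method call in its own process.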
In my Rails app (4.2.4), I have been trying to get asynchronous mail sending to work.
I installed delayed_job as my queue adapter, and set it as the adapter in several places: config/application.rb, config/environments/{development,production}.rb, and config/initializers/active_job.rb.
Installation:
I added this to my Gemfile:
gem 'delayed_job_active_record'
Then, I ran the following commands:
$ bundle install
$ rails generate delayed_job:active_record
$ rake db:migrate
$ bin/delayed_job start
In config/application.rb, config/environments/production.rb, config/environments/development.rb:
config.active_job.queue_adapter = :delayed_job
In config/initializers/active_job.rb (added when the above did not work):
ActiveJob::Base.queue_adapter = :delayed_job
I've also run an ActiveRecord migration for delayed_job, and started bin/delayed_job before running my server.
That being said, any time I try:
UserMailer.welcome_email(@user).deliver_later(wait: 1.minutes)
I get the following error:
NotImplementedError (Use a queueing backend to enqueue jobs in the
future. Read more at http://guides.rubyonrails.org/active_job_basics.html):
app/controllers/user_controller.rb:25:in `create'
config.ru:25:in `call'
I was under the impression that delayed_job is a queueing backend... am I missing something?
EDIT:
I can't get sucker_punch to work either. After installing sucker_punch via Bundler and using:
config.active_job.queue_adapter = :sucker_punch
in config/application.rb, I get the same error and stack trace.
If you are having this issue in your development environment even though you are using an adapter capable of asynchronous jobs like Sidekiq, make sure that Rails.application.config.active_job.queue_adapter is set to :async instead of :inline.
# config/environments/development.rb
Rails.application.config.active_job.queue_adapter = :async
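With the adapter set to :async, the deliver_later call from the question should be enqueued instead of raising (this just reuses the question's own example):
UserMailer.welcome_email(@user).deliver_later(wait: 1.minute)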
Provided you have followed all the steps listed here, my guess is that you didn't start delayed_job by running
bin/delayed_job start
Please also check that you ran
rails generate delayed_job:active_record
rake db:migrate
Try this:
In the controller:
@user.delay.welcome_email
In your model:
def welcome_email
UserMailer.welcome_email(self).deliver_later(wait: 1.minutes)
end
Figured out what it was: I typically start my server and everything associated with it using a single shell script. In this script, I was running bin/delayed_job start in the background, and starting the server before bin/delayed_job start finished. The solution was to make sure delayed_job start finished before starting the server by running it in the foreground in my startup script.
Thanks everyone for all the help!
With the delayed_job gem (https://github.com/collectiveidea/delayed_job) in Rails, I am able to queue my notifications. But I don't quite understand how I can run the queued jobs on the production server. I know I can just run
$ rake jobs:work
in the console for the local server. As the documentation says,
You can then do the following:
RAILS_ENV=production script/delayed_job start
RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
RAILS_ENV=production script/delayed_job -n 2 start
RAILS_ENV=production script/delayed_job stop
# Set the --queue or --queues option to work from a particular queue.
RAILS_ENV=production script/delayed_job --queue=tracking start
RAILS_ENV=production script/delayed_job --queues=mailers,tasks start
# Runs all available jobs and then exits
RAILS_ENV=production script/delayed_job start --exit-on-complete
# or to run in the foreground
RAILS_ENV=production script/delayed_job run --exit-on-complete
My question is how to integrate it with my Rails app? I was thinking of creating a file called delayed_jobs.rb in config/initializers as:
# in config/initializers/delayed_jobs
script/delayed_job start if Rails.env.production?
But I am not sure if this is the right way to do it. Thanks
The workers run as separate processes, not as part of your Rails application. The simplest way would be to run the rake task in a screen session to prevent it from quitting when you log out of the terminal session. But there are better ways:
You would use a system such as monit or God or run the worker script provided by delayed_job. You'll find more information in the answers to this question.
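God watch files are plain Ruby, so a hedged sketch for a delayed_job worker could look like the following; the application path, log path and file location are all assumptions:
# e.g. config/delayed_job.god (location is an assumption)
God.watch do |w|
  w.name  = 'delayed_job'
  w.dir   = '/var/www/myapp/current'
  w.env   = { 'RAILS_ENV' => 'production' }
  # `run` keeps the worker in the foreground so God itself can supervise it
  w.start = 'script/delayed_job run'
  w.log   = '/var/www/myapp/shared/log/delayed_job.god.log'
  w.keepalive
end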
In my experience I've found my solution using the capistrano gem, which in the words of the official doc
It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows.
Basically it is a tool that helps you deploy your app, including all of those tasks like starting/stopping queues, migrating the database, bundling new gems, and all of the things that we usually do over an SSH connection.
Here is a beautiful tutorial about capistrano with WebFaction as hosting. And here is a nice module to blend capistrano and delayed_job. In the end you should only have to care about the development environment, because every time you need to deploy to production, you'll commit to your repository and then
$ cap production deploy
which will manage the whole production environment, stopping/restarting those queues, restarting the app, installing gems and everything else that you can perform through Capistrano scripting.
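For reference, a hedged sketch of how such a deploy hook tends to look, based on the recipe the delayed_job gem ships for Capistrano 2-era deploy files (your setup may differ):
# config/deploy.rb
require 'delayed/recipes'

set :rails_env, 'production'   # the recipe reads this when building the worker command

after 'deploy:stop',    'delayed_job:stop'
after 'deploy:start',   'delayed_job:start'
after 'deploy:restart', 'delayed_job:restart'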
I am using Ruby on Rails 3.0.9 and I am trying to set up the delayed_job gem for my web application in order to send emails this way:
Notifier.delay.send_email(@user)
As written in the official gem documentation, to start my "delayed jobs" I should use one of the following lines of code:
$ RAILS_ENV=production script/delayed_job start
$ RAILS_ENV=production script/delayed_job stop
# Runs two workers in separate processes.
$ RAILS_ENV=production script/delayed_job -n 2 start
$ RAILS_ENV=production script/delayed_job stop
or invoke the rake jobs:work task.
In production mode I prefer to use one of the RAILS_ENV=... statements, but I would like to know where (that is, in which file) I should add that code in order to start the workers on application start (BTW: at this time I am not using Capistrano to deploy my application).
Moreover, I would like to know what exactly "workers" are, whether my VPS hosting (running Ubuntu 10.04 LTS) can run multiple of them, and how to know how many workers my server can run.
Finally, I would like to know what options I can add in the config/initializers/delayed_job.rb file and whether there is any advice or tricks for the Delayed Job gem.
To start your workers on application start I would just call the proper command from an initializer. The code to do this would look like:
system "RAILS_ENV=production #{Rails.root.join('script','delayed_job')} stop"
system "RAILS_ENV=production #{Rails.root.join('script','delayed_job')} -n 2 start"
The path might be a little off and there is most likely a cleaner way to do it, but I don't know of anything off the top of my head.
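As for the initializer options asked about above, a hedged sketch of settings the delayed_job README documents (the values here are arbitrary examples):
# config/initializers/delayed_job.rb
Delayed::Worker.max_attempts = 3              # retries before a job is marked as failed
Delayed::Worker.max_run_time = 5.minutes      # kill jobs that run longer than this
Delayed::Worker.sleep_delay = 10              # seconds to sleep when the queue is empty
Delayed::Worker.destroy_failed_jobs = false   # keep failed jobs in the table for inspection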
I have no problems with running it in development mode via rake jobs:work. However, I'm somehow unable to figure out how to use it in production. I'm using Capistrano for deployment.
Thanks for any advice!
If you install delayed_job as a gem you need to run the generator in order to create script/delayed_job and set its run permissions.
Then you can follow the instructions on How to configure Capistrano for Delayed Job to hook it up in your Capistrano file.
See this answer. In a nutshell, use the Collective Idea fork of delayed_job. It contains a script called delayed_job that can be used.
You can run the generated delayed_job script as follows:
RAILS_ENV=production script/delayed_job start
Hope this helps
My first thought would be to add an after-deploy task in Capistrano to run the rake jobs:work task. You might need to check whether the process is already running and restart it.
If you are running it via rake, couldn't you just run it however often you want via cron? The whenever gem is a great interface to this from Ruby.
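A hedged sketch of a whenever schedule (config/schedule.rb); the interval is an arbitrary assumption, and jobs:workoff (which processes the pending jobs and then exits) is usually a better fit for cron than the never-ending jobs:work:
every 10.minutes do
  rake 'jobs:workoff'
end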