Background jobs with Resque on Heroku - ruby-on-rails

I am having a really strange problem on Heroku that I have been spinning my wheels on for a while now.
My app has a few external API calls and mailers which I have set up to run in the background with ActiveJob. On Heroku I have two process types set up in my Procfile, and I am using a Resque/Redis combo for the jobs as per the snippets below. I am using the Redis Cloud add-on on Heroku.
Config / setup
Procfile
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 QUEUE=* bundle exec rake resque:work
lib/tasks/resque.rake
require "resque/tasks"
require "resque/scheduler/tasks"
task "resque:setup": :environment do
Resque.before_fork = proc { ActiveRecord::Base.connection.disconnect! }
Resque.after_fork = proc { ActiveRecord::Base.establish_connection }
end
config/initializers/active_job.rb
Rails.application.config.active_job.queue_adapter = :resque
config/initializers/redis.rb
if ENV["REDISCLOUD_URL"]
$redis = Redis.new(url: ENV["REDISCLOUD_URL"])
end
config/initializers/resque.rb
if Rails.env.production?
  uri = URI.parse ENV["REDISCLOUD_URL"]
  Resque.redis = Redis.new(host: uri.host, port: uri.port,
                           password: uri.password)
else
  Resque.redis = "localhost:6379"
end
The problem
The problem I am having is that when a user is using the app in the browser (i.e., interfacing with the web process) and performs an action which triggers one of the ActiveJob jobs, the job is run "inline" by the web process and not by the resque worker. When I run the specific model method that queues the job in my Heroku app console (opened by running heroku run rails console), it adds the job to Redis and runs it on the resque worker as expected.
Why would one way work properly and the other not? I have looked at almost every tutorial / SO question on the topic and have tried everything, so any help getting the jobs to run on the right worker would be amazing!
Thanks in advance!

I managed to solve the problem by playing with my config a little. It seems that jobs were falling back to ActiveJob's default "inline" behaviour rather than going through Resque. To get things working I just had to point Resque.redis at the $redis variable set in config/initializers/redis.rb, so everything talks to the same Redis instance, and then move the config set in config/initializers/active_job.rb into application.rb.
For reference, the new & improved config that all works is:
Config / setup
Procfile
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=7 QUEUE=* bundle exec rake resque:work
lib/tasks/resque.rake
require "resque/tasks"
task "resque:setup" => :environment
config/application.rb
module App
  class Application < Rails::Application
    ...
    # Set Resque as ActiveJob queue adapter.
    config.active_job.queue_adapter = :resque
  end
end
config/initializers/redis.rb
if ENV["REDISCLOUD_URL"]
$redis = Redis.new(url: ENV["REDISCLOUD_URL"])
end
config/initializers/resque.rb
Resque.redis = Rails.env.production? ? $redis : "localhost:6379"
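As a quick sanity check (not part of the fix itself), something along these lines can be run in heroku run rails console to confirm that jobs really go through Resque; MyJob here is just a placeholder for any job class in the app:
ActiveJob::Base.queue_adapter   # should be the Resque adapter, not Inline/Async
MyJob.perform_later             # enqueue a job the normal way
Resque.info[:pending]           # should rise, then drop once the resque dyno picks the job up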

Thanks a lot for providing the answer. It saved me a lot of time.
You have one typo inside your Procfile.
It should be resque instead of rescue.
resque: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=7 QUEUE=* bundle exec rake resque:work
Also, I had to type in one more command to get this all to work in production. Hopefully this helps someone.
heroku ps:scale resque=1 --app appname
This command scales the resque process to 1 dyno (free). You can also do this from the dashboard on Heroku.
You can read more about it in the Heroku docs: https://devcenter.heroku.com/articles/scaling

Related

Redis to Go - Heroku - Rails

I'm trying to get Sidekiq working on Heroku, without luck. My config looks like this:
Procfile
web: bundle exec passenger start -p $PORT --max-pool-size 5
worker: bundle exec sidekiq
Initializers/redis.rb
uri = URI.parse(ENV["REDISTOGO_URL"] || "redis://localhost:6379/")
REDIS = Redis.new(:url => ENV['REDISTOGO_URL'])
But when I run heroku ps only the web instance is shown, not Sidekiq.
However, I can manually run heroku run sidekiq and my workers run. What am I missing so that Heroku starts it on its own?
The problem is that you didn't configure Sidekiq in an initializer, so Sidekiq doesn't know how to connect to Redis. Create initializers/sidekiq.rb and add the following code:
Sidekiq.configure_server do |config|
  config.redis = { :url => ENV["REDISTOGO_URL"] }
end
Also, you can remove
uri = URI.parse(ENV["REDISTOGO_URL"] || "redis://localhost:6379/")
You don't need it, as you are using the Redis URL from the environment directly.
Don't forget to restart your server.
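Since the web dynos are the ones enqueuing jobs, it can also make sense to mirror this on the client side; this is an extra suggestion rather than part of the original answer:
Sidekiq.configure_client do |config|
  config.redis = { :url => ENV["REDISTOGO_URL"] }
end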

How to get Sidekiq workers running on Heroku

I've set up Sidekiq with my Rails project. It's running on Heroku with Unicorn. I've gone through all the configuration steps, including setting the proper REDISTOGO_URL (as this question references), and I've added the following to my after_fork in unicorn.rb:
after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
  Sidekiq.configure_client do |config|
    config.redis = { :size => 1 }
  end
end
My Procfile is as follows:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq
Right now I call perform_async on my worker and it adds the job to the queue. In fact, the Sidekiq web interface shows there are 7 items in the queue, with all of the data there. Yet there are no workers processing the queue and, for the life of me, I can't figure out why. If I run
heroku ps
I get the following output:
=== web: `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: up 2012/12/09 08:04:24 (~ 9m ago)
=== worker: `bundle exec sidekiq`
worker.1: up 2012/12/09 08:04:08 (~ 10m ago)
Anybody have any idea what's going on here?
Update
Here's the code for my worker class. Yes, I'm aware that the Oj gem potentially has some issues with Sidekiq, but I figured I'd give it a shot first. I'm not getting any error messages at this point (the workers don't even run).
require 'addressable/uri'

class DatasiftInteractionsWorker
  include Sidekiq::Worker
  sidekiq_options queue: "tweets"

  def perform( stream_id , interactions )
    interactions = Oj.load(interactions)
    interactions.each{ |interaction|
      if interaction['interaction']['type'] == 'twitter'
        url = interaction['links']['normalized_url'] unless interaction['links']['normalized_url'][0].nil?
        url = interaction['links']['url'] if interaction['links']['normalized_url'][0].nil?
        begin
          puts interaction['links'] if url[0].nil?
          next if url[0].nil?
          host = Addressable::URI.parse(url[0]).host
          host = host.gsub(/^www\.(.*)$/,'\1')
          date_str = Time.now.strftime('%Y%m%d')
          REDIS.pipelined do
            # Add domain to Redis domains Set
            REDIS.sadd date_str , host
            # INCR Redis host
            REDIS.incr( host + date_str )
          end
        rescue
          puts "ERROR: Could not store the following links: " + interaction['links'].to_s
        end
      end
    }
  end
end
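For completeness, the workers get kicked off with a call along these lines (the stream id and payload shown here are just placeholders, not values from the actual app):
# Hypothetical enqueue call -- arguments are illustrative only.
DatasiftInteractionsWorker.perform_async("stream-123", interactions_json)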
My preference is to create a /config/sidekiq.yml file and then use worker: bundle exec sidekiq -C config/sidekiq.yml in your Procfile.
Here's an example sidekiq.yml file
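A minimal sidekiq.yml along these lines should work; the concurrency value and queue weights are just reasonable defaults, with the queue names assumed from the worker above:
# Illustrative config/sidekiq.yml -- values here are assumptions, not from the original answer.
:concurrency: 5
:queues:
  - [tweets, 1]
  - default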
Figured out that if you're using a custom queue, you need to make Sidekiq aware of that queue in the Procfile, as follows:
worker: bundle exec sidekiq -q tweets,1 -q default
I'm not quite sure why this is the case, since Sidekiq is aware of all queues. I'll post this as an issue on the Sidekiq project.

Running multiple Rails tasks on Heroku with the same worker that runs Resque

With the command below, I start a worker running Resque:
heroku ps:scale worker=1
But I want to know: since I will be paying for a whole worker only to run Resque, why can't I run multiple tasks on it?
For example, I need to poll an AWS SQS queue, and it would be wasteful to have another worker just for this poll.
I tried running them together (Resque plus the AWS SQS listener for converted files), but Heroku crashes the worker:
=== worker: `bundle exec rake worker:all`
worker.1: crashed for 2m
Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake worker:all
worker.rake
namespace :worker do
  task :all => [:environment, "sqs:listen_converted", "resque:work"] do
    puts "All workers started"
  end
end
sqs.rake
task "sqs:listen_converted" => :environment do
puts "Start to listen converted..."
Thread.new do
queue = AWS::SQS::Queue.new(SQSADDR['incoming'])
queue.poll do |msg|
...
end
end
Can it be done? Thanks!!

How to run "rake resque:work QUEUE=*" when Rails server boots?

I have installed resque correctly, but to process all queues I need to run
rake resque:work QUEUE='*'
The problem is that I need to keep the terminal window open, otherwise resque:work won't work.
Do you know a way to auto-run that rake command every time I run "rails server"?
I'm on localhost.
lib/tasks/resque.rake
require 'resque/tasks'
task "resque:setup" => :environment do
ENV['QUEUE'] = "*"
end
Instead of calling the invoke function, you can use a gem like foreman that can start all the other tasks for you.
This is useful if you are looking for a largely platform-neutral solution, and also when deploying to the cloud.
Your Procfile can have the following contents:
web: bundle exec thin start -p $PORT
worker: bundle exec rake resque:work QUEUE=*
clock: bundle exec rake resque:scheduler
Source: introduction to foreman.
Now, to start everything, you just have to issue the foreman start command, which spawns a child process for each entry in the Procfile.
Edit: answer from 2012! It seems this only works for Rails 2!
Add an initializer in config/initializers with something like this:
Rake::Task["resque:work QUEUE='*'"].invoke
Not tested!
The best way to do it is:
ENV['QUEUE'] = "*"
Rake::Task["resque:work"].invoke

Resque workers do not start due to middleware failure

I have resque 1.22.0 installed locally and on a server. To be able to catch MultiJson::DecodeErrors I added the following to my application.rb:
config.middleware.swap ActionDispatch::ParamsParser, ::MyParamsParser
and added the class to my lib folder. In dev mode this works fine, I can rescue from DecodeErrors and I can start a worker using:
QUEUE=* bundle exec rake environment resque:work
In production mode on my server the code itself works as well, but my god process is no longer able to start workers. The error that occurs after god starts a worker:
QUEUE=* /usr/local/rvm/rubies/ruby-1.9.2-p320/bin/ruby /usr/local/rvm/gems/ruby-1.9.2-p320@global/bin/bundle exec rake -f /home/deployer/apps/kassomat/current/Rakefile environment resque:work
rake aborted!
No such middleware to insert before: ActionDispatch::ParamsParser
I tried to patch my application.rb
config.middleware.swap ActionDispatch::ParamsParser, ::MyParamsParser if Object.const_defined?('ActionDispatch') && ActionDispatch.const_defined?('ParamsParser')
but that did not work out. I do not understand why this works in development but fails in production.
Can anyone help?
Regards
Felix
