So I just migrated to Rails 5.1.4 and I'm trying to make Active Job work, but the jobs are just stuck in the queue and never processed.
rails: 5.1.4
ruby: 2.4.3
sidekiq: 5.0.5
redis: 4.0.1
sidekiq.yml
---
:verbose: true
:concurrency: 5
:timeout: 60
development:
  :concurrency: 25
staging:
  :concurrency: 50
production:
  :concurrency: 5
:queues:
  - default
  - [high_priority, 2]
sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['ACTIVE_JOB_URL'], network_timeout: 5 }
end
Sidekiq.configure_client do |config|
  config.redis = { url: ENV['ACTIVE_JOB_URL'], network_timeout: 5 }
end
Here is how I perform the job from the Rails console:
TestJob.perform_later
TestJob.rb content:
class TestJob < ApplicationJob
  queue_as :default

  def perform(*args)
    Rails.logger.debug "#{self.class.name}: I'm performing my job with arguments: #{args.inspect}"
  end
end
The jobs are just stuck in the queue and never processed.
Have you started the worker? E.g. in development it might be:
bundle exec sidekiq
In production, Heroku should do this for you, provided you have configured your Procfile, e.g.:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
You can also use your Procfile in development with foreman, e.g.:
foreman start -f Procfile
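One detail worth adding: the per-environment concurrency and the high_priority queue in the sidekiq.yml above only take effect if the worker is started with that file, and depending on the Sidekiq version config/sidekiq.yml may or may not be picked up automatically. Passing it explicitly removes the ambiguity (the -C and -e flags are standard Sidekiq CLI options). In development:
bundle exec sidekiq -C config/sidekiq.yml
And as a Heroku Procfile entry:
worker: bundle exec sidekiq -C config/sidekiq.yml -e production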
Related
In my Rails 6 app I've got an ActiveRecord callback (after_create) which should call SyncProductsWorker (a Sidekiq worker) each time a ProductsBatch record is created:
ProductsBatch model with after_create:
module Imports
  class ProductsBatch < ImportsRecord
    attr_accessor :product_codes

    after_create :enqueue

    def enqueue
      ::Imports::SyncProductsWorker.perform_async(product_codes, self.id)
    end
  end
end
# base class for the above model
class ImportsRecord < ApplicationRecord
  self.abstract_class = true

  connects_to database: { writing: :imports }
end
SyncProductsWorker class:
module Imports
  class SyncProductsWorker
    include Sidekiq::Worker
    sidekiq_options queue: 'imports_sync'

    def perform(list, id)
      # do some things
    end
  end
end
config/sidekiq.yml
:max_retries: 16
:queues:
  - default
  - imports_sync
  - imports_fetch_all
:dynamic: true
config/initializers/sidekiq.rb
redis = { url: ENV['REDIS_URL'], ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } }
Sidekiq.configure_server do |config|
  config.redis = redis
end

Sidekiq.configure_client do |config|
  config.redis = redis
end
Everything works well locally, but when I deploy the code to Heroku the worker doesn't seem to be called. The strange thing is that, based on the Heroku logs, it doesn't even work when I try to call it directly from the Heroku rails console:
heroku run rails console --app test-app
› Warning: heroku update available from 7.47.7 to 7.60.2.
Running rails console on ⬢ test-app... up, run.8489 (Hobby)
Loading production environment (Rails 6.1.4.1)
irb(main):013:0> ::Imports::SyncProductsWorker.perform_async(['11'], 10)
=> "5edf93e27fa2f41245587d49"
But nothing happens in the Heroku logs:
2022-06-06T22:02:00.240650+00:00 app[worker.1]: [ActiveJob] [ProductAvailabilityAdjusterJob] [25c15f9d-e032-438e-bda8-16ffd557cc32] Performed ProductAvailabilityAdjusterJob (Job ID: 25c15f9d-e032-438e-bda8-16ffd557cc32) from Sidekiq(default) in 5.44ms
2022-06-06T22:02:00.240789+00:00 app[worker.1]: pid=4 tid=2xbk class=ProductAvailabilityAdjusterJob jid=91ad7e69e061df9f2f681ef3 elapsed=0.006 INFO: done
Is there anything special I should do to make this worker work on Heroku?
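A quick way to narrow this down from the Heroku console, assuming Sidekiq's standard API (require 'sidekiq/api'), is to check whether the enqueued job actually landed in the imports_sync queue and which queues the running worker process is subscribed to; a worker started without the queue list from sidekiq.yml would never drain that queue:

require 'sidekiq/api'

Sidekiq::Queue.new('imports_sync').size   # jobs waiting in that queue
Sidekiq::ProcessSet.new.each do |process|
  puts process['queues'].inspect          # queues this worker process listens to
end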
I've been getting a whole bunch of uninitialized constant errors lately and I can't figure out why. Below is a specific example. In this example, I am calling a job from within a job. But I am getting a similar uninitialized constant error with many of my other jobs. All jobs are in app/jobs. Am I missing something? Sidekiq has been working just fine until recently.
I've purged my Heroku cache and killed all retries in Sidekiq and I'm still getting these issues. There's something really strange here. With another error I'm getting related to a sidekiq job, I'm getting "wrong number of arguments (given 2, expected 1)". I updated the function in question to receive two arguments weeks ago. Is it possible that Sidekiq is somehow stuck on a cached version of the codebase?
Ruby version: ruby 2.5.3p105
Sidekiq version: 6.0.7
app/jobs/process_email_notifications_job.rb
class ProcessEmailNotificationsJob < ApplicationJob
  queue_as :default

  def perform
    user_ids = UserNotification.where(is_read: false).pluck(:user_id).uniq
    user_ids.each do |user_id|
      ProcessIndividualEmailNotificationsJob.perform_later user_id
    end
  end
end
app/jobs/process_individual_email_notifications_job.rb
class ProcessIndividualEmailNotificationsJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    ...
  end
end
Error message:
2020-05-06T20:07:45.720Z pid=56028 tid=owp0sdcm8 DEBUG: enqueued retry: {"retry":true,"queue":"production_default","class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ProcessIndividualEmailNotificationsJob","args":[{"job_class":"ProcessIndividualEmailNotificationsJob","job_id":"4b31bc4f-d034-4190-b24f-d0464cf81df0","provider_job_id":null,"queue_name":"production_default","priority":null,"arguments":[988],"executions":0,"locale":"en"}],"jid":"0ecc861f5870a7b9a70f176f","created_at":1588794273.2726498,"enqueued_at":1588795006.4009435,"error_message":"uninitialized constant ProcessIndividualEmailNotificationsJob\nDid you mean? ProcessEmailNotificationsJob","error_class":"NameError","failed_at":1588794279.9911764,"retry_count":5,"retried_at":1588795006.763224}
Initializer:
require 'sidekiq'
require 'sidekiq/web'
Sidekiq.configure_client do |config|
  config.redis = { :size => 5 }
end
Sidekiq.configure_server do |config|
  config.redis = { :size => 25 }
end
Sidekiq::Web.set :sessions, false
sidekiq.yml
:concurrency: 18
development:
  :verbose: true
  :queues:
    - [development_priority, 2]
    - development_default
    - development_mailers
staging:
  :queues:
    - [staging_priority, 2]
    - staging_default
    - staging_mailers
production:
  :queues:
    - [production_priority, 2]
    - production_default
    - production_mailers
OK, I seem to have resolved the issue by adding a namespace to Sidekiq's Redis configuration, per what was suggested here: https://github.com/mperham/sidekiq/issues/2834
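For reference, a minimal sketch of that fix, assuming the redis-namespace gem is in the Gemfile (Sidekiq 6 honors the namespace: key in config.redis when that gem is present; the namespace value itself is arbitrary):

require 'sidekiq'
require 'sidekiq/web'

Sidekiq.configure_client do |config|
  # Client and server must use the same namespace, or they will not see each other's jobs.
  config.redis = { :size => 5, :namespace => 'sidekiq' }
end
Sidekiq.configure_server do |config|
  config.redis = { :size => 25, :namespace => 'sidekiq' }
end

Sidekiq::Web.set :sessions, false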
I am having trouble with my Sidekiq, Heroku, Redis To Go, Rails 4 configuration. I have 1 dyno and 1 worker on Heroku. I am just using the worker for a GET request to an external API.
Here is the error I get in my Heroku logs:
app[worker.1]: could not connect to server: No such file or directory
app[worker.1]: Is the server running locally and accepting
app[worker.1]: connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my config/initializers/sidekiq.rb
if Rails.env.production?
  Sidekiq.configure_client do |config|
    config.redis = { url: ENV['REDISTOGO_URL'] }
  end

  Sidekiq.configure_server do |config|
    config.redis = { url: ENV['REDISTOGO_URL'] }

    Rails.application.config.after_initialize do
      Rails.logger.info("DB Connection Pool size for Sidekiq Server before disconnect is: #{ActiveRecord::Base.connection.pool.instance_variable_get('@size')}")
      ActiveRecord::Base.connection_pool.disconnect!

      ActiveSupport.on_load(:active_record) do
        config = Rails.application.config.database_configuration[Rails.env]
        config['reaping_frequency'] = ENV['DATABASE_REAP_FREQ'] || 10 # seconds
        # config['pool'] = ENV['WORKER_DB_POOL_SIZE'] || Sidekiq.options[:concurrency]
        config['pool'] = 16
        ActiveRecord::Base.establish_connection(config)
        Rails.logger.info("DB Connection Pool size for Sidekiq Server is now: #{ActiveRecord::Base.connection.pool.instance_variable_get('@size')}")
      end
    end
  end
end
Here is my Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e production -C config/sidekiq.yml
Here is my config/sidekiq.yml
development:
  :concurrency: 5
production:
  :concurrency: 20
:queues:
  - default
Here is my config/initializers/redis.rb
uri = URI.parse(ENV["REDISTOGO_URL"])
REDIS = Redis.new(:url => uri)
Here is my config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 1)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
before_fork do
  puts "Puma master process about to fork. Closing existing Active record connections."
  ActiveRecord::Base.connection.disconnect!
end

on_worker_boot do
  ActiveRecord::Base.establish_connection
end
As per your Heroku logs, the error is coming from the PostgreSQL server, not from Sidekiq, so you need to fix that first.
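That socket error usually means the worker process is falling back to a local Postgres socket instead of the DATABASE_URL that Heroku provisions. As a rough sketch of a config/database.yml production block that defers to that variable (Rails 4.1+ also merges DATABASE_URL into the database config automatically when it is set, so this is only needed if database.yml overrides it):

production:
  url: <%= ENV['DATABASE_URL'] %>
  # the pool should cover the worker's Sidekiq :concurrency: 20 plus a little headroom
  pool: 25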
I have absolutely no idea how to run my resque-scheduler. When I enqueue a single task and run it manually it works fine, but when I try to run the scheduler using the command rake resque:scheduler --trace, I get ArgumentError: unsupported signal SIGUSR1. Below are the files needed for resque-scheduler:
config/initializers/resque.rb
require 'resque/failure/multiple'
require 'resque/failure/redis'
Resque::Failure::Multiple.classes = [Resque::Failure::Redis]
Resque::Failure.backend = Resque::Failure::Multiple
Dir[File.join(Rails.root, 'app', 'jobs', '*.rb')].each { |file| require file }
config = YAML.load(File.open("#{Rails.root}/config/resque.yml"))[Rails.env]
Resque.redis = Redis.new(host: config['host'], port: config['port'], db: config['db'])
config/resque.yml
defaults: &defaults
  host: localhost
  port: 6379
  db: 6

development:
  <<: *defaults

test:
  <<: *defaults

staging:
  <<: *defaults

production:
  <<: *defaults
lib/tasks/resque.rake
require 'resque/tasks'
require 'resque/scheduler/tasks'
require 'yaml'
task 'resque:setup' => :environment
namespace :resque do
  task :setup_schedule => :setup do
    require 'resque-scheduler'

    # If you want to be able to dynamically change the schedule,
    # uncomment this line. A dynamic schedule can be updated via the
    # Resque::Scheduler.set_schedule (and remove_schedule) methods.
    # When dynamic is set to true, the scheduler process looks for
    # schedule changes and applies them on the fly.
    # Note: This feature is only available in >=2.0.0.
    # Resque::Scheduler.dynamic = true

    # The schedule doesn't need to be stored in a YAML, it just needs to
    # be a hash. YAML is usually the easiest.
    Resque.schedule = YAML.load_file(File.open("#{Rails.root}/config/resque_schedule.yml"))
  end

  task :scheduler => :setup_schedule
end
config/resque_schedule.yml
run_my_job:
  cron: '30 6 * * 1'
  class: 'MyJob'
  queue: myjob
  args:
  description: "Runs MyJob"
Here's the error message for the rake resque:scheduler command:
ArgumentError: unsupported signal SIGUSR1
Just found out that Windows doesn't support the SIGUSR1 signal (Windows only supports a small set of signals). The solution is to run the scheduler on another OS, such as Ubuntu, where it works with no problems.
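A quick way to confirm which signals the current platform supports is Ruby's built-in Signal.list (standard Ruby, not resque-specific):

Signal.list.key?('USR1')  # => true on Linux/macOS, false on Windows
Signal.list.keys          # full list of signal names this platform supports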
In development, it runs as I would expect: with 5 threads (limited at the moment due to the Redis connection limit) it averages about 5-7 running at a time, depending on whether the worker has anything to do (sometimes a worker decides not to work, since the object it is working on was updated less than a few days ago).
In production, it behaves differently: it seems to run in bursts of around 400, then immediately reschedules the workers, waits a bit, and then fires another burst.
The workers talk to the Facebook API (koala gem), for which I use sidekiq-throttler (https://github.com/gevans/sidekiq-throttler)
with the options
sidekiq_options throttle: { threshold: 50, period: 60.seconds , key: ->(user_id){ "facebook:#{user_id}"} }
I am using Heroku and Redis Labs (free plan at the moment) with the Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -c 5
and the Sidekiq setup:
Sidekiq.configure_server do |config|
  config.redis = { :url => "#{ENV['REDISCLOUD_URL']}", :namespace => 'sidekiq' }

  config.server_middleware do |chain|
    chain.add Sidekiq::Throttler, storage: :redis
  end
end

Sidekiq.configure_client do |config|
  config.redis = { :url => "#{ENV['REDISCLOUD_URL']}", :namespace => 'sidekiq' }
end
Is this a known symptom of something?
Looks like it's being throttled, as expected.
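For context, a sketch of what those options mean in practice, assuming sidekiq-throttler's documented behavior of requeueing over-limit jobs for later (the worker class name here is hypothetical):

class FacebookSyncWorker
  include Sidekiq::Worker

  # At most 50 executions per 60-second window per "facebook:<user_id>" key;
  # jobs over the limit are pushed back onto the schedule by the
  # Sidekiq::Throttler server middleware, which can show up as bursts of
  # activity followed by idle periods in the Sidekiq dashboard.
  sidekiq_options throttle: { threshold: 50, period: 60.seconds,
                              key: ->(user_id) { "facebook:#{user_id}" } }

  def perform(user_id)
    # Koala calls against the Facebook API go here
  end
end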