Sidekiq logs show JobWrapper instead of Job class name - ruby-on-rails

I have a Rails application that runs some background jobs via ActiveJob and Sidekiq. The Sidekiq logs, both in the terminal and in the log file, show the following:
2016-10-18T06:17:01.911Z 3252 TID-oukzs4q3k ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-97318b38b1391672d21feb93 INFO: start
Is there some way to show the class names of the jobs here similar to how logs work for a regular Sidekiq Worker?
Update:
Here is how a Sidekiq worker logs:
2016-10-18T11:05:39.690Z 13678 TID-or4o9w2o4 ClientJob JID-b3c71c9c63fe0c6d29fd2f21 INFO: start
Update 2:
My Sidekiq version is 3.4.2
I'd like to replace ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper with ClientJob

So I managed to do this by removing Sidekiq::Middleware::Server::Logging from the middleware configuration and adding a modified class that displays the arguments in the logs. The arguments themselves contain the job and action names as well.
For the latest version (currently 4.2.3), in sidekiq.rb:
require 'sidekiq'
require 'sidekiq/middleware/server/logging'

class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def log_context(worker, item)
    klass = item['wrapped'.freeze] || worker.class.to_s
    "#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}"
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.remove Sidekiq::Middleware::Server::Logging
    chain.add ParamsLogging
  end
end
For version 3.4.2, or similar, override the call method instead:
class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def call(worker, item, queue)
    klass = item['wrapped'.freeze] || worker.class.to_s
    Sidekiq::Logging.with_context("#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}") do
      begin
        start = Time.now
        logger.info { "start" }
        yield
        logger.info { "done: #{elapsed(start)} sec" }
      rescue Exception
        logger.info { "fail: #{elapsed(start)} sec" }
        raise
      end
    end
  end
end

You must be running some ancient version. Upgrade.
Sorry, it looks like that's a Rails 5+ feature only. You'll need to upgrade Rails: https://github.com/rails/rails/commit/8d2b1406bc201d8705e931b6f043441930f2e8ac

Related

How to run background jobs during cucumber tests?

What is the best way to test something that requires background jobs with Cucumber? I need to run DelayedJob and Sneakers workers in the background while tests are running.
You can run any application in the background:
@pid = Process.spawn "C:/Apps/whatever.exe"
Process.detach(@pid)
And even kill it after tests are done:
Process.kill('KILL', @pid) unless @pid.nil?
You can create your own step definition in features/step_definitions/whatever_steps.rb (hopefully with a better name)
When /^I wait for background jobs to complete$/ do
  Delayed::Worker.new.work_off
end
That can be extended for any other scripts you'd like to run with that step. Then in the test, it goes something like:
Then I should see the text "..."
When I wait for background jobs to complete
And I refresh the page
Then I should see the text "..."
If anyone has a similar problem, I ended up writing this (thanks to a Square blog post):
require "timeout"
class CucumberExternalWorker
attr_accessor :worker_pid, :start_command
def initialize(start_command)
raise ArgumentError, "start_command was expected" if start_command.nil?
self.start_command = start_command
end
def start
puts "Trying to start #{start_command}..."
self.worker_pid = fork do
start_child
end
at_exit do
stop_child
end
end
private
def start_child
exec({ "RAILS_ENV" => Rails.env }, start_command)
end
def stop_child
puts "Trying to stop #{start_command}, pid: #{worker_pid}"
# send TERM and wait for exit
Process.kill("TERM", worker_pid)
begin
Timeout.timeout(10) do
Process.waitpid(worker_pid)
puts "Process #{start_command} stopped successfully"
end
rescue Timeout::Error
# Kill process if could not exit in 10 seconds
puts "Sending KILL signal to #{start_command}, pid: #{worker_pid}"
Process.kill("KILL", worker_pid)
end
end
end
This can be called as follows (I added it to env.rb for Cucumber):
# start delayed job
$delayed_job_worker = CucumberExternalWorker.new("rake jobs:work")
$delayed_job_worker.start
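The question also mentions Sneakers; the same wrapper can start any long-running worker command. A minimal sketch, assuming the sneakers gem's sneakers:run rake task is loaded in your Rakefile (adjust the command to whatever actually starts your workers):
# start sneakers workers (assumes `require 'sneakers/tasks'` in the Rakefile)
$sneakers_worker = CucumberExternalWorker.new("rake sneakers:run")
$sneakers_worker.start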

Log inside Sidekiq worker

I'm trying to log the progress of my Sidekiq worker using tail -f log/development.log in development and heroku logs in production.
However, everything inside the worker and everything called by the worker does not get logged. In the code below, only TEST 1 gets logged.
How can I log everything inside the worker and the classes the worker calls?
# app/controllers/TasksController.rb
def import_data
  Rails.logger.info "TEST 1" # shows up in development.log
  DataImportWorker.perform_async
  render "done"
end

# app/workers/DataImportWorker.rb
class DataImportWorker
  include Sidekiq::Worker

  def perform
    Rails.logger.info "TEST 2" # does not show up in development.log
    importer = Importer.new
    importer.import_data
  end
end

# app/controllers/services/Importer.rb
class Importer
  def import_data
    Rails.logger.info "TEST 3" # does not show up in development.log
  end
end
Update
I still don't understand why Rails.logger.info or Sidekiq.logger.info doesn't log to the log stream. I got it working by replacing Rails.logger.info with puts.
There is a Sidekiq.logger, and simply a logger reference, that you can use within your workers. The default output is STDOUT, and in production you should just direct your output to the log file path of your choice.
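As a minimal sketch reusing the DataImportWorker from the question (the logger call comes from Sidekiq::Worker and writes to Sidekiq's own output, not to development.log):
class DataImportWorker
  include Sidekiq::Worker

  def perform
    logger.info "TEST 2"          # appears in the Sidekiq process output (STDOUT by default)
    Sidekiq.logger.info "TEST 2b" # same logger, also usable outside worker classes
  end
end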
This works in Rails 6:
# config/initializers/sidekiq.rb
Rails.logger = Sidekiq.logger
ActiveRecord::Base.logger = Sidekiq.logger
@migu, have you tried the command below in a config/initializers file?
Rails.logger = Sidekiq::Logging.logger
I found this solution here; it seems to work well.
Sidekiq uses the Ruby Logger class with a default log level of INFO, and its settings are independent of Rails.
You can set the log level for the logger used by Sidekiq in config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.logger.level = Rails.logger.level
end
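If you also want Sidekiq's output in a file of your choosing rather than STDOUT, a rough sketch (the Sidekiq.logger= assignment is version-dependent; older releases expose the same thing via Sidekiq::Logging.logger=, so check your version's documentation):
# config/initializers/sidekiq.rb
Sidekiq.logger = Logger.new(Rails.root.join("log", "sidekiq.log").to_s)
Sidekiq.logger.level = Rails.logger.level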

How to tell if sidekiq is connected to redis server?

Using the console, how can I tell if Sidekiq is connected to a Redis server? I want to be able to do something like this:
if (sidekiq is connected to redis) # pseudo-code
  MrWorker.perform_async('do_work', user.id)
else
  MrWorker.new.perform('do_work', user.id)
end
You can use Redis info provided by Sidekiq:
redis_info = Sidekiq.redis { |conn| conn.info }
redis_info['connected_clients'] # => "16"
Took it from Sidekiq's Sinatra status app.
I made this method in Rails based on the answer above; it returns true if connected and false if not.
def redis_connected?
  !!Sidekiq.redis(&:info) rescue false
end
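With that helper in place, the pseudo-code from the question becomes:
if redis_connected?
  MrWorker.perform_async('do_work', user.id)
else
  MrWorker.new.perform('do_work', user.id)
end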
It sounds like you want to know if there is a Sidekiq process up and running to process jobs at a given point in time. With Sidekiq 3.0, you can do this:
require 'sidekiq/api'

ps = Sidekiq::ProcessSet.new
if ps.size > 0
  MyWorker.perform_async(1,2,3)
else
  MyWorker.new.perform(1,2,3)
end
Sidekiq::ProcessSet gives you almost real-time (updated every 5 sec) info about any running Sidekiq processes.
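Each entry in the set is a Sidekiq::Process whose heartbeat attributes can be read with hash-style access; as a rough sketch (the keys shown are the ones Sidekiq publishes in its heartbeat):
require 'sidekiq/api'

Sidekiq::ProcessSet.new.each do |process|
  # e.g. "worker-host pid=1234 busy=3/25"
  puts "#{process['hostname']} pid=#{process['pid']} busy=#{process['busy']}/#{process['concurrency']}"
end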
Jumping off @overallduka's answer, for those using the okcomputer gem, this is the custom check I set up:
class SidekiqCheck < OkComputer::Check
  def check
    if sidekiq_accessible?
      mark_message "ok"
    else
      mark_failure
    end
  end

  private

  def sidekiq_accessible?
    begin
      Sidekiq.redis { |conn| conn.info }
    rescue Redis::CannotConnectError
    end.present?
  end
end

OkComputer::Registry.register "sidekiq", SidekiqCheck.new
begin
  MrWorker.perform_async('do_work', user.id)
rescue Redis::CannotConnectError => e
  MrWorker.new.perform('do_work', user.id)
end

How can I run a rake task from a delayed_job

I'd like to run a rake task (apn:notifications:deliver from the apn_on_rails gem) from a delayed_job. In other words, I'd like to enqueue a delayed job which will call the apn:notifications:deliver rake task.
I found this code http://pastie.org/157390 from http://geminstallthat.wordpress.com/2008/02/25/run-rake-tasks-with-delayedjob-dj/.
I added this code as DelayedRake.rb to my lib directory:
require 'rake'
require 'fileutils'
class DelayedRake
  def initialize(task, options = {})
    @task = task
    @options = options
  end

  ##
  # Called by Delayed::Job.
  def perform
    FileUtils.cd RAILS_ROOT
    @rake = Rake::Application.new
    Rake.application = @rake

    # Load all the Rake tasks.
    Dir["./lib/tasks/**/*.rake"].each { |ext| load ext }

    @options.stringify_keys!.each do |key, value|
      ENV[key] = value
    end

    begin
      @rake[@task].invoke
    rescue => e
      RAILS_DEFAULT_LOGGER.error "[ERROR]: task \"#{@task}\" failed. #{e}"
    end
  end
end
Everything runs fine until the delayed_job runs, at which point it complains:
[ERROR]: task "apn:notifications:deliver" failed. Don't know how to build task 'apn:notifications:deliver'
How do I let it know about apn_on_rails? I tried require 'apn_on_rails_tasks' at the top of DelayedRake, which didn't do anything. I also tried changing the rake task glob to ./lib/tasks/*.rake.
I'm somewhat new to Ruby/Rails. This is running on Rails 2.3.5 on Heroku.
Why not just do a system call?
system "rake apn:notifications:deliver"
I believe it's easier if you call it as a separate process. See "5 ways to run commands from Ruby".
def perform
  `rake -f #{Rails.root.join("Rakefile")} #{@task}`
end
If you want to capture any errors, you should capture STDERR as shown in the article.
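As a rough sketch of that idea, assuming the same @task instance variable as the class above, the standard library's Open3 captures both streams plus the exit status:
require 'open3'

def perform
  stdout, stderr, status = Open3.capture3("rake", "-f", Rails.root.join("Rakefile").to_s, @task)
  Rails.logger.error "[ERROR]: task \"#{@task}\" failed: #{stderr}" unless status.success?
  stdout
end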

Phusion Passenger + Workling + RabbitMQ

I am trying to deploy a Rails app that does some asynchronous tasks. I use Workling for that, and the message queue is RabbitMQ. This combination worked flawlessly with Starling, but we decided to switch the MQ to Rabbit.
I read somewhere that I should include the following code in my environment.rb:
require 'mq'

if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      if EM.reactor_running?
        EM.stop_event_loop
        EM.release_machine
        EM.instance_variable_set('@reactor_running', false)
      end
      Thread.current[:mq] = nil
      AMQP.instance_variable_set('@conn', nil)
    end
    th = Thread.current
    Thread.new {
      AMQP.connect(:host => 'localhost') {
        th.wakeup
      }
    }
    Thread.stop
  end
end
But now Apache fails completely with the message: The server encountered an internal error or misconfiguration and was unable to complete your request.
EDIT: I've improved the code below somewhat since posting this. Available here: http://www.hiringthing.com/2011/11/04/eventmachine-with-rails.html
I just spent a million years trying to get this to work, and finally did. Here is my code:
require 'amqp'

module HiringThingEM
  def self.start
    if defined?(PhusionPassenger)
      PhusionPassenger.on_event(:starting_worker_process) do |forked|
        if forked && EM.reactor_running?
          EM.stop
        end
        Thread.new {
          EM.run do
            AMQP.channel ||= AMQP::Channel.new(AMQP.connect(:host => Q_SERVER, :user => Q_USER, :pass => Q_PASS, :vhost => Q_VHOST))
          end
        }
        die_gracefully_on_signal
      end
    end
  end

  def self.die_gracefully_on_signal
    Signal.trap("INT")  { EM.stop }
    Signal.trap("TERM") { EM.stop }
  end
end

HiringThingEM.start
Now I can use:
EM.next_tick { AMQP.channel.queue(Q_Q).publish("hi mom") }
inside the controllers of my Rails app.
Hope this helps someone.
Not really an answer, but unless you're committed to AMQP, I would recommend using https://github.com/defunkt/resque - it does the asynchronous job + fork gig very nicely.
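For context, Resque's basic API looks roughly like this (a sketch; the ArchiveJob name and arguments are made up for illustration):
class ArchiveJob
  @queue = :archive

  def self.perform(repo_id)
    # do the background work here
  end
end

# enqueue from anywhere in the app
Resque.enqueue(ArchiveJob, 42)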
