Log inside Sidekiq worker - ruby-on-rails

I'm trying to log the progress of my Sidekiq worker using tail -f log/development.log in development and heroku logs in production.
However, nothing inside the worker, and nothing in the classes the worker calls, gets logged. In the code below, only TEST 1 shows up.
How can I log everything inside the worker and the classes the worker calls?
# app/controllers/TasksController.rb
def import_data
  Rails.logger.info "TEST 1" # shows up in development.log
  DataImportWorker.perform_async
  render "done"
end
# app/workers/DataImportWorker.rb
class DataImportWorker
  include Sidekiq::Worker

  def perform
    Rails.logger.info "TEST 2" # does not show up in development.log
    importer = Importer.new
    importer.import_data
  end
end
# app/controllers/services/Importer.rb
class Importer
  def import_data
    Rails.logger.info "TEST 3" # does not show up in development.log
  end
end
Update
I still don't understand why Rails.logger.info or Sidekiq.logger.info doesn't log into the log stream. I got it working by replacing Rails.logger.info with puts.

There is a Sidekiq.logger, and a plain logger reference, that you can use within your workers. The default output is STDOUT; in production you should simply redirect that output to the log file path of your choice.
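For example (a sketch only, adapted to the worker in the question; the log/sidekiq.log path is just an assumption, not a Sidekiq default):
# app/workers/DataImportWorker.rb (sketch)
class DataImportWorker
  include Sidekiq::Worker

  def perform
    logger.info "TEST 2"          # `logger` is provided by Sidekiq::Worker
    Sidekiq.logger.info "TEST 2b" # same logger, referenced explicitly
  end
end

# In production, redirect the Sidekiq process output to a file of your choice, e.g.:
#   bundle exec sidekiq >> log/sidekiq.log 2>&1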

It works in Rails 6:
# config/initializers/sidekiq.rb
Rails.logger = Sidekiq.logger
ActiveRecord::Base.logger = Sidekiq.logger
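If you only want this redirection while the Sidekiq server process is running (so the web process keeps its normal Rails log), a common variant is to wrap it in configure_server; this is a sketch, not part of the answer above:
# config/initializers/sidekiq.rb (sketch)
Sidekiq.configure_server do |config|
  # Only the Sidekiq server process reassigns these loggers.
  Rails.logger = Sidekiq.logger
  ActiveRecord::Base.logger = Sidekiq.logger
end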

@migu, have you tried the line below in a config/initializers file?
Rails.logger = Sidekiq::Logging.logger

I've found this solution here, and it seems to work well.
Sidekiq uses the Ruby Logger class with a default log level of INFO, and its settings are independent from Rails.
You may set the log level for the Logger used by Sidekiq in config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.logger.level = Rails.logger.level
end

Related

Sidekiq logs show JobWrapper instead of Job class name

I have a Rails application that runs some background jobs via ActiveJob and Sidekiq. The Sidekiq logs in both the terminal and the log file show the following:
2016-10-18T06:17:01.911Z 3252 TID-oukzs4q3k ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-97318b38b1391672d21feb93 INFO: start
Is there some way to show the class names of the jobs here similar to how logs work for a regular Sidekiq Worker?
Update:
Here is how a Sidekiq worker logs:
2016-10-18T11:05:39.690Z 13678 TID-or4o9w2o4 ClientJob JID-b3c71c9c63fe0c6d29fd2f21 INFO: start
Update 2:
My sidekiq version is 3.4.2
I'd like to replace ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper with ClientJob.
So I managed to do this by removing Sidekiq::Middleware::Server::Logging from the middleware configuration and adding a modified class that displays the arguments in the logs. The arguments themselves contain the job and action names as well.
For the latest version, currently 4.2.3, in sidekiq.rb:
require 'sidekiq'
require 'sidekiq/middleware/server/logging'

class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def log_context(worker, item)
    klass = item['wrapped'.freeze] || worker.class.to_s
    "#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}"
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.remove Sidekiq::Middleware::Server::Logging
    chain.add ParamsLogging
  end
end
For version 3.4.2, or similar, override the call method instead:
class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def call(worker, item, queue)
    klass = item['wrapped'.freeze] || worker.class.to_s
    Sidekiq::Logging.with_context("#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}") do
      begin
        start = Time.now
        logger.info { "start" }
        yield
        logger.info { "done: #{elapsed(start)} sec" }
      rescue Exception
        logger.info { "fail: #{elapsed(start)} sec" }
        raise
      end
    end
  end
end
You must be running some ancient version. Upgrade.
Sorry, looks like that's a Rails 5+ feature only. You'll need to upgrade Rails. https://github.com/rails/rails/commit/8d2b1406bc201d8705e931b6f043441930f2e8ac

View Resque log output in Rails server logs

I've got a Rails 4 app on a Puma server with Resque/Resque-Scheduler running background jobs. What I'd like to know is how I can merge the log output of my two Resque workers into my server log, or, if that is not possible, how I can view the log output of my Resque workers. Currently I have not been able to figure out how to view the log output for the workers, so I have no idea what's happening under the hood. I found this blog post, which suggests adding the following lines to my resque.rake file:
task "resque:setup" => :environment do
Resque.before_fork = Proc.new {
ActiveRecord::Base.establish_connection
# Open the new separate log file
logfile = File.open(File.join(Rails.root, 'log', 'resque.log'), 'a')
# Activate file synchronization
logfile.sync = true
# Create a new buffered logger
Resque.logger = ActiveSupport::BufferedLogger.new(logfile)
Resque.logger.level = Logger::INFO
Resque.logger.info "Resque Logger Initialized!"
}
end
That didn't work. I also tried the suggestion in the comments, which was to replace Resque.logger = ActiveSupport::BufferedLogger.new(logfile) with Resque.logger = ActiveSupport::Logger.new(logfile); however, that didn't work either. With the second option, I still get a NoMethodError: undefined method 'logger=' for Resque:Module error when I try to boot up a worker.
Here is my current resque.rake file:
require 'resque/tasks'
require 'resque_scheduler/tasks'

namespace :resque do
  puts "Loading Rails environment for Resque"

  task :setup => :environment do
    require 'resque'
    require 'resque_scheduler'
    require 'resque/scheduler'
    require 'postman'
  end
end
I've looked at the Resque docs on logging, but am not sure how to use what's there as I admittedly don't know very much about logging in Rails. I haven't had any luck finding other useful resources on the subject.
Here's how I fixed it. It's not perfect, but it just works.
My environment: Rails 5.0.1, Resque 1.26.0.
At first, I set Resque.logger and Resque.logger.level in config/initializers/resque.rb as most docs suggest:
# config/initializers/resque.rb
Resque.logger = Logger.new("#{Rails.root}/log/resque.log")
Resque.logger.level = Logger::DEBUG
Then, in the job, I log with Resque.logger.info:
# update_xxx_job.rb
class UpdateXxxJob
  def self.perform
    Resque.logger.info 'Job starts'
    ...
  end
end
It doesn't work; I can see nothing in log/resque.log.
Then someone said the log file should always be synced, with no buffering, so I updated config/initializers/resque.rb according to a question from Stack Overflow:
# config/initializers/resque.rb
logfile = File.open(File.join(Rails.root, 'log', 'resque.log'), 'a')
logfile.sync = true
Resque.logger = ActiveSupport::Logger.new(logfile)
Resque.logger.level = Logger::INFO
Still doesn't work.
I also tried configuring the Resque logger in lib/tasks/resque.rake:
# lib/tasks/resque.rake
require 'resque'
require 'resque/tasks'
require 'resque/scheduler/tasks'
require 'resque-scheduler'
require 'resque/scheduler/server'

namespace :resque do
  task setup: :environment do
    Resque.schedule = YAML.load_file(Rails.root + "config/resque_scheduler_#{Rails.env}.yml")
    Resque.redis.namespace = "xxx_#{Rails.env}"
    Resque.logger = Logger.new("#{Rails.root}/log/resque.log")
    Resque.logger.level = Logger::INFO
  end
end
Doesn't work.
Finally, I decided to move the logger configuration from the initializer into the job, so the job now looks like this:
# update_xxx_job.rb
class UpdateXxxJob
  def self.perform
    Resque.logger = Logger.new("#{Rails.root}/log/resque.log")
    Resque.logger.level = Logger::DEBUG
    Resque.logger.info 'Job starts'
    ...
  end
end
Then I get what I want in log/resque.log.
You can try it.
I've had the same problem while setting up mine. Here's what I did:
Resque.before_fork do
  # Your code here
end
It seems before_fork accepts a block as an argument rather than assigning a block to it.
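Putting that together with the logger setup from the question, a minimal sketch (assuming log/resque.log as the target and a Resque version whose before_fork accepts a block, as described above) would be:
# config/initializers/resque.rb (sketch)
Resque.before_fork do
  ActiveRecord::Base.establish_connection

  logfile = File.open(File.join(Rails.root, 'log', 'resque.log'), 'a')
  logfile.sync = true # write through immediately, no buffering

  Resque.logger = ActiveSupport::Logger.new(logfile)
  Resque.logger.level = Logger::INFO
end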
I faced the same issue, so I checked the Resque source; in the end I needed to do the following during the initialization process:
define a log formatter,
then define a logger with the log-file path,
and set a log level.
Here is the example from my config/initializers/resque.rb in the Rails case:
...
Resque.logger = Logger.new("#{Rails.root}/log/resque.log")
Resque.logger.level = Logger::DEBUG
Resque.logger.formatter = ::Logger::Formatter.new # This is important
Resque's default logger formatter is set here and its definition is here. That formatter apparently just ignores the output...

logging to production inside a delayed function rails

I am trying to output to the production log some logging that happens in a method on which delay is being called via delayed_job.
Example:
My controller
def create_something
  @user = User.find(1)
  @user.delay.do_something_crazy
end
My Model
def do_something_crazy
  # some code
  Rails.logger.info "Doing something crazy right now!"
end
The logging is not being output to my production log. Without delay it is, but with delay it seems not to be.
Add an initializer file delayed_jobs_settings.rb to config/initializers (unless you already have something like this for Delayed Job settings) and add this code:
Delayed::Worker.logger = Logger.new(Rails.root.join('log', 'dj.log'))
And it will save it to log/dj.log.
Or just
Delayed::Worker.logger = Rails.logger
for logging to default Rails log.
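As a usage sketch (assuming the dj.log initializer above), the delayed method can write through Delayed::Worker.logger so the destination is explicit:
# app/models/user.rb (sketch)
def do_something_crazy
  # some code
  Delayed::Worker.logger.info "Doing something crazy right now!" # ends up in log/dj.log
end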

testing using Resque with Rspec examples?

I am processing my background jobs using Resque.
My model looks like this
class SomeClass
  ...
  repo = Repo.find(params[:repo_id])
  Resque.enqueue(RepoCleaner, repo.id)
  ...
end

class RepoCleaner
  @queue = :repo_cleaner

  def self.perform(repo_id)
    puts "this must get printed in console"
    repo = Repo.find(repo_id)
    # some more action here
  end
end
Now, to test it synchronously, I have added
Resque.inline = Rails.env.test?
in my config/initializers/resque.rb file
This was supposed to call the #perform method inline, without queueing it into Redis and without any Resque callbacks, since Rails.env.test? returns true in the test environment.
But
"this must get printed in console"
is never printed while testing, and my tests are also failing.
Are there any configurations that I have missed?
Currently I am using:
resque (1.17.1)
resque_spec (0.7.0)
resque_unit (0.4.0)
I personally test my workers differently. I use RSpec, and for example in my user model I test something like this:
it "enqueue FooWorker#create_user" do
mock(Resque).enqueue(FooWorker, :create_user, user.id)
user.create_on_foo
end
Then I have a file called spec/workers/foo_worker_spec.rb with the following content:
require 'spec_helper'

describe FooWorker do
  describe "#perform" do
    it "redirects to passed action" do
      ...
      FooWorker.perform
      ...
    end
  end
end
Then your model/controller tests run faster, and you don't have a dependency between your model/controller and your worker in your tests. You also don't have to mock so many things in specs that have nothing to do with the worker.
But if you want to do it the way you mentioned, it worked for me some time ago. I put Resque.inline = true into my test environment config.
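A minimal sketch of that placement (just the relevant line; the rest of the environment file is unchanged):
# config/environments/test.rb (sketch)
# ... existing test configuration ...

# Run Resque jobs synchronously, without going through Redis or a worker process.
Resque.inline = true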
It looks like the question about logging never got answered. I ran into something similar to this and it was from not setting up the Resque logger. You can do something as simple as:
Resque.logger = Rails.logger
Or you can set up a separate log file by adding this to your /lib/tasks/resque.rake. When you run your worker, it will write to /log/resque.log:
Resque.before_fork = Proc.new {
  ActiveRecord::Base.establish_connection
  # Open the new separate log file
  logfile = File.open(File.join(Rails.root, 'log', 'resque.log'), 'a')
  # Activate file synchronization
  logfile.sync = true
  # Create a new buffered logger
  Resque.logger = ActiveSupport::Logger.new(logfile)
  Resque.logger.level = Logger::INFO
  Resque.logger.info "Resque Logger Initialized!"
}
Mocking like daniel-spangenberg mentioned above ought to write to STDOUT, unless your methods are in the "private" section of your class. That's tripped me up a couple of times when writing RSpec tests. ActionMailer requires its own log setup too. I guess I've been expecting more convention than configuration. :)

How to make each unicorn worker of my Rails application log to a different file?

How can I make each Unicorn worker of my Rails application write to a different log file?
The why: the problem of mixed log files...
In its default configuration, Rails will write its log messages to a single log file: log/<environment>.log.
Unicorn workers all write to the same log file at once, so the messages can get mixed up. This is a problem when request-log-analyzer parses a log file. An example:
Processing Controller1#action1 ...
Processing Controller2#action2 ...
Completed in 100ms...
Completed in 567ms...
In this example, what action was completed in 100ms, and what action in 567 ms? We can never be sure.
Add this code to after_fork in unicorn.rb:
# one log per unicorn worker
if log = Rails.logger.instance_values['log']
  ext = File.extname log.path
  new_path = log.path.gsub %r{(.*)(#{Regexp.escape ext})}, "\\1.#{worker.nr}\\2"
  Rails.logger.instance_eval do
    @log.close
    @log = open_log new_path, 'a+'
  end
end
@slact's answer doesn't work on Rails 3. This works:
after_fork do |server, worker|
  # Override the default logger to use a separate log for each Unicorn worker.
  # https://github.com/rails/rails/blob/3-2-stable/railties/lib/rails/application/bootstrap.rb#L23-L49
  Rails.logger = ActiveRecord::Base.logger = ActionController::Base.logger = begin
    path = Rails.configuration.paths["log"].first
    f = File.open(path.sub(".log", "-#{worker.nr}.log"), "a")
    f.binmode
    f.sync = true
    logger = ActiveSupport::TaggedLogging.new(ActiveSupport::BufferedLogger.new(f))
    logger.level = ActiveSupport::BufferedLogger.const_get(Rails.configuration.log_level.to_s.upcase)
    logger
  end
end
