In my Rails 5 application I am trying to access and use the logger in an initializer:
begin
  raise "REDIS_URL environment var not found" unless ENV["REDIS_URL"]
  $redis = Redis.new
  $redis.ping
  Rails.logger.info "Successfully connected to redis: #{ENV["REDIS_URL"]}" if $redis.connected?
rescue Exception => e
  Rails.logger.warn e.message
  Rails.logger.warn 'API only mode. Request to root will not return any index.html'
end
Debugging inside the block, I can access it:
(byebug) Rails.logger
#<ActiveSupport::Logger:0x007ffb69ef48d8>
(byebug) Rails.logger.level
0
(byebug)
but I don't see any message in development. Is the logger supposed to be accessible but not usable in there? Is this considered a correct solution?
(It works in production)
Thank you.
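A minimal workaround sketch (not a diagnosis of why the message is swallowed): write the boot messages to a plain $stdout logger so they are visible regardless of how Rails.logger is wired up at that point. The name boot_logger is just illustrative.

# config/initializers/redis.rb (sketch)
require "logger"

boot_logger = Logger.new($stdout)

begin
  raise "REDIS_URL environment var not found" unless ENV["REDIS_URL"]
  $redis = Redis.new
  $redis.ping
  boot_logger.info "Successfully connected to redis: #{ENV['REDIS_URL']}"
rescue StandardError => e
  boot_logger.warn e.message
  boot_logger.warn "API only mode. Request to root will not return any index.html"
end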
I use the capybara-webkit gem to scrape data from certain pages in my Rails application. I've noticed that, in what seems to be a random/sporadic way, the application will crash with the following error:
Capybara::Webkit::ConnectionError: /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/bin/webkit_server failed to start.
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/server.rb:56:in `parse_port'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/server.rb:42:in `discover_port'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/server.rb:26:in `start'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/connection.rb:67:in `start_server'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/connection.rb:17:in `initialize'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/driver.rb:16:in `new'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit/driver.rb:16:in `initialize'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit.rb:15:in `new'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-webkit-1.11.1/lib/capybara/webkit.rb:15:in `block in <top (required)>'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-2.7.1/lib/capybara/session.rb:85:in `driver'
from /home/daveomcd/.rvm/gems/ruby-2.3.1/gems/capybara-2.7.1/lib/capybara/session.rb:233:in `visit'
It happens even after it's already connected and accessed a website multiple times before. Here's a code snippet of what I'm using currently...
if site.url.present?
  begin
    # Visit the URL
    session = Capybara::Session.new(:webkit)
    session.visit(site.url) # here is where the error occurs...
    document = Nokogiri::HTML.parse(session.body)

    # Load configuration options for Development Group
    roster_table_selector = site.development_group.table_selector
    header_row_selector = site.development_group.table_header_selector
    row_selector = site.development_group.table_row_selector
    row_offset = site.development_group.table_row_selector_offset
    header_format_type = site.config_header_format_type

    # Get the Table and Header Row for processing
    roster_table = document.css(roster_table_selector)
    header_row = roster_table.css(header_row_selector)
    header_hash = retrieve_headers(header_row, header_format_type)

    my_object = process_rows(roster_table, header_hash, site, row_selector, row_offset)
  rescue ::Capybara::Webkit::ConnectionError => e
    raise e
  rescue OpenURI::HTTPError => e
    if e.message == '404 Not Found'
      raise "404 Page not found..."
    else
      raise e
    end
  end
end
I've even considered not tracking down why it happens and just recovering when it does. I was going to add a retry in the rescue block for the error, but it appears the server is simply down, so retrying gives the same result. Perhaps someone knows of a way I can check whether the server is down, restart it, and then retry? Thanks for the help!
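A sketch of one way to recover, assuming that creating a fresh Capybara::Session (which boots a new webkit_server process, as the stack trace above shows) is enough, and that a couple of retries are acceptable:

attempts = 0
begin
  # a new session starts its own webkit_server process
  session = Capybara::Session.new(:webkit)
  session.visit(site.url)
rescue Capybara::Webkit::ConnectionError
  attempts += 1
  retry if attempts < 3 # give the server a couple of chances to come back up
  raise
end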
After further investigation, it appears I was creating a new Capybara::Session for each iteration of my loop. I moved the session outside the loop and also added Capybara.reset_sessions! at the end of each iteration. Not sure if that last part helps with anything, but the issue seems to have been resolved; I'll monitor it for the next hour or so. Below is an example of my ActiveJob code now...
class ScrapeJob < ActiveJob::Base
  queue_as :default
  include Capybara::DSL

  def perform(*args)
    session = Capybara::Session.new(:webkit)
    Site.where(config_enabled: 1).order(:code).each do |site|
      process_roster(site, session)
      Capybara.reset_sessions!
    end
  end

  def process_roster(site, session)
    if site.roster_url.present?
      begin
        # Visit the Roster URL
        session.visit(site.roster_url)
        document = Nokogiri::HTML.parse(session.body)

        # processing code...
        # pass the session that was created as the final parameter..
        my_object = process_rows( ..., session)
      rescue ::Capybara::Webkit::ConnectionError => e
        raise e
      rescue OpenURI::HTTPError => e
        if e.message == '404 Not Found'
          raise "404 Page not found..."
        else
          raise e
        end
      end
    end
  end
end
I am trying to rescue an AWS S3 exception.
class FileManager
  def fetchFile
    begin
      s3.get_object(bucket: 'myBucket', key: file_name)
    rescue Aws::S3::Errors => e
      debugger
    end
  end
end
Then, when I call the method from the console:
FileManager.fetch_file('Non existing file')
I am not getting the debugger; instead there is an error message in the console:
Aws::S3::Errors::NoSuchKey: The specified key does not exist.
I believe this has what you're looking for (assuming you are using this gem):
https://github.com/marcel/aws-s3#when-things-go-wrong
You can rescue all S3 errors using ServiceError:
begin
  # ...
rescue Aws::S3::Errors::ServiceError => e
  # ...
end
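Applied to the method above, a sketch (assuming aws-sdk v2/v3 and that s3 returns an Aws::S3::Client; fetch_file is written as an instance method here purely for illustration). The original rescue never triggers because Aws::S3::Errors is just the namespace module, not an exception class that NoSuchKey inherits from, so the error propagates straight past it:

class FileManager
  def fetch_file(file_name)
    s3.get_object(bucket: 'myBucket', key: file_name)
  rescue Aws::S3::Errors::NoSuchKey
    nil # the key does not exist; handle this case specifically
  rescue Aws::S3::Errors::ServiceError => e
    raise e # any other S3 service error
  end
end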
I was annoyed at having to wade through ANSI escape sequences in a log on a production server (log/production.log), so I added config.colorize_logging = false to config/environments/production.rb. But now when I run a console (bin/rails c), the output is not colorized either. Why is that? Is there a way to make the logger use ANSI sequences when outputting to the screen, and not use them when logging to a file?
UPD Here is what I was able to figure out. When the Rails app starts, it creates a logger that logs to a file:
Rails.logger ||= config.logger || begin
  path = config.paths["log"].first
  unless File.exist? File.dirname path
    FileUtils.mkdir_p File.dirname path
  end

  f = File.open path, 'a'
  f.binmode
  f.sync = config.autoflush_log # if true make sure every write flushes

  logger = ActiveSupport::Logger.new f
  logger.formatter = config.log_formatter
  logger = ActiveSupport::TaggedLogging.new(logger)
  logger
rescue StandardError
  logger = ActiveSupport::TaggedLogging.new(ActiveSupport::Logger.new(STDERR))
  logger.level = ActiveSupport::Logger::WARN
  logger.warn(
    "Rails Error: Unable to access log file. Please ensure that #{path} exists and is writable " +
    "(ie, make it writable for user and group: chmod 0664 #{path}). " +
    "The log level has been raised to WARN and the output directed to STDERR until the problem is fixed."
  )
  logger
end
And then it extends that logger to also broadcast messages to STDOUT:
def log_to_stdout
  wrapped_app # touch the app so the logger is set up

  console = ActiveSupport::Logger.new($stdout)
  console.formatter = Rails.logger.formatter
  console.level = Rails.logger.level

  Rails.logger.extend(ActiveSupport::Logger.broadcast(console))
end
For some reason, breaking with the byebug keyword at ActiveSupport::Logger#initialize never succeeded when I ran ./bin/rails c.
UPD Okay, the culprit was Spring. The console (or should I say Active Record) creates its logger here:
console do |app|
  require "active_record/railties/console_sandbox" if app.sandbox?
  require "active_record/base"
  console = ActiveSupport::Logger.new(STDERR)
  Rails.logger.extend ActiveSupport::Logger.broadcast console
end
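For the original goal (colour on the screen, none in the file), one approach, sketched here under the assumption that colorize_logging stays enabled, is to strip the escape sequences only in the formatter assigned to the file logger (StripAnsiFormatter is a made-up name):

# e.g. in config/environments/production.rb (sketch)
class StripAnsiFormatter < ::Logger::Formatter
  ANSI_CODES = /\e\[[0-9;]*m/

  def call(severity, time, progname, msg)
    super(severity, time, progname, msg.to_s.gsub(ANSI_CODES, ""))
  end
end

config.log_formatter = StripAnsiFormatter.new

The STDERR logger added by the console block above does not share that formatter, so its output should keep the colour.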
One way to switch colorize_logging on in the console would be to set it explicitly to true, as follows:
$ bin/rails c
:001 > Rails.application.config.colorize_logging
=> false
:002 > Rails.application.config.colorize_logging = true
=> true
:003 > Rails.application.config.colorize_logging
=> true
There might be a way to set this automatically every time the console is loaded by customizing the console with an .irbrc file.
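For example, a sketch of an .irbrc in the project root (bin/rails c boots the app before IRB loads .irbrc, so Rails should already be defined), mirroring the manual step above:

# .irbrc (sketch)
if defined?(Rails) && Rails.respond_to?(:application) && Rails.application
  Rails.application.config.colorize_logging = true
end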
I'm trying to log the progress of my Sidekiq worker using tail -f log/development.log in development and heroku logs in production.
However, nothing inside the worker, or in the classes the worker calls, gets logged. In the code below, only TEST 1 gets logged.
How can I log everything inside the worker and the classes the worker calls?
# app/controllers/TasksController.rb
def import_data
  Rails.logger.info "TEST 1" # shows up in development.log
  DataImportWorker.perform_async
  render "done"
end

# app/workers/DataImportWorker.rb
class DataImportWorker
  include Sidekiq::Worker

  def perform
    Rails.logger.info "TEST 2" # does not show up in development.log
    importer = Importer.new
    importer.import_data
  end
end

# app/controllers/services/Importer.rb
class Importer
  def import_data
    Rails.logger.info "TEST 3" # does not show up in development.log
  end
end
Update
I still don't understand why Rails.logger.info or Sidekiq.logger.info don't log into the log stream. Got it working by replacing Rails.logger.info with puts.
There is a Sidekiq.logger (or simply logger) reference that you can use within your workers. The default output goes to STDOUT; in production, simply direct the output to the log file path of your choice.
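For example, a sketch using the logger exposed by the Sidekiq::Worker mixin:

class DataImportWorker
  include Sidekiq::Worker

  def perform
    logger.info "TEST 2"          # Sidekiq's own logger, STDOUT by default
    Sidekiq.logger.info "TEST 2b" # the same logger, referenced through the module
  end
end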
It works in Rails 6:
# config/initializers/sidekiq.rb
Rails.logger = Sidekiq.logger
ActiveRecord::Base.logger = Sidekiq.logger
@migu, have you tried the line below in an initializer (e.g. config/initializers/sidekiq.rb)?
Rails.logger = Sidekiq::Logging.logger
I found this solution here; it seems to work well.
Sidekiq uses the Ruby Logger class with a default log level of INFO, and its settings are independent of Rails.
You may set the log level for the Logger used by Sidekiq in config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.logger.level = Rails.logger.level
end
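If you also want the worker output in a file you can tail, a sketch (assumption: replacing Sidekiq's logger in configure_server is acceptable; the file name is just an example):

# config/initializers/sidekiq.rb (sketch)
Sidekiq.configure_server do |config|
  config.logger = ActiveSupport::Logger.new(Rails.root.join("log", "sidekiq.log"))
  config.logger.level = Rails.logger.level
end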
Once the script is daemonized, the logger can't write to the file anymore. So how and when should I initialise the logger?
require 'rubygems'
require 'daemons'
require 'logging'

def create_new_logger
  logger = Logging.logger['trend-analyzer']
  logger.add_appenders(
    Logging.appenders.rolling_file('./logs/trend-analyzer.log'),
    Logging.appenders.stdout
  )
  logger.level = :debug
  return logger
end

logger = create_new_logger

# this log message gets written to the log file
logger.debug Time.new

Daemons.run_proc('ForestPress', :log_dir => '.logs', :backtrace => true) do
  running_as_daemon = true

  # this log message does NOT get written to the log file
  logger.debug Time.new

  loop do
    # this log message does NOT get written to the log file
    logger.info Time.new
    sleep 5
  end
end
EDIT
I notice that the current working directory changes from where I executed the script to /. Could this be why I can't log messages?
EDIT 2
I now save the original path before becoming a daemon and then use Dir.chdir to switch back to it. I can then open the file directly and write to it. However, the logging gem still can't write to it.
Here is how I got it working:
require 'rubygems'
require 'daemons'
require 'logging'

def create_new_logger
  logger = Logging.logger['trend-analyzer']
  logger.add_appenders(
    Logging.appenders.rolling_file('./logs/trend-analyzer.log'),
    Logging.appenders.stdout
  )
  logger.level = :debug
  return logger
end

current_dir = Dir.pwd

Daemons.run_proc('ForestPress', :log_dir => '.logs', :backtrace => true) do
  Dir.chdir(current_dir)
  logger = create_new_logger

  loop do
    puts Dir.pwd
    logger.debug Time.new
    sleep 5
  end
end
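An alternative sketch that avoids the Dir.chdir step: resolve the log path to an absolute path before daemonizing, so the change of working directory to / no longer matters.

log_path = File.expand_path('./logs/trend-analyzer.log')

Daemons.run_proc('ForestPress', :log_dir => '.logs', :backtrace => true) do
  logger = Logging.logger['trend-analyzer']
  logger.add_appenders(Logging.appenders.rolling_file(log_path))
  logger.level = :debug

  loop do
    logger.debug Time.new
    sleep 5
  end
end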