Indentation in Rails logs - ruby-on-rails

When logging custom information, e.g. in a rake task, I'd like to indent the log lines for better readability. For example:
Seeding database...
  Importing xyz.csv...
    Skipping row 5 due to invalid value 'Unknown' in column 'year'
  Finished importing xyz.csv
Finished seeding database
In my seeds.rb, I use the following for logging:
logger = Logger.new(STDOUT)
logger.info('Seeding database...')
However, the logging of Skipping row 5... takes place in a service, which is not necessarily called from seeds.rb but could be called from anywhere. Thus I cannot hardcode the correct indentation (which sounds like a bad idea anyway).
One possibility would be to keep an "indentation counter", which I could increase when starting to import a file and decrease when finishing. I'm unsure though how to access this from anywhere in my application, or if this is the best solution. Any ideas?

A better solution is to provide your service object with a logger object along with the data to process. This way it does not have to know anything about your logging preferences.
And use something like:
require 'logger'

class IndentedLogger < Logger
  INDENTATION_STR = ' '.freeze

  attr_accessor :indentation

  def initialize(io, *args, **params)
    self.indentation = 0
    super
  end

  # With a block: log everything inside it one level deeper.
  # Without a block: return a copy of the logger indented one level deeper.
  def indented
    return self.dup.tap { |l| l.indentation += 1 } unless block_given?
    self.indentation += 1
    yield self
    self.indentation -= 1
  end

  protected

  def format_message(severity, datetime, progname, msg)
    (@formatter || @default_formatter).call(severity, datetime, progname, "#{INDENTATION_STR * indentation}#{msg}")
  end
end
# Example:
logger = IndentedLogger.new(STDOUT)
logger.info "Foo"
logger.indented {
  logger.info "Foo"
  logger.indented.info "Indented even more"
}
logger.info "Foo"
And for the service call: YourService.new(some_data, logger: logger.indented).process_or_whatever
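For completeness, a rough sketch of what such a service could look like on the receiving end (the class and method names are just the placeholders from above, not code from the question):

class YourService
  def initialize(some_data, logger: Logger.new(STDOUT))
    @data = some_data
    @logger = logger # whatever the caller passes in, indented or not
  end

  def process_or_whatever
    @logger.info("Importing #{@data}...")
    # ... do the actual work, logging via @logger ...
    @logger.info("Finished importing #{@data}")
  end
end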

Related

How can I count the number of accesses/queries to the database through Mongoid?

I'm using the Mongoid in a Rails project. To improve the performance of large queries, I'm using the includes method to eager load the relationships.
I would like to know if there is an easy way to count the real number of queries performed by a block of code so that I can check if my includes really reduced the number of DB accesses as expected. Something like:
# It will perform a large query to gather data from companies and their relationships
count = Mongoid.count_queries do
Company.to_csv
end
puts count # Number of DB access
I want to use this feature to add RSpec tests to prove that my query remains efficient after changes (e.g. when adding data from a new relationship). In Python's Django framework, for instance, one may use the assertNumQueries method to this end.
Checking on rubygems.org didn't yield anything that seems to do what you want.
You might be better off looking into app performance tools like New Relic, Scout, or DataDog. You may also be able to get some out-of-the-gate benchmarking specs with
https://github.com/piotrmurach/rspec-benchmark
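If you go that route, a benchmark spec might look roughly like this (matcher names as documented in the rspec-benchmark README; treat this as a sketch, not a drop-in solution):

require 'rspec-benchmark'

RSpec.describe Company do
  include RSpec::Benchmark::Matchers

  it 'exports to CSV quickly enough' do
    expect { Company.to_csv }.to perform_under(500).ms
  end
end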
I just implemented this feature to count Mongo queries in my RSpec suite, in a small module using Mongo command monitoring.
It can be used like this:
expect { code }.to change { finds("users") }.by(3)
expect { code }.to change { updates("contents") }.by(1)
expect { code }.not_to change { inserts }
Or:
MongoSpy.flush
# ..code..
expect(MongoSpy.queries).to match(
"find" => { "users" => 1, "contents" => 1 },
"update" => { "users" => 1 }
)
Here is the Gist (ready to copy) for the last up-to-date version: https://gist.github.com/jarthod/ab712e8a31798799841c5677cea3d1a0
And here is the current version:
module MongoSpy
  module Helpers
    %w(find delete insert update).each do |op|
      define_method(op.pluralize) { |ns = nil|
        ns ? MongoSpy.queries[op][ns] : MongoSpy.queries[op].values.sum
      }
    end
  end

  class << self
    def queries
      @queries ||= Hash.new { |h, k| h[k] = Hash.new(0) }
    end

    def flush
      @queries = nil
    end

    def started(event)
      op = event.command.keys.first # find, update, delete, createIndexes, etc.
      ns = event.command[op]        # collection name
      return unless ns.is_a?(String)
      queries[op][ns] += 1
    end

    def succeeded(_); end
    def failed(_); end
  end
end

Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, MongoSpy)

RSpec.configure do |config|
  config.include MongoSpy::Helpers
end
What you're looking for is command monitoring. With Mongoid and the Ruby Driver, you can create a custom command monitoring class that you can use to subscribe to all commands made to the server.
I've adapted this from the Command Monitoring Guide for the Mongo Ruby Driver.
For this particular example, make sure that your Rails app has the log level set to debug. You can read more about the Rails logger here.
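For example (a minimal sketch; where exactly you set this depends on your setup):

# config/environments/development.rb
Rails.application.configure do
  config.log_level = :debug
end

# The MongoDB Ruby driver also has its own logger:
Mongo::Logger.logger.level = ::Logger::DEBUG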
The first thing you want to do is define a subscriber class. This is the class that tells your application what to do when the Mongo::Client performs commands against the database. Here is the example class from the documentation:
class CommandLogSubscriber
  include Mongo::Loggable

  # called when a command is started
  def started(event)
    log_debug("#{prefix(event)} | STARTED | #{format_command(event.command)}")
  end

  # called when a command finishes successfully
  def succeeded(event)
    log_debug("#{prefix(event)} | SUCCEEDED | #{event.duration}s")
  end

  # called when a command terminates with a failure
  def failed(event)
    log_debug("#{prefix(event)} | FAILED | #{event.message} | #{event.duration}s")
  end

  private

  def logger
    Mongo::Logger.logger
  end

  def format_command(args)
    begin
      args.inspect
    rescue Exception
      '<Unable to inspect arguments>'
    end
  end

  def format_message(message)
    format("COMMAND | %s".freeze, message)
  end

  def prefix(event)
    "#{event.address.to_s} | #{event.database_name}.#{event.command_name}"
  end
end
(Make sure this class is auto-loaded in your Rails application.)
Next, you want to attach this subscriber to the client you use to perform commands.
subscriber = CommandLogSubscriber.new
Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, subscriber)
# This is the name of the default client, but it's possible you've defined
# a client with a custom name in config/mongoid.yml
client = Mongoid::Clients.from_name('default')
client.subscribe( Mongo::Monitoring::COMMAND, subscriber)
Now, when Mongoid executes any commands against the database, those commands will be logged to your console.
# For example, if you have a model called Book
Book.create(title: "Narnia")
# => D, [2020-03-27T10:29:07.426209 #43656] DEBUG -- : COMMAND | localhost:27017 | mongoid_test_development.insert | STARTED | {"insert"=>"books", "ordered"=>true, "documents"=>[{"_id"=>BSON::ObjectId('5e7e0db3f8f498aa88b26e5d'), "title"=>"Narnia", "updated_at"=>2020-03-27 14:29:07.42239 UTC, "created_at"=>2020-03-27 14:29:07.42239 UTC}], "lsid"=>{"id"=><BSON::Binary:0x10600 type=uuid data=0xfff8a93b6c964acb...>}}
# => ...
You can modify the CommandLogSubscriber class to do something other than logging (such as incrementing a global counter).
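As a sketch of that idea (not from the driver guide), a subscriber that only counts commands could look like this:

class CommandCountSubscriber
  attr_reader :count

  def initialize
    @count = 0
  end

  # same interface as the logging subscriber above
  def started(_event)
    @count += 1
  end

  def succeeded(_event); end
  def failed(_event); end
end

counter = CommandCountSubscriber.new
Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, counter)

Book.create(title: "Narnia")
puts counter.count # total number of commands sent since subscribing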

Add custom severity inside a Ruby logger formatter [duplicate]

I need to add a custom log level like "verbose" or "traffic" to the Ruby Logger. How do I do that?
Your own logger just needs to override the Logger#format_severity method, something like this:
class MyLogger < Logger
  SEVS = %w(DEBUG INFO WARN ERROR FATAL VERBOSE TRAFFIC)

  def format_severity(severity)
    SEVS[severity] || 'ANY'
  end

  def verbose(progname = nil, &block)
    add(5, nil, progname, &block)
  end
end
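Usage then looks like this (output shape from the default Logger formatter, shown approximately):

logger = MyLogger.new($stdout)
logger.verbose('more detail than debug')
# => V, [timestamp #pid] VERBOSE -- : more detail than debug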
You can simply add to the Logger class:
require 'logger'

class Logger
  def self.custom_level(tag)
    SEV_LABEL << tag
    idx = SEV_LABEL.size - 1

    define_method(tag.downcase.gsub(/\W+/, '_').to_sym) do |progname, &block|
      add(idx, nil, progname, &block)
    end
  end

  # now add levels like this:
  custom_level 'TRAFFIC'
  custom_level 'ANOTHER-TAG'
end

# using it:
log = Logger.new($stdout)
log.traffic('testing')
log.another_tag('another message here.')
Log levels are nothing but integer constants defined in logger.rb:
# Logging severity.
module Severity
  DEBUG = 0
  INFO = 1
  WARN = 2
  ERROR = 3
  FATAL = 4
  UNKNOWN = 5
end
You can log messages with any level you like using the Logger#add method:
l.add 6, 'asd'
#=> A, [2010-02-17T16:25:47.763708 #14962] ANY -- : asd
If you start needing a bunch of custom stuff, it may be worth checking out log4r, which is a flexible logging library that lets you do a bunch of interesting/useful stuff easily.
This is an old question, but since it comes up so high on Google, I figured it'd be useful to have the correct answer here. If you're using the logging gem, you can use the Logging.init method. Here's how you would add a trace log level:
require 'logging'
Logging.init %w(trace debug info warn error fatal)
Logging.logger.root.level = :trace
Logging.logger.root.add_appenders Logging.appenders.stdout
Logging.logger['hello'].trace 'TEST'
This is using version 2.0.0 of the logging gem.
You can create your own logger by subclassing the Logger class.

How to DRY a list of functions in ruby that are differ only by a single line of code?

I have a User model in a Ruby on Rails application that has multiple methods like this:
# getClient() returns an object that knows how to find certain info for a date
# processHeaders() is a function that processes output and updates some values in the database
# refreshToken() is a function that is called when an error occurs when requesting data
# from the object returned by getClient()

def transactions_on_date(date)
  if blocked?
    # do something
  else
    begin
      output = getClient().transactions(date)
      processHeaders(output)
      return output
    rescue UnauthorizedError => ex
      refresh_token()
      output = getClient().transactions(date)
      process_fitbit_rate_headers(output)
      return output
    end
  end
end

def events_on_date(date)
  if blocked?
    # do something
  else
    begin
      output = getClient().events(date)
      processHeaders(output)
      return output
    rescue UnauthorizedError => ex
      refresh_token()
      output = getClient().events(date)
      processHeaders(output)
      return output
    end
  end
end
I have several functions in my User class that look exactly the same. The only difference among these functions is the line output = getClient().something(date). Is there a way I can make this code cleaner so that I do not have a repetitive list of functions?
The answer is usually passing in a block and doing it functional style:
def handle_blocking(date)
  if blocked?
    # do something
  else
    begin
      output = yield(date)
      processHeaders(output)
      output
    rescue UnauthorizedError => ex
      refresh_token
      output = yield(date)
      process_fitbit_rate_headers(output)
      output
    end
  end
end
Then you call it this way:
handle_blocking(date) do |date|
  getClient.something(date)
end
That allows a lot of customization. The yield call executes the block of code you've supplied and passes in the date argument to it.
The process of DRYing up your code often involves looking for patterns and boiling them down to useful methods like this. Using a functional approach can keep things clean.
Yes, you can use Object#send: getClient().send(:method_name, date).
BTW, getClient is not a proper Ruby method name. It should be get_client.
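Combining that with the handle_blocking block approach from the other answer (the wrapper method name here is only illustrative):

def data_on_date(kind, date)
  handle_blocking(date) { |d| getClient().send(kind, d) }
end

# usage:
data_on_date(:transactions, Date.today)
data_on_date(:events, Date.today)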
How about a combination of both answers:
class User
  def method_missing(sym, *args)
    m_name = sym.to_s
    if m_name.end_with?('_on_date')
      prop = m_name.split('_').first.to_sym
      handle_blocking(args.first) { getClient().send(prop, args.first) }
    else
      super(sym, *args)
    end
  end

  def respond_to?(sym, private = false)
    sym.to_s.end_with?('_on_date') || super(sym, private)
  end

  def handle_blocking(date)
    # see other answer
  end
end
Then you can call "transaction_on_date", "events_on_date", "foo_on_date" and it would work.
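For instance (assuming the *_on_date naming from the question):

user = User.first
user.transactions_on_date(Date.today) # dispatched via method_missing
user.events_on_date(Date.today)
user.respond_to?(:foo_on_date)        # => true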

Rails fixtures use empty_clob() with CLOB fields but nothing else

I'm back with Rails fixtures after seeing they were much improved since the last time I used them.
# models.yml
one:
  id: 1
  clob_field: "My Text"
When the models fixture is loaded into the DB, I can see that the clob text ("My Text") is substituted with an empty_clob() call in the insert statement.
According to my understanding, the Oracle enhanced adapter should make another update statement that sets the clob_field appropriately - but this doesn't get executed (and the value remains blank).
Any idea why that is?
I traced the fixture loading and found out that this was due to a mixture of specifying a schema_name along with the table_name (self.table_name = "SCHEMA_OWNER.TABLE_NAME") as well as using upper-case TABLE_NAMES.
I've worked-around the issue by overriding insert_fixture method (in the oracle-enhanced adapter) to properly manipulate table_name.
Now the write_lobs is being called correctly.
UPDATE
Here's the change as requested by @jeff-k
# config/initializers/oracle_enhanced_adapter.rb
...
ActiveSupport.on_load(:active_record) do
  ActiveRecord::ConnectionAdapters::OracleEnhancedAdapter.class_eval do
    # Overriding this method to account for including the schema_name in the table name,
    # which is implemented to work around another limitation of having the schema_owner different
    # than the connected user
    #
    # Inserts the given fixture into the table. Overridden to properly handle lobs.
    def insert_fixture(fixture, table_name) #:nodoc:
      super
      if table_name =~ /\./i
        table_name = table_name.downcase.split('.')[1]
      end
      if ActiveRecord::Base.pluralize_table_names
        klass = table_name.to_s.singularize.camelize
      else
        klass = table_name.to_s.camelize
      end
      klass = klass.constantize rescue nil
      if klass.respond_to?(:ancestors) && klass.ancestors.include?(ActiveRecord::Base)
        write_lobs(table_name, klass, fixture, klass.lob_columns)
      end
    end
  end
end

How to test the number of database calls in Rails

I am creating a REST API in Rails. I'm using RSpec. I'd like to minimize the number of database calls, so I would like to add an automatic test that verifies the number of database calls executed as part of a certain action.
Is there a simple way to add that to my test?
What I'm looking for is some way to monitor/record the calls that are being made to the database as a result of a single API call.
If this can't be done with RSpec but can be done with some other testing tool, that's also great.
The easiest thing in Rails 3 is probably to hook into the notifications API.
This subscriber
class SqlCounter < ActiveSupport::LogSubscriber
  def self.count=(value)
    Thread.current['query_count'] = value
  end

  def self.count
    Thread.current['query_count'] || 0
  end

  def self.reset_count
    result, self.count = self.count, 0
    result
  end

  def sql(event)
    self.class.count += 1
    puts "logged #{event.payload[:sql]}"
  end
end

SqlCounter.attach_to :active_record
will print every executed SQL statement to the console and count them. You could then write specs such as
expect do
  # do stuff
end.to change(SqlCounter, :count).by(2)
You'll probably want to filter out some statements, such as ones starting/committing transactions or the ones active record emits to determine the structures of tables.
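A hedged sketch of such filtering inside sql(event) (the payload names you skip may vary by Rails version):

def sql(event)
  payload = event.payload
  # skip schema introspection and bare transaction statements
  return if payload[:name] == 'SCHEMA'
  return if payload[:sql] =~ /\A\s*(BEGIN|COMMIT|ROLLBACK)/i

  self.class.count += 1
end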
You may be interested in using explain. But that won't be automatic. You will need to analyse each action manually. But maybe that is a good thing, since the important thing is not the number of db calls, but their nature. For example: Are they using indexes?
Check this:
http://weblog.rubyonrails.org/2011/12/6/what-s-new-in-edge-rails-explain/
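For reference, explain can be called directly on a relation (the models and columns here are purely illustrative):

puts User.where(active: true).joins(:posts).explain
# EXPLAIN for: SELECT "users".* FROM "users" INNER JOIN "posts" ON ...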
Use the db-query-matchers gem.
expect { subject.make_one_query }.to make_database_queries(count: 1)
Fredrick's answer worked great for me, but in my case, I also wanted to know the number of calls for each ActiveRecord class individually. I made some modifications and ended up with this in case it's useful for others.
class SqlCounter < ActiveSupport::LogSubscriber
  # Returns the number of database "Loads" for a given ActiveRecord class.
  def self.count(clazz)
    name = clazz.name + ' Load'
    Thread.current['log'] ||= {}
    Thread.current['log'][name] || 0
  end

  # Returns a list of ActiveRecord classes that were counted.
  def self.counted_classes
    log = Thread.current['log']
    loads = log.keys.select { |key| key =~ /Load$/ }
    loads.map { |key| Object.const_get(key.split.first) }
  end

  def self.reset_count
    Thread.current['log'] = {}
  end

  def sql(event)
    name = event.payload[:name]
    Thread.current['log'] ||= {}
    Thread.current['log'][name] ||= 0
    Thread.current['log'][name] += 1
  end
end

SqlCounter.attach_to :active_record
expect do
  # do stuff
end.to change { SqlCounter.count(User) }.by(2) # User here stands for whichever model you expect to be loaded
