How can I count the number of accesses/queries to the database through Mongoid?

I'm using Mongoid in a Rails project. To improve the performance of large queries, I'm using the includes method to eager load the relationships.
I would like to know if there is an easy way to count the real number of queries performed by a block of code so that I can check if my includes really reduced the number of DB accesses as expected. Something like:
# It will perform a large query to gather data from companies and their relationships
count = Mongoid.count_queries do
Company.to_csv
end
puts count # Number of DB access
I want to use this feature to add RSpec tests to prove that my query remains efficient after changes (e.g. when adding data from a new relationship). In Python's Django framework, for instance, one may use the assertNumQueries method to this end.

Checking on rubygems.org didn't yield anything that seems to do what you want.
You might be better off looking into app performance tools like New Relic, Scout, or DataDog. You may be able to get some out-of-the-gate benchmarking specs with
https://github.com/piotrmurach/rspec-benchmark
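That gem measures execution time rather than query counts, but a timing spec along these lines could still catch regressions (a sketch assuming the gem is installed; Company.to_csv is the code under test from the question, and the matcher names should be checked against the gem's README):
require 'rspec-benchmark'

RSpec.describe 'Company.to_csv' do
  include RSpec::Benchmark::Matchers

  it 'stays reasonably fast' do
    # Fails if the block takes longer than 500 ms
    expect { Company.to_csv }.to perform_under(500).ms
  end
end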

I just implemented this feature to count Mongo queries in my RSpec suite, in a small module that uses the Mongo driver's Command Monitoring.
It can be used like this:
expect { code }.to change { finds("users") }.by(3)
expect { code }.to change { updates("contents") }.by(1)
expect { code }.not_to change { inserts }
Or:
MongoSpy.flush
# ..code..
expect(MongoSpy.queries).to match(
"find" => { "users" => 1, "contents" => 1 },
"update" => { "users" => 1 }
)
Here is the Gist (ready to copy) for the last up-to-date version: https://gist.github.com/jarthod/ab712e8a31798799841c5677cea3d1a0
And here is the current version:
module MongoSpy
module Helpers
%w(find delete insert update).each do |op|
define_method(op.pluralize) { |ns = nil|
ns ? MongoSpy.queries[op][ns] : MongoSpy.queries[op].values.sum
}
end
end
class << self
def queries
@queries ||= Hash.new { |h, k| h[k] = Hash.new(0) }
end
def flush
@queries = nil
end
def started(event)
op = event.command.keys.first # find, update, delete, createIndexes, etc.
ns = event.command[op] # collection name
return unless ns.is_a?(String)
queries[op][ns] += 1
end
def succeeded(_); end
def failed(_); end
end
end
Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, MongoSpy)
RSpec.configure do |config|
config.include MongoSpy::Helpers
end

What you're looking for is command monitoring. With Mongoid and the Ruby Driver, you can create a custom command monitoring class that you can use to subscribe to all commands made to the server.
I've adapted this from the Command Monitoring Guide for the Mongo Ruby Driver.
For this particular example, make sure that your Rails app has the log level set to debug. You can read more about the Rails logger here.
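For example, in the relevant environment file this is typically a one-line setting (a sketch; adjust the file to whichever environment you run in):
# config/environments/development.rb
Rails.application.configure do
  config.log_level = :debug
end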
The first thing you want to do is define a subscriber class. This is the class that tells your application what to do when the Mongo::Client performs commands against the database. Here is the example class from the documentation:
class CommandLogSubscriber
include Mongo::Loggable
# called when a command is started
def started(event)
log_debug("#{prefix(event)} | STARTED | #{format_command(event.command)}")
end
# called when a command finishes successfully
def succeeded(event)
log_debug("#{prefix(event)} | SUCCEEDED | #{event.duration}s")
end
# called when a command terminates with a failure
def failed(event)
log_debug("#{prefix(event)} | FAILED | #{event.message} | #{event.duration}s")
end
private
def logger
Mongo::Logger.logger
end
def format_command(args)
begin
args.inspect
rescue Exception
'<Unable to inspect arguments>'
end
end
def format_message(message)
format("COMMAND | %s".freeze, message)
end
def prefix(event)
"#{event.address.to_s} | #{event.database_name}.#{event.command_name}"
end
end
(Make sure this class is auto-loaded in your Rails application.)
Next, you want to attach this subscriber to the client you use to perform commands.
subscriber = CommandLogSubscriber.new
Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, subscriber)
# This is the name of the default client, but it's possible you've defined
# a client with a custom name in config/mongoid.yml
client = Mongoid::Clients.from_name('default')
client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
Now, when Mongoid executes any commands against the database, those commands will be logged to your console.
# For example, if you have a model called Book
Book.create(title: "Narnia")
# => D, [2020-03-27T10:29:07.426209 #43656] DEBUG -- : COMMAND | localhost:27017 | mongoid_test_development.insert | STARTED | {"insert"=>"books", "ordered"=>true, "documents"=>[{"_id"=>BSON::ObjectId('5e7e0db3f8f498aa88b26e5d'), "title"=>"Narnia", "updated_at"=>2020-03-27 14:29:07.42239 UTC, "created_at"=>2020-03-27 14:29:07.42239 UTC}], "lsid"=>{"id"=><BSON::Binary:0x10600 type=uuid data=0xfff8a93b6c964acb...>}}
# => ...
You can modify the CommandLogSubscriber class to do something other than logging (such as incrementing a global counter).
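For instance, a minimal counting subscriber might look like this (a sketch; the class and variable names are illustrative, only the started/succeeded/failed callback interface comes from the driver):
class CommandCounterSubscriber
  attr_reader :count

  def initialize
    @count = 0
  end

  # Called by the driver each time a command is sent to the server
  def started(_event)
    @count += 1
  end

  def succeeded(_event); end
  def failed(_event); end
end

counter = CommandCounterSubscriber.new
Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, counter)
# counter.count now increases with every command sent to MongoDB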

Related

Speed up rake task by using typhoeus

So I stumbled across this: https://github.com/typhoeus/typhoeus
I'm wondering if this is what I need to speed up my rake task.
Event.all.each do |row|
begin
url = urlhere + row.first + row.second
doc = Nokogiri::HTML(open(url))
doc.css('.table__row--event').each do |tablerow|
table = tablerow.css('.table__cell__body--location').css('h4').text
next unless table == row.eventvenuename
tablerow.css('.table__cell__body--availability').each do |button|
buttonurl = button.css('a')[0]['href']
if buttonurl.include? '/checkout/external'
else
row.update(row: buttonurl)
end
end
end
rescue Faraday::ConnectionFailed
puts "connection failed"
next
end
end
I'm wondering if this would speed it up, or whether it wouldn't because I'm doing a .each?
If it would, could you provide an example?
Sam
If you set up Typhoeus::Hydra to run parallel requests, you might be able to speed up your code, assuming that the Kernel#open calls are what's slowing you down. Before you optimize, you might want to run benchmarks to validate this assumption.
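For example, a rough timing check with Ruby's Benchmark module could confirm how much of the runtime is spent in the HTTP fetches (a sketch; urlhere and the Event columns come from the question's code, and the sample size is arbitrary):
require 'benchmark'
require 'open-uri'

# Time the network fetches alone for a small sample of records
fetch_time = Benchmark.realtime do
  Event.limit(20).each do |row|
    URI.open(urlhere + row.first + row.second).read
  end
end
puts "20 fetches took #{fetch_time.round(2)}s"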
If it is true, and parallel requests would speed it up, you would need to restructure your code to load events in batches, build a queue of parallel requests for each batch, and then handle them after they execute. Here's some sketch code.
class YourBatchProcessingClass
def initialize(batch_size: 200)
@batch_size = batch_size
@hydra = Typhoeus::Hydra.new(max_concurrency: @batch_size)
end
def perform
# Get an array of records
Event.find_in_batches(batch_size: @batch_size) do |batch|
# Store all the requests so we can access their responses later.
requests = batch.map do |record|
request = Typhoeus::Request.new(your_url_build_logic(record))
@hydra.queue request
request
end
@hydra.run # Run requests in parallel
# Process responses from each request
requests.each do |request|
your_response_processing(request.response.body)
end
end
rescue WhateverError => e
puts e.message
end
private
def your_url_build_logic(event)
# TODO
end
def your_response_processing(response_body)
# TODO
end
end
# Run the service by calling this in your Rake task definition
YourBatchProcessingClass.new.perform
Ruby can be used for pure scripting, but it functions best as an object-oriented language. Decomposing your processing work into clear methods can help clarify your code and help you catch things like Tom Lord mentioned in the comments on your question. Also, instead of wrapping your whole script in a begin..rescue block, you can use method-level rescues as in #perform above, or just wrap @hydra.run.
As a note, .all.each is a memory hog, and is thus considered a bad solution to iterating over records: .all loads all of the records into memory before iterating over them with .each. To save memory, it's better to use .find_each or .find_in_batches, depending on your use case. See: http://api.rubyonrails.org/classes/ActiveRecord/Batches.html
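For example, the outer loop could be rewritten to stream records in batches (a sketch based on the question's Event model):
# Loads records in batches of 500 instead of the whole table at once
Event.find_each(batch_size: 500) do |row|
  # ...process row as before...
end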

How to continue indexing documents in Elasticsearch (Rails)?

So I ran this command rake environment elasticsearch:import:model CLASS='AutoPartsMapper' FORCE=true to index documents in Elasticsearch. In my database I have 10,000,000 records, and it takes (I think) one day to index this. While indexing was running, my computer turned off (I had indexed 2,000,000 documents). Is it possible to continue indexing documents?
If you use Rails 4.2+ you can use ActiveJob to schedule the job and leave it running. So, first generate it with:
bin/rails generate job elastic_search_index
This will give you a class with a perform method:
class ElasticSearchIndexJob < ApplicationJob
def perform
# implement the indexing here
AutoPartsMapper.__elasticsearch__.create_index! force: true
AutoPartsMapper.__elasticsearch__.import
end
end
Set Sidekiq as your Active Job provider (see the config sketch below) and, from the console, initiate the job with:
ElasticSearchIndexJob.perform_later
This will enqueue the job and execute it when a worker is next free, while freeing up your console. You can leave it running and check the process in bash later:
ps aux | grep side
this will give you something like: sidekiq 4.1.2 app[1 of 12 busy]
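For reference, pointing Active Job at Sidekiq is usually a one-line configuration change (a sketch; the sidekiq gem and a running Redis are assumed):
# config/application.rb (inside the Application class)
config.active_job.queue_adapter = :sidekiq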
Have a look at this post that explains the Sidekiq/ActiveJob integration:
http://ruby-journal.com/how-to-integrate-sidekiq-with-activejob/
Hope it helps
There is no such functionality in elasticsearch-rails AFAIK, but you could write a simple task to do that.
namespace :es do
task :populate, [:start_id] => :environment do |_, args|
start_id = args[:start_id].to_i
AutoPartsMapper.where('id > ?', start_id).order(:id).find_each do |record|
puts "Processing record ##{record.id}"
record.__elasticsearch__.index_document
end
end
end
Start it with bundle exec rake es:populate[<start_id>] passing the id of the record from which to start the next batch.
Note that this is a simplistic solution which will be much slower than batch indexing.
UPDATE
Here is a batch indexing task. It is much faster and automatically detects the record from which to continue. It does make an assumption that previously imported records were processed in increasing id order and without gaps. I haven't tested it but most of the code is from a production system.
namespace :es do
task :populate_auto => :environment do |_, args|
start_id = get_max_indexed_id
AutoPartsMapper.where('id > ?', start_id).order(:id).find_in_batches(batch_size: 1000) do |records|
elasticsearch_bulk_index(records)
end
end
def get_max_indexed_id
AutoPartsMapper.search(aggs: {max_id: {max: {field: :id }}}, size: 0).response[:aggregations][:max_id][:value].to_i
end
def elasticsearch_bulk_index(records)
return if records.empty?
klass = records.first.class
klass.__elasticsearch__.client.bulk({
index: klass.__elasticsearch__.index_name,
type: klass.__elasticsearch__.document_type,
body: elasticsearch_records_to_index(records)
})
end
def elasticsearch_records_to_index(records)
records.map do |record|
payload = { _id: record.id, data: record.as_indexed_json }
{ index: payload }
end
end
end

Ruby on Rails: doing a find on reference

I have two tables (nodes and agents). Nodes belong_to agents. I have a script that pulls in the Rails project environment, and I'm trying to read values from ActiveRecord. I'm assuming what I'm asking should work the same whether it's in a controller or view, or in a CLI script. So, my script looks like this:
#!/usr/bin/env ruby
require '/Users/hseritt/devel/rails_projects/monitor_app/config/environment'
banner = "Banner running..."
script_dir = '/devel/rails_projects/monitor_app/monitor/bin'
class Runner
attr_accessor :banner, :agents, :agent_module, :nodes
def initialize(banner, script_dir)
@banner = banner
@agents = Agent.all
@nodes = Node.all
@script_dir = script_dir
end
def run
puts @banner
@agents.each do |agent|
if agent.active?
agent_module = '%s/%s' % [@script_dir, agent.name]
require agent_module
@nodes.each do |node|
if node.agent == agent
puts node.name
end
end
#
# HERE IS THE ISSUE:
# ns = Node.find_by_agent_id(agent.id)
# ns.each do |node|
# puts node.name
# end
#
# yields this error:
# `method_missing': undefined method `each' for #<Node:0x007fe4dc4beba0> (NoMethodError)
# I would think `ns` here would be iterable, but it doesn't seem that way.
end
end
end
end
if $0 == __FILE__
runner = Runner.new(banner, script_dir)
runner.run
end
So, this is in the run method. The block that is not commented out works, but of course it's not a good solution, since for every agent you have to iterate through all the nodes again. The block that is commented out seemed logical to me but throws the error shown. I'm probably not googling the right thing here. What am I missing?
Node.find_all_by_agent_id(agent.id)
If you don't use "all", find_by_agent_id returns only the first matching record, not a collection, which is why calling each on it fails.
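Note that dynamic finders like find_all_by_agent_id were removed in later Rails versions; the equivalent with where (a sketch based on the question's models) is:
# Returns a relation of all matching nodes, which is enumerable
Node.where(agent_id: agent.id).each do |node|
  puts node.name
end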

How to test the number of database calls in Rails

I am creating a REST API in Rails. I'm using RSpec. I'd like to minimize the number of database calls, so I would like to add an automatic test that verifies the number of database calls being executed as part of a certain action.
Is there a simple way to add that to my test?
What I'm looking for is some way to monitor/record the calls that are being made to the database as a result of a single API call.
If this can't be done with RSpec but can be done with some other testing tool, that's also great.
The easiest thing in Rails 3 is probably to hook into the notifications API.
This subscriber
class SqlCounter < ActiveSupport::LogSubscriber
def self.count= value
Thread.current['query_count'] = value
end
def self.count
Thread.current['query_count'] || 0
end
def self.reset_count
result, self.count = self.count, 0
result
end
def sql(event)
self.class.count += 1
puts "logged #{event.payload[:sql]}"
end
end
SqlCounter.attach_to :active_record
will print every executed SQL statement to the console and count them. You could then write specs such as
expect do
# do stuff
end.to change(SqlCounter, :count).by(2)
You'll probably want to filter out some statements, such as ones starting/committing transactions or the ones active record emits to determine the structures of tables.
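For example, a rough way to skip transaction bookkeeping statements inside the subscriber (a sketch; the regexp is illustrative and may need adjusting for your adapter):
# inside SqlCounter
IGNORED_SQL = /\A\s*(BEGIN|COMMIT|ROLLBACK|SAVEPOINT|RELEASE SAVEPOINT)/i

def sql(event)
  return if event.payload[:sql] =~ IGNORED_SQL
  self.class.count += 1
end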
You may be interested in using explain. But that won't be automatic. You will need to analyse each action manually. But maybe that is a good thing, since the important thing is not the number of db calls, but their nature. For example: Are they using indexes?
Check this:
http://weblog.rubyonrails.org/2011/12/6/what-s-new-in-edge-rails-explain/
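For example, calling explain on a relation prints the database's query plan, which shows whether an index is used (a sketch; Book is a hypothetical model):
Book.where(title: 'Narnia').explain
# => EXPLAIN for: SELECT "books".* FROM "books" WHERE "books"."title" = 'Narnia'
#    (the actual plan output depends on your database)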
Use the db-query-matchers gem.
expect { subject.make_one_query }.to make_database_queries(count: 1)
Fredrick's answer worked great for me, but in my case, I also wanted to know the number of calls for each ActiveRecord class individually. I made some modifications and ended up with this in case it's useful for others.
class SqlCounter < ActiveSupport::LogSubscriber
# Returns the number of database "Loads" for a given ActiveRecord class.
def self.count(clazz)
name = clazz.name + ' Load'
Thread.current['log'] ||= {}
Thread.current['log'][name] || 0
end
# Returns a list of ActiveRecord classes that were counted.
def self.counted_classes
log = Thread.current['log']
loads = log.keys.select {|key| key =~ /Load$/ }
loads.map { |key| Object.const_get(key.split.first) }
end
def self.reset_count
Thread.current['log'] = {}
end
def sql(event)
name = event.payload[:name]
Thread.current['log'] ||= {}
Thread.current['log'][name] ||= 0
Thread.current['log'][name] += 1
end
end
SqlCounter.attach_to :active_record
Since count now takes a class argument, the spec becomes (User here stands in for whichever model you expect to be loaded):
expect do
# do stuff
end.to change { SqlCounter.count(User) }.by(2)
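You can also check which classes were queried at all (a sketch; the request helper and model names are illustrative):
SqlCounter.reset_count
get '/api/books'           # exercise the code under test
SqlCounter.counted_classes # => e.g. [Book, Author]
SqlCounter.count(Book)     # => number of Book loads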

Run Ruby block as specific OS user?

Can you execute a block of Ruby code as a different OS user?
What I, ideally, want is something like this:
user("christoffer") do
# do something
end
Possible?
This code can do what you want. Error handling is up to you. ;-)
require 'etc'
def as_user(user, &block)
u = Etc.getpwnam(user)
Process.fork do
Process.uid = u.uid
block.call(user)
end
end
puts("caller PID = #{Process.pid}")
puts("caller UID = #{Process.uid}")
as_user "bmc" do |user|
puts("In block as #{user} (uid=#{Process.uid}), pid is #{Process.pid}")
end
Note, however, that it will require that you run Ruby as root, or as setuid-to-root, which has some severe security implications.
The accepted answer does change UID, but doing this alone can have surprising results when you create files or child processes. Try:
as_user 'bmc' do |user|
File.open('/tmp/out.txt', 'w')
end
You'll find that the file was created as root, which isn't what one might expect.
The behavior is less predictable when running a command using backticks. The results of the following probably aren't what one would expect:
as_user 'puppet' do
puts `whoami`
puts `id`
puts `whoami; id`
end
Testing on a Linux system, the first puts printed root. id printed the following:
uid=1052(chet) gid=0(root) euid=0(root) groups=0(root)
The final puts disagreed:
puppet
uid=1052(chet) gid=0(root) groups=0(root)
To get consistent behavior, be sure to set effective UID as well:
def as_user(user, &block)
u = Etc.getpwnam(user)
Process.fork do
Process.uid = Process.euid = u.uid
block.call(user)
end
end
It can be useful to get a value back from the child process. Adding a little IPC fun gives:
require 'etc'
def as_user(user, &block)
u = (user.is_a? Integer) ? Etc.getpwuid(user) : Etc.getpwnam(user)
reader, writer = IO.pipe
Process.fork do
# the child process won't need to read from the pipe
reader.close
# use primary group ID of target user
# This needs to be done first as we won't have
# permission to change our group after changing EUID
Process.gid = Process.egid = u.gid
# set real and effective UIDs to target user
Process.uid = Process.euid = u.uid
# get the result and write it to the IPC pipe
result = block.call(user)
Marshal.dump(result, writer)
writer.close
# prevent shutdown hooks from running
Process.exit!(true)
end
# back to reality... we won't be writing anything
writer.close
# block until there's data to read
result = Marshal.load(reader)
# done with that!
reader.close
# return block result
result
end
val = as_user 'chet' do
`whoami` + `id` + `whoami; id`
end
puts "back from wonderland: #{val}"
