How to run background jobs during cucumber tests? - ruby-on-rails

What is the best way to test something that requires background jobs with Cucumber? I need to run DelayedJob and Sneakers workers in the background while tests are running.

You can run any application in the background:
@pid = Process.spawn "C:/Apps/whatever.exe"
Process.detach(@pid)
And even kill it after the tests are done:
Process.kill('KILL', @pid) unless @pid.nil?
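For Cucumber specifically, that could live in a support file. A rough sketch, not part of the original answer; the worker command is an assumption, so substitute whatever starts your DelayedJob or Sneakers process:
# features/support/env.rb -- sketch only
@worker_pid = Process.spawn("bundle exec rake jobs:work")  # assumed DelayedJob worker command
Process.detach(@worker_pid)

at_exit do
  Process.kill('KILL', @worker_pid) unless @worker_pid.nil?
end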

You can create your own step definition in features/step_definitions/whatever_steps.rb (hopefully with a better name)
When /^I wait for background jobs to complete$/ do
  Delayed::Worker.new.work_off
end
That can be extended for any other scripts you'd like to run with that step. Then in the test, it goes something like:
Then I should see the text "..."
When I wait for background jobs to complete
And I refresh the page
Then I should see the text "..."

In case anyone has a similar problem: I ended up writing this (thanks to a Square blog post):
require "timeout"
class CucumberExternalWorker
attr_accessor :worker_pid, :start_command
def initialize(start_command)
raise ArgumentError, "start_command was expected" if start_command.nil?
self.start_command = start_command
end
def start
puts "Trying to start #{start_command}..."
self.worker_pid = fork do
start_child
end
at_exit do
stop_child
end
end
private
def start_child
exec({ "RAILS_ENV" => Rails.env }, start_command)
end
def stop_child
puts "Trying to stop #{start_command}, pid: #{worker_pid}"
# send TERM and wait for exit
Process.kill("TERM", worker_pid)
begin
Timeout.timeout(10) do
Process.waitpid(worker_pid)
puts "Process #{start_command} stopped successfully"
end
rescue Timeout::Error
# Kill process if could not exit in 10 seconds
puts "Sending KILL signal to #{start_command}, pid: #{worker_pid}"
Process.kill("KILL", worker_pid)
end
end
end
This can be called as follows (I added it to env.rb for Cucumber):
# start delayed job
$delayed_job_worker = CucumberExternalWorker.new("rake jobs:work")
$delayed_job_worker.start
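The question also mentions Sneakers; the same class can drive any long-running worker command. A sketch, assuming your Rakefile loads sneakers/tasks so that rake sneakers:run exists:
# start sneakers workers (point the WORKERS env var at your worker classes if needed)
$sneakers_worker = CucumberExternalWorker.new("rake sneakers:run")
$sneakers_worker.start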

Related

Rails start multiple processes that involve loops at the same time

In my Ruby on Rails project I have 3 workers. They are in /lib/workers/worker_a.rb, /lib/workers/worker_b.rb, and /lib/workers/worker_c.rb.
Worker A is defined like this:
class WorkerA
  def run!
    logger.info "Starting worker A"
    loop do
      begin
        did_work = workerAWork   # perform one unit of work
        sleep 1.5 if did_work
      rescue => e
        logger.error e
        sleep 3
      end
    end
  end
end

WorkerA.new.run! if __FILE__ == $0
Worker B is defined very similarly:
class WorkerB
  def run!
    logger.info "Starting Worker B"
    loop do
      begin
        did_work = workerBWork   # perform one unit of work
        sleep 1.5 if did_work
      rescue => e
        logger.error e
        sleep 3
      end
    end
  end
end

WorkerB.new.run! if __FILE__ == $0
Worker C is defined in the same way.
All 3 workers are doing what the app needs them to do. However, during deployment using Docker, I would need to open 3 terminals and start each worker using bin/rails runner ./lib/workers/worker_a.rb, bin/rails runner ./lib/workers/worker_b.rb, and bin/rails runner ./lib/workers/worker_c.rb.
If I create a shell script that looks like
bin/rails runner ./lib/workers/worker_a.rb
bin/rails runner ./lib/workers/worker_b.rb
bin/rails runner ./lib/workers/worker_c.rb
Only the first worker will start and the terminal will hang there as there is a loop. The loop will prevent workers B and C from starting.
Is there a way to write the script such that they can be started one after another?
Thanks!
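One way to approach this, sketched in Ruby so it mirrors the Process.spawn answer earlier in this thread (this is not from the original question, and the script name is an assumption), is a small launcher that spawns each runner as its own child process and then waits on all of them:
#!/usr/bin/env ruby
# start_workers.rb -- sketch only; also usable as a single Docker entrypoint
commands = [
  "bin/rails runner ./lib/workers/worker_a.rb",
  "bin/rails runner ./lib/workers/worker_b.rb",
  "bin/rails runner ./lib/workers/worker_c.rb"
]

pids = commands.map { |cmd| Process.spawn(cmd) }  # each worker loops in its own process
Process.waitall                                   # block until all children exit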

Sidekiq logs show JobWrapper instead of Job class name

I have a Rails application that runs some background jobs via ActiveJob and Sidekiq. The sidekiq logs in both the terminal and the log file show the following:
2016-10-18T06:17:01.911Z 3252 TID-oukzs4q3k ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-97318b38b1391672d21feb93 INFO: start
Is there some way to show the class names of the jobs here similar to how logs work for a regular Sidekiq Worker?
Update:
Here is how a Sidekiq worker logs:
2016-10-18T11:05:39.690Z 13678 TID-or4o9w2o4 ClientJob JID-b3c71c9c63fe0c6d29fd2f21 INFO: start
Update 2:
My sidekiq version is 3.4.2
I'd like to replace ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper with ClientJob.
So I managed to do this by removing Sidekiq::Middleware::Server::Logging from the middleware configuration and adding a modified class that displays the arguments in the logs. The arguments themselves contain the job and action names as well.
For the latest version (currently 4.2.3), put this in sidekiq.rb:
require 'sidekiq'
require 'sidekiq/middleware/server/logging'

class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def log_context(worker, item)
    klass = item['wrapped'.freeze] || worker.class.to_s
    "#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}"
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.remove Sidekiq::Middleware::Server::Logging
    chain.add ParamsLogging
  end
end
For version 3.4.2, or similar, override the call method instead:
class ParamsLogging < Sidekiq::Middleware::Server::Logging
  def call(worker, item, queue)
    klass = item['wrapped'.freeze] || worker.class.to_s
    Sidekiq::Logging.with_context("#{klass} (#{item['args'].try(:join, ' ')}) JID-#{item['jid'.freeze]}") do
      begin
        start = Time.now
        logger.info { "start" }
        yield
        logger.info { "done: #{elapsed(start)} sec" }
      rescue Exception
        logger.info { "fail: #{elapsed(start)} sec" }
        raise
      end
    end
  end
end
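Registration is presumably the same as in the newer-version example above:
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.remove Sidekiq::Middleware::Server::Logging
    chain.add ParamsLogging
  end
end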
You must be running some ancient version. Upgrade.
Sorry, looks like that's a Rails 5+ feature only. You'll need to upgrade Rails. https://github.com/rails/rails/commit/8d2b1406bc201d8705e931b6f043441930f2e8ac

How to test concurrency locally?

What's the best way to test concurrency locally? I.e., I want to test 10 concurrent hits. I am aware of services like Blitz. However, I am trying to find a simpler way of doing it locally to test against race conditions.
Any ideas? Via Curl maybe?
Check out Apache Bench (ab). Basic usage is dead simple:
ab -n 100 -c 10 http://your.application
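That fires 100 requests in total with 10 running concurrently. If you would rather stay in Ruby, a rough thread-based equivalent is sketched below; the localhost:3000 URL is an assumption, so point it at wherever your app is running:
require "net/http"

# 10 concurrent clients, 10 requests each -- adjust to taste
threads = 10.times.map do
  Thread.new do
    10.times { Net::HTTP.get(URI("http://localhost:3000/")) }
  end
end
threads.each(&:join)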
For testing race conditions locally in your test suite, you can use helpers like this:
# call the block in a forked process
def fork_with_new_connection(config, object = nil, options = {})
  raise ArgumentError, "Missing block" unless block_given?
  options = {
    :stop => true, :index => 0
  }.merge(options)

  fork do
    # pause the child right after fork so all children can be resumed together
    # (SIGSTOP cannot be trapped, so the child stops itself instead)
    Process.kill('STOP', Process.pid) if options[:stop]
    begin
      ActiveRecord::Base.establish_connection(config)
      yield(object)
    ensure
      ActiveRecord::Base.remove_connection
    end
  end
end

# call the block multiple times, each in its own forked process
def multi_times_call_in_fork(count = 3, &block)
  raise ArgumentError, "Missing block" unless block_given?
  config = ActiveRecord::Base.remove_connection
  pids = []
  count.times do |index|
    pids << fork_with_new_connection(config, nil, :index => index, &block)
  end
  # resume the stopped children so they run roughly concurrently
  Process.kill("CONT", *pids)
  Process.waitall
  ActiveRecord::Base.establish_connection(config)
end

# example
multi_times_call_in_fork(5) do
  # do something with race conditions
  # add asserts
end
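For instance, a sketch of provoking a classic lost-update race; Counter is a hypothetical ActiveRecord model with an integer value column, used purely for illustration:
multi_times_call_in_fork(5) do
  counter = Counter.first
  counter.update(value: counter.value + 1)  # non-atomic read-modify-write across processes
end

# without locking, the final value will often be less than 5
puts Counter.first.value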

Resque error- wrong number of arguments(0 for 1)

I am using Resque to handle all the heavy-lifting background tasks.
In my library/parsers/file.rb I have
Resque.enqueue(Hello)
This points to app/workers/file.rb, where I have
class Hello
  def self.perform(page)
    .......
    .......
  rescue Exception => e
    log "error: #{e}"
  end
end
My lib/tasks/resque.rake file is
require "resque/tasks"
task "resque:setup" => :environment
I am able to queue the jobs, but when I try to execute the job using
rake resque:work QUEUE=*
it throws an error saying
argument error
wrong number of arguments (0 for 1)
What am I doing wrong here?
pjumble is exactly right, you're not passing the page.
Resque.enqueue(Hello, page_id)
enqueue takes the job class followed by the args, which are passed into the perform method. If you had:
class Hello
  def self.perform(page_number, page_foo, page_bar)
    ...
  end
end
Then you would do this:
Resque.enqueue(Hello, page_number, page_foo, page_bar)
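Note also that Resque workers are normally defined with a @queue so that rake resque:work QUEUE=* picks them up. A rough sketch of the worker; the :default queue name and the Page lookup are illustrative assumptions, not the asker's code:
class Hello
  @queue = :default  # assumed queue name; match it to the QUEUE you run the worker with

  def self.perform(page_id)
    page = Page.find(page_id)  # hypothetical lookup; pass IDs rather than objects to Resque
    # ... heavy lifting ...
  rescue Exception => e
    Rails.logger.error "error: #{e}"
  end
end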

Heroku, cron, delayed_job and workers (Rails 3)

I have two questions:
How can I add a heroku worker just before running a delayed job and remove it after it finishes?
Is my cron.rake ok?
cron.rake:
desc "This task is called by the Heroku cron add-on"
task :cron => :environment do
puts "requesting homepage to refresh cache"
uri = URI.parse('http://something.com')
Net::HTTP.get(uri)
puts "end requesting homepage"
puts "start sending daily mail"
User.notified_today.each do |user|
Delayed::Job.enqueue UserMailer.daily_mail(user).deliver
end
puts "end sending daily mail"
end
I use collectiveidea delayed_job.
I've had good success with HireFire.
Easy setup:
Add gem 'hirefire' to your Gemfile
Create Rails.root/config/initializers/hirefire.rb with the config information.
To add/remove workers, hook into your ORM's after :create / after :destroy callbacks
With DataMapper on Heroku, I did it like this (You must set the ENV vars yourself)
MAX_CONCURRENT_WORKERS = 5

if ENV["HEROKU_APP"]
  Delayed::Job.after :create do
    workers_needed = [Delayed::Job.count, MAX_CONCURRENT_WORKERS].min
    client = Heroku::Client.new(ENV['HEROKU_USERNAME'], ENV['HEROKU_PASSWORD'])
    client.set_workers(ENV['HEROKU_APP'], workers_needed)
    puts "- Initialized Heroku workers for ZipDecoder"
  end

  Delayed::Job.after :destroy do
    workers_needed = [Delayed::Job.count, MAX_CONCURRENT_WORKERS].min
    client = Heroku::Client.new(ENV['HEROKU_USERNAME'], ENV['HEROKU_PASSWORD'])
    client.set_workers(ENV['HEROKU_APP'], workers_needed)
    puts "- Cleaned Up a Delayed Job for ZipDecoder ---------------------------------"
  end
end
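With the ActiveRecord backend that collectiveidea delayed_job uses, a rough equivalent of those hooks might look like the sketch below; it assumes the same HEROKU_* environment variables and the legacy Heroku::Client shown above, and the initializer path is just a suggestion:
# config/initializers/heroku_autoscale.rb -- sketch only
if ENV["HEROKU_APP"]
  Delayed::Job.class_eval do
    after_create  :scale_heroku_workers
    after_destroy :scale_heroku_workers

    private

    def scale_heroku_workers
      workers_needed = [Delayed::Job.count, 5].min
      client = Heroku::Client.new(ENV['HEROKU_USERNAME'], ENV['HEROKU_PASSWORD'])
      client.set_workers(ENV['HEROKU_APP'], workers_needed)
    end
  end
end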
You could also use an "autoscale" plugin like workless or heroku-autoscale.
As for the cron task, I don't see any problem with it.
