I'm writing a rake task that would be called every minute (possibly every 30 seconds in the future) by Whenever, and it contacts a polling API endpoint (per user in our database). Obviously, this is not efficient to run as a single thread, but is it possible to multithread? If not, is there a good event-based HTTP library that would be able to get the job done?
I'm writing a rake task that would be called every minute (possibly every 30 seconds in the future) by Whenever
Beware of Rails startup times; it might be better to use a forking model such as Resque or Sidekiq. Resque provides https://github.com/bvandenbos/resque-scheduler, which should be able to do what you need. I can't speak for Sidekiq, but I'm sure it has something similar available (Sidekiq is much newer than Resque).
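For example, a resque-scheduler entry for this task might look like the following; the job class, queue, and schedule names here are assumptions:

# config/resque_schedule.yml
poll_api_endpoints:
  every: 1m
  class: PollApiJob
  queue: polling
  description: Polls the API endpoint for each user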
Obviously, this is not efficient to run as a single thread, but is it possible to multithread? If not, is there a good event-based HTTP library that would be able to get the job done?
I'd suggest you look at ActiveRecord's find_in_batches for making your finder process more efficient; once you have your batches, you can easily fan out to threads with something like:
#
# find_in_batches yields batches of 1000 records by default;
# you can pass batch_size to tune that for larger (or smaller)
# batches depending on your available RAM.
#
User.find_in_batches do |batch_of_users|
  #
  # Each batch is an array of users, always smaller than or
  # equal to the chosen batch size.
  #
  # We collect a bunch of new threads, one for each user.
  #
  batch_threads = batch_of_users.collect do |user|
    #
    # We pass the user into the thread; this is a good habit
    # for shared variables, although in this case it doesn't
    # make much difference.
    #
    Thread.new(user) do |u|
      #
      # Do the API call here, using `u` (not `user`) to access
      # the user instance.
      #
      # We shouldn't need an evented HTTP library: Ruby threads
      # yield control when IO happens, and control returns to the
      # thread whenever the scheduler decides, but HTTP and network
      # IO are among the most thread-friendly work you can do in Ruby.
      #
    end
  end
  #
  # Joining the threads means waiting for them all to finish
  # before moving on to the next batch.
  #
  batch_threads.map(&:join)
end
This will start no more than batch_size threads at a time, waiting for each batch of threads to finish before moving on to the next batch.
It would also be possible to skip the batching and spawn a thread per user as you iterate, but then you would have an uncontrollable number of threads. There's an alternative you might benefit from here; it gets a lot more complicated, involving a ThreadPool and a shared list of work to do. I've posted it as a Gist so as not to spam Stack Overflow: https://gist.github.com/6767fbad1f0a66fa90ac
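For a rough idea of the thread-pool approach, here is a minimal sketch: a fixed number of workers draining a shared, thread-safe queue of user ids. POOL_SIZE and doing the API call by id are assumptions, not the gist's exact code:

require 'thread'

POOL_SIZE = 10 # assumption: tune to what the API tolerates

work = Queue.new
User.pluck(:id).each { |id| work << id }

workers = Array.new(POOL_SIZE) do
  Thread.new do
    loop do
      user_id = begin
        work.pop(true) # non-blocking pop raises ThreadError when empty
      rescue ThreadError
        break # queue drained; this worker is done
      end
      # Do the API call for user_id here
    end
  end
end
workers.each(&:join)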
I would suggest using Sidekiq, which is great at multithreading. You can then enqueue a separate job per user for polling the API. clockwork can be used to make the jobs you enqueue recurring.
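As a sketch of that setup, the clock process might look like this; PollApiWorker is a hypothetical Sidekiq worker that polls the API for one user:

# clock.rb -- run with: clockwork clock.rb
require 'clockwork'
require_relative 'config/boot'
require_relative 'config/environment'

module Clockwork
  # Enqueue one Sidekiq job per user every minute.
  every(1.minute, 'poll.api') do
    User.ids.each { |id| PollApiWorker.perform_async(id) }
  end
end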
I need to consume SQS events with my Rails application. I've written a Sidekiq job which does long polling like this:
class SqsConsumerWorker
  include Sidekiq::Worker

  def perform
    ...
    poller = Aws::SQS::QueuePoller.new(<queue_url>, client: <sqs_instance>)
    poller.poll(wait_time_seconds: 20, max_number_of_messages: 10, visibility_timeout: 180) do |messages|
      messages.each do |message|
        puts message.inspect
      end
    end
  end
end
The first problem was when to initiate this job. Currently I've moved the invocation to a Rails initializer, where I've overridden the Sidekiq config.on(:startup) block to call this job. This lets me start the job on every deployment. (I have also written some logic in this initializer to check that the number of workers is not above some limit, etc.)
I wanted to understand whether there is a better way to solve this problem. I've seen the shoryuken gem, which abstracts these things away, but I need more control over the consumer and thought of writing my own implementation. I also need to understand how to scale the number of consumers up and down with this approach.
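For comparison, if shoryuken did fit, the consumer above would reduce to something like this sketch; the queue name and options are assumptions, and shoryuken manages the polling loop and concurrency itself:

class SqsConsumerWorker
  include Shoryuken::Worker

  # Queue name is an assumption; auto_delete removes the message
  # from SQS once perform returns without raising.
  shoryuken_options queue: 'my-events-queue', auto_delete: true

  def perform(sqs_msg, body)
    puts sqs_msg.inspect
  end
end

Scaling up and down then becomes a matter of shoryuken's concurrency setting and the number of processes you run.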
I want to run an infinite loop on a separate thread that starts as soon as the app initializes (in an initializer). Here's what it might look like:
# in config/initializers/item_loop.rb
Thread.new do
  loop do
    Item.find_each do |item|
      # Get price from third-party api and update record.
      item.update_price!
      # Need to wait a little between requests to avoid getting throttled.
      sleep 5
    end
  end
end
I tend to accomplish this kind of thing by running batch updates in recurring background jobs. But that doesn't make sense here, since I don't really need parallelization, downtime, or queueing; I just want to update one item at a time in a single thread, forever.
Yet there are multiple things that concern me:
Leaked Connections: Should I open up a new connection_pool for the thread? Should I use a gem like safely to avoid crashing the thread?
Thread Safety: Should I be worried about race conditions? Should I make use of Mutex and synchronize? Does using ActiveRecord::Base.transaction impact thread safety?
Deadlock: Should I use Rails.application.executor.wrap?
Concurrent Ruby/Sleep Intervals: Should I use TimerTask from the concurrent-ruby gem instead of sleep, or something other than Thread.new? (See the sketch after this list.)
Information on any of these subjects is appreciated.
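On the TimerTask point, a minimal sketch of what that could look like, updating one item per tick instead of sleeping inside a loop; the 5-second interval and the least-recently-updated cursor are assumptions, and Item#update_price! is from the snippet above:

# config/initializers/item_loop.rb
require 'concurrent'

task = Concurrent::TimerTask.new(execution_interval: 5) do
  # executor.wrap keeps ActiveRecord connection handling safe
  # in a non-request thread.
  Rails.application.executor.wrap do
    # Hypothetical cursor: always touch the stalest item next;
    # update_price! bumps updated_at, advancing the cursor.
    item = Item.order(:updated_at).first
    item&.update_price!
  end
end
task.execute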
Usually, to perform a job in a background (non-web-server) process, a background worker manager is used. Rails has a specific interface for such managers called ActiveJob. There are a few implementations of background worker managers - Sidekiq, DelayedJob, Resque, etc. - and Sidekiq is generally preferred. Returning to the actual problem: you can create a schedule to run an update-price job at an interval using the sidekiq-scheduler gem. Another nice extension for throttling Sidekiq workers is sidekiq-throttler.
Some code snippets:
# app/workers/update_price_worker.rb
# Actual worker class
class UpdatePriceWorker
  include Sidekiq::Worker
  sidekiq_options throttle: { threshold: 720, period: 1.hour }

  def perform(item_id)
    Item.find(item_id).update_price!
  end
end

# app/workers/update_price_master_worker.rb
# Master worker that loops over items
class UpdatePriceMasterWorker
  include Sidekiq::Worker

  def perform
    Item.find_each { |item| UpdatePriceWorker.perform_async item.id }
  end
end
# config/sidekiq.yml
:schedule:
  update_price:
    cron: '0 */4 * * *' # Runs once per 4 hours - depends on how many Items there are
    class: UpdatePriceMasterWorker
The idea of this setup: we run UpdatePriceMasterWorker every 4 hours (this depends on how much time it takes to update all items). The master worker creates a job to update the price of each particular item. UpdatePriceWorker is throttled to a maximum of 720 requests per hour.
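Note that the throttle: options only take effect once sidekiq-throttler's server middleware is registered; per its README, that is something like:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::Throttler
  end
end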
I use rails runner x (under the god gem or k8s) in a similar case.
rails runner runs in another process, so we do not have to worry about connection leaks or thread safety.
The god gem or k8s gives you concurrency and monitoring for job failures. Running one process with a specific sleep time keeps you within the third-party API's throttles (running N processes with N API keys could speed things up).
I think deadlock could happen in any concurrent setup.
I do not think this loop + sleep approach is a design flaw, because:
cron always starts jobs on schedule, so long-running jobs can end up running simultaneously and you have to add logic to avoid overlap; a plain loop + sleep keeps maximum throughput without any job overlap.
ActiveJob is good for one-shot long-running tasks, but it does not fit a daemon.
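A minimal sketch of such a runner script, assuming the Item#update_price! method from the question; god or a k8s deployment keeps the process alive:

# script/update_prices_loop.rb -- run with: bin/rails runner script/update_prices_loop.rb
loop do
  Item.find_each do |item|
    item.update_price!
    # Sleep between requests to stay under the third-party API's rate limit.
    sleep 5
  end
end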
Can loading many model records in a Sidekiq worker cause a memory leak, or do they get garbage collected?
For example:
class Worker
  include Sidekiq::Worker

  def perform
    Model.find_each do |item|
    end
  end
end
Can using ActiveRecord::Base.connection inside a worker cause problems, or does this connection close automatically?
I think you are running into a problem that I also had with a "worker" - the actual problem was the code, not Sidekiq in any way, shape or form.
In my problematic code, I thoughtlessly just loaded up a boatload of models with a big, fat, greedy query (hundreds of thousands of instances).
I fixed my worker/code quite simply: I transitioned my DB call from all to find_in_batches, pulling a lower number of objects per batch.
Model.find_in_batches(batch_size: 100) do |batch|
  # I like find_in_batches better than find_each because you can use lower numbers for the batch size
  batch.each do |record|
    # ... other programming stuff
  end
end
As soon as I did this, a job that would bring down Sidekiq after a while (running out of memory on the box) has run with find_in_batches for 5 months without me even having to restart Sidekiq ... Ok, I may have restarted Sidekiq some in the last 5 months when I've deployed or done maintenance :), but not because of the worker!
I have a piece of code that performs the same queries over and over, and it's doing that in a background worker within a thread.
I checked out the ActiveRecord query cache middleware, but apparently it needs to be enabled before use. However, I'm not sure whether that's a safe thing to do and whether it will affect other running threads.
You can see the tests here: https://github.com/rails/rails/blob/3e36db4406beea32772b1db1e9a16cc1e8aea14c/activerecord/test/cases/query_cache_test.rb#L19
My question is: can I borrow and/or use the middleware directly to enable the query cache for the duration of a block, safely, within a thread?
When I tried ActiveRecord::Base.cache do, my CI started failing left and right...
EDIT: Rails 5 and later: the ActiveRecord query cache is automatically enabled even for background jobs like Sidekiq (see: https://github.com/mperham/sidekiq/wiki/Problems-and-Troubleshooting#activerecord-query-cache for information on how to disable it).
Rails 4.x and earlier:
The difficulty with applying ActiveRecord::QueryCache to your Sidekiq workers is that, aside from the implementation details of it being a middleware, it's meant to be built during the request and destroyed at the end of it. Since background jobs don't have a request, you need to be careful about when you clear the cache. A reasonable approach would be to cache only during the perform method, though.
So, to implement that, you'll probably need to write your own piece of Sidekiq middleware, based on ActiveRecord::QueryCache but following Sidekiq's middleware guide. E.g.,
class SidekiqQueryCacheMiddleware
  def call(worker, job, queue)
    connection = ActiveRecord::Base.connection
    enabled = connection.query_cache_enabled
    connection_id = ActiveRecord::Base.connection_id
    connection.enable_query_cache!
    yield
  ensure
    ActiveRecord::Base.connection_id = connection_id
    ActiveRecord::Base.connection.clear_query_cache
    ActiveRecord::Base.connection.disable_query_cache! unless enabled
  end
end
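To activate it, you'd register it in Sidekiq's server middleware chain, e.g. in an initializer (the file path is an assumption):

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add SidekiqQueryCacheMiddleware
  end
end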
I have a Rails app that sends multiple requests, both sequentially and in parallel, to a third-party API and does calculations in the backend.
I would like to know how long each of my API requests and calculations takes. Is there a performance testing gem I should use?
Note: my app uses Sidekiq to process backend jobs.
http://guides.rubyonrails.org/performance_testing.html might get you started; check out section 3 for details on wrapping methods in Benchmark, which outputs some useful stats to the log.
As a quick example:
def process
  Benchmark.bm do |x|
    x.report("Processing Task") do
      process_task(task_options)
    end
  end
end
would output something like:
                      user     system      total        real
Processing Task   8.206000   1.092000   9.298000 ( 14.609000)
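For timing an individual API call inline rather than a whole task, Benchmark.realtime returns the elapsed wall-clock seconds for a block; a small sketch, where fetch_prices stands in for your (hypothetical) API call:

require 'benchmark'

elapsed = Benchmark.realtime do
  fetch_prices # hypothetical third-party API call
end
Rails.logger.info("fetch_prices took #{(elapsed * 1000).round}ms")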