Optimising Sidekiq, Redis, Heroku and Rails

So I'm trying to process a CSV file via Sidekiq background job processing on a Heroku Worker instance. Whilst I can complete the process, I feel it could certainly be done more quickly and efficiently than I'm currently doing it. This question has two parts: firstly, are the database pools set up correctly, and secondly, how can I optimise the process?
Application environment:
Rails 4 application
Unicorn
Sidekiq
Redis-to-go (Mini plan, 50 connections max)
CarrierWave S3 implementation
Heroku Postgres (Standard Yanari, 60 connections max)
1 Heroku Web dyno
1 Heroku Worker dyno
NewRelic monitoring
config/unicorn.rb
worker_processes 3
timeout 15
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  if defined?(ActiveRecord::Base)
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || 2
    ActiveRecord::Base.establish_connection(config)
  end
end
config/sidekiq.yml
---
:concurrency: 5
staging:
  :concurrency: 5
production:
  :concurrency: 35
:queues:
  - [default, 1]
  - [imports, 10]
  - [validators, 10]
  - [send, 5]
  - [clean_up_tasks, 30]
  - [contact_generator, 20]
config/initializers/sidekiq.rb
ENV["REDISTOGO_URL"] ||= "redis://localhost:6379"
Sidekiq.configure_server do |config|
config.redis = { url: ENV["REDISTOGO_URL"] }
database_url = ENV['DATABASE_URL']
if database_url
ENV['DATABASE_URL'] = "#{database_url}?pool=50"
ActiveRecord::Base.establish_connection
end
end
Sidekiq.configure_client do |config|
config.redis = { url: ENV["REDISTOGO_URL"] }
end
The database connection pools are worked out as such:
I have 3 Web processes (unicorn worker_processes), and to each of these I am allocating 2 ActiveRecord connections via the after_fork hook (config/unicorn.rb), for a maximum total of 6 of my 60 available Postgres connections assigned to the Web dyno. In the Sidekiq initialiser, I'm allocating 50 Postgres connections via the ?pool=50 param appended to ENV['DATABASE_URL'], as described (somewhere) in the docs. I'm keeping my Sidekiq concurrency value at 35 (sidekiq.yml) to ensure I stay under both the 50-connection Redis limit and the 60-connection Postgres limit. This still needs more fine-grained tuning, but I'd rather get the data processing itself sorted before going any further.
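One way to make that arithmetic visible at runtime is a boot-time sanity check along these lines. This is a hypothetical snippet, not part of the original setup; it leans on the standard Sidekiq.server?, Sidekiq.options and ActiveRecord::Base.connection_pool APIs:

# config/initializers/pool_sanity_check.rb (hypothetical)
# Warn at boot if Sidekiq's thread count could outrun the ActiveRecord pool.
if defined?(Sidekiq) && Sidekiq.server?
  pool_size   = ActiveRecord::Base.connection_pool.size
  concurrency = Sidekiq.options[:concurrency]

  if concurrency > pool_size
    Rails.logger.warn "Sidekiq concurrency (#{concurrency}) exceeds AR pool size (#{pool_size}); threads may block waiting for connections"
  end
end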
Now, assuming the above is correct (and it wouldn't surprise me at all if it weren't) I'm handling the following scenario:
A user uploads a CSV file to be processed via their browser. This file can be anywhere between 50 rows and 10 million rows. The file is uploaded to S3 via the CarrierWave gem.
The user then configures a couple of settings for the import via the UI, the culmination of which adds a FileImporter job to the Sidekiq queue to start creating various models based on the rows.
The Import worker looks something like:
class FileImporter
  include Sidekiq::Worker
  sidekiq_options :queue => :imports

  def perform(import_id)
    import = Import.find_by_id import_id

    CSV.foreach(open(import.csv_data), headers: true) do |row|
      # import.csv_data is the S3 URL of the file.
      # Here I do some validation against a prebuilt redis table
      # to validate the row without making any activerecord calls
      # (business logic validations rather than straight DB ones anyway).
      unless invalid_record # invalid_record being the product of the previous validations
        # queue another job to actually create the AR models for this row
        ImportValidator.perform_async(import_id, row)
        # increment some redis counters
      end
    end
  end
end
This is slow - I've tried to limit the calls to ActiveRecord in the FileImporter worker, so I'm assuming the bottleneck is streaming the file from S3. It's not processing rows fast enough to build up a queue, so I'm never utilising all of my worker threads (usually somewhere between 15 and 20 of the 35 available threads are active). I've tried splitting this job up and feeding rows 100 at a time into an intermediary worker which then creates the ImportValidator jobs in a more parallel fashion, but that didn't fare much better.
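For reference, a minimal sketch of what that intermediary-worker split might look like. The BatchImporter name and the batch size of 100 are illustrative, not code from the actual app:

# Hypothetical intermediary worker: receives a slice of rows and fans out
# one ImportValidator job per row, so the fan-out isn't bottlenecked on the
# single thread streaming the CSV from S3.
class BatchImporter
  include Sidekiq::Worker
  sidekiq_options :queue => :imports

  # rows: an array of up to 100 CSV rows, serialised to plain arrays so
  # they survive the round-trip through Redis as JSON
  def perform(import_id, rows)
    rows.each do |row|
      ImportValidator.perform_async(import_id, row)
    end
  end
end

# In FileImporter#perform, enqueue slices instead of single rows:
#   CSV.foreach(open(import.csv_data), headers: true)
#      .each_slice(100) { |rows| BatchImporter.perform_async(import_id, rows.map(&:fields)) }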
So my question is, what's the best/most efficient method to accomplish a task like this?

It's possible you are at 100% CPU with 20 threads. You need another dyno.

Related

Using Redis in Heroku for a React and Rails API

I have a Rails API deployed on Heroku, which serves a React.js static page. They're both deployed on Heroku and communicate through an API link. My struggle comes when using Redis and Sidekiq.
On my Rails API I have the RedisToGo link configured with no issues, but when I go to my React app and try to send an email invite I get this message: Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED).
I thought that if I had it configured on my backend then it would work for my React static page app.
Sidekiq.yml
---
:verbose: false
:concurrency: 3
staging:
  :concurrency: 1
production:
  :concurrency: 5
:queues:
  - [mailers, 2]
  - slack_notifications
  - mixpanel
  - invoices
  - default
  - rollbar
Redis.rb
uri = URI.parse(ENV["REDISTOGO_URL"])
REDIS = Redis.new(:url => uri)
sidekiq.rb
Sidekiq::Extensions.enable_delay!

unless Rails.env == 'development' || Rails.env == 'test'
  Sidekiq.configure_server do |config|
    config.redis = {
      url: Rails.application.credentials.redis_url,
      password: Rails.application.credentials.redis_password
    }
  end

  Sidekiq.configure_client do |config|
    config.redis = {
      url: Rails.application.credentials.redis_url,
      password: Rails.application.credentials.redis_password
    }
  end
end

# Turn off backtrace if memory issues are popping up, as backtraces
# occupy too much memory on redis.
# number of lines of backtrace and number of re-tries
Sidekiq.default_worker_options = { backtrace: false, retry: 3 }

Sidekiq.configure_server do |config|
  # runs after your app has finished initializing
  # but before any jobs are dispatched
  config.on(:startup) do
    puts 'Sidekiq is starting...'
    # make_some_singleton
  end

  config.on(:quiet) do
    puts 'Got USR1, stopping further job processing...'
  end

  config.on(:shutdown) do
    puts 'Got TERM, shutting down process...'
    # stop_the_world
  end
end
So my questions are:
If I have already configured REDISTOGO_URL on my Rails app, do I need the same in my React config vars?
What's the best way to configure Sidekiq and Redis for a Rails API using React as a front-end on Heroku? I haven't seen anything that covers this on the internet.
I would appreciate your help! ;)
You need to run heroku config:set REDIS_PROVIDER=REDISTOGO_URL to tell Sidekiq to use REDISTOGO_URL to connect to Redis.
https://github.com/mperham/sidekiq/wiki/Using-Redis#using-an-env-variable
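Alternatively, you can skip the indirection and point Sidekiq at Redis To Go directly, mirroring the pattern from the first question above. A sketch, assuming the REDISTOGO_URL config var set by the add-on:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDISTOGO_URL"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDISTOGO_URL"] }
end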

Rails: Namespace redis on a per-request basis for a multi-tenancy app

Consider a multi-tenancy rails application. How would I namespace my redis connections on a per-request basis such that each tenant lives in its own namespace?
Multi-tenancy
For multi-tenancy, I'm using the apartment gem. The tenant is determined on each request by reading out the request.host.
# config/initializers/apartment.rb
#
Rails.application.config.middleware.use 'Apartment::Elevators::Generic', lambda { |request|
  Tenant.find_identifier_by_host(request.host)
}
Redis
Redis is used for sidekiq, redis-analytics and, most importantly, rails caching using redis-rails.
# config/initializers/cache.rb
# http://stackoverflow.com/a/38619281/2066546
#
Rails.application.config.cache_store = :redis_store, {
  host: ENV['REDIS_HOST'],
  port: '6379',
  expires_in: 1.week,
  namespace: "#{::STAGE}_cache",
  timeout: 15.0
}
Rails.cache = ActiveSupport::Cache.lookup_store(Rails.application.config.cache_store)
# config/initializers/redis_analytics.rb
#
RedisAnalytics.configure do |configuration|
  configuration.redis_connection = Redis.new(host: ENV['REDIS_HOST'], port: '6379')
  configuration.redis_namespace = "#{::STAGE}_redis_analytics"
end
# config/initializers/sidekiq.rb
#
Sidekiq.configure_server do |config|
  config.redis = { host: ENV['REDIS_HOST'], port: '6379', namespace: "#{::STAGE}_sidekiq", timeout: 15.0 }
end

Sidekiq.configure_client do |config|
  config.redis = { host: ENV['REDIS_HOST'], port: '6379', namespace: "#{::STAGE}_sidekiq", timeout: 15.0 }
end
Thank you very much for any suggestion you might have!
Using namespaces is a terrible way to isolate Redis in an attempt at multi-tenancy. Consider this: you have one single password for the instance. There is no concept of users in Redis.
There is nothing preventing user A from issuing a flushall and wiping out every bit of data for every "tenant". Nothing at all. Nor is there anything preventing user B from issuing a select command to get to other tenants' data.
Redis is single threaded. If customer C issues a sleep command, it blocks the server for everyone. A keys command on a DB with lots of keys will block the entire server until it completes.
Redis is not designed for multi-tenant use. Attempting to shoehorn it into one will result in problems. If you truly need multi-tenant usage, use something else or run individual instances for each tenant.
You could use Redis's database feature for this. After creating the Redis connection for a request, send select $id to Redis, where id is different for each tenant.
Each database has its own keyspace, so no interference is to be expected. By default 16 DBs are allowed, but you can configure as many as you need in redis.conf.
See also http://redis.io/commands/select
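A minimal sketch of that approach, assuming each tenant maps to a small integer index (TENANT_DB_INDEX is a hypothetical tenant-to-integer mapping, not from the question):

# Resolve the tenant for this request, then bind the connection to its own keyspace.
tenant = Tenant.find_identifier_by_host(request.host)
tenant_index = TENANT_DB_INDEX.fetch(tenant)

redis = Redis.new(host: ENV['REDIS_HOST'], port: 6379)
redis.select(tenant_index) # every subsequent command on this connection hits that tenant's keyspace

# Equivalent: redis-rb can select the database at connection time via the :db option.
redis = Redis.new(host: ENV['REDIS_HOST'], port: 6379, db: tenant_index)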

Oracle database connection is not released after it exceeds maximum idle time in Rails app

I have a Rails app using an Nginx HTTP server and a Unicorn app server. I'm getting the following error when the app server doesn't receive any requests for about 15 minutes:
OCIError: ORA-02396: exceeded maximum idle time, please connect again
After a page refresh, the page loads fine.
I'm using rails 4.2.1, ruby-oci8 2.1.0, and active-record-oracle_enhanced-adapter 1.6.0.
I'm still relatively new to web development, but I'm thinking this error occurs when the Oracle connection idles out but the app server doesn't know that the connection is bad.
I've tried setting the reaping_frequency to every 15 minutes, but that didn't fix the problem.
How can I manage the database connections and make sure that these idle connections are dropped? Can I set a timeout to drop database connections before Oracle times out?
this is my config/unicorn.rb
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir

worker_processes 2
preload_app true
timeout 15

listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64

stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stderr.log"
pid "#{shared_dir}/pids/unicorn.pid"

before_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.establish_connection
  end
end
Here is my fix, which isn't that great.
in the application controller:
before_action :refresh_connection

def refresh_connection
  puts Time.now.to_s + ' - refreshing connection'
  ActiveRecord::Base.connection.disconnect!

  if ActiveRecord::Base.establish_connection
    puts Time.now.to_s + ' - new connection established'
  else
    puts Time.now.to_s + ' - new connection cannot be established'
  end
end
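A lighter-weight variant (a sketch, not from the original post) is verify!, which pings the database and reconnects only when the connection has actually gone stale, instead of tearing down a healthy connection on every request:

# In the application controller, as a hypothetical alternative to the fix above
before_action :verify_connection

def verify_connection
  # verify! re-establishes the connection only if it is no longer active
  ActiveRecord::Base.connection.verify!
end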

Why are we out of database connections on Heroku?

We have a Rails app on Heroku with Sidekiq and are running out of database connections.
ActiveRecord::ConnectionTimeoutError: could not obtain a database
connection within 5.000 seconds (waited 5.000 seconds)
Heroku stuff:
Database plan: Standard0 (120 connections)
Web dynos: 2 Standard-2X
Worker dynos: 1 Standard-2X
heroku config:
MAX_THREADS: 5
(DB_POOL not set)
(WEB_CONCURRENCY not set)
Procfile:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
database.yml:
...
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV["DB_POOL"] || ENV['MAX_THREADS'] || 5 %>
puma.rb:
# https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#adding-puma-to-your-application
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['MAX_THREADS'] || 2)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
sidekiq.yml:
---
:concurrency: 25
:queues:
  - [default]
We also have a couple of rake tasks that fire every 10 minutes, and they finish within a second or two.
The problem seems to happen when we do a lot of message processing in sidekiq. We do something like:
get article headlines from a 3rd party web service
insert each headline into the db inside a single transaction
create a message in sidekiq for each headline (worker.perform_async)
each message is processed, hits an endpoint to get the body and updates the body (can take .5 - 3 seconds)
While number 4 is happening we see the connection issue.
My understanding is we are way, way, way below the connection limit with our configuration above, but did we do something incorrectly? Is something just consuming the pool? Any help would be great, thanks.
Sources:
https://devcenter.heroku.com/articles/concurrency-and-database-connections
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
https://github.com/mperham/sidekiq/wiki/Advanced-Options
You are sharing 5 DB connections among 25 Sidekiq threads. Set DB_POOL to 25 or Sidekiq's concurrency to 5.
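To spell out the arithmetic behind that answer (an illustration based on the configuration quoted in the question; the Procfile line is one way to apply the fix, per the Heroku connection docs linked above):

web_threads_per_dyno = 2 * 5 # WEB_CONCURRENCY default * MAX_THREADS, pool of 5 per process
sidekiq_threads      = 25    # :concurrency: in sidekiq.yml
sidekiq_pool         = 5     # database.yml falls back to MAX_THREADS
# => 25 Sidekiq threads share 5 connections and give up after the 5-second checkout timeout.

# One fix: raise the pool for the worker process only, e.g. in the Procfile:
#   worker: DB_POOL=25 bundle exec sidekiq
# database.yml already reads ENV["DB_POOL"], so no code change is needed.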

Possible to avoid ActiveRecord::ConnectionTimeoutError on Heroku?

On Heroku I have a Rails app running with a couple of web dynos as well as one worker dyno. I'm running thousands of worker tasks throughout the day on Sidekiq, however occasionally ActiveRecord::ConnectionTimeoutError is raised (approximately 50 times a day). I've set up my unicorn server as follows:
worker_processes 4
timeout 30
preload_app true

before_fork do |server, worker|
  # As suggested here: https://devcenter.heroku.com/articles/rails-unicorn
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    config = Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || 10
    ActiveRecord::Base.establish_connection(config)
  end

  Sidekiq.configure_client do |config|
    config.redis = { :size => 1 }
  end

  Sidekiq.configure_server do |config|
    config = Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || 10
    ActiveRecord::Base.establish_connection(config)
  end
end
On Heroku I've set the DB_POOL config variable to 2, as recommended by Heroku. Should these errors be happening at all? It seems odd that it would be impossible to avoid such errors, no? What would you suggest?
A sidekiq server (the process running on your server that is actually performing the delayed tasks) will by default dial up to 25 threads to process work off its queue. Each of these threads could be requesting a connection to your primary database through ActiveRecord if your tasks require it.
If you only have a connection pool of 5 connections, but you have 25 threads trying to connect, after 5 seconds the threads will just give up if they can't get an available connection from the pool and you'll get a connection time out error.
Setting the pool size for your Sidekiq server to something closer to your concurrency level (set with the -c flag when you start the process) will help alleviate this issue at the cost of opening many more connections to your database. If you are on Heroku and are using Postgres for example, some of their plans are limited to 20, whereas others have a connection limit of 500 (source).
If you are running a multi-process server environment like Unicorn, you also need to monitor the number of connections each forked process makes as well. If you have 4 unicorn processes, and a default connection pool size of 5, your unicorn environment at any given time could have 20 live connections. You can read more about that on Heroku's docs. Note also that the DB pool size doesn’t mean that each dyno will now have that many open connections, but only that if a new connection is needed it will be created until a maximum of that many have been created.
With that said, here is what I do.
# config/initializers/unicorn.rb
if ENV['RACK_ENV'] == 'development'
  worker_processes 1
  listen "#{ENV['BOXEN_SOCKET_DIR']}/rails_app"
  timeout 120
else
  worker_processes Integer(ENV["WEB_CONCURRENCY"] || 2)
  timeout 29
end

# The timeout mechanism in Unicorn is an extreme solution that should be avoided whenever possible.
# It will help catch bugs in your application where and when your application forgets to use timeouts,
# but it is expensive as it kills and respawns a worker process.
# see http://unicorn.bogomips.org/Application_Timeouts.html
# Heroku recommends a timeout of 15 seconds. With a 15 second timeout, the master process will send a
# SIGKILL to the worker process if processing a request takes longer than 15 seconds. This will
# generate an H13 error code and you'll see it in your logs. Note, this will not generate any stacktraces
# to assist in debugging. Using Rack::Timeout, we can get a stacktrace in the logs that can be used for
# future debugging, so we set that value to something less than this one.
preload_app true # for new relic

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  Rails.logger.info("Done forking unicorn processes")

  # https://devcenter.heroku.com/articles/concurrency-and-database-connections
  if defined?(ActiveRecord::Base)
    db_pool_size = if ENV["DB_POOL"]
                     ENV["DB_POOL"]
                   else
                     ENV["WEB_CONCURRENCY"] || 2
                   end

    config = Rails.application.config.database_configuration[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = db_pool_size
    ActiveRecord::Base.establish_connection(config)

    # Turning synchronous_commit off can be a useful alternative when performance
    # is more important than exact certainty about the durability of a transaction
    ActiveRecord::Base.connection.execute "update pg_settings set setting='off' where name = 'synchronous_commit';"

    Rails.logger.info("Connection pool size for unicorn is now: #{ActiveRecord::Base.connection.pool.instance_variable_get('@size')}")
  end
end
And for sidekiq:
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  sidekiq_pool = ENV['SIDEKIQ_DB_POOL'] || 20

  if defined?(ActiveRecord::Base)
    Rails.logger.debug("Setting custom connection pool size of #{sidekiq_pool} for Sidekiq Server")

    db_config = Rails.application.config.database_configuration[Rails.env]
    db_config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    db_config['pool'] = sidekiq_pool
    ActiveRecord::Base.establish_connection(db_config)

    Rails.logger.info("Connection pool size for Sidekiq Server is now: #{ActiveRecord::Base.connection.pool.instance_variable_get('@size')}")
  end
end
If all goes well, when you fire up your processes you'll see something like this in your log:
Setting custom connection pool size of 10 for Sidekiq Server
Connection pool size for Sidekiq Server is now: 20
Done forking unicorn processes
(1.4ms) update pg_settings set setting='off' where name = 'synchronous_commit';
Connection pool size for unicorn is now: 2
Sources:
https://devcenter.heroku.com/articles/concurrency-and-database-connections#connection-pool
https://github.com/mperham/sidekiq/issues/503
https://github.com/mperham/sidekiq/wiki/Advanced-Options
For the Sidekiq server config it is recommended to set the db_pool number equal to your concurrency, which I assume you have set to greater than 2.
Assuming that setting your db_pool is working in unicorn.rb (I've not had experience doing it this way), a potential solution is to set another environment variable to control the Sidekiq db_pool directly.
If you had a Sidekiq concurrency of 20, then something like:
Config var: SIDEKIQ_DB_POOL = 20
Sidekiq.configure_server do |config|
  db_config = Rails.application.config.database_configuration[Rails.env]
  db_config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
  db_config['pool'] = ENV['SIDEKIQ_DB_POOL'] || 10
  ActiveRecord::Base.establish_connection(db_config)
end
This ensures you have two separate pools, each optimised for its process: DB_POOL for your web workers and SIDEKIQ_DB_POOL for your background workers.
