Sidekiq not closing PostgreSQL connections - ruby-on-rails

My Rails 4.1 application running Sidekiq has an issue with the ActiveRecord connections to PostgreSQL not being closed.
Since the connections are not closed, Postgres does not release the memory, and after a while I run out of memory on my Postgres server (I have 15 GB of RAM and a fairly small database).
The other issue is that after a while my Sidekiq workers start to take about 10x as long as they should to run, and eventually the workers just get stuck and stop processing new jobs.
I have tried wrapping my worker code in the following, to no effect:
def with_connection
  ActiveRecord::Base.connection_pool.with_connection do
    yield
  end
ensure
  ActiveRecord::Base.clear_active_connections!
  ActiveRecord::Base.connection.close
end
I have also added this to the after_fork block in my unicorn.rb file:
if defined?(ActiveRecord::Base)
  config = ActiveRecord::Base.configurations[Rails.env] ||
           Rails.application.config.database_configuration[Rails.env]
  config['reaping_frequency'] = 10 # seconds
  ActiveRecord::Base.establish_connection(config)
end
Is there anything I can do to get these connections to close, or at least release the Postgres memory?
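For what it's worth, Sidekiq checks connections out per job thread, so one place to release them is a server middleware. A minimal sketch, not from the original post (the ClearActiveConnections class name is mine):

class ClearActiveConnections
  # Runs around every job; the ensure guarantees the connection is
  # returned to the pool even when the job raises.
  def call(worker, job, queue)
    yield
  ensure
    ActiveRecord::Base.clear_active_connections!
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add ClearActiveConnections
  end
end

Note this returns connections to ActiveRecord's pool; whether Postgres releases memory per backend is a separate server-side question.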

Related

Heroku: error for PostGIS column in web dyno but not in one-off or worker dynos

I've got the beginnings of a spatial app (PostGIS + Rails + RGeo) with a single line_string column on one of my models. On my local machine everything is good; the problem is on Heroku web dynos, where Rails/ActiveRecord doesn't recognise the RGeo/PostGIS line_string type.
Each time after deploying, the first time the model is loaded I see the following in the logs:
app[web.1]: unknown OID 17059: failed to recognize type of 'line_string'. It will be treated as String.
Then if I add some logging to STDOUT I can see:
mymodel.line_string.class # => String
RGeo::Feature::Geometry.check_type(mymodel.line_string) # => false
However, if I run a one-off dyno with rails console (heroku run rails c production) everything is good. I see the output I expect:
irb(main):001:0> mymodel.line_string.class
=> RGeo::Geographic::SphericalLineStringImpl
irb(main):002:0> RGeo::Feature::Geometry.check_type(mymodel.line_string)
=> true
It's also ok under worker dynos.
What would cause the unknown OID error? Why would one-off and worker dynos be able to work with this column perfectly but the web dyno not?
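One quick check, my own suggestion rather than anything from this post: log which adapter each process actually booted with, e.g. from an initializer:

# Prints e.g. "PostgreSQL" vs "PostGIS" depending on which adapter
# this particular process was configured with
Rails.logger.info "AR adapter: #{ActiveRecord::Base.connection.adapter_name}"

If the web dyno reports the plain PostgreSQL adapter, the PostGIS types were never registered there.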
Puma was starting without the PostGIS configuration I had added in application.rb. ActiveRecord started working with PostGIS once I added the following to config/puma.rb:
on_worker_boot do
  if defined?(ActiveRecord::Base)
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['adapter'] = 'postgis'
    ActiveRecord::Base.establish_connection(config)
  end
end

on_restart do
  if defined?(ActiveRecord::Base)
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['adapter'] = 'postgis'
    ActiveRecord::Base.establish_connection(config)
  end
end
The other dyno types were just using application.rb so they were always configured correctly.
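Since both hooks run identical code, they could be collapsed; a sketch of one way to do it in config/puma.rb (the reconnect_with_postgis name is hypothetical):

# A lenient proc ignores whatever arguments Puma passes to each hook
reconnect_with_postgis = proc do
  if defined?(ActiveRecord::Base)
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['adapter'] = 'postgis'
    ActiveRecord::Base.establish_connection(config)
  end
end

on_worker_boot(&reconnect_with_postgis)
on_restart(&reconnect_with_postgis)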

Kill puma thread once request has been handled rails

With puma, the number of threads can be increased to handle multiple requests at the same time. But on Heroku, the number of database connections to Postgres is limited.
To handle more requests we can increase the number of dynos, where each dyno has, let's say, the default of 0:16 threads. Under load, each dyno can then make 16 connections to the database.
With Rails ActiveRecord we can limit the number of database connections per Rails worker process using this configuration:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!

  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || ENV['MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
However, with the db connection limit in place, the limit is hit as soon as the number of dynos increases.
Is there any way to kill a thread and close its database connection as soon as the request has been served?
I've tried using pgbouncer as a buildpack but there are issues with prepared statements.
I'm currently on Rails 4.0.0 using puma 2.7.1.
Is there some event hook in puma which we can configure, like this one, for when a request is complete?
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
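As far as I know Puma has no per-request hook, but a Rack middleware can get the same effect; a rough sketch (ReleaseConnections is a hypothetical name):

class ReleaseConnections
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  ensure
    # Return this thread's connection to AR's pool after the request.
    # Simplified: Rails' own ActiveRecord::ConnectionAdapters::ConnectionManagement
    # middleware does this too, deferring it until the response body closes.
    ActiveRecord::Base.clear_active_connections!
  end
end

This only returns the connection to ActiveRecord's pool; it does not close the underlying Postgres connection, so it won't lower the connection count on Heroku's side.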
If you use MRI, try puma's clustered mode. I suggest setting up 4 workers:
puma -w 4
If you use JRuby, specify around 20 threads, but no more than the number of allowed connections to the DB:
puma -t 20:20
More info:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#thread-safety
https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections
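The arithmetic from the second article, roughly: total connections used is dynos x workers x threads, and each process's AR pool should be at least as large as its thread count. A sketch in config/puma.rb terms (the MAX_THREADS / WEB_CONCURRENCY names follow Heroku's docs, not this post):

# Keep workers * threads across all dynos under your Postgres plan's
# connection limit; size DB_POOL to cover one process's threads.
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count
workers Integer(ENV['WEB_CONCURRENCY'] || 2)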

Reconnect with Redis after Puma fork

I am using a global variable in a Rails application to store a Redis client from the redis gem. In config/initializers/redis.rb, I have:
$redis = Redis.new(host: "localhost", port: 6379)
Then in application code, I use $redis to work the data in the Redis store.
I also use puma as the web server in the production environment, and capistrano to deploy code. During a deploy, capistrano restarts puma.
Every time I start or restart the puma web servers, I always get an "Internal Server Error" when I first use $redis to access data in the Redis store. I saw errors like Redis::InheritedError (Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.)
Searching around with google and stackoverflow led me to think that I needed to reconnect to Redis after puma forks child processes. So, I added in my config/puma.rb:
on_worker_boot do
  $redis.ping
end
But I was still getting the "Internal Server Error" caused by Redis::InheritedError (Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.).
I saw this post http://qiita.com/yaotti/items/18433802bf1720fc0c53. I then tried adding in config/puma.rb:
on_restart do
  $redis.quit
end
That did not work.
I tried calling $redis.ping in config/initializers/redis.rb right after Redis.new. That did not work either.
I got this error whether puma was started fresh (with no puma processes running) or restarted while an instance was already running.
Refreshing the page gets me past the error, but I want to get rid of it even on the first attempt to use $redis. I suspect I am not using the redis gem or configuring its reconnection correctly. Could someone tell me:
Is that the right way to use redis gem in a rails application?
How should the redis connection be reconnected in puma?
The puma gem documentation says, "You should place code to close global log files, redis connections, etc in this block so that their file descriptors don't leak into the restarted process. Failure to do so will result in slowly running out of descriptors and eventually obscure crashes as the server is restarted many times." It was talking about the on_restart block, but it did not say how that should be done.
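The usual fix, as far as I understand it, is to build a fresh client in each forked worker rather than pinging the one inherited from the master; a sketch in config/puma.rb, assuming the initializer shown above:

on_worker_boot do
  # Replace the client inherited from the master with a new connection
  $redis = Redis.new(host: "localhost", port: 6379)
end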
I was able to fix the error with a monkeypatch. It changes the behaviour so the client just reconnects instead of raising Redis::InheritedError:
###### MONKEYPATCH redis-rb
# https://github.com/redis/redis-rb/issues/364
# taken from https://github.com/redis/redis-rb/pull/389/files#diff-597c124889a64c18744b52ef9687c572R314
class Redis
  class Client
    def ensure_connected
      tries = 0
      begin
        if connected?
          if Process.pid != @pid
            reconnect
          end
        else
          connect
        end
        tries += 1
        yield
      rescue ConnectionError
        disconnect
        if tries < 2 && @reconnect
          retry
        else
          raise
        end
      rescue Exception
        disconnect
        raise
      end
    end
  end
end
## MONKEYPATCH end
I'm running a Rails application with IdentityCache, using Puma in clustered mode with workers=4.
It is essential that the reconnects happen in the on_worker_boot callback.
I have to reconnect both Rails.cache and IdentityCache to avoid restart errors. Here's what I got working:
puma-config.rb
on_worker_boot do
  puts 'On worker boot...'
  puts "Reconnecting Rails.cache"
  Rails.cache.reconnect

  begin
    puts "Reconnecting IdentityCache"
    IdentityCache.cache.cache_backend.reconnect
  rescue Exception => e
    puts "Error trying to reconnect identity_cache_store: #{e.message}"
  end
end
Then I see the following in my logs, showing me the proof that it all works.
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
On worker boot...
Reconnecting Rails.cache
Reconnecting IdentityCache
[7109] - Worker 7115 booted, phase: 0
[7109] - Worker 7123 booted, phase: 0
[7109] - Worker 7119 booted, phase: 0
[7109] - Worker 7127 booted, phase: 0
Sure enough, the first request problems that used to be there after server restart are gone. QED.
Upgrade redis-rb to 3.1.0 or above.
Details: https://github.com/redis/redis-rb/pull/414/files#
monkey patch
# https://github.com/redis/redis-rb/pull/414/files#diff-5bc007010e6c2e0aa70b64d6f87985c20986ee1b2882b63a89b52659ee9c91f8
class Redis
  class Client
    def ensure_connected
      tries = 0
      begin
        if connected?
          if Process.pid != @pid
            raise InheritedError,
                  "Tried to use a connection from a child process without reconnecting. " +
                  "You need to reconnect to Redis after forking."
          end
        else
          connect
        end
        tries += 1
        yield
      rescue ConnectionError, InheritedError
        disconnect
        if tries < 2 && @reconnect
          retry
        else
          raise
        end
      rescue Exception
        disconnect
        raise
      end
    end
  end
end
Here's what I did:
Redis.current.client.reconnect
$redis = Redis.current
($redis is my global instance of a redis client)
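The answer doesn't say where this runs; presumably in a fork hook. A sketch, assuming config/puma.rb:

on_worker_boot do
  # Force the inherited client to open a fresh socket in this worker
  Redis.current.client.reconnect
  $redis = Redis.current
end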
I've put this into my config/puma.rb file; it works for me.
on_restart do
  $redis = DiscourseRedis.new
  Discourse::Application.config.cache_store.reconnect
end

on_worker_boot do
  $redis = DiscourseRedis.new
  Discourse::Application.config.cache_store.reconnect
end

Error R12 (Exit timeout) using Heroku's recommended Unicorn config

My Unicorn config (copied from Heroku's docs):
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 30
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
But every time a dyno is restarted, we get this:
heroku web.5 - - Error R12 (Exit timeout) -> At least one process failed to exit within 10 seconds of SIGTERM
Ruby 2.0, Rails 3.2, Unicorn 4.6.3
We've had issues like this with Unicorn for some time... we also get seemingly random timeout errors, even though we never see much load and have 4 dynos with 4 workers each (we never have any request queuing). We've had zero luck getting rid of these errors, even with help from Heroku. I get the feeling even they aren't 100% confident in the optimal settings for Unicorn on Heroku.
We recently switched to Puma and so far so good: much better performance and no weird timeouts yet. One of the other reasons we switched is that I suspect some of our random timeouts came from "slow clients"... Unicorn isn't designed to handle slow clients.
I will let you know if we see continued success with Puma, but so far so good. The switch is pretty painless, assuming your app is thread-safe.
Here are the puma settings we are using. We are using "Clustered Mode".
Procfile:
web: bundle exec puma -p $PORT -C ./config/puma.rb
puma.rb:
environment ENV['RACK_ENV']
threads Integer(ENV["PUMA_THREADS"] || 5), Integer(ENV["PUMA_THREADS"] || 5)
workers Integer(ENV["WEB_CONCURRENCY"] || 4)
preload_app!

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
We currently have WEB_CONCURRENCY set to 4 and PUMA_THREADS set to 5, so each dyno can hold up to 20 database connections.
We aren't using an initializer for DB_POOL, just the default DB_POOL setting of 5 (hence the 5 threads, so every thread can get a connection from the pool).
The only reason we are using WEB_CONCURRENCY as our environment variable name is so that log2viz reports the correct number of workers. I would rather call it PUMA_WORKERS but whatever, not a huge deal.
Hope this helps... again, will let you know if we see any issues with Puma.
I hate to add another answer, especially one this simple, but ultimately what fixed this problem for us was removing the rack-timeout gem. I realize this is probably not best practice, but I'm curious whether there is some conflict between rack-timeout and Unicorn and/or Puma (which is odd, because Heroku recommends rack-timeout for use with Unicorn).
Anyway, Puma is working great for us, but we did still see some random inexplicable timeouts even after the Puma upgrade... removing rack-timeout got rid of the issue completely. Obviously we still get timeouts, but only for code we haven't optimized or when we are getting heavy usage (basically when you would expect to see timeouts). Thus I would blame this issue on rack-timeout and not on Unicorn, thus contradicting my previous answer :)
Hope this helps. If anyone else wants to poke holes in my theory, feel free!

Running delayed jobs on Heroku for free

Is it possible to run delayed jobs on Heroku for free?
I'm trying to use delayed_job_active_record on Heroku. However, it requires a worker dyno, and it would cost money if I turned this dyno on full time.
I thought that using Unicorn and making its workers run delayed jobs instead of the Heroku worker would cost nothing, while still successfully running all the jobs. However, Unicorn workers do not seem to start "working" automatically.
I have the following in my Procfile:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
and the following in my unicorn.rb:
worker_processes 3
timeout 30
preload_app true

before_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end

  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis.quit
    Rails.logger.info('Disconnected from Redis')
  end

  sleep 1
end

after_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end

  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis = ENV['REDIS_URI']
    Rails.logger.info('Connected to Redis')
  end
end
Delayed jobs only seem to work when I scale the Heroku worker from 0 to 1.
Again, is it not possible to use Unicorn workers instead of the Heroku worker to run the delayed jobs?
Do I have to use a gem like workless to run delayed jobs on Heroku for free? (reference)
Splitting the process like that can cause problems. Your best bet is not to try to get it 'free' but to use something like http://hirefireapp.com/, which will start up a worker when there are jobs to perform, reducing the cost significantly compared to running a worker 24x7.
Also note that Heroku will only ever autostart a 'web' process for you; starting other named processes is a manual task.
You can use Heroku Scheduler to run the jobs using the command
rake jobs:workoff
This way the jobs can run in your web dyno. According to the Delayed::Job docs, this command runs all available jobs and then exits.
You can configure the scheduler to run this command every 10 minutes, for example; it has no noticeable effect on the app's performance when no jobs are queued. Another good idea is to schedule it to run daily at a time with lower access rates.
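For a sense of why this suits Scheduler, the jobs:workoff rake task boils down to something like the following (a sketch, not the exact task source):

# Drain the queue: run the jobs that are available now, then return
# instead of polling forever like jobs:work does
Delayed::Worker.new.work_off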
Strictly speaking, there is no direct way to get this for free, but there are lots of workarounds you can use to get free background jobs. One of them is http://nofail.de/2011/07/heroku-cedar-background-jobs-for-free/.
Also, if you plan to use Resque, which is an excellent choice for background jobs, you will need Redis, which comes free with the Nano plan: https://addons.heroku.com/redistogo. See https://devcenter.heroku.com/articles/queuing-ruby-resque.
The simple solution is to buy one dyno for the worker, while your web dyno stays free.
Let me know if you need more help.
Thanks
Consider using the Workless gem: https://github.com/lostboy/workless
If you only have one web worker, Heroku will put it to sleep if it's inactive for an hour.
Also, Heroku reboots all dynos at least once a day.
This makes it hard to run an in-Ruby scheduler; it has to at least use persistent storage (e.g. a database).
