Kill Puma thread once request has been handled - ruby-on-rails

With Puma, the number of threads can be raised to handle multiple requests at the same time. But on Heroku, the number of connections to the Postgres database is limited.
To handle more requests we can increase the number of dynos, where each dyno runs, let's say, the default of 0:16 threads. In that case, under load each dyno can open up to 16 connections to the database; with a 20-connection Postgres plan, for example, two such dynos can already exceed the limit.
With Rails/ActiveRecord we can limit the number of database connections per Rails worker process using this configuration:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!

  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || ENV['MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
However, with the database connection limit in place, the limit is hit as soon as the number of dynos grows.
Is there any way to kill a thread and close its database connection as soon as the request has been served?
I've tried using PgBouncer as a buildpack, but there are issues with prepared statements.
I'm currently on Rails 4.0.0 using Puma 2.7.1.
Is there some event hook in Puma, like the one below, that we can configure to run when a request is complete?
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
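Or would a small Rack middleware be the right place for this? A rough sketch of what I have in mind (the class name is made up, and I'm not sure clear_active_connections! actually closes the underlying Postgres connection rather than just checking it back into the pool):
class ReleaseDbConnection
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  ensure
    # Return this thread's checked-out connection to the pool once the request is done
    ActiveRecord::Base.clear_active_connections!
  end
end

# config/application.rb
# config.middleware.use ReleaseDbConnection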

Try using Puma's cluster mode if you are on MRI. I suggest setting it up with 4 workers:
puma -w 4
If you use JRuby, you should instead specify around 20 threads, but no more than the number of connections your database allows:
puma -t 20:20
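For the MRI cluster-mode setup on Heroku you typically also re-establish the ActiveRecord connection in each worker via on_worker_boot. A minimal config/puma.rb sketch along those lines (the worker and thread counts are only examples):
# config/puma.rb -- minimal sketch; counts are examples only
workers Integer(ENV['WEB_CONCURRENCY'] || 4)
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

on_worker_boot do
  # Re-establish the ActiveRecord connection in each forked worker
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end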
More info:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#thread-safety
https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections

Related

Recurrent h12 timeouts on Heroku

We are getting recurring timeout errors on Heroku. When we look into our logs for clues about the reason, we cannot find a good explanation. These timeouts generally occur not at the busiest times, but at seemingly random times.
We are using Puma as the web server and Sidekiq as the background job processor.
Our configuration for Puma is MAX_THREADS=60, WEB_CONCURRENCY=1.
Our configuration for Sidekiq is WORKER_CONCURRENCY=8.
We are using single Standard-2X dynos for the worker and the web process.
We are using the standard-0 PostgreSQL plan with a 120-connection limit. We have DB_POOL=60 set so the pool matches the thread count. With these settings, we are not using all of the connections.
puma.rb
thread_count = ENV.fetch("MAX_THREADS") { 5 }.to_i
threads thread_count, thread_count

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RACK_ENV") { "development" }
workers     ENV.fetch("WEB_CONCURRENCY") { 1 }

preload_app!

before_fork do
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end

plugin :tmp_restart
As you can see from our metrics, the timeouts occur at fairly random times, not at the busiest times when we receive the most requests.
What are some potential reasons for these timeouts? Thank you for your help.

Puma Rails 5 binding.pry only available for 60 seconds before timeout

Puma times out my request when I'm using binding.pry. In my controller
def new
  require 'pry'
  binding.pry
end
I then make a request that hits the controller and enter the pry session. After 60 seconds Puma (I assume) times out my request, restarts a worker and subsequently blows up my debugging session.
[1] pry(#<Agent::ClientsController>)> [3522] ! Terminating timed out worker: 3566
[3522] - Worker 0 (pid: 4171) booted, phase: 0
I generated this app with Suspenders, if that matters. How do I extend my debugging session in Rails 5?
How about this?
# config/puma.rb
...
environment ENV['RACK_ENV'] || 'development'
...
if ENV['RACK_ENV'] == 'development'
  worker_timeout 3600
end
Edit (Rails 5.1.5):
Because ENV['RACK_ENV'] was empty, I did the following:
# config/puma.rb
...
if ENV.fetch('RAILS_ENV') == 'development'
  puts "LOGGER: development => worker_timeout 3600"
  worker_timeout 3600
end
You can create a configuration file and set the timeout value in there (for all requests, not just ones involved in debugging). I'd recommend making a dev-specific one, and referencing that when running the server locally (and not setting some big timeout value for production).
In your rails application, create a file like /config/dev_puma_config.rb and in it put:
#!/usr/bin/env puma
worker_timeout 3600
Then when you start your server, reference that file with a -C like this:
bundle exec puma -t 1:1 -w 1 -p 3000 -e development -C config/dev_puma_config.rb
As a bit of background info on that worker_timeout setting, here's what the puma config says about it:
Verifies that all workers have checked in to the master process within
the given timeout. If not the worker process will be restarted. This
is not a request timeout, it is to protect against a hung or dead
process. Setting this value will not protect against slow requests.
Default value is 60 seconds.

Sidekiq not closing postgresql connections

My Rails 4.1 Sidekiq application is running into an issue where the ActiveRecord connections to PostgreSQL are not being closed.
Since the connections are not being closed, Postgres does not release the memory, and after a while I run out of memory on my Postgres server (I have 15 GB of RAM and a fairly small database).
The other issue is that after a while my Sidekiq workers start to take about 10x as long as they should to run, and eventually the workers just get stuck and stop processing any new jobs.
I have tried wrapping my worker code in the following, to no effect:
def with_connection(&block)
  ActiveRecord::Base.connection_pool.with_connection do
    yield block
  end
ensure
  ActiveRecord::Base.clear_active_connections!
  ActiveRecord::Base.connection.close
end
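For reference, the workers call this helper roughly like so (the worker class and the query are only illustrative, and with_connection is assumed to be available on the worker):
class ExampleWorker
  include Sidekiq::Worker

  def perform(record_id)
    with_connection do
      # illustrative work; the real jobs are heavier than this
      MyModel.find(record_id).touch
    end
  end
end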
I have also added this to the after_fork block in my unicorn.rb file:
if defined?(ActiveRecord::Base)
  config = ActiveRecord::Base.configurations[Rails.env] || Rails.application.config.database_configuration[Rails.env]
  config['reaping_frequency'] = 10 # seconds
  ActiveRecord::Base.establish_connection(config)
end
Is there anything I can do to get these connections to close, or at least release the postgres memory?

Rails app on Heroku with Puma

We have a Rails app running on a single Heroku instance configured with Puma. We are having performance issues that are causing H14 errors (session timeouts).
We run anywhere from 2-5 Web dynos depending on traffic. We run 3 Worker dynos for our background processes. I have increased our Web dynos from 1x (512mb) to 2x (1gb) and removed our logging service Papertrail, which seemed to be causing memory leaks. This has helped a little bit.
We are receiving anywhere from 30-60 RPMs depending on the time of day.
Here is our Puma config:
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 8), Integer(ENV['MAX_THREADS'] || 12)

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # worker specific setup
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['MAX_THREADS'] || 12
    ActiveRecord::Base.establish_connection(config)
  end
end
We have implemented the rack-timeout gem recommended by Heroku with its default timeout of 15 seconds, but this has not helped the situation at all. It may even have made things worse, so we removed it.
Does anyone know of the optimal configuration for an app like ours with the traffic metrics described above? Any config suggestions would be much appreciated!

Rails, Heroku, Unicorn & Resque - how to choose the amount of web workers / resque workers?

I've just switched to using Unicorn on Heroku. I'm also going to switch to resque from delayed_job and use the setup described at http://bugsplat.info/2011-11-27-concurrency-on-heroku-cedar.html
What I don't understand from this is how config/unicorn.rb:
worker_processes 3
timeout 30

@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
translates into:
"This will actually result in six processes in each web dyno: 1 unicorn master, 3 unicorn web workers, 1 resque worker, 1 resque child worker when it actually is processing a job"
How many workers will actually process background jobs? 1 or 2?
Let's say I wanted to increase the number of Resque workers - what would I change?
I think if you run that block, you have your unicorn master already running, plus 3 web workers that you specify at the top of the file, and then the block below launches one Resque worker if it's not already started.
I'm guessing that Resque launches a child worker by itself when it actually performs work.
It would appear that if you wanted another Resque worker, you could just do
worker_processes 3
timeout 30

@resque_pid = nil
@resque_pid2 = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
  @resque_pid2 ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
In my experience with Resque, it's as simple as launching another process as specified above. The only uncertainty I have is with Heroku and how it chooses to deal with giving you more workers.
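If you wanted the worker count to be configurable instead of duplicating instance variables, a rough sketch could look like this (RESQUE_WORKERS is a made-up env var name):
# config/unicorn.rb -- sketch; RESQUE_WORKERS is an assumed env var
worker_processes 3
timeout 30

@resque_pids = nil

before_fork do |server, worker|
  # Spawn N resque workers once; ||= keeps later forks from spawning duplicates
  @resque_pids ||= Integer(ENV['RESQUE_WORKERS'] || 1).times.map do
    spawn("bundle exec rake resque:work QUEUES=scrape,geocode,distance,mailer")
  end
end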
