We have a Rails app running on Heroku with Puma. We are having performance issues that are causing H12 errors (request timeouts).
We run anywhere from 2-5 web dynos depending on traffic, plus 3 worker dynos for our background processes. I have upgraded our web dynos from 1X (512 MB) to 2X (1 GB) and removed our logging service, Papertrail, which seemed to be leaking memory. This has helped a little.
We receive anywhere from 30-60 requests per minute (RPM) depending on the time of day.
Here is our Puma config:
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 8), Integer(ENV['MAX_THREADS'] || 12)

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker-specific setup
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = Integer(ENV['MAX_THREADS'] || 12)
    ActiveRecord::Base.establish_connection(config)
  end
end
We implemented the rack-timeout gem with its default timeout of 15 seconds, but this has not helped at all; if anything it made things worse, so we removed it.
Does anyone know of the optimal configuration for an app like ours with the traffic metrics described above? Any config suggestions would be much appreciated!
Related
We are getting recurring timeout errors on Heroku. When we look into our logs for clues about the cause, we cannot find a good reason. These timeouts generally occur not at the busiest times, but at seemingly random times.
We are using Puma as the web server and Sidekiq as the background job processor.
Our Puma configuration is MAX_THREADS=60 and WEB_CONCURRENCY=1.
Our Sidekiq configuration is WORKER_CONCURRENCY=8.
We are running single Standard-2X dynos for both web and worker.
We are on the Standard-0 PostgreSQL plan with a 120-connection limit, and we set DB_POOL=60 to match the thread count. With these settings, we are not using all available connections.
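For reference, DB_POOL typically reaches ActiveRecord through config/database.yml; a minimal sketch of that wiring, assuming a conventional Heroku setup (the fallback values here are illustrative):

```yaml
# config/database.yml (sketch) -- size the ActiveRecord pool from DB_POOL,
# falling back to the Puma thread count, then to 5
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV.fetch("DB_POOL") { ENV.fetch("MAX_THREADS", 5) } %>
```

With WEB_CONCURRENCY=1 and 60 threads, one web dyno holds at most 60 connections and 8 Sidekiq threads add 8 more, consistent with the observation that the 120-connection limit is not being reached.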
puma.rb
thread_count = ENV.fetch("MAX_THREADS") { 5 }.to_i
threads thread_count, thread_count

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RACK_ENV") { "development" }
workers     ENV.fetch("WEB_CONCURRENCY") { 1 }

preload_app!

before_fork do
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end

plugin :tmp_restart
As you can see from our metrics, the timeouts occur at seemingly random times, not at the busiest times when we receive the most requests.
What could be the potential reasons for these timeouts? Thank you for your help.
I am deploying my Rails 5.2 app on Elastic Beanstalk, with Puma as the application server and Nginx as the default reverse proxy provided by Elastic Beanstalk.
I am facing a race condition issue. After checking the container instance in more detail, I found this:
#example /opt/elasticbeanstalk/support/conf/pumaconf.rb
directory '/var/app/current'
threads 8, 32
workers %x(grep -c processor /proc/cpuinfo)
bind 'unix:///var/run/puma/my_app.sock'
pidfile '/var/run/puma/puma.pid'
stdout_redirect '/var/log/puma/puma.log', '/var/log/puma/puma.log', true
daemonize false
As seen here, the number of workers is equal to the number of CPU cores.
However, on Heroku we can do this:
# config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
# Worker specific setup for Rails 4.1+
# See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
ActiveRecord::Base.establish_connection
end
How can I lower the number of threads and increase the number of workers on Elastic Beanstalk, taking into account that I have a load balancer enabled and the config above is managed by Elastic Beanstalk?
On Heroku I can manage this with puma.rb; on Elastic Beanstalk I don't see any approach other than changing the file
/opt/elasticbeanstalk/support/conf/pumaconf.rb
manually. Manual modification will cause issues when the number of instances scales up or down.
I'm not sure if you've resolved your issue. I had a similar one and resolved it using .ebextensions.
Create a new pumaconf.rb file in the config directory of your code. Then, in the .ebextensions directory, create a file that copies your new pumaconf.rb over the default pumaconf.rb.
If you go this route, use this path for your new file in your .ebextensions code:
/var/app/ondeck/config/pumaconf.rb
and not
/var/app/current/config/pumaconf.rb
because the latter won't copy out your latest pumaconf.rb:
cp /var/app/ondeck/config/pumaconf.rb /opt/elasticbeanstalk/support/conf/pumaconf.rb
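Putting that together, a minimal .ebextensions sketch might look like the following (the file name 01_puma.config and the command key are illustrative; container_commands run after the new app version is staged in /var/app/ondeck but before it is promoted to /var/app/current):

```yaml
# .ebextensions/01_puma.config (illustrative file name)
container_commands:
  01_copy_puma_config:
    command: "cp /var/app/ondeck/config/pumaconf.rb /opt/elasticbeanstalk/support/conf/pumaconf.rb"
```

Because this runs on every deploy, and on instances launched by auto scaling, each instance picks up the custom config automatically.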
My question:
Is this exemplary of memory bloat, memory leak, or just bad server configuration?
First, I will add a screenshot of memory usage
As you can see, I have been using swap memory.
Also, memory plateaus and then climbs after I set up my Puma server (config/puma.rb) according to the Heroku documentation.
I am running a hobby 1X dyno (512 MB) with 0 workers.
My WEB_CONCURRENCY variable is set to 1.
My RAILS_MAX_THREADS is also set to 1.
MIN_THREADS is also set to 1.
Here is my config/puma.rb file
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker-specific setup for Rails 4.1+
  ActiveRecord::Base.establish_connection
end
I am using the derailed gem to measure memory use from my gems.
I am using rack-mini-profiler and memory_profiler to measure on a per-page basis.
After allowing the app to run, here are the results:
As you can see, the app is not going over its limit. If anyone has suggestions that make sense, please feel free to answer the question.
The dyno and Puma setup mentioned above produced this report.
We are now only using swap memory occasionally, never more than a few MB and only occasionally hitting 23 MB. The app uses a lot of gems, and you can see that we are staying under the 512 MB limit.
I used the following documentation from Heroku:
To get your puma server configured properly
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
For R14 memory errors
https://devcenter.heroku.com/articles/ruby-memory-use
Puma times out my request when I'm using binding.pry in my controller:

def new
  require 'pry'
  binding.pry
end
I then make a request that hits the controller and enter the pry session. After 60 seconds, Puma times out my request, restarts a worker, and subsequently blows up my debugging session:
[1] pry(#<Agent::ClientsController>)> [3522] ! Terminating timed out worker: 3566
[3522] - Worker 0 (pid: 4171) booted, phase: 0
I generated this app with Suspenders, if that matters. How do I extend my debugging session in Rails 5?
How about this?
# config/puma.rb
...
environment ENV['RACK_ENV'] || 'development'
...
if ENV['RACK_ENV'] == 'development'
  worker_timeout 3600
end
Edit (Rails 5.1.5):
Because ENV['RACK_ENV'] was empty, I did the following:
# config/puma.rb
...
if ENV.fetch('RAILS_ENV') == 'development'
  puts "LOGGER: development => worker_timeout 3600"
  worker_timeout 3600
end
You can create a configuration file and set the timeout value in there (for all requests, not just ones involved in debugging). I'd recommend making a dev-specific one, and referencing that when running the server locally (and not setting some big timeout value for production).
In your rails application, create a file like /config/dev_puma_config.rb and in it put:
#!/usr/bin/env puma
worker_timeout 3600
Then when you start your server, reference that file with a -C like this:
bundle exec puma -t 1:1 -w 1 -p 3000 -e development -C config/dev_puma_config.rb
As a bit of background on that worker_timeout setting, here's what the Puma config documentation says about it:
Verifies that all workers have checked in to the master process within
the given timeout. If not the worker process will be restarted. This
is not a request timeout, it is to protect against a hung or dead
process. Setting this value will not protect against slow requests.
Default value is 60 seconds.
With Puma, the number of threads can be raised to handle multiple requests concurrently, but on Heroku the number of connections to Postgres is limited.
To handle more requests we can add dynos, where each dyno has, say, 0:16 threads by default; under load each dyno can then hold up to 16 database connections.
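The arithmetic behind that limit can be sketched in a few lines (the method name and keyword arguments here are illustrative, not any library's API):

```ruby
# Rough back-of-envelope for Postgres connection demand, assuming each
# Puma thread and each Sidekiq thread can hold one ActiveRecord connection.
def required_connections(dynos:, workers_per_dyno:, threads_per_worker:,
                         sidekiq_processes: 0, sidekiq_concurrency: 0)
  web  = dynos * workers_per_dyno * threads_per_worker
  jobs = sidekiq_processes * sidekiq_concurrency
  web + jobs
end

# 5 web dynos x 1 Puma worker x 16 threads, plus 3 Sidekiq processes x 8 threads
puts required_connections(dynos: 5, workers_per_dyno: 1, threads_per_worker: 16,
                          sidekiq_processes: 3, sidekiq_concurrency: 8)
# => 104
```

If that total exceeds the plan's connection limit, scaling dynos up will eventually exhaust the database.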
With Rails' ActiveRecord we can limit the number of database connections per Rails worker process using this configuration:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!

  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || ENV['MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
However, even with the per-process pool limit, the database's connection limit is hit once the number of dynos grows.
Is there any way to kill a thread and close its database connection as soon as the request has been served?
I've tried using PgBouncer as a buildpack, but there are issues with prepared statements.
I'm currently on Rails 4.0.0 using Puma 2.7.1.
Is there some event hook in Puma which we can configure, like this, for when a request is complete?
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
Try Puma's cluster mode if you use MRI. I suggest setting up 4 workers:
puma -w 4
If you use JRuby, you should instead specify around 20 threads, but no more than the number of allowed connections to the DB:
puma -t 20:20
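On the original question of releasing connections as soon as a request is served: Rails 4 already ships middleware (ActiveRecord::ConnectionAdapters::ConnectionManagement) that checks the current thread's connection back into the pool after each request, so no Puma hook is usually needed. A minimal sketch of that idea follows; the class name is made up, and the release hook is injectable so the sketch runs without Rails:

```ruby
# Sketch of a Rack middleware that returns the current thread's DB
# connection to the pool after each request. The class name is
# illustrative; Rails 4 ships equivalent behavior built in.
class ReleaseDbConnection
  def initialize(app, release: -> { ActiveRecord::Base.clear_active_connections! })
    @app = app
    @release = release
  end

  def call(env)
    @app.call(env)
  ensure
    # Note: with streaming response bodies you would defer this until the
    # body is closed (e.g. via Rack::BodyProxy) rather than releasing here.
    @release.call
  end
end
```

Note this returns the connection to ActiveRecord's pool rather than closing the underlying Postgres connection; to reduce total open connections across many dynos, server-side pooling (e.g. PgBouncer in transaction-pooling mode, with prepared_statements: false in database.yml) is the usual route.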
More info:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#thread-safety
https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections