We are recurrently getting timeout errors on Heroku. When we look into our logs for clues about the cause, we cannot find a good reason for them. These timeouts generally occur not at our busiest times, but at what seem like random times.
We are using Puma as the web server and Sidekiq as the background job processor.
Our Puma configuration is MAX_THREADS=60, WEB_CONCURRENCY=1.
Our Sidekiq configuration is WORKER_CONCURRENCY=8.
We are using a single Standard-2X dyno each for the worker and the web process.
We are using the Standard-0 PostgreSQL plan with a 120-connection limit. We have DB_POOL=60 configured to match the thread count. With these settings, we are not using all of the connections.
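(For context, a minimal sketch of how DB_POOL is usually wired into the connection pool size in config/database.yml; the exact keys below are an assumption based on Heroku's standard setup, not our actual file:)
# config/database.yml (sketch, not our real config)
production:
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV.fetch("DB_POOL") { ENV.fetch("MAX_THREADS") { 5 } } %>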
puma.rb
thread_count = ENV.fetch("MAX_THREADS") { 5 }.to_i
threads thread_count, thread_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RACK_ENV") { "development" }
workers ENV.fetch("WEB_CONCURRENCY") { 1 }
preload_app!
before_fork do
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
plugin :tmp_restart
As you can see from our metrics, the timeouts occur at seemingly random times, not at the busiest times when we get the most requests.
What could be some potential reasons for these timeouts? Thank you for your help.
Related
I am deploying my Rails 5.2 app to Elastic Beanstalk, with Puma as the application server and Nginx set up by default by Elastic Beanstalk.
I am facing a race condition issue. After checking the details on the container instance, I found this:
#example /opt/elasticbeanstalk/support/conf/pumaconf.rb
directory '/var/app/current'
threads 8, 32
workers %x(grep -c processor /proc/cpuinfo)
bind 'unix:///var/run/puma/my_app.sock'
pidfile '/var/run/puma/puma.pid'
stdout_redirect '/var/log/puma/puma.log', '/var/log/puma/puma.log', true
daemonize false
As seen here, the number of workers is equal to the number of my CPU cores.
However, on Heroku we can do this:
# config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
How can I lower the number of threads and increase the number of workers on Elastic Beanstalk, taking into account that I have a load balancer enabled and that the config above is managed by Elastic Beanstalk?
In the case of Heroku I can manage this with puma.rb; on Elastic Beanstalk, however, I don't see any approach other than changing the file
/opt/elasticbeanstalk/support/conf/pumaconf.rb
manually. Manual modification will cause issues when the number of instances scales up or down.
Not sure if you've resolved your issue. I had a similar issue and resolved it using .ebextensions.
You can create a new pumaconf.rb file in the config directory of your code. Then, in the .ebextensions directory, create a file that copies the new pumaconf.rb over the default pumaconf.rb.
Also, if you are going about it this way, use this path for your new file in your .ebextensions code:
/var/app/ondeck/config/pumaconf.rb
and not
/var/app/current/config/pumaconf.rb
because using the latter won't pick up your latest pumaconf.rb:
cp /var/app/ondeck/config/pumaconf.rb /opt/elasticbeanstalk/support/conf/pumaconf.rb
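A minimal sketch of what such an .ebextensions file could look like (the filename 01_copy_pumaconf.config is only an example):
# .ebextensions/01_copy_pumaconf.config (example name)
container_commands:
  01_copy_pumaconf:
    command: "cp /var/app/ondeck/config/pumaconf.rb /opt/elasticbeanstalk/support/conf/pumaconf.rb"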
My question:
Is this exemplary of memory bloat, memory leak, or just bad server configuration?
First, I will add a screenshot of memory usage
As you can see, I have been using swap memory.
Also, I am seeing a constant plateau and then an increase in memory after setting up my Puma server config/puma.rb file according to the Heroku documentation.
I am running the Hobby dyno (512 MB) with 0 workers.
My WEB_CONCURRENCY variable is set to 1
My RAILS_MAX_THREADS is also set to 1
MIN_THREADS is also set to 1
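(These are ordinary Heroku config vars; for example, they could be set with the CLI like this:)
heroku config:set WEB_CONCURRENCY=1 RAILS_MAX_THREADS=1 MIN_THREADS=1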
Here is my config/puma.rb file
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  ActiveRecord::Base.establish_connection
end
I am using the derailed gem to measure memory use from my gems.
I am using rack-mini-profiler & memory_profiler to measure on a per page basis.
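(For context, the derailed_benchmarks measurements referred to here are roughly these commands; exact task names depend on the gem version:)
# memory needed just to require the gems in the Gemfile
bundle exec derailed bundle:mem
# memory use of the booted app over repeated requests
bundle exec derailed exec perf:mem_over_time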
After allowing the app to run, here are the results:
As you can see, the app is not going over its limit. If anyone has any suggestions that make sense, please feel free to answer the question.
The dyno and Puma setup mentioned above is producing this report.
So, we are now using swap memory only occasionally, usually no more than a few MB and only occasionally hitting 23 MB. The app uses a lot of gems, and you can see that we are staying under the 512 MB limit.
I used the following documentation from Heroku:
To get your puma server configured properly
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
For R14 memory errors
https://devcenter.heroku.com/articles/ruby-memory-use
We have a Rails app running on a single Heroku instance configured with Puma. We are having performance issues that are causing H14 errors (session timeouts).
We run anywhere from 2-5 Web dynos depending on traffic. We run 3 Worker dynos for our background processes. I have increased our Web dynos from 1X (512 MB) to 2X (1 GB) and removed our logging service Papertrail, which seemed to be causing memory leaks. This has helped a little bit.
We are receiving anywhere from 30-60 RPM depending on the time of day.
Here is our Puma config:
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 8), Integer(ENV['MAX_THREADS'] || 12)
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # worker specific setup
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
      Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['MAX_THREADS'] || 12
    ActiveRecord::Base.establish_connection(config)
  end
end
We implemented Heroku's rack-timeout gem with its default timeout setting of 15 seconds, but this has not helped the situation at all. It possibly made things worse, so we removed it.
Does anyone know of the optimal configuration for an app like ours with the traffic metrics described above? Any config suggestions would be much appreciated!
With Puma, the number of threads can be increased to handle multiple requests at the same time. But in the case of Heroku, the number of database connections to Postgres is limited.
To handle more requests we can increase the number of dynos, where each dyno has, let's say, 0:16 threads by default. In that case, under load each dyno can make up to 16 connections to the database (so, for example, 5 dynos under load could hold up to 80 connections).
With Rails' ActiveRecord we can limit the number of database connections per Rails worker process using this configuration:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10 # seconds
    config['pool'] = ENV['DB_POOL'] || ENV['MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
However, with the database connection limit, the limit is hit as the number of dynos increases.
Is there any way to kill a thread and close its database connection as soon as the request has been served?
I've tried using PgBouncer as a buildpack, but there are issues with prepared statements.
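(For reference, the usual workaround for the prepared-statements conflict with poolers like PgBouncer in transaction pooling mode is to disable them in the database config; a sketch, assuming a standard config/database.yml:)
# config/database.yml (sketch)
production:
  adapter: postgresql
  prepared_statements: false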
I'm currently on Rails 4.0.0 using Puma 2.7.1.
Is there some event hook in Puma which we can configure like this, to run when a request is complete?
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
Try using Puma's cluster mode if you use MRI. I suggest setting up 4 workers:
puma -w 4
If you use JRuby, you need to specify around 20 threads instead, but it should not be more than the number of allowed connections to the DB:
puma -t 20:20
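For reference, expressed in config/puma.rb rather than on the command line, those flags correspond roughly to:
# MRI: rely on worker processes for parallelism (puma -w 4)
workers 4
# JRuby: rely on threads instead (puma -t 20:20), keeping the count
# at or below the allowed DB connections
# threads 20, 20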
More info:
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#thread-safety
https://devcenter.heroku.com/articles/concurrency-and-database-connections#maximum-database-connections
Is it possible to run delayed jobs on Heroku for free?
I'm trying to use delayed_job_active_record on Heroku. However, it requires a worker dyno, and it would cost money if I kept this dyno on full time.
I thought that using Unicorn and making its workers run delayed jobs instead of the Heroku worker would cost nothing while still successfully running all the jobs. However, Unicorn workers do not seem to start "working" automatically.
I have the following in my Procfile:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
and the following in my unicorn.rb
worker_processes 3
timeout 30
preload_app true
before_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis.quit
    Rails.logger.info('Disconnected from Redis')
  end
  sleep 1
end

after_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis = ENV['REDIS_URI']
    Rails.logger.info('Connected to Redis')
  end
end
Delayed jobs only seem to work when I scale the Heroku worker from 0 to 1.
Again, is it not possible to use Unicorn workers instead of a Heroku worker dyno to do the delayed jobs?
Do I have to use a gem like workless to run delayed jobs on Heroku for free? (reference)
Splitting the process like that can cause problems. Your best bet is not to try to get it 'free' but to use something like http://hirefireapp.com/, which will start up a worker only when there are jobs to perform, reducing the cost significantly compared to running a worker 24x7.
Also note that Heroku will only ever autostart a 'web' process for you; starting other named processes is a manual task.
You can use Heroku Scheduler to run the jobs using the command
rake jobs:workoff
This way the jobs can run in your web dyno. According to the delayed_job docs, this command will run all available jobs and then exit.
You can configure the scheduler to run this command every 10 minutes, for example; it has no noticeable effect on the app's performance when no jobs are queued. Another good idea is to schedule it to run daily at a time with lower traffic.
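For a quick manual check you can also run the same task as a one-off dyno (assuming the Heroku CLI is installed):
heroku run rake jobs:workoff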
There is no straightforward way to get this for free, but there are plenty of workarounds for running background jobs at no cost. One of them is http://nofail.de/2011/07/heroku-cedar-background-jobs-for-free/
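Roughly, that kind of workaround boils down to spawning a delayed_job runner from Unicorn's after_fork hook. A sketch only, not tested, assuming the jobs:work rake task from the Procfile above; the details are in the linked article:
# unicorn.rb (sketch of the workaround)
after_fork do |server, worker|
  # keep the reconnection code shown earlier, then have only the first
  # web worker spawn a delayed_job process alongside itself
  if worker.nr == 0
    @delayed_job_pid ||= spawn('bundle exec rake jobs:work')
  end
end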
Also, if you plan to use Resque, which is an excellent choice for background jobs, you will need Redis, which comes free with the nano plan: https://addons.heroku.com/redistogo. See https://devcenter.heroku.com/articles/queuing-ruby-resque
A simple solution is to buy one dyno for the worker, while your web dyno remains free.
Let me know if you need more help.
Thanks
Consider using the Workless gem: https://github.com/lostboy/workless
If you only have one web worker, Heroku will sleep it if it's inactive for an hour.
Also, Heroku will reboot all dynos at least once a day.
This makes it hard to run an in-Ruby scheduler; it would at least have to use persistent storage (e.g. a database).