Non-thread-safe Rails 5 app on Elastic Beanstalk - ruby-on-rails

I am deploying my Rails 5.2 app on Elastic Beanstalk with Puma as the application server and Nginx as the reverse proxy (the Elastic Beanstalk default).
I am running into a race condition. After digging into the container instance, I found this:
#example /opt/elasticbeanstalk/support/conf/pumaconf.rb
directory '/var/app/current'
threads 8, 32
workers %x(grep -c processor /proc/cpuinfo)
bind 'unix:///var/run/puma/my_app.sock'
pidfile '/var/run/puma/puma.pid'
stdout_redirect '/var/log/puma/puma.log', '/var/log/puma/puma.log', true
daemonize false
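Note that %x(grep -c processor /proc/cpuinfo) shells out and returns the core count as a string (e.g. "4\n"), which Puma's workers directive coerces to an integer. A pure-Ruby sketch of what that default effectively computes (illustrative only, not part of the EB file):

```ruby
require 'etc'

# Equivalent of `grep -c processor /proc/cpuinfo`: count the available CPU cores.
cpu_cores = Etc.nprocessors

# The EB-managed pumaconf.rb effectively does `workers cpu_cores`,
# so an instance with 4 cores runs 4 Puma workers.
puts cpu_cores
```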
As seen here, the number of workers equals the number of CPU cores.
However, on Heroku we can do this:
# config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
How can I lower the number of threads and increase the number of workers on Elastic Beanstalk, given that I have a load balancer enabled and the config above is managed by Elastic Beanstalk?
On Heroku I can manage this with puma.rb, but on Elastic Beanstalk I don't see any approach other than editing
/opt/elasticbeanstalk/support/conf/pumaconf.rb
manually. Manual modification will cause issues when the number of instances scales down or up.

Not sure if you've resolved your issue. I had a similar one and solved it using .ebextensions.
Create a new pumaconf.rb in the config directory of your code. Then, in the .ebextensions directory, create a config file that copies your new pumaconf.rb over the default one.
If you go this route, reference your new file in the .ebextensions code at
/var/app/ondeck/config/pumaconf.rb
and not
/var/app/current/config/pumaconf.rb
because the latter won't pick up your latest pumaconf.rb (during a deployment the new version is staged in /var/app/ondeck before it becomes /var/app/current):
cp /var/app/ondeck/config/pumaconf.rb /opt/elasticbeanstalk/support/conf/pumaconf.rb
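The replacement file might look like this (a sketch only; WEB_CONCURRENCY and RAILS_MAX_THREADS are hypothetical environment properties you would set in the Elastic Beanstalk console, mirroring the Heroku-style config above):

```ruby
# Hypothetical config/pumaconf.rb, copied over the EB-managed file by an
# .ebextensions container command. Values fall back to the shown defaults
# when the environment properties are not set.
directory '/var/app/current'
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
bind 'unix:///var/run/puma/my_app.sock'
pidfile '/var/run/puma/puma.pid'
stdout_redirect '/var/log/puma/puma.log', '/var/log/puma/puma.log', true
daemonize false
```

This keeps the socket, pidfile, and log paths that Nginx and the EB tooling expect, while making worker and thread counts tunable per environment instead of hard-coded.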

Related

No PID file created when starting Puma as daemon

I am working on getting my Rails app deployed using Nginx as a reverse proxy. Everything works correctly when starting the app manually with rails s. All the proper PID and state files are created in the tmp/pids/ directory (puma.pid, puma.state, and server.pid), and puma.sock is properly created in the tmp/sockets/ directory.
When I attempt to start the same app using rails s -d, to run it as a daemon, everything starts normally except that tmp/pids/puma.pid is nowhere to be found, which breaks my reverse proxy. I'll paste a copy of my puma.conf below.
Using:
puma 3.12.6 and rails 5.2.6
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
workers ENV.fetch("WEB_CONCURRENCY") { 4 }
preload_app!
plugin :tmp_restart
# Prep for Nginx integration
app_dir = File.expand_path("../..", __FILE__)
tmp_dir = "#{app_dir}/tmp"
bind "unix://#{tmp_dir}/sockets/puma.sock"
pidfile "#{tmp_dir}/pids/puma.pid"
state_path "#{tmp_dir}/pids/puma.state"
activate_control_app
It turns out that the problem occurs when launching the Rails server with the -d switch to daemonize (which I was doing), like this:
rails s -d
However, if I add daemonize true to the puma.conf, everything works as expected. So now I launch the server with plain rails s using the following puma.conf, and the missing puma.pid appears!
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 4 }
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
preload_app!
# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
# Prep for Nginx integration
app_dir = File.expand_path("../..", __FILE__)
tmp_dir = "#{app_dir}/tmp"
bind "unix://#{tmp_dir}/sockets/puma.sock"
pidfile "#{tmp_dir}/pids/puma.pid"
state_path "#{tmp_dir}/pids/puma.state"
# Run Puma as a daemon
daemonize true
activate_control_app
Add this to config/puma.rb:
pidfile "tmp/pids/server.pid"

Adding foreman on passenger

So far I had a simple application that only required the classic Rails server to boot.
I recently added the react_on_rails gem, which requires booting a Node.js server to handle webpack and JavaScript assets.
So I understand I need the foreman gem, which can manage several processes. So far so good, but I'm still having a few problems understanding and deploying this enhanced app to my production environment (Phusion Passenger on Apache/Nginx).
Several questions:
Does Passenger handle the transition from rails s to foreman start -f Procfile.dev automatically?
If not, where do I set things up so Passenger works?
Side question: almost all Google results refer to Puppet when searching for foreman on Passenger. Could anyone explain in one line what Puppet does and whether I really need it in production? So far everything runs smoothly on localhost with the foreman start -f Procfile.dev command, so I don't know where this is coming from...
I am deploying my application to the Amazon cloud using Capistrano, and I was expecting to have the Rails + Node.js setup on every autoscaled instance (with Passenger gracefully handling all that). Am I thinking about this wrong?
In our production environment we use eye to manage the other processes related to the Rails app (Passenger runs from mod_passenger while the workers are controlled by eye).
Here is an example of how to start 4 concurrent queue_classic workers:
APP_ROOT = File.expand_path(File.dirname(__FILE__))
APP_NAME = File.basename(APP_ROOT)

Eye.config do
  logger File.join(APP_ROOT, "log/eye.log")
end

Eye.application APP_NAME do
  working_dir File.expand_path(File.dirname(__FILE__))
  stdall 'log/trash.log' # stdout/err logs for processes by default
  env 'RAILS_ENV' => 'production' # global env for each process
  trigger :flapping, times: 10, within: 1.minute, retry_in: 10.minutes

  group 'qc' do
    1.upto(4) do |i|
      process "worker-#{i}" do
        stdall "log/worker-#{i}.log"
        pid_file "tmp/pids/worker-#{i}.pid"
        start_command 'bin/rake qc:work'
        daemonize true
        stop_on_delete true
      end
    end
  end
end

How can I prevent R14 errors and swap memory usage in a Rails app on Heroku?

My question:
Is this exemplary of memory bloat, memory leak, or just bad server configuration?
First, the memory-usage graph from my metrics shows that I have been using swap memory.
I am also seeing a constant plateau followed by an increase in memory after setting up my Puma server config/puma.rb file according to the Heroku documentation.
I am running the hobby 1x dyno (512 MB) with 0 workers.
My WEB_CONCURRENCY variable is set to 1.
My RAILS_MAX_THREADS is also set to 1.
MIN_THREADS is also set to 1.
Here is my config/puma.rb file:
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  ActiveRecord::Base.establish_connection
end
I am using the derailed gem to measure memory use from my gems.
I am using rack-mini-profiler & memory_profiler to measure on a per-page basis.
After allowing the app to run, the profiling results show the app is not going over its limit. If anyone has any suggestions that make sense, please feel free to answer the question.
The dyno and Puma setup mentioned above produces this behavior: we are now only using swap memory occasionally, not more than a few MB and only occasionally hitting 23 MB. The app uses a lot of gems, and you can see that we are staying under the 512 MB limit.
I used the following documentation from Heroku:
To get your puma server configured properly
https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
For R14 memory errors
https://devcenter.heroku.com/articles/ruby-memory-use

Rails app on Heroku with Puma

We have a Rails app running on Heroku, configured with Puma. We are having performance issues that are causing H14 errors (session timeouts).
We run anywhere from 2-5 web dynos depending on traffic, plus 3 worker dynos for our background processes. I have increased our web dynos from 1x (512 MB) to 2x (1 GB) and removed our logging service Papertrail, which seemed to be causing memory leaks. This has helped a little.
We receive anywhere from 30-60 requests per minute depending on the time of day.
Here is our Puma config:
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 8), Integer(ENV['MAX_THREADS'] || 12)
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # worker specific setup
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['MAX_THREADS'] || 12
    ActiveRecord::Base.establish_connection(config)
  end
end
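As a rough sanity check on this sizing (my own sketch, not from the original post): each worker can serve up to MAX_THREADS requests at once, so total concurrency per dyno, and the number of database connections the app may open, is workers × max threads:

```ruby
# Illustrative concurrency math for the config above, using its defaults.
workers     = Integer(ENV['PUMA_WORKERS'] || 2)
max_threads = Integer(ENV['MAX_THREADS'] || 12)

# Max in-flight requests per dyno. The AR pool is per-process, so setting
# pool = max_threads covers one worker, but the database itself must accept
# workers * max_threads connections from each dyno.
total_concurrency = workers * max_threads
puts "#{total_concurrency} concurrent requests, #{total_concurrency} DB connections per dyno"
```

With several dynos, the database connection limit can become the bottleneck before CPU or memory does, which is worth checking when tuning these numbers.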
We implemented the Heroku Rack Timeout gem with its default timeout of 15 seconds, but this has not helped the situation at all. It possibly made it worse, so we removed it.
Does anyone know of the optimal configuration for an app like ours with the traffic metrics described above? Any config suggestions would be much appreciated!

How do I use puma's configuration file?

I was following this guide, which documents the puma.rb file stored inside the app's config directory.
The guide is a bit flakey, but here's what I assume the puma.rb file does. Instead of running unwieldy commands such as this to get Puma running on a specified socket:
bundle exec puma -e production -b unix:///var/run/my_app.sock
You can just specify the port, pid, state, and other parameters in the puma.rb file like this:
rails_env = ENV['RAILS_ENV'] || 'production'
threads 4,4
bind "/home/starkers/Documents/alpha/tmp/socket"
pidfile "/home/starkers/Documents/alpha/tmp/pid"
state_path "/home/starkers/Documents/alpha/tmp/state"
activate_control_app
And then you could cd into the app's root and run a simple command like
puma
and the parameters set in puma.rb would be picked up. Unfortunately, that doesn't seem to work for me.
At least, I ran puma from the root of a tiny test app, and no .sock file appeared in
/home/starkers/Documents/alpha/tmp/sockets
so does that mean it isn't working?
How do I get this working? I am on a local development machine; could that cause this somehow? Is there a parameter I need to pass when running puma?
I was also stuck trying to find documentation on Puma's config file, but I did find this all-in-one example config useful. I've formatted it here for future reference:
# The directory to operate out of.
# The default is the current directory.
directory '/u/apps/lolcat'
# Load “path” as a rackup file.
# The default is “config.ru”.
rackup '/u/apps/lolcat/config.ru'
# Set the environment in which the rack's app will run. The value must be a string.
# The default is “development”.
environment 'production'
# Daemonize the server into the background. Highly suggest that
# this be combined with “pidfile” and “stdout_redirect”.
# The default is “false”.
daemonize
daemonize false
# Store the pid of the server in the file at “path”.
pidfile '/u/apps/lolcat/tmp/pids/puma.pid'
# Use “path” as the file to store the server info state. This is
# used by “pumactl” to query and control the server.
state_path '/u/apps/lolcat/tmp/pids/puma.state'
# Redirect STDOUT and STDERR to files specified. The 3rd parameter
# (“append”) specifies whether the output is appended, the default is
# “false”.
stdout_redirect '/u/apps/lolcat/log/stdout', '/u/apps/lolcat/log/stderr'
stdout_redirect '/u/apps/lolcat/log/stdout', '/u/apps/lolcat/log/stderr', true
# Disable request logging.
# The default is “false”.
quiet
# Configure “min” to be the minimum number of threads to use to answer
# requests and “max” the maximum.
# The default is “0, 16”.
threads 0, 16
# Bind the server to “url”. “tcp://”, “unix://” and “ssl://” are the only
# accepted protocols.
# The default is “tcp://0.0.0.0:9292”.
bind 'tcp://0.0.0.0:9292'
bind 'unix:///var/run/puma.sock'
bind 'unix:///var/run/puma.sock?umask=0777'
bind 'ssl://127.0.0.1:9292?key=path_to_key&cert=path_to_cert'
# Listens on port 7001
# The default is 9292
port 7001
# Instead of “bind 'ssl://127.0.0.1:9292?key=path_to_key&cert=path_to_cert'” you
# can also use the “ssl_bind” option.
ssl_bind '127.0.0.1', '9292', { key: path_to_key, cert: path_to_cert }
# Code to run before doing a restart. This code should
# close log files, database connections, etc.
# This can be called multiple times to add code each time.
on_restart do
puts 'On restart...'
end
# Command to use to restart puma. This should be just how to
# load puma itself (ie. 'ruby -Ilib bin/puma'), not the arguments
# to puma, as those are the same as the original process.
restart_command '/u/app/lolcat/bin/restart_puma'
# === Cluster mode ===
# How many worker processes to run.
# The default is “0”.
workers 2
# Code to run when a worker boots to setup the process before booting
# the app.
# This can be called multiple times to add hooks.
on_worker_boot do
puts 'On worker boot...'
end
# === Puma control rack application ===
# Start the puma control rack application on “url”. This application can
# be communicated with to control the main server. Additionally, you can
# provide an authentication token, so all requests to the control server
# will need to include that token as a query parameter. This allows for
# simple authentication.
# Check out https://github.com/puma/puma/blob/master/lib/puma/app/status.rb
# to see what the app has available.
activate_control_app 'unix:///var/run/pumactl.sock'
activate_control_app 'unix:///var/run/pumactl.sock', { auth_token: '12345' }
activate_control_app 'unix:///var/run/pumactl.sock', { no_token: true }
Those settings would then go in a Ruby file (e.g. config/puma.rb), and then, as Starkers says, you can run it with
puma -C config/puma.rb
Update: The original answer is no longer correct for Puma versions since 2019: Puma added a fallback mechanism, so both locations are checked now. ( https://github.com/puma/puma/pull/1885)
Puma first looks for configuration at config/puma/<environment_name>.rb, and then falls back to config/puma.rb.
Outdated answer:
If there is an environment defined - which is the case in your example - the configuration file is read from config/puma/[environment].rb and not config/puma.rb.
Just move your config/puma.rb to config/puma/production.rb and it should work.
Read the Puma documentation for more details: Configuration file
This will work:
puma -C config/puma.rb
You need to tell Puma where to find your rackup file; you can do that by putting this in your config:
rackup DefaultRackup
It looks like a fix for this is merged into master: https://github.com/puma/puma/pull/271
