Rails production mode not working behind an AWS load balancer

My Rails 6 app works fine in development mode on EC2 instances, but when I configure it to use production mode, the load balancer cannot pass its health check and the app does not serve requests.
My health check settings, load balancer security group, and Rails app security group (screenshots omitted).
The load balancer works in development
Here is the development setup that works with the load balancer:
Start rails:
rails s -p 3000 -b 0.0.0.0
which responded:
=> Booting Puma
=> Rails 6.0.3.2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.3.5 (ruby 2.6.3-p62), codename: Mysterious Traveller
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
config/environments/development.rb
Rails.application.configure do
  config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" # This is the public DNS of the load balancer
  config.cache_classes = false
  config.eager_load = false
  config.consider_all_requests_local = true
  if Rails.root.join('tmp', 'caching-dev.txt').exist?
    config.action_controller.perform_caching = true
    config.action_controller.enable_fragment_cache_logging = true
    config.cache_store = :memory_store
    config.public_file_server.headers = {
      'Cache-Control' => "public, max-age=#{2.days.to_i}"
    }
  else
    config.action_controller.perform_caching = false
    config.cache_store = :null_store
  end
  config.action_mailer.raise_delivery_errors = false
  config.action_mailer.default_url_options = { :host => 'localhost:3000' }
  config.action_mailer.perform_caching = false
  config.active_support.deprecation = :log
  config.assets.debug = true
  config.assets.quiet = true
  config.file_watcher = ActiveSupport::EventedFileUpdateChecker
end
Below is the production setup (which does not work).
Start rails:
RAILS_ENV=production rails s -p 3000 -b 0.0.0.0
which responded:
=> Booting Puma
=> Rails 6.0.3.2 application starting in production
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.3.5 (ruby 2.6.3-p62), codename: Mysterious Traveller
* Min threads: 5, max threads: 5
* Environment: production
* Listening on tcp://0.0.0.0:3000
config/environments/production.rb
Rails.application.configure do
  config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" # This is the public DNS of the load balancer
  config.hosts << "3.14.65.84"
  config.cache_classes = true
  config.eager_load = true
  config.consider_all_requests_local = false
  config.action_controller.perform_caching = true
  config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?
  config.assets.compile = false
  config.log_level = :debug
  config.log_tags = [ :request_id ]
  config.action_mailer.perform_caching = false
  config.i18n.fallbacks = true
  config.active_support.deprecation = :notify
  config.log_formatter = ::Logger::Formatter.new
  if ENV["RAILS_LOG_TO_STDOUT"].present?
    logger = ActiveSupport::Logger.new(STDOUT)
    logger.formatter = config.log_formatter
    config.logger = ActiveSupport::TaggedLogging.new(logger)
  end
end
Load balancer: the health check does not pass.
I also tried:
Copying config/environments/development.rb to production.rb and running in the production environment: the health check still fails.
Copying config/environments/production.rb to development.rb and running in the development environment: the health check passes.
So it does not seem to be the Rails config itself, but the way production is handled on AWS.
Help: how do I make this Rails 6 app work in production on AWS EC2 behind a load balancer?

My company just ran into a very similar sounding issue. Once ECS spun up the tasks, we were able to access the Rails app through the ELB, but the health checks would fail and it would automatically shut down each container it tried to spin up.
We ended up adding our IP range to the hosts configuration. Completely disabling it in production didn't feel right, so we arrived at something akin to this:
config.hosts = [
  "publicdomain.com",
  "localhost",
  IPAddr.new("10.X.X.X/23")
]
The whitelisted range matches the addresses that ECS will use when creating and slotting in the containers. Hopefully that helps!

Had the same issue today. In my case, I simply lowered the bar and configured the health check to accept 403 as healthy. It's not ideal, but we shouldn't sacrifice host protection or open it up broadly to predictable IP ranges.
Update 1:
Rails 6.1 and later support excluding requests from host authorization:
config.host_authorization = { exclude: ->(request) { request.path =~ /healthcheck/ } }
Ref: https://api.rubyonrails.org/classes/ActionDispatch/HostAuthorization.html
The main reason is that the connection from the target group to the container for the health check uses an IP address, not the domain, so Rails responds with 403.
Either accept 403 as healthy or exclude the health-check path from host authorization.
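For context, a minimal sketch (assuming Rails 6.1+ and a placeholder domain) of how the exclusion can sit alongside an explicit host list in config/environments/production.rb:
Rails.application.configure do
  # Allow only the domain the app actually serves.
  config.hosts << "example.com"
  # Skip host authorization for the health-check path so the target group's
  # IP-based probes are not rejected with 403 (Rails 6.1+).
  config.host_authorization = { exclude: ->(request) { request.path =~ /healthcheck/ } }
end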

Instead of pointing the health check at the root path, I think it is better to create a dedicated health-check route in the application, like this:
# controller
class HealthCheckController < ApplicationController
  def show
    render body: nil, status: 200
  end
end
# routes
get '/health_check', to: 'health_check#show'
Then update the ping path in the load balancer's health check to /health_check.
Edit:
Add config.hosts.clear in place of config.hosts << "xxxxxxxx.us-east-2.elb.amazonaws.com" in the production config file so that Rails accepts the requests.
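A sketch of what that edit might look like in config/environments/production.rb (note that clearing the host list turns off Host header protection entirely):
Rails.application.configure do
  # Empty the allowed-host list so host authorization no longer blocks
  # requests that arrive by IP address, such as load balancer health checks.
  config.hosts.clear
end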

The missing information here is that Rails by default does not set config.hosts in production. The purpose of config.hosts is to protect against DNS rebinding in development environments due to the presence of web-console.
This is the best article I found on the topic: https://prathamesh.tech/2019/09/02/dns-rebinding-attacks-protection-in-rails-6/
For us, we have set config.hosts in application.rb for our primary domain and subdomain and then customized it in all the other environments. As a result, config.hosts is enforced in production and the AWS health checks fail, as observed by the OP.
You have two options:
Remove config.hosts completely in production. Since this is not set by Rails by default, the presumption is that DNS rebinding attacks are not an issue in prod.
Determine the allowed request IPs in production.rb. The solutions above tie the app to the infrastructure, which is not good: what if you want to deploy your app to a new region? You can do this statically or dynamically, as sketched below.
Static: set an environment variable to pull in the ELB request IP range. If you're using AWS, hopefully you're using CloudFormation, so you can pass the appropriate value through as an environment variable or a Parameter Store value.
Dynamic: use the AWS Ruby SDK to look up the ELB IP addresses.
Another option for those who come across this is discussed here: https://discuss.rubyonrails.org/t/feature-proposal-list-of-paths-to-skip-when-checking-host-authorization/76246
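A rough sketch of the static approach (HEALTH_CHECK_CIDR is a hypothetical variable name and the domain is a placeholder), reading the allowed range from the environment in config/environments/production.rb:
require "ipaddr"

Rails.application.configure do
  # Public domain the app actually serves.
  config.hosts << "publicdomain.com"
  # CIDR block the load balancer / containers live in, injected at deploy time
  # (via CloudFormation, ENV, or Parameter Store) so the app is not hard-coded
  # to one region or subnet.
  config.hosts << IPAddr.new(ENV.fetch("HEALTH_CHECK_CIDR", "10.0.0.0/16"))
end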

Related

Rails ActionController::Live without restarting the server in development

I'm using SSE (Server-Sent Events) in Rails with ActionController::Live from Rails 4.
It's working fine, but in development I have to define my configuration like this:
Rails.application.configure do
  config.cache_classes = true
  config.eager_load = true
  # ...
end
I have to restart the Puma server each time I make a modification.
Is there a way to do that without restarting?
Your configuration of config.cache_classes = true explicitly disables hot-reloading of classes when they change. You'll have to set that to false in development to avoid server restarts between changes.
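A sketch of the relevant development settings with reloading enabled (assuming nothing else in the app requires classes to stay cached):
Rails.application.configure do
  config.cache_classes = false  # reload changed classes on each request
  config.eager_load = false     # load code lazily in development
end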

Elastic Beanstalk not loading assets for Ruby on Rails

I have a Ruby on Rails application that works in production mode locally, but it will not work when I upload it to Elastic Beanstalk; it breaks everything.
Locally the site renders correctly; on the EB site the assets fail to load, with errors in the browser console (screenshots omitted).
Here is my production.rb
Rails.application.configure do
# Settings specified here will take precedence over those in config/application.rb.
# Code is not reloaded between requests.
config.cache_classes = true
# Eager load code on boot. This eager loads most of Rails and
# your application in memory, allowing both threaded web servers
# and those relying on copy on write to perform better.
# Rake tasks automatically ignore this option for performance.
config.eager_load = true
# Full error reports are disabled and caching is turned on.
config.consider_all_requests_local = false
config.action_controller.perform_caching = true
# Enable Rack::Cache to put a simple HTTP cache in front of your application
# Add `rack-cache` to your Gemfile before enabling this.
# For large-scale production use, consider using a caching reverse proxy like
# NGINX, varnish or squid.
# config.action_dispatch.rack_cache = true
# Disable serving static files from the `/public` folder by default since
# Apache or NGINX already handles this.
config.serve_static_files = true
# Compress JavaScripts and CSS.
config.assets.js_compressor = :uglifier
# config.assets.css_compressor = :sass
# Do not fallback to assets pipeline if a precompiled asset is missed.
config.assets.compile = true
# Asset digests allow you to set far-future HTTP expiration dates on all assets,
# yet still be able to expire them through the digest params.
config.assets.digest = true
# `config.assets.precompile` and `config.assets.version` have moved to config/initializers/assets.rb
# Specifies the header that your server uses for sending files.
# config.action_dispatch.x_sendfile_header = 'X-Sendfile' # for Apache
# config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX
# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
# config.force_ssl = true
# Use the lowest log level to ensure availability of diagnostic information
# when problems arise.
config.log_level = :debug
# Prepend all log lines with the following tags.
# config.log_tags = [ :subdomain, :uuid ]
# Use a different logger for distributed setups.
# config.logger = ActiveSupport::TaggedLogging.new(SyslogLogger.new)
# Use a different cache store in production.
# config.cache_store = :mem_cache_store
# Enable serving of images, stylesheets, and JavaScripts from an asset server.
# config.action_controller.asset_host = 'http://assets.example.com'
# Ignore bad email addresses and do not raise email delivery errors.
# Set this to true and configure the email server for immediate delivery to raise delivery errors.
# config.action_mailer.raise_delivery_errors = false
# Enable locale fallbacks for I18n (makes lookups for any locale fall back to
# the I18n.default_locale when a translation cannot be found).
config.i18n.fallbacks = true
# Send deprecation notices to registered listeners.
config.active_support.deprecation = :notify
# Use default logging formatter so that PID and timestamp are not suppressed.
config.log_formatter = ::Logger::Formatter.new
# Do not dump schema after migrations.
config.active_record.dump_schema_after_migration = false
end
I have tried setting
config.serve_static_files = true
config.assets.compile = true
to false as well, and it still doesn't work.
Any help?
I encountered a similar error when deploying my React on Rails 5 application to Elastic Beanstalk for production (it was previously in production on Heroku). I came to a solution from a different angle, by adjusting the default nginx configuration. My problem was that content from public/packs/ was not being served as packs/, which is how <%= javascript_pack_tag 'application' %> was producing the link in app/views/layouts/application.html.erb. I added the following location directive at the end of /etc/nginx/conf.d/webapp_healthd.conf to remedy the issue:
location /packs {
  alias /var/app/current/public/packs;
  gzip_static on;
  expires 1y;
  add_header Cache-Control public;
  add_header Last-Modified "";
  add_header ETag "";
}
After ssh-ing into the app server, adding this directive, and running sudo su; service nginx restart, static content from public/packs/ was correctly served as packs/. I believe this location directive can be added through a config file in .ebextensions to automate this change to the default Elastic Beanstalk nginx conf file.
This is due to Nginx config (in /etc/nginx/conf.d/webapp_healthd.conf).
When a request for /assets/anything is made, Nginx looks directly in /var/app/current/public/assets.
This means any asset not present in public/assets won't be served.
My ugly solution:
# config/application.rb
config.assets.prefix = '/some_other_path'
Note that this will slow down asset requests, as they will be processed by Puma rather than served directly by Nginx.
If you use the latest Amazon Linux 2 Elastic Beanstalk platform, then there's a much simpler way to solve this with platform configuration. See the full details here: https://stackoverflow.com/a/69103413/1852005

Rails expects me to restart on every change?

My views are working as expected; every time I change something, it is immediately reflected on the page. But every time I make a change in a controller, model, or config, I have to restart the server in order for it to show.
I start my server with rails s -e development and it states this:
=> Booting Puma
=> Rails 4.1.8 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Notice: server is listening on all interfaces (0.0.0.0). Consider using 127.0.0.1 (--binding option)
=> Ctrl-C to shutdown server
My config/environments/development.rb looks like this:
# -*- encoding : utf-8 -*-
Gvm::Application.configure do
  # Settings specified here will take precedence over those in config/application.rb.
  # In the development environment your application's code is reloaded on
  # every request. This slows down response time but is perfect for development
  # since you don't have to restart the web server when you make code changes.
  config.cache_classes = true
  # Do not eager load code on boot.
  config.eager_load = false
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.smtp_settings = {
    :address => "smtp.gmail.com",
    :port => 587,
    :domain => 'gmail.com',
    :user_name => '...',
    :password => '...',
    :authentication => 'plain',
    :enable_starttls_auto => true
  }
  config.action_mailer.default_url_options = { :host => "localhost:3000" }
  # For debugging only; it is better to add the line below only in the development environment.
  config.action_mailer.raise_delivery_errors = true
  # Show full error reports and disable caching.
  config.consider_all_requests_local = true
  config.action_controller.perform_caching = false
  # Don't care if the mailer can't send.
  config.action_mailer.raise_delivery_errors = false
  # Print deprecation notices to the Rails logger.
  config.active_support.deprecation = :log
  # Raise an error on page load if there are pending migrations
  config.active_record.migration_error = :page_load
  # Debug mode disables concatenation and preprocessing of assets.
  # This option may cause significant delays in view rendering with a large
  # number of complex assets.
  config.assets.debug = true
end
Any ideas of why I still have to restart it after each change?
Conclusion (without a solution):
In the end, it seems that this is a Rails and mounted-partitions issue. My Vagrant VirtualBox VM mounts a shared folder and, because of that, Rails can't properly handle time synchronization between guest and host.
While I don't have definitive confirmation of this, it is what could explain the original question.
Please add this line to your development.rb file. It worked for me:
config.reload_classes_only_on_change = false
NOTE:
With a VirtualBox setup there is a very well-known problem: see the Rails issue tracker.
Solution: you need to synchronize the time between host and guest due to some changes in Rails 4.
Rails 4.1 comes with Spring out of the box, so that may be your issue. Run spring stop, then check whether any Spring processes are left with ps ax | grep spring and run pkill -9 spring if there are. Restart Rails and see whether reloading works as expected.
Check your development.rb file; it may contain
config.cache_classes = true
In the development environment your application's code should be reloaded on every request. This slows down response time but is perfect for development, since you don't have to restart the web server when you make code changes. Just set this to false:
config.cache_classes = false

Rails does not reload controllers, helpers on each request in FreeBSD 9.1

I've run into some weird Rails behavior. Please give me some advice!
For example I have a code like this:
def new
  raise
end
I start rails server in development mode.
Hit refresh in browser and see
RuntimeError in AuthenticationController#new
Okay. I comment out line with "raise" like this:
def new
  # raise
end
Hit refresh in the browser, but again I see the same error as above, even though the error page shows the "raise" line commented out.
My guess is that controllers, helpers, etc. are getting reloaded, but Rails returns cached results.
config/environments/development.rb:
Rails.application.configure do
# BetterErrors::Middleware.allow_ip! '192.168.78.0/16'
# In the development environment your application's code is reloaded on
# every request. This slows down response time but is perfect for development
# since you don't have to restart the web server when you make code changes.
config.cache_classes = false
# Do not eager load code on boot.
config.eager_load = false
# Show full error reports and disable caching.
config.consider_all_requests_local = true
config.action_controller.perform_caching = false
# Don't care if the mailer can't send.
config.action_mailer.raise_delivery_errors = false
# Print deprecation notices to the Rails logger.
config.active_support.deprecation = :log
# Raise an error on page load if there are pending migrations.
config.active_record.migration_error = :page_load
# Debug mode disables concatenation and preprocessing of assets.
# This option may cause significant delays in view rendering with a large
# number of complex assets.
config.assets.debug = true
# Asset digests allow you to set far-future HTTP expiration dates on all assets,
# yet still be able to expire them through the digest params.
config.assets.digest = true
# Adds additional error checking when serving assets at runtime.
# Checks for improperly declared sprockets dependencies.
# Raises helpful error messages.
config.assets.raise_runtime_errors = false
# Raises error for missing translations
# config.action_view.raise_on_missing_translations = true
end
How I start the server:
=> Booting Puma
=> Rails 4.2.1.rc3 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
Puma 2.11.1 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:3000
Any suggestions please.
UPDATE 1.
This problem does not exist on Ubuntu 14.04 but does exist on FreeBSD 9.1.
I've created a simple app and tested it on FreeBSD first (same problem), then on Ubuntu (no problem).
Can you advise me on how to deal with this problem on FreeBSD 9.1?
Had the same issue with Rails 5 + Vagrant + Ubuntu 16. None of the other solutions worked (my guest and host times are synced).
The only thing that worked for me was to comment out the following line from config/environments/development.rb:
config.file_watcher = ActiveSupport::EventedFileUpdateChecker
Thought I would post this in case someone else gets to this page for a similar issue, as I did.
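As an alternative to removing the line outright, switching to the polling-based watcher (ActiveSupport::FileUpdateChecker, the non-evented default) should have a similar effect, since it does not rely on filesystem events that shared folders often fail to deliver; a sketch:
Rails.application.configure do
  # Use the plain polling file watcher instead of the evented one.
  config.file_watcher = ActiveSupport::FileUpdateChecker
end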
I have finally figured this out!
Here's an answer on the Rails issue tracker: https://github.com/rails/rails/issues/16678
If you use VirtualBox + NFS you must synchronize time between host and guest, due to some changes in Rails 4.
Please check if you are really running the application in development mode, rather than production.
Also check your config/environments/development.rb to see if cache_classes is off:
config.cache_classes = false
This other post might help you.
Rails.application.reloader.reload!
found via method(:reload!).source in the Rails console (Rails 6).

Hiding all exception messages

I'm running Rails 3.0.8 with the WEBrick web server, started in production mode with this command:
RAILS_ENV=production rails server
I have the following problem.
I've read that Rails in production mode should handle all exceptions and errors.
But I actually still get the error "ActiveRecord::RecordNotFound" when I request a nonexistent item in production mode.
I've also read about this kind of workaround:
rescue_from ActiveRecord::RecordNotFound, :with => :page_not_found
but I don't think it is the Rails way.
Here are my production.rb file contents:
BeerPub::Application.configure do
# Settings specified here will take precedence over those in config/application.rb
# The production environment is meant for finished, "live" apps.
# Code is not reloaded between requests
config.cache_classes = true
config.whiny_nils = false
# Full error reports are disabled and caching is turned on
config.consider_all_requests_local = false
config.action_view.debug_rjs = false
config.action_controller.perform_caching = true
# Specifies the header that your server uses for sending files
config.action_dispatch.x_sendfile_header = "X-Sendfile"
# For nginx:
# config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'
# If you have no front-end server that supports something like X-Sendfile,
# just comment this out and Rails will serve the files
# See everything in the log (default is :info)
# config.log_level = :debug
# Use a different logger for distributed setups
# config.logger = SyslogLogger.new
# Use a different cache store in production
# config.cache_store = :mem_cache_store
# Disable Rails's static asset server
# In production, Apache or nginx will already do this
config.serve_static_assets = true
# Enable serving of images, stylesheets, and javascripts from an asset server
# config.action_controller.asset_host = "http://assets.example.com"
# Disable delivery errors, bad email addresses will be ignored
# config.action_mailer.raise_delivery_errors = false
# Enable threaded mode
# config.threadsafe!
# Enable locale fallbacks for I18n (makes lookups for any locale fall back to
# the I18n.default_locale when a translation can not be found)
config.i18n.fallbacks = true
# Send deprecation notices to registered listeners
config.active_support.deprecation = :notify
end
As you can see, it is quite ordinary.
Please help me solve this issue.
UPD:
I can also get the following error:
Routing Error
No route matches "/lol"
It's another type of exception, but the question is the same: what is the best way to handle such situations?
"I've read, that rails in production mode should handle all exceptions and errors."
This is incorrect. Rails doesn't catch exceptions per se; it's just that in production mode you have different ways to handle them.
The rescue_from method you wrote is absolutely correct.
Many Rails developers don't bother rescuing the RecordNotFound exception, for the simple fact that, depending on the app, it is the user who did something wrong.
There are apps, however, that like to trap this exception and perform custom actions, such as redirects, rendering text, or different views.
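For reference, a minimal sketch of the rescue_from approach in ApplicationController (the page_not_found handler and the 404 page path are assumptions for illustration, not taken from the question):
class ApplicationController < ActionController::Base
  # Turn "record not found" into a proper 404 response instead of a 500 error.
  rescue_from ActiveRecord::RecordNotFound, :with => :page_not_found

  private

  def page_not_found
    # Serve the static error page that ships with a Rails app.
    render :file => "#{Rails.root}/public/404.html", :status => :not_found, :layout => false
  end
end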
