AWS Elastic Beanstalk Ruby app not loading production ENV

My Ruby on Rails app on AWS Elastic Beanstalk is loading the staging environment, even though it has been configured as production. These are the environment properties:
BUNDLE_PATH: vendor/bundle
RAILS_ENV: production
RAILS_SKIP_ASSET_COMPILATION: false
RAILS_SKIP_MIGRATIONS: false
BUNDLE_WITHOUT: test:development
RACK_ENV: production
The way I know it's loading staging: I use the figaro gem to build links in my html.erb files. For instance:
<%= link_to(t('pages.sign_in.forgot_pswd'), "#{ENV['EXTERNAL_LINK_HOME']}/forgot_password", class: 'forgot-pass', target: '_blank')%>
And this link's base URL is the default (staging) value, not the production one.
application.yml
EXTERNAL_LINK_HOME: 'https://staging.blahblah.com'
production:
EXTERNAL_LINK_HOME: 'https://production.blahblah.com'
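Figaro treats top-level keys in application.yml as defaults and keys nested under the current environment name as overrides, so the production URL only wins when the app actually boots with RAILS_ENV=production. A rough sketch of that resolution (an illustration, not Figaro's actual source):

```ruby
require "yaml"

yml = YAML.safe_load(<<~YAML)
  EXTERNAL_LINK_HOME: 'https://staging.blahblah.com'
  production:
    EXTERNAL_LINK_HOME: 'https://production.blahblah.com'
YAML

# Top-level scalar keys are defaults; a hash named after the current
# environment overrides them.
def resolve(config, env)
  defaults = config.reject { |_, v| v.is_a?(Hash) }
  defaults.merge(config[env] || {})
end

resolve(yml, "production")["EXTERNAL_LINK_HOME"]
# => "https://production.blahblah.com"
resolve(yml, "staging")["EXTERNAL_LINK_HOME"]
# => "https://staging.blahblah.com" (the default wins for any other env)
```

So a staging URL in production links means the running web process did not boot with RAILS_ENV=production, even if the console did.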
This just recently happened on my last deploy. There have been no updates to the figaro gem. The only change was upgrading to the latest AWS Elastic Beanstalk platform for Ruby 2.2:
64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.2 (Puma)
Ruby 2.2.5-p319
RubyGems 2.4.5.1
Puma 2.16.0
nginx 1.10.1
Rails 4.2.4
I can SSH into the instance via eb ssh, cd to /var/app/current, and run bundle exec rails c. I get a production Rails console with no problem, verified by Rails.env.

Related

AWS elastic beanstalk is not getting the environment variables

I'm trying to run a Rails 6 app on AWS Elastic Beanstalk, but the Puma log shows the following (repeating every few seconds):
[21776] + Gemfile in context: /var/app/current/Gemfile
[21776] Early termination of worker
The version numbers:
Rails 6.0.3.3
puma 4.3.5
ElasticBeanstalk Ruby 2.7 running on 64bit Amazon Linux 2/3.1.1
ruby 2.7.1p83
The server is unresponsive from outside the instance, and there's nothing in log/production.log.
Running on a dev machine in production mode, there are no errors, and the database is reachable (no migration failure).
Running bundle exec puma -p 3000 -e production on the AWS instance, I get:
Puma starting in single mode...
Version 4.3.5 (ruby 2.7.1-p83), codename: Mysterious Traveller
Min threads: 5, max threads: 5
Environment: production
Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
so there's no obvious error that may cause the worker to halt.
How can I find out what's causing the workers to fail?
Edit 1:
I ran Rails console on the instance and found that the environment variables are missing - e.g. the production database user/pass/host. Once I hardcoded them I could connect to the database.
I suspect the absence of other environment variables is making the app crash.
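When variables go missing like this, the app dies deep inside boot with no useful log line. A fail-fast check makes the failure loud instead; a minimal sketch, where the initializer path and key names are placeholders, not this app's real variables:

```ruby
# Hypothetical fail-fast check, e.g. in config/initializers/required_env.rb;
# DB_HOST here is a placeholder name, not taken from the question.
def assert_required_env!(keys, env = ENV)
  missing = keys.reject { |k| env.key?(k) && !env[k].to_s.empty? }
  raise "Missing environment variables: #{missing.join(', ')}" unless missing.empty?
end

assert_required_env!(%w[DB_HOST], "DB_HOST" => "db.example.com")  # passes silently
# assert_required_env!(%w[DB_HOST DB_PASSWORD], "DB_HOST" => "x")
# would raise: Missing environment variables: DB_PASSWORD
```

With this in place, a worker that boots without its variables dies with an explicit message in the log rather than a bare "Early termination of worker".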
A user on the AWS forum had the answer. Pinning nio4r in my Gemfile:
gem "nio4r", "= 2.5.2"
fixed the issue.

Unable to run rails console/server in AWS Cloud9 CodeStar EB dev Ruby on Rails environment

I created my first environment with CodeStar and selected the Ruby on Rails w/ Elastic Beanstalk option. I'm using AWS Cloud9 for the IDE. I'd like to use the Preview option to view the impact of code changes prior to committing, and have looked through the docs at http://docs.aws.amazon.com/cloud9/latest/user-guide/app-preview.html, however I can't seem to get a server running in the development environment.
From within my environment directory in the Cloud9 terminal (path: /home/ec2-user/environment/env_name) I tried rails s -b $IP -p $PORT as documented for the previous non-AWS Cloud9, and also rails server and even rails console just to check. In each case I just get the help details for rails new:
$ rails s
Usage:
rails new APP_PATH [options]
Options:
-r, [--ruby=PATH] # Path to the Ruby binary of your choice
...etc...
What am I missing?
Per the discussion on this question, this behavior indicates that rails does not recognize that it is running in a Rails app directory, so it thinks the only valid action is rails new. There were several suggested answers, but the one that worked for me was to run rake rails:update:bin (or rake app:update:bin for Rails 5).
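The `rails new` usage text appears because the rails executable only dispatches to the app's own commands when it can locate the app's files; otherwise it falls back to the generator. A simplified sketch of that detection (an illustration, not Railties' actual source):

```ruby
require "tmpdir"
require "fileutils"

# If neither marker file exists, the rails executable behaves as if you
# asked for `rails new` and prints its usage text.
def looks_like_rails_app?(dir)
  File.exist?(File.join(dir, "config", "application.rb")) ||
    File.exist?(File.join(dir, "bin", "rails"))
end

Dir.mktmpdir do |dir|
  puts looks_like_rails_app?(dir)   # false: a bare directory
  FileUtils.mkdir_p(File.join(dir, "config"))
  FileUtils.touch(File.join(dir, "config", "application.rb"))
  puts looks_like_rails_app?(dir)   # true once the app files are present
end
```

That is why regenerating the binstubs fixes it: it restores bin/rails, one of the markers the executable looks for.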

Running Rails Console Gets Killed

When I run the following command on my production server:
bundle exec rails c production
After a few seconds I get "Killed". The server has 32 GB of RAM and is not short of memory.
I use Ruby 2.1.6 with Rails 4.2.9 on CentOS 6.5.
I tried adding a swap file; it didn't work.
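"Killed" while system memory is free usually means something outside the process terminated it: the kernel OOM killer (its verdicts show up in `dmesg`) or a per-process resource limit. Ruby can report the limits the console actually runs under, which is worth checking before assuming an OOM:

```ruby
# Inspect the resource limits of the current process; a finite soft cap on
# address space can kill a large Rails console even on a 32 GB machine.
{
  address_space: Process.getrlimit(:AS),
  open_files:    Process.getrlimit(:NOFILE)
}.each do |name, (soft, hard)|
  soft_s = soft == Process::RLIM_INFINITY ? "unlimited" : soft
  hard_s = hard == Process::RLIM_INFINITY ? "unlimited" : hard
  puts "#{name}: soft=#{soft_s} hard=#{hard_s}"
end
```

If address_space shows a finite soft limit, the shell or a PAM limits file is capping the console regardless of how much RAM the box has.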

Capistrano 3 Start Server Error (Rails 4.2)

This is my spec for deployment to my staging:
Rails 4.2.5
Capistrano 3.4.1
Thin 1.7.0
Nginx 1.4.6
Everything works fine; Capistrano is also able to deploy and run the server.
BUT
When I try to access staging, I always get an internal server error, and this is written to log/thin.log in the Rails app:
Unexpected error while processing request: Missing 'secret_token' and
'secret_key_base' for 'production' environment, set these values in
'config/secrets.yml'
I have also set the secret_key_base for the production environment, generated with rake secret RAILS_ENV=production.
If I kill the running server process started by Capistrano and manually run the server using bundle exec thin -p [MY_PORT] -e production -d start, the error disappears and everything is normal.
So:
it passes nginx, so the error must be in Thin or Capistrano
it bugs me that every time I deploy to production I must kill the server process and start it manually
My questions are:
why does the Thin server started by Capistrano always fail with missing secret_key_base and secret_token, although I already have them in my secrets.yml?
how do I fix it? I'm out of options.
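One common cause (an assumption here, not confirmed by the question) is that a daemon launched by Capistrano runs in a non-login shell, so variables exported in ~/.bashrc or /etc/profile never reach it; a secrets.yml that reads ENV then silently yields nil. The effect is easy to reproduce:

```ruby
require "erb"
require "yaml"

# A secrets.yml pattern that depends on the process environment:
template = <<~YML
  production:
    secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
YML

ENV.delete("SECRET_KEY_BASE")                  # what a non-login shell may look like
without = YAML.safe_load(ERB.new(template).result)
puts without["production"]["secret_key_base"].inspect  # nil

ENV["SECRET_KEY_BASE"] = "abc123"              # what your interactive shell has
with = YAML.safe_load(ERB.new(template).result)
puts with["production"]["secret_key_base"]     # "abc123"
```

If that matches, the fix is to make the variables visible to the Capistrano-spawned process (for example via Capistrano's `default_env` setting) or to keep the literal key in the server-side secrets.yml.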

elasticbeanstalk ruby on rails assets

I am new to Ruby on Rails. When I run my Rails server in development mode locally, it works fine. I tried deploying to Elastic Beanstalk (in development mode), but for some reason the assets can't be found. Are there any configurations I need to make in development.rb or config.yml?
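A frequent cause on Elastic Beanstalk is that Rails is not serving static files itself (the local development server does so by default). In Rails 4.2 the toggle is `serve_static_files`; a config sketch under that assumption, not specific to this app:

```ruby
# config/environments/production.rb (Rails 4.2 naming; earlier 4.x
# versions call this config.serve_static_assets)
config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?
```

Also make sure assets are actually precompiled on deploy; on the EB Ruby platform that is controlled by the RAILS_SKIP_ASSET_COMPILATION option shown in the first question's settings.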
