How do you switch between applications using EB CLI? - ruby-on-rails

I managed to get a Rails app running through Elastic Beanstalk using the EB CLI and the instructions outlined here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-reference-get-started.html
I then set up a second application going through the "eb init" process a second time and using a different application name. How do I now switch between the two applications using the command line before doing "git aws.push"? Can I switch between them while keeping both applications live?
Bonus question: If I have two different AWS accounts and I have the access key/secret for both accounts, how do I switch between applications on different AWS accounts? I assume this "switching accounts" process is the same one you'd go through if you were to, say, set up git and eb on a second development computer and link that computer's local git repo with the live eb instance. Is this correct?

When you run eb init, it creates a folder in your current directory called .elasticbeanstalk. In it there will be a config file which will have all of the info that you need for your current environment/application. It also has a value called AwsCredentialFile which points to a file that contains your Access Key ID as well as Secret Key.
Therefore, if you want to switch between applications, you can just have multiple directories in which you have run eb init, and change the files accordingly.
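For illustration only (I am not certain of the exact keys, which vary by EB CLI version, and every value here is a placeholder), the two files might look roughly like this:

# .elasticbeanstalk/config (older EB CLI; key names may differ by version)
[global]
ApplicationName=my-first-app
Region=us-east-1
EnvironmentName=my-first-app-env
AwsCredentialFile=/home/me/.elasticbeanstalk/aws_credential_file

# the credential file it points to
AWSAccessKeyId=AKIAXXXXXXXXXXXXXXXX
AWSSecretKey=xxxxxxxxxxxxxxxxxxxxxxxx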

When you configure an elastic beanstalk application using the cli, a file called config.yml is generated inside the .elasticbeanstalk directory. This file basically contains all the info about your elastic beanstalk application.
To change the application your project is linked to, you can simply change the value of application_name in config.yml.
Run eb status to verify if the application switch was successful.
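For reference, a trimmed config.yml along the lines described above (application name, environment name, and region are placeholders) looks like this:

# .elasticbeanstalk/config.yml
branch-defaults:
  master:
    environment: my-second-app-env
global:
  application_name: my-second-app   # change this value to switch applications
  default_region: us-east-1
  profile: eb-cli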

I don't want multiple directories, and I don't want to modify application_name in the .elasticbeanstalk/config.yml file, so I just do this:
eb init --interactive
Of course I have to answer the questions again, but that only takes a few seconds.

eb use sets the default environment for the current branch:
usage: eb use [environment_name] [options ...]

Related

Rails 6 is unable to connect to AWS Elastic Beanstalk provisioned RDS. Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"

I am having a very difficult time trying to launch a sample Rails 6 application to Elastic Beanstalk. For context, I am following these instructions:
ADD RDS to Ruby Application
ADD an RDS to Beanstalk
I have followed these instructions to a tee and am still unable to connect to the RDS database that I have provisioned. I keep receiving the following error:
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Whenever I try to run RAILS_ENV=production rails db:migrate or any other rake task, I keep getting that error.
On my AWS console, under Configuration > Software, I have the corresponding environment variables set.
Also, in my database.yml file I have the RDS variables configured as follows:
production:
  adapter: postgresql
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>
I have mapped my values as instructed in the documentation and am certain that they are correct.
Finally, I have sshed into my beanstalk provisioned ec2 instance and have executed the following command:
psql -U username -p 5432 -h examplehost.rds.amazonaws.com -d ebdb
I provided the password and am able to connect. I am really at my wits' end; I've spent too much time trying to diagnose this and am running out of ideas. I don't know where to look next for ideas on how to troubleshoot this. I've read so many Stack Overflow questions and blogs that my head is spinning. If anyone has any ideas on how to resolve this, I would greatly appreciate it.
---Update----
I have created a new environment variable on the elastic beanstalk console.
ENV['DATABASE_URL'] = postgres://YourUserName:YourPassword@YourHostname:5432/YourDatabaseName
I made the necessary configurations, uploaded my .zip file and the connection to the database failed.
---- UPDATE-----
printenv does not show the variables provided by Beanstalk; however, this command does: sudo /opt/elasticbeanstalk/bin/get-config environment.
My first piece of advice is that, in my opinion, it is a much better option to create the Amazon RDS instance on its own, not tied to Beanstalk.
As the AWS documentation indicates (emphasis mine):
AWS Elastic Beanstalk provides support for running Amazon Relational Database Service (Amazon RDS) instances in your Elastic Beanstalk environment. To learn about that, see Adding a database to your Elastic Beanstalk environment. This works great for development and testing environments. However, it isn't ideal for a production environment because it ties the lifecycle of the database instance to the lifecycle of your application's environment.
And:
To decouple your database instance from your environment, you can run a database instance in Amazon RDS and configure your application to connect to it on launch. This enables you to connect multiple environments to a database, terminate an environment without affecting the database, and perform seamless updates with blue-green deployments.
In my opinion, even for testing or development, it is always advisable to configure a small database instance and give your application the ability to define the most appropriate mechanism for connecting to your database.
The only downside is that you will probably need to configure a VPC, although that should not actually be a problem and, in any case, it is worth it.
If for any reason you need to use the Beanstalk-provisioned RDS database, perhaps there are some workarounds to your problem (it would be a workaround, because your configuration looks fine; please just verify that the database configuration is defined for the right Beanstalk environment).
For instance, one thing you can try is to store the database connection configuration in an S3 bucket, as also suggested in the AWS documentation. The idea is basically to create a configuration file with the necessary connectivity information, store it in S3, and read that configuration in your application, i.e., process that file in order to initialize your database.
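As an illustration only, a minimal Ruby sketch of that idea, assuming the aws-sdk-s3 gem and placeholder bucket/object names, might look like this:

# Hypothetical sketch: read a database config file stored in S3 at boot.
require 'aws-sdk-s3'
require 'yaml'

s3 = Aws::S3::Client.new
response = s3.get_object(bucket: 'my-app-config-bucket', key: 'database.yml')  # placeholders
db_config = YAML.safe_load(response.body.read)
# ...use db_config to establish the connection, e.g. ActiveRecord::Base.establish_connection(db_config)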
But maybe you can try another approach.
Please consider this SO question, and the answer from Jon McAuliffe and others. As indicated, Beanstalk will provide your application with environment variables, but these variables may not be exposed as shell variables; they are exposed to your application in different ways depending on the runtime the application is executed on.
In the case of Ruby, you are accessing these variables in the correct way but, for some reason, your program is not getting access to that information.
This probably also explains why printenv does not print any of your variables but the get-config script does.
But maybe you can take advantage of the fact that get-config provides the right information, and either define these variables in ENV by executing the get-config script for every RDS* key, perhaps in your environment.rb (please be aware that I programmed in Ruby when I was a student, but that was a long time ago, so do this in whatever file you consider appropriate), or use .ebextensions and a custom configuration file. You can find several examples here.
For instance, consider the following (copied and pasted, with minor modifications, from this example configuration):
commands:
  01_update_env:
    command: "/tmp/update_environment_variables.sh"
files:
  "/tmp/update_environment_variables.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      RDS_HOSTNAME=$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME)
      if [ -z "$RDS_HOSTNAME" ]; then
        echo "Could not determine RDS hostname"
        exit 1
      fi
      echo "RDS hostname $RDS_HOSTNAME..."
      # Just export the variable at OS level, or make it visible to
      # the rails env in some other way
      export RDS_HOSTNAME=$RDS_HOSTNAME
      # Process the rest of the variables...
      # Probably we should create a list and iterate through it
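For the environment.rb alternative mentioned above, here is a minimal, untested sketch; the key list is an assumption based on the variables shown in the question, and you should place it in whatever file you consider appropriate:

# Hypothetical sketch: populate ENV from Beanstalk's get-config script
# before Rails configures the database (e.g. early in config/environment.rb).
%w[RDS_DB_NAME RDS_USERNAME RDS_PASSWORD RDS_HOSTNAME RDS_PORT].each do |key|
  next if ENV[key]  # keep any value already present
  value = `/opt/elasticbeanstalk/bin/get-config environment -k #{key}`.strip
  ENV[key] = value unless value.empty?
end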
A similar approach could be the one exposed in this stackoverrun question, but restricted to the container that Beanstalk will use to encapsulate your app. AFAIK, the container should receive as env variables the different RDS* ones corresponding to the database configuration.
Dan, be aware that I have not tested these solutions; they are only ideas. Please be careful with them, as I do not want to cause any damage to your system.
I found an answer for this problem with a MySQL server that might still help you. Basically, even though I followed all your steps, could see my env vars using sudo /opt/elasticbeanstalk/bin/get-config environment, and could connect directly to my database with the mysql command, I was still getting the following error:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) (Mysql2::Error::ConnectionError)
The solution turned out to be that Elastic Beanstalk was not providing my env vars to the bundle exec rails console command in the eb ssh session. I solved the issue by prepending all of the required env vars explicitly to any rails commands I ran from within the eb ssh session. So, for example, in order to run rails console, I had to run the following:
RAILS_MASTER_KEY=xxxxxxx RAILS_ENV=production RDS_HOSTNAME=xxxxxxx RDS_PASSWORD=xxxxxxx RDS_USERNAME=xxxxxxx RDS_DB_NAME=xxxxxxx AWS_REGION=xxxxxxx AWS_BUCKET=xxxxxxx bundle exec rails c
Replace the xxxxxxxs above with the values from the corresponding variables in your EB > Configuration > Software tab, and you should be able to connect to the remote database and run migrations, rake tasks and other database-reliant functions.
For Amazon Linux 2 instances I was having the same issue, and I noticed that the env variables I set in the configuration just didn't exist for the user I had switched to with su; if I remain as the default login after eb ssh, env prints everything I expected.
Edit: sorry -- env printing of the variables on Amazon Linux 2 instances is enabled by:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
So what I did was find where those env variables were being exported for the default user's shell, which was /etc/profile.d/sh.local as noted in the AWS Knowledge Center link above, and just source that file when I need to start the rails console as su.
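Inside the instance that looks something like this (run interactively after eb ssh; the app path is an assumption and may differ on your platform):

# run these interactively after `eb ssh`
sudo su                           # switch to the user you need
source /etc/profile.d/sh.local    # pull in the Beanstalk-exported variables
cd /var/app/current               # app path on Amazon Linux 2 (assumption)
bundle exec rails console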

Unable to create AWS Elastic Beanstalk environment in either command line or admin dashboard

I am trying to deploy a "Hello, world" Rails app (Rails v 5.0.1, Ruby v 2.3.1) to AWS for the purposes of learning about AWS.
I have created an IAM user in the AWS Elastic Beanstalk dashboard, and I've verified that the user has one access ID and secret access key. I've ensured these two credentials are stored in environment variables in my local machine, and for completeness I've also ensured these same values are correct in the ~/.aws/credentials file. I have gone through the steps of creating a new application in the UI, however whenever I click "Create Application", I see the following error at the top of the screen:
Validation Error
Configuration validation exception: AWS Elastic Beanstalk could not communicate with Amazon EC2 to determine whether to create a custom security group for Elastic Load Balancing.
My IAM user is a member of the "AdministratorAccess" and "AWSElasticBeanstalkFullAccess" permissions groups.
When creating the application, I went through the following steps:
1) Selected "Web server environment" in the "Choose environment tier" menu.
2) In the "Create a new environment" menu, I choose "Ruby" as the platform and "Sample Application" under the "Application code" selection.
Similarly, when I navigate to my project directory in the command line and run "eb create dev-env", I see the following:
MacBook-Pro-5:beanstalk richiethomas$ eb create dev-env
WARNING: You have uncommitted changes.
Creating application version archive "app-e4da-170116_145453".
Uploading beanstalk/app-e4da-170116_145453.zip to S3. This may take a while.
Upload Complete.
ERROR: API Call unsuccessful. Status code returned 401
EDIT: The same 401 response is returned even when I have no uncommitted changes.
Can anyone illuminate what I'm doing wrong?
I know this question was asked a while ago, but for those looking for the solution to this issue, you need to add the following permissions to your IAM account:
AWSCodeCommitPowerUser
AWSCodeCommitFullAccess
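If you prefer the AWS CLI to the console, attaching one of those managed policies looks roughly like this (the user name is a placeholder):

aws iam attach-user-policy \
  --user-name my-eb-user \
  --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitPowerUser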

Permission denied to `heroku fork` app but able to push code to same app

I am able to execute something like git push heroku master with no problems but when I execute heroku fork -a heroku staging (where heroku is my existing app and staging is a new app I am trying to create) I get ' ! You do not have access to the app heroku.' and the fork does not initiate.
I am following the instructions at https://devcenter.heroku.com/articles/fork-app
I am trying to just make a staging environment and would like to use fork so I don't have to manually copy over config vars or DB data. Thanks.
EDIT:
Just found "Forking is only supported on production tier database plans. Follow these steps to upgrade from a starter tier (dev or basic) plan to a production plan." at https://devcenter.heroku.com/articles/heroku-postgres-fork. Looks like forking is not permitted on starter tier DB's. Does this statement mean you can't fork an app (heroku fork -a sourceapp targetapp) on a starter tier DB?
This question is answered in another thread, Forking my existing heroku app for multiple environments:
"staging should be the name of the application you are creating, not the name of the git remote. You need to pick a unique name. I typically use mysite-prod and mysite-staging as my application names."
Not sure about the question in your edit, but the first message means you don't have access to the app itself. I'd double-check the credentials you're signed in to the Heroku Toolbelt with.
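Putting that together with the fork syntax quoted in the question, the command would look something like the following, where mysite-prod is the existing app and mysite-staging is the new, uniquely named app to create (both names are placeholders):

heroku fork -a mysite-prod mysite-staging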

Heroku app id within the application environment

Is it possible to retrieve the app id (app123@heroku.com) within the application environment?
I know that I can manually set a config var, but I figured such info could be exposed by Heroku?
If you have an add-on like SendGrid or Memcache installed, you can access the environment variables for the username of one of those add-ons. For example, if you were using Ruby, you can log into the console and output the value of ENV['SENDGRID_USERNAME'] or ENV['MEMCACHE_USERNAME']. It's easy to extract the app id from there. I'm not sure which other add-ons also expose that value in an environment variable but you can output the entire ENV global hash and find out what's available.
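For example, a rough Ruby sketch of that extraction, assuming the SendGrid add-on is installed (the value in the comment is illustrative):

# e.g. ENV['SENDGRID_USERNAME'] => "app1234567@heroku.com"
sendgrid_username = ENV['SENDGRID_USERNAME']
heroku_app_id = sendgrid_username && sendgrid_username.split('@').first  # => "app1234567"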
I used Jared's solution for over a year.
Today I ran into an issue when ENV['SENDGRID_USERNAME'] was not there yet (during deployment).
Heroku recommends setting a config var for this yourself, so I set:
heroku config:add APP_NAME=<myappname> --app <myappname>
And enabled the labs feature that allows you to use config vars during compile:
heroku labs:enable user-env-compile -a myapp
And now I have my app name available here:
ENV["APP_NAME"] # '<myappname>'
So I won't run into the issue again, though I would prefer that this kind of info were set by Heroku instead.
This is straight from my support ticket with Heroku:
You cannot retrieve that value yourself. This is a value that SendGrid support requires that only Heroku support can supply to them.
So you will need to ask Heroku for it via a support ticket.
UPDATE
Somewhat contradictorily, I found I could access my Heroku app id by running:
heroku config:get SENDGRID_USERNAME
app171441466@heroku.com

How do I open source my Rails apps without giving away the app's secret keys and credentials

I have a number of Rails apps hosted on GitHub. They are all currently private, and I often will deploy them from their GitHub repository. I'd like to be able to make some of them open source, just like the ones you can find on http://opensourcerails.com.
My question is: How can I make these repositories public without giving away super secret credentials?
For example, I can look in /config/initializers/cookie_verification_secret.rb and see the cookie secret for nearly every one of them. I don't understand how this is acceptable. Are these users all changing these values in their deploy environments somehow?
Some users even expose their AWS secret and key! Others will instead set their AWS secret to something like:
ENV['aws-secret']
although I'm not sure at what point they're setting that value.
So, what are the best practices for open sourcing your Rails app without compromising your app's security?
I recently went through this with one of my own apps. My solution was to store anything secret in a git-ignored YAML config file, and then to access that file using a simple class in the initializers directory. The config file is stored in the 'shared' folder for the Capistrano deployment and copied to config at each deploy.
Config store: http://github.com/tsigo/jugglf/blob/master/config/initializers/juggernaut.rb
Example usage: https://github.com/tsigo/jugglf/blob/6b91baae72fbe4b1f7efa2759bb472541546f7cf/config/initializers/session_store.rb
You may also want to remove from source control all history of the file that used these secret values. Here's a guide for doing this in Git that I used: http://help.github.com/removing-sensitive-data/
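A minimal sketch of that pattern (the file name and keys here are illustrative, not the ones from the linked repo):

# config/app_config.yml -- git-ignored, lives in Capistrano's shared folder:
#   aws_access_key_id: "..."
#   aws_secret_access_key: "..."
#
# config/initializers/app_config.rb -- loads it at boot:
require 'yaml'
APP_CONFIG = YAML.load_file(Rails.root.join('config', 'app_config.yml'))
# Usage elsewhere: APP_CONFIG['aws_secret_access_key']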
If you're using foreman, put an .env file in the root of your app. (foreman docs)
.env will have
AWS_SECRET=xxx
AWS_ACCESS=yyy
Then when you need to use the keys, insert:
ENV['AWS_SECRET']
ENV['AWS_ACCESS']
It's important that you don't commit this .env file to version control, so if you're using git, add .env to your .gitignore.
Bonus round! - Heroku
If deploying to Heroku, these environment variables need to be configured in the Heroku environment, too. There are two options:
Manually add the keys through the heroku config:add command
Use the heroku-config gem to synchronize your local environment variables, both ways.
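For the first option, that amounts to something like this (placeholder values, matching the .env example above):

heroku config:add AWS_SECRET=xxx AWS_ACCESS=yyy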
Not storing any secret value at all. At any point in the history of a Git repo.
Those values should be stored elsewhere, leaving only template config files versioned, along with a script able:
to read the right values from the external repo
and to build the complete final config file (with the secret values in it).
By keeping the two sets of data separate (sources on one side, secret values on the other), you can then open source the sources repo without compromising any secrets.
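As a rough, untested Ruby sketch of that template-plus-script idea (file paths and key names are my own placeholders, not from the answer):

# build_config.rb -- merges secret values kept outside the repo into a versioned template.
require 'erb'
require 'yaml'

secrets  = YAML.load_file(File.expand_path('~/secrets/myapp.yml'))  # external, never committed
template = ERB.new(File.read('config/database.yml.template'))       # versioned template
File.write('config/database.yml', template.result_with_hash(secrets: secrets))
# database.yml.template would reference e.g. <%= secrets['password'] %>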
I actually took a hint from your question, using ENV.
I had three different secret values that I didn't want made available. They're the app's secret token of course, and Twitter's consumer key and secret. In my secret token initializer:
KinTwit::Application.config.secret_token = ENV['SECRET_TOKEN']
Twitter.consumer_key = ENV['CONSUMER_KEY']
Twitter.consumer_secret = ENV['CONSUMER_SECRET']
I'm hosting my project on Heroku, so I added these as configuration variables to Heroku.
[03:07:48] [william@enterprise ~/dev/rwc/kintwit]$ heroku config:add CONSUMER_KEY=ub3rs3cr3tk3y
Adding config vars and restarting app... done, v7
CONSUMER_KEY => ub3rs3cr3tk3y
[03:08:40] [william@enterprise ~/dev/rwc/kintwit]$ heroku config:add CONSUMER_SECRET=ub3rs3cr3tk3y
Adding config vars and restarting app... done, v8
CONSUMER_SECRET => ub3rs3cr3tk3y
[03:08:57] [william@enterprise ~/dev/rwc/kintwit]$ heroku config:add SECRET_TOKEN=ub3rs3cr3tk3y
Adding config vars and restarting app... done, v9
SECRET_TOKEN => ub3rs3cr3tk3y
Now, the values are ready on my next push. But, what if you aren't using Heroku? I'm obviously not an expert on every single rails deployment (jeesh, not even a Heroku pro), but an example of this would be doing a db:migrate for testing.
$ RAILS_ENV=test rake db:migrate
The KEY=value pair before the command sets the environment variable, so when this command runs, ENV['RAILS_ENV'] evaluates to 'test'. So, however this is set up in your environment is how you would do it. But the environment variables aren't in your code, so that's the trick.
[EDIT - The following method has the annoyance of having to switch to the Production branch to run "rails server" in order to include the necessary cookies. Thus, making edits while the server is running is difficult... and I'm still looking for a good solution]
After further investigation, I think the solution I was looking for was to exclude anything that stores a secret value from my Git repo's master branch (just as @VonC said). But instead of reading those files from a separate repo, I simply create a new "production" branch and add them to that.
This way they're excluded from master, and I can push that to GitHub or some other public repo just fine. When I'm ready to deploy, I check out the production branch, merge master into it, and deploy production.
I need to be able to do this because Heroku and other hosts require a single git repo to be pushed to their servers.
More information here:
http://groups.google.com/group/heroku/browse_thread/thread/d7b1aecb42696568/26d5249204c70574
