Simplest way to share credentials across multiple servers - environment-variables

I know this is a common problem, but after searching quite a bit I don't have a solution that I like. I am creating a simple Service Oriented Architecture example for my students. We are using DigitalOcean. There will be three separate servers that need to find out each other's IP addresses and various secret tokens (e.g. Twitter access tokens). I want to illustrate the idea that one does not put such info into the code repo.
My plan is to use environment variables in all the strategic spots. But how to get those environment variables set up "automatically" and "maintainably"?
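Concretely, the in-app side of the plan stays trivial; a minimal Ruby sketch (the variable names here are just illustrative):

# Fail fast if any required setting is missing from the environment.
%w[SERVER_A_IP SERVER_B_IP TWITTER_ACCESS_TOKEN].each do |name|
  abort "Missing environment variable: #{name}" unless ENV[name]
end
server_a_ip   = ENV["SERVER_A_IP"]
twitter_token = ENV["TWITTER_ACCESS_TOKEN"]

The question is purely about how those variables get onto each box.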
I could have a private GitHub repo with just one file and remember to clone it and update it on all three servers.
I don't want to get into Capistrano and similarly complicated things.
I thought of having a gist with the information in it and using HTTP to grab it, but gists are public. OK, I could make the gist private, but then I have to authenticate to access that gist and I am back where I started.
Suggestions?

Whatever path you choose, you will have to provide a means of authenticating access to the secrets.
Either "I've written the Twitter access token on the white board"
Or "the secrets are stored in a service and here's how you authenticate access to them".
For the latter, a private Git repo is not the worst solution. You could provide access to the repo to your students(' accounts). They would have a two-step process: acquire the token from the GitHub repo, then apply the token to the DigitalOcean resource.
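As a rough sketch of that two-step in Ruby (the repo, file name, and token variable here are all hypothetical), each server could pull a secrets file from the private repo via GitHub's contents API:

require "net/http"
require "uri"

# Hypothetical class repo and file; note the GitHub token itself still has
# to reach each server somehow - that is the bootstrapping trade-off.
uri = URI("https://api.github.com/repos/my-class/secrets/contents/env.sh")
req = Net::HTTP::Get.new(uri)
req["Authorization"] = "Bearer #{ENV.fetch('GITHUB_TOKEN')}"
req["Accept"] = "application/vnd.github.raw" # request the raw file body
res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
File.write("/etc/profile.d/class-secrets.sh", res.body) if res.is_a?(Net::HTTPSuccess)

Each server then sources that file so the values land in the app's environment.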
Alternatives exist, including HashiCorp's Vault (see its tutorial). I recently used Google's Secret Manager too. These are third-party services rather than features provided by DigitalOcean directly.
I suspect that there's an implicit question about simplifying the process, too, which would entail delegated auth. There may be a way (though I'm unaware of it) to delegate auth to DigitalOcean such that you can provide a mechanism (similar to the above examples) that's constrained to specific DigitalOcean accounts.

Related

How can I most effectively mock/stub API Gateway, DynamoDB, and Cognito for integration testing an SPA?

I have a React-based SPA that I'm trying to test against a versioned backend with its own database.
In production, part of the backend is exposed to the outside world via AWS services like API Gateway. We also use DynamoDB for storing some API-level user details, and Cognito with User Pools for authentication. Calls are made to API Gateway, which, after authenticating with a key, makes VPC-link calls to the backend (all of our applications are in a private VPC).
This is fine when deployed, but I want to be able to reproduce this setup locally for development and testing purposes (not deployment). From the reading I've done about AWS SAM, it seems like it might be the best tool for the job. But getting started with it has been difficult, as I'm not sure what the relationship is between all the methods/endpoints that I defined and the individual Lambda functions that I have to define for SAM as part of my API.
I have a Swagger template, so that should make things easier. But I'm not sure how to handle things like proxying the calls to my backend, setting up authentication, etc., and the SAM documentation seems lacking in this regard.
Anyone have any tips or experiences?
Many thanks!

EC2 roles vs. EC2 roles with temporary keys for S3 access

So I have a standard Rails app running on EC2 that needs access to S3. I am currently doing it with long-term access keys, but rotating keys is a pain, and I would like to move away from this. It seems I have two alternative options:
One: assigning the EC2 instance a role with the proper permissions to access the S3 bucket. This seems easy to set up, yet not having any access keys seems like a bit of a security threat. If someone is able to access the server, it would be very difficult to stop their access to S3.
Two: "assuming the role" using the Ruby SDK and STS classes to get temporary access keys from the role, and using them in the Rails application. I am pretty confused about how to set this up, but I could probably figure it out. It seems like a very secure method, as even if someone gets access to your server, the temporary access keys make it considerably harder to access your S3 data over the long term.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS access keys. These keys may not be obvious, but they are there. There is not much that you can do to stop someone once they have access inside the EC2 instance, other than terminating the instance. (There are other options, but those are for forensics.)
You are currently storing long-term keys on your instance. This is strongly NOT recommended. The recommended best practice is to use IAM Roles and assign a role with only the required permissions. The AWS SDKs will get the credentials from the instance's metadata.
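With a role attached, application code carries no keys at all. A minimal sketch, assuming the aws-sdk-s3 gem and a placeholder bucket name:

require "aws-sdk-s3"

# No credentials appear anywhere: the SDK's default credential chain
# falls through to the instance profile provided by the IAM role.
s3 = Aws::S3::Client.new(region: "us-east-1")
object = s3.get_object(bucket: "my-app-bucket", key: "reports/latest.csv")
puts object.body.read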
You are giving some thought to using STS. However, you need credentials to call STS to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others - such as when your web server creates credentials via STS to hand to your users for limited use cases like accessing files on S3 or sending an email. The flaw in your thinking about STS is that once the bad guy has access to your server, he will just steal the keys that you call STS with, thereby defeating the point of calling STS.
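For contrast, here is roughly the pattern STS is designed for: the server, itself authenticated via its role, mints short-lived and narrowly scoped credentials for someone else. A sketch using the aws-sdk-core gem; the role ARN is a placeholder:

require "aws-sdk-core"

sts = Aws::STS::Client.new(region: "us-east-1")
# Hand a user fifteen minutes of access under a limited role,
# rather than handing over the server's own credentials.
resp = sts.assume_role(
  role_arn: "arn:aws:iam::123456789012:role/s3-read-only", # placeholder
  role_session_name: "user-42",
  duration_seconds: 900
)
temp = resp.credentials # access_key_id, secret_access_key, session_token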
In summary, follow best practices for securing your server, such as NACLs, security groups, least privilege, and minimal installed software. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.

How to provide a SaaS customer with a server snapshot for business continuity concerns

I'm proposing a SaaS solution to a prospective client to avoid the need for local installation and upgrades. The client uploads their input data as needed and downloads the outputs, so data backup and maintenance are not an issue, but continuity of the online software service is a concern for them.
Code escrow would appear to be overkill here and probably of little value. I was wondering: is there an option along the lines of providing a snapshot image of a cloud server that includes a working version of the app, and for that to be in the client's possession for use in an emergency where they can no longer access the software?
This would need to be as close to a point-and-click solution as possible - say, a one-page document with a few steps that a non-web-savvy IT person can follow - for starting up the backup server image and being able to use the app. If I were to create a private AWS EBS snapshot / AMI that includes a working version of the application, and they created an AWS account for themselves, might they be able to kick that off easily enough?
Update: the app is on Heroku at the moment, so hopefully it'd be pretty straightforward to get it running in Amazon EC2.
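For concreteness, the "kick it off" step I have in mind looks small enough to script with the AWS Ruby SDK; a sketch, with a placeholder AMI ID:

require "aws-sdk-ec2"

ec2 = Aws::EC2::Client.new(region: "us-east-1")
# Launch a single instance from the shared emergency AMI.
resp = ec2.run_instances(
  image_id: "ami-0123456789abcdef0", # placeholder for the shared AMI
  instance_type: "t3.small",
  min_count: 1,
  max_count: 1
)
puts "Started #{resp.instances.first.instance_id}"

Then again, for a non-web-savvy IT person, the EC2 console's launch-from-AMI flow may be closer to point-and-click than any script.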
Host their app at any major PaaS provider, such as EngineYard or Heroku. Check their code into a private GitHub repository that you can make them the owner of. That way they have access to the source code and can create a new instance quickly using the repository as the source.
I don't see the need to create an entire service mirror for a Rails app, unless there are specific configuration needs that can't be contained in the project or handled through Capistrano.

Is it safe to add security features to a mass-distributable website?

I'm building a website that I plan to make readily deployable by users; they'll be able to take my source code, deploy it to their own servers, and run it as their own.
I was thinking of trying to incorporate SSL and OpenID and other features into the website. Would giving the users access to these files (such as my OpenID/Twitter/Facebook client key and secret, or the SSL certificate stuff, or whatever else...) be a potential security hazard or anything? Would they be able to do anything dangerous with this information?
SSL is not the app's concern.
All client keys and secrets are your own responsibility... I wouldn't distribute them openly.
Normally what one does is read this information from the environment:
facebook_client_key = ENV["FACEBOOK_CLIENT_KEY"]
so the deployer has only to configure the environment, not the application.
I would steer clear of adding things like your client keys and secrets to any files you distribute to your users. They're called secrets for a reason! I don't know the ins and outs of Facebook's or Twitter's APIs, but certainly with products such as Akismet, the anti-spam add-on for WordPress, the key is used to identify your particular WordPress instance.
If you are using a WordPress site for commercial purposes, you're supposed to pay for Akismet. The problem is that while you might not be using it yourself for commercial purposes, depending on what you're making and distributing, that's not to say that other people won't use it for commercial purposes, and end up ruining it not just for you, but for everyone else using your software.
You should make the keys and secrets part of your application's configuration and, perhaps, provide instructions on how your users can obtain their own.
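For example, a minimal sketch, assuming a gitignored config/secrets.yml that each user creates from a checked-in secrets.yml.example:

require "yaml"

# config/secrets.yml is listed in .gitignore; only secrets.yml.example ships.
SECRETS = YAML.load_file("config/secrets.yml")
twitter_key    = SECRETS["twitter"]["consumer_key"]
twitter_secret = SECRETS["twitter"]["consumer_secret"]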

Using OAuth in free/open source software

I'm now reading some introductory materials about OAuth, with the idea of using it in a piece of free software.
And I read this:
The consumer secret must never be revealed to anyone. DO NOT include it in any requests, show it in any code samples (including open source) or in any way reveal it.
If I am writing a free client for a specific website using OAuth, then I have to include the consumer secret in the source code; otherwise, building from source would leave the software unusable. However, as stated above, the secret should not be released along with the source.
I completely understand the security considerations, but, how can I solve this dilemma, and use OAuth in free software?
I thought of using an external website as a proxy for authentication, but this is rather complicated. Do you have better ideas?
Edit:
Some clients, like Gwibber, also use OAuth, but I haven't checked their code.
I'm not sure I get the problem: can't you develop the code as open source and retrieve the consumer secret from a configuration file, or maybe a special table in the database? That way the code will not contain the consumer secret (and as such will be "shareable" as open source), but the consumer secret will still be accessible to the application.
Maybe having some more details on the intended platform would help, as on some platforms (I'm thinking of Tomcat right now) parameters such as this one can be included in server configuration files.
If it's PHP, I know of an open source project (Moodle) that keeps a PHP file (config.php) containing definitions of all important configuration values, and references this file from all pages to get the definitions. It is the responsibility of the administrator to fill in the contents of this file with the values particular to that installation. In fact, if the application sees that the file is missing (usually on the first access to the site), it redirects to a wizard where the administrator can fill in the contents in a more user-friendly way. In this case the consumer secret would be one of these configuration values, and as such would be present in the "production" code, but not in the "distributable" form of the code.
I personally like the idea of storing that value in the database, in a table designed for it (and possibly other parameters), as then the code need not be changed. Maybe an installation wizard can be presented here as well, in case the values do not exist.
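In Rails terms that could be as small as the following, assuming a hypothetical settings table with name and value columns:

# app/models/setting.rb
class Setting < ActiveRecord::Base
  # Look up a configuration value by name; returns nil if it was never set.
  def self.[](name)
    find_by(name: name.to_s)&.value
  end
end

consumer_secret = Setting["oauth_consumer_secret"]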
Does this solve your problem?
If your service provider is a webapp, your server needs consumer signup pages that provide the consumer secret as the user signs up their consumer. This is the same process Twitter applications go through. Try signing up there and look at their workflow; you'll see all the steps.
If your software is peer-to-peer, each application needs to be both a service provider and a consumer. The Jira and Confluence use case below outlines that situation.
In one of my comments, I mention https://twitter.com/apps/new as the place where Twitter app developers generate a consumer secret. How you would make such a page depends on the system architecture. If all the consumers will be talking to one server, that one server will have to have a page like https://twitter.com/apps/new. If there are multiple servers (i.e. federations of clients), each federation will need one server with this page.
Another example to consider is how Atlassian apps use OAuth. They are peer-to-peer. Setting up Jira and Confluence to talk to one another still involves a setup page in each app, but it is nowhere near as complex as https://twitter.com/apps/new. Both apps are consumers and service providers at the same time. Visiting the setup page in each app allows that app to be set up as a service provider with a one-way trust of the other app as consumer. To make a two-way trust, the user must visit both apps' service provider setup pages and tell each one the URL of the other app.
