How to handle configuration values in production applications - environment-variables

I am new to this and have recently been learning how to build and deploy a MEAN stack application, and now wish to deploy to AWS (using EC2). Currently my Node.js API uses environment variables (process.env) for values such as:
MongoDB URL (for process running on port 27017)
JWT authentication secret
Email and passwords for emailing service
Port to run node
What is the best way to handle these dynamic values when deploying the app to production? I have read that environment variables, whilst more secure than plaintext values in config files, are still insecure in some respects. I am aware of services such as AWS Parameter Store for secure storage of these values, but wanted to know if there is general best-practice advice for storing such configuration values when deploying an app to production, for any given deployment option.
Thanks

AWS Parameter Store is indeed advantageous compared to storing credentials in config files or environment variables. For more detail on the potential issues with those two approaches, see the answers to this question: https://stackoverflow.com/a/28329996/2579733
AWS Parameter Store requires little configuration since it's a tool within the AWS ecosystem.
Secrets stored in Parameter Store are encrypted in transit and at rest.
Basically you'd need an IAM role with ssm:GetParameter and kms:Decrypt permissions, which you can assign to your EC2 instance.
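For illustration, a minimal IAM policy granting those two permissions might look like the sketch below. The region, account ID, parameter path, and KMS key ID are all placeholders you would replace with your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter"],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/prod/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt"],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"
    }
  ]
}
```

Scoping the parameter resource to a path prefix like /myapp/prod/ keeps the instance from reading parameters belonging to other apps or environments.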
Then a basic Node.js implementation can look something like this:
const aws = require('aws-sdk')

async function getSecureValue(path) {
  const ssm = new aws.SSM()
  const ssmParams = {
    Name: path,
    WithDecryption: true,
  }
  const storeResponse = await ssm.getParameter(ssmParams).promise()
  return storeResponse.Parameter.Value
}

// must be called from within an async function
const password = await getSecureValue(PASSWORD_SSM_PATH)

Related

CSRF with a dockerized Flask application on AWS Fargate

I made a docker container with a pretty simple web app: https://github.com/liquidcarbon/dockerflask2fa
The whole thing behaves well locally and when you're accessing via the ELB endpoint:
http://dockerflask2faloadbalancer-f10e5f558aaa921f.elb.us-east-1.amazonaws.com:5000
But when I use my CloudFront distribution that lives on my domain, logging in does not work: it returns a "CSRF tokens do not match" message when registering a new user and/or logging in as an existing user.
https://flask.albond.xyz
The Cloudfront Cache Policy was set to CachingDisabled.
I'm new to web security, and I'll appreciate your help.
Looks like the caching and cookie behavior needs to be tweaked in CloudFront: https://github.com/liquidcarbon/dockerflask2fa

Is there any way to get the arn for a secret manager secret from a secret (or from cdk)?

I am looking to get the arn of a secret so I can allow infrastructure I am deploying to have the permissions to get that secret e.g.:
const secret = secretsmanager.Secret.fromSecretArn(stack, secretName, secretArn);
secret.grantRead(CdkBuild);
In this case, to allow a CodeBuild build to read a secret without me having to pass it in a way that appears in the logs (e.g. echoed to a file, etc.)
I can get the ARN by using the AWS SDK listSecrets call and iterating until the secret name matches, but that is an async call and the value is returned after the CDK has passed the point where it needs it.
I can get the secret value itself from within cdk e.g. for a github token:
ghtokenSecret = cdk.SecretValue.secretsManager(secretName, {
  jsonField: ghtoken,
});
But I can see no method/way to get the secret ARN such that I can pass the value to my CodeBuild infrastructure.
Given multiple accounts for multiple environments (dev, sandbox, integration, etc.), using just the secret name lets me use the CDK script effectively unchanged. If I use the ARN instead, I have to create the secret in each account (using the same secret name), find its ARN, and then insert that ARN into some config (e.g. cdk.json) for each branch, which is a potential source of failure/error.
Any ideas gladly welcomed
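One relevant detail: Secrets Manager appends a random six-character suffix to every secret's ARN, which is why the exact ARN can't be derived from the name alone. For IAM purposes, though, a wildcard pattern built from the name is enough. A minimal sketch (the region, account ID, and secret name below are placeholder values):

```javascript
// Build an IAM-compatible ARN pattern for a Secrets Manager secret from
// its name. The "??????" wildcard matches the random 6-character suffix
// that Secrets Manager appends to the real ARN.
function secretArnPattern(region, accountId, secretName) {
  return `arn:aws:secretsmanager:${region}:${accountId}:secret:${secretName}-??????`;
}

console.log(secretArnPattern('us-east-1', '123456789012', 'github-token'));
// arn:aws:secretsmanager:us-east-1:123456789012:secret:github-token-??????
```

Depending on your CDK version, Secret.fromSecretNameV2 may also be available; it builds a partial ARN from just the name so grantRead can attach the wildcard for you, which keeps the script identical across accounts.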

Heroku - Keep Keys Hidden Upon Inspect Element

I have deployed an application to Heroku and have used the heroku config:set command to set environmental variables such as keys for certain things (Google Maps API, for example). When I view the page and inspect element, the key shows up in the url in the console. Should this be the case? I was under the impression that keys should be kept hidden to keep others from knowing what they are for security reasons. Please advise. Thank you.
You can't. Anything which is sent to the client is not secret. That includes any values used in JavaScript.
But don't worry: most APIs like Google Maps use a public key, and applications where you use OAuth only allow a whitelist of callback domains.
In fact, in the Google Maps JavaScript API your API key is used in constructing the URLs used to request resources, so attempting to hide it would be a true fool's errand.
Some APIs do, however, provide client secrets for calling the API from the server side. These should be kept secret and placed in an ENV var on the server.
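To make that concrete, here is a minimal sketch of reading a server-side secret from an environment variable. The variable name MAPS_SERVER_KEY is hypothetical; in production on Heroku you would set it with heroku config:set rather than inline as done here to keep the sketch self-contained:

```javascript
// Read a required secret from the environment, failing fast at startup
// so a misconfigured deploy is caught immediately rather than mid-request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In a real deploy this is set via `heroku config:set MAPS_SERVER_KEY=...`;
// it is assigned inline here only so the example runs standalone.
process.env.MAPS_SERVER_KEY = 'example-secret';
const key = requireEnv('MAPS_SERVER_KEY');
```

The key never leaves the server: any request that needs it should be proxied through your backend rather than made from client-side JavaScript.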

Really Basic S3 Upload credentials

I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.
From a tutorial from awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:
s3 = Aws::S3::Resource.new
s3.bucket('bucket-name').object('key').upload_file('/source/file/path')
In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)
The thing I'm having a hard time understanding is the meaning behind the .object('key'). What is this? I've generated a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?
I figure this MUST be out there somewhere but I haven't been able to get it (maybe I'm misreading the documentation). Thanks to anyone who gives me some direction here.
Because S3 doesn't have traditional directories, what you would consider the entire 'file path' on your client machine, i.e. \some\directory\test.xls, becomes the 'key'. The object is the data in the file.
Bucket names are unique across all of S3, and keys must be unique within your bucket.
As for the credentials, there are multiple ways of providing them. One is to supply the access key ID and secret access key right in your code; another is to store them in a config file somewhere on your machine (the location varies by OS). When you are running your code in production, i.e. on an EC2 instance, the best practice is to start the instance with an IAM role assigned; anything that runs on that machine then automatically has all of the permissions of that role. This is the best/safest option for code that runs in EC2.
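As a small illustration of the path-to-key idea, here is a hypothetical helper (not part of any AWS SDK) that turns a local Windows-style path into an S3 key:

```javascript
// Derive an S3 object key from a local file path. An S3 key is just a
// string; the "/" separators only make it look like a directory tree.
function keyFromPath(localPath) {
  return localPath
    .replace(/\\/g, '/')   // Windows separators -> "/"
    .replace(/^\/+/, '');  // S3 keys should not start with "/"
}

console.log(keyFromPath('\\some\\directory\\test.xls')); // some/directory/test.xls
```

With a key like that, the Ruby call from the question would be s3.bucket('bucket-name').object('some/directory/test.xls').upload_file('/source/file/path').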

Are Heroku's environmental variables a secure way to store sensitive data?

I use Heroku to deploy a Rails app. I store sensitive data such as API keys and passwords in Heroku's environment variables, and then use the data in rake tasks that utilize various APIs.
I am just wondering how secure Heroku's environment variables are. Is there a way to hash these variables while retaining the ability to use them in background jobs somehow?
I came across a previous thread here: Is it secure to store passwords as environment variables (rather than as plain text) in config files?.
But it doesn't quite cover instances where I still need the unhashed password to perform important background tasks.
Several things (mostly my opinion):
--
1. API Key != Password
When you talk about API keys, you're talking about a public token which is generally already quite secure. The nature of APIs nowadays is that they need some sort of prior authentication (either at app or user level) to create a more robust level of security.
I would first check what type of data you're storing in the ENV variables. If it's pure passwords (for email etc.), perhaps consider migrating your setup to one of the cloud providers (SendGrid / Mandrill etc.), allowing you to use only API keys.
The beauty of API keys is that they can be changed whilst not affecting the base account, as well as limiting interactivity to the constraints of the API. Passwords affect the base account.
--
2. ENV Vars are OS-level
They are part of the operating environment in which a process runs.
For example, a running process can query the value of the TEMP
environment variable to discover a suitable location to store
temporary files, or the HOME or USERPROFILE variable to find the
directory structure owned by the user running the process.
You must remember that environment variables mean you store the data in the environment you're operating in. That generally means the "OS", but it can be a virtual instance of an OS too, if required.
The bottom line is your ENV vars are present in the core of your server, the same way text files would be sitting in a directory on the hard drive: environment variables reside in the OS itself.
Unless the server itself were hacked, it would be very difficult to get at the ENV variable data programmatically, at least in my experience.
What are you looking for? Security against who or what?
Every piece of information stored in a config file or in the ENV is readable to everyone who has access to the server. Even more important, every gem can read the information and send it somewhere.
You cannot encrypt the information, because then you would need to store the decryption key somewhere, which is the same problem.
IMO both – environment variables and config files – are secure as long as you can trust everyone who has access to your servers and you have carefully reviewed the source code of all libraries and gems bundled with your app.
