Is it safe to store secrets in serverless.yml? - serverless

I am working with an AWS free tier account, and Amazon charges for custom secrets. I am creating a Lambda function that needs access to secrets. I came across this post on how to manage secrets in serverless. Can someone please help me understand whether approach 1, storing the secrets locally, is safe? Further, is it safe to just put them in the yml file if you are not going to check it in anywhere?

It is a valid approach; however, it is not recommended in the long run for production systems, as it has a few potential issues:
secrets need to be stored on your local machine; if your machine gets compromised, so are your secrets
they are stored in plaintext in the generated CloudFormation template; if someone gets access to that template, they will be able to use them. Please keep in mind that the generated CF template is stored in an S3 bucket in plaintext, which means that in the end you'll be storing your secrets unencrypted in an S3 bucket
Though, if it's just your personal project, that approach should work just fine for you and will be relatively safe.
The recommended way, though, is to fetch and decrypt the secrets at runtime, as described in #4 of the cited article.
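As a minimal sketch of that runtime approach, assuming the secret is stored in SSM Parameter Store as a SecureString and the function's role is allowed to read and decrypt it (the parameter name and handler are hypothetical, and the article's exact mechanism may differ; standard SSM parameters also avoid the per-secret charge mentioned in the question):

```ruby
# Minimal sketch: fetch and decrypt a secret at runtime inside a Ruby Lambda.
# Assumes the secret lives in SSM Parameter Store as a SecureString and that the
# function's IAM role has ssm:GetParameter plus kms:Decrypt on the key in use.
require 'aws-sdk-ssm'

def handler(event:, context:)
  ssm = Aws::SSM::Client.new
  # '/my-app/prod/api-key' is a hypothetical parameter name.
  resp   = ssm.get_parameter(name: '/my-app/prod/api-key', with_decryption: true)
  secret = resp.parameter.value
  # ... use `secret` here; never log it or write it back into the template.
  { statusCode: 200 }
end
```

With this setup the serverless.yml only needs to grant the IAM permissions and, at most, carry the parameter name, never the secret value itself.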

Related

ec2 roles vs ec2 roles with temporary keys for s3 access

So I have a standard Rails app running on ec2 that needs access to s3. I am currently doing it with long-term access keys, but rotating keys is a pain, and I would like to move away from this. It seems I have two alternative options:
One: tagging the ec2 instance with a role that has the proper permissions to access the s3 bucket. This seems easy to set up, yet not having any access keys seems like a bit of a security threat. If someone is able to access the server, it would be very difficult to stop their access to s3. Example
Two: I can 'Assume the role' using the ruby SDK and STS classes to get temporary access keys from the role, and use them in the rails application. I am pretty confused about how to set this up, but could probably figure it out. It seems like a very secure method, however, as even if someone gets access to your server, the temporary access keys make it considerably harder to access your s3 data over the long term. General methodology of this setup.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS Access Keys. These keys may not be obvious, but they are there. There is not much that you can do to stop someone once they have access inside the EC2 instance, other than terminating the instance. (There are other options, but those are for forensics.)
You are currently storing long term keys on your instance. This is strongly NOT recommended. The recommended "best practices" method is to use IAM Roles and assign a role with only required permissions. The AWS SDKs will get the credentials from the instance's metadata.
You are giving some thought to using STS. However, you need credentials to call STS to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others - such as the case where your web server creates credentials via STS to hand to your users for limited use cases such as accessing files on S3, sending an email, etc. The fault in your thinking about STS is that once the bad guy has access to your server, he will just steal the credentials that you call STS with, thereby defeating the purpose of calling STS.
In summary, follow best practices for securing your server such as NACLs, security groups, least privilege, minimum installed software, etc. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.
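As a hedged sketch of the role-based setup: once an IAM role is attached to the instance, the Ruby SDK resolves temporary credentials from the instance metadata on its own, so no keys appear anywhere in the application. The bucket and key below are placeholders:

```ruby
# Minimal sketch: with an IAM role attached to the EC2 instance, the AWS SDK
# pulls short-lived credentials from the instance metadata service automatically,
# so no access keys live in the app's code or configuration.
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1') # no credentials passed explicitly
s3.put_object(
  bucket: 'my-app-uploads',        # hypothetical bucket name
  key:    'reports/2024-01.csv',   # hypothetical object key
  body:   'uploaded using instance-profile credentials'
)
```

Those instance-profile credentials are themselves short-lived and rotated by AWS, which gives you the rotation benefit you were hoping to get from STS without managing any keys yourself.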

Google Cloud Storage - Rails web app - different buckets and different access keys for different environments

I plan to use a cloud based storage service to store some static user-uploaded content of my web application. I have settled upon Google Cloud Storage for now.
My web application is Rails, and I am using Paperclip with fog to connect to Google Cloud Storage.
I understand that I need to use the Interoperable Storage Access Keys in the fog config to connect to my bucket. Any additional key I add is given access to all the buckets.
I want to have a separate bucket per environment (development, staging and production). I want to have separate access and secret keys, with each key having access to only one bucket.
Basically, I don't want to put my production keys in my web-app source code, which all developers will have access to.
I read the Google Cloud Storage documentation on ACLs, but I could not find out how to achieve what I want.
I can't imagine that others wouldn't have had the same kind of requirement. Maybe I am using the wrong search terms, but I cannot get any info about this.
I would appreciate some help.
P.S. - Is what I want possible on AWS S3? I am open to switching to S3 if this is possible on it.
The normal solution for something like this would be to have 3 service accounts (development-app, staging-app, production-app), each of which would have its own set of credentials and permissions. You could either have a test, staging, and production project, or you could just have test, staging, and production buckets within a single project. You can create a whole range of per-project service accounts, each with its own set of credentials and permissions.
Unfortunately, interoperable storage access keys are not available for service accounts, only regular Google user accounts. In order to do what you want, you'd need to have three user accounts, each of which was granted access to exactly one of those buckets.
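Whichever account layout you settle on, each environment's interoperable key pair can be loaded from the environment rather than checked into the source tree, so developers never see the production keys. A hedged sketch of a per-environment Paperclip/fog configuration (the env variable and bucket names are made up):

```ruby
# config/environments/production.rb -- repeat per environment with its own bucket and keys.
# Minimal sketch: keys come from environment variables, so copies of the source
# that developers work with never contain the production credentials.
config.paperclip_defaults = {
  storage: :fog,
  fog_credentials: {
    provider: 'Google',
    google_storage_access_key_id:     ENV['GCS_ACCESS_KEY_ID'],     # hypothetical env var
    google_storage_secret_access_key: ENV['GCS_SECRET_ACCESS_KEY']  # hypothetical env var
  },
  fog_directory: 'my-app-production' # hypothetical bucket, one per environment
}
```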

API keys and secrets used in iOS app - where to store them?

I'm developing for iOS and I need to make requests to certain APIs using an API key and a secret. However, I wouldn't like for it to be exposed in my source code and have the secret compromised when I push to my repository.
What is the best practice for this case? Write it in a separate file which I'll include in .gitignore?
Thanks
Write it in a separate file which I'll include in .gitignore?
No, don't write it ever.
That means:
you don't write that secret within your repo (no need to gitignore it, or to worry about adding/committing/pushing it by mistake)
you don't write it anywhere on your local drive (no need to worry about your computer being stolen with that "secret" on it)
Store in your repo a script able to fetch that secret from an external source (outside of the git repo) and load it into memory.
This is similar to a git credential-helper process: that script would launch a process listening on localhost:port in order to serve that "secret" to you whenever you need it, in the current session only.
Once the session is done, there is no trace left.
And that is the best practice for managing secret data.
You can automatically trigger that script on git checkout if you declare it in a .gitattributes file as a content filter.
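A hedged sketch of what that could look like, with the secret supplied from an environment variable outside the repo (the file names, filter name, placeholder, and variable are all made up):

```ruby
#!/usr/bin/env ruby
# smudge_secret.rb -- minimal sketch of the smudge side of a git content filter.
# One-time setup (outside of the repo's tracked content):
#   echo 'Secrets.plist filter=inject-secret' >> .gitattributes
#   git config filter.inject-secret.smudge 'ruby smudge_secret.rb'
#   git config filter.inject-secret.clean  'ruby clean_secret.rb'  # reverse: restores the placeholder
# On checkout, git pipes the committed file (which only contains the placeholder)
# through this script, which swaps in the real value from your environment.
placeholder = '__API_SECRET__'                            # hypothetical placeholder
secret      = ENV.fetch('MYAPP_API_SECRET', placeholder)  # hypothetical env var
STDOUT.write(STDIN.read.gsub(placeholder, secret))
```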
This is a very old question, but if anyone finds this on Google, I would suggest you try CloudKit for storing any app secrets (API keys, OAuth secrets). Only your app can access your app container, and communication between Apple and your app is secure.
You can check it out here.

Azure - uploading files to blob storage via shared hosting

I'm struggling to find an answer to this. I have a website that is deployed in a shared hosting environment. I want to allow people to upload files to my azure blob storage account.
I have this working locally, using the storage emulator, however when I publish the site I get a Security Exception.
Is this actually possible in a shared hosting environment?
Cheers
A bit more detail would help in understanding how these uploads are taking place. That said, I'll make the assumption that people are uploading directly to Blob Storage, and not through your Website (or Web Service).
To allow direct uploads, you need to either provide a public blob or container (which everyone in the world can see), or create a temporary Shared Access Signature (SAS) on a specific blob or container, which grants access for a short time window.
If your app is Silverlight, then you are probably running into a cross-domain issue (and you'll need to correct that with an access policy).
If you provide more details around the way uploads are being sent, as well as the client and server technology, I can edit my answer to be more specific.

Securing S3 via your own application

Imagine the following use case:
You have a basecamp-style application hosting files with S3. Each account has its own files, but they are all stored on S3.
How, therefore, would a developer go about securing files so users of account 1 couldn't somehow get to the files of account 2?
We're talking Rails if that's a help.
S3 supports signed, time-expiring URLs, which means you can furnish a user with a URL that effectively lets only people with that link view the file, and only within a certain time period from issue.
http://www.miracletutorials.com/s3-amazon-expiring-urls/
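A minimal sketch of generating such a URL with the Ruby SDK, assuming the modern aws-sdk-s3 gem (the bucket, key layout, and ownership check are placeholders; use whatever account scoping your app already has):

```ruby
# Minimal sketch: hand out a short-lived signed URL only after checking that
# the requested file belongs to the current user's account.
require 'aws-sdk-s3'

def download_url_for(current_user, file_key)
  # Hypothetical ownership check -- however your app scopes files to accounts.
  raise 'forbidden' unless file_key.start_with?("account-#{current_user.account_id}/")

  signer = Aws::S3::Presigner.new
  signer.presigned_url(:get_object,
                       bucket:     'my-app-files', # hypothetical bucket
                       key:        file_key,
                       expires_in: 300)            # link valid for 5 minutes
end
```

Because the link expires, even a leaked URL only exposes a single object for a few minutes.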
If you want to restrict control of those remote resources you could proxy the files through your app. For something like S3 this may defeat the purpose of what you are trying to do, but it would still allow you to keep the data with amazon and restrict access.
You should be careful with an approach like this as it could cause your ruby thread to block while it is proxying the file, which could become a real problem with the application.
Serve the files using an EC2 Instance
If you set your S3 bucket to private, then start up an EC2 instance, you could serve your files on S3 via EC2, using the EC2 instance to verify permissions based on your application's rules. Because there is no charge for EC2 to transfer to/from S3 (within the same region), you don't have to double up your bandwidth consumption costs at Amazon.
I haven't tackled this exact issue. But that doesn't stop me from having an opinion :)
Check out cancan:
http://github.com/ryanb/cancan
http://railscasts.com/episodes/192-authorization-with-cancan
It allows custom authorization schemes, without too much hassle.
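For this particular case, a hedged sketch of what the CanCan side could look like, assuming a hypothetical Attachment model scoped by account_id:

```ruby
# app/models/ability.rb -- minimal sketch of account-scoped authorization with CanCan.
class Ability
  include CanCan::Ability

  def initialize(user)
    # Users may only read attachments that belong to their own account.
    can :read, Attachment, account_id: user.account_id # Attachment is a hypothetical model
  end
end

# In a controller, before generating the signed S3 URL for the file:
#   authorize! :read, @attachment
```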
