How to manage secrets in a multi-tenant app for multiple environments (Local, Dev, Prod)

I have a multi-tenant application that stores each client's data in a distinct database. To access the data, I retrieve the credentials from AWS Secrets Manager, where each secret is stored using the client's tenant_id as its name. In code I can then retrieve it simply by passing the tenant ID.
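Roughly, that retrieval step looks like this (a minimal sketch in Python with boto3; the JSON field layout inside the secret is an assumption, only the tenant_id-as-name convention comes from the setup described above):

```python
import json

import boto3

def get_tenant_db_credentials(tenant_id: str) -> dict:
    """Fetch the per-tenant DB credentials stored under the tenant_id."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=tenant_id)
    # Assumes the secret is a JSON blob with host/user/password fields.
    return json.loads(response["SecretString"])
```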
Now I'm looking for a clean way to implement multiple environments, but I can't find an approach that suits my use case. The restrictions I have are:
The tenant_id is actually the Azure Tenant ID of the client and is also used to connect to the GraphQL API. As such, simply using an ID like "Test-Tenant" would not be possible, as I would then not be able to run all the code.
Just relying on the same database in staging and testing (which is probably a bad idea anyway) would also not be possible, as the staging database is a document DB that cannot be reached from my local machine (except via SSH tunnelling, but then my endpoint URLs would not match).
What would be a clean way to implement multiple environments in this multi-tenancy setup?

Related

Give access to RDS database

I have several databases running in the RDS service.
I'd like to know the best practice for granting developers access to these databases.
I thought of a solution using Jenkins, but I don't think that's the best option.
I'm trying to avoid giving passwords to developers.
Hope you can help me.
As @ceejayoz mentioned, you can create a few users with restricted privileges: for example, a user who can only run SELECTs on a few schemas, and another user who can update records in a few tables.
I can share what we do and what I've seen. We do A and use B where it is easy.
A) Standard Users
For all databases, we have three standard users with the following suffixes (_dba, _rw, _ro); a provisioning sketch follows the list. Each user has its own password from a strong password generator.
_dba is used to deploy the schema and has all rights
_rw is used by the application (CRUD on all tables, but can't modify the schema)
_ro only has read access on all tables and is generally given to developers
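Creating those three tiers programmatically might look like the following (a hedged Python sketch assuming PostgreSQL and psycopg2; the naming, password handling, and simplified grants are illustrative, not our exact tooling):

```python
import secrets

import psycopg2
from psycopg2 import sql

def provision_standard_users(dsn: str, app: str) -> dict:
    """Create the _dba, _rw and _ro users with the standard privilege tiers."""
    passwords = {s: secrets.token_urlsafe(24) for s in ("_dba", "_rw", "_ro")}
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for suffix, pw in passwords.items():
            cur.execute(sql.SQL("CREATE USER {} WITH PASSWORD {}").format(
                sql.Identifier(app + suffix), sql.Literal(pw)))
        # _dba deploys the schema, so it gets full rights (simplified here).
        cur.execute(sql.SQL("GRANT ALL ON SCHEMA public TO {}").format(
            sql.Identifier(app + "_dba")))
        # _rw: CRUD on all tables, but no DDL.
        cur.execute(sql.SQL(
            "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO {}"
        ).format(sql.Identifier(app + "_rw")))
        # _ro: read-only, the tier handed to developers.
        cur.execute(sql.SQL("GRANT SELECT ON ALL TABLES IN SCHEMA public TO {}").format(
            sql.Identifier(app + "_ro")))
    return passwords
```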
Note: developers have access to a bastion used for port forwarding and ProxyCap. They can query the RDS endpoints from their own machines (DB tools) by going through the SOCKS proxy and the bastion.
This is the lazy method: user creation is done programmatically, and we feel comfortable giving some developers read-only access. They could write a bad query and slow the system down, but they could do that with a dedicated user as well, so it's not much different, and the bastion logs tell me who was really in if I ever have to investigate.
B) UI
A simple web app with a login (ideally MFA) that provides a way to run queries. If it's only for reporting, ideally run it against a read-only copy of the system. Stack Overflow offers one themselves (https://data.stackexchange.com/).
What would be nice is if RDS offered this itself, linked to your IAM roles. They offer it on Aurora Serverless (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/query-editor.html), and it may become a feature in other RDS versions. That allows fine-grained control, or even lazy control via IAM groups.

EC2 roles vs EC2 roles with temporary keys for S3 access

So I have a standard Rails app running on EC2 that needs access to S3. I'm currently doing this with long-term access keys, but rotating keys is a pain and I'd like to move away from them. It seems I have two alternative options:
One: attach a role with the proper permissions for the S3 bucket to the EC2 instance. This seems easy to set up, yet not having any access keys feels like a bit of a security threat: if someone were able to access the server, it would be very difficult to stop their access to S3.
Two: 'assume the role' using the Ruby SDK and the STS classes to get temporary access keys from the role, and use those in the Rails application. I'm pretty confused about how to set this up, but could probably figure it out. It seems like a very secure method, though, since even if someone gets access to your server, the temporary access keys make it considerably harder to access your S3 data over the long term.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS access keys. The keys may not be obvious, but they are there. There is not much you can do to stop someone once they have access inside the EC2 instance, other than terminating the instance. (There are other options, but those are for forensics.)
You are currently storing long-term keys on your instance. This is strongly NOT recommended. The recommended best-practice method is to use IAM roles, assigning a role with only the required permissions. The AWS SDKs will get the credentials from the instance's metadata.
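With a role attached, the SDK call needs no explicit keys at all. For example (a Python/boto3 sketch for illustration; the same applies to the Ruby SDK, and the bucket name is hypothetical):

```python
import boto3

# No access keys anywhere: boto3 walks its credential chain and picks up
# the role's temporary credentials from the instance metadata service.
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="my-app-bucket").get("Contents", []):
    print(obj["Key"])
```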
You are giving some thought to using STS. However, you need credentials to call STS in order to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others - such as your web server creating credentials via STS to hand to your users for limited uses like accessing files on S3 or sending an email. The fault in your thinking about STS is that once the bad guy has access to your server, he will just steal the keys that you call STS with, thereby defeating the point of calling STS.
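For completeness, handing out scoped temporary credentials via STS looks roughly like this (a Python sketch; the role ARN and session name are hypothetical placeholders):

```python
import boto3

sts = boto3.client("sts")  # itself authenticated, e.g. via the instance role
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/limited-s3-reader",  # hypothetical
    RoleSessionName="user-download-session",
    DurationSeconds=900,  # short-lived on purpose
)
creds = response["Credentials"]
# These expire automatically; hand them to the downstream consumer.
temp_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```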
In summary, follow best practices for securing your server such as NACLs, security groups, least privilege, minimum installed software, etc. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.

Azure Key Vault - multiple environments: do I need an Azure Key Vault for each environment?

I'm doing some initial research and I'm unable to find a clear answer to my problem. The plan is to have multiple environments (i.e. Dev, Prod, and QA). Would I need a new instance of Azure Key Vault for each environment, or could I share the data between them?
I would advise using separate Key Vault instances for the different environments. You avoid "mixing" secrets across environments by mistake, and you get clear separation.
Microsoft officially recommends this approach too:
Our recommendation is to use a vault per application per environment (Development, Pre-Production and Production).
You can read more in the official documentation.
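In practice, the application then just points at a different vault per environment, for example (a minimal Python sketch using azure-identity and azure-keyvault-secrets; the vault names and the APP_ENVIRONMENT variable are assumptions):

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# One vault per application per environment, selected at startup.
VAULTS = {
    "dev": "https://myapp-dev-kv.vault.azure.net",    # hypothetical names
    "qa": "https://myapp-qa-kv.vault.azure.net",
    "prod": "https://myapp-prod-kv.vault.azure.net",
}

environment = os.environ.get("APP_ENVIRONMENT", "dev")
client = SecretClient(vault_url=VAULTS[environment],
                      credential=DefaultAzureCredential())
db_password = client.get_secret("db-password").value  # hypothetical secret
```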
Multiple resources/entities can access a single Key Vault instance - provided they're all in the same location (data centre).
You may choose to segment your keys, secrets and certificates, either by placing them in different Key Vaults or by using different access methods/identities, however that's not necessary.
The only time you need a separate Key Vault instance is when the resources/entities accessing it are in another location (data centre/region).
It's worth noting that you don't need to worry too much about provisioning disaster recovery for resources using Key Vault, as the SLA Microsoft provides is unsurprisingly good: https://learn.microsoft.com/en-gb/azure/key-vault/key-vault-disaster-recovery-guidance. One caveat: if you're running IaaS/PaaS instances and want to run a DR fail-over yourself to another data centre, you'd need to manually migrate the keys/secrets/certificates in your main Key Vault into another instance (and re-point your VMs accordingly).
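That manual migration can be as simple as reading every secret out of one vault and writing it into the other (a hedged Python sketch; the vault URLs are placeholders, and note that Key Vault's native backup/restore blobs can only be restored within the same subscription and geography):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
source = SecretClient("https://myapp-primary-kv.vault.azure.net", credential)
target = SecretClient("https://myapp-dr-kv.vault.azure.net", credential)

# Copy every secret across by reading and re-writing it. Keys and
# certificates would need their own, separate migration.
for prop in source.list_properties_of_secrets():
    secret = source.get_secret(prop.name)
    target.set_secret(secret.name, secret.value)
```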

Google Cloud Storage - Rails web app - different buckets and different access keys for different environments

I plan to use a cloud based storage service to store some static user-uploaded content of my web application. I have settled upon Google Cloud Storage for now.
My web application is Rails, and I am using Paperclip with fog to connect to Google Cloud Storage.
I understand that I need to use the Interoperable Storage Access Keys in the fog config to connect to my bucket. Any additional key I add is given access to all the buckets.
I want to have a separate bucket per environment (development, staging and production). I want to have separate access and secret keys, with each key having access to only one bucket.
Basically, I don't want to put my production keys in my web-app source code, which all developers will have access to.
I read the Google Cloud Storage documentation on ACLs, but I could not find out how to achieve what I want.
I can't imagine that others wouldn't have had the same kind of requirement. Maybe I am using the wrong search terms, but I cannot get any info about this.
I would appreciate some help.
P.S. - Is what I want possible on AWS S3? I am open to switching to S3 if this is possible on it.
The normal solution for something like this would be to have three service accounts (development-app, staging-app, production-app), each with its own set of credentials and permissions. You could either have separate test, staging, and production projects, or just have test, staging, and production buckets within a single project; either way, each service account gets credentials and permissions of its own.
Unfortunately, interoperable storage access keys are not available for service accounts, only regular Google user accounts. In order to do what you want, you'd need to have three user accounts, each of which was granted access to exactly one of those buckets.
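If you can switch from fog's interoperable keys to Google's native client libraries, the service-account route does exactly what you're after: each environment ships only its own key file, which is granted access to a single bucket. A sketch in Python (the key path and bucket name are hypothetical):

```python
from google.cloud import storage

# This service account is granted access to exactly one bucket via IAM,
# so the staging key file is useless against the production bucket.
client = storage.Client.from_service_account_json("config/keys/staging-app.json")
bucket = client.bucket("myapp-staging-uploads")
blob = bucket.blob("avatars/user-42.png")
blob.upload_from_filename("/tmp/user-42.png")
```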

Multiple Web Roles in an Azure Compute Instance [Deployment]

There is an option for us to have two or more web roles in a single deployment. But each deployment can only be staging or production, i.e., by extension, we get only one URL to access that deployment.
Given this, how do we access the different web roles, and what will their URLs be? Also, what is the use of having multiple web roles in a single deployment?
Why multiple web roles in a single deployment? Consider an application with a public-facing (customer-oriented) website, as well as an administrative website (maybe on port 8000). There are two basic ways to handle this:
Place both sites in the same web role. This means they now share the VM instances, network cards, memory, etc. It also means that, should you need to scale to handle traffic, both sites are scaled together as a single unit.
Place each site in its own role. Now, they're in their own VM instances and may be scaled separately.
Option #1 is more cost-effective, because you can get by with only two role instances (a minimum of two is needed for the SLA). Option #2 is better for independent scaling. For instance, if you get a huge spike in customer traffic, that could cause trouble when you try to access the administrative website, whereas if your admin website is in its own role, it won't be affected by customer traffic.
In both cases, you get one IP address and one *.cloudapp.net name (and you can map a custom domain name to it with a CNAME).
Staging vs. Production: Your entire deployment may be published to either Staging or Production (or both, as two separate publishes). Staging is not meant for external users - it's really meant for a pre-live area, where you can verify that a new deployment works as expected. You can then perform a virtual IP swap with your currently-running system in Production, which effectively swaps your staging and production deployments. This results in a near-instant upgrade of your software with no customer downtime.
Keep in mind: Every role in a deployment must stay together - you can't deploy one role to one service and the other role to another service. If you want to do this: Separate your roles into separate deployments. Then you can publish them to different URLs.
In a production deployment, your web role can be accessed via the URL with the prefix you defined previously, for example myapp.cloudapp.net; web roles in a staging deployment, on the other hand, are accessed via an automatically generated URL, for example 205521014d8c440a83852b62e0df9db5.cloudapp.net.
I'm afraid there is no way to access a web role instance directly, bypassing the AppFabric router. Why would you ever need to do that anyway?
If you need to get access from one web role instance to another, consider using a queue or a distributed cache instead of direct communication.
