ClientID and ClientCredential management for a multi-instance deployment - spring-security

How do you manage the clientId and clientCredentials of an OIDC (or OAuth2) application deployment consisting of multiple instances fronted by, say, an L7 load balancer?
Do you maintain a separate clientId/clientCredential for each application instance?
If yes, how do you manage the clientId/clientCredentials for dynamically provisioned instances (e.g. Kubernetes/AWS adding a new application instance in response to a health-check failure or a scale-out event)?
If you share the clientId/clientCredentials across multiple application instances, isn't that violating the basic rule (i.e. the 'secret' is no longer a secret)?
Also, the compromise of any single instance by an attacker then affects the entire deployment.

If you share clientId/clientCredentials across multiple application instances, isn't that violating the basic rule (i.e. the 'secret' is no longer a secret)?
Instead of using a plain Kubernetes Secret, you can leverage HashiCorp Vault, which will store the credentials and inject them as environment variables into the deployment. You can also implement encryption at rest and other security options such as RBAC on Vault UI access.
Yes: a Kubernetes Secret is only base64-encoded, not encrypted. If a large team manages the cluster and RBAC is not set up, everyone with access to the cluster will be able to decode the secret.
Read more about HashiCorp Vault: https://www.vaultproject.io/
With Kubernetes: https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide?in=vault%2Fkubernetes
Once Vault is set up and you have created the secret in it, you can refer to my answer for more details on injecting the secret into the deployment: https://stackoverflow.com/a/73046067/5525824
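Once the values are injected as environment variables, the application can build its OAuth2/OIDC client registration from them at startup. A minimal Spring Security sketch, assuming Vault injects variables named OIDC_CLIENT_ID and OIDC_CLIENT_SECRET (the variable names, registration id and issuer URL below are placeholders, not part of the original answer):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.oauth2.client.registration.ClientRegistration;
    import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository;
    import org.springframework.security.oauth2.client.registration.ClientRegistrations;
    import org.springframework.security.oauth2.client.registration.InMemoryClientRegistrationRepository;

    @Configuration
    public class OidcClientConfig {

        @Bean
        public ClientRegistrationRepository clientRegistrationRepository() {
            // Injected by Vault at deploy time; the secret never lives in the image or the git repo.
            String clientId = System.getenv("OIDC_CLIENT_ID");
            String clientSecret = System.getenv("OIDC_CLIENT_SECRET");

            // Discover the provider endpoints from the issuer (placeholder URL) and attach the credentials.
            ClientRegistration registration = ClientRegistrations
                    .fromIssuerLocation("https://idp.example.com/realms/demo")
                    .registrationId("idp")
                    .clientId(clientId)
                    .clientSecret(clientSecret)
                    .build();

            return new InMemoryClientRegistrationRepository(registration);
        }
    }

Typically the client registration represents the application as a whole rather than an individual instance, so every replica (including ones added on scale-out) starts with the same injected credentials; the protection comes from restricting who and what can read them (Vault policies, Kubernetes RBAC) and from rotating them.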

Related

How to manage secrets in multi tenant app for multiple environments (Local, Dev, Prod)

I have a multi-tenant application that stores the data of each client in a distinct database. To access the data, I retrieve the credentials from Secrets Manager in AWS, where the secret is stored using the tenant_id as its name. In code, I can then just retrieve it by passing the tenant_id.
Now I'm looking for a clean way to implement multiple environments, but I'm not able to find one that suits my use case. The restrictions I have are:
The tenant_id is actually the Azure Tenant ID of the client and is also used to connect to the GraphQL API. As such, just using an ID like "Test-Tenant" would not be possible, as I would not be able to run all the code.
Just relying on the same database in staging and testing (which is probably a bad idea anyway) would also not be possible, as the database in staging is a document DB and connecting to it is not possible from the local machine (unless via SSH tunneling, but then my endpoint URLs would not match).
What would be a clean way to implement multiple environments in this multi-tenancy setup?
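For reference, the per-tenant lookup described above would look roughly like this with the AWS SDK for Java v2; the environment prefix in the secret name (e.g. "dev/" vs "prod/") is only one possible convention for keeping environments apart and is not part of the original setup:

    import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
    import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
    import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

    public class TenantSecrets {

        private final SecretsManagerClient client = SecretsManagerClient.create();

        // environment would be e.g. "dev" or "prod" (hypothetical prefix);
        // tenantId is the Azure Tenant ID used as the secret name today.
        public String credentialsFor(String environment, String tenantId) {
            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId(environment + "/" + tenantId)
                    .build();
            GetSecretValueResponse response = client.getSecretValue(request);
            return response.secretString();
        }
    }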

Storing items in Azure Key Vaults vs a Configuration File

Looking for guidance on what items to store in a simple configuration file versus an Azure Key Vault?
For example, an SQL Database name should be stored in a configuration file while its password should be stored in a KeyVault. Is this correct?
Should there be a key vault for each environment (I think not) or simply one for production and one for non-production?
Yes, you can store just the SQL Database password in Azure Key Vault and the database name in a configuration file, or you can store the whole connection string of the database in Azure Key Vault.
For your second question, about whether there should be a key vault for each environment: I think it's unnecessary to create a separate key vault per environment; you can just separate the secrets with different names in one key vault.
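As a rough illustration of that split, using the Azure Key Vault Secrets client library for Java (the vault URL, secret name and environment variable are placeholders): the non-secret database name comes from ordinary configuration, while the password is fetched from Key Vault at runtime.

    import com.azure.identity.DefaultAzureCredentialBuilder;
    import com.azure.security.keyvault.secrets.SecretClient;
    import com.azure.security.keyvault.secrets.SecretClientBuilder;

    public class DbConfig {
        public static void main(String[] args) {
            // Authenticate with the ambient AAD identity (developer login, managed identity, etc.).
            SecretClient keyVault = new SecretClientBuilder()
                    .vaultUrl("https://my-vault.vault.azure.net/")
                    .credential(new DefaultAzureCredentialBuilder().build())
                    .buildClient();

            String dbName = System.getenv("DB_NAME");                             // non-secret: config/env
            String dbPassword = keyVault.getSecret("sql-db-password").getValue(); // secret: Key Vault

            System.out.println("Connecting to database " + dbName);
        }
    }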
Anything that should be protected (passwords, certs, API keys, etc.) should be in a Key Vault and have strict access policies assigned to it.
Agree with Hury on the first half; however, disagree on the one key vault for all environments. Different access policies will be applied for different environments.
Your developers may want access to the Key Vault for the dev credentials. As such they'd have an access policy to the dev Key Vault. I would not want to grant them access to a production Key Vault, which would give them access to all the keys in it.
Key Vault is a globally available resource, so if you have multiple instances in different regions connecting to it, that is fine; you wouldn't need a separate Key Vault in a different region from a disaster-recovery and availability standpoint.
Here is a similar question, and also a link to Microsoft best practices, which supports this.

disadvantages of storing secrets in Blob Storage

My current customer has secrets stored in Blob Storage, and we want to propose that they migrate to Key Vault. May I know what the benefits of storing secrets in Key Vault are compared to Blob Storage?
When I read the documentation, Key Vault uses an HSM to protect keys and secrets, but Blob Storage also encrypts data at rest, which is also secure. So what are the other advantages?
I'd say that in general they look very similar; however, the most important difference between the two is the authorization model.
Access to a storage account is done with one of the two available connection strings/keys. Access to a Key Vault can be assigned directly to users or groups (from AAD), and access to resources within the Key Vault can be configured with more granularity. In addition, it is very easy to limit which types of resources within Azure may or may not retrieve data from a Key Vault, reducing the attack surface.
Storage accounts do have AAD integration, currently in preview, but from what I gather it is mostly focused on the Azure file share functionality (https://learn.microsoft.com/en-us/azure/storage/files/storage-files-active-directory-overview).
Another nice differentiator is definitely the integrations that are already available when using Key Vault (e.g. retrieving Azure DevOps secrets directly from a Key Vault, or automatically retrieving certificates for VMs).
FYI, I'm by no means a Key Vault expert; that's just my 2 cents :)
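To make the authorization-model difference concrete, here is a rough side-by-side sketch (the connection-string variable, container, blob, vault and secret names are all placeholders): Blob Storage access is granted to whoever holds the account connection string/key, whereas Key Vault access goes through an AAD identity that can be scoped per vault with access policies or RBAC.

    import com.azure.identity.DefaultAzureCredentialBuilder;
    import com.azure.security.keyvault.secrets.SecretClient;
    import com.azure.security.keyvault.secrets.SecretClientBuilder;
    import com.azure.storage.blob.BlobServiceClient;
    import com.azure.storage.blob.BlobServiceClientBuilder;

    public class SecretAccessComparison {
        public static void main(String[] args) {
            // Blob Storage: the shared account key in the connection string grants
            // broad access to the whole storage account.
            BlobServiceClient blobService = new BlobServiceClientBuilder()
                    .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
                    .buildClient();
            String fromBlob = blobService.getBlobContainerClient("secrets")
                    .getBlobClient("db-password.txt")
                    .downloadContent()
                    .toString();

            // Key Vault: the caller authenticates as an AAD user, service principal or
            // managed identity, and its rights can be limited to specific operations.
            SecretClient vault = new SecretClientBuilder()
                    .vaultUrl("https://my-vault.vault.azure.net/")
                    .credential(new DefaultAzureCredentialBuilder().build())
                    .buildClient();
            String fromVault = vault.getSecret("db-password").getValue();

            System.out.println(fromBlob.equals(fromVault));
        }
    }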

Azure Key Vault - multiple environments, do I need a Azure Key Vault for each environment?

I am doing some initial research and I am unable to find a clear answer to my problem. The plan is to have multiple environments (i.e. Dev, Prod, and QA). Would I need a new Azure Key Vault instance for each environment, or would I just be able to share the data between them?
I would rather advise using separate Key Vault instances for the different environments. You avoid "mixing" secrets across environments by mistake, and you have a clear separation.
Microsoft officially recommends this approach too:
Our recommendation is to use a vault per application per environment (Development, Pre-Production and Production).
You can read more in the official documentation
Multiple resources/entities can access a single Key Vault instance - provided they're all in the same location (data centre).
You may choose to segment your keys, secrets and certificates, either by placing them in different Key Vaults or by using different access methods/identities, however that's not necessary.
The only time you need a separate Key Vault instance is when the resources/entities accessing it are in another location (data centre/region).
It's worth noting that you don't need to worry too much about provisioning disaster recovery for resources using Key Vault, as the SLA Microsoft provides is unsurprisingly good: https://learn.microsoft.com/en-gb/azure/key-vault/key-vault-disaster-recovery-guidance. One caveat: if you're running IaaS/PaaS instances and want to run a DR fail-over yourself to another data centre, you'd need to manually migrate the keys/secrets/certificates in your main Key Vault into another instance (and re-point your VMs accordingly).

sharing keys across owin self hosted processes

I'm trying to create a token server for a few self-hosted OWIN services (console applications).
However, it seems like this is only possible if I host in IIS:
The data format used to protect the information contained in the access token. If not provided by the application the default data protection provider depends on the host server. The SystemWeb host on IIS will use ASP.NET machine key data protection, and HttpListener and other self-hosted servers will use DPAPI data protection. If a different access token provider or format is assigned, a compatible instance must be assigned to the OAuthBearerAuthenticationOptions.AccessTokenProvider or OAuthBearerAuthenticationOptions.AccessTokenFormat property of the resource server. - MSDN
Is there any way to share keys across servers when self-hosting, by sharing some kind of key in the app.config, like how I can share a machine key via web.config? If not, would that mean the only option left is to implement my own AccessTokenProvider (assuming I still use the built-in OAuth server and self-host)?
I've found this answer, which gives an idea of how you can use the machine key in a self-hosted OWIN app. Please note that a reference to System.Web is required.
After adding MachineKeyProtectionProvider and MachineKeyDataProtector, I just add the protection provider as shown below.
    // ... (other OWIN middleware registrations)

    // Use the machine-key based provider (from the linked answer) so every self-hosted
    // instance that shares the same machine key can unprotect each other's tokens.
    app.SetDataProtectionProvider(new MachineKeyProtectionProvider());

    app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions
    {
        AuthenticationMode = AuthenticationMode.Active
    });

    // Order matters: UseWebApi must come after SetDataProtectionProvider.
    app.UseWebApi(config);
The tricky part for me was that the order of initialization matters: UseWebApi must come after SetDataProtectionProvider.
I've tried MachineKey protection to no avail under self-hosted Web API. What finally worked for me was to specify a DPAPI protection provider in both projects:

    // The same app name ("myApp") has to be used in both processes so they derive compatible protection keys.
    app.SetDataProtectionProvider(new DpapiDataProtectionProvider("myApp"));
HTH
