Docker secrets and refresh tokens

I'm looking for a way to use Docker secrets, and for every case where I don't need to update the stored value, they would be a perfect fit. However, my app consists of multiple services that go through three-legged OAuth authorization. After all tokens are successfully obtained, a script collects them, creates secrets out of them, and then deploys the stack defined in my docker-compose.yml file, with the containers using those secrets. The problem comes when the tokens have to be refreshed and stored again as secrets: Docker does not allow updating an existing secret. What would be a possible workaround or a better approach?

You do not update a secret or config in place; they are immutable. Instead, include a version number in the secret name. When you need to change the secret, create a new one under a new name, then update your service to reference the new version. This triggers a rolling update of the service.
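A minimal sketch of that rotation in Swarm mode, assuming a hypothetical service named api and a secret exposed to the container as oauth_token (all names here are placeholders):

# Create the new version, then swap it in; Swarm performs a rolling update.
printf '%s' "$NEW_TOKEN" | docker secret create oauth_token_v2 -
docker service update \
  --secret-rm oauth_token_v1 \
  --secret-add source=oauth_token_v2,target=oauth_token \
  api
# Once the update has converged, the old version can be deleted.
docker secret rm oauth_token_v1

Because the target name stays constant, the application keeps reading /run/secrets/oauth_token and never has to know which version is currently mounted.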

Related

How to update secret values of Secrets Manager in CDK?

My Secrets Manager secret is created in CDK with the credentials of an RDS DatabaseCluster, via its credentials parameter. Now I want to update a value in that secret.
How can I update a secret value of Secrets Manager in CDK?
There isn't a great way to do this, and that's by design. Anything you put in CDK code has to end up in your CloudFormation template, and then it's no longer secret.
You'll need a process outside of CDK and CloudFormation to update those values.
The problem is that if you inject the secret as a string in your CDK code, it will show up in the compiled CloudFormation output.
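You can see the leak for yourself by synthesizing the stack and searching the output (a sketch; the search string is whatever literal you injected):

# Any literal you pass into a CDK construct ends up in the synthesized template.
cdk synth > template.yaml
grep -n "my-injected-secret" template.yaml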
There is a hot debate about this topic here.
I solved this problem by updating the secret with the AWS CLI:
aws secretsmanager put-secret-value --secret-id your_secret_arn --secret-string your_secret
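Note that RDS-generated secrets store a JSON object rather than a bare string, so the replacement value usually has to preserve that shape. A sketch of rotating just the password, assuming the usual RDS-generated field layout and that jq is available (field names may differ in your setup):

# Read the current JSON, replace only the password field, and push it back.
current=$(aws secretsmanager get-secret-value \
  --secret-id your_secret_arn \
  --query SecretString --output text)
updated=$(printf '%s' "$current" | jq --arg pw "$NEW_PASSWORD" '.password = $pw')
aws secretsmanager put-secret-value \
  --secret-id your_secret_arn \
  --secret-string "$updated"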

How to authorize Google API inside of Docker

I am running an application inside Docker that requires me to use google-bigquery. When I run it outside of Docker, I just have to visit the link below (redacted) and authorize. However, the link doesn't work when I copy-paste it from the Docker terminal. I have tried port mapping as well, with no luck.
Code:
from google.oauth2 import service_account
from google.cloud import bigquery

credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# Make clients.
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
Response:
requests_oauthlib.oauth2_session - DEBUG - Generated new state
Please visit this URL to authorize this application:
Please see the available solutions on this page; it's constantly updated:
gcloud credential helper
Standalone Docker credential helper
Access token
Service account key
In short, you need to use a service account key file. Make sure you either store it in Secret Manager, or issue a service account key file dedicated to the Docker image.
You then need to place the service account key file into the Docker container, either at build time or at runtime.
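A minimal sketch of the runtime variant, assuming a hypothetical image name and a key file kept outside the image (paths are placeholders):

# Mount the key read-only at runtime and point the client libraries at it
# via the standard GOOGLE_APPLICATION_CREDENTIALS variable.
docker run --rm \
  -v /secure/path/sa-key.json:/secrets/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
  my-bigquery-app

Mounting at runtime is generally preferable to copying the key into the image, since anyone who can pull the image can read it out of the layers.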

GCP docker authentication: How is using gcloud more secure than just using a JSON keyfile?

The page Setting up authentication for Docker | Artifact Registry Documentation suggests that gcloud is more secure than using a JSON file with credentials. I disagree; in fact, I'll argue the exact opposite is true. What am I misunderstanding?
The documentation says:
gcloud as credential helper (Recommended)
Configure your Artifact Registry credentials for use with Docker directly in gcloud. Use this method when possible for secure, short-lived access to your project resources. This option only supports Docker versions 18.03 or above.
followed by:
JSON key file
A user-managed key-pair that you can use as a credential for a service account. Because the credential is long-lived, it is the least secure option of all the available authentication methods.
The JSON key file contains a private key and other goodies, giving a hacker long-lived access. The keys to the kingdom. But only to the Artifact Registry in this instance, because the service account the JSON file belongs to has only those specific rights.
Now gcloud has two auth options:
gcloud auth activate-service-account ACCOUNT --key-file=KEYFILE
gcloud auth login
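(For reference, in both cases Docker is pointed at gcloud with the same credential-helper wiring; a sketch, assuming an Artifact Registry host:

# Register gcloud as the Docker credential helper for this registry host.
gcloud auth configure-docker us-docker.pkg.dev

The helper then mints tokens from whichever credential gcloud currently holds.)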
Let's start with gcloud and a service account: here it stores the KEYFILE unencrypted in ~/.config/gcloud/credentials.db. Using the JSON file directly boils down to docker login -u _json_key --password-stdin https://some.server < KEYFILE, which stores the KEYFILE contents in ~/.docker/config.json. So using gcloud with a service account and using the JSON file directly should be equivalent, security-wise. Both store the same KEYFILE unencrypted in a file.
gcloud auth login requires logging in through a browser, where I consent to giving gcloud access to my user account in its entirety. It is not limited to the Artifact Registry the way the service account is. Looking with sqlite3 ~/.config/gcloud/credentials.db .dump, I can see that it stores an access_token but also a refresh_token. If a hacker has access to ~/.config/gcloud/credentials.db with the access and refresh tokens, doesn't he own the system just as much as if he had access to the JSON file? Actually, this is worse, because my user account is not limited to just the Artifact Registry: now the attacker has access to everything my user has access to.
So all in all: gcloud auth login is at best security-wise equivalent to using the JSON file. And because the access is not limited to the Artifact Registry, it is in fact worse.
Do you disagree?

Keycloak Docker import LDAP bind credentials without exposing them

I have a Keycloak Docker image, and I import the configuration of my realm from a JSON file. It works; so far so good.
But my configuration contains an LDAP provider, which doesn't have the right credentials (Bind DN and Bind Credentials). They are left out of the JSON for security reasons, so I have to insert the credentials manually in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them into the JSON file inside the container with a shell script or similar, and then importing the resulting file when starting Keycloak. The problem is that the credentials would then sit in clear text in the JSON file inside the container, so anybody with access to the container would be able to see them.
I'm now considering injecting the credentials into that JSON file from environment variables (these are securely stored in the GitLab runner and masked in the logs), starting Keycloak, and then removing the JSON file on the fly once Keycloak has successfully started, so the credentials are never exposed in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
This way your configuration is persistent, so you enter your LDAP credentials only the first time and don't need complex operations with pipelines.
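If you still want to pursue the environment-variable idea from the question, here is a rough entrypoint sketch. It assumes the legacy jboss/keycloak image, hypothetical __BIND_DN__ and __BIND_CREDENTIAL__ placeholders in a realm template, and env vars injected by the runner:

#!/bin/sh
# Hypothetical wrapper entrypoint: render the realm file from env vars,
# start Keycloak in the background, delete the rendered file once it is up.
sed -e "s|__BIND_DN__|${LDAP_BIND_DN}|" \
    -e "s|__BIND_CREDENTIAL__|${LDAP_BIND_CREDENTIAL}|" \
    /opt/realm-template.json > /tmp/realm.json
/opt/jboss/tools/docker-entrypoint.sh -b 0.0.0.0 \
  -Dkeycloak.import=/tmp/realm.json &
# Crude readiness wait; a real script would poll the health endpoint.
sleep 60 && rm -f /tmp/realm.json
wait

Note that the rendered file still exists in the container's writable layer for those 60 seconds, so the persistent-database approach above remains the simpler option.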

Access KeyVault from Azure Container Instance deployed in VNET

An Azure Container Instance is deployed in a VNET, and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently not supported while an ACI is deployed in a VNET.
Is there another way to work around this limitation and use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means there will be some script with access to all the secrets, and I don't want to expose them there.
To access Azure Key Vault you will need access to a token. Are you OK with storing this token in a k8s secret?
If you are, then any SDK or curl command can be used to call the Key Vault REST API and retrieve the secret at run time: https://learn.microsoft.com/en-us/rest/api/keyvault/
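A minimal sketch of that REST call, assuming the token is already in an environment variable and placeholder vault and secret names:

# Fetch a secret via the Key Vault REST API and extract its value;
# the vault name, secret name, and api-version are illustrative.
curl -s \
  -H "Authorization: Bearer ${KEYVAULT_TOKEN}" \
  "https://my-vault.vault.azure.net/secrets/my-secret?api-version=7.4" \
  | jq -r '.value'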
If you don't want to use a secret/volume to store the token for AKV, the alternative would be to bake the token into your container image and rebuild the image every day with a fresh token, managing its access in AKS at the same time as part of your CI process.
