Dropwizard and HashiCorp Vault

I'm wondering if anyone has experience using Dropwizard and Vault together. I'm looking for a way to keep my DB and other passwords in Vault instead of in the Dropwizard .yaml configuration. I'd also like to be prompted for the Vault key when my service starts, so that no secrets are placed in any config files. Any help is greatly appreciated.
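Not from any answer here, but one possible shape for this, as a sketch: hook a Vault-backed lookup into Dropwizard's SubstitutingSourceProvider so that placeholders in config.yaml are resolved at startup, with the Vault token read from a console prompt instead of a file. This assumes Dropwizard 2.x and the com.bettercloud:vault-java-driver; the Vault address, secret paths, and the MyConfig class (which extends Configuration) are placeholders.

    import com.bettercloud.vault.Vault;
    import com.bettercloud.vault.VaultConfig;
    import io.dropwizard.Application;
    import io.dropwizard.configuration.SubstitutingSourceProvider;
    import io.dropwizard.setup.Bootstrap;
    import io.dropwizard.setup.Environment;
    import org.apache.commons.text.StringSubstitutor;

    public class MyApp extends Application<MyConfig> {

        @Override
        public void initialize(Bootstrap<MyConfig> bootstrap) {
            try {
                // Prompt for the token so no secret is ever stored on disk.
                String token = new String(System.console().readPassword("Vault token: "));
                Vault vault = new Vault(new VaultConfig()
                        .address("https://vault.example.com:8200") // assumed address
                        .token(token)
                        .build());
                // Resolve "${path:field}" placeholders, e.g. "${secret/myapp:db_password}".
                StringSubstitutor substitutor = new StringSubstitutor(key -> {
                    String[] parts = key.split(":", 2);
                    try {
                        return vault.logical().read(parts[0]).getData().get(parts[1]);
                    } catch (Exception e) {
                        throw new IllegalStateException("Vault lookup failed: " + key, e);
                    }
                });
                bootstrap.setConfigurationSourceProvider(new SubstitutingSourceProvider(
                        bootstrap.getConfigurationSourceProvider(), substitutor));
            } catch (Exception e) {
                throw new RuntimeException("Could not initialize Vault", e);
            }
        }

        @Override
        public void run(MyConfig config, Environment env) {
            // By the time run() is called, config already holds the real passwords.
        }
    }

config.yaml would then contain something like password: ${secret/myapp:db_password}, and no secret ever touches the file system.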

Related

Accessing a secret from Google Secret Manager

I put a serviceAccount.json in Google Secret Manager, and I want to build an API service with FastAPI, a Python web framework.
I mounted the secret as a volume and tried to read it from the file, but it replies "no such file". Please, can anyone help me?
Never store JSON service account keys in Google Secret Manager. If your workload is running in Cloud Run, you should use the service identity to grant permissions instead: https://cloud.google.com/run/docs/securing/service-identity.
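A sketch of what that looks like in practice (all names here are placeholders, not from the question): run the service as its own identity and grant that identity access to the secret, so no key file is shipped anywhere.

    # Run the Cloud Run service as a dedicated service account.
    gcloud iam service-accounts create my-api-sa
    gcloud run deploy my-api --image gcr.io/PROJECT_ID/my-api \
        --service-account my-api-sa@PROJECT_ID.iam.gserviceaccount.com

    # Let that identity read the secret; no serviceAccount.json anywhere.
    gcloud secrets add-iam-policy-binding my-secret \
        --member "serviceAccount:my-api-sa@PROJECT_ID.iam.gserviceaccount.com" \
        --role "roles/secretmanager.secretAccessor"

Inside the container, the Google client libraries pick up this identity automatically via Application Default Credentials, so the FastAPI code never needs to read a key file.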

Dynamically set Client URLs on Keycloak Startup

In Keycloak there is already a way to export a whole realm with all clients, users, roles, etc.
This results in a file that can be used to import that realm during Keycloak startup. This works like a charm, but the problem is that the client URLs in Keycloak are hardcoded, in my case to localhost.
I'm looking for a way to set the base URLs of the clients dynamically, so that I can deploy Keycloak with an imported realm and everything works out of the box. Unfortunately, Keycloak doesn't seem to allow environment variables in the client configuration via the Admin Console.
As a consequence, using environment variables in the realm-export.json itself is also not allowed.
The Keycloak Docker container (jboss/keycloak) does not even have envsubst. It's really frustrating to already have a JSON file that does most of the configuration at container startup and still have to configure the client URLs manually afterwards.
Any solution? Thanks in advance.
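One workaround, sketched below under the assumption that the jboss/keycloak tag is UBI-minimal based (so microdnf is available) and that you have replaced the hardcoded URLs in the export with ${CLIENT_BASE_URL}-style placeholders by hand: derive your own image that installs gettext and renders the realm file with envsubst before handing off to the stock entrypoint. Tag, paths, and placeholder names are assumptions.

    FROM jboss/keycloak:16.1.1
    USER root
    RUN microdnf install -y gettext && microdnf clean all
    USER jboss
    COPY realm-template.json /tmp/realm-template.json
    # Render only the listed variables (so other "$" in the JSON survive),
    # then import the rendered realm on startup.
    ENTRYPOINT ["/bin/sh", "-c", "envsubst '$CLIENT_BASE_URL' < /tmp/realm-template.json > /tmp/realm.json && exec /opt/jboss/tools/docker-entrypoint.sh -b 0.0.0.0 -Dkeycloak.import=/tmp/realm.json"]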

How to access secrets and keys from Azure Key Vault in an app-configmap.yaml file

I have created an AKS-based application deployment where all of the application's environment variables are defined in an app-configmap.yaml file, which is referenced from deployment.yaml.
I would like to move the credentials currently set as environment variables in app-configmap.yaml into secrets in Key Vault, and then reference them from Key Vault in app-configmap.yaml.
I need step-by-step guidance on how to implement this.
In general I would not recommend using secrets as environment variables or in ConfigMaps.
With the Azure Key Vault provider for the Secrets Store CSI Driver you should consume secrets as file mounts inside the pod that actually needs them, as sketched below. This also lets you rotate secrets on demand, sync your own TLS certs, etc.
Another plus: you don't need AAD Pod Identity, because the CSI driver handles authentication on its own.
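A sketch of the two pieces involved (the key vault name, tenant ID, and object names are placeholders): a SecretProviderClass describing what to pull from Key Vault, and a pod that mounts it as files.

    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: app-secrets
    spec:
      provider: azure
      parameters:
        keyvaultName: "my-keyvault"
        tenantId: "<tenant-id>"
        objects: |
          array:
            - |
              objectName: db-password
              objectType: secret
    ---
    # The secret shows up as a file, e.g. /mnt/secrets/db-password,
    # not as an environment variable.
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: myapp:latest
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: app-secrets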

AKS with managed identity: need a service principal to automate deployment using Bitbucket Pipelines

I have an AKS (Kubernetes) cluster created with a managed identity in the Azure portal.
I want to automate deployment to the cluster using Bitbucket Pipelines, and for this it seems I need a service principal:
script:
  - pipe: microsoft/azure-aks-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
Is there a way to get this from the managed identity? Do I need to delete the cluster and re-create it with a service principal? Are there any other alternatives?
Thanks!
Unfortunately, a managed identity can only be used from within Azure resources. The Bitbucket pipeline first needs a service principal with enough permissions to access Azure; only then can it manage Azure resources. And for AKS, you can't change the managed identity that you enabled at creation time into a service principal.
So in the end you need to delete the existing AKS cluster and recreate it with a service principal. You can then use that same service principal to access Azure and manage the AKS cluster.
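For reference, a sketch of those two steps with the Azure CLI (subscription, resource group, and names are placeholders):

    # Create a service principal for the pipeline; note the appId, password
    # and tenant it prints (they map to AZURE_APP_ID, AZURE_PASSWORD and
    # AZURE_TENANT_ID above).
    az ad sp create-for-rbac --name bitbucket-aks-sp \
        --role Contributor \
        --scopes /subscriptions/<subscription-id>/resourceGroups/<rg>

    # Recreate the cluster bound to that service principal.
    az aks create --resource-group <rg> --name <cluster-name> \
        --service-principal <appId> --client-secret <password>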
I wanted to post this for anyone looking.
The OP asked here about retrieving the service principal details for a managed identity. While it is possible to retrieve the Azure resource ID and also the "username" of the service principal, as @charles-xu mentioned, using a managed identity for anything outside of Azure is not possible, because there is no way to access the password (also known as the client secret).
That being said, you can find the command to retrieve your managed identity's SP name in case you need it, for example to insert it into another Azure resource being created by Terraform. The command is documented here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli
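For completeness, a sketch of that lookup (resource names are placeholders); note there is still no client secret to retrieve:

    # Get the object (principal) ID of a user-assigned managed identity...
    az identity show --resource-group <rg> --name <identity-name> \
        --query principalId --output tsv

    # ...and inspect the service principal behind it, e.g. its display name.
    az ad sp show --id <principalId> --query displayName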

Keycloak Docker import LDAP bind credentials without exposing them

I have a Keycloak Docker image and I import the configuration of my realm from a JSON file. And it works, so far so good.
But my configuration contains an LDAP provider whose credentials (Bind DN and Bind Credentials) are missing: they are not included in the JSON for security reasons. So I have to enter the credentials manually in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to insert them manually after each startup.
I thought about inserting them into the JSON file inside the container with a shell script or whatever and then importing the resulting file when starting Keycloak. The problem is that the credentials would then sit in clear text in the JSON file inside the container, so anybody with access to the container could see them.
I'm now thinking about inserting the credentials into that JSON file from environment variables (these are securely stored in the GitLab runner and masked in the logs), starting Keycloak, and then removing the JSON file on the fly once Keycloak has successfully started, without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
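For what it's worth, the idea described in the question could look roughly like this entrypoint sketch (the paths, placeholder tokens, and health-check URL are assumptions): render the credentials from environment variables at runtime, start Keycloak, and delete the rendered file once it is up, so it never lands in an image layer.

    #!/bin/sh
    # Render the bind credentials into a copy of the realm file. The
    # template baked into the image contains only __BIND_DN__ and
    # __BIND_CREDENTIAL__ placeholders, no secrets.
    sed -e "s|__BIND_DN__|${LDAP_BIND_DN}|" \
        -e "s|__BIND_CREDENTIAL__|${LDAP_BIND_CREDENTIAL}|" \
        /opt/jboss/realm-template.json > /tmp/realm.json

    /opt/jboss/tools/docker-entrypoint.sh -b 0.0.0.0 \
        -Dkeycloak.import=/tmp/realm.json &

    # Once Keycloak answers, the import is done; remove the rendered file.
    until curl -sf http://localhost:8080/auth > /dev/null; do sleep 5; done
    rm -f /tmp/realm.json
    wait

Note the credentials remain visible as environment variables of the running container, so this narrows the exposure rather than eliminating it.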
A workaround is to bind your Keycloak instance to an external database with a persistent volume (Keycloak provides examples of this) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (see the Keycloak documentation) in your docker-compose, like this:

    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'

This way your configuration is persistent, so you enter the LDAP credentials just once and don't need complex pipeline operations.
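A sketch of that setup in docker-compose form (the image tags, database choice, and credentials are examples, not from the answer):

    version: "3"
    services:
      postgres:
        image: postgres:13
        environment:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: ${DB_PASSWORD}
        volumes:
          - keycloak-db:/var/lib/postgresql/data   # persistent volume
      keycloak:
        image: jboss/keycloak:16.1.1
        environment:
          DB_VENDOR: postgres
          DB_ADDR: postgres
          DB_DATABASE: keycloak
          DB_USER: keycloak
          DB_PASSWORD: ${DB_PASSWORD}
          KEYCLOAK_IMPORT: /tmp/realm-export.json
        volumes:
          - ./realm-export.json:/tmp/realm-export.json
        command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    volumes:
      keycloak-db:

Because the realm now lives in the persistent database and IGNORE_EXISTING skips re-importing it, the LDAP credentials entered once in the Admin Console survive restarts.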
