Mapping Vault CLI Path to Packer Vault Path

Using the Vault CLI I am able to get data for the following path:
vault kv get -field=databag chef0/databags/wireguard/hedge
However, in my Packer script, this:
"{{ vault `chef0/databags/wireguard/hedge` `databag` }}"
generates a no data error:
template: root:1:3: executing "root" at <vault `chef0/databags/wireguard/hedge`
`databag`>: error calling vault: Vault data was empty at the given path.
Warnings: Invalid path for a versioned K/V secrets engine. See the API docs for
the appropriate API endpoints to use. If using the Vault CLI, use 'vault kv get'
for this operation.
Is there a rule for translating/mapping one to the other?
Note:
To eliminate unrelated permission issues I have run both these using a root token.

Okay, I'm not sure where this is documented (and I'm not suggesting it isn't), but here is what I discovered:
It appears any data stored in a secrets engine, say chef0, is accessible via the API under a data sub-path. It may also help you to know there is a metadata sub-path at the same level as data.
So it appears the Vault CLI does not expose these sub-paths, but the Vault HTTP API and the Packer Vault API do.
The correct Packer incantation (chickens optional) is:
"{{ vault `chef0/**data**/databags/wireguard/hedge` `databag` }}"

You must be using v2 of the kv engine. For that engine, you do indeed need to have /data/ in the path, as shown in the API docs. The requirement for this prefix is also described in the engine docs. I've certainly run into this same problem myself :-)
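For reference, here is how the same lookup looks via the CLI and via the HTTP API (a sketch assuming chef0 is a KV v2 mount and that VAULT_ADDR and VAULT_TOKEN are set); note the API response nests the key/value pairs under data.data:
vault kv get -field=databag chef0/databags/wireguard/hedge
curl --header "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/chef0/data/databags/wireguard/hedge"
So the general mapping is <mount>/<path> on the CLI to <mount>/data/<path> in the API and in Packer's vault template function.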

Equivalent Docker.DotNet AuthConfig class in KubernetesClient for Dotnet application

I have a Docker Swarm application (.NET) which is using the AuthConfig class for storing information [username, password, server address, tokens, etc.] for authenticating with the registries. I am trying to write the same application for Kubernetes using KubernetesClient.
Can someone please let me know if there is an equivalent of the AuthConfig class in the Kubernetes K8s.Model client as well?
The analogous class for creating a connection to the k8s API server endpoint would be the following:
KubernetesClientConfiguration (in case you have a proper KUBECONFIG environment variable set, or at least a k8s config on disk); see the short sketch after the links below.
More specific classes can be found in the folder:
csharp/src/KubernetesClient/KubeConfigModels/
Usage examples can be found here:
csharp/examples/
I would also recommend reading the following documentation pages:
Access Clusters Using the Kubernetes API
Configure Access to Multiple Clusters
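For illustration, a minimal C# sketch of that approach (assuming the KubernetesClient NuGet package; the pod listing is just an example call):
using k8s;

// Reads KUBECONFIG (or ~/.kube/config) and builds the client configuration.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// Example call: list pods in the "default" namespace.
// (Newer KubernetesClient releases group operations, e.g. client.CoreV1.ListNamespacedPod.)
var pods = client.ListNamespacedPod("default");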

Keycloak Docker import LDAP bind credentials without exposing them

I have a Keycloak Docker image and I import the configuration of my realm from a JSON file. And it works, so far so good.
But in my configuration there is an LDAP provider, which doesn't have the right credentials (Bind DN and Bind Credentials). They are not inserted in the JSON for security reasons. So I have to manually insert the credentials in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them in the JSON file inside the container with a shell script or whatever and then importing the resulting file when starting keycloak. The problem is that the credentials would then be exposed in clear text in the JSON file inside the container. So anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials in that JSON file based on environment variables (these are securely stored in the Gitlab runner and masked in the logs), starting keycloak and then removing the JSON file on the fly after keycloak successfully starts without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose file, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
In this way, your configuration is persistent so you just enter your LDAP credentials the first time and don't need complex operations with pipelines.
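For example, a minimal docker-compose sketch along those lines (assuming the legacy jboss/keycloak image and a Postgres container; names and passwords are placeholders):
version: '3.7'
services:
  postgres:
    image: postgres:12
    volumes:
      - keycloak_db:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change_me
  keycloak:
    image: jboss/keycloak
    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change_me
    ports:
      - '8080:8080'
    depends_on:
      - postgres
volumes:
  keycloak_db: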

Access KeyVault from Azure Container Instance deployed in VNET

My Azure Container Instance is deployed in a VNET and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently a limitation once the ACI is in a VNET.
Is there another way to work around this identity limitation and still use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means there will be some script with access to all secrets, and I don't want to expose them in the script.
To access Azure Key Vault you will need access to a token. Are you OK with storing this token in a k8s secret?
If you are, then any SDK or curl command could be used to leverage the Key Vault REST API to retrieve the secret at run time: https://learn.microsoft.com/en-us/rest/api/keyvault/
If you don't want to use secrets/volumes to store the token for AKV, the alternative would be to bake the token into your container image and perhaps rebuild your image every day with a new token, managing its access in AKS at the same time within your CI process.
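For example, once a valid Key Vault access token is available (here as the hypothetical $AKV_TOKEN; vault and secret names are placeholders), the REST call to read a secret looks roughly like this, with the secret value returned in the value field of the JSON response:
curl --header "Authorization: Bearer $AKV_TOKEN" "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.2"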

docker secrets and refresh tokens

I'm looking for a way to use Docker secrets. In every case where I don't need to update the stored value of the secret it would be a perfect fit, but my app has multiple services which use 3-legged OAuth authorization. After successfully obtaining all tokens, a script collects them, creates secrets out of them, and deploys the config from my docker-compose.yml file with the containers using those secrets. The problem is when the tokens have to be refreshed and stored again as secrets: Docker secrets do not allow updating a secret's value. What would be a possible workaround or a better approach?
You do not update a secret or config in place. They are immutable. Instead, include a version number in your secret name. When you need to change the secret, create a new one with a new name, and then update your service with the new secret version. This will trigger a rolling update of your service.
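A rough sketch of that rotation (secret and service names are made up; the stable target path inside the containers stays /run/secrets/oauth_token):
printf '%s' "$NEW_TOKEN" | docker secret create oauth_token_v2 -
docker service update --secret-rm oauth_token_v1 --secret-add source=oauth_token_v2,target=oauth_token my_service
docker secret rm oauth_token_v1   # only once no running task references it anymore
Because the services read the secret through the fixed target name, only the source secret needs to carry the version suffix.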

Adding BigQuery connection with gdrive scopes?

I have an external Sheets table that I want to query via the BigQueryOperator in Airflow.
I would prefer to use the Cloud Composer service account.
I've created a new connection via the Airflow UI with the following parameters:
Conn Id: bigquery_with_gdrive_scope
Conn Type: google_cloud_platform
Project Id: <my project id>
Keyfile path: <none>
Keyfile JSON: <none>
Scopes: https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/drive
In my DAG, I use: BigQueryOperator(..., bigquery_conn_id='bigquery_with_gdrive_scope')
The log reports: Access Denied: BigQuery BigQuery: No OAuth token with Google Drive scope was found.
The task attributes show: bigquery_conn_id bigquery_with_gdrive_scope
It's almost as though the bigquery_conn_id parameter is being ignored.
Adding GCP API scopes (as in the accepted answer) did not work for us. After a lot of debugging, it seemed like GCP had "root" scopes that were assigned to the environment during creation and could not be overridden via Airflow Connections. It seems like this only affects GCP API scopes.
For reference, we were using Composer 1.4.0 and Airflow 1.10.0.
If you want to add a scope pertaining to GCP on Cloud Composer, you MUST do so when you create the environment. It cannot be modified after the fact.
When creating your environment, be sure to add https://www.googleapis.com/auth/drive. Specifically, you can add the following flag to your gcloud composer environment create command:
--oauth-scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/drive
Lastly, do not forget to share the document with the service account email (unless you have given the service account domain-wide access).
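Putting it together, the creation command would look something like this (environment name and location are placeholders):
gcloud composer environments create my-environment --location us-central1 --oauth-scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/drive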
In case anyone runs up against the same problem: this version (Composer 1.0.0, Airflow 1.9.0) falls back to gcloud auth unless a Keyfile path or Keyfile JSON is provided, and this ignores any scope arguments.
The master branch of Airflow fixes this, but for now you have to generate a credentials file for the service account and tell Airflow where it is located.
There are step by step directions here.
For my use-case I created a key for airflow's service account and set up a connection as follows:
Conn Id: bigquery_with_gdrive_scope
Conn Type: google_cloud_platform
Project Id: <my project id>
Keyfile path: <none>
Keyfile JSON: <contents of keyfile for airflow service account>
Scopes: https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/drive
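For completeness, a minimal DAG sketch using that connection (Airflow 1.9/1.10-era import paths; project, dataset and table names are made up):
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

with DAG('sheets_query_example',
         start_date=datetime(2019, 1, 1),
         schedule_interval=None) as dag:
    query_sheet = BigQueryOperator(
        task_id='query_sheet',
        bql='SELECT * FROM `my-project.my_dataset.my_sheets_table`',  # `sql` on newer Airflow versions
        use_legacy_sql=False,
        bigquery_conn_id='bigquery_with_gdrive_scope',
    )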
