How to get more than the first 25 secrets from an Azure key vault via command line? - azure-keyvault

I am using az keyvault secret list to get secrets from my Azure key vault. Its help says:
Arguments
--maxresults : Maximum number of results to return in a page. If not
specified, the service will return up to 25 results.
It is not possible to set --maxresults any higher than 25. The help says "in a page", but I can find no explanation of how to get the next page.
Is it possible to list more than the top 25 secrets using this tool?

You cannot get more than 25 secrets per page by using --maxresults in the CLI command.
A workaround:
Specifying --maxresults higher than 25 does not help; for example:
az keyvault secret list --vault-name <your keyvault name> --maxresults 30
If you want to get all the secrets in a specific key vault, use the command without --maxresults:
az keyvault secret list --vault-name <your keyvault name>
Or, if you want to achieve this programmatically, write a script against the REST API or a language SDK directly.
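If you go the REST route, the list operation is paged via a nextLink field in each response, so you can walk the pages yourself. A minimal sketch (assuming a vault named my-vault, an already logged-in az CLI, and jq installed):
#!/usr/bin/env bash
# Sketch: page through all secrets in a vault via the Key Vault REST API.
VAULT_URL="https://my-vault.vault.azure.net"
TOKEN=$(az account get-access-token --resource https://vault.azure.net --query accessToken -o tsv)
URL="${VAULT_URL}/secrets?api-version=7.4"
while [ -n "$URL" ] && [ "$URL" != "null" ]; do
  PAGE=$(curl --silent --header "Authorization: Bearer ${TOKEN}" "$URL")
  echo "$PAGE" | jq -r '.value[].id'      # print the secret identifiers in this page
  URL=$(echo "$PAGE" | jq -r '.nextLink') # follow the continuation link, if any
done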

To get all the secrets with their names and values via the Azure CLI on macOS (or any bash environment), you can use the script below:
sh keyvault-list.sh keyvaultname
#!/usr/bin/env bash
# List every secret name in the vault passed as the first argument.
keyvaultEntries=($(az keyvault secret list --vault-name "$1" --query "[*].{name:name}" -o tsv))
for i in "${keyvaultEntries[@]}"
do
  # do whatever on "$i" here
  echo "$i"::"$(az keyvault secret show --name "$i" --vault-name "$1" -o tsv --query value)"
done

Related

Authenticate Google Compute Engine (GCE) to Pull Image from Google Container Registry (GCR)

I'm building a deployment pipeline using Google Cloud Build and storing the Docker image in GCR. I planned to restart the GCE instance group in the last Cloud Build step so GCE can run the latest Docker image, by adding docker pull gcr.io/my-project/my-image to the GCE instance template startup script. The problem is I can't authorize Docker to pull the image from GCR. I've read the 4 GCR authentication methods, but all of them require logging in manually from the browser. Also, at this stage I can't upload a service account key, since I need to provision and maintain the infrastructure fully from code (Terraform), with no Google Cloud console. So how do we authenticate Docker as a machine?
If the instance doesn't have gcloud installed, you can use the Metadata service to acquire an access token and use that to login to GCR using Docker.
I've not used this to login to GCR using Docker but it should work. I use this format to access Google Cloud services from an instance startup script:
echo "Getting token from metadata"
ENDPOINT="http://metadata.google.internal/computeMetadata/v1"
ACCOUNT="default" # Replace with Service Account Email (!)
TOKEN=$(\
curl \
--silent \
--header "Metadata-Flavor: Google" \
${ENDPOINT}/instance/service-accounts/${ACCOUNT}/token)
echo "Extract access token"
ACCESS=$(\
echo ${TOKEN} \
| grep --extended-regexp --only-matching "(ya29.[0-9a-zA-Z._-]*)")
echo "Login to Docker"
HOST="https://gcr.io" # Or ...
printf ${ACCESS} \
| docker login -u oauth2accesstoken \
--password-stdin ${HOST}
You can grant IAM privileges or scopes to the service account attached to your GCE instance, then run the following command:
gcloud auth print-access-token | docker login -u oauth2accesstoken \
--password-stdin https://HOSTNAME
That will authenticate against the registry, and you will be able to push and pull images.
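On the permissions side, a minimal sketch of what granting that access can look like (the project, service account, and instance names below are placeholders, and roles/storage.objectViewer is one workable choice because GCR stores image layers in Cloud Storage):
# Grant the attached service account read access to the GCR storage backend.
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/storage.objectViewer"
# Make sure the instance is created with a scope that allows Storage reads.
gcloud compute instances create my-instance \
  --service-account my-sa@my-project.iam.gserviceaccount.com \
  --scopes https://www.googleapis.com/auth/devstorage.read_only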

How to pass google cloud application credentials file to docker container

I would like to pass my Google Cloud Platform's service account JSON credentials file to a docker container so that the container can access a cloud storage bucket. So far I tried to pass the file as an environment parameter on the run command like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS=/Users/gcp_credentials.json -t -i image_name
Using the -e flag and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS=/Users/gcp_credentials.json -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How to pass the google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path, but have to add the JSON into the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
secrets - work with docker swarm mode.
create docker secrets
use secret with a container using --secret
Advantage being, secrets are encrypted. Secrets are decrypted when mounted to containers.
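For the volume approach on a local machine, a minimal sketch using the path and image name from the question (adjust both to your setup):
# Mount the host JSON read-only and point GOOGLE_APPLICATION_CREDENTIALS
# at the in-container path, not the host path.
docker run -p 8501:8501 \
  -v /Users/gcp_credentials.json:/secrets/gcp_credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/gcp_credentials.json \
  -t -i image_name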
I log into gcloud in my local environment then share that json file as a volume in the same location in the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally
Login locally
To get your default user credentials on your local environment, you have to use the gcloud SDK. You have 2 commands to get authenticated:
gcloud auth login to get authenticated for all subsequent gcloud commands
gcloud auth application-default login to create your ADC locally, in a "well-known" location
Note the location of the credentials
The Google auth library tries to get valid credentials by performing checks in this order:
Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
Look at the metadata server (only on Google Cloud Platform). If it returns correct HTTP codes, use it, else…
Look at the "well-known" location for a user credentials JSON file.
The "well-known" locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share the volume with the container
Therefore, you have to run your local docker run command like this:
ADC=~/.config/gcloud/application_default_credentials.json
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>
NB: this is only for local development, on Google Cloud Platform the credentials for the service are automatically inserted for you.

Failed to pull image - unauthorized: authentication required (ImagePullBackOff )

My release pipeline runs successfully and creates a container in Azure Kubernetes Service, however when I view the Azure Portal > Kubernetes service > Insights screen, it shows a failure.
It fails to pull the image from my private container repository with error message 'ImagePullBackOff'
I did a kubectl describe on the pod and got below error message:
Failed to pull image "myexampleacr.azurecr.io/myacr:13": [rpc error: code = Unknown desc = Error response from daemon: Get https://myexampleacr.azurecr.io/v2/myacr/manifests/53: unauthorized: authentication required.
Below is a brief background on my setup:
I am using Kubernetes secret to access the containers in private container registry.
I generated the Kubernetes secret using the clientId and password (secret) from the Service Principal that my DevOps team created.
The command used to generate kubernetes secret:
kubectl create secret docker-registry acr-auth --docker-server <acr-login-server> --docker-username <client-id> --docker-password <client-secret> --docker-email <email>
I then updated my deployment.yaml with imagePullSecrets: name:acr-auth
After this, my deployment and release pipelines both ran successfully, but the Kubernetes service shows the 'ImagePullBackOff' error.
Any help will be much appreciated.
As the error shows, authentication is required. From your description, the likely reason is that the ACR role was not assigned to the service principal your team created, or you are using the wrong service principal. So you need to check two things:
Whether the service principal you use has the right permissions on the ACR.
Whether the Kubernetes secret was created correctly in the Kubernetes service.
The way to check whether the service principal has the right permissions on the ACR is to log in to the registry with the service principal and pull an image from it. Also, as the comment said, you need to make sure the command is right, as below:
kubectl create secret docker-registry acr-auth --docker-server myexampleacr.azurecr.io --docker-username clientId --docker-password password --docker-email yourEmail
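For the first check, you can verify the service principal directly against the registry by logging in and pulling the exact image from the error message (a sketch; clientId and password are the same placeholder values used above):
# Log in to the ACR with the service principal, then try the failing image.
docker login myexampleacr.azurecr.io --username clientId --password password
docker pull myexampleacr.azurecr.io/myacr:13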
Additionally, there is a small chance that you are using the wrong image or tag, so check that as well.
I had the same error, and I realised that the service principal had expired.
To check the expiration date of your service principal and update your AKS cluster with the new credentials, follow these steps:
NOTE: You need the Azure CLI version 2.0.65 or later installed and configured.
1- Get the Client ID of your cluster using the az aks show command.
az aks show --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId"
2- Check the expiration date of your service principal.
az ad sp credential list --id YOUR_CLIENT_ID --query "[].endDate" -o tsv
If the service principal has expired, reset the existing service principal credentials with the following steps:
1- Reset the credentials using az ad sp credential reset command.
az ad sp credential reset --name YOUR_CLIENT_ID --query password -o tsv
2- Update your AKS cluster with the new service principal credentials.
az aks update-credentials --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --reset-service-principal --service-principal YOUR_CLIENT_ID --client-secret YOUR_NEW_PASSWORD
Source: https://learn.microsoft.com/en-us/azure/aks/update-credentials
It's odd; maybe it is showing an old deployment which you didn't delete. It may also be one of these: incorrect credentials, the ACR may not be up, or the image name or tag is wrong. You can also go with AKS-ACR native authentication and never use a secret: https://learn.microsoft.com/en-gb/azure/container-registry/container-registry-auth-aks
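If you go the native AKS-ACR route from the link above, attaching the registry is a single command; a sketch (assuming the resource group, cluster, and registry names used earlier, and granting AcrPull to the identity the cluster uses for pulls):
az aks update \
  --resource-group YOUR_AKS_RESOURCE_GROUP_NAME \
  --name YOUR_AKS_CLUSTER_NAME \
  --attach-acr myexampleacr
With that in place you can drop imagePullSecrets from the deployment.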
In my case the problem was that my --docker-password had a special character and I was not escaping it using quotes (i.e. --docker-password 'myPwd$').
You can check that your password is correct by executing this command:
kubectl get secret <SECRET> -n <NAMESPACE> --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Bitbucket pipeline ssh to compute engine instance with gcloud

I am using Bitbucket Pipelines to run a deploy script on preemptible machines on Compute Engine. I use the Google SDK and a service account with the Owner role, but I still can't SSH to the machine.
This is what my bitbucket-pipelines.yml looks like:
- echo $GCLOUD_API_KEYFILE | base64 --decode --ignore-garbage > ./gcloud-api-key.json
- gcloud auth activate-service-account --key-file gcloud-api-key.json
- gcloud config set project $GCLOUD_PROJECT
- gcloud compute --project $GCLOUD_PROJECT ssh --zone "us-east1-c" $INSTANCE_NAME --command "./deploy"
I can see that I am able to successfully authenticate:
Activated service account credentials for: [...]
but SSH to the instance still fails:
gcloud compute --project "..." ssh --zone "us-east1-c" "..." --command "..."
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/google_compute_engine.
Your public key has been saved in /root/.ssh/google_compute_engine.pub.
The key fingerprint is: ...
...
Updating project ssh metadata...
................Updated [https://www.googleapis.com/compute/v1/projects/...].
done.
Waiting for SSH key to propagate.
Warning: Permanently added '...' (RSA) to the list of known hosts.
Permission denied (publickey).
Am I missing something? My understanding was that once I authenticate as a service account with permission to perform SSH, the gcloud ssh command is supposed to work.
This is a basic SSH issue; please check this thread [1].
[1] How to get the ssh keys for a new Google Compute Engine instance?
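One mitigation worth trying in a CI runner like this (a sketch, not from the linked thread): pre-generate the key gcloud would otherwise create, and retry the ssh a few times, since a freshly added project metadata key can take a little while to reach the instance:
# Generate the key up front so gcloud does not have to, then retry
# while the new public key propagates to the instance.
ssh-keygen -t rsa -f ~/.ssh/google_compute_engine -N "" -q
for attempt in 1 2 3 4 5; do
  gcloud compute ssh "$INSTANCE_NAME" \
    --project "$GCLOUD_PROJECT" \
    --zone "us-east1-c" \
    --ssh-key-file ~/.ssh/google_compute_engine \
    --command "./deploy" && break
  echo "ssh attempt $attempt failed, retrying in 10s"
  sleep 10
done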

Having issue setting up access token from policy in Vault

I am setting up HashiCorp Vault in my development environment in -dev mode and trying to use an access token created from a policy to access the secret the policy was created for, but I get "*permission denied" when I try to access the secret from the CLI or API. Based on the Vault documentation it should work.
The following is what I have done to set up:
Set up the docker container using docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -v //c/config:/config vault
Connect to the docker container using docker exec -it {docker name} ash. I know it should be a bash command, but bash doesn't work and ash does (the official vault image is Alpine-based, which ships ash rather than bash).
After exec-ing into the container, export VAULT_ADDR='http://127.0.0.1:8200'
Set the root token in environment variable export VAULT_TOKEN='roottoken'
Create a secret: vault write secret/foo/bar value=secret
Create a policy file called secret.hcl with the following content:
path "secret/foo/*" {
policy = "read"
}
Create a policy for the secret: vault policy-write secret /config/secret.hcl, and make sure the policy is created.
Create a token for the policy just created: vault token-create -policy="secret"
Try to access the secret using the API at 'http://127.0.0.1:8200/v1/secret/foo', passing X-Vault-Token='token created in the previous step' in the header.
Getting "*permission denied" error
Would be great if someone could shed some light.
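For reference, the API call from the last step spelled out as a curl command (a sketch using the dev-server address and the token from vault token-create above):
# Read the secret path with the policy-scoped token in the header.
curl --silent \
  --header "X-Vault-Token: <token created above>" \
  http://127.0.0.1:8200/v1/secret/foo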
