I am setting up HashiCorp Vault in my development environment in -dev mode and trying to use a token created from a policy to read the secret that the policy covers, but I get "* permission denied" when I try to access the secret from the CLI or the API. Based on the Vault documentation it should work.
The following is what I have done to set up:
Set up the docker container using docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -v //c/config:/config vault
Connect to the docker container using docker exec -it {docker name} ash. I know it should be the bash command, but bash doesn't work and ash does (the official Vault image is Alpine-based, which ships ash rather than bash).
Inside the container, export VAULT_ADDR='http://127.0.0.1:8200'
Set the root token in an environment variable: export VAULT_TOKEN='roottoken'
Create a secret: vault write secret/foo/bar value=secret
Create a policy file called secret.hcl with the content:
path "secret/foo/*" {
  policy = "read"
}
Create the policy for the secret with vault policy-write secret /config/secret.hcl and confirm the policy is created
Create a token for the policy just created: vault token-create -policy="secret"
Try to read the secret via the API at 'http://127.0.0.1:8200/v1/secret/foo', passing X-Vault-Token='token created in the token-create step' in the header
Getting "*permission denied" error
Would be great is someone can shed a light..
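For reference, here is the same sequence condensed into commands (a recap of the steps above; the curl call is just an illustrative version of the API request, with the token as a placeholder):
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='roottoken'
vault write secret/foo/bar value=secret
vault policy-write secret /config/secret.hcl
vault token-create -policy="secret"
curl -H "X-Vault-Token: <token from token-create>" http://127.0.0.1:8200/v1/secret/foo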
I have a twitter bot which I am attempting to deploy to my server with Docker; I don't want to save my API keys & Access Tokens in the code, so I access them via env variables.
I ssh'ed into the server and exported the keys & tokens in my ~/.profile, yet when I run the Docker container on my server, I get an error as if my keys/tokens are incorrect.
I'm new to Docker, so I have to ask: is my running Docker container able to access these env variables, or do I have to set them another way so that my Docker container can see them?
Docker can't access the env vars on the server. You need to pass them explicitly when running the container, using the -e / --env flag.
Example: docker run --env VAR1=value1 --env VAR2=value2 ...
Documentation: https://docs.docker.com/engine/reference/commandline/run
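If the variables are already exported in your SSH session (as with your ~/.profile), you can also pass them through by name only, or collect them in an env file. The variable and file names below are just placeholders:
# pass the current shell's value through by name
docker run -e TWITTER_API_KEY -e TWITTER_ACCESS_TOKEN my_bot_image
# or read KEY=value pairs from a file
docker run --env-file ./twitter.env my_bot_image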
I want to check if a container on GitLab is built properly with the right content. As a first step, I'm trying to log in to the registry by running the following command:
sudo docker login -u "ci-registry-user" -p "some-token" "registry.gitlab.com/some-registry:container"
However, I run into Get "https://registry.gitlab.com/v2/": unauthorized: HTTP Basic: Access denied errors.
My question is twofold:
How do I access the hosted containers on GitLab? My goal is to access the container and run docker exec -it container_name bash && cat /some/path/to_container.py
Is there an alternative way to achieve this without logging in to the registry?
Check your GitLab PAT (personal access token) scope, to make sure it is api or at least read_registry.
Read-only (pull) for Container Registry images if a project is private and authorization is required.
And make sure you have access to that project with that token, if thesekyi/paalup is a private project.
Avoid sudo, as it changes the execution environment from your logged-in user to root.
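For example (the image path under thesekyi/paalup is illustrative, and the token needs at least read_registry scope), logging in against the registry host alone and then pulling and inspecting the image could look like:
docker login registry.gitlab.com -u your-gitlab-username -p your-personal-access-token
docker pull registry.gitlab.com/thesekyi/paalup/some-image:container
docker run --rm -it registry.gitlab.com/thesekyi/paalup/some-image:container cat /some/path/to_container.py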
I would like to pass my Google Cloud Platform's service account JSON credentials file to a docker container so that the container can access a cloud storage bucket. So far I tried to pass the file as an environment parameter on the run command like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
Using the -e flag and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How do I pass the Google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path, but have to add the JSON into the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
Secrets work with Docker swarm mode:
create a Docker secret
attach the secret to a service using --secret
The advantage is that secrets are stored encrypted; they are decrypted only when mounted into containers.
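A minimal sketch of the volume option for this exact case (the container path /tmp/keys/gcp_credentials.json is just an example): bind-mount the key file and point GOOGLE_APPLICATION_CREDENTIALS at the path inside the container rather than the host path:
docker run -p 8501:8501 \
  -v /Users/gcp_credentials.json:/tmp/keys/gcp_credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/gcp_credentials.json \
  -t -i image_name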
I log into gcloud in my local environment and then share that JSON file as a volume at the same location in the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally
Login locally
To get your default user credentials on your local environment, you have to use the gcloud SDK. You have 2 commands to get authenticated:
gcloud auth login to get authenticated on all subsequent gcloud commands
gcloud auth application-default login to create your ADC (Application Default Credentials) locally, in a "well-known" location.
Note location of credentials
The Google auth library tries to get valid credentials by performing checks in this order:
Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
Look at the metadata server (only on Google Cloud Platform). If it returns correct HTTP codes, use it, else…
Look at the "well-known" location to see if a user credential JSON file exists.
The "well-known" locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share volume with container
Therefore, you have to run your local docker run command like this:
ADC=~/.config/gcloud/application_default_credentials.json
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>
NB: this is only for local development; on Google Cloud Platform the credentials for the service are automatically inserted for you.
My release pipeline runs successfully and creates a container in Azure Kubernetes; however, when I view the Azure Portal > Kubernetes service > Insights screen, it shows a failure.
It fails to pull the image from my private container registry with the error message 'ImagePullBackOff'.
I did a kubectl describe on the pod and got the error message below:
Failed to pull image "myexampleacr.azurecr.io/myacr:13": [rpc error: code = Unknown desc = Error response from daemon: Get https://myexampleacr.azurecr.io/v2/myacr/manifests/53: unauthorized: authentication required.
Below is a brief background on my setup:
I am using Kubernetes secret to access the containers in private container registry.
I generated the Kubernetes secret using the clientId and password (secret) from the Service Principal that my DevOps team created.
The command used to generate the Kubernetes secret:
kubectl create secret docker-registry acr-auth --docker-server <acr-login-server> --docker-username <service-principal-client-id> --docker-password <service-principal-secret> --docker-email <email>
I then updated my deployment.yaml to reference the secret via imagePullSecrets with name: acr-auth (see the snippet below).
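For reference, a minimal sketch of how that part of the pod spec is usually structured (the container name and image here are illustrative):
spec:
  containers:
  - name: myacr
    image: myexampleacr.azurecr.io/myacr:13
  imagePullSecrets:
  - name: acr-auth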
After this, I ran my deployment and release pipelines; both ran successfully, but the Kubernetes service shows the 'ImagePullBackOff' failure described above.
Any help will be much appreciated.
As the error shows, authentication is required. From your description, the likely reason is that your team did not assign the ACR role to the service principal it created, or that you are using the wrong service principal. So you need to check two things:
Whether the service principal you use has the right permissions on the ACR.
Whether the Kubernetes secret was created correctly in the Kubernetes cluster.
To check whether the service principal has the right permissions on the ACR, log in to the registry with the service principal via docker login and try to pull an image (see the sketch at the end of this answer). Also, as the comment said, you need to make sure the secret-creation command is right, as below:
kubectl create secret docker-registry acr-auth --docker-server myexampleacr.azurecr.io --docker-username clientId --docker-password password --docker-email yourEmail
Additionally, there is a small chance that you are using the wrong image name or tag, so check that as well.
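A quick way to test the pull permission directly with the service principal (registry and image are taken from the question; the credentials are placeholders):
docker login myexampleacr.azurecr.io --username <service-principal-client-id> --password <service-principal-secret>
docker pull myexampleacr.azurecr.io/myacr:13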
I had the same error, and I realised that the service principal was expired.
To check the expiration date of your service principal and update your AKS cluster with the new credentials, follow these steps:
NOTE: You need the Azure CLI version 2.0.65 or later installed and configured.
1- Get the Client ID of your cluster using the az aks show command.
az aks show --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId"
2- Check the expiration date of your service principal.
az ad sp credential list --id YOUR_CLIENT_ID --query "[].endDate" -o tsv
If the service principal is expired, follow these steps to reset the existing service principal credential:
1- Reset the credentials using az ad sp credential reset command.
az ad sp credential reset --name YOUR_CLIENT_ID --query password -o tsv
2- Update your AKS cluster with the new service principal credentials.
az aks update-credentials --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --reset-service-principal --service-principal YOUR_CLIENT_ID --client-secret YOUR_NEW_PASSWORD
Source: https://learn.microsoft.com/en-us/azure/aks/update-credentials
It's odd; maybe it shows an old deployment which you didn't delete. It may also be one of these: incorrect credentials, the ACR may not be up, or the image name or tag is wrong. You can also go with AKS-ACR native authentication and never use a secret: https://learn.microsoft.com/en-gb/azure/container-registry/container-registry-auth-aks
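For the native AKS-ACR route, attaching the registry to the cluster is typically a single az command (resource names follow the placeholders used above; see the linked doc for details):
az aks update --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --attach-acr myexampleacr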
In my case the problem was that my --docker-password had a special character and I was not escaping it using quotes (i.e. --docker-password 'myPwd$')
You can check your password is correct by executing this command:
kubectl get secret <SECRET_NAME> -n <NAMESPACE> --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Why doesn't gsutil use the Gcloud credentials as it should when running in a docker container on Cloud Shell?
According to [1] gsutil should use gcloud credentials when they are available:
Once credentials have been configured via gcloud auth, those credentials will be used regardless of whether the user has any boto configuration files (which are located at ~/.boto unless a different path is specified in the BOTO_CONFIG environment variable). However, gsutil will still look for credentials in the boto config file if a type of non-GCS credential is needed that's not stored in the gcloud credential store (e.g., an HMAC credential for an S3 account).
This seems to work fine in gcloud installs but not in docker images. The process I used in Cloud Shell is:
docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list --project my_project
... (works ok)
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls gs://bucket/
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket.
[1] https://cloud.google.com/storage/docs/gsutil/addlhelp/CredentialTypesSupportingVariousUseCases
You need to mount a volume with your credentials:
docker run -v ~/.config/gcloud:/root/.config/gcloud your_docker_image
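Applied to the question's setup, that would look something like this (same image as in the question; the bucket name is a placeholder):
docker run --rm -ti -v ~/.config/gcloud:/root/.config/gcloud google/cloud-sdk gsutil ls gs://bucket/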
The following steps solved this problem for me:
Set the gs_service_key_file in the [Credentials] section of the boto config file (see here)
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
ENV GOOGLE_APPLICATION_CREDENTIALS=/.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID=your-project-id
RUN echo '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json' \
> /etc/boto.cfg
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID
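Assuming that Dockerfile builds cleanly, a quick smoke test could look like this (the image tag and bucket name are hypothetical):
docker build -t my-gsutil-image .
docker run --rm my-gsutil-image gsutil ls gs://your-bucket/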
I found @Alexandre's answer basically worked for me, except for one problem: my credentials worked for bq, but not for gsutil (the subject of OP's question), which returned
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket
How could the same credentials work for one but not the other!?
Eventually I tracked it down: ~/.config/gcloud/configurations/config_default looks like this:
[core]
account = xxx#xxxxxxx.xxx
project = xxxxxxxx
pass_credentials_to_gsutil = false
Why?! Why isn't this documented??
Anyway...change the flag to true, and you're all sorted.
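If the gcloud properties work the way I think they do, you can also flip it from the CLI instead of editing the file by hand (this assumes pass_credentials_to_gsutil is settable as a core property, which appears to be the case):
gcloud config set pass_credentials_to_gsutil true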