My release pipeline runs successfully and creates a container in Azure Kubernetes Service, but when I view it in Azure Portal > Kubernetes service > Insights, it shows a failure.
It fails to pull the image from my private container registry with the error 'ImagePullBackOff'.
I did a kubectl describe on the pod and got the following error message:
Failed to pull image "myexampleacr.azurecr.io/myacr:13": [rpc error: code = Unknown desc = Error response from daemon: Get https://myexampleacr.azurecr.io/v2/myacr/manifests/53: unauthorized: authentication required.
Below is a brief background on my setup:
I am using a Kubernetes secret to access the images in my private container registry.
I generated the Kubernetes secret using the client ID and password (secret) of the service principal that my DevOps team created.
The command used to generate the Kubernetes secret:
kubectl create secret docker-registry acr-auth --docker-server <server> --docker-username <client-id> --docker-password <password> --docker-email <email>
I then updated my deployment.yaml with imagePullSecrets referencing acr-auth.
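The relevant part of the deployment.yaml looks like this (other fields omitted; the container name here is just a placeholder):
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myexampleacr.azurecr.io/myacr:13
      imagePullSecrets:
      - name: acr-auth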
After this, both my deployment and release pipelines ran successfully, but the Kubernetes service shows a failure with the 'ImagePullBackOff' error.
Any help will be much appreciated.
As the error shows, authentication is required. From your description, the likely cause is that your team did not assign an ACR role to the service principal it created, or that you are using the wrong service principal. So you need to check two things:
Whether the service principal you use has the right permissions on the ACR.
Whether the Kubernetes secret was created correctly in the Kubernetes cluster.
To check whether the service principal has the right permissions on the ACR, log in to Docker with the service principal credentials and try to pull an image from the registry (see the example after the command below). Also, as the comment said, you need to make sure the secret-creation command is right, as below:
kubectl create secret docker-registry acr-auth --docker-server myexampleacr.azurecr.io --docker-username clientId --docker-password password --docker-email yourEmail
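For example, you can verify the pull permission directly (registry name and tag taken from the question; clientId and password are the same placeholders as above):
docker login myexampleacr.azurecr.io --username clientId --password password
docker pull myexampleacr.azurecr.io/myacr:13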
Additionally, there is a small possibility that you are using the wrong image name or tag, so check that as well.
I had the same error, and I realised that the service principal had expired.
To check the expiration date of your service principal and update your AKS cluster with the new credentials, follow these steps:
NOTE: You need the Azure CLI version 2.0.65 or later installed and configured.
1- Get the Client ID of your cluster using the az aks show command.
az aks show --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId"
2- Check the expiration date of your service principal.
az ad sp credential list --id YOUR_CLIENT_ID --query "[].endDate" -o tsv
If the service principal has expired, follow these steps to reset the existing service principal credential:
1- Reset the credentials using az ad sp credential reset command.
az ad sp credential reset --name YOUR_CLIENT_ID --query password -o tsv
2- Update your AKS cluster with the new service principal credentials.
az aks update-credentials --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --reset-service-principal --service-principal YOUR_CLIENT_ID --client-secret YOUR_NEW_PASSWORD
Source: https://learn.microsoft.com/en-us/azure/aks/update-credentials
It's odd; maybe it shows an old deployment which you didn't delete. It may also be one of these: incorrect credentials, the ACR being unavailable, or a wrong image name or tag. You can also go with AKS-ACR native authentication and never use a secret: https://learn.microsoft.com/en-gb/azure/container-registry/container-registry-auth-aks
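For example, attaching the registry grants the cluster's identity pull access without any secret (resource names here are placeholders):
az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr myexampleacr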
In my case the problem was that my --docker-password had a special character and I was not escaping it with quotes (i.e. --docker-password 'myPwd$').
You can check that your password is correct by executing this command:
kubectl get secret <SECRET> -n <NAMESPACE> --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
How can I add an auth key from a service account (for GCP Container Registry) to docker daemon.json?
Normally I write the URL and user:pass in base64 into docker daemon.json, and Docker can then pull from the private registry.
How about GCP Container Registry? I generated a JSON key and it works:
docker login -u _json_key --password-stdin https://gcr.io < credentials.json
I can log in to GCP Container Registry and pull images from it, but how can I add this key to docker daemon.json so that Docker automatically pulls from the private repo?
Thanks.
It seems that you have already chosen your authentication method:
Choosing an authentication method
gcloud credential helper
Standalone docker credential helper
Access Token
JSON key file
Regarding the JSON key file, use the following guidelines to limit access to your container images:
Create dedicated service accounts that are only used to interact with Container Registry.
Grant the specific role for the least amount of access that the service account requires.
Follow best practices for managing credentials.
To create a new service account and a service account key for use with Container Registry repositories only:
Create a new service account that will interact with Container Registry.
You can run the following commands using Cloud SDK on your local machine, or in Cloud Shell.
a. Create the service account. Replace NAME with a name for the service account.
gcloud iam service-accounts create NAME
b. Grant a role to the service account. Replace PROJECT_ID with your project ID and ROLE with the appropriate Cloud Storage role for the service account.
gcloud projects add-iam-policy-binding PROJECT_ID --member "serviceAccount:NAME@PROJECT_ID.iam.gserviceaccount.com" --role "roles/ROLE"
Obtain a key for the service account that will interact with Container Registry.
You can run the following command using Cloud SDK on your local machine, or in Cloud Shell. The instructions on this page use the file name keyfile.json for the key file.
gcloud iam service-accounts keys create keyfile.json --iam-account [NAME]@[PROJECT_ID].iam.gserviceaccount.com
Verify that permissions are correctly configured for the service account. If you are using the Compute Engine service account, you must correctly configure both permissions and access scopes.
Use the service account key as your password to authenticate with Docker.
Username is _json_key (NOT the name of your service account)
keyfile.json is the service account key you created
for example:
cat keyfile.json | docker login -u _json_key --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io.
Or, for older Docker clients which don't support --password-stdin:
docker login -u _json_key -p "$(cat keyfile.json)" https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io.
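Note that docker login stores the resulting credential in ~/.docker/config.json rather than daemon.json, so Docker picks it up automatically for later pulls. The stored entry looks roughly like this:
{
  "auths": {
    "https://gcr.io": {
      "auth": "<base64 of _json_key:contents-of-keyfile.json>"
    }
  }
}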
I am using minikube to develop my Kubernetes application. I have a private Azure registry where my images are saved. Whenever I start the app, Kubernetes starts to pull the image and throws the following error:
Failed to pull image "myregistry.azurecr.io/myapp:mytag": rpc error: code = Unknown desc = Error response from daemon: Get https://myregistry.azurecr.io/v2/myapp/manifests/mytag: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I am configuring my minikube using this documentation, where first I log in to ACR using the command below:
az acr login --name myregistry.azurecr.io --expose-token
Then, using the token provided by the above command as the password, I log in to my private Docker registry with the command below inside minikube ssh:
docker login myregistry.azurecr.io -u 00000000-0000-0000-0000-000000000000
After that, as mentioned in the document, I copy .docker/config.json to /var/lib/kubelet/config.json in minikube ssh. Still, I am facing the above error.
If I manually pull the image using the docker pull command, it works. I tried with an imagePullSecret as well and it works. But with the above method I get an authentication error. Am I missing a step here? Can you please help me?
Thanks...
It seems all the steps are right. Maybe you should check whether you really copied the config file to all the minikube nodes. By default, the command minikube ssh connects to the control plane. Check that the nodes' IP addresses are right when you copy the config file to them.
But in my opinion, this is not a good approach. It's better and more convenient to use an imagePullSecret and a service account, as sketched below.
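For example, an existing pull secret can be attached to the default service account, so every pod using that service account gets it automatically (the secret name acr-auth is the one from the earlier question):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "acr-auth"}]}'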
I've discovered a flow that works through GCP console but not through the gcloud CLI.
Minimal Repro
The following bash snippet creates a fresh GCP project and attempts to push an image to gcr.io, but fails with "access denied" even though the user is project owner:
gcloud auth login
PROJECT_ID="example-project-20181120"
gcloud projects create "$PROJECT_ID" --set-as-default
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker --quiet
mkdir ~/docker-source && cd ~/docker-source
git clone https://github.com/mtlynch/docker-flask-upload-demo.git .
LOCAL_IMAGE_NAME="flask-demo-app"
GCR_IMAGE_PATH="gcr.io/${PROJECT_ID}/flask-demo-app"
docker build --tag "$LOCAL_IMAGE_NAME" .
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH"
docker push "$GCR_IMAGE_PATH"
Result
The push refers to repository [gcr.io/example-project-20181120/flask-demo-app]
02205dbcdc63: Preparing
06ade19a43a0: Preparing
38d9ac54a7b9: Preparing
f83363c693c0: Preparing
b0d071df1063: Preparing
90d1009ce6fe: Waiting
denied: Token exchange failed for project 'example-project-20181120'. Access denied.
The system is Ubuntu 16.04 with the latest version of gcloud (225.0.0 as of this writing). The account I auth'ed with has the role roles/owner.
Inconsistency with GCP Console
I notice that if I follow the same flow through GCP Console, I can docker push successfully:
Create a new GCP project via GCP Console
Create a service account with roles/owner via GCP Console
Download JSON key for service account
Enable container registry API via GCP Console
gcloud auth activate-service-account --key-file key.json
gcloud config set project $PROJECT_ID
gcloud auth configure-docker --quiet
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH" && docker push "$GCR_IMAGE_PATH"
Result: Works as expected. Successfully pushes docker image to gcr.io.
Other attempts
I also tried using gcloud auth login as my @gmail.com account, then using that account to create a service account with gcloud, but that gets the same denied error:
SERVICE_ACCOUNT_NAME=test-service-account
gcloud iam service-accounts create "$SERVICE_ACCOUNT_NAME"
KEY_FILE="${HOME}/key.json"
gcloud iam service-accounts keys create "$KEY_FILE" \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/owner
gcloud auth activate-service-account --key-file="${HOME}/key.json"
docker push "$GCR_IMAGE_PATH"
Result: denied: Token exchange failed for project 'example-project-20181120'. Access denied.
I tried to reproduce the same error using the bash snippet you provided; however, it successfully built the ‘flask-demo-app’ container registry image for me. I used the following steps to try to reproduce the issue:
Step 1: Used an account which has ‘role: roles/owner’ and ‘role: roles/editor’
Step 2: Created a bash script using your given snippet
Step 3: Added ‘gcloud auth activate-service-account --key-file skey.json’ to the script to authenticate the account
Step 4: Ran the bash script
Result: It created the ‘flask-demo-app’ container registry image
This leads me to believe that there might be an issue with your environment which is causing this error for you. To troubleshoot this you could try running your code on a different machine, a different network or even on the Cloud Shell.
In my case, project IAM permissions were the issue. Make sure the proper permissions are granted and that the Container Registry API is enabled.
Reference: GCP Access Control
I am setting up HashiCorp Vault in my development environment in -dev mode and trying to use a token created from a policy to access the secret the policy was created for, but I get "*permission denied" when I try to access the secret from the CLI or API. Based on the Vault documentation, it should work.
The following is what I have done to set it up:
Set up the docker container using docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -v //c/config:/config vault
Connect to the docker container using docker exec -it {docker name} ash. (I know it would normally be bash, but bash doesn't work and ash does; the official Vault image is Alpine-based, which ships ash rather than bash.)
Inside the container, export VAULT_ADDR='http://127.0.0.1:8200'
Set the root token in an environment variable: export VAULT_TOKEN='roottoken'
Create a secret: vault write secret/foo/bar value=secret
Create a policy file called secret.hcl with the content:
path "secret/foo/*" {
  policy = "read"
}
Create a policy from the file: vault policy-write secret /config/secret.hcl, and make sure the policy is created
Create a token for the policy just created: vault token-create -policy="secret"
Try to access the secret using the API at 'http://127.0.0.1:8200/v1/secret/foo', passing X-Vault-Token='token created in step 8' in the header
Getting a "*permission denied" error
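For reference, the request in that last step was roughly this (the token value is the one printed in step 8):
curl -H "X-Vault-Token: <token>" http://127.0.0.1:8200/v1/secret/foo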
Would be great if someone could shed some light.
I am trying to push a Docker image to Google Container Registry from a CircleCI build, as per their instructions. However, pushing to GCR fails due to an apparent authentication error:
Using 'push eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test' for DOCKER_ARGS.
The push refers to a repository [eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test] (len: 1)
Post https://eu.gcr.io/v2/realtimemusic-147914/realtimemusic-test/realtimemusic-test/blobs/uploads/: token auth attempt for registry: https://eu.gcr.io/v2/token?account=oauth2accesstoken&scope=repository%3Arealtimemusic-147914%2Frealtimemusic-test%2Frealtimemusic-test%3Apush%2Cpull&service=eu.gcr.io request failed with status: 403 Forbidden
Prior to pushing the Docker image, I authenticated the service account against Google Cloud:
echo $GCLOUD_KEY | base64 --decode > ${HOME}/client-secret.json
gcloud auth activate-service-account --key-file ${HOME}/client-secret.json
gcloud config set project $GCLOUD_PROJECT_ID
Then I build the image and push it to GCR:
docker build -t $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test -f docker/test/Dockerfile .
gcloud docker push -- $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test
What am I doing wrong here?
Have you tried using the _json_key method for authenticating with Docker?
https://cloud.google.com/container-registry/docs/advanced-authentication
After that, please use naked 'docker' (without 'gcloud').
If you are pushing a Docker image using the Google Cloud SDK, you can use temporary authorization with the following command:
gcloud docker --authorize-only
The above command gives you temporary authorization for pushing and pulling images using Docker.
You can refer to the gcloud docker reference for details.
Hope it helps to solve your issue.
After many retries... I solved it using an access token:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://[HOSTNAME]
The service account requires permission to write to the Cloud Storage bucket containing the container registry. Granting the service account either the project editor role or write access to the bucket (via ACL) solves the issue. The latter is preferable, since the account doesn't receive wider permissions than it needs.
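For example, write access on the underlying bucket can be granted like this (a sketch; SA_NAME is a placeholder, and for eu.gcr.io the backing bucket is named eu.artifacts.PROJECT_ID.appspot.com):
gsutil iam ch serviceAccount:SA_NAME@realtimemusic-147914.iam.gserviceaccount.com:roles/storage.objectAdmin gs://eu.artifacts.realtimemusic-147914.appspot.com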