I have a deployment that uses a private registry via imagePullSecrets. It runs fine, but when I try to update its image by specifying another tag like this:
kubectl set image deployment/mydeployment mycontainer=my_docker_hub_user/my_image:some_tag
my pod gets an ImagePullBackOff status with the message:
Failed to pull image "my_docker_hub_user/my_image:some_tag": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/my_docker_hub_user/my_image/manifests/some_tag: unauthorized: incorrect username or password
But I cannot find how to supply the username and password.
Kubernetes uses Secrets to store the credentials for pulling from a private Docker registry. You can check out this guide to set it up properly; most likely the Secret referenced in imagePullSecrets does not have access to your new image. You need to define a Secret that has access to the private registry and update the deployment's imagePullSecrets to point to it.
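For reference, a minimal sketch of that setup, assuming the Docker Hub image from the question (the secret name and credential values below are placeholders):
# create a docker-registry secret holding the Docker Hub credentials
kubectl create secret docker-registry my-registry-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=my_docker_hub_user \
  --docker-password='my_password' \
  --docker-email=my_email@example.com
# deployment.yaml (excerpt): reference the secret in the pod template
spec:
  template:
    spec:
      imagePullSecrets:
        - name: my-registry-secret
      containers:
        - name: mycontainer
          image: my_docker_hub_user/my_image:some_tag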
I'm using a JFrog repository as my private registry, and I have specified the secret needed to authenticate to it. The pod fails with an ImagePullBackOff error, and when I describe the pod I see:
Failed to pull image "private_registry/image_name": rpc error: code = Unknown desc = failed to pull and unpack image "private_registry/image_name": failed to do request: Head https://xx.xx.xx.xx:port-number/v2/<docker-registryname>/<application-name>/manifests/<tag>: http: server gave HTTP response to HTTPS client
Warning  Failed  23m (x4 over 24m)  kubelet, worker01  Error: ErrImagePull
When I pull the same image using docker pull, it is pulled successfully.
The error means the private registry responded over plain HTTP while the client expected HTTPS (probably because it uses a self-signed certificate or no TLS at all). Registering the registry as an insecure registry with the Docker client should resolve the error.
{ "insecure-registries":["IP:PORT"] }
An entry similar to the above needs to be added to the /etc/docker/daemon.json file, and since this is a Kubernetes environment it has to be configured on every node.
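A rough sketch of what that looks like in practice (the IP and port are placeholders; repeat this on every node in the cluster):
# add the registry to /etc/docker/daemon.json on each node, e.g.
#   { "insecure-registries": ["xx.xx.xx.xx:port-number"] }
# then restart the Docker daemon so the change takes effect
sudo systemctl restart docker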
I have logged in to Docker Hub using the CLI command docker login, entered my username and password, and I can pull and push images to Docker Hub.
However, my Kubernetes cluster can't. I am trying to apply a deployment that should pull those images into its pods, but I get the following error when running kubectl describe pod POD_NAME:
Warning Failed 9s kubelet Failed to pull image "myprivate/repo:tag": rpc error: code = Unknown desc = Error response from daemon: pull access denied for myprivate/repo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
How can I make the pods authenticate to Docker Hub the same way I do from my terminal?
Create "image pull secret" and define on your deployment. Here is how you can do https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
I am using minikube to develop my Kubernetes application. I have a private Azure registry where my images are stored. Whenever I start the app, Kubernetes starts to pull the image and throws the following error:
Failed to pull image "myregistry.azurecr.io/myapp:mytag": rpc error: code = Unknown desc = Error response from daemon: Get https://myregistry.azurecr.io/v2/myapp/manifests/mytag: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I am configuring my minikube using this documentation, where first I log in to ACR using the command below:
az acr login --name myregistry.azurecr.io --expose-token
Then, using the token provided by that command, I log in to my private Docker registry with the command below inside minikube ssh:
docker login myregistry.azurecr.io -u 00000000-0000-0000-0000-000000000000
After that, as mentioned in the document, I copy .docker/config.json to /var/lib/kubelet/config.json inside minikube ssh. Still I am facing the above error.
If I manually pull the image using docker pull, it works. I also tried with an imagePullSecret and that works too. But with the above method I get an authentication error. Am I missing a step here? Can you please help me?
Thanks...
It seems all the steps are right. Maybe you should check whether you really copied the config file to all the minikube nodes. By default, the minikube ssh command connects to the control plane, so verify the nodes' IP addresses are correct when you copy the config file to them.
But in my opinion this is not a good approach. It's better and more convenient to use an imagePullSecret together with a service account.
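A hedged sketch of that approach (the secret name acr-secret is hypothetical; the registry and username come from the question): attach the pull secret to the default service account so every pod in the namespace uses it without listing it explicitly:
# create the pull secret from the ACR token obtained via az acr login --expose-token
kubectl create secret docker-registry acr-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=00000000-0000-0000-0000-000000000000 \
  --docker-password=<token-from-az-acr-login>
# patch the default service account so pods pick up the secret automatically
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "acr-secret"}]}'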
I was trying to run docker pull on another AWS EC2 instance but got the error below.
(py36) ubuntu#ip-xxx:~$ docker pull xxxxx.dkr.ecr.eu-west-2.amazonaws.com/xxxx/xxxx:latest
Error response from daemon: Get https://xxxxx.dkr.ecr.eu-west-2.amazonaws.com/v2/xxxx/xxxx/xxx/xxxx: no basic auth credentials
I was referring to this (https://forums.docker.com/t/docker-push-to-ecr-failing-with-no-basic-auth-credentials/17358), but it didn't work. Does anyone know how to deal with it?
Check ~/.docker/config.json, which is where the ECR login stores the credentials. Also try adding
--registry-ids <some-id>
to the login command, based on "no basic auth credentials" when trying to pull an image from a private ECR.
My release pipeline runs successfully and creates a container in Azure Kubernetes, however when I view it in the Azure Portal > Kubernetes service > Insights screen, it shows a failure.
It fails to pull the image from my private container registry with the error 'ImagePullBackOff'.
I did a kubectl describe on the pod and got the error message below:
Failed to pull image "myexampleacr.azurecr.io/myacr:13": [rpc error: code = Unknown desc = Error response from daemon: Get https://myexampleacr.azurecr.io/v2/myacr/manifests/53: unauthorized: authentication required.
Below is a brief background on my setup:
I am using a Kubernetes secret to access the images in my private container registry.
I generated the Kubernetes secret using the clientId and password (secret) from the Service Principal that my DevOps team created.
The command used to generate the Kubernetes secret:
kubectl create secret docker-registry acr-auth --docker-server --docker-username --docker-password --docker-email
I then updated my deployment.yaml with imagePullSecrets referencing name: acr-auth.
After this, I ran my deployment and release pipelines; both ran successfully, but the Kubernetes service shows a failure with the 'ImagePullBackOff' error.
Any help will be much appreciated.
As the error shows, authentication is required. From your description, the likely reason is that your team did not assign the ACR role to the service principal it created, or that you are using the wrong service principal. So you need to check two things:
Whether the service principal you use has the right permission on the ACR.
Whether the Kubernetes secret was created correctly in the Kubernetes service.
To check whether the service principal has the right permission on the ACR, log in to the registry with the service principal using docker login and then pull an image from the ACR. Also, as the comment said, make sure the command is right, as below:
kubectl create secret docker-registry acr-auth --docker-server myexampleacr.azurecr.io --docker-username clientId --docker-password password --docker-email yourEmail
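As a quick verification (values are placeholders), logging in with the service principal directly and pulling the image should succeed if the role assignment is correct:
# log in to the registry with the service principal credentials
docker login myexampleacr.azurecr.io --username <clientId> --password <clientSecret>
# pull the image referenced by the deployment
docker pull myexampleacr.azurecr.io/myacr:13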
Additionally, there is a small chance you are using the wrong image or tag; note that your command references tag 13 while the error shows tag 53, so double-check that as well.
I had the same error, and I realised that the service principal had expired.
To check the expiration date of your service principal and update your AKS cluster with the new credentials, follow these steps:
NOTE: You need the Azure CLI version 2.0.65 or later installed and configured.
1- Get the Client ID of your cluster using the az aks show command.
az aks show --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId"
2- Check the expiration date of your service principal.
az ad sp credential list --id YOUR_CLIENT_ID --query "[].endDate" -o tsv
If the service principal has expired, reset the existing service principal credential with the following steps:
1- Reset the credentials using az ad sp credential reset command.
az ad sp credential reset --name YOUR_CLIENT_ID --query password -o tsv
2- Update your AKS cluster with the new service principal credentials.
az aks update-credentials --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --name YOUR_AKS_CLUSTER_NAME --reset-service-principal --service-principal YOUR_CLIENT_ID --client-secret YOUR_NEW_PASSWORD
Source: https://learn.microsoft.com/en-us/azure/aks/update-credentials
It's odd; maybe it's showing an old deployment which you didn't delete. It may also be one of these: incorrect credentials, the ACR may not be up, or the image name or tag is wrong. You can also go with AKS-ACR native authentication and never use a secret: https://learn.microsoft.com/en-gb/azure/container-registry/container-registry-auth-aks
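If you go the native-authentication route, a minimal sketch (cluster, resource group, and registry names are placeholders) is to attach the ACR to the cluster, after which no pull secret is needed:
# grant the cluster's identity pull access to the registry
az aks update --name YOUR_AKS_CLUSTER_NAME --resource-group YOUR_AKS_RESOURCE_GROUP_NAME --attach-acr myexampleacr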
In my case the problem was that my --docker-password had a special character and I was not escaping it with quotes (e.g. --docker-password 'myPwd$').
You can check whether your password is correct by executing this command:
kubectl get secret <SECRET> -n <NAMESPACE> --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/