I am running into a strange issue: docker pull works, but when I use kubectl create or apply -f with a kind cluster, I get the error below:
Warning Failed 20m kubelet, kind-control-plane Failed to pull image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": failed to copy: httpReaderSeeker: failed open: failed to do request: Get https://d3uo42mtx6z2cr.cloudfront.net/sha256/b5/b554c0d094dd848c822804a164c7eb9cc3d41db5f2f5d2fd47aba54454d95daf?Expires=1587558576&Signature=Tt9R1O4K5zI6hFG9GYt-tLAWkwlQyLoAF0NDNouFnff2ywZnPlMSo2x2aopKcQJ5cAMYYTHvYBKm2Zwk8W80tE9cRet1PfP6CnAmo2lzsYzKnRRWbgQhgsyJK8AmAvKzw7iw6lbYdP91JjEiUcpfjMAj7dMPj97tpnEnnd72kljRew8VfgBhClblnhNFvfR9fs9lRS7wNFKrZ1WUSGpNEEJZjNcc9zBNIbOyKeDPfvIpdJ6OthQMJ3EKaFEFfVN6asiyz3lOgM2IMjJ0uBI2ChhCyDx7YHTdNZCOoYAEmw8zo5Ma0n8EQpX3EwU1qSR0IwoGNawF0qV6tFAZi5lpbQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA: x509: certificate signed by unknown authority
Here is the ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRNd09Gb1hEVE13TURReU1ERXhNVE13T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlvCkNiYlFYblBxbXpUV0hwdnl6ZXdPcWo5L0NCSmFLV1lrSEVCZzJHcXhjWnFhWG92aVpOdkt3NVZsQmJvTUlSOTMKVUxiWGFVeFl4MHJyQ3pWanNKU09lWDd5VjVpY3JTOXRZTkF1eHhPZzBMM1F3SElxUEFKWkY5b1JwWG55VnZMcwpIcVBDQ2ZRblhBYWRpM3VsM2J5bjcrbVFhcU5mV0NSQkZhRVJjcXF5cDltbzduRWZ2YktybVM0TUdIUHN3eUV0CkYxeXJjc041Vlo5QkM5TWlXZnhEY1dUL2R5SXIrUjFtL3hWYlU0aGNjdkowYi9CQVJ3aUhVajNHVFpnYUtmbGwKNUE5elJsVFRNMjV6c0t5WHVLOFhWcFJlSTVCNTNqUUo3VGRPM0lkc0NqelNrbnByaFI0YmNFcll5eVNhTWN6cgo4c1l0RHNWYmtWOE9rd0pFTnlNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHdEFyckYrKzdjOGZEN09RZWxHWldxSHpyazkKbFQ2MHhmZjBtTzRFdWI3YUxzZGdSTmZuSjB5UDRVelhIeXBkZEhERFhzUHFzVHZzZ2h6MXBPNFQrVTVCVmRqQQpGWjdxZW9iUWN2NkhnSERZSjhOdy9sTHFObGMyeUtPYVJSNTNsbjRuWERWYkROaTcyeEJTbUlNN0hhOFJQSVNFCmttTndHeHFKQVM3UmFOanN0SDRzbC9LR2xKcUowNFdRZnN0b1lkTUY4MERuc0prYlVuSkQyb29oOGVHTlQ5WGsKOTZPbGdoa05yZ09ybmFOR2hTZlQxYjlxdDJZOFpGUlRrKzhhZGNNczlHWW50RzZZTW1WRzVVZDh0L1phbVlRSwpIWlJ6WDRxM3NoY1p3NWRmR2JZUmRPelVTZkhBcE9scHFOQ1FmZGxyOWMyeDMxdkRpOW4vZE9RMHVNbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:32768
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSWNDdHVsWUhYaVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBME1qSXhNVEV6TURoYUZ3MHlNVEEwTWpJeE1URXpNVEJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTArZ0JKWHBpUncxK09WaGEKVjU0bG5IMndHTzRMK1hIZjBnUjFadU01MnUwUFV3THQ5cDNCd2d5WDVHODhncUFIMmh3K1c4U2lYUi9WUUM5MgpJd3J3cnc1bFlmcTRrWDZhWEcxdFZLRjFsU2JMUHd4Nk4vejFMczlrbnlRb2piMHdXZkZ2dUJrOUtCMjJuSVozCmdOUEZZVmNVcWwyM2s3ck5yL0xzdGZncEJoVTRaYWdzbCsyZG53Qll2MVh4Z1M1UGFuTGxUcFVYODIxZ3RzQ0QKbUN1aFFyQlQzdzZ0NXlqUU5MSGNrZ3M4Y1JXUkdxZFNnZGMrdGtYczkzNDdoSzRjazdHYUw0OHFBMTgzZzBXKwpNZEllcDR3TUxGbU9XTCtGS2Q5dC83bXpMbjJ5RWdsRXlvNjFpUWRmV2s1S2Q1c1BqQUtVZXlWVTIrTjVBSlBLCndwaGFyUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGNXp5d1hSaitiakdzSG1OdjgwRXNvcXBjOEpSdVY4YnpNUQoxV0dkeGl3Mzk3TXBKVHFEaUNsTlZoZjZOOVNhVmJ2UXp2dFJqWW5yNmIybi9HREdvZDdyYmxMUWJhL2NLN1hWCm1ubTNHTXlqSzliNmc0VGhFQjZwUGNTa25yckRReFFHL09tbXE3Ulg5dEVCd2RRMHpXRGdVOFU0R0t3a3NyRmgKMFBYNE5xVnAwdHcyaVRDeE9lU0FpRnBCQ0QzS3ZiRTNpYmdZbHNPUko5S0Y3Y00xVkpuU0YzUTNZeDNsR3oxNgptTm9JanVHNWp2a3NDejc3TlFIL3Ztd2dXRXJLTndCZ0NDeEVQY1BjNFRZREU1SzBrUTY1aXc1MzR6bHZuaW5JCjZRTGYvME9QaHRtdC9FUFhRSU5PS0dKWEpkVFo1ZU9JOStsN0lMcGROREtkZjlGU3pNND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMCtnQkpYcGlSdzErT1ZoYVY1NGxuSDJ3R080TCtYSGYwZ1IxWnVNNTJ1MFBVd0x0CjlwM0J3Z3lYNUc4OGdxQUgyaHcrVzhTaVhSL1ZRQzkySXdyd3J3NWxZZnE0a1g2YVhHMXRWS0YxbFNiTFB3eDYKTi96MUxzOWtueVFvamIwd1dmRnZ1Qms5S0IyMm5JWjNnTlBGWVZjVXFsMjNrN3JOci9Mc3RmZ3BCaFU0WmFncwpsKzJkbndCWXYxWHhnUzVQYW5MbFRwVVg4MjFndHNDRG1DdWhRckJUM3c2dDV5alFOTEhja2dzOGNSV1JHcWRTCmdkYyt0a1hzOTM0N2hLNGNrN0dhTDQ4cUExODNnMFcrTWRJZXA0d01MRm1PV0wrRktkOXQvN216TG4yeUVnbEUKeW82MWlRZGZXazVLZDVzUGpBS1VleVZVMitONUFKUEt3cGhhclFJREFRQUJBb0lCQUZzYWsrT1pDa2VoOVhLUwpHY1V4cU5udTc1YklRVDJ0UjV6emJjWWVTdkZrbWdJR2NHaG15cmF5MDFyU3VDRXd6QzlwbFNXL0ZFOFZNSW0zCjNnS1M0WWRobVJUV3hpTkhXdllCMWM5YzIwQ1V2UzBPSUQyUjg1ZDhjclk0eFhhcXIrNzdiaHlvUFRMU0U0Q1kKRHlqRDQwaEdPQXhHM25ZVkNmbHJaM21VaDQ2bEo4YlROcXB5UzFCcVdNZnZwekt1ZDB6TElmMWtTTW9Cbm1XeQo0RzBrNC9qWVdEOWNwdGtSTGxvZXp5WVlCMTRyOVdNQjRENkQ5eE84anhLL0FlOEQraTl2WCtCaUdGOURSYllJCmVVQmRTQzE2QnQybW5XWGhXMmhSRFFqRmR2dzJIQ0gxT0ppcVZuWUlwbGJEcjFYVXI1NzFYWTZQMFJlQ0JRc3kKOUZpMG44RUNnWUVBMUQ3Nmlobm5YaEZyWFE2WkVEWnp3ZGlwcE5mbHExMGJxV0V5WUVVZmpPd2p3ZnJ4bzVEYgppUmoySm5Fei96bDhpVDFEbmh3bFdWZlBNbWo3bUdFMVYwMkFWSkJoT20vdU1tZnhYVmcvWGwxOVEzODdJT0tpCjBKSmdabGZqVjEyUGdRU3NnbnRrckdJa3dPcisrOUFaL3R0UVVkVlU0bFgxWWRuazZ5T1V6YWNDZ1lFQS81Y1kKcHJxMVhuNGZBTUgxMzZ2dVhDK2pVaDhkUk9xS1Zia2ZtWUw0dkI0dG9jL2N1c1JHaGZPaTZmdEZCSngwcDhpSgpDU1ZCdzIxbmNHeGRobDkwNkVjZml2ZG0vTXJlSmlyQmFlMlRRVWdsMjh1cmU3MWJEdXpjbWMrQVRQa1VXVDgyCmJpaDM5b3A1SEo5N2NlU3JVYU5zRTgxaEdIaXNSSzJEL2pCTjU0c0NnWUVBcUExeHJMVlQ5NnlOT1BKZENYUkQKOWFHS3VTWGxDT2xCQkwwYitSUGlKbCsyOUZtd3lGVGpMc3RmNHhKUkhHMjFDS2xFaDhVN1lXRmdna2FUcDVTWQplcGEzM0wwdzd1Yy9VQlB6RFhqWk8rdUVTbFJNU2Y2SThlSmtoOFJoRW9UWElrM0VGZENENXVZU3VkbVhxV1NkCm9LaWdFUnQ4Q1hZTVE3MFdQNFE5eHhNQ2dZQnBkVTJ0bGNJNkQrMzQ0UTd6VUR5VWV1OTNkZkVjdTIyQ3UxU24KZ1p2aCtzMjNRMDMvSGZjL1UreTNnSDdVelQxdzhWUmhtcWJNM1BwZUw4aFRKbFhWZFdzMWFxbHF5c1hvbDZHZwpkRzlhODByenF0REJ5THFtcU9MSThBNHZOR0xLQkVRUUpkQ0J3RmNDa1dkYzhnNGlMRHp1MnNJaVY4QTB3aWVCCkhTczN5d0tCZ1FDeXl2Tk45enk5S3dNOW1nMW5GMlh3WUVzMzB4bmsrNXJmTGdRMzQvVm1sSVA5Y1cyWS9oTWQKWnVlNWd4dnlYREcrZW9GU3Njc281TmcwLytWUDI0Sjk0cGJIcFJWV3FIWENvK2gxZjBnKzdET2p0dWp2aGVBRwpSb240NmF1clJRSG5HUStxeldWcWtpS2l1dDBybFpHd2ZzUGs4eWptVjcrWVJuamxES1hUWUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
I ran into a similar issue (I think) on OpenShift - I could pull images, but I couldn't push or get k8s to pull them. To resolve it, I had to update the docker config at /etc/sysconfig/docker and add the registry as an insecure registry. For OpenShift, the default route was required.
OPTIONS=' <some existing config stuff here> --insecure-registry=<fqdn-of-your-registry>'
Then systemctl restart docker to have the changes take effect.
You might also need to create a docker pull secret with your credentials in kubernetes to allow it to access the registry. Details here
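A sketch of the pull-secret part, with placeholder values (the secret then needs to be referenced via imagePullSecrets or attached to the service account):
kubectl create secret docker-registry regcred \
  --docker-server=<fqdn-of-your-registry> \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
For the kind question above specifically, since docker pull works on the host, another workaround is to sideload the already-pulled image into the kind nodes so the cluster never contacts the registry:
kind load docker-image quay.io/airshipit/kubernetes-entrypoint:v1.0.0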
Hello guys,
I have a K8s cluster which contains 3 nodes (1 master and 2 workers). I deployed Nexus, and I can push and pull without any issue (from the 2 workers), but when I tried to create a deployment using the image which is located in Nexus:
NEXUS_URL:NEXUS_PORT/image_name:tagname
spec:
  imagePullSecrets:
  - name: nexus-registry-key
  containers:
  - name: container-name
    image: NEXUS_URL:NEXUS_PORT/image_name:tagname
I noticed that the kubelet failed to pull the image, and that it sends an HTTPS request to Nexus: https://NEXUS_URL:NEXUS_PORT/v2/image_name/manifests/tagname
which gives this error message:
rpc error: code = Unknown desc = failed to pull and unpack image "NEXUS_URL:NEXUS_PORT/image_name:tagname": failed to resolve reference "NEXUS_URL:NEXUS_PORT/image_name:tagname": failed to do request: Head https://NEXUS_URL:NEXUS_PORT/v2/image_name/manifests/tagname: http: server gave HTTP response to HTTPS client
Any help guys please, and thank you in advance.
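The "server gave HTTP response to HTTPS client" part suggests the container runtime must be told this registry speaks plain HTTP. A minimal sketch, assuming containerd is the runtime (the "failed to pull and unpack image" wording points to it) and using the placeholders from the question - added to /etc/containerd/config.toml on each worker, followed by a restart:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."NEXUS_URL:NEXUS_PORT"]
  endpoint = ["http://NEXUS_URL:NEXUS_PORT"]
sudo systemctl restart containerd
If the nodes run Docker instead, the equivalent is an "insecure-registries" entry in /etc/docker/daemon.json.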
I'm trying to pull some containers from my private GitLab registry, but when I try I get the following error:
Failed to pull image "registry.mygitlab.com/name/container-backend:nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.mygitlab.com/v2/: x509: certificate is valid for ingress.local, not registry.mygitlab.com
In the deployment.yml I've added:
imagePullSecrets:
- name: registry-secret
and the registry-secret has been created with:
kubectl create \
--namespace=myns \
secret docker-registry registry2-secret \
--docker-server=registry.mygitlab.com \
--docker-username=myname \
--docker-password=mypassword
One month ago it worked and now it doesn't anymore... and the credentials are correct :/
The problem lies in the TLS certificate on the server, not the login credentials on the client.
Check if you have the following record in your daemon.json file:
"insecure-registries" : ["registry.mygitlab.com"]
"Also depending of the registries you are accessing, you may have to perform a "kubectl create secret docker-registry ..." action as explained here.
To create the secret you should execute this command:
$ kubectl create secret docker-registry registry-secret --docker-server=registry.gitlab.com --docker-username=<username> --docker-password=<password> --docker-email=<email>
In the deployment.yaml file you referenced a secret named registry-secret, not the registry2-secret you used in the command while creating the secret - you created a secret with the wrong name.
Finally, you may have to make the certificate known to docker by creating a new directory in /etc/docker/certs.d containing the certificates, as explained here.
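A sketch of that layout, assuming the registry host from the question and a CA file named ca.crt (the directory name must match the registry host, including the port if you use a non-standard one):
sudo mkdir -p /etc/docker/certs.d/registry.mygitlab.com
sudo cp ca.crt /etc/docker/certs.d/registry.mygitlab.com/ca.crt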
Take a look: x509-certificate-signed-by-unknown-authority, create-a-secret-that-holds-your-authorization-token.
Also take a look: gitlab-tls.
I am at the initial stage of Kubernetes. I've just created a pod using the command:
kubectl apply -f posts.yaml
It returns me the following:
pod/posts created
After that, when I run kubectl get pods, I see the following:
NAME READY STATUS RESTARTS AGE
posts 0/1 ErrImagePull 0 2m4s
Here is my posts.yaml file in below:
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
  - name: posts
    image: bappa/posts:0.0.1
This means that kubernetes could not pull the image from the repository. Does the repo maybe need some authorization to allow image pull?
You can do
kubectl describe pod posts
to get some more info.
After applying the yaml and looking into the kubectl describe pod posts output you can clearly see the error below:
Normal BackOff 21s kubelet Back-off pulling image "bappa/posts:0.0.1"
Warning Failed 21s kubelet Error: ImagePullBackOff
Normal Pulling 9s (x2 over 24s) kubelet Pulling image "bappa/posts:0.0.1"
Warning Failed 8s (x2 over 22s) kubelet Failed to pull image "bappa/posts:0.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bappa/posts, repository does not exist or may require 'docker login'
Warning Failed 8s (x2 over 22s) kubelet Error: ErrImagePull
Failed to pull image "bappa/posts:0.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bappa/posts, repository does not exist or may require 'docker login'
That means either you have the posts image in your PRIVATE bappa repository, or you are using an image that doesn't exist at all. So if this is your private repo, you should be authorized.
Maybe you wanted to use cleptes/posts:0.01?
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
  - name: posts
    image: cleptes/posts:0.01
kubectl get pods posts
NAME READY STATUS RESTARTS AGE
posts 1/1 Running 0 26m10s
kubectl describe pod posts
Normal Pulling 20s kubelet Pulling image "cleptes/posts:0.01"
Normal Pulled 13s kubelet Successfully pulled image "cleptes/posts:0.01"
Normal Created 13s kubelet Created container posts
Normal Started 12s kubelet Started container posts
Basically ErrImagePull means kubernetes is unable to locate the image bappa/posts:0.0.1. This could mean either that the registry settings are not correct on the worker nodes, or that your image name or tag is not correct.
Just like @Henry explained, issue a kubectl describe pod posts and inspect (and share) the error messages.
If you are using a private repository you need to be authorized. If you are authorized and you still can't reach the repository, it might be because you are using a free account on Docker Hub and have more private repositories than the single one included for free. If you try to push your repository again you should get the error 'denied: requested access to the resource is denied'.
If you make your repository public it should solve your issue.
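A quick way to check which case you are in, using the image name from the question (an anonymous pull succeeds only for public repositories):
docker logout
docker pull bappa/posts:0.0.1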
I have set up a private docker registry with self-signed certificates.
docker run -d -p 443:5000 --restart=always --name registry \
  -v `pwd`/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/domain.key \
  registry:2
domain.crt and domain.key are generated using OpenSSL.
To connect from a remote host:
cp domain.crt /etc/pki/ca-trust/source/anchors/mydockerregistry.com.crt
update-ca-trust
systemctl daemon-reload
systemctl restart docker
After this, I am able to log in from the remote host:
docker login mydockerregistry.com --username=test
password: test
I am able to push/pull the image to this registry and it is successful.
Similarly, I tried to deploy this image in the Kubernetes cluster. I created a secret for the registry with the username and password:
kubectl create secret docker-registry my-registry --docker-server=myregistryregistry.com --docker-username=test --docker-password=test --docker-email=abc.com
Also, I did the self-signed certificate trust steps from the docker registry setup on the worker nodes:
cp domain.crt /etc/pki/ca-trust/source/anchors/mydockerregistry.com.crt
update-ca-trust
systemctl daemon-reload
systemctl restart docker
I gave that secret name in the imagePullSecrets of the deployment.yaml file. I am trying to create a pod in the Kubernetes cluster (Calico network) but it is unable to pull the image.
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
    chart: test-image
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "mydockerregistry.com/test-image:latest"
    imagePullPolicy: Always
  imagePullSecrets:
  - name: my-registry
Warning  Failed  45s (x2 over 59s)  kubelet, kube-worker-02  Failed to pull image "mydockerregistry.com/test-image:latest": rpc error: code = Unknown desc = unauthorized: authentication required
Warning  Failed  45s (x2 over 59s)  kubelet, kube-worker-02  Error: ErrImagePull
I checked the docker registry logs:
time="2020-01-13T14:58:05.269921112Z" level=error msg="error authenticating user \"\": authentication failure" go.version=go1.11.2 http.request.host=mydockerregistry.com http.request.id=02fcccff-9a30-443c-8a00-48bcacb90e99 http.request.method=GET http.request.remoteaddr="10.76.112.148:35454" http.request.uri="/v2/test-image/manifests/latest" http.request.useragent="docker/1.13.1 go/go1.10.8 kernel/3.10.0-957.21.3.el7.x86_64 os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)" vars.name=test-image vars.reference=latest
time="2020-01-13T14:58:05.269987492Z" level=warning msg="error authorizing context: basic authentication challenge for realm \"Registry Realm\": authentication failure" go.version=go1.11.2 http.request.host=mydockerregistry.com http.request.id=02fcccff-9a30-443c-8a00-48bcacb90e99 http.request.method=GET http.request.remoteaddr="10.76.112.148:35454" http.request.uri="/v2/ca-config-calc/manifests/latest" http.request.useragent="docker/1.13.1 go/go1.10.8 kernel/3.10.0-957.21.3.el7.x86_64 os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)" vars.name=test-image vars.reference=latest
I am able to do docker login myregistrydomain and pull the image from the worker node.
Anything I am missing in the configuration?
You have a typo in the registry name in the create secret command.
kubectl create secret docker-registry my-registry --docker-server=myregistryregistry.com --docker-username=test --docker-password=test --docker-email=abc.com
Change myregistryregistry.com to mydockerregistry.com which you have used with docker login.
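A sketch of the fix, reusing the values from the question (delete the mistyped secret first, then recreate it with the server name that matches docker login):
kubectl delete secret my-registry
kubectl create secret docker-registry my-registry --docker-server=mydockerregistry.com --docker-username=test --docker-password=test --docker-email=abc.com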
I've been able to successfully pull an image from a secure, private, docker registry into kubernetes using this link.
I'm stepping through Kubernetes in Action to get more than just familiarity with Kubernetes.
I already had a Docker Hub account that I've been using for Docker-specific experiments.
As described in chapter 2 of the book, I built the toy "kubia" image, and I was able to push it to Docker Hub. I verified this again by logging into Docker Hub and seeing the image.
I'm doing this on Centos7.
I then run the following to create the replication controller and pod running my image:
kubectl run kubia --image=davidmichaelkarr/kubia --port=8080 --generator=run/v1
I waited a while for the status to change, but it never finishes downloading the image. When I describe the pod, I see something like this:
Normal Scheduled 24m default-scheduler Successfully assigned kubia-25th5 to minikube
Normal SuccessfulMountVolume 24m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4"
Normal Pulling 22m (x4 over 24m) kubelet, minikube pulling image "davidmichaelkarr/kubia"
Warning Failed 22m (x4 over 24m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
So I then constructed the following command:
curl -v -u 'davidmichaelkarr:**' 'https://registry-1.docker.io/v2/'
Which uses the same password I use for Docker Hub (they should be the same, right?).
This gives me the following:
* About to connect() to proxy *** port 8080 (#0)
* Trying **.**.**.**...
* Connected to *** (**.**.**.**) port 8080 (#0)
* Establish HTTP proxy tunnel to registry-1.docker.io:443
* Server auth using Basic with user 'davidmichaelkarr'
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.docker.io
* start date: Aug 02 00:00:00 2017 GMT
* expire date: Sep 02 12:00:00 2018 GMT
* common name: *.docker.io
* issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US
* Server auth using Basic with user 'davidmichaelkarr'
> GET /v2/ HTTP/1.1
> Authorization: Basic ***
> User-Agent: curl/7.29.0
> Host: registry-1.docker.io
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
< Date: Wed, 24 Jan 2018 18:34:39 GMT
< Content-Length: 87
< Strict-Transport-Security: max-age=31536000
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
* Connection #0 to host *** left intact
I don't understand why this is failing auth.
Update:
Based on the first answer and the info I got from this other question, I edited the description of the service account, adding the "imagePullSecrets" key, then I deleted the replicationcontroller again and recreated it. The result appeared to be identical.
This is the command I ran to create the secret:
kubectl create secret docker-registry regsecret --docker-server=registry-1.docker.io --docker-username=davidmichaelkarr --docker-password=** --docker-email=**
Then I obtained the yaml for the serviceaccount, added the key reference for the secret, then set that yaml as the settings for the serviceaccount.
These are the current settings for the service account:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: regsecret
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-01-24T00:05:01Z
  name: default
  namespace: default
  resourceVersion: "81492"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 38e2882c-009a-11e8-bf43-080027ae527b
secrets:
- name: default-token-x5nl4
Here's the updated events list from the describe of the pod after doing this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m default-scheduler Successfully assigned kubia-f56th to minikube
Normal SuccessfulMountVolume 7m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4"
Normal Pulling 5m (x4 over 7m) kubelet, minikube pulling image "davidmichaelkarr/kubia"
Warning Failed 5m (x4 over 7m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal BackOff 4m (x6 over 7m) kubelet, minikube Back-off pulling image "davidmichaelkarr/kubia"
Warning FailedSync 2m (x18 over 7m) kubelet, minikube Error syncing pod
What else might I be doing wrong?
Update:
I think it's likely that all these issues with authentication are unrelated to the real issue. The key point is what I see in the pod description (breaking into multiple lines to make it easier to see):
Warning Failed 22m (x4 over 24m) kubelet,
minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code =
Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/:
net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)
The last line seems like the most important piece of information at this point. It's not failing authentication, it's timing out the connection. In my experience, something like this is usually caused by issues getting through a firewall/proxy. We do have an internal proxy, and I have those environment variables set in my environment, but what about the "serviceaccount" that kubectl is using to make this connection? Do I have to somehow set a proxy configuration in the serviceaccount description?
You need to make sure the Docker daemon running in the Minikube VM uses your corporate proxy by starting minikube along these lines:
minikube start --docker-env http_proxy=http://proxy.corp.com:port --docker-env https_proxy=http://proxy.corp.com:port --docker-env no_proxy=192.168.99.0/24
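To verify the daemon inside the VM actually picked up the proxy settings, a quick check (docker info prints the HTTP Proxy / HTTPS Proxy fields when they are set):
minikube ssh "docker info" | grep -i proxy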
I faced the same issue a couple of times.
Updating here, it might be useful for someone.
First, describe the pod (kubectl describe pod <pod_name>).
1. If you see access denied/repository does not exist errors like
Error response from daemon: pull access denied for test/nginx,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied
Solution:
If it is a local K8s cluster, you need to log in to the docker registry first, OR,
if it is a Kubernetes cluster on cloud, create a secret for the registry and add imagePullSecrets
along with the secret name (see the sketch after this list).
2. If you get timeout error,
Error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while
awaiting headers)
Solution:
Check that the node is able to connect to the network and reach the private/public registry.
If it is an AWS EKS cluster, you need to enable auto-assign public IP on the subnet where the EC2 node is running.
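A sketch of the secret step from case 1, with placeholder values, followed by a connectivity probe for case 2 run from the node (an HTTP 401 from the /v2/ endpoint still proves the registry is reachable):
kubectl create secret docker-registry my-pull-secret \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-pword>
curl -v https://registry-1.docker.io/v2/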
To fetch images stored in registries that require credentials, you need to create a special docker-registry secret and reference it through imagePullSecrets.
kubectl create secret docker-registry regsecret --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then create the Pod specifying the imagePullSecrets field
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regsecret
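To double-check what ended up in the secret, you can decode the stored docker config (a sketch using the secret name from above):
kubectl get secret regsecret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode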
As mentioned in my comment to the original post, I had the same issue. The only thing of note is that minikube had been up since creation. I restarted the underlying VM and image pulls started working.
This seems to be quite an old issue, but I had a similar issue and solved it by logging in to my docker account.
You can try it by deleting the existing failed pods, running the docker login command (logging in to your account), then retrying the pod creation.
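A sketch of that sequence, reusing the posts example from earlier in the thread:
kubectl delete pod posts
docker login
kubectl apply -f posts.yaml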