K8S Failed to pull image from local repo - docker

I have an image in my Docker repository. I am trying to create a Pod out of it, but K8S is giving the following error.
Failed to pull image "cloudanswer:latest": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
It seems K8S is connecting to https://registry-1.docker.io/v2/ instead of taking the image from the local Docker repository.
How do I make K8S take the image from the local Docker repository?

If you use a single node in your cluster, make sure this Docker image is available on that node.
You can check via
docker image ls
Also set imagePullPolicy to Never; otherwise Kubernetes will try to download the image from a remote registry.
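A minimal Pod manifest for this single-node case could look like the following sketch (it reuses the image name from the question):
apiVersion: v1
kind: Pod
metadata:
  name: cloudanswer
spec:
  containers:
  - name: cloudanswer
    image: cloudanswer:latest
    imagePullPolicy: Never   # never pull; use the image already in the node's local Docker cache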
For a multi-node cluster, you can use the Docker registry image.
Use a local registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Now tag your image properly:
docker tag ubuntu <dns-name-of-machine>:5000/ubuntu
The DNS name of the machine running the registry container should be reachable by all nodes in the network.
Now push your image to local registry:
docker push <dns-name-of-machine>:5000/ubuntu
You should be able to pull it back:
docker pull <dns-name-of-machine>:5000/ubuntu
Now change your YAML file to use the local registry.
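For example, a sketch of a Pod spec that points at the local registry (the registry placeholder is the same as above; the sleep command is only there to keep the example container alive):
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: <dns-name-of-machine>:5000/ubuntu
    imagePullPolicy: IfNotPresent
    command: ["sleep", "infinity"]   # keep the container running for this example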

imagePullPolicy should be set to IfNotPresent so that images already present in the local Docker repo are used instead of always being pulled.

Kubernetes supports a special type of secret that you can create, which will be used to fetch images for your pods. More details here
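For example, a sketch with a hypothetical secret name regcred and placeholder registry address and credentials:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry>:5000 \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
Then reference it from the pod spec:
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: app
    image: <your-registry>:5000/<image>:<tag>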

Related

Where does "docker images" look when outputting a list of images

A word of warning, this is my first posting, and I am new to docker and Kubernetes with enough knowledge to get me into trouble.
I am confused about where docker container images are being stored and listing images.
To illustrate my confusion I start with the confirmation that "docker images" indicates no image for nginx is present.
Next I create a pod running nginx.
kubectl run nginx --image=nginx is successful in pulling image "nginx" from GitHub (or that's my assumption):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/nginx to minikube
Normal Pulling 8s kubelet Pulling image "nginx"
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 833.30993ms
Normal Created 7s kubelet Created container nginx
Normal Started 7s kubelet Started container nginx
Even though the above output indicates the image is pulled, issuing "docker images" does not include nginx in the output.
If I understand correctly, when an image is pulled, it is being stored on my local disk. In my case (Linux) in /var/lib/docker.
So my first question is, why doesn't docker images list it in the output, or is the better question where does docker images look for images?
Next, if I issue a docker pull for nginx, it is pulled from what I assume to be GitHub. docker images now includes it in its output.
Just for my clarification, nothing up to this point involves a private local registry, correct?
I purposely create a basic local Docker registry using the registry container, thinking it would make things clearer since it allows me to explicitly specify a registry, but this only results in another issue:
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /registry:/var/lib/registry \
registry
I tag and push the nginx image to my newly created local registry:
docker tag nginx localhost:5000/nginx:latest
docker push localhost:5000/nginx:latest
The push refers to repository [localhost:5000/nginx]
2bed47a66c07: Pushed
82caad489ad7: Pushed
d3e1dca44e82: Pushed
c9fcd9c6ced8: Pushed
0664b7821b60: Pushed
9321ff862abb: Pushed
latest: digest: sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47 size: 1570
I now delete the original nginx image:
docker rmi nginx
Untagged: nginx:latest
Untagged: nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
... and the newly tagged one:
docker rmi localhost:5000/nginx
Untagged: localhost:5000/nginx:latest
Untagged: localhost:5000/nginx@sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47
Deleted: sha256:f652ca386ed135a4cbe356333e08ef0816f81b2ac8d0619af01e2b256837ed3e
... but from where are they being deleted?
Now the image nginx should only be present in localhost:5000/? But docker images doesn't show it in its output.
Moving on, I try to create the nginx pod once more using the image pushed to localhost:5000/nginx:latest.
kubectl run nginx --image=localhost:5000/nginx:latest --image-pull-policy=IfNotPresent
This is the new issue. The connection to localhost:5000 is refused.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 1s kubelet Pulling image "localhost:5000/nginx:latest"
Warning Failed 1s kubelet Failed to pull image "localhost:5000/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: Get "http://localhost:5000/v2/": dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 1s kubelet Error: ErrImagePull
Normal BackOff 0s kubelet Back-off pulling image "localhost:5000/nginx:latest"
Why is it that I can pull and push to localhost:5000, but pod creation fails with what appears to be an authorization issue? I try logging into the registry, but no matter what I use for the username and password, login is successful. This confuses me more.
I would try creating/specifying imagePullSecret, but based on docker login outcome, it doesn't make sense.
Clearly I am not getting it.
Someone please have pity on me and show where I have lost my way.
I will try to bring some clarity to you despite the fact that your question already contains about 1000 questions (and you'll probably have 1000 more after my answer :D)
Before you can begin to understand any of this, you need to learn a few basic things:
Docker produces images which are used by containers - it is similar to a virtual machine, but more lightweight (I'm oversimplifying, but the TL;DR is pretty much that).
Kubernetes is an orchestration tool - it is responsible for starting containers (by using already built images) and tracking their state (i.e. if this container has crashed it should be restarted, or if it's not started it should be started, etc)
Docker can run on any machine. To be able to start a container you need to build an image first. The image is essentially a lightweight mini OS (i.e. alpine, ubuntu, windows, etc) which is configured with only those dependencies you need to run your application. This image is then pushed to a public repository/registry (hub.docker.com) or to a private one. And afterwards it's used for starting containers.
Kubernetes builds on top of this and adds the "automation" layer which is responsible for scheduling and monitoring the containers. For example, you have a group of 10 servers all running nginx. One of those servers restarts - the nginx container will be automatically started by k8s.
A kubernetes cluster is the group of physical machines that are dedicated to the mentioned logical cluster. These machines have labels or tags which define the purpose of a physical node and work as a constraint on where a container will be scheduled.
Now that I have explained the minimum basics in an oversimplified way I can move with answering your questions.
When you do docker run nginx - you are instructing docker to pull the nginx image from https://hub.docker.com/_/nginx and then start it on the machine you executed the command on (usually your local machine).
When you do kubectl run nginx --image=nginx - you are instructing Kubernetes to do something similar to 1. but in a cluster. The container will be deployed to a random machine somewhere in the cluster unless you put a nodeSelector or configure affinity. If you put a nodeSelector this container (called Pod in K8S) will be placed on that specific node.
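As an illustration only, a sketch of pinning a Pod to labelled nodes (the disktype=ssd label is made up; use a label your nodes actually carry):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd   # hypothetical node label
  containers:
  - name: nginx
    image: nginx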
You have started a private registry server on your local machine. It is crucial to know that localhost inside a container will point to the container itself.
It is worth mentioning that some of the kubernetes commands will create their own container for the execution phase of the command. (remember this!)
When you run kubectl run nginx --image=nginx everything works fine, because it is downloading the image from https://hub.docker.com/_/nginx.
When you run kubectl run nginx --image=localhost:5000/nginx you are telling kubernetes to instruct docker to look for the image at localhost which is ambiguous because you have multiple layers of containers running (check 4.). This means the command that will do docker pull localhost:5000/nginx also runs in a docker container -- so there is no service running at port :5000 (the registry is running in a completely different isolated container!) :D
And this is why you are getting Error: ErrImagePull - it can't resolve localhost as it points to itself.
As for the docker rmi nginx and docker rmi localhost:5000/nginx commands - by running them you removed your local copy of the nginx images.
If you run docker run localhost:5000/nginx on the machine where you started docker run registry you should get a running nginx container.
You should definitely read the Docker Guide BEFORE you try to dig into Kubernetes or nothing will ever make sense.
Your head will stop hurting after that I promise... :D
TL;DR
docker images lists images stored in the docker daemon's data root, by default /var/lib/docker.
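You can confirm the daemon's data root with, for example:
docker info --format '{{ .DockerRootDir }}'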
When you deploy images to Kubernetes, they are pulled onto the node on which the pod is scheduled. For example, using Kubernetes in Docker (kind):
kind create cluster
kubectl run nginx --image=nginx
docker exec -it $(kubectl get pod nginx -o jsonpath={.spec.nodeName}) crictl images
crictl is a command-line interface for CRI-compatible container runtimes.
Docker images are pulled from Docker Hub by default, not GitHub. When using a local Docker registry, images are stored in the registry's data volume. The registry storage can be customized (storage.filesystem.rootdirectory); by default data is stored in /var/lib/registry.
You can use tools like skopeo to list images stored in a docker registry, for example:
skopeo list-tags docker://localhost:5000/nginx --tls-verify=false
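If you prefer not to install extra tooling, the registry's HTTP API can be queried directly (assuming the registry from the example above is listening on localhost:5000 over plain HTTP):
curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/nginx/tags/list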

Local Docker Registry using IP address instead of localhost or 127.0.0.1

By following the link local-docker-registry I am able to create a local Docker registry, and docker pull from the registry works with localhost. However, if I try docker pull with the IP address (192.168.1.100), it gets stuck. Is there any way I can use the local Docker registry from a different node? For example, I have started the Docker registry on Node_1; can Node_2 and Node_3 use the same registry to download images using a repo URL with the IP address?
sudo docker ps | grep 5000
ac85ef5e1468 registry:2 "/entrypoint.sh /etc…" 0.0.0.0:5000->5000/tcp registry
Docker pull result with localhost
$docker pull localhost:5000/<repo path>/<my image name>
2020.3.0-05-e00b8b5: Pulling from <repo path>/<my image name>
6cf436f81810: Already exists
...
7a4174f2f781: Already exists
12625989883c: Pull complete
704db5aa2eb9: Pull complete
Extracting [===> 688.1kB/8.76MB
de0615bc3c45: Download complete
fbaceba9fc67: Download complete
With the IP address it's not working:
$docker pull 192.168.1.100:5000/<repo path>/<my image name>
Error response from daemon: Get https://192.168.1.100:5000/v2/:
net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)
Thanks in advance for any suggestions.
Run this on the other nodes:
sudo service docker stop
Add the insecure-registry entry:
vi /etc/docker/daemon.json
{
"insecure-registries" : [ "192.168.1.100:5000" ]
}
Also add this (RHEL/CentOS packages only), to make it the default registry:
vi /etc/sysconfig/docker
ADD_REGISTRY='--add-registry 192.168.1.100:5000'
Start the Docker service:
sudo service docker start
Check with:
docker info
Registry: https://192.168.1.100:5000/v1/
Experimental: false
Insecure Registries:
192.168.1.100:5000
127.0.0.0/8
This has to be performed on all the nodes where you want the private registry to act as the default registry.
Now run docker pull <image name>; it should pull from the private registry.

How to access Docker Registry publicly from both sub network and outside world

I have just run a docker registry by:
$ docker run -d --name registry --restart always -p 5961:5000 registry:2.7.1
Now I can push to it by:
$ docker tag ubuntu:v2 localhost:5961/ubuntu:v2
$ docker push localhost:5961/ubuntu:v2
But not from outside: for example, I cannot push to it from another machine on the same network by executing:
$ docker tag ubuntu:v2 192.168.1.122:5961/ubuntu:v2
$ docker push 192.168.1.122:5961/ubuntu:v2
The error is:
The push refers to repository [192.168.1.122:5961/ubuntu]
Get https://192.168.1.122:5961/v2/: http: server gave HTTP response to HTTPS client
Why?
Also, I don't know how to pull this image (192.168.1.122:5961/ubuntu:v2) from the outside world. For example, by:
$ docker pull <public-ip>:5961/ubuntu:v2
Note that I can port forward the port 5961 of the machine 192.168.1.122 to the same port of <public-ip>.
1. Regarding the local network:
Your docker registry is insecure and is using HTTP, not HTTPS. So you need to define an insecure registry for the client daemon, by updating the /etc/docker/daemon.json file like so:
{
"insecure-registries" : ["192.168.1.122:5961"]
}
See: docs
2. Regarding pulling the image from the outside world:
It should work the way you described it, docker pull <public-ip>:5961/ubuntu:v2 (as long as all clients define the registry as insecure, if it is).
But please DO NOT expose an insecure registry to the outside world; unless you want everyone in the world to be able to pull your images, add some authentication mechanism in front of your registry service.
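A rough sketch of fronting the registry with TLS and basic (htpasswd) authentication, based on the standard registry:2 configuration options; the certificate files, username, password, and hostname below are placeholders you have to supply yourself:
mkdir -p auth certs   # put domain.crt and domain.key for your registry's hostname into certs/
docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > auth/htpasswd
docker run -d -p 5961:5000 --restart=always --name registry \
  -v "$(pwd)/auth:/auth" \
  -v "$(pwd)/certs:/certs" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2.7.1
docker login <your-hostname>:5961   # clients then push/pull over HTTPS with credentials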

Image pull fail when creating a pod

Just testing on local machine. Windows 7 x64, Minikube 1.14, docker toolbox.
$docker image ls does show the image I would like to use.
REPOSITORY             TAG     IMAGE ID
myname/hello-service   0.0.6   xxxxxxxxxxx
In my Pod yaml:
spec:
  containers:
  - name: my-pod
    image: myname/hello-service:0.0.6
After running $kubectl create -f pod.yaml, it failed:
Error: ImagePullBackOff
Failed to pull image "xxxxx" rpc error: code = ... manifest for myname/hello-service:0.0.6 not found
But the previous version :0.0.5 works just fine.
Both images are built on my machine and stored in Docker's "default" machine.
Can it be that myname/hello-service:0.0.6 is only on your windows host? If so, minikube cannot find it.
You have a few options to make the image accessible in Minikube. One of them is building your local image with minikube's Docker daemon. Another is running a private local Docker registry.
A few examples for this and more are [well described here](https://www.edureka.co/community/17481/local-docker-image-on-minikube).
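For example, a sketch of the first option (building directly against minikube's Docker daemon; the image name matches the one from the question, and the eval line assumes a Linux/macOS shell, minikube docker-env prints the equivalent commands for Windows shells):
eval $(minikube docker-env)                   # point the docker CLI at minikube's Docker daemon
docker build -t myname/hello-service:0.0.6 .  # the image now lives inside minikube
Then keep imagePullPolicy: IfNotPresent (or Never) in the pod spec so Kubernetes uses the image already present inside minikube.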
Try to push it to Docker Hub first:
docker tag <imageid> <dockerhub-username>/<image-name>:<tag>
docker push <dockerhub-username>/<image-name>:<tag>
and try kubectl create -f pod.yaml again.

Setting up a remote private Docker registry

I need some tips on setting up a 'remote private Docker registry'.
The README.md on Docker-Registry mainly focuses on a private registry running on the same host and does not specify how other machines can access it remotely (or maybe it is too complex for me to understand).
So far I found these threads:
Docker: Issue with pulling from a private registry from another server
(Still an open thread, no solution offered. Further discussion on GitHub hints at a proxy, but how does that work?)
Create a remote private registry
(Maybe closest to what I'm looking for, but what command do I need to access the registry from other machines?)
How to use your own registry (Again, this focuses on running the registry on the same host. It did mention running on port 443 or 80 so other machines can access it, but I need more detail!)
Running out of clues, any input very appreciated!
I was able to set up a remote private registry by referring to this:
Remote access to a private docker-registry
Steps:
On registry host, run docker run -p 5000:5000 registry
On the client host, start the Docker daemon with docker -d --insecure-registry 10.11.12.0:5000 (replace 10.11.12.0 with your own registry IP; you might want to daemonize the process so it'll continue running after the shell closes).
Edit: Alternatively, you can edit Docker's init script config (/etc/sysconfig/docker for RHEL/CentOS, /etc/default/docker for Ubuntu/Debian). Add the line other_args="--insecure-registry 10.11.12.0:5000", then do a service docker restart. This is a recommended method as it daemonizes the Docker process.
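On current Docker versions the equivalent setting goes into /etc/docker/daemon.json instead (a sketch; replace the address with your registry's):
{
  "insecure-registries" : [ "10.11.12.0:5000" ]
}
Then restart the daemon, e.g. sudo service docker restart (or sudo systemctl restart docker on systemd hosts).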
Now, try if it works:
In client, download a busybox image docker pull busybox
Give it a new tag docker tag busybox 10.11.12.0:5000/busybox
Push it to registry docker push 10.11.12.0:5000/busybox
Verify the push docker search 10.11.12.0:5000/busybox
Remove all local copies and pull it from your registry: docker rmi busybox 10.11.12.0:5000/busybox, then docker pull 10.11.12.0:5000/busybox
Running docker images should now show the image you just pulled from your own remote private registry.
I use a private registry in the following way:
It has FQDN: docker.mycompany.com
All images I create are named docker.mycompany.com/image1, docker.mycompany.com/image2, etc.
After that, everything works seamlessly:
Push image to registry:
docker push docker.mycompany.com/image1
Pull and run image:
docker run docker.mycompany.com/image2
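A Kubernetes pod can then reference the same fully qualified name (a minimal sketch; add an imagePullSecrets entry if the registry requires authentication):
spec:
  containers:
  - name: image2
    image: docker.mycompany.com/image2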
