I'm using the latest version of microk8s and docker on the same VM. microk8s registry is enabled.
I re-tagged my image argus:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
argus 0.1 6d72b6be9981 3 hours ago 164MB
localhost:32000/argus registry 6d72b6be9981 3 hours ago 164MB
then I pushed it
$ docker push localhost:32000/argus:registry
The push refers to repository [localhost:32000/argus]
c8a05c6fda3e: Pushed
5836f564d6a0: Pushed
9e3dd069b4a1: Pushed
6935b1ceeced: Pushed
d02e8e9f8523: Pushed
c5129c726314: Pushed
0f299cdf8fbc: Pushed
edaf6f6a5ef5: Pushed
9eb034f85642: Pushed
043895432150: Pushed
a26398ad6d10: Pushed
0dee9b20d8f0: Pushed
f68ef921efae: Pushed
registry: digest: sha256:0a0ac9e076e3249b8e144943026bc7c24ec47ce6559a4e087546e3ff3fef5c14 size: 3052
All seemingly working fine, but when I try to deploy a pod with:
$ microk8s kubectl create deployment argus --image=argus
deployment.apps/argus created
$ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
argus-84c8dcc968-27nlz 0/1 ErrImagePull 0 9s
$ microk8s kubectl logs argus-84c8dcc968-27nlz
Error from server (BadRequest): container "argus" in pod "argus-84c8dcc968-27nlz" is waiting to start: trying and failing to pull image
The image cannot be pulled. I tried $ microk8s ctr images ls, but that doesn't tell me anything either.
So what is it that I'm doing wrong here?
Update:
A bit of an update here; when I try:
$ microk8s ctr image pull localhost:32000/argus:registry
ctr: failed to resolve reference "localhost:32000/argus:registry": failed to do request: Head "https://localhost:32000/v2/argus/manifests/registry": http: server gave HTTP response to HTTPS client
So it seems it does not like getting an HTTP response from my local registry. I looked into the config at /var/snap/microk8s/current/args/containerd-template.toml, and there the localhost registry is correctly configured:
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
endpoint = ["http://localhost:32000"]
I'm running all of this on a CentOS 8 VM. When I installed Docker I had to use sudo dnf install docker-ce --nobest because otherwise there was some kind of conflict with containerd; maybe it has something to do with this?
Okay, there were multiple issues at play here, and I think I solved them all. First of all, I made a mistake with the Docker image. It was a test image, but it should have had something that runs continuously, because once PID 1 exits the container gets restarted; microk8s/Kubernetes assumes there is a problem. That's why there was the crash loop.
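For example, just as a sketch (assuming the image has a shell with sleep available, and a recent kubectl that accepts a trailing command), overriding the command at deploy time keeps PID 1 alive:
# sleep never exits, so the pod does not crash-loop
$ microk8s kubectl create deployment argus --image=localhost:32000/argus:registry -- sleep infinity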
Second, to check which repositories are present in the local registry, it's easiest to curl the registry's REST API with:
$ curl http://host:32000/v2/_catalog
to get a list of all images, and:
$ curl http://host:32000/v2/{repositoryName}/tags/list
to get all tags for a given repo.
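For example, with the image pushed above, the output should look roughly like this (a sketch, not verbatim output):
$ curl http://localhost:32000/v2/_catalog
{"repositories":["argus"]}
$ curl http://localhost:32000/v2/argus/tags/list
{"name":"argus","tags":["registry"]}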
Lastly, to pull from the registry to the cluster manually without getting the HTTPS error, it's necessary to add the --plain-http option like this:
$ microk8s ctr image pull --plain-http localhost:32000/repo:tag
You can use kubectl describe to check the pod.
I guess it tries to pull "argus" from docker.io.
Have you tried adding localhost:32000 to the image parameter?
microk8s kubectl create deployment argus --image=localhost:32000/argus:registry
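To confirm the pull actually works after that, a quick check could be (just a sketch):
microk8s kubectl rollout status deployment/argus   # waits until the pod is Ready
microk8s kubectl get pods -l app=argus             # create deployment sets the app=argus label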
A word of warning, this is my first posting, and I am new to docker and Kubernetes with enough knowledge to get me into trouble.
I am confused about where docker container images are being stored and listing images.
To illustrate my confusion I start with the confirmation that "docker images" indicates no image for nginx is present.
Next I create a pod running nginx.
kubectl run nginx --image=nginx is successful in pulling image "nginx" from GitHub (or that's my assumption):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/nginx to minikube
Normal Pulling 8s kubelet Pulling image "nginx"
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 833.30993ms
Normal Created 7s kubelet Created container nginx
Normal Started 7s kubelet Started container nginx
Even though the above output indicates the image is pulled, issuing "docker images" does not include nginx in the output.
If I understand correctly, when an image is pulled, it is being stored on my local disk. In my case (Linux) in /var/lib/docker.
So my first question is: why doesn't docker images list it in the output? Or is the better question: where does docker images look for images?
Next, if I issue a docker pull for nginx, it is pulled from what I assume to be GitHub. docker images now includes it in its output.
Just for my clarification, nothing up to this point involves a private local registry, correct?
I purposefully create a basic local Docker registry using the registry container, thinking it would be clearer since that allows me to explicitly specify a registry, but this only results in another issue:
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /registry:/var/lib/registry \
registry
I tag and push the nginx image to my newly created local registry:
docker tag nginx localhost:5000/nginx:latest
docker push localhost:5000/nginx:latest
The push refers to repository [localhost:5000/nginx]
2bed47a66c07: Pushed
82caad489ad7: Pushed
d3e1dca44e82: Pushed
c9fcd9c6ced8: Pushed
0664b7821b60: Pushed
9321ff862abb: Pushed
latest: digest: sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47 size: 1570
I now delete the original nginx image:
docker rmi nginx
Untagged: nginx:latest
Untagged: nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
... and the newly tagged one:
docker rmi localhost:5000/nginx
Untagged: localhost:5000/nginx:latest
Untagged: localhost:5000/nginx@sha256:4424e31f2c366108433ecca7890ad527b243361577180dfd9a5bb36e828abf47
Deleted: sha256:f652ca386ed135a4cbe356333e08ef0816f81b2ac8d0619af01e2b256837ed3e
... but from where are they being deleted?
Now the nginx image should only be present in localhost:5000/? But docker images doesn't show it in its output.
Moving on, I try to create the nginx pod once more using the image pushed to localhost:5000/nginx:latest.
kubectl run nginx --image=localhost:5000/nginx:latest --image-pull-policy=IfNotPresent
This is the new issue. The connection to localhost:5000 is refused.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 1s kubelet Pulling image "localhost:5000/nginx:latest"
Warning Failed 1s kubelet Failed to pull image "localhost:5000/nginx:latest": rpc error: code = Unknown desc = Error response from daemon: Get "http://localhost:5000/v2/": dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 1s kubelet Error: ErrImagePull
Normal BackOff 0s kubelet Back-off pulling image "localhost:5000/nginx:latest"
Why is it that I can pull and push to localhost:5000, but pod creation fails with what appears to be an authorization issue? I try logging into the registry, but no matter what I use for the username and password, login is successful. This confuses me even more.
I would try creating/specifying imagePullSecret, but based on docker login outcome, it doesn't make sense.
Clearly I'm not getting it.
Someone please have pity on me and show where I have lost my way.
I will try to bring some clarity to you despite the fact that your question already contains about 1000 questions (and you'll probably have 1000 more after my answer :D).
Before you can begin to understand any of this, you need to learn a few basic things:
Docker produces images which are used by containers - it is similar to a virtual machine, but more lightweight (I'm oversimplifying, but the TL;DR is pretty much that).
Kubernetes is an orchestration tool - it is responsible for starting containers (by using already built images) and tracking their state (i.e. if this container has crashed it should be restarted, or if it's not started it should be started, etc)
Docker can run on any machine. To be able to start a container you need to build an image first. The image is essentially a lightweight mini OS (e.g. alpine, ubuntu, windows, etc.) which is configured with only those dependencies you need to run your application. This image is then pushed to a public repository/registry (hub.docker.com) or to a private one, and afterwards it's used for starting containers.
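As a rough sketch of that cycle (the names and registry address below are just placeholders):
docker build -t myapp:0.1 .                           # build the image from a Dockerfile
docker tag myapp:0.1 registry.example.com/myapp:0.1   # name it for the target registry
docker push registry.example.com/myapp:0.1            # publish it so other machines / a cluster can pull it
docker run registry.example.com/myapp:0.1             # start a container from the image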
Kubernetes builds on top of this and adds the "automation" layer which is responsible for scheduling and monitoring the containers. For example, you have a group of 10 servers all running nginx. One of those servers restarts - the nginx container will be automatically started by k8s.
A kubernetes cluster is the group of physical machines that are dedicated to the mentioned logical cluster. These machines have labels or tags which define the purpose of physical node and work as a constraint for where a container will be scheduled.
Now that I have explained the bare basics in an oversimplified way, I can move on to answering your questions.
When you do docker run nginx - you are instructing docker to pull the nginx image from https://hub.docker.com/_/nginx and then start it on the machine you executed the command on (usually your local machine).
When you do kubectl run nginx --image=nginx - you are instructing Kubernetes to do something similar to 1. but in a cluster. The container will be deployed to a random machine somewhere in the cluster unless you put a nodeSelector or configure affinity. If you put a nodeSelector this container (called Pod in K8S) will be placed on that specific node.
You have started a private registry server on your local machine. It is crucial to know that localhost inside a container will point to the container itself.
It is worth mentioning that some of the kubernetes commands will create their own container for the execution phase of the command. (remember this!)
When you run kubectl run nginx --image=nginx everything works fine, because it is downloading the image from https://hub.docker.com/_/nginx.
When you run kubectl run nginx --image=localhost:5000/nginx you are telling Kubernetes to instruct docker to look for the image at localhost, which is ambiguous because you have multiple layers of containers running (check 4.). This means the command that does docker pull localhost:5000/nginx also runs in a docker container -- so there is no service running at port :5000 there (the registry is running in a completely different, isolated container!) :D
And this is why you are getting Error: ErrImagePull - it can't resolve localhost as it points to itself.
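A hedged way around it is to tag and pull using an address that is reachable from inside the cluster's node container instead of localhost (the IP below is a placeholder for your machine's LAN IP, and that address still has to be allowed as an insecure/plain-HTTP registry on whatever does the pulling):
docker tag nginx 192.168.1.10:5000/nginx:latest
docker push 192.168.1.10:5000/nginx:latest
kubectl run nginx --image=192.168.1.10:5000/nginx:latest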
As for the docker rmi nginx and docker rmi localhost:5000/nginx commands - by running them you removed your local copy of the nginx images.
If you run docker run localhost:5000/nginx on the machine where you started docker run registry you should get a running nginx container.
You should definitely read the Docker Guide BEFORE you try to dig into Kubernetes or nothing will ever make sense.
Your head will stop hurting after that I promise... :D
TL;DR
docker images lists images stored in the docker daemon's data root, by default /var/lib/docker.
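You can confirm the daemon's data root with:
docker info --format '{{ .DockerRootDir }}'   # typically prints /var/lib/docker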
You're deploying images to Kubernetes; the images are pulled onto the node on which the pod is scheduled. For example, using Kubernetes in Docker:
kind create cluster
kubectl run nginx --image=nginx
docker exec -it $(kubectl get pod nginx -o jsonpath={.spec.nodeName}) crictl images
crictl is a command-line interface for CRI-compatible container runtimes.
Docker images are pulled from Docker Hub by default, not GitHub. When using a local docker registry, images are stored in the registry's data volume. The registry storage may be customized; by default data is stored under /var/lib/registry (storage.filesystem.rootdirectory).
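For example, with the registry container started earlier in this thread (docker run ... --name registry -v /registry:/var/lib/registry registry) you can peek at that layout - just a sketch:
docker exec registry ls /var/lib/registry/docker/registry/v2/repositories   # one directory per pushed repository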
You can use tools like skopeo to list images stored in a docker registry, for example:
skopeo list-tags docker://localhost:5000/nginx --tls-verify=false
On Ubuntu 18, I installed Docker (19.03.12) from these instructions
https://docs.docker.com/engine/install/ubuntu/
And then went through these steps
manage docker as non-root user
https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
start on boot using systemd
https://docs.docker.com/engine/install/linux-postinstall/#configure-docker-to-start-on-boot
and set up a private docker registry using this
docker run -d -p 5000:5000 -e REGISTRY_DELETE_ENABLED=true --restart=always --name registry registry:2
I also added this to the daemon.json file
{ "insecure-registries" : ["my.registrydomain.lan:5000"] }
And restarted the docker daemon
sudo /etc/init.d/docker restart
I checked docker info to make sure the insecure registry setting was applied, and I saw this at the end, so it seems OK:
Insecure Registries:
my.registrydomain.lan:5000
127.0.0.0/8
On the same machine I start minikube (1.12.3) with this command
minikube start --driver=docker --memory=3000 --insecure-registry=my.registrydomain.lan:5000
So everything is up and running, and I proceed to apply my deployments using kubectl, except when I get to the pod that needs to pull the image from the local registry I get an ErrImagePull status. Here is part of my deployment:
spec:
containers:
- name: my-container
image: my.registrydomain.lan:5000/name:1.0.0.9
imagePullPolicy: IfNotPresent
When I describe the pod that failed using
kubectl describe pod mypod-8474577f6f-bpmp2
I see this message
Failed to pull image "my.registrydomain.lan:5000/name:1.0.0.9": rpc
error: code = Unknown desc = Error response from daemon: Get
https://my.registrydomain.lan:5000/v2/: http: server gave HTTP
response to HTTPS client
EDIT: I forgot to mention that I am able to PUSH my images into the registry without any issues from a separate machine over http (machine is Windows 10 and I set the insecure registry option in the daemon config)
I tried to reproduce your issue with the exact same settings that you provided, and this works just fine. The image is pulled without any problem. I tested this on my Debian 9 and a fresh Ubuntu installation with these settings:
minikube version: v1.12.3
docker version: v19.03.12
k8s version: v1.18.3
ubuntu version: v18
What I've done that is not described in the question is to place an entry in the minikube container's hosts file:
root@minikube:/# cat /etc/hosts
...
10.128.5.6 my.registrydomain.lan
...
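For completeness, a sketch of how such an entry can be added from the host (the IP is whatever address the registry machine has on your network):
minikube ssh "echo '10.128.5.6 my.registrydomain.lan' | sudo tee -a /etc/hosts"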
And the tag/push commands:
docker tag 4e2eef94cd6b my.registrydomain.lan:5000/name:1.0.0.9
docker push my.registrydomain.lan:5000/name:1.0.0.9
Here's the describe output from the pod:
Normal Pulling 8m19s (x5 over 10m) kubelet, minikube Pulling image "my.registrydomain.lan:5000/name:1.0.0.9"
As already suggested in the comments, you may want to check this GitHub case. It goes through a couple of solutions to your problem:
The first is to check your hosts file and update it correctly if you are hosting your registry on another node. The second solution is related to pushing images into the repository; it turned out for that user that both the insecure-registries entry and the docker push command are case sensitive. The third one is to use systemd to control the docker daemon.
Lastly, if those do not help, I would try to clear all settings, uninstall Docker, clear the Docker configuration, and start again from scratch.
I have configured a secret on Kubernetes, and inside the node I am able to pull an image with docker pull perfectly. But when kubectl tries to schedule a pod on the node, it shows an image pull backoff error. Is there any setting that needs to be done while bootstrapping? I am using a community AMI on AWS for the Kubernetes node.
Try this:
kubectl describe pod-name - see the event log at the end. It should show a series of events, starting from the initial image pull through subsequent attempts, and the pod may keep restarting in order to achieve the desired state as per the deployment record.
In most scenarios something within the container errors out, and the resulting restart is expected behavior from k8s. To check the logs: kubectl logs pod-name
Try to keep the container running so you can peek inside it for more troubleshooting, using kubectl exec -it pod-name (if there is a single container) or kubectl exec -it pod-name -c container-name.
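Putting those together, a typical sequence looks like this (pod and container names are placeholders):
kubectl describe pod pod-name                             # events at the bottom show the exact pull error
kubectl logs pod-name                                     # container logs, if the container ever started
kubectl exec -it pod-name -- /bin/sh                      # shell into a running single-container pod
kubectl exec -it pod-name -c container-name -- /bin/sh    # same, for multi-container pods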
I'm at my wits' end trying to get Docker images from Google Container Registry onto a Google Compute Engine instance. (The images I need have been successfully uploaded to GCR.)
I've logged in using gcloud auth login and then tried gcloud docker pull -- us.gcr.io/app-999/app which results in ERROR: (gcloud.docker) Docker is not installed..
I've tried to authenticate using OAuth and pulling via a normal docker call. I see my credentials when I look at the file at .docker/config.json. Doing that, it looks like it's going to work, but it ultimately ends like this:
mbname@instance-1 ~ $ docker pull -- us.gcr.io/app-999/app
Using default tag: latest
latest: Pulling from app-999/app
b7f33cc0b48e: Pulling fs layer
43a564ae36a3: Pulling fs layer
b294f0e7874b: Pulling fs layer
eb34a236f836: Waiting
error pulling image configuration: unauthorized: authentication required
which looks like progress, because at least it attempted to download something.
I've tried both of these things on my local machine as well and both methods were successful.
Am I missing something?
Thanks for your help.
P.S. I've also tried loading a container from another registry (Docker Hub) and that worked fine, but I need more than one container and want to keep expenses down.
After contacting Google support they informed me that there is a bug in the CoreOS gcloud alias. This bug is fixed by overwriting the alias in the shell as follows:
alias gcloud='(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v $HOME/.config:/.config -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker google/cloud-sdk gcloud'
I've tried this and it works now.
Docker support should be included in the latest versions of gcloud. You can update to the latest version of gcloud by running gcloud components update.
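A rough sketch of both variants, assuming the Cloud SDK on the instance is up to date:
gcloud components update
gcloud docker -- pull us.gcr.io/app-999/app   # older SDKs wrap docker directly
gcloud auth configure-docker                  # newer SDKs register a credential helper instead
docker pull us.gcr.io/app-999/app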
I used a "jenkins-1-centos7" image to deploy in my OpenShift and run projects on that Jenkins instance.
It worked successfully, and after many configuration changes I created a new image from this Jenkins container.
Now I want to use this image as a base for further development, but deploying a pod from this image fails with the error "ErrImagePull".
In my investigation, I found that OpenShift needs the image to be present in the Docker registry in order to deploy pods successfully.
I deployed another app for a Docker registry; now, when I try to push my updated image into this registry, it fails with the message "authentication required".
I've given admin privileges to my user.
docker push <local-ip>:5000/openshift/<new-updated-image>
The push refers to a repository [<local-ip>:5000/openshift/<new-updated-image>] (len: 1)
c014669e27a0: Preparing
unauthorized: authentication required
How can I make sure that the modified image gets deployed successfully?
This answer will probably need edits, because your issue can be caused by a lot of things. (I assume you are using OpenShift Origin, the open-source version, since I see the CentOS 7 image for Jenkins.)
First of all, you need to deploy the OpenShift registry in the default project.
$ oc project default
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \
--service-account=registry
A registry pod will be deployed. On top of the registry a service will be created (a sort of endpoint which functions as a load balancer in front of your pods).
This service has an IP which is inside the 172.30 range.
You can check this IP in the web console, or run (assuming you're still in the default project):
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.22.11 <none> 5000/TCP 8d
kubernetes 172.30.32.13 <none> 443/TCP,53/UDP,53/TCP 9d
router 172.30.42.42 <none> 80/TCP,443/TCP,1936/TCP 9d
So you'll need to use the service IP of your docker-registry to authenticate. You'll also need a token:
$ oc whoami -t
D_OPnWLdgEbiKJzvG1fm9dYdX..
Now you're able to perform the login and push the image:
$ docker login -u admin -e any#mail.com \
-p D_OPnWLdgEbiKJzvG1fm9dYdX 172.30.22.11:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
$ docker tag myimage:latest 172.30.22.11:5000/my-proj/myimage:latest
$ docker push 172.30.22.11:5000/my-proj/myimage:latest
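If the push succeeds, the image should show up as an image stream in that project; a quick check (project name as in the commands above):
$ oc get imagestreams -n my-proj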
Hope this helps. You can give some feedback on this answer and tell whether it works for you or which new issues you're facing.
Everything is fine; only the last line gets an authentication error:
docker push 172.30.22.11:5000/my-proj/myimage:latest
😢