Kubernetes on Docker Desktop does not recognize local images

I am trying to deploy a Windows container image on the following software stack:
Windows 10 Pro + Docker Desktop + the Kubernetes cluster embedded in Docker Desktop
For some reason the embedded Kubernetes does not recognize local images, no matter what --image-pull-policy is set.
Docker images
PS C:\WINDOWS\system32> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myimg final 90c09acbfc59 15 hours ago 5.45GB
Kubectl run
PS C:\WINDOWS\system32> kubectl run --image=myimg:final tskuberun
Pod output
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25s default-scheduler Successfully assigned default/tskuberun to docker-desktop
Normal BackOff 23s (x2 over 24s) kubelet Back-off pulling image "myimg:final"
Warning Failed 23s (x2 over 24s) kubelet Error: ImagePullBackOff
Normal Pulling 9s (x2 over 25s) kubelet Pulling image "myimg:final"
Warning Failed 8s (x2 over 25s) kubelet Failed to pull image "myimg:final": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
Warning Failed 8s (x2 over 25s) kubelet Error: ErrImagePull
However, when I execute docker run, it uses the local image. The following worked as expected:
PS C:\WINDOWS\system32> docker run myimg:final
I googled for an answer, but most of the links were related to Unix flavors and Minikube.
Only a few were related to Docker Desktop with embedded Kubernetes, and unfortunately none of them resolved the issue.
I am struggling to get past this issue. Any help is highly appreciated.
EDIT
On further investigation, I observed that Docker Desktop does use local images when I have selected the "Switch to Linux containers" option.
Kubectl run for Linux image
PS C:\WINDOWS\system32> kubectl run --image=wphp --image-pull-policy=IfNotPresent lntest
PS C:\WINDOWS\system32> kubectl describe pod/lntest
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40s default-scheduler Successfully assigned default/lntest to docker-desktop
Normal Pulled 2s (x4 over 39s) kubelet Container image "wphp" already present on machine
Normal Created 2s (x4 over 39s) kubelet Created container lntest
Normal Started 2s (x4 over 39s) kubelet Started container lntest
It appears that this issue occurs only for Windows containers, i.e. Docker Desktop does NOT use local images when I have selected the "Switch to Windows containers" option.

Although imagePullPolicy: Never should do the trick for you, there could be some certificate-related issues.
Personally, I avoid using locally built Docker images because of those issues.
You can try integrating a docker push to Docker Hub into your workflow, or run a Docker registry inside your Kubernetes cluster, e.g. following https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/
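For completeness, a minimal manifest sketch with that policy, using the image name from the question (the pod name is arbitrary); with imagePullPolicy: Never the kubelet will only ever use an image that is already present on the node:
apiVersion: v1
kind: Pod
metadata:
  name: tskuberun
spec:
  containers:
  - name: tskuberun
    image: myimg:final
    imagePullPolicy: Never
The same thing can be expressed on the command line with kubectl run tskuberun --image=myimg:final --image-pull-policy=Never.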

The VM used by Docker Desktop is unable to reach the internet (the DNS lookup for registry-1.docker.io fails). You'll have to sort out that networking.

Related

kubectl deploy from within kubernetes container

How do you deploy from within a Kubernetes container, using CI/CD?
Scenario:
I am building within Kubernetes using Kaniko.
Now, how do I run kubectl within Kubernetes? I do have the right serviceAccount for it. The first problem is to have a container ready for executing kubectl.
Note: - /bin/cat
I found this, but it gives errors:
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-deploy
spec:
  containers:
  - name: kubectl
    image: bitnami/kubectl:latest
    imagePullPolicy: Always
    command:
    - /bin/cat
    tty: true
Errors:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 78s default-scheduler Successfully assigned default/kubectl-deploy to master
Normal Pulled 76s kubelet Successfully pulled image "bitnami/kubectl:latest" in 874.059036ms
Normal Pulled 74s kubelet Successfully pulled image "bitnami/kubectl:latest" in 860.59161ms
Normal Pulled 60s kubelet Successfully pulled image "bitnami/kubectl:latest" in 859.31958ms
Normal Pulling 33s (x4 over 77s) kubelet Pulling image "bitnami/kubectl:latest"
Normal Created 32s (x4 over 76s) kubelet Created container kubectl
Normal Started 32s (x4 over 76s) kubelet Started container kubectl
Normal Pulled 32s kubelet Successfully pulled image "bitnami/kubectl:latest" in 849.398179ms
Warning BackOff 7s (x7 over 73s) kubelet Back-off restarting failed container
I found this, but it gives errors
When you run a Pod in Kubernetes, it is by default expected to be a long-running service. But in your case, you run a one-off command that terminates immediately. To run one-off commands in Kubernetes, it is easiest to run them as Kubernetes Jobs.
The first problem is to have a container ready for executing kubectl.
Since you are using Tekton, have a look at the "deploy task" from Tekton Hub; it is configured with an image that includes kubectl.
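A minimal sketch of such a Job, reusing the bitnami/kubectl image from the question (the service account name and the deployment being restarted are hypothetical placeholders; the service account needs the corresponding RBAC permissions):
apiVersion: batch/v1
kind: Job
metadata:
  name: kubectl-deploy
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: deployer        # hypothetical; use your own service account
      restartPolicy: Never
      containers:
      - name: kubectl
        image: bitnami/kubectl:latest
        command:
        - kubectl
        - rollout
        - restart
        - deployment/my-app               # hypothetical deployment name
Unlike a plain Pod, the Job is considered complete once kubectl exits with status 0, so there is no restart loop.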

Installing Jenkins on minikube shows Failed to pull image "jenkins/jenkins:2.303.3-jdk11"

Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned jenkins/jenkins-0 to minikube
Normal BackOff 31s kubelet, minikube Back-off pulling image "jenkins/jenkins:2.303.3-jdk11"
Warning Failed 31s kubelet, minikube Error: ImagePullBackOff
Normal Pulling 17s (x2 over 47s) kubelet, minikube Pulling image "jenkins/jenkins:2.303.3-jdk11"
Warning Failed 1s (x2 over 32s) kubelet, minikube Failed to pull image "jenkins/jenkins:2.303.3-jdk11": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 1s (x2 over 32s) kubelet, minikube Error: ErrImagePull
The above error is what I am seeing when trying to install Jenkins on a minikube cluster. I am following along with this guide: https://www.jenkins.io/doc/book/installing/kubernetes/
I would appreciate any ideas.
I tried minikube with the VirtualBox driver and that worked out of the box.
But I wanted to get the Docker driver working, which I wasn't able to at first.
Finally, I deleted everything (I even reinstalled Ubuntu) and set up the Kubernetes cluster again with the latest version (before that I had tried with --version=1.19.0 of k8s).
I used minikube start --driver=docker and then followed the official Jenkins install with Helm 3, also the latest LTS.
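For reference, the Helm 3 part of that guide boils down to something like the following (the release and namespace names here are just common defaults; the linked guide also passes a values file with more configuration):
helm repo add jenkinsci https://charts.jenkins.io
helm repo update
helm install jenkins jenkinsci/jenkins -n jenkins --create-namespace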

Getting an error when trying to find a local image with helm/docker

I have a local Kubernetes cluster (minikube) that is trying to load images from my local Docker repo.
When I do a "docker images", I get:
cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
Given that I know the above images are there, I run some helm commands which use these images, but I get the below error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
Does anyone know how I can fix this? The biggest problem seems to be Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution.
Since minikube is being used, you can refer to its documentation.
It is recommended that if an imagePullPolicy is being used, it be set to Never. If it is set to Always, Kubernetes will try to reach out and pull the image from the network.
From docs: https://minikube.sigs.k8s.io/docs/handbook/pushing/
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."
Add cluster.local to the /etc/hosts file on all your Kubernetes nodes:
192.168.12.34 cluster.local
Check whether you can log in to the registry using docker login cluster.local.
If your registry uses self-signed certificates, copy the cluster.local.crt certificate to /etc/docker/certs.d/cluster.local/ca.crt on all Kubernetes worker nodes.

EKS Docker Image Pull CrashLoopBackOff

I'm trying to deploy a Docker image from ECR to my EKS cluster. When attempting to deploy my Docker image to a pod, I get the following events from a CrashLoopBackOff:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 62s default-scheduler Successfully assigned default/mlflow-tracking-server to <EC2 IP>.internal
Normal SuccessfulAttachVolume 60s attachdetach-controller AttachVolume.Attach succeeded for volume "<PVC>"
Normal Pulling 56s kubelet, <IP>.ec2.internal Pulling image "<ECR Image UI>"
Normal Pulled 56s kubelet, <IP>.ec2.internal Successfully pulled image "<ECR Image UI>"
Normal Created 7s (x4 over 56s) kubelet, <IP>.ec2.internal Created container mlflow-tracking-server
Normal Pulled 7s (x3 over 54s) kubelet, <IP>.ec2.internal Container image "<ECR Image UI>" already present on machine
Normal Started 6s (x4 over 56s) kubelet, <IP>.ec2.internal Started container mlflow-tracking-server
Warning BackOff 4s (x5 over 52s) kubelet, <IP>.ec2.internal Back-off restarting failed container
I don't understand why it keeps looping like this and failing. Would anyone know why this is happening?
CrashLoopBackOff can be related to these possible reasons:
- the application inside your pod is not starting due to an error;
- the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull from the registry;
- some parameters of the pod have not been configured correctly.
In your case it seems to be an application error inside the container.
Try to view the logs with:
kubectl logs <your_pod> -n <namespace>
For more info on how to troubleshoot this kind of error refer to:
https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html
The process inside the container is crashing. The reason could be the entrypoint of the Docker base image.
You can try something like this to check the logs of the container:
kubectl logs -f <pod_name>
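If the container keeps restarting, the logs of the previous (crashed) run are usually the interesting ones; kubectl exposes them via the --previous flag (pod name taken from the events above):
kubectl logs mlflow-tracking-server --previous
kubectl describe pod mlflow-tracking-server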

Trying to pull/run docker images from docker hub on Minikube fails

I am very new to Kubernetes and have done some work with Docker previously. I am trying to accomplish the following:
Spin up minikube
Use kubectl to spin up a Docker image from Docker Hub.
I started minikube and things look like they are up and running. Then I ran the following command:
kubectl run nginx --image=nginx (please note I do not have this image anywhere on my machine and I am expecting Kubernetes to fetch it for me)
Now, when I do that, it spins up the pod, but the status is ImagePullBackOff. So I ran the kubectl describe pod command on it, and the results look like the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned default/ngix-67c6755c86-qm5mv to minikube
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:52133->192.168.64.1:53: read: connection refused
Normal Pulling 8m (x2 over 8m) kubelet, minikube Pulling image "nginx"
Warning Failed 8m (x2 over 8m) kubelet, minikube Error: ErrImagePull
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:40073->192.168.64.1:53: read: connection refused
Normal BackOff 8m (x3 over 8m) kubelet, minikube Back-off pulling image "nginx"
Warning Failed 8m (x3 over 8m) kubelet, minikube Error: ImagePullBackOff
Then I searched around to see if anyone had faced similar issues; it turned out some people had, and they resolved it by restarting minikube with some extra flags, like below:
minikube start --vm-driver="xhyve" --insecure-registry="$REG_IP":80
When I do an nslookup inside minikube, it does resolve, with the following information:
Server: 10.12.192.22
Address: 10.12.192.22#53
Non-authoritative answer:
hub.docker.com canonical name = elb-default.us-east-1.aws.dckr.io.
elb-default.us-east-1.aws.dckr.io canonical name = us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com.
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 52.205.36.130
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 3.217.62.246
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 35.169.212.184
Still no luck. Is there anything I am doing wrong here?
The error message suggests that the Docker daemon running in the minikube VM can't resolve the registry-1.docker.io hostname, because the DNS nameserver it's configured to use (192.168.64.1:53) is refusing the connection. It's strange to me that the Docker daemon is trying to resolve registry-1.docker.io via a nameserver at 192.168.64.1, but when you run nslookup on the VM it uses a nameserver at 10.12.192.22. I did an Internet search for "minkube Get registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53" and found an issue where someone made this comment; it seems identical to your problem and seems specific to xhyve.
In that comment the person says:
This issue does look like an xhyve issue not seen with virtualbox.
and
Switching to virtualbox fixed this issue for me.
I stopped minikube, deleted it, and started it without --vm-driver=xhyve (minikube uses the virtualbox driver by default), and then docker build -t hello-node:v1 . worked fine without errors.
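Roughly, the reset described above comes down to this (the flag is --vm-driver on older minikube releases and --driver on newer ones):
minikube stop
minikube delete
minikube start --vm-driver=virtualbox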
In my case it was caused by running dnsmasq, a DNS server, on my Mac via Homebrew, which caused DNS requests to fail inside minikube. After stopping dnsmasq, everything worked.
I got this problem with my local minikube setup and I wasn't able to pull any images I added to a simple deployment manifest.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test1 0/1 ImagePullBackOff 0 68s
I tried to execute the test below:
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    site: blog
spec:
  containers:
  - name: web
    image: nginx:latest
It was fixed only after restarting minikube.
Maybe dnsmasq really was the cause in this case.
You have:
- minikube running with default settings,
- Docker building your images,
- (*) minikube configured to point to your local Docker image repo.
And now minikube can't pull images from public container registries, like Docker Hub.
Stop and start minikube, then point it back to your local Docker image repo. The commands to do this (and (*) above):
minikube stop
minikube start
minikube -p minikube docker-env
eval $(minikube -p minikube docker-env)
Since running the above I have been able to pull nginx, alpine and friends from hub.docker.com just by setting image: alpine in the YAML spec.
In my case the issue was just a short drop in my network connectivity. So if you have no DNS/VPN/xhyve complications and pulling just stops working, the fix is easy enough.
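As a usage sketch of the docker-env approach above (the image name myapp:dev is just an example): once the eval has been run in a shell, docker build writes straight into minikube's Docker daemon, so the image can be used without any registry:
eval $(minikube -p minikube docker-env)
docker build -t myapp:dev .
kubectl run myapp --image=myapp:dev --image-pull-policy=Never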