Kind kubernetes cluster failed to pull docker images - docker

I tried to use KinD as an alternative to Minikube to bootstrap a K8S cluster on my local machine.
The cluster is created successfully.
But when I tried to create some pods/deployments from images, it failed.
$ kubectl run nginx --image=nginx
$ kubectl run hello --image=hello-world
After some minutes, kubectl get pods shows a failed status.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 0/1 ImagePullBackOff 0 11m
nginx 0/1 ImagePullBackOff 0 22m
I am afraid this is another Great Firewall problem in China.
kubectl describe pods/nginx
Name: nginx
Namespace: default
Priority: 0
Node: dev-control-plane/172.19.0.2
Start Time: Sun, 30 Aug 2020 19:46:06 +0800
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mgq96 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mgq96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mgq96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned default/nginx to dev-control-plane
Normal BackOff 40m kubelet, dev-control-plane Back-off pulling image "nginx"
Warning Failed 40m kubelet, dev-control-plane Error: ImagePullBackOff
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: unexpected EOF
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Error: ErrImagePull
Normal Pulling 13m (x4 over 56m) kubelet, dev-control-plane Pulling image "nginx"
When I entered the kindest/node container, there was no docker in it. I am not sure how KinD works; my original understanding was that it deploys a K8S cluster into a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a644f8b61314 kindest/node:v1.19.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:52301->6443/tcp dev-control-plane
$ docker exec -it a644f8b61314 /bin/bash
root@dev-control-plane:/# docker -v
bash: docker: command not found
After reading the KinD docs, I cannot find an option there to set a registry mirror like the one in Minikube.
BTW, I am using the latest Docker Desktop beta on Windows 10.

First pull the image on your local system using docker pull nginx, and then use the command below to load that image into the kind cluster:
kind load docker-image nginx --name kind-cluster-name
Kind uses containerd instead of docker as its runtime, which is why docker is not installed on the nodes.
Alternatively, you can use the crictl tool to pull and check images inside the kind node:
crictl pull nginx
crictl images
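If what you are after is a registry mirror (which the question mentions), kind can patch the node's containerd config at cluster-creation time through containerdConfigPatches in a cluster config file. A rough sketch, with mirror.example.com as a placeholder for a mirror you can actually reach and dev as the cluster name:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://mirror.example.com"]
Save this as kind-config.yaml and create the cluster with:
kind create cluster --name dev --config kind-config.yaml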

I ran into the same issue because I had exported http_proxy and https_proxy to a local proxy (127.0.0.1) before creating the cluster, and that address is unreachable from inside the cluster. After unsetting http(s)_proxy and recreating the cluster, everything runs fine.
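For reference, the fix boils down to something like this (a sketch, assuming the cluster can simply be recreated with defaults):
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
kind delete cluster
kind create cluster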

Related

Helm fetches a different "latest" image than Docker pull's latest image

A different image is fetched for the same image name when Helm does a chart install than when I do an ordinary docker pull. My values.yaml has
image:
  repository: gcr.io/rsi-api-test/rsi-api
  tag: latest
  pullPolicy: IfNotPresent
The only image in the container registry is
gcr.io/rsi-api-test/rsi-api@sha256:473bd9e31df8fd5d5424e11a57cabfd308179c8507683d6b9219c804bb315116
But helm somehow finds this image with this code:
gcr.io/rsi-api-test/rsi-api@sha256:c5cc78caa54ac4cf855c5fdb6a3448ff74ab641581fcda35d3e4e245c3154766
I believe it found some old version and for some reason refuses to get the latest version. Where is the "cached" or local collection of repositories used by Helm, and how can I force it to be cleared?
The registry is a GCP Artifact Registry
>helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
UPDATE, CHANGED TAGGING
I edited the tags on the image in GCP, removing both the "latest" and "v0.1.0" tags and adding the tag "v1.0.0". I changed the respective values in the values.yaml file (above) to
image:
  repository: gcr.io/rsi-api-test/rsi-api
  tag: 1.0.0
  pullPolicy: IfNotPresent
Then, I did
docker image pull gcr.io/rsi-api-test/rsi-api:latest
and confirmed there was no more latest (I'm not sure where it's getting these images from).
then
docker run --detach gcr.io/rsi-api-test/rsi-api:1.0.0
Docker fetched the 1.0.0 version, and it ran as I expected/wanted.
As for helm, I re-ran helm install, and it didn't work due to an image error (kubectl logs shown below).
helm install rsapi
>kubectl logs pod/sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Error from server (BadRequest): container "rsiapi" in pod "sullen-lemur-rsiapi-7b8d6d656c-7xmxh" is waiting to start: trying and failing to pull image
>kubectl describe pod/sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Name: sullen-lemur-rsiapi-7b8d6d656c-7xmxh
Namespace: default
Priority: 0
Node: gke-helm-cluster-default-pool-7c542461-sdpv/10.128.0.6
Start Time: Thu, 26 May 2022 23:19:00 -0400
Labels: app.kubernetes.io/instance=sullen-lemur
app.kubernetes.io/name=rsiapi
pod-template-hash=7b8d6d656c
Annotations: <none>
Status: Pending
IP: 10.72.1.13
IPs:
IP: 10.72.1.13
Controlled By: ReplicaSet/sullen-lemur-rsiapi-7b8d6d656c
Containers:
rsiapi:
Container ID:
Image: gcr.io/rsi-api-test/rsi-api:1.0.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: http-get http://:http/ delay=60s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=20s timeout=1s period=10s #success=1 #failure=3
Environment:
SPRING_CLOUD_KUBERNETES_CONFIG_NAME: sullen-lemur-rsiapi
MANAGEMENT_ENDPOINT_RESTART_ENABLED: true
SPRING_CLOUD_KUBERNETES_RELOAD_ENABLED: true
SPRING_CLOUD_KUBERNETES_RELOAD_STRATEGY: refresh
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fvgg8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-fvgg8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/sullen-lemur-rsiapi-7b8d6d656c-7xmxh to gke-helm-cluster-default-pool-7c542461-sdpv
Normal Pulling 12m (x4 over 13m) kubelet Pulling image "gcr.io/rsi-api-test/rsi-api:1.0.0"
Warning Failed 12m (x4 over 13m) kubelet Failed to pull image "gcr.io/rsi-api-test/rsi-api:1.0.0": rpc error: code = NotFound desc = failed to pull and unpack image "gcr.io/rsi-api-test/rsi-api:1.0.0": failed to resolve reference "gcr.io/rsi-api-test/rsi-api:1.0.0": gcr.io/rsi-api-test/rsi-api:1.0.0: not found
Warning Failed 12m (x4 over 13m) kubelet Error: ErrImagePull
Warning Failed 11m (x6 over 13m) kubelet Error: ImagePullBackOff
Normal BackOff 3m32s (x42 over 13m) kubelet Back-off pulling image "gcr.io/rsi-api-test/rsi-api:1.0.0"
>helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
sullen-lemur 1 Thu May 26 23:19:00 2022 DEPLOYED rsiapi-0.1.0 1.0 default
The same image ID was tied to two repository references:
> docker image rm 8d0bfa85e8f9
Error response from daemon: conflict: unable to delete 8d0bfa85e8f9 (must be forced) - image is referenced in multiple repositories
>docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/rsi-api-test/rsi-api latest 8d0bfa85e8f9 10 hours ago 379MB
gcr.io/rsi-api-test/rsi-api v1.0.0 8d0bfa85e8f9 10 hours ago 379MB
>docker image rm -f 8d0bfa85e8f9
Untagged: gcr.io/rsi-api-test/rsi-api:latest
Untagged: gcr.io/rsi-api-test/rsi-api:v1.0.0
Untagged: gcr.io/rsi-api-test/rsi-api@sha256:473bd9e31df8fd5d5424e11a57cabfd308179c8507683d6b9219c804bb315116
Deleted: sha256:8d0bfa85e8f929221a7c6b66e5fd6008151e496407ed9e74072dd3e02314ad12
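As a side note, the failing pull above asks for gcr.io/rsi-api-test/rsi-api:1.0.0 while the local tags were latest and v1.0.0; if the registry likewise only has v1.0.0, pushing a tag that matches values.yaml is one way to line things up. A hedged sketch (assuming the image is still available locally, or is pulled again first):
docker tag gcr.io/rsi-api-test/rsi-api:v1.0.0 gcr.io/rsi-api-test/rsi-api:1.0.0
docker push gcr.io/rsi-api-test/rsi-api:1.0.0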
BONUS points for suggesting a way/policy to increment the version tag for Helm. This is a GitHub Workflow Maven build of a Spring Boot application.
Also, I'm running Helm on my personal Linux machine, but want it to target a GCP cluster. However, I also tried installing Helm and using it with a Minikube installation. What do I need to do to make sure I fully switch?
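On the last point, a sketch of the usual way to check which cluster kubectl (and therefore Helm) is talking to, and to switch to the GKE one, assuming the GKE context already exists in your kubeconfig (<your-gke-context> is a placeholder):
kubectl config get-contexts
kubectl config use-context <your-gke-context>
kubectl config current-context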

Kubernetes can't pull private image from docker

I hope somebody can help me.
I'm trying to pull a private docker image with no success. I have already tried some solutions that I found, but none of them worked.
Docker, Gitlab, Gitlab-Runner, Kubernetes all run on the same server
Insecure Registry
$ sudo cat /etc/docker/daemon.json
{ "insecure-registries":["10.0.10.20:5555"]}
Config.json
$ cat .docker/config.json
{
"auths": {
"10.0.10.20:5555": {
"auth": "NDUwNjkwNDcwODoxMjM0NTZzIQ=="
},
"https://index.docker.io/v1/": {
"auth": "NDUwNjkwNDcwODpGcGZHMXQyMDIyQCE="
}
}
}
Secret
$ kubectl create secret generic regcred \
--from-file=.dockerconfigjson=~/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
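Presumably the pod then references that secret through imagePullSecrets; a sketch of what that typically looks like, reusing the pod and container names from the describe output below:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: 10.0.10.20:5555/development/app-image-base:latest
  imagePullSecrets:
  - name: regcred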
I'm trying to create a Kubernetes pod from a private docker image. However, I get the following error:
Name: private-reg
Namespace: default
Priority: 0
Node: 10.0.10.20
Start Time: Thu, 12 May 2022 12:44:22 -0400
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.61
IPs:
IP: 10.244.0.61
Containers:
private-reg-container:
Container ID:
Image: 10.0.10.20:5555/development/app-image-base:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stjn4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-stjn4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 2m7s (x465 over 107m) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Normal Pulling 17s (x3 over 53s) kubelet Pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 17s (x3 over 53s) kubelet Failed to pull image "10.0.10.20:5555/development/expedicao-api-image-base:latest": rpc error: code = Unknown desc = failed to pull and unpack image "10.0.10.20:5555/development/app-image-base:latest": failed to resolve reference "10.0.10.20:5555/development/app-image-base:latest": failed to do request: Head "https://10.0.10.20:5555/v2/development/app-image-base/manifests/latest": http: server gave HTTP response to HTTPS client
Warning Failed 17s (x3 over 53s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 29s) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 3s (x2 over 29s) kubelet Error: ImagePullBackOff
When I pull the image directly in docker, no problem occurs even with the secret
Pull image
$ docker login 10.0.10.20:5555
Username: 4506904708
Password:
WARNING! Your password will be stored unencrypted in ~/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull 10.0.10.20:5555/development/app-image-base:latest
latest: Pulling from development/app-image-base
Digest: sha256:1385a8aa2bc7bac1a8d3e92ead66fdf5db3d6625b736d908d1fec61ba59b6bdc
Status: Image is up to date for 10.0.10.20:5555/development/app-image-base:latest
10.0.10.20:5555/development/app-image-base:latest
Can someone help me?
First, you need to create the file /etc/containerd/config.toml with the following content:
# Config file is parsed as version 1 by default.
# To use the long form of plugin names set "version = 2"
[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."10.0.10.20:5555"]
    endpoint = ["http://10.0.10.20:5555"]
Second, restart containerd:
$ systemctl restart containerd
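After the restart, one quick way to confirm the node now reaches the registry over plain HTTP is to pull the image with crictl directly on the node (a sketch, assuming crictl is available there):
crictl pull 10.0.10.20:5555/development/app-image-base:latest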

Why do I get exec failed: container_linux.go:380 when I go inside a Kubernetes pod?

I started learning about Kubernetes and I installed minikube and kubectl on Windows 7.
After that I created a pod with the command:
kubectl run firstpod --image=nginx
And everything is fine:
[screenshot: the pod is up and running - https://i.stack.imgur.com/xAcMP.jpg]
Now I want to go inside the pod with this command: kubectl exec -it firstpod -- /bin/bash, but it is not working and I get this error:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container
process caused: exec: "C:/Program Files/Git/usr/bin/bash.exe": stat C:/Program
Files/Git/usr/bin/bash.exe: no such file or directory: unknown
command terminated with exit code 126
How can I resolve this problem?
And another question is about this firstpod pod. With this command kubectl describe pod firstpod I can see information about the pod:
Name: firstpod
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Mon, 08 Nov 2021 16:39:07 +0200
Labels: run=firstpod
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Containers:
firstpod:
Container ID: docker://59f89dad2ddd6b93ac4aceb2cc0c9082f4ca42620962e4e692e3d6bcb47d4a9e
Image: nginx
Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Nov 2021 16:39:14 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9b8mx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-9b8mx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/firstpod to minikube
Normal Pulling 32m kubelet Pulling image "nginx"
Normal Pulled 32m kubelet Successfully pulled image "nginx" in 3.677130128s
Normal Created 31m kubelet Created container firstpod
Normal Started 31m kubelet Started container firstpod
So I can see it is a docker container ID and it is started, and there is also the image, but if I do docker images or docker ps there is nothing. Where are these images and this container? Thank you!
One error for certain is Git Bash rewriting the argument into a Windows path. You can disable that with a double slash:
kubectl exec -it firstpod -- //bin/bash
This command will only work if you have bash in the image. If you don't, you'll need to pick a different command to run, e.g. /bin/sh. Some images are distroless or based on scratch to explicitly not include things like shells, which will prevent you from running commands like this (intentionally, for security).
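For completeness, two variants that are commonly used from Git Bash on Windows (a sketch; MSYS_NO_PATHCONV is the Git for Windows switch that disables the path conversion for a single command):
kubectl exec -it firstpod -- //bin/sh
MSYS_NO_PATHCONV=1 kubectl exec -it firstpod -- /bin/bash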

Kubectl create deployment fails to pull image from local docker repo: connection refused

I am encountering a very basic error:
I have docker-desktop and minikube setup on my windows 10 machine.
Further, I set up a local docker registry using the steps here.
Here is what I have when I run docker ps:
c:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6675d2c57a74 registry:2 "/entrypoint.sh /etc…" 7 hours ago Up 2 hours 0.0.0.0:5000->5000/tcp registry
d05edc8f05b0 gcr.io/k8s-minikube/kicbase:v0.0.10 "/usr/local/bin/entr…" 8 hours ago Up 8 hours 127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp minikube
I pushed my local image to my local registry using docker push, and I deleted the local image using the docker image remove command to avoid any confusion.
I now tried pulling the local registry image to see if it works, and it does
docker pull localhost:5000/dev/my-web:v1
v1: Pulling from dev/my-web
Digest: sha256:b3a0cf5c66ade8d39709c0cbbd0e08c9cc5f5e1c97f039a2bd1afed657dc8b74
Status: Downloaded newer image for localhost:5000/dev/my-web:v1
localhost:5000/dev/my-web:v1
Now I run my kubectl create commands and they fail with a connection refused error.
C:\>kubectl create deployment myweb --image=localhost:5000/dev/my-web:v1
deployment.apps/myweb created
C:\>kubectl get pods
NAME READY STATUS RESTARTS AGE
myweb-7d467f4bc4-r9xhc 0/1 ErrImagePull 0 12s
C:\>kubectl describe pod/myweb-7d467f4bc4-r9xhc
Name: myweb-7d467f4bc4-r9xhc
Namespace: default
Priority: 0
Node: minikube/172.17.0.3
Start Time: Sun, 09 Aug 2020 23:30:07 -0400
Labels: app=myweb
pod-template-hash=7d467f4bc4
Annotations: <none>
Status: Pending
IP: 172.18.0.3
IPs:
IP: 172.18.0.3
Controlled By: ReplicaSet/myweb-7d467f4bc4
Containers:
my-web:
Container ID:
Image: localhost:5000/dev/my-web:v1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nr7vj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nr7vj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nr7vj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 93s default-scheduler Successfully assigned default/myweb-7d467f4bc4-r9xhc to minikube
Normal Pulling 53s (x3 over 92s) kubelet, minikube Pulling image "localhost:5000/dev/my-web:v1"
Warning Failed 53s (x3 over 92s) kubelet, minikube Failed to pull image "localhost:5000/dev/my-web:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 53s (x3 over 92s) kubelet, minikube Error: ErrImagePull
Normal BackOff 13s (x5 over 91s) kubelet, minikube Back-off pulling image "localhost:5000/dev/my-web:v1"
Warning Failed 13s (x5 over 91s) kubelet, minikube Error: ImagePullBackOff
I am not able to figure out why I am getting a connection refused error when pulling through the kubectl command but not with the docker pull command.
Please help.
Additional notes: (1) I am using the built-in Windows hypervisor (2) Using default networking
Well, Kubernetes cannot find your registry. It depends on the setup you have, but Docker for Windows runs in a VM, and you didn't specify how you are running minikube, but most likely in another VM. So potentially you have two VMs here that can or cannot talk to each other depending on how you set them up.
And localhost is almost never going to work with Kubernetes, because it always resolves to the local IP of your Kubernetes node when it comes to pulling the image. That means you would have to have your registry and the kubelet pulling the image on the exact same VM.
I would just focus on making it work with the IP address of the VM where your registry is running, and make sure that your Kubernetes VM can reach the registry VM's IP address. Also, remember that when you run your registry you also have to expose your container port, in this case 5000.
P.S. You didn't specify which hypervisor you are running. VirtualBox? VMware? Are you using bridged networking or host-only?
✌️
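A rough sketch of that approach, with <registry-host-ip> standing in for an address the minikube VM can actually reach (the registry address may also need to be listed as insecure on the side doing the pulling):
docker tag localhost:5000/dev/my-web:v1 <registry-host-ip>:5000/dev/my-web:v1
docker push <registry-host-ip>:5000/dev/my-web:v1
kubectl create deployment myweb --image=<registry-host-ip>:5000/dev/my-web:v1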

Pod gets into status of CrashLoopBackOff and gets restarted repeatedly - Exit code is 0

I have a docker container that runs fine when I start it with docker run. I am trying to put that container inside a pod, but I am facing issues. The first run of the pod shows the status "Completed", and then the pod keeps restarting with CrashLoopBackOff status. The exit code, however, is 0.
Here is the result of kubectl describe pod:
Name: messagingclientuiui-6bf95598db-5znfh
Namespace: mgmt
Node: db1mgr0deploy01/172.16.32.68
Start Time: Fri, 03 Aug 2018 09:46:20 -0400
Labels: app=messagingclientuiui
pod-template-hash=2695115486
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/messagingclientuiui-6bf95598db
Containers:
messagingclientuiui:
Container ID: docker://a41db3bcb584582e9eacf26b02c7ef26f57c2d43b813f44e4fd1ba63347d3fc3
Image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
Image ID: docker-pullable://172.32.1.4/messagingclientuiui@sha256:89a002448660e25492bed1956cfb8fff447569e80ac8b7f7e0fa4d44e8abee82
Port: 9087/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 03 Aug 2018 09:50:06 -0400
Finished: Fri, 03 Aug 2018 09:50:16 -0400
Ready: False
Restart Count: 5
Environment Variables from:
mesg-config ConfigMap Optional: false
Environment: <none>
Mounts:
/docker-mount from messuimount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2pthw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
messuimount:
Type: HostPath (bare host directory volume)
Path: /mon/monitoring-messui/docker-mount
HostPathType:
default-token-2pthw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2pthw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned messagingclientuiui-6bf95598db-5znfh to db1mgr0deploy01
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "messuimount"
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "default-token-2pthw"
Normal Pulled 2m (x5 over 4m) kubelet, db1mgr0deploy01 Container image "172.32.1.4/messagingclientuiui:667-I20180802-0202" already present on machine
Normal Created 2m (x5 over 4m) kubelet, db1mgr0deploy01 Created container
Normal Started 2m (x5 over 4m) kubelet, db1mgr0deploy01 Started container
Warning BackOff 1m (x8 over 4m) kubelet, db1mgr0deploy01 Back-off restarting failed container
kubectl get pods
NAME READY STATUS RESTARTS AGE
messagingclientuiui-6bf95598db-5znfh 0/1 CrashLoopBackOff 9 23m
I am assuming we need a loop to keep the container running in this case. But I don't understand why it worked when run using docker but not when it is inside a pod. Shouldn't it behave the same?
How do we generally debug a CrashLoopBackOff status, apart from running kubectl describe pod and kubectl logs?
The container terminates with exit code 0 when its main process exits, i.e. there is no long-running process keeping it alive. To keep the container running, add these to the deployment configuration:
command: ["sh"]
stdin: true
Replace sh with bash or any other shell that the image may have.
Then you can drop inside the container with exec:
kubectl exec -it <pod-name> sh
Add -c <container-name> argument if the pod has more than one container.
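For context, those fields sit under the container entry of the Deployment manifest, roughly like this (a sketch, reusing the container name and image from the describe output above):
spec:
  template:
    spec:
      containers:
      - name: messagingclientuiui
        image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
        command: ["sh"]
        stdin: true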
Are you sure you ran your software as docker run ... -d ... <command> and it kept running, and that you use the exact same command in your pod? In some cases, if you compare things that run on docker with -it and no -d, you might find yourself in a pinch, as they expect a terminal to communicate with the user and exit if a tty is not available (hint: a pod/container can be run with tty: true).
It is very unlikely that you have software that runs detached in docker but does not in kube.
