CrashLoopBackOff while deploying pod using image from private registry - docker

I am trying to create a pod using my own docker image on localhost.
This is the Dockerfile used to create the image:
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
The YAML file used to create the pod is this:
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    ports:
    - containerPort: 80
root@node1:~/test/server# docker images | grep server
server                              latest   82c5228a553d   3 hours ago   948MB
localhost.localdomain:5000/server   latest   82c5228a553d   3 hours ago   948MB
localhost:5000/server               latest   82c5228a553d   3 hours ago   948MB
The image has been pushed to localhost registry.
Following is the error I receive.
root@node1:~/test/server# kubectl get pods
NAME     READY   STATUS             RESTARTS   AGE
server   0/1     CrashLoopBackOff   5          5m18s
The output of describe pod:
root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
I get no logs :
root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
I am unable to figure out whether the issue is with the container or with the YAML file for creating the pod. Any help would be appreciated.

Posting this as a Community Wiki.
As pointed out by @David Maze in the comment section:
If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
The root cause can also be determined from the exit code. In the 3) Check the exit code article you can find a few exit codes like 0, 1, 128 and 137 with descriptions.
3.1) Exit Code 0
This exit code implies that the specified container command completed 'successfully', but too often for Kubernetes to accept as working.
In short, your container was created, every action in it was executed, and since there was nothing else left to do, it exited with Exit Code 0.
A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes.
Your image, based on centos with a few additional packages installed, left no process running in the background, so it was categorized as Completed. Because this happened so quickly, Kubernetes restarted it and the pod fell into a loop.
$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
But when you use sleep 3600 in your container, the sleep command keeps running for an hour. After that time it would also exit with Exit Code 0.
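As a minimal sketch (assuming you only need the pod to stay up for interactive debugging; the image and names are taken from the question), you could override the container command so the main process does not exit immediately. The proper fix is still to have the Dockerfile's CMD start your actual server process, as noted above:

apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    # keep a long-running foreground process so the container
    # does not exit with code 0 and trigger CrashLoopBackOff
    command: ["sleep", "3600"]
    ports:
    - containerPort: 80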
Hope it clarified.

Related

Why do I get exec failed: container_linux.go:380 when I go inside a Kubernetes pod?

I started learning about Kubernetes and I installed minikube and kubectl on Windows 7.
After that I created a pod with command:
kubectl run firstpod --image=nginx
And everything is fine:
[screenshot: https://i.stack.imgur.com/xAcMP.jpg]
Now I want to go inside the pod with this command: kubectl exec -it firstpod -- /bin/bash but it's not working and I have this error:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container
process caused: exec: "C:/Program Files/Git/usr/bin/bash.exe": stat C:/Program
Files/Git/usr/bin/bash.exe: no such file or directory: unknown
command terminated with exit code 126
How can I resolve this problem?
And another question is about this firstpod pod. With this command kubectl describe pod firstpod I can see information about the pod:
Name: firstpod
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Mon, 08 Nov 2021 16:39:07 +0200
Labels: run=firstpod
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Containers:
firstpod:
Container ID: docker://59f89dad2ddd6b93ac4aceb2cc0c9082f4ca42620962e4e692e3d6bcb47d4a9e
Image: nginx
Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Nov 2021 16:39:14 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9b8mx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-9b8mx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/firstpod to minikube
Normal Pulling 32m kubelet Pulling image "nginx"
Normal Pulled 32m kubelet Successfully pulled image "nginx" in 3.677130128s
Normal Created 31m kubelet Created container firstpod
Normal Started 31m kubelet Started container firstpod
So I can see there is a Docker container ID and it has started, and there is the image, but if I run docker images or docker ps there is nothing. Where are these images and this container? Thank you!
One error for certain is Git Bash rewriting the path into a Windows path. You can disable that with a double slash:
kubectl exec -it firstpod -- //bin/bash
This command will only work if you have bash in the image. If you don't, you'll need to pick a different command to run, e.g. /bin/sh. Some images are distroless or based on scratch to explicitly not include things like shells, which will prevent you from running commands like this (intentionally, for security).
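For example (a sketch using the nginx image from the question; nginx is Debian-based and ships both shells, so this only illustrates the fallback order):

# bash, with the double slash to stop Git Bash from rewriting the path
kubectl exec -it firstpod -- //bin/bash
# if the image had no bash, fall back to a plain POSIX shell
kubectl exec -it firstpod -- //bin/sh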

Running a docker container which uses GPU from kubernetes fails to find the GPU

I want to run a docker container which uses GPU (it runs a cnn to detect objects on a video), and then run that container on Kubernetes.
I can run the container from docker alone without problems, but when I try to run the container from Kubernetes it fails to find the GPU.
I run it using this command:
kubectl exec -it namepod /bin/bash
This is the problem that I get:
kubectl exec -it tym-python-5bb7fcf76b-4c9z6 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@tym-python-5bb7fcf76b-4c9z6:/opt# cd servicio/
root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio# python3 TM_Servicev2.py
Try to load cfg: /opt/darknet/cfg/yolov4.cfg, weights: /opt/yolov4.weights, clear = 0
CUDA status Error: file: ./src/dark_cuda.c : () : line: 620 : build time: Jul 30 2021 - 14:05:34
CUDA Error: no CUDA-capable device is detected
python3: check_error: Unknown error -1979678822
root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio#
EDIT.
I followed all the steps in the Nvidia Docker 2 guide and downloaded the Nvidia plugin for Kubernetes.
However, when I deploy the pod it stays as "Pending" and never actually starts. I don't get an error anymore, but it never starts.
The pod appears like this:
gpu-pod 0/1 Pending 0 3m19s
EDIT 2.
I ended up reinstalling everything and now my pod appears as Completed but not Running, like this:
default gpu-operator-test 0/1 Completed 0 62m
Answering Wiktor:
When I run this command:
kubectl describe pod gpu-operator-test
I get:
Name: gpu-operator-test
Namespace: default
Priority: 0
Node: pdi-mc/192.168.0.15
Start Time: Mon, 09 Aug 2021 12:09:51 -0500
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 968e49d27fb3d86ed7e70769953279271b675177e188d52d45d7c4926bcdfbb2
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
Status: Succeeded
IP: 192.168.10.81
IPs:
IP: 192.168.10.81
Containers:
cuda-vector-add:
Container ID: docker://d49545fad730b2ec3ea81a45a85a2fef323edc82e29339cd3603f122abde9cef
Image: nvidia/samples:vectoradd-cuda10.2
Image ID: docker-pullable://nvidia/samples@sha256:4593078cdb8e786d35566faa2b84da1123acea42f0d4099e84e2af0448724af1
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 09 Aug 2021 12:10:29 -0500
Finished: Mon, 09 Aug 2021 12:10:30 -0500
Ready: False
Restart Count: 0
Limits:
nvidia.com/gpu: 1
Requests:
nvidia.com/gpu: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ktgq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-9ktgq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
I'm using this configuration file to create the pod
apiVersion: v1
kind: Pod
metadata:
  name: gpu-operator-test
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: "nvidia/samples:vectoradd-cuda10.2"
    resources:
      limits:
        nvidia.com/gpu: 1
Addressing two topics here:
The error you saw at the beginning:
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
This means that you tried to use a deprecated form of the kubectl exec command. The proper syntax is:
$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
See here for more details.
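Applied to the pod from the question, the corrected invocation would look like this (everything after -- is passed to the container as the command to run):

kubectl exec -it tym-python-5bb7fcf76b-4c9z6 -- /bin/bash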
According to the official docs, the gpu-operator-test pod is expected to run to completion:
You can see that the pod's status is Succeeded and also:
State: Terminated
Reason: Completed
Exit Code: 0
Exit Code: 0 means that the specified container command completed successfully.
More details can be found in the official docs.
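To confirm the run actually succeeded, you can check the pod's logs (a sketch; the exact output depends on the sample image, but the CUDA vectorAdd sample typically prints a pass/fail line):

kubectl logs gpu-operator-test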

i/o timeout ImagePullBackOff issue on minikube docker image for pod on fedora32

I am getting an ImagePullBackOff error when I pull images from Docker for Kubernetes pods. I am using Fedora 32 and running minikube on Docker.
[prem@localhost simplek8s]$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
070fba4347b7 gcr.io/k8s-minikube/kicbase:v0.0.13 "/usr/local/bin/entr…" 3 days ago Up About an hour 127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp minikube
85a476d91b28 hello-world "/hello" 3 days ago Exited (0) 3 days ago musing_fermat
dbf5151bc72e frontend_tests "docker-entrypoint.s…" 6 days ago Exited (137) 3 days ago frontend_tests_1
e47486560719 frontend_web "docker-entrypoint.s…" 6 days ago Exited (137) 3 days ago frontend_web_1
75933fdf45c4 274c1d9065e6 "/bin/sh -c 'npm ins…" 6 days ago Created romantic_sinoussi
ab0d87295579 274c1d9065e6 "/bin/sh -c 'npm ins…" 6 days ago Created
In the above containers you can see minikube running.
This is what shows up when I check the pods:
[prem@localhost simplek8s]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 0/1 ImagePullBackOff 0 47h
client-pod 0/1 ImagePullBackOff 0 2d
dnsutils 0/1 ImagePullBackOff 0 21m
nginx 0/1 ImagePullBackOff 0 47h
[prem@localhost simplek8s]$ kubectl describe pods
Name: busybox
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Wed, 28 Oct 2020 02:05:29 +0530
Labels: run=busybox
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zr7nz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zr7nz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zr7nz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 60m (x4 over 63m) kubelet Pulling image "busybox"
Warning Failed 60m kubelet Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:38852->192.168.49.1:53: i/o timeout
Warning Failed 13m (x13 over 62m) kubelet Error: ErrImagePull
Warning Failed 8m17s (x226 over 62m) kubelet Error: ImagePullBackOff
Normal BackOff 3m12s (x246 over 62m) kubelet Back-off pulling image "busybox"
[myname@localhost simplek8s]$ minikube ssh
docker@minikube:~$ cat /etc/resolv.conf
nameserver 192.168.49.1
options ndots:0
docker@minikube:~$ curl google.com
curl: (6) Could not resolve host: google.com
Clearly the pods are not able to access the internet to pull images. It looks like I am missing some DNS config to make minikube pull images from the internet. Please help me resolve this.
Due to the changes in Fedora 32, the install experience is slightly more involved than usual, and currently requires some extra manual steps to be performed, depending on your machine's configuration.
See: docker-for-linux-fedora32.
Take a look also at: cannot-pull-image, could-not-resolve-host, curl-6-could-not-resolve-host, err-docker-daemon.
Remember that the current state of Docker on Fedora 32 is not ideal. The lack of an official package might bother some, and there is an issue upstream where this is discussed. See: docker-fedora32-example.
Take a look at this guide for installing Docker on Fedora 32 - docker-on-fedora32.
To install Docker Engine, you need the 64-bit version of one of these
Fedora versions:
Fedora 30
Fedora 31
I advise you to use a different driver, for example kvm or virtualbox - minikube-drivers. Look at this guide - minikube-kubernetes.
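A sketch of switching drivers (this assumes VirtualBox is already installed on the host and uses minikube's default profile):

# remove the existing docker-driver cluster and start again with VirtualBox
minikube delete
minikube start --driver=virtualbox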

Kind kubernetes cluster failed to pull docker images

I tried to use KinD as an alternative to Minikube to bootstrap a K8S cluster on my local machine.
The cluster is created successfully.
But when I tried to create some pods/deployments from images, it failed.
$ kubectl run nginx --image=nginx
$ kubectl run hello --image=hello-world
After some minutes, kubectl get pods shows a failed status.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello 0/1 ImagePullBackOff 0 11m
nginx 0/1 ImagePullBackOff 0 22m
I am afraid this is another Global Firewall problem in China.
kubectl describe pods/nginx
Name: nginx
Namespace: default
Priority: 0
Node: dev-control-plane/172.19.0.2
Start Time: Sun, 30 Aug 2020 19:46:06 +0800
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mgq96 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mgq96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mgq96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned default/nginx to dev-control-plane
Normal BackOff 40m kubelet, dev-control-plane Back-off pulling image "nginx"
Warning Failed 40m kubelet, dev-control-plane Error: ImagePullBackOff
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: unexpected EOF
Warning Failed 13m (x3 over 40m) kubelet, dev-control-plane Error: ErrImagePull
Normal Pulling 13m (x4 over 56m) kubelet, dev-control-plane Pulling image "nginx"
When I entered the kindest/node container, there was no docker in it. I am not sure how KinD works; my original understanding was that it deploys a K8S cluster into a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a644f8b61314 kindest/node:v1.19.0 "/usr/local/bin/entr…" About an hour ago Up About an hour 127.0.0.1:52301->6443/tcp dev-control-plane
$ docker exec -it a644f8b61314 /bin/bash
root@dev-control-plane:/# docker -v
bash: docker: command not found
After reading the KinD docs, I cannot find an option to set a registry mirror there like the one in Minikube.
BTW, I am using the latest Docker Desktop beta on Windows 10.
First pull the image on your local system using docker pull nginx and then use the command below to load that image into the kind cluster:
kind load docker-image nginx --name kind-cluster-name
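Put together for the cluster in the question (a sketch; judging by the dev-control-plane node name the cluster appears to be named dev, so adjust --name if yours differs):

# pull on the host, side-load into the kind node, then run without pulling
# (delete the stuck pod first if it already exists: kubectl delete pod nginx)
docker pull nginx
kind load docker-image nginx --name dev
kubectl run nginx --image=nginx --image-pull-policy=Never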
KinD uses containerd instead of Docker as its runtime; that's why docker is not installed on the nodes.
Alternatively, you can use the crictl tool to pull and check images inside the kind node:
crictl pull nginx
crictl images
I ran into the same issue because I had exported http_proxy and https_proxy to a local proxy (127.0.0.1) before creating the cluster, and that proxy is unreachable from inside the cluster. After unsetting http(s)_proxy and recreating the cluster, everything runs fine.
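A sketch of that recovery, assuming the default cluster name:

# clear the proxy variables in the shell that will create the cluster
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
# recreate the kind cluster so it no longer inherits the proxy settings
kind delete cluster
kind create cluster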

Pod gets into status of CrashLoopBackOff and gets restarted repeatedly - Exit code is 0

I have a docker container that runs fine when I start it with docker run. I am trying to put that container inside a pod but I am facing issues. The first run of the pod shows the status as "Completed", and then the pod keeps restarting with CrashLoopBackOff status. The exit code, however, is 0.
Here is the result of kubectl describe pod:
Name: messagingclientuiui-6bf95598db-5znfh
Namespace: mgmt
Node: db1mgr0deploy01/172.16.32.68
Start Time: Fri, 03 Aug 2018 09:46:20 -0400
Labels: app=messagingclientuiui
pod-template-hash=2695115486
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/messagingclientuiui-6bf95598db
Containers:
messagingclientuiui:
Container ID: docker://a41db3bcb584582e9eacf26b02c7ef26f57c2d43b813f44e4fd1ba63347d3fc3
Image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
Image ID: docker-pullable://172.32.1.4/messagingclientuiui@sha256:89a002448660e25492bed1956cfb8fff447569e80ac8b7f7e0fa4d44e8abee82
Port: 9087/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 03 Aug 2018 09:50:06 -0400
Finished: Fri, 03 Aug 2018 09:50:16 -0400
Ready: False
Restart Count: 5
Environment Variables from:
mesg-config ConfigMap Optional: false
Environment: <none>
Mounts:
/docker-mount from messuimount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2pthw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
messuimount:
Type: HostPath (bare host directory volume)
Path: /mon/monitoring-messui/docker-mount
HostPathType:
default-token-2pthw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2pthw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned messagingclientuiui-6bf95598db-5znfh to db1mgr0deploy01
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "messuimount"
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "default-token-2pthw"
Normal Pulled 2m (x5 over 4m) kubelet, db1mgr0deploy01 Container image "172.32.1.4/messagingclientuiui:667-I20180802-0202" already present on machine
Normal Created 2m (x5 over 4m) kubelet, db1mgr0deploy01 Created container
Normal Started 2m (x5 over 4m) kubelet, db1mgr0deploy01 Started container
Warning BackOff 1m (x8 over 4m) kubelet, db1mgr0deploy01 Back-off restarting failed container
kubectl get pods
NAME READY STATUS RESTARTS AGE
messagingclientuiui-6bf95598db-5znfh 0/1 CrashLoopBackOff 9 23m
I am assuming we need a loop to keep the container running in this case. But I don't understand why it worked when run with docker but not when it is inside a pod. Shouldn't it behave the same?
How do we generally debug a CrashLoopBackOff status, apart from running kubectl describe pod and kubectl logs?
The container would terminate with exit code 0 if there isn't at least one process running in the background. To keep the container running, add these to the deployment configuration:
command: ["sh"]
stdin: true
Replace sh with bash or any other shell that the image may have.
Then you can drop inside the container with exec:
kubectl exec -it <pod-name> -- sh
Add the -c <container-name> argument if the pod has more than one container.
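A minimal sketch of where those fields go in the Deployment's container spec (the image name and port are taken from the question's describe output; this only keeps an idle shell alive for debugging, it does not start your application):

spec:
  containers:
  - name: messagingclientuiui
    image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
    command: ["sh"]   # run an interactive shell as the main process
    stdin: true       # keep stdin open so the shell does not exit immediately
    ports:
    - containerPort: 9087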
Are you sure you ran your software as docker run ... -d ... <command>, that it kept running, and that you use the exact same command in your pod? If you are comparing against something that was run in Docker with -it and without -d, you might find yourself in a pinch: such processes expect a terminal to communicate with the user and exit if no TTY is available (hint: a pod/container can be run with tty: true).
It is very unlikely that you have software that runs in a detached Docker container but does not run in Kubernetes.
