I have a working Kubernetes 1.1.1 installation running on Debian.
I also have a private v2 registry running nicely.
I am facing a weird problem.
I define a pod on the master:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: docker-registry.hiberus.com:5000/debian:ssh
  imagePullSecrets:
  - name: myregistrykey
I also have the secret on my master:
myregistrykey kubernetes.io/dockercfg 1 44m
and my config.json looks like this:
{
  "auths": {
    "https://docker-registry.hiberus.com:5000": {
      "auth": "anNhdXJhOmpzYXVyYQ==",
      "email": "jsaura@heraldo.es"
    }
  }
}
So I base64-encoded the file and created my secret.
Simple as that.
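For reference, a minimal sketch of the secret creation (assuming the JSON above is saved as config.json; the type matches the kubernetes.io/dockercfg secret listing shown above):
# Sketch only: config.json is assumed to be the file shown above.
DOCKER_CFG=$(base64 -w 0 config.json)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockercfg
data:
  .dockercfg: ${DOCKER_CFG}
EOF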
On my node the image gets pulled without any problem:
docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
docker-registry.hiberus.com:5000/debian ssh 3b332951c107 29 minutes ago 183.3 MB
golang 1.4 2819d1d84442 7 days ago 562.7 MB
debian latest 91bac885982d 8 days ago 125.1 MB
gcr.io/google_containers/pause 0.8.0 2c40b0526b63 7 months ago 241.7 kB
But my container does not start:
./kubectl describe pod nginx
Name: nginx
Namespace: default
Image(s): docker-registry.hiberus.com:5000/debian:ssh
Node: 192.168.29.122/192.168.29.122
Start Time: Wed, 18 Nov 2015 17:08:53 +0100
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 172.17.0.2
Replication Controllers:
Containers:
nginx:
Container ID: docker://3e55ab118a3e5d01d3c58361abb1b23483d41be06741ce747d4c20f5abfeb15f
Image: docker-registry.hiberus.com:5000/debian:ssh
Image ID: docker://3b332951c1070ba2d7a3bb439787a8169fe503ed8984bcefd0d6c273d22d4370
State: Waiting
Reason: CrashLoopBackOff
Last Termination State: Terminated
Reason: Error
Exit Code: 0
Started: Wed, 18 Nov 2015 17:08:59 +0100
Finished: Wed, 18 Nov 2015 17:08:59 +0100
Ready: False
Restart Count: 2
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-ha0i4:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-ha0i4
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Created Created with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Started Started with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulling Pulling image "docker-registry.hiberus.com:5000/debian:ssh"
15s 15s 1 {scheduler } Scheduled Successfully assigned nginx to 192.168.29.122
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 36df2dc8b999
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Successfully pulled image "docker-registry.hiberus.com:5000/debian:ssh"
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 36df2dc8b999
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Container image "docker-registry.hiberus.com:5000/debian:ssh" already present on machine
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 3e55ab118a3e
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 3e55ab118a3e
5s 5s 1 {kubelet 192.168.29.122} spec.containers{nginx} Backoff Back-off restarting failed docker container
It loops internally trying to start, but it never does.
The weird thing is that if I do a docker run on my node manually, the container starts without any problem, but the pod pulls the image and never starts.
Am I doing something wrong?
If I use a public image for my pod it starts without any problem; this only happens when I use private images.
I have also moved from Debian to Ubuntu: no luck, same problem.
I have also linked the secret to the default service account: still no luck.
I cloned the latest git version and compiled it: no luck.
It is clear to me that the problem is the private registry, but I have applied and followed all the info I have read and still have no luck.
A Docker container exits when its main process exits.
Could you share the container logs?
If you run docker ps -a you should see all running and exited containers.
Run docker logs <container_id>.
Also try running your container in interactive mode and in daemon mode, and see if it fails only in daemon mode.
Running in daemon mode:
docker run -d -t Image_name
Running in interactive mode:
docker run -it Image_name
For interactive daemon mode: docker run -idt Image_name
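Putting the steps above together against this pod (the container ID is taken from the describe output earlier; the image name comes from the question):
docker ps -a                # list running and exited containers
docker logs 3e55ab118a3e    # output of the crashed container
docker run -it docker-registry.hiberus.com:5000/debian:ssh    # interactive mode
docker run -d -t docker-registry.hiberus.com:5000/debian:ssh  # daemon mode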
Refer to: Why docker container exits immediately
When trying to create Pods that can use the GPU, I get the error: exec: "nvidia-smi": executable file not found in $PATH.
To explain the error from the beginning, my main goal was to create JupyterHub environments that can use the GPU. I installed Zero to JupyterHub for Kubernetes and followed these steps to be able to use the GPU. When I check my node, the GPU seems schedulable by Kubernetes. So far everything seemed fine.
kubectl get nodes -o=custom-columns=NAME:.metadata.name,GPUs:.status.capacity.'nvidia\.com/gpu'
NAME GPUs
arge-server 1
However, when I logged in to JupyterHub and tried to open a profile that uses the GPU, I got the error: [Warning] 0/1 nodes are available: 1 Insufficient nvidia.com/gpu. So I checked the Pods and found that they were all in the "Waiting: PodInitializing" state.
kubectl get pods -n gpu-operator-resources
NAME READY STATUS RESTARTS AGE
nvidia-dcgm-x5rqs 0/1 Init:0/1 2 6d20h
nvidia-device-plugin-daemonset-jhjhb 0/1 Init:0/1 0 6d20h
gpu-feature-discovery-pd4xv 0/1 Init:0/1 2 6d20h
nvidia-dcgm-exporter-7mjgt 0/1 Init:0/1 2 6d20h
nvidia-operator-validator-9xjmv 0/1 Init:Error 10 26m
After that, I took a closer look at the Pod nvidia-operator-validator-9xjmv, where the error originates, and I saw that the toolkit-validation container was throwing a CrashLoopBackOff error. Here is the relevant part of the log:
kubectl describe pod nvidia-operator-validator-9xjmv -n gpu-operator-resources
Name: nvidia-operator-validator-9xjmv
Namespace: gpu-operator-resources
.
.
.
Controlled By: DaemonSet/nvidia-operator-validator
Init Containers:
.
.
.
toolkit-validation:
Container ID: containerd://e7d004f0809cbefdae5407ea42eb659972ea7eefa5dd6e45e968cbf3ed22bf2e
Image: nvcr.io/nvidia/cloud-native/gpu-operator-validator:v1.8.2
Image ID: nvcr.io/nvidia/cloud-native/gpu-operator-validator@sha256:a07fd1c74e3e469ac316d17cf79635173764fdab3b681dbc282027a23dbbe227
Port: <none>
Host Port: <none>
Command:
sh
-c
Args:
nvidia-validator
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 18 Nov 2021 12:55:00 +0300
Finished: Thu, 18 Nov 2021 12:55:00 +0300
Ready: False
Restart Count: 16
Environment:
WITH_WAIT: false
COMPONENT: toolkit
Mounts:
/run/nvidia/validations from run-nvidia-validations (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hx7ls (ro)
.
.
.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 58m default-scheduler Successfully assigned gpu-operator-resources/nvidia-operator-validator-9xjmv to arge-server
Normal Pulled 58m kubelet Container image "nvcr.io/nvidia/cloud-native/gpu-operator-validator:v1.8.2" already present on machine
Normal Created 58m kubelet Created container driver-validation
Normal Started 58m kubelet Started container driver-validation
Normal Pulled 56m (x5 over 58m) kubelet Container image "nvcr.io/nvidia/cloud-native/gpu-operator-validator:v1.8.2" already present on machine
Normal Created 56m (x5 over 58m) kubelet Created container toolkit-validation
Normal Started 56m (x5 over 58m) kubelet Started container toolkit-validation
Warning BackOff 3m7s (x255 over 58m) kubelet Back-off restarting failed container
Then, I looked at the logs of the container and I got the following error.
kubectl logs -n gpu-operator-resources -f nvidia-operator-validator-9xjmv -c toolkit-validation
time="2021-11-18T09:29:24Z" level=info msg="Error: error validating toolkit installation: exec: \"nvidia-smi\": executable file not found in $PATH"
toolkit is not ready
For similar issues, deleting the failed Pod and the deployment was suggested. However, doing so did not fix my problem. Do you have any suggestions?
I have;
Ubuntu 20.04
Kubernetes v1.21.6
Docker 20.10.10
NVIDIA-SMI 470.82.01
CUDA 11.4
CPU: Intel Xeon E5-2683 v4 (32) @ 2.097GHz
GPU: NVIDIA GeForce RTX 2080 Ti
Memory: 13815MiB / 48280MiB
Thanks in advance.
In case you're still having the issue: we just had the same issue on our cluster, and the "dirty" fix is to do this:
rm /run/nvidia/driver
ln -s / /run/nvidia/driver
kubectl delete pod -n gpu-operator nvidia-operator-validator-xxxxx
The reason is that the init pod of nvidia-operator-validator tries to execute nvidia-smi within a chroot from /run/nvidia/driver, which is a tmpfs (so it doesn't persist across reboots) and is not populated when the drivers are installed manually.
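A quick sanity check for the workaround (a sketch, using only the paths from the commands above; the chroot call mimics roughly what the validator does):
ls -l /run/nvidia/driver              # should now be a symlink pointing at /
chroot /run/nvidia/driver nvidia-smi  # should find nvidia-smi via the symlinked root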
I do hope for a better fix from Nvidia.
I am trying to create a pod using my own docker image on localhost.
This is the Dockerfile used to create the image:
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
The YAML file used to create the pod is this:
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    ports:
    - containerPort: 80
root@node1:~/test/server# docker images | grep server
server latest 82c5228a553d 3 hours ago 948MB
localhost.localdomain:5000/server latest 82c5228a553d 3 hours ago 948MB
localhost:5000/server latest 82c5228a553d 3 hours ago 948MB
The image has been pushed to localhost registry.
Following is the error I receive.
root@node1:~/test/server# kubectl get pods
NAME READY STATUS RESTARTS AGE
server 0/1 CrashLoopBackOff 5 5m18s
The output of describe pod :
root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
I get no logs :
root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
I am unable to figure out whether the issue is with the container or with the YAML file for creating the pod. Any help would be appreciated.
Posting this as Community Wiki.
As pointed out by @David Maze in the comment section:
If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
The root cause can also be determined from the exit code. In the 3) Check the exit code article you can find a few exit codes like 0, 1, 128, and 137, with descriptions.
3.1) Exit Code 0
This exit code implies that the specified container command completed ‘successfully’, but too often for Kubernetes to accept as working.
In short, your container was created, every action in it was executed, and as there was nothing else to do, it exited with Exit Code 0.
A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes.
Your image, based on centos with a few additional installations, left no process running in the background, so the container was categorized as Completed. As this happened so fast, Kubernetes restarted it and it fell into a loop.
$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
But if you had used sleep 3600 in your container, the sleep command would execute for an hour; after that time it, too, would exit with Exit Code 0.
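As an illustration, a minimal pod sketch of that sleep variant (the pod name is made up):
apiVersion: v1
kind: Pod
metadata:
  name: centos-sleep
spec:
  containers:
  - name: centos
    image: centos
    command: ["sleep", "3600"]  # keeps one process in the foreground for an hour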
Hope this clarifies things.
I am encountering a very basic error:
I have Docker Desktop and minikube set up on my Windows 10 machine.
Further, I set up a local Docker registry using the steps here.
Here is what I have when I run docker ps:
c:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6675d2c57a74 registry:2 "/entrypoint.sh /etc…" 7 hours ago Up 2 hours 0.0.0.0:5000->5000/tcp registry
d05edc8f05b0 gcr.io/k8s-minikube/kicbase:v0.0.10 "/usr/local/bin/entr…" 8 hours ago Up 8 hours 127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp minikube
I pushed my local image to my local registry using docker push, and I deleted the local image using docker image remove to avoid any confusion.
I then tried pulling the image from the local registry to see if it works, and it does:
docker pull localhost:5000/dev/my-web:v1
v1: Pulling from dev/my-web
Digest: sha256:b3a0cf5c66ade8d39709c0cbbd0e08c9cc5f5e1c97f039a2bd1afed657dc8b74
Status: Downloaded newer image for localhost:5000/dev/my-web:v1
localhost:5000/dev/my-web:v1
Now I run my kubectl create commands, and they fail with a connection refused error.
C:\>kubectl create deployment myweb --image=localhost:5000/dev/my-web:v1
deployment.apps/myweb created
C:\>kubectl get pods
NAME READY STATUS RESTARTS AGE
myweb-7d467f4bc4-r9xhc 0/1 ErrImagePull 0 12s
C:\>kubectl describe pod/myweb-7d467f4bc4-r9xhc
Name: myweb-7d467f4bc4-r9xhc
Namespace: default
Priority: 0
Node: minikube/172.17.0.3
Start Time: Sun, 09 Aug 2020 23:30:07 -0400
Labels: app=myweb
pod-template-hash=7d467f4bc4
Annotations: <none>
Status: Pending
IP: 172.18.0.3
IPs:
IP: 172.18.0.3
Controlled By: ReplicaSet/myweb-7d467f4bc4
Containers:
my-web:
Container ID:
Image: localhost:5000/dev/my-web:v1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nr7vj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nr7vj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nr7vj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 93s default-scheduler Successfully assigned default/myweb-7d467f4bc4-r9xhc to minikube
Normal Pulling 53s (x3 over 92s) kubelet, minikube Pulling image "localhost:5000/dev/my-web:v1"
Warning Failed 53s (x3 over 92s) kubelet, minikube Failed to pull image "localhost:5000/dev/my-web:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 53s (x3 over 92s) kubelet, minikube Error: ErrImagePull
Normal BackOff 13s (x5 over 91s) kubelet, minikube Back-off pulling image "localhost:5000/dev/my-web:v1"
Warning Failed 13s (x5 over 91s) kubelet, minikube Error: ImagePullBackOff
I am not able to figure out why I get a connection refused error when pulling through kubectl, while docker pull works.
Please help.
Additional notes: (1) I am using the built-in Windows hypervisor. (2) I am using the default networking.
Well, Kubernetes cannot find your registry. It depends on your setup, but Docker for Windows runs in a VM, and you didn't specify how you are running minikube, but most likely it is in another VM. So you potentially have two VMs here that may or may not be able to talk to each other, depending on how you set them up.
And localhost is almost never going to work with Kubernetes, because it always resolves to the local IP of your Kubernetes node when it comes to pulling the image. That means your registry and the kubelet pulling the image would have to be on the exact same VM.
I would just focus on making it work with the IP address of the VM where your registry is running, and make sure that your Kubernetes VM can reach that IP address. Also, remember that when you run your registry you have to expose the container port, in this case 5000.
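For example (a sketch only; 192.168.99.1 stands in for whatever IP your registry VM actually has):
docker run -d -p 5000:5000 --restart=always --name registry registry:2
kubectl create deployment myweb --image=192.168.99.1:5000/dev/my-web:v1
Note that a plain-HTTP registry reached by IP may also need to be listed as an insecure registry on the Docker daemon that does the pull.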
P.S. You didn't specify which hypervisor you are running. VirtualBox? VMware? Are you using bridged networking or host-only?
I have a Docker container that runs fine when I start it with docker run. I am trying to put that container inside a pod, but I am facing issues. The first run of the pod shows the status "Completed", and then the pod keeps restarting with CrashLoopBackOff status. The exit code, however, is 0.
Here is the result of kubectl describe pod:
Name: messagingclientuiui-6bf95598db-5znfh
Namespace: mgmt
Node: db1mgr0deploy01/172.16.32.68
Start Time: Fri, 03 Aug 2018 09:46:20 -0400
Labels: app=messagingclientuiui
pod-template-hash=2695115486
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/messagingclientuiui-6bf95598db
Containers:
messagingclientuiui:
Container ID: docker://a41db3bcb584582e9eacf26b02c7ef26f57c2d43b813f44e4fd1ba63347d3fc3
Image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
Image ID: docker-pullable://172.32.1.4/messagingclientuiui#sha256:89a002448660e25492bed1956cfb8fff447569e80ac8b7f7e0fa4d44e8abee82
Port: 9087/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 03 Aug 2018 09:50:06 -0400
Finished: Fri, 03 Aug 2018 09:50:16 -0400
Ready: False
Restart Count: 5
Environment Variables from:
mesg-config ConfigMap Optional: false
Environment: <none>
Mounts:
/docker-mount from messuimount (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2pthw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
messuimount:
Type: HostPath (bare host directory volume)
Path: /mon/monitoring-messui/docker-mount
HostPathType:
default-token-2pthw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2pthw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned messagingclientuiui-6bf95598db-5znfh to db1mgr0deploy01
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "messuimount"
Normal SuccessfulMountVolume 4m kubelet, db1mgr0deploy01 MountVolume.SetUp succeeded for volume "default-token-2pthw"
Normal Pulled 2m (x5 over 4m) kubelet, db1mgr0deploy01 Container image "172.32.1.4/messagingclientuiui:667-I20180802-0202" already present on machine
Normal Created 2m (x5 over 4m) kubelet, db1mgr0deploy01 Created container
Normal Started 2m (x5 over 4m) kubelet, db1mgr0deploy01 Started container
Warning BackOff 1m (x8 over 4m) kubelet, db1mgr0deploy01 Back-off restarting failed container
kubectl get pods
NAME READY STATUS RESTARTS AGE
messagingclientuiui-6bf95598db-5znfh 0/1 CrashLoopBackOff 9 23m
I am assuming we need a loop to keep the container running in this case. But I don't understand why it worked when run with docker and does not work inside a pod. Shouldn't it behave the same?
How do we generally debug a CrashLoopBackOff status, apart from running kubectl describe pod and kubectl logs?
The container terminates with exit code 0 if there isn't at least one long-running process left in it. To keep the container running, add these to the deployment configuration:
command: ["sh"]
stdin: true
Replace sh with bash or any other shell that the image may have.
Then you can drop inside the container with exec:
kubectl exec -it <pod-name> sh
Add the -c <container-name> argument if the pod has more than one container.
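For context, a sketch of where those fields sit in a Deployment's pod template (names reused from the question above):
spec:
  template:
    spec:
      containers:
      - name: server
        image: localhost:5000/server
        imagePullPolicy: Never
        command: ["sh"]
        stdin: true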
Are you sure you ran your software as docker run ... -d ... <command>, that it kept running, and that you use the exact same command in your pod? In some cases, if you compare against something run in Docker with -it and no -d, you may find yourself in a pinch: such containers expect a terminal to communicate with the user and exit if no tty is available (hint: a pod's container can be run with tty: true).
It is very unlikely that you have software that runs in detached Docker but does not in Kubernetes.
I started working on Docker and Kubernetes recently.
I ran into a problem that I don't fully understand.
The thing is, when I apply my svc.yaml (service) and rc.yaml (replication controller), the pods get created but their status is Terminated.
I tried checking the possible reason for the failure by using the command:
docker ps -a
954c3ee817f9  localhost:5000/HelloService  "/bin/sh -c ./startSe"  2 minutes ago  Exited (127) 2 minutes ago  k8s_HelloService.523e3b04_HelloService-64789_default_40e92b63-707a-11e7-9b96-080027f96241_195f2fee
Then I tried running:
docker run -i -t localhost:5000/HelloService
/bin/sh: ./startService.sh: not found
What is the possible reason I am getting these errors?
Dockerfile:
FROM alpine:3.2
VOLUME /tmp
ADD HelloService-0.0.1-SNAPSHOT.jar app.jar
VOLUME /etc
ADD /etc/ /etc/
ADD startService.sh /startService.sh
RUN chmod 700 /startService.sh
ENTRYPOINT ./startService.sh
startService.sh:
#!/bin/sh
touch /app.jar
java -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx256m -jar /app.jar
Also, I would like to know if there is any specific way to access the logs from Kubernetes for terminated pods.
Update :
On running the command below:
kubectl describe pods HelloService-522qw
24s  24s  1  {default-scheduler }                                       Normal   Scheduled   Successfully assigned HelloService-522qw to ssantosh.centos7
17s  17s  1  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Created     Created container with docker id b550557f4c17; Security:[seccomp=unconfined]
17s  17s  1  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Started     Started container with docker id b550557f4c17
18s  16s  2  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Pulling     pulling image "localhost:5000/HelloService"
18s  16s  2  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Pulled      Successfully pulled image "localhost:5000/HelloService"
15s  15s  1  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Created     Created container with docker id d30b10211b1b; Security:[seccomp=unconfined]
14s  14s  1  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Normal   Started     Started container with docker id d30b10211b1b
12s  11s  2  {kubelet ssantosh.centos7}  spec.containers{HelloService}  Warning  BackOff     Back-off restarting failed docker container
12s  11s  2  {kubelet ssantosh.centos7}                                 Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "HelloService" with CrashLoopBackOff: "Back-off 10s restarting failed container=HelloService pod=HelloService-522qw_default(1e951b45-7116-11e7-9b96-080027f96241)"
The issue was that there is no Java in the alpine image.
So I modified the
FROM alpine:3.2
to
FROM anapsix/alpine-java
You need a JDK in the image. You also need to update the Dockerfile: remove the ./ in front of the startService.sh command, like below:
ENTRYPOINT /startService.sh
This will fix the error message:
/bin/sh: ./startService.sh: not found
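Putting both fixes together, a sketch of the corrected Dockerfile (the remaining lines are copied unchanged from the question):
FROM anapsix/alpine-java
VOLUME /tmp
ADD HelloService-0.0.1-SNAPSHOT.jar app.jar
VOLUME /etc
ADD /etc/ /etc/
ADD startService.sh /startService.sh
RUN chmod 700 /startService.sh
ENTRYPOINT /startService.sh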