MongoDB ImagePullBackOff error in Kubernetes despite trying every SO solution

I'm using minikube on a Fedora-based machine to run a simple MongoDB deployment locally, but I'm constantly getting an ImagePullBackOff error. Here is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
I tried to pull the image locally using docker pull mongo, minikube image pull mongo, and minikube image pull mongo-express several times, restarting Docker and minikube in between.
Logging into Docker Hub (both in the browser and through the terminal) didn't work.
I also tried logging into Docker using the docker login command, then modified my /etc/resolv.conf to add nameserver 8.8.8.8 and restarted Docker with sudo systemctl restart docker, but even that failed to work.
On running the kubectl describe pod command, I get this output:
Name: mongodb-deployment-6bf8f4c466-85b2h
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 29 Aug 2022 23:04:12 +0530
Labels: app=mongodb
pod-template-hash=6bf8f4c466
Annotations: <none>
Status: Pending
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/mongodb-deployment-6bf8f4c466
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'>
Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'>
Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vlcxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-vlcxl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 22m default-scheduler Successfully assigned default/mongodb-deployment-6bf8f4c466-85b2h to minikube
Warning Failed 18m (x2 over 20m) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 18m (x2 over 20m) kubelet Error: ErrImagePull
Normal BackOff 17m (x2 over 20m) kubelet Back-off pulling image "mongo:latest"
Warning Failed 17m (x2 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 17m (x3 over 22m) kubelet Pulling image "mongo:latest"
Normal SandboxChanged 11m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 3m59s (x4 over 11m) kubelet Pulling image "mongo:latest"
Warning Failed 2m (x4 over 9m16s) kubelet Failed to pull image "mongo:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 2m (x4 over 9m16s) kubelet Error: ErrImagePull
Normal BackOff 83s (x7 over 9m15s) kubelet Back-off pulling image "mongo:latest"
Warning Failed 83s (x7 over 9m15s) kubelet Error: ImagePullBackOff
PS: Ignore any spacing errors.

I think your internet connection is slow. The timeout to pull an image is 120 seconds, so the kubelet could not pull the image in under 120 seconds.
First, pull the image via Docker
docker image pull mongo
Then load the downloaded image to minikube
minikube image load mongo
Then everything should work, because the kubelet will now use the image that is stored locally on the node.
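One caveat: with the mongo:latest tag the default imagePullPolicy is Always, so the kubelet may still try to contact the registry; setting imagePullPolicy: IfNotPresent (or Never) on the container makes it prefer the loaded copy. To confirm the image actually landed in minikube's cache (assuming a minikube version that has the image subcommand):
minikube image ls | grep mongo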

Related

How to resolve pods failing due to exceeding pull rate limit from docker?

kubectl get all -n migration:
NAME READY STATUS RESTARTS AGE
pod/nginx2-7b8667968c-zxtq7 0/1 ImagePullBackOff 0 5m38s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx2 0/1 1 0 5m38s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx2-7b8667968c 1 1 0 5m38s
kubectl describe pod nginx2-7b8667968c-zxtq7 -n migration:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned migration/nginx2-7b8667968c-zxtq7 to k8s-master01
Normal SandboxChanged 33s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulling 18s (x2 over 43s) kubelet Pulling image "nginx"
Warning Failed 14s (x2 over 34s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 14s (x2 over 34s) kubelet Error: ErrImagePull
Normal BackOff 3s (x4 over 32s) kubelet Back-off pulling image "nginx"
Warning Failed 3s (x4 over 32s) kubelet Error: ImagePullBackOff
After logging in with a different Docker account:
I can manually pull an image from Docker Hub using docker pull nginx.
But while deploying the Deployment, it again shows the same error.
The Deployment YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
  namespace: migration
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx2
  template:
    metadata:
      labels:
        name: nginx2
    spec:
      containers:
      - name: nginx2
        imagePullPolicy: Always
        image: nginx
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: game-demo
          mountPath: /usr/src/app/config
        - name: secret-basic-auth
          mountPath: /usr/src/app/secret
        - name: site-data2
          mountPath: /var/www/html
      volumes:
      - name: game-demo
        configMap:
          name: game-demo
      - name: secret-basic-auth
        secret:
          secretName: secret-basic-auth
      - name: site-data2
        persistentVolumeClaim:
          claimName: demo-pvc-claim2
Also, since the nginx image is present locally, I tried changing imagePullPolicy to Never as well as IfNotPresent.
But nothing works. Please guide.
Here is your error:
Warning Failed 14s (x2 over 34s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Basically, Docker Hub now rate-limits pulls, and you have to authenticate (and possibly pay) if you exceed their limits.
You can use the publicly hosted "nginx" image from Amazon's ECR instead:
docker pull public.ecr.aws/nginx/nginx:latest
That should be the same as the one you were using; just double-check over here:
https://gallery.ecr.aws/nginx/nginx
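If you'd rather keep pulling from Docker Hub, authenticating the cluster should also raise the limit; roughly, create a registry secret (the name regcred and the credential values here are placeholders) and reference it from the pod spec via imagePullSecrets:
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-username> --docker-password=<your-password-or-token> -n migration
Alternatively, to point the existing Deployment at the ECR mirror without editing the YAML (names taken from the question):
kubectl set image deployment/nginx2 nginx2=public.ecr.aws/nginx/nginx:latest -n migration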

Unable to update image of StatefulSet in Kubernetes

I recently evaluated Kubernetes with a simple test project, and I was able to update the image of a StatefulSet with a command like this:
kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
I'm now trying to get our real system to work in Kubernetes, and the pods don't do anything when I try to update the image, even though I'm using basically the same command.
It says:
statefulset.apps "cloud-stateful-set" image updated
And kubectl describe statefulset.apps/cloud-stateful-set says:
Image: ncccloud:v716"
But kubectl describe pod cloud-stateful-set-0 and kubectl describe pod cloud-stateful-set-1 say:
"Image: ncccloud:latest"
ncccloud:latest is an image which doesn't work:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-stateful-set-0 0/1 CrashLoopBackOff 7 13m
cloud-stateful-set-1 0/1 CrashLoopBackOff 7 13m
mssql-deployment-6cd4ff766-pzz99 1/1 Running 1 55m
Another strange thing is that every time I try to apply the StatefulSet it says configured instead of unchanged.
$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
Here is my cloud-stateful-set.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud-stateful-set
  labels:
    app: cloud
    group: service
spec:
  replicas: 2
  # podManagementPolicy: Parallel
  serviceName: cloud-stateful-set
  selector:
    matchLabels:
      app: cloud
  template:
    metadata:
      labels:
        app: cloud
        group: service
    spec:
      containers:
      - name: cloud-stateful-container
        image: ncccloud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cloud-stateful-storage
          mountPath: /cloud-stateful-data
  volumeClaimTemplates:
  - metadata:
      name: cloud-stateful-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Mi
Here is full output of kubectl describe pod/cloud-stateful-set-1:
Name: cloud-stateful-set-1
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 02 Jul 2019 11:03:01 +0300
Labels: app=cloud
controller-revision-hash=cloud-stateful-set-5c9964c897
group=service
statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations: <none>
Status: Running
IP: 10.1.0.20
Controlled By: StatefulSet/cloud-stateful-set
Containers:
cloud-stateful-container:
Container ID: docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
Image: ncccloud:latest
Image ID: docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 02 Jul 2019 11:19:03 +0300
Finished: Tue, 02 Jul 2019 11:19:03 +0300
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloud-stateful-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cloud-stateful-storage-cloud-stateful-set-1
ReadOnly: false
default-token-gzxpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gzxpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned cloud-stateful-set-1 to docker-for-desktop
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gzxpx"
Normal Pulled 17m (x5 over 19m) kubelet, docker-for-desktop Container image "ncccloud:latest" already present on machine
Normal Created 17m (x5 over 19m) kubelet, docker-for-desktop Created container
Normal Started 17m (x5 over 19m) kubelet, docker-for-desktop Started container
Warning BackOff 4m (x70 over 19m) kubelet, docker-for-desktop Back-off restarting failed container
Here is full output of kubectl describe statefulset.apps/cloud-stateful-set:
Name: cloud-stateful-set
Namespace: default
CreationTimestamp: Tue, 02 Jul 2019 11:02:59 +0300
Selector: app=cloud
Labels: app=cloud
group=service
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas: 2 desired | 2 total
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cloud
group=service
Containers:
cloud-stateful-container:
Image: ncccloud:v716
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
Volumes: <none>
Volume Claims:
Name: cloud-stateful-storage
StorageClass:
Labels: <none>
Annotations: <none>
Capacity: 10Mi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
I'm using Docker Desktop on Windows, if it matters.
In my case imagePullPolicy was already set to Always; the following patch
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
helped; see the k8s docs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
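Adapted to the StatefulSet from this question (substituting its name and image; treat it as a sketch), the patch would look roughly like:
kubectl patch statefulset cloud-stateful-set --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"ncccloud:v716"}]'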
In the StatefulSet YAML, change
imagePullPolicy: Never
to
imagePullPolicy: Always
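After changing the policy, re-applying the manifest and watching the rollout should show the pods being recreated (paths and names taken from the question; this assumes the node can actually pull the ncccloud image with the Always policy):
kubectl apply -f k8s/cloud-stateful-set.yaml
kubectl rollout status statefulset/cloud-stateful-set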

Why Kubernetes is not attaching my secret into my pod?

I already created my secret as recommended by Kubernetes and followed the tutorial, but the pod doesn't have my secret attached.
As you can see, I created the secret and described it.
After that, I created my pod.
$ kubectl get secret my-secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{"auths":{"my-private-repo.com":{"username":"<username>","password":"<password>","email":"<email>","auth":"<randomAuth>="}}}
$ kubectl create -f my-pod.yaml
pod "my-pod" created
$ kubectl describe pods trunfo
Name: my-pod
Namespace: default
Node: gke-trunfo-default-pool-07eea2fb-3bh9/10.233.224.3
Start Time: Fri, 28 Sep 2018 16:41:59 -0300
Labels: <none>
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container container-trunfo
Status: Pending
IP: 10.10.1.37
Containers:
container-trunfo:
Container ID:
Image: <my-image>
Image ID:
Port: 9898/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hz4mf (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-hz4mf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hz4mf
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned trunfo to gke-trunfo-default-pool-07eea2fb-3bh9
Normal SuccessfulMountVolume 4s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 MountVolume.SetUp succeeded for volume "default-token-hz4mf"
Normal Pulling 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-private-repo.com/v1/_ping: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull
Normal BackOff 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Back-off pulling image "my-private-repo.com/my-image:latest"
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ImagePullBackOff
What can I do to fix it?
EDIT
This is my pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-private-repo/images/<my-image>
    ports:
    - containerPort: 9898
  imagePullSecrets:
  - name: my-secret
As we can see, the secret is defined as expected, but not attached correctly.
You did not get as far as secrets yet. Your logs say:
Failed to pull image "my-private-repo.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-private-repo.com/v1/_ping: dial tcp: lookup my-private-repo.com on 169.254.169.254:53: no such host
Warning Failed 3s kubelet, gke-trunfo-default-pool-07eea2fb-3bh9 Error: ErrImagePull
This means that your pod cannot even start because the image is not available. Fix that first, and if you still have problems with secrets once you see the pod in the Ready state, post your YAML definition.
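A quick sanity check is to make sure the registry hostname resolves at all; the pull happens on the node, so it is the node's DNS that matters (my-private-repo.com is the placeholder from the question):
nslookup my-private-repo.com
If the name only resolves on an internal or VPN DNS server, the GKE nodes will not be able to reach it, which matches the "no such host" error above.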

kubectl get pods ErrImagePull

I'm fighting with Kubernetes. I've googled a lot and looked at a few answers, such as this one for example, but can't seem to get it to work.
I've built a Docker image and pushed it to a local registry:
sudo docker run -d -p 5000:5000 --name registry registry:2
sudo docker tag i-a/i-a:latest localhost:5000/i-a
sudo docker push localhost:5000/i-a
The last command gives:
The push refers to a repository [localhost:5000/i-a]
e0a33c56cca0: Pushed
54ab83ede54d: Pushed
f5a58f369605: Pushed
cd7100a72410: Pushed
latest: digest: sha256:0f30cdf6b4a4e0e382a6cae50c1325103c3b987d9e51c42edea2244a82ae1331 size: 1164
Doing sudo docker pull localhost:5000/i-a gives:
Using default tag: latest
latest: Pulling from i-a
Digest: sha256:0f30cdf6b4a4e0e382a6cae50c1325103c3b987d9e51c42edea2244a82ae1331
Status: Image is up to date for localhost:5000/i-a:latest
The config file i-a.yaml:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: i-a
  name: i-a
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      run: i-a
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: i-a
    spec:
      containers:
      - image: localhost:5000/i-a
        imagePullPolicy: IfNotPresent
        name: i-a
        ports:
        - containerPort: 8090
      dnsPolicy: ClusterFirst
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: i-a
  name: i-a
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    run: i-a
  sessionAffinity: None
  type: ClusterIP
When I do sudo kubectl get pods --all-namespaces I get:
...
default i-a-3400848339-0x6c9 0/1 ImagePullBackOff 0 13m
default i-a-3400848339-7ltp1 0/1 ImagePullBackOff 0 13m
default i-a-3400848339-wv092 0/1 ImagePullBackOff 0 13m
The ImagePullBackOff turns to ErrImagePull.
When I run kubectl describe pod i-a-3400848339-0x6c9 I get an error Failed to pull image "localhost:5000/i-a": Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused:
Name: i-a-3400848339-0x6c9
Namespace: default
Node: minikube/192.168.99.100
Start Time: Mon, 09 Apr 2018 21:11:15 +0200
Labels: pod-template-hash=3400848339
run=i-a
Status: Pending
IP: 172.17.0.7
Controllers: ReplicaSet/i-a-3400848339
Containers:
i-a:
Container ID:
Image: localhost:5000/i-a
Image ID:
Port: 8090/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bmwkd (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-bmwkd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bmwkd
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
18m 18m 1 {default-scheduler } Normal Scheduled Successfully assigned i-a-3400848339-0x6c9 to minikube
18m 2m 8 {kubelet minikube} spec.containers{i-a} Normal Pulling pulling image "localhost:5000/i-a"
18m 2m 8 {kubelet minikube} spec.containers{i-a} Warning Failed Failed to pull image "localhost:5000/i-a": Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused
18m 2m 8 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "i-a" with ErrImagePull: "Error while pulling image: Get http://localhost:5000/v1/repositories/i-a/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused"
18m 13s 75 {kubelet minikube} spec.containers{i-a} Normal BackOff Back-off pulling image "localhost:5000/i-a"
18m 13s 75 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "i-a" with ImagePullBackOff: "Back-off pulling image \"localhost:5000/i-a\""
I'm not sure where to look next...
(I get 404 when I browse to http://localhost:5000/v1/repositories/i-a/images)
Try with:
spec:
  containers:
  - image: localhost:5000/i-a
    imagePullPolicy: Never
    name: i-a
    ports:
    - containerPort: 8090
  dnsPolicy: ClusterFirst
  restartPolicy: Always
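With imagePullPolicy: Never the image has to already be present inside the minikube node. One way to get it there, assuming you can rebuild the image from its Dockerfile, is to build directly against minikube's Docker daemon (note that plain sudo usually drops the DOCKER_HOST variables that docker-env sets):
eval $(minikube docker-env)
docker build -t localhost:5000/i-a .
From inside the minikube VM, localhost:5000 points at the VM itself rather than your host's registry, which is why the pull was being refused.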

Why Kubernetes pods sometimes got CrashLoopBackOff?

Using a Kubernetes cluster: 3 hosts (1 master and 2 nodes).
Kubernetes version: 1.7
Deployed a Rails app to the Kubernetes cluster.
Here is the deployment.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: server
  labels:
    app: server
spec:
  ports:
  - port: 80
  selector:
    app: server
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
  labels:
    app: server
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: server
        tier: backend
    spec:
      containers:
      - image: 192.168.33.13/myapp/server
        name: server
        ports:
        - containerPort: 3000
          name: server
        imagePullPolicy: Always
Deploy it:
$ kubectl create -f deployment.yaml
Then check the pods status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-962161505-kw3jf 0/1 CrashLoopBackOff 6 9m
server-962161505-lxcfb 0/1 CrashLoopBackOff 6 9m
server-962161505-mbnkn 0/1 CrashLoopBackOff 6 9m
At the beginning the status was Completed, but it soon went to CrashLoopBackOff. Is there anything wrong in the config YAML file?
(By the way, I don't want to run an entrypoint.sh script here; I used a job.yaml file with the k8s Job kind to do that instead.)
Edit
Result of kubectl describe pod server-962161505-kw3jf:
Name: server-962161505-kw3jf
Namespace: default
Node: node1/192.168.33.11
Start Time: Mon, 13 Nov 2017 17:45:47 +0900
Labels: app=server
pod-template-hash=962161505
tier=backend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"server-962161505","uid":"0acadda6-c84f-11e7-84b8-02178ad2db9a","...
Status: Running
IP: 10.42.254.104
Created By: ReplicaSet/server-962161505
Controlled By: ReplicaSet/server-962161505
Containers:
server:
Container ID: docker://29eca3d9a20c60c83314101b036d742c5868c3bf25a39f28c5e4208bcdbfcede
Image: 192.168.33.13/myapp/server
Image ID: docker-pullable://192.168.33.13/myapp/server#sha256:0e056e3ff5b1f1084e0946bc4211d33c6f48bc06dba7e07340c1609bbd5513d6
Port: 3000/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 14 Nov 2017 10:13:12 +0900
Finished: Tue, 14 Nov 2017 10:13:13 +0900
Ready: False
Restart Count: 26
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-csjqn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-csjqn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-csjqn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 22m kubelet, node1 MountVolume.SetUp succeeded for volume "default-token-csjqn"
Normal SandboxChanged 22m kubelet, node1 Pod sandbox changed, it will be killed and re-created.
Warning Failed 20m (x3 over 21m) kubelet, node1 Failed to pull image "192.168.33.13/myapp/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get http://192.168.33.13/v2/: dial tcp 192.168.33.13:80: getsockopt: connection refused"}
Normal BackOff 20m (x5 over 21m) kubelet, node1 Back-off pulling image "192.168.33.13/myapp/server"
Normal Pulling 4m (x7 over 21m) kubelet, node1 pulling image "192.168.33.13/myapp/server"
Normal Pulled 4m (x4 over 20m) kubelet, node1 Successfully pulled image "192.168.33.13/myapp/server"
Normal Created 4m (x4 over 20m) kubelet, node1 Created container
Normal Started 4m (x4 over 20m) kubelet, node1 Started container
Warning FailedSync 10s (x99 over 21m) kubelet, node1 Error syncing pod
Warning BackOff 10s (x91 over 20m) kubelet, node1 Back-off restarting failed container
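Since the last state shows the container exiting almost immediately with exit code 0 (Completed), a good next step is usually to read the previous run's output, using the pod name from the describe output above:
kubectl logs server-962161505-kw3jf --previous
If the image's default command simply finishes and exits, Kubernetes will keep restarting it under the default restartPolicy: Always, which is exactly what CrashLoopBackOff means; the container's command needs to stay in the foreground (for a Rails app, the server process itself).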
