Kubernetes keeps re-pulling images - docker

I am using a MicroK8s installation. The image is tagged (not :latest), so the default pull policy should be IfNotPresent.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-graphql
  labels:
    app: lh-graphql
spec:
  selector:
    matchLabels:
      app: lh-graphql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: lh-graphql
    spec:
      containers:
      - image: hasura/graphql-engine:v2.13.2.cli-migrations-v3
        name: lh-graphql
        ports:
        - containerPort: 8080
          name: lh-graphql
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgresql://postgres:postgres@$(ORCH_POSTGRES_IP):5432/lh
The image is already pulled into Docker:
light@siddhalok:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
postgres 12 3d6880d04326 2 weeks ago 373MB
hasura/graphql-engine v2.13.2.cli-migrations-v3 4cd490369623 2 months ago 570MB
However, it keeps pulling after a deployment is deleted and created again.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 112s default-scheduler Successfully assigned default/lh-graphql-6db75544cf-j65wp to siddhalok
Normal Pulling 112s kubelet Pulling image "hasura/graphql-engine:v2.13.2.cli-migrations-v3"
UPD:
The same happens when creating from command line:
microk8s kubectl run det2 --image=registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector:current --image-pull-policy=IfNotPresent
REPOSITORY TAG IMAGE ID CREATED SIZE
postgres 12 3d6880d04326 2 weeks ago 373MB
lh-develop.img latest f26c3c667fbe 5 weeks ago 2.82GB
dpage/pgadmin4 latest 4d5afde0a02e 6 weeks ago 361MB
detector latest e6f7e6567b73 7 weeks ago 3.81GB
lh-detetctor.img latest e6f7e6567b73 7 weeks ago 3.81GB
registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector current e6f7e6567b73 7 weeks ago 3.81GB

If you are running MicroK8s alongside Docker, MicroK8s still needs to be told about the images that the Docker daemon holds: the local Docker daemon is not part of the MicroK8s Kubernetes cluster, so its image cache is not visible to MicroK8s.
You can export the image and inject it into the MicroK8s cache:
docker save <image name> > myimage.tar
microk8s ctr image import myimage.tar
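For the image in the question, the concrete commands would look roughly like this (a sketch; the tar file name is arbitrary):
# export the image from the local Docker daemon
docker save hasura/graphql-engine:v2.13.2.cli-migrations-v3 > graphql-engine.tar
# import it into the MicroK8s containerd image cache
microk8s ctr image import graphql-engine.tar
# verify that MicroK8s can now see it
microk8s ctr images ls | grep graphql-engine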
Ref : doc

Related

YAML file pulling image but without tag (<none> tag)

I have a YAML file which creates services by pulling images from a repository, as follows:
services:
  receiver:
    image: 10.170.112.11:5000/receiver:1.0.0
    user: "20001"
    ports:
      - "36200:36200"
    deploy:
      replicas: 3
      restart_policy:
        condition: any
    command: "npm run production"
Everything goes well: the service comes up and I can also see the image pulled on the server by this YAML, but the image does not get pulled with its tag. I want images to be pulled and cached with tags. What should be done?
[DSAD ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
10.170.112.11:5000/processor <none> e635f9c55e5d 2 hours ago 186MB
10.170.112.11:5000/enricher <none> 940136867981 6 days ago 276MB
10.170.112.11:5000/receiver <none> c33e914cd0bf 7 days ago 153MB

CrashLoopBackOff while deploying pod using image from private registry

I am trying to create a pod using my own Docker image on localhost.
This is the Dockerfile used to create the image:
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
CMD ["/usr/bin/bash"]
The YAML file used to create the pod is this:
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: server
    imagePullPolicy: Never
    image: localhost:5000/server
    ports:
    - containerPort: 80
root@node1:~/test/server# docker images | grep server
server latest 82c5228a553d 3 hours ago 948MB
localhost.localdomain:5000/server latest 82c5228a553d 3 hours ago 948MB
localhost:5000/server latest 82c5228a553d 3 hours ago 948MB
The image has been pushed to the localhost registry.
Following is the error I receive.
root@node1:~/test/server# kubectl get pods
NAME READY STATUS RESTARTS AGE
server 0/1 CrashLoopBackOff 5 5m18s
The output of describe pod :
root@node1:~/test/server# kubectl describe pod server
Name: server
Namespace: default
Priority: 0
Node: node1/10.0.2.15
Start Time: Mon, 07 Dec 2020 15:35:49 +0530
Labels: app=server
Annotations: cni.projectcalico.org/podIP: 10.233.90.192/32
cni.projectcalico.org/podIPs: 10.233.90.192/32
Status: Running
IP: 10.233.90.192
IPs:
IP: 10.233.90.192
Containers:
server:
Container ID: docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
Image: localhost:5000/server
Image ID: docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 07 Dec 2020 15:41:33 +0530
Finished: Mon, 07 Dec 2020 15:41:33 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-tb7wb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb7wb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/server to node1
Normal Pulled 4m34s (x5 over 5m59s) kubelet Container image "localhost:5000/server" already present on machine
Normal Created 4m34s (x5 over 5m59s) kubelet Created container server
Normal Started 4m34s (x5 over 5m59s) kubelet Started container server
Warning BackOff 56s (x25 over 5m58s) kubelet Back-off restarting failed container
I get no logs:
root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server#
I am unable to figure out whether the issue is with the container or yaml file for creating pod. Any help would be appreciated.
Posting this as Community Wiki.
As pointed out by @David Maze in the comment section:
If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
The root cause can also be determined from the exit code. In the 3) Check the exit code article, you can find a few exit codes like 0, 1, 128 and 137 with descriptions.
3.1) Exit Code 0
This exit code implies that the specified container command completed 'successfully', but too often for Kubernetes to accept as working.
In short, your container was created, every action specified in it was executed, and as there was nothing else to do, it exited with Exit Code 0.
A CrashLoopBackOff error occurs when a pod startup fails repeatedly in Kubernetes.
Your image, based on centos with a few additional installations, did not leave any process running in the background, so it was categorized as Completed. As this happened so quickly, Kubernetes restarted it and it fell into a loop.
$ kubectl run centos --image=centos
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
centos 0/1 CrashLoopBackOff 1 5s
centos 0/1 Completed 2 17s
centos 0/1 CrashLoopBackOff 2 31s
centos 0/1 Completed 3 46s
centos 0/1 CrashLoopBackOff 3 58s
centos 1/1 Running 4 88s
centos 0/1 Completed 4 89s
centos 0/1 CrashLoopBackOff 4 102s
$ kubectl describe po centos | grep 'Exit Code'
Exit Code: 0
But when you used sleep 3600 in your container, the sleep command kept running for an hour; after that time it would also exit with Exit Code 0.
Hope that clarifies it.
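For completeness, a minimal revision of the Dockerfile from the question that would avoid the crash loop by keeping a foreground process alive (a sketch; sleep infinity is just a stand-in for a real long-running application):
FROM centos:8
RUN yum install -y gdb
RUN yum group install -y "Development Tools"
# keep a foreground process running so the container does not exit immediately
CMD ["sleep", "infinity"]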

Docker tags are lost between steps in Bitbucket pipelines

I am using Bitbucket pipelines to build Docker images with Gradle. Here is my build:
definitions:
  steps:
    - step: &build-docker
        name: Build Docker images
        image:
          name: openjdk:8
        services:
          - docker
        script:
          - ./gradlew dockerBuildImage
          - docker image ls
        caches:
          - gradle-wrapper
          - gradle
          - docker
    - step: &publish-docker
        name: Publish Docker images
        image:
          name: docker
        services:
          - docker
        script:
          - docker image ls
        caches:
          - docker
pipelines:
  default:
    - step: *build-docker
    - step: *publish-docker
My build.gradle.kts is configured to tag the images with UTC timestamps:
configure<DockerExtension> {
    configure(this.getProperty("javaApplication"), closureOf<DockerJavaApplication> {
        baseImage = "openjdk:8-jre-alpine"
        tag = "${name}:${Instant.now().epochSecond}"
    })
}
When I run the dockerBuildImage task locally, I can see my tagged images:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
…
forklift-1 1540454741 93fd78260bd1 5 weeks ago 105MB
forklift-2 1540454741 3c8e4e191fd3 5 weeks ago 105MB
forklift-3 1540454741 1e80caffd59e 5 weeks ago 105MB
forklift-4 1540454741 0e3d9c513144 5 weeks ago 105MB
…
The output from the "build-docker" step is like:
REPOSITORY TAG IMAGE ID CREATED SIZE
forklift-1 1543511971 13146b26fe19 1 second ago 105MB
forklift-2 1543511971 7581987997aa 3 seconds ago 105MB
forklift-3 1543511971 a6ef74a8530e 6 seconds ago 105MB
forklift-4 1543511970 a7087154d731 10 seconds ago 105MB
<none> <none> cfc622dd7b3c 3 hours ago 105MB
<none> <none> f17e20778baf 3 hours ago 105MB
<none> <none> 75cc06f4b5ee 3 hours ago 105MB
<none> <none> 1762b4f89680 3 hours ago 105MB
openjdk 8-jre-alpine 2e01f547f003 5 weeks ago 83MB
But the output of the second step does not have any tags, though the sizes of the images are roughly equivalent:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> cfc622dd7b3c 3 hours ago 105MB
<none> <none> f17e20778baf 3 hours ago 105MB
<none> <none> 75cc06f4b5ee 3 hours ago 105MB
<none> <none> 1762b4f89680 3 hours ago 105MB
Where are the tags lost?
Note that some of the image IDs from the second step (docker image ls) seem to be the same as those printed in the first step.
P.S. I know that if I need the tags (e.g. to publish) I can just do both build and publish in a single step.
While I was not able to track down the root cause, I made a simple workaround based on Docker's save and load commands and Bitbucket Pipelines' artifacts.
First, I've changed the tagging scheme a little bit:
configure<DockerExtension> {
    configure(this.getProperty("javaApplication"), closureOf<DockerJavaApplication> {
        baseImage = "openjdk:8-jre-alpine"
        tag = "${name}:${System.getenv("DOCKER_TAG")}"
    })
}
So instead of the UTC timestamp I rely on an environment variable DOCKER_TAG that I can set externally.
Then, define the "build-docker" step as follows:
- step: &build-docker
    name: Build Docker images
    image:
      name: openjdk:8
    services:
      - docker
    script:
      - export DOCKER_TAG=${BITBUCKET_BUILD_NUMBER}
      - ./gradlew dockerBuildImage
      - docker save
        --output images.tar
        forklift-1:${DOCKER_TAG}
        forklift-2:${DOCKER_TAG}
        forklift-3:${DOCKER_TAG}
        forklift-4:${DOCKER_TAG}
    artifacts:
      - images.tar
    caches:
      - gradle-wrapper
      - gradle
I'm ok with using build numbers as tags, but any value can be provided.
Finally, the step that pushes the images is:
- step: &publish-docker
    name: Publish Docker images
    image:
      name: docker
    services:
      - docker
    script:
      - docker load --input images.tar
      - docker image ls
      - docker push …
This works, because docker save
Produces a tarred repository to the standard output stream. Contains all parent layers, and all tags + versions, or specified repo:tag, for each argument provided.

What is the meaning of ImagePullBackOff status on a Kubernetes pod?

I'm trying to run my first kubernetes pod locally.
I've run the following command (from here):
export ARCH=amd64
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
Then, I've tried to run the following:
kubectl create -f ./run-aii.yaml
run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
Now, when I run: kubectl get pods
I'm getting:
NAME READY STATUS RESTARTS AGE
aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m
aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m
aii-pod 0/3 ImagePullBackOff 0 23h
k8s-etcd-127.0.0.1 1/1 Running 0 2d
k8s-master-127.0.0.1 4/4 Running 0 2d
k8s-proxy-127.0.0.1 1/1 Running 0 2d
nginx-198147104-9kajo 1/1 Running 0 2d
BTW: docker images return:
REPOSITORY TAG IMAGE ID CREATED SIZE
ws latest fa7c5f6ef83a 7 days ago 706.8 MB
kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB
aii latest bd12c4acbbaf 9 days ago 1.421 GB
node 4.4 1a93433cee73 11 days ago 647 MB
gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB
nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB
docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB
spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB
wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB
centos latest 61b442687d68 4 months ago 196.6 MB
centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB
gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB
sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB
Why do I get the ImagePullBackOff?
By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it.
You can run a local Kubernetes registry with the registry cluster addon.
Then tag your images with localhost:5000:
docker tag aii localhost:5000/dev/aii
Push the image to the Kubernetes registry:
docker push localhost:5000/dev/aii
And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
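For example, the container entry in run-aii.yaml would then look roughly like this (a sketch based on the manifest in the question):
containers:
- name: aii
  image: localhost:5000/dev/aii
  ports:
  - containerPort: 5144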
Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.
One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecrets for that registry.
An authentication error can then cause an ImagePullBackOff.
I had the same problem. What caused it was that I had already created a pod from the Docker image via the .yml file, but I had mistyped the name, i.e. test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I did kubectl delete pods --all to remove the faulty pod, then redid kubectl create -f name_of_file.yml, which solved my problem.
You can also specify imagePullPolicy: Never in the container's spec:
containers:
- name: nginx
  imagePullPolicy: Never
  image: custom-nginx
  ports:
  - containerPort: 80
The issue arises when the image is not present on the cluster and Kubernetes goes to pull it from the respective registry.
Kubernetes supports three values for imagePullPolicy:
Always: always pulls the image, regardless of whether it is already present on the node.
Never: never pulls the image; only a locally present image is used.
IfNotPresent: pulls the image only if it is not already present on the node.
Best practice: always tag the new image both in the Dockerfile and in the Kubernetes deployment file, so that the new image is pulled into the container.
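As a sketch of that recommendation (the image name and tag below are purely illustrative):
containers:
- name: my-app
  image: registry.example.com/my-app:1.2.3   # pin a specific tag rather than :latest
  imagePullPolicy: IfNotPresent              # pull only when the image is missing on the node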
I too had this problem; when I checked, the image that I was pulling from a private registry had been removed.
If we describe the pod, it will show the pulling events and the image it is trying to pull:
kubectl describe pod <POD_NAME>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 18h (x35 over 20h) kubelet, gsk-kub Pulling image "registeryName:tag"
Normal BackOff 11m (x822 over 20h) kubelet, gsk-kub Back-off pulling image "registeryName:tag"
Warning Failed 91s (x858 over 20h) kubelet, gsk-kub Error: ImagePullBackOff
Despite all the other great answers, none helped me until I found a comment that pointed out this part of Updating images:
The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.
That's exactly what I wanted, but didn't seem to work.
Reading further, it says the following:
If you would like to always force a pull, you can do one of the following:
omit the imagePullPolicy and use :latest as the tag for the image to use.
When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.
$ kubectl create deployment presto-coordinator \
--image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
presto-coordinator 1/1 1 1 3s
Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more on the pod.
Debugging step:
kubectl get pod [name] -o yaml
Run this command to get the YAML configuration of the pod (Get YAML for deployed Kubernetes services?). In my case, it was under this section:
state:
  waiting:
    message: 'rpc error: code = Unknown desc = Error response from daemon: Get
      https://repository:9999/v2/abc/location/image/manifests/tag:
      unauthorized: BAD_CREDENTIAL'
    reason: ErrImagePull
My issue got resolved upon adding the appropriate tag to the image I wanted to pull from the DockerHub.
Previously:
containers:
- name: nginx
  image: alex/my-app-image
Corrected Version:
containers:
- name: nginx
  image: alex/my-app-image:1.1
The image has only one version, which is 1.1. Since I omitted it initially, this threw an error.
After correctly specifying the version, it worked fine.
I had a similar problem when using minikube over Hyper-V with 2048 MB of memory.
I found that in Hyper-V Manager the Memory Demand was higher than what was allocated.
So I stopped minikube and assigned somewhere between 4096 and 6144 MB. It worked fine after that; all pods were running.
I don't know if this will nail down the issue in every case, but have a look at the memory and disk allocated to minikube.
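For reference, the memory allocation can be changed when the cluster is recreated, roughly like this (a sketch; adjust the value and driver to your setup):
minikube stop
minikube delete                                # remove the existing VM so the new size takes effect
minikube start --driver=hyperv --memory=6144   # memory in MB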
I faced the same issue.
ImagePullBackOff means the node is not able to pull the Docker image from the registry, or there is some issue with your registry.
The solution would be as below:
1. Check your image registry name.
2. Check the image pull secrets.
3. Check that the image is present with the same tag and name.
4. Check that your registry is working.
ImagePullBackOff means you have not passed a secret in your YAML, the secret is wrong, or your image name might be wrong.
If you are pulling an image from a private registry, you have to provide an image pull secret; only then will it be able to pull the image.
You also need to create the secret before you deploy the pod. You can use the command below to create the secret:
kubectl create secret docker-registry regcred --docker-server=artifacts.exmple.int --docker-username=<username> --docker-password=<password> -n <namespace>
You can pass the secret in the YAML like below:
imagePullSecrets:
- name: regcred
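For context, that snippet sits at the pod spec level, alongside containers; a minimal sketch (the image path below is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: artifacts.exmple.int/team/my-app:1.0.0
  imagePullSecrets:
  - name: regcred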
I had this error when I tried to create a ReplicationController. The issue was that I had misspelled the nginx image name in the template definition.
Note: this error occurs when Kubernetes is unable to pull the specified image from the repository.
I had the same issue.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-zcr5d 0/1 ImagePullBackOff 0 6m21s
Later I found that the Docker daemon the pod is created on was using a private registry for images, and nginx was not present in it.
I changed the Docker registry to the default one and reloaded the daemon.
After that, the issue was resolved.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-7cbjf 1/1 Running 0 33s
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$ kubectl exec -it nginx-598b589c46-7cbjf -- /bin/bash
root@nginx-598b589c46-7cbjf:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-598b589c46-7cbjf:/#
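What "changed the Docker registry to default" usually means in practice is removing the private mirror from the daemon configuration; a sketch, assuming the mirror was set in /etc/docker/daemon.json:
{
  "registry-mirrors": []
}
Then restart the daemon so the change takes effect:
sudo systemctl restart docker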
In my case, Kubernetes was not able to communicate with my private registry running on localhost:5000 after updating to macOS Monterey. It had been running fine previously. The reason is that Apple AirPlay now listens on port 5000.
To resolve this issue, I disabled the AirPlay Receiver:
Go to System Preferences > Sharing > and uncheck the AirPlay Receiver checkbox.
Source: https://developer.apple.com/forums/thread/682332
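To confirm what is holding the port before changing anything, a quick check (a generic macOS diagnostic, not from the original answer; on Monterey the AirPlay listener typically shows up as ControlCenter):
sudo lsof -nP -iTCP:5000 -sTCP:LISTEN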
To handle this error, you just have to create a Kubernetes secret and use it in the manifest.yaml file.
If it is a private repository, then it is mandatory to use an image pull secret.
To generate the secret:
kubectl create secret docker-registry docker-secrets --docker-server=https://index.docker.io/v1/ --docker-username=ExampleName --docker-password=ExamplePassword --docker-email=example@gmail.com
For --docker-server, use https://index.docker.io/v1/.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: ExampleUsername/test:tagname
    ports:
    - containerPort: 3015
  imagePullSecrets:
  - name: docker-secrets

kubernetes : Containers not starting using private registry

I have a working installation with Kubernetes 1.1.1 running on Debian.
I also have a private registry (v2) working nicely.
I am facing a weird problem.
I define a pod on the master:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: docker-registry.hiberus.com:5000/debian:ssh
  imagePullSecrets:
  - name: myregistrykey
I also have the secret on my master:
myregistrykey kubernetes.io/dockercfg 1 44m
and my config.json looks like this:
{
  "auths": {
    "https://docker-registry.hiberus.com:5000": {
      "auth": "anNhdXJhOmpzYXVyYQ==",
      "email": "jsaura@heraldo.es"
    }
  }
}
and so I did the base64 and created my secret.
simple as hell
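(For reference, on recent kubectl versions an equivalent registry secret can also be generated directly, without hand-crafting the base64 config; a sketch with the credentials redacted:)
kubectl create secret docker-registry myregistrykey \
  --docker-server=https://docker-registry.hiberus.com:5000 \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=jsaura@heraldo.es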
on my node the image gets pulled without any problem
docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
docker-registry.hiberus.com:5000/debian ssh 3b332951c107 29 minutes ago 183.3 MB
golang 1.4 2819d1d84442 7 days ago 562.7 MB
debian latest 91bac885982d 8 days ago 125.1 MB
gcr.io/google_containers/pause 0.8.0 2c40b0526b63 7 months ago 241.7 kB
but my container does not start
./kubectl describe pod nginx
Name: nginx
Namespace: default
Image(s): docker-registry.hiberus.com:5000/debian:ssh
Node: 192.168.29.122/192.168.29.122
Start Time: Wed, 18 Nov 2015 17:08:53 +0100
Labels: app=nginx
Status: Running
Reason:
Message:
IP: 172.17.0.2
Replication Controllers:
Containers:
nginx:
Container ID: docker://3e55ab118a3e5d01d3c58361abb1b23483d41be06741ce747d4c20f5abfeb15f
Image: docker-registry.hiberus.com:5000/debian:ssh
Image ID: docker://3b332951c1070ba2d7a3bb439787a8169fe503ed8984bcefd0d6c273d22d4370
State: Waiting
Reason: CrashLoopBackOff
Last Termination State: Terminated
Reason: Error
Exit Code: 0
Started: Wed, 18 Nov 2015 17:08:59 +0100
Finished: Wed, 18 Nov 2015 17:08:59 +0100
Ready: False
Restart Count: 2
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-ha0i4:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-ha0i4
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Created Created with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
16s 16s 1 {kubelet 192.168.29.122} implicitly required container POD Started Started with docker id 4a063be27162
16s 16s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulling Pulling image "docker-registry.hiberus.com:5000/debian:ssh"
15s 15s 1 {scheduler } Scheduled Successfully assigned nginx to 192.168.29.122
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 36df2dc8b999
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Successfully pulled image "docker-registry.hiberus.com:5000/debian:ssh"
11s 11s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 36df2dc8b999
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Pulled Container image "docker-registry.hiberus.com:5000/debian:ssh" already present on machine
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Created Created with docker id 3e55ab118a3e
10s 10s 1 {kubelet 192.168.29.122} spec.containers{nginx} Started Started with docker id 3e55ab118a3e
5s 5s 1 {kubelet 192.168.29.122} spec.containers{nginx} Backoff Back-off restarting failed docker container
It loops internally, trying to start, but it never does.
The weird thing is that if I run the container manually on the node, it starts without any problem, but when using the pod it pulls the image yet never starts.
Am I doing something wrong?
If I use a public image for my pod, it starts without any problem; this only happens when using private images.
I have also moved from Debian to Ubuntu: no luck, same problem.
I have also linked the secret to the default service account: still no luck.
I cloned the latest git version and compiled it: no luck.
It is clear to me that the problem is related to using the private registry, but I have applied and followed all the info I have read and still no luck.
A Docker container exits when its main process exits.
Could you share the container logs?
If you do docker ps -a, you should see all running and exited containers.
Run docker container logs container_id
Also try running your container in interactive and daemon mode and see if it fails only in daemon mode.
Running in daemon mode -
docker run -d -t Image_name
Running in interactive mode -
docker run -it Image_name
for interactive daemon mode docker run -idt Image_name
refer - Why docker container exits immediately
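Since the failure is happening under Kubernetes, the equivalent checks on the pod side would be roughly (a sketch using the pod name from the question):
kubectl logs nginx --previous   # logs of the last terminated container in the pod
kubectl describe pod nginx      # events and last state, including the exit code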

Resources