I've deployed a private registry and can pull from it with docker pull x.x.x/name. The thing is that I can't make Kubernetes pull from that repository. I think I've followed all the answers on other topics, but they don't seem to do the trick.
.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: x.x.x/nginx_1
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
    - name: registrypullsecret
kubectl get pods:
NAME READY STATUS RESTARTS AGE
private-image-test-1 0/1 Image: x.x.x/nginx_1 is ready, container is creating 0 4m
kubectl describe pods private-image-test-1
Name: private-image-test-1
Namespace: default
Node: 37.72.163.69/37.72.163.69
Start Time: Fri, 06 May 2016 08:04:45 +0000
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
uses-private-image:
Container ID:
Image: x.x.x/nginx_1
Image ID:
Port:
Command:
echo
SUCCESS
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: Image: x.x.x/nginx_1 is ready, container is creating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-zrn4n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zrn4n
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {scheduler } scheduled Successfully assigned private-image-test-1 to 37.72.163.69
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD pulled Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
4m 8s 30 {kubelet 37.72.163.69} implicitly required container POD failed Failed to create docker container with error: no such image
4m 8s 30 {kubelet 37.72.163.69} failedSync Error syncing pod, skipping: no such image
Any help is welcome at this point, thanks!
In most cases where I've come across this issue, it is almost always an incorrect credential secret. The proper format should be along the lines of:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: {BASE64 encoding of your config}
type: kubernetes.io/dockerconfigjson
From memory, the type field has changed in recent versions of k8s, so definitely check that you have the correct type listed.
Also, your YAML example has bad indenting, but that's likely an SO editor issue.
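If you'd rather not hand-encode the dockerconfigjson, kubectl can generate a secret of this type for you. A minimal sketch, with placeholder values for your registry:
kubectl create secret docker-registry registrypullsecret \
  --docker-server=x.x.x \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
You can then inspect the result with kubectl get secret registrypullsecret -o yaml and compare it against the format above.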
I got an error when creating a deployment.
This is my Dockerfile, which I have run and tested locally; I also pushed it to Docker Hub.
FROM node:14.15.4
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install pm2 -g
COPY . .
EXPOSE 3001
CMD [ "pm2-runtime", "server.js" ]
On my Raspberry Pi 3 Model B, I have installed k3s using curl -sfL https://get.k3s.io | sh -
Here is my controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-deployment
  labels:
    app: controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      containers:
        - name: controller
          image: akirayorunoe/node-controller-server
          ports:
            - containerPort: 3001
After applying this file, the pod goes into an error state.
When I check the pod logs, it says:
standard_init_linux.go:219: exec user process caused: exec format error
Here is the response from kubectl describe pod:
Name: controller-deployment-8669c9c864-sw8kh
Namespace: default
Priority: 0
Node: raspberrypi/192.168.0.30
Start Time: Fri, 21 May 2021 11:21:05 +0700
Labels: app=controller
pod-template-hash=8669c9c864
Annotations: <none>
Status: Running
IP: 10.42.0.43
IPs:
IP: 10.42.0.43
Controlled By: ReplicaSet/controller-deployment-8669c9c864
Containers:
controller:
Container ID: containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
Image: akirayorunoe/node-controller-server
Image ID: docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 May 2021 11:24:29 +0700
Finished: Fri, 21 May 2021 11:24:29 +0700
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-txm85:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-txm85
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m33s default-scheduler Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
Normal Pulled 5m29s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.072053213s
Normal Pulled 5m24s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.018192177s
Normal Pulled 5m6s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 3.015959209s
Normal Pulled 4m34s kubelet Successfully pulled image "akirayorunoe/node-controller-server" in 2.921116157s
Normal Created 4m34s (x4 over 5m29s) kubelet Created container controller
Normal Started 4m33s (x4 over 5m28s) kubelet Started container controller
Normal Pulling 3m40s (x5 over 5m32s) kubelet Pulling image "akirayorunoe/node-controller-server"
Warning BackOff 30s (x23 over 5m22s) kubelet Back-off restarting failed container
You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.
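If the image needs to run on the Pi, one option is to rebuild it for ARM. A rough sketch using Docker Buildx (the platform and push target are assumptions; adjust them to your registry and the OS on your Pi):
docker buildx create --use
docker buildx build --platform linux/arm/v7 -t akirayorunoe/node-controller-server --push .
Alternatively, build the image directly on the Raspberry Pi so it is produced for the native architecture.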
I have a question about pod termination when using the image kodekloud/throw-dice.
pod-definition.yml:
apiVersion: v1
kind: Pod
metadata:
  name: throw-dice-pod
spec:
  containers:
    - image: kodekloud/throw-dice
      name: throw-dice
  restartPolicy: Never
I have checked the steps in the Dockerfile. It runs throw-dice.sh, which randomly returns a number between 1 and 6.
Let's say the container returns 3 the first time; how is the pod below terminated? Where is the condition defined at the pod level that it is supposed to terminate if the number returned by the script is not 6?
The below steps were performed to execute pod-definition.yml:
master $ kubectl create -f /root/throw-dice-pod.yaml
pod/throw-dice-pod created
master $ kubectl get pods
NAME READY STATUS RESTARTS AGE
throw-dice-pod 0/1 ContainerCreating 0 9s
master $ kubectl get pods
NAME READY STATUS RESTARTS AGE
throw-dice-pod 0/1 Error 0 12s
master $ kubectl describe pod throw-dice-pod
Name: throw-dice-pod
Namespace: default
Priority: 0
Node: node01/172.17.0.83
Start Time: Sat, 16 May 2020 11:20:01 +0000
Labels: <none>
Annotations: <none>
Status: Failed
IP: 10.88.0.4
IPs:
IP: 10.88.0.4
Containers:
throw-dice:
Container ID: docker://4560c794b58cf8f3e3fad691b2292e37db4e84e20c9286321f026d1735272b5f
Image: kodekloud/throw-dice
Image ID: docker-pullable://kodekloud/throw-dice@sha256:9c70a0f907b99293885a9591b6162e9ec89e127937626a97ca7f9f6be2d98b01
Port: <none>
Host Port: <none>
State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 16 May 2020 11:20:10 +0000
Finished: Sat, 16 May 2020 11:20:10 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nr5kl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nr5kl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nr5kl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/throw-dice-pod to node01
Normal Pulling 21s kubelet, node01 Pulling image "kodekloud/throw-dice"
Normal Pulled 19s kubelet, node01 Successfully pulled image "kodekloud/throw-dice"
Normal Created 19s kubelet, node01 Created container throw-dice
Normal Started 18s kubelet, node01 Started container throw-dice
master $ kubectl logs throw-dice-pod
2
The Dockerfile has ENTRYPOINT sh throw-dice.sh, which means the script is executed and then the container terminates automatically. If you want the container to keep running, you need to start a long-running process, for example a Java process: ENTRYPOINT ["java", "-jar", "/whatever/your.jar"]
Exit code 1 here is set by the application.
Take a look at throw-dice.sh. We see that the application shuffles the exit code between 0, 1 and 2.
The exit code is always equal to that shuffling result.
If the exit code is 0, 6 is logged.
If the exit code is 1 or 2, the result of shuffling an integer from 1-5 is logged.
So in your case, the exit code from shuffling is 1 and the shuffled result in the log is 2. An exit code of 1 from the application is treated by Kubernetes as the Error reason.
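To make that concrete, here is a hypothetical sketch of a script with the behaviour described above (this is not the actual kodekloud/throw-dice script, just an illustration of why the pod ends in Error):
#!/bin/sh
EXIT_CODE=$(shuf -i 0-2 -n 1)   # shuffle an exit code between 0, 1 and 2
if [ "$EXIT_CODE" -eq 0 ]; then
  echo 6                        # exit code 0 -> 6 is logged
else
  shuf -i 1-5 -n 1              # exit code 1 or 2 -> a number from 1-5 is logged
fi
exit "$EXIT_CODE"               # any non-zero exit marks the container as Error
Because the pod uses restartPolicy: Never, Kubernetes does not restart the container after a non-zero exit, so the pod simply ends up in the Error/Failed state; there is no dice-specific condition at the pod level.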
I recently evaluated Kubernetes with a simple test project, and I was able to update the image of a StatefulSet with a command like this:
kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
I'm now trying to get our real system to work in Kubernetes, and the pods don't do anything when I try to update the image, even though I'm using basically the same command.
It says:
statefulset.apps "cloud-stateful-set" image updated
And kubectl describe statefulset.apps/cloud-stateful-set says:
Image: ncccloud:v716
But kubectl describe pod cloud-stateful-set-0 and kubectl describe pod cloud-stateful-set-1 say:
"Image: ncccloud:latest"
ncccloud:latest is an image which doesn't work:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-stateful-set-0 0/1 CrashLoopBackOff 7 13m
cloud-stateful-set-1 0/1 CrashLoopBackOff 7 13m
mssql-deployment-6cd4ff766-pzz99 1/1 Running 1 55m
Another strange thing is that every time I try to apply the StatefulSet it says configured instead of unchanged.
$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
Here is my cloud-stateful-set.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud-stateful-set
  labels:
    app: cloud
    group: service
spec:
  replicas: 2
  # podManagementPolicy: Parallel
  serviceName: cloud-stateful-set
  selector:
    matchLabels:
      app: cloud
  template:
    metadata:
      labels:
        app: cloud
        group: service
    spec:
      containers:
        - name: cloud-stateful-container
          image: ncccloud:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          volumeMounts:
            - name: cloud-stateful-storage
              mountPath: /cloud-stateful-data
  volumeClaimTemplates:
    - metadata:
        name: cloud-stateful-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Mi
Here is the full output of kubectl describe pod/cloud-stateful-set-1:
Name: cloud-stateful-set-1
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 02 Jul 2019 11:03:01 +0300
Labels: app=cloud
controller-revision-hash=cloud-stateful-set-5c9964c897
group=service
statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations: <none>
Status: Running
IP: 10.1.0.20
Controlled By: StatefulSet/cloud-stateful-set
Containers:
cloud-stateful-container:
Container ID: docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
Image: ncccloud:latest
Image ID: docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 02 Jul 2019 11:19:03 +0300
Finished: Tue, 02 Jul 2019 11:19:03 +0300
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloud-stateful-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cloud-stateful-storage-cloud-stateful-set-1
ReadOnly: false
default-token-gzxpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gzxpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned cloud-stateful-set-1 to docker-for-desktop
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gzxpx"
Normal Pulled 17m (x5 over 19m) kubelet, docker-for-desktop Container image "ncccloud:latest" already present on machine
Normal Created 17m (x5 over 19m) kubelet, docker-for-desktop Created container
Normal Started 17m (x5 over 19m) kubelet, docker-for-desktop Started container
Warning BackOff 4m (x70 over 19m) kubelet, docker-for-desktop Back-off restarting failed container
Here is the full output of kubectl describe statefulset.apps/cloud-stateful-set:
Name: cloud-stateful-set
Namespace: default
CreationTimestamp: Tue, 02 Jul 2019 11:02:59 +0300
Selector: app=cloud
Labels: app=cloud
group=service
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas: 2 desired | 2 total
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cloud
group=service
Containers:
cloud-stateful-container:
Image: ncccloud:v716
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
Volumes: <none>
Volume Claims:
Name: cloud-stateful-storage
StorageClass:
Labels: <none>
Annotations: <none>
Capacity: 10Mi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
I'm using Docker Desktop on Windows, if it matters.
In my case imagePullPolicy was already set to Always; the following command helped:
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
See the k8s docs: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
In the StatefulSet YAML, change
imagePullPolicy: Never
to
imagePullPolicy: Always
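A sketch of making that same change with kubectl patch instead of editing the file (the path assumes the container is the first one in the pod template):
kubectl patch statefulset cloud-stateful-set --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'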
When I try to run my Elasticsearch container through a Kubernetes Deployment, the Elasticsearch pod fails after some time, while it runs perfectly fine when run directly as a Docker container using docker-compose or a Dockerfile. This is what I get as a result of kubectl get pods:
NAME READY STATUS RESTARTS AGE
es-764bd45bb6-w4ckn 0/1 Error 4 3m
Below is the result of kubectl describe pod:
Name: es-764bd45bb6-w4ckn
Namespace: default
Node: administrator-thinkpad-l480/<node_ip>
Start Time: Thu, 30 Aug 2018 16:38:08 +0530
Labels: io.kompose.service=es
pod-template-hash=3206801662
Annotations: <none>
Status: Running
IP: 10.32.0.8
Controlled By: ReplicaSet/es-764bd45bb6
Containers:
es:
Container ID: docker://9be2f7d6eb5d7793908852423716152b8cefa22ee2bb06fbbe69faee6f6aa3c3
Image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:9ae20c753f18e27d1dd167b8675ba95de20b1f1ae5999aae5077fa2daf38919e
Port: 9200/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Thu, 30 Aug 2018 16:42:56 +0530
Finished: Thu, 30 Aug 2018 16:43:07 +0530
Ready: False
Restart Count: 5
Environment:
ELASTICSEARCH_ADVERTISED_HOST_NAME: es
ES_JAVA_OPTS: -Xms2g -Xmx2g
ES_HEAP_SIZE: 2GB
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-nhb9z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nhb9z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully assigned default/es-764bd45bb6-w4ckn to administrator-thinkpad-l480
Normal Pulled 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Container image "docker.elastic.co/elasticsearch/elasticsearch:6.2.4" already present on machine
Normal Created 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Created container
Normal Started 3m (x5 over 6m) kubelet, administrator-thinkpad-l480 Started container
Warning BackOff 1m (x15 over 5m) kubelet, administrator-thinkpad-l480 Back-off restarting failed container
Here is my elasticsearch-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.1.0 (36652f6)
  creationTimestamp: null
  labels:
    io.kompose.service: es
  name: es
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: es
    spec:
      containers:
        - env:
            - name: ELASTICSEARCH_ADVERTISED_HOST_NAME
              value: es
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: ES_HEAP_SIZE
              value: 2GB
          image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
          name: es
          ports:
            - containerPort: 9200
          resources: {}
      restartPolicy: Always
status: {}
When I try to get logs using kubectl logs -f es-764bd45bb6-w4ckn, I get:
Error from server: Get https://<slave node ip>:10250/containerLogs/default/es-764bd45bb6-w4ckn/es?previous=true: dial tcp <slave node ip>:10250: i/o timeout
What could be the reason and solution for this problem?
I had the same problem; there can be a couple of reasons for this issue. In my case the jar file was missing. @Lakshya has already answered this problem; I would like to add the steps you can take to troubleshoot it:
Get the pod status, Command - kubectl get pods
Describe pod to have further look - kubectl describe pod "pod-name"
The last few lines of the output give you the events and show where your deployment failed
Get logs for more details - kubectl logs "pod-name"
Get container logs - kubectl logs "pod-name" -c "container-name"
Get the container name from the output of the describe pod command
If your container is up, you can use the kubectl exec -it command to further analyse the container
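A consolidated sketch of those steps, using the pod and container names from this question:
kubectl get pods
kubectl describe pod es-764bd45bb6-w4ckn      # events at the bottom show where it failed
kubectl logs es-764bd45bb6-w4ckn              # pod logs
kubectl logs es-764bd45bb6-w4ckn -c es        # logs for a specific container
kubectl exec -it es-764bd45bb6-w4ckn -- sh    # inspect the container if it is running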
Hope it helps community members in future issues.
I found the logs using docker logs for the es container and saw that es was not starting because vm.max_map_count was set to a very low value.
I changed vm.max_map_count to the desired value using sysctl -w vm.max_map_count=262144 and the pod started after that.
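Note that sysctl -w does not survive a reboot. A sketch of persisting the setting on the node (the drop-in file name is just a convention):
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system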
In my case, I just ran kubectl run ubuntu --image=ubuntu, got a similar error, and kubectl logs was empty.
I guess the reason is that the ubuntu image powers off automatically when it has no command to run, so the solution is:
output the k8s ubuntu config YAML
in command, give the container a command that keeps it from powering off (for example, add "sleep infinity"); the following is a working config YAML
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "ubuntu",
    "creationTimestamp": null,
    "labels": {
      "run": "ubuntu"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu:20.04",
        "command": [
          "sleep",
          "infinity"
        ],
        "resources": {},
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Always"
  },
  "status": {}
}
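For reference, a roughly equivalent one-liner that avoids editing the YAML by hand (assuming the same image and command):
kubectl run ubuntu --image=ubuntu:20.04 --restart=Always -- sleep infinity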
Maybe the config is incorrect but still valid; read the pod logs and find the error message. Fix the configs and redeploy the app.
PLEASE READ UPDATE 2
I have a very simple EventHubClient app. It will just listen to an EventHub messages.
I get it running with the Docker support given in Visual Studio 2017 (Linux Container).
But when I try to deploy it in Kubernetes, I get "Back-off restarting failed container"
C# Code:
public static void Main(string[] args)
{
    // Init Mapper
    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.AddProfile<AiElementProfile>();
    });

    Console.WriteLine("Registering EventProcessor...");

    var eventProcessorHost = new EventProcessorHost(
        EventHubPath,
        ConsumerGroupName,
        EventHubConnectionString,
        AzureStorageConnectionString,
        ContainerName
    );

    // Registers the Event Processor Host and starts receiving messages
    eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>();

    Console.WriteLine("Receiving. Press ENTER to stop worker.");
    Console.ReadLine();
}
Kubernetes Manifest file (.yaml):
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
        - name: historysvc
          image: vncont.azurecr.io/historysvc:v1
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: acr-auth
kubectl get pods:
NAME READY STATUS RESTARTS AGE
historysvc-deployment-558fc5649f-bln8f 0/1 CrashLoopBackOff 17 1h
historysvc-deployment-558fc5649f-jgjvq 0/1 CrashLoopBackOff 17 1h
kubectl describe pod historysvc-deployment-558fc5649f-bln8f
Name: historysvc-deployment-558fc5649f-bln8f
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.11
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://59e66f1e6420146f6eca4f19e2801a4ee0435a34c7ac555a8d04f699a1497f35
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:17:10 +0200
Finished: Tue, 24 Jul 2018 10:17:10 +0200
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Jul 2018 10:16:29 +0200
Finished: Tue, 24 Jul 2018 10:16:29 +0200
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned historysvc-deployment-558fc5649f-bln8f to aks-nodepool1-81522366-0
Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-81522366-0 MountVolume.SetUp succeeded for volume "default-token-mt8mm"
Normal Pulled 8s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Container image "vncont.azurecr.io/historysvc:v1" already present on machine
Normal Created 7s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Created container
Normal Started 6s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Started container
Warning BackOff 6s (x8 over 1m) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
What am I missing?
UPDATE 1
kubectl describe pod historysvc-deployment-558fc5649f-jgjvq
Name: historysvc-deployment-558fc5649f-jgjvq
Namespace: default
Node: aks-nodepool1-81522366-0/10.240.0.4
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
Labels: app=historysvc
pod-template-hash=1149712059
Annotations: <none>
Status: Running
IP: 10.244.0.12
Controlled By: ReplicaSet/historysvc-deployment-558fc5649f
Containers:
historysvc:
Container ID: docker://ccf83bce216276450ed79d67fb4f8a66daa54cd424461762478ec62f7e592e30
Image: vncont.azurecr.io/historysvc:v1
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 25 Jul 2018 09:32:34 +0200
Finished: Wed, 25 Jul 2018 09:32:35 +0200
Ready: False
Restart Count: 277
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-mt8mm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mt8mm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m (x6238 over 23h) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
UPDATE 2
When I run it locally with:
docker run <image>
it ends instantly (ignores the read line) and completes, which seems to be the problem.
I have to run
docker run -it <image>
with -it added for it to wait on the read line.
How does Kubernetes run the Docker image? Where can I set that?
This can be done by attaching arguments to run with your deployment.
In your case the Kubernetes manifest file (.yaml) should look like this:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: historysvc-deployment
spec:
  selector:
    matchLabels:
      app: historysvc
  replicas: 2
  template:
    metadata:
      labels:
        app: historysvc
    spec:
      containers:
        - name: historysvc
          image: vncont.azurecr.io/historysvc:v1
          ports:
            - containerPort: 80
          args: ["-it"]
      imagePullSecrets:
        - name: acr-auth
You can find this explained in the k8s docs: inject-data-application/define-command-argument-container
When you create a Pod, you can define a command and arguments for the containers that run in the Pod. To define a command, include the command field in the configuration file. To define arguments for the command, include the args field in the configuration file. The command and arguments that you define cannot be changed after the Pod is created.
The command and arguments that you define in the configuration file override the default command and arguments provided by the container image. If you define args, but do not define a command, the default command is used with your new arguments.