Kubernetes weave-net status shows - CrashLoopBackOff - docker

Environment (on-premises):
K8s master server: standalone physical server.
OS: CentOS Linux release 8.2.2004 (Core)
K8s client version: v1.18.6
K8s server version: v1.18.6
K8s worker node OS: CentOS Linux release 8.2.2004 (Core), standalone physical server.
I installed Kubernetes following the instructions at the link below:
https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
But the weave-net pods show CrashLoopBackOff:
[root@K8S-Master ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default test-pod 1/1 Running 0 156m 10.88.0.83 k8s-worker-1 <none> <none>
kube-system coredns-66bff467f8-99dww 1/1 Running 1 2d23h 10.88.0.5 K8S-Master <none> <none>
kube-system coredns-66bff467f8-ghk5g 1/1 Running 2 2d23h 10.88.0.6 K8S-Master <none> <none>
kube-system etcd-K8S-Master 1/1 Running 1 2d23h 100.101.102.103 K8S-Master <none> <none>
kube-system kube-apiserver-K8S-Master 1/1 Running 1 2d23h 100.101.102.103 K8S-Master <none> <none>
kube-system kube-controller-manager-K8S-Master 1/1 Running 1 2d23h 100.101.102.103 K8S-Master <none> <none>
kube-system kube-proxy-btgqb 1/1 Running 1 2d23h 100.101.102.103 K8S-Master <none> <none>
kube-system kube-proxy-mqg85 1/1 Running 1 2d23h 100.101.102.104 k8s-worker-1 <none> <none>
kube-system kube-scheduler-K8S-Master 1/1 Running 2 2d23h 100.101.102.103 K8S-Master <none> <none>
kube-system weave-net-2nxpk 1/2 CrashLoopBackOff 848 2d23h 100.101.102.104 k8s-worker-1 <none> <none>
kube-system weave-net-n8wv9 1/2 CrashLoopBackOff 846 2d23h 100.101.102.103 K8S-Master <none> <none>
[root@K8S-Master ~]# kubectl logs weave-net-2nxpk -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
[root@K8S-Master ~]# kubectl logs weave-net-n8wv9 -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
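The sets still held by the kernel can be inspected on the affected node, for example:
[root@K8S-Master ~]# ipset list -t          # -t prints only set headers, including reference counts
[root@K8S-Master ~]# iptables-save | grep -i weave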
[root@K8S-Master ~]# kubectl describe pod/weave-net-n8wv9 -n kube-system
Name: weave-net-n8wv9
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: K8S-Master/100.101.102.103
Start Time: Mon, 03 Aug 2020 10:56:12 +0530
Labels: controller-revision-hash=6768fc7ccf
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 100.101.102.103
IPs:
IP: 100.101.102.103
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://efeb277639ac8262c8864f2d598606e19caadbb65cdda4645d67589eab13d109
Image: docker.io/weaveworks/weave-kube:2.6.5
Image ID: docker-pullable://weaveworks/weave-kube@sha256:703a045a58377cb04bc85d0f5a7c93356d5490282accd7e5b5d7a99fe2ef09e2
Port: <none>
Host Port: <none>
Command:
/home/weave/launch.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 06 Aug 2020 20:18:21 +0530
Finished: Thu, 06 Aug 2020 20:18:21 +0530
Ready: False
Restart Count: 971
Requests:
cpu: 10m
Readiness: http-get http://127.0.0.1:6784/status delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-p9ltl (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://33f9fed68c452187490e261830a283c1ec9361aba01b86b60598dcc871ca1b11
Image: docker.io/weaveworks/weave-npc:2.6.5
Image ID: docker-pullable://weaveworks/weave-npc@sha256:0f6166e000faa500ccc0df53caae17edd3110590b7b159007a5ea727cdfb1cef
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 06 Aug 2020 17:13:35 +0530
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 06 Aug 2020 11:38:19 +0530
Finished: Thu, 06 Aug 2020 17:08:40 +0530
Ready: True
Restart Count: 4
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-p9ltl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
weave-net-token-p9ltl:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-p9ltl
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
:NoExecute
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/network-unavailable:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m3s (x863 over 3h7m) kubelet, K8S-Master Back-off restarting failed container
[root@K8S-Master ~]# journalctl -u kubelet
Aug 06 20:36:56 K8S-Master kubelet[2647]: I0806 20:36:56.549156 2647 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f873d>
Aug 06 20:36:56 K8S-Master kubelet[2647]: E0806 20:36:56.549515 2647 pod_workers.go:191] Error syncing pod e7511fe6-c60f-4833-bfb6-d59d6e8720e3 ("wea>
Is some configuration wrong in my K8s installation? Could someone provide a workaround to solve this problem?

I have since re-installed K8s with the Calico network plugin, and now all pods are running as expected.
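For anyone hitting the same ipset error: CentOS 8 switched iptables to the nftables backend, and sets or rules created through one backend cannot always be removed through the other, which is what Weave's launch script appears to trip over here. A commonly suggested cleanup, sketched below and untested here, is to remove Weave's persisted state on every node (after deleting the weave-net DaemonSet) and reboot; the weave helper script and its reset subcommand are documented by Weaveworks:
$ curl -L git.io/weave -o /usr/local/bin/weave && chmod +x /usr/local/bin/weave
$ weave reset            # removes the weave bridge and persisted network state
$ rm -rf /var/lib/weave  # the weavedb hostPath used by the DaemonSet
$ reboot                 # releases any ipsets still held by the kernel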

Related

Unable to bring up Jenkins using Helm

I'm following the doc on the Jenkins page. I'm running a 2-node K8s cluster (1 master, 1 worker) with the service type set to NodePort; for some reason the init container crashes and never comes up.
kubectl describe pod jenkins-0 -n jenkins
Name: jenkins-0
Namespace: jenkins
Priority: 0
Node: vlab048009.dom047600.lab/10.204.110.35
Start Time: Wed, 09 Dec 2020 23:19:59 +0530
Labels: app.kubernetes.io/component=jenkins-controller
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=jenkins
controller-revision-hash=jenkins-c5795f65f
statefulset.kubernetes.io/pod-name=jenkins-0
Annotations: checksum/config: 2a4c2b3ea5dea271cb7c0b8e8582b682814d39f8e933e0348725b0b9a7dbf258
Status: Pending
IP: 10.244.1.28
IPs:
IP: 10.244.1.28
Controlled By: StatefulSet/jenkins
Init Containers:
init:
Container ID: docker://95e3298740bcaed3c2adf832f41d346e563c92add728080cfdcfcac375e0254d
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins@sha256:1433deaac433ce20c534d8b87fcd0af3f25260f375f4ee6bdb41d70e1769d9ce
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 09 Dec 2020 23:41:28 +0530
Finished: Wed, 09 Dec 2020 23:41:29 +0530
Ready: False
Restart Count: 9
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment: <none>
Mounts:
/usr/share/jenkins/ref/plugins from plugins (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=3
Startup: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=12
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
JAVA_OPTS: -Dcasc.reload.token=$(POD_NAME)
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
Mounts:
/run/secrets/chart-admin-password from admin-secret (ro,path="jenkins-admin-password")
/run/secrets/chart-admin-username from admin-secret (ro,path="jenkins-admin-user")
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
config-reload:
Container ID:
Image: kiwigrid/k8s-sidecar:0.1.275
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
LABEL: jenkins-jenkins-config
FOLDER: /var/jenkins_home/casc_configs
NAMESPACE: jenkins
REQ_URL: http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)
REQ_METHOD: POST
REQ_RETRY_CONNECT: 10
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-config-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
admin-secret:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins
Optional: false
jenkins-token-ppfw7:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-ppfw7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
Normal Pulled 22m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
Normal Created 21m (x4 over 22m) kubelet Created container init
Normal Started 21m (x4 over 22m) kubelet Started container init
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
Normal Pulling 20m (x5 over 22m) kubelet Pulling image "jenkins/jenkins:lts"
Warning BackOff 2m1s (x95 over 21m) kubelet Back-off restarting failed container
kubectl logs -f jenkins-0 -c init -n jenkins
Error from server: Get "https://10.204.110.35:10250/containerLogs/jenkins/jenkins-0/init?follow=true": dial tcp 10.204.110.35:10250: connect: no route to host
kubectl get events -n jenkins
LAST SEEN TYPE REASON OBJECT MESSAGE
23m Normal Scheduled pod/jenkins-0 Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
21m Normal Pulling pod/jenkins-0 Pulling image "jenkins/jenkins:lts"
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
22m Normal Created pod/jenkins-0 Created container init
22m Normal Started pod/jenkins-0 Started container init
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
3m30s Warning BackOff pod/jenkins-0 Back-off restarting failed container
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
22m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
23m Normal SuccessfulCreate statefulset/jenkins create Pod jenkins-0 in StatefulSet jenkins successful
Every 2.0s: kubectl get all -n jenkins Wed Dec 9 23:48:31 2020
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 0/2 Init:CrashLoopBackOff 10 28m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins NodePort 10.103.209.122 <none> 8080:32323/TCP 28m
service/jenkins-agent ClusterIP 10.103.195.120 <none> 50000/TCP 28m
NAME READY AGE
statefulset.apps/jenkins 0/1 28m
I am using Helm 3 to deploy Jenkins, with the changes pretty much done as per the doc.
I am not sure how to debug this init container crash; any leads or a solution would be appreciated. Thanks.
First, make sure that you have executed:
$ helm repo update
Also execute:
$ kubectl logs <pod-name> -c <init-container-name>
to inspect the init container. Then you will be able to debug this setup properly.
This might be a connection issue to the Jenkins update site. You can build an image that contains the required plugins and disable the plugin download. Take a look: jenkins-kubernetes.
See more: jenkins-helm-issues - in that case the problem lay in plugin compatibility.
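Note also that the kubectl logs attempt above failed with dial tcp 10.204.110.35:10250: connect: no route to host. The API server fetches container logs from the kubelet on port 10250, so that port must be reachable on every node before any logs command can work. A sketch of opening it, assuming the nodes run firewalld (adjust for whatever firewall is actually in place):
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --reload
After that, kubectl logs jenkins-0 -c init -n jenkins should return the init container's actual error.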

Unable to run Argo workflow due to an opaque error

I want to trigger a manual workflow in Argo. I am using OpenShift and ArgoCD, and I have scheduled workflows that run successfully in Argo, but triggering a manual run of one workflow fails.
The concerned workflow is:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: "obslytics-data-exporter-manual-workflow-"
  labels:
    owner: "obslytics-remote-reader"
    app: "obslytics-data-exporter"
    pipeline: "obslytics-data-exporter"
spec:
  arguments:
    parameters:
      - name: start_timestamp
        value: "2020-11-18T20:00:00Z"
  entrypoint: manual-trigger
  templates:
    - name: manual-trigger
      steps:
        - - name: trigger
            templateRef:
              name: "obslytics-data-exporter-workflow-triggers"
              template: trigger-workflow
  volumes:
    - name: "obslytics-data-exporter-workflow-secrets"
      secret:
        secretName: "obslytics-data-exporter-workflow-secrets"
When I run the command:
argo submit trigger.local.yaml
The build pod completes, but the rest of the pods fail:
➜ dh-workflow-obslytics git:(master) ✗ oc get pods
NAME READY STATUS RESTARTS AGE
argo-ui-7fcf5ff95-9k8cc 1/1 Running 0 3d
gateway-controller-76bb888f7b-lq84r 1/1 Running 0 3d
obslytics-data-exporter-1-build 0/1 Completed 0 3d
obslytics-data-exporter-calendar-gateway-fbbb8d7-zhdnf 2/2 Running 1 3d
obslytics-data-exporter-manual-workflow-m7jdg-1074461258 0/2 Error 0 4m
obslytics-data-exporter-manual-workflow-m7jdg-1477271209 0/2 Error 0 4m
obslytics-data-exporter-manual-workflow-m7jdg-1544087495 0/2 Error 0 4m
obslytics-data-exporter-manual-workflow-m7jdg-1979266120 0/2 Completed 0 4m
obslytics-data-exporter-sensor-6594954795-xw8fk 1/1 Running 0 3d
opendatahub-operator-8994ddcf8-v8wxm 1/1 Running 0 3d
sensor-controller-58bdc7c4f4-9h4jw 1/1 Running 0 3d
workflow-controller-759649b79b-s69l7 1/1 Running 0 3d
The pods whose names start with obslytics-data-exporter-manual-workflow are the ones failing. When I attempt to debug by describing the pods:
➜ dh-workflow-obslytics git:(master) ✗ oc describe pods/obslytics-data-exporter-manual-workflow-4hzqz-3278280317
Name: obslytics-data-exporter-manual-workflow-4hzqz-3278280317
Namespace: dh-dev-argo
Priority: 0
PriorityClassName: <none>
Node: avsrivas-dev-ocp-3.11/10.0.111.224
Start Time: Tue, 24 Nov 2020 07:27:57 -0500
Labels: workflows.argoproj.io/completed=true
workflows.argoproj.io/workflow=obslytics-data-exporter-manual-workflow-4hzqz
Annotations: openshift.io/scc=restricted
workflows.argoproj.io/node-message=timeout after 0s
workflows.argoproj.io/node-name=obslytics-data-exporter-manual-workflow-4hzqz[0].trigger[1].run[0].metric-split(0:cluster_version)[0].process-metric(0)
workflows.argoproj.io/template={"name":"run-obslytics","arguments":{},"inputs":{"parameters":[{"name":"metric","value":"cluster_version"},{"name":"start_timestamp","value":"2020-11-18T20:00:00Z"},{"na...
Status: Failed
IP: 10.128.0.69
Controlled By: Workflow/obslytics-data-exporter-manual-workflow-4hzqz
Init Containers:
init:
Container ID: docker://25b95c684ef66b13520ba9deeba353082142f3bb39bafe443ee508074c58047e
Image: argoproj/argoexec:v2.4.2
Image ID: docker-pullable://docker.io/argoproj/argoexec@sha256:4e393daa6ed985cf680bcf0ecf04f7b0758940f0789505428331fcfe99cce06b
Port: <none>
Host Port: <none>
Command:
argoexec
init
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 24 Nov 2020 07:27:59 -0500
Finished: Tue, 24 Nov 2020 07:27:59 -0500
Ready: True
Restart Count: 0
Environment:
ARGO_POD_NAME: obslytics-data-exporter-manual-workflow-4hzqz-3278280317 (v1:metadata.name)
ARGO_CONTAINER_RUNTIME_EXECUTOR: k8sapi
Mounts:
/argo/podmetadata from podmetadata (rw)
/argo/staging from argo-staging (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qpggm (ro)
Containers:
wait:
Container ID: docker://a94e7f1bc1cfec4c8b549120193b697c91760bb8f3af414babef1d6f7ccee831
Image: argoproj/argoexec:v2.4.2
Image ID: docker-pullable://docker.io/argoproj/argoexec@sha256:4e393daa6ed985cf680bcf0ecf04f7b0758940f0789505428331fcfe99cce06b
Port: <none>
Host Port: <none>
Command:
argoexec
wait
State: Terminated
Reason: Completed
Message: timeout after 0s
Exit Code: 0
Started: Tue, 24 Nov 2020 07:28:00 -0500
Finished: Tue, 24 Nov 2020 07:28:01 -0500
Ready: False
Restart Count: 0
Environment:
ARGO_POD_NAME: obslytics-data-exporter-manual-workflow-4hzqz-3278280317 (v1:metadata.name)
ARGO_CONTAINER_RUNTIME_EXECUTOR: k8sapi
Mounts:
/argo/podmetadata from podmetadata (rw)
/mainctrfs/argo/staging from argo-staging (rw)
/mainctrfs/etc/obslytics-data-exporter from obslytics-data-exporter-workflow-secrets (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qpggm (ro)
main:
Container ID: docker://<some_id>
Image: docker-registry.default.svc:5000/<some_id>
Image ID: docker-pullable://docker-registry.default.svc:5000/<some_id>
Port: <none>
Host Port: <none>
Command:
/bin/sh
-e
Args:
/argo/staging/script
State: Terminated
Reason: Error
Exit Code: 126
Started: Tue, 24 Nov 2020 07:28:01 -0500
Finished: Tue, 24 Nov 2020 07:28:01 -0500
Ready: False
Restart Count: 0
Limits:
memory: 1Gi
Requests:
memory: 1Gi
Environment: <none>
Mounts:
/argo/staging from argo-staging (rw)
/etc/obslytics-data-exporter from obslytics-data-exporter-workflow-secrets (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qpggm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
podmetadata:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.annotations -> annotations
obslytics-data-exporter-workflow-secrets:
Type: Secret (a volume populated by a Secret)
SecretName: obslytics-data-exporter-workflow-secrets
Optional: false
argo-staging:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-qpggm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qpggm
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned dh-dev-argo/obslytics-data-exporter-manual-workflow-4hzqz-3278280317 to avsrivas-dev-ocp-3.11
Normal Pulled 27m kubelet, avsrivas-dev-ocp-3.11 Container image "argoproj/argoexec:v2.4.2" already present on machine
Normal Created 27m kubelet, avsrivas-dev-ocp-3.11 Created container
Normal Started 27m kubelet, avsrivas-dev-ocp-3.11 Started container
Normal Pulled 27m kubelet, avsrivas-dev-ocp-3.11 Container image "argoproj/argoexec:v2.4.2" already present on machine
Normal Created 27m kubelet, avsrivas-dev-ocp-3.11 Created container
Normal Started 27m kubelet, avsrivas-dev-ocp-3.11 Started container
Normal Pulling 27m kubelet, avsrivas-dev-ocp-3.11 pulling image "docker-registry.default.svc:5000/dh-dev-argo/obslytics-data-exporter:latest"
Normal Pulled 27m kubelet, avsrivas-dev-ocp-3.11 Successfully pulled image "docker-registry.default.svc:5000/dh-dev-argo/obslytics-data-exporter:latest"
Normal Created 27m kubelet, avsrivas-dev-ocp-3.11 Created container
Normal Started 27m kubelet, avsrivas-dev-ocp-3.11 Started container
The only thing I learn from the description above is that the pods failed with an error; I cannot see the actual error anywhere, so I am unable to debug this issue.
When I attempt to read the Argo watch logs:
Name: obslytics-data-exporter-manual-workflow-8wzcc
Namespace: dh-dev-argo
ServiceAccount: default
Status: Running
Created: Tue Nov 24 08:01:10 -0500 (8 minutes ago)
Started: Tue Nov 24 08:01:10 -0500 (8 minutes ago)
Duration: 8 minutes 10 seconds
Progress:
Parameters:
start_timestamp: 2020-11-18T20:00:00Z
STEP TEMPLATE PODNAME DURATION MESSAGE
● obslytics-data-exporter-manual-workflow-8wzcc manual-trigger
└───● trigger obslytics-data-exporter-workflow-triggers/trigger-workflow
├───✔ get-labels(0) obslytics-data-exporter-workflow-template/get-labels obslytics-data-exporter-manual-workflow-8wzcc-2604296472 6s
└───● run obslytics-data-exporter-workflow-template/init
└───● metric-split(0:cluster_version) metric-worker
└───● process-metric run-obslytics
├─✖ process-metric(0) run-obslytics obslytics-data-exporter-manual-workflow-8wzcc-4222496183 6s failed with exit code 126
└─◷ process-metric(1) run-obslytics obslytics-data-exporter-manual-workflow-8wzcc-531670266 7m PodInitializing
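Exit code 126 conventionally means "command found but not executable", e.g. a permission or interpreter problem with /argo/staging/script, which would fit a cluster running pods under OpenShift's restricted SCC. A hedged next step is to read the failing step's own output rather than the pod description; the pod name below is taken from the watch output above:
$ oc logs obslytics-data-exporter-manual-workflow-8wzcc-4222496183 -c main
The node-message annotation "timeout after 0s" also suggests checking whether the referenced template sets activeDeadlineSeconds to 0.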

Want to upgrade Elasticsearch to version 7.7 using the official Elasticsearch Docker image instead of a custom Docker image

I am trying to upgrade the pods to Elasticsearch 7.7 but am unable to do so.
Below is my values.yaml. I am referring to https://www.docker.elastic.co/r/elasticsearch/elasticsearch-oss:7.7.1 for the official Docker image.
cluster:
  name: elastic-x-pack
replicaCount:
  client: 2
  data: 2
  master: 3
minimum_master_nodes: 2
image:
  registry: docker.elastic.co
  name: elasticsearch/elasticsearch-oss
  tag: 7.7.1
  pullPolicy: Always
service:
  type: NodePort
  http:
    externalPort: 30000
    internalPort: 9200
  tcp:
    externalPort: 30112
    internalPort: 9300
opts: -Xms256m -Xmx256m
resources: {}
global:
  elasticsearch:
    storage:
      data:
        class: standard
        size: 3Gi
      snapshot:
        class: standard
        size: 5Gi
        accessModes: [ ReadWriteMany ]
        name: data-snapshot
  cluster:
    features:
      DistributedTracing: test
  ignite:
    registry: test
But the pods are not running; they are in the CrashLoopBackOff state.
Below is the description of the pod:
Name: elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb
Namespace: default
Priority: 0
Node: ip-172-31-38-123.us-west-2.compute.internal/172.31.38.123
Start Time: Fri, 26 Jun 2020 09:31:23 +0000
Labels: app=elasticsearch-cluster
component=elasticsearch
pod-template-hash=685d698
release=elastic-cluster
role=client
Annotations: <none>
Status: Running
IP: 10.233.68.58
Controlled By: ReplicaSet/elastic-cluster-elasticsearch-cluster-client-685d698
Init Containers:
init-sysctl:
Container ID: docker://d83c3be3f4d7ac1362599d115813d6cd1b1356959a5a2784c1f90f3ed74daa69
Image: busybox:1.27.2
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 26 Jun 2020 09:31:24 +0000
Finished: Fri, 26 Jun 2020 09:31:24 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d7b79 (ro)
Containers:
elasticsearch-cluster:
Container ID: docker://74822d62d876b798c1518c0e42da071d661b2ccdbeb1fe40487044a9cc07e6f4
Image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch-oss@sha256:04f0a377e55fcc41f3467e8a222357a7a5ef0b1e3ec026b6d63a59465870bd8e
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Fri, 26 Jun 2020 09:32:26 +0000
Finished: Fri, 26 Jun 2020 09:32:33 +0000
Ready: False
Restart Count: 3
Liveness: tcp-socket :transport delay=300s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/_cluster/health delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
NAMESPACE: default (v1:metadata.namespace)
NODE_NAME: elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb (v1:metadata.name)
CLUSTER_NAME: elastic-x-pack
ES_JAVA_OPTS: -Xms256m -Xmx256m
NODE_DATA: false
HTTP_ENABLE: true
NETWORK_HOST: _site_,_lo_
NODE_MASTER: false
Mounts:
/data from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d7b79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-d7b79:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-d7b79
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 112s default-scheduler Successfully assigned default/elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb to ip-172-31-38-123.us-west-2.compute.internal
Normal Pulled 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Container image "busybox:1.27.2" already present on machine
Normal Created 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Created container init-sysctl
Normal Started 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Started container init-sysctl
Normal Pulled 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1" already present on machine
Normal Created 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Created container elasticsearch-cluster
Normal Started 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Started container elasticsearch-cluster
Warning BackOff 5s (x9 over 94s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Back-off restarting failed container
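Elasticsearch uses exit code 78 (EX_CONFIG) when it fails its bootstrap checks or cannot apply its configuration on startup. Note also that the chart injects environment such as HTTP_ENABLE and NETWORK_HOST from the pre-7.x generation of charts; http.enabled was removed in 7.x, and discovery.zen.minimum_master_nodes is ignored by 7.x clusters, so an old chart pointed at a 7.7.1 image is a likely culprit. The concrete startup error should be visible in the log of the previous container attempt, for example:
$ kubectl logs elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb --previous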

nginx-ingress k8s on Google no IP address

I am practicing k8s by following the ingress chapter. I am using a Google (GKE) cluster. The specifications are as follows:
master: 1.11.7-gke.4
node: 1.11.7-gke.4
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-singh-default-pool-a69fa545-1sm3 Ready <none> 6h v1.11.7-gke.4 10.148.0.46 35.197.128.107 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
gke-singh-default-pool-a69fa545-819z Ready <none> 6h v1.11.7-gke.4 10.148.0.47 35.198.217.71 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
gke-singh-default-pool-a69fa545-djhz Ready <none> 6h v1.11.7-gke.4 10.148.0.45 35.197.159.75 Container-Optimized OS from Google 4.14.89+ docker://17.3.2
master endpoint: 35.186.148.93
DNS: singh.hbot.io (master IP)
To keep my question short, I post my source code in the snippets and link back here.
Files:
deployment.yaml
ingress.yaml
ingress-rules.yaml
Problem:
curl http://singh.hbot.io/webapp1 times out
Description
$ kubectl get deployment -n nginx-ingress
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-ingress 1 1 1 0 2h
nginx-ingress deployment is not available.
$ kubectl describe deployment -n nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
CreationTimestamp: Mon, 04 Mar 2019 15:09:42 +0700
Labels: app=nginx-ingress
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-ingress","namespace":"nginx-ingress"},"s...
Selector: app=nginx-ingress
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
Service Account: nginx-ingress
Containers:
nginx-ingress:
Image: nginx/nginx-ingress:edge
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
-nginx-configmaps=$(POD_NAMESPACE)/nginx-config
-default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
POD_NAME: (v1:metadata.name)
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-ingress-77fcd48f4d (1/1 replicas created)
Events: <none>
pods:
$ kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
default webapp1-7d67d68676-k9hhl 1/1 Running 0 6h
default webapp2-64d4844b78-9kln5 1/1 Running 0 6h
default webapp3-5b8ff7484d-zvcsf 1/1 Running 0 6h
kube-system event-exporter-v0.2.3-85644fcdf-xxflh 2/2 Running 0 6h
kube-system fluentd-gcp-scaler-8b674f786-gvv98 1/1 Running 0 6h
kube-system fluentd-gcp-v3.2.0-srzc2 2/2 Running 0 6h
kube-system fluentd-gcp-v3.2.0-w2z2q 2/2 Running 0 6h
kube-system fluentd-gcp-v3.2.0-z7p9l 2/2 Running 0 6h
kube-system heapster-v1.6.0-beta.1-5685746c7b-kd4mn 3/3 Running 0 6h
kube-system kube-dns-6b98c9c9bf-6p8qr 4/4 Running 0 6h
kube-system kube-dns-6b98c9c9bf-pffpt 4/4 Running 0 6h
kube-system kube-dns-autoscaler-67c97c87fb-gbgrs 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-1sm3 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-819z 1/1 Running 0 6h
kube-system kube-proxy-gke-singh-default-pool-a69fa545-djhz 1/1 Running 0 6h
kube-system l7-default-backend-7ff48cffd7-trqvx 1/1 Running 0 6h
kube-system metrics-server-v0.2.1-fd596d746-bvdfk 2/2 Running 0 6h
kube-system tiller-deploy-57c574bfb8-xnmtj 1/1 Running 0 1h
nginx-ingress nginx-ingress-77fcd48f4d-rfwbk 0/1 CrashLoopBackOff 35 2h
describe pod
$ kubectl describe pods -n nginx-ingress
Name: nginx-ingress-77fcd48f4d-5rhtv
Namespace: nginx-ingress
Priority: 0
PriorityClassName: <none>
Node: gke-singh-default-pool-a69fa545-djhz/10.148.0.45
Start Time: Mon, 04 Mar 2019 17:55:00 +0700
Labels: app=nginx-ingress
pod-template-hash=3397804908
Annotations: <none>
Status: Running
IP: 10.48.2.10
Controlled By: ReplicaSet/nginx-ingress-77fcd48f4d
Containers:
nginx-ingress:
Container ID: docker://5d3ee9e2bf7a2060ff0a96fdd884a937b77978c137df232dbfd0d3e5de89fe0e
Image: nginx/nginx-ingress:edge
Image ID: docker-pullable://nginx/nginx-ingress@sha256:16c1c6dde0b904f031d3c173e0b04eb82fe9c4c85cb1e1f83a14d5b56a568250
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
-nginx-configmaps=$(POD_NAMESPACE)/nginx-config
-default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 04 Mar 2019 18:16:33 +0700
Finished: Mon, 04 Mar 2019 18:16:33 +0700
Ready: False
Restart Count: 9
Environment:
POD_NAMESPACE: nginx-ingress (v1:metadata.namespace)
POD_NAME: nginx-ingress-77fcd48f4d-5rhtv (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-zvcwt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-token-zvcwt:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-token-zvcwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-77fcd48f4d-5rhtv to gke-singh-default-pool-a69fa545-djhz
Normal Created 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Created container
Normal Started 25m (x4 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Started container
Normal Pulling 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz pulling image "nginx/nginx-ingress:edge"
Normal Pulled 24m (x5 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Successfully pulled image "nginx/nginx-ingress:edge"
Warning BackOff 62s (x112 over 26m) kubelet, gke-singh-default-pool-a69fa545-djhz Back-off restarting failed container
Fix: container terminated
I added the following command to ingress.yaml to keep the container running so it is not terminated by k8s:
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
The Ingress still has no IP address from GKE. Let me have a look at the details.
describe ingress:
$ kubectl describe ing
Name: webapp-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.48.0.8:8080)
Rules:
Host Path Backends
---- ---- --------
*
/webapp1 webapp1-svc:80 (<none>)
/webapp2 webapp2-svc:80 (<none>)
webapp3-svc:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"webapp-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"webapp1-svc","servicePort":80},"path":"/webapp1"},{"backend":{"serviceName":"webapp2-svc","servicePort":80},"path":"/webapp2"},{"backend":{"serviceName":"webapp3-svc","servicePort":80}}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
From this line I got the ultimate solution, thanks to Christian Roy.
Fix the ClusterIP service type
ClusterIP is the default value, so I had to edit my manifest to use NodePort, as follows:
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: webapp1
And that is it.
The answer is in your question: the describe output of your ingress shows the problem.
You ran kubectl describe ing, and the last part of that output was:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 7m45s (x59 over 4h20m) loadbalancer-controller error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp2-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"; service "default/webapp3-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
The important part is:
error while evaluating the ingress spec: service "default/webapp1-svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
Solution
Just change all your services to be of type NodePort and it will work.
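If you prefer not to edit each manifest, the same change can be applied in place with kubectl patch (service names taken from the question):
$ kubectl patch svc webapp1-svc -p '{"spec": {"type": "NodePort"}}'
$ kubectl patch svc webapp2-svc -p '{"spec": {"type": "NodePort"}}'
$ kubectl patch svc webapp3-svc -p '{"spec": {"type": "NodePort"}}'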
I also had to add a command so the container does not exit and get terminated:
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]

Traefik on an RPi Kubernetes node returns 404 page not found

I am trying to get my first hands-on experience with Kubernetes.
Kubernetes v1.9 has been set up on 5 Raspberry Pis assembled as a cluster.
OS: HypriotOS v1.4
Host / static IP configured / Raspberry Pi hardware version:
master: 192.168.1.230 / rpi v3
node01: 192.168.1.231 / rpi v3
node02: 192.168.1.232 / rpi v3
node03: 192.168.1.233 / rpi v2
node04: 192.168.1.234 / rpi v2
For the pod network I chose Weave Net. Traefik has been installed on node01 as a load balancer, to access my services from outside.
I SSH into the master and use these commands to install it (origin: https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/):
$ kubectl apply -f https://raw.githubusercontent.com/hypriot/rpi-traefik/master/traefik-k8s-example.yaml
$ kubectl label node node01 nginx-controller=traefik
All system pods are running.
$ kubectl get pods --all-namespaces
kube-system etcd-master 1/1 Running 5 22h
kube-system kube-apiserver-master 1/1 Running 40 13h
kube-system kube-controller-manager-master 1/1 Running 10 13h
kube-system kube-dns-7b6ff86f69-x58pj 3/3 Running 9 23h
kube-system kube-proxy-5bqwh 1/1 Running 2 15h
kube-system kube-proxy-kngp9 1/1 Running 2 16h
kube-system kube-proxy-n85xl 1/1 Running 5 23h
kube-system kube-proxy-ncg2k 1/1 Running 2 15h
kube-system kube-proxy-qbfcf 1/1 Running 2 21h
kube-system kube-scheduler-master 1/1 Running 5 22h
kube-system traefik-ingress-controller-9dc7454cc-7rhpf 1/1 Running 1 14h
kube-system weave-net-6mvc6 2/2 Running 31 15h
kube-system weave-net-8hff9 2/2 Running 31 15h
kube-system weave-net-9kwgr 2/2 Running 31 21h
kube-system weave-net-llgrk 2/2 Running 41 22h
kube-system weave-net-s2h62 2/2 Running 29 16h
The issue is that when I try to connect to node01 using the URL http://192.168.1.231/, I get a 404 page not found...
So I checked the logs and figured out that there is a problem with the default service account:
$ kubectl logs traefik-ingress-controller-9dc7454cc-7rhpf
ERROR: logging before flag.Parse: E1226 07:29:15.195193 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:15.422807 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot list secrets at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:15.915317 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:16.108385 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list ingresses.extensions at the cluster scope
Is it really a problem with the system:serviceaccount:kube-system:default account being used? What account should I use instead?
Thanks for helping.
Additional information:
$ docker -v
Docker version 17.03.0-ce, build 60ccb22
$ kubectl describe pods traefik-ingress-controller -n kube-system
Name: traefik-ingress-controller-9dc7454cc-7rhpf
Namespace: kube-system
Node: node01/192.168.1.231
Start Time: Mon, 25 Dec 2017 20:54:45 +0000
Labels: k8s-app=traefik-ingress-controller
pod-template-hash=587301077
Annotations: scheduler.alpha.kubernetes.io/tolerations=[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
Status: Running
IP: 192.168.1.231
Controlled By: ReplicaSet/traefik-ingress-controller-9dc7454cc
Containers:
traefik-ingress-controller:
Container ID: docker://9e28800da6937a48aa20b5ef6526846b321a516ad20ee24ea3d32876f6769531
Image: hypriot/rpi-traefik
Image ID: docker-pullable://hypriot/rpi-traefik@sha256:ecdfcd94571ec8c121c20a6ec616d68aeaad93150a0717260196f813e31737d9
Ports: 80/TCP, 8888/TCP
Args:
--web
--web.address=localhost:8888
--kubernetes
State: Running
Started: Mon, 25 Dec 2017 22:24:33 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:50 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 1
Limits:
cpu: 200m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4wzhl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4wzhl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4wzhl
Optional: false
QoS Class: Burstable
Node-Selectors: nginx-controller=traefik
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Name: traefik-ingress-controller-9dc7454cc-jszgz
Namespace: kube-system
Node: node01/
Start Time: Mon, 25 Dec 2017 18:28:21 +0000
Labels: k8s-app=traefik-ingress-controller
pod-template-hash=587301077
Annotations: scheduler.alpha.kubernetes.io/tolerations=[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
Status: Failed
Reason: MatchNodeSelector
Message: Pod Predicate MatchNodeSelector failed
IP:
Controlled By: ReplicaSet/traefik-ingress-controller-9dc7454cc
Containers:
traefik-ingress-controller:
Image: hypriot/rpi-traefik
Ports: 80/TCP, 8888/TCP
Args:
--web
--web.address=localhost:8888
--kubernetes
Limits:
cpu: 200m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4wzhl (ro)
Volumes:
default-token-4wzhl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4wzhl
Optional: false
QoS Class: Burstable
Node-Selectors: nginx-controller=traefik
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
$ kubectl describe pods weave-net-9kwgr -n kube-system
Name: weave-net-llgrk
Namespace: kube-system
Node: master/192.168.1.230
Start Time: Mon, 25 Dec 2017 13:33:40 +0000
Labels: controller-revision-hash=2209123374
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.1.230
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://7824b8b02f1a8f5a53d7f27f0c12b44f73a4b666a694b974142f974294bedd6c
Image: weaveworks/weave-kube:2.1.3
Image ID: docker-pullable://weaveworks/weave-kube@sha256:07a3d56b8592ea3e00ace6f2c3eb7e65f3cc4945188a9e2a884b8172e6a0007e
Port: <none>
Command:
/home/weave/launch.sh
State: Running
Started: Tue, 26 Dec 2017 00:13:58 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Dec 2017 00:08:38 +0000
Finished: Tue, 26 Dec 2017 00:08:50 +0000
Ready: True
Restart Count: 37
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://b199904c10ed34501748c25e13862113aeb32c7779b0797d72c95f9e9d868331
Image: weaveworks/weave-npc:2.1.3
Image ID: docker-pullable://weaveworks/weave-npc@sha256:f35eb8166d7dae3fa7bb4d9892ab6dc8ea5c969f73791be590a0a213767c0f07
Port: <none>
State: Running
Started: Mon, 25 Dec 2017 22:24:32 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:30 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 4
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType:
weave-net-token-mx5jk:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-mx5jk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
root@master:/home/pirate# kubectl describe pods weave-net-9kwgr -n kube-system
Name: weave-net-9kwgr
Namespace: kube-system
Node: node01/192.168.1.231
Start Time: Mon, 25 Dec 2017 14:50:37 +0000
Labels: controller-revision-hash=2209123374
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.1.231
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://92e31f645b4dcd41e4d8189a6f67fa70a395971e071d635dc4c4208b8d1daf63
Image: weaveworks/weave-kube:2.1.3
Image ID: docker-pullable://weaveworks/weave-kube@sha256:07a3d56b8592ea3e00ace6f2c3eb7e65f3cc4945188a9e2a884b8172e6a0007e
Port: <none>
Command:
/home/weave/launch.sh
State: Running
Started: Tue, 26 Dec 2017 00:13:39 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Dec 2017 00:08:17 +0000
Finished: Tue, 26 Dec 2017 00:08:28 +0000
Ready: True
Restart Count: 29
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://ddd86bef74d3fd40134c8609551cc07658aa62a2ede7ce51aec394001049e96d
Image: weaveworks/weave-npc:2.1.3
Image ID: docker-pullable://weaveworks/weave-npc@sha256:f35eb8166d7dae3fa7bb4d9892ab6dc8ea5c969f73791be590a0a213767c0f07
Port: <none>
State: Running
Started: Mon, 25 Dec 2017 22:24:32 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:30 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 2
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType:
weave-net-token-mx5jk:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-mx5jk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
Your Traefik service account is missing proper RBAC privileges. By default, no application may access any Kubernetes API.
You have to make sure that the necessary rights are granted. Please check our Kubernetes guide for details.
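For Traefik 1.x the missing rights are get/list/watch on services, endpoints, secrets and ingresses. A minimal sketch, adapted from the Traefik 1.x Kubernetes guide and bound to the kube-system:default service account that the error messages name (a dedicated service account would be cleaner):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1   # v1beta1 matches a v1.9 cluster
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: default
    namespace: kube-system
Apply it with kubectl apply -f and delete the Traefik pod so it restarts with the new permissions.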
