Pods not running with an error message of ImagePullBackOff - docker

I initialized the Kubeflow pods using the following command:
juju deploy kubeflow
The following two pods didn't run and gave an error message stating ImagePullBackOff:
kfp-viz
kfp-profile-controller
kubectl describe output for kfp-viz:
Name: kfp-viz-65bc89cd9b-dnng9
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:22:25 +0000
Labels: app.kubernetes.io/name=kfp-viz
pod-template-hash=65bc89cd9b
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
charm.juju.is/modified-version: 0
cni.projectcalico.org/podIP: 10.1.190.114/32
cni.projectcalico.org/podIPs: 10.1.190.114/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
unit.juju.is/id: kfp-viz/0
Status: Pending
IP: 10.1.190.114
IPs:
IP: 10.1.190.114
Controlled By: ReplicaSet/kfp-viz-65bc89cd9b
Init Containers:
juju-pod-init:
Container ID: containerd://ffab869ab2beeab6a6bfde53be1175da58044c35da4d0bc2b66db231c585b142
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
initCmd=$($JUJU_TOOLS_DIR/jujud help commands | grep caas-unit-init)
if test -n "$initCmd"; then
$JUJU_TOOLS_DIR/jujud caas-unit-init --debug --wait;
else
exit 0
fi
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 May 2022 05:22:35 +0000
Finished: Mon, 30 May 2022 05:24:10 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82xc9 (ro)
Containers:
ml-pipeline-visualizationserver:
Container ID:
Image: registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image#sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698
Image ID:
Port: 8888/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: exec [wget -q -S -O - http://localhost:8888/] delay=3s timeout=2s period=5s #success=1 #failure=3
Readiness: exec [wget -q -S -O - http://localhost:8888/] delay=3s timeout=2s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/usr/bin/juju-run from juju-data-dir (rw,path="tools/jujud")
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82xc9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
juju-data-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-82xc9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 10m (x338 over 35h) kubelet (combined from similar events): Failed to pull image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image#sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698": rpc error: code = FailedPrecondition desc = failed to pull and unpack image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image#sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698": failed commit on ref "layer-sha256:d8f1984ce468ddfcc0f2752e09c8bbb5ea8513d55d5dd5f911d2d3dd135e5a84": "layer-sha256:d8f1984ce468ddfcc0f2752e09c8bbb5ea8513d55d5dd5f911d2d3dd135e5a84" failed size validation: 19367144 != 32561845: failed precondition
Normal BackOff 10s (x4271 over 36h) kubelet Back-off pulling image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image#sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698"
kubectl describe output for kfp-profile-controller:
Name: kfp-profile-controller-operator-0
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:15:50 +0000
Labels: controller-revision-hash=kfp-profile-controller-operator-755985f4fc
operator.juju.is/name=kfp-profile-controller
operator.juju.is/target=application
statefulset.kubernetes.io/pod-name=kfp-profile-controller-operator-0
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
cni.projectcalico.org/podIP: 10.1.190.93/32
cni.projectcalico.org/podIPs: 10.1.190.93/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
juju.is/version: 2.9.29
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
Status: Running
IP: 10.1.190.93
IPs:
IP: 10.1.190.93
Controlled By: StatefulSet/kfp-profile-controller-operator
Containers:
juju-operator:
Container ID: containerd://a4cc8356c4ce7d8bbb9282ccee17c38937a7a3a3bfa13caa29eaeccb63a86d30
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
$JUJU_TOOLS_DIR/jujud caasoperator --application-name=kfp-profile-controller --debug
State: Running
Started: Mon, 30 May 2022 05:16:05 +0000
Ready: True
Restart Count: 0
Environment:
JUJU_APPLICATION: kfp-profile-controller
JUJU_OPERATOR_SERVICE_IP: 10.152.183.68
JUJU_OPERATOR_POD_IP: (v1:status.podIP)
JUJU_OPERATOR_NAMESPACE: kubeflow (v1:metadata.namespace)
Mounts:
/var/lib/juju/agents from charm (rw)
/var/lib/juju/agents/application-kfp-profile-controller/operator.yaml from kfp-profile-controller-operator-config (rw,path="operator.yaml")
/var/lib/juju/agents/application-kfp-profile-controller/template-agent.conf from kfp-profile-controller-operator-config (rw,path="template-agent.conf")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6x9qh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
charm:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: charm-kfp-profile-controller-operator-0
ReadOnly: false
kfp-profile-controller-operator-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kfp-profile-controller-operator-config
Optional: false
kube-api-access-6x9qh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: kfp-profile-controller-68f554c765-7vvmj
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:34:26 +0000
Labels: app.kubernetes.io/name=kfp-profile-controller
pod-template-hash=68f554c765
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
charm.juju.is/modified-version: 0
cni.projectcalico.org/podIP: 10.1.190.134/32
cni.projectcalico.org/podIPs: 10.1.190.134/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
unit.juju.is/id: kfp-profile-controller/0
Status: Pending
IP: 10.1.190.134
IPs:
IP: 10.1.190.134
Controlled By: ReplicaSet/kfp-profile-controller-68f554c765
Init Containers:
juju-pod-init:
Container ID: containerd://7a1f2d07bce6f58bd7e9899e0f26a2cd36abfacad55d0e6b43fbb8fcb93df289
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
initCmd=$($JUJU_TOOLS_DIR/jujud help commands | grep caas-unit-init)
if test -n "$initCmd"; then
$JUJU_TOOLS_DIR/jujud caas-unit-init --debug --wait;
else
exit 0
fi
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 May 2022 05:34:32 +0000
Finished: Mon, 30 May 2022 05:35:50 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqf7h (ro)
Containers:
kubeflow-pipelines-profile-controller:
Container ID:
Image: registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image#sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
Image ID:
Port: 80/TCP
Host Port: 0/TCP
Command:
python
Args:
/hooks/sync.py
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
CONTROLLER_PORT: 80
DISABLE_ISTIO_SIDECAR: false
KFP_DEFAULT_PIPELINE_ROOT:
KFP_VERSION: 1.7.0-rc.3
METADATA_GRPC_SERVICE_HOST: mlmd.kubeflow
METADATA_GRPC_SERVICE_PORT: 8080
MINIO_ACCESS_KEY: minio
MINIO_HOST: minio
MINIO_NAMESPACE: kubeflow
MINIO_PORT: 9000
MINIO_SECRET_KEY: SV25N7GCN6HAYV19M4GMHGX0YTZ840
Mounts:
/hooks from kubeflow-pipelines-profile-controller-code (rw)
/usr/bin/juju-run from juju-data-dir (rw,path="tools/jujud")
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqf7h (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
juju-data-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubeflow-pipelines-profile-controller-code:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kubeflow-pipelines-profile-controller-code
Optional: false
kube-api-access-lqf7h:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 28m (x4631 over 38h) kubelet Back-off pulling image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image#sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347"
Warning Failed 11m (x207 over 38h) kubelet Error: ErrImagePull
Warning Failed 5m51s (x497 over 37h) kubelet (combined from similar events): Failed to pull image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image#sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347": rpc error: code = FailedPrecondition desc = failed to pull and unpack image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image#sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347": failed commit on ref "layer-sha256:3bfc3875e0f70f1fb305c87b4bc4d886f1118ddfedfded03ef0cb1c394cb90f0": "layer-sha256:3bfc3875e0f70f1fb305c87b4bc4d886f1118ddfedfded03ef0cb1c394cb90f0" failed size validation: 20672632 != 53354193: failed precondition
Kubeflow version: 1.21
kfctl version: kfctl v1.0.1-0-gf3edb9b
Kubernetes platform: MicroK8s
Kubernetes version: Client GitVersion "v1.24.0", Server GitVersion "v1.21.12-3+6937f71915b56b"
OS: Linux 18.04

Apparently, these pods are not able to pull these images.
Do these systems connect to the internet? With or without a proxy?
Anyway, a quick resolution can be to simply make the images available on all these nodes. It is as simple as:
docker pull registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
You can find the other images in the Events section of the describe output you have posted.
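As an aside on the quick fix above: the container IDs in the describe output start with containerd://, so on this MicroK8s setup the kubelet pulls through containerd rather than Docker, and an image fetched with a plain docker pull would not be visible to it. A minimal sketch of pre-pulling straight into MicroK8s' containerd instead (this assumes the standard microk8s ctr wrapper is available on the node; note also that the digest separator in a pull reference is @, not #):
# Pull both charm images by digest into containerd on the affected node.
microk8s ctr images pull registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image@sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698
microk8s ctr images pull registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
The repeated "failed size validation" messages in the events (19367144 != 32561845) mean the layer download keeps getting truncated before it completes, which usually points at a proxy or an unstable connection between the node and registry.jujucharms.com; the manual pull above should surface the same problem if that is the cause.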

Related

Kubernetes pod status changes from Running to NotReady despite the pod logs displaying no errors and seemingly completing successfully

I created a k8s Job to use for schema migration in a Rails app. Below is the Job YAML, excluding the env vars:
---
- name: Deploy Migration Worker
k8s:
state: present
force: 'yes'
definition:
apiVersion: batch/v1
kind: Job
metadata:
name: schema-migration
namespace: "{{ k8s_namespace }}"
spec:
ttlSecondsAfterFinished: 60
template:
spec:
containers:
- command:
- /bin/sh
args:
- '-c'
- '{{ MIGRATIONS }}'
name: schema-migration-container
resources:
limits:
cpu: "{{ API_LIMIT_CPU }}"
memory: "{{ API_LIMIT_MEM }}"
requests:
cpu: "{{ API_REQUEST_CPU }}"
memory: "{{ API_REQUEST_MEM }}"
image: "redacted"
imagePullPolicy: IfNotPresent
restartPolicy: Never
imagePullSecrets:
- name: docker-pull-secret
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: docker-pull-secret
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
Below is the pod status:
NAME READY STATUS RESTARTS AGE
schema-migration-mnvvw 1/2 NotReady 0 137m
Below is the job status:
NAME COMPLETIONS DURATION AGE
schema-migration 0/1 133m 133m
Below is the pod description:
Name: schema-migration-mnvvw
Namespace: dev1
Priority: 0
Node: redacted
Start Time: Wed, 01 Feb 2023 15:16:35 -0400
Labels: controller-uid=redacted
job-name=schema-migration
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=schema-migration
service.istio.io/canonical-revision=latest
Annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: true
kubectl.kubernetes.io/default-container: main
kubectl.kubernetes.io/default-logs-container: main
kubernetes.io/psp: eks.privileged
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
Status: Running
IP: 10.131.217.49
IPs:
IP: 10.131.217.49
Controlled By: Job/schema-migration
Init Containers:
istio-init:
Container ID: redacted
Image: docker.io/istio/proxyv2:1.11.3
Image ID: docker-pullable://istio/proxyv2#sha256:28513eb3706315b26610a53e0d66b29b09a334e3164393b9a0591f34fe47a6fd
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
172.20.0.1/32
-b
*
-d
15090,15021,15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Feb 2023 15:16:36 -0400
Finished: Wed, 01 Feb 2023 15:16:36 -0400
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Environment:
DNS_AGENT:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vcj4 (ro)
Containers:
main:
Container ID: docker://a37824a15a748e7124a455a878f57b2ae22e08f8cddd0a2b1938b0414228b320
Image: redacted
Image ID: redacted
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
bundle exec rake db:migrate:status && bundle exec rake db:migrate && bundle exec rake seed:policy_types && bundle exec rake seed:permission_types && bundle exec rake seed:roles && bundle exec rake seed:default_role_permissions && bundle exec rake seed:partner_policy_defaults && bundle exec rake after_party:run
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Feb 2023 15:16:50 -0400
Finished: Wed, 01 Feb 2023 15:18:13 -0400
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 4G
Requests:
cpu: 250m
memory: 2G
Readiness: http-get http://:15020/app-health/main/readyz delay=30s timeout=1s period=5s #success=1 #failure=12
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vcj4 (ro)
/var/run/secrets/tokens from istio-token (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
kube-api-access-9vcj4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
I'm fairly new to Kubernetes; however, I've tried adding/removing a readinessProbe attribute as well as increasing CPU/memory resources, among other things, with no luck. There seem to be only a handful of similar issues (pod status showing NotReady, as opposed to node status), but none of them seems related to my problem. One thing of note, perhaps, is that this is created via an Ansible playbook, though I don't believe that comes into play as far as this issue goes.
I'm essentially hoping to get the job to a completed state. Thanks in advance.

Rails in Kubernetes not picking up environment variables provided by configmap

I have a simple .env file with content like this
APP_PORT=5000
I add the values of that file with Kustomize. When I apply my Rails app, it crashes because it cannot find the environment vars:
I also tried placing a puts ENV['APP_PORT'] in application.rb, but that is nil as well.
Rails Version & environment: 6.1.4.1 - development
! Unable to load application: KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
bundler: failed to load command: puma (/app/vendor/bundle/ruby/2.7.0/bin/puma)
KeyError: key not found: "APP_PORT"
Did you mean? "APP_HOME"
/app/config/environments/development.rb:2:in `fetch'
/app/config/environments/dev
When I change my image to image: nginx, the env vars are still not there:
env
KUBERNETES_SERVICE_PORT_HTTPS=443
ELASTICSEARCH_PORT_9200_TCP_PORT=9200
KUBERNETES_SERVICE_PORT=443
ELASTICSEARCH_PORT_9200_TCP_ADDR=10.103.1.6
ELASTICSEARCH_SERVICE_HOST=10.103.1.6
HOSTNAME=myapp-backend-api-56b44c7445-h9g5m
ELASTICSEARCH_PORT=tcp://10.103.1.6:9200
PWD=/
ELASTICSEARCH_PORT_9200_TCP=tcp://10.103.1.6:9200
PKG_RELEASE=1~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
ELASTICSEARCH_SERVICE_PORT_9200=9200
NJS_VERSION=0.6.2
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ELASTICSEARCH_SERVICE_PORT=9200
ELASTICSEARCH_PORT_9200_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.21.3
_=/usr/bin/env
This is my current state:
kustomization.yml
kind: Kustomization
configMapGenerator:
- name: backend-api-configmap
files:
- .env
bases:
- ../../base
patchesStrategicMerge:
- api-deployment.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- image: nginx
imagePullPolicy: Never # the image is assumed to exist locally. No attempt is made to pull the image.
envFrom:
- configMapRef:
name: backend-api-configma
This is the output of kubectl describe for the pod:
❯ k describe pod xxx-backend-api-56774c796d-s2zkd
Name: xxx-backend-api-56774c796d-s2zkd
Namespace: default
Annotations: <none>
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/xxx-backend-api-56774c796d
Containers:
xxx-backend-api:
Container ID: docker://5ee3112b0805271ebe4b32d7d8e5d1b267d8bf4e220f990c085638f7b975c41f
Image: xxx-backend-api:latest
Image ID: docker://sha256:55d96a68267d80f19e91aa0b4d1ffb11525e9ede054fcbb6e6ec74356c6a3c7d
Port: 5000/TCP
Ready: False
Restart Count: 3
Environment Variables from:
backend-api-configmap-99fbkbc4c9 ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w5czc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-w5czc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned default/xxx-backend-api-56774c796d-s2zkd to minikube
Normal Pulled 8s (x4 over 54s) kubelet Container image "xxx-backend-api:latest" already present on machine
Normal Created 8s (x4 over 54s) kubelet Created container xxx-backend-api
Normal Started 8s (x4 over 54s) kubelet Started container xxx-backend-api
Warning BackOff 4s (x4 over 48s) kubelet Back-off restarting failed container
And this is the output of kubectl describe for the ConfigMap:
❯ k describe configmaps backend-api-configmap-99fbkbc4c9
Name: backend-api-configmap-99fbkbc4c9
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
.env:
----
RAILS_MAX_THREADS=5
APPLICATION_URL=localhost:8000/backend
FRONTEND_URL=localhost:8000
APP_PORT=3000
BinaryData
====
Events: <none>
I got it:
Instead of using files: in my configMapGenerator, I had to use envs:, like so:
configMapGenerator:
- name: backend-api-configmap
envs:
- .env
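For context on why this fixes it (as I understand Kustomize's configMapGenerator behavior): with files: the whole file becomes a single data key named ".env", so envFrom exposes no APP_PORT variable at all, while with envs: every KEY=VALUE line becomes its own data key, which is exactly what envFrom/configMapRef injects. A quick way to check the generated ConfigMap (the overlay path below is just a placeholder for wherever your kustomization.yml lives):
# Render the overlay locally and inspect the generated ConfigMap's data keys:
# with `files:` there is a single key called ".env" holding the whole file,
# with `envs:` APP_PORT, RAILS_MAX_THREADS, etc. show up as separate keys.
$ kubectl kustomize ./overlays/dev
# Or check it after applying (the generated name carries a content-hash suffix):
$ kubectl describe configmap backend-api-configmap-99fbkbc4c9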

Unable to bring up Jenkins using Helm

I'm following the doc on the Jenkins page. I'm running a 2-node K8s cluster (1 master, 1 worker) and setting the service type to NodePort, but for some reason the init container crashes and never comes up.
kubectl describe pod jenkins-0 -n jenkins
Name: jenkins-0
Namespace: jenkins
Priority: 0
Node: vlab048009.dom047600.lab/10.204.110.35
Start Time: Wed, 09 Dec 2020 23:19:59 +0530
Labels: app.kubernetes.io/component=jenkins-controller
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=jenkins
controller-revision-hash=jenkins-c5795f65f
statefulset.kubernetes.io/pod-name=jenkins-0
Annotations: checksum/config: 2a4c2b3ea5dea271cb7c0b8e8582b682814d39f8e933e0348725b0b9a7dbf258
Status: Pending
IP: 10.244.1.28
IPs:
IP: 10.244.1.28
Controlled By: StatefulSet/jenkins
Init Containers:
init:
Container ID: docker://95e3298740bcaed3c2adf832f41d346e563c92add728080cfdcfcac375e0254d
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins#sha256:1433deaac433ce20c534d8b87fcd0af3f25260f375f4ee6bdb41d70e1769d9ce
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 09 Dec 2020 23:41:28 +0530
Finished: Wed, 09 Dec 2020 23:41:29 +0530
Ready: False
Restart Count: 9
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment: <none>
Mounts:
/usr/share/jenkins/ref/plugins from plugins (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=3
Startup: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=12
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
JAVA_OPTS: -Dcasc.reload.token=$(POD_NAME)
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
Mounts:
/run/secrets/chart-admin-password from admin-secret (ro,path="jenkins-admin-password")
/run/secrets/chart-admin-username from admin-secret (ro,path="jenkins-admin-user")
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
config-reload:
Container ID:
Image: kiwigrid/k8s-sidecar:0.1.275
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
LABEL: jenkins-jenkins-config
FOLDER: /var/jenkins_home/casc_configs
NAMESPACE: jenkins
REQ_URL: http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)
REQ_METHOD: POST
REQ_RETRY_CONNECT: 10
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-config-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
admin-secret:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins
Optional: false
jenkins-token-ppfw7:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-ppfw7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
Normal Pulled 22m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
Normal Created 21m (x4 over 22m) kubelet Created container init
Normal Started 21m (x4 over 22m) kubelet Started container init
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
Normal Pulling 20m (x5 over 22m) kubelet Pulling image "jenkins/jenkins:lts"
Warning BackOff 2m1s (x95 over 21m) kubelet Back-off restarting failed container
kubectl logs -f jenkins-0 -c init -n jenkins
Error from server: Get "https://10.204.110.35:10250/containerLogs/jenkins/jenkins-0/init?follow=true": dial tcp 10.204.110.35:10250: connect: no route to host
kubectl get events -n jenkins
LAST SEEN TYPE REASON OBJECT MESSAGE
23m Normal Scheduled pod/jenkins-0 Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
21m Normal Pulling pod/jenkins-0 Pulling image "jenkins/jenkins:lts"
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
22m Normal Created pod/jenkins-0 Created container init
22m Normal Started pod/jenkins-0 Started container init
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
3m30s Warning BackOff pod/jenkins-0 Back-off restarting failed container
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
22m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
23m Normal SuccessfulCreate statefulset/jenkins create Pod jenkins-0 in StatefulSet jenkins successful
Every 2.0s: kubectl get all -n jenkins Wed Dec 9 23:48:31 2020
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 0/2 Init:CrashLoopBackOff 10 28m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins NodePort 10.103.209.122 <none> 8080:32323/TCP 28m
service/jenkins-agent ClusterIP 10.103.195.120 <none> 50000/TCP 28m
NAME READY AGE
statefulset.apps/jenkins 0/1 28m
I'm using Helm 3 to deploy Jenkins, with pretty much all changes done as per the doc.
I'm not sure how to debug this issue with the init container crashing; any leads or a solution would be appreciated. Thanks.
First, make sure you have executed the command:
$ helm repo update
Also execute the command:
$ kubectl logs <pod-name> -c <init-container-name>
to inspect the init container. Then you will be able to properly debug this setup.
This might be a connection issue to the Jenkins update site. You can build an image which contains the required plugins and disable plugin download. Take a look: jenkins-kubernetes.
See more: jenkins-helm-issues - in this case the problem lies in plugin compatibility.
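If you go the pre-baked-plugins route, a rough sketch of what that can look like (the image name and plugin list here are only examples, roughly mirroring the chart's usual defaults; the exact Helm values for pointing the chart at a custom image and skipping plugin download depend on the chart version, so check helm show values jenkins/jenkins):
# Bake the plugins into the image so nothing has to be downloaded at pod start-up.
$ cat > Dockerfile <<'EOF'
FROM jenkins/jenkins:lts
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code
EOF
$ docker build -t my-registry.example.com/jenkins-preloaded:lts .
$ docker push my-registry.example.com/jenkins-preloaded:lts
# (Older jenkins/jenkins images ship install-plugins.sh instead of jenkins-plugin-cli.)
# Then point the chart's controller image at this one and clear its plugin-install list
# in your values file, so the init container no longer reaches out to the update site.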

Want to upgrade Elasticsearch version to 7.7 using the official Elasticsearch Docker image instead of a custom Docker image for the upgrade

I am trying to upgrade the pods to version 7.7 of Elasticsearch but am unable to do so.
Below is my values.yaml. I am referring to https://www.docker.elastic.co/r/elasticsearch/elasticsearch-oss:7.7.1 for the official Docker image.
cluster:
name: elastic-x-pack
replicaCount:
client: 2
data: 2
master: 3
minimum_master_nodes: 2
image:
registry: docker.elastic.co
name: elasticsearch/elasticsearch-oss
tag: 7.7.1
pullPolicy: Always
service:
type: NodePort
http:
externalPort: 30000
internalPort: 9200
tcp:
externalPort: 30112
internalPort: 9300
opts: -Xms256m -Xmx256m
resources: {}
global:
elasticsearch:
storage:
data:
class: standard
size: 3Gi
snapshot:
class: standard
size: 5Gi
accessModes: [ ReadWriteMany ]
name: data-snapshot
cluster:
features:
DistributedTracing: test
ignite:
registry: test
But the pods are not running and are in CrashLoopBackOff state.
Below is the description of the pod:
Name: elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb
Namespace: default
Priority: 0
Node: ip-172-31-38-123.us-west-2.compute.internal/172.31.38.123
Start Time: Fri, 26 Jun 2020 09:31:23 +0000
Labels: app=elasticsearch-cluster
component=elasticsearch
pod-template-hash=685d698
release=elastic-cluster
role=client
Annotations: <none>
Status: Running
IP: 10.233.68.58
Controlled By: ReplicaSet/elastic-cluster-elasticsearch-cluster-client-685d698
Init Containers:
init-sysctl:
Container ID: docker://d83c3be3f4d7ac1362599d115813d6cd1b1356959a5a2784c1f90f3ed74daa69
Image: busybox:1.27.2
Image ID: docker-pullable://busybox#sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 26 Jun 2020 09:31:24 +0000
Finished: Fri, 26 Jun 2020 09:31:24 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d7b79 (ro)
Containers:
elasticsearch-cluster:
Container ID: docker://74822d62d876b798c1518c0e42da071d661b2ccdbeb1fe40487044a9cc07e6f4
Image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
Image ID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch-oss#sha256:04f0a377e55fcc41f3467e8a222357a7a5ef0b1e3ec026b6d63a59465870bd8e
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 78
Started: Fri, 26 Jun 2020 09:32:26 +0000
Finished: Fri, 26 Jun 2020 09:32:33 +0000
Ready: False
Restart Count: 3
Liveness: tcp-socket :transport delay=300s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/_cluster/health delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
NAMESPACE: default (v1:metadata.namespace)
NODE_NAME: elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb (v1:metadata.name)
CLUSTER_NAME: elastic-x-pack
ES_JAVA_OPTS: -Xms256m -Xmx256m
NODE_DATA: false
HTTP_ENABLE: true
NETWORK_HOST: _site_,_lo_
NODE_MASTER: false
Mounts:
/data from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-d7b79 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-d7b79:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-d7b79
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 112s default-scheduler Successfully assigned default/elastic-cluster-elasticsearch-cluster-client-685d698-jf7bb to ip-172-31-38-123.us-west-2.compute.internal
Normal Pulled 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Container image "busybox:1.27.2" already present on machine
Normal Created 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Created container init-sysctl
Normal Started 111s kubelet, ip-172-31-38-123.us-west-2.compute.internal Started container init-sysctl
Normal Pulled 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1" already present on machine
Normal Created 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Created container elasticsearch-cluster
Normal Started 49s (x4 over 110s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Started container elasticsearch-cluster
Warning BackOff 5s (x9 over 94s) kubelet, ip-172-31-38-123.us-west-2.compute.internal Back-off restarting failed container

Traefik on an RPi Kubernetes node returns 404 page not found

I'm trying to get my first hands-on experience with Kubernetes.
Kubernetes v1.9 has been set up on 5 Raspberry Pis mounted as a cluster.
OS: Hypriot v1.4
host / static IP configured / Raspberry Pi hardware version:
master: 192.168.1.230 / rpi v3
node01: 192.168.1.231 / rpi v3
node02: 192.168.1.232 / rpi v3
node03: 192.168.1.233 / rpi v2
node04: 192.168.1.234 / rpi v2
For the pod network I chose Weave Net. Traefik has been installed on node01 as a load balancer to access my services from outside.
I SSHed into the master and used these commands to install it (origin: https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/):
$ kubectl apply -f https://raw.githubusercontent.com/hypriot/rpi-traefik/master/traefik-k8s-example.yaml
$ kubectl label node node01 nginx-controller=traefik
All system pods are running.
$ kubectl get pods --all-namespaces
kube-system etcd-master 1/1 Running 5 22h
kube-system kube-apiserver-master 1/1 Running 40 13h
kube-system kube-controller-manager-master 1/1 Running 10 13h
kube-system kube-dns-7b6ff86f69-x58pj 3/3 Running 9 23h
kube-system kube-proxy-5bqwh 1/1 Running 2 15h
kube-system kube-proxy-kngp9 1/1 Running 2 16h
kube-system kube-proxy-n85xl 1/1 Running 5 23h
kube-system kube-proxy-ncg2k 1/1 Running 2 15h
kube-system kube-proxy-qbfcf 1/1 Running 2 21h
kube-system kube-scheduler-master 1/1 Running 5 22h
kube-system traefik-ingress-controller-9dc7454cc-7rhpf 1/1 Running 1 14h
kube-system weave-net-6mvc6 2/2 Running 31 15h
kube-system weave-net-8hff9 2/2 Running 31 15h
kube-system weave-net-9kwgr 2/2 Running 31 21h
kube-system weave-net-llgrk 2/2 Running 41 22h
kube-system weave-net-s2h62 2/2 Running 29 16h
The issue is that when I try to connect to node01 using the URL http://192.168.1.231/, I get a 404 page not found...
So I checked the logs and figured out that there is a problem with the default account:
$ kubectl logs traefik-ingress-controller-9dc7454cc-7rhpf
ERROR: logging before flag.Parse: E1226 07:29:15.195193 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:default" cannot list endpoints at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:15.422807 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot list secrets at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:15.915317 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope
ERROR: logging before flag.Parse: E1226 07:29:16.108385 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list ingresses.extensions at the cluster scope
Is it really a problem with the account system:serviceaccount:kube-system:default being used? What account should I use instead?
Thanks for helping.
Additional information:
$ docker -v
Docker version 17.03.0-ce, build 60ccb22
$ kubectl describe pods traefik-ingress-controller -n kube-system
Name: traefik-ingress-controller-9dc7454cc-7rhpf
Namespace: kube-system
Node: node01/192.168.1.231
Start Time: Mon, 25 Dec 2017 20:54:45 +0000
Labels: k8s-app=traefik-ingress-controller
pod-template-hash=587301077
Annotations: scheduler.alpha.kubernetes.io/tolerations=[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
Status: Running
IP: 192.168.1.231
Controlled By: ReplicaSet/traefik-ingress-controller-9dc7454cc
Containers:
traefik-ingress-controller:
Container ID: docker://9e28800da6937a48aa20b5ef6526846b321a516ad20ee24ea3d32876f6769531
Image: hypriot/rpi-traefik
Image ID: docker-pullable://hypriot/rpi-traefik#sha256:ecdfcd94571ec8c121c20a6ec616d68aeaad93150a0717260196f813e31737d9
Ports: 80/TCP, 8888/TCP
Args:
--web
--web.address=localhost:8888
--kubernetes
State: Running
Started: Mon, 25 Dec 2017 22:24:33 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:50 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 1
Limits:
cpu: 200m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4wzhl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4wzhl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4wzhl
Optional: false
QoS Class: Burstable
Node-Selectors: nginx-controller=traefik
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Name: traefik-ingress-controller-9dc7454cc-jszgz
Namespace: kube-system
Node: node01/
Start Time: Mon, 25 Dec 2017 18:28:21 +0000
Labels: k8s-app=traefik-ingress-controller
pod-template-hash=587301077
Annotations: scheduler.alpha.kubernetes.io/tolerations=[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
Status: Failed
Reason: MatchNodeSelector
Message: Pod Predicate MatchNodeSelector failed
IP:
Controlled By: ReplicaSet/traefik-ingress-controller-9dc7454cc
Containers:
traefik-ingress-controller:
Image: hypriot/rpi-traefik
Ports: 80/TCP, 8888/TCP
Args:
--web
--web.address=localhost:8888
--kubernetes
Limits:
cpu: 200m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4wzhl (ro)
Volumes:
default-token-4wzhl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4wzhl
Optional: false
QoS Class: Burstable
Node-Selectors: nginx-controller=traefik
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
$ kubectl describe pods weave-net-9kwgr -n kube-system
Name: weave-net-llgrk
Namespace: kube-system
Node: master/192.168.1.230
Start Time: Mon, 25 Dec 2017 13:33:40 +0000
Labels: controller-revision-hash=2209123374
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.1.230
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://7824b8b02f1a8f5a53d7f27f0c12b44f73a4b666a694b974142f974294bedd6c
Image: weaveworks/weave-kube:2.1.3
Image ID: docker-pullable://weaveworks/weave-kube#sha256:07a3d56b8592ea3e00ace6f2c3eb7e65f3cc4945188a9e2a884b8172e6a0007e
Port: <none>
Command:
/home/weave/launch.sh
State: Running
Started: Tue, 26 Dec 2017 00:13:58 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Dec 2017 00:08:38 +0000
Finished: Tue, 26 Dec 2017 00:08:50 +0000
Ready: True
Restart Count: 37
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://b199904c10ed34501748c25e13862113aeb32c7779b0797d72c95f9e9d868331
Image: weaveworks/weave-npc:2.1.3
Image ID: docker-pullable://weaveworks/weave-npc#sha256:f35eb8166d7dae3fa7bb4d9892ab6dc8ea5c969f73791be590a0a213767c0f07
Port: <none>
State: Running
Started: Mon, 25 Dec 2017 22:24:32 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:30 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 4
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType:
weave-net-token-mx5jk:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-mx5jk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
root@master:/home/pirate# kubectl describe pods weave-net-9kwgr -n kube-system
Name: weave-net-9kwgr
Namespace: kube-system
Node: node01/192.168.1.231
Start Time: Mon, 25 Dec 2017 14:50:37 +0000
Labels: controller-revision-hash=2209123374
name=weave-net
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 192.168.1.231
Controlled By: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://92e31f645b4dcd41e4d8189a6f67fa70a395971e071d635dc4c4208b8d1daf63
Image: weaveworks/weave-kube:2.1.3
Image ID: docker-pullable://weaveworks/weave-kube#sha256:07a3d56b8592ea3e00ace6f2c3eb7e65f3cc4945188a9e2a884b8172e6a0007e
Port: <none>
Command:
/home/weave/launch.sh
State: Running
Started: Tue, 26 Dec 2017 00:13:39 +0000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 26 Dec 2017 00:08:17 +0000
Finished: Tue, 26 Dec 2017 00:08:28 +0000
Ready: True
Restart Count: 29
Requests:
cpu: 10m
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/host/etc from cni-conf (rw)
/host/home from cni-bin2 (rw)
/host/opt from cni-bin (rw)
/host/var/lib/dbus from dbus (rw)
/lib/modules from lib-modules (rw)
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
/weavedb from weavedb (rw)
weave-npc:
Container ID: docker://ddd86bef74d3fd40134c8609551cc07658aa62a2ede7ce51aec394001049e96d
Image: weaveworks/weave-npc:2.1.3
Image ID: docker-pullable://weaveworks/weave-npc#sha256:f35eb8166d7dae3fa7bb4d9892ab6dc8ea5c969f73791be590a0a213767c0f07
Port: <none>
State: Running
Started: Mon, 25 Dec 2017 22:24:32 +0000
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 25 Dec 2017 20:54:30 +0000
Finished: Mon, 25 Dec 2017 22:17:09 +0000
Ready: True
Restart Count: 2
Requests:
cpu: 10m
Environment:
HOSTNAME: (v1:spec.nodeName)
Mounts:
/run/xtables.lock from xtables-lock (rw)
/var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-mx5jk (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
weavedb:
Type: HostPath (bare host directory volume)
Path: /var/lib/weave
HostPathType:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
HostPathType:
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
HostPathType:
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
HostPathType:
dbus:
Type: HostPath (bare host directory volume)
Path: /var/lib/dbus
HostPathType:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType:
weave-net-token-mx5jk:
Type: Secret (a volume populated by a Secret)
SecretName: weave-net-token-mx5jk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
Your Traefik service account is missing the proper RBAC privileges. By default, no application may access the Kubernetes API.
You have to make sure that the necessary rights are granted. Please check our Kubernetes guide for details.
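As an illustration of what granting those rights can look like for this setup: the log lines show the controller running as system:serviceaccount:kube-system:default and being denied list access to services, endpoints, secrets and ingresses, so a ClusterRole covering exactly those resources, bound to that service account, should clear the errors. This is a minimal sketch mirroring the standard Traefik 1.x RBAC example; on a shared cluster you may prefer a dedicated service account instead of default:
$ kubectl apply -f - <<'EOF'
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: default
    namespace: kube-system
EOF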
