Kubernetes pull of Jenkins image giving ImagePullBackOff - docker

Trying to pull the Jenkins image using helm install stable/jenkins --values jenkins.values --name jenkins into my local cluster, but every time I build the pod it gives the following error:
[root@kube-master tmp]# kubectl describe pod newjenkins-84cd855fb6-mr9rm
Name: newjenkins-84cd855fb6-mr9rm
Namespace: default
Priority: 0
Node: worker-node2/192.168.20.56
Start Time: Thu, 14 May 2020 14:58:13 +0500
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=newjenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.16.0
pod-template-hash=84cd855fb6
Annotations: checksum/config: 70d4b49bc5cd79a1a978e1bbafdb8126f8accc44871772348fd481642e33cffb
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/newjenkins-84cd855fb6
Init Containers:
copy-default-config:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: newjenkins-84cd855fb6-mr9rm (v1:metadata.name)
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: newjenkins
Optional: false
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: newjenkins
ReadOnly: false
newjenkins-token-jmfsz:
Type: Secret (a volume populated by a Secret)
SecretName: newjenkins-token-jmfsz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/newjenkins-84cd855fb6-mr9rm to worker-node2
Normal Pulling 9m41s kubelet, worker-node2 Pulling image "jenkins/jenkins:lts"
[root@kube-master tmp]# kubectl describe pod newjenkins-84cd855fb6-mr9rm
Name: newjenkins-84cd855fb6-mr9rm
Namespace: default
Priority: 0
Node: worker-node2/192.168.20.56
Start Time: Thu, 14 May 2020 14:58:13 +0500
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=newjenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.16.0
pod-template-hash=84cd855fb6
Annotations: checksum/config: 70d4b49bc5cd79a1a978e1bbafdb8126f8accc44871772348fd481642e33cffb
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/newjenkins-84cd855fb6
Init Containers:
copy-default-config:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: newjenkins-84cd855fb6-mr9rm (v1:metadata.name)
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: newjenkins
Optional: false
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: newjenkins
ReadOnly: false
newjenkins-token-jmfsz:
Type: Secret (a volume populated by a Secret)
SecretName: newjenkins-token-jmfsz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/newjenkins-84cd855fb6-mr9rm to worker-node2
Warning Failed 50s kubelet, worker-node2 Failed to pull image "jenkins/jenkins:lts": rpc error: code = Unknown desc = unauthorized: authentication required
Warning Failed 50s kubelet, worker-node2 Error: ErrImagePull
Normal SandboxChanged 49s kubelet, worker-node2 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 28s (x2 over 10m) kubelet, worker-node2 Pulling image "jenkins/jenkins:lts"
I also pulled the jenkins image manually using docker pull jenkins, both with and without a Docker Hub login; every time it gives me the ErrImagePull error.

The hint is in the Events section:
Failed to pull image "jenkins/jenkins:lts": rpc error: code = Unknown desc = unauthorized: authentication required
The kubelet on each worker node performs a docker pull before it spins up the pod's containers.
Make sure the node is logged in with docker login so the worker nodes themselves can pull the image, if you haven't already.
If you have and it's still happening, you may need a pull secret in place to access the repository in question. If it's still happening even then, don't use a short name for your image (not jenkins/jenkins:lts; specify the full path, like my-image-registry:5001/jenkins/jenkins:lts) to ensure it pulls from the right place and not from whatever default registries Docker is configured with. Hope that helps.
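For reference, a minimal sketch of the pull-secret route (the secret name regcred and the credential placeholders are examples, not from the original question):
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
The secret is then referenced from the pod spec, along these lines:
spec:
  imagePullSecrets:
    - name: regcred
For a Helm-managed release like this one, check helm inspect values stable/jenkins for the chart value that wires in a pull secret rather than editing the pod directly.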

Related

Kubernetes pod status changes from Running to NotReady despite the pod logs displaying no errors and seemingly completing successfully

I created a k8s Job to use for schema migration in a Rails app. Below is the Job YAML, excluding the env vars:
---
- name: Deploy Migration Worker
k8s:
state: present
force: 'yes'
definition:
apiVersion: batch/v1
kind: Job
metadata:
name: schema-migration
namespace: "{{ k8s_namespace }}"
spec:
ttlSecondsAfterFinished: 60
template:
spec:
containers:
- command:
- /bin/sh
args:
- '-c'
- '{{ MIGRATIONS }}'
name: schema-migration-container
resources:
limits:
cpu: "{{ API_LIMIT_CPU }}"
memory: "{{ API_LIMIT_MEM }}"
requests:
cpu: "{{ API_REQUEST_CPU }}"
memory: "{{ API_REQUEST_MEM }}"
image: "redacted"
imagePullPolicy: IfNotPresent
restartPolicy: Never
imagePullSecrets:
- name: docker-pull-secret
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: docker-pull-secret
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
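One thing that stands out in the spec as posted (possibly just a paste artifact): imagePullSecrets appears twice at the pod spec level. Duplicate mapping keys are invalid YAML, so strict parsers reject the document and lenient ones silently keep only one copy; a deduplicated fragment would look like:
restartPolicy: Never
imagePullSecrets:
  - name: docker-pull-secret
dnsPolicy: ClusterFirst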
Below is the pod status:
NAME READY STATUS RESTARTS AGE
schema-migration-mnvvw 1/2 NotReady 0 137m
Below is the job status:
NAME COMPLETIONS DURATION AGE
schema-migration 0/1 133m 133m
Below is the pod description:
Name: schema-migration-mnvvw
Namespace: dev1
Priority: 0
Node: redacted
Start Time: Wed, 01 Feb 2023 15:16:35 -0400
Labels: controller-uid=redacted
job-name=schema-migration
security.istio.io/tlsMode=istio
service.istio.io/canonical-name=schema-migration
service.istio.io/canonical-revision=latest
Annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: true
kubectl.kubernetes.io/default-container: main
kubectl.kubernetes.io/default-logs-container: main
kubernetes.io/psp: eks.privileged
prometheus.io/path: /stats/prometheus
prometheus.io/port: 15020
prometheus.io/scrape: true
sidecar.istio.io/status:
{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
Status: Running
IP: 10.131.217.49
IPs:
IP: 10.131.217.49
Controlled By: Job/schema-migration
Init Containers:
istio-init:
Container ID: redacted
Image: docker.io/istio/proxyv2:1.11.3
Image ID: docker-pullable://istio/proxyv2@sha256:28513eb3706315b26610a53e0d66b29b09a334e3164393b9a0591f34fe47a6fd
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
172.20.0.1/32
-b
*
-d
15090,15021,15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Feb 2023 15:16:36 -0400
Finished: Wed, 01 Feb 2023 15:16:36 -0400
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 100m
memory: 128Mi
Environment:
DNS_AGENT:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vcj4 (ro)
Containers:
main:
Container ID: docker://a37824a15a748e7124a455a878f57b2ae22e08f8cddd0a2b1938b0414228b320
Image: redacted
Image ID: redacted
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
bundle exec rake db:migrate:status && bundle exec rake db:migrate && bundle exec rake seed:policy_types && bundle exec rake seed:permission_types && bundle exec rake seed:roles && bundle exec rake seed:default_role_permissions && bundle exec rake seed:partner_policy_defaults && bundle exec rake after_party:run
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 01 Feb 2023 15:16:50 -0400
Finished: Wed, 01 Feb 2023 15:18:13 -0400
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 4G
Requests:
cpu: 250m
memory: 2G
Readiness: http-get http://:15020/app-health/main/readyz delay=30s timeout=1s period=5s #success=1 #failure=12
Mounts:
/etc/istio/pod from istio-podinfo (rw)
/etc/istio/proxy from istio-envoy (rw)
/var/lib/istio/data from istio-data (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vcj4 (ro)
/var/run/secrets/tokens from istio-token (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
istio-podinfo:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.labels -> labels
metadata.annotations -> annotations
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
kube-api-access-9vcj4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
I'm fairly new to Kubernetes, but I've tried adding/removing a readinessProbe attribute as well as increasing CPU/memory resources, among other things, with no luck. There seem to be only a handful of similar issues (pod status showing NotReady, as opposed to node status), but none that seem related to my problem. One thing of note, perhaps, is that this is created via an Ansible playbook, though I don't believe that comes into play as far as this issue goes.
I'm hoping to get the job to a completed state, essentially. Thanks in advance.
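A quick way to see which container is holding the pod at 1/2 (my own diagnostic sketch; the pod name and dev1 namespace are taken from the describe output above):
kubectl get pod schema-migration-mnvvw -n dev1 \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\t"}{.state}{"\n"}{end}'
Given that the main container is already Terminated/Completed while the pod stays NotReady, and the pod carries istio sidecar annotations, the usual suspect is the injected istio-proxy sidecar still running and keeping the Job from completing.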

Pods not running with an error message of ImagePullBackOff

I initialized kubeflow pods using the following command.
juju deploy kubeflow
The following two pods didn't run and gave an error message stating ImagePullBackOff:
kfp-viz,
kfp-profile-controller
kubectl describe output for kfp-viz:
Name: kfp-viz-65bc89cd9b-dnng9
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:22:25 +0000
Labels: app.kubernetes.io/name=kfp-viz
pod-template-hash=65bc89cd9b
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
charm.juju.is/modified-version: 0
cni.projectcalico.org/podIP: 10.1.190.114/32
cni.projectcalico.org/podIPs: 10.1.190.114/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
unit.juju.is/id: kfp-viz/0
Status: Pending
IP: 10.1.190.114
IPs:
IP: 10.1.190.114
Controlled By: ReplicaSet/kfp-viz-65bc89cd9b
Init Containers:
juju-pod-init:
Container ID: containerd://ffab869ab2beeab6a6bfde53be1175da58044c35da4d0bc2b66db231c585b142
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
initCmd=$($JUJU_TOOLS_DIR/jujud help commands | grep caas-unit-init)
if test -n "$initCmd"; then
$JUJU_TOOLS_DIR/jujud caas-unit-init --debug --wait;
else
exit 0
fi
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 May 2022 05:22:35 +0000
Finished: Mon, 30 May 2022 05:24:10 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82xc9 (ro)
Containers:
ml-pipeline-visualizationserver:
Container ID:
Image: registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image@sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698
Image ID:
Port: 8888/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: exec [wget -q -S -O - http://localhost:8888/] delay=3s timeout=2s period=5s #success=1 #failure=3
Readiness: exec [wget -q -S -O - http://localhost:8888/] delay=3s timeout=2s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/usr/bin/juju-run from juju-data-dir (rw,path="tools/jujud")
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82xc9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
juju-data-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-82xc9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Failed 10m (x338 over 35h) kubelet (combined from similar events): Failed to pull image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image@sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698": rpc error: code = FailedPrecondition desc = failed to pull and unpack image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image@sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698": failed commit on ref "layer-sha256:d8f1984ce468ddfcc0f2752e09c8bbb5ea8513d55d5dd5f911d2d3dd135e5a84": "layer-sha256:d8f1984ce468ddfcc0f2752e09c8bbb5ea8513d55d5dd5f911d2d3dd135e5a84" failed size validation: 19367144 != 32561845: failed precondition
Normal BackOff 10s (x4271 over 36h) kubelet Back-off pulling image "registry.jujucharms.com/charm/c2o31yht1y825t6n49mwko4wyel0rracnrjn5/oci-image@sha256:13c46cf878062fd6ad672cbec4854eba7e869cd0123a8975bea49b9d75d4e698"
kubectl describe output for kfp-profile-controller:
Name: kfp-profile-controller-operator-0
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:15:50 +0000
Labels: controller-revision-hash=kfp-profile-controller-operator-755985f4fc
operator.juju.is/name=kfp-profile-controller
operator.juju.is/target=application
statefulset.kubernetes.io/pod-name=kfp-profile-controller-operator-0
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
cni.projectcalico.org/podIP: 10.1.190.93/32
cni.projectcalico.org/podIPs: 10.1.190.93/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
juju.is/version: 2.9.29
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
Status: Running
IP: 10.1.190.93
IPs:
IP: 10.1.190.93
Controlled By: StatefulSet/kfp-profile-controller-operator
Containers:
juju-operator:
Container ID: containerd://a4cc8356c4ce7d8bbb9282ccee17c38937a7a3a3bfa13caa29eaeccb63a86d30
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
$JUJU_TOOLS_DIR/jujud caasoperator --application-name=kfp-profile-controller --debug
State: Running
Started: Mon, 30 May 2022 05:16:05 +0000
Ready: True
Restart Count: 0
Environment:
JUJU_APPLICATION: kfp-profile-controller
JUJU_OPERATOR_SERVICE_IP: 10.152.183.68
JUJU_OPERATOR_POD_IP: (v1:status.podIP)
JUJU_OPERATOR_NAMESPACE: kubeflow (v1:metadata.namespace)
Mounts:
/var/lib/juju/agents from charm (rw)
/var/lib/juju/agents/application-kfp-profile-controller/operator.yaml from kfp-profile-controller-operator-config (rw,path="operator.yaml")
/var/lib/juju/agents/application-kfp-profile-controller/template-agent.conf from kfp-profile-controller-operator-config (rw,path="template-agent.conf")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6x9qh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
charm:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: charm-kfp-profile-controller-operator-0
ReadOnly: false
kfp-profile-controller-operator-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kfp-profile-controller-operator-config
Optional: false
kube-api-access-6x9qh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: kfp-profile-controller-68f554c765-7vvmj
Namespace: kubeflow
Priority: 0
Node: mlops/10.50.60.90
Start Time: Mon, 30 May 2022 05:34:26 +0000
Labels: app.kubernetes.io/name=kfp-profile-controller
pod-template-hash=68f554c765
Annotations: apparmor.security.beta.kubernetes.io/pod: runtime/default
charm.juju.is/modified-version: 0
cni.projectcalico.org/podIP: 10.1.190.134/32
cni.projectcalico.org/podIPs: 10.1.190.134/32
controller.juju.is/id: ae0e16a7-f10b-41e9-8f64-35346b6c91dd
model.juju.is/id: a6f5b73a-cb38-42cd-8f8e-36aced0c1bbd
seccomp.security.beta.kubernetes.io/pod: docker/default
unit.juju.is/id: kfp-profile-controller/0
Status: Pending
IP: 10.1.190.134
IPs:
IP: 10.1.190.134
Controlled By: ReplicaSet/kfp-profile-controller-68f554c765
Init Containers:
juju-pod-init:
Container ID: containerd://7a1f2d07bce6f58bd7e9899e0f26a2cd36abfacad55d0e6b43fbb8fcb93df289
Image: jujusolutions/jujud-operator:2.9.29
Image ID: sha256:47127013daad1c7215de0566f312d2a00eb83b3ef898e7b07f1ceb9e42860b4a
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
export JUJU_DATA_DIR=/var/lib/juju
export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools
mkdir -p $JUJU_TOOLS_DIR
cp /opt/jujud $JUJU_TOOLS_DIR/jujud
initCmd=$($JUJU_TOOLS_DIR/jujud help commands | grep caas-unit-init)
if test -n "$initCmd"; then
$JUJU_TOOLS_DIR/jujud caas-unit-init --debug --wait;
else
exit 0
fi
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 May 2022 05:34:32 +0000
Finished: Mon, 30 May 2022 05:35:50 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqf7h (ro)
Containers:
kubeflow-pipelines-profile-controller:
Container ID:
Image: registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
Image ID:
Port: 80/TCP
Host Port: 0/TCP
Command:
python
Args:
/hooks/sync.py
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
CONTROLLER_PORT: 80
DISABLE_ISTIO_SIDECAR: false
KFP_DEFAULT_PIPELINE_ROOT:
KFP_VERSION: 1.7.0-rc.3
METADATA_GRPC_SERVICE_HOST: mlmd.kubeflow
METADATA_GRPC_SERVICE_PORT: 8080
MINIO_ACCESS_KEY: minio
MINIO_HOST: minio
MINIO_NAMESPACE: kubeflow
MINIO_PORT: 9000
MINIO_SECRET_KEY: SV25N7GCN6HAYV19M4GMHGX0YTZ840
Mounts:
/hooks from kubeflow-pipelines-profile-controller-code (rw)
/usr/bin/juju-run from juju-data-dir (rw,path="tools/jujud")
/var/lib/juju from juju-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqf7h (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
juju-data-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubeflow-pipelines-profile-controller-code:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kubeflow-pipelines-profile-controller-code
Optional: false
kube-api-access-lqf7h:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/arch=amd64
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 28m (x4631 over 38h) kubelet Back-off pulling image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347"
Warning Failed 11m (x207 over 38h) kubelet Error: ErrImagePull
Warning Failed 5m51s (x497 over 37h) kubelet (combined from similar events): Failed to pull image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347": rpc error: code = FailedPrecondition desc = failed to pull and unpack image "registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347": failed commit on ref "layer-sha256:3bfc3875e0f70f1fb305c87b4bc4d886f1118ddfedfded03ef0cb1c394cb90f0": "layer-sha256:3bfc3875e0f70f1fb305c87b4bc4d886f1118ddfedfded03ef0cb1c394cb90f0" failed size validation: 20672632 != 53354193: failed precondition
Kubeflow version: 1.21
kfctl version: kfctl v1.0.1-0-gf3edb9b
Kubernetes platform: MicroK8s
Kubernetes version: Client GitVersion "v1.24.0", Server GitVersion "v1.21.12-3+6937f71915b56b"
OS: Linux 18.04
Apparently, these pods are not able to pull their images.
Do these systems connect to the internet? With or without a proxy?
Anyway, a quick resolution can be to just make the images available on all the nodes, as simply as:
docker pull registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
You can find the other images in the Events section of the describe output you have posted.
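One caveat with that approach (my addition, based on the platform details above): MicroK8s runs on containerd, so an image pulled into the host's Docker daemon is not visible to the kubelet. A sketch of pulling straight into containerd's image store instead, using the bundled ctr:
microk8s ctr -n k8s.io images pull registry.jujucharms.com/charm/gm1axzm8pxqlan75l3a7znu2mv5bf0pm1wfar/oci-image@sha256:14ec52252771f8fa904afbdac497c80fc3234d518b1e0bced0c810d5748a7347
Also note the "failed size validation" in the events (downloaded bytes != expected layer size): that usually means layer downloads are being truncated, so a proxy or flaky connection between the node and registry.jujucharms.com is worth ruling out.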

ListenAndServeTLS runs locally but not in Docker container

When running a Go HTTPS server locally with self-signed certificates, things are fine.
When pushing the same to a Docker container (via skaffold -- or Google GKE), ListenAndServeTLS hangs and the container loops on recreation.
The certificate was created via:
openssl genrsa -out https-server.key 2048
openssl ecparam -genkey -name secp384r1 -out https-server.key
openssl req -new -x509 -sha256 -key https-server.key -out https-server.crt -days 3650
main.go contains:
if IsSSL {
err := http.ListenAndServeTLS(addr+":"+srvPort, os.Getenv("CERT_FILE"), os.Getenv("KEY_FILE"), handler)
if err != nil {
log.Fatal(err)
}
} else {
log.Fatal(http.ListenAndServe(addr+":"+srvPort, handler))
}
The crt and key files are passed via K8s secrets and my yaml file contains the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "true"
spec:
volumes:
- name: google-cloud-key
secret:
secretName: ecomm-key
- name: ssl-cert
secret:
secretName: ecomm-cert-server
- name: ssl-key
secret:
secretName: ecomm-cert-key
containers:
- name: frontend
image: gcr.io/sca-ecommerce-291313/frontend:latest
ports:
- containerPort: 8080
readinessProbe:
initialDelaySeconds: 10
httpGet:
path: "/_healthz"
port: 8080
httpHeaders:
- name: "Cookie"
value: "shop_session-id=x-readiness-probe"
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: "/_healthz"
port: 8080
httpHeaders:
- name: "Cookie"
value: "shop_session-id=x-liveness-probe"
volumeMounts:
- name: ssl-cert
mountPath: /var/secrets/ssl-cert
- name: ssl-key
mountPath: /var/secrets/ssl-key
env:
- name: USE_SSL
value: "true"
- name: CERT_FILE
value: "/var/secrets/ssl-cert/cert-server.pem"
- name: KEY_FILE
value: "/var/secrets/ssl-key/cert-key.pem"
- name: PORT
value: "8080"
I have the same behaviour when referencing the files directly in the code, like:
err := http.ListenAndServeTLS(addr+":"+srvPort, "https-server.crt", "https-server.key", handler)
The strange and unhelpful thing is that ListenAndServeTLS gives no log output on why it hangs, nor any hint at the problem (using kubectl logs).
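One sanity check worth doing at this point (my suggestion, not from the original post): confirm the mounted secret paths actually contain the files the env vars point to, since a missing file would make log.Fatal exit before anything is served:
kubectl exec deploy/frontend -n ecomm-ns -- ls -l /var/secrets/ssl-cert /var/secrets/ssl-key
Note also that the Deployment YAML above sets CERT_FILE to cert-server.pem while the describe output below shows cert-server.crt, so it is worth checking which version is actually deployed.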
Looking at the kubectl describe pod output:
Name: frontend-85f4d9cb8c-9bjh4
Namespace: ecomm-ns
Priority: 0
Start Time: Fri, 01 Jan 2021 17:04:29 +0100
Labels: app=frontend
app.kubernetes.io/managed-by=skaffold
pod-template-hash=85f4d9cb8c
skaffold.dev/run-id=44518449-c1c1-4b6c-8cc1-406ac6d6b91f
Annotations: sidecar.istio.io/rewriteAppHTTPProbers: true
Status: Running
IP: 192.168.10.7
IPs:
IP: 192.168.10.7
Controlled By: ReplicaSet/frontend-85f4d9cb8c
Containers:
frontend:
Container ID: docker://f867ea7a2f99edf891b571f80ae18f10e261375e073b9d2007bbff1600d272c7
Image: gcr.io/sca-ecommerce-291313/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
Image ID: docker://sha256:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 01 Jan 2021 17:05:08 +0100
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 01 Jan 2021 17:04:37 +0100
Finished: Fri, 01 Jan 2021 17:05:07 +0100
Ready: False
Restart Count: 1
Limits:
cpu: 200m
memory: 128Mi
Requests:
cpu: 100m
memory: 64Mi
Liveness: http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
CERT_FILE: /var/secrets/ssl-cert/cert-server.crt
KEY_FILE: /var/secrets/ssl-key/cert-server.key
PORT: 8080
USE_SSL: true
ONLINE_PRODUCT_CATALOG_SERVICE_ADDR: onlineproductcatalogservice:4040
ENV_PLATFORM: gcp
DISABLE_TRACING: 1
DISABLE_PROFILER: 1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tm62d (ro)
/var/secrets/google from google-cloud-key (rw)
/var/secrets/ssl-cert from ssl-cert (rw)
/var/secrets/ssl-key from ssl-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: ecomm-key
Optional: false
ssl-cert:
Type: Secret (a volume populated by a Secret)
SecretName: https-cert-server
Optional: false
ssl-key:
Type: Secret (a volume populated by a Secret)
SecretName: https-cert-key
Optional: false
default-token-tm62d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tm62d
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46s default-scheduler Successfully assigned ecomm-ns/frontend-85f4d9cb8c-9bjh4
Warning Unhealthy 17s (x2 over 27s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 400
Normal Pulled 8s (x2 over 41s) kubelet Container image "gcr.io/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22" already present on machine
Normal Created 8s (x2 over 39s) kubelet Created container frontend
Warning Unhealthy 8s (x3 over 28s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 400
Normal Killing 8s kubelet Container frontend failed liveness probe, will be restarted
Normal Started 7s (x2 over 38s) kubelet Started container frontend
The liveness and readiness probes are getting a 400 response.
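An observation on those 400s (my note, not from the original post): when the container serves TLS on 8080 but the probes send plain HTTP, Go's net/http answers with 400 "Client sent an HTTP request to an HTTPS server", which matches the failures above. Kubernetes httpGet probes can speak TLS via the standard scheme field, along these lines:
livenessProbe:
  initialDelaySeconds: 10
  httpGet:
    path: "/_healthz"
    port: 8080
    scheme: HTTPS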

Unable to bring up Jenkins using Helm

I'm following the doc on the Jenkins page. I'm running a 2-node K8s cluster (1 master, 1 worker), with the service type set to NodePort, and for some reason the init container crashes and never comes up.
kubectl describe pod jenkins-0 -n jenkins
Name: jenkins-0
Namespace: jenkins
Priority: 0
Node: vlab048009.dom047600.lab/10.204.110.35
Start Time: Wed, 09 Dec 2020 23:19:59 +0530
Labels: app.kubernetes.io/component=jenkins-controller
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=jenkins
controller-revision-hash=jenkins-c5795f65f
statefulset.kubernetes.io/pod-name=jenkins-0
Annotations: checksum/config: 2a4c2b3ea5dea271cb7c0b8e8582b682814d39f8e933e0348725b0b9a7dbf258
Status: Pending
IP: 10.244.1.28
IPs:
IP: 10.244.1.28
Controlled By: StatefulSet/jenkins
Init Containers:
init:
Container ID: docker://95e3298740bcaed3c2adf832f41d346e563c92add728080cfdcfcac375e0254d
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins@sha256:1433deaac433ce20c534d8b87fcd0af3f25260f375f4ee6bdb41d70e1769d9ce
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 09 Dec 2020 23:41:28 +0530
Finished: Wed, 09 Dec 2020 23:41:29 +0530
Ready: False
Restart Count: 9
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment: <none>
Mounts:
/usr/share/jenkins/ref/plugins from plugins (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=3
Startup: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=12
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
JAVA_OPTS: -Dcasc.reload.token=$(POD_NAME)
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
Mounts:
/run/secrets/chart-admin-password from admin-secret (ro,path="jenkins-admin-password")
/run/secrets/chart-admin-username from admin-secret (ro,path="jenkins-admin-user")
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
config-reload:
Container ID:
Image: kiwigrid/k8s-sidecar:0.1.275
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
LABEL: jenkins-jenkins-config
FOLDER: /var/jenkins_home/casc_configs
NAMESPACE: jenkins
REQ_URL: http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)
REQ_METHOD: POST
REQ_RETRY_CONNECT: 10
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-config-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
admin-secret:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins
Optional: false
jenkins-token-ppfw7:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-ppfw7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
Normal Pulled 22m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
Normal Created 21m (x4 over 22m) kubelet Created container init
Normal Started 21m (x4 over 22m) kubelet Started container init
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
Normal Pulling 20m (x5 over 22m) kubelet Pulling image "jenkins/jenkins:lts"
Warning BackOff 2m1s (x95 over 21m) kubelet Back-off restarting failed container
kubectl logs -f jenkins-0 -c init -n jenkins
Error from server: Get "https://10.204.110.35:10250/containerLogs/jenkins/jenkins-0/init?follow=true": dial tcp 10.204.110.35:10250: connect: no route to host
kubectl get events -n jenkins
LAST SEEN TYPE REASON OBJECT MESSAGE
23m Normal Scheduled pod/jenkins-0 Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
21m Normal Pulling pod/jenkins-0 Pulling image "jenkins/jenkins:lts"
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
22m Normal Created pod/jenkins-0 Created container init
22m Normal Started pod/jenkins-0 Started container init
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
3m30s Warning BackOff pod/jenkins-0 Back-off restarting failed container
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
22m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
23m Normal SuccessfulCreate statefulset/jenkins create Pod jenkins-0 in StatefulSet jenkins successful
Every 2.0s: kubectl get all -n jenkins Wed Dec 9 23:48:31 2020
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 0/2 Init:CrashLoopBackOff 10 28m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins NodePort 10.103.209.122 <none> 8080:32323/TCP 28m
service/jenkins-agent ClusterIP 10.103.195.120 <none> 50000/TCP 28m
NAME READY AGE
statefulset.apps/jenkins 0/1 28m
Using helm3 to deploy Jenkins, with pretty much only the changes done as per the doc.
Not sure how to debug this issue with the init container crashing; any leads or a solution would be appreciated. Thanks!
First, make sure that you have executed:
$ helm repo update
Also execute:
$ kubectl logs <pod-name> -c <init-container-name>
to inspect the init container. Then you will be able to properly debug this setup.
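Note that in the output above, kubectl logs itself failed with "no route to host" when reaching the kubelet on port 10250, so no logs can be fetched until that is resolved. A sketch, assuming firewalld on the worker node (adjust for whatever firewall is actually in use):
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
kubectl logs jenkins-0 -c init -n jenkins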
This might be a connection issue to the Jenkins update site. You can build an image which contains the required plugins and disable the plugin download. Take a look: jenkins-kubernetes.
See more: jenkins-helm-issues - in this case the problem lies in plugin compatibility.

How can I use an existing PVC to helm install stable/jenkins

I am stuck with a helm install of jenkins :( please help!
I have predefined a storage class via:
$ kubectl apply -f generic-storage-class.yaml
with generic-storage-class.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: generic
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
zones: us-east-1a, us-east-1b, us-east-1c
fsType: ext4
I then define a PVC via:
$ kubectl apply -f jenkins-pvc.yaml
with jenkins-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc
namespace: jenkins-project
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
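Side note (an observation from the outputs, not part of the original question): this PVC does not set storageClassName, so it binds via the cluster's default StorageClass (gp2 in the listing below) rather than the generic class defined above. Pinning it explicitly would look like:
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi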
I can then see the PVC go into the BOUND status:
$ kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 27m
But when I try to Helm install jenkins via:
$ helm install --name jenkins \
--set persistence.existingClaim=jenkins-pvc \
stable/jenkins --namespace jenkins-project
I get this output:
NAME: jenkins
LAST DEPLOYED: Wed May 22 17:07:44 2019
NAMESPACE: jenkins-project
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
jenkins 5 0s
jenkins-tests 1 0s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 0/1 1 0 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins Pending gp2 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
jenkins-6c9f9f5478-czdbh 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
jenkins Opaque 2 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins LoadBalancer 10.100.200.27 <pending> 8080:31157/TCP 0s
jenkins-agent ClusterIP 10.100.221.179 <none> 50000/TCP 0s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace jenkins-project jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace jenkins-project -w jenkins'
export SERVICE_IP=$(kubectl get svc --namespace jenkins-project jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
where I see helm creating a new PersistentVolumeClaim called jenkins.
How come helm did not use the existingClaim?
These are the only helm values I see for the jenkins release:
$ helm get values jenkins
persistence:
existingClaim: jenkins-pvc
and indeed it has just made its own PVC instead of using the pre-created one.
kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-project jenkins Bound pvc-a9caa3ba-7cf1-11e9-a90f-161c7e8a0754 8Gi RWO gp2 6m11s
jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 56m
I feel like I am close but missing something basic. Any ideas?
So per Matthew L Daniel's comment I ran helm repo update and then re-ran the helm install command. This time it did not re-create the PVC but instead used the pre-made one.
My previous jenkins chart version was "jenkins-0.35.0"
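For completeness, the sequence that worked was roughly (just the commands already shown above, re-run after refreshing the repo cache):
helm repo update
helm install --name jenkins \
  --set persistence.existingClaim=jenkins-pvc \
  stable/jenkins --namespace jenkins-project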
For anyone wondering what the deployment looked like:
Name: jenkins
Namespace: jenkins-project
CreationTimestamp: Wed, 22 May 2019 22:03:33 -0700
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.1.21
Annotations: deployment.kubernetes.io/revision: 1
Selector: app.kubernetes.io/component=jenkins-master,app.kubernetes.io/instance=jenkins
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.1.21
Annotations: checksum/config: 867177d7ed5c3002201650b63dad00de7eb1e45a6622e543b80fae1f674a99cb
Service Account: jenkins
Init Containers:
copy-default-config:
Image: jenkins/jenkins:lts
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins from plugins (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
Containers:
jenkins:
Image: jenkins/jenkins:lts
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: jenkins-86dcf94679 (1/1 replicas created)
NewReplicaSet: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 42s deployment-controller Scaled up replica set jenkins-86dcf94679 to 1
