I am stuck on a Helm install of Jenkins :( Please help!
I have predefined a storage class via:
$ kubectl apply -f generic-storage-class.yaml
with generic-storage-class.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: generic
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: us-east-1a, us-east-1b, us-east-1c
  fsType: ext4
I then define a PVC via:
$ kubectl apply -f jenkins-pvc.yaml
with jenkins-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins-project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
I can then see the PVC go into the Bound status:
$ kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 27m
But when I try to Helm install jenkins via:
$ helm install --name jenkins \
--set persistence.existingClaim=jenkins-pvc \
stable/jenkins --namespace jenkins-project
I get this output:
NAME: jenkins
LAST DEPLOYED: Wed May 22 17:07:44 2019
NAMESPACE: jenkins-project
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
jenkins 5 0s
jenkins-tests 1 0s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 0/1 1 0 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins Pending gp2 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
jenkins-6c9f9f5478-czdbh 0/1 Pending 0 0s
==> v1/Secret
NAME TYPE DATA AGE
jenkins Opaque 2 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins LoadBalancer 10.100.200.27 <pending> 8080:31157/TCP 0s
jenkins-agent ClusterIP 10.100.221.179 <none> 50000/TCP 0s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace jenkins-project jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace jenkins-project -w jenkins'
export SERVICE_IP=$(kubectl get svc --namespace jenkins-project jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
where I see helm creating a new PersistentVolumeClaim called jenkins.
How come Helm did not use the "existingClaim"?
I see this as the only helm values for the jenkins release
$ helm get values jenkins
persistence:
  existingClaim: jenkins-pvc
and indeed it has just made its own PVC instead of using the pre-created one.
$ kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-project jenkins Bound pvc-a9caa3ba-7cf1-11e9-a90f-161c7e8a0754 8Gi RWO gp2 6m11s
jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 56m
I feel like I am close but missing something basic. Any ideas?
So per Matthew L Daniel's comment I ran helm repo update and then re-ran the helm install command. This time it did not re-create the PVC but instead used the pre-made one.
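For anyone repeating this, the sequence that worked was roughly the following. The helm delete and PVC cleanup steps are my assumption about how the stale release (which had already created its own PVC) was cleared before reinstalling; the syntax is Helm 2, matching the --name flag used above:

```shell
# Refresh the local chart cache so the latest stable/jenkins chart is used
helm repo update

# Assumption: remove the stale release and the PVC it auto-created
helm delete --purge jenkins
kubectl delete pvc jenkins -n jenkins-project

# Re-run the original install against the pre-created claim
helm install --name jenkins \
  --set persistence.existingClaim=jenkins-pvc \
  stable/jenkins --namespace jenkins-project
```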
My previous Jenkins chart version was "jenkins-0.35.0".
For anyone wondering what the deployment looked like:
Name: jenkins
Namespace: jenkins-project
CreationTimestamp: Wed, 22 May 2019 22:03:33 -0700
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.1.21
Annotations: deployment.kubernetes.io/revision: 1
Selector: app.kubernetes.io/component=jenkins-master,app.kubernetes.io/instance=jenkins
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.1.21
Annotations: checksum/config: 867177d7ed5c3002201650b63dad00de7eb1e45a6622e543b80fae1f674a99cb
Service Account: jenkins
Init Containers:
copy-default-config:
Image: jenkins/jenkins:lts
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins from plugins (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
Containers:
jenkins:
Image: jenkins/jenkins:lts
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'jenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'jenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: jenkins-86dcf94679 (1/1 replicas created)
NewReplicaSet: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 42s deployment-controller Scaled up replica set jenkins-86dcf94679 to 1
I'm following the doc on the Jenkins page. I'm running a 2-node K8s cluster (1 master, 1 worker) with the service type set to NodePort. For some reason the init container crashes and never comes up.
kubectl describe pod jenkins-0 -n jenkins
Name: jenkins-0
Namespace: jenkins
Priority: 0
Node: vlab048009.dom047600.lab/10.204.110.35
Start Time: Wed, 09 Dec 2020 23:19:59 +0530
Labels: app.kubernetes.io/component=jenkins-controller
app.kubernetes.io/instance=jenkins
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=jenkins
controller-revision-hash=jenkins-c5795f65f
statefulset.kubernetes.io/pod-name=jenkins-0
Annotations: checksum/config: 2a4c2b3ea5dea271cb7c0b8e8582b682814d39f8e933e0348725b0b9a7dbf258
Status: Pending
IP: 10.244.1.28
IPs:
IP: 10.244.1.28
Controlled By: StatefulSet/jenkins
Init Containers:
init:
Container ID: docker://95e3298740bcaed3c2adf832f41d346e563c92add728080cfdcfcac375e0254d
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins@sha256:1433deaac433ce20c534d8b87fcd0af3f25260f375f4ee6bdb41d70e1769d9ce
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 09 Dec 2020 23:41:28 +0530
Finished: Wed, 09 Dec 2020 23:41:29 +0530
Ready: False
Restart Count: 9
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment: <none>
Mounts:
/usr/share/jenkins/ref/plugins from plugins (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=3
Startup: http-get http://:http/login delay=0s timeout=5s period=10s #success=1 #failure=12
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
JAVA_OPTS: -Dcasc.reload.token=$(POD_NAME)
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
Mounts:
/run/secrets/chart-admin-password from admin-secret (ro,path="jenkins-admin-password")
/run/secrets/chart-admin-username from admin-secret (ro,path="jenkins-admin-user")
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
config-reload:
Container ID:
Image: kiwigrid/k8s-sidecar:0.1.275
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
POD_NAME: jenkins-0 (v1:metadata.name)
LABEL: jenkins-jenkins-config
FOLDER: /var/jenkins_home/casc_configs
NAMESPACE: jenkins
REQ_URL: http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)
REQ_METHOD: POST
REQ_RETRY_CONNECT: 10
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_home/casc_configs from sc-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-ppfw7 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: jenkins
Optional: false
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
sc-config-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
admin-secret:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins
Optional: false
jenkins-token-ppfw7:
Type: Secret (a volume populated by a Secret)
SecretName: jenkins-token-ppfw7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
Normal Pulled 22m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
Normal Created 21m (x4 over 22m) kubelet Created container init
Normal Started 21m (x4 over 22m) kubelet Started container init
Normal Pulled 21m kubelet Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
Normal Pulling 20m (x5 over 22m) kubelet Pulling image "jenkins/jenkins:lts"
Warning BackOff 2m1s (x95 over 21m) kubelet Back-off restarting failed container
kubectl logs -f jenkins-0 -c init -n jenkins
Error from server: Get "https://10.204.110.35:10250/containerLogs/jenkins/jenkins-0/init?follow=true": dial tcp 10.204.110.35:10250: connect: no route to host
kubectl get events -n jenkins
LAST SEEN TYPE REASON OBJECT MESSAGE
23m Normal Scheduled pod/jenkins-0 Successfully assigned jenkins/jenkins-0 to vlab048009.dom047600.lab
21m Normal Pulling pod/jenkins-0 Pulling image "jenkins/jenkins:lts"
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.648858149s
22m Normal Created pod/jenkins-0 Created container init
22m Normal Started pod/jenkins-0 Started container init
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 1.407161762s
3m30s Warning BackOff pod/jenkins-0 Back-off restarting failed container
23m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 4.963056101s
22m Normal Pulled pod/jenkins-0 Successfully pulled image "jenkins/jenkins:lts" in 8.0749493s
23m Normal SuccessfulCreate statefulset/jenkins create Pod jenkins-0 in StatefulSet jenkins successful
Every 2.0s: kubectl get all -n jenkins Wed Dec 9 23:48:31 2020
NAME READY STATUS RESTARTS AGE
pod/jenkins-0 0/2 Init:CrashLoopBackOff 10 28m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jenkins NodePort 10.103.209.122 <none> 8080:32323/TCP 28m
service/jenkins-agent ClusterIP 10.103.195.120 <none> 50000/TCP 28m
NAME READY AGE
statefulset.apps/jenkins 0/1 28m
I'm using Helm 3 to deploy Jenkins, with the changes made pretty much as per the doc.
I'm not sure how to debug this issue with the init container crashing; any leads or a solution would be appreciated. Thanks!
First, make sure that you have executed:
$ helm repo update
Also execute:
$ kubectl logs <pod-name> -c <init-container-name>
to inspect the init container. Then you will be able to debug this setup properly.
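For the pod in this question that would look like the following; the container name init is taken from the describe output above, and --previous shows the logs of the last crashed attempt:

```shell
# Logs of the init container of the jenkins-0 pod
kubectl logs jenkins-0 -c init -n jenkins

# In CrashLoopBackOff, the previous attempt's logs usually contain the actual error
kubectl logs jenkins-0 -c init -n jenkins --previous
```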
This might be a connection issue to the Jenkins update site. You can build an image which contains the required plugins and disable the plugin download. Take a look: jenkins-kubernetes.
See more: jenkins-helm-issues - in that case the problem lay in plugin compatibility.
Environment information:
Computer details: one master node and four slave nodes, all running CentOS Linux release 7.8.2003 (Core).
Kubernetes version: v1.18.0.
Zero to JupyterHub version: 0.9.0.
Helm version: v2.11.0
I recently tried to deploy an online code environment (like Google Colab) on new lab servers via Zero to JupyterHub. Unfortunately, I failed to deploy a Persistent Volume (PV) for JupyterHub and got a failure message such as the one below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x27 over 35m) default-scheduler running "VolumeBinding" filter plugin for pod "hub-7b9cbbcf59-747jl": pod has unbound immediate PersistentVolumeClaims
I followed the installation process from the JupyterHub tutorial and used Helm to install JupyterHub on k8s. The config file is below:
config.yaml
proxy:
  secretToken: "2fdeb3679d666277bdb1c93102a08f5b894774ba796e60af7957cb5677f40706"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
Here I configured local-storage for JupyterHub; local-storage is described in the k8s docs: Link. Its YAML file looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I used kubectl get storageclass to check that it worked, and got the message below:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 64m
So I thought I had deployed storage for JupyterHub, but I was naive. I am disappointed because my other (JupyterHub) Pods are all running. I have been searching for a solution for a long time, but have failed so far.
So now, my problems are:
What is the right way to solve the PV problem? (Preferably using local storage.)
Will the local-storage approach use the disks of the other nodes, and not only the master?
In fact, my lab has a cloud storage service, so if the answer to Q2 is no, how can I use my lab's cloud storage service to deploy the PV?
I have since addressed the above problem with @Arghya Sadhu's solution. But now I have a new problem: the Pod hub-db-dir is also pending, which leaves my service proxy-public pending.
The description of hub-db-dir is below:
Name: hub-7b9cbbcf59-jv49z
Namespace: jhub
Priority: 0
Node: <none>
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=7b9cbbcf59
release=jhub
Annotations: checksum/config-map: c20a64c7c9475201046ac620b057f0fa65ad6928744f7d265bc8705c959bce2e
checksum/secret: 1beaebb110d06103988476ec8a3117eee58d97e7dbc70c115c20048ea04e79a4
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hub-7b9cbbcf59
Containers:
hub:
Image: jupyterhub/k8s-hub:0.9.0
Port: 8081/TCP
Host Port: 0/TCP
Command:
jupyterhub
--config
/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
Requests:
cpu: 200m
memory: 512Mi
Readiness: http-get http://:hub/hub/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/etc/jupyterhub/cull_idle_servers.py from config (rw,path="cull_idle_servers.py")
/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/etc/jupyterhub/secret/ from secret (rw)
/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-vlgwz (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub-secret
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-vlgwz:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-vlgwz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 61s (x43 over 56m) default-scheduler 0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't find available persistent volumes to bind.
The information with kubectl get pv,pvc,sc.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/hub-db-dir Pending local-storage 162m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/local-storage (default) kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 8h
So, how to fix it?
In addition to @Arghya Sadhu's answer, in order to make it work using local storage you have to create a PersistentVolume manually.
For example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: <path_to_local_volume>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <name_of_the_node>
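After filling in the placeholders, apply the manifest and check that the PV shows up (hub-db-pv.yaml is an assumed file name for the manifest above):

```shell
kubectl apply -f hub-db-pv.yaml

# The PV should show as Available, then Bound once hub-db-dir claims it
kubectl get pv hub-db-pv
```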
Then you can deploy the chart:
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
The config.yaml file can be left as is:
proxy:
  secretToken: "<token>"
singleuser:
  storage:
    dynamic:
      storageClass: local-storage
I think you need to make local-storage the default storage class:
kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Local storage will use the local disk of the node where the pod gets scheduled.
Hard to tell without more details. You can either create the PV manually or use a storage class that does dynamic volume provisioning.
I'm trying to pull the Jenkins image into my local cluster using helm install stable/jenkins --values jenkins.values --name jenkins, but every time I build the pod it gives me the following error.
[root@kube-master tmp]# kubectl describe pod newjenkins-84cd855fb6-mr9rm
Name: newjenkins-84cd855fb6-mr9rm
Namespace: default
Priority: 0
Node: worker-node2/192.168.20.56
Start Time: Thu, 14 May 2020 14:58:13 +0500
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=newjenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.16.0
pod-template-hash=84cd855fb6
Annotations: checksum/config: 70d4b49bc5cd79a1a978e1bbafdb8126f8accc44871772348fd481642e33cffb
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/newjenkins-84cd855fb6
Init Containers:
copy-default-config:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: newjenkins-84cd855fb6-mr9rm (v1:metadata.name)
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: newjenkins
Optional: false
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: newjenkins
ReadOnly: false
newjenkins-token-jmfsz:
Type: Secret (a volume populated by a Secret)
SecretName: newjenkins-token-jmfsz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/newjenkins-84cd855fb6-mr9rm to worker-node2
Normal Pulling 9m41s kubelet, worker-node2 Pulling image "jenkins/jenkins:lts"
[root@kube-master tmp]# kubectl describe pod newjenkins-84cd855fb6-mr9rm
Name: newjenkins-84cd855fb6-mr9rm
Namespace: default
Priority: 0
Node: worker-node2/192.168.20.56
Start Time: Thu, 14 May 2020 14:58:13 +0500
Labels: app.kubernetes.io/component=jenkins-master
app.kubernetes.io/instance=newjenkins
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=jenkins
helm.sh/chart=jenkins-1.16.0
pod-template-hash=84cd855fb6
Annotations: checksum/config: 70d4b49bc5cd79a1a978e1bbafdb8126f8accc44871772348fd481642e33cffb
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/newjenkins-84cd855fb6
Init Containers:
copy-default-config:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
/var/jenkins_config/apply_config.sh
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Environment:
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (rw)
/var/jenkins_home from jenkins-home (rw)
/var/jenkins_plugins from plugin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Containers:
jenkins:
Container ID:
Image: jenkins/jenkins:lts
Image ID:
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
--argumentsRealm.roles.$(ADMIN_USER)=admin
--httpPort=8080
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 50m
memory: 256Mi
Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: newjenkins-84cd855fb6-mr9rm (v1:metadata.name)
JAVA_OPTS:
JENKINS_OPTS:
JENKINS_SLAVE_AGENT_PORT: 50000
ADMIN_PASSWORD: <set to the key 'jenkins-admin-password' in secret 'newjenkins'> Optional: false
ADMIN_USER: <set to the key 'jenkins-admin-user' in secret 'newjenkins'> Optional: false
Mounts:
/tmp from tmp (rw)
/usr/share/jenkins/ref/plugins/ from plugin-dir (rw)
/usr/share/jenkins/ref/secrets/ from secrets-dir (rw)
/var/jenkins_config from jenkins-config (ro)
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from newjenkins-token-jmfsz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
plugins:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: newjenkins
Optional: false
secrets-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
plugin-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: newjenkins
ReadOnly: false
newjenkins-token-jmfsz:
Type: Secret (a volume populated by a Secret)
SecretName: newjenkins-token-jmfsz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/newjenkins-84cd855fb6-mr9rm to worker-node2
Warning Failed 50s kubelet, worker-node2 Failed to pull image "jenkins/jenkins:lts": rpc error: code = Unknown desc = unauthorized: authentication required
Warning Failed 50s kubelet, worker-node2 Error: ErrImagePull
Normal SandboxChanged 49s kubelet, worker-node2 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 28s (x2 over 10m) kubelet, worker-node2 Pulling image "jenkins/jenkins:lts"
I have tried pulling the Jenkins image manually using docker pull jenkins, both with and without a Docker Hub login; every time it gives me the ErrImagePull error.
The hint is in the Events section:
Failed to pull image "jenkins/jenkins:lts": rpc error: code = Unknown desc = unauthorized: authentication required
The kubelet on worker nodes performs a docker pull prior to executing the pod when it spins up the containers.
Make sure the node is logged in with docker login so the local worker nodes can pull the image manually, if you haven't already.
If you have and it's still happening, you may need a secret in place to access the repository in question. If it's still happening even then, don't use a short name for your image (not jenkins/jenkins:lts, specify the full path, like my-image-registry:5001/jenkins/jenkins:lts) to ensure it's pulling from the right place and not the default registries Docker is configured with. Hope that helps.
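If a pull secret turns out to be needed, here is a sketch of wiring one up; the secret name regcred and the Docker Hub server URL are placeholders/assumptions, not taken from the question:

```shell
# Create a docker-registry secret in the pod's namespace
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password>

# Then reference it from the pod spec (or the chart's values, if supported):
#   spec:
#     imagePullSecrets:
#       - name: regcred
```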
I am trying to use gcePersistentDisk as ReadOnlyMany so that my pods on multiple nodes can read the data on this disk. Following the documentation here for the same.
To create and later format the gce Persistent Disk, I have followed the instructions in the documentation here. Following this doc, I have sshed into one of the nodes and have formatted the disk. See below the complete error and also the other yaml files.
kubectl describe pods -l podName
Name: punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-d016/10.160.0.12
Start Time: Thu, 25 Apr 2019 23:55:38 +0530
Labels: app.kubernetes.io/instance=punk-fly
app.kubernetes.io/name=nodejs
pod-template-hash=1866836461
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nodejs
Status: Pending
IP:
Controlled By: ReplicaSet/punk-fly-nodejs-deployment-5dbbd7b8b5
Containers:
nodejs:
Container ID:
Image: rajesh12/smartserver:server
Image ID:
Port: 3002/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
MYSQL_HOST: mysqlservice
MYSQL_DATABASE: app
MYSQL_ROOT_PASSWORD: password
Mounts:
/usr/src/ from helm-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
helm-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-readonly-pvc
ReadOnly: true
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs to gke-mycluster-default-pool-b1c1d316-d016
Normal SuccessfulAttachVolume 1m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f"
Warning FailedMount 10s (x8 over 1m) kubelet, gke-mycluster-default-pool-b1c1d316-d016 MountVolume.MountDevice failed for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f" : failed to mount unformatted volume as read only
Warning FailedMount 0s kubelet, gke-mycluster-default-pool-b1c1d316-d016 Unable to mount volumes for pod "punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs_default(86293044-6787-11e9-ad35-42010aa0000f)": timeout expired waiting for volumes to attach or mount for pod "default"/"punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs". list of unmounted volumes=[helm-vol]. list of unattached volumes=[helm-vol default-token-jpkzg]
readonly_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-readonly-pv
spec:
storageClassName: ""
capacity:
storage: 1G
accessModes:
- ReadOnlyMany
gcePersistentDisk:
pdName: mydisk0
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-readonly-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
deployment.yaml
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
containers:
- name: {{ .Values.app.backendName }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tagServer }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MYSQL_HOST
value: mysqlservice
- name: MYSQL_DATABASE
value: app
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- name: http-backend
containerPort: 3002
volumeMounts:
- name: helm-vol
mountPath: /usr/src/
It sounds like your PVC is dynamically provisioning a new, unformatted volume via the default StorageClass instead of binding to your pre-formatted PV.
It could be that your Pod is being created in a different availability zone from where you have the PV provisioned. The gotcha with having multiple Pod readers for a GCE volume is that the Pods always have to be in the same availability zone as the disk.
Some options:
Simply create and format the PV on the same availability zone where your node is.
When you define your PV you could specify Node Affinity to make sure it always gets assigned to a specific node.
Define a StorageClass that specifies the filesystem type:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: mysc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
And then use it in your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
storageClassName: mysc
The volume will be automatically provisioned and formatted.
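For the node-affinity option above, here is a sketch of what the PV could look like (the zone label value is hypothetical; on clusters of this vintage the zone label is `failure-domain.beta.kubernetes.io/zone`, so substitute your own zone):

```yaml
# Sketch only: pins the PV to one zone so Pods that mount it are
# scheduled where the GCE disk actually lives. Zone value is a placeholder.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: mydisk0
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - asia-south1-a   # placeholder: your disk's zone
```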
I received the same error message when trying to provision a persistent disk with an access mode of ReadWriteOnce. What fixed the issue for me was removing the property readOnly: true from the volumes declaration of the Deployment spec. In the case of your deployment.yaml file, that would be this block:
volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
Try removing that line and see if the error goes away.
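With that property removed, the volumes block from your deployment.yaml would look like this (same names as above):

```yaml
# Same claim as before, just without readOnly: true on the volume source.
volumes:
  - name: helm-vol
    persistentVolumeClaim:
      claimName: my-readonly-pvc
```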
I had the same error and managed to fix it with a few lines from a related article about using preexisting disks:
https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd
You need to add storageClassName and volumeName to your persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim-demo
spec:
# It's necessary to specify "" as the storageClassName
# so that the default storage class won't be used, see
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
storageClassName: ""
volumeName: pv-demo
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 500G
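For completeness, the claim above only binds if a matching pre-created PV named `pv-demo` exists; a sketch of one (the disk name `my-existing-disk` is a placeholder for your preexisting GCE PD):

```yaml
# Sketch: metadata.name must match volumeName in the claim, and
# capacity/accessModes must be compatible with the claim's request.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: ""
  capacity:
    storage: 500G
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: my-existing-disk   # placeholder: your preexisting disk
    fsType: ext4
```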
I want to use Ceph RBD with Kubernetes.
I have a Kubernetes 1.9.2 and Ceph 12.2.5 cluster, and on my k8s nodes I have installed the ceph-common package.
[root@docker09 manifest]# ceph auth get-key client.admin|base64
QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==
[root@docker09 manifest]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: QVFEcmxmcGFmZXlZQ2hBQVFJWkExR0pXcS9RcXV4QmgvV3ZFWkE9PQ==
kubectl create -f ceph-secret.yaml
Then:
[root@docker09 manifest]# cat ceph-pv.yaml |grep -v "#"
apiVersion: v1
kind: PersistentVolume
metadata:
name: ceph-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
rbd:
monitors:
- 10.211.121.61:6789
- 10.211.121.62:6789
- 10.211.121.63:6789
pool: rbd
image: ceph-image
user: admin
secretRef:
name: ceph-secret
fsType: ext4
readOnly: false
persistentVolumeReclaimPolicy: Recycle
[root@docker09 manifest]# rbd info ceph-image
rbd image 'ceph-image':
size 2048 MB in 512 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.341d374b0dc51
format: 2
features: layering
flags:
create_timestamp: Fri Jun 15 15:58:04 2018
[root@docker09 manifest]# cat task-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ceph-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[root@docker09 manifest]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/ceph-pv 2Gi RWO Recycle Bound default/ceph-claim 54m
pv/host 10Gi RWO Retain Bound default/hostv 24d
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/ceph-claim Bound ceph-pv 2Gi RWO 53m
pvc/hostv Bound host 10Gi RWO 24d
I create a pod using this PVC.
[root@docker09 manifest]# cat ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ceph-pod2
spec:
containers:
- name: ceph-busybox
image: busybox
command: ["sleep", "60000"]
volumeMounts:
- name: ceph-vol1
mountPath: /usr/share/busybox
readOnly: false
volumes:
- name: ceph-vol1
persistentVolumeClaim:
claimName: ceph-claim
[root@docker09 manifest]# kubectl get pod ceph-pod2 -o wide
NAME READY STATUS RESTARTS AGE IP NODE
ceph-pod2 0/1 ContainerCreating 0 14m <none> docker10
The pod is still in ContainerCreating status.
[root@docker09 manifest]# kubectl describe pod ceph-pod2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned ceph-pod2 to docker10
Normal SuccessfulMountVolume 15m kubelet, docker10 MountVolume.SetUp succeeded for volume "default-token-85rc7"
Warning FailedMount 1m (x6 over 12m) kubelet, docker10 Unable to mount volumes for pod "ceph-pod2_default(56af9345-7073-11e8-aeb6-1c98ec29cbec)": timeout expired waiting for volumes to attach/mount for pod "default"/"ceph-pod2". list of unattached/unmounted volumes=[ceph-vol1]
I don't know why this is happening and need your help... Best regards.
There's no need to reinvent the wheel here. There's already a project called Rook, which deploys Ceph on Kubernetes and is super easy to run.
https://rook.io/
rbd -v (included in ceph-common) should return the same version as your cluster. You should also check the kubelet logs for the actual mount error.
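A quick sketch of that check (the version strings below are hard-coded placeholders; on a real node you would fill them in from `rbd -v` and `ceph version` as shown in the comments):

```shell
# Placeholder values; on a real node populate them with:
#   client_ver=$(rbd -v | awk '{print $3}')        # client, from ceph-common
#   cluster_ver=$(ceph version | awk '{print $3}')  # cluster daemons
client_ver="12.2.5"
cluster_ver="12.2.5"
if [ "$client_ver" = "$cluster_ver" ]; then
  echo "versions match: $client_ver"
else
  echo "version mismatch: client=$client_ver cluster=$cluster_ver" >&2
fi
# The kubelet messages usually contain the real rbd map/mount error:
#   journalctl -u kubelet | grep -i rbd
```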