I am trying to set up Jenkins with the stable Jenkins Helm chart, but the Jenkins pod always remains in Init status, and `kubectl describe` on the pod shows no errors. I am not able to debug it because it is stuck in Init.
I have already created the PV and PVC and referenced the PVC in the values file.
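For reference, a minimal sketch of the claim I created and reference as existingClaim further down (the storageClassName is an assumption here; it has to match the PV I created):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc          # matches persistence.existingClaim below
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi           # matches persistence.size below
  storageClassName: manual   # assumption: must match the pre-created PV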
Below is my configuration:
master:
componentName: "jenkins-master"
image: "jenkins/jenkins"
tag: "lts"
imagePullPolicy: "IfNotPresent"
lifecycle:
numExecutors: 0
customJenkinsLabels: []
useSecurity: true
enableXmlConfig: true
securityRealm: |-
<securityRealm class="hudson.security.LegacySecurityRealm"/>
authorizationStrategy: |-
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
hostNetworking: false
adminUser: "admin"
adminPassword: "admin"
rollingUpdate: {}
resources:
requests:
cpu: "50m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "2048Mi"
usePodSecurityContext: true
servicePort: 8080
targetPort: 8080
serviceType: NodePort
serviceAnnotations: {}
deploymentLabels: {}
serviceLabels: {}
podLabels: {}
nodePort: 32323
healthProbes: true
healthProbesLivenessTimeout: 5
healthProbesReadinessTimeout: 5
healthProbeLivenessPeriodSeconds: 10
healthProbeReadinessPeriodSeconds: 10
healthProbeLivenessFailureThreshold: 5
healthProbeReadinessFailureThreshold: 3
healthProbeLivenessInitialDelay: 90
healthProbeReadinessInitialDelay: 60
slaveListenerPort: 50000
slaveHostPort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
cli: false
slaveListenerServiceType: "ClusterIP"
slaveListenerServiceAnnotations: {}
slaveKubernetesNamespace:
loadBalancerSourceRanges:
- 0.0.0.0/0
extraPorts:
installPlugins:
- kubernetes:1.18.1
- workflow-job:2.33
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.11.0
- blueocean:1.18.1
- kubernetes-cd:2.0.0
enableRawHtmlMarkupFormatter: false
scriptApproval:
initScripts:
jobs: {}
JCasC:
enabled: false
pluginVersion: "1.27"
supportPluginVersion: "1.18"
configScripts:
welcome-message: |
jenkins:
systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
customInitContainers: []
sidecars:
configAutoReload:
enabled: false
image: shadwell/k8s-sidecar:0.0.2
imagePullPolicy: IfNotPresent
resources: {}
sshTcpPort: 1044
folder: "/var/jenkins_home/casc_configs"
nodeSelector: {}
tolerations: []
podAnnotations: {}
customConfigMap: false
overwriteConfig: false
overwriteJobs: false
ingress:
enabled: false
apiVersion: "extensions/v1beta1"
labels: {}
annotations: {}
hostName:
tls:
backendconfig:
enabled: false
apiVersion: "extensions/v1beta1"
name:
labels: {}
annotations: {}
spec: {}
route:
enabled: false
labels: {}
annotations: {}
additionalConfig: {}
hostAliases: []
prometheus:
enabled: false
serviceMonitorAdditionalLabels: {}
scrapeInterval: 60s
scrapeEndpoint: /prometheus
alertingRulesAdditionalLabels: {}
alertingrules: []
agent:
enabled: true
image: "jenkins/jnlp-slave"
tag: "3.27-1"
customJenkinsLabels: []
imagePullSecretName:
componentName: "jenkins-slave"
privileged: false
resources:
requests:
cpu: "200m"
memory: "256Mi"
limits:
cpu: "200m"
memory: "256Mi"
alwaysPullImage: false
podRetention: "Never"
envVars:
volumes:
nodeSelector: {}
command:
args:
sideContainerName: "jnlp"
TTYEnabled: false
containerCap: 10
podName: "default"
idleMinutes: 0
yamlTemplate:
persistence:
enabled: true
existingClaim: jenkins-pvc
storageClass:
annotations: {}
accessMode: "ReadWriteOnce"
size: "2Gi"
volumes:
mounts:
networkPolicy:
enabled: false
apiVersion: networking.k8s.io/v1
rbac:
create: true
serviceAccount:
create: true
name:
annotations: {}
serviceAccountAgent:
create: false
name:
annotations: {}
backup:
enabled: false
componentName: "backup"
schedule: "0 2 * * *"
annotations:
iam.amazonaws.com/role: "jenkins"
image:
repository: "nuvo/kube-tasks"
tag: "0.1.2"
extraArgs: []
existingSecret: {}
env:
- name: "AWS_REGION"
value: "us-east-1"
resources:
requests:
memory: 1Gi
cpu: 1
limits:
memory: 1Gi
cpu: 1
destination: "s3://nuvo-jenkins-data/backup"
checkDeprecation: true
We recently had this issue while trying to run Jenkins using Helm. The pod couldn't initialize because of an error that occurred while Jenkins was trying to configure itself and pull plugin updates from jenkins.io. You can find these log messages using a command similar to the following:
kubectl logs solemn-quoll-jenkins-abc78900-xxx -c copy-default-config
Replace solemn-quoll-jenkins-abc78900-xxx above with whatever name Helm assigns to your Jenkins pod. The issue was in the copy-default-config init container, so the -c option lets you look at the logs of that container within the Jenkins pod. In our case it was an HTTP proxy issue: the copy-default-config container was failing because it could not connect to https://updates.jenkins.io/ to download plugin updates. You can test whether it is a plugin update issue by going into your values.yaml file and commenting out all the plugins under the installPlugins: heading.
For example:
installPlugins:
#- kubernetes:1.18.1
#- workflow-job:2.33
#- workflow-aggregator:2.6
#- credentials-binding:1.19
#- git:3.11.0
#- blueocean:1.18.1
#- kubernetes-cd:2.0.0
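If it does turn out to be a proxy problem rather than the plugin list itself, a rough sketch of one fix is to pass the proxy settings to the init container (and the Jenkins container) through the chart values. The exact keys depend on your chart version, so treat initContainerEnv/containerEnv as assumptions to verify against your values.yaml:

master:
  initContainerEnv:                            # env for copy-default-config (assumed key; check your chart version)
    - name: http_proxy
      value: "http://proxy.example.com:3128"   # hypothetical proxy address
    - name: https_proxy
      value: "http://proxy.example.com:3128"
  containerEnv:                                # same settings for the Jenkins master container
    - name: http_proxy
      value: "http://proxy.example.com:3128"
    - name: https_proxy
      value: "http://proxy.example.com:3128"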
I'm building a Jenkins cluster in Amazon EKS and am trying to register Jenkins with the AWS Load Balancer Controller. I could use a bit of advice from some more experienced folks.
Here are my values for the Jenkins Helm 3 install (I'm still a bit new at Helm):
clusterZone: "cluster.local"
renderHelmLabels: true
controller:
componentName: "jenkins-controller"
image: "jenkins/jenkins"
tag: "2.263.3"
imagePullPolicy: "Always"
adminUser: "admin"
adminPassword: "admin"
jenkinsHome: "/var/jenkins_home"
jenkinsWar: "/usr/share/jenkins/jenkins.war"
resources:
requests:
cpu: "50m"
memory: "256Mi"
limits:
cpu: "2000m"
memory: "4096Mi"
usePodSecurityContext: true
runAsUser: 1000
fsGroup: 1000
servicePort: 8080
targetPort: 8080
serviceType: NodePort
serviceAnnotations:
alb.ingress.kubernetes.io/healthcheck-path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
alb.ingress.kubernetes.io/group.name: "jenkins-ingress"
healthProbes: true
probes:
startupProbe:
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 12
livenessProbe:
failureThreshold: 5
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
httpGet:
path: '{{ default "" .Values.controller.jenkinsUriPrefix }}/login'
port: http
periodSeconds: 10
timeoutSeconds: 5
agentListenerPort: 50000
agentListenerHostPort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
agentListenerServiceType: "ClusterIP"
installPlugins:
- kubernetes:1.29.0
- workflow-aggregator:2.6
- git:4.5.2
- configuration-as-code:1.47
JCasC:
defaultConfig: true
securityRealm: |-
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${chart-admin-username}"
name: "Jenkins Admin"
password: "${chart-admin-password}"
authorizationStrategy: |-
loggedInUsersCanDoAnything:
allowAnonymousRead: false
sidecars:
configAutoReload:
enabled: true
image: kiwigrid/k8s-sidecar:0.1.275
imagePullPolicy: IfNotPresent
reqRetryConnect: 10
sshTcpPort: 1044
folder: "/var/jenkins_home/casc_configs"
ingress:
enabled: true
paths:
- backend:
serviceName: >-
{{ template "jenkins.fullname" . }}
servicePort: 8080
# path: "/jenkins"
apiVersion: "extensions/v1beta1"
annotations:
alb.ingress.kubernetes.io/group.name: "jenkins-ingress"
kubernetes.io/ingress.class: "alb"
persistence:
enabled: true
existingClaim: jenkins-0-claim
rbac:
create: true
readSecrets: false
serviceAccount:
create: true
name: "jenkins"
Here are the contents of my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/group.name: jenkins-ingress
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}, {"HTTP":
8080}, {"HTTPS": 8443}]'
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
kubernetes.io/ingress.class: alb
name: app-ingress
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: app1-nginx-nodeport-service
servicePort: 80
path: /app1/*
- backend:
serviceName: app2-nginx-nodeport-service
servicePort: 80
path: /app2/*
- backend:
serviceName: app3-nginx-nodeport-service
servicePort: 80
path: /app3/*
- backend:
serviceName: jenkins
servicePort: 8080
path: /jenkins/*
Here is the error. I suspect it is due to the namespace, since Jenkins is in its own namespace:
❯ kubectl describe ingress app-ingress
Name: app-ingress
Namespace: default
Address: internal-k8s-jenkinsingress-9f4e69d9f1-2066345703.us-west-2.elb.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/app1/* app1-nginx-nodeport-service:80 (10.216.66.254:80)
/app2/* app2-nginx-nodeport-service:80 (10.216.66.248:80)
/app3/* app3-nginx-nodeport-service:80 (10.216.66.174:80)
/jenkins/* jenkins:8080 (<error: endpoints "jenkins" not found>)
Annotations: alb.ingress.kubernetes.io/group.name: jenkins-ingress
alb.ingress.kubernetes.io/listen-ports: [{"HTTP": 80}, {"HTTPS": 443}, {"HTTP": 8080}, {"HTTPS": 8443}]
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
kubernetes.io/ingress.class: alb
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedDeployModel 35m (x16 over 37m) ingress Failed deploy model due to InvalidParameter: 1 validation error(s) found.
- minimum field value of 1, CreateTargetGroupInput.Port.
Warning FailedBuildModel 7m2s (x15 over 34m) ingress Failed build model due to ingress: default/app-ingress: Service "jenkins" not found
I was able to resolve my issue. It turns out I was defining the Jenkins path in too many places. I removed it from the primary ingress definition and altered my Jenkins Helm values.
I also set the service type to NodePort instead of ClusterIP.
Removed this from app-ingress.yaml:
- backend:
serviceName: jenkins
servicePort: 8080
path: /jenkins/*
I removed the path value from the Jenkins Helm ingress definition and set jenkinsUriPrefix to "/jenkins".
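Roughly, the relevant pieces of the Jenkins values ended up like this (a sketch of just the changes described above, not the full file):

controller:
  jenkinsUriPrefix: "/jenkins"      # Jenkins now serves under /jenkins
  serviceType: NodePort             # changed from ClusterIP, per the fix above
  ingress:
    enabled: true
    # no explicit path here; the ALB ingress group (jenkins-ingress) routes /jenkins to this ingress
    annotations:
      alb.ingress.kubernetes.io/group.name: "jenkins-ingress"
      kubernetes.io/ingress.class: "alb"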
I'm running Jenkins on an EKS cluster with the Kubernetes plugin, and I'd like to write a declarative pipeline in which I specify the pod template in each stage. A basic example would be the following, in which a file is created in the first stage and printed in the second:
pipeline{
agent none
stages {
stage('First sample') {
agent {
kubernetes {
label 'mvn-pod'
yaml """
spec:
containers:
- name: maven
image: maven:3.3.9-jdk-8-alpine
"""
}
}
steps {
container('maven'){
sh "echo 'hello' > test.txt"
}
}
}
stage('Second sample') {
agent {
kubernetes {
label 'bysbox-pod'
yaml """
spec:
containers:
- name: busybox
image: busybox
"""
}
}
steps {
container('busybox'){
sh "cat test.txt"
}
}
}
}
}
This clearly doesn't work, since the two pods don't share any storage. Reading this doc I realized I can use workspaceVolume dynamicPVC() in the agent declaration of the pod so that the plugin creates and manages a persistentVolumeClaim in which I can hopefully write the data I need to share between stages.
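Concretely, the agent block with this option looked roughly like the sketch below (parameter names follow the Kubernetes plugin's dynamicPVC options; the size is a placeholder):

agent {
    kubernetes {
        label 'mvn-pod'
        // let the plugin manage the workspace volume as a dynamically provisioned PVC
        workspaceVolume dynamicPVC(accessModes: 'ReadWriteOnce', requestsSize: '1Gi')
        yaml """
spec:
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
    }
}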
Now, with workspaceVolume dynamicPVC(...), both the PV and the PVC are successfully created, but the pod errors out and terminates. In particular, the provisioned pod is the following:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
runUrl: job/test-libraries/job/sample-k8s/12/
creationTimestamp: "2020-08-07T08:57:09Z"
deletionGracePeriodSeconds: 30
deletionTimestamp: "2020-08-07T08:58:09Z"
labels:
jenkins: slave
jenkins/label: bibibu
name: bibibu-ggb5h-bg68p
namespace: jenkins-slaves
resourceVersion: "29184450"
selfLink: /api/v1/namespaces/jenkins-slaves/pods/bibibu-ggb5h-bg68p
uid: 1c1e78a5-fcc7-4c86-84b1-8dee43cf3f98
spec:
containers:
- image: maven:3.3.9-jdk-8-alpine
imagePullPolicy: IfNotPresent
name: maven
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
- env:
- name: JENKINS_SECRET
value: ...
- name: JENKINS_AGENT_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_AGENT_WORKDIR
value: /home/jenkins/agent
- name: JENKINS_URL
value: ...
image: jenkins/inbound-agent:4.3-4
imagePullPolicy: IfNotPresent
name: jnlp
resources:
requests:
cpu: 100m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ...
nodeSelector:
kubernetes.io/os: linux
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: workspace-volume
persistentVolumeClaim:
claimName: pvc-bibibu-ggb5h-bg68p
- name: default-token-5bt8c
secret:
defaultMode: 420
secretName: default-token-5bt8c
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
image: jenkins/inbound-agent:4.3-4
imageID: docker-pullable://jenkins/inbound-agent#sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
lastState: {}
name: jnlp
ready: false
restartCount: 0
state:
terminated:
containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
exitCode: 1
finishedAt: "2020-08-07T08:57:35Z"
reason: Error
startedAt: "2020-08-07T08:57:35Z"
- containerID: docker://96f747a132ee98f7bf2488bd3cde247380aea5dd6f84bdcd7e6551dbf7c08943
image: maven:3.3.9-jdk-8-alpine
imageID: docker-pullable://maven#sha256:3ab854089af4b40cf3f1a12c96a6c84afe07063677073451c2190cdcec30391b
lastState: {}
name: maven
ready: true
restartCount: 0
state:
running:
startedAt: "2020-08-07T08:57:35Z"
hostIP: 10.108.171.224
phase: Running
podIP: 10.108.171.158
qosClass: Burstable
startTime: "2020-08-07T08:57:16Z"
Retrieving the logs from the jnlp container on the pod with kubectl logs name-of-the-pod -c jnlp -n jenkins-slaves led me to this error:
Exception in thread "main" java.io.IOException: The specified working directory should be fully accessible to the remoting executable (RWX): /home/jenkins/agent
at org.jenkinsci.remoting.engine.WorkDirManager.verifyDirectory(WorkDirManager.java:249)
at org.jenkinsci.remoting.engine.WorkDirManager.initializeWorkDir(WorkDirManager.java:201)
at hudson.remoting.Engine.startEngine(Engine.java:288)
at hudson.remoting.Engine.startEngine(Engine.java:264)
at hudson.remoting.jnlp.Main.main(Main.java:284)
at hudson.remoting.jnlp.Main._main(Main.java:279)
at hudson.remoting.jnlp.Main.main(Main.java:231)
I also tried specifying accessModes as a parameter of dynamicPVC, but the error is the same.
What am I doing wrong?
Thanks
The Docker image being used is configured to run as the non-root user jenkins. By default, PVCs are created allowing only root-user access.
This can be configured using the pod security context, e.g.:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
(The jenkins user in that image has UID 1000.)
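In the pod YAML used by the kubernetes agent block, that would sit under spec, e.g. (a sketch):

spec:
  securityContext:
    runAsUser: 1000    # the jenkins user
    runAsGroup: 1000
    fsGroup: 1000      # volumes, including the dynamically provisioned PVC, become writable for this group
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine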
Jenkins is running in an AWS EKS cluster under a jenkins-ci namespace. When the multibranch pipeline job "Branch-A" starts a build, it picks up the correct configuration (KubernetesPod.yaml) and runs successfully, but when job "Branch-B" starts a build, it uses Branch-A's configuration, such as the Docker image and buildUrl.
Gitlab Configuration:
Branch-A -- KubernetesPod.yaml
apiVersion: v1
kind: Pod
spec:
serviceAccount: jenkins
nodeSelector:
env: jenkins-build
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: env
operator: In
values:
- jenkins-build
tolerations:
- key: "highcpu"
operator: "Equal"
value: "true"
effect: "NoSchedule"
volumes:
- name: dev
hostPath:
path: /dev
imagePullSecrets:
- name: gitlab
containers:
- name: build
image: registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1
imagePullPolicy: IfNotPresent
command:
- cat
securityContext:
privileged: true
volumeMounts:
- mountPath: /dev
name: dev
tty: true
resources:
requests:
memory: "4000Mi"
cpu: "3500m"
limits:
memory: "4000Mi"
cpu: "3500m"
Branch-B -- KubernetesPod.yaml
apiVersion: v1
kind: Pod
spec:
serviceAccount: jenkins
nodeSelector:
env: jenkins-build
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: env
operator: In
values:
- jenkins-build
tolerations:
- key: "highcpu"
operator: "Equal"
value: "true"
effect: "NoSchedule"
volumes:
- name: dev
hostPath:
path: /dev
imagePullSecrets:
- name: gitlab
containers:
- name: build
image: registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-2
imagePullPolicy: IfNotPresent
command:
- cat
securityContext:
privileged: true
volumeMounts:
- mountPath: /dev
name: dev
tty: true
resources:
requests:
memory: "4000Mi"
cpu: "3500m"
limits:
memory: "4000Mi"
cpu: "3500m"
Jenkins Branch-A console output:
Seen branch in repository origin/unknownMishariBranch
Seen branch in repository origin/vikg/base
Seen 471 remote branches
Obtained Jenkinsfile.kubernetes from 85b8ab296342b98be52cbef26acf20b15503c273
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained KubernetesPod.yaml from 85b8ab296342b98be52cbef26acf20b15503c273
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Agent company-pod-8whw9-wxflb is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "https://jenkins.mycompany.com/job/multibranch/job/branch-A/3/"
labels:
jenkins: "slave"
jenkins/mycompany-pod: "true"
name: "mycompany-pod-8whw9-wxflb"
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: "env"
operator: "In"
values:
- "jenkins-build"
weight: 1
containers:
- command:
- "cat"
image: "registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1"
imagePullPolicy: "IfNotPresent"
name: "build"
resources:
limits:
memory: "4000Mi"
cpu: "3500m"
requests:
memory: "4000Mi"
cpu: "3500m"
Jenkins Branch-B console output:
Seen branch in repository origin/unknownMishariBranch
Seen branch in repository origin/viking/base
Seen 479 remote branches
Obtained Jenkinsfile.kubernetes from 38ace636171311ef35dc14245bf7a36f49f24e11
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained KubernetesPod.yaml from 38ace636171311ef35dc14245bf7a36f49f24e11
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
Waiting for next available executor
Agent mycompany-pod-qddx4-08xtm is provisioned from template Kubernetes Pod Template
---
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "https://jenkins.mycompany.com/job/multibranch/job/branch-A/3/"
labels:
jenkins: "slave"
jenkins/mycompany-pod: "true"
name: "mycompany-pod-qddx4-08xtm"
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: "env"
operator: "In"
values:
- "jenkins-build"
weight: 1
containers:
- command:
- "cat"
image: "registry.gitlab.com/mycompany/sw-group/docker/ycp:docker-buildtest-1"
imagePullPolicy: "IfNotPresent"
name: "build"
resources:
limits:
memory: "4000Mi"
cpu: "3500m"
requests:
memory: "4000Mi"
cpu: "3500m"
Whenever a build gets triggered, it uses the same label name in the Jenkinsfile.
I am posting the relevant part of my Jenkinsfile below.
The change below solved my problem: making the pod template label unique per build, so the Kubernetes plugin no longer reuses a pod template already registered under the shared label (which appears to be why Branch-B picked up Branch-A's image and buildUrl).
Before:
pipeline {
agent {
kubernetes {
label "sn-optimus"
defaultContainer "jnlp"
yamlFile "KubernetesPod.yaml"
}
}
After:
pipeline {
agent {
kubernetes {
label "sn-optimus-${currentBuild.startTimeInMillis}"
defaultContainer "jnlp"
yamlFile "KubernetesPod.yaml"
}
}
I am able to mount the jenkins-home volume as a PersistentVolumeClaim.
I am unable to mount the tmp volume as a persistent volume from values.yaml; it keeps appearing as an EmptyDir connected directly to the host.
I have tried both of the volume options defined here:
https://github.com/helm/charts/blob/77c2f8c632b939af76b4487e0d8032c542568445/stable/jenkins/values.yaml#L478
It still appears as an EmptyDir connected to the host.
https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml
My values.yaml is below:
clusterZone: "cluster.local"
nameOverride: ""
fullnameOverride: ""
namespaceOverride: test-project
master:
componentName: "jenkins-master"
image: "jenkins/jenkins"
tag: "lts"
imagePullPolicy: "Always"
imagePullSecretName:
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Script from the postStart handler to install jq and aws > /usr/share/message && apt-get upgrade -y && apt-get update -y && apt-get install vim -y && apt-get install jq -y && apt-get install awscli -y && apt-get install -y -qq groff && apt-get install -y -qq less"]
numExecutors: 10
customJenkinsLabels: []
useSecurity: true
enableXmlConfig: true
securityRealm: |-
<securityRealm class="hudson.security.LegacySecurityRealm"/>
authorizationStrategy: |-
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
hostNetworking: false
# login user for Jenkins
adminUser: "ctjenkinsadmin"
rollingUpdate: {}
resources:
requests:
cpu: "50m"
memory: "512Mi"
limits:
cpu: "2000m"
memory: "4096Mi"
usePodSecurityContext: true
servicePort: 8080
targetPort: 8080
# Type NodePort for minikube
serviceAnnotations: {}
deploymentLabels: {}
serviceLabels: {}
podLabels: {}
# NodePort for Jenkins Service
healthProbes: true
healthProbesLivenessTimeout: 5
healthProbesReadinessTimeout: 5
healthProbeLivenessPeriodSeconds: 10
healthProbeReadinessPeriodSeconds: 10
healthProbeLivenessFailureThreshold: 5
healthProbeReadinessFailureThreshold: 3
healthProbeLivenessInitialDelay: 90
healthProbeReadinessInitialDelay: 60
slaveListenerPort: 50000
slaveHostPort:
disabledAgentProtocols:
- JNLP-connect
- JNLP2-connect
csrf:
defaultCrumbIssuer:
enabled: true
proxyCompatability: true
cli: false
slaveListenerServiceType: "ClusterIP"
slaveListenerServiceAnnotations: {}
slaveKubernetesNamespace:
loadBalancerSourceRanges:
- 0.0.0.0/0
extraPorts: []
installPlugins:
- configuration-as-code:latest
- kubernetes:latest
- workflow-aggregator:latest
- workflow-job:latest
- credentials-binding:latest
- git:latest
- git-client:latest
- git-server:latest
- greenballs:latest
- blueocean:latest
- strict-crumb-issuer:latest
- http_request:latest
- matrix-project:latest
- jquery:latest
- artifactory:latest
- jdk-tool:latest
- matrix-auth:latest
enableRawHtmlMarkupFormatter: false
scriptApproval: []
initScripts:
- |
#!groovy
import hudson.model.*;
import jenkins.model.*;
import jenkins.security.*;
import jenkins.security.apitoken.*;
// script parameters
def userName = 'user'
def tokenName = 'token'
def uploadscript =['/bin/sh', '/var/lib/jenkins/update_token.sh']
def user = User.get(userName, false)
def apiTokenProperty = user.getProperty(ApiTokenProperty.class)
def result = apiTokenProperty.tokenStore.generateNewToken(tokenName)
def file = new File("/tmp/token.txt")
file.delete()
file.write result.plainValue
uploadscript.execute()
uploadscript.waitForOrKill(100)
user.save()
return result.plainValue
value = result.plainValue
jobs:
Test-Job: |-
<?xml version='1.0' encoding='UTF-8'?>
<project>
<keepDependencies>false</keepDependencies>
<properties/>
<scm class="hudson.scm.NullSCM"/>
<canRoam>false</canRoam>
<disabled>false</disabled>
<blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
<blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
<triggers/>
<concurrentBuild>false</concurrentBuild>
<builders/>
<publishers/>
<buildWrappers/>
</project>
JCasC:
enabled: true
configScripts:
welcome-message: |
jenkins:
systemMessage: Welcome to Jenkins Server.
customInitContainers: []
sidecars:
configAutoReload:
enabled: false
image: kiwigrid/k8s-sidecar:0.1.20
imagePullPolicy: IfNotPresent
resources: {}
sshTcpPort: 1044
folder: "/var/jenkins_home/casc_configs"
other: []
nodeSelector: {}
tolerations: []
#- key: "node.kubernetes.io/disk-pressure"
# operator: "Equal"
# effect: "NoSchedule"
#- key: "node.kubernetes.io/memory-pressure"
# operator: "Equal"
# effect: "NoSchedule"
#- key: "node.kubernetes.io/pid-pressure"
# operator: "Equal"
# effect: "NoSchedule"
#- key: "node.kubernetes.io/not-ready"
# operator: "Equal"
# effect: "NoSchedule"
#- key: "node.kubernetes.io/unreachable"
# operator: "Equal"
# effect: "NoSchedule"
#- key: "node.kubernetes.io/unschedulable"
# operator: "Equal"
# effect: "NoSchedule"
podAnnotations: {}
customConfigMap: false
overwriteConfig: false
overwriteJobs: false
jenkinsUrlProtocol: "https"
# If you set this prefix and use ingress controller then you might want to set the ingress path below
#jenkinsUriPrefix: "/jenkins"
ingress:
enabled: true
apiVersion: "extensions/v1beta1"
labels: {}
annotations: {}
kubernetes.io/secure-backends: "true"
kubernetes.io/ingress.class: nginx
name: ""
#service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:454211873573:certificate/a3146344-5888-48d5-900c-80a9d1532781 #replace this value
#service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
#kubernetes.io/ingress.class: nginx
#kubernetes.io/tls-acme: "true"
#path: "/jenkins"
kubernetes.io/ssl-redirect: "true"
#nginx.ingress.kubernetes.io/ssl-redirect: "true"
hostName: ""
tls:
#- secretName: jenkins.cluster.local
# hosts:
# - jenkins.cluster.local
backendconfig:
enabled: false
apiVersion: "extensions/v1beta1"
name:
labels: {}
annotations: {}
spec: {}
route:
enabled: false
labels: {}
annotations: {}
additionalConfig: {}
hostAliases: []
prometheus:
enabled: false
serviceMonitorAdditionalLabels: {}
scrapeInterval: 60s
scrapeEndpoint: /prometheus
alertingRulesAdditionalLabels: {}
alertingrules: []
testEnabled: true
agent:
enabled: true
image: "jenkins/jnlp-slave"
tag: "latest"
customJenkinsLabels: []
imagePullSecretName:
componentName: "jenkins-slave"
privileged: false
resources:
requests:
cpu: "1"
memory: "1Gi"
limits:
cpu: "1"
memory: "4Gi"
alwaysPullImage: false
podRetention: "Never"
envVars: []
# mount docker in agent pod
volumes:
- type: HostPath
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
nodeSelector: {}
command:
args:
- echo installing jq;
apt-get update;
apt-get install jq -y;
apt-get install -y git;
apt-get install -y java-1.8.0-openjdk;
apt-get install awscli;
sideContainerName: "jnlp"
TTYEnabled: true
containerCap: 10
podName: "default"
idleMinutes: 0
yamlTemplate: ""
persistence:
enabled: true
existingClaim: test-project-pvc
storageClass: test-project-pv
annotations: {}
accessMode: "ReadWriteOnce"
size: "20Gi"
volumes:
mounts:
networkPolicy:
enabled: false
apiVersion: networking.k8s.io/v1
rbac:
create: true
readSecrets: false
serviceAccount:
create: true
name:
annotations: {}
Create a PersistentVolumeClaim with the following YAML in the Jenkins namespace (update the namespace field accordingly):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-tmp-pvc
namespace: test-project
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "10Gi"
storageClassName: gp2
Then add the persistent volume, mount, and javaOpts as follows in the Jenkins values YAML file:
master:
...
javaOpts: "-Djava.io.tmpdir=/var/jenkins_tmp"
persistence:
...
volumes:
- name: jenkins-tmp
persistentVolumeClaim:
claimName: jenkins-tmp-pvc
mounts:
- mountPath: /var/jenkins_tmp
name: jenkins-tmp
This first creates the persistent volume claim "jenkins-tmp-pvc" and its underlying persistent volume, and Jenkins then uses the claim's mount path "/var/jenkins_tmp" as its tmp directory. Also, make sure your "gp2" StorageClass is created with the "allowVolumeExpansion: true" attribute so that "jenkins-tmp-pvc" can be expanded whenever you need more tmp disk space.
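For reference, a gp2 StorageClass with expansion enabled would look roughly like this (a sketch; on EKS a gp2 class usually already exists and can be edited instead of recreated):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs    # in-tree EBS provisioner
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true            # lets jenkins-tmp-pvc be resized later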
I have two Jenkins instances: one uses version 1.8 of the Kubernetes plugin and the second uses 1.18.
The older version is able to create both containers.
Agent specification [Kubernetes Pod Template] (mo-aio-build-supplier):
* [jnlp] mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca(resourceRequestCpu: 0.25, resourceRequestMemory: 256Mi, resourceLimitCpu: 1, resourceLimitMemory: 1.5Gi)
* [postgres] mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift
The newer version is not able to create the postgres container:
Container postgres exited with error 1. Logs: mkdir: cannot create directory '/home/jenkins': Permission denied
Both use the same podTemplate:
podTemplate(
name: label,
label: label,
cloud: 'openshift',
serviceAccount: 'jenkins',
containers: [
containerTemplate(
name: 'jnlp',
image: 'mynexus.services.theosmo.com/jenkins-slave-mo-aio:v3.11.104-14_jdk8',
resourceRequestCpu: env.CPU_REQUEST,
resourceLimitCpu: env.CPU_LIMIT,
resourceRequestMemory: env.RAM_REQUEST,
resourceLimitMemory: env.RAM_LIMIT,
workingDir: '/tmp',
args: '${computer.jnlpmac} ${computer.name}',
command: ''
),
containerTemplate(
name: 'postgres',
image: 'mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift',
envVars: [
envVar(key: "POSTGRESQL_USER", value: "admin"),
envVar(key: "POSTGRESQL_PASSWORD", value: "admin"),
envVar(key: "POSTGRESQL_DATABASE", value: "supplier_data"),
]
)
],
volumes: [emptyDirVolume(mountPath: '/dev/shm', memory: true)]
)
Also, I've noticed that the YAML created by the newer version is a bit odd:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "http://jenkins.svc:80/job/build-supplier/473/"
labels:
jenkins: "slave"
jenkins/mo-aio-build-supplier: "true"
name: "mo-aio-build-supplier-xfgmn-qmrdl"
spec:
containers:
- args:
- "********"
- "mo-aio-build-supplier-xfgmn-qmrdl"
env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "jenkins-jnlp.svc:50000"
- name: "JENKINS_AGENT_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_AGENT_WORKDIR"
value: "/tmp"
- name: "JENKINS_URL"
value: "http://jenkins.svc:80/"
- name: "HOME"
value: "/home/jenkins"
image: "mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca"
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
memory: "1.5Gi"
cpu: "1"
requests:
memory: "256Mi"
cpu: "0.25"
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/tmp"
name: "workspace-volume"
readOnly: false
workingDir: "/tmp"
- env:
- name: "POSTGRESQL_DATABASE"
value: "supplier_data"
- name: "POSTGRESQL_USER"
value: "admin"
- name: "HOME"
value: "/home/jenkins"
- name: "POSTGRESQL_PASSWORD"
value: "admin"
image: "mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift"
imagePullPolicy: "IfNotPresent"
name: "postgres"
resources:
limits: {}
requests: {}
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins/agent"
nodeSelector: {}
restartPolicy: "Never"
serviceAccount: "jenkins"
volumes:
- emptyDir:
medium: "Memory"
name: "volume-0"
- emptyDir: {}
name: "workspace-volume"
As you can see above, the postgres container's entry starts under an env tree.
Any suggestions? Thanks in advance.
As far as I checked, this is a known behavior change in the Kubernetes plugin.
The problem
Since Kubernetes plugin version 1.18.0, the default working directory of the pod containers was changed from /home/jenkins to /home/jenkins/agent, but the default HOME environment variable still points to /home/jenkins. The impact of this change is that if pod container images do not have a /home/jenkins directory with sufficient permissions for the running user, builds will fail to do anything directly under their HOME directory, /home/jenkins.
Resolution
There are different workarounds to this problem:
Change the default HOME variable
The simplest and preferred workaround is to add the system property -Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent on Jenkins startup. This requires a restart.
This workaround reflects the behavior of the Kubernetes plugin pre-1.18.0, but on the new working directory /home/jenkins/agent.
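If Jenkins itself is deployed with the Helm chart, one way to pass that system property is through the master Java options, e.g. (a sketch; verify the javaOpts key against your chart version):

master:
  javaOpts: "-Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent"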
Use /home/jenkins as the working directory
A workaround is to change the working directory of pod containers back to /home/jenkins. This workaround is only possible when using YAML to define agent pod templates (see JENKINS-60977).
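With a YAML pod template, that means setting workingDir on the containers, e.g. (a sketch using the images from the question):

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca
    workingDir: /home/jenkins    # back to the pre-1.18.0 working directory
  - name: postgres
    image: mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift
    workingDir: /home/jenkins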
Prepare images for Jenkins
A workaround could be to ensure that the images used in agent pods have a /home/jenkins directory that is owned by the root group and writable by the root group, as mentioned in the OpenShift Container Platform-specific guidelines.
Additionally, there is an issue about this on the Jenkins issue tracker.
Hope this helps.