I am trying to set up a declarative pipeline where I would like to persist the workspace as a volume claim so that a large git checkout can be faster. Based on the docs there are the options workspaceVolume and persistentVolumeClaimWorkspaceVolume, but I am not able to make it work - Jenkins always generates the following:
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
volumes:
- emptyDir: {}
name: "workspace-volume"
Try something like this:
podTemplate(
containers: [
containerTemplate(name: 'tree', image: 'iankoulski/tree', ttyEnabled: true, command: 'cat')
],
workspaceVolume: persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false)
) {
node(POD_LABEL) {
stage('read workspace') {
checkout scm
container('tree') {
sh 'env'
sh 'tree'
sh 'test -f old-env.txt && cat old-env.txt'
sh 'env > old-env.txt'
}
}
}
}
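Note that persistentVolumeClaimWorkspaceVolume expects the claim (here named 'workspace') to already exist in the namespace where the agent pods are started. A minimal claim manifest could look like the following sketch; the access mode, storage class and size are assumptions you will want to adapt:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi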
Here is an example for a declarative pipeline:
pipeline {
agent {
kubernetes {
yamlFile 'jenkins/pv-pod.yaml'
workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'workspace', readOnly: false)
}
}
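A stages section and the final closing brace are still needed to make this a complete pipeline; a minimal placeholder could be:
    stages {
        stage('build') {
            steps {
                sh 'echo workspace is backed by the PVC'
            }
        }
    }
}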
If you post your Jenkins deployment then I might be able to help with that.
Meanwhile, here is the YAML that I used and that worked very well for me:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins:2.32.2
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
emptyDir: {}
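Note that the extensions/v1beta1 Deployment API has since been removed from Kubernetes. On current clusters the equivalent manifest uses apps/v1 with an explicit selector, and jenkins-home is typically backed by a PersistentVolumeClaim instead of an emptyDir so the controller's data survives restarts; the image tag and claim name below are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-home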
Related
I want to pull a Docker image from my private Docker registry. I cannot find a good way to describe the authentication in the Jenkinsfile. What do I need to add to the Jenkinsfile to get my image "my-private-registry.image-name:tag"?
pipeline {
agent {
kubernetes {
label "${jenkins_slave_id}"
defaultContainer 'jnlp'
serviceAccount 'jenkins'
yaml """
apiVersion: v1
kind: Pod
spec:
restartPolicy: Never
containers:
- name: "my-private-container"
image: "my-private-registry.image-name:tag"
tty: true
command:
- cat
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
- name: "jnlp"
stages {
stage("Do something") {
steps {
script {
container('my-private-container') {
script {
//Do something
}
}
}
}
}
Yes, I have added my imagePullSecret like this:
- name: "my-private-container"
image: "my-private-registry.image-name:tag"
imagePullSecrets: ['Docker-Registry']
tty: true
command:
- cat
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
I configured my imagePullSecrets in the secrets as "Username and Password".
When I try to run it I get the following message:
Failed to pull image "my-private-registry.image-name:tag": unknown: Authentication is required
All you need is to create a Kubernetes secret with your private registry credentials and then add the secret name to your pod template, as below:
kubectl create secret docker-registry regcred \
  --docker-server=YOUR-PRIVATE-REGISTRY --docker-username=USER --docker-password=PASSWORD
pipeline {
agent {
kubernetes {
label "${jenkins_slave_id}"
defaultContainer 'jnlp'
serviceAccount 'jenkins'
yaml """
apiVersion: v1
kind: Pod
spec:
imagePullSecrets:
- name: regcred
restartPolicy: Never
containers:
- name: "my-private-container"
image: "my-private-registry.image-name:tag"
tty: true
command:
- cat
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
- name: "jnlp"
I am looking to create a PV/PVC for each Jenkins job running on a slave agent at runtime.
Basically, what I am trying to achieve is to create a PV, share it between pods, and later delete it when the job is done.
pipeline {
agent {
kubernetes {
label 'scm'
yaml """
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim1
spec:
accessModes:
- ReadWriteMany
storageClassName: fast
resources:
requests:
storage: 1Gi
---
apiVersion: "v1"
kind: "Pod"
spec:
containers:
image: "jenkins/jnlp-slave:3.35-5-alpine"
name: "jnlp"
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
- command:
- "cat"
tty: true
volumeMounts:
- mountPath: "/home/jenkins/wsp1"
name: "workspace-volume"
readOnly: false
volumes:
- name: "workspace-volume"
persistentVolumeClaim:
claimName: claim1
"""
}
}
stages {
stage('Checkout code') {
agent { label 'scm'}
steps {
git branch: 'master',
credentialsId: 'key',
url: 'giturl'
sh "ls -lat"
}
}
stage('Build ') {
agent {
kubernetes {
label 'Build-pod'
yaml """
spec:
containers:
- name: maven
image: maven:3.3.9-jdk-8-alpine
command:
- cat
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
"""
}
}
steps {
sh "echo Workspace dir is ${pwd()}"
sh "mvn clean install
}
}
}
}
The above script does not work, for obvious reasons. Do we have any other solutions?
Also, how can I use a runtime-generated name for the PVC and reference it in the pod?
metadata:
name: claim1
I am running Jenkins via Helm on Kubernetes.
You can create a PVC per pod using something like workspaceVolume: dynamicPVC(requestsSize: "10Gi"), but it will be tied to the pod and deleted when the pod is deleted:
https://github.com/jenkinsci/kubernetes-plugin/blob/342166c1864e84791f2e94dd823709eb6e672a6e/src/test/resources/org/csanchez/jenkins/plugins/kubernetes/pipeline/dynamicPVC.groovy
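In declarative syntax the same option goes next to the pod definition, analogous to the persistentVolumeClaimWorkspaceVolume example earlier in this thread; the yamlFile path and the requested size here are placeholders:
pipeline {
    agent {
        kubernetes {
            yamlFile 'jenkins/pod.yaml'
            workspaceVolume dynamicPVC(requestsSize: "10Gi")
        }
    }
    stages {
        stage('build') {
            steps {
                sh 'df -h .'
            }
        }
    }
}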
I have 2 Jenkins instances, one running Kubernetes plugin version 1.8 and the second 1.18.
The older version is able to create both containers.
Agent specification [Kubernetes Pod Template] (mo-aio-build-supplier):
* [jnlp] mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca(resourceRequestCpu: 0.25, resourceRequestMemory: 256Mi, resourceLimitCpu: 1, resourceLimitMemory: 1.5Gi)
* [postgres] mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift
The newer version is not able to create the postgres container:
Container postgres exited with error 1. Logs: mkdir: cannot create directory '/home/jenkins': Permission denied
Both use the same podTemplate:
podTemplate(
name: label,
label: label,
cloud: 'openshift',
serviceAccount: 'jenkins',
containers: [
containerTemplate(
name: 'jnlp',
image: 'mynexus.services.theosmo.com/jenkins-slave-mo-aio:v3.11.104-14_jdk8',
resourceRequestCpu: env.CPU_REQUEST,
resourceLimitCpu: env.CPU_LIMIT,
resourceRequestMemory: env.RAM_REQUEST,
resourceLimitMemory: env.RAM_LIMIT,
workingDir: '/tmp',
args: '${computer.jnlpmac} ${computer.name}',
command: ''
),
containerTemplate(
name: 'postgres',
image: 'mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift',
envVars: [
envVar(key: "POSTGRESQL_USER", value: "admin"),
envVar(key: "POSTGRESQL_PASSWORD", value: "admin"),
envVar(key: "POSTGRESQL_DATABASE", value: "supplier_data"),
]
)
],
volumes: [emptyDirVolume(mountPath: '/dev/shm', memory: true)]
)
Also, I've noticed that the YAML created by the newer version is a bit odd:
apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
buildUrl: "http://jenkins.svc:80/job/build-supplier/473/"
labels:
jenkins: "slave"
jenkins/mo-aio-build-supplier: "true"
name: "mo-aio-build-supplier-xfgmn-qmrdl"
spec:
containers:
- args:
- "********"
- "mo-aio-build-supplier-xfgmn-qmrdl"
env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_TUNNEL"
value: "jenkins-jnlp.svc:50000"
- name: "JENKINS_AGENT_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_NAME"
value: "mo-aio-build-supplier-xfgmn-qmrdl"
- name: "JENKINS_AGENT_WORKDIR"
value: "/tmp"
- name: "JENKINS_URL"
value: "http://jenkins.svc:80/"
- name: "HOME"
value: "/home/jenkins"
image: "mynexus.services.com/mo-base/jenkins-slave-mo-aio:1.8.2-ca"
imagePullPolicy: "IfNotPresent"
name: "jnlp"
resources:
limits:
memory: "1.5Gi"
cpu: "1"
requests:
memory: "256Mi"
cpu: "0.25"
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/tmp"
name: "workspace-volume"
readOnly: false
workingDir: "/tmp"
- env:
- name: "POSTGRESQL_DATABASE"
value: "supplier_data"
- name: "POSTGRESQL_USER"
value: "admin"
- name: "HOME"
value: "/home/jenkins"
- name: "POSTGRESQL_PASSWORD"
value: "admin"
image: "mynexus.services.com:443/mo-base/mo-base-postgresql-95-openshift"
imagePullPolicy: "IfNotPresent"
name: "postgres"
resources:
limits: {}
requests: {}
securityContext:
privileged: false
tty: false
volumeMounts:
- mountPath: "/dev/shm"
name: "volume-0"
readOnly: false
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins/agent"
nodeSelector: {}
restartPolicy: "Never"
serviceAccount: "jenkins"
volumes:
- emptyDir:
medium: "Memory"
name: "volume-0"
- emptyDir: {}
name: "workspace-volume"
As you can see above, the postgres container definition ends up under an env tree.
Any suggestions? Thanks in advance.
As far as I have checked, this is a known problem.
The problem
Since Kubernetes Plugin version 1.18.0, the default working directory of the pod containers was changed from /home/jenkins to /home/jenkins/agent. But the default HOME environment variable enforcement is still pointing to /home/jenkins. The impact of this change is that if pod container images do not have a /home/jenkins directory with sufficient permissions for the running user, builds will fail to do anything directly under their HOME directory, /home/jenkins.
Resolution
There are different workarounds to this problem:
Change the default HOME variable
The simplest and preferred workaround is to add the system property -Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent on Jenkins startup. This requires a restart.
This workaround reflects the behavior of the Kubernetes plugin pre-1.18.0, but with the new working directory /home/jenkins/agent.
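If the controller itself runs on Kubernetes, as in the deployments shown elsewhere in this thread, the property can be passed via JAVA_OPTS; a sketch, assuming a container named jenkins-master as in that example:
env:
- name: "JAVA_OPTS"
  value: "-Dorg.csanchez.jenkins.plugins.kubernetes.PodTemplateBuilder.defaultHome=/home/jenkins/agent"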
Use /home/jenkins as the working directory
A workaround is to change the working directory of pod containers back to /home/jenkins. This workaround is only possible when using YAML to define agent pod templates (see JENKINS-60977).
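For example, a YAML pod template can set the working directory per container; the image below is simply the postgres image from the question:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: postgres
    image: mynexus.services.theosmo.com:443/mo-base/mo-base-postgresql-95-openshift
    workingDir: /home/jenkins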
Prepare images for Jenkins
A workaround could be to ensure that the images used in agent pods have a /home/jenkins directory that is owned by the root group and writable by that group, as mentioned in the OpenShift Container Platform-specific guidelines.
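For example, the image build could pre-create the directory with group ownership and permissions along those lines (a sketch, not taken from the original images):
mkdir -p /home/jenkins && \
  chgrp -R 0 /home/jenkins && \
  chmod -R g+rwX /home/jenkins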
Additionally, there is a related issue in the Jenkins issue tracker.
Hope this helps.
I'm using jenkins to start my builds in a kubernetes cluster via the kubernetes plugin.
When trying to set my jenkins workspace-volume to medium: Memory so that it runs in RAM, I receive the following error:
spec.volumes[1].name: Duplicate value: "workspace-volume"
This is the corresponding yaml:
apiVersion: v1
kind: Pod
metadata:
name: jenkins-job-xyz
labels:
identifier: jenkins-job-xyz
spec:
restartPolicy: Never
containers:
- name: jnlp
image: 'jenkins/jnlp-slave:alpine'
volumeMounts:
- name: workspace-volume
mountPath: /home/jenkins
- name: maven
image: maven:latest
imagePullPolicy: Always
volumeMounts:
- name: workspace-volume
mountPath: /home/jenkins
volumes:
- name: workspace-volume
emptyDir:
medium: Memory
The only thing I added is the volumes: part at the end.
The volume workspace-volume is auto-generated by the kubernetes plugin and so a manual declaration will result in a duplicate entry.
For running the workspace-volume in RAM, set
workspaceVolume: emptyDirWorkspaceVolume(memory: true)
inside the podTemplate closure according to the documentation.
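If the pod is defined through a declarative kubernetes agent with inline YAML, like the one in the question, the equivalent looks roughly as follows; note that the YAML no longer declares any volumes or volumeMounts itself:
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:latest
    command:
    - cat
    tty: true
'''
            workspaceVolume emptyDirWorkspaceVolume(memory: true)
        }
    }
    stages {
        stage('build') {
            steps {
                container('maven') {
                    sh 'df -h "$WORKSPACE"'
                }
            }
        }
    }
}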
Summary
Running a declarative pipeline job in Jenkins, which was deployed to a Kubernetes cluster, fails when using the docker agent with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=node&tag=10.15.1: dial unix /var/run/docker.sock: connect: permission denied
How can I solve this permission error in the kubernetes declaration?
Background
We have a Jenkins server which was deployed to a Kubernetes cluster using the jenkinsci/blueocean image. The Kubernetes declaration was done as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-master
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins-master
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: jenkins
containers:
- name: jenkins-master
image: jenkinsci/blueocean
imagePullPolicy: Always
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
env:
- name: "JAVA_OPTS"
value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: docker-socket
hostPath:
path: /var/run/docker.sock
type: File
We then define a declarative pipeline Jenkins job as follows:
pipeline {
agent {
docker {
image 'node:10.15.1'
label 'master'
}
}
stages {
stage('Checkout source code') {
steps {
checkout scm
}
}
stage('Build project') {
steps {
sh 'npm install'
sh 'npm run compile'
}
}
stage('Run quality assurance') {
steps {
sh 'npm run style:check'
sh 'npm run test:coverage'
}
}
}
}
This job fails with the aforementioned error. My suspicion is that the docker socket was mounted into the pod, but the user running the job does not have permission to access the socket. I, however, cannot add the user to the docker group in the created pod using sudo usermod -a -G docker $USER, since the pod will be recreated upon each redeploy.
Questions
Is it possible to mount the docker volume using the correct user in the kubernetes declaration?
Can I declare the pipeline differently, if it is not possible to set up the permission in the kubernetes declaration?
Is there some other solution which I have not thought about?
Thanks.
I, however, cannot add the user to the group in the created pod using
sudo usermod -a -G docker $USER since the pod will be recreated upon
each redeploy.
Actually, you can.
Define a usermod command for your container in the deployment YAML, e.g.:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-master
spec:
replicas: 1
template:
metadata:
labels:
app: jenkins-master
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: jenkins
containers:
- name: jenkins-master
image: jenkinsci/blueocean
imagePullPolicy: Always
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
env:
- name: "JAVA_OPTS"
value: "-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=3600"
- name: "USER"
value: "Awemo"
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
- name: docker-socket
mountPath: /var/run/docker.sock
command: ["/bin/sh"]
args: ["-c", "usermod -aG docker $USER"]
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins
- name: docker-socket
hostPath:
path: /var/run/docker.sock
type: File
So, whenever a new pod is created, the user will be added to the docker group.
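One caveat: command and args together override the image's entrypoint, so as written the container would run usermod and then exit instead of starting Jenkins, and usermod itself needs the container to run as root. The script should chain into the image's normal startup afterwards; a sketch, assuming the standard Jenkins image layout with /usr/local/bin/jenkins.sh:
command: ["/bin/sh"]
args: ["-c", "usermod -aG docker $USER && exec /usr/local/bin/jenkins.sh"]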