Only one pipeline of a set of multiple pipelines should run - jenkins

I am trying to configure several pipelines in Jenkins 2. My problem is that all my pipelines use the same workspace path (configured with customWorkspace in my pipeline script).
Now I need to prevent more than one of these pipelines from running at the same time.
My search always leads me back to the same pages, which unfortunately do not help me :-(
Has anyone already solved the same problem and can give me a hint?
Thank you very much

def locked = false
pipeline {
    agent any
    stages {
        stage('check workspace lock status') {
            steps {
                script {
                    // a marker file signals that another build owns the workspace
                    locked = fileExists file: '.lock'
                    if (!locked) {
                        // claim the workspace for this build
                        touch file: '.lock'
                    }
                }
            }
        }
        stage('build') {
            when {
                beforeAgent true
                expression { !locked }
            }
            steps {
                // do something you want
                echo 'building ...'
            }
        }
    }
    post {
        always {
            script {
                // remove the marker only if this build created it; otherwise
                // we would release a lock still held by another build
                if (!locked) {
                    sh 'rm -f .lock'
                }
            }
        }
    }
}
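Not part of the original question, but worth noting: a common alternative to a hand-rolled lock file is the lock step from the Lockable Resources plugin, which queues a build until a named resource is free. A minimal sketch, assuming the plugin is installed ('shared-workspace' is an arbitrary resource name, not from the question):

pipeline {
    agent any
    options {
        // block until the named resource is free; every pipeline sharing
        // the workspace must lock the same resource name
        lock(resource: 'shared-workspace')
    }
    stages {
        stage('build') {
            steps {
                echo 'the workspace is exclusively ours for this build'
            }
        }
    }
}

With this approach, the other pipelines simply wait instead of being skipped.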

Related

Trigger Jenkins Pipeline job on merge event for gitlab MR

This should be fairly basic, but my research keeps leading me to things like Gerrit triggers, which seem far too complicated for something this simple.
I would like to do one of two things. Either this in the JobDSL script:
pipelineJob('deploy-game') {
    definition {
        environmentVariables {
            env('ENVIRONMENT', "${ENVIRONMENT}")
            keepBuildVariables(true)
        }
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://blabla.git')
                        credentials('gitlab-credentials')
                    }
                    branches('${gitlabsourcebranch}')
                }
            }
            scriptPath('path/to/this.jenkinsfile')
        }
        triggers {
            gitlabPush {
                buildOnMergeRequestEvents(true)
                if ($gitlabMergeRequestState == 'merged') // this part
            }
        }
    }
}
Or, trigger on all MR events, and then filter out in the pipeline script:
pipeline {
    agent none
    environment {
        ENVIRONMENT = "${ENVIRONMENT}"
    }
    triggers {
        $gitlabMergeRequestState == 'merged' // this one
    }
    stages {
        stage('do-stuff') {
            agent {
                label 'agent'
            }
            steps {
                sh 'some commands ...'
            }
        }
    }
}
How do I do this?
So this is how it should look; I hope this is what you are looking for.
pipelineJob('Job_Name') {
    definition {
        cpsScm {
            lightweight(true)
            triggers {
                gitlabPush {
                    buildOnMergeRequestEvents(true) // triggers a build when an MR is opened
                    buildOnPushEvents(true)
                    commentTrigger('retry a build') // writing this comment on the MR in GitLab also triggers a build
                    enableCiSkip(true)
                    rebuildOpenMergeRequest('source')
                    skipWorkInProgressMergeRequest(false)
                    targetBranchRegex('.*master.*|.*release.*') // only pushes to master or release branches trigger a build; pushes to feature branches do not trigger one until an MR is opened
                }
            }
            configure {
                it / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << secretToken('ADD_TOKEN_FROM_JENKINS_JOB')
            }
            scm {
                git {
                    remote {
                        credentials('ID')
                        url('git@URL.git')
                    }
                    // branch() belongs to the git block, not the remote block
                    branch('refs/heads/master')
                }
            }
            scriptPath('jenkinsfile')
        }
    }
}
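If you prefer the asker's second variant - triggering on all MR events and filtering inside the pipeline - declarative syntax cannot put a condition into triggers, but a when block can guard the stage instead. A rough sketch, assuming the GitLab plugin exposes the state as env.gitlabMergeRequestState (the variable name is taken from the question and not verified here):

pipeline {
    agent none
    stages {
        stage('do-stuff') {
            agent { label 'agent' }
            when {
                beforeAgent true
                // proceed only when the webhook reported a merged MR;
                // the env var name is an assumption from the question
                expression { env.gitlabMergeRequestState == 'merged' }
            }
            steps {
                sh 'some commands ...'
            }
        }
    }
}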

Conditional post section in Jenkins pipeline

Say I have a simple Jenkins pipeline file as below:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh ...
            }
        }
        stage('Build') {
            steps {
                sh ...
            }
        }
        stage('Publish') {
            when {
                buildingTag()
            }
            steps {
                sh ...
                send_slack_message("Built tag")
            }
        }
    }
    post {
        failure {
            send_slack_message("Error building tag")
        }
    }
}
Since there are a lot of non-tag builds every day, I don't want to send any Slack messages about non-tag builds. But for tag builds, I want to send either a success message or a failure message, regardless of which stage failed.
So for the above example, I want:
When it's a tag build and the 'Test' stage fails, I shall see an "Error building tag" message. (This works in the example.)
When it's a tag build and all stages succeed, I shall see a "Built tag" message. (This also works in the example.)
When it's not a tag build, no Slack message should ever be sent. (This is not the case in the example: when the 'Test' stage fails, there will still be an "Error building tag" message.)
As far as I know, there's no such thing as "conditional post section" in Jenkins pipeline syntax, which could really help me out here. So my question is, is there any other way I can do this?
post {
    failure {
        script {
            if (isTagBuild) {
                send_slack_message("Error building tag")
            }
        }
    }
}
where isTagBuild is whatever way you have to differentiate between a tag or no tag build.
You could also apply the same logic, and move send_slack_message("Built tag") down to a success post stage.
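For completeness, the success counterpart of that logic would look like this (isTagBuild and send_slack_message as above):

post {
    success {
        script {
            if (isTagBuild) {
                send_slack_message("Built tag")
            }
        }
    }
}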
In the post section you can also use a script step containing an if, and inside that if you can call the emailext plugin.
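A minimal sketch of that idea, assuming the Email Extension plugin is installed (subject, body, and recipient are placeholders):

post {
    failure {
        script {
            if (isTagBuild) {
                // emailext is provided by the Email Extension plugin;
                // all values here are placeholders
                emailext subject: "Tag build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                         body: "See ${env.BUILD_URL}",
                         to: 'team@example.com'
            }
        }
    }
}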
Well, for those who just want some copy-pastable code, here's what I ended up with, based on @eez0's answer.
pipeline {
    agent any
    environment {
        BUILDING_TAG = 'no'
    }
    stages {
        stage('Setup') {
            when {
                buildingTag()
            }
            steps {
                script {
                    BUILDING_TAG = 'yes'
                }
            }
        }
        stage('Test') {
            steps {
                sh ...
            }
        }
        stage('Build') {
            steps {
                sh ...
            }
        }
        stage('Publish') {
            when {
                buildingTag()
            }
            steps {
                sh ...
            }
        }
    }
    post {
        failure {
            script {
                if (BUILDING_TAG == 'yes') {
                    slackSend(color: '#dc3545', message: "Error publishing")
                }
            }
        }
        success {
            script {
                if (BUILDING_TAG == 'yes') {
                    slackSend(color: '#28a745', message: "Published")
                }
            }
        }
    }
}
As you can see, I'm really relying on Jenkins' built-in buildingTag() condition to help me sort things out, using an env var as a "bridge". I'm really not good at Jenkins pipeline, so please leave comments if you have any suggestions.

Dynamically select agent in Jenkinsfile

I want to be able to select whether a pipeline stage is executed with the dockerfile agent, depending on the presence of a Dockerfile in the repository. If there is no Dockerfile, the stage should run locally.
I tried something like
pipeline {
    agent none
    stages {
        stage('AwesomeStage') {
            when {
                beforeAgent true
                expression { return fileExists("Dockerfile") }
            }
            agent { dockerfile true }
            steps {
                // long list of awesome steps that should be run either on Docker
                // or locally, depending on the presence of a Dockerfile
            }
        }
    }
}
But the result is that the whole stage is skipped when there's no Dockerfile.
Is it possible to do something like the following block?
//...
//...
if (fileExists("Dockerfile")) {
    agent { dockerfile }
}
else {
    agent none
}
//...
I came up with this solution, which relies on defining a function to avoid repetition and defines two different stages according to the type of agent.
If anyone has a more elegant solution, please let me know.
def awesomeScript() {
    // long list of awesome steps that should be run either on Docker
    // or locally, depending on the presence of a Dockerfile
}
pipeline {
    agent none
    stages {
        stage('AwesomeStageDockerfile') {
            when {
                beforeAgent true
                expression { return fileExists("Dockerfile") }
            }
            agent { dockerfile true }
            steps {
                awesomeScript()
            }
        }
        stage('AwesomeStageLocal') {
            when {
                beforeAgent true
                expression { return !fileExists("Dockerfile") }
            }
            agent none
            steps {
                awesomeScript()
            }
        }
    }
}
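Not from the original thread, but for comparison: in a scripted pipeline the same decision can be made with plain Groovy and without duplicating the stage. A sketch, assuming the Docker Pipeline plugin is available:

def awesomeScript() {
    // same list of awesome steps as above
}

node {
    checkout scm
    if (fileExists('Dockerfile')) {
        // build a throwaway image from the repo's Dockerfile
        // and run the steps inside a container of it
        docker.build('awesome-build-image').inside {
            awesomeScript()
        }
    } else {
        // no Dockerfile: run the steps directly on the node
        awesomeScript()
    }
}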

Run Jenkins stage on different nodes

I have the following Jenkinsfile of a multibranch pipeline architecture
#!/usr/bin/groovy
pipeline {
    agent {
        node {
            label 'ubuntu'
            customWorkspace "/src/$BUILD_NUMBER"
        }
    }
    environment {
        SRC_DIR = "$WORKSPACE"
        BUILD_DIR = "/build/$BUILD_NUMBER"
    }
    stages {
        stage('Build') {
            steps {
                dir(BUILD_DIR) {
                    sh '$SRC_DIR/build.sh'
                }
            }
        }
        stage('Test') {
            steps {
                dir(BUILD_DIR) {
                    sh '$SRC_DIR/test.sh'
                }
            }
        }
    }
}
I am trying to run the 'Build' stage on Ubuntu and Red Hat nodes in parallel, and the 'Test' stage on the Ubuntu node only.
Can anybody help me specify which stages run on which nodes? I found a few solutions online, but they recommended writing the build stage twice: once for the Red Hat node and once for the Ubuntu node. Isn't there a way to do this without code duplication?
Thank you very much
Sure, you would want to label your slave nodes somehow. Basically, configure all the nodes in Jenkins and give them meaningful labels.
stage('Build') {
    steps {
        node('os_linux') {
            sh './build.sh'
        }
        node('os_redhat') {
            sh './build.sh'
        }
    }
}
This will run the tasks in serial, and Jenkinsfile syntax also supports executing commands in parallel on different nodes.
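For example, a declarative parallel variant might look like this (a sketch assuming node labels os_linux and os_redhat as above):

stage('Build') {
    parallel {
        stage('Build on Ubuntu') {
            agent { label 'os_linux' }
            steps {
                sh './build.sh'
            }
        }
        stage('Build on Red Hat') {
            agent { label 'os_redhat' }
            steps {
                sh './build.sh'
            }
        }
    }
}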
A bit late to the party, but still ...
You can use a script {} block to compute the label you need.
Something like this:
stage('Build') {
    steps {
        script {
            def label = 'RHEL'
            if (env.ENV == 'ubuntu') {
                label = 'Ubuntu'
            }
            node("${label}") {
                dir(BUILD_DIR) {
                    sh '$SRC_DIR/build.sh'
                }
            }
        }
    }
}

How to aggregate test results in jenkins parallel pipeline?

I have a Jenkinsfile that defines parallel test execution, and the task is to grab the test results from both branches in order to process them in a post step somewhere.
The problem is: how? Searching for example code brings up nothing useful - the examples either explain parallel, or they explain post with junit.
pipeline {
    agent { node { label 'swarm' } }
    stages {
        stage('Testing') {
            parallel {
                stage('Unittest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'unittest.sh'
                    }
                }
                stage('Integrationtest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'integrationtest.sh'
                    }
                }
            }
        }
    }
}
Defining a post { always { junit(...) } } step on both parallel stages looked right in the Blue Ocean GUI, but the test report recorded close to double the number of tests - very odd, some files must have been scanned twice. Adding this post step to the surrounding 'Testing' stage gave an error.
I am missing an example detailing how to post-process test results that get created in a parallel block.
Just to record my solution for the internet:
I came up with stashing the test results in both parallel steps, and adding a final step that unstashes the files, then post-processes them:
pipeline {
    agent { node { label 'swarm' } }
    stages {
        stage('Testing') {
            parallel {
                stage('Unittest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'rm build/*'
                        sh 'unittest.sh'
                    }
                    post {
                        always {
                            stash includes: 'build/**', name: 'testresult-unittest'
                        }
                    }
                }
                stage('Integrationtest') {
                    agent { node { label 'swarm' } }
                    steps {
                        sh 'rm build/*'
                        sh 'integrationtest.sh'
                    }
                    post {
                        always {
                            stash includes: 'build/**', name: 'testresult-integrationtest'
                        }
                    }
                }
            }
        }
        stage('Reporting') {
            steps {
                unstash 'testresult-unittest'
                unstash 'testresult-integrationtest'
            }
            post {
                always {
                    junit 'build/*.xml'
                }
            }
        }
    }
}
One observation, though: you have to pay attention to cleaning up your workspace. Both test stages create one result file each, but on the next run both workspaces are inherited from the previous run and still contain both previously created test results in the build directory.
So you have to remove any remains of old test results before starting a new run; otherwise you would stash a stale version of the test result from the "other" stage. I don't know if there is a better way to do this.
To ensure that the 'Reporting' stage is always executed, put all the steps in its post section:
post {
    always {
        unstash 'testresult-unittest'
        unstash 'testresult-integrationtest'
        junit 'build/*.xml'
    }
}
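An untested variation, not from the thread: since junit results recorded on different nodes are aggregated into one build report, each parallel stage could record its own results directly, as long as each workspace is cleaned of stale files first (the double counting described above came from results left over by previous runs):

stage('Unittest') {
    agent { node { label 'swarm' } }
    steps {
        // remove stale results from a previous run on this node
        sh 'rm -f build/*.xml'
        sh 'unittest.sh'
    }
    post {
        always {
            // records only this stage's results; junit aggregates
            // across parallel branches into one test report per build
            junit 'build/*.xml'
        }
    }
}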
