Jenkins multibranch pipeline post-build action only on a specific branch (declarative)

How can you create a pipeline that runs its stages on every branch, but runs the post-build action only on specific branches?
I have seen the when { branch 'production' } option, but it seems that this only works for stage blocks and not for post blocks.
Is there a way to do something like
pipeline {
  agent any
  stages {
    stage('build') {
      steps {
        bat script: "echo build"
      }
      post {
        always {
          when {
            branch 'production'
          }
          bat script: "echo publish"
        }
      }
    }
  }
}
The if (env.BRANCH_NAME == 'production') approach seems to be available only in scripted pipelines.

Conditional post section in Jenkins pipeline has an answer: use a script block with an if inside the post condition.
post {
  always {
    script {
      if (env.BRANCH_NAME == 'production') {
        // I have not verified whether your step works here;
        // conditional emailext works as expected.
        bat script: "echo publish"
      }
    }
  }
}
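If you need to match several branches instead of just one, the same script block can use a Groovy regex match rather than string equality. A minimal sketch; the step and 'production' branch come from the question, while the release/* pattern is illustrative:
post {
  always {
    script {
      // ==~ requires the whole branch name to match the pattern
      if (env.BRANCH_NAME ==~ /(production|release\/.*)/) {
        bat script: "echo publish"
      }
    }
  }
}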

Related

Running script (stash) prior to parallel stages being invoked

I have a parallel stage setup and would like to know whether it's possible to run a script prior to the nested stages, something like this:
stage('E2E-PR-CYPRESS') {
  when {
    allOf {
      expression {
        return fileExists("cypress.json")
      }
      branch "PR-*"
    }
  }
  steps {
    script {
      stash name: 'cypress-dir', includes: 'cypress/**/*'
    }
  }
  parallel {
    stage('Cypress Tests 1') {
      agent { label 'aws_micro_slave_e2e' }
      options { skipDefaultCheckout() }
      steps {
        runE2eTests()
      }
    }
    stage('Cypress Tests 2') {
      agent { label 'aws_micro_slave_e2e' }
      options { skipDefaultCheckout() }
      steps {
        runE2eTests()
      }
    }
  }
  post {
    always {
      e2eAfterCypressRun(this, true)
    }
  }
}
I know the above is wrong; I get the error Only one of "matrix", "parallel", "stages", or "steps" allowed for stage "E2E-PR-CYPRESS".
I already have the stash script in a setup stage at the beginning of my pipeline, but I'd like to be able to restart from this stage on Jenkins, so I need the stash part in this stage, as the parallel stages need to unstash the contents.
Updated Answer:
After playing a bit with the Restart from a Stage option, there seems to be a feature designed exactly for your needs, called Preserving stashes for Use with Restarted Stages:
Normally, when you run the stash step in your Pipeline, the resulting
stash of artifacts is cleared when the Pipeline completes, regardless
of the result of the Pipeline. Since stash artifacts aren’t accessible
outside of the Pipeline run that created them, this has not created
any limitations on usage. But with Declarative stage restarting, you
may want to be able to unstash artifacts from a stage which ran before
the stage you’re restarting from.
To enable this, there is a job property that allows you to configure a
maximum number of completed runs whose stash artifacts should be
preserved for reuse in a restarted run. You can specify anywhere from
1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
options {
  preserveStashes()
  // or
  preserveStashes(buildCount: 5)
}
This built-in feature is exactly what you need to solve your issue without any special modifications to your code, as it will allow you to rerun the pipeline from any stage and still use the existing files that were previously stashed.
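For context, here is a minimal sketch of how preserveStashes fits into a pipeline whose later stage unstashes from an earlier one (the stash name, label, and runE2eTests come from the question; the stage names and buildCount are illustrative):
pipeline {
  agent none
  options {
    // keep stashes from the last 5 completed runs, so a restarted
    // stage can still unstash them
    preserveStashes(buildCount: 5)
  }
  stages {
    stage('Prepare') {
      agent any
      steps {
        stash name: 'cypress-dir', includes: 'cypress/**/*'
      }
    }
    stage('E2E') {
      agent { label 'aws_micro_slave_e2e' }
      steps {
        unstash 'cypress-dir'
        runE2eTests()
      }
    }
  }
}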
Original Answer:
You can actually achieve this quite simply using the scripted syntax for the parallel command, which will also let you avoid the duplicated code in the parallel stages.
parallel: Execute in parallel
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
  // do something
}, secondBranch: {
  // do something else
},
failFast: true|false
In your case it can look like:
stage('E2E-PR-CYPRESS') {
  when {
    allOf {
      expression {
        return fileExists("cypress.json")
      }
      branch "PR-*"
    }
  }
  steps {
    script {
      stash name: 'cypress-dir', includes: 'cypress/**/*'
      // Define the parallel execution stages
      def stages = ['Cypress Tests 1', 'Cypress Tests 2']
      // Create the parallel executions and run them
      parallel stages.collectEntries {
        ["Running ${it}": {
          node('aws_micro_slave_e2e') {
            // a scripted node block performs no default checkout,
            // so no skipDefaultCheckout() option is needed here
            runE2eTests()
          }
        }]
      }
    }
  }
  post {
    always {
      e2eAfterCypressRun(this, true)
    }
  }
}
This way you can easily add more parallel stages by updating the stages list, or even receive it as an input parameter. In addition, you can key the parallel executions by different labels or test suites instead of the stage name.
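For example, a minimal sketch that keys the parallel branches by agent label instead of stage name (the suite names and the second label are hypothetical):
script {
  // map of display name -> agent label; adjust to your agents
  def suites = ['Chrome suite': 'aws_micro_slave_e2e',
                'Firefox suite': 'aws_small_slave_e2e']
  parallel suites.collectEntries { name, label ->
    [(name): {
      node(label) {
        runE2eTests()
      }
    }]
  }
}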
You can add a Preparation stage at the top, like this:
stages {
  stage('Preparation') {
    when {
      allOf {
        expression {
          return fileExists("cypress.json")
        }
        branch "PR-*"
      }
    }
    steps {
      script {
        stash name: 'cypress-dir', includes: 'cypress/**/*'
      }
    }
  }
  stage('E2E-PR-CYPRESS') {
    parallel {
      stage('Cypress Tests 1') {
        agent { label 'aws_micro_slave_e2e' }
        options { skipDefaultCheckout() }
        steps {
          runE2eTests()
        }
      }
      stage('Cypress Tests 2') {
        agent { label 'aws_micro_slave_e2e' }
        options { skipDefaultCheckout() }
        steps {
          runE2eTests()
        }
      }
    }
  }
}
post {
  always {
    e2eAfterCypressRun(this, true)
  }
}
An out-of-the-box concept
I propose splitting the job into two parts, taking the following into consideration:
You currently use the EC2 plugin, as the agents are EC2 instances
The parallel stages need to run with the same stashed content ready to unstash
Create Jenkins pipeline job 1:
This job will check out the workspace with any type of agent
Create a Packer JSON to build a customised AMI for the EC2 agents
The customised AMI will contain the stashed contents, moved to a directory that will be present on the EC2 instance when the agent is built
Output the AMI ID, and run a Groovy job to update the EC2 plugin's AMI ID with the customised one, temporarily setting the AMI in memory on Jenkins
pipeline {
  agent {
    docker {
      image 'test-container'
    }
  }
  options {
    buildDiscarder(
      logRotator(
        numToKeepStr: '10',
        artifactNumToKeepStr: '10'
      )
    )
    ansiColor('xterm')
    gitConnection("git")
  }
  stages {
    stage('Run Stash Cypress Functional Test') {
      steps {
        dir('functional-test') {
          // develop branch is the canary build, all other branches are stable builds
          script {
            sh """
              # script to stash cypress tests
            """
          }
        }
      }
    }
    stage('Functional Test AMI Build') {
      steps {
        dir('functional-test/packer') {
          withAWS(role: 'PackerBuild', roleAccount: '123456789012', roleSessionName: 'Jenkins-Workflow-FunctionalTest-Packer') {
            script {
              sh """
                # packer json script will need to copy contents from the workspace and run the script to stash content
                # packer json script will need to capture the new AMI ID
                # https://discuss.devopscube.com/t/how-to-get-the-ami-id-after-a-packer-build/36
                # https://www.packer.io/docs/post-processors/manifest
                packer validate FunctionalTestPacker.json
                packer build -debug FunctionalTestPacker.json
                # grab the AMI ID and export it as a Jenkins env variable
              """
            }
          }
        }
      }
    }
    stage('Run Groovy script to update the AMI ID on the EC2 plugin') {
      steps {
        dir('<groovy job dir>') { // placeholder for the Groovy job directory
          script {
            sh """
              # run groovy job to update the AMI on the Jenkins EC2 plugin
              # https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
            """
          }
        }
      }
    }
    stage('Kickoff Functional Test Deploy') {
      // pipeline checkbox parameter; when ticked it will automatically kick off the functional test pipeline
      when {
        expression { params.RUN_TESTS.toBoolean() }
      }
      steps {
        script {
          env.branch = params.BRANCH
          sh """
            echo "Branch is ${branch}"
          """
        }
        build job: 'workflow/CypressFunctionaTestDeployAndRun',
          parameters: [
            string(name: 'BRANCH', value: env.branch)
          ],
          wait: false
      }
    }
  }
  post {
    always {
      cleanWs()
    }
  }
}
Create Jenkins pipeline job 2:
This job will create the EC2 agents via the plugin, from the customised AMI built by pipeline job 1
This means your agents will have the same workspace ready to unstash, so you can execute a parallel run
You could also move much of the user data script that is in the EC2 plugin into the customised AMI build, cutting down the time each EC2 agent needs to get ready to carry out the execution
pipeline {
  agent none
  stages {
    stage('E2E-PR-CYPRESS') {
      when {
        allOf {
          expression {
            return fileExists("cypress.json")
          }
          branch "PR-*"
        }
      }
      parallel {
        stage('Cypress Tests 1') {
          agent { label 'aws_micro_slave_e2e' }
          options { skipDefaultCheckout() }
          steps {
            runE2eTests()
          }
        }
        stage('Cypress Tests 2') {
          agent { label 'aws_micro_slave_e2e' }
          options { skipDefaultCheckout() }
          steps {
            runE2eTests()
          }
        }
      }
    }
  }
  post {
    always {
      e2eAfterCypressRun(this, true)
    }
  }
}

How do I run the same stages on multiple nodes in Jenkins declarative pipeline?

We have a pipeline like this:
pipeline {
  agent none
  stages {
    stage('Build') {
      // ...
    }
    stage('Test') {
      parallel {
        stage('Test on Debian') {
          agent {
            label 'debian'
          }
          steps {
            unstash 'compile-artifacts'
            unstash 'dot-gradle'
            sh './gradlew check --stacktrace'
          }
          post {
            always {
              junit '**/build/test-results/**/*.xml'
            }
          }
        }
        stage('Test on CentOS') {
          agent {
            label 'centos'
          }
          steps {
            unstash 'compile-artifacts'
            unstash 'dot-gradle'
            sh './gradlew check --stacktrace'
          }
          post {
            always {
              junit '**/build/test-results/**/*.xml'
            }
          }
        }
        stage('Test on Windows') {
          agent {
            label 'windows'
          }
          steps {
            unstash 'compile-artifacts'
            unstash 'dot-gradle'
            bat "gradlew.bat check --stacktrace"
          }
          post {
            always {
              junit '**/build/test-results/**/*.xml'
            }
          }
        }
        stage('Test on macOS') {
          agent {
            label 'macos'
          }
          steps {
            unstash 'compile-artifacts'
            unstash 'dot-gradle'
            sh './gradlew check --stacktrace'
          }
          post {
            always {
              junit '**/build/test-results/**/*.xml'
            }
          }
        }
      }
    }
  }
}
Every stage is essentially identical, save for one line in the Windows block, which I already know how to deal with, so is there a way to template out the common parts of these stages to remove the duplication?
I already tried putting a loop inline, but that's not something declarative pipelines let you do. :(
You can refactor your steps{} blocks with Groovy methods:
def stageX(boolean linux) {
  unstash 'compile-artifacts'
  unstash 'dot-gradle'
  if (linux) {
    sh './gradlew check --stacktrace'
  } else {
    bat "gradlew.bat check --stacktrace"
  }
}
which you then call from your steps{} like this:
steps {
  script { stageX(true) } // or with false for your Windows agent
}
Of course you can do the same for your junit plugin call:
def junitCall() {
  junit '**/build/test-results/**/*.xml'
}
and call it like:
post {
  always {
    script { junitCall() }
  }
}
You won't save many lines, but it will make the code much easier to maintain. If you want to clean up your Jenkinsfile even more, you can move the methods into a shared library that you import, so they aren't even declared in your Jenkinsfile.
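A minimal sketch of what that shared-library step could look like (the library name and file are hypothetical), in vars/gradleCheck.groovy:
// vars/gradleCheck.groovy in the shared library
def call(boolean linux = true) {
  unstash 'compile-artifacts'
  unstash 'dot-gradle'
  if (linux) {
    sh './gradlew check --stacktrace'
  } else {
    bat "gradlew.bat check --stacktrace"
  }
}
The Jenkinsfile then only needs @Library('my-shared-library') _ at the top and gradleCheck(true) inside a steps{} block.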
Essentially what you want to do is currently not possible. As https://jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines states:
Only entire pipelines can be defined in shared libraries as of this
time. This can only be done in vars/*.groovy, and only in a call
method. Only one Declarative Pipeline can be executed in a single
build, and if you attempt to execute a second one, your build will
fail as a result.
So you can define methods to bundle several steps or you can bundle a whole pipeline in a shared library but nothing in between. Which is a shame, really.

Declarative pipeline when condition in post

As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but really need it or an equivalent.
pipeline {
  agent any
  tools {
    maven 'M3'
    jdk 'JDK8'
  }
  stages {
    stage('Notifications') {
      steps {
        sh 'mkdir tmpPom'
        sh 'mv pom.xml tmpPom/pom.xml'
        checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
        sh 'mvn clean test'
        sh 'rm pom.xml'
        sh 'mv tmpPom/pom.xml ../pom.xml'
      }
    }
  }
  post {
    success {
      script {
        currentBuild.result = 'SUCCESS'
      }
      when {
        branch 'master|release/*'
      }
      steps {
        sh 'mvn deploy'
      }
      sendNotification(recipients,
        null,
        'https://link.to.sonar',
        currentBuild.result,
      )
    }
    failure {
      script {
        currentBuild.result = 'FAILURE'
      }
      sendNotification(recipients,
        null,
        'https://link.to.sonar',
        currentBuild.result
      )
    }
  }
}
The documentation of declarative pipelines mentions that you can't use when in the post block; when is allowed only inside a stage directive.
So what you can do is test the condition with an if inside a script block:
post {
  success {
    script {
      if (env.BRANCH_NAME == 'master')
        currentBuild.result = 'SUCCESS'
    }
  }
  // failure block
}
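Alternatively, since when does work at the stage level, you could move the deploy out of post into its own guarded stage. A sketch based on the branches named in the question:
stage('Deploy') {
  when {
    anyOf {
      branch 'master'
      branch 'release/*'
    }
  }
  steps {
    sh 'mvn deploy'
  }
}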
Using a GitHub Repository and the Pipeline plugin I have something along these lines:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh '''
          make
        '''
      }
    }
  }
  post {
    always {
      sh '''
        make clean
      '''
    }
    success {
      script {
        if (env.BRANCH_NAME == 'master') {
          emailext (
            to: 'engineers@green-planet.com',
            subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
            body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
            attachLog: true,
          )
        } else if (env.BRANCH_NAME.startsWith('PR')) {
          // also send email to tell people their PR status
        } else {
          // this is some other branch
        }
      }
    }
  }
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy
pipeline {
  agent none
  stages {
    stage("Validate") {
      parallel {
        stage("Ubuntu") {
          agent {
            label "TEST_MACHINE"
          }
          steps {
            sh "run tests command"
            recordFailures('Ubuntu', 'test-results.xml')
            junit 'test-results.xml'
          }
        }
      }
    }
  }
  post {
    unsuccessful {
      notify()
    }
  }
}
// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]

def recordFailures(key, resultsFile) {
  def failures = ... // parse the resultsFile for failures ...
  if (failures) {
    testFailures[key] = failures
  }
}

def notify() {
  if (testFailures) {
    // ... do something here ...
  }
}

Jenkins DSL Pipeline: delete a job from its pipeline

I have a Jenkins pipeline job that (among other things) creates another pipelineJob (to clean up everything afterwards) using the Job DSL plugin.
pipeline {
  agent { label 'Deployment' }
  stages {
    stage('Clean working directory and Checkout') {
      steps {
        deleteDir()
        checkout scm
      }
    }
    // Complex logic omitted
    stage('Generate cleanup job') {
      steps {
        build job: 'cleanup-job-template',
          parameters: [
            string(name: 'REGION', value: "${REGION}"),
            string(name: 'DEPLOYMENT_TYPE', value: "${DEPLOYMENT_TYPE}")
          ]
      }
    }
  }
}
The thing is that I need this newly generated job to be built only once and then, if the build was successful, the job should be deleted.
pipeline {
  stages {
    stage('Cleanup afterwards') {
      // cleanup logic
    }
  }
  post {
    success {
      // delete this job?
    }
  }
}
I thought that this could be done using a Pipeline Post Action, but unfortunately I couldn't find any out-of-the-box solution for it.
Is it possible to achieve this at all?
You can achieve this in the post section, but you will need to write some Groovy code to delete the job:
#!/usr/bin/env groovy
import hudson.model.*

pipeline {
  agent none
  stages {
    stage('Cleanup afterwards') {
      // cleanup logic
      steps {
        node('worker') {
          sh 'ls -la'
        }
      }
    }
  }
  post {
    success {
      script {
        jobsToDelete = ["<JOB_TO_DELETE>"]
        deleteJob(Hudson.instance.items, jobsToDelete)
      }
    }
  }
}

def deleteJob(items, jobsToDelete) {
  items.each { item ->
    if (item.class.canonicalName != 'com.cloudbees.hudson.plugins.folder.Folder') {
      if (jobsToDelete.contains(item.fullName)) {
        manager.listener.logger.println(item.fullName)
        item.delete()
      }
    }
  }
}
Tested both cases; they work on Jenkins 2.89.4.
You should do all of that in one job instead of creating and deleting jobs. Use multiple stages for that, e.g. deploy the test system, run the tests / wait for them to finish, then undeploy. No need for extra jobs. An example is posted here: Can a Jenkins pipeline have an optional input step?
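A minimal sketch of that single-job approach (stage names and shell scripts are illustrative); the undeploy runs in post so it happens even if the tests fail:
pipeline {
  agent any
  stages {
    stage('Deploy test system') {
      steps {
        sh './deploy-test-system.sh' // hypothetical deployment script
      }
    }
    stage('Run tests') {
      steps {
        sh './run-tests.sh' // hypothetical test script
      }
    }
  }
  post {
    always {
      sh './undeploy-test-system.sh' // hypothetical; runs even on failure
    }
  }
}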

How do I assure that a Jenkins pipeline stage is always executed, even if a previous one failed?

I am looking for a Jenkinsfile example of having a step that is always executed, even if a previous step failed.
I want to make sure that I archive some build results in case of failure, and I need an always-running step at the end.
How can I achieve this?
We switched to using Jenkinsfile Declarative Pipelines, which lets us do things like this:
pipeline {
  agent any
  stages {
    stage('Test') {
      steps {
        sh './gradlew check'
      }
    }
  }
  post {
    always {
      junit 'build/reports/**/*.xml'
    }
  }
}
References:
Tests and Artifacts
Jenkins Pipeline Syntax
In a scripted pipeline, you can achieve the same with try/finally:
try {
  sh "false"
} finally {
  stage 'finalize'
  echo "I will always run!"
}
Another possibility is to use a parallel section in combination with a lock. For example:
pipeline {
  agent any
  stages {
    stage('Parallel stages') {
      parallel {
        stage('Stage 1') {
          steps {
            lock('MY_LOCK') {
              echo 'do stuff 1'
            }
          }
        }
        stage('Stage 2') {
          steps {
            lock('MY_LOCK') {
              echo 'do stuff 2'
            }
          }
        }
      }
    }
  }
}
Parallel stages in a parallel section only abort other stages in the same parallel section if the fail fast option for the parallel section is set. See the docs.
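In declarative syntax, the fail fast option is set with failFast true on the stage that contains the parallel block. A minimal sketch:
stage('Parallel stages') {
  failFast true // abort the other parallel stages as soon as one fails
  parallel {
    stage('Stage 1') {
      steps { echo 'do stuff 1' }
    }
    stage('Stage 2') {
      steps { echo 'do stuff 2' }
    }
  }
}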
