How to use post steps with a Jenkins pipeline on multiple agents?

When using a Jenkins pipeline where each stage runs on a different agent, it is good practice to use agent none at the top level:
pipeline {
agent none
stages {
stage('Checkout') {
agent { label 'master' }
steps { script { currentBuild.result = 'SUCCESS' } }
}
stage('Build') {
agent { label 'someagent' }
steps { bat "exit 1" }
}
}
post {
always {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "test@test.com", sendToIndividuals: true])
}
}
}
But doing this leads to a "Required context class hudson.FilePath is missing" error when the email should go out:
[Pipeline] { (Declarative: Post Actions)
[Pipeline] step
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
[Pipeline] error
[Pipeline] }
When I change from agent none to agent any, it works fine.
How can I get the post step to work without using agent any?

Wrap the step that does the mailing in a node step:
post {
always {
node('awesome_node_label') {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "test@test.com", sendToIndividuals: true])
}
}
}
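Applied to the original pipeline, a minimal sketch might look like this (the node label in the post block is an assumption; any reachable agent will do):
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'someagent' }
            steps { bat 'exit 1' }
        }
    }
    post {
        always {
            // node allocates an executor and workspace, which provides the
            // hudson.FilePath context that the Mailer step needs
            node('awesome_node_label') {
                step([$class: 'Mailer', notifyEveryUnstableBuild: true,
                      recipients: 'test@test.com', sendToIndividuals: true])
            }
        }
    }
}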

I know this is old, but I stumbled on this looking for something related. If you want to run the post step on any node, you can use:
post {
always {
node(null) {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "test@test.com", sendToIndividuals: true])
}
}
}
The node step documentation (https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node) says that the label may be left blank. In a declarative pipeline, leaving something blank often results in an error; setting it to null instead will usually work around this.

Related

Jenkins stage doesn't call custom method

I have a Jenkins pipeline that does some code linting through different environments. I have a linting method that I call based on what parameters are passed. However, during my build, the stage that calls the method does nothing and returns nothing. Everything appears to be sane to me. Below is my code, and the stages showing the null results.
Jenkinsfile:
IAMMap = [
"west": [
account: "XXXXXXXX",
],
"east": [
account: "YYYYYYYYY",
],
]
pipeline {
options {
ansiColor('xterm')
}
parameters {
booleanParam(
name: 'WEST',
description: 'Whether to lint code from west account or not. Defaults to "false"',
defaultValue: false
)
booleanParam(
name: 'EAST',
description: 'Whether to lint code from east account or not. Defaults to "false"',
defaultValue: true
)
booleanParam(
name: 'LINT',
description: 'Whether to perform linting. This should always default to "true"',
defaultValue: true
)
}
environment {
CODE_DIR = "/code"
}
stages {
stage('Start Lint') {
steps {
script {
if (params.WEST && params.LINT) {
codeLint("west")
}
if (params.EAST && params.LINT) {
codeLint("east")
}
}
}
}
}
post {
always {
cleanWs disableDeferredWipeout: true, deleteDirs: true
}
}
}
def codeLint(account) {
return {
stage('Code Lint') {
dir(env.CODE_DIR) {
withAWS(IAMMap[account]) {
sh script: "./lint.sh"
}
}
}
}
}
Results:
15:00:20 [Pipeline] { (Start Lint)
15:00:20 [Pipeline] script
15:00:20 [Pipeline] {
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // script
15:00:20 [Pipeline] }
15:00:20 [Pipeline] // stage
15:00:20 [Pipeline] stage
15:00:20 [Pipeline] { (Declarative: Post Actions)
15:00:20 [Pipeline] cleanWs
15:00:20 [WS-CLEANUP] Deleting project workspace...
15:00:20 [WS-CLEANUP] Deferred wipeout is disabled by the job configuration...
15:00:20 [WS-CLEANUP] done
As you can see, nothing gets executed. I assure you I am checking the required parameters when running Build with Parameters in the console. As far as I know, this is the correct syntax for a declarative pipeline.
codeLint returns a closure that is never invoked, so the stage body never runs. Don't return the stage; just execute it within the codeLint function:
def codeLint(account) {
stage('Code Lint') {
dir(env.CODE_DIR) {
withAWS(IAMMap[account]) {
sh script: "./lint.sh"
}
}
}
}
Or, if the stage is returned as a closure, you can run it explicitly. This may need script approval.
codeLint("west").run()

Re-run a pipeline using a script

I have a pipeline, detailed below:
pipeline {
agent any
parameters {
booleanParam(name: 'RERUN', defaultValue: false, description: 'Run Failed Tests')
}
stages {
stage('Run tests') {
steps {
runTest()
}
}
}
post {
always {
reRun()
}
}
}
def reRun() {
if ("SUCCESS".equals(currentBuild.result)) {
echo "LAST BUILD WAS SUCCESS"
} else if ("UNSTABLE".equals(currentBuild.result)) {
echo "LAST BUILD WAS UNSTABLE"
}
}
What I want is that after the "Run tests" stage executes, if some tests fail, the pipeline is re-run with the parameter RERUN set to true instead of false. How can I replay via script instead of using plugins?
I wasn't able to find how to re-run with parameters in my search; if someone could help me I would be grateful.
First of all, you can use the post section to determine whether the job was unstable:
post{
unstable{
echo "..."
}
}
Then you could just trigger the same job with the new parameter like this:
build job: 'your-project-name', parameters: [[$class: 'BooleanParameterValue', name: 'RERUN', value: Boolean.valueOf("true")]]
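Putting the two together, a minimal sketch (triggering the current job again via env.JOB_NAME is an assumption; you could also hard-code the job name):
post {
    unstable {
        // Re-run this same job with RERUN=true; wait: false avoids blocking this build
        build job: env.JOB_NAME,
              parameters: [booleanParam(name: 'RERUN', value: true)],
              wait: false
    }
}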

Declarative pipeline when condition in post

As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but really need it or an equivalent.
pipeline {
agent any
tools {
maven 'M3'
jdk 'JDK8'
}
stages {
stage('Notifications') {
steps {
sh 'mkdir tmpPom'
sh 'mv pom.xml tmpPom/pom.xml'
checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
sh 'mvn clean test'
sh 'rm pom.xml'
sh 'mv tmpPom/pom.xml ../pom.xml'
}
}
}
post {
success {
script {
currentBuild.result = 'SUCCESS'
}
when {
branch 'master|release/*'
}
steps {
sh 'mvn deploy'
}
sendNotification(recipients,
null,
'https://link.to.sonar',
currentBuild.result,
)
}
failure {
script {
currentBuild.result = 'FAILURE'
}
sendNotification(recipients,
null,
'https://link.to.sonar',
currentBuild.result
)
}
}
}
In the documentation of declarative pipelines, it's mentioned that you can't use when in the post block. when is allowed only inside a stage directive.
So what you can do is test the conditions using an if in a script:
post {
success {
script {
if (env.BRANCH_NAME == 'master')
currentBuild.result = 'SUCCESS'
}
}
// failure block
}
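For the original goal (deploying only from master or a release branch), the success block could look roughly like this (a sketch; the branch check and the reuse of the asker's own sendNotification helper are assumptions):
post {
    success {
        script {
            // Assumption: deploy only for master and release/* branches
            if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release/')) {
                sh 'mvn deploy'
            }
            sendNotification(recipients, null, 'https://link.to.sonar', currentBuild.result)
        }
    }
    // failure block unchanged
}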
Using a GitHub Repository and the Pipeline plugin I have something along these lines:
pipeline {
agent any
stages {
stage('Build') {
steps {
sh '''
make
'''
}
}
}
post {
always {
sh '''
make clean
'''
}
success {
script {
if (env.BRANCH_NAME == 'master') {
emailext (
to: 'engineers#green-planet.com',
subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
attachLog: true,
)
} else if (env.BRANCH_NAME.startsWith('PR')) {
// also send email to tell people their PR status
} else {
// this is some other branch
}
}
}
}
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy
pipeline {
agent none
stages {
stage("Validate") {
parallel {
stage("Ubuntu") {
agent {
label "TEST_MACHINE"
}
steps {
sh "run tests command"
recordFailures('Ubuntu', 'test-results.xml')
junit 'test-results.xml'
}
}
}
}
}
post {
unsuccessful {
notify()
}
}
}
// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]
def recordFailures(key, resultsFile) {
def failures = ... parse test-results.xml script for failures ...
if (failures) {
testFailures[key] = failures
}
}
def notify() {
if (testFailures) {
... do something here ...
}
}

How to abort a declarative pipeline

I'm trying the new declarative pipeline syntax.
I wonder, how can I abort all the stages and steps of a pipeline, when for example a parameter has an invalid value.
I could add a when clause to every stage, but this isn't optimal for me. Is there a better way to do so?
This should work fine with a when directive, if you make use of the error step.
For example, you could do an up-front check and abort the build if the given parameter value is not acceptable, preventing subsequent stages from running:
pipeline {
agent any
parameters {
string(name: 'targetEnv',
defaultValue: 'dev',
description: 'Must be "dev", "qa", or "staging"')
}
stages {
stage('Validate parameters') {
when {
expression {
// Only run this stage if the targetEnv is invalid
!['dev', 'qa', 'staging'].contains(params.targetEnv)
}
}
steps {
// Abort the build, skipping subsequent stages
error("Invalid target environment: ${params.targetEnv}")
}
}
stage('Checkout') {
steps {
echo 'Checking out source code...'
}
}
stage('Build') {
steps {
echo 'Building...'
}
}
}
}
You could use the FlowInterruptedException, e.g.:
import hudson.model.Result
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException
pipeline {
...
steps {
script {
throw new FlowInterruptedException(Result.ABORTED)
}
...
}
This will stop execution immediately like the error step, but with more control over the result.
Please note that it requires you to approve a signature:
new org.jenkinsci.plugins.workflow.steps.FlowInterruptedException hudson.model.Result
Apart from Result.ABORTED there's also: Result.SUCCESS, Result.UNSTABLE, Result.FAILURE and Result.NOT_BUILT.
Disclaimer: it's a bit of a hack.
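As an illustration, the throw can be wrapped in a small helper (hypothetical name; it needs the same signature approval):
import hudson.model.Result
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

// Hypothetical helper: stop the build immediately with the given result
def abortBuild(String reason, Result result = Result.ABORTED) {
    echo "Stopping build: ${reason}"
    throw new FlowInterruptedException(result)
}
It can then be called from any script block, e.g. abortBuild("Invalid target environment: ${params.targetEnv}").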

Use Jenkins 'Mailer' inside pipeline workflow

I'd like to leverage the existing Mailer plugin from Jenkins within a Jenkinsfile that defines a pipeline build job. Given the following simple failure script I would expect an email on every build.
stage 'Test'
node {
try {
sh 'exit 1'
} finally {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'me@me.com', sendToIndividuals: true])
}
}
The output from the build is:
Started by user xxxxx
[Pipeline] stage (Test)
Entering stage Test
Proceeding
[Pipeline] node
Running on master in /var/lib/jenkins/jobs/rpk-test/workspace
[Pipeline] {
[Pipeline] sh
[workspace] Running shell script
+ exit 1
[Pipeline] step
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
As you can see, it does record that it performs the pipeline step immediately after the failure, but no emails get generated.
Emails in other free-style jobs that leverage the Mailer work fine; it's just when invoking it via pipeline jobs.
This is running with Jenkins 2.2 and mailer 1.17.
Is there a different mechanism by which I should be invoking failed build emails? I don't need all the overhead of the mail step, just need notifications on failures and recoveries.
In Pipeline, a failed sh step doesn't immediately set currentBuild.result to FAILURE; its initial value is null. Hence, build steps that rely on the build status, like Mailer, may appear to work incorrectly.
You can check it by adding a debug print:
stage 'Test'
node {
try {
sh 'exit 1'
} finally {
println currentBuild.result // this prints null
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'me@me.com', sendToIndividuals: true])
}
}
The whole pipeline is wrapped with an exception handler provided by Jenkins, which is why Jenkins marks the build as failed in the end.
So if you want to utilize Mailer you need to maintain the build status properly. For instance:
stage 'Test'
node {
try {
sh 'exit 1'
currentBuild.result = 'SUCCESS'
} catch (any) {
currentBuild.result = 'FAILURE'
throw any //rethrow exception to prevent the build from proceeding
} finally {
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'me@me.com', sendToIndividuals: true])
}
}
If you don't need to re-throw the exception, you can use catchError. It is a Pipeline built-in that catches any exception within its scope, prints it to the console, and sets the build status. For instance:
stage 'Test'
node {
catchError {
sh 'exit 1'
}
step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'me@me.com', sendToIndividuals: true])
}
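On newer versions of the workflow-basic-steps plugin, catchError also accepts optional buildResult and stageResult arguments, which give finer control over what gets marked as failed (a sketch; check that your plugin version supports these options):
node {
    stage('Test') {
        // Keep the overall build UNSTABLE but mark this stage as failed
        catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
            sh 'exit 1'
        }
    }
    step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: 'me@me.com', sendToIndividuals: true])
}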
In addition to izzekil's excellent answer, you may wish to choose email recipients based on the commit authors. You can use email-ext to do this (based on their pipeline examples):
step([$class: 'Mailer',
notifyEveryUnstableBuild: true,
recipients: emailextrecipients([[$class: 'CulpritsRecipientProvider'],
[$class: 'RequesterRecipientProvider']])])
If you're using a recent email-ext (2.50+), you can use that in your pipeline:
emailext(body: '${DEFAULT_CONTENT}', mimeType: 'text/html',
replyTo: '$DEFAULT_REPLYTO', subject: '${DEFAULT_SUBJECT}',
to: emailextrecipients([[$class: 'CulpritsRecipientProvider'],
[$class: 'RequesterRecipientProvider']]))
If you're not using a declarative Jenkinsfile, you will need to put checkout scm so Jenkins can find the committers. See JENKINS-46431.
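In a scripted Jenkinsfile that would look roughly like this (the build step is a placeholder):
node {
    checkout scm   // lets email-ext resolve the committers (see JENKINS-46431)
    try {
        sh './build.sh'   // placeholder build step
    } finally {
        emailext(body: '${DEFAULT_CONTENT}', mimeType: 'text/html',
                 replyTo: '$DEFAULT_REPLYTO', subject: '${DEFAULT_SUBJECT}',
                 to: emailextrecipients([[$class: 'CulpritsRecipientProvider'],
                                         [$class: 'RequesterRecipientProvider']]))
    }
}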
If you're still on an older version of email-ext, you'll hit JENKINS-25267. You could roll your own HTML email:
def emailNotification() {
def to = emailextrecipients([[$class: 'CulpritsRecipientProvider'],
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']])
String currentResult = currentBuild.result
String previousResult = currentBuild.getPreviousBuild().result
def causes = currentBuild.rawBuild.getCauses()
// E.g. 'started by user', 'triggered by scm change'
def cause = null
if (!causes.isEmpty()) {
cause = causes[0].getShortDescription()
}
// Ensure we don't keep a list of causes, or we get
// "java.io.NotSerializableException: hudson.model.Cause$UserIdCause"
// see http://stackoverflow.com/a/37897833/509706
causes = null
String subject = "$env.JOB_NAME $env.BUILD_NUMBER: $currentResult"
String body = """
<p>Build $env.BUILD_NUMBER ran on $env.NODE_NAME and terminated with $currentResult.
</p>
<p>Build trigger: $cause</p>
<p>See: $env.BUILD_URL</p>
"""
String log = currentBuild.rawBuild.getLog(40).join('\n')
if (currentResult != 'SUCCESS') {
body = body + """
<h2>Last lines of output</h2>
<pre>$log</pre>
"""
}
if (to != null && !to.isEmpty()) {
// Email on any failures, and on first success.
if (currentResult != 'SUCCESS' || currentResult != previousResult) {
mail to: to, subject: subject, body: body, mimeType: "text/html"
}
echo 'Sent email notification'
}
}
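A sketch of how this helper might be wired into a scripted pipeline (the node body and build step are placeholders):
node {
    try {
        checkout scm
        sh './build.sh'          // placeholder build step
        currentBuild.result = 'SUCCESS'
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        emailNotification()      // the helper defined above
    }
}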
I think a better way to send mail notifications in Jenkins pipelines is to use the post section of a pipeline, as described in the Jenkins docs, instead of using try/catch:
pipeline {
agent any
stages {
stage('whatever') {
steps {
...
}
}
}
post {
always {
step([$class: 'Mailer',
notifyEveryUnstableBuild: true,
recipients: "example#example.com",
sendToIndividuals: true])
}
}
}
