I need to share some code between several stages, and that code also needs to add post actions. To do so, I thought about putting everything in a method, which would be called from:
pipeline {
    stages {
        stage('Some') {
            steps {
                script { commonCode() }
            }
        }
    }
}
However, I'm not sure how I could install post actions from commonCode. The documentation doesn't mention anything about this. Looking at the code suggests that this DSL is basically just playing with a hash map, but I don't know whether it would be possible to access it from the method and modify it on the fly.
Basically I would like to do something like this in commonCode:
if (something) {
    attachPostAction('always', { ... })
} else {
    attachPostAction('failure', { ... })
}
The only thing that works so far is that in commonCode I do:
try {
    ...
    onSuccess()
} catch (e) {
    onError()
} finally {
    onAlways()
}
But I was wondering if there is a more elegant way...
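For completeness, here's the workaround above wrapped into a single reusable method (the on* handler names are just placeholders):

// Sketch of the workaround: post-style handlers emulated with
// try/catch/finally inside the shared method (names are hypothetical)
def commonCode(Closure body, Closure onSuccess, Closure onError, Closure onAlways) {
    try {
        body()
        onSuccess()
    } catch (e) {
        onError()
        throw e // rethrow so the build is still marked as failed
    } finally {
        onAlways()
    }
}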
Now that I better understand the question (I hope)...
This is a pretty interesting idea: generate your post actions on the fly in earlier stages.
It turns out to be really easy. I tried one option (success) that stored various closures in a list, then iterated through the list and ran all the closures in the post action. For another (failure) I just saved a single closure in a variable and ran that. Both work well.
Below is the code that does this. Uncomment the error line to simulate a failed build.
def postSuccess = []
def postFailure

pipeline {
    agent any
    stages {
        stage('Success') {
            steps {
                script {
                    println "Configure Success Post Steps"
                    postSuccess[0] = { echo "This is a successful build" }
                    postSuccess[1] = {
                        echo "Running multiple steps"
                        sh "ls -latr"
                    }
                }
            }
        }
        stage('Failure') {
            steps {
                script {
                    println "Configure Failure Post Steps"
                    postFailure = {
                        echo "This build failed"
                        echo "Running multiple steps for failure"
                        sh """
                            whoami
                            pwd
                        """
                    }
                }
                // error "Simulate a failed build" // uncomment this line to make the build fail
            }
        }
    } // stages
    post {
        success {
            echo "SUCCESS"
            script {
                for (def my_closure in postSuccess) {
                    my_closure()
                }
            }
        }
        failure {
            echo "FAILURE!"
            script {
                postFailure()
            }
        }
    }
} // pipeline
You can use regular Groovy scripting outside of the pipeline block. While I haven't tried it, you should be able to define a method outside of there and then call it from inside the pipeline. But methods can't be called as steps directly; you would need to wrap the call in a script step. Post actions take the same steps as steps{} blocks, so if you can use it in steps, you can use it in the post sections. You will need to watch scoping carefully or you will end up trying to sort out why things are null in some places.
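As a minimal sketch of that idea (commonCode is just a placeholder name):

def commonCode() {
    echo 'shared logic'
}

pipeline {
    agent any
    stages {
        stage('Some') {
            steps {
                // methods aren't steps, so wrap the call in script
                script { commonCode() }
            }
        }
    }
    post {
        always {
            // post conditions take the same steps as steps{} blocks
            script { commonCode() }
        }
    }
}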
You can also use a shared library. You could define a step in the shared library and then use it like any other step in a steps{} block or one of the post blocks.
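For example, a custom step is just a file under vars/ in the library with a call() method; myStep is a made-up name here:

// vars/myStep.groovy in the shared library
def call() {
    echo 'running shared logic'
}

Once the library is imported, myStep() can then be called directly from a steps{} block or any of the post conditions.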
On a scripted pipeline written in Groovy, I have two Jenkinsfiles, namely Jenkinsfile1 and Jenkinsfile2.
Is it possible to call Jenkinsfile2 from Jenkinsfile1?
Let's say the following is my Jenkinsfile1:
#!groovy
stage('My build') {
    node('my_build_node') {
        def some_output = true
        if (some_output) {
            // How to call Jenkinsfile2 here?
        }
    }
}
How do I call Jenkinsfile2 above when the output has a value which is not empty?
Or is it possible to call another Jenkins job which uses Jenkinsfile2?
Your question wasn't quite clear to me. If you just want to load and evaluate some piece of Groovy code into your pipeline, you can use load() (as @JoseAO previously stated). Apart from his example, if your file (Jenkinsfile2.groovy) has a call() method, you can use it directly, like this:
node('master') {
    pieceOfCode = load 'Jenkinsfile2.groovy'
    pieceOfCode()
    pieceOfCode.bla()
}
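One caveat: for pieceOfCode.bla() to work, Jenkinsfile2.groovy needs to end with return this, so that load() hands back an object whose methods you can invoke. A minimal sketch matching the example above:

// Jenkinsfile2.groovy
def call() {
    echo 'ran via pieceOfCode()'
}

def bla() {
    echo 'ran via pieceOfCode.bla()'
}

// load() needs this so the caller gets an object to invoke methods on
return this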
Now, if you want to trigger another job, you can use the build() step, even if you're not using a declarative pipeline. The thing is that the pipeline you're calling must be created in Jenkins, because build() takes as a parameter the job name, not the pipeline filename. Here's an example of how to call a job named pipeline2:
node('master') {
    build 'pipeline2'
}
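And if pipeline2 expects parameters, build() accepts those as well (the parameter name here is made up):

node('master') {
    build job: 'pipeline2', parameters: [
        string(name: 'MY_PARAM', value: 'some-value')
    ]
}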
Now, as for your question "How do I call Jenkinsfile2 above when output has a value which is not empty?", if I understood correctly, you're trying to run some shell command, and if its output is not empty, you'll load the Jenkinsfile/pipeline. Here's how to achieve that:
// Method #1
node('master') {
    try {
        sh 'my-command-goes-here'
        build 'pipeline2' // if you're trying to call another job
        // If you're trying to load and evaluate a piece of code
        pieceOfCode = load 'Jenkinsfile2.groovy'
        pieceOfCode()
        pieceOfCode.bla()
    }
    catch (Exception e) {
        print("${e}")
    }
}
// Method #2
node('master') {
    // returnStdout keeps the trailing newline, so trim before checking
    def commandResult = sh(script: 'my-command-goes-here', returnStdout: true).trim()
    if (commandResult.length() != 0) {
        build 'pipeline2' // if you're trying to call another job
        // If you're trying to load and evaluate a piece of code
        pieceOfCode = load 'Jenkinsfile2.groovy'
        pieceOfCode()
        pieceOfCode.bla()
    }
    else {
        print('Something went bad with the command.')
    }
}
Best regards.
For example, your Jenkinsfile2 is my 'pipeline2.groovy':
def pipeline2 = load (env.PATH_PIPELINE2 + '/pipeline2.groovy')
pipeline2.method()
I'm trying to dynamically set environment variables in the Jenkins pipeline script.
I'm using a combination of .groovy and .jenkinsfile scripts to generate the stage{} definitions for a pipeline as DRY as possible.
I have a method below:
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, envVarName, envVarValue)
{
    echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
    return node("${machineLabel}") {
        stage(nameOfTestSet)
        {
            /////// Area of interest ////////////
            environment {
                "${envVarName} = ${envVarValue}"
            }
            /////////////////////////////////////
            try {
                echo "Would run: " + pathToTestSet
            } finally {
                echo "Archive results here"
            }
        }
    }
}
There's some wrapper code running this, but abstracting that away, the caller would essentially use:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", "SOME_ENV_VAR", "ENV_VALUE")
Where the last two parameters are the environment name (SOME_ENV_VAR) and the value (ENV_VALUE).
The equivalent declarative code would be:
stage("SimpleTestSuite")
{
agent {
label "MachineA"
}
environment = {
SOME_ENV_VAR = ENV_VALUE
}
steps {
echo "Would run" + "path.to.test"
}
post {
always {
echo "Archive results"
}
}
}
However, when running this script, the environment syntax in the first code block doesn't seem to affect the actual execution at all. If I echo ${SOME_ENV_VAR} (or even ${envVarName}, in case it took that variable name as the actual environment variable), both come back null.
I'm wondering what's the best way to make this environment{} section as DRY / dynamic as possible?
I would prefer an extendable solution that can take in a list of environmentName=Value pairs, as this would be the more general case.
Note: I have tried the withEnv([]) solution for scripted pipelines; however, this seemed to have the same issue.
I figured out the solution to this.
It is to use the withEnv([]) step.
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, listOfEnvVarDeclarations=[])
{
    echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
    return node("${machineLabel}") {
        stage(nameOfTestSet)
        {
            withEnv(listOfEnvVarDeclarations) {
                try {
                    echo "Would run: " + pathToTestSet
                } finally {
                    echo "Archive results here"
                }
            }
        }
    }
}
And the caller method would be:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=\"ENV_VALUE\""])
Since the withEnv([]) step can take in multiple environment variables, we can also do:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=\"ENV_VALUE\"", "SECOND_VAR=\"SECOND_VAL\""])
And this would be valid and should work.
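Since withEnv() just takes a list of "NAME=value" strings, the general case from the question (a list of name/value pairs) can also be built from a map; envMap and envList are made-up names:

// Build the withEnv() list from a map of name/value pairs
def envMap = [SOME_ENV_VAR: 'ENV_VALUE', SECOND_VAR: 'SECOND_VAL']
def envList = envMap.collect { name, value -> "${name}=${value}" }
generateStage("SimpleTestSuite", "path.to.test", "MachineA", envList)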
So I have created a shared library in Jenkins with a listener that gets triggered each time the pipeline reads a FlowNode, so I can run Groovy code before and after each stage, step, etc...
I'm able to call the shared library in a step phase like this:
pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                prepareStepsWrapper()
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        stage('step2') {
            steps {
                echo 'step2'
            }
        }
        stage('step3') {
            steps {
                echo 'step3'
                // fail on purpose
                sh 'notfoundexecutablelol'
            }
        }
        stage('step4') {
            steps {
                echo 'step4'
            }
        }
    }
    post {
        always {
            println env.getEnvironment()
        }
    }
}
And it works pretty well!
With this approach the 'prepare' stage needs to be filtered out, so I've switched to the options directive:
pipeline {
    agent any
    options {
        prepareStepsWrapper()
    }
    stages {
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        ...
    }
}
But the pipeline fails with
WorkflowScript: 4: Invalid option type "prepareStepsWrapper"
tl;dr: How can I load a shared library within the options directive?
What does the options directive do?
The options directive allows configuring Pipeline-specific options
from within the Pipeline itself.
You can't call the shared library in the options directive. It should not be used to execute any logic; rather, it sets configuration for the pipeline. All available options and their documentation can be found here.
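For contrast, a typical options block looks something like this (just a sketch of common options, unrelated to the shared library):

pipeline {
    agent any
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '10'))
        disableConcurrentBuilds()
    }
    stages {
        ...
    }
}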
You could try to create a stage that simply calls your prepareStepsWrapper() and use locks to prevent other stages from executing before that stage.
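A sketch of that suggestion; lock() assumes the Lockable Resources plugin, and 'prepare' is a made-up resource name:

pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                // hold the lock while the preparation logic runs
                lock(resource: 'prepare') {
                    prepareStepsWrapper()
                }
            }
        }
        ...
    }
}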
I have many Jenkins pipelines for several different platforms, but my post{} block for all those pipelines is pretty samey. And it's quite large at this point because I include success, unstable, failure and aborted in it.
Is there a way to parameterize a reusable post{} block I can import in all my pipelines? I'd like to be able to import it and pass it params as well (because while it's almost the same, it varies very slightly for different pipelines).
Example post block that is currently copied and pasted inside all my pipeline{}s:
post {
    success {
        script {
            // I'd like to be able to pass in values for param1 and param2
            someGroovyScript {
                param1 = 'blah1'
                param2 = 'blah2'
            }
            // maybe I'd want a conditional here that does something with a passed in param
            if (param3 == 'blah3') {
                echo 'doing something'
            }
        }
    }
    unstable {
        ... you get the idea
    }
    aborted {
        ... you get the idea
    }
    failure {
        ... you get the idea
    }
}
The following does not work:
// in mypipeline.groovy
...
post {
    script {
        myPost{}
    }
}

// in vars/myPost.groovy
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()
    return always {
        echo 'test'
    }
}
Invalid condition "myPost" - valid conditions are [always, changed, fixed, regression, aborted, success, unstable, failure, notBuilt, cleanup]
Can I override post{} somehow, or something?
Shared libraries are one approach for this; you were pretty close.
@Library('my-shared-library') _

pipeline {
    ...
    post {
        always {
            script {
                myPost()
            }
        }
    }
}
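To pass in the params from the original question, vars/myPost.groovy can take a map; a sketch reusing the question's param names:

// vars/myPost.groovy
def call(Map config = [:]) {
    echo "param1 is ${config.param1}"
    // conditional on a passed-in param, as in the question
    if (config.param3 == 'blah3') {
        echo 'doing something'
    }
}

It would then be invoked as myPost(param1: 'blah1', param3: 'blah3') inside the script block.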
Answer based on https://stackoverflow.com/a/48563538/1783362
Shared Libraries link: https://jenkins.io/doc/book/pipeline/shared-libraries/
I set up a post action like in the examples:
pipeline {
    agent any
    stages {
        stage('Example1') {
            steps {
                bat 'return 1'
            }
        }
        stage('Example2') {
            steps {
                echo 'Wont see this'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello'
        }
    }
}
So I do something in the first stage to make it fail. And I have a post action that always runs, but what happens when I run my pipeline in Blue Ocean is that it fails at the first stage and then just stops. Where do I see the post action that is always supposed to run?
I had a similar problem when I used agent none at the beginning of the pipeline. Try using a node in your post action:
post {
    always {
        node('master') {
            echo 'I will always say Hello'
        }
    }
}
Kind of late to the party, but you have to wrap any steps that may fail in catchError. Something like this:
steps {
    catchError {
        bat 'return 1'
    }
}
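catchError marks the build as failed but lets execution continue, so later stages and the post{} block still run. It also takes named arguments to control how the failure is reported:

steps {
    // mark both the build and the stage as FAILURE, then carry on
    catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
        bat 'return 1'
    }
}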