How to return a value from a Jenkins function to the build stage?

I want to return a value from a Groovy function back to my Jenkins build stage so that it can be used as a condition in other stages. I am not able to figure out how to implement this. I have tried something like the code below, but it didn't work.
My Jenkinsfile looks something like this:
pipeline {
    agent any
    stages {
        stage('Sum') {
            steps {
                output = sum()
                echo output
            }
        }
        stage('Check') {
            when {
                expression {
                    output == 5
                }
            }
            steps {
                echo output
            }
        }
    }
}

def sum() {
    def a = 2
    def b = 3
    def c = a + b
    return c
}
The above approach doesn't work. Can someone provide a correct implementation?

You are missing a script step. It is necessary if you want to execute plain Groovy in your Jenkinsfile. Furthermore, output has to be declared as a global variable if you want to access it later.
def output // declared as a global variable

pipeline {
    ...
    stage('Sum') {
        steps {
            script {
                output = sum()
                echo "The sum is ${output}"
            }
        }
    }
    ...
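Putting the answer together with the original pipeline, a complete sketch might look like this (note that echo expects a string, hence the interpolation; the when expression can read the global directly):
def output // global, visible across stages

pipeline {
    agent any
    stages {
        stage('Sum') {
            steps {
                script {
                    output = sum()
                    echo "The sum is ${output}"
                }
            }
        }
        stage('Check') {
            // runs only when the global set in the previous stage equals 5
            when {
                expression { output == 5 }
            }
            steps {
                echo "Sum was ${output}, so Check runs"
            }
        }
    }
}

def sum() {
    def a = 2
    def b = 3
    return a + b
}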

Related

Declarative dynamic parallel stages

I figure I’m doing something unorthodox here, but I’d like to stick to declarative for convenience while dynamically generating parallel steps.
I found a way to do something like that, but it mixes both paradigms, which doesn't seem to work well with the Blue Ocean UI (multiple stages inside each parallel branch do not show up properly).
The closest I got was with something like this:
def accounts() {
    return ["dynamic", "list"]
}

def parallelJobs() {
    jobs = []
    for (account in accounts()) {
        jobs << stage(account) {
            steps {
                echo "Step for $account"
            }
        }
    }
    return jobs
}

// this is inside a shared library, called by my Jenkinsfile, like what is described
// under "Defining Declarative Pipelines in Shared Libraries" in
// https://www.jenkins.io/blog/2017/09/25/declarative-1/
def call() {
    pipeline {
        stages {
            stage('Build all variations') {
                parallel parallelJobs()
            }
        }
    }
}
The problem is that Jenkins errors out like this:
Expected a block for parallel # line X, column Y.
parallel parallelJobs()
^
So, I was wondering if there is a way I could transform that list of stages, returned by parallelJobs(), into the block expected by Jenkins...
Yes, you can. You need to return a map of stages. The following is a working pipeline example.
pipeline {
    agent any
    stages {
        stage('Parallel') {
            steps {
                script {
                    parallel parallelJobs()
                }
            }
        }
    }
}

def accounts() {
    return ["dynamic", "list"]
}

def parallelJobs() {
    jobs = [:]
    for (account in accounts()) {
        jobs[account] = {
            stage(account) {
                echo "Step for $account"
            }
        }
    }
    return jobs
}
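As a variant (my sketch, not part of the original answer), the same map can be built with collectEntries, which avoids the explicit loop and keeps each closure's captured variable scoped per entry:
def parallelJobs() {
    return accounts().collectEntries { account ->
        [(account): {
            stage(account) {
                echo "Step for $account"
            }
        }]
    }
}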

How to create a generic Jenkins stage, where the agent, and the steps are parameters?

I have multiple Jenkinsfiles doing basically the same thing:
pipeline {
    parameters { ... }
    environment { ... }
    stages {
        stage('setup') { ... }
        stage('run') {
            agent { AGENT }
            steps { STEPS }
        }
    }
}
The STEPS and AGENT parameters are values I get in the setup stage. Is it possible to define a function somewhere that returns a stage?
e.g.
def stage_factory(name, agent, steps, post ...) {
    return
        stage(name) {
            agent { agent }
            steps { steps }
            post { post }
        }
}
which would later be called inside the pipeline, right after the setup stage?
The following works in a scripted pipeline; you will need to try the declarative syntax yourself. Note the use of the surrounding {}:
def stage_factory(name, agent, steps, post ...) {
    return {
        node(agent) {
            stage(name) {
                steps()
            }
        }
    }
}
With this approach you need to put the post action in try/catch blocks, but this is the gist of it; a sketch of that follows.
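For illustration, a sketch of such a try/finally wrapper (assuming post is passed in as a closure) could look like:
def stage_factory(name, agent, steps, post) {
    return {
        node(agent) {
            stage(name) {
                try {
                    steps()
                } finally {
                    post() // runs whether steps() succeeded or failed
                }
            }
        }
    }
}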
If you change it like so, you can even pass the steps to it as you would expect from a Jenkins stage.
def stage_factory(name, agent) {
    return { steps ->
        node(agent) {
            stage(name) {
                steps()
            }
        }
    }
}
Usage:
def myStage = stage_factory("foo", "bar")
myStage {
    //...
}

Jenkinsfile: How to provide a function to `parallel` block, instead of a predefined map?

This really helpful answer got me 95% of the way there. Using this solution, I'm able to start n build stages in parallel. However, the map of parallel stages is essentially hardcoded. I want to be able to create it dynamically. The first step in this process is changing parallelStagesMap from a map to a function that returns a map.
Unfortunately, this small change causes my build to fail without any apparent error logs related to syntax.
How can I accomplish this? Am I using malformed Groovy syntax? I'd be grateful for any help.
def jobs = ["JobA", "JobB", "JobC"]

def parallelStagesMap() { // This is now a function that returns a map.
    return jobs.collectEntries {
        ["${it}" : generateStage(it)]
    }
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
            sh script: "sleep 15"
        }
    }
}

pipeline {
    agent any
    stages {
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap() // I call the function here.
                }
            }
        }
    }
}
I got a working solution! It's not perfect, because I would like to extract the jobs.collectEntries part into my own function, but now I can define the contents of my parallel stages inline, instead of at the top of the file!
I tried writing a function matching the same signature as Map.collectEntries ({ Closure -> Map }), but the Jenkins build fails without any logs once it hits my function. If someone's able to work that out, I'd be grateful.
def jobs = ["JobA", "JobB", "JobC"]

pipeline {
    agent any
    stages {
        stage('parallel stage') {
            steps {
                script {
                    parallel jobs.collectEntries { j ->
                        ["${j}" : { job -> return {
                            stage("stage: ${job}") {
                                echo "This is ${job}."
                                sh script: "sleep 15"
                            }
                        }}(j)]
                    }
                }
            }
        }
    }
}
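For what it's worth, a likely reason the function version at the top fails is Groovy scoping: def jobs = [...] at script level creates a local variable that script-level methods such as parallelStagesMap() cannot see. A sketch of a fix (assuming the groovy.transform.Field annotation is permitted in your setup):
import groovy.transform.Field

// @Field turns the script-local variable into a field visible to methods
@Field def jobs = ["JobA", "JobB", "JobC"]

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
            sh script: "sleep 15"
        }
    }
}

def parallelStagesMap() { // now usable as a function returning a map
    return jobs.collectEntries { ["${it}": generateStage(it)] }
}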

Value returned from a script is not assigned to a variable declared in a Jenkins declarative pipeline stage

I am working on adding a Jenkins declarative pipeline for automation testing. In the test run stage I want to extract the failed tests from the log. I am using a Groovy function for extracting the test result. This function is not part of the Jenkins pipeline; it lives in another script file. The function works fine and builds a string containing the failure details. Inside a pipeline stage I call this function and assign the returned string to another variable. But when I echo the variable's value it prints an empty string.
pipeline {
    agent {
        kubernetes {
            yamlFile 'kubernetesPod.yml'
        }
    }
    environment {
        failure_msg = ""
    }
    stages {
        stage('Run Test') {
            steps {
                container('ansible') {
                    script {
                        def notify = load('src/TestResult.groovy')
                        def result = notify.extractTestResult("${WORKSPACE}/testreport.xml")
                        sh "${result}"
                        if (result != "") {
                            failure_msg = failure_msg + result
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                sh 'echo Failure message.............${failure_msg}'
            }
        }
    }
}
Here, sh 'echo ${result}' prints an empty string, but extractTestResult() returns a non-empty string.
Also, I am not able to use the environment variable 'failure_msg' in the post section; it returns the error 'groovy.lang.MissingPropertyException: No such property: failure_msg for class: groovy.lang.Binding'.
Can anyone please help me with this?
EDIT: Even after I fixed the string interpolation, I was getting the same error. That was because Jenkins does not allow using 'sh' inside a docker container; there is an open bug ticket on the Jenkins issue board.
I would suggest using a global variable to hold the error message. My guess is that the variable does not exist in your scope.
def FAILURE_MSG // global variable

pipeline {
    ...
    stages {
        stage(...
            steps {
                container('ansible') {
                    script {
                        ...
                        if (result != "") {
                            FAILURE_MSG = FAILURE_MSG + result
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                sh "${FAILURE_MSG}" // Hint: use correct string interpolation
            }
        }
    }
}
(Similar SO question can be found here)
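To spell out the interpolation hint: with single quotes Groovy passes the literal text through, and the shell then tries to expand ${failure_msg} itself and finds nothing. A minimal sketch of a corrected post section:
post {
    always {
        script {
            // double quotes let Groovy interpolate FAILURE_MSG;
            // with single quotes the shell would expand it (to nothing)
            echo "Failure message: ${FAILURE_MSG}"
        }
    }
}
Using echo instead of sh also sidesteps the container issue mentioned in the question's edit.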

How do you handle global variables in a declarative pipeline?

I previously asked a question about how to overwrite variables defined in an environment directive, and it seems that's not possible.
I want to set a variable in one stage and have it accessible to other stages.
In a declarative pipeline it seems the only way to do this is in a script{} block.
For example I need to set some vars after checkout. So at the end of the checkout stage I have a script{} block that sets those vars and they are accessible in other stages.
This works, but it feels wrong. And for the sake of readability I'd much prefer to declare these variables at the top of the pipeline and have them overwritten. That would mean having a "set variables" stage at the beginning with a script{} block that just defines vars, and that's ugly.
I'm pretty sure I'm missing an obvious feature here. Do declarative pipelines have a global variable feature, or must I use script{}?
This works without an error:
def my_var

pipeline {
    agent any
    environment {
        REVISION = ""
    }
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    echo "$my_var"
                }
            }
        }
    }
}
Like @mkobit says, you can define the variable at the global level, outside the pipeline block. Have you tried that?
def my_var

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    println(my_var)
                }
            }
        }
    }
}
For strings, add it to the 'environment' block:
pipeline {
    environment {
        myGlobalValue = 'foo'
    }
}
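For instance, a minimal sketch of reading that value back in a stage:
pipeline {
    agent any
    environment {
        myGlobalValue = 'foo'
    }
    stages {
        stage('Print') {
            steps {
                // environment entries are strings, exposed via env
                echo "myGlobalValue = ${env.myGlobalValue}"
            }
        }
    }
}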
But for non-string variables, the easiest solution I've found for declarative pipelines is to wrap the values in a method.
Example:
pipeline {
    // Now I can reference myGlobalValue() in my pipeline.
    ...
}

def myGlobalValue() {
    return ['A', 'list', 'of', 'values']
}

// I can also reference myGlobalValue() in other methods below:
def myGlobalSet() {
    return myGlobalValue().toSet()
}
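A sketch of how the wrapped value can then be consumed inside a stage:
pipeline {
    agent any
    stages {
        stage('Use list') {
            steps {
                script {
                    // the method call works anywhere Groovy runs
                    myGlobalValue().each { echo it }
                }
            }
        }
    }
}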
@Sameera's answer is good for most use cases. I had a problem with the appending operator += though, so this did NOT work (MissingPropertyException):
def globalvar = ""

pipeline {
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
But this did work:
globalvar = ""

pipeline {
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
The correct syntax is:
For a global static variable:
Somewhere at the top of the file, before pipeline {, declare:
def MY_VAR = 'something'
For a global variable that you can edit and reuse across stages:
At the top of your file, add an import for Field:
import groovy.transform.Field
Somewhere before pipeline {, declare:
@Field def myVar
Then, inside your stage, inside a script block, set the variable:
stage('some stage') {
    steps {
        script {
            myVar = 'I mutate myVar with success'
        }
    }
}
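and read it back in a later stage, for example:
stage('another stage') {
    steps {
        script {
            echo "myVar is now: ${myVar}"
        }
    }
}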
To go even further, you can declare functions. Before the pipeline {, define:
def initSteps() {
    cleanWs()
    checkout scm
}
and then:
stages {
    stage('Init') {
        steps {
            initSteps()
        }
    }
}
This worked for me:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    env.my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    println(env.my_var)
                }
            }
        }
    }
}
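One upside of going through env (a note of mine, not part of the original answer): values set this way are also visible to shell steps in later stages, e.g.:
stage('Example3') {
    steps {
        // env.my_var is exported to the shell environment of sh steps
        sh 'echo "my_var is $my_var"'
    }
}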
