Issue porting a Jenkinsfile from scripted to declarative: withEnv {} => environment {}

I have an issue porting a scripted pipeline to a declarative one. In scripted I used to have:
//Scripted
def myEnv = [:]
stage('Prepare my env') { [...] myEnv = ... }
stage('Fancy stuff') {
    node() {
        withEnv(myEnv) {
            // here use what is defined in myEnv
        }
    }
}
stage('Fancy stuff2') {
    node() {
        withEnv(myEnv) {
            // here use what is defined in myEnv
        }
    }
}
And now in declarative I would like to have:
//Declarative
def myEnv = [:]
pipeline {
    agent none
    stage('Prepare my env') {
        steps {
            script {
                [...]
                myEnv = ...
            }
        }
    }
    stages {
        environment { myEnv }
        stage('Fancy stuff') {
            [...]
        }
        stage('Fancy stuff2') {
            [...]
        }
    }
}
When I try to run this, it fails with:
org.codehaus.groovy.control.MultipleCompilationErrorsException:
startup failed: WorkflowScript: xx: "myEnv" is not a valid environment
expression. Use "key = value" pairs with valid Java/shell keys.
Fair enough.
What should I do to be able to use the declarative environment {} directive, so that I can avoid wrapping every subsequent stage in withEnv(myEnv)?

It seems that the part you are missing is the usage of the environment clause.
Instead of
environment { myEnv }
it should be
environment { myEnvVal = myEnv }
Just as the error message says, this should be a "key = value" pair.
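For example, if you need several of the map's values, each one gets its own "key = value" line (a minimal sketch; the key names are hypothetical):
environment {
    // hypothetical keys, assuming myEnv was populated in an earlier stage
    DEPLOY_TARGET = "${myEnv['deployTarget']}"
    BUILD_FLAVOR = "${myEnv['buildFlavor']}"
}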

Your issue comes from the type of your variable myEnv. You defined it as a map with def myEnv = [:].
So it can work with withEnv, which takes a list of "KEY=value" strings (easily derived from a map), but it does not work with environment {...}, which accepts only "key = value" statements.
The solution depends on how you add the environment variables contained in myEnv.
The simplest way is to use the environment directive and list all the key/values contained in your former variable myEnv:
pipeline {
    agent none
    environment {
        test1 = 'test-1'
        test2 = 'test-2'
    }
    stages {
        stage('Fancy stuff') {
            steps {
                echo "${test1}"
            }
        }
        stage('Fancy stuff2') {
            steps {
                echo "${test2}"
            }
        }
    }
}
But you can also do it the scripted way:
pipeline {
    agent none
    stages {
        stage('Prepare my env') {
            steps {
                script {
                    def test = []
                    for (int i = 1; i < 3; ++i) {
                        test[i] = 'test-' + i.toString()
                    }
                    // assigning without 'def' puts these in the global script
                    // binding, so later stages can read them
                    test1 = test[1]
                    test2 = test[2]
                }
            }
        }
        stage('Fancy stuff') {
            steps {
                echo "${test1}"
            }
        }
        stage('Fancy stuff2') {
            steps {
                echo "${test2}"
            }
        }
    }
}

Related

Jenkins Pipeline Conditional Environmental Variables

I have a set of static environment variables in the environment directive of a declarative pipeline. These values are available to every stage in the pipeline.
I want the values to change based on an arbitrary condition.
Is there a way to do this?
pipeline {
    agent any
    environment {
        if ${params.condition} {
            var1 = '123'
            var2 = abc
        } else {
            var1 = '456'
            var2 = def
        }
    }
    stages {
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Looking for the same thing, I found a nice answer in another question:
Basically, the idea is to use the ternary conditional operator:
pipeline {
    agent any
    environment {
        var1 = "${params.condition == true ? "123" : "456"}"
        var2 = "${params.condition == true ? abc : def}"
    }
}
Note: keep in mind that, in the way you wrote your question (and I wrote my answer), the numbers are Strings and the letters are variables.
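If abc and def were actually meant as string literals rather than variables, a variant with nested single quotes would work (my assumption about the intent):
environment {
    var1 = "${params.condition == true ? '123' : '456'}"
    var2 = "${params.condition == true ? 'abc' : 'def'}"
}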
I would suggest creating an "Environment" stage and declaring your variables according to the condition you want, something like below:
pipeline {
    agent any
    environment {
        // Declare variables which will remain the same throughout the build
    }
    stages {
        stage('Environment') {
            agent { node { label 'master' } }
            steps {
                script {
                    // Write the condition for the variables which need to change
                    if (params.condition) {
                        env.var1 = '123'
                        env.var2 = abc
                    } else {
                        env.var1 = '456'
                        env.var2 = def
                    }
                    sh "printenv"
                }
            }
        }
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Suppose we want to use optional params for a downstream job when it is called from an upstream job, and default params when the downstream job is run by itself.
But for some reason we don't want to have "holder" params with default values in the downstream job.
This can be done via a Groovy function:
Upstream Jenkinsfile - the param CREDENTIALS_ID is passed downstream:
pipeline {
    stages {
        stage('Build downstream') {
            steps {
                build job: "my_downstream_job_name",
                      parameters: [string(name: 'CREDENTIALS_ID', value: 'other_credentials_id')]
            }
        }
    }
}
Downstream Jenkinsfile - if the param CREDENTIALS_ID was not passed from upstream, the function returns a default value:
def getCredentialsId() {
    if (params.CREDENTIALS_ID) {
        return params.CREDENTIALS_ID;
    } else {
        return "default_credentials_id";
    }
}
pipeline {
    environment {
        TEST_PASSWORD = credentials("${getCredentialsId()}")
    }
}
You can get another level of flexibility using maps:
stage("set_env_vars") {
    steps {
        script {
            def MY_MAP1 = [A: "123", B: "456", C: "789"]
            def MY_MAP2 = [A: "abc", B: "def", C: "ghi"]
            env.var1 = MY_MAP1."${env.switching_var}"
            env.var2 = MY_MAP2."${env.switching_var}"
        }
    }
}
This way, more choices are possible.
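For instance, assuming switching_var is fed by a choice parameter (an assumption; it could be set anywhere earlier in the build), picking B yields env.var1 == "456" and env.var2 == "def". Build parameters are also exposed as environment variables, so env.switching_var resolves to the selection:
parameters {
    // hypothetical parameter feeding the map lookup above
    choice(name: 'switching_var', choices: ['A', 'B', 'C'], description: 'Selects which variable set to use')
}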

Call Groovy method inside Jenkinsfile's environment block

I have a Jenkinsfile that looks like this:
static def randomUser() {
    final def POOL = ["a".."z"].flatten()
    final Random rand = new Random(System.currentTimeMillis())
    return (0..5).collect { POOL[rand.nextInt(POOL.size())] }.join("")
}
pipeline {
    agent any
    environment {
        //CREATOR = sh(script: "randomUser()", returnStdout: true)
        CREATOR = "fixed-for-now"
        ...
    }
    stages {
        ...
        stage("Terraform Plan") {
            when { not { branch "master" } }
            steps {
                sh "terraform plan -out=plan.out -var creator=${CREATOR} -var-file=env.tfvars"
            }
        }
        ...
        stage("Terraform Destroy") {
            when { not { branch "master" } }
            steps {
                sh "terraform destroy -auto-approve -var creator=${CREATOR} -var-file=env.tfvars"
            }
        }
        ...
    }
}
My problem is that I cannot call randomUser from inside the environment block. I need the CREATOR variable to be a random string for every build, and I would prefer CREATOR to be a global environment variable since it is going to be used in many stages.
Is there a way to achieve (or work around) this?
Given your specific use case, it might be better to use the CREATOR variable as a parameter instead of an environment variable, and to assign its defaultValue as the return of your randomUser method.
pipeline {
    agent any
    parameters {
        string(name: 'CREATOR', defaultValue: randomUser())
    }
    ...
}
You can then use it in your pipeline like so:
stage("Terraform Plan") {
when { not { branch "master" } }
steps {
sh "terraform plan -out=plan.out -var creator=${params.CREATOR} -var-file=env.tfvars "
}
}
This way you have a correctly assigned and useful defaultValue for CREATOR, but with the ability to override it per-pipeline when necessary.
You can achieve this by removing the environment block and defining the global variable CREATOR before the pipeline block:
def CREATOR
pipeline {
    agent any
    stages {
        stage('Initialize the variables') {
            steps {
                script {
                    CREATOR = randomUser()
                }
            }
        }
        ...

How do you handle global variables in a declarative pipeline?

I previously asked a question about how to overwrite variables defined in an environment directive, and it seems that's not possible.
I want to set a variable in one stage and have it accessible to other stages.
In a declarative pipeline it seems the only way to do this is in a script {} block.
For example, I need to set some vars after checkout, so at the end of the checkout stage I have a script {} block that sets those vars, and they are accessible in other stages.
This works, but it feels wrong. For the sake of readability I'd much prefer to declare these variables at the top of the pipeline and have them overwritten. That would mean having a "set variables" stage at the beginning with a script {} block that just defines vars - that's ugly.
I'm pretty sure I'm missing an obvious feature here. Do declarative pipelines have a global variable feature, or must I use script {}?
This works without an error:
def my_var
pipeline {
    agent any
    environment {
        REVISION = ""
    }
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    echo "$my_var"
                }
            }
        }
    }
}
Like @mkobit says, you can define the variable at global level, outside the pipeline block. Have you tried that?
def my_var
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    println(my_var)
                }
            }
        }
    }
}
For strings, add it to the environment block:
pipeline {
    environment {
        myGlobalValue = 'foo'
    }
}
But for non-string variables, the easiest solution I've found for declarative pipelines is to wrap the values in a method.
Example:
pipeline {
    // Now I can reference myGlobalValue() in my pipeline.
    ...
}
def myGlobalValue() {
    return ['A', 'list', 'of', 'values']
}
// I can also reference myGlobalValue() in other methods below
def myGlobalSet() {
    return myGlobalValue().toSet()
}
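For instance, a minimal sketch of calling the wrapper method from a stage (the stage name and echo are purely illustrative):
pipeline {
    agent any
    stages {
        stage('Use the list') {
            steps {
                script {
                    // iterate over the list returned by the method defined below the pipeline
                    myGlobalValue().each { value ->
                        echo "got: ${value}"
                    }
                }
            }
        }
    }
}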
@Sameera's answer is good for most use cases. I had a problem with the append operator += though. So this did NOT work (MissingPropertyException):
def globalvar = ""
pipeline {
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
But this did work:
globalvar = ""
pipeline {
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
The correct syntax is:
For a global static variable:
Somewhere at the top of the file, before pipeline {, declare:
def MY_VAR = 'something'
For a global variable that you can edit and reuse across stages:
At the top of your file, add an import for Field:
import groovy.transform.Field
Somewhere before pipeline {, declare:
@Field def myVar
Then inside your stage, inside a script block, set the variable:
stage('some stage') {
    steps {
        script {
            myVar = 'I mutate myVar with success'
        }
    }
}
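A later stage can then read the mutated value back, for example (the stage name is illustrative):
stage('another stage') {
    steps {
        // @Field variables are visible across stages without the env. prefix
        echo "myVar is now: ${myVar}"
    }
}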
To go even further, you can declare functions:
Before the pipeline {:
def initSteps() {
    cleanWs()
    checkout scm
}
and then:
stages {
    stage('Init') {
        steps {
            initSteps()
        }
    }
}
This worked for me:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    env.my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                echo "${env.my_var}"
            }
        }
    }
}

How do I pass variables between stages in a declarative Jenkins pipeline?

How do I pass variables between stages in a declarative pipeline?
In a scripted pipeline, I gather the procedure is to write to a temporary file, then read the file into a variable.
How do I do this in a declarative pipeline?
E.g. I want to trigger a build of a different job, based on a variable created by a shell action.
stage("stage 1") {
steps {
sh "do_something > var.txt"
// I want to get var.txt into VAR
}
}
stage("stage 2") {
steps {
build job: "job2", parameters[string(name: "var", value: "${VAR})]
}
}
If you want to use a file (since a script is the thing generating the value you need), you could use readFile as seen below. If not, you can use sh with the returnStdout option inside a script block:
// Define a groovy local variable, myVar.
// A global variable without the def, like myVar = 'initial_value',
// was required for me in older versions of jenkins. Your mileage
// may vary. Defining the variable here maybe adds a bit of clarity,
// showing that it is intended to be used across multiple stages.
def myVar = 'initial_value'
pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            steps {
                echo "1.1. ${myVar}" // prints '1.1. initial_value'
                sh 'echo hotness > myfile.txt'
                script {
                    // OPTION 1: set variable by reading from file.
                    // FYI, trim removes leading and trailing whitespace from the string
                    myVar = readFile('myfile.txt').trim()
                }
                echo "1.2. ${myVar}" // prints '1.2. hotness'
            }
        }
        stage('two') {
            steps {
                echo "2.1 ${myVar}" // prints '2.1. hotness'
                sh "echo 2.2. sh ${myVar}, Sergio" // prints '2.2. sh hotness, Sergio'
            }
        }
        // this stage is skipped due to the when expression, so nothing is printed
        stage('three') {
            when {
                expression { myVar != 'hotness' }
            }
            steps {
                echo "three: ${myVar}"
            }
        }
    }
}
Simply:
pipeline {
    parameters {
        string(name: 'custom_var', defaultValue: '')
    }
    stages {
        stage("make param global") {
            steps {
                script {
                    tmp_param = sh(script: 'most amazing shell command', returnStdout: true).trim()
                    env.custom_var = tmp_param
                }
            }
        }
        stage("test if param was saved") {
            steps {
                echo "${env.custom_var}"
            }
        }
    }
}
I had a similar problem, as I wanted one specific pipeline to provide variables and many other ones to consume them.
I created a my-set-env-variables pipeline:
script {
    env.my_dev_version = "0.0.4-SNAPSHOT"
    env.my_qa_version = "0.0.4-SNAPSHOT"
    env.my_pp_version = "0.0.2"
    env.my_prd_version = "0.0.2"
    echo " My versions [DEV:${env.my_dev_version}] [QA:${env.my_qa_version}] [PP:${env.my_pp_version}] [PRD:${env.my_prd_version}]"
}
I can then reuse these variables in another pipeline, my-set-env-variables-test:
script {
    env.dev_version = "NOT DEFINED DEV"
    env.qa_version = "NOT DEFINED QA"
    env.pp_version = "NOT DEFINED PP"
    env.prd_version = "NOT DEFINED PRD"
}
stage('inject variables') {
    echo "PRE DEV version = ${env.dev_version}"
    script {
        def variables = build job: 'my-set-env-variables'
        def vars = variables.getBuildVariables()
        //println "found variables" + vars
        env.dev_version = vars.my_dev_version
        env.qa_version = vars.my_qa_version
        env.pp_version = vars.my_pp_version
        env.prd_version = vars.my_prd_version
    }
}
stage('next job') {
    echo "NEXT JOB DEV version = ${env.dev_version}"
    echo "NEXT JOB QA version = ${env.qa_version}"
    echo "NEXT JOB PP version = ${env.pp_version}"
    echo "NEXT JOB PRD version = ${env.prd_version}"
}
There is no need for (hidden plugin) parameter definitions or temp-file access. Sharing variables across stages can be accomplished by using global Groovy variables in a Jenkinsfile, like so:
#!/usr/bin/env groovy
def MYVAR
def outputOf(cmd) { return sh(returnStdout: true, script: cmd).trim() }
pipeline {
    agent any
    stages {
        stage("stage 1") {
            steps {
                script {
                    MYVAR = outputOf('echo do_something')
                }
                sh "echo MYVAR has been set to: '${MYVAR}'"
            }
        }
        stage("stage 2") {
            steps {
                sh '''echo "...in multiline quotes: "''' + MYVAR + '''" ... '''
                build job: "job2", parameters: [string(name: "var", value: MYVAR)]
            }
        }
    }
}
I have enhanced the existing solution by correcting the syntax. I also used the hidden parameter plugin so that the parameter does not show up as an extra parameter in the Jenkins UI. It works well :)
properties([parameters([[$class: 'WHideParameterDefinition', defaultValue: 'yoyo', name: 'hidden_var']])])
pipeline {
    agent any
    stages {
        stage("make param global") {
            steps {
                script {
                    env.hidden_var = "Hello"
                }
            }
        }
        stage("test if param was saved") {
            steps {
                echo "About to check result"
                echo "${env.hidden_var}"
            }
        }
    }
}

Jenkinsfile Declarative Pipeline defining dynamic env vars

I'm new to Jenkins pipeline; I'm defining a declarative-syntax pipeline and I don't know if my problem can be solved, because I didn't find a solution.
In this example, I need to pass a variable to the ansible plugin (in the old version I used an ENV_VAR, or injected it from a file with the inject plugin); that variable comes from a script.
This is my perfect scenario (but it doesn't work because of environment {}):
pipeline {
    agent { node { label 'jenkins-node' } }
    stages {
        stage('Deploy') {
            environment {
                ANSIBLE_CONFIG = '${WORKSPACE}/chimera-ci/ansible/ansible.cfg'
                VERSION = sh("python3.5 docker/get_version.py")
            }
            steps {
                ansiblePlaybook credentialsId: 'example-credential', extras: '-e version=${VERSION}', inventory: 'development', playbook: 'deploy.yml'
            }
        }
    }
}
I tried other ways to test how env vars work, based on other posts, for example:
pipeline {
    agent { node { label 'jenkins-node' } }
    stages {
        stage('PREPARE VARS') {
            steps {
                script {
                    env['VERSION'] = sh(script: "python3.5 get_version.py")
                }
                echo env.VERSION
            }
        }
    }
}
but "echo env.VERSION" returns null.
I also tried the same example with:
- VERSION=python3.5 get_version.py
- VERSION=python3.5 get_version.py > props.file (and tried to inject it, but didn't find how)
If this is not possible I will do it in the ansible role.
UPDATE
There is another "issue" in the Ansible plugin: to use vars in extra vars, the extras string must use double quotes instead of single ones.
ansiblePlaybook credentialsId: 'example-credential', extras: "-e version=${VERSION}", inventory: 'development', playbook: 'deploy.yml'
You can create variables before the pipeline block starts. You can have sh return stdout to assign to these variables. You don't have the same flexibility to assign to environment variables in the environment stanza. So substitute in python3.5 get_version.py where I have echo 0.0.1 in the script here (and make sure your python script just returns the version to stdout):
def awesomeVersion = 'UNKNOWN'
pipeline {
    agent { label 'docker' }
    stages {
        stage('build') {
            steps {
                script {
                    awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1').trim()
                }
            }
        }
        stage('output_version') {
            steps {
                echo "awesomeVersion: ${awesomeVersion}"
            }
        }
    }
}
The output of the above pipeline is:
awesomeVersion: 0.0.1
In Jenkins 2.76 I was able to simplify the solution from @burnettk to:
pipeline {
    agent { label 'docker' }
    environment {
        awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1')
    }
    stages {
        stage('output_version') {
            steps {
                echo "awesomeVersion: ${awesomeVersion}"
            }
        }
    }
}
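One caveat (my observation, not part of the original answer): without .trim(), the value captured by returnStdout keeps a trailing newline, which can leak into strings that interpolate it:
environment {
    // trim the trailing newline that sh(returnStdout: true) captures
    awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1').trim()
}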
Using the "pipeline utility steps" plugin, you can define general vars available to all stages from a properties file. For example, let props.txt as:
version=1.0
fix=alfa
and mix scripted and declarative Jenkins pipeline like this:
def props
def VERSION
def FIX
def RELEASE
node {
    props = readProperties file: 'props.txt'
    VERSION = props['version']
    FIX = props['fix']
    RELEASE = VERSION + "_" + FIX
}
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo "${RELEASE}"
            }
        }
    }
}
A possible variation of the main answer is to provide the variables using another pipeline instead of an sh script.
Example (the pipeline that sets the variables): my-set-env-variables
script {
    env.my_dev_version = "0.0.4-SNAPSHOT"
    env.my_qa_version = "0.0.4-SNAPSHOT"
    env.my_pp_version = "0.0.2"
    env.my_prd_version = "0.0.2"
    echo " My versions [DEV:${env.my_dev_version}] [QA:${env.my_qa_version}] [PP:${env.my_pp_version}] [PRD:${env.my_prd_version}]"
}
(use these variables) in another pipeline, my-set-env-variables-test:
script {
    env.dev_version = "NOT DEFINED DEV"
    env.qa_version = "NOT DEFINED QA"
    env.pp_version = "NOT DEFINED PP"
    env.prd_version = "NOT DEFINED PRD"
}
stage('inject variables') {
    echo "PRE DEV version = ${env.dev_version}"
    script {
        // call the variable-setting job
        def variables = build job: 'my-set-env-variables'
        def vars = variables.getBuildVariables()
        //println "found variables" + vars
        env.dev_version = vars.my_dev_version
        env.qa_version = vars.my_qa_version
        env.pp_version = vars.my_pp_version
        env.prd_version = vars.my_prd_version
    }
}
stage('next job') {
    echo "NEXT JOB DEV version = ${env.dev_version}"
    echo "NEXT JOB QA version = ${env.qa_version}"
    echo "NEXT JOB PP version = ${env.pp_version}"
    echo "NEXT JOB PRD version = ${env.prd_version}"
}
For those who want the environment keys to be dynamic, the following code can be used:
stage('Prepare Environment') {
    steps {
        script {
            def data = [
                "k1": "v1",
                "k2": "v2",
            ]
            data.each { key, value ->
                env."$key" = value
                // env[key] = value // This works as well, but needs approval in the in-process ScriptApproval page
            }
        }
    }
}
You can also dump all your vars into a file, and then use the '-e @file' syntax. This is very useful if you have many vars to populate.
steps {
    echo "hello World!!"
    sh """
        echo "
        var1: ${params.var1}
        var2: ${params.var2}
        " > vars
    """
    ansiblePlaybook inventory: _inventory, playbook: 'test-playbook.yml', sudoUser: null, extras: '-e @vars'
}
You can also use library functions in the environment section, like so:
@Library('mylibrary') _ // contains functions.groovy with several functions.
pipeline {
    environment {
        ENV_VAR = functions.myfunc()
    }
    …
}
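For completeness, a hedged sketch of the library side, following the standard vars/ layout of shared libraries (the function body is an assumption):
// vars/functions.groovy in the 'mylibrary' shared library
def myfunc() {
    // compute and return the value exposed as ENV_VAR above
    return 'some-computed-value'
}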
