I'm trying to dynamically set environment variables in a Jenkins pipeline script.
I'm using a combination of .groovy and Jenkinsfile scripts to generate the stage{} definitions for a pipeline in a way that's as DRY as possible.
I have the method below:
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, envVarName, envVarValue)
{
echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
return node("${machineLabel}") {
stage(nameOfTestSet)
{
/////// Area of interest ////////////
environment {
"${envVarName} = ${envVarValue}"
}
/////////////////////////////////////
try {
echo "Would run: "+pathToTestSet
} finally {
echo "Archive results here"
}
}
}
}
There's some wrapper code running this, but abstracting that away, the caller would essentially use:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", "SOME_ENV_VAR", "ENV_VALUE")
Where the last two parameters are the environment variable name (SOME_ENV_VAR) and its value (ENV_VALUE).
The equivalent declarative code would be:
stage("SimpleTestSuite")
{
agent {
label "MachineA"
}
environment {
    SOME_ENV_VAR = "ENV_VALUE"
}
steps {
echo "Would run" + "path.to.test"
}
post {
always {
echo "Archive results"
}
}
}
However, when running this script, the environment syntax in the first code block doesn't seem to affect the execution at all. If I echo ${SOME_ENV_VAR} (or even ${envVarName}, in case it took that variable name as the actual environment variable), both come back null.
I'm wondering what's the best way to make this environment{} section as DRY and dynamic as possible.
I would prefer an extendable solution that can take a list of environmentName=value pairs, as that would be the more general case.
Note: I have tried the withEnv([]) approach for scripted pipelines; however, it seemed to have the same issue.
I figured out the solution to this.
It is to use the withEnv([]) step.
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, listOfEnvVarDeclarations=[])
{
echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
return node("${machineLabel}") {
stage(nameOfTestSet)
{
withEnv(listOfEnvVarDeclarations) {
try {
echo "Would run: "+pathToTestSet
} finally {
echo "Archive results here"
}
}
}
}
}
And the calling code would be:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=ENV_VALUE"])
Since the withEnv([]) step can take in multiple environment variables, we can also do:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=\"ENV_VALUE\"", "SECOND_VAR=\"SECOND_VAL\""])
And this would be valid and should work.
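If the call sites would rather pass a map of name/value pairs, a small helper can flatten it into the NAME=value strings that withEnv expects. This is just a sketch; toEnvList is my own helper name, not something provided by Jenkins:

def toEnvList(Map envMap) {
    // Turn [NAME: "value"] entries into the "NAME=value" strings withEnv expects.
    return envMap.collect { name, value -> "${name}=${value}".toString() }
}

// Example call, reusing generateStage from above:
generateStage("SimpleTestSuite", "path.to.test", "MachineA",
              toEnvList([SOME_ENV_VAR: "ENV_VALUE", SECOND_VAR: "SECOND_VAL"]))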
Related
Say I have a Jenkinsfile with stages based on a conditional, such as the following. Then, in the Jenkins UI for a particular Pipeline project, I may or may not have a String Parameter named CITY defined for a particular job.
If the CITY parameter is defined in the Jenkins project in the UI, I'd like it to use whatever city the user inputs. If the user doesn't input anything, I'd like it to default to a value in "Global Properties" (under "Manage Jenkins > Configure System > Global Properties").
If the CITY parameter is NOT defined in the UI (for example, if someone forgot to define that parameter in the Pipeline project UI), I'd like it to somehow default to a value defined in the Jenkinsfile.
Simply put, I'd like to define a default in the Jenkinsfile such that if someone forgets to configure the String Parameter in the job UI, the default will be used.
I'm pretty new to Jenkins and Groovy, so I'm not quite sure how to do this. How can I go about defining a default parameter in a Jenkinsfile that can be overridden by user input in the UI or by a default from "Global Properties"? Any advice is appreciated.
pipeline {
agent any
stages {
stage('Always') {
steps {
script {
sh 'echo Welcome CITY=$CITY'
}
}
}
stage('Chicago') {
when {
expression {
params.CITY == "CHICAGO"
}
}
steps {
script {
sh 'echo Welcome to Chicago "CITY=$CITY"'
}
}
}
stage('NYC') {
when {
expression {
params.CITY == "NYC"
}
}
steps {
script {
sh 'echo Welcome to NYC "CITY=$CITY"'
}
}
}
}
}
There is no need to declare each city. Just add a string parameter with a default value and create a single stage that uses the default if the user did not enter one.
pipeline {
agent any
parameters {
string(name: 'CITY', defaultValue: 'TEL_AVIV', description: "this is the default if the user doesn't enter a parameter in the UI")
}
stages {
stage('CITY') {
steps {
script {
sh 'echo Welcome $CITY'
}
}
}
}
}
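If you also need a fallback for the case where the job has no CITY parameter defined at all, one option is to resolve the value yourself with Groovy's Elvis operator: prefer the UI parameter, then a CITY variable from Global Properties, then a literal default in the Jenkinsfile. A minimal sketch (EFFECTIVE_CITY is just a name chosen for the example):

pipeline {
    agent any
    stages {
        stage('Resolve CITY') {
            steps {
                script {
                    // params.CITY if the UI parameter exists and was filled in,
                    // else env.CITY from Global Properties, else a Jenkinsfile default.
                    env.EFFECTIVE_CITY = params.CITY ?: env.CITY ?: 'TEL_AVIV'
                }
                sh 'echo Welcome $EFFECTIVE_CITY'
            }
        }
    }
}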
I am working on a project and I have to configure Jenkins using JCasC (the Configuration as Code plugin).
I have to create a job, BUT I can't pass variables into the script.
My code:
freeStyleJob("SEED") {
parameters {
stringParam("MY_PARAMETER", "defaultValue", "A parameter")
}
steps {
jobDsl {
scriptText('''
job("seedJOB") {
displayName('${MY_PARAMETER}') // doesn't work
description("${MY_PARAMETER}") // doesn't work
//description("$MY_PARAMETER") // doesn't work
//description('$MY_PARAMETER') // doesn't work
// I also tried triple double quotes instead of triple single quotes, but that doesn't work either...
... here the job...
'''.stripIndent())
}
}
EDIT: BEST SOLUTION HERE:
I'm writing the Groovy job definition inside the quoted scriptText, so if I want to evaluate a build parameter I don't have to put ${} at all; I just write the variable name:
With the solution:
freeStyleJob("SEED") {
parameters {
stringParam("MY_PARAMETER", "defaultValue", "A parameter")
}
steps {
jobDsl {
scriptText('''
job("seedJOB") {
displayName(MY_PARAMETER) // solution: just the variable name, no quotes and no ${}
... here the job...
'''.stripIndent())
}
}
easy!
Maybe you could write it to a file? You'd get something like this in your step:
steps {
shell('echo $DISPLAY_NAME > display_name.txt')
jobDsl {
scriptText('''
job("seedjob") {
String jobname = readFileFromWorkspace('display_name.txt').trim()
displayName(jobname)
}
'''.stripIndent())
}
}
You could also use a .properties file to do it more properly.
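For example, the shell step could write a small .properties file and the DSL script could parse it, which keeps the value as a named key rather than a raw line. A sketch (the file name and key are made up):

steps {
    shell('echo "displayName=$MY_PARAMETER" > seedjob.properties')
    jobDsl {
        scriptText('''
            // Parse the properties file written by the shell step above.
            Properties props = new Properties()
            props.load(new StringReader(readFileFromWorkspace('seedjob.properties')))
            job("seedjob") {
                displayName(props.getProperty('displayName'))
            }
        '''.stripIndent())
    }
}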
I want to return a value from a Groovy function back to my Jenkins build stage so that the value can be used as a condition in other stages. I am not able to figure out how to implement this. I have tried something like the code below, but it didn't work.
I have a Jenkinsfile something like this:
pipeline
{
agent any
stages
{
stage('Sum')
{
steps
{
output=sum()
echo output
}
}
stage('Check')
{
when
{
expression
{
output==5
}
}
steps
{
echo output
}
}
}
}
def sum()
{
def a=2
def b=3
def c=a+b
return c
}
The above approach doesn't work. Can someone provide the correct implementation?
You are missing a script step. It is necessary if you want to execute plain Groovy in your Jenkinsfile. Furthermore, output has to be declared as a global variable if you want to access it later.
def output // set as global variable
pipeline{
...
stage('Sum')
{
steps
{
script
{
output = sum()
echo "The sum is ${output}"
}
}
}
...
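With output declared at the top of the Jenkinsfile, a later stage can then use it in its when condition, roughly like this:

stage('Check')
{
    when
    {
        expression
        {
            output == 5
        }
    }
    steps
    {
        echo "The sum is still ${output}"
    }
}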
I need to share some code between several stages, and that shared code also needs to add post actions. To do so, I thought about putting everything in a method, which would be called like this:
pipeline {
stages {
stage('Some') {
steps {
script { commonCode() }
}
}
}
}
However, I'm not sure how I could install post actions from within commonCode. The documentation does not mention a thing. Looking at the code suggests that this DSL is basically just populating a hash map, but I don't know whether it would be possible to access it from the method and modify it on the fly.
Basically I would like to do something like this in commonCode:
if (something) {
attachPostAction('always', { ... })
} else {
attachPostAction('failure', { ... })
}
The only thing that works so far is that in commonCode I do:
try {
...
onSuccess()
} catch (e) {
onError()
} finally {
onAlways()
}
But was wondering if there is a more elegant way...
Now that I better understand the question (I hope)...
This is a pretty interesting idea--generate your post actions on the fly in previous stages.
It turns out to be really easy. I tried one option (success) that stored various closures in a list, then iterated through the list and ran all the closures in the post action. Then I did another (failure) where I just saved a single closure as a variable and ran that. Both work well.
Below is the code that does this. Uncomment the error line to simulate a failed build.
def postSuccess = []
def postFailure
pipeline {
agent any
stages {
stage('Success'){
steps {
script {
println "Configure Success Post Steps"
postSuccess[0] = {echo "This is a successful build"}
postSuccess[1] = {
echo "Running multiple steps"
sh "ls -latr"
}
}
}
}
stage('Failure'){
steps {
script {
println "Configure Failure Post Steps"
postFailure = {
echo "This build failed"
echo "Running multiple steps for failure"
sh """
whoami
pwd
"""
}
}
// error "Simulate a failed build" //uncomment this line to make the build fail
}
}
} // stages
post {
success {
echo "SUCCESS"
script {
for (def my_closure in postSuccess) {
my_closure()
}
}
}
failure {
echo "FAILURE!"
script {
postFailure()
}
}
}
} // pipeline
You can use regular Groovy scripting outside of the pipeline block. While I haven't tried it, you should be able to define a method outside of there and then call it from inside the pipeline. But methods can't be invoked directly as steps; you would need to wrap the call in a script step. Post actions take the same steps as steps{} blocks, so if you can use something in steps{}, you can use it in the post sections. You will need to watch scoping carefully or you will end up trying to sort out why things are null in some places.
You can also use a shared library. You could define a step in the shared library and then use it like any other step in a steps{} block or one of the post blocks.
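A shared library version could look roughly like this; the library name, the step name, and its contents are assumptions rather than a drop-in implementation:

// vars/commonCode.groovy in the shared library
def call() {
    try {
        echo "Shared build logic goes here"
    } catch (e) {
        echo "Shared failure handling"
        throw e
    } finally {
        echo "Shared cleanup that always runs"
    }
}

// In the Jenkinsfile:
// @Library('my-shared-lib') _
// ... then commonCode() can be used inside steps{} blocks
// and inside post { always { ... } } blocks alike.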
I'm trying to get a declarative pipeline that looks like this:
pipeline {
environment {
ENV1 = 'default'
ENV2 = 'default also'
}
}
The catch is, I'd like to be able to override the values of ENV1 or ENV2 based on an arbitrary condition. My current need is just to base it off the branch but I could imagine more complicated conditions.
Is there any sane way to implement this? I've seen some examples online that do something like:
stages {
stage('Set environment') {
steps {
script {
ENV1 = 'new1'
}
}
}
}
But I believe this isn't setting the actual environment variable so much as it is setting a local variable that overrides later references to ENV1. The problem is, I need these environment variables to be read by a Node.js script, and those need to be real machine environment variables.
Is there any way to set environment variables dynamically in a Jenkinsfile?
Maybe you can try Groovy's ternary operator:
pipeline {
agent any
environment {
ENV_NAME = "${env.BRANCH_NAME == "develop" ? "staging" : "production"}"
}
}
or extract the conditional to a function:
pipeline {
agent any
environment {
ENV_NAME = getEnvName(env.BRANCH_NAME)
}
}
// ...
def getEnvName(branchName) {
if("int".equals(branchName)) {
return "int";
} else if ("production".equals(branchName)) {
return "prod";
} else {
return "dev";
}
}
But, actually, you can do whatever you want using Groovy syntax (at least the features supported by Jenkins).
So the most flexible option would be to play with regexes and branch names, which lets you fully support Git Flow if that's the way you do it at the VCS level.
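For example, the getEnvName helper from the previous snippet could match Git Flow branch names with regexes (just a sketch of the idea):

def getEnvName(branchName) {
    if (branchName == "master" || branchName ==~ /release\/.+/) {
        return "prod";
    } else if (branchName ==~ /(hotfix|bugfix)\/.+/) {
        return "staging";
    } else {
        return "dev";
    }
}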
Use withEnv to set environment variables dynamically for use in a certain part of your pipeline (when running your Node.js script, for example), like this (replace the contents of the sh steps with your Node.js script):
pipeline {
agent { label 'docker' }
environment {
ENV1 = 'default'
}
stages {
stage('Set environment') {
steps {
sh "echo $ENV1" // prints default
// override with hardcoded value
withEnv(['ENV1=newvalue']) {
sh "echo $ENV1" // prints newvalue
}
// override with variable
script {
def newEnv1 = 'new1'
withEnv(['ENV1=' + newEnv1]) {
sh "echo $ENV1" // prints new1
}
}
}
}
}
}
Here is the correct syntax to conditionally set a variable in the environment section.
environment {
MASTER_DEPLOY_ENV = "TEST" // Likely set as a pipeline parameter
RELEASE_DEPLOY_ENV = "PROD" // Likely set as a pipeline parameter
DEPLOY_ENV = "${env.BRANCH_NAME == 'master' ? env.MASTER_DEPLOY_ENV : env.RELEASE_DEPLOY_ENV}"
CONFIG_ENV = "${env.BRANCH_NAME == 'master' ? 'MASTER' : 'RELEASE'}"
}
I managed to get this working by explicitly calling shell in the environment section, like so:
UPDATE_SITE_REMOTE_SUFFIX = sh(returnStdout: true, script: "if [ \"$GIT_BRANCH\" == \"develop\" ]; then echo \"\"; else echo \"-$GIT_BRANCH\"; fi").trim()
However, I know that my Jenkins is running on *nix, so it's probably not that portable.
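A marginally more portable variant of the same trick uses the POSIX = comparison instead of bash's ==; it is still shell-based, so treat it as a sketch:

UPDATE_SITE_REMOTE_SUFFIX = sh(returnStdout: true, script: "if [ \"$GIT_BRANCH\" = \"develop\" ]; then echo \"\"; else echo \"-$GIT_BRANCH\"; fi").trim()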
Here is a way to set the environment variables with high flexibility, using maps:
stage("Environment_0") {
steps {
script {
def MY_MAP = [ME: "ASSAFP", YOU: "YOUR_NAME", HE: "HIS_NAME"]
env.var3 = "HE"
env.my_env1 = env.null_var ? "not taken" : MY_MAP."${env.var3}"
echo("env.my_env1: ${env.my_env1}")
}
}
}
This approach gives a wide variety of options, and if it is not enough, a map of maps can be used to extend it even further.
Of course, the switching can also be driven by input parameters, so the environment variables are set according to the input parameter values.
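A map-of-maps variant of the same pattern might look like the sketch below; DB_HOST, DEPLOY_ENV, and the TARGET_ENV parameter are made-up names for the example:

stage("Environment_1") {
    steps {
        script {
            def ENV_MAPS = [
                dev : [DB_HOST: "dev-db.local",  DEPLOY_ENV: "DEV"],
                prod: [DB_HOST: "prod-db.local", DEPLOY_ENV: "PROD"]
            ]
            // Pick the inner map from an input parameter (defaulting to dev),
            // then copy each entry into the build environment.
            def selected = ENV_MAPS[params.TARGET_ENV ?: "dev"]
            selected.each { name, value -> env."${name}" = value }
            echo("DB_HOST=${env.DB_HOST}, DEPLOY_ENV=${env.DEPLOY_ENV}")
        }
    }
}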
pipeline {
agent none
environment {
ENV1 = 'default'
ENV2 = 'default'
}
stages {
stage('Preparation') {
steps {
script {
ENV1 = 'foo' // or variable
ENV2 = 'bar' // or variable
}
echo ENV1
echo ENV2
}
}
stage('Build') {
steps {
sh "echo ${ENV1} and ${ENV2}"
}
}
// more stages...
}
}
This method is simpler and looks cleaner. The overridden environment variables will be applied to all other stages as well.
I tried to do it in a different way, but unfortunately it does not entirely work:
pipeline {
agent any
environment {
TARGET = "${changeRequest() ? CHANGE_TARGET:BRANCH_NAME}"
}
stages {
stage('setup') {
steps {
echo "target=${TARGET}"
echo "${BRANCH_NAME}"
}
}
}
}
Strangely enough, this works for my pull request builds (changeRequest() returns true and TARGET becomes my target branch name), but it does not work for my CI builds (where the branch name is e.g. release/201808 and the resulting TARGET evaluates to null).
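A workaround that is sometimes suggested for this situation is to skip changeRequest() entirely and rely on CHANGE_TARGET simply being unset outside of pull requests; a sketch, not verified against the setup above:

environment {
    // env.CHANGE_TARGET is only set for pull request builds; fall back to
    // the branch name for regular CI builds.
    TARGET = "${env.CHANGE_TARGET ?: env.BRANCH_NAME}"
}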