How to set Jenkins Declarative Pipeline environment variables using global variables?

I am trying to do this:
pipeline {
    agent any
    environment {
        LOCAL_BUILD_PATH = env.WORKSPACE + '/build/'
    }
    stages {
        stage('Stuff') {
            steps {
                echo LOCAL_BUILD_PATH
            }
        }
    }
}
Result:
null/build/
How can I use global environment variables such as WORKSPACE when defining my own environment variables?

So this is the method that I ended up using:
pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('Setting Variables') {
            steps {
                script {
                    LOCAL_BUILD_PATH = "$env.WORKSPACE/build"
                }
            }
        }
        stage('Print Variable') {
            steps {
                echo LOCAL_BUILD_PATH
            }
        }
    }
}

You can use something like this:
LOCAL_BUILD_PATH = "${env.WORKSPACE}/build/"
Remember: use double quotes (") so that the variable is interpolated into the string.
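For context, a minimal sketch of the original pipeline with that change applied (everything else is assumed unchanged):
pipeline {
    agent any
    environment {
        // Double quotes enable Groovy string interpolation, so env.WORKSPACE is expanded here
        LOCAL_BUILD_PATH = "${env.WORKSPACE}/build/"
    }
    stages {
        stage('Stuff') {
            steps {
                echo LOCAL_BUILD_PATH
            }
        }
    }
}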

I think you should use:
steps {
    echo "${env.LOCAL_BUILD_PATH}"
}
because in the environment block you are defining environment variables, which are later accessible as env.your-variable-name.

This is a scope issue. Declare the variable at the top and set it to null, something like:
def var = null
You should then be able to set the value in one block/closure/stage and access it in another.
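A minimal sketch of that idea in a declarative pipeline (the variable and stage names are placeholders; note the assignments still have to live inside script blocks):
def buildLabel = null   // declared at the top, outside the pipeline block

pipeline {
    agent any
    stages {
        stage('Set') {
            steps {
                script {
                    buildLabel = "build-${env.BUILD_NUMBER}"
                }
            }
        }
        stage('Use') {
            steps {
                script {
                    echo "Reusing the value set earlier: ${buildLabel}"
                }
            }
        }
    }
}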

Related

How to add a shell environment variable to a global env variable in Jenkins?

I have the below Jenkins pipeline and it is working fine:
pipeline {
    agent {
        node {
            label 'test'
        }
    }
    environment {
        ansible_pass = credentials('ans-pass')
    }
    stages {
        stage('Load Vars') {
            steps {
                script {
                    configFileProvider([configFile(fileId: "${ENV_CONFIG_ID}", targetLocation: "${ENV_CONFIG_FILE}")]) {
                        load "${ENV_CONFIG_FILE}"
                    }
                }
            }
        }
        stage('svc install') {
            steps {
                sshagent(["${SSH_KEY_ID}"]) {
                    sh '''
                    ansible-playbook main.yaml -i hosts.yaml -b --vault-password-file $ansible_pass
                    '''
                }
            }
        }
    }
}
Now I want to pass the credentials ID from the managed config file instead of hardcoding it:
ansible_pass = credentials('ans-pass')
The ID ansible-pass1 should come from the managed files (Config File Provider). I already have the following in a managed file:
env.ARTI_TOKEN_ID = 'art-token'
env.PLAYBOOK_REPO = 'dep.stg'
env.SSH_KEY_ID = 'test_key'
Now how do I add this credentials ID to that file? I tried the following:
env.ansible_pass = 'ansible-pass1'
and in the Jenkins pipeline referred to it as below:
environment {
    ansible_pass = 'credentials($ansible_pass)'
}
But it didn't work. Could you please advise?
As you are using secrets in a config file, it is better to use the 'Secret file' credential type in Jenkins; see the Jenkins documentation on the different credential types.
Also, the correct way of setting credentials is:
environment {
    ansible_pass = credentials('credentials-id-here')
}
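For example, a hedged sketch of the 'Secret file' approach applied to the playbook call from the question ('ansible-vault-pass' is a hypothetical credential ID of type Secret file):
pipeline {
    agent any
    environment {
        // credentials() on a "Secret file" credential exposes the path to a temporary copy of the file
        ANSIBLE_VAULT_FILE = credentials('ansible-vault-pass')
    }
    stages {
        stage('svc install') {
            steps {
                sh 'ansible-playbook main.yaml -i hosts.yaml -b --vault-password-file "$ANSIBLE_VAULT_FILE"'
            }
        }
    }
}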

Call Groovy method inside Jenkinsfile's environment block

I have a Jenkinsfile that looks like this:
static def randomUser() {
    final def POOL = ["a".."z"].flatten()
    final Random rand = new Random(System.currentTimeMillis())
    return (0..5).collect { POOL[rand.nextInt(POOL.size())] }.join("")
}

pipeline {
    agent any
    environment {
        //CREATOR = sh(script: "randomUser()", returnStdout: true)
        CREATOR = "fixed-for-now"
        ...
    }
    stages {
        ...
        stage("Terraform Plan") {
            when { not { branch "master" } }
            steps {
                sh "terraform plan -out=plan.out -var creator=${CREATOR} -var-file=env.tfvars"
            }
        }
        ...
        stage("Terraform Destroy") {
            when { not { branch "master" } }
            steps {
                sh "terraform destroy -auto-approve -var creator=${CREATOR} -var-file=env.tfvars"
            }
        }
        ...
    }
}
My problem is that I cannot call randomUser() inside the environment block. I need the CREATOR variable to be a new random string every time. I would prefer to have CREATOR as a global environment variable since it is used in many stages.
Is there a way to achieve (or work around) this?
Given your specific use case, it might be better to use the CREATOR variable as a parameter instead of an environment variable, and to assign its defaultValue as the return of your randomUser method.
pipeline {
    agent any
    parameters {
        string(name: 'CREATOR', defaultValue: randomUser())
    }
    ...
}
You can then use it in your pipeline like so:
stage("Terraform Plan") {
when { not { branch "master" } }
steps {
sh "terraform plan -out=plan.out -var creator=${params.CREATOR} -var-file=env.tfvars "
}
}
This way you have a correctly assigned and useful defaultValue for CREATOR, but with the ability to override it per-pipeline when necessary.
You can achieve this by removing the environment block and defining the global variable CREATOR before the pipeline block:
def CREATOR

pipeline {
    agent any
    stages {
        stage('Initialize the variables') {
            steps {
                script {
                    CREATOR = randomUser()
                }
            }
        }
        ...

How to get environment variables that have been set in a Jenkinsfile

In a Jenkinsfile, I have no idea why and how this simple code doesn't work:
node {
    environment {
        ENV_1 = 'value1'
    }
    env.ENV_2 = 'value2'
    echo "${env.ENV_1}" // does not work (null)
    echo "${env.ENV_2}" // works (value2)
}
Does environment {} work the same way as env.VAR = xx?
Edit: is it related to the fact that I'm using a Pipeline job, not a Multibranch Pipeline?
The environment closure is only for declarative pipelines.
In scripted pipelines, use the withEnv step to define the scope of an environment variable:
node {
    withEnv(['ENV_1=value2']) {
        echo env.ENV_1
    }
}
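To contrast the two mechanisms from the question, a small sketch (values are arbitrary): env.X assignments persist for the rest of the build, while withEnv only applies inside its closure.
node {
    env.ENV_2 = 'value2'          // visible from here until the end of the build
    echo env.ENV_2                // value2

    withEnv(['ENV_1=value1']) {   // scoped override
        echo env.ENV_1            // value1
    }
    echo "${env.ENV_1}"           // null again outside the withEnv block
}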

How do you handle global variables in a declarative pipeline?

I previously asked a question about how to overwrite variables defined in an environment directive, and it seems that's not possible.
I want to set a variable in one stage and have it accessible to other stages.
In a declarative pipeline it seems the only way to do this is in a script {} block.
For example, I need to set some vars after checkout. So at the end of the checkout stage I have a script {} block that sets those vars, and they are accessible in other stages.
This works, but it feels wrong. For the sake of readability I'd much prefer to declare these variables at the top of the pipeline and have them overwritten. That would mean having a "set variables" stage at the beginning with a script {} block that just defines vars, which is ugly.
I'm pretty sure I'm missing an obvious feature here. Do declarative pipelines have a global variable feature, or must I use script {}?
This works without an error:
def my_var

pipeline {
    agent any
    environment {
        REVISION = ""
    }
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    echo "$my_var"
                }
            }
        }
    }
}
Like @mkobit says, you can define the variable at global level, outside the pipeline block. Have you tried that?
def my_var

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                script {
                    println(my_var)
                }
            }
        }
    }
}
For strings, add it to the 'environment' block:
pipeline {
    environment {
        myGlobalValue = 'foo'
    }
}
But for non-string variables, the easiest solution I've found for declarative pipelines is to wrap the values in a method.
Example:
pipeline {
    // Now I can reference myGlobalValue() in my pipeline.
    ...
}

def myGlobalValue() {
    return ['A', 'list', 'of', 'values']
}

// I can also reference myGlobalValue() in other methods below
def myGlobalSet() {
    return myGlobalValue().toSet()
}
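A hedged sketch of how such a method could then be consumed inside a stage (the stage name is a placeholder):
pipeline {
    agent any
    stages {
        stage('Use the list') {
            steps {
                script {
                    // The method can be called anywhere a Groovy expression is allowed
                    for (v in myGlobalValue()) {
                        echo "value: ${v}"
                    }
                }
            }
        }
    }
}

def myGlobalValue() {
    return ['A', 'list', 'of', 'values']
}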
@Sameera's answer is good for most use cases. I had a problem with the append operator += though. So this did NOT work (MissingPropertyException):
def globalvar = ""

pipeline {
    agent any
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
But this did work:
globalvar = ""

pipeline {
    agent any
    stages {
        stage("whatever") {
            steps {
                script {
                    globalvar += "x"
                }
            }
        }
    }
}
The correct syntax is:
For a global static variable: somewhere at the top of the file, before pipeline {, declare:
def MY_VAR = 'something'
For a global variable that you can edit and reuse across stages: at the top of your file, add an import for Field:
import groovy.transform.Field
then, somewhere before pipeline {, declare:
@Field def myVar
Then, inside your step, inside a script block, set the variable:
stage('some stage') {
    steps {
        script {
            myVar = 'I mutate myVar with success'
        }
    }
}
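Putting those pieces together, a minimal sketch might look like this (stage names and values are placeholders):
import groovy.transform.Field

@Field def myVar = null

pipeline {
    agent any
    stages {
        stage('Set') {
            steps {
                script {
                    myVar = 'set in the first stage'
                }
            }
        }
        stage('Read') {
            steps {
                script {
                    echo "myVar is now: ${myVar}"
                }
            }
        }
    }
}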
To go even further, you can declare functions before the pipeline {:
def initSteps() {
    cleanWs()
    checkout scm
}
and then:
stages {
    stage('Init') {
        steps {
            initSteps()
        }
    }
}
This worked for me:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    env.my_var = 'value1'
                }
            }
        }
        stage('Example2') {
            steps {
                echo "${env.my_var}"
            }
        }
    }
}

Conditional environment variables in Jenkins Declarative Pipeline

I'm trying to get a declarative pipeline that looks like this:
pipeline {
    environment {
        ENV1 = 'default'
        ENV2 = 'default also'
    }
}
The catch is, I'd like to be able to override the values of ENV1 or ENV2 based on an arbitrary condition. My current need is just to base it on the branch, but I could imagine more complicated conditions.
Is there any sane way to implement this? I've seen some examples online that do something like:
stages {
    stage('Set environment') {
        steps {
            script {
                ENV1 = 'new1'
            }
        }
    }
}
But I believe this isn't setting the actual environment variable so much as setting a local variable that shadows later references to ENV1. The problem is that I need these environment variables to be read by a Node.js script, and those need to be real machine environment variables.
Is there any way to set environment variables dynamically in a Jenkinsfile?
Maybe you can try Groovy's ternary operator:
pipeline {
    agent any
    environment {
        ENV_NAME = "${env.BRANCH_NAME == "develop" ? "staging" : "production"}"
    }
}
or extract the conditional to a function:
pipeline {
    agent any
    environment {
        ENV_NAME = getEnvName(env.BRANCH_NAME)
    }
}

// ...

def getEnvName(branchName) {
    if ("int".equals(branchName)) {
        return "int";
    } else if ("production".equals(branchName)) {
        return "prod";
    } else {
        return "dev";
    }
}
But, actually, you can do whatever you want using Groovy syntax (at least the features supported by Jenkins).
So the most flexible option would be to play with regular expressions and branch names, so that you can fully support Git Flow if that's the way you do it at the VCS level.
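For instance, a hedged sketch of such a regex-based mapping (the branch patterns and environment names are assumptions):
def getEnvName(branchName) {
    // Map Git Flow style branch names onto deployment environments
    if (branchName ==~ /release\/.+/) {
        return "staging"
    } else if (branchName == "master" || branchName ==~ /hotfix\/.+/) {
        return "prod"
    }
    return "dev"
}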
Use withEnv to set environment variables dynamically for a certain part of your pipeline (when running your Node.js script, for example), like this (replace the contents of the sh steps with your Node.js script):
pipeline {
    agent { label 'docker' }
    environment {
        ENV1 = 'default'
    }
    stages {
        stage('Set environment') {
            steps {
                sh "echo $ENV1" // prints default
                // override with hardcoded value
                withEnv(['ENV1=newvalue']) {
                    sh "echo $ENV1" // prints newvalue
                }
                // override with variable
                script {
                    def newEnv1 = 'new1'
                    withEnv(['ENV1=' + newEnv1]) {
                        sh "echo $ENV1" // prints new1
                    }
                }
            }
        }
    }
}
Here is the correct syntax to conditionally set a variable in the environment section.
environment {
    MASTER_DEPLOY_ENV = "TEST"   // Likely set as a pipeline parameter
    RELEASE_DEPLOY_ENV = "PROD"  // Likely set as a pipeline parameter
    DEPLOY_ENV = "${env.BRANCH_NAME == 'master' ? env.MASTER_DEPLOY_ENV : env.RELEASE_DEPLOY_ENV}"
    CONFIG_ENV = "${env.BRANCH_NAME == 'master' ? 'MASTER' : 'RELEASE'}"
}
I managed to get this working by explicitly calling the shell in the environment section, like so:
UPDATE_SITE_REMOTE_SUFFIX = sh(returnStdout: true, script: "if [ \"$GIT_BRANCH\" == \"develop\" ]; then echo \"\"; else echo \"-$GIT_BRANCH\"; fi").trim()
However, I know that my Jenkins runs on *nix, so it's probably not that portable.
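In context, that line sits inside the environment block; a sketch of the whole thing (using the POSIX-portable = instead of bash's ==, everything else taken from the snippet above):
pipeline {
    agent any
    environment {
        // sh(returnStdout: true) runs at pipeline start-up and captures the command's stdout
        UPDATE_SITE_REMOTE_SUFFIX = sh(returnStdout: true,
            script: 'if [ "$GIT_BRANCH" = "develop" ]; then echo ""; else echo "-$GIT_BRANCH"; fi').trim()
    }
    stages {
        stage('Show suffix') {
            steps {
                echo "Suffix: '${env.UPDATE_SITE_REMOTE_SUFFIX}'"
            }
        }
    }
}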
Here is a way to set the environment variables with high flexibility, using maps:
stage("Environment_0") {
steps {
script {
def MY_MAP = [ME: "ASSAFP", YOU: "YOUR_NAME", HE: "HIS_NAME"]
env.var3 = "HE"
env.my_env1 = env.null_var ? "not taken" : MY_MAP."${env.var3}"
echo("env.my_env1: ${env.my_env1}")
}
}
}
This approach gives a wide variety of options, and if it is not enough, a map of maps can be used to widen the range even more.
Of course, the switching can also be driven by input parameters, so that the environment variables are set according to the parameter values; see the sketch below.
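A hedged sketch of that map-of-maps idea driven by a hypothetical TIER parameter (the names and URLs are made up):
stage("Environment_1") {
    steps {
        script {
            // Two-level lookup: the first key selects a tier, the second a setting
            def SETTINGS = [
                dev : [url: "https://dev.example.com",  replicas: "1"],
                prod: [url: "https://prod.example.com", replicas: "3"]
            ]
            def tier = params.TIER ?: "dev"   // assumes a TIER parameter is defined elsewhere
            env.TARGET_URL = SETTINGS[tier].url
            env.REPLICAS   = SETTINGS[tier].replicas
            echo "Deploying ${env.REPLICAS} replica(s) to ${env.TARGET_URL}"
        }
    }
}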
pipeline {
    agent none
    environment {
        ENV1 = 'default'
        ENV2 = 'default'
    }
    stages {
        stage('Preparation') {
            steps {
                script {
                    ENV1 = 'foo' // or a variable
                    ENV2 = 'bar' // or a variable
                }
                echo ENV1
                echo ENV2
            }
        }
        stage('Build') {
            steps {
                sh "echo ${ENV1} and ${ENV2}"
            }
        }
        // more stages...
    }
}
This method is simpler and looks cleaner. The overridden environment variables will be applied to all subsequent stages as well.
I tried to do it in a different way, but unfortunately it does not entirely work:
pipeline {
    agent any
    environment {
        TARGET = "${changeRequest() ? CHANGE_TARGET : BRANCH_NAME}"
    }
    stages {
        stage('setup') {
            steps {
                echo "target=${TARGET}"
                echo "${BRANCH_NAME}"
            }
        }
    }
}
Strangely enough, this works for my pull request builds (changeRequest() returns true and TARGET becomes my target branch name), but it does not work for my CI builds (in which case the branch name is e.g. release/201808 but the resulting TARGET evaluates to null).
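A possible workaround (a sketch, not verified against the setup above) is to skip changeRequest() and rely on CHANGE_TARGET only being defined for pull request builds, using Groovy's Elvis operator:
pipeline {
    agent any
    environment {
        // On PR builds CHANGE_TARGET is set; on branch builds it is null, so fall back to BRANCH_NAME
        TARGET = "${env.CHANGE_TARGET ?: env.BRANCH_NAME}"
    }
    stages {
        stage('setup') {
            steps {
                echo "target=${TARGET}"
            }
        }
    }
}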
