Jenkins declarative pipeline, condition for using specific nodes

My non-working (declarative) pipeline:
def parentWorkspace = "${params.WS}" // comes from upstream job
def parentNode = "${params.NODENAME}" // comes from upstream job
pipeline {
    agent none
    stages {
        stage("checkout source and run tests") {
            steps {
                script {
                    if (parentWorkspace != "null") {
                        agent {
                            node {
                                customWorkspace "${parentWorkspace}"
                                label "${parentNode}"
                            }
                        }
                    } else {
                        agent {
                            label 'linuxVM'
                        }
                    }
                }
                steps {
                    git url: gitLocation, branch: branchName, poll: false
                    sh "mvn test ..."
                }
            }
        }
    }
}
if the "parentworkspace" varibale is not null, use custom workspace with provided parentworkspace path and node label. Otherwise use default workspace and node with label "linux" . is it possible ?
tried few options couldn't really make it work. any help is appreciated.

Try to define the if condition outside your pipeline block like below:
def parentWorkspace = "${params.WS}"
def parentNode = "${params.NODENAME}"
script {
    if (parentWorkspace != "null") {
        label "${parentNode}"
    } else {
        label 'linuxVM'
    }
}
pipeline {
//your pipeline code
}
For workspace allocation, use ws("${params.WS}").
Reply back if this doesn't solve the issue.
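For reference, here is a minimal sketch of how that advice could be put together (the fallback label linuxVM is taken from the question, while gitLocation and branchName are assumed to be defined elsewhere in the original Jenkinsfile; this is untested against the upstream job setup):
def agentLabel = (params.NODENAME && params.NODENAME != 'null') ? params.NODENAME : 'linuxVM'
pipeline {
    agent { label agentLabel }
    stages {
        stage('checkout source and run tests') {
            steps {
                script {
                    if (params.WS && params.WS != 'null') {
                        // reuse the workspace path handed down by the upstream job
                        ws(params.WS) {
                            git url: gitLocation, branch: branchName, poll: false
                            sh 'mvn test ...'
                        }
                    } else {
                        // fall back to the default workspace on the fallback node
                        git url: gitLocation, branch: branchName, poll: false
                        sh 'mvn test ...'
                    }
                }
            }
        }
    }
}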

I found this works perfectly for me for the custom workspace issue:
agent {
    label "whatever"
    customWorkspace params.WS
}
If params.WS is null or an empty string, you get the original workspace, since the value used for customWorkspace can be a relative path with respect to the original workspace.

Related

Jenkins declarative pipeline: if-else statement inside agent directive

I've tried to use a different agent for different environments (dev/prod) using if-else inside the agent directive, but I'm getting errors with the pipeline script below. Any help is much appreciated!
pipeline {
    agent {
        if (env.ENVIRONMENT == 'prod') {
            label {
                label "EC2-1"
                customWorkspace "/home/ubuntu/eks-prod-backend/"
            }
        }
        else if (env.ENVIRONMENT == 'dev') {
            label {
                label "EC2-2"
                customWorkspace "/home/ubuntu/eks-dev-backend/"
            }
        }
    }
}
This is the approach I would suggest. Define a variable before the "pipeline" block, for example:
def USED_LABEL = env.ENVIRONMENT == 'prod' ? "EC2-1" : "EC2-2"
def CUSTOM_WORKSPACE = env.ENVIRONMENT == 'prod' ? "/home/ubuntu/eks-prod-backend/" : "/home/ubuntu/eks-dev-backend/"
Then, just use it like this:
pipeline {
    agent {
        label USED_LABEL
        customWorkspace CUSTOM_WORKSPACE
    }
}
I am not sure whether a nested label block is needed, but you hopefully get the point: use variables defined before the pipeline execution.
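If the flat agent block above is rejected by the declarative parser, a variant that wraps the options in node (the agent option that accepts customWorkspace alongside a label) would look roughly like this; it is only a sketch built on the same variables, with an echo stage added for illustration:
def USED_LABEL = env.ENVIRONMENT == 'prod' ? "EC2-1" : "EC2-2"
def CUSTOM_WORKSPACE = env.ENVIRONMENT == 'prod' ? "/home/ubuntu/eks-prod-backend/" : "/home/ubuntu/eks-dev-backend/"
pipeline {
    agent {
        node {
            label USED_LABEL
            customWorkspace CUSTOM_WORKSPACE
        }
    }
    stages {
        stage('Build') {
            steps {
                // confirm which node and workspace were selected
                echo "Running on ${USED_LABEL} in ${CUSTOM_WORKSPACE}"
            }
        }
    }
}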
Maybe something like this could help you, in case you have only two environments:
pipeline {
    agent {
        label {
            label env.ENVIRONMENT == 'prod' ? "EC2-1" : "EC2-2"
            customWorkspace env.ENVIRONMENT == 'prod' ? "/home/ubuntu/eks-prod-backend/" : "/home/ubuntu/eks-dev-backend/"
        }
    }
    stages {
        stage("Build") {
            steps {
                echo "Hello, World!"
            }
        }
    }
}
Otherwise, you can check this thread; it will perhaps help you.

Using Declarative Jenkins pipeline from SCM - Subversion - How to get svn Url?

This looks like a very basic question about Jenkins usage.
I have a Jenkinsfile located in the root folder of my Subversion repository tree. There are many branches (versions/tags) of the product, and the same Jenkinsfile is in every one of them. So far a very basic setup, I suppose.
I need to provide some steps with the current Subversion repository branch/URL.
There are some similar questions like this or this, but none is a working solution for Subversion.
pipeline {
    agent { label 'master' }
    stages {
        stage("test") {
            steps {
                echo "Start pipeline"
                // commented-out = not working
                //echo scm.getUserRemoteConfigs()
                //echo scm
                script {
                    println "Current svn url/branch: " //??? + scm.getUserRemoteConfigs()[0].getUrl()
                }
            }
        }
    }
}
It will be like this
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    def s = checkout scm
                    if (s.GIT_URL != null) {
                        print s.GIT_URL
                        print s.GIT_BRANCH
                        print s.GIT_COMMIT
                    }
                    else if (s.SVN_URL != null) {
                        print s.SVN_REVISION
                        print s.SVN_REVISION_1
                        print s.SVN_URL
                        print s.SVN_URL_1
                    }
                }
            }
        }
    }
}
Note: this works fine with both Git and SVN, just with slightly different keys in the returned map.
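If the URL is needed in later stages or shell steps, one option is to copy the value returned by checkout into the environment. A small sketch, assuming an SVN (or Git) checkout; the stage names are arbitrary:
pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                script {
                    def s = checkout scm
                    // keep the URL around for later stages and shell steps
                    env.REPO_URL = s.SVN_URL ?: s.GIT_URL
                }
            }
        }
        stage('use it') {
            steps {
                sh 'echo "building from $REPO_URL"'
            }
        }
    }
}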

Only one pipeline of a set of multiple pipelines should run

I am trying to configure different pipelines in Jenkins 2. My problem is that all my pipelines need the same workspace path (configured with customWorkspace in my configuration script).
Now I have to prevent more than one pipeline from running at the same time.
My search always leads me back to the same pages, which unfortunately do not help me :-(
Has anyone already solved the same problem and can give me a hint?
Thank you very much
def locked = false
pipeline {
    agent any
    stages {
        stage('check workspace lock status') {
            steps {
                script {
                    locked = fileExists file: '.lock'
                    if (locked == false) {
                        touch file: '.lock'
                    }
                }
            }
        }
        stage('build') {
            when {
                beforeAgent true
                expression { locked == false }
            }
            steps {
                // do something you want
            }
        }
    }
    post {
        always {
            sh 'rm -f .lock'
        }
    }
}
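One caveat with the marker-file approach above: the always post block removes .lock even when this run skipped the build stage because another run had created the file, which releases the other run's lock early. If the Lockable Resources plugin is available, its lock step handles the mutual exclusion without that bookkeeping; a rough sketch, where the resource name shared-workspace is an assumption:
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                // every pipeline that shares the workspace requests the same resource name,
                // so only one of them can run this block at a time
                lock(resource: 'shared-workspace') {
                    echo 'do something you want'
                }
            }
        }
    }
}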

How to access folder variables across pipeline stages?

I am trying to create multiple pipeline jobs under a folder. Under this folder I have created some folder properties. I am having a hard time using these folder properties across multiple stages in a job.
Plugin used: https://wiki.jenkins.io/display/JENKINS/Folder+Properties+Plugin
def region
pipeline {
    agent any
    stages {
        stage('Assign values to global properties') {
            steps {
                withFolderProperties {
                    region = "${env.appRegion}"
                }
            }
        }
        stage('Print') {
            steps {
                print(region)
            }
        }
    }
}
Error:
Expected a step # line 8, column 21.
region = "${env.appRegion}"
Thanks in Advance
region = "${env.appRegion}" is not pipeline reserved name of step or directive. It's groovy statement. You should put them inside script step. If you used Scripted Pipeline you can put any kinds of groovy statement in anywhere. But for Declarative Pipeline any groovy statement should wrapped in script step.
steps {
    script {
        withFolderProperties {
            region = "${env.appRegion}"
        }
    }
}
steps {
    withFolderProperties {
        script {
            region = "${env.appRegion}"
        }
    }
}
I'm not sure which of the two code blocks above works, but you can give them a try.
#!groovy
def CI_NAMESPACE = ""
withFolderProperties {
    CI_NAMESPACE = "${env.CI_NAMESPACE}"
}
println "CI_NAMESPACE = ${CI_NAMESPACE}"
if (CI_NAMESPACE == '' || CI_NAMESPACE == null || CI_NAMESPACE == 'null') {
    currentBuild.result = 'ABORTED'
    error('CI_NAMESPACE is not defined in the Folder Properties plugin!')
}
pipeline {
    agent any // an agent section is required; adjust the label to your setup
    environment {
        CI_NAMESPACE = "${CI_NAMESPACE}"
    }
    stages {
        stage('Test') {
            steps {
                echo "CI_NAMESPACE: ${env.CI_NAMESPACE}"
            }
        }
    }
}

Conditional environment variables in Jenkins Declarative Pipeline

I'm trying to get a declarative pipeline that looks like this:
pipeline {
    environment {
        ENV1 = 'default'
        ENV2 = 'default also'
    }
}
The catch is, I'd like to be able to override the values of ENV1 or ENV2 based on an arbitrary condition. My current need is just to base it off the branch but I could imagine more complicated conditions.
Is there any sane way to implement this? I've seen some examples online that do something like:
stages {
    stage('Set environment') {
        steps {
            script {
                ENV1 = 'new1'
            }
        }
    }
}
But I believe this isn't setting the actual environment variable so much as setting a local variable that shadows later references to ENV1. The problem is, I need these environment variables to be read by a Node.js script, and for that they need to be real machine environment variables.
Is there any way to set environment variables dynamically in a Jenkinsfile?
Maybe you can try Groovy's ternary operator:
pipeline {
    agent any
    environment {
        ENV_NAME = "${env.BRANCH_NAME == "develop" ? "staging" : "production"}"
    }
}
or extract the conditional to a function:
pipeline {
    agent any
    environment {
        ENV_NAME = getEnvName(env.BRANCH_NAME)
    }
}
// ...
def getEnvName(branchName) {
    if ("int".equals(branchName)) {
        return "int";
    } else if ("production".equals(branchName)) {
        return "prod";
    } else {
        return "dev";
    }
}
But, actually, you can do whatever you want using Groovy syntax (at least the features supported by Jenkins).
So the most flexible option would be to play with regexes and branch names, so you can fully support Git Flow if that's the way you do it at the VCS level.
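As an illustration of the regex idea (the Git Flow style branch naming below is an assumption, not something from the original setup):
def getEnvName(branchName) {
    // map branch name patterns onto deployment environments
    if (branchName == 'master' || branchName ==~ /release\/.+/) {
        return "prod"
    } else if (branchName ==~ /hotfix\/.+/) {
        return "staging"
    } else {
        return "dev"
    }
}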
Use withEnv to set environment variables dynamically for a certain part of your pipeline (when running your Node.js script, for example), like this (replace the contents of the sh steps with your Node.js script):
pipeline {
    agent { label 'docker' }
    environment {
        ENV1 = 'default'
    }
    stages {
        stage('Set environment') {
            steps {
                sh "echo $ENV1" // prints default
                // override with hardcoded value
                withEnv(['ENV1=newvalue']) {
                    sh "echo $ENV1" // prints newvalue
                }
                // override with variable
                script {
                    def newEnv1 = 'new1'
                    withEnv(['ENV1=' + newEnv1]) {
                        sh "echo $ENV1" // prints new1
                    }
                }
            }
        }
    }
}
Here is the correct syntax to conditionally set a variable in the environment section.
environment {
    MASTER_DEPLOY_ENV = "TEST"   // Likely set as a pipeline parameter
    RELEASE_DEPLOY_ENV = "PROD"  // Likely set as a pipeline parameter
    DEPLOY_ENV = "${env.BRANCH_NAME == 'master' ? env.MASTER_DEPLOY_ENV : env.RELEASE_DEPLOY_ENV}"
    CONFIG_ENV = "${env.BRANCH_NAME == 'master' ? 'MASTER' : 'RELEASE'}"
}
I managed to get this working by explicitly calling shell in the environment section, like so:
UPDATE_SITE_REMOTE_SUFFIX = sh(returnStdout: true, script: "if [ \"$GIT_BRANCH\" == \"develop\" ]; then echo \"\"; else echo \"-$GIT_BRANCH\"; fi").trim()
However, I know that my Jenkins runs on *nix, so this is probably not that portable.
Here is a way to set the environment variables with high flexibility, using maps:
stage("Environment_0") {
steps {
script {
def MY_MAP = [ME: "ASSAFP", YOU: "YOUR_NAME", HE: "HIS_NAME"]
env.var3 = "HE"
env.my_env1 = env.null_var ? "not taken" : MY_MAP."${env.var3}"
echo("env.my_env1: ${env.my_env1}")
}
}
}
This approach gives a wide variety of options, and if it is not enough, a map of maps can be used to widen the range even further.
Of course, the switching can also be driven by input parameters, so the environment variables are set according to the parameter values, as in the sketch below.
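For example, driven by a choice parameter (the parameter name, choices and URLs below are made up for illustration):
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET', choices: ['dev', 'staging', 'prod'], description: 'deployment target')
    }
    stages {
        stage('Environment_0') {
            steps {
                script {
                    // look the parameter up in a map, as in the example above
                    def URL_MAP = [dev: 'https://dev.example.com', staging: 'https://staging.example.com', prod: 'https://example.com']
                    env.DEPLOY_URL = URL_MAP[params.TARGET]
                    echo "env.DEPLOY_URL: ${env.DEPLOY_URL}"
                }
            }
        }
    }
}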
pipeline {
    agent none
    environment {
        ENV1 = 'default'
        ENV2 = 'default'
    }
    stages {
        stage('Preparation') {
            steps {
                script {
                    ENV1 = 'foo' // or a variable
                    ENV2 = 'bar' // or a variable
                }
                echo ENV1
                echo ENV2
            }
        }
        stage('Build') {
            steps {
                sh "echo ${ENV1} and ${ENV2}"
            }
        }
        // more stages...
    }
}
This method is simpler and looks better. The overridden environment variables are applied to all the other stages as well.
I tried to do it in a different way, but unfortunately it does not entirely work:
pipeline {
    agent any
    environment {
        TARGET = "${changeRequest() ? CHANGE_TARGET : BRANCH_NAME}"
    }
    stages {
        stage('setup') {
            steps {
                echo "target=${TARGET}"
                echo "${BRANCH_NAME}"
            }
        }
    }
}
Strangely enough, this works for my pull request builds (changeRequest() returns true and TARGET becomes my target branch name), but it does not work for my CI builds (where the branch name is e.g. release/201808 but the resulting TARGET evaluates to null).
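A possible workaround, assuming the null TARGET comes from CHANGE_TARGET simply not being set on non-PR builds, is to fall back with Groovy's Elvis operator instead of calling changeRequest():
pipeline {
    agent any
    environment {
        // use the PR target branch when it exists, otherwise the branch being built
        TARGET = "${env.CHANGE_TARGET ?: env.BRANCH_NAME}"
    }
    stages {
        stage('setup') {
            steps {
                echo "target=${TARGET}"
            }
        }
    }
}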
