Jenkins pipeline agent to use environment variable - jenkins

Can an agent label make use of an environment variable? Something like this:
pipeline {
    environment {
        SLAVE_NODE = 'MY_COMPUTER_NAME'
    }
    agent { label $SLAVE_NODE }
    ...
Since the editor for pipelines is so small, I would like the space that is visible by default to be the environment block, so that when I copy a Jenkins job I only need to adjust a few environment variables used further down in the script. I think I have tried all the obvious syntax possibilities by now.

Stumbled upon it by trial and error (and found a duplicate here): add a string parameter to your Jenkins job (e.g. jenkinsNode) and use it in your script:
agent { label "${jenkinsNode}" }

Related

Expand variable inside Jenkins pipeline tool directive

I would like to be able to programmatically decide which tool will be installed in an Agent for a Jenkins pipeline.
This is something I have that's working today:
withEnv(["JAVA_HOME=${tool 'OPENJDK11'}",
"PATH+JAVA=${tool 'OPENJDK11'}"]) {
... do stuff ...
}
So I have a global tool OPENJDK11 installed, along with OPENJDK14, and now I would like to change the Groovy script to be able to decide which JDK to install.
So before the part above I have saved the name of the tool in a variable jdkToInstall, how am I able to reference this variable inside the tool directive?
I have tried:
${tool '${jdkToInstall}'} and ${tool '$jdkToInstall'}.
That doesn't expand my variable, so I get an error message saying it can't find the tool "$jdkToInstall".
I also tried string concatenation, but that ended up with a similar error message, plus sign and all.
You only need one level of expansion (${}); pass the Groovy variable directly to the tool step instead of quoting it again. The following works as expected:
withEnv(["JAVA_HOME=${tool jdkToInstall}", "PATH+JAVA=${tool jdkToInstall}"]) {
... do stuff ...
}
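As a usage sketch, jdkToInstall can be derived earlier in the script; the branch-based rule below is only an assumption for illustration, while the tool names OPENJDK11/OPENJDK14 come from the question:
// Decide which globally configured JDK tool to use, then expand it once via the tool step.
def jdkToInstall = (env.BRANCH_NAME == 'legacy') ? 'OPENJDK11' : 'OPENJDK14'

withEnv(["JAVA_HOME=${tool jdkToInstall}", "PATH+JAVA=${tool jdkToInstall}"]) {
    sh '"$JAVA_HOME/bin/java" -version'
}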

How to run pipeline with constant variables between production and non production

I have my Jenkins pipelines working and all in source code management. Within my pipelines I have some constants: values that never or rarely change but that the pipelines require. About 90% of them do not change, but some do change based on the environment type (production/pre-production/test etc.).
The problem I have right now is that I would like to take the same code from non-production to production without having to change things like the file server details, since production and non-production use different file servers. As it stands, one has to remember to change the file server when promoting code to production. Is it possible to have something like a configuration file that the pipeline can read the values from? I do not want to change the pipelines, or want to make as few changes as possible, when my code moves from non-production to production.
Thanks in advance.
Install the Pipeline Utility Steps plugin and create a properties file each in the repositories/branches for production and non-production respectively. Then read and populate the properties from the file in your pipeline during the build.
Sample server.properties
fileServerUrl=ftp://prod.company.net
fileServerPort=21
# more properties here
Sample Jenkinsfile
pipeline {
    agent {
        label 'production'
    }
    stages {
        stage('prepare-env') {
            steps {
                script {
                    def props = readProperties file: 'server.properties'
                    env.fileServerUrl = props.fileServerUrl
                    env.fileServerPort = props.fileServerPort
                    // more properties here
                }
            }
        }
        stage('deploy') {
            steps {
                echo "INFO: The file server is ${env.fileServerUrl}:${env.fileServerPort}"
                // do stuff here
            }
        }
    }
}
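If both environments must build from the same branch, a variation (a sketch; the ENV choice parameter and per-environment file names are assumptions, not part of the answer) is to pick the properties file inside the prepare-env script block:
// Hypothetical: one properties file per environment, selected by a job parameter.
def props = readProperties file: "server-${params.ENV}.properties"  // e.g. server-prod.properties
env.fileServerUrl  = props.fileServerUrl
env.fileServerPort = props.fileServerPort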

Jenkins - Add a dynamic label to a build

I am trying to achieve the following task in Jenkins:
1) Build a maven project
2) When running the test cases I print certain messages to the console output
3) Parse the console output of the build and determine if certain patterns exist in the output
4) If the pattern exists I want to label the build with a specific string
I have achieved steps 1-3. I am not able to create a dynamic label and tie it to a build. I have a Groovy script that parses the console output and determines if the pattern exists in the build's output.
Bamboo provides this feature to label a build based on regular expressions present in the build's console output.
Link - https://confluence.atlassian.com/bamboo0606/using-bamboo/jobs-and-tasks/configuring-jobs/configuring-miscellaneous-settings-for-a-job/configuring-automatic-labeling-of-job-build-results
I have gone through various existing Jenkins plugins but have not been successful in achieving this functionality. Is there a plugin that achieves this, or can I add additional lines to the Groovy script to create a dynamic build label?
Any help is appreciated.
You can use an if to set the agent label:
def AGENT_LABEL = null

node('master') {
    stage('Checkout and set agent') {
        checkout scm
        // Or just use any other approach to figure out the agent label: read a file, etc.
        if (env.BRANCH_NAME == 'master') {
            AGENT_LABEL = "prod"
        } else {
            AGENT_LABEL = "dev"
        }
    }
}
pipeline {
    agent {
        label "${AGENT_LABEL}"
    }
    // ... stages ...
}
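The original ask was to tag the build itself when a pattern shows up in the test output. A rough way to do that without extra plugins (a sketch; the pattern, the mvn test command, and using the build description as the "label" are all assumptions) is:
node {
    // Capture the test output directly instead of scraping the console log afterwards.
    // '|| true' keeps this step from aborting the pipeline if a test fails.
    def output = sh(script: 'mvn test || true', returnStdout: true)
    echo output
    if (output =~ /PATTERN_OF_INTEREST/) {
        // The build description shows up in the build history, acting as a simple label.
        currentBuild.description = 'pattern-found'
    }
}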

Jenkins is re-using a pipeline workspace and I wish for each build to have a unique workspace

So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job; leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that what Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In "Custom workspace in Jenkins" we are told to use a custom workspace, and I'm set up just like that.
In "Jenkins: how to run builds in unique directories" we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens to me when I use that is that the workspace name is, you guessed it, "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}#2" just for good measure!)
I tried {$BUILD_ID}, same thing (uses that literally, does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests=Arrays.asList(tests.split("\\r?\n"))
shellerror=231
for( line in tests){
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). It then goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace for EVERY SINGLE BUILD?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, comprising about 590 lines. So, I'm going to GREATLY reduce it. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws(wsname)' block directly under (almost) every 'node' opening, so it was:
node(name) { ws(wsname) { ...stuff that was in the node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
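For reference, the shape of that wrapping, pulled into a helper so every node block lands in the same unique-per-build directory, would be roughly this (a sketch only; I'm assuming a relative ws path ends up under the node's workspace root):
// Sketch: wrap node + ws so each build gets its own directory on every host.
def onNode(String label, Closure body) {
    node(label) {
        // JOB_BASE_NAME and BUILD_NUMBER are standard build environment variables.
        ws("${env.JOB_BASE_NAME}-${env.BUILD_NUMBER}") {
            body()
        }
    }
}

onNode('master') {
    echo "this build's workspace: ${pwd()}"
}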
Try using the customWorkspace node common option:
pipeline {
    agent {
        node {
            label 'node(s)-defined-label'
            customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        // Your pipeline logic here
    }
}
customWorkspace: A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
Edit: since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
sh(script: "mkdir -p ${WORKSPACE}")
sh(script: "cd ${WORKSPACE}")
//Do stuff here
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
sh(script: "mkdir -p ${WORKSPACE}")
dir(WORKSPACE) {
//Do stuff here
}
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' will not get substituted, but "${SOMEVAR}" will; this is how Groovy strings are handled (single-quoted strings are plain strings, double-quoted strings are GStrings and interpolate variables). See the Groovy documentation on string handling.
So if you have a block like
ws("/some/path/somewhere/${BUILD_ID}")
{
//something
}
on the node in your pipeline Jenkinsfile, it should do the trick in this regard.
The problem with "#2" workspaces can occur when you allow concurrent builds of the project. I had the exact same problem with a custom ws() and a "#2" suffix; simply disallow concurrent builds or work around it.
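For the declarative case, disallowing concurrent builds is a one-line option (a minimal sketch):
pipeline {
    agent any
    options {
        // Only one build of this job runs at a time, so Jenkins never allocates a second copy of the workspace.
        disableConcurrentBuilds()
    }
    stages {
        stage('build') {
            steps {
                echo 'no concurrent runs, no duplicated workspace'
            }
        }
    }
}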

How to abort Jenkins pipeline build if label is not matched

I have a Jenkinsfile multibranch pipeline script which runs on two different Jenkins systems. The Jenkinsfile relies on a specific label name. In one of the systems the label-based agent is available, and in the other it is not (intentionally). In the former it runs fine. In the Jenkins system without the matching label, the job just hangs because it can't find a matching agent.
Is there a way to specify an option to abort (or not start) a build if a label is not found?
Some discussion here:
https://issues.jenkins-ci.org/browse/JENKINS-35905
It might not be possible anytime soon.
If they are calling into a shared library, then you can check whether the label is online/available and fail the build if it is not:
import jenkins.model.Jenkins

def labelFound = false
for (computer in Jenkins.instance.computers) {
    if (computer.isOnline()) {
        def labelStr = computer.node.getLabelString()
        if (labelStr =~ /user input/) {   // match against the label the build needs
            labelFound = true
            break
        }
    }
}
if (!labelFound) {
    error('No online node with the required label')   // fail the build; System.exit() would take down Jenkins itself
}
For a declarative pipeline it may be possible to use when{beforeAgent} to test whether a label exists.
This would only be useful where the agent is specified for a stage rather than the whole pipeline.
...and the caveat that this is an as-yet untested hypothesis.
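A sketch of that hypothesis (untested; the Jenkins.instance call needs script approval or a trusted shared library, and 'my-label' is a placeholder):
pipeline {
    agent none
    stages {
        stage('build') {
            when {
                beforeAgent true
                // Skip the stage (instead of hanging) when no node carries the label.
                expression { !jenkins.model.Jenkins.instance.getLabel('my-label').nodes.isEmpty() }
            }
            agent { label 'my-label' }
            steps {
                echo 'label exists, running'
            }
        }
    }
}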
Just a workaround, but in order to avoid a dependency on a shared library I run the script below every X minutes to clean up stuck items from the queue:
import hudson.model.*

def q = Jenkins.instance.queue
q.items.each {
    if (it.task.name =~ /someregex or match all/) {
        def why = it.getWhy()
        if (why && why =~ /.*There are no nodes with the label.*/) {
            println "No node found for ${it.task.name}. It's stuck in the damn Jenkins queue forever and ever. Killing it."
            q.cancel(it.task)
        }
    }
}
