How to run a pipeline with constant variables between production and non-production - Jenkins

I have my Jenkins pipelines working, all in source code management. Within my pipelines I have some constants: variables that rarely or never change, and which the pipelines require. About 90% of them never change, but some do change based on the environment type (production/pre-production/test etc.).
The problem I have right now is that I would like to take the same code from non-production to production without having to change things like the file server details, since production and non-production use different file servers. As it stands, one has to remember to change the file server when promoting code to production. Is it possible to have something like a configuration file that the pipeline can read the values from? I do not want to change the pipelines, or want to make as few changes as possible, when my code moves from non-production to production.
Thanks in advance.

Install the Pipeline Utility Steps plugin and create a properties file in each of the repositories/branches for production and non-production respectively. Then read the properties from the file and populate environment variables in your pipeline during the build.
Sample server.properties
fileServerUrl=ftp://prod.company.net
fileServerPort=21
# more properties here
Sample Jenkinsfile
pipeline {
    agent {
        label 'production'
    }
    stages {
        stage('prepare-env') {
            steps {
                script {
                    def props = readProperties file: 'server.properties'
                    env.fileServerUrl = props.fileServerUrl
                    env.fileServerPort = props.fileServerPort
                    // more properties here
                }
            }
        }
        stage('deploy') {
            steps {
                echo "INFO: The file server is ${env.fileServerUrl}:${env.fileServerPort}"
                // do stuff here
            }
        }
    }
}
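If both environments have to build from the very same branch instead of separate ones, a variant is to key the properties file off a job parameter. A minimal sketch, assuming a hypothetical config/ directory and a string parameter named PROFILE:

script {
    // PROFILE is an assumed job parameter, e.g. 'production' or 'test'
    def profile = params.PROFILE ?: 'test'
    def props = readProperties file: "config/${profile}.properties"
    env.fileServerUrl = props.fileServerUrl
    env.fileServerPort = props.fileServerPort
}

This keeps a single Jenkinsfile and moves the environment choice into job configuration rather than code.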

Related

Expand variable inside Jenkins pipeline tool directive

I would like to be able to programmatically decide which tool will be installed on an agent for a Jenkins pipeline.
This is something I have that's working today:
withEnv(["JAVA_HOME=${tool 'OPENJDK11'}",
"PATH+JAVA=${tool 'OPENJDK11'}"]) {
... do stuff ...
}
So I have a global tool OPENJDK11 installed, along with OPENJDK14, and now I would like to change the Groovy script to be able to decide which JDK to install.
Before the part above I have saved the name of the tool in a variable jdkToInstall. How can I reference this variable inside the tool directive?
I have tried:
${tool '${jdkToInstall}'} and ${tool '$jdkToInstall'}.
That doesn't expand my variable, so I get an error message saying it can't find the tool "$jdkToInstall".
I also tried with string concatenation, but that ended up with a similar error message with my plus and everything.
You only need to expand (${}) the expression once; pass the variable to tool directly, without nested quotes. The following works as expected:
withEnv(["JAVA_HOME=${tool jdkToInstall}", "PATH+JAVA=${tool jdkToInstall}"]) {
... do stuff ...
}
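Since tool takes any ordinary Groovy expression, the name can also be computed. A minimal sketch, assuming a hypothetical string parameter JDK_VERSION (note the /bin suffix when extending PATH, so the java binary itself is found):

// Pick the global tool name from a job parameter (assumed, illustrative)
def jdkToInstall = params.JDK_VERSION == '14' ? 'OPENJDK14' : 'OPENJDK11'
withEnv(["JAVA_HOME=${tool jdkToInstall}",
         "PATH+JAVA=${tool jdkToInstall}/bin"]) {
    sh 'java -version' // runs with the selected JDK
}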

Jenkins pipeline agent to use environment variable

Can an agent label make use of an environment variable? Something like this:
pipeline {
    environment {
        SLAVE_NODE = 'MY_COMPUTER_NAME'
    }
    agent { label $SLAVE_NODE }
    ...
Since the editor for pipelines is so small, I would like to use the available space (visible by default) for the "environment" block, so that when I copy a Jenkins job I just need to adjust a few environment variables used further down the script. I think I have tried all the obvious syntax possibilities by now.
Stumbled upon this by trial and error (and found a duplicate here): add a string parameter to your Jenkins job (e.g. jenkinsNode) and use it in your script:
agent { label "${jenkinsNode}" }
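The reason a job parameter works where the environment block does not: the agent directive is evaluated before the pipeline's environment block, while job parameters are bound before the run starts, so they are visible to the label expression. A minimal sketch, assuming a string parameter named jenkinsNode:

pipeline {
    agent { label "${params.jenkinsNode}" }
    stages {
        stage('check') {
            steps {
                echo "Running on ${env.NODE_NAME}"
            }
        }
    }
}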

Jenkins pipeline configure the options via code

I use a declarative Jenkins pipeline, and each stage has an options definition where I want to define the timeout and retry.
The values for those options come from a configuration file.
I tried the approaches below, but neither worked for me.
I wrote a method that calculates the options:
...
options { myOptionCalculation() }
...
I also tried wrapping it in a script block:
...
options { script { myOption() } }
...
In both cases I got an error saying that options is not supported in my configuration.
How can I control the option definition based on external configuration?
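One workaround, sketched here under the assumption that a pipeline.properties file with timeoutMinutes and retryCount keys is present in the workspace, is to skip the declarative options directive and call the timeout and retry steps inside a script block, where ordinary Groovy values are allowed:

pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                script {
                    // readProperties returns strings, hence toInteger()
                    def config = readProperties file: 'pipeline.properties'
                    timeout(time: config.timeoutMinutes.toInteger(), unit: 'MINUTES') {
                        retry(config.retryCount.toInteger()) {
                            sh './build.sh' // hypothetical build step
                        }
                    }
                }
            }
        }
    }
}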

Jenkins is re-using a pipeline workspace and I wish for each build to have a unique workspace

So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job. Leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In Custom workspace in Jenkins we are told to use a custom workspace, and I'm set up just like that.
In Jenkins: how to run builds in unique directories we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens when I use that is that the workspace name is, you guessed it, the literal "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}@2" just for good measure!)
I tried {$BUILD_ID}: same thing (it is used literally; the number is not substituted).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave (non-master) host to reboot into an OS that cannot run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests = Arrays.asList(tests.split("\\r?\n"))
shellerror = 231
for (line in tests) {
    // ... per-test work elided ...
}
So let's call an example job 'foo' that loops through a list, as above, and that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). Then it goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace for EVERY SINGLE RUN?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, composing about 590 lines. So, I'm going to GREATLY reduce. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws(wsname)' block directly under (almost) every 'node' opening, so it was
node(name) { ws (wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using the customWorkspace node common option:
pipeline {
    agent {
        node {
            label 'node(s)-defined-label'
            customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        // Your pipeline logic here
    }
}
customWorkspace
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
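As the quoted documentation says, a relative path lands under the node's workspace root; a sketch of that variant (the label and directory names are illustrative):

agent {
    node {
        label 'linux'
        // Relative path: resolved under the node's workspace root
        customWorkspace "builds/${JOB_NAME}/${BUILD_NUMBER}"
    }
}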
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    // Each sh step spawns its own shell, so a lone "cd" is lost;
    // do the cd and the actual work inside one sh step:
    sh(script: "cd ${WORKSPACE} && echo 'do stuff here'")
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
sh(script: "mkdir -p ${WORKSPACE}")
dir(WORKSPACE) {
//Do stuff here
}
}
customWorkspace didn't work for me.
What worked:
stages {
    stage('SCM (For commit trigger)') {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}' will not get substituted, but "${SOMEVAR}" will: single-quoted Groovy strings are plain strings, while double-quoted strings are GStrings and are interpolated (see Groovy string handling).
So if you have

ws("/some/path/somewhere/${BUILD_ID}") {
    // something
}

on a node in your pipeline Jenkinsfile, it should do the trick in this regard.
The problem with @2 workspaces can occur when you allow concurrent builds of the project. I had the exact same problem with a custom ws() getting an @2 suffix; simply disallow concurrent builds or work around it.
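If you take the "disallow concurrent builds" route, declarative pipelines expose it as a built-in option; a minimal sketch:

pipeline {
    agent any
    options {
        // Queue new builds instead of running them alongside an in-progress one
        disableConcurrentBuilds()
    }
    stages {
        stage('build') {
            steps {
                echo 'only one build of this job runs at a time'
            }
        }
    }
}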

How to include multiple pipeline scripts into jenkinsfile

I have a Jenkins (Job DSL) file as below:
pipelineJob('My pipeline job') {
    displayName('display name')
    logRotator {
        numToKeep(10)
        daysToKeep(30)
        artifactDaysToKeep(7)
        artifactNumToKeep(1)
    }
    definition {
        cps {
            script(readFileFromWorkspace('./cicd/pipelines/clone_git_code.groovy'))
            script(readFileFromWorkspace('./cicd/pipelines/install_dependencies_run_quality_checks.groovy'))
        }
    }
}
With the above Jenkins file, the last script() call replaces the previous ones.
Basically I have split the tasks into multiple Groovy files so that I won't repeat the same code in all Jenkins files and can reuse them for other jobs as well; for example, I can now use the clone_git_code.groovy script in dev builds as well as QA builds.
You have to use shared libraries (https://jenkins.io/doc/book/pipeline/shared-libraries/). You can define multiple Groovy files with classes that return a processed object, or simply define methods as custom steps and call them in order; the execution will be sequential.
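A minimal sketch of that idea, with hypothetical names: a vars/cloneGitCode.groovy file in the shared library becomes a step that any Jenkinsfile can call.

// vars/cloneGitCode.groovy in the shared library
def call(String repoUrl, String branch = 'master') {
    // Checks out the given repository into the current workspace
    git url: repoUrl, branch: branch
}

// Jenkinsfile consuming the library (the name 'my-shared-library' is assumed)
@Library('my-shared-library') _
node {
    cloneGitCode('https://github.com/example/repo.git', 'develop')
}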
I had this same issue when trying to include multiple scripts into a Jenkins job. After doing some research, I found the below solution to be the simplest:
definition {
    cps {
        script(
            ScriptsLibrary.pipelineTest('did it work?') +
            ScriptsLibrary.scmConf('repoURL_input', 'accessCredentials', 'activeBranch')
        )
    }
}
Add the "+" to concatenate the Strings. Got the job done for me :)
