Using the same Jenkinsfile for two separate jobs in the same repo - Jenkins

I have two separate Jenkins jobs that run on one repository. My Jenkinsfile has a step that runs with the property enableZeroDownTime enabled. The purpose of the second Jenkins job is to run the same step with enableZeroDownTime disabled. Does anyone know how I can control this using the same Jenkinsfile? Can I pass it as a parameter, perhaps based on a properties file? I am really confused about this.
stage('CreateCustomer') {
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}

Solution
I currently run multiple pipelines that use the same Jenkinsfile. The change to conditionally execute a stage is trivial.
stage('CreateCustomer') {
    when {
        environment name: 'enableZeroDownTime', value: 'true'
    }
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}
The CreateCustomer stage will only run when the enableZeroDownTime parameter is set to true (it can be a String parameter with the value true, or a boolean parameter).
The trick here is that you cannot add the parameters{} block to your declarative pipeline. For example, if you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: 'true')
}
Both pipelines would default to true. If you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: '')
}
Both pipelines would default to a blank default value.
Even if you manually save a different default value to the pipeline after creation, it will be overwritten on the next run with a blank default value.
Instead, you simply need to remove the parameters{} block altogether and manually add the parameters through the web interface.
Additionally, it is possible to have two pipelines use the same Jenkinsfile with different parameters. For example, let's say Pipeline A has an enableZeroDownTime parameter defaulted to true and Pipeline B has no parameters at all. In this case you can add an environment variable of the same name and set its value with the following ternary expression:
environment {
    enableZeroDownTime = "${params.enableZeroDownTime != null ? params.enableZeroDownTime : false}"
}
You can then reference this variable in the when directive (or anywhere in the pipeline) without fear of the pipeline throwing a null pointer exception.
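Putting the pieces together, a minimal sketch might look like the following (the stage body is illustrative; the fallback to false covers pipelines that define no such parameter):

```groovy
pipeline {
    agent any
    environment {
        // Falls back to false when this pipeline has no enableZeroDownTime parameter
        enableZeroDownTime = "${params.enableZeroDownTime != null ? params.enableZeroDownTime : false}"
    }
    stages {
        stage('CreateCustomer') {
            when {
                environment name: 'enableZeroDownTime', value: 'true'
            }
            steps {
                echo 'Running the zero-downtime path'
            }
        }
    }
}
```

With this layout, the pipeline that defines the parameter runs the stage, and the parameterless pipeline skips it without erroring.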

Related

How to call a method inside the triggers block in a Jenkinsfile

I have a pipeline which needs to be scheduled to run at a particular time. There are some dynamic parameters that need to be passed while running the pipeline.
I have created a function that gives me the desired parameter value. However, this pipeline does not get triggered, as the function call is not resolved inside the triggers block and is treated as a string.
getlatest is the method I created, which takes in 3 parameters. The value of this method is not resolved and is instead treated as a string. The pipeline runs as expected if I hardcode some value for version.
triggers {
    parameterizedCron("H/5 * * * * % mod=test; version=getlatest('abc','xyz','lmn');")
}
The problem is that the code that calculates the parameter — just like any other code in Jenkins — needs an executor to run. To get an executor, you need to run your pipeline. To run your pipeline, you need to give Jenkins the parameters. But to give Jenkins the parameters, you need to run your code.
So there's a chicken and egg problem, there.
To break out of this cycle, you may want to run a scripted pipeline before the declarative one:
node('built-in') { // or "master", or any other label
    def version = getlatest('abc','xyz','lmn')
    def cron_parameters = "H/5 * * * * % mod=test; version=${version}"
    println "cron_parameters is ${cron_parameters}"
    env.CRON_PARAM = cron_parameters
}
pipeline {
    agent { node { label "some_label" } }
    triggers {
        parameterizedCron(env.CRON_PARAM)
    }
    // ...
}
I've never seen this tried before, so I don't know if it is something Jenkins is capable of. Instead, remove the parameter and create an environment variable called VERSION, assigning the function result to it:
environment {
    VERSION = getlatest('abc','xyz','lmn')
}
And reference this VERSION variable instead of your input parameter.
How to reference:
env.VERSION or ${VERSION} or ${env.VERSION}
Examples:
currentBuild.displayName=env.VERSION
env.SUBJECT="Checkout Failure on ${VERSION}"
string(name: 'VERSION', value: "${env.VERSION}")
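Combining these, a sketch of the suggested approach could look like this (getlatest and its arguments come from the question; they are a user-defined helper, not a Jenkins API):

```groovy
pipeline {
    agent any
    environment {
        // getlatest is the user-defined shared-library helper from the question
        VERSION = getlatest('abc', 'xyz', 'lmn')
    }
    stages {
        stage('Build') {
            steps {
                script {
                    // The resolved value is usable anywhere env vars are
                    currentBuild.displayName = env.VERSION
                    echo "Building version ${env.VERSION}"
                }
            }
        }
    }
}
```

The environment block is evaluated when the build starts, so the helper runs on an executor and its return value is available for the rest of the run.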

Dynamic assignment of values as default parameters in Jenkinsfile

Whenever I run this pipeline in Jenkins I have to manually copy-paste some values from a YAML file in a remote GitLab repository. What I would like to achieve is an auto-fill of the parameter values from that file.
This is how my Jenkinsfile and the YAML look like:
Jenkinsfile
pipeline {
    agent {
        docker {
            image 'artifactory...'
            args "..."
        }
    }
    parameters {
        string(name: 'BACKEND_TAG_1', defaultValue: '', description: 'Tag...')
        string(name: 'BACKEND_TAG_2', defaultValue: '', description: 'Tag...')
    }
    stages {
        stage('prepare') {
            steps {
                script {
                    dir('application') {
                        git url: env.PIPELINE_APPLICATION_GIT_URL, branch: env.PIPELINE_APPLICATION_GIT_BRANCH
                    }
                    Values = readYaml file: 'application/values.yaml'
                }
            }
        }
    }
}
values.yaml
version:
  default: 0.1.2
  company_tag_1: 0.1.124
  company_tag_2: 0.1.230
So I need to loop into the parameters and assign the corresponding values:
Values.each { Value ->
    Value.version.minus('company')
    /* This value should be assigned to the corresponding BACKEND_TAG_* parameter,
       e.g.: BACKEND_TAG_1.default=company_tag_1
             BACKEND_TAG_2.default=company_tag_2
    */
}
Reading the YAML works fine but I don't know how to proceed in the assignment of the values.
I presume you would like to populate all parameters before clicking the Build button; that is, after clicking "Build with Parameters", you would like to see your parameters already populated from your YAML file.
If this is the case, you can use the Active Choices Parameter or Extended Choice Parameter plugin for this purpose. These plugins are able to run a Groovy script, so you can write a small script that reads the YAML and selects the parameters automatically.
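As a rough sketch, an Active Choices parameter could run a Groovy script along these lines. The repository URL and the YAML layout are assumptions based on the question, and this presumes SnakeYAML is available on the controller's classpath (it ships with Jenkins core):

```groovy
// Script body of an Active Choices parameter: whatever list it returns
// becomes the choices shown on the "Build with Parameters" page.
import org.yaml.snakeyaml.Yaml

// Hypothetical raw-file URL for the remote values.yaml
def yamlText = new URL(
    'https://gitlab.example.com/group/application/-/raw/main/values.yaml'
).text
def values = new Yaml().load(yamlText)

// Offer every company_tag_* entry from the version map as a choice
return values.version
    .findAll { key, value -> key.startsWith('company_tag') }
    .collect { key, value -> "${value}" }
```

A separate Active Choices parameter per BACKEND_TAG_* (each filtering on its own key) would reproduce the two fields from the question.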

Jenkins declarative pipeline: How to access value of parameter when it has spaces in its name?

I want to execute one stage of my Jenkins pipeline only if a particular boolean parameter is true. Currently I have this in my declarative Jenkinsfile:
booleanParam(
    defaultValue: false,
    name: 'forceGenerateNuGet'
)
...
when {
    expression { forceGenerateNuGet == "true" }
}
This works fine, but it is ugly since this internal name shows up in the UI.
I would like to have this (pseudocode):
booleanParam(
    defaultValue: false,
    name: 'Force generate NuGet'
)
...
when {
    expression { getValueOfParam('Force generate NuGet') == "true" }
}
Is this possible?
EDIT: This post at Devops Stack Exchange says that I should be able to use params['Force generate NuGet'].
https://devops.stackexchange.com/questions/4711/in-jenkins-how-can-parameters-that-contain-spaces-be-referenced/4712?newreg=bc037a0b92d94b609be577272600c7fd
That doesn't work for me, though. I tried to apply it and run a build where I set the parameter, but it doesn't pick up the value and hence skips the step it was supposed to execute. :(
If I tell it to output the parameter values, it just tells me they are null.
script {
    def force = params["Force generate NuGet"]
    echo "Force generate NuGet: ${force}"
}
Output:
[2021-02-09T15:13:37.571Z] Force generate NuGet: null
Update: It turns out that params['Force generate NuGet'] actually does work. I had a long Jenkinsfile with some legacy code that overwrote the global variable params. That is why it was not working for me. I revised the legacy code, and now I can get the parameters.
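For reference, once nothing shadows params, the bracket syntax can drive a stage directly. A minimal sketch (stage body is illustrative):

```groovy
pipeline {
    agent any
    parameters {
        booleanParam(defaultValue: false, name: 'Force generate NuGet')
    }
    stages {
        stage('Generate NuGet') {
            when {
                // Bracket syntax is required because the parameter name contains spaces
                expression { params['Force generate NuGet'] }
            }
            steps {
                echo 'Generating NuGet package'
            }
        }
    }
}
```

Note that a booleanParam yields a real Boolean here, so no comparison against the string "true" is needed.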

Define your own global variable for a Jenkins job (not for ALL jobs!)

I have a Jenkins job that has a string input parameter holding the build flags for the make command. My problem is that some users forget to change the parameter values when we have a release branch. So I want to overwrite the existing string input parameter (or create a new one) that should be used if the job is a release job.
This is the statement I want to add:
If branch "release" then ${params.build_flag} = 'DEBUGSKIP=TRUE'
and the code that is not working is:
pipeline {
    agent none
    parameters {
        string(name: 'build_flag', defaultValue: 'DEBUGSKIP=TRUE', description: 'Flags to pass to build')
        If {
            allOf {
                branch "*release*"
                expression {
                    ${params.build_flag} = 'DEBUGSKIP=TRUE'
                }
            }
        } else {
            ${params.build_flag} = 'DEBUGSKIP=FALSE'
        }
    }
The code above explains what I want to do but I don't know to do it.
If you can, see if you could use the Jenkins EnvInject Plugin with your pipeline, using the supported use case:
Injection of EnvVars defined in the "Properties Content" field of the Job Property. These EnvVars are injected into the script environment and will be inaccessible via the "env" Pipeline global variable (as in here).
Or write the right values to a file and use that file's content as the "Properties Content" of a downstream job (as shown there).

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though that is not what it passes as JOB_NAME, which is the BRANCH name; sadly none of the envs look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "Build with Parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First, myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but there can be any number of parameters of any type.
I use this approach and have one Groovy script that takes 3 parameters (2 Strings and an int) and 15-20 Jenkinsfiles that use that script via the shared library, and it works perfectly. The motivation is of course one of the most basic rules in programming (not a quote, but it goes something like): if you have the same code in 2 different places, something is not right.
There is an option This project is parameterized in your pipeline job configuration. Enter a variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
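As a minimal sketch, assuming a job-level parameter named variable_name was added through the UI (the name is illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Show parameter') {
            steps {
                // Parameters defined in the job configuration are also
                // exposed to the pipeline as environment variables
                echo "variable_name is ${env.variable_name}"
            }
        }
    }
}
```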