I am trying to set up a Jenkinsfile in which one job calls other jobs based on the parameters passed into it.
Instead of having multiple when conditions, I think it would be smarter (and more manageable for future scaling) if the names of the jobs being called were built by concatenating a common prefix with the parameter being passed in, for example:
CICD_api-gateway
CICD_front-end
CICD_customer-service
I'm having difficulty mixing string interpolation with string concatenation to achieve this:
build job: 'CICD_"${params.SERVICE_NAME}"', wait : false
In Linux, we are able to use eval to achieve this kind of thing; I'm not sure what the equivalent is in Jenkinsfile syntax.
The full code is below:
pipeline {
    agent any
    parameters {
        string(name: 'SERVICE_NAME', defaultValue: '', description: 'Service being deployed.')
    }
    stages {
        stage('Build Trigger') {
            steps {
                echo "CICD_${params.SERVICE_NAME}"
                build job: 'CICD_"${params.SERVICE_NAME}"', wait: false
            }
        }
    }
}
Change it to a GString from the beginning; there's no need for the single quotes:
build job: "CICD_${params.SERVICE_NAME}", wait : false
Related
I have a parent pipeline job that takes parameters and passes them to a downstream job. I've achieved this in multiple ways with no issue; however, I keep getting a string interpolation warning that I am trying to fix but am unable to. Based on the documentation (https://www.jenkins.io/doc/book/pipeline/jenkinsfile/#string-interpolation), in most cases using single quotes should work; however, this passes the literal name rather than the value (e.g. if I set my variable as SECRET_PWD and call it like '${SECRET_PWD}', it shows up as ${SECRET_PWD} on the downstream job instead of the value passed to the parameter).
Here's what I've tried so far:
Parent Pipeline
pipeline {
    agent any
    parameters {
        password(defaultValue: "", description: 'The admin password', name: 'SUPER_SECRET_ADMIN_PWD')
    }
    stages {
        stage("Stage1") {
            steps {
                build job: "secret_job/${env.BRANCH}", propagate: true, wait: true, parameters: [
                    [$class: 'StringParameterValue', name: 'SUPER_SECRET_ADMIN_PWD', value: "${params.SUPER_SECRET_ADMIN_PWD}"]
                ]
            }
        }
    }
}
This gives the following error when calling the downstream job:
Warning: A secret was passed to "build" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [SUPER_SECRET_ADMIN_PWD]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
The parameter 'SUPER_SECRET_ADMIN_PWD' did not have the type expected by secret_job » secret_branch. Converting to Password Parameter.
Note: I am aware that the StringParameterValue is the reason for the first error. I have changed this in a few different ways to solve that, but I still get the interpolation issue.
The other ways I've tried are:
password(name: 'SUPER_SECRET_ADMIN_PWD', value: "${SUPER_SECRET_ADMIN_PWD}") = this works, but the interpolation warning remains
password(name: 'SUPER_SECRET_ADMIN_PWD', value: '${SUPER_SECRET_ADMIN_PWD}') = this does NOT work, as it passes ${SUPER_SECRET_ADMIN_PWD} as the value rather than the one entered into the parameter. HOWEVER, the interpolation warning goes away
[$class: 'StringParameterValue', name: 'SUPER_SECRET_ADMIN_PWD', value: '${params.SUPER_SECRET_ADMIN_PWD}'] = this does NOT work, as it passes ${params.SUPER_SECRET_ADMIN_PWD} as the value rather than the one entered into the parameter. HOWEVER, the interpolation warning goes away
I've also tried ${env.SUPER_SECRET_ADMIN_PWD}, similar to ${params.SUPER_SECRET_ADMIN_PWD}.
Note that I've changed my downstream job to use single quotes, and I'm doing a simple sh script like the one below with no interpolation errors (I did have them before, though).
stages {
    stage("test") {
        steps {
            script {
                sh '''
                    echo ${SUPER_SECRET_ADMIN_PWD}
                '''
            }
        }
    }
}
How do I go about solving the interpolation warning while still passing the password parameter down to a downstream job?
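One common way to avoid the warning (not an authoritative fix, but a sketch) is to stop wrapping the secret in a GString at all and pass the parameter reference directly, using the password parameter type the downstream job expects:

// Sketch: the secret is passed as a plain reference, not interpolated,
// so no secret text ever appears inside a GString
build job: "secret_job/${env.BRANCH}", propagate: true, wait: true, parameters: [
    password(name: 'SUPER_SECRET_ADMIN_PWD', value: params.SUPER_SECRET_ADMIN_PWD)
]

env.BRANCH is still interpolated here, but it is not a secret, so it does not trigger the warning.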
I have two separate Jenkins jobs that will run on one repository. My Jenkinsfile has a step that runs with the property enableZeroDownTime enabled. The purpose of the 2nd Jenkins job is to run the step with the property enableZeroDownTime disabled. Does anyone know how I can control this using the same Jenkinsfile? Can I pass it using some parameter, based on a properties file? I am really confused about this.
stage('CreateCustomer') {
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}
Solution
I currently run multiple pipelines that use the same Jenkinsfile. The change to conditionally execute a stage is trivial.
stage('CreateCustomer') {
    when {
        environment name: 'enableZeroDownTime', value: 'true'
    }
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}
The CreateCustomer stage will only run when the enableZeroDownTime parameter is set to true (it can be a String parameter with the value true, or a boolean parameter).
The trick here is that you cannot add the parameters{} block to your declarative pipeline. For example, if you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: 'true')
}
Both pipelines would default to true. If you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: '')
}
Both pipelines would default to a blank value.
Even if you manually save a different default value to the pipeline after creation, it will be overwritten on the next run with a blank default value.
Instead, you simply need to remove the parameters{} block altogether and manually add the parameters through the web interface.
Additionally...
Additionally, it is possible to have two pipelines use the same Jenkinsfile with different parameters. For example, let's say Pipeline A had an enableZeroDownTime parameter defaulted to true and Pipeline B had no parameters at all. In this case, you can add an environment variable of the same name and set the value equal to the following ternary expression:
environment {
    enableZeroDownTime = "${params.enableZeroDownTime != null ? "${params.enableZeroDownTime}" : false}"
}
You can then reference this parameter in the when directive (or anywhere in the pipeline) without fear of the pipeline throwing a null pointer exception.
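Putting the two pieces together, a minimal sketch using the same names as above:

// Sketch: fallback environment variable driving the when condition
pipeline {
    agent any
    environment {
        enableZeroDownTime = "${params.enableZeroDownTime != null ? params.enableZeroDownTime : false}"
    }
    stages {
        stage('CreateCustomer') {
            when {
                environment name: 'enableZeroDownTime', value: 'true'
            }
            steps {
                echo 'Running the zero-downtime path'
            }
        }
    }
}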
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated to match the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables I pass in additionalParameters are resolved in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed on to the pipeline script read from the workspace; the Jenkins configuration for the generated pipeline looks exactly like the file, without any of the variables substituted.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, and \$environments; I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime, not at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
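For example, a pipeline.groovy used as a template might look like this (a sketch; repository comes from the Job DSL binding at job creation, while BUILD_NUMBER is escaped so Jenkins resolves it at runtime):

// Sketch of pipeline.groovy as a SimpleTemplateEngine template.
// repository is expanded when the job is generated; the escaped
// BUILD_NUMBER reference is left for Jenkins to resolve at runtime.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${repository} in build \${BUILD_NUMBER}"
            }
        }
    }
}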
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited because environment variables are strings, but it should work for basic stuff
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
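Here JOBNAME would be supplied by the calling seed step, for example (a sketch reusing the additionalParameters mechanism shown earlier; the job name is illustrative):

// Sketch: the seed step supplies JOBNAME to the DSL script
jobDsl targets: 'jobs/pipeline.groovy',
       additionalParameters: [JOBNAME: 'myProject-pipeline']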
Start Jenkins job immediately after creation by seed job
I can start a job from within the job dsl like this:
queue('my-job')
But how do I start a job with arguments or parameters? I want to pass that job some arguments somehow.
AFAIK, you can't.
But what you can do is create it from a pipeline (the jobDsl step), then run it. Something more or less like...
pipeline {
    agent any
    stages {
        stage('jobs creation') {
            steps {
                jobDsl targets: 'my_job.dsl',
                       additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]
                build job: "my_job's_name",
                      parameters: [booleanParam(name: 'DRY_RUN', value: true)]
            }
        }
    }
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
    definition {
        // blah...
    }
}
NOTE: As you can see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME var) because otherwise I don't know how to make the jobDsl code return the name of the job it creates back to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem. I use the DRY_RUN param of the job (a hidden param, in fact) to run a "do-nothing" build, as its name implies, so by the time others need to use the job for "real stuff", its params section has already been properly parsed.
I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it's passing as JOB_NAME, which is the BRANCH name; sadly, none of the envs look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "Build with Parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
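Tied together in scripted-pipeline form (a minimal sketch assuming the myParam name from above):

// Sketch: properties() registers the parameter; params reads it back
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Example') {
        echo "myParam is: ${params.myParam}"
    }
}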
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but the docs are actually quite nice, so I'll copy from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but you can have any number of parameters of any type.
I use this approach with one of the groovy scripts taking 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library; it works perfectly. The motivation is of course one of the most basic rules of programming (not an exact quote, but it goes something like): if you have the "same code" in 2 different places, something is not right.
There is an option "This project is parameterized" in your pipeline job configuration. Write the variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
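For example, with a parameter named variable_name created through that option, a step could read it like this (a one-line sketch):

// Sketch: reading a UI-defined job parameter via env
echo "variable_name is: ${env.variable_name}"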