The approach below does not work. Is there a way to achieve what I need?
def VERSION = readFile(file: '/opt/version').trim()

pipeline {
    agent {
        label 'maven'
    }
    parameters {
        string(name: 'version', defaultValue: VERSION, description: 'version')
    }
}
readFile is a Pipeline step:
https://www.jenkins.io/doc/pipeline/steps/workflow-basic-steps/#readfile-read-file-from-workspace
In a Declarative Pipeline, this means it can only be used inside a steps block.
e.g.
stage("Stage1"){
steps{
script{
fileContents = readFile('/opt/version').trim()
}
}
}
To make dynamic parameters you would need to use the Active Choices plugin (https://plugins.jenkins.io/uno-choice/).
Whether that is possible also depends on where the file you are trying to read is located.
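If you do not strictly need a build parameter, a common workaround is to read the file in an early stage and carry the value in an environment variable instead. A minimal sketch, assuming /opt/version exists on the agent that runs the pipeline:

pipeline {
    agent {
        label 'maven'
    }
    stages {
        stage('Read version') {
            steps {
                script {
                    // readFile is legal here because we are inside a steps block
                    env.VERSION = readFile(file: '/opt/version').trim()
                }
            }
        }
        stage('Use version') {
            steps {
                echo "Building version ${env.VERSION}"
            }
        }
    }
}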
Related
I'm new to Jenkins and Groovy scripting. I'm trying to reassign parameters in a Jenkins pipeline script.
I tried the following:
def reasignParams() {
    if (params.B == '') {
        params.B = params.A
    }
}

pipeline {
    parameters {
        string(name: 'A', defaultValue: '1.1', description: "Master Value")
        string(name: 'B', defaultValue: '', description: "Slave value")
    }
}
After running the above Jenkins pipeline script (Groovy), I ran into the following error:
java.lang.UnsupportedOperationException
The alternative I thought of is shown below:
def reasignParams() {
    if (params.B == '') {
        def temp = params.A
        // use the temp variable instead of params.B; but this is inconvenient
    }
}
Is there a way to reassign parameters in a Jenkins pipeline script? Any help would be greatly appreciated. Thanks in advance!
The params object in Jenkins Pipeline does not support write operations on its member variables. You can only assign them initially in the parameters directive (think of it like a constructor in that sense). If you want to reassign parameter values, you need to copy them into an ordinary map first, for example:
newParams = [:]
newParams.A = params.A
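Building on that, a minimal sketch of the fallback from the question (newParams is just an illustrative name):

// copy the read-only params into a plain, writable map
def newParams = [:] + params

// apply the fallback: if B was left empty, reuse the value of A
if (newParams.B == '') {
    newParams.B = newParams.A
}

echo "A=${newParams.A}, B=${newParams.B}"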
In my declarative pipeline I have a choice parameter as follows:
parameters {
    choice(name: 'sleep_time',
           choices: ['2.5m', '2m', '15s', '50s', '4m', '1m', '1.5m', '1.5s', 'random'],
           description: "the sleep time to execute after the command")
}
How can I get all the values of the parameter from within my declarative pipeline?
In this example I am expecting to get the list of 2.5m, 2m, 15s and so on.
During execution the parameter value holds only the selected choice, not the entire list of options. However, because the parameter is defined as part of the pipeline script, you can define the option list as a global variable for your pipeline and use it both in the parameter definition and anywhere else in the pipeline.
Something like:
// Define the options as a global variable
SLEEP_OPTIONS = ['2.5m', '2m', '15s', '50s', '4m', '1m', '1.5m', '1.5s', 'random']

pipeline {
    agent any
    parameters {
        choice(name: 'sleep_time',
               choices: SLEEP_OPTIONS, // Use the global variable
               description: "the sleep time to execute after the command")
    }
    stages {
        stage('Use Global Parameter') {
            steps {
                script {
                    SLEEP_OPTIONS.each { // SLEEP_OPTIONS is available to all steps in the pipeline
                        println it
                    }
                }
            }
        }
    }
}
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GString:
script("${readFileFromWorkspace(pipeline.groovy)}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than at the time the job is built. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
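For illustration, a minimal sketch of what pipeline.groovy could look like under that convention, using the binding names from the question: repository and environments are expanded when the job is generated (rendering the list with Groovy's inspect() is just one way to produce a valid literal), while the escaped \${...} references are left for Jenkins to resolve at runtime:

// pipeline.groovy, processed by SimpleTemplateEngine at job creation
pipeline {
    agent any
    environment {
        REPO = "${repository}" // filled in from additionalParameters when the job is generated
    }
    parameters {
        choice name: "ENVIRONMENT", choices: ${environments.inspect()}
    }
    stages {
        stage('Deploy') {
            steps {
                // escaped with a backslash so the template engine leaves these for Jenkins
                echo "Deploying \${env.REPO} to \${params.ENVIRONMENT}..."
            }
        }
    }
}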
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited because environment variables are strings, but it should work for basic stuff
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
I am trying to set up a Jenkinsfile that gets a job to call other jobs, based on the parameters passed to it.
Instead of having multiple when conditions, I'm thinking it would be smarter (and more manageable for future scaling) if the names of the jobs being called were built by concatenating a common prefix with the parameter being passed in, for example:
CICD_api-gateway
CICD_front-end
CICD_customer-service
I'm having difficulty mixing string interpolation with string concatenation to achieve this:
build job: 'CICD_"${params.SERVICE_NAME}"', wait : false
In Linux, we are able to use eval to achieve this. I'm not sure what the equivalent is in Jenkinsfile syntax.
The full code is below:
pipeline {
    agent any
    parameters {
        string(name: 'SERVICE_NAME', defaultValue: '', description: 'Service being deployed.')
    }
    stages {
        stage('Build Trigger') {
            steps {
                echo "CICD_${params.SERVICE_NAME}"
                build job: 'CICD_"${params.SERVICE_NAME}"', wait: false
            }
        }
    }
}
Change it to be a GString from the beginning; there is no need for the single quotes:
build job: "CICD_${params.SERVICE_NAME}", wait : false
A Jenkins pipeline project is configured to fetch its Jenkinsfile from a Git repo.
If I change the list of parameters, for example, from:
properties([
    parameters([
        string(name: 'FOO', description: 'Choose foo')
    ])
])
to:
properties([
    parameters([
        string(name: 'FOO', description: 'Choose foo'),
        string(name: 'BAR', description: 'Choose bar')
    ])
])
and then run the build, the first run does not show the newly added BAR parameter.
Because the updated Jenkinsfile expects the BAR parameter to be present, the first build after the change fails, as the user is not presented with an input to enter this value.
Is there a way to prevent this? To make sure the Jenkinsfile is up-to-date before showing the parameter entry page?
Short answer: No. It would be nice if there was some facility for parsing and processing the Jenkinsfile separate from the build, but there's not.
Jenkins does not know about the new parameters until it retrieves, parses, and runs the Jenkinsfile, and the only way to do that is to...run a build.
In effect, the parameter definitions will always be "one run behind" the Jenkinsfile: when you change the parameters in the Jenkinsfile, the next build is still presented with the old parameter set, and only the build after that picks up the new definitions.
The only solution to this problem, afaik, is to manually add a "skip_run" boolean parameter and then add a when{} clause to every stage of the job.
properties([
    parameters([
        booleanParam(name: 'skip_run', defaultValue: false, description: 'Skips all stages. Used to update parameters in case of changes.')
    ])
])
...
stage('Doing Stuff') {
    when {
        expression { return !params.skip_run }
    }
    steps {
        ...
    }
}
This is, of course, very prone to error.
Alternatively, you could add a single stage at the very beginning of the pipeline and fail the build on purpose.
stage('Update Build Info only') {
    when {
        expression { return params.skip_run }
    }
    steps {
        error("This was done deliberately to update the build info.")
    }
}
UPDATE:
Thanks to Abort current build from pipeline in Jenkins, I came up with this solution:
To prevent the build from actually appearing red, you can wrap this in a try/catch and exit the build gracefully.
final updateOnly = 'updateOnly'

try {
    stage('Update Build Info only') {
        if (params.skip_run) {
            error(updateOnly)
        }
    }

    // ... other stages here ...
} catch (e) {
    if (e.message == updateOnly) {
        currentBuild.result = 'ABORTED'
        echo('Skipping the Job to update the build info')
        // return here instead of rethrowing the error to keep the build "green"
        return
    }
    // normal error handling
    throw e
}
I have a function that skips the build unless the job has all the required parameters, something like:
if (job.hasParameters(['FOO', 'BAR'])) {
    // pipeline code
}
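job.hasParameters is not a built-in step; one possible way to implement such a guard in a scripted pipeline, sketched here under the assumption that checking the params map is sufficient for your case:

// hypothetical helper: true only if every named parameter is defined for this build
def hasParameters(List<String> names) {
    return names.every { name -> params.containsKey(name) }
}

if (hasParameters(['FOO', 'BAR'])) {
    // pipeline code
} else {
    currentBuild.result = 'NOT_BUILT'
    echo 'Parameter definitions changed; re-run the job to pick up the new parameters.'
}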
An issue related to this was reported in Jenkins a few years ago:
https://issues.jenkins-ci.org/browse/JENKINS-41929
It is still open, so there is no elegant solution yet.
Try:
parameters {
    string(name: 'GRADLE_ARGS', defaultValue: '--console=plain', description: 'Gradle arguments')
}
environment {
    GRADLE_ARGS = "${params.GRADLE_ARGS}"
}