Pass parameters from KEY=VALUE properties file to downstream Jenkins job - jenkins

In my declarative pipeline, I have the line below to call the downstream job:
build job: 'my_downstream_job'
I have a file in KEY=VALUE format. How do I pass the parameters from this properties file? The downstream job receives each parameter as KEY. In the Jenkins GUI I use "Parameters from properties file", put the filename there, and it works; I'd like to know how to do the same in a pipeline.

You might need some extra logic to process this file and construct the appropriate list. See https://jenkins.io/doc/pipeline/steps/pipeline-build-step/ for the documentation, but typically it will look like:
build(job: "my_downstream_job",
    parameters: [
        new StringParameterValue('MY_NAME', myname_var),
    ],
    propagate: true
)
So you might need to parse your file and create a list of new StringParameterValue's, one for each line.
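A minimal sketch of that idea, assuming the properties file sits in the workspace as params.properties (the filename and the trim/skip logic are my additions; hudson.model.StringParameterValue is among the Pipeline default imports, though a sandboxed script may still require approval):

```groovy
// Hypothetical sketch: read a KEY=VALUE file and build a parameter list.
// Assumes each non-empty, non-comment line is KEY=VALUE.
def paramList = []
readFile('params.properties').readLines().each { line ->
    line = line.trim()
    if (line && !line.startsWith('#')) {
        def (key, value) = line.split('=', 2)
        paramList << new StringParameterValue(key, value)
    }
}
build(job: 'my_downstream_job', parameters: paramList)
```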

It doesn't appear that there is a one-liner way to do this, but I got this to work:
params = readProperties file: "params.properties"
build job: 'some_jenkins_job', parameters: [
    string(name: 'PROP_STR', value: params.PROP_STR),
    // readProperties yields strings, so coerce the boolean explicitly
    booleanParam(name: 'PROP_BOOL', value: params.PROP_BOOL.toBoolean()),
]
And maybe this is for the best, so it's clear exactly which params are being passed in, just by reading the Jenkinsfile.
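That said, if you do want to forward every key in the file without listing each one, a hedged sketch using readProperties (from the Pipeline Utility Steps plugin; the job name and filename are placeholders) could look like:

```groovy
// Sketch: forward every KEY=VALUE pair from the file as a string parameter.
def props = readProperties file: 'params.properties'
def paramList = props.collect { k, v -> string(name: k, value: v) }
build job: 'some_jenkins_job', parameters: paramList
```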

Related

How to access stringParam in groovy Jenkins pipeline

I have a job which takes a stringParam, and I am trying to access that stringParam from the workflow; however, I am getting a No such property exception.
parameters {
    stringParam("COMMIT_SHA", "", "[Required] Short SHA of the commit used to build the image")
}
print "The commit sha val is: $env.COMMIT_SHA"
I have tried different options like ${params.COMMIT_SHA} and ${COMMIT_SHA}.
Solution
Modify your parameters block as follows
parameters {
    string(name: 'COMMIT_SHA', defaultValue: '', description: '[Required] Short SHA of the commit used to build the image')
}
You should then be able to reference it using ${params.COMMIT_SHA}, ${COMMIT_SHA}, or ${env.COMMIT_SHA}.
Your syntax is for the Job DSL plugin, but you stated you are using a pipeline. If you are in fact using a freestyle job to create other jobs instead of a pipeline, please provide the entire script and I will update my answer. However, your post appears to want to set the parameter for a pipeline, not specify the metadata for the creation of a separate job via the Job DSL plugin.
The following Job DSL script, when executed, will create a job called "example" with a string parameter. The parameter won't be added to the job that contains the Job DSL script:
job('example') {
    parameters {
        stringParam('myParameterName', 'my default stringParam value', 'my description')
    }
}
In other words, there is nothing to print in the job that contains the stringParam code, because that code configures a different job to contain a string parameter; it doesn't add that parameter to the current job.

Jenkins Active Choices Reactive Reference Parameter (Formatted HTML): get the current branch name in the script of a multibranch pipeline job

I have been trying for a long time to get the current branch name in a Multibranch Pipeline job inside the script block of an Active Choices Reactive Reference Parameter (Formatted HTML):
[
    $class: 'DynamicReferenceParameter',
    choiceType: 'ET_FORMATTED_HTML',
    name: 'TestParam',
    omitValueField: true,
    description: 'Test.',
    script: [
        $class: 'GroovyScript',
        fallbackScript: [
            classpath: [],
            sandbox: false,
            script: '''
                return """
                    <p>FallbackScript. Error in main script</p>
                """
            '''
        ],
        script: [
            classpath: [],
            sandbox: false,
            script: '''
                String branchName = env.BRANCH_NAME
                return """
                    <p>${branchName}</p>
                """
            '''
        ]
    ]
]
The thing is that, I believe, the BRANCH_NAME variable is injected only after you press the Build button.
I've tried a lot of things, and I mean A LOT, but I still didn't manage to find a way. The scm variable doesn't exist either; I tried to find something with jenkins.model.Jenkins.instance, but no luck.
Is it possible? I would love to ask this question on their GitHub repo, but issues are not allowed to be opened there. Also, to open an issue on Jenkins you need a Jira account or something. SO is the only place.
Thanks to Michael's answer, I managed to find a way to make this work. There is a lot more to it than meets the eye, but I will go through all the details. I also answered this question here.
I make the assumption that the reader is familiar with the Active Choices plugin. Also, I played with this in a multibranch pipeline job. You might encounter different behaviours with other kinds of jobs.
The parameters sadly don't have access to the environment variables. This is a bit of a limitation which I hope will be fixed/thought of in the future by the plugin's maintainers.
Some environment variables are only populated at build time, like BRANCH_NAME. In this case, even if we had access to the env vars we wouldn't have the actual value at hand.
To be able to use the env.BRANCH_NAME we need two reactive parameters.
The plugin has a parameter named FORMATTED_HIDDEN_HTML. This parameter doesn't get displayed to the user. This is great since we wouldn't want to see in a multibranch pipeline job a parameter with the same name as the branch we are currently on.
To set this parameter, we can write something like this in a Jenkinsfile.
[
    $class: 'DynamicReferenceParameter',
    choiceType: 'ET_FORMATTED_HIDDEN_HTML',
    name: 'BranchName',
    omitValueField: true,
    script: [
        $class: 'GroovyScript',
        fallbackScript: [
            classpath: [],
            sandbox: true,
            script: '''
                return '<p>error</p>'
            '''
        ],
        script: [
            classpath: [],
            sandbox: true,
            script: """
                return '<input name="value" value="${env.BRANCH_NAME}" type="text">'
            """
        ]
    ]
]
There are a lot of things to note here.
The sandbox property is set to true. If you don't do that, you would need to accept the script in the ScriptApproval menu in Jenkins.
We use triple-double quotes when we define the script property.
script: """
    return '<input name="value" value="${env.BRANCH_NAME}" type="text">'
"""
When the job is started for the first time, the BRANCH_NAME variable is populated. This results in a string interpolation which leaves your script property in the following state:
script: """
    return '<input name="value" value="myBranchName" type="text">'
"""
Had we used triple single quotes, we would have gotten an error like:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: env for class: WorkflowScript
This gets us back to the fact that we don't have access to the environment variables.
What can we conclude from this? If we use triple double quotes, the string interpolation happens first, and only then is the script run.
The HTML element that must be used is input; this is explained in the docs if you read them carefully. The name attribute must also be set to value, which is likewise covered in the docs.
omitValueField should be set to true, or else you will get a trailing comma in your value, e.g. myBranchName,
Basically, the first time you run the job, your branch name is populated via string interpolation. Only from the second build onward will you have the value to use; you will always reference the previous build's value.
After all that, you can reference this parameter in other Active Choices parameter types via referencedParameters property.
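As a hedged sketch of that wiring (the ImageTag parameter name and the script body are my own; only the referencedParameters mechanism reflects what the plugin documents), a second Active Choices parameter could react to the hidden one like this:

```groovy
// Sketch: a cascade parameter that reads the hidden BranchName value.
// BranchName is injected into the script because it is listed in
// referencedParameters.
[
    $class: 'CascadeChoiceParameter',
    choiceType: 'PT_SINGLE_SELECT',
    name: 'ImageTag',
    referencedParameters: 'BranchName',
    script: [
        $class: 'GroovyScript',
        fallbackScript: [classpath: [], sandbox: true, script: 'return ["error"]'],
        script: [
            classpath: [],
            sandbox: true,
            script: '''
                // hypothetical lookup keyed off the referenced parameter
                return ["latest-" + BranchName]
            '''
        ]
    ]
]
```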
I desperately needed this because I have a complex use case: I'm making requests to an Azure Container Registry to get all the tags for a certain image for a certain branch.
This plugin is great; I'm glad it exists. I would've loved a lot more documentation and examples, though.
Have a look at Groovy's string interpolation.
tl;dr You can access values by using """ and ${variable}
script: """
    return '<p>${env.BRANCH_NAME}</p>'
"""

jenkins : update parameters via groovy script

I have a text file on server, e.g. /var/lib/jenkins/.../myChoices.txt
FirstChoice,SecondChoice
As the file will be updated from time to time, I want the script to refresh the parameters every time I click "Build with Parameters".
But my code only works when I build the job, i.e., it is not updating in real time.
def getMyChoices() {
    List<String> choices = Arrays.asList(readFileFromWorkspace('/var/lib/jenkins/.../myChoices.txt').split(','))
    return choices
}
job(jobName) {
    description("Deploy something based on choice.")
    parameters {
        ...
        ...
        choiceParam('EB_ACTIVE_ENV_NAME', getMyChoices(), '')
    }
}
I do not want to use the Hudson plugin either, for vulnerability reasons.
Groovy scripts are executed only when the job is run, so the parameters will not be refreshed until then.
The only solution available is to run this job at regular intervals with an additional flag that refreshes the parameters alone and then exits.
That way, whenever you click "Build with Parameters", you will have the latest parameters that exist in the file.
It is necessary to regenerate the job in order to refresh the parameters.
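A hedged sketch of that refresh-flag pattern in a scripted pipeline (the REFRESH parameter name, the relative file path, and the early return are my assumptions; on the very first build the REFRESH parameter may not exist yet, in which case it reads as false):

```groovy
// Sketch: run this job periodically with REFRESH=true just to re-read the
// choices file and regenerate the parameter definition, then exit.
node {
    def choices = readFile('myChoices.txt').trim().split(',') as List
    properties([
        parameters([
            booleanParam(name: 'REFRESH', defaultValue: false,
                         description: 'Only refresh the parameter list, then exit'),
            choice(name: 'EB_ACTIVE_ENV_NAME', choices: choices, description: '')
        ])
    ])
    if (params.REFRESH) {
        echo 'Parameter list refreshed; exiting.'
        return
    }
    // ... actual deploy steps go here ...
}
```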
What I would do is create a job that generates the jobs with the jobDsl step whenever there is a change in the repository where myChoices.txt is versioned.
Here is an example of using jobDsl:
jobDsl removedJobAction: 'DELETE',
    removedViewAction: 'DELETE',
    targets: targetFile,
    unstableOnDeprecation: true,
    additionalParameters: [
        pipelineJobs: arrFiles,
        props: [
            basePath: destination,
            gitRemoteUrl: config.gitRemoteUrl,
            gitConfigJenkinsBranch: config.gitConfigJenkinsBranch,
            localPath: config.localPath ?: ''
        ]
    ]
I use it with a shared library I created that lets me abstract jobDsl and write only pipeline DSL: https://github.com/SAP/jenkins-pipelayer/. There are restrictions to this lib, though: because I parse the pipeline DSL, getMyChoices() would not be evaluated in the current version of the lib.

Pipeline jobs - pass parameters upstream?

TL;DR: Obviously in a Jenkins pipeline job you can easily pass parameters downstream. What I want to know is if you can pass them upstream.
Use case:
We have three jobs; job_one, job_two, and job_three. These are frequently run separately as only one stage is needed, but in increasingly more-frequent cases we'd like to be able to run all three back to back.
The first and second rely on parameters you can define ahead of time, but the third needs a parameter that is generated from the second job (a file name whose structure is unknown until job_two runs).
I have built an umbrella job, which calls something like the following for each job. In this case, PARAM1 is populated because umbrella runs as "Build with Parameters".
build job: 'job_one', parameters: [[$class: 'StringParameterValue', name: 'PARAM1', value: "$PARAM1"]]
All fine and dandy, I can then use PARAM1 in job_one just fine.
The Problem:
For job_three I need the parameter filename. This is generated within job_two and is therefore, as far as I can tell, inaccessible, because job_three has no idea what job_two is doing.
In an ideal world I would just have job_two pass the filename to the umbrella job, which would feed it back into job_three. Therefore, how can I pass the generated filename back up to the umbrella job?
I'm picturing a final script something like this:
node('on-demand-t2small') {
    stage ('Build 1') {
        build job: 'job_one', parameters: [[$class: 'StringParameterValue', name: 'PARAM1', value: "$PARAM1"]]
    }
    stage ('Build 2') {
        build job: 'job_two', parameters: [[$class: 'StringParameterValue', name: 'PARAM2', value: "$PARAM2"]]
        // somehow get the filename parameter out of job_two here so that I can move it to job_three...
    }
    stage ('Build 3') {
        build job: 'job_three', parameters: [[$class: 'StringParameterValue', name: 'filename', value: "$filename"]]
    }
}
Additional Notes:
I recognize that the first question will be "why not have job_two trigger job_three?" I can't set the system up this way for two reasons:
job_two needs to be able to run without triggering job_three, and three can't always require two's input to run.
I debated having the umbrella kick off two and then having a clause in two that would trigger three ONLY IF it had been started by the umbrella, but as far as I can tell this would limit feedback in the umbrella job: you wouldn't know whether two failed because two itself failed, or because three (as a part of two) failed. If I'm wrong about this assumption, please let me know.
I had thought about setting the parameter as an environment variable, but I believe that's node-specific, and since I can't guarantee both jobs will run on the same node, that seemed not to be the solution.
Umbrella is a pipeline job written in groovy, the other three may be pipeline or freestyle jobs, if that matters.
I would appreciate detailed answers where possible, I'm still new to Groovy, Jenkins, and coding in general.
It should be as simple as that:
stage ('Build 3') {
    res = build job: 'job_three', parameters: [[$class: 'StringParameterValue', name: 'filename', value: "$filename"]]
    echo "$res.buildVariables.filename"
}
Assuming that in job_three you do
env.filename = "col new file name"
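Applied to the use case above, a hedged sketch of the round trip (it assumes job_two, not job_three, is the job that exports filename via env.filename, and reuses the parameter names from the question):

```groovy
// Sketch: capture the build result of job_two and feed its exported
// 'filename' build variable into job_three from the umbrella job.
node('on-demand-t2small') {
    def res
    stage('Build 2') {
        res = build job: 'job_two',
                    parameters: [string(name: 'PARAM2', value: "$PARAM2")]
    }
    stage('Build 3') {
        // buildVariables exposes env vars set by the downstream build
        build job: 'job_three',
              parameters: [string(name: 'filename', value: res.buildVariables.filename)]
    }
}
```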

Cannot get build parameters in Jenkins Pipeline Job

I'm trying to get some build parameters in a Jenkins pipeline job. In this context, the parameters are defined via the "This project is parameterized" checkbox and passed at build time.
In the job I call two branches:
parallel firstBranch: {
    build job: 'Run Blah', parameters: [string(name: 'BLAH', value: '$app.blah.blah')]
}, secondBranch: {
    build job: 'Run BlahBlah', parameters: [string(name: 'BLAH', value: '$app.blah.blah')]
}
I've tried accessing the build parameter app.blah.blah in these various ways:
${app.blah.blah}
$app.blah.blah
"${app.blah.blah}"
app.blah.blah
currentBuild.buildVariableResolver.resolve("app.blah.blah")
System.getenv("app.blah.blah")
I always get some exception that I can somewhat understand, but I'm starting to get very annoyed. It should not be this hard to get a build parameter in the script, for God's sake. What am I doing wrong?
This is working for me:
println blah
So I guess it should be enough for you to do it like this:
parallel firstBranch: {
    build job: 'Run Blah', parameters: [string(name: 'BLAH', value: blah)]
}, secondBranch: {
    build job: 'Run BlahBlah', parameters: [string(name: 'BLAH', value: blah)]
}
Well, it looks like you can't have dots in your build parameter names! It confuses Groovy into thinking you're accessing a class. It sucks that I won't be able to keep my parameters consistent across Ant scripts and Jenkins jobs, but it's not a huge deal right now. If anyone knows how to access dotted variables, please feel free to add input!
Correct syntax for build parameter variable: alphanumeric_underline_only
Correct syntax for accessing: println(alphanumeric_underline_only)
If your pipeline has a parameter with a dot inside its name, you can try to get the value of this parameter by using getProperty(), for example (tested):
//println params.MyApp.Api.Server // will not work
//println "${MyApp.Api.Server}" // will not work
println getProperty('MyApp.Api.Server') //works perfectly
