Add Jenkins job parameter to all jobs

All of my Jenkins jobs need the same two parameters added to them. I have way too many jobs to configure by hand in a reasonable amount of time.
Is there a way to add a job parameter to them all at once? Even one folder at a time would save me a great deal of effort. Currently, hacking the config.xml files seems faster than adding them one by one via the UI.
Again, these parameters do not exist yet, so unless Configuration Slicing has some hidden feature, I am not sure how to accomplish this.
TIA for any answers!

I can think of the following possibilities.
For declarative pipelines, the parameters directive can be added to the pipeline scripts:
parameters {
    string(name: 'FIRST_PARAM', defaultValue: 'Test')
    string(name: 'SECOND_PARAM', defaultValue: 'Test2')
}
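For context, a minimal complete declarative Jenkinsfile with that directive might look like this (a sketch; the stage and echo step are placeholders):
pipeline {
    agent any
    parameters {
        string(name: 'FIRST_PARAM', defaultValue: 'Test')
        string(name: 'SECOND_PARAM', defaultValue: 'Test2')
    }
    stages {
        stage('Show parameters') {
            steps {
                echo "FIRST_PARAM is ${params.FIRST_PARAM}"
            }
        }
    }
}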
If the jobs are scripted pipelines, you could add
properties([
    parameters([
        string(defaultValue: 'Test', name: 'FIRST_PARAM'),
        string(defaultValue: 'Test2', name: 'SECOND_PARAM')
    ])
])
The jobs need to be built once in order for the parameters to become visible.
Another possibility would be to iterate over all jobs and add a parameter block like
<hudson.model.ParametersDefinitionProperty>
  <parameterDefinitions>
    <hudson.model.StringParameterDefinition>
      <name>FIRST_PARAM</name>
      <description></description>
      <defaultValue>Test</defaultValue>
      <trim>false</trim>
    </hudson.model.StringParameterDefinition>
  </parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>
to the properties tag in each job's config.xml.
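If hacking the config.xml files by hand is too tedious, the same iteration can be done from the Jenkins Script Console (Manage Jenkins > Script Console). Below is a rough sketch of that idea, assuming the two string parameters from above; it is not verified against every job type (multibranch jobs in particular regenerate their configuration from the Jenkinsfile, so edit those in the pipeline script instead), so try it on a test instance first:
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.ParametersDefinitionProperty
import hudson.model.StringParameterDefinition

Jenkins.instance.getAllItems(Job.class).each { job ->
    def existing = job.getProperty(ParametersDefinitionProperty.class)
    // Skip jobs that already define FIRST_PARAM
    if (existing?.getParameterDefinition('FIRST_PARAM')) {
        return
    }
    def defs = [
        new StringParameterDefinition('FIRST_PARAM', 'Test', ''),
        new StringParameterDefinition('SECOND_PARAM', 'Test2', '')
    ]
    // Keep any parameters the job already has
    if (existing) {
        defs.addAll(existing.parameterDefinitions)
        job.removeProperty(existing)
    }
    job.addProperty(new ParametersDefinitionProperty(defs))
    job.save()
}
To process just one folder at a time, iterate with hudson.model.Items.getAllItems(folder, Job.class) instead, where folder is Jenkins.instance.getItemByFullName('my-folder').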

Related

post build action with parameters

I have 2 parameterized pipelines, A and B.
Project A executes project B as a post-build action.
I'm using "Predefined parameters" to pass parameters to project B, but it seems project B is using the default values and not the provided ones.
The passed parameter is a project A parameter.
Jenkins can get weird around parameters. If you are using a declarative pipeline, then define the parameters within the code instead of using the option on the Jenkins page:
build([
    job       : 'B',
    wait      : false,
    parameters: [
        string(name: 'process_id', value: "${id}")
    ]
])
And in pipeline B:
parameters {
    string(defaultValue: null, description: 'parameter', name: 'process_id')
}
If using freestyle jobs, the way you have defined the parameter is correct. If Jenkins is not using the correct parameter and instead is using some cached value, then try these steps:
1. Clone your downstream job
2. Rename your downstream job to something else
3. Rename the cloned downstream job to the correct name that should be used
4. Run the downstream job once
If the problem is that Jenkins caches the parameters used, this should fix it.

How to maintain a single Jenkinsfile and create two jenkins jobs with different choice parameters

I wanted to use a single Jenkinsfile and create 2 pipeline jobs; one should populate DEV, IT, UAT and the other should populate PROD, DR as choices for the choice parameter (ENV).
I was able to achieve this with the code below and by defining the choices in the UI:
[$class: 'ChoiceParameterDefinition', name: 'ENV', choices: params.DEPLOY_ENV, description: 'Select ENV'],
This works if I only have one choice (DEV) defined in the UI, but if I have more than one choice (DEV, IT, UAT) and the user selects the 2nd choice (IT) and runs the job, it deletes the other choices for some reason, and now it only shows one choice (IT) in the drop-down.
I can achieve this by not defining parameters in the Jenkinsfile, selecting 'This project is parameterized' in the UI, and using the defined parameters with 'params.', but I don't think it's the right approach for my project.
Is there a better way to use a single Jenkinsfile for creating two pipeline jobs with different choice parameters?
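One possible approach (a sketch, not taken from this thread): since both jobs share the Jenkinsfile, the choice list can be derived inside the script from something that distinguishes the two jobs, such as the job name, instead of being defined in the UI:
// Sketch: derive the choices from the job name; the 'prod' naming
// convention is an assumption for illustration
def envChoices = env.JOB_NAME.contains('prod') ? ['PROD', 'DR'] : ['DEV', 'IT', 'UAT']
properties([
    parameters([
        choice(name: 'ENV', choices: envChoices, description: 'Select ENV')
    ])
])
On older Jenkins cores the choices argument may need to be a newline-separated string instead of a list, and as with any scripted parameter definition, the drop-down only refreshes after the job has run once with the new definition.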

Dynamically evaluate default in Jenkins pipeline build parameter

In a Jenkins declarative pipeline, we can define build parameters like
pipeline {
    …
    parameters {
        string(name: 'PARAMETER', defaultValue: 'INITIAL_DEFAULT')
        choice(name: 'CHOICE', choices: ['THIS', 'THAT'])
    }
    …
}
However, the parameter definitions of the job are only updated when the job runs after the build parameters dialog has already been shown. That is, when I change INITIAL_DEFAULT to something else, the next build will still default to INITIAL_DEFAULT and only the one after that will use the new value.
The same problem exists with the choices, and there it is even more serious, because a string default can easily be overwritten when starting the build, but if the new option isn't in the list, it cannot be selected at all.
So is there a way to define functions or expressions that will be executed before the parameter dialog, to calculate the current values (from files, variables in global settings, or any other suitable external configuration)?
I remember using some plugins for this in the past with free-style jobs, but searching the plugin repository I can't find any that would mention how to use it with pipelines.
I don't care too much that the same problem applies to adding and removing parameters, because that occurs rarely. But we have some parameters where the default changes often and we need the next nightly to pick up the updated value.
It turns out the extended-choice-parameter plugin does work with pipelines, and the configuration can be generated by the directive generator. It looks something like
extendedChoice(
    name: 'PARAMETER',
    type: 'PT_TEXTBOX',
    defaultPropertyFile: '/var/lib/jenkins/something.properties',
    defaultPropertyKey: 'parameter'
)
(there are many more options available in the generator)
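For reference, the property file it reads the default from is a plain Java properties file (a sketch; the value is a placeholder):
# /var/lib/jenkins/something.properties
parameter=NIGHTLY_DEFAULT
Editing that file changes the default offered in the next build's parameter dialog without an intermediate build, which is what makes this work for the nightly use case.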
A Groovy script to get global environment variables can be found in this other answer.

ensure jenkins pipeline using same node for downstream job

Case:
I have 3 machines (A, B, C) as slaves, sharing the same node label (e.g. 'build').
I have a pipeline which may trigger different downstream jobs, and I need to make sure that all the jobs and downstream jobs use the same node (for sharing some files etc.). How can I do that?
a) I could pass the node label to the downstream job, but I am not sure the downstream job will take the same node (the parent job runs on slave "A" and I pass the node label 'build' to the downstream job, but maybe the downstream job takes slave 'B').
b) Is there some way to get the slave at runtime while the pipeline is executing, so that I can pass this slave name to the downstream job?
Or is there any better way to do that?
I advise you to try the NodeLabel Parameter Plugin.
Once installed, check the 'This project is parameterized' option and select 'Node' from the 'Add Parameter' drop-down.
It will populate all nodes as a drop-down when building the job with parameters.
It also has some other options which may help you.
The most important question to me would be: why do they need to run on the very same node?
Anyway, one way to achieve this would be to retrieve the name of the node in the node block of the first pipeline, like this (CAUTION: I was not able to verify the code written below):
// Code for upstream job
@NonCPS
def getNodeName(def context) {
    context.toComputer().name
}

def nodeName = 'undefined'
node('build') {
    // getContext(hudson.FilePath) returns the workspace on the current node
    nodeName = getNodeName(getContext(hudson.FilePath))
}
build job: 'downstream', parameters: [string(name: 'nodeName', value: nodeName)]
In the downstream job you use that string parameter as input to your node block. Of course, you should make sure that the downstream job actually is parameterized in the first place, with a string parameter named nodeName:
node(nodeName) {
    // do some stuff
}
Even with static agents, workspaces are eventually cleaned up, so don't rely on the existence of files in the workspace between builds.
Just archive whatever you need in the upstream job (using the archive step) and then use the Copy Artifact Plugin in downstream jobs to get what you need there. You'll probably need to parameterize the downstream jobs to pass them a reference to the upstream artifact(s) you need (there are plenty of selectors available in the Copy Artifact plugin that you can play with to achieve what you want).
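A sketch of that pattern (the job names, paths, and parameter name are assumptions for illustration):
// Upstream: archive the files and tell the downstream job which build to copy from
node('build') {
    // ... build steps producing output/** ...
    archiveArtifacts artifacts: 'output/**'
    build job: 'downstream', parameters: [
        string(name: 'UPSTREAM_BUILD_NUMBER', value: env.BUILD_NUMBER)
    ]
}

// Downstream: assumes a string parameter UPSTREAM_BUILD_NUMBER is defined on the job
node('build') {
    copyArtifacts projectName: 'upstream',
            selector: specific(params.UPSTREAM_BUILD_NUMBER)
}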
If you are triggering child jobs manually from a pipeline, then you can use syntax like this to pass a specific node label:
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester1']]
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester2']]
The current label of the node you should be able to get this way: ${env.NODE_NAME}
(found at How to trigger a jenkins build on specific node using pipeline plugin?)
Reference docs: https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
But yes, if you want to work with files from this job in other jobs, then you will need to use e.g. the mentioned Copy Artifact plugin, because the workspaces of the jobs are independent and each job will have different content.

Jenkins plugin for collecting user input

I am creating a Jenkins Pipeline job.
I want to achieve this: on the job home page, I want an HTML input tag; each time before manually triggering the build, I first fill in something in the tag, and then the value can be retrieved and used in the pipeline script during the build.
Is there a plugin for this purpose?
Thanks.
This is a so-called Parameterized Build.
In your pipeline definition, you can add these build parameters using the properties step, which comes with the workflow-multibranch plugin.
A simple example would be as follows:
properties([
    parameters([
        string(name: 'DEPLOY_ENV', defaultValue: 'TESTING', description: 'The target environment')
    ])
])
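Once the job has been built once with this properties step, the value can be used in the rest of the scripted pipeline. A minimal usage sketch:
node {
    echo "Deploying to ${params.DEPLOY_ENV}"
}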
P.S.: As this feature is quite hidden, I wrote a blog post about this a few weeks ago.
