I have two Jenkins jobs - job A (located on server A) triggers job B (located on server B). These servers are internal to my company - we have it set up now that server A can trigger the job on server B. The job on server B requires parameters for it to build.
There are multiple build stages that get triggered in job B when a parameter is set to a particular value (when the job type is set to install). Since all these stages use the same parameters, I am trying to set the parameters once instead of copying and pasting the pipeline code for each build job.
I have tried looking online for a way to do this but there doesn't seem to be much. All I have found is the following:
params.myParam = "some value"
and the following: How to access parameters in a Parameterized Build?
From the above linked question, would I have to do the following in my JenkinsfilePreCommit (a Groovy file):
properties([
    parameters([
        booleanParam(
            defaultValue: false,
            description: 'isFoo should be false',
            name: 'isFoo'
        )
    ])
])
where booleanParam can be replaced with whatever type of parameter it is, defaultValue is the value I want to set it to, description is optional, and name is the name of the parameter in the Jenkins build, e.g. job_type?
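For illustration, a fuller version of that block might look like the one below; the choice parameter named job_type and its values are placeholders based on the question, not a verified configuration.

properties([
    parameters([
        booleanParam(
            defaultValue: false,
            description: 'isFoo should be false',
            name: 'isFoo'
        ),
        // hypothetical parameter matching the job_type mentioned above
        choice(
            choices: ['install', 'uninstall'],
            description: 'what kind of job to run',
            name: 'job_type'
        )
    ])
])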
Related
I have 2 parameterized pipelines, A and B.
Project A executes project B as a post-build action.
I'm using "Predefined parameters" to pass parameters to project B, but it seems project B uses the default values and not the provided ones.
The passed parameter is a project A parameter.
Jenkins can get weird around parameters. If you are using a declarative pipeline, then define the parameters within the code instead of using the option on the Jenkins page:
build([
job : 'B',
wait : false,
parameters: [
string(name: 'process_id', value: "${id}")
]
])
And in pipeline B:
parameters {
string(defaultValue: null, description: 'parameter', name: 'process_id')
}
If using freestyle jobs, the way you have defined the parameter is correct. If Jenkins is not using the correct parameter and instead is using some cached value, then try these steps:
Clone your downstream job
Rename your downstream job to something else
Rename the cloned downstream job to the correct name that should be used
Run the downstream job once
If the problem is that Jenkins caches the parameters used, this should fix it.
I have a parameterized Jenkins job that has 1 parameter and accepts webhooks to kick off builds.
I use the parameter in the branch specifier and it works except for one use case.
The parameter is called a, the branch specifier in the job configuration is defined as refs/${a}, and the default value for a is set to heads/master.
This all works, but as soon as I kick off the job manually (passing in the default parameter value), webhooks no longer trigger the job.
Any ideas?
In a Jenkins declarative pipeline we can define build parameters like this:
pipeline {
…
parameters {
string(name: 'PARAMETER', defaultValue: 'INITIAL_DEFAULT')
choice(name: 'CHOICE', choices: ['THIS', 'THAT'])
}
…
}
However, the parameter definitions of the job are only updated when the job runs after the build parameters dialog has already been shown. That is, when I change INITIAL_DEFAULT to something else, the next build will still default to INITIAL_DEFAULT and only the one after that will use the new value.
The same problem exists with the choices, and there it is even more serious, because a string default can easily be overridden when starting the build, but if a new choice isn't in the list, it cannot be selected at all.
So is there a way to define functions or expressions that are executed before the parameter dialog, to calculate the current values (from files, variables in global settings, or any other suitable external configuration)?
I remember using some plugins for this in the past with free-style jobs, but searching the plugin repository I can't find any that would mention how to use it with pipelines.
I don't care too much that the same problem applies to adding and removing parameters, because that occurs rarely. But we have some parameters where the default changes often and we need the next nightly to pick up the updated value.
It turns out the Extended Choice Parameter plugin does work with pipelines, and the configuration can be generated by the Directive Generator. It looks something like:
extendedChoice(
name: 'PARAMETER',
type: 'PT_TEXTBOX',
defaultPropertyFile: '/var/lib/jenkins/something.properties',
defaultPropertyKey: 'parameter'
)
(there are many more options available in the generator)
Groovy script to get global environment variables can be had from this other answer.
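To show where that directive sits, here is a minimal sketch of a declarative pipeline using it. It assumes the Extended Choice Parameter plugin is installed and that /var/lib/jenkins/something.properties contains a parameter key; the stage is just there to demonstrate reading the value.

pipeline {
    agent any
    parameters {
        // the default is taken from the properties file, so editing that file
        // updates what the next build's parameter dialog offers
        extendedChoice(
            name: 'PARAMETER',
            type: 'PT_TEXTBOX',
            defaultPropertyFile: '/var/lib/jenkins/something.properties',
            defaultPropertyKey: 'parameter'
        )
    }
    stages {
        stage('Show') {
            steps {
                echo "PARAMETER = ${params.PARAMETER}"
            }
        }
    }
}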
I seem to have found a bug with trying to pass environment variables from one Jenkins job to another.
I have a Jenkins job which contains a Powershell build step. My question is not about the Powershell script, as that does exactly what I want (it goes to Artifactory, finds a list of all the builds and then gets the build number of the latest one). The script ends up with the Artifactory build number as a text string '$LATEST_BUILD_NO_SLASH' (for clarity, this is not the Jenkins build number). This is eventually stored as an environment variable called 'LATEST_BUILD_NUM_VAL'
This is definitely creating an environment variable with my value stored in it, as it can be seen in the 'Environment Variables' list.
This environment variable is passed in the standard way in the parameterized build step.
My issue is that when I use this environment variable in a downstream build, having passed it using 'LATEST_BUILD_NUM = ${LATEST_BUILD_NUM_VAL}', I get the literal string '${LATEST_BUILD_NUM_VAL}' as the value passed to the downstream job.
But if I pass a Jenkins-created environment variable, i.e. 'LATEST_BUILD_NUM = ${JOB_BASE_NAME}', I get the correct value in the downstream job.
I have spent all day banging my head around this and don't really know where to go from here. I seem to be creating the environment variable correctly, as it is in the environment variables list and it works if I use a standard environment variable. I have declared 'LATEST_BUILD_NUM' as a parameter in my downstream build.
Is there any other way of achieving what I am trying to do?
I have checked in the 'Jenkins Issues' log for issues with parameterised builds and I can't find anything similar to my issue.
In case it is of any relevance, the Jenkins Environment Injector plugin is v2.1.6 and the Parameterized Trigger plugin is v2.35.2.
This is easy to achieve in Jenkins Pipeline:
Your second job (JobB) is called from your first job (JobA) as a downstream job. Thus, somewhere (probably at the end of your JobA pipeline) you will have:
build job: 'CloudbeeFolder1/Path/To/JobB', propagate: false, wait: false, parameters: [[$class: 'StringParameterValue', name: 'MY_PARAM', value: "${env.SOME_VALUE}"]]
Then in JobB on the "other side" you have:
environment {
PARAM_FROM_PIPELINE = "${params.MY_PARAM}"
}
This gets the value of your parameter into an environment variable in JobB.
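For context, a minimal sketch of how that might look inside JobB's declarative pipeline; the parameter name MY_PARAM matches the build call above, and the stage is just for illustration.

pipeline {
    agent any
    parameters {
        // must match the parameter name used in the upstream build step
        string(name: 'MY_PARAM', defaultValue: '', description: 'value handed over by JobA')
    }
    environment {
        PARAM_FROM_PIPELINE = "${params.MY_PARAM}"
    }
    stages {
        stage('Use it') {
            steps {
                echo "PARAM_FROM_PIPELINE = ${env.PARAM_FROM_PIPELINE}"
            }
        }
    }
}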
In the first job, add the post-build action "Trigger parameterized build on other projects".
In that trigger, give the name of the downstream job.
Select "Add Parameters" and choose "Predefined parameters", then give the parameters in key=value format, e.g. Temp=${BUILD_ID}.
In the second job, select "This project is parameterized", add e.g. a String Parameter named Temp, and use it in a shell step or anywhere else as $Temp.
Case:
I have 3 machines (A, B, C) as slaves, all sharing the same node label, e.g. 'build'.
I have a pipeline which may trigger different downstream jobs, and I need to make sure that the job and all its downstream jobs use the same node (for sharing some files etc.). How can I do that?
a) I could pass the node label to the downstream job, but I am not sure the downstream job will pick the same node (the parent job uses slave 'A' and I pass the node label 'build' to the downstream job, but the downstream job might pick slave 'B').
b) Is there some way to get the slave the pipeline is actually executing on at runtime, so I can pass that slave name to the downstream job?
Or is there a better way to do this?
I advise you to try the NodeLabel Parameter Plugin.
Once installed, check the 'This project is parameterized' option and select 'Node' from the 'Add Parameter' drop-down.
It will populate all nodes as a drop-down when building the job with parameters.
It also has some other options which may help you.
The most important question to me would be: why do they need to run on the very same node?
Anyway, one way to achieve this would be to retrieve the name of the node in the node block of the first pipeline, like this (CAUTION: I was not able to verify the code written below):
// Code for upstream job
@NonCPS
def getNodeName(def context) {
    context.toComputer().name
}

def nodeName = 'undefined'
node('build') {
    // getContext(hudson.FilePath) returns the workspace path on the current node;
    // the helper above turns it into the node's name via toComputer()
    nodeName = getNodeName(getContext(hudson.FilePath))
}

build job: 'downstream', parameters: [string(name: 'nodeName', value: nodeName)]
In the downstream job you use that string parameter as input to your node block - of course you should make sure that the downstream job actually is parameterized in the first place, having a string parameter named nodeName:
node(nodeName) {
// do some stuff
}
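As a sketch of that downstream side including the parameter declaration (assuming a scripted Jenkinsfile; the default label 'build' is just a fallback):

properties([
    parameters([
        string(name: 'nodeName', defaultValue: 'build', description: 'node handed over by the upstream job')
    ])
])

node(params.nodeName) {
    // runs on the same machine the upstream build ran on
    echo "Running on ${env.NODE_NAME}"
}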
Even with static agents, workspaces are eventually cleaned up, so don't rely on the existence of files in the workspace between builds.
Just archive whatever you need in the upstream job (using the archive step) and then use the Copy Artifact Plugin in downstream jobs to get what you need there. You'll probably need to parameterize the downstream jobs to pass them a reference to the upstream artifact(s) you need (there are plenty of selectors available in the Copy Artifact plugin that you can play with to achieve what you want).
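A minimal sketch of that pattern, assuming the Copy Artifact plugin is installed; the job names 'upstream' and 'downstream' and the output/** path are placeholders.

// Upstream job: archive what the downstream job needs, then hand over the build number
node('build') {
    // ... build steps that produce files under output/ ...
    archiveArtifacts artifacts: 'output/**'
    build job: 'downstream', parameters: [
        string(name: 'UPSTREAM_BUILD', value: "${env.BUILD_NUMBER}")
    ]
}

// Downstream job (parameterized with a string parameter UPSTREAM_BUILD):
node {
    copyArtifacts projectName: 'upstream',
                  selector: specific("${params.UPSTREAM_BUILD}"),
                  filter: 'output/**'
}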
If you are triggering child jobs from a pipeline yourself, then you can use syntax like this to pass a specific node label:
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester1']]
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester2']]
You should be able to get the name of the node the pipeline is currently running on with ${env.NODE_NAME}.
Found at: How to trigger a jenkins build on specific node using pipeline plugin?
Reference to the docs: https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
But yes, if you want to manipulate some files from this job in other jobs, then you will need to use e.g. the mentioned Copy Artifact plugin, because the workspaces of the jobs are independent and each job will have different content.
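Putting the two together, a sketch of handing the current node over to a child job; it assumes the downstream job has a NodeLabel parameter named 'node', as in the snippets above.

node {
    // env.NODE_NAME is only set inside a node block
    build job: 'test_job', parameters: [
        [$class: 'LabelParameterValue', name: 'node', label: "${env.NODE_NAME}"]
    ]
}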