I'm using the Build Flow plugin to perform parallel builds. I need to pass a choice parameter (branch_name) from the parent job to the child jobs. I'm unsure how to get this working. The choice parameter has multiple branch names. How can I do this?
Here's a sample of the code:
// Here's where I set the variable for the choice parameter (branch_name)
branch_name = ${branch_name}
// Here's where I call the variable to pass to the other jobs
parallel (
{ build("build1", branch_name:params["branch_name"], },
{ build("build2", branch_name:params["branch_name"], },
{ build("build3", branch_name:params["branch_name"], },
{ build("build4", branch_name:params["branch_name"], },
)
What am I doing wrong? Please help. Thanks.
I assume you got this working? For future Googlers, I believe it would be as follows:
// Get the parameter of the parent, flow job - should be available as environment variable
def branch = build.environment.get("PARENT_PARAM_NAME")
parallel (
// pass to child job
{ build("build1", CHILD_PARAM_NAME: branch)},
// repeat as required
)
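Applied back to the original question, a minimal sketch of the parent flow DSL might look like the following. It assumes the parent flow job defines the choice parameter branch_name and that each child job declares a parameter of the same name:
// Parent flow DSL: read the flow job's own choice parameter from its environment
def branch = build.environment.get("branch_name")

parallel(
    // Pass the value on to each child job's branch_name parameter
    { build("build1", branch_name: branch) },
    { build("build2", branch_name: branch) },
    { build("build3", branch_name: branch) },
    { build("build4", branch_name: branch) }
)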
I have a pipeline which needs to be scheduled to run at a particular time. There are some dynamic parameters that need to be passed while running the pipeline.
I have created a function that gives me the desired parameter value. However, this pipeline does not get triggered, because the function call is not resolved inside the triggers block and is instead treated as a string.
getlatest is the method I created, which takes in 3 parameters. The value of this method is not resolved and is instead treated as a string. The pipeline runs as expected if I hardcode some value for version.
triggers {
    parameterizedCron("H/5 * * * * % mod=test; version=getlatest('abc','xyz','lmn');")
}
The problem is that the code that calculates the parameter — just like any other code in Jenkins — needs an executor to run. To get an executor, you need to run your pipeline. To run your pipeline, you need to give Jenkins the parameters. But to give Jenkins the parameters, you need to run your code.
So there's a chicken and egg problem, there.
To break out of this cycle, you may want to run a small scripted pipeline block before you run the declarative one:
node('built-in') { // or "master", or any other
    def version = getlatest('abc', 'xyz', 'lmn')
    def cron_parameters = "H/5 * * * * % mod=test; version=${version}"
    println "cron_parameters is ${cron_parameters}"
    env.CRON_PARAM = cron_parameters
}
pipeline {
    agent { node { label "some_label" } }
    triggers {
        parameterizedCron(env.CRON_PARAM)
    }
    // ...
}
I've never seen this tried before, so I don't know if what you are doing is something Jenkins is capable of. Instead, remove the parameter, create an environment variable called VERSION, and assign the function result to that:
environment {
    VERSION = getlatest('abc','xyz','lmn')
}
And reference this VERSION variable instead of your input parameter.
How to reference:
env.VERSION or ${VERSION} or ${env.VERSION}
Examples:
currentBuild.displayName=env.VERSION
env.SUBJECT="Checkout Failure on ${VERSION}"
string(name: 'VERSION', value: "${env.VERSION}")
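Putting the pieces together, a minimal sketch of this approach might look like the following. getlatest is the asker's own method (assumed here to be defined in the Jenkinsfile or a shared library); the 'abc', 'xyz', 'lmn' arguments come from the question:
pipeline {
    agent any
    environment {
        // Resolved once per run, before any stage executes
        VERSION = getlatest('abc', 'xyz', 'lmn')
    }
    stages {
        stage('Use version') {
            steps {
                script {
                    // The resolved value is available anywhere in the pipeline
                    currentBuild.displayName = env.VERSION
                    echo "Building version ${env.VERSION}"
                }
            }
        }
    }
}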
I have two separate Jenkins jobs that run on one repository. My Jenkinsfile has a step that runs with the property enableZeroDownTime enabled. The purpose of the second Jenkins job is to run that step with enableZeroDownTime disabled. Does anyone know how I can control this using the same Jenkinsfile? Can I pass it as a parameter based on some properties file? I am really confused about this.
stage('CreateCustomer') {
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}
Solution
I currently run multiple pipelines that use the same Jenkinsfile. The change to conditionally execute a stage is trivial.
stage('CreateCustomer') {
    when {
        environment name: 'enableZeroDownTime', value: 'true'
    }
    steps {
        script {
            common.runStage("#CreateCustomer")
            common.runStage("#SetOnboardingCustomerManifest")
            common.runStage("#enableZeroDownTime")
        }
    }
}
The CreateCustomer stage will only run when the enableZeroDownTime parameter is set to true (it can be a String parameter with the value true, or a boolean parameter).
The trick here is that you cannot add the parameters{} block to your declarative pipeline. For example, if you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: 'true')
}
Both pipelines would default to true. If you had the following:
parameters {
    string(name: 'enableZeroDownTime', defaultValue: '')
}
Both pipelines would default to a blank value.
Even if you manually save a different default value to the pipeline after creation, it will be overwritten on the next run with a blank default value.
Instead, you simply need to remove the parameters{} block altogether and add the parameters manually through the web interface.
Additionally...
It is also possible to have two pipelines use the same Jenkinsfile with different parameters. For example, let's say Pipeline A had an enableZeroDownTime parameter defaulting to true and Pipeline B had no parameters at all. In this case you can add an environment variable of the same name and set its value with the following ternary expression:
environment {
    enableZeroDownTime = "${params.enableZeroDownTime != null ? "${params.enableZeroDownTime}" : false}"
}
You can then reference this variable in the when directive (or anywhere else in the pipeline) without fear of the pipeline throwing a null pointer exception.
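As a minimal sketch of how the fallback and the conditional stage fit together (the enableZeroDownTime name and the common.runStage call come from the question; the rest is illustrative scaffolding):
pipeline {
    agent any
    environment {
        // Falls back to false when this pipeline defines no enableZeroDownTime parameter
        enableZeroDownTime = "${params.enableZeroDownTime != null ? "${params.enableZeroDownTime}" : false}"
    }
    stages {
        stage('CreateCustomer') {
            when {
                environment name: 'enableZeroDownTime', value: 'true'
            }
            steps {
                script {
                    common.runStage("#CreateCustomer")
                }
            }
        }
    }
}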
I've been trying to construct multiple jobs from a list, and everything seems to be working as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory
def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]
DslFactory dslFactory = this as DslFactory
repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
After running the above script, all the required jobs are created successfully. But then once I build any job, the parameters disappear.
After that, when I run the seed job again, the job starts showing the parameters again. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me to figure out a similar issue with my .groovy file:
I called the parameters property twice (once at the start of the node, and then tried to set other parameters in an if block), so the latter call overwrote the initial parameters.
By the way, as per the comments in the linked ticket, it is an issue with both scripted and declarative pipelines.
I fixed it by providing all job parameters in each parameters call (for the case with the ifs).
Though I don't see repeated calls in the code you've provided, please check the full Groovy files for your jobs and add all parameters to all parameters {} blocks.
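As a sketch of that fix in a scripted pipeline (the parameter names BRANCH, DEPLOY, and RELEASE_TAG are hypothetical, just to illustrate repeating the full list in every call):
// Every properties() call replaces the job's parameter definitions,
// so each call must list ALL parameters, not just the new ones.
def allParams = [
    string(name: 'BRANCH', defaultValue: 'master', description: ''),
    booleanParam(name: 'DEPLOY', defaultValue: false, description: '')
]

node {
    properties([parameters(allParams)])

    if (params.DEPLOY) {
        // Repeat the full list here as well; listing only RELEASE_TAG
        // would drop BRANCH and DEPLOY from the job configuration.
        properties([parameters(allParams +
            string(name: 'RELEASE_TAG', defaultValue: '', description: ''))])
    }
}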
Start Jenkins job immediately after creation by seed job
I can start a job from within the job dsl like this:
queue('my-job')
But how do I start a job with arguments or parameters? I want to pass that job some arguments somehow.
Afaik, you can't.
But what you can do is create it from a pipeline (using the jobDsl step), then run it. Something more or less like this...
pipeline {
    agent any // any agent will do for creating and triggering the job
    stages {
        stage('jobs creation') {
            steps {
                jobDsl targets: 'my_job.dsl',
                       additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]
                build job: "my_job's_name",
                      parameters: [booleanParam(name: 'DRY_RUN', value: true)]
            }
        }
    }
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
    definition {
        // blah...
    }
}
NOTE: As you see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME variable), because otherwise I don't know how to make the jobDsl code return the name of the job it creates to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem. I use the DRY_RUN param of the job (a hidden param, in fact) to run a "do-nothing" build, as its name implies, so by the time others need to use the job for "real stuff" its params section has already been properly parsed.
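A sketch of what the created job could look like, under the assumption that the job's own pipeline script declares its parameters (which is what causes the "one run behind" effect the priming DRY_RUN build works around). DRY_RUN is shown here as an ordinary boolean parameter rather than a hidden one, and TARGET is a hypothetical parameter for the real runs:
pipelineJob(REQUESTED_JOB_NAME) {
    definition {
        cps {
            script('''
                pipeline {
                    agent any
                    parameters {
                        // These only register with the job after the pipeline
                        // has been parsed once, hence the priming DRY_RUN build.
                        booleanParam(name: 'DRY_RUN', defaultValue: false,
                                     description: 'Parse the job without doing real work')
                        string(name: 'TARGET', defaultValue: '',
                               description: 'Hypothetical parameter used by real runs')
                    }
                    stages {
                        stage('Work') {
                            when { expression { !params.DRY_RUN } }
                            steps {
                                echo "Doing real work against ${params.TARGET}"
                            }
                        }
                    }
                }
            ''')
            sandbox()
        }
    }
}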
Assume I have the following downstream job:
// DOWNSTREAM JOB
DYNAMIC_VAR = ""
parallel(
    {
        DYNAMIC_VAR = new Date() // Some other value determined
                                 // at runtime by this job
    },
    {
        // Some other stuff...
    }
)
As part of my upstream job (see example below) I want to be able to call the downstream job, and access the variable that was set during the downstream job.
// UPSTREAM JOB
my_build = build("my-custom-job")
// Would like to be able to do something like
// out.println my_build.build.get_var('DYNAMIC_VAR')
// or
// out.println my_build.build.DYNAMIC_VAR
Looking through the output it seems that the variable is not returned, and hence is not accessible. I suspect this is because the variable in question (DYNAMIC_VAR) is only available during the scope of the downstream job, and hence once the job finishes the variable is removed.
The two questions I wanted to ask are:
Is it correct that the variables are removed upon job completion?
Does anyone have an idea how this could (if it can) be achieved (additional plugins are fine if required)?
1) Would outputting the variable=value pair to some file be an acceptable solution for you?
2) I haven't used Groovy in Jenkins much, but all the job's environment variables are stored under:
${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/injectedEnvVars.txt
This may or may not require the EnvInject plugin.
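A rough sketch of that second idea from the upstream flow DSL, assuming the flow runs on the controller so it can read files under JENKINS_HOME, and that the EnvInject plugin wrote injectedEnvVars.txt for the downstream build:
// Run the downstream job and note its build number
def my_build = build("my-custom-job")
def buildNumber = my_build.build.number

// Read the injected environment file for that specific build from disk
def jenkinsHome = build.environment.get("JENKINS_HOME")
def envFile = new File("${jenkinsHome}/jobs/my-custom-job/builds/${buildNumber}/injectedEnvVars.txt")

def props = new Properties()
envFile.withInputStream { props.load(it) }
out.println "DYNAMIC_VAR = ${props.getProperty('DYNAMIC_VAR')}"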
According to the comments here: https://issues.jenkins-ci.org/browse/JENKINS-18784
You can do the following:
// – In job First, I am setting the environment variable testKey
b = build( "First" )
// Then, using it in workflow:
out.println b.build.properties.environment['testKey']
// Or
b.build.properties.environment.testKey
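And for completeness, a null-guarded version of that API-based approach, in case the downstream job did not actually export the variable (testKey is the name used in the linked ticket):
b = build("First")

// Environment of the finished downstream build, as exposed through its properties
def downstreamEnv = b.build.properties.environment

if (downstreamEnv?.get('testKey')) {
    out.println "testKey from downstream build: ${downstreamEnv['testKey']}"
} else {
    out.println "testKey was not exported by the downstream build"
}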