Accessing a downstream parameter using build flow - jenkins

Assume I have the following downstream job:
// DOWNSTREAM JOB
DYNAMIC_VAR = ""
parallel(
{
DYNAMIC_VAR = new Date() // Some other value determined
// at runtime by this job
},
{
// Some other stuff...
}
)
As part of my upstream job (see example below) I want to be able to call the downstream job, and access the variable that was set during the downstream job.
// UPSTREAM JOB
my_build = build("my-custom-job")
// Would like to be able to do something like
// out.println my_build.build.get_var('DYNAMIC_VAR')
// or
// out.println my_build.build.DYNAMIC_VAR
Looking through the output it seems that the variable is not returned, and hence is not accessible. I suspect this is because the variable in question (DYNAMIC_VAR) is only available during the scope of the downstream job, and hence once the job finishes the variable is removed.
My two questions I wanted to ask were:
Is it correct that the variables are removed upon job completion?
Does anyone have an idea how this could (if it can) be achieved (additional plugins are fine if required)?

1) Would outputting the variable=value pair to some file be an acceptable solution for you?
2) I haven't used groovy in Jenkins much, but all the job's environment variables are stored under:
${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_NUMBER}/injectedEnvVars.txt
This may or may not require EnvInject plugin.
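A rough sketch of option 1) in the Build Flow DSL: have the downstream job write the value into a properties file in its workspace, then read it back from the upstream flow. The file name downstream_vars.properties is illustrative, and this assumes the downstream workspace is still reachable when the flow reads it:

```groovy
// UPSTREAM (flow) JOB -- a sketch, not verified against every plugin version.
// Assumes the downstream job has already written a line like
//   DYNAMIC_VAR=<computed value>
// into downstream_vars.properties in its own workspace.
my_build = build("my-custom-job")

// my_build.build exposes the underlying AbstractBuild; getWorkspace() is a FilePath
def ws = my_build.build.workspace
def content = ws.child("downstream_vars.properties").readToString()

// Parse the key=value pairs and print the one we care about
def props = new Properties()
props.load(new StringReader(content))
out.println props.getProperty("DYNAMIC_VAR")
```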

According to the comments here: https://issues.jenkins-ci.org/browse/JENKINS-18784
You can do the following:
// In job First, I am setting the environment variable testKey
b = build( "First" )
// Then, using it in workflow:
out.println b.build.properties.environment['testKey']
// Or
b.build.properties.environment.testKey

Related

How to call a method inside the triggers block in a Jenkinsfile

I have a pipeline which needs to be scheduled to run at a particular time. There are some dynamic parameters that need to be passed while running the pipeline.
I have created a function that gives me the desired parameter value. However, this pipeline does not get triggered, because the function call is not resolved inside the triggers block and is instead treated as a string.
getlatest is the method I created, which takes 3 parameters. The value of this method is not resolved and is instead treated as a string. The pipeline runs as expected if I hardcode some value for version.
triggers{
parameterizedCron("H/5 * * * * % mod=test; version=getlatest('abc','xyz','lmn');")
}
The problem is that the code that calculates the parameter — just like any other code in Jenkins — needs an executor to run. To get an executor, you need to run your pipeline. To run your pipeline, you need to give Jenkins the parameters. But to give Jenkins the parameters, you need to run your code.
So there's a chicken and egg problem, there.
To break out of this cycle, you may want to run scripted pipeline before you run the declarative one:
node('built-in') { // or "master", or any other
def version = getlatest('abc','xyz','lmn')
def cron_parameters = "H/5 * * * * % mod=test; version=${version}"
println "cron_parameters is ${cron_parameters}"
env.CRON_PARAM = cron_parameters
}
pipeline {
agent { node { label "some_label" } }
triggers {
parameterizedCron(env.CRON_PARAM)
}
// ...
}
I've never seen this tried before, so I don't know whether what you are doing is something Jenkins is capable of. Instead, remove the parameter and create an environment variable called VERSION, assigning the function result to it:
environment {
VERSION = getlatest('abc','xyz','lmn')
}
And reference this VERSION variable instead of your input parameter.
How to reference:
env.VERSION or ${VERSION} or ${env.VERSION}
Examples:
currentBuild.displayName=env.VERSION
env.SUBJECT="Checkout Failure on ${VERSION}"
string(name: 'VERSION', value: "${env.VERSION}")

Jenkins pipeline - how to load a Jenkinsfile without first calling node()?

I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be eaten up unnecessarily because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib#head', changelog=false) _
node{
load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo'){
someBuildFlavor{
buildProperty = "some value unique to this build"
someConfig = ["VALUE1", "VALUE2", "VALUE3"]
runTestTarget = true
}
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I'd load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute groovy without first having hudson.FilePath context? I can't seem to find anything in the doc. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node() and call node() with the value the Jenkinsfile was using, then load the rest of the file, but, that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic into a function (named run() in my example) so that it will load but not run automatically.
def run() {
node('agent-type-foo'){
someBuildFlavor{
buildProperty = "some value unique to this build"
someConfig = ["VALUE1", "VALUE2", "VALUE3"]
runTestTarget = true
}
}
}
// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node{
jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no more nested node blocks, because the first one is closed before the run() function is called.
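A possible extension of this pattern (my own sketch, not part of the original answer): give run() an argument so the outer job can influence the loaded Jenkinsfile, for example which agent label it runs on. The agentLabel parameter is hypothetical:

```groovy
// Loaded Jenkinsfile -- same structure as above, but parameterized.
// 'agentLabel' is a hypothetical argument with a default matching the
// original example.
def run(String agentLabel = 'agent-type-foo') {
    node(agentLabel) {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}
return this
```

The caller then stays the same, except that it can pass a label: jenkinsfile.run('agent-type-bar').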

Start jenkins job immediately after creation by seed job, with parameters?

Start jenkins job immediately after creation by seed job
I can start a job from within the job dsl like this:
queue('my-job')
But how do I start a job with argument or parameters? I want to pass that job some arguments somehow.
Afaik, you can't.
But what you can do is create it from a pipeline (the jobDsl step), then run it. Something more or less like...
pipeline {
stages {
stage('jobs creation') {
steps {
jobDsl targets: 'my_job.dsl',
additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]
build job: "my_job's_name",
parameters: [booleanParam(name: 'DRY_RUN', value: true)]
}
}
}
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
definition {
// blah...
}
}
NOTE: As you see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME var) because otherwise I don't know how to make the jobDsl code return the name of the job it creates back to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem. I use the job's DRY_RUN param (a hidden param, in fact) to run a "do-nothing" build, as its name implies, so by the time others need the job for "real stuff" its params section has already been properly parsed.
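A sketch of what the DRY_RUN trick could look like in 'my_job.dsl'. A plain booleanParam is used here for simplicity (the hidden-parameter variant depends on the Hidden Parameter plugin); the inline script is illustrative:

```groovy
// my_job.dsl -- sketch only. REQUESTED_JOB_NAME is passed in via
// additionalParameters, as in the pipeline above.
pipelineJob(REQUESTED_JOB_NAME) {
    parameters {
        // Visible stand-in for the hidden param mentioned in the answer
        booleanParam('DRY_RUN', false, 'When true, just register params and exit')
    }
    definition {
        cps {
            script('''
                // First (dry) run only forces Jenkins to parse the params section
                if (params.DRY_RUN) {
                    echo 'Dry run: parameters registered, nothing to do.'
                    return
                }
                // ... real stages go here ...
            '''.stripIndent())
            sandbox()
        }
    }
}
```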

Define your own global variable for a Jenkins job (not for ALL jobs!!)

I have a Jenkins job that has a string input parameter holding the build flags for the make command. My problem is that some users forget to change the parameter values when we have a release branch. So I want to overwrite the existing string input parameter (or create a new one) that should be used if the job is a release job.
This is the statement I want to add:
If branch "release" then ${params.build_flag} = 'DEBUGSKIP=TRUE'
and the code that is not working is:
pipeline {
agent none
parameters {
string(name: 'build_flag', defaultValue: 'DEBUGSKIP=TRUE', description: 'Flags to pass to build')
If {
allOf {
branch "*release*"
expression {
${params.build_flag} = 'DEBUGSKIP=TRUE'
}
}
}else{
${params.build_flag} = 'DEBUGSKIP=FALSE'
}
}
The code above explains what I want to do, but I don't know how to do it.
If you can, see whether the Jenkins EnvInject Plugin works with your pipeline, using its supported use case:
Injection of EnvVars defined in the "Properties Content" field of the Job Property. These EnvVars are injected into the script environment and will be inaccessible via the "env" Pipeline global variable (as described in the plugin documentation).
Alternatively, write the right values to a file, and use that file's content as the "Properties Content" of a downstream job.
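A plugin-free sketch of the same idea (not part of the EnvInject answer above): derive the effective flag from the branch name at runtime, falling back to the user-supplied parameter. Note that BRANCH_NAME is only set automatically in multibranch pipelines, and the EFFECTIVE_FLAG name is my own:

```groovy
// Sketch: override build_flag on release branches without any extra plugin.
pipeline {
    agent any
    parameters {
        string(name: 'build_flag', defaultValue: 'DEBUGSKIP=FALSE',
               description: 'Flags to pass to build')
    }
    environment {
        // On a *release* branch force DEBUGSKIP=TRUE, otherwise honor the param
        EFFECTIVE_FLAG = "${(env.BRANCH_NAME ?: '').contains('release') ? 'DEBUGSKIP=TRUE' : params.build_flag}"
    }
    stages {
        stage('Build') {
            steps {
                sh 'make ${EFFECTIVE_FLAG}'
            }
        }
    }
}
```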

Pass Choice Parameter to Another Job in Jenkins with DSL Build Flow

I'm using the Build Flow plugin to perform parallel builds. I need to pass a choice parameter (branch_name) from the parent job to the child jobs. I'm unsure how to get this working. The choice parameter has multiple branch names. How can I do this?
Here's a sample of the code,
// Here's where I set the variable for the choice parameter (branch_name)
branch_name = ${branch_name}
// Here's where I call the variable to pass to the other jobs
parallel (
{ build("build1", branch_name:params["branch_name"], },
{ build("build2", branch_name:params["branch_name"], },
{ build("build3", branch_name:params["branch_name"], },
{ build("build4", branch_name:params["branch_name"], },
)
What am I doing wrong? Please help. Thx.
Assume you got this working? For future Googlers, I believe it would be as follows:
// Get the parameter of the parent, flow job - should be available as environment variable
def branch = build.environment.get("PARENT_PARAM_NAME")
parallel (
// pass to child job
{ build("build1", CHILD_PARAM_NAME: branch)},
// repeat as required
)
