Passing data (variable/parameter) from a downstream job to the upstream job in order to pass it on to another downstream job - Jenkins

I have a scenario where I am triggering two downstream jobs sequentially, one after the other, from an upstream job.
I need to return data xyz = 3.1416 (a parameter/variable) generated in the first downstream job (Job A) back to the upstream job, or read that data from the upstream job.
I want to do that because the upstream job needs to pass this data on to the other downstream job (Job B).
All these jobs are pipeline jobs.
I am writing the upstream job as an abstraction layer and to automate triggering the two downstream jobs one after the other.
structure / flowchart of jobs

There are a couple of approaches to solving that problem. Calling jobs both upstream and downstream isn't a good idea, because you can create a circular reference between them (e.g. A calls B, which calls A again, which calls B...). Your Jenkins probably won't break, because it is limited by the number of workers, but still...
Solution A: Use artifacts
You can store your values in JSON or YAML files and then archive them as Jenkins artifacts using the archiveArtifacts() step, retrieving them later with the Copy Artifact plugin. That way, jobs and builds can share information amongst them.
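A minimal sketch of that approach, assuming the Pipeline Utility Steps and Copy Artifact plugins are installed (the job name, file name and value are made up):
// In Job A: write the value to a file and archive it as an artifact
writeJSON file: 'result.json', json: [xyz: 3.1416]
archiveArtifacts artifacts: 'result.json'

// In the upstream job (or Job B): copy the artifact and read the value back
copyArtifacts projectName: 'JobA', selector: lastSuccessful()
def data = readJSON file: 'result.json'
echo "xyz = ${data.xyz}"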
Solution B: Use buildVariables
There's a way for downstream jobs to return values back to upstream jobs, using a resource known as buildVariables. Here's the code for the upstream job:
def ret = build job: 'downstream_job'
print "The returned value from the triggered job was ${ret.buildVariables.RETURNED_VALUE}"
And in the downstream job:
pipeline {
    agent any
    environment {
        RETURNED_VALUE = ""
    }
    stages {
        stage('Doing something') {
            steps {
                script {
                    print("Hi, I was triggered!")
                    env.RETURNED_VALUE = "Blah blah blah"
                }
            }
        }
    }
}
buildVariables can access any environment variable from the downstream job, except build parameters.
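To tie this back to the original question, the upstream job can then forward the returned value to the second downstream job. A sketch (the job and parameter names here are assumptions):
def a = build job: 'JobA'
def xyz = a.buildVariables.RETURNED_VALUE
build job: 'JobB', parameters: [string(name: 'XYZ', value: xyz)]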
Best regards.

Related

Get the result of each Jenkins job and send it by email

In my company I have a pipeline that runs several jobs. I want to get the result of each job, write those results to a file or a variable, and later have them emailed to me. Is there such a possibility? Note: I don't want the result of the pipeline, but the result of each of the jobs that are inside it.
I even tried making requests via the API, but then every pipeline would need its own code, and that is not feasible from a maintenance point of view.
When you trigger a job inside a pipeline, you use the build job step.
This step has a property called propagate that:
If enabled (default state), then the result of this step is that of the downstream build (e.g., success, unstable, failure, not built, or aborted). If disabled, then this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
You can write a wrapper for calling jobs that stores the result of each job (and maybe other data useful for debugging, like the build URL), so you can use it later to construct the contents of an email.
E.g.
// Binding variable (no 'def') so the helper method below can see it
jobResults = [:]

def buildJobAndStoreResult(jobName, jobParams) {
    def run = build job: jobName, parameters: jobParams, propagate: false
    jobResults[jobName] = [
        result: run.result
    ]
}
Then you can construct the body of the email by iterating through the map, e.g.
emailBody = "SUMMARY\n\n"
jobResults.each { it ->
    emailBody += "${it.key}: ${it.value.result}\n"
}
And use the mail step to send out a report.
It's worth thinking about whether you want your pipeline to fail after sending the email if any of the called jobs failed, and about adding links from your email report to the failed jobs and the calling pipeline.
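A minimal sketch of that reporting step, continuing the example above (the recipient address is a placeholder):
mail to: 'team@example.com',
     subject: "Downstream job results for ${env.JOB_NAME} #${env.BUILD_NUMBER}",
     body: emailBody

// Optionally fail this pipeline too if any of the called jobs did not succeed
if (jobResults.any { it.value.result != 'SUCCESS' }) {
    error 'One or more downstream jobs failed'
}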

How can I retrieve the execution status of parallel triggered child jobs to a pipeline script

I have a pipeline script that executes child jobs in parallel.
Say I have 5 pieces of data (a, b, c, d, e) that have to be executed on 3 jobs (J1, J2, J3).
My pipeline script is in the format below:
for (int i = 0; i < size; i++) {
    def index = i
    branches["branch${i}"] = {
        build job: 'SampleJob', parameters: [
            string(name: 'param1', value: '${data}'),
            string(name: 'dummy', value: "${index}")
        ]
    }
}
parallel branches
My problem is: say the execution is happening on Job 1 with the data 1, 2, 3, 4, 5, and the execution for data 3 fails on Job 1. Then the execution for data 3 should stop right there and should not happen in the subsequent parallel executions on Jobs 2 and 3.
Is there any way I can read the execution status of the parallelly executed jobs from the pipeline script, so that I can block data 3 from being executed on Jobs 2 and 3?
I have been blocked here for a long time. Hoping for a solution from the community. Thanks a lot in advance.
In summary, it sounds like you want to:
- run multiple jobs in parallel against different pieces of data (I will call the set of related jobs the "batch"),
- avoid starting a queued job if any of the jobs in the batch have failed,
- automatically abort a running job if any of the jobs in the batch have failed.
The jobs need some way to communicate their failure to the others. Use a shared storage location to store the "failure flag". If the file exists, then one or more of the jobs have failed.
For example, a shared NFS path: /shared/jenkins/jobstate/<BATCH_ID>/failed
At the start of the job, check for the existence of this path. Exit if it does. The file doesn't necessarily need to contain any data - its presence is enough.
Since you need running jobs to abort early if the failure flag exists, you will need to poll that location periodically. For example, after each unit of work. Again, if the file exists then exit early.
If you don't use NFS, that's ok. You could also use an object storage bucket. The important thing is that the state is accessible to all the relevant build jobs.
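A rough sketch of that check inside each child job, assuming the NFS path above and a BATCH_ID parameter passed to every job in the batch:
def flagFile = "/shared/jenkins/jobstate/${params.BATCH_ID}/failed"

// Skip this job entirely if another job in the batch has already failed
if (sh(script: "test -e '${flagFile}'", returnStatus: true) == 0) {
    error "Batch ${params.BATCH_ID} already has a failure - aborting"
}

try {
    // ... do one unit of work, re-running the check above between units ...
} catch (err) {
    // Raise the failure flag so the other jobs in the batch can see it
    sh "mkdir -p \$(dirname '${flagFile}') && touch '${flagFile}'"
    throw err
}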

Jenkins pipeline queue gets full when all agents are offline

I am using a Jenkins pipeline script and when all nodes are offline, the builds keep on queuing up. How do I stop Jenkins from adding jobs to the queue while all slaves are offline?
pipeline {
    triggers {
        pollSCM('H/3 * * * 1-5')
    }
}
Is your agent's availability configured to 'Keep this agent online as much as possible'?
One way to tackle this situation is to run the script below on the master node and build your pipeline(s) only if at least one of the nodes is online. You can pass the online node's name to your downstream job as a parameter.
def axis = []
for (slave in jenkins.model.Jenkins.instance.getNodes()) {
    if (slave.toComputer().isOnline()) {
        axis += slave.getDisplayName()
    }
}
return axis
Above script source: Jenkins: skip if node is offline
Other links that may help are:
Monitor and restart your slave nodes - https://wiki.jenkins.io/display/JENKINS/Monitor+and+Restart+Offline+Slaves
I found this script handy in some situations:
https://github.com/jenkinsci/jenkins-scripts/blob/master/scriptler/clearBuildQueue.groovy
I'm not into pipeline jobs, but for regular freestyle jobs, this kind of queueing will only happen if your builds are parameterized. Separate builds are then needed to ensure that the project runs separately for each and every parameter value (it does not matter whether the values are actually different).
So, removing build parameters from your project might solve the problem.

Pass large amount of parameters between jobs

I have two Jenkins jobs that run on separate computers. On computer 1 I read a properties file and use it for environment variables. But I need the same file on PC 2, and it only exists on the first one. When the first Jenkins job finishes it starts the second one and can pass it a parameters file, but with the Parameterized Trigger Plugin I would have to create a separate parameter for each value, and I have a lot of them and don't want to do that. Is there a simple solution for this issue?
Forget Jenkins 1 and the Parameterized Trigger Plugin. Using Jenkins 2, combine both steps into a single pipeline so that stash/unstash can carry the files from one node to the other. Here's an example of what you need:
node ("pc1") {
stage "step1"
stash name: "app", includes: "properties_dir/*"
}
node ("pc2") {
stage "step2"
dir("dir_to_unstash") {
unstash "app"
}
}

jenkins, how to run multiple remote jobs without stopping on failure

I have a Jenkins job that I'm using to aggregate the execution of multiple other jobs that only perform testing. Because they are testing jobs, I want all of them to run regardless of any failures. I do want to keep track of whether or not there has been a failure, so that I can set the end result to FAILURE rather than SUCCESS if need be.
At the moment I am calling one remote job via a bash script and the jenkins-cli. I have a second child job that is local, so I'm using the "trigger/call builds on other jobs" build step to run that one.
Any ideas on how to accomplish this?
If you can use the Build Flow plugin it is easy; if you use a pipeline it is possible too, but I can't give you an example off the top of my head - I'd have to look it up if that is the case.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin:
def result = SUCCESS
ignore(FAILURE) {
    def job1 = build('job1')
    result = job1.result.combine(result)
}
ignore(FAILURE) {
    def job2 = build('job2')
    result = job2.result.combine(result)
}
build.result = result.combine(build.result)
http://javadoc.jenkins.io/hudson/model/Result.html
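For the pipeline case mentioned above, a rough equivalent could look like this in a scripted pipeline (a sketch only; the job names are placeholders):
def failed = false
for (name in ['test-job-1', 'test-job-2']) {
    // propagate: false keeps this aggregate job running even if the child job fails
    def run = build job: name, propagate: false
    if (run.result != 'SUCCESS') {
        failed = true
    }
}
if (failed) {
    currentBuild.result = 'FAILURE'
}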
