Get the result of each Jenkins job and send it by email - jenkins

In my company I have a pipeline that runs several jobs. I want to get the result of each job, write those results to a file or variable, and later email them to me. Is there such a possibility? Keep in mind: I don't want the result of the pipeline itself, but the result of each of the jobs that run inside it.
I even tried making requests via the API, but every pipeline would need its own code for that, which is not feasible because of the maintenance burden.

When you trigger a job inside a pipeline, you use the build job step.
This step has a property called propagate that:
If enabled (default state), then the result of this step is that of the downstream build (e.g., success, unstable, failure, not built, or aborted). If disabled, then this step succeeds even if the downstream build is unstable, failed, etc.; use the result property of the return value as needed.
You can write a wrapper for calling jobs that stores the result of each job (and maybe other data useful for debugging, like the build URL), so you can use it later to construct the contents of an email.
E.g.
def jobResults = [:]

def buildJobAndStoreResult(jobName, jobParams) {
    // propagate: false keeps this step from failing the pipeline when the called job fails
    def run = build job: jobName, parameters: jobParams, propagate: false
    jobResults[jobName] = [
        result: run.result
    ]
}
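You can then call the wrapper from your stages, for example (the job names and parameters below are only illustrative):
buildJobAndStoreResult('integration-tests', [string(name: 'TARGET_ENV', value: 'staging')])
buildJobAndStoreResult('smoke-tests', [])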
Then you can construct the body of an email by iterating through the map, e.g.
emailBody = "SUMMARY\n\n"
jobResults.each() { it ->
​ str += "${it.key}: ${it.value.result}\n"
}
And use the mail step to send out a report.
It's worth considering whether you want your pipeline to fail after sending the email if any of the called jobs failed, and whether to add links from the email report to the failed jobs and the calling pipeline.
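A minimal sketch of that last part, assuming the jobResults map and emailBody string from above (the recipient address is just a placeholder):
mail to: 'team@example.com',
     subject: "Job summary for ${env.JOB_NAME} #${env.BUILD_NUMBER}",
     body: emailBody

// Optionally fail the pipeline afterwards if any of the called jobs did not succeed
if (jobResults.any { it.value.result != 'SUCCESS' }) {
    error 'One or more downstream jobs failed - see the email report for details'
}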

Related

How to discard old builds in Jenkins, based on their completion status?

I have a Jenkins job set up to poll a git repository for changes every 10 minutes. If it doesn't find one (99 times out of 100, this is what happens) it aborts the build early and marks it as UNSTABLE. If it finds a change, it goes through with the build and marks it as SUCCESS, storing its artifacts. I know I can use a plugin to discard old builds, but its options do not include filtering by completion status.
Ideally, I want to discard all but the latest UNSTABLE build and keep all SUCCESS or FAILED builds and their artifacts. If this is not possible, simply discarding all UNSTABLE builds would also work.
Note: I am using a declarative Pipeline
One possibility would be to discard builds programmatically. Get your job object with def job = Jenkins.instance.getItem("JobName"). Since you are using a declarative pipeline, job is of type WorkflowJob [1] and you can get all its builds with job.getBuilds(). Now you can check the result of each build (WorkflowRun objects [2]) and decide whether you want to delete it or not. Something like the following should work:
def job = Jenkins.instance.getItem("JobName")
job.getBuilds().each {
    if (it.result.toString() == "UNSTABLE") {
        it.delete()
        job.save()
    }
}
You could create a new job that executes the code above and is triggered after YourJob has been built.
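If you want to keep only the most recent UNSTABLE build, as the question asks, a variant along these lines should work. It assumes getBuilds() returns builds newest first, which is its usual ordering:
def job = Jenkins.instance.getItem("JobName")
def latestUnstableSeen = false
job.getBuilds().each {
    if (it.result.toString() == "UNSTABLE") {
        if (latestUnstableSeen) {
            it.delete()   // delete every UNSTABLE build except the newest one
        }
        latestUnstableSeen = true
    }
}
job.save()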
[1] https://javadoc.jenkins.io/plugin/workflow-job/org/jenkinsci/plugins/workflow/job/WorkflowJob.html
[2] https://javadoc.jenkins.io/plugin/workflow-job/org/jenkinsci/plugins/workflow/job/WorkflowRun.html

Passing data (variable/parameter) from one downstream job to upstream job in order to pass the data to another downstream job

I have a scenario where I am triggering two downstream jobs one after another sequentially from an upstream job.
I need to return a value xyz = 3.1416 (parameter/variable) generated in the first downstream job (Job A) back to the upstream job, or read that value from the upstream job.
I want to do that because the upstream job needs to pass this data to the other downstream job (Job B).
All these jobs are pipeline jobs.
I am writing the upstream job as an abstraction layer and to automate the trigger of the 2 downstream jobs sequentially one after another.
[structure / flowchart of jobs]
There are a couple of approaches to solving that problem. Calling jobs both upstream and downstream isn't a good idea, because you can create a circular reference between them (i.e. A calls B, which calls A again, which calls B...). Your Jenkins probably won't break, because it is limited by the number of workers, but still...
Solution A: Use artifacts
You can store your values in JSON or YAML files and then create Jenkins artifacts using the archiveArtifacts() step and the Copy Artifact plugin. That way, jobs and builds can share information amongst them.
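A rough sketch of that idea, assuming the Pipeline Utility Steps plugin for writeJSON/readJSON and the copyArtifacts step from the Copy Artifact plugin (job and file names are placeholders):
// In the first downstream job (Job A): write the value to a file and archive it
writeJSON file: 'results.json', json: [xyz: 3.1416]
archiveArtifacts artifacts: 'results.json'

// In the upstream job: copy the artifact from Job A's last successful build,
// read it back and hand the value to Job B
copyArtifacts projectName: 'JobA', selector: lastSuccessful()
def data = readJSON file: 'results.json'
build job: 'JobB', parameters: [string(name: 'xyz', value: "${data.xyz}")]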
Solution B: Use buildVariables
There's a way for downstream jobs to return values back to upstream jobs, using a resource known as buildVariables. Here's the code from the upstream job:
def ret = build job: 'downstream_job'
print "The returned value from the triggered job was ${ret.buildVariables.RETURNED_VALUE}"
And in the downstream job:
pipeline {
    agent any
    environment {
        RETURNED_VALUE = ""
    }
    stages {
        stage('Doing something') {
            steps {
                script {
                    print("Hi, I was triggered!")
                    env.RETURNED_VALUE = "Blah blah blah"
                }
            }
        }
    }
}
buildVariables can access any environment variable from the downstream job, except build parameters.
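Since the upstream job in the question also has to forward the value to the second downstream job, a small sketch of that hand-off (the parameter name XYZ is only illustrative) could look like this:
def ret = build job: 'downstream_job_A'
def xyz = ret.buildVariables.RETURNED_VALUE
// Forward the value to the second downstream job as a build parameter
build job: 'downstream_job_B', parameters: [string(name: 'XYZ', value: xyz)]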
Best regards.

PullRequest Build Validation with Jenkins and OnPrem Az-Devops

First off the setup in question:
A Jenkins Instance with several build nodes and on prem Azure-Devops server containing the Git Repositories.
The Repo in question is too large to always build on push for all branches and all devs, so a small workaround was done:
- The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not improve quality).
- All other branches have their automated building suppressed; devs can still start builds manually for builds/deployments/unit tests if they choose.
- The Jenkinsfile is parameterized for which platforms to build: on prod* branches all the platforms are true, on all other branches false. This helps because otherwise the initial build of a feature branch would always build/deploy all platforms locally, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build validation .yml. This basically works: when I call the source branch of the pull request with the merge commit ID, I add a parameter isPullRequestBuild which contains the ID of the PR.
snippet of the yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if (params.pullRequestID?.trim()) {
    isPullRequest = true
    // do stuff to change how the pipeline should react
}
In the Jenkinsfile I check whether the parameter is non-empty, and if so I reset the platforms to build to basically all of them and run the unit tests.
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, nothing is built, and the job returns 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to get information from the remote call about whether this was the build with ID 1?
The only other option would be to call Jenkins via the web API and check for the last successful build, but in that case I would have to store the token somewhere in source control.
Am I missing something obvious here? I don't want the feature-branch builds to do nothing more than once, because devs could lose useful information about the builds/deployments they started.
Any ideas appreciated.
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called via a user that is only used for automated builds. So, if that user triggered the build, I set everything up to run a pull request validation, even if it is the first build. Along the lines of:
def triggeredByAzureDevops = false
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null) {
    def buildCauses = readJSON text: causes.toString()
    buildCauses.each { buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser") {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be approved by a Jenkins admin for this to work.

jenkins fails on building a downstream job

I'm trying to trigger a downstream job from my current job like so
pipeline {
    stages {
        stage('foo') {
            steps {
                build job: 'my-job', propagate: true, wait: true
            }
        }
    }
}
The purpose is to wait for the job result and fail or succeed according to that result. Jenkins always fails with the message "Waiting for non-job items is not supported". The job mentioned above does not have any parameters and is defined like the rest of my jobs, using the multibranch pipeline plugin.
All I can think of is that this type of Jenkins item is not supported as a build step input, but that seems counterintuitive and would be a blocker for me. Can anyone confirm whether this is indeed the case?
If so, can anyone suggest any workarounds?
Thank you
I actually managed to fix this by paying more attention to the definition of the build step. Since all my downstream jobs are defined as multibranch pipeline jobs, their structure is folder-like, with each item in the folder representing a separate job. Thus the correct way to call the downstream jobs was not build job: 'my-job', propagate: true, wait: true, but rather build job: "my-job/my-branch-name", propagate: true, wait: true.
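For example, a minimal sketch assuming the calling job is itself a multibranch pipeline and you want to build the same branch of the downstream job:
build job: "my-job/${env.BRANCH_NAME}", propagate: true, wait: true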
Also, unrelated to the question but related to the issue at hand: make sure you always have at least one more free executor on the Jenkins machine, since wait: true will occupy one executor for the waiting job and one for the job being waited on, and you can easily find yourself in a resource-starvation situation.
Hope this helps
This looks like JENKINS-45443 which includes the comment
Pipeline has no support for the upstream/downstream job system, in part due to technical limitations, in part due to the fact that there is no static job configuration that would make this possible except by inspecting recent build metadata.
But it also offers a workaround:
as long as the solution is still ongoing, I include here our workaround. It is based on the rtp (Rich Text Publisher) plugin, which you need to have installed for it to work:
At the end of our Jenkinsfile, after triggering the job, we wait for it to finish. In that case, build() returns the object used to run the downstream job, and we get the info from it.
Warning: getAbsoluteUrl() function is a critical one. Use it at your own risk!
def startedBld = build(
    job: YOUR_DOWNSTREAM_JOB,
    wait: true, // VERY IMPORTANT, otherwise build() does not return the expected object
    propagate: true
)

// Publish the started build information in the build result
def text = '<h2>Downstream jobs</h2>Started job ' + startedBld.rawBuild.toString()
rtp(nullAction: '1', parserName: 'HTML', stableText: text)
This issue is part of JENKINS-29913, which has been open for the past two years:
Currently DependencyGraph is limited to AbstractProject, making it impossible for Workflows to participate in upstream/downstream relationships (in cases where job chaining is required, for example due to security constraints).
It refers to the RFE (Request for Enhancement) JENKINS-37718, based on another (unanswered) Stack Overflow question.

jenkins, how to run multiple remote jobs without stopping on failure

I have a Jenkins job that I'm using to aggregate the execution of multiple other jobs that only perform testing. Because they are testing jobs, I want all of them to run regardless of any failures. I do want to keep track of whether or not there has been a failure, so that I can set the end result to FAILURE rather than SUCCESS if need be.
At the moment I am calling one remote job via a bash script and the jenkins-cli. I have a second child job that is local, so I'm using the "trigger/call builds on other jobs" build step to run that one.
Any ideas on how to accomplish this?
If you can use the Build Flow plugin it is easy; it is possible with a pipeline too, but I can't give you an example off the top of my head and would have to look it up if that is the case.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin:
def result = SUCCESS
ignore(FAILURE) {
    def job1 = build('job1')
    result = job1.result.combine(result)
}
ignore(FAILURE) {
    def job2 = build('job2')
    result = job2.result.combine(result)
}
build.result = result.combine(build.result)
http://javadoc.jenkins.io/hudson/model/Result.html
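For completeness, a plain pipeline sketch of the same idea (not from the original answer; the job names are placeholders) would call each test job with propagate: false so a failure does not stop the loop, and downgrade the overall result at the end:
def anyFailed = false
['test-job-1', 'test-job-2'].each { name ->
    def run = build job: name, propagate: false, wait: true
    if (run.result != 'SUCCESS') {
        anyFailed = true
    }
}
if (anyFailed) {
    currentBuild.result = 'FAILURE'
}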
