Jenkins MultiJob - merging console output from child jobs into one output

I have a Jenkins Multijob project (https://wiki.jenkins.io/display/JENKINS/Multijob+Plugin), with let's say 10 child jobs. Each of these jobs has console output that is essential for me to watch. Rather than open 10 tabs and jump between them to watch the output, is there a way I can funnel all the console output of each job into one? Can I perhaps send all this output to the console of the master/Multijob, instead of it simply listing [SUCCESS] or [FAILURE] of each of the child jobs?

I was looking for something similar and found this post: Jenkins hierarchical jobs and jobs status aggregation
Based on that, I did something similar for my MultiJob, which has only one level of phase jobs:
import hudson.model.*
import com.tikal.jenkins.plugins.multijob.*

// 'manager' comes from the Groovy Postbuild plugin and writes to this build's console
void log(msg) {
    manager.listener.logger.println(msg)
}

threshold = Result.SUCCESS

void aggregate_results() {
    failed = false
    mainJob = manager.build.getProject().getName()
    job = hudson.model.Hudson.instance.getItem(mainJob)
    log "---------------------------------------------------------------------------------------------------------------"
    log "Aggregated status report"
    log "---------------------------------------------------------------------------------------------------------------"
    log("${mainJob} #${manager.build.getNumber()} - ${manager.build.getResult()}")
    // For each phase job of this MultiJob build, print its result and dump its full console log
    job.getLastBuild().getSubBuilds().each { subBuild ->
        subJob = subBuild.getJobName()
        subJobNumber = subBuild.getBuildNumber()
        childJob = hudson.model.Hudson.instance.getItem(subJob)
        // Look up the exact sub-build by number; getLastCompletedBuild() could
        // already point at a newer run of the child job
        childBuild = childJob.getBuildByNumber(subJobNumber)
        log "${subJob} #${subJobNumber} - ${childBuild.getResult()}"
        log childBuild.getLog()
    }
}

try {
    aggregate_results()
} catch (Exception e) {
    log("ERROR: ${e.message}")
    log("ERROR: Failed status report aggregation")
}
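This is meant to run as a Groovy Postbuild step configured on the MultiJob itself (the manager object is the Groovy Postbuild binding). After the phases finish, it prints each child job's result and full console log into the MultiJob's own console, which gives you the single funnelled output the question asks for.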


Can scheduled (cron) Jenkins jobs access previous job status

Below is a simplified case.
I have one node named comp01. And I have a Jenkins job named Compatibility.
Compatibility is scheduled as follows:
0 12 * * 1 %IntegrationNode=Software_1
0 17 * * 1 %IntegrationNode=Software_2
0 22 * * 1 %IntegrationNode=Software_3
0 2 * * 2 %IntegrationNode=Software_4
0 7 * * 2 %IntegrationNode=Software_5
The jobs start as scheduled. But sometimes, because of some verification failure, the previous job takes longer than expected, so the next job starts before the previous one has completed.
Is there a way in Jenkins to make the next scheduled job wait in the queue until the previous job is complete? Or can we schedule based on the previous job's status?
We have tried limiting the executors for this job, but when more than a couple of jobs are queued, the expected behavior is not observed.
We have also tried creating resource groups and adding multiple nodes to them, but the expected behavior is still not observed when multiple jobs are queued.
EDIT-1:
We can't use options { disableConcurrentBuilds() } because we deliberately start the job concurrently on different nodes. What we are struggling with is ensuring that once a job has started on a node, other scheduled jobs for the same node wait until the current one completes.
Have you tried setting the below option?
options { disableConcurrentBuilds() }
Update
AFAIK there is no OOB solution for your problem, but you can definitely implement something. Without seeing your actual Pipelines I can't give a concrete answer, but here are some options.
Option 01
Use the Lockable Resources plugin and create a resource per IntegrationNode, acquiring it when running the job; the next build will then wait until the lock is released.
lock(resource: 'IntegrationNode1', skipIfLocked: false) {
    echo "Run your logic"
}
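Given the EDIT, where the same job deliberately runs concurrently on different nodes, the resource name can be derived from the build parameter so that only builds targeting the same node serialize. A minimal sketch, assuming the IntegrationNode parameter from the question's schedule (the plugin creates an ephemeral resource if none is pre-defined):
// Sketch: one lockable resource per IntegrationNode value
lock(resource: "integration-${params.IntegrationNode}", skipIfLocked: false) {
    echo "Running integration for ${params.IntegrationNode}"
    // ... actual build steps ...
}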
Option 02
You can implement waiting logic that checks the status of the previous build. Here is a sample Pipeline and possible Groovy code you can leverage.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Waiting"
                    def jobName = "JobA"
                    def buildNum = "92"
                    // Poll until the given build is no longer running
                    waitUntil { !isPending(jobName, buildNum) }
                    echo "Actual Run"
                }
            }
        }
    }
}

// Returns true while the given build is still in progress.
// Uses Jenkins.instance, so it needs script approval (or a Global Shared Library).
def isPending(def jobName, def buildNumber) {
    def build = Jenkins.instance.getItemByFullName(jobName).getBuild(buildNumber)
    return build.isInProgress()
}
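If the goal is simply to wait for the same job's previous run, a variation that avoids hard-coding the job name and build number (again a sketch; currentBuild.previousBuild and rawBuild also require script approval):
script {
    // Wait until this job's own previous build, if any, has finished
    def prev = currentBuild.previousBuild
    if (prev != null) {
        waitUntil { !prev.rawBuild.isInProgress() }
    }
}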

Jenkins pipeline - parse logs of specific branch

I have a Jenkins pipeline that builds in parallel, and when I go to "<jenkins_pipeline>/<build_id>/consoleFull", I can see logs like this:
[branch-1] hi
[branch-2] log11
[branch-3] my logg
second line of logg
[branch-1] yooo
[branch-2] loggerr
hii
hiiiiiii
[branch-1] log line
How can I parse the logs of a specific branch (e.g. branch-2)?
I prefer to have the logic in my code and not use third-party packages.
NOTE: scripted pipeline
node {
    stage('CheckLog') {
        // Note: a 'steps' block is declarative syntax and is not valid in a scripted pipeline.
        // currentBuild.rawBuild requires script approval when running in the Groovy sandbox.
        def logList = currentBuild.rawBuild.logFile.readLines()
        // Each parallel line is prefixed with "[branch-name]", so anchor the regex on that
        def filteredLog = logList.grep(~/^\[branch-2\].*/)
        //< do your stuff >
    }
}
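Note that a plain grep drops continuation lines such as "second line of logg", which belong to a branch but carry no [branch-x] prefix. Below is a rough sketch that keeps them by remembering which branch printed last; it is only a heuristic, since parallel output can interleave mid-entry:
// Collect the lines of one branch, including unprefixed continuation lines.
// @NonCPS because java.util.regex.Matcher is not serializable by the CPS engine.
@NonCPS
def branchLog(List<String> lines, String branch) {
    def current = null
    def result = []
    for (line in lines) {
        def m = (line =~ /^\[([^\]]+)\] ?(.*)/)
        if (m.find()) {
            current = m.group(1)
            if (current == branch) {
                result << m.group(2)
            }
        } else if (current == branch) {
            result << line
        }
    }
    return result
}
// e.g. def branch2Lines = branchLog(logList, 'branch-2')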

How to disable or remove multibranch pipeline trigger on existing jobs

I have hundreds of Jenkins jobs (multibranch pipelines) with a trigger enabled to periodically scan their repositories every 5 minutes. I'm trying to disable "scan multibranch pipeline triggers" on all the existing jobs in a particular folder (development/microservice). I'm running the below script from the Jenkins script console and getting an exception at removeTrigger:
import hudson.model.*
import hudson.triggers.*
import jenkins.model.*
import com.cloudbees.hudson.plugins.folder.Folder

for (it in Jenkins.instance.getAllItems(jenkins.branch.MultiBranchProject.class)) {
    if (it.fullName.length() > 25 && it.fullName.substring(0, 25) == 'development/microservice/'
            && it.fullName.split("/").length == 3) {
        println it.fullName
        it.triggers.each { descriptor, trigger ->
            it.removeTrigger(descriptor)
            it.save()
        }
    }
}
Can someone please help me disable triggers on multibranch pipeline jobs programmatically?
It seems one just needs to iterate over the triggers, and pass the right part to removeTrigger(); that means passing the trigger rather than the descriptor:
for (p in Jenkins.instance.getAllItems(jenkins.branch.MultiBranchProject.class)) {
    p.triggers.each { descriptor, trigger ->
        //println descriptor
        //println trigger
        p.removeTrigger(trigger)
    }
}
Output sample for a single trigger, with the println statements uncommented:
com.cloudbees.hudson.plugins.folder.computed.PeriodicFolderTrigger$DescriptorImpl@30f35b28
com.cloudbees.hudson.plugins.folder.computed.PeriodicFolderTrigger@3745f994
Many thanks for your question, your almost working code helped me a lot. ;)
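For completeness, the working call combined with the folder filter from the question (path prefix taken from the question; run it from the script console) might look like this:
import jenkins.branch.MultiBranchProject

for (p in Jenkins.instance.getAllItems(MultiBranchProject.class)) {
    // Only jobs directly under development/microservice/
    if (p.fullName.startsWith('development/microservice/') && p.fullName.split('/').length == 3) {
        p.triggers.each { descriptor, trigger ->
            p.removeTrigger(trigger)  // pass the Trigger, not its Descriptor
        }
        p.save()
    }
}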

Jenkins - how to show downstream jobs build result on Gerrit patch set?

Below is my use case:
I have jobs A, B, and C: A is the upstream job, and B and C are downstream jobs. When a patch set is created in Gerrit, Job A is triggered on the patchset-created event, and based on its result we trigger B and C. After B and C have executed, I want to display the results of all three jobs on the Gerrit patch set, like:
Job A SUCCESS
JOB B SUCCESS
JOB C FAILED
Right now I see only Job A's build result showing up on the Gerrit patch set, as
JOB A SUCCESS
Is there any way to do this?
Do the following:
1) Configure all jobs (A, B and C) to trigger when the patch set is created.
2) Configure the jobs B and C to depend on job A
2.1) Click on "Advanced..." in Gerrit Trigger job configuration
2.2) Add the job A on the "Other jobs on which this job depends" field
With this configuration, jobs B and C will wait for job A to finish before they start, and you'll get the result you want.
The best way to solve this is to create a small wrapper pipeline job. Let's name it Build_ABC.
Configure Build_ABC to trigger on the Gerrit event you wish. The job will be responsible for running the other three builds, and in the event of any failure among them, Build_ABC will fail and report this back to Gerrit. You will not be able to see immediately which job failed from the Gerrit message, but you will be able to see it in your Jenkins pipeline overview.
In the scripted pipeline below, the pipeline calls Build_A and waits for the result. If that build succeeds, it continues to execute Build B and C in parallel. In my example I made Build C fail, which caused the whole pipeline job to fail.
This is a revised version of my first answer, and the script has grown a bit. As the individual build results are required in the message posted to Gerrit, the pipeline has been changed to catch the individual results and record them. If build A fails, builds B and C will be skipped and their status reported as Skipped.
Next, the gerrit review ssh command-line tool is used to perform a manual review. This way a custom message can be generated that includes the individual build results.
I haven't figured out how to make it a multi-line comment, but there is also an option to pass JSON on the command line; have a look at that.
def build_a = "Skipped"
def build_b = "Skipped"
def build_c = "Skipped"
def build_result = "+1"

try {
    stage("A") {
        try {
            build(job: '/Peter/Build_A', wait: true)
            build_a = "Pass"
        } catch (e) {
            build_a = "Failed"
            // Rethrow, or else the build will not be marked as failed.
            // Throwing here also prevents B and C from running.
            throw e
        }
    }
    stage("After A") {
        parallel B: {
            try {
                build(job: '/Peter/Build_B', wait: true)
                build_b = "Pass"
            } catch (e) {
                build_b = "Failed"
                // Rethrow, or else the build will not be marked as failed
                throw e
            }
        }, C: {
            try {
                build(job: '/Peter/Build_C', wait: true)
                build_c = "Pass"
            } catch (e) {
                build_c = "Failed"
                // Rethrow, or else the build will not be marked as failed
                throw e
            }
        }
    }
} catch (e) {
    build_result = "-1"
    // Rethrow, or else the build will not be marked as failed
    throw e
} finally {
    node('master') {
        // Post a custom review using the Gerrit Trigger environment variables
        sh "ssh -p ${env.GERRIT_PORT} ${env.GERRIT_HOST} gerrit review --verified ${build_result} -m '\"Build A: ${build_a} Build B: ${build_b} Build C: ${build_c}\"' ${env.GERRIT_PATCHSET_REVISION}"
    }
}
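Regarding the multi-line comment: gerrit review can also read a JSON review from stdin via --json, which permits embedded newlines. A rough sketch only, with the field names assumed from Gerrit's ReviewInput schema (check the cmd-review documentation of your Gerrit version):
node('master') {
    // Hypothetical: build a multi-line message and feed it to gerrit review --json
    def msg = "Build A: ${build_a}\nBuild B: ${build_b}\nBuild C: ${build_c}"
    def json = groovy.json.JsonOutput.toJson([message: msg, labels: [Verified: build_result == "+1" ? 1 : -1]])
    writeFile file: 'review.json', text: json
    sh "ssh -p ${env.GERRIT_PORT} ${env.GERRIT_HOST} gerrit review --json ${env.GERRIT_PATCHSET_REVISION} < review.json"
}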
Next, you should configure the Gerrit trigger to ignore the results from Jenkins, or else there will be a double vote.
One more advantage is that with the Blue Ocean plugin you get a nice graphical representation of your build, and you can examine what went wrong by clicking on the individual jobs.

Restart Jenkins job if a particular error is found

We are building a large number of variants of our code in every nightly build, and naturally there will often be intermittent errors, even if the chance of any single error is only a fraction of a percent. Some of the most common are slaves that disconnect during a build and servers that don't respond.
The Build Failure Analyzer plugin can categorize different failure causes, but what we need is a plugin that can act on those problems and retrigger the build if there is an intermittent error. Preferably the solution should fit into our build flow so that the results propagate to the job that creates the build report.
Is there such a plugin or other tool for doing this?
Here's the Naginator Plugin!
With it you can:
Rerun build for unstable builds as well as failures
Only rebuild the job if the build's log output contains a given regular expression
Rerun build only for the failed parts of a matrix job
If you are using a Pipeline, you can define a generic retry method that takes an invocation and a retriable-error strategy as input.
What it does is tee the execution output into a separate file (using the tee step of the Pipeline Utility Steps plugin), then read the file back and check whether the retriable strategy applies.
Something like:
def retry(execution, isRetriable) {
    def retryCount = 0
    while (true) {
        // Capture the console output of this attempt in its own file
        def file = "execution-${System.currentTimeMillis()}.log"
        try {
            tee(file) {
                execution()
            }
            break
        } catch (Exception e) {
            def output = readFile(file: file)
            if (isRetriable(output)) {
                retryCount++
                // Give up after 5 attempts
                if (retryCount == 5) {
                    throw e
                }
            } else {
                throw e
            }
        }
    }
}
And then wrap your invocation with retry:
retry(
    { stepThatOccasionallyFails() },
    { output -> output.contains('a random error!') }
)
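For instance, to retry a flaky shell step on a known intermittent error (the script path and error text here are made up for illustration):
retry(
    { sh './run-nightly-variant.sh' },
    { output -> output.contains('Connection reset by peer') }
)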
