Temporarily disable SCM polling on Jenkins Server in System Groovy - jenkins

We have a Jenkins server which is running somewhere between 20 and 30 jobs.
Since the build process is reasonably complex, we've broken the actual build down into several sub-builds, some of which can run concurrently while others have to follow previous build steps. As a result we've grouped the build steps into groups, which block while the builds are in progress.
For example:
Main Build : GroupA : Builds A1, A2 & A3
           : GroupB : Builds B1, B2 & B3
           : GroupC : Builds C1, C2, C3, C4, C5 & C6
           : GroupD : HW_Tests T1, T2, T3, T4 & T5
Builds B1, B2 & B3 rely on the output from A1, A2, A3 etc
Since there are builds and tests running pretty much 24/7, I am finding it difficult to schedule a restart of the Jenkins Master. Choosing "Prepare for shutdown" will mean new jobs are queued, but it will invariably block a running job since, to use my example above, if GroupB is active, builds C1, C2, etc. will also be queued, and the Main Build will be blocked.
As a workaround, I would like to disable the SCM polling on the server until all running jobs have finished. This will prevent new jobs from triggering but also allow the running jobs to finish. I can then restart Jenkins, re-enable SCM polling, and allow normal service to resume.
The SCM we are using is Perforce.
I have not found anything to suggest the above is possible; however, I am sure it must be feasible in System Groovy ... just not sure how. Does anyone here have any ideas, please?
Many Thanks

You could disable only those jobs which have an SCM polling trigger. This groovy script will do that:
Hudson.instance.items.each { job ->
    if ( job.getTrigger( hudson.triggers.SCMTrigger ) != null ) {
        println "will disable job ${job.name}"
        job.disable()
    }
}
Re-enabling the jobs will be left as an exercise : )
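For completeness, a minimal re-enable counterpart (a sketch for the same script console context, untested here): it blindly re-enables every disabled job that has an SCM polling trigger, including any that were disabled for other reasons, so check the printed names first.

```groovy
// Hedged sketch: re-enable disabled jobs that carry an SCM polling trigger.
Hudson.instance.items.each { job ->
    if ( job.getTrigger( hudson.triggers.SCMTrigger ) != null && job.disabled ) {
        println "will re-enable job ${job.name}"
        job.enable()
    }
}
```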

For Jenkins 2.204.1 and later, a version of the Groovy script console approach that disables all jobs with an SCM trigger:
Jenkins.instance.getAllItems(Job.class).each { job ->
    if ( job.getSCMTrigger() != null ) {
        println "will disable job ${job.name}"
        job.setDisabled(true)
    }
}

Try the following Groovy script to comment out SCM polling:
// WARNING: Use at your own risk! Without any warranty!
import hudson.triggers.SCMTrigger
import hudson.triggers.TriggerDescriptor

// from: https://issues.jenkins-ci.org/browse/JENKINS-12785
TriggerDescriptor SCM_TRIGGER_DESCRIPTOR = Hudson.instance.getDescriptorOrDie(SCMTrigger.class)
assert SCM_TRIGGER_DESCRIPTOR != null

MAGIC = "#MAGIC# "

// comment out SCM trigger
def disable_scmpoll_trigger(trig) {
    if ( !trig.spec.startsWith(MAGIC) ) {
        return new SCMTrigger(MAGIC + trig.spec)
    }
    return null
}

// re-enable commented-out SCM trigger
def enable_scmpoll_trigger(trig) {
    if ( trig.spec.startsWith(MAGIC) ) {
        return new SCMTrigger(trig.spec.substring(MAGIC.length()))
    }
    return null
}
Hudson.instance.items.each { job ->
    //println("Checking job ${job.name} of type ${job.getClass().getName()} ...")
    // from https://stackoverflow.com/a/39100687
    def trig = job.getTrigger( hudson.triggers.SCMTrigger )
    if ( trig == null ) return
    println("Job ${job.name} has SCMTrigger: '${trig.spec}'")
    SCMTrigger newTrig = disable_scmpoll_trigger(trig)
    // SCMTrigger newTrig = enable_scmpoll_trigger(trig)
    if ( newTrig != null ) {
        newTrig.ignorePostCommitHooks = trig.ignorePostCommitHooks
        newTrig.job = job
        println("Updating SCMTrigger '${trig.spec}' -> '${newTrig.spec}' for job: ${job.name}")
        job.removeTrigger(SCM_TRIGGER_DESCRIPTOR)
        job.addTrigger(newTrig)
        job.save()
    }
}
return ''
To enable SCM polling again, just swap the comments on these two lines:
//SCMTrigger newTrig = disable_scmpoll_trigger(trig)
SCMTrigger newTrig = enable_scmpoll_trigger(trig)
Tested on Jenkins ver. 2.121.3
Known limitations:
supports a single-line "Schedule" (spec property) only

If only one or two builds are configured to do SCM polling, you can go into the configuration of each build and uncheck the box. It is that simple :)
If you are using Jenkins Job Builder, it should be even easier to change the configuration of several jobs at a time.
Whether on the slaves or on the master, SCM polling depends on Java, so you could temporarily remove Java from the machine where it is configured; the polling will then fail. This is a stupid hack ;)
I hope that helps!

Related

Jenkins pipeline with a conditional trigger

My Jenkins pipeline is as follows:
pipeline {
    triggers {
        cron('H */5 * * *')
    }
    stages {
        stage('Foo') {
            ...
        }
    }
}
The repository is part of a Github Organization on Jenkins - every branch or PR pushed results in a Jenkins job being created for that branch or PR.
I would like the trigger to only be run on the "main" branch because we don't need all branches and PRs to be run on a cron schedule; we only need them to be run on new commits which they already do.
Is it possible?
Yes, it's possible. To schedule a cron trigger only for a specific branch, you can do it like this in your Jenkinsfile:
String cron_string = (scm.branches[0].name == "main") ? 'H */5 * * *' : ''

pipeline {
    triggers {
        cron(cron_string)
    }
    // whatever other code, options, stages etc. is in your pipeline ...
}
What it does:
Initializes a variable based on the branch name. For the main branch it sets the requested cron configuration; otherwise there is no scheduling (an empty string is set).
Uses this variable within the pipeline.
Further comments:
it's possible to use this with parameterizedCron as well (in case you want or need to).
you can also use other variables for getting the branch name, e.g. env.BRANCH_NAME instead of scm.branches[0].name. Whatever fits your needs...
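For the parameterizedCron case mentioned above, a sketch of what it could look like, assuming the Parameterized Scheduler plugin is installed; the parameter name DEPLOY_ENV and its value are hypothetical:

```groovy
// Schedule with a default parameter value on main only (sketch).
String cron_string = (env.BRANCH_NAME == 'main') ? 'H */5 * * * %DEPLOY_ENV=staging' : ''

pipeline {
    triggers {
        parameterizedCron(cron_string)
    }
    // ... stages as before ...
}
```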
This topic and solution is discussed also in Jenkins community: https://issues.jenkins.io/browse/JENKINS-42643?focusedCommentId=293221#comment-293221
EDIT: actually a similar question that leads to the same configuration - here on Stack: "Build Periodically" with a Multi-branch Pipeline in Jenkins
You can simply add a when condition to your pipeline.
when { branch 'main' }
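Note that when { branch 'main' } only skips stages after a build has already been triggered; combined with the cron trigger it might look like this (a sketch; the stage content is assumed):

```groovy
pipeline {
    agent any
    triggers {
        cron('H */5 * * *')
    }
    stages {
        stage('Foo') {
            // other branches still trigger on the schedule, but skip this stage
            when { branch 'main' }
            steps {
                echo 'running scheduled work on main'
            }
        }
    }
}
```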

Can scheduled (cron) Jenkins jobs access previous job status

Below is a simplified case.
I have one node named comp01. And I have a Jenkins job named Compatibility.
Compatibility is scheduled as follows:
0 12 * * 1 %IntegrationNode=Software_1
0 17 * * 1 %IntegrationNode=Software_2
0 22 * * 1 %IntegrationNode=Software_3
0 2 * * 2 %IntegrationNode=Software_4
0 7 * * 2 %IntegrationNode=Software_5
The jobs start as scheduled. But sometimes, because of some verification failure, the previous job takes longer than expected, so the next job starts before the previous job has completed.
Is there a way available in Jenkins in which the next scheduled job stays in the queue until the previous job is complete? Or can we schedule based on the previous job's status?
We have tried limiting executors for this job, but when more than a couple of jobs are queued, then the expected behavior is not observed.
We have also tried by creating resource-groups and adding multiple nodes to it, but still, expected behavior is not observed when multiple jobs are in queue.
EDIT-1:
We can't use options { disableConcurrentBuilds() } since we start the job concurrently on different nodes. Here we are struggling to ensure that when a job is started on a node, the other scheduled jobs for the same node wait till the current job completes.
Have you tried setting the below option?
options { disableConcurrentBuilds() }
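For context, this option sits at the top level of a Declarative Pipeline; a minimal sketch (stage content assumed):

```groovy
pipeline {
    agent any
    options {
        // queue new builds instead of running them alongside an in-progress one
        disableConcurrentBuilds()
    }
    stages {
        stage('Verify') {
            steps {
                echo 'long-running verification'
            }
        }
    }
}
```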
Update
AFAIK there is no OOB solution for your problem, but you can definitely implement something. Without seeing your actual Pipelines I can't give a concrete answer, but here are some options.
Option 01
Use Lockable Resources and create a resource per Jenkins IntegrationNode, acquiring it when running the job; the next build will wait until the lock is released.
lock(resource: 'IntegrationNode1', skipIfLocked: false) {
    echo "Run your logic"
}
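Since the question schedules the same job with different IntegrationNode parameter values, the lock name can be derived from the parameter so that only builds for the same value serialize while others run in parallel (a sketch; the parameter name is taken from the question, the resource naming scheme is an assumption):

```groovy
// One lock per IntegrationNode value: two Software_1 builds queue behind
// each other, while a Software_2 build proceeds immediately.
lock(resource: "IntegrationNode-${params.IntegrationNode}") {
    echo "Running verification for ${params.IntegrationNode}"
}
```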
Option 02
You can implement a waiting logic to check the status of the previous build. Here is a sample Pipeline and possible Groovy code you can leverage.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Waiting"
                    def jobName = "JobA"
                    def buildNum = "92"
                    waitUntil { !isPending(jobName, buildNum) }
                    echo "Actual Run"
                }
            }
        }
    }
}

def isPending(def JobName, def buildNumber) {
    def buildA = Jenkins.instance.getItemByFullName(JobName).getBuild(buildNumber)
    return buildA.isInProgress()
}

Jenkins build frequency based on previous build status

I have a Jenkins pipeline which is scheduled to trigger every 4 hours. However, my requirement is that once the build fails, I want the builds to happen more frequently and keep sending constant reminders that the build is broken. In short, the build schedule must depend on the status of the previous build.
Is that possible in Jenkins?
Thanks,
In a Scripted Pipeline, you can do something like this:
def triggers = []
if (currentBuild.getPreviousBuild().result != 'SUCCESS') {
    triggers << cron('0 */1 * * *') // every hour
} else {
    triggers << cron('0 */4 * * *') // every 4 hours
}
properties([
    pipelineTriggers(triggers)
])
node {
    ...
}
I can't think of a direct way, but you can have a workaround. You can have a replica of the same job (let's call it job 'B') and trigger it when the build of the first job (job 'A') fails. If B fails again, retrigger it (adding some wait time) and send a notification after it fails; keep doing this until it passes. This is much easier if you are using a scripted Jenkins pipeline. Hope this answer helps you in some way.
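A rough sketch of that workaround in scripted pipeline, assuming job A's pipeline and a replica job named 'B' (both names hypothetical):

```groovy
// In job A: if the build fails, fire the replica job B without waiting,
// so B can keep retrying and nagging until the build is fixed.
node {
    try {
        // ... actual build steps ...
        sh 'make test'
    } catch (err) {
        build job: 'B', wait: false
        throw err   // keep job A marked as failed
    }
}
```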

Check if Jenkins node is online for the job, otherwise send email alert

Having a Jenkins job dedicated to a special node, I'd like to have a notification if the job can't be run because the node is offline. Is it possible to set up this functionality?
In other words, the default Jenkins behavior is waiting for the node if the job has been started when the node is offline ('pending' job status). I want to fail (or don't start at all) the job in this case and send 'node offline' mail.
This node checking stuff should be inside the job because the job is executed rarely and I don't care if the node is offline when it's not needed for the job. I've tried external node watching plugin, but it doesn't do exactly what I want - it triggers emails every time the node goes offline and it's redundant in my case.
I found an answer here.
You can add a command-line or PowerShell block which invokes the curl command and processes the result:
curl --silent $JENKINS_URL/computer/$JENKINS_NODENAME/api/json
The resulting JSON contains an offline property with a true/false value.
I don't think checking if the node is available can be done inside the job (e.g. JobX) you want to run. The act of checking, specifically for your JobX at time of execution, will itself need a job to run; I don't know of a plugin or configuration option that will do this. JobX can't check if the node is free for JobX.
I use a lot of flow jobs (in the process of converting to pipeline logic) where JobA triggers JobB; thus JobA could run on the master, check the node for JobB (JobX in your case), and trigger it if up.
JobA would need to be a freestyle job that runs an 'Execute system Groovy script > Groovy command' build step. The Groovy code below is pulled together from a number of working examples, so it is untested:
import hudson.model.*
import hudson.AbortException
import java.util.concurrent.CancellationException

def allNodes = jenkins.model.Jenkins.instance.nodes
def triggerJob = false

for (node in allNodes) {
    if ( node.getComputer().isOnline() && node.nodeName == "special_node" ) {
        println node.nodeName + " " + node.getComputer().countBusy() + " " + node.getComputer().getOneOffExecutors().size
        triggerJob = true
        break
    }
}

if (triggerJob) {
    println("triggering child build as node available")
    def job = Hudson.instance.getJob('JobB')
    def anotherBuild
    try {
        def params = [
            new StringParameterValue('ParamOne', '123'),
        ]
        def future = job.scheduleBuild2(0, new Cause.UpstreamCause(build), new ParametersAction(params))
        anotherBuild = future.get()
    } catch (CancellationException x) {
        throw new AbortException("${job.fullDisplayName} aborted.")
    }
} else {
    println("failing parent build as node not available")
    build.getExecutor().interrupt(hudson.model.Result.FAILURE)
    throw new InterruptedException()
}
To get the node offline email, you could just trigger a post build action to send emails on failure.

Jenkins - how to show downstream jobs build result on Gerrit patch set?

Below is my use case:
I have jobs A, B and C; A is upstream and B and C are downstream jobs. When a patch set is created in Gerrit, the patchset-created event triggers job A, and based on the result of that job we trigger B and C. After B and C have executed, I want to display the result of all three jobs on the Gerrit patch set, like:
Job A SUCCESS
JOB B SUCCESS
JOB C FAILED
Right now I see only the JOB A build result showing up on the Gerrit patch set, as:
JOB A SUCCESS
Is there any way to do this?
Do the following:
1) Configure all jobs (A, B and C) to trigger when the patch set is created.
2) Configure the jobs B and C to depend on job A
2.1) Click on "Advanced..." in Gerrit Trigger job configuration
2.2) Add the job A on the "Other jobs on which this job depends" field
With this configuration jobs B and C will wait for job A finish before they start and you'll get the result you want:
The best way to solve this is to create a small wrapper pipeline job. Let's name it Build_ABC.
Configure Build_ABC to trigger on the Gerrit event you wish. The job will be responsible for running the other three builds, and in the event of any failures in those jobs Build_ABC will fail and report this back to Gerrit. You will not be able to see immediately which job failed in your Gerrit message, but you will be able to see it in your Jenkins pipeline overview.
In the scripted pipeline script below you see a pipeline that calls Build_A and waits for the result. If the build succeeds it will continue to execute Build_B and Build_C in parallel. In my example I made Build_C fail, which caused the whole pipeline job to fail.
This is a revised version of my first answer and the script has grown a bit. As it is required to have the individual build results in the message posted to Gerrit, the pipeline has been changed to catch the individual results and record them. If build A fails, builds B and C will be skipped and their status will be "Skipped".
Next, it is possible to use the gerrit review ssh command-line tool to perform a manual review. This way it is possible to generate a custom message that includes the individual build results. It looks like the screenshot below:
I haven't figured out how to make it a multi-line comment, but there is also an option to use JSON on the command line; have a look at that.
def build_a = "Skipped"
def build_b = "Skipped"
def build_c = "Skipped"
def build_result = "+1"

try {
    stage("A") {
        try {
            build( job: '/Peter/Build_A', wait: true)
            build_a = "Pass"
        } catch (e) {
            build_a = "Failed"
            // re-throw, otherwise the build would not be marked as failed;
            // throwing here also prevents B+C from running
            throw e
        }
    }
    stage("After A") {
        parallel B: {
            try {
                build( job: '/Peter/Build_B', wait: true)
                build_b = "Pass"
            } catch (e) {
                build_b = "Failed"
                // re-throw, otherwise the build would not be marked as failed
                throw e
            }
        }, C: {
            try {
                build( job: '/Peter/Build_C', wait: true)
                build_c = "Pass"
            } catch (e) {
                build_c = "Failed"
                // re-throw, otherwise the build would not be marked as failed
                throw e
            }
        }
    }
} catch(e) {
    build_result = "-1"
    // re-throw, otherwise the build would not be marked as failed
    throw e
} finally {
    node('master') {
        // Perform a custom review using the environment vars
        sh "ssh -p ${env.GERRIT_PORT} ${env.GERRIT_HOST} gerrit review --verified ${build_result} -m '\"Build A: ${build_a} Build B: ${build_b} Build C: ${build_c}\"' ${env.GERRIT_PATCHSET_REVISION}"
    }
}
Next you should configure the Gerrit trigger to ignore the results from Jenkins, or else there will be a double vote.
One more advantage is that with the Blue Ocean plugin you can get a nice graphical representation (see the image below) of your build, and you can examine what went wrong by clicking on the jobs.