Jenkins Job DSL causes a branch scan for multibranchPipelineJob jobs on every run, even if there are no changes

I think this may be related to the "id" field. Previously we weren't setting it, and we had issues where the multibranch jobs reindexed all branches as new. That's fixed now, but there is another issue.
Every time the Job DSL seed job runs, it kicks off a branch scan for every one of our multibranchPipelineJobs.
Why does this happen? Is there a way to prevent it? For a few jobs it's not a big deal, but we have almost 200 multibranchPipelineJobs, so a huge branch scan queue builds up every time the seed job is run. Also, according to CloudBees, there is no way to increase the number of scan jobs Jenkins processes at a time, so it always takes forever to burn down.
This is stupid, am I doing something wrong? It happens even when there are no changes, and frankly I don't think it should happen even when there are. I notice that if I modify the config of a Jenkins job in the UI and save it, that usually kicks off a branch scan too, so maybe this is just Jenkins behavior?
It seems like the ugliest way to handle this, but could the Job DSL cancel queued scan jobs for the jobs it just configured, without affecting scan jobs unrelated to the seed run?
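For context, here is a minimal Job DSL sketch of the kind of definition involved, with a stable id() on the branch source (the fix mentioned above for the reindexing-as-new problem). The job name, remote URL, credentials ID and trigger interval are placeholders, and the exact generated-DSL method names can vary with plugin versions:

```groovy
// Sketch of a seed-managed multibranch job; all names and URLs are placeholders.
multibranchPipelineJob('my-app') {
    branchSources {
        git {
            // Keep this id constant across seed runs; changing it makes
            // Jenkins treat the branch source as new and reindex all branches.
            id('my-app-git-source')
            remote('https://git.example.com/org/my-app.git')
            credentialsId('git-credentials')
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(10)
        }
    }
    triggers {
        // Periodic re-scan as a safety net (generated DSL from the Folders plugin).
        periodicFolderTrigger {
            interval('1d')
        }
    }
}
```

As long as the id stays the same across seed runs, the branch source is treated as the same source; the scan-on-every-seed-run behaviour described above is a separate issue from that.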

Related

Give a scheduled jenkins job temporary priority

I'm working on a busy Jenkins server with many big matrix jobs being built all the time. Working with another job can be really annoying if you want instantaneous results... Is there an option to give a job temporary priority for just one build so it skips the queue and gets built on the next available executor?
You can try using the Priority Sorter Plugin and give your jobs a higher priority than the rest.

How may I configure a Jenkins job to run at a specific time if an upstream job succeeds?

My use case:
Job A is set to run Monday through Friday at 18:00.
Job B is dependent upon Job A succeeding but should only run Monday through Friday at 06:00. (Monday morning's run would depend upon Friday evening's run). I prefer set times rather than delays between jobs.
On any given morning, if I see that Job A failed (thus Job B never ran), I would like to be able to run (fix) Job A then immediately trigger Job B.
What I have found so far only offers part of this use case. I have tinkered with Pipeline and recently upgraded my Jenkins instance to 2.89.3, so I have access to the most recent features and plugins. Filesystem triggering seems doable.
Any suggestions are appreciated.
You can use the options available in "Build Triggers".
Ex: (screenshot of the Build Triggers configuration section)
Hope this works for you!
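Since the question mentions running Jenkins 2.89.3 with Pipeline available, here is a hedged sketch of what those "Build Triggers" options look like as a declarative triggers block; note that the two triggers fire independently of each other, which is why the approach in the next answer is still needed to tie them together. The job name is a placeholder:

```groovy
// Declarative sketch of the "Build Triggers" options for Job B.
// 'Job-A' is a placeholder; the two triggers are independent of each other.
pipeline {
    agent any
    triggers {
        // Run at 06:00 Monday-Friday (H spreads the exact minute).
        cron('H 6 * * 1-5')
        // Also run whenever Job-A completes successfully.
        upstream(upstreamProjects: 'Job-A', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Build') {
            steps {
                echo 'Job B work goes here'
            }
        }
    }
}
```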
This is a tricky use case, as generally you want a job to follow on immediately from another one rather than waiting for potentially three days.
It's further complicated by wanting to be able to run it straight away on demand.
I do not believe there is an "I have finished, so kick this job at this time" downstream trigger. So for the first part, the only things I can think of are:
Either Job A kicks Job B as soon as it is finished, and Job B sits there with a time checker and starts its task when the time matches;
Or Job A archives a file with its exit status as an artefact, and Job B has a cron trigger for 6am Mon-Fri that picks up this artefact and then runs or doesn't, depending on the file contents.
For the second part, you could get the build cause (see "how to get $CAUSE in workflow" for a pipeline implementation, and vote on https://issues.jenkins-ci.org/browse/JENKINS-41272 to get the feature when using the sandbox).
Then get your pipeline to behave differently depending on the trigger.
I.e. if you went for the second option above, then in Job B you could: if triggered by cron, read the artefact and act as needed; if triggered by upstream, just run regardless (a sketch follows below).
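A rough scripted-pipeline sketch of that second option, assuming the Copy Artifact plugin is installed and a reasonably recent Pipeline version (for currentBuild.getBuildCauses()); the job name, artefact file name and its contents are placeholders:

```groovy
// Sketch for Job B: run at 06:00 Mon-Fri via cron, or immediately when
// triggered by Job A; when cron-triggered, only proceed if Job A left a
// "SUCCESS" flag artefact. Names and file contents are placeholders.
node {
    // Was this build started by the timer, or by an upstream/manual trigger?
    def timerCauses = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
    def startedByCron = !timerCauses.isEmpty()

    if (startedByCron) {
        // Pull the status flag that Job A archived as an artefact.
        copyArtifacts projectName: 'Job-A', selector: lastSuccessful(), filter: 'status.txt'
        def status = readFile('status.txt').trim()
        if (status != 'SUCCESS') {
            echo "Job A did not succeed (status: ${status}); skipping this run."
            return
        }
    }

    stage('Do the real work') {
        echo 'Job B work goes here'
    }
}
```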

Jenkins: how to block a job to make it unrunnable

This is not just another question about concurrent job execution in Jenkins. The problem I have is that there are several jobs that run independently of one another. When they finish, it should be possible to run a manual job. The condition, though, is that all of those automated jobs should be in a successful state. Otherwise it should not be possible to run this manual job. It should also not be possible to run, or even schedule a run of, this manual job while those other jobs are running.
I searched for the answer everywhere and checked every possible plugin that handles synchronization, but I could not figure out how to solve the above problem.
IMHO the delivery pipeline plugin (see https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin for the download and http://www.infoq.com/articles/orch-pipelines-jenkins for a thorough description) could do what you want.
You can run a lot of jobs (in parallel or not) and, when (and only when) they succeed, trigger another job (or several). You can even add manual steps (requiring a button click before the pipeline may continue).
Everything is configurable - and quite stable at this moment.
No-one should be able to manually (or otherwise) start a job that is in "waiting state" for other jobs to finish.
Regarding this question:
Otherwise it should not be possible to run this manual job. It should also not be possible to run, or even schedule a run of, this manual job while those other jobs are running.
You can use the Throttle Concurrent Builds Plugin and create a category which includes your automated jobs and the manual jobs.
If one of the automated jobs is running, it will be impossible to launch the manual jobs (see the sketch below).
Regarding your first question, did you have a look at the Join plugin?
Cheers
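If the manual job is (or can be) a Pipeline job, the same plugin also provides a throttle step; a minimal sketch, assuming a category named 'my-automated-jobs' has already been defined in the global Throttle Concurrent Builds configuration and assigned to the automated jobs:

```groovy
// Scripted sketch of the manual job, throttled against the same category
// as the automated jobs ('my-automated-jobs' is an assumed category name).
throttle(['my-automated-jobs']) {
    // While any job in the category holds an executor, this node block waits.
    node {
        stage('Manual step') {
            echo 'Runs only when no automated job in the category is running'
        }
    }
}
```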
https://wiki.jenkins-ci.org/display/JENKINS/Promoted+Builds+Plugin can also be an option. Set up promotions so that manual approval is required and the promotion can only succeed once the automated jobs are done.

Blocking a triggered Jenkins job until something *outside* Jenkins is done

I have a Jenkins job which starts a long-running process outside of Jenkins. The job itself is triggered by Gerrit.
If this job is triggered again while the long-running process is ongoing, I need to ensure that the job remains on the Jenkins queue until said process has completed. Effectively I want to ensure that the job never runs in parallel with itself, with the wrinkle that "the job" is really the Jenkins job plus the external long-running process.
I can't find any way to achieve this. The External Resource Dispatcher plugin seems like it could work, but every time I've configured it on our system, Jenkins became extremely unstable (refusing page loads for minutes on end, slave threads dying with NPEs). Everything else I can see, such as the Exclusions plugin, depends on Jenkins itself controlling the entirety of the job.
I've tried hacking something together with node labels - having the job depend on a label "can_run", assigning that label to master, and then having the job execute a Groovy script that removes that label from master. (Theoretically there would be another Jenkins job that adds the label back, which would be triggered by the end of the long-running process.) But it didn't work: if there were any queued instances of the job on Jenkins, they went ahead and started right away even though the label had been removed.
I don't know what else to try! Is there anything other than a required node label being missing which will cause Jenkins to queue the job if it is triggered, but not start it?
I guess the long-running process is triggered and your job returns immediately, which makes it an async process, right? I would suggest you handle the long-running-process detection and waiting logic in your trigger process: every time, before you trigger the job, check whether the long-running process is running; if it isn't, trigger the job.
Actually, I am not quite getting what you are trying to do. Basically, because of that long-running process, it is impossible for you to run two of these jobs in parallel. If that is true, make it a non-parallel job.
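For what it's worth, here is a rough alternative sketch using the Lockable Resources plugin and a completion-marker file: the build takes a lock, starts the external process and keeps the lock (and an executor) until the marker appears, so any further Gerrit-triggered builds queue up behind the lock. The resource name, script and marker path are assumptions, and holding an executor for the whole duration is the trade-off:

```groovy
// Sketch: serialize "Jenkins job + external long-running process" behind one
// lockable resource. 'external-process' is an assumed resource name; the
// start script and marker file are placeholders.
lock(resource: 'external-process') {
    node {
        // Kick off the external long-running process.
        sh './start-long-running-process.sh'

        // Keep the lock (and this executor) until the external process drops
        // its completion marker; waitUntil polls with a growing back-off.
        // Subsequent triggers of this job queue at lock() without an executor.
        waitUntil {
            fileExists('long-running-process.done')
        }
    }
}
```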

Trigger jobs without occupying workers

We have designed our test jobs as a sort of "abstract" test job that runs according to a set of parameters. These jobs are triggered by "runner" jobs that simply invoke them with the correct parameters (mostly generated by matrix jobs).
When we run multiple "runners", whose only purpose is to trigger the abstract jobs, they occupy much-needed workers (especially with matrix jobs, which create multiple temporary "runner" jobs).
Is there a way to tell Jenkins not to spend a worker on a job that only triggers other jobs, or to trigger jobs from within the same worker?
It depends on what you use to trigger the jobs.
If you use the "Trigger/call builds on other projects" action, it has an option to "Block until the triggered projects finish their builds". If that is checked, the triggering parent job keeps running, waiting for the triggered job to finish (thus occupying at least two executors). If you leave it unchecked, it will launch the triggered job, and the triggering job will end soon after.
I want my builds to wait until triggered jobs are completed, for reporting purposes and such (I don't want that logic in the triggered jobs due to their abstract nature).
What I decided to do, since the triggering jobs are very lightweight, was to restrict them all to the master. I allocated a large number of workers to the master, since these jobs won't do much work and will simply manage the triggering of other jobs.
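For completeness, a hedged sketch of how a "runner" can avoid tying up a worker if it is written as a Pipeline job: with agent none, the build step waits for the downstream job on the flyweight executor only. The job and parameter names are placeholders:

```groovy
// Declarative sketch of a lightweight "runner": no executor is held while
// the build step waits for the abstract job. Names are placeholders.
pipeline {
    agent none
    stages {
        stage('Trigger abstract test job') {
            steps {
                // wait: true blocks until 'abstract-test-job' finishes and
                // propagates its result, but without occupying a worker here.
                build job: 'abstract-test-job',
                      wait: true,
                      parameters: [string(name: 'TEST_SUITE', value: 'smoke')]
            }
        }
    }
}
```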

Resources