Trigger jobs without occupying workers - Jenkins

We have designed our test jobs as a sort of "abstract" test job that runs according to a set of parameters. These jobs are triggered by "runner" jobs that simply invoke them with the correct parameters (mostly generated by matrix jobs).
When we run multiple "runners", whose only task is to trigger the abstract jobs, they occupy much-needed workers (especially when the runner is a matrix job, which creates multiple temporary "runner" jobs).
Is there a way to tell Jenkins not to spend a worker on a job that only triggers other jobs, or to trigger jobs within the same worker?

It depends on what you use to trigger the jobs.
If you use the Trigger/call builds on other projects action, it has an option to Block until the triggered projects finish their builds. If that is checked, the triggering parent job will keep running and wait for the triggered job to finish (thus occupying at least two executors). However, if you leave it unchecked, it will launch the triggered job and the triggering job will end soon after.

I want my builds to wait until the triggered jobs are completed, for reporting purposes and such (I don't want that logic in the triggered jobs due to their abstract nature).
What I decided to do, since the triggering jobs are very lightweight, was to restrict them all to the master. I allocated a large number of executors to the master, since these jobs don't do much work and simply manage the triggering of other jobs.
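If the runner jobs could be rewritten as Pipeline jobs, another option is to call the build step outside of any node {} block, so the waiting parent occupies only a flyweight executor on the controller rather than a worker slot. A minimal scripted Pipeline sketch (the downstream job name and parameter are hypothetical):

    // Scripted Pipeline "runner" sketch: because the build step is not wrapped
    // in a node {} block, the parent waits on a flyweight executor, not a worker.
    def downstream = build job: 'abstract-test',
                           parameters: [string(name: 'TEST_SUITE', value: 'smoke')],
                           wait: true
    echo "abstract-test finished with result ${downstream.result}"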

Related

I need to reorder jobs in the build queue which are blocked by the Block Queued Job Plugin

I have a job which requires external resources and therefore should not be executed more than once at a time. I used the Block Queued Job Plugin to block the job if any job from a list is currently running.
This sometimes creates a build queue with some jobs blocked by the plugin, which is correct.
But now I need to reorder the build queue to give a specific build a chance to be executed.
Normally the queue is just FIFO, but in specific situations I need to override this manually.
The Simple Queue Plugin cannot deal with blocked jobs.
The Priority Sorter Plugin seems to be outdated and does not work for such a simple thing.
Currently I write down the parameters handed over to each queued job, delete them all, and afterwards rebuild them in the new order with the parameters I wrote down.
This is quite bad, and I need a working solution. Maybe I missed the right plugin.
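A Script Console sketch of that manual workaround (the job name and the re-submission order, here simply reversed, are assumptions) could look like this:

    import hudson.model.ParametersAction
    import jenkins.model.Jenkins

    def jobName = 'resource-bound-job'              // hypothetical job name
    def queue   = Jenkins.instance.queue
    def job     = Jenkins.instance.getItemByFullName(jobName)

    // Snapshot the queued items that belong to this job, with their parameters
    def queuedItems = queue.items.findAll { it.task.name == jobName }
    def paramSets   = queuedItems.collect { it.getAction(ParametersAction) }

    // Cancel the existing queue entries ...
    queuedItems.each { queue.cancel(it) }

    // ... and re-submit them in the desired order (reversed here as an example)
    paramSets.reverse().each { params ->
        params != null ? job.scheduleBuild2(0, params) : job.scheduleBuild2(0)
    }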

Jenkins Job DSL causes a branch scan for multibranchPipelineJob jobs on every run, even if there are no changes

I think this may be related to the "id" field; previously we weren't setting it and had issues where the multibranch job re-indexed all branches as new. That's fixed now, but there is another issue.
Every time the Job DSL runs, it kicks off a branch scan for all of our multibranchPipelineJobs.
Why does this happen? Is there a way to prevent it? For a few jobs it's not a big deal, but we have almost 200 multibranchPipelineJobs, so a huge branch-scan queue builds up every time the seed job is run. Also, according to CloudBees, there is no way to increase the number of scan jobs Jenkins processes at a time, so it always takes forever to burn down.
This is stupid; am I doing something wrong? It happens even if there are no changes, and frankly I don't think it should happen even if there are. I notice that if I modify the config of a Jenkins job and save it, it usually kicks off a branch scan, so maybe this is just Jenkins behavior?
It seems like the ugliest way to handle this, but could the Job DSL kill the queued scan jobs for the jobs it just configured, without affecting other scan jobs that aren't related to the seed-job run?
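On the "id" field mentioned in the first paragraph: a minimal Job DSL sketch with a stable, explicit branch-source id (job name, repository URL and credentials id are made up) looks like this; keeping the id constant across seed runs at least prevents the source from being treated as brand new and re-indexing every branch:

    multibranchPipelineJob('example-service') {
        branchSources {
            git {
                id('example-service')        // keep this stable between seed runs
                remote('https://git.example.com/example-service.git')
                credentialsId('scm-credentials')
            }
        }
        orphanedItemStrategy {
            discardOldItems {
                numToKeep(20)
            }
        }
    }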

How to increase maximum concurrent jobs?

In my newly installed Jenkins, I have four jobs. I can only run two concurrently. If I trigger the build of a third job, it is placed in the queue and triggered once one of the first two finishes.
I know my server can handle more than two concurrent jobs at a time. How can I increase this default threshold of two?
If it means anything, these are not build-a-deployable package kind of jobs but environment prep jobs that instantiate various DBs. So the jobs simply invoke a python script on the Jenkins server, which is the same script across multiple jobs but each job invokes it with different input params. The jobs are 100% independent of one another and do not share any resource except the script.
Go to Manage Jenkins --> Configure System, then change the # of executors setting.
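If you prefer to script it, a Script Console equivalent of that UI change (the value 4 is only an example) is:

    import jenkins.model.Jenkins

    // Raise the executor count of the built-in (master) node and persist it
    Jenkins.instance.numExecutors = 4
    Jenkins.instance.save()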

How to run a job concurrently in Jenkins

I am using the Throttle Concurrent Builds plugin to run a job in parallel, but I am not able to run the job in parallel; only a single build is triggered.
In the job configuration I selected Throttle Concurrent Builds and specified Maximum Total (e.g. 4) and/or Maximum Per Node (e.g. 2).
I also selected the “Execute concurrent build if possible” option.
I have one master (2 executors) and one agent (2 executors) in Jenkins.
Kindly help me resolve this problem.
From the Throttle Concurrent Builds Plugin documentation:
It should be noted that Jenkins, by default, never executes the same Job in parallel, so you do not need to actually throttle anything if you go with the default. However, there is the option Execute concurrent builds if necessary, which allows running the same Job multiple times in parallel, and of course if you use the categories below, you will also be able to restrict multiple Jobs.
So you need to check that box, which I think might be under the advanced settings.
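As a sketch, the equivalent Job DSL configuration (the job name and shell step are hypothetical, and throttleConcurrentBuilds requires the Throttle Concurrent Builds plugin) would be something like:

    job('parallel-capable-job') {
        concurrentBuild()            // the "Execute concurrent builds if necessary" checkbox
        throttleConcurrentBuilds {
            maxTotal(4)
            maxPerNode(2)
        }
        steps {
            shell('echo "build #$BUILD_NUMBER running"')
        }
    }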

Blocking a triggered Jenkins job until something *outside* Jenkins is done

I have a Jenkins job which starts a long-running process outside of Jenkins. The job itself is triggered by Gerrit.
If this job is triggered again while the long-running process is ongoing, I need to ensure that the job remains on the Jenkins queue until said process has completed. Effectively I want to ensure that the job never runs in parallel with itself, with the wrinkle that "the job" is really the Jenkins job plus the external long-running process.
I can't find any way to achieve this. The External Resource Dispatcher plugin seems like it could work, but every time I've configured it on our system, Jenkins became extremely unstable (refusing page loads for minutes on end, slave threads dying with NPEs). Everything else I can see, such as the Exclusions plugin, depends on Jenkins itself controlling the entirety of the job.
I've tried hacking something together with node labels - having the job depend on a label "can_run", assigning that label to master, and then having the job execute a Groovy script that removes that label from master. (Theoretically there would be another Jenkins job that adds the label back, which would be triggered by the end of the long-running process.) But it didn't work: if there were any queued instances of the job on Jenkins, they went ahead and started right away even though the label had been removed.
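For reference, the label-removal step described above boils down to a system Groovy script along these lines (label name taken from the description; as noted, it does not stop builds that are already queued):

    import jenkins.model.Jenkins

    // The controller is itself a Node, so its label string can be edited directly
    def master = Jenkins.instance
    def labels = master.labelString.tokenize(' ')
    labels.remove('can_run')                        // drop the label the job requires
    master.setLabelString(labels.join(' '))
    master.save()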
I don't know what else to try! Is there anything other than a required node label being missing which will cause Jenkins to queue the job if it is triggered, but not start it?
I guess the long-running process is triggered and your job returns immediately, which makes it an asynchronous process, right? I would suggest handling the long-running-process detection and waiting logic in your trigger process: every time before you trigger the job, check whether the long-running process is running, and only trigger the job if it is not.
Actually I am not quite getting what you are trying to do. Basically, because of that long-running process, it is impossible for you to run two of these jobs in parallel. If that is true, make it a non-parallel job.
