I have a declarative Jenkinsfile pipeline job that runs on the master node and executes some stages on other nodes. When I run many instances of this pipeline, they completely fill the master node's executor limit, yet those jobs consume no real resources; they only wait for their stages to finish.
Is there a way to bypass the node executor limit for specific pipelines, or should I handle it another way?
Thanks in advance.
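One common mitigation (a sketch, assuming the pipeline currently declares a top-level agent on master) is to declare agent none at the pipeline level, so the coordinating part of the build only occupies a flyweight executor, and request a heavyweight executor per stage. Stage names and labels below are illustrative placeholders:

```groovy
// Sketch: 'linux' and 'windows' labels and the shell commands are assumptions.
pipeline {
    agent none                        // no heavyweight executor held for the whole build

    stages {
        stage('Build') {
            agent { label 'linux' }   // executor is only consumed during this stage
            steps {
                sh 'make'             // placeholder build step
            }
        }
        stage('Test') {
            agent { label 'windows' }
            steps {
                bat 'run-tests.cmd'   // placeholder test step
            }
        }
    }
}
```

Between stages, no executor slot on the master is held, so many concurrent runs of the pipeline no longer exhaust the master's executor limit while waiting.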
Does anyone have an idea how to limit the number of concurrent builds on Jenkins for a multi-branch pipeline?
I searched around, but almost every approach boils down to putting properties([disableConcurrentBuilds()]) in the pipeline, which doesn't work in my case.
We use a resource lock for unit tests so they don't overlap. In my case, no matter how many build executors I have, builds just wait for the lock to be released and continue one by one, each reserving a build slot instead of waiting in the queue. I found a similar post:
Jenkins limit multibranch
For GitLab pipelines, use the resource_group parameter to create a resource group that ensures a job is mutually exclusive across different pipelines for the same project:
https://docs.gitlab.com/ee/ci/yaml/#resource_group
For Jenkins: the Build Blocker plugin and its "Block build if certain jobs are running" option:
https://plugins.jenkins.io/build-blocker-plugin/
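On the Jenkins side, a Jenkinsfile analogue of GitLab's resource_group is the Lockable Resources plugin's lock step. A sketch (the resource name, label, and script are illustrative placeholders): acquiring the lock outside the node block means builds waiting for the lock do not reserve an executor slot while they wait, which addresses the "reserving a build slot instead of waiting in the queue" problem:

```groovy
// Sketch: 'shared-test-db' is a hypothetical resource name.
lock(resource: 'shared-test-db') {
    node('linux') {                  // executor is only requested once the lock is held
        checkout scm
        sh './run-unit-tests.sh'     // placeholder test command
    }
}
```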
I am setting up Jenkins locally without SCM.
I have several pipelines with the same Jenkinsfile but with different parameters.
I would like to centralize the Jenkinsfile for all the pipelines.
I found a solution originally for a scripted pipeline:
node {
    load "/path/to/local/jenkinsFile.groovy"
}
The problem here is that Jenkins needs two executors to run the previous pipeline (one for the node block doing the load, the other to run the loaded pipeline), and neither is released before the end of the pipeline.
Since I have multiple cron-triggered pipelines, at some point all the agents are busy loading pipeline files, but there is no available agent left to run each loaded pipeline, so Jenkins is stuck and the job queue bottlenecks.
Solutions I imagine:
How can I release the agent after the load of the pipeline?
Is there a way for the first step (loading the Jenkinsfile) to keep its agent for running the pipeline?
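One pattern that keeps everything on a single executor (a sketch, assuming you can restructure the shared file to return this and expose a method; the file contents, method name, and parameters below are illustrative) is to load and run the shared logic inside the same node block:

```groovy
// jenkinsFile.groovy -- hypothetical structure of the shared file:
// it defines methods and ends with 'return this' so the caller can invoke them.
def run(params) {
    stage('Build') {
        echo "Building with ${params}"   // placeholder work
    }
}
return this
```

```groovy
// Job definition: load and execute within ONE node block,
// so only a single executor is consumed for the whole run.
node {
    def shared = load '/path/to/local/jenkinsFile.groovy'
    shared.run([flavor: 'debug'])        // hypothetical per-job parameters
}
```

Each job passes its own parameters to the shared method, so the Jenkinsfile stays centralized without the second executor.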
I'm using a Jenkins multibranch pipeline for CI, and Spinnaker for CD.
I've gone through almost all the documents, support channels, etc. from Spinnaker looking for how to create a Spinnaker multibranch pipeline similar to the Jenkins one, but didn't find anything.
After integrating Jenkins with Spinnaker, the drop-down list of Jenkins jobs in the Spinnaker pipeline configuration shows every multibranch job separately. Hence, for each branch I'd need to go to Spinnaker and create a pipeline manually.
To solve this, I'm considering the following: while running the Jenkins multibranch pipeline job, create the Spinnaker pipeline (if it doesn't exist) using the spin CLI with the required parameters (branch, version, a trigger on this running branch job, etc.), and then trigger that same Spinnaker pipeline after the Jenkins job has executed.
Please advise if there is a better way to accomplish this.
Thanks.
I am not super familiar with the multibranch plugin, but you can make this simpler by doing [ triggers ] -> [ pipeline stage calling the same pipeline ] rather than invoking the entire pipeline via the spin CLI.
Alternatively, if the list of generated jobs is small or well known, you could just update the list of triggers for the same pipeline programmatically as part of your release process.
i.e., in your Jenkins job:
add this job to the list of triggers
run the rest of the Jenkins job
when the job finishes, the Spinnaker pipeline triggers
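The create-if-missing flow from the question could be sketched in the Jenkinsfile itself by shelling out to the spin CLI. A hedged sketch: the template file name, substitution mechanism, application name, and exact spin flags are assumptions to check against your spin version; spin pipeline save creates or updates a pipeline from a JSON definition:

```groovy
// Sketch: file names and spin arguments are illustrative, not verified.
node {
    stage('Ensure Spinnaker pipeline') {
        // Render a per-branch pipeline definition from a template, then save it.
        sh "sed 's/__BRANCH__/${env.BRANCH_NAME}/g' spinnaker-template.json > pipeline.json"
        sh 'spin pipeline save --file pipeline.json'
    }
    stage('Trigger Spinnaker pipeline') {
        // 'myapp' and the pipeline naming scheme are hypothetical.
        sh "spin pipeline execute --application myapp --name deploy-${env.BRANCH_NAME}"
    }
}
```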
I am using a Groovy pipeline script for a build job, so my Jenkins pipeline looks like:
node {
    git url: 'myurl.git'
    load 'mydir/myfile.groovy'
}
It works as expected, but the build executor status shows it as two jobs running.
Why is one job shown as two jobs with the same name?
Is there something I missed telling Jenkins about the pipeline job?
I can't find a better documentation source than this README (issue JENKINS-35710 also has some information), but the short of it is that the Groovy pipeline script itself executes on the master (on a flyweight executor) while node blocks run on a regularly allocated executor.
Here is a relevant snippet taken from the linked documentation:
[...]
Why are there two executors consumed by one Pipeline build?
Every Pipeline build itself runs on the master, using a flyweight executor — an uncounted slot that is assumed to not take any significant computational power.
This executor represents the actual Groovy script, which is almost always idle, waiting for a step to complete.
Flyweight executors are always available.
There is a requirement in our CI setup (we are using Jenkins as CI) where we need to lock a slave until a particular or high-priority job finishes on Jenkins, to ensure that no other jobs run on that slave.
Once the job starts execution, how can we ensure that no other jobs run on that slave?
Are there any Jenkins plugins that can lock the slave while a job is running and then release the slave node for other jobs once the job finishes (whether it fails or succeeds)?
Use the Heavy Job plugin (https://wiki.jenkins-ci.org/display/JENKINS/Heavy+Job+Plugin) and set the job weight equal to the number of executors on your slave. Then no other job can run on that slave while yours does.