I have a multibranch Jenkins pipeline which uses Docker containers. The issue I am having is that I don't want more than one branch to trigger the build at a time, because I use a Postgres DB in a container: when more than one branch starts, localhost port 5432 gets occupied by whichever branch build kicks in first, and the second branch fails.
Is there a way to avoid this in the Jenkinsfile, or any other way?
pipeline {
    agent any
    options { lock(resource: 'build-lock') }
    stages { ... }
}
Use this in your pipelines. At any given point in time, only one instance of the pipeline will execute, even in a multibranch pipeline.
For more info, see the Lockable Resources plugin.
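The same plugin also provides lock as a block-scoped step, so if only part of the build needs the database you can serialize just that stage. A minimal sketch; the resource name and shell command are illustrative:

stage('Integration tests') {
    steps {
        // Only this block is serialized; other stages still run in parallel across branches
        lock(resource: 'postgres-5432') {
            sh './run-integration-tests.sh'
        }
    }
}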
I would perhaps tackle your Postgres DB rather than try to solve it this way. Could your build pick a random port and spin up the required DB on alternate ports instead?
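For example (a hedged sketch, assuming a shell-capable agent with Docker; the container name, image tag and password are illustrative), you could let Docker pick a free host port and export it for the tests:

stage('Start database') {
    steps {
        script {
            // -p 5432 with no host part lets Docker map a free ephemeral host port
            sh 'docker run -d --name "pg-${BUILD_TAG}" -e POSTGRES_PASSWORD=test -p 5432 postgres:15'
            // Ask Docker which host port it chose for the container's 5432
            env.PG_PORT = sh(
                script: 'docker port "pg-${BUILD_TAG}" 5432/tcp | head -n1 | cut -d: -f2',
                returnStdout: true
            ).trim()
            echo "Postgres for this build is on localhost:${env.PG_PORT}"
        }
    }
}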
If you do want to try to limit concurrent builds...
1) You could limit the agent this repo runs on and give it only 1 executor. This would cause builds to queue while they wait.
2) If you wanted to do it programmatically, you would need to put a check in the pipeline to abort/wait the build if it finds current executions that match. I don't recommend this: if you're running in the sandbox you will likely need to approve script access, and since you would be digging under the hood, upgrades might break you if the interface gets refactored. But you would be looking at https://javadoc.jenkins-ci.org/hudson/model/Executor.html getCurrentExecutable(), or perhaps something like this https://github.com/cloudbees/jenkins-scripts/blob/master/get-build-information.groovy#L24 — a rough sketch follows.
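For completeness, a rough sketch of that programmatic check (again, not recommended; it assumes a scripted pipeline, needs script approval in the sandbox, and the internal API may change between core versions):

import jenkins.model.Jenkins

// Returns true if any other branch job under the same multibranch project is building.
// The project name and the @NonCPS placement are assumptions for this sketch.
@NonCPS
boolean otherBranchBuilding(String multibranchProjectName, String currentJobFullName) {
    def project = Jenkins.instance.getItemByFullName(multibranchProjectName)
    project.items.any { job ->
        job.isBuilding() && job.fullName != currentJobFullName
    }
}

// usage inside the pipeline:
// if (otherBranchBuilding('my-multibranch-project', env.JOB_NAME)) { /* wait or abort */ }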
I'm using a recent Jenkins version, 2.286, and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs are waiting for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
Which means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight code checkout is done on the Master node to get a Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restrictions plugin is still blocking this (I haven't used it before so cannot comment).
Also, certain pipeline actions are delegated back to the Master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the Master.
Unfortunately this cannot be changed as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
Following the Jenkins best practices, I want to avoid Build Jobs/Pipelines being executed on my Jenkins Master.
To do so, I've installed the Job Restrictions Plugin, using it to configure the Master to run only some Maintenance Pipelines.
The problem is that now Build Pipelines that are configured to run on specific Agents are not executed anymore. I see that the Build Queue continuously grows, and the Pipelines are not run. I think this behaviour could be related to the Flyweight Executors of the Master.
So, the question is the following: how can I execute just a small subset of Maintenance Pipelines on the Master and, at the same time, execute Build Pipelines only on specific Agents?
You can configure the master node to only be used when explicitly named. Just click the master node, go to Configure, and change "Use this node as much as possible" to "Only build jobs with label expressions matching this node".
I found the solution that perfectly fits my needs, here.
To quickly sum up the solution, I was able to exclude all the user Builds from the Master and run on it only the Jobs/Pipelines of a specific Jenkins folder (IuA in my case) by configuring the Job Restrictions Plugin accordingly.
In order to better understand the logic behind this solution, I recommend you take a look at the link that I posted above.
I am relatively new to Jenkins.
I created a declarative pipeline in Jenkins where users are asked to enter their branch name and then Jenkins builds that specific branch (for example, origin/mybranch).
This allows me to run a quick set of tests for specific branches.
The developers can run the pipeline multiple times; today I block multiple such pipelines from running simultaneously, because if they do, one overwrites the other.
This happens because the first pipeline writes to c:\Jenkins\workspace\QuickBuild, and when another such job runs, it writes to that exact same folder, killing the original run.
Blocking was the solution I found to prevent this, but I would like it so that when one run is finishing up (using fewer than 8 cores) the next run in the queue will already start running with whatever cores are freed up.
I would have thought this would be a basic concept of Jenkins.
Am I missing something? Am I doing it wrong?
Following MaratC and Zett42's suggestions, I ended up adding this to my script:
agent {
    node {
        customWorkspace "${params.Branch}"
    }
}
This causes Jenkins to create each build in a different folder and they don't step on each others' toes.
The only downside is that you can't build the same branch simultaneously, but that's a corner case.
Also, I could add a random number to the workspace to enable this as well, as sketched below.
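Something like this hedged variation would do it (BUILD_NUMBER is a standard Jenkins environment variable; suffixing it onto the workspace is my illustration, not part of the original answer):

agent {
    node {
        // unique workspace per build, so even the same branch can build concurrently
        customWorkspace "${params.Branch}-${env.BUILD_NUMBER}"
    }
}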
I have configured a multibranch pipeline project in Jenkins. This project runs integration tests on all my feature branches (git). Each job in the pipeline project creates an instance of my webapp (starting Tomcat and other dependencies). Because of port-binding issues, this results in many broken jobs.
Can I throttle the builds in the multibranch pipeline project, so that the jobs for each feature branch run sequentially instead of in parallel?
Or is there a more elegant solution?
Edit:
Situation and problem:
I want to have a multibranch pipeline project in Jenkins (because I have many feature branches in git).
The jobs which are created from the multibranch pipeline (one for each feature branch in git) run in parallel.
SCM polling happens at midnight (there are new commits on x branches, so the related jobs start at midnight).
Every job starts one instance of my webapp (and other dependencies), which binds to some ports.
The problem is that many of these jobs can start at midnight. Every job will try to start an instance of my webapp. The first job can start the webapp without any problem. The second job cannot start the webapp because the ports are already taken by the first instance.
I don't want to configure a new port binding for each feature branch in my git repository. I need a solution to throttle the builds in the multibranch pipeline, so that only one "feature" job can run at a time.
From what I've read in other answers the disableConcurrentBuilds command only prevents multiple builds on the same branch.
If you want only one build running at a time, period, go to your Nodes/Build Executor configuration for the specific VM that your app is running on, drop the number of executors to 1 and configure the node labels so that only jobs from your multibranch pipeline can run on that VM.
My project has strict memory, licensing and storage constraints, so with this setup, all the jobs on the master and feature branches start, but only one can run at a time until the executor becomes available.
The most elegant solution would be to make your integration tests able to run concurrently.
One solution would be to use an embedded Tomcat with a dynamic port. That way each job instance would run Tomcat on a different port.
This is also a better solution than relying on an external server.
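As a minimal sketch of that idea (plain Groovy/Java test code, not pipeline code, assuming the tomcat-embed-core dependency is on the classpath):

import org.apache.catalina.startup.Tomcat

def tomcat = new Tomcat()
tomcat.setPort(0)        // port 0 = let the OS pick a free ephemeral port
tomcat.getConnector()    // force creation of the default connector (Tomcat 9+)
tomcat.start()

// The actual port is only known after start(); hand it to the integration tests.
int boundPort = tomcat.connector.localPort
println "Embedded Tomcat started on port ${boundPort}"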
If this is too much work, you can always use the following code in your Jenkinsfile pipeline:
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // continue your pipeline ...
}
The solution comes from this SO answer.
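In a declarative pipeline, the equivalent is the options directive:

pipeline {
    agent any
    options {
        // Same effect: at most one build per branch at a time
        disableConcurrentBuilds()
    }
    stages { ... }
}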
I'm trying to find a way to ensure that an entire pipeline completes on a specific executor without allowing other jobs to be run on that executor:
My pipeline essentially looks like:
Build -> Deploy -> API testing -> selenium testing
As we have multiple teams, all running multiple parallel pipelines, I want to ensure that, on a per-slave basis, all builds in the pipeline complete before any others begin.
Is anyone aware of a plugin that does this?
You can look into the Locks and Latches plugin, which can help you enforce this by assigning a lock to the relevant pipeline.
I have never tried your setup, but it might work.
Also, consider restricting the number of executors on the slave to 1, so only a single pipeline can "fit in".
I hope this helps.