Jenkins Pipeline & Docker Plugin - concurrent builds on unique agents

I'm using Jenkins version 2.7.1 with the Pipeline suite of plugins to implement a pipeline in a Jenkinsfile, together with the Docker Plugin. My goal is to execute multiple project builds in parallel, with each project build running inside its own dedicated container. My Jenkinsfile looks like:
node('docker-agent') {
    stage('Checkout') {
        checkout scm
    }
    stage('Setup') {
        build(job: 'Some External Job', node: env.NODE_NAME, workspace: env.WORKSPACE)
    }
}
I have a requirement to call an external job, but I need it to execute in the same workspace where the checkout scm step has checked out the code, hence the node and workspace parameters. I understand that wrapping a build call inside a node block effectively wastes an executor, but I'm fine with that since the agent is a container on a Docker Cloud and isn't really wasting any resources.
The one problem with my approach is that another instance of this project build could steal the executor from a different running instance in the time gap between the two stages.
How can I ensure that (1) project builds can run concurrently, but (2) each build runs on a new instance of an agent labelled docker-agent?
I've tried the Locking plugin, but a new build simply waits to acquire the lock on the existing agent rather than spinning up its own agent.

To prevent other builds from running on the same agent, limit the number of executors to 1 for each agent in your Docker cloud environment (that's a setting when configuring Docker for that label). That will force a new container to start for each executor.
That said, I wouldn't design a pipeline like this. Instead, I'd use stash and unstash to copy your checkout and any other small artifacts between nodes, so that you can pause execution without holding a node; a sketch follows.
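A minimal sketch of the stash/unstash approach (the stash name and include pattern are illustrative, not from the original question):
node('docker-agent') {
    stage('Checkout') {
        checkout scm
        // Save the checked-out sources so a later node block can retrieve them
        stash name: 'sources', includes: '**/*'
    }
}
// No executor is held between the two node blocks.
node('docker-agent') {
    stage('Setup') {
        // Restore the sources into this (possibly different) agent's workspace
        unstash 'sources'
        // ... run the steps that previously relied on a shared workspace ...
    }
}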

Related

Why do declarative pipelines need to run on master if there are build executors available?

I'm using the recent Jenkins version 2.286, and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes, and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem here is that all my jobs wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
This means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching to their agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on master, even if there aren't any executors defined (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight checkout of the Jenkinsfile is done on the master node. While this doesn't use an executor on the master, perhaps the Job Restrictions plugin is still blocking it (I haven't used that plugin, so I cannot comment).
Also, certain pipeline actions are delegated back to the master node as well (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library checkouts or git checkouts) before you see [Pipeline] Start of Pipeline. All of that is done on the master.
Unfortunately this cannot be changed, as the master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.

How to check out and run a pipeline file from TFS on a specific node in Jenkins?

I am trying to run a pipeline job that gets its pipeline file from TFS, but the mapping of the workspace and the checkout are done on the master instead of the slave.
I have a Jenkins master installed on a linux machine, and I connected a windows machine to it as a slave. I created a pipeline job with the 'Pipeline script from SCM' option selected for TFS.
How can I make the windows slave run that pipeline job?
The master can't run that job because it is running on linux, and it fails when it tries to map a workspace to TFS in order to download the pipeline script and run it.
Even if I create another pipeline job with a hard-coded script that runs my original pipeline job, like this:
node('WIN_SLAVE') {
    build job: 'My_Pipeline'
}
It doesn't work.
And I can see in the output that the initial script (above) is in fact running on my windows slave, but when it builds the job 'My_Pipeline' it still tries to map a workspace to the Jenkins master at its linux machine path /var/jenkins/... and it fails.
If the initial pipeline script ran on the windows slave, why doesn't the other pipeline script run on the same node? Why does it try to check out the pipeline file from TFS to the Jenkins master again?
How can I make the windows slave checkout the pipeline file and run it?
Here are some things to check...
Make sure you disabled the original job, or that you are completely redefining it to run on the slave, because you indicated you set up "another job" for the slave. It appears that this other job is just triggering the previous job, rather than defining its own specifications. When the job is run on the slave, it's just running whatever settings are in that original job.
Also, if you have the box checked to build when a change is pushed to TFS, then your original job could still be trying to run every time a change is made in TFS.
Verify the slave's Remote root directory is set properly in the slave configuration under Manage Jenkins -> Manage Nodes.
Since this slave job is triggering the other job you originally created on the master, it will build on the master as expected.
Instead of referencing the My_Pipeline job, change the My_Pipeline job itself to run on the slave. If you are using a declarative pipeline for the original job, change that original job to run on the slave within the original job's settings. You can do it similarly to how you have indicated above; just define the node in the original job, as sketched below.
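For illustration, a minimal sketch of what the My_Pipeline script itself could look like so that the checkout and the build both happen on the slave (the stage names and the bat command are placeholders, not from the question):
node('WIN_SLAVE') {
    stage('Checkout') {
        // The TFS workspace is now mapped on the windows slave
        checkout scm
    }
    stage('Build') {
        // Placeholder for the real build steps
        bat 'build.cmd'
    }
}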
If the original job is a freestyle project, there is a checkbox titled Restrict where this project can be run. Check that and include the name of the slave in the Label Expression. When you run the job, it will then be restricted to the slave.
Lastly, posting the My_Pipeline job itself would be helpful.

Why is the build executor status showing two jobs for one pipeline job?

I am using a Groovy pipeline script for a build job; my Jenkins pipeline looks like:
node {
    git url: 'myurl.git'
    load 'mydir/myfile.groovy'
}
It's working well, as expected, but the build executor status shows it as two jobs running.
Why is it showing one job as two jobs with the same name?
Is there something I have missed telling Jenkins for a pipeline job?
I can't find a better documentation source than this README (issue JENKINS-35710 also has some information), but the short of it is that the Groovy pipeline script executes on the master (on a flyweight executor) while node blocks run on an allocated executor.
Here is a relevant snippet taken from the linked documentation:
[...]
Why are there two executors consumed by one Pipeline build?
Every Pipeline build itself runs on the master, using a flyweight executor — an uncounted slot that is assumed to not take any significant computational power.
This executor represents the actual Groovy script, which is almost always idle, waiting for a step to complete.
Flyweight executors are always available.
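A small sketch to illustrate the split (the label and shell command are placeholders):
// Everything outside node {} runs on the master's flyweight executor,
// which is the second entry you see in the Build Executor Status.
def appVersion = '1.0'    // plain Groovy, mostly idle between steps

node('linux') {
    // Only this block occupies a real executor slot on an agent.
    sh "make VERSION=${appVersion}"
}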

How to run jobs in a Multibranch Project sequentially instead of in parallel

I have configured a multibranch pipeline project in Jenkins. This project runs integration tests on all my feature branches (git). Each job in the pipeline project creates an instance of my webapp (starting Tomcat and other dependencies). Because of port-binding issues, this results in many broken jobs.
Can I throttle the builds in the multibranch pipeline project, so that the jobs for the feature branches run sequentially instead of in parallel?
Or is there a more elegant solution?
Edit:
Situation and problem:
I want to have a multibranch pipeline project in Jenkins (because I have many feature branches in git).
The jobs which are created from the multibranch pipeline (one for each feature branch in git) run in parallel.
SCM polling happens at midnight (there are new commits on x branches, so the related jobs all start at midnight).
Every job starts one instance of my webapp (and other dependencies), which binds to some ports.
The problem is that many of these jobs can start at midnight. Every job will try to start an instance of my webapp. The first job can start the webapp without any problem. The second job cannot start the webapp because the ports are already taken by the first instance.
I don't want to configure a new port binding for each feature branch in my git repository. I need a solution to throttle the builds in the multibranch pipeline, so that only one "feature" job can run at a time.
From what I've read in other answers, the disableConcurrentBuilds option only prevents multiple builds on the same branch.
If you want only one build running at a time, period, go to your Nodes/Build Executor configuration for the specific VM that your app is running on, drop the number of executors to 1 and configure the node labels so that only jobs from your multibranch pipeline can run on that VM.
My project has strict memory, licensing and storage constraints, so with this setup all the jobs on the master and feature branches start, but only one runs at a time; the rest wait until the executor becomes available.
The most elegant solution would be to make your integration tests able to run concurrently.
One way to do that is to use an embedded Tomcat with a dynamic port, so that each job instance runs its Tomcat on a different port.
This is also a better solution than relying on an external server; a sketch follows.
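For example, a sketch of one way to obtain a unique port per concurrent build inside a Jenkinsfile (EXECUTOR_NUMBER is a standard Jenkins environment variable; the Gradle task and system property are made-up placeholders):
node {
    // Derive a port from the executor slot so concurrent builds on the same
    // node don't collide; EXECUTOR_NUMBER is set by Jenkins for every build.
    def appPort = 8080 + (env.EXECUTOR_NUMBER as int)
    stage('Integration tests') {
        // Pass the port to the embedded Tomcat; adjust the task and property
        // names to whatever your build actually uses.
        sh "./gradlew integrationTest -DwebappPort=${appPort}"
    }
}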
If this is too much work, you can always use the following code in your Jenkinsfile:
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // continue your pipeline ...
}
The solution comes from this SO answer.
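For a declarative pipeline, the same setting goes in the options directive; a minimal skeleton:
pipeline {
    agent any
    options {
        // Same effect as properties([disableConcurrentBuilds()]) in scripted syntax
        disableConcurrentBuilds()
    }
    stages {
        stage('Tests') {
            steps {
                // ... your existing steps ...
                echo 'running integration tests'
            }
        }
    }
}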

How to nest dependent Jenkins Pipelines to execute on a single machine?

We are implementing Continuous Integration of several repositories using Jenkins.
For this example, let's assume libA is a dependency of libB which is a dependency of clientC.
libA pipeline
libA has external dependencies, so we write the pipeline build-A-pipe to build it: one stage is responsible for gathering those dependencies, and a subsequent stage actually invokes the build command.
libB pipeline
libB would ideally be built within a separate pipeline, called build-B-pipe. In the stage that gathers libB's dependencies, we have to build libA. It seems to us that the best way to achieve this is to call build job: 'build-A-pipe' within the pipeline that builds libB (this way we reuse build-A-pipe, which already describes all the steps required to successfully build libA).
clientC pipeline
Now, if we wanted to build clientC, we would follow a similar procedure. Thus, there would be a call like build job: 'build-B-pipe' in the dependency-gathering stage of the pipeline building clientC. The issue is that this results in nested calls to the build command, which deadlocks the single machine:
At the top level, calling build job: 'build-B-pipe' schedules build-B-pipe and starts it on the master machine (our only "execution node").
build-B-pipe then calls build job: 'build-A-pipe', which is scheduled but cannot start, as the only "execution node" is already taken.
How should we approach this problem to make this inherently sequential build work within Jenkins?
The issue is that it results in nested calls to the build command, which deadlocks the single machine
By deadlock, do you mean that the slave agent which is responsible for executing the nested pipeline is running out of resources? Or is the node which is responsible for running these nested pipelines running out of executors?
If the machine responsible for running pipelines is exhausting all its resources (assuming that this is the machine's only responsibility), then your pipeline is too complex and should delegate more to other nodes/agents.
If the node is running out of executors, you can increase the executor count in the node configuration.
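One restructuring that avoids the deadlock regardless of executor count is to trigger the upstream job from outside any node block, so the waiting happens on the flyweight executor and the single real executor stays free for the triggered job. A sketch, assuming build-A-pipe archives the artifacts libB needs (copyArtifacts comes from the Copy Artifact plugin; the make command is a placeholder):
// build-B-pipe, sketched: the nested build call holds no executor here.
build job: 'build-A-pipe'

node {
    // Retrieve libA's published outputs, then build libB itself.
    copyArtifacts projectName: 'build-A-pipe'
    sh 'make libB'
}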
