How to nest dependent Jenkins Pipelines to execute on a single machine?

We are implementing Continuous Integration of several repositories using Jenkins.
For this example, let's assume libA is a dependency of libB which is a dependency of clientC.
libA pipeline
libA has external dependencies, so we can write a pipeline, build-A-pipe, to build it: one stage is responsible for gathering those dependencies, and a subsequent stage actually invokes the build command.
libB pipeline
libB would ideally be built within a separate pipeline, called build-B-pipe. In the stage that gathers libB's dependencies, we have to build libA. It seems to us that the best way to achieve this is to call build job: 'build-A-pipe' within the pipeline that builds libB (this way we reuse build-A-pipe, which already describes all the steps required to successfully build libA), as sketched below.
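A minimal sketch of what build-B-pipe could look like; the stage names and the make invocation are assumptions, only the nested build step is taken from the question:

// Jenkinsfile for build-B-pipe (sketch; stage names and build command are assumptions)
node {
    stage('Gather dependencies') {
        // Trigger the upstream pipeline and wait for its result
        build job: 'build-A-pipe'
    }
    stage('Build libB') {
        // Placeholder build command
        sh 'make libB'
    }
}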
clientC pipeline
Now, if we wanted to build clientC, we would follow a similar procedure. Thus, there would be a call like build job: 'build-B-pipe' in the dependency-gathering stage of the pipeline building clientC. The issue is that this results in nested calls to the build command, which deadlocks the single machine:
at the top level, calling build job: 'build-B-pipe' schedules build-B-pipe and starts it on the master machine (our only "execution node").
build-B-pipe then calls build job: 'build-A-pipe', which is scheduled but cannot start, as the only "execution node" is already taken.
How should we approach this problem to make this inherently sequential build work within Jenkins?

The issue is that it results in nested calls to the build command, which deadlocks the single machine
By deadlock, do you mean that the slave agent which is responsible for executing the nested pipeline is running out of resources? Or is the node which is responsible for running these nested pipelines running out of executors?
If the machine responsible for running pipelines is exhausting all resources (assuming that this is the only responsibility of this machine), then your pipeline is too complex and should delegate more to other nodes/agents.
If the node is running out of executors, you can increase those in the node config.
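For reference, a minimal sketch of raising the executor count on the built-in node from the script console or an init.groovy.d script; the value 2 is just an assumption, and the same setting is available in the node configuration UI:

// Runs in Manage Jenkins > Script Console, or from $JENKINS_HOME/init.groovy.d/
import jenkins.model.Jenkins

def jenkins = Jenkins.get()
jenkins.setNumExecutors(2)   // enough slots for the outer build plus the nested one
jenkins.save()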

Related

Why is the build executor status showing two jobs for one pipeline job?

I am using a Groovy pipeline script for a build job, so my Jenkins pipeline looks like:
node {
    git url: 'myurl.git'
    load 'mydir/myfile.groovy'
}
It's working as expected, but in the build executor status it shows up as two jobs running.
Why is it showing one job as two jobs with the same name?
Is there something I have missed telling Jenkins about the pipeline job?
I can't find a better documentation source than this README (issue JENKINS-35710 also has some information), but the short of it is the Groovy pipeline executes on master (on a flyweight executor) while node blocks run on an allocated executor.
Here is a relevant snippet taken from the linked documentation:
[...]
Why are there two executors consumed by one Pipeline build?
Every Pipeline build itself runs on the master, using a flyweight executor — an uncounted slot that is assumed to not take any significant computational power.
This executor represents the actual Groovy script, which is almost always idle, waiting for a step to complete.
Flyweight executors are always available.

Manual Build Step in Jenkins Declarative Pipeline?

This is a follow-up question to this previous post that doesn't seem like it was ever truly answered with more than a "this looks promising":
Jenkins how to create pipeline manual step.
This is a major functionality gap for CI/CD pipelines. The current "input step" of declarative (1.2.9) requires the whole pipeline to wait for the input step before the pipeline can complete (or to use a time-out that won't allow you to re-trigger later). Depending on how agents are scoped, it can also hold up an executor or require you to start a new slave for every build step.
The closest I've come to a solution that doesn't eat up an executor is a pipeline-level "agent none" with agents defined in all stages, described here: https://jenkins.io/blog/2018/04/09/whats-in-declarative/. But starting a new slave for every build step seems wasteful and requires additional considerations for persisting your workspace. The final solution offered was to throw a "time-out" around the input, but this still doesn't work, because then you can never come back to that build to stage it later and will need to re-build.
Any solutions or suggestions here would be very appreciated.
If you are using the Kubernetes plugin to run Jenkins agents as containers in a Kubernetes cluster, then there is a setting called idleMinutes.
idleMinutes Allows the pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it. Use this only when defining a pod template in the user interface.
There you can define your agent at the pipeline level without defining it in every stage (given that your agent is designed to have the capabilities to run all stages). When it comes to the user-input stage, set the agent to none at the stage level so that it is not holding up an executor. A sketch of the overall shape follows.
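A minimal declarative sketch of the shape being discussed, here written with a pipeline-level "agent none" and per-stage agents as in the blog post linked above; the label, shell commands and stash name are assumptions. The point is only that the stage containing the input step requests no agent, so it waits on the flyweight executor without holding a heavyweight one:

pipeline {
    agent none   // no global agent; each stage asks for what it needs
    stages {
        stage('Build') {
            agent { label 'docker-agent' }   // hypothetical label
            steps {
                sh 'make build'              // placeholder command
                stash name: 'artifacts', includes: 'build/**'   // persist work across stages
            }
        }
        stage('Approve deploy') {
            // No agent directive: the input step only occupies the
            // flyweight executor while waiting for a human.
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deploy') {
            agent { label 'docker-agent' }
            steps {
                unstash 'artifacts'
                sh 'make deploy'             // placeholder command
            }
        }
    }
}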

Jenkins Pipeline & Docker Plugin - concurrent builds on unique agents

I'm using Jenkins version 2.7.1 with the Pipeline suite of plugins to implement a pipeline in a Jenkinsfile, together with the Docker Plugin. My goal is to execute multiple project builds in parallel, with each project build running inside its own dedicated container. My Jenkinsfile looks like:
node('docker-agent') {
    stage('Checkout') {
        checkout scm
    }
    stage('Setup') {
        build(job: 'Some External Job', node: env.NODE_NAME, workspace: env.WORKSPACE)
    }
}
I have a requirement to call an external job, but I need it to execute in the same workspace where the checkout scm step has checked out the code, hence the node and workspace parameters. I understand that wrapping a build call inside a node block effectively wastes an executor, but I'm fine with that since the agent is a container on a Docker Cloud and isn't really wasting any resources.
The one problem with my approach is that another instance of this project build could steal the executors from a different running instance in the time gap between the 2 stages.
How can I essentially ensure that (1) project builds can run concurrently, but (2) each build runs on a new instance of an agent labelled by docker-agent?
I've tried the Locking plugin, but a new build will simply wait to acquire the lock on the existing agent rather than spinning up its own agent.
To prevent other builds running on the same agent, limit the number of executors to 1 for each agent in your docker cloud environment (that's a setting when configuring docker for that label). That will require a new container to start per executor.
That said, I wouldn't design a pipeline like this. Instead, I'd use stash and unstash to copy your checkout and any other small artifacts between the nodes so that you can pause execution without holding a node running.
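A rough sketch of the stash/unstash approach; the stash name is an assumption, and this version deliberately sidesteps the shared-workspace requirement by stashing the checkout and letting the downstream build run without a node being held:

// Check out once, stash the sources, and release the node.
node('docker-agent') {
    stage('Checkout') {
        checkout scm
        stash name: 'sources', includes: '**'   // stash name is an assumption
    }
}

// No heavyweight executor is held while the external job is scheduled and runs.
stage('Setup') {
    build job: 'Some External Job'
}

// Later work picks up whichever agent is free and restores the sources.
node('docker-agent') {
    stage('Use sources') {
        unstash 'sources'
        // continue with steps that need the checked-out code ...
    }
}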

How to run Jobs in a Multibranch Project sequential instead of parallel

I have configured a multibranch pipeline project in Jenkins. This project runs integration tests on all my feature branches (git). Each job in the multibranch project creates an instance of my webapp (starting Tomcat and other dependencies). Because of port-binding issues this results in many broken jobs.
Can I throttle the builds in the multibranch pipeline project, so that the jobs for the feature branches run sequentially instead of in parallel?
Or is there a more elegant solution?
Edit:
Situation and problem:
I want to have a multibranch pipeline project in Jenkins (because I have many feature branches in git).
The jobs created from the multibranch pipeline (one per feature branch in git) run in parallel.
SCM polling happens at midnight (there are new commits on several branches, so the related jobs all start at midnight).
Every job starts one instance of my webapp (and other dependencies), which binds to some ports.
The problem is that many of these jobs can start at midnight. Every job will try to start an instance of my webapp. The first job can start the webapp without any problem. The second job cannot start the webapp because the ports are already taken by the first instance.
I don't want to configure a new port binding for each feature branch in my git repository. I need a solution to throttle the builds in the multibranch pipeline, so that only one "feature" job can run at a time.
From what I've read in other answers the disableConcurrentBuilds command only prevents multiple builds on the same branch.
If you want only one build running at a time, period, go to your Nodes/Build Executor configuration for the specific VM that your app is running on, drop the number of executors to 1 and configure the node labels so that only jobs from your multibranch pipeline can run on that VM.
My project has strict memory, licensing and storage constraints, so with this setup all the jobs on the master and feature branches start, but only one runs at a time; the rest wait until the executor becomes available.
The most elegant solution would be to make your Integration Tests to be able to run concurrently.
One solution would be to use an embedded Tomcat with a dynamic port. That way each job instance would run Tomcat on a different port.
This is also a better solution than relying on an external server.
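A hedged sketch of what per-build ports could look like in the Jenkinsfile; the test script name, flag and base port are assumptions, and EXECUTOR_NUMBER is only used to obtain a number that differs between builds running concurrently on the same node:

node {
    // Derive a port that differs per executor slot so parallel branch
    // builds on the same node do not collide on a fixed port.
    def appPort = 8080 + (env.EXECUTOR_NUMBER as int)   // base port is an assumption
    stage('Integration tests') {
        sh "./run-integration-tests.sh --http-port=${appPort}"   // hypothetical script and flag
    }
}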
If this is too much work, you can always use the following code in your Jenkinsfile:
node {
    // This limits build concurrency to 1 per branch
    properties([disableConcurrentBuilds()])
    // continue your pipeline ...
}
The solution comes from this SO answer.

Jenkins pipeline using upstream and downstream dependency

I have some standalone Jenkins jobs to build, package and deploy. Now I am connecting them: the 'build' job triggers the 'package' job, the 'package' job triggers the 'deploy' job, and I pass the required parameters between them. I can also see them neatly in the pipeline view.
My question is: can this technically be called a pipeline? Or can I only call it a pipeline if I use the Pipeline plugin and write a Groovy script?
Thanks
P.S.: Please do not downvote this question. It is a sincere question to which I am not able to find the right answer. I want to be technically correct.
In the Jenkins context, a pipeline is a job that defines a workflow using the pipeline DSL (here, based on Groovy). A pipeline aims to define a set of steps (e.g. build + package + deploy in your case) in a single place, and allows you to define a complex workflow (e.g. parallel steps, input steps, try/catch instructions) that can be both replayed and versioned (because it can be saved to git). For more information you should read the official Jenkins pipeline documentation, which explains in detail what a pipeline is.
The kind of jobs you are currently using are called freestyle jobs, and even if they do define a "flow" (by chaining jobs together), they cannot be called pipeline jobs.
In short, pipelines are jobs that use the Pipeline plugin and Groovy script syntax to define the whole application lifecycle, while standard Jenkins 1.x jobs are called freestyle jobs.
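For comparison, a minimal Jenkinsfile that expresses the same build/package/deploy chain as a single pipeline job; the shell commands are placeholders, not taken from the question:

node {
    stage('Build') {
        sh 'make build'       // placeholder
    }
    stage('Package') {
        sh 'make package'     // placeholder
    }
    stage('Deploy') {
        sh 'make deploy'      // placeholder
    }
}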
