How to give priority to stages in a Jenkins parallel pipeline?

I have a Jenkins pipeline with some parallel stages:
pipeline {
    stages {
        stage("Parallel build") {
            parallel {
                stage("A") { /* takes 5 min */ }
                stage("B") { /* takes 5 min */ }
                stage("C") { /* takes 15 min */ }
            }
        }
    }
}
Assuming there aren't enough free executors to start all stages at once, how do I make sure stage C is among those started first? This will reduce build time from 20 minutes to 15, since A and B can run on the same executor consecutively.

Related

Jenkins execute all sub jobs before marking a top job fail or pass?

def jobs = [
    'subjob1': true,
    'subjob2': false,
    'subjob3': true
]

pipeline {
    agent { label "ag1" }
    stages {
        stage('stage1') {
            steps {
                script {
                    jobs.each {
                        if ("$it.value".toBoolean()) {
                            stage("Stage $it.key") {
                                build([job: "$it.key", wait: true, propagate: true])
                            }
                        }
                    }
                }
            }
        }
    }
}
This Jenkins job triggers other sub-jobs (via the pipeline build step): subjob1, subjob2, subjob3. If any of the sub-jobs fails, this job immediately fails (propagate: true).
However, what I'd like is for all sub-jobs to keep executing, and to mark this job as failed if one or more of them fail. How would I do that?
You can simply use a catchError block for this:
def jobs = [
    'subjob1': true,
    'subjob2': false,
    'subjob3': true
]

pipeline {
    agent any
    stages {
        stage('stage1') {
            steps {
                script {
                    jobs.each {
                        if ("$it.value".toBoolean()) {
                            stage("Stage $it.key") {
                                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
                                    echo "Building"
                                    build([job: "$it.key", wait: true, propagate: true])
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Instead of executing all the jobs one by one, you can also execute them in parallel. That way all the jobs run independently of each other, and stage1 fails only if one or more of them fail.
According to the documentation
The parallel directive takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch.
So, we have to transform the jobs to a Map of stage names to Closures that will execute in parallel. We will use jobs.collectEntries() to build the mapping and pass it as the argument to the parallel directive:
stage('Parallel') {
    steps {
        script {
            // Use explicit closure parameters (jobName, enabled) so each parallel
            // branch captures its own key/value instead of relying on a nested "it".
            parallel(jobs.collectEntries { jobName, enabled ->
                [(jobName): {
                    if (enabled) {
                        build(job: jobName)
                    } else {
                        echo "Skipping job execution: ${jobName}"
                        // This is only required to mark the parallel stage as skipped -
                        // it is not required for the solution to work
                        org.jenkinsci.plugins.pipeline.modeldefinition.Utils.markStageSkippedForConditional(jobName)
                    }
                }]
            })
        }
    }
}
We can omit the wait and propagate flags in the build step because both default to true.
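For reference, the fully explicit call in the example above would be equivalent:

build(job: jobName, wait: true, propagate: true)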
In the provided solution the Parallel stage (and the resulting build) will fail only if one or more jobs fail. Additionally, if you have the Blue Ocean plugin installed, you will see a nice graph view of the Parallel stage along with all of its parallel children:

What is the order of execution of jobs in a Jenkins pipeline?

I have a pipeline with multiple jobs inside it, but I'm facing a dilemma: what is the order of execution of the jobs inside the pipeline? Is it the order they appear in the script? The reason I'm interested is that I want JOB1 to run at the beginning of the pipeline and again somewhere in the middle. However, when the pipeline runs, JOB1 runs twice in succession at the beginning for whatever reason. Is there a particular reason for that, or am I missing something?
pipeline {
    agent any
    options {
        timeout(time: 4, unit: 'HOURS')
    }
    stages {
        stage('All tests in parallel') {
            parallel {
                stage('JOB1') {
                    steps {
                        callJobByName("JOB1")
                    }
                }
                stage('JOB2') {
                    steps {
                        callJobByName("JOB2")
                    }
                }
                stage('JOB1') {
                    steps {
                        callJobByName("JOB1")
                    }
                }
                stage('JOB3') {
                    steps {
                        callJobByName("JOB3")
                    }
                }
            }
        }
    }
}
As per the pipeline above, every stage within the parallel block runs in parallel, so you can't guarantee an order. If you want to execute them in order, remove the parallel block; the stages will then execute in the order they are defined (a fully sequential sketch follows the example below). If you just want JOB1 to execute first and then run the others (and JOB1 again) in parallel, you can simply move the first JOB1 stage out of the parallel block:
pipeline {
    agent any
    options {
        timeout(time: 4, unit: 'HOURS')
    }
    stages {
        stage('JOB1') {
            steps {
                callJobByName("JOB1")
            }
        }
        stage('All tests in parallel') {
            parallel {
                stage('JOB2') {
                    steps {
                        callJobByName("JOB2")
                    }
                }
                stage('JOB1 again') {
                    steps {
                        callJobByName("JOB1")
                    }
                }
                stage('JOB3') {
                    steps {
                        callJobByName("JOB3")
                    }
                }
            }
        }
    }
}
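For completeness, if you wanted a strict order for every job, a minimal sketch with the parallel block removed entirely (stages then run top to bottom in the order written; callJobByName is the helper from the question):

pipeline {
    agent any
    options {
        timeout(time: 4, unit: 'HOURS')
    }
    stages {
        stage('JOB1') { steps { callJobByName("JOB1") } }
        stage('JOB2') { steps { callJobByName("JOB2") } }
        stage('JOB3') { steps { callJobByName("JOB3") } }
    }
}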

Jenkins Pipeline - Start a stage 2 hours after the 1st one

In a declarative pipeline parallel block, is it possible to have the 2nd stage start with a lag of 2 hours after the first one has started?
Let's say I have 2 stages as below:
parallel {
    stage('A') {
        steps {
            script {
                sh 'do something'
            }
        }
    }
    stage('B') {
        steps {
            script {
                sh 'do something'
            }
        }
    }
}
When the job is kicked off, stage A starts. 2 hours later, Stage B would start. Is this possible?
You can use "sleep" within a stage to pause its execution.
stage("B") {
steps {
echo "Pausing stage B"
sleep(time: 2, unit: "HOURS")
}
}
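Putting it together with the parallel block from the question (the sh commands are the question's placeholders), a sketch could look like this; note that while stage B sleeps it still occupies an executor on its agent:

parallel {
    stage('A') {
        steps {
            sh 'do something'
        }
    }
    stage('B') {
        steps {
            echo "Pausing stage B"
            sleep(time: 2, unit: "HOURS")
            sh 'do something'
        }
    }
}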

Re-use agent in parallel stages of declarative pipeline

I'm using Declarative Pipelines 1.3.2 plugin and I want to use the same agent (as in only specifying the agent directive once) in multiple parallel stages:
stage('Parallel Deployment') {
    agent { dockerfile { label 'docker'; filename 'Dockerfile' } }
    parallel {
        stage('A') { steps { ... } }
        stage('B') { steps { ... } }
    }
}
However, Jenkins complains:
"agent" is not allowed in stage "Parallel Deployment" as it contains parallel stages
A solution is to duplicate the agent directive for each parallel stage, but this is tedious and leads to lot of duplicated code with many parallel stages:
stage('Parallel Deployment') {
    parallel {
        stage('A') {
            agent { dockerfile { label 'docker'; filename 'Dockerfile' } }
            steps { ... }
        }
        stage('B') {
            agent { dockerfile { label 'docker'; filename 'Dockerfile' } }
            steps { ... }
        }
    }
}
Is there a more idiomatic solution, or is duplicating the agent directive necessary for each of the parallel stages?
Specifying the agent at pipeline level can be a solution, but it has the potential downside that the agent is up and running for the whole duration of the build.
Also note that this means each stage (that doesn't define its own agent) runs on the same agent instance, not just the same agent type. If the parallel stages are CPU or resource intensive, this might not be what you want.
Still, if you want to run the parallel stages on one instance and can't or don't want to define the agent at pipeline level, here's a workaround in declarative syntax:
stage('Parallel Deployment') {
    agent { dockerfile { label 'docker'; filename 'Dockerfile' } }
    stages {
        stage('A & B') {
            parallel {
                stage('A') { steps { ... } }
                stage('B') { steps { ... } }
            }
        }
    }
}
Or you can go for a scripted pipeline, which doesn't have this limitation.
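For illustration, a minimal scripted sketch of that idea, assuming a single agent labelled 'docker' and an image built from the question's Dockerfile (the image name and echo steps are placeholders). Both branches run inside the one node allocation, so they share the same agent and workspace:

node('docker') {
    checkout scm
    // Build the image from the Dockerfile in the workspace, then run both
    // branches inside it on this single agent.
    docker.build('deploy-image').inside {
        parallel(
            'A': { echo 'deploy A' },
            'B': { echo 'deploy B' }
        )
    }
}

Because both branches share one workspace, make sure their steps don't interfere with each other.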
Declare the agent at Pipeline level so all stages run on the same agent.
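A minimal sketch of that approach, reusing the dockerfile agent from the question (the stage bodies are placeholders):

pipeline {
    // One agent for the whole build; every stage that doesn't declare its own
    // agent (including the parallel ones) runs on this instance.
    agent { dockerfile { label 'docker'; filename 'Dockerfile' } }
    stages {
        stage('Parallel Deployment') {
            parallel {
                stage('A') { steps { echo 'deploy A' } }
                stage('B') { steps { echo 'deploy B' } }
            }
        }
    }
}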

Jenkins 'agent: none' lightweight executor equivalent with scripted pipeline

With Jenkins declarative syntax, it's possible to run parallel stages with no top-level agent. This ends up consuming only two executors (one per parallel stage), since the top-level agent is 'none':
pipeline {
    agent none
    stages {
        stage('Run on parallel nodes') {
            parallel {
                stage('Do one thing') {
                    agent any
                    steps {
                        ...
                    }
                }
                stage('Do another thing') {
                    agent any
                    steps {
                        ...
                    }
                }
            }
        }
    }
}
With scripted pipelines, which seemingly require a top-level 'node' element, this appears not to be possible. The following ends up consuming three executors, even though only two are doing real work:
node {
    stage('Run on parallel nodes') {
        parallel([
            'Do one thing': {
                node() {
                    ...
                }
            },
            'Do another thing': {
                node() {
                    ...
                }
            }
        ])
    }
}
Is a 'lightweight' top level executor possible with scripted pipelines?
Scripted pipelines don't actually require a top-level node allocation; that assumption is simply wrong, and the outer node block can be left out.
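For example, a sketch of the same scripted pipeline without the outer node block (the branch bodies are placeholders), so only the two inner node allocations consume executors:

stage('Run on parallel nodes') {
    parallel(
        'Do one thing': {
            node {
                echo 'real work on one executor'
            }
        },
        'Do another thing': {
            node {
                echo 'real work on another executor'
            }
        }
    )
}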
