Quicker syntax for Jenkins identical parallel stages

I have some parallel stages in my Jenkins pipeline. They are all identical, except that they run on different agents:
stage('Parallel tasks') {
    parallel {
        stage('agent-1') {
            agent {
                label 'agent-1'
            }
            steps {
                sh 'do task number 468'
            }
        }
        stage('agent-2') {
            agent {
                label 'agent-2'
            }
            steps {
                sh 'do task number 468'
            }
        }
        stage('agent-3') {
            agent {
                label 'agent-3'
            }
            steps {
                sh 'do task number 468'
            }
        }
    }
}
I want to add more parallel stages on more nodes, but the script is long and repetitive. What's the best way to rewrite this to tell Jenkins to parallelize the same steps across agents 1, 2, 3, 4, etc.?

The code below creates the stage once and runs it in parallel on multiple agents:
// Define your agents
def agents = ['agent-1', 'agent-2', 'agent-3']

def createStage(label) {
    return {
        stage("Runs on ${label}") {
            node(label) {
                // build steps that should happen on all nodes go here
                echo "Running on ${label}"
                sh 'do task number 468'
            }
        }
    }
}

def parallelStagesMap = agents.collectEntries {
    ["${it}" : createStage(it)]
}

pipeline {
    agent none
    stages {
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
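If all of these agents share a common label, the list itself does not even have to be hard-coded. Here is a minimal sketch, assuming the nodesByLabel step is available on your controller and that the agents carry a shared label such as 'task-468' (both the step's availability and the label name are assumptions, not part of the answer above):

// Collect the names of every online agent carrying the (assumed) shared label
def agents = nodesByLabel label: 'task-468', offline: false

// Reuse createStage(...) from the snippet above to build the branch map
def parallelStagesMap = agents.collectEntries { name ->
    [(name): createStage(name)]
}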
More information is available at: Jenkins examples

Related

How to run all stages in parallel in jenkinsfile

I want to execute all the stages in parallel, with the loop based on user input.
This gives an error because script is not allowed directly under stages.
How should I achieve this?
pipeline {
    agent {
        node {
            label 'ec2'
        }
    }
    stages {
        script {
            int[] array = params.elements;
            for (int i in array) {
                parallel {
                    stage('Preparation') {
                        echo 'Preparation'
                        println(i);
                    }
                    stage('Build') {
                        echo 'Build'
                        println(i);
                    }
                }
            }
        }
    }
}
If you are using declarative pipelines you have two options. The first is to use static parallel stages, which are an integral part of the declarative syntax but do not allow dynamic or runtime modifications.
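For reference, the static variant looks roughly like this (a sketch; the stage names, label, and echo commands are placeholders):

pipeline {
    agent none
    stages {
        stage('Static parallel stage') {
            parallel {
                stage('Branch A') {
                    agent { label 'ec2' }
                    steps {
                        echo 'Branch A'
                    }
                }
                stage('Branch B') {
                    agent { label 'ec2' }
                    steps {
                        echo 'Branch B'
                    }
                }
            }
        }
    }
}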
The second option (which is probably what you attempted) is to use the scripted parallel function:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
When using it inside a declarative pipeline it should be wrapped in a script block, as you did, but the basic declarative structure must still be kept: pipeline -> stages -> stage -> steps -> script. In addition, the scripted parallel function receives a specifically formatted map, like the example above.
In your case it could look something like:
pipeline {
    agent {
        node {
            label 'ec2'
        }
    }
    stages {
        stage('Parallel Execution') {
            steps {
                script {
                    parallel params.elements.collectEntries {
                        // the key of each entry is the parallel execution branch name
                        // and the value of each entry is the code to execute
                        ["Iteration for ${it}" : {
                            stage('Preparation') {
                                echo 'Preparation'
                                println(it);
                            }
                            stage('Build') {
                                echo 'Build'
                                println(it);
                            }
                        }]
                    }
                }
            }
        }
    }
}
Or if you want to use a for loop:
pipeline {
    agent {
        node {
            label 'ec2'
        }
    }
    stages {
        stage('Parallel Execution') {
            steps {
                script {
                    def executions = [:]
                    for (int i in params.elements) {
                        // copy the loop variable so each closure captures its own value
                        def element = i
                        executions["Iteration for ${element}"] = {
                            stage('Preparation') {
                                echo 'Preparation'
                                println(element);
                            }
                            stage('Build') {
                                echo 'Build'
                                println(element);
                            }
                        }
                    }
                    parallel executions
                }
            }
        }
    }
}
Other useful examples for the parallel function can be found here

Jenkins Pipeline Make a Stage as a Variable?

stages {
    stage('Setup') {
    }
    stage('Parallel Stage') {
        parallel {
            stage('Executor 1') {
            }
            stage('Executor 2') {
            }
            stage('Executor 3') {
            }
            stage('Executor 4') {
            }
        }
    }
}
Above is a skeleton of my Jenkins pipeline that has a setup stage and then a parallel stage that does the same thing four times for faster execution time.
Is there a way to define a stage as a variable to reduce the 4x code repetition and to reduce the number of edits I would have to make?
Yes, the best way is to define a function that generates a stage and can be called in parallel.
This presumes that you are executing the stages on one agent in parallel.
In the sample pipeline below, generateStage is a function that replaces the nested stages.
def jobs = ["Executor1", "Executor2", "Executor3"]

def parallelStagesMap = jobs.collectEntries {
    ["${it}" : generateStage(it)]
}

def generateStage(job) {
    return {
        stage("${job}") {
            echo "Running stage ${job}."
        }
    }
}

pipeline {
    agent any
    stages {
        stage('setup') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
For more details please see my answer LINK
The only drawback is that you cannot call this parallel arrangement directly under stages, which is why parallelStagesMap is invoked inside a script block.
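For comparison, a fully Scripted Pipeline has no such restriction and can call parallel directly; a minimal sketch reusing the same generateStage helper (the agent allocation and stage contents are placeholders):

def jobs = ["Executor1", "Executor2", "Executor3"]

def generateStage(job) {
    return {
        stage("${job}") {
            echo "Running stage ${job}."
        }
    }
}

node {
    stage('setup') {
        echo 'This stage will be executed first.'
    }
    // parallel can be called directly here, no script block needed
    parallel jobs.collectEntries { [(it): generateStage(it)] }
}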

How to build a combination of parallel and sequential stages in Jenkins pipeline with dynamic data

I am trying to build a Jenkins pipeline which has a combination of parallel and sequential stages. I am able to accomplish this with static data, but fail to get it working with dynamic data, i.e. when using a parameterized build and reading data from the build parameters.
The snippet below works fine:
pipeline {
    agent any
    stages {
        stage('Parallel Tests') {
            parallel {
                stage('Ordered Tests Set') {
                    stages {
                        stage('Building seq test 1') {
                            steps {
                                echo "build seq test 1"
                            }
                        }
                        stage('Building seq test 2') {
                            steps {
                                echo "build seq test 2"
                            }
                        }
                    }
                }
                stage('Building Parallel test 1') {
                    steps {
                        echo "Building Parallel test 1"
                    }
                }
                stage('Building Parallel test 2') {
                    steps {
                        echo "Building Parallel test 2"
                    }
                }
            }
        }
    }
}
This gives me the expected execution result.
Now I want to read the values from my build parameters and just loop over the stages. This is what I have tried but could not get to work. This snippet is taken from another answer I found a few months back on SO but am unable to trace now, otherwise I would have added the link:
def parallelStagesMap = params['Parallel Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedStagesMap = params['Ordered Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedMap() {
    def orderedStagesMapList = [:]
    orderedStagesMapList['Ordered Tests Set'] = {
        stage('Ordered Tests Set') {
            stages {
                orderedStagesMap
            }
        }
    }
    return orderedStagesMapList;
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

pipeline {
    agent none
    stages {
        stage("Parallel Stage to trigger Tests") {
            steps {
                script {
                    parallel orderedMap() + parallelStagesMap
                }
            }
        }
    }
}
Declarative and Scripted Pipeline syntax do not mix in Pipeline, see Pipeline Syntax. Since you are dynamically creating a Pipeline definition based on the parameters, you should most likely go completely to Scripted Syntax, unless your use-case matches matrix.
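For completeness, if the combinations were known up front (rather than read from build parameters), the declarative matrix directive would cover the parallel part; a minimal sketch with placeholder axis values:

pipeline {
    agent none
    stages {
        stage('Parallel Tests') {
            matrix {
                agent any
                axes {
                    axis {
                        name 'TEST_SET'
                        values 'seq-tests', 'parallel-test-1', 'parallel-test-2'
                    }
                }
                stages {
                    stage('Run test set') {
                        steps {
                            echo "Building ${TEST_SET}"
                        }
                    }
                }
            }
        }
    }
}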
Removing the Declarative syntax from your Pipeline definition would give something like the code below. Note that I did not test it on a live Jenkins instance.
def parallelStagesMap = params['Parallel Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedStagesMap = params['Ordered Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedMap() {
    def orderedStagesMapList = [:]
    orderedStagesMapList['Ordered Tests Set'] = {
        stage('Ordered Tests Set') {
            orderedStagesMap.each { key, value ->
                value.call()
            }
        }
    }
    return orderedStagesMapList;
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

stage("Parallel Stage to trigger Tests") {
    parallel orderedMap() + parallelStagesMap
}

Run parallel inside steps of a stage in declarative jenkins

So, I want to run my parallel stages inside a stage, but I also want some shared code for each parallel stage, which I have written in the steps of the parallel parent stage.
The problem I face is that the parallel stages are not being run:
stages {
    stage('parent stage 1') {
        something here
    }
    stage('parent stage 2') {
        steps {
            // common code for parallel stages
            parallel {
                stage('1') {
                    // some shell command
                }
                stage('2') {
                    // some shell command
                }
            }
        }
    }
}
For executing shared code you can define variables and functions outside of the declarative pipeline:
// no 'def' here: the variable goes into the script binding,
// so it is also visible inside checkFoo()
foo = true

def checkFoo() {
    return foo
}

pipeline {
    agent any
    stages {
        stage('parallel stage') {
            parallel {
                stage('stage 1') {
                    steps {
                        script {
                            def baz = checkFoo()
                            sh "echo ${baz}"
                        }
                    }
                }
                stage('stage 2') {
                    steps {
                        script {
                            def baz = checkFoo()
                            sh "echo ${baz}"
                        }
                    }
                }
            }
        }
    }
}
You can also write a shared library, which you can use in all or certain jobs.
I’ve deleted my first answer, since it was pure BS.
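A minimal sketch of that shared-library route, assuming a Global Pipeline Library is configured on the controller under the (made-up) name 'my-shared-lib', with a global variable file vars/checkFoo.groovy:

// vars/checkFoo.groovy in the shared library repository
def call() {
    return true
}

// Jenkinsfile loading the library and calling the shared step
@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                script {
                    def baz = checkFoo()
                    sh "echo ${baz}"
                }
            }
        }
    }
}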

Use a lightweight executor for a declarative pipeline stage (agent none)

I'm using Jenkins Pipeline with the declarative syntax, currently with the following stages:
1. Prepare
2. Build (two parallel sets of steps)
3. Test (also two parallel sets of steps)
4. Ask if/where to deploy
5. Deploy
For steps 1, 2, 3, and 5 I need an agent (an executor) because they do actual work on the workspace. For step 4, I don't need one, and I would like to not block my available executors while waiting for user input. This seems to be referred to as either a "flyweight" or "lightweight" executor for the classic, scripted syntax, but I cannot find any information on how to achieve this with the declarative syntax.
So far I've tried:
Setting an agent directly in the pipeline options, and then setting agent none on the stage. This has no effect, and the pipeline runs as normal, blocking the executor while waiting for input. It is also mentioned in the documentation that it will have no effect, but I thought I'd give it a shot anyway.
Setting agent none in the pipeline options, and then setting an agent for each stage except #4. Unfortunately, but expectedly, this allocates a new workspace for every stage, which in turn requires me to stash and unstash. This is both messy and gives me further problems in the parallel stages (2 and 3) because I cannot have code outside the parallel construct. I assume the parallel steps run in the same workspace, so stashing/unstashing in both would have unfortunate results.
Here is an outline of my Jenkinsfile:
pipeline {
    agent {
        label 'build-slave'
    }
    stages {
        stage("Prepare build") {
            steps {
                // ...
            }
        }
        stage("Build") {
            steps {
                parallel(
                    frontend: {
                        // ...
                    },
                    backend: {
                        // ...
                    }
                )
            }
        }
        stage("Test") {
            steps {
                parallel(
                    jslint: {
                        // ...
                    },
                    phpcs: {
                        // ...
                    },
                )
            }
            post {
                // ...
            }
        }
        stage("Select deploy target") {
            steps {
                script {
                    // ... code that determines choiceParameterDefinition based on branch name ...
                    try {
                        timeout(time: 5, unit: 'MINUTES') {
                            deployEnvironment = input message: 'Deploy target', parameters: [choiceParameterDefinition]
                        }
                    } catch(ex) {
                        deployEnvironment = null
                    }
                }
            }
        }
        stage("Deploy") {
            when {
                expression {
                    return binding.variables.get("deployEnvironment")
                }
            }
            steps {
                // ...
            }
        }
    }
    post {
        // ...
    }
}
Am I missing something here, or is it just not possible in the current version?
Setting agent none at the top level, then agent { label 'foo' } on every stage, with agent none again on the input stage seems to work as expected for me.
i.e. Every stage that does some work runs on the same agent, while the input stage does not consume an executor on any agent.
pipeline {
    agent none
    stages {
        stage("Prepare build") {
            agent { label 'some-agent' }
            steps {
                echo "prepare: ${pwd()}"
            }
        }
        stage("Build") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    frontend: {
                        echo "frontend: ${pwd()}"
                    },
                    backend: {
                        echo "backend: ${pwd()}"
                    }
                )
            }
        }
        stage("Test") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    jslint: {
                        echo "jslint: ${pwd()}"
                    },
                    phpcs: {
                        echo "phpcs: ${pwd()}"
                    },
                )
            }
        }
        stage("Select deploy target") {
            agent none
            steps {
                input message: 'Deploy?'
            }
        }
        stage("Deploy") {
            agent { label 'some-agent' }
            steps {
                echo "deploy: ${pwd()}"
            }
        }
    }
}
However, there is no guarantee that using the same agent label within a Pipeline will always end up using the same workspace, e.g. if another build of the same job starts while the first build is waiting on the input.
You would have to use stash after the build steps. As you note, this cannot be done normally with parallel at the moment, so you'd have to additionally use a script block, in order to write a snippet of Scripted Pipeline for the stashing/unstashing after/before the parallel steps.
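A sketch of that stash/unstash pattern, slotted into the Build and Deploy stages of the pipeline above; the stash name and the dist/ path are assumptions about what the build actually produces:

stage("Build") {
    agent { label 'some-agent' }
    steps {
        parallel(
            frontend: {
                echo "frontend: ${pwd()}"
            },
            backend: {
                echo "backend: ${pwd()}"
            }
        )
        script {
            // keep the build output for later stages, whatever workspace they get
            stash name: 'build-output', includes: 'dist/**'
        }
    }
}
stage("Deploy") {
    agent { label 'some-agent' }
    steps {
        script {
            unstash 'build-output'
        }
        echo "deploy: ${pwd()}"
    }
}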
There is a workaround to use the same build agent in the other stages: you can set a variable with the node name and use it in the others, e.g.:
pipeline {
    agent none
    stages {
        stage('First Stage Gets Agent Dynamically') {
            agent {
                node {
                    label "some-agent"
                }
            }
            steps {
                echo "first stage running on ${NODE_NAME}"
                script {
                    BUILD_AGENT = NODE_NAME
                }
            }
        }
        stage('Second Stage Setting Node by Name') {
            agent {
                node {
                    label "${BUILD_AGENT}"
                }
            }
            steps {
                echo "Second stage using ${NODE_NAME}"
            }
        }
    }
}
As of today (2021), you can use nested sequential stages (https://www.jenkins.io/doc/book/pipeline/syntax/#sequential-stages) to group all the stages that must run in the same workspace before the input step, and all the stages that must run in the same workspace after it. Of course, you need to stash or store artifacts in some external repository before the input step, because the second workspace may not be the same as the first one:
pipeline {
    agent none
    stages {
        stage('Deployment to Preproduction') {
            agent any
            stages {
                stage('Stage PRE.1') {
                    steps {
                        echo "Stage PRE.1"
                        sleep(10)
                    }
                }
                stage('Stage PRE.2') {
                    steps {
                        echo "Stage PRE.2"
                        sleep(10)
                    }
                }
            }
        }
        stage('Stage Ask Deploy') {
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deployment to Production') {
            agent any
            stages {
                stage('Stage PRO.1') {
                    steps {
                        echo "Stage PRO.1"
                        sleep(10)
                    }
                }
                stage('Stage PRO.2') {
                    steps {
                        echo "Stage PRO.2"
                        sleep(10)
                    }
                }
            }
        }
    }
}
