Can I define multiple agent labels in a declarative Jenkins Pipeline?

I'm using declarative Jenkins pipelines to run some of my build pipelines and was wondering if it is possible to define multiple agent labels.
I have a number of build agents hooked up to my Jenkins and would like this specific pipeline to be able to run on agents with several different labels (but not on ALL agents).
To be more concrete, let's say I have 2 agents with the label 'small', 4 with 'medium' and 6 with 'large'. Now I have a pipeline that needs very few resources, and I want it to be executed only on a 'small' or 'medium' agent, but not on a large one, as that may cause larger builds to wait in the queue for an unnecessarily long time.
All the examples I've seen so far use only a single label.
I tried something like this:
agent { label 'small, medium' }
But it failed.
I'm using version 2.5 of the Jenkins Pipeline Plugin.

You can check the 'Pipeline Syntax' help within your Jenkins installation and look at the sample step "node" reference.
You can use exprA||exprB:
node('small||medium') {
    // some block
}
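In a declarative pipeline the same expression can go straight into the agent block. A minimal sketch for the small-or-medium case from the question:
pipeline {
    agent { label 'small || medium' }
    stages {
        stage('Build') {
            steps {
                echo 'Runs on any agent labelled small or medium'
            }
        }
    }
}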

EDIT: I misunderstood the question. This answer only applies if you know which specific agent you want to run each stage on.
If you need multiple agents you can declare agent none and then declare the agent at each stage.
https://jenkins.io/doc/book/pipeline/jenkinsfile/#using-multiple-agents
From the docs:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent any
            steps {
                checkout scm
                sh 'make'
                stash includes: '**/target/*.jar', name: 'app'
            }
        }
        stage('Test on Linux') {
            agent {
                label 'linux'
            }
            steps {
                unstash 'app'
                sh 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
        stage('Test on Windows') {
            agent {
                label 'windows'
            }
            steps {
                unstash 'app'
                bat 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
    }
}

This syntax appears to work for me:
agent { label 'linux && java' }
Note that && requires a node that carries both labels; for the small-or-medium case from the question you would use || instead.

As described in the Jenkins pipeline documentation and by Vadim Kotov, one can use operators in the label definition.
So in your case, if you want to run your jobs on nodes with specific labels, the declarative way goes like this:
agent { label('small || medium') }
And here are some examples from the Jenkins page using different operators:
// with AND operator
agent { label('windows && jdk9') }
// a more complex one
agent { label('postgres && !vm && (linux || freebsd)') }
Notes
When constructing those definitions, one just needs to consider the following rules/restrictions:
All operators are left-associative
Labels or agent names can be surrounded with quotation marks if they contain characters that would conflict with the operator syntax
Expressions can be written without whitespace
Jenkins will ignore whitespace when evaluating expressions
Matching labels or agent names with wildcards or regular expressions is not supported
An empty expression will always evaluate to true, matching all agents
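For instance, the quoting rule means a label or agent name containing spaces or parentheses can still be used inside an expression. A small sketch (the agent names here are made up for illustration):
agent { label('"osx (10.11)" || "Windows Server"') }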

Create another label called 'small-or-medium' that covers all 6 agents (the 2 small and the 4 medium ones). Then in the Jenkinsfile:
agent { label 'small-or-medium' }

Related

How to run Jenkins job on multiple agents in parallel using declarative pipelines

I have a supposedly simple task: I want to run a job on multiple agents in parallel.
Even though I'm a bit of a noob with Jenkins, I've googled a bit and came to the conclusion that the preferred solution is the matrix directive.
I've read the official matrix docs and this blog and still can't solve it completely.
But I'm close, so I assume I just need a bit of help.
The agents I need the job to run on have the label 'vms'.
The pipeline below will run the job on some of the required agents that have the 'vms' label - as many as there are values for the DUMMY_AXIS axis.
For example, if the 'vms' label covers 3 agents, the pipeline below will run the stages on 2 out of 3.
How can I fix this so that the stages run once on each agent with the given label, regardless of how many agents there are?
pipeline {
    agent none
    stages {
        stage('Update TestHostAgent') {
            matrix {
                agent {
                    label 'vms'
                }
                axes {
                    axis {
                        name 'DUMMY_AXIS'
                        values 'dummy_val_1', 'dummy_val_2'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build stage"
                        }
                    }
                    stage('Test') {
                        steps {
                            script {
                                echo "Test Step"
                            }
                        }
                    }
                }
            }
        }
    }
}
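If the goal is simply "run once on every agent carrying the label", one workaround that sidesteps matrix entirely is to enumerate the matching agents at runtime and generate one parallel branch per agent from a scripted pipeline. A minimal sketch, assuming the nodesByLabel step (Pipeline Utility Steps plugin) is available on your Jenkins:
// Run the same stages once on every agent labelled 'vms'.
def nodes = nodesByLabel label: 'vms'
def branches = [:]

nodes.each { name ->
    branches[name] = {
        node(name) {
            stage("Build on ${name}") {
                echo "Build stage on ${name}"
            }
            stage("Test on ${name}") {
                echo "Test step on ${name}"
            }
        }
    }
}

parallel branches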

Multiconfiguration / matrix build pipeline in Jenkins

What is modern best practice for multi-configuration builds (with Jenkins)?
I want to support multiple branches and multiple configurations.
For example, for each version V1, V2 of the software I want builds targeting platforms P1 and P2.
We have managed to set up multi-branch declarative pipelines. Each build has its own Docker container, so it's easy to support multiple platforms.
pipeline {
    agent none
    stages {
        stage('Build, test and deploy for P1') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P1.Dockerfile'
                }
            }
            steps {
                sh 'buildit...'
            }
        }
        stage('Build, test and deploy for P2') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P2.Dockerfile'
                }
            }
            steps {
                sh 'buildit...'
            }
        }
    }
}
This gives one job covering multiple platforms but there is no separate red/blue status for each platform.
There is a good argument that this does not matter, as you should not release unless the build works for all platforms.
However, I would like a separate status indicator for each configuration. This suggests I should use a multi-configuration build which triggers a parameterised build for each configuration as below (and the linked question):
pipeline {
    parameters {
        choice(name: 'Platform', choices: ['P1', 'P2'], description: 'Target OS platform')
    }
    agent {
        filename someMagicToGetDockerfilePathFromPlatform()
    }
    stages {
        stage('Build, test and deploy for P1') {
            steps {
                sh 'buildit...'
            }
        }
    }
}
There are several problems with this:
A declarative pipeline has more constraints on how it is scripted
Multi-configuration builds cannot trigger declarative pipelines (even with the parameterized triggers plugin I get "project is not buildable").
This also raises the question of what use parameters are in declarative pipelines.
Is there a strategy that gives the best of both worlds i.e:
pipeline as code
separate status indicators
limited repetition?
This is a partial answer. I think others with better experience will be able to improve on it.
This is currently untested. I may be barking up the wrong tree.
Please comment or add a better answer.
Do not use pipeline parameters except where you need user input
Use a hybrid of a scripted and declarative pipeline
(see also https://stackoverflow.com/a/46675227/1569204)
Have a function which declares a pipeline based on parameters:
(see also https://jenkins.io/doc/book/pipeline/shared-libraries/)
Use nodes to create visible indicators in the pipeline (at least in Blue Ocean)
So something like the following:
def build(String platform) {
    switch (platform) {
        case 'P1':
            dockerFile = 'foo'
            indicator = 'build for foo'
            break
        case 'P2':
            dockerFile = 'bar'
            indicator = 'build for bar'
            break
    }
    pipeline {
        agent {
            dockerfile {
                filename "$dockerFile"
            }
            node {
                label "$indicator"
            }
        }
        stages {
            stage('Build') {
                steps {
                    echo "build it"
                }
            }
        }
    }
}
The relevant code could be moved to a shared library (even if you don't actually need to share it).
I think the cleanest approach is to have this all in a pipeline similar to the first one you presented; the only modification I would see is making those stages parallel, so you would actually build and test for both platforms.
To reuse the previous stage's workspace you can add reuseNode true, which runs the Docker container on the node (and workspace) allocated by the top-level agent instead of grabbing a new one.
Something similar to this flow would give you parallel builds for the platforms:
pipeline {
    agent { label 'docker' }
    stages {
        stage('Common pre') { ... }
        stage('Build all platforms') {
            parallel {
                stage('Build, test and deploy for P1') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P1.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh 'buildit...'
                    }
                }
                stage('Build, test and deploy for P2') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P2.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh 'buildit...'
                    }
                }
            }
        }
        stage('Common post parallel') { ... }
    }
}

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws an error and displays an incorrect error message, because the parser gets confused by the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays 'description' as the stage's description.
Now, if you try to load a Groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaC files? That would mean the UI one has a really nasty bug in it...
Ideas?
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types - a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution - it is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal and not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline - the whole pipeline script is more or less a Groovy script, so you can put almost any code anywhere. A scripted pipeline starts with a node block, and you can put any Groovy code inside that block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates 6 stages.
As you can see, there are practically no limits on what you can put inside a node block. A declarative pipeline does not allow you to do that - its syntax is strict and you have to follow it exactly.
Differences
As a final note I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug.

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it is passed as JOB_NAME, which is the BRANCH name; sadly none of the environment variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "build with parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam
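For example, a minimal scripted snippet that reads the parameter declared above:
node {
    stage('Greet') {
        // params.myParam comes from the properties/parameters block above
        echo "Hello, ${params.myParam}"
    }
}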
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines; note the call name must match the file name under vars):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but there can be any number of parameters of any type.
I use this approach: one of my Groovy scripts has 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library; it works perfectly. The motivation is of course one of the most basic rules in programming (not an exact quote, but it goes something like): if you have the same code in 2 different places, something is not right.
There is an option 'This project is parameterized' in your pipeline job configuration. Enter a variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
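A minimal sketch of reading such a UI-defined parameter in a pipeline (the name my_param is made up for illustration):
pipeline {
    agent any
    stages {
        stage('Show parameter') {
            steps {
                // job parameters are exposed as environment variables
                echo "my_param is: ${env.my_param}"
            }
        }
    }
}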

Visualize Jenkins pipeline or multibranch pipeline jobs

I have one Pipeline job for each component in my Jenkins 2.0. All of them consist of many stages (build, UT, IT, etc.), so each works as a pipeline for one component.
The components depend on each other in a specified order, so I used "Build after other projects are built" (I also tried the JobFanIn plugin) to trigger these "mini-pipelines" after each other. This works like a pipeline of "mini-pipelines".
I'd like to visualize the relationship between the jobs. For this purpose I've found 2 plugins:
Delivery Pipeline Plugin
Build Pipeline Plugin
Both introduce a new view type, but neither supports the "Pipeline" or "Multibranch Pipeline" job types (introduced in Jenkins 2.0); these jobs are not visible in the related dropdown list on the view config page.
How can I visualize the relationship between these job types? Is there any other plugin which supports them?
Thinking about this, I don't think a visualisation of multibranch pipelines makes sense in the same way it would for a single-branch build.
The reason is that each branch of a multibranch pipeline can have a different build configuration, e.g. with master triggering a promotion job but a branch doing something else or nothing.
So the best one could do, I think, is trace an individual build number and its links; it can't be done at the job level.
The Jenkins Blue Ocean plugin gives a rich view that visualizes all stage types (parallel, sequential) out of the box.
Let's say you have a pipeline like this:
pipeline {
    agent any
    stages {
        stage('build') {
            stages {
                stage('compile') {
                    steps {
                        echo "steps for unit test"
                    }
                }
                stage('security scan') {
                    parallel {
                        stage('sonarqube') {
                            steps {
                                echo "steps for parallel sonarqube"
                            }
                        }
                        stage('blackduck') {
                            steps {
                                echo "steps for parallel blackduck"
                            }
                        }
                    }
                }
                stage('package') {
                    steps {
                        echo "steps for package"
                    }
                }
            }
        }
        stage('deployment') {
            stages {
                stage('dev') {
                    steps {
                        echo "Development"
                    }
                }
                stage('pp') {
                    when { branch 'master' }
                    steps {
                        echo "PreProduction"
                    }
                }
                stage('prod') {
                    when {
                        branch 'master'
                        beforeInput true
                    }
                    input {
                        message "Deploy to production?"
                        id "simple-input"
                    }
                    steps {
                        echo "Production"
                    }
                }
            }
        }
    }
}
Blue Ocean will then render each stage (including the parallel ones) as a node in its stage graph.
Is this what you are looking for?
Note: the view can be customized, but it is per build; you can't create a dashboard from it that combines everything in one place.
