What I'm trying to achieve:
I'm trying to execute a pipeline script where SCM (AccuRev) is checked out on 'any' agent and the remaining stages then run on that same agent, using its local workspace. The build stage in particular expects the checked-out code to already be present in the workspace that is mapped into the container.
The problem:
When I have more than one agent added to the Jenkins configuration, the SCM step checks out the code on one agent, but the build step then starts its container on a different agent, which fails because the code was checked out on the first agent.
What works:
Jenkins configured with a single agent/node
pipeline {
agent none
stages {
stage('Checkout') {
agent any
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
What I have tried, but doesn't work:
Jenkins configured with 2 agent(s)/node(s)
pipeline {
agent {
docker {
image 'ubuntu'
}
}
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
The above doesn't work because it expects AccuRev to be installed inside the container. I could go that route, but it doesn't really scale and causes issues with containers based on an older OS. There are also permission issues inside the container.
I also tried adding 'reuseNode true' to the docker agent, as in the below:
pipeline {
agent none
stages {
stage('Checkout') {
agent any
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
I'm somewhat aware of, and have read about, the automatic 'checkout scm' behaviour shown below, but it seems odd because there is no place to define the target stream/branch to check out, which is why I declare a dedicated stage to handle the SCM checkout. It's possible the automatic checkout would handle this without needing to specify the agent, but I don't see how to set that up.
pipeline {
agent any
stages {
stage ('Build') {
steps {
sh 'cat Jenkinsfile'
}
}
}
}
Edit: adding a solution that seems to work, but it needs more testing before I can confirm it.
The following seems to do what I want: the Checkout stage runs on 'any' agent and the same agent is then reused to execute the build stage in a container.
pipeline {
agent any
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
The below appears to give me the functionality I needed. The pipeline starts on "any" agent, letting the host handle the Checkout stage, and "reuseNode" tells the pipeline to start the container on that same node, where the workspace is located.
pipeline {
agent any
stages {
stage('Checkout') {
steps {
checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
}
}
stage('Compile') {
agent {
docker {
image 'ubuntu'
reuseNode true
}
}
steps {
sh '''#!/bin/bash
make -j16
'''
}
}
}
}
Related
My jenkinsfile looks like this:
pipeline {
    agent {label "master"}
    parameters {
        string(name: "host", defaultValue: "ci_agent_001 || ci_agent_002")
    }
    stages {
        stage ("build") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --build"
                }
            }
        }
        stage ("deploy") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --deploy"
                }
            }
        }
        stage ("test") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --test"
                }
            }
        }
    }
}
Each Python script handles all the logic I need; however, all the stages must run on the same agent. If I allow more than one option in the host parameter, I can't guarantee that the agent I get for stage one will be the same agent I get for stage two (and without downtime, i.e. I can't allow another job to use that agent between my stages).
Can I specify an agent for the whole pipeline?
Yes you can. In fact, you can free your master from running pipelines at all:
Replace agent {label "master"} with agent {label params.host}
Remove all the agent { label "${params.host}" } lines inside the individual stages (you also don't need the script block, as you can run sh steps directly within the steps block); the result is sketched below.
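A minimal sketch of the resulting Jenkinsfile under those two changes (parameter and script names taken from the question):
pipeline {
    agent { label params.host }
    parameters {
        string(name: "host", defaultValue: "ci_agent_001 || ci_agent_002")
    }
    stages {
        stage ("build") {
            steps {
                sh "./script.py --build"
            }
        }
        stage ("deploy") {
            steps {
                sh "./script.py --deploy"
            }
        }
        stage ("test") {
            steps {
                sh "./script.py --test"
            }
        }
    }
}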
If later on you decide you don't want to assign a single node to all stages, you'll have to use scripted pipeline inside the declarative pipeline to group the stages that should run on the same node:
stage("stages that share the same node") {
agent { label params.host }
steps {
script {
stage("$NODE_NAME - build") {
sh "./script.py --build"
}
stage("$NODE_NAME - deploy") {
sh "./script.py --deploy"
}
stage("$NODE_NAME - test") {
sh "./script.py --test"
}
}
}
}
stage("look at me I might be on another node") {
agent { label params.host }
steps {
echo NODE_NAME
}
}
I am using a declarative Jenkinsfile for a multi-branch pipeline, as shown here. SCM is set to poll every 5 minutes.
pipeline {
agent none
stages {
stage('Build Jar') {
agent {
docker {
image 'maven:3.6.0-jdk-11'
args '-v $HOME/.m2:/root/.m2'
}
}
steps {
sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
}
}
stage('Build Image') {
steps {
script {
app = docker.build("myname/myimage")
}
}
}
//other stages here
}
}
Problem:
The Maven release plugin commits changes to the repo, which triggers another build, so builds get triggered indefinitely. I came across this SCM Skip plugin:
scmSkip(deleteBuild: true, skipPattern:'.*\\[skip ci\\].*')
But unfortunately it needs an agent to run!
I also tried using agent any, with no luck.
pipeline {
agent any
stages {
stage('SCM Check') {
steps {
scmSkip(deleteBuild: true, skipPattern:'.*\\[skip ci\\].*')
}
}
stage('Build Jar') {
steps {
sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
}
}
stage('Build Image') {
steps {
script {
app = docker.build("myname/myimage")
}
}
}
//other stages here
}
}
How do you skip builds on certain commit messages?
I had to go with the plugin below, which excludes a certain committer. It works great.
https://github.com/jenkinsci/ignore-committer-strategy-plugin
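For reference, if you would rather keep this logic in the Jenkinsfile instead of a plugin, Declarative Pipeline also has a built-in changelog condition for when; a rough sketch guarding just the release stage (stage name and skip pattern taken from the question above):
stage('Build Jar') {
    when {
        // skip the release when the build's changelog contains "[skip ci]"
        not { changelog '.*\\[skip ci\\].*' }
    }
    steps {
        sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
    }
}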
Our current Jenkins pipeline looks like this:
pipeline {
agent {
docker {
label 'linux'
image 'java:8'
args '-v /home/tester/.gradle:/.gradle'
}
}
environment {
GRADLE_USER_HOME = '/.gradle'
GRADLE_PROPERTIES = credentials('gradle.properties')
}
stages {
stage('Build') {
steps {
sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
sh './gradlew clean check'
}
}
}
post {
always {
junit 'build/test-results/**/*.xml'
}
}
}
We mount /.gradle because we want to reuse cached data between builds. The problem is, if the machine is a brand new build machine, the directory on the host does not yet exist.
Where do I put setup logic that runs beforehand, so that I can ensure this directory exists before the docker image is run?
You can run a Prepare stage before the other stages and switch to the docker agent after that:
pipeline {
    agent { label 'linux' } // slave where the docker agent needs to run
    environment {
        GRADLE_USER_HOME = '/.gradle'
        GRADLE_PROPERTIES = credentials('gradle.properties')
    }
    stages {
        stage('Prepare') {
            steps {
                // prepare the host, e.g. make sure the mounted Gradle cache dir exists
                sh 'mkdir -p /home/tester/.gradle'
            }
        }
        stage('Build') {
            agent {
                docker {
                    label 'linux' // should be the same as the slave label above
                    image 'java:8'
                    args '-v /home/tester/.gradle:/.gradle'
                }
            }
            steps {
                sh 'cp ${GRADLE_PROPERTIES} ${GRADLE_USER_HOME}/'
                sh './gradlew clean check'
            }
        }
    }
    post {
        always {
            junit 'build/test-results/**/*.xml'
        }
    }
}
Specifying a Docker Label
Pipeline provides a global option in the Manage Jenkins page, and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines.
How to restrict the Jenkins pipeline docker agent to a specific slave?
I have a pipeline with multiple stages, and I want to reuse a docker container across only some of the stages, rather than all of them:
pipeline {
agent none
stages {
stage('Install deps') {
agent {
docker { image 'node:10-alpine' }
}
steps {
sh 'npm install'
}
}
stage('Build, test, lint, etc') {
agent {
docker { image 'node:10-alpine' }
}
parallel {
stage('Build') {
agent {
docker { image 'node:10-alpine' }
}
// This fails because it runs in a new container, and the node_modules created during the first installation are gone at this point
// How do I reuse the same container created in the install dep step?
steps {
sh 'npm run build'
}
}
stage('Test') {
agent {
docker { image 'node:10-alpine' }
}
steps {
sh 'npm run test'
}
}
}
}
// Later on, there is a deployment stage which MUST deploy using a specific node,
// which is why "agent: none" is used in the first place
}
}
See reuseNode option for Jenkins Pipeline docker agent:
https://jenkins.io/doc/book/pipeline/syntax/#agent
pipeline {
agent any
stages {
stage('NPM install') {
agent {
docker {
/*
* Reuse the workspace on the agent defined at top-level of
* Pipeline, but run inside a container.
*/
reuseNode true
image 'node:12.16.1'
}
}
environment {
/*
* Change HOME, because default is usually root dir, and
* Jenkins user may not have write permissions in that dir.
*/
HOME = "${WORKSPACE}"
}
steps {
sh 'env | sort'
sh 'npm install'
}
}
}
}
You can use a scripted pipeline, where you can put multiple stage blocks inside a docker.image(...).inside block, e.g.
node {
checkout scm
docker.image('node:10-alpine').inside {
stage('Build') {
sh 'npm run build'
}
stage('Test') {
sh 'npm run test'
}
}
}
(code untested)
For a Declarative pipeline, one solution is to use a Dockerfile in the root of the project. For example:
Dockerfile
FROM node:10-alpine
# Further instructions
Jenkinsfile
pipeline {
    agent {
        dockerfile true
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I'm using Jenkins Pipeline with the declarative syntax, currently with the following stages:
Prepare
Build (two parallel sets of steps)
Test (also two parallel sets of steps)
Ask if/where to deploy
Deploy
For steps 1, 2, 3, and 5 I need an agent (an executor) because they do actual work in the workspace. For step 4 I don't need one, and I would like not to block my available executors while waiting for user input. This seems to be referred to as a "flyweight" or "lightweight" executor for the classic scripted syntax, but I cannot find any information on how to achieve this with the declarative syntax.
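For reference, in the classic scripted syntax the idea looks roughly like the sketch below: the input step sits outside any node block, so while it waits only a flyweight executor on the master is occupied and no agent executor is blocked (the build/deploy commands are placeholders):
node('build-slave') {
    stage('Build') {
        // heavyweight work that needs a real executor and a workspace
        sh './build.sh'
    }
}
// Outside any node block: waiting here does not hold an agent executor.
input message: 'Deploy?'
node('build-slave') {
    stage('Deploy') {
        sh './deploy.sh'
    }
}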
So far I've tried:
Setting an agent directly in the pipeline options, and then setting agent none on the stage. This has no effect, and the pipeline runs as normal, blocking the executor while waiting for input. The documentation also mentions that this will have no effect, but I thought I'd give it a shot anyway.
Setting agent none in the pipeline options, and then setting an agent for each stage except #4. Unfortunately, but expectedly, this allocates a new workspace for every stage, which in turn requires me to stash and unstash. This is both messy and gives me further problems in the parallel stages (2 and 3) because I cannot have code outside the parallel construct. I assume the parallel steps run in the same workspace, so stashing/unstashing in both would have unfortunate results.
Here is an outline of my Jenkinsfile:
pipeline {
agent {
label 'build-slave'
}
stages {
stage("Prepare build") {
steps {
// ...
}
}
stage("Build") {
steps {
parallel(
frontend: {
// ...
},
backend: {
// ...
}
)
}
}
stage("Test") {
steps {
parallel(
jslint: {
// ...
},
phpcs: {
// ...
},
)
}
post {
// ...
}
}
stage("Select deploy target") {
steps {
script {
// ... code that determines choiceParameterDefinition based on branch name ...
try {
timeout(time: 5, unit: 'MINUTES') {
deployEnvironment = input message: 'Deploy target', parameters: [choiceParameterDefinition]
}
} catch(ex) {
deployEnvironment = null
}
}
}
}
stage("Deploy") {
when {
expression {
return binding.variables.get("deployEnvironment")
}
}
steps {
// ...
}
}
}
post {
// ...
}
}
Am I missing something here, or is it just not possible in the current version?
Setting agent none at the top level, then agent { label 'foo' } on every stage, with agent none again on the input stage seems to work as expected for me.
i.e. Every stage that does some work runs on the same agent, while the input stage does not consume an executor on any agent.
pipeline {
agent none
stages {
stage("Prepare build") {
agent { label 'some-agent' }
steps {
echo "prepare: ${pwd()}"
}
}
stage("Build") {
agent { label 'some-agent' }
steps {
parallel(
frontend: {
echo "frontend: ${pwd()}"
},
backend: {
echo "backend: ${pwd()}"
}
)
}
}
stage("Test") {
agent { label 'some-agent' }
steps {
parallel(
jslint: {
echo "jslint: ${pwd()}"
},
phpcs: {
echo "phpcs: ${pwd()}"
},
)
}
}
stage("Select deploy target") {
agent none
steps {
input message: 'Deploy?'
}
}
stage("Deploy") {
agent { label 'some-agent' }
steps {
echo "deploy: ${pwd()}"
}
}
}
}
However, there is no guarantee that using the same agent label within a Pipeline will always end up using the same workspace, e.g. if another build of the same job starts while the first build is waiting on the input.
You would have to use stash after the build steps. As you note, this cannot currently be done directly around parallel, so you would additionally have to use a script block and write a snippet of Scripted Pipeline that handles the stashing/unstashing after/before the parallel steps, for example:
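A rough sketch of that idea for the Build and Deploy stages of the pipeline above (the stash name, includes pattern, and shell commands are only illustrative):
stage("Build") {
    agent { label 'some-agent' }
    steps {
        script {
            parallel(
                frontend: { sh 'make frontend' },
                backend: { sh 'make backend' }
            )
            // stash the build output so a later stage can restore it,
            // even if that stage ends up in a different workspace
            stash name: 'build-output', includes: 'dist/**'
        }
    }
}
stage("Deploy") {
    agent { label 'some-agent' }
    steps {
        script {
            unstash 'build-output'
            sh './deploy.sh'
        }
    }
}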
There is a workaround to use the same build slave in the other stages: set a variable with the node name in the first stage and reuse it in the subsequent ones, i.e.:
pipeline {
agent none
stages {
stage('First Stage Gets Agent Dynamically') {
agent {
node {
label "some-agent"
}
}
steps {
echo "first stage running on ${NODE_NAME}"
script {
BUILD_AGENT = NODE_NAME
}
}
}
stage('Second Stage Setting Node by Name') {
agent {
node {
label "${BUILD_AGENT}"
}
}
steps {
echo "Second stage using ${NODE_NAME}"
}
}
}
}
As of today (2021), you can use nested stages (https://www.jenkins.io/doc/book/pipeline/syntax/#sequential-stages) to group all the stages that must run in the same workspace before the input step, and all the stages that must run in the same workspace after the input step. Of course, you need to stash or store artifacts in some external repository before the input step, because the second workspace may not be the same as the first one:
pipeline {
agent none
stages {
stage('Deployment to Preproduction') {
agent any
stages {
stage('Stage PRE.1') {
steps {
echo "StagePRE.1"
sleep(10)
}
}
stage('Stage PRE.2') {
steps {
echo "Stage PRE.2"
sleep(10)
}
}
}
}
stage('Stage Ask Deploy') {
steps {
input message: 'Deploy to production?'
}
}
stage('Deployment to Production') {
agent any
stages {
stage('Stage PRO.1') {
steps {
echo "Stage PRO.1"
sleep(10)
}
}
stage('Stage PRO.2') {
steps {
echo "Stage PRO.2"
sleep(10)
}
}
}
}
}
}
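A sketch of where the stash/unstash mentioned above could go in this layout (the stash name and includes pattern are only illustrative):
stage('Stage PRE.2') {
    steps {
        echo "Stage PRE.2"
        // save whatever the production stages will need before the first workspace is released
        stash name: 'pre-output', includes: 'build/**'
    }
}
...
stage('Stage PRO.1') {
    steps {
        // restore the artifacts in the (possibly different) production workspace
        unstash 'pre-output'
        echo "Stage PRO.1"
    }
}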