Multiple Jenkins Nodes in Pipeline

Currently we have a Jenkins pipeline with four stages: Setup, Build, Deploy, Teardown. Deploy and Teardown prompt for manual user input, and we don't want that manual input to tie up an executor, so we want to use agent none. However, when resuming, there is no guarantee we get the same Jenkins workspace. The stash/unstash documentation says it uses a lot of resources and should not be used for large files. Is there a way to get the exact slave, and when resuming, run back on that same slave?
I have something like this now. I also tried agent gcp at the top level, and putting agent none in the manual-input stage.
pipeline {
    agent none
    environment {
        userInput = false
    }
    stages {
        stage('Setup') {
            agent { node { label 'gcp' } }
            steps {
                deleteDir()
                dir('pipelines') {
                    checkout scm
                }
                dir('deployment_pipelines') {
                    git branch: __deployment_scripts_code_branch, credentialsId: 'jenkins', url: __deployment_scripts_code_repo
                }
                dir('gcp_template_core') {
                    git branch: __gcp_template_code_branch, credentialsId: 'jenkins', url: __gcp_template_code_repo
                }
                dir('control_repo') {
                    git branch: _control_repo_branch, credentialsId: 'jenkins', url: _control_repo
                }
                // Copy core templates to the project
                sh('bash deployment_pipelines/deployment/setup.sh gcp_template_core/gcp_foundation/ control_repo')
            }
        }
        stage('Build') {
            agent { node { label 'gcp' } }
            steps {
                sh('printenv') //TODO: Remove. Debug only
                sh('python deployment_pipelines/deployment/build.py control_repo --env ${_env_type_long}')
            }
        }
        stage('Deploy') {
            agent { node { label 'gcp' } }
            steps {
                sh('python deployment_pipelines/deployment/deploy.py control_repo --env ${_env_type_short}')
            }
        }
        stage('Release') {
            steps {
                agent none
                script {
                    sh('python deployment_pipelines/deployment/set_manual_approvers.py deployment_pipelines/config/production-release-approvers.yaml -o approver.txt')
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Release?',
                            submitter: approvers)
                    } catch(err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    agent { node { label 'gcp' } }
                    if(userInput) {
                        sh("echo 'Do Release'")
                    }
                }
            }
        }
        stage('Teardown') {
            agent { node { label 'gcp' } }
            steps {
                script {
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Teardown?',
                            submitter: approvers)
                    } catch(err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    if(userInput) {
                        sh("echo 'Do Teardown'")
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'DO TEARDOWN REGARDLESS'
        }
    }
}

agent none should be above the steps block in stage('Release'). You can refer to https://jenkins.io/doc/book/pipeline/syntax/#agent for the syntax and flow.
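For reference, a minimal sketch of that correction (hedged: only the placement of the directive is the point here; workspace-bound steps such as readFile and sh would still need a node, so only the input prompt is shown in the agent-less stage):

stage('Release') {
    agent none // stage-level directive, above the steps block
    steps {
        script {
            try {
                userInput = input(message: 'Do you want to proceed with Release?')
            } catch (err) { // input was aborted
                userInput = false
            }
        }
    }
}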

Related

Skip Stages in Jenkins shared library based on repository

I have a common Jenkins shared library for all the repositories as below.
vars/_publish.groovy
pipeline {
    environment {
        abc = credentials('abc')
        def = credentials('def')
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build'
            }
        }
        stage('Unit-test') {
            steps {
                sh 'mvn test'
            }
        }
    }
}
Jenkinsfile
@Library('my-shared-library@branch') _
_publish() {
}
I have 10 repositories, each with its own Jenkinsfile as shown above, which refers to the Jenkins shared library (vars/_publish.groovy). I have a condition here that I need to pass: for a few repositories I want to skip the unit test and just execute the build stage; for the rest I want both stages. Is there any way I can skip a particular stage based on the repository or repository name?
Yes, it's possible. You can use a when expression like this:
pipeline {
    agent any
    stages {
        stage('Test') {
            // Put your repository name here: when the list contains 'dev', this stage executes
            when { expression { return repositoryName().contains('dev') } }
            steps {
                script {
                }
            }
        }
    }
}
def repositoryName() {
    // Add your 10 repo names here
    def repositoryName = ['dev', 'test']
    return repositoryName
}
Here, in my case, the repo names are dev and test, so you can add yours accordingly.
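A hypothetical variant, assuming a multibranch setup where the job name embeds the repository name, reads the name from the build environment instead of a hard-coded list:

stage('Unit-test') {
    // Hypothetical: env.JOB_NAME typically looks like 'folder/repo/branch' in a
    // multibranch job, so matching on it keys the stage off the repository name
    when { expression { return env.JOB_NAME.contains('dev') } }
    steps {
        sh 'mvn test'
    }
}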
I would decorate my shared library and Jenkinsfile like this to achieve your scenario.
vars/_publish.groovy
def call(body={}) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        agent any;
        stages {
            stage('build') {
                steps {
                    echo "BUILD"
                }
            }
            stage('unitest') {
                when {
                    anyOf {
                        equals expected: true, actual: pipelineParams.isEmpty();
                        equals expected: false, actual: pipelineParams.skipUnitest
                    }
                }
                steps {
                    echo "UNITEST"
                }
            }
        }
    }
}
I am enabling my shared library to accept a parameter from the Jenkinsfile and, with the when{} DSL, deciding whether to skip the unitest stage.
Jenkinsfile
If your Jenkinsfile in the repo has the details below, it will skip the unitest stage:
@Library('jenkins-shared-library')_
_publish() {
    skipUnitest = true
}
Both scenarios below will run the unitest stage:
@Library('jenkins-shared-library')_
_publish() {
    skipUnitest = false
}
and
@Library('jenkins-shared-library')_
_publish() {
}

Jenkins Pipeline Template - Approaches

I came across a blog post for defining pipeline templates here. What is the difference between the two declarations below?
vars/myDeliveryPipeline.groovy
def call(Map pipelineParams) {
    pipeline {
        agent any
        stages {
            stage('checkout git') {
                steps {
                    git branch: pipelineParams.branch, credentialsId: 'GitCredentials', url: pipelineParams.scmUrl
                }
            }
            stage('build') {
                steps {
                    sh 'mvn clean package -DskipTests=true'
                }
            }
            stage('test') {
                steps {
                    parallel (
                        "unit tests": { sh 'mvn test' },
                        "integration tests": { sh 'mvn integration-test' }
                    )
                }
            }
            stage('deploy developmentServer') {
                steps {
                    deploy(pipelineParams.developmentServer, pipelineParams.serverPort)
                }
            }
            stage('deploy staging') {
                steps {
                    deploy(pipelineParams.stagingServer, pipelineParams.serverPort)
                }
            }
            stage('deploy production') {
                steps {
                    deploy(pipelineParams.productionServer, pipelineParams.serverPort)
                }
            }
        }
        post {
            failure {
                mail to: pipelineParams.email, subject: 'Pipeline failed', body: "${env.BUILD_URL}"
            }
        }
    }
}
2nd Approach
vars/myDeliveryPipeline.groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()
    pipeline {
        // our complete declarative pipeline can go in here
        ...
    }
}
The essential difference is how the pipeline parameters are passed to the method containing the pipeline at invocation time.
For the first example, you will be passing a Map directly via myDeliveryPipeline(params):
myDeliveryPipeline(branch: 'master',
    scmUrl: 'ssh://git@myScmServer.com/repos/myRepo.git',
    email: 'team@example.com',
    serverPort: '8080',
    developmentServer: 'dev-myproject.mycompany.com',
    stagingServer: 'staging-myproject.mycompany.com',
    productionServer: 'production-myproject.mycompany.com')
For the second example, you will be passing a Map via a closure that resembles a DSL via myDeliveryPipeline { params }:
myDeliveryPipeline {
    branch = 'master'
    scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
    email = 'team@example.com'
    serverPort = '8080'
    developmentServer = 'dev-myproject.mycompany.com'
    stagingServer = 'staging-myproject.mycompany.com'
    productionServer = 'production-myproject.mycompany.com'
}
Other than argument usage, the methods are identical. It will come down to your preference.

Jenkins Multibranch job with declarative pipeline cloning repo for every stage

Trying to create a workflow in Jenkins using Declarative Pipeline to do something like this:
Checkout the code on 'master'
Build solution on 'master' (I know this is not a secure way to do it, but Jenkins is in the intranet so it should be fine for us)
Stash artifacts (.dll, .exe, .pdb, etc) => 1st stage
Unstash artifacts on nodes depending on what's needed (unit tests on one slave, integration tests on another, and Selenium tests on yet another) => 2nd stage
Run tests depending on the slave => 3rd stage running in parallel
The problem that I'm facing is that the git checkout (GitSCM) is executed for every stage.
My pipeline looks like this:
pipeline {
    agent {
        label {
            label "master"
            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
        }
    }
    options {
        timestamps()
    }
    stages {
        stage("Build") {
            agent {
                label {
                    label "master"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                /*
                steps to build the solution here
                */
                //Sleep because stashing fails otherwise
                script {
                    sleep(1)
                }
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    stash name: 'unit-tests'
                }
                dir("${env.WORKSPACE}\\WebUnitTests\\bin\\x64\\Release") {
                    stash name: 'web-unit-tests'
                }
            }
        }
        stage('Export artefacts') {
            agent {
                label {
                    label "UnitTest"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                echo "Copying dlls from master to ${env.NODE_NAME}"
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    unstash 'unit-tests'
                }
            }
        }
        stage('Run tests') {
            parallel {
                stage("Run tests #1") {
                    agent {
                        label {
                            label "UnitTest"
                            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                        }
                    }
                    steps {
                        /*
                        run tests here
                        */
                    }
                    post {
                        //post results here
                    }
                }
                //other parallel stages
            }
        }
    }
}
So, as mentioned earlier, the GitSCM checkout is part of, and performed for, every stage:
Build stage
Export stage
A couple of simple changes should solve this. You need to tell the pipeline script not to check out by default every time a node is allocated, and then tell it to do the checkout where you need it:
pipeline {
    agent {
        label {
            label "master"
            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
        }
    }
    options {
        timestamps()
        skipDefaultCheckout() // Don't checkout automatically
    }
    stages {
        stage("Build") {
            agent {
                label {
                    label "master"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                checkout scm //this will checkout the appropriate commit in this stage
                /*
                steps to build the solution here
                */
                //Sleep because stashing fails otherwise
                script {
                    sleep(1)
                }
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    stash name: 'unit-tests'
                }
                dir("${env.WORKSPACE}\\WebUnitTests\\bin\\x64\\Release") {
                    stash name: 'web-unit-tests'
                }
            }
        }
        stage('Export artefacts') {
            agent {
                label {
                    label "UnitTest"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                echo "Copying dlls from master to ${env.NODE_NAME}"
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    unstash 'unit-tests'
                }
            }
        }
        stage('Run tests') {
            parallel {
                stage("Run tests #1") {
                    agent {
                        label {
                            label "UnitTest"
                            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                        }
                    }
                    steps {
                        /*
                        run tests here
                        */
                    }
                    post {
                        //post results here
                    }
                }
                //other parallel stages
            }
        }
    }
}
I have added two lines there: skipDefaultCheckout() in the options section, and checkout scm in the first stage.

Declarative Pipeline When Conditions

I need help with the when condition in Jenkins. I want to detect the words FULL_BUILD in the merge request title and then execute different stages based on whether it is a FULL_BUILD or just a regular merge request to master that doesn't need to go through Veracode, SonarQube, etc. (those stages are not pasted in, as they just repeat the when conditions below). However, I have to repeat this crazy when condition on EVERY stage, as well as sometimes create a special stage that only executes on FULL_BUILD or "regular" builds.
Has anyone created a @NonCPS script to set a true/false variable? Or is there a crafty way to execute a script on startup to set a reusable variable?
I want the users to execute everything from their GitLab MR and not have to go to Jenkins to hit a button (hence I do not use a boolean parameter).
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    parameters {
        //I am trying to pull the information for a full build from the Merge Request
    }
    environment {
        //Assume random variables all work fine
    }
    options {
        skipDefaultCheckout()
        gitLabConnection('GitLab_Generic')
        timeout(time: 60, unit: 'MINUTES')
    }
    triggers {
        gitlab(triggerOnPush: true, triggerOnMergeRequest: true, branchFilterType: 'All')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                dir("${env.WORKSPACE}\\${params.PROJECT_NAME}") {
                    bat 'nuget.exe restore %PROJECT_NAME%.sln'
                }
                dir("${env.WORKSPACE}\\${params.PROJECT_NAME}\\build") {
                    bat "\"${tool 'msbuild'}\" %COMPONENT%.XML /p:Configuration=Debug "
                }
                dir("${env.WORKSPACE}") {
                    echo "Creating a Build status file"
                    writeFile file: "output/MR_Title.txt", text: "BUILD STATUS:"
                }
            }
        }
        stage('Check MR FULL_BUILD') {
            when {
                branch 'master'
            }
            steps {
                dir("${env.WORKSPACE}") {
                    //writeFile file: "MR_Title.txt", text: "BUILD STATUS:"
                    powershell '& "./build/scripts/MergeRequestAPI.ps1" -GIT_CREDENTIALS $env:GIT_API_TOKEN -PROJECT_ID $env:GIT_PROJECT_ID | Out-File output/MR_Title.txt -Encoding utf8'
                }
            }
        }
        stage('Package Snapshot') {
            when {
                allOf {
                    branch 'master'
                    not {
                        expression {
                            return readFile('output/MR_Title.txt').contains("FULL BUILD")
                        }
                    }
                }
            }
            steps {
                dir("${env.WORKSPACE}\\${params.PROJECT_NAME}\\build") {
                    bat "\"${tool 'msbuild'}\" %COMPONENT%.XML /t:Publish /p:version=${env.SnapshotComponentVersion} "
                }
            }
        }
        stage('Package Full Build') {
            when {
                allOf {
                    branch 'master'
                    expression {
                        return readFile('output/MR_Title.txt').contains("FULL BUILD")
                    }
                }
            }
            steps {
                dir("${env.WORKSPACE}\\${params.PROJECT_NAME}\\build") {
                    bat "\"${tool 'msbuild'}\" %COMPONENT%.XML /t:Publish /p:version=${env.ComponentVersion} "
                }
            }
        }
    }
}
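One way to avoid repeating the condition, sketched under the assumption that output/MR_Title.txt has already been written as above: compute the flag once in the Check MR FULL_BUILD stage, store it in an environment variable, and test that variable with the built-in environment condition in each later when block.

stage('Check MR FULL_BUILD') {
    when { branch 'master' }
    steps {
        script {
            // Sketch: compute the flag once; later stages read env.IS_FULL_BUILD
            env.IS_FULL_BUILD = readFile('output/MR_Title.txt').contains('FULL BUILD').toString()
        }
    }
}
stage('Package Full Build') {
    when {
        allOf {
            branch 'master'
            environment name: 'IS_FULL_BUILD', value: 'true'
        }
    }
    steps {
        // packaging steps as before
    }
}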

Use a lightweight executor for a declarative pipeline stage (agent none)

I'm using Jenkins Pipeline with the declarative syntax, currently with the following stages:
Prepare
Build (two parallel sets of steps)
Test (also two parallel sets of steps)
Ask if/where to deploy
Deploy
For steps 1, 2, 3, and 5 I need an agent (an executor) because they do actual work on the workspace. For step 4, I don't need one, and I would like not to block my available executors while waiting for user input. This seems to be referred to as either a "flyweight" or "lightweight" executor for the classic, scripted syntax, but I cannot find any information on how to achieve this with the declarative syntax.
So far I've tried:
Setting an agent directly in the pipeline options, and then setting agent none on the stage. This has no effect, and the pipeline runs as normal, blocking the executor while waiting for input. The documentation also mentions that it will have no effect, but I thought I'd give it a shot anyway.
Setting agent none in the pipeline options, and then setting an agent for each stage except #4. Unfortunately, but expectedly, this allocates a new workspace for every stage, which in turn requires me to stash and unstash. This is both messy and gives me further problems in the parallel stages (2 and 3) because I cannot have code outside the parallel construct. I assume the parallel steps run in the same workspace, so stashing/unstashing in both would have unfortunate results.
Here is an outline of my Jenkinsfile:
pipeline {
    agent {
        label 'build-slave'
    }
    stages {
        stage("Prepare build") {
            steps {
                // ...
            }
        }
        stage("Build") {
            steps {
                parallel(
                    frontend: {
                        // ...
                    },
                    backend: {
                        // ...
                    }
                )
            }
        }
        stage("Test") {
            steps {
                parallel(
                    jslint: {
                        // ...
                    },
                    phpcs: {
                        // ...
                    },
                )
            }
            post {
                // ...
            }
        }
        stage("Select deploy target") {
            steps {
                script {
                    // ... code that determines choiceParameterDefinition based on branch name ...
                    try {
                        timeout(time: 5, unit: 'MINUTES') {
                            deployEnvironment = input message: 'Deploy target', parameters: [choiceParameterDefinition]
                        }
                    } catch(ex) {
                        deployEnvironment = null
                    }
                }
            }
        }
        stage("Deploy") {
            when {
                expression {
                    return binding.variables.get("deployEnvironment")
                }
            }
            steps {
                // ...
            }
        }
    }
    post {
        // ...
    }
}
Am I missing something here, or is it just not possible in the current version?
Setting agent none at the top level, then agent { label 'foo' } on every stage, with agent none again on the input stage seems to work as expected for me.
i.e. Every stage that does some work runs on the same agent, while the input stage does not consume an executor on any agent.
pipeline {
agent none
stages {
stage("Prepare build") {
agent { label 'some-agent' }
steps {
echo "prepare: ${pwd()}"
}
}
stage("Build") {
agent { label 'some-agent' }
steps {
parallel(
frontend: {
echo "frontend: ${pwd()}"
},
backend: {
echo "backend: ${pwd()}"
}
)
}
}
stage("Test") {
agent { label 'some-agent' }
steps {
parallel(
jslint: {
echo "jslint: ${pwd()}"
},
phpcs: {
echo "phpcs: ${pwd()}"
},
)
}
}
stage("Select deploy target") {
agent none
steps {
input message: 'Deploy?'
}
}
stage("Deploy") {
agent { label 'some-agent' }
steps {
echo "deploy: ${pwd()}"
}
}
}
}
However, there is no guarantee that using the same agent label within a pipeline will always end up using the same workspace, e.g. if another build of the same job starts while the first build is waiting on the input.
You would have to use stash after the build steps. As you note, this cannot normally be done with parallel at the moment, so you'd additionally have to use a script block to write a snippet of Scripted Pipeline for the stashing/unstashing after/before the parallel steps.
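A rough sketch of that workaround, reusing the build-slave label from the question (the stash name and include pattern are illustrative assumptions):

stage("Build") {
    agent { label 'build-slave' }
    steps {
        script {
            // Scripted snippet: run the parallel branches, then stash once both finish
            parallel(
                frontend: { echo "build frontend" },
                backend: { echo "build backend" }
            )
            stash name: 'build-output', includes: '**' // illustrative pattern
        }
    }
}
stage("Deploy") {
    agent { label 'build-slave' }
    steps {
        unstash 'build-output' // restore artifacts into whatever workspace this stage got
    }
}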
There is a workaround to use the same build slave in the other stages.
You can set a variable with the node name and use it in the others.
i.e.:
pipeline {
    agent none
    stages {
        stage('First Stage Gets Agent Dynamically') {
            agent {
                node {
                    label "some-agent"
                }
            }
            steps {
                echo "first stage running on ${NODE_NAME}"
                script {
                    BUILD_AGENT = NODE_NAME
                }
            }
        }
        stage('Second Stage Setting Node by Name') {
            agent {
                node {
                    label "${BUILD_AGENT}"
                }
            }
            steps {
                echo "Second stage using ${NODE_NAME}"
            }
        }
    }
}
As of today (2021), you can use nested stages (https://www.jenkins.io/doc/book/pipeline/syntax/#sequential-stages) to group all the stages that must run in the same workspace before the input step, and all the stages that must run in the same workspace after it. Of course, you need to stash or store artifacts in some external repository before the input step, because the second workspace may not be the same as the first one:
pipeline {
    agent none
    stages {
        stage('Deployment to Preproduction') {
            agent any
            stages {
                stage('Stage PRE.1') {
                    steps {
                        echo "Stage PRE.1"
                        sleep(10)
                    }
                }
                stage('Stage PRE.2') {
                    steps {
                        echo "Stage PRE.2"
                        sleep(10)
                    }
                }
            }
        }
        stage('Stage Ask Deploy') {
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deployment to Production') {
            agent any
            stages {
                stage('Stage PRO.1') {
                    steps {
                        echo "Stage PRO.1"
                        sleep(10)
                    }
                }
                stage('Stage PRO.2') {
                    steps {
                        echo "Stage PRO.2"
                        sleep(10)
                    }
                }
            }
        }
    }
}
