I came across a blog post about defining pipeline templates here. What is the difference between the two declarations below?
vars/myDeliveryPipeline.groovy
def call(Map pipelineParams) {
    pipeline {
        agent any
        stages {
            stage('checkout git') {
                steps {
                    git branch: pipelineParams.branch, credentialsId: 'GitCredentials', url: pipelineParams.scmUrl
                }
            }
            stage('build') {
                steps {
                    sh 'mvn clean package -DskipTests=true'
                }
            }
            stage('test') {
                steps {
                    parallel (
                        "unit tests": { sh 'mvn test' },
                        "integration tests": { sh 'mvn integration-test' }
                    )
                }
            }
            stage('deploy developmentServer') {
                steps {
                    deploy(pipelineParams.developmentServer, pipelineParams.serverPort)
                }
            }
            stage('deploy staging') {
                steps {
                    deploy(pipelineParams.stagingServer, pipelineParams.serverPort)
                }
            }
            stage('deploy production') {
                steps {
                    deploy(pipelineParams.productionServer, pipelineParams.serverPort)
                }
            }
        }
        post {
            failure {
                mail to: pipelineParams.email, subject: 'Pipeline failed', body: "${env.BUILD_URL}"
            }
        }
    }
}
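Note that the deploy(...) calls assume another global step defined in the same shared library. A minimal, purely hypothetical vars/deploy.groovy is sketched below; the scp/ssh commands, user, and paths are placeholders, not part of the original post:

// vars/deploy.groovy (hypothetical sketch)
def call(String server, String port) {
    // placeholder: copy the built artifact to the target server and restart the app
    sh "scp target/*.war deployer@${server}:/opt/app/"
    sh "ssh deployer@${server} '/opt/app/restart.sh ${port}'"
}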
2nd Approach
vars/myDeliveryPipeline.groovy
def call(body) {
    // evaluate the body block, and collect configuration into the object
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    pipeline {
        // our complete declarative pipeline can go in here
        ...
    }
}
The essential difference is in how the pipeline parameters are passed to the method containing the pipeline at invocation time.
For the first example, you will be passing a Map directly via myDeliveryPipeline(params):
myDeliveryPipeline(branch: 'master',
                   scmUrl: 'ssh://git@myScmServer.com/repos/myRepo.git',
                   email: 'team@example.com',
                   serverPort: '8080',
                   developmentServer: 'dev-myproject.mycompany.com',
                   stagingServer: 'staging-myproject.mycompany.com',
                   productionServer: 'production-myproject.mycompany.com')
For the second example, you will be passing the parameters via a closure that resembles a DSL, i.e. myDeliveryPipeline { params }:
myDeliveryPipeline {
    branch = 'master'
    scmUrl = 'ssh://git@myScmServer.com/repos/myRepo.git'
    email = 'team@example.com'
    serverPort = '8080'
    developmentServer = 'dev-myproject.mycompany.com'
    stagingServer = 'staging-myproject.mycompany.com'
    productionServer = 'production-myproject.mycompany.com'
}
Other than argument usage, the methods are identical. It will come down to your preference.
Related
I have a common Jenkins shared library for all the repositories as below.
vars/_publish.groovy
pipeline {
    environment {
        abc = credentials('abc')
        def = credentials('def')
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build'
            }
        }
        stage('Unit-test') {
            steps {
                sh 'mvn test'
            }
        }
    }
}
Jenkinsfile
@Library('my-shared-library@branch') _
_publish() {
}
I have 10 repositories, each with its own Jenkinsfile as shown above, referring to the Jenkins shared library (vars/_publish.groovy). I have a condition here that I need to satisfy: for a few repositories I want to skip the unit test and just execute the build stage, while for the rest I want both stages. Is there any way I can skip a particular stage based on the repository or repository name?
Yes, it's possible. You can use a when expression like this:
pipeline {
    agent any
    stages {
        stage('Test') {
            // put your repository name here: whenever the repository name is 'dev', this stage executes
            when { expression { return repositoryName().contains('dev') } }
            steps {
                script {
                }
            }
        }
    }
}

def repositoryName() {
    def repositoryName = ['dev', 'test'] // add your 10 repo names here
    return repositoryName
}
Here in my case the repo names are dev and test, so you can add yours accordingly.
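If you'd rather derive the repository name at runtime instead of hard-coding a list, a sketch like this can work, assuming env.GIT_URL is populated by the checkout (it normally is in multibranch jobs); the helper name is made up:

def currentRepoName() {
    // e.g. 'ssh://git@myScmServer.com/repos/myRepo.git' -> 'myRepo'
    return env.GIT_URL.tokenize('/').last().replace('.git', '')
}

// then in the stage:
// when { expression { return currentRepoName() == 'dev' } }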
I would decorate my shared library and Jenkinsfile like this to achieve your scenario.
vars/_publish.groovy
def call(body = {}) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    pipeline {
        agent any
        stages {
            stage('build') {
                steps {
                    echo "BUILD"
                }
            }
            stage('unitest') {
                when {
                    anyOf {
                        equals expected: true, actual: pipelineParams.isEmpty()
                        equals expected: false, actual: pipelineParams.skipUnitest
                    }
                }
                steps {
                    echo "UNITEST"
                }
            }
        }
    }
}
I am enabling the shared library to accept parameters from the Jenkinsfile, and the when {} directive decides whether to skip the unitest stage.
Jenkinsfile
If the Jenkinsfile in your repo has the details below, the unitest stage will be skipped:
@Library('jenkins-shared-library') _
_publish() {
    skipUnitest = true
}
Both of the scenarios below will run the unitest stage:
@Library('jenkins-shared-library') _
_publish() {
    skipUnitest = false
}
and
@Library('jenkins-shared-library') _
_publish() {
}
I am trying to build a Jenkins pipeline which has a combination of parallel and sequential stages. I am able to accomplish the same with static data but failing to get it working when using dynamic data, i.e. when using a parameterized build and reading data from the build parameters.
The snippet below works fine:
pipeline {
    agent any
    stages {
        stage('Parallel Tests') {
            parallel {
                stage('Ordered Tests Set') {
                    stages {
                        stage('Building seq test 1') {
                            steps {
                                echo "build seq test 1"
                            }
                        }
                        stage('Building seq test 2') {
                            steps {
                                echo "build seq test 2"
                            }
                        }
                    }
                }
                stage('Building Parallel test 1') {
                    steps {
                        echo "Building Parallel test 1"
                    }
                }
                stage('Building Parallel test 2') {
                    steps {
                        echo "Building Parallel test 2"
                    }
                }
            }
        }
    }
}
This gives me the expected execution result.
Now I want to read the values from my build parameters and just loop the stages. This is what I have tried, but I could not get it to work. This snippet is taken from another answer I found a few months back on SO but am unable to trace now, else I would have added the link:
def parallelStagesMap = params['Parallel Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedStagesMap = params['Ordered Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedMap() {
    def orderedStagesMapList = [:]
    orderedStagesMapList['Ordered Tests Set'] = {
        stage('Ordered Tests Set') {
            stages {
                orderedStagesMap
            }
        }
    }
    return orderedStagesMapList
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

pipeline {
    agent none
    stages {
        stage('Parallel Stage to trigger Tests') {
            steps {
                script {
                    parallel orderedMap() + parallelStagesMap
                }
            }
        }
    }
}
Declarative and Scripted Pipeline syntax do not mix in Pipeline, see Pipeline Syntax. Since you are dynamically creating a Pipeline definition based on the parameters, you should most likely go completely to Scripted Syntax, unless your use-case matches matrix.
Removing the Declarative syntax from your pipeline definition would give something like the below. Note that I did not test it on a live Jenkins instance.
def parallelStagesMap = params['Parallel Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedStagesMap = params['Ordered Job Set'].split(',').collectEntries {
    ["${it}" : generateStage(it)]
}

def orderedMap() {
    def orderedStagesMapList = [:]
    orderedStagesMapList['Ordered Tests Set'] = {
        stage('Ordered Tests Set') {
            orderedStagesMap.each { key, value ->
                value.call()
            }
        }
    }
    return orderedStagesMapList
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

stage("Parallel Stage to trigger Tests") {
    parallel orderedMap() + parallelStagesMap
}
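One more note: in a fully Scripted Pipeline the build parameters are declared with the properties step rather than a parameters {} directive. A sketch, assuming comma-separated values (the default values here are made up):

properties([
    parameters([
        string(name: 'Parallel Job Set', defaultValue: 'Par-1,Par-2'),
        string(name: 'Ordered Job Set', defaultValue: 'Seq-1,Seq-2')
    ])
])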
As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but really need it or an equivalent.
pipeline {
    agent any
    tools {
        maven 'M3'
        jdk 'JDK8'
    }
    stages {
        stage('Notifications') {
            steps {
                sh 'mkdir tmpPom'
                sh 'mv pom.xml tmpPom/pom.xml'
                checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
                sh 'mvn clean test'
                sh 'rm pom.xml'
                sh 'mv tmpPom/pom.xml ../pom.xml'
            }
        }
    }
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
            }
            when {
                branch 'master|release/*'
            }
            steps {
                sh 'mvn deploy'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result,
            )
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result
            )
        }
    }
}
In the documentation of declarative pipelines, it's mentioned that you can't use when in the post block. when is allowed only inside a stage directive.
So what you can do is test the conditions using an if in a script:
post {
    success {
        script {
            if (env.BRANCH_NAME == 'master')
                currentBuild.result = 'SUCCESS'
        }
    }
    // failure block
}
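Applied to the original goal of deploying only from master or a release branch, a sketch could look like the following, assuming a multibranch job so env.BRANCH_NAME is set (sendNotification is the asker's own helper):

post {
    success {
        script {
            if (env.BRANCH_NAME == 'master' || env.BRANCH_NAME.startsWith('release/')) {
                sh 'mvn deploy'
            }
            sendNotification(recipients, null, 'https://link.to.sonar', currentBuild.result)
        }
    }
}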
Using a GitHub Repository and the Pipeline plugin I have something along these lines:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh '''
                    make
                '''
            }
        }
    }
    post {
        always {
            sh '''
                make clean
            '''
        }
        success {
            script {
                if (env.BRANCH_NAME == 'master') {
                    emailext(
                        to: 'engineers@green-planet.com',
                        subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
                        body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
                        attachLog: true,
                    )
                } else if (env.BRANCH_NAME.startsWith('PR')) {
                    // also send email to tell people their PR status
                } else {
                    // this is some other branch
                }
            }
        }
    }
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy

pipeline {
    agent none
    stages {
        stage("Validate") {
            parallel {
                stage("Ubuntu") {
                    agent {
                        label "TEST_MACHINE"
                    }
                    steps {
                        sh "run tests command"
                        recordFailures('Ubuntu', 'test-results.xml')
                        junit 'test-results.xml'
                    }
                }
            }
        }
    }
    post {
        unsuccessful {
            notify()
        }
    }
}

// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]

def recordFailures(key, resultsFile) {
    def failures = ... parse test-results.xml script for failures ...
    if (failures) {
        testFailures[key] = failures
    }
}

def notify() {
    if (testFailures) {
        ... do something here ...
    }
}
Currently we have a Jenkins pipeline with 4 stages: Setup, Build, Deploy, Teardown. Deploy and Teardown prompt for manual user input. Because of this, we don't want the manual user input to take up an executor, so we want to use agent none. However, when resuming, there is no guarantee we get the same Jenkins workspace. Stash/unstash is said to use a lot of resources, so it is not recommended for large files. Is there a way to get the exact slave, and when resuming, run back on that same slave?
I have something like this now. I also tried agent gcp at the top level, and putting agent none in the manual-input stage.
pipeline {
    agent none
    environment {
        userInput = false
    }
    stages {
        stage('Setup') {
            agent { node { label 'gcp' } }
            steps {
                deleteDir()
                dir('pipelines') {
                    checkout scm
                }
                dir('deployment_pipelines') {
                    git branch: __deployment_scripts_code_branch, credentialsId: 'jenkins', url: __deployment_scripts_code_repo
                }
                dir('gcp_template_core') {
                    git branch: __gcp_template_code_branch, credentialsId: 'jenkins', url: __gcp_template_code_repo
                }
                dir('control_repo') {
                    git branch: _control_repo_branch, credentialsId: 'jenkins', url: _control_repo
                }
                // Copy core templates to the project
                sh('bash deployment_pipelines/deployment/setup.sh gcp_template_core/gcp_foundation/ control_repo')
            }
        }
        stage('Build') {
            agent { node { label 'gcp' } }
            steps {
                sh('printenv') //TODO: Remove. Debug only
                sh('python deployment_pipelines/deployment/build.py control_repo --env ${_env_type_long}')
            }
        }
        stage('Deploy') {
            agent { node { label 'gcp' } }
            steps {
                sh('python deployment_pipelines/deployment/deploy.py control_repo --env ${_env_type_short}')
            }
        }
        stage('Release') {
            steps {
                agent none
                script {
                    sh('python deployment_pipelines/deployment/set_manual_approvers.py deployment_pipelines/config/production-release-approvers.yaml -o approver.txt')
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Release?',
                            submitter: approvers)
                    } catch (err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    agent { node { label 'gcp' } }
                    if (userInput) {
                        sh("echo 'Do Release'")
                    }
                }
            }
        }
        stage('Teardown') {
            agent { node { label 'gcp' } }
            steps {
                script {
                    def approvers = readFile('approver.txt')
                    try {
                        userInput = input(
                            message: 'Do you want to proceed with Teardown?',
                            submitter: approvers)
                    } catch (err) { // input false
                        //def user = err.getCauses()[0].getUser() //need script approval for getUser()
                        userInput = false
                        // echo "Aborted by [${user}]"
                    }
                    if (userInput) {
                        sh("echo 'Do Teardown'")
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'DO TEARDOWN REGARDLESS'
        }
    }
}
agent none should be above the steps block in stage('Release'). You can refer to https://jenkins.io/doc/book/pipeline/syntax/#agent for the syntax and flow.
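For illustration, here is a sketch of a restructured Release stage: since the pipeline-level agent is already none, the stage can omit an agent entirely and grab an executor only for the actual work via node blocks. Whether approver.txt is still present in the second node block depends on landing in the same workspace, so treat this as a starting point rather than a tested fix:

stage('Release') {
    // no stage-level agent: the pipeline-level 'agent none' applies,
    // so no executor is held while waiting for input
    steps {
        script {
            def approvers = ''
            node('gcp') {
                sh('python deployment_pipelines/deployment/set_manual_approvers.py deployment_pipelines/config/production-release-approvers.yaml -o approver.txt')
                approvers = readFile('approver.txt')
            }
            def proceed = true
            try {
                input(message: 'Do you want to proceed with Release?', submitter: approvers)
            } catch (err) { // input aborted
                proceed = false
            }
            if (proceed) {
                node('gcp') {
                    sh("echo 'Do Release'")
                }
            }
        }
    }
}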
I need to launch a dynamic set of tests in a declarative pipeline.
For better visualization purposes, I'd like to create a stage for each test.
Is there a way to do so?
The only way to create a stage I know is:
stage('foo') {
...
}
I've seen this example, but it does not use declarative syntax.
Use the scripted syntax, which allows more flexibility than the declarative syntax, even though declarative is better documented and recommended.
For example, stages can be created in a loop:
def tests = params.Tests.split(',')
for (int i = 0; i < tests.length; i++) {
    stage("Test ${tests[i]}") {
        sh '....'
    }
}
As JamesD suggested, you may create stages dynamically (but they will be sequential) like this:
def list

pipeline {
    agent none
    options { buildDiscarder(logRotator(daysToKeepStr: '7', numToKeepStr: '1')) }
    stages {
        stage('Create List') {
            agent { node 'nodename' }
            steps {
                script {
                    // you may create your list here, lets say reading from a file after checkout
                    list = ["Test-1", "Test-2", "Test-3", "Test-4", "Test-5"]
                }
            }
            post {
                cleanup {
                    cleanWs()
                }
            }
        }
        stage('Dynamic Stages') {
            agent { node 'nodename' }
            steps {
                script {
                    for (int i = 0; i < list.size(); i++) {
                        stage(list[i]) {
                            echo "Element: $i"
                        }
                    }
                }
            }
            post {
                cleanup {
                    cleanWs()
                }
            }
        }
    }
}
That will result in dynamic sequential stages.
If you don't want to use a for loop, and want the generated stages to be executed in parallel, here is an answer.
def jobs = ["JobA", "JobB", "JobC"]

def parallelStagesMap = jobs.collectEntries {
    ["${it}" : generateStage(it)]
}

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

pipeline {
    agent none
    stages {
        stage('non-parallel stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
Note that all generated stages will be executed on a single node.
If you want the generated stages to be executed on different nodes:
def agents = ['master', 'agent1', 'agent2'] // enter valid agent names in the array

def generateStage(nodeLabel) {
    return {
        stage("Runs on ${nodeLabel}") {
            node(nodeLabel) {
                echo "Running on ${nodeLabel}"
            }
        }
    }
}

def parallelStagesMap = agents.collectEntries {
    ["${it}" : generateStage(it)]
}

pipeline {
    agent none
    stages {
        stage('non-parallel stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('parallel stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
You can of course add more than one parameter and use collectEntries with two parameters.
Please remember that the return in the generateStage function is a must.
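For instance, a two-parameter variant might look like the sketch below; the job/label pairs are made-up examples:

def jobs = [[name: 'JobA', label: 'agent1'], [name: 'JobB', label: 'agent2']]

def generateStage(job, nodeLabel) {
    return {
        stage("stage: ${job}") {
            node(nodeLabel) {
                echo "This is ${job} running on ${nodeLabel}."
            }
        }
    }
}

def parallelStagesMap = jobs.collectEntries { j ->
    ["${j.name}" : generateStage(j.name, j.label)]
}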
@Jorge Machado: Because I cannot comment, I had to post it as an answer. I've solved it recently. I hope it'll help you.
Declarative pipeline:
A simple static example:
stage('Dynamic') {
    steps {
        script {
            stage('NewOne') {
                echo('new one echo')
            }
        }
    }
}
Dynamic real-life example:
// in a declarative pipeline
stage('Trigger Building') {
    when {
        environment(name: 'DO_BUILD_PACKAGES', value: 'true')
    }
    steps {
        executeModuleScripts('build') // local method, see at the end of this script
    }
}

// at the end of the file or in a shared library
void executeModuleScripts(String operation) {
    def allModules = ['module1', 'module2', 'module3', 'module4', 'module11']
    allModules.each { module ->
        String action = "${operation}:${module}"
        echo("---- ${action.toUpperCase()} ----")
        String command = "npm run ${action} -ddd"
        // here is the trick
        script {
            stage(module) {
                bat(command)
            }
        }
    }
}
You might want to take a look at this example: you can have a function return a closure, which should be able to have a stage in it.
This code shows the concept but doesn't have a stage in it.
def transformDeployBuildStep(OS) {
    return {
        node('master') {
            wrap([$class: 'TimestamperBuildWrapper']) {
                ...
            }
        } // ts / node
    } // closure
} // transformDeployBuildStep

stage("Yum Deploy") {
    stepsForParallel = [:]
    for (int i = 0; i < TargetOSs.size(); i++) {
        def s = TargetOSs.get(i)
        def stepName = "CentOS ${s} Deployment"
        stepsForParallel[stepName] = transformDeployBuildStep(s)
    }
    stepsForParallel['failFast'] = false
    parallel stepsForParallel
} // stage
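A variant that does put a stage inside the returned closure could look like this sketch (it reuses the node label and naming from the example above; the echo is a placeholder for the real deployment steps):

def transformDeployBuildStep(OS) {
    return {
        stage("CentOS ${OS} Deployment") {
            node('master') {
                wrap([$class: 'TimestamperBuildWrapper']) {
                    echo "Deploying for ${OS}" // placeholder
                }
            }
        }
    } // closure
} // transformDeployBuildStep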
Just an addition to what @np2807 and @Anton Yurchenko have already presented: you can create stages dynamically and run them in parallel by simply delaying the creation of the stage list (but keeping its declaration), e.g. like this:
def parallelStagesMap

def generateStage(job) {
    return {
        stage("stage: ${job}") {
            echo "This is ${job}."
        }
    }
}

pipeline {
    agent { label 'master' }
    stages {
        stage('Create List of Stages to run in Parallel') {
            steps {
                script {
                    def list = ["Test-1", "Test-2", "Test-3", "Test-4", "Test-5"]
                    // you may create your list here, lets say reading from a file after checkout
                    // personally, I like to use scriptler scripts and load them as simply as:
                    // list = load '/var/lib/jenkins/scriptler/scripts/load-list-script.groovy'
                    parallelStagesMap = list.collectEntries {
                        ["${it}" : generateStage(it)]
                    }
                }
            }
        }
        stage('Run Stages in Parallel') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}
That will result in dynamic parallel stages.
I use this to generate my stages, each of which contains a Jenkins job.
build_list is a list of the Jenkins jobs that I want to trigger from my main Jenkins job, with a stage for each job that is triggered.
build_list = ['job1', 'job2', 'job3']
for (int i = 0; i < build_list.size(); i++) {
    stage(build_list[i]) {
        build job: build_list[i], propagate: false
    }
}
If you are using a Jenkinsfile, then I achieved this by dynamically creating the stages, running them in parallel, and also getting the Jenkins UI to show separate columns. This assumes parallel steps are independent of each other (otherwise don't use parallel), and you can nest them as deep as you want (depending on the number of for loops you nest for creating stages).
See "Jenkinsfile Pipeline DSL: How to Show Multi-Columns in Jobs dashboard GUI - For all Dynamically created stages - When within PIPELINE section" for more.