I have created a Pipeline job in the Jenkins UI which calls two other Jenkins jobs:
pipeline {
    agent any
    stages {
        stage("Trigger disable script approval") {
            steps {
                script {
                    build job: 'Tools/Disable_Script_Approval'
                }
            }
        }
        stage("Trigger Jobs loading into Jenkins") {
            steps {
                script {
                    build job: 'Tools/Seed_Job_Executor'
                }
            }
        }
    }
}
Then I used the xml-job-to-job-dsl plugin in order to get the job's syntax in Job DSL:
pipelineJob("testLior") {
    description()
    keepDependencies(false)
    definition {
        cpsScm {
            """pipeline {
                agent any
                stages {
                    stage("Trigger disable script approval") {
                        steps {
                            script {
                                build job: 'Tools/Disable_Script_Approval'
                            }
                        }
                    }
                    stage("Trigger Jobs loading into Jenkins") {
                        steps {
                            script {
                                build job: 'Tools/Seed_Job_Executor'
                            }
                        }
                    }
                }
            }"""
        }
    }
    disabled(false)
    configure {
        it / 'properties' / 'com.sonyericsson.rebuild.RebuildSettings' {
            'autoRebuild'('false')
            'rebuildDisabled'('false')
        }
    }
}
I took the above code and tried to use it in my JCasC configuration (we are running Jenkins with the Helm chart on top of EKS), and created this values file:
controller:
JCasC:
configScripts:
casc-jobs: |
jobs:
- script: >
pipelineJob('DSL_Seed_Job') {
definition {
cpsScm {
'''pipeline {
agent any
stages {
stage('Trigger disable script approval') {
steps {
script{
build job: 'Tools/Disable_Script_Approval'
}
}
}
stage('Trigger Jobs loading into Jenkins') {
steps {
script{
build job: 'Tools/Seed_Job_Executor'
}
}
}
}
}'''
}
}
}
...
...
So when I run helm upgrade, I see that the Jenkins pod fails to read the JCasC jobs configuration, and this error message appears:
2021-10-21 11:04:37.178+0000 [id=22] SEVERE hudson.util.BootFailure#publish: Failed to initialize Jenkins
while scanning a simple key
in /var/jenkins_home/casc_configs/casc-jobs.yaml, line 3, column 3:
pipelineJob('DSL_Seed_Job') {
^
could not find expected ':'
in /var/jenkins_home/casc_configs/casc-jobs.yaml, line 12, column 38:
... build job: 'Tools/Disable_Script_Approval'
What can be the cause of this error? I got the DSL syntax from the xml-job-to-job-dsl Jenkins plugin, so I don't understand what I am missing here.
Thanks in advance,
Lior
You probably figured this out by now, but it looks to me like a YAML indentation issue. I believe the block starting with "pipelineJob" should be indented like so:
jobs:
  - script: >
      pipelineJob('DSL_Seed_Job') {
        ...
      }
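For reference, a rough sketch of how the whole values file could look once the script block is consistently indented under the folded scalar. The Helm keys follow the standard Jenkins chart layout; note also that for an inline pipeline script the Job DSL block is cps { script(...) } rather than cpsScm, which expects an SCM definition (the plugin-generated snippet above uses cpsScm, which is one more thing to adjust):

controller:
  JCasC:
    configScripts:
      casc-jobs: |
        jobs:
          - script: >
              pipelineJob('DSL_Seed_Job') {
                definition {
                  cps {
                    script('''
                      pipeline {
                        agent any
                        stages {
                          stage('Trigger disable script approval') {
                            steps {
                              build job: 'Tools/Disable_Script_Approval'
                            }
                          }
                          stage('Trigger Jobs loading into Jenkins') {
                            steps {
                              build job: 'Tools/Seed_Job_Executor'
                            }
                          }
                        }
                      }
                    ''')
                    sandbox()
                  }
                }
              }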
Related
As far as declarative pipelines go in Jenkins, I'm having trouble with the when keyword.
I keep getting the error No such DSL method 'when' found among steps. I'm sort of new to Jenkins 2 declarative pipelines and don't think I am mixing up scripted pipelines with declarative ones.
The goal of this pipeline is to run mvn deploy after a successful Sonar run and send out mail notifications of a failure or success. I only want the artifacts to be deployed when on master or a release branch.
The part I'm having difficulties with is in the post section. The Notifications stage is working great. Note that I got this to work without the when clause, but really need it or an equivalent.
pipeline {
    agent any
    tools {
        maven 'M3'
        jdk 'JDK8'
    }
    stages {
        stage('Notifications') {
            steps {
                sh 'mkdir tmpPom'
                sh 'mv pom.xml tmpPom/pom.xml'
                checkout([$class: 'GitSCM', branches: [[name: 'origin/master']], doGenerateSubmoduleConfigurations: false, submoduleCfg: [], userRemoteConfigs: [[url: 'https://repository.git']]])
                sh 'mvn clean test'
                sh 'rm pom.xml'
                sh 'mv tmpPom/pom.xml ../pom.xml'
            }
        }
    }
    post {
        success {
            script {
                currentBuild.result = 'SUCCESS'
            }
            when {
                branch 'master|release/*'
            }
            steps {
                sh 'mvn deploy'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result,
            )
        }
        failure {
            script {
                currentBuild.result = 'FAILURE'
            }
            sendNotification(recipients,
                null,
                'https://link.to.sonar',
                currentBuild.result
            )
        }
    }
}
In the documentation of declarative pipelines, it's mentioned that you can't use when in the post block. when is allowed only inside a stage directive.
So what you can do is test the conditions using an if in a script:
post {
    success {
        script {
            if (env.BRANCH_NAME == 'master')
                currentBuild.result = 'SUCCESS'
        }
    }
    // failure block
}
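Alternatively, since when is allowed inside a stage directive, you can move the deploy out of post and into a guarded stage. A rough sketch (sendNotification and recipients are the helpers from your own pipeline, assumed to exist):

pipeline {
    agent any
    stages {
        // ... build/test stages ...
        stage('Deploy') {
            // 'when' is legal here because it is attached to a stage
            when {
                anyOf {
                    branch 'master'
                    branch 'release/*'
                }
            }
            steps {
                sh 'mvn deploy'
            }
        }
    }
    post {
        success {
            // sendNotification/recipients come from the question's own pipeline
            sendNotification(recipients, null, 'https://link.to.sonar', 'SUCCESS')
        }
        failure {
            sendNotification(recipients, null, 'https://link.to.sonar', 'FAILURE')
        }
    }
}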
Using a GitHub repository and the Pipeline plugin, I have something along these lines:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh '''
                    make
                '''
            }
        }
    }
    post {
        always {
            sh '''
                make clean
            '''
        }
        success {
            script {
                if (env.BRANCH_NAME == 'master') {
                    emailext (
                        to: 'engineers@green-planet.com',
                        subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} master is fine",
                        body: "The master build is happy.\n\nConsole: ${env.BUILD_URL}.\n\n",
                        attachLog: true
                    )
                } else if (env.BRANCH_NAME.startsWith('PR')) {
                    // also send email to tell people their PR status
                } else {
                    // this is some other branch
                }
            }
        }
    }
}
And that way, notifications can be sent based on the type of branch being built. See the pipeline model definition and also the global variable reference available on your server at http://your-jenkins-ip:8080/pipeline-syntax/globals#env for details.
Ran into the same issue with post. Worked around it by annotating the variable with @groovy.transform.Field. This was based on info I found in the Jenkins docs for defining global variables.
e.g.
#!groovy
pipeline {
    agent none
    stages {
        stage("Validate") {
            parallel {
                stage("Ubuntu") {
                    agent {
                        label "TEST_MACHINE"
                    }
                    steps {
                        sh "run tests command"
                        recordFailures('Ubuntu', 'test-results.xml')
                        junit 'test-results.xml'
                    }
                }
            }
        }
    }
    post {
        unsuccessful {
            notify()
        }
    }
}

// Make testFailures global so it can be accessed from a 'post' step
@groovy.transform.Field
def testFailures = [:]

def recordFailures(key, resultsFile) {
    def failures = ... parse test-results.xml script for failures ...
    if (failures) {
        testFailures[key] = failures
    }
}

def notify() {
    if (testFailures) {
        ... do something here ...
    }
}
I have a Jenkins pipeline job that (among other things) creates another pipelineJob (to clean everything up afterwards) using the Job DSL plugin.
pipeline {
    agent { label 'Deployment' }
    stages {
        stage('Clean working directory and Checkout') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        // Complex logic omitted
        stage('Generate cleanup job') {
            steps {
                build job: 'cleanup-job-template',
                    parameters: [
                        string(name: 'REGION', value: "${REGION}"),
                        string(name: 'DEPLOYMENT_TYPE', value: "${DEPLOYMENT_TYPE}")
                    ]
            }
        }
    }
}
The thing is that I need this newly generated job to be built only once and then, if the build was successful, the job should be deleted.
pipeline {
    stages {
        stage('Cleanup afterwards') {
            // cleanup logic
        }
    }
    post {
        success {
            // delete this job?
        }
    }
}
I thought that this could be done using a Pipeline post action, but unfortunately I couldn't find any out-of-the-box solution for it.
Is it possible to achieve this at all?
You can achieve this using the post section; you will then need to write some Groovy code in order to delete the job:
#!/usr/bin/env groovy
import hudson.model.*

pipeline {
    agent none
    stages {
        stage('Cleanup afterwards') {
            // cleanup logic
            steps {
                node('worker') {
                    sh 'ls -la'
                }
            }
        }
    }
    post {
        success {
            script {
                jobsToDelete = ["<JOB_TO_DELETE>"]
                deleteJob(Hudson.instance.items, jobsToDelete)
            }
        }
    }
}

def deleteJob(items, jobsToDelete) {
    items.each { item ->
        if (item.class.canonicalName != 'com.cloudbees.hudson.plugins.folder.Folder') {
            if (jobsToDelete.contains(item.fullName)) {
                manager.listener.logger.println(item.fullName)
                item.delete()
            }
        }
    }
}
Tested both cases, and they work on Jenkins 2.89.4.
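An alternative, if you know the job's full name up front, is to look it up directly. This is only a sketch; it touches the Jenkins API, so it needs an admin/script-approval context just like the snippet above, and 'My_Folder/cleanup-job' is a hypothetical name used purely for illustration:

post {
    success {
        script {
            // Look the job up by its full name (including any folders) and delete it
            def job = Jenkins.instance.getItemByFullName('My_Folder/cleanup-job')
            if (job != null) {
                job.delete()
            }
        }
    }
}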
You should do that all in one job instead of creating and deleting jobs. Use multiple stages for that, e.g. deploy test system, run tests / wait for tests to be finished, undeploy. No need for extra jobs. Example posted here: Can a Jenkins pipeline have an optional input step?
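For example, something along these lines; the stage names and shell commands are only placeholders for the idea, not taken from the linked answer:

pipeline {
    agent { label 'Deployment' }
    stages {
        stage('Deploy test system') {
            steps {
                sh './deploy-test-system.sh'   // placeholder command
            }
        }
        stage('Run tests') {
            steps {
                sh './run-tests.sh'            // placeholder command
            }
        }
    }
    post {
        always {
            // Undeploy in the same build, so no separate cleanup job is needed
            sh './undeploy-test-system.sh'     // placeholder command
        }
    }
}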
I can create pipelines by putting the following code into a "Jenkinsfile" in my repository (called repo1) and creating a new item through the Jenkins GUI to poll the repository.
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                    archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo \'uploading artifacts to some repositories\''
            }
        }
    }
}
But I have a case where I am not allowed to create new items through the Jenkins GUI; instead, there is a pre-defined job which reads Job DSL files from a repository I provide. So I need to create the same pipeline through Job DSL, but I cannot find the corresponding syntax for everything; for instance, I couldn't find an 'agent' DSL command.
Here is the Job DSL code I was trying to change:
pipelineJob('the-same-pipeline') {
    definition {
        cps {
            sandbox()
            script("""
                node {
                    stage('prepare') {
                        steps {
                            sh '''echo 'hello''''
                        }
                    }
                }
            """.stripIndent())
        }
    }
}
For instance, I could not find the 'agent' command. Is it really possible to create the exact same pipeline using Job DSL?
I found a way to create the pipeline item through Job DSL. The following Job DSL creates another item, which is just a pipeline:
pipelineJob('my-actual-pipeline') {
    definition {
        cpsScmFlowDefinition {
            scm {
                gitSCM {
                    userRemoteConfigs {
                        userRemoteConfig {
                            credentialsId('')
                            name('')
                            refspec('')
                            url('https://github.com/muatik/jenkins-as-code-example')
                        }
                    }
                    branches {
                        branchSpec {
                            name('*/master')
                        }
                    }
                    browser {
                        gitWeb {
                            repoUrl('')
                        }
                    }
                    gitTool('')
                    doGenerateSubmoduleConfigurations(false)
                }
            }
            scriptPath('Jenkinsfile')
            lightweight(true)
        }
    }
}
You can find the Jenkinsfile and my test repo here: https://github.com/muatik/jenkins-as-code-example
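Alternatively, if you prefer to keep the pipeline text inside the Job DSL script instead of a Jenkinsfile: agent is Pipeline syntax, not a Job DSL method, so it simply goes inside the script you hand to cps { script(...) }. A minimal sketch reusing the pipeline from the question (the job name is just an example):

pipelineJob('inline-pipeline-example') {
    definition {
        cps {
            sandbox()
            script('''
                pipeline {
                    agent {
                        docker {
                            image 'maven:3-alpine'
                            args '-v /root/.m2:/root/.m2'
                        }
                    }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'mvn -B -DskipTests clean package'
                            }
                        }
                    }
                }
            ''')
        }
    }
}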
I want to run multiple stages inside a lock within a declarative Jenkins pipeline:
pipeline {
    agent any
    stages {
        lock(resource: 'myResource') {
            stage('Stage 1') {
                steps {
                    echo "my first step"
                }
            }
            stage('Stage 2') {
                steps {
                    echo "my second step"
                }
            }
        }
    }
}
I get the following error:
Started by user anonymous
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 10: Expected a stage @ line 10, column 9.
        lock(resource: 'myResource') {
        ^
WorkflowScript: 10: Stage does not have a name @ line 10, column 9.
        lock(resource: 'myResource') {
        ^
WorkflowScript: 10: Nothing to execute within stage "null" @ line 10, column 9.
        lock(resource: 'myResource') {
        ^
3 errors
    at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
    at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
    at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
    at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
    at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
    at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
    at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
    at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
    at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
    at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:116)
    at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:430)
    at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:393)
    at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:257)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:405)
Finished: FAILURE
What's the problem here? The documentation explicitly states:
lock can be also used to wrap multiple stages into a single
concurrency unit
It should be noted that you can lock all stages in a pipeline by using the lock option:
pipeline {
    agent any
    options {
        lock resource: 'shared_resource_lock'
    }
    stages {
        stage('will_already_be_locked') {
            steps {
                echo "I am locked before I enter the stage!"
            }
        }
        stage('will_also_be_locked') {
            steps {
                echo "I am still locked!"
            }
        }
    }
}
This has since been addressed.
You can lock multiple stages by grouping them in a parent stage, like this:
stage('Parent') {
    options {
        lock('something')
    }
    stages {
        stage('one') {
            ...
        }
        stage('two') {
            ...
        }
    }
}
(Don't forget you need the Lockable Resources Plugin)
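Spelled out as a complete (if trivial) pipeline, the nested-stages form looks roughly like this; 'something' is just a placeholder resource name:

pipeline {
    agent any
    stages {
        stage('Parent') {
            options {
                // Both child stages run while holding the same lock
                lock('something')
            }
            stages {
                stage('one') {
                    steps {
                        echo 'stage one'
                    }
                }
                stage('two') {
                    steps {
                        echo 'stage two'
                    }
                }
            }
        }
    }
}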
The problem is that, despite the fact that declarative pipelines were technically available in beta in September, 2016, the blog post you reference (from October) is documenting scripted pipelines, not declarative (it doesn't say as much, so I feel your pain). Lockable resources hasn't been baked in as a declarative pipeline step in a way that would enable the feature you're looking for yet.
You can do:
pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            steps {
                lock('something') {
                    echo 'stage one'
                }
            }
        }
    }
}
But you can't do:
pipeline {
    agent { label 'docker' }
    stages {
        lock('something') {
            stage('one') {
                steps {
                    echo 'stage one'
                }
            }
            stage('two') {
                steps {
                    echo 'stage two'
                }
            }
        }
    }
}
And you can't do:
pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            lock('something') {
                steps {
                    echo 'stage one'
                }
            }
        }
    }
}
You could use a scripted pipeline for this use case.
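For example, a scripted equivalent where a single lock spans both stages could look roughly like this (node label and resource name are placeholders):

node('docker') {
    // One lock held across both stage blocks
    lock('something') {
        stage('one') {
            echo 'stage one'
        }
        stage('two') {
            echo 'stage two'
        }
    }
}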
If the resource is only used by this pipeline you could also disable concurrent builds:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('will_already_be_locked') {
            steps {
                echo "I am locked before I enter the stage!"
            }
        }
        stage('will_also_be_locked') {
            steps {
                echo "I am still locked!"
            }
        }
    }
}
Although the options{} block offers this functionality, it is not possible to use it in some cases.
Let's say you have to give your lock() a specific name depending on a branch or an environment. You have a pipeline which you don't want to block with disableConcurrentBuilds(), and you want to lock resources based on some discriminator. You cannot name your lock() inside the options{} block using an environment variable, or any other variable from the pipeline, because the block is evaluated outside the agent.
The best solution, in my opinion, is the following:
pipeline {
    agent { label 'docker' }
    stages {
        stage('Wrapper') {
            steps {
                script {
                    lock(env.BRANCH_NAME) {
                        stage('Stage 1') {
                            sh('echo "stage1"')
                        }
                        stage('Stage 2') {
                            sh('echo "stage2"')
                        }
                    }
                }
            }
        }
    }
}
Keep in mind that the script {} block takes a block of Scripted Pipeline and executes it within the Declarative Pipeline, so no steps {} blocks are allowed inside it.
I run multiple build and test containers on the same build nodes. The test containers must take a lock on the node name, which is then used as the DB username for the tests.
lock(resource: "${env.NODE_NAME}" as String, variable: 'DBUSER')
Locks in options are computed at load time, but NODE_NAME is unknown that early. In order to lock multiple stages for visual effect, we can create stages inside a script block, e.g. the 'run test' stage in the snippet below. The stage visualization is just as good as with other stage blocks.
pipeline {
    agent any
    stages {
        stage('refresh') {
            steps {
                echo "freshing on $NODE_NAME"
                lock(resource: "${env.NODE_NAME}" as String, variable: 'DBUSER') {
                    sh '''
                        printenv | sort
                    '''
                    script {
                        stage('run test')
                        sh '''
                            printenv | sort
                        '''
                    }
                }
            }
        }
    }
}
I'm using Jenkins Pipeline with the declarative syntax, currently with the following stages:
Prepare
Build (two parallel sets of steps)
Test (also two parallel sets of steps)
Ask if/where to deploy
Deploy
For steps 1, 2, 3, and 5 I need an agent (an executor) because they do actual work on the workspace. For step 4, I don't need one, and I would like to not block my available executors while waiting for user input. This seems to be referred to as either a "flyweight" or "lightweight" executor for the classic, scripted syntax, but I cannot find any information on how to achieve this with the declarative syntax.
So far I've tried:
Setting an agent directly in the pipeline options, and then setting agent none on the stage. This has no effect, and the pipeline runs as normal, blocking the executor while waiting for input. It is also mentioned in the documentation that it will have no effect, but I thought I'd give it a shot anyway.
Setting agent none in the pipeline options, and then setting an agent for each stage except #4. Unfortunately, but expectedly, this allocates a new workspace for every stage, which in turn requires me to stash and unstash. This is both messy and gives me further problems in the parallel stages (2 and 3) because I cannot have code outside the parallel construct. I assume the parallel steps run in the same workspace, so stashing/unstashing in both would have unfortunate results.
Here is an outline of my Jenkinsfile:
pipeline {
    agent {
        label 'build-slave'
    }
    stages {
        stage("Prepare build") {
            steps {
                // ...
            }
        }
        stage("Build") {
            steps {
                parallel(
                    frontend: {
                        // ...
                    },
                    backend: {
                        // ...
                    }
                )
            }
        }
        stage("Test") {
            steps {
                parallel(
                    jslint: {
                        // ...
                    },
                    phpcs: {
                        // ...
                    },
                )
            }
            post {
                // ...
            }
        }
        stage("Select deploy target") {
            steps {
                script {
                    // ... code that determines choiceParameterDefinition based on branch name ...
                    try {
                        timeout(time: 5, unit: 'MINUTES') {
                            deployEnvironment = input message: 'Deploy target', parameters: [choiceParameterDefinition]
                        }
                    } catch(ex) {
                        deployEnvironment = null
                    }
                }
            }
        }
        stage("Deploy") {
            when {
                expression {
                    return binding.variables.get("deployEnvironment")
                }
            }
            steps {
                // ...
            }
        }
    }
    post {
        // ...
    }
}
Am I missing something here, or is it just not possible in the current version?
Setting agent none at the top level, then agent { label 'foo' } on every stage, with agent none again on the input stage seems to work as expected for me.
i.e. Every stage that does some work runs on the same agent, while the input stage does not consume an executor on any agent.
pipeline {
    agent none
    stages {
        stage("Prepare build") {
            agent { label 'some-agent' }
            steps {
                echo "prepare: ${pwd()}"
            }
        }
        stage("Build") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    frontend: {
                        echo "frontend: ${pwd()}"
                    },
                    backend: {
                        echo "backend: ${pwd()}"
                    }
                )
            }
        }
        stage("Test") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    jslint: {
                        echo "jslint: ${pwd()}"
                    },
                    phpcs: {
                        echo "phpcs: ${pwd()}"
                    }
                )
            }
        }
        stage("Select deploy target") {
            agent none
            steps {
                input message: 'Deploy?'
            }
        }
        stage("Deploy") {
            agent { label 'some-agent' }
            steps {
                echo "deploy: ${pwd()}"
            }
        }
    }
}
However, there is no guarantee that using the same agent label within a pipeline will always end up using the same workspace, e.g. if another build of the same job starts while the first build is waiting on the input.
You would have to use stash after the build steps. As you note, this cannot currently be done directly within parallel, so you'd additionally have to use a script block in order to write a snippet of Scripted Pipeline for the stashing/unstashing after/before the parallel steps.
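A rough sketch of what that could look like, with placeholder stash names, shell commands, and parallel bodies:

pipeline {
    agent none
    stages {
        stage("Build") {
            agent { label 'some-agent' }
            steps {
                script {
                    parallel(
                        frontend: { sh 'make frontend' },   // placeholder build steps
                        backend:  { sh 'make backend' }
                    )
                    // Preserve the build output so a later stage, possibly in a
                    // different workspace, can restore it after the input step
                    stash name: 'build-output', includes: 'dist/**'
                }
            }
        }
        stage("Select deploy target") {
            agent none
            steps {
                input message: 'Deploy?'
            }
        }
        stage("Deploy") {
            agent { label 'some-agent' }
            steps {
                unstash 'build-output'
                sh './deploy.sh'   // placeholder deploy step
            }
        }
    }
}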
There is a workaround to use the same build slave in the other stages.
You can set a variable with the node name and use it in the others.
i.e.:
pipeline {
    agent none
    stages {
        stage('First Stage Gets Agent Dynamically') {
            agent {
                node {
                    label "some-agent"
                }
            }
            steps {
                echo "first stage running on ${NODE_NAME}"
                script {
                    BUILD_AGENT = NODE_NAME
                }
            }
        }
        stage('Second Stage Setting Node by Name') {
            agent {
                node {
                    label "${BUILD_AGENT}"
                }
            }
            steps {
                echo "Second stage using ${NODE_NAME}"
            }
        }
    }
}
As of today (2021), you can use nested stages (https://www.jenkins.io/doc/book/pipeline/syntax/#sequential-stages) to group all the stages that must run in the same workspace before the input step, and all the stages that must run in the same workspace after the input step. Of course, you need to stash or store artifacts in some external repository before the input step, because the second workspace may not be the same as the first one:
pipeline {
    agent none
    stages {
        stage('Deployment to Preproduction') {
            agent any
            stages {
                stage('Stage PRE.1') {
                    steps {
                        echo "Stage PRE.1"
                        sleep(10)
                    }
                }
                stage('Stage PRE.2') {
                    steps {
                        echo "Stage PRE.2"
                        sleep(10)
                    }
                }
            }
        }
        stage('Stage Ask Deploy') {
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deployment to Production') {
            agent any
            stages {
                stage('Stage PRO.1') {
                    steps {
                        echo "Stage PRO.1"
                        sleep(10)
                    }
                }
                stage('Stage PRO.2') {
                    steps {
                        echo "Stage PRO.2"
                        sleep(10)
                    }
                }
            }
        }
    }
}