I am trying to use artifactResolver in a Jenkins pipeline step, but it is failing with the following config:
pipeline {
    agent any
    stages {
        stage('Download artifact') {
            steps {
                artifactResolver {
                    artifacts {
                        artifact {
                            groupId('ch.qos.logback')
                            artifactId('logback-classic')
                            version('1.1.1')
                            classifier('sources')
                        }
                    }
                }
            }
        }
    }
}
However, I get the following error when I build on Jenkins:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 7: Missing required parameter: "artifacts" @ line 7, column 17.
   artifactResolver {
   ^
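For what it's worth, the error suggests artifactResolver is a pipeline step that takes named parameters rather than the nested Job DSL-style closure above. A hedged sketch of the named-parameter form (the exact binding of the artifacts list is inferred from the error message and the Repository Connector plugin's parameters, so verify it against the Snippet Generator):
pipeline {
    agent any
    stages {
        stage('Download artifact') {
            steps {
                // 'artifacts' is the required named parameter the compiler complains about
                artifactResolver artifacts: [[groupId: 'ch.qos.logback',
                                              artifactId: 'logback-classic',
                                              version: '1.1.1',
                                              classifier: 'sources']]
            }
        }
    }
}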
I have created a PipelineJob in the Jenkins UI, which calls two other Jenkins jobs:
pipeline {
    agent any
    stages {
        stage("Trigger disable script approval") {
            steps {
                script {
                    build job: 'Tools/Disable_Script_Approval'
                }
            }
        }
        stage("Trigger Jobs loading into Jenkins") {
            steps {
                script {
                    build job: 'Tools/Seed_Job_Executor'
                }
            }
        }
    }
}
Then I used the xml-job-to-job-dsl plugin to get the job syntax in DSL:
pipelineJob("testLior") {
description()
keepDependencies(false)
definition {
cpsScm {
"""pipeline {
agent any
stages {
stage("Trigger disable script approval") {
steps {
script{
build job: 'Tools/Disable_Script_Approval'
}
}
}
stage("Trigger Jobs loading into Jenkins") {
steps {
script{
build job: 'Tools/Seed_Job_Executor'
}
}
}
}
}""" }
}
disabled(false)
configure {
it / 'properties' / 'com.sonyericsson.rebuild.RebuildSettings' {
'autoRebuild'('false')
'rebuildDisabled'('false')
}
}
}
I took the above code and tried to use it in the JCasC configuration (we are running Jenkins with the helm chart on top of EKS), and created this values file:
controller:
  JCasC:
    configScripts:
      casc-jobs: |
        jobs:
        - script: >
          pipelineJob('DSL_Seed_Job') {
            definition {
              cpsScm {
                '''pipeline {
                  agent any
                  stages {
                    stage('Trigger disable script approval') {
                      steps {
                        script {
                          build job: 'Tools/Disable_Script_Approval'
                        }
                      }
                    }
                    stage('Trigger Jobs loading into Jenkins') {
                      steps {
                        script {
                          build job: 'Tools/Seed_Job_Executor'
                        }
                      }
                    }
                  }
                }'''
              }
            }
          }
...
...
So when I run helm upgrade, I see that the Jenkins pod fails to read the JCasC jobs configuration, and this error message appears:
2021-10-21 11:04:37.178+0000 [id=22] SEVERE hudson.util.BootFailure#publish: Failed to initialize Jenkins
while scanning a simple key
 in /var/jenkins_home/casc_configs/casc-jobs.yaml, line 3, column 3:
    pipelineJob('DSL_Seed_Job') {
    ^
could not find expected ':'
 in /var/jenkins_home/casc_configs/casc-jobs.yaml, line 12, column 38:
 ... build job: 'Tools/Disable_Script_Approval'
What can be the cause of this error? I got the DSL syntax from the xml-job-to-job-dsl Jenkins plugin, so I don't understand what I am missing here.
Thanks in advance,
Lior
You probably figured this out by now, but it looks to me like a YAML indentation issue; I believe the block starting with "pipelineJob" should be indented like so:
jobs:
- script: >
    pipelineJob('DSL_Seed_Job') {
    ...
    }
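Applied to the values file from the question, the corrected block would look something like this (a sketch keeping the same job body, abbreviated with "..."):
controller:
  JCasC:
    configScripts:
      casc-jobs: |
        jobs:
        - script: >
            pipelineJob('DSL_Seed_Job') {
              definition {
                cpsScm {
                  ...
                }
              }
            }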
Below is the skeleton of my Jenkinsfile. The post directive is executed on success but not in case of failure. Is this the expected behavior of Jenkins?
Thanks
#!/usr/bin/env groovy
pipeline {
    agent {
        node { label 'ent_linux_node' }
    }
    stages {
        stage('Prepare') {
            steps {
                //some steps
            }
        }
        stage('Build') {
            steps {
                //Fails at this stage
            }
        }
        stage('ArtifactoryUploads') {
            steps {
                //skips since previous stage failed
            }
        }
    }
    post {
        always {
            //Doesn't get executed but I am expecting it to execute
        }
    }
}
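For reference, post { always } is documented to run regardless of the build's outcome, so the behavior described above is not expected. A minimal sketch (with a deliberately failing shell step) in which the always block should still execute:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'exit 1' // force a failure
            }
        }
    }
    post {
        always {
            echo 'runs on success and on failure'
        }
        failure {
            echo 'runs only on failure'
        }
    }
}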
Here is my Jenkinsfile for a multi-branch pipeline project:
pipeline {
    agent any
    stages {
        stage('MASTER build') {
            when {
                branch 'master'
            }
            steps {
                sh 'mvn -P x clean deploy'
            }
        }
        stage('BRANCH build') {
            when {
                not { branch 'master' }
            }
            steps {
                sh 'mvn -P x clean package'
            }
        }
    }
    post {
        failure {
            emailext "${EMAIL_TEMPLATE}"
        }
    }
}
When I build my project in Jenkins, the following error occurs:
WorkflowScript: 26: Step does not take a single required parameter - use named parameters instead @ line 26, column 13.
   emailext "${EMAIL_TEMPLATE}"
Why can't I use an EMAIL_TEMPLATE global variable containing the whole emailext definition?
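The error message itself points at the cause: emailext has no single default parameter, so the whole call cannot be collapsed into one string and must be invoked with named parameters. A hedged sketch (the subject, body, and recipient values are illustrative only):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -P x clean package'
            }
        }
    }
    post {
        failure {
            // emailext must be called with named parameters
            emailext(
                subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: '${DEFAULT_CONTENT}', // email-ext token, expanded by the plugin
                to: 'team@example.com'      // illustrative recipient
            )
        }
    }
}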
I want to run multiple stages inside a lock within a declarative Jenkins pipeline:
pipeline {
    agent any
    stages {
        lock(resource: 'myResource') {
            stage('Stage 1') {
                steps {
                    echo "my first step"
                }
            }
            stage('Stage 2') {
                steps {
                    echo "my second step"
                }
            }
        }
    }
}
I get the following error:
Started by user anonymous
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 10: Expected a stage @ line 10, column 9.
       lock(resource: 'myResource') {
       ^
WorkflowScript: 10: Stage does not have a name @ line 10, column 9.
       lock(resource: 'myResource') {
       ^
WorkflowScript: 10: Nothing to execute within stage "null" @ line 10, column 9.
       lock(resource: 'myResource') {
       ^
3 errors
    at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
    at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
    at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
    at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
    at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
    at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
    at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
    at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
    at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
    at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:116)
    at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:430)
    at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:393)
    at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:257)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:405)
Finished: FAILURE
What's the problem here? The documentation explicitly states:
lock can be also used to wrap multiple stages into a single
concurrency unit
It should be noted that you can lock all stages in a pipeline by using the lock option:
pipeline {
    agent any
    options {
        lock resource: 'shared_resource_lock'
    }
    stages {
        stage('will_already_be_locked') {
            steps {
                echo "I am locked before I enter the stage!"
            }
        }
        stage('will_also_be_locked') {
            steps {
                echo "I am still locked!"
            }
        }
    }
}
This has since been addressed.
You can lock multiple stages by grouping them in a parent stage, like this:
stage('Parent') {
    options {
        lock('something')
    }
    stages {
        stage('one') {
            ...
        }
        stage('two') {
            ...
        }
    }
}
(Don't forget you need the Lockable Resources Plugin)
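Filled in with trivial steps, a complete runnable sketch of that shape:
pipeline {
    agent any
    stages {
        stage('Parent') {
            options {
                lock('something') // held across both nested stages
            }
            stages {
                stage('one') {
                    steps {
                        echo 'stage one'
                    }
                }
                stage('two') {
                    steps {
                        echo 'stage two'
                    }
                }
            }
        }
    }
}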
The problem is that, although declarative pipelines were technically available in beta in September 2016, the blog post you reference (from October) documents scripted pipelines, not declarative ones (it doesn't say as much, so I feel your pain). Lockable Resources hasn't yet been baked into declarative pipeline as a step in a way that would enable the feature you're looking for.
You can do:
pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            steps {
                lock('something') {
                    echo 'stage one'
                }
            }
        }
    }
}
But you can't do:
pipeline {
    agent { label 'docker' }
    stages {
        lock('something') {
            stage('one') {
                steps {
                    echo 'stage one'
                }
            }
            stage('two') {
                steps {
                    echo 'stage two'
                }
            }
        }
    }
}
And you can't do:
pipeline {
    agent { label 'docker' }
    stages {
        stage('one') {
            lock('something') {
                steps {
                    echo 'stage one'
                }
            }
        }
    }
}
You could use a scripted pipeline for this use case.
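For reference, a minimal scripted-pipeline sketch in which lock() can legitimately wrap several stages:
// Scripted pipeline: lock() is an ordinary step here and can enclose stage blocks
node('docker') {
    lock('something') {
        stage('one') {
            echo 'stage one'
        }
        stage('two') {
            echo 'stage two'
        }
    }
}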
If the resource is only used by this pipeline you could also disable concurrent builds:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('will_already_be_locked') {
            steps {
                echo "I am locked before I enter the stage!"
            }
        }
        stage('will_also_be_locked') {
            steps {
                echo "I am still locked!"
            }
        }
    }
}
Although the options {} block offers this functionality, it is not possible to use it in some cases.
Let's say you have to name your lock() after a branch or an environment: you have a pipeline that you don't want blocked by disableConcurrentBuilds(), and you want to lock resources based on a discriminator. You cannot name the lock() inside the options {} block using an environment variable, or any other variable from the pipeline, because that block is evaluated outside the agent.
The best solution in my opinion is the following:
pipeline {
    agent { label 'docker' }
    stages {
        stage('Wrapper') {
            steps {
                script {
                    lock(env.BRANCH_NAME) {
                        stage('Stage 1') {
                            sh('echo "stage1"')
                        }
                        stage('Stage 2') {
                            sh('echo "stage2"')
                        }
                    }
                }
            }
        }
    }
}
Keep in mind that the script {} block takes a block of Scripted Pipeline and executes it within the Declarative Pipeline, so no steps {} blocks are allowed inside it.
I run multiple build and test containers on the same build nodes. The test containers must take a lock on the node name, which is used as the DB username for the tests:
lock(resource: "${env.NODE_NAME}" as String, variable: 'DBUSER')
Locks in options are computed at load time, but NODE_NAME is not known that early. To lock multiple stages for visual effect, we can create stages inside a script block, like the 'run test' stage in the snippet below. The stage visualization is just as good as that of other stage blocks.
pipeline {
    agent any
    stages {
        stage('refresh') {
            steps {
                echo "refreshing on $NODE_NAME"
                lock(resource: "${env.NODE_NAME}" as String, variable: 'DBUSER') {
                    sh '''
                        printenv | sort
                    '''
                    script {
                        stage('run test') {
                            sh '''
                                printenv | sort
                            '''
                        }
                    }
                }
            }
        }
    }
}