java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps - jenkins

This question ties in with one of my earlier questions here
Tl;dr of the linked question:
Basically I want a generic pipeline to generate distributable bundles (zip files etc) of any of my applications. An application can have multiple components (almost all components are Java/Spring or NodeJS projects).
The plan was to store a pipeline descriptor for each application in a JSON file like this:
{
"app": "MyApp",
"components": [
{
"name": "Component1",
"scmUrl": "https://git.mycompany.com/app/component1.git",
"buildCmd": "mvn clean install"
},
{
"name": "Component2",
"scmUrl": "https://git.mycompany.com/app/component2.git",
"buildCmd": "npm run build"
}
]
}
There will be a descriptor for each application, and they will all be checked into a separate repository.
When the pipeline runs, the required application name is an input parameter; that repository is cloned and the respective JSON descriptor is loaded.
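A minimal sketch of that loading step, assuming the Pipeline Utility Steps plugin for readJSON (the descriptor repo URL, the APP_NAME parameter and the file layout are illustrative only):
def components = []   // filled from the JSON descriptor, used later by the stage generators

pipeline {
    agent any
    parameters {
        string(name: 'APP_NAME', description: 'Application whose descriptor should be loaded')
    }
    stages {
        stage('Load Descriptor') {
            steps {
                script {
                    // hypothetical repository holding the per-application JSON descriptors
                    git url: 'https://git.mycompany.com/pipelines/descriptors.git', branch: 'master'
                    def descriptor = readJSON file: "${params.APP_NAME}.json"
                    components = descriptor.components
                }
            }
        }
        // ... Checkout / Build stages as described below ...
    }
}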
This is where things start to get tricky. All components will have some common stages (Checkout, Build, Docker Build), so I am trying to loop over the components array and run the stages in parallel:
def parallelCheckoutStages = components.collectEntries {
[ "Checkout ${it.name}", generateCheckoutStage(it) ]
}
def generateCheckoutStage(component) {
return {
stage("Stage: ${component.name}") {
script {
git(url: component.scmUrl, branch: component.branch)
}
}
}
}
def parallelBuildStages = components.collectEntries {
[ "Build ${it.name}", generateBuildStage(it) ]
}
def generateBuildStage(component) {
return {
stage("Stage: ${component.name}") {
script {
sh script: "${component.buildCmd}"
}
}
}
}
pipeline {
agent any
stages {
// ... clone repo and load JSON descriptor ...
stage("Checkout Components") {
steps {
script {
parallel parallelCheckoutStages
}
}
}
stage("Build Components") {
steps {
script {
parallel parallelBuildStages
}
}
}
}
}
Sometimes I need to run the build command inside a Docker container (only for some components). To accomplish this I want to change the generateBuildStage method to something like this:
def generateBuildStage(component) {
if(component.requiresDocker) {
return {
stage("Stage: ${component.name}") {
agent {
docker {
image 'jdk11-mvn3.6'
}
}
script {
sh script: "${component.buildCmd}"
}
}
}
} else {
return {
stage("Stage: ${component.name}") {
script {
sh script: "${component.buildCmd}"
}
}
}
}
}
When I run the above code I get an error java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps
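For context, agent is a declarative directive that is only valid inside a declarative stage block; it is not a step, so inside a closure handed to the scripted parallel command Jenkins looks it up as a DSL step and fails. A rough sketch of a scripted workaround, assuming the Docker Pipeline plugin, drives the container with docker.image(...).inside instead:
def generateBuildStage(component) {
    return {
        stage("Stage: ${component.name}") {
            if (component.requiresDocker) {
                // scripted counterpart of the declarative docker agent
                docker.image('jdk11-mvn3.6').inside {
                    sh script: "${component.buildCmd}"
                }
            } else {
                sh script: "${component.buildCmd}"
            }
        }
    }
}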
As a second part to my question: does my pipeline seem hacky? I could replace the parallel stages entirely by creating individual jobs for each component and calling them from the pipeline. Which approach is better?
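For comparison, the downstream-job alternative boils down to something like this (the job name and its parameters are hypothetical):
def parallelComponentBuilds = components.collectEntries {
    [ "Build ${it.name}", {
        // 'component-build' is a hypothetical parameterised job that checks out, builds and packages one component
        build job: 'component-build', parameters: [
            string(name: 'SCM_URL', value: it.scmUrl),
            string(name: 'BUILD_CMD', value: it.buildCmd)
        ]
    } ]
}
parallel parallelComponentBuilds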

Related

Running script (stash) prior to parallel stages being invoked

I have a parallel stage setup, and would like to know if it's possible to run a script prior to the nested stages, so something like this:
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
}
}
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}
I know the above is wrong, I get the error Only one of "matrix", "parallel", "stages", or "steps" allowed for stage "E2E-PR-CYPRESS"
I already have the stash script in a setup stage at the beginning of my pipeline, but I'd like to be able to restart from the stage above on Jenkins, so I need the stash part in this stage because the parallel stages need to unstash the contents.
Updated Answer:
After playing a bit with the Restart from a Stage option, there seems to be a feature designed exactly for your needs, called Preserving stashes for Use with Restarted Stages:
Normally, when you run the stash step in your Pipeline, the resulting stash of artifacts is cleared when the Pipeline completes, regardless of the result of the Pipeline. Since stash artifacts aren’t accessible outside of the Pipeline run that created them, this has not created any limitations on usage. But with Declarative stage restarting, you may want to be able to unstash artifacts from a stage which ran before the stage you’re restarting from.
To enable this, there is a job property that allows you to configure a maximum number of completed runs whose stash artifacts should be preserved for reuse in a restarted run. You can specify anywhere from 1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
options {
preserveStashes()
// or
preserveStashes(buildCount: 5)
}
This built-in feature is exactly what you need to solve your issue without any special modifications to your code, as it will allow you to rerun the pipeline from any stage and still use the files that were previously stashed.
Original Answer:
You can actually achieve this quite simply using the scripted syntax for the parallel command, and it will also allow you to avoid the duplicate code in the parallel stages.
parallel: Execute in parallel
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
// do something
}, secondBranch: {
// do something else
},
failFast: true|false
In your case it can look like:
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
// Define the parallel execution stages
def stages = ['Cypress Tests 1', 'Cypress Tests 2']
// Create the parallel executions and run them
parallel stages.collectEntries {
["Running ${it}": {
node('aws_micro_slave_e2e') {
skipDefaultCheckout()
runE2eTests()
}
}]
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}
This way you can easily add more parallel steps by updating the stages list, or even receive it as an input parameter. In addition, you can create the parallel executions by different labels or test suites instead of by stage name.
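For example, a sketch of the test-suite variant (the suite names and the --spec directory layout are hypothetical):
def suites = ['smoke', 'regression']   // hypothetical suite names
parallel suites.collectEntries { suite ->
    ["Cypress ${suite}": {
        node('aws_micro_slave_e2e') {
            unstash 'cypress-dir'
            // assumes one folder per suite under cypress/integration
            sh "npx cypress run --spec 'cypress/integration/${suite}/**/*'"
        }
    }]
}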
You can add a Prepare stage at the top like this:
stages{
stage('Preparation'){
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
steps {
script {
stash name: 'cypress-dir', includes: 'cypress/**/*'
}
}
}
stage('E2E-PR-CYPRESS') {
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
An out-of-the-box concept
I propose splitting the job into two parts, taking the following into consideration:
You currently use the EC2 plugin, as the current agents are EC2 instances
Running the parallel stages with the same stashed content ready to unstash
Create jenkins pipeline job 1:
This job will check out the workspace with any type of agent
Create a Packer JSON template to build a customised AMI for the EC2 agents
The customised AMI will contain the stashed contents, moved to a directory that will be present on the EC2 instance when the agent is built
Output the AMI ID, then run a Groovy job to update the EC2 plugin's AMI ID with the customised AMI ID, temporarily setting the AMI in memory on Jenkins
pipeline {
agent {
docker {
image 'test-container'
}
}
options {
buildDiscarder(
logRotator(
numToKeepStr: '10',
artifactNumToKeepStr: '10'
)
)
ansiColor('xterm')
gitConnection("git")
}
stages {
stage('Run Stash Cypress Functional Test') {
steps {
dir('functional-test') {
// develop branch is canary build, all other branches are stable builds
script {
sh """
# script to stash cypress tests
"""
}
}
}
}
stage('Functional Test AMI Build') {
steps {
dir('functional-test/packer') {
withAWS(role: 'PackerBuild', roleAccount: '123456789012', roleSessionName: 'Jenkins-Workflow-FunctionalTest-Packer') {
script {
sh """
# packer json script will require to copy contents from workspace, run the script to stash content
# packer json script will require to capture new AMI ID
# https://discuss.devopscube.com/t/how-to-get-the-ami-id-after-a-packer-build/36
# https://www.packer.io/docs/post-processors/manifest
packer validate FunctionalTestPacker.json
packer build -debug FunctionalTestPacker.json
# grab AMI ID and export as jenkins env variable
"""
}
}
}
}
}
stage('run groovy script to update AMI ID on EC2 plugin') {
steps {
dir('groovy-job-dir') { // placeholder: directory containing the Groovy job
script {
sh """
# run groovy job to update AMI on Jenkins EC2 plugin
# https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
"""
}
}
}
}
stage('Kickoff Functional Test Deploy') {
// pipeline checkbox parameter, when ticked it will automatically kick off the functional test pipeline
when {
expression {params.RUN_TESTS.toBoolean()}
}
steps {
script{
env.branch = params.BRANCH
sh """
echo "Branch is ${branch}"
"""
}
build job: 'workflow/CypressFunctionaTestDeployAndRun',
parameters: [
string(name: 'BRANCH', value: env.branch)
],
wait : false
}
}
}
post {
always {
cleanWs()
}
}
}
Create jenkins pipeline job 2:
This job will create the EC2 agents via the plugin from the customised AMI from pipeline job 1
This means your agents will have the same workspace ready to unstash - so you can execute a parallel run
Also, you could move a lot of the user data script that is in the EC2 plugin into the customised AMI build, thus cutting down the time each EC2 agent needs to get ready to carry out execution
pipeline {
agent none
stages {
stage('E2E-PR-CYPRESS') {
when {
allOf {
expression {
return fileExists("cypress.json")
}
branch "PR-*"
}
}
parallel {
stage('Cypress Tests 1') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
stage('Cypress Tests 2') {
agent { label 'aws_micro_slave_e2e' }
options { skipDefaultCheckout() }
steps {
runE2eTests()
}
}
}
}
}
post {
always {
e2eAfterCypressRun(this, true)
}
}
}

Jenkins. Use a shared library on the options phase

So I have created a shared library in Jenkins with a listener that gets triggered each time the pipeline reads a FlowNode, so I can run Groovy code before and after each stage, step, etc.
I'm able to call the shared library in a steps block like this:
pipeline {
agent any
stages {
stage('prepare') {
steps{
prepareStepsWrapper()
}
}
stage('step1') {
steps {
echo 'step1'
}
}
stage('step2') {
steps {
echo 'step2'
}
}
stage('step3') {
steps {
echo 'step3'
// fail on purpose
sh 'notfoundexecutablelol'
}
}
stage('step4') {
steps {
echo 'step4'
}
}
}
post{
always{
println env.getEnvironment()
}
}
}
And it works great!
With this approach the 'prepare' stage needs to be filtered out, so I've switched to the options directive:
pipeline {
agent any
options {
prepareStepsWrapper()
}
stages {
stage('step1') {
steps {
echo 'step1'
}
}
...
}
}
But the pipeline fails with
WorkflowScript: 4: Invalid option type "prepareStepsWrapper"
tl;dr; How can I load a shared library within the options directive?
What does the options directive do?
The options directive allows configuring Pipeline-specific options
from within the Pipeline itself.
You can't call the shared library from the options directive. It should not be used to execute any logic; rather, it sets configuration for the pipeline. All available options and their documentation can be found here.
You could try to create a stage that simply calls your prepareStepsWrapper() and use locks to prevent other stages from being executed before this stage.
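A minimal sketch of that idea (the lock resource name is hypothetical, and the lock step assumes the Lockable Resources plugin):
pipeline {
    agent any
    stages {
        stage('prepare') {
            steps {
                // hypothetical resource name
                lock(resource: 'pipeline-prepare') {
                    prepareStepsWrapper()
                }
            }
        }
        stage('step1') {
            steps {
                echo 'step1'
            }
        }
        // remaining stages as before
    }
}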

How to aggregate test results in jenkins parallel pipeline?

I have a Jenkinsfile with a definition for parallel test execution, and the task is to grab the test results from both branches in order to process them in a post step somewhere.
The problem is: how do I do this? Searching for example code does not bring up anything; the examples either explain parallel, or they explain post with junit.
pipeline {
agent { node { label 'swarm' } }
stages {
stage('Testing') {
parallel {
stage('Unittest') {
agent { node { label 'swarm' } }
steps {
sh 'unittest.sh'
}
}
stage ('Integrationtest') {
agent { node { label 'swarm' } }
steps {
sh 'integrationtest.sh'
}
}
}
}
}
}
Defining a post {always{junit(...)}} step at both parallel stages yielded a positive reaction from the Blue Ocean GUI, but the test report recorded close to double the number of tests; very odd, some file must have been scanned twice. Adding this post step to the surrounding "Testing" stage gave an error.
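For reference, the per-stage variant described above looks roughly like this:
stage('Unittest') {
    agent { node { label 'swarm' } }
    steps {
        sh 'unittest.sh'
    }
    post {
        always {
            junit 'build/*.xml'   // records whatever result XML is present in this node's workspace
        }
    }
}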
I am missing an example detailing how to post-process test results that get created in a parallel block.
Just to record my solution for the internet:
I came up with stashing the test results in both parallel steps, and adding a final step that unstashes the files, then post-processes them:
pipeline {
agent { node { label 'swarm' } }
stages {
stage('Testing') {
parallel {
stage('Unittest') {
agent { node { label 'swarm' } }
steps {
sh 'rm build/*'
sh 'unittest.sh'
}
post {
always {
stash includes: 'build/**', name: 'testresult-unittest'
}
}
}
stage ('Integrationtest') {
agent { node { label 'swarm' } }
steps {
sh 'rm build/*'
sh 'integrationtest.sh'
}
post {
always {
stash includes: 'build/**', name: 'testresult-integrationtest'
}
}
}
}
}
stage('Reporting') {
steps {
unstash 'testresult-unittest'
unstash 'testresult-integrationtest'
}
post {
always {
junit 'build/*.xml'
}
}
}
}
}
One observation though: you have to pay attention to cleaning up your workspace. Both test stages create one file each, but on the second run both workspaces are inherited from the previous run and have both previously created test results present in the build directory.
So you have to remove any remains of test results before you start a new run; otherwise you'd stash an old version of the test result from the "other" stage. I don't know if there is a better way to do this.
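One alternative sketch is to wipe just the build directory at the start of each test stage; deleteDir() is built in, while cleanWs() (whole workspace) would require the Workspace Cleanup plugin:
stage('Unittest') {
    agent { node { label 'swarm' } }
    steps {
        dir('build') {
            deleteDir()   // removes leftover results; the test script is assumed to recreate build/
        }
        sh 'unittest.sh'
    }
    post {
        always {
            stash includes: 'build/**', name: 'testresult-unittest'
        }
    }
}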
To ensure the stage('Reporting') will always be executed, put all the steps in the post block:
post {
always {
unstash 'testresult-unittest'
unstash 'testresult-integrationtest'
junit 'build/*.xml'
}
}

How to use Jenkins declarative pipeline to build and test on multiple platforms

I'm trying to do something that I feel should be simple to do, but I can't figure out how.
Basically I have a Jenkins master (running on Linux) and two slaves, one on Windows and the other on macOS.
I want to build my project on all 3 platforms and run GTest tests on all 3 platforms too.
I can build and run the test, but the junit step doesn't seem to collect any test results.
I tried to put the post block everywhere, but it just doesn't work. If I try to put the post block in the Test stage or as a sibling of stages, I get the following error:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
which is caused by agent none - the post block doesn't know where to run.
So I tried to put the post block inside the node block in my parallel step for the Test stage, but it doesn't seem to do anything - it doesn't even show up in the console output.
Here's my Jenkinsfile:
pipeline {
agent none
stages {
stage ('Clean') {
steps {
parallel (
"linux" : {
node ("linux") {
dir("build") {
deleteDir()
writeFile file:'dummy', text:'' // Creates the directory
}
}
},
"windows" : {
node('windows') {
dir("build") {
deleteDir()
writeFile file:'dummy', text:'' // Creates the directory
}
}
},
"mac" : {
node('mac') {
dir("build") {
deleteDir()
writeFile file:'dummy', text:'' // Creates the directory
}
}
}
)
}
}
stage ('Build') {
steps {
parallel (
"linux" : {
node ("linux") {
checkout scm
dir("build") {
sh '/opt/cmake/bin/cmake .. -DCMAKE_BUILD_TYPE=Release'
sh 'make'
}
}
},
"windows" : {
node('windows') {
checkout(changelog: false, scm: scm) // Changelog to false, otherwise Jenkins shows duplicates. Only linux (the Jenkins master) has the changelog enabled.
dir("build") {
bat 'cmake .. -G "Visual Studio 15 2017 Win64" -DCMAKE_PREFIX_PATH=C:/Qt/5.9.1/msvc2017_64'
bat "\"${tool 'MSBuild'}\" project.sln /p:Configuration=Release /p:Platform=\"x64\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER} /m"
}
}
},
"mac" : {
node('mac') {
checkout(changelog: false, scm: scm) // Changelog to false, otherwise Jenkins shows duplicates. Only linux (the Jenkins master) has the changelog enabled.
dir("build") {
sh 'cmake .. -DCMAKE_PREFIX_PATH=/usr/local/Cellar/qt/5.9.1 -DCMAKE_BUILD_TYPE=Release'
sh 'make'
}
}
}
)
}
}
stage ('Test') {
steps {
parallel (
"linux" : {
node ("linux") {
dir('Build') {
sh './bin/project-tests --gtest_output=xml:project-tests-results.xml'
// Add other test executables here.
}
post {
always {
junit '*-tests-results.xml'
}
}
}
},
"windows" : {
node('windows') {
dir("build") {
bat 'tests\\project\\Release\\project-tests --gtest_output=xml:project-tests-results.xml'
// Add other test executables here.
}
post {
always {
junit '*-tests-results.xml'
}
}
}
},
"mac" : {
node('mac') {
dir("build") {
sh './bin/project-tests --gtest_output=xml:project-tests-results.xml'
// Add other test executables here.
}
post {
always {
junit '*-tests-results.xml'
}
}
}
}
)
}
}
}
}
What am I doing wrong?
The post{} block should only follow steps{} or parallel{} (for parallel stages) to take effect.
If you require post to be executed in a node environment, you should provide a node to the entire stage (via an agent{} statement).
You could try to use parallel stage execution. I'd also suggest using functions to shorten the code.
Something like this:
void Clean() {
dir("build") {
deleteDir()
writeFile file:'dummy', text:'' // Creates the directory
}
}
void SmthElse(def optionalParams) {
// some actions here
}
pipeline {
agent none
options {
skipDefaultCheckout(true) // to avoid force checkouts on every node in a first stage
disableConcurrentBuilds() // to avoid concurrent builds on same nodes
}
stages {
stage('Clean') {
failFast false
parallel {
stage('Linux') {
agent {label 'linux'}
steps {Clean()}
post {
always {
// post statements for 'linux' node
SmthElse(someParameter)
}
}
}
stage('Windows') {
agent {label 'windows'}
steps {Clean()}
post {
// post statements for 'windows' node
}
}
stage('MacOS') {
agent {label 'mac'}
steps {Clean()}
post {
// post statements for 'mac' node
}
}
}
post {
// Post statements OUTSIDE of nodes (i.e. send e-mail of a stage completion)
}
}
// other stages (Build/Test/Etc.)
}
}
Alternatively you can use node in post statements:
stage('Test') {
steps {
// your parallel Test steps
}
post {
always {
script {
parallel (
"linux" : {
node('linux') {
// 'linux' node post steps
}
},
"windows" : {
node('windows') {
// 'windows' node post steps
}
}
// etc
)
}
}
}
}

Job DSL to create "Pipeline" type job

I have installed the Pipeline Plugin, which used to be called the Workflow Plugin.
https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
I want to know how I can use Job DSL to create and configure a job of type Pipeline.
You should use pipelineJob:
pipelineJob('job-name') {
definition {
cps {
script('logic-here')
sandbox()
}
}
}
You can define the logic by inlining it:
pipelineJob('job-name') {
definition {
cps {
script('''
pipeline {
agent any
stages {
stage('Stage 1') {
steps {
echo 'logic'
}
}
stage('Stage 2') {
steps {
echo 'logic'
}
}
}
}
'''.stripIndent())
sandbox()
}
}
}
or load it from a file located in workspace:
pipelineJob('job-name') {
definition {
cps {
script(readFileFromWorkspace('file-seedjob-in-workspace.jenkinsfile'))
sandbox()
}
}
}
Example:
Seed-job file structure:
jobs
\- productJob.groovy
logic
\- productPipeline.jenkinsfile
then productJob.groovy content:
pipelineJob('product-job') {
definition {
cps {
script(readFileFromWorkspace('logic/productPipeline.jenkinsfile'))
sandbox()
}
}
}
I believe this question is asking how to use the Job DSL to create a pipeline job which references the Jenkinsfile for the project, rather than combining the job creation with the detailed step definitions as the answers to date have done. This makes sense: the Jenkins job creation and metadata configuration (description, triggers, etc.) could belong to Jenkins admins, but the dev team should have control over what the job actually does.
@meallhour, is the below what you're after? (works as of Job DSL 1.64)
pipelineJob('DSL_Pipeline') {
def repo = 'https://github.com/path/to/your/repo.git'
triggers {
scm('H/5 * * * *')
}
description("Pipeline for $repo")
definition {
cpsScm {
scm {
git {
remote { url(repo) }
branches('master', '**/feature*')
scriptPath('misc/Jenkinsfile.v2')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
// the single line below also works, but it
// only covers the 'master' branch and may not give you
// enough control.
// git(repo, 'master', { node -> node / 'extensions' << '' } )
}
}
}
}
Ref the Job DSL pipelineJob: https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob, and hack away at it on http://job-dsl.herokuapp.com/ to see the generated config.
The example above worked for me. Here's another example, based on my working setup:
pipelineJob('Your App Pipeline') {
def repo = 'https://github.com/user/yourApp.git'
def sshRepo = 'git@git.company.com:user/yourApp.git'
description("Your App Pipeline")
keepDependencies(false)
properties{
githubProjectUrl (repo)
rebuild {
autoRebuild(false)
}
}
definition {
cpsScm {
scm {
git {
remote { url(sshRepo) }
branches('master')
scriptPath('Jenkinsfile')
extensions { } // required as otherwise it may try to tag the repo, which you may not want
}
}
}
}
}
If you build the pipeline first through the UI, you can use the config.xml file and the Jenkins documentation https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob to create your pipeline job.
In Job DSL, pipeline is still called workflow, see workflowJob.
The next Job DSL release will contain some enhancements for pipelines, e.g. JENKINS-32678.
First you need to install the Job DSL plugin, then create a freestyle project in Jenkins and select Process Job DSLs from the dropdown in the Build section.
Select Use the provided DSL script and provide the following script:
pipelineJob('job-name') {
definition {
cps {
script('''
pipeline {
agent any
stages {
stage('Stage name 1') {
steps {
// your logic here
}
}
stage('Stage name 2') {
steps {
// your logic here
}
}
}
}
''')
}
}
}
Or you can create your job by pointing to the Jenkinsfile located in a remote Git repository:
pipelineJob("job-name") {
definition {
cpsScm {
scm {
git {
remote {
url("<REPO_URL>")
credentials("<CREDENTIAL_ID>")
}
branch('<BRANCH>')
}
}
scriptPath("<JENKINS_FILE_PATH>")
}
}
}
If you are using a Git repo, add a file called Jenkinsfile at the root directory of your repo. This should contain your pipeline definition.
