Why is Jenkins unable to wrap this Closure in a script block? - jenkins

So I have the following code structure for my Jenkins pipeline:
Shared lib in vars/myWrapper.groovy:
def call(Closure buildScript) {
    script {
        buildScript()
    }
}
Jenkinsfile:
@Library('mySharedLib') _
pipeline {
    agent any
    stages {
        stage {
            steps {
                myWrapper {
                    /* Some groovy code that needs to be wrapped */
                }
            }
        }
    }
}
In reality, myWrapper does a little more work, but for brevity the important part is that it should wrap my closure in a script block. However, when I run this pipeline I get this error on the Groovy code I wrote inside myWrapper:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 24: Method calls on objects not allowed outside "script" blocks.
For context, the object it's referring to is docker, since I do some docker.build calls in my real closure.
Is there some reason Jenkins ignores the script block from myWrapper?
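For reference, a commonly suggested workaround (a sketch, not from the original thread): the declarative parser validates the Jenkinsfile at compile time, before myWrapper ever runs, so a script block added at runtime inside the library step cannot satisfy that check. Wrapping the call site itself in the Jenkinsfile usually does:

```groovy
// Sketch: put script{} around the call site in the Jenkinsfile itself,
// since declarative validation happens before myWrapper runs.
// The docker.build call is illustrative, matching the question's context.
steps {
    script {
        myWrapper {
            docker.build('my-image')
        }
    }
}
```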

Related

"Ambiguous expression could be either a parameterless closure expression or an isolated open code block" in Jenkins parallel execution throws error

The following code throws the error below:
WorkflowScript: Groovy compilation error(s) in script. Error(s):
"Ambiguous expression could be either a parameterless closure expression or an isolated open code block;
solution: Add an explicit closure parameter list"
if(!SkipLanguageComponentTests){
^
2 errors
def SkipLanguageComponentTests = false;
pipeline {
    parameters {
        booleanParam(name: 'SkipLanguageComponentTests', defaultValue: false, description: 'XYZ')
    }
    stages {
        stage('Checkout Source') {
            steps {
                checkout scm
            }
        }
        stage("Component & Language Tests") {
            steps {
                parallel (
                    "componentTestsTask": {
                        //component test start
                        dir("docker") {
                            sh script: "docker-compose -f blah blah"
                        }
                        // some xyz step here
                        //component test ends here
                    },
                    "integrationTestTasks": {
                        // language test script starts
                        if (!SkipLanguageComponentTests) {
                            //run lang test and publish report
                        } else {
                            echo "Skip Language Component Tests"
                        }
                        // language test script ends
                    }
                )
            }
        }
    }
}
I have tried following the documentation: https://www.jenkins.io/blog/2017/09/25/declarative-1/
I have also tried this, based on the answer in: Running stages in parallel with Jenkins workflow / pipeline
stage("Parallel") {
    steps {
        parallel (
            "firstTask": {
                //do some stuff
            },
            "secondTask": {
                // Do some other stuff in parallel
            }
        )
    }
}
Can someone help me resolve this?
OK, here is the working version of your pipeline, with the IF handled properly:
pipeline {
    parameters {
        booleanParam(name: 'SkipLanguageComponentTests', defaultValue: false, description: '')
    }
    agent { label 'master' }
    stages {
        stage("Component & Language Tests") {
            parallel {
                stage("componentTestsTask") {
                    steps {
                        //component test start
                        echo "docker"
                        // some xyz step here
                        //component test ends here
                    }
                }
                stage("integrationTestTasks") {
                    steps {
                        script {
                            // language test script starts
                            if (!params.SkipLanguageComponentTests) {
                                echo "not skipped"
                                //run lang test and publish report
                            } else {
                                echo "Skip Language Component Tests"
                            }
                            // language test script ends
                        }
                    }
                }
            }
        }
    }
}
This pipeline is not optimal; use the information below to improve it.
Notes:
You are using a declarative pipeline, so it is better to stay with the parallel section expressed in the declarative way. Help is here: Jenkins doc about parallel.
There is a scripted pipeline as well: Jenkins doc about scripted pipeline.
As I stated in the original answer, you have to use params to refer to the input parameter properly.
If you are using code, you have to enclose it in a script section; this is like putting a scripted pipeline inside a declarative one.
An IF statement can also be done declaratively: Jenkins doc about WHEN.
I recommend not mixing these two styles: if you are using both for a good reason, they should be separated from one another as much as possible, for example by using a function or a library inside the declarative pipeline. The goal is to keep the pipeline as clear and readable as possible.
You are using an input parameter, so refer to it the way Jenkins expects for parameters:
if (!params.SkipLanguageComponentTests)
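For reference, the IF above can also be expressed declaratively with when{}; a minimal sketch based on the pipeline in this answer:

```groovy
stage("integrationTestTasks") {
    // Skip the whole stage declaratively instead of an if inside script{}
    when {
        expression { !params.SkipLanguageComponentTests }
    }
    steps {
        echo "not skipped"
        //run lang test and publish report
    }
}
```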

Jenkins question - adding a post section to a scripted pipeline & difference between methods

I have two questions regarding Jenkins.
1. What is the difference between the two methods?
Method 1
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
Method 2
node ("jenkins-nodes") {
    stage("git clone") {
        echo 'Hello World'
    }
}
As I understand it, in the first method I can add a post section that runs regardless of the result of the job. I wish to add the same post section to the second method, but it is not working. Any ideas?
What is the difference between the 2 methods?
As Noam Helmer wrote, pipeline{} is declarative syntax and node{} is scripted syntax.
In general I recommend always using pipeline{}, as it makes common tasks easier to write, and visualization with the Blue Ocean plugin works best with declarative pipelines.
When a declarative pipeline becomes too inflexible, you can insert scripted blocks using the script{} step:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    echo 'Hello World'
                }
            }
        }
    }
}
A cleaner approach is to define a function, which is scripted by definition, and use it as a custom step in the declarative pipeline. Note this works even without script{}!
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                myStep 42
            }
        }
    }
}

void myStep( def x ) {
    echo "Hello World $x" // prints "Hello World 42"
}
In complex pipelines that use lots of custom code, I usually have one function per stage. This keeps the pipeline{} clean and makes it easy to see the overall structure of the pipeline, without script{} clutter all over the place.
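A minimal sketch of that one-function-per-stage layout (stage and function names are made up for illustration):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') { steps { buildStage() } }
        stage('Test')  { steps { testStage() } }
    }
}

// Each stage's logic lives in its own scripted function,
// keeping the pipeline{} block itself free of script{} clutter.
void buildStage() {
    echo 'building...'
}

void testStage() {
    echo 'testing...'
}
```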
As I understand in the first method I can add Post section that will be running regardless of the result of the job. I wish to add the same post section for the second method, but it is not working. Any ideas?
post{} is only available in declarative pipeline, but not within a scripted pipeline or a scripted section of a declarative pipeline. You can use try{} catch{} finally{} instead. catch{} runs only when an error occurs, finally{} always runs. You can use both or either of catch{} and finally{}.
node ("jenkins-nodes") {
    stage("git clone") {
        echo 'Hello World'
        try {
            // some steps that may fail
        }
        catch( Exception e ) {
            echo "An error happened: $e"
            // Rethrow the exception, to let the build fail.
            // If you remove "throw", the error would be ignored by Jenkins.
            throw
        }
        finally {
            echo "Cleaning up some stuff"
        }
    }
}

Reduce the pipeline script for running multiple jobs in parallel

I have the snippet below for getting the matching job names and then triggering all of them to run in parallel.
Shared library file CommonPipelineMethods.groovy
import jenkins.instance.*
import jenkins.model.*
import hudson.model.Result
import hudson.model.*
import org.jenkinsci.plugins.workflow.support.steps.*

class PipelineMethods {
    def buildSingleJob(downstreamJob) {
        return {
            result = build job: downstreamJob.fullName, propagate: false
            echo "${downstreamJob.fullName} finished: ${result.rawBuild.result}"
        }
    }
}
return new PipelineMethods();
The main Jenkinsfile script:
def commonPipelineMethods;
pipeline {
    stages {
        stage('Load Common Methods into Pipeline') {
            steps {
                script {
                    def JenkinsFilePath = '/config/jenkins/jobs'
                    commonPipelineMethods = load "${WORKSPACE}${JenkinsFilePath}/CommonPipelineMethods.groovy"
                }
            }
        }
        stage('Integration Test Run') {
            steps {
                script {
                    matchingJobs = commonPipelineMethods.getIntegrationTestJobs(venture_to_test, testAgainst)
                    parallel matchingJobs.collectEntries{downstreamJob-> [downstreamJob.name, commonPipelineMethods.buildSingleJob(downstreamJob)]}
                }
            }
        }
    }
}
The script works fine, but the Map-building and parallel step make the script busy and hard to follow. The main purpose is to make the pipeline script cleaner and easier for others to maintain: something simple like calling external methods as matchingJobs = commonMethods.getIntegrationTestJobs(venture, environment), so others understand it right away and know what the code does in this context.
I tried several ways to improve it, for example moving the single-job build logic out of the pipeline itself and into the external library:
def buildSingleJobParallel (jobFullName) {
    String tempPipelineResult = 'SUCCESS'
    result = build job: jobFullName, propagate: false
    echo "${jobFullName} finished: ${result.rawBuild.result.toString()}"
    if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
        tempPipelineResult = 'FAILURE'
    }
}
Then Jenkins prompted me with:
groovy.lang.MissingMethodException: No signature of method: PipelineMethods.build() is applicable for argument types: (java.util.LinkedHashMap) values: [[job:test_1, propagate:false]]
I understand that the build() method comes from the Jenkins Pipeline Build Step plugin, but I failed to import and use it inside that commonMethods library (I load this local library with the load() method in the very first phase of my pipeline script).
So my questions are:
Could I use the Jenkins Pipeline Build Step plugin inside the external library mentioned above?
If that is not possible, is there any cleaner way to make my script simpler and clearer?
Thanks, everybody!
I am not sure if it is runnable or looks clearer, but I just tried to put everything together from the question and the comments:
//function that returns closure to be used as one of parallel jobs
def buildSingleJobParallel(steps, mjob){
return {
def result = steps.build job: mjob.fullName, propagate: false
steps.echo "${mjob.fullName} finished: ${steps.result.rawBuild.result}"
if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
steps.currentBuild.result = 'FAILURE'
}
}
}
stage('Integration Test Run') {
    steps {
        script {
            //build map<jobName, Closure> and run jobs in parallel
            parallel matchingJobs.collectEntries{mjob-> [mjob.name, buildSingleJobParallel(this, mjob)]}
        }
    }
}

Calling Groovy code from Kotlin and passing a Closure as an argument

I am writing some code for generating Jenkins jobs and I am using Kotlin for the logic to generate the Jenkins jobs. The Jenkins plugin I am using is the Jenkins Job DSL plugin which is written in Groovy to generate the jobs. I am having trouble setting the definition parameter when calling from the Kotlin code to the Groovy code due to not knowing how to create an appropriate groovy.lang.Closure object.
Here is my Kotlin code:
val pipelineJob = dslFactory.pipelineJob("my-job")
// pipelineJob.definition(JOB_DEFINITION_GOES_HERE) <-- this is the part I can't figure out
Here is the code in Groovy that I am trying to port to work in Kotlin:
dslFactory.pipelineJob("my-job").with {
    definition {
        cps {
            script("deleteDir()")
            sandbox()
        }
    }
}
Here is the definition of the method I am calling:
void definition(@DslContext(WorkflowDefinitionContext) Closure definitionClosure) {
Other Links:
DslFactory

Required context class hudson.FilePath is missing Perhaps you forgot to surround the code with a step that provides this, such as: node

When I load another Groovy file in my Jenkinsfile, it shows me the following error:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
I made a Groovy file that contains a function and I want to call it in my declarative Jenkinsfile, but it shows an error.
My Jenkinsfile:
def myfun = load 'testfun.groovy'
pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('calling function') {
            steps {
                script {
                    myfun('${REPO_PATH}','${APP_NAME}')
                }
            }
        }
    }
}
Result:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
What is the right way to do this?
You either need to use a scripted pipeline and put the load instruction inside the node section (see this question), or, if you are already using a declarative pipeline (which seems to be the case), you can include it in the environment section:
environment {
    REPO_PATH='/home/manish/Desktop'
    APP_NAME='test'
    MY_FUN = load 'testfun.groovy'
}
We have to wrap the load with node {} so that Jenkins executes it on a node. In case we would like to execute on a specific agent node, we can write node('agent name') {}.
Example:
node {
    def myfun = load 'testfun.groovy'
    pipeline {
        agent any
        environment {
            REPO_PATH='/home/manish/Desktop'
            APP_NAME='test'
        }
        stages {
            stage('calling function') {
                steps {
                    script {
                        myfun('${REPO_PATH}','${APP_NAME}')
                    }
                }
            }
        }
    }
}
Loading the function in an initial script block inside the pipeline worked for me. Something like below:
def myfun
pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('load function') {
            steps {
                script {
                    myfun = load 'testfun.groovy'
                }
            }
        }
        stage('calling function') {
            steps {
                script {
                    myfun("${REPO_PATH}","${APP_NAME}")
                }
            }
        }
    }
}
I got this error message when I was calling an sh script that does not exist in the repository / file system. Look in the stack trace for the following line:
at WorkflowScript.run(WorkflowScript:135)
The 135 marks the line in the Jenkinsfile on which the missing script or the error occurs.
Another possibility is that, due to earlier/underlying errors, the context has been removed, for example by multiple executor machines. This happens if you are missing the node (e.g. script) block, and especially in the post always block. You can use the if check in other places as well. After fixing this, you will get another error: the one that actually caused this error message.
post {
    always {
        script {
            //skip the step if context is missing
            if (getContext(hudson.FilePath)) {
                echo "It works"
            }
        }
    }
}
See https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/troubleshooting-guides/how-to-troubleshoot-hudson-filepath-is-missing-in-pipeline-run
