Calling Groovy code from Kotlin and passing a Closure as an argument - jenkins

I am writing code to generate Jenkins jobs, using Kotlin for the generation logic. The jobs themselves are created with the Jenkins Job DSL plugin, which is written in Groovy. I am having trouble setting the definition parameter when calling from the Kotlin code into the Groovy code, because I don't know how to create an appropriate groovy.lang.Closure object.
Here is my Kotlin code:
val pipelineJob = dslFactory.pipelineJob("my-job")
// pipelineJob.definition(JOB_DEFINITION_GOES_HERE) <-- this is the part I can't figure out
Here is the code in Groovy that I am trying to port to work in Kotlin:
dslFactory.pipelineJob("my-job").with {
    definition {
        cps {
            script("deleteDir()")
            sandbox()
        }
    }
}
Here is the definition of the method I am calling:
void definition(@DslContext(WorkflowDefinitionContext) Closure definitionClosure) {
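For what it's worth, here is one way to bridge the gap, as a sketch rather than a definitive answer: Groovy dispatches a closure invocation to a doCall method on the Closure subclass, and Job DSL sets the closure's delegate to the matching context object before invoking it. The KotlinClosure helper below is hypothetical (not part of any library), and the context imports assume the Job DSL package layout:
import groovy.lang.Closure
import javaposse.jobdsl.dsl.helpers.workflow.CpsContext
import javaposse.jobdsl.dsl.helpers.workflow.WorkflowDefinitionContext

// Hypothetical helper: Groovy invokes doCall() dynamically, and Job DSL
// assigns the relevant context object to the closure's delegate first.
class KotlinClosure<T>(private val block: T.() -> Unit) : Closure<Unit>(null, null) {
    @Suppress("UNCHECKED_CAST", "unused") // called dynamically by Groovy
    fun doCall() {
        (delegate as T).block()
    }
}

val pipelineJob = dslFactory.pipelineJob("my-job")
pipelineJob.definition(KotlinClosure<WorkflowDefinitionContext> {
    cps(KotlinClosure<CpsContext> {
        script("deleteDir()")
        sandbox()
    })
})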
Other Links:
DslFactory

Related

Why is Jenkins unable to wrap this Closure in a script block?

So I have the following code structure for my Jenkins pipeline:
Shared lib in vars/myWrapper.groovy:
def call(Closure buildScript) {
    script {
        buildScript()
    }
}
Jenkinsfile:
@Library('mySharedLib') _
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                myWrapper {
                    /* Some groovy code that needs to be wrapped */
                }
            }
        }
    }
}
In reality, myWrapper does a little more work, but for the sake of brevity the important part is that it should wrap my Closure in a script block. However, when I run this pipeline I get the following error for the Groovy code I wrote inside myWrapper:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 24: Method calls on objects not allowed outside "script" blocks.
For context, the object it's referring to is docker, since I do some docker.build calls in my real closure.
Is there some reason Jenkins ignores the script block from myWrapper?

parallel steps on different nodes on declarative jenkins pipeline cannot access variables defined in global scope

Let me preface this by saying that I don't yet fully understand how the Jenkins DSL / Groovy handles namespaces, scope, variables, etc.
In order to keep my code DRY I put repeated command sequences into variables.
It turns out the variable script below is not readable by the code in doParallelStuff. Why is that? Is there a way to share global variables defined in the script (or elsewhere) between the main pipeline steps and the doParallelStuff code?
def script = """\
#/bin/bash
python xyz.py
"""
def doParallelStuff() {
    def tests = [:]
    tests["1"] = {
        node {
            stage('ps1') {
                sh script
            }
        }
    }
    tests["2"] = {
        node {
            stage('ps2') {
                sh script
            }
        }
    }
    parallel tests
}
pipeline {
    agent any
    stages {
        stage("myStage") {
            steps {
                script {
                    sh script
                    doParallelStuff()
                }
            }
        }
    }
}
The actual steps are a bit more complicated, but this causes an error like the following to be thrown:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: script for class: WorkflowScript
When you define a variable outside of the pipeline directive using the def keyword, you are defining it in the local scope of the main script. Because the pipeline keyword is actually a method that is executed in the main script, it can access the variable: the two are defined and executed in the same scope.
When you define a function outside of the pipeline directive, that function is transformed into a separate class with its own variable scope, separated from the scope of the main script, and therefore it cannot access variables defined with def at the top level.
To solve it, you can define the variable without the def keyword, which changes the scope in which the variable is created: without def (in a Groovy script, not a class), the variable is added to the global variables of the script (the Binding), which makes it accessible from any function or code within the script. You can read more in the following question: What is the difference between defining variables using def and without?
So in your case you want a variable that is available both to the pipeline code itself and to the defined functions, meaning it must be available anywhere in the script as a global variable. Therefore, just define it without the def keyword and it should do the trick:
script = """\
#/bin/bash
python xyz.py
"""

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences in only a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpreted in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine
def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(template)
}
}
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables in your Jenkinsfile script that should only be evaluated at runtime, like \${VARIABLE}, so that they are expanded when the pipeline runs rather than when the job is created. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
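Applied to the pipeline.groovy above, that means escaping the runtime references while leaving the seed-time ones alone. A sketch of what the template might look like after those changes (abridged to one stage):
// pipeline.groovy used as a SimpleTemplateEngine template
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // ${project} is expanded when the seed job runs the template;
                // \${env.REPO} and \${params.ENVIRONMENT} come out as plain
                // ${...} expressions for the pipeline to expand at runtime.
                echo "Deploying \${env.REPO} for ${project} to \${params.ENVIRONMENT}..."
            }
        }
    }
}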
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
This is a bit limited, because environment variables are strings, but it should work for basic use cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
You need to pass the complete job name as a variable, without quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws and displays an incorrect error message because the parser gets all confused due to the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays 'description' as the stage's description.
Now, if you try to load a groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaC's? This means that the UI one has this really horrible bug in it...
Ideas?
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types: a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited, and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution: arbitrary Groovy is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal and not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline: the whole pipeline script is more or less a Groovy script, so you can put almost any code almost anywhere. A scripted pipeline starts with a node block, and you can put any Groovy code inside it. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates six stages: Test, plus Stage 0 through Stage 4.
As you can see, there are practically no limits on what you can put inside a node block. A declarative pipeline does not allow you to do that: its syntax is strict and you have to follow it exactly.
Differences
As a final note, I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug.

Jenkins DSL: Job Slack publisher: baseUrl() method not available

I use Jenkins 2.63 with the Slack Notifier plugin 2.2.
I need to generate jobs with the SlackNotifier via Job DSL, but I cannot set the base URL in the DSL. I get this message:
ERROR: (script, line 145) No signature of method: baseUrl() is applicable for argument types: (java.lang.String) values: [https://my.domain.slack.com/services/hooks/jenkins-ci/] Possible solutions: authToken(), authTokenCredentialId(), botUser(), commitInfoChoice(), customMessage(), includeCustomMessage(), includeTestSummary(), notifyAborted(), notifyBackToNormal(), notifyFailure(), notifyNotBuilt(), notifyRegression(), notifyRepeatedFailure(), notifySuccess(), notifyUnstable(), room(), sendAs(), startNotification(), teamDomain() Finished: FAILURE
Here is my DSL script:
publishers {
    def slackParam = new groovy.json.JsonSlurper().parse(new File(channelFile))
    slackNotifier {
        baseUrl(slackParam.url)
        authTokenCredentialId(slackParam.authTokenCredentialId)
        includeTestSummary(true)
        notifyAborted(true)
        notifyBackToNormal(true)
        notifyFailure(true)
        notifyNotBuilt(true)
        notifyRegression(true)
        notifyRepeatedFailure(true)
        notifyUnstable(true)
        room(slackParam.room)
    }
}
Yet I can find this parameter in the job's config.xml.
Can anyone help me to set the base URL parameter ?
Thanks a lot.
Using the Configure Block: https://github.com/jenkinsci/job-dsl-plugin/wiki/The-Configure-Block
configure { project ->
    project / publishers << 'jenkins.plugins.slack.SlackNotifier' {
        baseUrl("https://whatever.slack.com/services/hooks/jenkins-ci/")
        room("#room")
        notifyAborted(true)
        notifyFailure(true)
        notifyNotBuilt(false)
        notifyUnstable(true)
        notifyBackToNormal(true)
        notifySuccess(true)
        notifyRepeatedFailure(false)
        startNotification(false)
        includeTestSummary(false)
        includeCustomMessage(true)
        customMessage("Environment")
        sendAs(null)
        commitInfoChoice("AUTHORS_AND_TITLES")
        teamDomain("yourDomain")
        authTokenCredentialId("token")
    }
}
Place this outside of your steps at the bottom of your job. You should see the changes reflected in the UI and config.xml for the job.
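For reference, the Configure Block maps each nested method call to an XML element of the same name, so the snippet above should produce a config.xml fragment roughly like this (an abridged, untested sketch):
<publishers>
    <jenkins.plugins.slack.SlackNotifier>
        <baseUrl>https://whatever.slack.com/services/hooks/jenkins-ci/</baseUrl>
        <room>#room</room>
        <teamDomain>yourDomain</teamDomain>
        <authTokenCredentialId>token</authTokenCredentialId>
        <!-- the remaining boolean flags become elements the same way -->
    </jenkins.plugins.slack.SlackNotifier>
</publishers>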
