Parallel steps on different nodes in a declarative Jenkins pipeline cannot access variables defined in global scope

Let me preface this by saying that I don't yet fully understand how Jenkins DSL / Groovy handles namespaces, scopes, variables, etc.
In order to keep my code DRY I put repeated command sequences into variables.
It turns out the variable script below is not readable by the code in doParallelStuff. Why is that? Is there a way to share global variables defined in the script (or elsewhere) between the main pipeline steps and the doParallelStuff code?
def script = """\
#!/bin/bash
python xyz.py
"""
def doParallelStuff() {
    def tests = [:] // map of parallel branches
    tests["1"] = {
        node {
            stage('ps1') {
                sh script
            }
        }
    }
    tests["2"] = {
        node {
            stage('ps2') {
                sh script
            }
        }
    }
    parallel tests
}
pipeline {
    agent any
    stages {
        stage("myStage") {
            steps {
                script {
                    sh script
                    doParallelStuff()
                }
            }
        }
    }
}
The actual steps are a bit more complicated, but this causes an error like the following to be thrown:
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: script for class: WorkflowScript

When you define a variable outside of the pipeline directive using the def keyword, you are defining it in the local scope of the main script. Because the pipeline keyword is actually a method that is executed in the main script, it can access that variable: the two are defined and executed in the same scope (even though declarative pipelines are transformed into a separate class behind the scenes).
When you define a function outside of the pipeline directive, that function has its own variable scope, which is separate from the scope of the main script, so it cannot access a variable defined with def at the top level.
To solve it, define the variable without the def keyword. This changes the scope in which the variable is created: without def (in a Groovy script, as opposed to a class), the variable is added to the script's global variables (the Binding), which makes it accessible from any function or other code within the Groovy script. You can read more in the following question: What is the difference between defining variables using def and without?
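A quick plain-Groovy illustration of that difference (not Jenkins-specific, just a script you could run locally):
// scope-demo.groovy
x = 'in the binding'       // no def: stored in the script's Binding
def y = 'local to script'  // def: local to the script body only

def show() {
    println x  // works: resolved through the Binding at runtime
    println y  // fails with MissingPropertyException: y is not visible here
}

show()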
In your case you want a variable that is available both to the pipeline code itself and to the functions you define, so it needs to be available anywhere in the script as a global variable. Just define it without the def keyword and it should do the trick:
script = """\
#!/bin/bash
python xyz.py
"""

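Putting it all together, a minimal sketch of the corrected script (assuming the rest of your pipeline stays as posted):
script = """\
#!/bin/bash
python xyz.py
"""

def doParallelStuff() {
    def tests = [:]
    tests["1"] = {
        node {
            stage('ps1') {
                sh script  // resolved through the Binding, so it works here too
            }
        }
    }
    tests["2"] = {
        node {
            stage('ps2') {
                sh script
            }
        }
    }
    parallel tests
}

pipeline {
    agent any
    stages {
        stage("myStage") {
            steps {
                script {
                    sh script
                    doParallelStuff()
                }
            }
        }
    }
}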
Related

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated to match the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments...I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpolated during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime, not at the time you build the job. Any variables you want to be expanded at job creation should be referenced as ${VARIABLE}.
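For example, the templated pipeline.groovy might look roughly like this (a sketch based on the snippet above; the choice parameter from the question is omitted for brevity):
// pipeline.groovy treated as a SimpleTemplateEngine template
pipeline {
    agent any
    environment {
        REPO = '${repository}'  // unescaped: filled in at job creation from additionalParameters
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying \${env.REPO} to \${params.ENVIRONMENT}..."  // escaped: expanded at runtime
            }
        }
    }
}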
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
This is a bit limited because environment variables are strings, but it should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}

How to inject environment variables in jenkinsfile using shared libraries before beginning the pipeline code?

I want to inject multiple environment variables from a shared library into multiple Jenkinsfiles, since the env variables are common across all of them. The motive is to inject properties at a global level so that the variables are global and accessible throughout the pipeline.
I have tried the following:
a. Using the environment block in the Jenkinsfile. This has to be repeated in every Jenkinsfile, so there is no code reuse.
b. Injecting env variables inside the script block of a stage. This works, but I want to do it before the pipeline code begins, like global properties that can be accessed from anywhere in the pipeline.
Instead of the below:
//Jenkinsfile
pipeline {
    environment {
        TESTWORKSPACE = "some_value"
        BUILDWORKSPACE = "some_value"
        // ...
        // 30+ such env properties
    }
}
I am looking for something where I can declare these env variables in a shared library groovy script and then access it throughout the pipeline. Something like below:
//Jenkinsfile
def call(Map pipelineParams) {
    pipeline {
        <code>
        <Use pipelineParams.TESTWORKSPACE as a variable anywhere in my pipeline>
    }
}
I think this is possible. Maybe your Jenkinsfile could look something like this:
@Library(value="my-shared-lib@master", changelog=false) _

import com.MyClass

def extendsEnv(env) {
    def myClass = new MyClass()
    myClass.getSharedVars().each { String key, String value ->
        env[key] = value
    }
}

pipeline {
    ...
    stage('init') {
        steps {
            extendsEnv(env)
        }
    }
}
Note that your shared vars should be string values if you plan to add them as environment variables later. But I haven't tried this code.
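For completeness, the shared-library class referenced above might look something like this (purely a sketch; the class layout and values are assumptions, not from the answer):
// src/com/MyClass.groovy in the shared library
package com

class MyClass implements Serializable {
    // Env vars common to all Jenkinsfiles, kept as strings.
    Map<String, String> getSharedVars() {
        return [
            TESTWORKSPACE : 'some_value',
            BUILDWORKSPACE: 'some_value'
        ]
    }
}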
You could also imagine another solution: factor out the pipeline for all teams and use a parameter override system (project level > team level > global level):
@Library(value="my-shared-lib@master", changelog=false) _

def projectConfig = [:]
projectConfig['RESOURCE_REQUEST_CPU'] = '2000m'
projectConfig['RESOURCE_LIMIT_MEMORY'] = '2000Mi'

teamPipeline(projectConfig)
The teamPipeline function is declared in the shared lib's vars directory, with additional team-level parameters, and finally calls a common pipeline with a parameter map. You set those parameters as env vars as shown above.
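A rough sketch of what that vars/teamPipeline.groovy entry point could look like (the names, defaults, and stages are illustrative, not from the answer):
// vars/teamPipeline.groovy in the shared library
def call(Map projectConfig = [:]) {
    // team-level defaults, overridden by whatever the project passes in
    def teamDefaults = [
        RESOURCE_REQUEST_CPU : '1000m',
        RESOURCE_LIMIT_MEMORY: '1000Mi'
    ]
    def config = teamDefaults + projectConfig

    pipeline {
        agent any
        stages {
            stage('init') {
                steps {
                    script {
                        // expose the merged map as environment variables
                        config.each { k, v -> env[k] = v.toString() }
                    }
                }
            }
        }
    }
}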

Dynamic variable in Jenkins pipeline with groovy method variable

I have a Groovy Jenkinsfile for a declarative pipeline and two Jenkins variables named OCP_TOKEN_VALUE_ONE and OCP_TOKEN_VALUE_TWO with their corresponding values. The problem comes when I try to pass a method parameter and use it in an sh command.
I have the next code:
private def deployToOpenShift(projectProps, environment, openshiftNamespaceGroupToken) {
    sh """/opt/ose/oc login ${OCP_URL} --token=${openshiftNamespaceGroupToken} --namespace=${projectProps.namespace}-${environment}"""
}
The problem is that the openshiftNamespaceGroupToken parameter of deployToOpenShift holds the name of a variable that has been set in Jenkins, not its value. It needs to be dynamic, but Jenkins doesn't resolve the Jenkins variable's value, only the name passed as a String, so the result is:
--token=OCP_TOKEN_VALUE_ONE
If I put this in the code instead:
private def deployToOpenShift(projectProps, environment, openshiftNamespaceGroupToken) {
    sh """/opt/ose/oc login ${OCP_URL} --token=${OCP_TOKEN_VALUE_ONE} --namespace=${projectProps.namespace}-${environment}"""
}
it works perfectly, but it is not dynamic, which is the whole point of the method parameter. I have tried with the triple-quoted """ syntax as you can see, but it's not working.
Any other ideas?
Edited with the code that calls the method:
...
projectProps = readProperties file: './gradle.properties'
openShiftTokenByGroup = 'OCP_TOKEN_' + projectProps.namespace.toUpperCase()
...
stage ('Deploy-Dev') {
    agent any
    steps {
        milestone ordinal: 10, label: "Deploy-Dev Milestone"
        deployToOpenShift(projectProps, 'dev', openShiftTokenByGroup)
    }
}
I have found two different ways to do that. One is using evaluate from Groovy, like this:
def openShiftTokenByGroup = 'OCP_TOKEN_' + projectProps.namespace.toUpperCase()
evaluate("${openShiftTokenByGroup}") //This will resolve the configured value in Jenkins
The second one is the same idea but inside the sh command, using eval and escaping the $ character:
sh """
    eval \$$openShiftTokenByGroup
    echo "Token: $openShiftTokenByGroup"
"""
This will do the magic too and you'll get the Jenkins configured value.
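Applied to the method from the question, the evaluate approach could look like this (a sketch I haven't run, building on the answer above):
private def deployToOpenShift(projectProps, environment, openshiftNamespaceGroupToken) {
    // openshiftNamespaceGroupToken holds the *name* of the Jenkins variable,
    // e.g. 'OCP_TOKEN_VALUE_ONE'; evaluate() resolves that name to its configured value.
    def tokenValue = evaluate(openshiftNamespaceGroupToken)
    sh """/opt/ose/oc login ${OCP_URL} --token=${tokenValue} --namespace=${projectProps.namespace}-${environment}"""
}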

How to invoke a DSL step from an overriding global library function

Through experimentation I have determined that I can mask the built-in pipeline steps such as build by defining a global function of the same name in a shared library.
Example:
(root)
+- vars
   +- build.groovy
where build.groovy is:
def call(Map args) {
    echo "BUILD: ${args}"
}
If I load this library, then none of my calls to build actually do anything. They just echo that build was called and with what args. This is very useful for testing out pipeline scripts to ensure the script logic itself is correct while avoiding actually doing long running tasks.
But testing is only one use of this. What I really want to do is decorate build, node, stage and a few other steps to capture usage metrics. For example to record for every node that is ever allocated, what time of day it was allocated, and how long it was allocated for. This could be really useful for capacity analysis and planning.
Another application would be to enforce certain policies, such that nodes always be allocated by label and never by explicit node name.
To make any of this work though, the node.groovy decorator needs some way to invoke the real node step that it is masking. Any ideas how to do this?
Figured this out this evening. All the DSL steps are available as members of the steps variable, which allows me to write something like:
@Library('pipeline-utils')
import mycompany.analytics.AnalyticsClient
import mycompany.analytics.Utils

node('linux') { sh 'echo test' }

def node(String label, Closure nodeAction) {
    def executionTime
    def actualNode
    def allocationTime = Utils.startMeasureDuration()
    steps.node(label) {
        allocationTime.stop()
        actualNode = env.NODE_NAME
        executionTime = Utils.measureDuration(nodeAction)
    }
    def fact = [
        type: 'node_usage',
        job_name: currentBuild.getProjectName(),
        node_label: label,
        node_name: actualNode,
        ts: allocationTime.startTS,
        time_in_queue: allocationTime.durationMillis,
        execution_time: executionTime.durationMillis
    ]
    AnalyticsClient.recordFact(fact)
    if (!executionTime.success) throw executionTime.exception
}
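The same masking technique can also express the policy idea from the question. For example, an overriding node function could check the requested label against an allow-list before delegating to steps.node (the list below is an assumption, purely for illustration):
def node(String label, Closure nodeAction) {
    // assumed allow-list of approved labels; adjust to your environment
    def allowedLabels = ['linux', 'windows', 'docker']
    if (!(label in allowedLabels)) {
        error "Policy violation: '${label}' is not an approved node label"
    }
    steps.node(label) {
        nodeAction()
    }
}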

How does variable scoping work when splitting a workflow into smaller chunks?

I have a very long workflow for building and testing our application. So long, in fact, that when we try to load the main workflow script, we get this exception:
java.lang.ClassFormatError: Invalid method Code length 67768 in class file WorkflowScript
I am not proud of this. I'm trying to split the workflow into smaller scripts that we load from the main workflow script, but we are running into an issue with variable scoping. For example:
def a = 'foo' //some variable referenced in multiple workflow stages
node {
    echo a
}
//... and then a whole bunch of other stages
might become
def a = 'foo' //some variable referenced in multiple workflow stages
node {
    git: ...
    load 'flowPartA.groovy'
}()
where flowPartA.groovy looks like:
{ ->
    node {
        echo a
    }
}
Based on my understanding of the documentation, where flowPartA.groovy is interpreted as a closure, I expected the variable 'a' to remain in scope, but instead I get an exception to the contrary.
groovy.lang.MissingPropertyException: No such property: a for class: groovy.lang.Binding
Am I missing something about the way workflow interprets the flow scripts? Is there a good way to take a huge workflow that uses many, many parameters and split it into smaller chunks?
You have to define a function in the external Groovy file and call it, passing all required parameters:
def a = 'foo'

node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.echoFromA(a)
}
And flowPartA.groovy contains:
def echoFromA(String a) {
    echo a
}

return this
See the documentation for more information.
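If a chunk needs many of the main script's values, one variation (untested, just extending the pattern above) is to bundle them into a Map so the loaded file only exposes functions with explicit parameters:
def config = [a: 'foo', branch: 'master'] // values the chunk needs, gathered in one place

node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.run(config)
}
and flowPartA.groovy would then contain:
def run(Map config) {
    echo config.a
    echo config.branch
}

return this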
