There's a Jenkins Shared Library containing a declarative pipeline definition which I intend to use in my project. It's available in this form:
// vars/sharedLibrary.groovy
def call(Map config = [:]) {
    pipeline {
        stages {
            // ...
        }
    }
}
I'm not the owner of the library code and don't really want to (or can't) change or fork it.
Now, using the library in my project would look like this:
// Jenkinsfile
sharedLibrary param1: 'value', param2: 'values'
The problem is that I need to execute a few custom initialization steps before sharedLibrary runs. I'm struggling to implement this, since sharedLibrary declares the "full" pipeline with the pipeline {} block and doesn't let me inject any custom logic before it.
This is what I want (which is obviously incorrect):
// Jenkinsfile
pipeline {
    stages {
        stage('My custom initialization logic') {
            // ...
        }
    }
    // The rest of the shared logic goes here:
    sharedLibrary param1: 'value', param2: 'values'
}
What would be your advice on making this possible?
Use scripted pipeline syntax instead.
@Library('pipeline-sample') _

node {
    echo 'Do your stuff here'
}

sharedLibrary param1: 'value', param2: 'values'
I have a bunch of repositories which use (parts of) the same Jenkins shared library for running tests, docker builds, etc. So far the shared library has greatly reduced the maintenance costs for these repos.
However, it turned out that basically all pipelines use the same set of options, e.g.:
#Library("myExample.jenkins.shared.library") _
import org.myExample.Constants
pipeline {
options {
disableConcurrentBuilds()
parallelsAlwaysFailFast()
}
agent {
label 'my-label'
}
stages {
stage {
runThisFromSharedLibrary(withParameter: "foo")
runThatFromSharedLibrary(withAnotherParameter: "bar")
...
...
In other words, I need to copy-and-paste the same option snippets into any new pipeline that I create.
Also, this means I need to edit each Jenkinsfile separately (along with going through any peer-review processes we use internally) whenever I decide to change the set of options.
I'd very much like to remove this maintenance overhead somehow.
How can I delegate the option-setting to a shared library, or otherwise configure the required options for all pipelines at once?
Two options will help you the most:
Option 1: Use global variables at the master/agent level.
Go to Jenkins --> Manage Jenkins --> Configure System --> Global properties.
Check the Environment variables box, then add a name and a value for the variable.
You will then be able to use it in your Jenkins pipelines, as in the snippet below.
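For example, assuming a global environment variable named DEPLOY_SERVER was added there (the name is only an illustration), any pipeline can read it through env without declaring it first; a minimal sketch:
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Use the global variable') {
            steps {
                // DEPLOY_SERVER is the hypothetical variable configured under Global properties
                echo "Deploying to ${env.DEPLOY_SERVER}"
            }
        }
    }
}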
Option 2: Wrap the whole pipeline in a function inside the shared library.
The Jenkinsfile will look like this:
@Library('shared-library') _
customServicePipeline(agent: 'staging',
                      timeout: 3,
                      server: 'DEV')
The shared library function:
// customServicePipeline.groovy
def call(Map pipelineParams = [:]) {
    pipeline {
        agent { label "${pipelineParams.agent}" }
        tools {
            maven 'Maven-3.8.6'
            jdk 'JDK 17'
        }
        options {
            timeout(time: "${pipelineParams.timeout}", unit: 'MINUTES')
        }
        stages {
            stage('Prep') {
                steps {
                    echo 'prep started'
                    pingServer(pipelineParams.get("server"))
                }
            }
        }
    }
}
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
    additionalParameters: [
        project: projectName,
        environments: envs,
        repository: repositoryURL
    ],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpreted in the jobDSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpretation on the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace(pipeline.groovy)}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than when the job is built. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
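For illustration, a templated pipeline.groovy could then look like the sketch below (untested; it reuses the variable names from the question, and inspect() is just one way to render the environments list as a Groovy literal):
// pipeline.groovy, used as a template by the Job DSL script above
pipeline {
    agent any
    environment {
        // expanded by SimpleTemplateEngine when the job is created
        REPO = "${repository}"
    }
    parameters {
        // renders e.g. ['DEV', 'QA', 'UAT'] at job creation time
        choice name: "ENVIRONMENT", choices: ${environments.inspect()}
    }
    stages {
        stage('Deploy') {
            steps {
                // escaped with a backslash, so Jenkins expands these at runtime
                echo "Deploying \${env.REPO} to \${params.ENVIRONMENT}..."
            }
        }
    }
}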
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
This is a bit limited, because environment variables are strings, but it should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
        // note the need to split the comma-separated string above
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
When I load another Groovy file in my Jenkinsfile, it shows me the following error:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
I made a Groovy file containing a function, and I want to call it in my declarative Jenkinsfile, but it shows an error.
My Jenkinsfile:
def myfun = load 'testfun.groovy'

pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('calling function') {
            steps {
                script {
                    myfun('${REPO_PATH}','${APP_NAME}')
                }
            }
        }
    }
}
Result:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
Please suggest the right way to do this.
You either need to use a scripted pipeline and put the "load" instruction inside the node section (see this question), or, if you are already using a declarative pipeline (which seems to be the case), you can include it in the "environment" section:
environment {
    REPO_PATH='/home/manish/Desktop'
    APP_NAME='test'
    MY_FUN = load 'testfun.groovy'
}
We have to wrap the load with node {} so that it executes on a Jenkins executor. If we want to run on a specific agent node, we can write node('agent name') {}.
For example:
node {
    def myfun = load 'testfun.groovy'
    pipeline {
        agent any
        environment {
            REPO_PATH='/home/manish/Desktop'
            APP_NAME='test'
        }
        stages {
            stage('calling function') {
                steps {
                    script {
                        myfun("${REPO_PATH}","${APP_NAME}")
                    }
                }
            }
        }
    }
}
Loading the function in an initial script block inside the pipeline worked for me. Something like below:
def myfun

pipeline {
    agent any
    environment {
        REPO_PATH='/home/manish/Desktop'
        APP_NAME='test'
    }
    stages {
        stage('load function') {
            steps {
                script {
                    myfun = load 'testfun.groovy'
                }
            }
        }
        stage('calling function') {
            steps {
                script {
                    myfun("${REPO_PATH}","${APP_NAME}")
                }
            }
        }
    }
}
I got this error message when I was calling an sh script that does not exist in the repository / file system. Look for the following line in the stack trace:
at WorkflowScript.run(WorkflowScript:135)
The 135 marks the line in the Jenkinsfile on which the missing script or the error is located.
Another possibility is that, due to earlier/underlying errors, the context has been removed, for example by multiple executor machines. This happens if you are missing the node (e.g. script) block, but especially in the post { always } block. You can use the if check below in other places as well. After this you will get the other error that was actually causing this error message.
post {
    always {
        script {
            // skip the step if the context is missing
            if (getContext(hudson.FilePath)) {
                echo "It works"
            }
        }
    }
}
See https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/troubleshooting-guides/how-to-troubleshoot-hudson-filepath-is-missing-in-pipeline-run
We have many, many projects which all have their own Jenkinsfile which simply executes a pipeline defined in our shared library. The pipeline ensures that all projects are built, packaged, and installed in the same exact way.
project-a/Jenkinsfile
library 'the-shared-library'
buildProject name: 'project-a', buildApi: true, ...
the-shared-library/vars/buildProject.groovy
def call(Map config) {
    pipeline {
        // standard stages go here
    }
}
We want to extend this to allow for an additional stage to be executed during the pipeline, for certain projects (e.g. 1 out of the many). I was thinking of doing it as follows, if possible:
pass a config param which is a Stage, into buildProject
in buildProject.call, if a custom stage was provided, tack it on to the end of the pipeline, or perhaps between two (known) stages, and run it
Something like this ...
project-a/Jenkinsfile
library 'the-shared-library'
def myCustomStage = ... // not sure how
buildProject name: 'project-a', buildApi: true, ..., customStage: myCustomStage
the-shared-library/vars/buildProject.groovy
def call(Map config) {
    def customStage = config.customStage
    pipeline {
        // standard stages 1 through 3
        // if customStage provided, it goes here
        // standard stages 5 through 5
    }
}
I'm not sure what is a correct solution here.
I haven't tested, but something like this looks like it should work:
//project-a/Jenkinsfile
library 'the-shared-library'
def myCustomStage = { echo 'Hello' }
buildProject name: 'project-a', buildApi: true, ..., myCustomStage
//the-shared-library/vars/buildProject.groovy
def call(Map config, Closure customStage = null) {
    pipeline {
        stages {
            // standard stages 1 through 3
            // if customStage was provided, it runs here
            stage('Conditional') {
                when {
                    expression { customStage }
                }
                steps {
                    script { customStage() }
                }
            }
            // standard stages 5 through 5
        }
    }
}
see Can I use a Closure to define a stage in a Jenkins Declarative Pipeline?
So I have a use case with Jenkinsfile that I know is not common, and I haven't found a solution for it yet.
Background
We currently have a multi-branch pipeline job configured to build multiple branches. It is used to run system testing of the products across multiple releases. The Jenkins job does the following:
Clone all required repositories
Deploy the environment
Execute the automated test cases
Undeploy the environment
In order to avoid having to define the same Jenkinsfile on each branch, we created a shared library. The shared library defines the declarative pipeline stages for the Jenkinsfile. The shared library has the following:
/* File name: vars/myStep.groovy */
def call(Map pipelineParams) {
    callASharedLibraryFunction()
    properties([
        parameters(sharedLibraryGetParameters(pipelineParams))
    ])
    pipeline {
        // snip
        stages {
            stage("clone repos") { }
            stage("Deploy environment") { }
            stage("Executed Tests") { }
            stage("Undeploy environment") { }
        }
        // post directives
    }
}
And the Jenkinsfile simply defines a map and then calls myStep.
e.g.:
/* Sample Jenkinsfile */
pipelineParams = [
    FOO: "foo"
]
myStep pipelineParams
The problem
We now have a need for another Jenkins job, where some of the stages will be the same. For example, the new jobs will need to
Clone all required repositories
Deploy the environment
Do something else
Changing the behaviour of a common stage (e.g. cloning the repos) should take effect across all the jobs that define this stage. I know we can use the when directive in a stage; however, from a usability perspective, I want the jobs to be different, as they are exercising different things, and the users of one job don't care about the additional stages the other job runs.
I want to avoid code duplication; better yet, I don't want to duplicate the stage code at all (including steps, when, post, etc.).
Is there a way a shared library can define the stage "implementation" with all the directives (steps, when, post, etc) once, but have it get called multiple times?
e.g.:
/* File: vars/cloneReposStageFunction.groovy */
def call() {
    stage("Clone Repos") { }
}

/* File: vars/myStep.groovy */
def call(Map pipelineParams) {
    pipeline {
        // snip
        stages {
            cloneReposStageFunction()
            stage("Deploy environment") { }
            stage("Executed Tests") { }
            stage("Undeploy environment") { }
        }
        // post directives
    }
}

/* File: vars/myNewStep.groovy */
def call(Map pipelineParams) {
    pipeline {
        // snip
        stages {
            cloneReposStageFunction()
            stage("Deploy environment") { }
            stage("Do something else") { }
        }
        // post directives
    }
}
It's an open Jenkins feature request.
I've seen different ways to template a pipeline, but they are far from what you'd like to achieve.
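For illustration only, one of those partial approaches is to move just the body of a shared stage into a vars/ step (cloneReposBody below is a hypothetical name) and call it from a script block; the steps are reused, but every pipeline still has to repeat the stage declaration and its directives (when, post, etc.), which is why it falls short of what you're after:
/* File: vars/cloneReposBody.groovy -- hypothetical helper that reuses only the steps */
def call() {
    checkout scm
    // clone any additional repositories here
}

/* Each pipeline must still declare the stage itself in its stages block */
stage("Clone Repos") {
    steps {
        script {
            cloneReposBody()
        }
    }
}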