I'm new to Jenkins and inherited a bunch of declarative pipelines of unknown code quality. Each pipeline uses folder properties to set shared default parameter values. This puts essential variables outside of source control, which kills our PR process and our history for debugging. For example:
//pipelineA/Jenkinsfile
pipeline {
    parameters {
        string name: 'important_variable', defaultValue: folderProperty('important_variable')
    }
    //etc
}
//pipelineB/Jenkinsfile
pipeline {
    parameters {
        string name: 'important_variable', defaultValue: folderProperty('important_variable')
    }
    //etc
}
Then, in the root folder, a property important_variable is set to "Hello World".
Is there a way to get this into source control, either by setting the folder property to extract the variable from a YAML file, or by using shared libraries?
Thank you for any help!
In case anyone reads this, here is what we ended up doing:
Create a bootstrap.groovy file. This file MUST go in a /vars directory at the absolute top of your repo.
Using the Jenkins UI, we went to the pipeline's parent folder > Configure and created a shared library called config-lib that points at our repo; it automatically exposes the methods in bootstrap.groovy as long as the file is in the right place.
The bootstrap.groovy file has a call method that returns a map of key-value pairs for our default parameters. This method has to be named call (see the sketch below).
In the Jenkinsfile for the pipeline we include the following two lines:
#Library("config-lib") _
config = bootstrap()
The @Library annotation (note the line ends with _) imports the config-lib methods defined in the Jenkins UI.
The bootstrap function calls the call method from the bootstrap.groovy file in that config-lib library.
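For illustration, a minimal sketch of what vars/bootstrap.groovy can look like (the keys here are hypothetical; return whatever defaults you need):
// vars/bootstrap.groovy
// The method must be named call so it can be invoked as bootstrap().
def call() {
    return [
        important_variable: 'Hello World',
        foo               : 'bar'  // hypothetical second default
    ]
}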
Then, in the Jenkinsfile, use the config map to populate the parameter default values:
pipeline {
    parameters {
        string name: 'foo', defaultValue: config.foo
    }
    //etc
}
And it's done.
This video helped immensely: https://youtu.be/Wj-weFEsTb0
Related
I have the following code in a Pipelineconstant.groovy file:
public static final list ACTION_CHOICES = [
    N_A,
    FULL_BLUE_GREEN,
    STAGE,
    FLIP,
    CLEANUP
]
and this parameters block in my Jenkins multi-wrapper file:
parameters {
    string (name: 'ChangeTicket', defaultValue: '000000', description: 'Prod change ticket, otherwise 000000')
    choice (name: 'AssetAreaName', choices: ['fpukviewwholeof', 'fpukdocrhs', 'fpuklegstatus', 'fpukbooksandjournals', 'fpukleglinks', 'fpukcasesoverview'], description: 'Select the AssetAreaName.')
    /* groovylint-disable-next-line DuplicateStringLiteral */
    choice (name: 'AssetGroup', choices: ['pdc1c', 'pdc2c'])
}
I would like to reference ACTION_CHOICES in a parameter like this:
choice (name: 'Action', choices: constants.ACTION_CHOICES, description: 'Multi Version deployment actions')
but it doesn't work for me.
You're almost there! A Jenkinsfile can be extended with variables/constants defined either directly in the file or (better, I'd say) in a Jenkins shared library (this scenario).
The parameter syntax within your pipeline was fine, as was the idea of a list of constants; what was missing was a proper link between those parts: the library import. See the example below (the names in the example are not carved in stone and can of course be changed, but watch out: Jenkins is quite sensitive about filenames, paths, etc., especially in shared libraries):
A) With a Jenkins shared library:
Pipelineconstant.groovy should be placed in src/org/pipelines of your Jenkins shared library.
Pipelineconstant.groovy
package org.pipelines

class Pipelineconstant {
    public static final List<String> ACTION_CHOICES = ["N_A", "FULL_BLUE_GREEN", "STAGE", "FLIP", "CLEANUP"]
}
and then you can reference this list of constants within your Jenkinsfile pipeline.
Jenkinsfile
@Library('jsl-constants') _
import org.pipelines.Pipelineconstant

pipeline {
    agent any
    parameters {
        choice (name: 'Action', choices: Pipelineconstant.ACTION_CHOICES, description: 'Multi Version deployment actions')
    }
    // rest of your pipeline code
}
The first two lines of the pipeline are important: the first loads the shared library itself, and only then can the import on the second line be resolved (otherwise Jenkins would not know where to find that Pipelineconstant.groovy file).
B) Without Jenkins shared library (files in one repo):
I've found this topic discussed and solved for scripted pipeline here: Load jenkins parameters from external groovy file
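For completeness, a minimal sketch of that approach, assuming the constants live in a pipeline-constants.groovy file next to the Jenkinsfile (a hypothetical name; load returns the script's last expression):
// pipeline-constants.groovy (same repo as the Jenkinsfile)
return [ACTION_CHOICES: ['N_A', 'FULL_BLUE_GREEN', 'STAGE', 'FLIP', 'CLEANUP']]

// Jenkinsfile
def constants
node {
    checkout scm
    constants = load 'pipeline-constants.groovy'
}
pipeline {
    agent any
    parameters {
        choice(name: 'Action', choices: constants.ACTION_CHOICES, description: 'Multi Version deployment actions')
    }
    stages {
        stage('Build') {
            steps { echo "Action: ${params.Action}" }
        }
    }
}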
I am trying to use a global class that I've defined in a shared library to help organise job parameters. It's not working, and I'm not even sure if it is possible.
My job looks something like this:
pipelineJob('My-Job') {
    definition {
        // Job definition goes here
    }
    parameters {
        choiceParam('awsAccount', awsAccount.ALL)
    }
}
In a file in /vars/awsAccount.groovy I have the following code:
class awsAccount implements Serializable {
    static final String SANDPIT = "sandpit"
    static final String DEV = "dev"
    static final String PROD = "prod"
    static String[] ALL = [SANDPIT, DEV, PROD]
}
Global pipeline libraries are configured to load implicitly from my repository's master branch.
When attempting to update the DSL scripts I receive the error:
ERROR: (myJob.groovy, line 67) No such property: awsAccount for class: javaposse.jobdsl.dsl.helpers.BuildParametersContext
Why does it not find the class, and is it even possible to use shared library classes like this in a pipeline job?
Disclaimer: I know it works using a Jenkinsfile. Unfortunately, I have not tested it using Declarative Pipelines - but there are no answers yet, so it may be worth a try.
Regarding your first question: there are several reasons why a class from your shared lib might not be found, starting with the library import, the library syntax, etc. But they definitely work for DSL. To be more precise about it, additional information would be great. But be sure that:
You have your groovy class definition using exactly the directory structure as described in the documentation (https://www.jenkins.io/doc/book/pipeline/shared-libraries/)
Give the shared lib a name in Jenkins as you configure it, and be sure it is exactly the name you use in the import
Use the import as described in the documentation (under Using Libraries)
Regarding your second question (the one that names this SO question): yes, you can include job parameters from information in your shared lib. At least, using Jenkinsfiles. You can even define properties to be included in the pipeline. I got it working with a tricky syntax due to different problems.
Again, I am using Jenkinsfile and this is what worked for me:
In my shared-lib class, I added a static function that introduces the build parameters. Notice the input parameters that function needs and its usage:
class awsAccount implements Serializable {
    //
    static giveMeParameters(script) {
        return [
            // Some parms
            script.string(defaultValue: '', description: 'A default parameter', name: 'textParm'),
            script.booleanParam(defaultValue: false, description: 'If set to True, do whatever you need - otherwise, do not do it', name: 'boolOption'),
        ]
    }
}
To introduce those parameters in the pipeline, you need to place the returned value of the function into the parameters list:
properties([
    parameters(
        awsAccount.giveMeParameters(this)
    )
])
Again, notice the syntax when calling the function. Similarly, you can also define functions in the shared lib that return properties and use them in multiple jobs (disableConcurrentBuilds, buildDiscarder, etc.).
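A hypothetical sketch of such a function (the name giveMeProperties and the chosen properties are made up):
// In the same shared-lib class
static giveMeProperties(script) {
    return [
        // Job-level properties, reusable across jobs
        script.disableConcurrentBuilds(),
        script.buildDiscarder(script.logRotator(numToKeepStr: '10')),
    ]
}

// In the Jenkinsfile:
properties(awsAccount.giveMeProperties(this) + [parameters(awsAccount.giveMeParameters(this))])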
I want to inject multiple environment variables using shared libraries into multiple Jenkinsfiles, as the env variables are common across these Jenkinsfiles. The motive is to inject properties at a global level so that the variables are global and accessible throughout the pipeline.
I have tried the following:
a. Used the environment tag in the Jenkinsfile. This has to be done in every Jenkinsfile, hence no code reusability.
b. I am able to inject env variables inside the script block of a stage, but I want to do it before the pipeline code begins. This would act like global properties that can be accessed from anywhere in the pipeline.
Instead of the below:
//Jenkinsfile
pipeline {
    environment {
        TESTWORKSPACE = "some_value"
        BUILDWORKSPACE = "some_value"
        ...
        ...
        30+ such env properties
    }
}
I am looking for something where I can declare these env variables in a shared library groovy script and then access them throughout the pipeline. Something like below:
//Jenkinsfile
def call(Map pipelineParams) {
    pipeline {
        <code>
        <Use pipelineParams.TESTWORKSPACE as a variable anywhere in my pipeline>
    }
}
I think this is possible. Maybe your Jenkinsfile could be something like:
#Library(value="my-shared-lib#master", changelog=false) _
import com.MyClass
def extendsEnv(env) {
def myClass = new MyClass()
myClass.getSharedVars().each { String key, String value ->
env[key] = value
}
}
pipeline {
...
stage('init') {
steps {
extendsEnv(env)
}
}
}
Note that your shared vars should be string values if you plan to add them as environment variables later. But I didn't try this code.
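For reference, a hypothetical sketch of what src/com/MyClass.groovy could look like under this scheme (the variable names are made up):
// src/com/MyClass.groovy
package com

class MyClass implements Serializable {
    // String values only, since they become environment variables.
    Map<String, String> getSharedVars() {
        return [
            TESTWORKSPACE : 'some_value',
            BUILDWORKSPACE: 'some_value',
        ]
    }
}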
You could also imagine another solution: factor out the pipeline for all teams and use a parameter override system (project level > team level > global level):
@Library(value="my-shared-lib#master", changelog=false) _

def projectConfig = [:]
projectConfig['RESOURCE_REQUEST_CPU'] = '2000m'
projectConfig['RESOURCE_LIMIT_MEMORY'] = '2000Mi'

teamPipeline(projectConfig)
The teamPipeline function is declared in the shared lib's vars directory with additional team parameters, and it finally calls a common pipeline with a parameter map. You set those parameters as env vars as shown above.
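A hypothetical sketch of vars/teamPipeline.groovy along those lines (the team-level defaults and key names are made up):
// vars/teamPipeline.groovy
def call(Map projectConfig) {
    // Team-level defaults; project-level keys win on conflict.
    def teamConfig = [
        RESOURCE_REQUEST_CPU : '1000m',
        RESOURCE_LIMIT_MEMORY: '1000Mi',
    ]
    def config = teamConfig + projectConfig

    pipeline {
        agent any
        stages {
            stage('build') {
                steps {
                    echo "CPU request: ${config.RESOURCE_REQUEST_CPU}"
                }
            }
        }
    }
}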
I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be eaten up unnecessarily because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib#head', changelog=false) _

node {
    load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo') {
    someBuildFlavor {
        buildProperty = "some value unique to this build"
        someConfig = ["VALUE1", "VALUE2", "VALUE3"]
        runTestTarget = true
    }
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I'd load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute groovy without first having the hudson.FilePath context? I can't seem to find anything in the docs. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node(), call node() with the value the Jenkinsfile was using, and then load the rest of the file, but that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic into a function (named run() in my example) so that it will load but not run automatically:
def run() {
    node('agent-type-foo') {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}

// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node {
    jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no more nested node blocks, because the first one is closed before the run() function is called.
I have a Jenkins job with a string input parameter holding the build flags for the make command. My problem is that some users forget to change the parameter values when we have a release branch. So I want to overwrite the existing string input parameter (or create a new one) that should be used if the job is a release job.
This is the statement I want to add:
If branch "release" then ${params.build_flag} = 'DEBUGSKIP=TRUE'
and the code that is not working is:
pipeline {
    agent none
    parameters {
        string(name: 'build_flag', defaultValue: 'DEBUGSKIP=TRUE', description: 'Flags to pass to build')
        If {
            allOf {
                branch "*release*"
                expression {
                    ${params.build_flag} = 'DEBUGSKIP=TRUE'
                }
            }
        } else {
            ${params.build_flag} = 'DEBUGSKIP=FALSE'
        }
    }
}
The code above explains what I want to do, but I don't know how to do it.
If you can, see whether you could use the Jenkins EnvInject Plugin with your pipeline, using the supported use case:
Injection of EnvVars defined in the "Properties Content" field of the Job Property
These EnvVars are injected into the script environment and will be inaccessible via the "env" Pipeline global variable (as in here)
Or write the right values to a file, and use that file's content as the "Properties Content" of a downstream job (as shown there).
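Alternatively, a minimal sketch that stays within a plain declarative pipeline (no EnvInject), assuming a multibranch job where env.BRANCH_NAME is set; the stage and variable names are illustrative:
pipeline {
    agent any
    parameters {
        string(name: 'build_flag', defaultValue: 'DEBUGSKIP=FALSE', description: 'Flags to pass to build')
    }
    environment {
        // Release branches always skip debug; other branches honour the parameter.
        EFFECTIVE_FLAGS = "${env.BRANCH_NAME ==~ /.*release.*/ ? 'DEBUGSKIP=TRUE' : params.build_flag}"
    }
    stages {
        stage('Build') {
            steps {
                sh "make ${EFFECTIVE_FLAGS}"
            }
        }
    }
}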