Providing different values in a Jenkins Job DSL configure block to create different jobs

I need my builds to time out at a specific time (a deadline), but currently the Jenkins Job DSL only supports the "absolute" timeout strategy. So I tried to write a configure block, but I couldn't create jobs with different deadline values.
def settings = [
    [
        jobname: 'job1',
        ddl: '13:10:00'
    ],
    [
        jobname: 'job2',
        ddl: '14:05:00'
    ]
]
for (i in settings) {
    job(i.jobname) {
        configure {
            it / buildWrappers << 'hudson.plugins.build__timeout.BuildTimeoutWrapper' {
                strategy(class: 'hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy') {
                    deadlineTime(i.ddl)
                    deadlineToleranceInMinutes(1)
                }
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
The above script gives me two jobs with the same deadline time (14:05:00):
<project>
  <actions></actions>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties></properties>
  <scm class='hudson.scm.NullSCM'></scm>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers></triggers>
  <concurrentBuild>false</concurrentBuild>
  <builders></builders>
  <publishers></publishers>
  <buildWrappers>
    <hudson.plugins.build__timeout.BuildTimeoutWrapper>
      <strategy class='hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy'>
        <deadlineTime>14:05:00</deadlineTime>
        <deadlineToleranceInMinutes>1</deadlineToleranceInMinutes>
      </strategy>
    </hudson.plugins.build__timeout.BuildTimeoutWrapper>
  </buildWrappers>
</project>
I found this question but still couldn't get it to work.

You can use the Automatically Generated DSL:
The generated DSL is only supported when running in Jenkins, e.g. it is
not available when running from the command line or in the Playground.
Use The Configure Block to generate custom config elements when not
running in Jenkins.
The generated DSL will not work for all plugins, e.g. if a plugin does
not use the @DataBoundConstructor and @DataBoundSetter annotations to
declare parameters. In that case The Configure Block can be used to
generate the config XML.
Fortunately, the Build Timeout plugin supports @DataBoundConstructor:
@DataBoundConstructor
public DeadlineTimeOutStrategy(String deadlineTime, int deadlineToleranceInMinutes) {
    this.deadlineTime = deadlineTime;
    this.deadlineToleranceInMinutes = deadlineToleranceInMinutes <= MINIMUM_DEADLINE_TOLERANCE_IN_MINUTES
            ? MINIMUM_DEADLINE_TOLERANCE_IN_MINUTES
            : deadlineToleranceInMinutes;
}
So you should be able to do something like this:
def settings = [
    [
        jobname: 'job1',
        ddl: '13:10:00'
    ],
    [
        jobname: 'job2',
        ddl: '14:05:00'
    ]
]
for (i in settings) {
    job(i.jobname) {
        wrappers {
            buildTimeoutWrapper {
                strategy {
                    deadlineTimeOutStrategy {
                        deadlineTime(i.ddl)
                        deadlineToleranceInMinutes(1)
                    }
                }
                timeoutEnvVar('WHAT_IS_THIS_FOR')
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
There is an extra layer in BuildTimeoutWrapper which houses the different strategies. When using nested classes like these, you need to lowercase the first letter of the class name, e.g. deadlineTimeOutStrategy for DeadlineTimeOutStrategy.
EDIT
You can see this in your own Jenkins install by using the 'Job DSL API Reference' link on a job's page:
http://<your jenkins>/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.helpers.wrapper.WrapperContext.buildTimeoutWrapper

I saw very similar behaviour to this in a Jenkins Job DSL Groovy script.
I was looping over a List of Maps in a for-each loop, and I also had a configure closure like in your example.
The behaviour I saw was that the Map object in the configure closure seemed to be the same for all iterations of the for-each loop, similar to how you are seeing the same deadline time.
I was actually referencing the same value of the Map both inside and outside the configure closure, and the DSL was outputting different values: outside the configure closure it was as expected, but inside it was the same value for all iterations.
My solution was to assign the Map value to a local variable and use that both inside and outside the configure closure; once I did that, the value was consistent.
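The likely root cause is a common Groovy gotcha: closures created inside a for loop capture the loop variable itself, not its value at that iteration, and configure closures are stored now and evaluated later. A minimal plain-Groovy sketch (not Job DSL) of that behaviour:
def closures = []
for (i in ['13:10:00', '14:05:00']) {
    closures << { println i }      // every closure shares the single loop variable
}
closures.each { it() }             // prints 14:05:00 twice

def fixedClosures = []
for (i in ['13:10:00', '14:05:00']) {
    def v = i                      // fresh local variable per iteration
    fixedClosures << { println v }
}
fixedClosures.each { it() }        // prints 13:10:00, then 14:05:00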
For your example (just adding a deadlineValue variable, and setting it outside the configure closure):
for (i in settings) {
    def deadlineValue = i.ddl
    job(i.jobname) {
        configure {
            it / buildWrappers << 'hudson.plugins.build__timeout.BuildTimeoutWrapper' {
                strategy(class: 'hudson.plugins.build_timeout.impl.DeadlineTimeOutStrategy') {
                    deadlineTime(deadlineValue)
                    deadlineToleranceInMinutes(1)
                }
            }
        }
        steps {
            // some stuff to do here
        }
    }
}
I would not expect this to make a difference, but it worked for me.
However, I agree with the other answer that it is better to use buildTimeoutWrapper where you can, so that you avoid the configure block.
See <Your Jenkins URL>/plugin/job-dsl/api-viewer/index.html#path/javaposse.jobdsl.dsl.DslFactory.job-wrappers-buildTimeoutWrapper-strategy-deadlineTimeOutStrategy for more details; obviously you'll need the Build Timeout plugin installed.
For my example I still needed the configure closure for the MultiJob plugin, where some parameters were still not configurable via the DSL API.

Related

Jenkins: Parameters disappear from pipeline job after running the job

I've been trying to construct multiple jobs from a list, and everything seems to be working as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory

def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]

DslFactory dslFactory = this as DslFactory

repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
After running the above script, all the required jobs are created successfully. But then once I build any job, the parameters disappear.
After that, when I run the seed job again, the job starts showing the parameters. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me to figure out a similar issue with my .groovy file:
I called the parameters property twice (once at the start of the node, and then again trying to set other parameters in an if block), so the latter call overwrote the initial parameters.
BTW, as per the comments in the linked ticket, it is an issue with both scripted and declarative pipelines.
I fixed it by providing all job parameters in each parameters call, for the case with ifs.
Though I don't see repeated calls in the code you've provided, please check the full Groovy files for your jobs and add all parameters to every parameters {} block, as in the sketch below.
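A minimal sketch of the overwrite pattern described above (the job and parameter names are illustrative, not from the question):
pipelineJob('example') {
    parameters {
        stringParam('BRANCH', 'master', '')
    }
    // ...later in the same seed script...
    parameters {
        // this second call replaces the block above, so BRANCH must be repeated here
        stringParam('BRANCH', 'master', '')
        stringParam('DEPLOY_ENV', 'qa', '')
    }
}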

Jenkins pipelineJob DSL not interpreting variables in pipeline script

I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated to match the parameters to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpreted in the jobDSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpretation on the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other changes needed to make this work are escaping any variables used in your Jenkinsfile script, like \${VARIABLE} so that they are expanded at runtime, not at the time you build the job. Any variables you want to be expanded at job creation should be referenced as ${VARIABLE}.
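A short sketch of a pipeline.groovy treated as a template this way, assuming project comes in via additionalParameters as above; the backslash-escaped reference survives templating and is resolved by Jenkins at run time:
// pipeline.groovy used as a SimpleTemplateEngine template
pipeline {
    agent any
    stages {
        stage('Info') {
            steps {
                echo "Generated for ${project}"           // expanded at job creation
                echo "Running build \${env.BUILD_NUMBER}" // expanded at run time
            }
        }
    }
}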
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
They are a bit limited because environment variables are strings, but this should work for basic cases.
Example:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
You need to use the complete job name as a variable without the quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}

custom jenkins declarative pipeline dsl with named arguments

I've been trying for a while now to start working towards moving our freestyle projects over to pipelines. To do so, I feel it would be best to build up a shared library, since most of our builds are the same. I read through this blog post from Jenkins and came up with the following:
// vars/buildGitWebProject.groovy
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()

    pipeline {
        agent {
            node {
                label 'master'
                customWorkspace "c:\\jenkins_repos\\${args.repositoryName}\\${args.branchName}"
            }
        }
        environment {
            REPOSITORY_NAME = "${args.repositoryName}"
            BRANCH_NAME = "${args.branchName}"
            SOLUTION_NAME = "${args.solutionName}"
        }
        options {
            buildDiscarder(logRotator(numToKeepStr: '3'))
            skipStagesAfterUnstable()
            timestamps()
        }
        stages {
            stage("checkout") {
                steps {
                    script {
                        assert REPOSITORY_NAME != null : "repositoryName is null. Please include it in configuration."
                        assert BRANCH_NAME != null : "branchName is null. Please include it in configuration."
                        assert SOLUTION_NAME != null : "solutionName is null. Please include it in configuration."
                    }
                    echo "building with ${REPOSITORY_NAME}"
                    echo "building with ${BRANCH_NAME}"
                    echo "building with ${SOLUTION_NAME}"
                    checkoutFromGitWeb(args)
                }
            }
            stage('build and test') {
                steps {
                    executeRake(
                        "set_assembly_to_current_version",
                        "build_solution[$args.solutionName, Release, Any CPU]",
                        "copy_to_deployment_folder",
                        "execute_dev_dropkick"
                    )
                }
            }
        }
        post {
            always {
                sendEmail(args)
            }
        }
    }
}
In my pipeline project I configured the Pipeline to use "Pipeline script", and the script is as follows:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
I've tried with and without the environment block, but the result ends up being the same: the value is null for each of those arguments. Oddly enough, the script portion of the code doesn't make the build fail either, so I'm not sure what's wrong with that. Also, the echo parts show null as well. What am I doing wrong?
Your Closure body is not behaving the way you expect/believe it should.
At the beginning of your method you have:
def call(body) {
    def args = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = args
    body()
Your call body is:
buildGitWebProject {
    repositoryName:'my-git-repo'
    branchName: 'qa'
    solutionName: 'my_csharp_solution.sln'
    emailTo='testuser@domain.com'
}
Let's take a stab at debugging this.
If you add a println(args) after the body() in your call(body) method you will see something like this:
[emailTo:testuser@domain.com]
But, only one of the values got set. What is going on?
There are a few things to understand here:
What does setting a delegate of a Closure do?
Why does repositoryName:'my-git-repo' not do anything?
Why does emailTo='testuser@domain.com' set the property in the map?
What does setting a delegate of a Closure do?
This one is mostly straightforward, but I think it helps to understand. Closure is powerful and is the Swiss Army knife of Groovy. The delegate essentially sets what the this is in the body of the Closure. You are also using the resolveStrategy of Closure.DELEGATE_FIRST, so methods and properties from the delegate are checked first, and then from the enclosing scope (owner) - see the Javadoc for an in-depth explanation. If you call methods like size(), put(...), entrySet(), etc., they are all first called on the delegate. The same is true for property access.
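A tiny standalone sketch of delegate resolution (plain Groovy, runnable in any console):
def args = [:]
def c = { size() }                  // no local size(); resolution starts at the delegate
c.delegate = args
c.resolveStrategy = Closure.DELEGATE_FIRST
println c()                         // 0, i.e. Map.size() called on args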
Why does repositoryName:'my-git-repo' not do anything?
This may appear to be a Groovy map literal, but it is not: these are actually labeled statements. If you surrounded it with square brackets, like [repositoryName:'my-git-repo'], then it would be a map literal, but all you would be doing is creating a map literal and discarding it. We want to make sure these values are actually stored on the Closure's delegate.
Why does emailTo='testuser@domain.com' set the property in the map?
This is using the map property notation feature of Groovy. As mentioned earlier, you have set the delegate of the Closure to def args = [:], which is a Map. You also set the resolveStrategy of Closure.DELEGATE_FIRST. This makes your emailTo='testuser@domain.com' resolve to being called on args, which is why the emailTo key is set to the value. This is equivalent to calling args.emailTo='testuser@domain.com'.
So, how do you fix this?
If you want to keep your Closure syntax approach, you could change the body of your call to anything that essentially stores values in the delegated args map:
buildGitWebProject {
    repositoryName = 'my-git-repo'
    branchName = 'qa'
    solutionName = 'my_csharp_solution.sln'
    emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    put('repositoryName', 'my-git-repo')
    put('branchName', 'qa')
    put('solutionName', 'my_csharp_solution.sln')
    put('emailTo', 'testuser@domain.com')
}

buildGitWebProject {
    delegate.repositoryName = 'my-git-repo'
    delegate.branchName = 'qa'
    delegate.solutionName = 'my_csharp_solution.sln'
    delegate.emailTo = 'testuser@domain.com'
}

buildGitWebProject {
    // example of a Map literal where the square brackets are not needed
    putAll(
        repositoryName: 'my-git-repo',
        branchName: 'qa',
        solutionName: 'my_csharp_solution.sln',
        emailTo: 'testuser@domain.com'
    )
}
Another way would be to have your call take in the Map as an argument and remove your Closure.
def call(Map args) {
    // no more args and delegates needed right now
}

buildGitWebProject(
    repositoryName: 'my-git-repo',
    branchName: 'qa',
    solutionName: 'my_csharp_solution.sln',
    emailTo: 'testuser@domain.com'
)
There are also some other ways you could model your API; it will depend on the UX you want to provide.
Side note around declarative pipelines in shared library code:
It's worth keeping in mind the limitations of declarative pipelines in shared libraries. It looks like you are already doing it in vars, but I'm just adding it here for completeness. At the very end of the documentation it is stated:
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single build, and if you attempt to execute a second one, your build will fail as a result.

Looking for a Jenkins plugin to allow per-branch default parameter values

I have a multi-branch pipeline job set to build by Jenkinsfile every minute if new changes are available from the git repo. I have a step that deploys the artifact to an environment if the branch name is of a certain format. I would like to be able to configure the environment on a per-branch basis without having to edit the Jenkinsfile every time I create a new such branch. Here is a rough sketch of my Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(description: "DB name", name: "dbName")
    }
    stages {
        stage("Deploy") {
            steps {
                deployTo "${params.dbName}"
            }
        }
    }
}
Is there a Jenkins plugin that will let me define a default value for the dbName parameter per branch on the job configuration page? Ideally something like the mock-up I have in mind: the values can be reordered to set priority, the plugin stops checking for matches after the first one, and matching can be exact or regex.
If there isn't such a plugin currently, please point me to the closest open-source one you can think of. I can use it as a basis for coding a custom plugin.
A possible plugin you could use as a starting point for a custom plugin is the Dynamic Parameter Plugin.
Here is a workaround:
Using the Jenkins Config File Provider plugin, create a JSON config file with parameters defined per branch. Example:
{
    "develop": {
        "dbName": "test_db",
        "param2": "value"
    },
    "master": {
        "dbName": "prod_db",
        "param2": "value1"
    },
    "test_branch_1": {
        "dbName": "zss_db",
        "param2": "value2"
    },
    "default": {
        "dbName": "prod_db",
        "param2": "value3"
    }
}
In your Jenkinsfile:
final commit_data = checkout(scm)
BRANCH = commit_data['GIT_BRANCH']

configFileProvider([configFile(fileId: '{Your config file id}', variable: 'BRANCH_SETTINGS')]) {
    def config = readJSON file: "$BRANCH_SETTINGS"
    def branch_config = config."${BRANCH}"
    if (branch_config) {
        echo "using config for branch ${BRANCH}"
    } else {
        branch_config = config.default
    }
    echo branch_config.'dbName'
}
You can then use branch_config.'dbName', branch_config.'param2', etc. You can even set it to a global variable and then use it throughout your pipeline.
The config file can easily be edited via the Jenkins UI (provided by the plugin) to provision for new branches/params in the future. This doesn't need access to any non-sandbox methods.
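A minimal sketch of that global-variable variant; in a scripted pipeline, assigning without def puts the value into the script binding, so later stages can read it (the file id is a placeholder, as above):
configFileProvider([configFile(fileId: '{Your config file id}', variable: 'BRANCH_SETTINGS')]) {
    def config = readJSON file: "$BRANCH_SETTINGS"
    BRANCH_CONFIG = config."${BRANCH}" ?: config.default // no 'def' => goes into the binding
}
// ...anywhere later in the pipeline...
echo BRANCH_CONFIG.dbName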
Not really an answer to your question, but possibly a workaround...
I don't know what the rest of your parameter list looks like, but if it is a static list, you could potentially have your static list with a "use Default" option as the first one.
When the job is run, if the value is "use Default", then gather the default from a file stored in the SCM branch and use that, as in the sketch below.
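A hedged sketch of that fallback; the defaults.txt file name is illustrative, while deployTo and dbName come from the question's Jenkinsfile:
def dbName = params.dbName
if (dbName == 'use Default') {
    // read the per-branch default from a file versioned alongside the branch
    dbName = readFile('defaults.txt').trim()
}
deployTo "${dbName}"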

how to get the trigger information in Jenkins programmatically

I need to add the next scheduled build time to a build email notification after a build in Jenkins.
The trigger can be "Build periodically" or "Poll SCM", or anything with a schedule time.
I know the trigger info is in the config.xml file, e.g.
<triggers>
  <hudson.triggers.SCMTrigger>
    <spec>8 */2 * * 1-5</spec>
    <ignorePostCommitHooks>false</ignorePostCommitHooks>
  </hudson.triggers.SCMTrigger>
</triggers>
and I also know how to get the trigger type and spec from the config.xml file with custom scripting, and calculate the next build time.
I wonder if Jenkins has an API to expose this information out of the box. I have searched, but not found anything.
I realise you probably no longer need help with this, but I just had to solve the same problem, so here is a script you can use in the Jenkins console to output all trigger configurations:
#!groovy
Jenkins.instance.getAllItems().each { it ->
    if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
        return
    }
    def itTrigger = (jenkins.triggers.SCMTriggerItem) it
    def triggers = itTrigger.getSCMTrigger()
    println("Job ${it.name}:")
    triggers.each { t ->
        println("\t${t.getSpec()}")
        println("\t${t.isIgnorePostCommitHooks()}")
    }
}
This will output all your jobs that use SCM configuration, along with their specification (cron-like expression regarding when to run) and whether post-commit hooks are set to be ignored.
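For instance, with jobs configured like the ones in the JSON output further down, it would print something along these lines (names and specs are illustrative):
Job Build Project 1:
    H/30 * * * *
    false
Job Test Something:
    @hourly
    false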
You can modify this script to get the data as JSON like this:
#!groovy
import groovy.json.*

def result = [:]
Jenkins.instance.getAllItems().each { it ->
    if (!(it instanceof jenkins.triggers.SCMTriggerItem)) {
        return
    }
    def itTrigger = (jenkins.triggers.SCMTriggerItem) it
    def triggers = itTrigger.getSCMTrigger()
    triggers.each { t ->
        def builder = new JsonBuilder()
        result[it.name] = builder {
            spec "${t.getSpec()}"
            ignorePostCommitHooks "${t.isIgnorePostCommitHooks()}"
        }
    }
}
return new JsonBuilder(result).toPrettyString()
And then you can use the Jenkins Script Console web API to get this from an HTTP client.
For example, in curl, you can do this by saving your script as a text file and then running:
curl --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText
If Jenkins is using basic authentication, you can supply that with the -u <USERNAME>:<PASSWORD> argument.
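For example (an API token works in place of the password; the names are placeholders):
curl -u <USERNAME>:<API_TOKEN> --data-urlencode "script=$(<./script.groovy)" <YOUR SERVER>/scriptText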
Ultimately, the request will result in something like this:
{
    "Build Project 1": {
        "spec": "H/30 * * * *",
        "ignorePostCommitHooks": "false"
    },
    "Test Something": {
        "spec": "@hourly",
        "ignorePostCommitHooks": "false"
    },
    "Deploy ABC": {
        "spec": "H/20 * * * *",
        "ignorePostCommitHooks": "false"
    }
}
You should be able to tailor these examples to fit your specific use case. It seems you won't need to access this remotely, just from a job, but I included the remoting part as it might come in handy for someone else.
