Jenkins Job DSL configure block

I am new to Jenkins Job DSL and trying to figure out how to use a configure block in the scm section of my DSL. I have a section that is generated by default in my Jenkins config.xml:
<scm class='hudson.plugins.git.GitSCM'>
  <browser class='hudson.plugins.git.browser.GithubWeb'>
    <url>https://github.com/repository/</url>
  </browser>
</scm>
I know there is a browser method in the Jenkins Job DSL plugin API, and it can be set to gitblit, gitiles, gitLab, gitWeb, gogs, and stash.
I would like to set it to (Auto). I've tried using the configure block method, but it returns an error:
javaposse.jobdsl.dsl.DslScriptException: (script, line 12) Ambiguous method overloading for method groovy.util.Node#div.
Cannot resolve which method to invoke for [null] due to overlapping prototypes between:
Line 12 is the it statement.
Code:
scm {
    git {
        remote {
            github
            credentials
        }
        branch("refs/heads/master")
        configure {
            it / 'scm' / 'browser' {}
        }
    }
}
So I am not sure how to fix this in code.
Any help would be appreciated, thank you.

The documentation says that processing XML can be a bit tricky, but I think it should go something like this (I'm not sure about the exact format and content):
it / scm / browser(class: 'hudson.plugins.git.browser.GithubWeb') / url('https://github.com/repository/')
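Fleshed out into a complete, untested job definition, it might look like the sketch below. The repository coordinates and credentials ID are placeholders, and the configure block sits at the job level so that the node passed in is the project root, matching the it / scm / browser path above.

job('example') {
    scm {
        git {
            remote {
                github('owner/repository')        // placeholder owner/repo
                credentials('my-credentials-id')  // placeholder credentials ID
            }
            branch('refs/heads/master')
        }
    }
    configure { project ->
        // Reproduce the <browser class='hudson.plugins.git.browser.GithubWeb'><url>...</url></browser>
        // element from the question's config.xml under the generated <scm> node.
        project / scm / browser(class: 'hudson.plugins.git.browser.GithubWeb') / url('https://github.com/repository/')
    }
}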


Disable or auto-approve Script Approval for scripts executed in Job DSL (Active Choice Parameters)?

Running Jenkins 2.289.1.
I have this pipelineJob Job DSL script setting up Active Choice parameters:
https://plugins.jenkins.io/uno-choice/
pipelineJob("test") {
parameters {
activeChoiceParam('CHOICE-1') {
description('Allows user choose from multiple choices')
filterable()
choiceType('SINGLE_SELECT')
groovyScript {
script('return ["choice1", "choice2", "choice3"];')
fallbackScript('"fallback choice"')
}
}
}
definition {
cpsScm {
scm {
git {
remote {
credentials("${creds}")
url("${gitUrl}")
}
branch("${gitBranch}")
}
}
scriptPath("${pathToFile}")
}
}
}
To make sure I can run the Job DSL in the first place without having to manually approve it, I have added the following to JCasC:
jenkins:
  security:
    globalJobDslSecurityConfiguration:
      useScriptSecurity: false
But that is not enough. Before I can run the pipeline generated from the above Job DSL, I still need to manually approve the script.
How do I configure Job DSL, JCasC, or something else to either disable script approval for anything that goes on in a Job DSL script, or automatically approve any script that might be created inside a Job DSL script?
Hopefully I don't have to hack my way around this as suggested here:
https://stackoverflow.com/a/64364086/363603
I am aware that there is a reason for this feature, but it's for a local-only Jenkins that I am using for experimenting, and this is currently killing my productivity. Related:
https://issues.jenkins.io/browse/JENKINS-28178?focusedCommentId=376405&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-376405
What worked for me:
Manage Jenkins > Configure Global Security > CSRF Protection (section header -- not sure why) > Enable script security for Job DSL scripts (the name of the option that I disabled).
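If you also want pending approvals cleared automatically, a rough sketch (an assumption on my part, not an official Job DSL or JCasC feature) is a Script Console or init script snippet that approves whatever is currently waiting in In-process Script Approval:

// Approve everything currently pending in In-process Script Approval.
// Only sensible on a throwaway, local-only Jenkins -- it defeats the security feature.
import org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval

def approval = ScriptApproval.get()
approval.pendingScripts.toList().each { pending ->
    approval.approveScript(pending.hash)
}

This could be dropped into $JENKINS_HOME/init.groovy.d/ so it runs at startup, but again, treat it as a workaround for an experimental instance only.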

Jenkins DSL script - Test Failure - Found multiple extensions which provide method lastCompleted

I am trying to create multijobs in Jenkins with DSL scripting.
There are multiple jobs in a phase, and I want to create a consolidated report for the multijob from the downstream jobs.
I am using Copy Artifact to copy the results of the downstream jobs to the multijob's target directory, using the selector lastCompleted().
However, I am getting an error saying that multiple extensions provide the method, and the tests are failing. lastCompleted() is apparently present in both the copyartifact and multijob plugins, and in this case I require both.
Here is my script:
multiJob('dailyMultiJob') {
    concurrentBuild(true)
    logRotator(-1, 10, -1, 10)
    triggers {
        cron('H H(0-4) * * 0-6')
    }
    steps {
        phase('Smoke Tests') {
            phaseJob('JobA')
            phaseJob('JobB')
            phaseJob('JobC')
        }
        copyArtifacts {
            selector {
                lastCompleted()
            }
            projectName('JobA')
            filter('target/allure-results/*.*')
            target('/path/to/this/multijob/workspace')
            flatten(false)
        }
        copyArtifacts {
            selector {
                lastCompleted()
            }
            projectName('JobB')
            filter('target/allure-results/*.*')
            target('/path/to/this/multijob/workspace')
            flatten(false)
        }
        copyArtifacts {
            selector {
                lastCompleted()
            }
            projectName('JobC')
            filter('target/allure-results/*.*')
            target('/path/to/this/multijob/workspace')
            flatten(false)
        }
    }
    publishers {
        allure {
            results {
                resultsConfig {
                    path('target/allure-results')
                }
            }
        }
        archiveArtifacts {
            pattern('target/reports/**/*.*')
            pattern('target/allure-results/**/*.*')
            allowEmpty(true)
        }
    }
}
I am getting the error below after running the Gradle tests:
Caused by: javaposse.jobdsl.dsl.DslException: Found multiple extensions which provide method lastCompleted with arguments []: [[hudson.plugins.copyartifact.LastCompletedBuildSelector, com.tikal.jenkins.plugins.multijob.MultiJobBuildSelector]]
I am not sure if there is a way to indicate which plugin's method should be used.
I've been stuck on this for quite some time. Any help is highly appreciated. Thank you in advance!
I came across the same issue a few months back. There are two possible solutions:
1 - Keep only one of the plugins, which avoids the conflict. (Not recommended, as it might break other jobs.)
2 - Use a configure block to modify the generated XML, which avoids the conflict and lets you keep multiple plugins that provide the same extensions; see the sketch below. (Recommended solution.)
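A rough, untested sketch of option 2 for this multijob (the element names are my assumption based on what the copyartifact plugin writes into config.xml): leave the selector out of the DSL and force the selector class in a configure block instead.

multiJob('dailyMultiJob') {
    steps {
        copyArtifacts {
            projectName('JobA')
            filter('target/allure-results/*.*')
            target('/path/to/this/multijob/workspace')
            // no selector {} here -- it is set via the configure block below
        }
        // ... further copyArtifacts blocks for JobB and JobC ...
    }
    configure { project ->
        // Point the first CopyArtifact builder at the copyartifact plugin's selector class,
        // bypassing the ambiguous lastCompleted() DSL extension. Repeat or loop over the
        // builders for JobB and JobC.
        def selector = project / 'builders' / 'hudson.plugins.copyartifact.CopyArtifact' / 'selector'
        selector.attributes()['class'] = 'hudson.plugins.copyartifact.LastCompletedBuildSelector'
    }
}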
Thanks,
Late update:
What I had to do in the end was switch to scripted pipeline jobs instead.
Configure blocks are not allowed on all the methods you might want to use, and they are limited by design. I believe some plugins also don't allow them for security reasons.
Better to use Pipelines.

How to use upstream triggers in a declarative Jenkinsfile

What is the correct usage of the upstream trigger in a declarative Jenkinsfile?
I'm trying to add dependency triggers, so that the pipeline is triggered after another project has built successfully.
The jenkinsci documentation on GitHub lists upstream events as possible pipeline triggers.
My Jenkinsfile currently looks like this:
pipeline {
    agent {
        docker {
            ...
        }
    }
    options {
        timeout(time: 1, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    triggers {
        upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
    }
which leads to the following error:
WorkflowScript: 16: Arguments to "upstream" must be explicitly named. @ line 16, column 9.
        upstream 'project-name,other-project-name', hudson.model.Result.SUCCESS
        ^
Update
I changed the syntax for the upstream trigger according to a code snippet I found. Now there is at least no syntax error anymore, but the trigger is still not working as intended.
triggers {
    upstream(
        upstreamProjects: 'project-name,other-project-name',
        threshold: hudson.model.Result.SUCCESS)
}
If I understand the documentation correctly, this pipeline should be triggered when one of the two declared jobs has completed successfully, right?
If both projects are in the same folder and Jenkins is aware of the downstream project's Jenkinsfile with the code below (i.e. the downstream project has run at least once), this should work for you:
triggers {
    upstream(upstreamProjects: "../projectA/master", threshold: hudson.model.Result.SUCCESS)
}
I'm still relatively new to Jenkins and I have been struggling to get this to work, too. Specifically, I thought that just saving the pipeline code with the triggers directive in the web UI editor would connect the two jobs. It doesn't.
However, if one then manually runs the downstream job, the upstream... code in its pipeline appears to modify the job definition and configure the triggers, resulting in the same situation one would have if they had just set things up in the "build triggers" section of the job config form. In other words, the directive appears to be just a way to tell Jenkins to do the web UI config work for you, when/if the job gets run for some reason.
I spent hours on this, and it could have been documented better. Also, the "declarative directive generator" in Jenkins gave me something like upstream threshold: 'FAILURE' for the trigger, which resulted in something like:
WorkflowScript: 5: Expecting "class hudson.model.Result" for parameter "threshold" but got "FAILURE" of type class java.lang.String instead @ line 5, column 23.
To fix this I had to change the parameter to read upstream threshold: hudson.model.Result.FAILURE. Fair enough, but then the generator is broken. Not impressed.
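For reference, the corrected directive (with a placeholder upstream job name) ends up looking like this:

triggers {
    // The threshold must be a hudson.model.Result constant, not the string
    // emitted by the directive generator.
    upstream(upstreamProjects: 'some-upstream-job', threshold: hudson.model.Result.FAILURE)
}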

How to enable SCM polling with the Jenkins DSL plugin

I'd like to enable SCM polling in Jenkins via DSL code. It's easily possible manually (without DSL) and works perfectly, but I'm looking for DSL code to enable it.
I already checked the link below, but there is no solution there:
https://jenkinsci.github.io/job-dsl-plugin/#method/javaposse.jobdsl.dsl.helpers.triggers.TriggerContext.scm
The options I want to enable are "GitHub hook trigger for GITScm polling" and "Poll SCM" (as shown in the job configuration UI).
I'm not using Jenkins pipeline.
Finally I got a solution for this. The following DSL code enables SCM polling:
triggers {
    configure {
        it / 'triggers' << 'com.cloudbees.jenkins.GitHubPushTrigger' {
            spec('')
        }
        scm('')
    }
}
I have tested it, and it's working perfectly.
Another solution:
job('myjob') {
    configure { it / 'triggers' / 'com.cloudbees.jenkins.GitHubPushTrigger' / 'spec' }
}
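For completeness, the trigger context also exposes these two options natively (the githubPush method is contributed by the GitHub plugin, so this assumes that plugin is installed), which avoids the configure block entirely:

job('myjob') {
    triggers {
        githubPush()          // "GitHub hook trigger for GITScm polling"
        scm('H/15 * * * *')   // "Poll SCM" with a cron schedule
    }
}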
I had a similar situation when trying to enable SCM polling for a pipeline.
I am configuring pipelines via Job DSL and CasC, and I specifically wanted to enable SCM polling.
So here's what I have working. I'm within the pipelineJob context, but I believe the solution is the same for the job context:
pipelineJob('myPipelineName') {
    environmentVariables {
        ...
    }
    definition {
        ...
    }
    configure { project ->
        project / 'properties' / 'org.jenkinsci.plugins.workflow.job.properties.PipelineTriggersJobProperty' / 'triggers' / 'hudson.triggers.SCMTrigger' {
            'spec'('* * * * *')
        }
    }
}
The way I landed on this was by manually changing a pipeline config (in the UI) to enable polling, then looking at the job's config.xml on disk.
The slash-delimited path you see in the code above is the XML tag path to the value I want to change.

How can I set the job timeout for all jobs using the Jenkins DSL

I read How can I set the job timeout using the Jenkins DSL. That sets the timeout for one job. I want to set it for all jobs, and with slightly different settings: 150%, averaged over the last 10 builds, with a maximum of 30 minutes.
According to the relevant job-dsl-plugin documentation I should use this syntax:
job('example-3') {
    wrappers {
        timeout {
            elastic(150, 10, 30)
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
}
I tested in http://job-dsl.herokuapp.com/ and this is the relevant XML part:
<buildWrappers>
  <hudson.plugins.build__timeout.BuildTimeoutWrapper>
    <strategy class='hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy'>
      <timeoutPercentage>150</timeoutPercentage>
      <numberOfBuilds>10</numberOfBuilds>
      <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
    </strategy>
    <operationList>
      <hudson.plugins.build__timeout.operations.FailOperation></hudson.plugins.build__timeout.operations.FailOperation>
      <hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
        <description>Build failed due to timeout after {0} minutes</description>
      </hudson.plugins.build__timeout.operations.WriteDescriptionOperation>
    </operationList>
  </hudson.plugins.build__timeout.BuildTimeoutWrapper>
</buildWrappers>
I verified against a job I had edited manually before, and the XML is correct, so I know that the Job DSL syntax up to here is correct.
Now I want to apply this to all jobs. First I tried to list all the job names:
import jenkins.model.*

jenkins.model.Jenkins.instance.items.findAll().each {
    println("Job: " + it.name)
}
This works too; all job names are printed to the console.
Now I want to plug it all together. This is the full code I use:
import jenkins.model.*

jenkins.model.Jenkins.instance.items.findAll().each {
    job(it.name) {
        wrappers {
            timeout {
                elastic(150, 10, 30)
                failBuild()
                writeDescription('Build failed due to timeout after {0} minutes')
            }
        }
    }
}
When I push this code and Jenkins runs the DSL seed job, I get this error:
ERROR: Type of item "jobname" does not match existing type, item type can not be changed
What am I doing wrong here?
The Job DSL plugin can only be used to maintain jobs that have been created by that plugin before. You're trying to modify the configuration of jobs that have been created in some other way -- this will not work.
For mass-modification of existing jobs (like, in your case, adding the timeout), the most straightforward way is to change the job's XML configuration directly, either by changing the config.xml file on disk or by using the REST or CLI API. xmlstarlet is a powerful tool for performing such tasks directly at shell level.
Alternatively, it is possible to perform the change via a Groovy script from the "Script Console" -- but for that you need some understanding of Jenkins' internal workings and data structures.
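As a rough illustration of the Script Console route (an untested sketch, not a drop-in solution): read each freestyle job's config.xml, splice in the wrapper XML generated above, and push it back with updateByXml. It assumes the config already contains a non-empty <buildWrappers> element, and it should be tried on a test instance first:

import jenkins.model.Jenkins
import javax.xml.transform.stream.StreamSource

// Wrapper XML as generated by the Job DSL snippet above
// (WriteDescriptionOperation omitted for brevity).
def wrapperXml = '''<hudson.plugins.build__timeout.BuildTimeoutWrapper>
  <strategy class="hudson.plugins.build_timeout.impl.ElasticTimeOutStrategy">
    <timeoutPercentage>150</timeoutPercentage>
    <numberOfBuilds>10</numberOfBuilds>
    <timeoutMinutesElasticDefault>30</timeoutMinutesElasticDefault>
  </strategy>
  <operationList>
    <hudson.plugins.build__timeout.operations.FailOperation/>
  </operationList>
</hudson.plugins.build__timeout.BuildTimeoutWrapper>'''

Jenkins.instance.getAllItems(hudson.model.FreeStyleProject).each { job ->
    def xml = job.configFile.asString()
    // Skip jobs that already have a timeout wrapper or no <buildWrappers> section.
    if (!xml.contains('BuildTimeoutWrapper') && xml.contains('<buildWrappers>')) {
        def updated = xml.replace('<buildWrappers>', '<buildWrappers>' + wrapperXml)
        job.updateByXml(new StreamSource(new StringReader(updated)))
        println "Updated ${job.fullName}"
    }
}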
