I have a Job DSL plugin file which creates a couple of jobs, such as pipeline and freestyle jobs. I want to know the syntax for creating different views in this file, e.g. 5 jobs in one view and 5 other jobs in a second view. I know how to do it through the console, but I'd like to update the file so the views are created automatically.
The following worked fine using Groovy syntax:
listView('testlist') {
    description('All new jobs for testlist')
    filterBuildQueue()
    filterExecutors()
    jobs {
        name('fruit')
        name('cake')
    }
    columns {
        status()
        weather()
        name()
        lastSuccess()
        lastFailure()
        lastDuration()
        buildButton()
    }
}
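To group other jobs into additional views from the same seed file, the listView block can simply be repeated once per view, or driven from a map. Below is a minimal sketch along those lines; the view names and job names are made up for illustration:

// Hypothetical map of view name -> job names; replace with your own jobs.
def viewsToJobs = [
    'pipeline-view' : ['job-a', 'job-b', 'job-c', 'job-d', 'job-e'],
    'freestyle-view': ['job-f', 'job-g', 'job-h', 'job-i', 'job-j']
]

viewsToJobs.each { viewName, jobNames ->
    listView(viewName) {
        description("All jobs for ${viewName}")
        filterBuildQueue()
        filterExecutors()
        jobs {
            jobNames.each { name(it) }
        }
        columns {
            status()
            weather()
            name()
            lastSuccess()
            lastFailure()
            lastDuration()
            buildButton()
        }
    }
}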
I have a repository with multiple Jenkinsfiles (at least there will eventually be multiple Jenkinsfiles) and I want to set up the jobs in Jenkins using a SEED job.
So far I can set up one job based on my remote repository.
#!/usr/bin/env groovy
/*
 * Setup jobs from gitlab project docker-jenkins-pipelines
 */
def createPipelineJob(final String repo) {
    String repoName = repo.substring(repo.lastIndexOf("/") + 1, repo.length())
    pipelineJob(repoName) {
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url('git@gitlab.com:' + repo + '.git')
                        }
                        branches('*/main')
                        //branches('*/feat*')
                    }
                }
                scriptPath("src/main/jobs/ADMIN-initialize-repository/Jenkinsfile")
            }
        }
    }
}
createPipelineJob('sommerfeld.sebastian/docker-jenkins-pipelines')
Now I would like to iterate over all folders in my repo (https://gitlab.com/sommerfeld.sebastian/docker-jenkins-pipelines/-/tree/main/src/main/jobs) and create separate jobs for all Jenkinsfiles.
I would like to have some sort of wildcard for src/main/jobs/*/Jenkinsfile. But looping over the folders would be okay too, and maybe even better, because I could better define the job names.
But I don't know how to iterate the folders. Can anyone give me a hint on how to do that? Is there an API call for gitlab.com or something?
I would suggest not using the API. You have Groovy at hand, and you can iterate over the files; once you check out the repository you have all the information you need.
https://stackoverflow.com/a/38899519/3708208 is a good starting point for iterating over files with Groovy (there might be some sandbox security limitations, but it shows how to iterate over a set of files). Calling the method to create the pipeline jobs should look something like:
new File(parentPath).traverse(type: groovy.io.FileType.FILES, nameFilter: ~/Jenkinsfile/) { it ->
    createPipelineJob("sommerfeld.sebastian/docker-jenkins-pipelines/${it.parent.name}")
} //code untested :)
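If looping over the folders directly is preferable (so the job names can be derived from the folder names), here is a sketch along the same lines, assuming the repository is already checked out into the seed job's workspace and that the script is allowed to touch the file system (the WORKSPACE variable and the path are assumptions):

// Assumed location of the checked-out repository inside the seed job's workspace.
def jobsDir = new File("${WORKSPACE}/src/main/jobs")

jobsDir.eachDir { dir ->
    // Only create a job for folders that actually contain a Jenkinsfile.
    if (new File(dir, 'Jenkinsfile').exists()) {
        createPipelineJob("sommerfeld.sebastian/docker-jenkins-pipelines/${dir.name}")
    }
}

Note that createPipelineJob's scriptPath would also need to use the folder name for each job to pick up its own Jenkinsfile.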
I'm creating a Build Monitor view with a DSL script, but there is no documentation on how to set the number of columns.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView for some insight. I suspect the configure function may allow it, but I still have the same question of how to do it.
I assumed it might work like a list view, where you add a column to it, but that does not work.
My current code so far:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites ')
    recurse(true)
    configure()
    columns(1)
    jobs {
        regex(".*.Tests.*")
    }
}
The following works, using the configure block to set the columns element directly in the generated view XML:
buildMonitorView('Automation Wall') {
    description('All QA Test Suites ')
    recurse(true)
    configure { project ->
        (project / columns).value = 1
    }
    jobs {
        regex(".*.Tests.*")
    }
}
I'm creating a Build Monitor view with a DSL script, but there is no method in the API to set the job order. I can set the order manually in the configuration after the view is created, but I need to do it within the script.
I'm using https://jenkinsci.github.io/job-dsl-plugin/#path/buildMonitorView as a reference. The only way I suspect it could be possible is the configure(Closure) method, but I would still have the same question of how to do it.
My current code:
buildMonitorView("name-of-the-view") {
    jobs {
        regex("some regex to include jobs")
        recurse()
    }
    // I would expect something like:
    view {
        orderByFullName()
    }
}
After some trial and error and println calls everywhere I came to this solution:
buildMonitorView("name-of-the-view") {
    jobs { // This part is as before
        regex("some regex to include jobs")
        recurse()
    }
    // The solution:
    view.remove(view / order)
    view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
}
The above solution sets the job order to "Full name" instead of the default "Name".
I found the remove idea in the Configure SVN section of the job-dsl-plugin documentation; the fully qualified names of the job order options can be found in the source of jenkins-build-monitor-plugin.
I had the same question today and managed to get Aivaras's proposal to work in the following way:
buildMonitorView("name-of-the-view") {
    // Set properties like jobs
    jobs {
        regex("some regex to include jobs")
        recurse()
    }
    // Directly manipulate the config to set the ordering
    configure { view ->
        view.remove(view / order)
        view / order(class: "com.smartcodeltd.jenkinsci.plugins.buildmonitor.order.ByFullName")
    }
}
I am trying to do a PoC of Jenkins pipeline as code. I am using the GitHub Organization Folder plugin to scan GitHub orgs and create jobs per branch. Is there a way to explicitly define the names for the pipeline jobs that get created from the Jenkinsfile? I also want to add descriptions for the jobs.
You need to use currentBuild like below; the node part is important:
node {
    currentBuild.displayName = "$yournamevariable-$another"
    currentBuild.description = "$yourdescriptionvariable-$another"
}
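As a concrete usage example, the placeholder variables above can be replaced with build environment variables; a minimal scripted-pipeline sketch (the APP_VERSION parameter is made up for illustration):

node {
    // Rename the current build using real variables instead of placeholders.
    def appVersion = params.APP_VERSION ?: 'dev'   // assumed job parameter
    currentBuild.displayName = "#${env.BUILD_NUMBER}-${appVersion}"
    currentBuild.description = "Built ${appVersion} from branch ${env.BRANCH_NAME ?: 'unknown'}"
    // ... the rest of the build steps go here
}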
Edit: The above renames the build, whereas the original question is about renaming jobs.
The following script in a pipeline will do that (this requires appropriate permissions):
item = Jenkins.instance.getItemByFullName("originalJobName")
item.setDescription("This description was changed by script")
item.save()
item.renameTo("newJobName")
I'm late to the party on this one, but this question drove me into the #jenkins chat, where I spent most of my day today. I would like to thank #tang^ from that chat for helping solve this gracefully for my situation.
To set the JOB description and JOB display name for a child in a multi-branch DECLARATIVE pipeline, use the following steps block in a stage:
steps {
    script {
        if (currentBuild.rawBuild.project.displayName != 'jobName') {
            currentBuild.rawBuild.project.description = 'NEW JOB DESCRIPTION'
            currentBuild.rawBuild.project.setDisplayName('NEW JOB DISPLAY NAME')
        }
        else {
            echo 'Name change not required'
        }
    }
}
This will require that you approve the individual script calls through the Jenkins sandbox approval method, but it was far simpler than anything else I'd found across the web about renaming the actual children of the parent pipeline. The last thing to note is that this should work in a Jenkinsfile where you can use the environment variables to manipulate the job items being set.
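For context, a minimal declarative Jenkinsfile sketch that embeds the steps block above in a multi-branch pipeline could look like this (the stage name and the use of BRANCH_NAME are assumptions for illustration):

// Declarative sketch: rename the branch job of a multi-branch pipeline.
pipeline {
    agent any
    stages {
        stage('Rename job') {
            steps {
                script {
                    def desiredName = "my-app (${env.BRANCH_NAME})"
                    if (currentBuild.rawBuild.project.displayName != desiredName) {
                        currentBuild.rawBuild.project.description = "Pipeline for branch ${env.BRANCH_NAME}"
                        currentBuild.rawBuild.project.setDisplayName(desiredName)
                    } else {
                        echo 'Name change not required'
                    }
                }
            }
        }
        // ... real build stages follow here
    }
}

As with the original snippet, the rawBuild calls will need script approval.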
I tried to use the code snippet from the accepted answer to describe my Jenkins pipeline in the Jenkinsfile. I had to wrap the snippet in a function with the @NonCPS annotation and use def for the item variable. I placed the snippet at the root of the Jenkinsfile, not inside a node section.
@NonCPS
def setDescription() {
    def item = Jenkins.instance.getItemByFullName(env.JOB_NAME)
    item.setDescription("Some description.")
    item.save()
}
setDescription()
I am very new to Jenkins and the Job DSL plugin. After a little research, I found out how to create a job using the DSL, and now I am trying to delete a job using the DSL.
I know how to disable a job using the following code:
//create new job
//freeStyleJob("MyJob1", closure = null);
job("MyJob1") {
    disabled(true);
}
It is working perfectly fine, but I couldn't find any method to delete another job in Jenkins.
Please help!
Thanks!
To delete a job, you have to set the "Action for removed jobs" option to "Delete" in the "Process Job DSLs" build step configuration. Then remove the job from your script and run the seed job.
Each instance of the Job DSL plugin tracks which jobs (and views) it creates. When it is run again, you can configure what it does with jobs (and views) that were present the previous time this instance was run but are not present this time.
Let's assume you have two files you use to create jobs.
seed_jobdsl.groovy:
job('seed_all') {
    steps {
        dsl {
            external('*_jobdsl.groovy')
            // default behavior
            // removeAction('IGNORE')
        }
    }
}
test_jobdsl.groovy:
job('test_stuff') {
    steps {
        shell('echo "I live!"')
    }
}
This will leave jobs created by seed_all unchanged, even if they are not present in the list of jobs created the next time the seed is run.
To get jobs to be deleted, change your seed job code:
seed_jobdsl.groovy:
job('seed_all') {
    steps {
        dsl {
            external('*_jobdsl.groovy')
            removeAction('DELETE')
        }
    }
}
Now run the seed_all job to apply your change (seed_all overwrites its own configuration when run). Then make the following change:
test_jobdsl.groovy:
job('test_other') {
    steps {
        shell('echo "The job is dead, long live the new job!"')
    }
}
Run seed_all again. You will notice that test_stuff is deleted and test_other is created. If you then remove test_jobdsl.groovy and run seed_all again, test_other will be deleted as well.
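The same mechanism can be applied to views. Assuming your Job DSL version supports removeViewAction (worth checking in the API viewer for your installation), the seed job sketch would look like:

job('seed_all') {
    steps {
        dsl {
            external('*_jobdsl.groovy')
            removeAction('DELETE')      // action for removed jobs
            removeViewAction('DELETE')  // assumed method for removed views; verify against your Job DSL version
        }
    }
}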