I am trying to use a Groovy script to list all Jenkins jobs on a server, but it fails to get the jobs that are inside multibranch pipelines. I am only able to get the freestyle projects.
I use Jenkins.instance.getAllItems(AbstractProject.class), but as I understand from the documentation, it only lists jobs that extend the AbstractProject class, which is not the case for multibranch pipelines. Is there another way to get those jobs?
The ultimate goal is that sometimes I want to launch all jobs in a folder. That folder contains over 100 multibranch pipelines with a few branches each, and I don't want to trigger each one individually as it would be very time consuming.
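For reference, this is roughly what I run in the script console today, and it only returns the freestyle jobs:

// Only returns jobs that extend AbstractProject (e.g. freestyle jobs),
// so the branch jobs inside multibranch pipelines are missing.
Jenkins.instance.getAllItems(AbstractProject.class).each { job ->
    println job.fullName
}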
Thank you.
I was looking for the same thing to create an extensible choice parameter listing multibranch jobs. So here's what I came up with based on this answer to a similar question: https://stackoverflow.com/a/50163644
def getBranchNames(project) {
    project.getItems().each { job ->
        println(job.getProperty(org.jenkinsci.plugins.workflow.multibranch.BranchJobProperty.class).getBranch().getName())
    }
}

getBranchNames(jenkins.model.Jenkins.instance.getItemByFullName("/multibranch-job-name"))
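If you also need to cover every multibranch project under a folder, as in the question, a minimal sketch building on the function above (the folder name "my-folder" is a placeholder) could be:

import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject

// Hypothetical sketch: find every multibranch project whose full name is under
// the given folder and list the branch jobs of each one.
def multibranchProjects = jenkins.model.Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class)
multibranchProjects.findAll { it.fullName.startsWith('my-folder/') }.each { project ->
    println "Multibranch project: ${project.fullName}"
    getBranchNames(project)
}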
Hope this helps the next person.
I have two Jenkins instances (jenkins1 and jenkins2).
Jenkins1 contains freestyle jobs (all run on a specific template).
I need to extract all the jobs from jenkins1 and create those jobs as pipeline jobs in jenkins2.
I know simply copying the jobs doesn't work (because freestyle and pipeline are two different job types).
How can I achieve this efficiently using a Groovy or shell script?
Every job has a config.xml where all the job steps are listed in XML.
Parse that file, extract all the information, and then convert it into a pipeline job definition.
I think Groovy or shell scripts are a perfect way to achieve this; just use the config.xml as the source of information.
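As an illustration only, here is a minimal Groovy sketch; the job path and XML element names are assumptions based on a typical freestyle config.xml with shell build steps:

// Read a freestyle job's config.xml and print its shell build steps as
// stages of a scripted pipeline.
// XmlSlurper is groovy.util.XmlSlurper (a default import) on Groovy 2.x;
// use groovy.xml.XmlSlurper on Groovy 3+.
def config = new XmlSlurper().parse(new File('jobs/my-job/config.xml'))
def shellSteps = config.builders.'hudson.tasks.Shell'.collect { it.command.text() }

def pipeline = new StringBuilder("node {\n")
shellSteps.eachWithIndex { cmd, i ->
    pipeline << "    stage('step-${i + 1}') {\n"
    pipeline << "        sh '''${cmd}'''\n"
    pipeline << "    }\n"
}
pipeline << "}\n"
println pipeline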
The below resources can help:
https://jenkinsworld20162017.sched.com/event/Bk3r/auto-convert-your-freestyle-jenkins-jobs-to-coded-pipeline?iframe=no&w=100%&sidebar=yes&bg=no
https://github.com/visualphoenix/jenkins-xml-to-jobdsl
I have used the Jenkins Job DSL. Now I have started a new project and am considering using Pipeline instead of Job DSL.
When using Job DSL there was a seed job, and everybody was forced to store every job in version control in order not to have it overwritten.
I cannot find a way to enforce the same thing with Pipeline.
I liked this approach, because in my opinion it really helps to store everything in VCS.
When using Pipeline, you need to create the job configuration manually, just as you have to create the Job DSL seed job manually.
You can use a mixed approach: use Job DSL to create the Pipeline jobs and keep the pipeline definition in a Jenkinsfile next to your project's code.
pipelineJob('example') {
    definition {
        cpsScm {
            scm {
                git('https://github.com/jenkinsci/job-dsl-plugin.git')
            }
            scriptPath('Jenkinsfile')
        }
    }
}
See https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob for details.
Also check out the advanced Pipeline job types like Multibranch and Organization Folder, which provide a dynamic job setup out of the box. See https://jenkins.io/doc/book/pipeline/multibranch/. These job types are also supported by Job DSL.
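For example, a minimal Job DSL sketch of a multibranch job (the job name and repository URL are placeholders) could look like this:

multibranchPipelineJob('example-multibranch') {
    branchSources {
        git {
            id('example-repo')                              // unique ID for the branch source
            remote('https://github.com/example/repo.git')   // placeholder repository URL
        }
    }
    orphanedItemStrategy {
        discardOldItems {
            numToKeep(10)   // prune jobs for deleted branches
        }
    }
}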
I recently managed to convert several manually-created jobs to DSL scripts (inlined into temporary 'seed' jobs), and was pleasantly surprised how straightforward it was. Now I'd like to get rid of the multiple seed jobs and try to structure things more cleanly.
To that end, I created a new jenkins-ci repo and committed all the Groovy DSL scripts to it. Then I created a job-generator Jenkins job that pulls from the jenkins-ci repo and has a single Process Job DSLs step. This step has the Look on Filesystem box ticked, with the DSL Scripts field set to jobs/*.groovy. With global push notifications already in place, this works more-or-less as intended: if I make a change to the jenkins-ci repo, the job-generator job automatically runs and regenerates all the jobs—awesome!
What I don't like about this solution is that it has poor locality of reference: the DSL scripts for the job live in a completely separate repository from the code. What I'd really like is to keep the job DSL scripts in each individual code repository, in a jenkins subfolder, and have a single seed job that processes them all. That way, changes to CI setup could be code-reviewed right alongside the code. To me, that just feels like an ideal setup.
Unfortunately, I don't have a clear idea about how to make this happen. If I could figure out a way to make the seed job watch multiple repos, such that a commit to any one of them would trigger it, perhaps I could inject another build step before the Process Job DSLs step and (somehow) script my way to victory, but... I'm unsure how to even get to that point. (I certainly don't want to do full clones of each repo in the generator job just to pull in the DSL scripts!)
I suspect I'm not the first person to wish they could put the Job DSL scripts alongside the code, though perhaps I'm over-estimating the benefits. Any advice on this topic would be much appreciated—thanks!
Unfortunately there is no direct way of solving this. Several feature requests have been opened (JENKINS-33275, JENKINS-37220), but AFAIK no one is working on any of them.
As a workaround you can use the Pipeline Multibranch Plugin and create a multibranch project for each of your repositories. You must then add a simple Jenkinsfile to each repo/branch and use the Jenkinsfile to execute your Job DSL scripts. See Use Job DSL in Pipeline scripts for details. This would require minimal coding, but I think each repo must be cloned for this to work because the Job DSL files must be available on the file system.
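For example, the Jenkinsfile in each repository could be a minimal sketch like this, assuming the Job DSL scripts live in a jenkins subfolder of each repo as described in the question:

// Jenkinsfile: check out the repo and run the Job DSL scripts stored next to the code.
node {
    checkout scm
    jobDsl targets: 'jenkins/*.groovy'   // the 'jobDsl' step is provided by the Job DSL plugin
}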
You can use Job DSL to create the multibranch jobs, see multibranchPipelineJob in the API viewer. This would be your "root" seed job.
If your repos are hosted on GitHub, you can also check out the GitHub Organization Folder Plugin. With that plugin you only need to create one job per organization instead of multiple multibranch jobs.
I wonder if it is possible to create generic parameterized jobs, ready to copy, where the only post-copy action is to redefine their parameters.
During my investigation I found out that:
- I can use parameters in the SVN path definition
- I can define the flow of builds using the *Build Flow Plugin*
However, I cannot force Jenkins to use parameters inside job name definitions for the promotion process. Is there any way to achieve that?
Since I sometimes create branches from master, I would like to copy the whole configuration of the jobs; most times the only difference is that I replace master with the branch name in the job name definition.
Why not use a parameter as the branch name?
Then, when you start the job, you can input the parameters based on the branch you want to build.
Or find some way to get the branch info and inject it.
For example, we have one common job that monitors maybe 20 projects. If a change to any of those projects is merged into Git, the Gerrit Trigger plugin kicks in, and the job name and branch are obtained dynamically from parameters.
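As an illustration only, here is a hypothetical Job DSL sketch of such a generic job with the branch as a parameter; the job name, repository URL, and parameter name are placeholders:

// Hypothetical sketch: one generic job parameterized by branch, so copies only
// need their default parameter value changed instead of the whole job definition.
job('build-generic') {
    parameters {
        stringParam('BRANCH', 'master', 'Branch to build')
    }
    scm {
        git {
            remote {
                url('https://example.com/repo.git')   // placeholder repository URL
            }
            branch('$BRANCH')                         // the parameter drives the SCM checkout
        }
    }
    steps {
        shell('echo "Building branch $BRANCH"')
    }
}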
Hope this works for you.
I want to create a Jenkins job (stage 1) that will gather all the parameters needed throughout a build pipeline, as I do not want to hard-code each stage individually because they are likely to change regularly.
At stage 3 of the pipeline there will be 5 simultaneous jobs running, each with different parameters that will have been gathered in stage 1.
Is there a way I can gather the parameters I need in stage 1, using a cron job, so that they are available for subsequent stages?
I think what is throwing everyone off answering your question is the "cron" part. What has "cron" got to do with any of this?
If we ignore that, there is an answer here that deals with a similar situation:
How to build a pipeline of jobs in Jenkins?
Using the Parameterized Trigger Plugin, you can collect all your parameters in the first job and then pass them from one job to another as environment variables.
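As an illustration only, a hypothetical Job DSL sketch of that setup (the job names and the properties file are placeholders) could look like this:

// Hypothetical sketch: the first job writes the values it gathers to a properties
// file, and the Parameterized Trigger Plugin passes them to the downstream jobs.
job('pipeline-stage-1') {
    steps {
        shell('echo "TARGET_ENV=staging" > stage1.properties')   // placeholder parameter gathering
    }
    publishers {
        downstreamParameterized {
            trigger('stage-3-job-a, stage-3-job-b, stage-3-job-c') {
                parameters {
                    propertiesFile('stage1.properties')   // downstream jobs receive these as parameters
                }
            }
        }
    }
}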