How to programmatically generate config.xml from Jenkinsfile? - jenkins

Jenkins has the ability to upload new jobs via its REST API. Those new jobs require an XML document which, to the best of my searching, has no schema available.
When creating jobs as part of an SCM repo, you can include a Jenkinsfile and it automatically gets translated into a job with the config.xml filled out.
I tried creating a minimal config.xml and including the Jenkinsfile content in the <script>…</script> section of the XML file. This works for trivial jobs, but not for jobs that have parameters: the job is uploaded as a parameterless job. The first time you trigger a build it fails, but after that the job turns into a parameterized job and can be built properly.
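For reference, the minimal config.xml I mean looks roughly like this (a Pipeline job backed by the workflow-job/workflow-cps plugins; exact attributes vary by plugin version, so treat it as a sketch):

    <?xml version='1.1' encoding='UTF-8'?>
    <flow-definition plugin="workflow-job">
      <description/>
      <keepDependencies>false</keepDependencies>
      <properties/>
      <definition class="org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition" plugin="workflow-cps">
        <script>
          pipeline {
            agent any
            parameters {
              string(name: 'GREETING', defaultValue: 'hello')
            }
            stages {
              stage('Build') {
                steps { echo "param is ${params.GREETING}" }
              }
            }
          }
        </script>
        <sandbox>true</sandbox>
      </definition>
      <triggers/>
      <disabled>false</disabled>
    </flow-definition>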
How do I convert a Jenkinsfile, possibly with parameters or other "advanced" features, into a working config.xml file on the first try? Or, alternatively, is it possible to directly upload the Jenkinsfile to the Jenkins REST API to create the job?
Thanks in advance,
— Johnson

Related

Start a Jenkins pipeline job from a file parameter

I want to start a new Jenkins job by uploading a single file as a file parameter.
I have not been able to make it work with a single declarative pipeline job.
I have read many posts here, but I have not found a definitive guide or method to do it.
Could you please provide an example or link?
The problem with a pipeline job is that it is not clear where the file parameter ends up being available.
So far, I have managed to have a freestyle job receive the file and copy it to a directory, from which a second (pipeline) job picks it up. The pipeline job is started by the freestyle job.
An awful solution from any point of view, but it works.

How to copy parameters from one pipeline to another without copying entire pipeline?

On our team, only a few people have Jenkins access to perform admin operations, since it is a production Jenkins server that developers use continuously for builds.
Sometimes I have to enhance a pipeline or fix pipeline issues. For that, the admin has created one pipeline for me so I can add code there and test it; I am supposed to use only that pipeline to test anything.
But I test different pipelines, and each pipeline has a different parameter list. So I have to add parameters one by one, copying all the details of each parameter (Groovy script, default value, etc.), which takes a lot of time.
Is there any way or plugin with which we can simply copy only the parameters from one pipeline to another?
Each job has a config.xml that represents the job configuration; you can get it from <job_url>/config.xml.
1. Get the config.xml of the job you want to debug, then extract the XML block for the job parameters from it.
2. Prepare an empty skeleton config.xml and inject that parameters block into it.
3. Call the Jenkins REST API to update/save the config.xml of your debug job; your debug job then has the target job's parameters.
You can write a script that implements the above three steps, as in the sketch below.
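A rough Groovy sketch of those steps, run as a plain script outside Jenkins (the URL, job names and user:apitoken are placeholders; depending on your security settings a CSRF crumb may also be required):

    // Rough sketch of the three steps above.
    def jenkins   = 'https://jenkins.example.com'
    def auth      = 'Basic ' + 'user:apitoken'.bytes.encodeBase64().toString()
    def sourceJob = 'pipeline-to-debug'
    def debugJob  = 'my-debug-pipeline'

    def fetch = { path ->
        def c = new URL("$jenkins/$path").openConnection()
        c.setRequestProperty('Authorization', auth)
        c.inputStream.text
    }

    // Step 1: fetch both config.xml files.
    def sourceCfg = new XmlParser().parseText(fetch("job/$sourceJob/config.xml"))
    def debugCfg  = new XmlParser().parseText(fetch("job/$debugJob/config.xml"))

    // Step 2: move the parameters block into the debug job's <properties> element.
    def paramsBlock = { cfg ->
        cfg.depthFirst().find { it.name() == 'hudson.model.ParametersDefinitionProperty' }
    }
    def props = debugCfg.depthFirst().find { it.name() == 'properties' }
    def stale = paramsBlock(debugCfg)
    if (stale) props.remove(stale)
    def fresh = paramsBlock(sourceCfg)
    if (fresh) props.append(fresh)

    // Step 3: POST the modified config.xml back to the debug job.
    def xmlOut = new StringWriter()
    new XmlNodePrinter(new PrintWriter(xmlOut)).print(debugCfg)
    def post = new URL("$jenkins/job/$debugJob/config.xml").openConnection()
    post.requestMethod = 'POST'
    post.doOutput = true
    post.setRequestProperty('Authorization', auth)
    post.setRequestProperty('Content-Type', 'application/xml')
    post.outputStream.withWriter('UTF-8') { it << xmlOut.toString() }
    println "Jenkins responded with HTTP ${post.responseCode}"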

Storing Jenkins pipeline job metadata?

Is there a way to store some metadata from a Jenkins pipeline job? E.g.:
We have a Jenkinsfile which builds a Gradle project, creates a Docker image and pushes it to Google Cloud.
Then a "subjob" is launched which runs integration tests (IT) on that Docker image. The subjob receives a couple of parameters (one of them the generated Docker image name).
Now sometimes that IT job fails, and I would like to re-run it from the main job view, so ideally:
we have a plugin which renders a custom button in the Blue Ocean UI on the main job;
by clicking that button the subjob is invoked again with the same parameters (the plugin queries the Jenkins API, gets the params of this job, and resubmits the subjob).
The problem? How to get/set those parameters. I could not find a mechanism for that, except artifact storage. I could get by with creating a simple JSON/text file, uploading it as an artifact, and then retrieving it in my plugin, but maybe there is a better way?
Stage restart is not coming to Scripted Pipelines, so that does not look like an option.
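For what it's worth, the artifact workaround mentioned above could be as small as this sketch (the map keys and the imageName variable are only illustrative):

    // In the main job's Jenkinsfile, once the subjob parameters are known:
    // write them to a small JSON artifact that the plugin (or anything else)
    // can fetch later from the build's archived artifacts.
    def meta = [dockerImage: imageName, subJob: 'integration-tests']   // illustrative values
    writeFile file: 'it-params.json', text: groovy.json.JsonOutput.toJson(meta)
    archiveArtifacts artifacts: 'it-params.json'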
Maybe you can use the Jenkins API to get the details of the build?
https://your_jenkins_url.com/job/job_name/lastBuild/api/json?pretty=true
Instead of lastBuild you can also use the build number or one of lastStableBuild, lastSuccessfulBuild, lastFailedBuild, lastUnstableBuild, lastUnsuccessfulBuild, lastCompletedBuild
There is a parameters key there with all parameter names and values used in the build.
More details on https://your_jenkins_url.com/job/job_name/api/
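For example, a small Groovy sketch of that lookup (URL, job name and user:apitoken are placeholders):

    import groovy.json.JsonSlurper

    def url  = 'https://your_jenkins_url.com/job/job_name/lastBuild/api/json'
    def conn = new URL(url).openConnection()
    conn.setRequestProperty('Authorization',
            'Basic ' + 'user:apitoken'.bytes.encodeBase64().toString())
    def build = new JsonSlurper().parseText(conn.inputStream.text)

    // The build's parameters are exposed under actions[] as hudson.model.ParametersAction.
    def params = build.actions.find { it._class == 'hudson.model.ParametersAction' }?.parameters
    params?.each { println "${it.name} = ${it.value}" }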
Also, any reason you can't use the replay button in the IT job?

Extract Freestyle Jobs and create pipeline Jobs in another Jenkins instance

I have two Jenkins instances (jenkins1 and jenkins2).
jenkins1 contains freestyle jobs (all run on a specific template).
I need to extract all the jobs from jenkins1 and create those jobs as pipeline jobs in jenkins2.
I know that simply copying the jobs doesn't work (because they are two different job types, freestyle and pipeline).
How can I achieve this efficiently using a Groovy or shell script?
Every job has a config.xml where all the job steps are listed in XML.
Parse that file, extract all the information, and then convert it into a pipeline job routine.
I think Groovy/shell scripts are a perfect way to achieve it; just use the config.xml as the source of information.
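As a very rough Groovy sketch of that idea, the following only maps shell build steps (hudson.tasks.Shell); SCM settings, publishers and wrappers would each need their own mapping:

    // Run against a copy of a freestyle job's config.xml.
    def config = new XmlParser().parse(new File('config.xml'))

    def shellCommands = config.depthFirst()
            .findAll { it.name() == 'hudson.tasks.Shell' }
            .collect { it.command.text().trim() }

    // Wrap the extracted shell steps in a minimal scripted pipeline.
    def pipelineScript = "node {\n  stage('Build') {\n" +
            shellCommands.collect { "    sh '''${it}'''\n" }.join('') +
            "  }\n}"

    // Paste the result into a pipeline job, or feed it to a Job DSL pipelineJob script.
    println pipelineScript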
The below resources can help:
https://jenkinsworld20162017.sched.com/event/Bk3r/auto-convert-your-freestyle-jenkins-jobs-to-coded-pipeline?iframe=no&w=100%&sidebar=yes&bg=no
https://github.com/visualphoenix/jenkins-xml-to-jobdsl

Can a single seed job process DSLs from multiple repos?

I recently managed to convert several manually-created jobs to DSL scripts (inlined into temporary 'seed' jobs), and was pleasantly surprised how straightforward it was. Now I'd like to get rid of the multiple seed jobs and try to structure things more cleanly.
To that end, I created a new jenkins-ci repo and committed all the Groovy DSL scripts to it. Then I created a job-generator Jenkins job that pulls from the jenkins-ci repo and has a single Process Job DSLs step. This step has the Look on Filesystem box ticked, with the DSL Scripts field set to jobs/*.groovy. With global push notifications already in place, this works more-or-less as intended: if I make a change to the jenkins-ci repo, the job-generator job automatically runs and regenerates all the jobs—awesome!
What I don't like about this solution is that it has poor locality of reference: the DSL scripts for the job live in a completely separate repository from the code. What I'd really like is to keep the job DSL scripts in each individual code repository, in a jenkins subfolder, and have a single seed job that processes them all. That way, changes to CI setup could be code-reviewed right alongside the code. To me, that just feels like an ideal setup.
Unfortunately, I don't have a clear idea about how to make this happen. If I could figure out a way to make the seed job watch multiple repos, such that a commit to any one of them would trigger it, perhaps I could inject another build step before the Process Job DSLs step and (somehow) script my way to victory, but... I'm unsure how to even get to that point. (I certainly don't want to do full clones of each repo in the generator job just to pull in the DSL scripts!)
I suspect I'm not the first person to wish they could put the Job DSL scripts alongside the code, though perhaps I'm over-estimating the benefits. Any advice on this topic would be much appreciated—thanks!
Unfortunately there is no direct way of solving this. Several feature requests have been opened (JENKINS-33275, JENKINS-37220), but AFAIK no one is working on any of them.
As a workaround you can use the Pipeline Multibranch Plugin and create a multibranch project for each of your repositories. You must then add a simple Jenkinsfile to each repo/branch and use the Jenkinsfile to execute your Job DSL scripts. See Use Job DSL in Pipeline scripts for details. This would require minimal coding, but I think each repo must be cloned for this to work because the Job DSL files must be available on the file system.
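For example, the per-repo Jenkinsfile could be as small as this (assuming the Job DSL scripts live in a jenkins/ subfolder, as described in the question):

    // Jenkinsfile committed to each repo/branch: it only runs that repo's
    // own Job DSL scripts from the jenkins/ subfolder.
    node {
        checkout scm
        jobDsl targets: 'jenkins/*.groovy',
               removedJobAction: 'IGNORE'   // leave previously generated jobs alone if a script stops producing them
    }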
You can use Job DSL to create the multibranch jobs; see multibranchPipelineJob in the API viewer. This would be your "root" seed job.
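A minimal sketch of such a root seed script (repository names and URLs are placeholders):

    // One multibranch project per code repository; every branch that contains
    // a Jenkinsfile then gets its own job automatically.
    ['service-a', 'service-b'].each { repo ->
        multibranchPipelineJob(repo) {
            branchSources {
                git {
                    id(repo)   // a stable, unique id for the branch source
                    remote("https://git.example.com/${repo}.git")
                }
            }
            orphanedItemStrategy {
                discardOldItems {
                    numToKeep(10)
                }
            }
        }
    }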
If your repos are hosted on GitHub, you can also check out the GitHub Organization Folder Plugin. With that plugin you only have to create one job per organization instead of multiple multibranch jobs.
