Jenkins configuration as code plugin vs Pipeline

I'm looking at options of configuring Jenkins as code. What I've found so far are those two options:
Configuration as code plugin (https://github.com/jenkinsci/configuration-as-code-plugin/blob/master/README.md)
Pipeline (https://jenkins.io/doc/book/pipeline/)
What I don't understand yet is how these two work with each other. Do they both do the same thing, so that I should choose one or the other? Or do they do different things? In that case it would be great to know whether the two can actually work together.

You can use pipeline to configure the build process as code.
You can use Jenkins Configuration as Code to configure the Jenkins instance as code.
You should also have a look at Job DSL Plugin to configure the jobs (everything but the build process) as code.
You may have a look at this repo to see it all work together: https://github.com/tomasbjerre/jenkins-configuration-as-code-sandbox
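To see how they complement each other, here is a minimal sketch of a JCasC YAML file that configures the instance and uses the Job DSL integration to seed a pipeline job (this assumes the Job DSL plugin is installed alongside JCasC; the job name and repository URL are placeholders):

    # jenkins.yaml -- read by the Configuration as Code plugin at startup
    jenkins:
      systemMessage: "Configured entirely as code"
      numExecutors: 2
    jobs:
      - script: |
          pipelineJob('example-pipeline') {
            definition {
              cpsScm {
                scm {
                  git {
                    remote { url('https://example.com/my-repo.git') }
                    branch('*/main')
                  }
                }
                scriptPath('Jenkinsfile')
              }
            }
          }

The build process itself then lives in the Jenkinsfile that the seeded job checks out.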

Related

What is the standard way of preconfiguring Jenkins?

I have a significant amount of pre-configuration that I want to automate for Jenkins, e.g. pre-configuring Gerrit for the Gerrit Trigger plugin, pre-configuring SAML, libraries, etc.
I'm aware of two methods typically used to do similar tasks:
Configuration as code plugin + yaml configuration
Groovy scripts executed from the init.groovy.d directory of the Jenkins home directory at Jenkins startup
My users want to be able to update the Jenkins configuration from the UI without needing to update YAML, which suggests the Configuration as Code plugin isn't fit for our purpose, as I believe it reapplies the config whenever the Jenkins container is restarted.
My hunch is to use Groovy scripts that remove themselves after the first execution so that they don't reapply themselves on restart.
Is there a more standard way of pre-configuring Jenkins, or is Groovy my best bet?
TL;DR: Use the file system
Why? There is no "standard" way to achieve what you intend; the two approaches that you suggest are viable options for sure.
From an operational point of view, however, it is good to select a solution which is
generic (so it can cover all aspects of Jenkins configuration) and
"simple" to use
Now,
"Configuration as code" makes you depend on the corresponding plugin -- it may or may not support a specific configuration option
With Groovy, it is sometimes quite difficult to find out how to set a Jenkins configuration option (and how to store the setting permanently); the sketch below shows the usual pattern.
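For illustration, a minimal init.groovy.d script that sets a global option, persists it, and then removes itself so it runs only once (the executor count and label are arbitrary examples):

    // $JENKINS_HOME/init.groovy.d/configure.groovy -- runs at Jenkins startup
    import jenkins.model.Jenkins

    def jenkins = Jenkins.get()
    jenkins.setNumExecutors(4)             // example setting
    jenkins.setLabelString('linux build')  // example setting
    jenkins.save()                         // persist the change to config.xml on disk

    // Delete the script so it is not reapplied on the next restart
    new File(jenkins.rootDir, 'init.groovy.d/configure.groovy').delete()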
Since all Jenkins configuration data is stored on-disk, another option for bootstrapping Jenkins with a well-defined configuration is to pre-fill those configuration files with proper content right away:
You can be sure that this works in all cases, including all border cases (like secret/encrypted data)
Users can change the data later on as needed
Usually, it's quite easy to find the proper configuration file
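For example, a pre-filled fragment of the main configuration file might look like this (the values are illustrative):

    <!-- $JENKINS_HOME/config.xml (fragment) -->
    <hudson>
      <systemMessage>Pre-configured build server</systemMessage>
      <numExecutors>4</numExecutors>
      ...
    </hudson>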
On the downside, there is a risk that the configuration file format might change with newer versions of the core or of some plugin. However, a similar risk exists for the two other solutions that you suggested.
Tip: for rolling out such pre-configured Jenkins setups, it is helpful to disable the Jenkins setup wizard by setting jenkins.install.runSetupWizard to false.
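The property must be set on the JVM before Jenkins starts; in a Docker-based setup, for instance, it can be passed via JAVA_OPTS (a minimal sketch, assuming the official image; the tag is just an example):

    # Dockerfile
    FROM jenkins/jenkins:lts
    ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"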
When you combine terms like pre-configuring Jenkins, init.groovy.d, Jenkins home, and Jenkins startup, it does sound confusing.
Once Jenkins is ready to use, most people just need to create jobs or pipelines. To create a job or pipeline, you usually only need to install and configure some plugins. Very few of them require Groovy, because the goal is ease of use.
Advanced users can create their own plugins in Java, but almost everything is already available as a plugin.
You can use Groovy in scripted or declarative pipelines.
So if your question is really "What is the best way to create and configure jobs or pipelines?", I can advise you to:
Use scripted or declarative pipelines as much as possible (a minimal example follows this list).
Use only verified and supported plugins.
Stop calling shell scripts that live on the build machine's hard drive; keep them in source control instead.
Stop using complicated configurations. Almost all requirements are already implemented and documented.
If you have a requirement and no plugin seems to help, ask here on Stack Overflow, or develop your own plugin with configurability in mind, so you can release it for the benefit of the Jenkins community.
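To illustrate the first point, a minimal declarative pipeline covers a typical build-and-test flow without any custom Groovy (the shell commands are placeholders):

    // Jenkinsfile
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './build.sh'       // placeholder build command
                }
            }
            stage('Test') {
                steps {
                    sh './run-tests.sh'   // placeholder test command
                }
            }
        }
    }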

Is it possible to have a common repository for multiple pipeline jobs?

I have 11 jobs running on the Jenkins master node, all of which have a very similar pipeline setup. For now, each job has its very own Jenkinsfile that specifies the stages within the job, and all of them build just fine. But wouldn't it be better to have a single repo with some files (preferably a single Jenkinsfile and some libraries) required to run all the jobs that share this pipeline structure, with the few differences handled by a workaround?
If there is a way to accomplish this, please let me know.
Use a Shared Library to define common functionality. Your 11 Jenkinsfiles can then be as small as only a single call to the function implementing the pipeline.
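A minimal sketch of that setup (the step name standardPipeline and its parameters are made up for illustration; the library itself must be registered, e.g. under Global Pipeline Libraries):

    // vars/standardPipeline.groovy in the shared library repository
    def call(Map config = [:]) {
        node {
            stage('Checkout') {
                checkout scm
            }
            stage('Build') {
                sh config.buildCommand ?: './build.sh'    // per-job override with a default
            }
            stage('Test') {
                sh config.testCommand ?: './run-tests.sh'
            }
        }
    }

    // Jenkinsfile in each of the 11 repositories
    @Library('my-shared-library') _
    standardPipeline(buildCommand: './gradlew assemble')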
Besides using a Shared Library, you can create a Groovy file with common functionality and call its methods via load(); see the documentation and example. This is an easier approach, but as pipelines grow in complexity it may impose some limitations.
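A minimal sketch of the load() approach (the file name common.groovy and the deploy method are made up):

    // common.groovy -- kept alongside the Jenkinsfile in the repository
    def deploy(String environment) {
        echo "Deploying to ${environment}"
    }
    return this   // required so callers can invoke the loaded methods

    // Jenkinsfile
    node {
        checkout scm
        def common = load 'common.groovy'
        common.deploy('staging')
    }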

Is there a trick to debug shared groovy libraries without pushing?

I'm adding to, and maintaining, Groovy files to build a set of repositories -- previously they were built with freestyle Jenkins jobs. I keep some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know to test and debug those library files is to push the changes on a git branch. I know about the "replay" trick to test the main Jenkinsfile. Is there some approach I've missed to do something similar for library code?
If you set up a job to load the shared library instead of relying on a globally set up shared library (you can have both going, for this particular job), then it is possible to hit "replay" and have all your shared library steps show up as editable files.
This can be helpful in iterative development without a million commits.
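One way to set this up is to load the library explicitly with the library step instead of an @Library annotation against the global configuration (the library name, branch, and URL are placeholders):

    // Jenkinsfile -- loading the library dynamically makes its files
    // show up as editable in the Replay view
    library identifier: 'my-shared-library@my-feature-branch',
            retriever: modernSCM([$class: 'GitSCMSource',
                                  remote: 'https://example.com/jenkins-shared-library.git'])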
There is the 3rd party Jenkins Pipeline Unit testing framework.
While it does not yet cover all pipeline features, it is well documented and maintained, so I would consider adopting it (once I revisit our Jenkins setup).
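A minimal sketch of such a test (assuming JUnit and the jenkins-pipeline-unit dependency are on the test classpath):

    import com.lesfurets.jenkins.unit.BasePipelineTest
    import org.junit.Before
    import org.junit.Test

    class JenkinsfileTest extends BasePipelineTest {

        @Before
        void setUp() {
            super.setUp()
            // Stub the steps the script calls so it can run outside Jenkins
            helper.registerAllowedMethod('sh', [String], { cmd -> println "sh: ${cmd}" })
        }

        @Test
        void runs_without_errors() {
            runScript('Jenkinsfile')     // execute the pipeline script under test
            printCallStack()             // show which steps were invoked
            assertJobStatusSuccess()
        }
    }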

How can I share source code across many nodes in a Jenkins pipeline job?

I have a build that's currently using the old build flow plugin that I'm trying to convert to pipeline.
This build can be massively parallelized (many units of work can run on many different nodes) but we only want to extract the source code once at the beginning, preferably with the Pipeline script from SCM option. I'm at a loss to understand how I can share the source extract (which apparently is on the master) with all of the "downstream" nodes that will be used by the pipeline script.
For build flow we extracted to a well-known location on a shared file system, and all of the downstream jobs invoked by the flow were passed (or could derive) that location. That always felt icky, and I was hoping that pipeline would have solved this problem, but I can't find anything to suggest that it has. What am I missing?
I believe the official recommendation for this is to make bundles of the source and then use "stash" and "unstash" to make them available to deeper steps of your pipeline script.
See https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
Keep in mind that this doesn't do anything to help with line endings. If your builds span OSs with different line endings, you either need to make OS-specific stashes or just check out to a safe label in each downstream step.
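A minimal sketch of the stash/unstash pattern (node labels and commands are examples):

    node {
        // Check the source out once and bundle it
        checkout scm
        stash name: 'sources', includes: '**'
    }
    parallel linux: {
        node('linux') {
            unstash 'sources'
            sh 'make test'
        }
    }, windows: {
        node('windows') {
            unstash 'sources'
            bat 'run-tests.bat'
        }
    }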
After further research it seems like the External Workspace Manager Plugin does what I'm looking for.

Changing slaves in jenkins-workflow

I am configuring a Jenkins workflow, and the requirement is to use Linux (server1) for the compilation part of the workflow and Windows (server2) for testing, because the testing tool is not compatible with Linux. After testing is complete, I need to switch back to the same Linux machine (server1) to continue the rest of the workflow.
How can I switch slaves within the same workflow? If that is not possible, what are other ways to achieve this?
Suggestions appreciated!
If by jenkins-workflow you mean Jenkins Pipeline, then you can do it like this:

    node('server1') {
        // compilation steps on Linux
        node('server2') {
            // testing steps on Windows
        }
        // rest of the workflow, continued on server1
    }
You can send any files between the nodes using stash/unstash steps.
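For example (the stash name and paths are placeholders):

    node('server1') {
        // ... compile ...
        stash name: 'binaries', includes: 'build/**'
        node('server2') {
            unstash 'binaries'   // make the compiled output available for testing
            // ... run the Windows-only tests ...
        }
        // back on server1 for the rest of the workflow
    }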
A possible way is to use a wrapper job which starts the compilation and test jobs via Trigger/call builds on other projects.
That way you can move the artifacts with help of the Archive the artifacts option (which you would use in the compilation job's post-build actions) and the Copy Artifact plugin (which you'd use in a test job build step).
You can define on which machine/label your jobs run, either statically via the default configuration in your job, or dynamically with help of the NodeLabel plugin.
Note:
You can also try option 3 mentioned here; however, I'm not sure if it works when files have to be moved between different machines.
Might be worth checking out though, as if it works, it could be a lot more convenient.
