Exporting and Importing Jenkins Pipeline script approvals

I have a significant set of Groovy pipeline scripts for our Jenkins build process. I am in the process of moving those scripts onto another instance, and would like to replicate the set of approved signatures that were not originally whitelisted.
Is it possible to export the list of approved signatures and import them into another instance?
The only other solution I have is to run and rerun the scripts, approving each signature as it breaks the build. Since the scripts are quite complex, and not every run is guaranteed to hit each line, this is not going to be a quick process.
The other option would be to create a master 'white list' script which runs all the currently non-approved scripts again and again until every signature has been approved.
Neither of these options is great, so I'm hoping for a simple import/export to avoid having to do this work altogether, but I certainly can't see an option available to me in the UI.
Cheers

I do not believe there is import/export functionality by default but maybe there's a plugin that'll do it.
If you have access to the directory Jenkins is installed or runs in, you should be able to find the scriptApproval.xml file in the Jenkins home directory.
If you explore that file you'll find approvedScriptHashes, approvedSignatures, etc. You can lift this file entirely and paste it into the new instance, or copy across the specifics you need (either way you'll need a restart).
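Alternatively, the same data can be moved through the Script Console. A minimal sketch, assuming the Script Security plugin's ScriptApproval API (verify the method names against your plugin version; the signature strings below are illustrative):

```groovy
import org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval

// On the old instance: print the approved signatures so they can be copied out
ScriptApproval.get().approvedSignatures.each { println it }

// On the new instance: paste the captured list and approve each signature
def signatures = [
    'method hudson.model.Run getNumber',                      // illustrative entries
    'staticMethod java.lang.System getenv java.lang.String'
]
signatures.each { ScriptApproval.get().approveSignature(it) }
```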
Looks like there's an open request for this sort of functionality here

Related

What is the standard way of preconfiguring Jenkins?

I have a significant amount of pre-configuration that I want to automate for Jenkins, e.g. preconfiguring Gerrit for the Gerrit Trigger plugin, preconfiguring SAML, shared libraries, etc.
I'm aware of two methods typically used to do similar tasks:
Configuration as code plugin + yaml configuration
Groovy scripts executed from the init.groovy.d directory of the Jenkins home on Jenkins startup
My users want to be able to update the Jenkins configuration from the UI without needing to update YAML, which suggests the Configuration as Code plugin isn't fit for our purpose, as I believe it reapplies the configuration whenever the Jenkins container is restarted.
My hunch is to use Groovy scripts that remove themselves after the first execution so that they don't reapply themselves on restart, along the lines of the sketch below.
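A minimal sketch of that idea, assuming a script dropped into init.groovy.d (the file name and the setting it applies are illustrative):

```groovy
// Hypothetical $JENKINS_HOME/init.groovy.d/00-bootstrap.groovy
import jenkins.model.Jenkins

def jenkins = Jenkins.get()

// Apply some one-time configuration (illustrative setting)
jenkins.numExecutors = 4
jenkins.save()

// Delete this script so it is not reapplied on the next restart
new File(jenkins.rootDir, 'init.groovy.d/00-bootstrap.groovy').delete()
```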
Is there a more standard way of preconfiguring Jenkins? Or is Groovy my best bet?
TL;DR: Use the file system
Why? There is no "standard" way to achieve what you intend; the two approaches you suggest are certainly viable options.
From an operational point of view, however, it is good to select a solution which is
generic (so it can cover all aspects of Jenkins configuration) and
"simple" to use
Now,
"Configuration as code" makes you depend on the corresponding plugin -- it may or may not support a specific configuration option
With Groovy, it is sometimes quite difficult to find out how to set a Jenkins configuration option (and how to store the setting permanently).
Since all Jenkins configuration data is stored on-disk, another option for bootstrapping Jenkins with a well-defined configuration is to pre-fill those configuration files with proper content right away:
You can be sure that this works in all cases, including all border cases (like secret/encrypted data)
Users can change the data later on as needed
Usually, it's quite easy to find the proper configuration file
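For instance, the global Jenkins URL and admin address live in jenkins.model.JenkinsLocationConfiguration.xml under the Jenkins home; a pre-filled copy might look like this (values are illustrative):

```xml
<?xml version='1.1' encoding='UTF-8'?>
<jenkins.model.JenkinsLocationConfiguration>
  <adminAddress>admin@example.com</adminAddress>
  <jenkinsUrl>https://jenkins.example.com/</jenkinsUrl>
</jenkins.model.JenkinsLocationConfiguration>
```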
On the downside, there is a risk that the configuration file format might change with newer versions of the core or of some plugin. However, a similar risk exists for the two other solutions that you suggested.
Tip: for rolling out such pre-configured Jenkins setups, it is helpful to disable the Jenkins setup wizard by setting jenkins.install.runSetupWizard to false.
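That property is usually passed on the Java command line (-Djenkins.install.runSetupWizard=false); the same effect can be achieved from an init script. A sketch, assuming Jenkins' InstallState API:

```groovy
// Hypothetical init.groovy.d script: mark the initial setup as completed
// so the wizard never appears
import jenkins.install.InstallState
import jenkins.model.Jenkins

if (!Jenkins.get().installState.isSetupComplete()) {
    InstallState.INITIAL_SETUP_COMPLETED.initializeState()
}
```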
When you combine words like pre-configuring Jenkins, init.groovy.d, Jenkins home, Jenkins startup, etc., it sounds confusing o_O
When Jenkins is ready to use, most folks just need to create jobs or pipelines. To create a job or pipeline, you just need to install and configure some plugins. Very few of them need Groovy, because the goal is "easy to use".
Advanced users can create their own plugins in Java, but almost everything is already available as a plugin.
You can use Groovy in scripted or declarative pipelines.
So if your question is more like "What is the best way to create and configure jobs or pipelines", I can advise you to:
Use scripted or declarative pipelines as much as possible.
Use only verified and supported plugins.
Stop calling shell scripts on the hard drive.
Stop using complicated configurations. Almost all common requirements are already implemented and documented.
If you have a requirement and no plugin seems to help, ask here on Stack Overflow, or develop your own plugin focused on configurability so you can release it for the benefit of the Jenkins community.

How to display configuration differences between two Jenkins builds?

I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables (including Jenkins parameters), as reported by set, maybe with some filter
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the only part that is not clear is how to generate the diff/change-report regarding differences found between current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this, how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that can easily be used to combine all this information, that would be great.
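For concreteness, this is roughly the shape of pipeline I am imagining, assuming the Copy Artifact plugin is available (file names and stage layout are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Manifest diff') {
            steps {
                // Capture the "non-code" state of this build
                sh '''
                    set | sort > manifest.txt
                    rpm -qa | sort >> manifest.txt
                    pip freeze >> manifest.txt
                '''
                // Fetch the manifest archived by the last successful build, if any
                copyArtifacts projectName: env.JOB_NAME,
                              selector: lastSuccessful(),
                              filter: 'manifest.txt',
                              target: 'previous',
                              optional: true
                // diff exits non-zero when files differ, hence the "|| true"
                sh 'diff previous/manifest.txt manifest.txt > manifest-diff.txt || true'
                archiveArtifacts artifacts: 'manifest.txt, manifest-diff.txt'
            }
        }
    }
}
```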
you always ask the most baller questions, nice work. :)
We always try to push as many things into code as possible, because of the same sort of lack of traceability you're describing with non-code configuration. We start by using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control).
For system tool packages, we get that into the app by using Docker and by inheriting from a specific tag of the Docker base image. So even if we want to change system packages, or even the Python version, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by Docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
There will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so I hope someone pipes up with a clean solution to your question.

Jenkins Global Pipeline Library - what's a sane development workflow?

What is a sane development workflow for writing Jenkins global pipeline libraries and Jenkinsfiles? It's kind of a pain to check in my changes to the global pipeline library, then run a build with retry to modify the Jenkinsfile, then save the diff if it takes a couple of iterations.
Anybody have any recommendations? What do you do?
There is a 3rd-party unit testing framework for Jenkins pipelines: lesfurets/JenkinsPipelineUnit. This also covers shared libraries and allows you to verify the call stack of your pipeline scripts.
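A minimal sketch of such a test, following the JenkinsPipelineUnit README (class and file names are illustrative, and the API may differ between versions):

```groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub out pipeline steps the script calls
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "sh: $cmd" }
    }

    @Test
    void pipelineRunsCleanly() {
        runScript('Jenkinsfile')   // load and execute the pipeline script
        printCallStack()           // inspect which steps ran, with what arguments
        assertJobStatusSuccess()
    }
}
```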
Just based on the small amount of context from your question, I can share what I've learned. YMMV.
Source-control the library and make your changes on a branch. You might need another repo to act as a guinea pig to test the new changes: add #branchname to the library declaration in its Jenkinsfile (see the sketch after this list).
Run and test your code on a Jenkins instance running on your local machine. It may not match your production instance exactly, but it's much closer to your working copy, so the feedback loop is faster. Also, you don't have to push your changes to test: commit locally, test, un-commit, change, commit, test, repeat. When you are done, or need to test on the production instance, push.
Use the Script Console as a Groovy scratchpad to test code in. It's missing a lot of plugins/features, but it's great for throwing together some test code to make sure the basics work.
Create small throwaway pipelines to test bits of functionality that you want to iterate quickly on before putting it in a real pipeline, and that won't work in the script console. This lets you focus on the functionality you're building and not worry about the other bits.
Important note: I know of at least one function that doesn't work in the script console, but works fine in a real pipeline: readJSON. It throws an error as if you're doing something wrong but it's just broken in the console. I'm sure there are others.
I'll come back and add more as I think of it.
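Here is the branch-pinning sketch mentioned in the first point above; the library and step names are illustrative:

```groovy
// Jenkinsfile of the "guinea pig" repo, pinning the shared library to a
// development branch instead of the default version
@Library('my-shared-library@feature/my-change') _

pipeline {
    agent any
    stages {
        stage('Smoke test') {
            steps {
                mySharedStep()   // a step exported by the library under development
            }
        }
    }
}
```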

How can I run a single Jenkins job with previously defined rules?

I'd like to get a hint on how (with which plugin) it is possible to run a SINGLE Jenkins job in a way the user chooses. The user MUST be able to choose the job he/she wants to run and choose the rule of execution:
E.g.:
Create only jar files;
Create jars and send them over ssh
Create jars, generate documentation, etc...
I've found a few plugins (Artifactory, Release plugin) but it seems they don't support such logic.
I know that such a thing can be implemented by creating several jobs, but this would require additional disk space.
Many Thanks!
In order to solve my issue, I've decided to create a few Jenkins jobs with the same custom workspace. That way, when an IT engineer runs any of these "connected" jobs (which share the same workspace), the workspace is updated (have a look at the CVS rules for your job), and that's why we avoid wasting space.
Additionally, each job's behaviour can be configured easily: the set of rules (shell scripts, Gradle, batch, etc.) and their sequence, in order to achieve the desired result.
The last advantage, but not the least one, is that security (access control) is still very easy to configure.
I think that is the correct way.
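Alternatively, the same "one job, several execution rules" idea can be expressed as a single parameterized pipeline with a choice parameter. A sketch (stage contents and commands are illustrative):

```groovy
pipeline {
    agent any
    parameters {
        choice(name: 'RULE',
               choices: ['jar-only', 'jar-and-ssh', 'jar-docs'],
               description: 'How this run should behave')
    }
    stages {
        stage('Build jars') {
            steps { sh './gradlew jar' }
        }
        stage('Send over ssh') {
            when { expression { params.RULE == 'jar-and-ssh' } }
            steps { sh 'scp build/libs/*.jar user@host:/deploy/' }
        }
        stage('Generate docs') {
            when { expression { params.RULE == 'jar-docs' } }
            steps { sh './gradlew javadoc' }
        }
    }
}
```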

Is there a way for one ant script to check if another is already running?

We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause... problems, such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in ant to directly check if another ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better, because a tempfile really is unique. The first thing your script does is use mkdir to create a shared resource directory, but only if the directory does not exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
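A sketch of that mechanism as an Ant buildfile (target and directory names are illustrative; note there is still a small race between the existence check and the mkdir, so this is cooperative locking only):

```xml
<project name="locked-build" default="build" basedir=".">

    <!-- Fail fast if another build already holds the lock directory -->
    <target name="acquire-lock">
        <available file="${basedir}/.build-lock" type="dir" property="lock.exists"/>
        <fail if="lock.exists" message="Another build appears to be running."/>
        <mkdir dir="${basedir}/.build-lock"/>
    </target>

    <target name="release-lock">
        <delete dir="${basedir}/.build-lock"/>
    </target>

    <target name="build" depends="acquire-lock">
        <!-- ...the real build steps go here... -->
        <antcall target="release-lock"/>
        <!-- To release the lock when a step fails, wrap the steps in
             ant-contrib's trycatch/finally instead of relying on antcall. -->
    </target>
</project>
```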
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (the project formerly known as Hudson).
[Update]
Perhaps "Do I have a way to check the existence of a directory in Ant (not a file)?" would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)
