I am working on Jenkins pipelines for two projects. I built some customized alert messages via Slack and email. I expect my code can be used for my projects and also by several other projects, so I am thinking of making it a small library so that others don't need to ask me every time they onboard a Jenkins pipeline job. I was thinking of using a shared library with @Library() for others to use, as described in the docs.
However, since my library depends on the Slack and Email plugins, it will not be usable when those plugins are not installed.
My question is: is there a way to declare dependencies in Pipeline Shared Libraries, or do I have to make a Jenkins plugin to address this issue?
As far as I know there is no way to declare dependencies on plugins (or on a Jenkins version) right now. Instead, what you can do is add a check for the plugin and give a proper error to the user of your library:
if (Jenkins.getInstance().getPluginManager().getPlugin('slack') == null) {
    // look the plugin up by its short name and fail fast with a readable message
    error 'This shared library function requires the Slack plugin!'
}
Put this at the start of your shared library script, before any uses of the plugin. Note, though, that this gets tricky if you need to import classes from a plugin (since imports go first in the Groovy file). What you do in that situation is make two scripts: the first script has the check and is the one the user calls; the second contains all the logic and imports, and is called by the first script once the checks pass.
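For illustration, here is a minimal sketch of that two-script layout; the vars/notifySlack.groovy entry point, the org.example.SlackNotifier class, and the channel/message arguments are all hypothetical names, and only the second file touches the plugin-provided slackSend step:

// vars/notifySlack.groovy -- the entry point users call; no plugin imports here
def call(Map args = [:]) {
    if (Jenkins.getInstance().getPluginManager().getPlugin('slack') == null) {
        error 'notifySlack requires the Slack plugin to be installed'
    }
    // delegate to the class that actually uses the plugin
    new org.example.SlackNotifier(this).send(args.channel ?: '#builds', args.message ?: '')
}

// src/org/example/SlackNotifier.groovy -- all plugin-specific logic lives here
package org.example

class SlackNotifier implements Serializable {
    def steps
    SlackNotifier(steps) { this.steps = steps }

    void send(String channel, String message) {
        // slackSend is the step contributed by the Slack plugin
        steps.slackSend(channel: channel, message: message)
    }
}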
Related
I have a Jenkins Shared Library which provides functions to checkout, build, etc. Is it possible to convert the entire library into a plugin, making it more portable and also protecting the code?
Or is there any other way by which I might not expose the code but still give users the ability to access all the functionality by calling the functions?
Shared library logic doesn't translate directly to a plugin.
Usually Jenkins admins are looking for a solution the other way around: how to convert a plugin to a shared library. A plugin has to be installed by an admin, requires a restart every time there is an update to the code, and can potentially bring your whole master down in case of an error.
You probably have good reasons to want to hide the library code from your users. Maybe you can put your library in a repository where users don't have read access. To use the library in a build, only the credentials stored in Jenkins need access to the repository, not the users themselves.
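As a sketch of what that looks like from the user's side (the library name my-shared-lib and the step mySharedStep are placeholders), the Jenkinsfile only references the library by the name the admin configured, and Jenkins checks it out with its own credentials:

// Jenkinsfile -- the user never needs read access to the library repository
@Library('my-shared-lib') _   // placeholder name configured under Global Pipeline Libraries

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                mySharedStep()   // hypothetical step provided by the library
            }
        }
    }
}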
Following the remarkably terse docs here:
https://www.jenkins.io/doc/book/pipeline/shared-libraries/#using-third-party-libraries
I am trying to use @Grab to access a third-party library from Jenkins.
@Grab(group='org.jsoup', module='jsoup', version='1.13.1')
is in a Groovy class in a library. The library is specified in Jenkins/configure
under Global Pipeline Libraries
Sharable libraries available to any Pipeline jobs running on this
system. These libraries will be trusted, meaning they run without
“sandbox” restrictions and may use @Grab.
I am not using the default branch of the library, but I am not sure whether that is relevant to my problem.
When running the pipeline, I get the following error:
java.lang.SecurityException: Annotation Grab cannot be used in the sandbox.
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.RejectASTTransformsCustomizer
Any hints would be deeply appreciated
This article in Russian describes the details: https://habr.com/ru/post/338032/
Summary: you need to create a separate library in SCM, configure it as a trusted global library, and use @Grab there.
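A minimal sketch of such a class, with an illustrative package and class name, placed in a library configured under Global Pipeline Libraries (and therefore trusted, so @Grab is allowed):

// src/org/example/HtmlParser.groovy in the trusted shared library
package org.example

@Grab(group = 'org.jsoup', module = 'jsoup', version = '1.13.1')
import org.jsoup.Jsoup

class HtmlParser implements Serializable {
    static String titleOf(String html) {
        // parse an HTML string and return the text of its <title> element
        return Jsoup.parse(html).title()
    }
}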
I'm using a shared Groovy library in my pipelines. I'm finding that whenever I merge to my library, a subset of the jobs (but not all) that use the library are triggered.
I've looked at the shared library configuration and verified that "Include @Library changes in job recent changes" is not checked. I've combed through logs looking for clues; I'm finding that seemingly random jobs get triggered by the merge, but I haven't been able to identify why these particular jobs get run.
My current thought is that /github-webhook/ is just triggering too many jobs.
I'm using Jenkins 2.82 and 2.9 of the groovy libraries plugin
https://wiki.jenkins.io/display/JENKINS/Pipeline+Shared+Groovy+Libraries+Plugin
Further information:
If I delete one of the jobs that is getting triggered by the shared library and recreate it, then it will no longer rebuild when the shared library is merged. Running a diff of the old config.xml vs the new one isn't helping a ton. The workflow-job#$id and other plugin versions change, but that seems unrelated.
I had the exact same behaviour you described in your question. In my case, disabling and enabling all jobs fixed this issue. Run the following code on the "Script Console":
// Toggle each top-level job's disabled flag, saving after each change;
// re-saving the configuration was enough to stop the spurious builds in my case.
for (item in Jenkins.instance.items) {
    item.disabled = true
    item.save()
    item.disabled = false
    item.save()
}
The shared library plugin, workflow-cps-global-lib, has a fix for this in version 2.9:
JENKINS-41497 - allow excluding shared libraries from changelogs (and
therefore from SCM polling as well) via global configuration option
and/or @Library(value="some-lib@master", changelog=false).
Simply configure it at the library or pipeline level to disable this behavior.
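For example, at the pipeline level (the library name is a placeholder):

// Jenkinsfile -- opt this pipeline out of shared-library changelogs,
// and therefore out of SCM-polling triggers caused by library commits
@Library(value = 'my-shared-lib@master', changelog = false) _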
I'm adding to, and maintaining, Groovy files to build a set of repositories; previously they were built with freestyle Jenkins jobs. I maintain some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know to test and debug those library files is to push the changes to a git branch. I know about the "replay" trick to test the main Jenkinsfile. Is there some approach I've missed to do something similar for library code?
If you set up a job to load the shared library instead of relying on a globally set up shared library (you can have both going, for this particular job), then it is possible to hit "replay" and have all your shared library steps show up as editable files.
This can be helpful in iterative development without a million commits.
EDIT: the same approach also works on an Organization job in Jenkins.
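One way to set that up, if you prefer to keep it in the Jenkinsfile itself, is to load the library dynamically with the library step instead of (or alongside) the global configuration; the identifier, repository URL, and credentials ID below are placeholders:

// Load the library from the pipeline; point the branch at whatever you are
// working on, and "Replay" will then show the library files as editable.
library identifier: 'my-lib@my-feature-branch',
        retriever: modernSCM([$class: 'GitSCMSource',
                              remote: 'https://example.com/my-lib.git',
                              credentialsId: 'my-lib-creds'])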
There is also the third-party Jenkins Pipeline Unit testing framework.
While it does not yet cover all features of Pipeline, it is well documented and maintained, so I would consider starting to use it (once I revisit our Jenkins setup).
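A minimal sketch of such a test, assuming a hypothetical vars/sayHello.groovy step in the library that just calls echo (the class and step names are illustrative):

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class SayHelloTest extends BasePipelineTest {

    @Override
    @Before
    void setUp() throws Exception {
        super.setUp()
        // stub the echo pipeline step so no running Jenkins instance is needed
        helper.registerAllowedMethod('echo', [String]) { String msg -> println msg }
    }

    @Test
    void greetsByName() {
        def step = loadScript('vars/sayHello.groovy')   // path inside the library checkout
        step.call('world')
        printCallStack()   // print the recorded step invocations for inspection
    }
}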
I'm planning to reuse the same set of build parameters (like 10 of them) across dozens of jobs.
One way is to create a job and clone it. But what if I want to change the build parameters at a later time, when I already have hundreds of similar jobs? Editing all of them one by one could be a nightmare.
Is there any way of managing parameterized projects?
As a solution to this problem I would imagine some option or plugin where I can define a global set of parameters and reuse them across my jobs.
You could try using the Configuration Slicing Plugin. This plugin allows you to perform mass configuration (including parameters) for a group of jobs.
Alternatively, you could try writing a Groovy management script to set the group of parameters on all those jobs at once. A good starting point would be this; note that it just prints the current jobs' parameters, so you would have to alter the script to do what you want.
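For orientation, here is a sketch of that kind of Script Console script; it only lists each job's current parameter definitions, and rewriting them in bulk would be the next step:

// Script Console sketch: list each job's current parameter definitions
import hudson.model.Job
import hudson.model.ParametersDefinitionProperty

Jenkins.instance.allItems(Job).each { job ->
    def prop = job.getProperty(ParametersDefinitionProperty)
    if (prop != null) {
        println job.fullName
        prop.parameterDefinitions.each { p ->
            println "  ${p.name} (${p.class.simpleName})"
        }
    }
}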
Unfortunately, the Inheritance Plugin mentioned below is not maintained anymore; it is buggy and has some limitations, such as Trigger Parameterized Builds not being usable in parent projects. It is also difficult to override specific configuration, and it does not play well with the Folders plugin.
Alternative ways are:
Job DSL Plugin, which lets you define jobs with Groovy DSL scripts that act as templates (a "seed" job) and then generate your jobs from those scripts (read the tutorial; see the sketch after this list). It's actively maintained on GitHub. For more advanced solutions you may use Pipelines instead.
Template Project Plugin, which allows you to set up a template project that has all the settings you want to share across your other jobs (by selecting "use all the publishers from this project" and picking the template project).
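Here is a minimal seed-job sketch of the Job DSL approach, with illustrative job names and parameters, declaring the shared parameter set once and stamping it onto several generated jobs:

// Job DSL seed script: one shared parameter block, many generated jobs
def sharedParameters = {
    stringParam('ENVIRONMENT', 'staging', 'Target environment')
    booleanParam('RUN_TESTS', true, 'Run the test suite')
    stringParam('GIT_BRANCH', 'master', 'Branch to build')
}

['project-a', 'project-b', 'project-c'].each { name ->
    job(name) {
        parameters sharedParameters
        steps {
            shell('echo "Building for $ENVIRONMENT on $GIT_BRANCH"')
        }
    }
}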
How about EZ Templates Plugin (check also GitHub page)?
Just remember that when you create a template, that job shouldn't actually do anything other than be a template (meaning: you should not run that job), and put only the minimum common config there, nothing else, or things can get messy. That way you shouldn't have any problems.
Using the Parameterized Trigger Plugin you can save the properties in a properties file and pass them across jobs. Then you can override them, or use them as-is, in the subsequent jobs.
Also this would help: Retrieve parameters from properties file.
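As an illustration of that flow (the job names, file name, and properties are placeholders), the upstream job writes a properties file and the downstream trigger reads it; in Job DSL form the configuration would look roughly like this:

// Job DSL sketch of a Parameterized Trigger handing a properties file downstream
job('upstream-job') {
    steps {
        shell('echo "ENVIRONMENT=staging" > build.properties')
    }
    publishers {
        downstreamParameterized {
            trigger('downstream-job') {
                parameters {
                    propertiesFile('build.properties')   // pass the saved properties downstream
                }
            }
        }
    }
}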
You could also consider using the Pipeline Global Library.
This plugin adds that functionality by creating a "shared library script" Git repository inside Jenkins. Every Pipeline script in your Jenkins installation sees these shared library scripts on its classpath.
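For the parameter-reuse case, one pattern (the step and parameter names are illustrative) is a library step that declares the common parameters, so every pipeline that calls it stays in sync:

// vars/standardParameters.groovy in the shared library
def call() {
    // declares the shared parameter set on whichever job calls this step
    properties([
        parameters([
            string(name: 'ENVIRONMENT', defaultValue: 'staging', description: 'Target environment'),
            booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test suite')
        ])
    ])
}

// In each Jenkinsfile:
//   @Library('my-shared-lib') _
//   standardParameters()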
Try the Inheritance Plugin, which can help to solve this problem. The plugin description reads:
Instead of having to define the same property multiple times across as many projects; it should be possible for many projects to refer to the same property that is defined only once. In other words, everything that is defined multiple times, but used in the same way, should be defined only once and simply referred to many times.
So to define the property only once across multiple jobs, you need to:
Create a new job as Inheritance Project.
You may set it as an abstract project and choose "This build is parameterized".
Add an Inheritable Parameter and set it as Overwritable.
After saving, set this project as parent, so parameters can be inherited.
Check the Jenkins Inheritance Plugin Tutorial Video for overview of the main features. See also GitHub page.
Unfortunately the plugin is not well maintained and it can be buggy when used with the latest Jenkins (e.g. #22885).
You may also manage this using a single property file which can be injected into all the jobs.
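As a sketch (the file location and keys are placeholders), such a file lives somewhere every job can read, for example in a shared repository, and each job injects its entries as environment variables or parameters:

# shared-build.properties -- single source of truth read by every job
ENVIRONMENT=staging
RUN_TESTS=true
GIT_BRANCH=master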