Jenkins shared Groovy library git merge triggering jobs

I'm using a shared Groovy library in my pipelines. I'm finding that whenever I merge to my library, a subset of jobs (but not all) that use the library are being triggered.
I've looked at the shared library configuration and verified that "Include @Library changes in job recent changes" is not checked. I've combed through logs looking for clues; seemingly random jobs get triggered by the merge, but I haven't been able to identify why these particular jobs get run.
My current thought is that /github-webhook/ is just triggering too many jobs.
I'm using Jenkins 2.82 and version 2.9 of the Shared Groovy Libraries plugin:
https://wiki.jenkins.io/display/JENKINS/Pipeline+Shared+Groovy+Libraries+Plugin
Further information:
If I delete one of the jobs that is getting triggered by the shared library and recreate it, it will no longer rebuild when the shared library is merged. Running a diff on the old config.xml vs the new one isn't helping much. The workflow-job@$id and other plugin versions change, but that seems unrelated.

I had the exact same behaviour you described in your question. In my case, disabling and re-enabling all jobs fixed the issue. Run the following code in the Script Console:
// Toggle every top-level job off and on again so Jenkins re-registers its triggers
for (item in Jenkins.instance.items) {
    item.disabled = true
    item.save()
    item.disabled = false
    item.save()
}

The shared library plugin, workflow-cps-global-lib, has a fix for this in version 2.9:
JENKINS-41497 - allow excluding shared libraries from changelogs (and therefore from SCM polling as well) via global configuration option and/or @Library(value="some-lib@master", changelog=false).
Simply configure it at the library or pipeline level to disable this behavior.
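For example, the per-pipeline form in a Jenkinsfile looks like this (the library name is a placeholder):
// load the library without contributing its commits to this job's changelog
@Library(value = 'some-lib@master', changelog = false) _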

Related

Is there a trick to debug shared groovy libraries without pushing?

I'm adding to, and maintaining, Groovy files to build a set of repositories; previously they were built with freestyle Jenkins jobs. I keep some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know how to test and debug those library files is to push the changes to a git branch. I know about the "replay" trick to test the main Jenkins file. Is there some approach I've missed to do something similar for library code?
If you set up the job to load the shared library itself, instead of relying on a globally configured shared library (you can have both active for this particular job), then you can hit "Replay" and all of your shared library steps show up as editable files.
This can be helpful for iterative development without a million commits.
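If your library lives in its own repository, you can also load it dynamically from the script with the library step, which should likewise keep the code editable under Replay; the name and URL below are made up:
// dynamically load a specific branch of the library for this run only
library identifier: 'my-shared-lib@my-feature-branch',
        retriever: modernSCM([$class: 'GitSCMSource',
                              remote: 'https://example.com/org/my-shared-lib.git'])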
There is also the third-party Jenkins Pipeline Unit testing framework.
While it does not yet cover all pipeline features, it is well documented and maintained, so I would consider using it (once I revisit our Jenkins setup).
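A minimal sketch of such a test, assuming JenkinsPipelineUnit's BasePipelineTest and a library step at vars/doTest.groovy on the test's script roots (names and paths are illustrative):

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class DoTestStepTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // stub the pipeline 'echo' step so the script can run outside Jenkins
        helper.registerAllowedMethod('echo', [String], { String msg -> println msg })
    }

    @Test
    void callsEcho() {
        // load the shared library step and invoke it like a pipeline would
        def step = loadScript('vars/doTest.groovy')
        step.call('test running...')
    }
}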

Jenkins pipeline shared library vs plugin

I am working on Jenkins pipelines for two projects. I built some customized configuration for alert messages via Slack and email. We expect my code to be used by my projects and also several other projects, so I am thinking of making it a small library so that others don't need to ask me every time they onboard a Jenkins pipeline job. I was thinking of using a shared library with @Library() for others to use, as described in the docs.
However, since my lib depends on the existence of the Slack and email plugins, it will not be usable when these plugins are not installed.
My question is: is there a way to declare dependencies in pipeline shared libraries, or do I have to make a Jenkins plugin to address this issue?
As far as I know there is currently no way to declare dependencies on plugins (or on a version of Jenkins). Instead, what you can do is add a check for the plugin and give a proper error to the users of your library:
// Fail fast if the Slack plugin is not installed (its plugin ID is "slack")
if (Jenkins.getInstance().getPluginManager().getPlugin("slack") == null) {
    error "This shared library function requires the Slack plugin!"
}
Put this at the start of your shared library script, before any uses of the plugin. Note, though, that this gets tricky if you need to import classes from a plugin (since imports go first in the Groovy file). What you do in that situation is make two scripts: the first script has the check and is the one the user calls; the second contains all the logic and imports, and is called by the first script once the checks pass.
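A sketch of that two-script layout, with hypothetical step names:

// vars/notifySlack.groovy -- the entry point users call; deliberately free of plugin imports
def call(String message) {
    if (Jenkins.getInstance().getPluginManager().getPlugin('slack') == null) {
        error 'notifySlack requires the Slack plugin!'
    }
    // delegate to the script that holds the plugin imports and real logic
    notifySlackImpl(message)
}

vars/notifySlackImpl.groovy would then contain the imports and the actual Slack calls.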

How do I set up a Jenkins Pipeline global library using perforce as the SCM?

I've spent many hours on this without any success at all. According to this I should be able to use any available SCM, but I don't know how to map the paths, where (if anywhere) to insert the ${library.RegLib.version}, or what workspace name to use.
I have a library set up as per the above-mentioned docs:
<root>/src/org/somelib/MyLib.groovy
which contains:
package org.registration

def doTest() {
    echo "test running..."
}
I've tried many different things but nothing works. I've also tried restarting Jenkins, as mentioned here. No change.
My build reports:
Loading library MyLib@1
java.lang.ArrayIndexOutOfBoundsException: 1
at org.jenkinsci.plugins.p4.tasks.AbstractTask.setEnvironment(AbstractTask.java:106)
at org.jenkinsci.plugins.p4.PerforceScm.checkout(PerforceScm.java:391)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:109)
at org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever.doRetrieve(SCMSourceRetriever.java:107)
at org.jenkinsci.plugins.workflow.libs.SCMRetriever.retrieve(SCMRetriever.java:63)
at org.jenkinsci.plugins.workflow.libs.LibraryAdder.retrieve(LibraryAdder.java:150)
at org.jenkinsci.plugins.workflow.libs.LibraryAdder.add(LibraryAdder.java:131)
at org.jenkinsci.plugins.workflow.libs.LibraryDecorator$1.call(LibraryDecorator.java:99)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:429)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:392)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:221)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: Loading libraries failed
"Default version" is set to 1 because there's only been one commit. I've also tried #1. I don't know whether to map specific files or the top-level directory. If I remove the default version the build fails and complains that I haven't set a version. It's supposed to be optional but clearly isn't.
I've also tried using the vars directory
<root>/vars/doTest.groovy
which contains:
def call(msg) {
    echo msg
}
but I presume that also requires the library to be loaded. The docs are unclear about that.
So...
Will this work with perforce?
How do I map the paths to make it work?
How do I make the code in vars accessible? Is that loaded as part of the overall library?
Is there an error somewhere in my code?
Many thanks.
Install the Pipeline Shared Groovy Libraries plugin.
The configuration is in Manage Jenkins -> Configure System -> Global Pipeline Libraries.
Set the retrieval method to Legacy SCM and add the repository.
Tick Load implicitly to load the library in every build.
Put the Groovy files in vars/yourGroovy.groovy and call them from your Jenkinsfile:
yourGroovy()
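For instance, a minimal vars/yourGroovy.groovy could look like this (contents illustrative):
// vars/yourGroovy.groovy -- exposed to pipelines as the step yourGroovy()
def call() {
    echo 'hello from the shared library'
}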
This seems to be an open issue with the p4 plugin, which is unable to deal with Perforce checkouts at locations other than the workspace root:
https://issues.jenkins-ci.org/browse/JENKINS-40055
https://issues.jenkins-ci.org/browse/JENKINS-36243
Edit: You may be able to get this to work using older plugin versions, according to the reporter of the first issue:
The crash is not present in version 2.4 of workflow-cps-global-lib, it
started to happen in version 2.5 only.
This is really late, but I was wondering if you found a solution.
In Amityo's answer you commented that your Perforce source path is //<prod>/trunk/src/apps/jenkinslib#${library.RegLib.version}/..., where ${library.RegLib.version} = 1 if no other version is explicitly specified in pipeline.
I think Jenkins will literally look for a folder named jenkinslib#1, which it won't find since your folder is just named jenkinslib.
I don't know how you would set up your structure to support different versions, but maybe having just //<prod>/trunk/src/apps/jenkinslib/... as your source path in the map might work, even though the config page tells you to add library.RegLib.version.
I would've commented all this on Amityo's post instead but I don't have enough reputation to do so.
In reply to @HS10: I did, and I've been meaning to update this for the benefit of others for ages, but everything else in life seems to become higher priority. Since you've asked, here's what I did.
In Jenkins/Configuration, under Global Pipeline Libraries I set the following:
Specifically, provide a Name and set Default version to head. Set the Retrieval method to Legacy SCM; Perforce doesn't have Modern SCM support yet. Under Source Code Management select Perforce Software. Note that this is the p4 plugin, not the old one listed as Perforce; I suspect that it's important to use the version written by Perforce themselves. Select a Credential that you have configured and provide a matching workspace name and mapping. I may have had that wrong earlier, I don't know. Other settings are at your discretion. The library directory structure is as per the docs. I did think for a while that the workspace name had to be _global_lib, but recent experiments appear to have disproved that.
In your pipeline, import the library like this:
@Library('plib') _
// do something
You should now have a working library.
I think I had this wrong earlier as well. Note that the underscore is important. See the Global Library docs for more details. Getting this working caused me a lot of pain, so I hope this saves someone from a similar experience.

How can I share source code across many nodes in a Jenkins pipeline job?

I have a build that's currently using the old build flow plugin that I'm trying to convert to pipeline.
This build can be massively parallelized (many units of work can run on many different nodes) but we only want to extract the source code once at the beginning, preferably with the Pipeline script from SCM option. I'm at a loss to understand how I can share the source extract (which apparently is on the master) with all of the "downstream" nodes that will be used by the pipeline script.
For build flow we extracted to a well-known location on a shared file system and all of the downstream jobs invoked by the flow were passed (or could derive) that location. That always felt icky & I was hoping that pipeline would have solved this problem but I can't find anything to suggest that it has. What am I missing?
I believe the official recommendation for this is to make bundles of the source and then use "stash" and "unstash" to make them available to deeper steps of your pipeline script.
See https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
Keep in mind that this doesn't do anything to help with line-endings. If you have builds that span OSs with different line endings you either need to make OS-specific stashes, or just checkout to a safe label in each downstream step.
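A minimal sketch of that pattern; the node labels and build commands are made up:

node('checkout-node') {
    checkout scm
    // bundle the workspace so downstream nodes can reuse the same source
    stash name: 'source', includes: '**/*'
}
parallel(
    linux: {
        node('linux') {
            unstash 'source'
            sh './build.sh'
        }
    },
    windows: {
        node('windows') {
            unstash 'source'
            bat 'build.bat'
        }
    }
)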
After further research it seems like the External Workspace Manager Plugin does what I'm looking for.

How to create complex value stream with multiple pipelines with Jenkins WorkFlow

How do you implement a complex value stream with multiple pipelines in Jenkins Workflow? Similar to what you can do with Go CD: How do I do CD with Go?: Part 2: Pipelines and Value Streams.
For a distributed system I would like each dev team and operations team to start with their own delivery pipeline. One change needs to trigger only the pipeline of the team that made the change. It then needs to trigger a new pipeline that takes the latest successful artifacts from each of the teams' pipelines and moves on from there. This means that the artifacts from the other teams are not rebuilt or retested, as they were not changed. After the fan-in we can run a set of automated tests to verify the correct behaviour of the distributed system with the change.
In the documentation I only find that you can pull from multiple VCSs, but I assume everything is then built and tested with every change, which is something I want to avoid.
If each delivery pipeline is in its own Jenkins job, how can I visualize the complete pipeline, and what is the best way to pull in the last successful artifacts or versions from the other pipelines?
There is no direct equivalent in Jenkins for value streams, and Workflow jobs do not behave any differently in that respect: you can have upstream jobs and downstream jobs correlated with triggers (in this case the build step, or the core ReverseBuildTrigger), and use (for example) the Copy Artifact plugin to transfer artifacts to downstream builds. Similarly, you could use an external repository manager as the “source of truth” and define job triggers based on snapshots pushed to the repository.
That said, part of the purpose of Workflow is to avoid the need for complex job chains in most situations¹, since it is usually easier to reason about, debug, and customize a single script with standard control flow operators and local variables than to manage a set of interdependent jobs. If the main problem with a single flow is that you need to avoid rebuilding unmodified parts, one solution would be to use something like JENKINS-30412 to check the changelog of particular repository checkouts and skip build steps if empty. I think there would be more features needed to make such a system work in the general case that workspaces are clobbered or discarded by other builds.
¹One case where you definitely need separate jobs is that for security reasons the teams contributing to different projects must not be able to see one another’s sources or build logs.
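For the job-chaining variant, a hedged sketch of a downstream pipeline that triggers an upstream job and pulls in exactly the artifacts it produced (the job name is made up, and the copyArtifacts step assumes the Copy Artifact plugin):

// trigger the upstream pipeline and wait for it to finish
def upstream = build job: 'team-a-pipeline', wait: true
node {
    // copy the artifacts of exactly that build into this workspace
    copyArtifacts projectName: 'team-a-pipeline', selector: specific("${upstream.number}")
}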
Assuming that each of your dev teams works on a different module of your project and „One change needs to trigger only the pipeline of the team that made the change“ I'd use Git Submodules:
Submodules allow you to keep a Git repository as a subdirectory of another Git repository.
with one repo, that becomes a submodule of a main module repo, for each team. This will be transparent to the teams since they just work on their designated repos only.
The main module is also the aggregator project for your module projects in terms of the build tool. So, you have the options:
to build each repo/pipeline individually or
to build the whole (main) project at once.
A build pipeline comprising one or more build jobs is associated with every team/repo/module.
The main pipeline is merely a collection of downstream jobs which represent the starting points of the team/repo/module pipelines.
The build triggers can be any of manually, timed or on source changes.
A decision also has to be made:
whether you version your modules individually, such that other modules depend on release versions only.
Advantages:
Others rely on released, usually more stable versions.
Modules can decide which version of a dependency they want to use.
Disadvantages:
Releases have to be prepared for each module.
It may take longer until the latest changes are available to others.
Modules have to decide which version of a dependency they want to use. And they have to adapt it every time they need functionality added in a newer version.
or whether you use one version for the entire project (which the modules then inherit): ...-SNAPSHOT during the development cycle, and a release version when releasing the project.
In this case, if there are modules that are essential for others, e.g. a core module, a successful build of it should also trigger a build of the dependent modules, so that incompatibilities are recognized as early as possible (see the trigger sketch after this list).
Advantages:
Latest changes are immediately available to others.
A release is prepared for the whole project only once it is to be delivered.
Disadvantages:
Latest changes immediately available to others may introduce not so stable (snapshot) code.
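On the point about a core module triggering its dependents, a sketch of such wiring in a dependent module's pipeline, using the core upstream trigger (the job name is made up):

properties([
    pipelineTriggers([
        // rebuild this module whenever the core module builds successfully
        upstream(upstreamProjects: 'core-module', threshold: hudson.model.Result.SUCCESS)
    ])
])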
Re „How can I visualize the complete pipeline“
I'm not aware of any plugin that can do this with Workflows at the moment.
There's the Build Graph View Plugin, which was originally created for Build Flows, but it's more than two years old now:
Downstream builds are identified by DownStreamRunDeclarer extension point.
Default one is using Jenkins dependencyGraph and UpstreamCause and as such can detect common build chain.
build-flow plugin is contributing one to render flow execution as a graph
some Jenkins plugins may later contribute dedicated solutions.
(You know, „may“ and „later“ often become will not and never in development. ;)
There's the Build Pipeline Plugin but it apparently is also not suitable for Workflows:
This plugin provides a Build Pipeline View of upstream and downstream connected jobs [...]
Re „way to pull in the last successful artifacts“
Apparently it's not that smooth with Gradle:
By default, Gradle does not define any repositories.
I'm using Maven and there exist local and remote repositories where the latter can also be:
[...] internal repositories set up on a file or HTTP server within your company, used to share private artifacts between development teams and for releases.
Have you considered using a binary repository manager like Artifactory or Nexus?
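As a concrete illustration of wiring in such an internal repository, a minimal Gradle snippet (the URL is made up):

// build.gradle -- resolve shared artifacts from an internal repository manager
repositories {
    maven {
        url 'https://repo.example.com/releases'
    }
}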
From what I have seen, people are moving towards smaller, independent pieces of code delivery rather than monolithic deployments. But clearly, there will still be dependencies between different components. At the very least, for example, if you had one script that provisioned your infrastructure and another that built and deployed your app, you would want to be sure your infrastructure update script was run before your app deployment. On the other hand, your infrastructure does not depend on deploying your app code - it can be updated at its own pace, so long as it ideally passes some testing.
As mentioned in another post, you really have two options to accomplish this dependency:
Have a single pipeline (workflow script) that checks out code from both repos and puts them through the same pipeline simultaneously. Any change to one requires the full boat pipeline for everything.
Have two pipelines and this would allow each to go at its own pace independent of what the other does. This isn't a problem for the infrastructure code, but it very well could be for the app code. If you pushed your app code to production without the infrastructure update having happened first, the results may not be pleasant.
What I've started to do with Jenkins Workflow is establish a dependency between my flows. Basically, I declare that one flow is dependent on a particular version (in this case, simply BUILD_NUM) of another, so before I do a production deploy I verify that the last successful build of the other pipeline has completed first. I'm able to do this using the Jenkins API as part of my flow script, which waits for that build or greater to succeed, like so:
import hudson.EnvVars
import hudson.model.*

// Wait until build #16 (or later) of the other pipeline has succeeded
int independentBuildNum = 16
waitUntil {
    verifyDependentPipelineCompletion("FLDR_CM/WorkflowDepedencyTester2", independentBuildNum)
}

boolean verifyDependentPipelineCompletion(String jobName, int buildNum) {
    def hi = jenkins.model.Jenkins.instance
    Item dep2 = hi.getItemByFullName(jobName)
    hi = null // don't keep the non-serializable Jenkins instance in CPS-serialized state
    def jobs = dep2.getAllJobs().toArray()
    def onlyJob = jobs[0] // always 1 job...I think?
    def targetedBuild = onlyJob.getLastSuccessfulBuild()
    EnvVars me = targetedBuild.getCharacteristicEnvVars()
    def es = me.entrySet()
    int targetBuildNum = 0
    def vars = es.iterator()
    // find the BUILD_ID of the last successful build
    while (vars.hasNext()) {
        def envVar = vars.next()
        if (envVar.getKey().equals("BUILD_ID")) {
            targetBuildNum = Integer.parseInt(envVar.getValue())
        }
    }
    // true once the last successful build has reached the required number
    return buildNum <= targetBuildNum
}
Disclaimer that I am just beginning this process so I do not have much real-world experience with this yet, but will update this thread if I have more relevant information. Any feedback welcome.
