Is it possible to have a common repository for multiple pipeline jobs?

I have 11 jobs running on the Jenkins master node, all of which have a very similar pipeline setup. For now I have integrated each job with its very own Jenkinsfile that specifies the stages within the job, and all of them build just fine. But wouldn't it be better to have a single repo containing the files (preferably a single Jenkinsfile and some libraries) required to run all the jobs that share a similar pipeline structure, with the few per-job differences handled by a workaround?
If there is a way to accomplish this, please let me know.

Use a Shared Library to define common functionality. Your 11 Jenkinsfiles can then be as small as only a single call to the function implementing the pipeline.
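As a sketch of how small each Jenkinsfile can get (the library name `my-shared-lib` and the step `buildStandardPipeline` are hypothetical placeholders for your own names):

```groovy
// Jenkinsfile — the whole job is one call into the shared library.
@Library('my-shared-lib') _

// buildStandardPipeline is a custom step defined in the library's vars/
// directory; per-job differences are passed as parameters.
buildStandardPipeline(
    repo: 'job-one',
    deploy: true
)
```

Each of the 11 jobs keeps its own tiny Jenkinsfile like this, while the actual stage definitions live once in the library repo.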

Besides using a Shared Library, you can create a Groovy file with common functionality and call its methods via load(). See the documentation and example. This is an easier approach, but as your pipelines grow more complex it may impose some limitations.
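A minimal sketch of the load() approach (the file name `common.groovy` and the method `buildAndTest` are assumptions for illustration):

```groovy
// Jenkinsfile — load a plain Groovy file from the job's own repo.
node {
    checkout scm
    // load() evaluates the file and returns its script object;
    // common.groovy must therefore end with 'return this'.
    def common = load 'common.groovy'
    common.buildAndTest()
}
```

```groovy
// common.groovy — shared helper methods, no library plumbing needed.
def buildAndTest() {
    sh 'make && make test'
}
return this
```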

Related

Jenkins pipeline Continuous Integration

I have 20-30 projects that I am working on, each with its own Git repo, and each repo has several branches that do not depend on other projects. I am looking for a way to come up with a Jenkins Pipeline that accommodates all the projects in a CI/CD ecosystem, or do I need to create a separate pipeline for each repo?
Is there a way I can use one Jenkinsfile for all these projects?
How do you share data between pipelines if Module 3 is dependent on data coming from Modules 1 and 2?
Do I need to create 30 hooks/tokens if I have 30 projects?
I was able to create dependent build triggers between the first three, such that if A and B build then C will build, using the SCM polling option and build triggers.
Thanks in advance. Appreciate any help, feedback or suggestions.
You can use Shared Libraries in Jenkins pipelines. It is a fairly involved process that requires writing the libraries in Groovy.
As Pipeline is adopted for more and more projects in an organization,
common patterns are likely to emerge. Oftentimes it is useful to share
parts of Pipelines between various projects to reduce redundancies and
keep code "DRY".
Pipeline has support for creating "Shared Libraries" which can be
defined in external source control repositories and loaded into
existing Pipelines.
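On the library side, the shared step lives in the library repo's `vars/` directory. A sketch, assuming a step named `standardPipeline` (the name and the `buildCommand` parameter are illustrative, not prescribed):

```groovy
// vars/standardPipeline.groovy in the shared library repository.
// Each project's Jenkinsfile then reduces to: standardPipeline(buildCommand: '...')
def call(Map config = [:]) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // Fall back to a default when the project passes nothing.
                    sh config.buildCommand ?: 'make'
                }
            }
        }
    }
}
```

One library like this can serve all 20-30 repos; only the parameters differ per project.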

File-based conditional steps in Jenkins Pipeline (like Make)

I'm considering the option to use Jenkins Pipeline to build data ETL pipelines. For the moment, it sounds more attractive, more modern and simpler to use than Make/Makefile.
However, I don't understand whether the same Make/Makefile step-triggering behaviour is available. For example, say I have data2.xml built by the script csv2xml.sh, taking data1.csv as input: in a Makefile it's straightforward to declare that data2.xml must be rebuilt only if it doesn't exist or is older than data1.csv.
Is it possible to do the same in Jenkins Pipeline? Or am I looking at the wrong tool?
Such steps are available via sh/bat execution: you can use your make/Makefile as usual. Jenkins will just execute it and give you back the results; afterwards, inside the pipeline, you decide what to do with them, e.g. upload to a server or something similar.
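If you want the Make-style freshness check without a Makefile, you can express it directly in a pipeline stage. A sketch using the file names from the question; Pipeline has no built-in dependency tracking, so the timestamp comparison is delegated to the shell:

```groovy
stage('csv2xml') {
    steps {
        script {
            // '[ target -nt source ]' succeeds only if the target exists
            // and is newer than the source — mirroring Make's default rule.
            def upToDate = sh(
                returnStatus: true,
                script: '[ data2.xml -nt data1.csv ]'
            ) == 0
            if (!upToDate) {
                sh './csv2xml.sh data1.csv > data2.xml'
            }
        }
    }
}
```

If data2.xml is missing, the `-nt` test fails and the build step runs, just as in Make.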

Is there a trick to debug shared groovy libraries without pushing?

I'm adding to and maintaining Groovy files that build a set of repositories which were previously built with freestyle Jenkins jobs. I keep some code in shared libraries and, to be honest (mainly for DRY reasons), I want to do that more.
However, the only way I know to test and debug those library files is to push the changes on a git branch. I know about the "replay" trick to test the main Jenkinsfile. Is there some approach I've missed to do something similar for library code?
If you set up a job to load the shared library instead of relying on a globally set up shared library (you can have both going, for this particular job), then it is possible to hit "replay" and have all your shared library steps show up as editable files.
This can be helpful in iterative development without a million commits.
EDIT: Here's how that looks on an Organization job in Jenkins.
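A sketch of loading the library dynamically from the Jenkinsfile instead of relying on the global configuration, so that Replay exposes the library files as editable (the library name, branch, and repo URL are placeholders):

```groovy
// Jenkinsfile — dynamic library load; after this runs once, the Replay
// button shows every library file alongside the main script.
library identifier: 'my-shared-lib@feature-branch',
        retriever: modernSCM([$class: 'GitSCMSource',
                              remote: 'https://example.com/my-shared-lib.git'])
```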
There is also the third-party Jenkins Pipeline Unit testing framework.
While it does not yet cover all features of Pipeline, it is well documented and maintained, so I would consider starting to use it (once I revisit our Jenkins setup).
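A sketch of what a JenkinsPipelineUnit test can look like (class name, stubbed steps, and the exact helper methods shown are based on the framework's documented API; adjust to your version):

```groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends BasePipelineTest {
    @Before
    void setUp() {
        super.setUp()
        // Stub out pipeline steps the script calls, so no Jenkins is needed.
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "sh: $cmd" }
    }

    @Test
    void runsWithoutPushing() {
        runScript('Jenkinsfile')   // executes the local file — no git push
        assertJobStatusSuccess()
        printCallStack()           // shows which steps ran, with arguments
    }
}
```

This lets you iterate on library and Jenkinsfile logic in a plain JUnit run before anything reaches a branch.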

How can I share source code across many nodes in a Jenkins pipeline job?

I have a build that's currently using the old build flow plugin that I'm trying to convert to pipeline.
This build can be massively parallelized (many units of work can run on many different nodes) but we only want to extract the source code once at the beginning, preferably with the Pipeline script from SCM option. I'm at a loss to understand how I can share the source extract (which apparently is on the master) with all of the "downstream" nodes that will be used by the pipeline script.
For build flow we extracted to a well-known location on a shared file system and all of the downstream jobs invoked by the flow were passed (or could derive) that location. That always felt icky & I was hoping that pipeline would have solved this problem but I can't find anything to suggest that it has. What am I missing?
I believe the official recommendation for this is to make bundles of the source and then use "stash" and "unstash" to make them available to deeper steps of your pipeline script.
See https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
Keep in mind that this doesn't do anything to help with line endings. If your builds span OSes with different line endings, you either need to make OS-specific stashes or just check out to a safe label in each downstream step.
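A sketch of the stash/unstash pattern: check out once, then fan the sources out to the parallel nodes (node labels and build commands are placeholders):

```groovy
// Extract the source exactly once, on any node with SCM access.
node {
    checkout scm
    stash name: 'src', includes: '**'
}

// Every downstream branch pulls the same snapshot via unstash.
parallel linux: {
    node('linux') {
        unstash 'src'
        sh 'make'
    }
}, windows: {
    node('windows') {
        unstash 'src'
        bat 'build.cmd'
    }
}
```

Stashes are copied through the master, so for very large trees the External Workspace Manager approach mentioned below may scale better.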
After further research it seems like the External Workspace Manager Plugin does what I'm looking for.

Jenkins: a heavily branched chain of build jobs

We would like to set up Continuous Integration and Continuous Deployment processes on the base of Jenkins ecosystem. Currently we're trying to put together all the Jenkins build jobs we have (from sources to several endpoint processes launched on the testing server). There are three kinds of build/deployment processes in our case:
Building deb packages from C++ projects (some of them are dependent, others are dependencies);
Building images from Docker containers;
Launching some processes in the endpoint;
As you can notice, we are faced with a heavily branched chain of jobs triggered by each other, and every update to any of the upstream projects must go through the chain of jobs and trigger the final job (process I). So it would be nice to use some Jenkins plugins that will:
Control such a complicated structure of jobs (I tried the Build Pipeline Plugin and got the impression that it is suited only to "linear" job chains);
Provide clean way of passing the parameters between job environments.
As @slav mentioned, the Workflow plugin should be able to handle this kind of complex control flow, including parallel handling of subtasks, simple handling of variables throughout the process (they would just be Groovy local variables), and Docker support.
You could of course arrange this whole process in a single build.gradle (or Makefile for that matter). That would be suitable if you did not mind running all the steps on the same Jenkins slave, and did not otherwise need to interact with or report to Jenkins in any particular way in the middle of the build.
Well, for passing parameters, you should use the Parameterized Trigger Plugin.
For a more asynchronous way of passing parameters, you can use the EnvInject plugin (it's extremely useful and flexible for all sorts of things, and considering your complexity, it might prove useful whether or not you use it for passing parameters).
As for control, look into the Workflow plugin. It lets you write the whole execution flow in its own Groovy script, with fine-grained control. More links:
Official - https://jenkins-ci.org/content/workflow-plugin-10
Tutorial - https://github.com/jenkinsci/workflow-plugin/blob/c15589f/TUTORIAL.md#pausing-flyweight-vs-heavyweight-executors
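A sketch of how the branched deb-package/Docker/deploy chain could collapse into one Workflow (Pipeline) script: Groovy variables carry values between stages, and `parallel` handles the fan-out (package names, commands, and the VERSION file are hypothetical):

```groovy
node {
    def version
    stage('Packages') {
        // Independent deb packages build in parallel; dependent ones
        // would simply go in a later sequential step.
        parallel libA: { sh './build-deb.sh libA' },
                 libB: { sh './build-deb.sh libB' }
        version = readFile('VERSION').trim()
    }
    stage('Images') {
        // No Parameterized Trigger needed — 'version' is just a variable.
        sh "docker build -t myapp:${version} ."
    }
    stage('Deploy') {
        sh "./deploy.sh ${version}"   // launch the endpoint processes
    }
}
```

This replaces the chain of mutually triggering jobs with one script, which also gives you a single place to see and control the whole flow.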
