Polling ignores commits in certain paths in a multibranch pipeline - Jenkins

I have a multibranch pipeline with a project that I want to build only if there are changes in a specific directory.
I know that the "Polling ignores commits in certain paths" option can do exactly that, but I can't find this option in the multibranch configuration.
Is this even possible for a multibranch pipeline?

In theory, you can call the GitSCM class with an includedRegions path restriction in the checkout step (e.g. see here for the syntax).
However, this does not work with pipelines, as I verified just last week. So unfortunately, Jenkins is really not well-suited for monorepos.
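For reference, a minimal sketch of that syntax in a scripted checkout step; the repository URL, branch, and region pattern are placeholders, and as noted above the restriction may not actually gate builds in a pipeline:

    // Hypothetical example: GitSCM checkout with an includedRegions restriction.
    checkout([
        $class: 'GitSCM',
        branches: [[name: '*/master']],
        extensions: [[
            $class: 'PathRestriction',
            includedRegions: 'specific-directory/.*',  // placeholder pattern
            excludedRegions: ''
        ]],
        userRemoteConfigs: [[url: 'https://example.com/my-repo.git']]  // placeholder URL
    ])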

Related

Jenkins Pipeline from SCM. Can we select a branch?

We have our pipeline Groovy scripts for Jenkins in SCM (Git). I believe it currently gets the scripts from master by default.
Are we able to specify the branch we want to use for the groovy scripts?
There is a branch setting in that section, but if I understand correctly it is for the build branch (as it allows setting multiple branches).
It appears that the branch option selects the branch from which the pipeline script is read.
I assume that allowing multiple branches means you can run different versions of that script from different branches. I am not sure when that would be used, but it does answer my original question.
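For illustration, the same setting expressed via the Job DSL plugin, where the branch of the SCM definition is the branch the pipeline script is read from; a minimal sketch with placeholder names:

    // Hypothetical Job DSL sketch; job name, URL, and branch are placeholders.
    pipelineJob('nightly-build') {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://example.com/my-repo.git') }
                        branch('release/2.0')  // branch the pipeline script is taken from
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }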

Jenkins multibranch pipeline triggering build only for new commits

I would like to use the multibranch pipeline functionality offered by Jenkins with some projects that have been around for a long time and have a lot of branches.
I am using it in conjunction with the Basic Branch Build Strategies Plugin and the Multibranch Scan Webhook Trigger.
Since the projects have a lot of branches, I would like the multibranch pipeline to just index the branches when it is triggered, and to run the actual pipeline for a branch only when a new commit is pushed (the Multibranch Scan Webhook Trigger is used to notify the multibranch pipeline).
I have noticed the "Skip initial build on first branch indexing" option provided by the Basic Branch Build Strategies Plugin, which looked perfect at first: the branch is just indexed and the build is not triggered the first time.
The only issue is that the same applies to every new branch that gets created, whereas I would like this option to be active only for the old branches.
The "Tags" option of the Basic Branch Build Strategies Plugin has an "Ignore tags older than" parameter... a parameter like that for the "Skip initial build on first branch indexing" option would cover my use case, but unfortunately it takes no parameters.

Multibranch pipeline with jenkinsfile in svn:external

I have set up a multibranch pipeline job for a repository in SVN. Since I want to keep the jenkinsfiles the same in all branches, they are not really located in the branches, but in a different location and only referenced via svn:externals.
Unfortunately the multibranch pipeline does not seem to follow these references and doesn't find the jenkinsfiles (the paths are correctly set):
Checking candidate branch /branches/aaa/bbb/ccc#HEAD
‘ddd\eee\fff\jenkinsfile’ not found
Does not meet criteria
Is there any way to tell Jenkins and the multibranch-pipeline plugin setup to also follow svn:externals when looking for the jenkinsfiles?
By default, Jenkins tries to get the Jenkinsfile with a lightweight checkout, which does not consider svn:externals.
This behavior can only be deactivated globally; see https://wiki.jenkins.io/display/JENKINS/Subversion+Plugin, section "lightweight checkout capability for Subversion on Multibranch Pipeline projects and Externals support".

Use one Jenkinsfile or multiple Jenkinsfiles

We are currently on Windows with Jenkins 2.107.1 (no pipeline), and I am researching moving to pipeline. We have a nightly build job that fetches from repositories, then submits and waits on other jobs. I see 9 jobs running at the same time on the same master node (we only have a master). I am not clear on whether we should have one Jenkinsfile or multiple Jenkinsfiles. It will not be a multibranch pipeline, as we do not create test branches and then merge back to a master. In the repository we have a product1.0 branch, a product2.0 branch, etc., and we build only one branch (the latest one). While I do like the Blue Ocean editor, it is only for multibranch pipelines.
Do I combine all the jobs into one Jenkinsfile, or create a separate Jenkinsfile for each of the existing jobs (Jenkinsfilestart, JenkinsfileFetchCVs, JenkinsFileFetchGit, Jenkinsfilenextjob, etc.) and have one call the other? Do I recreate all the old jobs as Jenkinsfiles, or as scripts executed by one master Jenkinsfile? Do I do this in Declarative or Scripted?
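For illustration, a minimal sketch of the "one calls the other" pattern, using the built-in build step from a single orchestrating pipeline; the job names are placeholders:

    // Hypothetical orchestrating Jenkinsfile: each stage triggers one of the
    // existing jobs and waits for its result before moving on.
    pipeline {
        agent any
        stages {
            stage('Fetch CVS') {
                steps { build job: 'FetchCVS', wait: true }
            }
            stage('Fetch Git') {
                steps { build job: 'FetchGit', wait: true }
            }
            stage('Next job') {
                steps { build job: 'NextJob', wait: true }
            }
        }
    }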
I have set up a Jenkins pipeline on a test VM, but I am not clear on which way to go yet.
I am looking for directions and/or examples. Is there documentation on how to convert existing non-pipeline Jenkins systems?
I found this after doing the initial post: https://wiki.jenkins.io/display/JENKINS/Convert+To+Pipeline+Plugin.
It does help a little in that it gives you some converted steps, but it cannot convert all the steps and will leave comments in the pipeline script such as "//Unable to convert a build step referring to...please verify and convert manually if required." There is an option "Recursively convert downstream jobs if any"; if you select it, it appears to add all the downstream jobs to the same pipeline script, and it really confuses the job parameters. There is also an option to "Commit JenkinsFile". I will play with this some more, but it is not the be-all and end-all of converting to pipeline, and I am still not sure whether I should have one or more scripts.
Added 07/26/19 -
Let’s see if I have my research to date correct…
A Declarative pipeline (Pipeline script from SCM) is stored in a Jenkinsfile in the repository. Every time the Jenkins job is executed, a fetch from the repository is done (to get the latest version of the Jenkinsfile).
A Pipeline script is stored as part of the config.xml file in the Jenkins\Jobs folder (it is not stored in the repository, or in a separate Jenkinsfile in the jobs folder). There is a fetch from the repository only if the job requires it (you do not need a repository fetch to get the Pipeline script).
Besides our nightly product build, we also have other jobs. I could create a separate Declarative Jenkinsfile for each of them (JenkinsfileA, JenkinsfileB, etc.) and store them in the repository as well (in the same branch as the main Jenkinsfile), but that would mean every one of those additional jobs would need its own repository fetch just to get its particular Jenkinsfile (basically fetching/cloning the repository branch for each job, with multiple copies of the branch unnecessarily downloaded to each job's workspace).
That does not make sense to me (unless my understanding to date is incorrect). Because the main product build does require a fetch every time it runs (to pick up any developer check-ins), I do not see a problem using a Declarative Jenkinsfile for that job. The other jobs (if we do not leave them in the classic, non-pipeline format for the time being) will be Pipeline scripts.
Is there any way (or are there plans) to use a Declarative pipeline without having to store it in the repository and fetch it every time (lessening the need to become a Groovy developer)? The Blue Ocean script editor appears to be an easier tool for creating pipeline scripts, but it is only for multibranch pipelines (which we don’t do).
Serialization (restarting a job): is that only for when a node goes down, or can you restart a pipeline job (Declarative or Scripted) from any point if it fails?
I see there are places to check which Jenkins plugins have been ported to pipeline, but is there anything that can be run against your existing classic jobs to determine up front which ones will have problems being converted to pipeline?
08/02/19...
Studying and playing with pipelines. I see that you can use Declarative in the Pipeline script window, but it still stores it in the config.xml file. And I have played with combining Declarative and non-Declarative code in the same script.
I am trying to understand the Blue Ocean interface, and the word "multibranch" is throwing me a little. We do not create test branches and then merge them back into the master. In the repository, we have branches for each release of the product, and we rarely go back to previous branches/versions. So, if I am working on branchV9 right now, do I also need a Jenkinsfile in the master branch, or in any of the previous version branches?
I have been playing with Blue Ocean (which only does multibranch pipelines). I am on a Windows system, Jenkins 2.176.2, and have all the latest Blue Ocean plugins as of today (1.18.0). I am accessing a local Git repository (not GitHub), and am running into the following...
If I try to use “c:\GitRepos\Pipelines1.git”, I get "not a valid name"...
Why does it do this?
If you have a single job that would be executed on multiple branches (with possibly optional stages, depending on the branch name, a tag, or other criteria), then you could still use a multibranch pipeline.
In general, I would say the paradigm shift focuses mainly on converting the old jobs to stages, in order to automate your build process. A semi- or fully automated CI/CD flow could look like the following (a Jenkinsfile sketch follows the list):
A multibranch pipeline project (all branches) with the following stages (first Jenkinsfile):
build (all branches)
unit tests (all branches), publishing a report
publish artifacts (master and release branches)
build and publish docker (master and release branches)
deploy to test (master and release branches)
run integration tests (master and release branches)
deploy to staging (master and release branches), possibly ending with a manual step to confirm the deployment result was as expected
deploy to production (release branches)
A pipeline job for nightly tests (a second Jenkinsfile). What is the result here? Would it break the CI/CD flow?
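A minimal sketch of such a branch-gated Jenkinsfile; the commands, branch patterns, and artifact paths are placeholders:

    // Hypothetical multibranch Jenkinsfile: the same script runs on every
    // branch, and 'when' conditions gate the later stages by branch name.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { bat 'build.cmd' }           // placeholder build command
            }
            stage('Unit tests') {
                steps { bat 'run-unit-tests.cmd' }  // placeholder test command
            }
            stage('Publish artifacts') {
                when {
                    anyOf {
                        branch 'master'
                        branch 'release/*'
                    }
                }
                steps { archiveArtifacts artifacts: 'dist/**' }
            }
            stage('Deploy to production') {
                when { branch 'release/*' }
                steps { bat 'deploy.cmd' }          // placeholder deploy command
            }
        }
    }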

Jenkins Archive Downstream Multibranch Pipeline

I have a multibranch pipeline in Jenkins and need to pass the archive from the pipeline to a job it builds. The Copy Artifact plugin doesn't seem to support multibranch pipelines; it deletes my source project every time I save. Is there another plugin I can use to get the archive passed to the job? Or is there something I need to do to get this plugin working with a multibranch pipeline?
The Copy Artifact plugin doesn't seem to support multibranch pipelines.
Copy Artifact does not care about multibranch. From its perspective, a branch project is simply a job inside some folder, and it does support folders. You just need to use the correct syntax for the source job. Last I remember, it supports both relative paths (e.g., ../multibranch/master) and absolute paths (e.g., /organization/repo/master).
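For example, a sketch of the pipeline step using an absolute source path; the project path and filter are placeholders:

    // Hypothetical example: copy the latest successful archive from a
    // multibranch branch job, addressed by its full folder path.
    copyArtifacts(
        projectName: '/organization/repo/master',  // folder path to the branch job
        selector: lastSuccessful(),
        filter: '**/*.zip'                         // placeholder artifact filter
    )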
