How to trigger a Jenkins multibranch pipeline build from a GitLab merge request from a fork?

Folks,
I've actually been having loads of fun since switching to multibranch pipelines in Jenkins, which I use in combination with GitLab.
But something I still can't wrap my head around is how to build merge requests that originate from a fork - merge requests from the same remote trigger a build, but the ones from my fork do not!
I'd be really happy to hear any ideas about this.
Thanks a lot, community!

Maybe you could try a different approach:
A multibranch pipeline is for continuous builds (compilation and testing only). It basically gives quick feedback on your current work.
Building an MR is another business altogether. It is supposed to provide the reviewer with all the elements necessary to determine whether an MR can be merged into the master branch or not. It could therefore involve code quality analysis, security analysis, gating, dashboards...
In this respect, the two should be separated into different jobs.
Having two separate jobs for the two functionalities is not only cleaner; I believe it would also solve your trigger-from-fork problem.
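To illustrate the split, the multibranch Jenkinsfile can stay limited to the quick-feedback loop, with MR validation living in its own job. A minimal declarative sketch (the `make` commands are placeholders, not taken from the question):

```groovy
// Jenkinsfile for the multibranch pipeline: quick feedback only
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps { sh 'make build' }   // placeholder build command
        }
        stage('Unit tests') {
            steps { sh 'make test' }    // placeholder test command
        }
    }
}
```

The MR-validation job (quality gates, security scans, dashboards) would then be a separate pipeline that GitLab triggers on merge request events.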

Related

Best route to take for a Jenkins job with hundreds of sub jobs

Currently, at my organization we have a few repositories which contain ~500+ projects that need to be built to satisfy unit testing (really integration testing), and I am trying to think of a new way of approaching the situation.
Currently, the pipeline for building the projects is templatized and stored on our Bitbucket server. All the projects are built in parallel, so once the jobs are queued, they all go to the master node to do an SCM checkout of the pipeline.
This creates stress on the master node, and for some reason it is not able to utilize every available node and executor on those nodes to their fullest potential. Conversely, if the pipeline is not stored in SCM, it does the complete opposite: it DOES use every possible node with any available executor on that node.
Is there something I am missing about the SCM checkout version that makes it different from storing the pipeline locally on Jenkins? I understand that you need to do an SCM poll, and I am assuming only the master can do the SCM poll for the original Jenkinsfile.
I've tried:
Looking to see if I am potentially throttling the build, but I do not see anything.
Verifying that "Disable concurrent builds" is not enabled within the pipeline.
Lightweight checkout, which seems to work when I do it with the Git plugin, but not with the Bitbucket Server Integration plugin; however, Atlassian mentioned this will never be a feature, so it doesn't really matter.
I am trying to see if there is a possible way to change the infrastructure, since I don't have much of a choice in how certain programs are set up - they are very tightly coupled.
I could in theory just keep the pipeline locally on Jenkins and use that as a template rather than checking it into SCM; however, making changes locally to the template does not propagate to the sub-jobs that use it (I could implement this feature, but SCM already does it). Plus, having the template pipeline checked into Bitbucket allows better control, so I am trying to avoid that option.

Jenkins CI workflow with separate build and automated test both in source control

I am trying to improve our continuous integration process using Jenkins and our source control system (currently svn, but git soon).
Maybe I am overcomplicating this, or maybe I have not yet seen the right hints.
The process I envisioned has three steps and associated roles:
one or more developers would do their job and ultimately submit the code changes for the actual software ("main software") as well as unit tests into source control (git, or something else). Jenkins shall build the software, run unit tests and perhaps some other steps (e.g. static code analysis). If none of this fails, the work of the developers is done. As part of the build, the build number is baked into the main software itself as part of the version number.
one or more test engineers will subsequently pickup the build and perform tests. Some of them may be manual, most of them are desired to be automated/scripted tests. These shall ultimately be submitted into source control as well and be executed through the build server. However, this shall not trigger a new build of the main software (since there is nothing changed). If none of this fails, the test engineers are done. Note that our automated tests currently take several hours to complete.
As a last step, a project manager authorizes release of the software, which executes whatever delivery/deployment steps are needed. Also, the source of the main software, the unit tests, the automated test scripts, the Jenkins build script - and ideally all build artifacts ("binaries") - are archived (tagged) in the source control system.
Ideally, developers are able to also manually trigger execution of the automated tests to "preview" the outcome of their build.
I have been unable to figure out how to do this with Jenkins and Git - or any other source control system.
Jenkins pipelines seem to assume that all steps are carried out in sequence automatically. They also seem to assume that committing code into source control always starts the sequence from the beginning (which I believe should not be true if the commit was "merely" automated test scripts). Triggering an unnecessary build of the main software really hurts our process, as it basically invalidates any manual testing and documentation by baking a new build number into the software.
If my approach is so uncommon, please direct me how to do this correctly. Otherwise I would appreciate pointers how to get this done (conceptually).
I will try to reply with some points. This is indeed a conceptual approach; there are a lot of details and different possible approaches too - this is only one.
You need git :)
You need to set up a git branching strategy that allows multiple developers to work simultaneously, pushing code and validating it against static code analysis. I would suggest that you start with Git Flow; it is widely used and can be adapted to whatever reality you have - you do not need to use it in its pure state, so give some thought to how to adapt it. Fundamentally, it will allow each feature to be tested. Then, each developer can merge it into the develop branch - from this point on, you have your features validated and you can start to deploy and test.
Have a look at multibranch pipelines. These will allow you to test the several feature branches you might have and to run different flows for the other branches - develop, release and master - depending on your deployment needs. So, when you have a merge on the develop branch, you can trigger testing or just use it to run static code analysis.
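For instance, a single Jenkinsfile can run different flows per branch by gating stages with `when` conditions. A minimal declarative sketch; the stage contents and branch patterns are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Build & unit tests') {
            steps { sh 'make test' }      // runs on every branch
        }
        stage('Static analysis') {
            when { branch 'develop' }     // only after merges to develop
            steps { sh 'make lint' }
        }
        stage('Deploy') {
            when { branch 'release/*' }   // only on release branches
            steps { sh 'make deploy' }
        }
    }
}
```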
To overcome the problem you mention in your second point, there are ways to read the change sets in the pipeline, and if the changes only touch testing scripts, you should not build your SW - check here for how to read changes, and here for an example of how to read changes and also prevent your pipeline from building all the stages, according to the changes being pushed to git.
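One way to sketch the change-set check in scripted pipeline, assuming your test scripts live under a `test/` directory (iterating these objects may require script approval in a sandboxed pipeline):

```groovy
// Returns true when every file in this build's change sets is a test script
def onlyTestChanges() {
    def changeSets = currentBuild.changeSets
    if (changeSets.isEmpty()) {
        return false                 // no change data (e.g. first build): build everything
    }
    for (cs in changeSets) {
        for (entry in cs.items) {
            for (path in entry.affectedPaths) {
                if (!path.startsWith('test/')) {
                    return false     // a non-test file changed: full build needed
                }
            }
        }
    }
    return true
}
```

The build stages can then be guarded with `if (!onlyTestChanges()) { ... }`, or with a `when { expression { ... } }` block in declarative syntax.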
In case you still have manual testing taking place, pipelines are pausable, which means you can pause the pipeline and ask for approval to proceed. Before approving, testers do whatever they have to, and when they are ready, they just approve the build so it proceeds to the next steps.
Regarding deployment authorization, it is done the same way as the previous point, with approvals, but in this case you can specify which users/roles are allowed to approve that step.
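Both points map onto the `input` step, and its `submitter` parameter covers the authorization case. A sketch of two stages for a declarative pipeline (the stage names and the `release-managers` group are placeholders):

```groovy
stage('Manual testing') {
    steps {
        // Pause the pipeline until a tester confirms the manual checks
        input message: 'Manual tests passed?'
    }
}
stage('Release approval') {
    steps {
        // Only the listed users/groups may approve this step
        input message: 'Authorize the release?', submitter: 'release-managers'
    }
}
```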
Whatever you need to keep from your builds, Jenkins has an archive-artifacts utility. Let me just note that ideally you would look into a proper artifact repository such as Nexus.
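In a pipeline, archiving is a one-liner (the glob is an assumption):

```groovy
// Keep the built binaries with the Jenkins build record
archiveArtifacts artifacts: 'build/libs/*.jar', fingerprint: true
```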
To manually trigger a set of tests... You can have a manually triggered job on Jenkins, apart from your CI/CD pipeline, that only executes the automated tests. You can even trigger that same job as one pipeline stage - see how to trigger other jobs.
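Triggering that job from a stage could look like this (the job name is a placeholder):

```groovy
// Run the standalone test job and wait for its result
build job: 'automated-tests', wait: true
```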
Lastly, let me say that the branching strategy is the starting point.
Think about your big picture, decide what SDLC flows you need, and set up those flows in your multibranch pipeline. Different git branches will facilitate whatever flows you need within the same Jenkinsfile - your pipeline definition. It really depends on how many environments you have to deploy to and what kind of steps you need.

How can I share source code across many nodes in a Jenkins pipeline job?

I have a build that's currently using the old build flow plugin that I'm trying to convert to pipeline.
This build can be massively parallelized (many units of work can run on many different nodes) but we only want to extract the source code once at the beginning, preferably with the Pipeline script from SCM option. I'm at a loss to understand how I can share the source extract (which apparently is on the master) with all of the "downstream" nodes that will be used by the pipeline script.
For build flow we extracted to a well-known location on a shared file system, and all of the downstream jobs invoked by the flow were passed (or could derive) that location. That always felt icky, and I was hoping that pipeline would have solved this problem, but I can't find anything to suggest that it has. What am I missing?
I believe the official recommendation for this is to make bundles of the source and then use "stash" and "unstash" to make them available to deeper steps of your pipeline script.
See https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
Keep in mind that this doesn't do anything to help with line-endings. If you have builds that span OSs with different line endings you either need to make OS-specific stashes, or just checkout to a safe label in each downstream step.
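A minimal sketch of the pattern, assuming a `linux` node label and build scripts that are not from the original question:

```groovy
node('linux') {
    checkout scm                              // extract the source once
    stash name: 'sources', includes: '**/*'   // bundle it for reuse
}
parallel unitA: {
    node('linux') {
        unstash 'sources'                     // copy the bundle to this node
        sh './build-unit-a.sh'                // placeholder build command
    }
}, unitB: {
    node('linux') {
        unstash 'sources'
        sh './build-unit-b.sh'
    }
}
```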
After further research it seems like the External Workspace Manager Plugin does what I'm looking for.
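For reference, that plugin exposes paired `exwsAllocate`/`exws` steps; a sketch, assuming a disk pool named `diskpool1` has been configured in Jenkins:

```groovy
// Allocate a directory on a shared disk pool, then reuse it from any node
def extWorkspace = exwsAllocate 'diskpool1'
node('linux') {
    exws(extWorkspace) {
        checkout scm
        sh './build.sh'   // placeholder build command
    }
}
```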

Jenkins continuous integration and nightly builds

I’m new to Jenkins and I’d like some help (reassurance) with how I think I should set up my jobs.
The end goal is fairly simple.
Objective 1: When a developer commits code to a mercurial repo Jenkins pulls the changes, builds the project and runs the unit tests. This happens continuously throughout the day so developers get the earliest possible feedback if they break something.
Objective 2: Nightly, Jenkins pulls the last stable build from above and runs automated UI tests. If those tests pass it publishes the nightly build somewhere.
I have a job configured that achieves objective 1 but I’m struggling with objective 2.
(Not the publishing part, the idea of seeding this job with the last stable build of objective 1).
At the moment, I’m planning to use branches in the HG repo to implement this.
My branches would look something like Main >> Int >> Dev.
The job in objective 1 would work on the tip of the Dev branch.
If the build succeeds and the tests pass it would commit to the Int branch.
The job in objective 2 could then simply work on the tip of the Int branch.
Is this how it’s generally done?
I’ve also been looking at/considering:
- plugins like Promoted Builds and Copy Artifacts
- parameterised builds
- downstream jobs
IMO my objectives are fairly common, but I can’t find many examples of this approach online. Perhaps it’s so obvious there was no need, but I just wanted to check.
In the past I've stored generated artifacts like this in an artifact repository. You could use something like Nexus or Artifactory for this, but I've also just used a flat file system.
You could put the build artifacts in source control, like you said, but there usually isn't a reason to have version control on compiled builds (you should be able to re-create them based on rev numbers) - they usually just take up a lot of space in your repo.
If your version numbers are incremental in nature your nightly job should be able to pull the latest one fairly easily.
Maybe you can capture the last good revision ID and post it somewhere. Then the nightly build can use that last known good revision. The method for doing this can vary, but it's the concept of using the revision ID that I want to communicate here. This would save you from having to create a separate branch.
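One hedged sketch of the idea with Mercurial and the Copy Artifact plugin (the job name, file name, and commands are assumptions, not from the answer):

```groovy
// In the CI job, after the build and unit tests pass:
sh 'hg id -i > good-revision.txt'         // record the current revision hash
archiveArtifacts artifacts: 'good-revision.txt'

// In the nightly job:
copyArtifacts projectName: 'ci-build', selector: lastSuccessful()
def rev = readFile('good-revision.txt').trim()
sh "hg update -r ${rev}"                  // check out the last known good revision
```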

Problems with Jenkins Build pipeline and parallel jobs

Background
I am using Jenkins with the Build Pipeline plugin to build some fairly complicated projects that require multiple compilation steps:
Build source RPM.
Build binary RPM (this is performed twice, once for each platform).
Deploy to YUM repository.
My strategy for solving build requirements involves splitting the common work into parameterized jobs that can be reused across projects and branches, with each job representing one stage in the pipeline. Each stage is triggered with parameters, and build artifacts passed along to the next job in the pipeline. However, I'm having some trouble with this strategy, and could really use some tips on how to go about solving this problem in the most elegant and flexible way possible.
To be more specific, there are two common libraries, which are shared by other projects (but not all projects). The libraries are built differently from the dependent projects, but it should not be necessary to specify in Jenkins what the dependent projects are.
There are multiple branches, the master branch (rebuilt nightly), the develop branch (polled for changes), feature branches (also polled), and release branches (polled, but built for release). The branches are built the same way across multiple projects.
We create multiple repositories every month, and whilst it is feasible to expect a little setup for a new project, generally we want this to be as simple and automated as possible.
The Problems
I have many projects with multiple branches, and I do not wish to build all branches or even all projects in the same way. Because most of the build steps are similar, I can turn these common steps into parameterized build jobs and get each job to trigger the next in the chain, passing parameters and build artifacts along the chain. However, this falls apart if one of the steps needs to be skipped, because I don't know of a way to conditionally skip a build step. This implies I would need to copy the build jobs so that I can customise them for each pipeline, resulting in a very large number of build jobs. I could use a combination of plugins to create a job generator (e.g. Build Flow DSL, Job DSL, etc.) and hide as much as possible from the users, but what's the most elegant Jenkins solution to this? Are there any plugins or examples that I might have missed? What's your experience of doing this?
Because step 2 can be split into two jobs that can be run in parallel, this introduces a complexity that is causing me problems with my pipeline. My first attempt would trigger a parameterized build job twice with different parameters, and then join the jobs afterwards using the join plugin, but it was beginning to look like it would be complicated to copy in the build artifacts from the two upstream jobs. This is significant, because I need the build artifacts from both jobs for stage 3. What's the most elegant solution to join parallel jobs and copy artifacts from them all? Are there any examples that I might have missed?
I need to combine test results generated from both of the jobs in stage 2, and copy them to the job that triggers the build. What's the best way to handle this?
I'm happy to read articles, presentations, reference documentation, write scripts and do whatever else is necessary to make this work nicely, but I'm not a Jenkins expert. If anyone can give me advice on these 3 problems, that would be helpful. Additionally, I would appreciate any constructive advice on how to get the best out of pipeline CI builds in Jenkins, if relevant.
For the first point, I wrote the Job Generator plugin to address exactly this use case. You can find more info on the Job Generator wiki page.
There is also a plugin of the same type with a different approach (job generation as a build step), called Jobcopy Builder.
The other approaches you mentioned require some kind of DSL and can be a good choice too.
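For comparison, a Job DSL seed script could stamp out the parameterized per-project jobs; a minimal sketch in which the project names, repository URLs, and build script are hypothetical:

```groovy
// Job DSL seed script: generate one parameterized source-RPM job per project
['project-a', 'project-b'].each { name ->
    job("${name}-srpm") {
        parameters {
            stringParam('BRANCH', 'develop', 'Branch to build')
        }
        scm {
            git("https://example.com/${name}.git", '$BRANCH')
        }
        steps {
            shell('./build-srpm.sh')   // placeholder build step
        }
    }
}
```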
