GitLab pipeline - how to fail the pipeline when the JaCoCo report coverage percentage is lower than on the last commit in the master branch?

Our team decided to define a task as "done" when it meets several criteria. One of them is: "the coverage can never drop in comparison to the last merge on master".
If we want to configure the pipeline so that it fails when the coverage drops, how can this be done? Currently the coverage is shown on the GitLab merge request page, but it is never used for anything.
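One way to enforce this is a dedicated job in the merge request pipeline that recomputes the current coverage from the JaCoCo XML report and compares it with the coverage GitLab recorded for the latest successful master pipeline (readable through the pipelines API). The following is a minimal sketch, not a built-in GitLab feature: it assumes coverage parsing is already configured for the project (as the question states), that the JaCoCo XML report is produced earlier in the job or handed over as an artifact at target/site/jacoco/jacoco.xml, that curl, jq and python3 are available in the job image, and that $COVERAGE_API_TOKEN is a CI/CD variable holding an API token with read access. $CI_API_V4_URL and $CI_PROJECT_ID are predefined GitLab variables.

    coverage-gate:
      stage: test
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      script:
        # Current coverage, taken from the report-level LINE counter of the JaCoCo XML report.
        - |
          CURRENT=$(python3 - <<'EOF'
          import xml.etree.ElementTree as ET
          root = ET.parse("target/site/jacoco/jacoco.xml").getroot()
          c = next(x for x in root.findall("counter") if x.get("type") == "LINE")
          covered, missed = int(c.get("covered")), int(c.get("missed"))
          print(round(100.0 * covered / (covered + missed), 2))
          EOF
          )
        # Coverage GitLab recorded for the latest successful master pipeline.
        - |
          PIPELINE_ID=$(curl -s --header "PRIVATE-TOKEN: $COVERAGE_API_TOKEN" \
            "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?ref=master&status=success&per_page=1" | jq -r '.[0].id')
          MASTER=$(curl -s --header "PRIVATE-TOKEN: $COVERAGE_API_TOKEN" \
            "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$PIPELINE_ID" | jq -r '.coverage')
        - echo "master coverage=$MASTER%, current coverage=$CURRENT%"
        # A non-zero exit fails the job (and therefore the pipeline) when coverage dropped.
        - awk -v cur="$CURRENT" -v base="$MASTER" 'BEGIN { exit (cur + 0 < base + 0) ? 1 : 0 }'

If the project is on a paid tier, the "Coverage check" merge request approval rule may also be worth a look; it blocks merging (rather than failing the pipeline) when coverage decreases.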

Related

Jenkins - why do we need both pr-merge and branch builds for checks?

In our GitHub PR and Jenkins pipeline, after we commit code and open a pull request, two checks have to pass: one for the PR merge and one for the branch. Why do we need two checks? GitHub already requires the branch to be up to date with main and the pipeline build to pass; that should be sufficient, so what does the pr-merge check add?
I could supply the stages for each build if it helps answer the question, but for the most part they are similar pipelines: checkout, linter, unit tests, functional tests, integration tests, Docker publish.
Some checks haven’t completed yet
2 pending checks
continuous-integration/jenkins/branch
continuous-integration/jenkins/pr-merge

Pytest + coverage regression test run when a pull request is submitted (Azure Pipelines and Bitbucket)

I have a Python package in a Bitbucket repository. I would like to set up Azure Pipelines so that when I submit a pull request to the master branch of that repository, a suite of pytest tests is run.
Along with this (either separately, or preferably in the pytest test suite itself), I would like a coverage check to run and fail if the coverage percentage of the master branch is higher than the coverage percentage of the branch being merged into it.
The idea is that a pull request couldn't be merged unless the test coverage increased.
Does anyone know how I might do this?
I am afraid that at the moment we can only build the code in the Azure pipeline; we cannot manage the code hosted in the Bitbucket repository.
We can set up pull request validation to execute the pytest tests when a pull request is submitted to the master branch of the Bitbucket repository.
However, we also need a status check that compares the coverage percentage of the master branch with that of the branch being merged in the pull request, and it seems Bitbucket's branch permissions do not offer this feature.
If we migrate the repo from Bitbucket to Azure DevOps, we could add a build validation to execute the pytest tests and add a coverage status check to check the coverage percentage:
Code coverage for pull requests
Code coverage metrics and branch policy for pull requests
General steps:
Add a Build Validation to execute pytest tests
Add a task to get the coverage percentage of the master branch and overwrite the value of the coverage target in azurepipelines-coverage.yml in the Azure repo.
Add a coverage policy (a sketch of the pipeline steps and the coverage settings file follows below).
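As a rough sketch of steps 1 and 3, the build validation pipeline could run pytest with coverage and publish the result (the package name mypackage, the file names and the 80% target below are placeholders, not values from the question):

    # azure-pipelines.yml (used by the build validation policy)
    steps:
      - script: |
          pip install pytest pytest-cov
          pytest --cov=mypackage --cov-report=xml --junitxml=test-results.xml
        displayName: Run pytest with coverage
      - task: PublishTestResults@2
        inputs:
          testResultsFiles: 'test-results.xml'
      - task: PublishCodeCoverageResults@1
        inputs:
          codeCoverageTool: Cobertura
          summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'

The coverage target then lives in azurepipelines-coverage.yml at the repository root, in the format described in the "Code coverage for pull requests" document linked above:

    coverage:
      status:
        comments: on
        diff:
          target: 80%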
Hope this helps.

Use one Jenkinsfile or multiple Jenkinsfiles

We are currently using Windows / Jenkins 2.107.1 (no pipeline), and I am researching moving to pipeline. We have a nightly build job that fetches from repositories, and submits and waits on other jobs. I see 9 jobs running on the same master node (we only have a master) at the same time. I am not clear on whether we should have one Jenkinsfile or multiple Jenkinsfiles. It will not be a multibranch pipeline, as we do not create test branches and then merge back to a master. In the repository we have a product1.0 branch, a product2.0 branch, etc., and we build only one branch (the latest one). While I do like the Blue Ocean editor, it is only for multibranch pipelines.
Do I combine all the jobs into one Jenkinsfile, or create multiple Jenkinsfiles for each of the existing jobs (Jenkinsfilestart, JenkinsfileFetchCVs, JenkinsFileFetchGit, Jenkinsfilenextjob, etc.) and have one call the other? Do I create all the old jobs as Jenkinsfiles, or as scripts executed by the one master Jenkinsfile? Do I do this in Declarative or Scripted pipeline?
I have set up a Jenkins pipeline on a test VM, but am not clear on which way to go yet.
I am looking for direction and/or examples. Is there documentation on how to convert existing Jenkins non-pipeline systems?
I found this after writing the initial post: https://wiki.jenkins.io/display/JENKINS/Convert+To+Pipeline+Plugin.
It does help a little in that it gives you some converted steps, but it cannot convert all the steps, and it will leave comments in the pipeline script ("//Unable to convert a build step referring to...please verify and convert manually if required."). There is an option "Recursively convert downstream jobs if any"; if you select that, it appears to add all the downstream jobs to the same pipeline script, and it really confuses the job parameters. There is also an option to "Commit JenkinsFile". I will play with this some more, but it is not the be-all and end-all of converting to pipeline, and I am still not sure whether I should have one or more scripts.
Added 07/26/19 -
Let’s see if I have my research to date correct…
A Declarative pipeline (Pipeline script from SCM) is stored in a Jenkinsfile in the repository. Every time this Jenkins job is executed, a fetch from the repository is done (to get the latest version of the Jenkinsfile).
A Pipeline script is stored as part of the config.xml file in the Jenkins\Jobs folder (it is not stored in the repository, or in a separate Jenkinsfile in the jobs folder). There is a fetch from the repository only if the job requires it (you do not need to do a repository fetch to get the Pipeline script).
Besides our nightly product build, we also have other jobs. I could create a separate Declarative Jenkinsfile for each of them (JenkinsfileA, JenkinsfileB, etc.) and store them in the repository as well (in the same branch as the main Jenkinsfile), but that would mean that every one of those additional jobs, just to get its particular Jenkinsfile, would also need to do a repository fetch (basically fetching/cloning the repository branch for each job, and having multiple copies of the repository branch unnecessarily downloaded to the workspace of each job).
That does not make sense to me (unless my understanding of things to date is incorrect). Because the main product build does require a fetch every time it is run (to get any possible developer check-ins), I do not see a problem using a Declarative Jenkinsfile for that job. For the other jobs (if we do not leave them, for the time being, in the classic non-pipeline format), they will be Pipeline scripts.
Is there any way (or are there plans) to use a Declarative pipeline without having to store it in the repository and do a fetch every time (lessening the need to become a Groovy developer)? The Blue Ocean script editor appears to be an easier tool for creating pipeline scripts, but it is only for multibranch pipelines (which we don't do).
Serialization (restarting a job): is that only for when a node goes down, or can you restart a pipeline job (Declarative or Scripted) from any point if it fails?
I see that there are places to look to see which Jenkins plugins have been ported to pipeline, but is there anything that can be run against your existing classic jobs to determine up front which ones are going to have problems being converted to pipeline?
08/02/19...
Studying and playing with pipelines. I see that you can use Declarative in the Pipeline script window, but it still stores it in the config.xml file. And I have played with the combination of both Declarative and non-Declarative in the same script.
I am trying to understand the Blue Ocean interface; the word "MultiBranch" is throwing me a little. We do not create test branches and then merge them back into the master. In the repository, we have branches for each release of the product, and we rarely go back to previous branches/versions. So, if I am working on branchV9 right now, do I also need a Jenkinsfile in the master branch, or in any of the previous version branches?
I have been playing with Blue Ocean (which only does MultiBranch pipelines). I am on a Windows system, Jenkins 2.176.2, and have all the latest Blue Ocean plugins as of today (1.18.0). I am accessing a local Git repository (not GitHub), and am running into the following...
If I try to use "c:\GitRepos\Pipelines1.git", I get "not a valid name"...
Why does it do this?
If you have a single job that would be executed on multiple branches (with possibly optional stages, depending on the branch name, tag, or other criteria), then you could still use a multibranch pipeline.
In general I would say that the paradigm shift focuses mainly on converting the old jobs into stages in order to automate your build process. A semi- or fully-automated CI/CD flow could look like this:
Multibranch pipeline project (all branches) with the following stages (first Jenkinsfile; a minimal sketch follows after this list):
build (all branches)
unit tests (all branches), publish report
publish artifacts (master and release branches)
build and publish docker (master and release branches)
deploy to test (master and release branches)
run integration tests (master and release branches)
deploy to staging (master and release branches), possibly ending with a manual step to confirm that the result of the deployment was as expected
deploy to production (release branches)
Pipeline job for nightly tests (a second Jenkinsfile) - what is the result here? Would it break the CI/CD flow?
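A minimal declarative sketch of that stage layout (the shell scripts, report paths, and the release/* branch pattern are placeholders to adapt):

    // Sketch only - build steps and branch names are assumptions, not a finished pipeline.
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh './build.sh' }
            }
            stage('Unit tests') {
                steps { sh './run-unit-tests.sh' }
                post { always { junit 'reports/**/*.xml' } }   // publish the test report
            }
            stage('Publish artifacts and Docker image') {
                when { anyOf { branch 'master'; branch 'release/*' } }
                steps { sh './publish.sh' }
            }
            stage('Deploy to test and run integration tests') {
                when { anyOf { branch 'master'; branch 'release/*' } }
                steps { sh './deploy.sh test && ./run-integration-tests.sh' }
            }
            stage('Deploy to staging') {
                when { anyOf { branch 'master'; branch 'release/*' } }
                steps {
                    sh './deploy.sh staging'
                    input message: 'Was the staging deployment as expected?'   // manual confirmation step
                }
            }
            stage('Deploy to production') {
                when { branch 'release/*' }
                steps { sh './deploy.sh prod' }
            }
        }
    }

The nightly-test job would then be a second, simpler pipeline with its own Jenkinsfile, triggered by a cron expression (triggers { cron('H 2 * * *') }) instead of by branch events.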

Jenkins, Multijob, how to run in parallel?

We have set up a Jenkins instance as a remote testing resource for our developers. Every time a tag matching our refspec is created, a job is triggered and the results are emailed to the developer.
A job is defined as follows:
One phase consisting of three jobs (frontend tests, integration tests, unit tests)
All subjobs are executed, irrespective of success
Email the developer the test results
This setup mostly works except for two issues:
I cannot get the job to run in parallel. The subjobs run in parallel, but only one instance of the job runs at a time. Is this something I can configure differently somewhere, or is this inherent in the way the plugin works?
The main job checks out and occupies one of our build servers for the duration of the job. Is there a way to do git polling and then just grab the hashref and release the build server on which the polling was done before continuing building the subjobs?
In the MultiJob plugin, everything listed in the same "Phase" runs in parallel; however, the multijob itself needs somewhere to run. If you have a build followed by a test phase, you can add a "Build Phase" prior to the test phase, and only that phase will require a "build server".
There is an option called "Execute concurrent builds if necessary" that will allow multiple builds of the same job to run simultaneously. This option must be set for the parent job and the subjobs, as the default behavior of Jenkins is to allow only one build of a project (job) to run at a time. Beware: read the comments, as this may have unintended side effects.
It is not clear what you mean about polling; however, if you are using Git, you may want to use webhooks so that pushes to the repository invoke Jenkins directly. No need to poll.
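For the polling point, a push notification from the Git server side avoids SCM polling on the build servers entirely. Below is a minimal sketch of a server-side post-receive hook (the URLs are placeholders) using the Git plugin's notifyCommit endpoint; jobs watching that repository URL still need SCM polling enabled (the schedule can be empty), so the notification is what actually triggers them:

    #!/bin/sh
    # post-receive hook on the Git server (sketch): tell Jenkins the repository changed,
    # so jobs configured for this repository URL are polled and triggered immediately.
    curl -s "https://jenkins.example.com/git/notifyCommit?url=ssh://git@git.example.com/our/repo.git"

If the repository is hosted on GitHub, GitLab or Bitbucket instead, their native webhook integrations with the corresponding Jenkins plugins achieve the same thing.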

How to prevent some builds from affecting the stability of a Jenkins project

We're using Jenkins with the Gerrit Trigger plugin, so that when a changeset is uploaded to Gerrit for review, Jenkins will check whether it compiles and post the results.
The workflow happens like this:
A changeset is uploaded to Gerrit
A gerrit hook notifies Jenkins of the new changeset, giving Jenkins certain information such as the Gerrit changeset ID, patch number, target branch, etc.
Jenkins launches a build in the project that is configured to listen to this repository.
When the project finishes building, Jenkins reports back to Gerrit whether the build was successful or not, as either +1 or -1. This information is used by the code reviewers in Gerrit to help them decide whether to accept the changeset.
Problem: when these builds fail, the Jenkins project is marked "Failing" or "Unstable". This isn't exactly accurate, because the changes that caused the failure have not been accepted or merged into the repository yet; they are just newly proposed.
One feature of Jenkins is that it measures the health of a project based on the ratio of successful to failed builds. If the builds are all working, it shows a "sunshine" symbol, but if some are failing you get a "thundercloud". How do I configure Jenkins so that these verification builds don't affect the stability rating of the project? We need the project status to show "sunshine" as long as the commits that are approved in Gerrit and merged into the Git repository build successfully, regardless of the outcome of builds for changes that are not merged (changes in Gerrit still pending review). It is fine for the individual build to be red instead of green.
At $DAYJOB, we handle this by using two separate build jobs: a commit validation job, which posts +1/-1 on Gerrit changes and whose build stability we do not care about, and a build stability/regression job, which builds changes that have already been merged and whose stability we do care about. We ignore the status color for the commit validation job.
I'm not aware of any way to get the Jenkins plugin to give a -1/+1 vote while always showing a status of green; it uses that status when determining what score to give.
