How to prevent triggering of a job by webhook - Jenkins

I have a Git monorepo with multiple projects inside, so I have the following structure of Jenkinsfiles in the repo:
|- Jenkinsfile (root one)
|- projectA
|  |- Jenkinsfile (for projectA)
|  |- ... (project files)
|- projectB
|  |- Jenkinsfile (for projectB)
|  |- ... (project files)
In the root Jenkinsfile I have logic that checks which files a PR changed and then triggers the project pipelines whose files were affected.
And I have a webhook.
My problem is that all three pipelines are triggered by the webhook. So my current state is:
I change something in projectA and create a PR.
The webhook triggers all pipelines: root, projectA and projectB.
The root pipeline notices that changes were made in projectA and triggers the projectA pipeline.
So among the runs I have:
one run of root pipeline triggered by webhook
one run of projectA pipeline triggered by webhook
one run of projectB pipeline triggered by webhook
one run of projectA pipeline triggered by upstream
And I want to have:
one run of root pipeline triggered by webhook
one run of projectA pipeline triggered by upstream
I cannot change the webhook's nature because it's a monorepo, so the webhook is sent every time anyone changes anything in any project. So I need to prevent the projectA and projectB pipelines from reacting to the incoming webhook.
Does anyone know how to do it?

I found a solution:
multibranchPipelineJob(...) {
    branchSources {
        branchSource {
            source {
                ...
            }
            strategy {
                allBranchesSame {
                    props {
                        suppressAutomaticTriggering()
                    }
                }
            }
        }
    }
    ...
}
This setting prevents webhook triggers from starting the job.
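If you configure the jobs from a Jenkinsfile rather than through Job DSL, a similar effect can be had with the declarative overrideIndexTriggers option (a sketch; it disables builds caused by branch indexing, which is what a multibranch webhook kicks off, while upstream and manual triggers keep working):

// projectA/Jenkinsfile - sketch: ignore branch-indexing (webhook-driven) triggers
pipeline {
    agent any
    options {
        // do not start this job when branch indexing detects changes
        overrideIndexTriggers(false)
    }
    stages {
        stage('Build') {
            steps {
                echo 'Runs only when triggered upstream or manually'
            }
        }
    }
}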

Related

In Jenkins, Is there a way to execute a pipeline when the source branch of a PR is updated?

This is my Jenkinsfile:
pipeline {
    agent { docker { image 'node:10.16' } }
    stages {
        stage('PR To Dev') {
            when {
                changeRequest target: 'dev'
            }
            steps {
                sh 'npm install'
                sh 'npm run lint'
            }
        }
    }
}
I'm trying to run linting upon every PR made (on GitHub). This pipeline works and runs as intended when I make the initial PR to the dev branch. However, subsequent commits to the open PR are ignored by Jenkins, which defeats the usefulness of the initial lint check. How can I configure Jenkins to lint upon any updates to a branch that has an open PR to the dev (or any arbitrary) branch?
Achieving this goal is possible. It depends greatly on the plugin you are using to integrate GitHub with Jenkins, and on how you configure GitHub to use Jenkins' webhooks.
On the GitHub end, you can configure the webhook to trigger on different events. The default is Push events (to any branch, whether on a PR or not); the alternatives are All events (which can produce many false positives) and Select individual events (find the right balance between event coverage and false positives).
On the Jenkins end, some plugins offer more customization options to discard unnecessary triggers, for example to avoid triggering a project on PR updates that only touch the title or description (instead of code), etc.
Personally, I use the Generic Webhook Trigger plugin on the Jenkins end and then analyze the JSON payload of the webhook to determine whether to run a job or not.
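As a sketch of that approach (assuming the Generic Webhook Trigger plugin is installed; the token, JSONPath expressions and filter values below are made up for a GitHub pull_request payload), the job extracts fields from the webhook JSON and a regexp filter decides whether to run:

pipeline {
    agent any
    triggers {
        GenericTrigger(
            // pull values out of the webhook's JSON payload via JSONPath
            genericVariables: [
                [key: 'action', value: '$.action'],
                [key: 'targetBranch', value: '$.pull_request.base.ref']
            ],
            token: 'my-pr-lint-token', // hypothetical per-job token
            printContributedVariables: true,
            // only build when a PR targeting dev is opened or gets new commits
            regexpFilterText: '$action/$targetBranch',
            regexpFilterExpression: '^(opened|synchronize)/dev$'
        )
    }
    stages {
        stage('Lint') {
            steps {
                sh 'npm install'
                sh 'npm run lint'
            }
        }
    }
}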

What does the pollSCM trigger refer to in this Jenkinsfile?

Consider the following setup using Jenkins 2.176.1:
A new pipeline project named Foobar
Poll SCM as (only) build trigger, with: H/5 * * * * ... under the assumption that this refers to the SCM configured in the next step
Pipeline script from SCM with SCM Git and a working Git repository URL
Uncheck Lightweight checkout because of JENKINS-42971 and JENKINS-48431 (I am using build variables in the real project and Jenkinsfile; also this may affect how pollSCM works, so I include this step here)
Said repository contains a simple Jenkinsfile
The Jenkinsfile looks approximately like this:
#!groovy
pipeline {
    agent any
    triggers { pollSCM 'H/5 * * * *' }
    stages {
        stage('Source checkout') {
            steps {
                checkout(
                    [
                        $class: 'GitSCM',
                        branches: [],
                        browser: [],
                        doGenerateSubmoduleConfigurations: false,
                        extensions: [],
                        submoduleCfg: [],
                        userRemoteConfigs: [
                            [
                                url: 'git://server/project.git'
                            ]
                        ]
                    ]
                )
                stash 'source'
            }
        }
        stage('OS-specific binaries') {
            parallel {
                stage('Linux') {
                    agent { label 'gcc && linux' }
                    steps {
                        unstash 'source'
                        echo 'Pretending to do a build here'
                    }
                }
                stage('Windows') {
                    agent { label 'windows' }
                    steps {
                        unstash 'source'
                        echo 'Pretending to do a build here'
                    }
                }
            }
        }
    }
}
My understanding so far was that:
a change to the Jenkinsfile (not the whole repo) triggers the pipeline on any registered agent (or as configured in the pipeline project).
said agent (which is random) uses the pollSCM trigger in the Jenkinsfile to trigger the pipeline stages.
But where does the pollSCM trigger poll (what SCM repo)? And if it's a random agent then how can it reasonably detect changes across poll runs?
then the stages are being executed on the agents as allocated ...
Now I am confused about what refers to what. So here are my questions (all interrelated, which is why I keep them together in one question):
Does the pipeline project poll the SCM just for the Jenkinsfile or for any changes? The repository in my case is the same (for the Jenkinsfile and the source files to build binaries from).
If the (project-level) polling triggers on any change rather than only on changes to the Jenkinsfile, does the pollSCM trigger in the Jenkinsfile somehow automagically refer to the checkout step?
Then ... what would happen if I had multiple checkout steps with differing settings?
What determines what repository (and what contents inside of it) gets polled?
... or is this akin to the checkout scm shorthand, and pollSCM actually refers to the SCM configured in the pipeline project, so I can shorten the checkout() to checkout scm in the steps?
Unfortunately the user handbook didn't answer any of these questions, and pollSCM has a total of four occurrences on a single page within the entire handbook.
I'll take a crack at this one:
The pipeline project polls the SCM just for the Jenkinsfile or for any changes? The repository in my case is the same (for Jenkinsfile and source files to build binaries from).
The pipeline project will poll the repo for ANY file changes, not just the Jenkinsfile. A Jenkinsfile in the source repo is common practice.
If the (project-level) polling triggers at any change rather than changes to the Jenkinsfile, does the pollSCM trigger in the Jenkinsfile somehow automagically refer to the checkout step?
Your pipeline will be executed when a change to the repo is seen, and the steps are run in the order that they appear in your Jenkinsfile.
Then ... what would happen, would I have multiple checkout steps with differing settings?
If you define multiple repos with the checkout step (using multiple checkout SCM calls), then the main pipeline project repo is polled for any changes, and the repos you define in the pipeline are checked out regardless of whether they changed or not.
What determines what repository (and what contents inside of that) gets polled? ... or is this akin to the checkout scm shorthand and pollSCM actually refers to the SCM configured in the pipeline project and so I can shorten the checkout() to checkout scm in the steps?
pollSCM refers to the pipeline project's repo. The entire repo is cloned unless the project is otherwise configured (shallow clone, lightweight checkout, etc.).
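So, under that reading (a sketch, assuming the job is configured with 'Pipeline script from SCM' as in the question), the explicit checkout() can indeed be shortened to checkout scm, which checks out the same repository and revision the Jenkinsfile itself came from:

pipeline {
    agent any
    // polls the repository configured in the pipeline project (where this Jenkinsfile lives)
    triggers { pollSCM('H/5 * * * *') }
    stages {
        stage('Source checkout') {
            steps {
                // shorthand for the SCM the project is configured with
                checkout scm
                stash 'source'
            }
        }
    }
}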
The trigger defined as pollSCM polls the source control management (SCM) at the repository and branch in which this Jenkinsfile itself (and the other code) is located.
For pipelines which are integrated with a source such as GitHub or Bitbucket, triggers may not be necessary, as webhook-based integration will likely already be present. The triggers currently available are cron, pollSCM and upstream.
It also works for a multibranch pipeline as a trigger to execute the pipeline.
When Jenkins polls the SCM (exactly this repository and branch) and detects a change (i.e. a new commit), the pipeline defined in the Jenkinsfile is executed.
Usually the SCM checkout step that follows will then be executed, so that the specified project(s) can be built, tested and deployed.
See also:
SCM Poll in jenkins multibranch pipeline
ShellHacks (2020): Jenkins: Scan Multibranch Pipeline Without Build

Jenkins multibranch pipeline only for subfolder

I have a Git monorepo with different apps. Currently I have a single Jenkinsfile in the root folder that contains the pipeline for all apps. It is very time consuming to execute the full pipeline for all apps when a commit changed only one app.
We use a GitFlow-like approach to branching, so Multibranch Pipeline jobs in Jenkins are a perfect fit for our project.
I'm looking for a way to have several jobs in Jenkins, each one triggered only when the code of the appropriate application changes.
The perfect solution for me looks like this:
I have several Multibranch Pipeline jobs in Jenkins. Each one looks for changes only in a given directory and its subdirectories. Each one uses its own Jenkinsfile. The jobs poll Git every X minutes; if there are changes to the appropriate directories in existing branches, a build is initiated, and if there are new branches with changes to the appropriate directories, a build is initiated as well.
What stops me from this implementation:
I'm missing a way to tell the Multibranch Pipeline which folders' commits to ignore during scanning. The "Additional behaviours" section for a Multibranch Pipeline doesn't have the "Polling ignores commits in certain paths" option that Pipeline or Freestyle jobs have. But I want to use a Multibranch Pipeline.
The solution described here doesn't work for me, because if there is a new branch with changes only to "project1", then whenever the Multibranch Pipeline for "project2" is triggered it will discover this new branch anyway and build it. This means every one of my Multibranch Pipelines will execute at least once for each new branch, whether the appropriate code changed or not.
I'd appreciate any help or suggestions on how to implement several Multibranch Pipelines watching the same Git repository but triggered only when the appropriate pieces of code change.
This can be accomplished with the Multibranch build strategy extension plugin. With this plugin you can define a rule so that a build is only initiated when the changes belong to a sub-directory:
Install the plugin.
In the Multibranch Pipeline configuration, add a Build strategy.
Select the Build included regions strategy.
Put a sub-folder in the field, such as subfolder/**
This way the changes will still be discovered, but they won't initiate a build unless they belong to a certain set of files or folders.
This is the best approach I'm aware of so far, but I think the ideal solution would be one where the changes don't even get discovered.
Edit: Gerrit Code Review plugin configuration
In case you're using the Gerrit Code Review plugin, you can also prevent new changes from being discovered by using a custom query:
I solved this by creating a project that builds other projects depending on the files changed. For example, from your repo root:
/Jenkinsfile
#!/usr/bin/env groovy
pipeline {
    agent any
    options {
        timestamps()
    }
    triggers {
        bitbucketPush()
    }
    stages {
        stage('Build project A') {
            when {
                changeset "project-a/**"
            }
            steps {
                build 'project-a'
            }
        }
        stage('Build project B') {
            when {
                changeset "project-b/**"
            }
            steps {
                build 'project-b'
            }
        }
    }
}
You would then have other Pipeline projects with their own Jenkinsfile (i.e., project-a/Jenkinsfile).
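For illustration, a minimal child pipeline could look like this (the paths and the build command are placeholders; adapt them to your project):

// project-a/Jenkinsfile - hypothetical child pipeline triggered by the root job
pipeline {
    agent any
    stages {
        stage('Build project A') {
            steps {
                dir('project-a') {
                    sh './build.sh' // placeholder build command
                }
            }
        }
    }
}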
I know that this post is quite old, but I solved this problem by changing the "include branches" parameter for SVN repositories (this can possibly also be done using the "Filter by name (with wildcards)" property for Git repos). Instead of supplying only the actual branch name, I also included the subfolder. So instead of supplying only "trunk", I used "trunk/subfolder". This limits scanning to only that specific directory. Note that I have not yet fully tested this solution.

Jenkins: calling a job from a pipeline

I am using a plugin in Jenkins called Deploy to container. I created a job called 'Deploy' that uses this feature. How can I call it from a pipeline that I created in another job?
I am using this code in the pipeline, but it doesn't trigger the 'Deploy' job or its Deploy to container configuration.
stage('Tomcat') {
    withMaven(maven: 'M2') {
        build job: 'Deploy'
    }
}
First of all, wrapping the build call inside withMaven is useless, as this will not affect the triggered Deploy job.
Assuming that you get an error message that a job called Deploy is not found: the job is looked up much like a path in a directory tree:
build 'Deploy' triggers a job "next to" the current one.
build '/Deploy' triggers a job at the top level, no matter how deep inside folders (e.g. multibranch projects or organization folder projects) the current job is located.
build '../Deploy' triggers a job one level above; in the case of a multibranch project this is what you need if you have such a non-folder-based job and trigger it from a multibranch project (you have to go one level up from the job inside the multibranch project).
If this does not help, edit your post and add the URLs of the Deploy job and the job that should trigger it.
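As a small sketch (the parameter name and value are made up for illustration), combining such a relative path with parameters looks like this:

// scripted snippet inside a multibranch project's Jenkinsfile
stage('Tomcat') {
    // go one folder level up to reach the non-multibranch 'Deploy' job
    build job: '../Deploy',
          parameters: [string(name: 'WAR_PATH', value: 'target/app.war')] // hypothetical parameter
}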

Jenkins pipeline: share information between jobs

We are trying to define a set of jobs on Jenkins that will do really specific actions. JobA1 will build a Maven project, while JobA2 will build .NET code, JobB will upload it to Artifactory, JobC will download it from Artifactory and JobD will deploy it.
Every job will have a set of parameters so we can reuse the same job for any product (around 100).
The idea behind this is to create black boxes: I call a job with some input and always get some output; whatever happens in between is something I don't care about. On the other side, this allows us to improve each job separately, adding the required complexity, and all products instantly benefit.
We want to use Jenkins Pipeline to orchestrate the execution of actions. We are going to have a pipeline per environment/usage.
PipelineA will call JobA1, then JobB to upload to Artifactory.
PipelineB will download the package via JobC and then deploy to staging.
PipelineC will download the package via JobC and then deploy to production, based on some internal validations.
I have tried to get some variables from JobA1 (basic POM values such as the artifactId or version) injected into JobB, but the information does not seem to be transferred.
The same happens when downloading files: I call JobC, but the file ends up in that job's workspace, unavailable to any other job, and I'm afraid that the "External Workspace Manager" plugin adds too much complexity.
Is there any way, other than sharing the workspace, to achieve my purpose? I understand that sharing the workspace would make it impossible to run two pipelines at the same time.
Am I following the right path or am I doing something weird?
There are two ways to share info between jobs:
You can use stash/unstash to share files/data between multiple jobs in a single pipeline.
stage('HostJob') {
    build 'HostJob'
    dir('/var/lib/jenkins/jobs/Hostjob/workspace/') {
        sh 'pwd'
        stash includes: '**/build/fiblib-test', name: 'app'
    }
}
stage('TargetJob') {
    dir('/var/lib/jenkins/jobs/TargetJob/workspace/') {
        unstash 'app'
        build 'Targetjob'
    }
}
In this manner, you can always copy files/executables/data from one job to the other. This feature of the Pipeline plugin is better than archiving artifacts, as it only saves the data locally: the stash is deleted after the build, which helps with data management.
You can also use the Copy Artifact Plugin.
There are two sides to copying an artifact: archiving it (with permissions) in the host project, and copying it into the target project:
a) Archive the artifacts in the host project and assign permissions.
b) After building a new job, select 'Permission to copy artifact' → Projects to allow copy artifacts: *
c) Create a Post-build Action → Archive the artifacts → Files to archive: "select your files"
d) Copy the required artifacts from the host into the target project:
Create a Build action → Copy artifacts from another project → enter the project name (the host project), which build (e.g. 'Latest successful build'), artifacts to copy (the host project's folder), and the target directory (a local folder location).
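If the target project is itself a pipeline, the same plugin exposes a copyArtifacts step; a minimal sketch (the job name, filter and target directory are placeholders):

// in the target pipeline, after the host job has archived its artifacts
copyArtifacts(
    projectName: 'JobB',        // hypothetical host job name
    selector: lastSuccessful(), // which build to copy from
    filter: '**/*.war',         // placeholder artifact pattern
    target: 'incoming/'         // local directory to copy into
)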
For the first part of your question (passing variables between jobs), use something like the following as a post-build section:
post {
    always {
        build job: '/Folder/JobB', parameters: [string(name: 'BRANCH', value: "${params.BRANCH}")], propagate: false
    }
}
The above post-build action runs for all build results. Similarly, the post-build action could be triggered only on a particular build status. I have used the BRANCH parameter from the current build (JobA) as a parameter to be consumed by JobB (provide the exact location of the job). Please note that a matching parameter should be defined in JobB.
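For completeness, a minimal sketch of the matching parameter declaration on the JobB side (names are illustrative; the parameter name must match what JobA passes):

// JobB's Jenkinsfile - declares the BRANCH parameter that JobA passes in
pipeline {
    agent any
    parameters {
        string(name: 'BRANCH', defaultValue: 'main', description: 'Branch passed in by JobA')
    }
    stages {
        stage('Build') {
            steps {
                echo "Building branch: ${params.BRANCH}"
            }
        }
    }
}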
Moreover, for sharing the workspace you can refer to this link and share the workspace between the jobs.
You could use the Pipeline Shared Groovy Libraries plugin. Have a look at its documentation to implement libraries that multiple pipelines share and to define shared global variables.
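As a sketch of that idea (the library name 'my-shared-lib' and the deployInfo variable are made up; this assumes a shared library configured in Jenkins with the standard vars/ layout), a global variable can centralize conventions that several pipelines need:

// vars/deployInfo.groovy in the hypothetical shared library
// Encodes the artifact path convention in one shared place.
def artifactPath(String product, String version) {
    return "releases/${product}/${version}/${product}-${version}.zip"
}

// Usage in any Jenkinsfile that loads the library:
@Library('my-shared-lib') _
pipeline {
    agent any
    stages {
        stage('Resolve artifact') {
            steps {
                echo deployInfo.artifactPath('productX', '1.2.3')
            }
        }
    }
}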
