How to skip SCM trigger on certain message - jenkins

I am using a declarative Jenkinsfile for a multi-branch pipeline, as shown below. SCM polling is set to every 5 minutes.
pipeline {
    agent none
    stages {
        stage('Build Jar') {
            agent {
                docker {
                    image 'maven:3.6.0-jdk-11'
                    args '-v $HOME/.m2:/root/.m2'
                }
            }
            steps {
                sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myname/myimage")
                }
            }
        }
        //other stages here
    }
}
Problem:
The Maven release plugin commits changes back to the repo, which triggers another build, so builds are triggered indefinitely. I came across this SCM Skip plugin:
scmSkip(deleteBuild: true, skipPattern:'.*\\[skip ci\\].*')
Unfortunately, it needs an agent to run. I also tried using agent any, with no luck:
pipeline {
    agent any
    stages {
        stage('SCM Check') {
            steps {
                scmSkip(deleteBuild: true, skipPattern:'.*\\[skip ci\\].*')
            }
        }
        stage('Build Jar') {
            steps {
                sh 'mvn clean package release:clean release:prepare release:perform -Darguments="-Dmaven.deploy.skip=true" -DscmCommentPrefix="[skip ci]"'
            }
        }
        stage('Build Image') {
            steps {
                script {
                    app = docker.build("myname/myimage")
                }
            }
        }
        //other stages here
    }
}
How do you skip a build for certain commit messages?

I ended up going with the plugin below, which excludes builds triggered by a certain committer. It works great.
https://github.com/jenkinsci/ignore-committer-strategy-plugin
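If you would rather keep the skip logic inside the Jenkinsfile itself, declarative pipeline also has a built-in changelog condition for when that matches commit messages. A minimal sketch (note that changelog only matches when the build actually has a changeset, so the first build of a branch will not match):

```groovy
pipeline {
    agent none
    stages {
        stage('Build Jar') {
            // Skip this stage when any commit message in the changeset
            // contains [skip ci] (e.g. the maven-release-plugin commits).
            when {
                not { changelog '.*\\[skip ci\\].*' }
            }
            agent {
                docker {
                    image 'maven:3.6.0-jdk-11'
                }
            }
            steps {
                sh 'mvn clean package'
            }
        }
    }
}
```

The guard has to be repeated on every stage you want skipped, since declarative when applies per stage.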


Executing Jenkins Pipeline on a single agent with docker

What I'm trying to achieve:
I'm trying to execute a pipeline script where SCM (AccuRev) is checked out on 'any' agent and the subsequent stages then run on that same agent, using its local workspace. The build stage in particular expects the checked-out code to be available in the workspace that is mapped into the container.
The problem:
When more than one agent is added to the Jenkins configuration, the SCM step checks out the code on one agent and the build stage then starts its container on a different agent, which fails because the code was checked out elsewhere.
What works:
Jenkins configured with a single agent/node
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                }
            }
            steps {
                sh '''#!/bin/bash
                make -j16
                '''
            }
        }
    }
}
What I have tried, but doesn't work:
Jenkins configured with 2 agent(s)/node(s)
pipeline {
    agent {
        docker {
            image 'ubuntu'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            steps {
                sh '''#!/bin/bash
                make -j16
                '''
            }
        }
    }
}
The above doesn't work because it is expecting AccuRev to be installed in the container. I could go this route, but it is not really scalable and will cause issues on containers that are based on an older OS. There are also permission issues within the container.
I also tried adding 'reuseNode true' to the docker agent, as in the below:
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                make -j16
                '''
            }
        }
    }
}
I have read a little about the automatic 'checkout scm' behaviour, as in the following, but it seems odd because there is no place to define the target stream/branch to check out. That is why I'm declaring a specific stage to handle the SCM checkout. It is possible this would handle the checkout without needing to specify the agent, but I don't see how to do it.
pipeline {
    agent any
    stages {
        stage ('Build') {
            steps {
                sh 'cat Jenkinsfile'
            }
        }
    }
}
Edit: adding a solution that seems to work, but it needs more testing before I can confirm it.
The following seems to do what I want, executing the checkout stage on 'any' agent and then reusing the same agent to run the build stage in a container.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                make -j16
                '''
            }
        }
    }
}
The below appears to have given me the functionality that I needed. The pipeline starts on "any" agent allowing the host level to handle the Checkout stage, and the "reuseNode" informs the pipeline to start the container on the same node, where the workspace is located.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout accurev(depot: 'MyDepot', serverName: 'AccuRev', stream: 'SomeStream', wspaceORreftree: 'none')
            }
        }
        stage('Compile') {
            agent {
                docker {
                    image 'ubuntu'
                    reuseNode true
                }
            }
            steps {
                sh '''#!/bin/bash
                make -j16
                '''
            }
        }
    }
}

Need to stop jenkins pipeline multiple times

I am running our Maven project on a Jenkins server with multiple stages inside the pipeline.
Whenever I decide that a branch's test run does not need to continue and click abort in the Jenkins UI, I have to repeat this many times before the pipeline really stops.
I guess our Jenkinsfile does not really pick up that the job was aborted, so I have to abort every stage to reach the end.
Is there a way to help Jenkins get out of the pipeline? For example, a variable I can check?
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        if (!currentBuild.isAborted) {
            stage('Unit Tests') {
                steps {
                    echo 'Unit Testing'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Deploy') {
                steps {
                    echo 'Deploying'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Backend Integration Tests') {
                steps {
                    echo 'Backend Int Tests'
                }
            }
        }
        if (!currentBuild.isAborted) {
            stage('Frontend Integration Tests') {
                steps {
                    echo 'Deploying....'
                }
            }
        }
        // done
    }
}
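For what it's worth, declarative syntax does not allow if statements directly between stages like this, and currentBuild has no isAborted property; the closest declarative equivalent is a per-stage when guard. A minimal sketch, assuming an aborted or failed build sets currentBuild.result:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Unit Tests') {
            // Only run while the build has not been marked aborted/failed.
            when {
                expression { currentBuild.result == null || currentBuild.result == 'SUCCESS' }
            }
            steps {
                echo 'Unit Testing'
            }
        }
    }
}
```

The same when block would need to be repeated on each subsequent stage.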

Jenkins Bitbucket Notifier does not notify when using pipeline

I have this JenkinsFile
pipeline {
    agent {
        node {
            label 'SERVER'
        }
    }
    stages {
        stage('Notificando Inicio do Job') {
            steps {
                bitbucketStatusNotify buildState: 'INPROGRESS'
            }
        }
        stage('Restore') {
            steps {
                powershell(script: 'dotnet restore', returnStatus: false)
            }
        }
        stage('Build') {
            steps {
                powershell(script: 'dotnet build ./path', returnStatus: false)
            }
        }
        stage('Test') {
            steps {
                powershell(script: 'dotnet test ./path', returnStatus: false)
            }
        }
    }
    post {
        always {
            echo "Finalizando Build..."
        }
        success {
            bitbucketStatusNotify buildState: 'SUCCESS'
        }
        failure {
            bitbucketStatusNotify buildState: 'FAILED'
        }
    }
}
I want to notify Bitbucket with the build status.
This Jenkinsfile works perfectly when run from a multi-branch pipeline, but when I use it in a simple pipeline it does not work.
Looking at the Jenkins logs, the only related entry is...
But according to the plugin docs, these parameters are not mandatory.
My OAuth client is correctly configured on Bitbucket, and the credentials are set up in Jenkins.
What am I doing wrong? How can I make it work?
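One thing worth trying: outside a multi-branch job the plugin may not be able to infer the repository and commit from the build, so they can be passed explicitly. A sketch, assuming the optional repoSlug/commitId parameters of the Bitbucket Build Status Notifier plugin (check your plugin version's documentation; the slug below is a hypothetical placeholder):

```groovy
// Hypothetical values: replace 'my-repo' with your repository slug.
bitbucketStatusNotify(
    buildState: 'INPROGRESS',
    repoSlug: 'my-repo',          // assumed parameter: repository slug
    commitId: env.GIT_COMMIT      // assumed parameter: commit to annotate
)
```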

Creating Jenkins Pipeline inside Job DSL script

I can create pipelines by putting the following code into a "Jenkinsfile" in my repository (called repo1) and creating a new item through the Jenkins GUI to poll the repository.
pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                    archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
                }
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo \'uploading artifacts to some repositories\''
            }
        }
    }
}
But I have a case where I am not allowed to create new items through the Jenkins GUI; instead there is a pre-defined job that reads Job DSL files from a repository I provide. So I need to create the same pipeline through Job DSL, but I cannot find the corresponding syntax for everything; for instance, I couldn't find an 'agent' DSL command.
Here is the Job DSL code I was trying to change:
pipelineJob('the-same-pipeline') {
    definition {
        cps {
            sandbox()
            script("""
                node {
                    stage('prepare') {
                        steps {
                            sh '''echo 'hello''''
                        }
                    }
                }
            """.stripIndent())
        }
    }
}
Is it really possible to reproduce the exact pipeline using Job DSL?
I found a way to create the pipeline item through Job DSL. The following Job DSL creates another item which is just a pipeline:
pipelineJob('my-actual-pipeline') {
    definition {
        cpsScmFlowDefinition {
            scm {
                gitSCM {
                    userRemoteConfigs {
                        userRemoteConfig {
                            credentialsId('')
                            name('')
                            refspec('')
                            url('https://github.com/muatik/jenkins-as-code-example')
                        }
                    }
                    branches {
                        branchSpec {
                            name('*/master')
                        }
                    }
                    browser {
                        gitWeb {
                            repoUrl('')
                        }
                    }
                    gitTool('')
                    doGenerateSubmoduleConfigurations(false)
                }
            }
            scriptPath('Jenkinsfile')
            lightweight(true)
        }
    }
}
You can find the Jenkinsfile and my test repo here: https://github.com/muatik/jenkins-as-code-example
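As a side note, the declarative syntax (including agent) is not part of the Job DSL vocabulary at all; it lives inside the pipeline script that Job DSL merely stores. So a declarative pipeline can also be embedded verbatim in a cps block. A minimal sketch:

```groovy
pipelineJob('declarative-inline-pipeline') {
    definition {
        cps {
            sandbox()
            // The whole declarative pipeline goes inside the script string;
            // Job DSL does not need (or have) an 'agent' command of its own.
            script('''
                pipeline {
                    agent {
                        docker {
                            image 'maven:3-alpine'
                            args '-v /root/.m2:/root/.m2'
                        }
                    }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'mvn -B -DskipTests clean package'
                            }
                        }
                    }
                }
            '''.stripIndent())
        }
    }
}
```

The cpsScmFlowDefinition variant above is usually preferable, since it keeps the Jenkinsfile in the repository rather than inlined in the seed job.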

Jenkins Multibranch job with declarative pipeline cloning repo for every stage

Trying to create a workflow in Jenkins using Declarative Pipeline to do something like this:
1. Checkout the code on 'master'
2. Build the solution on 'master' (I know this is not a secure way to do it, but Jenkins is on the intranet, so it should be fine for us)
3. Stash artifacts (.dll, .exe, .pdb, etc.) => 1st stage
4. Unstash artifacts on nodes depending on what is needed (unit tests on one slave, integration tests on another and Selenium tests on a third) => 2nd stage
5. Run tests depending on the slave => 3rd stage, running in parallel
The problem that I'm facing is that the git checkout (GitSCM) is executed for every stage.
My pipeline looks like this:
pipeline {
    agent {
        label {
            label "master"
            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
        }
    }
    options {
        timestamps()
    }
    stages {
        stage("Build") {
            agent {
                label {
                    label "master"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                /*
                steps to build the solution here
                */
                //Sleep because stashing fails otherwise
                script {
                    sleep(1)
                }
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    stash name: 'unit-tests'
                }
                dir("${env.WORKSPACE}\\WebUnitTests\\bin\\x64\\Release") {
                    stash name: 'web-unit-tests'
                }
            }
        }
        stage('Export artefacts') {
            agent {
                label {
                    label "UnitTest"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                echo "Copying dlls from master to ${env.NODE_NAME}"
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    unstash 'unit-tests'
                }
            }
        }
        stage('Run tests') {
            parallel {
                stage("Run tests #1") {
                    agent {
                        label {
                            label "UnitTest"
                            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                        }
                    }
                    steps {
                        /*
                        run tests here
                        */
                    }
                    post {
                        //post results here
                    }
                }
                //other parallel stages
            }
        }
    }
}
So, as mentioned earlier, the git checkout (GitSCM) is part of, and performed for, every stage:
- Build stage
- Export stage
A couple simple changes should solve this. You need to tell the pipeline script not to checkout by default every time a node is allocated. Then you need to tell it to do the checkout where you need it:
pipeline {
    agent {
        label {
            label "master"
            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
        }
    }
    options {
        timestamps()
        skipDefaultCheckout() // Don't checkout automatically
    }
    stages {
        stage("Build") {
            agent {
                label {
                    label "master"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                checkout scm //this will checkout the appropriate commit in this stage
                /*
                steps to build the solution here
                */
                //Sleep because stashing fails otherwise
                script {
                    sleep(1)
                }
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    stash name: 'unit-tests'
                }
                dir("${env.WORKSPACE}\\WebUnitTests\\bin\\x64\\Release") {
                    stash name: 'web-unit-tests'
                }
            }
        }
        stage('Export artefacts') {
            agent {
                label {
                    label "UnitTest"
                    customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                }
            }
            steps {
                echo "Copying dlls from master to ${env.NODE_NAME}"
                dir("${env.WORKSPACE}\\UnitTests\\bin\\Release") {
                    unstash 'unit-tests'
                }
            }
        }
        stage('Run tests') {
            parallel {
                stage("Run tests #1") {
                    agent {
                        label {
                            label "UnitTest"
                            customWorkspace "C:\\Jenkins\\workspace\\CustomWorkspace"
                        }
                    }
                    steps {
                        /*
                        run tests here
                        */
                    }
                    post {
                        //post results here
                    }
                }
                //other parallel stages
            }
        }
    }
}
I have added 2 lines there. One in the options section (skipDefaultCheckout()), and a checkout scm in the first stage.
