Why is a Pipeline job triggered although there is no change in SCM? - Jenkins

I am setting up a pipeline job for my project. What I would like to do is set up the job to poll the SCM every two minutes to check whether there are changes; if there are, build the job using the pipeline script.
Here is my pipeline job configuration, and the pipeline script:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    tools {
        maven 'Maven 3.6'
    }
    stages {
        stage('Git Checkout') {
            steps {
                sh 'echo "Git Checkout"'
                checkout([$class: 'GitSCM', branches: [[name: 'master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'my-credential', url: 'my-git-url']]])
            }
        }
        stage('Maven Build') {
            steps {
                sh 'mvn clean deploy -s $WORKSPACE/custom-config/settings.xml -Dsettings.security=$WORKSPACE/custom-config/settings-security.xml'
            }
        }
        stage('Maven Build and SonarQube Analysis') {
            steps {
                withSonarQubeEnv('Sonar Qube') {
                    sh 'mvn sonar:sonar -Dsonar.projectKey=com.product:backend'
                }
            }
        }
        stage('SonarQube Quality Gate') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
    post {
        failure {
            mail bcc: '', body: "See ${env.BUILD_URL}", cc: '', charset: 'UTF-8', from: '', mimeType: 'text/html', replyTo: '', subject: "Build failed in Jenkins: ${env.JOB_NAME} #${env.BUILD_NUMBER}", to: "my.email@gmail.com"
        }
        unstable {
            mail bcc: '', body: "See ${env.BUILD_URL}", cc: '', charset: 'UTF-8', from: '', mimeType: 'text/html', replyTo: '', subject: "Build unstable in Jenkins: ${env.JOB_NAME} #${env.BUILD_NUMBER}", to: "my.email@gmail.com"
        }
    }
}
Everything is working fine, except that the job is triggered every two minutes even though nothing has changed on the master branch I configured in the script. Can anyone help me prevent this? I would like the job to be triggered only when there is a change in SCM. Thank you very much!
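One thing worth checking (a hedged sketch, not a confirmed diagnosis): with pollSCM, Jenkins polls every SCM that the pipeline's checkout steps declare and compares the remote ref against the last build. Two common adjustments are using the conventional '*/master' branch specifier, and excluding from polling any checkout that should not trigger builds via the documented poll: false / changelog: false arguments. The credential and URL placeholders below are the question's own:

```groovy
// Sketch: '*/master' is the conventional branch specifier form for polling;
// 'my-credential' and 'my-git-url' are the question's placeholders.
checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          userRemoteConfigs: [[credentialsId: 'my-credential',
                               url: 'my-git-url']]])

// A checkout that should never trigger a build (e.g. a tooling repo)
// can be excluded from polling and the changelog entirely:
checkout(changelog: false, poll: false,
         scm: [$class: 'GitSCM',
               branches: [[name: '*/master']],
               userRemoteConfigs: [[credentialsId: 'my-credential',
                                    url: 'my-git-url']]])
```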

Related

Jenkins Pipeline - Git checkout collision

We have a multi-stage pipeline in our CI, and some of the stages have their own nested stages that are parallelized and may run on the same or different agents (we request a certain agent label).
As with most CI pipelines, we build our artifacts and deploy and run our tests later.
Because the pipeline may take some time to complete, we had an issue where new commits merged to our master branch were picked up in the later stages, creating an incompatibility between the pre-packaged code and the newly checked-out code.
I'm currently using the skipDefaultCheckout directive and added my own function to check out the commit SHA-1 that is set in GIT_COMMIT in each of the parallel stages:
void gitCheckoutByCommitHash(credentialsId, gitCommit=GIT_COMMIT) {
    script {
        println("Explicitly checking out git commit: ${gitCommit}")
    }
    checkout changelog: false, poll: false,
        scm: [
            $class: 'GitSCM',
            branches: [[name: gitCommit]],
            doGenerateSubmoduleConfigurations: false,
            extensions: [
                [
                    $class: 'CloneOption',
                    noTags: true,
                    shallow: true
                ],
                [
                    $class: 'SubmoduleOption',
                    disableSubmodules: false,
                    parentCredentials: true,
                    recursiveSubmodules: true,
                    reference: '',
                    trackingSubmodules: false
                ],
            ],
            submoduleCfg: [],
            userRemoteConfigs: [[
                credentialsId: credentialsId,
                url: GIT_URL
            ]]
        ]
}
The problem I'm facing is that sometimes two or more of the parallel stages try to run on the same agent and perform the checkout, and I get an error that another process already holds .git/index.lock, so the locked-out stage fails.
Is there any way to work around that?
This is a sample pipeline:
pipeline {
    agent {
        label 'docker_v2'
    }
    options {
        timestamps()
        timeout(time: 1, unit: 'HOURS')
    }
    stages {
        stage('Prepare test environment') {
            options {
                skipDefaultCheckout()
            }
            steps {
                gitCheckoutByCommitHash('some-creds-id')
            }
        }
        stage('Parallel stuff') {
            parallel {
                stage('Checkout 1') {
                    agent {
                        label 'docker_v2'
                    }
                    options {
                        skipDefaultCheckout()
                    }
                    steps {
                        gitCheckoutByCommitHash('some-creds-id')
                    }
                }
                stage('Checkout 2') {
                    agent {
                        label 'docker_v2'
                    }
                    options {
                        skipDefaultCheckout()
                    }
                    steps {
                        gitCheckoutByCommitHash('some-creds-id')
                    }
                }
            }
        }
    }
}
The best way to solve this issue is to perform the checkout only once, and then use stash, with unstash in the later stages:
pipeline {
    agent {
        label 'docker_v2'
    }
    options {
        timestamps()
        timeout(time: 1, unit: 'HOURS')
        skipDefaultCheckout()
    }
    stages {
        stage('Prepare test environment') {
            steps {
                // you can use this:
                gitCheckoutByCommitHash('some-creds-id')
                // or the usual "checkout scm"
                stash name: 'sources', includes: '**/*', allowEmpty: true, useDefaultExcludes: false
            }
        }
        stage('Parallel stuff') {
            parallel {
                stage('Checkout 1') {
                    agent {
                        label 'docker_v2'
                    }
                    steps {
                        unstash 'sources'
                    }
                }
                stage('Checkout 2') {
                    agent {
                        label 'docker_v2'
                    }
                    steps {
                        unstash 'sources'
                    }
                }
            }
        }
    }
}
This would also speed up your pipeline.
To prevent collisions in the workspace, you can also use one of the following:
a custom workspace directive, so that different parallel stages use different workspaces; or
define your agents so that there is only one executor per agent; or
both of the above.
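A minimal sketch of the first option, reusing the docker_v2 label from the answer above; the customWorkspace path here is purely illustrative:

```groovy
stage('Checkout 1') {
    agent {
        node {
            label 'docker_v2'
            // Illustrative path: give each parallel stage its own workspace
            // so concurrent checkouts cannot contend for .git/index.lock.
            customWorkspace "ws/checkout-1"
        }
    }
    steps {
        unstash 'sources'
    }
}
```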

How to fail the Jenkins build if any test cases fail, using the Text Finder plugin

I have a stage in Jenkins as follows. How do I mark the build as failed or unstable if there is a test case failure? I generated the pipeline script for the Text Finder plugin, but it is not working: "findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true". I am not sure where to place the Text Finder regex.
pipeline {
    agent none
    tools {
        maven 'maven_3_6_0'
    }
    options {
        timestamps()
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    environment {
        JAVA_HOME = "/Users/jenkins/jdk-11.0.2.jdk/Contents/Home/"
        imageTag = ""
    }
    parameters {
        choice(name: 'buildEnv', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Image build')
        choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Deploy')
    }
    stages {
        stage("Tests") {
            agent { label "xxxx_Slave" }
            steps {
                checkout([$class: 'GitSCM', branches: [[name: 'yyyyyyyyyyz']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'zzzzzzzzzzz', url: 'abcdefgh.git']]])
                sh '''
                    cd dashboard
                    mvn -f pom.xml surefire-report:report -X -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
                '''
            }
        }
    }
}
All I did to make this work was add a post block below the steps block:
post {
    success {
        findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
    }
}
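Assembled into the Tests stage from the question, the result looks like the sketch below. Note that the regexp string must match the Maven console output exactly:

```groovy
stage("Tests") {
    agent { label "xxxx_Slave" }
    steps {
        sh '''
            cd dashboard
            mvn -f pom.xml surefire-report:report -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
        '''
    }
    post {
        success {
            // Scan the console log; mark the build UNSTABLE when Maven
            // prints "There are test failures."
            findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
        }
    }
}
```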

Best way to clone or pull GitLab code using Jenkins to avoid merge issues

What is the best way to clone or pull GitLab code using Jenkins? I have this pipeline; however, I am seeing merge issues pop up, and then it ignores other builds. What is the best approach? Below are my pipeline and errors:
pipeline {
    agent any
    environment {
        APPIUM_PORT_ONE = 4723
        APPIUM_PORT_TWO = 4724
    }
    tools { nodejs "node" }
    stages {
        stage('Checkout App 1') {
            steps {
                dir("/Users/Desktop/app1") {
                    sh 'git pull ###'
                }
                echo "Building.."
            }
        }
        stage('Checkout App 2') {
            steps {
                dir("/Users//Desktop/app2") {
                    echo "Building.."
                    sh 'git pull ###'
                }
            }
        }
        stage('Checkout Mirror') {
            steps {
                echo "Building.."
            }
        }
        stage('Checkout End to End Tests') {
            steps {
                dir("/Users/Desktop/qa-end-to-end/") {
                    sh 'git pull ###'
                }
            }
        }
        stage('Starting Appium Servers') {
            steps {
                parallel(
                    ServerOne: {
                        echo "Starting Appium Server 1"
                        dir("/Users/Desktop/qa-end-to-end/") {
                        }
                    },
                    ServerTwo: {
                        echo "Starting Appium Server 2"
                    })
            }
        }
        stage('Starting End to End Tests') {
            steps {
                echo "Starting End to End Tests"
                dir("/Users/Desktop/qa-end-to-end/") {
                    sh './tests.sh'
                    echo "Shutting Down Appium Servers"
                }
            }
        }
        stage('Publish Report') {
            steps {
                echo "Publishing Report"
            }
        }
    }
}
Should I clone from scratch instead of doing a pull? Any documentation would be helpful.
Unless the repos are large and time-consuming to clone from scratch, I would do that.
Then you are certain that you have clean, correct code to run with:
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    doGenerateSubmoduleConfigurations: false,
    extensions: [[$class: 'CleanCheckout']],
    submoduleCfg: [],
    userRemoteConfigs: [[credentialsId: 'GIT', url: 'git@git.com:repo.git']]])
You can either run this in your dir block or add the extension to check out into a subdirectory:
extensions: [[$class: 'RelativeTargetDirectory',
    relativeTargetDir: 'checkout-directory']]
Don't forget to delete the old checkouts if you are persisting workspaces across builds.
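Combining the two pieces from this answer, a clean checkout into a subdirectory could look like this sketch (credentials ID and URL are the answer's placeholders):

```groovy
// Sketch: wipe the workspace copy before checkout (CleanCheckout) and
// place the clone in a dedicated subdirectory (RelativeTargetDirectory).
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [
        [$class: 'CleanCheckout'],
        [$class: 'RelativeTargetDirectory', relativeTargetDir: 'checkout-directory']
    ],
    userRemoteConfigs: [[credentialsId: 'GIT', url: 'git@git.com:repo.git']]])
```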

Jenkins not sending email if the changeset is not empty and the status is not failure

I have something really weird happening.
I use a Jenkins scripted pipeline to send an email with the Email Extension plugin and the groovy-html.template template.
The email is properly sent if the changeset is empty or if the build result is FAILURE, but if the build result is SUCCESS or UNSTABLE and the changeset is not empty, I never get the email...
I looked through all the Jenkins logs and did not find any error that could explain this behavior.
The issue also happens with the Jelly HTML and Groovy text email templates.
Any idea why I'm getting this behavior?
Here is my code snippet:
emailext(
    subject: 'Deployment',
    body: '${SCRIPT, template="groovy-html.template"}',
    to: 'email@address.com')
And here is the complete pipeline.
Would you like to try a declarative pipeline? Change this section:
node('master') {
    checkout(scm: [$class: 'GitSCM',
        branches: [[name: "*/develop"]],
        extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'repo1']],
        userRemoteConfigs: [[credentialsId: 'bitbucket.jenkins',
            url: 'urlToRepo.git']]],
        changelog: true, poll: true)
    showChangeLogs()
    //currentBuild.result = 'FAILURE'
    emailext(
        subject: 'Deployment',
        body: '${SCRIPT, template="groovy-html.template"}',
        to: 'email@address.com')
}
to this one:
pipeline {
    agent any
    stages {
        stage('master') {
            steps {
                script {
                    checkout(scm: [$class: 'GitSCM',
                        branches: [[name: "*/develop"]],
                        extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'repo1']],
                        userRemoteConfigs: [[credentialsId: 'bitbucket.jenkins',
                            url: 'urlToRepo.git']]],
                        changelog: true, poll: true)
                    showChangeLogs()
                    //currentBuild.result = 'FAILURE'
                }
            }
        }
    }
    post {
        always {
            emailext(
                subject: 'Deployment',
                body: '${SCRIPT, template="groovy-html.template"}',
                to: 'email@address.com')
        }
    }
}

BlueOcean is not asking for some of my Jenkins multibranch parameters

I recently modified the Jenkinsfile of my branch (for now, I have only one branch with this Jenkinsfile).
When I try to launch the multibranch pipeline for this branch, a lot of parameters are requested, but not the new one I added.
If I go into Jenkins (not BlueOcean), I see them in the configuration, and if I start a build from there, I also see them.
Here is my Jenkinsfile:
pipeline {
    agent {
        node {
            label 'windows-node'
            customWorkspace "D:\\ws\\${env.BRANCH_NAME}"
        }
    }
    options {
        skipDefaultCheckout()
    }
    triggers {
        pollSCM 'H 23 * * *'
    }
    stages {
        stage('Initialization started') {
            steps {
                echo "Job parameters:\n\t- Build X86: ${params.buildX86}\n\t- Build X64: ${params.buildX64}\n\t- Commit Version changes: ${params.commitVersionChanges}.${env.BUILD_NUMBER}\n\t- Setup Version: ${params.version}\n\t- Setup Configuration: ${params.setupConfiguration}\nCurrent repository: ${workspace}"
            }
        }
        stage('Checkout') {
            steps {
                echo "Custom checkout: ${env.BRANCH_NAME}"
                checkout scm
            }
        }
        stage('ABC Solution Pre-build') {
            steps {
                changeAsmVer "${params.version}.${env.BUILD_NUMBER}"
                bat 'nuget.exe restore Solution\\ABC.sln'
                powershell 'ContinuousIntegration\\Scripts\\ChangeBindingVersion.ps1 "HDAPluginNet4" "Src\\Clients\\OpcServer\\Xms.OpcHda.Server\\HDANSrv.Net4.exe.config"'
            }
        }
        stage('Preparing SonarQube') {
            when {
                expression { params.runTests == true && env.BRANCH_NAME == 'develop' }
            }
            steps {
                withSonarQubeEnv('XYZ SonarQube') {
                    script {
                        def sqScannerMsBuildHome = tool 'SonarQube.Runner-3.0'
                    }
                    bat "${sqScannerMsBuildHome}\\SonarQube.Scanner.MSBuild.exe begin /k:ABC /n:ABC /v:${params.version}.${env.BUILD_NUMBER} /d:sonar.host.url=%SONAR_HOST_URL% /d:sonar.login=%SONAR_AUTH_TOKEN% /d:sonar.cs.nunit.reportsPaths=TestResult.xml /d:sonar.cs.dotcover.reportsPaths=dotcover.html"
                }
            }
        }
        stage('Build ABC Solution') {
            steps {
                bat "\"${tool 'MSBUILD15'}\" Solution\\ABC.sln /p:Configuration=${params.setupConfiguration} /p:Platform=\"Any CPU\" /t:Rebuild"
            }
        }
        stage('ABC Solution Pre-setup') {
            when {
                expression { params.buildX64 == true || params.buildX86 == true }
            }
            steps {
                bat "\"Src\\Obfuscation\\XmsApplicationsObfuscation\\Release\\obfuscationProcess.cmd\" \"${workspace}\" \"${workspace}\\output\\dotfuscator.zip\" \"XXXXXXXX\""
                bat "Doc\\BuildDocumentation.bat"
            }
        }
        stage('X64 Setup build') {
            when {
                expression { params.buildX64 == true }
            }
            steps {
                bat "\"${tool 'MSBUILD15'}\" Solution\\SetupWix.sln /p:Configuration=${params.setupConfiguration} /p:Platform=x64 /t:Rebuild /p:Version=\"${params.version}.${env.BUILD_NUMBER}\""
                bat "move SetupWix\\SetupWix\\bin\\Release\\en-us\\ABCSetup.msi SetupWix\\SetupWix\\bin\\Release\\en-us\\ABCSetup_64_bit.msi"
            }
        }
        stage('X86 Setup build') {
            when {
                expression { params.buildX86 == true }
            }
            steps {
                bat "\"${tool 'MSBUILD15'}\" Solution\\SetupWix.sln /p:Configuration=${params.setupConfiguration} /p:Platform=x86 /t:Rebuild /p:Version=\"${params.version}.${env.BUILD_NUMBER}\""
                bat "move SetupWix\\SetupWix\\bin\\Release\\en-us\\ABCSetup.msi SetupWix\\SetupWix\\bin\\Release\\en-us\\ABCSetup_32_bit.msi"
            }
        }
        stage('Post-setup') {
            when {
                expression { params.buildX64 == true || params.buildX86 == true }
            }
            steps {
                powershell 'ContinuousIntegration\\Scripts\\MoveSetups.ps1'
            }
        }
        stage('Commit version change') {
            when {
                expression { params.commitVersionChanges == true }
            }
            steps {
                bat 'git add "./*AssemblyInfo.*"'
                bat 'git commit -m "Assembly infos changed by Jenkins"'
                bat "git push origin HEAD:${env.BRANCH_NAME}"
            }
        }
        stage('Testing') {
            when {
                expression { params.runTests == true }
            }
            steps {
                bat 'dotcover.exe analyze ContinuousIntegration/DotCoverConfig.xml'
                nunit testResultsPattern: 'TestResult.xml'
            }
        }
        stage('Finishing SonarQube') {
            when {
                expression { params.runTests == true && env.BRANCH_NAME == 'develop' }
            }
            steps {
                withSonarQubeEnv('XYZ SonarQube') {
                    script {
                        def sqScannerMsBuildHome = tool 'SonarQube.Runner-3.0'
                    }
                    bat "${sqScannerMsBuildHome}\\SonarQube.Scanner.MSBuild.exe end"
                }
            }
        }
    }
    post {
        failure {
            emailext body: "<b>Error while executing the following job</b><br><br>Project: ${env.JOB_NAME} <br>Build Number: ${env.BUILD_NUMBER} <br>Build URL: ${env.BUILD_URL}", mimeType: 'text/html', recipientProviders: [brokenTestsSuspects(), brokenBuildSuspects()], subject: "ERROR CI: Project name -> ${env.JOB_NAME}"
        }
        unstable {
            emailext body: "<b>Error while executing the following job</b><br><br>Project: ${env.JOB_NAME} <br>Build Number: ${env.BUILD_NUMBER} <br>Build URL: ${env.BUILD_URL}", mimeType: 'text/html', recipientProviders: [brokenTestsSuspects(), brokenBuildSuspects()], subject: "ERROR CI: Project name -> ${env.JOB_NAME}"
        }
    }
    parameters {
        booleanParam(name: 'buildX86', defaultValue: false, description: 'Build for X86 platform')
        booleanParam(name: 'buildX64', defaultValue: true, description: 'Build for X64 platform')
        booleanParam(name: 'commitVersionChanges', defaultValue: false, description: 'Commit the version changes')
        booleanParam(name: 'runTests', defaultValue: false, description: 'Run unit tests')
        string(name: 'version', defaultValue: '3.6.0', description: 'Version of the setup to build')
        choice(name: 'setupConfiguration', choices: '''Release
Debug''', description: 'Setup configuration to use')
    }
}
The "new" parameter for which I don't get any prompt (only in BlueOcean) is "runTests".
What can I do to get it? I tried to reboot; that didn't change anything.
According to the documentation examples, parameters {} should be declared before stages {}; otherwise Jenkins does not know what values to place in the template variables, because the pipeline is evaluated from top to bottom.
Also, if the parameter was only just added to the Jenkinsfile, you may need to run the job twice: the first run won't know yet that there are new params to deal with.
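A minimal sketch of that layout, keeping only the runTests parameter and its consuming stage from the question's Jenkinsfile:

```groovy
pipeline {
    agent any
    // Parameters declared near the top, before stages, as in the
    // documentation examples. Note that a newly added parameter is only
    // offered in the build form after the updated Jenkinsfile has run once.
    parameters {
        booleanParam(name: 'runTests', defaultValue: false, description: 'Run unit tests')
    }
    stages {
        stage('Testing') {
            when {
                expression { params.runTests == true }
            }
            steps {
                echo "runTests = ${params.runTests}"
            }
        }
    }
}
```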
