Jenkins Pipeline - Git checkout collision

We have a multi-stage pipeline in our CI, and some of the stages have their own nested stages that are parallelized and may run on the same or different agents (we request a certain agent label).
As with most CI pipelines, we build our artifacts and deploy and run our tests later.
As the pipeline may take some time to complete, we ran into an issue where new commits merged to our master branch could be picked up in the later stages, creating an incompatibility between the pre-packaged code and the newly checked-out one.
I'm currently using the skipDefaultCheckout option and added my own function to check out the commit SHA-1 stored in GIT_COMMIT in every one of the parallel stages:
void gitCheckoutByCommitHash(credentialsId, gitCommit = GIT_COMMIT) {
    script {
        println("Explicitly checking out git commit: ${gitCommit}")
    }
    checkout changelog: false, poll: false,
        scm: [
            $class: 'GitSCM',
            branches: [[name: gitCommit]],
            doGenerateSubmoduleConfigurations: false,
            extensions: [
                [
                    $class: 'CloneOption',
                    noTags: true,
                    shallow: true
                ],
                [
                    $class: 'SubmoduleOption',
                    disableSubmodules: false,
                    parentCredentials: true,
                    recursiveSubmodules: true,
                    reference: '',
                    trackingSubmodules: false
                ],
            ],
            submoduleCfg: [],
            userRemoteConfigs: [[
                credentialsId: credentialsId,
                url: GIT_URL
            ]]
        ]
}
The problem I'm facing is that sometimes two or more of the parallel stages try to run on the same agent and perform the checkout at the same time, and I get an error that another process already holds .git/index.lock, so the locked-out stage fails.
Is there any way to work around that?
This is a sample pipeline:
pipeline {
    agent {
        label 'docker_v2'
    }
    options {
        timestamps()
        timeout(time: 1, unit: 'HOURS')
    }
    stages {
        stage('Prepare test environment') {
            options {
                skipDefaultCheckout()
            }
            steps {
                gitCheckoutByCommitHash('some-creds-id')
            }
        }
        stage('Parallel stuff') {
            parallel {
                stage('Checkout 1') {
                    agent {
                        label 'docker_v2'
                    }
                    options {
                        skipDefaultCheckout()
                    }
                    steps {
                        gitCheckoutByCommitHash('some-creds-id')
                    }
                }
                stage('Checkout 2') {
                    agent {
                        label 'docker_v2'
                    }
                    options {
                        skipDefaultCheckout()
                    }
                    steps {
                        gitCheckoutByCommitHash('some-creds-id')
                    }
                }
            }
        }
    }
}

The best way to solve this issue is to perform the checkout only once, stash the sources, and unstash them in the later stages:
pipeline {
    agent {
        label 'docker_v2'
    }
    options {
        timestamps()
        timeout(time: 1, unit: 'HOURS')
        skipDefaultCheckout()
    }
    stages {
        stage('Prepare test environment') {
            steps {
                // you can use this:
                gitCheckoutByCommitHash('some-creds-id')
                // or the usual "checkout scm"
                stash name: 'sources', includes: '**/*', allowEmpty: true, useDefaultExcludes: false
            }
        }
        stage('Parallel stuff') {
            parallel {
                stage('Checkout 1') {
                    agent {
                        label 'docker_v2'
                    }
                    steps {
                        unstash 'sources'
                    }
                }
                stage('Checkout 2') {
                    agent {
                        label 'docker_v2'
                    }
                    steps {
                        unstash 'sources'
                    }
                }
            }
        }
    }
}
This would also speed up your pipeline.
To prevent collisions in the workspace, you can also use one of the following (a sketch follows the list):
- the customWorkspace agent option (or the ws step), so that different parallel stages use different workspaces; or
- agents configured with only one executor each; or
- both of the above.
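For illustration, here is a minimal sketch of the first option, reusing the stage and label from the example above; the customWorkspace directory name is made up and any stage-unique path would do:
stage('Checkout 1') {
    agent {
        node {
            label 'docker_v2'
            // hypothetical per-stage directory; pick any path that is unique per stage
            customWorkspace 'ws-checkout-1'
        }
    }
    options {
        skipDefaultCheckout()
    }
    steps {
        gitCheckoutByCommitHash('some-creds-id')
    }
}
With one executor per agent you get the same effect at the node level, since Jenkins will never schedule two of the parallel stages on that agent at the same time.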

Related

How to fail the Jenkins build if any test cases fail, using the findText plugin

I have a stage in Jenkins as follows. How do I mark the build as failed or unstable if there is a test case failure? I generated the pipeline script for the Text Finder plugin, but it is not working: "findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true". I am not sure where to place the textFinder regex.
pipeline {
    agent none
    tools {
        maven 'maven_3_6_0'
    }
    options {
        timestamps()
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    environment {
        JAVA_HOME = "/Users/jenkins/jdk-11.0.2.jdk/Contents/Home/"
        imageTag = ""
    }
    parameters {
        choice(name: 'buildEnv', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Image build')
        choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Deploy')
    }
    stages {
        stage("Tests") {
            agent { label "xxxx_Slave" }
            steps {
                checkout([$class: 'GitSCM', branches: [[name: 'yyyyyyyyyyz']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'zzzzzzzzzzz', url: 'abcdefgh.git']]])
                sh '''
                    cd dashboard
                    mvn -f pom.xml surefire-report:report -X -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
                '''
            }
        }
    }
}
All I did to make this work is shown below: I added a post block after the steps block.
post {
    success {
        findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
    }
}
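To make the placement clearer, here is a sketch only, reusing the stage and the findText parameters from above; the post block sits inside the stage at the same level as steps:
stage("Tests") {
    agent { label "xxxx_Slave" }
    steps {
        // checkout and the surefire-report run from the question go here
    }
    post {
        // runs after the steps; findText downgrades the build to UNSTABLE
        // when the phrase appears in the console output
        success {
            findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
        }
    }
}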

How to share stages between Jenkins Pipelines

I have two Jenkinsfiles where I want to share the same stages:
CommonJenkinsFile:
pipeline {
    agent {
        node {
            label 'devel-slave'
        }
    }
    stages {
        stage("Select branch") {
            options {
                timeout(time: 3, unit: 'MINUTES')
            }
            steps {
                script {
                    env.branchName = input(
                        id: 'userInput', message: 'Enter the name of the branch you want to deploy',
                        parameters: [
                            string(
                                defaultValue: '',
                                name: 'BranchName',
                                description: 'Name of the branch'),
                        ]).replaceAll('\\/', '%2F')
                }
            }
        }
    }
}
Where I want to use it:
pipeline {
    agent {
        node {
            label 'devel-slave'
        }
    }
    load 'CommonJenkinsFile'
    stages {
        stage('Deploy to test') {
        }
    }
}
How can these stages be shared? Should I switch to Scripted Pipelines? Can they share stages, or only steps?
CommonJenkinsfile cannot contain the pipeline directive (otherwise you would be executing a pipeline within a pipeline). I think you also need to move it to scripted syntax instead of declarative (I might be wrong there).
This file could look like this:
def commonStep() {
    node('devel-slave') {
        stage("Select branch") {
            timeout(time: 3, unit: 'MINUTES') {
                env.branchName = input ...
            }
        }
    }
}
return this
You can then load it like this:
stage('common step') {
    script {
        def sharedSteps = load "$workspace/common.groovy"
        sharedSteps.commonStep()
    }
}
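Putting it together, a consuming declarative Jenkinsfile might look like the sketch below; the file name common.groovy and the echo in the deploy stage are assumptions for illustration, and the shared file must already be present in the workspace (e.g. via the default checkout):
pipeline {
    agent {
        node {
            label 'devel-slave'
        }
    }
    stages {
        stage('common step') {
            steps {
                script {
                    // load returns the script object whose methods we can call
                    def sharedSteps = load "$workspace/common.groovy"
                    sharedSteps.commonStep()
                }
            }
        }
        stage('Deploy to test') {
            steps {
                // env.branchName was set by the shared step above
                echo "Deploying branch ${env.branchName}"
            }
        }
    }
}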

Best way to clone or pull GitLab code using Jenkins to avoid merge issues

What is the best way to clone or pull GitLab code using Jenkins? I have the pipeline below, but I am seeing merge issues pop up, and then it ignores other builds. What is the best approach? Below is my pipeline and the errors:
pipeline {
    agent any
    environment {
        APPIUM_PORT_ONE = 4723
        APPIUM_PORT_TWO = 4724
    }
    tools { nodejs "node" }
    stages {
        stage('Checkout App 1') {
            steps {
                dir("/Users/Desktop/app1") {
                    sh 'git pull ###'
                }
                echo "Building.."
            }
        }
        stage('Checkout App 2') {
            steps {
                dir("/Users//Desktop/app2") {
                    echo "Building.."
                    sh 'git pull ###'
                }
            }
        }
        stage('Checkout Mirror') {
            steps {
                echo "Building.."
            }
        }
        stage('Checkout End to End Tests') {
            steps {
                dir("/Users/Desktop/qa-end-to-end/") {
                    sh 'git pull ###'
                }
            }
        }
        stage('Starting Appium Servers') {
            steps {
                parallel(
                    ServerOne: {
                        echo "Starting Appium Server 1"
                        dir("/Users/Desktop/qa-end-to-end/") {
                        }
                    },
                    ServerTwo: {
                        echo "Starting Appium Server 2"
                    })
            }
        }
        stage('Starting End to End Tests') {
            steps {
                echo "Starting End to End Tests"
                dir("/Users/Desktop/qa-end-to-end/") {
                    sh './tests.sh'
                    echo "Shutting Down Appium Servers"
                }
            }
        }
        stage('Publish Report') {
            steps {
                echo "Publishing Report"
            }
        }
    }
}
Should I clone from scratch instead of doing a pull? Any documentation would be helpful.
Unless the repos are large and time-consuming to clone from scratch, I would do that.
Then you are certain that you have clean, correct code to run with:
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    doGenerateSubmoduleConfigurations: false,
    extensions: [[$class: 'CleanCheckout']],
    submoduleCfg: [],
    userRemoteConfigs: [[credentialsId: 'GIT', url: 'git@git.com:repo.git']]])
You can either run this in your dir block or add this extension to check out into a subdirectory:
extensions: [[$class: 'RelativeTargetDirectory',
    relativeTargetDir: 'checkout-directory']]
Don't forget to delete the old checkouts if you are persisting workspaces across builds (see the sketch below).
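One hedged way to do that cleanup (deleteDir is a core Pipeline step; the stage name and the directory are placeholders based on the paths in the question) would be a one-off stage such as:
stage('Clean old checkouts') {
    steps {
        // remove a directory that earlier builds filled via 'git pull'
        dir("/Users/Desktop/qa-end-to-end/") {
            deleteDir()
        }
        // or, if the Workspace Cleanup plugin is installed, wipe the whole workspace:
        // cleanWs()
    }
}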

Why the Pipeline job is triggered although there is no change in SCM

I am setting up a pipeline job for my project, and what I would like to do is configure the job to poll the SCM every two minutes to check whether there is a change and, if so, build the job using the pipeline script.
Here is my pipeline job configuration:
and the pipeline script:
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    tools {
        maven 'Maven 3.6'
    }
    stages {
        stage('Git Checkout') {
            steps {
                sh 'echo "Git Checkout"'
                checkout([$class: 'GitSCM', branches: [[name: 'master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'my-credential', url: 'my-git-url']]])
            }
        }
        stage('Maven Build') {
            steps {
                sh 'mvn clean deploy -s $WORKSPACE/custom-config/settings.xml -Dsettings.security=$WORKSPACE/custom-config/settings-security.xml'
            }
        }
        stage('Maven Build and SonarQube Analysis') {
            steps {
                withSonarQubeEnv('Sonar Qube') {
                    sh 'mvn sonar:sonar -Dsonar.projectKey=com.product:backend'
                }
            }
        }
        stage('SonarQube Quality Gate') {
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
    }
    post {
        failure {
            mail bcc: '', body: "See ${env.BUILD_URL}", cc: '', charset: 'UTF-8', from: '', mimeType: 'text/html', replyTo: '', subject: "Build failed in Jenkins: ${env.JOB_NAME} #${env.BUILD_NUMBER}", to: "my.email@gmail.com";
        }
        unstable {
            mail bcc: '', body: "See ${env.BUILD_URL}", cc: '', charset: 'UTF-8', from: '', mimeType: 'text/html', replyTo: '', subject: "Build unstable in Jenkins: ${env.JOB_NAME} #${env.BUILD_NUMBER}", to: "my.email@gmail.com";
        }
    }
}
Everything is working fine except that the job is triggered every two minutes even though nothing has changed in the master branch that I set up in the script. Can anyone help me prevent this? I would like the job to be triggered only when there is a change in SCM. Thank you very much!

Parallel checkout in declarative Jenkins pipeline on multiple nodes

I'm developing a declarative Jenkins pipeline for CI builds, triggered from GitLab. What I have now:
// variable definitions
pipeline {
    agent none
    parameters {
        string(defaultValue: 'develop',
               description: 'Commit ID or branch name to build',
               name: 'branch',
               trim: false)
    }
    stages {
        stage('Checkout') {
            parallel {
                stage('Windows') {
                    agent {
                        label 'build' && 'windows'
                    }
                    steps {
                        script {
                            def checkout_ext = [[$class: 'CleanCheckout'],
                                                [$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
                            if (env.gitlabActionType == "MERGE") {
                                checkout_ext.add([$class: 'PreBuildMerge',
                                                  options: [mergeRemote: "origin",
                                                            mergeTarget: "${env.gitlabTargetBranch}"]])
                            }
                        }
                        checkout([
                            $class: 'GitSCM',
                            branches: [[name: "${params.branch}"]],
                            userRemoteConfigs: [[url: "${git_url}", credentialsId: "${git_credentials_id}"]],
                            extensions: checkout_ext
                        ])
                    }
                }
                stage('Linux') {
                    agent {
                        label 'build' && 'linux'
                    }
                    steps {
                        script {
                            def checkout_ext = [[$class: 'CleanCheckout'],
                                                [$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
                            if (env.gitlabActionType == "MERGE") {
                                checkout_ext.add([$class: 'PreBuildMerge',
                                                  options: [mergeRemote: "origin",
                                                            mergeTarget: "${env.gitlabTargetBranch}"]])
                            }
                        }
                        checkout([
                            $class: 'GitSCM',
                            branches: [[name: "${params.branch}"]],
                            userRemoteConfigs: [[url: "${git_url}", credentialsId: "${git_credentials_id}"]],
                            extensions: checkout_ext
                        ])
                    }
                }
            }
        }
    }
}
The Checkout stage is somewhat complex. If gitlabActionType is MERGE, it first tries to merge into the target branch, to make sure the merge request does not break anything in it.
This code is the same for both OSes. I'd like to avoid the code duplication, but cannot figure out the correct syntax for that.
I have tried moving the definition of the checkout steps to a global variable, but got syntax errors:
def checkout_step = {
    script {
        ...
    }
    checkout (...)
}
pipeline {
    ...
    stages {
        stage('Checkout') {
            parallel {
                stage('Windows') {
                    agent {
                        label 'build' && 'windows'
                    }
                    steps {
                        checkout_step
                    }
                }
                stage('Linux') {
                    agent {
                        label 'build' && 'linux'
                    }
                    steps {
                        checkout_step
                    }
                }
            }
        }
    }
}
If I add steps, there is also an error:
def checkout_step = steps {
    script {
        ...
    }
    checkout (...)
}
pipeline {
    ...
    stages {
        stage('Checkout') {
            parallel {
                stage('Windows') {
                    agent {
                        label 'build' && 'windows'
                    }
                    checkout_step
                }
                stage('Linux') {
                    agent {
                        label 'build' && 'linux'
                    }
                    checkout_step
                }
            }
        }
    }
}
I have found a solution here:
git_url = "git@gitserver.corp.com:group/repo.git"
git_credentials_id = 'aaaaaaa-bbbb-cccc-dddd-eefefefefef'

def checkout_tasks(os_labels) {
    tasks = [:]
    for (int i = 0; i < os_labels.size(); i++) {
        def os = os_labels[i]
        tasks["${os}"] = {
            node("build && ${os}") {
                def checkout_ext = [[$class: 'CleanCheckout'], [$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
                if (env.gitlabActionType == "MERGE") {
                    checkout_ext.add([
                        $class: 'PreBuildMerge',
                        options: [
                            mergeRemote: "origin",
                            mergeTarget: "${env.gitlabTargetBranch}"
                        ]
                    ])
                    /* Using this extension requires a .gitconfig with a [user] section for Jenkins.
                       Example:
                       [user]
                           email = jenkins@builder
                           name = Jenkins
                    */
                }
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: "${params.branch}"]],
                    userRemoteConfigs: [[
                        url: "${git_url}",
                        credentialsId: "${git_credentials_id}"
                    ]],
                    extensions: checkout_ext
                ])
            }
        }
    }
    return tasks
}

pipeline {
    agent none
    parameters {
        string(defaultValue: 'develop',
               description: 'Commit ID or branch name to build',
               name: 'branch',
               trim: false)
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    def OSes = ["windows", "linux"]
                    parallel checkout_tasks(OSes)
                }
            }
        }
    }
}
It is also important to declare git_url and git_credentials_id without def, so that the functions can read them (see the sketch below).
More details are in this question.
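A tiny sketch of the scoping difference (the local_only variable is made up for illustration): an assignment without def goes into the script binding and is visible inside functions, while a def variable stays local to the main script body:
git_url = "git@gitserver.corp.com:group/repo.git"   // binding variable: readable from functions
def local_only = "not visible in functions"          // local to the main script body

def show() {
    echo git_url        // works
    // echo local_only  // would fail with groovy.lang.MissingPropertyException
}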
