I'm developing a declarative Jenkins pipeline for CI builds, triggered from GitLab. Here is what I have now:
// variable definitions
pipeline {
agent none
parameters {
string(defaultValue: 'develop',
description: 'Commit ID or branch name to build',
name: 'branch',
trim: false)
}
stages {
stage('Checkout') {
parallel {
stage ('Windows') {
agent {
label 'build && windows'
}
steps {
script {
def checkout_ext = [[$class: 'CleanCheckout'],
[$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
if (env.gitlabActionType == "MERGE"){
checkout_ext.add([$class: 'PreBuildMerge',
options: [ mergeRemote: "origin",
mergeTarget: "${env.gitlabTargetBranch}"]])
}
checkout([
$class: 'GitSCM',
branches: [[name: "${params.branch}"]],
userRemoteConfigs: [[ url: "${git_url}", credentialsId: "${git_credentials_id}" ]],
extensions: checkout_ext
])
}
}
}
stage('Linux') {
agent {
label 'build && linux'
}
steps {
script {
def checkout_ext = [[$class: 'CleanCheckout'],
[$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
if (env.gitlabActionType == "MERGE"){
checkout_ext.add([$class: 'PreBuildMerge',
options: [ mergeRemote: "origin",
mergeTarget: "${env.gitlabTargetBranch}"]])
}
checkout([
$class: 'GitSCM',
branches: [[name: "${params.branch}"]],
userRemoteConfigs: [[ url: "${git_url}", credentialsId: "${git_credentials_id}"]],
extensions: checkout_ext
])
}
}
}
}
}
}
}
The Checkout stage is somewhat complex: if gitlabActionType is MERGE, it first merges into the target branch, to make sure the merge request does not break anything there.
This code is identical for both OSes. I would like to avoid the duplication, but I cannot figure out the correct syntax for it.
I tried moving the definition of the checkout steps into a global variable, but got syntax errors:
def checkout_step = {
script {
...
}
checkout (... )
}
pipeline {
...
stages {
stage('Checkout') {
parallel {
stage ('Windows') {
agent {
label 'build && windows'
}
steps {
checkout_step
}
}
stage ('Linux') {
agent {
label 'build && linux'
}
steps {
checkout_step
}
}
}
}
}
}
If I add steps, there is also an error:
def checkout_step = steps {
script {
...
}
checkout (... )
}
pipeline {
...
stages {
stage('Checkout') {
parallel {
stage ('Windows') {
agent {
label 'build && windows'
}
checkout_step
}
stage ('Linux') {
agent {
label 'build && linux'
}
checkout_step
}
}
}
}
}
I have found a solution:
git_url = "git#gitserver.corp.com:group/repo.git"
git_credentials_id = 'aaaaaaa-bbbb-cccc-dddd-eefefefefef'
def checkout_tasks(os_labels) {
tasks = [:]
for (int i = 0; i < os_labels.size(); i++) {
def os = os_labels[i]
tasks["${os}"] = {
node("build && ${os}"){
def checkout_ext = [[$class: 'CleanCheckout'], [$class: 'CleanBeforeCheckout']] // calls git clean -fdx and git reset --hard
if (env.gitlabActionType == "MERGE"){
checkout_ext.add([
$class: 'PreBuildMerge',
options: [
mergeRemote: "origin",
mergeTarget: "${env.gitlabTargetBranch}"
]
])
/* Using the PreBuildMerge extension requires a .gitconfig with a [user] section for Jenkins.
Example:
[user]
email = jenkins@builder
name = Jenkins
*/
}
checkout([
$class: 'GitSCM',
branches: [[name: "${params.branch}"]],
userRemoteConfigs: [[
url: "${git_url}",
credentialsId: "${git_credentials_id}"
]],
extensions: checkout_ext
])
}
}
}
return tasks
}
pipeline {
agent none
parameters {
string(defaultValue: 'develop',
description: 'Commit ID or branch name to build',
name: 'branch',
trim: false)
}
stages {
stage('Checkout') {
steps {
script {
def OSes = ["windows", "linux"]
parallel checkout_tasks(OSes)
}
}
}
}
}
It is also important to declare git_url and git_credentials_id without def, so that the function can read them.
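For illustration, here is a minimal sketch of that scoping rule (the printConfig helper and build_dir variable are made up for this example): a top-level assignment without def becomes a script-level binding that functions can resolve, while a def variable stays local to the main script body.

git_url = "git@gitserver.corp.com:group/repo.git"   // bound to the script: visible inside functions
def build_dir = "out"                               // local variable: NOT visible inside functions

def printConfig() {
    echo "Cloning from ${git_url}"   // works: resolved from the script binding
    // echo "${build_dir}"           // would fail with a MissingPropertyException
}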
Related
We have a multi-stage pipeline in our CI, and some of the stages have their own nested stages that are parallelized and may run on the same or different agents (we request a certain agent label).
As with most CI pipelines, we build our artifacts and deploy and run our tests later.
As the pipeline may take some time to complete, we had an issue where new commits merged to our master branch could be picked up in the later stages, creating an incompatibility between the pre-packaged code and the newly checked-out one.
I'm currently using the skipDefaultCheckout option and added my own function that checks out the commit SHA-1 stored in GIT_COMMIT in every one of the parallel stages:
void gitCheckoutByCommitHash(credentialsId, gitCommit=GIT_COMMIT) {
script {
println("Explicitly checking out git commit: ${gitCommit}")
}
checkout changelog: false, poll: false,
scm: [
$class: 'GitSCM',
branches: [[name: gitCommit]],
doGenerateSubmoduleConfigurations: false,
extensions: [
[
$class: 'CloneOption',
noTags: true,
shallow: true
],
[
$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: true,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false
],
],
submoduleCfg: [],
userRemoteConfigs: [[
credentialsId: credentialsId,
url: GIT_URL
]]
]
}
The problem I'm facing is that sometimes two or more of the parallel stages try to run on the same agent and perform the checkout, and I get an error that another process is already holding .git/index.lock, so the stage that is locked out fails.
Is there any way to work around that?
This is a sample pipeline
pipeline {
agent {
label 'docker_v2'
}
options {
timestamps()
timeout(time: 1, unit: 'HOURS')
}
stages {
stage('Prepare test environment') {
options {
skipDefaultCheckout()
}
steps {
gitCheckoutByCommitHash('some-creds-id')
}
}
stage('Parallel stuff'){
parallel {
stage('Checkout 1') {
agent {
label 'docker_v2'
}
options {
skipDefaultCheckout()
}
steps {
gitCheckoutByCommitHash('some-creds-id')
}
}
stage('Checkout 2') {
agent {
label 'docker_v2'
}
options {
skipDefaultCheckout()
}
steps {
gitCheckoutByCommitHash('some-creds-id')
}
}
}
}
}
}
The best way to solve this issue is to perform the checkout only once, stash the sources, and unstash them in the later stages:
pipeline {
agent {
label 'docker_v2'
}
options {
timestamps()
timeout(time: 1, unit: 'HOURS')
skipDefaultCheckout()
}
stages {
stage('Prepare test environment') {
steps {
// you can use this:
gitCheckoutByCommitHash('some-creds-id')
// or the usual "checkout scm"
stash name: 'sources', includes: '**/*', allowEmpty: true , useDefaultExcludes: false
}
}
stage('Parallel stuff'){
parallel {
stage('Checkout 1') {
agent {
label 'docker_v2'
}
steps {
unstash 'sources'
}
}
stage('Checkout 2') {
agent {
label 'docker_v2'
}
steps {
unstash 'sources'
}
}
}
}
}
}
This would also speed up your pipeline.
To prevent collisions in the workspace, you can also use one of the following:
use the customWorkspace option so that different parallel stages run in different workspaces (see the sketch after this list); or
configure your agents so that there is only one executor per agent; or
both of the above.
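Here is a minimal sketch of the first option, reusing the label and stash name from the pipeline above; the customWorkspace path is just an illustrative relative directory under the agent's workspace root:

stage('Checkout 1') {
    agent {
        node {
            label 'docker_v2'
            customWorkspace 'parallel-checkout-1'   // separate directory per parallel stage
        }
    }
    steps {
        unstash 'sources'
    }
}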
I am using a declarative pipeline Jenkinsfile for our project, and we want to try the 'Restart from Stage' option.
pipeline {
agent { label 'worker' }
stages {
stage('clean directory') {
steps {
cleanWs()
}
}
stage('checkout') {
steps {
checkout([$class: 'GitSCM', branches: [[name: 'develop']], extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'devops'], [$class: 'LocalBranch', localBranch: "**"]], userRemoteConfigs: [[credentialsId: 'xxxxxx', url: 'git@github.com:test/devops.git']]])
checkout([$class: 'GitSCM', branches: [[name: 'develop']], extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'harness'], [$class: 'LocalBranch', localBranch: "**"]], userRemoteConfigs: [[credentialsId: 'xxxxxx', url: 'git@github.com:test/harness.git']]])
checkout([$class: 'GitSCM', branches: [[name: 'develop']], extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'automation'], [$class: 'LocalBranch', localBranch: "**"]], userRemoteConfigs: [[credentialsId: 'xxxxxx', url: 'git@github.com:test/automation.git']]])
}
}
stage('build initial commit to release train') {
steps {
sh '''#!/bin/bash
export TASK="build_initial_commit"
cd automation
sh main.sh
'''
}
}
stage('deploy application') {
steps {
sh '''#!/bin/bash
export TASK="deploy"
cd automation
sh main.sh
'''
}
}
}
}
In Jenkins I am using 'Pipeline script from SCM', and the Jenkinsfile lives in the automation.git repo (which is also one of the repos checked out in the checkout stage).
Whenever I restart the pipeline from the third stage in the GUI, the workspace directory automatically gets cleaned up and only automation.git is checked out again,
so the run fails because the other cloned repos are gone.
How do I handle this? I want to restart the stage without wiping out the workspace directory.
If we just want to run the 'deploy application' stage, I am not able to, because that stage depends on all three repos.
When only that stage is restarted, the workspace gets wiped out, and since the checkout is done in an earlier stage (which is skipped), the job fails.
How do I run only that stage while retaining the old workspace?
How about this:
SHOULD_CLEAN = true
pipeline {
agent { label 'worker' }
stages {
stage('clean directory') {
steps {
script {
if (SHOULD_CLEAN) {
cleanWs()
SHOULD_CLEAN = false
} else {
echo 'Skipping workspace clean'
}
}
}
}
// ... the remaining stages (checkout, build, deploy) stay as before
}
}
I have a stage in Jenkins as follows. How do I mark the build as failed or unstable if there is a test case failure? I generated the pipeline script for the Text Finder plugin, but it is not working: "findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true". I am not sure where to place the textFinder step.
pipeline {
agent none
tools {
maven 'maven_3_6_0'
}
options {
timestamps ()
buildDiscarder(logRotator(numToKeepStr:'5'))
}
environment {
JAVA_HOME = "/Users/jenkins/jdk-11.0.2.jdk/Contents/Home/"
imageTag = ""
}
parameters {
choice(name: 'buildEnv', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Image build')
choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'preprod', 'production', 'prodg'], description: 'Environment for Deploy')
}
stages {
stage("Tests") {
agent { label "xxxx_Slave"}
steps {
checkout([$class: 'GitSCM', branches: [[name: 'yyyyyyyyyyz']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'zzzzzzzzzzz', url: 'abcdefgh.git']]])
sh'''
cd dashboard
mvn -f pom.xml surefire-report:report -X -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
'''
}
}
}
}
All I did to make this work is the following:
I added a post block below the steps block.
post {
success {
findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
}
}
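For clarity, this is a sketch of how the Tests stage from the question looks with the post block in place (the checkout step is omitted here for brevity; the stage name, label and Maven command are taken from the question above):

stage("Tests") {
    agent { label "xxxx_Slave" }
    steps {
        sh '''
            cd dashboard
            mvn -f pom.xml surefire-report:report -Dsurefire.suiteXmlFiles=src/test/resources/smoke_test.xml site -DgenerateReports=false
        '''
    }
    post {
        // runs after the steps of this stage; on success, scan the log for the
        // surefire failure marker and downgrade the build to UNSTABLE if found
        success {
            findText alsoCheckConsoleOutput: true, regexp: 'There are test failures.', unstableIfFound: true
        }
    }
}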
What is the best way to clone or pull GitLab code using Jenkins? I have the pipeline below, but I am seeing merge issues pop up, after which other builds get ignored. What is the best approach for this? Below is my pipeline:
pipeline {
agent any
environment {
APPIUM_PORT_ONE= 4723
APPIUM_PORT_TWO= 4724
}
tools {nodejs "node"}
stages {
stage('Checkout App 1') {
steps {
dir("/Users/Desktop/app1") {
sh 'git pull ###'
}
echo "Building.."
}
}
stage('Checkout App 2') {
steps {
dir("/Users//Desktop/app2") {
echo "Building.."
sh 'git pull ###'
}
}
}
stage('Checkout Mirror') {
steps {
echo "Building.."
}
}
stage('Checkout End to End Tests') {
steps {
dir("/Users/Desktop/qa-end-to-end/") {
sh 'git pull ###'
}
}
}
stage('Starting Appium Servers') {
steps {
parallel(
ServerOne: {
echo "Starting Appium Server 1"
dir("/Users/Desktop/qa-end-to-end/") {
}
},
ServerTwo: {
echo "Starting Appium Server 2"
})
}
}
stage('Starting End to End Tests') {
steps {
echo "Starting End to End Tests"
dir("/Users/Desktop/qa-end-to-end/") {
sh './tests.sh'
echo "Shutting Down Appium Servers"
}
}
}
stage('Publish Report') {
steps {
echo "Publishing Report"
}
}
}
}
Should I clone from scratch instead of doing a pull? Any documentation would be helpful.
Unless the repos are large and time-consuming to clone from scratch, I would do exactly that. Then you are certain that you have clean, correct code to run with:
checkout([$class: 'GitSCM',
branches: [[name: '*/master']],
doGenerateSubmoduleConfigurations: false,
extensions: [[$class: 'CleanCheckout']],
submoduleCfg: [],
userRemoteConfigs: [[credentialsId: 'GIT', url: 'git@git.com:repo.git']]])
You can either run this in your dir block or add the RelativeTargetDirectory extension to check out into a subdirectory:
extensions: [[$class: 'RelativeTargetDirectory',
relativeTargetDir: 'checkout-directory']]
Don't forget to delete the old checkouts if you are persisting workspaces across builds.
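A minimal sketch of that cleanup, assuming the checkout-directory name used above and the built-in deleteDir step:

// drop the stale checkout, then clone fresh into the same subdirectory
dir('checkout-directory') {
    deleteDir()
}
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'CleanCheckout'],
                 [$class: 'RelativeTargetDirectory', relativeTargetDir: 'checkout-directory']],
    userRemoteConfigs: [[credentialsId: 'GIT', url: 'git@git.com:repo.git']]])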
I am using Jenkins with a scripted-syntax Jenkinsfile. In the main job, after the source checkout, I need to run another job n times in parallel with different inputs. Any tips on how to start?
def checkout(repo, branch) {
checkout(changelog: false,
poll: false,
scm: [$class : 'GitSCM',
branches : [[name: "*/${branch}"]],
doGenerateSubmoduleConfigurations: false,
recursiveSubmodules : true,
extensions : [[$class: 'LocalBranch', localBranch: "${branch}"]],
submoduleCfg : [], userRemoteConfigs: [[credentialsId: '', url: "${repo}"]]])
withCredentials([[$class : '',
credentialsId : '',
passwordVariable: '',
usernameVariable: '']]) {
sh "git clean -f && git reset --hard origin/${branch}"
}
}
node("jenkins02") {
stage('Checkout') {
checkout gitHubRepo, gitBranch
}
}
We do this by storing all the jobs we want to run in a Map and then passing it to the parallel step for execution. You just set up the different parameters, add each definition to the map, and then execute:
Map jobs = [:]
jobs.put('job-1', {
stage('job-1') {
node {
build(job: "myorg/job-1/master", parameters: [new StringParameterValue('PARAM_NAME','VAL1')], propagate: false)
}
}
})
jobs.put('job-2', {
stage('job-2') {
node {
build(job: "myorg/job-2/master", parameters: [new StringParameterValue('PARAM_NAME','VAL2')], propagate: false)
}
}
})
parallel(jobs)
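Since the question asks about running the job n times, the same map can also be built in a loop, mirroring the checkout_tasks pattern earlier on this page. A sketch with placeholder job and parameter names, using the string() parameter form of the build step:

def inputs = ['VAL1', 'VAL2', 'VAL3']        // the n different inputs
Map jobs = [:]
for (int i = 0; i < inputs.size(); i++) {
    def val = inputs[i]                      // local copy so each closure captures its own value
    jobs.put("job-${val}", {
        stage("job-${val}") {
            node {
                build(job: "myorg/downstream-job/master",
                      parameters: [string(name: 'PARAM_NAME', value: val)],
                      propagate: false)
            }
        }
    })
}
parallel(jobs)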