Jenkins Multibranch Pipeline: detect the Git repo that launched the job

I have a multibranch pipeline configuration job working fine so far.
However, every single repository has exactly the same Jenkinsfile except for the Git repo name.
A typical Jenkinsfile looks like:
node('docker-slave') {
    withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'NEXUS', usernameVariable: 'NEXUS_USERNAME', passwordVariable: 'NEXUS_PASSWORD']]) {
        git url: 'git@bitbucket.org:myco/myprob.git', branch: env.BRANCH_NAME, credentialsId: '08df8ab41de0'
        stage 'Test'
        sh 'env > env.txt'
        sh 'cat env.txt'
        sh 'make verify'
    }
}
What I'd like to do is detect which git repo triggered the build so I don't have to hardcode it in the jenkinsfile.
So what I'd like is to change the git line to something like (notice GIT_URL):
git url: env.GIT_URL, branch: env.BRANCH_NAME, credentialsId: '08df8ab41de0'
This gets me closer to my eventual goal of storing my Jenkinsfile in a common location instead of having to copy it repo to repo and modify it repo to repo.
Any ideas?
Thanks
phil

It turns out that in the script the following step does exactly what I need:
checkout scm
The git url line isn't needed.
so in the end the code looks like:
node('docker-slave') {
    withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'NEXUS', usernameVariable: 'NEXUS_USERNAME', passwordVariable: 'NEXUS_PASSWORD']]) {
        checkout scm
        stage 'Test'
        sh 'make verify'
    }
}
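If you also need the repository URL itself (for tagging, notifications, and so on), a multibranch job exposes it through the `scm` object. A minimal sketch, assuming a standard GitSCM configuration with a single remote:

```groovy
node('docker-slave') {
    // checkout scm uses whatever repo/branch triggered this multibranch build.
    checkout scm
    // userRemoteConfigs is a GitSCM property; the first remote is assumed here.
    def repoUrl = scm.userRemoteConfigs[0].url
    echo "Building ${repoUrl} on branch ${env.BRANCH_NAME}"
}
```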

Related

Use a variable in 'git url' Jenkins pipelines

So when using a Jenkins pipeline I need to checkout a second git repository in a new folder:
dir('platform') {
    println("\n\n\n=== === ===> gitBranch ${gitbranch} ")
    dir('platform') {
        deleteDir()
    }
    git url: 'git@gitlab.platform-automation.git', credentialsId: '1234567890', branch: 'feature/Packer-server-image-builds' // clones repo to subdir
}
When I try to use a variable to set the branch the command fails:
git url: 'git@gitlab.platform-automation.git', credentialsId: '1234567890', branch: '${gitbranch}'
What do I need to do to get this working?
Use " instead of ': Groovy only interpolates variables inside double-quoted strings, so write "${gitbranch}".
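To make the difference concrete, here is the same step with the only change being the quoting (a sketch; `gitbranch` is assumed to be defined earlier in the pipeline):

```groovy
// Single quotes: Groovy does NOT interpolate, so the literal text ${gitbranch}
// is passed to git as the branch name, and the checkout fails.
// git url: 'git@gitlab.platform-automation.git', credentialsId: '1234567890', branch: '${gitbranch}'

// Double quotes: Groovy substitutes the value of gitbranch before the step runs.
git url: 'git@gitlab.platform-automation.git', credentialsId: '1234567890', branch: "${gitbranch}"
```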

How to copy Jenkins config files in a jenkins pipeline to Web server

I have a file.properties in a Jenkins Config File that I need to copy to a server during the Jenkins pipeline.
The pipeline code is more or less as shown, just to give an idea.
How can I add a step that copies this config file from Jenkins to a destination server after the last step, 'Deploy WAR to server', for example: sh "scp file.properties jenkins@destinationserver:/destination/path/file.properties"
node {
    stage ('Code Checkout') {
        git branch: 'master',
            credentialsId: 'b346fbxxxxxxxxxxxxxxxxxxx',
            url: 'https://xxxxxxx@bitbucket.org/gr/code.git'
    }
    stage ('Check Branch') {
        sh 'git branch'
    }
    stage ('Compile and Build WAR') {
        sh 'mvn clean compile war:war'
    }
    stage ('Deploy WAR to server') {
        sh "scp .war jenkins@serverIp:/var/lib/tomcat/.war"
    }
}
This is quite easy. Install the Config File Provider Plugin, then generate the appropriate line by visiting https://localhost/jenkins/pipeline-syntax/. In the dropdown choose configFileProvider and fill in the rest of the form.
The end result will be something like this:
configFileProvider([configFile(fileId: 'maven-settings-or-a-UUID-to-your-config-file', variable: 'MAVEN_SETTINGS')]) {
    sh 'mvn -s $MAVEN_SETTINGS clean package'
}
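For the original goal of copying the managed file to a remote server, the same wrapper can be combined with scp. A sketch, reusing the destination host and path from the question; the file ID 'file-properties-UUID' is a placeholder you would take from the Managed Files screen:

```groovy
stage ('Copy properties to server') {
    // The variable holds a temporary path where Jenkins materialises the managed file.
    configFileProvider([configFile(fileId: 'file-properties-UUID', variable: 'PROPS_FILE')]) {
        sh "scp $PROPS_FILE jenkins@destinationserver:/destination/path/file.properties"
    }
}
```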

Jenkins Pipeline deleteDir() is not waiting till the directory has been deleted

In my pipeline I have deleteDir() followed by a git clone. My repo is fairly big, and when I rerun the Jenkins pipeline the clone fails because deleteDir() apparently has not finished deleting the directory. Here is my pipeline:
node {
    stage ('Clean') {
        dir("${Service}") {
            deleteDir()
        }
    }
    stage ('Checkout') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'abc', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
            bat "git clone --recurse-submodules http://${USERNAME}:${PASSWORD}@X.X.X.X:9999/scm/x/${Service}.git"
        }
    }
}
Please suggest how I can make the clone task wait until deleteDir() has completed.
Maybe try to delete the directory in a shell instead:
sh "rm -rf dirName"

I would post this as a comment, but I don't have enough reputation. There are many tickets in the Jenkins issue tracker related to deleteDir(), so @Frankenstein's solution is a good workaround.
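Applied to the pipeline above, which runs on a Windows agent (hence bat), the workaround replaces deleteDir() with an explicit deletion that blocks until the directory is actually gone. A sketch:

```groovy
stage ('Clean') {
    // rmdir /s /q removes the tree synchronously; the stage does not finish
    // until the directory is gone, so the subsequent clone cannot race it.
    bat "if exist ${Service} rmdir /s /q ${Service}"
}
```

On a Linux agent the same idea would be sh "rm -rf ${Service}".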

use the same Jenkinsfile for several github repositories

I have 40-50 GitHub repositories; each repo contains one Maven job.
I want to create a multibranch pipeline job for each repository.
Can I use the same Jenkinsfile for all projects without adding a Jenkinsfile to each repository (i.e. take it from another SCM repo)?
I know that I can use a shared library to create a full pipeline, but I'd prefer something cleaner.
To accomplish this, I would suggest creating a pipeline with two parameters and passing the values based on the repo to build:
1) GIT_BRANCH - the branch to build and deploy
2) GIT_URL - the Git URL to check out the code from.
Here is a reference template:
node('NODE_NAME') {
    withEnv([/* REQUIRED ENV VARIABLES */]) {
        withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'CREDENTIALS_ID', passwordVariable: 'PW', usernameVariable: 'USER']]) {
            try {
                stage 'Build'
                checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: gitbranch]], doGenerateSubmoduleConfigurations: false,
                    extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'CREDENTIALS_ID',
                    url: 'GIT_URL']]]
                // MAVEN BUILD
                stage 'Docker Image build & Push'
                // DOCKER BUILD AND PUSH TO REPO
                stage 'Deploy to ENV'
                // DEPLOYMENT TO REQUIRED ENV
                notify('Success - Deployed to Environment')
            }
            catch (err) {
                notify("Failed ${err}")
                currentBuild.result = 'FAILURE'
            }
        }
    }
}
def notify(status) {
    // NOTIFICATION FUNCTION
}
Link the Jenkinsfile in the pipeline job and provide the values via "Build with Parameters" when building the Jenkins job.
Hope this helps.
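The two parameters described above can also be declared inside the Jenkinsfile itself, so the job offers them automatically after the first run. A sketch, assuming the parameter names GIT_BRANCH and GIT_URL proposed in the answer:

```groovy
properties([
    parameters([
        string(name: 'GIT_BRANCH', defaultValue: 'master', description: 'Branch to build and deploy'),
        string(name: 'GIT_URL', defaultValue: '', description: 'Git URL to check out the code from')
    ])
])
// Later in the pipeline the values are available as params.GIT_BRANCH and params.GIT_URL.
```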

Deploy to Heroku staging, then production with Jenkins

I have a Rails application with a Jenkinsfile which I'd like to set up so that a build is first deployed to staging, then if I am happy with the result, it can be built on production.
I've set up 2 Heroku instances, myapp-staging and myapp-production.
My Jenkinsfile has a node block that looks like:
node {
currentBuild.result = "SUCCESS"
setBuildStatus("Build started", "PENDING");
try {
stage('Checkout') {
checkout scm
gitCommit = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
shortCommit = gitCommit.take(7)
}
stage('Build') {
parallel 'build-image':{
sh "docker build -t ${env.BUILD_TAG} ."
}, 'run-test-environment': {
sh "docker-compose --project-name myapp up -d"
}
}
stage('Test') {
ansiColor('xterm') {
sh "docker run -t --rm --network=myapp_default -e DATABASE_HOST=postgres ${env.BUILD_TAG} ./ci/bin/run_tests.sh"
}
}
stage('Deploy - Staging') {
// TODO. Use env.BRANCH_NAME to make sure we only deploy from staging
withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-staging.git staging')
}
setBuildStatus("Staging build complete", "SUCCESS");
}
stage('Sanity check') {
steps {
input "Does the staging environment look ok?"
}
}
stage('Deploy - Production') {
// TODO. Use env.BRANCH_NAME to make sure we only deploy from master
withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'Heroku Git Login', usernameVariable: 'GIT_USERNAME', passwordVariable: 'GIT_PASSWORD']]) {
sh('git push https://${GIT_USERNAME}:${GIT_PASSWORD}@git.heroku.com/myapp-production.git HEAD:refs/heads/master')
}
setBuildStatus("Production build complete", "SUCCESS");
}
}
My questions are:
Is this the correct way to do this or is there some other best practice? For example do I need two Jenkins pipelines for this or is one project pipeline enough?
How can I use Jenkins' BRANCH_NAME variable to change dynamically depending on the stage I'm at?
Thanks in advance!
For the first question, using one Jenkinsfile to describe the complete project pipeline is desirable: it keeps the description of the process in one place and shows you the process flow in one UI, so your Jenkinsfile looks great in that regard.
For the second question, you can wrap steps in if conditions based on the branch. So if you wanted to, say, skip the prod deployment and the "does staging look ok?" prompt (since you're not going to do the prod deployment) whenever the branch is not master, this would work:
node('docker') {
try {
stage('Sanity check') {
if (env.BRANCH_NAME == 'master') {
input "Does the staging environment look ok?"
}
}
stage('Deploy - Production') {
echo 'deploy check'
if (env.BRANCH_NAME == 'master') {
echo 'do prod deploy stuff'
}
}
} catch(error) {
}
}
I removed some stuff from your pipeline that wasn't necessary to demonstrate the idea, but I also fixed what looked to me like two issues: 1) you seemed to be mixing scripted and declarative pipeline syntax; I assumed you want a scripted pipeline, so I made it fully scripted, which means you cannot use steps blocks. 2) Your try was missing a catch.
At the end of the day, the UI is a bit weird with this solution: all stages always show up and show green, as if they did what their names say (it will look like it deployed to prod even on non-master branches). There is no way around this with scripted pipelines, to my knowledge. With declarative pipelines you can express the same conditional logic with when, and the UI (at least the Blue Ocean UI) actually understands your intent and displays it differently.
Have fun!
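For comparison, the when approach mentioned above would look roughly like this in a declarative pipeline (a sketch; the stage body is a placeholder):

```groovy
pipeline {
    agent { label 'docker' }
    stages {
        stage('Deploy - Production') {
            // The whole stage is skipped (and shown as skipped in Blue Ocean)
            // on any branch other than master.
            when { branch 'master' }
            steps {
                echo 'do prod deploy stuff'
            }
        }
    }
}
```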
