I have a Jenkins pipeline that also uses a shared library. My multibranch pipeline clones three to four repositories during a build, using the Bitbucket plugin.
My question is: how do I get the last built revision from the previous build?
I have tried the currentBuild.changeSets approach, but it fails when multiple repositories are cloned.
I had to get SCM revisions from previous builds too. I didn't find any API that exposes them nicely, so I implemented a workaround. It is not great, but at least it works ;-)
When you save an environment variable with env.setProperty(name, value), it is stored in the build metadata as a build variable, and you can read it back from later builds at any time.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    env.setProperty('MY_ENV', env.BUILD_NUMBER)
                    def previousBuild = currentBuild.previousBuild
                    if (previousBuild != null) {
                        echo previousBuild.buildVariables['MY_ENV'] // prints env.BUILD_NUMBER - 1
                    }
                }
            }
        }
    }
}
In your case you have 4 checkouts. I don't know how you clone the sources, so let's imagine that you have a cloneRepo method that sets the GIT_COMMIT environment variable (a sketch of such a helper follows the snippets below). Then you may use:
def previousBuild = currentBuild.previousBuild
if (previousBuild != null) {
    echo previousBuild.buildVariables['GIT_COMMIT_REPO_1']
    echo previousBuild.buildVariables['GIT_COMMIT_REPO_2']
    echo previousBuild.buildVariables['GIT_COMMIT_REPO_3']
    echo previousBuild.buildVariables['GIT_COMMIT_REPO_4']
}
cloneRepo(repo1)
env.setProperty('GIT_COMMIT_REPO_1', env.GIT_COMMIT)
cloneRepo(repo2)
env.setProperty('GIT_COMMIT_REPO_2', env.GIT_COMMIT)
cloneRepo(repo3)
env.setProperty('GIT_COMMIT_REPO_3', env.GIT_COMMIT)
cloneRepo(repo4)
env.setProperty('GIT_COMMIT_REPO_4', env.GIT_COMMIT)
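The cloneRepo method is hypothetical; a minimal sketch of what it could look like, using the checkout step's return value (a map of SCM-provided variables) to capture the commit:

def cloneRepo(String url, String branch = 'master') {
    def scmVars
    dir(url.tokenize('/').last() - '.git') {   // give each repository its own sub-directory
        scmVars = checkout([$class: 'GitSCM',
                            branches: [[name: "*/${branch}"]],
                            userRemoteConfigs: [[url: url]]])
    }
    env.GIT_COMMIT = scmVars.GIT_COMMIT        // mimic the behaviour assumed above
    return scmVars.GIT_COMMIT
}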
If you use the checkout step, then you may do:
def commitId = checkout(scm).find { it.key == 'GIT_COMMIT' }?.value
env.setProperty('GIT_COMMIT_REPO_1', commitId)
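Once both the previous and the current revisions are stored, you can use them, for example, to list what changed in between. A minimal sketch, assuming the first repository is checked out in the current directory and git is on the agent's PATH:

def previousBuild = currentBuild.previousBuild
if (previousBuild != null) {
    def previousCommit = previousBuild.buildVariables['GIT_COMMIT_REPO_1']
    if (previousCommit) {
        // show every commit that went into repo 1 since the last build
        sh "git log --oneline ${previousCommit}..${env.GIT_COMMIT_REPO_1}"
    }
}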
We are using a Jenkins multibranch pipeline with BitBucket to build pull request branches as part of our code review process.
We wanted to abort any queued or in-progress builds so that we only run and keep the latest build - I created a function for this:
import hudson.model.Result
import jenkins.model.CauseOfInterruption
import jenkins.model.Jenkins

def call() {
    def jobName = env.JOB_NAME
    def buildNumber = env.BUILD_NUMBER.toInteger()
    def currentJob = Jenkins.instance.getItemByFullName(jobName)
    for (def build : currentJob.builds) {
        def exec = build.getExecutor()
        if (build.isBuilding() && build.number.toInteger() != buildNumber && exec != null) {
            exec.interrupt(
                Result.ABORTED,
                new CauseOfInterruption.UserInterruption("Job aborted by #${currentBuild.number}")
            )
            println("Aborted previously running build #${build.number}")
        }
    }
}
Now in my pipeline, I want to run this function when the build is triggered by the creation of, or a push to, a PR branch.
It seems the only way I can do this is to set the agent to none and then set it to the correct node for each of the subsequent stages. This results in missing environment variables etc. since the 'environment' section runs on the master.
If I use agent { label 'mybuildnode' } then it won't run the stage until the agent is free from the currently in-progress/running build.
Is there a way I can get the 'cancelpreviousbuilds()' function to run before the agent is allocated as part of my pipeline?
This is roughly what I have currently, but it doesn't work because of the environment variable issues - I had to use skipDefaultCheckout so I can do the checkout manually as part of my build:
pipeline {
    agent none
    environment {
        VER = '1.2'
        FULLVER = "${VER}.${BUILD_NUMBER}.0"
        PROJECTPATH = "<project path>"
        TOOLVER = "2017"
    }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Check Builds') {
            when {
                branch 'PR-*'
            }
            steps {
                // abort any queued/in-progress builds for PR branches
                cancelpreviousbuilds()
            }
        }
        stage('Checkout') {
            agent {
                label 'buildnode'
            }
            steps {
                checkout scm
            }
        }
    }
}
It works and aborts the build successfully, but I get errors related to missing environment variables because I had to start the pipeline with agent none and then set each stage to agent { label 'buildnode' }.
I would prefer that my entire pipeline ran on the correct agent so that workspaces / environment variables were set correctly but I need a way to trigger the cancelpreviousbuilds() function without requiring the buildnode agent to be allocated first.
You can try combining the declarative pipeline and the scripted pipeline, which is possible.
Example (note I haven't tested it):
// this is scripted pipeline
node('master') { // use whatever node name or label you want
    stage('Cancel older builds') {
        cancel_old_builds()
    }
}
// this is declarative pipeline
pipeline {
    agent { label 'buildnode' }
    ...
}
As a small side comment, you seem to use: build.number.toInteger() != buildNumber which would abort not only older builds but also newer ones. In our CI setup, we've decided that it's best to abort the current build if a newer build has already been scheduled.
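For reference, here is a minimal sketch of that alternative (not from the question): abort the current build as soon as a newer one exists. It only uses currentBuild.nextBuild, so it needs no agent and no Jenkins.instance access, e.g. as a hypothetical shared-library step vars/abortIfSuperseded.groovy:

def call() {
    // If a newer build of this job is already scheduled or running, abort this one instead.
    if (currentBuild.nextBuild != null) {
        currentBuild.result = 'ABORTED'
        error "Aborting build #${currentBuild.number}: build #${currentBuild.nextBuild.number} is newer."
    }
}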
I am currently working on a generic pipeline which is going to be used via shared library to replace existing jobs so that it's easier to manage all jobs from a more centralized place. Most of the existing jobs have these three stages:
allocates a node
checks out the code from git repository and builds it
deploys the code to a testing repository
Some of the jobs have a few if/elses in their stages which do things based on parameters or environment variables, but overall the jobs are quite similar otherwise. The solution which comes to my mind is to use Closures to allow for additional logic code to be executed in those stages, but I am having difficulties figuring out how to secure this so that the only possible "steps" you can execute are sh and bat.
Here is an oversimplified example vars/genericPipeline.groovy to illustrate what I'm talking about:
def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    String AGENT_LABEL = config.getOrDefault('label', 'mvn3')
    Closure MVN_BUILD = config.getOrDefault('build', {
        sh "mvn clean install"
    })
    Closure MVN_DEPLOY = config.getOrDefault('deploy', { BRANCH_NAME, ARTIFACT_COORDINATES, SERVER_ID, REPO, TMP_REPO ->
        def SERVER_URL = REPO
        if (BRANCH_NAME != 'master') {
            SERVER_URL = TMP_REPO
        }
        sh label: "Deploying ${ARTIFACT_COORDINATES}",
           script: "mvn deploy" +
                   " -DskipTests" +
                   " -DaltDeploymentRepository=${SERVER_ID}::default::${SERVER_URL}"
    })

    pipeline {
        agent {
            node {
                label AGENT_LABEL
            }
        }
        environment {
            GROUP_ID = readMavenPom().getGroupId()
            ARTIFACT_ID = readMavenPom().getArtifactId()
            VERSION = readMavenPom().getVersion()
            ARTIFACT_COORDINATES = "${readMavenPom().getGroupId()}:${readMavenPom().getArtifactId()}:${readMavenPom().getVersion()}"
        }
        stages {
            stage('Building...') {
                steps {
                    MVN_BUILD()
                }
            }
            stage('Deploying...') {
                steps {
                    MVN_DEPLOY(BRANCH_NAME, env.ARTIFACT_COORDINATES, config.serverId, config.repo, config.tmpRepo)
                }
            }
        }
    }
}
This could later be used in the jobs as:
genericPipeline {
    build = {
        sh "mvn clean install"
    }
    // could be set for the "corner cases" or could be skipped to use the
    // default deploy closure for all other cases.
    deploy = {
        sh "mvn deploy"
    }
}
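For jobs that rely on the default deploy closure, the config block would presumably also supply the values it reads (serverId, repo and tmpRepo match the example above; the values here are placeholders):

genericPipeline {
    label    = 'mvn3'
    serverId = 'my-nexus'                            // hypothetical Maven server id
    repo     = 'https://nexus.example.com/releases'  // placeholder release repository
    tmpRepo  = 'https://nexus.example.com/snapshots' // placeholder repository for non-master branches
    // no build/deploy closures supplied, so the defaults defined in the library are used
}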
As you can see, the deploy uses two different repositories, chosen by branch name. I am aware I could simply put this logic in the stage, but the problem is that some of the jobs are not multi-branch and would not have this if/else logic. They would still have the same pipeline structure without any other changes, and I would prefer to maintain one pipeline rather than five relatively similar pipelines covering all the different if/else cases that could occur in the deploy stage. :)
So, the question here: is it possible to white-list only the execution of specific steps (i.e. sh/bat) in the MVN_BUILD and MVN_DEPLOY closures? Or is there another, perhaps even better, way to handle this case?
I'm trying to automate the creation of a Jenkins Pipeline build from within a pipeline.
I have a pipeline which creates a Bitbucket repository and commits some code to it, including a Jenkinsfile.
I need to add another step to this pipeline to then create the Pipeline build for it, which would run the steps in the Jenkinsfile.
I think the Jobs DSL should be able to handle this but the documentation I've found for it has been very sparse, and I'm still not entirely sure if it's possible or how to do it.
Any help would be appreciated. I would imagine the generated Pipeline job just needs a link to the repository and to be told to run the Jenkinsfile there?
Yes, Job DSL is what you need for your use case.
See this and this to help you get started.
EDIT
pipeline {
    agent {
        label 'slave'
    }
    stages {
        stage('stage') {
            steps {
                // some other steps
                jobDsl scriptText: '''pipelineJob('new-job') {
                    def repo = 'https://xxxxx@bitbucket.org/xxxx/dummyrepo.git'
                    triggers {
                        scm('H/5 * * * *')
                    }
                    definition {
                        cpsScm {
                            scm {
                                git {
                                    remote {
                                        url(repo)
                                        credentials('bitbucket-jenkins-access')
                                    }
                                    branches('master')
                                    scriptPath('Jenkinsfile')
                                    extensions { }
                                }
                            }
                        }
                    }
                }'''
            }
        }
    }
}
Documentation - https://jenkinsci.github.io/job-dsl-plugin/#path/pipelineJob-scm-git
By using the Python library jenkins-job-builder you can easily create your expected pipeline or freestyle job from another pipeline or from any other remote location.
Example:
Step 1
python3 -m venv .venv
source .venv/bin/activate
pip install jenkins-job-builder
Step 2
Once you have done the above, create two files: one named config.ini and the other job.yml. Please note there are no strict rules about the file names; they are up to you.
The config.ini file can look like this:
[job_builder]
allow_duplicates = False
keep_descriptions = False
ignore_cache = True
recursive = False
update = all
[jenkins]
password = jenkins-password
query_plugins_info = False
url = http://jenkins-url.net
user = jenkins-username
If you are creating a pipeline job, then your job.yml file can look like this:
- job:
    name: pipeline01
    display-name: 'pipeline01'
    description: 'Do not edit this job through the web!'
    project-type: pipeline
    dsl: |
      node() {
        stage('hello') {
          sh 'echo "Hello World!"'
        }
      }
Step 3
After all of the above, invoke the command below:
jenkins-jobs --conf config.ini update job.yml
Note: the jenkins-jobs command is only available if you have followed Step 1.
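If you want to drive this from the pipeline that creates the repository (as in the original question), here is a rough sketch of a stage that shells out to jenkins-jobs; it assumes config.ini and job.yml are committed alongside the Jenkinsfile and that Python 3 is available on the agent:

stage('Create downstream job') {
    steps {
        sh '''
            python3 -m venv .venv
            . .venv/bin/activate
            pip install jenkins-job-builder
            jenkins-jobs --conf config.ini update job.yml
        '''
    }
}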
I have a project which has multiple build pipelines to allow for different types of builds against it (no, I don't have the ability to make one build out of it; that is outside my control).
Each of these pipelines is represented by a Jenkinsfile in the project repo, and each one must use the same build agent label (they need to share other pieces of configuration as well, but it's the build agent label which is the current problem). I'm trying to put the label into some sort of a configuration file in the project repo, so that all the Jenkinsfiles can read it.
I expected this to be simple, as you don't need this config data until you have already checked out a copy of the sources to read the Jenkinsfile. As far as I can tell, it is impossible.
It seems to me that a Jenkinsfile cannot read files from SCM until the project has done its SCM step. However, that's too late: the argument to agent{label} is read before any stages get run.
Here's a minimal case:
final def config

pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                checkout scm // we don't need all the submodules here
                echo "Reading configuration JSON"
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
                echo "Read configuration JSON"
            }
        }
        stage('Build and Deploy') {
            agent {
                label config.agent_label
            }
            steps {
                echo 'Got into Stage 2'
            }
        }
    }
}
When I run this, I get:
java.lang.NullPointerException: Cannot get property 'agent_label' on null object. I don't get either of the echoes from the 'Configure' stage.
If I change the label for the 'Build and Deploy' stage to 'master', the build succeeds and prints out all three echo statements.
Is there any way to read a file from the Git workspace before the agent labels need to be set?
Please see https://stackoverflow.com/a/52807254/7983309. I think you are running into this issue. label is unable to resolve config.agent_label to its updated value. Whatever is set in the first line is being sent to your second stage.
EDIT1:
env.agentName = ''
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                script {
                    env.agentName = 'slave'
                    echo env.agentName
                }
            }
        }
        stage('Finish') {
            steps {
                node(agentName as String) { println env.agentName }
                script {
                    echo agentName
                }
            }
        }
    }
}
Source - In a declarative jenkins pipeline - can I set the agent label dynamically?
I have several pipeline jobs, which are configured very similarly.
They all have the same stages (of which there are about 10).
I am now I am thinking about moving to the declarative pipeline (https://jenkins.io/blog/2016/09/19/blueocean-beta-declarative-pipeline-pipeline-editor/).
But I do not want to define the ~10 stages in every pipeline. I want to define them in one place and "import" them somehow.
Is this possible with declarative pipelines at all? I see that there are Libraries, but it does not seem like I could include the stage definition using them.
You will have to create a shared library to implement what I am about to suggest. For the shared-library implementation, you may check the following posts:
Using Building Blocks in Jenkins Declarative Pipeline
Upload file in Jenkins input step to workspace (Mainly for images so one can easily figure out things)
Now if you want to use a Jenkinsfile (kind of a template) which can be reused across multiple projects (jobs), then that is indeed possible.
Once you have created a shared-library repository with a vars directory in it, you just have to create a Groovy file (let's say commonPipeline.groovy) inside the vars directory.
Here's an example that works because I have used it earlier in multiple jobs.
$ cat shared-lib/vars/commonPipeline.groovy
// You can create function(s) as shown below, if required
def someFunctionA() {
    // Your code
}

// This is where you will define all the stages that you want
// to run as a whole in multiple projects (jobs)
def call(Map config) {
    pipeline {
        agent {
            node { label 'slaveA || slaveB' }
        }
        environment {
            myvar_Y = 'apple'
            myvar_Z = 'orange'
        }
        stages {
            stage('Checkout') {
                steps {
                    deleteDir()
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    script {
                        check_something = someFunctionA()
                        if (check_something) {
                            echo "Build!"
                            // your_build_code
                        } else {
                            error "Something bad happened! Exiting..."
                        }
                    }
                }
            }
            stage('Test') {
                steps {
                    echo "Running tests..."
                    // your_test_code
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        sh '''
                            # your_deploy_code
                        '''
                    }
                }
            }
        }
        post {
            failure {
                sh '''
                    # anything_you_need_to_perform_in_failure_step
                '''
            }
            success {
                sh '''
                    # anything_you_need_to_perform_in_success_step
                '''
            }
        }
    }
}
With the above Groovy file in place, all you have to do now is call it in your various Jenkins projects. Since you probably already have a Jenkinsfile in each Jenkins project (if not, create one), you just have to replace the existing content of that file with the following:
$ cat Jenkinsfile
// Assuming you have named your shared library `my-shared-lib` and set `Default version` to the `master` branch in
// the `Manage Jenkins` » `Configure System` » `Global Pipeline Libraries` section
@Library('my-shared-lib@master') _

def params = [
    jenkins_var: "${env.JOB_BASE_NAME}",
]
commonPipeline params
Note: As you can see above, I am calling the commonPipeline.groovy file, so all of your bulky Jenkinsfiles get reduced to just five or six lines of code, and those few lines are common across all those projects. Also note that I have used jenkins_var above. It can be any name; it is not actually used, but some argument is required for the pipeline to run (the sketch below explains why).
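On that last point: call(Map config) declares a mandatory parameter, which is why some map has to be passed even though it is never read. A small sketch (my assumption, not part of the original answer) of how giving the parameter a default value makes the argument optional:

// vars/commonPipeline.groovy (only the signature changes)
def call(Map config = [:]) {   // a default empty map makes the argument optional
    // ... the pipeline { ... } block shown above stays exactly the same ...
}

// Jenkinsfile
@Library('my-shared-lib@master') _
commonPipeline()   // no placeholder map needed any more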
Ref: https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/