External Workspace Manager plugin with declarative pipeline - Jenkins

I want to use the External Workspace Manager plugin with a declarative pipeline; to be precise, I want to convert the following documentation example to a declarative pipeline.
The pipeline code in the upstream job is the following:
stage ('Stage 1. Allocate workspace in the upstream job')
def extWorkspace = exwsAllocate 'diskpool1'

node ('linux') {
    exws (extWorkspace) {
        stage('Stage 2. Build in the upstream job')
        git url: 'https://github.com/alexsomai/dummy-hello-world.git'
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn clean install -DskipTests"
    }
}
And the downstream job's pipeline code is:
stage ('Stage 3. Select the upstream run')
def run = selectRun 'upstream'

stage ('Stage 4. Allocate workspace in the downstream job')
def extWorkspace = exwsAllocate selectedRun: run

node ('test') {
    exws (extWorkspace) {
        stage('Stage 5. Run tests in the downstream job')
        def mvnHome = tool 'M3'
        sh "${mvnHome}/bin/mvn test"
    }
}
Thanks!

I searched everywhere for a clear answer to this, yet never found a definitive one. So, I pulled the External Workspace Plugin code and read it. The answer is simple, as long as the plugin's model doesn't change.
Anytunc's answer is very close, but the tricky part is getting the path from the External Workspace Plugin into the customWorkspace configuration.
What I ended up doing was creating a method:
def getExternalWorkspace() {
    def extWorkspace = exwsAllocate diskPoolId: "jenkins"
    return extWorkspace.getCompleteWorkspacePath()
}
and setting my agent to:
agent {
    node {
        label 'Linux'
        customWorkspace getExternalWorkspace()
    }
}
If you'd rather not set the entire pipeline to that path, you can allocate as many external workspaces as you want and then use:
...
steps {
    dir(getExternalWorkspace()) {
        // do fancy stuff
        ...
    }
}
...
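Putting the two pieces together, here is a minimal sketch of a complete declarative pipeline (untested; it assumes a disk pool with ID "jenkins" is configured in the plugin's global settings and that a 'Linux' agent label exists):

// Minimal sketch, assuming a disk pool 'jenkins' and a 'Linux' agent label
def getExternalWorkspace() {
    def extWorkspace = exwsAllocate diskPoolId: 'jenkins'
    return extWorkspace.getCompleteWorkspacePath()
}

pipeline {
    agent {
        node {
            label 'Linux'
            // every stage now runs inside the allocated external workspace
            customWorkspace getExternalWorkspace()
        }
    }
    stages {
        stage('Build') {
            steps {
                git url: 'https://github.com/alexsomai/dummy-hello-world.git'
                script {
                    def mvnHome = tool 'M3'
                    sh "${mvnHome}/bin/mvn clean install -DskipTests"
                }
            }
        }
    }
}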

You can use this agent directive:
agent {
    node {
        label 'my-defined-label'
        customWorkspace '/some/other/path'
    }
}

Related

Jenkins - How to run a stage/function before pipeline starts?

We are using a Jenkins multibranch pipeline with Bitbucket to build pull-request branches as part of our code review process.
We want to abort any queued or in-progress builds so that we only run and keep the latest build; I created a function for this:
// e.g. vars/cancelpreviousbuilds.groovy in a shared library
import hudson.model.Result
import jenkins.model.CauseOfInterruption
import jenkins.model.Jenkins

def call() {
    def jobName = env.JOB_NAME
    def buildNumber = env.BUILD_NUMBER.toInteger()
    def currentJob = Jenkins.instance.getItemByFullName(jobName)
    for (def build : currentJob.builds) {
        def exec = build.getExecutor()
        if (build.isBuilding() && build.number.toInteger() != buildNumber && exec != null) {
            exec.interrupt(
                Result.ABORTED,
                new CauseOfInterruption.UserInterruption("Job aborted by #${currentBuild.number}")
            )
            println("Aborted previously running build #${build.number}")
        }
    }
}
Now in my pipeline, I want to run this function when the build is triggered by the creation of, or a push to, a PR branch.
It seems the only way I can do this is to set the agent to none and then set the correct node for each of the subsequent stages. This results in missing environment variables etc., since the environment section runs on the master.
If I use agent { label 'mybuildnode' } then the stage won't run until the agent is free from the currently in-progress/running build.
Is there a way I can get the cancelpreviousbuilds() function to run before the agent is allocated as part of my pipeline?
This is roughly what I have currently, but it doesn't work because of the environment variable issues. I had to set skipDefaultCheckout so I could check out manually as part of my build:
pipeline {
    agent none
    environment {
        VER = '1.2'
        FULLVER = "${VER}.${BUILD_NUMBER}.0"
        PROJECTPATH = "<project path>"
        TOOLVER = "2017"
    }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('Check Builds') {
            when {
                branch 'PR-*'
            }
            steps {
                // abort any queued/in-progress builds for PR branches
                cancelpreviousbuilds()
            }
        }
        stage('Checkout') {
            agent {
                label 'buildnode'
            }
            steps {
                checkout scm
            }
        }
    }
}
It works and aborts the build successfully, but I get errors related to missing environment variables because I had to start the pipeline with agent none and then set each stage to agent { label 'buildnode' }.
I would prefer that my entire pipeline ran on the correct agent so that workspaces and environment variables were set up correctly, but I need a way to trigger the cancelpreviousbuilds() function without requiring the buildnode agent to be allocated first.
You can try combining a scripted pipeline with the declarative pipeline; this is possible.
Example (note: I haven't tested it):
// this is scripted pipeline
node('master') { // use whatever node name or label you want
    stage('Cancel older builds') {
        cancel_old_builds()
    }
}

// this is declarative pipeline
pipeline {
    agent { label 'buildnode' }
    ...
}
As a small side comment: you use build.number.toInteger() != buildNumber, which would abort not only older builds but also newer ones. In our CI setup, we've decided that it's best to abort the current build if a newer build has already been scheduled; a sketch of that variant follows.
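For illustration, a minimal sketch of that variant (untested; like the function above it uses the Jenkins API directly, so it has to run somewhere script security allows, e.g. a trusted shared library):

// Sketch: abort the *current* build when a newer build of this job already exists
import jenkins.model.Jenkins

def currentJob = Jenkins.instance.getItemByFullName(env.JOB_NAME)
def buildNumber = env.BUILD_NUMBER.toInteger()
// any() is true if some build of this job has a higher number than ours
if (currentJob.builds.any { it.number.toInteger() > buildNumber }) {
    currentBuild.result = 'ABORTED'
    error("Aborting build #${buildNumber}: a newer build has been scheduled")
}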

Multiple Jenkinsfiles, One Agent Label

I have a project which has multiple build pipelines to allow for different types of builds against it (no, I don't have the ability to make one build out of it; that is outside my control).
Each of these pipelines is represented by a Jenkinsfile in the project repo, and each one must use the same build agent label (they need to share other pieces of configuration as well, but it's the build agent label which is the current problem). I'm trying to put the label into some sort of a configuration file in the project repo, so that all the Jenkinsfiles can read it.
I expected this to be simple, as you don't need this config data until you have already checked out a copy of the sources to read the Jenkinsfile. As far as I can tell, it is impossible.
It seems to me that a Jenkinsfile cannot read files from SCM until the project has done its SCM step. However, that's too late: the argument to agent { label } is read before any stages run.
Here's a minimal case:
final def config

pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                checkout scm // we don't need all the submodules here
                echo "Reading configuration JSON"
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
                echo "Read configuration JSON"
            }
        }
        stage('Build and Deploy') {
            agent {
                label config.agent_label
            }
            steps {
                echo 'Got into Stage 2'
            }
        }
    }
}
When I run this, I get:
java.lang.NullPointerException: Cannot get property 'agent_label' on null object
and I don't get either of the echoes from the 'Configure' stage.
If I change the label for the 'Build and Deploy' stage to 'master', the build succeeds and prints out all three echo statements.
Is there any way to read a file from the Git workspace before the agent labels need to be set?
Please see https://stackoverflow.com/a/52807254/7983309 - I think you are running into this issue. label is unable to resolve config.agent_label to its updated value; whatever is set in the first line is what gets sent to your second stage.
EDIT1:
env.agentName = ''
pipeline {
agent none
stages {
stage('Configure') {
agent {
label 'master'
}
steps {
script {
env.agentName = 'slave'
echo env.agentName
}
}
}
stage('Finish') {
steps {
node (agentName as String) { println env.agentName }
script {
echo agentName
}
}
}
}
}
Source - In a declarative jenkins pipeline - can I set the agent label dynamically?
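Another common workaround (a sketch, untested) is a lightweight scripted preamble that checks out the repo and reads the label before the declarative pipeline block is evaluated; it assumes the buildjobs/buildjob-config.json file with an agent_label key from the question:

// Scripted preamble: read the label first, then enter the declarative pipeline
def agentLabel
node('master') {
    checkout scm
    def config = readJSON file: 'buildjobs/buildjob-config.json'
    agentLabel = config.agent_label
}

pipeline {
    // agentLabel is already resolved by the time the agent directive is evaluated
    agent { label agentLabel }
    stages {
        stage('Build and Deploy') {
            steps {
                echo "Running on ${agentLabel}"
            }
        }
    }
}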

Create new Jenkins jobs using Pipeline Job and Groovy script

I have a Jenkins pipeline job with parameters (name, group, taskNumber).
I need to write a pipeline script which will call a groovy script (this one?: https://github.com/peterjenkins1/jenkins-scripts/blob/master/add-job.groovy).
I want to create a new job (with the name name_group_taskNumber) every time I build the main pipeline job.
I don't understand:
Where do I need to put my groovy script?
What should the pipeline script look like?
node {
    stage('Build') {
        def pipeline = load "CreateJob.groovy"
        pipeline.run()
    }
}
You can use and configure a shared library like this one (a git repo): https://github.com/lvthillo/shared-library . You need to configure it in your Jenkins global configuration.
It contains a vars/ folder where you can manage pipelines and groovy scripts, such as my slackNotifier.groovy. That script simply posts the build result to Slack; a hypothetical sketch of such a script follows.
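For illustration only, a minimal hypothetical sketch of such a vars/ script (the real slackNotifier.groovy lives in the linked repo; the channel name and the slackSend step from the Jenkins Slack plugin are assumptions here):

// vars/slackNotifier.groovy - hypothetical sketch, not the real implementation
def call(String buildResult) {
    // slackSend is provided by the Jenkins Slack plugin; '#jenkins' is a placeholder channel
    if (buildResult == 'SUCCESS') {
        slackSend channel: '#jenkins', color: 'good',
                  message: "Build succeeded: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
    } else {
        slackSend channel: '#jenkins', color: 'danger',
                  message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
    }
}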
In the Jenkins pipeline job we import our shared library:
@Library('name-of-shared-pipeline-library') _

mavenPipeline {
    // define parameters
}
In the case above the pipeline itself also lives in the shared library, but this isn't necessary.
You can write your pipeline in the job itself and call only the function from the shared library, like this:
This is the script in the shared library:
// vars/sayHello.groovy
def call(String name = 'human') {
    echo "Hello, ${name}."
}
And in your pipeline:
// load the library dynamically; the steps defined in its vars/ folder
// then become callable directly
library 'my-shared-library'
...
stage('stage name') {
    echo "output"
    sayHello('Peter')
}
...
EDIT:
In newer declarative pipelines you can use:
pipeline {
    agent { node { label 'xxx' } }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }
    stages {
        stage('test') {
            steps {
                sh 'echo "execute say hello script:"'
                sayHello("Peter")
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

def sayHello(String name = 'human') {
    echo "Hello, ${name}."
}
output:
[test] Running shell script
+ echo 'execute say hello script:'
execute say hello script:
[Pipeline] echo
Hello, Peter.
[Pipeline] }
[Pipeline] // stage
We do it using the Jobcopy Builder plugin (https://wiki.jenkins.io/display/JENKINS/Jobcopy+Builder+plugin): add another build step in the pipeline script and pass in the parameters that need to be considered.
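Alternatively, if the Job DSL plugin is installed, its jobDsl step can create the new job directly from the running pipeline. A minimal sketch (untested; the name, group and taskNumber parameters come from the question, and the generated job's script is a placeholder):

// Sketch: create a new pipeline job from a running pipeline via the Job DSL plugin
node {
    stage('Create job') {
        jobDsl scriptText: """
            pipelineJob('${params.name}_${params.group}_${params.taskNumber}') {
                definition {
                    cps {
                        script('node { echo "hello from the generated job" }')
                    }
                }
            }
        """
    }
}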

Is there a way to add a pre-build step for a Jenkins pipeline?

Currently I'm able to use a post directive in my Jenkinsfile. Is there a way to trigger a pre-build step similar to this?
post {
    always {
        sh '''rm -rf build/workspace'''
    }
}
I believe this newer question may have the answer: Is there a way to run a pre-checkout step in declarative Jenkins pipelines?
pre is a cool feature idea, but it doesn't exist yet. skipDefaultCheckout and checkout scm (which is the same as the default checkout) are the keys:
pipeline {
    agent { label 'docker' }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('clean_workspace_and_checkout_source') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        stage('build') {
            steps {
                echo 'i build therefore i am'
            }
        }
    }
}

Create a Build Pipeline View based on Pipeline job using Job DSL plugin in Jenkins

Since there's a limitation in Jenkins Pipeline that you cannot add a manual build step without hanging the build (see for example this stackoverflow question), I'm experimenting with a combination of Jenkins Pipeline and the Build Pipeline Plugin using the Job DSL plugin.
My plan was to create a Job DSL script that first executes the Jenkins Pipeline (defined in a Jenkinsfile) and then creates a downstream job that deploys to production (this is the manual step). I've created this Job DSL script as a test:
pipelineJob("${REPO_NAME} jobs") {
    logRotator(-1, 10)
    def repo = "https://path-to-repo/${REPO_NAME}.git"
    triggers {
        scm('* * * * *')
    }
    description("Pipeline for $repo")
    definition {
        cpsScm {
            scm {
                git {
                    remote { url(repo) }
                    branches('master')
                    scriptPath('Jenkinsfile')
                    extensions { } // required as otherwise it may try to tag the repo, which you may not want
                }
            }
        }
    }
    publishers {
        buildPipelineTrigger("${REPO_NAME} deploy to prod") {
            parameters {
                currentBuild()
            }
        }
    }
}

freeStyleJob("${REPO_NAME} deploy to prod") {
}

buildPipelineView("$REPO_NAME Build Pipeline") {
    selectedJob("${REPO_NAME} jobs")
}
where REPO_NAME is defined as an environment variable. The Jenkinsfile looks like this:
node {
    stage('build') {
        echo "building"
    }
    stage('run tests') {
        echo "running tests"
    }
    stage('package Docker') {
        echo "packaging"
    }
    stage('Deploy to Test') {
        echo "Deploying to Test"
    }
}
The problem is that the selectedJob points to "${REPO_NAME} jobs" which doesn't seem to be a valid option as "Initial Job" in the Build Pipeline Plugin view (you can't select it manually either).
Is there a workaround for this? I.e. how can I use a Jenkins Pipeline as the "Initial Job" for the Build Pipeline Plugin?
From the documentation at yourDomain.com/plugin/job-dsl/api-viewer/index.html#method/javaposse.jobdsl.dsl.views.NestedViewsContext.envDashboardView
it appears that buildPipelineView can only be used within a View block that is inside a Folder block:
Folder {
    View {
        buildPipelineView {
        }
    }
}
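In concrete Job DSL terms, that schematic might look roughly like this (a sketch, untested; the folder, view, and job names are placeholders):

// Sketch: a buildPipelineView nested inside a folder via Job DSL
folder('my-pipelines') {
    views {
        buildPipelineView('Build Pipeline') {
            selectedJob('my-pipelines/some-initial-job') // placeholder job name
        }
    }
}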
