Jenkins Scripted Pipeline - specifying the workspace directory before node allocates the workspace

I've got a multibranch pipeline, defined in a scripted pipeline (from a library), that is coordinating ~100 builds, each build across multiple slaves (different operating systems). One of the operating systems is Windows, which has a 255-character path limitation. Because some of our jobs have ~200-character paths in them (which we can't control, because it is a vendor-provided hell), I need to change the step/node workspace on our Windows slaves, ideally via the node() step, so that git is automatically checked out only once into the custom workspace.
I've tried all kinds of various styles:
This works in the Declarative Pipeline:
stage('blah') {
    agent {
        node {
            label 'win'
            customWorkspace "c:\\w\\${JOB_NAME}"
        }
    }
    steps {
        ...
    }
}
But I can't find the equivalent for scripted pipelines:
stage('stage1') {
    node('win-node') {
        // the git repository is checked out to ${env.WORKSPACE}, but it's unusable due to the path length issue
        ws("c:\\w\\${JOB_NAME}") {
            // this switches the workspace, but doesn't clone the git repo again
            body()
        }
    }
}
Ideally, I'd like something like this:
stage('stage1') {
    node('win-node', ws: "c:\\w\\${JOB_NAME}") {
        body()
    }
}
Any recommendations?

Not tested (especially defining options inside node), but you could try to skip the default checkout and do it after changing the workspace, something like this:
pipeline {
    stage('stage1') {
        node('win-node') {
            options {
                skipDefaultCheckout true // prevent checkout to the default workspace
            }
            ws("c:\\w\\${JOB_NAME}") {
                checkout scm // perform the default checkout here
                body()
            }
        }
    }
}
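Since options is a declarative directive, a scripted node block may well reject it. In scripted pipelines there is no implicit checkout in the first place, so it may be enough to simply perform the checkout inside ws(...). A minimal, untested sketch of that idea, with body() standing in for the shared-library logic:
stage('stage1') {
    node('win-node') {
        // nothing has been checked out implicitly; switch to the short path first
        ws("c:\\w\\${env.JOB_NAME}") {
            checkout scm // clone directly into the custom workspace
            body()
        }
    }
}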

Related

How to return to the initial agent of a Jenkins matrix after a stage executed on another agent?

Is it possible in a Matrix-cell of a Jenkins declarative pipeline to execute a later stage on the pipeline's 'initial/main' agent, after a previous stage of the same cell has been executed on another agent (identified by label)?
To put this into context, I want to build native binaries for different platforms in a Jenkins declarative pipeline, using a matrix stage where each cell is responsible to
collect the native sources for that platform
build the native binaries from the sources for that platform
collect the just-built native binaries and distribute them to the platform-specific artefacts
Step two has to be performed on special agents, which are prepared to build the binaries for a particular platform and are identified by their label. Steps one and three have to be performed on the initial agent, the pipeline's 'main' agent, where the sources are checked out from SCM. In the end the native binaries are bundled together and distributed from the pipeline's initial/main agent. To transfer sources and binaries, stash/unstash is used.
An exemplary, simplified pseudo-pipeline would look like:
pipeline {
    agent { label 'basic' }
    // Declarative SCM checkout configured in the multibranch pipeline job-config
    stages {
        stage('Build binaries') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                stages {
                    stage('Collect sources') {
                        steps {
                            <Collect native sources for ${PLATFORM}> in "${WORKSPACE}/native.sources.${PLATFORM}"
                            dir("native.sources.${PLATFORM}") {
                                stash "sources.${PLATFORM}"
                            }
                        }
                    }
                    stage('Build binaries') {
                        options { skipDefaultCheckout() }
                        agent { label "natives-${PLATFORM}" }
                        steps {
                            unstash "sources.${PLATFORM}"
                            <Build native binaries from unstashed sources into 'libs' folder>
                            dir('libs') {
                                stash "binaries.${PLATFORM}"
                            }
                        }
                    }
                    stage('Collect and distribute binaries') {
                        agent {
                            <initial/pipeline-agent>
                        }
                        steps {
                            dir("libs.${PLATFORM}") {
                                unstash "binaries.${PLATFORM}"
                            }
                        }
                    }
                }
            }
        }
        stage('Bundle and distribute') {
            ...
        }
    }
}
But the question is, how do I tell Jenkins to execute the third stage of the matrix on the initial/pipeline agent again?
If I simply don't specify an agent for the third stage, the execution is:
Stage on Pipeline-Agent
Stage on Native-Build-Agent
Stage on Native-Build-Agent
but I want:
Stage on Pipeline-Agent
Stage on Native-Build-Agent
Stage on Pipeline-Agent
In the syntax reference I didn't find an agent parameter like agent { <initial/pipeline-agent> }:
https://www.jenkins.io/doc/book/pipeline/syntax/#agent
https://www.jenkins.io/doc/book/pipeline/syntax/#matrix-cell-directives
The agent section describes a boolean option reuseNode, but it is only "valid for docker and dockerfile".
The only workaround I found so far was to define a second matrix and move the execution of the third stage there. This works as expected, and the stage is executed on the pipeline agent, but it has the drawback that the matrix stage, along with its when-conditions, has to be specified twice.
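Another workaround worth sketching (untested, and not part of the original post): record the initial agent's node name, which Jenkins exposes as NODE_NAME, in the first stage, and allocate that node explicitly with the node step in the third stage. MAIN_NODE is a hypothetical variable name:
stage('Collect sources') {
    steps {
        script { env.MAIN_NODE = env.NODE_NAME } // remember the initial agent
        // ... collect and stash sources as above ...
    }
}
// ...
stage('Collect and distribute binaries') {
    steps {
        node(env.MAIN_NODE) { // return to the recorded agent
            dir("libs.${PLATFORM}") {
                unstash "binaries.${PLATFORM}"
            }
        }
    }
}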
Appendix
The problem probably also exists when using per-stage agents in an ordinary linear pipeline.

Jenkins: unable to access the artifacts on the initial run

My setup: the main node runs on Linux and an agent on Windows. I want to compile a library on the agent, archive those artifacts and copy them to the main node to create a release together with the Linux-compiled binaries.
This is my Jenkinsfile:
pipeline {
    agent none
    stages {
        stage('Build-Windows') {
            agent {
                dockerfile {
                    filename 'docker/Dockerfile-Windows'
                    label 'windows'
                }
            }
            steps {
                bat "tools/ci/build.bat"
                archiveArtifacts artifacts: 'build_32/bin/mylib.dll'
            }
        }
    }
    post {
        success {
            node('linux') {
                copyArtifacts filter: 'build_32/bin/mylib.dll', flatten: true, projectName: '${JOB_NAME}', target: 'Win32'
            }
        }
    }
}
My problem is that when I run this project for the first time, I get the following error:
Unable to find project for artifact copy: mylib
But when I comment out the copyArtifacts block and rerun the project, it is successful and the artifacts are visible in the project overview. After this I can re-enable copyArtifacts, and the artifacts will be copied as expected.
How to configure the pipeline so it can access the artifacts on the initial run?
The copyArtifacts capability is usually used to copy artifacts between different builds, not between agents in the same build. Instead, to achieve what you want, you can use the stash and unstash keywords, which are designed exactly for passing artifacts between different agents in the same pipeline execution:
stash: Stash some files to be used later in the build.
Saves a set of files for later use on any node/workspace in the same Pipeline run. By default, stashed files are discarded at the end of a pipeline run.
unstash: Restore files previously stashed.
Restores a set of files previously stashed into the current workspace.
In your case it can look like:
pipeline {
    agent none
    stages {
        stage('Build-Windows') {
            agent {
                dockerfile {
                    filename 'docker/Dockerfile-Windows'
                    label 'windows'
                }
            }
            steps {
                bat "tools/ci/build.bat"
                // dir is used to control the path structure of the stashed artifact
                dir('build_32/bin') {
                    stash name: 'build_artifact', includes: 'mylib.dll'
                }
            }
        }
    }
    post {
        success {
            node('linux') {
                // dir is used to control the output location of the unstash keyword
                dir('Win32') {
                    unstash 'build_artifact'
                }
            }
        }
    }
}
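If you also want the DLL to show up in the build's artifact list (as archiveArtifacts did before), you can still archive it after unstashing on the Linux node; a small, untested addition to the post block above:
node('linux') {
    dir('Win32') {
        unstash 'build_artifact'
        archiveArtifacts artifacts: 'mylib.dll' // keeps the DLL visible in the project overview
    }
}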

How to build multiple projects at the same time using a Jenkins pipeline

I already wrote an example Jenkinsfile to check out, build and deploy a single project. Is there a way to do all of this for multiple projects in different git repos at the same time, using just one Jenkinsfile? I know I can set these projects up as independent jobs and use a Jenkinsfile to call them, but I'm wondering if I can do this without independent jobs.
Thanks.
You can make use of the Job DSL Plugin to achieve this.
The Jenkins Job DSL API will help you write DSL scripts; there you can find all the built-in DSL methods needed to construct jobs.
Example pipeline script:
pipeline {
    agent any
    stages {
        stage('Job1') {
            steps {
                // Pipeline job
                jobDsl scriptText: '''pipelineJob("$job1") {
                    definition {
                        cpsScm {
                            scm {
                                git {
                                    remote {
                                        name('origin')
                                        url('https://github.com/satta19/user-node.git')
                                        credentials('git2-cred')
                                    }
                                    branch('master')
                                }
                            }
                            scriptPath('Jenkinsfile')
                        }
                    }
                }'''
            }
        }
        stage('Job2') {
            steps {
                // Freestyle job
                jobDsl scriptText: '''job("$job2") {
                    steps {
                        shell('echo Hello World!')
                    }
                }'''
            }
        }
    }
}
Note: I have taken the job names as string parameters, i.e. $job1 and $job2, in the above example pipeline script.
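Assuming $job1 and $job2 really are string parameters of this pipeline (as the note above suggests), the two jobs could also be triggered concurrently once they exist, using the build step inside a scripted parallel block. A rough, untested sketch; the stage and branch names are arbitrary:
stage('Trigger both') {
    steps {
        script {
            parallel(
                job1: { build job: params.job1 }, // the pipeline job created above
                job2: { build job: params.job2 }  // the freestyle job created above
            )
        }
    }
}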

Multiple Jenkinsfiles, One Agent Label

I have a project which has multiple build pipelines to allow for different types of builds against it (no, I don't have the ability to make one build out of it; that is outside my control).
Each of these pipelines is represented by a Jenkinsfile in the project repo, and each one must use the same build agent label (they need to share other pieces of configuration as well, but the build agent label is the current problem). I'm trying to put the label into some sort of configuration file in the project repo, so that all the Jenkinsfiles can read it.
I expected this to be simple, as you don't need this config data until you have already checked out a copy of the sources to read the Jenkinsfile. As far as I can tell, it is impossible.
It seems to me that a Jenkinsfile cannot read files from SCM until the project has done its SCM step. However, that's too late: the argument to agent{label} is read before any stages get run.
Here's a minimal case:
final def config
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                checkout scm // we don't need all the submodules here
                echo "Reading configuration JSON"
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
                echo "Read configuration JSON"
            }
        }
        stage('Build and Deploy') {
            agent {
                label config.agent_label
            }
            steps {
                echo 'Got into Stage 2'
            }
        }
    }
}
When I run this, I get:
java.lang.NullPointerException: Cannot get property 'agent_label' on null object
I don't get either of the echoes from the 'Configure' stage.
If I change the label for the 'Build and Deploy' stage to 'master', the build succeeds and prints out all three echo statements.
Is there any way to read a file from the Git workspace before the agent labels need to be set?
Please see https://stackoverflow.com/a/52807254/7983309. I think you are running into this issue: label is unable to resolve config.agent_label to its updated value. Whatever is set in the first line is what gets sent to your second stage.
EDIT1:
env.agentName = ''
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                script {
                    env.agentName = 'slave'
                    echo env.agentName
                }
            }
        }
        stage('Finish') {
            steps {
                node(agentName as String) { println env.agentName }
                script {
                    echo agentName
                }
            }
        }
    }
}
Source - In a declarative jenkins pipeline - can I set the agent label dynamically?
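Applying that pattern to the pipeline from the question would look roughly like this (untested sketch; the JSON file name and the agent_label key are taken from the question):
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent { label 'master' }
            steps {
                checkout scm
                script {
                    def config = readJSON file: 'buildjobs/buildjob-config.json'
                    env.AGENT_LABEL = config.agent_label // stash the label in an environment variable
                }
            }
        }
        stage('Build and Deploy') {
            steps {
                node(env.AGENT_LABEL) { // allocate the agent dynamically, as in the EDIT1 example
                    echo 'Got into Stage 2'
                }
            }
        }
    }
}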

Can I "import" the stages in a Jenkins Declarative pipeline

I have several pipeline jobs, which are configured very similarly.
They all have the same stages (of which there are about 10).
I am now thinking about moving to the declarative pipeline (https://jenkins.io/blog/2016/09/19/blueocean-beta-declarative-pipeline-pipeline-editor/).
But I do not want to define the ~10 stages in every pipeline. I want to define them in one place and "import" them somehow.
Is this possible with declarative pipelines at all? I see that there are Libraries, but it does not seem like I could include the stage definitions using them.
You will have to create a shared library to implement what I am about to suggest. For the shared-library implementation, you may check the following posts:
Using Building Blocks in Jenkins Declarative Pipeline
Upload file in Jenkins input step to workspace (Mainly for images so one can easily figure out things)
Now if you want to use a Jenkinsfile (kind of a template) which can be reused across multiple projects (jobs), then that is indeed possible.
Once you have created a shared-library repository with a vars directory in it, you just have to create a Groovy file (let's say commonPipeline.groovy) inside the vars directory.
Here's an example that works because I have used it earlier in multiple jobs.
$ cat shared-lib/vars/commonPipeline.groovy
// You can create function(s) as shown below, if required
def someFunctionA() {
    // Your code
}

// This is where you will define all the stages that you want
// to run as a whole in multiple projects (jobs)
def call(Map config) {
    pipeline {
        agent {
            node { label 'slaveA || slaveB' }
        }
        environment {
            myvar_Y = 'apple'
            myvar_Z = 'orange'
        }
        stages {
            stage('Checkout') {
                steps {
                    deleteDir()
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    script {
                        check_something = someFunctionA()
                        if (check_something) {
                            echo "Build!"
                            // your_build_code
                        } else {
                            error "Something bad happened! Exiting..."
                        }
                    }
                }
            }
            stage('Test') {
                steps {
                    echo "Running tests..."
                    // your_test_code
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        sh '''
                            # your_deploy_code
                        '''
                    }
                }
            }
        }
        post {
            failure {
                sh '''
                    # anything_you_need_to_perform_in_failure_step
                '''
            }
            success {
                sh '''
                    # anything_you_need_to_perform_in_success_step
                '''
            }
        }
    }
}
With the above Groovy file in place, all you have to do now is call it from your various Jenkins projects. Since you might already have an existing Jenkinsfile (if not, create one) in your Jenkins project, you just have to replace the existing content of that file with the following:
$ cat Jenkinsfile
// Assuming you have named your shared library `my-shared-lib` and set `Default version` to the `master` branch in
// the `Manage Jenkins` » `Configure System` » `Global Pipeline Libraries` section
@Library('my-shared-lib@master') _

def params = [:]
params = [
    jenkins_var: "${env.JOB_BASE_NAME}",
]
commonPipeline params
Note: As you can see above, I am calling the commonPipeline.groovy file. So all your bulky Jenkinsfiles will be reduced to just five or six lines of code, and those few lines are also going to be common across all those projects. Also note that I have used jenkins_var above; it can be any name. It's not actually used here, but it is required for the pipeline to run. Some Groovy expert can clarify that part.
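For what it's worth, values passed in the map are available inside commonPipeline.groovy through the config parameter, so jenkins_var could actually be consumed there; a hypothetical fragment:
// inside vars/commonPipeline.groovy
def call(Map config) {
    pipeline {
        // ...
        stages {
            stage('Info') {
                steps {
                    echo "Running for: ${config.jenkins_var}" // value supplied by the calling Jenkinsfile
                }
            }
        }
        // ...
    }
}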
Ref: https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/
