How to return to the initial agent of a Jenkins matrix after a stage executed on another agent? - jenkins

Is it possible in a Matrix-cell of a Jenkins declarative pipeline to execute a later stage on the pipeline's 'initial/main' agent, after a previous stage of the same cell has been executed on another agent (identified by label)?
To put this into context, I want to build native binaries for different platforms in a Jenkins declarative pipeline using a matrix stage, where each cell is responsible for:
1. collect the native sources for that platform
2. build the native binaries from the sources for that platform
3. collect the just-built native binaries and distribute them to the platform-specific artefacts
Step two has to be performed on special agents, which are prepared to build the binaries for a particular platform and are identified by their label. Steps one and three have to be performed on the initial agent, the pipeline's 'main' agent, where the sources are checked out from SCM. In the end the native binaries are bundled together and distributed from the pipeline's initial/main agent. stash/unstash is used to transfer sources and binaries between agents.
An exemplary, simplified pseudo-pipeline would look like this:
pipeline {
    agent { label 'basic' }
    // Declarative SCM checkout configured in the multi-branch pipeline job-config
    stages {
        stage('Build binaries') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                stages {
                    stage('Collect sources') {
                        steps {
                            <collect the native sources for ${PLATFORM} into "${WORKSPACE}/native.sources.${PLATFORM}">
                            dir("native.sources.${PLATFORM}") {
                                stash "sources.${PLATFORM}"
                            }
                        }
                    }
                    stage('Build binaries') {
                        options { skipDefaultCheckout() }
                        agent { label "natives-${PLATFORM}" }
                        steps {
                            unstash "sources.${PLATFORM}"
                            <build native binaries from the unstashed sources into the 'libs' folder>
                            dir('libs') {
                                stash "binaries.${PLATFORM}"
                            }
                        }
                    }
                    stage('Collect and distribute binaries') {
                        agent {
                            <initial/pipeline-agent>
                        }
                        steps {
                            dir("libs.${PLATFORM}") {
                                unstash "binaries.${PLATFORM}"
                            }
                        }
                    }
                }
            }
        }
        stage('Bundle and distribute') {
            ...
        }
    }
}
But the question is: how do I tell Jenkins to execute the third stage of the matrix on the initial/pipeline agent again?
If I simply don't specify an agent for the third stage, the execution is:
Stage on Pipeline-Agent
Stage on Native-Build-Agent
Stage on Native-Build-Agent
but I want:
Stage on Pipeline-Agent
Stage on Native-Build-Agent
Stage on Pipeline-Agent
In the syntax reference I didn't find an agent parameter like agent { <initial/pipeline-agent> }:
https://www.jenkins.io/doc/book/pipeline/syntax/#agent
https://www.jenkins.io/doc/book/pipeline/syntax/#matrix-cell-directives
The agent section describes a boolean option reuseNode, but it is only "valid for docker and dockerfile".
The only workaround I have found so far was to define a second matrix and move the third stage into it. This works as expected and the stage is executed on the pipeline agent, but it has the drawback that the matrix axes have to be specified twice, as well as its when-conditions.
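One possible alternative (a rough, untested sketch rather than a confirmed solution): a node's name also acts as a label, so the pipeline agent's NODE_NAME can be recorded in an early stage and then requested explicitly from inside the matrix cell with a scripted node() step. INITIAL_NODE is an illustrative name introduced here, not a built-in variable:
pipeline {
    agent { label 'basic' }
    stages {
        stage('Record initial node') {
            steps {
                // remember which node the pipeline-level agent was allocated on
                script { env.INITIAL_NODE = env.NODE_NAME }
            }
        }
        stage('Build binaries') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                stages {
                    // ... 'Collect sources' and 'Build binaries' as above ...
                    stage('Collect and distribute binaries') {
                        steps {
                            script {
                                // request the recorded node by name; note this may
                                // allocate a different executor/workspace on that node
                                node(env.INITIAL_NODE) {
                                    dir("libs.${PLATFORM}") {
                                        unstash "binaries.${PLATFORM}"
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}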
Appendix
The problem probably also exists when using per-stage agents in an ordinary linear pipeline.

Related

Prevent Jenkins job from building from another jenkins file

We have two Jenkins jobs, AA and BB.
Is there a way to allow BB to be triggered only by AA, after AA completes?
Basically, you can use build triggers to do this. More about build triggers can be found here. There are two ways to add build triggers: via the UI, by going to Configure Job and adding a trigger there, or in the pipeline itself.
In a declarative pipeline, you can add triggers as shown below.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Hello') {
            steps {
                echo 'Hello World BB'
            }
        }
    }
}
Here you can specify the threshold that determines when to trigger the build, based on the status of the upstream build. There are four different thresholds; a short example follows the list.
hudson.model.Result.ABORTED: The upstream build was manually aborted.
hudson.model.Result.FAILURE: The upstream build had a fatal error.
hudson.model.Result.SUCCESS: The upstream build had no errors.
hudson.model.Result.UNSTABLE: The upstream build had an unstable result.
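For example (a hedged variation on the snippet above, not from the original answer), the following triggers BB whenever AA finishes with an UNSTABLE or better result; upstreamProjects also accepts a comma-separated list of job names:
triggers {
    // trigger this job when upstream job 'AA' completes at least UNSTABLE
    upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.UNSTABLE)
}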
Update 02
If you want to prevent all other jobs/users from triggering this job, you will have to restructure the job. You can wrap your stages in a parent stage and conditionally check who triggered the job. Note that the job will still be triggered either way, but its stages will be skipped. Please refer to the following pipeline.
pipeline {
    agent any
    triggers { upstream(upstreamProjects: 'AA', threshold: hudson.model.Result.SUCCESS) }
    stages {
        stage('Parent') {
            // We will restrict triggering this Job for everyone other than Job AA
            when {
                expression {
                    print("Checking if the trigger is allowed to execute this job")
                    // the expression must return the boolean itself; print() returns null
                    return 'AA' in currentBuild.buildCauses.upstreamProject
                }
            }
            stages {
                stage('Hello') {
                    steps {
                        echo 'Hello World BB'
                    }
                }
            }
        }
    }
}

Can we pass Variables through Stash in Jenkins Checkpoint

I am trying to pass a variable assigned in a previous stage to the next stage of a Jenkins pipeline. In a continuous build the variable is passed properly, but when I use a checkpoint in the Jenkinsfile, I am not able to pass the variable from the previous stage to the next one, e.g.:
stage('Initial') {
    agent {
        node {
            label 'NGC-NIS-VS2017-15-7-6-INTEG'
        }
    }
    steps {
        // intent: pass env.BUILD_NUMBER on to the 'Deploy' stage
        stash name: 'Passing buildnumber'
    }
}
stage('CheckPoint') {
    agent none
    steps {
        checkpoint 'HalfWay through'
    }
}
stage('Deploy') {
    agent any
    steps {
        unstash 'Passing buildnumber'
    }
}
When I restarted the build from that checkpoint, I was not able to get the build-number value from the previous stage.
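One hedged suggestion (my own sketch, not from the original thread): stash operates on files, not on variables, so the value has to be written to a file first and read back later. writeFile, readFile, stash and unstash are standard pipeline steps; note that, depending on the setup, stashes created before a checkpoint may not survive a restart, in which case archiving the file as a build artifact is an alternative.
stage('Initial') {
    agent {
        node {
            label 'NGC-NIS-VS2017-15-7-6-INTEG'
        }
    }
    steps {
        // persist the variable as a file so it can be stashed
        writeFile file: 'build-number.txt', text: "${env.BUILD_NUMBER}"
        stash name: 'Passing buildnumber', includes: 'build-number.txt'
    }
}
stage('CheckPoint') {
    agent none
    steps {
        checkpoint 'HalfWay through'
    }
}
stage('Deploy') {
    agent any
    steps {
        unstash 'Passing buildnumber'
        script {
            // read the value back on whatever node 'Deploy' landed on
            def restoredBuildNumber = readFile('build-number.txt').trim()
            echo "Restored build number: ${restoredBuildNumber}"
        }
    }
}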

Jenkins Scripted Pipeline - specifying the workspace directory before node allocates the workspace

I've got a multibranch pipeline, defined in a scripted pipeline (from a library), that coordinates ~100 builds, each build spanning multiple slaves (different operating systems). One of the operating systems is Windows, which has a 255-character path limitation. Because some of our jobs have ~200-character paths in them (which we can't control, because it is vendor-provided hell), I need to change the step/node workspace on our Windows slaves, ideally via the node() step, so that git is automatically checked out only once into the custom workspace.
I've tried all kinds of variations. This works in a declarative pipeline:
stage('blah') {
    agent {
        node {
            label 'win'
            customWorkspace "c:\\w\\${JOB_NAME}"
        }
    }
    steps {
        ...
    }
}
But I can't find the equivalent for scripted pipelines:
stage('stage1') {
    node('win-node') {
        // the git repository is checked out to ${env.WORKSPACE}, but it's unusable due to the path-length issue
        ws("c:\\w\\${JOB_NAME}") {
            // this switches the workspace, but doesn't clone the git repo again
            body()
        }
    }
}
Ideally, I'd like something like this:
stage('stage1') {
    node('win-node', ws: "c:\\w\\${JOB_NAME}") {
        body()
    }
}
Any recommendations?
Not tested (especially defining options inside node), but you could try to skip the default checkout and perform it after changing the workspace, something like this:
stage('stage1') {
    node('win-node') {
        options {
            skipDefaultCheckout true // prevent checkout to the default workspace
        }
        ws("c:\\w\\${JOB_NAME}") {
            checkout scm // perform the checkout here instead
            body()
        }
    }
}
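A minimal scripted-pipeline sketch of the same idea (an untested assumption, not from the original answer): in a multibranch scripted pipeline nothing is checked out implicitly inside node(), so the node can be allocated first and the short custom workspace entered before cloning. withShortWorkspace is an illustrative helper name:
def withShortWorkspace(String nodeLabel, Closure body) {
    node(nodeLabel) {
        // JOB_NAME may contain '/' in multibranch jobs; sanitize it to keep the path short and valid
        def shortName = env.JOB_NAME.replaceAll(/[^A-Za-z0-9_-]/, '_')
        ws("c:\\w\\${shortName}") {
            checkout scm // clone directly into the custom workspace
            body()
        }
    }
}
// usage, e.g. from a library-driven Jenkinsfile:
withShortWorkspace('win-node') {
    bat 'build.cmd'
}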

Multiple Jenkinsfiles, One Agent Label

I have a project which has multiple build pipelines to allow for different types of builds against it (no, I don't have the ability to make one build out of it; that is outside my control).
Each of these pipelines is represented by a Jenkinsfile in the project repo, and each one must use the same build-agent label (they need to share other pieces of configuration as well, but the build-agent label is the current problem). I'm trying to put the label into some sort of configuration file in the project repo, so that all the Jenkinsfiles can read it.
I expected this to be simple, since you don't need this config data until you have already checked out a copy of the sources to read the Jenkinsfile from. As far as I can tell, it is impossible.
It seems that a Jenkinsfile cannot read files from SCM until the project has run its SCM step. However, that's too late: the argument to agent { label } is read before any stages get run.
Here's a minimal case:
final def config
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                checkout scm // we don't need all the submodules here
                echo "Reading configuration JSON"
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
                echo "Read configuration JSON"
            }
        }
        stage('Build and Deploy') {
            agent {
                label config.agent_label
            }
            steps {
                echo 'Got into Stage 2'
            }
        }
    }
}
When I run this, I get:
java.lang.NullPointerException: Cannot get property 'agent_label' on null object
and I don't get either of the echoes from the 'Configure' stage.
If I change the label for the 'Build and Deploy' stage to 'master', the build succeeds and prints all three echo statements.
Is there any way to read a file from the Git workspace before the agent labels need to be set?
Please see https://stackoverflow.com/a/52807254/7983309. I think you are running into this issue: label is unable to resolve config.agent_label to its updated value, so whatever is set on the first line is what gets passed to your second stage.
EDIT1:
env.agentName = ''
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent {
                label 'master'
            }
            steps {
                script {
                    env.agentName = 'slave'
                    echo env.agentName
                }
            }
        }
        stage('Finish') {
            steps {
                node(agentName as String) { println env.agentName }
                script {
                    echo agentName
                }
            }
        }
    }
}
Source - In a declarative jenkins pipeline - can I set the agent label dynamically?
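A hedged sketch combining the two ideas above (my own assumption, not from the original answer): read the JSON on a bootstrap node first, then allocate the real agent with a scripted node() step, since agent { label ... } is evaluated before any stage can read a file from SCM. The config file path comes from the question; readJSON requires the Pipeline Utility Steps plugin:
def config
pipeline {
    agent none
    stages {
        stage('Configure') {
            agent { label 'master' }
            steps {
                checkout scm
                // readJSON is provided by the Pipeline Utility Steps plugin
                script { config = readJSON file: 'buildjobs/buildjob-config.json' }
            }
        }
        stage('Build and Deploy') {
            steps {
                script {
                    // by now config is populated, so the label can be resolved
                    node(config.agent_label as String) {
                        echo "Running on ${env.NODE_NAME}"
                    }
                }
            }
        }
    }
}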

Can I "import" the stages in a Jenkins Declarative pipeline

I have several pipeline jobs, which are configured very similarly.
They all have the same stages (of which there are about 10).
I am now thinking about moving to the declarative pipeline (https://jenkins.io/blog/2016/09/19/blueocean-beta-declarative-pipeline-pipeline-editor/).
But I do not want to define the ~10 stages in every pipeline. I want to define them in one place and "import" them somehow.
Is this possible with declarative pipelines at all? I see that there are shared libraries, but it does not seem like I could include the stage definitions using them.
You will have to create a shared library to implement what I am about to suggest. For shared-library implementation, you may check the following posts:
Using Building Blocks in Jenkins Declarative Pipeline
Upload file in Jenkins input step to workspace (Mainly for images so one can easily figure out things)
Now if you want to use a Jenkinsfile (kind of a template) which can be reused across multiple projects (jobs), then that is indeed possible.
Once you have created a shared-library repository with a vars directory in it, you just have to create a Groovy file (let's say, commonPipeline.groovy) inside the vars directory.
Here's an example that works; I have used it in multiple jobs.
$ cat shared-lib/vars/commonPipeline.groovy
// You can create function(s) as shown below, if required
def someFunctionA() {
    // Your code
}

// This is where you will define all the stages that you want
// to run as a whole in multiple projects (jobs)
def call(Map config) {
    pipeline {
        agent {
            node { label 'slaveA || slaveB' }
        }
        environment {
            myvar_Y = 'apple'
            myvar_Z = 'orange'
        }
        stages {
            stage('Checkout') {
                steps {
                    deleteDir()
                    checkout scm
                }
            }
            stage('Build') {
                steps {
                    script {
                        check_something = someFunctionA()
                        if (check_something) {
                            echo "Build!"
                            // your_build_code
                        } else {
                            error "Something bad happened! Exiting..."
                        }
                    }
                }
            }
            stage('Test') {
                steps {
                    echo "Running tests..."
                    // your_test_code
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        sh '''
                            # your_deploy_code
                        '''
                    }
                }
            }
        }
        post {
            failure {
                sh '''
                    # anything_you_need_to_perform_in_failure_step
                '''
            }
            success {
                sh '''
                    # anything_you_need_to_perform_in_success_step
                '''
            }
        }
    }
}
With the above Groovy file in place, all you have to do now is call it from your various Jenkins projects. You probably already have a Jenkinsfile in each project (if not, create one); just replace its content with the following:
$ cat Jenkinsfile
// Assuming you have named your shared library `my-shared-lib` and set `Default version` to the `master` branch in
// the `Manage Jenkins` » `Configure System` » `Global Pipeline Libraries` section
@Library('my-shared-lib@master') _

def params = [:]
params = [
    jenkins_var: "${env.JOB_BASE_NAME}",
]
commonPipeline params
Note: As you can see above, I am calling the commonPipeline.groovy file. So all your bulky Jenkinsfiles get reduced to just five or six lines of code, and those few lines are common across all those projects. Also note that I have used jenkins_var above; it can be any name. It's not actually used here, but passing a parameter map is required for the call to work. Some Groovy expert can clarify that part.
Ref: https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/