Skip Jenkins Pipeline Steps If Node Is Offline

I have a Jenkins Pipeline job that, for part of the build, uses a node with a lot of downtime. I'd like that step to be performed when the node is online, and skipped (without failing the build) when the node is offline.
This is related to, but different from, the problem of skipping parts of a Matrix Project.
I tried to programmatically check whether a node is online, like so:
jenkins.model.Nodes.getNode('my-node').toComputer().isOnline()
This runs up against the Jenkins security sandbox:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: unclassified method java.lang.Class getNode java.lang.String
I tried setting a timeout that would be tripped if the node is offline:
try {
    timeout(time: 10, unit: 'MINUTES') {
        node('my-node') {
            // Do optional step
        }
    }
} catch (e) {
    echo 'Time out on optional step. Node down?'
}
This has a major downside: I have to know the longest time the step could take, and then wait even longer than that whenever the node is down. I tried working around that with a "canary" step:
try {
    timeout(time: 1, unit: 'SECONDS') {
        node('my-node') {
            echo 'Node is up. Performing optional step.'
        }
    }
    node('my-node') {
        echo 'This is an optional step.'
    }
} catch (e) {
    echo 'Time out on optional step. Node down?'
}
The downside is that this also skips the step when the node is up but busy with another job. It's the best solution I have come up with so far. Is there a way to simply check whether the node is online, without using a timeout?

This should work:
Jenkins.instance.getNode('my-node').toComputer().isOnline()
see http://javadoc.jenkins-ci.org/jenkins/model/Jenkins.html
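As a minimal scripted sketch of how that check could guard the optional node block: note that Jenkins.instance is not whitelisted in the script security sandbox (the same restriction the question ran into), so this would need script approval or would have to live in a trusted shared library. The helper name isNodeOnline is just for illustration; 'my-node' is the example node from the question.
// Sketch only: requires script approval or a trusted shared library,
// because Jenkins.instance is not whitelisted in the sandbox.
def isNodeOnline(String name) {
    def n = jenkins.model.Jenkins.instance.getNode(name)   // null if no such node exists
    return n?.toComputer()?.isOnline() ?: false
}

if (isNodeOnline('my-node')) {
    node('my-node') {
        echo 'Node is up. Performing optional step.'
    }
} else {
    echo 'my-node is offline. Skipping optional step.'
}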

There is a pipeline call for this.
nodesByLabel 'my-node'
It returns an empty list ([]) if no node with that label is online; otherwise it returns a list of the online nodes.
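As a hedged sketch of how that could replace the timeout-based canary from the question (nodesByLabel comes from the Pipeline Utility Steps plugin; 'my-node' is the example label):
// Sketch: skip the optional step if no node with the label is online.
// nodesByLabel is provided by the Pipeline Utility Steps plugin.
if (!nodesByLabel('my-node').isEmpty()) {
    node('my-node') {
        echo 'Node is up. Performing optional step.'
    }
} else {
    echo 'No online node with label my-node. Skipping optional step.'
}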

I simply did this:
pipeline {
    agent none
    environment { AGENT_NODE = "somenode" }
    stages {
        stage('Offline Node') {
            when {
                beforeAgent true
                expression {
                    return nodesByLabel(env.AGENT_NODE).size() > 0
                }
            }
            agent {
                label "${env.AGENT_NODE}"
            }
            steps {
                ...
            }
        }
    }
}

Related

Can Jenkins pipelines have variable stages?

From my experience with Jenkins declarative-syntax pipelines, I'm aware that you can conditionally skip a stage with a when clause. E.g.:
run_one = true
run_two = false
run_three = true

pipeline {
    agent any
    stages {
        stage('one') {
            when {
                expression { run_one }
            }
            steps {
                echo 'one'
            }
        }
        stage('two') {
            when {
                expression { run_two }
            }
            steps {
                echo 'two'
            }
        }
        stage('three') {
            when {
                expression { run_three }
            }
            steps {
                echo 'three'
            }
        }
    }
}
...in the above code block, there are three stages, one, two, and three, each of whose execution is conditional on a boolean variable.
I.e. the paradigm is that there is a fixed superset of known stages, of which individual stages may be conditionally skipped.
Does Jenkins pipeline script support a model where there is no fixed superset of known stages, and stages can be "looked up" for conditional execution?
To phrase it as pseudocode, is something along the lines of the following possible:
my_list = list populated _somehow_, maybe reading a file, maybe Jenkins build params, etc.
pipeline {
    agent any
    stages {
        if (stage(my_list[0]) exists) {
            run(stage(my_list[0]))
        }
        if (stage(my_list[1]) exists) {
            run(stage(my_list[1]))
        }
        if (stage(my_list[2]) exists) {
            run(stage(my_list[2]))
        }
    }
}
?
I think another way to think about what I'm asking is: is there a way to dynamically build a pipeline from some dynamic assembly of stages?
For dynamic stages you could either write a fully scripted pipeline or use a declarative pipeline with a scripted section (e.g. by using the script {…} step or calling your own function). For an overview see Declarative versus Scripted Pipeline syntax and Pipeline syntax overview.
Declarative pipelines are better supported by Blue Ocean, so I would personally use that as a starting point. A disadvantage is that you need a fixed root stage, but I usually name it "start" or "init" so it doesn't look too awkward.
In scripted sections you can call stage as a function, so it can be used completely dynamically.
pipeline {
    agent any
    stages {
        stage('start') {
            steps {
                createDynamicStages()
            }
        }
    }
}

void createDynamicStages() {
    // Stage list could be read from a file or whatever
    def stageList = ['foo', 'bar']
    for( stageName in stageList ) {
        stage( stageName ) {
            echo "Hello from stage $stageName"
        }
    }
}
This shows in Blue Ocean like this:

Jenkins execute all sub jobs before marking a top job fail or pass?

def jobs = [
    'subjob1': true,
    'subjob2': false,
    'subjob3': true
]

pipeline
{
    agent { label "ag1" }
    stages
    {
        stage('stage1')
        {
            steps
            {
                script
                {
                    jobs.each
                    {
                        if ("$it.value".toBoolean())
                        {
                            stage("Stage $it.key")
                            {
                                build([job:"$it.key", wait:true, propagate:true])
                            }
                        }
                    }
                }
            }
        }
    }
}
This Jenkins job triggers other sub-jobs (via the pipeline build step): subjob1, subjob2, subjob3. If any of the sub-jobs fails, this job immediately fails (propagate: true).
However, what I'd like to do is continue executing all the sub-jobs, and mark this job as failed if one or more of them fail. How would I do that?
Here is how you can do it. You can simply use a catchError block for this.
def jobs = [
    'subjob1': true,
    'subjob2': false,
    'subjob3': true
]

pipeline
{
    agent any
    stages
    {
        stage('stage1')
        {
            steps
            {
                script
                {
                    jobs.each
                    {
                        if ("$it.value".toBoolean())
                        {
                            stage("Stage $it.key")
                            {
                                catchError(buildResult: 'FAILURE', stageResult: 'FAILURE')
                                {
                                    echo "Building"
                                    build([job:"$it.key", wait:true, propagate:true])
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Instead of executing all the jobs one by one, you can execute them in parallel. This way, all the jobs will be executed independently of each other, and stage1 will fail only if one or more of the jobs fail.
According to the documentation
The parallel directive takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch.
So, we have to transform the jobs to a Map of stage names to Closures that will execute in parallel. We will use jobs.collectEntries() to build the mapping and pass it as the argument to the parallel directive:
stage('Parallel') {
    steps {
        script {
            parallel(jobs.collectEntries {
                [(it.key): {
                    if (it.value) {
                        build(job: it.key)
                    } else {
                        echo "Skipping job execution: ${it.key}"
                        // This is required to mark the parallel stage as skipped - it is not required for the solution to work
                        org.jenkinsci.plugins.pipeline.modeldefinition.Utils.markStageSkippedForConditional(it.key)
                    }
                }]
            })
        }
    }
}
We can omit the wait and propagate flags in the build step because they are set by default.
In the provided solution, the Parallel stage (and the resulting build) will fail only if one or more of the jobs fail. Additionally, if you have the Blue Ocean plugin installed, you will see a nice graph of the Parallel stage along with all its parallel children.

Jenkins pipeline - building parallel with map received from function

Recently I tried to build a Jenkins pipeline with a large number of 'Tests' in one of its stages.
The thing is, at some point I got an error saying my stages section was too large, so I tried to solve it with a function that builds all of my stages, so that I can run the function's output (a map of stages) in parallel.
Some of the stages have to run on an agent (node) selected by label, and others have some unique steps in them.
I am trying to understand, in general, how I can write a function that builds a map to run in parallel, but I have not been successful, nor have I found any good example of it online.
I know the question is general, but if anyone can point me to some examples, or just write one, it would be great.
This is the snippet I am working on (not the full Jenkinsfile):
def getParallelBuilders(list_arr) {
    def builders = [:]
    builders['Test-1'] =
        stage ('Test-1')
        {
            node('ci-nodes')
            {
                when {
                    environment name: 'TEST_NAME', value: 'true'
                    beforeAgent true
                }
                timeout(time: 1, unit: 'HOURS')
                script { runtests() }
                post {
                    success { onTestSuccess title: 'Temp', pytest: 'results.xml' }
                    cleanup { afterTestCleanup2("clean") }
                }
            }
        }
    return builders
}
The call to this function happens from my 'pipeline' block, after the build, configure, etc. stages:
stage('Testing') {
    steps {
        script { parallel getParallelBuilders(list_arr) }
    }
}
I'm not sure whether my approach to this problem is right at all; hopefully someone can point me in the right direction.
After a while, here is the solution I arrived at:
builders = [
    'Test1':
    {
        stage ('Test1')
        {
            if (RUN_TESTS == 'true')
            {
                timeout(time: 30, unit: 'MINUTES')
                {
                    node('ci-nodes')
                    {
                        try
                        {
                            runtests()
                            onTestSuccess title: 'Temp', pytest: 'results.xml'
                        }
                        catch (err)
                        {
                            onTestFailure testName: "Test1"
                        }
                        finally
                        {
                            afterTestCleanup()
                        }
                    }
                }
            }
        }
    }
]
The main issue was understanding the scripted pipeline syntax, which is really different from the declarative way.

Jenkins Declarative Pipeline detect first run and fail when choice parameters present

I often write Declarative Pipeline jobs where I set up parameters such as "choice". The first time I run the job, it blindly executes using the first value in the list. I don't want that to happen. I want to detect this case and not continue the job if a user didn't select a real value.
I thought about using "SELECT_VALUE" as the first item in the list and then fail the job if that is the value. I know I can use a 'when' condition on each stage, but I'd rather not have to copy that expression to each stage in the pipeline. I'd like to fail the whole job with one check up front.
I don't like the UI for 'input' tasks because the controls are hidden until you hover over a running stage.
What is the best way to validate arguments with a Declarative Pipeline? Is there a better way to detect when the job is run for the first time and stop?
I've been trying to figure this out myself, and it looks like the pipeline runs with a fully populated parameters list.
So the answer for your choice parameter is to make the first item a placeholder value like "select" and have your code use a when condition to check for it.
For example:
def paramset = true

pipeline {
    agent any
    parameters {
        choice(choices: ['select', 'test', 'proof', 'prod'], name: 'ENVI')
    }
    stages {
        stage ('check') {
            when { expression { return params.ENVI == 'select' } }
            steps {
                script {
                    echo "Missing parameters"
                    paramset = false
                }
            }
        }
        stage ('step 1') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 1"
                }
            }
        }
        stage ('step 2') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 2"
                }
            }
        }
        stage ('step 3') {
            when { expression { return paramset } }
            steps {
                script {
                    echo "Doing step 3"
                }
            }
        }
    }
}
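If you would rather fail the whole job with a single up-front check (as the question asks) instead of repeating the when condition on every stage, a minimal sketch along these lines should also work. It is an assumption-laden variation, not the answer above: the error step aborts the build immediately, the parameter name ENVI and the placeholder value 'select' are taken from the answer, and the stage names are hypothetical.
pipeline {
    agent any
    parameters {
        choice(choices: ['select', 'test', 'proof', 'prod'], name: 'ENVI')
    }
    stages {
        stage('Validate parameters') {
            steps {
                script {
                    // Abort the whole build if the placeholder value is still selected
                    if (params.ENVI == 'select') {
                        error 'No environment selected. Re-run the job and pick a real value.'
                    }
                }
            }
        }
        stage('Do work') {
            steps {
                echo "Running against ${params.ENVI}"
            }
        }
    }
}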

Can I build stages with a function in a Jenkinsfile?

I'd like to use a function to build some of the stages of my Jenkinsfile. This is going to be a build with a number of repetitive stages/steps, and I'd rather not generate everything manually.
I was wondering if it's possible to do something like this:
_make_stage() {
    stage("xx") {
        step("A") {
            echo "A"
        }
        step("B") {
            echo "B"
        }
    }
}

_make_stages() {
    stages {
        _make_stage()
    }
}

// pipeline starts here!
pipeline {
    agent any
    _make_stages()
}
Unfortunately Jenkins doesn't like this; when I run it I get the error:
WorkflowScript: 24: Undefined section "_make_stages" @ line 24, column 5.
    _make_stages()
    ^
WorkflowScript: 22: Missing required section "stages" @ line 22, column 1.
pipeline {
^
So what's going wrong here? The function _make_stages() really looks like it returns whatever the stages object returns. Why does it matter whether I put that in a function call or just inline it into the pipeline definition?
As explained here, Pipeline "scripts" are not simple Groovy scripts; they are heavily transformed before running, some parts on the master, some parts on the slaves, with their state (variable values) serialized and passed to the next step. As such, not every Groovy feature is supported, and what looks like a simple function really is not one.
That does not mean what you want to achieve is impossible. You can create stages programmatically, just apparently not with the declarative syntax. See also this question for good suggestions.
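For illustration, a minimal fully scripted sketch, assuming you can drop the declarative pipeline {} block entirely: in a scripted Jenkinsfile, stage is just a step, so it can be wrapped in an ordinary Groovy function (makeStage and the stage names here are hypothetical).
// Scripted pipeline sketch: no declarative pipeline {} block.
def makeStage(String name) {
    stage(name) {
        echo "Running ${name}"
    }
}

node {    // scripted pipelines allocate their own agent
    makeStage('build')
    makeStage('test')
}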
You can define a declarative pipeline in a shared library, for example:
// vars/evenOrOdd.groovy
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
// Jenkinsfile
@Library('my-shared-library') _
evenOrOdd(currentBuild.getNumber())
See Defining Declarative Pipelines
