How to add a matrix section to a Jenkins pipeline to run it on multiple nodes?

I want to add a matrix section to the following pipeline. I want the pipeline to run on 4 nodes, with each node running a different stage from the list in the for loop (e.g. one node runs AD, another runs CD, another runs DC, and the last runs DISP_A). It then repeats this behavior for the rest of the list until it has iterated to the end.
I have looked at the documentation and have not come up with any concrete answers to my question.
pipeline {
    agent none
    stages {
        stage('Test') {
            steps {
                script {
                    def test_proj_choices = ['AD', 'CD', 'DC', 'DISP_A', 'DISP_PROC', 'EGI', 'FD', 'FLT', 'FMS_C', 'IFF', 'liblO', 'libNGC', 'libSC', 'MISCMP_MP', 'MISCMP_GP', 'NAV_MGR', 'RADALT', 'SYS', 'SYSIO15', 'SYSIO42', 'SYSRED', 'TACAN', 'VOR_ILS', 'VPA', 'WAAS', 'WCA']
                    for (choice in test_proj_choices) {
                        stage("${choice}") {
                            echo "Running ${choice}"
                            build job: "UH60Job", parameters: [string(name: "TEST_PROJECT", value: choice), string(name: "SCADE_SUITE_TEST_ACTION", value: "all"), string(name: "VIEW_ROOT", value: "myview")]
                        }
                    }
                }
            }
        }
    }
}

I don't think your requirement can be expressed with the Jenkins matrix DSL, as matrix works like a single- or multi-dimensional array.
But you can do something similar by writing a small piece of Groovy logic.
Below is a small example along those lines.
It runs one task per Jenkins agent in a distributed, round-robin fashion.
That is (task - agent): A will run on agent1, B will run on agent2, C will run on agent3, D will run on agent4, E will run on agent1 again, and so on.
// No outer node block is needed: each task allocates its own agent below.
def agent = ['agent1', 'agent2', 'agent3', 'agent4']
def tasks = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
int nodeCount = 0
tasks.each {
    node(agent[nodeCount]) {
        stage("build") {
            println "Task - ${it} running on ${agent[nodeCount]}"
        }
    }
    nodeCount = nodeCount + 1
    if (nodeCount == agent.size()) {
        nodeCount = 0
    }
}
The Jenkins agents don't need to be hardcoded; using the Jenkins Groovy API, you can easily find all available and active agents, like below.
def agent = []
for (a in hudson.model.Hudson.instance.slaves) {
    if (!a.getComputer().isOffline()) {
        agent << a.name
    }
}
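Note that the example above runs the tasks one after another (each node block finishes before the next starts). If you want each batch of four to actually run concurrently, as in the original question, here is a rough sketch using collate and parallel. This is an assumption-laden illustration, not a drop-in solution (collate may need script approval in a sandboxed pipeline, and the task list is shortened here):
// Sketch: run the tasks four at a time, one per agent, in parallel batches.
def agent = ['agent1', 'agent2', 'agent3', 'agent4']
def tasks = ['AD', 'CD', 'DC', 'DISP_A', 'DISP_PROC', 'EGI', 'FD', 'FLT'] // shortened list
// Split the tasks into batches of agent.size(), then run each batch in
// parallel, one task per agent.
tasks.collate(agent.size()).each { batch ->
    def branches = [:]
    batch.eachWithIndex { task, i ->
        branches[task] = {
            node(agent[i]) {
                stage(task) {
                    echo "Running ${task} on ${agent[i]}"
                    // build job: "UH60Job", parameters: [...] as in the question
                }
            }
        }
    }
    parallel branches
}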

Related

How to run Jenkins job on multiple agents in parallel using declarative pipelines

I have a supposedly simple task: I want to run a job on multiple agents in parallel.
Even though I'm a bit of a noob with Jenkins, I've googled a bit and came to the conclusion that the preferred solution is the matrix directive.
I've read the official matrix docs and this blog post, and still can't solve it completely.
But I'm close, so I assume I just need a bit of help.
The agents I need the job to run on have the label 'vms'.
The pipeline below will run the job on only some of the agents with the 'vms' label - as many as there are values for the DUMMY_AXIS axis.
For example, if the 'vms' label covers 3 agents, the pipeline below will run the stages on 2 of the 3.
How can I fix this so that the stages run once on each agent with the given label, regardless of how many agents there are?
pipeline {
    agent none
    stages {
        stage('Update TestHostAgent') {
            matrix {
                agent {
                    label 'vms'
                }
                axes {
                    axis {
                        name 'DUMMY_AXIS'
                        values 'dummy_val_1', 'dummy_val_2'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build stage"
                        }
                    }
                    stage('Test') {
                        steps {
                            script {
                                echo "Test Step"
                            }
                        }
                    }
                }
            }
        }
    }
}
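One common workaround, instead of matrix, is to generate one parallel branch per agent carrying the label. A hedged scripted sketch, assuming the nodesByLabel step from the Pipeline Utility Steps plugin is available:
// Sketch: run the same stages once on every agent with label 'vms'.
def branches = [:]
nodesByLabel(label: 'vms').each { agentName ->
    branches[agentName] = {
        node(agentName) {
            stage("Build on ${agentName}") {
                echo "Build stage"
            }
            stage("Test on ${agentName}") {
                echo "Test Step"
            }
        }
    }
}
parallel branches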

Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"

I want to do a similar thing - call "some_job_pipeline" from a trigger pipeline, with a parameter controlling whether it executes on the same node or on some specific Jenkins node.
If it should execute on the same/master/parent job's node, it should not create a new executor. That way, if I set the executor count of node "Node1" to 1, the job would still run successfully (it would not require a second executor).
In the example I have Trigger_Main_Job, which looks something like this:
node("${params.JenkinsNode}") {
    stage("Stage 1") {
        ...
    }
    stage("some_job_pipeline") {
        build job: 'some_job_pipeline', parameters: []
    }
    stage("Stage 3") {
        ...
    }
    ...
}
and some_job_pipeline, which looks something like this:
boolean runOnItsOwnNode = params.JenkinsNode?.trim()
properties([
    parameters([
        string(name: 'JenkinsNode', defaultValue: '', description: 'Node from which to deploy. If "JenkinsNode" is not passed - then it will use parent job node.')
    ])
])
if (runOnItsOwnNode) {
    node("${params.JenkinsNode}") {
        print "Deploying from node: '${params.JenkinsNode}'"
    }
}
else {
    print "Deploying from parent job node."
    ???? THIS_IS MISSING_PART ????
}
Note: there is a similar question, but its answer points out that the parent pipeline should be changed: Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor. My question is whether it is possible to implement this, and how, without changing the "Trigger" job - so that I could create "some_job_pipeline" whose execution depends only on the passed JenkinsNode parameter and not on the parent/calling job's implementation.
I tried different variants for the "???? THIS_IS MISSING_PART ????" code part, such as node("master") {...}, no node block at all, and similar things. But no luck - "some_job_pipeline" still consumes/requires a new executor.
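For context: a Pipeline run only occupies an agent executor while it is inside a node block; steps outside any node run on a lightweight "flyweight" executor on the controller. In the scenario above, the second executor is typically consumed by the parent's node block, which keeps holding its executor while the build step waits - which is why the linked answer changes the trigger job. A hedged sketch of the child side under that assumption:
// Sketch: the else branch does not allocate a node, so the child build
// itself only uses the controller's flyweight executor. Workspace-bound
// steps (sh, bat, checkout, ...) would still force a node allocation.
if (runOnItsOwnNode) {
    node("${params.JenkinsNode}") {
        print "Deploying from node: '${params.JenkinsNode}'"
    }
} else {
    print "Deploying from parent job node."
}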

Creating a sequential step in a jenkins declarative pipeline preceding a parallel stage

I would like to set up parallel stages as described in the image.
In this instance, the setup is pretty heavy, so it has to be done once before the parallel group of G1, G2 and G3 starts. At the same time, the stage "Initial Checks" has 2 items that I would like to run in parallel.
Is this possible in a declarative pipeline, or do I have to resort to a script?
I couldn't see in the documentation how this could work.
stages {
    stage('P1') {
    }
    stage('P2 Setup') {}
    stage('P2') {
        // This can contain either steps or parallel. I can only do:
        parallel {
            stage('g1') {} // parallel tests
            stage('g2') {}
            stage('g3') {}
        }
    }
    stage('P2 Cleanup') {}
}
Have you encountered similar situations, and what have your solutions been like?
Of course, one solution is to make the setup and cleanup part of every group, but as I said, it's pretty intensive, and I would only take that on if what the diagram indicates isn't possible.
Solution 2:
stage('p2') {
    script {
        // Some scripting here to get the result?
    }
}
This is not supported by the DSL or declarative pipeline yet. You are essentially looking for nested parallel stages, as mentioned here.
The issue is still open with the Jenkins community, which you can watch here.
In your given case, you can launch stage P1 and stage setup in parallel. However, it is important to start P1 as a background process, because from your graph it appears that P1 is a time-intensive operation. Once the group stage completes, you can collect the status of P1 and proceed to S2.
stages {
    stage('Build') {
        steps {
            echo "Build"
        }
    }
    stage('Init') {
        parallel {
            stage('P1') { steps { echo "launch p1 in background" } }
            stage('setup') { steps { echo "setup" } }
        }
    }
    stage('Group') {
        parallel {
            stage('G1') { steps { echo "g1" } }
            stage('G2') { steps { echo "g2" } }
            stage('G3') { steps { echo "g3" } }
        }
    }
    stage('Cleanup') {
        steps {
            echo "cleanup"
        }
    }
    stage('Check P1 status') {
        steps {
            echo "Check"
        }
    }
    stage('S2') {
        steps {
            echo "S2"
        }
    }
}
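To make the "P1 in the background" idea concrete, here is a scripted sketch (the job name 'P1' is hypothetical): P1 runs concurrently with the setup/group/cleanup sequence, and its result is collected before S2.
// Scripted sketch: nested parallel is allowed in scripted pipelines.
node {
    def p1Result = 'UNKNOWN'
    parallel(
        'P1 (background)': {
            // Run P1 as a downstream build without failing this pipeline.
            p1Result = build(job: 'P1', propagate: false).result
        },
        'Main flow': {
            stage('Setup') { echo 'setup' }
            stage('Group') {
                parallel(
                    G1: { echo 'g1' },
                    G2: { echo 'g2' },
                    G3: { echo 'g3' }
                )
            }
            stage('Cleanup') { echo 'cleanup' }
        }
    )
    stage('S2') {
        echo "S2 - P1 finished with status ${p1Result}"
    }
}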
I think you are looking for this:
node {
    stage("P1") {}
    stage("p2") {}
    stage("p3") {
        parallel(
            "firstTask": {
            },
            "secondTask": {
            }
        )
    }
}

Running stages in parallel with Jenkins workflow / pipeline

Please note: this question is based on the old, now called "scripted", pipeline format. When using declarative pipelines, parallel blocks can be nested inside of stage blocks (see Parallel stages with Declarative Pipeline 1.2).
I'm wondering how parallel steps are supposed to work with the Jenkins workflow/pipeline plugin, especially how to mix them with build stages. I know about the general pattern:
parallel(firstTask: {
    // Do some stuff
}, secondTask: {
    // Do some other stuff in parallel
})
However, I'd like to run a couple of stages in parallel (on the same node, which has multiple executors), so I tried to add stages like this:
stage 'A'
// Do some preparation stuff
parallel(firstTask: {
    stage 'B1'
    // Do some stuff
}, secondTask: {
    stage 'B2'
    // Do some other stuff in parallel
})
stage 'C'
// Finalizing stuff
This does not work as expected. The "do stuff" tasks are executed in parallel, but the parallel stages end immediately and do not incorporate the stuff they should contain. As a consequence, the Stage View does not show the correct result and also does not link the logs.
Can I build different stages in parallel, or is the "parallel" step only meant to be used within a single stage?
You may not place the deprecated non-block-scoped stage (as in the original question) inside parallel.
As of JENKINS-26107, stage takes a block argument. You may put parallel inside stage, stage inside parallel, stage inside stage, and so on (a sketch follows the list below). However, visualizations of the build are not guaranteed to support all nestings; in particular:
The built-in Pipeline Steps (a “tree table” listing every step run by the build) shows arbitrary stage nesting.
The Pipeline Stage View plugin will currently only display a linear list of stages, in the order they started, regardless of nesting structure.
Blue Ocean will display top-level stages, plus parallel branches inside a top-level stage, but currently no more.
JENKINS-27394, if implemented, would display arbitrarily nested stages.
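A minimal scripted sketch of the block-scoped syntax, mirroring the A/B1/B2/C structure from the question (the Stage View caveats above still apply):
node {
    stage('A') {
        // Do some preparation stuff
        echo 'prepare'
    }
    stage('B') {
        parallel(
            firstTask: {
                stage('B1') {
                    echo 'some stuff'
                }
            },
            secondTask: {
                stage('B2') {
                    echo 'some other stuff in parallel'
                }
            }
        )
    }
    stage('C') {
        // Finalizing stuff
        echo 'finalize'
    }
}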
That syntax is now deprecated; you will get this error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 14: Expected a stage # line 14, column 9.
parallel firstTask: {
^
WorkflowScript: 14: Stage does not have a name # line 14, column 9.
parallel secondTask: {
^
2 errors
You should do something like:
stage("Parallel") {
    steps {
        parallel(
            "firstTask": {
                // do some stuff
            },
            "secondTask": {
                // Do some other stuff in parallel
            }
        )
    }
}
Just to add the use of node here, to distribute jobs across multiple build servers/VMs:
pipeline {
    agent none
    stages {
        stage("Work 1") {
            steps {
                parallel(
                    "Build common Library": {
                        node('<Label>') {
                            // your stuff
                        }
                    },
                    "Build Utilities": {
                        node('<Label>') {
                            // your stuff
                        }
                    }
                )
            }
        }
    }
}
All VMs should share a label so they can be used as a pool.
I have just tested the following pipeline and it works:
parallel firstBranch: {
    stage('Starting Test') {
        build job: 'test1', parameters: [string(name: 'Environment', value: "$env.Environment")]
    }
}, secondBranch: {
    stage('Starting Test2') {
        build job: 'test2', parameters: [string(name: 'Environment', value: "$env.Environment")]
    }
}
This job, named 'trigger-test', accepts one parameter named 'Environment'.
Jobs 'test1' and 'test2' are simple jobs.
For example, 'test1' has one parameter named 'Environment' and the pipeline: echo "$env.Environment-TEST1"
On execution, I am able to see both stages running at the same time.
I think this has been officially implemented now:
https://jenkins.io/blog/2017/09/25/declarative-1/
As @Quartz mentioned, you can do something like:
stage('Tests') {
    parallel(
        'Unit Tests': {
            container('node') {
                sh("npm test --cat=unit")
            }
        },
        'API Tests': {
            container('node') {
                sh("npm test --cat=acceptance")
            }
        }
    )
}

Aggregating results of downstream parameterised jobs in Jenkins

I have a Jenkins Build job which triggers multiple Test jobs, with the test name as a parameter, using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors, which all run correctly.
I now want to aggregate the results using 'Aggregate downstream test results -> Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. On the Build job's lastBuild page I can see that they are recognised as downstream builds:
Downstream Builds
Test #1-#3
When I click on "Aggregated Test Results", however, it only shows the latest of these (Test #3). This might be good behaviour if the job always ran the same tests, but mine all run different parts of my test suite.
Is there some way I can get this to aggregate all of the relevant downstream Test builds?
Additional note:
Aggregated Test Results does work if you replicate the Test job. This is not ideal, as I have a large number of test suites.
I'll outline the manual solution (as mentioned in the comments), and can provide more details later if you need them:
Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).
An instance (build) of P invokes D via the Parameterized Trigger Plugin as a build step (not as a post-build step) and waits for D to finish. Along with other parameters, P passes to D a parameter - let's call it PARENT_ID - based on P's build's BUILD_ID.
D executes the tests and archives them as artifacts (along with jUnit reports, if applicable).
P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (iterate over the builds of D and examine the value of the PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.
If using Python (that's what I do), utilize the Python JenkinsAPI wrapper. If using Groovy, utilize the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
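As a rough illustration of the lookup step, a Groovy system-script sketch (run via the Groovy Plugin; the names D and PARENT_ID follow the outline above - adapt them to your jobs):
// Sketch: find the build of D that this build of P triggered, by
// matching D's PARENT_ID parameter against P's BUILD_ID.
def parentId = build.id  // current build of P ('build' is bound in system scripts)
def d = jenkins.model.Jenkins.instance.getItemByFullName('D')
def match = d.builds.find { b ->
    b.getAction(hudson.model.ParametersAction)
     ?.getParameter('PARENT_ID')?.value == parentId
}
println(match ? "Found D build #${match.number}" : "No matching build of D found")
// From here, copy the artifacts of 'match' into P's workspace and publish
// them, e.g. with the Copy Artifact plugin.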
I came up with the following solution using declarative pipelines.
It requires the installation of the Copy Artifact plugin.
In the downstream job, set an env variable with the path (or pattern path) of the result file:
post {
    always {
        script {
            // Note: must be set BEFORE any execution that may fail
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        xunit([GoogleTest(
            pattern: env.RESULT_FILE,
        )])
    }
}
Note that I use xunit here, but the same applies with junit.
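For example, with junit the downstream post block might look like this (a sketch; the report path is hypothetical):
post {
    always {
        script {
            // Note: must be set BEFORE any execution that may fail
            env.RESULT_FILE = 'reports/results.xml'  // hypothetical JUnit report path
        }
        junit env.RESULT_FILE
    }
}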
In the parent job, save the build handles, then aggregate the results in the post section with the following code:
def runs = []
pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('test A') {
                    steps {
                        script {
                            runs << build(job: "test A", propagate: false)
                        }
                    }
                }
                stage('test B') {
                    steps {
                        script {
                            runs << build(job: "test B", propagate: false)
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = 'SUCCESS'
                def result_files = []
                runs.each {
                    if (it.result != 'SUCCESS') {
                        currentBuild.result = it.result
                    }
                    copyArtifacts(
                        filter: it.buildVariables.RESULT_FILE,
                        fingerprintArtifacts: true,
                        projectName: it.getProjectName(),
                        selector: specific(it.getNumber().toString())
                    )
                    result_files << it.buildVariables.RESULT_FILE
                }
                env.RESULT_FILE = result_files.join(',')
                println('Results aggregated from ' + env.RESULT_FILE)
            }
            archiveArtifacts env.RESULT_FILE
            xunit([GoogleTest(
                pattern: env.RESULT_FILE,
            )])
        }
    }
}
Note that the parent job also sets the env variable, so it can itself be aggregated by a further parent job.
