I have a Jenkins Build job which triggers multiple Test jobs with the test name as a parameter using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors which all run correctly.
I now want to aggregate the results using 'Aggregate downstream test results -> Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. In the Build job's lastBuild page I can see that they are recognised as downstream builds:
Downstream Builds
Test #1-#3
When I click on "Aggregated Test Results", however, it only shows the latest of these (Test #3). This may be good behaviour if the job always runs the same tests, but mine all run different parts of my test suite.
Is there some way I can get this to aggregate all of the relevant downstream Test builds?
Additional:
Aggregated Test Results does work if you replicate the Test job. This is not ideal as I have a large number of test suites.
I'll outline the manual solution (as mentioned in the comments), and provide more details if you need them later:
Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).
An instance (build) of P invokes D via the Parameterized Trigger Plugin as a build step (not as a post-build step) and waits for D's build to finish. Along with other parameters, P passes a parameter to D - let's call it PARENT_ID - based on P's BUILD_ID.
D executes the tests and archives the results as artifacts (along with JUnit reports, if applicable).
P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (you iterate over the builds of D and examine the value of the PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.
If using Python (that's what I do), utilize the Python JenkinsAPI wrapper. If using Groovy, utilize the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
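For the Groovy variant, a minimal system-script sketch (run from P via the Groovy Plugin as a system Groovy script) could look like the following; it assumes the downstream job is literally named 'D', the parameter is PARENT_ID, and the default artifact manager is in use, so adapt names and paths to your setup:
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.ParametersAction
import hudson.FilePath

// 'build' and 'listener' are bound automatically in a system Groovy build step
def parentId = build.getEnvironment(listener).get('BUILD_ID')

// Find the build of D that was triggered by this build of P
def downstream = Jenkins.instance.getItemByFullName('D', Job.class)
def match = downstream.builds.find { run ->
    run.getAction(ParametersAction)?.getParameter('PARENT_ID')?.value == parentId
}

if (match != null) {
    // Copy D's archived artifacts into P's workspace so P can publish them
    def source = new FilePath(match.artifactsDir)
    source.copyRecursiveTo(build.workspace.child('downstream-results'))
}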
I came up with the following solution using declarative pipelines.
It requires installation of the "Copy Artifact" plugin.
In the downstream job, set an "env" variable with the path (or pattern) of the result file:
post {
    always {
        script {
            // Rem: Must be BEFORE execution that may fail
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        xunit([GoogleTest(
            pattern: env.RESULT_FILE,
        )])
    }
}
Note that I use xunit, but the same applies with junit.
In the parent job, keep the build objects returned by the build steps; then, in the post section, aggregate the results with the following code:
def runs=[]
pipeline {
agent any
stages {
stage('Tests') {
parallel {
stage('test A') {
steps {
script {
runs << build(job: "test A", propagate: false)
}
}
}
stage('test B') {
steps {
script {
runs << build(job: "test B", propagate: false)
}
}
}
}
}
}
post {
always {
script {
currentBuild.result = 'SUCCESS'
def result_files = []
runs.each {
if (it.result != 'SUCCESS') {
currentBuild.result = it.result
}
copyArtifacts(
filter: it.buildVariables.RESULT_FILE,
fingerprintArtifacts: true,
projectName: it.getProjectName(),
selector: specific(it.getNumber().toString())
)
result_files << it.buildVariables.RESULT_FILE
}
env.RESULT_FILE = result_files.join(',')
println('Results aggregated from ' + env.RESULT_FILE)
}
archiveArtifacts env.RESULT_FILE
xunit([GoogleTest(
pattern: env.RESULT_FILE,
)])
}
}
}
Note that the parent job also sets the "env" variable, so it can itself be aggregated by a parent job.
I want to add a matrix section to the following pipeline. I want the pipeline to run on 4 nodes with each node running a different stage that is specified in the for loop (e.g. one node runs AD, another runs CD, another runs DC, and the last runs DISP_A; it then repeats this behaviour for the rest of the list until it has iterated to the end of the list).
I have looked at the documentation and have not come up with any concrete answers to my question.
pipeline
{
agent none
stages
{
stage ('Test')
{
steps
{
script
{
def test_proj_choices = ['AD', 'CD', 'DC', 'DISP_A', 'DISP_PROC', 'EGI', 'FD', 'FLT', 'FMS_C', 'IFF', 'liblO', 'libNGC', 'libSC', 'MISCMP_MP', 'MISCMP_GP', 'NAV_MGR', 'RADALT', 'SYS', 'SYSIO15', 'SYSIO42', 'SYSRED', 'TACAN', 'VOR_ILS', 'VPA', 'WAAS', 'WCA']
for (choice in test_proj_choices)
{
stage ("${choice}")
{
echo "Running ${choice}"
build job: "UH60Job", parameters: [string(name: "TEST_PROJECT", value: choice), string(name: "SCADE_SUITE_TEST_ACTION", value: "all"), string(name: "VIEW_ROOT", value: "myview")]
}
}
}
}
}
}
}
I don't think your requirement can be expressed with the Jenkins matrix DSL, as the matrix DSL works like a single- or multi-dimensional array.
But you can do something similar by writing a bit of Groovy logic.
Below is a small example similar to what you want:
This example runs one task per Jenkins agent in a distributed, round-robin fashion, i.e. (task - agent): A runs on agent1, B on agent2, C on agent3, D on agent4, E on agent1, and so on.
node {
agent=['agent1','agent2','agent3', 'agent4']
tasks=['A','B','C','D','E','F','G','H','I','J','K','L']
int nodeCount=0
tasks.each {
node(agent[nodeCount]) {
stage("build") {
println "Task - ${it} running on ${agent[nodeCount]}"
}
}
nodeCount=nodeCount+1
if(nodeCount == agent.size()){
nodeCount=0
}
}
}
The Jenkins agents don't need to be hardcoded; using the Jenkins Groovy API, you can easily find all the available and active agents, like below.
agent=[]
for (a in hudson.model.Hudson.instance.slaves) {
if(!a.getComputer().isOffline()) {
agent << a.name
}
}
Start Jenkins job immediately after creation by seed job
I can start a job from within the job dsl like this:
queue('my-job')
But how do I start a job with arguments or parameters? I want to pass that job some arguments somehow.
AFAIK, you can't.
But what you can do is create it from a pipeline (jobDsl step), then run it. Something more or less like...
pipeline {
    agent any   // declarative pipelines need an agent section
    stages {
        stage('jobs creation') {
            steps {
                jobDsl targets: 'my_job.dsl',
                       additionalParameters: [REQUESTED_JOB_NAME: "my_job's_name"]
                build job: "my_job's_name",
                      parameters: [booleanParam(name: 'DRY_RUN', value: true)]
            }
        }
    }
}
With a barebones 'my_job.dsl'...
pipelineJob(REQUESTED_JOB_NAME) {
definition {
// blah...
}
}
NOTE: As you see, I explicitly set the name of the job from the calling pipeline (the REQUESTED_JOB_NAME var) because otherwise I don't know how to make the jobDsl code return the name of the job it creates back to the calling pipeline.
I use this "trick" to avoid the "job params go one run behind" problem. I use the DRY_RUN param of the job (I use a hidden param, in fact) to run a "do-nothing" build as its name implies, so by the time others need to use the job for "real stuff" its params section has already been properly parsed.
I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it's passing as JOB_NAME, which is the BRANCH name; sadly none of the envs look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
parameters([
string(name: 'myParam', defaultValue: '')
])
])
Then, once the build has run once, you will see the "build with parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam, as in the sketch below.
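Putting it together, a minimal scripted sketch (assuming the same myParam name as above) could look like:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Use the parameter') {
        // The value is available from the second run onwards
        // (the first run only registers the parameter)
        echo "myParam is: ${params.myParam}"
    }
}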
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
if (buildNumber % 2 == 0) {
pipeline {
agent any
stages {
stage('Even Stage') {
steps {
echo "The build number is even"
}
}
}
}
} else {
pipeline {
agent any
stages {
stage('Odd Stage') {
steps {
echo "The build number is odd"
}
}
}
}
}
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but you can have any number of parameters of any type.
I use this approach: one of my Groovy scripts has 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library; it works perfectly. The motivation is of course one of the most basic rules of programming (not an exact quote, but it goes something like): if you have the "same code" in 2 different places, something is not right. A multi-parameter variant is sketched below.
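For reference, a hypothetical multi-parameter variant (the names myOtherPipeline, product and branch are made up for illustration, mirroring the 2-Strings-plus-an-int case) could live in vars/myOtherPipeline.groovy:
def call(String product, String branch, int buildNumber) {
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    echo "Build #${buildNumber}: ${product} on branch ${branch}"
                }
            }
        }
    }
}
and be called from a Jenkinsfile as:
@Library('myCoolLib') _
myOtherPipeline('productA', 'develop', currentBuild.getNumber())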
There is an option This project is parameterized in your pipeline job configuration. Enter the variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
Below is my use case:
I have jobs A, B and C - A is upstream, and B and C are downstream jobs. When a patch set is created in Gerrit, the patchset-created event triggers job A, and based on the result of this job we trigger B and C. After B and C have executed, I want to display the result of all three jobs on the Gerrit patch set, like:
Job A SUCCESS
JOB B SUCCESS
JOB C FAILED
Right now I see only Job A's build result showing up on the Gerrit patch set, as:
JOB A SUCCESS
Is there any way to do this?
Do the following:
1) Configure all jobs (A, B and C) to trigger when the patch set is created.
2) Configure the jobs B and C to depend on job A
2.1) Click on "Advanced..." in Gerrit Trigger job configuration
2.2) Add the job A on the "Other jobs on which this job depends" field
With this configuration, jobs B and C will wait for job A to finish before they start, and you'll get the result you want:
The best way to solve this is to create a small wrapper pipeline job. Let's name it Build_ABC.
Configure Build_ABC to trigger on the Gerrit event you wish. The job will be responsible for running the other 3 builds, and in the event of any failures in these jobs your Build_ABC will fail and report this back to Gerrit. You will not be able to see immediately which job failed in your Gerrit message, but you will be able to see it in your Jenkins pipeline overview.
In the scripted pipeline below you see a pipeline that calls Build_A and waits for the result. If the build succeeds, it will continue to execute Build B and C in parallel. In my example I made Build C fail, which caused the whole pipeline job to fail.
This is a revised version of my first answer, and the script has grown a bit. As it is required to have the individual build results in the message posted to Gerrit, the pipeline has been changed to catch the individual results and record them. If build A fails, builds B and C will be skipped and their status will be reported as Skipped.
Next, it is possible to use the gerrit review SSH command-line tool to perform a manual review. This way a custom message can be generated to include the individual build results. It looks like the screenshot below:
I haven't figured out how to make it a multi-line comment, but there is also an option to use JSON on the command line; have a look at that.
def build_a = "Skipped"
def build_b = "Skipped"
def build_c = "Skipped"
def build_result = "+1"
try {
stage("A") {
try {
build( job: '/Peter/Build_A', wait: true)
build_a = "Pass"
} catch (e) {
build_a = "Failed"
// re-throw to make the build fail
// throwing here will prevent B+C from running
throw e
}
}
stage("After A") {
parallel B: {
try {
build( job: '/Peter/Build_B', wait: true)
build_b = "Pass"
} catch (e) {
build_b = "Failed"
// re-throw to make the build fail
throw e
}
}, C: {
try {
build( job: '/Peter/Build_C', wait: true)
build_c = "Pass"
} catch (e) {
build_c = "Failed"
// re-throw to make the build fail
throw e
}
}
}
} catch(e) {
build_result = "-1"
// re-throw to make the build fail
throw e
} finally {
node('master') {
// Perform a custom review using the environment vars
sh "ssh -p ${env.GERRIT_PORT} ${env.GERRIT_HOST} gerrit review --verified ${build_result} -m '\"Build A:${build_a} Build B: ${build_a} Build C: ${build_c}\"' ${env.GERRIT_PATCHSET_REVISION}"
}
}
Next you should configure the Gerrit trigger to ignore the results from Jenkins or else there will be a double vote.
One more advantage is that with the Blue Ocean plugin you can get a nice graphical representation of your build (see the image below), and you can examine what went wrong by clicking on the jobs.
I have four jobs - A, B, C, D:
A- Build
B- Test
C- Sonar Analysis
D- Deploy
My scenario-
1- I need to create a Pipeline
A->B->C
2- I need to create another Pipeline
A->B->D
My issue is--
1- If I select "Trigger Parameterized builds on other projects" and add Job B under Job A, I can't use Job A for my second scenario.
How should I use Job A for both pipelines without affecting either?
If I understand the question correctly, you are looking to create two pipelines. In the pipelines, you can define which jobs to build using stages.
For your requirement, you need to create two pipelines and define stages according to your needs. Trigger Parameterized builds on other projects is not a suitable option for you.
stage('Build A') {
build job: 'A' , parameters: <Give_your_own_parameters>
}
stage('Build B') {
build job: 'B' , parameters: <Give_your_own_parameters>
}
stage('Build C') {
build job: 'C' , parameters: <Give_your_own_parameters>
}
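And the second pipeline (A -> B -> D) simply reuses jobs A and B unchanged, with the same placeholder parameters:
stage('Build A') {
    build job: 'A' , parameters: <Give_your_own_parameters>
}
stage('Build B') {
    build job: 'B' , parameters: <Give_your_own_parameters>
}
stage('Build D') {
    build job: 'D' , parameters: <Give_your_own_parameters>
}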
You can also get the syntax from Pipeline Syntax in the Pipeline Section of the pipeline you are building.
You can ease the process by creating duplicates of projects A and B. A project can easily be duplicated via New Item > Copy from, using Project A as the source.
This is not easily feasible, for a simple reason: you are not using Jenkins jobs the way they were meant to be used.
The concept of a job in Jenkins is that a job is a sequence of actions. A job does not have a single responsibility such as just building, just testing or just deploying. In your case, you should have 3 "actions" in a job, and 3 "actions" in another job.
The freestyle job approach
The common approach would be something like:
Build job
Action 1 : build
Action 2 : test
Action 3 : run Sonar analysis
Deploy job
Action 1 : build
Action 2 : test
Action 3 : deploy
Unless I'm missing something here, you don't want to separate these 3 or 4 actions into 4 separate jobs, as this would be highly inefficient. For example, the test phase and the Sonar analysis should probably run just after the code has been built, so you want to share the same workspace to be able to test your built code.
The pipeline approach
Another - preferred - approach would be to use actual Jenkins pipelines, i.e. Groovy scripts that will allow you to define your steps as functions and then reuse them in both your "Build job" and "Deploy job".
As an example, you could have a functions.groovy containing your build/test functions:
functions.groovy
def build() {
// Build code here...
}
def test() {
// Test code here...
}
build-job.groovy
node {
def functions = load 'functions.groovy'
stage('Build') {
functions.build()
}
stage('Test') {
functions.test()
}
stage('Sonar Analysis') {
// Sonar analysis code...
}
}
deploy-job.groovy
node {
def functions = load 'functions.groovy'
stage('Build') {
functions.build()
}
stage('Test') {
functions.test()
}
stage('Deploy') {
// Deploy code...
}
}