I'm using the MultiJob plugin and have a job (Job-A) that triggers Job-B several times.
My requirement is to copy some artifacts (XML files) from each build.
The difficulty I have is that using the Copy Artifact plugin with the "last successful build" option will only take the last build of Job-B, while I need to copy from all builds that were triggered by the same build of Job-A.
The flow looks like:
Job-A starts and triggers:
Job-A -->
Job-B build #1
Job-B build #2
Job-B build #3
** copy artifacts of all last 3 builds, not just #3 **
Note: Job-B could be executed on different slaves in the same run (I set the slave to run on dynamically via a parameter on the upstream Job-A).
When all builds are completed, I want Job-A to copy artifacts from builds #1, #2 and #3, and not just from the last build.
How can I do this?
Here is a more generic Groovy script; it uses the Groovy plugin and the Copy Artifact plugin; see the instructions in the code comments.
It simply copies artifacts from all downstream jobs into the upstream job's workspace.
If you call the same job several times, you could use the build number in the copyArtifact's 'target' parameter to keep the artifacts separate, as sketched after the script.
// This script copies artifacts from downstream jobs into the upstream job's workspace.
//
// To use, add an "Execute system groovy script" build step to the upstream job
// after the invocation of the other projects/jobs, and specify
// "/var/lib/jenkins/groovy/copyArtifactsFromDownstream.groovy" as script.
import hudson.plugins.copyartifact.*
import hudson.model.AbstractBuild
import hudson.Launcher
import hudson.model.BuildListener
import hudson.FilePath
for (subBuild in build.builders) {
    println(subBuild.jobName + " => " + subBuild.buildNumber)
    copyTriggeredResults(subBuild.jobName, Integer.toString(subBuild.buildNumber))
}
// Inspired by http://kevinormbrek.blogspot.com/2013/11/using-copy-artifact-plugin-in-system.html
def copyTriggeredResults(projName, buildNumber) {
    def selector = new SpecificBuildSelector(buildNumber)
    // CopyArtifact(String projectName, String parameters, BuildSelector selector,
    //              String filter, String target, boolean flatten, boolean optional)
    def copyArtifact = new CopyArtifact(projName, "", selector, "**", null, false, true)
    // use reflection because a direct call invokes a deprecated method
    // perform(Build<?, ?> build, Launcher launcher, BuildListener listener)
    def perform = copyArtifact.class.getMethod("perform", AbstractBuild, Launcher, BuildListener)
    perform.invoke(copyArtifact, build, launcher, listener)
}
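For the case where the same job is invoked several times, a hedged variation (untested) is to pass a per-invocation 'target' directory instead of null, using the same constructor:
// variation: keep artifacts from repeated invocations separate by copying
// each build into its own subdirectory of the upstream workspace
def copyArtifact = new CopyArtifact(projName, "", selector, "**",
        projName + "-" + buildNumber, false, true)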
I suggest the following approach:
Use an Execute system Groovy script build step from the Groovy Plugin to execute the following script:
import hudson.model.*
// get upstream job
def jobName = build.getEnvironment(listener).get('JOB_NAME')
def job = Hudson.instance.getJob(jobName)
def upstreamJob = job.upstreamProjects.iterator().next()
// prepare build numbers
def n1 = upstreamJob.lastBuild.number
def n2 = n1 - 1
def n3 = n1 - 2
// set parameters
def pa = new ParametersAction([
    new StringParameterValue("UP_BUILD_NUMBER1", n1.toString()),
    new StringParameterValue("UP_BUILD_NUMBER2", n2.toString()),
    new StringParameterValue("UP_BUILD_NUMBER3", n3.toString())
])
Thread.currentThread().executable.addAction(pa)
This script will create three environment variables which correspond to the last three build numbers of the upstream job.
Add three Copy artifacts from another project build steps to copy artifacts from the last three builds of the upstream project (use the environment variables from the script above to set the build number):
Run the build and check the build log; you should see something like this:
Copied 2 artifacts from "A" build number 4
Copied 2 artifacts from "A" build number 3
Copied 1 artifact from "A" build number 2
Note: the script may need to be adjusted to handle unusual cases such as "the upstream project has only two builds", "the current job doesn't have an upstream job", "the current job has more than one upstream job", etc.
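For instance, the "only two builds" case could be handled by walking previousBuild instead of subtracting fixed offsets; a rough, untested sketch that would replace the "prepare build numbers" part above:
// collect up to the last three existing build numbers of the upstream job
def numbers = []
def b = upstreamJob.lastBuild
while (b != null && numbers.size() < 3) {
    numbers << b.number
    b = b.previousBuild
}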
You can use the following example from an "Execute shell" build step.
Please note it can be run only on the Jenkins master machine, and the job calling this step must be the one that triggered the MultiJob.
#--------------------------------------
# Copy Artifacts from MultiJob Project
#--------------------------------------
PROJECT_NAME="MY_MULTI_JOB"
ARTIFACT_PATH="archive/target"
TARGET_DIRECTORY="target"
mkdir -p "$TARGET_DIRECTORY"
runCount="TRIGGERED_BUILD_RUN_COUNT_${PROJECT_NAME}"
for ((i=1; i<=${!runCount}; i++))
do
    buildNumber="${PROJECT_NAME}_${i}_BUILD_NUMBER"
    cp "$JENKINS_HOME/jobs/$PROJECT_NAME/builds/${!buildNumber}/$ARTIFACT_PATH/"* "$TARGET_DIRECTORY"
done
#--------------------------------------
I have a Jenkins "freestyle" project which triggers a "pipeline" project (in fact my "freestyle" project is mentioned as a trigger in the "Build Triggers" section of the pipeline project).
How can I grab the values of variables from a ".properties" file created by each build of the "parent/freestyle" project?
Currently I have checked "archive artifacts" on the "parent/freestyle" project and added the following code to my "child/pipeline":
node
{
    load "${WORKSPACE}/variables.properties"
    echo "${PARAM_FROM_TRIGGER}"
}
pipeline
{
    agent any
    stages
    {
        stage('STEP1')
        {
            steps
            {
                sh '''
                #!/bin/bash
                echo 'STEP 1'
                '''
            }
        }
    }
}
I encounter an exception after the "child/pipeline" build:
java.nio.file.NoSuchFileException:
/var/lib/jenkins/workspace/my_pipeline/variables.properties
How could I load values from my property file?
Since you're already archiving the .properties file, I think you're looking for the Copy Artifact Plugin.
You can use the command:
copyArtifacts(projectName: 'sourceproject');
to copy the artifacts from parent/freestyle into the workspace of child/pipeline.
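Putting it together, a minimal sketch of the child pipeline (the project name 'parent-freestyle' is a placeholder for your own freestyle job):
node {
    // fetch the archived properties file from the freestyle job's last successful build
    copyArtifacts(projectName: 'parent-freestyle', filter: 'variables.properties', selector: lastSuccessful())
    // the file now exists in this workspace, so it can be loaded
    load "${WORKSPACE}/variables.properties"
    echo "${PARAM_FROM_TRIGGER}"
}
Note that load evaluates the file as Groovy; if your .properties file isn't valid Groovy, readProperties from the Pipeline Utility Steps plugin is an alternative way to parse it.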
I am trying to implement machine learning on my Jenkins pipeline.
For that I need the output data of the pipeline for each build.
Some parameters that I need are:
Which user triggered the pipeline
Duration of pipeline
Build number with its details
Pipeline pass/fail
If it failed, at which stage it failed.
Error in the failed stage. (Why it failed)
Time required to execute each stage
Specific output of each stage (for example: if a stage contains a SonarQube execution, the output would be something like the percentage of code smells or code coverage)
I need to fetch these details for all builds. How can I get them?
There is a Jenkins API that can be used from Python, but I was only able to get JOB_NAME, the job description, and whether the job is enabled.
These details weren't useful.
There are two ways to get some of the data from your list.
1. Jenkins API
For the first 4 points from the list, you can use the JSON REST API for a specific build to get that data. Example API endpoint:
https://[JENKINS_HOST]/job/[JOB_NAME]/[BUILD_NUMBER]/api/json?pretty=true
1. Which user triggered the pipeline
This will be under the actions array in the response; identify the object in the array by "_class": "hudson.model.CauseAction", and in it you will have a shortDescription key which holds that information:
"actions": [
{
"_class": "hudson.model.CauseAction",
"causes": [
{
"_class": "hudson.triggers.SCMTrigger$SCMTriggerCause",
"shortDescription": "Started by an SCM change"
}
]
},
2. Duration of pipeline
It can be found under the "duration" key. Example:
"duration": 244736,
3. Build number with its details
I don't know what details you need, but for the build number look for the "number" key:
"number": 107,
4. Pipeline pass/fail
"result": "SUCCESS",
If you need to extract this information for all builds, run a GET request against the job API https://[JENKINS_HOST]/job/[JOB_NAME]/api/json?pretty=true and extract all builds, then run the above-mentioned request for each build you have extracted.
I will write a dummy Python script later to do just that.
2. Dump data in Jenkinsfile
There is also the possibility to dump some of that information from the Jenkinsfile in a post action.
pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                sh 'echo "Stage 1 time: ${YOUR_TIME_VAR}" > job_data.txt'
            }
        }
    }
    post {
        always {
            // use >> so each line appends instead of overwriting the file,
            // and currentBuild properties so the values actually interpolate
            sh "echo 'Result: ${currentBuild.currentResult}' >> job_data.txt"
            sh "echo 'Job name: ${currentBuild.fullDisplayName}' >> job_data.txt"
            sh "echo 'Build number: ${currentBuild.number}' >> job_data.txt"
            sh "echo 'Duration: ${currentBuild.duration}' >> job_data.txt"
            archiveArtifacts artifacts: 'job_data.txt', onlyIfSuccessful: false
        }
    }
}
The list of available global variables for a pipeline job can be found at:
https://[JENKINS_HOST]/pipeline-syntax/globals#env
For the rest, you will need to implement your own logic in the Jenkinsfile.
Ad. 5
Create a variable which holds information about the current stage. At the beginning of each stage, change its value to the ongoing stage. At the end, dump it to the file like the rest of the variables. If the pipeline fails, let's say on stage foo, this variable will still have exactly that value in the post action, because a failed pipeline won't go on to the next stage.
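A minimal sketch of that idea (the stage name and the forced failure are only for illustration):
// script-level variable, visible in both the stages and the post section
def currentStage = 'none'
pipeline {
    agent any
    stages {
        stage('foo') {
            steps {
                script { currentStage = 'foo' }
                sh 'exit 1' // simulate a failure in this stage
            }
        }
    }
    post {
        always {
            // still holds the stage that was running when the build failed
            sh "echo 'Last stage: ${currentStage}' >> job_data.txt"
        }
    }
}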
Ad. 6
I'm not sure what you want here, a traceback or an error code?
I guess you will probably need to implement your own logging function.
Ad. 7
Make a function for measuring the time of each stage and dump the value at the end; see the sketch below.
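In a scripted pipeline this could be a small wrapper; a hedged sketch (the timed helper is made up for illustration):
// wraps a block, measures its wall-clock time and appends it to job_data.txt
def timed(String name, Closure body) {
    def start = System.currentTimeMillis()
    try {
        body()
    } finally {
        def elapsed = System.currentTimeMillis() - start
        sh "echo '${name}: ${elapsed} ms' >> job_data.txt"
    }
}
node {
    timed('stage 1') {
        stage('stage 1') {
            sh 'sleep 2'
        }
    }
}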
Ad. 8
Also not sure what you mean. Build artifacts, perhaps?
At the end of each build the file job_data.txt will be archived as a build artifact which can be downloaded later.
If I find a more elegant and simple solution I'll edit this post.
Hope it helps in some way.
EDIT 1
Here is the script I've mentioned earlier.
import requests

username = "USERNAME"
password = "PASSWORD"
jenkins_host = "JENKINS_HOST"
jenkins_job = "JOBNAME"
request_url = "{0:s}/job/{1:s}/api/json".format(
    jenkins_host,
    jenkins_job,
)
job_data = requests.get(request_url, auth=(username, password)).json()
builds = []
for build in job_data.get('builds'):
    builds.append(build.get('number'))
for build in builds:
    build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(
        jenkins_host,
        jenkins_job,
        build,
    )
    build_data = requests.get(build_url, auth=(username, password)).json()
    build_name = build_data.get('fullDisplayName')
    build_number = build_data.get('number')
    build_status = build_data.get('result')
    build_duration = build_data.get('duration')
    for action in build_data.get('actions'):
        if action.get("_class") == "hudson.model.CauseAction":
            build_trigger = action.get('causes')
    print(build_name)
    print(build_status)
    print(build_duration)
    print(build_number)
    print(build_trigger)
Please note you might need to authorize with an API token, depending on your security settings.
I know it is possible to pass values from parent to child jobs using the Multijob Plugin.
Is it possible to pass variables from a child job to the parent?
Yes, with a little work. If jobParent calls jobChild and you want variableChild1 (which you may have created in the jobChild job) to be visible in the jobParent job, then do the following simple steps.
In the child job, create a file with (variable=value) pairs for all the variables in it. Let's call it jobChild_envs.txt.
Now, once jobParent is done calling jobChild (via a Trigger another project or Build other projects step or similar), the next action would be to use Copy artifacts from another project (the Copy Artifact plugin in Jenkins). PS: you would need to check the box to FLATTEN the file. https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Using this plugin, you'll be able to get a file/folder from jobChild's workspace into jobParent's workspace at a defined/base workspace location.
In jobParent, you'll inject the environment variables (in a BUILD step).
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
At this point, if the jobChild job created a .txt file with a variable in it, for example:
variableChild1=valueChild1
then it'll be available/visible to the parent/upstream job jobParent.
Run the jobs at your end to see the output.
In pipeline builds, you can do this as follows. Let's say you want to save the child build's URL and pass it back to the parent pipeline.
In your child build...
// write out some data about the job
def jobData = [job_url: "${BUILD_URL}"]
def jobDataText = groovy.json.JsonOutput.toJson(jobData)
writeFile file: "jobDataChild.json", text: jobDataText, encoding: 'UTF-8'
// archive the artifacts
archiveArtifacts artifacts: "jobDataChild.json", onlyIfSuccessful: false
And you can retrieve this in the parent build...
// requires the Copy Artifact plugin
step([$class: 'CopyArtifact', projectName: 'ChildJobName', filter: "jobDataChild.json", selector: [$class: 'LastCompletedBuildSelector']])
if (fileExists("jobDataChild.json")) {
    // readJSON comes from the Pipeline Utility Steps plugin
    def jobData = readJSON file: "jobDataChild.json"
    def jobUrl = jobData.job_url
}
To add to this answer years later: the way I'm doing it is by having a Redis instance that pipelines can connect to and pass data back and forth.
sh "redis-cli -u $redis_url ping" // server is up
def redis_key = "$BUILD_TAG" // BUILD_TAG is always unique
build job: "child", propagate: true, wait: true, parameters: [
string(name: "redis", value: "$redis_url;$redis_key"),
]
/******** in child job ***********/
def (redis_url, redis_key) = env.redis.tokenize(";")
sh"redis-cli -u $redis_url ping" // we are connected on url
// lpush adds to an array in redis
sh"""
redis-cli -u $redis_url lpush $redis_key "MY_DATA"
"""
/******* in parent job after waiting for child job *****/
def data_from_child = sh(script: "redis-cli --raw -u $redis_url LRANGE $redis_key 0 -1", returnStdout: true)
data_from_child == "MY_DATA"? println("π") : error("wow did not expect this")
I kind of like this approach better than passing back and forth with files because it allows scaling up via multiple worker nodes and executing multiple jobs in parallel.
I have two jenkins jobs:
build the project
deploy it
Both are working well and I can trigger the deploy job from the project build job.
Steps:
Build with parameters in the application's job >> check deploy on dev >> build
Add a yellow star badge to the build history of the application job - with a Groovy Postbuild action (code below)
Trigger the deploy job as post-build action
Question
After the deploy job has finished and failed, change the build history badge of the application job (yellow star >> e.g. a red one) - from the deploy job. How can I do that?
if ("true".equals(manager.build.buildVariables.get('DEPLOY_ON_DEV'))) {
manager.addBadge("star-gold.gif", "SNAPSHOT deployed on DEV")
}
This took me a while to develop, but now it works like a charm in Post-build Actions → Add post-build action → Groovy Postbuild → Groovy script:
import hudson.model.Build
import hudson.model.Cause
import hudson.model.Project
import jenkins.model.Jenkins
import org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction

def log = manager.listener.logger
log.println(' ----------------')
log.println(' Groovy Postbuild')

// decorate this build
manager.addShortText('SNAPSHOT deployed on DEV', 'black', 'gold', '1px', 'black')
manager.addInfoBadge('SNAPSHOT deployed on DEV')
manager.addBadge('star-gold.png', 'SNAPSHOT deployed on DEV')

// decorate upstream builds
Jenkins jenkins = Jenkins.getInstance()
List<Project> projects = jenkins.getAllItems(Project.class)
log.println(" This build: '${manager.build}' --> " + manager.build.getResult())
log.println(' Decorating the following upstream builds:')
//log.println(manager.build.getUpstreamBuilds()) // prints "[:]", so using this to get the upstream builds doesn't work
for (Cause cause : manager.build.getCauses()) {
    for (Project project : projects) {
        if (cause.toString().contains(project.getName())) {
            int no = cause.getUpstreamBuild()
            Build usb = project.getBuildByNumber(no)
            log.println("  ${usb}")
            usb.getActions().add(GroovyPostbuildAction.createShortText(
                    'SNAPSHOT deployed on DEV', 'black', 'gold', '1px', 'black'))
            usb.getActions().add(GroovyPostbuildAction.createInfoBadge(
                    'SNAPSHOT deployed on DEV'))
            usb.getActions().add(GroovyPostbuildAction.createBadge(
                    'star-gold.png', 'SNAPSHOT deployed on DEV'))
        }
    } // for (projects)
} // for (causes)
log.println(' ----------------')
Note:
This adds the badges regardless of the build result, but I'm confident that you can add the appropriate if easily; a sketch follows. For removing badges see the Groovy Postbuild Plugin's page.
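For the question's actual goal (turning the upstream badge red when the deploy fails), the decoration inside the loop above can be made conditional on the build result; an untested sketch of just that part:
// inside the 'if (cause.toString()...)' block above, pick the badge by result
if (manager.build.getResult().isWorseThan(hudson.model.Result.SUCCESS)) {
    usb.getActions().add(GroovyPostbuildAction.createShortText(
            'Deploy to DEV FAILED', 'white', 'red', '1px', 'darkred'))
} else {
    usb.getActions().add(GroovyPostbuildAction.createBadge(
            'star-gold.png', 'SNAPSHOT deployed on DEV'))
}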
References:
Jenkins set a badge as a pre-build step
Jenkins main module 1.622 API
Groovy Postbuild Plugin
GroovyPostbuildAction.java
After running the main project, every downstream project has test results, but the "Latest Aggregated Test Result" shows no tests. How do I configure Jenkins to make all the test results display in the aggregated list?
Aggregating downstream test results is not obvious, and not documented. The steps below are synthesized from How To Aggregate Downstream Test Results in Hudson.
To aggregate, you need to archive an artifact in the upstream job, fingerprint the artifact, and then pass the artifact from the upstream job to the downstream job. In my own words:
the shared, fingerprinted artifact "ties" the jobs together and allows the upstream job to see the downstream test results
To show this, we can make a very simple flow between two free-style jobs, Job_A and Job_B.
Upstream
Job_A will run and create an artifact named some_file.txt. We're not aggregating the value/contents of some_file.txt, but it needs to be fingerprinted and so it cannot be empty. Job_A will then trigger a build of Job_B.
Job_A's configuration:
Execute shell:
echo $(date) > some_file.txt
Archive the artifacts:
set Files to archive to the file, some_file.txt
Aggregate downstream test results:
check the Automatically aggregate... option
Build other projects:
set Projects to build to Job_B
Record fingerprints of files to track usage:
set Files to fingerprint to some_file.txt
Downstream
Job_B will run, copy the file some_file.txt from the upstream job that triggered this run, echo out some mock test results to an XML file, then publish that XML result file. It's the published results that will get aggregated back into Job_A.
Job_B's configuration:
Copy artifacts from another project:
Project name
Job_A
Which build
Upstream build that triggered this job
Artifacts to copy
some_file.txt
Fingerprint Artifacts
β
Execute shell:
XML_VAR='<testsuite tests="3">
    <testcase classname="foo" name="ASuccessfulTest"/>
    <testcase classname="foo" name="AnotherSuccessfulTest"/>
    <testcase classname="foo" name="AFailingTest">
        <failure type="ValueError">Not enough foo!!</failure>
    </testcase>
</testsuite>'
echo "$XML_VAR" > results.xml
Publish JUnit test result report:
set Test report XMLs with the file, results.xml
This should be sufficient to have Job_A aggregate Job_B's test results. I'm not sure if there's a way/plugin to change Job_A's status based on downstream results (e.g. if Job_B failed, Job_A would retroactively fail).
For Scripted Pipeline,
say I have:
one upstream job - mainJob
two downstream jobs - downStreamJob1 and downStreamJob2.
To aggregate the test results from downStreamJob1 and downStreamJob2, here is what the Jenkinsfiles will look like:
downStreamJob1 Jenkinsfile - archive and fingerprint the test result XML:
archiveArtifacts allowEmptyArchive: true,
    artifacts: '**/test-results/test/*.xml',
    fingerprint: true, defaultExcludes: false
downStreamJob2 Jenkinsfile - archive and fingerprint the test result XML:
archiveArtifacts allowEmptyArchive: true,
    artifacts: '**/output/junit-report/*.xml',
    fingerprint: true, defaultExcludes: false
The artifacts path uses an Ant fileset pattern to grab all the test report XMLs.
mainJob Jenkinsfile - copy artifacts from each of the downstream jobs:
copyArtifacts filter: '**/test-results/test/*.xml', fingerprintArtifacts: true, projectName: 'downStreamJob1', selector: lastCompleted()
copyArtifacts filter: '**/output/junit-report/*.xml', fingerprintArtifacts: true, projectName: 'downStreamJob2', selector: lastCompleted()
The best way to make sure you have the right path for filter and artifacts is to navigate to the artifacts of each downstream job using the URL $BUILD_URL/artifact/, where BUILD_URL is the full URL of this build, e.g. http://server:port/jenkins/job/foo/15/. To actually see the aggregated results in mainJob, the copied XML also needs to be published there; a sketch follows.
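A minimal hedged sketch of that publishing step, assuming the JUnit plugin is installed:
// publish everything copied from the downstream jobs as mainJob's test results
junit allowEmptyResults: true, testResults: '**/*.xml'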