Extract logs from triggerRemoteJob - Jenkins

I'm using the code below to trigger a remote job from a Jenkins pipeline. After the job is triggered, I need to parse the logs and retrieve some information. The code calls the handle.lastLog() function, but it returns null. Is there a way to get the logs from triggerRemoteJob?
def handle = triggerRemoteJob (
    auth: TokenAuth(apiToken:....'),...,
    job: build_job,
    parameters: parameters1,
    useCrumbCache: true,
    useJobInfoCache: true,
    overrideTrustAllCertificates: false,
    trustAllCertificates: true
)
def status = handle.getBuildStatus()
echo "--------------------------------------------------------------------------------"
echo "Log: " + handle.lastLog()
echo "--------------------------------------------------------------------------------"

There are several ways to parse the remote job's console log by saving the output to a file:
One way is to use the handle object to get the remote build URL:
def remoteBuildOutput = handle.getBuildUrl().toString()+"consoleText"
sh "curl -o remote_build_output.txt ${remoteBuildOutput} && cat remote_build_output.txt"
Another way is to output the last build log:
sh "curl -o remote_last_build_output.txt ${env.JENKINS_URL}/job/build_job/lastBuild/consoleText && cat remote_last_build_output.txt"
You can use env.JENKINS_URL if it's set; if not, replace it with your Jenkins URL.
build_job is the name of your remote job.
Note that if you use the /lastBuild/ API call, there is no guarantee that the last build is the build your job triggered (build_job may have been triggered by another system or person at the same time).
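Once the log is saved locally, pulling information out of it is plain text processing. A sketch, assuming the remote job prints a line containing a hypothetical RESULT_ID= marker:
def resultId = sh(
    returnStdout: true,
    script: "grep -m1 'RESULT_ID=' remote_build_output.txt | cut -d'=' -f2"
).trim()
echo "Parsed from remote log: ${resultId}"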
One last way is to enable the Parameterized Remote Trigger Plugin logging via the enhancedLogging option.
If set to true, the console output of the remote job is also logged.
def handle = triggerRemoteJob (
    job: build_job,
    enhancedLogging: true
    ...
)
You can then save the current job's output to a file and parse whatever you want:
sh "curl -o current_build_output.txt ${env.JENKINS_URL}job/${env.JOB_NAME}/${currentBuild.number}/consoleText && cat current_build_output.txt"
${currentBuild.number} can also be replaced with the ${env.BUILD_NUMBER} variable.

Related

Unable to get the payload from GitHub webhook trigger in Jenkins pipeline

I have configured a GitHub webhook with the following settings:
Payload URL: https:///github-webhook/
Content Type: application/x-www-form-urlencoded
Events: Pushes, Pull Requests
The Jenkins job I have is a pipeline job with the following enabled:
Build Trigger: GitHub hook trigger for GITScm polling
With the above configuration, I see that in response to an event, i.e. a push/PR in GitHub, the Jenkins job gets triggered successfully. In GitHub, under Recent Deliveries for the webhook, I see the details of the payload and a successful response of 200.
I am trying to get the payload in the Jenkins pipeline for further processing. I need some details, e.g. PR URL/PR number, refs type, branch name, etc., for conditional processing in the pipeline.
I tried accessing the "payload" variable (as mentioned in other Stack Overflow posts and the documentation available) and printing it as part of the pipeline, but I have had no luck yet.
So my question is: how can I get the payload from the GitHub webhook trigger in my Jenkins pipeline?
You need to select Content type: application/json in your webhook in GitHub. Then you can access any variable from the payload GitHub sends, e.g. $.pull_request.url for the PR URL.
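The $.-style selectors are JSONPath, which matches the syntax of the Generic Webhook Trigger plugin; the answer doesn't name the plugin, so treat that as an assumption. A minimal declarative sketch with that plugin (variable names and the token are placeholders):
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [key: 'PR_URL', value: '$.pull_request.html_url'],
                [key: 'PR_NUMBER', value: '$.pull_request.number']
            ],
            token: 'my-webhook-token', // shared secret; pass it in the webhook URL
            printContributedVariables: true
        )
    }
    stages {
        stage('Use payload') {
            steps {
                // the extracted values are contributed as environment variables
                echo "PR ${PR_NUMBER}: ${PR_URL}"
            }
        }
    }
}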
I'm unsure whether accessing the raw payload itself is possible.
With the GitHub plugin we use (Pipeline GitHub), the PR number is stored in the variable CHANGE_ID.
The PR URL is pretty easy to generate given the PR number. The branch name is stored in the variable BRANCH_NAME. In the case of pull requests, the global variable pullRequest is populated with lots of data.
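As a quick illustration, a sketch of reading those values in a multibranch PR build (pullRequest.title comes from the Pipeline GitHub plugin, and the <org>/<repo> path is a placeholder):
echo "PR number: ${env.CHANGE_ID}, branch: ${env.BRANCH_NAME}"
echo "PR URL: https://github.com/<org>/<repo>/pull/${env.CHANGE_ID}" // URL generated from the PR number
echo "PR title: ${pullRequest.title}"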
Missing information can be obtained from GitHub by using their API. Here's an example of checking whether a PR is "behind"; you can modify it to your specific requirements:
def checkPrIsNotBehind(String repo) {
    withCredentials([usernamePassword(credentialsId: "<...>",
                                      passwordVariable: 'TOKEN',
                                      usernameVariable: 'USER')]) {
        def headers = ' -H "Content-Type: application/json" -H "Authorization: token $TOKEN" '
        def url = "https://api.github.com/repos/<...>/<...>/pulls/${env.CHANGE_ID}"
        def head_sha = sh(label: "Check PR head SHA",
                          returnStdout: true,
                          script: "curl -s ${url} ${headers} | jq -r .head.sha").trim().toUpperCase()
        println "PR head sha is ${head_sha}"
        headers = ' -H "Accept: application/vnd.github.v3+json" -H "Authorization: token $TOKEN" '
        url = "https://api.github.com/repos/<...>/${repo}/compare/${pullRequest.base}...${head_sha}"
        def behind_by = sh(label: "Check PR commits behind",
                           returnStdout: true,
                           script: "curl -s ${url} ${headers} | jq -r .behind_by").trim().toUpperCase()
        if (behind_by != '0') {
            currentBuild.result = "ABORTED"
            currentBuild.displayName = "#${env.BUILD_NUMBER}-Out of date"
            error("The head ref is out of date. Please update your branch.")
        }
    }
}

How to get the output of a Jenkins pipeline in a specific format?

I am trying to implement machine learning on top of my Jenkins pipeline.
For that, I need the pipeline's output data for each build.
Some parameters I need are:
Which user triggered the pipeline
Duration of the pipeline
Build number with its details
Pipeline pass/fail
If failed, at which stage it failed
The error in the failed stage (why it failed)
Time required to execute each stage
Specific output of each stage (e.g. if a stage runs SonarQube, the output would be something like the percentage of code smells or the code coverage)
I need to fetch these details for all builds. How can I get them?
There is a Jenkins API that can be used from Python, but I was only able to get JOB_NAME, the job's description, and whether the job is enabled.
Those details weren't useful.
There are two ways to get some of the data on your list.
1. Jenkins API
For the first 4 points on the list, you can use the JSON REST API for a specific build to get that data. Example API endpoint:
https://[JENKINS_HOST]/job/[JOB_NAME]/[BUILD_NUMBER]/api/json?pretty=true
1. Which user triggered the pipeline
This will be under the actions array in the response. Identify the object in the array by "_class": "hudson.model.CauseAction"; in it you will find a shortDescription key with that information:
"actions": [
{
"_class": "hudson.model.CauseAction",
"causes": [
{
"_class": "hudson.triggers.SCMTrigger$SCMTriggerCause",
"shortDescription": "Started by an SCM change"
}
]
},
2. Duration of pipeline
It can be found under the "duration" key. Example:
"duration": 244736,
3. Build number with its details
I don't know what details you need, but for the build number look for the "number" key:
"number": 107,
4. Pipeline pass/fail
"result": "SUCCESS",
If you need to extract this information for all builds, run a GET request against the job API https://[JENKINS_HOST]/job/[JOB_NAME]/api/json?pretty=true and extract all builds from the response, then run the above-mentioned request for each build you have extracted.
I will add a dummy Python script that does just that later (see EDIT 1 below).
2. Dump data in Jenkinsfile
There is also the possibility of dumping some of that information from the Jenkinsfile in a post action.
pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                sh 'echo "Stage 1 time: ${YOUR_TIME_VAR}" > job_data.txt'
            }
        }
    }
    post {
        always {
            // use >> so each line is appended rather than overwriting the file,
            // and Groovy interpolation so the currentBuild values are resolved
            sh "echo 'Result: ${currentBuild.result}' >> job_data.txt"
            sh "echo 'Job name: ${currentBuild.displayName}' >> job_data.txt"
            sh "echo 'Build number: ${currentBuild.number}' >> job_data.txt"
            sh "echo 'Duration: ${currentBuild.duration}' >> job_data.txt"
            archiveArtifacts artifacts: 'job_data.txt', onlyIfSuccessful: false
        }
    }
}
A list of the available global variables for a pipeline job can be found at:
https://[JENKINS_HOST]/pipeline-syntax/globals#env
For the rest, you will need to implement your own logic in the Jenkinsfile.
Ad. 5
Create a variable that holds information about the current stage. At the beginning of each stage, set its value to the ongoing stage; at the end, dump it to the file like the other variables. If the pipeline fails, say on stage foo, this variable will still hold that value in the post action, because a failed pipeline does not proceed to the next stage. A sketch follows below.
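A minimal sketch of that idea (the stage name and job_data.txt are carried over from the example above; the failedStage variable is made up here):
def failedStage = 'NONE'
pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                script { failedStage = 'stage 1' } // record the stage we are in
                sh 'exit 1' // simulate a failure
            }
        }
    }
    post {
        failure {
            sh "echo 'Failed stage: ${failedStage}' >> job_data.txt"
        }
    }
}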
Ad. 6
I'm not sure what you want here, a traceback or an error code?
You will probably need to implement your own logging function; one possible shape is sketched below.
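A sketch for a script block or scripted pipeline: wrap the failing step, record the error message, and rethrow so the build still fails:
try {
    sh 'exit 1' // some failing step
} catch (err) {
    // err.getMessage() is assumed here to contain no shell-breaking quotes
    sh "echo 'Error: ${err.getMessage()}' >> job_data.txt"
    throw err // keep the build marked as failed
}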
Ad. 7
Make a function that measures the time taken by each stage and dump the value at the end, as sketched below.
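A sketch of such a helper for a scripted pipeline (the timed wrapper is made up here; System.currentTimeMillis() is on the default script-security whitelist):
def timed(String stageName, Closure body) {
    def start = System.currentTimeMillis()
    try {
        body()
    } finally {
        def seconds = (System.currentTimeMillis() - start) / 1000
        sh "echo '${stageName} took ${seconds}s' >> job_data.txt"
    }
}
// usage: timed('stage 1') { sh 'sleep 2' }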
Ad. 8
I'm also not sure what you mean here. Build artifacts, perhaps?
At the end of each build, the job_data.txt file will be archived as a build artifact, which can be downloaded later.
If I find a more elegant and simpler solution, I'll edit this post.
Hope this helps in some way.
EDIT 1
Here is the script I mentioned earlier.
import requests

username = "USERNAME"
password = "PASSWORD"  # an API token can be used in place of the password
jenkins_host = "JENKINS_HOST"
jenkins_job = "JOBNAME"

# fetch the job description to get the list of builds
request_url = "{0:s}/job/{1:s}/api/json".format(
    jenkins_host,
    jenkins_job,
)
job_data = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in job_data.get('builds'):
    builds.append(build.get('number'))

# fetch the details of every single build
for build in builds:
    build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(
        jenkins_host,
        jenkins_job,
        build,
    )
    build_data = requests.get(build_url, auth=(username, password)).json()
    build_name = build_data.get('fullDisplayName')
    build_number = build_data.get('number')
    build_status = build_data.get('result')
    build_duration = build_data.get('duration')
    for action in build_data.get('actions'):
        if action.get("_class") == "hudson.model.CauseAction":
            build_trigger = action.get('causes')
    print(build_name)
    print(build_status)
    print(build_duration)
    print(build_number)
    print(build_trigger)
Please note that you might need to authenticate with an API token, depending on your security settings.

Jenkins Pipeline script - return value of a build step

Is there a way to fetch the URL of a build step (without waiting for its completion) through a Jenkins pipeline script?
Here is what I've tried, but the return value of build is null.
def build_job = build job: 'dummy_job', wait: false
Trying to fetch the URL as follows:
build_job.absoluteUrl
You can get it by using the getRawBuild() method:
def build_job = build(job: 'dummy_job', propagate: false)
echo build_job.getResult()
echo build_job.getRawBuild().getAbsoluteUrl()
Don't use wait: false, since the function then won't return the expected result object.
Use propagate: false so that the pipeline won't fail before the next step if the called job fails.
Note that getRawBuild() is not whitelisted in the Groovy sandbox, so it may require script approval.
BUILD_URL should give you the job URL. You can list all environment variables available in the pipeline with the env shell command.
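For example, inside any pipeline step:
sh 'env | sort' // prints every environment variable visible to the build, BUILD_URL included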

CHANGE_AUTHOR_EMAIL and CHANGE_ID environment variables return "No such property: ..."

Given the following pipeline:
stages {
    stage ("Checkout SCM") {
        steps {
            checkout scm
            sh "echo ${CHANGE_AUTHOR_EMAIL}"
            sh "echo ${CHANGE_ID}"
        }
    }
}
Why do these variables fail to resolve and provide a value?
Eventually I want to use these environment variables to send an email and merge a pull request:
post {
    failure {
        emailext (
            attachLog: true,
            subject: '[Jenkins] $PROJECT_NAME :: Build #$BUILD_NUMBER :: build failure',
            to: '$CHANGE_AUTHOR_EMAIL',
            replyTo: 'iadar#...',
            body: '''<p>You are receiving this email because your pull request was involved in a failed build. Check the attached log file, or the console output at: $BUILD_URL to view the build results.</p>'''
        )
    }
}
and
sh "curl -X PUT -d '{\'commit_title\': \'Merge pull request\'}' <git url>/pulls/${CHANGE_ID}/merge?access_token=<token>"
Oddly enough, $PROJECT_NAME, $BUILD_NUMBER, $BUILD_URL do work...
Update: this may be an open bug... https://issues.jenkins-ci.org/browse/JENKINS-40486 :-(
Is there any workaround to get these values?
You need to be careful about how you refer to environment variables depending on whether it is shell or Groovy code, and how you are quoting.
When you do sh "echo ${CHANGE_ID}", what actually happens is that Groovy will interpolate the string first, by replacing ${CHANGE_ID} with the Groovy property CHANGE_ID, and that's where your error message is from. In Groovy, the environment variables are wrapped in env.
If you want to refer to the environment variables directly from your shell script, you either have to interpolate with env, use single quotes, or escape the dollar sign. All of the following should work:
sh 'echo $CHANGE_ID'
sh "echo \$CHANGE_ID"
sh "echo ${env.CHANGE_ID}"
For anyone who may come across this: these variables are available only if the checkbox Build origin PRs (merged with base branch) was checked (this is in a multi-branch job).
See more in this other Jenkins issue: https://issues.jenkins-ci.org/browse/JENKINS-39838

How is it possible to pass a value from a child job to the parent in Jenkins?

I know it's possible to pass values from a parent job to child jobs using the Multijob Plugin.
Is it possible to pass variables from a child job back to the parent?
Yes, with a little work. If jobParent calls jobChild and you want variableChild1 (which you may have created in the jobChild job) to be visible in the jobParent job, follow these simple steps.
In the child job, create a file with (variable=value) pairs for all the variables in it. Let's call it, say, jobChild_envs.txt.
Now, once jobParent is done calling jobChild (presumably via a Trigger another project or Build other projects step), the next action is to use "Copy Artifact from another project/job" (the Copy Artifact plugin in Jenkins). PS: you would need to tick the checkbox to FLATTEN the file. https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin
Using this plugin, you'll be able to get a file/folder from jobChild's workspace into jobParent's workspace at a defined/base workspace location.
In jobParent, inject the environment variables (in the BUILD step):
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
At this point, if the jobChild job created a .txt file with a variable in it, for example:
variableChild1=valueChild1
then it'll be available/visible to the parent/upstream job jobParent.
In pipeline builds, you can do this as follows. Let's say you want to save the child build's URL and pass it back to the parent pipeline.
In your child build...
// write out some data about the job
def jobData = [job_url: "${BUILD_URL}"]
def jobDataText = groovy.json.JsonOutput.toJson(jobData)
writeFile file: "jobDataChild.json", text: jobDataText, encoding: 'UTF-8'
// archive the artifacts
archiveArtifacts artifacts: "jobDataChild.json", onlyIfSuccessful: false
And you can retrieve this in the parent build...
step([$class: 'CopyArtifact', projectName: 'ChildJobName', filter: "jobDataChild.json", selector: [$class: 'LastCompletedBuildSelector']])
if (fileExists("jobDataChild.json")) {
    def jobData = readJSON file: "jobDataChild.json"
    def jobUrl = jobData.job_url
}
To add to this answer years later: the way I'm doing it is by having a Redis instance that pipelines can connect to, to pass data back and forth.
sh "redis-cli -u $redis_url ping" // server is up
def redis_key = "$BUILD_TAG" // BUILD_TAG is always unique
build job: "child", propagate: true, wait: true, parameters: [
string(name: "redis", value: "$redis_url;$redis_key"),
]
/******** in child job ***********/
def (redis_url, redis_key) = env.redis.tokenize(";")
sh"redis-cli -u $redis_url ping" // we are connected on url
// lpush adds to an array in redis
sh"""
redis-cli -u $redis_url lpush $redis_key "MY_DATA"
"""
/******* in parent job after waiting for child job *****/
def data_from_child = sh(script: "redis-cli --raw -u $redis_url LRANGE $redis_key 0 -1", returnStdout: true)
data_from_child == "MY_DATA"? println("👍") : error("wow did not expect this")
I kind of like this approach better than passing files back and forth, because it scales to multiple worker nodes and to multiple jobs executing in parallel.
