Is it possible (and if so, how) to get the log output for each parallel step separately?
I.e.:
def projectBranches = [:]
for (int i = 0; i < projects.size(); i++) {
    def _i = i
    projectBranches[_i] = {
        someFunction(_i)
    }
}
parallel projectBranches
Is it now possible to get the log for each of projectBranches[_i]?
I needed to access logs from within the pipeline code, so I implemented the algorithm proposed by キキジキ (really helpful) in https://github.com/gdemengin/pipeline-logparser, with a few adjustments: it adds a branchName prefix on each line (so you can take the whole log and still tell which branch each line belongs to), and it supports nested branches, which I needed. It can be used:
to get logs programmatically
to get the full logs with branch prefix (similar to what currentBuild.rawBuild.log returned before version 2.25 of the workflow-job plugin; in version 2.26 the branch information was dropped and I could not find any built-in function exposing it)
String logs = logparser.getLogsWithBranchInfo()
[Pipeline] Start of Pipeline
[Pipeline] parallel
[Pipeline] { (Branch: branch1)
[Pipeline] { (Branch: branch2)
[Pipeline] }
[Pipeline] echo
[branch1] in branch1
[Pipeline] sleep
[branch1] Sleeping for 1 sec
[Pipeline] echo
[branch2] in branch2
[Pipeline] sleep
[branch2] Sleeping for 1 sec
get the logs from 'branch2' only
String logsBranch2 = logparser.getLogsWithBranchInfo(filter: ['branch2'])
[branch2] in branch2
[branch2] Sleeping for 1 sec
to archive logs (as $JOB_URL/<runId>/artifacts) so they are available as a link for later use
to archive the full logs (with branch prefix)
logparser.archiveLogsWithBranchInfo('consoleText.txt')
to archive the logs from branch2 only
logparser.archiveLogsWithBranchInfo('logsBranch2.txt', [filter: ['branch2']])
I found a way to achieve that, but you need to access the build folder directly (for example using currentBuild.rawBuild.getLogFile().getParent()).
Parse the xml files (or the single flowNodeStore.xml file) inside the workflow directory:
Build a hierarchy of nodes using the <id> and <parentIds> values.
If <branchName> is defined associate it to the current node and recursively to all nodes that have this node as parent. If a node has multiple parents assign no branch value to it.
Read the log file as byte[].
Read each line of log-index to find log ranges to assign to each node. The format of a line can be one of the following:
offset nodeId -> start of new node range, end of the previous (if present).
offset: end of current node range.
Convert the byte range back to a utf8 string (new String(range, "UTF-8")).
You might want to strip away all embedded codes with something like replaceAll("\u001B.*?\u001B\\[0m", "")
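Here is a minimal, illustrative sketch of the byte-range part of this algorithm (run from the script console or a @NonCPS helper). The 'log' and 'log-index' file names and the line format are assumptions based on the description above; they are internal details of the workflow-job plugin and may change between versions:
// Illustrative sketch only: relies on internal files of the workflow-job
// plugin, so treat names and formats as assumptions, not a stable API.
def buildDir = new File(currentBuild.rawBuild.getLogFile().getParent())
byte[] logBytes = new File(buildDir, 'log').bytes

def rangesByNode = [:].withDefault { [] } // nodeId -> list of [start, end] byte ranges
String currentNode = null
long currentStart = 0
new File(buildDir, 'log-index').eachLine { line ->
    def parts = line.trim().split(/\s+/)
    long offset = parts[0] as long
    if (currentNode != null) {
        rangesByNode[currentNode] << [currentStart, offset] // close the open range
    }
    // 'offset nodeId' opens a new range; a bare 'offset' only closes the current one
    // (a range still open at end of file would end at logBytes.length)
    currentNode = (parts.length > 1) ? parts[1] : null
    currentStart = offset
}

// Rebuild the text of one node and strip ANSI escape sequences
def textForNode = { nodeId ->
    rangesByNode[nodeId].collect { r ->
        new String(logBytes, r[0] as int, (r[1] - r[0]) as int, 'UTF-8')
    }.join().replaceAll("\u001B.*?\u001B\\[0m", "")
}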
You could get your nodes by use of Jenkins REST API: job/test/1/api/json?depth=2
Result should contain something like:
{"_class":"org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode","actions":[{},{},{}],"displayName":"Branch: 0","iconColor":"blue","id":"13","parents":["3"],"running":false,"url":"job/test/1/execution/node/13/"},
{"_class":"org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode","actions":[{},{},{}],"displayName":"Allocate node : Start","iconColor":"blue","id":"23","parents":["13"],"running":false,"url":"job/test/1/execution/node/23/"},
{"_class":"org.jenkinsci.plugins.workflow.cps.nodes.StepStartNode","actions":[{},{}],"displayName":"Allocate node : Body : Start","iconColor":"blue","id":"33","parents":["23"],"running":false,"url":"job/test/1/execution/node/33/"},
{"_class":"org.jenkinsci.plugins.workflow.cps.nodes.StepAtomNode","actions":[{},{}],"displayName":"Print Message","iconColor":"blue","id":"37","parents":["33"],"running":false,"url":"job/test/1/execution/node/37/"}
So in your case you are interested in the StepAtomNode children of the branch with the given name ("Branch: 0" to "Branch: 9" in this case). From such a node you can obtain the console output address by simply appending log to its url (like: job/test/1/execution/node/37/log).
Now this is where it gets a bit ugly: you need to parse the HTML to get the actual log out of the
<pre class="console-output">log here
</pre>
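For illustration, a rough sketch of that last step (the host and node id are placeholders; authentication and error handling are omitted):
// Fetch one node's console page and unwrap the <pre class="console-output"> block.
def nodeLogUrl = 'http://jenkins.example.com/job/test/1/execution/node/37/log'
def html = new URL(nodeLogUrl).text
def matcher = html =~ /(?s)<pre class="console-output">(.*?)<\/pre>/
def nodeLog = matcher.find() ? matcher.group(1) : ''
println nodeLog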
So, most of the questions and answers I've found on this subject are for people who want to use the SAME workspace for different runs. (Which baffles me, but then I require a clean slate each time I start a job. Leftover stuff will only break things.)
My issue is the EXACT opposite - I MUST have a separate workspace for each run (or I need to know how to create files with the same name in different runs that stay with that run only, and which are easily reachable from bash scripts started by the pipeline!)
So, my question is - how do I either force Jenkins to NOT use the same workspace for two concurrently-running jobs on different hosts, OR what variable can I use in the 'custom workspace' field to accomplish this?
After I responded to the question by @Joerg S I realized that the thing Joerg S says CAN'T happen is EXACTLY what I'm observing! Jenkins is using the SAME workspace for 2 different, concurrent jobs on 2 different hosts. Is this a Jenkins pipeline bug?
See below for a bewildering amount of information.
Given the way I have to go onto and off of nodes during the run, I've found that I can start 2 different builds on different hosts of the same job, and they SHARE the workspace dir! Since each job has shell scripts which are busy writing files into that directory, this is extremely bad.
In Custom workspace in jenkins we are told to use a custom workspace, and I'm set up just like that.
In Jenkins: how to run builds in unique directories we are told to use ${BUILD_NUMBER} in the above custom workspace field, so what I tried was:
${JENKINS_HOME}/workspace/${ITEM_FULLNAME}/${BUILD_NUMBER}
All that happens to me when I use that is that the workspace name is, you guessed it, "${BUILD_NUMBER}" (and I even got a "${BUILD_NUMBER}#2" just for good measure!)
I tried {$BUILD_ID}, same thing (it uses that literally and does not substitute the number).
I have the 'allow concurrent builds' turned on.
I'm using pipelines exclusively.
All jobs here, as part of normal execution, cause the slave, non-master host to reboot into an OS that does not have the capability to run slave.jar (indeed, it has no network access at all), so I cannot run the entire pipeline on that host.
All jobs use the following construct somewhere inside them:
tests=Arrays.asList(tests.split("\\r?\n"))
shellerror=231
for( line in tests){
So let's call an example job 'foo' that loops through a list, as above, that I want to run on 2 different hosts. The pipeline for that job starts running on master (since the above for (line in tests) is REQUIRED to run on a node!). It then goes back and forth between master and slave, often multiple times.
If I start this job on host A and host B at about the same time, they will BOTH use the workspace ${JENKINS_HOME}/workspace/${JOB_NAME}, or in my case /var/lib/jenkins/jenkins/workspace/job
Since they write different data to files with the same name in that directory, I'm clearly totally broken immediately.
So, how do I force Jenkins to use a unique workspace EVERY SINGLE JOB?
Or, what???
Other things: pipeline build step version 2.5.1, Jenkins 2.46.2
I've been trying to get the workspace statement ('ws') to work, but that doesn't quite work as I expected either - some files are in the workspace I explicitly name, and some are still in the 'built-in' workspace (workspace/).
I was asked to provide code. The 'standard' pipeline I use is about 26K bytes, comprising about 590 lines. So, I'm going to GREATLY reduce it. That being said:
node("master") { // 1
..... lots of stuff....
} // this matches the "node('master')" above
node(HOST) {
echo "on $HOST, check what os"
if (isUnix())
...some more stuff...
} // end of 'node(HOST)' above
if (isok == 0 ) {
node("master") {
echo "----------------- Running on MASTER 19 $shellerror waiting on boot out of windows ------------"
sleep 120
echo "----------------- Leaving MASTER ------------"
}
}
... lots 'o code ...
node(HOST) {
... etc
} // matches the latest 'node HOST' above
node("master") { // 120
.... code ...
for( line in tests) {
...code...
}
}
... and on and on and on, switching back and forth from one to the other
FWIW, when I tried to make the above use 'ws' so that I could make certain the ws name was unique, I simply added a 'ws wsname' block directly under (almost) every 'node' opening so it was
node(name) { ws (wsname) { ..stuff that was in node block before... } }
But then I've got two directories to worry about checking - both the 'default' workspace/jobname dir AND the new wsname one.
Try using customWorkspace node common option:
pipeline {
agent {
node {
label 'node(s)-defined-label'
customWorkspace "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
}
}
stages {
// Your pipeline logic here
}
}
customWorkspace
A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path.
Edit
Since this doesn't work for your complex pipeline, maybe try this silly solution:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    sh(script: "cd ${WORKSPACE}") // note: each sh step starts a fresh shell, so this cd won't carry over to later sh steps
    //Do stuff here
}
or if dir() is accessible:
def WORKSPACE = "${JENKINS_HOME}/workspace/${JOB_NAME}/${BUILD_NUMBER}"
node(HOST) {
    sh(script: "mkdir -p ${WORKSPACE}")
    dir(WORKSPACE) {
        //Do stuff here
    }
}
customWorkspace didn't work for me.
What worked:
stages {
    stage("SCM (For commit trigger)") {
        steps {
            ws('custom-workspace') { // Because we don't want to switch from the pipeline checkout
                // Generated from http://lstool01:8080/job/Permanent%20Build/pipeline-syntax/
                checkout(xxx)
            }
        }
    }
}
'${SOMEVAR}'
will not get substituted
"${SOMEVAR}"
will - this is how Groovy strings are handled
see groovy string handling
so if you have a
ws("/some/path/somewhere/${BUILD_ID}")
{
//something
}
on your node in your pipeline Jenkinsfile it should do the trick in this regard
The problem with '#2' workspaces can occur when you allow concurrent builds of the project - I had the exact same problem with a custom ws() getting a '#2' suffix. Simply disallow concurrent builds, or work around that.
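For reference, a minimal declarative sketch of disallowing concurrent builds (in a freestyle job you would untick 'Execute concurrent builds if necessary' instead):
pipeline {
    agent any
    options {
        disableConcurrentBuilds() // new builds queue up instead of getting a second, suffixed workspace
    }
    stages {
        stage('build') {
            steps {
                echo "workspace: ${env.WORKSPACE}"
            }
        }
    }
}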
I have a Jenkinsfile multibranch pipeline script which runs on two different Jenkins systems. The Jenkinsfile relies on a specific label name. On one of the systems an agent with that label is available, and on the other it (intentionally) is not. On the former it runs fine. On the Jenkins system without the matching label, the job just hangs because it can't find a matching agent.
Is there a way to specify an option to abort (or not start) a build if a label is not found?
Some discussion here:
https://issues.jenkins-ci.org/browse/JENKINS-35905
Might not be possible anytime soon
If they are calling into a shared library then you can check whether the label is online/available, and fail the build if it is not:
def computers = Jenkins.instance.computers
def labelFound = false
for (computer in computers) {
    def labelStr = computer.node?.getLabelString() ?: ''
    if (computer.isOnline() && labelStr =~ /user input/) {
        labelFound = true
        break
    }
}
if (!labelFound) {
    // fail the build; System.exit(1) would take down the Jenkins JVM itself
    error('No online node with the requested label')
}
For a declarative pipeline it may be possible to use when{beforeAgent} to test whether a label exists.
This would only be useful where the agent is specified for a stage rather than the whole pipeline.
...and caveat that this is an as yet untested hypothesis.
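A sketch of that hypothesis (untested, per the caveat above; the label name is a placeholder and the Jenkins API calls would typically need script-security approval):
pipeline {
    agent none
    stages {
        stage('build') {
            when {
                beforeAgent true // evaluate the condition before requesting the agent
                expression {
                    // true only if some online node advertises the label
                    jenkins.model.Jenkins.get().computers.any { c ->
                        c.online && c.node?.labelString?.contains('my-label')
                    }
                }
            }
            agent { label 'my-label' }
            steps {
                echo 'label exists, building'
            }
        }
    }
}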
Just a workaround, but in order to avoid a dependency on a shared lib I run the script below every X minutes to clean up culprits from the queue:
import hudson.model.*
def q = Jenkins.instance.queue
q.items.each {
    if (it =~ /someregex or match all/) {
        def why = it.getWhy()
        if (why =~ /.*There are no nodes with the label.*/) {
            println "No node found for $it.task.runId. It's stuck in damn jenkins queue forever and ever. Killing it"
            q.cancel(it.task)
        }
    }
}
I'm trying to convert my large multi-config Jenkins job over to pipeline syntax so I can, among other things, split it across multiple nodes and combine my multiple stages into one job. Here's the part where I'm seeing trouble:
def build_test_configs = [:]
def compilers = ['gnu', 'icc']
def configs = ['debug', 'default', 'opt']
for (int i = 0; i < configs.size(); i++) {
for (int j = 0; j < compilers.size(); j++) {
def node_name = ""
if ("${compilers[j]}" == "gnu") {
node_name = "node001"
} else {
node_name = "node002"
}
build_test_configs["${node_name} ${configs[i]}"] = {
node ("${node_name}") {
stage("Build Test ${node_name} ${compilers[j]} ${configs[i]}") {
unstash "${node_name}-tarball"
sh "$HOME/software/jenkins_scripts/nightly.sh ${configs[i]} ${compilers[j]} yes $WORKSPACE"
}
}
}
}
}
parallel build_test_configs
My problem is that ${compilers[j]} and ${configs[i]} are undefined when I get to the part where I'm trying to build up the dictionary of build_test_configs on line 13. It would appear that the check on line 8 is working just fine.
Update
I don't have an error message per se. The script doesn't produce any runtime errors. The unexpected output is that the names of the stages are:
Build Test node001 null null
Build Test node001 null null
Build Test node002 null null
And the nightly.sh script is getting passed null parameters as well.
I think this is the expected behavior: Jenkins Pipeline scripts are written in Groovy but what is actually executed is a transformation of that (the term they use is "continuation-passing style transformation"). For example, some parts will run on the master, some on the slave nodes.
This involves a lot of magic that flies way above my head, but at our level it means we have to work with constraints in the syntax & constructs we use.
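As an aside (a common workaround, not part of the original answer): copying the loop values into locals before the closure is created usually avoids the nulls seen above, because otherwise the closures read i and j only after the loops have finished:
def build_test_configs = [:]
def compilers = ['gnu', 'icc']
def configs = ['debug', 'default', 'opt']
for (int i = 0; i < configs.size(); i++) {
    for (int j = 0; j < compilers.size(); j++) {
        def config = configs[i]     // local copies: the closure captures these,
        def compiler = compilers[j] // not the mutating loop indices
        def node_name = (compiler == 'gnu') ? 'node001' : 'node002'
        build_test_configs["${node_name} ${config}"] = {
            node(node_name) {
                stage("Build Test ${node_name} ${compiler} ${config}") {
                    unstash "${node_name}-tarball"
                    sh "$HOME/software/jenkins_scripts/nightly.sh ${config} ${compiler} yes $WORKSPACE"
                }
            }
        }
    }
}
parallel build_test_configs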
See the "fundamentals" paragraph of this article:
To understand Pipeline behavior you must understand a few points about how it executes.
Except for the steps themselves, all of the Pipeline logic, the Groovy conditionals, loops, etc execute on the master. Whether simple or complex! Even inside a node block!
Steps may use executors to do work where appropriate, but each step has a small on-master overhead too.
Pipeline code is written as Groovy but the execution model is radically transformed at compile-time to Continuation Passing Style (CPS).
This transformation provides valuable safety and durability guarantees for Pipelines, but it comes with trade-offs: Steps can invoke Java and execute fast and efficiently, but Groovy is much slower to run than normal. Groovy logic requires far more memory, because an object-based syntax/block tree is kept in memory.
Pipelines persist the program and its state frequently to be able to survive failure of the master.
Also see JENKINS-41335 discussing support of variables across the script.
Edit: ah yes, as pointed out in the comments, the new declarative model allows defining an environment with variables that would be passed the way you need... I don't know how to do that in scripted pipeline without JENKINS-41335, but it seems further evolutions will now happen in declarative land :/
I am using Jenkins and Gradle to build my java project.
Every time I build my project, I get a new build number on the Jenkins screen.
The following is my Jenkins build info:
Success > Console Output #96 03-Jan-2014 15:35:08
Success > Console Output #95 03-Jan-2014 15:27:29
Failed > Console Output #94 03-Jan-2014 15:26:16
Failed > Console Output #93 03-Jan-2014 15:25:01
Failed > Console Output #92 03-Jan-2014 15:23:50
Success > Console Output #91 03-Jan-2014 12:42:32
Success > Console Output #90 03-Jan-2014 12:02:45
I want to reset the Jenkins build number like:
Success > Console Output #1 03-Jan-2014 12:02:45
How can I reset the build number in Jenkins?
This can be done more easily from the Groovy script console.
Go to http://your-jenkins-server/script
In script window enter:
item = Jenkins.instance.getItemByFullName("your-job-name-here")
//THIS WILL REMOVE ALL BUILD HISTORY
item.builds.each() { build ->
    build.delete()
}
item.updateNextBuildNumber(1)
From here
Given your Hudson job is named FooBar,
rename FooBar to FooBar-Copy
create a new job named FooBar, using 'Copy existing job' option, from FooBar-Copy
delete FooBar-Copy
First wipe out the workspace and get rid of previous builds.
On the server, navigate to the job dir, e.g. 'var/lib/jenkins/jobs/myJob', and delete the workspace & builds dirs as well as any polling files, lastSuccessful, lastStable files etc. You should be left with only config.xml and nextBuildNumber.
Shut down jenkins using something like service jenkins stop
Edit the file called nextBuildNumber, inserting 1 instead of the current build number
Start up jenkins again, service jenkins start
Log into jenkins and go to your job and hit build. Should start building job#1
If you want to set the next build number, there is a plugin, Next Build Number, for that. But this will not work in your case, because the build number you need, which is 1, is lower than your current build number.
You need to wipe out all the previous builds first. You can do this by running this simple script:
Go to -> Manage Jenkins -> Script console
// change this variable to match the name of the job whose builds you want to delete
def jobName = "Your Job Name"
def job = Jenkins.instance.getItem(jobName)
job.getBuilds().each { it.delete() }
Now you can set the next build number to 1 and run the build. It will start with 1. :)
It's that simple.
Update - Jenkins now has a Purge Job History plugin to get this done in the easiest way. Check out the page for more details - https://wiki.jenkins.io/display/JENKINS/Purge+Job+History+Plugin
To more generally reset your build number to N (where N is not necessarily 1):
Delete any existing builds where buildNumber >= N.
Edit Program Files (x86)/Jenkins/jobs/yourjob/nextBuildNumber. Set the number it contains to N.
From Jenkins, select Manage Jenkins -> Reload Configuration from Disk.
Expanding on the accepted answer, here's how to do it for all projects at once:
Jenkins.instance.allItems.each { item ->
    item.builds.each { build ->
        build.delete()
    }
    item.updateNextBuildNumber(1)
}
As an extension of @antweiss's excellent answer, we can actually go further ...
There's no need to delete the full Build History if you don't want to; you can simply roll back time to a prior point:
resetNumberTarget = 14
item = Jenkins.instance.getItemByFullName("Project Name [from project Dashboard]")
//println(item)
item.builds.each() { build ->
    //println(build)
    //println(build.number)
    if (build.number >= resetNumberTarget) {
        //println("About to Delete '" + build + "'")
        build.delete()
    }
}
item.updateNextBuildNumber(resetNumberTarget)
If you want a dummy run, to check what it's going to do, without actually doing it, simply comment out the build.delete() and item.updateNextBuildNumber(resetNumberTarget) lines and uncomment the various print commands.
Documentation:
Details of these objects were hard to find, but I identified the following:
item is a FreeStyleProject (or possibly just any type of AbstractProject?)
build appears to be a Run, thus exposing a number property and a delete() method
Use Purge Job History plugin (Jenkins >= 2.176.1)
https://plugins.jenkins.io/purge-job-history/
You can use either the Next Build Number plug-in or simply modify the nextBuildNumber file to reset the build number.
Following are the steps that you need to perform:
Go to .jenkins/jobs/<YourJobName>/builds/, take a backup of this folder (if you need it for future use) and delete the builds folder.
Note: Once you clean up old builds, you lose build histories and they
are no longer available on the Jenkins dashboard.
Reload the configuration (Jenkins -> Manage Jenkins).
Set the next build number to 1 by either using the Next Build Number plug-in or modifying the nextBuildNumber file in your job directory.
So I tried the above solution and got the following error:
groovy.lang.MissingPropertyException: No such property: builds for class: org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
So I tried this,
item = Jenkins.get().getItem("Job Name")
jobs = item.getAllJobs()
jobs.each { job ->
    job.getBuilds().each { b ->
        b.delete()
    }
    job.updateNextBuildNumber(1)
}
and it worked!!
Here is a variation of @antweiss's answer for multi-branch pipelines:
items = Jenkins.instance.getItemByFullName("your-job-name-here").getItems()
//THIS WILL REMOVE ALL BUILD HISTORY
items.collectMany { it.builds }.each { build ->
    build.delete()
}
items.each {
    it.updateNextBuildNumber(1)
}
To reset build numbers of all jobs:
Jenkins.instance.getAllItems(AbstractProject.class).each {
    item = Jenkins.instance.getItemByFullName(it.fullName)
    //THIS WILL REMOVE ALL BUILD HISTORY
    item.builds.each() { build ->
        build.delete()
    }
    item.updateNextBuildNumber(1)
}
I found an easy way to do this.
Wipe out your workspace.
Go to each build saved in Jenkins and delete it.
Set the build number to 1, then build.
Jenkins uses the previous build to determine the next build number; if the number you enter is lower than the previous build number, Jenkins will automatically bump it back above the previous one. That is why we delete the previous builds first.
TLDR: I want to be able to run a job simultaneously on multiple nodes in a Jenkins pipeline. [for example - build application x on dev, test & staging nodes based on aws]
I have a large group of nodes with the same label. I would like to be able to run a job in Jenkins that executes on all of the nodes with the same label as well as doing so simultaneously.
I saw a suggestion to use the matrix configuration option in Jenkins, but I can only think of one axis (the label group). When I try and run the job, it seems like it only executes once instead of 300 times (1 for each of the nodes in that label group).
What should my other axis be? Or...is there some plugin to do this? I had tried the NodeLabel Parameter Plugin, and choosing "run on all available online nodes", but it does not seem to run the jobs simultaneously.
Install
Parameterized Trigger Plugin
NodeLabel Parameter Plugin
For the job you want to run, enable Execute concurrent builds if necessary
Create another job besides the job you want to run on all slaves and configure it
Build > Add build step > Trigger/call builds on other projects
Add ParameterFactories > All Nodes for Label Factory > Label: the label of the nodes
The matrix build will work; use "Slaves" as the axis and expand the "Individual nodes" list to select all of your nodes.
Note that you will need to update the selection every time you add or remove a slave.
For a more maintainable solution, you could use the Job DSL plugin to set up a seed job that has the template for the build, then loops over each slave and creates a new job with the build label set to the name of the slave.
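A hypothetical Job DSL seed script for that approach might look like this (the slave names, generated job names and build step are all placeholders):
// Seed job script: generates one copy of the build per slave, pinned to
// that slave by using its name as the label.
def slaves = ['slave-01', 'slave-02', 'slave-03']
slaves.each { slaveName ->
    job("my-build-on-${slaveName}") {
        label(slaveName)       // restrict where the generated job can run
        steps {
            shell('make test') // placeholder for the real build steps
        }
    }
}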
There are two plugins that you need: the Parameterized Trigger Plugin, to be able to trigger other jobs as a build step of your main job, and the NodeLabel Plugin (read the BuildParameterFactory section for a description of what you need) to specify the label.
The best and easiest way to accomplish this is using the Elastic Axis plugin.
1. Install the plugin.
2. Create a Multi Configuration job (install that plugin too if not present).
3. In the job configuration you will find a new axis type, Elastic axis. Add the label to get the job to run on multiple slaves.
Taking a few of the above answers and adjusting them for the 2.0 series.
You can now launch a job on all nodes.
// The script triggers PayloadJob on every node.
// It uses Node and Label Parameter plugin to pass the job name to the payload job.
// The code will require approval of several Jenkins classes in the Script Security mode
def branches = [:]
def names = nodeNames()
for (int i=0; i<names.size(); ++i) {
def nodeName = names[i];
// Into each branch we put the pipeline code we want to execute
branches["node_" + nodeName] = {
node(nodeName) {
echo "Triggering on " + nodeName
build job: 'PayloadJob', parameters: [
new org.jvnet.jenkins.plugins.nodelabelparameter.NodeParameterValue
("TARGET_NODE", "description", nodeName)
]
}
}
}
// Now we trigger all branches
parallel branches
// This method collects a list of Node names from the current Jenkins instance
@NonCPS
def nodeNames() {
return jenkins.model.Jenkins.instance.nodes.collect { node -> node.name }
}
Taken from the code
https://jenkins.io/doc/pipeline/examples/#trigger-job-on-all-nodes
Rundeck might be a tool better suited to your needs. It can be set up to run several jobs in parallel and has a plugin for Jenkins: http://rundeck.org/
Rundeck is designed to integrate with larger systems. We generate the resource file from our configuration management database. Very easy to do; see the documentation: http://rundeck.org/docs/administration/node-resource-sources.html
Additionally, plugins are available for Amazon and/or systems like Puppet and Chef: http://rundeck.org/plugins
I was looking for a way to run docker system prune on all nodes (with the label docker). I ended up with a pretty simple scripted pipeline, which AFAIK will only need the pipeline plugin to work:
#!/usr/bin/env groovy
def nodes = [:]
nodesByLabel('docker').each {
nodes[it] = { ->
node(it) {
stage("docker-prune#${it}") {
sh('docker system prune -af --filter "until=1440h"')
}
}
}
}
parallel nodes
Note: Requires Pipeline Utility Steps
What this does: it looks for all nodes with the label docker, then iterates over them and creates an associative array nodes with one step per node found (to be precise, each step cleans up docker stuff older than 60 days). parallel nodes starts executing on all found nodes simultaneously.
Hope that this will help someone.
Got it - no need for any special plugin!
I've created a parent job that triggers/calls another build,
and when I call it I pass it the label that I want the child job to run on.
So basically the parent job only triggers the job I need,
and the child job runs as many times as there are slaves with that label
(in my case 4 times).
Enable This project is parameterized, add a parameter of type Label, enter an arbitrary name for the label and select a default value such as a label covering a number of nodes or a conjunction (&&) of such labels. Enable Run on all nodes matching the label, keep Run regardless of result, keep Node eligibility at All nodes.
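If the parent is itself a pipeline, the trigger might look roughly like this (a sketch only: the child job name, parameter name and label are placeholders, and the LabelParameterValue constructor signature may differ between versions of the NodeLabel Parameter plugin):
// Trigger 'ChildJob' once per node matching the label; assumes the child
// job has a Label parameter named 'NODE_LABEL' configured as described above.
build job: 'ChildJob', parameters: [
    new org.jvnet.jenkins.plugins.nodelabelparameter.LabelParameterValue(
        'NODE_LABEL', // name of the Label parameter on the child job
        'my-label',   // label whose nodes should all run the child job
        true,         // run on all nodes matching the label
        new org.jvnet.jenkins.plugins.nodelabelparameter.node.AllNodeEligibility()
    )
]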
Solution: you can succinctly run the same build in parallel across multiple Jenkins nodes.
This can be useful for building the same project on different environments (for example: building node applications on test, dev and staging environments).
Example:
pipeline {
agent { docker { image 'node:14-alpine' } }
stages {
stage('build') {
steps {
parallelTasks()
}
}
}
}
def parallelTasks() {
def labels = ['test', 'dev', 'staging'] // labels for Jenkins node types we will build on
def builders = [:]
for (x in labels) {
def label = x
builders[label] = {
node(label) {
sh """#!/bin/bash -le
echo "build app on ${label} node"
cd /home/app
npm run build
"""
}
}
}
parallel builders
}