I have a Jenkins pipeline script that runs various test suites in parallel over multiple nodes. I'm not a Jenkins expert; most of this is copied and pasted from other jobs we have.
I am occasionally getting failures where the archiveArtifacts step has somehow picked up the wrong 'tar_file' value.
Normally the suite name is built into the tar file of logs, and the tar file is then archived to Jenkins. But for some runs the tar file gets created with one suite's name and the archive step uses a different one (I've seen runs where 2 or 3 of the parallel steps fail with this sort of error, and some where only 1 fails).
So somehow, between sh("tar -czf ${tar_file} ${host_logs}/*") and archiveArtifacts artifacts: tar_file, the value of tar_file has changed to that of a different suite.
Any thoughts on how I can change this so that the tar_file stays constant in each step?
try {
stage('cuke_regression') {
def stepsForCuke = [:]
stepsForCuke['cuke_api'] = sectionCukeRegressionTests("api")
stepsForCuke['cuke_admin'] = sectionCukeRegressionTests("admin")
stepsForCuke['cuke_notification'] = sectionCukeRegressionTests("notification")
stepsForCuke['cuke_public'] = sectionCukeRegressionTests("public")
stepsForCuke['cuke_project'] = sectionCukeRegressionTests("project")
stepsForCuke['cuke_group'] = sectionCukeRegressionTests("group")
stepsForCuke['cuke_review'] = sectionCukeRegressionTests("review")
stepsForCuke['cuke_workflow'] = sectionCukeRegressionTests("workflow")
stepsForCuke['cuke_review_comment'] = sectionCukeRegressionTests("review_comment")
parallel stepsForCuke
}
}
def sectionCukeRegressionTests(suite) {
section = 'cukes'
return {
section: {
node('docker-node') {
tar_file = "cukes-"+suite+"-logs.tgz "
try {
sh("docker-compose exec ... run the tests \"")
sh("tar -czf ${tar_file} ${host_logs}/*")
} finally {
sh("docker-compose down")
archiveArtifacts artifacts: tar_file
}
}
}
}
}
Dibakar's suggestion of making the tar_file variable a 'def' variable has fixed the problem for me.
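For reference, a minimal sketch of the fixed step (assuming host_logs is defined elsewhere, as in the original). Declaring tar_file with def makes it local to each closure; without def it is a single script-binding variable that is shared, and overwritten, by every parallel branch:
def sectionCukeRegressionTests(suite) {
    return {
        node('docker-node') {
            // 'def' scopes tar_file to this closure instead of the shared script binding
            def tar_file = "cukes-${suite}-logs.tgz"
            try {
                sh("docker-compose exec ...")  // run the tests (elided in the original)
                sh("tar -czf ${tar_file} ${host_logs}/*")
            } finally {
                sh("docker-compose down")
                archiveArtifacts artifacts: tar_file
            }
        }
    }
}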
I came here from the post "Defining a variable in shell script portion of Jenkins Pipeline".
My situation is the following: I have a pipeline that updates some files and generates a PR in my repo if the generated files have changed (they change every couple of weeks or less).
At the end of my pipeline I have a post action that sends the result by email to our Teams connector.
I wanted to know if I could somehow generate a variable and include that variable in my email.
It looks something like this, but of course it does not work.
#!groovy
String WasThereAnUpdate = '';
pipeline {
agent any
environment {
GRADLE_OPTS = '-Dorg.gradle.java.home=$JAVA11_HOME'
}
stages {
stage('File Update') {
steps {
sh './gradlew updateFiles -P updateEnabled'
}
}
stage('Create PR') {
steps {
withCredentials(...) {
sh '''
if [ -n \"$(git status --porcelain)\" ]; then
WasThereAnUpdate=\"With Updates\"
...
else
WasThereAnUpdate=\"Without updates\"
fi
'''
}
}
}
}
post {
success {
office365ConnectorSend(
message: "Scheduler finished: " + WasThereAnUpdate,
status: 'Success',
color: '#1A5D1C',
webhookUrl: 'https://outlook.office.com/webhook/1234'
)
}
}
}
I've tried referencing my variable in different ways (${}, etc.), but I'm pretty sure the assignment itself is not working.
I know I could probably do it with a script block, but I'm not sure how I would put a script block inside the sh step itself; I'm not sure that is even possible.
Thanks to the response from MaratC (https://stackoverflow.com/a/64572833/5685482) and this documentation, I'll do it something like this:
#!groovy
def date = new Date()
String newBranchName = 'protoUpdate_'+date.getTime()
pipeline {
agent any
stages {
stage('ensure a diff') {
steps {
sh 'touch oneFile.txt'
}
}
stage('AFTER') {
steps {
script {
env.STATUS2 = sh(script:'git status --porcelain', returnStdout: true).trim()
}
}
}
}
post {
success {
office365ConnectorSend(
message: "test ${env.STATUS2}",
status: 'Success',
color: '#1A5D1C',
webhookUrl: 'https://outlook.office.com/webhook/1234'
)
}
}
}
In your code:
sh '''
if [ -n \"$(git status --porcelain)\" ]; then
WasThereAnUpdate=\"With Updates\"
...
else
WasThereAnUpdate=\"Without updates\"
fi
'''
Your code creates an sh session (most likely bash). That session inherits its environment variables from the process that started it (Jenkins). Once it runs git status, it then sets a bash variable WasThereAnUpdate, which is a different variable from the identically named Groovy variable.
This bash variable is what gets updated by your code.
Once your sh session ends, the bash process is destroyed, and all of its variables are destroyed with it.
This whole process has no influence whatsoever on the Groovy variable named WasThereAnUpdate, which just stays whatever it was before.
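One common way around this, and essentially what the accepted approach above does, is to have the shell print the value and capture its stdout on the Groovy side via returnStdout. A minimal sketch, reusing the same git check:
script {
    // The shell decides the value; Groovy captures whatever the script prints.
    env.WasThereAnUpdate = sh(
        script: 'if [ -n "$(git status --porcelain)" ]; then echo "With updates"; else echo "Without updates"; fi',
        returnStdout: true
    ).trim()
}
Because the assignment happens on the Groovy side, env.WasThereAnUpdate is then visible in the post section.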
I'm trying to dynamically set environment variables in the jenkins pipeline script.
I'm using a combination of .groovy and .jenkinsfile scripts to generate the stage{} definitions for a pipeline as DRY as possible.
I have a method below:
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, envVarName, envVarValue)
{
echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
return node("${machineLabel}") {
stage(nameOfTestSet)
{
/////// Area of interest ////////////
environment {
"${envVarName} = ${envVarValue}"
}
/////////////////////////////////////
try {
echo "Would run: "+pathToTestSet
} finally {
echo "Archive results here"
}
}
}
}
There's some wrapper code running this, but abstracting that away, the caller would essentially use:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", "SOME_ENV_VAR", "ENV_VALUE")
Where the last two parameters are the environment name (SOME_ENV_VAR) and the value (ENV_VALUE)
The equivalent declarative code would be:
stage("SimpleTestSuite")
{
agent {
label "MachineA"
}
environment {
SOME_ENV_VAR = ENV_VALUE
}
steps {
echo "Would run" + "path.to.test"
}
post {
always {
echo "Archive results"
}
}
}
However, when running this script, the environment syntax in the first code block doesn't seem to affect the actual execution at all. If I echo ${SOME_ENV_VAR} (or even ${envVarName}, in case it took that variable name as the actual environment variable), both return null.
I'm wondering what's the best way to make this environment{} section as DRY / dynamic as possible.
I would prefer an extensible solution that can take a list of environmentName=value pairs, as this would be the more general case.
Note: I have tried the withEnv([]) solution for scripted pipelines, but it seems to have the same issue.
I figured out the solution: use the withEnv([]) step.
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, listOfEnvVarDeclarations=[])
{
echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
return node("${machineLabel}") {
stage(nameOfTestSet)
{
withEnv(listOfEnvVarDeclarations) {
try {
echo "Would run: "+pathToTestSet
} finally {
echo "Archive results here"
}
}
}
}
}
And the caller method would be:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=\"ENV_VALUE\""])
Since the withEnv([]) step can take in multiple environment variables, we can also do:
generateStage("SimpleTestSuite", "path.to.test", "MachineA", ["SOME_ENV_VAR=\"ENV_VALUE\"", "SECOND_VAR=\"SECOND_VAL\""])
And this would be valid and should work.
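If you would rather pass a plain map of name/value pairs, which is the general case asked about above, a hypothetical variant of the same helper can build the withEnv list itself (generateStage and its parameters are the names from this thread; the Map signature is my addition):
def generateStage(nameOfTestSet, pathToTestSet, machineLabel, Map envVars = [:]) {
    echo "Generating stage for ${nameOfTestSet} on ${machineLabel}"
    return node(machineLabel) {
        stage(nameOfTestSet) {
            // Turn [NAME: 'value'] entries into the 'NAME=value' strings withEnv expects.
            withEnv(envVars.collect { k, v -> "${k}=${v}" }) {
                try {
                    echo "Would run: " + pathToTestSet
                } finally {
                    echo "Archive results here"
                }
            }
        }
    }
}

generateStage("SimpleTestSuite", "path.to.test", "MachineA",
    [SOME_ENV_VAR: "ENV_VALUE", SECOND_VAR: "SECOND_VAL"])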
I'm reading a properties file for the node label and triggerConfigURL. The node label works, but I couldn't read and set triggerConfigURL from the environment.
def propFile = "hello/world.txt" //This is present in workspace, and it works.
pipeline {
environment {
nodeProp = readProperties file: "${propFile}"
nodeLabel = "$nodeProp.NODE_LABEL"
dtcPath = "$nodeProp.DTC"
}
agent { label env.nodeLabel } // this works!! sets NODE_LABEL value from the properties file.
triggers {
gerrit dynamicTriggerConfiguration: 'true',
triggerConfigURL: env.dtcPath, // THIS DOESN'T WORK, tried "${env.dtcPath}" and a few other notations too
serverName: 'my-gerrit-server',
triggerOnEvents: [commentAddedContains('^fooBar$')]
}
stages {
stage('Print Env') {
steps {
script {
sh 'env' // This prints "dtcPath=https://path/of/the/dtc/file", so the dtcPath env is set.
}
}
}
}
}
After running the job, the resulting trigger configuration (screenshot omitted) shows that triggerConfigURL was not populated.
Jenkins runs the environment and triggers clauses one before the other, and it looks like you have experimentally proven that triggers runs first and environment second. It also looks like agent runs after environment as well.
While I don't know why the programmers made this specific decision, I think you are in a kind of chicken-and-egg problem, where you want to define the pipeline using a file but can only read the file once the pipeline is defined and running.
Having said that, the following might work:
def propFile = "hello/world.txt"
def nodeProp = null
node {
nodeProp = readProperties file: propFile
}
pipeline {
environment {
nodeLabel = nodeProp.NODE_LABEL
dtcPath = nodeProp.DTC
}
agent { label env.nodeLabel }
triggers {
gerrit dynamicTriggerConfiguration: 'true',
triggerConfigURL: nodeProp.DTC,
//etc.
I want to trigger several different pipeline jobs, depending on the input parameters of a controller pipeline job.
Within this job I build the names of the other pipelines I want to trigger from a list returned by a Python script.
node {
stage('Get_Clusters_to_Build') {
copyArtifacts filter: params.file_name_var_mapping, fingerprintArtifacts: true, projectName: 'UpdateConfig', selector: lastSuccessful()
script {
cmd_string = 'determine_ci_builds --jobname ' + env.JOB_NAME
clusters = bat(script: cmd_string, returnStdout: true)
output_array = clusters.split('\n')
cluster_array = output_array[2].split(',')
}
echo "${clusters}"
}
jobs = Hudson.instance.getAllItems(AbstractProject.class)
echo "$jobs"
def builders = [:]
for (i=0; i<cluster_array.size(); i++) {
def cluster = cluster_array[i]
def job_to_build = "BuildCI_${cluster}".trim()
echo "### branch${i}"
echo "### ${job_to_build}"
builders["${job_to_build}"] =
{
stage("${job_to_build}") {
build "${job_to_build}"
}
}
}
parallel builders
stage ("TriggerTests") {
echo "Done"
}
}
My problem is that some of the job names I get from the Get_Clusters_to_Build stage may not correspond to existing jobs. Those cannot be triggered, and so my job fails.
Now to my question: is there a way to get the names of all pipeline jobs, and how can I use them to check whether I can trigger a build?
I tried jobs = Hudson.instance.getAllItems(AbstractProject.class), but this gives me only the "normal" FreeStyleProject jobs.
I want to do something like this in the loop:
def builders = [:]
for (i=0; i<cluster_array.size(); i++) {
def cluster = cluster_array[i]
def job_to_build = "BuildCI_${cluster}".trim()
echo "### branch${i}"
echo "### ${job_to_build}"
// This part I only want to be executed if job_to_build is found in the jobs list, somehow like:
if job_to_build in jobs: // I know, this is not proper groovy syntax
builders["${job_to_build}"] =
{
stage("${job_to_build}") {
build "${job_to_build}"
}
}
}
parallel builders
All pipeline jobs are instances of org.jenkinsci.plugins.workflow.job.WorkflowJob, so you can get the names of all Pipeline jobs using the following function:
@NonCPS
def getPipelineJobNames() {
Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)*.fullName
}
Then you can use it this way:
//...
def jobs = getPipelineJobNames()
if (job_to_build in jobs) {
//....
}
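Putting this together with the loop from the question (reusing its cluster_array), a sketch:
def jobs = getPipelineJobNames()
def builders = [:]
for (i = 0; i < cluster_array.size(); i++) {
    def cluster = cluster_array[i]
    def job_to_build = "BuildCI_${cluster}".trim()
    if (job_to_build in jobs) {
        // Only add a parallel branch for jobs that actually exist.
        builders[job_to_build] = {
            stage(job_to_build) {
                build job_to_build
            }
        }
    }
}
parallel builders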
Try this syntax to get both standard and Pipeline jobs:
def jobs = Hudson.instance.getAllItems(hudson.model.Job.class)
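If you only need the names, for a membership check like the one above, the same call can be collapsed with Groovy's spread operator:
def jobNames = Hudson.instance.getAllItems(hudson.model.Job.class)*.fullName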
As @Vitalii Vitrenko wrote, this works fine:
for (job in Hudson.instance.getAllItems(org.jenkinsci.plugins.workflow.job.WorkflowJob)) {
println job.fullName
}
I have set up some folders (Using Cloudbees Folder Plugin).
It sounds like the simplest possible command to be able to tell Jenkins: build every job in folder X.
I do not want to have to manually maintain a comma-separated list of every job in the folder, or add to that list whenever I add a job to the folder. I simply want Jenkins to find all the jobs in the folder at run time and try to build them.
I'm not finding a plugin that lets me do that.
I've tried the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others, but none seem to support the use case I'm after. I simply want every job in the folder to be built, so that adding a job to this build is as simple as creating a job in the folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
build job: names[i], wait: false
}
@NonCPS
def jobNames() {
def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
def childItems = project.parent.items
def targets = []
for (i = 0; i < childItems.size(); i++) {
def childItem = childItems[i]
if (!(childItem instanceof hudson.model.Job)) continue;  // parenthesized; Job also matches Pipeline jobs
if (childItem.fullName == project.fullName) continue;
targets.add(childItem.fullName)
}
return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require a Groovy sandbox escape).
Add the following to your library:
package myorg;

import hudson.model.Job
import jenkins.model.Jenkins

// Shared-library class (src/myorg/JobUtil.groovy). The class wrapper and the
// 'steps' handle are implied by the usage below: pipeline steps such as
// 'build' are not in scope inside a library class, so the calling script
// passes itself in.
class JobUtil {
    def steps
    JobUtil(steps) { this.steps = steps }

    void runAllSiblings(jobName) {
        def names = siblingProjects(jobName)
        for (def i = 0; i < names.size(); i++) {
            steps.build job: names[i], wait: false
        }
    }
    @NonCPS
    private List siblingProjects(jobName) {
        def project = Jenkins.instance.getItemByFullName(jobName)
        def childItems = project.parent.items
        def targets = []
        for (def i = 0; i < childItems.size(); i++) {
            def childItem = childItems[i]
            if (!(childItem instanceof Job)) continue  // parenthesized; Job also matches Pipeline jobs
            if (childItem.fullName == jobName) continue
            targets.add(childItem.fullName)
        }
        return targets
    }
}
And then create a pipeline with the following code:
(new myorg.JobUtil(this)).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this, and it works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and buildAllProjects, which it updates and launches. In my setup I launch initBuildAll daily; you could trigger it any other way that works for you. We aren't full-on CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins
Folder findFolder(String folderName) {
for (folder in Jenkins.instance.items) {
if (folder.name == folderName) {
return folder
}
}
return null
}
AbstractItem findItem(Folder folder, String itemName) {
for (item in folder.items) {
if (item.name == itemName) {
return item
}
}
null
}
AbstractItem findItem(String folderName, String itemName) {
Folder folder = findFolder(folderName)
folder ? findItem(folder, itemName) : null
}
String listProjectItems() {
Folder projectFolder = findFolder('Projects')
StringBuilder b = new StringBuilder()
if (projectFolder) {
for (job in projectFolder.items.sort{it.name.toUpperCase()}) {
b.append(',').append(job.fullName)
}
return b.substring(1) // dump the initial comma
}
return b.toString()
}
File backupConfig(XmlFile config) {
File backup = new File("${config.file.absolutePath}.bak")
FileWriter fw = new FileWriter(backup)
config.writeRawTo(fw)
fw.close()
backup
}
boolean updateMultiBuildXmlConfigFile() {
AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
XmlFile oldConfig = buildItemsJob.getConfigFile()
String latestProjectItems = listProjectItems()
String oldXml = oldConfig.asString()
String newXml = oldXml;
println latestProjectItems
println oldXml
def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
if (mat){
println mat.group(1)
if (mat.group(1) == latestProjectItems) {
println 'no Change'
return false;
} else {
// there's a change
File backup = backupConfig(oldConfig)
def newProjects = "<projects>${latestProjectItems}</projects>"
newXml = mat.replaceFirst(newProjects)
XmlFile newConfig = new XmlFile(oldConfig.file)
FileWriter nw = new FileWriter(newConfig.file)
nw.write(newXml)
nw.close()
println newXml
println 'file updated'
return true
}
}
false
}
void reloadMultiBuildConfig() {
AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
def configXMLFile = job.getConfigFile();
def file = configXMLFile.getFile();
InputStream is = new FileInputStream(file);
job.updateByXml(new StreamSource(is));
job.save();
println "MultiBuild Job updated"
}
if (updateMultiBuildXmlConfigFile()) {
reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach: after a little head scratching I was able to define a "run every job" job in Job DSL format.
The advantage is that I can maintain my job configuration in version control, e.g.:
job('myfolder/build-all'){
publishers {
downstream('myfolder/job1')
downstream('myfolder/job2')
downstream('myfolder/job3')
}
}
Pipeline Job
When running as a Pipeline job you may use something like:
// Groovy resolves the 'jobNames' property to the getJobNames() method below.
echo jobNames.join('\n')
jobNames.each {
build job: it, wait: false
}
@NonCPS
def getJobNames() {
def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
project.parent.items.findAll {
it.fullName != project.fullName && it instanceof hudson.model.Job
}.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in a given folder:
import hudson.model.AbstractProject
Jenkins.instance.getAllItems(AbstractProject.class).each {
if(it.fullName =~ 'path/to/folder') {
(it as AbstractProject).scheduleBuild2(0)
}
}
With some modification you'd be able to create a Jenkins shared library method (this requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject
@NonCPS
def triggerItemsInFolder(String folderPath) {
Jenkins.instance.getAllItems(AbstractProject.class).each {
if(it.fullName =~ folderPath) {
(it as AbstractProject).scheduleBuild2(0)
}
}
}
A reference pipeline script that runs a parent job triggering the other jobs, as suggested by @Wayne Booth:
pipeline {
agent any
stages {
stage('Parallel Stage') {
parallel {
stage('Parallel 1') {
steps {
build(job: "jenkins_job_1")
}
}
stage('Parallel 2') {
steps {
build(job: "jenkins_job_2")
}
}
}
}
}
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console runs Groovy scripts that can control Jenkins itself; the relevant API documentation can be found in the Jenkins JavaDoc.
A simple script that immediately triggers all Multibranch Pipeline projects under a given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause
Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
return it.fullName =~ '^folder/subfolder/'
}.each {
it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.