I'm new to Groovy and the workflow plugin, so perhaps this is something obvious. Q1: I'm trying to run the jobs listed under a view in parallel. I do it like this:
jenkins = Hudson.instance
parallel getBranches()

@NonCPS
def getBranches() {
    def jobBranches = [:]
    for (int i = 0; i < getJobs().size(); i++) {
        jobBranches["branch_${i}"] = {
            build job: getJobs()[i]
        }
    }
    return jobBranches
}

@NonCPS
def getJobs() {
    def jobArray = []
    jenkins.instance.getView("view_A").items.each { jobArray.add(it.displayName) }
    return jobArray
}
I got:
But if I wrote it like this:
jenkins = Hudson.instance
def jobBranches = [:]
for (int i = 0; i < getJobs().size(); i++) {
    jobBranches["branch_${i}"] = {
        build job: getJobs()[i]
    }
}
parallel jobBranches

@NonCPS
def getJobs() {
    def jobArray = []
    jenkins.instance.getView("view_A").items.each { jobArray.add(it.displayName) }
    return jobArray
}
Then I got something like this:
What am I doing wrong? Or is there another way to accomplish the same thing?
Q2: BTW, if there are three jobs, say j1, j2, and j3: j1 and j2 are executed first, in parallel, and as soon as one of them finishes, j3 should be executed. How can I do this?
I figured out why: the closure captures the loop variable i itself, so every branch sees whatever value i holds when the closure finally runs. Copying it into a per-iteration local variable fixes it:
for (int i = 0; i < getJobs().size(); i++) {
    def j = i
    jobBranches["branch_${i}"] = {
        build job: getJobs()[j]
    }
}
Then it will work!
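As for Q2, here is one possible sketch (my own assumption, not from the plugin docs): run j1 and j2 in parallel, and let whichever branch finishes first trigger j3, guarded by a flag so that j3 is started only once. Note that non-serializable state such as AtomicBoolean can upset the CPS engine, so treat this as a starting point rather than a definitive implementation:

```groovy
import java.util.concurrent.atomic.AtomicBoolean

// j1, j2, j3 are the job names from the question.
def j3Started = new AtomicBoolean(false)

// Build the given job, then trigger j3 if no other branch has done so yet.
def runThenTriggerJ3 = { jobName ->
    build job: jobName
    if (j3Started.compareAndSet(false, true)) {
        build job: 'j3'
    }
}

parallel(
    branch_1: { runThenTriggerJ3('j1') },
    branch_2: { runThenTriggerJ3('j2') }
)
```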
Related
I want to be able to build specific components, organized in folders in the repository, with only one main Jenkinsfile.
For example I have this repo structure:
And I have three different components: Topic_A, Topic_B, Topic_C (the same type of component, but created for different teams).
I want to be able to modify only Topic_A and Topic_C, and after I push the branch I want my Jenkinsfile to deploy just those changes instead of also redeploying Topic_B, which was not modified.
My question is: is this possible? Could it be done with a Jenkins pipeline, or with any other component (script)?
Thank you.
There is a changeset directive that allows you to check whether a file has changed in the Git repository. It doesn't support checking whole directories, though, so you can do something like the below instead.
pipeline {
    agent any
    stages {
        stage('Cloning') {
            steps {
                // Get some code from a GitHub repository
                git branch: 'main', url: 'https://github.com/xxxx/sample.git'
            }
        }
        stage('TOPIC_A') {
            when { expression { isChanged("Topic_A") } }
            steps {
                echo 'Doing something for Topic A'
            }
        }
        stage('TOPIC_B') {
            when { expression { isChanged("Topic_B") } }
            steps {
                echo 'Doing something for Topic B'
            }
        }
    }
}
def isChanged(dirName) {
    def changeLogSets = currentBuild.changeSets
    def folderName = dirName
    for (int i = 0; i < changeLogSets.size(); i++) {
        def entries = changeLogSets[i].items
        for (int j = 0; j < entries.length; j++) {
            def entry = entries[j]
            def files = new ArrayList(entry.affectedFiles)
            for (int k = 0; k < files.size(); k++) {
                def file = files[k]
                if (file.path.contains(folderName)) {
                    return true
                }
            }
        }
    }
    return false
}
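For reference, this is roughly what the built-in changeset condition looks like when you target an individual file (a minimal sketch; the Topic_A/build.gradle path is just an illustration, not from the original question):

```groovy
stage('TOPIC_A') {
    // Runs only if a file matching the pattern changed in this build.
    when { changeset "Topic_A/build.gradle" }
    steps {
        echo 'Doing something for Topic A'
    }
}
```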
In my scripted pipeline, I want to get the changes since the last successful build and, based on which files have changed, enable or disable some parts of the pipeline. I am using a Global Shared Library which contains definitions of some additional steps and the whole pipeline. To print the changes since the last successful build I am using the following code:
def showChanges(def build) {
    if ((build != null) && (build.result != 'SUCCESS')) {
        def changeLogSets = build.rawBuild.changeSets
        for (int i = 0; i < changeLogSets.size(); i++) {
            def entries = changeLogSets[i].items
            for (int j = 0; j < entries.length; j++) {
                def entry = entries[j]
                echo "${entry.commitId} by ${entry.author} on ${new Date(entry.timestamp)}: ${entry.msg}"
                def files = new ArrayList(entry.affectedFiles)
                for (int k = 0; k < files.size(); k++) {
                    def file = files[k]
                    echo "  ${file.editType.name} ${file.path}"
                }
            }
        }
        showChanges(build.getPreviousBuild())
    }
}
However, when I make a change in the global library, it prints just that change and not the changes that happened in the main repository. The changeSet contains no info about the files which have changed in the main cloned repository.
This is because Jenkins loads all changes from all repositories and shared libraries referenced in your Pipeline into rawBuild.changeSets. There's nothing you can really do about this except manually filter out repositories. For instance, if you only want changes that come from the my_awesome_repo repository:
changeSets = rawBuild.changeSets.findAll { changeSet ->
    try {
        changeSet.getBrowser().getRepoUrl() =~ /my_awesome_repo/
    } catch (groovy.lang.MissingMethodException e) {
        false // repository has no `browser` property
    }
}
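A sketch of how the filtered list might then be consumed, reusing the same iteration shape as the showChanges function from the question (this usage is my own assumption):

```groovy
// Hypothetical usage: echo only the files changed in my_awesome_repo.
def filtered = currentBuild.rawBuild.changeSets.findAll { cs ->
    try {
        cs.getBrowser().getRepoUrl() =~ /my_awesome_repo/
    } catch (groovy.lang.MissingMethodException e) {
        false // change set has no browser configured
    }
}
filtered.each { cs ->
    cs.items.each { entry ->
        entry.affectedFiles.each { file ->
            echo "${file.editType.name} ${file.path}"
        }
    }
}
```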
I am using a jenkins pipeline project. In the script I would like to write the parallel block in a dynamic way, since the number of nodes can change. For instance, from this:
parallel(
    node1: {
        node() {
            stage1()
            stage2()
            ...
        }
    },
    node2: {
        node() {
            stage1()
            stage2()
            ...
        }
    },
    ...
)
to something like this
for (int i = 0; i < $NODE_NUMBER; i++) {
    "node${i}": {
        node('namenode-' + ${i}) {
            something()
        }
    }
}
but this doesn't work; Groovy/Jenkins is not happy with this syntax. Can someone suggest a better way of doing this?
You can build a map of branches first, and then execute them as parallel branches.
def numNodes = 4
def branches = [:]
for (int i = 0; i < numNodes; i++) {
    // Copy the loop variable so each closure captures its own value
    // (otherwise all branches see the final value of i).
    def index = i
    branches["node${index}"] = {
        node("namenode-${index}") {
            something()
        }
    }
}
parallel branches
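If one branch failing should abort the others, the map passed to parallel also accepts a failFast entry (a standard feature of the parallel step):

```groovy
branches.failFast = true
parallel branches
```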
Anyone have a Jenkins Pipeline script that can stuff all the changes since the previous successful build in a variable? I'm using git and a multibranch pipeline job.
Well, I managed to cobble something together. I'm pretty sure the arrays can be iterated more elegantly, but here's what I've got for now:
node('Android') {
    passedBuilds = []
    lastSuccessfulBuild(passedBuilds, currentBuild)
    def changeLog = getChangeLog(passedBuilds)
    echo "changeLog ${changeLog}"
}

def lastSuccessfulBuild(passedBuilds, build) {
    if ((build != null) && (build.result != 'SUCCESS')) {
        passedBuilds.add(build)
        lastSuccessfulBuild(passedBuilds, build.getPreviousBuild())
    }
}

@NonCPS
def getChangeLog(passedBuilds) {
    def log = ""
    for (int x = 0; x < passedBuilds.size(); x++) {
        def currentBuild = passedBuilds[x]
        def changeLogSets = currentBuild.rawBuild.changeSets
        for (int i = 0; i < changeLogSets.size(); i++) {
            def entries = changeLogSets[i].items
            for (int j = 0; j < entries.length; j++) {
                def entry = entries[j]
                log += "* ${entry.msg} by ${entry.author} \n"
            }
        }
    }
    return log
}
Based on the answer from CaptRespect I came up with the following script for use in the declarative pipeline:
def changes = "Changes:\n"
build = currentBuild
while (build != null && build.result != 'SUCCESS') {
    changes += "In ${build.id}:\n"
    for (changeLog in build.changeSets) {
        for (entry in changeLog.items) {
            for (file in entry.affectedFiles) {
                changes += "* ${file.path}\n"
            }
        }
    }
    build = build.previousBuild
}
echo changes
This is quite useful in stage -> when -> expression blocks to run a stage only when certain files were changed. I haven't gotten to that part yet, though; I'd love to turn this into a shared library and make it possible to pass in some globbing patterns to check against.
EDIT: Check the docs, by the way, in case you want to delve a little deeper. You should be able to convert all the object.getSomeProperty() calls into just entry.someProperty.
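A sketch of that globbing idea (my own assumption, not part of the original answer), using the JDK's built-in PathMatcher so no extra plugins are required; changedPathsMatch and the example patterns are hypothetical names:

```groovy
import java.nio.file.FileSystems
import java.nio.file.Paths

// True if any changed path matches any of the given glob patterns,
// e.g. changedPathsMatch(paths, ['src/**/*.java', '*.gradle']).
def changedPathsMatch(List<String> paths, List<String> globs) {
    def matchers = globs.collect { g ->
        FileSystems.default.getPathMatcher("glob:${g}")
    }
    return paths.any { p ->
        matchers.any { m -> m.matches(Paths.get(p)) }
    }
}
```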
This is what I've used:
def listFilesForBuild(build) {
    def files = []
    // Note: use the build parameter here, not currentBuild,
    // so each build in the chain contributes its own files.
    build.changeSets.each {
        it.items.each {
            it.affectedFiles.each {
                files << it.path
            }
        }
    }
    files
}

def filesSinceLastPass() {
    def files = []
    def build = currentBuild
    while (build != null && build.result != 'SUCCESS') {
        files += listFilesForBuild(build)
        build = build.getPreviousBuild()
    }
    return files.unique()
}

def files = filesSinceLastPass()
def files = filesSinceLastPass()
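The resulting list can then gate a stage, for example in a when -> expression block (a minimal sketch, assuming the files variable from above is in scope; the 'Docs' stage and docs/ prefix are just illustrations):

```groovy
stage('Docs') {
    // Run only if something under docs/ changed since the last success.
    when { expression { files.any { it.startsWith('docs/') } } }
    steps {
        echo 'Rebuilding docs'
    }
}
```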
There's the Changes Since Last Success Plugin that could help you with that.
For anyone using AccuRev, here is an adaptation of andsens' answer, which can't be used as-is because the AccuRev plugin doesn't implement getAffectedFiles. Documentation for the AccurevTransaction class, which extends ChangeLogSet.Entry, can be found here.
import hudson.plugins.accurev.*

def changes = "Changes: \n"
build = currentBuild
// Go through the previous builds and get changes until the
// last successful build is found.
while (build != null && build.result != 'SUCCESS') {
    changes += "Build ${build.id}:\n"
    for (changeLog in build.changeSets) {
        for (AccurevTransaction entry in changeLog.items) {
            changes += "\n  Issue: " + entry.getIssueNum()
            changes += "\n  Change Type: " + entry.getAction()
            changes += "\n  Change Message: " + entry.getMsg()
            changes += "\n  Author: " + entry.getAuthor()
            changes += "\n  Date: " + entry.getDate()
            changes += "\n  Files: "
            for (path in entry.getAffectedPaths()) {
                changes += "\n    " + path
            }
            changes += "\n"
        }
    }
    build = build.previousBuild
}
echo changes
writeFile file: "changeLog.txt", text: changes
In order to return the changes as a list of strings, instead of just printing them, you can use this function (based on @andsens' answer):
def getChangesSinceLastSuccessfulBuild() {
    def changes = []
    def build = currentBuild
    while (build != null && build.result != 'SUCCESS') {
        changes += (build.changeSets.collect { changeSet ->
            (changeSet.items.collect { item ->
                (item.affectedFiles.collect { affectedFile ->
                    affectedFile.path
                }).flatten()
            }).flatten()
        }).flatten()
        build = build.previousBuild
    }
    return changes.unique()
}
How can I call a function from within a closure in Groovy? Currently I'm trying this, but it results in all iterations using the values from the last array element:
def branches = [:]
for (int i = 0; i < data.steps.size(); i++) {
    branches["${data.steps.get(i).name}"] = {
        myFunc(data.steps.get(i))
    }
}
parallel branches
That's a common gotcha: all the closures share the same loop variable i, so by the time the branches actually run they all read its final value instead of the value it had when each closure was created.
This should work:
def branches = data.steps.collectEntries { step ->
    [step.name, { myFunc(step) }]
}
parallel branches
Or
def branches = data.steps.inject([:]) { map, step ->
    map << [(step.name): { myFunc(step) }]
}