Multibranch Pipeline in Jenkins with SVN

I have SVN as my SCM. The SVN Root URL structure is as follows.
https://svn.domain.com/orgName
Under this, I have a folder called "test". Then I have tags, branches and trunk. For example,
https://svn.domain.com/orgName/test/trunk
https://svn.domain.com/orgName/test/branches
Under trunk and branches, I have various modules. One module is Platform which is the core module. The URL structure for my project under Platform is as follows.
https://svn.domain.com/orgName/test/trunk/Platform/MyProject
https://svn.domain.com/orgName/test/branches/Platform/1.0.0.0/MyProject
I am not sure whether the above structure is correct, but this is how it is structured in my organization and it cannot be changed. Now, I have the following questions.
At what level should I maintain the Jenkinsfile?
How should I pass the branch name (including trunk) to this file?
It would be great if someone could provide some details (step by step, if possible) on how to use a Multibranch Pipeline with SVN. Unfortunately, I could not find any tutorial or examples for this.

I figured this out on my own. Here are the details, in case someone else needs them.
For trunk, add the Jenkinsfile inside trunk/Platform (in my case); for branches, it is better to keep a Jenkinsfile inside each version folder under branches/Platform/, since this approach creates a separate Jenkins job for each version.
In the Jenkins job (a Multibranch Pipeline), set Project Repository Base to the base URL; in my case that is https://svn.domain.com/orgName/test. In the Include branches field, add trunk/Platform, branches/Platform/* (in my case). In the Jenkinsfile, use the built-in variable BRANCH_NAME to get the branch name. This gives trunk/Platform for trunk and, for example, branches/Platform/1.0.0.0 for branches.
The only challenge is that job names come out as Trunk/Platform and Branches/Platform/1.0.0.0, so the workspace directories are created as Trunk%2FPlatform and Branches%2FPlatform%2F1.0.0.0, because "/" gets URL-encoded as %2F. When using the job name in jobs, make sure it is modified appropriately, as in the code below.
def cws = "${WORKSPACE_DIR}\\" + "${JOB_NAME}".replace("%2F", "_").replace("/", "\\")
echo "\u2600 workspace=${cws}"
def isTrunk = "${JOB_NAME}".toLowerCase().contains("trunk")
def version = ""
def verWithBldNum = ""
echo "\u2600 isTrunk=${isTrunk}"
if (!isTrunk) {
    // JOB_NAME looks like Branches%2FPlatform%2F1.0.0.0; the version is everything after the last %2F
    version = "${JOB_NAME}".substring("${JOB_NAME}".lastIndexOf("%2F") + 3)
    echo "\u2600 version=${version}"
    // replace the last version segment with the Jenkins build number
    verWithBldNum = "${version}".substring(0, "${version}".lastIndexOf('.') + 1) + "${BUILD_NUMBER}"
    echo "\u2600 verWithBldNum=${verWithBldNum}"
} else {
    echo "\u2600 Branch is Trunk"
}

Related

Trigger the same pipeline for multiple independent products in parallel

We have multiple repositories; each one represents a software package/module that may be used in two different products, so the same package can be built twice, each time in a different product's context.
Inside each package I have a Jenkinsfile that looks like this:
@Library('do-jenkins-shared-lib@production') _
env.TOKEN = "<pkg_name>"
startPipeline()
As you can see, I distinguish between the various packages by env.TOKEN, and when a webhook triggers a pipeline on some package, the startPipeline() function from my shared library is called.
In my startPipeline function there is “switch-case” logic that triggers the correct pipeline based on the branch. Let me explain:
To associate the packages together into a “product” we have a manifest file:
MyAwesomeProduct:
  package1:
    branch: main
  package2:
    branch: 1.0.5
  package3:
    branch: main
  ...
I have 2 types of pipelines:
featurePipeline() - for private branches (branches other than those specified in the manifest)
trunkPipeline() - for branches which are considered “main” or “trunk”
To explain better: for package1 - branch: main we go with trunkPipeline, but for package1 - branch: feature/123 we (the startPipeline function) will choose featurePipeline.
My vars/startPipeline.groovy contains:
def call() {
    node("builder") {
        // clone the repository holding the product manifest
        sh("git clone <MyAwesomeProduct>")
        def manifest = readYaml file: "manifest.yml"
        if (manifest[env.PKG]["branch"] == env.BRANCH) {
            trunkPipeline()
        } else {
            featurePipeline()
        }
    }
}
trunkPipeline and featurePipeline are declared like the following:
def call() {
    pipeline {
        agent none
        stages {
            stage('stage1') {
            }
        }
        ...
    }
}
Until now I supported only a single product and everything worked fine, but now I need to support one more.
As I described at the beginning, although we are building the same package for two different products, the pipelines should be completely separate and independent.
So my question is: how can I trigger the pipelines (with their current, limited structure, which doesn't allow using parallel) in parallel for each product? Maybe I should change the design?
Now the startPipeline function needs to check every product's manifest and only then decide what to trigger (featurePipeline/trunkPipeline).
Need your help.
I need a way to run a pipeline per product in parallel, since the flows are independent.
I don't want to create a separate pipeline and trigger it using build job: ..., because:
This is a multibranch pipeline; I don't want a single job with a single build history for all branches and all packages.
There are 300+ repositories/packages and around 100 branches in each of them: 300 × 100 = 30,000 runs for a single commit/trigger.
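One possible direction (a sketch of my own, not from the thread): keep startPipeline() as the single entry point, but have it build a map of closures, one per product, and dispatch them with the scripted `parallel` step. This assumes the per-product logic is refactored into scripted stages, since two declarative pipeline {} blocks cannot run in one build; the product list, per-product manifest paths, and the runProductFlow() helper are hypothetical names.

```groovy
// Sketch: one parallel branch per product; PRODUCTS and runProductFlow()
// are hypothetical, the manifest layout follows the question above.
def call() {
    node("builder") {
        sh("git clone <MyAwesomeProduct>")   // manifests repo, as in the question
        def PRODUCTS = ["ProductA", "ProductB"]
        def branches = [:]
        for (p in PRODUCTS) {
            def product = p   // capture the loop variable for the closure
            branches[product] = {
                def manifest = readYaml file: "${product}/manifest.yml"
                if (manifest[env.PKG]["branch"] == env.BRANCH) {
                    runProductFlow(product, "trunk")     // trunk-style flow
                } else {
                    runProductFlow(product, "feature")   // feature-style flow
                }
            }
        }
        parallel branches   // independent product flows run concurrently
    }
}
```

Each product then gets its own parallel branch in the same build, with its own log section, rather than its own job and build history.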

Jenkins declarative pipeline: ignoring the changelog of Jenkinsfiles

I keep my apps' code in git repositories, and the Jenkinsfiles for building the apps in another repository. The problem is the build changelog: Jenkins adds Jenkinsfile changes to the build's changesets, and I don't want that, because those changes concern the infrastructure and are not relevant to the apps. How can I prevent this? I have not found any workaround or solution.
If I understood your question correctly... I don't think you can remove Jenkinsfile changes from the change set that Jenkins reads from git. Instead, you can skip the build if the only changes are to the Jenkinsfile.
If it helps...
First, you need to read the change set:
def changedFiles = []
for (changeLogSet in currentBuild.changeSets) {
    for (entry in changeLogSet.getItems()) {
        for (file in entry.getAffectedFiles()) {
            changedFiles.add(file.getPath())
        }
    }
}
Then, you can check if it is only Jenkinsfile:
if (changedFiles.size() == 1 && changedFiles[0] == "Jenkinsfile") {
    println "Only Jenkinsfile has changed... Skipping build..."
    // stops the build prematurely and leaves the remaining stages green
    currentBuild.getRawBuild().getExecutor().interrupt(Result.SUCCESS)
    sleep(1) // give the interrupt time to take effect
} else {
    println "There are changes other than the Jenkinsfile; the build will proceed."
}
P.S. There are several ways to terminate a job early without failing the build, but this one is the cleanest in my experience, even though you need to approve some signatures under Jenkins' script security (I saw it in another thread here some time ago, but can't find it now).

Jenkins: find build number for git commit

Each commit to my git repo triggers a build of my Jenkins pipeline.
I want to retrieve the buildNumber of an old build by commit hash. How can I do that?
I know this information exists because I can find it in the UI.
Some background as to why I want this:
When someone tags a commit I want to create a release (by tagging a docker image). Each build will push an image containing the build number. So I want to find out which build number corresponds to that commit so that I can find the image that I want to release.
Install the Lucene plugin
https://wiki.jenkins.io/display/JENKINS/Lucene-Search
and you will be able to search by commit hash via the default Jenkins search bar! (But read the plugin docs; for old builds to be searchable you need to rebuild the database.)
If you want to do it programmatically you can use the Jenkins API, for example http://jenkinsapi.readthedocs.io/en/latest/using_jenkinsapi.html#example-5-getting-version-information-from-a-completed-build
Just modify the function in the example not to get the latest successful build, but to get all builds, read their git hashes, and then filter that set.
Based on @akostadinov's bit I poked around and found the build number and other goodies, but not GIT_COMMIT.
Maybe this will be useful to someone else, so I thought I would share what I found.
Open your admin script console at http://yourjenkins:8080/script and check it out for yourself.
def job = hudson.model.Hudson.instance.getItem("Foo Project")
def builds = job.getBuilds()
def thisBuild = builds[0]
def fourBuildsAgo = builds[4]
println('env' + builds[0].getEnvironment().keySet())
println('each job has previous job e.g "' + thisBuild.getPreviousBuild() + '"')
fourBuildsAgo.getChangeSets().each {
    println('Num of commits in this build ' + it.getLogs().size())
    it.getLogs().each {
        println('commit data : ' + it.getRevision() + ' ' + it.getAuthor() + ' ' + it.getMsg())
    }
}
I used this GitChangeSet API to poke around at the methods in groovy.
This code will fetch and display the commit hashes of each commit from 4 builds ago. You can format your currentBuild.description with this text if you want, and it will show on your status page.
This resulted in output (real commit details hidden):
each job has previous job e.g "Foo Project #191"
Num of commits in this build 8
commit data : 288f0e7d3664045bcd0618aacf32841416519d92 user1 fixing the build
commit data : b752ee12b3d804f9a674314bef4de5942d9e02f5 user2 Fix handling to the library foo
commit data : 9067fd040199abe32d75467734a7a4d0d9b6e8b2 user2 Implemented Foo Class
...
...
...
If you want to get commit IDs for builds, you can use a groovy script like:
def job = hudson.model.Hudson.instance.getItem("My Job Name")
def builds = job.getBuilds()
Then, for each git repo you are cloning, you can get the revision with
println('last build ' + builds[0].getEnvironment()["GIT_COMMIT"])
println('2nd last build ' + builds[1].getEnvironment()["GIT_COMMIT_4"])
For declarative pipelines see https://stackoverflow.com/a/49251515/520567
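Putting the snippets above together, here is a small script-console sketch that resolves a commit hash to a build number by scanning a job's builds. The job name is a placeholder, and the hash is the sample one from the output above; getEnvironment() without a listener is deprecated but works in the script console, as shown earlier.

```groovy
// Sketch: find the build whose GIT_COMMIT matches a given hash.
// "My Job Name" and the hash are placeholders from the examples above.
def job = hudson.model.Hudson.instance.getItem("My Job Name")
def wanted = "288f0e7d3664045bcd0618aacf32841416519d92"
def match = job.getBuilds().find { b ->
    b.getEnvironment()["GIT_COMMIT"] == wanted
}
println(match ? "Commit ${wanted} was built by build #${match.getNumber()}"
              : "No build found for ${wanted}")
```

Note that this scans every retained build, so on jobs with long histories it can be slow; the Lucene plugin approach above avoids that.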
If I understand your question correctly, then you will first need to create a git hook to trigger a new build. This part is covered in the answer to "How do I react to new tags in git hooks?", though if you are using something like GitHub, Bitbucket or GitLab, there may be other ways of going about it.
Then, when the build is initiated, the build number is provided in the Jenkins variable BUILD_NUMBER. If you want to include the git tag name so you can use it in a script, there seem to be a few ways:
Git Parameter Plugin
Git Tag Message Plugin
Typically these plugins will create an environment variable that can be consumed by your scripts. I am not providing more concrete examples, since I am not aware of your exact tooling.
build.actions.find { action -> action instanceof jenkins.scm.api.SCMRevisionAction }?.revision?.hash;
ref & credits: https://gist.github.com/ftclausen/8c46195ee56e48e4d01cbfab19c41fc0

Scripting within Jenkins job

Main problem:
In the Jenkins build, I have a variable GIT_BRANCH of this content: origin/feature/JIRA1-add-component-A.
From it I would like to obtain another variable with the value: %3Aorigin%2Ffeature%2FJIRA1-add-component-A
For this, I need scripting. How can I do scripting in the job definition?
Context & further explanation:
From Jenkins, for every pull request that my project has, I am creating an instance of a project in SonarQube.
For example, if I have a pull request from branch origin/feature/JIRA1-add-component-A, I am creating a project with the following URL:
http://localhost:9000/dashboard?id=com.my.package%3Amy-project%3Aorigin%2Ffeature%2FJIRA1-add-component-A
I am using the Quality Gates plugin to fail the build in case the code's quality (as seen by SonarQube) is not good.
The problem is that Quality Gates needs my SonarQube project name, so in this case com.my.package%3Amy-project%3Aorigin%2Ffeature%2FJIRA1-add-component-A.
However, I can only specify it like this:
com.my.package:my-project:${GIT_BRANCH}
Which translates into this:
com.my.package:my-project:origin/feature/JIRA1-add-component-A
Something like this? Apologies if not, but I'm dead tired lol.
def GIT_BRANCH = "origin/feature/JIRA1-add-component-A"
def BRANCH = "%3A" + GIT_BRANCH.replace("/", "%2F")
println "${BRANCH}"
I'm sure it can be done more easily, BUT:
Pre: origin/feature/JIRA1-add-component-A
Post: %3Aorigin%2Ffeature%2FJIRA1-add-component-A
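An alternative sketch (my suggestion, not from the thread): let the JDK's java.net.URLEncoder do the escaping, since it turns ':' into %3A and '/' into %2F. One caveat: it encodes spaces as '+', which does not matter for branch names like these.

```groovy
// Sketch: URL-encode the SonarQube project-key suffix with the JDK
// instead of hand-rolling the replacements.
def GIT_BRANCH = "origin/feature/JIRA1-add-component-A"
def BRANCH = java.net.URLEncoder.encode(":" + GIT_BRANCH, "UTF-8")
println BRANCH // %3Aorigin%2Ffeature%2FJIRA1-add-component-A
```

This keeps working unchanged if the branch name later contains other characters that need escaping.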

Jenkins, how to check regressions against another job

When you set up a Jenkins job, various test-result plugins will show regressions if the latest build is worse than the previous one.
We have many jobs for many projects on our Jenkins and we wanted to avoid having a 'job per branch' set up. So currently we are using a parameterized build to build eg different development branches using a single job.
But that means when I build a new branch any regressions are measured against the previous build, which may be for a different branch. What I really want is to measure regressions in a feature branch against the latest build of the master branch.
I thought we should probably set up a separate 'master' build alongside the parameterized 'branches' build. But I still can't see how I would compare results between jobs. Is there any plugin that can help?
UPDATE
I have started experimenting in the Script Console to see if I could write a post-build script. I have managed to get the latest build of the master branch in my parameterized job, but I can't work out how to get to the test results from the build object.
The data I need is available in JSON at
http://<jenkins server>/job/<job name>/<build number>/testReport/api/json?pretty=true
...if I could just get at this data structure it would be great!
I tried using JsonSlurper to load the JSON via HTTP, but I get a 403, I guess because my script has no auth session.
I guess I could load the XML test results from disk and parse them in my script; it just seems a bit silly when Jenkins has already done this.
I eventually managed to achieve everything I wanted using a Groovy script in the Groovy Postbuild Plugin.
I did a lot of exploring using the script console at http://<jenkins>/script, and the Jenkins API class docs are also handy.
Everyone's use case is going to be a bit different, as you have to dig down into the build plugins to get the info you need, but here are some bits of my code which may help.
First get the build you want:
def getProject(projectName) {
    // in a postbuild action use `manager.hudson`
    // in the script web console use `Jenkins.instance`
    def project = manager.hudson.getItemByFullName(projectName)
    if (!project) {
        throw new RuntimeException("Project not found: $projectName")
    }
    project
}

// The CloudBees Folders plugin is supported, so you can use natural paths:
project = getProject('MyFolder/TestJob')
build = project.getLastCompletedBuild()
The main test results (JUnit etc.) seem to be available directly on the build as:
result = build.getTestResultAction()
// eg
failedTestNames = result.getFailedTests().collect { test ->
    test.getFullName()
}
To get the more specialised results from e.g. the Violations plugin or Cobertura code coverage, you have to look for a specific build action.
// have a look what's available:
build.getActions()
You'll see a list of stuff like:
[hudson.plugins.git.GitTagAction@2b4b8a1c,
hudson.scm.SCMRevisionState$None@40d6dce2,
hudson.tasks.junit.TestResultAction@39c99826,
jenkins.plugins.show_build_parameters.ShowParametersBuildAction@4291d1a5]
These are instances; the part in front of the @ sign is the class name, so I used that to write this method for getting a specific action:
def final VIOLATIONS_ACTION = hudson.plugins.violations.ViolationsBuildAction
def final COVERAGE_ACTION = hudson.plugins.cobertura.CoberturaBuildAction

def getAction(build, actionCls) {
    def action = build.getActions().findResult { act ->
        actionCls.isInstance(act) ? act : null
    }
    if (!action) {
        throw new RuntimeException("Action not found in ${build.getFullDisplayName()}: ${actionCls.getSimpleName()}")
    }
    action
}
violations = getAction(build, VIOLATIONS_ACTION)
// you have to explore a bit more to find what you're interested in:
pylint_count = violations?.getReport()?.getViolations()?."pylint"

coverage = getAction(build, COVERAGE_ACTION)?.getResults()
// if you println it, it looks like a map, but it's really an Enum of Ratio objects;
// convert it to something nicer to work with:
coverage_map = coverage.collectEntries { key, val -> [key.name(), val.getPercentageFloat()] }
With these building blocks I was able to put together a post-build script which compared the results of two 'unrelated' build jobs, then used the Groovy Postbuild plugin's helper methods to set the build status.
Hope this helps someone else.
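As a rough illustration of that final comparison step, here is a sketch that reuses the getProject helper above; the job names are placeholders, and it assumes both jobs publish JUnit results:

```groovy
// Sketch: compare JUnit failure counts of a branch job against master.
// 'MyFolder/MasterJob' and 'MyFolder/BranchJob' are placeholder names.
def masterBuild = getProject('MyFolder/MasterJob').getLastCompletedBuild()
def branchBuild = getProject('MyFolder/BranchJob').getLastCompletedBuild()
def masterFails = masterBuild.getTestResultAction()?.getFailCount() ?: 0
def branchFails = branchBuild.getTestResultAction()?.getFailCount() ?: 0
if (branchFails > masterFails) {
    // Groovy Postbuild helper: mark this build unstable on regression
    manager.buildUnstable()
}
```

The same pattern extends to the Violations counts or coverage percentages extracted above, comparing whichever metrics you care about.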
