Accessing Groovy class variables via Jenkins Declarative Pipeline - jenkins

I am trying to create a pipeline for my project using Jenkins Declarative Pipeline. As part of it, I had to write some Groovy code in a separate file, and I was successful in loading it. Below is a code snippet from the Jenkinsfile:
stage('Pre-Build'){
    steps{
        script{
            pipeline = load 'dependencyChecker.groovy'
            dependents = pipeline.gatherAllDependencies("${base_workspace}/${branch}/${project}/settings.gradle")
            for (item in dependents){
                sh "cp --parents ${item} ."
            }
        }
    }
}
The following is the Groovy file (dependencyChecker.groovy):
import java.nio.file.* // needed for the Path/Paths references below

class Test1{
    static def outStream;
    static setOut(PrintStream out){
        outStream = out;
        out.println(outStream);
    }
    static List gatherAllDependencies(String settingsPath){
        List projectDependencies = [];
        computeAllDependencies(settingsPath,projectDependencies);
        return projectDependencies;
    }
    static List computeAllDependencies(String settingsPath,List projectDependencies){
        new File(settingsPath).splitEachLine('='){line ->
            if(line[1]!=null){
                outStream.println("Entire Project Name is ${line[1]}");
                def name = line[1].split('\'');
                if(name.length==3){
                    def dir = name[1];
                    Path path = Paths.get(settingsPath);
                    def dependency = path.getParent().resolve(dir).normalize();
                    outStream.println(dependency);
                    if(!projectDependencies.contains("${dependency}")){
                        projectDependencies.add("${dependency}")
                        if(new File("${dependency}/settings.gradle").exists()){
                            computeAllDependencies("${dependency}/settings.gradle",projectDependencies);
                        }
                    }
                }
            }
        }
    }
}
Test1.setOut(System.out);
return Test1;
Updated:
import java.nio.file.*

class Test1 implements Serializable{
    def script;
    Test1(def script){
        this.script = script;
    }
    def gatherAllDependencies(String settingsPath){
        List projectDependencies = [];
        computeAllDependencies(settingsPath,projectDependencies);
        return projectDependencies;
    }
    def computeAllDependencies(String settingsPath,List projectDependencies){
        new File(settingsPath).splitEachLine('='){line ->
            if(line[1]!=null){
                this.script.echo("Entire Project Name is ${line[1]}");
                def name = line[1].split('\'');
                if(name.length==3){
                    def dir = name[1];
                    Path path = Paths.get(settingsPath);
                    def dependency = path.getParent().resolve(dir).normalize();
                    this.script.echo(dependency);
                    if(!projectDependencies.contains("${dependency}")){
                        projectDependencies.add("${dependency}")
                        if(new File("${dependency}/settings.gradle").exists()){
                            computeAllDependencies("${dependency}/settings.gradle",projectDependencies);
                        }
                    }
                }
            }
        }
    }
}
def a = new Test1(script:this);
return a;
dependents (in the Jenkinsfile) is always empty, despite the same code working fine in the Jenkins Script Console. I could not debug it, as none of my println statements are logged anywhere.
Can someone guide me as to where I am going wrong?
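A likely cause (a guess, not confirmed in this thread): java.io.File and println execute on the Jenkins controller JVM, not on the agent that owns the workspace. When the build runs on an agent, the settings.gradle path is never found, so splitEachLine iterates over nothing and the list stays empty, while the println output goes to the controller log instead of the build console. A minimal sketch of the parsing loop rewritten with Pipeline steps, which do operate on the agent workspace:

def computeAllDependencies(String settingsPath, List projectDependencies) {
    // readFile is a Pipeline step: it reads from the agent workspace,
    // unlike java.io.File, which resolves paths on the controller
    this.script.readFile(settingsPath).eachLine { line ->
        def parts = line.split('=')
        if (parts.length > 1) {
            this.script.echo("Entire Project Name is ${parts[1]}")
            // ... same quote-splitting and path resolution as above ...
            // and use this.script.fileExists("${dependency}/settings.gradle")
            // instead of new File(...).exists()
        }
    }
}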

Related

Grab available artifact versions in jenkins parameters

I want the ability, in a Jenkins build, to fetch the available artifact versions (I use Artifactory to store code packed in .zip) and present them as a dropdown list, so I can select the version I want to use for the build.
Could you give an example of the best way to do this?
To do so, I use the Active Choices Jenkins plugin and add a reactive parameter to the job. This gives me the ability to select which environment to use and, based on that, fetch the available artifacts from Artifactory.
Job configuration:
Groovy code used in "Active choice" parameters:
import groovy.json.JsonSlurper
import java.util.regex.Matcher
import java.util.regex.Pattern

Pattern pattern = Pattern.compile("((?:develop|master|function)_(?:latest|[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+).*)")
def repository_dev = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/develop/"
def repository_tst = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/release/"
def repository_prd = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/master/"
def versions // declared up front so an unmatched DEPLOY_TO fails in the catch block below
try {
    if (DEPLOY_TO == "dev") {
        versions = "curl -s $repository_dev"
    }
    else if (DEPLOY_TO == "tst") {
        versions = "curl -s $repository_tst"
    }
    else if (DEPLOY_TO == "prd") {
        versions = "curl -s $repository_prd"
    }
    def proc = versions.execute()
    proc.waitFor()
    def output = proc.in.text
    def jsonSlurper = new JsonSlurper()
    def artifactsJsonObject = jsonSlurper.parseText(output)
    def dataArray = artifactsJsonObject.children
    List<String> artifacts = new ArrayList<String>()
    for(item in dataArray) {
        Matcher m = pattern.matcher(item.uri)
        while (m.find()) {
            artifacts.add(m.group());
        }
    }
    return artifacts
}
catch (Exception e) {
    return ["There was a problem fetching the artifacts", e]
}
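One detail the answer glosses over: for DEPLOY_TO to be visible inside this script, the artifact parameter must be an Active Choices Reactive Parameter with DEPLOY_TO listed in its Referenced parameters field, so the version list is re-evaluated whenever the environment selection changes.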

Define global variable in Jenkins Shared Library

I created a Jenkins Shared Library with many functions in /vars. Among them, there is a devopsProperties.groovy with many properties:
class devopsProperties {
    //HOSTS & SLAVES
    final String delivery_host = "****"
    final String yamale_slave = "****"
    //GIT
    final Map<String,String> git_orga_by_project = [
        "project1" : "orga1",
        "project2" : "orga2",
        "project3" : "orga3"
    ]
    ...
}
Other functions in my Shared Library use these properties. For example, gitGetOrga.groovy:
def call(String project_name) {
    devopsProperties.git_orga_by_project.each{
        if (project_name.startsWith(it.key)){
            orga_found = it.value
        }
    }
    return orga_found
}
But now, as we have many environments, we need to load the devopsProperties at the beginning of the pipeline. I created properties files in the resources:
+-resources
+-properties
+-properties-dev.yaml
+-properties-val.yaml
+-properties-prod.yaml
and created a function to load them:
def call(String environment="PROD") {
    // load the specific environment properties file
    switch(environment.toUpperCase()) {
        case "DEV":
            def propsText = libraryResource 'properties/properties-dev.yaml'
            devopsProperties = readYaml text:propsText
            print "INFO : DEV properties loaded"
            break
        case "VAL":
            def propsText = libraryResource 'properties/properties-val.yaml'
            devopsProperties = readYaml text:propsText
            print "INFO : VAL properties loaded"
            break
        case "PROD":
            def propsText = libraryResource 'properties/properties-prod.yaml'
            devopsProperties = readYaml text:propsText
            print "INFO : PROD properties loaded"
            break
        default:
            print "ERROR : environment unknown, choose between DEV, VAL or PROD"
            break
    }
    return devopsProperties
}
but when I try to use it in a pipeline :
@Library('Jenkins-SharedLibraries') _
devopsProperties = initProperties("DEV")
pipeline {
    agent none
    stages {
        stage("SLAVE JENKINS") {
            agent {
                node {
                    label ***
                }
            }
            stages{
                stage('Test') {
                    steps {
                        script {
                            print devopsProperties.delivery_host // THIS IS OK
                            print devopsProperties.git_orga_by_project["project1"] // THIS IS OK
                            print gitGetOrga("project1") //THIS IS NOT OK
                        }
                    }
                }
            }
        }
    }
}
The last print generates an error: groovy.lang.MissingPropertyException: No such property: devopsProperties for class: gitGetOrga
How can I use a global variable in all my Jenkins Shared Library functions? If possible, I would prefer not to pass it as a parameter to every function.
EDITED
First, you need to place devopsProperties.groovy in the 'src' directory, which sits at the same level as 'vars' and holds the Java-style package structure for your classes, so src/ ends up alongside vars/ in the library repository.
After that, you need to import your class in gitGetOrga.groovy:
import com.yourcompany.projectname.devopsProperties

def call(String project_name) {
    devopsProperties.git_orga_by_project.each{
        if (project_name.startsWith(it.key)){
            orga_found = it.value
        }
    }
    return orga_found
}
You can find more information in the Jenkins docs: https://www.jenkins.io/doc/book/pipeline/shared-libraries/#writing-libraries
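Note that the import alone only exposes the class and its hard-coded defaults; it does not carry the instance that initProperties loaded from YAML. One common workaround (a sketch, not from the original answer; the class and package names are hypothetical) is to cache the loaded properties in a static field of a src/ class that every vars/ script can import:

// src/com/yourcompany/projectname/GlobalConfig.groovy (hypothetical)
package com.yourcompany.projectname

class GlobalConfig {
    // static state is shared across vars/ scripts within one library load
    static Map properties = [:]
}

initProperties would then store what it read (GlobalConfig.properties = readYaml(text: propsText)), and gitGetOrga would read GlobalConfig.properties.git_orga_by_project instead of the class defaults.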

Jenkins parallel script in loop using wrong variables

I'm trying to build a dynamic group of steps to run in parallel. The following example is what I came up with (I found examples at https://devops.stackexchange.com/questions/3073/how-to-properly-achieve-dynamic-parallel-action-with-a-declarative-pipeline). But I'm having trouble getting it to use the expected variables. The result always seems to be the variables from the last iteration of the loop.
In the following example the echo output is always bdir2 for both tests:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def tests = [:]
                    def files
                    files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
                    files.each { f ->
                        rolePath = new File(f).getParentFile()
                        roleName = rolePath.toString().split('/')[1]
                        tests[roleName] = {
                            echo roleName
                        }
                    }
                    parallel tests
                }
            }
        }
    }
}
I'm expecting one of the tests to output adir2 and another to be bdir2. What am I missing here?
Just move the computation inside the closure, and it will work. The original version fails because rolePath and roleName are not declared with def, so they are script-level bindings shared by every closure; by the time the parallel branches actually run, both closures see the values from the last loop iteration. Inside the closure, f is a per-iteration parameter, so each branch computes its own values:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                script {
                    def tests = [:]
                    def files
                    files = ['adir1/adir2/adir3','bdir1/bdir2/bdir3']
                    files.each { f ->
                        tests[f] = {
                            rolePath = new File(f).getParentFile()
                            roleName = rolePath.toString().split('/')[1]
                            echo roleName
                        }
                    }
                    parallel tests
                }
            }
        }
    }
}
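An equivalent fix (a sketch, not from the original answer) keeps the computation outside the closure but declares the variables with def, so each iteration's closure captures its own locals instead of a shared binding:

files.each { f ->
    // def makes these local to this iteration rather than shared script bindings
    def rolePath = new File(f).getParentFile()
    def roleName = rolePath.toString().split('/')[1]
    tests[roleName] = {
        echo roleName
    }
}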

How do you load a groovy file and execute it

I have a Jenkinsfile dropped into the root of my project and would like to pull in a Groovy file for my pipeline and execute it. The only way I've been able to get this to work is to create a separate project and use the fileLoader.fromGit command. I would like to do:
def pipeline = load 'groovy-file-name.groovy'
pipeline.pipeline()
If your Jenkinsfile and Groovy file are in one repository, and the Jenkinsfile is loaded from SCM, you have to do the following:
Example.Groovy
def exampleMethod() {
    //do something
}

def otherExampleMethod() {
    //do something else
}
return this
JenkinsFile
node {
    def rootDir = pwd()
    def exampleModule = load "${rootDir}/script/Example.Groovy"
    exampleModule.exampleMethod()
    exampleModule.otherExampleMethod()
}
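The return this at the end of Example.Groovy is what makes this pattern work: load evaluates the script and hands back its return value, so returning this exposes the script's methods to the caller.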
If you have a pipeline which loads more than one Groovy file, and those Groovy files also share things among themselves:
JenkinsFile.groovy
def modules = [:]
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script{
                    modules.first = load "first.groovy"
                    modules.second = load "second.groovy"
                    modules.second.init(modules.first)
                    modules.first.test1()
                    modules.second.test2()
                }
            }
        }
    }
}
first.groovy
def test1(){
    //add code for this method
}
def test2(){
    //add code for this method
}
return this
second.groovy
import groovy.transform.Field
@Field private First = null

def init(first) {
    First = first
}
def test1(){
    //add code for this method
}
def test2(){
    First.test2()
}
return this
You have to do checkout scm (or check out the code from SCM some other way) before doing the load.
Thanks @anton and @Krzysztof Krasori, it worked fine once I combined checkout scm with the exact source file path.
Example.Groovy
def exampleMethod() {
    println("exampleMethod")
}

def otherExampleMethod() {
    println("otherExampleMethod")
}
return this
JenkinsFile
node {
    // Git checkout before loading the source file
    checkout scm
    // Check whether the files were checked out
    sh '''
        ls -lhrt
    '''
    def rootDir = pwd()
    println("Current Directory: " + rootDir)
    // point to the exact source file
    def example = load "${rootDir}/Example.Groovy"
    example.exampleMethod()
    example.otherExampleMethod()
}
Very useful thread; I had the same problem and solved it following you.
My problem was: Jenkinsfile -> calls first.groovy -> which calls second.groovy
Here is my solution:
Jenkinsfile
node {
    checkout scm
    //other commands if you have any
    def runner = load pwd() + '/first.groovy'
    runner.whateverMethod(arg1, arg2)
}
first.groovy
def whateverMethod(arg1, arg2){
    //whatever other commands
    def caller = load pwd() + '/second.groovy'
    caller.otherMethod(arg1, arg2)
}
return this // needed so load hands the script object back to the Jenkinsfile
NB: the args are optional; add them if you have any, or leave them out.
Hope this helps.
In case the methods called on your loaded Groovy script come with their own node blocks, you should not call those methods from within the node block loading the script. Otherwise you'd be blocking the outer node for no reason.
So, building on @Shishkin's answer, that could look like:
Example.Groovy
def exampleMethod() {
    node {
        //do something
    }
}

def otherExampleMethod() {
    node {
        //do something else
    }
}
return this
Jenkinsfile
def exampleModule
node {
    checkout scm // could not get it running w/o checkout scm
    exampleModule = load "script/Example.Groovy"
}
exampleModule.exampleMethod()
exampleModule.otherExampleMethod()
Jenkinsfile using readTrusted
When running a recent Jenkins, you can use readTrusted to read a file from the SCM containing the Jenkinsfile without running a checkout, or even a node block:
def exampleModule = evaluate readTrusted("script/Example.Groovy")
exampleModule.exampleMethod()
exampleModule.otherExampleMethod()
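One caveat worth adding (not from the original answer): on multibranch projects, readTrusted enforces the same trust model as the Jenkinsfile itself, so the build fails if the file was modified by an untrusted contributor, for example in a pull request from a fork.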

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (Using Cloudbees Folder Plugin).
It sounds like the simplest possible command: telling Jenkins to build every job in folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any job in the folder to be built. In other words, adding a job to the build should be as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs Groovy script to enumerate all jobs in the current folder and then launch them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == project.fullName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
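Since the sandbox is disabled, a Jenkins administrator will also have to approve the script (Manage Jenkins > In-process Script Approval) before the job can run it.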
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape).
Add the following to your library:
package myorg;

// assumed to live in src/myorg/JobUtil.groovy, matching the usage below
public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == jobName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this. It works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and buildAllProjects, which it then launches. In my setup, I launch the initBuildAll job daily. You could trigger it another way that works for you. We aren't fully CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort{it.name.toUpperCase()}) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml;
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat){
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false;
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile();
    def file = configXMLFile.getFile();
    InputStream is = new FileInputStream(file);
    job.updateByXml(new StreamSource(is));
    job.save();
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching, I was able to define a "run every job" job in Job DSL format.
The advantage is that I can maintain my job configuration in version control, e.g.:
job('myfolder/build-all'){
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
        downstream('myfolder/job3')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}

@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
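Note that the bare jobNames reference works because Groovy resolves it to the getJobNames() method through its property-to-getter convention.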
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if(it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins Shared Library method (it requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if(it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
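Called from a pipeline, usage would look like this (the folder path is illustrative):

triggerItemsInFolder('path/to/folder')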
A reference pipeline script to run a parent job that triggers other jobs, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console runs Groovy scripts that can control Jenkins functionality; the API is documented in the Jenkins JavaDoc.
A simple script that immediately triggers all Multi-Branch Pipeline projects under a given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
