Grab available artifact versions in Jenkins parameters

I want my Jenkins build to be able to fetch the available artifact versions (I use Artifactory to store code packed in .zip files) and present them as a dropdown list, so I can select the version I want to use with the build.
Could you give an example of the best way to do this?

To do this, I use the Jenkins Active Choices plugin and add a reactive parameter to the job. This lets me select which environment to use and, based on that choice, fetch the available artifacts from Artifactory.
Job configuration:
Groovy code used in "Active choice" parameters:
import groovy.json.JsonSlurper
import java.util.regex.Matcher;
import java.util.regex.Pattern;
Pattern pattern = Pattern.compile("((?:develop|master|function)_(?:latest|[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+).*)")
def repository_dev = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/develop/"
def repository_tst = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/release/"
def repository_prd = "https://repo.nibr.novartis.net/artifactory/api/storage/nibr-generic/intuence_discovery/idaw_health_checker/master/"
def versions = null
try {
    // Pick the repository URL matching the selected environment
    if (DEPLOY_TO == "dev") {
        versions = "curl -s $repository_dev"
    }
    else if (DEPLOY_TO == "tst") {
        versions = "curl -s $repository_tst"
    }
    else if (DEPLOY_TO == "prd") {
        versions = "curl -s $repository_prd"
    }
    // Query the Artifactory storage API and parse the JSON listing
    def proc = versions.execute()
    proc.waitFor()
    def output = proc.in.text
    def jsonSlurper = new JsonSlurper()
    def artifactsJsonObject = jsonSlurper.parseText(output)
    def dataArray = artifactsJsonObject.children
    // Keep only entries whose URI matches the branch/version naming pattern
    List<String> artifacts = new ArrayList<String>()
    for (item in dataArray) {
        Matcher m = pattern.matcher(item.uri)
        while (m.find()) {
            artifacts.add(m.group())
        }
    }
    return artifacts
}
catch (Exception e) {
    return ["There was a problem fetching the artifacts", e]
}
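For reference, the DEPLOY_TO value used above comes from the first Active Choices parameter, which the reactive parameter must list under "Referenced parameters" so this script re-runs whenever the selection changes. A minimal sketch of that first parameter's script (the environment names are assumptions matching the code above):
return ["dev", "tst", "prd"]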

Related

Groovy script to implement "How to get a parameter depend of other parameter in Hudson or Jenkins"?

Jenkins v2.289.3
I'm trying to implement the Active Choices plugin discussed in one of the answers in How to get a parameter depend of other parameter in Hudson or Jenkins, but I'm unsure how to implement my second script to get the value from the first parameter.
I have a MyFolder job folder with a multibranch pipeline job called Builders, with branches like master, release/*, feature/*. So the full name of the jobs in the folder will be MyFolder/Builders/release%2F1.0.0 for the release/1.0.0 job, for example (%2F is the escape sequence for /).
I then created a second job in the same folder called DeployBranchVersion, whose goal is to execute deployment code that deploys a chosen branch and one of its corresponding successful build numbers. I therefore need to pass 2 parameters to the deployment code, GIT_BRANCH and VERSION.
My first Active Choices parameter gets these branches using the following script, and assigns the choice to the GIT_BRANCH parameter.
Job name: MyFolder/DeployBranchVersion
Parameter name: GIT_BRANCH
Groovy script:
def gettags = "git ls-remote -h -t https://username:password@bitbucket.org/organization/myrepo.git".execute()
def branches = []
def t1 = []
gettags.text.eachLine { branches.add(it) }
for (i in branches)
    t1.add(i.split()[1].replaceAll('\\^\\{\\}', '').replaceAll('refs/heads/', '').replaceAll('refs/tags/', ''))
t1 = t1.unique()
return t1
This returns a drop-down list of the branches in my repo, and the chosen one is assigned to the GIT_BRANCH parameter.
Now how do I set up the second Active Choices reactive parameter to reference the above choice? I have the following Groovy code that works in a non-Active-Choices parameter setup. How can I modify it to work in this case? BUILD_JOB_NAME needs to reference the GIT_BRANCH value from the first parameter.
import hudson.model.*
BUILD_JOB_NAME = "some_reference_to_GIT_BRANCH" // ??????????
def getJobs() {
    def hi = Hudson.instance
    return hi.getItems(Job)
}
def getBuildJob() {
    def buildJob = null
    def jobs = getJobs()
    (jobs).each { job ->
        if (job.fullName == BUILD_JOB_NAME) {
            buildJob = job
        }
    }
    return buildJob
}
def getAllBuildNumbers(Job job) {
    def buildNumbers = []
    (job.getBuilds()).each { build ->
        def status = build.getBuildStatusSummary().message
        if (status.contains("stable") || status.contains("normal")) {
            buildNumbers.add("${build.displayName}")
        }
    }
    return buildNumbers
}
def buildJob = getBuildJob()
return getAllBuildNumbers(buildJob)
I tried setting it this way, to no avail.
BUILD_JOB_NAME = "MyFolder/Builders/$GIT_BRANCH"
It turns out I was doing it correctly; I just had a buggy second script. Here's the working one. I realized that GIT_BRANCH values had / in them, so I had to replace it with the equivalent escape sequence %2F. (Note that GIT_BRANCH must also be listed under the reactive parameter's "Referenced parameters" field so the script can see its value.)
import hudson.model.*
BRANCH = GIT_BRANCH.replaceAll("/", "%2F")
BUILD_JOB_NAME = "MyFolder/Builders/$BRANCH"
def getJobs() {
    def hi = Hudson.instance
    return hi.getAllItems(Job.class)
}
def getBuildJob() {
    def buildJob = null
    def jobs = getJobs()
    (jobs).each { job ->
        if (job.fullName == BUILD_JOB_NAME) {
            buildJob = job
        }
    }
    return buildJob
}
def getAllBuildNumbers(Job job) {
    def buildNumbers = []
    (job.getBuilds()).each { build ->
        def status = build.getBuildStatusSummary().message
        if ((status.contains("stable") || status.contains("normal")) &&
                build.displayName.contains("-")) {
            buildNumbers.add(build.displayName)
        }
    }
    return buildNumbers
}
def buildJob = getBuildJob()
return getAllBuildNumbers(buildJob)

How to correctly use CPS notation in a Jenkins pipeline

I am trying to write some code in Jenkins, but my knowledge is quite limited. I need to read some information (the committer for that changelist) from an XML file that Perforce (the SCM) generates. I then use that information in another function to send an email in case the static analysis finds an error. The problem is that I keep getting the error "expected to call WorkflowScript.sendEmailNewErrors but wound up catching readJSON". I have gone through the CPS documentation the output points me to, but honestly it is still not clear to me what is wrong. My pipeline looks something like this:
import groovy.json.JsonSlurper
@NonCPS
def sendEmailNewErrors() {
    def submitter = findItemInChangelog("changeUser")
    def Emails = readJSON file: "D:/Emails.json"
    def email = "test@email.com"
    def msgList = []
    def url = "http://localhost:8080/job/job1/1374/cppcheck/new/api/json"
    def json = new JsonSlurper().parseText(new URL(url).text)
    for (key in Emails.keySet()) {
        if (submitter == key) {
            email = Emails.get(key)
        }
    }
    json.issues.each { issue ->
        def msg = "New ERROR found in static analysis, TYPE OF ERROR ${issue.type}" +
            ", SEVERITY: ${issue.severity}, ERROR MESSAGE: ${issue.message}" +
            ", FILE ${issue.fileName} AT LINE: ${issue.lineStart}"
        msgList.add(msg)
    }
    msgList.each { msg ->
        println msg
        mail to: email,
            subject: "New errors found in job1 build pipeline",
            body: "$msg"
    }
}
@NonCPS
def findItemInChangelog(item) {
    def result = "Build ran manually"
    def found = false
    def file = new XmlSlurper().parse("C:/Users/User/.jenkins/jobs/job1/builds/1374/changelog5966810591791724161.xml")
    file.entry.each { entry ->
        entry.changenumber.each { changenumber ->
            changenumber.children().each { tag ->
                if (tag.name() == item && found != true) {
                    result = tag.text()
                    found = true
                }
            }
        }
    }
    return result.toString()
}
pipeline {
    agent any
    stages {
        stage("test") {
            steps {
                script {
                    sendEmailNewErrors()
                }
            }
        }
    }
}
I have tried without the CPS annotation, but I understand that if the .each method is used, the annotation has to be there. Can anyone with more experience help with this?
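For what it's worth, a likely cause (an educated guess, not a confirmed fix): pipeline steps such as readJSON and mail cannot be called from inside a @NonCPS method; only plain Groovy may run there, which is exactly what the "expected to call ... but wound up catching readJSON" error complains about. One way to restructure is to keep the iteration over the JSON inside a @NonCPS helper and invoke the steps from regular (CPS) pipeline code, sketched below with the same names and URLs as above:
// Pure Groovy only - safe to mark @NonCPS because no pipeline steps are called
@NonCPS
def buildErrorMessages(String jsonText) {
    def json = new groovy.json.JsonSlurper().parseText(jsonText)
    def msgList = []
    json.issues.each { issue ->
        msgList << "New ERROR found in static analysis, TYPE OF ERROR ${issue.type}" +
            ", SEVERITY: ${issue.severity}, ERROR MESSAGE: ${issue.message}" +
            ", FILE ${issue.fileName} AT LINE: ${issue.lineStart}"
    }
    return msgList
}
// Regular pipeline code - steps like readJSON and mail belong here, not in @NonCPS
def sendEmailNewErrors() {
    def submitter = findItemInChangelog("changeUser")
    def emails = readJSON file: "D:/Emails.json"
    def email = emails.get(submitter) ?: "test@email.com"
    def jsonText = new URL("http://localhost:8080/job/job1/1374/cppcheck/new/api/json").text
    for (msg in buildErrorMessages(jsonText)) {
        echo msg
        mail to: email, subject: "New errors found in job1 build pipeline", body: msg
    }
}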

Use files as input to Jenkins JobDSL

I am trying to use Jenkins' JobDSL plugin to programmatically create jobs. However, I want to be able to define the parameters in a file. According to the docs on distributed builds, this may not be possible. Does anyone have any idea how I can achieve this? I could use the readFileFromWorkspace method, but I still need to iterate over all files provided and run JobDSL x times. The JobDSL code is below; the important part I am struggling with is the first 15 lines or so.
#!groovy
import groovy.io.FileType
def list = []
hudson.FilePath workspace = hudson.model.Executor.currentExecutor().getCurrentWorkspace()
def dir = new File(workspace.getRemote() + "/pipeline/applications")
dir.eachFile(FileType.FILES) { file ->
    list << file
}
list.each {
    println(it.path)
    def properties = new Properties()
    this.getClass().getResource(it.path).withInputStream {
        properties.load(it)
    }
    def _git_key_id = 'jenkins'
    consumablesRoot = '//pipeline_test'
    application_folder = "${consumablesRoot}/" + properties._application_name
    // Create the branch_indexer
    def jobName = "${application_folder}/branch_indexer"
    folder(consumablesRoot) {
        description("Ensure consumables folder is in place")
    }
    folder(application_folder) {
        description("Ensure app folder in consumables spaces is in place.")
    }
    job(jobName) {
        println("in the branch_indexer: ${GIT_BRANCH}")
        label('master')
        /* environmentVariables(
            __pipeline_code_repo: properties."__pipeline_code_repo",
            __pipeline_code_branch: properties."__pipeline_code_branch",
            __pipeline_scripts_code_repo: properties."__pipeline_scripts_code_repo",
            __pipeline_scripts_code_branch: properties."__pipeline_scripts_code_branch",
            __gcp_template_code_repo: properties."__gcp_template_code_repo",
            __gcp_template_code_branch: properties."__gcp_template_code_branch",
            _git_key_id: _git_key_id,
            _application_id: properties."_application_id",
            _application_name: properties."_application_name",
            _business_mnemonic: properties."_business_mnemonic",
            _control_repo: properties."_control_repo",
            _project_name: properties."_project_name"
        ) */
        scm {
            git {
                remote {
                    url(control_repo)
                    name('control_repo')
                    credentials(_git_key_id)
                }
                remote {
                    url(pipeline_code_repo)
                    name('pipeline_pipelines')
                    credentials(_git_key_id)
                }
            }
        }
        triggers {
            scm('@daily')
        }
        steps {
            // ensure that the latest code from the pipeline source code repo has been pulled
            shell("git ls-remote --heads control_repo | cut -d'/' -f3 | sort > .branches")
            shell("git checkout -f pipeline_pipelines/" + properties."pipeline_code_branch")
            // get the last branch from the control_repo repo
            shell("""
                git for-each-ref --sort=-committerdate refs/remotes | grep -i control_repo | head -n 1 > .last_branch
            """)
            dsl(['pipeline/branch_indexer.groovy'])
        }
    }
    // Start the branch_indexer
    queue(jobName)
}
In case someone else ends up here in search of a simple method for reading only one parameter file, use readFileFromWorkspace (as mentioned by @CodyK):
def file = readFileFromWorkspace(relative_path_to_file)
If the file contains a parameter called your_param, you can read it using ConfigSlurper:
def config = new ConfigSlurper().parse(file)
def your_param = config.getProperty("your_param")
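For instance, if the file contains the line your_param = "some-value" (ConfigSlurper reads Groovy-syntax assignments), config.getProperty("your_param") returns "some-value".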
Was able to get it working with this piece of code:
hudson.FilePath workspace = hudson.model.Executor.currentExecutor().getCurrentWorkspace()
// Build a list of all config files ending in .properties
def cwd = hudson.model.Executor.currentExecutor().getCurrentWorkspace().absolutize()
def configFiles = new FilePath(cwd, 'pipeline/applications').list('*.properties')
configFiles.each { file ->
    def properties = new Properties()
    def content = readFileFromWorkspace(file.getRemote())
    properties.load(new StringReader(content))
}
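From there, each loaded properties object can feed the folder()/job() blocks from the question in the usual way, e.g. (a sketch reusing the property name from above):
def application_folder = "//pipeline_test/" + properties._application_name
folder(application_folder) {
    description("Ensure app folder in consumables spaces is in place.")
}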

Accessing Groovy class variables via Jenkins Declarative Pipeline

I am trying to create a pipeline for my project using Jenkins Declarative Pipeline. As part of it, I had to write some Groovy code in a separate file, and I was successful in loading it. Below is a code snippet from the Jenkinsfile:
stage('Pre-Build') {
    steps {
        script {
            pipeline = load 'dependencyChecker.groovy'
            dependents = pipeline.gatherAllDependencies("${base_workspace}/${branch}/${project}/settings.gradle")
            for (item in dependents) {
                sh "cp --parents ${item} ."
            }
        }
    }
}
The following is the Groovy file (dependencyChecker.groovy):
import java.nio.file.*
class Test1 {
    static def outStream;
    static setOut(PrintStream out) {
        outStream = out;
        out.println(outStream);
    }
    static List gatherAllDependencies(String settingsPath) {
        List projectDependencies = [];
        computeAllDependencies(settingsPath, projectDependencies);
        return projectDependencies;
    }
    static List computeAllDependencies(String settingsPath, List projectDependencies) {
        new File(settingsPath).splitEachLine('=') { line ->
            if (line[1] != null) {
                outStream.println("Entire Project Name is ${line[1]}");
                def name = line[1].split('\'');
                if (name.length == 3) {
                    def dir = name[1];
                    Path path = Paths.get(settingsPath);
                    def dependency = path.getParent().resolve(dir).normalize();
                    outStream.println(dependency);
                    if (!projectDependencies.contains("${dependency}")) {
                        projectDependencies.add("${dependency}")
                        if (new File("${dependency}/settings.gradle").exists()) {
                            computeAllDependencies("${dependency}/settings.gradle", projectDependencies);
                        }
                    }
                }
            }
        }
    }
}
Test1.setOut(System.out);
return Test1;
Updated:
import java.nio.file.*
class Test1 implements Serializable {
    def script;
    Test1(def script) {
        this.script = script;
    }
    def gatherAllDependencies(String settingsPath) {
        List projectDependencies = [];
        computeAllDependencies(settingsPath, projectDependencies);
        return projectDependencies;
    }
    def computeAllDependencies(String settingsPath, List projectDependencies) {
        new File(settingsPath).splitEachLine('=') { line ->
            if (line[1] != null) {
                this.script.echo("Entire Project Name is ${line[1]}");
                def name = line[1].split('\'');
                if (name.length == 3) {
                    def dir = name[1];
                    Path path = Paths.get(settingsPath);
                    def dependency = path.getParent().resolve(dir).normalize();
                    this.script.echo(dependency);
                    if (!projectDependencies.contains("${dependency}")) {
                        projectDependencies.add("${dependency}")
                        if (new File("${dependency}/settings.gradle").exists()) {
                            computeAllDependencies("${dependency}/settings.gradle", projectDependencies);
                        }
                    }
                }
            }
        }
    }
}
def a = new Test1(script: this);
return a;
dependents (in the Jenkinsfile) is always empty, despite the same code working fine in the Jenkins Script Console. I could not debug it, as none of my println statements are logged anywhere.
Can someone guide me as to where I am going wrong?
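One plausible explanation (an educated guess, not from the original thread): in a pipeline, plain Java I/O such as new File(...) runs on the Jenkins controller, not on the agent that owns the workspace, so the settings.gradle files are never found and the list stays empty, while the Script Console runs on the controller where those paths happen to resolve. The usual workaround is to route file access through pipeline steps, which are agent-aware; a sketch of computeAllDependencies from the updated class rewritten that way:
def computeAllDependencies(String settingsPath, List projectDependencies) {
    // readFile and fileExists are pipeline steps, so they act on the agent's workspace
    this.script.readFile(settingsPath).splitEachLine('=') { line ->
        if (line.size() > 1 && line[1] != null) {
            def name = line[1].split('\'')
            if (name.length == 3) {
                def dependency = Paths.get(settingsPath).getParent().resolve(name[1]).normalize().toString()
                if (!projectDependencies.contains(dependency)) {
                    projectDependencies.add(dependency)
                    if (this.script.fileExists("${dependency}/settings.gradle")) {
                        computeAllDependencies("${dependency}/settings.gradle", projectDependencies)
                    }
                }
            }
        }
    }
}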

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (using the CloudBees Folder Plugin).
It sounds like the simplest possible thing to tell Jenkins: build every job in Folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any job in the folder to be built; in other words, adding a job to this build should be as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
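The updater can be a scheduled system Groovy script; a minimal sketch of the idea (the folder name Projects is an assumption), collecting every job in the folder into the comma-separated list that the "run every job" job consumes:
import jenkins.model.Jenkins
// Assumed folder name - adjust to your setup
def folder = Jenkins.instance.getItemByFullName('Projects')
def jobList = folder.items.collect { it.fullName }.join(',')
println jobList // write this into the "run every job" job's configuration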
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launch them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        // Note the parentheses: "!childItem instanceof AbstractProject" would negate
        // childItem first and never skip anything
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == project.fullName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape).
Add the following to your library:
package myorg;

// e.g. src/myorg/JobUtil.groovy
public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == jobName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this, and it works very nicely. There are two jobs: initBuildAll, which runs the Groovy script and then launches the buildAllProjects job. In my setup, I trigger initBuildAll daily; you could trigger it another way that works for you. We aren't doing full-on CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml;
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false;
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile();
    def file = configXMLFile.getFile();
    InputStream is = new FileInputStream(file);
    job.updateByXml(new StreamSource(is));
    job.save();
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching, I was able to define a "run every job" job in Job DSL format.
The advantage being I can maintain my job configuration in version control. e.g.
job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
        downstream('myfolder/job2')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}

@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins shared library method (it requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
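Called from a pipeline that has the library loaded, usage might look like this (the folder path is an assumption):
triggerItemsInFolder('^path/to/folder/')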
A reference pipeline script to run a parent job that triggers other jobs, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality; the API documentation can be found in the Jenkins Javadoc.
A simple script that immediately triggers all multibranch pipeline projects under a given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
