Set Jenkins Global Variables - init.groovy.d - jenkins

I am looking to set some configuration in Jenkins as part of a Docker build, using the init.groovy.d scripts to do so.
I am able to run the code below successfully when a Global Property already exists, but when there are no Global Properties in place, the script succeeds yet the property is not added.
import jenkins.*
import jenkins.model.*

def instance = Jenkins.getInstance()

println "--> setting Global properties (Environment variables)..."

def globalProps = instance.globalNodeProperties
def props = globalProps.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
for (prop in props) {
    prop.envVars.put("PATH", "/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin")
}
instance.save()

println "--> setting Global properties (Environment variables)... done!"
How can I run this command in a Jenkins instance with no global properties already set?

This was caused by the for loop iterating over an empty list: when no Global Properties exist yet, there is nothing to loop over, so nothing gets added. Fixed by changing the code to create the property when none exists. No issue with Jenkins, just my implementation!
import jenkins.model.*

def instance = Jenkins.getInstance()
def globalNodeProperties = instance.getGlobalNodeProperties()
def envVarsNodePropertyList = globalNodeProperties.getAll(hudson.slaves.EnvironmentVariablesNodeProperty.class)
def envVars
if (envVarsNodePropertyList == null || envVarsNodePropertyList.size() == 0) {
    // No global properties yet: create one and register it
    def newEnvVarsNodeProperty = new hudson.slaves.EnvironmentVariablesNodeProperty()
    globalNodeProperties.add(newEnvVarsNodeProperty)
    envVars = newEnvVarsNodeProperty.getEnvVars()
} else {
    envVars = envVarsNodePropertyList.get(0).getEnvVars()
}
envVars.put("PATH", "/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin")
instance.save()

Related

Jenkins variable defined inside global variable (env variable)

I have defined an environment (global) variable in Jenkins via the configuration as
REPORT = "Test, ${CycleNumber},${JOB_NAME}"
I have one parameter defined in my pipeline, called Cycle, which takes the values new and update. Based on this Cycle value, CycleNumber should be updated, and I tried doing that via Groovy using a script block in my pipeline, as below:
if(Cycle == "New")
{
CycleNumber = "12345"
}
else if (Cycle == "Update")
{
CycleNumber = "7890"
}
After this update, if I do echo "${env.REPORT}" I get the value "Test,,TestJob", where the CycleNumber variable is not filled in. Could you please let me know if there is a way to update this CycleNumber field?
Don't rely on Groovy's string interpolation to replace the CycleNumber. You can have your own placeholder (e.g. _CYCLE_NUMBER_) in the environment variable, which you can replace later in your flow. Take a look at the following example.
pipeline {
    agent any
    stages {
        stage("Test") {
            environment {
                REPORT = "Test, _CYCLE_NUMBER_,${JOB_NAME}"
            }
            steps {
                script {
                    def Cycle = 'New'
                    def CycleNumber = 'none'
                    if (Cycle == "New") {
                        CycleNumber = "12345"
                    } else if (Cycle == "Update") {
                        CycleNumber = "7890"
                    }
                    def newReport = "$REPORT".replace('_CYCLE_NUMBER_', CycleNumber)
                    echo "$newReport"
                }
            }
        }
    }
}
Also, once you set the newReport variable, make sure you use that same variable. If you do "${env.REPORT}" you will always get the original value assigned to the environment variable.
There is an answer with a workaround here: Updating environment global variable in Jenkins pipeline from the stage level - is it possible?
TLDR:
You can't override a global environment variable that has been declared in environment {} (global scope); however, you can use the withEnv() step to wrap the script block in your pipeline so that it references the updated value, e.g.:
withEnv(['REPORT=...']) {
    // do something with the updated env.REPORT
}
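Combining that with the placeholder approach from the answer above, a minimal sketch might look like this (assuming the Cycle job parameter from the question; the values are illustrative):

pipeline {
    agent any
    environment {
        REPORT = "Test, _CYCLE_NUMBER_,${JOB_NAME}"
    }
    stages {
        stage('Test') {
            steps {
                script {
                    // Illustrative: derive the cycle number from the job parameter
                    def cycleNumber = (params.Cycle == 'New') ? '12345' : '7890'
                    withEnv(["REPORT=${env.REPORT.replace('_CYCLE_NUMBER_', cycleNumber)}"]) {
                        echo env.REPORT // prints the report with the cycle number filled in
                    }
                }
            }
        }
    }
}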

How to use env variable inside triggers section in jenkins pipeline?

I am reading a properties file for the node label and triggerConfigURL. The node label works, but I couldn't read and set triggerConfigURL from the environment.
def propFile = "hello/world.txt" // This is present in the workspace, and it works.
pipeline {
    environment {
        nodeProp = readProperties file: "${propFile}"
        nodeLabel = "$nodeProp.NODE_LABEL"
        dtcPath = "$nodeProp.DTC"
    }
    agent { label env.nodeLabel } // this works!! sets NODE_LABEL value from the properties file.
    triggers {
        gerrit dynamicTriggerConfiguration: 'true',
               triggerConfigURL: env.dtcPath, // THIS DOESN'T WORK; tried "${env.dtcPath}" and a few other notations too.
               serverName: 'my-gerrit-server',
               triggerOnEvents: [commentAddedContains('^fooBar$')]
    }
    stages {
        stage('Print Env') {
            steps {
                script {
                    sh 'env' // This prints "dtcPath=https://path/of/the/dtc/file", so the dtcPath env var is set.
                }
            }
        }
    }
}
After running the job, the trigger configuration (shown as a screenshot in the original post) still does not contain the value from the properties file.
Of the environment and triggers sections, Jenkins evaluates one before the other, and you have experimentally shown that triggers is evaluated first and environment second. It also looks like agent is evaluated after environment, which is why the node label works.
While I don't know why the programmers made this specific decision, I think you are in a kind of chicken-and-egg problem, where you want to define the pipeline using a file but can only read the file once the pipeline is defined and running.
Having said that, the following might work:
def propFile = "hello/world.txt"
def nodeProp = null
node {
    nodeProp = readProperties file: propFile
}
pipeline {
    environment {
        nodeLabel = nodeProp.NODE_LABEL
        dtcPath = nodeProp.DTC
    }
    agent { label env.nodeLabel }
    triggers {
        gerrit dynamicTriggerConfiguration: 'true',
               triggerConfigURL: nodeProp.DTC,
               //etc.
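The trick here is that the plain scripted node { } block runs the readProperties step on an executor before the declarative pipeline { } block is evaluated, so nodeProp is already populated by the time the environment and triggers sections are processed.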

Active Choices Parameter with Credentials

I'm trying to get access to the credentials stored in Jenkins without having to hardcode them in the script itself.
#!/usr/bin/env groovy
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                  credentialsId: 'GroovyAWSScMgr',
                  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    return ["${env.AWS_ACCESS_KEY_ID}"]
}
I've tried:
return [AWS_ACCESS_KEY_ID]
return [env.AWS_ACCESS_KEY_ID]
return ["${env.AWS_ACCESS_KEY_ID}"]
The result continues to be null.
You can try this:
import jenkins.model.*

def credentialsId = 'GroovyAWSScMgr'
def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
        com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
        Jenkins.instance, null, null
    ).find { it.id == credentialsId }
return [creds.username]
You can then use creds.username and creds.password in your script.
I'm not sure whether it is secure.
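Note that the lookup above searches for username/password-style credentials. If the credential was created as an AWS credential (which is what AmazonWebServicesCredentialsBinding binds), a variant using the AWS Credentials plugin's own type may be needed. A sketch, assuming the aws-credentials plugin is installed and its type name has not changed across plugin versions:

import com.cloudbees.jenkins.plugins.awscredentials.AmazonWebServicesCredentials
import com.cloudbees.plugins.credentials.CredentialsProvider
import jenkins.model.Jenkins

def creds = CredentialsProvider.lookupCredentials(
        AmazonWebServicesCredentials.class, Jenkins.instance, null, null
    ).find { it.id == 'GroovyAWSScMgr' }
// getCredentials() returns the AWS SDK credentials object
return [creds?.getCredentials()?.getAWSAccessKeyId()]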
I tried something similar in an Active Choices Parameter for one of my jobs and nothing worked. I have instead used the approach below to avoid hardcoding credentials.
Define your credentials (for example, in your case, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the appropriate values) as environment variables in Manage Jenkins -> Configure System -> Global properties, and retrieve them in your script:
import jenkins.model.*

instance = Jenkins.getInstance()
globalNodeProperties = instance.getGlobalNodeProperties()
aws_access_key_id = ''
aws_secret_key = ''
globalNodeProperties.each {
    envVars = it.getEnvVars()
    if (envVars.get('AWS_ACCESS_KEY_ID') != null) {
        aws_access_key_id = envVars.get('AWS_ACCESS_KEY_ID');
    }
    if (envVars.get('AWS_SECRET_ACCESS_KEY') != null) {
        aws_secret_key = envVars.get('AWS_SECRET_ACCESS_KEY');
    }
}
You can refer to them in your script as ${aws_access_key_id} and ${aws_secret_key}.

Using FilePath to access workspace on slave in Jenkins pipeline

I need to check for the existence of a certain .exe file in my workspace as part of my pipeline build job. I tried to use the Groovy script below from my Jenkinsfile to do this, but I think the File class by default looks for the workspace directory on the Jenkins master, and fails.
@com.cloudbees.groovy.cps.NonCPS
def checkJacoco(isJacocoEnabled) {
    new File(pwd()).eachFileRecurse(FILES) { it ->
        if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec')
            isJacocoEnabled = true
    }
}
How can I access the file system on the slave using Groovy from inside the Jenkinsfile?
I also tried the code below, but I am getting a No such property: build for class: groovy.lang.Binding error. I also tried to use the manager object instead, but I get the same error.
@com.cloudbees.groovy.cps.NonCPS
def checkJacoco(isJacocoEnabled) {
    channel = build.workspace.channel
    rootDirRemote = new FilePath(channel, pwd())
    println "rootDirRemote::$rootDirRemote"
    rootDirRemote.eachFileRecurse(FILES) { it ->
        if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec') {
            println "Jacoco Exists:: ${it.path}"
            isJacocoEnabled = true
        }
    }
}
Had the same problem, found this solution:
import hudson.FilePath;
import jenkins.model.Jenkins;

node("aSlave") {
    writeFile file: 'a.txt', text: 'Hello World!';
    listFiles(createFilePath(pwd()));
}

def createFilePath(path) {
    if (env['NODE_NAME'] == null) {
        error "envvar NODE_NAME is not set, probably not inside a node {} block or running an older version of Jenkins!";
    } else if (env['NODE_NAME'].equals("master")) {
        // On the master the workspace is local, so wrap the path in a java.io.File
        return new FilePath(new File(path));
    } else {
        return new FilePath(Jenkins.getInstance().getComputer(env['NODE_NAME']).getChannel(), path);
    }
}
@NonCPS
def listFiles(rootPath) {
    print "Files in ${rootPath}:";
    for (subPath in rootPath.list()) {
        echo " ${subPath.getName()}";
    }
}
The important thing here is that createFilePath() isn't annotated with @NonCPS, since it needs access to the env variable. Using @NonCPS removes access to the "Pipeline goodness", but on the other hand it doesn't require that all local variables be serializable.
You should then be able to do the search for the file inside the listFiles() method.
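For the original jacoco.exec check, the search can then go through the same FilePath helper. A minimal sketch (the containsJacoco name and the recursive walk are illustrative, not part of the answer above):

// Hypothetical helper: walk the remote workspace via FilePath and
// return true if a jacoco.exec file (any casing) exists anywhere in it.
@NonCPS
def containsJacoco(rootPath) {
    for (subPath in rootPath.list()) {
        if (subPath.isDirectory()) {
            if (containsJacoco(subPath)) {
                return true
            }
        } else if (subPath.getName().equalsIgnoreCase('jacoco.exec')) {
            return true
        }
    }
    return false
}

Inside a node {} block this could then be called as, e.g., def isJacocoEnabled = containsJacoco(createFilePath(pwd())).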

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (using the CloudBees Folder Plugin).
It sounds like the simplest possible command: tell Jenkins to build every job in folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob Plugin, and a few others. None seem to support the use case I'm after. I simply want every job in the folder to be built. In other words, adding a job to this build should be as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
One way to do this is to create a Pipeline job that runs Groovy script to enumerate all jobs in the current folder and then launch them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
    build job: names[i], wait: false
}

@NonCPS
def jobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    def childItems = project.parent.items
    def targets = []
    for (i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        // Parentheses matter here: "!childItem instanceof X" would negate childItem first
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == project.fullName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
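One caveat with this snippet (and the library version below): Pipeline (WorkflowJob) items do not extend AbstractProject, so the instanceof filter skips them; the Pipeline Job variant further down, which filters on hudson.model.Job instead, also catches those.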
If you use Pipeline shared libraries, then the following is much nicer (and does not require a Groovy sandbox escape):
Add the following to your library:
// e.g. src/myorg/JobUtil.groovy in the shared library
package myorg;

public String runAllSiblings(jobName) {
    def names = siblingProjects(jobName)
    for (def i = 0; i < names.size(); i++) {
        build job: names[i], wait: false
    }
}

@NonCPS
private List siblingProjects(jobName) {
    def project = Jenkins.instance.getItemByFullName(jobName)
    def childItems = project.parent.items
    def targets = []
    for (def i = 0; i < childItems.size(); i++) {
        def childItem = childItems[i]
        if (!(childItem instanceof AbstractProject)) continue;
        if (childItem.fullName == jobName) continue;
        targets.add(childItem.fullName)
    }
    return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this, and it works very nicely. There are two jobs: initBuildAll, which runs the Groovy script, and buildAllProjects, which it then launches. In my setup I launch the initBuildAll job daily; you could trigger it another way that works for you. We aren't doing full-on CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins

Folder findFolder(String folderName) {
    for (folder in Jenkins.instance.items) {
        if (folder.name == folderName) {
            return folder
        }
    }
    return null
}

AbstractItem findItem(Folder folder, String itemName) {
    for (item in folder.items) {
        if (item.name == itemName) {
            return item
        }
    }
    null
}

AbstractItem findItem(String folderName, String itemName) {
    Folder folder = findFolder(folderName)
    folder ? findItem(folder, itemName) : null
}

String listProjectItems() {
    Folder projectFolder = findFolder('Projects')
    StringBuilder b = new StringBuilder()
    if (projectFolder) {
        for (job in projectFolder.items.sort { it.name.toUpperCase() }) {
            b.append(',').append(job.fullName)
        }
        return b.substring(1) // dump the initial comma
    }
    return b.toString()
}

File backupConfig(XmlFile config) {
    File backup = new File("${config.file.absolutePath}.bak")
    FileWriter fw = new FileWriter(backup)
    config.writeRawTo(fw)
    fw.close()
    backup
}

boolean updateMultiBuildXmlConfigFile() {
    AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
    XmlFile oldConfig = buildItemsJob.getConfigFile()
    String latestProjectItems = listProjectItems()
    String oldXml = oldConfig.asString()
    String newXml = oldXml;
    println latestProjectItems
    println oldXml
    def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
    if (mat) {
        println mat.group(1)
        if (mat.group(1) == latestProjectItems) {
            println 'no Change'
            return false;
        } else {
            // there's a change
            File backup = backupConfig(oldConfig)
            def newProjects = "<projects>${latestProjectItems}</projects>"
            newXml = mat.replaceFirst(newProjects)
            XmlFile newConfig = new XmlFile(oldConfig.file)
            FileWriter nw = new FileWriter(newConfig.file)
            nw.write(newXml)
            nw.close()
            println newXml
            println 'file updated'
            return true
        }
    }
    false
}

void reloadMultiBuildConfig() {
    AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
    def configXMLFile = job.getConfigFile();
    def file = configXMLFile.getFile();
    InputStream is = new FileInputStream(file);
    job.updateByXml(new StreamSource(is));
    job.save();
    println "MultiBuild Job updated"
}

if (updateMultiBuildXmlConfigFile()) {
    reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching, I was able to define a "run every job" job in Job DSL format.
The advantage is that I can maintain my job configuration in version control, e.g.:
job('myfolder/build-all') {
    publishers {
        downstream('myfolder/job1')
        downstream('myfolder/job2')
    }
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
    build job: it, wait: false
}

@NonCPS
def getJobNames() {
    def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
    project.parent.items.findAll {
        it.fullName != project.fullName && it instanceof hudson.model.Job
    }.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject

Jenkins.instance.getAllItems(AbstractProject.class).each {
    if (it.fullName =~ 'path/to/folder') {
        (it as AbstractProject).scheduleBuild2(0)
    }
}
With some modification you'd be able to create a Jenkins shared library method (it requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject

@NonCPS
def triggerItemsInFolder(String folderPath) {
    Jenkins.instance.getAllItems(AbstractProject.class).each {
        if (it.fullName =~ folderPath) {
            (it as AbstractProject).scheduleBuild2(0)
        }
    }
}
A reference pipeline script for a parent job that triggers the other jobs in parallel, as suggested by @WayneBooth:
pipeline {
    agent any
    stages {
        stage('Parallel Stage') {
            parallel {
                stage('Parallel 1') {
                    steps {
                        build(job: "jenkins_job_1")
                    }
                }
                stage('Parallel 2') {
                    steps {
                        build(job: "jenkins_job_2")
                    }
                }
            }
        }
    }
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality; the relevant API documentation can be found in the Jenkins Javadoc.
A simple script that immediately triggers all Multibranch Pipeline projects under the given folder structure (in this example, folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause

Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
    return it.fullName =~ '^folder/subfolder/'
}.each {
    it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.
