Caused: java.io.NotSerializableException: org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval$PendingScript - jenkins

In my declarative Jenkins pipeline I have added code which should approve scripts as suggested here:
...
} catch (Exception jobFailed) {
if (jobFailed.getMessage() == "script not yet approved for use") {
echo("[WARNING] Changes in delivery job were automatically approved")
approveDeliveryJob()
return false
}
...
@NonCPS
def approveDeliveryJob() {
toApprove = ScriptApproval.get().getPendingScripts().collect()
toApprove.each { pending -> ScriptApproval.get().approveScript(pending.getHash())} }
}
...
As suggested here,
to solve this, put all the code that works with non-serializable variables into a @NonCPS-annotated function.
What am I missing?

toApprove = ScriptApproval.get().getPendingScripts().collect()
Here you are storing the result in the script binding, which is affected by serialization.
You need a local variable instead:
def toApprove = ScriptApproval.get().getPendingScripts().collect()
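Putting it together, a minimal sketch of the corrected helper (same method as in the question, with the local variable and the import spelled out):
import org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval

@NonCPS
def approveDeliveryJob() {
    // a local variable is never captured in the serialized script binding
    def toApprove = ScriptApproval.get().getPendingScripts().collect()
    toApprove.each { pending -> ScriptApproval.get().approveScript(pending.getHash()) }
}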

Related

Bulk update parameter in jobs

We have a lot of jobs that all perform SCM checkout based on a Build Parameter value: say REPO_URL=ssh://url. Over time, small differences accumulated in the names and values of these parameters: REPOURL, repo_url, =ssh://url/, =ssh://url:port, etc.
We need to reduce them to a common denominator with a single parameter name and a single value. How do we bulk update parameters in 50+ jobs?
Using Jenkins Script Console.
NOTE: these are essentially destructive operations, so make sure you tested your code on some spare jobs before running it in production!!!
Change default value of a parameter
Jenkins.instance.getAllItems(Job)
// filter jobs by name if needed
.findAll { it.fullName.startsWith('sandbox/tmp-magic') }
.each {
it
.getProperty(ParametersDefinitionProperty)
.getParameterDefinition('MAGIC_PARAMETER')
// `each` ensures nothing happens if `get` returns null; also see paragraph below
.each {
it.defaultValue = 'shmagic'
}
// the job has changed, but the next config reload (e.g. at restart) would overwrite our changes
// so we need to save the job config to its config.xml file
it.save()
}
Instead of .getParameterDefinition('MAGIC_PARAMETER') you can use
.parameterDefinitions
.findAll { it.name == 'MAGIC_PARAMETER' }
, changing the predicate in findAll if you need, e.g., to change the value of multiple parameters with different names; then you iterate over the found definitions via each{}, as shown in the sketch below.
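For example, a sketch of that multi-parameter variant (the second parameter name is just a placeholder):
Jenkins.instance.getAllItems(Job)
    .findAll { it.fullName.startsWith('sandbox/tmp-magic') }
    .each { job ->
        // safe navigation: skip jobs that have no parameters at all
        def definitions = job.getProperty(ParametersDefinitionProperty)?.parameterDefinitions ?: []
        definitions
            .findAll { it.name == 'MAGIC_PARAMETER' || it.name == 'OTHER_MAGIC_PARAMETER' }
            .each { it.defaultValue = 'shmagic' }
        // persist the change to the job's config.xml, as above
        job.save()
    }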
Change parameter name (and value)
This is slightly more tricky, since apparently you cannot edit the name of a ParameterDefinition, only replace it in the list.
Jenkins.instance.getAllItems(Job)
.findAll { it.fullName.startsWith('sandbox/tmp-magic') }
.each {
def parameters = it.getProperty(ParametersDefinitionProperty).parameterDefinitions
def oldParameter = parameters.find { it.name == 'FOO' }
// avoid changing jobs without this parameter
if (!oldParameter)
return
def idx = parameters.indexOf(oldParameter)
// preserve original value if necessary
def oldValue = oldParameter.defaultValue
parameters[idx] = new StringParameterDefinition('GOOD_FOO', oldValue)
it.save()
}
Bonus points: replace the value for the SCM step in Freestyle and Pipeline-from-SCM jobs
Some of our jobs use the MercurialSCM plugin, and some use the MultiSCM plugin to check out multiple repos, so this is what I tested it with.
import hudson.plugins.mercurial.MercurialSCM
import org.jenkinsci.plugins.multiplescms.MultiSCM
import org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition
import org.jenkinsci.plugins.workflow.job.WorkflowJob
Jenkins.instance.getAllItems(Job)
.findAll { it.fullName.startsWith('sandbox/tmp-magic') }
.each {
print "Checking $it ... "
if (it.class == FreeStyleProject && it.scm) {
println "Freestyle"
it.scm = replaceWhateverScm(it.scm)
it.save()
} else if (it.class == WorkflowJob) {
print "Pipeline ... "
def flow = it.definition
if (flow.class == CpsScmFlowDefinition) {
println "ScmFlow"
def scm = replaceWhateverScm(flow.scm)
def newFlow = new CpsScmFlowDefinition(scm, flow.scriptPath)
newFlow.lightweight = flow.lightweight
it.definition = newFlow
it.save()
} else
println "unsupported definition"
} else
println "unsupported job"
}
def replaceWhateverScm(scm) {
if (scm.class == MercurialSCM) {
println "replacing MercurialSCM"
return replaceMercurialSource(scm)
}
if (scm.class == MultiSCM) {
println "replacing MultiSCM"
// cannot replace part of MultiSCM, replace whole scm instead
return new MultiSCM(
scm.configuredSCMs
.collect { (it.class == MercurialSCM) ? replaceMercurialSource(it) : it }
)
}
throw new Exception("unknown class ${scm.class}")
}
def replaceMercurialSource(MercurialSCM original) {
if (!original.source.toLowerCase().contains('repo_url'))
return original
def s = new MercurialSCM('<new_url>')
for (v in ["browser","clean","credentialsId","disableChangeLog","installation","modules","revision","revisionType","subdir",]) {
s."$v" = original."$v"
}
return s
}

Can I have a reusable "post" block for my jenkins pipelines?

I have many Jenkins pipelines for several different platforms, but my "post{}" block for all those pipelines is pretty samey. And it's quite large at this point because I include success, unstable, failure and aborted in it.
Is there a way to parameterize a reusable post{} block I can import in all my pipelines? I'd like to be able to import it and pass it params as well (because while it's almost the same, it varies very slightly for different pipelines).
Example post block that is currently copy and pasted inside all my pipeline{}s
post {
success{
script {
// I'd like to be able to pass in values for param1 and param2
someGroovyScript {
param1 = 'blah1'
param2 = 'blah2'
}
// maybe id want a conditional here that does something with a passed in param
if (param3 == 'blah3') {
echo 'doing something'
}
}
}
unstable{
... you get the idea
}
aborted{
... you get the idea
}
failure{
... you get the idea
}
}
The following does not work:
// in mypipeline.groovy
...
post {
script {
myPost{}
}
}
// in vars/myPost.groovy
def call(body) {
def config = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = config
body()
return always {
echo 'test'
}
}
Invalid condition "myPost" - valid conditions are [always, changed, fixed, regression, aborted, success, unstable, failure, notBuilt, cleanup]
Can I override post{} somehow or something?
Shared libraries are one approach for this; you were pretty close.
@Library('my-shared-library') _
pipeline {
...
post {
always {
script {
myPost()
}
}
}
}
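To also pass values in, as the question asks, one hedged sketch of vars/myPost.groovy could take a map; param1/param2/param3 and someGroovyScript come from the question's example and are assumed to exist in the same library:
// vars/myPost.groovy (sketch)
def call(Map config = [:]) {
    // post { always { ... } } fires for every outcome, so branch on the build result here
    if (currentBuild.currentResult == 'SUCCESS') {
        someGroovyScript {
            param1 = config.param1
            param2 = config.param2
        }
        if (config.param3 == 'blah3') {
            echo 'doing something'
        }
    } else {
        echo "build finished as ${currentBuild.currentResult}"
    }
}
The call in the pipeline then becomes myPost(param1: 'blah1', param2: 'blah2', param3: 'blah3').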
Answer based on https://stackoverflow.com/a/48563538/1783362
Shared Libraries link: https://jenkins.io/doc/book/pipeline/shared-libraries/

Jenkins pipeline: return value of build step

In this integration pipeline in Jenkins, I am triggering different builds in parallel using the build step, as follows:
stage('trigger all builds')
{
parallel
{
stage('componentA')
{
steps
{
script
{
def myjob=build job: 'componentA', propagate: true, wait: true
}
}
}
stage('componentB')
{
steps
{
script
{
def myjob=build job: 'componentB', propagate: true, wait: true
}
}
}
}
}
I would like to access the return value of the build step, so that I can know in my Groovy scripts what job name, number was triggered.
I have found in the examples that the object returned has getters like getProjectName() or getNumber() that I can use for this.
But how do I know the exact class of the returned object and the list of methods I can call on it? This seems to be missing from the Pipeline documentation. I am asking for this case in particular, but generally speaking, how can I know the class of the returned object and its documentation?
The step documentation is generated based on some files that are bundled with the plugin, which sometimes isn't enough. One easy way would be to just print out the class of the result object by calling getClass:
def myjob=build job: 'componentB', propagate: true, wait: true
echo "${myjob.getClass()}"
This output would tell you that the result (in this case) is an org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper, which has published Javadoc.
For other cases, I usually have to dive into the Jenkins source code. Here is my general strategy:
Figure out which plugin the step comes from, either from the step documentation, the jenkins.io steps reference, or just by searching the internet
From the plugin site, go to the source code repository
Search for the String literal of the step name, and find the step type that returns it. In this case, it looks to be coming from the BuildTriggerStep class, which extends AbstractStepImpl
@Override
public String getFunctionName() {
return "build";
}
Look at the nested DescriptorImpl to see what execution class is returned
public DescriptorImpl() {
super(BuildTriggerStepExecution.class);
}
Go to BuildTriggerStepExecution and look at the execution body in the start() method
Reading over the workflow step README shows that something should call context.onSuccess(value) to return a result. There is one place in that file, but that is only in the "no-wait" case, which always returns immediately and is null (source).
if (step.getWait()) {
return false;
} else {
getContext().onSuccess(null);
return true;
}
OK, so it isn't completing in the step execution, so it must be somewhere else. We can also search the repository for onSuccess and see what else might trigger it from this plugin. We find that a RunListener implementation handles setting the result asynchronously for the step execution if it has been configured that way:
for (BuildTriggerAction.Trigger trigger : BuildTriggerAction.triggersFor(run)) {
LOGGER.log(Level.FINE, "completing {0} for {1}", new Object[] {run, trigger.context});
if (!trigger.propagate || run.getResult() == Result.SUCCESS) {
if (trigger.interruption == null) {
trigger.context.onSuccess(new RunWrapper(run, false));
} else {
trigger.context.onFailure(trigger.interruption);
}
} else {
trigger.context.onFailure(new AbortException(run.getFullDisplayName() + " completed with status " + run.getResult() + " (propagate: false to ignore)"));
}
}
run.getActions().removeAll(run.getActions(BuildTriggerAction.class));
The trigger.context.onSuccess(new RunWrapper(run, false)); call is where the RunWrapper result comes from.
The result of the downstream job is given in the result attribute of the returned object.
I recommend using propagate: false to get control of how the result of the downstream job affects the current build.
Example:
pipeline{
[...]
stages {
stage('Dummy') {
steps {
echo "Hello world #1"
}
}
stage('Fails') {
steps {
script {
downstream = build job: 'Pipeline Test 2', propagate: false
if (downstream.result != 'SUCCESS') {
unstable(message: "Downstream job result is ${downstream.result}")
}
}
}
}
}
[...]
}
In this example, the current build is set to UNSTABLE whenever the downstream build has not been successful.
The result can be: SUCCESS, FAILURE, UNSTABLE, or ABORTED.
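The returned object also exposes the job name and build number the original question asked about (see the RunWrapper Javadoc); a short example using the question's componentA job:
script {
    def downstream = build job: 'componentA', propagate: false, wait: true
    echo "Triggered ${downstream.getProjectName()} #${downstream.getNumber()}"
    echo "Result: ${downstream.getResult()}, URL: ${downstream.getAbsoluteUrl()}"
}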
For your other question see Is there a built-in function to print all the current properties and values of an object? and Groovy / grails how to determine a data type?
I got the class name from the build log:
13:20:52 org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper@5fe8d20f
and then I found the doc on https://javadoc.jenkins.io/; you can get all you need from that page.
link: https://javadoc.jenkins.io/plugin/workflow-support/org/jenkinsci/plugins/workflow/support/steps/build/RunWrapper.html

Using FilePath to access workspace on slave in Jenkins pipeline

I need to check for the existence of a certain .exe file in my workspace as part of my pipeline build job. I tried to use the below Groovy script from my Jenkinsfile to do the same. But I think the File class by default tries to look for the workspace directory on the Jenkins master and fails.
@com.cloudbees.groovy.cps.NonCPS
def checkJacoco(isJacocoEnabled) {
new File(pwd()).eachFileRecurse(FILES) { it ->
if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec')
isJacocoEnabled = true
}
}
How to access the file system on slave using Groovy from inside the Jenkinsfile?
I also tried the below code, but I am getting a "No such property: build for class: groovy.lang.Binding" error. I also tried to use the manager object instead, but I get the same error.
@com.cloudbees.groovy.cps.NonCPS
def checkJacoco(isJacocoEnabled) {
channel = build.workspace.channel
rootDirRemote = new FilePath(channel, pwd())
println "rootDirRemote::$rootDirRemote"
rootDirRemote.eachFileRecurse(FILES) { it ->
if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec') {
println "Jacoco Exists:: ${it.path}"
isJacocoEnabled = true
}
}
Had the same problem, found this solution:
import hudson.FilePath;
import jenkins.model.Jenkins;
node("aSlave") {
writeFile file: 'a.txt', text: 'Hello World!';
listFiles(createFilePath(pwd()));
}
def createFilePath(path) {
if (env['NODE_NAME'] == null) {
error "envvar NODE_NAME is not set, probably not inside an node {} or running an older version of Jenkins!";
} else if (env['NODE_NAME'].equals("master")) {
return new FilePath(path);
} else {
return new FilePath(Jenkins.getInstance().getComputer(env['NODE_NAME']).getChannel(), path);
}
}
@NonCPS
def listFiles(rootPath) {
print "Files in ${rootPath}:";
for (subPath in rootPath.list()) {
echo " ${subPath.getName()}";
}
}
The important thing here is that createFilePath() isn't annotated with @NonCPS, since it needs access to the env variable. Using @NonCPS removes access to the "Pipeline goodness", but on the other hand it doesn't require that all local variables be serializable.
You should then be able to do the search for the file inside the listFiles() method.
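For example, a rough sketch of that search for the question's jacoco.exec file, reusing createFilePath() from above (recursing manually because FilePath.list() only returns direct children):
@NonCPS
def containsJacoco(rootPath) {
    for (subPath in rootPath.list()) {
        if (subPath.isDirectory()) {
            if (containsJacoco(subPath)) return true
        } else if (subPath.getName().equalsIgnoreCase('jacoco.exec')) {
            return true
        }
    }
    return false
}

node('aSlave') {
    def isJacocoEnabled = containsJacoco(createFilePath(pwd()))
    echo "Jacoco found: ${isJacocoEnabled}"
}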

How to tell Jenkins "Build every project in folder X"?

I have set up some folders (using the CloudBees Folder Plugin).
It sounds like the simplest possible thing to be able to tell Jenkins: build every job in Folder X.
I do not want to have to manually create a comma-separated list of every job in the folder. I do not want to add to this list whenever I want to add a job to this folder. I simply want it to find all the jobs in the folder at run time, and try to build them.
I'm not finding a plugin that lets me do that.
I've tried using the Build Pipeline Plugin, the Bulk Builder Plugin, the MultiJob plugin, and a few others. None seem to support the use case I'm after. I simply want any Job in the folder to be built. In other words, adding a job to this build is as simple as creating a job in this folder.
How can I achieve this?
I've been using Jenkins for some years and I've not found a way of doing what you're after.
The best I've managed is:
I have a "run every job" job (which contains a comma-separated list of all the jobs you want).
Then I have a separate job that runs periodically and updates the "run every job" job as new projects come and go.
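As a rough sketch of the "run every job" part (JOB_LIST is a hypothetical string parameter holding the comma-separated job names):
// scripted pipeline of the "run every job" job
for (name in params.JOB_LIST.split(',')) {
    build job: name.trim(), wait: false
}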
One way to do this is to create a Pipeline job that runs a Groovy script to enumerate all jobs in the current folder and then launches them.
The version below requires the sandbox to be disabled (so it can access Jenkins.instance).
def names = jobNames()
for (i = 0; i < names.size(); i++) {
build job: names[i], wait: false
}
@NonCPS
def jobNames() {
def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
def childItems = project.parent.items
def targets = []
for (i = 0; i < childItems.size(); i++) {
def childItem = childItems[i]
if (!(childItem instanceof AbstractProject)) continue;
if (childItem.fullName == project.fullName) continue;
targets.add(childItem.fullName)
}
return targets
}
If you use Pipeline libraries, then the following is much nicer (and does not require you to allow a Groovy sandbox escape):
Add the following to your library:
package myorg;
public String runAllSiblings(jobName) {
def names = siblingProjects(jobName)
for (def i = 0; i < names.size(); i++) {
build job: names[i], wait: false
}
}
@NonCPS
private List siblingProjects(jobName) {
def project = Jenkins.instance.getItemByFullName(jobName)
def childItems = project.parent.items
def targets = []
for (def i = 0; i < childItems.size(); i++) {
def childItem = childItems[i]
if (!(childItem instanceof AbstractProject)) continue;
if (childItem.fullName == jobName) continue;
targets.add(childItem.fullName)
}
return targets
}
And then create a pipeline with the following code:
(new myorg.JobUtil()).runAllSiblings(currentBuild.fullProjectName)
Yes, there are ways to simplify this further, but it should give you some ideas.
I developed a Groovy script that does this. It works very nicely. There are two jobs: initBuildAll, which runs the Groovy script and then launches the 'buildAllJobs' job. In my setup, I launch the initBuildAll script daily. You could trigger it another way that works for you. We aren't doing full-on CI, so daily is good enough for us.
One caveat: these jobs are all independent of one another. If that's not your situation, this may need some tweaking.
These jobs are in a separate Folder called MultiBuild. The jobs to be built are in a folder called Projects.
import com.cloudbees.hudson.plugins.folder.Folder
import javax.xml.transform.stream.StreamSource
import hudson.model.AbstractItem
import hudson.XmlFile
import jenkins.model.Jenkins
Folder findFolder(String folderName) {
for (folder in Jenkins.instance.items) {
if (folder.name == folderName) {
return folder
}
}
return null
}
AbstractItem findItem(Folder folder, String itemName) {
for (item in folder.items) {
if (item.name == itemName) {
return item
}
}
null
}
AbstractItem findItem(String folderName, String itemName) {
Folder folder = findFolder(folderName)
folder ? findItem(folder, itemName) : null
}
String listProjectItems() {
Folder projectFolder = findFolder('Projects')
StringBuilder b = new StringBuilder()
if (projectFolder) {
for (job in projectFolder.items.sort{it.name.toUpperCase()}) {
b.append(',').append(job.fullName)
}
return b.substring(1) // dump the initial comma
}
return b.toString()
}
File backupConfig(XmlFile config) {
File backup = new File("${config.file.absolutePath}.bak")
FileWriter fw = new FileWriter(backup)
config.writeRawTo(fw)
fw.close()
backup
}
boolean updateMultiBuildXmlConfigFile() {
AbstractItem buildItemsJob = findItem('MultiBuild', 'buildAllProjects')
XmlFile oldConfig = buildItemsJob.getConfigFile()
String latestProjectItems = listProjectItems()
String oldXml = oldConfig.asString()
String newXml = oldXml;
println latestProjectItems
println oldXml
def mat = newXml =~ '\\<projects\\>(.*)\\<\\/projects\\>'
if (mat){
println mat.group(1)
if (mat.group(1) == latestProjectItems) {
println 'no Change'
return false;
} else {
// there's a change
File backup = backupConfig(oldConfig)
def newProjects = "<projects>${latestProjectItems}</projects>"
newXml = mat.replaceFirst(newProjects)
XmlFile newConfig = new XmlFile(oldConfig.file)
FileWriter nw = new FileWriter(newConfig.file)
nw.write(newXml)
nw.close()
println newXml
println 'file updated'
return true
}
}
false
}
void reloadMultiBuildConfig() {
AbstractItem job = findItem('MultiBuild', 'buildAllProjects')
def configXMLFile = job.getConfigFile();
def file = configXMLFile.getFile();
InputStream is = new FileInputStream(file);
job.updateByXml(new StreamSource(is));
job.save();
println "MultiBuild Job updated"
}
if (updateMultiBuildXmlConfigFile()) {
reloadMultiBuildConfig()
}
A slight variant on Wayne Booth's "run every job" approach. After a little head scratching I was able to define a "run every job" job in Job DSL format.
The advantage is that I can maintain my job configuration in version control, e.g.:
job('myfolder/build-all'){
publishers {
downstream('myfolder/job1')
downstream('myfolder/job2')
downstream('myfolder/job2')
}
}
Pipeline Job
When running as a Pipeline job you may use something like:
echo jobNames.join('\n')
jobNames.each {
build job: it, wait: false
}
@NonCPS
def getJobNames() {
def project = Jenkins.instance.getItemByFullName(currentBuild.fullProjectName)
project.parent.items.findAll {
it.fullName != project.fullName && it instanceof hudson.model.Job
}.collect { it.fullName }
}
Script Console
The following code snippet can be used from the Script Console to schedule all jobs in some folder:
import hudson.model.AbstractProject
Jenkins.instance.getAllItems(AbstractProject.class).each {
if(it.fullName =~ 'path/to/folder') {
(it as AbstractProject).scheduleBuild2(0)
}
}
With some modification you'd be able to create a Jenkins shared library method (this requires running outside the sandbox and needs @NonCPS), like:
import hudson.model.AbstractProject
@NonCPS
def triggerItemsInFolder(String folderPath) {
Jenkins.instance.getAllItems(AbstractProject.class).each {
if(it.fullName =~ folderPath) {
(it as AbstractProject).scheduleBuild2(0)
}
}
}
A reference pipeline script to run a parent job that triggers other jobs, as suggested by @WayneBooth:
pipeline {
agent any
stages {
stage('Parallel Stage') {
parallel {
stage('Parallel 1') {
steps {
build(job: "jenkins_job_1")
}
}
stage('Parallel 2') {
steps {
build(job: "jenkins_job_2")
}
}
}
}
}
The best way to run an ad-hoc command like that is the Script Console (found under Manage Jenkins).
The console allows running Groovy scripts that control Jenkins functionality. The documentation can be found in the Jenkins Javadoc.
A simple script that immediately triggers all Multi-Branch Pipeline projects under the given folder structure (in this example folder/subfolder/projectName):
import org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
import hudson.model.Cause.UserIdCause
Jenkins.instance.getAllItems(WorkflowMultiBranchProject.class).findAll {
return it.fullName =~ '^folder/subfolder/'
}.each {
it.scheduleBuild(0, new UserIdCause())
}
The script was tested against Jenkins 2.324.

Resources