Why does this Jenkins Pipeline code not succeed?

This is my situation: one of my projects consists of multiple subprojects, roughly separated as frontend and backend, which live at different locations in a Subversion repository.
I extracted the checkout step into a function that is already properly parameterized for the checkout:
def svn(String url, String dir = '.') {
    checkout([
        $class: 'SubversionSCM',
        locations: [[
            remote: url,
            credentialsId: '...',
            local: dir,
        ]],
        workspaceUpdater: [$class: 'UpdateUpdater']
    ])
}
That way, I was able to do the checkout by this means (simplified):
stage "1. Build"
parallel (
"Backend": { node {
svn('https://svn.acme.com/Backend/trunk')
sh 'gradle build'
}},
"Frontend": { node {
svn('https://svn.acme.com/Frontend/trunk')
sh 'gradle build'
}}
)
Checking out both at the very same time led to Jenkins having trouble with changeset XML files, as far as I could guess from the stack traces.
Since I also want to reuse each project's name and its SVN URL, I moved on to iterating over a map, checking out consecutively, and just stashing the files in the first stage for the following parallel build-only stage:
stage "1. Checkout"
node {
[
'Backend': 'https://svn.acme.com/Backend/trunk',
'Frontend': 'https://svn.acme.com/Frontend/trunk',
].each { name, url ->
// Checkout in subdirectory
svn(url, name)
// Unstash by project name
dir(name) { stash name }
}
}
stage "2. Build"
// ...
Somehow Jenkins Pipeline does not support this, so I used a simple for-in loop instead:
node {
    def projects = [
        'Backend': '..'
        // ...
    ]
    for ( project in projects ) {
        def name = project.getKey()
        def url = project.getValue()
        svn(url, name)
        dir(name) { stash name }
    }
    project = projects = name = url = null
}
That doesn't work either and exits the build with an exception: java.io.NotSerializableException: java.util.LinkedHashMap$Entry. As you can see, I set every variable to null at the end, because I read somewhere that this prevents that behaviour. Can you help me fix this issue and explain what exactly is going on here?
Thanks!

I think it is a known Jenkins bug with the for-in loop:
https://issues.jenkins-ci.org/browse/JENKINS-27421
But there is also a known bug for .each-style loops:
https://issues.jenkins-ci.org/browse/JENKINS-26481
So currently it seems you cannot iterate over Maps in Jenkins Pipelines. The underlying reason for your exception is that the CPS engine has to serialize all live local variables whenever it persists the pipeline state, and java.util.LinkedHashMap$Entry is not Serializable; setting the variables to null afterwards doesn't help, because the state is saved while the loop is still running. I suggest creating a list as a workaround and iterating over it with a classic C-style loop:
def myList = ["Backend|https://svn.acme.com/Backend/trunk", "Frontend|https://svn.acme.com/Frontend/trunk"]
for (int i = 0; i < myList.size(); i++) {
    // get the current list item (myList[i]) and split it at the pipe;
    // tokenize() treats every character of its argument as a delimiter, so the pipe needs no regex escaping (unlike split())
    def (name, url) = myList[i].tokenize('|')
    // do operations
    svn(url, name)
    dir(name) { stash name }
}
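If you prefer to keep the map, another workaround (a minimal sketch, assuming the svn helper and URLs from the question) is to copy the keys into a serializable List and index into the map, so no non-serializable Map.Entry is held across pipeline steps:
def projects = [
    'Backend': 'https://svn.acme.com/Backend/trunk',
    'Frontend': 'https://svn.acme.com/Frontend/trunk',
]
// keySet() returns a view backed by the map; copying it into a List
// gives the CPS engine something it can serialize between steps
def names = projects.keySet() as List
for (int i = 0; i < names.size(); i++) {
    def name = names[i]
    def url = projects[name]
    svn(url, name)
    dir(name) { stash name }
}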

Related

Jenkins - Populate choice parameter with options from a list variable in a different groovy file

How can I declare a choice parameter for a declarative pipeline, the choices for which are read from a list in another groovy file?
l.groovy
opts = ['a','b','c','d']
main.groovy
pipeline {
    parameters {
        choice (
            name: 'CHOICE_LIST',
            choices: config.opts.keySet() as String[],
            description: 'Make a choice'
        )
        ...
    }
    ...
}
Hi,
just joining your list with .join('\n') should do the trick:
choice (
    name: 'CHOICE_LIST',
    choices: config.opts.join('\n'),
    description: 'Make a choice'
)
Why?
ChoiceParameterDefinition requires a delimited string:
https://issues.jenkins-ci.org/browse/JENKINS-26143
UPDATE
"It's the problem of importing the config that isn't working. How should I import from another groovy file? That's the bigger issue." – cyberbeast
Add the other Groovy file as a shared library to the pipeline under the job configuration, then create a reference to the library in your job. In my example the Groovy file is called PreBuild and contains a function getBranchNames() in which I get all branches from an SVN repo.
pipeline {
    agent any
    libraries {
        lib('PreBuild')
    }
    stages {
        stage('Set Parameters') {
            steps {
                timeout(time: 30, unit: 'SECONDS') {
                    script {
                        def INPUT_PARAMS = input message: 'Please Provide Parameters', ok: 'Next', parameters: [choice(name: 'Branch_Choice', choices: PreBuild.getBranchNames(), description: 'Which Branch?')]
                    }
                }
                ...
The corresponding PreBuild.groovy file looks like this:
import groovy.util.XmlSlurper

def getBranchNames() {
    def svn = bat(returnStdout: true, script: 'svn ls https://svn-repro --xml --username John --password Doe --non-interactive --trust-server-cert').trim()
    // drop the first line (the command echoed by the Windows shell) and re-join the remaining XML
    def result = svn.readLines().drop(1).join(" ")
    def slurper = new XmlSlurper()
    def xml = slurper.parseText(result)
    def name = new ArrayList()
    name.addAll(xml.'*'.'*'.'name')
    return name.join('\n')
}
I parse the svn command output into an ArrayList and return it as a joined string back to my pipeline job.
Be aware that your other Groovy file has to be in SCM too. The library repo needs a special folder structure; find more information here: https://devopscube.com/jenkins-shared-library-tutorial/
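For reference, a minimal sketch of that layout (only the vars/ directory name is prescribed by Jenkins; the repository name is illustrative):
PreBuild-library/            (the repository configured as the 'PreBuild' library)
└── vars/
    └── PreBuild.groovy      (global variable exposing getBranchNames())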

Looking for a Jenkins plugin to allow per-branch default parameter values

I have a multi-branch pipeline job set to build by Jenkinsfile every minute if new changes are available from the git repo. I have a step that deploys the artifact to an environment if the branch name is of a certain format. I would like to be able to configure the environment on a per-branch basis without having to edit Jenkinsfile every time I create a new such branch. Here is a rough sketch of my Jenkinsfile:
pipeline {
    agent any
    parameters {
        string(description: "DB name", name: "dbName")
    }
    stages {
        stage("Deploy") {
            steps {
                deployTo "${params.dbName}"
            }
        }
    }
}
Is there a Jenkins plugin that will let me define a default value for the dbName parameter per branch on the job configuration page? Ideally something like the mock-up below (image not included): the values can be reordered to set priority, the plugin stops checking for matches after the first one, and matching can be exact or regex.
If there isn't such a plugin currently, please point me to the closest open-source one you can think of. I can use it as a basis for coding a custom plugin.
A possible plugin you could use as a starting point for a custom plugin is the Dynamic Parameter Plugin.
Here is a workaround:
Using the Jenkins Config File Provider plugin, create a JSON config file with the parameters defined per branch. Example:
{
    "develop": {
        "dbName": "test_db",
        "param2": "value"
    },
    "master": {
        "dbName": "prod_db",
        "param2": "value1"
    },
    "test_branch_1": {
        "dbName": "zss_db",
        "param2": "value2"
    },
    "default": {
        "dbName": "prod_db",
        "param2": "value3"
    }
}
In your Jenkinsfile:
final commit_data = checkout(scm)
BRANCH = commit_data['GIT_BRANCH']
configFileProvider([configFile(fileId: '{Your config file id}', variable: 'BRANCH_SETTINGS')]) {
    def config = readJSON file: "$BRANCH_SETTINGS"
    def branch_config = config."${BRANCH}"
    if (branch_config) {
        echo "using config for branch ${BRANCH}"
    }
    else {
        branch_config = config.default
    }
    echo branch_config.'dbName'
}
You can then use branch_config.'dbName', branch_config.'param2', etc. You can even assign it to a global variable and use it throughout your pipeline.
The config file can easily be edited via the Jenkins UI (provided by the plugin) to provision for new branches/params in the future. This doesn't need access to any non-sandbox methods.
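A minimal sketch of that global-variable idea (the binding name BRANCH_CONFIG and the file id are illustrative; env.BRANCH_NAME is assumed to be provided by a multibranch job):
BRANCH_CONFIG = null  // script-level binding, visible to later stages

node {
    configFileProvider([configFile(fileId: 'branch-settings', variable: 'BRANCH_SETTINGS')]) {
        def config = readJSON file: "$BRANCH_SETTINGS"
        // fall back to the "default" block when the branch has no entry
        BRANCH_CONFIG = config."${env.BRANCH_NAME}" ?: config.default
    }
    echo "deploying to ${BRANCH_CONFIG.dbName}"
}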
Not really an answer to your question, but possibly a workaround...
I don't know what the rest of your parameter list looks like, but if it is a static list, you could potentially have your static list with a "use Default" option as the first one.
When the job is run, if the value is "use Default", then gather the default from a file stored in the SCM branch and use that; see the sketch below.
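A minimal sketch of that idea in a declarative pipeline; the sentinel value and the per-branch file name (.db-default) are made up for illustration:
pipeline {
    agent any
    parameters {
        choice(name: 'dbName', choices: ['use Default', 'test_db', 'prod_db'],
               description: 'DB name, or "use Default" to read it from a file in the branch')
    }
    stages {
        stage('Deploy') {
            steps {
                script {
                    def dbName = params.dbName
                    if (dbName == 'use Default') {
                        // .db-default is a hypothetical one-line file committed per branch
                        dbName = readFile('.db-default').trim()
                    }
                    echo "Deploying with dbName=${dbName}"
                }
            }
        }
    }
}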

How to run downstream job conditionally based on a parameter in Jenkins pipeline?

Following on from my question How to trigger parameterized build on successful build in Jenkins?
I would like to invoke a downstream project, but only if a boolean parameter is set to true. Is this possible? My pipeline looks like this:
node {
    try {
        echo "ConfigFilePath: ${ConfigFilePath}"
        echo "Delete VM on Successful Build: ${DeleteOnSuccess}"
        stage('checkout') {
            deleteDir()
            git 'http://my.git.lab/repo.git'
        }
        stage('deploy') {
            bat 'powershell -nologo -file BuildMyVM.ps1 -ConfigFilePath "%ConfigFilePath%" -Verbose'
        }
        stage('test') {
            // functional tests go here
        }
    }
    catch (e) {
        // exception code
    } finally {
        // finally code
    }
} //node
stage('delete') {
    if(DeleteOnSuccess)
    {
        bat 'SET /p VM_NAME=<DeleteVM.txt'
        echo "Deleting VM_NAME: %VM_NAME%"
        def job = build job: 'remove-vm', parameters: [[$class: 'StringParameterValue', name: 'VM_NAME', value: '${VM_NAME}']]
    }
}
I get this error on the delete stage:
Required context class hudson.FilePath is missing.
Perhaps you forgot to surround the code with a step that provides this, such as: node
If I wrap the above in a node, then the parameter values are lost. If I put the delete stage in the main node, then it takes up two executors, which I'm trying to avoid because it will result in some deadlock conditions.
The problem is that actually running a script needs a node to run on, so in your case the cause of the error is that you try to run a bat command outside of a node context:
node {
    ...
}
stage('delete') {
    if(DeleteOnSuccess)
    {
        bat 'SET /p VM_NAME=<DeleteVM.txt' // <- this is actually causing the error
        echo "Deleting VM_NAME: %VM_NAME%"
        def job = build job: 'remove-vm', parameters: [[$class: 'StringParameterValue', name: 'VM_NAME', value: '${VM_NAME}']]
    }
}
You can fix this by also wrapping that part inside a node, either by putting it inside the first node or by adding a new one, depending on what you want; a sketch of the second option follows.
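A minimal sketch of the separate-node option; note that a fresh node allocation gets a fresh workspace, so DeleteVM.txt has to be carried over with stash/unstash (the stash name is illustrative):
node {
    // ... checkout/deploy/test stages from the question ...
    stash includes: 'DeleteVM.txt', name: 'vm-info'
}
stage('delete') {
    if (DeleteOnSuccess == "true") {
        node {
            unstash 'vm-info'
            bat 'SET /p VM_NAME=<DeleteVM.txt'
        }
    }
}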
Besides that, if the DeleteOnSuccess variable is a build parameter, it will be a string. I am not sure, but I think this is because parameters are injected as environment variables, which are also strings (even if the parameter is of type BooleanParameter; I guess that is only a UI thing, so it shows up as a checkbox). You can check that by echoing DeleteOnSuccess.class, which will tell you its class. Because any non-empty string is truthy in Groovy,
if(DeleteOnSuccess) { ... }
will always run the conditional block. You can fix this by either converting it to a bool using the toBoolean() extension method, or by checking it against the string "true": DeleteOnSuccess == "true", like you did.
The extension method has the advantage that it will also accept the values "1" and "True" as true.
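A small sketch of the pitfall, runnable in a plain Groovy console (no Jenkins needed):
def DeleteOnSuccess = "false"       // build parameters arrive as strings
if (DeleteOnSuccess) {
    println "runs anyway: any non-empty string is truthy"
}
assert !DeleteOnSuccess.toBoolean() // "false" -> false
assert "True".toBoolean()           // case-insensitive
assert "1".toBoolean()              // "1" also counts as true
assert DeleteOnSuccess != "true"    // the string comparison works too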

Multi branch Pipeline plugin load multiple jenkinsfile per branch

I am able to load a Jenkinsfile automatically through the multibranch pipeline plugin, with the limitation of only one Jenkinsfile per branch.
I have multiple Jenkinsfiles per branch which I want to load. I have tried the method below, creating a master Jenkinsfile and loading the specific files, but it merges 1.Jenkinsfile and 2.Jenkinsfile into one pipeline:
node {
    git url: 'git@bitbucket.org:xxxxxxxxx/pipeline.git', branch: 'B1P1'
    sh "ls -latr"
    load '1.Jenkinsfile'
    load '2.Jenkinsfile'
}
Is there a way I can load multiple Jenkins pipeline code separately from one branch?
I did this by writing a shared library (ref https://jenkins.io/doc/book/pipeline/shared-libraries/) containing the following file (in vars/generateJobsForJenkinsfiles.groovy):
/**
 * Creates jenkins pipeline jobs from pipeline script files
 * @param gitRepoName name of github repo, e.g. <organisation>/<repository>
 * @param filepattern ant style pattern for pipeline script files for which we want to create jobs
 * @param jobPath closure of type (relativePathToPipelineScript -> jobPath) where jobPath is a string formatted as '<foldername>/../<jobname>' (i.e. jenkins job path)
 */
def call(String gitRepoName, String filepattern, def jobPath) {
    def pipelineJobs = []
    def base = env.WORKSPACE
    def pipelineFiles = new FileNameFinder().getFileNames(base, filepattern)
    for (pipelineFile in pipelineFiles) {
        def relativeScriptPath = (pipelineFile - base).substring(1)
        def _jobPath = jobPath(relativeScriptPath).split('/')
        def jobfolderpath = _jobPath[0..-2]
        def jobname = _jobPath[-1]
        echo "Create jenkins job ${jobfolderpath.join('/')}:${jobname} for $pipelineFile"
        def dslScript = []
        // create folders
        for (int i = 0; i < jobfolderpath.size(); i++)
            dslScript << "folder('${jobfolderpath[0..i].join('/')}')"
        // create job
        dslScript << """
            pipelineJob('${jobfolderpath.join('/')}/${jobname}') {
                definition {
                    cpsScm {
                        scm {
                            git {
                                remote {
                                    github('$gitRepoName', 'https')
                                    credentials('github-credentials')
                                }
                                branch('master')
                            }
                        }
                        scriptPath("$relativeScriptPath")
                    }
                }
                configure { d ->
                    d / definition / lightweight(true)
                }
            }
        """
        pipelineJobs << dslScript.join('\n')
        //println dslScript
    }
    if (!pipelineJobs.empty)
        jobDsl sandbox: true, scriptText: pipelineJobs.join('\n'), removedJobAction: 'DELETE', removedViewAction: 'DELETE'
}
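A hypothetical invocation of this step from a seed pipeline (the repository name, file pattern, and path mapping are made up for illustration):
node {
    checkout scm  // the workspace must contain the pipeline script files
    generateJobsForJenkinsfiles('myorg/myrepo', '**/*.Jenkinsfile') { scriptPath ->
        // e.g. 'backend/1.Jenkinsfile' -> job path 'myrepo/backend/1'
        'myrepo/' + scriptPath.replace('.Jenkinsfile', '')
    }
}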
Most likely you want to map old (pre-pipeline) Jenkins jobs that operate on a single branch of some project to a single multibranch pipeline. The appropriate approach would be to create stages that are input-dependent (like asking the user whether he/she wants to deploy to staging or live).
Alternatively, you could just create a new, separate pipeline Jenkins job that references your project's SCM and points to your other Jenkinsfile (one pipeline job for every additional Jenkinsfile).

How to use a FileParameterValue in a jenkins 2 pipeline

How can a file from the current project workspace be passed as a parameter to another project?
E.g. something like:
build job: 'otherproject', parameters: [[$class: 'FileParameterValue', name: 'output.tar.gz', value: ??? ]], wait: false
A java.io.File object can only access files on the master node. So to load the files as java.io.File objects, we use the master node to unstash the required files; then we wrap them as File objects and finally we send them as FileParameterValue objects.
node("myNode") {
sh " my-commands -f myFile.any " // This command create a new file.
stash includes: "*.any", name: "my-custom-name", useDefaultExcludes: true
}
node("master") {
unstash "my-custom-name"
def myFile = new File("${WORKSPACE}/myFile.any")
def myJob = build(job: "my-job", parameters:
[ string(name: 'required-param-1', value: "myValue1"),
new FileParameterValue("myFile.any", myFile, "myFile.any")
], propagate: false)
print "The Job execution status is: ${myJob.result}."
if(myJob.result == "FAILURE") {
error("The Job execution has failed.")
}
else {
print "The Job was executed successfully."
}
}
You could skip the master node if the file that you need to send contains only text:
def myFileContent = readFile("myFile.txt")
FilePath fp = new FilePath(new File("${WORKSPACE}", "myFile.txt"))
if (fp != null) {
    fp.write(myFileContent, null)
}
def file = new File("${WORKSPACE}/myFile.txt")
Then use the file in the FileParameterValue object as usual.
Don't forget to import the FilePath class -> import hudson.FilePath
I've tried this myself recently with little success. There seems to be a problem with this. According to the documentation for class FileParameterValue there is a constructor which accepts an org.apache.commons.fileupload.FileItem, like so:
@DataBoundConstructor
FileParameterValue(String name,
                   org.apache.commons.fileupload.FileItem file)
There is another which expects a java.io.File, like so:
FileParameterValue(String name,
                   File file,
                   String originalFileName)
But since only the former is annotated with @DataBoundConstructor, even when I try to use the latter in a script:
file = new File(pwd(), 'test.txt');
build(
    job: 'jobB',
    parameters: [
        [$class: "FileParameterValue", name: "TEST_FILE", file: file, originalFileName: 'test.txt']
    ]
)
Note that this requires script approval for instantiating java.io.File
... I get the following error:
java.lang.ClassCastException: hudson.model.FileParameterValue.file expects interface org.apache.commons.fileupload.FileItem but received class java.io.File
I understand that only a file uploaded by the user as interactive runtime input provides an object of type org.apache.commons.fileupload.FileItem, so in the end I resorted to archiving the file in the first job and unarchiving it in the downstream job, which got around the problem. It's not ideal, of course, but if you're in a jam it's the quickest way to sort it out; a sketch follows.
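A minimal sketch of that archive/unarchive workaround, assuming the Copy Artifact plugin is installed for the downstream side (job and file names are illustrative):
// upstream job: publish the file as a build artifact instead of a file parameter
node {
    archiveArtifacts artifacts: 'output.tar.gz'
    build job: 'otherproject', wait: false
}
// downstream job ('otherproject'): fetch the artifact back into the workspace
node {
    copyArtifacts projectName: 'upstream-job', filter: 'output.tar.gz', selector: lastSuccessful()
    sh 'tar -tzf output.tar.gz'
}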
You can't. Here is the Jenkins bug; please update this thread once the bug is fixed. In the meantime, log in and vote for this issue, and ask for documentation of pipeline build job parameters to be added.
https://issues.jenkins-ci.org/browse/JENKINS-27413
Linked to from here: http://jenkins-ci.361315.n4.nabble.com/pipeline-build-job-with-FileParameterValue-td4861199.html
Here is the documentation for the different parameter types (link to FileParameterValue):
http://javadoc.jenkins.io/hudson/model/FileParameterValue.html
Try to pass an instance of FileParameterValue to parameters (it worked for me):
import hudson.model.*
def param_file = new File("path/to/file")
build job: 'otherproject', parameters: [new FileParameterValue('file_param_name', param_file, 'original_file_name')], wait: false
This uses the Jenkins File Parameter plugin, which supports (i) base64 files and (ii) stashed files.
The following is an example of caller and callee pipeline scripts on a Windows agent.
Caller
pipeline {
    agent any
    stages {
        stage ('Call Callee Job') {
            steps {
                script {
                    def callee_job = build(job: 'test-callee', parameters: [
                        base64File(name: 'smallfile', base64: Base64.encoder.encodeToString('small file 123'.bytes)),
                        stashedFile(name: 'largefile', file: getFileItem())
                    ], propagate: true)
                }
            }
        }
    }
}

// Read the file and convert it from a java.io.File object to an Apache Commons DiskFileItem
@NonCPS
def getFileItem() {
    def largeFileObject = new File(pwd(), "filename.apk")
    def diskFileItem = new org.apache.commons.fileupload.disk.DiskFileItem("fieldNameFile", "application/vnd.android.package-archive", false, largeFileObject.getName(), (int) largeFileObject.length(), largeFileObject.getParentFile())
    def inputStream = new FileInputStream(largeFileObject)
    def outputStream = diskFileItem.getOutputStream()
    org.apache.commons.io.IOUtils.copy(inputStream, outputStream)
    inputStream.close()
    outputStream.close()
    return diskFileItem
}
Callee
pipeline {
    agent any
    parameters {
        base64File(name: 'smallfile')
        stashedFile(name: 'largefile')
    }
    stages {
        stage ('Print params') {
            steps {
                echo "params.smallfile: ${params.smallfile}" // gives the base64-encoded value
                echo "params.largefile: ${params.largefile}" // gives null
                withFileParameter('smallfile') {
                    echo "$smallfile"     // gives a tmp file path in the callee job workspace
                    bat "more $smallfile" // reads the tmp file to print its content
                }
                unstash 'largefile'
                bat 'dir largefile' // shows largefile in the callee job workspace directory
            }
        }
    }
}
