Jenkins pipeline mandatory text parameters in input step

We are building several pipeline tasks in Jenkins to make life easier on some deploy jobs. One of them requires manual input of several parameters. For that we are using an input step like this:
def userInput = input(
    message: 'Select deployment version and input deployment code:',
    parameters: [[$class: 'TextParameterDefinition', defaultValue: '', description: 'Clarive code', name: 'code']]
)
Those parameters are mandatory, but we couldn't find any property in the documentation that makes a TextParameterDefinition mandatory. For now we re-run the step until all parameters are non-null, but that solution is confusing for the user.
Is there another way to handle mandatory parameters that avoids running the same step in a loop?

There was a plugin that did that, but it is no longer maintained.
There's an open bug to support it.
In the meantime, what you can do is check whether your parameter is present and, if not, throw an error:
if (params.SomeParam == null) {
    error("Build failed because of this and that..")
}
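Applied to the input step from the question, a minimal sketch might look like this (note that when input defines a single parameter, it returns the entered value directly rather than a map):

def userInput = input(
    message: 'Select deployment version and input deployment code:',
    parameters: [[$class: 'TextParameterDefinition', defaultValue: '', description: 'Clarive code', name: 'code']]
)
// Fail fast instead of looping when the mandatory value is empty.
if (!userInput?.trim()) {
    error('Build failed: the deployment code is mandatory.')
}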

Related

Jenkins declarative pipeline dynamic choice parameter doesn't get updated after first build

I'm trying to convert old Jenkins jobs to declarative pipeline code.
When trying to use the choice parameter in the script, I implement a function that should return updated values; if the values are not the most recent ones, the job will fail.
The problem is that after the first build, which looks OK, the values stay static; they don't get updated afterwards, which, as I said above, fails my job.
It's as if the function I wrote runs only once, at the first build, and never runs again.
I've tried writing the code in a way that the output will be sent to a file and be read from it - thus maybe the function will get updated by getting the text from a file - that didn't work.
I've tried looking at the Jenkins documentation / a lot of other threads and didn't find a thing.
My code looks like this:
def GetNames() {
    def workspace = "..."
    def proc = "${workspace}/script.sh list".execute()
    return proc.text
}
${workspace} - just my workspace path; it doesn't matter here.
script.sh - a script that 100% works and is tested.
return proc.text - does return the values; I've tested it in the Jenkins script console and the values do come back properly and updated.
My parameters section:
parameters {
    choice(name: 'Names', choices: GetNames(), description: 'The names')
}
On the first build I get 5 names, which is good because those are the updated values. On the second build I know there are 10 values, but I still get the 5 from before, and every build after that still shows the same 5 names; they do not get updated at all, and the function does not get triggered again.
It seems like this is a long-standing issue that still hasn't been fixed; the only thread I found that mentions it is Jenkins dynamic declarative pipeline parameters, but the solution there is scripted, not declarative.
Well, I've finally figured it out; the solution is to combine the declarative and scripted ways
(using the Active Choices parameter plugin).
node {
    properties([
        parameters([
            [$class: 'ChoiceParameter',
             choiceType: 'PT_SINGLE_SELECT',
             description: 'The names',
             filterLength: 1,
             filterable: true,
             name: 'Name',
             randomName: 'choice-parameter-5631314439613978',
             script: [
                 $class: 'GroovyScript',
                 script: [
                     classpath: [],
                     sandbox: false,
                     script: '''
                         some code.....
                         return something'''
                 ]
             ]
            ]
        ])
    ])
}
pipeline {
    agent any
    ...
This way, the script part of the Active Choices parameter runs every time you load the Build with Parameters page, and the values come back updated every time.

Jenkins declarative pipeline: How to configure the Klocwork result display on the job page

I am creating a pipeline using the declarative flavour, with Klocwork steps enclosed in a klocworkWrapper where I can define the Klocwork setup:
klocworkWrapper(installConfig: 'My Klocwork', ltoken: "${HOME}/.klocwork/ltoken", serverConfig: 'Klocwork#XYZ', serverProject: 'S3cr3TPr0j3ct') {
    klocworkBuildSpecGeneration([additionalOpts: '', buildCommand: 'make', ignoreErrors: true, output: 'kwinject.out', tool: 'kwinject'])
    klocworkIntegrationStep1([additionalOpts: '', buildSpec: 'kwinject.out', disableKwdeploy: false, ignoreCompileErrors: true, importConfig: '', incrementalAnalysis: false, tablesDir: 'kwtables'])
    klocworkIntegrationStep2([additionalOpts: '', buildName: "${JOB_BASE_NAME}_${BUILD_NUMBER}", tablesDir: 'kwtables'])
}
OK, the analysis is launched, and I can see the results on the Klocwork server web interface.
But I cannot find a way to display the resulting diagrams on the Jenkins web interface, even when using the pipeline script generator.
Unless I am totally wrong, I think I should use klocworkQualityGateway, but the generated script snippet is not correct.
Once copied inside the wrapper, it fails, complaining about a missing enableXYGateway or gatewayXYConfig property.
For example this line :
klocworkQualityGateway([enableCiGateway: false, enableServerGateway: true, gatewayServerConfigs: [[conditionName: 'Issues', jobResult: 'failure', query: 'state:+Status,Fix', threshold: '1']]])
fails with this error message:
WorkflowScript: 92: Missing required parameter: "gatewayCiConfig" @ line 92, column 1.
klocworkQualityGateway([enableCiGateway: false, enableServerGateway: true, gatewayServerConfigs: [[conditionName: 'Issues', jobResult: 'failure', query: 'state:+Status,Fix', threshold: '1']]])
I really cannot find a way to make it work, and I guess I've taken a wrong turn somewhere, so any help would be appreciated.
Thanks for your help and best regards
J-L
Well, after a fruitful discussion with the plugin maintainer (M. Baron), it appears that there is currently no simple and direct way to display Klocwork results on a pipeline job page.
He said :
This step doesn't have a native pipeline interface and a few people
have tried, but haven't had much success with workarounds to use this
in a pipeline.
The simplest thing to do seems to be to trigger a freestyle job that does only that, as sketched below.
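A minimal sketch of that workaround (the downstream job name is hypothetical):

stage('Klocwork results') {
    // Delegate reporting to a freestyle job configured with the Klocwork
    // publisher, so the result diagrams show up on that job's page.
    build job: 'klocwork-report', wait: true
}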
As far as I understand, a new plugin version with full pipeline support will replace the current one.
So I think this discussion can be closed.

Jenkins: update parameters via Groovy script

I have a text file on the server, e.g. /var/lib/jenkins/.../myChoices.txt:
FirstChoice,SecondChoice
As the file will be updated from time to time, I want the script to update the parameters every time I click "Build with Parameters".
But my code only works when I build the job, i.e. it does not update in real time.
def getMyChoices() {
    List<String> choices = Arrays.asList(readFileFromWorkspace('/var/lib/jenkins/.../myChoices.txt').split(','))
    return choices
}

job(jobName) {
    description("Deploy something based on choice.")
    parameters {
        ...
        ...
        choiceParam('EB_ACTIVE_ENV_NAME', getMyChoices(), '')
    }
}
I do not want to use the Hudson plugin either, due to vulnerability concerns.
Groovy scripts are executed only when the job is run; hence, until the job runs, the parameters are not refreshed.
The only solution available is to run this job at regular intervals with an additional flag that refreshes the parameters alone and then exits; one way to do this is sketched below.
This way, whenever you click Build with Parameters, you will have the latest parameters that exist in the file.
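One way to realize that (a minimal sketch; all names and the schedule are hypothetical) is a small seed pipeline that periodically re-runs the Job DSL script, so the generated job's choiceParam is rebuilt from the file's current contents:

pipeline {
    agent any
    // Re-run the seed periodically so the generated job's parameter list
    // tracks the file between builds.
    triggers { cron('H/15 * * * *') }
    stages {
        stage('Regenerate jobs') {
            steps {
                // 'dsl/deployJob.groovy' is a hypothetical path to the Job DSL
                // script from the question (the one calling getMyChoices()).
                jobDsl targets: 'dsl/deployJob.groovy', removedJobAction: 'IGNORE'
            }
        }
    }
}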
It is required to regenerate the job in order to refresh the parameters.
What I would do is create a job that regenerates the jobs with the jobDsl step whenever there is a change in the repository where myChoices.txt is versioned.
Here is an example use of jobDsl:
jobDsl removedJobAction: 'DELETE',
       removedViewAction: 'DELETE',
       targets: targetFile,
       unstableOnDeprecation: true,
       additionalParameters: [
           pipelineJobs: arrFiles,
           props: [
               basePath: destination,
               gitRemoteUrl: config.gitRemoteUrl,
               gitConfigJenkinsBranch: config.gitConfigJenkinsBranch,
               localPath: config.localPath ?: ''
           ]
       ]
I use it with a shared library I created that lets me abstract jobDsl and write only pipeline DSL: https://github.com/SAP/jenkins-pipelayer/. There are restrictions to this lib, though: because I parse the pipeline DSL, getMyChoices() would not be evaluated in the current version of the lib.

Pipeline jobs - pass parameters upstream?

TL;DR: Obviously in a Jenkins pipeline job you can easily pass parameters downstream. What I want to know is whether you can pass them upstream.
Use case:
We have three jobs: job_one, job_two, and job_three. They are frequently run separately when only one stage is needed, but increasingly often we'd like to run all three back to back.
The first and second rely on parameters you can define ahead of time, but the third needs a parameter that is generated from the second job (a file name whose structure is unknown until job_two runs).
I have built umbrella, which calls something like the following for each job. In this case, PARAM1 is populated because umbrella runs as "Build with parameters".
build job: 'job_one', parameters: [[$class: 'StringParameterValue', name: 'PARAM1', value: "$PARAM1"]]
All fine and dandy, I can then use PARAM1 in job_one just fine.
The Problem:
For job_three I need the parameter filename. This is generated within job_two, and therefore from what I can tell is inaccessible because job_three has no idea what job_two is doing.
In an ideal world I would just have job_two pass the filename to the umbrella job, which would feed it back into job_three. Therefore, how can I pass the generated filename back up to the umbrella job?
I'm picturing a final script something like this;
node('on-demand-t2small') {
    stage ('Build 1') {
        build job: 'job_one', parameters: [[$class: 'StringParameterValue', name: 'PARAM1', value: "$PARAM1"]]
    }
    stage ('Build 2') {
        build job: 'job_two', parameters: [[$class: 'StringParameterValue', name: 'PARAM2', value: "$PARAM2"]]
        // somehow get the filename parameter out of job_two here so that I can pass it to job_three...
    }
    stage ('Build 3') {
        build job: 'job_three', parameters: [[$class: 'StringParameterValue', name: 'filename', value: "$filename"]]
    }
}
Additional Notes:
I recognize that the first question will be "why not have job_two trigger job_three?" I can't set the system up this way, for two reasons:
job_two needs to be able to run without triggering job_three, and job_three can't always require job_two's input to run.
I debated having the umbrella kick off job_two and then having a clause in job_two that would trigger job_three ONLY IF it had been started by the umbrella, but as far as I can tell this would limit feedback in the umbrella job; you wouldn't know whether job_two failed on its own or because job_three (as a part of job_two) failed. If I'm wrong about this assumption, please let me know.
I had thought about setting the parameter as an environment variable, but I believe that's node-specific, and I can't guarantee both jobs will run on the same node, so that seemed not to be the solution.
Umbrella is a pipeline job written in groovy, the other three may be pipeline or freestyle jobs, if that matters.
I would appreciate detailed answers where possible, I'm still new to Groovy, Jenkins, and coding in general.
It should be as simple as this:
stage ('Build 3') {
    res = build job: 'job_three', parameters: [[$class: 'StringParameterValue', name: 'filename', value: "$filename"]]
    echo "$res.buildVariables.filename"
}
Assuming that in job_three you do:
env.filename = "cool new file name"
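The same mechanism works in the umbrella job: capture the result of the job_two build and feed its exported variable into job_three. A minimal sketch, assuming job_two sets env.filename the same way:

stage ('Build 2') {
    // build returns a handle to the downstream run; buildVariables exposes
    // the env values that build exported.
    def res = build job: 'job_two', parameters: [[$class: 'StringParameterValue', name: 'PARAM2', value: "$PARAM2"]]
    filename = res.buildVariables.filename
}
stage ('Build 3') {
    build job: 'job_three', parameters: [[$class: 'StringParameterValue', name: 'filename', value: "$filename"]]
}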

pass parameter to pipeline script

I'm trying to switch from using a freestyle Jenkins build to a pipeline project.
I like many things about it, but I wish I could use the multibranch pipeline, as that matches our company a bit better; at present, though, that is not an option.
What we do currently is create a new build job with the name <project name> - <environment>.
So I need to keep that going for now. I have a basic outline of a script that I can either copy and paste into the box or, even better, use as a Jenkinsfile from SCM.
I like the latter the most, and that is what I'm currently using on my local Jenkins.
If I hard-code the solution file and the environment I want in my script in SCM, it builds fine.
I don't like that option, because it means I'd have lots of scripts with similar names, each changing only the branch. If I add build parameters for the solution name and environment, I can easily make the script handle those as well; however, what I don't like is that when I click the build button it asks me to confirm those parameters.
So is there a way to hard-code those parameters, or a plugin that lets me add them as constants or environment variables, so they are just part of the job?
EDIT
As an update, here is what I tried yesterday and got working for our needs. First, I installed the Multibranch Defaults plugin and followed the steps outlined on its GitHub page. With that installed and configured, I added a new multibranch project and pointed it at my Git repository. It found 2 branches (as expected) and used the default config file. So far this seems like it will work for about 90% of our cases. The only problem I can see is if some people had custom steps in their existing freestyle projects. But for now those can always stay freestyle projects.
If I understand you correctly, what you're looking for is a way to supply default parameters to your build.
In one of my builds I do something like this:
stage ('Setup') {
    try {
        timeout(time: 1, unit: 'MINUTES') {
            userInput = input message: 'Configure build parameters:', ok: '', parameters: [
                [$class: 'hudson.model.ChoiceParameterDefinition', choices: 'staging\nproduction\nfree', description: 'Choose build flavor', name: 'BUILD_FLAVOR'],
                [$class: 'hudson.model.ChoiceParameterDefinition', choices: 'Debug\nRelease', description: 'Choose build type', name: 'BUILD_TYPE'],
                [$class: 'hudson.model.ChoiceParameterDefinition', choices: 'NONE\ndevelop\nmaster\nrelease/core_0.5.0\nrelease/core_0.1.8.1\nrelease/core_0.1.9', description: 'Product core branch', name: 'CORE_BRANCH'],
                [$class: 'hudson.model.ChoiceParameterDefinition', choices: '4.1.12\n4.1.11\n4.1.10\n4.1.9\n4.1.8\n4.1.4\n3.5.5\n3.1.8\ncore\nOldVersion', description: 'Version Name', name: 'VERSION_NAME'],
                [$class: 'hudson.model.ChoiceParameterDefinition', choices: 'origin/develop\norigin/hotfix/4.1.11\norigin/release/4.1.8\norigin/hotfix/4.1.7\norigin/hotfix/4.1.9\norigin/hotfix/4.1.10\norigin/release/4.1.6\norigin/release/4.1.5\norigin/hotfix/3.5.5', description: 'Git branch', name: 'GIT_BRANCH'],
                [$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Enable Gradle debug?', name: 'DEBUG']
            ] // According to Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-26143
        }
    } catch (err) {
        userInput = [BUILD_FLAVOR: 'staging', BUILD_TYPE: 'Debug', CORE_BRANCH: 'NONE', VERSION_NAME: '4.1.12', GIT_BRANCH: 'origin/develop'] // if an error is caught, set these default values
    }
}
Explanation:
I'm using the try/catch method to handle exceptions. Within the "try" section, I configure the question and the possible answers to select from, which are displayed to the user who starts the build.
Then, in the "catch" section, I put the default values I want each variable to take in case an exception is caught, which means that 1 minute has passed without the relevant items being selected.
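Downstream stages can then read the selections the same way regardless of which branch set them, since input returns a map when several parameters are defined (a minimal sketch):

stage ('Build') {
    // userInput is a map of parameter name -> selected (or default) value.
    echo "Building flavor ${userInput['BUILD_FLAVOR']} (${userInput['BUILD_TYPE']})"
}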
Here are some useful links:
Pipeline: How to manage user inputs
pipeline-plugin/TUTORIAL.md
