Retrieving an injected build parameter in Pipeline - Jenkins

I'm specifically working on the example below, but I'm guessing the question is a bit more general.
https://github.com/jenkinsci/poll-mailbox-trigger-plugin
It states that the following build parameters are injected into the job:
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                echo env.pmt_content
                echo ${pmt_content}
            }
        }
    }
}
The above methods don't seem to work for me:
How is it possible to retrieve an injected build parameter in a pipeline job?

As explained in the pipeline-plugin tutorial, you can use params to access the job's build parameters.
print params.pmt_content
print "Content is ${params.pmt_content}"

Related

Jenkins question - Adding a post section to a scripted pipeline & difference between methods

I have 2 questions regarding Jenkins.
1. What is the difference between the 2 methods?
Method 1
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}
Method 2
node ("jenkins-nodes") {
    stage("git clone") {
        echo 'Hello World'
    }
}
As I understand in the first method I can add Post section that will be running regardless of the result of the job. I wish to add the same post section for the second method, but it is not working. Any ideas?
What is the difference between the 2 methods?
As Noam Helmer wrote, pipeline{} is declarative syntax and node{} is scripted syntax.
In general I recommend always using pipeline{}, as it makes common tasks easier to write, and visualization using the Blue Ocean plugin works best with declarative pipelines.
When a declarative pipeline becomes too inflexible, you can insert scripted blocks using the script{} step:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                script {
                    echo 'Hello World'
                }
            }
        }
    }
}
A cleaner approach is to define a function, which is scripted by definition, and use it as a custom step in the declarative pipeline. Note that this works even without script{}!
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                myStep 42
            }
        }
    }
}

void myStep( def x ) {
    echo "Hello World $x" // prints "Hello World 42"
}
In complex pipelines that use lots of custom code, I usually have one function per stage. This keeps the pipeline{} clean and makes it easy to see the overall structure of the pipeline, without script{} clutter all over the place.
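For illustration, a minimal sketch of that structure (the stage and function names below are made up) could look like this:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                buildApp()   // hypothetical custom step defined below
            }
        }
        stage('Test') {
            steps {
                runTests()   // hypothetical custom step defined below
            }
        }
    }
}

// One scripted function per stage keeps the pipeline{} block readable
void buildApp() {
    echo 'Building...'
}

void runTests() {
    echo 'Testing...'
}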
As I understand in the first method I can add Post section that will
be running regardless of the result of the job. I wish to add the same
post section for the second method, but it is not working. Any ideas?
post{} is only available in a declarative pipeline, not in a scripted pipeline or in a scripted section of a declarative pipeline. You can use try{} catch{} finally{} instead: catch{} runs only when an error occurs, finally{} always runs, and you can use either or both of catch{} and finally{}.
node ("jenkins-nodes") {
    stage("git clone") {
        echo 'Hello World'
        try {
            // some steps that may fail
        }
        catch( Exception e ) {
            echo "An error happened: $e"
            // Rethrow the exception to let the build fail.
            // If you remove the throw, the error would be ignored by Jenkins.
            throw e
        }
        finally {
            echo "Cleaning up some stuff"
        }
    }
}

Execute a stage if environment variable contains specific substring

I have a Jenkins declarative pipeline in which I would like to run a stage only if a specific environment variable contains a specific substring (not fully equal to it, just contains it).
Does anyone have an idea how I can implement this (maybe using the when condition, if possible)?
Thanks in advance,
Alon
As you mentioned, in a declarative pipeline you can use the when directive to establish a condition under which the stage will be executed.
Among the built-in condition options like triggeredBy, branch and tag there is the generic expression option, which allows you to run any Groovy code and calculate the relevant Boolean value according to your needs.
So for your case you can just use the Groovy contains method to achieve what you want, something like:
pipeline {
    agent any
    stages {
        stage('Conditional Stage') {
            when {
                expression { return env.MyParameter.contains('MySubstring') }
            }
            steps {
                echo "Running the conditional stage"
            }
        }
    }
}
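Note that MyParameter is just a placeholder name; if the variable might be unset, a slightly more defensive variant (a sketch, not part of the original answer) avoids a NullPointerException:
when {
    // Fall back to an empty string when the variable is not defined
    expression { return (env.MyParameter ?: '').contains('MySubstring') }
}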

Is it possible to pass stages into a Jenkinsfile via pipeline parameters

I'm currently working with a Jenkinsfile I can't directly add code to, as it is not developed by my team. However, I was thinking there might be some way I could get the owners of the Jenkinsfile (just another team in my company) to allow us to add "pre" and "post" type variables to the Jenkinsfile, where we could then pass in the stages and logic.
A sample Jenkinsfile today might look like
pipeline {
    stages {
        stage('Clean-Up WS') {
            steps {
                cleanWs()
            }
        }
        stage('Do more....
And the desired Jenkinsfile might look like
def x = stage('Clean-Up WS') {
    steps {
        cleanWs()
    }
}
pipeline {
    stages {
        x()
        stage('Do more....
Where x in the above example could be passed in via a Jenkins parameter
I've played around with the above and tried similar syntax, but nothing seems to work.
Does anyone know if anything like this is possible using Jenkinsfiles?

Calculated-String as the parameter to Jenkins's Groovy "STAGE"

I put this in the script section of a Jenkins UI job's configuration -
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works, however -
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws and displays an incorrect error message because the parser gets all confused due to the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays 'description' as the stage's description.
Now, if you try to load a Groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaC's? This means that the UI one has this really horrible bug in it...
Ideas?
.a.
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types - a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution - it is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal and not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline - the whole pipeline script is more or less a Groovy script, so you can put almost any code almost anywhere. A scripted pipeline starts with a node block, and it allows you to put any Groovy code inside this block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates 6 stages (Test, plus Stage 0 through Stage 4).
As you can see, there are actually no limits on what kind of stuff you can put inside the node block. A declarative pipeline does not allow you to do that - its syntax is strict and you have to follow it directly.
Differences
As a final note, I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the one for pipeline has a bug.

How can I parameterize Jenkinsfile jobs

I have Jenkins Pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what Jenkins passes as JOB_NAME, which is the branch name; sadly none of the env variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "build with parameters" button on the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
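Putting it together, a minimal scripted sketch (the parameter name myParam is just an example) could look like this:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Print parameter') {
        // After the first run, the job offers "Build with Parameters"
        echo "myParam is: ${params.myParam}"
    }
}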
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll adapt from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but you can have any number of parameters of any type.
I use this approach: one of my Groovy scripts has 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library, and it works perfectly. The motivation is of course one of the most basic rules in programming (not an exact quote, but it goes something like): if you have the same code in 2 different places, something is not right.
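For illustration, a vars file taking several parameters (the names here are made up) just extends the call signature, and the Jenkinsfile passes the values in:
// vars/myFancyPipeline.groovy (hypothetical multi-parameter variant)
def call(String product, String branch, int timeoutMinutes) {
    pipeline {
        agent any
        stages {
            stage('Info') {
                steps {
                    echo "Building ${product} from ${branch} (timeout: ${timeoutMinutes} min)"
                }
            }
        }
    }
}

// Jenkinsfile
@Library('myCoolLib') _
myFancyPipeline('myProduct', 'main', 30)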
There is an option "This project is parameterized" in your pipeline job configuration. Write a variable name and a default value if you wish. In the pipeline, access this variable with env.variable_name.
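For example, assuming you named the parameter variable_name, a one-line sketch:
echo "The value is ${env.variable_name}"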
