I have Jenkins Pipeline jobs where the only difference between the jobs is a single "name" parameter. I could even use the multibranch job name (though not what it passes as JOB_NAME, which is the BRANCH name; sadly none of the environment variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the build has run once, you will see the "Build with Parameters" button in the job UI.
There you can enter the parameter value you want.
In the pipeline script you can reference it with params.myParam
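Putting it together, a complete scripted Jenkinsfile using this could look like the following (a minimal sketch; myParam is just the example name from above):

properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Use the parameter') {
        // value entered via "Build with Parameters"; empty on the very
        // first run, before the parameter definition has been registered
        echo "myParam is: ${params.myParam}"
    }
}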
Basically you need to create a Jenkins shared library (for example, named myCoolLib) and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but actually the docs are quite nice, so I'll copy from there. First, myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just 2 lines):
@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is of type int, but it can be any number of parameters of any type.
I use this approach: one of my groovy scripts takes 3 parameters (2 Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library. It works perfectly. The motivation is of course one of the most basic rules in programming (not an exact quote, but it goes something like): if you have the same code in two different places, something is not right.
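For illustration, a vars script taking 2 Strings and an int could look roughly like this (a sketch only; the file name deployPipeline.groovy and all values are made up, not my actual setup):

// vars/deployPipeline.groovy (hypothetical name)
def call(String projectName, String targetEnv, int timeoutMinutes) {
    pipeline {
        agent any
        options {
            // the int parameter feeds straight into the pipeline options
            timeout(time: timeoutMinutes, unit: 'MINUTES')
        }
        stages {
            stage('Build') {
                steps {
                    echo "Building ${projectName} for ${targetEnv}"
                }
            }
        }
    }
}

Each Jenkinsfile then stays at two lines, e.g. @Library('myCoolLib') _ followed by deployPipeline('myProject', 'staging', 30).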
There is an option This project is parameterized in your pipeline job configuration. Enter a variable name and, if you wish, a default value. In the pipeline, access this variable with env.variable_name
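For example, with a string parameter named DEPLOY_TARGET (a made-up name) defined on the job, a minimal pipeline reading it could look like:

node {
    stage('Show parameter') {
        // string parameters defined on the job are also exposed as
        // environment variables
        echo "Deploying to: ${env.DEPLOY_TARGET}"
    }
}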
I have a Jenkins Pipeline that is triggered for different projects. However, the only difference between all the pipelines is just the name.
So I have added a parameter ${project} to the Jenkins job's parameters and assigned it the name of the project as its value.
We have a number of projects and I am trying to find a better way to achieve this.
I am wondering how we can make the pipeline run with different parameter values for all the projects without actually creating different projects under Jenkins.
I am pasting some screenshots for you to understand what exactly I want to achieve.
As mentioned here, this is a radioserver project, with a pipeline that has ${project} in it.
How can I give multiple values to that ${project} from a single Jenkins job?
If you have any doubts, please message me or add a comment.
You can see the 2 projects I have created; all their contents are the same, only the parameterized value is different. I am wondering how I can give a different value to that parameter.
As you can see, the 2 images have their default values as radioserver and nrcuup. How can I combine them and make them run seamlessly?
I hope this will help. Let me know if any changes are required in the answer.
You can use conditions in Jenkins: based on the value of ${PROJECT}, you can then execute a particular stage.
Here is a simple example of a pipeline where I have given choices for the value of the parameter PROJECT, i.e. test1, test2 and test3.
So, whenever you select test1, the Jenkins job will execute the stages that are gated on test1.
Sample pipeline code
pipeline {
    agent any
    parameters {
        choice(
            choices: ['test1', 'test2', 'test3'],
            description: 'PROJECT NAME',
            name: 'PROJECT')
    }
    stages {
        stage('PROJECT 1 RUN') {
            when {
                expression { params.PROJECT == 'test1' }
            }
            steps {
                echo "Hello, test1"
            }
        }
        stage('PROJECT 2 RUN') {
            when {
                expression { params.PROJECT == 'test2' }
            }
            steps {
                echo "Hello, test2"
            }
        }
    }
}
Output: (screenshots of the run when test1 is selected and when test2 is selected)
Updated Answer
Yes, it is possible to trigger the job periodically with a specific parameter value using the Jenkins Parameterized Scheduler plugin.
After you save the project with some parameters (like the pipeline code above), go back to Configure; under Build Triggers you can now see the option Build periodically with parameters.
Example:
Here I will run the job with PROJECT=test1 on even minutes and PROJECT=test2 on odd minutes. So, below is the configuration:
*/2 * * * * %PROJECT=test1
1-59/2 * * * * %PROJECT=test2
Please change the crontab values according to your needs.
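If you prefer to keep the schedule in the Jenkinsfile itself, the same plugin also provides a parameterizedCron trigger for declarative pipelines; a sketch based on the pipeline above (assuming the Parameterized Scheduler plugin is installed):

pipeline {
    agent any
    parameters {
        choice(choices: ['test1', 'test2', 'test3'], description: 'PROJECT NAME', name: 'PROJECT')
    }
    triggers {
        // same schedule as in the job configuration above
        parameterizedCron('''
            */2 * * * * %PROJECT=test1
            1-59/2 * * * * %PROJECT=test2
        ''')
    }
    stages {
        stage('Run') {
            steps {
                echo "Running for ${params.PROJECT}"
            }
        }
    }
}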
I want to configure 2 jobs in Jenkins that use the same Jenkinsfile, where the only difference is the parameters passed to these jobs.
For example:
create 2 jobs named A and B, each of which gets a param X.
In A the job gets X as 1, and in B the X is 2.
I want to create it this way instead of one job with a multi-choice parameter, because the jobs are independent and I don't want to leave any room for mistakes.
How can I achieve that only via the Jenkinsfile?
I read about loading a Jenkinsfile within another Jenkinsfile, but I can't find a way to pass parameters.
How can I achieve this?
If you use Build with Parameters you can reference the declared parameters in the Jenkinsfile with the syntax ${PARAM}.
You could also declare a Choice Parameter named CHOICE; in a declarative pipeline it would look like this:
pipeline {
    agent any
    parameters {
        choice(name: 'CHOICE', choices: ['One', 'Two', 'Three'], description: 'Pick something')
    }
    stages {
        stage('Example') {
            steps {
                echo "Choice: ${params.CHOICE}"
            }
        }
    }
}
You would not need 2 jobs this way: it is the same job, but it runs with different configurations of CHOICE.
This example is directly taken from the official docs:
https://www.jenkins.io/doc/book/pipeline/syntax/#parameters
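If you ever need to trigger this job with a fixed value from another pipeline, the built-in build step can pass the parameter along (a sketch; 'my-shared-job' is a placeholder job name):

build job: 'my-shared-job', parameters: [
    // a choice parameter receives its value as a plain string
    string(name: 'CHOICE', value: 'Two')
]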
My solution was to set this in the Jenkinsfile:
environment {
    X = getX(env.JOB_NAME)
    Y = getY(env.JOB_NAME)
}
and to create the following functions at the end of the pipeline:
def getX(jobName) {
    if (jobName.contains("X1")) {
        return "X1"
    }
    return "X2"
}

def getY(jobName) {
    if (jobName.contains("BLA")) {
        return "BLA1"
    }
    return "BLA2"
}
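Put together, a minimal runnable sketch of this Jenkinsfile (the stage name and echo are placeholders) looks like:

pipeline {
    agent any
    environment {
        X = getX(env.JOB_NAME)
        Y = getY(env.JOB_NAME)
    }
    stages {
        stage('Use the values') {
            steps {
                echo "X=${env.X}, Y=${env.Y}"
            }
        }
    }
}

def getX(jobName) {
    return jobName.contains("X1") ? "X1" : "X2"
}

def getY(jobName) {
    return jobName.contains("BLA") ? "BLA1" : "BLA2"
}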
I put this in the script section of a Jenkins UI job's configuration:
pipeline {
    agent any
    stages {
        stage('Project') {
            ...
That works. However,
pipeline {
    agent any
    stages {
        stage('Project ' + 'Josh') {
            ...
throws and displays an incorrect error message, because the parser gets confused by the constructed string inside the stage.
Moreover,
String description = 'Project' + ' Josh'
pipeline {
    agent any
    stages {
        stage(description) {
            ...
does not fail, but displays the literal text 'description' as the stage's name.
Now, if you try to load a Groovy PaC file with this in it:
node {
    stage('Project' + 'Josh') {
        ...
it works without a hitch.
Is it possible that there are two different Groovy parsers employed, one for the UI and another for loaded PaC files? That would mean the UI one has a really horrible bug in it...
Ideas?
Your example has nothing to do with the Jenkins UI. You have shown two different pipeline types: a declarative and a scripted one.
Declarative pipeline
A declarative pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // do something here
            }
        }
    }
}
introduces a more simplified, limited and opinionated syntax. This type of pipeline sets boundaries for Groovy code execution: arbitrary Groovy is only available inside a dedicated script block, e.g.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    def name = 'Joe'
                    echo "My name is ${name}"
                }
            }
        }
    }
}
This is why the stage block expects a literal, not a variable or an expression.
Scripted pipeline
The second example you have shown is a scripted pipeline. This kind of pipeline is more powerful compared to a declarative pipeline: the whole pipeline script is more or less a Groovy script, so you can put almost any code anywhere. A scripted pipeline starts with a node block, and it allows you to put any Groovy code inside this block. Consider the following example:
node {
    stage("Test") {
        echo "1,2,3"
    }
    for (int i = 0; i < 5; i++) {
        stage("Stage ${i}") {
            echo "This is ${i}"
        }
    }
}
This pipeline script generates 6 stages.
As you can see, there are practically no limits on what kind of stuff you can put inside a node block. Declarative pipeline does not allow you to do that: its syntax is strict and you have to follow it exactly.
Differences
As a final note I will quote the official Jenkins docs:
Where they differ however is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar that the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power-users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model. Whereas Scripted Pipelines follow a more imperative programming model.
Source: https://jenkins.io/doc/book/pipeline/syntax/#compare
The script you configured via the UI uses declarative pipeline syntax, while the other uses the scripted node syntax. I'd say that's probably where the other parser comes in, and I would agree that the declarative one has a bug.
I'm using declarative Jenkins pipelines to run some of my build pipelines and was wondering if it is possible to define multiple agent labels.
I have a number of build agents hooked up to my Jenkins and would like for this specific pipeline to be able to be built by various agents that have different labels (but not by ALL agents).
To be more concrete, let's say I have 2 agents with the label 'small', 4 with the label 'medium' and 6 with the label 'large'. Now I have a pipeline that needs very few resources, and I want it to be executed only on a 'small' or 'medium' agent, but not on a 'large' one, as it may cause larger builds to wait in the queue for an unnecessarily long time.
All the examples I've seen so far use only a single label.
I tried something like this:
agent { label 'small, medium' }
But it failed.
I'm using version 2.5 of the Jenkins Pipeline Plugin.
You can open the 'Pipeline Syntax' help within your Jenkins installation and see the sample step node reference.
You can use exprA||exprB:
node('small||medium') {
    // some block
}
EDIT: I misunderstood the question. This answer only applies if you know which specific agent you want to use for each stage.
If you need multiple agents, you can declare agent none and then declare an agent for each stage.
https://jenkins.io/doc/book/pipeline/jenkinsfile/#using-multiple-agents
From the docs:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent any
            steps {
                checkout scm
                sh 'make'
                stash includes: '**/target/*.jar', name: 'app'
            }
        }
        stage('Test on Linux') {
            agent {
                label 'linux'
            }
            steps {
                unstash 'app'
                sh 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
        stage('Test on Windows') {
            agent {
                label 'windows'
            }
            steps {
                unstash 'app'
                bat 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
    }
}
This syntax appears to work for me:
agent { label 'linux && java' }
As described in the Jenkins pipeline documentation and by Vadim Kotov, one can use operators in the label definition.
So in your case if you want to run your jobs on nodes with specific labels, the declarative way goes like this:
agent { label('small || medium') }
And here are some examples from the Jenkins page using different operators:
// with AND operator
agent { label('windows && jdk9') }
// a more complex one
agent { label('postgres && !vm && (linux || freebsd)') }
Notes
When constructing those definitions, one just needs to consider the following rules/restrictions:
All operators are left-associative
Labels or agent names can be surrounded with quotation marks if they contain characters that would conflict with the operator syntax
Expressions can be written without whitespace
Jenkins will ignore whitespace when evaluating expressions
Matching labels or agent names with wildcards or regular expressions is not supported
An empty expression will always evaluate to true, matching all agents
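For example, quotation marks are needed when a label or agent name contains spaces (a sketch; the agent name is made up):

// the quotes keep the multi-word agent name together in the expression
agent { label('"my windows box" && jdk11') }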
Create another label called 'small-or-medium' and assign it to all 6 of those agents (the 2 small and the 4 medium ones). Then in the Jenkinsfile:
agent { label 'small-or-medium' }
We're considering using the Jenkins Pipeline plugin for a rather complex project consisting of several deliveries that need to be built using different tools (on different machines) before being merged. Still, it seems easy enough to do a complete build with a single Jenkinsfile, and I like the automatic discovery of git branches that comes with Pipeline.
However, at this point, we have jobs for each of the deliveries and use a build-flow based "meta" job to orchestrate the individual jobs. The nice thing about this is that it also allows starting just one individual job if only small changes were made, just to see whether this delivery still compiles.
To emulate this, some ideas came to mind:
Use different Jenkinsfiles for the deliveries and load them in the top-level Jenkinsfile; it seems that the Multibranch Pipeline job does not allow configuring the Jenkinsfile to use yet (https://issues.jenkins-ci.org/browse/JENKINS-35415), however, so creating the jobs for the individual deliveries is still open.
Provide a configuration option for the "top-level" job and have ifs for all deliveries in the Jenkinsfile to be able to select which should be build. This would mix different build types in one pipeline, though, and, at the very least, mess up the estimation of the build time.
Are those viable options, or is there a better one?
What you could do is write a pipeline script that has "if" guards around the individual stages, like this:
stage "s1"
if (theStage in ["s1","all"]) {
sleep 2
}
stage "s2"
if (theStage in ["s2", "all"]) {
sleep 2
}
stage "s3"
if (theStage in ["s3", "all"]) {
sleep 2
}
Then you can make a "main" job that uses this script and runs all stages at once by setting the parameter theStage to "all". This job will collect the statistics when all stages are run at once and give you useful estimated times.
Furthermore, you can make a "partial run" job that uses this script and that is parametrized with the stage you want to run. Its estimates will not be very useful, though.
Note that I put the stage itself in the main script and only the execution code into the conditional, as suggested by Martin Ba. This makes sure that the visualization of the job is more reliable.
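For completeness, the theStage parameter can be declared on the job itself, for instance as a choice parameter (a sketch; the stage names are those used above, and on older Jenkins versions choices must be a newline-separated string instead of a list):

properties([
    parameters([
        // 'all' comes first, so it is the default choice
        choice(name: 'theStage',
               choices: ['all', 's1', 's2', 's3'],
               description: 'Which single stage to run, or "all"')
    ])
])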
As an expansion of the previous answer, I would propose something like this:
def stageIf(String name, Closure body) {
    if (params.firstStage <= name && params.lastStage >= name) {
        stage(name, body)
    } else {
        stage(name) {
            echo "Stage skipped: $name"
        }
    }
}

node('linux') {
    properties([
        parameters([
            // choice (not choiceParam) is the pipeline symbol; the first
            // entry in the list is the default, which is why the two lists
            // are ordered differently
            choice(
                name: 'firstStage',
                choices: '1.Build\n' +
                         '2.Docker\n' +
                         '3.Deploy',
                description: 'First stage to run',
            ),
            choice(
                name: 'lastStage',
                choices: '3.Deploy\n' +
                         '2.Docker\n' +
                         '1.Build',
                description: 'Last stage to run',
            ),
        ])
    ])
    stageIf('1.Build') {
        // ...
    }
    stageIf('3.Deploy') {
        // ...
    }
}
Not as perfect as I would wish, but at least it's working.