I'm having a problem with a Jenkins multibranch pipeline, which is parameterized. The parameters are all declared in the Jenkinsfile.
The problem is that these parameters do not exist on the very first run of the job, so that first execution fails with groovy.lang.MissingPropertyException. Any subsequent run is aware of the parameters and does not fail.
Since this is a multibranch pipeline this happens for every new pull request or tracked branch. Is there any workaround to avoid this problem?
I tried setting the parameters in the UI as well; however, there is no option on the pipeline configuration page to set parameters. Probably because this is a multibranch pipeline?
Cheers
This is a known issue with parameters in Pipelines. To know which parameters a job needs, Jenkins has to execute the Jenkinsfile once; that is also why the parameters are not available in the GUI until after the first run of the pipeline.
To prevent errors, you could specify sensible default values like this:
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
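Note that the MissingPropertyException on the first run comes from referencing a parameter as a bare variable (e.g. ${Greeting}); reading it through the params map instead returns null rather than throwing, so you can combine that with a Groovy fallback. A minimal sketch of the step above, with the same Greeting parameter assumed:

steps {
    // params.Greeting is null (not an error) before the parameter is registered,
    // so fall back to a sensible default with the Elvis operator
    echo "${params.Greeting ?: 'Hello'} World!"
}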
I have a Jenkinsfile set up for our CI/CD pipeline, and it runs through the pipeline on git actions like Pull Requests, Branch Creation, Tag Pushes, etc.
Prior to this setup, I was used to setting up Jenkins build jobs in the Jenkins UI. The advantage of this, was that I could setup dedicated build jobs that I could trigger remotely, and independently of git webhook actions. I could do a POST to the job endpoint with parameters to trigger various actions.
Documentation for this process is referenced here - see "Trigger Builds Remotely".
I could also hit the big button that says "Build", or "Build with Parameters" in the UI, which was super nice.
How would one do this with a Jenkinsfile? Is it even possible to define build jobs in a pipeline definition within a Jenkinsfile, i.e. define functions/build jobs that have dedicated URLs that could be called on the Jenkins URL independently of webhook callbacks?
What's the best practice here?
Thanks for any tips, references, suggestions!
I would recommend starting with multibranch pipelines. In general you get all the things you mentioned, but a little better, because the parameters can be defined within your Jenkinsfile. In short, just do it like this:
Create a Jenkinsfile and check it into a Git repository.
To create a Multibranch Pipeline: Click New Item on Jenkins home page.
Enter a name for your Pipeline, select Multibranch Pipeline and click OK.
Add a Branch Source (for example, Git) and enter the location of the repository.
Save the Multibranch Pipeline project.
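If you prefer to automate the job creation itself rather than clicking through the UI, the Job DSL plugin can create the same kind of Multibranch Pipeline. A rough sketch, with the job name, source id, and repository URL being assumptions:

// Job DSL script (requires the Job DSL plugin); creates a Multibranch Pipeline
multibranchPipelineJob('my-app') {
    branchSources {
        git {
            id('my-app-source') // any unique id for this branch source
            remote('https://example.com/my-app.git')
        }
    }
}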
A declarative Jenkinsfile can look like this:
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
A good tutorial with screenshots can be found here: https://www.jenkins.io/doc/book/pipeline/multibranch/
I have created a Jenkins parameterized pipeline script, shown below, and stored it in my GitHub repository.
properties([parameters([string(defaultValue: 'Devasish', description: 'Enter your name', name: 'Name'),
                        choice(choices: ['QA', 'Dev', 'UAT', 'PROD'], description: 'Where you want to deploy?', name: 'Environment')])])

pipeline {
    agent any
    stages {
        stage('one') {
            steps {
                echo "Hello ${params.Name}, your code is building in ${params.Environment}"
            }
        }
        stage('Two') {
            steps {
                echo "Hello ${params.Name}, hard testing in ${params.Environment}"
            }
        }
        stage('Three') {
            steps {
                echo "Hello ${params.Name}, deploying in ${params.Environment}"
            }
        }
    }
}
Then I created a Jenkins job, choosing the pipeline option. While creating the pipeline, under the Build Triggers section I checked the GitHub hook trigger for GITScm polling checkbox, and under the Pipeline section I chose Pipeline script from SCM, selected Git as the SCM, and provided the repository URL where the above Jenkinsfile is stored.
Then, under the GitHub repository settings, I went to Webhooks and added one with the Payload URL set to myJenkinsServerURL/github-webhook/, so that whenever a push event occurs in the repository, it runs the Jenkins pipeline created above.
Now, the situation is: when I run this Jenkins job from the Classic UI by clicking Build with Parameters, I get a text box to fill in my name and a dropdown with the four options ('QA', 'Dev', 'UAT', 'PROD') defined in the script, letting me choose which server to deploy my code to, and then it runs.
But when I commit to GitHub, the Jenkins pipeline starts without asking for parameter values; it just takes the default value Devasish for the name and QA for the server.
What should I do to be able to fill in these details when the build is not triggered from the Classic UI?
Thanks in advance.
As you have noted, when you trigger your pipeline manually, it will ask for the Build Parameters and let you specify values before proceeding.
However, when the pipeline is triggered automatically (e.g. by SCM triggers/webhooks), it is assumed to be an unattended build, and it will use the defaultValue settings from the build parameters definition in your Jenkinsfile.
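If you do need non-default values in an automated run, the usual options are to POST to the job's buildWithParameters endpoint (the "Trigger Builds Remotely" mechanism) or to have another pipeline trigger this one with explicit values via the build step. A sketch of the latter, with the downstream job name assumed:

// Hypothetical upstream pipeline step: trigger the parameterized job
// with explicit values instead of relying on its defaultValue settings
build job: 'my-parameterized-pipeline', parameters: [
    string(name: 'Name', value: 'Devasish'),
    string(name: 'Environment', value: 'PROD')
]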
I want to use the same pipeline script from SCM and build it in a few jobs with different test tags to run (passed as a simple String parameter).
I found that I can do this with:
parameters {
    string(name: 'param', defaultValue: 'Hello', description: 'How should I greet the world?')
}
I thought that I could change the default value for each job in the job's parameterized build settings:
Parametrize build
But it is overridden by the code from the pipeline every time.
Is there a way to pass this parameter (or maybe pass the parameter directly to the job in a different way)?
Or do I have to create different pipeline files for each value of this parameter?
Thanks
Kamil
I would suggest that you remove
parameters {
    string(name: 'param', defaultValue: 'Hello', description: 'How should I greet the world?')
}
from the pipeline script, and use This project is parameterized when creating the pipeline jobs instead. This way the pipeline script is the same for all jobs, but each job can have different parameters.
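A minimal sketch of what the shared Jenkinsfile could then look like, assuming each job defines a parameter named param in its This project is parameterized settings:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // 'param' comes from the job configuration, not from the
                // Jenkinsfile; params.param is null if a job does not define it
                echo "${params.param} World!"
            }
        }
    }
}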
I'm trying to set up a multibranch pipeline configuration where the "Deploy" boolean checkbox is defaulted to true on non-production branches, and false on the production build.
pipeline {
    parameters {
        booleanParam(defaultValue: true, description: 'Do deploy after build', name: 'DEPLOY')
Is there some method to conditionally set defaultValue=false when $BRANCH_NAME == "production"?
I think I might have answered my own question through a bunch of experimentation. This seems crazy simple, but my test between two branches shows the Deploy parameter is properly defaulted on/off depending on $BRANCH_NAME:
def defaultDeploy = true
if (BRANCH_NAME == "production") {
    defaultDeploy = false
}

pipeline {
    agent any
    parameters {
        booleanParam(defaultValue: defaultDeploy,
            description: 'Do deploy after build', name: 'DEPLOY')
    }
    // ... stages as usual ...
}
Answering the poster's question in a more generic way: the parameters' default values can also be set dynamically by injecting properties with the EnvInject plugin. The Extended Choice Parameter plugin is also needed to run the example. Create a declarative pipeline project with the following content:
pipeline {
    agent any
    parameters {
        extendedChoice(
            name: 'ArchitecturesCh',
            defaultValue: "${env.BUILD_ARCHS}",
            multiSelectDelimiter: ',',
            type: 'PT_CHECKBOX',
            value: 'linux-x86_64,android-x86_64,android-arm,android-arm64,ios-arm64,Win32,Win64'
        )
        string(name: 'ArchitecturesStr', defaultValue: "${env.BUILD_ARCHS}", description: "")
    }
    stages {
        stage('Test') {
            steps {
                echo params.ArchitecturesCh
                echo params.ArchitecturesStr
                echo "${env.BUILD_ARCHS}"
            }
        }
    }
}
Then prepare an environment with the EnvInject plugin: in the same pipeline project's GUI, enable Prepare an environment for the run and define BUILD_ARCHS in the Properties Content field. NOTE: Be careful not to clash with other environment variables. In my case I lost a lot of time thinking the method was not working, because an ARCHITECTURES variable was set somewhere else.
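For example, the Properties Content field could contain a line like this (the value is just an illustration):

BUILD_ARCHS=linux-x86_64,android-arm64,Win64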
Save and build the pipeline, then refresh the page. The default parameter values will be available in the following build.
In your question, it's a bit unclear whether BRANCH_NAME refers to an environment variable (as in env.BRANCH_NAME) or to another parameter (as in params.BRANCH_NAME).
If the former: having environment variables means there is an environment, so a node must have been allocated with its environment set. To allocate a node, the pipeline needs to start running, and to start running, the user needs to select the parameters first. So it's a chicken-and-egg problem: you can't have environment variables before the pipeline runs, yet you need the parameters determined before the pipeline runs.
If the latter, and you are thinking of a case where, say, there is a String parameter named BRANCH_NAME and a Boolean parameter named DEPLOY, and on the parameters page the DEPLOY checkbox is unchecked while you type maste into BRANCH_NAME but magically becomes checked once you press the final r ... then it could be done, with a lot of pain, by using the Active Choices plugin.
Finally, if what you want is to prevent any deploying from the master branch, you can check both the parameter and the branch name before deploying, and refuse to deploy if the parameter is false or the branch is master.
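In a declarative pipeline that guard can be expressed with a when directive on the deploy stage; a minimal sketch, with the stage contents assumed:

stage('Deploy') {
    when {
        // run only when DEPLOY is true and we are not on master
        allOf {
            expression { params.DEPLOY }
            not { branch 'master' }
        }
    }
    steps {
        sh './deploy.sh'
    }
}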
Now that the Multibranch Pipeline job type has matured, is there any reason to use the simple Pipeline job type any longer? Even if you only have one branch today, it's probably wise to account for the possibility of multiple branches in the future, so what would the motivation be to use the Pipeline job type for your Jenkins Pipeline vs. always using the Multibranch Pipeline job type, assuming you are storing your Jenkinsfile in SCM? Is there feature parity between the two job types now?
In my experience with multibranch pipelines, the ONLY downside is that you can't see the last success/failure/duration columns on the Jenkins front page; they just show "N/A", because a multibranch pipeline is technically a folder of sub-jobs.
Other than that I can't think of any other "cons" to using multibranch.
I disagree with the other answer. Its case was that multibranch sends changes for "any" branch, which is not necessarily true. If a Jenkinsfile exists on a random feature branch but that branch is not handled in the pipeline, you can simply do nothing with it using typical if/else conditionals.
For example:
node {
    checkout scm
    def workspace = pwd()
    if (env.BRANCH_NAME == 'master') {
        stage('Some Stage 1 for master') {
            sh 'do something'
        }
        stage('Another Stage for Master') {
            sh 'do something else here'
        }
    }
    else if (env.BRANCH_NAME == 'stage') {
        stage('Some stage branch step') {
            sh 'do something'
        }
        stage('Deploy to stage target') {
            sh 'do something else'
        }
    }
    else {
        sh 'echo "Branch not applicable to Jenkins... do nothing"'
    }
}
Multibranch Pipeline works well if your Jenkins job deals with a single git repository. On the other hand, a plain pipeline job can be repository-neutral and branch-neutral, which is very flexible when a single Jenkins job has to work with multiple git repositories.
For example, assume you have artifact-1 from repo-1, artifact-2 from repo-2, and integration tests from repo-3. And artifact-2 depends on artifact-1. A Jenkins job has to build artifact-1, then build artifact-2, and finally run integration tests from repo-3. And assume your code change goes to a feature-1 branch of repo-1 and feature-1 branch for new tests in repo-3. In this case, the Jenkins job builds feature-1 for artifact-1, then uses 'dev' branch as default from repo-2 (if feature-1 is not detected in repo-2), and runs 'feature-1' from repo-3 for new integration tests. As you can see, the job works well with three git repositories. A repo-neutral/branch-neutral pipeline job is ideal in this setting.
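A sketch of that branch-fallback idea in a scripted pipeline; the repository URL, the FEATURE parameter, and the branchFor helper are all assumptions for illustration:

// Return the wanted branch if the repo has it, otherwise the fallback branch
def branchFor(String repoUrl, String wanted, String fallback) {
    // git ls-remote prints nothing when the branch does not exist
    def found = sh(returnStdout: true,
                   script: "git ls-remote --heads ${repoUrl} ${wanted}").trim()
    return found ? wanted : fallback
}

node {
    // build artifact-2 from feature-1 if repo-2 has that branch, else from 'dev'
    def repo2Branch = branchFor('https://example.com/repo-2.git', params.FEATURE, 'dev')
    dir('repo-2') {
        git url: 'https://example.com/repo-2.git', branch: repo2Branch
        // ... build artifact-2 here ...
    }
}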
In a CI/CD situation, it may not be desirable to send every branch to the target environment. Using pipeline and specifying a single branch would allow you to filter, and send only /master to Staging or Production environments. Multibranch would be useful for sending any change on any branch specifically to a test environment.
On the other hand, if the QA/AutomatedTesting process is thorough enough, the risk with sending any branch to Production could be acceptable.
If you are still developing your flow, the simple pipeline job has the added advantage of supporting parameterized projects in the job configuration. This is useful when developing declarative pipelines in the Jenkins GUI, using a parameter to control which branch/repository you are targeting.
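For instance, a plain pipeline job can take the branch itself as a parameter; a minimal sketch, with the repository URL assumed:

pipeline {
    agent any
    parameters {
        string(name: 'BRANCH', defaultValue: 'master', description: 'Branch to build')
    }
    stages {
        stage('Checkout') {
            steps {
                // the parameter controls which branch this job targets
                git url: 'https://example.com/repo.git', branch: params.BRANCH
            }
        }
    }
}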