I am trying to access environment variables in one project the same way I do in another project, and I can't.
This is what I have in both projects; it works for one but not the other. Why?
pipeline {
    agent { label('builder') }
    environment {
        BRANCH = env.BRANCH_NAME.replace('/', '-')
        SERVICE = "some_name"
        IMAGE_TAG = "${env.BRANCH}-${env.BUILD_NUMBER}"
        CI_IMAGE_TAG = "CI-${env.BRANCH}-${env.BUILD_NUMBER}"
    }
}
I get an error:
java.lang.IllegalArgumentException: One or more variables have some issues with their values: BRANCH
Jenkins version: 2.332
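One difference worth ruling out (an assumption, since the question doesn't show which project fails): env.BRANCH_NAME is only defined for multibranch pipelines, and a null value in the environment block can produce exactly this kind of complaint. A null-safe sketch of the same block:

pipeline {
    agent { label('builder') }
    environment {
        // BRANCH_NAME is unset outside multibranch jobs, so fall back
        // to a placeholder instead of calling replace() on null.
        BRANCH = (env.BRANCH_NAME ?: 'detached').replace('/', '-')
        SERVICE = "some_name"
        IMAGE_TAG = "${env.BRANCH}-${env.BUILD_NUMBER}"
        CI_IMAGE_TAG = "CI-${env.BRANCH}-${env.BUILD_NUMBER}"
    }
}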
I'm new to shared libraries in Jenkins, and fairly new to Groovy as well.
I have several multibranch pipelines for different projects. I have set up email notifications for each job using an environment variable containing a list of email addresses, which works just fine. However, several jobs share the same email addresses (depending on which project they are for), and I'd like to create a shared library holding a master email list, so I don't have to update the list in each job individually if, say, I want to add or remove someone. I'm having trouble defining a variable in a library that can be used later in the Jenkinsfile. This is a simplified version of what I've been trying:
Shared library (basically a copy-paste of the environment variables I was originally using in the individual Jenkinsfiles/jobs, which works):
Jenkinsfile-shared-libraries\vars\masterEmailList
def call() {
    environment {
        project1EmailList = "user1@xyz.com, user2@xyz.com, user3@xyz.com"
        project2EmailList = "user2@xyz.com, user4@xyz.com, user5@xyz.com"
    }
}
Jenkinsfile
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                masterEmailList()
                echo env.project1EmailList
            }
        }
    }
}
The echo returns "null" rather than the project's email list, as I would expect.
Any guidance would be much appreciated!
Cheers.
The "Defining global variables" section of https://www.jenkins.io/doc/book/pipeline/shared-libraries/#defining-global-variables helped solve this one.
shared library:
Jenkinsfile-shared-libraries\vars\masterEmailList
def project1EmailList() {
    return "user1@xyz.com, user2@xyz.com, user3@xyz.com"
}

def project2EmailList() {
    return "user2@xyz.com, user4@xyz.com, user5@xyz.com"
}
Jenkinsfile:
@Library('Jenkinsfile-shared-libraries') _
pipeline {
    agent any
    stages {
        stage('email list for project 1') {
            steps {
                script {
                    echo masterEmailList.project1EmailList()
                }
            }
        }
    }
}
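With that in place, the list can feed whatever notification step the jobs already use. A sketch, assuming the Email Extension plugin's emailext step (the question only mentions email notifications generically):

post {
    failure {
        emailext(
            to: masterEmailList.project1EmailList(),
            subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
            body: "Details: ${env.BUILD_URL}"
        )
    }
}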
I have a Jenkins Multibranch project, and I need to set some credentials depending on the Git branch I'm on right now. For example:
// If I'm in master
MY_VARIABLE = credentials("master-credential")
// If I'm in develop
MY_VARIABLE = credentials("develop-credential")
// If I'm in QA
MY_VARIABLE = credentials("qa-credential")
Right now, I've tried naming my credentials with a different prefix per branch and setting them up in the following way:
pipeline {
    agent any
    environment {
        MY_VARIABLE = credentials("${env.BRANCH_NAME}-credential")
    }
    stages {
        stage("Start") {
            steps {
                // use MY_VARIABLE in my steps
            }
        }
    }
}
But it doesn't work.
I think I might be able to use different domains for my credentials and then specify the domain when I set them, but I haven't found in the docs how to specify the domains on the Jenkinsfile.
I would really appreciate if someone could help me.
Thanks!
While the credentials convenience function in the environment directive is cleaner, for an interpolated string like this in the credentialsId you need the withCredentials step from the Credentials Binding plugin. Based on the question, I will assume the binding type you want is string:
steps {
    withCredentials([string(credentialsId: "${env.BRANCH_NAME}-credential", variable: 'MY_VARIABLE')]) {
        // use MY_VARIABLE in my steps
    }
}
Check the documentation for more information.
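As a usage sketch (deploy-tool is a placeholder command, not something from the question): reference the bound variable inside a single-quoted string, so the shell reads the secret from the environment rather than Groovy interpolating it into the command line:

steps {
    withCredentials([string(credentialsId: "${env.BRANCH_NAME}-credential", variable: 'MY_VARIABLE')]) {
        // Single-quoted Groovy string: $MY_VARIABLE is expanded by the
        // shell at run time, keeping the secret out of the build log.
        sh 'deploy-tool --token "$MY_VARIABLE"'
    }
}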
I've been trying to construct multiple jobs from a list, and everything seems to work as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory

def repositories = [
    [
        id         : 'jenkins-test',
        name       : 'jenkins-test',
        displayName: 'Jenkins Test',
        repo       : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
    ]
]

DslFactory dslFactory = this as DslFactory

repositories.each { repository ->
    pipelineJob(repository.name) {
        parameters {
            stringParam("BRANCH", "master", "")
        }
        logRotator {
            numToKeep(30)
        }
        authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
        displayName(repository.displayName)
        description("Builds deploy pipelines for ${repository.displayName}")
        definition {
            cpsScm {
                scm {
                    git {
                        branch('${BRANCH}')
                        remote {
                            url(repository.repo)
                            credentials('<CREDENTIAL_NAME>')
                        }
                        extensions {
                            localBranch('${BRANCH}')
                            wipeOutWorkspace()
                            cloneOptions {
                                noTags(false)
                            }
                        }
                    }
                    scriptPath('Jenkinsfile')
                }
            }
        }
    }
}
After running the above script, all the required jobs are created successfully. But then once I build any job, the parameters disappear.
After that when I run the seed job again, the job starts showing the parameter. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me figure out a similar issue with my .groovy file:
I called the parameters property twice (once at the start of the node, then tried to set other parameters in an if block), so the latter overwrote the initial parameters.
BTW, as per the comments in the linked ticket, this is an issue with both scripted and declarative pipelines.
I fixed it by providing the full set of job parameters in each parameters call - for the case with ifs.
Though I don't see repeated calls in the code you've provided, please check the complete Groovy files for your jobs and add all parameters to every parameters {} block, as in the sketch below.
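For scripted pipelines the overwrite looks like this - a minimal sketch, where someCondition stands in for whatever branching logic the job uses:

// Anti-pattern: the second properties() call replaces the first,
// so the BRANCH parameter vanishes after the build runs.
properties([parameters([string(name: 'BRANCH', defaultValue: 'master')])])
if (someCondition) {
    properties([parameters([booleanParam(name: 'DEPLOY', defaultValue: false)])])
}

// Fix: collect everything and declare the parameters in one call.
def allParams = [string(name: 'BRANCH', defaultValue: 'master')]
if (someCondition) {
    allParams << booleanParam(name: 'DEPLOY', defaultValue: false)
}
properties([parameters(allParams)])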
I am trying to run a pipeline that works with several servers. When a choice parameter is selected, I want to perform some actions on four different servers sequentially (one server at a time). My idea is to create environment variables holding the server names in an array, then read the chosen variable to execute the actions.
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    environment {
        APPLICATION = ['veappprdl001','veappprdl002','veappprdl003','veappprdl004']
        ROUTER = ['verouprdl001','verouprdl002']
    }
    parameters {
        choice(name: 'SERVER_NAME', choices: ['APPLICATION','ROUTER'], description: 'Select Server to Test')
    }
    stages {
        stage('Application Sync') {
            steps {
                script {
                    if (env.SERVER_NAME == 'APPLICATION') {
                        sh """
                            curl --location --request GET 'http://${SERVER_NAME}//configuration-api/localMemory/update/ACTION'
                        """
                    }
                }
            }
        }
    }
}
I want to execute the action on all the servers in the 'APPLICATION' variable when 'APPLICATION' is selected in 'Build with parameters'.
Any help would be appreciated.
Thanks
You can't store a value of an array type in an environment variable. Whatever you try to assign to an env variable gets automatically cast to the string type. (I explained this in more detail in the following blog post or this video.) So when you try to assign an array, what you actually assign is its toString() representation.
However, you can solve this problem differently. Instead of trying to assign an array, store a string of values with a common delimiter (a comma, for instance). Then, in the part that expects to work with a list of elements, simply call the tokenize(",") method to produce a list of string elements. Having that, you can iterate and do things in sequence.
Consider the following example that illustrates this alternative solution.
pipeline {
    agent any
    environment {
        APPLICATION = "veappprdl001,veappprdl002,veappprdl003,veappprdl004"
    }
    stages {
        stage("Application Sync") {
            steps {
                script {
                    env.APPLICATION.tokenize(",").each { server ->
                        echo "Server is $server"
                    }
                }
            }
        }
    }
}
When you run such a pipeline, the console output will contain something like this:
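Server is veappprdl001
Server is veappprdl002
Server is veappprdl003
Server is veappprdl004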
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but I cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences in only a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV','QA','UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
    additionalParameters: [
        project: projectName,
        environments: envs,
        repository: repositoryURL
    ],
    removedJobAction: 'DELETE',
    removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed on to the pipeline script read from the workspace - the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpreted during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
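For example, a pipeline.groovy fragment written as a template might look like this (a sketch reusing the variable names from the question; the unescaped ${repository} is filled in by SimpleTemplateEngine when the seed job runs, while the escaped references survive into the generated pipeline):

pipeline {
    agent any
    environment {
        // Substituted by the template engine at job-creation time.
        REPO = "${repository}"
    }
    stages {
        stage('Deploy') {
            steps {
                // Escaped, so these expand at build time instead.
                echo "Deploying \${env.REPO} to \${params.ENVIRONMENT}..."
            }
        }
    }
}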
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline.
This is a bit limited because environment variables are strings, but it should work for basic cases.
Ex.:
//job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
//pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
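If each generated job needs different values, the same env() calls can be driven from seed-script data rather than literals - a sketch, assuming a hypothetical repositories list whose entries carry repo and environments fields:

//job-dsl
repositories.each { repository ->
    pipelineJob(repository.name) {
        environmentVariables {
            env('repository', repository.repo)
            // Arrays still have to be flattened to a delimited string.
            env('environments', repository.environments.join(','))
        }
        definition {
            cps {
                script(readFileFromWorkspace('pipeline.groovy'))
            }
        }
    }
}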
You need to pass the complete job name as a variable, without quotes. E.g., if JOBNAME is a variable containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}