Correct way to structure a jenkins groovy pipeline script - jenkins

I wrote a pipeline that works with Jenkins, but as a newbie to Jenkins scripting there are a lot of things that are not clear to me. Here is the whole script; I'll list the issues below.
SCRIPT:
node()
{
def libName = "PROJECT"
def slnPath = pwd();
def slnName = "${slnPath}\\${libName}.sln"
def webProject = "${slnPath}\\PROJECT.Web\\PROJECT.Web.csproj"
def profile = getProperty("profiles");
def version = getProperty("Version");
def deployFolder = "${slnPath}Deploy";
def buildRevision = "";
def msbHome = "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\MSBuild\\15.0\\Bin\\msbuild.exe"
def msdHome = "C:\\Program Files (x86)\\IIS\\Microsoft Web Deploy V3\\msdeploy.exe"
def nuget = "F:\\NugetBin\\nuget.exe";
def assemblyScript = "F:\\Build\\Tools\\AssemblyInfoUpdatePowershellScript\\SetAssemblyVersion.ps1";
def webserverName ="192.168.0.116";
def buildName = "PROJECT";
def filenameBase ="PROJECT";
stage('SCM update')
{
checkout([$class: 'SubversionSCM', additionalCredentials: [], excludedCommitMessages: '', excludedRegions: '', excludedRevprop: '', excludedUsers: '', filterChangelog: false, ignoreDirPropChanges: false, includedRegions: '', locations: [[credentialsId: '08ae9e8c-8db8-43e1-b081-eb352eb14d11', depthOption: 'infinity', ignoreExternalsOption: true, local: '.', remote: 'http://someurl:18080/svn/Prod/Projects/PROJECT/PROJECT/trunk']], workspaceUpdater: [$class: 'UpdateWithRevertUpdater']])
}
stage('SCM Revision')
{
bat("svn upgrade");
bat("svn info \"${slnPath}\" >revision.txt");
for (String i : readFile('revision.txt').split("\r?\n"))
{
if(i.contains("Last Changed Rev: "))
{
def splitted = i.split(": ")
echo "Revisione : "+ splitted[1];
buildName += "." + splitted[1];
currentBuild.displayName = buildName;
buildRevision += version + "." + splitted[1];
}
}
}
stage("AssemblyInfo update")
{
powerShell("${assemblyScript} ${buildRevision} -path .")
}
stage('Nuget restore')
{
bat("${nuget} restore \"${slnName}\"")
}
stage('Main build')
{
bat("\"${msbHome}\" \"${slnName}\" /p:Configuration=Release /p:PublishProfile=Release /p:DeployOnBuild=true /p:Profile=Release ");
stash includes: 'Deploy/Web/**', name : 'web_artifact'
stash includes: 'PROJECT.Web/Web.*', name : 'web_config_files'
stash includes: 'output/client/release/**', name : 'client_artifact'
stash includes: 'PROJECT.WPF/App.*', name : 'client_config_files'
stash includes: 'PROJECT.WPF/Setup//**', name : 'client_setup'
}
stage('Profile\'s customizations')
{
if (profile != "")
{
def buildProfile = profile.split(',');
def stepsForParallel = buildProfile.collectEntries {
["echoing ${it}" : performTransformation(it,filenameBase,buildRevision)]
}
parallel stepsForParallel;
}
}
post
{
always
{
echo "mimmo";
}
}
}
def powerShell(psCmd) {
bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';[Console]::OutputEncoding=[System.Text.Encoding]::UTF8;$psCmd;EXIT \$global:LastExitCode\""
}
def performTransformation(profile,filename,buildRevision) {
return {
node {
def ctt ="F:\\Build\\Tools\\ConfigTransformationTool\\ctt.exe";
def nsiTool = "F:\\Build\\Tools\\NSIS\\makensis.exe";
def slnPath = pwd();
unstash 'web_artifact'
unstash 'web_config_files'
def source = 'Deploy/Web/Web.config';
def transform = 'PROJECT.Web\\web.' + profile + '.config';
bat("\"${ctt}\" i s:\"${source}\" t:\"${transform}\" d:\"${source}\"" )
def fname= filename + "_" + profile + "_" + buildRevision + "_web.zip";
if (fileExists(fname))
bat("del "+ fname);
zip(zipFile:fname, dir:"Deploy\\Web")
archiveArtifacts artifacts: fname
//Now I generate the client part
unstash 'client_artifact'
unstash 'client_config_files'
unstash 'client_setup'
def sourceClient = 'output/client/release/PROJECT.WPF.exe.config';
def transformClient = 'PROJECT.WPF/App.' + profile + '.config';
bat("\"${ctt}\" i s:\"${sourceClient}\" t:\"${transformClient}\" d:\"${sourceClient}\"" )
def directory = new File(pwd() + "\\output\\installer\\")
if(!directory.exists())
{
bat("mkdir output\\installer");
}
directory = new File( pwd() + "\\output\\installer\\${profile}")
if(!directory.exists())
{
echo " directory does not exist";
bat("mkdir output\\installer\\${profile}");
}
else
{
echo " directory exists";
}
def filename2= filename + "_" + profile + "_" + buildRevision + "_client.zip";
bat("${nsiTool} /DAPP_VERSION=${buildRevision} /DDEST_FOLDER=\"${slnPath}\\output\\installer\\${profile}\" /DTARGET=\"${profile}\" /DSOURCE_FILES=\"${slnPath}\\output\\client\\release\" \"${slnPath}\\PROJECT.WPF\\Setup\\setup.nsi\" ");
if (fileExists(filename2))
bat("del "+ filename2);
zip(zipFile:filename2, dir:"output\\installer\\" + profile);
archiveArtifacts artifacts: filename2
}
}
};
The series of questions is:
I've seen some scripts where everything is wrapped in a pipeline {} block: is this necessary, or does the Jenkins pipeline plugin add it?
I really dislike having all those definitions inside the node and then replicated below.
I don't see any parallelism in the Jenkins workflow view, even though I have 4 executors sitting idle.
I'm not able to call the post pipeline event to clean the workspace (right now it's just an echo).

There are 2 types of pipeline. Straight groovy like you have written is referred to as a scripted pipeline. The style that has the pipeline{} block around it is a declarative style pipeline. The declarative tends to be easier for newer Pipeline users and is a good choice for starting out with a pipeline. Many pipelines don't need the complexity that scripted allows.
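For reference, a minimal declarative skeleton looks roughly like this (the stage name and steps are just placeholders):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'build steps go here'
            }
        }
    }
    post {
        always {
            echo 'runs after every build, success or failure'
        }
    }
}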
This is groovy. If you want to declare a bunch of variables, you have to do it somewhere. Otherwise you hard-code those values in your script somewhere. In groovy, you don't HAVE to declare every variable, but you have to define it somewhere, and unless you know how the declaration is going to affect scope, you should just declare them. Most programming languages require some kind of variable declaration, especially when you have to worry about scope, so I don't see that this is a problem. I think it is very clean to define all of the variable values in one place at the top. Easier for maintenance.
At first glance, your parallel execution looks like it should work, but unless I set this up and ran it, it is hard to say. It could be that the parallel parts are running fast enough that the UI doesn't update. You should be able to see in the console output if these are running in parallel.
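To sanity-check the mechanism, here is a stripped-down sketch of the same collectEntries pattern you are using; if the branches interleave in the console output, parallelism itself is working (the profile names here are made up):
def testProfiles = ['alpha', 'beta', 'gamma']
def branches = testProfiles.collectEntries { p ->
    ["building ${p}" : {
        node {
            echo "running ${p}"
            sleep 5
        }
    }]
}
parallel branches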
The post pipeline block is not available in scripted pipeline. That is part of the declarative pipeline syntax. In scripted, to do similar things you have to use try/catch to catch errors and run post-type things.
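For example, a scripted equivalent of post { always { ... } } could look like this minimal sketch (cleanWs needs the Workspace Cleanup plugin; deleteDir() is a built-in alternative):
node {
    try {
        stage('Main build') {
            // ... your build steps ...
        }
        currentBuild.result = 'SUCCESS'
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        // this block always runs, like a declarative post { always { ... } }
        echo 'cleaning up the workspace'
        cleanWs()   // or: deleteDir()
    }
}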

Related

groovy current scope error in jenkins pipeline

I need your help, please.
I'm working on a Groovy script to list all SCM polling jobs.
The script works fine in the Jenkins script console, but when I integrate it into a Jenkinsfile and run it in a pipeline I get this error:
12:51:21 WorkflowScript: 10: The current scope already contains a variable of the name it
12:51:21 @ line 10, column 25.
12:51:21 def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec ) println ("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec )}
12:51:21 ^
12:51:21
12:51:21 1 error
12:51:21
12:51:21 at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
12:51:21 at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:958)
Here is the Jenkinsfile:
#!/usr/bin/env groovy
import hudson.triggers.*
import hudson.maven.MavenModuleSet
import org.jenkinsci.plugins.workflow.job.*
pipeline {
agent any
stages {
stage('list jobs with scm polling') {
steps {
def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec ) println ("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec )}
println("--- SCM Frequent Polling for Pipeline jobs ---")
Jenkins.getInstance().getAllItems(WorkflowJob.class).each() { logSpec(it, {it.getSCMTrigger()}) }
println("\n--- SCM Frequent Polling for FreeStyle jobs ---")
Jenkins.getInstance().getAllItems(FreeStyleProject.class).each() { logSpec(it, {it.getSCMTrigger()}) }
println("\n--- SCM Frequent Polling for Maven jobs ---");
Jenkins.getInstance().getAllItems(MavenModuleSet.class).each() { logSpec(it, {it.getTrigger(SCMTrigger.class)}) }
println("--- SCM Frequent Polling for Abstract jobs---")
Jenkins.getInstance().getAllItems(AbstractProject.class).each() { logSpec(it, {it.getTrigger(SCMTrigger.class)}) }
println '\nDone.'
}} }}
Can anyone help?
Thanks.
it is an implicit variable that is provided in closures when the closure doesn't have an explicitly declared parameter. So when you do declare a parameter, make sure it is not called it, to avoid conflicts with parent scopes that already define it (in your case the closure passed to .each()).
Also, to integrate a script section in a pipeline, either use the script step or define a function that you could call like a built-in step.
Lastly, .each() doesn't work well in pipeline code, due to the restrictions imposed by the CPS transformations applied by Jenkins to the pipeline code (unless tagged @NonCPS - which has other restrictions). So .each() should be replaced by a for loop.
pipeline {
agent any
stages {
stage('list jobs with scm polling') {
steps {
script {
def logSpec = { job, getTrigger -> String spec = getTrigger(job)?.getSpec(); if (spec ) println ("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec )}
println("--- SCM Frequent Polling for Pipeline jobs ---")
for( item in Jenkins.getInstance().getAllItems(WorkflowJob.class) ) {
logSpec( item, {item.getSCMTrigger()})
}
// ... other code ...
println '\nDone.'
}
}} }}
Variant with separate function:
pipeline {
agent any
stages {
stage('list jobs with scm polling') {
steps {
doStuff()
}} }}
void doStuff() {
def logSpec = { job, getTrigger -> String spec = getTrigger(job)?.getSpec(); if (spec ) println ("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec )}
println("--- SCM Frequent Polling for Pipeline jobs ---")
for( item in Jenkins.getInstance().getAllItems(WorkflowJob.class) ) {
logSpec( item, {item.getSCMTrigger()})
}
// ... other code ...
println '\nDone.'
}

Jenkins pipeline - "cannot invoke method on null object" on function outside the pipeline

I get the error above when trying to run my pipeline.
I tried to run it both inside and outside the Groovy sandbox.
I also tried debugging; it fails on this method call: last_build_number(lastBuildToGenerateNumber).
Before adding the try, catch and recursion this code was working well outside the pipeline. Don't get me wrong - this code cannot run inside the pipeline, so I did not try that.
/*
SEIIc DPS_NIGHTLY_BUILD JenkinsFile
*/
def buildDescription // for setting the build name, based on the downstream jobs name
def last_build_number(build) {
println 'the display name is' + build.displayName
return build.displayName
if (build != null) {
if(build.displayName!=null){
println 'the display name is' + build.displayName
return build.displayName
}
}
else {
return '0.0.0.0'
}
return '0.0.0.0'
}
def autoIncBuildNightlyNumber(build) {
def debugEnable = 1
println 'build is: ' + build.displayName
def lastBuildToGenerateNumber = build; //a build variable
def last_build_number; //last build number i.e: "52.88.0.7" or "#43"
build_number=0;
try{
println 'last build to genreate from ' + lastBuildToGenerateNumber.displayName
last_build_number = last_build_number(lastBuildToGenerateNumber);
if (debugEnable == 1) println 'last successfull build: ' + last_successfull_build
def tokens = last_build_number.tokenize('.')
if (debugEnable == 1) println 'tokens: ' + tokens
// update global variable - if it's not a legit number the crash will be catched
build_number = tokens[3].toInteger() + 1
if (debugEnable == 1) println 'new build number: ' + build_number
return build_number
} catch (e){
if (debugEnable == 1) println 'error is ' + e
if (debugEnable == 1) println 'build number: ' + build_number + ' is not valid. recurse now to find a valid number'
build_number = autoIncBuildNightlyNumber(lastBuildToGenerateNumber.getPreviousBuild());
println 'genrate ' + lastBuildToGenerateNumber
return build_number
}
}
// Declarative Pipeline
pipeline {
/*
maximum time for this job
*/
options { //maximum time for this job
timeout(time: 1, unit: 'HOURS')
}
environment {
AUTO_BUILD_NUMBER = autoIncBuildNightlyNumber(currentBuild.getPreviousBuild())
PLASTICSCM_TARGET_SERVER = "g-plasticscm-server.gilat.com:8087"
PLASTICSCM_TARGET_REPOSITORY = "SEIIc_DPS"
PLASTICSCM_WORKSPACE_NAME = "${env.JOB_BASE_NAME}_${env.BUILD_NUMBER}"
AUTOMATION_FOLDER = "${env.PLASTICSCM_WORKSPACE_NAME}\\Tools\\Automation"
Branch = "/main"
TEST_BRANCH = "/QualiTest for SW Automation"
QUALITEST_FOLDER = "${env.PLASTICSCM_WORKSPACE_NAME}\\QualiTest for SW Automation"
PLASTICSCM_TEST_REPOSITORY="SW_Utiles"
PLASTICSCM_TEST_WORKSPACE = "TEST_${env.JOB_BASE_NAME}_${env.BUILD_NUMBER}"
}
// Select target host for building this pipeline
agent { node { label "SEIIc_DPS" } }
// Stages to run for this pipeline
stages {
/*
Checkout files from source control. In this case the pipeline use PlasticSCM plugin to checkout a branch with given parameter "Branch".
When this stage run, it will checkout the branch in the parameter string from the defined repository and server.
It will not
*/
stage('SCM Checkout') {
steps {
cm branch: env.Branch, changelog: true, repository: env.PLASTICSCM_TARGET_REPOSITORY, server: env.PLASTICSCM_TARGET_SERVER, useUpdate: false, workspaceName: env.PLASTICSCM_WORKSPACE_NAME
//checkOut QualiTest
cm branch: env.TEST_BRANCH, changelog: false, repository: 'SW_Utiles', server: env.PLASTICSCM_TARGET_SERVER, useUpdate: false, workspaceName: env.PLASTICSCM_TEST_WORKSPACE
}
}
}//stages
}//pipeline

Jenkins pipeline leaves conan.tmpxxxxx dirs under workspace

Background: we are just starting to use Conan and want to integrate it with Jenkins pipeline builds, which are also new to us.
I have a simple pipeline job that iterates over a YAML file to discover the components used in a product. It then calls another pipeline, UploadRecipe, that downloads the components' source, finds the recipes and uploads them to the relevant repo in Artifactory.
But it leaves behind a whole bunch of conan.tmp dirs in workspace/UploadRecipe@tmp:
$ pwd
/jenkins_conan/workspace/UploadRecipe@tmp
$ ls -1
conan.tmp1453946246097996081
conan.tmp2037444640117259875
conan.tmp3926464088111486375
conan.tmp7293377119892400567
conan.tmp868991149159211380
The pipeline didn't fail, but they never get cleaned up. The same thing happens in other Conan-related pipelines we use to generate large ISO files that consume GBs, but the UploadRecipe example is much simpler to explain and shows the same behaviour.
Is there something wrong in my pipeline Groovy script?
I.e. is there some command I should have called to tidy up?
properties([parameters([string(description: 'Name/Version', name: 'name_version', defaultValue: 'base/1.0.2'),
string(description: 'User/Channel', name: 'user_channel', defaultValue: 'release/stable'),
string(description: 'SVN repository branch', name: 'svn_repo_branch', defaultValue: 'tags/CONAN_REL_1.0.2'),
string(description: 'SVN repository url', name: 'svn_repo_url', defaultValue: 'svn+ssh://$USER@svnserver/svncmake/base/'),
string(description: 'Artifactory', name: 'artifactory', defaultValue: 'my-artifactory'),
string(description: 'Upload repo', name: 'uploadRepo', defaultValue: 'stable-release')
])])
node('buildserver') {
withEnv(['PATH+LOCAL_BIN=/xxxxx/release/.virtualenvs/jfrog/bin']) {
currentBuild.displayName = params.name_version + "@" + params.user_channel
def server
def client
def uploadRepo
def mysvncreds = 'creds-for-svn'
def SVN_repo_url
deleteDir()
stage("Configure/Get recipe"){
server = Artifactory.server params.artifactory
client = Artifactory.newConanClient()
uploadRepo = client.remote.add server: server, repo: params.uploadRepo
dir("_comp_repo"){
SVN_repo_url = params.svn_repo_url + params.svn_repo_branch
checkout([$class: 'SubversionSCM', locations: [[credentialsId: mysvncreds, depthOption: 'files', ignoreExternalsOption: true, local: '.', remote: SVN_repo_url ]]])
}
}
stage("Export recipe"){
dir("_comp_repo"){
myrecipes = ['conanfile.py', 'conanfile_policy.py', 'conanfile_rpm.py']
for(int i = 0; i < myrecipes.size(); i++)
{
def thisrecipe = myrecipes[i]
if (fileExists(thisrecipe)) {
mycommand = "export ./" + thisrecipe + " " + params.user_channel
client.run(command: mycommand )
} else {
echo thisrecipe
}
}
client.run(command: "search" )
}
}
stage("Upload recipe to Artifactory"){
def name_version = params.name_version
string myname = name_version.split("/")[0]
string myversion = name_version.split("/")[1]
String command = "upload ${myname}*/*@${params.user_channel} -r ${uploadRepo} --all --confirm --retry 3 --retry-wait 10"
client.run(command: command)
}
}
}
Do it in bash with find and remove:
find /jenkins_conan/workspace/UploadRecipe@tmp -type f -name 'conan.tmp*' -exec rm -v {} \;
Or try adding this to your pipeline:
node('yournode') {
// ...
stage('removing cache files') {
def ws = pwd()
def file = ws + '/conan.tmp*'
sh 'rm ' + file + ' -rf'
}
}
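If the leftovers sit in the sibling "@tmp" directory, as in the question, a hedged variant of the same idea could target that path explicitly (this assumes the @tmp directory lives next to the workspace, which is the Jenkins default):
node('buildserver') {
    // ... existing stages ...
    stage('cleanup conan temp dirs') {
        // the "<workspace>@tmp" directory is created by Jenkins next to the workspace
        sh "find '${env.WORKSPACE}@tmp' -maxdepth 1 -name 'conan.tmp*' -exec rm -rf {} +"
    }
}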

Jenkins pipeline - How to give choice parameters dynamically

pipeline {
agent any
stages {
stage("foo") {
steps {
script {
env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
parameters: [choice(name: 'RELEASE_SCOPE', choices: 'patch\nminor\nmajor',
description: 'What is the release scope?')]
}
echo "${env.RELEASE_SCOPE}"
}
}
}
}
In the above code, the choices are hardcoded (patch\nminor\nmajor). My requirement is to provide the choice values in the dropdown dynamically.
I get the values from an API call - artifact (.zip) file names from Artifactory.
In the above example, input is requested while the build is running, but I want to do a "Build with parameters".
Please suggest/help on this.
It depends on how you get the data from the API; there will be different options. For example, let's imagine you get the data as a List of Strings (let's call it releaseScope); in that case your code would be the following:
...
script {
def releaseScopeChoices = ''
releaseScope.each {
releaseScopeChoices += it + '\n'
}
parameters: [choice(name: 'RELEASE_SCOPE', choices: releaseScopeChoices, description: 'What is the release scope?')]
}
...
Hope it helps.
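Putting that together, a minimal self-contained sketch of the same idea (the list is hard-coded here; in practice it would come from your API call), following the same pattern used in the answers below:
def releaseScope = ['patch', 'minor', 'major'] // imagine this list came from your API call
def releaseScopeChoices = releaseScope.join('\n')

pipeline {
    agent any
    parameters {
        choice(name: 'RELEASE_SCOPE', choices: "${releaseScopeChoices}", description: 'What is the release scope?')
    }
    stages {
        stage('use the choice') {
            steps {
                echo "Selected scope: ${params.RELEASE_SCOPE}"
            }
        }
    }
}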
This is a cut-down version of what we use. We separate things into shared libraries, but I have consolidated it a bit to make it easier to follow.
The Jenkinsfile looks something like this:
#!groovy
@Library('shared') _
def imageList = pipelineChoices.artifactoryArtifactSearchList(repoName, env.BRANCH_NAME)
imageList.add(0, 'build')
properties([
buildDiscarder(logRotator(numToKeepStr: '20')),
parameters([
choice(name: 'ARTIFACT_NAME', choices: imageList.join('\n'), description: '')
])
])
The shared library that queries Artifactory is pretty simple.
Essentially it makes a GET request (providing auth credentials on it), then filters/splits the result to whittle it down to the desired values and returns the list to the Jenkinsfile.
import com.cloudbees.groovy.cps.NonCPS
import groovy.json.JsonSlurper
import java.util.regex.Pattern
import java.util.regex.Matcher
List artifactoryArtifactSearchList(String repoKey, String artifact_name, String artifact_archive, String branchName) {
// URL components
String baseUrl = "https://org.jfrog.io/org/api/search/artifact"
String url = baseUrl + "?name=${artifact_name}&repos=${repoKey}"
Object responseJson = getRequest(url)
String regexPattern = "(.+)${artifact_name}-(\\d+).(\\d+).(\\d+).${artifact_archive}\$"
Pattern regex = ~ regexPattern
List<String> outlist = responseJson.results.findAll({ it['uri'].matches(regex) })
List<String> artifactlist=[]
for (i in outlist) {
artifactlist.add(i['uri'].tokenize('/')[-1])
}
return artifactlist.reverse()
}
// Artifactory Get Request - Consume in other methods
Object getRequest(url_string){
URL url = url_string.toURL()
// Open connection
URLConnection connection = url.openConnection()
connection.setRequestProperty ("Authorization", basicAuthString())
// Open input stream
InputStream inputStream = connection.getInputStream()
@NonCPS
json_data = new groovy.json.JsonSlurper().parseText(inputStream.text)
// Close the stream
inputStream.close()
return json_data
}
// Artifactory Get Request - Consume in other methods
Object basicAuthString() {
// Retrieve password
String username = "artifactoryMachineUsername"
String credid = "artifactoryApiKey"
@NonCPS
credentials_store = jenkins.model.Jenkins.instance.getExtensionList(
'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
)
credentials_store[0].credentials.each { it ->
if (it instanceof org.jenkinsci.plugins.plaincredentials.StringCredentials) {
if (it.getId() == credid) {
apiKey = it.getSecret()
}
}
}
// Create authorization header format using Base64 encoding
String userpass = username + ":" + apiKey;
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes());
return basicAuth
}
I could achieve it without any plugin:
With Jenkins 2.249.2, using a declarative pipeline,
the following pattern prompts the user with a dynamic dropdown menu
(for them to choose a branch):
(the surrounding withCredentials block is optional, required only if your script and Jenkins configuration use credentials)
node {
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: 'user-credential-in-gitlab',
usernameVariable: 'GIT_USERNAME',
passwordVariable: 'GITLAB_ACCESS_TOKEN']]) {
BRANCH_NAMES = sh (script: 'git ls-remote -h https://${GIT_USERNAME}:${GITLAB_ACCESS_TOKEN}@dns.name/gitlab/PROJS/PROJ.git | sed \'s/\\(.*\\)\\/\\(.*\\)/\\2/\' ', returnStdout:true).trim()
}
}
pipeline {
agent any
parameters {
choice(
name: 'BranchName',
choices: "${BRANCH_NAMES}",
description: 'to refresh the list, go to configure, disable "this build has parameters", launch build (without parameters)to reload the list and stop it, then launch it again (with parameters)'
)
}
stages {
stage("Run Tests") {
steps {
sh "echo SUCCESS on ${BranchName}"
}
}
}
}
The drawback is that one has to refresh the Jenkins configuration and run a blank build for the list to be refreshed by the script...
Solution (not from me): this limitation can be made less annoying by using an additional parameter used specifically to refresh the values:
parameters {
booleanParam(name: 'REFRESH_BRANCHES', defaultValue: false, description: 'refresh BRANCH_NAMES branch list and launch no step')
}
then within the stage:
stage('a stage') {
when {
expression {
return ! params.REFRESH_BRANCHES.toBoolean()
}
}
...
}
This is my solution.
def envList
def dockerId
node {
envList = "defaultValue\n" + sh (script: 'kubectl get namespaces --no-headers -o custom-columns=":metadata.name"', returnStdout: true).trim()
}
pipeline {
agent any
parameters {
choice(choices: "${envList}", name: 'DEPLOYMENT_ENVIRONMENT', description: 'please choose the environment you want to deploy?')
booleanParam(name: 'SECURITY_SCAN',defaultValue: false, description: 'container vulnerability scan')
}
The example Jenkinsfile below contains an AWS CLI command to get the list of Docker images from AWS ECR dynamically, but it can be replaced with your own command. The Active Choices plugin is required.
Note! You need to approve the script specified in the parameters after the first run in "Manage Jenkins" -> "In-process Script Approval", or open the job configuration and save it to approve it
automatically (might require administrator permissions).
properties([
parameters([[
$class: 'ChoiceParameter',
choiceType: 'PT_SINGLE_SELECT',
name: 'image',
description: 'Docker image',
filterLength: 1,
filterable: false,
script: [
$class: 'GroovyScript',
fallbackScript: [classpath: [], sandbox: false, script: 'return ["none"]'],
script: [
classpath: [],
sandbox: false,
script: '''\
def repository = "frontend"
def aws_ecr_cmd = "aws ecr list-images" +
" --repository-name ${repository}" +
" --filter tagStatus=TAGGED" +
" --query imageIds[*].[imageTag]" +
" --region us-east-1 --output text"
def aws_ecr_out = aws_ecr_cmd.execute() | "sort -V".execute()
def images = aws_ecr_out.text.tokenize().reverse()
return images
'''.stripIndent()
]
]
]])
])
pipeline {
agent any
stages {
stage('First stage') {
steps {
sh 'echo "${image}"'
}
}
}
}
choiceArray = [ "patch" , "minor" , "major" ]
properties([
parameters([
choice(choices: choiceArray.join('\n'),
description: '',
name: 'SOME_CHOICE')
])
])

Jenkinsfile Declarative Pipeline defining dynamic env vars

I'm new to Jenkins pipeline; I'm defining a declarative pipeline and I don't know if I can solve my problem, because I haven't found a solution.
In this example, I need to pass a variable to the Ansible plugin (in the old version I used an ENV_VAR or injected it from a file with the inject plugin); that variable comes from a script.
This is my ideal scenario (but it doesn't work because of the environment{} block):
pipeline {
agent { node { label 'jenkins-node'}}
stages {
stage('Deploy') {
environment {
ANSIBLE_CONFIG = '${WORKSPACE}/chimera-ci/ansible/ansible.cfg'
VERSION = sh("python3.5 docker/get_version.py")
}
steps {
ansiblePlaybook credentialsId: 'example-credential', extras: '-e version=${VERSION}', inventory: 'development', playbook: 'deploy.yml'
}
}
}
}
I tried other ways to test how env vars work, based on other posts, for example:
pipeline {
agent { node { label 'jenkins-node'}}
stages {
stage('PREPARE VARS') {
steps {
script {
env['VERSION'] = sh(script: "python3.5 get_version.py")
}
echo env.VERSION
}
}
}
}
but "echo env.VERSION" returns null.
I also tried the same example with:
- VERSION=python3.5 get_version.py
- VERSION=python3.5 get_version.py > props.file (and tried to inject it, but didn't find out how)
If this is not possible I will do it in the Ansible role.
UPDATE
There is another "issue" in Ansible Plugin, to use vars in extra vars it must have double quotes instead of single.
ansiblePlaybook credentialsId: 'example-credential', extras: "-e version=${VERSION}", inventory: 'development', playbook: 'deploy.yml'
You can create variables before the pipeline block starts. You can have sh return stdout to assign to these variables. You don't have the same flexibility to assign to environment variables in the environment stanza. So substitute in python3.5 get_version.py where I have echo 0.0.1 in the script here (and make sure your python script just returns the version to stdout):
def awesomeVersion = 'UNKNOWN'
pipeline {
agent { label 'docker' }
stages {
stage('build') {
steps {
script {
awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1').trim()
}
}
}
stage('output_version') {
steps {
echo "awesomeVersion: ${awesomeVersion}"
}
}
}
}
The output of the above pipeline is:
awesomeVersion: 0.0.1
In Jenkins 2.76 I was able to simplify the solution from @burnettk to:
pipeline {
agent { label 'docker' }
environment {
awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1')
}
stages {
stage('output_version') {
steps {
echo "awesomeVersion: ${awesomeVersion}"
}
}
}
}
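One caveat worth keeping in mind: sh(returnStdout: true) keeps the trailing newline, so if the value is used to build file names or version tags it is safer to trim it, as in the earlier answer:
environment {
    awesomeVersion = sh(returnStdout: true, script: 'echo 0.0.1').trim()
}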
Using the "Pipeline Utility Steps" plugin, you can define general variables available to all stages from a properties file. For example, given a props.txt like:
version=1.0
fix=alfa
and mix scripted and declarative Jenkins pipeline like this:
def props
def VERSION
def FIX
def RELEASE
node {
props = readProperties file:'props.txt'
VERSION = props['version']
FIX = props['fix']
RELEASE = VERSION + "_" + FIX
}
pipeline {
agent any
stages {
stage('Build') {
steps {
echo "${RELEASE}"
}
}
}
}
A possible variation of the main answer is to provide the variables via another pipeline instead of an sh script.
Example (the variable-setting pipeline): my-set-env-variables
script
{
env.my_dev_version = "0.0.4-SNAPSHOT"
env.my_qa_version = "0.0.4-SNAPSHOT"
env.my_pp_version = "0.0.2"
env.my_prd_version = "0.0.2"
echo " My versions [DEV:${env.my_dev_version}] [QA:${env.my_qa_version}] [PP:${env.my_pp_version}] [PRD:${env.my_prd_version}]"
}
(use these variables) in another pipeline, my-set-env-variables-test:
script
{
env.dev_version = "NOT DEFINED DEV"
env.qa_version = "NOT DEFINED QA"
env.pp_version = "NOT DEFINED PP"
env.prd_version = "NOT DEFINED PRD"
}
stage('inject variables') {
echo "PRE DEV version = ${env.dev_version}"
script
{
// call set variable job
def variables = build job: 'my-set-env-variables'
def vars = variables.getBuildVariables()
//println "found variables" + vars
env.dev_version = vars.my_dev_version
env.qa_version = vars.my_qa_version
env.pp_version = vars.my_pp_version
env.prd_version = vars.my_prd_version
}
}
stage('next job') {
echo "NEXT JOB DEV version = ${env.dev_version}"
echo "NEXT JOB QA version = ${env.qa_version}"
echo "NEXT JOB PP version = ${env.pp_version}"
echo "NEXT JOB PRD version = ${env.prd_version}"
}
For those who want the environment keys to be dynamic, the following code can be used:
stage('Prepare Environment') {
steps {
script {
def data = [
"k1": "v1",
"k2": "v2",
]
data.each { key ,value ->
env."$key" = value
// env[key] = value // Deprecated; this can be used as well, but it needs approval on the In-process Script Approval page
}
}
}
}
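In a later stage the injected values are then available like any other environment variable, for example:
stage('Use Values') {
    steps {
        echo "k1 is ${env.k1}"
    }
}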
You can also dump all your vars into a file, and then use the '-e @file' syntax. This is very useful if you have many vars to populate.
steps {
echo "hello World!!"
sh """
echo "var1: ${params.var1}
var2: ${params.var2}
" > vars
"""
ansiblePlaybook inventory: _inventory, playbook: 'test-playbook.yml', sudoUser: null, extras: '-e @vars'
}
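A variant of the same idea using the built-in writeFile step instead of shelling out (the file name vars and the playbook settings are just carried over from the snippet above):
steps {
    writeFile file: 'vars', text: """var1: ${params.var1}
var2: ${params.var2}
"""
    ansiblePlaybook inventory: _inventory, playbook: 'test-playbook.yml', sudoUser: null, extras: '-e @vars'
}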
You can also use library functions in the environment section, like so:
@Library('mylibrary') _ // contains functions.groovy with several functions.
pipeline {
environment {
ENV_VAR = functions.myfunc()
}
…
}
