Jenkins pipeline leaves conan.tmpxxxxx dirs under workspace

Background: we are just starting to use Conan and want to integrate it with Jenkins pipeline builds, which are also new to us.
I have a simple pipeline job that iterates over a YAML file to discover the components used in a product. It then calls another pipeline, UploadRecipe, which downloads the component source, finds the recipes and uploads them to the relevant repo in Artifactory.
But it leaves behind a whole bunch of conan.tmp dirs in workspace/UploadRecipe@tmp:
$ pwd
/jenkins_conan/workspace/UploadRecipe@tmp
$ ls -1
conan.tmp1453946246097996081
conan.tmp2037444640117259875
conan.tmp3926464088111486375
conan.tmp7293377119892400567
conan.tmp868991149159211380
The pipeline didn't fail, but these dirs never get cleaned up. The same thing happens in other Conan-related pipelines we use to generate large ISO files that consume GBs, but the UploadRecipe example is much simpler to explain and shows the same behaviour.
Is there something wrong in my pipeline Groovy script?
I.e. is there some command I should have called to tidy up?
properties([parameters([string(description: 'Name/Version', name: 'name_version', defaultValue: 'base/1.0.2'),
string(description: 'User/Channel', name: 'user_channel', defaultValue: 'release/stable'),
string(description: 'SVN repository branch', name: 'svn_repo_branch', defaultValue: 'tags/CONAN_REL_1.0.2'),
string(description: 'SVN repository url', name: 'svn_repo_url', defaultValue: 'svn+ssh://$USER@svnserver/svncmake/base/'),
string(description: 'Artifactory', name: 'artifactory', defaultValue: 'my-artifactory'),
string(description: 'Upload repo', name: 'uploadRepo', defaultValue: 'stable-release')
])])
node('buildserver') {
withEnv(['PATH+LOCAL_BIN=/xxxxx/release/.virtualenvs/jfrog/bin']) {
currentBuild.displayName = params.name_version + "@" + params.user_channel
def server
def client
def uploadRepo
def mysvncreds = 'creds-for-svn'
def SVN_repo_url
deleteDir()
stage("Configure/Get recipe"){
server = Artifactory.server params.artifactory
client = Artifactory.newConanClient()
uploadRepo = client.remote.add server: server, repo: params.uploadRepo
dir("_comp_repo"){
SVN_repo_url = params.svn_repo_url + params.svn_repo_branch
checkout([$class: 'SubversionSCM', locations: [[credentialsId: mysvncreds, depthOption: 'files', ignoreExternalsOption: true, local: '.', remote: SVN_repo_url ]]])
}
}
stage("Export recipe"){
dir("_comp_repo"){
myrecipes = ['conanfile.py', 'conanfile_policy.py', 'conanfile_rpm.py']
for(int i = 0; i < myrecipes.size(); i++)
{
def thisrecipe = myrecipes[i]
if (fileExists(thisrecipe)) {
mycommand = "export ./" + thisrecipe + " " + params.user_channel
client.run(command: mycommand )
} else {
echo thisrecipe
}
}
client.run(command: "search" )
}
}
stage("Upload recipe to Artifactory"){
def name_version = params.name_version
String myname = name_version.split("/")[0]
String myversion = name_version.split("/")[1]
String command = "upload ${myname}*/*#${params.user_channel} -r ${uploadRepo} --all --confirm --retry 3 --retry-wait 10"
client.run(command: command)
}
}
}

Do it in bash with find and remove (note the leftovers are directories, so -type d and rm -r are needed):
find /jenkins_conan/workspace/UploadRecipe@tmp -maxdepth 1 -type d -name 'conan.tmp*' -exec rm -rv {} +
Or try to add this to your pipeline:
node('yournode') {
// ...
stage('removing cache files') {
def ws = pwd()
def file = ws + '@tmp/conan.tmp*'
sh "rm -rf ${file}"
}
}
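Alternatively, since the temp dirs live in the sibling <job>@tmp directory that Jenkins creates next to the workspace, a cleanup stage based on the built-in dir()/deleteDir() steps should also work (a sketch, assuming that layout):
stage('removing conan temp dirs') {
    // WORKSPACE@tmp is the sibling directory Jenkins uses for temporary files
    dir("${env.WORKSPACE}@tmp") {
        deleteDir()
    }
}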

Related

ExtendedChoice parameter with Global Variable are not working

I'm in the process of passing a variable called JOB_BASE_NAME, from the Jenkins Global Variable Reference, to the Groovy script. I'm using the extendedChoice parameter with a Groovy script, which is responsible for listing container images from ECR for a specific repository. In my case Jenkins job names and ECR repository names are equivalent.
Ex:
Jenkins Job Name = http://jenkins.localhost/job/application-abc
ECR Repo name = abc/application-abc
I tried several things, but every time I ended up with an empty response from the container image listing.
Please help me figure out whether this works out of the box, or how I can implement it.
Thanks
Here is my code:
pipeline {
agent {
label 'centos7-slave'
}
stages {
stage('Re Tag RELEASE TAG AS UAT') {
environment {
BRANCH = "${params.GITHUB_BRANCH_TAG}"
}
input {
message 'Select tag'
ok 'Release!'
parameters {
extendedChoice(
bindings: '',
groovyClasspath: '',
multiSelectDelimiter: ',',
name: 'DOCKER_RELEASE_TAG',
quoteValue: false,
saveJSONParameterToFile: false,
type: 'PT_SINGLE_SELECT',
visibleItemCount: 5,
groovyScript: '''
import groovy.json.JsonSlurper
def AWS_ECR = ("/usr/local/bin/aws ecr list-images --repository-name abc/${JOB_BASE_NAME} --filter tagStatus=TAGGED --region ap-southeast-1").execute()
def DATA = new JsonSlurper().parseText(AWS_ECR.text)
def ECR_IMAGES = []
DATA.imageIds.each {
if(("$it.imageTag".length()>3))
{
ECR_IMAGES.push("$it.imageTag")
}
}
return ECR_IMAGES.grep( ~/.*beta.*/ ).sort().reverse()
'''
)
}
}
steps {
script {
def DOCKER_TAG = sh(returnStdout: true, script:"""
#!/bin/bash
set -e
set -x
DOCKER_TAG_NUM=`echo $DOCKER_RELEASE_TAG | cut -d "-" -f1`
echo \$DOCKER_TAG_NUM
""")
DOCKER_TAG = DOCKER_TAG.trim()
DOCKER_TAG_NUM = DOCKER_TAG
}
sh "echo ${AWS_ECR} | docker login --username AWS --password-stdin ${ECR}"
sh "docker pull ${ECR}/${REPOSITORY}:${DOCKER_RELEASE_TAG}"
sh " docker tag ${ECR}/${REPOSITORY}:${DOCKER_RELEASE_TAG} ${ECR}/${REPOSITORY}:${DOCKER_TAG_NUM}-rc"
sh "docker push ${ECR}/${REPOSITORY}:${DOCKER_TAG_NUM}-rc"
}
}
}
}
You can leverage Groovy string interpolation to put the job base name into the script for the parameter, but the parameter script can't access any variable outside its own scope.
You can try the following:
Use a function to compose the Groovy script for the parameter.
The function accepts the JOB_BASE_NAME value.
Use Groovy string interpolation to substitute the real value.
pipeline {
agent {
label 'centos7-slave'
}
stages {
stage('Re Tag RELEASE TAG AS UAT') {
environment {
BRANCH = "${params.GITHUB_BRANCH_TAG}"
}
input {
message 'Select tag'
ok 'Release!'
parameters {
extendedChoice(
bindings: '',
groovyClasspath: '',
multiSelectDelimiter: ',',
name: 'DOCKER_RELEASE_TAG',
quoteValue: false,
saveJSONParameterToFile: false,
type: 'PT_SINGLE_SELECT',
visibleItemCount: 5,
groovyScript: list_ecr_images("${env.JOB_BASE_NAME}")
)
}
}
steps {
script {
def DOCKER_TAG = sh(returnStdout: true, script:"""
#!/bin/bash
set -e
set -x
DOCKER_TAG_NUM=`echo $DOCKER_RELEASE_TAG | cut -d "-" -f1`
echo \$DOCKER_TAG_NUM
""")
DOCKER_TAG = DOCKER_TAG.trim()
DOCKER_TAG_NUM = DOCKER_TAG
}
sh "echo ${AWS_ECR} | docker login --username AWS --password-stdin ${ECR}"
sh "docker pull ${ECR}/${REPOSITORY}:${DOCKER_RELEASE_TAG}"
sh " docker tag ${ECR}/${REPOSITORY}:${DOCKER_RELEASE_TAG} ${ECR}/${REPOSITORY}:${DOCKER_TAG_NUM}-rc"
sh "docker push ${ECR}/${REPOSITORY}:${DOCKER_TAG_NUM}-rc"
}
}
}
}
def list_ecr_images(jobBaseName) {
def _script = """
import groovy.json.JsonSlurper
// each CLI argument must be its own list element for execute() to pass it separately
def AWS_ECR = [
'/usr/local/bin/aws', 'ecr', 'list-images',
'--repository-name', "abc/${jobBaseName}",
'--filter', 'tagStatus=TAGGED',
'--region', 'ap-southeast-1'
].execute().text
def DATA = new JsonSlurper().parseText(AWS_ECR)
def ECR_IMAGES = []
DATA.imageIds.each {
if((it.imageTag.length()>3))
{
ECR_IMAGES.push(it.imageTag)
}
}
return ECR_IMAGES.grep( ~/.*beta.*/ ).sort().reverse()
"""
return _script.stripIndent()
}

Jenkins start build, then wait for choices input before continuing build

I have a Jenkins Pipeline contained in a script that dynamically builds dropdown menus for input parameters that get used later in subsequent steps. The first time I run the build it creates the dropdowns but doesn't display them to the user; it just continues on to the next step using the default inputs.
If I then run the build a second time, it displays the dropdowns and waits for the user to select a choice before continuing on to the next step. How do I get the dropdowns to display and wait on the first run as well?
Here is a pic of the dropdowns the second time I run the build:
Here is my Pipeline script:
pipeline {
agent any
stages {
stage ('Build Parameters') {
steps {
script {
properties([
parameters([
choice(
choices: [ 'MyEsxiServer.esxi' ],
description: 'On which ESXi Server should this script run?',
name: 'esxiServer'
),
choice(
choices: [ 'dev','stage','prod' ],
description: 'In which environment should this script run?',
name: 'environment'
),
[$class: 'ChoiceParameter',
choiceType: 'PT_SINGLE_SELECT',
description: 'On which runner should this script run?',
filterLength: 1,
filterable: false,
name: 'runner',
randomName: 'choice-parameter-596645940283131',
script: [$class: 'GroovyScript',
script: [classpath: [], sandbox: false,
script:
'''// Build choices for runner drop down
import groovy.json.JsonSlurper
log = new File('/tmp/groovy.log')
// execute the OS cmd and return the results
def sout = new StringBuilder()
def serr = new StringBuilder() //standard out and error strings
//Assemble command
def cmd = "/home/jenkins/bpb/testautomation/ansible/playbooks/listVMs.sh"
log.append("\\ncmd: " + cmd + "\\n") //debug
//Execute OS command
def proc = cmd.execute()
proc.waitForProcessOutput(sout, serr) //Wait for command to complete
proc.waitForOrKill(10000) //Set timeout
log.append("sout: " + sout + "\\n") //debug
log.append("serr: " + serr + "\\n") //debug
// translate JSON to List
def soutList = new JsonSlurper().parseText(sout.toString())
log.append("soutList: " + soutList + "\\n") //debug
def List vmList = soutList.vms.sort()
log.append("vmList: " + vmList + "\\n") //debug
return vmList
'''
]
]
],
choice(
choices: [ 'CentOS51','Comodore64','BeOS' ],
description: 'What is the target Operating System?',
name: 'targetOS'
)
]) //parameters
]) //properties
} //script
} //steps
} //stage
stage ('Execute Ansible') {
steps {
ansiblePlaybook credentialsId: 'b7a24821-8dc3-40d0-8cee-ef284e07393a',
disableHostKeyChecking: true,
colorized: true,
installation: 'Ansible',
inventory: "testautomation/ansible/${params.environment}/${params.environment}.inv",
playbook: 'testautomation/ansible/playbooks/run_cmd_inside_vm.yml',
extras: "-v -e runner=${params.runner} -e shell_cmd=/home/kcason/Desktop/${params.targetOS}/3-run.sh"
} //steps
} //stage
} // stages
} // pipeline
Sadly, not at all. You can start a job with or without parameters (depending on how it is configured). The properties call changes this configuration, but only after the job has started (without parameters, or with the old parameters). Jenkins cannot do it earlier, since the properties call could be anywhere in the script and could depend on steps before it.
This article https://dev.to/pencillr/jenkins-pipelines-and-their-dirty-secrets-1 discusses this topic in more detail.
Also see the open issue in the jenkins issue tracker https://issues.jenkins-ci.org/plugins/servlet/mobile#issue/JENKINS-41929
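A common mitigation (a sketch, not taken from the linked article) is to detect, inside the same script block, that the parameters do not exist yet on the first run and stop the build early, so the next run shows the dropdowns:
script {
    properties([ /* parameter definitions as in the question */ ])
    // guard: on the very first run the new parameters are not defined yet
    if (!params.containsKey('runner')) {
        error('Parameter definitions were just refreshed - start the build again to see the dropdowns')
    }
}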

Is there a way to use a pipeline env var within the extendedChoice parameter?

I am using the extendedChoice plugin for my Jenkins pipeline. It fetches the S3 objects from the bucket and provides the list of values using a short Groovy script. The issue is that I need to parametrize the S3 bucket by using the corresponding variable defined within the pipeline's environment section. How can I do this?
I tried a lot of different snippets to get the env vars, but with no result.
import jenkins.model.*
// This will print out the requested var from the global Jenkins config.
def envVars = Jenkins.instance.getGlobalNodeProperties()[0].getEnvVars()
return envVars['S3_BUCKET']
// This will print out values from the env vars of the node itself where the Jenkins is running.
def env = System.getenv('S3_BUCKET')
return env
// This is what I have now
def domainsList = "aws s3api list-objects-v2 --bucket someRandomBucket --output text --delimiter /".execute() | 'cut -d / -f 1'.execute() | 'sed 1d'.execute()
domainsList.waitFor()
def output = domainsList.in.text
return output.split('COMMONPREFIXES')
// This is the Jenkinsfile
pipeline {
agent any
environment {
DOMAIN_NAME = "${params.DOMAIN_NAME}"
MODEL_VERSION = "${params.MODEL_VERSION}"
S3_BUCKET = "someRandomBucket"
}
parameters {
extendedChoice(
bindings: '',
defaultValue: '',
description: '',
descriptionPropertyValue: '',
groovyClasspath: '',
groovyScript: '''
def domainsList = "aws s3api list-objects-v2 --bucket someRandomBucket --output text --delimiter /".execute() | 'cut -d / -f 1'.execute() | 'sed 1d'.execute()
domainsList.waitFor()
def output = domainsList.in.text
return output.split('COMMONPREFIXES')
''',
multiSelectDelimiter: ',',
name: 'DOMAIN_NAME',
quoteValue: false,
saveJSONParameterToFile: false,
type: 'PT_SINGLE_SELECT',
visibleItemCount: 10)
choice(
choices: ['a', 'b'],
description: 'Select a model version for processing',
name: 'MODEL_VERSION')
}
stages {
stage('Clean workdir') {
steps {
cleanWs()
}
}
stage('build') {
steps {
sh "echo $S3_BUCKET"
sh "echo $DOMAIN_NAME"
sh "echo $MODEL_VERSION"
}
}
}
}
As I mentioned above, I need to substitute the hardcoded someRandomBucket with the S3_BUCKET env var value in the Groovy script within the extendedChoice parameter.
RESOLVED - Environment variables can be injected specifically for the parameter via the Jenkins job UI.
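An alternative that keeps everything in the Jenkinsfile is the same composition trick used in the JOB_BASE_NAME answer above: build the groovyScript string before it is handed to the parameter, so the bucket name is interpolated up front. A sketch, assuming a scripted properties() call is acceptable and the aws CLI is available where the parameter script runs:
def s3Bucket = 'someRandomBucket'   // hypothetical: could come from a config file or a global

def domainsScript = """
    def domainsList = ['aws', 's3api', 'list-objects-v2',
                       '--bucket', '${s3Bucket}',
                       '--output', 'text',
                       '--delimiter', '/'].execute()
    domainsList.waitFor()
    return domainsList.in.text.split('COMMONPREFIXES')
""".stripIndent()

properties([
    parameters([
        extendedChoice(
            name: 'DOMAIN_NAME',
            type: 'PT_SINGLE_SELECT',
            visibleItemCount: 10,
            multiSelectDelimiter: ',',
            quoteValue: false,
            saveJSONParameterToFile: false,
            groovyClasspath: '',
            bindings: '',
            groovyScript: domainsScript)
    ])
])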

Jenkins pipeline - How to give choice parameters dynamically

pipeline {
agent any
stages {
stage("foo") {
steps {
script {
env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
parameters: [choice(name: 'RELEASE_SCOPE', choices: 'patch\nminor\nmajor',
description: 'What is the release scope?')]
}
echo "${env.RELEASE_SCOPE}"
}
}
}
}
In the above code, the choices are hardcoded (patch\nminor\nmajor) -- my requirement is to supply the choice values in the dropdown dynamically.
I get the values from calling an API - a list of artifact (.zip) file names from Artifactory.
In the above example, it requests input while the build is running, but I want to do a "Build with parameters".
Please suggest/help on this.
Depending on how you get the data from the API there will be different options. For example, let's imagine that you get the data as a List of Strings (let's call it releaseScope); in that case your code would be the following:
...
script {
def releaseScopeChoices = ''
releaseScope.each {
releaseScopeChoices += it + '\n'
}
parameters: [choice(name: 'RELEASE_SCOPE', choices: releaseScopeChoices, description: 'What is the release scope?')]
}
...
Hope it will help.
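For completeness, here is how that fragment might be wired into the input step from the question (a sketch, assuming releaseScope is a List of Strings already fetched from your API):
script {
    // assumed: releaseScope is a List<String> returned by your API call
    def releaseScopeChoices = releaseScope.join('\n')
    env.RELEASE_SCOPE = input message: 'User input required', ok: 'Release!',
        parameters: [choice(name: 'RELEASE_SCOPE', choices: releaseScopeChoices,
                            description: 'What is the release scope?')]
}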
This is a cutdown version of what we use. We separate stuff into shared libraries but I have consolidated a bit to make it easier.
Jenkinsfile looks something like this:
#!groovy
@Library('shared') _
def imageList = pipelineChoices.artifactoryArtifactSearchList(repoName, env.BRANCH_NAME)
imageList.add(0, 'build')
properties([
buildDiscarder(logRotator(numToKeepStr: '20')),
parameters([
choice(name: 'ARTIFACT_NAME', choices: imageList.join('\n'), description: '')
])
])
Shared library function that looks at Artifactory; it's pretty simple.
Essentially, make a GET request (and provide auth creds on it), then filter/split the result to whittle it down to the desired values and return the list to the Jenkinsfile.
import com.cloudbees.groovy.cps.NonCPS
import groovy.json.JsonSlurper
import java.util.regex.Pattern
import java.util.regex.Matcher
List artifactoryArtifactSearchList(String repoKey, String artifact_name, String artifact_archive, String branchName) {
// URL components
String baseUrl = "https://org.jfrog.io/org/api/search/artifact"
String url = baseUrl + "?name=${artifact_name}&repos=${repoKey}"
Object responseJson = getRequest(url)
String regexPattern = "(.+)${artifact_name}-(\\d+).(\\d+).(\\d+).${artifact_archive}\$"
Pattern regex = ~ regexPattern
List<String> outlist = responseJson.results.findAll({ it['uri'].matches(regex) })
List<String> artifactlist=[]
for (i in outlist) {
artifactlist.add(i['uri'].tokenize('/')[-1])
}
return artifactlist.reverse()
}
// Artifactory Get Request - Consume in other methods
@NonCPS
Object getRequest(url_string){
URL url = url_string.toURL()
// Open connection
URLConnection connection = url.openConnection()
connection.setRequestProperty ("Authorization", basicAuthString())
// Open input stream
InputStream inputStream = connection.getInputStream()
def json_data = new groovy.json.JsonSlurper().parseText(inputStream.text)
// Close the stream
inputStream.close()
return json_data
}
// Artifactory Get Request - Consume in other methods
@NonCPS
Object basicAuthString() {
// Retrieve password
String username = "artifactoryMachineUsername"
String credid = "artifactoryApiKey"
def credentials_store = jenkins.model.Jenkins.instance.getExtensionList(
'com.cloudbees.plugins.credentials.SystemCredentialsProvider'
)
credentials_store[0].credentials.each { it ->
if (it instanceof org.jenkinsci.plugins.plaincredentials.StringCredentials) {
if (it.getId() == credid) {
apiKey = it.getSecret()
}
}
}
// Create authorization header format using Base64 encoding
String userpass = username + ":" + apiKey;
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes());
return basicAuth
}
I could achieve it without any plugin.
With Jenkins 2.249.2, using a declarative pipeline, the following pattern prompts the user with a dynamic dropdown menu (for them to choose a branch).
(The surrounding withCredentials block is optional, required only if your script and Jenkins configuration do use credentials.)
node {
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: 'user-credential-in-gitlab',
usernameVariable: 'GIT_USERNAME',
passwordVariable: 'GITLAB_ACCESS_TOKEN']]) {
BRANCH_NAMES = sh (script: 'git ls-remote -h https://${GIT_USERNAME}:${GITLAB_ACCESS_TOKEN}@dns.name/gitlab/PROJS/PROJ.git | sed \'s/\\(.*\\)\\/\\(.*\\)/\\2/\' ', returnStdout:true).trim()
}
}
pipeline {
agent any
parameters {
choice(
name: 'BranchName',
choices: "${BRANCH_NAMES}",
description: 'to refresh the list, go to configure, disable "this build has parameters", launch build (without parameters)to reload the list and stop it, then launch it again (with parameters)'
)
}
stages {
stage("Run Tests") {
steps {
sh "echo SUCCESS on ${BranchName}"
}
}
}
}
The drawback is that one should refresh the Jenkins configuration and use a blank run for the list to be refreshed by the script.
Solution (not from me): this limitation can be made less annoying by using an additional parameter that is used specifically to refresh the values:
parameters {
booleanParam(name: 'REFRESH_BRANCHES', defaultValue: false, description: 'refresh BRANCH_NAMES branch list and launch no step')
}
Then within the stage:
stage('a stage') {
when {
expression {
return ! params.REFRESH_BRANCHES.toBoolean()
}
}
...
}
This is my solution:
def envList
def dockerId
node {
envList = "defaultValue\n" + sh (script: 'kubectl get namespaces --no-headers -o custom-columns=":metadata.name"', returnStdout: true).trim()
}
pipeline {
agent any
parameters {
choice(choices: "${envList}", name: 'DEPLOYMENT_ENVIRONMENT', description: 'please choose the environment you want to deploy?')
booleanParam(name: 'SECURITY_SCAN',defaultValue: false, description: 'container vulnerability scan')
}
The example Jenkinsfile below contains an AWS CLI command to get the list of Docker images from AWS ECR dynamically, but it can be replaced with your own command. The Active Choices plug-in is required.
Note! You need to approve the script specified in the parameters after the first run in "Manage Jenkins" -> "In-process Script Approval", or open the job configuration and save it to approve it automatically (might require administrator permissions).
properties([
parameters([[
$class: 'ChoiceParameter',
choiceType: 'PT_SINGLE_SELECT',
name: 'image',
description: 'Docker image',
filterLength: 1,
filterable: false,
script: [
$class: 'GroovyScript',
fallbackScript: [classpath: [], sandbox: false, script: 'return ["none"]'],
script: [
classpath: [],
sandbox: false,
script: '''\
def repository = "frontend"
def aws_ecr_cmd = "aws ecr list-images" +
" --repository-name ${repository}" +
" --filter tagStatus=TAGGED" +
" --query imageIds[*].[imageTag]" +
" --region us-east-1 --output text"
def aws_ecr_out = aws_ecr_cmd.execute() | "sort -V".execute()
def images = aws_ecr_out.text.tokenize().reverse()
return images
'''.stripIndent()
]
]
]])
])
pipeline {
agent any
stages {
stage('First stage') {
steps {
sh 'echo "${image}"'
}
}
}
}
choiceArray = [ "patch" , "minor" , "major" ]
properties([
parameters([
choice(choices: choiceArray.join('\n'),
description: '',
name: 'SOME_CHOICE')
])
])

Correct way to structure a jenkins groovy pipeline script

I wrote a pipeline that works with Jenkins, but as a newbie to Jenkins scripting there are a lot of things that are not clear to me. Here's the whole script; I'll list the issues below.
SCRIPT:
node()
{
def libName = "PROJECT"
def slnPath = pwd();
def slnName = "${slnPath}\\${libName}.sln"
def webProject = "${slnPath}\\PROJECT.Web\\PROJECT.Web.csproj"
def profile = getProperty("profiles");
def version = getProperty("Version");
def deployFolder = "${slnPath}Deploy";
def buildRevision = "";
def msbHome = "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\MSBuild\\15.0\\Bin\\msbuild.exe"
def msdHome = "C:\\Program Files (x86)\\IIS\\Microsoft Web Deploy V3\\msdeploy.exe"
def nuget = "F:\\NugetBin\\nuget.exe";
def assemblyScript = "F:\\Build\\Tools\\AssemblyInfoUpdatePowershellScript\\SetAssemblyVersion.ps1";
def webserverName ="192.168.0.116";
def buildName = "PROJECT";
def filenameBase ="PROJECT";
stage('SCM update')
{
checkout([$class: 'SubversionSCM', additionalCredentials: [], excludedCommitMessages: '', excludedRegions: '', excludedRevprop: '', excludedUsers: '', filterChangelog: false, ignoreDirPropChanges: false, includedRegions: '', locations: [[credentialsId: '08ae9e8c-8db8-43e1-b081-eb352eb14d11', depthOption: 'infinity', ignoreExternalsOption: true, local: '.', remote: 'http://someurl:18080/svn/Prod/Projects/PROJECT/PROJECT/trunk']], workspaceUpdater: [$class: 'UpdateWithRevertUpdater']])
}
stage('SCM Revision')
{
bat("svn upgrade");
bat("svn info \"${slnPath}\" >revision.txt");
for (String i : readFile('revision.txt').split("\r?\n"))
{
if(i.contains("Last Changed Rev: "))
{
def splitted = i.split(": ")
echo "Revisione : "+ splitted[1];
buildName += "." + splitted[1];
currentBuild.displayName = buildName;
buildRevision += version + "." + splitted[1];
}
}
}
stage("AssemblyInfo update")
{
powerShell("${assemblyScript} ${buildRevision} -path .")
}
stage('Nuget restore')
{
bat("${nuget} restore \"${slnName}\"")
}
stage('Main build')
{
bat("\"${msbHome}\" \"${slnName}\" /p:Configuration=Release /p:PublishProfile=Release /p:DeployOnBuild=true /p:Profile=Release ");
stash includes: 'Deploy/Web/**', name : 'web_artifact'
stash includes: 'PROJECT.Web/Web.*', name : 'web_config_files'
stash includes: 'output/client/release/**', name : 'client_artifact'
stash includes: 'PROJECT.WPF/App.*', name : 'client_config_files'
stash includes: 'PROJECT.WPF/Setup//**', name : 'client_setup'
}
stage('Profile\'s customizations')
{
if (profile != "")
{
def buildProfile = profile.split(',');
def stepsForParallel = buildProfile.collectEntries {
["echoing ${it}" : performTransformation(it,filenameBase,buildRevision)]
}
parallel stepsForParallel;
}
}
post
{
always
{
echo "mimmo";
}
}
}
def powerShell(psCmd) {
bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';[Console]::OutputEncoding=[System.Text.Encoding]::UTF8;$psCmd;EXIT \$global:LastExitCode\""
}
def performTransformation(profile,filename,buildRevision) {
return {
node {
def ctt ="F:\\Build\\Tools\\ConfigTransformationTool\\ctt.exe";
def nsiTool = "F:\\Build\\Tools\\NSIS\\makensis.exe";
def slnPath = pwd();
unstash 'web_artifact'
unstash 'web_config_files'
def source = 'Deploy/Web/Web.config';
def transform = 'PROJECT.Web\\web.' + profile + '.config';
bat("\"${ctt}\" i s:\"${source}\" t:\"${transform}\" d:\"${source}\"" )
def fname= filename + "_" + profile + "_" + buildRevision + "_web.zip";
if (fileExists(fname))
bat("del "+ fname);
zip(zipFile:fname, dir:"Deploy\\Web")
archiveArtifacts artifacts: fname
//Now I generate the client part
unstash 'client_artifact'
unstash 'client_config_files'
unstash 'client_setup'
def sourceClient = 'output/client/release/PROJECT.WPF.exe.config';
def transformClient = 'PROJECT.WPF/App.' + profile + '.config';
bat("\"${ctt}\" i s:\"${sourceClient}\" t:\"${transformClient}\" d:\"${sourceClient}\"" )
def directory = new File(pwd() + "\\output\\installer\\")
if(!directory.exists())
{
bat("mkdir output\\installer");
}
directory = new File( pwd() + "\\output\\installer\\${profile}")
if(!directory.exists())
{
echo " directory does not exist";
bat("mkdir output\\installer\\${profile}");
}
else
{
echo " directory exists";
}
def filename2= filename + "_" + profile + "_" + buildRevision + "_client.zip";
bat("${nsiTool} /DAPP_VERSION=${buildRevision} /DDEST_FOLDER=\"${slnPath}\\output\\installer\\${profile}\" /DTARGET=\"${profile}\" /DSOURCE_FILES=\"${slnPath}\\output\\client\\release\" \"${slnPath}\\PROJECT.WPF\\Setup\\setup.nsi\" ");
if (fileExists(filename2))
bat("del "+ filename2);
zip(zipFile:filename2, dir:"output\\installer\\" + profile);
archiveArtifacts artifacts: filename2
}
}
};
The series of questions is:
I've seen some scripts where everything is wrapped in a pipeline {}; is this necessary, or does the Jenkins pipeline plugin add it?
I really dislike having all those definitions inside the node and then replicated below.
I don't see the parallelism inside the Jenkins workflow, even though I have 4 executors idle.
I'm not able to call the post pipeline event to clear the workspace (right now it's just an echo).
There are 2 types of pipeline. Straight groovy like you have written is referred to as a scripted pipeline. The style that has the pipeline{} block around it is a declarative style pipeline. The declarative tends to be easier for newer Pipeline users and is a good choice for starting out with a pipeline. Many pipelines don't need the complexity that scripted allows.
This is groovy. If you want to declare a bunch of variables, you have to do it somewhere. Otherwise you hard-code those values in your script somewhere. In groovy, you don't HAVE to declare every variable, but you have to define it somewhere, and unless you know how the declaration is going to affect scope, you should just declare them. Most programming languages require some kind of variable declaration, especially when you have to worry about scope, so I don't see that this is a problem. I think it is very clean to define all of the variable values in one place at the top. Easier for maintenance.
At first glance, your parallel execution looks like it should work, but unless I set this up and ran it, it is hard to say. It could be that the parallel parts are running fast enough that the UI doesn't update. You should be able to see in the console output if these are running in parallel.
The post pipeline block is not available in scripted pipeline. That is part of the declarative pipeline syntax. In scripted, to do similar things you have to use try/catch to catch errors and run post-type things.
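A rough scripted equivalent of post { always { ... } } (a sketch, assuming a deleteDir() cleanup is what you want) looks like this:
node() {
    try {
        stage('Main build') {
            // build steps go here
            echo 'building'
        }
    } catch (err) {
        // mark the build as failed, then rethrow so Jenkins still records the error
        currentBuild.result = 'FAILURE'
        throw err
    } finally {
        // runs whether the build succeeded or failed, like post { always { ... } }
        echo 'cleaning workspace'
        deleteDir()
    }
}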
