How to change the # and #tmp directory creation in the Jenkins workspace

Every time my build runs, I get 2 or 3 folders ending with # and #tmp.
Example: if my build name is test, when I run the build it fetches some code from Git and stores it in the Jenkins workspace under the names test, test#2, test#2tmp, and test#tmp. But the original folder is test. I only want the test folder and need to remove the other folders. How can I do this?
My present working directory is automatically chosen as test#2.
I want the default pwd to be /var/lib/jenkins/workspace/
I want to delete the #2 and #tmp directories and change my working directory back after the build runs.
Sample output is:
pwd
/var/lib/jenkins/workspace/test#2

You can use customWorkspace in your Jenkins pipelines:
Example:
agent {
    node {
        label 'my-label'
        customWorkspace '/my/path/to/workspace'
    }
}
Note that Jenkins uses different directories to support concurrent builds:
If concurrent builds ask for the same workspace, a directory with a suffix such as #2 may be locked instead.
Since you don't want this behaviour, I advise you to use the disableConcurrentBuilds option:
Disallow concurrent executions of the Pipeline. Can be useful for preventing simultaneous accesses to shared resources, etc. For example: options { disableConcurrentBuilds() }
References on customWorkspace and disableConcurrentBuilds: https://jenkins.io/doc/book/pipeline/syntax/
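A minimal sketch combining both options, assuming a node labelled my-label and the fixed workspace path from the question:
pipeline {
    agent {
        node {
            label 'my-label'
            // Pin the workspace so Jenkins never picks a #2/#tmp sibling directory
            customWorkspace '/var/lib/jenkins/workspace/test'
        }
    }
    options {
        // Prevent concurrent runs, which are what trigger the suffixed workspaces
        disableConcurrentBuilds()
    }
    stages {
        stage('build') {
            steps {
                sh 'pwd' // should print /var/lib/jenkins/workspace/test
            }
        }
    }
}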

Look for the agent { } block in your pipeline Groovy script. It should appear only at the pipeline block level, not inside any stage block.

Related

Jenkins workspace folder cut in multibranch pipeline

I noticed that in multibranch pipelines the workspace folder name is cut.
For example a project named:
Sample09-Netbeans-MultiBranch-Pipeline-Maven-Svn
that comes from a subversion repository like
https://my-favourite-repo/svn/ProjectsJava/DevOps/Jenkins/Test/test-jenkins-java-maven-multibranch/
with a project folder like
D:\ProjectsJava\DevOps\Jenkins\Test\test-jenkins-java-maven-multibranch\trunk\myproject
produce a workspace folder like this
D:\Jenkins.jenkins\workspace\peline-Maven-Svn_trunk_myproject
Other types of projects with similar names don't have this problem.
I found a workaround by adding:
- a node definition
- a customWorkspace
but when I use it, Maven doesn't see the settings.xml file and I must specify it directly in the Maven command by passing a Jenkins global property.
No other approach can provide it to the command (defining a Jenkins config file, defining it in the Jenkins Maven configuration, or in the project Maven configuration).
pipeline {
    agent {
        node {
            label 'my-node'
            customWorkspace "${JENKINS_HOME}/Workspace/${JOB_NAME}/${BUILD_NUMBER}"
        }
    }
    stages {
        stage('Build-And-Test') {
            steps {
                withMaven {
                    bat "mvn clean package test -B -s ${MAVEN_SETTINGS}"
                }
            }
        }
    }
}
Why does Jenkins cut the folder name only in multibranch pipelines?
Is there another way to define the workspace job folder name outside of the Jenkinsfile?
OR
Is there a way to let Maven see the settings.xml configured in one of the Jenkins configurations?
Most operating systems have an upper bound on the length of a file name and the length of a directory path. Jenkins pipeline jobs that use full length strings were encountering operating system path limitations (especially the 256 character default limit on Windows).
Pipeline job names were intentionally changed to shorter forms so that they would reduce the likelihood of encountering an operating system or file system limit on path length.
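As for letting Maven see a managed settings.xml: if the Pipeline Maven Integration plugin is installed, withMaven can reference a settings file managed by the Config File Provider plugin via its ID. A sketch, assuming a hypothetical config file ID my-maven-settings:
withMaven(mavenSettingsConfig: 'my-maven-settings') {
    // The plugin injects the managed settings.xml, so -s is no longer needed
    bat 'mvn clean package test -B'
}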

Bamboo: Override a runtime variable - build working directory?

I'd like to override the build working directory for my Bamboo plan.
I noticed that it's always something like <HOME>\<BUILD_PROJECT>-<BUILD-PLAN>-<JOB-KEY>
I'd like to override the build directory so that all the stages and jobs use the same one.
Current Setup:
STAGE 1
JOB 1: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-S101
STAGE 2
JOB 1: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-S201
JOB 2: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-S202
Desired Setup:
STAGE 1
JOB 1: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-FOO
STAGE 2
JOB 1: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-FOO
JOB 2: build dir = C:\data\bamboo\agent5_1\xml-data\build-dir\PROJ-PLAN-FOO
How can I achieve this?
I do not think you can or would want to use the same folder as it violates the multi-stage and concurrent job philosophy of Bamboo.
Multiple stages are separated by folder so that each build stage is isolated from the previous. If you want to share files between the stages you will want to use an artifact.
Multiple jobs are separated by folder so that they can be run concurrently. If the jobs were all in the same folder, this would not be possible due to permissions errors (especially on Windows). If you don't care about the jobs running concurrently the two jobs in the second stage could be combined.
Since you want to build in the same folder on the same system, it sounds like this pipeline could be simplified to one stage with one job.

Is there a way to run a pre-checkout step in declarative Jenkins pipelines?

Jenkins declarative pipelines offer a post directive to execute code after the stages have finished. Is there a similar way to run code before the stages run, and most importantly, before the SCM checkout?
For example something along the lines of:
pre {
    always {
        rm -rf ./*
    }
}
This would then clean the workspace of my build before the source code is checked out.
pre is a cool feature idea, but doesn't exist yet. skipDefaultCheckout and checkout scm (which is the same as the default checkout) are the keys:
pipeline {
    agent { label 'docker' }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('clean_workspace_and_checkout_source') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        stage('build') {
            steps {
                echo 'i build therefore i am'
            }
        }
    }
}
For the moment there are no pre-build steps, but for your purpose it can be done in the pipeline job configuration (and also in multibranch pipeline jobs): where you define the location of your Jenkinsfile, choose Additional Behaviours -> Wipe out repository & force clone.
Delete the contents of the workspace before building, ensuring a fully fresh workspace.
If you do not want to delete everything and would rather save some network usage, you can use this other option instead: Additional Behaviours -> Clean before checkout.
Clean up the workspace before every checkout by deleting all untracked files and directories, including those which are specified in .gitignore. It also resets all tracked files to their versioned state. This ensures that the workspace is in the same state as if you cloned and checked out in a brand-new empty directory, and ensures that your build is not affected by the files generated by the previous build.
This one will not delete the workspace but just resets the repository to its original state and pulls new changes if there are any.
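If you prefer to express the same behaviours directly in a Jenkinsfile rather than in the job configuration, the git plugin exposes them as checkout extensions. A sketch, assuming a hypothetical repository URL:
checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          userRemoteConfigs: [[url: 'https://example.com/my-repo.git']],
          // WipeWorkspace = Wipe out repository & force clone;
          // use [$class: 'CleanBeforeCheckout'] for the lighter clean
          extensions: [[$class: 'WipeWorkspace']]])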
I use "Prepare an environment for the run / Script Content"

How to manage multiple Jenkins pipelines from a single repository?

At this moment we use JJB to compile Jenkins jobs (mostly pipelines already) in order to configure about 700 jobs, but JJB2 seems not to scale well for building pipelines and I am looking for a way to drop it from the equation.
Mainly I would like to be able to have all these pipelines stored in a single centralized repository.
Please note that keeping the CI config (Jenkinsfile) inside each repository and branch is not possible in our use case, we need to keep all pipelines in a single "jenkins-jobs.git" repo.
As far as I know this is not possible yet, but in progress. See: https://issues.jenkins-ci.org/browse/JENKINS-43749
I think this is the purpose of Jenkins shared libraries.
I didn't develop such a library myself but I am using some. Basically:
- Develop the "shared code" of the Jenkins pipeline in a shared library
- It can contain the whole pipeline (sequence of steps)
- Add this library to the Jenkins server
- In each project, add a Jenkinsfile that imports it using @Library
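A minimal sketch of such a per-project Jenkinsfile, assuming a hypothetical library named jenkins-shared-lib that defines a global variable commonPipeline in its vars directory:
// Jenkinsfile kept tiny in each repository
@Library('jenkins-shared-lib') _
commonPipeline() // defined in vars/commonPipeline.groovy of the library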
As #Juh_ said, you can use Jenkins shared libraries. Here are the complete steps. Suppose that we have three branches:
master
develop
stage
and we want to create a single Jenkinsfile so that we can make changes in only one place. All you need is to create a new branch, e.g. common. This branch MUST have the shared library directory structure. What we are interested in for now is adding a new Groovy file in the vars directory, e.g. common.groovy. Here we can put the common Jenkinsfile logic that you wish to use across all branches.
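For reference, the expected layout of the common branch looks roughly like this (only the vars directory is needed for this example):
(root of the common branch)
└── vars/
    └── common.groovy    // exposes call(), invoked as common()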
Here is a sample:
def call() {
    node {
        stage("Install Stage from common file") {
            if (env.BRANCH_NAME.equals('master')) {
                echo "npm install from common files master branch"
            }
            else if (env.BRANCH_NAME.equals('develop')) {
                echo "npm install from common files develop branch"
            }
        }
        stage("Test") {
            echo "npm test from common files"
        }
    }
}
You must wrap your code in a call function so that it can be used from other branches. Now that we have finished the work in the common branch, we need to use it in our branches. Go to any branch in which you wish to use this pipeline, e.g. master, create a Jenkinsfile, and put this one line of code in it:
common()
This will call the common function that you created before in the common branch and will execute the pipeline.

Jenkins pipeline: share information between jobs

We are trying to define a set of jobs on Jenkins that will do really specific actions. JobA1 will build a Maven project, while JobA2 will build .NET code, JobB will upload it to Artifactory, JobC will download it from Artifactory, and JobD will deploy it.
Every job will have a set of parameters so we can reuse the same job for any product (around 100).
The idea behind this is to create black boxes: I call a job with some input and I always get some output; whatever happens in between is something that I don't care about. On the other side, this allows us to improve each job separately, adding the required complexity, and all products instantly benefit.
We want to use Jenkins Pipeline to orchestrate the execution of actions. We are going to have a pipeline per environment/usage.
PipelineA will call JobA1, then JobB to upload to Artifactory.
PipelineB will download the package via JobC and then deploy it to staging.
PipelineC will download the package via JobC and then deploy it to production based on some internal validations.
I have tried to get some variables from JobA1 (POM basics such as the ArtifactID or Version) injected into JobB, but the information doesn't seem to be transferred.
The same happens when downloading files: I call JobC, but the file stays in that job's workspace, unavailable to any other job, and I'm afraid that the "External Workspace Manager" plugin adds too much complexity.
Is there any way other than sharing the workspace to achieve my purpose? I understand that sharing the workspace would make it impossible to run two pipelines at the same time.
Am I following the right path or am I doing something weird?
There are two ways to share info between jobs:
You can use stash/unstash to share the files/data between multiple jobs in a single pipeline.
stage('HostJob') {
    build 'HostJob'
    dir('/var/lib/jenkins/jobs/Hostjob/workspace/') {
        sh 'pwd'
        stash includes: '**/build/fiblib-test', name: 'app'
    }
}
stage('TargetJob') {
    dir('/var/lib/jenkins/jobs/TargetJob/workspace/') {
        unstash 'app'
        build 'Targetjob'
    }
}
In this manner, you can always copy the file/exe/data from one job to the other. This feature of the pipeline plugin is better than archiving artifacts, as it saves the data only locally, and the stash is deleted after the build (which helps with data management).
You can also use the Copy Artifact Plugin.
There are two sides to consider when copying an artifact: archiving it in the host project and copying it in the target project:
a) Archive the artifacts in the host project and assign permissions.
b) After building a new job, select 'Permission to copy artifact' → Projects to allow copy artifacts: *
c) Create a Post-build Action → Archive the artifacts → Files to archive: "select your files"
d) Copy the required artifacts from the host to the target project:
create a Build action → Copy artifacts from another project → enter the '$Project name - Host project', which build (e.g. 'Latest successful build'), Artifacts to copy '$host project folder', Target directory '$localfolder location'.
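In a pipeline, the same plugin is available as the copyArtifacts step. A minimal sketch, assuming hypothetical job and path names:
copyArtifacts(projectName: 'HostJob',       // the job that archived the files
              selector: lastSuccessful(),   // which build to copy from
              filter: '**/build/*.jar',     // artifacts to copy
              target: 'incoming/')          // directory in the current workspace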
For the first part of your question (to pass variables between jobs), please use the snippet below as a post-build section:
post {
    always {
        build job: '/Folder/JobB', parameters: [string(name: 'BRANCH', value: "${params.BRANCH}")], propagate: false
    }
}
The above post-build action runs for all build results. Similarly, the post-build action could be triggered on a specific build status. I have used the BRANCH parameter from the current build (JobA) as a parameter to be consumed by 'JobB' (provide the exact location of the job). Please note that a similar parameter should be defined in JobB.
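For completeness, the matching parameter declaration on the JobB side could look like this minimal sketch:
pipeline {
    agent any
    parameters {
        // Must match the name passed from JobA
        string(name: 'BRANCH', defaultValue: 'master', description: 'Branch passed in by JobA')
    }
    stages {
        stage('use-parameter') {
            steps {
                echo "Building branch ${params.BRANCH}"
            }
        }
    }
}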
Moreover, for sharing the workspace you can refer to this link and share the workspace between the jobs.
You could use the Pipeline Shared Groovy Libraries plugin. Have a look at its documentation to implement libraries that multiple pipelines share and to define shared global variables.
