What are the #tmp folders in a Jenkins workspace and how to clean them up - docker

I have a Jenkins pipeline for a PHP project in a Docker container. This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            agent any
            steps {
                sh 'docker-compose up -d'
                sh 'docker exec symfony composer install'
            }
        }
        stage('Test') {
            steps {
                sh 'docker exec symfony php ./bin/phpunit --coverage-clover=\'reports/coverage/coverage.xml\' --coverage-html=\'reports/coverage\' --coverage-crap4j=\'reports/crap4j.xml\''
            }
        }
        stage('Coverage') {
            steps {
                step([$class: 'CloverPublisher', cloverReportDir: '/reports/coverage', cloverReportFileName: 'coverage.xml'])
            }
        }
    }
    post {
        cleanup {
            sh 'docker-compose down -v'
            cleanWs()
        }
    }
}
After running the pipeline, the /var/lib/jenkins/workspace folder contains 4 folders (assuming my project name is Foo):
Foo
Foo#2
Foo#2#tmp
Foo#tmp
What are these folders, and how do I clean them up? After the build, cleanWs() removes only the first of them.
EDIT: This is not a duplicate of this question, because:
That question does not answer my question of what these folders are.
The answers to that question suggest using deleteDir, which is not recommended when using Docker containers.

There is an open Jenkins issue about deleteDir() not deleting the #tmp/#script/#... directories.
A workaround to delete those:
post {
    always {
        cleanWs()
        dir("${env.WORKSPACE}#tmp") {
            deleteDir()
        }
        dir("${env.WORKSPACE}#script") {
            deleteDir()
        }
        dir("${env.WORKSPACE}#script#tmp") {
            deleteDir()
        }
    }
}
There is also a comment on the issue describing what #tmp is:
It [#tmp folder] contains the content of any library that was loaded at
run time. Without a copy, Replay can't work reliably.

The Foo#2 and Foo#2#tmp folders were created because the agent was defined twice: once at the top level inside the pipeline block, and once inside the stage called 'Build'.
The working folder of the 'Build' stage is the Foo#2 folder.
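A minimal sketch of how the Jenkinsfile from the question could avoid the extra Foo#2/Foo#2#tmp workspaces: declare the agent only once at the top level and drop the stage-level agent (stage names and shell steps are taken from the question; the other stages are elided):
pipeline {
    agent any               // declared once; no second agent inside the 'Build' stage
    stages {
        stage('Build') {
            steps {
                sh 'docker-compose up -d'
                sh 'docker exec symfony composer install'
            }
        }
        ...
    }
    post {
        cleanup {
            sh 'docker-compose down -v'
            cleanWs()
        }
    }
}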

Related

How to make Jenkins execute pipeline steps from the remote root directory?

I created a simple pipeline in Jenkins. The remote root directory of my agent is set to my project root path. But when I check where I am during the build (e.g. by defining a step like sh 'pwd'), I see that the directory my steps are executed from is the $WORKSPACE directory (/path_to_remote_root_directory_of_the_agent/workspace/jenkins_project_title). That means I cannot simply start my unit tests like sh 'vendor/bin/phpunit ./test/Unit', nor other tasks that I usually run from the project root folder.
I'm pretty sure that I simply configured something incorrectly, and that in the normal case scripts like this
pipeline {
    agent {
        label 'devvm-slave-01'
    }
    stages {
        stage('Prepare') {
            steps {
                sh 'composer install'
                ...
            }
        }
        ...
        stage('Checkstyle') {
            steps {
                sh 'vendor/bin/phpcs --report=checkstyle --report-file=`pwd`/build/logs/checkstyle.xml --standard=PSR2 --extensions=php --ignore=autoload.php --ignore=vendor/ . || exit 0'
                checkstyle pattern: 'build/logs/checkstyle.xml'
            }
        }
    }
}
work as expected without any crude workarounds for paths.
What am I doing wrong, and how do I get it working correctly?
From the "agent" section of the "Pipeline Syntax" chapter of the Jenkins Handbook:
Parameters
node
agent { node { label 'labelName' } } behaves the same as agent { label 'labelName' }, but node allows for additional options (such as customWorkspace).
So the solution is to use node with its customWorkspace option:
pipeline {
    agent {
        node {
            label 'devvm-slave-01'
            customWorkspace '/path/to/my/project'
        }
    }
    ...
}
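Alternatively (my addition, not part of the original answer), if only a few commands need to run from the project directory rather than relocating the whole workspace, they can be wrapped in a dir step; the path below is hypothetical:
stage('Prepare') {
    steps {
        // hypothetical absolute path to the project root on the agent
        dir('/path/to/my/project') {
            sh 'composer install'
        }
    }
}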

Jenkins Pipeline: Executing a shell script

I have created a pipeline like the one below, and please note that I have the script files, namely "backup_grafana.sh" and "gitPush.sh", in the source code repository where the Jenkinsfile is present. But I am unable to execute the scripts because of the following error:
/home/jenkins/workspace/grafana-backup#tmp/durable-52495dad/script.sh:
line 1: backup_grafana.sh: not found
Please note that I am running the Jenkins master on Kubernetes in a pod, so copying the script files as suggested by the error is not possible, because the pod may be destroyed and recreated dynamically (in that case, with a new pod, my scripts will no longer be available on the Jenkins master).
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh 'backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh 'gitPush.sh'
            }
        }
    }
}
Can you please suggest how I can run these scripts in the Jenkinsfile? I would also like to mention that I want to run these scripts on a Python slave that I have already set up.
If the command sh 'backup_grafana.sh' fails to execute when it actually should have executed successfully, here are two possible solutions.
1) Maybe you need a dot slash (./) in front of those executable commands to tell your shell where they are. If they are not in your $PATH, you need to tell your shell that they can be found in the current directory. Here's the fixed Jenkinsfile with four non-whitespace characters added:
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh './backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh './gitPush.sh'
            }
        }
    }
}
2) Check whether you have declared your file as a bash or sh script by using one of the following as the first line of your script:
#!/bin/bash
or
#!/bin/sh
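As a small, hedged sketch combining both points for the scripts from the question (chmod is only needed if the executable bit was lost during checkout):
stage('Take the grafana backup') {
    steps {
        // make sure the checked-out script is executable, then call it by explicit path
        // so the shell does not search $PATH for it
        sh 'chmod +x ./backup_grafana.sh && ./backup_grafana.sh'
    }
}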

Reuse artifacts at a later stage in the same Jenkins project

I have a Jenkins pipeline whose Build step has an archiveArtifacts command.
After the Build step there is Unit test, Integration test and Deploy.
In Deploy step, I want to use one of the artifacts. I thought I could find it in the same place the Build step generated it, but apparently the archiveArtifacts has deleted them.
As a workaround I can copy the artifact before it is archived, but it doesn't look elegant to me. Is there any better way?
As I understand it, archiveArtifacts is more for saving artifacts for use by something (or someone) after the build has finished. I would recommend looking at using "stash" and "unstash" for transferring files between stages or nodes.
You just go...
stash includes: 'globdescribingfiles', name: 'stashnameusedlatertounstash'
and when you want to later retrieve that artifact...
unstash 'stashnameusedlatertounstash'
and the stashed files will be put into the current working directory.
Here's the example of that given in the Jenkinsfile docs (https://jenkins.io/doc/book/pipeline/jenkinsfile/#using-multiple-agents):
pipeline {
    agent none
    stages {
        stage('Build') {
            agent any
            steps {
                checkout scm
                sh 'make'
                stash includes: '**/target/*.jar', name: 'app'
            }
        }
        stage('Test on Linux') {
            agent {
                label 'linux'
            }
            steps {
                unstash 'app'
                sh 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
        stage('Test on Windows') {
            agent {
                label 'windows'
            }
            steps {
                unstash 'app'
                bat 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
    }
}

Jenkins Pipeline: How to archive artifacts when the build fails?

When our browser-based tests fail, we take a screenshot of the browser window to better illustrate the problem. However, I don't understand how to archive them in my pipeline, because the pipeline stops after the failure. The same goes for the junit.xml; I'd also like to use it in error cases.
I've checked, the screenshots are generated and stored correctly.
My definition looks like this (irrelevant things mostly trimmed):
node {
    stage('Build docker container') {
        checkout([$class: 'GitSCM', ...])
        sh "docker build -t webapp ."
    }
    stage('test build') {
        sh "mkdir -p rspec screenshots"
        sh "docker run -v /var/jenkins_home/workspace/webapp/rspec/junit.xml:/myapp/junit.xml -v /var/jenkins_home/workspace/webapp/screenshots:/myapp/tmp/capybara -v webapp bundle exec rspec"
    }
    stage('Results') {
        junit 'rspec/junit*.xml'
        archive 'screenshots/*'
    }
}
You can use a plain Groovy try/catch to avoid the pipeline failing on a test failure, or the Jenkins catchError step, like this:
node {
    catchError {
        // Tests that might fail...
    }
    // Archive your test artifacts
}
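A slightly fuller sketch of that catchError approach, using the stages and paths from the question (the docker command itself is elided; the point is that the result steps run even when the tests failed):
node {
    stage('test build') {
        catchError {
            // the docker run / rspec command from the question goes here;
            // if it exits non-zero, the build is marked as failed but the
            // pipeline keeps going instead of aborting
            sh 'docker run ... bundle exec rspec'
        }
    }
    stage('Results') {
        // these steps now run even after a test failure above
        junit 'rspec/junit*.xml'
        archive 'screenshots/*'
    }
}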
From here, you can use the post section in your pipeline:
pipeline {
    agent any
    stages {
        stage('Build') {
            ...
        }
        stage('Test') {
            ...
        }
    }
    post {
        always {
            archive 'build/libs/**/*.jar'
        }
    }
}
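Side note (my addition, not from the answer above): archive is the older shortcut step; on recent Jenkins versions the same post block is usually written with archiveArtifacts, roughly:
post {
    always {
        // allowEmptyArchive avoids failing the build when no jars were produced
        archiveArtifacts artifacts: 'build/libs/**/*.jar', allowEmptyArchive: true
    }
}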

Jenkins Pipeline Wipe Out Workspace

We are running Jenkins 2.x and love the new Pipeline plugin. However, with so many branches in a repository, disk space fills up quickly.
Is there any plugin that's compatible with Pipeline that I can wipe out the workspace on a successful build?
As @gotgenes pointed out, with Jenkins version 2.74 the following works (not sure since which version; maybe someone can edit and add it above):
cleanWs()
With Jenkins version 2.16 and the Workspace Cleanup Plugin that I have, I use
step([$class: 'WsCleanup'])
to delete the workspace.
You can view it by going to
JENKINS_URL/job/<any Pipeline project>/pipeline-syntax
Then select "step: General Build Step" from the Sample Step dropdown, and then select "Delete workspace when build is done" from the Build step.
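As a rough sketch (my addition; the parameter names come from the Workspace Cleanup Plugin and are best verified in the snippet generator mentioned above), cleanWs can also be told to delete directories and to keep certain paths:
post {
    always {
        // illustration only: also delete directories, don't fail the build if
        // cleanup fails, and keep anything under reports/ (hypothetical path)
        cleanWs deleteDirs: true,
                notFailBuild: true,
                patterns: [[pattern: 'reports/**', type: 'EXCLUDE']]
    }
}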
The mentioned solutions deleteDir() and cleanWs() (if using the Workspace Cleanup plugin) both work, but the recommendation to use them in an extra build step is usually not the desired solution. If the build fails and the pipeline is aborted, this cleanup stage is never reached, and therefore the workspace is not cleaned on failed builds.
=> In most cases you should probably put it in a post-build condition like always:
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
You can use deleteDir() as the last step of the pipeline Jenkinsfile (assuming you didn't change the working directory).
In fact the deleteDir function recursively deletes the current directory and its contents. Symbolic links and junctions will not be followed but will be removed.
To delete a specific directory of a workspace, wrap the deleteDir step in a dir step:
dir('directoryToDelete') {
    deleteDir()
}
Using the following pipeline script:
pipeline {
    agent { label "master" }
    options { skipDefaultCheckout() }
    stages {
        stage('CleanWorkspace') {
            steps {
                cleanWs()
            }
        }
    }
}
Follow these steps:
Navigate to the latest build of the pipeline job you would like to clean the workspace of.
Click the Replay link in the LHS menu.
Paste the above script in the text box and click Run.
I used deleteDir() as follows:
post {
    always {
        deleteDir() /* clean up our workspace */
    }
}
However, I then also needed to run something on success or failure AFTER always, but you cannot order the post conditions.
The current order is always, changed, aborted, failure, success and then unstable.
However, there is a very useful post condition, cleanup, which always runs last; see https://jenkins.io/doc/book/pipeline/syntax/
So in the end my post was as follows:
post {
    always {
    }
    success {
    }
    failure {
    }
    cleanup {
        deleteDir()
    }
}
Hopefully this may be helpful for some corner cases.
If you have used a custom workspace in Jenkins, then deleteDir() will not delete the #tmp folder.
So to delete #tmp along with the workspace, use the following:
pipeline {
    agent {
        node {
            customWorkspace "/home/jenkins/jenkins_workspace/${JOB_NAME}_${BUILD_NUMBER}"
        }
    }
    stages {
        ...
    }
    post {
        cleanup {
            /* clean up our workspace */
            deleteDir()
            /* clean up tmp directory */
            dir("${workspace}#tmp") {
                deleteDir()
            }
            /* clean up script directory */
            dir("${workspace}#script") {
                deleteDir()
            }
        }
    }
}
This snippet will work for the default workspace as well.
Using the 'WipeWorkspace' extension seems to work as well. It requires the longer form:
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    extensions: scm.extensions + [[$class: 'WipeWorkspace']],
    userRemoteConfigs: scm.userRemoteConfigs
])
More details here: https://support.cloudbees.com/hc/en-us/articles/226122247-How-to-Customize-Checkout-for-Pipeline-Multibranch-
Available GitSCM extensions here: https://github.com/jenkinsci/git-plugin/tree/master/src/main/java/hudson/plugins/git/extensions/impl
For Jenkins 2.190.1 this works for sure:
post {
    always {
        cleanWs deleteDirs: true, notFailBuild: true
    }
}
pipeline {
    agent any
    tools { nodejs "node" }
    environment {
        ...
    }
    parameters {
        string(name: 'FOLDER', defaultValue: 'ABC', description: 'FOLDER', trim: true)
    }
    stages {
        stage('1') {
            steps {
                ...
            }
        }
        stage("2") {
            steps {
                ...
            }
        }
    }
    post {
        always {
            echo "Release finished do cleanup and send mails"
            deleteDir()
        }
        success {
            echo "Release Success"
        }
        failure {
            echo "Release Failed"
        }
        cleanup {
            echo "Clean up in post work space"
            cleanWs()
        }
    }
}
We make sure we are working with a clean workspace by using a feature of the git plugin. You can add additional behaviors like 'Clean before checkout'. We use this as well for 'Prune stale remote-tracking branches'.
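A sketch of what that can look like in pipeline form, mirroring the WipeWorkspace example above but with the CleanBeforeCheckout extension (assuming a multibranch job where scm is available):
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    // runs a git clean before checkout, so the workspace starts clean
    extensions: scm.extensions + [[$class: 'CleanBeforeCheckout']],
    userRemoteConfigs: scm.userRemoteConfigs
])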
In my case, I want to clear out old files at the beginning of the build, but this is problematic since the source code has been checked out.
My solution is to ask git to clean out any files (from the last build) that it doesn't know about:
sh "git clean -x -f"
That way I can start the build out clean, and if it fails, the workspace isn't cleaned out and therefore easily debuggable.
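One caveat (my addition, not from the original answer): without -d, git clean leaves untracked directories in place; to remove those as well, the call would be roughly:
sh 'git clean -xdf'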
Cleaning up: Since the post section of a Pipeline is guaranteed to run at the end of a Pipeline's execution, we can add steps there to perform finalization, notification, or other end-of-Pipeline tasks.
pipeline {
    agent any
    stages {
        stage('No-op') {
            steps {
                sh 'ls'
            }
        }
    }
    post {
        cleanup {
            echo 'One way or another, I have finished'
            deleteDir() /* clean up our workspace */
        }
    }
}
Currently both deleteDir() and cleanWs() do not work properly when using the Jenkins Kubernetes plugin: the pod workspace is deleted, but the master workspace persists.
It should not be a problem for persistent branches, where you have a step to clean the workspace prior to checkout scm; it will basically reuse the same workspace over and over again. But when using multibranch pipelines, the master keeps the whole workspace and git directory.
I believe this should be raised as an issue with Jenkins.
Any enlightenment here?
I usually use this:
post {
    success {
        cleanWs()
    }
}
