I have several parallel stages that share a node and clean up their workspace after they are done. The issue I have is that when a stage fails, I want the workspace NOT cleaned up, so I can inspect it.
What happens instead is:
the failing stage fails and leaves the workspace as I want it
a second stage reuses the workspace and succeeds
the second stage cleans up the workspace
How can I avoid this?
Jenkins has a post section for this. Depending on the result of your pipeline, a different branch of code is executed. So let's say your pipeline is successful: then your cleanup script or cleanup plugin is called. If your pipeline fails, you can archive your results or simply skip the cleanup of the workspace.
Check the official Jenkins documentation for more information (search for 'post'): https://jenkins.io/doc/book/pipeline/syntax/
pipeline {
    agent any
    stages {
        stage('PostExample') {
            steps {
                // do something here
            }
        }
    }
    post { // is called after your stages
        failure {
            // pipeline failed - do not clear workspace
        }
        success {
            // pipeline is successful - clear workspace
        }
    }
}
On the other hand, if you want to keep your results, you could think about archiving them so they are independent from your workspace; you can access them anytime from the Jenkins GUI.
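For instance, a minimal sketch combining both branches (assuming the Workspace Cleanup plugin for cleanWs(); the artifact pattern is a placeholder):

post {
    failure {
        echo 'Build failed - leaving the workspace in place for inspection'
    }
    success {
        // archive first so the results survive independently of the workspace
        archiveArtifacts artifacts: 'build/**', fingerprint: true
        cleanWs() // provided by the Workspace Cleanup plugin
    }
}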
You just have to use a try/finally block (the finally part executes irrespective of the stage outcome) in your Jenkinsfile. Refer to How to perform actions for failed builds in Jenkinsfile.
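A minimal scripted-pipeline sketch of that idea (the stage name and shell command are placeholders):

node {
    def failed = false
    try {
        stage('Build') {
            sh './build.sh' // placeholder build step
        }
    } catch (err) {
        failed = true
        throw err
    } finally {
        // the finally block runs irrespective of the stage outcome
        if (failed) {
            echo 'Keeping the workspace for inspection'
        } else {
            deleteDir() // clean the workspace only on success
        }
    }
}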
In a nutshell:
How can I access the location of the produced artifacts within a shell script started in a build or post-build action?
The longer story:
I'm trying to set up a Jenkins job to automate the building and propagation of Debian packages.
So far, I have already been successful in using the debian-pbuilder plugin to perform the build process, such that Jenkins presents the final artifacts after the job finishes successfully:
mypackage_1+020200224114528.NOREV.4_all.deb
mypackage_1+020200224114528.NOREV.4_amd64.buildinfo
mypackage_1+020200224114528.NOREV.4_amd64.changes
mypackage_1+020200224114528.NOREV.4.dsc
mypackage_1+020200224114528.NOREV.4.tar.xz
Now I would like to also automate the deployment process into the local reprepro repository, which would actually just require invoking a simple shell script I've put together.
My problem: I find no way to determine the artifact location for that deployment script to operate on. The "debian-pbuilder" plugin generates the artifacts in a temporary directory ($WORKSPACE/binaries.tmp15567690749093469649), which changes with every build.
Since the artifacts are listed properly in the finished job status view, I would expect that the artifact details are provided to the script (e.g. by environment variables). But that is obviously not the case.
I've already searched extensively for a solution, but didn't find anything helpful.
Or is it me (still somewhat of a rookie in Jenkins) following a wrong approach here?
You can use archiveArtifacts. You have a binaries.tmp directory in the workspace and you can use it, but clear the workspace with deleteDir() before the build executes, so that only the current build's directory is present.
Pipeline example:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                deleteDir()
                ...
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'binaries*/**', fingerprint: true
        }
    }
}
You can also check https://plugins.jenkins.io/copyartifact/
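If you go the Copy Artifact route, a hypothetical sketch of pulling the archived files into a downstream deployment job (the job name and script name are placeholders):

copyArtifacts(
    projectName: 'mypackage-build', // hypothetical upstream job name
    filter: 'binaries*/**',
    selector: lastSuccessful()
)
sh './deploy-to-reprepro.sh' // your deployment script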
I have a multibranch pipeline with the following behaviors:
And the following Jenkinsfile:
pipeline {
    agent {
        label 'apple'
    }
    stages {
        stage('Lint') {
            when {
                changeRequest()
            }
            steps {
                sh 'fastlane lint'
            }
        }
    }
    post {
        success {
            reportSuccess()
        }
        failure {
            reportFailure()
        }
    }
}
I use a slave to run the actual build, but the master still needs to check out the code to get the Jenkinsfile. For that, it seems to use the same behaviors as those defined in the job, even though it really only needs the Jenkinsfile.
My problem is that I want to discover pull requests by merging the pull request with the current target branch revision, but when there is a merge conflict the build will fail before the Jenkinsfile is executed. This prevents any kind of reporting done in post steps.
Is there a way to have the initial checkout not merge the target branch, but still have it merged when actually running the Jenkinsfile on a slave?
You may want to check out using the "Current Pull Request revision" strategy, and then on a successful build issue a git merge command.
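A rough sketch of that idea as an extra stage (assuming the standard multibranch variable CHANGE_TARGET is set for pull request builds):

stage('Merge target') {
    when {
        changeRequest()
    }
    steps {
        // merge the target branch ourselves, inside the pipeline,
        // so a conflict fails a stage and the post section still runs
        sh 'git fetch origin "$CHANGE_TARGET"'
        sh 'git merge "origin/$CHANGE_TARGET"'
    }
}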
We have "stage (build)" on all our branches. Temporarily how can we skip this stage to run on all our branches in multibranch pipeline. I know one solution is use when condition on stage and ask all developers to pull that branch into their branch. But thats lot of work and coordination. Instead I am looking for a global configuration where we can simply skip the stage by name on any branch.
It sounds like you keep the Jenkinsfile alongside the code, but want to change how the Jenkinsfile runs from an administrative point of view.
You could store this stage in a separate repository (version control flavor of preference). Before you execute the stage, load in the repository, then load in the script file, and within the stage execute a method defined in that file.
Alternatively, add a boolean parameter to the build job (this assumes you aren't setting parameters via options in the Jenkinsfile) that decides whether to use or skip the stage; you can then modify the project configuration when you want to turn it on or off.
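A minimal sketch of the boolean-parameter alternative (the parameter name is a placeholder):

pipeline {
    agent any
    parameters {
        booleanParam(name: 'RUN_BUILD', defaultValue: true,
                     description: 'Untick to temporarily skip the build stage')
    }
    stages {
        stage('build') {
            when {
                expression { params.RUN_BUILD }
            }
            steps {
                echo 'building...'
            }
        }
    }
}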
What about this approach?
stage('build') {
    if (buildFlag == "disable") {
        // jump to the next stage
        return
    }
    // do something
}
buildFlag could be:
a simple var in the same pipeline
a global var configured in Jenkins; you could temporarily set it to disabled to skip the build stage and change it back to enabled to restore normalcy
Also, you could set the status to failure instead of returning:
currentBuild.result = 'FAILURE'
Or throw an exception and exit the entire job or pipeline.
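For instance, the built-in error step marks the build as failed and aborts the rest of the pipeline:

stage('build') {
    if (buildFlag == "disable") {
        error('build stage is disabled') // fails the build and stops the pipeline
    }
    // do something
}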
One option is to skip the body of the stage if the condition is met. For example:
stage('Unit Tests') {
    steps {
        script {
            if (env.BRANCH_NAME.startsWith(releaseBranch)) {
                echo 'Running unit tests with coverage...'
            } else {
                // run actual unit tests
            }
        }
    }
}
The only downside is that the UI will show the "green box" for this step - even though it effectively did nothing.
If you want to remove the stage completely for the branch, use a when directive.
stage('deploy') {
    when {
        branch 'production'
    }
    steps {
        echo 'Deploying'
    }
}
Bonus: You can also specify a "when not" directive as well. See How to specify when branch NOT (branch name) in jenkinsfile?
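A minimal sketch of the "when not" variant:

stage('tests') {
    when {
        not {
            branch 'production'
        }
    }
    steps {
        echo 'Running on every branch except production'
    }
}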
I have a CD pipeline that requires user confirmation at some stages, so I would like to free up server resources while the pipeline is waiting for the user input.
pipeline {
    agent any
    stages {
        stage('Build Stage') {
            steps {
                ...
            }
        }
        stage('User validation stage') {
            agent none
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage('Deploy Stage') {
            steps {
                ...
            }
        }
    }
}
You can see above that I have a global agent any but in the User Validation Stage I added agent none.
Can someone confirm that this is doing what I want (no agent/node is waiting for the user input)? I don't see how to verify it; nothing is different in the execution log...
If not, how could I do it?
This will not work as you expect. You cannot specify agent any on the entire pipeline and then expect agent none to not occupy the executor.
To prove this, you can run the code as you have it, and while it is waiting at the input stage, go to your main Jenkins page and look at the Build Executor Status. You will see there is an executor still running your job.
Next, switch your pipeline to agent none and add agent any to all the other stages (besides your input stage), then do the same test. You can see that while waiting at the input stage, none of the executors are occupied.
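Applied to the pipeline above, that restructuring looks roughly like this (the echo steps are placeholders for the real build and deploy steps):

pipeline {
    agent none
    stages {
        stage('Build Stage') {
            agent any
            steps {
                echo 'build here'
            }
        }
        stage('User validation stage') {
            // no agent here: input waits without occupying an executor
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage('Deploy Stage') {
            agent any
            steps {
                echo 'deploy here'
            }
        }
    }
}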
As to your question about different workspaces on different nodes... Assuming you are using code from an SCM, it will be checked out on every new node, so that isn't a concern. The only thing you need to worry about is artifacts you created in each stage.
It is not safe to "hope" that you will stay on the same node, though Jenkins will "try" to keep you there. But even then, there is not a guarantee that you will get the same workspace directory.
The correct way to handle this is to stash all the files that you may have created or modified that you will need in later stages. Then in the following stages, unstash the required files. Never assume files will make it between stages that have their own node declaration.
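A minimal sketch of that stash/unstash pattern (the stash name, file pattern, and scripts are placeholders):

stage('Build Stage') {
    agent any
    steps {
        sh './build.sh'
        // save the outputs before this node's workspace goes away
        stash name: 'build-output', includes: 'output/**'
    }
}
stage('Deploy Stage') {
    agent any
    steps {
        // restore the files on whichever node this stage lands on
        unstash 'build-output'
        sh './deploy.sh'
    }
}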
Jenkins declarative pipelines offer a post directive to execute code after the stages have finished. Is there a similar thing to run code before the stages are running, and most importantly, before the SCM checkout?
For example something along the lines of:
pre {
    always {
        rm -rf ./*
    }
}
This would then clean the workspace of my build before the source code is checked out.
pre is a cool feature idea, but doesn't exist yet. skipDefaultCheckout and checkout scm (which is the same as the default checkout) are the keys:
pipeline {
    agent { label 'docker' }
    options {
        skipDefaultCheckout true
    }
    stages {
        stage('clean_workspace_and_checkout_source') {
            steps {
                deleteDir()
                checkout scm
            }
        }
        stage('build') {
            steps {
                echo 'i build therefore i am'
            }
        }
    }
}
For the moment there are no pre-build steps, but for the purpose you are looking for, it can be done in the pipeline job configuration (and also in multibranch pipeline jobs): where you define the location of your Jenkinsfile, choose Additional Behaviours -> Wipe out repository & force clone.
Delete the contents of the workspace before building, ensuring a fully fresh workspace.
If you do not really want to delete everything and want to save some network usage, you can use this other option instead: Additional Behaviours -> Clean before checkout.
Clean up the workspace before every checkout by deleting all untracked files and directories, including those which are specified in .gitignore. It also resets all tracked files to their versioned state. This ensures that the workspace is in the same state as if you cloned and checked out in a brand-new empty directory, and ensures that your build is not affected by the files generated by the previous build.
This one will not delete the workspace but just resets the repository to its original state and pulls new changes if there are any.
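If you would rather express the same behavior in the Jenkinsfile itself, a sketch using the Git plugin's CleanBeforeCheckout extension (assuming a multibranch job where the scm global variable is available, combined with skipDefaultCheckout as above):

steps {
    checkout([
        $class: 'GitSCM',
        branches: scm.branches,
        userRemoteConfigs: scm.userRemoteConfigs,
        // same effect as Additional Behaviours -> Clean before checkout
        extensions: [[$class: 'CleanBeforeCheckout']]
    ])
}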
I use "Prepare an environment for the run / Script Content"