I've set up a multibranch pipeline to track my repo and automatically build and test all merge requests. It works wonders; however, I noticed that Jenkins creates a new workspace for each new branch. It is a pretty big project with a heavy build process and a lot of untracked cache files that mostly stay valid from one version to another, so if it re-used the previous workspace instead of doing a fresh git checkout, it would build much faster (and also not use up so much hard drive space).
How can I configure it to re-use the same workspace for different branches?
After researching the issue, I found out that this is not something I can do with a multibranch pipeline, so I switched to a regular pipeline project. Now every build uses one of the available workspaces, so builds end up re-using previous workspaces and the cache files in them, which really speeds up the build.
By default, Jenkins multibranch projects use an isolated workspace for each branch.
Builds of the same branch reuse the same workspace.
A possible solution is to use the ws(path) step inside your pipeline:
node("agent_name") {
ws(workspacePath) {
echo '...'
// ..
}
}
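For example, pointing every branch at one branch-independent directory keeps the untracked cache files alive across branches. A minimal sketch - the agent label, path, and build command below are assumptions, not part of the original answer:

// All branches build in the same fixed directory, so the cache survives.
node('linux') {
    ws('/var/jenkins/cache/myproject') {
        checkout scm   // updates the existing clone instead of starting fresh
        sh 'make -j4'  // the heavy build now reuses the cached files
    }
}

Note that if two builds ask for the same path at the same time, Jenkins may lock a suffixed directory (e.g. ...@2) instead, so concurrent builds won't share one cache.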
Related
Is there any way to make a Jenkins multibranch pipeline better support incremental builds by preferring the node that last built a branch, instead of choosing an available node more or less randomly?
Details:
We are setting up a Jenkins multibranch pipeline for a large Git project, where we use Make to build and test a lot of code. A full build takes 6-8 hours, but the dependency tracking in Make is good enough for us to use incremental builds, which shortens most of our build times a lot. For this to work, Jenkins has to pick the same workspace for subsequent changes to the same branch. Luckily it does so - but only on the same build node.
We have several identical Jenkins slave nodes available. Each time a build job is started by a change on a branch in Git, Jenkins apparently picks a random free build node, with a fresh, clean workspace - meaning no incremental build speedup.
We have tried building via NFS so that all the build nodes can share the workspaces, but at least the NFS server we have available is far too slow to make this work.
Is there any way to make Jenkins choose the node a little less randomly, and prefer the node on which the branch was last built?
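Illustrative only, and not from this thread: one crude way to make the choice "less random" in a pipeline is to map each branch deterministically onto one agent, so its incremental workspace is always the one reused. The agent names below are made up:

// Deterministically pin each branch to one agent so Make always sees
// that branch's previous workspace.
def agents = ['builder-1', 'builder-2', 'builder-3']
def chosen = agents[(env.BRANCH_NAME.hashCode() & 0x7fffffff) % agents.size()]
node(chosen) {
    checkout scm
    sh 'make'   // incremental: only rebuilds what changed
}

The trade-off is that a branch waits if "its" agent is busy, rather than falling back to another free node.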
We have some jobs set up which share a workspace. The workflow for the various branches is:
Build a big honking C++ project called foo.
Execute several downstream tests, each of which uses the workspace of foo.
We accomplish this by pointing the Use custom workspace field of the downstream jobs at foo's build workspace.
Recently, we took one branch and assigned it to be built on a Jenkins slave machine rather than on the master. I was surprised to find that on the master, the foo repository was cloned to $JENKINS_JOBS_PATH/FOO/workspace/foo_repo - while on the slave, the repository was cloned to $JENKINS_JOBS_PATH/FOO/foo_repo.
Is this by design, or have we somehow configured master and slave inconsistently?
Older versions of Jenkins put the workspace under the ${JENKINS_HOME}/jobs/JOB/workspace directories. After upgrading, this pattern stays with the Jenkins instance. New versions put the workspaces in ${JENKINS_HOME}/workspace/. I suspect the slaves don't need to follow the old pattern (especially if it is a newer slave), so the directories may not be consistent across machines.
You can change the location of the workspaces on the master in Jenkins -> Configure Jenkins -> Advanced.
I think the safe way to handle this: if you are going to use a custom workspace, you should use it for all of your jobs, including the first one that builds the big honking C++ project.
If you did this all in a pipeline, you could run everything in a single job and have more control over where all the files are, and you would have the option of stash and unstash - though if the files are huge, stash may not be the way to go.
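A rough sketch of that single-pipeline approach - the agent labels, stage names, and paths are invented:

// One pipeline builds foo once, then hands the output to the tests.
node('builder') {
    stage('Build foo') {
        checkout scm
        sh 'make foo'                                  // the big honking C++ build
        stash name: 'foo-build', includes: 'build/**'
    }
}
node('tester') {
    stage('Test') {
        unstash 'foo-build'   // copies the stashed tree; slow for huge outputs
        sh 'make test'
    }
}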
You can omit the 'Use custom workspace' option for each job and instead change the master and/or slave workspace paths, then use the path
%WORKSPACE%/../foo_repo
or (equivalently)
./../foo_repo
In that case:
%WORKSPACE% = [master or slave node workspace]/[job name]
and
%WORKSPACE%/../ = [master or slave node workspace]
In part of our testing setup we build the artifacts we need and then copy a template job, setting its name so it is recognizable:
Build artifact -> copy test template -> ending with a job for each test case
That means I end up with lots of jobs named Test_Client${BRANCHNAME}_Server${BRANCHNAME}.
I run through these jobs a lot while testing the branch, but as soon as it's merged the job won't be touched again, which is why I would like to create a job of sorts that simply deletes jobs that haven't been run for 14 days or so.
Does anyone know a way of doing this - and not just cleaning out the workspace?
Thanks!
It may be a big change, but this is an ideal case for the Multibranch pipeline.
On one project, we have a master branch and version branches. The developers create short-lived feature branches, or branch for other purposes. As the branches are created and pushed to GitHub, the multibranch job picks them up and starts building them. Developers get quick feedback that their changes will pass the build. When they merge to master or to a version branch and then delete the branch, the corresponding multibranch job goes away.
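If migrating everything to multibranch is too big a change right away, the cleanup itself can also be scripted. A minimal sketch for the Jenkins script console - the Test_Client name filter and 14-day cutoff come from the question above, everything else is an assumption:

// Delete jobs matching the test-job naming pattern whose last build
// is older than 14 days.
import jenkins.model.Jenkins

def cutoff = System.currentTimeMillis() - 14L * 24 * 60 * 60 * 1000
Jenkins.instance.getAllItems(hudson.model.Job)
    .findAll { it.name.startsWith('Test_Client') }
    .findAll { it.lastBuild != null && it.lastBuild.timeInMillis < cutoff }
    .each { job ->
        println "Deleting ${job.fullName}"
        job.delete()
    }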
I am working with Jenkins, and we have quite a few projects that all use the same tasks, i.e. we set a few variables, change the version, restore packages, start SonarQube, build the solution, run unit/integration tests, stop SonarQube, etc. The only difference is something like {Solution_Name}; everything else is exactly the same.
My question is: is there a way to create one 'shared' job that does all that work, with the job for building each project passing its variables down to that shared worker job? What I'm looking for is the ability to avoid recreating all the tasks for every one of our services/components. It would be really nice if each of our services/components could have only two tasks: one to set the variables and another to run the shared job.
Is this possible?
Thanks in advance.
You could potentially benefit from looking into the new pipelines as code feature.
https://jenkins.io/doc/book/pipeline/
Using this pattern, you define your build pipeline in a Groovy script rather than in the Jenkins UI. The script is kept in the codebase of the project it builds, in a file called Jenkinsfile.
By checking this pipeline into a Git repository, you can create a minimal configuration on the Jenkins side and simply point it at the repo; it will do whatever the pipeline says.
There are a few benefits to this approach if it works for your setup. The big one is that your build pipeline is fully versioned, just like the project it builds. The repository also becomes portable: it can be built on any Jenkins installation, across as many jobs as you like, as long as the pipeline plugins are installed.
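As a minimal illustration of what such a Jenkinsfile could look like - the stages, the Windows build commands, and the SOLUTION_NAME parameter are all assumptions, not something from the answer:

// Declarative Jenkinsfile sketch: the shared steps live here, and the
// one per-project difference is passed in as a parameter.
pipeline {
    agent any
    parameters {
        string(name: 'SOLUTION_NAME', defaultValue: 'MyService.sln',
               description: 'The only thing that differs per project')
    }
    stages {
        stage('Build') {
            steps {
                bat "nuget restore ${params.SOLUTION_NAME}"
                bat "msbuild ${params.SOLUTION_NAME}"
            }
        }
        stage('Test') {
            steps {
                bat 'vstest.console.exe Tests\\*.dll'   // placeholder test runner
            }
        }
    }
}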
I have several Jenkins Pipeline jobs set up on my Jenkins installation all of them with a Jenkinsfile inside the repository.
These pipelines run for all branches and contain all the steps necessary to build and deploy the branch. However, there are some differences between branches with regard to building and deploying, and I would like to be able to configure different environment variables for different branches.
Is that possible with Jenkins, or do I need to reevaluate my approach or use another CI system?
@rednax's answer works if you're using a branch-per-environment Git strategy. But if you're using git-flow (or any strategy where you assume that changes will be propagated up, possibly without human intervention, to master/production), you'll run into headaches where a merge overwrites scripts/variables.
We use a set of folders which match the environment names: infrastructure/Jenkinsfile contains the common steps, and infrastructure/test/Jenkinsfile contains the steps specific to the test environment (the folders also contain Dockerfiles and CloudFormation scripts). You could make that very complex with cascading includes or file merges, or simply keep almost-identical copies of each file in each folder.
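A sketch of how such folders can plug together in a scripted pipeline - the file names and the way the environment is chosen are assumptions, and each loaded script must end with `return this` so its methods are callable:

// Run the common steps, then load the steps specific to one
// environment's folder.
node {
    checkout scm
    def common = load 'infrastructure/common.groovy'   // shared build steps
    common.build()
    def envName = 'test'                               // e.g. from a job parameter
    def envSteps = load "infrastructure/${envName}/steps.groovy"
    envSteps.deploy()
}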
When configuring the job, you can tell Jenkins to grab the script (the Jenkinsfile) from the branch on which you are running. This means you can technically adjust the script on each of your branches to set up parameters there. Alternatively, you can grab the script from a single source control location, but commit a configuration file in each of your branches and have the script read that file after the checkout.
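A sketch of that committed-config variant - the file name and key are made up, and readProperties comes from the Pipeline Utility Steps plugin:

// The Jenkinsfile is identical on every branch; only the committed
// properties file differs per branch.
node {
    checkout scm
    def cfg = readProperties file: 'jenkins.properties'
    env.DEPLOY_TARGET = cfg['DEPLOY_TARGET']   // branch-specific value
    sh 'echo "Deploying to $DEPLOY_TARGET"'
}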