How to cache downloaded dependencies for a Jenkins Docker SSH Slave (Gradle) - jenkins

We have a Jenkins Docker slave template that successfully builds a piece of software, for example a Gradle project. It is based on https://hub.docker.com/r/evarga/jenkins-slave/.
When we fire up the Docker slave, the dependencies are downloaded every time we do a build. We would like to speed up the build so that downloaded dependencies can be reused by the same build or even by other builds.
Is there a way to specify an external folder so that the cache is reused? Or is there another solution that shares the same cache?

I think the answers described here only work with an exclusive cache per build job. If I have different Jenkins jobs running on Docker slaves, I will get into trouble with this scenario: if the jobs run at the same time and write to the same mounted cache on the host filesystem, the cache can become corrupted. Alternatively, you must mount a folder with the job name as part of the filesystem path (a given Jenkins job only runs once at a time, so its own cache is never written concurrently).
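A hedged sketch of that per-job layout, with placeholder host path and image name (JOB_NAME is the standard Jenkins environment variable):

    # One cache directory per Jenkins job, so concurrent jobs never write to
    # the same mounted cache. Host path and image name are examples only.
    docker run --rm \
      -v /var/cache/gradle/"$JOB_NAME":/home/jenkins/.gradle \
      my-gradle-slave-image \
      gradle --no-daemon build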

Here's an example for Maven dependencies; it's exactly what Opal suggested. You create a volume which refers to the cache folder of the host.
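The snippet from that answer isn't reproduced above; a minimal sketch of the idea, assuming the evarga/jenkins-slave image and an example host path, might look like this:

    # Start the SSH slave with the host's Maven repository mounted into the
    # jenkins user's home, so downloaded artifacts survive container restarts.
    # Host path and port mapping are examples only.
    docker run -d -p 2222:22 \
      -v /srv/jenkins/m2-cache:/home/jenkins/.m2 \
      evarga/jenkins-slave
    # For Gradle, mount the Gradle user home instead, e.g.
    #   -v /srv/jenkins/gradle-cache:/home/jenkins/.gradle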

Related

Debug why gradle caching fails across successful docker build instances?

I am looking at trying to make the Gradle 6.9 cache work in a Docker CI build invoked by Jenkins running in Kubernetes, without access to scan.gradle.org.
The idea is to save an image after gradle --build-cache --no-daemon classes bootJar and use that as the 'FROM' of subsequent builds. This works for me on my own machine, but I cannot make it work on the Jenkins server. Everything happens in the Gradle home directory, so everything should be cached. I am wondering if the path to that matters, as it is deep in a Kubernetes mount under '/var', and this is the only difference between the two Docker builds I can think of.
Caching would preferably be for the whole build, but just caching the Maven dependencies would be a substantial saving.
What am I missing? Is there a way to get insight into why Gradle decides to reuse what it already has or not?
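For context, the setup described above corresponds roughly to the following sketch (image names, tags and the build file are placeholders, not the asker's actual configuration):

    # First stage of the CI job: run the Gradle build and keep the result as a
    # base image whose layers already contain the populated Gradle home.
    cat > Dockerfile.cache <<'EOF'
    FROM gradle:6.9-jdk11
    ENV GRADLE_USER_HOME=/gradle-home
    COPY . /workspace
    WORKDIR /workspace
    RUN gradle --build-cache --no-daemon classes bootJar
    EOF
    docker build -t myapp-build-cache -f Dockerfile.cache .
    # Subsequent builds start FROM myapp-build-cache, so /gradle-home is warm.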

Minimize build time for Gradle project on Docker

Imagine that I need to build a big CUBA application (it uses Gradle to manage dependencies, and the build produces a .war).
I need to dockerize both the build and the application. The latter runs in a Tomcat image into which the .war is copied.
Most of the dependencies actually remain unchanged between consecutive builds of the project, but the build seems to go over them each time, taking forever...
I'd like to produce a custom Docker image based on gradle:jdk8 (or similar) that imports all the Gradle dependencies.
This image would be used for subsequent builds to produce the .wars and would be rebuilt only when there is a change in the dependencies' versions.
However, I'm quite new to Gradle and I don't know:
if it is possible to import the dependencies without building the project;
if it is actually possible to use previously imported dependencies to build the project in a shorter time.
Any advice/suggestion? Is this possible?
I hope my question is clear, but I have difficulty explaining my aim. Ask me if you need a better explanation.
Thanks in advance.
You mean that you want to build a Docker image for a build runner (or build agent), right?
It's not possible to import the dependencies without building the project, because Gradle resolves dependencies lazily, only when they are needed.
E.g. the artifacts needed to build a CUBA theme are resolved only when the web theme is built.
Yes, it's possible to re-use previously downloaded library artifacts (cached in ~/.gradle/caches) to build the project in a shorter time.
So in your case you need to create the build runner's Docker image by fully building your project once in a Docker container. Dependencies will be downloaded and cached in the file system. Then you can pull that image and use it again for subsequent builds, avoiding re-downloading artifacts.
If you change the CUBA platform version in your project, you'll need to re-create the build runner image if you want to avoid downloading the cuba-*.jar artifacts for every build.
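A minimal sketch of such a build-runner image, assuming a gradle:jdk8 base and placeholder image names:

    # Warm the dependency cache once by fully building the project inside an
    # image. GRADLE_USER_HOME is moved off any path the base image may declare
    # as a VOLUME, so the cache actually ends up in the image layers.
    cat > Dockerfile.build-runner <<'EOF'
    FROM gradle:jdk8
    ENV GRADLE_USER_HOME=/gradle-cache
    COPY . /project
    WORKDIR /project
    RUN gradle --no-daemon build
    EOF
    docker build -t registry.example.com/myapp-build-runner -f Dockerfile.build-runner .

    # Subsequent builds run inside that image; artifacts already sitting in
    # /gradle-cache are reused, so only new or changed dependencies are fetched.
    docker run --rm -v "$PWD":/project -w /project \
      registry.example.com/myapp-build-runner gradle --no-daemon build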

How to attach build log files to Jenkins?

I'm building a Jenkins pipeline, and when the pipeline fails during server installation, some logs are generated on the machine where the server is being installed.
I want to attach those logs to the Jenkins build, so a person can view them from the Jenkins build itself instead of having to go to the machine to find them.
I saw the Copy To Slave Plugin, but when I searched for it in Jenkins to install it, it was not listed.
Could you please suggest which plugin would help me attach log files to the Jenkins build?
Due to the complex nature of filesystems, Jenkins is not capable of copying logs from extraneous locations, i.e. those outside of the Jenkins root directory. This is for security reasons, and it is why the Copy To Slave Plugin you referred to has been discontinued.
In short, Jenkins spawns processes that spawn other processes that are owned by different users on the filesystem (e.g. root). For this reason, it is highly probable that the log files you are referring to are located elsewhere on the file system (i.e. not in $JENKINS_HOME) and thus are not owned by the jenkins user.
It is possible to use cat or tail on the log files in the Jenkins build itself. In combination with a plugin like Log Parser, this can provide some nice output in another screen.
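For example, a shell step along these lines (the log path is a placeholder) dumps the file into the console output, where the Log Parser plugin can then highlight it:

    # Print the installer log into the Jenkins console output; the path is an
    # example, and read permissions depend on who owns the file.
    tail -n 200 /var/log/myserver/install.log || echo "install log not readable"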
I would be interested in what you mean by “install”. Can the install happen during the building of a Docker image? Or in a pre-built Docker container? If so, you can copy the “installed” files to the destination.
This would help you, because any log files created during the “install” can be copied out of the Docker container and attached to the Jenkins build as an archived artifact.
For this, you don’t even need a plug-in.
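A rough sketch of that approach, assuming the install runs in a container named installer and a hypothetical log path:

    # Copy the log out of the container into the job workspace...
    docker cp installer:/opt/server/logs/install.log "$WORKSPACE"/install.log
    # ...then attach it to the build with the built-in archiver, e.g. the
    # pipeline step: archiveArtifacts artifacts: 'install.log', allowEmptyArchive: true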

Web development workflow using Github and Docker

I have learnt the basics of GitHub and Docker, and both work well in my environment. On my server, I have project directories, each with a docker-compose.yml to run the necessary containers. These project directories also hold the actual source files for that particular app, which are mapped to locations inside the containers upon startup.
My question now is: how do I create a professional workflow to encapsulate all of this? Should the whole directory (including the docker-compose files) live on GitHub? Thus each time changes are made, I push the code to my remote, SSH to the server, pull the latest files and rebuild the container. This rebuilding of course means pulling the required images from Docker Hub each time.
Should the whole directory (including the docker-compose files) live on github?
It is best practice to keep all source code, including Dockerfiles, configuration, etc., versioned. Thus you should put all the source code, the Dockerfile and the docker-compose file in a Git repository. This is very common for projects on GitHub that ship a Docker image.
Thus each time changes are made I push the code to my remote, SSH to the server, pull the latest files and rebuild the container
Ideally this process should be encapsulated in a CI workflow using a tool like Jenkins. You basically push the code to the Git repository, which triggers a Jenkins job that compiles the code, builds the image and pushes the image to a Docker registry.
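The Jenkins job itself often boils down to a couple of shell steps like these (registry, image name and tag are placeholders):

    # Build the application image from the repository's Dockerfile and push it
    # to a registry; GIT_COMMIT is provided by Jenkins' Git integration.
    docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
    docker push registry.example.com/myapp:"$GIT_COMMIT"
    # On the server you would then pull the new tag, e.g.
    #   docker-compose pull && docker-compose up -d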
This rebuilding of course means pulling the required images from dockerhub each time.
Docker is smart enough to cache the base images that have been previously pulled. Thus it will only pull the base images once on the first build.

Service from sources inside Docker container

Short question: is it OK (are there any contradictions with the Docker philosophy?) to compile and start an application from sources inside a Docker container?
Assume that I have some hypothetical application. Let it be a Java web service built with Maven, located somewhere on GitHub. The specifics don't matter here.
But before starting this service, I need to set up several config files with the right parameters, known at deployment time. Right now I can build a fully preconfigured application package with a single Maven command, passing all the necessary configuration on the build command line.
Now assume that I need to make it a Docker container and don't have time to refactor it right now. So I have a plan: let my Docker image have Maven and Git, and let an ENTRYPOINT script clone my Git repository, build and start the application, passing all the necessary parameters via the environment.
Is this a suitable plan, or is it just wrong?
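A minimal sketch of the entrypoint described above, with the repository URL, build property and artifact name as placeholders:

    #!/bin/sh
    # Hypothetical entrypoint: clone the sources, build them with parameters
    # taken from the environment, then start the freshly built service.
    set -e
    git clone "$APP_REPO_URL" /opt/app
    cd /opt/app
    mvn -B package -Dapp.config="$APP_CONFIG"
    exec java -jar target/app.war   # artifact name is a placeholder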
