I am trying to make the Gradle 6.9 cache work in a Docker CI build invoked by Jenkins running in Kubernetes, without access to scan.gradle.org.
The idea is to save an image after running gradle --build-cache --no-daemon classes bootJar and use that as the FROM of subsequent builds. This works for me on my own machine, but I cannot make it work on the Jenkins server. Everything happens in the Gradle home directory, so everything should be cached. I am wondering whether the path to that directory matters, as it is deep in a Kubernetes mount under /var, and this is the only difference between the two Docker builds I can think of.
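Roughly, the setup looks like this (a sketch only; the image names are placeholders, and I assume the official gradle image, whose Gradle home is /home/gradle/.gradle):

    # Dockerfile used to seed the cache
    FROM gradle:6.9-jdk11
    WORKDIR /home/gradle/project
    COPY . .
    # populates /home/gradle/.gradle (dependency cache and build cache) in the image
    RUN gradle --build-cache --no-daemon classes bootJar

    # the result is pushed, e.g. as my-registry/gradle-warm:latest, and later builds
    # start from it:
    #   FROM my-registry/gradle-warm:latest
    # then re-run the same gradle invocation, expecting UP-TO-DATE / FROM-CACHE tasks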
Caching the whole build would be preferable, but caching just the Maven dependencies would already be a substantial saving.
What am I missing? Is there a way to get insight into why Gradle decides to reuse what it already has or not?
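For reference, Gradle's own diagnostics can be turned on like this: --info prints, per task, why it was not UP-TO-DATE or loaded FROM-CACHE, and org.gradle.caching.debug logs the individual inputs hashed into each task's cache key, which helps spot path-sensitive inputs (relevant if the deep /var path really is the difference):

    gradle --build-cache --no-daemon --info \
        -Dorg.gradle.caching.debug=true \
        classes bootJar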
Related
I've run into a very strange issue where a Dockerfile fails at one of its steps when it is built on GCP Cloud Build.
However, it builds just fine locally.
What might be the cause of the issue? Why would there be any difference?
The actual command that fails is an npm build inside the container.
It turned out to be a .env file that I had locally but that was not present in the repository because of a .gitignore entry.
We run Gradle builds within Docker containers (the reason is that the build requires software we don't want to install on the host: node, wine, etc. Not even Java or Gradle is installed on the host).
Launching each container with an empty cache is annoyingly slow.
I've set up Gradle 4.0's HTTP build cache. That avoided the need to java-compile in most cases. The performance gain is quite low, though, because build time is dominated by downloading dependencies. gradlew --parallel helped to mitigate that a bit, but to really boost the build, downloading should be avoided altogether.
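For reference, the HTTP build cache is configured in settings.gradle along these lines (the URL is a placeholder):

    // settings.gradle -- remote HTTP build cache
    buildCache {
        remote(HttpBuildCache) {
            url = 'https://build-cache.example.com/cache/'
            push = true   // let CI populate the cache
        }
    }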
Sharing ~/.gradle as a docker volume is problematic, because it will cause contention when containers run in parallel (https://github.com/gradle/gradle/issues/851).
So, what else can be done to avoid downloading the same artifacts over and over again?
While it is problematic to share Gradle caches between containers running in parallel, it is absolutely fine to reuse Gradle caches when containers run sequentially, and builds launched by Jenkins run sequentially.
Jenkins builds can be sped up by using a Docker volume for the .gradle folder. The only drawback is that each job requires its own volume.
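For example (a sketch; the volume and image names are made up, and I assume a Gradle home of /home/gradle/.gradle):

    # one named volume per Jenkins job, mounted as the Gradle user home
    docker volume create gradle-cache-myjob
    docker run --rm \
        -v gradle-cache-myjob:/home/gradle/.gradle \
        my-build-image gradle --no-daemon build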
Alternatively, you could build a Docker image that already contains a cache, then use this image to run the build containers.
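A sketch of that approach (the gradle image tag and the choice of task are assumptions; resolving dependencies during docker build bakes them into the image):

    FROM gradle:4.0
    WORKDIR /home/gradle/project
    # copy only the build scripts, so this layer is reused until dependencies change
    COPY build.gradle settings.gradle ./
    # resolving dependencies populates /home/gradle/.gradle inside the image
    RUN gradle --no-daemon dependencies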
Short question: is it OK (does it contradict Docker philosophy) to compile and start an application from sources inside a Docker container?
Assume that I have some hypothetical application. Let it be a Java web service built with Maven, located somewhere on GitHub. The specifics don't matter here.
But before starting this service, I need to set up several config files with the right parameters, which are known at deployment time. Right now I can build a fully preconfigured application package with a single Maven command, passing all the necessary configuration on the build command line.
Now assume that I need to make it a Docker container and don't have time to refactor it right now. So I have a plan: my Docker image contains Maven and Git, and an ENTRYPOINT script clones my Git repository, builds and starts the application, passing all the necessary parameters via the environment.
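Sketched out (the repository URL, property names and file names are all hypothetical):

    # Dockerfile
    FROM maven:3-jdk-8
    RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
    COPY entrypoint.sh /entrypoint.sh
    RUN chmod +x /entrypoint.sh
    ENTRYPOINT ["/entrypoint.sh"]

    # entrypoint.sh
    #!/bin/sh
    set -e
    git clone https://github.com/example/service.git /app
    cd /app
    # configuration known at deployment time arrives via the environment
    mvn package -DskipTests -Dservice.port="$SERVICE_PORT"
    exec java -jar target/service-*.jar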
Is this a suitable plan, or is it just wrong?
We have a Jenkins Docker slave template that successfully builds a piece of software, for example a Gradle project. It is based on https://hub.docker.com/r/evarga/jenkins-slave/.
When we fire up the Docker slave, the dependencies are downloaded every time we do a build. We would like to speed up the build so that downloaded dependencies can be reused by the same build or even by other builds.
Is there a way to specify an external folder so that the cache is reused? Or another solution that shares the same cache?
I think the answers described here only work with an exclusive cache per build job. If I have different Jenkins jobs running on Docker slaves, I will get into trouble with this scenario: if the jobs run at the same time and write to the same mounted cache in the host filesystem, it can become corrupted. Otherwise you must mount a folder with the job name as part of the filesystem path (a Jenkins job runs only once at a time).
Here's an example for Maven dependencies; it's exactly what Opal suggested. You create a volume which refers to a cache folder on the host.
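Something along these lines (the host path and image are placeholders):

    # bind-mount the host's Maven repository so downloads survive the container
    docker run --rm \
        -v /var/cache/maven-repo:/root/.m2 \
        -v "$PWD":/usr/src/app -w /usr/src/app \
        maven:3-jdk-8 mvn package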
I don't like the way I release my projects to the production server. Maybe I just don't have enough experience; nobody taught me how to do this the right way.
For now I have several repos with Scala (on top of Spray). I have everything needed to build and run these projects on my local machine (of course; I develop them). So I installed Jenkins on my production server in order to sync from Git, build, and run. It works for now, but I don't like it, because I need to install Jenkins on every machine where I want to run my projects. What if I want to show my project to a friend in a cafe?
So I've come up with an idea: what if I run the tests before building the app, make a portable build (e.g. with sbt-native-packager) and save it on a remote "release server"? That server just keeps these ready-to-launch apps.
Then I go to the production server and run a bash script that downloads the executable from the release server and runs my project on that machine.
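The script could be as simple as this (the host name and archive layout are hypothetical; sbt-native-packager generates the bin/ launcher):

    #!/bin/sh
    # fetch the latest packaged build from the release server and run it
    set -e
    curl -fSL https://release.example.com/myapp/latest/myapp.tgz -o /tmp/myapp.tgz
    mkdir -p /opt/myapp
    tar -xzf /tmp/myapp.tgz -C /opt/myapp --strip-components=1
    exec /opt/myapp/bin/myapp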
In the future I want to:
- download and run the projects inside Docker containers;
- keep ready-to-serve static files for the frontend, and run a Docker container with nginx and a linked volume holding those static files.
I heard about Nexus (http://www.sonatype.org/nexus/), which is used to store artifacts (binaries, images and so on). I believe there should be open-source projects built around an idea like mine.
Any help is appreciated!
A common anti-pattern, in my opinion, is to build the software every time you perform a deployment. You are best advised to separate the process of building from the act of deployment by introducing a binary repository manager (you've mentioned one such example, Nexus).
See also:
- Best Practice - Using a Repository Manager
- Binary repository manager
- How can I automatically deploy a war from Nexus to Tomcat?
Only successfully tested builds get pushed to the repository, so you can treat each successful build as a mini-release. A by-product of this is that your production server does not have to have all the build software pre-installed (Jenkins, Ant, Maven, etc.).
It should be noted that modern repository managers like Nexus and Artifactory now support Docker registries too, so you can use them to deploy Docker images as well.
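Once such a registry is set up, deploying an image reduces to the standard Docker commands (the host and port are placeholders):

    # tag and push a build to a private registry hosted by Nexus/Artifactory
    docker tag myapp:1.0.0 nexus.example.com:8082/myapp:1.0.0
    docker push nexus.example.com:8082/myapp:1.0.0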
Update
Here is a related question about Chef, a technology where there is no intermediate binary file (like a jar). In that case the software is still "released" by creating a tar distribution that is stored in the repository.
chef cookbook delivery - chef server vs. artifactory + berkshelf