Imagine that I need to build a big CUBA application (it uses Gradle to manage dependencies and produces a .war during the build).
I need to dockerize both the build and the application. The latter runs in a Tomcat image into which the .war is copied.
Most of the dependencies actually remain unchanged between consecutive builds of the project, but the build seems to resolve all of them every time, which takes forever...
I'd like to produce a custom Docker image (based on gradle:jdk8 or something like it) that pre-imports all the Gradle dependencies.
This image would be used for subsequent builds to produce the .wars and would be rebuilt only when there is a change in the dependencies' versions.
However, I'm quite new to Gradle and I don't know:
whether it is possible to import the dependencies without building the project;
whether it is actually possible to use previously imported dependencies to build the project in a shorter time.
Any advice/suggestion? Is this possible?
I hope my question is clear, but I have difficulty explaining my aim; ask me if you need a better explanation.
Thanks in advance.
You mean that you want to build a Docker image for build runner (or build agent), right?
It's not possible to import the dependencies without building the project, because Gradle resolves dependencies lazily, only when they are needed.
E.g. the artifacts needed to build a CUBA theme are resolved only when the web theme is built.
Yes, it's possible to re-use previously downloaded library artifacts (cached in ~/.gradle/caches) to build the project in a shorter time.
So in your case you need to create the build runner's Docker image by fully building your project once in a Docker container. Dependencies will be downloaded and cached in the file system. Then you can pull that image and use it again for subsequent builds, avoiding re-downloading the artifacts.
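A minimal sketch of such a build-runner image, assuming a standard CUBA/Gradle project layout (the GRADLE_USER_HOME override and the build task name are assumptions you may need to adapt):

```dockerfile
FROM gradle:jdk8

# Keep the dependency cache inside the image layers; the stock gradle image
# declares /home/gradle/.gradle as a volume, so anything written there during
# `docker build` may be discarded.
ENV GRADLE_USER_HOME /home/gradle/cache

# Copy the project and run a full build once so that every dependency is
# resolved and stored in the cache baked into the image.
COPY --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
RUN gradle build --no-daemon
```

Subsequent builds can then run in containers started from this image (for example by mounting the current sources over /home/gradle/project), and Gradle will reuse the cached artifacts instead of downloading them again.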
If you change CUBA platform version in your project, you'll need to re-create the build runner image if you want to avoid downloading cuba-*.jar artifacts for every build.
Is it possible to run bentoml build without importing the services.py file during the process?
I'm trying to put the bento build and containerize steps into our CI/CD server. Our model depends on some OS packages and some Python packages being installed. I thought I could run bentoml build to package the model code and binaries that are present, and leave the dependency specification to the containerize step.
To my surprise, bentoml build tried to import the service file during packaging, and the build failed since I didn't have the dependencies installed on my CI/CD machine.
Can I prevent this import while building/packaging the model? Maybe I should skip bentoml containerize, create my bento container by hand, and just run bentoml serve inside it.
I feel that having to install the dependencies by hand duplicates the effort of specifying them in the bentofile.yaml and hurts the reproducibility of my environment.
This is not currently possible. The community is working on an environment management feature, such that an environment with the necessary dependencies will be automatically created during build.
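In the meantime, a possible workaround is simply to install the same dependencies on the CI runner before packaging, so the service module can be imported. A rough sketch (the requirements file and bento tag are placeholders):

```sh
# Install the runtime dependencies on the CI machine so `bentoml build`
# can import the service module during packaging.
pip install -r requirements.txt

# Package the model/service, then build the container image from the bento.
bentoml build
bentoml containerize my_service:latest
```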
I have been searching far and wide to see if I can find information on Jenkins incremental pipeline builds that does not involve Maven.
The general idea is that I want to build a generic project and run specific steps of the pipeline if the underlying code has changed. If the code did not change, I want to re-use the results from a previous build.
The reason why I want to do this, is to drastically reduce build times for huge projects.
Imagine that you only need to fix one line in an SCSS file, but the whole project needs to be rebuilt, repackaged, etc. because of it. In the meantime, the site is live and broken, waiting 15 minutes for the fix.
Can someone give a basic example of how such a build can be created or where I can find more information on incremental building?
The only thing I have been able to find is incremental building for Maven projects, but this is not applicable for me.
The standard solution is to split the project into modules that depend on each other.
Publish the built artifacts of your modules to a binary repository like Sonatype Nexus (you can easily create a private npm repo as well as a proxy npm repo).
During the build, download the dependencies instead of building them.
If this solution is not the one you want to take, you will have a hard time hacking one together. To persist the state of your steps, an easy approach is to create files in the job workspace and read them at the next build.
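A minimal shell sketch of that last idea, assuming the code that rarely changes lives under src/frontend and is built with npm (both names are assumptions):

```sh
# Compare the tree hash of the frontend sources against the hash recorded
# by the last successful build; rebuild only if it changed.
NEW_HASH=$(git rev-parse HEAD:src/frontend)
OLD_HASH=$(cat .last-frontend-hash 2>/dev/null || true)

if [ "$NEW_HASH" != "$OLD_HASH" ]; then
  npm run build
  echo "$NEW_HASH" > .last-frontend-hash
else
  echo "Frontend unchanged, re-using the previous build output"
fi
```

In a Jenkins pipeline this would run inside the relevant stage, with .last-frontend-hash kept in the job workspace between builds.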
I ended up in this situation because the documentation was not clear. The gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld command will archive the contents of my source folder and then run the Docker build on the Google build server.
Also, it only looks at the .gitignore file to decide which contents to archive. Since it is a Docker build, it should honor the .dockerignore file.
Also, there is no word about how to compile the application. If the application is not precompiled, it has to be compiled before it is dockerized.
The quickstart simply assumes that the application is precompiled and that all the contents of the folder (minus whatever .gitignore excludes) are required to run it. People new to the technology will not be aware of all that; I just figured it out by myself.
So the alternative is either to include the build steps in the Dockerfile (which will make my image heavy), or to build a Docker image locally (manually), push it to the repository (manually), and then publish it to Cloud Run (using the second documented command, or manually).
Is there anything I am missing over here?
Cloud Build respects .dockerignore. It will upload all files that are not in .gitignore, but once uploaded, it will respect .dockerignore regarding which files to use for the build.
Compiling your application is usually done at the same time as "containerizing" it. For example, for a Node.js app, the Dockerfile must run npm install --production. I recommend looking at the many examples in the quickstart.
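For example, a minimal Dockerfile along those lines might look like this (the base image, file names, and start command are assumptions for illustration):

```dockerfile
FROM node:lts-slim
WORKDIR /app

# Install dependencies and build as part of the image build,
# so source + Dockerfile is all Cloud Build needs.
COPY package*.json ./
RUN npm install --production
COPY . .

CMD ["node", "index.js"]
```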
I think you've got it, essentially your options are:
Building using Cloud Build
Building locally and pushing using Docker
Generally, if you need additional build steps, I would recommend including them in your Dockerfile. Ideally you should be able to go from source + Dockerfile to a complete image in either case.
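Both options might look roughly like this (project ID, image name, and service name are placeholders):

```sh
# Option 1: have Cloud Build build and push the image from source + Dockerfile
gcloud builds submit --tag gcr.io/PROJECT_ID/helloworld

# Option 2: build locally and push with Docker
# (one-time: allow the local Docker client to push to gcr.io)
gcloud auth configure-docker
docker build -t gcr.io/PROJECT_ID/helloworld .
docker push gcr.io/PROJECT_ID/helloworld

# Either way, deploy the pushed image to Cloud Run
gcloud run deploy helloworld --image gcr.io/PROJECT_ID/helloworld
```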
I have a TeamCity build project that parameterizes a docker-compose.yml template with the build versions of a dozen Docker containers, so in order to get the build_counter from each container, I have them set as snapshot dependencies in the docker-compose build job. Each container's Dockerfile and other files are in their own BitBucket repo, and they have triggers for the appropriate files. In the snapshot dependencies in the docker-compose build I have them set to "Do not run new build if there is a suitable one" but it still tries to run all of the dependent builds even though there aren't any changes in their respective repos.
This turns what should be a very simple and quick build into a very long one. And oftentimes one of the dependent builds will fail with "could not collect changes: connection refused", which I suspect has to do with TC trying to hit all of these different repos at once.
Is there something I can do to not trigger a build of every dependency every time the docker-compose build is run?
Edit:
Here's an example of what our docker-compose.yml.j2 looks like: http://termbin.com/b2xy
Obviously, I've sanitized it for sharing, and our real docker-compose template has about a dozen services listed.
Here is an example Dockerfile for one of the services: http://termbin.com/upins
Rather than changing the source code of your build (the parameterized docker-compose.yml) and brute-forcing the whole build every time, you could consider building the containers independently, tagging them with a version increment and labels, and storing the images in a local registry after the build. Then use docker-compose to suit your runtime needs: docker-compose can use multiple YAML files, so if you need other images for a particular build, just pull the ones you need, and for production use another YAML file that composes the system to run. Add LABEL instructions to your Dockerfile; see http://label-schema.org//rc1/ for a set of labels that may suit your needs.
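A rough sketch of that workflow (the registry address, image name, version, and compose file names are placeholders):

```sh
# Build and label one service independently, then push it to a local registry.
docker build -t localhost:5000/my-service:1.0.42 \
  --label org.label-schema.version=1.0.42 \
  ./my-service
docker push localhost:5000/my-service:1.0.42

# Compose the system from a base file plus an environment-specific override.
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```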
I know this is an old question, but I have come across this issue, and you can't do what sounds reasonable, i.e. reuse recent green builds without rebuilding. This is partly down to what snapshot dependencies are designed to do by JetBrains.
The basic idea is that dependencies are for synchronized revisions of code: if you build Compose at a certain time, it needs to use not just its own source code from that point in time but also the code of all its dependencies from that same point, regardless of whether anything significant has changed.
In my example, there were always changes because the same repo was used for lots of projects and had unrelated changes that would not trigger a build but would make the project appear behind and cause a build.
If your dependencies have NOT changed and show no changes pending, then they shouldn't build. In this case, you need to tick "Do not run new build if there is a suitable one". "Enforce Revisions Synchronization" is slightly confusing: if ticked, it will find older builds that match the first build after your build was triggered; if unticked, it can use newer builds.
(I understand this question is somewhat out of scope for Stack Overflow, because it contains several problems and is somewhat vague. Suggestions on how to ask it in a better way are welcome.)
I have some open source projects that depend on each other.
The code resides on GitHub, the builds happen on Shippable using Docker images, which in turn are built on Docker Hub.
I have set up an artifact repo and a Debian repository where the Shippable builds put the packages and the Docker builds use them.
The build chain looks like this in terms of deliverables:
pre-zenta docker image
zenta docker image (two steps of docker build because it would time out otherwise)
zenta debian package
zenta-tools docker image
zenta-tools debian package
xslt docker image
adadocs artifacts
Currently I am triggering the builds by pushing to GitHub and sometimes rerunning failed builds on Shippable after the Docker build has run.
I am looking for solutions for the following problems:
Where should the Dockerfiles live? Right now they are in the repo of the package that needs the resulting Docker image for its build. This way all the information needed to build the package is in one place, but sometimes I have to trigger an extra build to get the package actually built.
How to trigger build automatically?
..., and in a way that supports git-flow? For example, if I change the code on the zenta develop branch, I want to make sure that zenta-tools builds and tests against the development version of it before merging to master.
Is there a tool with which I can get an overview of the health of the whole build chain?
Since your question is related to Shippable, I've created a support issue for you here - https://github.com/Shippable/support/issues/2662. If you are interested in discussing the best way to handle your scenario, you can also send me an email at support#shippable.com. You can set up your entire flow, including building the Docker images, using Shippable.