How to use GitLab's Auto DevOps for a multi-container application?

I have a multi-container application, with nginx as web server and reverse-proxy, and a simple 'Hello World' Streamlit app.
It is available on my Gitlab.
I am totally new to DevOps, and would therefore like to leverage GitLab's Auto DevOps to make things easy.
By default, GitLab's Auto DevOps expects a single Dockerfile, located at the root of the project (source).
Surprisingly, I only found one resource on my multi-container use case that aims to answer this issue: https://forum.gitlab.com/t/auto-build-for-multiple-docker-containers/46949
I followed the advice and made only slight changes to the .gitlab-ci.yml, namely the paths to my Dockerfiles.
But then I have an issue: each Dockerfile does not find the files in its own folder.
The app's Dockerfile doesn't find requirements.txt, and nginx's Dockerfile doesn't find project.conf.
It seems that the DOCKERFILE_PATH: src/nginx/Dockerfile variable only gives access to the Dockerfile itself, but does not treat its directory as the location for the build.
How can I customize this .gitlab-ci.yml so that the build passes correctly?
Thank you very much!

The reason the files are not being found is how Docker's build context works. Since you're running docker build from the repository root, your context is the root rather than the directory containing your Dockerfile. That means your docker build command is trying to find /requirements.txt instead of src/app/requirements.txt. You can fix this relatively easily by executing a cd into your src/app directory before you run docker build, and removing the -f flag from your docker build (since you no longer need to specify the Dockerfile's folder).
Since each job executes in an isolated container, you don't need to worry about CDing back to your build root, since your job never runs any other non-docker commands.
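For example, the build job from that forum template might end up looking roughly like this (a sketch; the job names, image tags, and docker:dind setup are assumptions, not part of the Auto DevOps template, so adapt them to your pipeline):

build_app:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # cd so the build context is src/app and requirements.txt is found
    - cd src/app
    - docker build -t "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/app:$CI_COMMIT_SHORT_SHA"

# build_nginx would be identical, with cd src/nginx so that project.conf is in the context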

Related

How to ensure a sourced script is available when using a docker image for GitLab CI?

I use custom docker images (mostly based on phusion) for GitLab CI, and that works fine. But sometimes an image requires sourcing a shell file to work properly (to set PATH, LD_LIBRARY_PATH, etc.).
When running an interactive shell from the docker image (e.g. docker run -it <image_name> /bin/bash), this can be fixed by simply adding the appropriate source command to /etc/profile or similar. But it looks like the scripts in GitLab CI are not run in an interactive shell, so the paths are not properly set up. I work around this by adding the source (or .) command to the GitLab CI script itself, but this is image-specific and should live in the image, not in the script.
Is there anything I can do that will effectively source the file directly on the image (or at least when GitLab CI runs the script on the image)? I could manually inspect what environment changes the sourced file introduces and put them in ENV instructions, but I'm looking for something less fragile when rebuilding the image from possibly updated sources.
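For reference, the workaround described above typically looks like this in .gitlab-ci.yml (a sketch; my/custom-image and /opt/toolchain/env.sh are hypothetical stand-ins for the image and its setup file):

build:
  image: my/custom-image:latest
  script:
    # image-specific detail that has to be repeated in every job's script
    - source /opt/toolchain/env.sh
    - make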

How to use docker images when building artefacts in Actions?

TL;DR: On a self-hosted Actions runner (itself a docker container on my docker engine), I would like to use specific docker images to build artefacts that I would move between build phases, ending with a standalone executable (not a docker container to be deployed). I do not know how to use docker containers as "building engines" in Actions.
Details: I have a home project consisting of a backend in Go (cross compiled to a standalone binary) and a frontend in Javascript (actually a framework: Quasar).
I develop on my laptop in Windows and use GitHub as the SCM.
The manual steps I do are:
build a static version of the frontend which lands in a directory spa
copy that directory to the backend directory
compile the executable that embeds the spa directory
copy (scp) this executable to the final destination
For development purposes this works fine.
I now would like to use Actions to automate the whole thing. I use docker based self-hosted runners (tcardonne/github-runner).
My problem: the containers do a great job isolating the build environment from the server they run on. They are, however, reused across build jobs, and this may create conflicts. More importantly, the default versions of the software provided by these containers are not the right ones (usually, the latest).
The solution would be to run the build phases in disposable docker containers (based on the right image, shortening the build time as a collateral nice-to-have). Unfortunately, I do not know how to set this up.
Note: I do not ultimately want to create docker containers, I just want to use them as "building engines", extract the artefacts from them, and share those between the jobs (in my specific case, one job would build the frontend with Quasar and generate a directory, and the other would be a compilation ending with a standalone executable copied elsewhere).
Interesting premise, you can certainly do this!
I think you may be slightly mistaken with regards to:
They are however reused across build jobs and this may create conflicts
If you run a new container from an image, then you will start with a fresh instance of that container: files, software, etc. all adhere to the original image definition. Which is good, as this certainly aids your efforts. Let me know if I have the wrong end of the stick regarding the above, though.
Base Image
You can define your own image for building, in order to mitigate shortfalls of public images that may not be up to date or may not suit your requirements. In fact, this is a common pattern for CI, and Google does something similar with their cloud build configuration. For either approach below, you will likely want to do something like the following to ensure you have all the build tools you may need.
As a rough example:
FROM golang:1.16.7-buster

RUN apt update && apt install -y \
        git \
        make \
        ... \
    && useradd <myuser> \
    && mkdir /dist

USER <myuser>
You could build and publish this with the following tag:
docker build . -t <containerregistry>:buildr/golang
It would also be recommended that you maintain a separate builder image for other types of projects, such as node, python, etc.
Approaches
Building with layers
If you're looking to leverage build caching for your applications, this will be the better option for you. Caching is only effective if nothing has changed, and since the projects will be built in isolation, it makes it relatively safe.
Building your app may look something like the following:
FROM <containerregistry>:buildr/golang as builder
COPY src/ .
RUN make dependencies
RUN make
RUN mv /path/to/compiled/app /dist
FROM scratch
COPY --from=builder /dist /dist
The gist of this is that you would start building your app within the builder image, such that it includes all the build deps you require, and then use a multi-stage Dockerfile to publish a final static container that includes your compiled source code, with no dependencies (using the scratch image as the smallest image possible).
Getting the final files out of your image would be a bit harder using this approach, as you would have to run an instance of the container once published in order to mount the files and persist them to disk, or use docker cp to retrieve the files from a container (not an image) to your disk.
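One way to script that retrieval (a sketch; docker cp also works on a container that has only been created, not started):

docker build -t myapp:dist .
# a dummy command is needed because the scratch-based image defines no CMD
id=$(docker create myapp:dist placeholder)
docker cp "$id":/dist ./dist
docker rm "$id"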
In GitHub Actions, this would look like running a step that builds a Docker container, where the step can occur anywhere that docker is accessible.
For example:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
Building as a process
This one cannot leverage build caching as well as the previous approach, but you may be able to do clever things like mounting a host npm cache into your container to aid actions like npm install.
This approach differs from the former in that the way you build your app will be defined via CI / a purposeful script, as opposed to the Dockerfile.
In this scenario, it would make more sense to define the CMD in the parent image and mount your source code in, thus not maintaining an image per project you are building.
This would shift the responsibility of building your application from the build time of the image to its runtime. Retrieving your code from the container would be doable through volume mounting, for example:
docker run -v /path/to/src:/src -v /path/to/dist:/dist <containerregistry>:buildr/golang
If the CMD was defined in the builder, that single script would execute and build the mounted-in source code, and subsequently publish it to /dist in the container, which would then be persisted to your host via that volume mapping.
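As a sketch of that setup (build.sh and the paths below are assumptions, not from the original answer), the builder image could ship a small script and declare it as its CMD:

#!/bin/sh
# build.sh: build whatever is mounted at /src and publish the result to /dist
set -e
cd /src
make dependencies
make
cp ./bin/app /dist/

# and in the builder image's Dockerfile:
#   COPY build.sh /usr/local/bin/build.sh
#   CMD ["/usr/local/bin/build.sh"]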
Of course, this applies if you're building locally. It actually becomes a bit nicer in a GitHub Actions context if you wish to keep your build instructions there. You can choose to run steps within your builder container using something like the following:
jobs:
  ...
  container:
    runs-on: ubuntu-latest
    container: <containerregistry>:buildr/golang
    steps:
      - name: Run in container
        run: |
          echo "This job does specify a container."
          echo "It runs in the container instead of the VM."
Within that run: spec, you could choose to call a build script, or enter the commands that might be present in the script yourself.
What you do with the compiled source once acquired is largely up to you 👍
Chaining (Frontend / Backend)
You mentioned that you build static assets for your site and then embed them into your golang binary to be served.
Something like that introduces complications of course, but nothing untoward. If you do not need to retrieve your web files until you build your golang container, then you may consider taking the first approach and copying the content from the published frontend image as part of a Docker directive (sketched below). This makes more sense if you have two separate projects, one for the frontend and one for the backend.
If everything is in one folder, then it sounds like you may just want to extend your build image to facilitate both Go and JS, and then take the latter approach and define those build instructions in a script, a Makefile, or your run: config in your Actions file.
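For the two-project case, a hedged sketch of that "copy from the published image" idea in the backend's Dockerfile (image names and paths are assumptions):

# Reference the already-published frontend image as a stage
FROM <containerregistry>:buildr/spa-output AS frontend

FROM <containerregistry>:buildr/golang AS builder
COPY src/ .
# Embed the pre-built static assets before compiling the Go binary
COPY --from=frontend /dist/spa ./spa
RUN make dependencies && make && mv /path/to/compiled/app /dist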
Conclusion
This is a lot of info; I hope it's digestible, and more importantly, I hope it gives you some ideas as to how you can tackle your current issue. Let me know in the comments if you would like clarification.

gcloud rebuilds complete container but Dockerfile is the same, only the script has changed

I am building Docker containers using gcloud:
gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile
The last 2 steps of the Dockerfile contain this:
COPY script.sh .
CMD bash script.sh
Many of the changes I want to test are in the script, so the Dockerfile stays intact. Building these images locally with docker-compose results in a very quick build because it detects that nothing has changed. However, doing this on gcloud, I notice the complete image being rebuilt even though only a minor change was made to script.sh.
Any way to prevent this behavior?
Your local build is fast because you already have all remote resources cached locally.
It looks like using the Kaniko cache would speed up your build a lot (see https://cloud.google.com/cloud-build/docs/kaniko-cache#kaniko-build).
To enable the cache on your project run
gcloud config set builds/use_kaniko True
The first time you build the container it will feed the cache (kept for 6 hours by default), and subsequent builds will be faster since dependencies will be cached.
If you need to further speed up your build, I would use two containers and have both in my local GCP container registry:
The first one as a cache with all remote dependencies (OS / language / framework / etc.).
The second one as the one you need, with just the COPY and CMD, using the cache container as its base (sketched below).
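A rough sketch of that two-image split (the names are placeholders, and the base image must be built and pushed to your registry first):

# Dockerfile.base -- rebuilt only when the heavy dependencies change
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends <your-packages> \
    && rm -rf /var/lib/apt/lists/*

# Dockerfile -- the image you iterate on; only these cheap layers get rebuilt
FROM eu.gcr.io/<PROJECT_ID>/dockername-base
COPY script.sh .
CMD bash script.sh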
Actually, gcloud has a lot to do:
The gcloud builds submit command:
compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
uploads the files to a storage bucket;
initiates a build using the uploaded files as input;
tags the image using the provided name;
pushes the built image to Container Registry.
Therefore the complete build process can be time-consuming.
There are recommended practices for speeding up builds such as:
building leaner containers;
using caching features;
using a custom high-CPU VM;
excluding unnecessary files from upload (e.g. via a .gcloudignore file, sketched below).
Those could optimize the overall build process.
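For the last item, a minimal .gcloudignore placed next to the Dockerfile works much like .dockerignore and keeps the upload small (a sketch; the entries are examples only):

.git
.gitignore
# large local artefacts the build does not need
data/
*.log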

How to use a local file or retrieve it remotely (conditionally) in Dockerfile?

I'd like to be able to control the source of a file (a Java archive) in a Dockerfile: it is either downloaded (with curl) or a local file on the same machine where I build the Docker image.
I'm aware of ways to control RUN statements, e.g. Conditional ENV in Dockerfile, but since I need access to the filesystem outside the Docker build image, a RUN statement won't do. I'd need a conditional COPY or ADD or a workaround.
I'm interested in built-in Docker functions/features which avoid the use of more than one Dockerfile or wrapping the Dockerfile in a script using templating software (those just workarounds popping into my head).
You can use a multi-stage build, which is fairly new in Docker:
https://docs.docker.com/develop/develop-images/multistage-build/
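One way to apply that here is to select a stage via a build ARG, so the same Dockerfile can either COPY a local archive or download it with curl (a sketch; the URL and image tags are placeholders, and BuildKit is assumed so that the unused stage is skipped):

ARG SOURCE=remote

FROM alpine:3 AS remote
RUN apk add --no-cache curl \
 && curl -fsSL -o /app.jar https://example.com/path/to/app.jar

FROM alpine:3 AS local
COPY app.jar /app.jar

# Select the source at build time, e.g.: docker build --build-arg SOURCE=local .
FROM ${SOURCE} AS source

FROM eclipse-temurin:17-jre
COPY --from=source /app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]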

Coverity scan while building in Docker container

I have a custom Docker container in which I perform the build and test of a project. It is somehow integrated with Travis CI. Now I want to run the Coverity scan analysis from within Travis CI as well, but the tricky part (if I understand the Coverity docs correctly) is that I need to run the build, and the build runs in the container.
Now, according to the cov-build --help
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable, considering Docker's philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
to install locally all the necessary stuff to build in the container, only to be able to run the Coverity scan;
to run cov-build from within the container, since:
I believe this would increase the docker image size significantly;
I use the Travis CI addon for the Coverity scan, and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all that locally and it doesn't work either.
I would be grateful for any suggestions on this problem. Thank you.
Okay, I sort of solved the issue.
I downloaded and modified (just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
Then I installed Coverity on the host machine (in my case the Travis CI machine).
I ran the docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the docker image size.
I executed the cov-build command and uploaded the analysis using another part of the script, directly from the docker container.
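A sketch of those steps as shell commands (the paths and image name are placeholders):

# Coverity tools are installed on the host (the Travis machine) under /opt/coverity
docker run -dit --name buildc -v /opt/coverity:/opt/coverity:ro my/build-image
# Run the capture inside the container so the compiler calls are intercepted there
docker exec buildc /opt/coverity/bin/cov-build --dir=/tmp/cov-int <custom build commands>
# Upload cov-int from inside the container (as in the answer), or copy it out first:
docker cp buildc:/tmp/cov-int ./cov-int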
Hope this helps someone struggling with similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
Here is the solution I use; the steps go in "script", "after_script" or whichever phase of the Travis job's lifecycle you want:
Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account).
Untar the archive into a coverity_tool directory.
Start your docker container as usual, without needing to mount the coverity_tool directory as a volume (provided you created coverity_tool inside the directory from which the docker container is started).
Build the project using the cov-build tool inside docker.
Archive the generated cov-int directory.
Send the result to Coverity using a curl command.
Step 6 should be feasible inside the container, but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN must be encrypted and exported as an environment variable.
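Put together, in .travis.yml terms it might look roughly like this (a sketch; the download and upload commands follow the usual Coverity Scan workflow, and the project name, email, build command, and image name are placeholders):

script:
  # 1-2. fetch and unpack the Coverity tools
  - wget https://scan.coverity.com/download/linux64 --post-data "token=$COVERITY_SCAN_TOKEN&project=<project>" -O coverity_tool.tgz
  - mkdir coverity_tool && tar xzf coverity_tool.tgz --strip-components=1 -C coverity_tool
  # 3-4. build inside the container with cov-build (coverity_tool sits in the mounted work dir)
  - docker run --rm -v "$PWD:/work" -w /work my/build-image ./coverity_tool/bin/cov-build --dir cov-int <build command>
  # 5. archive the results
  - tar czf cov-int.tgz cov-int
  # 6. send them to Coverity Scan (done outside the container here)
  - curl --form token="$COVERITY_SCAN_TOKEN" --form email=<email> --form file=@cov-int.tgz --form version=<version> "https://scan.coverity.com/builds?project=<project>"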
A concrete example is often more understandable than a long text; here is a commit that applies above steps to build and send results to coverity scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c

Resources