Using ccache in automated builds on Docker Cloud

I am using automated builds on Docker Cloud to compile a C++ app and provide it in an image.
Compilation is quite long (in the 2-3 hour range) and commits on GitHub are frequent (~10 to 30 per day).
Is there a way to keep the build cache (using ccache) somehow?
As far as I understand it, Docker's layer caching is useless here, since the compilation layer that produces the ccache data will not be reused due to the source code changes.
Or can we tweak things to bring some data back into the first layer?
Any other solution? Pushing it somewhere?
Here is the Dockerfile:
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG}
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
RUN cmake \
    -GNinja \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DBINDINGS_GLOBAL_INSTALL=ON \
    -DWITH_STAGED_PLUGINS=ON \
    -DWITH_GRASS=ON \
    -DSUPPRESS_QT_WARNINGS=ON \
    -DENABLE_TESTS=OFF \
    -DWITH_QSPATIALITE=ON \
    -DWITH_QWTPOLAR=OFF \
    -DWITH_APIDOC=OFF \
    -DWITH_ASTYLE=OFF \
    -DWITH_DESKTOP=ON \
    -DWITH_BINDINGS=ON \
    -DDISABLE_DEPRECATED=ON \
    .. \
 && ninja install \
 && rm -rf /usr/src/QGIS
WORKDIR /

You should try saving and restoring your cache data with a third-party service:
- an online object storage like Amazon S3
- a simple FTP server
- an Internet-reachable machine with SSH, so you can use scp
I'm assuming that your cache data is stored inside the `~/.ccache` directory.
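If you want to be certain where ccache writes its data (so the save/restore steps stay predictable), you can pin the location explicitly. A minimal sketch, assuming the ccache package is already installed in the base image:
ENV CCACHE_DIR=/root/.ccache
# optionally cap the cache size and print statistics to verify the cache is being used
RUN ccache -M 2G && ccache -s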
Using Docker multi-stage builds
For some time now, Docker has supported multi-stage builds, and you can try using them to implement the solution with a single Dockerfile:
Warning: I've not tested it
# STAGE 1 - YOUR ORIGINAL DOCKER FILE CUSTOMIZED
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG} as builder
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
# restore the cache (assumed to live in /root/.ccache; note that ~ is not expanded by Docker)
RUN curl -o /tmp/ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 \
 && tar -xjf /tmp/ccache.tar.bz2 -C /root
RUN cmake \
    -GNinja \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DBINDINGS_GLOBAL_INSTALL=ON \
    -DWITH_STAGED_PLUGINS=ON \
    -DWITH_GRASS=ON \
    -DSUPPRESS_QT_WARNINGS=ON \
    -DENABLE_TESTS=OFF \
    -DWITH_QSPATIALITE=ON \
    -DWITH_QWTPOLAR=OFF \
    -DWITH_APIDOC=OFF \
    -DWITH_ASTYLE=OFF \
    -DWITH_DESKTOP=ON \
    -DWITH_BINDINGS=ON \
    -DDISABLE_DEPRECATED=ON \
    .. \
 && ninja install
# save the current cache online (~ is not expanded by WORKDIR either, so use /root explicitly)
WORKDIR /root
RUN tar -cjf ccache.tar.bz2 .ccache
RUN curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
# STAGE 2
FROM alpine:latest
# YOUR CUSTOM LOGIC TO CREATE THE FINAL IMAGE WITH ONLY REQUIRED BINARIES
# USE THE FROM IMAGE YOU NEED, this is only an example
# E.g.:
# COPY --from=builder /usr/src/QGIS/build/YOUR_EXECUTABLE /usr/bin
# ...
In stage 2 you build the final image that will be pushed to your repository.
Using Docker Cloud hooks
Another, less clean, approach could be to use a Docker Cloud pre_build hook file to download the cache data:
#!/bin/bash
echo "=> Downloading build cache data"
curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 # e.g. Amazon S3 like service
tar -xjf ccache.tar.bz2 # extracts the .ccache folder into the build context
Obviously, you can use dedicated Docker images to run curl or tar in this script, mounting the local directory as a volume.
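For example, something like this in the hook (curlimages/curl is one publicly available image; treat the exact invocation as a sketch):
docker run --rm -v "$(pwd)":/data curlimages/curl \
  -o /data/ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2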
Then copy the extracted .ccache folder into your container during the build, using a COPY command before your cmake call:
WORKDIR /usr/src/QGIS/build
# ~ is not expanded in COPY destinations, so use the explicit home path
COPY .ccache /root/.ccache
RUN cmake ...
To make this work, you need a way to upload your cache data after the build; you can do this easily with a post_build hook file:
#!/bin/bash
echo "=> Uploading build cache data"
tar -cvjSf ccache.tar.bz2 ~/.ccache
curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
But your compilation data isn't available from the outside, because it lives inside the container. So you should upload the cache from within your main Dockerfile, after the compilation step (the cache is only populated once ninja has actually compiled something):
RUN cmake ... \
 && ninja ... \
 && tar ... \
 && curl ... \
 && rm ...
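Spelled out, the tail of the Dockerfile could look like this (untested sketch; it assumes the cache lives in /root/.ccache and reuses the placeholder object-storage URL from above):
# same cmake invocation as above, abbreviated here to its first flags
RUN cmake -GNinja -DCMAKE_INSTALL_PREFIX=/usr .. \
 && ninja install \
 && tar -cjf /tmp/ccache.tar.bz2 -C /root .ccache \
 && curl -T /tmp/ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2 \
 && rm -rf /usr/src/QGIS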
If curl or tar aren't available, just add them to your container using the package manager (qgis/qgis3-build-deps is based on Ubuntu 16.04, so they should be available).
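For example (a sketch for an Ubuntu 16.04 base; tar is normally already present):
RUN apt-get update && apt-get install -y curl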

Related

Adding build tools to a Kaniko image for Gitlab-CI

Given a monorepo of ~35 services built with GitLab CI on k8s runners.
The images are built using Kaniko, utilizing <job>.extends of a prototype template, and life is great.
However, lately we wanted to save a key in Consul and change a GitLab CI env-var after a successful build - which requires curl, and preferably jq.
I've been trying to create the following image to serve as the image for image-building jobs:
FROM gcr.io/kaniko-project/executor:debug
RUN mkdir -p /workspace \
 && wget -qO /workspace/curl https://github.com/moparisthebest/static-curl/releases/download/v7.86.0/curl-amd64 \
 && chmod +x /workspace/curl \
 && wget -qO /workspace/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 \
 && chmod +x /workspace/jq
ENV PATH "$PATH:/workspace"
The build appears to succeed.
However, de facto, when used in a pipeline job with the following script:
.build-with-kaniko:
  script:
    - mkdir -p /kaniko/.docker;
      echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":..... > /kaniko/.docker/config.json
    - which jq || log no jq;
      which curl || log no curl;
    - >-
      /kaniko/executor
      --context $PROJECT_PATH
      --dockerfile $DOCKERFILE
      --destination ${CI_REGISTRY}/${DOCKER_REPO}:${TAG}
    - which jq || log no jq;
      which curl || log no curl;
Before running the executor, curl and jq are found.
But after running the executor - they are gone!! <tam-tam-taaaaaaAAAMM!!!> :o
I tried placing them in a few different folders: /busybox, /kaniko, /workspace, or even a custom dir /misc - and could not get it to work...
I thought maybe it packs them to the target image - but no, they are not there.
I also noted that after building with --no-push they are still there
(but then I do not get my image on the registry...).
What is going on? Is there a post-push cleanup mechanism I should instruct to leave these two files alone?
Help?
What must I do to help kaniko understand I need these two utilities?
OMG. :facepalm:
I knew I'd find the answer only after posting the question... :shrug:
Here's what worked:
Declare the tools' directory as a volume:
FROM gcr.io/kaniko-project/executor:debug
RUN mkdir -p /misc \
 && wget -qO /misc/curl https://github.com/moparisthebest/static-curl/releases/download/v7.86.0/curl-amd64 \
 && chmod +x /misc/curl \
 && wget -qO /misc/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 \
 && chmod +x /misc/jq
VOLUME /misc
ENV PATH "$PATH:/misc"
I got the clue from the current Dockerfile of the kaniko:debug image itself (at the time of this writing).
The image is recommended as the base image for GitLab CI jobs that use kaniko - and it declares /busybox as a volume, presumably because the executor wipes the filesystem during a build but spares paths declared as volumes.
I still don't understand why putting the tools in the /busybox dir did not work, but I have a working solution now, and no time to dig deeper :sad: :shrug:
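For reference, a minimal job using the patched image might look like this (the image name is hypothetical):
.build-with-kaniko:
  image: my-registry/kaniko-with-tools:debug   # the image built from the Dockerfile above
  script:
    - /kaniko/executor --context $PROJECT_PATH --dockerfile $DOCKERFILE --destination ${CI_REGISTRY}/${DOCKER_REPO}:${TAG}
    - curl --version && jq --version   # still on PATH via /misc after the executor runs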

Build all containers before pushing to registry using Buildkit

I've the following Dockerfile:
FROM php:7.4 as base
RUN apt-get install -y libzip-dev && \
    docker-php-ext-install zip
# install some other things...

FROM base as intermediate
COPY upgrade.sh /usr/local/bin

FROM base as final
COPY start-app.sh /usr/local/bin
As you can see, I have 3 stages:
base
intermediate
final
At first I build the base container and then both "derived" containers. My problem is that I need to push both the intermediate and the final container to my (GitLab) registry. The containers are built using the following gitlab-ci.yml:
.build_container: &build_container
  stage: build
  image:
    name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/moby/buildkit:rootless
    entrypoint: ["sh", "-c"]
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  script:
    - mkdir ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > ~/.docker/config.json
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=${CI_PROJECT_DIR} \
        --local dockerfile=${CI_PROJECT_DIR} \
        --opt filename=./${DOCKERFILE} \
        --import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
        --export-cache type=inline \
        --output type=image,name=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG},push=true

Build:Container:
  <<: *build_container
  variables:
    DOCKERFILE: "Dockerfile"
OK, I could simply add another buildctl-daemonless.sh command and use a separate file, but I want to make sure that both containers (intermediate and final) are built successfully before pushing them. So I'm looking for a solution that builds the intermediate and the final containers first and then pushes both to the registry, e.g. something like this:
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --import-cache type=registry,ref=${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} \
  --export-cache type=inline

buildctl-daemonless.sh push intermediate final
Unfortunately there's no "push" command in buildctl-daemonless.sh as far as I know. So does anyone have an idea how I can do this?
Directly from BuildKit, I don't think there's a separate push command. The output of a multi-platform image usually goes directly to a registry, but it can also be an OCI layout tar file. To write that tar file, you would add the following option:
--output type=oci,dest=path/to/output.tar
With that tar file, there are various standalone tools that can help. Pretty sure Red Hat's skopeo and Google's go-containerregistry/crane both have options. My own tool is regclient, which includes the following regctl command:
regctl image import ${image_name_tag} path/to/output.tar
There's a regclient/regctl:alpine image that's made to be embedded in CI pipelines like GitLab, and it will read registry auths from the same ~/.docker/config.json.
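Putting the two steps together in the GitLab job, a sketch (untested; it assumes regctl is available in the job image and authenticated via the same config.json):
# 1) build once, exporting an OCI layout tar instead of pushing
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=${CI_PROJECT_DIR} \
  --local dockerfile=${CI_PROJECT_DIR} \
  --opt filename=./${DOCKERFILE} \
  --output type=oci,dest=build.tar
# 2) push only after the build succeeded
regctl image import ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG} build.tar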

Docker proxy config not working for ADD in Dockerfile

I am trying to write a Dockerfile that adds a file to the image like this:
ADD https://repository.internal/file.zip /tmp/
The repository.internal host is only reachable through a proxy. I provide the proxy configuration with the --config option, but the ADD command does not seem to use the proxy and fails.
I know the proxy configuration is correct because I added the line
RUN curl https://repository.internal/file.zip
which is working fine.
Is there any possibility to make the ADD command use the proxy config as well?
As per my comments above, I believe this has something to do with the way the Docker build process handles the ADD and RUN commands internally... I can't find documentation to back this up - so someone with greater internal knowledge may confirm or deny - but it makes sense, as a RUN command executes in a layer on top of the image being built, whereas the ADD command is performed by the builder itself and its results are baked into the image.
Whichever way this is handled, you can achieve what you need by moving to the RUN method as follows:
FROM <your base image>

RUN curl https://repository.internal/file.zip >> /tmp/file.zip \
 && cd /tmp \
 && unzip file.zip \
 && rm file.zip
And you will have the files unzipped.
The rm at the end is required - unzip does not remove the original zip file.
As you mentioned, this would rely on the curl and unzip packages being available on the image... however you could potentially avoid having these within your final application image by using Docker Multi Stage Builds
Your Dockerfile would then look something like:
FROM <some useful base image> as collector

RUN apt-get install -y curl unzip

RUN mkdir /tmp/files \
 && curl https://repository.internal/file.zip >> /tmp/files/file.zip \
 && cd /tmp/files \
 && unzip file.zip \
 && rm file.zip

FROM <your final desired base image>

COPY --from=collector /tmp/files /tmp
This uses an intermediate image containing curl and unzip to fetch and extract the files, so you don't have to install those tools in your final application image.
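As an aside, if you go the RUN-plus-curl route, Docker's predefined build args let you pass the proxy at build time without baking it into the image (the proxy host and port below are placeholders):
docker build \
  --build-arg http_proxy=http://proxy.internal:3128 \
  --build-arg https_proxy=http://proxy.internal:3128 \
  -t myimage .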

ARG or ENV, which one to use in this case?

This may be a trivial question, but reading the docs for ARG and ENV doesn't make things clear to me.
I am building a PHP-FPM container and I want to give users the ability to enable/disable some extensions based on their needs.
It would be great if this could be done in the Dockerfile, perhaps by adding conditionals and passing flags on the build command, but AFAIK that is not supported.
In my case, my personal approach is to run a small script when the container starts, something like the following:
#!/bin/sh
set -e
RESTART="false"

# This script will be placed in /config/init/ and run when the container starts.
if [ "$INSTALL_XDEBUG" == "true" ]; then
    printf "\nInstalling Xdebug ...\n"
    yum install -y php71-php-pecl-xdebug
    RESTART="true"
fi

...

if [ "$RESTART" == "true" ]; then
    printf "\nRestarting php-fpm ...\n"
    supervisorctl restart php-fpm
fi

exec "$@"
This is what my Dockerfile looks like:
FROM reynierpm/centos7-supervisor

ENV TERM=xterm \
    PATH="/root/.composer/vendor/bin:${PATH}" \
    INSTALL_COMPOSER="false" \
    COMPOSER_ALLOW_SUPERUSER=1 \
    COMPOSER_ALLOW_XDEBUG=1 \
    COMPOSER_DISABLE_XDEBUG_WARN=1 \
    COMPOSER_HOME="/root/.composer" \
    COMPOSER_CACHE_DIR="/root/.composer/cache" \
    SYMFONY_INSTALLER="false" \
    SYMFONY_PROJECT="false" \
    INSTALL_XDEBUG="false" \
    INSTALL_MONGO="false" \
    INSTALL_REDIS="false" \
    INSTALL_HTTP_REQUEST="false" \
    INSTALL_UPLOAD_PROGRESS="false" \
    INSTALL_XATTR="false"

RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    https://rpms.remirepo.net/enterprise/remi-release-7.rpm

RUN yum install -y \
    yum-utils \
    git \
    zip \
    unzip \
    nano \
    wget \
    php71-php-fpm \
    php71-php-cli \
    php71-php-common \
    php71-php-gd \
    php71-php-intl \
    php71-php-json \
    php71-php-mbstring \
    php71-php-mcrypt \
    php71-php-mysqlnd \
    php71-php-pdo \
    php71-php-pear \
    php71-php-xml \
    php71-pecl-apcu \
    php71-php-pecl-apfd \
    php71-php-pecl-memcache \
    php71-php-pecl-memcached \
    php71-php-pecl-zip && \
    yum clean all && rm -rf /tmp/yum*

RUN ln -sfF /opt/remi/php71/enable /etc/profile.d/php71-paths.sh && \
    ln -sfF /opt/remi/php71/root/usr/bin/{pear,pecl,phar,php,php-cgi,phpize} /usr/local/bin/. && \
    mv -f /etc/opt/remi/php71/php.ini /etc/php.ini && \
    ln -s /etc/php.ini /etc/opt/remi/php71/php.ini && \
    rm -rf /etc/php.d && \
    mv /etc/opt/remi/php71/php.d /etc/. && \
    ln -s /etc/php.d /etc/opt/remi/php71/php.d

COPY container-files /
RUN chmod +x /config/bootstrap.sh
WORKDIR /data/www
EXPOSE 9001
Currently this is working, but ... if I want to add, let's say, 20 (a random number) extensions or any other features that can be enabled|disabled, I will end up with 20 unnecessary ENV definitions (because Dockerfile doesn't support .env files) whose only purpose is to set flags that let the script know what to do ...
Is this the right way to do it?
Should I use ENV for this purpose?
I am open to ideas; if you have a different approach to achieve this, please let me know about it.
From Dockerfile reference:
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
The ENV instruction sets the environment variable <key> to the value <value>.
The environment variables set using ENV will persist when a container is run from the resulting image.
So if you need build-time customization, ARG is your best choice.
If you need run-time customization (to run the same image with different settings), ENV is well-suited.
If I want to add let's say 20 (a random number) of extensions or any other feature that can be enable|disable
Given the number of combinations involved, using ENV to set those features at runtime is best here.
But you can combine both by:
building an image with a specific ARG
using that ARG as an ENV
That is, with a Dockerfile including:
ARG var
ENV var=${var}
You can then either build an image with a specific var value at build-time (docker build --build-arg var=xxx), or run a container with a specific runtime value (docker run -e var=yyy)
So if we want to set the value of an environment variable to something different for every build, we can pass these values at build time and we don't need to change our Dockerfile every time.
ENV values, once set, cannot be overwritten from the command line at build time. So, if we want our environment variable to have different values for different builds, we can use ARG and set default values in our Dockerfile. When we want to overwrite these values, we can do so using --build-arg at every build without changing our Dockerfile.
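A small illustration (PHP_VERSION is just a hypothetical variable name):
# default used when no --build-arg is given
ARG PHP_VERSION=7.1
ENV PHP_VERSION=${PHP_VERSION}
# override per build without touching the Dockerfile:
# docker build --build-arg PHP_VERSION=7.4 -t myimage .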
For more details, you can refer to this.
Why use ARG or ENV?
Let's say we have a jar file and we want to make a docker image of it. So, we can ship it to any docker engine.
We can write a Dockerfile.
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Now, if we want to build the Docker image using Maven, we can pass JAR_FILE using --build-arg as target/*.jar:
docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .
However, if we are using Gradle, the above command doesn't work and we have to pass a different path: build/libs/
docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .
Once we have chosen a build system, we no longer need the ARG; we can hard-code the JAR location.
For Maven, that would be as follows:
Dockerfile
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
here, we can build an image with the following command:
docker build -t image:tag .
When to use `ENV`?
If we want to set values that can be adjusted when the container runs, like the port number your application listens on, we can set them using ENV.
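For instance (SERVER_PORT is a hypothetical variable that the application reads at startup):
ENV SERVER_PORT=8080
# override at run time without rebuilding:
# docker run -e SERVER_PORT=9090 myorg/myapp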
Both ARG and ENV seem very similar. Both can be accessed from within our Dockerfile commands in the same manner.
Example:
ARG VAR_A=5
ENV VAR_B=6
RUN echo $VAR_A
RUN echo $VAR_B
Personal opinion!
There is a tradeoff in choosing ARG over ENV: if you choose ARG, you can't change it later during the run; if you choose ENV, you can modify the value in the running container.
I personally prefer ARG over ENV wherever I can.
In the above example:
I used ARG because the build system (Maven or Gradle) matters at build time rather than at runtime. It thus encapsulates that detail and provides a minimal set of arguments for the runtime.
For more details, you can refer to this.

How to add a file to an image in Dockerfile without using the ADD or COPY directive

I need the contents of a large *.zip file (5 GB) in my Docker container in order to compile a program. The *.zip file resides on my local machine. The strategy for this would be:
COPY program.zip /tmp/
RUN cd /tmp \
 && unzip program.zip \
 && make
After having done this, I would like to remove the unzipped directory and the original *.zip file because they are no longer needed. The problem is that the COPY (and also the ADD) directive will add a layer to the image that contains the file program.zip, which is problematic as my image will then be at least 5 GB big. Is there a way to add a file to a container without using the COPY or ADD directive? wget will not work, as the mentioned *.zip file is on my local machine, and curl file://localhost/home/user/program.zip -o /tmp/program.zip will not work either.
It is not straightforward, but it can be done via wget or curl with a little support from Python. (All three tools should usually be available on a *nix system.)
wget will not work when no URL is given, and
curl file://localhost/home/user/program.zip -o /tmp/
will not work from within a Dockerfile's RUN instruction. Hence, we need a server from which wget and curl can download program.zip.
To do this we set up a little Python server which serves our HTTP requests. We will be using the http.server module from Python 3 for this. (On Python 2, the equivalent module is SimpleHTTPServer.)
python3 -m http.server --bind 192.168.178.20 8000
The Python server will serve all files in the directory it is started in. So you should make sure that you start your server either in the directory where the file you want to download during your image build resides, or in a temporary directory containing your program. For illustration purposes, let's create the file foo.txt which we will later download via wget in our Dockerfile:
echo "foo bar" > foo.txt
When starting the HTTP server, it is important that we specify the IP address of our local machine on the LAN. Furthermore, we will open port 8000. Having done this, we should see the following output:
python3 -m http.server --bind 192.168.178.20 8000
Serving HTTP on 192.168.178.20 port 8000 ...
Now we write a Dockerfile to illustrate how this works. (We will assume that the file foo.txt should be downloaded into /tmp):
FROM debian:latest

RUN apt-get update -qq \
 && apt-get install -y wget

RUN cd /tmp \
 && wget http://192.168.178.20:8000/foo.txt
Now we start the build with
docker build -t test .
During the build you will see the following output on our python server:
172.17.0.21 - - [01/Nov/2014 23:32:37] "GET /foo.txt HTTP/1.1" 200 -
and the build output of our image will be:
Step 2 : RUN cd /tmp && wget http://192.168.178.20:8000/foo.txt
---> Running in 49c10e0057d5
--2014-11-01 22:56:15-- http://192.168.178.20:8000/foo.txt
Connecting to 192.168.178.20:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25872 (25K) [text/plain]
Saving to: `foo.txt'
0K .......... .......... ..... 100% 129M=0s
2014-11-01 22:56:15 (129 MB/s) - `foo.txt' saved [25872/25872]
---> 5228517c8641
Removing intermediate container 49c10e0057d5
Successfully built 5228517c8641
You can then check whether it really worked by starting and entering a container from the image you just built:
docker run -i -t --rm test bash
You can then look in /tmp for foo.txt.
We can now add any file to our image without it lingering in a layer. Assuming you want to add a program of about 5 GB, as mentioned in the question, we could do:
FROM debian:latest

RUN apt-get update -qq \
 && apt-get install -y wget

RUN cd /tmp \
 && wget http://conventiont:8000/program.zip \
 && unzip program.zip \
 && cd program \
 && make \
 && make install \
 && cd /tmp \
 && rm -f program.zip \
 && rm -rf program
In this way we will not be left with 10 GB of cruft.
There's no way to do this. A feature request is here https://github.com/docker/docker/issues/3156.
Can you not map a local folder into the container when launched, and then copy the files you need?
sudo docker run -d -P --name myContainerName -v /localpath/zip_extract:/container/path/ yourImageName
https://docs.docker.com/userguide/dockervolumes/
I have posted a similar answer here: https://stackoverflow.com/a/37542913/909579
You can use docker-squash to squash newly created layers. That will essentially remove the archive from the final image, provided you remove it in a subsequent RUN instruction.
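For reference, docker-squash is typically invoked along these lines (a sketch; check its README for the current flags):
pip install docker-squash
docker-squash -t myimage:squashed myimage:latest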
