I am trying to figure out how to run Elixir Phoenix on Heroku using Docker. I am pretty much using this Dockerfile: https://github.com/jpiepkow/phoenix-docker/blob/master/Dockerfile
# ---- Build Base Stage ----
FROM elixir:1.9.1-alpine AS app_builder
RUN apk add --no-cache=true \
gcc \
g++ \
git \
make \
musl-dev
RUN mix do local.hex --force, local.rebar --force
# ---- Build Deps Stage ----
FROM app_builder as deps
COPY mix.exs mix.lock ./
ARG MIX_ENV=prod
ENV MIX_ENV=$MIX_ENV
RUN mix do deps.get --only=$MIX_ENV, deps.compile
# ---- Build Release Stage ----
FROM deps as releaser
RUN echo $MIX_ENV
COPY config ./config
COPY lib ./lib
COPY priv ./priv
RUN mix release && \
cat mix.exs | grep app: | sed -e 's/ app: ://' | tr ',' ' ' | sed 's/ //g' > app_name.txt
# ---- Final Image Stage ----
FROM alpine:3.9 as app
RUN apk add --no-cache bash libstdc++ openssl
ENV CMD=start
COPY --from=releaser ./_build .
COPY --from=releaser ./app_name.txt ./app_name.txt
CMD ["sh","-c","./prod/rel/$(cat ./app_name.txt)/bin/$(cat ./app_name.txt) $CMD"]
I have pushed to Heroku and the app is running, but when I try to use anything that touches the database, it blows up. The logs say the database needs to be migrated, which makes sense since I haven't done that yet. But now I realize I'm not sure how to do it when mix is not available and I'm using Docker.
Does anyone know how to create and migrate the Heroku Postgres database when the app is deployed with Docker?
Create the lib/my_app/release.ex mentioned in https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands
Bash into the Heroku release container (this is different behavior from, for example, a Rails app):
heroku run bash --app my_app
Two options:
1. Start an IEx session:
./prod/rel/my_app/bin/my_app start_iex
Then run:
MyApp.Release.migrate
2. Or run the migration with eval:
./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"
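Putting it together, a typical session might look like this (my_app and MyApp.Release are placeholders for your actual app name and release module):
heroku run bash --app my_app
# then, inside the one-off dyno:
./prod/rel/my_app/bin/my_app eval "MyApp.Release.migrate"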
I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines is changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building from /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
I am obviously not au fait with building from GitHub repos but would really appreciate some help.
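For reference, a quick way to check whether the file is actually visible to the build (paths assume the Dockerfile and build context are both /src, as in the question):
ls /src/novnc-index.html                       # the file must live inside the build context
grep -n novnc /src/.dockerignore 2>/dev/null   # and must not be matched by .dockerignore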
OS: Kali Linux (5.15.0-kali2-amd64)
I am trying to build per the instructions here: https://github.com/Tripwire/padcheck
The build seems to install OK:
sudo docker build . -t padcheck
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
STEP 1/15: FROM alpine:3.9
STEP 2/15: RUN apk add --no-cache ca-certificates
--> Using cache d7c12b042fcc42a2ef3ec1733900a720263cf64b8d69cbd05b1189133a8f2f8e
--> d7c12b042fc
STEP 3/15: RUN [ ! -e /etc/nsswitch.conf ] && echo 'hosts: files dns' > /etc/nsswitch.conf
--> Using cache 7feea938c3cf5def2e55edbe0f515c83ec3a608a045f6f15ac3ead6448fae86b
--> 7feea938c3c
STEP 4/15: COPY ./paddingmodes-go1.11.diff /tmp/paddingmodes-go1.11.diff
--> Using cache dc9c476290cadbfce6c23b5c86fe7cac8a92f52924ab537680be72341a389641
--> dc9c476290c
STEP 5/15: ENV GOLANG_VERSION 1.11.5
--> Using cache 14ad60e2e84e634cfa252abad776eeb513752eefee542de803950bbfb64be5b7
--> 14ad60e2e84
STEP 6/15: RUN set -eux; apk add --no-cache --virtual .build-deps bash gcc musl-dev openssl go ; export GOROOT_BOOTSTRAP="$(go env GOROOT)"GOOS="$(go env GOOS)" GOARCH="$(go env GOARCH)" GOHOSTOS="$(go env GOHOSTOS)" GOHOSTARCH="$(go env GOHOSTARCH)" ; apkArch="$(apk --print-arch)"; case "$apkArch" in armhf) export GOARM='6' ;; x86) export GO386='387' ;; esac; wget -O go.tgz "https://golang.org/dl/go$GOLANG_VERSION.src.tar.gz"; echo 'bc1ef02bb1668835db1390a2e478dcbccb5dd16911691af9d75184bbe5aa943e *go.tgz' | sha256sum -c -; tar -C /usr/local -xzf go.tgz; rm go.tgz; cd /usr/local/go/src; patch -p2 < /tmp/paddingmodes-go1.11.diff; rm /tmp/paddingmodes-go1.11.diff; ./make.bash; rm -rf /usr/local/go/pkg/bootstrap /usr/local/go/pkg/obj ; apk del .build-deps; export PATH="/usr/local/go/bin:$PATH"; go version
--> Using cache 9299711dd2046598f08f1dedceb322af2d2723797664a698a4f322290d37e097
--> 9299711dd20
STEP 7/15: ENV GOPATH /go
--> Using cache 7b9bebc3e577b5e6976b16a7c9e21115959836b06f49f87d693f924245769478
--> 7b9bebc3e57
STEP 8/15: ENV GOBIN $GOPATH/bin
--> Using cache 7ffb2e13401e316132b5d0572e5313eab602ecb91bb9a7d60a8e4b0f5d5391b2
--> 7ffb2e13401
STEP 9/15: ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
--> Using cache 18ae898383c2553a814df7913e3be77123e8a6a3673ae2d17ea4feaf50f06410
--> 18ae898383c
STEP 10/15: RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
--> Using cache 03ae353817062bf2f135999e0ba4b01dc3dcb23e7da6e48f6990ed9502b5379b
--> 03ae3538170
STEP 11/15: WORKDIR $GOPATH
--> Using cache d01b2a8bb52880b155919645013b3dc4b548a8da4feae75892b75f115eb83b11
--> d01b2a8bb52
STEP 12/15: COPY padcheck.go /go/src
--> Using cache 1435510254fccb68633b033e0e88da685215f34420094cfb695bd230fe64493b
--> 1435510254f
STEP 13/15: RUN cd /go/src && go install padcheck.go
--> Using cache 82a8ba8ed6255289ea31e26ecc341d58ddef964e201a644902a8294ad8cb5ba9
--> 82a8ba8ed62
STEP 14/15: ENTRYPOINT ["/go/bin/padcheck"]
--> Using cache 6fcadceec4f2de2365d84ae96a7e9906727709ecfe46cd4f692299ca1a41f650
--> 6fcadceec4f
STEP 15/15: CMD ["-h"]
--> Using cache 80892f2bf809ca96bfd6abb7ff049353edde423964804a194d0ff6e39af44c39
COMMIT padcheck
--> 80892f2bf80
Successfully tagged localhost/padcheck:latest
80892f2bf809ca96bfd6abb7ff049353edde423964804a194d0ff6e39af44c39
But when I run (using docker run --rm -it padcheck):
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Error: short-name "padcheck" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
And when building with golang-go (via the repo's build.sh) instead:
sudo ./build.sh
--2022-02-22 10:49:41-- https://github.com/golang/go/archive/go1.11.tar.gz
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/golang/go/tar.gz/refs/tags/go1.11 [following]
--2022-02-22 10:49:42-- https://codeload.github.com/golang/go/tar.gz/refs/tags/go1.11
Resolving codeload.github.com (codeload.github.com)... 140.82.121.9
Connecting to codeload.github.com (codeload.github.com)|140.82.121.9|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21058849 (20M) [application/x-gzip]
Saving to: ‘go1.11.tar.gz.2’
go1.11.tar.gz.2 100%[==============================================>] 20.08M 14.0MB/s in 1.4s
2022-02-22 10:49:43 (14.0 MB/s) - ‘go1.11.tar.gz.2’ saved [21058849/21058849]
patching file src/crypto/tls/common.go
patching file src/crypto/tls/conn.go
patching file src/crypto/tls/handshake_client.go
patching file src/crypto/tls/handshake_server.go
Building Go cmd/dist using /usr/lib/go-1.17.
go: cannot find main module, but found .git/config in /home/padcheck
to create a module there, run:
cd ../.. && go mod init
$ cd /home/padcheck && go mod init
go: cannot determine module path for source directory /home/padcheck (outside GOPATH, module path must be specified)
Example usage:
'go mod init example.com/m' to initialize a v0 or v1 module
'go mod init example.com/m/v2' to initialize a v2 module
Run 'go help mod init' for more information.
You're trying to run padcheck, but your image is tagged localhost/padcheck, as you can see in the second-to-last line of your build output:
Successfully tagged localhost/padcheck:latest
Run it like docker run --rm -ti localhost/padcheck instead
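If in doubt, listing the local images should show the fully qualified name (output illustrative):
sudo docker images | grep padcheck
# localhost/padcheck   latest   80892f2bf809   ...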
As for the golang problem:
The script downloads Go 1.11, from an era when projects commonly shipped without a go.mod, hence no go.mod file exists in that repo.
However, you can "fix" this as follows:
$ go mod init github.com/Tripwire/padcheck/m/v2
go: creating new go.mod: module github.com/Tripwire/padcheck/m/v2
go: to add module requirements and sums:
go mod tidy
$ go mod tidy
Searching for the above turns up many results about how to set breakpoints in apps running inside Docker containers, yet I'm interested in setting a breakpoint in the Dockerfile itself, such that the docker build is paused at the breakpoint. For an example Dockerfile:
FROM ubuntu:20.04
RUN echo "hello"
RUN echo "bye"
I'm looking for a way to set a breakpoint on the RUN echo "bye" line such that when I debug this Dockerfile, the image builds non-interactively up to, but excluding, RUN echo "bye". After that, I would be able to interactively run commands in the container. In the actual Dockerfile I have, there are RUNs before the breakpoint that change the filesystem of the image being built, and I want to analyze the filesystem at the breakpoint by interactively running commands like cd, ls, and find.
You can't set a breakpoint per se, but you can get an interactive shell at an arbitrary point in your build sequence (between steps).
Let's build your image:
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:20.04
---> 1e4467b07108
Step 2/3 : RUN echo "hello"
---> Running in 917b34190e35
hello
Removing intermediate container 917b34190e35
---> 12ebbdc1e72d
Step 3/3 : RUN echo "bye"
---> Running in c2a4a71ae444
bye
Removing intermediate container c2a4a71ae444
---> 3c52993b0185
Successfully built 3c52993b0185
Each of the lines of the form ---> 0123456789ab contains a valid image ID. So from here you can
docker run --rm -it 12ebbdc1e72d sh
which will give you an interactive shell on the partial image resulting from the first RUN command.
There's no requirement that the build as a whole succeed. If a RUN step fails, you can use this technique to get an interactive shell on the image immediately before that step and re-run the command by hand. If you have a very long RUN command, you may need to break it into two to be able to get a debugging shell at a specific point within the command sequence.
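For instance, if the RUN echo "bye" step had failed, a debugging session using the IDs from the build output above might look like:
docker run --rm -it 12ebbdc1e72d sh
# inside the container, inspect the filesystem and re-run the failed command by hand:
ls /
echo "bye"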
I don't think this is possible directly - that feature has been discussed and rejected.
What I generally do to debug a Dockerfile is to comment all of the steps after the "breakpoint", then run docker build followed by docker run -it image bash or docker run -it image sh (depending on whether you have bash installed inside the container).
Then, I have an interactive shell, and I can run commands to debug why later stages are failing.
I agree that being able to set a breakpoint and poke around would be a handy feature, though.
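As a sketch of that workflow with the example Dockerfile above (the image name debug-image is arbitrary):
# comment out every instruction after the "breakpoint" in the Dockerfile, then:
docker build -t debug-image .
docker run -it debug-image sh    # use bash instead if it's installed in the image
# inspect the filesystem, then uncomment the remaining lines when done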
You can run commands in intermediate containers using Remote shell debugging tricks.
Make sure your container images include basic utilities like netcat (nc) and fuser. These utilities enable "calling home" from any intermediate container image. At home you'll answer calls with netcat (or socat). This netcat will send your commands to the containers and print their outcomes. This debugging approach will work even on Dockerfiles that are built on unknown worker nodes somewhere in the cloud.
Example:
FROM debian:testing-slim
# Set environment variables for calling home from breakpoints (BP)
ENV BP_HOME=<IP-ADDRESS-OF-YOUR-HOST>
ENV BP_PORT=33720
ENV BP_CALLHOME='BP_FIFO=/tmp/$BP.$BP_HOME.$BP_PORT; (rm -f $BP_FIFO; mkfifo $BP_FIFO) && (echo "\"c\" continues"; echo -n "($BP) "; tail -f $BP_FIFO) | nc $BP_HOME $BP_PORT | while read cmd; do if test "$cmd" = "c" ; then echo -n "" >$BP_FIFO; sleep 0.1; fuser -k $BP_FIFO >/dev/null 2>&1; break; else eval $cmd >$BP_FIFO 2>&1; echo -n "($BP) " >$BP_FIFO; fi; done'
# Install needed utils (netcat, fuser)
RUN apt update && apt install -y netcat psmisc
# Now you are ready to run "eval $BP_CALLHOME" wherever you want to call home.
RUN BP=before-hello eval $BP_CALLHOME
RUN echo "hello"
RUN BP=after-hello eval $BP_CALLHOME
RUN echo "bye"
Start waiting for and answering calls from a Dockerfile before launching a Docker build. On home host run nc -k -l -p 33720 (alternatively socat STDIN TCP-LISTEN:33720,reuseaddr,fork).
This is how the above example looks at home:
$ nc -k -l -p 33720
"c" continues
(before-hello) echo *
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
(before-hello) id
uid=0(root) gid=0(root) groups=0(root)
(before-hello) c
"c" continues
(after-hello)
...
The recent (May 2022) project ktock/buildg offers breakpoints.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
- Source-level inspection
- Breakpoints and step execution
- Interactive shell on a step with your own debugging tools
- Based on BuildKit (needs unmerged patches)
- Supports rootless
The command break, b LINE_NUMBER sets a breakpoint.
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...
From PR 24:
Add a --cache-reuse option which allows sharing the build cache among invocations of buildg debug, to make second-time debugging faster.
This is useful to speed up running buildg multiple times for debugging an errored step.
Note that breakpoints on cached steps are ignored as of now.
Because of this limitation, this feature is optional as of now. We should fix this limitation and make it the default behaviour in the future.
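A hedged sketch of how that might be invoked, assuming the flag is passed to the debug subcommand as the PR describes:
$ buildg.sh debug --cache-reuse --image=ubuntu:22.04 /tmp/ctx
The second such run should then reuse the build cache from the first.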
Man, Docker makes things hard. Here's a workaround I cooked up:
Insert FROM scratch where you want the break point.
Run docker build . --target=<stage>, where <stage> is the stage that now ends at your "breakpoint".
If you haven't already named that stage with FROM <image> AS <stage>, give it a name so you can refer to it with --target=<stage>.
Docker has cached all your successful layers anyway (even if you can't see them), and because the FROM "breakpoint" comes before the (potentially unsuccessful) point of interest, the build should all come from cache and be very fast.
So for example, if my Dockerfile looks like this:
FROM debian:bullseye AS build
RUN apt-get update && apt-get install -y \
build-essential cmake ninja-build \
libfontconfig1-dev libdbus-1-dev libfreetype6-dev libicu-dev libinput-dev libxkbcommon-dev libsqlite3-dev libssl-dev libpng-dev libjpeg-dev libglib2.0-dev
<SNIP lots of other setup commands>
ADD my_source.tar.xz /
WORKDIR /my_source
RUN ./configure -option1 -option2
RUN cmake --build . --parallel
RUN cmake --install .
FROM alpine
COPY --from=build /my_build /my_build
...
Then I can add a "breakpoint" like this:
FROM debian:bullseye AS build
RUN apt-get update && apt-get install -y \
build-essential cmake ninja-build \
libfontconfig1-dev libdbus-1-dev libfreetype6-dev libicu-dev libinput-dev libxkbcommon-dev libsqlite3-dev libssl-dev libpng-dev libjpeg-dev libglib2.0-dev
<SNIP lots of other setup commands>
ADD my_source.tar.xz /
WORKDIR /my_source
#### BREAKPOINT ###
FROM scratch
#### BREAKPOINT ###
RUN ./configure -option1 -option2
RUN cmake --build . --parallel
RUN cmake --install .
FROM alpine
COPY --from=build /my_build /my_build
...
and trigger it with docker build . --target=build
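From there you can open a shell on the partial image to poke around (the breakpoint tag is just a placeholder):
docker build . --target=build -t breakpoint
docker run -it --rm breakpoint bash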
I am using security scan software in my Dockerfile and I need to add its bin folder to the path. Its path will contain the version part so I do not know the path until I download the software. My current progress is something like this:
1. Download the software:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
2. The desired folder is located at SAClientUtil/SAClientUtil.X.Y.Z/bin/ (X.Y.Z may vary from run to run). Get there using a find and cd combination and try to add it to the PATH:
RUN cd "$(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"; \
export PATH="$PATH:$PWD"; # doesn't work
It looks like the ENV command does not evaluate the parameter, so
ENV PATH $PATH:"echo $(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"
doesn't work either.
Any ideas on how to dynamically add a folder to the PATH during docker image build?
If you're pretty sure the zip file will contain only a single directory with that exact layout, you can rename it to something fixed.
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
&& unzip SAClientUtil.zip -d tmp \
&& mv tmp/SAClientUtil.* SAClientUtil \
&& rm -rf tmp SAClientUtil.zip
ENV PATH=/SAClientUtil/bin:${PATH}
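A quick sanity check once the image is built (my-appscan-image is a placeholder name):
docker run --rm my-appscan-image sh -c 'echo "$PATH" && command -v appscan.sh'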
A simple solution would be to include a small wrapper script in your image, and then use that to run commands from the SAClientUtil directory. For example, if I have the following in saclientwrapper.sh:
#!/bin/sh
cmd=$1
shift
saclientpath=$(ls -d /SAClientUtil/SAClientUtil.*)
echo "got path: $saclientpath"
cd "$saclientpath"
exec "$saclientpath/bin/$cmd" "$#"
Then I can do this:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
COPY saclientwrapper.sh /saclientwrapper.sh
RUN sh /saclientwrapper.sh appscan.sh
And this will produce, when building the image:
STEP 6: RUN sh /saclientwrapper.sh appscan.sh
got path: /SAClientUtil/SAClientUtil.8.0.1374
COMMAND SYNTAX
appscan <command> [options]
ADDITIONAL COMMAND HELP
appscan help <command>
.
.
.
I am using automated builds on Docker cloud to compile a C++ app and provide it in an image.
Compilation is quite long (range 2-3 hours) and commits on github are frequent (~10 to 30 per day).
Is there a way to keep the building cache (using ccache) somehow?
As far as I understand it, docker caching is useless since the compilation layer producing the ccache will not be used due to the source code changes.
Or can we tweak it to bring some data back to the first layer?
Any other solution? Pushing it somewhere?
Here is the Dockerfile:
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG}
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install \
&& rm -rf /usr/src/QGIS
WORKDIR /
You should try saving and restoring your cache data from a third-party service:
- an online object storage like Amazon S3
- a simple FTP server
- an Internet-accessible machine reachable over SSH, so you can copy data with scp
I'm assuming that your cache data is stored inside the ~/.ccache directory.
Using Docker multistage build
For some time now, Docker has supported multi-stage builds, and you can try using them to implement the solution with a single Dockerfile:
Warning: I've not tested it
# STAGE 1 - YOUR ORIGINAL DOCKER FILE CUSTOMIZED
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG} as builder
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
# restore the cache into the builder's home directory
RUN curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 \
 && tar -xjvf ccache.tar.bz2 -C ~/ \
 && rm ccache.tar.bz2
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install
# save the current cache online (cd in a shell, since WORKDIR does not expand ~)
RUN cd ~/ \
 && tar -cvjSf ccache.tar.bz2 .ccache \
 && curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
# STAGE 2
FROM alpine:latest
# YOUR CUSTOM LOGIC TO CREATE THE FINAL IMAGE WITH ONLY REQUIRED BINARIES
# USE THE FROM IMAGE YOU NEED, this is only an example
# E.g.:
# COPY --from=builder /usr/src/QGIS/build/YOUR_EXECUTABLE /usr/bin
# ...
In stage 2 you build the final image that will be pushed to your repository.
Using Docker cloud hooks
Another, but less clear, approach could be using a Docker Cloud pre_build hook file to download cache data:
#!/bin/bash
echo "=> Downloading build cache data"
curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 # e.g. an Amazon S3-like service
tar -xjvf ccache.tar.bz2 # extract .ccache into the build context so the COPY below can pick it up
Obviously you can use dedicated Docker images to run curl or tar in this script, mounting the local directory as a volume.
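For instance, the download step of the hook could run in a throwaway container with the checkout mounted as a volume (curlimages/curl is just one convenient image; adjust as needed):
docker run --rm -v "$PWD:/work" -w /work curlimages/curl:latest -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2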
Then, copy the extracted .ccache folder into your container during the build, using a COPY command before your cmake call:
WORKDIR /usr/src/QGIS/build
COPY /.ccache ~/.ccache
RUN cmake ...
To make this work, you need a way to upload your cache data after the build, which you can do easily with a post_build hook file:
#!/bin/bash
echo "=> Uploading build cache data"
tar -cvjSf ccache.tar.bz2 ~/.ccache
curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
But your compilation data aren't available from the outside, because they live inside the container. So you should upload the cache from inside your main Dockerfile, after the compilation step:
RUN cmake ... \
    && ninja ... \
    && tar ... \
    && curl ... \
    && rm ...
If curl or tar aren't available, just add them to your container using the package manager (qgis/qgis3-build-deps is based on Ubuntu 16.04, so they should be available).