Quarkus deployment to Google Cloud App Engine fails - Docker

I'm trying to deploy a simple single-endpoint Quarkus app to Google Cloud -> App Engine flexible environment. I managed to deploy an uber-jar to the standard environment, following the documentation.
But I'm struggling with the flex environment. My understanding, from the same documentation link as above, is that GCP will create a Docker container based on the Dockerfile; in the end, my intention is to deploy a native image to GCP App Engine.
I followed the steps in the link above:
Copy the JVM Dockerfile to the root directory of your project: cp src/main/docker/Dockerfile.jvm Dockerfile
Build your application using mvn clean package
src/main/appengine/app.yaml has the following:
runtime: custom
env: flex
gcloud app deploy
The gcloud log returns the same error for both the native and the regular Dockerfile:
starting build "502dd964-0abf-470e-a4c1-44c89a67a96e"
FETCHSOURCE
Fetching storage object: gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767
Copying gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767...
/ [0 files][ 0.0 B/ 230.0 B]
/ [1 files][ 230.0 B/ 230.0 B]
Operation completed over 1 objects/230.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Users: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
For completeness, I will include the Dockerfile, though it is the default generated one without any changes.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
Thank you for your time

It turns out I made an embarrassing mistake: for the Google Cloud App Engine flex environment, the app.yaml file must be placed in the root folder of the project; this is also documented.
Moreover, max_num_instances should be explicitly set to a value of at most 8, according to the account quota in Google's documentation, otherwise a RESOURCE_EXHAUSTED error is thrown.
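For reference, a minimal app.yaml reflecting both fixes might look like this (placed in the project root; the value 8 simply follows the quota limit mentioned above):
runtime: custom
env: flex
automatic_scaling:
  max_num_instances: 8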

Related

Adding build tools to a Kaniko image for Gitlab-CI

Given a monorepo of ~35 services built with GitLab CI and Kubernetes runners.
The images are built using Kaniko, with each <job> extending a prototype template, and life is great.
However, lately we wanted to save a key in Consul and change a GitLab CI env var after a successful build - which requires curl, and preferably jq.
I've been trying to create the following image to serve as the base image for the image-building jobs:
FROM gcr.io/kaniko-project/executor:debug
RUN mkdir -p /workspace \
&& wget -qO /workspace/curl https://github.com/moparisthebest/static-curl/releases/download/v7.86.0/curl-amd64 \
&& chmod +x /workspace/curl \
&& wget -qO /workspace/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 \
&& chmod +x /workspace/jq
ENV PATH "$PATH:/workspace"
The build appears to succeed.
However, when the image is actually used in a pipeline job with the following script:
.build-with-kaniko:
  script:
    - mkdir -p /kaniko/.docker;
      echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":..... > /kaniko/.docker/config.json
    - which jq || log no jq;
      which curl || log no curl;
    - >-
      /kaniko/executor
      --context $PROJECT_PATH
      --dockerfile $DOCKERFILE
      --destination ${CI_REGISTRY}/${DOCKER_REPO}:${TAG}
    - which jq || log no jq;
      which curl || log no curl;
Before running the executor, curl and jq are found.
But after running the executor - they are gone!! <tam-tam-taaaaaaAAAMM!!!> :o
I tried placing them in a few different folders: /busybox, /kaniko, /workspace, or even a custom dir /misc - and could not get it to work...
I thought maybe it packs them to the target image - but no, they are not there.
I also noted that after building with --no-push they are still there
(but then I do not get my image on the registry...).
What is going on? Is there a post-push cleanup mechanism that I should instruct to leave these two files alone?
Help?
What must I do to help kaniko understand I need these two utilities?
OMG. :facepalm:
I knew I'd find the answer only after posting the question... :shrug:
Here's what worked:
Declare it as a new volume:
FROM gcr.io/kaniko-project/executor:debug
RUN mkdir -p /misc \
&& wget -qO /misc/curl https://github.com/moparisthebest/static-curl/releases/download/v7.86.0/curl-amd64 \
&& chmod +x /misc/curl \
&& wget -qO /misc/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 \
&& chmod +x /misc/jq
VOLUME /misc
ENV PATH "$PATH:/misc"
I got the clue from the current Dockerfile of the kaniko:debug image itself (at the time of this writing).
The image is recommended to be used as the base image for gitlab-ci jobs that use kaniko - and it includes /busybox.
I still don't understand why putting the tools in the /busybox dir did not work, but I have a working solution now and no time to dig deeper :sad: :shrug:
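For completeness, a rough sketch of how the resulting image could be wired into the build job; the image name registry.example.com/kaniko-tools:debug and the Consul URL are placeholders, not the real project values:
.build-with-kaniko:
  image:
    name: registry.example.com/kaniko-tools:debug
    entrypoint: [""]
  script:
    - >-
      /kaniko/executor
      --context $PROJECT_PATH
      --dockerfile $DOCKERFILE
      --destination ${CI_REGISTRY}/${DOCKER_REPO}:${TAG}
    # curl and jq survive the executor run because /misc is declared as a VOLUME
    - /misc/curl -X PUT http://consul.example.com:8500/v1/kv/builds/${CI_PROJECT_NAME} -d "${TAG}"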

Containerized Azure Function does not see databricks odbc driver

I have the Dockerfile below:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
mkdir -p /home/site/wwwroot && \
dotnet publish *.csproj --output /home/site/wwwroot
FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
#ODBCINI=/etc/odbc.in \
#ODBCSYSINI=/etc/odbcinst.ini \
#SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini
WORKDIR ./home/site/wwwroot
COPY --from=installer-env /home/site/wwwroot /home/site/wwwroot
RUN apt update && apt install -y apt-utils odbcinst1debian2 libodbc1 odbcinst vim unixodbc unixodbc-dev freetds-dev curl tdsodbc unzip libsasl2-modules-gssapi-mit
RUN curl -sL https://databricks.com/wp-content/uploads/drivers-2020/SimbaSparkODBC-2.6.16.1019-Debian-64bit.zip -o databricksOdbc.zip && unzip databricksOdbc.zip
RUN dpkg -i SimbaSparkODBC-2.6.16.1019-Debian-64bit/simbaspark_2.6.16.1019-2_amd64.deb
RUN export ODBCINI=/etc/odbc.ini ODBCSYSINI=/etc/odbcinst.ini SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini
The reason this Azure Function app is containerized is to be able to use the Databricks ODBC driver to connect to an Azure Databricks instance and Delta Lake. I have read in another Stack Overflow thread that there is no other way to install custom drivers if the App Service is not containerized, and I assumed the same would work for a containerized Function app.
Unfortunately I get exceptions such as:
ERROR [01000] [unixODBC][Driver Manager]Can't open lib 'Simba Spark ODBC Driver' : file not found
or
Dependency unixODBC with minimum version 2.3.1 is required. Unable to load shared library 'libodbc.so.2' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibodbc.so.2: cannot open shared object file: No such file or directory
even though I point to the drivers in this line:
RUN export ODBCINI=/etc/odbc.ini ODBCSYSINI=/etc/odbcinst.ini SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini
It looks like /home/site/wwwroot cannot access the folders above. Interestingly, I also tried to copy the contents of /etc to /home/site/wwwroot/bin and point the environment variables at that folder, but the copy does not work:
WORKDIR /etc
COPY . /home/site/wwwroot/bin
Generally, I pass the connection details for the Databricks instance in the connection string, but I also tried to modify the /etc files with the command below:
RUN gawk -i inplace '{ print } ENDFILE { print "[ODBC Drivers]" }' /etc/odbcinst.ini
but I get an error during the build:
gawk: inplace:59: warning: inplace::begin: Cannot stat '/etc/odbcinst.ini' (No such file or directory)
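One thing to keep in mind here is that a RUN export ... line only affects that single build step, because each RUN runs in its own shell, so those variables are not set at runtime. To make them visible to the Functions host they would have to be declared with ENV; a minimal sketch reusing the same paths as the commented-out block near the top of the Dockerfile:
ENV ODBCINI=/etc/odbc.ini \
    ODBCSYSINI=/etc/odbcinst.ini \
    SIMBASPARKINI=/opt/simba/spark/lib/64/simba.sparkodbc.ini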

Robot Framework test case fails with “Element not found” when running a SeleniumLibrary-based test case with headless Chrome inside a Docker container

Below is the test case that I am trying to execute inside the docker container.
Login To GUI
[Documentation] To open GUI and login with valid credentials
${chrome_options}= Evaluate sys.modules['selenium.webdriver'].ChromeOptions() sys, selenium.webdriver
Call Method ${chrome_options} add_argument --no-sandbox
Call Method ${chrome_options} add_argument --headless
Call Method ${chrome_options} add_argument --disable-dev-shm-usage
Call Method ${chrome_options} add_argument --ignore-certificate-errors-spki-list
Call Method ${chrome_options} add_argument --ignore-ssl-errors
Open Browser ${url} chrome options=${chrome_options} executable_path=/usr/lib/chromium/chromedriver
Set Browser Implicit Wait 5
Input Text id=username ${username}
Input Text id=password ${password}
Click Button //input[@value='Sign in']
The test case passed when I executed it directly from the IDE (PyCharm) on my Mac. But when I ran the same test inside a Docker container, it failed with the error “Element with locator 'id=username' not found”, and a blank white screen was attached as the screenshot in the logs. The requested page should be redirected to an authentication page (Keycloak) with the username and password fields, but I get a blank page in the Docker container.
I checked the log file inside the container (/usr/lib/chromium/chrome_debug.log):
[0302/115225.286372:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115226.149284:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115226.345313:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115226.345462:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0302/115226.346040:ERROR:ssl_client_socket_impl.cc(960)] handshake failed; returned -1, SSL error code 1, net_error -202
Then I tried the below command inside the container and I got:
/usr/lib/chromium # chromium-browser --headless --no-sandbox --ignore-certificate-errors --ignore-ssl-errors https://<url>
[0302/115903.090501:ERROR:bus.cc(393)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[0302/115903.091302:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115903.152546:WARNING:dns_config_service_posix.cc(342)] Failed to read DnsConfig.
[0302/115903.631311:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115903.633207:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0302/115903.633315:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0302/115904.273717:INFO:CONSOLE(27)] "Mixed Content: The page at 'https://<url>/auth/realms/ml/protocol/openid-connect/auth?client_id=ml-client&redirect_uri=https%3A%2F%2F<url>%2Foauth%2Fcallback&response_type=code&scope=ml-scope+openid+email+profile&state=6d35f7-add8-40b-a8e7-b169876cfc' was loaded over a secure connection, but contains a form that targets an insecure endpoint 'http://ml-sec-access-mgmt-http:8080/auth/realms/ml/login-actions/authenticate?session_code=mrjXrpjeadGywFIIgkHhddBag74tDnWV6FHA3Qk&execution=f19849-6670-406c-a1b0-139bb1f1dc05&client_id=ml-client&tab_id=vGTrJ7OI8'. This endpoint should be made available over a secure connection.", source: https://<url>/auth/realms/ml/protocol/openid-connect/auth?client_id=ml-client&redirect_uri=https%3A%2F%2F<url>%2Foauth%2Fcallback&response_type=code&scope=ml-scope+openid+email+profile&state=6d85f7-add8-40db-a8e7-b16239876cfc (27)
I even downloaded the Chromium browser on my Mac and tried opening the URL there; it works fine.
Dockerfile [Reference: https://github.com/ppodgorsek/docker-robot-framework/blob/master/Dockerfile]:
#Base image
FROM python:3.9.0-alpine3.12
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
# Install system dependencies
RUN apk update \
&& apk --no-cache upgrade \
&& apk --no-cache --virtual .build-deps add \
gcc \
libffi-dev \
linux-headers \
make \
musl-dev \
openssl-dev \
which \
wget \
curl \
vim \
ca-certificates \
git \
jq \
chromium \
chromium-chromedriver
#Install robotframework and required libraries from the requirements file
ADD requirements.txt /
RUN pip3 install \
--no-cache-dir \
-r requirements.txt
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
&& mkdir -p ${ROBOT_WORK_DIR} \
&& chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Installing product related utilities inside the container
XXXXX<contents are hidden as it is not relevant to this query>
# Allow any user to write logs
RUN chmod ugo+w /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:$PATH
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
Requirements.txt file contents:
#Required robot framework packages
robotframework==3.2.2
robotframework-requests==0.7.2
robotframework-seleniumlibrary==4.5.0
robotframework-jsonlibrary==0.3.1
robotframework-kubelibrary==0.2.0
I even referred to the question Getting empty page running selenium in headless chrome Docker.
I could not figure out what the issue is. Is it really a redirect issue, a certificate issue, or mixed content? I am quite confused. Any ideas?
I found a solution for the above problem statement.
First I tried using Chrome and Firefox instead of Chromium. But Alpine doesn't have Chrome, so I switched my base image to Ubuntu. In general, Ubuntu is also suggested [Reference: https://pythonspeed.com/articles/base-image-python-docker-images/] as a good Docker base image for running Python applications.
But even after changing to Ubuntu as the new Docker base image, with Chrome and Firefox, I got the same error (blank white page).
I got the error below as well:
root@a4ac8fd9a950:/opt/google/chrome# google-chrome --headless --no-sandbox https://<URL>
[0306/152227.264852:WARNING:headless_content_main_delegate.cc(530)] Cannot create Pref Service with no user data dir.
[0306/152227.265234:WARNING:discardable_shared_memory_manager.cc(194)] Less than 64MB of free space in temporary directory for shared memory files: 63
[0306/152227.269687:ERROR:bus.cc(393)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[0306/152228.160231:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0306/152228.363766:ERROR:cert_issuer_source_aia.cc(32)] Error parsing cert retrieved from AIA (as DER):
ERROR: Failed parsing Certificate SEQUENCE
ERROR: Failed parsing Certificate
[0306/152228.363958:ERROR:cert_issuer_source_aia.cc(104)] AiaRequest::OnFetchCompleted got error -301
[0306/152228.364625:ERROR:ssl_client_socket_impl.cc(924)] handshake failed; returned -1, SSL error code 1, net_error -202
Then I tried the same with Xvfb. (Xvfb, short for X virtual framebuffer, is an in-memory display server for UNIX-like operating systems such as Linux. It enables you to run graphical applications without a display, e.g. browser tests on a CI server, while still being able to take screenshots.) This worked. All the contents are given below for reference.
The modified Dockerfile:
FROM ubuntu:20.04
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
ENV DEBIAN_FRONTEND=noninteractive
# Install system dependencies
RUN apt-get update \
&& apt-get install --quiet --assume-yes \
python3-pip \
unzip \
firefox \
wget \
curl \
vim \
ca-certificates \
git \
jq \
xvfb
# Install chrome package
RUN wget --no-verbose https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg --install google-chrome-stable_current_amd64.deb; apt-get --fix-broken --assume-yes install
#Install robotframework and required libraries from the requirements file
ADD requirements.txt /
RUN pip3 install \
--no-cache-dir \
-r requirements.txt
# Install webdrivers for chrome and firefox
RUN CHROMEDRIVER_VERSION=`wget --no-verbose --output-document - https://chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
wget --no-verbose --output-document /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver && \
chmod +x /opt/chromedriver/chromedriver && \
ln -fs /opt/chromedriver/chromedriver /usr/local/bin/chromedriver
RUN GECKODRIVER_VERSION=`wget --no-verbose --output-document - https://api.github.com/repos/mozilla/geckodriver/releases/latest | grep tag_name | cut -d '"' -f 4` && \
wget --no-verbose --output-document /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && \
tar --directory /opt -zxf /tmp/geckodriver.tar.gz && \
chmod +x /opt/geckodriver && \
ln -fs /opt/geckodriver /usr/local/bin/geckodriver
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
&& mkdir -p ${ROBOT_WORK_DIR} \
&& chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Installing product related utilities inside the container
XXXXXXXX
# Allow any user to write logs
RUN chmod ugo+w /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:$PATH
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
Requirements file:
#Required robot framework packages
robotframework==3.2.2
robotframework-requests==0.7.2
robotframework-seleniumlibrary==4.5.0
robotframework-jsonlibrary==0.3.1
robotframework-kubelibrary==0.2.0
robotframework-xvfb==1.2.2
New Robot FW test case with Xvfb:
*** Settings ***
Library SeleniumLibrary
Library XvfbRobot
*** Test Cases ***
Login To GUI
[Documentation] To open GUI and login with valid credentials
Start Virtual Display 1920 1080
Open Browser ${URL}
Set Window Size 1920 1080
Set Browser Implicit Wait 5
Input Text id=username ${username}
Input Text id=password ${password}
Click Button //input[@value='Sign in']
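For reference, a sketch of how the suite could be invoked inside such a container; the image name robot-tests, the host paths, and the variable values are placeholders, not from the original setup:
docker run --rm \
  -v "$(pwd)/tests:/opt/robotframework/tests" \
  -v "$(pwd)/reports:/opt/robotframework/reports" \
  robot-tests \
  robot --outputdir /opt/robotframework/reports \
        --variable URL:https://<url> \
        --variable username:<username> \
        --variable password:<password> \
        /opt/robotframework/tests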

Using ccache in automated builds on Docker cloud

I am using automated builds on Docker cloud to compile a C++ app and provide it in an image.
Compilation is quite long (in the range of 2-3 hours) and commits on GitHub are frequent (~10 to 30 per day).
Is there a way to keep the building cache (using ccache) somehow?
As far as I understand it, Docker layer caching is useless here, since the compilation layer that produces the ccache is invalidated by the source code changes.
Or can we tweak something to bring some data back into the first layer?
Any other solution? Pushing it somewhere?
Here is the Dockerfile:
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG}
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install \
&& rm -rf /usr/src/QGIS
WORKDIR /
You should try saving and restoring your cache data from a third party service:
- an online object storage like Amazon S3
- a simple FTP server
- an Internet-accessible machine with ssh, to copy the data with scp
I'm assuming that your cache data is stored inside the ~/.ccache directory.
Using Docker multistage build
For some time now, Docker has supported multi-stage builds, and you can try using them to implement the solution with a single Dockerfile:
Warning: I've not tested it
# STAGE 1 - YOUR ORIGINAL DOCKER FILE CUSTOMIZED
# CACHE_TAG is provided by Docker cloud
# see https://docs.docker.com/docker-cloud/builds/advanced/
# using ARG in FROM requires min v17.05.0-ce
ARG CACHE_TAG=latest
FROM qgis/qgis3-build-deps:${CACHE_TAG} as builder
MAINTAINER Denis Rouzaud <denis.rouzaud@gmail.com>
ENV CC=/usr/lib/ccache/clang
ENV CXX=/usr/lib/ccache/clang++
ENV QT_SELECT=5
COPY . /usr/src/QGIS
WORKDIR /usr/src/QGIS/build
# restore the cache saved by a previous build (if any) into root's home, where ccache looks for it
RUN curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2
RUN tar -xjvf ccache.tar.bz2 -C /root
RUN cmake \
-GNinja \
-DCMAKE_INSTALL_PREFIX=/usr \
-DBINDINGS_GLOBAL_INSTALL=ON \
-DWITH_STAGED_PLUGINS=ON \
-DWITH_GRASS=ON \
-DSUPPRESS_QT_WARNINGS=ON \
-DENABLE_TESTS=OFF \
-DWITH_QSPATIALITE=ON \
-DWITH_QWTPOLAR=OFF \
-DWITH_APIDOC=OFF \
-DWITH_ASTYLE=OFF \
-DWITH_DESKTOP=ON \
-DWITH_BINDINGS=ON \
-DDISABLE_DEPRECATED=ON \
.. \
&& ninja install
# save the current cache online
WORKDIR /root
RUN tar -cvjSf ccache.tar.bz2 .ccache
RUN curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
# STAGE 2
FROM alpine:latest
# YOUR CUSTOM LOGIC TO CREATE THE FINAL IMAGE WITH ONLY REQUIRED BINARIES
# USE THE FROM IMAGE YOU NEED, this is only an example
# E.g.:
# COPY --from=builder /usr/src/QGIS/build/YOUR_EXECUTABLE /usr/bin
# ...
In stage 2 you build the final image that will be pushed to your repository.
Using Docker Cloud hooks
Another, but less clear, approach could be using a Docker Cloud pre_build hook file to download cache data:
#!/bin/bash
echo "=> Downloading build cache data"
curl -o ccache.tar.bz2 http://my-object-storage/ccache.tar.bz2 # e.g. Amazon S3 like service
tar -xjvf ccache.tar.bz2 # extract .ccache next to the Dockerfile so the COPY below can pick it up
Obviously you can use dedicated Docker images to run curl or tar in this script, mounting the local directory as a volume.
Then copy the extracted .ccache folder into your container during the build, using a COPY command before your cmake call:
WORKDIR /usr/src/QGIS/build
COPY .ccache /root/.ccache
RUN cmake ...
To make this work, you need a way to upload your cache data after the build, which you can do easily with a post_build hook file:
#!/bin/bash
echo "=> Uploading build cache data"
tar -cvjSf ccache.tar.bz2 ~/.ccache
curl -T ccache.tar.bz2 -X PUT http://my-object-storage/ccache.tar.bz2
But your compilation data isn't available from the outside, because it lives inside the container. So you should upload the cache after the compilation step inside your main Dockerfile:
RUN cmake...
&& ninja ...
&& tar ...
&& curl ...
&& rm ...
If curl or tar aren't available, just add them to your container using the package manager (qgis/qgis3-build-deps is based on Ubuntu 16.04, so they should be available).
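As a side note, Docker Hub / Docker Cloud automated builds expect these hook scripts in a hooks/ directory next to the Dockerfile, so the repository layout would look roughly like this (the comments describe the scripts shown above):
hooks/pre_build    # downloads and extracts ccache.tar.bz2 before the build starts
hooks/post_build   # uploads the refreshed ccache.tar.bz2 once the build finishes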

How to add a file to an image in Dockerfile without using the ADD or COPY directive

I need the contents of a large *.zip file (5 gb) in my Docker container in order to compile a program. The *.zip file resides on my local machine. The strategy for this would be:
COPY program.zip /tmp/
RUN cd /tmp \
&& unzip program.zip \
&& make
After having done this I would like to remove the unzipped directory and the original *.zip file because they are not needed any more. The problem is that the COPY (and also the ADD) directive will add a layer to the image that contains the file program.zip, which is problematic as my image will then be at least 5 GB big. Is there a way to add a file to a container without using the COPY or ADD directive? wget will not work as the mentioned *.zip file is on my local machine, and curl file://localhost/home/user/program.zip -o /tmp/program.zip will not work either.
It is not straightforward but it can be done via wget or curl with a little support from python. (All three tools should usually be available on a *nix system.)
wget will not work when no url is given and
curl file://localhost/home/user/program.zip -o /tmp/
will not work from within a Dockerfile's RUN instruction. Hence, we will need a server which wget and curl can access and download program.zip from.
To do this we set up a little Python server which serves our HTTP requests, using the http.server module (Python 3; on Python 2 the equivalent module is SimpleHTTPServer).
python3 -m http.server --bind 192.168.178.20 8000
The Python server serves all files in the directory it is started in, so make sure you either start the server in the directory that contains the file you want to download during the image build, or create a temporary directory which contains your program. For illustration purposes, let's create the file foo.txt which we will later download via wget in our Dockerfile:
echo "foo bar" > foo.txt
When starting the HTTP server, it is important that we specify the IP address of our local machine on the LAN. Furthermore, we will open port 8000. Having done this, we should see the following output:
python3 -m http.server --bind 192.168.178.20 8000
Serving HTTP on 192.168.178.20 port 8000 ...
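As an aside, with Python 3.7 or newer the directory to serve can also be passed explicitly instead of starting the server from inside it; the path below is only an example:
python3 -m http.server --bind 192.168.178.20 8000 --directory /home/user/build-files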
Now we build a Dockerfile to illustrate how this works. (We will assume that the file foo.txt should be downloaded into /tmp):
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://192.168.178.20:8000/foo.txt
Now we start the build with
docker build -t test .
During the build you will see the following output on our python server:
172.17.0.21 - - [01/Nov/2014 23:32:37] "GET /foo.txt HTTP/1.1" 200 -
and the build output of our image will be:
Step 2 : RUN cd /tmp && wget http://192.168.178.20:8000/foo.txt
---> Running in 49c10e0057d5
--2014-11-01 22:56:15-- http://192.168.178.20:8000/foo.txt
Connecting to 192.168.178.20:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25872 (25K) [text/plain]
Saving to: `foo.txt'
0K .......... .......... ..... 100% 129M=0s
2014-11-01 22:56:15 (129 MB/s) - `foo.txt' saved [25872/25872]
---> 5228517c8641
Removing intermediate container 49c10e0057d5
Successfully built 5228517c8641
You can then check whether it really worked by starting and entering a container from the image you just built:
docker run -i -t --rm test bash
You can then look in /tmp for foo.txt.
We can now add any file to our image without it being baked into an image layer. Assuming you want to add a program of about 5 GB, as mentioned in the question, we could do:
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://conventiont:8000/program.zip \
&& unzip program.zip \
&& cd program \
&& make \
&& make install \
&& cd /tmp \
&& rm -f program.zip \
&& rm -rf program
This way we are not left with ~10 GB of cruft in the image.
There's no way to do this. A feature request is here https://github.com/docker/docker/issues/3156.
Can you not map a local folder into the container when it is launched, and then copy the files you need?
sudo docker run -d -P --name myContainerName -v /localpath/zip_extract:/container/path/ yourImageName
https://docs.docker.com/userguide/dockervolumes/
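A minimal sketch of that approach, with an illustrative path and image name: mount the folder containing the archive read-only at run time, then unpack and build inside the container instead of at image-build time:
docker run --rm -it \
  -v /home/user:/host:ro \
  yourImageName \
  bash -c "cp /host/program.zip /tmp/ && cd /tmp && unzip program.zip && make"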
I have posted a similar answer here: https://stackoverflow.com/a/37542913/909579
You can use docker-squash to squash newly created layers. That will essentially remove the archive from the final image if you remove it in a subsequent RUN instruction.
