Add fonts to image using Jib - docker

We have Jib in our project instead of Docker containerization, and now I have a task to add fonts to the final image.
These are the commands I need to reproduce with Jib, and how they look with Docker:
RUN apk --no-cache add curl ttf-dejavu msttcorefonts-installer fontconfig \
    && update-ms-fonts \
    && fc-cache -f
Actually, I don't think I need all of these fonts; I only need Times New Roman.
I should also mention that I have no way to change the base image, so I can't add the fonts to it before Jib uses it.
Considering all of this, what would you recommend?
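For what it's worth, Jib's extraDirectories feature copies files from the project straight into the image, which would let you ship the TTF files without any RUN command. A sketch for the Gradle plugin, assuming the font files are checked into the repository under src/main/fonts (both the source path and the target directory are illustrative choices):

// build.gradle -- assumes the Jib Gradle plugin is already applied;
// src/main/fonts and the target directory are illustrative
jib {
  extraDirectories {
    paths {
      path {
        from = file('src/main/fonts')       // local directory holding the .ttf files
        into = '/usr/share/fonts/truetype'  // a directory fontconfig scans
      }
    }
  }
}

Jib cannot execute commands such as fc-cache during the build, so this relies on fontconfig in the base image scanning /usr/share/fonts at runtime; it should rebuild its cache on first use if it is stale.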

Related

How can I prevent Docker from compiling a library every time I deploy to Bitbucket? Is there a Bitbucket Pipelines cache?

We have our Flask API in a Docker image; we push this image to a Bitbucket repository, and then a Bitbucket pipeline starts deploying.
Everything works as expected, but the compilation of OpenCV takes on average 15 minutes.
I would like to know if there is any way to avoid this compilation every time we push to Bitbucket, something like caching.
I have read about caching in Bitbucket Pipelines, but it did not work as I expected.
This is the part of my Dockerfile I would like to improve:
RUN mkdir /opt && cd /opt && \
    wget -q https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
    unzip ${OPENCV_VERSION}.zip && \
    rm -rf ${OPENCV_VERSION}.zip && \
    mkdir -p /opt/opencv-${OPENCV_VERSION}/build && \
    cd /opt/opencv-${OPENCV_VERSION}/build && \
    CXX=/usr/bin/clang++ CC=/usr/bin/clang cmake \
      -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_FFMPEG=NO \
      -D WITH_IPP=NO \
      -D WITH_OPENEXR=NO \
      -D WITH_TBB=YES \
      -D BUILD_EXAMPLES=NO \
      -D BUILD_ANDROID_EXAMPLES=NO \
      -D INSTALL_PYTHON_EXAMPLES=NO \
      -D BUILD_DOCS=NO \
      -D BUILD_opencv_python2=NO \
      -D BUILD_opencv_python3=ON \
      -D ENABLE_PYTHON3=ON \
      -D PYTHON3_EXECUTABLE=/usr/bin/python3 \
      .. && \
    make VERBOSE=1 -j8 && \
    make install && \
    rm -rf /opt/opencv-${OPENCV_VERSION}
I expect some solution like just pointing to a pre-compiled version of the OpenCV API.
I have recently faced this problem and agree that the cache doesn't seem to work as expected. However, without looking at your entire Dockerfile, it's hard to say. ADDs and COPYs will invalidate the cache, so I'd suggest moving this section up to the top, if you can, before adding any files.
A better solution (if there is no pre-compiled version) is to use the concept of a base image, which is what I have done to cut my build time in half. Basically, you build a base image flask-api-base that installs all your packages and compiles OpenCV, and then your actual final image pulls FROM flask-api-base:latest and builds only your application-specific code. Just remember that if the base image changes, you may need to wipe your Bitbucket cache.
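A minimal sketch of that split, reusing the OpenCV build from the question (the base image, file names, and Python version are illustrative):

# Dockerfile.base -- built rarely, then pushed as flask-api-base:latest
FROM python:3.7-slim
RUN apt-get update && apt-get install -y wget unzip clang cmake make
# ... the long OpenCV download/compile RUN from the question goes here ...

# Dockerfile -- built on every push; OpenCV is already compiled in the base
FROM flask-api-base:latest
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]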
I'm unfamiliar with OpenCV but assume that, if there is a binary that you can use, that would be the ideal option.
I'm curious as to why this layer (RUN ...) isn't being cached between builds. It appears that you're cleanly separating the OpenCV build from the other statements in your Dockerfile, so this RUN should generate a distinct layer that's stable and thus reused across builds.
Does this statement occur after earlier statements (e.g. other RUNs) that do change? If so, you may want to reorder it and place it earlier in the Dockerfile so that this layer stays constant. See the Dockerfile best practices on which statements generate layers.
Alternatively, you could make a separate image containing OpenCV and then FROM this image in your code builds. You may do this either using distinct Dockerfiles or multi-stage builds. This way, this image containing the OpenCV build would only be built on (your) demand and reused across subsequent builds.
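A hedged sketch of the multi-stage variant (the stage name and base images are illustrative; the compile step is the long RUN from the question):

FROM debian:buster AS opencv-build
# ... install build tools and run the OpenCV compile, installing into /usr/local ...

FROM python:3.7-slim
# Copy only the compiled artifacts; compilers and sources stay behind in the first stage
COPY --from=opencv-build /usr/local /usr/local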
The solution I used was to create my own image, upload it to Docker Hub, and create a new one based on that.
So the first Docker image contains all the basic libraries my system uses.
The second has the environment variables and the API itself.
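In practice that means something like the following, with illustrative image and file names:

docker build -t myuser/flask-api-base:latest -f Dockerfile.base .
docker push myuser/flask-api-base:latest
# the application Dockerfile then starts with:
#   FROM myuser/flask-api-base:latest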

Docker proxy config not working for ADD in Dockerfile

I am trying to write a Dockerfile that adds a file to the image like this:
ADD https://repository.internal/file.zip /tmp/
The repository.internal host is only reachable through a proxy. I provide the proxy configuration with the --config option, but the ADD command does not seem to use the proxy and fails.
I know the proxy configuration is correct because I added the line
RUN curl https://repository.internal/file.zip
which is working fine.
Is there any possibility to run the ADD command also with the proxy config?
As per my comments above, I believe this has to do with how the Docker build process handles ADD and RUN internally: a RUN command is executed inside a container layered on the image being built (so it sees the build-time proxy environment), whereas an ADD with a URL is fetched by the Docker daemon itself and its result baked into the image. I can't find documentation to back this up, so someone with greater internal knowledge may confirm or deny.
Whichever way this is being handled, you can achieve what you need by moving to the RUN method as follows:
FROM <your base image>
RUN curl -fsSL https://repository.internal/file.zip -o /tmp/file.zip \
    && cd /tmp \
    && unzip file.zip \
    && rm file.zip
And you will have the files unzipped.
The rm at the end is required: unzip does not remove the original zip file.
As you mentioned, this relies on the curl and unzip packages being available on the image. However, you could avoid having these in your final application image by using Docker multi-stage builds.
Your Dockerfile would then look something like:
FROM <some useful base image> AS collector
RUN apt-get update && apt-get install -y curl unzip
RUN mkdir /tmp/files && \
    curl -fsSL https://repository.internal/file.zip -o /tmp/files/file.zip && \
    cd /tmp/files && \
    unzip file.zip && \
    rm file.zip
FROM <your final desired base image>
COPY --from=collector /tmp/files /tmp
This would then utilise an image to have curl and unzip in to collect and deal with the extraction of your files without having to install them on your final application image.
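For completeness: Docker also treats the standard proxy variables as predefined build args, which is typically how a RUN curl like the one above picks up the proxy during the build. Something like this, with an illustrative proxy address:

docker build \
  --build-arg http_proxy=http://proxy.internal:3128 \
  --build-arg https_proxy=http://proxy.internal:3128 \
  -t my-image .

Note that these reach the environment of RUN commands but not the daemon-side fetch that ADD performs, which is consistent with the behaviour described above.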

Lightweight GCC for Alpine

Is there a lightweight GCC distribution that I can install in Alpine?
I am trying to make a small Docker image. For that reason, I am using Alpine as the base image (5MB). The standard GCC install dwarfs this in comparison (>100MB).
So is there a lightweight GCC distribution that I can install on Alpine?
Note: Clang is much worse (475MB last I checked).
There isn't such an image available, AFAIK, but you can make GCC slimmer by deleting unneeded GCC binaries.
It very much depends on what capabilities are required from GCC.
As a starting point, I'm assuming you need C support only, which means the gcc and musl-dev packages (for standard headers) are installed; this results in a ~100MB image with Alpine 3.8.
If you don't need Objective-C support, you could remove cc1obj, which is the Objective-C backend. On Alpine 3.8, it would be located at /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1obj, and takes up 17.6MB.
If you don't need link time optimization (LTO), you could remove the LTO wrapper and main executables, lto-wrapper and lto1, which take up 700kb and 16.8MB respectively.
While LTO may be powerful, on most applications it's likely to yield only minor speed and size improvements (a few percent). Plus, you have to opt in to LTO, which most applications don't do, so it may be a good candidate for removal.
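For reference, the opt-in is a compiler flag, so a build that never passes it never exercises these binaries:

# LTO is opt-in; without -flto, lto1 and lto-wrapper are never invoked
gcc -flto -O2 main.c -o main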
You could remove the Java front end, gcj, which doesn't seem to be working anyway. It is located at /usr/bin/x86_64-alpine-linux-musl-gcj, and weighs 812kB.
By removing these and squashing the resulting image, it shrinks to 64.4MB, which is still considerably large. You may be able to shrink it further by removing additional files, but then you may lose some desired functionality, at a less appealing tradeoff.
Here's an example Dockerfile:
FROM alpine:3.8
RUN set -ex && \
    apk add --no-cache gcc musl-dev
RUN set -ex && \
    rm -f /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1obj && \
    rm -f /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/lto1 && \
    rm -f /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/lto-wrapper && \
    rm -f /usr/bin/x86_64-alpine-linux-musl-gcj
Tested using:
sudo docker image build --squash -t alpine-gcc-minimal .
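To sanity-check that plain C compilation still works after the removals, a quick (illustrative) smoke test against the resulting image:

docker run --rm alpine-gcc-minimal \
  sh -c 'echo "int main(void){return 0;}" > /tmp/t.c && gcc /tmp/t.c -o /tmp/t && /tmp/t && echo OK'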

Configured a downloaded package with ./configure, how to remove it completely from CentOS

I did the following in setup.sh while creating a Docker image:
wget -qO- https://downloads.sourceforge.net/project/libpng/zlib/1.2.8/zlib-1.2.8.tar.gz | tar zxv
cd zlib-1.2.8
./configure
make
make install
At the end of the Dockerfile I want to remove all binaries of this package to reduce the size of the image. How do I do that?
You probably want to run make uninstall after you have done what you want with this library, and also remove the zlib-1.2.8 folder. Your Dockerfile should look like:
FROM centos:7
COPY setup.sh do_stuff_with_zlib.sh uninstall_zlib.sh ./
RUN ./setup.sh \
    && ./do_stuff_with_zlib.sh \
    && ./uninstall_zlib.sh
The uninstall_zlib.sh script should contain :
#!/usr/bin/env sh
(cd zlib-1.2.8; make uninstall) # uninstall binaries
rm -rf zlib-1.2.8 # also remove the source folder to gain some space
Note that ./setup.sh and ./uninstall_zlib.sh must run in the same layer (the same RUN directive); otherwise the resulting image size will not be reduced (unless you squash it afterwards).
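To verify the cleanup actually paid off, per-layer sizes can be inspected with docker history (image name illustrative):

docker history my-zlib-image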

Docker how to ADD a file without committing it to an image?

I have a ~300MB zipped local file that I add to a Docker image. The next step then extracts the archive.
The problem is that the ADD statement results in a commit that creates a new filesystem layer, making the image ~300MB larger than it needs to be.
ADD /files/apache-stratos.zip /opt/apache-stratos.zip
RUN unzip -q apache-stratos.zip && \
    rm apache-stratos.zip && \
    mv apache-stratos-* apache-stratos
Question: Is there a work-around to ADD local files without causing a commit?
One option is to run a simple web server (e.g. python -m SimpleHTTPServer) before starting the docker build, and then using wget to retrieve the file, but that seems a bit messy:
RUN wget http://localhost:8000/apache-stratos.zip && \
    unzip -q apache-stratos.zip && \
    rm apache-stratos.zip && \
    mv apache-stratos-* apache-stratos
Another option is to extract the zipped file at container start up instead of build time, but I would prefer to keep the start up as quick as possible.
According to the documentation, if you pass a local tar archive (not a URL) to ADD in the Dockerfile, with a destination path rather than a path + filename, it will unpack the archive into the given directory:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources from remote URLs are not decompressed. When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of: 1) whatever existed at the destination path, and 2) the contents of the source tree, with conflicts resolved in favor of 2) on a file-by-file basis.
Note that zip is not among the recognized formats, so you would first need to repack the file as a tar archive (e.g. a .tar.gz). Then try:
ADD /files/apache-stratos.tar.gz /opt/
and see if the files are there, without further decompression.
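A hedged one-liner for that repacking, using the paths from the question:

# Convert the zip into a gzipped tar so ADD can auto-extract it
cd /files && unzip -q apache-stratos.zip -d tmp \
  && tar -czf apache-stratos.tar.gz -C tmp . \
  && rm -rf tmp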
With Docker 17.05+ you can use a multi-stage build to avoid creating extra layers.
FROM ... as stage1
# No need to clean up here, these layers will be discarded
ADD /files/apache-stratos.zip /opt/apache-stratos.zip
WORKDIR /opt
RUN unzip -q apache-stratos.zip && \
    mv apache-stratos-* apache-stratos

FROM ...
COPY --from=stage1 /opt/apache-stratos/ /opt/apache-stratos/
You can use docker-squash to squash newly created layers. That should reduce the image size significantly.
Unfortunately, the workarounds mentioned (RUN curl ... && unzip ... && rm ..., or unpacking at container start-up) were the only options at the time (Docker 1.11).
There are currently 3 options I can think of.
Option 1: you can switch to a tar or compressed tar format from the zip file and then allow ADD to decompress the file for you.
ADD /files/apache-stratos.tgz /opt/
The only downside is that any other change, like a directory rename, will trigger the copy-on-write again, so you need to make sure your tar file has the contents in the final directory structure.
Option 2: Use a multi-stage build. Extract the file in an early stage, perform any changes, and then copy the resulting directory to your final stage. This is a good option for any build engines that cannot use BuildKit. augurar's answer covers this so I won't repeat the same Dockerfile he already has.
Option 3: BuildKit (available in 18.09 and newer) allows you to mount files from other locations, including your build context, within a RUN command. This currently requires the experimental syntax. The resulting Dockerfile looks like:
# syntax=docker/dockerfile:experimental
FROM ...
...
# The bind mount is read-only and is never committed to the image,
# so there is no zip file left to rm afterwards.
RUN --mount=type=bind,source=/files/apache-stratos.zip,target=/opt/apache-stratos.zip \
    unzip -q /opt/apache-stratos.zip -d /opt && \
    mv /opt/apache-stratos-* /opt/apache-stratos
Then to build that, you export a variable before running your build (you could also export it in your .bashrc or equivalent):
DOCKER_BUILDKIT=1 docker build -t your_image .
More details on BuildKit's experimental features are available here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
