I hope you can help me on my RISC-V issue.
I am currently experimenting with toolchain support for RISC-V's vector (RVV) instructions. From what I found online, the spec is currently frozen at v1.0. GCC has support for RVV, but it is no longer actively maintained; LLVM, on the other hand, has active RVV support.
So I went ahead and set up a Docker container with: RISC-V tools (Repo at basic-rvv), spike (latest commit) and LLVM (latest commit).
Next, I compiled a sgemm example with the following command: clang -march=rv32gcv --target=riscv32 --sysroot=/usr/local/riscv32-unknown-elf --gcc-toolchain=/usr/local -O2 sgemm.c -o sgemm.elf. The command runs successfully and I get an elf file, which looks fine using objdump: it uses the vector instructions.
Now to my issue: afterwards, I want to verify the binary with the instruction set simulator spike. So I ran: spike /usr/local/riscv32-unknown-elf/bin/pk sgemm.elf, which ends in an illegal-instruction trap (see below for the full error message). The failing opcode is 0xb2905457. Decoding it with echo "DASM(0xb2905457)" | spike-dasm gives vfmacc.vf v8, v9, ft0, which looks fine to me.
I already went through spike's code to figure out why it might fail, but got lost.
Maybe you have an idea of what's going wrong here? I have the feeling that my vector unit is misconfigured (vsetvl instructions). I hope you can give me some support on this!
Thanks very much in advance!
Tim
Error message from spike:
bbl loader
z 00000000 ra 000103cc sp 7ffffd70 gp 00020810
tp 00000000 t0 00000020 t1 bf06fb33 t2 00000000
s0 00020090 s1 00020b54 a0 00000004 a1 00020000
a2 00020010 a3 00000004 a4 00020b94 a5 0000001c
a6 bfed957a a7 00020b94 s2 00000000 s3 00000000
s4 00000000 s5 00000000 s6 00000000 s7 00000000
s8 00000000 s9 00000000 sA 00000000 sB 00000000
t3 3ea13dab t4 bf4b3713 t5 3ea6844f t6 3fdfe3d3
pc 000103ea va/inst b2905457 sr 80006620
An illegal instruction was executed!
You should try:
spike --isa=rv64gcv --varch=vlen:128,elen:64 /usr/local/riscv32-unknown-elf/bin/pk sgemm.elf
That is, add --isa=rv64gcv --varch=vlen:128,elen:64 to the spike command line.
GCC now supports RVV intrinsics and auto-vectorization. You can check out the "riscv-gcc-rvv-next" branch in the riscv-gcc directory and "riscv-binutils-2.38" in the riscv-binutils directory, then build the whole toolchain.
Over time I found a solution; however, I forgot to post it until now.
In fact, something went wrong when building the Docker container, and possibly also during the build process; the error originated from an earlier point in the execution of the elf in spike.
I can only advise everyone to check spike -d $PK $ELF 2> debug.txt to see where things went wrong.
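For example, here is a sketch of how you might narrow down the faulting instruction in the captured trace. The trace excerpt below is made up for illustration, since the exact format of spike's debug output varies between versions:

```shell
# Stand-in for the real trace captured via: spike -d $PK $ELF 2> debug.txt
cat > debug.txt <<'EOF'
core   0: 0x000103e6 (0x0d007057) vsetvli a0, zero, e32, m2
core   0: 0x000103ea (0xb2905457) vfmacc.vf v8, v9, ft0
EOF

# Show the faulting PC and the instruction just before it; a bad
# vsetvli/vsetvl configuration right before the trap is a common culprit
grep -B1 '0x000103ea' debug.txt
```

Looking at the last vsetvli/vsetvl before the trap usually tells you whether the vector unit was configured with the SEW/LMUL you expected.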
Anyhow, I attached the Dockerfile and the Makefile for anyone who might run into the same issue.
Dockerfile:
FROM gcc:11.2
RUN apt update
RUN apt install -y autoconf automake autotools-dev curl python3 gawk \
build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev \
libexpat-dev cmake vim device-tree-compiler libmpc-dev libmpfr-dev \
gdb zsh tmux libgmp-dev && \
apt clean -y && \
apt autoremove -y
# environment
ARG CFLAGS=-D__riscv_compressed
#install toolchain
RUN mkdir riscv-gnu-toolchain && cd riscv-gnu-toolchain && \
git clone https://github.com/riscv/riscv-gnu-toolchain . && \
git fetch && \
git checkout basic-rvv && \
git submodule update --init --recursive && \
./configure --with-arch=rv32gc --with-abi=ilp32d && \
make -j32 && \
make install && \
cd .. && \
rm riscv-gnu-toolchain -rf
#install spike
RUN mkdir -p /build/spike/build /build/spike/repo && cd /build/spike && \
git clone https://github.com/riscv/riscv-isa-sim.git repo && \
cd /build/spike/build && \
../repo/configure --with-varch=vlen:128,elen:32 --with-isa=rv32imafcv && \
make && make install && \
cd /build && \
rm /build/spike -rf
# install pk
RUN mkdir -p /build/pk/build /build/pk/repo && cd /build/pk && \
git clone https://github.com/riscv/riscv-pk.git repo && \
cd /build/pk/build && \
../repo/configure --host=riscv32-unknown-elf --with-arch=rv32gc CC=riscv32-unknown-elf-gcc --with-abi=ilp32d && \
make && make install && \
cd /build && \
rm /build/pk -rf
# install llvm
RUN mkdir -p /build/llvm && cd /build/llvm && \
git clone https://github.com/llvm/llvm-project . && \
mkdir build && cd build && \
cmake -G "Unix Makefiles" -DLLVM_TARGETS_TO_BUILD="RISCV" \
-DLLVM_DEFAULT_TARGET_TRIPLE=riscv32-unknown-elf \
-DCMAKE_BUILD_TYPE=Release -DDEFAULT_SYSROOT=/usr/local/riscv32-unknown-elf \
-DLLVM_ENABLE_PROJECTS="clang;lld" ../llvm && \
make -j32 && \
make install && \
cd /build && \
rm /build/llvm -rf
WORKDIR /root
Makefile (assuming the only file is test_sgemm.c)
PREFIX = riscv32-unknown-elf
AR = $(PREFIX)-ar
CC = $(PREFIX)-gcc
CLANG = clang
MARCH = rv32gcv
INCLUDES =
CFLAGS = -march=$(MARCH) --target=riscv32 \
--sysroot=/usr/local/riscv32-unknown-elf \
--gcc-toolchain=/usr/local \
-g
OBJS = $(patsubst %.c, %_$(MARCH).o, $(wildcard *.c))
all: test_sgemm.elf
%.o: %.c $(DEPS)
	$(CLANG) -c -o $@ $< $(CFLAGS) $(INCLUDES)
%.elf: %.o
	$(CLANG) -o $@ $< $(CFLAGS)
Hope this helps you!
Thanks
Tim
Related
I currently have a project where pdftotext from poppler-utils comes from the "testing" version (found here: https://manpages.debian.org/testing/poppler-utils/pdftotext.1.en.html). Instead, I want to use the "experimental" version by updating the Debian image in the Dockerfile (trying to avoid conflicts with other items). Is there a simple way to do this, or is it not feasible?
As usual, I figured out the solution myself. I got some good insight on the commands from this post. I had to update to a version that would work with my bot base, but got it all figured out.
Installing Poppler utils of version 0.82 in docker
Leaving this here in case someone else encounters something similar.
FROM python:3.8-slim-buster
RUN apt-get update && apt-get install -y wget build-essential cmake libfreetype6-dev \
    pkg-config libfontconfig-dev libjpeg-dev libopenjp2-7-dev
RUN wget https://poppler.freedesktop.org/poppler-data-0.4.9.tar.gz \
&& tar -xf poppler-data-0.4.9.tar.gz \
&& cd poppler-data-0.4.9 \
&& make install \
&& cd .. \
&& wget https://poppler.freedesktop.org/poppler-20.08.0.tar.xz \
&& tar -xf poppler-20.08.0.tar.xz \
&& cd poppler-20.08.0 \
&& mkdir build \
&& cd build \
&& cmake .. \
&& make \
&& make install \
&& ldconfig
CMD tail -f /dev/null
What I am doing is setting up sbcl and quicklisp in RUN commands in the Dockerfile and then using a CMD to load my custom code.
When I run it with Docker on my local machine all is well, but when I push it to Google Cloud Run, the Lisp code (loaded with CMD) crashes because it can't find quicklisp.
As far as I can see, that is because HOME differs between RUN (/root) and CMD (/home).
It is the same user in RUN and CMD: uid=0(root) gid=0(root) groups=0(root).
I assume that Google Cloud Run uses some Linux mechanism to change the user's context (I don't know the correct term for this), but I don't know why or how they are doing it. And because I don't know what they are doing or why, it is difficult to google for a solution.
Any suggestions would be welcome.
To see the behaviour:
Dockerfile:
FROM phusion/baseimage
MAINTAINER Piet Pompies <piet@pompies.com>
RUN echo $HOME
CMD echo $HOME
When you build, $HOME will be /root; when you deploy and run, it will be /home.
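One way to make the image independent of this behaviour (a sketch, not tested on Google Cloud Run itself) is to pin HOME explicitly and refer to files by absolute path rather than via ~:

```dockerfile
FROM phusion/baseimage
# Pin HOME so CMD resolves the same directory RUN did
ENV HOME=/root
RUN echo "$HOME" > /root/home-at-build
CMD cat /root/home-at-build && echo "$HOME"
```

With ENV HOME=/root set in the Dockerfile, both lines should print /root regardless of what the runtime environment would otherwise set.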
EDIT: I found a work around (12 Jun 2020):
Not sure if I should put the workaround in the answer or just in an edit like I did here. I will leave it in the edit until advised otherwise.
WORK AROUND:
You can dump a core in your RUN commands and use that in CMD, or use buildapp to run the Lisp code. buildapp is what you want to use for a final release.
Full woo and buildapp example
woo.lisp
(defun main (&rest args)
(declare (ignore args))
(woo:run
(lambda (env)
(cond ((equalp (getf env :REQUEST-URI) "/test")
(list 200
(list :content-type "text/plain")
(list (format nil "Hello, World - ~A" (getf env :REQUEST-URI)))))
(t
(list 200
(list :content-type "text/plain")
(list (format nil "~S" env))))))
:address "0.0.0.0"
:port 5000))
Dockerfile:
FROM phusion/baseimage
MAINTAINER Piet Pompies <piet@pompies.com>
RUN apt-get update &&\
apt-get install -y sbcl curl wget rlwrap build-essential time libev-dev screen && \
cd /tmp && \
curl -O https://ufpr.dl.sourceforge.net/project/sbcl/sbcl/2.0.5/sbcl-2.0.5-source.tar.bz2 && \
tar jxvf sbcl-2.0.5-source.tar.bz2 && \
cd /tmp/sbcl-2.0.5 && \
sh ./make.sh && \
sh ./install.sh && \
rm -rf /tmp/sbcl*
RUN cd /tmp && \
wget http://www.xach.com/lisp/buildapp.tgz && \
tar xvf buildapp.tgz && \
cd /tmp/buildapp-1.5.6 && \
make install && \
rm -rf /tmp/buildapp*
RUN mkdir /src/
# install quicklisp (requirements: curl, sbcl)
RUN curl -k -o /tmp/quicklisp.lisp 'https://beta.quicklisp.org/quicklisp.lisp' && \
sbcl --noinform --non-interactive --load /tmp/quicklisp.lisp --eval \
'(quicklisp-quickstart:install :path "~/quicklisp/")' && \
sbcl --noinform --non-interactive --load ~/quicklisp/setup.lisp --eval \
'(ql-util:without-prompting (ql:add-to-init-file))' && \
echo '#+quicklisp(push "/src" ql:*local-project-directories*)' >> ~/.sbclrc && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY woo.lisp /src/woo.lisp
RUN sbcl --no-userinit \
--no-sysinit --non-interactive \
--load ~/quicklisp/setup.lisp \
--eval '(ql:quickload "woo")' \
--eval '(ql:write-asdf-manifest-file "quicklisp-manifest.txt")'
RUN buildapp \
--manifest-file quicklisp-manifest.txt \
--load-system woo \
--load /src/woo.lisp \
--entry main \
--output woo
EXPOSE 5000
CMD sleep 0.05; ./woo
I am getting an error: no include path in which to search for stdint.h when building a Docker image from alpine:edge, which leads to further errors such as unknown type name 'uint32_t' and a failed compilation.
As far as I understand, stdint.h is part of the C standard library and should be present, unless something is broken within alpine:edge, which I doubt.
My Dockerfile is the following:
FROM alpine:edge
RUN apk update && apk add \
git \
make \
gcc \
python3 \
ldc \
&& git clone --recursive https://github.com/lomereiter/sambamba.git \
&& cd sambamba \
&& make \
&& mv sambamba /usr/local/bin/ \
&& cd ../.. \
&& rm -r sambamba
WORKDIR /wd
ENTRYPOINT ["/usr/local/bin/sambamba"]
Note that the alpine:edge image is necessary because the ldc package is only available there. How can I fix this? And why isn't stdint.h found?
To successfully compile Sambamba, you need some additional packages:
g++ (for the C++ compiler and includes)
zlib
zlib-dev (for the zlib header files)
Overall, this modified Dockerfile should do the trick:
FROM alpine:edge
RUN apk update && apk add \
git \
make \
gcc \
g++ \
zlib \
zlib-dev \
python3 \
ldc \
&& git clone --recursive https://github.com/lomereiter/sambamba.git \
&& cd sambamba \
&& make \
&& mv sambamba /usr/local/bin/ \
&& cd ../.. \
&& rm -r sambamba
WORKDIR /wd
ENTRYPOINT ["/usr/local/bin/sambamba"]
I have a problem compiling Thrift in a Docker container based on alpine 3.8:
FROM alpine:3.8
RUN apk add --update --no-cache \
libstdc++ \
libgcc \
libevent \
composer \
tar \
git \
bash \
nginx
RUN apk add --update --no-cache php7-dev boost-dev autoconf openssl-dev automake make libtool bison flex g++ && \
cd /tmp && wget https://github.com/apache/thrift/archive/0.11.0.zip -O thrift.zip && unzip thrift.zip && cd thrift-0.11.0 && \
./bootstrap.sh && ./configure --with-openssl=/usr/include/openssl --without-ruby --disable-tests --without-php_extension --without-python --without-haskell --without-java --without-perl --without-php --without-py3 --without-erlang && make && make install && \
cd /tmp/thrift-0.11.0/lib/php/src/ext/thrift_protocol && phpize && ./configure && make && \
echo 'extension=thrift_protocol.so' >> /etc/php7/conf.d/thrift_protocol.ini && \
apk del --update --no-cache php7-dev boost-dev autoconf openssl-dev automake make libtool bison flex g++ && \
rm -rf /tmp/*
After compiling, the binary is around 50 MB:
-rwxr-xr-x 1 root root 55566368 Sep 5 10:05 thrift
For comparison, the binary compiled on Mac OSX is:
4.2M Sep 4 17:37 thrift
By default, configure && make builds Thrift with debug symbols, which is the main reason for the binary bloat.
To build a more compact, optimized thrift binary, replace:
configure
With:
configure CFLAGS="-s -O2" CXXFLAGS="-s -O2"
The -s option strips debug information from the generated objects.
The -O2 option enables common compiler optimizations, which should also improve thrift's performance considerably.
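Applied to the Dockerfile above, that means passing the flags to the existing configure step. A sketch (only some of the original --without flags are repeated here for brevity):

```dockerfile
RUN cd /tmp/thrift-0.11.0 && \
    ./bootstrap.sh && \
    ./configure CFLAGS="-s -O2" CXXFLAGS="-s -O2" \
        --with-openssl=/usr/include/openssl \
        --disable-tests --without-php_extension && \
    make && make install
```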
More information:
https://thrift.apache.org/docs/BuildingFromSource
https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
We were wondering how to parameterize the build process. We have an application that gains features when the right libraries are present at compile time. We would also like to optionally include some debugging applications in the same image.
Our current strategy is to comment lines in and out in the Dockerfile and build it under another name.
# Dockerfile in multiple variants / t-shirt sizing:
# - as is, with all the #something; comments: very basic compilation without features
# - removing #something; enable a variant/feature
# see build and test instructions below
###
# REQUIRED: INSTALL COMPILER, DOWNLOAD AND INSTALL BITS AND PIECES
###
# start from a fedora 28 image
FROM fedora:28 AS compiler_build
RUN echo "############################# COMPILER IMAGE #################################"
# install requirements
#RUN dnf upgrade -y && dnf clean all
RUN dnf install -y git gcc gcc-c++ make automake autoconf gettext-devel
#######
# OPTIONAL: CAM SUPPORT
#######
#cam;RUN dnf install -y wget mercurial patch glibc-static
#cam;
#cam;# do not use pre-built dvb-apps and libdvbcsa from distro-mirror, but build from sources. This is required for cam support on fedora.
#cam;RUN cd /usr/local/src && \
#cam; hg clone http://linuxtv.org/hg/dvb-apps && \
#cam; cd dvb-apps && \
#cam; # patching for >=4.14 Kernel (https://aur.archlinux.org/packages/linuxtv-dvb-apps)
#cam; wget -q -O - https://git.busybox.net/buildroot/plain/package/dvb-apps/0003-handle-static-shared-only-build.patch | patch -p1 && \
#cam; wget -q -O - https://git.busybox.net/buildroot/plain/package/dvb-apps/0005-utils-fix-build-with-kernel-headers-4.14.patch | patch -p1 && \
#cam; wget -q -O - https://gitweb.gentoo.org/repo/gentoo.git/plain/media-tv/linuxtv-dvb-apps/files/linuxtv-dvb-apps-1.1.1.20100223-perl526.patch | patch -p1 && \
#cam; make && make install && \
#cam; ldconfig # b/c libdvben50221.so
#######
# OPTIONAL: SCAM SUPPORT
#######
#scam;RUN yum install -y openssl-devel dialog svn pcsc-lite pcsc-lite-devel libusb libusb-devel findutils file libtool
#scam;
#scam;RUN cd /usr/local/src && \
#scam; git clone https://code.videolan.org/videolan/libdvbcsa.git && \
#scam; cd libdvbcsa && \
#scam; autoreconf -i -f && \
#scam; ./configure --prefix=/usr && make && make install && \
#scam; ldconfig # b/c libdvbcsa.so
#scam; #dnf install -y https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm && \
#scam; #dnf install -y libdvbcsa-devel
#scam;
#scam;RUN cd /usr/local/src && \
#scam; svn checkout http://www.streamboard.tv/svn/oscam/trunk oscam-svn && \
#scam; cd oscam-svn && \
#scam; make USE_PCSC=1 USE_LIBUSB=1
#scam;
#scam;RUN cd /usr/local/src && \
#scam; git clone https://github.com/gfto/tsdecrypt.git && \
#scam; cd tsdecrypt && \
#scam; git submodule init && \
#scam; git submodule update && \
#scam; make && make install
#######
# REQUIRED: ACTUAL APPLICATION ITSELF
#######
# note: the ./configure will detect cam/scam support automagically if everything provided
RUN cd /usr/local/src && \
ldconfig && \
git clone https://github.com/braice/MuMuDVB.git && \
cd MuMuDVB && \
autoreconf -i -f && \
./configure --enable-android && \
make && make install
#######
# OPTIONAL: TOOLBOXING
#######
#tool;RUN cd /usr/local/src && \
#tool; git clone git://git.videolan.org/bitstream.git && \
#tool; cd bitstream && \
#tool; make all && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; dnf install -y libev-devel && \
#tool; git clone https://code.videolan.org/videolan/dvblast.git && \
#tool; cd dvblast && \
#tool; make all && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; yum install -y wget bzip2 && \
#tool; wget http://wirbel.htpc-forum.de/w_scan/w_scan-20170107.tar.bz2 && \
#tool; tar -jxf w_scan-20170107.tar.bz2 && \
#tool; cd w_scan-20170107/ && \
#tool; ./configure && make && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; git clone https://github.com/stefantalpalaru/w_scan2.git && \
#tool; cd w_scan2 && \
#tool; autoreconf -i -f && \
#tool; ./configure && make && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; yum install -y wget && \
#tool; wget http://udpxy.com/download/udpxy/udpxy-src.tar.gz && \
#tool; tar -zxf udpxy-src.tar.gz && \
#tool; cd udpxy-*/ && \
#tool; make && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; yum install -y xz wget && \
#tool; wget ftp://ftp.videolan.org/pub/videolan/miniSAPserver/0.3.8/minisapserver-0.3.8.tar.xz && \
#tool; tar -Jxf minisapserver-0.3.8.tar.xz && \
#tool; cd minisapserver-*/ && \
#tool; ./configure && make && make install
#tool;
#tool;RUN cd /usr/local/src && \
#tool; yum install -y wget && \
#tool; wget https://dl.bintray.com/tvheadend/fedora/bintray-tvheadend-fedora-4.2-stable.repo
###
# OPTIONAL: START OVER AND ONLY RE-INSTALL
###
FROM fedora:28
RUN echo "############################# RUNTIME IMAGE #################################"
# copy the whole /usr/local from the previous compiler-image (note the --from)
COPY --from=compiler_build /usr/local /usr/local
# install runtime libraries
#scam;RUN dnf install -y openssl-devel pcsc-lite libusb
#tool;RUN dnf install -y v4l-utils libev
#tool;RUN mv /usr/local/src/bintray-tvheadend-fedora-4.2-stable.repo /etc/yum.repos.d
#tool;RUN dnf search tvheadend # experimental
# unfortunately, some make's need gcc anyway :(
RUN dnf install -y make gcc gcc-c++ cpp glibc-devel glibc-headers kernel-headers
# re-install all the stuff from before
RUN test -e /usr/local/src/dvb-apps && cd /usr/local/src/dvb-apps && make install && ldconfig || exit 0
RUN test -e /usr/local/src/libdvbcsa && cd /usr/local/src/libdvbcsa && make install && ldconfig || exit 0
RUN cd /usr/local/src/MuMuDVB && make install && mumudvb -v
RUN test -e /usr/local/src/tsdecrypt && cd /usr/local/src/tsdecrypt && make install || exit 0
RUN test -e /usr/local/src/bitstream && cd /usr/local/src/bitstream && make install || exit 0
RUN test -e /usr/local/src/dvblast && cd /usr/local/src/dvblast && make install || exit 0
RUN test -e /usr/local/src/w_scan-20170107 && cd /usr/local/src/w_scan-20170107 && make install || exit 0
RUN test -e /usr/local/src/w_scan2 && cd /usr/local/src/w_scan2 && make install || exit 0
RUN test -e /usr/local/src/udpxy-*/ && cd /usr/local/src/udpxy-*/ && make install || exit 0
RUN test -e /usr/local/src/minisapserver-*/ && cd /usr/local/src/minisapserver-*/ && make install || exit 0
# remove gcc again
RUN dnf remove -y make gcc gcc-c++ cpp glibc-devel glibc-headers kernel-headers
RUN echo "############################# FINAL STEPS #################################"
# add a runtime user
RUN useradd -c "simple user" -g users -G audio,video,cdrom,dialout,lp,tty,games user
# include this very file into the image
COPY Dockerfile /
# use this user as default user
USER user
# assume persistent storage
VOLUME /tmp
# assume exposed ports
EXPOSE 8500:8500
# assume standard runtime executable
CMD ["/bin/bash"]
###
# RECOMMENDED: HOW TO BUILD AND TEST
###
# build mumudvb plain:
# cat Dockerfile.template > Dockerfile; time docker build -t my_mumudvb_simple .
# enable cam/scam support:
# sed -r 's_^#(cam|scam);__g' Dockerfile.template > Dockerfile; time docker build -t my_mumudvb_cam .
# enable tool but not scam support:
# sed -r 's_^#(tool);__g' Dockerfile.template > Dockerfile; time docker build -t my_mumudvb_tool .
# enable all support:
# sed -r 's_^#(cam|scam|tool);__g' Dockerfile.template > Dockerfile; time docker build -t my_mumudvb_full .
# simple compare and test
# $ docker run -it --rm my_mumudvb_simple /bin/bash
# $ docker run -it --rm my_mumudvb_full /usr/local/bin/w_scan
# $ docker run -it --rm my_mumudvb_cam /usr/local/bin/mumudvb
# $ docker run -it --rm my_mumudvb_tool /usr/local/bin/mumudvb
# run a scan. note the mapped device tree /dev/dvb
# $ docker run -it --rm --device /dev/dvb/ my_mumudvb_full w_scan -f s -s S13E0 -D1c
# run a mumudvb instance. Note the mapped device, filesystem and tcp-port
# $ docker run -it --rm --device /dev/dvb/ --volume ${PWD}/conf:/conf -p 8500:8500 my_mumudvb_cam mumudvb -d -c /conf/test.conf
What can you recommend for implementing and managing this? Re-using the intermediate compiler images could save some space and time, using tags could ease variant usage, etc.
What would you suggest?
Look at using ARG and ENV in your Dockerfile. ARG sets values that are available while the image is being built; ENV sets variables that are available in, and persist with, the image after it is built.
For example, say you needed different settings files based on DEV or PRD versions, you might do this:
ARG settings_filename
ENV settings_filename=${settings_filename}
Then call docker to build the image like so:
docker build --build-arg settings_filename="settings.dev" .
The --build-arg flag sets the value to "settings.dev", but ARG values are not persisted in the image. The ENV line is what persists the ARG value as an environment variable in the image that is produced.
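As a minimal, untested sketch (the settings_filename variable is just an illustration), the interaction looks like this:

```dockerfile
FROM alpine:3.18
# Build-time value; overridable with --build-arg, not stored in the image
ARG settings_filename=settings.prd
# Copy the build-arg into a real environment variable that persists at runtime
ENV settings_filename=${settings_filename}
CMD echo "using ${settings_filename}"
```

Building with docker build --build-arg settings_filename=settings.dev -t myapp . and then running docker run --rm myapp should print using settings.dev, because the ENV line froze the build-arg value into the image.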
Are the permutations hierarchical in any fashion? If so, you could create intermediate images and inherit from them. This has the advantage that the "ancestor" part of each build is pre-built and needs no further build time (except when it changes, of course).
If not, you could use your favourite scripting language to generate your Dockerfiles. How you do this does not really matter; what matters is that you automate it. I would suggest using an array that specifies, for each named build, which sections it should include. Since this runs quickly, you could regenerate all Dockerfiles as part of the image builds themselves, so that your specification never comes out of sync with the Dockerfiles.
In fact, if you do this, it may be worth not committing the generated Dockerfiles to version control, since that is one less thing to come out of sync with your scripted definitions.
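The scripted approach can be sketched in plain shell, reusing the #cam;/#scam;/#tool; markers and sed idiom from the template above (the stand-in template here is shortened for illustration):

```shell
# Stand-in template; in practice this is your real Dockerfile.template
cat > Dockerfile.template <<'EOF'
FROM fedora:28
RUN dnf install -y gcc
#cam;RUN dnf install -y glibc-static
#tool;RUN dnf install -y libev-devel
EOF

# name:sections pairs — which optional sections each variant enables
for spec in simple: cam:cam full:'cam|scam|tool'; do
  name=${spec%%:*}
  sections=${spec#*:}
  if [ -n "$sections" ]; then
    sed -r "s_^#($sections);__g" Dockerfile.template > "Dockerfile.$name"
  else
    cp Dockerfile.template "Dockerfile.$name"
  fi
done

grep -c '^RUN' Dockerfile.full   # → 3, all sections enabled
```

The variant list lives in one place (the for loop), and each `docker build -f Dockerfile.$name` can follow immediately, so the generated files never drift from the specification.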
Addendum
Regarding this remark:
We optionally would like to include some other debugging applications into the same image.
It's worth being careful here about what constitutes a tested image. If you add optional debug tools to an image, then:
if the image is not deployed to production, you may end up testing against a version of your app that is not the one in production; the ideal is to use the same image all the way through your development workflow;
if the image is deployed to production, it's worth considering whether the attack surface is increased.