I have added installation of the Vega tools to the docker-asciidoctor Dockerfile and they are present when running the bats tests, but when I run the image they are no longer present.
I am quite new to Docker.
I have also tried some variations of adding the node_modules directory to the path, but nothing works.
In all cases the directory where the Vega tools are installed to is simply not in the image.
I am adding the Vega tools like this:
...
&& npm install --build-from-source -g vega-cli vega vega-lite vega-embed \
&& echo `which vl2vg` \
...
which provides this output:
...
/usr/bin/vl2png -> /usr/lib/node_modules/vega-lite/bin/vl2png
/usr/bin/vl2svg -> /usr/lib/node_modules/vega-lite/bin/vl2svg
/usr/bin/vl2vg -> /usr/lib/node_modules/vega-lite/bin/vl2vg
/usr/bin/vg2pdf -> /usr/lib/node_modules/vega-cli/bin/vg2pdf
/usr/bin/vg2png -> /usr/lib/node_modules/vega-cli/bin/vg2png
/usr/bin/vg2svg -> /usr/lib/node_modules/vega-cli/bin/vg2svg
...
/usr/bin/vl2vg
just as one would expect.
And the test that one of the tools is there looks like this:
#test "vl2vg is installed and in the path" {
docker run -t --rm "${DOCKER_IMAGE_NAME_TO_TEST}" which vl2vg
}
That passes.
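For completeness, a slightly broader variant of the same bats check could cover all six binaries listed in the build output (a sketch, assuming the same DOCKER_IMAGE_NAME_TO_TEST variable the existing tests use):

@test "all Vega CLI tools are installed and in the path" {
  for tool in vl2vg vl2svg vl2png vg2svg vg2png vg2pdf; do
    docker run -t --rm "${DOCKER_IMAGE_NAME_TO_TEST}" which "$tool"
  done
}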
I would expect the Vega tools to be available in the image when I do the following:
docker run -it --entrypoint /bin/sh asciidoctor/docker-asciidoctor
/documents # which vl2vg
which: no vl2vg in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
Given the output from the build of the image, I would have expected /usr/bin/vl2vg to be there, but it is not, and /usr/lib/node_modules does not exist either.
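One way to narrow this down is to confirm which image is actually being run and to look inside it directly, for example (the untagged name resolves to :latest):

docker images asciidoctor/docker-asciidoctor
docker run --rm --entrypoint ls asciidoctor/docker-asciidoctor -l /usr/bin/vl2vg /usr/lib/node_modules

If the IMAGE ID behind the :latest tag is not the image the bats tests ran against, the run is hitting a stale image (which, as it turns out below, was the case here).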
For completeness, here are the Dockerfile and the steps taken:
FROM alpine:3.9
LABEL MAINTAINERS="Guillaume Scheibel <guillaume.scheibel@gmail.com>, Damien DUPORTAL <damien.duportal@gmail.com>"
ARG asciidoctor_version=2.0.9
ARG asciidoctor_confluence_version=0.0.2
ARG asciidoctor_pdf_version=1.5.0.alpha.17
ARG asciidoctor_diagram_version=1.5.16
ARG asciidoctor_epub3_version=1.5.0.alpha.9
ARG asciidoctor_mathematical_version=0.3.0
ARG asciidoctor_revealjs_version=2.0.0
ENV ASCIIDOCTOR_VERSION=${asciidoctor_version} \
ASCIIDOCTOR_CONFLUENCE_VERSION=${asciidoctor_confluence_version} \
ASCIIDOCTOR_PDF_VERSION=${asciidoctor_pdf_version} \
ASCIIDOCTOR_DIAGRAM_VERSION=${asciidoctor_diagram_version} \
ASCIIDOCTOR_EPUB3_VERSION=${asciidoctor_epub3_version} \
ASCIIDOCTOR_MATHEMATICAL_VERSION=${asciidoctor_mathematical_version} \
ASCIIDOCTOR_REVEALJS_VERSION=${asciidoctor_revealjs_version}
# Installing packages required for the runtime of
# any of the asciidoctor-* functionalities
RUN apk add --no-cache \
bash \
curl \
ca-certificates \
findutils \
font-bakoma-ttf \
graphviz \
inotify-tools \
make \
openjdk8-jre \
py2-pillow \
py-setuptools \
python2 \
ruby \
ruby-mathematical \
ttf-liberation \
unzip \
which
RUN addgroup --system appgroup && adduser -S appuser -G appgroup
WORKDIR /data/
# Installing Ruby Gems needed in the image
# including asciidoctor itself
RUN apk add --no-cache --virtual .rubymakedepends \
build-base \
libxml2-dev \
ruby-dev \
&& gem install --no-document \
"asciidoctor:${ASCIIDOCTOR_VERSION}" \
"asciidoctor-confluence:${ASCIIDOCTOR_CONFLUENCE_VERSION}" \
"asciidoctor-diagram:${ASCIIDOCTOR_DIAGRAM_VERSION}" \
"asciidoctor-epub3:${ASCIIDOCTOR_EPUB3_VERSION}" \
"asciidoctor-mathematical:${ASCIIDOCTOR_MATHEMATICAL_VERSION}" \
asciimath \
"asciidoctor-pdf:${ASCIIDOCTOR_PDF_VERSION}" \
"asciidoctor-revealjs:${ASCIIDOCTOR_REVEALJS_VERSION}" \
coderay \
epubcheck:3.0.1 \
haml \
kindlegen:3.0.3 \
pygments.rb \
rake \
rouge \
slim \
thread_safe \
tilt \
&& apk add --update npm \
# && npm -g config set user root \
&& apk --no-cache --virtual .canvas-build-deps add \
build-base \
cairo-dev \
jpeg-dev \
pango-dev \
giflib-dev \
pixman-dev \
pangomm-dev \
libjpeg-turbo-dev \
freetype-dev \
&& apk --no-cache add \
pixman \
cairo \
pango \
giflib \
# && npm -g config set user root \
&& npm config set user 0 \
&& npm config set unsafe-perm true \
&& npm install --build-from-source -g vega-cli vega vega-lite vega-embed \
&& echo `which vl2vg` \
# && apk del .canvas-build-deps \
&& apk del -r --no-cache .rubymakedepends
ENV PATH /data/node_modules/.bin:$PATH
ENV NODE_PATH /data/node_modules/
# Installing Python dependencies for additional
# functionalities such as diagrams or syntax highlighting
RUN apk add --no-cache --virtual .pythonmakedepends \
build-base \
python2-dev \
py2-pip \
&& pip install --upgrade pip \
&& pip install --no-cache-dir \
actdiag \
'blockdiag[pdf]' \
nwdiag \
Pygments \
seqdiag \
&& apk del -r --no-cache .pythonmakedepends
USER appuser
WORKDIR /documents
VOLUME /documents
CMD ["/bin/bash"]
The build command is (lifted from the Makefile):
DOCKER_IMAGE_NAME ?= docker-asciidoctor
DOCKERHUB_USERNAME ?= asciidoctor
DOCKER_IMAGE_TEST_TAG ?= $(shell git rev-parse --short HEAD)
build:
docker build \
-t $(DOCKER_IMAGE_NAME_TO_TEST) \
-f Dockerfile \
$(CURDIR)/
Testing is with:
test:
bats $(CURDIR)/tests/*.bats
where one of the tests is the one mentioned above.
The root cause was that I wasn't running the image I had just built, since I wasn't tagging it. I ran
docker tag asciidoctor/docker-asciidoctor:442d4d0 asciidoctor/docker-asciidoctor:latest
and then things worked.
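In other words, DOCKER_IMAGE_NAME_TO_TEST presumably expands to asciidoctor/docker-asciidoctor:<short SHA>, so the freshly built image only carries the SHA tag while a plain docker run ... asciidoctor/docker-asciidoctor resolves to a stale :latest. Either run the SHA-tagged image directly or retag it after a successful test run:

docker run -it --entrypoint /bin/sh asciidoctor/docker-asciidoctor:442d4d0
# or, once the tests pass:
docker tag asciidoctor/docker-asciidoctor:442d4d0 asciidoctor/docker-asciidoctor:latest
docker run -it --entrypoint /bin/sh asciidoctor/docker-asciidoctor:latest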
Thanks for this! Super helpful, especially the tip about installing canvas and vega-cli with npm install -g after npm -g config set user root.
I was building from a different base image: mhart/alpine-node. Here's the Dockerfile that worked for me:
FROM mhart/alpine-node:12.16.2
WORKDIR /app
COPY *.jar .
RUN apk add --no-cache \
python2 \
build-base \
g++ \
cairo-dev \
jpeg-dev \
pango-dev \
bash \
imagemagick
RUN npm -g config set user root \
&& npm install -g canvas \
&& npm install -g vega vega-lite vega-cli
# USER root
# RUN apk update
# RUN apk fetch openjdk8
# RUN apk add openjdk8
The four lines commented out at the end are for installing Java as well.
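A quick sanity check that the tools really ended up in this image (my-vega is just a hypothetical local tag):

docker build -t my-vega .
docker run --rm my-vega sh -c 'command -v vl2vg && command -v vg2png && echo "$PATH"'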
Related
Here I'm trying to build a Terraform image based on Alpine with the following Dockerfile, but without success. The same setup used to work until a few months ago, and I'm not sure what changed.
Dockerfile:
FROM alpine:latest
ARG Test_GID=1002
ARG Test_UID=1002
# Change to root user
USER root
RUN addgroup --gid ${Test_GID:-1002} test
RUN adduser -S -u ${Test_UID:-1002} -D -h "$(pwd)" -G test test
ENV USER=test
ENV TERRAFORM_VERSION=0.15.4
ENV TERRAFORM_SHA256SUM=ddf9fdfdfdsffdsffdd4e7c080da9a106befc1ff9e53b57364622720114e325c
ENV TERRAFORM_DOWNLOAD_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN apk --update add python3 py3-pip python3-dev
RUN apk update && \
apk add ansible \
gcc \
libffi \
libffi-dev \
musl-dev \
make \
openssl \
openssl-dev \
curl \
zip \
git \
jq
When I run the command docker image build -t terraform:0.15.5 . I get the error shown below.
There is a problem in the Dockerfile you copied here: I think jenkins should not be there. Anyway, I tried with the Dockerfile below and the build was successful; I couldn't reproduce the problem. Are there any other lines in your Dockerfile?
FROM alpine:latest
ARG Test_GID=1002
ARG Test_UID=1002
# Change to root user
USER root
RUN addgroup --gid ${Test_GID:-1002} test
RUN adduser -S -u ${Test_UID:-1002} -D -h "$(pwd)" -G test test
ENV USER=test
ENV TERRAFORM_VERSION=0.15.4
ENV TERRAFORM_SHA256SUM=ddf9fdfdfdsffdsffdd4e7c080da9a106befc1ff9e53b57364622720114e325c
ENV TERRAFORM_DOWNLOAD_URL=https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN apk --update add python3 py3-pip python3-dev
RUN apk update && \
apk add ansible \
gcc \
libffi \
libffi-dev \
musl-dev \
make \
openssl \
openssl-dev \
curl \
zip \
git \
jq
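For what it's worth, the ENV values suggest the part of the Dockerfile that wasn't posted is the usual download-and-verify step for Terraform. If so, it would look roughly like the sketch below (the real missing lines were not shown, and note that the SHA256 value above looks like a placeholder, so verification would fail until it matches the published checksum):

RUN curl -fsSL "${TERRAFORM_DOWNLOAD_URL}" -o terraform.zip \
    && echo "${TERRAFORM_SHA256SUM}  terraform.zip" | sha256sum -c - \
    # busybox unzip in the Alpine base image supports -d
    && unzip terraform.zip -d /usr/local/bin \
    && rm terraform.zip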
I have a Dockerfile that looks like this:
FROM python:3.7-alpine
VOLUME /folder
WORKDIR /
ADD my_package.whl .
RUN wget -O- "https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github" | tar -zx -C /usr/local/bin && \
apk update && \
apk add --virtual deps gcc python3-dev linux-headers musl-dev libffi-dev openssl-dev && \
apk add --no-cache bash && \
pip install my_package.whl && \
apk del deps
ENTRYPOINT [ "python_app" ]
And I execute it like this
docker run --rm \
-v ${WORKSPACE}/TEST_RESULTS:/folder \
-e CF_USERNAME=${CF_USN} \
-e CF_PASSWORD=${CF_PWD} \
container/${CF_SPACE}:${GIT_TAG_NAME} \
-b 'myy_parameter'
I expected that the execution results will put some files into ${WORKSPACE}/TEST_RESULTS. But it doesn't happen.
What is wrong?
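One thing worth checking is whether the bind mount itself works, for instance by overriding the entrypoint and writing a scratch file (probe.txt is just a throwaway name):

docker run --rm \
  -v ${WORKSPACE}/TEST_RESULTS:/folder \
  --entrypoint sh \
  container/${CF_SPACE}:${GIT_TAG_NAME} \
  -c 'touch /folder/probe.txt && ls -la /folder'

If probe.txt shows up in ${WORKSPACE}/TEST_RESULTS on the host, the mount is fine and the application is most likely writing its output somewhere other than /folder.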
I have a Dockerfile that already works in openjdk:8, but I am trying to convert it to Alpine and it is giving me some trouble. The application was made in Java and uses Selenium. This is my current code:
FROM openjdk:8-jdk-alpine
RUN apk update \
&& apk fetch gnupg \
&& apk add --virtual \
curl wget xvfb unzip gnupg \
&& gpg --list-keys
ARG CHROME_DRIVER_VERSION=85.0.4183.87
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
&& apk update \
&& apk add google-chrome-stable \
&& apk cache clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
&& wget https://chromedriver.storage.googleapis.com/${CHROME_DRIVER_VERSION}/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
EXPOSE 42052
.
.
.
I tried to add gnupg like I found in here:
Docker: Using apt-key with alpine image
But it does not work, I just get an error: /bin/sh: gpg: not found
If I remove it, I just get the error that apt-key is not found. What is the alternative in Alpine, or what changes do I have to make to my Dockerfile to get it working again?
Thanks in advance
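As a side note on the immediate error: apk fetch only downloads package archives without installing them, and the first argument after apk add --virtual is taken as the name of the virtual group (here curl), which is probably not what was intended. Installing the tools directly would look more like the sketch below (.chrome-deps is an arbitrary group name), although, as the answer below explains, the Chrome .deb route still won't work on Alpine:

RUN apk update \
    && apk add --no-cache --virtual .chrome-deps \
       gnupg curl wget xvfb unzip \
    && gpg --list-keys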
Apparently the Chrome .deb package won't work on Alpine, so you need Chromium instead. If you are already using ChromeDriver in the Java code, it will work without any further changes, as in my case.
FROM openjdk:8-jdk-alpine
RUN apk update && apk add --no-cache bash \
alsa-lib \
at-spi2-atk \
atk \
cairo \
cups-libs \
dbus-libs \
eudev-libs \
expat \
flac \
gdk-pixbuf \
glib \
libgcc \
libjpeg-turbo \
libpng \
libwebp \
libx11 \
libxcomposite \
libxdamage \
libxext \
libxfixes \
tzdata \
libexif \
udev \
xvfb \
zlib-dev \
chromium \
chromium-chromedriver \
&& rm -rf /var/cache/apk/* \
/usr/share/man \
/tmp/*
RUN mkdir -p /data && adduser -D chrome \
&& chown -R chrome:chrome /data
USER chrome
.
.
.
If you are going to create folders and/or add files, as in my case, just add USER root to make it work.
It will work the same as the openjdk:8 version.
Actually, for the Alpine version in the posted answer to work correctly, you have to add this to the code:
chromeOptions.setBinary("/usr/bin/chromium-browser");
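A quick way to confirm those paths inside the image before wiring them into the Java code (my-selenium-image is a hypothetical tag for the image built from the answer above):

docker run --rm --entrypoint sh my-selenium-image \
  -c 'command -v chromium-browser && command -v chromedriver && chromium-browser --version'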
FROM some-build:latest as build
COPY / /var/www/html
WORKDIR /var/www/html
RUN cd /var/www/html && composer install
FROM some-build2:latest as run
COPY --from=build /var/www/html /var/www/html
ENV PATH ${HOME}/local/bin:${PATH}:/home/site/wwwroot
RUN cd /var/www/html && \
npm install && \
npm run production
ENTRYPOINT ["/bin/init_container.sh"]
The run image contains an installed npm. Despite this, the npm install step returns the error: /bin/sh: 1: npm: not found
How is this possible? What am I doing wrong?
Edit:
In answer to @BMitch's comment: when I run the RUN image interactively, node is on the PATH inside the container and I can use it. The path is /root/local/bin. I've attached all the Dockerfiles.
I have 3 docker files:
APP
The one you've already seen before.
RUN
FROM php:7.2.5-apache
MAINTAINER Azure App Services Container Images <appsvc-images@microsoft.com>
COPY apache2.conf /bin/
COPY init_container.sh /bin/
RUN a2enmod rewrite expires include deflate
# install the PHP extensions we need
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libpng-dev \
libjpeg-dev \
libpq-dev \
libldap2-dev \
libldb-dev \
libicu-dev \
libgmp-dev \
mysql-client \
libmagickwand-dev \
openssh-server vim curl wget tcptraceroute \
&& chmod 755 /bin/init_container.sh \
&& echo "root:Docker!" | chpasswd \
&& echo "cd /home" >> /etc/bash.bashrc \
&& ln -s /usr/lib/x86_64-linux-gnu/libldap.so /usr/lib/libldap.so \
&& ln -s /usr/lib/x86_64-linux-gnu/liblber.so /usr/lib/liblber.so \
&& ln -s /usr/include/x86_64-linux-gnu/gmp.h /usr/include/gmp.h \
&& rm -rf /var/lib/apt/lists/* \
&& pecl install imagick-beta \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-configure mysqli --with-mysqli=mysqlnd \
&& docker-php-ext-install gd \
mysqli \
opcache \
pdo \
pdo_mysql \
pdo_pgsql \
pgsql \
ldap \
intl \
gmp \
zip \
bcmath \
mbstring \
pcntl \
xml \
xmlrpc \
&& docker-php-ext-enable imagick
###################
# Installing node #
###################
RUN apt-get update -yq && apt-get upgrade -yq && \
apt-get install -yq g++ libssl-dev apache2-utils curl git python make nano
# setting up npm for global installation without sudo
# http://stackoverflow.com/a/19379795/580268
RUN MODULES="local" && \
echo prefix = ~/$MODULES >> ~/.npmrc && \
echo "export PATH=\$HOME/$MODULES/bin:\$PATH" >> ~/.bashrc && \
. ~/.bashrc && \
mkdir ~/$MODULES && \
\
# install Node.js and npm
# https://gist.github.com/isaacs/579814#file-node-and-npm-in-30-seconds-sh
mkdir ~/node-latest-install && cd $_ && \
curl http://nodejs.org/dist/v8.11.3/node-v8.11.3.tar.gz | tar xz --strip-components=1 && \
./configure --prefix=~/$MODULES && \
make install && \
curl -L https://www.npmjs.org/install.sh | sh
# optional, check locations and packages are correct
# RUN which node; node -v; which npm; npm -v; \
# npm ls -g --depth=0
# Remove unnecessary packages
# RUN apt-get -yq purge g++ libssl-dev curl git python make nano
# RUN apt-get -yq autoremove
###################
RUN \
rm -f /var/log/apache2/* \
&& rmdir /var/lock/apache2 \
&& rmdir /var/run/apache2 \
&& rmdir /var/log/apache2 \
&& chmod 777 /var/log \
&& chmod 777 /var/run \
&& chmod 777 /var/lock \
&& chmod 777 /bin/init_container.sh \
&& cp /bin/apache2.conf /etc/apache2/apache2.conf \
&& rm -rf /var/www/html \
&& rm -rf /var/log/apache2 \
&& mkdir -p /home/LogFiles \
&& ln -s /home/LogFiles /var/log/apache2
RUN { \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=60'; \
echo 'opcache.fast_shutdown=1'; \
echo 'opcache.enable_cli=1'; \
} > /usr/local/etc/php/conf.d/opcache-recommended.ini
RUN { \
echo 'error_log=/var/log/apache2/php-error.log'; \
echo 'display_errors=Off'; \
echo 'log_errors=On'; \
echo 'display_startup_errors=Off'; \
echo 'date.timezone=UTC'; \
} > /usr/local/etc/php/conf.d/php.ini
COPY sshd_config /etc/ssh/
EXPOSE 2222 8080
ENV APACHE_RUN_USER www-data
ENV PHP_VERSION 7.2.5
ENV PORT 8080
ENV WEBSITE_ROLE_INSTANCE_ID localRoleInstance
ENV WEBSITE_INSTANCE_ID localInstance
ENV PATH ${PATH}:/home/site/wwwroot
ENTRYPOINT ["/bin/init_container.sh"]
BUILD
FROM composer:latest as composer
FROM php:7.2.5-apache as apache
COPY --from=composer /usr/bin/composer /usr/bin/composer
RUN apt-get update && \
apt-get install git zip unzip -y
Edit 2:
Importantly, if I remove the RUN npm... commands, the whole build succeeds and the resulting image contains npm and I can use it (I've verified this by running a container in interactive mode).
Edit 3:
Here's a much simpler reproduction that can be tried out instantly:
FROM alpine as img1
RUN echo "$HOME" > $HOME/test.txt
FROM alpine as img2
RUN cat $HOME/test.txt
The result is: cat: can't open '/root/test.txt': No such file or directory
Two issues are going on here.

The php:7.2.5-apache image won't have /root/local/bin in the path, and you did not add it to the path during your build. The npm commands work when you log in interactively most likely because the shell login scripts set up the environment; you would need to run those same setup scripts before any npm command, and within the same RUN instruction. To verify this for yourself, check your .bashrc for the variables or commands that set up the npm environment. You can also confirm that the environments differ by comparing the output of env in the interactive shell with the same command during your build; you should see two different PATH values if this is your issue. When I ran part of your run image, I saw the following in the .bashrc:
export PATH=$HOME/local/bin:$PATH
So you'll want to update the line in your Dockerfile for the run image:
ENV PATH /root/local/bin:${PATH}:/home/site/wwwroot
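Applied to the app Dockerfile from the question, the point is simply that the PATH fix has to be in place, fully expanded, before any RUN that calls npm. A sketch of the relevant stage, assuming ~/local/bin resolves to /root/local/bin as above:

FROM some-build2:latest as run
COPY --from=build /var/www/html /var/www/html
# ${HOME} is not an ENV-declared variable at this point, so it would likely expand to nothing; spell the path out
ENV PATH /root/local/bin:${PATH}:/home/site/wwwroot
RUN cd /var/www/html && \
    npm install && \
    npm run production
ENTRYPOINT ["/bin/init_container.sh"]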
Per your edit 3, that's an entirely different issue. You created a file in one new image, and then went back to the base image where the file doesn't exist. If you wanted to see the file in a multi-stage build, then you either need to copy it between the stages, or use the previous image as your "from".
FROM alpine as img1
RUN echo "$HOME" > $HOME/test.txt
FROM alpine as img2
COPY --from=img1 /root/test.txt /root/test.txt
RUN cat $HOME/test.txt
or
FROM alpine as img1
RUN echo "$HOME" > $HOME/test.txt
FROM img1 as img2
RUN cat $HOME/test.txt
I'm trying to reduce the Docker image size, but the Dockerfile is behaving strangely.
I concatenated the RUN commands to reduce the size of the image. When I build the Dockerfile below, it creates only 235MB.
FROM nginx:alpine
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl \
pcre-dev \
zlib-dev \
linux-headers \
curl \
gnupg \
libxslt-dev \
gd-dev \
perl-dev \
&& apk add --no-cache --virtual .libmodsecurity-deps \
pcre-dev \
libxml2-dev \
git \
libtool \
automake \
autoconf \
g++ \
flex \
bison \
yajl-dev \
git \
# Add runtime dependencies that should not be removed
&& apk add --no-cache \
doxygen \
geoip \
geoip-dev \
yajl \
libstdc++ \
sed \
# Installing ModSec Library version 3
&& echo "Installing ModSec Library" \
&& git clone -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity /opt/ModSecurity \
&& cd /opt/ModSecurity \
&& git submodule init \
&& git submodule update \
&& ./build.sh \
&& ./configure && make && make install \
&& echo "Finished Installing ModSec Library" \
# Installing ModSec - Nginx connector
&& cd /opt \
&& echo 'Installing ModSec - Nginx connector' \
&& git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git \
&& wget http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz \
&& tar zxvf nginx-$NGINX_VERSION.tar.gz \
# Adding Nginx Connector Module
&& cd /opt/nginx-$NGINX_VERSION \
&& ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx \
&& make modules \
&& cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules \
&& echo "Finished Installing ModSec - Nginx connector" \
# Begin installing ModSec OWASP Rules
&& echo "Begin installing ModSec OWASP Rules" \
&& mkdir /etc/nginx/modsec \
&& wget -P /etc/nginx/modsec/ https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended \
&& mv /etc/nginx/modsec/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf \
&& sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsec/modsecurity.conf \
# Fetching owasp-modsecurity-crs
&& cd /opt \
&& git clone -b v3.0/master https://github.com/SpiderLabs/owasp-modsecurity-crs \
&& mv owasp-modsecurity-crs/ /usr/local/ \
&& cp /usr/local/owasp-modsecurity-crs/crs-setup.conf.example /usr/local/owasp-modsecurity-crs/crs-setup.conf \
# Creating modsec file
&& echo 'Creating modsec file' \
&& echo -e '# From https://github.com/SpiderLabs/ModSecurity/blob/master/\n \
# modsecurity.conf-recommended\n \
# Edit to set SecRuleEngine On\n \
Include "/etc/nginx/modsec/modsecurity.conf"\n \
# OWASP CRS v3 rules\n \
Include "/usr/local/owasp-modsecurity-crs/crs-setup.conf"\n \
Include "/usr/local/owasp-modsecurity-crs/rules/*.conf"'\
>>/etc/nginx/modsec/main.conf \
&& chown nginx:nginx /etc/nginx/modsec/main.conf \
# Removing old Nginx conf files
&& rm -fr /etc/nginx/conf.d/ \
&& rm -fr /etc/nginx/nginx.conf \
&& chown -R nginx:nginx /usr/share/nginx \
# delete uneeded and clean up
&& apk del .build-deps \
&& apk del .libmodsecurity-deps \
&& rm -fr ModSecurity \
&& rm -fr ModSecurity-nginx \
&& rm -fr nginx-$NGINX_VERSION.tar.gz \
&& rm -fr nginx-$NGINX_VERSION
COPY conf/nginx.conf /etc/nginx
COPY conf/conf.d /etc/nginx/conf.d
COPY errors /usr/share/nginx/errors
WORKDIR /usr/share/nginx/html
CMD nginx -g 'daemon off;'
EXPOSE 80
Looking at docker history <imageId>, it shows that this RUN command adds around 855MB. Does anybody understand why it is behaving this way?
Any thoughts would be much appreciated; it is hard to debug by rebuilding the image every time.
I tried building it both ways and found not much difference.
Most of the disk space is consumed by /opt/ModSecurity.
Initially it was 74MB after the git clone.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
oldimage latest 924a8d4f941e 11 minutes ago 867MB
newimage latest d1ca029927c2 About an hour ago 867MB
nginx alpine ebe2c7c61055 6 days ago 18MB
However, after the complete build it has grown to ~650MB.
$ du -sh *
639.7M ModSecurity
408.0K ModSecurity-nginx
7.5M nginx-1.13.12
996.0K nginx-1.13.12.tar.gz
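To see for yourself which layer and which directories account for the extra space, something like this works (newimage:latest is the tag from the listing above):

docker history --no-trunc newimage:latest
docker run --rm --entrypoint sh newimage:latest -c 'du -sh /opt/* /usr/local/* 2>/dev/null'

Anything that still shows up in that du output was not removed inside the RUN layer that created it, and removing it in a later layer will not shrink the image.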