Split up Dockerfile into two images

I have a Dockerfile that includes e.g. apache, further installations, and my code, which is copied to /var/www/html to create a project. After I create the image locally, I export it as a .tar file and upload it to Portainer, which is my production environment. However, every time I want to update the version and the services that use my software, I have to upload the whole new image, which has a size of 800MB. Portainer also has multiple managers, which means I have to upload the image to each manager.
Because everything stays the same except my code, which is inserted by the copy step COPY HRmAppBare/ /var/www/html, I had the idea of creating two images: one image for all the installations (say 1.0-BaseInstall) and a second image (say 1.9-backend) that only holds my code. Then, for each version update, I would only have to upload the image with the new code and could refer to 1.0-BaseInstall via e.g. FROM 1.0-BaseInstall. If the base install changes (which happens rarely), I could just create a new image for it.
Because I could not find anything about this, I want to know whether this approach is applicable and, if so, how I have to build it.
#start with base Image from php
FROM php:7.3-apache
#install system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
vim \
cron \
libpq-dev \
libmcrypt-dev \
default-mysql-client \
zip \
unzip \
libzip-dev \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-install \
intl \
mbstring \
pcntl \
pdo_mysql \
opcache \
gettext \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& rm -rf /var/lib/apt/lists/*
#configure imap for mails
RUN apt-get update && \
apt-get install -y \
libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/*
RUN docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install -j$(nproc) imap
#install composer
#RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#change uid and gid of apache to docker user uid/gid, enable apache module rewrite
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data && a2enmod rewrite
#copy the source code (Can this be in a 2nd image?)
COPY HRmAppBare/ /var/www/html
#Update apache2.conf
RUN echo 'Alias ${COOKIE_PATH} "/var/www/html"' >> /etc/apache2/apache2.conf
RUN echo 'Alias ${COOKIE_PATH}/ "/var/www/html/"' >> /etc/apache2/apache2.conf
#change ownership of our applications
RUN chown -R www-data:www-data /var/www/html/
ENTRYPOINT [ "sh", "-c", "rm /var/www/html/app/tmp/cache/models/* && rm /var/www/html/app/tmp/cache/persistent/* && /var/www/html/app/Console/cake schema update -y && apache2-foreground"]
EXPOSE 80

You can break this into multiple Dockerfiles, and it would be viable.
If you have no other consumers of the base image, it may create confusion and add overhead to manage versions for both your base image and your application, but it is certainly viable.
It may be worth it to look into multi-stage builds.
https://github.com/docker/docker.github.io/blob/master/develop/develop-images/multistage-build.md
If the objects on the file system that Docker is about to produce are unchanged between builds, reusing a cache of a previous build on the host is a great time-saver. It makes building a new container really, really fast.
https://thenewstack.io/understanding-the-docker-cache-for-faster-builds/
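For example, ordering the instructions from least- to most-frequently changing means only the layers after the first changed instruction get rebuilt; here is a sketch based on the Dockerfile above (installation step shortened for illustration):
#these layers change rarely and stay cached between builds
FROM php:7.3-apache
RUN apt-get update && apt-get install -y libicu-dev \
&& rm -rf /var/lib/apt/lists/*
#the application code changes every release, so it comes last
COPY HRmAppBare/ /var/www/html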
I'm unsure what you mean when you say
Then, for each Version update, I only have to upload the code
I may be misunderstanding the phrase, but I would instead suggest doing all the builds on your build box and pushing the versioned image to a Docker repository for your production box to pull from when you're ready to grab the next version. You shouldn't need to upload your code anywhere, just the built image to whatever Docker repository you store your images on.
Edit: adding a link for creating your own Docker registry:
https://docs.docker.com/registry/deploying/
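For illustration, the release workflow with such a registry might look like this (registry.example.com and the image name are placeholders):
docker build -t registry.example.com/hrmapp:1.9-backend .
docker push registry.example.com/hrmapp:1.9-backend
#then, on each Portainer manager / production node:
docker pull registry.example.com/hrmapp:1.9-backend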
Edit 2: To better answer your question
FROM php:7.3-apache AS base
#...rest of your Dockerfile up to the COPY
FROM base
#copy the source code (Can this be in a 2nd image?)
COPY HRmAppBare/ /var/www/html
#Update apache2.conf
RUN echo 'Alias ${COOKIE_PATH} "/var/www/html"' >> /etc/apache2/apache2.conf
RUN echo 'Alias ${COOKIE_PATH}/ "/var/www/html/"' >> /etc/apache2/apache2.conf
#change ownership of our applications
RUN chown -R www-data:www-data /var/www/html/
ENTRYPOINT [ "sh", "-c", "rm /var/www/html/app/tmp/cache/models/* && rm /var/www/html/app/tmp/cache/persistent/* && /var/www/html/app/Console/cake schema update -y && apache2-foreground"]
EXPOSE 80
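If you would rather publish two separate images, as described in the question, a sketch of the split could look like this (image names are illustrative; prefix them with your registry):
#Dockerfile.base - rebuilt rarely, tagged e.g. hrmapp:1.0-BaseInstall
FROM php:7.3-apache
#...all installation steps from the question, up to (but not including) the COPY

#Dockerfile - rebuilt for every release, tagged e.g. hrmapp:1.9-backend
FROM hrmapp:1.0-BaseInstall
COPY HRmAppBare/ /var/www/html
#...plus the apache2.conf Alias lines, chown and ENTRYPOINT from the question
EXPOSE 80
Build the base once with docker build -f Dockerfile.base -t hrmapp:1.0-BaseInstall ., then docker build -t hrmapp:1.9-backend . for each release; only the second image has to be rebuilt and distributed for a code-only update.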

Related

Permissions in a Docker container only partly working with chown

I have a problem with access rights in a Docker container. I am copying a folder from the host into the Docker image at /var/www/html. This folder has a deeper folder structure. I want www-data, which runs Apache, to have access to the complete /var/www/html folder. I create the container with the following Dockerfile.
#start with base Image from php
FROM php:7.3-apache
#install system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
libpq-dev \
libmcrypt-dev \
mysql-client \
git \
zip \
unzip \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-install \
intl \
mbstring \
pcntl \
pdo_mysql \
pdo_pgsql \
pgsql \
opcache
# zip \
# mcrypt \
#configure imap for mails
RUN apt-get update && \
apt-get install -y \
libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/*
RUN docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install -j$(nproc) imap
#install mcrypt
RUN apt-get update \
&& apt-get install -y libmcrypt-dev \
&& rm -rf /var/lib/apt/lists/* \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt
#install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
#change uid and gid of apache to docker user uid/gid
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data
# enable apache module rewrite
RUN a2enmod rewrite
#COPY Data to html
COPY --chown=www-data:www-data AppBare/ /var/www/html
#change ownership of our applications
RUN chown -R www-data:www-data /var/www/html
#Copy file to start schema update on startup
ENTRYPOINT [ "sh", "-c", "/var/www/html/app/Console/cake schema update -y && /var/www/html/app/Console/cake migration && /usr/sbin/apachectl -D FOREGROUND"]
EXPOSE 80
After I create and start the container, I get the following error message when accessing a page served by the webserver. However, the site does load the images that were copied, so the user basically has access to e.g. images, CSS and so on.
SplFileInfo::openFile(/var/www/html/app/tmp/cache/models/demo_backend_cake_model_default_backend_dockertest_list):
failed to open stream: Permission denied
When I go into the container's console and reset the permissions with the chown command, the problem disappears, so the command itself must be right. Also, when I create a volume and mount the folder from the host to /var/www/html, everything works fine.
How can I give the user full access to the folder? I also tried granting the access before copying the data, but that doesn't work either.
About your last comment
The two files are created by the entrypoint command /var/www/html/app/Console/cake schema update -y, so it is executed by the root user. Is it possible to have this executed as www-data instead of root?
The answer is yes. You have to add the following line before your entrypoint:
USER www-data
This way, everything after this line will run as this user.
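Applied to the tail of the Dockerfile above, the suggestion looks roughly like this sketch:
COPY --chown=www-data:www-data AppBare/ /var/www/html
#switch users so the ENTRYPOINT (and the cache files it writes) runs as www-data instead of root
USER www-data
ENTRYPOINT [ "sh", "-c", "/var/www/html/app/Console/cake schema update -y && /var/www/html/app/Console/cake migration && /usr/sbin/apachectl -D FOREGROUND"]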

Dockerfile COPY command puts an empty file in its container when overwriting another file

When copying files from the host machine into the container, where a file already exists at the destination path, the copied file ends up empty.
I've attempted to copy the same files to a path with a different name, and this works fine.
The two lines from my Dockerfile where this issue happens are:
COPY conf/policy.xml /etc/ImageMagick-6/
COPY conf/000-default.conf /etc/apache2/sites-available/
Full dockerfile:
FROM php:7.3-apache
RUN docker-php-ext-install pdo_mysql && docker-php-ext-enable pdo_mysql
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN apt-get update && apt-get install -y \
git libmagick++-dev \
--no-install-recommends && \
git clone https://github.com/mkoppanen/imagick.git && \
cd imagick && git checkout master && phpize && ./configure && \
make && make install && \
docker-php-ext-enable imagick && \
cd ../ && rm -rf imagick && \
apt-get install -y ghostscript && rm -r /var/lib/apt/lists/*
RUN pecl install xdebug
RUN docker-php-ext-enable xdebug
COPY conf/php.ini /etc/php/7.3/fpm/conf.d/40-custom.ini
COPY conf/policy.xml /etc/ImageMagick-6/
COPY www/ /var/www/html/
COPY conf/000-default.conf /etc/apache2/sites-available/
COPY scripts/generate-ssl.sh /generate-ssl.sh
RUN chmod +x /generate-ssl.sh
RUN /bin/bash /generate-ssl.sh
EXPOSE 80 443
Is this intended behavior?
From the Docker documentation on the Dockerfile COPY instruction:
COPY <src> <dest>: If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it will be considered a directory and the contents of <src> will be written at <dest>/base(<src>).
For your case, try specifying the destination file:
COPY conf/policy.xml /etc/ImageMagick-6/policy.xml
COPY conf/000-default.conf /etc/apache2/sites-available/000-default.conf
Otherwise, I don't see anything wrong with your Dockerfile.

How to merge Docker image layers and slim down the image file

docker image inspect <name>
gives me 16GB and about 20 layers.
When I am logged in as root, this:
du -hs /
shows me just 2GB.
FYI, there are already multi-line RUN commands in the Dockerfile.
Can I squash all layers into one layer without touching the Dockerfile, rebuilding, etc.?
Or perhaps by adding an extra action to the Dockerfile which clears or improves the caching?
Dockerfile is
FROM heroku/heroku:18
ENV PYENV_ROOT="/pyenv"
ENV PATH="/pyenv/shims:/pyenv/bin:$PATH"
ENV PYTHON_VERSION 3.5.6
ENV GPG_KEY <value>
ENV PYTHONUNBUFFERED 1
ENV TERM xterm
ENV EDITOR vim
RUN apt-get update && apt-get install -y \
build-essential \
gdal-bin \
binutils \
iputils-ping \
libjpeg8 \
libproj-dev \
libjpeg8-dev \
libtiff-dev \
zlib1g-dev \
libfreetype6-dev \
liblcms2-dev \
libxml2-dev \
libxslt1-dev \
libssl-dev \
libncurses5-dev \
virtualenv \
python-pip \
python3-pip \
python-dev \
libmysqlclient-dev \
mysql-client-5.7 \
libpq-dev \
libcurl4-gnutls-dev \
libgnutls28-dev \
libbz2-dev \
tig \
git \
vim \
nano \
tmux \
tmuxinator \
fish \
sudo \
libnet-ifconfig-wrapper-perl \
ruby \
libssl-dev \
nodejs \
strace \
tcpdump \
# npm & grunt
&& curl -L https://npmjs.com/install.sh | sh \
&& npm install -g grunt-cli grunt \
# ruby & foreman
&& gem install foreman \
# installing pyenv
&& curl https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash
COPY . /app
COPY ./requirements /requirements
COPY ./requirements.txt /requirements.txt
COPY ./docker/docker_compose/django/foreman.sh /foreman.sh
COPY ./docker/docker_compose/django/Procfile /Procfile
COPY ./docker/docker_compose/django/entrypoint.sh /entrypoint.sh
# ADD sudoer user django with password django
RUN groupadd -r django -g 1000 && \
useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid 1000 --gid 1000 -r -g django django && \
usermod -a -G sudo django && \
chown -R django:django /app
COPY --chown=django:django ./docker/docker_compose/django/fish /home/django/.config/fish
COPY --chown=django:django ./docker/docker_compose/django/tmuxinator /home/django/.tmuxinator
COPY ./docker/docker_compose/django/fish /root/.config/fish
WORKDIR /app
RUN sed -i 's/\r//' /entrypoint.sh \
&& sed -i 's/\r//' /foreman.sh \
&& chmod +x /entrypoint.sh \
&& chown django /entrypoint.sh \
&& chmod +x /foreman.sh \
&& chown django /foreman.sh \
&& chown -R django:django /home/django/ \
&& pyenv install ${PYTHON_VERSION%%} \
&& mkdir -p /app/log \
&& pyenv global ${PYTHON_VERSION%%} \
&& pyenv rehash \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -U pip \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements.txt \
&& chown -R django:django /pyenv/ \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements/dev_requirements.txt
# this user receives ENVs from the top
USER django
ENTRYPOINT ["/entrypoint.sh"]
What I've tried so far:
The --squash option from the experimental mode of docker build is not really an option for me; that Dockerfile is one of several Dockerfiles used inside docker-compose.
I've also checked this:
https://github.com/jwilder/docker-squash
but it seems docker load cannot load a squashed image.
Also, that squash gives me 8GB (still far from the expected ~2GB):
docker save <image_id> | docker-squash -t latest_tiny | docker load
Update after the answers:
When I added this:
&& apt-get autoremove \ # ? to consider
&& apt-get clean \ # ? to consider
&& rm -rf /var/lib/apt/lists/*
to the apt-get command, and --no-cache-dir to each pip call, the result was 72GB (yes, even more: docker images shows 36GB before the pip command and 72GB as the final size).
My working directory is clean (regarding COPY), du -hs / (as root) still shows 2GB, and all images were removed before rebuilding.
Following Mihai's approach, I was able to slim the image down from 16GB to 9GB.
There is a simple trick to get rid of the intermediate layers. It will bring the size down as well, but by how much depends on how the image was built.
Create a Dockerfile like this:
FROM your_image as initial
FROM your_image_base
COPY --from=initial / /
your_image_base should be something like 'alpine': the smallest image that your image and its parents descend from.
Now build the image and check the history and size:
docker build -t your-image:2.0 .
docker image history your-image:2.0
docker image ls
This way you do create a new Dockerfile (if that is acceptable for your process) without touching the initial Dockerfile.
Let me know if this solves your issue.
UPDATE AFTER SEEING THE Dockerfile:
Maybe I missed it, but I don't see you cleaning up the apt-get cache after you perform the installations. Your big RUN command should end with && rm -rf /var/lib/apt/lists/* on the same line, so that the whole cache is not stored in the layer.
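A minimal sketch of that pattern (package list shortened here):
RUN apt-get update && apt-get install -y \
build-essential \
gdal-bin \
&& rm -rf /var/lib/apt/lists/*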
Definitely add && rm -rf /var/lib/apt/lists/* at the end of your main RUN command, like Mihai said. Another thing that may help (depending on how big your dependencies are) is installing with pip using the --no-cache-dir option. Also, make sure you understand the build context, and consider using either a .dockerignore or sending the context from another directory (it totally depends on how your directory is set up); see the sketch below.
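For example, a minimal .dockerignore (entries are illustrative and depend on your layout) together with the pip flag applied to one of the install lines from the question:
#.dockerignore - keep large or regenerable paths out of the build context
.git
**/__pycache__
*.log
#pip without its download cache
RUN ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install --no-cache-dir -r /requirements.txt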
I've also had luck exploring an image using dive. Honestly, this looks like a pretty big image, so I'm not sure how far you'll be able to slim it down.
To squash a Docker image without rebuilding it or manipulating the original Dockerfile, you can extend from your image and squash it:
docker build --squash -t your_image_squashed - <<< "FROM your_image"
It's very easy; just use
docker commit YOUR_CONTAINER_ID NEW_IMAGE_ID
Docker will throw away the intermediate layers; you lose the history, but the size is small.

Docker - Execute command after mounting a volume

I have the following Dockerfile for a PHP runtime based on the official php image.
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache \
&& php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/local/bin/composer
I am having trouble running composer install.
I am guessing that the Dockerfile runs before the volume is mounted, because I receive a composer.json file not found error if I add:
...
&& mv composer.phar /usr/local/bin/composer \
&& composer install
to the above.
But, adding the following property to docker-compose.yml:
command: sh -c "composer install && composer require drush/drush"
seems to terminate the container after the command finishes executing.
Is there a way to:
wait for a volume to become mounted
run composer install using the mounted composer.json file
have the container keep running afterwards?
I generally agree with Chris's answer for local development. I am going to offer something that combines it with a recent Docker feature, which may set a path for doing both local development and eventual production deployment with the same image.
Let's first build an image that contains the code and dependencies and can be used for either local development or deployment. Docker 17.05 introduced the multi-stage build feature, which we can take advantage of here: we first install all your Composer dependencies in a separate build stage, and then copy them into the final image without needing to add Composer to the final image itself. This might look like:
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor
This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.
Now, during development you have some options. Based on your docker-compose.yml command, it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239. Here, you just do docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependency build inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac).
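A minimal sketch of such a composer service (adapted from that gist; the image tag mirrors the command shown further below):
version: '3'
services:
  composer:
    image: composer:1.4
    volumes:
      - .:/app
With this in place, docker-compose run --rm composer install runs Composer inside a throwaway container against the mounted project.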
The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:
docker run --rm --interactive --tty \
--volume $PWD:/app:rw,cached \
--volume $SSH_AUTH_SOCK:/ssh-auth.sock \
--env SSH_AUTH_SOCK=/ssh-auth.sock \
--volume $COMPOSER_HOME:/composer \
composer:1.4 install --ignore-platform-reqs --no-scripts
To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:
Platform specific dependencies will be built correctly for the container (Linux architecture vs Windows or Mac).
No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.
I've been down this rabbit hole for five hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume.
#docker-compose.yml
volumes:
- .:/srv/app/
- /srv/app/vendor/
This maps the current project directory but excludes its vendor subdirectory: the second entry declares an anonymous volume at /srv/app/vendor/, so the bind mount does not shadow that path. Don't forget the trailing slash!
Now you can run composer install in the Dockerfile, and when Docker mounts your volume it will leave the vendor directory alone.
If this is for a general development environment, the intention is not really ideal, because it couples the application to the Docker configuration.
Just run composer install separately by some other means (there is an image available for this on Docker Hub, which lets you just run docker run -it --rm -v $(pwd):/app composer/composer install).
But yes, it is possible: you would need the last line in the Dockerfile to be bash -c "composer install && php-fpm".
wait for a volume to become mounted
No, volumes cannot be mounted during the docker build process, though you can copy the source code in.
run composer install using the mounted composer.json file
No, see the response above.
have the container keep running afterwards
Yes, you would need to execute php-fpm --nodaemonize (a long-running process, hence it won't terminate).
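Putting the two parts of this answer together, the tail of such a Dockerfile could be sketched as:
#install dependencies at container start (after the volume is mounted), then keep php-fpm in the foreground
CMD ["bash", "-c", "composer install && php-fpm --nodaemonize"]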
To execute a command after you have mounted a volume in a Docker container, assuming that you are fetching dependencies from a public repo:
docker run --interactive -t --privileged --volume $(pwd):/xyz composer /bin/sh -c 'composer install'
For fetching dependencies from a private git repo, you would need to copy/create SSH keys; I guess that should be out of scope for this question.

How to handle PHP project code in a Docker container

I ran into a kind of chicken-and-egg problem with my Docker setup. In my Dockerfile I install nginx, PHP and the needed configuration. I also install Composer there:
FROM ubuntu
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown -R www-data:www-data /var/www/
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Now, the next step would be to actually install all dependencies in the project directory via Composer. And this is where the trouble starts: since this is my development machine, I don't want to copy my local project files into the Docker container. Instead, I mounted them in my docker-compose.yml:
version: '3'
services:
web:
...
volumes:
- "./crm-application:/var/www/orocrm/"
I cannot put composer install in the Dockerfile, as the mounting of the directory (in my docker-compose file) takes place after the Dockerfile has run.
What is the best solution here? Another option that comes to mind is initially copying the files into the container and later using a file watcher to scp changed files into the container. Not a nice solution, though.
UPDATE: I would like to emphasize what my actual problem is: I am on my development machine, and I want to continuously update the code and have the changes mirrored instantly without building the image again. Therefore, COPY is not an option.
My suggestion is to copy your content into your container using the COPY command, like this:
FROM ubuntu
COPY ./crm-application /var/www/orocrm/
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown -R www-data:www-data /var/www/ && \
cd /var/www/orocrm && composer install
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Why? This way you don't need docker-compose or another system; you can run your single container on its own.
And even if you do use docker-compose, you are using a volume that allows you to update the code inside your container.
Notice that I've added composer install in the Dockerfile, because you already have the code inside the container at the moment of the build.
