Docker - Execute command after mounting a volume

I have the following Dockerfile for a PHP runtime based on the official php image.
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache \
&& php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/local/bin/composer
I am having trouble running composer install.
I am guessing that the Dockerfile runs before any volume is mounted, because I receive a composer.json file not found error if I add:
...
&& mv composer.phar /usr/local/bin/composer \
&& composer install
to the above.
But, adding the following property to docker-compose.yml:
command: sh -c "composer install && composer require drush/drush"
seems to terminate the container after the command finishes executing.
Is there a way to:
wait for a volume to become mounted,
run composer install using the mounted composer.json file, and
have the container keep running afterwards?

I generally agree with Chris's answer for local development. I am going to offer something that combines with a recent Docker feature that may set a path for doing both local development and eventual production deployment with the same image.
Let's start with an image that can be built to work both for local development and for deployment, containing the code and its dependencies. The latest Docker version (17.05) has a new multi-stage build feature that we can take advantage of. In this case we can first install all your Composer dependencies to a folder in the build context and then copy them to the final image, without needing to add Composer to the final image itself. This might look like:
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor
This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.
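If you want to try it, this image builds like any other (multi-stage builds require Docker 17.05 or newer; the tag name here is just an example):
docker build -t my-php-app .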
Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239. Here, you just run docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependencies built inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac).
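For reference, a hedged sketch of what such a composer service could look like in docker-compose.yml (this definition is an assumption modeled on that gist's description, not copied from it):
services:
  composer:
    image: composer
    working_dir: /app
    volumes:
      - .:/app
With this in place, docker-compose run --rm composer install runs Composer inside a Linux container against your mounted source tree.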
The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:
docker run --rm --interactive --tty \
--volume $PWD:/app:rw,cached \
--volume $SSH_AUTH_SOCK:/ssh-auth.sock \
--env SSH_AUTH_SOCK=/ssh-auth.sock \
--volume $COMPOSER_HOME:/composer \
composer:1.4 install --ignore-platform-reqs --no-scripts
To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:
Platform-specific dependencies will be built correctly for the container (Linux architecture vs. Windows or Mac).
No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.

I've been down this rabbit hole for 5 hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume.
# docker-compose.yml
volumes:
  - .:/srv/app/
  - /srv/app/vendor/
This maps the current project directory but excludes its vendor subdirectory. Don't forget the trailing slash!
So now you can easily run composer install in the Dockerfile, and when Docker mounts your volume the vendor directory is left alone: the second entry creates an anonymous volume at /srv/app/vendor/, which is initialized from the image's contents rather than shadowed by the bind mount.
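For illustration, a minimal sketch of the matching Dockerfile side, assuming Composer is available in the image (the paths are illustrative):
WORKDIR /srv/app
COPY composer.json composer.lock ./
RUN composer install --no-interaction
The build-time install bakes /srv/app/vendor/ into the image, and the anonymous volume declared above is populated from that image content at container start.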

If this is for a general development environment, then the approach is not really ideal, because it couples the application to the Docker configuration.
Just run composer install separately by some other means (there is an image available for this on Docker Hub, which lets you simply run docker run -it --rm -v $(pwd):/app composer/composer install).
But yes, it is possible: you would need the last line in the Dockerfile to be bash -c "composer install && php-fpm".
wait for a volume to become mounted
No, volumes cannot be mounted during the docker build process, though you can copy the source code in.
run composer install using the mounted composer.json file
No, see above response.
have the container keep running afterwards
Yes, you would need to execute php-fpm --nodaemonize (which is a long-running process, hence the container won't terminate).
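A minimal sketch of what that last line could look like, assuming composer.json is mounted at the image's WORKDIR when the container starts:
CMD ["bash", "-c", "composer install && exec php-fpm --nodaemonize"]
Using exec hands PID 1 over to php-fpm, so the container keeps running and receives stop signals correctly.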

To execute a command after you have mounted a volume on a Docker container, assuming that you are fetching dependencies from a public repo:
docker run --interactive -t --privileged --volume ${pwd}:/xyz composer /bin/sh -c 'composer install'
For fetching dependencies from a private git repo, you would need to copy/create ssh keys, I guess that should be out of scope of this question.

Related

Split up Dockerfile into two Images

I have a Dockerfile including e.g. Apache, further installations, and my code that is copied to /var/www/html to create a project. After I create the image locally, I export it as a .tar file and upload the image to Portainer. Portainer is my production environment. However, every time I want to update the version and the services that use my software, I have to upload the whole new image, which has a size of 800MB. Furthermore, Portainer has multiple managers, which means I have to upload the image to each manager.
Because everything stays the same except my code, which is inserted by the copy step COPY HRmAppBare/ /var/www/html, I had the idea of creating two images: one image for all the installations (let us say 1.0-BaseInstall) and a second image (let us say 1.9-backend) that only holds my code. Then, for each version update, I would only have to upload the image with the new code, and could refer to the 1.0-BaseInstall with e.g. FROM 1.0-BaseInstall. If the BaseInstall changes (which happens really rarely), I could just create a new image for that.
Because I could not find anything about this, I want to know whether this approach is feasible and, if so, how to build it.
#start with base Image from php
FROM php:7.3-apache
#install system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
vim \
cron \
libpq-dev \
libmcrypt-dev \
default-mysql-client \
zip \
unzip \
libzip-dev \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-install \
intl \
mbstring \
pcntl \
pdo_mysql \
opcache \
gettext \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt \
&& rm -rf /var/lib/apt/lists/*
#configure imap for mails
RUN apt-get update && \
apt-get install -y \
libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/*
RUN docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install -j$(nproc) imap
#install composer
#RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#change uid and gid of apache to docker user uid/gid, enable apache module rewrite
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data && a2enmod rewrite
#copy the source code (Can this be in a 2nd image?)
COPY HRmAppBare/ /var/www/html
#Update apache2.conf
RUN echo 'Alias ${COOKIE_PATH} "/var/www/html"' >> /etc/apache2/apache2.conf
RUN echo 'Alias ${COOKIE_PATH}/ "/var/www/html/"' >> /etc/apache2/apache2.conf
#change ownership of our applications
RUN chown -R www-data:www-data /var/www/html/
ENTRYPOINT [ "sh", "-c", "rm /var/www/html/app/tmp/cache/models/* && rm /var/www/html/app/tmp/cache/persistent/* && /var/www/html/app/Console/cake schema update -y && apache2-foreground"]
EXPOSE 80
You can break this into multiple Dockerfiles and it would be viable.
If you have no other consumers of the base image, it may create confusion and add overhead to manage versions for both your base image and your application, but it is certainly viable.
It may be worth it to look into multi-stage builds.
https://github.com/docker/docker.github.io/blob/master/develop/develop-images/multistage-build.md
If the objects on the file system that Docker is about to produce are unchanged between builds, reusing a cache of a previous build on the host is a great time-saver. It makes building a new container really, really fast.
https://thenewstack.io/understanding-the-docker-cache-for-faster-builds/
I'm unsure about what you mean when you say
Then, for each Version update, I only have to upload the code
I may be misunderstanding the phrase, but I would instead suggest having all of the builds done on your build box and pushing the versioned image to a Docker repo for your production box to pull from when you're ready to grab the next version. You shouldn't need to upload your code anywhere; just push the built image to whatever Docker repo you store your images on.
Edit: Add in link for creating your own docker repo
https://docs.docker.com/registry/deploying/
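A sketch of that build-push-pull flow, with a hypothetical registry host and tag:
docker build -t registry.example.com/hrmapp:1.9 .
docker push registry.example.com/hrmapp:1.9
# on each production manager:
docker pull registry.example.com/hrmapp:1.9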
Edit 2: To better answer your question
FROM php:7.3-apache AS base
# ...rest of your Dockerfile up to the COPY
FROM base
#copy the source code (Can this be in a 2nd image?)
COPY HRmAppBare/ /var/www/html
#Update apache2.conf
RUN echo 'Alias ${COOKIE_PATH} "/var/www/html"' >> /etc/apache2/apache2.conf
RUN echo 'Alias ${COOKIE_PATH}/ "/var/www/html/"' >> /etc/apache2/apache2.conf
#change ownership of our applications
RUN chown -R www-data:www-data /var/www/html/
ENTRYPOINT [ "sh", "-c", "rm /var/www/html/app/tmp/cache/models/* && rm /var/www/html/app/tmp/cache/persistent/* && /var/www/html/app/Console/cake schema update -y && apache2-foreground"]
EXPOSE 80
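If you would rather keep two literal files instead of one multi-stage Dockerfile, a sketch of that split (the image names and tags here are hypothetical):
# Dockerfile.base -- rebuilt rarely; contains everything from FROM php:7.3-apache
# through the usermod/a2enmod line. Build it with:
#   docker build -f Dockerfile.base -t hrmapp-base:1.0 .

# Dockerfile -- rebuilt for every code release:
FROM hrmapp-base:1.0
COPY HRmAppBare/ /var/www/html
RUN chown -R www-data:www-data /var/www/html/
# ...plus the Alias and ENTRYPOINT lines from the original file
EXPOSE 80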

Dockerfile cannot find executable script (no such file or directory)

I'm writing a Dockerfile in order to create an image for a web server (a Shiny server, more precisely). It works well, but it depends on a huge database folder (db/) that is not distributed with the package, so I want to do all this preprocessing while creating the image, by running the corresponding script in the Dockerfile.
I expected this to be simple, but I'm struggling figuring out where my files are being located within the image.
This repo has the following structure:
Dockerfile
preprocessing_files
configuration_files
app/
    application_files
    db/
        processed_files
So that app/db/ does not exist, but is created and filled with files when preprocessing_files are run.
The Dockerfile is the following:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxml2-dev \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
RUN R -e "install.packages(c('shiny', 'flexdashboard','rmarkdown','tidyverse','plotly','DT','drc','gridExtra','fitdistrplus'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
COPY /app/db /srv/shiny-server/app/
# Make the ShinyApp available at port 80
EXPOSE 80
CMD ["/usr/bin/shiny-server"]
The file above works well if the preprocessing has been run in advance, so that app/application_files can successfully read app/db/processed_files. How could this script be run in the Dockerfile? To me the intuitive solution would be simply to write:
RUN bash -c "preprocessing.sh"
Before the COPY instructions, but then preprocessing_files are not found. If the above instruction is written below the COPY and also WORKDIR app/, the same error happens. I cannot understand why.
You cannot execute code on the host machine from a Dockerfile; the RUN command executes inside the container being built. You can either:
Copy preprocessing_files into the image and run preprocessing.sh inside the container at build time (this will increase the size of the image), as sketched below, or
Create a makefile/build.sh script which launches preprocessing.sh on the host before executing docker build.
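A sketch of the first option, assuming preprocessing.sh reads its inputs from the directory it is copied into and writes the db/ folder next to them (the destination path is an assumption; adjust it to wherever the app expects the database):
# run the preprocessing at build time, inside the image
COPY preprocessing_files /preprocessing/
WORKDIR /preprocessing
RUN bash preprocessing.sh && cp -r db/ /srv/shiny-server/db/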

running a phpUnit test within a docker container

I want to run a specific version of PHPUnit WITHIN a Docker container. This container will use a specific version of PHP, i.e.
php:5.6-apache
It's a Laravel application. I have installed PHPUnit via Composer on the host files and then used the volume option to make it available in the container.
My composer.json file has the following entry:
"require-dev": {
"phpunit/phpunit": "^5.0"
},
This is my docker run command to run the test on my testdev container:
docker run --rm -it -v ~/Users/mow/Documents/devFolder/testdev:/app testdev_php "php ./vendor/bin/phpunit"
This returns the error:
exec: fatal: unable to exec php ./vendor/bin/phpunit: No such file or directory
I am unclear why it says this, because the vendor directory is at the root of my site directory.
This is my Dockerfile:
FROM php:5.6-apache
ENV S6_OVERLAY_VERSION 1.11.0.1
RUN apt-get update && apt-get install -y \
libldap2-dev \
git \
--no-install-recommends \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-install ldap \
&& docker-php-ext-install mysqli pdo pdo_mysql
#install xdebug
RUN git clone https://github.com/xdebug/xdebug.git \
&& cd xdebug \
&& git checkout tags/XDEBUG_2_5_5 \
&& phpize \
&& ./configure --enable-xdebug \
&& make \
&& make install
RUN a2enmod rewrite
COPY ./docker/rootfs /
COPY . /app
WORKDIR /app
ENTRYPOINT ["/init"]
I guess the real question is this: what is the correct way to run a PHPUnit test within a Docker container so that it's subject to the PHP version within that container?
In the entrypoint, you need to run composer install to install the requisite packages so that they are available in the Docker container.
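A hedged sketch of such an entrypoint, assuming Composer is available in the image and using a plain shell wrapper instead of the s6 /init shown above:
#!/bin/sh
# docker-entrypoint.sh: install dependencies on startup, then run the given command
set -e
composer install --no-interaction
exec "$@"
In the Dockerfile you would then COPY this script in and set ENTRYPOINT ["/docker-entrypoint.sh"], so vendor/ exists before any phpunit command runs.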
Without a Dockerfile, use this code in Windows PowerShell:
docker run --rm -v ${pwd}:/app composer:latest require --dev phpunit/phpunit:^8
This creates a Composer container to install PHPUnit, then removes the container.
Note: in the Windows command prompt use %cd%, in Windows PowerShell ${pwd}, and in Linux $PWD.
Then create a new PHP container, mounting all the files including the PHPUnit framework:
docker run -d -p 80:80 --name my-php-apache -v ${pwd}:/var/www/html php:7.4.0-apache
To activate the autoload file and run the tests, please visit this page.
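Once that container is running, a hedged way to execute the tests inside it (assuming phpunit.xml sits in the project root, which is the image's /var/www/html working directory):
docker exec my-php-apache ./vendor/bin/phpunit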

How to handle PHP project code in docker container

I ran into kind of a chicken-and-egg problem with my Docker setup. In my Dockerfile I install nginx, PHP, and the needed configuration. I also install Composer there:
FROM ubuntu
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin && \
chown -R www-data:www-data /var/www/
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Now, the next step would be to actually install all dependencies in the project directory via Composer. And this is where the trouble starts: as this is my development machine, I don't want to copy my local project files over to the Docker container. Instead, I mounted them in my docker-compose.yml:
version: '3'
services:
  web:
    ...
    volumes:
      - "./crm-application:/var/www/orocrm/"
I cannot put composer install in the Dockerfile, as the mounting of the directory (in my docker-compose file) takes place after the image is built.
What is the best solution here? Another option which comes to my mind is initially copying the files into the container and later on using a filewatcher to scp the changed files into the container. Not a nice solution, though.
UPDATE: I would like to emphasize what my actual problem is: I am on my development machine and I want to continuously update the code and have the changes mirrored instantly without building the image once again. Therefore, COPY is not an option.
My suggestion is to copy your content into your container using the COPY command, like this:
FROM ubuntu
COPY ./crm-application /var/www/orocrm/
RUN apt-get update && apt-get install -y \
curl \
nginx \
nodejs \
php7.0-fpm \
php-intl \
php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown -R www-data:www-data /var/www/ && \
cd /var/www/orocrm && composer install
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Why? This way you don't need to use docker-compose or another system; you can run your single container on its own.
Even if you do want to use docker-compose, you're already using a volume, which allows you to update the code inside your container.
Notice that I've added composer install in the Dockerfile because you already have the code inside the container at the moment of the build.

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says:
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your dockerfile to build an image so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
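A sketch of such a bootstrap script, reusing the in-container path guessed above:
#!/bin/sh
# copy the edited spec file from the mounted directory back to its home location,
# then run the demo as the original CMD did
cp /tmp/context.pbtxt /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh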
