How to handle PHP project code in a Docker container

I ran into a kind of chicken-and-egg problem with my Docker setup. In my Dockerfile I install nginx, PHP and the needed configuration. I also install Composer there:
FROM ubuntu
RUN apt-get update && apt-get install -y \
    curl \
    nginx \
    nodejs \
    php7.0-fpm \
    php-intl \
    php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
    echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    chown -R www-data:www-data /var/www/
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Now, the next step would be to actually install all dependencies in the project directory via Composer. And this is where the trouble starts: as this is my development machine, I don't want to copy my local project files over to the Docker container. Instead, I mounted the project directory in my docker-compose.yml:
version: '3'
services:
  web:
    ...
    volumes:
      - "./crm-application:/var/www/orocrm/"
I cannot put composer install in the Dockerfile, as the mounting of the directory (in my docker-compose file) takes place after the Dockerfile has run.
What is the best solution here? Another option that comes to mind is to initially copy the files into the container and later use a file watcher to scp the changed files into the container. Not a nice solution, though.
UPDATE: I would like to emphasize what my actual problem is: I am on my development machine and I want to continuously update the code and have the changes mirrored instantly without building the image again. Therefore, COPY is not an option.
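One way to get that, sketched under the assumption that the compose service is named web (as above) and that the composer binary is installed in the image, is to keep the bind mount and run Composer inside the already-running container instead of at build time:
docker-compose up -d web
docker-compose exec web composer install --working-dir=/var/www/orocrm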

My suggestion is to copy your content into your container using the COPY command, like this:
FROM ubuntu
COPY ./crm-application /var/www/orocrm/
RUN apt-get update && apt-get install -y \
    curl \
    nginx \
    nodejs \
    php7.0-fpm \
    php-intl \
    php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
    echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    chown -R www-data:www-data /var/www/ && \
    cd /var/www/orocrm && composer install
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Why? This way you don't need to use docker-compose or another system; you'll be able to run your single container on its own.
Even if you want to use docker-compose, you can still add a volume that lets you update the code inside your container.
Notice that I've added composer install in the Dockerfile because you already have the code inside the container at build time.
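With the code baked in at build time, a plain build and run is enough; for example (assuming the nginx vhost listens on port 80):
docker build -t orocrm .
docker run -p 8080:80 orocrm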

Related

How does this Dockerfile actually run logstash without an entrypoint or cmd?

Just doing a container start on this official Logstash Docker image makes Logstash run properly, given the right config.
It does not have an entrypoint or cmd, or anything of the sort, though. I am also not issuing one in the start command. So how is Logstash actually getting executed in this case?
I need to know because I need to edit the command for other reasons. We're working on running it in Kubernetes but are just testing with local Docker for now.
https://github.com/elastic/logstash/blob/7.15/Dockerfile
Copied for easy reference:
FROM ubuntu:bionic
RUN apt-get update && \
    apt-get install -y zlib1g-dev build-essential vim rake git curl libssl-dev libreadline-dev libyaml-dev \
      libxml2-dev libxslt-dev openjdk-11-jdk-headless curl iputils-ping netcat && \
    apt-get clean
WORKDIR /root
RUN adduser --disabled-password --gecos "" --home /home/logstash logstash && \
    mkdir -p /usr/local/share/ruby-build && \
    mkdir -p /opt/logstash && \
    mkdir -p /opt/logstash/data && \
    mkdir -p /mnt/host && \
    chown logstash:logstash /opt/logstash
USER logstash
WORKDIR /home/logstash
# used by the purge policy
LABEL retention="keep"
# Setup gradle wrapper. When running any `gradle` command, a `settings.gradle` is expected (and will soon be required).
# This section adds the gradle wrapper, `settings.gradle` and sets the permissions (setting the user to root for `chown`
# and working directory to allow this and then reverts back to the previous working directory and user.
COPY --chown=logstash:logstash gradlew /opt/logstash/gradlew
COPY --chown=logstash:logstash gradle/wrapper /opt/logstash/gradle/wrapper
COPY --chown=logstash:logstash settings.gradle /opt/logstash/settings.gradle
WORKDIR /opt/logstash
RUN for iter in `seq 1 10`; do ./gradlew wrapper --warning-mode all && exit_code=0 && break || exit_code=$? && echo "gradlew error: retry $iter in 10s" && sleep 10; done; exit $exit_code
WORKDIR /home/logstash
ADD versions.yml /opt/logstash/versions.yml
ADD LICENSE.txt /opt/logstash/LICENSE.txt
ADD NOTICE.TXT /opt/logstash/NOTICE.TXT
ADD licenses /opt/logstash/licenses
ADD CONTRIBUTORS /opt/logstash/CONTRIBUTORS
ADD Gemfile.template Gemfile.jruby-2.5.lock.* /opt/logstash/
ADD Rakefile /opt/logstash/Rakefile
ADD build.gradle /opt/logstash/build.gradle
ADD rubyUtils.gradle /opt/logstash/rubyUtils.gradle
ADD rakelib /opt/logstash/rakelib
ADD config /opt/logstash/config
ADD spec /opt/logstash/spec
ADD qa /opt/logstash/qa
ADD lib /opt/logstash/lib
ADD pkg /opt/logstash/pkg
ADD tools /opt/logstash/tools
ADD logstash-core /opt/logstash/logstash-core
ADD logstash-core-plugin-api /opt/logstash/logstash-core-plugin-api
ADD bin /opt/logstash/bin
ADD modules /opt/logstash/modules
ADD x-pack /opt/logstash/x-pack
ADD ci /opt/logstash/ci
USER root
RUN rm -rf build && \
    mkdir -p build && \
    chown -R logstash:logstash /opt/logstash
USER logstash
WORKDIR /opt/logstash
LABEL retention="prune"
If you look at the final layer of the image here, it looks like there is an ENTRYPOINT ["/usr/local/bin/docker-entrypoint"]. The Dockerfile you've linked might not be the one used to build the image.
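You can verify this without the Dockerfile: docker inspect prints the entrypoint baked into the published image (the tag below is just an example):
docker pull docker.elastic.co/logstash/logstash:7.15.0
docker inspect --format '{{json .Config.Entrypoint}}' docker.elastic.co/logstash/logstash:7.15.0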

Copy folder from Dockerfile to host [duplicate]

This question already has answers here: Docker: Copying files from Docker container to host (27 answers). Closed 1 year ago.
I have a Dockerfile:
FROM ubuntu:20.04
################################
### INSTALL Ubuntu build tools and prerequisites
################################
# Install build base
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    subversion \
    sharutils \
    vim \
    asciidoc \
    binutils \
    bison \
    flex \
    texinfo \
    gawk \
    help2man \
    intltool \
    libelf-dev \
    zlib1g-dev \
    libncurses5-dev \
    ncurses-term \
    libssl-dev \
    python2.7-dev \
    unzip \
    wget \
    rsync \
    gettext \
    xsltproc && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
ARG FORCE_UNSAFE_CONFIGURE=1
RUN git clone https://git.openwrt.org/openwrt/openwrt.git
WORKDIR /openwrt
RUN ./scripts/feeds update -a && ./scripts/feeds install -a
COPY .config /openwrt/.config
RUN mkdir files
WORKDIR /files
RUN mkdir etc
WORKDIR /etc
RUN mkdir uci-defaults
WORKDIR /uci-defaults
COPY xx_custom /openwrt/files/etc/uci-defaults/xx_custom
WORKDIR /openwrt
RUN make -j 4
RUN ls /openwrt/bin/targets/ramips/mt76x8
WORKDIR /root
CMD ["bash"]
I want to copy all the files inside the folder mt76x8 to the host. I want to do that from the Dockerfile, so that when I run the container I get the generated files on my host.
How can I do that?
You can use a volume mount to access the Docker-generated artifacts on the host machine.
You can also run the docker cp command to copy the files to the host machine.
If you don't want to use a docker command, as mentioned, the only option is to use a volume.
You can also use docker create, once the Docker image is ready, to create the writable container layer and copy the data out, as sketched below.
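A sketch of the docker create variant (the image and container names are placeholders):
docker create --name temp my-openwrt-image
docker cp temp:/openwrt/bin/targets/ramips/mt76x8 ./mt76x8
docker rm temp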
You have two choices.
Use Docker volumes to map the /openwrt/bin/targets/ramips/mt76x8 folder when you are running the container, i.e. docker run -v {VolumeName}:/openwrt/bin/targets/ramips/mt76x8. All of the files in the mt76x8 folder will then be available in the volume folder. If you are using Linux, you will find the Docker volumes in /var/lib/docker/volumes/.
You can use the docker cp command to copy data from the container to the host machine. Here is an example, using the path from the question (the container name is a placeholder):
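docker cp my-openwrt-container:/openwrt/bin/targets/ramips/mt76x8 ./mt76x8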

Running a PHPUnit test within a Docker container

I want to run a specific version of PHPUnit WITHIN a Docker container. This container will use a specific version of PHP, i.e.
php:5.6-apache
It's a Laravel application. I have installed PHPUnit via Composer on the host files and then used the volume command to transfer this to the container.
My composer.json file has the following entry:
"require-dev": {
"phpunit/phpunit": "^5.0"
},
This is my docker run command to run the test on my testdev container:
docker run --rm -it -v ~/Users/mow/Documents/devFolder/testdev:/app testdev_php "php ./vendor/bin/phpunit"
This returns the error:
exec: fatal: unable to exec php ./vendor/bin/phpunit: No such file or directory
I am unclear why it says this, because the vendor directory is at the root of my site directory.
This is my Dockerfile:
FROM php:5.6-apache
ENV S6_OVERLAY_VERSION 1.11.0.1
RUN apt-get update && apt-get install -y \
    libldap2-dev \
    git \
    --no-install-recommends \
    && rm -r /var/lib/apt/lists/* \
    && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
    && docker-php-ext-install ldap \
    && docker-php-ext-install mysqli pdo pdo_mysql
# Install Xdebug
RUN git clone https://github.com/xdebug/xdebug.git \
    && cd xdebug \
    && git checkout tags/XDEBUG_2_5_5 \
    && phpize \
    && ./configure --enable-xdebug \
    && make \
    && make install
RUN a2enmod rewrite
COPY ./docker/rootfs /
COPY . /app
WORKDIR /app
ENTRYPOINT ["/init"]
I guess the real question is this: what is the correct way to run a PHPUnit test within a Docker container so that it's subject to the PHP version within that container?
In the entrypoint, you need to run composer install to install the requisite packages so that they are available in the Docker container.
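A minimal sketch of such an entrypoint script, assuming a hypothetical docker-entrypoint.sh wired in via ENTRYPOINT (the Dockerfile above actually uses an s6-style /init, so treat this as illustrative only):
#!/bin/sh
# Install dependencies against the (possibly mounted) code, then hand off.
set -e
composer install --no-interaction
# Run whatever command the container was started with, e.g. phpunit.
exec "$@"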
Without a Dockerfile, use this code in Windows PowerShell:
docker run --rm -v ${pwd}:/app composer:latest require --dev phpunit/phpunit:^8
This creates a Composer container to install PHPUnit, then removes the container.
Note: use %cd% in the Windows command prompt, ${pwd} in Windows PowerShell, and $PWD on Linux.
Then create a new PHP container, copying in all files including the PHPUnit framework:
docker run -d -p 80:80 --name my-php-apache -v ${pwd}:/var/www/html php:7.4.0-apache
To activate the autoload file and run the tests, please visit this page.
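To actually execute the tests inside that container, something like this should work (assuming PHPUnit was installed into vendor/ and the tests live in tests/):
docker exec -it my-php-apache php vendor/bin/phpunit --bootstrap vendor/autoload.php tests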

Docker - Execute command after mounting a volume

I have the following Dockerfile for a PHP runtime based on the official php image.
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng12-dev \
    zip \
    unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache \
    && php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php \
    && php -r "unlink('composer-setup.php');" \
    && mv composer.phar /usr/local/bin/composer
I am having trouble running composer install.
I am guessing that the Dockerfile runs before a volume is mounted because I receive a composer.json file not found error if adding:
...
    && mv composer.phar /usr/local/bin/composer \
    && composer install
to the above.
But, adding the following property to docker-compose.yml:
command: sh -c "composer install && composer require drush/drush"
seems to terminate the container after the command finishes executing.
Is there a way to:
wait for a volume to become mounted,
run composer install using the mounted composer.json file,
and have the container keep running afterwards?
I generally agree with Chris's answer for local development. I am going to offer something that, combined with a recent Docker feature, may set a path toward doing both local development and eventual production deployment with the same image.
Let's first start with the image that we can build in a manner that can be used for either local development or deployment somewhere that contains the code and dependencies. In the latest Docker version (17.05) there is a new multi-stage build feature that we can take advantage of. In this case we can first install all your Composer dependencies to a folder in the build context and then later copy them to the final image without needing to add Composer to the final image. This might look like:
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng12-dev \
    zip \
    unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor
This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.
Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239. Here, you just do docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependencies built inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac). A sketch of such a service follows.
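A minimal version (the service name and mount path are assumptions, not taken from the gist):
# docker-compose.yml
services:
  composer:
    image: composer:1.4
    volumes:
      - .:/app
    working_dir: /app
With this in place, docker-compose run --rm composer install writes vendor/ back through the mount.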
The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:
docker run --rm --interactive --tty \
    --volume $PWD:/app:rw,cached \
    --volume $SSH_AUTH_SOCK:/ssh-auth.sock \
    --env SSH_AUTH_SOCK=/ssh-auth.sock \
    --volume $COMPOSER_HOME:/composer \
    composer:1.4 install --ignore-platform-reqs --no-scripts
To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:
Platform specific dependencies will be built correctly for the container (Linux architecture vs Windows or Mac).
No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.
I've been down this rabbit hole for 5 hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume.
# docker-compose.yml
volumes:
  - .:/srv/app/
  - /srv/app/vendor/
This will map the current project directory but exclude its vendor subdirectory. Don't forget the trailing slash!
Now you can easily run composer install in the Dockerfile, and when Docker mounts your volume it will leave the vendor directory alone; see the sketch below.
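The Dockerfile side then only needs the manifest files before installing; a minimal sketch, assuming Composer is already on the image's PATH and the app lives in /srv/app:
COPY composer.json composer.lock /srv/app/
WORKDIR /srv/app
RUN composer install --no-interaction --no-scripts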
If this is for a general development environment, then the intention is not really ideal, because it couples the application to the Docker configuration.
Just run composer install separately by some other means (there is an image available for this on Docker Hub, which allows you to just do docker run -it --rm -v $(pwd):/app composer/composer install).
But yes, it is possible; you would need the last line in the Dockerfile to be bash -c "composer install && php-fpm".
wait for a volume to become mounted
No, volumes are not able to be mounted during a docker build process. Though you can copy the source code in.
run composer install using the mounted composer.json file
No, see above response.
have the container keep running after
Yes, you would need to execute php-fpm --nodaemonize (which is a long-running process, hence it won't terminate).
To execute a command after you have mounted a volume on a Docker container, assuming you are fetching dependencies from a public repo:
docker run --interactive -t --privileged --volume ${pwd}:/xyz composer /bin/sh -c 'composer install'
For fetching dependencies from a private git repo, you would need to copy/create ssh keys, I guess that should be out of scope of this question.

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either program, SyntaxNet or Docker. On the GitHub SyntaxNet page it says
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
    && cd $SYNTAXNETDIR \
    && apt-get update \
    && apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
    && pip install --upgrade pip \
    && pip install -U protobuf==3.0.0b2 \
    && pip install asciitree \
    && pip install numpy \
    && wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
    && chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
    && ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
    && git clone --recursive https://github.com/tensorflow/models.git \
    && cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
    && echo "\n\n\n" | ./configure \
    && apt-get autoremove -y \
    && apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
    && bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v $PWD:/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is, given the info you've provided above -- I can't get your Dockerfile to build an image, so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location, as sketched below. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
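A minimal bootstrap sketch along those lines, assuming the guessed in-container path from above:
#!/bin/sh
# Overwrite the baked-in spec with the copy mounted from the host, then run the demo.
cp /tmp/context.pbtxt /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh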
