Dockerfile VOLUME not visible on host - docker

In the WordPress Dockerfile, there's a VOLUME /var/www/html statement. If I understand correctly, this means that the WordPress files (in /var/www/html) should be mapped to the directory on my host containing the docker-compose.yml BUT this is not happening. Do you know why?
I created my own WordPress Dockerfile that extends the original WordPress Dockerfile where you'll find said VOLUME /var/www/html statement on line 44 (https://github.com/docker-library/wordpress/blob/b3739870faafe1886544ddda7d2f2a88882eeb31/php7.2/apache/Dockerfile).
I even tried to add the VOLUME /var/www/html statement at the bottom of my Dockerfile, as you can see below. I added it just in case, but I don't think anything is going wrong in there.
FROM wordpress:4.9.8-php7.2-apache

##########
# XDebug #
##########
# Install
RUN pecl install xdebug-2.6.1; \
    docker-php-ext-enable xdebug
# Configure
RUN echo "error_reporting = E_ALL" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_startup_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.idekey=\"PHPSTORM\"" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_port=9000" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_host=docker.for.win.localhost" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
#RUN echo "xdebug.remote_autostart=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini

###########
# PHPUnit #
###########
RUN apt-get update; \
    apt-get install -y wget
RUN wget https://phar.phpunit.de/phpunit-7.4.phar; \
    chmod +x phpunit-7.4.phar; \
    mv phpunit-7.4.phar /usr/local/bin/phpunit
RUN phpunit --version

###################
# PHP Codesniffer #
###################
RUN curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar; \
    mv phpcs.phar /usr/local/bin/phpcs; \
    chmod +x /usr/local/bin/phpcs

############
# Composer #
############
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"; \
    php -r "if (hash_file('sha384', 'composer-setup.php') === '93b54496392c062774670ac18b134c3b3a95e5a5e5c8f1a9f115f203b75bf9a129d5daa8ba6a13e2cc8a1da0806388a8') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"; \
    php composer-setup.php; \
    php -r "unlink('composer-setup.php');"; \
    mv composer.phar /usr/local/bin/composer

##################
# Install Nodejs #
##################
RUN apt-get install -y gnupg2; \
    curl -sL https://deb.nodesource.com/setup_11.x | bash -; \
    apt-get install -y nodejs

#################
# Install Grunt #
#################
RUN npm install -g grunt-cli

######################
# BASH customization #
######################
RUN echo "alias ll='ls --color=auto -lA'" >> ~/.bashrc

VOLUME /var/www/html
docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - ./docker-mysql/db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: progonkpa/wordpress:1.0
    restart: always
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:

The volume is being created; it just isn't being created in the execution context where your docker-compose.yml file lives. I assume you are running the ls -lah command and expecting something to show up in the directory where your docker-compose.yml file is. That is why you say, "BUT this is not happening".
The VOLUME command in the Dockerfile is limited. The host is unknown when you build an image from the Dockerfile; it is not until docker run is executed with your built image that the Docker host is known.
And so, when you use the VOLUME command in a Dockerfile and then run a container from that image, the volume is created in a location configured by the Docker installation. To confirm that a volume has indeed been created for your container, use this command:
docker inspect -f '{{ .Mounts }}' [container_name]
To have better control and specify where your VOLUME is created on your Docker host, you need to use the -v option with docker run or configure it in your docker-compose.yml file, as is being done for your MySQL persistence container.
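For example, a minimal sketch of such a mapping for your wordpress service (the host path ./wordpress is an assumption; use whichever directory you want the files to appear in):

wordpress:
  image: progonkpa/wordpress:1.0
  volumes:
    # bind-mount a host directory over /var/www/html;
    # the wordpress image's entrypoint copies the WordPress files
    # into it on first run if the directory is empty
    - ./wordpress:/var/www/html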
You can remove VOLUME /var/www/html from your Dockerfile, and you should, because the wordpress image in your FROM already declares that volume, as you know.

Related

Building a Dockerfile from inside Docker Compose

So I'm trying to follow these instructions:
https://github.com/open-forest/sendy
I'm using Portainer and trying to run a Sendy container (newsletter software). Instead of running a MySQL image with it, I'm just using my external managed database.
On my server I keep project data at: /var/docker/project-name. I use this structure for bind mounting if I need to bring data into the containers from the start.
So for this project, in the project-name folder I have sendy-6.0.2.zip and this Dockerfile (this file was provided via the instructions at the above link):
#
# Docker with Sendy Email Campaign Marketing
#
# Build:
# $ docker build -t sendy:latest --target sendy -f ./Dockerfile .
#
# Build w/ XDEBUG installed
# $ docker build -t sendy:debug-latest --target debug -f ./Dockerfile .
#
# Run:
# $ docker run --rm -d --env-file sendy.env sendy:latest
FROM php:7.4.8-apache as sendy
ARG SENDY_VER=6.0.2
ARG ARTIFACT_DIR=6.0.2
ENV SENDY_VERSION ${SENDY_VER}
RUN apt -qq update && apt -qq upgrade -y \
    # Install unzip cron
    && apt -qq install -y unzip cron \
    # Install php extension gettext
    # Install php extension mysqli
    && docker-php-ext-install calendar gettext mysqli \
    # Remove unused packages
    && apt autoremove -y
# Copy artifacts
COPY ./artifacts/${ARTIFACT_DIR}/ /tmp
# Install Sendy
RUN unzip /tmp/sendy-${SENDY_VER}.zip -d /tmp \
    && cp -r /tmp/includes/* /tmp/sendy/includes \
    && mkdir -p /tmp/sendy/uploads/csvs \
    && chmod -R 777 /tmp/sendy/uploads \
    && rm -rf /var/www/html \
    && mv /tmp/sendy /var/www/html \
    && chown -R www-data:www-data /var/www \
    && mv /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini \
    && rm -rf /tmp/* \
    && echo "\nServerName \${SENDY_FQDN}" > /etc/apache2/conf-available/serverName.conf \
    # Ensure X-Powered-By is always removed regardless of php.ini or other settings.
    && printf "\n\n# Ensure X-Powered-By is always removed regardless of php.ini or other settings.\n\
Header always unset \"X-Powered-By\"\n\
Header unset \"X-Powered-By\"\n" >> /var/www/html/.htaccess \
    && printf "[PHP]\nerror_reporting = E_ALL & ~E_NOTICE & ~E_STRICT & ~E_DEPRECATED\n" > /usr/local/etc/php/conf.d/error_reporting.ini
# Apache config
RUN a2enconf serverName
# Apache modules
RUN a2enmod rewrite headers
# Copy hello-cron file to the cron.d directory
COPY cron /etc/cron.d/cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron \
    # Apply cron job
    && crontab /etc/cron.d/cron \
    # Create the log file to be able to run tail
    && touch /var/log/cron.log
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["apache2-foreground"]
#######################
# XDEBUG Installation
#######################
FROM sendy as debug
# Install xdebug extension
RUN pecl channel-update pecl.php.net \
    && pecl install xdebug \
    && docker-php-ext-enable xdebug \
    && rm -rf /tmp/pear
Here is my Docker Compose file:
version: '3.7'
services:
  project-sendy:
    container_name: project-sendy
    image: sendy:6.0.2
    build:
      dockerfile: var/docker/project-sendy/Dockerfile
    restart: unless-stopped
    networks:
      - proxy
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.project-secure.entrypoints=websecure"
      - "traefik.http.routers.project-secure.rule=Host(`project.com`)"
    environment:
      SENDY_PROTOCOL: https
      SENDY_FQDN: project.com
      MYSQL_HOST: db-host-name-here
      MYSQL_DATABASE: db-name-here
      MYSQL_USER: db-user-name-here
      MYSQL_PASSWORD: db-password-here
      SENDY_DB_PORT: db-port-here
networks:
  proxy:
    external: true
When I try to deploy I get:
failed to deploy a stack: project-sendy Pulling project-sendy
Error could not find /data/compose/126/var/docker/project-sendy:
stat /data/compose/126/var/docker/project-sendy: no such file or directory
So here's what I've done.
I have the cron and artifacts folders in the same directory as the Dockerfile.
In the Dockerfile look for this line:
COPY artifacts/docker-entrypoint.sh /usr/local/bin/
Right below it put this line:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
Otherwise you will get this error:
Starting Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/local/bin/docker-entrypoint.sh": permission denied: unknown
Then build it with:
docker build -t sendy:6.0.2 .
Then your image will show up in Portainer.
You can then remove the build section in your docker-compose file and hit deploy. It now works for me.
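For reference, once the image is built manually the service definition no longer needs the build key; a sketch of the relevant part (the labels, networks, and environment sections stay exactly as before):

project-sendy:
  container_name: project-sendy
  image: sendy:6.0.2    # image built above; no build: section needed
  restart: unless-stopped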

Files generated through docker-compose run web rails g are not created

I have a project that uses docker-compose to provide the development environment. The application builds fine with docker-compose build and runs on 0.0.0.0:3000 with docker-compose up. When I run docker-compose run web rails g uploader or docker-compose run web rails g migration, the console reports that the files were created successfully, but when I check the project there are no files.
This is my Dockerfile:
# Base image
FROM ruby:2.7.0

# Set environment variables in docker
ENV INSTALL_PATH=/app \
    RAILS_ENV=production \
    RACK_ENV=$RAILS_ENV \
    RAILS_LOG_TO_STDOUT=true \
    RAILS_SERVE_STATIC_FILES=true \
    SECRET_KEY_BASE=ad187ccccdf25beb51568211a26b0bff237385d79df37e08151acda85266f9a469f37926450ba18d9362ec5e83d1b612c09368bc59dc895cb5ce2798a3ab456b
RUN env
# Ensure gems are cached and only get updated when they change. This will
# drastically speed up builds when your gems do not change.
ADD Gemfile* $INSTALL_PATH/
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get install -qq -y build-essential nodejs libpq-dev cron htop vim sqlite3 yarn imagemagick netcat --fix-missing --no-install-recommends \
    && cd $INSTALL_PATH; bundle install --jobs 20 --retry 5
WORKDIR $INSTALL_PATH
ADD . .
RUN mv config/database.docker.yml config/database.yml \
    # Fix windows line ending from windows runners
    && find ./ -type f -exec sed -i 's/\r$//' {} + \
    && chmod +x docker/* \
    && yarn install --check-files \
    && RAILS_ENV=$RAILS_ENV bundle exec rails assets:precompile \
    && chown -R nobody:nogroup $INSTALL_PATH
USER nobody
# Expose a volume so that nginx will be able to read in assets in production.
VOLUME ["$INSTALL_PATH/public"]
EXPOSE 3000
CMD ["docker/startup.sh"]
This is my docker-compose.yml:
version: '2'
volumes:
  database_data:
    driver: local
  web_rails_public: {}
services:
  web:
    restart: always
    image: eu.gcr.io/academic-ivy-225422/joystree_web
    container_name: joystree_web_app_container
    build: .
    volumes:
      - web_rails_public:/app/public
    env_file:
      - '.env.web'
    ports:
      - "3000:3000"
    links:
      - "database:database"
    depends_on:
      - database
  database:
    restart: always
    container_name: joystree_postgres_container
    image: postgres:11
    env_file:
      - '.env.db'
    ports:
      - "5432:5432"
    volumes:
      - database_data:/var/lib/postgresql/data
I had this same problem and solved it by following these steps:
1 - In the Dockerfile, add the following lines so that new files can be created:
RUN mkdir /home/web \
    && chown $(id -un):$(id -gn) /home/web
WORKDIR /home/web
2 - In docker-compose.yml, the web service's volume should be .:/home/web (the same path you used in mkdir):
web:
  volumes:
    - .:/home/web
I hope that solves your problem too.
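As a quick check (the migration name AddTitleToPosts is just an example), after rebuilding the image the generated files should now land in your project directory on the host:

docker-compose build web
docker-compose run --rm web rails g migration AddTitleToPosts
ls db/migrate   # the new migration file should be visible on the host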

docker: how to share ssh-keys between containers?

I have 4 containers configured as follows (docker-compose.yml):
version: '3'
networks:
  my-ntwk:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  f-app:
    image: f-app
    tty: true
    container_name: f-app
    hostname: f-app.info.my
    ports:
      - "22:22"
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.5
    extra_hosts:
      - "f-db.info.my:172.20.0.6"
      - "p-app.info.my:172.20.0.7"
      - "p-db.info.my:172.20.0.8"
    depends_on:
      - f-db
      - p-app
      - p-db
  f-db:
    image: f-db
    tty: true
    container_name: f-db
    hostname: f-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.6
  p-app:
    image: p-app
    tty: true
    container_name: p-app
    hostname: p-app.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.7
  p-db:
    image: p-db
    tty: true
    container_name: prod-db
    hostname: p-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.8
Each image is built from the same Dockerfile:
FROM openjdk:8
RUN apt-get update && \
    apt-get install -y openssh-server
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
Now I want to be able to connect from f-app to any other machine without typing a password, running a line like this: ssh myuser@f-db.info.my.
I know that I need to exchange ssh-keys between the servers (that's not a problem). My problem is how to do it with docker containers, and when (build or runtime)!
To ssh without a password, you need a passwordless user and SSH keys configured in the containers: the private key must be available in the source container, and the public key must be added to authorized_keys in the destination container.
Here is the working Dockerfile:
FROM openjdk:7
RUN apt-get update && \
    apt-get install -y openssh-server vim
EXPOSE 22
RUN useradd -rm -d /home/nf2/ -s /bin/bash -g root -G sudo -u 1001 ubuntu
USER ubuntu
WORKDIR /home/ubuntu
RUN mkdir -p /home/nf2/.ssh/ && \
    chmod 0700 /home/nf2/.ssh && \
    touch /home/nf2/.ssh/authorized_keys && \
    chmod 600 /home/nf2/.ssh/authorized_keys
COPY ssh-keys/ /keys/
RUN cat /keys/ssh_test.pub >> /home/nf2/.ssh/authorized_keys
USER root
ENTRYPOINT service ssh start && bash
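If you need to generate the ssh-keys/ssh_test key pair this Dockerfile copies, a sketch (an unencrypted key, suitable for container-to-container testing only):

mkdir -p ssh-keys
# -N "" creates the key without a passphrase so ssh -i works non-interactively
ssh-keygen -t rsa -b 4096 -N "" -f ssh-keys/ssh_test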
docker-compose will remain the same; here is the test script that you can try:
#!/bin/bash
set -e
echo "start docker-compose"
docker-compose up -d
echo "list of containers"
docker-compose ps
echo "starting ssh test from f-db to f-app"
docker exec -it f-db sh -c "ssh -i /keys/ssh_test ubuntu@f-app"
For further detail, you can try this working example: docker-container-ssh
git clone git@github.com:Adiii717/docker-container-ssh.git
cd docker-container-ssh
./test.sh
You can replace the keys as these were used for testing purposes only.
If you are using docker compose, an easy choice is to forward the SSH agent like this:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
For SSH forwarding on macOS hosts, instead of mounting the path in $SSH_AUTH_SOCK you have to mount /run/host-services/ssh-auth.sock.
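That is, on macOS the same snippet becomes (a sketch, substituting the Docker Desktop socket path mentioned above):

something:
  container_name: something
  volumes:
    # fixed socket path provided by Docker Desktop on macOS
    - /run/host-services/ssh-auth.sock:/ssh-agent
  environment:
    SSH_AUTH_SOCK: /ssh-agent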
Alternatively, you can do it like this.
It's a harder problem if you need to use SSH at build time, for example if you're using git clone, or in my case pip and npm, to download from a private repository.
The solution I found is to add your keys using the --build-arg flag. Then you can use the experimental --squash flag (added in 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:
Build command
$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
Dockerfile
FROM openjdk:8
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN apt-get update && \
    apt-get install -y openssh-server && \
    apt-get install -y openssh-client
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
If you're using Docker 1.13+ and/or have experimental features on, you can append --squash to the build command, which will merge the layers, removing the SSH keys and hiding them from docker history.

Empty volume folder after docker-compose up

I have a Dockerfile which loads some packages from Composer into a vendor folder in the container. Now I want to link the vendor folder in the container with my host environment. If I start the service with docker-compose up, the vendor folder is empty. What can I do to keep the data in the container?
Here is my dockerfile:
FROM php:7.3.3-apache-stretch
RUN apt-get update && \
    apt-get install -y --no-install-recommends nano \
        git \
        openssh-server
RUN curl -s https://getcomposer.org/installer | php && \
    echo "{}" > composer.json && \
    php composer.phar require slim/slim "^3.0" && \
    chown -R www-data. .
VOLUME /var/www/html/vendor
And here is my docker-compose.yml:
version: '3.2'
services:
  slim:
    build:
      context: ./slim
    ports:
      - "1337:1337"
    networks:
      - backend
    volumes:
      - ./slim/vendor:/var/www/html/vendor
networks:
  backend:
Thanks for the help.
What you see is expected behaviour.
If you want the vendor folder populated and available on the host as well, then you have to run the installation AFTER the mapping happens, not the other way round.
This command:
curl -s https://getcomposer.org/installer | php && \
    echo "{}" > composer.json && \
    php composer.phar require slim/slim "^3.0" && \
    chown -R www-data. .
should become your ENTRYPOINT or CMD so that it is run when the container starts (not when it is built).
I would suggest putting those commands in an install script and running that. It would look cleaner and be easier to understand.
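For instance, a minimal sketch (the script name install.sh and the final apache2-foreground call are assumptions based on the php:apache base image):

#!/bin/sh
# install.sh - run Composer at container start, after the bind mount exists,
# so the vendor folder is populated on the host as well.
set -e
cd /var/www/html
curl -s https://getcomposer.org/installer | php
echo "{}" > composer.json
php composer.phar require slim/slim "^3.0"
chown -R www-data. .
# hand control back to the web server process
exec apache2-foreground

and in the Dockerfile:

COPY install.sh /usr/local/bin/install.sh
RUN chmod +x /usr/local/bin/install.sh
CMD ["/usr/local/bin/install.sh"]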
Hope this helps, but if you need more information just let me know.

Can I use a docker volume with S3FS?

I would like to have a shared directory between my containers: ftp and s3fs. To do so, I have created a volume in my docker-compose file called s3.
If I stop s3fs from running in my s3fs container, then I can create files in the ftp container and they will show up inside s3fs under /home/files.
However, when s3fs is running, the directory /home/files remains empty while I create files in the ftp container.
This is what my /proc/mounts file looks like:
/dev/sda2 /home/files ext4 rw,relatime,data=ordered 0 0
s3fs /home/files fuse.s3fs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
I believe fuse may be overriding my docker volume; has anyone encountered this problem before?
docker-compose.yml
version: "3"
services:
ftp:
image: app/proftpd:latest
volumes:
- s3:/home/files
ports:
- 2222:2222
s3fs:
image: app/s3fs:latest
command: start
env_file:
- s3fs/aws.env
volumes:
- s3:/home/files
cap_add:
- SYS_ADMIN
devices:
- "/dev/fuse"
environment:
ENVIRONMENT: "dev"
volumes:
s3:
s3fs - Dockerfile
FROM ubuntu:16.04
RUN apt-get update -qq
RUN apt-get install -y \
    software-properties-common
RUN apt-get update -qq
RUN apt-get install -y \
    automake \
    autotools-dev \
    fuse \
    g++ \
    git \
    libcurl4-openssl-dev \
    libfuse-dev \
    libssl-dev \
    libxml2-dev \
    make \
    pkg-config \
    curl
RUN curl -L https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.84.tar.gz | tar zxv -C /usr/src
RUN cd /usr/src/s3fs-fuse-1.84 && ./autogen.sh && ./configure --prefix=/usr --with-openssl && make && make install
COPY entrypoint.sh /opt/s3fs/bin/entrypoint.sh
RUN mkdir -p /home/files
WORKDIR /opt/s3fs/bin
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
s3fs - entrypoint.sh
#!/usr/bin/env bash
case $1 in
  start)
    echo "Starting S3Fs: "
    s3fs mybucket /home/files -o allow_other,nonempty -d -d
    ;;
esac
ftp - Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
    openssh-server \
    proftpd-basic \
    proftpd-mod-mysql
COPY proftpd.conf /etc/proftpd/proftpd.conf
COPY sftp.conf /etc/proftpd/conf.d/sftp.conf
COPY setup.sh /etc/proftpd/setup.sh
RUN chmod 500 /etc/proftpd/setup.sh && /etc/proftpd/setup.sh
EXPOSE 2222
ENTRYPOINT ["/bin/sh", "/etc/proftpd/entrypoint.sh"]
You can mount S3 in your docker container in the following way:
1. Add to the Dockerfile:
RUN apt-get install -y fuse s3fs
RUN mkdir /root/.aws
RUN touch /root/.aws/.passwd-s3fs && chmod 600 /root/.aws/.passwd-s3fs
COPY entrypoint.sh ./
RUN chmod 700 entrypoint.sh
ENTRYPOINT ./entrypoint.sh
2. Create entrypoint.sh with the following script:
#!/bin/sh
echo "$AWS_CREDS" > /root/.aws/.passwd-s3fs
echo "$BUCKET_NAME /srv/files fuse.s3fs _netdev,allow_other,passwd_file=/root/.aws/.passwd-s3fs 0 0" > /etc/fstab
mount -a
<your old CMD or ENTRYPOINT>
3. In docker-compose.yml add the following:
<your-container-name>:
  image: ...
  build: ...
  environment:
    - AWS_ID="AKI..."
    - AWS_KEY="omIE..."
    - AWS_CREDS=AKI...:2uMZ...
    - BUCKET_NAME=<YOUR bucket name>
  devices:
    - "/dev/fuse"
  cap_add:
    - SYS_ADMIN
  security_opt:
    - seccomp:unconfined
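As a quick sanity check once the container is up (substitute the container name from your compose file; /srv/files is the mount point from the fstab line in step 2):

docker-compose up -d
docker exec -it <your-container-name> ls /srv/files   # should list the bucket contents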
