I have this service...
storage:
  image: mcr.microsoft.com/azure-storage/azurite
  ports:
    - "20000:10000"
  restart: unless-stopped
  volumes:
    - C:/Data:/hello
I can add data to the Azurite service and I can browse it in the volume via Docker Desktop but I can't see any files in my local file system - the folder is always empty.
Why isn't the volume mapped to my file system?
You need to add quotes around the path in your volumes declaration, since the local Windows path contains YAML special characters.
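For example, a minimal sketch of the quoted declaration, keeping the C:/Data host path from your question:

volumes:
  - "C:/Data:/hello"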
Hope this helps. Below is a docker compose file that starts Azurite with volumes. Please create a folder called storagedata in the same directory as your docker compose file.
version: '3.4'
services:
  storageemulator:
    image: mcr.microsoft.com/azure-storage/azurite
    command: "azurite --loose --blobHost 0.0.0.0 --blobPort 10000 --queueHost 0.0.0.0 --queuePort 10001 --tableHost 0.0.0.0 --tablePort 10002 --location /workspace --debug /workspace/debug.log"
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    volumes:
      - ./storagedata:/workspace
Related
Whenever I try to mount the log directory to an existing volume, Compose ignores it, creates a new volume and maps the directory to that new volume instead.
In the docker-compose YAML file I have declared the volume name and marked it as external, but it still creates a new one.
I am using Windows.
nosqldata:
  container_name: 'docker_exec_service_mongodb'
  image: mongo:latest
  networks:
    - app-tier
  restart: on-failure:5
  ports:
    - "30001:27017"
  volumes:
    - docker_exec_container_vol_mongo_data:/data/db
    - docker_exec_container_vol_mongo_log:/var/log/mongodb
Is there any other way to use the external volume?
GitHub - https://github.com/austinnoronha/docker_exec_container
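For reference, a minimal sketch of how pre-existing volumes are usually declared as external at the top level of a compose file (the volumes must already exist, e.g. created with docker volume create):

volumes:
  docker_exec_container_vol_mongo_data:
    external: true
  docker_exec_container_vol_mongo_log:
    external: true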
I have 2 containers in a compose file, and I want to serve the app's static files through nginx.
I have read this: https://stackoverflow.com/a/43560093/7522096 and want to use a host directory shared between the app container and the nginx container; for some reasons I don't want to use a named volume.
===
Using a host directory
Alternatively, you can use a directory on the host and mount that into the containers. This has the advantage of you being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).
It's the same, except you don't define a volume in Docker, instead mounting the external directory.
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
===
My docker-compose file:
version: "3.7"
services:
  app:
    container_name: app
    restart: always
    ports:
      - 8888:8888
    env_file:
      - ./env/app.env
    image: registry.gitlab.com/app/development
    volumes:
      - ./public/app/:/usr/app/static/
      - app-log:/root/.pm2
  nginx:
    container_name: nginx
    image: 'nginx:1.16-alpine'
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - /home/devops/config/:/etc/nginx/conf.d/:ro
      - /home/devops/ssl:/etc/nginx/ssl:ro
      - ./public/app/:/etc/nginx/public/app/
    depends_on:
      - app
volumes:
  # app-public:
  app-log:
Yet when I do this in my compose file, the directory always comes up empty in nginx, and the static files in my app container disappear too.
Please help, I have tried a lot of approaches but cannot figure it out.
Thanks.
During the initialization of the container, Docker will bind the ./public/app directory on the host to the /usr/app/static/ directory in the container.
If ./public/app does not exist, it will be created. The bind goes from the host to the container, meaning that the content of the ./public/app folder is what shows up in the container, and not vice versa. That's why after initialization the static app directory is empty.
If my understanding is correct, your goal is to share the application files between the app container and nginx.
Taking the above into consideration, the only solution is to create the files in the volume after the volume is created. Here is an example for the relevant parts:
version: "3"
services:
  app:
    image: ubuntu
    volumes:
      - ./public/app/:/usr/app/static_copy/
    entrypoint: /bin/bash
    command: >
      -c "mkdir /usr/app/static;
      touch /usr/app/static/shared_file;
      mv /usr/app/static/* /usr/app/static_copy;
      rm -r /usr/app/static;
      ln -sfT /usr/app/static_copy/ /usr/app/static;
      exec sleep infinity"
  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - ./public/app/:/etc/nginx/public/app/
    depends_on:
      - app
This will move the static files to the static_copy directory and link those files back to /usr/app/static. The files will be shared with the host (the public/app directory) and with the nginx container (/etc/nginx/public/app/). Adapt it to fit your needs.
Alternatively, you can of course use named volumes, as sketched below.
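A minimal sketch of that named-volume alternative, reusing the images from your compose file (the static-assets volume name is illustrative). Note that, unlike a bind mount, a freshly created named volume is seeded with the image's existing content at the mount path, which is why the static files would not disappear:

version: "3.7"
services:
  app:
    image: registry.gitlab.com/app/development
    volumes:
      - static-assets:/usr/app/static/
  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - static-assets:/etc/nginx/public/app/
    depends_on:
      - app
volumes:
  static-assets: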
Hope it helps
I persist a container's data to a volume (not a bind mount) and I wonder how I can inspect this data later. For example, let's say that I use something like this to run a WordPress site:
docker-compose.yml:
services:
  wordpress:
    volumes:
      - wordpress-files:/var/www/html
volumes:
  wordpress-files:
Is it possible to start another container (based on Alpine or something) that would mount the same volume and also expose it to my host OS (macOS – I'm using Docker for Mac)? Something like this (pseudocode):
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    volumes:
      - wordpress-files:/tmp:host
volumes:
  wordpress-files:
It's possible to exec into a temporary container but I'd like to make the files available to my local filesystem so that I can use my local tools to browse them. Note that primarily, the files need to live in a named volume (for performance and other reasons) so it cannot be a bind mount like ./my-local-path:/var/www/html.
Why don't you just use Samba? Like this:
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    image: dperson/samba
    command: sh -c "samba.sh -s \"mount;/mount\""
    volumes:
      - wordpress-files:/mount
volumes:
  wordpress-files:
You can look up the IP address of the wordpress-files-inspector container later (or give the container a static IP) and mount the share on your host OS.
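A rough host-side sketch for macOS, assuming the share allows guest access; since container IPs are not directly reachable on Docker for Mac, this also assumes you publish the SMB port from the inspector container (the port mapping and mount point below are illustrative):

# hypothetical addition to the wordpress-files-inspector service:
#   ports:
#     - "445:445"

mkdir -p ~/wordpress-files
mount_smbfs //guest@localhost/mount ~/wordpress-files   # browse with local tools
umount ~/wordpress-files                                # detach when done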
I am trying to allow nginx to proxy between multiple containers while also accessing the static files from those containers.
To share volumes between containers created using docker compose, the following works correctly:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    depends_on:
      - web
volumes:
  static-files:
However, what I actually require is for the nginx service to live in a separate compose file, in a completely different folder. In other words, the docker compose up commands would be run separately. I have tried the following:
First compose file:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
    networks:
      - directorylocation-nginx_mynetwork
volumes:
  static-files:
networks:
  directorylocation-nginx_mynetwork:
    external: true
Second compose file (i.e. nginx):
version: '3.6'
services:
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    networks:
      - mynetwork
volumes:
  static-files:
networks:
  mynetwork:
The above two files work correctly in the sense that the site can be viewed. The problem is that the static files are not available in the nginx container. The site therefore displays without any images etc.
One workaround which works correctly, found here, is to change the nginx container's static files volume to instead be as follows:
- /var/lib/docker/volumes/directory_static-files/_data:/static/teststaticfiles
The above works correctly, but it seems 'hacky' and brittle. Is there another way to share volumes between containers which are housed in different compose files, without needing to map the /var/lib/docker/volumes directory?
By separating the 2 docker-compose.yml files as you did in your question, 2 different volumes are actually created; that's the reason you don't see the data from the web service inside the volume of the nginx service: they are simply two different volumes.
Example: let's say you have the following structure:
example/
|- web/
|  |- docker-compose.yml   # your first docker compose file
|- nginx/
|  |- docker-compose.yml   # your second docker compose file
Running docker-compose up from the web folder (or docker-compose -f web/docker-compose.yml up from the example directory) will actually create a volume named web_static-files (the name of the volume defined in the docker-compose.yml file, prefixed by the folder where this file is located).
So, running docker-compose up from the nginx folder will actually create nginx_static-files instead of re-using web_static-files as you want.
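To confirm which volume names were actually created (and their folder prefixes), you can list them:

docker volume ls

You should see entries such as web_static-files and nginx_static-files rather than a single shared static-files volume.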
You can use the volume created by web/docker-compose.yml by specifying in the 2nd docker compose file (nginx/docker-compose.yml) that this is an external volume, and its name:
volumes:
  static-files:
    external:
      name: web_static-files
Note that if you don't want the volume (and all resources) to be prefixed by the folder name (default), but by something else, you can add -p option to docker-compose command :
docker-compose \
  -f web/docker-compose.yml \
  -p abcd \
  up
This command will now create a volume named abcd_static-files (that you can use in the 2nd docker compose file).
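The second compose file would then reference it the same way as above, just with the new prefix, for example:

volumes:
  static-files:
    external:
      name: abcd_static-files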
You can also define the volume creation in its own docker-compose file (like volumes/docker-compose.yml):
version: '3.6'
volumes:
  static-files:
And reference this volume as external, with the name volumes_static-files, in the web and nginx docker-compose.yml files:
volumes:
  volumes_static-files:
    external: true
Unfortunately, you cannot set the volume name in docker compose, it will be automatically prefixed. If this is really a problem, you can also create the volume manually (docker volume create static-files) before running any docker-compose up command (I do not recommend this solution though, because it adds a manual step that can be forgotten if you reproduce your deployment on another environment).
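A minimal sketch of that manual approach, for illustration:

# create the volume once, with an un-prefixed name
docker volume create static-files

Both compose files would then declare static-files with external: true; no name override is needed, since external volumes are looked up by their literal name and are not prefixed.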
I'm trying to use Docker to containerize a MySQL (MariaDB actually) database. I figured out how to store MySQL data (/var/lib/mysql) in a volume mounted from a host directory.
However, because the underlying filesystem is different from host to host there are some inconsistencies, for example table names are case insensitive on NTFS (Windows). Also, it looks like if the database is created on a Linux host it doesn't work on a Windows host (haven't figured out why exactly).
Therefore, I want to store the data on a disk image and mount it inside the container, i.e. db-data.img formatted as ext4. But I'm facing a strange problem when mounting this image inside the container:
$ docker run -v $PWD:/outside --rm -it ubuntu /bin/bash
# dd if=/dev/zero of=/test.img bs=1M count=100
# mkfs.ext4 test.img
# mount -o loop -t ext4 test.img /mnt
mount: /mnt: mount failed: Operation not permitted.
Using another directory instead of /mnt didn't work either.
Why does it refuse to mount the img file?
I would suggest using docker-compose and just using a volume declared in the docker-compose.yml configuration.
Something like this:
version: '3'
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASS: $MYSQL_PASSWORD
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
The mysql-data volume is stored as a separate, Docker-managed volume, independent of the host operating system. The difference from just mounting a directory on the host is that it's basically mounting a volume container (which you could also do without docker-compose, but it's more work).
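If you later want to see where that volume lives on disk or check its contents, you can inspect it; the project prefix below is a hypothetical example, since Compose prepends the project name:

docker volume ls
docker volume inspect myproject_mysql-data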
It will not work inside a Docker container: Docker blocks mounting filesystems (and loop devices) there. It should be easier to create the image beforehand, mount it on the host, and attach it to Docker as a folder via -v.
P.S. Another option is to dump your database to SQL and restore it from Windows.
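A rough sketch of that host-side approach, assuming a Linux host where loop mounts are permitted (paths, size and password are illustrative):

# on the host: create, format and mount the disk image
dd if=/dev/zero of=db-data.img bs=1M count=1024
mkfs.ext4 db-data.img
sudo mkdir -p /mnt/db-data
sudo mount -o loop db-data.img /mnt/db-data

# then hand the mounted folder (not the raw image) to the container
docker run -e MYSQL_ROOT_PASSWORD=example -v /mnt/db-data:/var/lib/mysql mariadb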
I managed to solve this by using the privileged option in docker-compose.yml:
privileged: true
(or --privileged in the docker command)
Here is my final docker-compose.yml:
version: '3'
services:
  db:
    build: ./db
    image: my_db
    container_name: db
    privileged: true
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    volumes:
      - ${MYSQL_DATA_IMG}:/data.img
    restart: always
Dockerfile:
FROM mariadb
COPY my-custom.cnf /etc/mysql/conf.d/custom.cnf
COPY run.sh /usr/local/bin/run-mariadb.sh
ENTRYPOINT ["run-mariadb.sh"]
and a custom entry point script that executes mount (run.sh):
#!/bin/sh
# For this mount command to work, the DB container must be started
# with --privileged.
mount -o loop /data.img /var/lib/mysql
# Call the entry point script of the MariaDB image.
exec /usr/local/bin/docker-entrypoint.sh mysqld
For storing database data, your docker-compose.yml will look like this if you want to use a Dockerfile:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
Your docker-compose.yml will look like this if you want to use an image instead of a Dockerfile:
version: '3.1'
services:
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to store or preserve MySQL data, remember to add these two snippets to your docker-compose.yml:
volumes:
  - mysql-data:/var/lib/mysql
and
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will be persistent and will not be deleted even after using this command:
docker-compose down
Extra: if you want to delete all the data, use
docker-compose down -v
You can also list your volumes with this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data
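If you want to see where Docker keeps a given volume's data on disk, you can inspect it, e.g. for the named volume from the listing above:

docker volume inspect phphelloworld_mysql-data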