How to mount a Docker volume to my host OS (macOS)?

I persist a container's data to a volume (not a bind mount), and I wonder how I can inspect this data later. For example, let's say that I use something like this to run a WordPress site:
docker-compose.yml:
services:
  wordpress:
    volumes:
      - wordpress-files:/var/www/html
volumes:
  wordpress-files:
Is it possible to start another container (based on Alpine or something) that would mount the same volume and also expose it to my host OS (macOS – I'm using Docker for Mac)? Something like this (pseudocode):
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    volumes:
      - wordpress-files:/tmp:host
volumes:
  wordpress-files:
It's possible to exec into a temporary container, but I'd like to make the files available to my local filesystem so that I can use my local tools to browse them. Note that the files primarily need to live in a named volume (for performance and other reasons), so it cannot be a bind mount like ./my-local-path:/var/www/html.
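For reference, the throwaway-container approach mentioned above looks like this (a sketch; Compose usually prefixes volume names with the project name, so check docker volume ls for the actual name):
docker run --rm -it -v wordpress-files:/mnt alpine sh
# browse the WordPress files under /mnt with the usual shell tools;
# --rm removes the container again on exit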

Why don't you just use Samba? Like this:
services:
  wordpress:
    image: wordpress
    volumes:
      - wordpress-files:/var/www/html
  wordpress-files-inspector:
    image: dperson/samba
    command: sh -c "samba.sh -s \"mount;/mount\""
    volumes:
      - wordpress-files:/mount
volumes:
  wordpress-files:
You can inspect the IP address of the wordpress-files-inspector container later (or give the container a static IP) and mount the share in your host OS.
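Note that on Docker for Mac, container IPs are not directly reachable from the host, so instead of connecting to the container's IP you may need to publish the SMB port and connect to localhost. A sketch extending the snippet above (host port 1445 is an arbitrary choice to avoid clashing with macOS file sharing on 445):
wordpress-files-inspector:
  image: dperson/samba
  command: sh -c "samba.sh -s \"mount;/mount\""
  ports:
    - "1445:445"
  volumes:
    - wordpress-files:/mount
Then in Finder use Go > Connect to Server… and open smb://localhost:1445/mount.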

Content of docker bind mount is not showing inside the container

EDITED:
Rclone has a bucket mounted to the host directory /home/user/rclone. I want to access the contents of this directory inside the nextcloud Docker instance, so I bind-mount it to /var/www/html/data. With the shared option, any changes made in the container should be reflected on the host, and vice versa.
I have set the permissions of /home/user/rclone to 777, and the contents are visible with an ls command from the host. But once the Docker container is restarted, an ls command from within the container does not show any files. Rclone is still running properly.
I suspect that because the volume nextcloud is mounted at /var/www/html, the bind mount at /var/www/html/data is covered up.
So I picked another directory inside the container, namely /mnt, and tried that. Still, no files show up with an ls command.
My Nextcloud docker-compose file (MySQL has nothing to do with this; showing the /var/www/html/data mount version only):
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    network_mode: npm_default
    container_name: db
  app:
    image: nextcloud:latest
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      - /home/user/rclone:/var/www/html/data:shared
    environment:
      - MYSQL_PASSWORD=xxx
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_TRUSTED_DOMAINS=xxx
    network_mode: npm_default
    container_name: nextcloud
Another way of putting it:
rclone cloud storage --> host --> docker --> nextcloud external storage
So the reason the nextcloud service cannot view the content is a permission problem.
If you run docker exec -it nextcloud bash to check the contents, they are there, because you are root.
So the proper solution, if you want to use a shared bind mount with the host directory, is to set the permissions so that "others" in the container can read the files (666 in my case).
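A quick way to verify the permission theory (a sketch; nextcloud is the container name from the compose file above, and UID 33 is www-data in the nextcloud image):
docker exec -u 33 nextcloud ls -la /var/www/html/data
# if this fails with "Permission denied", widen the host-side permissions,
# e.g. read for files plus traverse for directories:
chmod -R o+rX /home/user/rclone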
But in the end, I found that volume plugins are a much better solution, so this approach is somewhat deprecated.

Share dir between host and multiple containers using docker-compose

I have 2 containers in a compose file, and I want to serve the app's static files through nginx.
I have read this answer: https://stackoverflow.com/a/43560093/7522096 and want to use a host directory shared between the app container and the nginx container; for some reason I don't want to use a named volume.
===
Using a host directory: Alternately, you can use a directory on the host and mount that into the containers. This has the advantage of being able to work directly on the files using your tools outside of Docker (such as your GUI text editor and other tools).
It's the same, except you don't define a volume in Docker, instead mounting the external directory.
version: '3'
services:
  nginx:
    volumes:
      - ./assets:/var/lib/assets
  asset:
    volumes:
      - ./assets:/var/lib/assets
===
My docker-compose file:
version: "3.7"
services:
app:
container_name: app
restart: always
ports:
- 8888:8888
env_file:
- ./env/app.env
image: registry.gitlab.com/app/development
volumes:
- ./public/app/:/usr/app/static/
- app-log:/root/.pm2
nginx:
container_name: nginx
image: 'nginx:1.16-alpine'
restart: always
ports:
- 80:80
- 443:443
volumes:
- /home/devops/config/:/etc/nginx/conf.d/:ro
- /home/devops/ssl:/etc/nginx/ssl:ro
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
volumes:
# app-public:
app-log:
Yet when I do this in my compose file, the directory always comes up empty in nginx, and the static files in my app container disappear too.
Please help; I have tried a lot of ways but cannot figure it out.
Thanks.
During the initialization of the container, Docker binds the ./public/app directory on the host to the /usr/app/static/ directory in the container.
If ./public/app does not exist, it is created. The bind goes from the host into the container, meaning that the content of the ./public/app folder is what appears in the container, hiding whatever the image had at that path, and not vice versa. That's why, after initialization, the static app directory is empty.
If my understanding is correct, your goal is to share the application files between the app container and nginx.
Taking the above into consideration, the only solution is to create the files in the mounted directory after it has been mounted. Here is an example of the relevant parts:
version: "3"
services:
app:
image: ubuntu
volumes:
- ./public/app/:/usr/app/static_copy/
entrypoint: /bin/bash
command: >
-c "mkdir /usr/app/static;
touch /usr/app/static/shared_file;
mv /usr/app/static/* /usr/app/static_copy;
rm -r /usr/app/static;
ln -sfT /usr/app/static_copy/ /usr/app/static;
exec sleep infinity"
nginx:
image: 'nginx:1.16-alpine'
volumes:
- ./public/app/:/etc/nginx/public/app/
depends_on:
- app
This moves the static files to the static_copy directory and links them back to /usr/app/static. Those files are shared with the host (the public/app directory) and with the nginx container (/etc/nginx/public/app/). Adapt it to fit your needs.
Alternatively, you can of course use named volumes, as sketched below.
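For completeness, a named-volume variant could look like the sketch below (based on the compose file from the question; the app-public volume name is the one already hinted at in its commented-out volumes section). The key property is that an empty named volume is populated from the image's content at the mount point on first use, which is exactly what a host bind mount does not do:
version: "3.7"
services:
  app:
    image: registry.gitlab.com/app/development
    volumes:
      - app-public:/usr/app/static/
  nginx:
    image: 'nginx:1.16-alpine'
    volumes:
      - app-public:/etc/nginx/public/app/:ro
    depends_on:
      - app
volumes:
  app-public: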
Hope it helps

Docker compose not mounting volume?

If I run this command the volume mounts and the container starts as expected with initialized state:
docker run --name gogs --net mk1net --ip 203.0.113.3 -v gogs-data:/data -d gogs/gogs
However, if I run the corresponding docker-compose file, the volume does not mount. The container still starts up, but without the initialized state it should read on startup.
version: '3'
services:
  gogs:
    image: gogs/gogs
    ports:
      - "3000:3000"
    volumes:
      - gogs-data:/data
    networks:
      mk1net:
        ipv4_address: 203.0.113.3
volumes:
  gogs-data:
networks:
  mk1net:
    ipam:
      config:
        - subnet: 203.0.113.0/24
Any ideas?
Looking at your command, the gogs-data volume was defined outside the docker compose file, probably using something like:
docker volume create gogs-data
If so then you need to specify it as external inside your docker compose file like this:
volumes:
  gogs-data:
    external: true
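The underlying cause is that Compose namespaces the volumes it creates: without external: true, docker-compose up creates and mounts a volume prefixed with the project (directory) name rather than the gogs-data volume your docker run command used. You can see both side by side (the myproject prefix is just a placeholder for your actual project name):
docker volume ls
# DRIVER    VOLUME NAME
# local     gogs-data            <- created by docker volume create, used by docker run
# local     myproject_gogs-data  <- created by docker-compose without external: true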
You can also give your external volume a different name and keep using the current volume name inside your docker-compose file to avoid naming conflicts. For example, say your project is about selling cars, so you want the external volume to be called selling-cars-gogs-data but want to keep it simple as gogs-data inside your docker-compose file; then you can do this:
volumes:
  gogs-data:
    external:
      name: selling-cars-gogs-data
Or, even better, use an environment variable to set the volume name for a more dynamic docker-compose design, like this:
volumes:
  gogs-data:
    external:
      name: "${MY_GOGS_DATA_VOLUME}"
And then start your docker compose like this:
env MY_GOGS_DATA_VOLUME='selling-cars-gogs-data' docker-compose up
Hope this helps. Here is also a link to the Docker Compose external volumes documentation in case you want to learn more: https://docs.docker.com/compose/compose-file/#external
You can make pretty much everything external, including container links to connect to containers from other docker-compose projects.

Docker-compose shared volume between containers but that has a path in host

I found that you can use named volumes so that two containers can exchange data. However, I need to store this named volume on my host computer (the computer which is running the Docker images).
So how do I create a volume that is stored in /media/my_volume and is also shared between containers? I tried simply bind-mounting /media/my_volume into both containers, but it ended up with everything being erased when I started the compose project again.
UPDATE:
version: '3'
services:
  transmission:
    build: ./rpi-transmission
    image: rpi-transmission
    ports:
      - "9091:9091"
      - "51413:51413"
      - "51413:51413/udp"
    volumes:
      - "/home/pi/transmission:/etc/transmission"
      - "/media/external:/home/downloads"
      - "/home/transmission-watch:/home/transmission-watch"
  samba:
    build: ./rpi-samba
    image: rpi-samba
    stdin_open: true
    volumes:
      - "/media/external:/data/share:ro"
  kodi:
    build: ./kodi-rpi
    image: kodi-rpi
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9777:9777/udp"
    devices:
      - "/dev/tty0:/dev/tty0"
      - "/dev/tty2:/dev/tty2"
      - "/dev/fb0:/dev/fb0"
      - "/dev/input:/dev/input"
      - "/dev/snd:/dev/snd"
      - "/dev/vchiq:/dev/vchiq"
    volumes:
      - "/var/run/dbus:/var/run/dbus"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "/home/pi/kodi-rpi/config:/config/kodi"
      - "/home/pi/kodi-rpi/data:/data"
I need to use /media/external in both containers. If I give it a name, I can't mount it at /media/external. If I simply leave it as it is now, I think samba erases the content of transmission.
The content isn't erased from the container, though; it is "masked", because the host directory is mounted on top of the existing files. The files are still in the container, just not reachable.
Un-mounting the volume reveals the content that is still in the container; it was only inaccessible because the volume was mounted over it.
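You can see the masking effect with any image that ships files at the mount point, for example nginx:alpine and its web root:
docker run --rm nginx:alpine ls /usr/share/nginx/html
# 50x.html  index.html
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html
# (no output: the image's files are masked, not deleted)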
It already has a path on the host, inside /var/lib/docker (or whatever directory you have configured as the Docker data root).
$ docker volume create test
test
$ docker volume inspect -f '{{.Mountpoint}}' test
/var/lib/docker/volumes/test/_data
If you want it to appear at /media/my_volume, you can do a bind mount:
mount --bind /var/lib/docker/volumes/test/_data /media/my_volume
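If you would rather avoid the manual mount --bind, you can instead declare a named volume that is backed by the host directory, using the local driver's bind option (a sketch; /media/my_volume must already exist before docker-compose up):
volumes:
  my_volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /media/my_volume
Both containers can then mount my_volume like any other named volume, and the data lives in /media/my_volume on the host.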

What is a docker-compose.yml file?

I can't find a real definition of what a docker-compose file is.
Is it correct to say this:
A docker-compose file is a YAML file that allows us to deploy multiples Docker containers at the same time.
I'd like to be able to explain a bit better what a docker-compose file is.
A docker-compose.yml is a config file for Docker Compose.
It allows you to deploy, combine, and configure multiple Docker containers at the same time. The Docker "rule" is to outsource every single process to its own Docker container.
Take, for example, a simple web application: you need a server, a database, and PHP. So you can set up three Docker containers with Apache2, PHP, and MySQL.
The advantage of Docker Compose is easy configuration. You don't have to write a big bunch of Bash commands; you can predefine everything in the docker-compose.yml:
db:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_DATABASE: example_db
    MYSQL_USER: root
    MYSQL_PASSWORD: rootpw
php:
  image: php
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./SRC:/var/www/
  links:
    - db
As you can see in my example, I define port forwarding, volumes for external data, and links to the other Docker container. It's fast, reproducible, and not that hard to understand.
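The whole stack can then be started and torn down with single commands:
docker-compose up -d   # create and start all containers in the background
docker-compose down    # stop and remove the containers again (named volumes are kept)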
The Docker Compose file format is formally specified, which enables docker-compose.yml files to be executed by tools other than Docker Compose, Podman for example.
Docker Compose is a tool that allows you to deploy and manage multiple containers at the same time.
A docker-compose.yml file contains instructions on how to do that.
In this file, you instruct Docker Compose for example to:
From where to take the Dockerfile to build a particular image
Which ports you want to expose
How to link containers
Which ports you want to bind to the host machine
Docker Compose reads that file and executes commands.
It replaces the long lists of optional parameters you would otherwise type when building and running each Docker container individually.
Example:
version: '2'
services:
  nginx:
    build: ./nginx
    links:
      - django:django
      - angular:angular
    ports:
      - "80:80"
      - "8000:8000"
      - "443:443"
    networks:
      - my_net
  django:
    build: ./django
    expose:
      - "8000"
    networks:
      - my_net
  angular:
    build: ./angular2
    links:
      - django:django
    expose:
      - "80"
    networks:
      - my_net
networks:
  my_net:
    external:
      name: my_net
This example instructs Docker Compose to:
Build nginx from the path ./nginx
Link the angular and django containers (so their IPs in the Docker network can be resolved by name)
Bind ports 80, 443, and 8000 to the host machine
Add it to the network my_net (so all 3 containers are in the same network and therefore reachable from each other)
Then something similar is done for the django and angular containers.
If you used plain Docker commands instead, it would be something like:
docker build -t nginx ./nginx
docker run --link django:django --link angular:angular -p 80:80 -p 8000:8000 -p 443:443 --net my_net nginx
So while you probably don't want to type all these options and commands for each image/container, you can write a docker-compose.yml file in which you write all these instructions in a human-readable format.
