Example (many options omitted for brevity):
version: "3"
volumes:
traefik:
driver: local
driver_opts:
type: nfs
o: "addr=192.168.1.100,soft,rw,nfsvers=4,async"
device: ":/volume/docker/traefik"
services:
traefik:
volumes:
- traefik/traefik.toml:/traefik.toml
This errors out because there is no volume named traefik/traefik.toml, which suggests the source must be exactly the volume name (i.e. you can't append a path to the volume name)?
Trying to set device: ":/volume/docker/traefik/traefik.toml" just returns a "not a directory" error.
Is there a way to take a single file and mount it into a container?
You cannot mount a file or sub-directory within a named volume; the source is either the named volume or a host path. NFS itself, along with most filesystems you'd mount in Linux, requires you to mount an entire filesystem, not a single file, and when you get down to the inode level this is often a really good thing.
The remaining options I can think of are to mount the entire directory somewhere else inside your container and symlink to the file you want, or to NFS mount the directory on the host and do a host mount (bind mount) of the specific file.
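As a rough sketch of that second option: assuming the NFS export has already been mounted on each host at some path, say /mnt/nfs/traefik (a hypothetical location, not taken from your compose file), the service could bind mount just the single file:

services:
  traefik:
    volumes:
      - /mnt/nfs/traefik/traefik.toml:/traefik.toml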
However, considering the example you presented, using a docker config would be my ideal solution: it removes the NFS mount entirely and gives you a read-only copy of the file that is automatically distributed to whichever node is running the container.
More details on configs: https://docs.docker.com/engine/swarm/configs/
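For reference, a minimal sketch of the config approach in a stack file, assuming traefik.toml sits next to the stack file and using illustrative names (configs require compose file version 3.3 or later):

version: "3.3"

configs:
  traefik_toml:
    file: ./traefik.toml

services:
  traefik:
    image: traefik
    configs:
      - source: traefik_toml
        target: /traefik.toml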
I believe I found the issue!
Wrong:
volumes:
  - traefik/traefik.toml:/traefik.toml
Correct:
volumes:
  - /traefik/traefik.toml:/traefik.toml
Start the volume source with "/".
Related
I need to store the project on a mounted drive, with the docker-compose file placed at the root of that drive. I want to use docker volumes as named volumes, not as bind-mounted folders, but I do not want to store them physically in the default location (/var/lib/docker/volumes), because the system drive is small. How can I set up named volumes in docker-compose and ask Docker to create the volumes in the same place where the docker-compose file is located?
Recommended: If the system drive is small, you should move the Docker data-root to some other location (see the sketch below). This keeps the compose file the same across all users/systems.
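For example, on a typical Linux install that means setting data-root in /etc/docker/daemon.json (the target path below is just an illustration):

{
  "data-root": "/mnt/bigdrive/docker"
}

and then restarting the daemon, e.g. sudo systemctl restart docker.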
Not Recommended: To place a named volume at a desired location, you can specify driver_opts for the volume driver.
e.g. for the local driver, declare top-level volumes in the compose file as:
volumes:
  my_volume:
    driver: local
    driver_opts:
      type: none
      device: "/full/host/path/"
      o: bind
Note that this path is then baked into the compose file for every system it is used on, whether that system's drive is small or large.
I am trying to use an NFS share with docker swarm so that a single server provides the storage. The NFS share itself does work, as I can create files on it, but when I try to use it in a stack I get a bind source path error. The NFS share is mounted at /container on both machines, so each machine can find it at the same location. Here is what I have as volumes in my docker compose file:
volumes:
  - /container/tdarr-server/server:/app/server
  - /container/tdarr-server/configs:/app/configs
  - /container/tdarr-server/logs:/app/logs
  - /container/plex/media:/media
  - /container/tdarr-server/transcode:/transcode
There are two ways to do this:
On each server, mount an NFS share. Assuming you have an NFS server sharing a volume "docker_volumes", you could mount that as "/mnt/volumes".
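For example (the server address and export name here are assumptions, not taken from your setup):

# on every node that may run the service
sudo mkdir -p /mnt/volumes
sudo mount -t nfs 10.40.0.199:/docker_volumes /mnt/volumes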
Then your stack file could look like this:
version: "3.9"
volumes:
prometheus:
driver: local
driver_opts:
o: bind
type: none
device: /mnt/volumes/prometheus-data
services:
prometheus:
image: prom/prometheus:latest
volumes:
- prometheus:/data
NB: Docker will NOT create missing directories for you. Each missing directory (e.g. ./prometheus-data) needs to be created manually on the NFS share before Docker will start the service.
As an alternative, rather than pre-mounting the NFS volume in a defined location, you can provide docker with the NFS connection details so it can mount the NFS share on the fly:
volumes:
  data:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
Again, if you provide a path into the NFS share as part of the device description, docker will not create it if it does not exist. The admin must pre-create any specific sub-folders referenced before services or containers will be able to use the volume definition.
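For example, for the device ":/docker/example" above, something like the following would be needed beforehand (assuming the export root corresponds to /docker on the NFS server):

# run on the NFS server, or on any host that has the export mounted
mkdir -p /docker/example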
Might sound weird, but I am trying to get docker-compose to mount directories to a volume I created
volumes:
  - backend-data:/app/migrations/autogen-migrations
  - backend-data:/app/seeds/autogen-seeds
  - backend-data:/app/server/public
  - backend-data:/app/server/src/services/location
The only problem is that instead of simply mapping the folders into the volume, it mounts the volume's content inside each of them. Is there any way to tell docker-compose to copy/map the folder itself?
Edit:
Already tried doing
backend-data/autogen-migrations:/app/migrations/autogen-migrations
And I get the following error:
Named volume "backend-data/autogen-migrations:/app/migrations/autogen-migrations:rw" is used in service "backend" but no declaration was found in the volumes section.
Btw, this is how my volumes are declared:
volumes:
  backend-data:
    driver: local
When you need to mount a local directory as a Docker volume, use this syntax in your service:
volumes:
  - ./your/local/path:/app/migrations/autogen-migrations
This way, Docker creates a local path "./your/local/path" in the same folder where the docker-compose.yml file is, to store the data of the mounted volume. In this case you don't need to specify the volumes section in docker-compose.yml, because you manage the volumes yourself.
If you need to mount more than one folder, remember to mount a separate local folder for each one:
volumes:
  - ./your/local/migrations/or/whatever/you/want:/app/migrations/autogen-migrations
  - ./your/local/seeds:/app/seeds/autogen-seeds
  - ./your/local/server/public:/app/server/public
  - ./your/local/server/src:/app/server/src/services/location
You can also aggregate the mounted folders under the same parent folder:
volumes:
  - ./your/local/migrations:/app/migrations/autogen-migrations
  - ./your/local/seeds:/app/seeds/autogen-seeds
  - ./your/local/server:/app/server
In this case you will find all of the '/app/server/' content inside './your/local/server'.
If you use the syntax:
volumes:
  backend-data:
    driver: local
you tell Docker: "Docker, please mount the folder that corresponds to backend-data (in your case backend-data:/app/migrations/autogen-migrations) wherever you like and store my data!". In this case, Docker manages the "local" folder by itself, without using the folder where the docker-compose.yml file is.
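If you want to see where Docker is keeping that data, docker volume inspect will show you; note that compose prefixes the volume name with the project name, so the name below is only illustrative:

$ docker volume inspect --format '{{ .Mountpoint }}' myproject_backend-data
/var/lib/docker/volumes/myproject_backend-data/_data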
How does mixing named volumes and bind mounts work? Using the following setup, will the paths that are bind mounted still be available inside the named volume, with the content they have in the bind mounts?
/var/www/html/wp-content/uploads
Using a separate container attached to the named volume seems to show that this is not the case, as those paths are completely empty from the view of the separate container. Is there a way to make this work?
volumes:
  - "wordpress:/var/www/html"
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
  - "./wordpress/plugins:/var/www/html/wp-content/plugins"
  - "./wordpress/themes:/var/www/html/wp-content/themes"
Host volumes: For a host volume, defined with a path in your docker compose file like:
volumes:
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
you will not receive any initialization of the host directory from the image contents. This is by design.
Named volumes: You can define a named volume that maps back to a local directory:
version: "2"
services:
your-service:
volumes:
- uploads:/var/www/html/wp-content/uploads
volumes:
uploads:
driver: local
driver_opts:
type: none
o: bind
device: /path/on/host/to/wordpress/uploads
This will provide the initialization properties of a named volume. When your host directory is empty, on container creation docker will copy the contents of the image at /var/www/html/wp-content/uploads to /path/on/host/to/wordpress/uploads.
Nested mounts with Docker: If you have multiple nested volume mounts, docker will still copy from the image directory contents, not from a parent volume.
Here's an example of that initialization. Starting with the filesystem:
testvol/
  data-image/
    sub-dir/
      from-image
  data-submount/
  Dockerfile
  docker-compose.yml
The Dockerfile contains:
FROM busybox
COPY data-image/ /data
The docker-compose.yml contains:
version: "2"
services:
test:
build: .
image: test-vol
command: find /data
volumes:
- data:/data
- subdir:/data/sub-dir
volumes:
data:
subdir:
driver: local
driver_opts:
type: none
o: bind
device: /path/on/host/test-vol/data-submount
And the named volume has been initialized:
$ docker run -it --rm -v testvol_data:/data busybox find /data
/data
/data/sub-dir
/data/sub-dir/from-named-vol
Running the test shows the copy comes from the image (from-image) rather than from the parent named volume (from-named-vol):
$ docker-compose -f docker-compose.bind.yml up
...
Attaching to testvol_test_1
test_1 | /data
test_1 | /data/sub-dir
test_1 | /data/sub-dir/from-image
testvol_test_1 exited with code 0
And docker has copied this to the host filesystem:
$ ls -l data-submount/
total 0
-rw-r--r-- 1 root root 0 Jan 15 08:08 from-image
Nested mounts in Linux: From your question, there appears to be some confusion about how a mount itself works in Linux. Each volume mount runs in the container's mount namespace. This namespace gives the container its own view of a filesystem tree. When you mount a volume into that tree, you do not modify the contents of the parent filesystem; it simply covers up the contents of the parent at that location. All changes happen directly in the newly mounted directory, and if you were to unmount it, the parent directory would then be visible in its original state.
Therefore, if you mount two nested directories in one container, e.g. /data and /data/a, and then mount /data in a second container, you will not see /data/a from your first container in the second container; only the contents of the /data mount will be there, including any folders that were covered up by the nested mount.
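A quick way to see this shadowing behaviour on a Linux host, outside of docker entirely (throwaway paths, run as root):

mkdir -p /tmp/demo/a
touch /tmp/demo/a/original
mount -t tmpfs tmpfs /tmp/demo/a   # covers /tmp/demo/a with an empty filesystem
ls /tmp/demo/a                     # empty: 'original' is hidden, not deleted
umount /tmp/demo/a
ls /tmp/demo/a                     # 'original' is visible again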
I believe the answer is to configure bind propagation.
Will report back.
Edit: It seems you can only configure bind propagation on bind-mounted volumes, and only on a Linux host system.
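For anyone who wants to experiment anyway, propagation is set through compose's long-form volume syntax (compose file format 3.2+); this is only a sketch of the syntax, not a confirmed fix for the nesting issue:

services:
  your-service:
    volumes:
      - type: bind
        source: ./wordpress/uploads
        target: /var/www/html/wp-content/uploads
        bind:
          propagation: rshared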
I've tried to get this to work for hours, but I've come to the conclusion that it just won't. My case was adding a specific plugin to a CMS as a volume for local development. I want to post this here because I haven't come across this workaround anywhere.
So the following would suffer from the overlapping volumes issue, causing the folders to be empty.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/var/www/html/wp-content/plugins
      - ./wordpress/themes:/var/www/html/wp-content/themes
This is how you avoid that: bind your themes and plugins to a different directory, one that is not inside /var/www/html.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/tmp/plugins
      - ./wordpress/themes:/tmp/themes
But now you have to get these files in the correct place, and have them still be in sync with the files on your host.
Simple version
Note: These examples assume you have a shell script as your entrypoint.
In your Docker entrypoint:
#!/bin/bash
ln -s /tmp/plugins/my-plugin /var/www/html/wp-content/plugins/my-plugin
ln -s /tmp/themes/my-theme /var/www/html/wp-content/themes/my-theme
This should work as long as your system/software resolves symlinks.
More modular solution
I only wrote this for plugins, but you could process themes the same way. This finds all plugins in the /tmp/plugins folder and symlinks them to /var/www/html/wp-content/plugins/<plugin>, without you having to write hard-coded folder/plugin names in your script.
#!/bin/bash
TMP_PLUGINS_DIR="/tmp/plugins"
CMS_PLUGINS_DIR="/var/www/html/wp-content/plugins"

# Loop through all paths in the /tmp/plugins folder.
for path in "$TMP_PLUGINS_DIR"/*/; do
  # Ignore anything that's not a directory.
  [ -d "${path}" ] || continue
  # Get the plugin name from the path.
  plugin="$(basename "${path}")"
  # Symlink the plugin to the real plugins folder.
  ln -sf "$TMP_PLUGINS_DIR/$plugin" "$CMS_PLUGINS_DIR/$plugin"
  # Anything else you might need to do for each plugin, like installing/enabling it in your CMS.
done
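One extra note, as an assumption about how you wire this in: if the script above is the container's entrypoint, it usually needs to hand control back to the image's original command at the end, e.g.:

# after the symlinking loop
exec "$@"   # run whatever command/arguments the container was started with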
I need to make a named volume use a relative path to the folder where the docker-compose command is executed.
Here is the volume definition in the docker-compose.yml
volumes:
  esdata1:
    driver: local
    driver_opts:
      type: none
      device: ./esdata1
      o: bind
It seems that docker-compose does not create the folder if it does not exist, but even when the folder is created before launching docker I always get this error:
ERROR: for esdata Cannot create container for service esdata: error while mounting volume with options: type='none' device='./esdata1' o='bind': no such file or directory
NOTE: This may be silly, but esdata is the service that uses the named volume:
esdata:
  ...
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ...
What am I missing here?
Maybe the relative path ./ does not point to the folder where docker-compose is executed (I've tried ~/ to use a path relative to the user's home, but I got the same error).
Thanks in advance.
PS: If I use an absolute path, it works like a charm.
I encountered exactly the same issue. It seems you have done nothing wrong; this is just not yet implemented in Docker Compose: https://github.com/docker/compose/issues/6343
Not very nice for portability...
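A workaround some people use is compose's environment variable substitution instead of a true relative path; it assumes you always run compose from the project directory, on a shell that exports PWD:

volumes:
  esdata1:
    driver: local
    driver_opts:
      type: none
      device: ${PWD}/esdata1
      o: bind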
If you use a named volume as a bind mount like this, you must include the full path to the directory, e.g.:
volumes:
  esdata1:
    driver: local
    driver_opts:
      type: none
      device: /home/username/project/esdata1
      o: bind
That folder must also exist in advance. This is how the Linux bind mount syscall works: when you pass options like this, you are talking directly to Linux, without any path expansion by docker or compose.
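For example, with the placeholder path used above:

mkdir -p /home/username/project/esdata1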
If you just want to mount the directory from the host, a host volume's relative path will be expanded by compose:
esdata:
  ...
  volumes:
    - ./esdata1:/usr/share/elasticsearch/data
  ...
While the host volume is easier for portability since the path is automatically expanded, you lose the named-volume feature where docker initializes an empty named volume with the image contents (including file permissions/ownership).