docker-compose volumes syntax for local driver to mount a file - docker

I am attempting to mount a file (nginx.conf) in the volumes section of my docker-compose.yml. I can mount a directory as a volume without issue, however, it is not clear to me what the syntax is for a file.
I have the following defined in my volumes section
volumes:
  roundcubeweb:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/docker/volumes/roundcube/html
  nginxconf:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/docker/volumes/roundcube/nginx.conf
Later on, I have the following under my services section
nginx:
  image: nginx:latest
  deploy:
    replicas: 1
  restart: always
  depends_on:
    - roundcube
  ports:
    - "8082:80"
  volumes:
    - roundcubeweb:/var/www/html
    - nginxconf:/etc/nginx/conf.d/default.conf
When I attempt to start this up, I receive the following error:
ERROR: for roundcube_nginx_1  Cannot create container for service nginx: failed to mount local volume: mount /mnt/docker/volumes/roundcube/nginx.conf:/var/lib/docker/volumes/roundcube_nginxconf/_data, flags: 0x1000: not a directory
ERROR: for nginx  Cannot create container for service nginx: failed to mount local volume: mount /mnt/docker/volumes/roundcube/nginx.conf:/var/lib/docker/volumes/roundcube_nginxconf/_data, flags: 0x1000: not a directory
ERROR: Encountered errors while bringing up the project.
I've found that if I inline the file location in the nginx service section's volume declaration then it works just fine. For example:
volumes:
  - roundcubeweb:/var/www/html
  - /mnt/docker/volumes/roundcube/nginx.conf:/etc/nginx/conf.d/default.conf
Can one not mount a file in the volumes section? Is there a reference for the local driver's parameters?

I don't think it is possible to have a standalone Docker volume point to a specific file rather than a directory. It seems to be a consequence of how Docker manages and mounts volumes (I tried to find documentation or the relevant piece of code, but could not find any).
The local driver's parameters appear to take the same options as the Linux mount command. This is implied, though not stated very clearly, in the Docker volume documentation:
the local driver accepts mount options as a comma-separated list in the o parameter
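For example (a sketch only; the server address and export path below are placeholders, not taken from the question), several mount options can be combined in o: exactly as they would appear on a mount command line:
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      # comma-separated mount options, passed through to mount(8)
      o: addr=192.168.1.10,rw,soft,nolock
      device: ":/exports/media"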
I've found that if I inline the file location in the nginx service section's volume declaration then it works just fine. For example:
volumes:
  - roundcubeweb:/var/www/html
  - /mnt/docker/volumes/roundcube/nginx.conf:/etc/nginx/conf.d/default.conf
In your example you are doing a bind mount rather than using a volume; bind mounts are handled differently by Docker and are better suited to your use case of mounting a single local file into the container. I think a bind mount is the appropriate solution in your case.
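If you prefer Compose's long-form volume syntax (available in compose file format 3.2 and later), the same bind mount can also be written explicitly in the service definition. A minimal sketch using the paths from the question:
services:
  nginx:
    image: nginx:latest
    volumes:
      - roundcubeweb:/var/www/html
      # bind-mount a single file from the host into the container
      - type: bind
        source: /mnt/docker/volumes/roundcube/nginx.conf
        target: /etc/nginx/conf.d/default.conf
        read_only: true   # optional; nginx only needs to read its config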

Related

I am trying to connect to an SMB share via Docker Compose and it will not give write permissions

I am new to Docker. I am attempting to run a Docker server on a Raspberry Pi and have the volumes point to a network share so I can avoid keeping persistent files directly on the Pi. I am using Docker Compose and am attempting to mount an SMB share from my Unraid server into the Docker containers using volumes.
This is how I have tried mounting the SMB shares:
volumes:
  downloads:
    driver_opts:
      type: cifs
      o: username=COOLUSERNAMEHERE,password=SUPERSECRETPASSWORD,vers=3
      device: //192.168.0.110/downloads
I then mount this volume inside the container as follows:
services:
  myservicename:
    image: theplaceigetmyimagefrom
    container_name: myservice
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Toronto
    volumes:
      - downloads:/downloads
    ports:
      - 1234:1234
    restart: unless-stopped
This volume mounts just fine, however it mounts as read-only. No matter what I do I cannot get it to have write permissions. Again, I am new to Docker, so I am probably trying the wrong things. I have tried adding privileged: true to the container, but that didn't change anything. The SMB share I am connecting to is set to public and shared, so there shouldn't be any user issues; I have also logged in with a user that has read/write access to that folder as set up in Unraid itself.
Any ideas on next steps would be appreciated.
Figured this out. I changed the SMB version to 3.0 (not sure if that did anything at all) and added the uid, gid, and rw options:
volumes:
  downloads:
    driver_opts:
      type: cifs
      o: username=COOLUSERNAMEHERE,password=SUPERSECRETPASSWORD,uid=1000,gid=1000,vers=3.0,rw
      device: //192.168.0.110/downloads
It now connects with write access, as expected.

Mount OpenMediaVault NFS in docker-compose.yml volume for Docker Swarm

I am trying to externalise the runtime data from my applications so it is saved in an OpenMediaVault (OMV) shared folder.
I was able to create the shared folder and configure NFS, or at least I think so. The config I see in OMV/Services/NFS/Shares is:
Shared folder: NasFolder [on /dev/sda1, nas/]
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: subtree_check,insecure
Now in that shared folder I have this structure (I checked it using the Windows SMB/CIFS share):
\\nfs-ip\NasFolder
|- mysql
| \- some my sql folders...
|- TEST.txt
I want to use this mysql folder to store MariaDB runtime data (I know the names are messed up; I am in the middle of a migration to MariaDB) and maybe create some other folders for other services. This is my config from docker-compose.yml:
version: '3.2'
services:
  mysqldb:
    image: arm64v8/mariadb:latest
    ports:
      - 3306:3306
    restart: on-failure:3
    volumes:
      - type: volume
        source: nfs-volume
        target: /mysql
        volume:
          nocopy: true
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw
    command: --character-set-server=utf8 --collation-server=utf8_general_ci
volumes:
  nfs-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/NasFolder"
Now when I run docker stack deploy -c docker-compose.yml --with-registry-auth maprealm on my manager node, I get an error on maprealm_mysqldb.1 that looks like this:
"Err": "starting container failed: error while mounting volume '/var/lib/docker/volumes/maprealm_nfs-volume/_data': failed to mount local volume: mount :/NasFolder:/var/lib/docker/volumes/maprealm_nfs-volume/_data, data: addr=192.168.50.70,nolock,soft: permission denied",
I am pretty new to integration stuff. This is my home server and I just can't find good tutorials that get through my thick skull how to configure those NFS paths and permissions, or at least how to debug this beyond just getting the error. I know that volumes.nfs-volume.driver_opts.device is supposed to be a path, but I am not sure what that path should be.
I was trying to adapt config from here: https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Edit 1) A few additional details:
The Docker swarm has 3 nodes; only one node is a manager, with availability set to pause.
OMV is running on a separate machine that is not part of the cluster.
OK, so in case someone is looking for the solution:
OMV serves NFS shares from under /export/ by default, so the volume needed to be updated. I updated the volume for mysql: volumes.mysql-volume.driver_opts.device now includes that /export/ prefix, and I also added the path to the mysql folder so this volume is used only by the mysqldb service:
volumes:
  mysql-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/export/NasFolder/mysql"
After those changes, the volume config on the mysql/mariadb service needed to be updated:
mysqldb:
  image: arm64v8/mariadb:latest
  ports:
    - 3306:3306
  restart: on-failure:3
  volumes:
    - type: volume
      source: mysql-volume
      target: /var/lib/mysql
      volume:
        nocopy: true
  environment:
    - MYSQL_ROOT_PASSWORD=my-secret-pw
  command: --character-set-server=utf8 --collation-server=utf8_general_ci
mysqldb.volumes.source points to the name of the volume defined in step 1: mysql-volume.
mysqldb.volumes.target is where the runtime data is stored inside the container. For mysql/mariadb the runtime data lives in /var/lib/mysql, so you want to point to that, and you can only use an absolute path.
Since I had used the default OMV config, there were problems with permissions, so I updated OMV/Services/NFS/Shares to this:
Shared folder: NasFolder [on /dev/sda1, nas/]
# here you can see the note 'The location of the files to share. The share will be accessible at /export/.'
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: rw,sync,no_root_squash,anonuid=1000,anongid=1000,no_acl

Store Docker volume on external hard drive

I am trying to store the data of my container on an 'external hard drive' (/dev/xvdd) that is mounted at /mnt/datadbs.
My docker-compose.yml looks like this:
version: "3":
services:
...
volumes:
prometheus-data:
driver: local
driver_opts:
type: btrfs
device: /mnt/dataebs
When I start the container, I am getting the following error:
ERROR: for prometheus Cannot create container for service prometheus: failed to mount local volume: mount /mnt/dataebs:/var/lib/docker/volumes/ubuntu_prometheus-data/_data: block device required
Can someone point me in the right direction? Eventually, I want to be able to store several docker volumes on the 'external hard drive'.
Try changing your named volume declaration to a bind mount (type: none with o: bind) instead of "btrfs".
So it would be something like this:
volumes:
  prometheus-data:
    driver: local
    driver_opts:
      type: none
      device: /mnt/dataebs
      o: bind
You can also bind mount directly in your service declaration, so something like this:
app:
  image: appimage
  ports:
    - "8080:8080"
  volumes:
    - /mnt/dataebs:/container/path

Volume device disk mount docker compose error - not a directory

I need to mount a disk using docker-compose.
Currently I can do it using docker service create, like this:
docker service create -d \
--name my-service \
-p 8888:80 \
--mount='type=volume,dst=/usr/local/apache2/htdocs,volume-driver=local,volume-opt=type=xfs,volume-opt=device=/dev/sdd' \
httpd
My docker-compose.yml looks like this:
version: '3.8'
services:
  my-service:
    image: httpd
    ports:
      - '80:80'
    volumes:
      - type: volume
        source: my-vol
        target: /usr/local/apache2/htdocs
volumes:
  my-vol:
    driver: local
    driver_opts:
      type: xfs
      o: bind
      device: '/dev/sdd'
When I bring the service up with docker-compose up I get the error:
"failed to mount local volume: mount /dev/sdd:/var/lib/docker/volumes/apache_my-vol/_data, flags: 0x1000: not a directory"
How can I configure my docker-compose.yml to be able to mount a disk?
Sorry for my bad English.
The o: line matches options you'd pass to the ordinary Linux mount(8) command. o: bind manually creates a bind mount that attaches part of the filesystem somewhere else; this is the mechanism behind Docker's bind-mount system. The important thing here is that it causes the device: to be interpreted as a (host-system) directory to be remounted, and not as a device.
Since you're trying to mount a physical device, you don't need the o: bind option. If you delete that option, the device: will be interpreted as a device name and you'll be able to mount your disk.
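A minimal sketch of the corrected volume definition, assuming /dev/sdd really does contain an XFS filesystem:
volumes:
  my-vol:
    driver: local
    driver_opts:
      type: xfs
      # with o: bind removed, device: is treated as a block device again
      device: '/dev/sdd'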

Docker Named Volumes

What's the right way to mix named volumes with and without a local host path in Docker Compose v3?
This way I'm getting YML error:
volumes:
  /mnt/volume-part1:/volume
  conf:
  vhost:
  html:
  certs:
Then I'd like to refer to the volumes inside containers...
For named volumes, you need to declare the volume name under the dedicated top-level volumes section of the compose file. For a bind mount, you don't declare it in that section:
Consider the following compose file:
version: "3"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
- /mnt/volume-part1:/volume
volumes:
data-volume:
As you can see, the named volume data-volume needs to be declared in the volumes section before being assigned to the container.
The directory bind mount, on the other hand, is mounted directly into the container.
UPDATE
If you don't want to repeat the host machine path in every container, you can use a trick to specify exactly where the named volume's data lives on the host, like so:
version: "3"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
- volume-part1:/volume
volumes:
data-volume:
volume-part1:
driver_opts:
type: none
device: /mnt/volume-part1
o: bind
As you can see above, we have created a named volume volume-part1 and specified which host directory backs this volume.
