Folders for some Docker named volumes are empty?

I have a few Docker named volumes set up like this:
---
version: '3.8'
services:
  broker:
    ...
    volumes:
      - volume_kafka:/var/lib/kafka
  materialize:
    ...
    volumes:
      - volume_mzdata:/var/lib/mzdata
volumes:
  volume_kafka:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/kafka
  volume_mzdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/mzdata
When I go to /var/lib/mzdata on the host, it has the same contents as /var/lib/mzdata in the container. If I docker-compose down, then docker-compose up, everything in /var/lib/mzdata is the same as before.
However, /var/lib/kafka on the host is missing all the files from /var/lib/kafka in the container. If I docker-compose down, then docker-compose up, some data is missing and there's a bunch of errors. It's not clear whether it reused /var/lib/kafka (but the files are invisible to me) or whether it's storing the data elsewhere.
If the files are in /var/lib/kafka on the host but invisible to me, how can I view them? sudo didn't work. If they're stored elsewhere, how can I find them?
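One way to answer both questions is to ask Docker what it actually created. A minimal sketch using the standard Docker CLI (the volume and container names are generated by Compose with your project's prefix, so adjust them to whatever docker volume ls / docker ps report):

docker volume ls                                    # list the volumes Compose created
docker volume inspect <project>_volume_kafka        # check the Options field: does it really show type=none, o=bind, device=/var/lib/kafka?
docker inspect --format '{{ json .Mounts }}' <broker-container>   # source and destination of every mount in the running container

If the volume was first created by an earlier run with different settings, Compose reuses it as-is, so the Options field may not match the current compose file; in that case the data ends up under /var/lib/docker/volumes/<project>_volume_kafka/_data instead of /var/lib/kafka.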

Related

Mount OpenMediaVault NFS in docker-compose.yml volume for Docker Swarm

I am trying to externalise my applications' runtime data so it is saved in an OpenMediaVault shared folder.
I was able to create the shared folder and configure NFS, or at least I think so. The config I see in OMV/Services/NFS/Shares is:
Shared folder: NasFolder[on /dev/sda1, nas/]
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: subtree_check,insecure
Now in that shared folder I have this structure (I checked it using the Windows SMB/CIFS config):
\\nfs-ip\NasFolder
|- mysql
| \- some my sql folders...
|- TEST.txt
I want to use this mysql folder to store MariaDB runtime data (I know the names are messed up; I am in the middle of a migration to Maria...), and maybe create some other folders for other services. This is my config from docker-compose.yml:
version: '3.2'
services:
  mysqldb:
    image: arm64v8/mariadb:latest
    ports:
      - 3306:3306
    restart: on-failure:3
    volumes:
      - type: volume
        source: nfs-volume
        target: /mysql
        volume:
          nocopy: true
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw
    command: --character-set-server=utf8 --collation-server=utf8_general_ci
volumes:
  nfs-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/NasFolder"
Now when I run docker stack deploy -c docker-compose.yml --with-registry-auth maprealm on my manager node, I get an error on maprealm_mysqldb.1 that looks like this:
"Err": "starting container failed: error while mounting volume '/var/lib/docker/volumes/maprealm_nfs-volume/_data': failed to mount local volume: mount :/NasFolder:/var/lib/docker/volumes/maprealm_nfs-volume/_data, data: addr=192.168.50.70,nolock,soft: permission denied",
I am pretty new to integration stuff. This is my home server and I just can't find good tutorials that 'get through my thick skull' how to configure those NFS paths and permissions, or at least how I can debug it beyond just getting this error. I know that volumes.nfs-volume.driver_opts.device is supposed to be a path, but I am not sure what that path should be.
I was trying to adapt config from here: https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Edit 1) A few additional details:
The Docker swarm has 3 nodes and only one node is a manager, with availability pause.
OMV is running on a separate machine that is not part of the cluster.
OK, so in case someone is looking for the solution:
OMV by default exports under /export/, so the volume needed to be updated. I updated volumes.mysql-volume.driver_opts.device to include that /export/ prefix, and I also added the path to the mysql folder so the volume is used only by the mysqldb service:
volumes:
  mysql-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/export/NasFolder/mysql"
After those changes, the volume config on the mysql/mariadb service needed to be updated as well:
mysqldb:
  image: arm64v8/mariadb:latest
  ports:
    - 3306:3306
  restart: on-failure:3
  volumes:
    - type: volume
      source: mysql-volume
      target: /var/lib/mysql
      volume:
        nocopy: true
  environment:
    - MYSQL_ROOT_PASSWORD=my-secret-pw
  command: --character-set-server=utf8 --collation-server=utf8_general_ci
mysqldb.volumes.source points to the name of the volume defined in step 1, mysql-volume.
mysqldb.volumes.target is where the runtime data is stored inside the container. mysql/mariadb stores its runtime data in /var/lib/mysql, so you want to point to that, and you can only use the full path.
Since I used the default OMV config, there were problems with permissions, so I updated OMV/Services/NFS/Shares to this:
Shared folder: NasFolder[on /dev/sda1, nas/]
#here you can see the note 'The location of the files to share. The share will be accessible at /export/.'
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: rw,sync,no_root_squash,anonuid=1000,anongid=1000,no_acl
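If you hit the same permission-denied error, one way to debug it outside of Docker (my suggestion, not part of the original answer) is to mount the export by hand from one of the swarm nodes; the IP and share name below are the ones from the question:

showmount -e 192.168.50.70        # lists what the server exports; this is how you'd spot the /export/ prefix
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nolock,soft,rw 192.168.50.70:/export/NasFolder /mnt/nfs-test
ls /mnt/nfs-test                  # you should see the mysql folder and TEST.txt
sudo umount /mnt/nfs-test

If the manual mount fails the same way, the problem is in the NFS export options rather than in the compose file.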

Store Docker volume on external hard drive

I am trying to store the data of my container on an 'external hard drive' (/dev/xvdd) that is mounted at /mnt/datadbs.
My docker-compose.yml looks like this:
version: "3"
services:
  ...
volumes:
  prometheus-data:
    driver: local
    driver_opts:
      type: btrfs
      device: /mnt/dataebs
When I start the container, I am getting the following error:
ERROR: for prometheus Cannot create container for service prometheus: failed to mount local volume: mount /mnt/dataebs:/var/lib/docker/volumes/ubuntu_prometheus-data/_data: block device required
Can someone point me in the right direction? Eventually, I want to be able to store several docker volumes on the 'external hard drive'.
Try changing your named volume declaration to a bind mount (type: none with o: bind) instead of type "btrfs".
So it would be something like this:
volumes:
  prometheus-data:
    driver: local
    driver_opts:
      type: none
      device: /mnt/dataebs
      o: bind
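For context on why the original attempt failed (my reading, not something stated in the answer): with the local driver, type: btrfs is passed straight to mount as a filesystem type, so device: must then be a block device such as /dev/xvdd, not a directory that is already mounted. You can check what you actually have with standard Linux tools:

lsblk -f /dev/xvdd        # shows the filesystem on the block device
findmnt /mnt/datadbs      # confirms the directory is a mount point and shows its fstype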
You can also bind mount directly in your service declaration, so something like this:
app:
  image: appimage
  ports:
    - "8080:8080"
  volumes:
    - /mnt/dataebs:/container/path

How to prevent a bind-mounted directory from being overridden by the container's data

My docker-compose.yml is like below, where I have mounted my local dir code/drupal to /var/www/example.com. My container creates some temporary cache files inside /var/www/example.com/temp/. I want to map everything bidirectionally between container and host, but excluding the temp dir. In fact, I don't want the contents of that temp dir to be synchronized with my host machine at all.
version: "3.3"
services:
  nginx:
    build: ./docker/nginx
    volumes:
      - drupal:/var/www/example.com
volumes:
  drupal:
    driver: local
    driver_opts:
      type: none
      device: $PWD/code/drupal
      o: bind
Create a symlink from the problem directory to a non-volumed area in your docker container.
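A minimal sketch of that idea (my interpretation of the answer, not code from it): since the whole /var/www/example.com tree is bind-mounted, the symlink has to be created after the mount exists, e.g. from the container's entrypoint, and it should point at a directory that only exists inside the container (the /var/cache/drupal-temp location is an arbitrary choice for illustration):

# run at container start (e.g. in an entrypoint script), after the bind mount is in place
mkdir -p /var/cache/drupal-temp
rm -rf /var/www/example.com/temp
ln -s /var/cache/drupal-temp /var/www/example.com/temp

The symlink itself is visible on the host inside code/drupal, but it points at /var/cache/drupal-temp, which only exists in the container, so the cached files never reach the host.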

docker-compose: previous volume setting overrides changes

I've been using a host directory as my data volume for a postgresql container. My docker-compose.yml reads something like this.
postgresql:
  image: postgres
  ports:
    - "5432:5432"
  container_name: postgresql
  networks:
    - mynet
  volumes:
    - pg-data:/var/lib/postgresql/data
volumes:
  pg-data:
    driver_opts:
      type: none
      device: /volumes/pgdata
      o: bind
As we wanted to consolidate all data into a single encrypted volume, I decided to remap the volume for this container.
volumes:
  pg-data:
    driver_opts:
      type: none
      device: /volumes/data/pgdata
      o: bind
I know that I need to remove the containers of this image and recreate the image. I removed all containers and images, yet docker-compose still seems to remember my old setting somewhere: when I try to rebuild (docker-compose up --build postgresql), I get the following error.
ERROR: for postgresql Cannot create container for service postgresql: error while mounting volume with options: type='none' device='/volumes/pgdata' o='bind': no such file or directory
It's still trying to access the old volume definition, whereas my new docker-compose.yml has no reference to that directory.
I'd appreciate some help resolving this. Am I missing a step here?
This turned out to be an issue in docker-compose, as per this post, and it is scheduled to be fixed in the next release. Meanwhile, the workaround suggested there works.
In the meantime, you should be able to work around the issue by removing the existing volume (either docker volume rm -f <project>_pg-data or docker-compose down -v).
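For completeness, a sketch of the full sequence (the <project> prefix is whatever Compose uses for your project, typically the directory name; check with docker volume ls):

docker-compose down
docker volume ls                        # find the exact name of the stale volume
docker volume rm -f <project>_pg-data   # drop the volume that still carries the old bind options
mkdir -p /volumes/data/pgdata           # make sure the new host path exists
docker-compose up --build postgresql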

How to set a path on host for a named volume in docker-compose.yml

The example below creates a dbdata named volume and references it inside the db service:
version: '2'
services:
  db:
    image: mysql
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
    driver: local
(from https://stackoverflow.com/a/35675553/4291814)
I can see the path for the volume defaults to:
/var/lib/docker/volumes/<project_name>_dbdata
My question is how to configure the path on host for the dbdata volume?
With the local volume driver comes the ability to use arbitrary mounts; by using a bind mount you can achieve exactly this.
For setting up a named volume that gets mounted into /srv/db-data, your docker-compose.yml would look like this:
version: '2'
services:
  db:
    image: mysql
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/srv/db-data'
I have not tested it with version 2 of the compose file format, but https://docs.docker.com/compose/compose-versioning/#version-2 does not indicate that it should not work.
I've also not tested it on Windows...
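One caveat worth adding (based on how the local driver behaves, not something stated in the answer above): with type: none and o: bind, the host directory must already exist before docker-compose up, otherwise the mount fails with a 'no such file or directory' error like the one in the postgresql question above:

mkdir -p /srv/db-data
docker-compose up -d db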
The location of named volumes is managed by docker; if you want to specify the location yourself, you can either "bind mount" a host directory, or use a volume plugin that allows you to specify a path.
You can find some details in another answer I posted recently: https://stackoverflow.com/a/36321403/1811501
