Volume device disk mount docker-compose error - not a directory

I need to mount a disk using docker-compose.
Currently I can create the service using docker service create, this way:
docker service create -d \
--name my-service \
-p 8888:80 \
--mount='type=volume,dst=/usr/local/apache2/htdocs,volume-driver=local,volume-opt=type=xfs,volume-opt=device=/dev/sdd' \
httpd
My docker-compose.yml looks like this:
version: '3.8'
services:
  my-service:
    image: httpd
    ports:
      - '80:80'
    volumes:
      - type: volume
        source: my-vol
        target: /usr/local/apache2/htdocs
volumes:
  my-vol:
    driver: local
    driver_opts:
      type: xfs
      o: bind
      device: '/dev/sdd'
When I bring the service up with docker-compose up I get the error:
"failed to mount local volume: mount /dev/sdd:/var/lib/docker/volumes/apache_my-vol/_data, flags: 0x1000: not a directory"
How can I configure my docker-compose.yml to be able to mount a disk?
*Sorry for my bad English.

The o: line matches the options you'd pass to the ordinary Linux mount(8) command. o: bind manually creates a bind mount that attaches part of the filesystem somewhere else; this is the mechanism behind Docker's bind-mount system. The important effect here is that it causes device: to be interpreted as a (host-system) directory to be remounted, not as a device.
Since you're trying to mount a physical device, you don't need the o: bind option. If you delete that option, device: will be interpreted as a device name and you'll be able to mount your disk.
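A minimal corrected volume definition, sketched from the question's own /dev/sdd device, would then look like:

```yaml
volumes:
  my-vol:
    driver: local
    driver_opts:
      type: xfs
      device: '/dev/sdd'   # interpreted as a block device now that o: bind is gone
```

The rest of the compose file can stay as it was; only the o: bind line is removed.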

Related

Store all data on specific host filesystem using Docker-Compose [duplicate]

On my Ubuntu EC2 instance I host an application using Docker containers. DB data and upload data are stored in the volumes CaseBook-data-db and CaseBook-data-uploads, which are created with these commands:
docker volume create --name=CaseBook-data-db
docker volume create --name=CaseBook-data-uploads
The volumes are attached through the docker-compose file:
version: '2'
services:
  mongo:
    container_name: "CaseBook-db"
    restart: always
    image: mongo:3.2.7
    ports:
      - "27017"
    volumes:
      - data_db:/data/db
    labels:
      - "ENVIRONMENT_TYPE=meteor"
  app:
    container_name: "CaseBook-app"
    restart: always
    image: "meteor/casebook"
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo:27017/CaseBook
    ports:
      - "80:3000"
    volumes:
      - data_uploads:/Meteor-CaseBook-Container/.uploads
    labels:
      - "ENVIRONMENT_TYPE=meteor"
volumes:
  data_db:
    external:
      name: CaseBook-data-db
  data_uploads:
    external:
      name: CaseBook-data-uploads
I need to store those Docker volumes in a different folder (for example /home/ubuntu/data/) on the host machine. How do I change the Docker storage folder for volumes? Or is there a better way to do this? Thank you in advance.
Named volumes are stored inside Docker's folder (/var/lib/docker). If you want to create a volume in a specific host folder, use a host volume with the following syntax:
docker run -v /home/ubuntu/data/app-data:/app-data my-image
Or from your compose file:
version: '2'
services:
  mongo:
    container_name: "CaseBook-db"
    restart: always
    image: mongo:3.2.7
    ports:
      - "27017"
    volumes:
      - /home/ubuntu/data/db:/data/db
    labels:
      - "ENVIRONMENT_TYPE=meteor"
  app:
    container_name: "CaseBook-app"
    restart: always
    image: "meteor/casebook"
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo:27017/CaseBook
    ports:
      - "80:3000"
    volumes:
      - /home/ubuntu/data/uploads:/Meteor-CaseBook-Container/.uploads
    labels:
      - "ENVIRONMENT_TYPE=meteor"
With host volumes, any contents of the volume inside the image will be overlaid with the exact contents of the host folder, including the UIDs of the host folder. An empty host folder is not initialized from the image the way an empty named volume is. UID mappings tend to be the most difficult part of using a host volume.
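One common workaround for the UID mismatch (not from the original answer, just a sketch) is to run the container as the user that owns the host folder, via the user: key in compose. The image name and the 1000:1000 UID:GID here are assumptions; substitute the owner of your host folder:

```yaml
services:
  app:
    image: my-image          # hypothetical image name
    user: "1000:1000"        # assumed UID:GID of the host folder's owner
    volumes:
      - /home/ubuntu/data/app-data:/app-data
```

Run `id -u` and `id -g` on the host (or `stat -c '%u:%g'` on the folder) to find the right values.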
Edit: from the comments below, if you need a named volume that acts as a host volume, there is a local persist volume plugin that's listed on docker's plugin list. After installing the plugin, you can create volumes that point to host folders, with the feature that even after removing the named volume, the host directory is left behind. Sample usage from the plugin includes:
docker volume create -d local-persist -o mountpoint=/data/images --name=images
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
They also include a v2 compose file with the following volume example:
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
One additional option that I've been made aware of in the past month is to use the local volume driver's mount options to manually create a bind mount. This is similar to a host volume in docker with the following differences:
- If the directory doesn't exist, trying to start a container with a named volume pointing to a bind mount will fail. With host volumes, docker will initialize it to an empty directory owned by root.
- If the directory is empty, a named volume will initialize the bind mount with the contents of the image at the mount location, including file and directory ownership/permissions. With a host volume, there is no initialization of the host directory contents.
To create a named volume as a bind mount, you can create it in advance with:
docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
From a docker run command, this can be done with --mount:
docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
Or in a compose file, you can create the named volume with:
volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
My preference would be to use the named volume with the local driver instead of the local-persist 3rd party driver if you need the named volume features.
Another way, with the built-in local driver:
docker volume create --opt type=none --opt device=/home/ubuntu/data/ --opt o=bind data_db
(This follows DimonVersace's example, with data_db declared as an external named volume in docker-compose and /home/ubuntu/data/ as the folder on the host machine.)

mount a directory from docker container to the linux ubuntu host machine

I would like to mount a directory from inside a Docker container to my Ubuntu Linux host machine using docker-compose.yml.
The directory in the container is /usr/local/XXX and I want to mount it on /home/Projects/XX.
How can I make it happen?
This is my docker-compose.yml file:
version: '3'
services:
  MyContainer:
    image: XX.XXX.XXX.XXX:XXXX/XXX/MyContainer:latest
    restart: always
    container_name: MyContainer
    hostname: MyContainer_docker
    privileged: true
    ports:
      - "XXXX:XX"
    volumes:
      - /home/Project/workspace/XXX/XXXX:/home/XX
    environment:
      - ...
    extra_hosts:
      - ...
    networks:
      net_plain3:
        ipv4_address: ...
networks:
  # ...etc...
It is possible with the right driver options.
Technically, you still mount the host directory to the container, but the result is that the host directory is populated with the data in the container directory. Usually it's the other way around. That's why you need those driver options.
services:
  somebox:
    volumes:
      - xx-vol:/usr/local/XXX

volumes:
  xx-vol:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/Projects/XX
Empty named volumes are initialized with the content of the image at the mount location when the container is created.
- bmitch
So the key here is to create a named volume that uses as device the desired location on the host.
Here is a full working demonstration.
I create the following Dockerfile to add a text file in the /workspace dir:
FROM busybox
WORKDIR /workspace
RUN echo "Hello World" > hello.txt
Then a compose.yaml to build this image and mount a volume with these driver options:
services:
  databox:
    build: ./
    volumes:
      - data:/workspace

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/blue/scrap/vol/data
Now I run the below commands:
$ mkdir data
$ docker-compose up
[+] Running 1/0
⠿ Container vol-databox-1 Created 0.0s
Attaching to vol-databox-1
vol-databox-1 exited with code 0
$ cat /home/blue/scrap/vol/data/hello.txt
Hello World
As you can see, the hello.txt file ended up on the host. It was not created after container startup; it was already inside the container's file system when the container started, since it was added during the build.
That means it is possible to populate a host directory with data from a container in such a way that the data doesn't have to be generated after the volume is mounted, which is usually the case.

Mount OpenMediaVault NFS in docker-compose.yml volume for Docker Swarm

I am trying to externalize my applications' runtime data so it is saved in an OpenMediaVault shared folder.
I was able to create the shared folder and configure NFS, or at least I think so. The config I see in OMV/Services/NFS/Shares is:
Shared folder: NasFolder[on /dev/sda1, nas/]
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: subtree_check,insecure
Now in that shared folder I have this structure (I checked it using the Windows SMB/CIFS config):
\\nfs-ip\NasFolder
|- mysql
| \- some my sql folders...
|- TEST.txt
I want to use this mysql folder to store MariaDB runtime data (I know the names are messed up; I am in the middle of a migration to MariaDB...), and maybe create some other folders for other services. This is my config from docker-compose.yml:
version: '3.2'
services:
  mysqldb:
    image: arm64v8/mariadb:latest
    ports:
      - 3306:3306
    restart: on-failure:3
    volumes:
      - type: volume
        source: nfs-volume
        target: /mysql
        volume:
          nocopy: true
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw
    command: --character-set-server=utf8 --collation-server=utf8_general_ci

volumes:
  nfs-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/NasFolder"
Now when I run docker stack deploy -c docker-compose.yml --with-registry-auth maprealm on my manager node, I get an error on maprealm_mysqldb.1 that looks like this:
"Err": "starting container failed: error while mounting volume '/var/lib/docker/volumes/maprealm_nfs-volume/_data': failed to mount local volume: mount :/NasFolder:/var/lib/docker/volumes/maprealm_nfs-volume/_data, data: addr=192.168.50.70,nolock,soft: permission denied",
I am pretty new to integration stuff. This is my home server and I just can't find good tutorials that 'get through my thick skull' how to configure those NFS paths and permissions, or at least how to debug it beyond just getting this error. I know that volumes.nfs-volume.driver_opts.device is supposed to be a path, but I am not sure what path it should be.
I was trying to adapt config from here: https://gist.github.com/ruanbekker/4a9c0d250bce9f84482f2a788ce92131
Edit 1) A few additional details:
The Docker swarm has 3 nodes; only one node is a manager, with availability pause.
OMV is running on a separate machine that is not part of the cluster.
OK, for anyone looking for the solution:
OMV by default uses /export/ for NFS, so the volume needed updating. I renamed the volume for mysql and updated volumes.mysql-volume.driver_opts.device to include that /export/ prefix. I also added the path to the mysql folder, so the volume is used by the mysqldb service only:
volumes:
  mysql-volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: addr=192.168.50.70,nolock,soft,rw
      device: ":/export/NasFolder/mysql"
After those changes, the volume config on mysql/mariadb needed updating as well:
mysqldb:
  image: arm64v8/mariadb:latest
  ports:
    - 3306:3306
  restart: on-failure:3
  volumes:
    - type: volume
      source: mysql-volume
      target: /var/lib/mysql
      volume:
        nocopy: true
  environment:
    - MYSQL_ROOT_PASSWORD=my-secret-pw
  command: --character-set-server=utf8 --collation-server=utf8_general_ci
mysqldb.volumes.source points to the name of the volume defined in step 1: mysql-volume.
mysqldb.volumes.target is where runtime data is stored inside the container. mysql/mariadb stores database runtime data in /var/lib/mysql, so you want to point to that, and you must use the full path.
Since I used the default OMV config, there were problems with permissions. So I updated OMV/Services/NFS/Shares to this:
Shared folder: NasFolder[on /dev/sda1, nas/]
#here you can see note 'The location of the files to share. The share will be accessible at /export/.'
Client: 192.168.50.0/24
Privilege: Read/Write
Extra options: rw,sync,no_root_squash,anonuid=1000,anongid=1000,no_acl

docker-compose volumes syntax for local driver to mount a file

I am attempting to mount a file (nginx.conf) in the volumes section of my docker-compose.yml. I can mount a directory as a volume without issue; however, it is not clear to me what the syntax is for a file.
I have the following defined in my volumes section
volumes:
  roundcubeweb:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/docker/volumes/roundcube/html
  nginxconf:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/docker/volumes/roundcube/nginx.conf
Later on, I have the following under my services section
nginx:
  image: nginx:latest
  deploy:
    replicas: 1
  restart: always
  depends_on:
    - roundcube
  ports:
    - "8082:80"
  volumes:
    - roundcubeweb:/var/www/html
    - nginxconf:/etc/nginx/conf.d/default.conf
When I attempt to start this up, I receive the following error:
ERROR: for roundcube_nginx_1  Cannot create container for service nginx:
failed to mount local volume:
mount /mnt/docker/volumes/roundcube/nginx.conf:/var/lib/docker/volumes/roundcube_nginxconf/_data,
flags: 0x1000: not a directory
ERROR: for nginx  Cannot create container for service nginx:
failed to mount local volume:
mount /mnt/docker/volumes/roundcube/nginx.conf:/var/lib/docker/volumes/roundcube_nginxconf/_data,
flags: 0x1000: not a directory
ERROR: Encountered errors while bringing up the project.
I've found that if I inline the file location in the nginx service section's volume declaration then it works just fine. For example:
volumes:
  - roundcubeweb:/var/www/html
  - /mnt/docker/volumes/roundcube/nginx.conf:/etc/nginx/conf.d/default.conf
Can one not mount a file in the volumes section? Is there a reference for the local driver's parameters?
I don't think it is possible to have a standalone Docker volume point to a specific file rather than a directory. It may be related to how Docker manages and mounts volumes (I tried to find a doc or related piece of code but could not find any).
The local driver's parameters seem to take options similar to the Linux mount command. This is implied in the Docker volume docs, though not very clearly:
the local driver accepts mount options as a comma-separated list in the o parameter
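For illustration, the Docker documentation uses tmpfs as an example of o carrying a comma-separated option list; expressed as a compose volume it would look roughly like this (the size and uid values are arbitrary):

```yaml
volumes:
  foo:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: size=100m,uid=1000   # mount(8)-style options, comma-separated
```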
I've found that if I inline the file location in the nginx service section's volume declaration then it works just fine. For example:
volumes:
  - roundcubeweb:/var/www/html
  - /mnt/docker/volumes/roundcube/nginx.conf:/etc/nginx/conf.d/default.conf
In your example you are doing a bind mount rather than using volumes, which is handled differently by Docker and is more suited to your use case of mounting a local file in the container. I think using a bind mount is the appropriate solution in your case.
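The same file mount can also be written in compose's long volume syntax, which makes the bind-mount type explicit; this is a sketch reusing the question's paths, and read_only is an optional extra:

```yaml
services:
  nginx:
    image: nginx:latest
    volumes:
      - type: bind
        source: /mnt/docker/volumes/roundcube/nginx.conf
        target: /etc/nginx/conf.d/default.conf
        read_only: true   # optional; the config file is only read by nginx
```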

Docker for Windows unable to mount nfs drive with docker-compose

I get permission denied errors when attempting to mount an NFS drive to a Docker container using a docker-compose file.
This error only occurs when running Docker for Windows; I am able to successfully mount the drive on an Ubuntu host.
docker-compose file
version: '2'
services:
  builder:
    image: some_image
    ports:
      - "8888:8080"
    volumes:
      - "nfsmountCC:</container/path>"
volumes:
  nfsmountCC:
    driver: local
    driver_opts:
      type: nfs
      o: addr=<nfs_IP_Address>
      device: ":</nfs/server/dir/path>"
Docker for Windows produces:
ERROR: for test_1 Cannot start service builder: b"error while mounting volume '/var/lib/docker/volumes/test-master_nfsmountCC/_data': error while mounting volume with options: type='nfs' device=':</nfs/server/dir/path>' o='addr=<nfs_IP_Address>': permission denied"
The following worked for me with Docker Toolbox on Windows 7, mounting an NFS volume from an Ubuntu server:
NFS server side:
- allow the nfs and mountd services through your firewall (if you have one) on the NFS server
- add the insecure option to each relevant entry of your /etc/exports file
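For reference, an /etc/exports entry with the insecure option might look like the following; the export path and subnet here are placeholders, not from the question:

```
# /etc/exports on the NFS server (assumed path and client subnet)
/srv/nfs/share  192.168.0.0/24(rw,sync,insecure,no_subtree_check)
```

Run `exportfs -ra` after editing so the server re-reads the file.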
Docker client side:
- add the hard and nolock options to the NFS volume definition:
version: '3.7'
services:
  builder:
    image: some_image
    ports:
      - "8888:8080"
    volumes:
      - "nfsmountCC:</container/path>"
volumes:
  nfsmountCC:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<nfs_IP_Address>,rw,hard,nolock"
      device: ":</nfs/server/dir/path>"
