Docker shared volume creation

I am trying to create a docker volume which will be shared between 2 hosts.
Let's say that I have two hosts A and B. When the volume is created on host A with the following command:
docker volume create --driver local --opt type=nfs --opt o=addr=B,rw --opt device=:/tmp/dir --name foo
After inspecting the volume, the result is the following:
docker volume inspect foo
[
    {
        "Name": "foo",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/foo/_data",
        "Labels": {},
        "Scope": "local"
    }
]
My question is: why doesn't the volume's Mountpoint point to /tmp/dir instead of the default Docker volume location? And how can I be sure that the data in /tmp/dir on host B will be shareable?
Thanks in advance!

The volume you created does not match the inspect output. This would indicate that your volume create command failed, or perhaps you're checking the wrong host for the volume. With the current version of docker, the expected output is:
$ docker volume create --driver local --opt type=nfs \
--opt o=addr=10.10.10.10,rw --opt device=:/path/to/dir nfs_test
$ docker volume inspect nfs_test
[
    {
        "CreatedAt": "2018-02-18T12:10:03-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/home/var-docker/volumes/nfs_test/_data",
        "Name": "nfs_test",
        "Options": {
            "device": ":/path/to/dir",
            "o": "addr=10.10.10.10,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
The output you produced matches a local volume created without any options:
$ docker volume create foo
$ docker volume inspect foo
[
    {
        "CreatedAt": "2018-02-18T12:10:51-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/home/var-docker/volumes/foo/_data",
        "Name": "foo",
        "Options": {},
        "Scope": "local"
    }
]
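If the create on host A did silently fail (or ran against a different host), one recovery path is to remove the plain local volume and recreate it with the NFS options, then confirm that Options is populated. A minimal sketch, assuming host B exports /tmp/dir over NFS and is reachable by that name:
$ docker volume rm foo
$ docker volume create --driver local --opt type=nfs \
  --opt o=addr=B,rw --opt device=:/tmp/dir foo
$ docker volume inspect --format '{{ .Options }}' foo   # should list type, o and device
Note that even with the options set, Mountpoint still shows a path under the Docker volumes directory; with the local driver the NFS export is mounted at that path when a container uses the volume, so the data itself stays on host B.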

Related

Unexpected behavior with named Docker NFS volumes in a Swarm cluster

I'm new to Docker Swarm and I'm managing all config via Ansible. I have a four-node Swarm cluster with the following members:
manager (also the NFS exporter for volumes shared across the cluster)
cluster0
cluster1
cluster2
cluster[0-2] are workers in the swarm.
I'm configuring a service which makes use of a named volume media, which is an NFS export from manager. The service is constrained to the worker nodes. Once a service is deployed that makes use of this volume, I'd expect to see the following output of docker volume inspect media from any of the worker nodes with the replicated service:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-01-27T12:23:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
and indeed when I docker volume inspect media from the swarm manager, I see:
user@manager:~$ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:58:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
However, this is the output I see from the nodes with the replicated service:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:59:02-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": null,
        "Scope": "local"
    }
]
As a result, I'm unable to access the NFS-exported files. I'm not entirely sure how to troubleshoot. Is this a limitation of Swarm, and if so, how are others working around this issue? If not, what am I missing?
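For reference, the kind of stack definition described above would look roughly like the sketch below; the driver options are copied from the inspect output, while the service name, image, and container path are placeholders:
version: "3.8"
services:
  app:                           # placeholder service name
    image: example/image:latest  # placeholder image
    deploy:
      placement:
        constraints:
          - node.role == worker
    volumes:
      - media:/srv/media         # placeholder container path

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=manager.domain.com,rw
      device: ":/path/media"
When the volume is defined in the stack file like this (rather than created out of band), each node that runs a task creates its own copy of the volume, driver options included.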

NFS volume created manually mounts but shows empty contents

Server: Docker on Ubuntu, 18.06.3-ce
Local: Docker for Mac, 19.03.13
I have created a volume on the swarm manually, pointing to a remote NFS server. When I mount this volume in a service it appears to work, but the contents are empty and writes seem to succeed (the calling code doesn't crash), yet the bytes are gone, maybe even to /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label "com.docker.stack.namespace".
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
my_nfs
version: "3.5"
services:
my-api:
volumes:
- "compose_nfs:/data1/" # works fine
- "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
externl_nfs:
external: true
compose_nfs:
driver: local
driver_opts:
type: nfs
o: addr=10.0.1.100
device: ":/data/"
When inspecting the volumes, they are identical except for that label.
{
    "CreatedAt": "2020-20-20T20:20:20Z",
    "Driver": "local",
    "Labels": {
        # label missing on the manually created one
        "com.docker.stack.namespace": "stackie"
    },
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "compose_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}
If you use an external volume, swarm is deferring the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many will delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, on each node run:
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
externl_nfs
Otherwise, when the service gets scheduled on a node without the volume defined, it appears that swarm will create the container, and that create command generates a default named volume, storing the contents on that local node (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise).
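In other words, instead of external: true, letting swarm own the definition would look something like the sketch below (options copied from the compose_nfs example above):
volumes:
  externl_nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.1.100
      device: ":/data/"
With that in place, any node that gets a task scheduled creates the volume with the NFS options itself, so nothing has to be pre-created by hand.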
Answering this myself, since it is an older version of Docker and, given the NFS part, probably not relevant to most people.
It appears to be a bug of some sort in Docker/Swarm:
Create an NFS volume on the swarm (via the API, from a remote client)
The volume is correct on the manager node which was contacted
The volume is missing the options on all other worker nodes
As a strange side effect, the volume seems to work: it can be mounted and writes succeed without issue, but all bytes written disappear. Reads work, but every file is "not found", which is logical given that the writes disappear.
On manager:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T15:56:44+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}]
On worker:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T16:22:16+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": null,
    "Scope": "local"
}]
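To confirm what a given task is actually seeing, one option is to compare the volume options per node and check from inside the container whether the target path is really an NFS mount. A rough sketch using the names from the example, assuming the image has mount available and a single matching container:
# on each node
docker volume inspect -f '{{ .Options }}' externl_nfs    # map[] means the options were lost
# inside the running task
docker exec $(docker ps -q -f name=my-api) sh -c 'mount | grep /data2'
If the grep prints nothing, /data2 is backed by the node-local volume directory rather than the NFS export, which matches the disappearing writes described above.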

Error message in docker about user specified subnet

I'm trying to assign an IP to a container using the --ip flag, but I get the following message:
Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets.
What does this message mean? How do I get the container to run?
The network was created with the command:
docker network create my_network_name
And the container is called with:
docker run -it --net my_network_name --ip 172.22.0.30 image_name
When you create your network, provide a subnet from the private IP ranges that is free on your network. Then, when you create your container on this network, pick an address from that subnet.
For instance with IP range 10.11.0.0/16 and container IP 10.11.0.10:
$ docker network create my_network_name --subnet=10.11.0.0/16
$ docker run -it --net my_network_name --ip 10.11.0.10 image_name
And here is an actual run:
$ docker --version
Docker version 19.03.6, build 369ce74a3c
$ uname -a
Linux 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ docker network create my_network_name --subnet=10.11.0.0/16
35a9e4e5fb4ff243202fc4f6b687901c3cbfcd8fe34e06290db5d257310417a2
$ docker run --rm -it --net my_network_name --ip 10.11.0.10 ubuntu
root@f0d283bc5023:/#
On another window:
$ docker network inspect my_network_name
[
    {
        "Name": "my_network_name",
        "Id": "35a9e4e5fb4ff243202fc4f6b687901c3cbfcd8fe34e06290db5d257310417a2",
        "Created": "2020-09-19T11:51:59.985580503-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.11.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "f0d283bc5023fbe8a1c854fd2bb5bdd121be7245013cfac62d9933f95ace7bbf": {
                "Name": "sleepy_colden",
                "EndpointID": "088fbd64b82e05920fda91b28ebb5b4a14c9fca3ac9fde457c8819663f6049df",
                "MacAddress": "02:42:0a:0b:00:0a",
                "IPv4Address": "10.11.0.10/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
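If the container is started with docker-compose instead of docker run, the equivalent setup (user-defined subnet plus static address) would look roughly like this sketch; the service name is a placeholder:
version: "3.5"
services:
  app:
    image: image_name
    networks:
      my_network_name:
        ipv4_address: 10.11.0.10

networks:
  my_network_name:
    ipam:
      config:
        - subnet: 10.11.0.0/16
The key point is the same either way: a static ipv4_address is only accepted on a network with a user-configured subnet.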

Docker: find out by command line if and which shared drives are enabled

Do you know if there is a way to find out, using the command line, whether shared drives are enabled and which ones?
Thanks
EDIT:
[Windows - Git Bash] To find out which drives are shared:
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json
[Windows - CMD prompt]
type C:\Users\[User]\AppData\Roaming\Docker\settings.json
Within the file, you'll find the JSON object you're looking for:
"SharedDrives": {
"C": true
},...
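To pull just that key out of the file from Git Bash, plain grep is enough (a quick sketch, same path as above):
grep -A 3 '"SharedDrives"' C:/Users/[user]/AppData/Roaming/Docker/settings.json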
To find out which volumes are mounted on your host you can use the following commands:
docker volume ls
This will give you a list; for more details you can inspect a single volume:
docker volume inspect 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d
Which will return something similar to:
{
    "CreatedAt": "2020-02-24T08:35:57Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d/_data",
    "Name": "2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d",
    "Options": null,
    "Scope": "local"
}
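If you only want the host paths, the inspect format flag trims the output down; for example, a rough one-liner over all volumes (sketch, assuming a Unix-like shell with xargs):
docker volume ls -q | xargs docker volume inspect -f '{{ .Name }} -> {{ .Mountpoint }}'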

Docker volumes-from blank, from network share

I have two containers; one is set up as a data volume container, and I can go inside it and explore the files that are mounted from a network share without any issues.
However, in the second container, when I go to the folder with the mounted volumes, the folder exists but all the files and directories that should be there are not visible.
This used to work, so I can only assume it's due to Docker 1.9. I am seeing this on both a Linux and a Mac box.
Any ideas as to the cause? Is this a bug, or is there something else I can investigate?
Output of inspect:
"Volumes": {
"/mnt/shared_app_data": {},
"/srv/shared_app_data": {}
},
"Mounts": [
{
"Name": "241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53",
"Source": "/var/lib/docker/volumes/241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53/_data",
"Destination": "/mnt/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
},
{
"Name": "061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57",
"Source": "/var/lib/docker/volumes/061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57/_data",
"Destination": "/srv/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
The files are mounted in the Dockerfile in this manner:
RUN echo '/srv/path ipaddress/255.255.255.0(rw,no_root_squash,subtree_check,fsid=0)' >> /etc/exports
RUN echo 'ipaddress:/srv/path /srv/shared_app_data nfs defaults 0 0' >> /etc/fstab
RUN echo 'ipaddress:/srv/path /mnt/shared_app_data nfs defaults 0 0' >> /etc/fstab
and then when the container starts, it runs:
service rpcbind start
mount -a
You need to be sure that the second container does mount the VOLUME declared in the first one:
docker run --volumes-from first_container second_container
Make sure the first container does have the right files: see "Locating a volume"
docker inspect first_container
# more precisely
sudo ls $(docker inspect -f '{{ (index .Mounts 0).Source }}' first_container)
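It is also worth checking that the NFS mounts inside the first container actually succeeded before relying on --volumes-from. A rough sketch using the paths from the question, assuming the image has mount available:
docker exec first_container sh -c 'mount | grep shared_app_data'
If nothing shows up, the mount -a at startup probably failed silently; mounting NFS from inside a container generally requires running it with --privileged or --cap-add SYS_ADMIN.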
