NFS volume created manually mounts but shows empty contents - docker

Server: Docker on Ubuntu, 18.06.3-ce
Local: Docker for Mac, 19.03.13
I have created a volume in the swarm manually, pointing to a remote NFS server. When I try to mount this volume in a service it appears to work, but the contents are empty, and any writes seem to succeed (the calling code doesn't crash) yet the bytes are gone. Maybe they even end up in /dev/null.
When I declare a similar volume inside the compose file it works. The only difference I can find is the label "com.docker.stack.namespace".
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
my_nfs
version: "3.5"
services:
my-api:
volumes:
- "compose_nfs:/data1/" # works fine
- "externl_nfs:/data2/" # empty contents, forgotten writes
volumes:
externl_nfs:
external: true
compose_nfs:
driver: local
driver_opts:
type: nfs
o: addr=10.0.1.100
device: ":/data/"
When inspecting the volumes they are identical, except for that label.
{
    "CreatedAt": "2020-20-20T20:20:20Z",
    "Driver": "local",
    "Labels": {
        # label missing on the manually created one
        "com.docker.stack.namespace": "stackie"
    },
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "compose_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}

If you use an external volume, swarm defers the creation of that volume to you. Volumes are also local to the node they are created on, so you must create that volume on every node where swarm could schedule this job. For this reason, many people delegate the volume creation to swarm mode itself and put the definition in the compose file. So in your example, before scheduling the service, run this on each node:
docker volume create --driver local \
--opt type=nfs \
--opt o=addr=10.0.1.100 \
--opt device=:/data/ \
externl_nfs
Otherwise, when the service gets scheduled on a node where the volume is not defined, it appears that swarm creates the container anyway, and that create generates a default named volume, storing the contents on that local node (I could also see swarm failing to schedule the service because of a missing volume, but your example shows otherwise).
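To see which node each task actually landed on (and therefore which node's local volume to check), something like the following should work; the service name stackie_my-api is an assumption, built from the stack name in your label and the service name in your compose file:
# service name assumed from the example above (stack "stackie" + service "my-api")
docker service ps stackie_my-api --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'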

Answering my own question; since it concerns an older version of docker, this is probably not relevant to most people, considering the NFS part.
It appears to be a bug of some sort in docker/swarm:
- Create an NFS volume on the swarm (via the API, from a remote client).
- The volume is correct on the manager node that was contacted.
- The volume is missing its options on all other worker nodes.
As a strange side effect, the volume seems to work. It can be mounted, and writes succeed without issue, but all bytes written disappear. Reads work, but every file is "not found", which is logical given that the writes disappear.
On manager:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T15:56:44+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": {
        "device": ":/data/",
        "o": "addr=10.0.1.100",
        "type": "nfs"
    },
    "Scope": "local"
}]
On worker:
> docker volume inspect externl_nfs
[{
    "CreatedAt": "2020-11-03T16:22:16+01:00",
    "Driver": "local",
    "Labels": {},
    "Mountpoint": "/var/lib/docker/volumes/externl_nfs/_data",
    "Name": "externl_nfs",
    "Options": null,
    "Scope": "local"
}]
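A quick way to spot the difference on each node is to print only the Options field with the format flag (a minimal check; the volume name is the one from the outputs above):
# prints the full options map on the manager, an empty map on the affected workers
docker volume inspect -f '{{ .Options }}' externl_nfs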

Related

Docker: find out by command line if and which shared drives are enabled

Do you know if there is a way to find out, using the command line, whether and which shared drives are enabled?
Thanks
EDIT:
[Windows - Git Bash] To find out which drives are shared:
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json
[Windows - CMD prompt]
type C:\Users\[User]\AppData\Roaming\Docker\settings.json
Within the file, you'll find the JSON object you're looking for:
"SharedDrives": {
"C": true
},...
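If jq happens to be available in Git Bash (an assumption; it is not installed by default), the same object can be extracted directly:
# requires jq; [user] is the same placeholder as above
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json | jq '.SharedDrives'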
To find out which volumes are mounted on your host you can use the following commands:
docker volume ls
This will give you a list; for more details you can inspect a single volume:
docker volume inspect 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d
Which will return something similar to:
{
    "CreatedAt": "2020-02-24T08:35:57Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d/_data",
    "Name": "2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d",
    "Options": null,
    "Scope": "local"
}
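If you only need a single field, a format template avoids reading the full JSON (a small sketch; <volume-name> is a placeholder for your volume name or ID):
docker volume inspect -f '{{ .Mountpoint }}' <volume-name>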

Why can't I attach a container to a docker network?

I've created a user-defined attachable overlay swarm network. I can inspect it, but when I attempt to attach a container to it, I get the following error when running on the manager node:
$ docker network connect mrunner baz
Error response from daemon: network mrunner not found
The network is defined and is attachable
$ docker network inspect mrunner
[
    {
        "Name": "mrunner",
        "Id": "kviwxfejsuyc9476eznb7a8yw",
        "Created": "2019-06-20T21:25:45.271304082Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098"
        },
        "Labels": null
    }
]
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a454d677dea        bridge              bridge              local
95383b47ee94        docker_gwbridge     bridge              local
249684755b51        host                host                local
zgx0nppx33vj        ingress             overlay             swarm
kviwxfejsuyc        mrunner             overlay             swarm
a30a12f8d7cc        none                null                local
uftxcaoz9rzg        taskman_default     overlay             swarm
Why is this network connection failing?
This was answered here: https://github.com/moby/moby/issues/39391
See this:
To create an overlay network for use with swarm services, use a command like the following:
$ docker network create -d overlay my-overlay
To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the --attachable flag:
$ docker network create -d overlay --attachable my-attachable-overlay
So, by default an overlay network cannot be used by standalone containers; if you need that, you must add --attachable to allow the network to be used by standalone containers.
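To check whether an existing network was created as attachable, the Attachable field shown in the inspect output above can be queried directly (a minimal check using the format flag):
$ docker network inspect -f '{{ .Attachable }}' mrunner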
Thanks to thaJeztah on the docker git repo:
The solution is as follows; essentially, make the flow service-centric:
docker network create -d overlay --attachable --scope=swarm somenetwork
docker service create --name someservice nginx:alpine
If you want to connect the service to somenetwork after it has been created, update the service:
docker service update --network-add somenetwork someservice
After this, all tasks of the someservice service will be connected to somenetwork (in addition to any other overlay networks they were already connected to).
https://github.com/moby/moby/issues/39391#issuecomment-505050610
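To verify that the network made it into the service spec, something like the following should work (a sketch; on older Docker versions the list may appear under .Spec.Networks instead of .Spec.TaskTemplate.Networks):
docker service inspect -f '{{ json .Spec.TaskTemplate.Networks }}' someservice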

How to get all ip addresses on a docker network?

I have containers running in a swarm stack of services (each on a different docker-machine), connected together on an overlay docker network.
How can I get all the IP addresses in use on this network, associated with their service or container name, from inside a container on this network?
Thank you
If you want to execute this command from inside the containers, first you have to mount docker.sock into each service (assuming that docker is installed in the container):
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Then in each container you have to install jq, and after that you can simply run docker network inspect <network_name_here> | jq -r 'map(.Containers[].IPv4Address) []'. The expected output is something like:
172.21.0.2/16
172.21.0.5/16
172.21.0.4/16
172.21.0.3/16
Find the name or ID of the overlay network:
$ docker network ls | grep overlay
Do an inspect:
docker network inspect $NETWORK_NAME
You will be able to find the container names and the IPs allocated to them; grep the required values from the inspect output, which looks something like this:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.23.0.0/16",
"Gateway": "172.23.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"183584efd63af145490a9afb61eac5db994391ae94467b32086f1ece84ec0114": {
"Name": "emailparser_lr_1",
"EndpointID": "0a9d0958caf0fa454eb7dbe1568105bfaf1813471d466e10030db3f025121dd7",
"MacAddress": "02:42:ac:17:00:04",
"IPv4Address": "172.23.0.4/16",
"IPv6Address": ""
},
"576cb03e753a987eb3f51a36d4113ffb60432937a2313873b8608c51006ae832": {
"Name": "emailparser",
"EndpointID": "833b5c940d547437c4c3e81493b8742b76a3b8644be86af92e5cdf90a7bb23bd",
"MacAddress": "02:42:ac:17:00:02",
"IPv4Address": "172.23.0.2/16",
"IPv6Address": ""
},
Assuming you're using the default VIP endpoint, you can use DNS to resolve the IPs of a service. Here's an example of using dig to get the VIP IP, and then to get the individual container IPs behind that VIP using the tasks.<service> name.
docker network create --driver overlay --attachable sweet
docker service create --name nginx --replicas=5 --network sweet nginx
docker container run --network sweet -it bretfisher/netshoot dig nginx
~~~
;; ANSWER SECTION:
nginx. 600 IN A 10.0.0.3
~~~
docker container run --network sweet -it bretfisher/netshoot dig tasks.nginx
~~~
;; ANSWER SECTION:
tasks.nginx. 600 IN A 10.0.0.5
tasks.nginx. 600 IN A 10.0.0.8
tasks.nginx. 600 IN A 10.0.0.7
tasks.nginx. 600 IN A 10.0.0.6
tasks.nginx. 600 IN A 10.0.0.4
~~~
for n in `docker network ls | awk '!/NETWORK/ {print $1}'`; do docker network inspect $n; done
First, find the name of the network which your swarm is using.
Then run docker network inspect <NETWORK-NAME>. This will give you a JSON output, in which you'll find an object with key "Containers". This object reveals all the containers in the network and their IP addresses respectively.
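If you prefer not to parse the JSON by hand, a format template can print just the names and IPs (a sketch; <NETWORK-NAME> is a placeholder, and note that inspecting an overlay network only lists the containers running on the node where the command is executed):
docker network inspect -f '{{ range .Containers }}{{ .Name }} {{ .IPv4Address }}{{ printf "\n" }}{{ end }}' <NETWORK-NAME>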

Docker shared volume creation

I am trying to create a docker volume which will be shared between 2 hosts.
Let's say that I have two hosts, A and B. The volume is created on host A with the following command:
docker volume create --driver local --opt type=nfs --opt o=addr=B,rw --opt device=:/tmp/dir --name foo
After inspecting the volume, the result is the following:
docker volume inspect foo
[
    {
        "Name": "foo",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/foo/_data",
        "Labels": {},
        "Scope": "local"
    }
]
My question is: why doesn't the volume's Mountpoint point to the directory /tmp/dir, but to the default docker volume location instead? And how can I be sure that the data in /tmp/dir on host B will actually be shared?
Thanks in advance!
The volume you created does not match the inspect output. This would indicate that your volume create command failed, or perhaps you're checking the wrong host for the volume. With the current version of docker, the expected output is:
$ docker volume create --driver local --opt type=nfs \
--opt o=addr=10.10.10.10,rw --opt device=:/path/to/dir nfs_test
$ docker volume inspect nfs_test
[
    {
        "CreatedAt": "2018-02-18T12:10:03-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/home/var-docker/volumes/nfs_test/_data",
        "Name": "nfs_test",
        "Options": {
            "device": ":/path/to/dir",
            "o": "addr=10.10.10.10,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
The output you produced matches a local volume created without any options:
$ docker volume create foo
$ docker volume inspect foo
[
    {
        "CreatedAt": "2018-02-18T12:10:51-05:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/home/var-docker/volumes/foo/_data",
        "Name": "foo",
        "Options": {},
        "Scope": "local"
    }
]
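Also note that with the local driver the NFS export is typically only mounted at the Mountpoint while a container is actually using the volume, so an empty Mountpoint directory on its own proves little. A minimal check, assuming the nfs_test volume above and an NFS export reachable from the host:
# the ls should show the contents of the NFS export
docker run --rm -v nfs_test:/mnt alpine ls /mnt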

Docker volumes-from blank, from network share

I have two containers; one is set up as a data volume container. I can go inside the data container and explore the files that are mounted from a network share without any issues.
However, in the second docker instance, when I go to the folder with the mounted volumes, the folder exists but all the files and directories that should be there are not visible.
This used to work, so I can only assume it's due to docker 1.9. I am seeing this on both a Linux and a Mac box.
Any ideas as to the cause? Is this a bug, or is there something else I can investigate?
Output of inspect:
"Volumes": {
"/mnt/shared_app_data": {},
"/srv/shared_app_data": {}
},
"Mounts": [
{
"Name": "241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53",
"Source": "/var/lib/docker/volumes/241d3e495f312c79abbeaa9495fa3b32110e9dca8442291d248cfbc5acca5b53/_data",
"Destination": "/mnt/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
},
{
"Name": "061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57",
"Source": "/var/lib/docker/volumes/061f16c066b59f31baac450d0d97043d1fcdceb4ceb746515586e95d26c91b57/_data",
"Destination": "/srv/shared_app_data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
The files are mounted in the Dockerfile in this manner:
RUN echo '/srv/path ipaddress/255.255.255.0(rw,no_root_squash,subtree_check,fsid=0)' >> /etc/exports
RUN echo 'ipaddress:/srv/path /srv/shared_app_data nfs defaults 0 0' >> /etc/fstab
RUN echo 'ipaddress:/srv/path /mnt/shared_app_data nfs defaults 0 0' >> /etc/fstab
and then when the container starts it runs:
service rpcbind start
mount -a
You need to be sure that the second container does mount the VOLUME declared in the first one:
docker run --volumes-from first_container second_container
Make sure the first container does have the right files: see "Locating a volume"
docker inspect first_container
# more precisely
sudo ls $(docker inspect -f '{{ (index .Mounts 0).Source }}' first_container)
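As an extra sanity check, you can list the shared directory from inside the running data container (a sketch; the container name and path are taken from the question and answer above):
docker exec first_container ls /srv/shared_app_data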
