Unexpected behavior with named Docker NFS volumes in a Swarm cluster - docker

I'm new to Docker Swarm and I'm managing all config via Ansible. I have a four-node Swarm cluster with the following members:
manager (also the NFS exporter for volumes shared across the cluster)
cluster0
cluster1
cluster2
cluster[0-2] are workers in the swarm.
I'm configuring a service that uses a named volume media, which is an NFS export from manager. The service is constrained to the worker nodes. Once a service that uses this volume is deployed, I'd expect docker volume inspect media on any worker node running the replicated service to show:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-01-27T12:23:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
and indeed, when I run docker volume inspect media on the swarm manager, I see:
user@manager:~$ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:58:55-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": {
            "device": ":/path/media",
            "o": "addr=manager.domain.com,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
However, this is the output I see from the nodes with the replicated service:
user@cluster1:~ $ docker volume inspect media
[
    {
        "CreatedAt": "2021-11-22T07:59:02-08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/media/_data",
        "Name": "media",
        "Options": null,
        "Scope": "local"
    }
]
As a result, I'm unable to access the NFS-exported files. I'm not entirely sure how to troubleshoot. Is this a limitation of Swarm and if so, how are others working around this issue? If not, what am I missing?
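One likely explanation for the Options: null output: the local volume driver is node-scoped, so when a service task starts on a worker that doesn't already have a media volume with the NFS options, Docker auto-creates a plain, empty one. A minimal sketch of a workaround, reusing the hostname and export path from the question, is to pre-create the volume with the NFS options on every worker (e.g. from Ansible) before deploying the service:
# Run on each worker node before deploying the service.
# manager.domain.com and /path/media are taken from the question; adjust as needed.
docker volume rm media 2>/dev/null || true   # drop any options-less volume that was auto-created
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=manager.domain.com,rw \
  --opt device=:/path/media \
  media
Alternatively, declaring the volume with the same driver and driver_opts in the stack file should have the same effect: each node that schedules a task then creates the volume with the NFS options instead of an empty local one.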

Related

How to migrate data between named volumes?

I want to migrate data from an existing volume, old_volume, to a new volume, new_volume; in other words, move the data from a volume under the default Docker path to one backed by a directory in /storage.
old_volume is a named volume stored under the default Docker path. new_volume is a named volume whose device is bound to a directory under /storage.
I copied the files from old_volume to new_volume using one of the commands below:
rsync -av /var/lib/docker/volumes/old_volume/_data/ /var/lib/docker/volumes/new_volume/_data
or
rsync -av /var/lib/docker/volumes/old_volume/_data/ /storage/volumes/myapp/new_volume
Data copied to /storage/volumes/myapp/new_volume does not appear in /var/lib/docker/volumes/new_volume/_data. Why does this happen?
When a container uses the volume, data written to it appears in both paths. In this case, when I copy the data manually, it does not appear in both paths.
Is it possible to resolve this?
docker inspect old_volume
[
    {
        "CreatedAt": "2022-06-01T18:35:53-03:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/old_volume/_data",
        "Name": "old_volume",
        "Options": {},
        "Scope": "local"
    }
]
docker inspect new_volume
[
    {
        "CreatedAt": "2022-06-02T09:27:40-03:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/new_volume/_data",
        "Name": "new_volume",
        "Options": {
            "device": "/storage/volumes/myapp/new_volume",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]

Docker bridge network with swarm scope does not accept subnet and driver options

I want to control which external IP is used for outbound traffic from my swarm containers; this can easily be done with a bridge network and iptables rules.
This works fine for local-scoped bridge networks:
docker network create --driver=bridge --scope=local --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_local_bridge" my_local_bridge
and on iptables:
sudo iptables -t nat -A POSTROUTING -s 172.123.0.0/16 ! -o my_local_bridge -j SNAT --to-source <my_external_ip>
This is the output of docker network inspect my_local_bridge:
[
    {
        "Name": "my_local_bridge",
        "Id": "...",
        "Created": "...",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.123.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
        },
        "Options": {
            "com.docker.network.bridge.enable_ip_masquerade": "false",
            "com.docker.network.bridge.name": "my_local_bridge"
        },
        "Labels": {}
    }
]
But if I try to attach a swarm container to this network I get this error:
network "my_local_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"
Alright, great, let's switch the scope to swarm then, right? Wrong, oh so wrong.
Creating the network:
docker network create --driver=bridge --scope=swarm --subnet=172.123.0.0/16 -o "com.docker.network.bridge.enable_ip_masquerade"="false" -o "com.docker.network.bridge.name"="my_swarm_bridge" my_swarm_bridge
Now let's check docker network inspect my_swarm_bridge:
[
    {
        "Name": "my_swarm_bridge",
        "Id": "...",
        "Created": "...",
        "Scope": "swarm",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
        },
        "Options": {},
        "Labels": {}
    }
]
I can now attach swarm containers to it just fine, but the options are not set, and the subnet is not what I defined...
How can I set these options for "swarm"-scoped bridge networks? Or, how can I set iptables to use a defined external IP if I can't set com.docker.network.bridge.enable_ip_masquerade to false?
Do I need to make a script to check the subnet assigned and manually delete the iptables MASQUERADE rule?
thanks guys
I'm pretty sure you can't use the bridge driver with swarm, and that you should use the overlay driver.
From the Docker documentation:
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
I might not understand your particular use case though ...
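To make the suggestion above concrete: the com.docker.network.bridge.* options only apply to the bridge driver, but a swarm-scoped network with a subnet you control can be created with the overlay driver. A minimal sketch (the network name is made up):
# Run on a manager node; creates a swarm-scoped, attachable overlay network
# with an explicitly chosen subnet instead of an auto-assigned one.
docker network create \
  --driver overlay \
  --attachable \
  --subnet 172.123.0.0/16 \
  my_swarm_overlay
As far as I understand, outbound traffic from swarm service containers still leaves the host via docker_gwbridge rather than this network, so the masquerade/SNAT part of the question would need to be handled there; that is outside this sketch.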

Problems with network connectivity and docker on Synology

I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. This used to work, but some months ago one of my experimental containers started experiencing network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem, I followed the tutorial from the Docker documentation:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1, I wasn't able to ping alpine2 using its IP address (since the default bridge doesn't do name resolution).
I also tried a user-defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network):
bash-4.3# docker network inspect test
[
    {
        "Name": "test",
        "Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
        "Created": "2020-12-22T22:47:08.331525073+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
                "Name": "alpine1",
                "EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
                "Name": "alpine2",
                "EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Also in this case I wasn't able to ping one container from the other.
Apart from DSM updates, I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue; have you tried disabling the firewall rules on the NAS?
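If you want to check before turning the firewall off entirely: container-to-container traffic on a bridge typically passes through the kernel's FORWARD chain, so a rough diagnostic sketch (run over SSH on the NAS) would be:
# Show the FORWARD chain policy and rules; a DROP policy with no Docker
# rules in place would explain bridged containers not reaching each other.
sudo iptables -S FORWARD
# Docker's own chains, if present on this Docker version:
sudo iptables -S DOCKER 2>/dev/null
sudo iptables -S DOCKER-USER 2>/dev/null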

Docker: find out by command line if and which shared drives are enabled

Do you know if there is a way to find out from the command line whether shared drives are enabled, and which ones?
Thanks
EDIT:
[Windows - Git Bash] To find out which drives are shared:
cat C:/Users/[user]/AppData/Roaming/Docker/settings.json
[Windows - CMD prompt]
type C:\Users\[User]\AppData\Roaming\Docker\settings.json
Within the file, you'll find the JSON object you're looking for:
"SharedDrives": {
"C": true
},...
To find out which volumes are present on your host, you can use the following commands:
docker volume ls
This will give you a list; for more details you can inspect a single volume:
docker volume inspect 2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d
Which will return something similar to:
{
    "CreatedAt": "2020-02-24T08:35:57Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d/_data",
    "Name": "2d858a93d15a8e6903cccfe04cdf5576812df8697ca4e07edbbf40575873d33d",
    "Options": null,
    "Scope": "local"
}
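If you only care about the shared-drive flags, a small sketch that pulls just that key out of settings.json (assuming jq is available in Git Bash; the path is the same one used above):
# Print only the SharedDrives object from Docker Desktop's settings file.
jq '.SharedDrives' "C:/Users/$USERNAME/AppData/Roaming/Docker/settings.json"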

What is IP of bridge localhost for Docker?

I am dockerizing my application. I have two containers now. One of them wants to talk to the other; in its config I have "mongo": "127.0.0.1". I suppose they should talk through the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]
Should I now change "mongo": "127.0.0.1" to "mongo": "0.0.0.0"?
You can check a container's IP:
$ docker inspect <container_name> -f "{{json .NetworkSettings.Networks}}"
You can find the IPAddress attribute in the JSON output.
Yes, you should use a bridge network. The default "bridge" can be used but won't give you DNS resolution; check https://docs.docker.com/engine/userguide/networking/#user-defined-networks for details.
The easiest way is to use the --link option to avoid too many changes.
For example, --link mongo01:mongo instructs Docker to use the container named mongo01 as a linked container and to name it mongo inside your application container.
So in your application you can use mongo:27017 without making any changes.
Refer to this for more details:
https://www.thachmai.info/2015/05/10/docker-container-linking-mongo-node/
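Note that --link is a legacy feature; the user-defined network mentioned in the previous answer gives the same name-based resolution without it. A minimal sketch (app_net and my-app-image are made-up names):
# Both containers join the same user-defined bridge; the app can then
# reach MongoDB as mongo:27017 by container name.
docker network create app_net
docker run -d --name mongo --network app_net mongo
docker run -d --name myapp --network app_net my-app-image   # my-app-image is hypothetical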
