Customizing docker bridge network

I want my docker0 bridge and all containers to have the same gateway address, or be in the same IP range, as my local machine. I started by defining a fixed-cidr in the daemon.json file, /etc/docker/daemon.json:
{
  "bip": "10.80.44.248/24",
  "fixed-cidr": "10.80.44.250/25",
  "mtu": 1500,
  "default-gateway": "10.80.44.254",
  "dns": ["10.80.41.14"]
}
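Note that daemon.json changes only take effect after the Docker daemon is restarted; on a systemd host such as this Debian machine that would be:
sudo systemctl restart docker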
It seems to be working, judging by the output of ifconfig, although docker0 has not received any data since:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.80.44.248 netmask 255.255.255.0 broadcast 10.80.44.255
ether 02:42:9c:b9:e1:63 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.80.44.39 netmask 255.255.255.0 broadcast 10.80.44.255
inet6 fe80::250:56ff:feb1:79e4 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:b1:79:e4 txqueuelen 1000 (Ethernet)
RX packets 211061 bytes 30426474 (29.0 MiB)
RX errors 0 dropped 33861 overruns 0 frame 0
TX packets 3032 bytes 260143 (254.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The local machine and docker0 are in the same IP range with the same gateway. Good.
But when I start the containers and inspect the bridge settings, everything is different. This is the output of
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "b326a37a589245449e1268bbb9ee65262eb7986574c0e972c56d350aa82d7238",
"Created": "2018-04-04T03:25:52.00544539+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.80.44.248/24",
"IPRange": "10.80.44.128/25",
"Gateway": "10.80.44.248",
"AuxiliaryAddresses": {
"DefaultGatewayIPv4": "10.80.44.254"
}
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
I don't understand why the IPAM config now has an IPv4 address as an auxiliary address:
"AuxiliaryAddresses": {
"DefaultGatewayIPv4": "10.80.44.254"
}
I realised that the bridge for my containers is not created from the subnet configured in the daemon: Docker created two different bridges with different IP ranges, and the second one still uses Docker's default.
docker network ls
NETWORK ID     NAME                   DRIVER   SCOPE
b326a37a5892   bridge                 bridge   local
6ce11066cdea   dockergitlab_default   bridge   local
d5a36c04b809   host                   host     local
15f66b88ee67   none                   null     local
docker network inspect dockergitlab_default
[
{
"Name": "dockergitlab_default",
"Id": "6ce11066cdeabf3cfe65b2dff22046bd1e9c18d2588f47b9cd3c52ea24f7a636",
"Created": "2018-03-14T08:56:23.351051727+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"60f769c91cff1de47794a7c8b587b778488883da094ae32cfde5196ee0f528f1": {
"Name": "gitlab-runner",
"EndpointID": "5122fe862537fb8434a484b4797153274b945e20bc3c7223efc6fd0bd55eae14",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"9c46e1fde6390142bddf67270cfeda7b3e68b1a6e68cabc334046db687240a8d": {
"Name": "dockergitlab_postgresql_1",
"EndpointID": "8488b32cc34a2c92308528de74b5eddcecac12a402ee6e67c1ef0f2750b72721",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"eaf29f5f405cbf9bdd918efad26ceae1a8c3f58f4bef0aa8fd86b4631bcfdf43": {
"Name": "dockergitlab_gitlab_1",
"EndpointID": "d7f78ee9bd51dd13826d7834470d03a9084fc7ab8c6567c0181acecc221628c6",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"f460687ec00eff214fa08adfe9a0af5b85c392ceb470c4ed630ef7ecb0bfcba1": {
"Name": "dockergitlab_redis_1",
"EndpointID": "8b18906f1c79a5faaadd32afdef20473f9b635e9a1cd2c7108dd98df48eaed86",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "dockergitlab"
}
}
]
I have no idea why this docker bridge is created with the old default IP range.
LOCAL SYSTEM Details
I can run apt update on the local machine, but when I log into the gitlab-runner container I can't run apt update.
Linux 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) x86_64
Docker version 17.12.0-ce, build c97c6d6
docker-compose version 1.18.0, build 8dd22a9
Is there a way I can override the bridge settings? From what I have read, when I define/configure the CIDR and gateway in the daemon.json file, everything should be taken from there for the creation of the bridge network and all other containers.
Thanks in advance for your help.

First of all, you've correctly configured the docker0 bridge: starting containers with a plain docker run command should connect them to the bridge and give them IPs in 10.80.44.250/25.
From what you've pasted, I guess you're using docker-compose to start your containers.
docker-compose will create a myproject_default network per docker-compose.yml if you don't specify anything.
Today you cannot choose the pool from which these IP ranges are picked; by default it is 172.[17-31].0.0/16. There is currently an active pull request to allow overriding this behaviour: https://github.com/moby/moby/pull/36396.
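For readers on a newer Docker release: that work eventually landed as the default-address-pools option in daemon.json. A sketch, assuming a version that includes it (the 10.80.0.0/16 base is illustrative, not from the question):
{
  "default-address-pools": [
    { "base": "10.80.0.0/16", "size": 24 }
  ]
}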
If you want to manually specify the IP range in your docker-compose.yml, you can write this:
networks:
  default:
    ipam:
      config:
        - subnet: 10.80.44.250/25
Edit: this is only compatible with docker-compose syntax >= 3.0.
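For context, a complete minimal docker-compose.yml using this override might look like the sketch below; the web service and its image are illustrative, not from the question:
version: '3'
services:
  web:
    image: nginx
networks:
  default:
    ipam:
      config:
        - subnet: 10.80.44.250/25
With this file, docker-compose up creates the myproject_default network with the given subnet instead of one from the 172.[17-31] pool.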

Related

Containers unable to communicate on same EC2 host

I have a multi-container application deployed on an EC2 instance via a single ECS task. When I try making an HTTP request to container-2 from container-1, I get the error "Name or service not known."
I'm unable to reproduce this locally when I run with docker compose. I'm using the bridge network mode. I've SSH'd into the EC2 instance and can see that both containers are on the bridge network. (I've unsuccessfully tried awsvpc as well and that led to a different set of issues... so I'll save that for a separate post if necessary.)
Here's a snippet of my task-definition.json:
{
  ...
  "containerDefinitions": [
    {
      "name": "container-1",
      "image": "container-1",
      "portMappings": [
        {
          "hostPort": 8081,
          "containerPort": 8081,
          "protocol": "tcp"
        }
      ]
    },
    {
      "name": "container-2",
      "image": "container-2",
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080,
          "protocol": "tcp"
        }
      ]
    }
  ],
  "networkMode": "bridge",
  ...
}
EDIT1 - Adding some of my ifconfig output; let me know if I need to add more.
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:a7ff:febd:55df prefixlen 64 scopeid 0x20<link>
ether 02:42:a7:bd:55:df txqueuelen 0 (Ethernet)
RX packets 842 bytes 55315 (54.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 614 bytes 78799 (76.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ecs-bridge: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 169.254.172.1 netmask 255.255.252.0 broadcast 0.0.0.0
inet6 fe80::c5a:1bff:fed4:525f prefixlen 64 scopeid 0x20<link>
ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 23 bytes 1890 (1.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 3760 bytes 274480 (268.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3760 bytes 274480 (268.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
EDIT2 - docker inspect bridge
[
{
"Name": "bridge",
"Id": "...",
"Created": "...",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "XXX",
"Gateway": "XXX"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"somehash": {
"Name": "container-1",
"EndpointID": "XXX",
"MacAddress": "XXX",
"IPv4Address": "XXX",
"IPv6Address": ""
},
"somehash": {
"Name": "container-2",
"EndpointID": "XXX",
"MacAddress": "XXX",
"IPv4Address": "XXX",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
To allow containers in a single task (on the EC2 launch type with bridge network mode) to communicate with each other, you need to specify the links attribute to map containers to internal network names. This is documented here.
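A sketch of what that could look like in the containerDefinitions above; the links value uses the name or name:alias form, and which container links to which is my assumption here:
{
  ...
  "containerDefinitions": [
    {
      "name": "container-1",
      "image": "container-1",
      "links": ["container-2"],
      "portMappings": [
        { "hostPort": 8081, "containerPort": 8081, "protocol": "tcp" }
      ]
    },
    {
      "name": "container-2",
      "image": "container-2",
      "portMappings": [
        { "hostPort": 8080, "containerPort": 8080, "protocol": "tcp" }
      ]
    }
  ],
  "networkMode": "bridge",
  ...
}
With the link in place, container-1 should be able to reach http://container-2:8080.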

ping: proxy: Temporary failure in name resolution in docker container network ("proxy" is a container name running in docker)

Here I have two containers, proxy and my_ngnix, running inside Docker.
Both containers are inside one network, "bridge", as shown below:
C:\> docker network inspect bridge
[
{
"Name": "bridge",
"Id": "82ffe522177d113af71d150c96b5e43df8946b9f17f901152cc2b4b96caf313a",
"Created": "2022-12-25T14:38:40.7492988Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"008e6142e13e91624e89b26ba697bff60765965a09daedafa6db766f30b6beb9": {
"Name": "proxy",
"EndpointID": "ed0536b6b97d9ad00b8deeb8dc5ad6f91e9809af3fc4032dfca4abd02760cc71",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"e215ead97a7e6580d3f6fec0f25790771af9f878b4194d61fa041b218a2117bf": {
"Name": "my_ngnix",
"EndpointID": "f0e7a6e824c18587a1cb32de89dbc8a1ec2fa62dcc7fe38516d294d7fdb19606",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
the moment I hit the command
C:\> docker container exec -it my_ngnix ping proxy
it shows me the error: ping: proxy: Temporary failure in name resolution
Can anyone help me here?
I have installed all the required updates in the containers so that I can use the ping command.
Why are the containers not able to ping each other by container name? The same command works with an IP address.
I tried the same command using the IP address, as below:
docker container exec -it my_ngnix ping 172.17.0.3
It works; however, it won't work with the container name.
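For context, the likely cause (my reading, not stated in the thread): Docker's embedded DNS resolves container names only on user-defined networks, and the default bridge network is not one of them. A minimal sketch of the usual fix, using the container names above (the network name mynet is made up):
docker network create mynet
docker network connect mynet proxy
docker network connect mynet my_ngnix
docker container exec -it my_ngnix ping proxy
On the user-defined network, the name proxy should now resolve.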

How to connect docker container and swarm in different hosts

I have many processing tasks and require many hosts to distribute these tasks to. For my use case I am using zmq's malamute because of familiarity and my preference for zmq, but I would be glad to hear suggestions for other libraries that would make it easy to distribute tasks to workers.
I am trying to connect one container, a worker, to a swarm manager, which is a malamute broker, during development. I am developing what will be a global-mode docker service. During development I need to give commands interactively to write my code, so I am using a container attached to a shared overlay network.
#### manager node, host 1
docker service create --replicas 0 --name malamute zeromqorg/malamute
#docker service update --publish-add 9999 malamute
docker service update --publish-add published=9999,target=9999 malamute
docker network create --attachable --driver overlay primarynet
docker service update --network-add primarynet malamute
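# To double-check that the service picked up the new network after these
# updates, something like this can help (standard docker CLI, not from the
# original post):
# docker service inspect --format '{{json .Spec.TaskTemplate.Networks}}' malamute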
### docker inspect primarynet
[
{
"Name": "primarynet",
"Id": "b7vq0p0purgiykebtykdn7syh",
"Created": "2021-11-08T13:34:08.600557322-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "2fe7789b4c50eeca7d19007e447aa32a3bb8d6a33435051d84ba04e34f206446",
"MacAddress": "02:42:0a:00:02:0f",
"IPv4Address": "10.0.2.15/24",
"IPv6Address": ""
},
"lb-primarynet": {
"Name": "primarynet-endpoint",
"EndpointID": "9b3926bcfea39d77a48ac4fcc1c5df37c1dd5e7add6b790bc9cbc549a5bea589",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
}
]
}
]
### docker inspect ingress
docker inspect ingress
[
{
"Name": "ingress",
"Id": "od7bn815iuxyq4v9jzwe17n4p",
"Created": "2021-11-08T10:59:00.304709057-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "bb87dbf390d23c8992c4f2b27597af80bb8d3b96b3bd62aa58618cca82cc0426",
"MacAddress": "02:42:0a:00:00:0a",
"IPv4Address": "10.0.0.10/24",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "2c1ef914e0aa506756b779ece181c7d8efb8697b71cb5dce0db1d9426137660f",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
},
{
"Name": "c67bb64801c0",
"IP": "192.168.1.143"
}
]
}
]
This is where the error occurs
#### worker node, host 2
docker swarm join --token <token> 192.168.1.106:2377
docker run --net primarynet -ti zeromqorg/malamute /bin/bash
### within the container
export PYTHONPATH=/home/zmq/malamute/bindings/python/:/home/zmq/czmq/bindings/python
python3
>>> from malamute import MalamuteClient
>>> client=MalamuteClient()
>>> client.connect(b'tcp://manager.addr.ip.addr:9999', 100, b'service')
### failure happens on the connect, but it should go through.
### the error output:
I: 21-11-10 07:12:59 My address is 'service'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 51, in connect
"Could not connect to malamute server at {!r}", endpoint,
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 41, in _check_error
fmt.format(*args, **kw) + ': ' + str(reason)
malamute.MalamuteError: Could not connect to malamute server at <addr>: Server is not reachable
# the malamute server is confirmed to be reachable with ping from the worker
zmq@8e2256f6a806:~/malamute$ ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.646 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.399 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.398 ms
64 bytes from 10.0.2.15: icmp_seq=4 ttl=64 time=0.401 ms
^C
--- 10.0.2.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.398/0.461/0.646/0.106 ms
The hostnames of the worker and manager hosts are also confirmed with ping, with output similar to the above (ping manager..., ping worker...). But if I pass the docker ID of the service running on the swarm manager as the argument to ping on the worker, it doesn't work.
After reading the guides, I don't know why the connect error occurs. Basically, once I'm done, the connect call needs to go through and then the rest of the script should execute. I will then script what I need executed and create a docker service called worker.
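For context, one thing worth trying (my suggestion, not from the thread): containers attached to the same overlay network can usually reach a service through Docker's DNS by its service name, which avoids depending on the host IP and the published port. Inside the worker container, that would look roughly like:
python3
>>> from malamute import MalamuteClient
>>> client = MalamuteClient()
>>> client.connect(b'tcp://malamute:9999', 100, b'service')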

Docker isolated network receives packets from outside

I set up a docker bridge network (on Linux) to test what the network traffic of individual applications (containers) looks like. A key requirement for the network is therefore that it is completely isolated from traffic that originates from other applications or devices.
A simple example I created with compose is a ping container that sends ICMP packets to another one, with a third container running tcpdump to collect the traffic:
version: '3'
services:
  ping:
    image: 'detlearsom/ping'
    environment:
      - HOSTNAME=blank
      - TIMEOUT=2
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  blank:
    image: 'alpine'
    command: sleep 300
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  tcpdump:
    image: 'detlearsom/tcpdump'
    volumes:
      - '$PWD/data:/data'
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    network_mode: 'service:ping'
    command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"
networks:
  capture:
    driver: "bridge"
    internal: true
Note that I have set the network to internal, and I have also disabled IPv6. However, when I run it and collect the traffic, I get IPv6 packets in addition to the expected ICMP packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port corresponds to Spotify traffic and most likely originates from the Spotify application running on my host machine.
My question: Why do I see this traffic in my network that is supposed to be isolated?
For anyone interested, here is the network configuration:
[
{
"Name": "capture-011-ping2_capture",
"Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
"Created": "2018-08-07T10:42:31.610178964+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": true,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
"Name": "capture-011-ping2_tcpdump_1",
"EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "capture",
"com.docker.compose.project": "capture-011-ping2",
"com.docker.compose.version": "1.22.0"
}
}
]
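A plausible explanation, for context (my reading, not from the thread): internal only prevents the bridge from being routed beyond the host, but the host itself still owns the gateway address (172.25.0.1 above) on the underlying Linux bridge. Host-originated broadcast and multicast, such as Spotify's UDP 57621 discovery, and the bridge interface's own IPv6 link-local chatter therefore still reach the segment. One way to check, as a sketch; the bridge device is named br- plus the first 12 characters of the network ID:
ip addr show br-35512f852332
If 172.25.0.1/16 is assigned to the host here, anything the host broadcasts on this interface is visible to the containers.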

Docker Swarm Overlay Network Not Working Between Nodes

I am trying to connect my docker services together in docker swarm.
The network is made of 2 Raspberry Pis.
I can create an overlay network called test-overlay, and I can see that services on either Raspberry Pi node can connect to the network.
My problem:
I cannot link to services between nodes with the overlay network.
Given the following configuration of nodes and services, service1 can use the address http://service2 to connect to service2, but it does NOT work for http://service3. However, http://service3 is accessible from service4.
node1:
- service1
- service2
node2:
- service3
- service4
I am new to docker swarm and any help is appreciated.
Inspecting the overlay
I have run the command sudo docker network inspect test-overlay on both nodes.
On the master node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-15T20:31:20.629719732Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3acb436a0cc9a4d584d537edb1546988d334afa4793cc4fae4dd6ac9b48828ea": {
"Name": "docker-registry.1.la1myuodpkq0x5h39pqo6lt7f",
"EndpointID": "66887fb1f5f253c6cbec149aa51ab85168903fdd2290719f26d2bcd8d6c68dc8",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/24",
"IPv6Address": ""
},
"786e1fee538f81fe41ccd082800c646a0e191b0fd912e5c15530e61c248e81ac": {
"Name": "portainer.1.qyvvlcdqo5sewuku3eiykaplz",
"EndpointID": "0d29e5452c208ed637ae2e7dcec026f39d2431e8e0e20765a9e0e6d6dfdc60ca",
"MacAddress": "02:42:0a:00:00:15",
"IPv4Address": "10.0.0.21/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
On the worker node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-20T14:04:57.870696195Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4cb50161119e4b58a472e1b5c380c301bbb00a23fc99fc2e0712a8c4bde6d9d4": {
"Name": "minio.1.fo2su2quv8herbmnxqfi3g8w2",
"EndpointID": "3e85786304ed08f02c09b8e1ed6a153a3b4c2ef7afe503a1b0ca6cf341521645",
"MacAddress": "02:42:0a:00:00:d6",
"IPv4Address": "10.0.0.214/24",
"IPv6Address": ""
},
"ce99b3788a4f9438e276e0f52a8f4d29fa09179e3e93b31b14f45339ce3c5315": {
"Name": "load-balancer.1.j64h1eecsc05b7d397ejvedv3",
"EndpointID": "3b7e73d27fe30151f2dc2a0ba8a5afc7f74fd283159a03a592be10e297f58d51",
"MacAddress": "02:42:0a:00:00:d0",
"IPv4Address": "10.0.0.208/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
It seems this problem was caused by the nodes not being able to connect to each other on the required ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Before you open those ports, though, consider that a better and simpler solution is to use the docker image portainer/agent. As the documentation says,
The Portainer Agent is a workaround for a Docker API limitation when using the Docker API to manage a Docker environment.
https://portainer.readthedocs.io/en/stable/agent.html
I hope this helps anyone else experiencing this problem.
I am not able to leave a comment yet, but I managed to solve this issue with the solution provided by X0r0N, and I am leaving this answer to help people in my position find a solution in the future.
I was deploying 10 Droplets in DigitalOcean with the default Docker image provided by Docker. Its description says that it closes all ports except those related to Docker, which clearly does not cover the Swarm use case.
After allowing ports 2377, 4789 and 7946 in ufw, Docker Swarm now works as expected.
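For reference, a sketch of the corresponding ufw rules for Docker's documented swarm ports:
sudo ufw allow 2377/tcp
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp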
To make this answer stand on its own, the ports map to the following functionality:
TCP port 2377: Cluster Management Communication
TCP and UDP port 7946: Communication between nodes
UDP port 4789: Overlay Network Traffic
Check that your nodes have the ports the swarm needs to operate opened properly, as described under "Prerequisites" at https://docs.docker.com/network/overlay/:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
