I run two containers with Docker Compose. The containers can communicate with each other by IP, but not by container name - the name is not resolved.
Output of docker network inspect for the network the containers run on:
docker network inspect mycompose
[
{
"Name": "mycompose",
"Id": "5d6f614b1a67efa38143adf745700cac103be07f74bcb219fd547aa8ce8abd1e",
"Created": "2019-11-07T17:08:49.940162+01:00",
"Scope": "local",
"Driver": "nat",
"EnableIPv6": false,
"IPAM": {
"Driver": "windows",
"Options": null,
"Config": [
{
"Subnet": "172.22.144.0/20",
"Gateway": "172.22.144.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"876efda0f26487b4c56dd52485e6f76ecc5c97214ae99cb0760a5cad4c65ea74": {
"Name": "mycompose_brim_1",
"EndpointID": "15a3049b1f79d5da3129030574fd16dc46ced5246f15fe00b08b778a2b8ab8ef",
"MacAddress": "00:15:5d:57:23:87",
"IPv4Address": "172.22.144.113/16",
"IPv6Address": ""
},
"b8c596491ae84a1da8d597ea6ab6edf5872405856520e46d8f35581f48314b5f": {
"Name": "mycompose_brimdb_1",
"EndpointID": "8bbf233cfeb57729570f581dd34f6677a6a6655dd64c2090dacc26b91604eb7c",
"MacAddress": "00:15:5d:57:25:be",
"IPv4Address": "172.25.113.185/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.windowsshim.hnsid": "48240B53-F49A-43B9-9A20-113C047D65A1"
},
"Labels": {
"com.docker.compose.network": "v2",
"com.docker.compose.project": "mycompose",
"com.docker.compose.version": "1.24.1"
}
}
]
Trying to ping one container from another by container name:
PS C:\inetpub\wwwroot> ping mycompose_brimdb_1
Ping request could not find host mycompose_brimdb_1. Please check the name and try again
Trying to ping by service name:
PS C:\inetpub\wwwroot> ping brimdb
Ping request could not find host brimdb. Please check the name and try again.
Trying to ping same container by IP:
PS C:\inetpub\wwwroot> ping 172.25.113.185
Pinging 172.25.113.185 with 32 bytes of data:
Reply from 172.25.113.185: bytes=32 time=1ms TTL=128
Reply from 172.25.113.185: bytes=32 time<1ms TTL=128
Reply from 172.25.113.185: bytes=32 time=5ms TTL=128
Reply from 172.25.113.185: bytes=32 time<1ms TTL=128
Ping statistics for 172.25.113.185:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 5ms, Average = 1ms
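Before looking at the compose file, a few hedged diagnostics from inside the container can show what its resolver actually sees. This is only a sketch; it assumes the images are Windows-based and include the usual networking tools, and the container name is taken from the inspect output above:
docker exec mycompose_brim_1 nslookup brimdb
docker exec mycompose_brim_1 ping -n 2 brimdb
docker exec mycompose_brim_1 ipconfig /all
If nslookup fails while ping by IP works, the problem is name resolution on the nat network rather than connectivity.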
compose file:
version: "3.7"
services:
brim:
image: brim:latest
ports:
- target: 50893
published: 50893
protocol: tcp
brimdb:
image: brimdb:latest
ports:
- target: 1555
published: 1555
protocol: tcp
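One way to narrow the problem down is to reproduce the two services on a freshly created user-defined nat network and see whether name resolution behaves differently there. This is only a sketch for comparison; the network and container names below are made up for the test:
docker network create -d nat testnat
docker run -d --name brimdb-test --network testnat brimdb:latest
docker run -d --name brim-test --network testnat brim:latest
docker exec brim-test ping -n 2 brimdb-test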
Related
I am trying to create a Docker swarm, and containers created on different machines are unable to talk to each other. I have done a lot of research, but I am still unable to figure out what is happening.
Swarm command: docker swarm init --default-addr-pool 30.30.0.0/16
Docker Compose file:
version: '3.9'
services:
  chrome:
    image: selenium/node-chrome:4.1.2-20220217
    shm_size: 2gb
    networks:
      - mynet
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    deploy:
      replicas: 1
    entrypoint: bash -c 'SE_OPTS="--host $$HOSTNAME" /opt/bin/entry_point.sh'
  selenium-hub:
    image: selenium/hub:4.1.2-20220217
    networks:
      - mynet
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
networks:
  mynet:
    driver: overlay
    attachable: true
After this stack is deployed, the chrome service is unable to find the selenium-hub service.
docker network inspect on the worker node looks like this:
[
{
"Name": "grid_mynet",
"Id": "dm4nx67j7tc6309fzp6rs7cl6",
"Created": "2022-03-08T09:57:17.654456805-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "30.30.3.0/24",
"Gateway": "30.30.3.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"bdf55ddb60384c47733a104f9e2478b3a1276383b2f02883b4e4bd082dbc9667": {
"Name": "grid_chrome.1.8iuwofyuk53xosbz3hmnk0ylb",
"EndpointID": "9c77c6b827e6ce224db3a8d4ee17590e96f258a20c4083be7a6b28cdc3af4065",
"MacAddress": "02:42:1e:1e:03:03",
"IPv4Address": "30.30.3.3/24",
"IPv6Address": ""
},
"lb-grid_mynet": {
"Name": "grid_mynet-endpoint",
"EndpointID": "29ba13bf815cb98bd507ec3a4934cfc3b3b1d7f819c139d0a8a1576bbfb7e935",
"MacAddress": "02:42:1e:1e:03:04",
"IPv4Address": "30.30.3.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "grid"
},
"Peers": [
{
"Name": "4bb6a3117f30",
"IP": "10.211.55.18"
}
]
}
]
docker network inspect on the manager node looks like this:
[
{
"Name": "grid_mynet",
"Id": "dm4nx67j7tc6309fzp6rs7cl6",
"Created": "2022-03-08T09:57:19.105558268-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "30.30.3.0/24",
"Gateway": "30.30.3.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"e6c571aad3215675727603e786ecf2902a0c9423dc8f5888523b8cb4f3a0fbb2": {
"Name": "grid_selenium-hub.1.klcnq45z2im4p1fpj2aoaxqq8",
"EndpointID": "7689edd38aae48c1e087ddebcc2dde7ad6f01fb1d7bfc979a1ae86d8b9de474f",
"MacAddress": "02:42:1e:1e:03:06",
"IPv4Address": "30.30.3.6/24",
"IPv6Address": ""
},
"lb-grid_mynet": {
"Name": "grid_mynet-endpoint",
"EndpointID": "fa6f18959fe04bf25f9e15970983999eb2d04f61f372408ac5dabdcb246e0751",
"MacAddress": "02:42:1e:1e:03:07",
"IPv4Address": "30.30.3.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "grid"
},
"Peers": [
{
"Name": "f3a79e1a65b5",
"IP": "10.211.55.16"
}
]
}
]
The /etc/hosts file in the chrome container looks like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
30.30.3.3 bdf55ddb6038
Pinging chrome from the chrome container produces this result:
seluser@bdf55ddb6038:/$ ping chrome
PING chrome (30.30.3.2) 56(84) bytes of data.
64 bytes from 30.30.3.2 (30.30.3.2): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from 30.30.3.2 (30.30.3.2): icmp_seq=2 ttl=64 time=0.078 ms
Pinging selenium-hub from the chrome container produces this:
seluser@bdf55ddb6038:/$ ping selenium-hub
ping: selenium-hub: Name or service not known
The /etc/hosts file in the selenium-hub container looks like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
30.30.3.6 e6c571aad321
Pinging chrome from the selenium-hub container produces this result:
seluser@e6c571aad321:/$ ping chrome
ping: chrome: No address associated with hostname
seluser@e6c571aad321:/$ ^C
seluser@e6c571aad321:/$ ping grid_chrome
ping: grid_chrome: Name or service not known
Pinging selenium-hub from the selenium-hub container produces this result:
seluser@e6c571aad321:/$ ping selenium-hub
PING selenium-hub (30.30.3.5) 56(84) bytes of data.
64 bytes from 30.30.3.5 (30.30.3.5): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 30.30.3.5 (30.30.3.5): icmp_seq=2 ttl=64 time=0.079 ms
The chrome container has this error in the end:
Caused by: java.net.UnknownHostException: selenium-hub
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
at zmq.io.net.tcp.TcpAddress.resolve(TcpAddress.java:132)
... 37 more
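A hedged way to check whether this is a DNS problem or an overlay connectivity problem is to query Docker's embedded resolver from inside the chrome task on the worker node. The container ID below is taken from the inspect output above, and getent should be available in the Ubuntu-based Selenium images (an assumption):
docker exec -it bdf55ddb6038 getent hosts selenium-hub        # service VIP, served by the embedded DNS at 127.0.0.11
docker exec -it bdf55ddb6038 getent hosts tasks.selenium-hub  # individual task IPs behind the service
If neither entry resolves, the worker node is most likely not receiving service records over the swarm control plane, which points at the inter-node ports discussed later on this page.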
I have many processing tasks and need many hosts to distribute them to. For my use case I am using zmq's malamute because I am familiar with and favor zmq, but I would be glad to hear suggestions for other libraries that make it easy to distribute tasks to workers.
During development I am trying to connect one container, a worker, to a swarm manager that runs a malamute broker. What I am developing will eventually be a global-mode Docker service, but while writing the code I need to give commands interactively, so I am using a container attached to a shared overlay network.
#### manager node, host 1
docker service create --replicas 0 --name malamute zeromqorg/malamute
#docker service update --publish-add 9999 malamute
docker service update --publish-add published=9999,target=9999 malamute
docker network create --attachable --driver overlay primarynet
docker service update --network-add primarynet malamute
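# Optional sanity checks after the updates above (a sketch; the exact field layout can vary by Docker version):
docker service inspect malamute --format '{{json .Endpoint.Ports}}'               # should show published port 9999
docker service inspect malamute --format '{{json .Spec.TaskTemplate.Networks}}'   # should list primarynet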
### docker inspect primarynet
[
{
"Name": "primarynet",
"Id": "b7vq0p0purgiykebtykdn7syh",
"Created": "2021-11-08T13:34:08.600557322-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "2fe7789b4c50eeca7d19007e447aa32a3bb8d6a33435051d84ba04e34f206446",
"MacAddress": "02:42:0a:00:02:0f",
"IPv4Address": "10.0.2.15/24",
"IPv6Address": ""
},
"lb-primarynet": {
"Name": "primarynet-endpoint",
"EndpointID": "9b3926bcfea39d77a48ac4fcc1c5df37c1dd5e7add6b790bc9cbc549a5bea589",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
}
]
}
]
### docker inspect ingress
docker inspect ingress
[
{
"Name": "ingress",
"Id": "od7bn815iuxyq4v9jzwe17n4p",
"Created": "2021-11-08T10:59:00.304709057-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "bb87dbf390d23c8992c4f2b27597af80bb8d3b96b3bd62aa58618cca82cc0426",
"MacAddress": "02:42:0a:00:00:0a",
"IPv4Address": "10.0.0.10/24",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "2c1ef914e0aa506756b779ece181c7d8efb8697b71cb5dce0db1d9426137660f",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
},
{
"Name": "c67bb64801c0",
"IP": "192.168.1.143"
}
]
}
]
This is where the error occurs
#### worker node, host 2
docker swarm join --token <token> 192.168.1.106:2377
docker run --net primarynet -ti zeromqorg/malamute /bin/bash
### within the container
export PYTHONPATH=/home/zmq/malamute/bindings/python/:/home/zmq/czmq/bindings/python
python3
>>> from malamute import MalamuteClient
>>> client=MalamuteClient()
>>> client.connect(b'tcp://manager.addr.ip.addr:9999', 100, b'service')
### failure happens on the connect, but it should go through.
### the error output:
I: 21-11-10 07:12:59 My address is 'service'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 51, in connect
"Could not connect to malamute server at {!r}", endpoint,
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 41, in _check_error
fmt.format(*args, **kw) + ': ' + str(reason)
malamute.MalamuteError: Could not connect to malamute server at <addr>: Server is not reachable
# the malamute server is confirmed to be reachable with ping from the worker
zmq@8e2256f6a806:~/malamute$ ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.646 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.399 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.398 ms
64 bytes from 10.0.2.15: icmp_seq=4 ttl=64 time=0.401 ms
^C
--- 10.0.2.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.398/0.461/0.646/0.106 ms
The hostnames of the worker and manager hosts also respond to ping, with output similar to the above (ping manager..., ping worker...). But if I enter the container ID of the service running on the swarm manager as the argument to ping from the worker, it doesn't work.
After reading the guides, I don't know why the connect error occurs. Basically, once I'm done, the connect call needs to go through and then the rest of the script should execute. I will then script what I need executed and create a docker service called worker.
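For reference, a minimal sketch of what could be tried from the attached worker container: resolve the service by name on the shared overlay and connect to that, instead of going through the node address and published port. The service name malamute and the task IP come from the commands and inspect output above; whether getent is present in the zeromqorg/malamute image is an assumption:
export PYTHONPATH=/home/zmq/malamute/bindings/python/:/home/zmq/czmq/bindings/python
getent hosts malamute          # service VIP from Docker's embedded DNS
getent hosts tasks.malamute    # task IPs; 10.0.2.15 from the inspect output should appear here
python3 -c "from malamute import MalamuteClient; MalamuteClient().connect(b'tcp://malamute:9999', 100, b'service')"
If the connect by service name succeeds while the node address does not, the problem is in the ingress routing mesh rather than in the application.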
I've got a stack with some containers. One of them can't be reached by the others by its hostname, and it seems to be an IP address problem.
docker network inspect mystack
"Name": "mystack_default",
"Id": "k9tanhwcyv42473ehsehqhqp7",
"Created": "2019-08-22T16:10:45.097992076+02:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5d8e4b8cba8889036a869a280d5996f104f250677b8c962dc45ba72441e1840d": {
"Name": "mystack_api.1.t4oax9f5uyw0237h2ysgwrzxq",
"EndpointID": "34037c244f828e035c54b5ef3f21f020cf046218b39ffc2835dd4156f3d2b688",
"MacAddress": "02:42:0a:00:02:23",
"IPv4Address": "10.0.2.35/24",
"IPv6Address": ""
},
"49f6a8444475fdcea2f96bdb7fbc62b908b5cd83175c3068a675761e64500e0e": {
"Name": "mystack_webview.1.biby87oba9z3awkb3n4439yho",
"EndpointID": "d9c0551a0213e38651c352970d5970b3f80b067676b3fb959845e139b7261c1a",
"MacAddress": "02:42:0a:00:02:20",
"IPv4Address": "10.0.2.32/24",
"IPv6Address": ""
},
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "mystack"
},
"Peers": [
{
"Name": "1b4f79e8e881",
"IP": "192.168.1.67"
}
]
The api service can ping the webview service using its hostname, but it is not the correct IP of my webview service:
# ping webview
PING webview (10.0.2.17) 56(84) bytes of data. // NOT THE CORRECT IP! (it should be 10.0.2.32)
64 bytes from 10.0.2.17 (10.0.2.17): icmp_seq=1 ttl=64 time=0.126 ms
64 bytes from 10.0.2.17 (10.0.2.17): icmp_seq=2 ttl=64 time=0.099 ms
The webview service can't ping the api service using its hostname (bad address error), but it works with the IP address of the service:
/app # ping 10.0.2.35
PING 10.0.2.35 (10.0.2.35): 56 data bytes
64 bytes from 10.0.2.35: seq=0 ttl=64 time=0.331 ms
64 bytes from 10.0.2.35: seq=1 ttl=64 time=0.140 ms
^C
--- 10.0.2.35 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.140/0.235/0.331 ms
/app # ping api
ping: bad address 'api'
There is a problem with the Docker network, but I don't know how to solve it. I have already uninstalled and reinstalled Docker and removed the Docker eth entries... Any ideas? Thank you very much for your help!
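One hedged thing to check with a stale-DNS symptom like this is whether the VIPs the services actually own match what the resolver hands out, and whether recreating the tasks clears it up. A sketch, using the service names implied by the stack above:
docker service inspect mystack_webview --format '{{json .Endpoint.VirtualIPs}}'   # the VIP the resolver should return for "webview"
docker service inspect mystack_api --format '{{json .Endpoint.VirtualIPs}}'
docker service update --force mystack_webview                                     # recreate the task; may refresh stale DNS entries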
I set up a Docker bridge network (on Linux) for the purpose of testing what the network traffic of individual applications (containers) looks like. Therefore, a key requirement for the network is that it is completely isolated from traffic that originates from other applications or devices.
A simple example I created with Compose is a ping container that sends ICMP packets to another one, with a third container running tcpdump to collect the traffic:
version: '3'
services:
  ping:
    image: 'detlearsom/ping'
    environment:
      - HOSTNAME=blank
      - TIMEOUT=2
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  blank:
    image: 'alpine'
    command: sleep 300
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  tcpdump:
    image: 'detlearsom/tcpdump'
    volumes:
      - '$PWD/data:/data'
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    network_mode: 'service:ping'
    command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"
networks:
  capture:
    driver: "bridge"
    internal: true
Note that I have set the network to internal, and I have also disabled IPv6. However, when I run it and collect the traffic, in addition to the expected ICMP packets I get IPv6 packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port corresponds to Spotify traffic and most likely originates from the Spotify application running on the host machine.
My question: Why do I see this traffic in my network that is supposed to be isolated?
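One possible explanation to rule out (an assumption, not a confirmed diagnosis): the sysctls in the compose file disable IPv6 inside the containers, but the host-side Linux bridge that Docker creates for the network still has IPv6 enabled, and the host keeps an address on that bridge (the gateway, 172.25.0.1), so host-originated broadcasts are not blocked by internal. A sketch for checking the bridge on the host; the br-<short id> naming is Docker's usual convention for user-defined bridge networks:
NET_ID=$(docker network inspect -f '{{.Id}}' capture-011-ping2_capture)
BRIDGE="br-${NET_ID:0:12}"                        # user-defined bridges are named br-<first 12 chars of the network ID>
sysctl "net.ipv6.conf.${BRIDGE}.disable_ipv6"     # 0 means IPv6 is still active on the host-side bridge
sudo sysctl -w "net.ipv6.conf.${BRIDGE}.disable_ipv6=1"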
For anyone interested, here is the network configuration:
[
{
"Name": "capture-011-ping2_capture",
"Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
"Created": "2018-08-07T10:42:31.610178964+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": true,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
"Name": "capture-011-ping2_tcpdump_1",
"EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "capture",
"com.docker.compose.project": "capture-011-ping2",
"com.docker.compose.version": "1.22.0"
}
}
]
I am trying to connect my Docker services together in Docker swarm.
The network is made up of 2 Raspberry Pis.
I can create an overlay network called test-overlay, and I can see that services on either Raspberry Pi node can connect to the network.
My problem:
I cannot reach services on other nodes over the overlay network.
Given the following configuration of nodes and services, service1 can use the address http://service2 to connect to service2, but it does NOT work for http://service3. However, http://service3 is accessible from service4.
node1:
- service1
- service2
node2:
- service3
- service4
I am new to Docker swarm and any help is appreciated.
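To narrow down whether this is a DNS problem or an overlay data-path problem, a few checks from inside a service1 container may help. This is only a sketch; the placeholders must be filled in with a real container ID and task IP, and it assumes getent and ping exist in the image:
docker exec -it <service1-container-id> getent hosts service3          # does the name resolve to a VIP at all?
docker exec -it <service1-container-id> getent hosts tasks.service3    # task IPs of service3 on node2
docker exec -it <service1-container-id> ping -c 2 <task-ip-from-above> # only works if overlay traffic (UDP 4789) passes between the nodes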
Inspecting the overlay
I have run the command sudo docker network inspect test-overlay on both nodes.
On the master node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-15T20:31:20.629719732Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3acb436a0cc9a4d584d537edb1546988d334afa4793cc4fae4dd6ac9b48828ea": {
"Name": "docker-registry.1.la1myuodpkq0x5h39pqo6lt7f",
"EndpointID": "66887fb1f5f253c6cbec149aa51ab85168903fdd2290719f26d2bcd8d6c68dc8",
"MacAddress": "02:42:0a:00:00:04",
"IPv4Address": "10.0.0.4/24",
"IPv6Address": ""
},
"786e1fee538f81fe41ccd082800c646a0e191b0fd912e5c15530e61c248e81ac": {
"Name": "portainer.1.qyvvlcdqo5sewuku3eiykaplz",
"EndpointID": "0d29e5452c208ed637ae2e7dcec026f39d2431e8e0e20765a9e0e6d6dfdc60ca",
"MacAddress": "02:42:0a:00:00:15",
"IPv4Address": "10.0.0.21/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
On the worker node this returns the following:
[
{
"Name": "test-overlay",
"Id": "skxhz8sb3f82dhh9jt9t3j5yl",
"Created": "2018-04-20T14:04:57.870696195Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4cb50161119e4b58a472e1b5c380c301bbb00a23fc99fc2e0712a8c4bde6d9d4": {
"Name": "minio.1.fo2su2quv8herbmnxqfi3g8w2",
"EndpointID": "3e85786304ed08f02c09b8e1ed6a153a3b4c2ef7afe503a1b0ca6cf341521645",
"MacAddress": "02:42:0a:00:00:d6",
"IPv4Address": "10.0.0.214/24",
"IPv6Address": ""
},
"ce99b3788a4f9438e276e0f52a8f4d29fa09179e3e93b31b14f45339ce3c5315": {
"Name": "load-balancer.1.j64h1eecsc05b7d397ejvedv3",
"EndpointID": "3b7e73d27fe30151f2dc2a0ba8a5afc7f74fd283159a03a592be10e297f58d51",
"MacAddress": "02:42:0a:00:00:d0",
"IPv4Address": "10.0.0.208/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "d049fc8f8ae1",
"IP": "192.168.1.2"
},
{
"Name": "6c0da128f308",
"IP": "192.168.1.3"
}
]
}
]
It seems this problem was caused by the nodes not being able to reach each other on the required ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Before you open those ports, note that a better and simpler solution is to use the docker image portainer/agent. As the documentation says,
The Portainer Agent is a workaround for a Docker API limitation when using the Docker API to manage a Docker environment.
https://portainer.readthedocs.io/en/stable/agent.html
I hope this helps anyone else experiencing this problem.
I am not able to leave a comment yet, but I managed to solve this issue with the solution provided by X0r0N, and I am leaving this answer to help people in my position find a solution in the future.
I was deploying 10 Droplets on DigitalOcean with the default Docker image provided by Docker. The description says that it closes all ports except those related to Docker; this clearly does not cover the Swarm use case.
After allowing ports 2377, 4789 and 7946 in ufw, the Docker swarm is now working as expected (a ufw sketch follows the list below).
To make this answer stand on its own, the ports map to the following functionality:
TCP port 2377: Cluster Management Communication
TCP and UDP port 7946: Communication between nodes
UDP port 4789: Overlay Network Traffic
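For reference, a minimal sketch of the ufw rules described above, assuming ufw is the active firewall and the rules are applied on every node:
sudo ufw allow 2377/tcp    # cluster management
sudo ufw allow 7946/tcp    # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp    # overlay (VXLAN) traffic
sudo ufw reload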
Check that your nodes have the ports the swarm needs to operate opened, as described under "Prerequisites" at https://docs.docker.com/network/overlay/:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
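A quick, hedged way to verify reachability on those ports between two nodes; the address below is a placeholder, and UDP checks with nc are best-effort only:
nc -zv <other-node-ip> 2377      # cluster management (TCP)
nc -zv <other-node-ip> 7946      # node communication (TCP)
nc -zvu <other-node-ip> 7946     # node communication (UDP)
nc -zvu <other-node-ip> 4789     # overlay / VXLAN traffic (UDP)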