Docker containers unable to talk to each other - docker

I am trying to create a Docker swarm, and containers created on different machines are unable to talk to each other. I have done a lot of research, but I am still unable to figure out what is happening.
Swarm command : docker swarm init --default-addr-pool 30.30.0.0/16
Docker compose file :
version: '3.9'
services:
  chrome:
    image: selenium/node-chrome:4.1.2-20220217
    shm_size: 2gb
    networks:
      - mynet
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    deploy:
      replicas: 1
    entrypoint: bash -c 'SE_OPTS="--host $$HOSTNAME" /opt/bin/entry_point.sh'
  selenium-hub:
    image: selenium/hub:4.1.2-20220217
    networks:
      - mynet
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
networks:
  mynet:
    driver: overlay
    attachable: true
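For reference, a compose file like this is deployed to the swarm as a stack; the stack name `grid` is inferred from the `grid_mynet` network name that appears later in the question:

```shell
# Deploy the compose file as a swarm stack named "grid"
docker stack deploy --compose-file docker-compose.yml grid

# Confirm both services were scheduled and have running replicas
docker stack services grid
```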
After this stack is deployed, the chrome service is unable to find the selenium-hub service.
docker network inspect on the worker node looks like this
[
{
"Name": "grid_mynet",
"Id": "dm4nx67j7tc6309fzp6rs7cl6",
"Created": "2022-03-08T09:57:17.654456805-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "30.30.3.0/24",
"Gateway": "30.30.3.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"bdf55ddb60384c47733a104f9e2478b3a1276383b2f02883b4e4bd082dbc9667": {
"Name": "grid_chrome.1.8iuwofyuk53xosbz3hmnk0ylb",
"EndpointID": "9c77c6b827e6ce224db3a8d4ee17590e96f258a20c4083be7a6b28cdc3af4065",
"MacAddress": "02:42:1e:1e:03:03",
"IPv4Address": "30.30.3.3/24",
"IPv6Address": ""
},
"lb-grid_mynet": {
"Name": "grid_mynet-endpoint",
"EndpointID": "29ba13bf815cb98bd507ec3a4934cfc3b3b1d7f819c139d0a8a1576bbfb7e935",
"MacAddress": "02:42:1e:1e:03:04",
"IPv4Address": "30.30.3.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "grid"
},
"Peers": [
{
"Name": "4bb6a3117f30",
"IP": "10.211.55.18"
}
]
}
]
docker network inspect on the manager node looks like this
[
{
"Name": "grid_mynet",
"Id": "dm4nx67j7tc6309fzp6rs7cl6",
"Created": "2022-03-08T09:57:19.105558268-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "30.30.3.0/24",
"Gateway": "30.30.3.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"e6c571aad3215675727603e786ecf2902a0c9423dc8f5888523b8cb4f3a0fbb2": {
"Name": "grid_selenium-hub.1.klcnq45z2im4p1fpj2aoaxqq8",
"EndpointID": "7689edd38aae48c1e087ddebcc2dde7ad6f01fb1d7bfc979a1ae86d8b9de474f",
"MacAddress": "02:42:1e:1e:03:06",
"IPv4Address": "30.30.3.6/24",
"IPv6Address": ""
},
"lb-grid_mynet": {
"Name": "grid_mynet-endpoint",
"EndpointID": "fa6f18959fe04bf25f9e15970983999eb2d04f61f372408ac5dabdcb246e0751",
"MacAddress": "02:42:1e:1e:03:07",
"IPv4Address": "30.30.3.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {
"com.docker.stack.namespace": "grid"
},
"Peers": [
{
"Name": "f3a79e1a65b5",
"IP": "10.211.55.16"
}
]
}
]
/etc/hosts file on the chrome container looks like this
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
30.30.3.3 bdf55ddb6038
Pinging chrome from the chrome container produces this result
seluser@bdf55ddb6038:/$ ping chrome
PING chrome (30.30.3.2) 56(84) bytes of data.
64 bytes from 30.30.3.2 (30.30.3.2): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from 30.30.3.2 (30.30.3.2): icmp_seq=2 ttl=64 time=0.078 ms
Pinging selenium-hub from the chrome container produces this
seluser@bdf55ddb6038:/$ ping selenium-hub
ping: selenium-hub: Name or service not known
/etc/hosts file on the selenium-hub container looks like this
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
30.30.3.6 e6c571aad321
Pinging chrome from the selenium-hub container produces this result
seluser@e6c571aad321:/$ ping chrome
ping: chrome: No address associated with hostname
seluser@e6c571aad321:/$ ^C
seluser@e6c571aad321:/$ ping grid_chrome
ping: grid_chrome: Name or service not known
Pinging selenium-hub from the selenium-hub container produces this result
seluser@e6c571aad321:/$ ping selenium-hub
PING selenium-hub (30.30.3.5) 56(84) bytes of data.
64 bytes from 30.30.3.5 (30.30.3.5): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 30.30.3.5 (30.30.3.5): icmp_seq=2 ttl=64 time=0.079 ms
The chrome container has this error in the end
Caused by: java.net.UnknownHostException: selenium-hub
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
at zmq.io.net.tcp.TcpAddress.resolve(TcpAddress.java:132)
... 37 more
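Symptoms like these (the same overlay network ID on both nodes, but each node's DNS only resolving tasks running locally) are commonly caused by the inter-node ports that swarm overlays depend on being blocked. A hedged diagnostic sketch, to be run on the hosts and inside a container; the manager IP 10.211.55.16 is taken from the inspect output above, and `nc` availability is an assumption:

```shell
# Swarm overlay networking needs: 2377/tcp (cluster management),
# 7946/tcp+udp (node gossip), 4789/udp (VXLAN data path).
# Check what is listening on each node:
sudo ss -tulpn | grep -E ':(2377|7946|4789)\b'

# From the worker, probe reachability of the manager's swarm ports:
nc -zv 10.211.55.16 2377
nc -zv 10.211.55.16 7946

# Inside a container attached to the overlay, swarm's embedded DNS
# (127.0.0.11) should resolve remote services; an empty answer for a
# service running on another node points at broken gossip/data-path:
nslookup tasks.selenium-hub 127.0.0.11
```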

Related

docker does not connect to bridge network

I have a container and I want to connect to a DB. The docker host machine has the IP X.X.2.26 and the database X.X.2.27. I tried to connect the network in bridge mode, but I can't connect to the database. The host machine does have a connection to the database.
This is my docker-compose.yml
version: '3.7'

networks:
  sfp:
    name: sfp
    driver: bridge

services:
  sfpapi:
    image: st/sfp-api:${VERSION-latest}
    container_name: "sfp-api"
    restart: always
    ports:
      - "8082:8081"
    networks:
      - sfp
    environment:
      - TZ=America/Mexico_City
      - SPRING_DATASOURCE_URL
      - SPRING_DATASOURCE_USERNAME
      - SPRING_DATASOURCE_PASSWORD

  app:
    image: st/sfp-app:${VERSION-latest}
    container_name: "app"
    restart: always
    ports:
      - "8081:80"
    networks:
      - sfp
    environment:
      - API_HOST
If I list the networks, I can see it was created successfully.
docker network ls
NETWORK ID NAME DRIVER SCOPE
86a58ac8a053 bridge bridge local
1890c6433c09 host host local
bab0a88222a3 none null local
01a411ad42df sfp bridge local
But if I inspect the network, I can't see the added containers
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "86a58ac8a05398bb827252b2dbe4c99e52aedf0896be6aa6c4358c41cf0e766e",
"Created": "2022-04-06T12:50:09.922881204-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "false",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
This is the inspect of the container
docker inspect --format "{{ json .NetworkSettings.Networks }}" sfp-api
{"sfp":{"IPAMConfig":null,"Links":null,"Aliases":["sfpapi","bab30efe892b"],"NetworkID":"2076ee845b06df6ace975e1cf3fd360eb174ee97a9ae608911c243b08e98aa42","EndpointID":"3837a6f55449a59267aea7bbafc754d0fab6fedad282e280cce9d880d0c299a7","Gateway":"172.26.0.1","IPAddress":"172.26.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:1a:00:03","DriverOpts":null}}
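One detail worth noting in the output above: `docker network inspect bridge` inspects Docker's default bridge, while the compose file attaches both containers to the user-defined `sfp` network, so an empty `Containers` map on `bridge` is expected. A hedged sketch of commands to verify, using the network and container names from the question:

```shell
# Inspect the network the containers are actually attached to:
docker network inspect sfp

# From inside the API container, test the route to the database host
# (X.X.2.27 is the placeholder from the question; use the real IP):
docker exec -it sfp-api ping -c 3 X.X.2.27
```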

How to connect docker container and swarm in different hosts

I have many processing tasks and require the use of many hosts to distribute them to. For my use case I am using zmq's malamute broker out of familiarity with zmq, but I would be glad to hear suggestions for other libraries that would make it easy to distribute tasks to workers.
I am trying to connect one container, a worker, to a swarm manager, which is a malamute broker, during development. I am developing what will be a global-mode docker service. During development I need to give commands interactively while writing my code, so I am using a container attached to a shared overlay network.
#### manager node, host 1
docker service create --replicas 0 --name malamute zeromqorg/malamute
#docker service update --publish-add 9999 malamute
docker service update --publish-add published=9999,target=9999 malamute
docker network create --attachable --driver overlay primarynet
docker service update --network-add primarynet malamute
### docker inspect primarynet
[
{
"Name": "primarynet",
"Id": "b7vq0p0purgiykebtykdn7syh",
"Created": "2021-11-08T13:34:08.600557322-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "2fe7789b4c50eeca7d19007e447aa32a3bb8d6a33435051d84ba04e34f206446",
"MacAddress": "02:42:0a:00:02:0f",
"IPv4Address": "10.0.2.15/24",
"IPv6Address": ""
},
"lb-primarynet": {
"Name": "primarynet-endpoint",
"EndpointID": "9b3926bcfea39d77a48ac4fcc1c5df37c1dd5e7add6b790bc9cbc549a5bea589",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
}
]
}
]
### docker inspect ingress
docker inspect ingress
[
{
"Name": "ingress",
"Id": "od7bn815iuxyq4v9jzwe17n4p",
"Created": "2021-11-08T10:59:00.304709057-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "bb87dbf390d23c8992c4f2b27597af80bb8d3b96b3bd62aa58618cca82cc0426",
"MacAddress": "02:42:0a:00:00:0a",
"IPv4Address": "10.0.0.10/24",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "2c1ef914e0aa506756b779ece181c7d8efb8697b71cb5dce0db1d9426137660f",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
},
{
"Name": "c67bb64801c0",
"IP": "192.168.1.143"
}
]
}
]
This is where the error occurs
#### worker node, host 2
docker swarm join --token <token> 192.168.1.106:2377
docker run --net primarynet -ti zeromqorg/malamute /bin/bash
### within the container
export PYTHONPATH=/home/zmq/malamute/bindings/python/:/home/zmq/czmq/bindings/python
python3
>>> from malamute import MalamuteClient
>>> client=MalamuteClient()
>>> client.connect(b'tcp://manager.addr.ip.addr:9999', 100, b'service')
### failure happens on the connect, but it should go through.
### the error output:
I: 21-11-10 07:12:59 My address is 'service'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 51, in connect
"Could not connect to malamute server at {!r}", endpoint,
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 41, in _check_error
fmt.format(*args, **kw) + ': ' + str(reason)
malamute.MalamuteError: Could not connect to malamute server at <addr>: Server is not reachable
# the malamute server is confirmed to be reachable with ping from the worker
zmq@8e2256f6a806:~/malamute$ ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.646 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.399 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.398 ms
64 bytes from 10.0.2.15: icmp_seq=4 ttl=64 time=0.401 ms
^C
--- 10.0.2.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.398/0.461/0.646/0.106 ms
The hostnames of the worker and manager hosts also respond to ping, with output similar to the above (ping manager..., ping worker...). But if I enter the docker ID of the service running on the swarm manager as the argument to ping on the worker, it doesn't work.
After reading the guides, I don't know why the connect error occurs. Basically, once I'm done, the connect call needs to go through and then the rest of the script should execute. I will then script what I need executed and create a docker service called worker.
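Since the worker's container and the malamute task share the attachable `primarynet` overlay, one thing worth checking before connecting through a host IP and the ingress-published port is whether the service is reachable directly by name on the overlay. A hedged sketch, run from inside the worker's container (service name `malamute` from the commands above):

```shell
# The service VIP should resolve via swarm's embedded DNS on the overlay:
ping -c 2 malamute

# List the individual task IPs behind the service (should include
# 10.0.2.15 from the inspect output above):
nslookup tasks.malamute 127.0.0.11
```

If the name resolves, connecting to `tcp://malamute:9999` over the overlay avoids depending on the ingress routing mesh entirely.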

Docker network not using all NICs

My docker container is not reachable on all host network interfaces.
My host server has 2 network interfaces (and IP addresses). When running my docker container without a specifically defined docker network, it works and the container is reachable on both IP addresses.
But when I run it with a self-defined docker network and add it to the docker-compose file, only 1 IP works. The other times out. Why does this happen?
Docker-compose file
version: '3.7'
services:
  servicename-1:
    #network_mode: "host"
    image: nginxdemos/hello
    init: true
    ports:
      - 8081:80
    volumes:
      # omitted
    environment:
      # omitted
    networks:
      - a-netwerk-1
networks:
  a-netwerk-1:
    external:
      name: a-network-1
docker inspect network:
[
{
"Name": "a-network-1",
"Id": "df4ab5e3285c75b71f8f88f66c4c5d85ad8f2f9b17e66f960b11778007810b96",
"Created": "2020-01-30T10:55:14.853289976+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.29.0.0/16",
"Gateway": "172.29.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2f2d5b2e22b3066085246ea53d1ca2c9f963b5e9138ae7202d8382be98428476": {
"Name": "test_testservicename_1",
"EndpointID": "c750b0d9d6ae82fec109da15d385b936f79f09bf814dd3b8d03642a2f03d46e2",
"MacAddress": "02:42:ac:1d:00:02",
"IPv4Address": "172.29.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
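When the cause of this kind of asymmetry is unclear, one way to narrow it down is to publish the port on each host address explicitly and check what the daemon actually binds. A hedged sketch; the two IPs are documentation-range placeholders standing in for the server's real interface addresses:

```shell
# Publish the container port on each host interface explicitly
# (replace 192.0.2.10 / 198.51.100.10 with the host's real IPs):
docker run -d --network a-network-1 \
  -p 192.0.2.10:8081:80 \
  -p 198.51.100.10:8081:80 \
  nginxdemos/hello

# Verify which addresses the userland proxy is listening on:
sudo ss -tlnp | grep :8081
```

If only one address shows up as listening, the problem is in the port binding; if both listen but one still times out, look at host routing or firewall rules instead.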

Docker isolated network receives packets from outside

I set up a docker bridge network (on Linux) for the purpose of testing how network traffic of individual applications (containers) looks like. Therefore, a key requirement for the network is that it is completely isolated from traffic that originates from other applications or devices.
A simple example I created with compose is a ping-container that sends ICMP-packets to another one, with a third container running tcpdump to collect the traffic:
version: '3'
services:
  ping:
    image: 'detlearsom/ping'
    environment:
      - HOSTNAME=blank
      - TIMEOUT=2
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  blank:
    image: 'alpine'
    command: sleep 300
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    networks:
      - capture
  tcpdump:
    image: 'detlearsom/tcpdump'
    volumes:
      - '$PWD/data:/data'
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
    network_mode: 'service:ping'
    command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"
networks:
  capture:
    driver: "bridge"
    internal: true
Note that I have set the network to internal, and I have also disabled IPV6. However, when I run it and collect the traffic, additional to the expected ICMP packets I get IPV6 packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port corresponds to spotify traffic and most likely originates from my spotify application that is running on the host machine.
My question: Why do I see this traffic in my network that is supposed to be isolated?
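A likely explanation for the stray traffic: the Linux bridge that backs an `internal` network is still an interface on the host, so anything the host itself broadcasts or multicasts on all interfaces (mDNS/router solicitations, Spotify's UDP 57621 discovery) also lands on that bridge; `internal: true` only prevents routed traffic in and out. A hedged capture-filter sketch that excludes the noise; the bridge interface name is an assumption derived from the network ID in the configuration below:

```shell
# The bridge interface is usually br-<first 12 chars of the network ID>:
ip link show | grep br-

# Capture only traffic involving the container, dropping host
# broadcast/multicast chatter:
tcpdump -i br-35512f852332 -v \
  'host 172.25.0.2 and not broadcast and not multicast'
```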
For anyone interested, here is the network configuration:
[
{
"Name": "capture-011-ping2_capture",
"Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
"Created": "2018-08-07T10:42:31.610178964+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": true,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
"Name": "capture-011-ping2_tcpdump_1",
"EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "capture",
"com.docker.compose.project": "capture-011-ping2",
"com.docker.compose.version": "1.22.0"
}
}
]

Docker swarm network not recognizing service/container on worker node. Using Traefik

I'm trying to test out a Traefik load balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node. I get a 502 bad gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
  proxy:
    external: true
configs:
  traefik_toml_v2:
    file: $PWD/infra/traefik.toml
services:
  traefik:
    image: traefik:1.5-alpine
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.frontend.rule=Host:traefik.example.com
        - traefik.port=8080
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/infra/acme.json:/acme.json
    networks:
      - proxy
    ports:
      - target: 80
        protocol: tcp
        published: 80
        mode: ingress
      - target: 443
        protocol: tcp
        published: 443
        mode: ingress
      - target: 8080
        protocol: tcp
        published: 8080
        mode: ingress
    configs:
      - source: traefik_toml_v2
        target: /etc/traefik/traefik.toml
        mode: 444
  server:
    image: bitnami/apache:latest
    networks:
      - proxy
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.docker.network=proxy
        - traefik.port=80
        - traefik.backend=nerdmercs
        - traefik.backend.loadbalancer.swarm=true
        - traefik.backend.loadbalancer.sticky=true
        - traefik.frontend.passHostHeader=true
        - traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
The proxy network is an overlay network and I'm able to see it in the worker node:
ubuntu@staging-worker1:~$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
f91525416b42 bridge bridge local
7c3264136bcd docker_gwbridge bridge local
7752e312e43f host host local
epaziubbr9r1 ingress overlay swarm
4b50618f0eb4 none null local
qo4wmqsi12lc proxy overlay swarm
ubuntu@staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T09:40:37.822595405Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
"Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
"EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
"Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
"EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
"MacAddress": "02:42:0a:00:00:18",
"IPv4Address": "10.0.0.24/24",
"IPv6Address": ""
},
"735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
"Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
"EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
"MacAddress": "02:42:0a:00:00:07",
"IPv4Address": "10.0.0.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
You'll see Traefik, Portainer, and Visualizer are all present, but not the apache container from the worker node
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T19:53:29.104259115Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
"Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
"EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
"MacAddress": "02:42:0a:00:00:3c",
"IPv4Address": "10.0.0.60/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
It shows up in the network's container list but the manager node containers are not there either.
Portainer is unable to see the apache site when it's on the worker node as well.
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is
It turns out my hosting service locked down everything other than 22,
80, and 443 on the Open Stack Security Group Rules. I had to add 2376
TCP Ingress for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine
env
You should look at https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure they're all open
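Following that advice, the swarm-specific ports can be opened on each node. A hedged sketch using ufw; firewall tooling varies by host, so adapt to iptables/firewalld as needed:

```shell
# Ports Docker Swarm needs open between all nodes:
sudo ufw allow 2377/tcp   # cluster management (manager)
sudo ufw allow 7946/tcp   # node-to-node gossip
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay data path
sudo ufw reload
```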
