I'm working with docker swarm and trying to create a network with the overlay driver.
Whenever I create the network, the driver is not attached.
If I try to attach a service to the network, the process just hangs indefinitely.
If I create a service without attaching it to the network, it works right away.
pi@node3:~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
a1cc2e1f4f2b bridge bridge local
83597f713bcf docker_gwbridge bridge local
277f1166485e host host local
fs2vvjeuejxc ingress overlay swarm
5d0ce08c744c none null local
pi@node3:~ $ docker network create --driver overlay test
4bfkahhkhrblod2t79yd83vws
pi@node3:~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
a1cc2e1f4f2b bridge bridge local
83597f713bcf docker_gwbridge bridge local
277f1166485e host host local
fs2vvjeuejxc ingress overlay swarm
5d0ce08c744c none null local
4bfkahhkhrbl test swarm
I can't figure out why it's not adding the driver. I have a suspicion it has something to do with the ingress network settings, but I'm stuck on how to troubleshoot from here.
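A quick way to confirm whether the missing driver is really an IP-allocation problem is a read-only diagnostic like the following (a hedged sketch; it only inspects state already shown in this post). An empty or null IPAM "Config" on the new network would mean no subnet or gateway was allocated for it.
# Show the IPAM config the new overlay network received.
docker network inspect test --format '{{ json .IPAM.Config }}'
# Show the address pool the swarm hands out overlay subnets from.
docker info | grep -iE 'Default Address Pool|SubnetSize'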
Relevant Info
Swarm:
pi@node3:~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ygcte2diochpbgu7bqtw41k70 node1 Ready Active 20.10.7
xbllxgfa35937rmvdi8mi8dlb node2 Ready Active 20.10.7
tvw4b53w6g3qv2k3919dg3a81 * node3 Ready Active Leader 20.10.7
Manager node:
pi@node3:~ $ docker node inspect node3
[
{
"ID": "tvw4b53w6g3qv2k3919dg3a81",
"Version": {
"Index": 165
},
"CreatedAt": "2021-07-10T16:41:23.043334654Z",
"UpdatedAt": "2021-07-11T00:27:25.807737662Z",
"Spec": {
"Labels": {},
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "node3",
"Platform": {
"Architecture": "armv7l",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 4000000000,
"MemoryBytes": 969105408
},
"Engine": {
"EngineVersion": "20.10.7",
"Plugins": [
{
"Type": "Log",
"Name": "awslogs"
},
{
"Type": "Log",
"Name": "fluentd"
},
{
"Type": "Log",
"Name": "gcplogs"
},
{
"Type": "Log",
"Name": "gelf"
},
{
"Type": "Log",
"Name": "journald"
},
{
"Type": "Log",
"Name": "json-file"
},
{
"Type": "Log",
"Name": "local"
},
{
"Type": "Log",
"Name": "logentries"
},
{
"Type": "Log",
"Name": "splunk"
},
{
"Type": "Log",
"Name": "syslog"
},
{
"Type": "Network",
"Name": "bridge"
},
{
"Type": "Network",
"Name": "host"
},
{
"Type": "Network",
"Name": "ipvlan"
},
{
"Type": "Network",
"Name": "macvlan"
},
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Volume",
"Name": "local"
}
]
},
"TLSInfo": {
"TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBajCCARCgAwIBAgIUFIx3NAw+jgaasNXCoi+QP4GxaOQwCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjEwNzEwMTYyMjAwWhcNNDEwNzA1MTYy\nMjAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABKyunnrZtfkOO+Cc/MX/qbyJjG12ee8es0IHB1HXF2MhqSfYOeUuBlTvuHuB\nxl8s8eQ4IMfjP0w5LYJNqypZp0KjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBRq6yBEIFv03tQqBkohCh4A+mIZdTAKBggqhkjO\nPQQDAgNIADBFAiA5kKgC2WxcOMyfrmFr8fU6w1Mo1mq5GMKA4owTB7pcEQIhALZi\n9AH0vVyR+7NmmR7LfPO65CIJ9UVuPZBXRZ6pcmzX\n-----END CERTIFICATE-----\n",
"CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
"CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAErK6eetm1+Q474Jz8xf+pvImMbXZ57x6zQgcHUdcXYyGpJ9g55S4GVO+4e4HGXyzx5Dggx+M/TDktgk2rKlmnQg=="
}
},
"Status": {
"State": "ready",
"Addr": "0.0.0.0"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "10.0.0.93:2377"
}
}
Ingress network:
pi@node3:~ $ docker network inspect ingress
[
{
"Name": "ingress",
"Id": "fs2vvjeuejxcjxqivenb76kgj",
"Created": "2021-07-10T17:24:14.228552858-07:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.10.0.0/24",
"Gateway": "10.10.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "34003d042d395b90328ed90c8133505a6bec6df90065c5b47b47ee3853545c91",
"MacAddress": "02:42:0a:0a:00:02",
"IPv4Address": "10.10.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "e2f4d4e6ba20",
"IP": "10.0.0.93"
},
{
"Name": "de3d98ce0f8d",
"IP": "10.0.0.25"
},
{
"Name": "b61722e30756",
"IP": "10.0.0.12"
}
]
}
]
Docker version:
pi@node3:~ $ docker --version
Docker version 20.10.7, build f0df350
Docker info:
pi@node3:~ $ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 5
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: tvw4b53w6g3qv2k3919dg3a81
Is Manager: true
ClusterID: 4vf16jdlegf3ctys5k6wumcfc
Managers: 1
Nodes: 3
Default Address Pool: 10.10.0.0/24
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.0.0.93
Manager Addresses:
10.0.0.93:2377
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.17-v7+
Operating System: Raspbian GNU/Linux 10 (buster)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 924.2MiB
Name: node3
ID: A67O:SIT4:QOMH:SILY:WHAY:KSGQ:VWMF:QVEJ:VCOZ:KW32:PZRV:ZD4B
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
What I've tried:
Removing all the nodes and creating a new swarm
Removing the ingress network and creating a new one following the instructions here (see the sketch after this list)
Tried to go through the walkthrough here but can't get past step 2 of Create the Services.
Rebooted all the nodes
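For reference, a hedged sketch of the ingress re-creation procedure mentioned above; the subnet and gateway values are illustrative and not taken from the original setup:
# No service may be using the routing mesh while this runs; docker prompts
# for confirmation before removing the ingress network.
docker network rm ingress
docker network create \
  --driver overlay \
  --ingress \
  --subnet 10.11.0.0/24 \
  --gateway 10.11.0.1 \
  ingress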
Any advice or a pointer in the right direction would be much appreciated! I've been stuck here for 48 hours.
Solved!
The issue ended up being:
The nodes were all on 10.0.0.x
I set the --default-addr-pool 10.10.0.0/24 when initializing the swarm
Any network I tried to create using the --driver overlay would end up without any subnet or gateway information.
How I resolved the issue:
I was able to solve it using the --subnet flag when creating a custom network.
pi@node3:~ $ docker network create --driver overlay --subnet 10.10.10.0/24 test
pi@node3:~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
55ab64773261 bridge bridge local
ce1a0f497e9d docker_gwbridge bridge local
7c85cac72cf8 host host local
o7iew29j70nl ingress overlay swarm
ca5fc5682911 none null local
plezwc8zahpl test overlay swarm
pi@node3:~ $ docker network inspect test
[
{
"Name": "test",
"Id": "plezwc8zahpl9gs8hbv64bbo3",
"Created": "2021-07-16T17:30:28.773110478Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.10.10.0/24",
"Gateway": "10.10.10.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4097"
},
"Labels": null
}
]
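One plausible reading of the outputs above: with --default-addr-pool 10.10.0.0/24 and SubnetSize 24, the pool has room for exactly one /24 subnet, and the ingress network already consumed it, so later overlay networks get no subnet at all. An alternative to passing --subnet on every network would be to re-initialize the swarm with a roomier pool. A hedged sketch only; the pool value is illustrative and deliberately avoids the hosts' 10.0.0.x range:
# Destroys the existing swarm state: run 'docker swarm leave --force' on
# every node first, then re-initialize on the manager.
docker swarm init \
  --advertise-addr 10.0.0.93 \
  --default-addr-pool 10.20.0.0/16 \
  --default-addr-pool-mask-length 24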
Related
I am experimenting with a single-node Docker Swarm on a Windows Server 2019 host running the Mirantis container runtime with Hyper-V and LCOW, and would like to run an alpine Linux container.
I've been able to deploy the Linux container via the standard 'docker' command, but am not able to do it with Docker Swarm. When I try to create a Linux-based service, I get the error no suitable node.
PS C:\Windows\system32> docker service create --replicas 1 --name helloworld alpine ping docker.com
overall progress: 0 out of 1 tasks
1/1: no suitable node (unsupported platform on 1 node)
Presumably this is because the reported capabilities of the node do not include linux, even though running Linux containers works via the 'docker' command.
Is there a way to configure what capabilities a node possesses?
PS C:\Windows\system32> docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
cluster: Manage Mirantis Container Cloud clusters (Mirantis Inc., v1.9.0)
registry: Manage Docker registries (Docker Inc., 0.1.0)
Server:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 21
Server Version: 20.10.9
Storage Driver: windowsfilter (windows) lcow (linux)
Windows:
LCOW:
Logging Driver: etwlogs
Plugins:
Volume: local
Network: ics internal l2bridge l2tunnel nat null overlay private transparent
Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
Swarm: active
NodeID: xxxxxx
Is Manager: true
ClusterID: xxxxx
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: xxx
Manager Addresses:
xxx
Default Isolation: process
Kernel Version: 10.0 17763 (17763.1.amd64fre.rs5_release.180914-1434)
Operating System: Windows Server 2019 Standard Version 1809 (OS Build 17763.3046)
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 32GiB
Name: xxxxx
ID: xxxxx
Docker Root Dir: C:\ProgramData\docker
Debug Mode: false
Username: fazleskhan
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
The node info
PS C:\Windows\system32> docker node inspect px
[
{
"ID": "xxx",
"Version": {
"Index": 8
},
"CreatedAt": "2022-07-07T21:06:17.0196727Z",
"UpdatedAt": "2022-07-07T21:06:17.5509223Z",
"Spec": {
"Labels": {},
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "xxx",
"Platform": {
"Architecture": "x86_64",
"OS": "windows"
},
"Resources": {
"NanoCPUs": 4000000000,
"MemoryBytes": 34358669312
},
"Engine": {
"EngineVersion": "20.10.9",
"Plugins": [
{
"Type": "Log",
"Name": "awslogs"
},
{
"Type": "Log",
"Name": "etwlogs"
},
{
"Type": "Log",
"Name": "fluentd"
},
{
"Type": "Log",
"Name": "gcplogs"
},
{
"Type": "Log",
"Name": "gelf"
},
{
"Type": "Log",
"Name": "json-file"
},
{
"Type": "Log",
"Name": "local"
},
{
"Type": "Log",
"Name": "logentries"
},
{
"Type": "Log",
"Name": "splunk"
},
{
"Type": "Log",
"Name": "syslog"
},
{
"Type": "Network",
"Name": "ics"
},
{
"Type": "Network",
"Name": "internal"
},
{
"Type": "Network",
"Name": "l2bridge"
},
{
"Type": "Network",
"Name": "l2tunnel"
},
{
"Type": "Network",
"Name": "nat"
},
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Network",
"Name": "private"
},
{
"Type": "Network",
"Name": "transparent"
},
{
"Type": "Volume",
"Name": "local"
}
]
},
"TLSInfo": { xxx }
},
"Status": {
"State": "ready",
"Addr": "xxx"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "xxx"
}
}
]
My notes for getting things running as a reference
https://github.com/fazleskhan/docker-deep-dive/blob/master/Intsalling%20DockerEE%20Windows%20Server%202019.md
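Presumably the scheduler is matching the image's platform against what the node advertises in Description.Platform, which with LCOW is still windows/x86_64 (as in the node inspect above). A hedged sketch for checking what the scheduler sees; <node-name> is a placeholder:
# Print the platform a swarm node reports to the scheduler.
docker node inspect <node-name> \
  --format '{{ .Description.Platform.OS }}/{{ .Description.Platform.Architecture }}'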
I have a container and I want to connect to a database. The Docker host machine has the IP X.X.2.26 and the database X.X.2.27. I tried connecting the network in bridge mode, but I can't connect to the database. The host machine does have connectivity to the database.
This is my docker-compose.yml
version: '3.7'
networks:
sfp:
name: sfp
driver: bridge
services:
sfpapi:
image: st/sfp-api:${VERSION-latest}
container_name: "sfp-api"
restart: always
ports:
- "8082:8081"
networks:
- sfp
environment:
- TZ=America/Mexico_City
- SPRING_DATASOURCE_URL
- SPRING_DATASOURCE_USERNAME
- SPRING_DATASOURCE_PASSWORD
app:
image: st/sfp-app:${VERSION-latest}
container_name: "app"
restart: always
ports:
- "8081:80"
networks:
- sfp
environment:
- API_HOST
If I check the networks, the network was created successfully.
docker network ls
NETWORK ID NAME DRIVER SCOPE
86a58ac8a053 bridge bridge local
1890c6433c09 host host local
bab0a88222a3 none null local
01a411ad42df sfp bridge local
But if I inspect the network, I can't see any attached containers
docker network inspect bridge
[
{
"Name": "bridge",
"Id": "86a58ac8a05398bb827252b2dbe4c99e52aedf0896be6aa6c4358c41cf0e766e",
"Created": "2022-04-06T12:50:09.922881204-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "false",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
This is the inspect output for the container
docker inspect --format "{{ json .NetworkSettings.Networks }}" sfp-api
{"sfp":{"IPAMConfig":null,"Links":null,"Aliases":["sfpapi","bab30efe892b"],"NetworkID":"2076ee845b06df6ace975e1cf3fd360eb174ee97a9ae608911c243b08e98aa42","EndpointID":"3837a6f55449a59267aea7bbafc754d0fab6fedad282e280cce9d880d0c299a7","Gateway":"172.26.0.1","IPAddress":"172.26.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:1a:00:03","DriverOpts":null}}
My docker container is not reachable on all host network interfaces.
My host server has 2 network interfaces (and IP addresses). When running my docker container without an explicitly defined docker network, it works and the container is reachable on both IP addresses.
But when I run it with a self-defined docker network and add it to the docker-compose file, only 1 IP works. The other times out. Why does this happen?
Docker-compose file
version: '3.7'
services:
servicename-1:
#network_mode: "host"
image: nginxdemos/hello
init: true
ports:
- 8081:80
volumes:
omitted
environment:
omitted
networks:
- a-netwerk-1
networks:
a-netwerk-1:
external:
name: a-network-1
docker network inspect output:
[
{
"Name": "a-network-1",
"Id": "df4ab5e3285c75b71f8f88f66c4c5d85ad8f2f9b17e66f960b11778007810b96",
"Created": "2020-01-30T10:55:14.853289976+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.29.0.0/16",
"Gateway": "172.29.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2f2d5b2e22b3066085246ea53d1ca2c9f963b5e9138ae7202d8382be98428476": {
"Name": "test_testservicename_1",
"EndpointID": "c750b0d9d6ae82fec109da15d385b936f79f09bf814dd3b8d03642a2f03d46e2",
"MacAddress": "02:42:ac:1d:00:02",
"IPv4Address": "172.29.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
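A hedged diagnostic sketch for narrowing this down: confirm that the published port is bound on 0.0.0.0 (all host addresses) and then probe each host IP separately. The curl targets are placeholders for the host's two interface addresses:
# Show which host address/port the container's port 80 is published on.
docker port test_testservicename_1 80
# Confirm the listener is on 0.0.0.0:8081 rather than a single address.
sudo ss -ltnp | grep ':8081'
# Probe the service via each host interface.
curl -sI http://<first-host-ip>:8081
curl -sI http://<second-host-ip>:8081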
I set up a docker bridge network (on Linux) for the purpose of testing how network traffic of individual applications (containers) looks like. Therefore, a key requirement for the network is that it is completely isolated from traffic that originates from other applications or devices.
A simple example I created with Compose is a ping container that sends ICMP packets to another one, with a third container running tcpdump to collect the traffic:
version: '3'
services:
ping:
image: 'detlearsom/ping'
environment:
- HOSTNAME=blank
- TIMEOUT=2
sysctls:
- net.ipv6.conf.all.disable_ipv6=1
networks:
- capture
blank:
image: 'alpine'
command: sleep 300
sysctls:
- net.ipv6.conf.all.disable_ipv6=1
networks:
- capture
tcpdump:
image: 'detlearsom/tcpdump'
volumes:
- '$PWD/data:/data'
sysctls:
- net.ipv6.conf.all.disable_ipv6=1
network_mode: 'service:ping'
command: -v -w "/data/dump-011-ping2-${CAPTURETIME}.pcap"
networks:
capture:
driver: "bridge"
internal: true
Note that I have set the network to internal, and I have also disabled IPv6. However, when I run it and collect the traffic, in addition to the expected ICMP packets I also get IPv6 packets:
10:42:40.863619 IP6 fe80::42:2aff:fe42:e303 > ip6-allrouters: ICMP6, router solicitation, length 16
10:42:43.135167 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local.
10:42:37.875646 IP6 fe80::e437:76ff:fe9e:36b4.mdns > ff02::fb.mdns: 0*- [0q] 2/0/0 (Cache flush) PTR he...F.local., (Cache flush) AAAA fe80::e437:76ff:fe9e:36b4 (161)
What is even stranger is that I receive UDP packets from port 57621:
10:42:51.868199 IP 172.25.0.1.57621 > 172.25.255.255.57621: UDP, length 44
This port corresponds to Spotify traffic and most likely originates from the Spotify application running on the host machine.
My question: Why do I see this traffic in my network that is supposed to be isolated?
For anyone interested, here is the network configuration:
[
{
"Name": "capture-011-ping2_capture",
"Id": "35512f852332351a9f677f75b522982aa6bd288e813a31a3c36477baa005c0fd",
"Created": "2018-08-07T10:42:31.610178964+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.25.0.0/16",
"Gateway": "172.25.0.1"
}
]
},
"Internal": true,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"dac25cb8810b2c786735a76c9b8387d1cfb4d6006dbb7549f5c7c3f381d884c2": {
"Name": "capture-011-ping2_tcpdump_1",
"EndpointID": "2463a46cf00a35c8c77ff9f224ff052aea7f061684b7a24b41dab150496f5c3d",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "capture",
"com.docker.compose.project": "capture-011-ping2",
"com.docker.compose.version": "1.22.0"
}
}
]
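The capture itself already shows traffic sourced from 172.25.0.1, which is the network's gateway, i.e. the host side of the bridge, so host-originated broadcast and multicast can still land on an internal network. A hedged sketch for confirming that the host still holds an interface on this bridge (the br-<id> naming is the default for user-defined bridge networks):
# Resolve the host-side Linux bridge behind the compose network and show
# that the host owns the gateway address on it.
NET_ID=$(docker network inspect -f '{{ .Id }}' capture-011-ping2_capture)
BRIDGE="br-${NET_ID:0:12}"
ip addr show "$BRIDGE"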
I'm trying to test out a Traefik load balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node. I get a 502 bad gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
proxy:
external: true
configs:
traefik_toml_v2:
file: $PWD/infra/traefik.toml
services:
traefik:
image: traefik:1.5-alpine
deploy:
replicas: 1
update_config:
parallelism: 1
delay: 5s
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.frontend.rule=Host:traefik.example.com
- traefik.port=8080
- traefik.backend.loadbalancer.sticky=true
- traefik.frontend.passHostHeader=true
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/infra/acme.json:/acme.json
networks:
- proxy
ports:
- target: 80
protocol: tcp
published: 80
mode: ingress
- target: 443
protocol: tcp
published: 443
mode: ingress
- target: 8080
protocol: tcp
published: 8080
mode: ingress
configs:
- source: traefik_toml_v2
target: /etc/traefik/traefik.toml
mode: 444
server:
image: bitnami/apache:latest
networks:
- proxy
deploy:
replicas: 1
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.port=80
- traefik.backend=nerdmercs
- traefik.backend.loadbalancer.swarm=true
- traefik.backend.loadbalancer.sticky=true
- traefik.frontend.passHostHeader=true
- traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
The proxy network is an overlay network and I'm able to see it in the worker node:
ubuntu@staging-worker1:~$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
f91525416b42 bridge bridge local
7c3264136bcd docker_gwbridge bridge local
7752e312e43f host host local
epaziubbr9r1 ingress overlay swarm
4b50618f0eb4 none null local
qo4wmqsi12lc proxy overlay swarm
ubuntu@staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T09:40:37.822595405Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
"Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
"EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
"Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
"EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
"MacAddress": "02:42:0a:00:00:18",
"IPv4Address": "10.0.0.24/24",
"IPv6Address": ""
},
"735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
"Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
"EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
"MacAddress": "02:42:0a:00:00:07",
"IPv4Address": "10.0.0.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
You'll see Traefik, Portainer, and Visualizer all present, but not the Apache container on the worker node.
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T19:53:29.104259115Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
"Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
"EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
"MacAddress": "02:42:0a:00:00:3c",
"IPv4Address": "10.0.0.60/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
It shows up in the network's container list but the manager node containers are not there either.
Portainer is unable to see the apache site when it's on the worker node as well.
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is:
It turns out my hosting service locked down everything other than 22, 80, and 443 on the OpenStack Security Group Rules. I had to add 2376 TCP Ingress for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine env.
You should also look at https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure those ports are all open.
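For completeness, a hedged sketch of the engine-level ports that swarm mode and overlay networking use, on top of the 2376/tcp mentioned above; shown with ufw, but translate to whatever firewall or cloud security group applies:
sudo ufw allow 2376/tcp   # remote TLS API used by docker-machine
sudo ufw allow 2377/tcp   # swarm cluster management (join/raft)
sudo ufw allow 7946/tcp   # node-to-node gossip
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # VXLAN overlay data path ("Data Path Port" above)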