Help me please. I am trying to set up a configuration with a backend server in a Docker container and an nginx container that proxies requests to the backend server.
Here is my configuration:
docker network inspect note
[
{
"Name": "note",
"Id": "b58827aea0e606437a6be690f68c2f28226775da1cf060e5f3d66e8a7a5ecd2b",
"Created": "2023-01-14T21:09:25.741085061+05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4afa74e27d45b9a8aba8f2f9135ee60ea1dc8900cf8eaacd7b23edb9524fbc47": {
"Name": "backend",
"EndpointID": "5d3f7920df2db22bbd4d62f1a362642660931626524d097b5c3c9f0fbcecd464",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"cc60ab137301a32c6b5daee5e185eb732460616e35a85b140900d388b305366b": {
"Name": "mongo",
"EndpointID": "6e2ba2aeed86c09e10ee60c32fbc9977fce3d50ee927ecaa081be8a906be6709",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"fddcabdf6d5a676d381e08edf45315697940e05a38b9ebc36728820fa778a6eb": {
"Name": "nginx",
"EndpointID": "a25f40ad8967844d26ab247e81c17e5316d1774441d544157e7742bc3dd115a9",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fddcabdf6d5a nginx "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx
cc60ab137301 mongo:latest "docker-entrypoint.s…" 55 minutes ago Up 55 minutes 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp mongo
4afa74e27d45 registry.gitlab.com/noteit/backend:master "/bin/sh -c 'npm run…" 55 minutes ago Up 55 minutes 0.0.0.0:3002->3001/tcp, :::3002->3001/tcp backend
nginx.conf
events {
worker_connections 1024;
}
http {
include mime.types;
upstream backend {
server backend:3002;
}
server {
listen 80;
location /api {
proxy_pass http://backend;
}
location /auth {
proxy_pass http://backend;
}
location / {
return 500;
}
}
}
So, a request to http://localhost:8080/auth/login receives a 502 response from nginx.
In the nginx logs I see:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/15 15:20:37 [error] 31#31: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /auth/login/ HTTP/1.1", upstream: "http://172.19.0.2:3002/auth/login/", host: "localhost:8080"
172.19.0.1 - - [15/Jan/2023:15:20:37 +0000] "POST /auth/login/ HTTP/1.1" 502 157 "-" "PostmanRuntime/7.29.2"
Please explain how I can solve this.
I tried to search for a solution, but all my attempts were unsuccessful.
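For reference, on a user-defined Docker network, container-to-container traffic targets the port the process listens on inside the container, not the host-published one. Given the 0.0.0.0:3002->3001/tcp mapping shown by docker ps, a sketch of an upstream that targets the in-container port (assuming the app binds to 0.0.0.0:3001 inside the container) would be:

```nginx
upstream backend {
    # 3001 is the in-container port; 3002 only exists as a host port mapping
    server backend:3001;
}
```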
Related
I have many processing tasks and require many hosts to distribute these tasks to. For my use case I am using zmq's malamute because of my familiarity with and preference for zmq, but I would be glad to hear suggestions for other libraries that would make it easy to distribute tasks to workers.
I am trying to connect one container, a worker, to a swarm manager, which is a malamute broker, during development. I am developing what will be a global-mode Docker service. During development I need to give commands interactively while I write my code, so I am using a container attached to a shared overlay network.
#### manager node, host 1
docker service create --replicas 0 --name malamute zeromqorg/malamute
#docker service update --publish-add 9999 malamute
docker service update --publish-add published=9999,target=9999 malamute
docker network create --attachable --driver overlay primarynet
docker service update --network-add primarynet malamute
### docker inspect primarynet
[
{
"Name": "primarynet",
"Id": "b7vq0p0purgiykebtykdn7syh",
"Created": "2021-11-08T13:34:08.600557322-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "2fe7789b4c50eeca7d19007e447aa32a3bb8d6a33435051d84ba04e34f206446",
"MacAddress": "02:42:0a:00:02:0f",
"IPv4Address": "10.0.2.15/24",
"IPv6Address": ""
},
"lb-primarynet": {
"Name": "primarynet-endpoint",
"EndpointID": "9b3926bcfea39d77a48ac4fcc1c5df37c1dd5e7add6b790bc9cbc549a5bea589",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4098"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
}
]
}
]
### docker inspect ingress
[
{
"Name": "ingress",
"Id": "od7bn815iuxyq4v9jzwe17n4p",
"Created": "2021-11-08T10:59:00.304709057-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"f50adb23eebcac36e4c1c5816d140da469df1e1ee72ae2c7bada832ba69e1535": {
"Name": "malamute.1.e29sipw9i0rijvldeqbmh4r9r",
"EndpointID": "bb87dbf390d23c8992c4f2b27597af80bb8d3b96b3bd62aa58618cca82cc0426",
"MacAddress": "02:42:0a:00:00:0a",
"IPv4Address": "10.0.0.10/24",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "2c1ef914e0aa506756b779ece181c7d8efb8697b71cb5dce0db1d9426137660f",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {},
"Peers": [
{
"Name": "582d161489c7",
"IP": "192.168.1.106"
},
{
"Name": "c71e09e3cd2c",
"IP": "192.168.1.107"
},
{
"Name": "c67bb64801c0",
"IP": "192.168.1.143"
}
]
}
]
This is where the error occurs
#### worker node, host 2
docker swarm join --token <token> 192.168.1.106:2377
docker run --net primarynet -ti zeromqorg/malamute /bin/bash
### within the container
export PYTHONPATH=/home/zmq/malamute/bindings/python/:/home/zmq/czmq/bindings/python
python3
>>> from malamute import MalamuteClient
>>> client=MalamuteClient()
>>> client.connect(b'tcp://manager.addr.ip.addr:9999', 100, b'service')
### failure happens on the connect, but it should go through.
### the error output:
I: 21-11-10 07:12:59 My address is 'service'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 51, in connect
"Could not connect to malamute server at {!r}", endpoint,
File "/home/zmq/malamute/bindings/python/malamute/__init__.py", line 41, in _check_error
fmt.format(*args, **kw) + ': ' + str(reason)
malamute.MalamuteError: Could not connect to malamute server at <addr>: Server is not reachable
# the malamute server is confirmed to be reachable with ping from the worker
zmq@8e2256f6a806:~/malamute$ ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.646 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.399 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.398 ms
64 bytes from 10.0.2.15: icmp_seq=4 ttl=64 time=0.401 ms
^C
--- 10.0.2.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.398/0.461/0.646/0.106 ms
The hostnames of the worker and manager hosts are also confirmed with ping, with output similar to the above (ping manager..., ping worker..., just as above). But if I enter the Docker ID of the service running on the swarm manager as the argument to ping on the worker, it doesn't work.
After reading the guides, I don't know why the connect error occurs. Basically, once I'm done, the connect call needs to go through and then the rest of the script should execute. I will then script what I need executed and create a Docker service called worker.
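Since Docker's embedded DNS on an attachable overlay resolves a service's name to its VIP, one diagnostic sketch (the endpoint is hypothetical; it assumes the PYTHONPATH exported above and that the service publishes port 9999) is to connect by service name instead of a node IP:

```python
from malamute import MalamuteClient  # requires the PYTHONPATH exported above

client = MalamuteClient()
# "malamute" is the service name on primarynet; the embedded DNS resolves it
# to the service VIP, so no node IP needs to be hard-coded.
client.connect(b'tcp://malamute:9999', 100, b'service')
```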
I recently installed a fresh Linux (Ubuntu 18.04) on my computer and now I run a Postgres Docker container via
sudo docker run --name oes-dev -p 5440:5432 -d kartoza/postgis:9.6-2.4
But now curl --verbose localhost:5440 returns:
* Rebuilt URL to: localhost:5440/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5440 (#0)
> GET / HTTP/1.1
> Host: localhost:5440
> User-Agent: curl/7.61.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
I googled for more than half an hour without success. I'm pretty sure it's a networking issue (after all, the same image is running on a remote server and I can curl that one just fine), but I don't have the slightest idea where to start.
Can someone point me in the right direction, please?
Thanks
Nils
EDIT
sudo docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1beff7b6aaf3 kartoza/postgis:9.6-2.4 "/bin/sh -c /docker-…" About an hour ago Up About an hour 0.0.0.0:5440->5432/tcp oes-dev
EDIT2 sudo docker network inspect bridge:
[
{
"Name": "bridge",
"Id": "5d3e43f0716aec7c29a56eb7f74e396ac72e010cc760078633549c1901f488cc",
"Created": "2018-10-11T18:26:52.277074856+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1beff7b6aaf32b359fa85901db7efa65c7090117fdde93d81d665231f8e32960": {
"Name": "oes-dev",
"EndpointID": "42dab201b145ff71b81e48618ec80cc8124625c89169d8932fb97a150f716e2c",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
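One thing worth noting about the symptom: Postgres speaks its own wire protocol rather than HTTP, so curl against the mapped port can report a connection reset even when the server is healthy. A sketch of a protocol-appropriate check (assuming the postgresql-client tools are installed; the user and database names are the kartoza/postgis image defaults and may differ on your setup):

```shell
# pg_isready performs a Postgres-protocol handshake instead of an HTTP request
pg_isready -h localhost -p 5440
# or connect directly (user/database are assumptions based on the image defaults)
psql -h localhost -p 5440 -U docker -d gis -c 'SELECT version();'
```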
I'm using a custom Docker network named "backend-network":
[root@localhost docker]# docker inspect backend-network
{
"Name": "backend-network",
"Id": "18180c0c1ef14460a25b66b7fb971e090f7bb85f549921704d11937af70766c7",
"Created": "2018-08-07T12:36:02.4175991+09:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"531c1ecbe993ee13e632fbd9697b392ee989d756ff60c07eae96a700901aaa01": {
"Name": "splash",
"EndpointID": "c9e4e7ec319ecf9cdcbb9ca50170efb63c4fca33bcbbabb584c4a4e41576b15d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
},
"c6a5aa827e901b6b6d7b35d4a8be5a5b2fc73f1a7a385416ce200e847d400b21": {
"Name": "flask",
"EndpointID": "5d5abb3bc964d251379a7f6a84cb5b5d9bddac9b778f2222d52aba657b28dd34",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"da839143fb58d738e38922c669efa332c545fee4dd0a5b733583ed7b8df60875": {
"Name": "django",
"EndpointID": "f046e9cc93f895b12ce1c4de983fbe0e54a3904460c04db3ba238ba84ba82327",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"fc9e6ef183c81a3fe7dd29ecb5c17c0dc27fb803ef8e214d4f344a2b3407ec54": {
"Name": "mongo",
"EndpointID": "ab94182f4b175f105ab01ccbbc43b7dad37cf5506eee831168fd5bd9094ccde8",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
But the containers do not use the host's DNS.
The host DNS is:
[root@localhost docker]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.88.1
and the container DNS is:
(django) root@da839143fb58:/opt/django_backend/scrapy_app# cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
I added
nameserver 192.168.88.1
to the container's resolv.conf, and it works, but requests take too long. I think it searches
nameserver 127.0.0.11
first and then
nameserver 192.168.88.1
How can I make the Docker network use the host's DNS?
If I remove
nameserver 127.0.0.11
then containers can no longer reach each other by name:
(django) root@da839143fb58:/opt/django_backend/scrapy_app# ping splash
ping: splash: Name or service not known
Docker containers resolve DNS requests through an embedded DNS server (this is the IP you are seeing in the container's /etc/resolv.conf – see the bottom note in the documentation). Depending on your configuration, the embedded DNS server forwards the query to your host (the default) or to another DNS server. You can pass a custom DNS server with the --dns flag.
Please find more information about that in the documentation.
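For example, a sketch of both options (the image name is a placeholder; 192.168.88.1 is the host resolver from the question):

```shell
# Per-container: the embedded DNS server (127.0.0.11) stays in place for
# container-name resolution but forwards external queries to this resolver
docker run --dns 192.168.88.1 --net backend-network <image>

# Daemon-wide alternative: set the default for all containers, then restart
echo '{ "dns": ["192.168.88.1"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```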
I have two Docker containers, apiserver and loginserver. Both of them provide REST APIs and are built using Spring Boot. I've created a bridge network called my-network and both containers are attached to it.
I pinged loginserver from apiserver via an interactive shell and it is accessible. I can make the REST request from the host machine, so I know the socket is exposed. But when I make the same REST request from apiserver to loginserver, I get this error:
: HttpQueryService::uri=http://172.28.0.7:8090/users/login
2018-06-19 19:08:24.196 ERROR 7 --- [nio-9000-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception
org.apache.http.conn.HttpHostConnectException: Connect to 172.28.0.7:8090 [/172.28.0.7] failed: Connection refused (Connection refused)
Here are the details from my-network:
docker network inspect my-network
[
{
"Name": "my-network",
"Id": "ef610688b58b6757cba57caf6261f7a1eaeb083798098214c4848cbb152cae26",
"Created": "2018-04-21T00:19:46.918124848Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.28.0.0/16",
"Gateway": "172.28.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"71863d2f61789d4350fcabb1330b757500d5734a85c68b60eb1ac8f6f1e8344e": {
"Name": "mymongo",
"EndpointID": "717c8dbdc8993a70f9d3e97e549cb9784020b8e68e7a557c30b0818b4c9acb90",
"MacAddress": "02:42:ac:1c:00:02",
"IPv4Address": "172.28.0.2/16",
"IPv6Address": ""
},
"936447ce8325f3a7273a7fb462d75e55841a9ff37ccf27647831b3db1b8a1371": {
"Name": "mypg",
"EndpointID": "6a1a1b2f7852b89a9d2cb9b9abecdabd134849cd789c31613c7ddb91a4bc43d1",
"MacAddress": "02:42:ac:1c:00:06",
"IPv4Address": "172.28.0.6/16",
"IPv6Address": ""
},
"ad03348dffaef3edd916d349c88e8adf6cf7d88dbc40f82dc2384dee826cfa83": {
"Name": "myloginserver",
"EndpointID": "fe22c2b5f57b7fe4776087972ffa5f7f089ca6a59fde2fa677848b3f238ea026",
"MacAddress": "02:42:ac:1c:00:07",
"IPv4Address": "172.28.0.7/16",
"IPv6Address": ""
},
"c69bfbf9ccdc9e29e87d2847b5c2a51e9c232fb8d06635bcef0cdd1f7c66e051": {
"Name": "apiserver",
"EndpointID": "46e94a52d34670eb00448b1d39a0cc365b882ece790c9d868dcee04ad141d1ca",
"MacAddress": "02:42:ac:1c:00:0b",
"IPv4Address": "172.28.0.11/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Is port 8090 exposed by your loginserver image? To check, type the command
docker images
and find the image ID of your loginserver image. Then enter
docker image inspect {loginserver image id}
In the output, check ExposedPorts to see whether 8090 is exposed or not.
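A shorter way to check the same thing (the image reference is a placeholder):

```shell
# With {{json ...}} this prints e.g. {"8090/tcp":{}} when the port is exposed,
# or null when the image exposes nothing
docker inspect --format '{{json .Config.ExposedPorts}}' {loginserver image id}
```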
Late to the party, but I just fixed this on my system by making the REST request against the host's public IP address,
e.g. http://217.114.203.196/myrequest
I'm trying to test out a Traefik-load-balanced Docker Swarm and added a blank Apache service to the compose file.
For some reason I'm unable to place this Apache service on a worker node. I get a 502 Bad Gateway error unless it's on the manager node. Did I configure something wrong in the YML file?
networks:
proxy:
external: true
configs:
traefik_toml_v2:
file: $PWD/infra/traefik.toml
services:
traefik:
image: traefik:1.5-alpine
deploy:
replicas: 1
update_config:
parallelism: 1
delay: 5s
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.frontend.rule=Host:traefik.example.com
- traefik.port=8080
- traefik.backend.loadbalancer.sticky=true
- traefik.frontend.passHostHeader=true
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/infra/acme.json:/acme.json
networks:
- proxy
ports:
- target: 80
protocol: tcp
published: 80
mode: ingress
- target: 443
protocol: tcp
published: 443
mode: ingress
- target: 8080
protocol: tcp
published: 8080
mode: ingress
configs:
- source: traefik_toml_v2
target: /etc/traefik/traefik.toml
mode: 444
server:
image: bitnami/apache:latest
networks:
- proxy
deploy:
replicas: 1
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
labels:
- traefik.enable=true
- traefik.docker.network=proxy
- traefik.port=80
- traefik.backend=nerdmercs
- traefik.backend.loadbalancer.swarm=true
- traefik.backend.loadbalancer.sticky=true
- traefik.frontend.passHostHeader=true
- traefik.frontend.rule=Host:www.example.com
You'll see I've enabled swarm and everything
The proxy network is an overlay network and I'm able to see it on the worker node:
ubuntu@staging-worker1:~$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
f91525416b42 bridge bridge local
7c3264136bcd docker_gwbridge bridge local
7752e312e43f host host local
epaziubbr9r1 ingress overlay swarm
4b50618f0eb4 none null local
qo4wmqsi12lc proxy overlay swarm
ubuntu@staging-worker1:~$
And when I inspect that network ID
$ docker network inspect qo4wmqsi12lcvsqd1pqfq9jxj
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T09:40:37.822595405Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1860b30e97b7ea824ffc28319747b23b05c01b3fb11713fa5a2708321882bc5e": {
"Name": "proxy_visualizer.1.dc0elaiyoe88s0mp5xn96ipw0",
"EndpointID": "d6b70d4896ff906958c21afa443ae6c3b5b6950ea365553d8cc06104a6274276",
"MacAddress": "02:42:0a:00:00:09",
"IPv4Address": "10.0.0.9/24",
"IPv6Address": ""
},
"3ad45d8197055f22f5ce629d896236419db71ff5661681e39c50869953892d4e": {
"Name": "proxy_traefik.1.wvsg02fel9qricm3hs6pa78xz",
"EndpointID": "e293f8c98795d0fdfff37be16861afe868e8d3077bbb24df4ecc4185adda1afb",
"MacAddress": "02:42:0a:00:00:18",
"IPv4Address": "10.0.0.24/24",
"IPv6Address": ""
},
"735191796dd68da2da718ebb952b0a431ec8aa1718fe3be2880d8110862644a9": {
"Name": "proxy_portainer.1.xkr5losjx9m5kolo8kjihznvr",
"EndpointID": "de7ef4135e25939a2d8a10b9fd9bad42c544589684b30a9ded5acfa751f9c327",
"MacAddress": "02:42:0a:00:00:07",
"IPv4Address": "10.0.0.7/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
You'll see Traefik, Portainer, and Visualizer are all present, but not the Apache container, which is on the worker node.
Inspecting the network on the worker node
$ sudo docker network inspect qo4wmqsi12lc
[
{
"Name": "proxy",
"Id": "qo4wmqsi12lcvsqd1pqfq9jxj",
"Created": "2018-02-06T19:53:29.104259115Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c5725a332db5922a16b9a5e663424548a77ab44ab021e25dc124109e744b9794": {
"Name": "example_site.1.pwqqddbhhg5tv0t3cysajj9ux",
"EndpointID": "6866abe0ae2a64e7d04aa111adc8f2e35d876a62ad3d5190b121e055ef729182",
"MacAddress": "02:42:0a:00:00:3c",
"IPv4Address": "10.0.0.60/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4102"
},
"Labels": {},
"Peers": [
{
"Name": "be4fb35c80f8",
"IP": "manager IP"
},
{
"Name": "4281cfd9ca73",
"IP": "worker IP"
}
]
}
]
It shows up in the network's container list, but the manager node's containers are not there either.
Portainer is also unable to see the Apache site when it's on the worker node.
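One way to narrow this down from the manager (the service name is a guess based on the compose file; substitute what docker service ls shows) is to confirm where the task landed and which networks the service is attached to:

```shell
# Where is the Apache task running, and is it healthy?
docker service ps server
# Which networks is the service actually attached to?
docker service inspect --format '{{range .Spec.TaskTemplate.Networks}}{{.Target}} {{end}}' server
```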
This problem is related to this: Creating new docker-machine instance always fails validating certs using openstack driver
Basically the answer is
It turns out my hosting service locked down everything other than 22,
80, and 443 on the Open Stack Security Group Rules. I had to add 2376
TCP Ingress for docker-machine's commands to work.
It helps explain why docker-machine ssh worked but not docker-machine
env
You should also look at https://docs.docker.com/datacenter/ucp/2.2/guides/admin/install/system-requirements/#ports-used and make sure they're all open.
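To check reachability of the TCP ports from another machine, a small sketch can help (the host is a placeholder; 2376 is from the answer above, 2377 and 7946 from the linked requirements page; note that 7946/udp and 4789/udp also need to be open but require a UDP-specific check):

```python
import socket

# TCP ports to verify: 2376 (docker-machine TLS), 2377 (swarm management),
# 7946 (node-to-node communication)
PORTS = [2376, 2377, 7946]

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "your.manager.host"  # placeholder: substitute your manager's address
    for port in PORTS:
        print(port, "reachable" if tcp_port_open(host, port) else "blocked/closed")
```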