I'm not sure if this is the right approach, but I have nginx in a container that proxies requests to a Node.js app, and everything works fine in a single docker-compose file with the depends_on feature and all that. However, now I want to separate them into different docker-compose files, and then nginx can't find the upstream nodejsapp:
2017/11/10 15:21:38 [emerg] 1#1: host not found in upstream "nodejsapp" in /etc/nginx/conf.d/default.nodejsapp.conf:8
nginx: [emerg] host not found in upstream "nodejsapp" in /etc/nginx/conf.d/default.nodejsapp.conf:8
My proxy configuration works in the single-compose setup, but for reference it is:
server {
    listen 80;

    location / {
        proxy_pass http://nodejsapp:4000;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }

    location ~* ^.+\.(atom|bmp|bz2|css|doc|docx|eot|exe|gif|gz|ico|jpeg|jpg|js|mid|midi|mp4|ogg|ogv|otf|pdf|png|ppt|pptx|rar|rss|rtf|svg|svgz|swf|tar|tgz|ttf|txt|wav|woff|xls|xml|zip)$ {
        access_log off;
        log_not_found on;
        expires max;
        proxy_pass http://nodejsapp:4000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
If I do docker ps I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4d888e6c37d nginx "nginx -g 'daemon ..." 3 minutes ago Restarting (1) 33 seconds ago nginx
5867ac82e5da nodejsapp:3.2.0-dev "pm2-docker proces..." 25 minutes ago Up 25 minutes 80/tcp, 443/tcp, 43554/tcp, 0.0.0.0:4000->4000/tcp nodejsapp
I understood that Docker containers can reach other containers by name, so I inspected the networks I have with docker network ls:
NETWORK ID NAME DRIVER SCOPE
cf9b7ea0a5b7 bridge bridge local
549c48fa592a docker_default bridge local
652ffb4094f0 host host local
412ed3bbfd01 nginx_default bridge local
85a803c70f83 none null local
And when I run docker network inspect bridge I can see the nodejsapp container among others:
"Containers": {
"5867ac82e5dad7642155a7c3df05c37cd83c1be5a0eb49d55cf5325bfaa7ea4d": {
"Name": "nodejsapp",
"EndpointID": "a362d02b083e90bb0acc50a2af8ec5a5f0a9a23a9f0ed2bfb62b1a6c60586fb8",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"b272cd07cc1e5c28632cbaf05858cf373b6a13304f69bee5de73fc57e5a3cf79": {
"Name": "sad_poitras",
"EndpointID": "c5b1e2389f5da8b9acf4f6c7a13f5d6e09411cd960b9e4dcd8ed38fef2982780",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"ffd171939e534a4749a4d215f52c1add353537e91e4a95d7b752775ee2b4c70f": {
"Name": "elated_hawking",
"EndpointID": "eca2709e189d8d5922e1dd5d8c5af14826bd77d5a399b63363e3b07671fb5169",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
And I can see that nginx is not connected to any network; however, it looks like it tries to create its own network (nginx_default). Note that I cannot see any container attached to that network, possibly because nginx is failing on start:
[
    {
        "Name": "nginx_default",
        "Id": "412ed3bbfd01336c86473a8d07fe6c444389cac44838552a40d6b7ec7f4c972d",
        "Created": "2017-11-10T15:21:35.2016921Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
Is there a way to get it to use the bridge network? Am I doing something else wrong?
UPDATE: I set nodejsapp to be on the nginx_default network, and then nginx started. However, the proxy is not working at all and I just get the default nginx welcome page. These are my docker-compose files after I split them into two:
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nginx:3.1.0-dev
    ports:
      - "10100:80"
    restart: always
    container_name: nginx
My Dockerfile.dev:
FROM nginx:latest
RUN mkdir /tmp/cache
COPY ./nodejsapp/default.dev.nodejsapp.conf /etc/nginx/conf.d/default.nodejsapp.conf
COPY nginx.dev.conf /etc/nginx/nginx.conf
The other docker-compose:
version: '2'
services:
  nodejsapp:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nodejsapp:3.2.0-dev
    ports:
      - "4000:4000"
    restart: always
    container_name: nodejsapp
and the corresponding Dockerfile.dev:
FROM keymetrics/pm2:latest
COPY dist dist/
COPY process.yml .
CMD ["pm2-docker", "process.yml"]
You can use an external network to connect the two containers. But first you must create the network; we'll call it my-network:
docker network create my-network
Then we need to declare the network in each docker-compose file:
networks:
  default:
    external:
      name: my-network
Docker creates two separate, isolated networks when you use two docker-compose files. AFAIK, the only way to connect them is to create a network and "link" the containers to that network.
So, create a network then use it. See the doc here: https://docs.docker.com/compose/networking/#use-a-pre-existing-network
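For reference, a minimal sketch of the result (reusing the images, ports, and container names from the question above; the comments marking the two files are just labels):
docker network create my-network

# docker-compose file for nginx
version: '2'
services:
  nginx:
    image: nginx:3.1.0-dev
    ports:
      - "10100:80"
    restart: always
    container_name: nginx
networks:
  default:
    external:
      name: my-network

# docker-compose file for the app
version: '2'
services:
  nodejsapp:
    image: nodejsapp:3.2.0-dev
    ports:
      - "4000:4000"
    restart: always
    container_name: nodejsapp
networks:
  default:
    external:
      name: my-network
Both containers then join my-network, and Docker's embedded DNS resolves nodejsapp, so the proxy_pass http://nodejsapp:4000; directive from the question works unchanged.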
Related
I am trying to get metrics from a running nginx container (the /nginx_status endpoint is restricted to 127.0.0.1 and 172.17.0.0/16). docker exec nginx curl 127.0.0.1/nginx_status works fine. But when I run the ELK Metricbeat container with nginx set as the host to monitor, I get:
Error fetching data for metricset nginx.stubstatus: error fetching status: error making http request: Get "http://nginx/nginx_status": dial tcp 172.27.0.6:8080: connect: connection refused
The nginx container is created from a different compose file than the Metricbeat one, so they run in different bridge networks. When I inspected both networks, I found that they are on completely different IP address ranges.
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.16.0/20",
"Gateway": "192.168.16.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
The other bridge network is on
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.32.0/20",
"Gateway": "192.168.32.1"
}
]
},
This was a surprise to me. I am totally confused now and would like to know what is going on. These are my current settings:
server {
    listen 127.0.0.1;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow 172.17.0.0/16;
        deny all;
    }
}
How can I reliably set up the nginx server block?
I was thinking that Docker uses only the 172.17.0.0/16 range. Is this something specific to docker-compose?
docker-compose WILL use any of those:
172.17.0.0/16
172.18.0.0/16
172.19.0.0/16
You might need to read a bit of theory (I never did; I actually slept through that class in school...), so the answer is probably
172.16.0.0/12
because of this text from the source above ^^^:
172.16.0.0/12: For private internal networks. IP addresses from this space should never be seen on the public Internet.
For example, this is the relevant part of my pg_hba.conf in a Postgres container:
host all all 172.16.0.0/12 password
which addresses a problem similar to the one you had.
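Applied to the server block from the question, that would mean widening the allow rule to the whole 172.16.0.0/12 range. A sketch (note that the original listen 127.0.0.1; would by itself refuse connections from other containers, so this sketch listens on all interfaces):
server {
    listen 80;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow 172.16.0.0/12;  # 172.16.0.0 through 172.31.255.255
        deny all;
    }
}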
We can specify the bridge network within docker-compose.yml by using a user-defined bridge. I hope this is what you are looking for:
version: "2.1"
services:
nginx:
image: ghcr.io/linuxserver/nginx
container_name: nginx
volumes:
- ./config:/config
ports:
- 443:443
restart: always
networks:
br-uplink:
ipv4_address: 192.168.11.2
networks:
br-uplink:
driver: bridge
name: br-uplink
ipam:
config:
- subnet: "192.168.11.0/24"
gateway: "192.168.11.1"
I tried to go into the nginx container and curl the URL http://my-boot-system:8079, but the error occurred as in the title.
In the nginx Dockerfile, I have:
FROM nginx
VOLUME /tmp
ENV LANG en_US.UTF-8
RUN echo "server { \
listen 80; \
location ^~ /my-boot { \
proxy_pass http://my-boot-system:8079/my-boot/; \
proxy_set_header Host my-boot-system; \
proxy_set_header X-Real-IP \$remote_addr; \
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \
} \
location / { \
root /var/www/html/; \
index index.html index.htm; \
if (!-e \$request_filename) { \
rewrite ^(.*)\$ /index.html?s=\$1 last; \
break; \
} \
} \
access_log /var/log/nginx/access.log ; \
} " > /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/www \
&& mkdir -p /var/www/html
ADD dist/ /var/www/html/
EXPOSE 80
It seems the nginx container couldn't find the network? But in the docker-compose file, I have:
version: '2.4'
services:
  my-iot-survey-web:
    build:
      context: .
    restart: always
    container_name: my-iot-survey-web
    image: my-iot-survey-web
    ports:
      - 7070:80
    networks:
      - my-iot-surver-api_default
networks:
  my-iot-surver-api_default:
    external: true
I already have a network named my-iot-surver-api_default, which shows up in docker network ls, and the network my-iot-surver-api_default is also present in the docker-compose definition of my-boot-system:
version: '2.4'
services:
  my-boot-redis:
    image: redis:5.0
    ports:
      - 6378:6379
    restart: always
    container_name: my-boot-redis
  my-boot-system:
    build:
      context: ./my-boot-module-system
    restart: always
    container_name: my-boot-system
    image: my-boot-system
    ports:
      - 8079:8080
    networks:
      - my-iot-surver-api_default
networks:
  my-iot-surver-api_default:
    external: true
The following is the docker network inspect output:
[
    {
        "Name": "my-iot-surver-api_default",
        "Id": "aaeda9e6419a1d603e6c3de6364025ef7c3ea034de57ba3a63c00b608f844d5f",
        "Created": "2021-03-19T14:39:56.432760565+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3c59d1d44ea2d08a3c6a68fb16539db0830d535ce585d128d10e90b57c1f5642": {
                "Name": "my-boot-redis",
                "EndpointID": "30b21c59c1da1b031f8ce2b85c7fc8c62f03b7623fb69ee205af8dfa5a95d61a",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "9fd99e54d2bc7ad51dc539586f8a49a295f9ea0ba2bd1c9555d864d351d4d4be": {
                "Name": "my-iot-survey-web",
                "EndpointID": "dc8fabe7113742476c00f0eee98f40c99772835e1e3fe9b41f5cef8cda9824ae",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "fd999975f007537c4449b65cf5f2cb86b1d5b40655a7c324c2cc9f35bf4632f5": {
                "Name": "my-boot-system",
                "EndpointID": "ede496dc1ff7f73d45bff969a15dd5e3a05d82979cbfa7f3226360c24b4369f4",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Connections between containers on the same Docker network ignore ports:. You always need to make the connection to the port the service inside the container is listening on; if you do have ports:, the port number for inter-container connections needs to match the second port number. (If the service doesn't need to be reached from outside Docker, it's also valid to leave off ports: entirely.)
In this particular setup, you can also notice you're getting a "connection refused" error. If you get that error (and not a "no such host" error), Nginx has successfully looked up the host name it's been given, which implies the Docker-level setup is correct.
This means you can change your Nginx configuration to:
proxy_pass http://my-boot-system:8080/my-boot/;
# Not remapped 8079 but the standard port ^^^^
(I'd consider some other cleanups in the Dockerfile. COPY the configuration file in instead of trying to RUN a long-winded escape-prone shell command to create it inline. Don't declare a VOLUME; it mostly only has confusing side effects and doesn't bring any benefits. Prefer COPY to ADD in most cases. The Docker Hub nginx base image also already includes EXPOSE and a content directory, so use its /usr/share/nginx/html instead of /var/www/html. That would reduce the Dockerfile to the FROM line and two COPY lines to add in the configuration and content.)
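Putting those cleanups together, the whole Dockerfile might shrink to something like this sketch (default.conf here is assumed to be the server block from the RUN echo above, saved as a plain file next to the Dockerfile, with the port already corrected):
FROM nginx
# nginx config as a regular file instead of an escape-prone inline echo
COPY default.conf /etc/nginx/conf.d/default.conf
# static content in the base image's standard content directory
COPY dist/ /usr/share/nginx/html/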
I am running my containers on Docker Swarm. The asset-frontend service is my frontend application, which runs Nginx inside the container and exposes port 80. Now, if I do
curl http://10.255.8.21:80
or
curl http://127.0.0.1:80
from the host where I am running these containers, I am able to see my asset-frontend application, but it is not accessible outside of the host. I am not able to access it from another machine. My host machine's operating system is CentOS 8.
This is my docker-compose file:
version: "3.3"
networks:
basic:
services:
asset-backend:
image: asset/asset-management-backend
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
asset-mongodb:
image: mongo
restart: always
env_file: .env
ports:
- "27017:27017"
volumes:
- $HOME/asset/mongodb:/data/db
networks:
- basic
asset-postgres:
image: asset/postgresql
restart: always
env_file: .env
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=asset-management
volumes:
- $HOME/asset/postgres:/var/lib/postgresql/data
networks:
- basic
asset-frontend:
image: asset/asset-management-frontend
restart: always
ports:
- "80:80"
environment:
- ENV=dev
depends_on:
- asset-backend
deploy:
replicas: 1
networks:
- basic
asset-autodiscovery-cron:
image: asset/auto-discovery-cron
restart: always
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
This is my docker service ls output:
ID NAME MODE REPLICAS IMAGE PORTS
auz640zl60bx asset_asset-autodiscovery-cron replicated 1/1 asset/auto-discovery-cron:latest
g6poofhvmoal asset_asset-backend replicated 1/1 asset/asset-management-backend:latest
brhq4g4mz7cf asset_asset-frontend replicated 1/1 asset/asset-management-frontend:latest *:80->80/tcp
rmkncnsm2pjn asset_asset-mongodb replicated 1/1 mongo:latest *:27017->27017/tcp
rmlmdpa5fz69 asset_asset-postgres replicated 1/1 asset/postgresql:latest *:5432->5432/tcp
Port 80 is open in the firewall. The following is the output of firewall-cmd --list-all:
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
If I inspect my created network, the output is the following:
[
    {
        "Name": "asset_basic",
        "Id": "zw73vr9xigfx7hy16u1myw5gc",
        "Created": "2019-11-26T02:36:38.241352385-05:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
                "Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
                "EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
                "MacAddress": "02:42:0a:00:03:08",
                "IPv4Address": "10.0.3.8/24",
                "IPv6Address": ""
            },
            "943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
                "Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
                "EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
                "MacAddress": "02:42:0a:00:03:04",
                "IPv4Address": "10.0.3.4/24",
                "IPv6Address": ""
            },
            "afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
                "Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
                "EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
                "MacAddress": "02:42:0a:00:03:0a",
                "IPv4Address": "10.0.3.10/24",
                "IPv6Address": ""
            },
            "f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
                "Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
                "EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
                "MacAddress": "02:42:0a:00:03:0c",
                "IPv4Address": "10.0.3.12/24",
                "IPv6Address": ""
            },
            "f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
                "Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
                "EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
                "MacAddress": "02:42:0a:00:03:06",
                "IPv4Address": "10.0.3.6/24",
                "IPv6Address": ""
            },
            "lb-asset_basic": {
                "Name": "asset_basic-endpoint",
                "EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
                "MacAddress": "02:42:0a:00:03:02",
                "IPv4Address": "10.0.3.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4100"
        },
        "Labels": {
            "com.docker.stack.namespace": "asset"
        },
        "Peers": [
            {
                "Name": "8170c4487a4b",
                "IP": "10.255.8.21"
            }
        ]
    }
]
I ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified using docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
To fix it, you can update the configuration of the ingress network as specified in Customize the default ingress network; in summary:
Remove services that publish ports
Remove existing network: docker network rm ingress
Recreate using non-conflicting subnet:
docker network create \
  --driver overlay \
  --ingress \
  --subnet 172.16.0.0/16 \
  --gateway 172.16.0.1 \
  ingress
(Use whatever other non-conflicting subnet you want.)
Restart services
You can avoid a clash to begin with by specifying the default subnet pool when initializing the swarm using the --default-addr-pool option.
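For example, at swarm initialization time (a sketch; the pool below is an arbitrary non-conflicting choice):
docker swarm init --default-addr-pool 10.20.0.0/16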
You can publish ports by updating the service:
docker service update your-service --publish-add 80:80
Can you try this URL instead of the IP address: host.docker.internal? So something like http://host.docker.internal:80.
I suggest you verify the "right" behavior using docker-compose first. Then try to use docker swarm without a network specification, just to verify there are no network interface problems.
Also, you could use the below command to verify your LISTEN ports:
netstat -tulpn
EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.
While running docker, provide a port mapping, like:
docker run -p 8081:8081 your-docker-image
Or provide the port mapping in Docker Desktop while starting the container.
I ran into this same issue. It turns out my iptables filter rules were causing external connections to fail.
In Docker swarm mode, Docker creates a virtual bridge device, docker_gwbridge, to reach the overlay network. My iptables rules had the following line, which drops forwarded packets:
:FORWARD DROP
That means network packets arriving on the physical NIC can't reach the Docker ingress network, so my Docker service only worked on localhost.
Change the iptables rule to
:FORWARD ACCEPT
and the problem is solved without touching Docker at all.
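If you manage the rules with iptables directly, this amounts to changing the default policy of the FORWARD chain (a sketch; persist it with your distribution's usual mechanism, e.g. iptables-save):
sudo iptables -P FORWARD ACCEPT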
I am running Docker for Mac. My docker-compose configuration file is:
version: "2.3"
services:
base:
build:
context: .
dev:
network_mode: "host"
extends:
service: base
When the container is launched via docker-compose run --rm dev sh, it can't ping an IP address (172.25.36.32), but I can ping this address from the host. I have set network_mode: "host" in the configuration file. How can I make the Docker container share the host network?
I found that host networking doesn't work on Mac. Is there a solution for that on Mac?
Below is the docker network inspect ID output:
[
    {
        "Name": "my_container_default",
        "Id": "0441cf2b99b692d2047ded88d29a470e2622a1669a7bfce96804b50d609dc3b0",
        "Created": "2019-08-27T06:06:30.984427063Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "22d3e7500ccfdc7fcd192a9f5977ef32e086e340908b1c0ff007e4144cc91f2e": {
                "Name": "time-series-api_dev_run_b35174fdf692",
                "EndpointID": "23924b4f68570bc99e01768db53a083533092208a3c8c92b20152c7d2fefe8ce",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "time-series-api",
            "com.docker.compose.version": "1.24.1"
        }
    }
]
I believe you need to add the network option during the build. Try with:
version: "2.3"
services:
base:
build:
context: .
network: host
dev:
network_mode: "host"
extends:
service: base
EDIT: This works on Linux; please see the documentation for Mac:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
I think you need to start the container with the up option, not run, since run overrides many options:
docker-compose up dev
Or you may try --use-aliases with run:
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
see this
P.S. After your update, the following will work on Mac:
dev:
  network: host
  extends:
    service: base
I feel like this is simple, but I can't figure it out. I have two services, consul and traefik, up in a single-node swarm on the same host.
> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3g1obv9l7a9q consul_consul replicated 1/1 progrium/consul:latest
ogdnlfe1v8qx proxy_proxy global 1/1 traefik:alpine *:80->80/tcp, *:443->443/tcp
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090f1ed90972 progrium/consul:latest "/bin/start -server …" 12 minutes ago Up 12 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul_consul.1.o0j8kijns4lag6odmwkvikexv
20f03023d511 traefik:alpine "/entrypoint.sh -c /…" 12 minutes ago Up 12 minutes 80/tcp
Both containers have access to the "consul" overlay network, which was created like this:
> docker network create --driver overlay --attachable consul
ypdmdyx2ulqt8l8glejfn2t25
Traefik is complaining that it can't reach consul.
time="2019-03-18T18:58:08Z" level=error msg="Load config error: Get http://consul:8500/v1/kv/traefik?consistent=&recurse=&wait=30000ms: dial tcp 10.0.2.2:8500: connect: connection refused, retrying in 7.492175404s"
I can go into the traefik container and confirm that I can't reach consul through the overlay network, although it is pingable.
> docker exec -it 20f03023d511 ash
/ # nslookup consul
Name: consul
Address 1: 10.0.2.2
/ # curl consul:8500
curl: (7) Failed to connect to consul port 8500: Connection refused
# ping consul
PING consul (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.085 ms
However, if I look a little deeper, I find that they are connected; the overlay network just isn't transmitting traffic to the actual destination for some reason. If I go directly to the actual consul IP, it works.
/ # nslookup tasks.consul
Name: tasks.consul
Address 1: 10.0.2.3 0327c8e1bdd7.consul
/ # curl tasks.consul:8500
Moved Permanently.
I could work around this (technically there will only ever be one copy of consul running), but I'd like to know why the traffic isn't routing in the first place before I get deeper into it. I can't think of anything else to try. Here is various information related to this setup.
> docker --version
Docker version 18.09.2, build 6247962
> docker network ls
NETWORK ID NAME DRIVER SCOPE
cee3cdfe1194 bridge bridge local
ypdmdyx2ulqt consul overlay swarm
5469e4538c2d docker_gwbridge bridge local
5fd928ea1e31 host host local
9v22k03pg9sl ingress overlay swarm
> docker network inspect consul
[
    {
        "Name": "consul",
        "Id": "ypdmdyx2ulqt8l8glejfn2t25",
        "Created": "2019-03-18T14:44:27.213690506-04:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.2.0/24",
                    "Gateway": "10.0.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0327c8e1bdd7ebb5a7871d16cf12df03240996f9e590509984783715a4c09193": {
                "Name": "consul_consul.1.8v4bshotrco8fv3sclwx61106",
                "EndpointID": "ae9d5ef1d19b67e297ebf40f6db410c33e4e3c0266c56e539e696be3ed4c81a5",
                "MacAddress": "02:42:0a:00:02:03",
                "IPv4Address": "10.0.2.3/24",
                "IPv6Address": ""
            },
            "c21f5dfa93a2f43b747aedc64a343d94d6c1c2e6558d81bd4a52e2ba4b5fa90f": {
                "Name": "proxy_proxy.sb6oindhmfukq4gcne6ynb2o2.4zvco02we58i3ulbyrsw1b2ok",
                "EndpointID": "7596a208e0b05ba688f318814e24a2a1a3401765ed53ca421bf61c73e65c235a",
                "MacAddress": "02:42:0a:00:02:06",
                "IPv4Address": "10.0.2.6/24",
                "IPv6Address": ""
            },
            "lb-consul": {
                "Name": "consul-endpoint",
                "EndpointID": "23e74716ef54f3fb6537b305176b790b4bc4132dda55f20588d7ce4ca71d7372",
                "MacAddress": "02:42:0a:00:02:04",
                "IPv4Address": "10.0.2.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4099"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "e11b9bd30b31",
                "IP": "10.8.0.1"
            }
        ]
    }
]
> cat consul/docker-compose.yml
version: '3.1'

services:
  consul:
    image: progrium/consul
    command: -server -bootstrap
    networks:
      - consul
    volumes:
      - consul:/data
    deploy:
      labels:
        - "traefik.enable=false"

networks:
  consul:
    external: true
> cat proxy/docker-compose.yml
version: '3.3'

services:
  proxy:
    image: traefik:alpine
    command: -c /traefik.toml
    networks:
      # We need an external proxy network and the consul network
      # - proxy
      - consul
    ports:
      # Send HTTP and HTTPS traffic to the proxy service
      - 80:80
      - 443:443
    configs:
      - traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Deploy the service to all nodes that match our constraints
      mode: global
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.proxy==true"
      labels:
        # Traefik uses labels to configure routing to your services
        # Change the domain to your own
        - "traefik.frontend.rule=Host:proxy.mcwebsite.net"
        # Route traffic to the web interface hosted on port 8080 in the container
        - "traefik.port=8080"
        # Name the backend (not required here)
        - "traefik.backend=traefik"
        # Manually set entrypoints (not required here)
        - "traefik.frontend.entryPoints=http,https"

configs:
  # Traefik configuration file
  traefik.toml:
    file: ./traefik.toml

# This service will be using two external networks
networks:
  # proxy:
  #   external: true
  consul:
    external: true
There were two optional kernel configs, CONFIG_IP_VS_PROTO_TCP and CONFIG_IP_VS_PROTO_UDP, disabled in my kernel which, you guessed it, enable TCP and UDP load balancing.
I wish I'd checked that about four hours sooner than I did.
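If you want to check this on your own machine, the kernel config can usually be inspected like this (a sketch; the config file location varies by distribution):
# config shipped alongside the installed kernel
grep 'CONFIG_IP_VS_PROTO_TCP\|CONFIG_IP_VS_PROTO_UDP' /boot/config-$(uname -r)
# or, if the running kernel exposes its config
zcat /proc/config.gz | grep CONFIG_IP_VS_PROTO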