I want to use nginx-proxy in front of another nginx instance (my-nginx), and this is the most minimal setup that reproduces the problem. Everything runs on the same machine, and I have the following:
Setup
NGINX-Proxy docker-compose.yml
version: '3.8'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true
Network
docker network create nginx-proxy
Commands
These are the commands I run to start nginx-proxy and my-nginx:
docker compose up -d
and (I made this as minimal as possible; in reality this nginx is part of a larger Docker Compose project, but the "connection refused" error remains):
docker run -p 888:80 --network nginx-proxy -e VIRTUAL_HOST=my.sub.domain.tld -e VIRTUAL_PORT=888 --name my-nginx nginx
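(A note on VIRTUAL_PORT: per the nginx-proxy documentation it should be the port the service listens on inside the container, not the published host port, since nginx-proxy reaches the container over the shared Docker network. A sketch of the same command with VIRTUAL_PORT pointing at nginx's in-container port 80:)
docker run -p 888:80 --network nginx-proxy -e VIRTUAL_HOST=my.sub.domain.tld -e VIRTUAL_PORT=80 --name my-nginx nginx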
Logs
docker ps shows me that both containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
744f4df7059d nginx "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 0.0.0.0:888->80/tcp, :::888->80/tcp my-nginx
e3c57213f2bc jwilder/nginx-proxy "/app/docker-entrypo…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp jwilder-nginx-proxy-1
And docker network inspect nginx-proxy tells me that they are in fact connected to the same network:
[
{
"Name": "nginx-proxy",
"Id": "f71dcf5c1125005f1527eaa6c47d77c4aeeafb763faefc17c42e10b73c23d52b",
"Created": "2022-11-08T18:36:31.022872372Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.208.0/20",
"Gateway": "192.168.208.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"744f4df7059d02aa0b805a769fbc8b6428c0f3ace165b89d4b3c93395d517e72": {
"Name": "my-nginx",
"EndpointID": "7252798a05774d3d07ce04524859820026f9bad93b0b72d0350033fad821d285",
"MacAddress": "02:42:c0:a8:d0:03",
"IPv4Address": "192.168.208.3/20",
"IPv6Address": ""
},
"e3c57213f2bcbe70e91c961492c4157d6fc1f4f1a5c6456969201b102d689442": {
"Name": "jwilder-nginx-proxy-1",
"EndpointID": "5e41bc5c60c6b3d697e70f32b443e9c5b2852ae6c6099ad21a5ef978981cf9be",
"MacAddress": "02:42:c0:a8:d0:02",
"IPv4Address": "192.168.208.2/20",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Problem
When I exec into the nginx-proxy with
docker compose exec nginx-proxy bash
and I try to curl my-nginx with
curl my-nginx:888
I get:
curl: (7) Failed to connect to my-nginx port 888: Connection refused
What I tried:
use the default network instead of the nginx-proxy network
curl from the host (does not work)
accessing it via the internet gives this error (the same issue as the curl above):
jwilder-nginx-proxy-1 | nginx.1 | 2022/11/09 10:02:39 [error] 38#38: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 84.56.95.53, server: my.sub.domain.tld, request: "GET / HTTP/1.1", upstream: "http://192.168.208.3:888/", host: "my.sub.domain.tld"
jwilder-nginx-proxy-1 | nginx.1 | my.sub.domain.tld 84.56.95.53 - - [09/Nov/2022:10:02:39 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0" "192.168.208.3:888"
Any help is much appreciated!
Related
I'm currently struggling a lot to spin up a small traefik example on my Docker Swarm instance.
I started with a docker-compose file for local development and everything works as expected.
But when I turn this into a swarm stack file to bring the environment into production, I always get a Bad Gateway from traefik.
After a lot of searching, it seems to be related to a networking issue, since traefik makes requests across two different networks, but I'm not able to find the problem.
After several iterations I tried to reproduce the issue with "official" containers to provide a better example for other people.
So this is my traefik.yml:
version: "3.7"
networks:
external:
external: true
services:
traefik:
image: "traefik:v2.8.1"
command:
- "--log.level=INFO"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=external"
- "--entrypoints.web.address=:80"
- "--entrypoints.web.forwardedHeaders.insecure"
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- external
deploy:
placement:
constraints: [node.role == manager]
host-app:
image: traefik/whoami
ports:
- "9000:80"
networks:
- external
deploy:
labels:
- "traefik.enable=true"
- "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
- "traefik.http.services.host-app.loadbalancer.server.port=9000"
- "traefik.http.routers.host-app.entrypoints=web"
- "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
- "traefik.http.routers.host-app.middlewares=host-app-stripprefix#docker"
- "traefik.docker.network=external"
The network is created with: docker network create -d overlay external
and I deploy the stack with docker stack deploy -c traefik.yml server
Up to this point there are no issues and everything spins up fine.
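A quick sanity check at this point (a sketch; the service names assume the stack was deployed as server):
docker service ls
docker service ps server_traefik server_host-app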
When I curl localhost:9000 I get the correct response:
curl localhost:9000
Hostname: 7aa77bc62b44
IP: 127.0.0.1
IP: 10.0.0.8
IP: 172.25.0.4
IP: 10.0.4.6
RemoteAddr: 10.0.0.2:35068
GET / HTTP/1.1
Host: localhost:9000
User-Agent: curl/7.68.0
Accept: */*
but on
curl localhost/whoami
Bad Gateway%
I always get the Bad Gateway error.
So I checked my network with docker network inspect external to ensure that both are running in the same network, and this is the case:
[
{
"Name": "external",
"Id": "iianul6ua9u1f1bb8ibsnwkyc",
"Created": "2022-08-09T19:32:01.4491323Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.0/24",
"Gateway": "10.0.4.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7aa77bc62b440e32c7b904fcbd91aea14e7a73133af0889ad9e0c9f75f2a884a": {
"Name": "server_host-app.1.m2f5x8jvn76p2ssya692f4ydp",
"EndpointID": "5d5175b73f1aadf2da30f0855dc0697628801a31d37aa50d78a20c21858ccdae",
"MacAddress": "02:42:0a:00:04:06",
"IPv4Address": "10.0.4.6/24",
"IPv6Address": ""
},
"e23f5c2897833f800a961ab49a4f76870f0377b5467178a060ec938391da46c7": {
"Name": "server_traefik.1.v5g3af00gqpulfcac84rwmnkx",
"EndpointID": "4db5d69e1ad805954503eb31c4ece5a2461a866e10fcbf579357bf998bf3490b",
"MacAddress": "02:42:0a:00:04:03",
"IPv4Address": "10.0.4.3/24",
"IPv6Address": ""
},
"lb-external": {
"Name": "external-endpoint",
"EndpointID": "ed668b033450646629ca050e4777ae95a5a65fa12a5eb617dbe0c4a20d84be28",
"MacAddress": "02:42:0a:00:04:04",
"IPv4Address": "10.0.4.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {},
"Peers": [
{
"Name": "3cb3e7ba42dc",
"IP": "192.168.65.3"
}
]
}
]
and by checking the traefik logs I get the following:
10.0.0.2 - - [09/Aug/2022:19:42:34 +0000] "GET /whoami HTTP/1.1" 502 11 "-" "-" 4 "host-app@docker" "http://10.0.4.9:9000" 0ms
which is the correct server:port for the whoami service. Even connecting into the traefik container and pinging 10.0.4.9 works fine:
PING 10.0.4.9 (10.0.4.9): 56 data bytes
64 bytes from 10.0.4.9: seq=0 ttl=64 time=0.066 ms
64 bytes from 10.0.4.9: seq=1 ttl=64 time=0.057 ms
^C
--- 10.0.4.9 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.061/0.066 ms
These logs and snippets are all from my local swarm on Docker for Windows with a WSL2 Ubuntu distribution. But I also tested this on a CentOS swarm that can be requested within my company, and on https://labs.play-with-docker.com/, and it all leads to the same error.
So please, can anybody tell me what configuration I'm missing or what mistake I made, so I can get this running?
After consulting a coworker and creating another example, we finally found the solution ourselves.
It was simply my own mistake: I used the published port for load balancing from traefik to the service, which is wrong.
  host-app:
    image: traefik/whoami
    ports:
      - "9000:80"
    networks:
      - external
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
        - "traefik.http.services.host-app.loadbalancer.server.port=80" # <--- this was the mistake: it pointed at the published port 9000 instead of the container port 80
        - "traefik.http.routers.host-app.entrypoints=web"
        - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
        - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
        - "traefik.docker.network=external"
and that is the reason for the Bad Gateway: traefik tries to reach the service on its published port, which does not exist inside the overlay network.
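In other words, the part that matters is this (a sketch annotating just the relevant lines):
    ports:
      - "9000:80"   # host:container publish; port 9000 exists only on the host
    deploy:
      labels:
        # wrong: 9000 is the published port; nothing listens on 9000 inside the container or on the overlay network
        # - "traefik.http.services.host-app.loadbalancer.server.port=9000"
        # right: 80 is the port the whoami process actually listens on inside the container
        - "traefik.http.services.host-app.loadbalancer.server.port=80"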
I have the following docker-compose.yml:
services:
  postgres:
    image: "postgres:11.0-alpine"
  app:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - postgres
      - nuxt
  nuxt:
    image: node:latest
    ports:
      - "3000:3000"
I need the nuxt service to communicate with app.
Within the nuxt service (started with docker-compose run --rm --service-ports nuxt bash), if I run
root@62cafc299e8a:/app# ping postgres
PING postgres (172.18.0.2) 56(84) bytes of data.
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=2 ttl=64 time=0.130 ms
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=3 ttl=64 time=0.103 ms
but if I do:
root@62cafc299e8a:/app# ping app
ping: app: No address associated with hostname
Why does it work for postgres but not for app?
If I do docker network inspect 4fcb63b4b1c9, they all appear to be on the same network:
[
{
"Name": "myapp_default",
"Id": "4fcb63b4b1c9fe37ebb26e9d4d22c359c9d5ed6153bd390b6f0b63ffeb0d5c37",
"Created": "2019-05-16T16:46:27.820758377+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"53b726bdd01159b5f18e8dcb858e979e6e2f8ef68c62e049b824899a74b186c3": {
"Name": "myapp_app_run_c82e91ca4ba0",
"EndpointID": "b535b6ca855a5dea19060b2f7c1bd82247b94740d4699eff1c8669c5b0677f78",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"62cafc299e8a90fd39530bbe4a6af8b86098405e54e4c9e61128539ffd5ba928": {
"Name": "myapp_nuxt_run_3fb01bb2f778",
"EndpointID": "7eb8f5f8798baee4d65cbbfe5f0f5372790374b48f599f32490700198fa6d54c",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"9dc1c848b2e347876292650c312e8aaf3f469f2efa96710fb50d033b797124b4": {
"Name": "myapp_postgres_1",
"EndpointID": "a925438ad5644c03731b7f7c926cff095709b2689fd5f404e9ac4e04c2fbc26a",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "myapp",
"com.docker.compose.version": "1.23.2"
}
}
]
So why is that? I also tried with aliases, without success. :(
Your app container is most likely not running. Its appearance in docker network inspect only means that the container exists; it may have exited (i.e. it is not running). You can check with docker ps -a, for example:
$ docker ps -a
CONTAINER ID ... STATUS ... NAMES
fe908e014fdd Exited (0) Less than a second ago so_app_1
3b2ca418c051 Up 2 minutes so_postgres_1
container app exists but is not running: you won't be able to ping it even though it appears in the network
container postgres exists and is running: you will be able to ping it
It's probably due to the fact that docker-compose run --rm --service-ports nuxt bash will only create and run the nuxt container; it won't run app or postgres. You are able to ping postgres because it was already running before you used docker-compose run nuxt bash.
To be able to ping other containers after running docker-compose run nuxt ..., you should either:
Have the other containers already running beforehand (for example by using docker-compose up -d; see the sketch after this list)
Have the container you are trying to run declare depends_on on the other containers, for example:
  nuxt:
    image: node:latest
    ports:
      - "3000:3000"
    # this will ensure postgres and app are run as well when using docker-compose run
    depends_on:
      - app
      - postgres
Even with that, your container may fail to start (or exit right after start) and you won't be able to ping it. Check with docker ps -a that it is running and docker logs to see why it may have exited.
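For the first option (starting the other containers beforehand), the flow would look roughly like this (a sketch, using the service names from the question):
docker-compose up -d                               # start postgres, app and nuxt in the background
docker-compose run --rm --service-ports nuxt bash
# inside the container, app should now resolve (assuming the app container stays up)
ping app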
As @Pierre said, most probably your container is not running.
From the docker-compose snippet in your question, it seems you are not running anything in that container (such as a server or uwsgi) to keep it alive.
  app:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - postgres
      - nuxt
To keep it running under Docker Compose, add a command directive like below:
  app:
    build: .
    command: tail -f /dev/null # trick to stop immediate exit and keep the container alive
    ports:
      - "4000:4000"
    depends_on:
      - postgres
      - nuxt
The container should be 'pingable' now.
If you wish to run it via docker run instead, use -t, which allocates a pseudo-TTY:
docker run -t -d <image> <command>
I have two Docker containers (mysql and a Node.js server app) running on Windows:
docker run -d --network bridge --name own -p 80:3000 own:latest
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce966e43414 own:latest "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:80->3000/tcp own
ed10cfc93dd5 mysql:5 "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
When I run the server app directly from cmd (NOT inside the Docker VM) and open localhost:3000, everything is fine and I see a successful connection to the docker container at 0.0.0.0:3306. But when I run:
docker start own
and check the browser at 0.0.0.0:80, I see Error: connect ECONNREFUSED 127.0.0.1:3306.
docker network ls
NETWORK ID NAME DRIVER SCOPE
019f0886d253 bridge bridge local
fa1842bad14c host host local
85e7d1e38e14 none null local
docker inspect bridge
[
{
"Name": "bridge",
"Id": "019f0886d253091c1367863e38a199fe9b539a72ddb7575b26f40d0d1b1f78dc",
"Created": "2019-11-19T09:15:53.2096944Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"a79ec12c4cc908326c54abc2b47f80ffa3da31c5e735bf5ff2755f23b9d562dd": {
"Name": "own",
"EndpointID": "2afc225e29138ff9f1da0f557e9f7659d3c4ccaeb5bfaa578df88a672dac003f",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"ed10cfc93dd5eda7cfb8a26e5e4b2a8ccb4e9db7a4957b3d1048cb93f5137fd4": {
"Name": "mysql",
"EndpointID": "ea23d009f959d954269c0554cecf37d01f8fe71481965077f1372df27f05208a",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Maybe I could somehow assign the own container to the bridge network like mysql, so that afterwards the mysql container would be reachable from the own container? Please help, what should I do?
# create network mynetwork
docker network create --subnet 172.17.0.0/16 mynetwork
# create the own container (without starting it)
docker create --name own -p 80:3000 own:latest
# add the own container to the network mynetwork
docker network connect --ip 172.17.0.2 mynetwork own
# start the own container
docker start own
# same as above but with a different ip
docker create --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker network connect --ip 172.17.0.3 mynetwork mysql
docker start mysql
When you stop and remove your containers, you may remove the network this way:
docker network rm mynetwork
Or, if you don't remove it, there is no need to create it again as above; just connect your new/other containers to it.
In your application you should use 172.17.0.3 as the MySQL address.
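Alternatively (a sketch, not what the answer above does): on a user-defined network, Docker's embedded DNS lets containers resolve each other by name, so the application could use the hostname mysql instead of a hard-coded IP:
docker network create mynetwork
docker run -d --name mysql --network mynetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker run -d --name own --network mynetwork -p 80:3000 own:latest
# inside "own", connect to host "mysql" on port 3306 instead of 127.0.0.1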
I feel like this should be simple, but I can't figure it out. I have two services, consul and traefik, up in a single-node swarm on the same host.
> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3g1obv9l7a9q consul_consul replicated 1/1 progrium/consul:latest
ogdnlfe1v8qx proxy_proxy global 1/1 traefik:alpine *:80->80/tcp, *:443->443/tcp
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090f1ed90972 progrium/consul:latest "/bin/start -server …" 12 minutes ago Up 12 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul_consul.1.o0j8kijns4lag6odmwkvikexv
20f03023d511 traefik:alpine "/entrypoint.sh -c /…" 12 minutes ago Up 12 minutes 80/tcp
Both containers have access to the "consul" overlay network, which was created like this:
> docker network create --driver overlay --attachable consul
ypdmdyx2ulqt8l8glejfn2t25
Traefik is complaining that it can't reach consul.
time="2019-03-18T18:58:08Z" level=error msg="Load config error: Get http://consul:8500/v1/kv/traefik?consistent=&recurse=&wait=30000ms: dial tcp 10.0.2.2:8500: connect: connection refused, retrying in 7.492175404s"
I can go into the traefik container and confirm that I can't reach consul through the overlay network, although it is pingable.
> docker exec -it 20f03023d511 ash
/ # nslookup consul
Name: consul
Address 1: 10.0.2.2
/ # curl consul:8500
curl: (7) Failed to connect to consul port 8500: Connection refused
# ping consul
PING consul (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.085 ms
However, if I look a little deeper, I find that they are connected; the overlay network just isn't transmitting traffic to the actual destination for some reason. If I go directly to the actual consul IP, it works.
/ # nslookup tasks.consul
Name: tasks.consul
Address 1: 10.0.2.3 0327c8e1bdd7.consul
/ # curl tasks.consul:8500
Moved Permanently.
I could work around this, since technically there will only ever be one copy of consul running, but I'd like to know why the traffic isn't routing in the first place before I get deeper into it. I can't think of anything else to try. Here is various information related to this setup.
> docker --version
Docker version 18.09.2, build 6247962
> docker network ls
NETWORK ID NAME DRIVER SCOPE
cee3cdfe1194 bridge bridge local
ypdmdyx2ulqt consul overlay swarm
5469e4538c2d docker_gwbridge bridge local
5fd928ea1e31 host host local
9v22k03pg9sl ingress overlay swarm
> docker network inspect consul
[
{
"Name": "consul",
"Id": "ypdmdyx2ulqt8l8glejfn2t25",
"Created": "2019-03-18T14:44:27.213690506-04:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.2.0/24",
"Gateway": "10.0.2.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0327c8e1bdd7ebb5a7871d16cf12df03240996f9e590509984783715a4c09193": {
"Name": "consul_consul.1.8v4bshotrco8fv3sclwx61106",
"EndpointID": "ae9d5ef1d19b67e297ebf40f6db410c33e4e3c0266c56e539e696be3ed4c81a5",
"MacAddress": "02:42:0a:00:02:03",
"IPv4Address": "10.0.2.3/24",
"IPv6Address": ""
},
"c21f5dfa93a2f43b747aedc64a343d94d6c1c2e6558d81bd4a52e2ba4b5fa90f": {
"Name": "proxy_proxy.sb6oindhmfukq4gcne6ynb2o2.4zvco02we58i3ulbyrsw1b2ok",
"EndpointID": "7596a208e0b05ba688f318814e24a2a1a3401765ed53ca421bf61c73e65c235a",
"MacAddress": "02:42:0a:00:02:06",
"IPv4Address": "10.0.2.6/24",
"IPv6Address": ""
},
"lb-consul": {
"Name": "consul-endpoint",
"EndpointID": "23e74716ef54f3fb6537b305176b790b4bc4132dda55f20588d7ce4ca71d7372",
"MacAddress": "02:42:0a:00:02:04",
"IPv4Address": "10.0.2.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4099"
},
"Labels": {},
"Peers": [
{
"Name": "e11b9bd30b31",
"IP": "10.8.0.1"
}
]
}
]
> cat consul/docker-compose.yml
version: '3.1'
services:
  consul:
    image: progrium/consul
    command: -server -bootstrap
    networks:
      - consul
    volumes:
      - consul:/data
    deploy:
      labels:
        - "traefik.enable=false"
networks:
  consul:
    external: true
> cat proxy/docker-compose.yml
version: '3.3'
services:
  proxy:
    image: traefik:alpine
    command: -c /traefik.toml
    networks:
      # We need an external proxy network and the consul network
      # - proxy
      - consul
    ports:
      # Send HTTP and HTTPS traffic to the proxy service
      - 80:80
      - 443:443
    configs:
      - traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Deploy the service to all nodes that match our constraints
      mode: global
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.proxy==true"
      labels:
        # Traefik uses labels to configure routing to your services
        # Change the domain to your own
        - "traefik.frontend.rule=Host:proxy.mcwebsite.net"
        # Route traffic to the web interface hosted on port 8080 in the container
        - "traefik.port=8080"
        # Name the backend (not required here)
        - "traefik.backend=traefik"
        # Manually set entrypoints (not required here)
        - "traefik.frontend.entryPoints=http,https"
configs:
  # Traefik configuration file
  traefik.toml:
    file: ./traefik.toml
# This service will be using two external networks
networks:
  # proxy:
  #   external: true
  consul:
    external: true
There were two optional kernel config options, CONFIG_IP_VS_PROTO_TCP and CONFIG_IP_VS_PROTO_UDP, disabled in my kernel which, you guessed it, enable TCP and UDP load balancing.
I wish I'd checked that about four hours sooner than I did.
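For anyone hitting the same wall, a sketch of how to check those options (assuming your kernel exposes its build config via /proc/config.gz or under /boot):
zgrep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' /proc/config.gz
# or, on distributions that install the config under /boot
grep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' /boot/config-$(uname -r)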
I'm not sure whether this is correct or not, but I have nginx in a container that proxies requests to a Node.js app, and everything works fine in a single docker-compose file with depends_on and all that. However, now I want to separate them into different docker-compose files, and then nginx can't find the upstream nodejsapp:
2017/11/10 15:21:38 [emerg] 1#1: host not found in upstream "nodejsapp" in /etc/nginx/conf.d/default.nodejsapp.conf:8
nginx: [emerg] host not found in upstream "nodejsapp" in /etc/nginx/conf.d/default.nodejsapp.conf:8
My proxy configuration works (in the single-file setup), but in any case here it is:
server {
    listen 80;
    location / {
        proxy_pass http://nodejsapp:4000;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }
    location ~* ^.+\.(atom|bmp|bz2|css|doc|docx|eot|exe|gif|gz|ico|jpeg|jpg|js|mid|midi|mp4|ogg|ogv|otf|pdf|png|ppt|pptx|rar|rss|rtf|svg|svgz|swf|tar|tgz|ttf|txt|wav|woff|xls|xml|zip)$ {
        access_log off;
        log_not_found on;
        expires max;
        proxy_pass http://nodejsapp:4000;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
If I do docker ps I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4d888e6c37d nginx "nginx -g 'daemon ..." 3 minutes ago Restarting (1) 33 seconds ago nginx
5867ac82e5da nodejsapp:3.2.0-dev "pm2-docker proces..." 25 minutes ago Up 25 minutes 80/tcp, 443/tcp, 43554/tcp, 0.0.0.0:4000->4000/tcp nodejsapp
I understood that Docker containers can reach other containers by name. So I inspected the networks I have with docker network ls:
NETWORK ID NAME DRIVER SCOPE
cf9b7ea0a5b7 bridge bridge local
549c48fa592a docker_default bridge local
652ffb4094f0 host host local
412ed3bbfd01 nginx_default bridge local
85a803c70f83 none null local
And when I docker network inspect bridge, I can see the nodejsapp container among others:
"Containers": {
"5867ac82e5dad7642155a7c3df05c37cd83c1be5a0eb49d55cf5325bfaa7ea4d": {
"Name": "nodejsapp",
"EndpointID": "a362d02b083e90bb0acc50a2af8ec5a5f0a9a23a9f0ed2bfb62b1a6c60586fb8",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"b272cd07cc1e5c28632cbaf05858cf373b6a13304f69bee5de73fc57e5a3cf79": {
"Name": "sad_poitras",
"EndpointID": "c5b1e2389f5da8b9acf4f6c7a13f5d6e09411cd960b9e4dcd8ed38fef2982780",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"ffd171939e534a4749a4d215f52c1add353537e91e4a95d7b752775ee2b4c70f": {
"Name": "elated_hawking",
"EndpointID": "eca2709e189d8d5922e1dd5d8c5af14826bd77d5a399b63363e3b07671fb5169",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
And I can see that nginx is not connected to any network; however, it looks to me like it tries to create its own network (nginx_default). Note that I cannot see any container attached to that network, possibly because nginx is failing on start:
[
{
"Name": "nginx_default",
"Id": "412ed3bbfd01336c86473a8d07fe6c444389cac44838552a40d6b7ec7f4c972d",
"Created": "2017-11-10T15:21:35.2016921Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Is there a way to get it to use the bridge network? Am I doing something else wrong?
UPDATE: I set nodejsapp to be on the nginx_default network and then got nginx to start. However, the proxy is not working at all and I just get the default nginx welcome page. These are my compose files after I split them into two separate files:
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nginx:3.1.0-dev
    ports:
      - "10100:80"
    restart: always
    container_name: nginx
My Dockerfile.dev:
FROM nginx:latest
RUN mkdir /tmp/cache
COPY ./nodejsapp/default.dev.nodejsapp.conf /etc/nginx/conf.d/default.nodejsapp.conf
COPY nginx.dev.conf /etc/nginx/nginx.conf
The other docker-compose:
version: '2'
services:
  nodejsapp:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nodejsapp:3.2.0-dev
    ports:
      - "4000:4000"
    restart: always
    container_name: nodejsapp
and the corresponding Dockerfile.dev:
FROM keymetrics/pm2:latest
COPY dist dist/
COPY process.yml .
CMD ["pm2-docker", "process.yml"]
You can use an external network to connect the two containers. But first you must create the network; we'll call it my-network:
docker network create my-network
Then we need to declare the network in each docker-compose file:
networks:
  default:
    external:
      name: my-network
Docker creates two different, isolated default networks if you use two separate docker-compose files. AFAIK, the only way around this is to create a network and "link" the containers to that network.
So, create a network then use it. See the doc here: https://docs.docker.com/compose/networking/#use-a-pre-existing-network
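Putting it together, a sketch of how the two split compose files from the question could share that network (file layout is illustrative):
# created once on the host
docker network create my-network

# compose file for nginx
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nginx:3.1.0-dev
    ports:
      - "10100:80"
    container_name: nginx
networks:
  default:
    external:
      name: my-network

# compose file for the app
version: '2'
services:
  nodejsapp:
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: nodejsapp:3.2.0-dev
    ports:
      - "4000:4000"
    container_name: nodejsapp
networks:
  default:
    external:
      name: my-network
With both containers attached to my-network, the proxy_pass http://nodejsapp:4000; upstream in the nginx config resolves by container name.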