I have been trying to set up Docker Swarm with Traefik v2.x for some time and have searched far and wide on Google, but I still cannot connect to my reverse proxy from my outside domain.
My setup is as follows:
Hardware (from outer to inner):
Technicolor MediaAccess TG799vac Xtream (modem)
|
Unifi Security Gateway (Unifi Controller is a Raspberry Pi)
|
x86_64 server where my (currently) single docker swarm node is running
Both the domain and the wildcard domain are pointing at my system, and if I run a single container with port 80 exposed, it works from the domain. As soon as I set it up for Traefik, I can't reach my containers from outside, but my test container can be reached with curl commands from inside my network, even if I curl the USG.
On the server I have installed Docker + Docker Swarm, and I am running the following two stacks:
version: '3'
services:
  reverse-proxy:
    image: traefik:v2.3.4
    command:
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-public"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-public:
    external: true
and
version: '3'
services:
  helloworld:
    image: nginx
    networks:
      - traefik-public
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.helloworld.rule=Host(`test.mydomain.com`)"
        - "traefik.http.routers.helloworld.entrypoints=web"
        - "traefik.http.services.helloworld.loadbalancer.server.port=80"
networks:
  traefik-public:
    external: true
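For completeness: both stacks declare traefik-public as external, so the network has to exist before deploying. Roughly like this (the stack file names here are illustrative):
docker network create --driver=overlay traefik-public
docker stack deploy -c traefik-stack.yml traefik
docker stack deploy -c helloworld-stack.yml helloworld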
A little update: it is possible to access a regular container with port 80 exposed on my domain, but as soon as I spin the container up with Docker Swarm, it is no longer exposed to the internet.
The network is created as follows, which I also used for a regular container:
docker network create -d overlay --attachable test
and the yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    networks:
      - test
networks:
  test:
    external: true
So the above does not work but the following is visible on my domain from the outside:
docker run -d -p 80:80 nginx
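Two checks I can run to narrow this down: whether the swarm actually published the ingress port, and whether Traefik routes the Host header locally (both standard commands):
docker service ls
curl -H "Host: test.mydomain.com" http://127.0.0.1/
The PORTS column of docker service ls should show *:80->80/tcp on the proxy service, and the curl should return the nginx welcome page if Traefik's routing itself is fine.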
Related
I have been trying to make the combination of Docker + Consul + Traefik work for the last several days, and it doesn't seem to be working. I am at a point where I just don't know what I am missing in my configuration.
My docker host IP address is: 192.168.30.12
I created a bridge network called consul, which has a subnet of 172.28.0.5/16.
Here is my docker compose for Consul (for simplicity, I am running just one Consul server so that I can debug the issue):
services:
  consul-server:
    container_name: consul-server-bootstrap
    image: consul:latest
    networks:
      - consul
    ports:
      - 8400:8400
      - 8500:8500
      - 53:8600
      - 53:8600/udp
    command: agent -server -bootstrap -ui -node=consul-server -client=0.0.0.0 -advertise=192.168.30.12 -recursor=8.8.8.8
    restart: unless-stopped
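Before wiring up registrator, the agent can be sanity-checked through its standard HTTP API; this should return the leader address:
curl -s http://192.168.30.12:8500/v1/status/leader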
I am using registrator to register services with Consul. Here is the docker compose for that service:
registrator:
  image: gliderlabs/registrator:latest
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  container_name: consul-registrator
  restart: unless-stopped
  command: consul://consul-server-bootstrap:8500
  networks:
    - consul
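Note: by default registrator registers the host-mapped address and port. It also has an -internal flag that registers the container-internal IP and exposed port instead; whether that is wanted depends on where Traefik connects from. A hedged variant of the command line:
command: -internal consul://consul-server-bootstrap:8500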
Here is my Traefik docker compose section:
reverse-proxy:
  container_name: traefik
  image: traefik:v2.9
  networks:
    - consul
  command: --api.insecure=true --providers.consulcatalog=true --providers.consulcatalog.prefix=traefik --providers.consulcatalog.endpoint.address=http://192.168.30.12:8500
  ports:
    - "80:80"
    - "8080:8080"
Here is the whoami container that I am registering with Consul:
whoami:
  # A container that exposes an API to show its IP address
  image: traefik/whoami
  networks:
    - consul
  restart: unless-stopped
  environment:
    - SERVICE_TAGS=whoami
    - SERVICE_NAME=whoami
    - SERVICE_80_ID=whoami
  ports:
    - "80"
  labels:
    - traefik.enable=true
    - traefik.backend=whoami
    - traefik.port=80
    - traefik.default.protocol=http
    - traefik.http.routers.whoami.rule=Host(`whoami`)
When I visit http://192.168.30.12:8500, I can see that whoami is registered with Consul.
I see whoami on the Traefik dashboard as well when I visit http://192.168.30.12:8080.
I also ran dig @127.0.0.1 whoami.service.consul on my Docker host, and that discovers the service just fine as well.
I made a host entry on my other computer as seen below
192.168.30.12 whoami
When I try to visit http://whoami in browser, I get "Bad Gateway" error.
I want to register new containers with Consul using registrator, then add them to the Traefik load balancer using service tags, and then consume those services from outside my Docker host.
Can someone please point out where I am making a mistake? I have spent several days trying to make this work.
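One thing worth checking for a Bad Gateway is what address and port Consul actually holds for whoami, since that is where Traefik forwards the request; the standard catalog API shows it:
curl -s http://192.168.30.12:8500/v1/catalog/service/whoami
If ServiceAddress/ServicePort point at something the Traefik container cannot reach (for example a random host-mapped port), that would explain the 502.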
I need to resolve a container name to its IP address from the Docker host.
The reason for this is that I need a container to run on the host network, but it must also be able to resolve the container "backend" that it connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker Ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
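For reference, the Docker engine itself can report a container's address from the host, which may already be enough as a workaround (container name taken from the compose file above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend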
I want to switch from using the docker run command to a docker-compose file for my Nextcloud instance that runs behind a reverse proxy (jwilder/nginx-proxy).
This is the run command I used to use:
sudo docker run -d -p 8080:80 --expose 80 --expose 443 -e VIRTUAL_HOST=nextcloud.example.com -v nextcloud:/var/www/html --restart=always --name=nextcloud nextcloud:24.0.8
I installed MariaDB later inside the container so that I didn't have to struggle with networking. Also, I use port 8080 only on my internal network for fast uploads and downloads.
This worked quite well, but now I want to create a similar environment with docker-compose:
version: '3.8'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-super-strong-password
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:24.0.8
    restart: always
    ports:
      - 8080:80
    expose:
      - 80
      - 443
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_MEMORY_LIMIT=1G
      - PHP_UPLOAD_LIMIT=128M
      - VIRTUAL_HOST=nextcloud.example.com
The containers start successfully and I can use Nextcloud on my internal network, but I cannot reach it from my domain. Instead, I get a 502 Bad Gateway. The VIRTUAL_HOST mapping itself seems to work, since otherwise I would get a 503 Service Temporarily Unavailable.
I think exposing the ports 80 and 443 doesn't work.
I've tried to add a proxy network:
networks:
  proxy:
    driver: bridge
    external: true
and added
networks:
  - default
to the db service and
networks:
  - default
  - proxy
to the app service.
That didn't fix the problem. Does anyone have an idea what I can try next?
I've tried different ways to expose the ports and tried to create different networks.
Never mind, I found the problem.
Instead of simply creating a network named proxy, I had to create a new jwilder reverse-proxy service via docker-compose with a name, for example myreverseproxy. In each service I want to make public, I needed to reference this network:
networks:
  - default
  - myreverseproxy
Also, I had to declare the name in the top-level networks section:
networks:
  myreverseproxy:
    external: true
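For clarity, a minimal sketch of what I mean by the proxy service (names are illustrative; the docker.sock mount follows the documented jwilder/nginx-proxy setup, and the network is created beforehand with docker network create myreverseproxy):
version: '3.8'
services:
  myreverseproxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - myreverseproxy
networks:
  myreverseproxy:
    external: true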
I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of Elasticsearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for Elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access elasticsearch service i.e., http://elasticsearch:9200
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be accessed with their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
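As a quick check from inside the application container (assuming curl is available in that image), the service name should answer directly:
docker-compose exec serverapplication curl -s http://elasticsearch:9200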
I set up a cluster of three Raspberry Pi 3s running Raspbian Stretch Lite and Docker 18.06.1-ce. Swarm is initialized and working fine so far. I read the docs on setting up Traefik on Docker Swarm (1, 2), but I can't get the whoami container proxied through Traefik.
Here's my stack.yml:
version: '3'
networks:
  proxy:
    external: true
services:
  traefik:
    image: traefik
    command: --api --docker --docker.swarmMode --docker.watch
    deploy:
      placement:
        constraints:
          - node.role == manager
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    ports:
      - "80:80"
      - "443:443"
      - "8002:8080"
  whoami:
    image: stefanscherer/whoami
    networks:
      - proxy
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.docker.network=proxy"
        - "traefik.frontend.rule=Path:/whoami"
Stack is running:
$ docker service ls
ID            NAME             MODE        REPLICAS  IMAGE                        PORTS
tx0npbsb3t0k  traefik_traefik  replicated  1/1       traefik:latest               *:80->80/tcp, *:443->443/tcp, *:8002->8080/tcp
7fqaew880p9p  traefik_whoami   replicated  1/1       stefanscherer/whoami:latest
The proxy network is set up with the overlay driver and the attachable flag.
The Traefik dashboard is accessible and shows the whoami frontend and backend. But opening http://pinode1/whoami/ in the browser, I get a 502 Bad Gateway error (with or without the trailing slash).
I have Traefik running and serving whoami successfully on another, non-swarm machine, so I wonder what's wrong in the swarm setup.
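Things I can still check: whether the whoami service actually got a VIP on the proxy overlay, and which containers a manager node sees attached to that network (both standard docker CLI calls):
docker service inspect -f '{{json .Endpoint.VirtualIPs}}' traefik_whoami
docker network inspect proxy -f '{{range .Containers}}{{.Name}} {{end}}'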