Traefik discovers Consul service but it cannot be reached from outside - Docker

I have been trying to get a combination of Docker + Consul + Traefik working for the last several days, and it doesn't seem to work. I am at the point where I just don't know what I am missing in my configuration.
My Docker host IP address is: 192.168.30.12
I created a bridge network called consul which has a subnet of 172.28.0.5/16.
Here is my Docker Compose file for Consul (for simplicity, I am running just one Consul server so that I can debug the issue):
services:
  consul-server:
    container_name: consul-server-bootstrap
    image: consul:latest
    networks:
      - consul
    ports:
      - 8400:8400
      - 8500:8500
      - 53:8600
      - 53:8600/udp
    command: agent -server -bootstrap -ui -node=consul-server -client=0.0.0.0 -advertise=192.168.30.12 -recursor=8.8.8.8
    restart: unless-stopped
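As a quick sanity check before wiring anything else in (a suggested step, not part of the original post), you can confirm the single server is up and has elected itself leader:

docker exec consul-server-bootstrap consul members
# Expect one row for consul-server with status "alive"
curl http://192.168.30.12:8500/v1/status/leader
# Expect a non-empty leader address such as "192.168.30.12:8300"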
I am using Registrator to register services with Consul. Here is the Docker Compose section for that service:
registrator:
  image: gliderlabs/registrator:latest
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  container_name: consul-registrator
  restart: unless-stopped
  command: consul://consul-server-bootstrap:8500
  networks:
    - consul
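To confirm that Registrator is actually writing services into the catalog (again a suggested check, assuming the ports above are reachable), query Consul's HTTP API from the host:

curl http://192.168.30.12:8500/v1/catalog/services
# "whoami" should appear here once the container below is started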
Here is my Traefik Docker Compose section:
reverse-proxy:
  container_name: traefik
  image: traefik:v2.9
  networks:
    - consul
  command: --api.insecure=true --providers.consulcatalog=true --providers.consulcatalog.prefix=traefik --providers.consulcatalog.endpoint.address=http://192.168.30.12:8500
  ports:
    - "80:80"
    - "8080:8080"
Here is the whoami container that I am registering with Consul:
whoami:
  # A container that exposes an API to show its IP address
  image: traefik/whoami
  networks:
    - consul
  restart: unless-stopped
  environment:
    - SERVICE_TAGS=whoami
    - SERVICE_NAME=whoami
    - SERVICE_80_ID=whoami
  ports:
    - "80"
  labels:
    - traefik.enable=true
    - traefik.backend=whoami
    - traefik.port=80
    - traefik.default.protocol=http
    - traefik.http.routers.whoami.rule=Host(`whoami`)
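A note on these labels: traefik.backend, traefik.port and traefik.default.protocol are Traefik v1 syntax, while the image above is traefik:v2.9. The consulcatalog provider in v2 also reads Consul tags rather than Docker labels, so router configuration has to reach Consul as tags; with Registrator that would go through SERVICE_TAGS, roughly like this (a sketch, not tested against this exact setup):

environment:
  - SERVICE_NAME=whoami
  # Comma-separated tags that Registrator pushes into Consul; Traefik v2 picks
  # them up because of --providers.consulcatalog.prefix=traefik
  - SERVICE_TAGS=traefik.enable=true,traefik.http.routers.whoami.rule=Host(`whoami`)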
When I visit http://192.168.30.12:8500, I see that whoami is registered with Consul.
I also see whoami on the Traefik dashboard when I visit http://192.168.30.12:8080.
I also ran dig @127.0.0.1 whoami.service.consul on my Docker host, and that discovers the service just fine as well.
I made a hosts entry on my other computer:
192.168.30.12 whoami
When I try to visit http://whoami in the browser, I get a "Bad Gateway" error.
I want to register new containers with Consul using Registrator, then add them to the Traefik load balancer using service tags, and then consume those services from outside my Docker host.
Can someone please point out where I am making a mistake? I have spent several days trying to make this work.
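"Bad Gateway" from Traefik usually means the router matched but the backend address Traefik received from the catalog is unreachable from the Traefik container. A useful check (values taken from the setup above) is to inspect exactly what address and port whoami was registered with:

curl http://192.168.30.12:8500/v1/catalog/service/whoami
# Compare ServiceAddress/ServicePort with an address the Traefik container can reach

Since whoami only publishes an ephemeral host port ("80" with no fixed host port), Registrator may be registering an address/port combination that Traefik cannot connect to.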

Related

Traefik + cloudflared with full strict tunnel on docker

I have a VM which runs multiple containers, all attached to one Docker network:
Traefik (as reverse proxy & load balancer)
cloudflared as tunnel
whoami (for testing purposes)
and some containers like photoprism, nextcloud, node-red, ...
I generated an origin certificate via Cloudflare, which has been added to Traefik.
In Cloudflare, I have a subdomain which points via the tunnel to https://172.16.10.11 (the IP of the VM). This causes an insecure connection (IP SAN applied -> I don't think this is possible on a private IP?). When I disable TLS verification in Cloudflare, it works. However, I am trying to set this up properly. Next, I tried pointing my domain towards https://localhost. The cloudflared service running in a container cannot reach any other services, as these are located in other containers.
I was thinking: what if I ran the cloudflared service within the Traefik container? I believe I could then reach Traefik via localhost.
Do you have any advice on how to achieve a secure tunnel with certificate verification? Or is this not realistic when self-hosting?
Current docker compose:
version: '3'
services:
  traefik:
    image: traefik:latest
    command:
      - --log.level=debug
      - --api.insecure=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --serverstransport.insecureskipverify
      - --providers.file.filename=/etc/traefik/dynamic_conf.yml
      - --providers.file.watch=true
    ports:
      - "8080:8080"
      - "443:443"
      - "80:80"
    networks:
      - proxy_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - traefik-data:/etc/traefik
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy_network
      - traefik.http.routers.traefik.rule=Host(`${DOMAINNAME_TRAEFIK}`)
      - traefik.http.routers.traefik.entrypoints=web
      - traefik.http.routers.traefik.service=traefik
      - traefik.http.services.traefik.loadbalancer.server.port=8080
  tunnel:
    container_name: cloudflared-tunnel
    image: cloudflare/cloudflared
    #restart: unless-stopped
    networks:
      - proxy_network
    command: tunnel --no-autoupdate run --token ${CLOUDFLARED_TOKEN}
  whoami:
    image: traefik/whoami
    container_name: whoami1
    command:
      # It tells whoami to start listening on 2000 instead of 80
      - --port=2000
      - --name=iamfoo
    networks:
      - proxy_network
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`${DOMAINNAME}`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls=true
      - traefik.http.routers.whoami.service=whoami
      - traefik.http.services.whoami.loadbalancer.server.port=2000
volumes:
  traefik-data:
    driver: local
networks:
  proxy_network:
    name: proxy_network
    external: true
I expect a secure tunnel solution, and I want to make sure that this architecture is set up in a good way.
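One commonly suggested direction (a sketch under assumptions; token-run tunnels like the one above are normally configured in the Cloudflare dashboard rather than via a local file): point the tunnel's origin at the Traefik container by its service name on the shared proxy_network, and tell cloudflared which server name to verify so the origin certificate's SAN can match:

# Hypothetical cloudflared config.yml; hostname and tunnel ID are placeholders
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/credentials.json
ingress:
  - hostname: whoami.example.com
    service: https://traefik:443            # container name resolves on proxy_network
    originRequest:
      originServerName: whoami.example.com  # name to verify on the origin certificate
  - service: http_status:404                # required catch-all rule

This avoids both localhost (a different network namespace) and the VM's private IP (which an origin certificate typically will not cover).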

docker host: use docker dns to resolve container name from host network

I need to resolve a container name to its IP address from the Docker host.
The reason for this is that I need a container to run on the host network, but it must also be able to resolve the container "backend" which it connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker Ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I have now is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network, but must also be able to resolve the container name "backend" to the Docker-internal IP address.
NOTE: Perhaps this is better suited for Super User or similar?
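Docker's embedded DNS server (127.0.0.11) only answers queries that originate inside containers attached to a user-defined network, which is why neither dig invocation works from the host. A common workaround sketch (container name as in the compose file above) is to ask the Docker engine directly and inject the result:

# Resolve the backend container's IP address from the host
BACKEND_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend)
echo "$BACKEND_IP"

The connector could then be started with BACKEND_HOST set to that address instead of the container name.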

docker-compose Nextcloud instance behind reverse proxy: Bad Gateway 502

I want to switch from using the docker run command to a docker-compose file for my Nextcloud instance that runs behind a reverse proxy (jwilder/nginx-proxy).
This is the run command I used to use:
sudo docker run -d -p 8080:80 --expose 80 --expose 443 -e VIRTUAL_HOST=nextcloud.example.com -v nextcloud:/var/www/html --restart=always --name=nextcloud nextcloud:24.0.8
I installed MariaDB later inside the container so that I didn't have to struggle with networking. Also, I use port 8080 only in my internal network for fast uploading and downloading.
This worked quite well, but now I want to create a similar environment with docker-compose:
version: '3.8'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-super-strong-password
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:24.0.8
    restart: always
    ports:
      - 8080:80
    expose:
      - 80
      - 443
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_MEMORY_LIMIT=1G
      - PHP_UPLOAD_LIMIT=128M
      - VIRTUAL_HOST=nextcloud.example.com
The containers start successfully and I can use Nextcloud in my internal network, but I cannot reach it from my domain. Instead I get a 502 Bad Gateway. The VIRTUAL_HOST redirection seems to work, since otherwise I would get a 503 Service Temporarily Unavailable.
I think exposing ports 80 and 443 doesn't work.
I've tried to add a proxy network:
networks:
  proxy:
    driver: bridge
    external: true
and added
networks:
  - default
to the db service and
networks:
  - default
  - proxy
to the app service.
That didn't fix the problem. Does anyone have an idea what I can try next?
I've tried different ways to expose the ports and tried to create different networks.
Never mind, I found the problem.
Instead of simply creating a network named proxy, I had to create a new jwilder reverse-proxy service via docker-compose with a name, for example myreverseproxy. In each service I want to make public, I needed to reference this proxy network:
networks:
  - default
  - myreverseproxy
Also, I had to use the name in the networks section:
networks:
  myreverseproxy:
    external: true
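A minimal end-to-end sketch of what that fix describes, following the usual jwilder/nginx-proxy pattern (image and names assumed, not taken from the original post):

services:
  myreverseproxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # nginx-proxy watches the Docker socket to generate its routing config
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - myreverseproxy
  app:
    image: nextcloud:24.0.8
    environment:
      - VIRTUAL_HOST=nextcloud.example.com
    networks:
      - default
      - myreverseproxy
networks:
  myreverseproxy:
    external: true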

Traefik Docker Swarm Unifi setup is not reachable from domain

I have been trying to set up Docker Swarm with Traefik:v2.x for some time and have searched far and wide on Google, but I still cannot connect to my reverse proxy from my outside domain.
My setup is as following:
Hardware (from outer to inner):
Technicolor MediaAccess TG799vac Xtream (modem)
|
Unifi Security Gateway (Unifi Controller is a Raspberry Pi)
|
x86_64 server where my (currently) single docker swarm node is running
Both the domain and the wildcard domain are pointing at my system, and if I run a single container with port 80 exposed, it works from the domain. As soon as I set it up with Traefik, I can't reach my containers from outside, but my test container can be reached with curl commands from inside my network, even if I curl the USG.
On the server I have installed Docker + Docker Swarm and am running the following two stacks:
version: '3'
services:
  reverse-proxy:
    image: traefik:v2.3.4
    command:
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-public"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-public:
    external: true
and
version: '3'
services:
  helloworld:
    image: nginx
    networks:
      - traefik-public
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.helloworld.rule=Host(`test.mydomain.com`)"
        - "traefik.http.routers.helloworld.entrypoints=web"
        - "traefik.http.services.helloworld.loadbalancer.server.port=80"
networks:
  traefik-public:
    external: true
A little update: it is possible to access a regular container with port 80 exposed on my domain, but as soon as I spin the container up with Docker Swarm, it is no longer exposed to the internet.
The network is created as follows, which I also used for the regular container:
docker network create -d overlay --attachable test
and the yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    networks:
      - test-swarm-network
networks:
  test:
    external: true
So the above does not work, but the following is visible on my domain from the outside:
docker run -d -p 80:80 nginx
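A generic way to narrow this down (a debugging suggestion, not from the original post) is to ask Traefik directly on the published port with the expected Host header, first on the node itself and then from another machine on the LAN, before testing from the internet:

curl -H "Host: test.mydomain.com" http://127.0.0.1/
# 200 here but a timeout from outside points at the USG port forward / hairpin NAT;
# 404 here points at the Traefik router/service labels instead

The difference between docker run -p 80:80 and a Swarm service is that the latter publishes through the ingress routing mesh, so it is also worth confirming the port is actually listening on the node (e.g. with ss -tlnp).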

Access docker ports from a container inside another container at localhost

I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the server application, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for Elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports.
Instead:
put them in the same network
give the containers a name
access Elasticsearch through its container name, which Docker automatically resolves to the current IP of your Elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the Elasticsearch service, i.e., http://elasticsearch:9200.
Your serverapplication and elasticsearch run in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
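A quick way to confirm the resolution works (assuming curl is available inside the serverapplication image):

docker exec serverapplication curl -s http://elasticsearch:9200
# Should print Elasticsearch's JSON banner instead of "connection refused"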
