Traefik - Forward to external IP - docker

I've set up Traefik in Docker, and it works as intended for container discovery etc.
But I'm getting tired of having to forward port 80 to my Synology NAS in order to renew Let's Encrypt certificates.
Therefore, I want all traffic on port 80 to be forwarded to my NAS (192.168.1.4) on port 80. Based on this answer, How to get traefik to redirect to specific non-docker port from inside docker, I have added the following to my docker-compose:
labels:
  - "--providers.file=true"
  - "--providers.file.filename=/rules.toml"
volumes:
  - "/opt/docker_volumes/traefik/rules.toml:/rules.toml"
My rules.toml looks like this:
[http.routers]
  # Define a connection between requests and services
  [http.routers.nasweb]
    rule = "Host(`nas.example.com`)"
    entrypoints = ["web"]
    service = "nas"

[http.services]
  # Define how to reach an existing service on our infrastructure
  [http.services.nas.loadBalancer]
    [[http.services.nas.loadBalancer.servers]]
      url = "http://192.168.1.4:80"
However, I don't see any services in the Traefik dashboard, nor does the certificate renew successfully. Can anyone spot any errors in the above?
I'm also completely open to a different solution.
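One thing worth checking: in Traefik v2, --providers.file.filename is static configuration, which Traefik only reads from its own command line (or a traefik.yml), never from container labels. A minimal sketch of the Traefik service with those flags moved under command: (the image tag, service name and entrypoint flag are assumptions, chosen to match the "web" entrypoint used in rules.toml):

services:
  traefik:
    image: traefik:v2.4  # assumed tag
    command:
      # Static configuration: enable the file provider so rules.toml is loaded
      - "--providers.file.filename=/rules.toml"
      # Keep the Docker provider for container discovery
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - "/opt/docker_volumes/traefik/rules.toml:/rules.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"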

Related

Wireguard in Docker container cannot connect to bridged containers forwarded ports

I have the following setup:
A Raspberry Pi with Docker and multiple containers, connected to my router. Some containers are on a macvlan network and receive regular IP addresses in my LAN (e.g. Pihole, Unbound, etc.); some are on bridged networks and expose certain ports (Portainer, nginx, etc.)
Router LAN (192.x.y.0/24)
|Raspi (192.x.y.5)
|Pihole (192.x.y.11)
|Webserver (192.x.y.20)
|Wireguard (192.x.y.13) - (VPN: 10.x.y.0/32, DNS 192.x.y.11) - (Allowed IPs: 192.x.y.0/24)
|
|Portainer (bridged - exposing 8000, 9000, 9443)
|NGINX (bridged - exposing 81, 80, 443)
When I connect a client through WireGuard:
I can access the internet (Pihole on 192.x.y.11 works as DNS - ad blocking works)
I can access Pihole's web UI on 192.x.y.11
I can access my webserver on 192.x.y.20
NOT working:
I cannot access the Portainer UI or NGINX UI on their respective forwarded IP:port combinations, e.g. 192.x.y.5:81 for NGINX
What is missing from my config? I have found nothing that solves this issue - please help!
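For reference, a sketch of how this topology might look in docker-compose terms (the network name, images, parent interface and the concrete 192.168.1.0/24 addresses are assumptions standing in for the masked 192.x.y.0/24 values above):

services:
  wireguard:
    image: linuxserver/wireguard  # assumed image
    cap_add:
      - NET_ADMIN
    networks:
      lan_macvlan:
        ipv4_address: 192.168.1.13  # stands in for 192.x.y.13
  nginx:
    image: nginx  # assumed image
    ports:
      - "80:80"    # published on the host (192.x.y.5) via the default bridge
      - "81:81"
      - "443:443"
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9000:9000"

networks:
  lan_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0  # assumed host interface
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1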

Unable to connect to port 53589 on EC2 instance using Docker and Caddy server

What I'm trying to do
Host a Taskwarrior Server on an AWS EC2 instance, and connect to it via a subdomain (e.g. task.mydomain.dev).
Taskwarrior server operates on port 53589.
Tech involved
AWS EC2: the server (Ubuntu)
Caddy Server: for creating a reverse proxy for each app on the EC2 instance
Docker (docker-compose): for launching apps, including the Caddy Server and the Taskwarrior server
Cloudflare: DNS hosting and SSL certificates
How I've tried to do this
I have:
allowed incoming connections on ports 22, 80, 443 and 53589 in the instance's security policy
given the EC2 instance an Elastic IP
set up the DNS records (task.mydomain.dev is CNAME'd to mydomain.dev; mydomain.dev has an A record pointing to the Elastic IP)
used Caddy to set up a reverse proxy on port 53589 for task.mydomain.dev
set up the Taskwarrior server as per the instructions (i.e. certificates created; user and organisation created; taskrc file updated with cert, auth and server info, along the lines of the sketch below; etc.)
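For context, the client-side taskrc entries referred to above typically look like this (the credentials and paths here are placeholders, not my actual values):

# ~/.taskrc (client side)
taskd.server=task.mydomain.dev:53589
taskd.credentials=myorg/myuser/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
taskd.certificate=~/.task/myuser.cert.pem
taskd.key=~/.task/myuser.key.pem
taskd.ca=~/.task/ca.cert.pem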
Config files
/opt/task/docker-compose.yml
version: '3.3'

services:
  taskd:
    image: connectical/taskd
    restart: always
    volumes:
      - /opt/task:/var/taskd
    ports:
      - 53589:53589

networks:
  default:
    external:
      name: caddy_net
/opt/caddy/docker-compose.yml
version: "3.4"
services:
caddy:
build:
context: .
dockerfile: Dockerfile
container_name: caddy
restart: always
ports:
- 80:80
- 443:443
volumes:
- ./config:/config
- ./data:/data
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
default:
external:
name: caddy_net
/opt/caddy/Caddyfile:
task.mydomain.dev:53589 {
    reverse_proxy taskd:53589
    tls {
        dns cloudflare myCloudflareAPIkey
    }
}
What's actually happening
I'm unable to connect to port 53589 on task.mydomain.dev
Running telnet task.mydomain.dev 53589 times out
I'm unable to connect to port 53589 on mydomain.dev
Running telnet mydomain.dev 53589 times out
I'm able to connect to port 53589 at 127.0.0.1 by ssh'ing into the EC2 instance
Running telnet 127.0.0.1 53589 from the EC2 instance successfully connects
I'm able to connect to port 80 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
I'm able to connect to port 443 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
What I've tried to fix it
Changing the Caddyfile's first line to:
task.mydomain.dev { and task.mydomain.dev:80 {, then connecting to port 80
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
task.mydomain.dev { and task.mydomain.dev:443 {, then connecting to port 443
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
Changing the Caddyfile's second line to reverse_proxy 127.0.0.1:53589, reverse_proxy 0.0.0.0:53589 and reverse_proxy localhost:53589. The same errors occur.
Removing the CNAME records for the subdomain. The same errors occur.
Does anyone have any idea what's happening or could point me in the right direction?
If you are attempting to proxy HTTPS traffic through Cloudflare on a port not on the standard list, you will need to follow one of these options:
1. Set it up as a Cloudflare HTTPS Spectrum app on the required port 53589
2. Set up the record in the Cloudflare DNS tab as grey cloud (in other words, Cloudflare will only perform DNS resolution - meaning you will need to manage the certificates on your side)
3. Change your service so that it listens on one of the standard HTTPS ports listed in the documentation in point (1)
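With option 2, the records become plain DNS resolution; in zone-file form they would look something like this (the IP is a documentation placeholder, not the real Elastic IP):

; Grey-clouded (DNS-only) records: clients connect straight to the instance
mydomain.dev.       300  IN  A      203.0.113.10
task.mydomain.dev.  300  IN  CNAME  mydomain.dev.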

Traefik and Minecraft

I'm trying to set up a Minecraft server on a VPS managed with Traefik.
After I start the Docker container and try to reach the server via the web URL, it fails with a timeout.
If I use the server's IP address, it works.
I think the problem is that when I try to reach the default Minecraft server port (25565) via the domain, the port is not redirected correctly to the container.
I should also mention that my domain is behind Cloudflare, but I don't think that is the problem, because I've tried to bypass it by turning on development mode, with no positive results.
I've added a custom entry point like so:
defaultEntryPoints = ["https", "http", "mc"]

[entryPoints.mc]
  address = ":25565"
then in the labels of my docker-compose I've used these settings:
# map host port
ports:
  - 25565:25565
networks:
  - traefik_proxy
  - default
labels:
  - "traefik.docker.network=traefik_proxy"
  - "traefik.enable=true"
  - "traefik.basic.frontend.rule=Host:mc.myserver.net"
  - "traefik.basic.port=25565"
  - "traefik.frontend.entryPoints=mc"
But it still fails.
What am I doing wrong?
OK, after some research: the problem seems to be that, at the moment, Traefik doesn't handle arbitrary TCP traffic, only HTTP (https://github.com/containous/traefik/issues/10). From what I understand, TCP support will arrive in v2.
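For anyone reading this after Traefik v2 shipped: TCP routers did land there, and a label-based setup along these lines should work (a sketch based on the v2 docs, not tested in this setup; the catch-all HostSNI(`*`) rule is needed because plain Minecraft traffic carries no TLS SNI to match on):

labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.mc.entrypoints=mc"
  - "traefik.tcp.routers.mc.rule=HostSNI(`*`)"
  - "traefik.tcp.services.mc.loadbalancer.server.port=25565"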
In the meantime, I've managed to make it work just by altering the Cloudflare settings, adding an SRV record as follows:
Name - _minecraft
Value - SRV 1 1 25565
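In standard zone-file notation, Cloudflare's SRV form expands to a record like the following (the target hostname is a placeholder; the answer above doesn't include it):

; _service._proto.name            TTL  class SRV priority weight port  target
_minecraft._tcp.mc.myserver.net.  300  IN    SRV  1        1      25565 mc.myserver.net.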

Docker Nginx-Proxy Container used for Port 80 Forwarding to other container based on Domain

I am trying to set up a Docker Nginx Proxy server to forward incoming requests to their corresponding Docker Container on 192.168.1.120 or to the Router's Web-Admin at 192.168.1.1
So right now I am in a bit of a pickle, but I need to set this up regardless. Here is my current setup:
Router 192.168.1.1 (Web Admin + Port Forwarding)
Server1 LAMP - (Router Forwards -> port 80 for LAMP Server)
Server2 Docker - (Router Forwards -> 20 SSH, 8080, 9000 Docker Admin)
So I have to configure port forwarding through my router's web interface, which is accessible on port 8080. The issue is that I have since moved to Florida, having stupidly added a port-forwarding rule on 8080 that points to the Shipyard Docker manager, where I eventually planned to install an Nginx-Proxy forwarding container. I never got that forwarding container working, and I eventually switched to Portainer on port 9000, the only other port I had forwarded before I lost access to my router's web interface - and with it the ability to forward ports.
The downside is that I cannot access my router's web interface. The upside is that I still have to implement an Nginx-Proxy port-forwarding container anyway, to set up dynamic port 80 forwarding to different Docker containers based on the URL.
So I want to move my LAMP server into a new Docker container, and I will also have a few other Rails Docker containers - but I need to configure a Docker container that forwards requests to different servers based on the domain name. I assume I need two proxies running - one for port 80 forwarding and one for port 8080 forwarding - which is not a problem.
I have not been able to configure Nginx to forward an incoming request for the domain name I have pointing at my server (my.domain.com below) on to my router at 192.168.1.1. Any help / suggestions on how to configure my Nginx-Proxy Docker container to forward this correctly, or on what I should set up to forward incoming requests to a web server dynamically based on the URL? I can install any Docker containers I need for this.
My current config, /etc/nginx/nginx.conf, running in an Nginx-Proxy Docker container on port 8080 (Google for the nginx-proxy Docker image):
# My Nginx config to forward my.domain.com
http {
    resolver 127.0.0.1;
    access_log /var/log/nginx/access.log;

    server {
        listen 8080;
        server_name my.domain.com;
        return 301 http://192.168.1.1:8080$request_uri;
    }
}
I get these errors:
[error] 55#55: *2274 datacenter.URL.com could not be resolved (110: Operation timed out), client: 166.172.189.185, server: datacenter.URL.com, request: "GET / HTTP/1.1", host: "datacenter.URL.com:8080"
[error] 55#55: recv() failed (111: Connection refused) while resolving, resolver: 192.168.1.1:8080
EDIT: I just noticed that I can only have one Docker container listening on a given host port at a time. So I need to figure out how to forward requests to different servers + ports based on the domain name: each URL forwarding rule entry needs to be able to go to a different server, each running on a different port.
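Name-based virtual hosts are the usual way to do this: a single nginx listens on the port and proxy_passes to a different backend per server_name. A minimal sketch (the domains and backend addresses are placeholders, not taken from the question):

events {}

http {
    server {
        listen 80;
        server_name lamp.my.domain.com;            # placeholder domain
        location / {
            proxy_pass http://192.168.1.120:8081;  # placeholder LAMP backend
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name router.my.domain.com;          # placeholder domain
        location / {
            proxy_pass http://192.168.1.1:8080;    # router web admin
            proxy_set_header Host $host;
        }
    }
}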

How to dockerize two applications talking to each other via an HTTP server?

TL;DR
How can we set up a docker-compose environment so that we can reach a container under multiple, custom-defined aliases? (Or any alternative that solves our problem in another fashion.)
Existing setup
We have two applications† (nodejs servers), each behind an HTTP reverse proxy (Nginx), that need to talk to each other. On localhost, configuring this is easy:
Add /etc/hosts entries for ServerA and ServerB:
127.0.0.1 server-a.testing
127.0.0.1 server-b.testing
Run ServerA on port e.g. 2001 and ServerB on port 2002
Configure two virtual hosts, reverse proxying to ServerA and ServerB:
# Forward all traffic for server-a.testing to localhost:2001
server {
    listen 80;
    server_name server-a.testing;
    location / {
        proxy_pass http://localhost:2001;
    }
}

# Forward all traffic for server-b.testing to localhost:2002
server {
    listen 80;
    server_name server-b.testing;
    location / {
        proxy_pass http://localhost:2002;
    }
}
This setup is great for testing: both applications can communicate with each other in a way that is very close to the production environment, e.g. request('https://server-b.testing', fn); and we can test how the HTTP server configuration interacts with our apps (e.g. TLS config, CORS headers, HTTP/2 proxying).
Dockerize all the things!
We now want to move this setup to docker and docker-compose. The docker-compose.yaml that would work in theory is this:
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - server-a
    - server-b
server-a:
  build: serverA
  ports:
    - "2001:2001"
  links:
    - nginx:server-b.testing
server-b:
  build: serverB
  ports:
    - "2002:2002"
  links:
    - nginx:server-a.testing
So when ServerA addresses http://server-b.testing, it actually reaches the Nginx, which reverse proxies to ServerB. Unfortunately, circular dependencies are not possible with links. There are three typical solutions to this problem:
use ambassadors
use nameservers
use the brand-new networking (--x-networking)
None of these works for us because, for the virtual hosting to work, we need to be able to address the Nginx container under the names server-a.testing and server-b.testing. What can we do?
(†) Actually it's a little bit more complicated – four components and links – but that shouldn't make any difference to the solution:
testClient (-> Nginx) -> ServerA,
testClient (-> Nginx) -> ServerB,
ServerA (-> Nginx) -> ServerB,
testClient (-> Nginx) -> ServerC,
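For what it's worth, later Compose file formats (v2 and up) support exactly this via network aliases: a container can be given extra names on a user-defined network, resolvable by every other container on that network. A sketch using the build contexts from the question (the version key is an assumption):

version: "2"

services:
  nginx:
    build: nginx
    ports:
      - "80:80"
    networks:
      default:
        # Both virtual-host names resolve to this nginx container
        # from server-a, server-b and any other service on the network
        aliases:
          - server-a.testing
          - server-b.testing
  server-a:
    build: serverA
  server-b:
    build: serverB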
Try this:
Link your server-a and server-b containers to nginx with --link server-a:server-a --link server-b:server-b
Update the nginx conf file with:
location /sa {
    proxy_pass http://server-a:2001;
}
location /sb {
    proxy_pass http://server-b:2002;
}
When you link two containers, Docker adds "container_name container_ip" to the /etc/hosts file of the linking container. So, in this case, server-a and server-b are resolved to their respective container IPs via the /etc/hosts file.
And you can access them from http://localhost/sa or http://localhost/sb
