Why can't I reach my traefik dashboard via HTTPS? - docker

I am trying to run a traefik container on my docker swarm cluster. Because we use TLS-encrypted communication, I want the traefik dashboard to be available via HTTPS.
In my browser, I try to access traefik through the docker swarm manager's hostname at https://my.docker.manager, and for that purpose I mounted my host's certificate and key into the traefik service.
When I open https://my.docker.manager in my browser, I get a timeout.
When I curl https://my.docker.manager directly on the host (my.docker.manager), I get HTTP code 403 as the response.
My traefik config:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/traefik/certs/my.docker.manager.crt"
      keyFile = "/etc/traefik/certs/my.docker.manager.key"
      [entryPoints.https.tls.defaultCertificate]
      certFile = "/etc/traefik/certs/my.docker.manager.crt"
      keyFile = "/etc/traefik/certs/my.docker.manager.key"

[api]
address = ":8080"

[docker]
watch = true
swarmMode = true
My traefik compose file:
version: "3.7"

services:
  traefik:
    image: traefik
    ports:
      - 80:80
      - 443:443
    networks:
      - devops-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/docker-data/secrets/certs/:/etc/traefik/certs/
    configs:
      - source: traefik.conf
        target: /etc/traefik/traefik.toml
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.docker.network=devops-net"
        - "traefik.frontend.rule=Host:my.docker.manager"
        - "traefik.port=8080"

networks:
  devops-net:
    driver: overlay
    external: true

configs:
  traefik.conf:
    external: true
As described in this article (https://www.digitalocean.com/community/tutorials/how-to-use-traefik-as-a-reverse-proxy-for-docker-containers-on-ubuntu-16-04), I expected to see the traefik dashboard when I open https://my.docker.manager in my browser. But I only get a timeout, and curl https://my.docker.manager returns HTTP code 403. I followed the mentioned article except for two differences:
1) I did not configure credentials
2) I used my host's own certificates instead of Let's Encrypt
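To separate a certificate problem from a routing problem, one simple check (my suggestion, not from the article) is to compare verbose curl calls from the host and from the client:

# on the docker host: the TLS handshake completes and traefik answers 403
curl -vk https://my.docker.manager
# from the client: the connection times out before any handshake,
# which points at routing/NAT rather than TLS
curl -vk --connect-timeout 5 https://my.docker.manager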

In the meantime I found the reason for my problem (though I am not sure what the best solution is). In case anyone is interested, I will try to explain it.
I have a swarm of three nodes in the network prod.company.de.
My client is in another network, intranet.company.de.
My swarm manager is addressed as docker-manager.prod.company.de. On this host I have deployed the traefik service, which I want to access via https://docker-manager.prod.company.de (this is port 443 and, because of the traefik config, forwarded to the traefik dashboard on 8080 inside the container).
If I trace my network traffic, I can see that the HTTPS request (https://docker-manager.prod.company.de) from my client browser reaches the server and is then forwarded to the docker_gwbridge address 172.18.0.2. But the answer does not find its way back to my client because of the docker NAT configuration.
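A way to observe this trace yourself (a sketch; eth0 as the public interface is my assumption):

# watch the request arrive on the host's public interface
tcpdump -ni eth0 port 443
# ...and get forwarded toward the container via the gateway bridge
tcpdump -ni docker_gwbridge port 443

On the forward path the packets show up on both interfaces; the reply appears on docker_gwbridge but never makes it back out on eth0.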
iptables -t nat -L -v
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
8 504 MASQUERADE all -- any docker_gwbridge anywhere anywhere ADDRTYPE match src-type LOCAL
0 0 MASQUERADE all -- any !docker0 172.17.0.0/16 anywhere
0 0 MASQUERADE all -- any !docker_gwbridge 172.18.0.0/16 anywhere
MASQUERADE means that the source IP of the request is replaced with the bridge IP (in my case 172.18.0.1), so that answers are routed back to this IP. In the case above, the rule "8 504 MASQUERADE all -- any docker_gwbridge anywhere anywhere" would do this, but it is limited to requests from local addresses by "ADDRTYPE match src-type LOCAL". That means a browser on the docker host itself would work, but connections from my client do not, because the answer never finds its way back to my client address.
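The addrtype restriction is easier to see in iptables-save notation, where the first rule below corresponds to the first -L line above:

iptables -t nat -S POSTROUTING
# -A POSTROUTING -o docker_gwbridge -m addrtype --src-type LOCAL -j MASQUERADE
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# -A POSTROUTING -s 172.18.0.0/16 ! -o docker_gwbridge -j MASQUERADE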
As a workaround, I added one more NAT rule to my iptables:
iptables -t nat -A POSTROUTING -o docker_gwbridge -j MASQUERADE
which results in
1 52 MASQUERADE all -- any docker_gwbridge anywhere anywhere
After that, I can see the Traefik dashboard when opening https://docker-manager.prod.company.de in the browser on my client.
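Side note: a rule added with iptables -A like this does not survive a reboot. If you keep the workaround, it needs to be persisted, for example on Debian/Ubuntu (assuming that distro family):

apt-get install iptables-persistent
netfilter-persistent save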
But I do not understand why I have to do this: I found nothing about it in any documentation, and I don't think my use case is particularly rare. That's why I would be happy if someone could have a look at this post and either check whether I did something else wrong or explain to me why such steps are necessary to get a standard use case working.
Kind regards

Related

Putting a QEMU guest onto a network created by docker compose

I am trying to emulate a hardware configuration between two Intel NUC servers and a Raspberry Pi server using docker containers and docker compose. Since the latter is ARM and the test host is x86, I decided to run the RPi image within QEMU, encapsulated in one of the docker containers. The docker compose file is fairly simple:
services:
  rpi:
    build: ./rpi
    networks:
      iot:
        ipv4_address: 192.168.1.3
      ether:
        ipv4_address: 192.168.2.3
  nuc1:
    build: ./nuc
    networks:
      - "iot"
  nuc2:
    build: ./nuc
    networks:
      - "ether"

networks:
  iot:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
  ether:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
You'll notice that there are two separate networks: I have two VLANs I am trying to emulate, and the RPi server is a part of both, hence the docker container is a part of both. I have QEMU running fine in the rpi container, and the docker containers themselves are behaving on the network as intended.
The problem I am having is setting up the networking so that the QEMU guest appears to be just another node on the network at the addresses 192.168.1.3/192.168.2.3. My requirements are:
The QEMU guest knows that its own IP on each network is 192.168.1.3 and 192.168.2.3 respectively.
The other NUC docker containers can reach the QEMU guest at those IP addresses (i.e. I don't want to give the docker container running the QEMU image its own IP address, have the other containers hit that address, and then NAT it to the QEMU address).
I tried the steps listed in this gist with no luck. Additionally, I tried creating a TAP device with an address of 10.0.0.1 in the QEMU docker container, bound the QEMU guest to that TAP, and then created an iptables rule to NAT traffic to the TAP. However, the destination address is then 10.0.0.1 while the QEMU guest thinks its own address is 192.168.1.3, so it won't accept the packet.
As you can see, I am a bit stuck conceptually and need some help with a direction to take this.
Right now, this is the network configuration I set up to handle traffic in the QEMU container (excuse the inconsistent mix of tools; iproute2 is not fully installed on the image I am using, and I can't seem to get the containers to reach the internet to install it):
# create a bridge and move both container interfaces onto it
brctl addbr br0
ip addr flush dev eth0
ip addr flush dev eth1
brctl addif br0 eth0
brctl addif br0 eth1
# create a tap device for the QEMU guest and attach it to the bridge
tunctl -t tap0 -u $(whoami)
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
# re-add the container's addresses on the bridge and route both subnets over it
# (the route prefixes must be the network addresses, not the gateway IPs)
ip addr add 192.168.1.3 dev br0
ip addr add 192.168.2.3 dev br0
ip route add 192.168.1.0/24 dev br0
ip route add 192.168.2.0/24 dev br0
ip addr add 10.0.0.1 dev tap0
Then I added the following forwarding rules:
iptables -t nat -A PREROUTING -i br0 -d 192.168.1.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -i br0 -d 192.168.2.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.2.1
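To see where packets die with these rules in place, one option (a sketch, assuming the names above) is to ping 192.168.1.3 from the nuc1 container while watching each hop inside the rpi container:

tcpdump -ni br0 icmp     # do the pings arrive on the bridge?
tcpdump -ni tap0 icmp    # do they make it through to the QEMU tap?
iptables -t nat -L -nv   # did the DNAT/SNAT rules actually match anything?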

Port Forwarding from Wireguard to Docker Containers

I am running a Wireguard server on a VPS. What I want to achieve is to route specific internet traffic (ports 10000:11000 are set to accept traffic in the VPS firewall) from the VPN to the Docker containers on my home server.
[Internet] <-> [Wireguard 10.100.0.1] <-> [Home Server 10.100.0.2 (Docker Containers)]
Details:
Wireguard Server
OS: Ubuntu 20.04.2 LTS
iptables post up/down rules from wg0.conf:
iptables -A FORWARD -i eth0 -j ACCEPT;
iptables -t nat -A PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2;
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
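For context, these rules presumably sit in wg-quick's PostUp/PostDown hooks in wg0.conf, roughly like this (the Address prefix and the PostDown lines mirroring the -A rules with -D are my assumptions, not taken from the original config):

[Interface]
Address = 10.100.0.1/24
PostUp = iptables -A FORWARD -i eth0 -j ACCEPT; iptables -t nat -A PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i eth0 -j ACCEPT; iptables -t nat -D PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE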
sysctl -p:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Home Server
OS: Ubuntu 20.04.2 LTS (Desktop)
systemd-networkd (.network file for wireguard interface) configuration:
[Match]
Name = wguard
[Network]
Address = 10.100.0.2/32
[RoutingPolicyRule]
From = 10.200.0.0/16
Table = 250
[Route]
Gateway = 10.100.0.2
Table = 250
[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 250
I created a Docker network in 10.200.0.0/16, and the containers are using this network. The Docker containers return the VPN IP address when I check it with curl.
The home server returns my home IP address with a plain curl query, but it returns the VPN IP address via the wireguard interface.
My problems:
I cannot ping the VPN host from the home server, but I can ping other VPN clients. Strangely, I can ping the VPN host from inside the containers.
Port forwarding from the VPN to the containers does not work properly.
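A sanity check I would run first (a sketch, assuming table 250 and the interface name wguard from the configuration above):

ip rule show              # should list: from 10.200.0.0/16 lookup 250
ip route show table 250   # should show the gateway route and the blackhole fallback
tcpdump -ni wguard icmp   # watch whether pings to 10.100.0.1 actually leave via wireguard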
Example docker-compose:
version: "3.7"

services:
  rutorrent:
    container_name: rutorrent1
    image: romancin/rutorrent:latest
    networks:
      - wireguard0
    ports:
      - "10010:51415"
      - "10011:80"
      - "10012:443"
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /home/user/Downloads/rutorrent/config:/config
      - /home/user/Downloads/rutorrent/downloads:/downloads

networks:
  wireguard0:
    external: true
I would appreciate it if you could at least point me to where I am going wrong. I think my iptables rules are missing some lines, but I couldn't find a good reference or a book to fully understand how to set this up properly.
Thanks

Deny incoming traffic with docker and ansible iptables

On a standalone server, I have a docker container running Jenkins. I want to write a playbook that allows me (some public IP, let's say 107.33.11.111) to connect via ports 22 and 8080 to the Jenkins server on my standalone server, but nobody else. Public traffic arrives on eth0 on my standalone server. I am using this guide to try to make this work.
Here is an example of how I run jenkins:
# privileged is needed to allow browser based testing via chrome
- name: Run jenkins container
  command: docker run --privileged -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts-jdk11
I then set this firewall rule:
- name: Firewall rule - allow port 22/SSH traffic only for me
  iptables:
    chain: INPUT
    in_interface: eth0
    destination_port: 22
    jump: ACCEPT
    protocol: tcp
    source: 107.33.11.111

- name: Firewall rule - allow port 8080 traffic only for me
  iptables:
    chain: INPUT
    in_interface: eth0
    destination_port: 8080
    jump: ACCEPT
    protocol: tcp
    source: 107.33.11.111

- name: Firewall rule - drop any traffic without rule
  iptables:
    chain: INPUT
    jump: DROP
    in_interface: eth0
When I execute the above playbook and run iptables -L, the output is trimmed to only:
Chain INPUT (policy ACCEPT)
target prot opt source destination
When I remove the last rule that drops all traffic, I can see the entire output, and the tables show my IP being permitted for ports 22 and 8080. However, all other traffic coming in on eth0 can reach those ports as well.
What do I need to do to allow 22 and 8080 only for a specific public address in my ansible playbook?
For your specific use case, you do not need to jump to another chain; you can set the default policy to DROP after allowing 8080 and 22.
Replace the last stanza in your playbook with:
- name: Firewall rule - drop any traffic without rule
  iptables:
    chain: INPUT
    policy: DROP
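One caveat worth adding (my addition, not part of the original answer): with a default DROP policy, replies to connections the server itself initiates (apt, DNS lookups, and so on) are dropped too, unless established traffic is allowed first:

- name: Firewall rule - allow established and related connections
  iptables:
    chain: INPUT
    ctstate: ESTABLISHED,RELATED
    jump: ACCEPT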

Docker expose a port only to localhost

I want to restrict my database access to 127.0.0.1, so I executed the following command:
docker run -it -p 127.0.0.1:3306:3306 --name db.mysql mysql:5.5
But I have some confusion...
You can see here that only the port bound to 127.0.0.1 will be forwarded:
; docker ps
mysql:5.5 127.0.0.1:3306->3306/tcp db.mysql
Interestingly, I cannot find this restriction in iptables:
; iptables -L
Chain FORWARD (policy DROP)
DOCKER all -- anywhere anywhere
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 192.168.112.2 tcp dpt:mysql
The source of this rule is anywhere.
Incoming traffic takes this path:
Incoming packet to the host's network -> iptables forwards it to the container
Your restriction is not in iptables; it is in the host's network configuration: you bound port 3306 to 127.0.0.1 instead of 0.0.0.0, so of course you see nothing about it in iptables. 127.0.0.1:3306:3306 means hostIp:hostPort:containerPort.
You can confirm this with netstat -oanltp | grep 3306 and see that there is no 0.0.0.0 bind, so no foreign host can reach your host machine on that port, and thus none can reach your container.
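A quick way to see the difference (a sketch; queue sizes and process details will differ on your machine):

; ss -lnt | grep 3306
LISTEN 0 4096 127.0.0.1:3306 0.0.0.0:*

With a plain -p 3306:3306 publish, the local address column would instead show 0.0.0.0:3306, i.e. reachable from any host.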

Docker container hits iptables to proxy

I have two VPSs; the first machine (proxy from now on) is the proxy, and the second machine (dock from now on) is the docker host. I want to redirect all traffic generated inside a docker container over the proxy, so as not to expose the dock machine's public IP.
As the connection between the VPSs goes over the internet rather than a local network, I created a tunnel between them with ip tunnel, as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
PS: I do not know whether ip tunnel is suitable for production; that is another question. I am planning to use libreswan or openvpn as the tunnel between the VPSs anyway.
Afterwards, I added SNAT rules to iptables on both VPSs, plus some routing rules, as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
Last but not least, I created a docker network with one test container attached to it, as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run -it --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over a proxy?
A few words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule on dock, and its source IP changes to 192.168.10.2. The routing rule then sends it over the tun10 device, i.e. the tunnel. On proxy it hits another SNAT rule that changes the source IP to y.y.y.y, and the packet finally travels to its destination.
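A hop-by-hop way to debug this (a sketch, assuming the interface names above): ping something from the test container and watch each point of the path to see where the packets stop showing up.

# on dock: does traffic leave the container bridge and enter the tunnel?
tcpdump -ni br-test icmp
tcpdump -ni tun10 icmp
# on proxy: does it arrive through the tunnel and leave on the public interface?
tcpdump -ni tun10 icmp
tcpdump -ni eth0 icmp
# on dock: verify which table a forwarded container packet would actually use
ip route get 8.8.8.8 from 172.27.10.2 iif br-test

One thing to check in particular: the rule "ip rule add from 192.168.10.2 table rt2" is evaluated at routing time, before the POSTROUTING SNAT rewrites the source address, so forwarded packets from 172.27.10.0/24 may never match it; a rule matching the container subnet itself ("from 172.27.10.0/24") might be needed instead. Reverse path filtering (rp_filter) on the tunnel interfaces is another common silent packet dropper in setups like this.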
