Docker containers on the same host but different bridges can't connect

I have two Docker-Compose configs running on the SAME host but DIFFERENT Docker bridges. Each is on a different subnet, so their traffic must be routed. One Docker-Compose config is for a containerized website, while the other is for a Zabbix Agent that monitors the website.
The gateway router has routes to both subnets, and the Linux Docker host itself is also configured as a router, so it should route traffic between the subnets it hosts.
Why can't traffic pass between different bridges on the SAME Docker host?!

Intro:
Before implementing containerized monitoring, I had no prior requirement to pass traffic between Docker bridges on the same host. I'm a Linux & network engineer, and this wasted an hour of my life trying to understand what was breaking, so I imagine that if you're not a network engineer you'll waste a lot more time, or fail completely. Thus I felt it was worth a moment to document.
Short Answer:
Docker was being "helpful" - again - by automagically inserting iptables rules into the DOCKER-ISOLATION-STAGE-1 & 2 chains (referenced from the FORWARD chain of the filter table), breaking connectivity. Delete these rules, and containers raised on different bridges assigned to different subnets on the same host can reach each other.
Longer Answer w/ Proofs:
Diagnostics:
I re-cut the image for the Zabbix Agent to include some diagnostic tools - traceroute, inetutils-ping & iproute2 - and after logging into the container with docker exec -u root -it <container ID> bash, I found the Agent's container couldn't ping the containers on the other bridge, despite ip route list proving there was a correct route out of the Agent's container.
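For reference, the tools can also be installed live in a running container rather than re-cutting the image (a sketch, assuming a Debian/Ubuntu-based image; the change is lost when the container is recreated):
# Get a root shell in the Agent's container
docker exec -u root -it <container ID> bash
# Inside the container: install the diagnostic tools
apt-get update && apt-get install -y traceroute inetutils-ping iproute2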
A review of the Docker host's firewall rules revealed that passing traffic between the Docker Bridges is DISALLOWED by design:
iptables -nvx -L --line-numbers
<SNIP>
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 530374 174564169 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 3559 5117334 DOCKER-ISOLATION-STAGE-2 all -- br-2dfcb90fe695 !br-2dfcb90fe695 0.0.0.0/0 0.0.0.0/0
3 1229457 499057258 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 16 960 DROP all -- * br-2dfcb90fe695 0.0.0.0/0 0.0.0.0/0
3 533917 179680543 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
<SNIP>
Traffic entering on one of our Docker bridges and leaving via any other interface matches a rule in DOCKER-ISOLATION-STAGE-1 (rule 2 for our bridge), which hands it to chain DOCKER-ISOLATION-STAGE-2; there it matches the DROP rule for the destination bridge (rule 2 again in our case) and is dropped.
We know this rule is having an effect because we can see its packet counters incrementing; traffic is indeed being dropped. So rule 2 in chain DOCKER-ISOLATION-STAGE-2 is the offender.
Solution:
Print the rules so we can determine the numbers of the iptables rules busting our connectivity:
sudo iptables -nvx -L --line-numbers
Then delete the problematic rules by their respective numbers. Note the final "2" at the end of each iptables command below is the rule number being deleted. We'll delete both the referring rule and the DROP rule it leads to:
sudo iptables -D DOCKER-ISOLATION-STAGE-1 2
sudo iptables -D DOCKER-ISOLATION-STAGE-2 2
WARNING: Although restarting containers will NOT cause the deleted iptables rules to be recreated, doing a docker-compose down followed by a docker-compose up WILL.
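If you'd rather not fight the daemon over its own rules, a more durable sketch (assuming a reasonably recent Docker and the bridge names from the output above; substitute your own) is to add ACCEPT rules to the DOCKER-USER chain, which is evaluated before the isolation chains and is left alone by Docker across restarts and down/up cycles:
# Allow forwarding between the two bridges, in both directions
sudo iptables -I DOCKER-USER -i docker0 -o br-2dfcb90fe695 -j ACCEPT
sudo iptables -I DOCKER-USER -i br-2dfcb90fe695 -o docker0 -j ACCEPT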
Hope this saves others wasted cycles figuring out broken container network connectivity...

Related

How to prevent docker containers from accessing my local network

I would like to prevent Docker containers connected to a bridge network from accessing my local network, as extra security since they will be accessible from outside (in case a container is compromised). I saw that I should probably use ebtables or the physdev module of iptables, but I can't create a rule that works. Thanks in advance to anyone who can help.
After some research, and in case anyone is interested, it is possible to use ebtables.
# Authorize DNS queries
ebtables -A INPUT -p IPV4 --ip-protocol TCP --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
ebtables -A INPUT -p IPV4 --ip-protocol UDP --ip-destination-port 53 --ip-destination 192.168.1.1 --ip-source 172.18.0.0/16 -j ACCEPT
# Drop all other packets
ebtables -A INPUT -p IPV4 --ip-destination 192.168.1.0/24 --ip-source 172.18.0.0/16 -j DROP
Do not forget to replace the 172.18.0.0/16 subnet with the one to which your containers are connected.
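(As with iptables, ebtables rules do not survive a reboot on their own; persist them with your distribution's usual mechanism, e.g. an ebtables-save dump restored at boot.)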
I was stumbling through this myself and found one solution was to insert (-I) a new rule into the DOCKER-USER chain.
Please see this answer: https://stackoverflow.com/a/73994723/20189349
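For reference, a minimal sketch of that DOCKER-USER approach, reusing the subnets from the ebtables example above (the router address and networks are placeholders for your own):
# Still allow DNS queries to the router (repeat with -p tcp if needed)
sudo iptables -I DOCKER-USER 1 -s 172.18.0.0/16 -d 192.168.1.1 -p udp --dport 53 -j ACCEPT
# Drop everything else from the containers to the local network
sudo iptables -I DOCKER-USER 2 -s 172.18.0.0/16 -d 192.168.1.0/24 -j DROP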

Iptables rules with dockerized web application - can't block incoming traffic

I'm hosting a dockerized web application bound to port 8081 on my remote server.
I want to block that web application for external IPs, as I already did with port 8080, which hosts a plain Jenkins server.
Here's what I've tried:
iptables -A INPUT -d <my-server-ip> -p tcp --dport 8081 -j DROP
As I did with port 8080.
Here is the iptables -nv -L INPUT output:
Chain INPUT (policy ACCEPT 2836 packets, 590K bytes)
pkts bytes target prot opt in out source destination
495 23676 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8080
0 0 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8081
Could it have something to do with the DOCKER chain in iptables?
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
9 568 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 <container-eth1-addr> tcp dpt:8080
Are there more specific rules I need to add?
Aren't my server's INPUT rules supposed to be applied before those listed in the DOCKER chain?
UPDATE - SOLVED
Thanks to larsks's comments I found the solution.
The goal here was to block TCP traffic to port 8081, bound to the Docker container, while still being able to use SSH tunneling as a "poor man's" VPN (so not publishing the port was not an option).
The catch: traffic to a published port is DNAT'ed in PREROUTING and then traverses the FORWARD chain, never INPUT, which is why my INPUT rule never matched. The place to filter it is the DOCKER-USER chain, matching the container's address and internal port (8080 here) rather than the published port (8081).
Just had to add this rule:
iptables -I DOCKER-USER 1 -d <container-eth-ip> ! -s 127.0.0.1 -p tcp --dport 8080 -j DROP
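To confirm the rule landed at the top of DOCKER-USER and is actually matching, watch its packet counters while an external client retries:
sudo iptables -nvx -L DOCKER-USER --line-numbers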

Accessing Docker container from non-host device

Bear with me, I'm new to Docker...
I'm trying to get a Docker environment going on a Red Hat Linux server (7.6) and am having trouble accessing containers from a computer other than the host.
I got Docker installed no problem. Then, the first container I installed was Portainer and the Portainer Agent:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent
Seems peachy:
# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
973a685cfbe1 portainer/portainer "/portainer" 19 hours ago Up 2 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp portainer
602537dc21ec portainer/agent "./agent" 45 hours ago Up 19 hours 0.0.0.0:9001->9001/tcp portainer_agent
And using # curl http://localhost:9000 connects just fine. However, the connection gets dropped when attempting to connect from another computer on the same network (in a different subnet, if that matters). I can connect to the server just fine (I'm managing it via SSH, and even tested netcat on port 9002 for good measure).
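For anyone triaging something similar, a quick way to separate a port-binding problem from a network-path problem (assuming ss and curl are available; <host-ip> is a placeholder for the server's address):
# On the host: confirm the published ports are listening on all interfaces (0.0.0.0)
sudo ss -tlnp | grep -E ':(8000|9000|9001)'
# From the other computer: test the path to the host
curl -v http://<host-ip>:9000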
The iptables rules, if this helps:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:etlservicemgr
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:cslistener
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:irdmi
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
I've searched around a bit but keep finding conflicting answers (some suggesting that it should just work, and others suggesting that there's a lot more I've got left to learn and configure). I'm afraid that I'm fumbling in the dark. I gather that I need a route configured to forward host traffic to the container? Or an iptables rule? What exactly am I missing?
...Never mind.
On a lark, I tried connecting to the server from a device that's on-premises rather than from my computer, which is connected via VPN. The on-prem device connected fine, so the issue appears to be in the VPN path, not in Docker.

Container-to-container communication via a host mapped port

I am using Docker version 1.9.1 and docker-compose 1.5.2 with --x-networking (experimental networking).
I start a trivial node application with docker-compose up; this application maps port 8000 to port 9999 on the host.
From the host I can curl http://localhost:9999, http://[host-ip]:9999, or any of the 172.x.0.1 addresses that the host has, and they all work.
I start another application with docker-compose up. If I attempt to curl http://[host-ip]:9999, or any of the http://172.x.0.1 addresses, the packet is dropped due to iptables entries -- in particular the entry that specifies DROP from this container's subnet to the first container's subnet.
I understand that direct container-to-container communication may not be allowed, but how can my second container talk to the first via the port mapped on the host?
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DROP all -- 172.17.0.0/16 172.19.0.0/16
DROP all -- 172.19.0.0/16 172.17.0.0/16
DROP all -- 172.18.0.0/16 172.19.0.0/16
DROP all -- 172.19.0.0/16 172.18.0.0/16
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DROP all -- 172.17.0.0/16 172.18.0.0/16
DROP all -- 172.18.0.0/16 172.17.0.0/16
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (3 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.2 tcp dpt:8000
Container-to-container communication is of course allowed by default; you could forbid it with firewall rules, etc.
What you actually need is to have these two containers on the same network. So create one with
docker network create --subnet=172.18.0.0/16 mySubNet
then run the containers with
docker run --net mySubNet <image>
And that is it. Additionally when running you could assign a static ip to container with --ip, assign a hostname with --hostname or add another host entry with --add-host.
EDIT: I see now your Docker version, so I have to say that what I wrote here works with Docker 1.10.x.
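Putting the flags above together, a sketch (the network name, addresses, hostnames and image are hypothetical):
# Create a user-defined network with an explicit subnet (required for --ip)
docker network create --subnet=172.18.0.0/16 mySubNet
# Run a container on it with a static IP, a hostname, and an extra hosts entry
docker run -d --net mySubNet --ip 172.18.0.10 --hostname app1 --add-host db1:172.18.0.11 my-node-app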
Subnet solution
You can create a subnet for your containers, but to keep things clean you will need a subnet for each distributed application in order to isolate them. Not the easiest nor the simplest way of doing it, but it works.
--link solution
Another solution is to link your containers. I suggest you read this comment, so I don't copy/paste its content ;)

Docker - exposing IP addresses to DNS server

Looking at the iptables of my Docker host, I get something like this:
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:http-alt
I was able to assign a durable IP address like this:
sudo ip addr add 10.0.0.99/8 dev eth0
docker run -d -p 10.0.0.99:8888:8080 tomcat:8
but that address is only reachable from this machine, as in I need to SSH into it and ping it from this box.
Reading through this, it looks like I need to add a custom bridge:
Custom Docker Bridge
Is there a way to make the bridge hand out fresh IPs from the DHCP server? For example, if my DHCP server assigns addresses from 10.1.1.x, I want those addresses assigned to Docker containers.
Would this involve a generic *nix way of pushing the IP addresses and DNS names in my /etc/hosts to a DNS server, so other machines outside the Docker cluster can ping those machines?
I have port forwarding working, but I need to do the same with the IP addresses, as ZooKeeper only tracks the internal Docker IP addresses and hostnames as I've defined them in my docker-compose.yml.
