I am trying to understand the iptables configuration inserted by Docker on my host.
Below is the output of sudo iptables -t nat -S DOCKER.
In the output below, what does the ! after the chain name do?
-N DOCKER
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-d98117320157 -j RETURN
-A DOCKER ! -i br-d98117320157 -p tcp -m tcp --dport 9202 -j DNAT --to-destination 172.23.0.3:9200
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8034 -j DNAT --to-destination 172.17.0.2:8004
I have a Docker Compose file with two services that I want to both listen on port 80, but on different IP addresses (a reverse proxy is not suitable for these). I have a virtual network interface with a separate IP for the second service to listen on.
When I list a specific IP address for a service to bind to, it is no longer accessible from the network. But it is when no IP is specified and only one service is bound to port 80.
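For reference, a virtual interface like this is just a second address on the physical NIC; assuming the device is eth0, it can be created with something like:
sudo ip addr add 192.168.1.2/24 dev eth0 label eth0:1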
Example:
When my docker-compose file contains:
ports:
  - "192.168.1.2:80:80"
I can successfully connect from my local machine using curl 192.168.1.2, but not from a machine on the local network.
However, when my docker-compose file contains:
ports:
  - "80:80"
I can successfully connect from the network using curl 192.168.1.2, which is the same IP I was binding to before! So this is clearly the IP address of one of my network interfaces.
It doesn't seem to be firewall related, because if I run sudo python3 -m http.server 80 --bind 192.168.1.2 I can also reach it from the network. So why does Docker not respond to outside requests when configured with a specific IP address?
Edit:
Yes, I use Linux. Output from iptables -t nat -S with the IP specified for port 80:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N OUTPUT_direct
-N POSTROUTING_ZONES
-N POSTROUTING_ZONES_SOURCE
-N POSTROUTING_direct
-N POST_public
-N POST_public_allow
-N POST_public_deny
-N POST_public_log
-N PREROUTING_ZONES
-N PREROUTING_ZONES_SOURCE
-N PREROUTING_direct
-N PRE_public
-N PRE_public_allow
-N PRE_public_deny
-N PRE_public_log
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.240.0/20 ! -o br-d7a59c2d8d7f -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o br-a6d78077661d -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 8080 -j MASQUERADE
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p tcp -m tcp --dport 53 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 7890 -j MASQUERADE
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p udp -m udp --dport 53 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i br-d7a59c2d8d7f -j RETURN
-A DOCKER -i br-a6d78077661d -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 8080 -j DNAT --to-destination 192.168.240.4:8080
-A DOCKER ! -i br-a6d78077661d -p tcp -m tcp --dport 53 -j DNAT --to-destination 172.18.0.2:53
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 7890 -j DNAT --to-destination 192.168.240.4:7890
-A DOCKER ! -i br-a6d78077661d -p udp -m udp --dport 53 -j DNAT --to-destination 172.18.0.2:53
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.240.4:443
-A DOCKER -d 192.168.1.2/32 ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.240.4:80
-A POSTROUTING_ZONES -o eth0 -g POST_public
-A POSTROUTING_ZONES -g POST_public
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A PREROUTING_ZONES -i eth0 -g PRE_public
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
After adding TRACE rules, I found that the external packets follow the PREROUTING chain all the way to the relevant DNAT rule. Log excerpt:
TRACE: nat:PREROUTING:rule:4 IN=eth0 OUT= MAC=52:54:00:83:f9:b4:50:ed:3c:23:4d:b9:08:00 SRC=192.168.1.250 DST=192.168.1.2 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=49864 DPT=80 SEQ=400376858 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (020405B4010303060101080A7F6085070000000004020000)
TRACE: nat:DOCKER:rule:9 IN=eth0 OUT= MAC=52:54:00:83:f9:b4:50:ed:3c:23:4d:b9:08:00 SRC=192.168.1.250 DST=192.168.1.2 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=49864 DPT=80 SEQ=400376858 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (020405B4010303060101080A7F6085070000000004020000)
But then the trail ends. It doesn't continue into either the INPUT chain or the filter table. Since the IP I was using belongs to a virtual interface, I tried relaxing strict reverse-path filtering with sysctl -w net.ipv4.conf.all.rp_filter=2 after reading https://serverfault.com/questions/586628/packets-sent-on-virtual-ip-dont-hit-iptables-rule?newreg=edb07f9ec9ee416f8bd012ee6d94b8b9. This didn't help, however. I also tried replacing 192.168.1.2 with 192.168.1.238, which is the IP of the physical NIC, with similar results. The nstat -rsz | grep IPReversePathFilter counter is not increasing between requests either. I don't get why the packets are just silently dropped.
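TRACE rules of this general form produce the log output above (TRACE is a raw-table target that makes the kernel log every rule a matching packet traverses; restricting it to port 80 keeps the noise down):
iptables -t raw -A PREROUTING -p tcp --dport 80 -j TRACE
iptables -t raw -A OUTPUT -p tcp --dport 80 -j TRACE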
I've realised that my web server itself doesn't actually need to have ads blocked by Pi-hole, so I have decided to use a macvlan network for this Docker container instead. This also gives it a separate IP address, but with the caveat that the container and the host cannot communicate with each other.
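A macvlan network of that kind can be created roughly like this (the subnet, gateway, parent device, and the name macvlan_net are placeholders; substitute your own values):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 macvlan_net
The container is then attached to macvlan_net instead of publishing ports on the host.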
I'm trying to host my backend services on an Ubuntu 16.04 server with Docker. An nginx instance handles all HTTP requests and proxy-passes them to the backend services.
With the iptables INPUT and OUTPUT policies set to ACCEPT, everything works perfectly; however, if I try to restrict all access to nginx except HTTP/HTTPS, communication between localhost and the Docker containers breaks.
This is my iptables configuration:
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
# Drop empty flag packets and sync-flood packets
-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
-A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
# Allow HTTP/HTTPS
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow DNS
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -j ACCEPT
# Block ping
-A INPUT -p icmp -m state --state NEW -m icmp --icmp-type 8 -j DROP
# Allow any loopback
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
# Allow forwarding from/to localhost to/from docker
-A FORWARD -i docker0 -o lo -j ACCEPT
-A FORWARD -i lo -o docker0 -j ACCEPT
# Docker-generated rules
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-30c18a0778b5 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-30c18a0778b5 -j DOCKER
-A FORWARD -i br-30c18a0778b5 ! -o br-30c18a0778b5 -j ACCEPT
-A FORWARD -i br-30c18a0778b5 -o br-30c18a0778b5 -j ACCEPT
-A DOCKER -d 172.18.0.2/32 ! -i br-30c18a0778b5 -o br-30c18a0778b5 -p tcp -m tcp --dport 27017 -j ACCEPT
-A DOCKER -d 172.18.0.3/32 ! -i br-30c18a0778b5 -o br-30c18a0778b5 -p tcp -m tcp --dport 4000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-30c18a0778b5 ! -o br-30c18a0778b5 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-30c18a0778b5 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
The container I proxy-pass to runs on port 4000, mapped to 3003 in docker-compose.yml:
webapi:
  build: .
  depends_on:
    - mongo
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: "0.1"
        memory: 256M
    restart_policy:
      condition: on-failure
  ports:
    - "3003:4000"
  networks:
    - webnet
But if I run curl http://localhost:3003/api/healthcheck I get curl: (56) Recv failure: Connection reset by peer, which is confusing to me since I don't have any restrictions on loopback or on forwarding to docker0.
The only idea I have is that forwarding from the container's port 4000 to localhost:3003 is blocked, but I can't work out how to allow it.
I too ran into an issue where I created a Docker network and only wanted to expose the DNS server and nginx services. Everything worked fine from outside the system, but within the host I didn't have access as long as the INPUT policy was DROP. Adding localhost and the exact IP addresses and ports to the INPUT chain had no effect.
The solution I found was to ACCEPT the Docker network bridge interface in the INPUT chain. By default Docker names this interface something like br-22153f050ac4. You can put that opaque name in your iptables config, or you can name the bridge interface yourself when creating the network and use the more deterministic name.
Assuming you created your network with:
docker network create -d bridge -o com.docker.network.bridge.name=webnet webnet
You should be able to allow localhost -> docker container with something like:
sudo iptables -A INPUT -i webnet -j ACCEPT
Assuming everything was successful, you should now be able to access the container via a localhost address (i.e. 127.0.0.1) and the docker container address (e.g. 172.X.Y.Z).
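As a quick check, using the container address and published port from the question's rules (adjust for your network), both of these should then work from the host:
curl http://127.0.0.1:3003/api/healthcheck
curl http://172.18.0.3:4000/api/healthcheck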
The easy answer to this is to publish the port only on the loopback interface, so you don't need to firewall it:
ports:
  - "127.0.0.1:3003:4000"
For debugging the firewall rules, I'd put some logging in there before dropping the packets. Docker uses a lot more than just the docker0 bridge, and there are various nat/mangle rules and proxy processes involved in publishing ports.
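A minimal sketch of that kind of logging, assuming default DROP policies (append these as the last rules in each chain; the prefixes are arbitrary):
iptables -A INPUT -j LOG --log-prefix "INPUT drop: "
iptables -A FORWARD -j LOG --log-prefix "FORWARD drop: "
Then watch the kernel log (dmesg or journalctl -k) while reproducing the failing request.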
I am running a Google Cloud container-optimized instance (cos-beta-70-11021-29-0) and I run nginx:
docker run --name xx -d -p 80:80 nginx
I can access the nginx welcome page despite port 80 not being open in iptables:
$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 23 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
Why is that?
In order to expose a port, you have to connect the internal Docker network to the external one, so Docker adds its own DOCKER chain to iptables and manages it itself. When you expose a port on a container using the -p 80:80 option, Docker adds a rule to that chain. The reason your INPUT DROP policy doesn't block it is that traffic to a published port is DNATed to the container and then traverses the FORWARD chain, not INPUT.
On your rules list you can find:
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
If you don't want Docker to fiddle with iptables, you can add the argument --iptables=false to your Docker daemon invocation, but then the publishing part of your docker command probably won't work automatically, and you might need to add some iptables rules yourself. I haven't tested that.
You can set that option in /etc/default/docker or in /etc/systemd/system/docker.service.d, depending on whether you're using systemd, upstart, or something else...
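The same setting can also go in the daemon configuration file, normally /etc/docker/daemon.json:
{
  "iptables": false
}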
You might want to check either of these links:
https://docs.docker.com/config/daemon/systemd/
https://docs.docker.com/engine/reference/commandline/dockerd//#daemon-configuration-file
I am trying Rancher (v1.2.3) and I am not able to run the agent on the nodes.
1) I've installed the Rancher server on one node with the following command:
sudo docker run -d --restart=unless-stopped -p 80:8080 rancher/server:v1.2.3
2) Then I go to Add Host, and Rancher gives me the command to add it.
3) I go to Node 1 and run the following:
sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.1.2 http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
4) The command line returns
docker: Error response from daemon: containerd: container did not start before the specified
I don't know what is going wrong. I think the container cannot access the Rancher server, but if I do a
curl http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
I can access it. In addition, this is my iptables configuration:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N CATTLE_FORWARD
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -j CATTLE_FORWARD
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o docker_gwbridge -j DOCKER
-A FORWARD -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A CATTLE_FORWARD -m mark --mark 0x668a0 -j ACCEPT
-A DOCKER-ISOLATION -i docker_gwbridge -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -j RETURN
Ubuntu v14.04
Docker v1.12.3
It would be greatly appreciated if you could help me.
Thanks
The full error is presumably "containerd: container did not start before the specified timeout", which means Docker isn't starting the container in time. Rebooting the host will probably help.
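If a full reboot is too disruptive, restarting the Docker daemon is worth trying first (Ubuntu 14.04 uses upstart):
sudo service docker restart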
If the node where you start rancher/server:v1.2.3 and the node where you start the agent are the same machine, there could be an internal port-access issue.
Rancher uses UDP ports for internal communication, such as 500 (together with 4500, these carry its IPsec overlay traffic). These must be permitted, for example by adding them to your firewalld zones. Issues can also occur if you use managed networking.
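A minimal sketch with plain iptables, assuming the IPsec ports UDP 500 and 4500:
sudo iptables -A INPUT -p udp -m udp --dport 500 -j ACCEPT
sudo iptables -A INPUT -p udp -m udp --dport 4500 -j ACCEPT
With firewalld, the equivalent would be firewall-cmd --permanent --add-port=500/udp --add-port=4500/udp followed by firewall-cmd --reload.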
I have a Wi-Fi access point with a captive portal set up, but I can't connect to the network unless I allow the MAC address beforehand
(iptables -t nat -I PREROUTING -i wlan0 -m mac --mac-source 0:0:0:0:0:0 -j MARK --set-xmark 0x2/0x0)
What packets do I need to allow to connect to the network? (It is not currently working on a Nexus 5.)
These are the iptables rules I use:
#!/bin/sh
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -A PREROUTING -i wlan0 -j MARK --set-xmark 0x1/0x0
iptables -t nat -I PREROUTING -i wlan0 -m mac --mac-source 0:0:0:0:0:0 -j MARK --set-xmark 0x2/0x0
iptables -t nat -I PREROUTING -i wlan0 -m mac --mac-source 1:1:1:1:1:1 -j MARK --set-xmark 0x2/0x0
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 53 -j MARK --set-xmark 0x2/0x0
iptables -t nat -A PREROUTING -i wlan0 -p tcp --sport 53 -j MARK --set-xmark 0x2/0x0
iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 53 -j MARK --set-xmark 0x2/0x0
iptables -t nat -A PREROUTING -i wlan0 -p udp --sport 53 -j MARK --set-xmark 0x2/0x0
iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 0x1 -j DNAT --to-destination 10.0.0.1
iptables -t nat -A PREROUTING -i wlan0 -p udp -m mark --mark 0x1 -j DNAT --to-destination 10.0.0.1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
#Save, reload and view the new rules
iptables-save > /home/pi/rules.v4
iptables-restore < /home/pi/rules.v4
iptables -L
I think your problem is the --set-xmark option: with a mask of 0x0 it clears nothing and XORs the value into the existing packet mark, so each matching rule toggles bits rather than setting them, unlike --set-mark or --or-mark, which just set the required bits.
A packet that matches several of these rules therefore gets XORed more than once, and an even number of XORs of the same bits cancels out. For example, DNS traffic from a whitelisted MAC hits the 0x2 MAC rule, the 0x1 catch-all rule, and the port-53 0x2 rule, ending up with mark 0x1 and being redirected to the portal anyway.
The iptables-extensions man page explains the various MARK options.
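As a sketch of the fix, the same logic can be written with --set-mark, which overwrites the whole mark instead of toggling bits; appending everything in order means later matches simply re-set the mark (same addresses as in the script above, untested):
# default: every wlan0 packet is marked for redirection
iptables -t nat -A PREROUTING -i wlan0 -j MARK --set-mark 0x1
# whitelisted MACs and DNS traffic overwrite the mark and escape the DNAT
iptables -t nat -A PREROUTING -i wlan0 -m mac --mac-source 0:0:0:0:0:0 -j MARK --set-mark 0x2
iptables -t nat -A PREROUTING -i wlan0 -m mac --mac-source 1:1:1:1:1:1 -j MARK --set-mark 0x2
iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 53 -j MARK --set-mark 0x2
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 53 -j MARK --set-mark 0x2
# only still-marked packets get redirected to the portal
iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 0x1 -j DNAT --to-destination 10.0.0.1
iptables -t nat -A PREROUTING -i wlan0 -p udp -m mark --mark 0x1 -j DNAT --to-destination 10.0.0.1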