CentOS iptables fails with Docker rules - docker

I have started a Docker container:
docker run --rm --name some-nginx -d -p 8080:80 nginx:stable-alpine
Then I added iptables rules for it:
PORT=8080
/sbin/iptables -A OUTPUT -p tcp --dport $PORT -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp --sport $PORT -m state --state ESTABLISHED -j ACCEPT
When I check whether the port is open, it shows as listening:
$ netstat -anp | grep 8080
tcp6 0 0 :::8080 :::* LISTEN 11211/docker-proxy
When I investigate further, the rules still seem to be in place:
iptables-save | grep 8080
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.3:80
-A INPUT -p tcp -m tcp --sport 8080 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
I see Docker added some rules, but my rules are also present.
The problem is I can't connect to nginx on the given port!
telnet XX.XX.XX.XX 8080
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection timed out
When I turn off iptables, it works; the browser even shows the nginx start page.
How can I configure iptables and Docker to work together on the given port?

I have some pages you can read:
https://fralef.me/docker-and-iptables.html
https://riptutorial.com/docker/topic/9201/iptables-with-docker
I also run Linux; try this:
iptables -A INPUT -p tcp -m tcp --dport xxx -j ACCEPT
iptables -A OUTPUT -p tcp --sport xxx -m state --state ESTABLISHED -j ACCEPT
or
iptables -A OUTPUT -o eth0 -p tcp --dport xxx -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport xxx -m state --state ESTABLISHED -j ACCEPT
Let me know if it helps.
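A note on why INPUT/OUTPUT rules like the ones above often don't help here: traffic to a Docker-published port is DNATed in the nat table's PREROUTING chain and then routed through the filter table's FORWARD chain, so filter INPUT and OUTPUT never see it. On CentOS the stock iptables ruleset typically ends the FORWARD chain with a REJECT, which is what blocks the container traffic when the service is enabled. On Docker versions that provide the DOCKER-USER chain, that chain is the supported place for user rules, and it is evaluated before Docker's own FORWARD rules. A minimal sketch, not a drop-in fix, assuming the published port 8080 from the question:

```
# Packets reaching DOCKER-USER have already been DNATed to the container
# address/port, so match the ORIGINAL (pre-DNAT) destination port via
# the conntrack extension.
/sbin/iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j ACCEPT
```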

Related

Linux webserver based on docker not available with ufw

I created a small webserver which is running on docker and want to secure it with ufw.
At first it didn't work, but after googling a bit I found several articles about it.
So I added the following block to /etc/ufw/after.rules:
# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:ufw-docker-logging-deny - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j ufw-user-forward
-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16
-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 172.16.0.0/12
-A DOCKER-USER -j RETURN
-A ufw-docker-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW DOCKER BLOCK] "
-A ufw-docker-logging-deny -j DROP
COMMIT
# END UFW AND DOCKER
After that I restarted ufw and added my ports with:
sudo ufw route allow proto tcp from any to any port 8080
My containers are running on the following ports:
- 7000:80
- 8080:80
- 1270:1270
- 1260:1260
- 9443:9443
So I allowed ports 7000, 8080, 1270, 1260, 9443 and 80.
But my webserver, like all the other ports, is still not reachable.
Does anyone know how I can solve this?
Or maybe how I can start to debug?
Right now I have no clue how to get further.
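One thing worth double-checking (an assumption about the setup, based on my reading of the ufw-docker convention): the after.rules block above filters in DOCKER-USER, i.e. after Docker's DNAT, so the port in `ufw route allow` should be the container-side port, not the host-side one. For the mappings above, the distinct container ports can be derived like this (the snippet only prints the commands; it does not run them):

```shell
# Derive 'ufw route allow' commands from the host:container mappings.
# cut -d: -f2 takes the container-side port; sort -un de-duplicates.
printf '%s\n' 7000:80 8080:80 1270:1270 1260:1260 9443:9443 |
  cut -d: -f2 | sort -un |
  sed 's/^/sudo ufw route allow proto tcp from any to any port /'
```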

iptables and docker - what is the meaning of '!' character after chain name

I am trying to understand the iptables configuration inserted by Docker on my host.
Below is the output of sudo iptables -t nat -S DOCKER
For the output below, what does the ! after the chain name do?
-N DOCKER
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-d98117320157 -j RETURN
-A DOCKER ! -i br-d98117320157 -p tcp -m tcp --dport 9202 -j DNAT --to-destination 172.23.0.3:9200
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8034 -j DNAT --to-destination 172.17.0.2:8004
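For context while reading such output: in iptables syntax, ! negates the match that immediately follows it; it belongs to the -i option here, not to the chain name. So ! -i docker0 means "received on any interface except docker0":

```
# Only DNAT traffic that did NOT arrive on the docker0 bridge itself,
# i.e. traffic coming from outside. Traffic arriving on docker0 already
# matched the earlier "-A DOCKER -i docker0 -j RETURN" rule.
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8034 -j DNAT --to-destination 172.17.0.2:8004
```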

iptables silently dropping packets with docker DNAT at end of PREROUTING chain instead of continuing to INPUT chain

I have a docker compose file with 2 services that I want to both listen to port 80, but on different IP addresses (reverse proxy is not suitable for these). I have a virtual network interface with a separate IP for the second service to listen to.
When I list a specific IP address for a service to bind to, it isn't accessible from the network anymore. But it is when no IP is specified and only 1 service is bound to port 80.
Example:
When my docker-compose file contains:
ports:
- "192.168.1.2:80:80"
I can successfully connect from my local machine using curl 192.168.1.2, but not from a machine on the local network.
However, when my docker-compose file contains:
ports:
- "80:80"
I can successfully connect from the network using curl 192.168.1.2. Which is the same IP I was binding to before! So this is clearly the IP address of one of my network interfaces.
It doesn't seem to be firewall related, because if I run sudo python3 -m http.server 80 --bind 192.168.1.2 I can also reach it from the network. So why does docker not respond from outside requests when configured with a specific IP address?
Edit:
Yes, I use Linux. Output from iptables -t nat -S with IP specified for port 80:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N OUTPUT_direct
-N POSTROUTING_ZONES
-N POSTROUTING_ZONES_SOURCE
-N POSTROUTING_direct
-N POST_public
-N POST_public_allow
-N POST_public_deny
-N POST_public_log
-N PREROUTING_ZONES
-N PREROUTING_ZONES_SOURCE
-N PREROUTING_direct
-N PRE_public
-N PRE_public_allow
-N PRE_public_deny
-N PRE_public_log
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 192.168.240.0/20 ! -o br-d7a59c2d8d7f -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o br-a6d78077661d -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 8080 -j MASQUERADE
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p tcp -m tcp --dport 53 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 7890 -j MASQUERADE
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p udp -m udp --dport 53 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 192.168.240.4/32 -d 192.168.240.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i br-d7a59c2d8d7f -j RETURN
-A DOCKER -i br-a6d78077661d -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 8080 -j DNAT --to-destination 192.168.240.4:8080
-A DOCKER ! -i br-a6d78077661d -p tcp -m tcp --dport 53 -j DNAT --to-destination 172.18.0.2:53
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 7890 -j DNAT --to-destination 192.168.240.4:7890
-A DOCKER ! -i br-a6d78077661d -p udp -m udp --dport 53 -j DNAT --to-destination 172.18.0.2:53
-A DOCKER ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.240.4:443
-A DOCKER -d 192.168.1.2/32 ! -i br-d7a59c2d8d7f -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.240.4:80
-A POSTROUTING_ZONES -o eth0 -g POST_public
-A POSTROUTING_ZONES -g POST_public
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A PREROUTING_ZONES -i eth0 -g PRE_public
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
After adding TRACE rules I've found that the external packets follow the PREROUTING chain all the way to the relevant DNAT rule. Log excerpt:
TRACE: nat:PREROUTING:rule:4 IN=eth0 OUT= MAC=52:54:00:83:f9:b4:50:ed:3c:23:4d:b9:08:00 SRC=192.168.1.250 DST=192.168.1.2 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=49864 DPT=80 SEQ=400376858 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (020405B4010303060101080A7F6085070000000004020000)
TRACE: nat:DOCKER:rule:9 IN=eth0 OUT= MAC=52:54:00:83:f9:b4:50:ed:3c:23:4d:b9:08:00 SRC=192.168.1.250 DST=192.168.1.2 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=49864 DPT=80 SEQ=400376858 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (020405B4010303060101080A7F6085070000000004020000)
But then the trail ends: the packet doesn't continue to either the INPUT or FORWARD chain of the filter table. Since the IP I was using is the IP of a virtual interface, I tried disabling strict RPF after reading https://serverfault.com/questions/586628/packets-sent-on-virtual-ip-dont-hit-iptables-rule?newreg=edb07f9ec9ee416f8bd012ee6d94b8b9, through sysctl -w net.ipv4.conf.all.rp_filter=2. This didn't help, however. I also tried replacing 192.168.1.2 with 192.168.1.238, which is the IP of the physical NIC, with similar results. The nstat -rsz | grep IPReversePathFilter counter is not increasing between requests either. I don't get why the packets are just silently dropped.
I've realised that my web server itself doesn't actually need to have ads blocked by pihole, so I have decided to use a macvlan network for this Docker container instead. This also gives it a separate IP address, but with the caveat that the container and the host cannot communicate directly with each other.
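A hedged sketch of what that macvlan setup can look like in docker-compose (the parent interface name, subnet, and addresses here are illustrative assumptions, not taken from the original setup):

```yaml
networks:
  macnet:
    driver: macvlan
    driver_opts:
      parent: eth0            # host NIC to attach to (assumed)
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  web:
    image: nginx:stable-alpine
    networks:
      macnet:
        ipv4_address: 192.168.1.2   # the container's own LAN address
```

As noted above, with macvlan the host cannot reach the container's address directly unless you also create a macvlan sub-interface on the host.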

Allow traffic from localhost to docker container

I'm trying to host my backend services on a Ubuntu 16.04 server with docker. There is an nginx handling all HTTP requests and proxy-passing them to backend services.
With iptables INPUT and OUTPUT set to ACCEPT everything works perfectly; however, if I restrict any access except HTTP/HTTPS to nginx, communication between localhost and the Docker containers breaks.
This is my iptables:
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
# Drop empty flag packets and sync-flood packets
-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
-A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
# Allow HTTP/HTTPS
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow DNS
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 53 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -j ACCEPT
# Block ping
-A INPUT -p icmp -m state --state NEW -m icmp --icmp-type 8 -j DROP
# Allow any loopback
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
# Allow forwarding from/to localhost to/from docker
-A FORWARD -i docker0 -o lo -j ACCEPT
-A FORWARD -i lo -o docker0 -j ACCEPT
# Docker-generated rules
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-30c18a0778b5 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-30c18a0778b5 -j DOCKER
-A FORWARD -i br-30c18a0778b5 ! -o br-30c18a0778b5 -j ACCEPT
-A FORWARD -i br-30c18a0778b5 -o br-30c18a0778b5 -j ACCEPT
-A DOCKER -d 172.18.0.2/32 ! -i br-30c18a0778b5 -o br-30c18a0778b5 -p tcp -m tcp --dport 27017 -j ACCEPT
-A DOCKER -d 172.18.0.3/32 ! -i br-30c18a0778b5 -o br-30c18a0778b5 -p tcp -m tcp --dport 4000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-30c18a0778b5 ! -o br-30c18a0778b5 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-30c18a0778b5 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
The container I proxy-pass to is running on port 4000 mapped to 3003 in docker-compose.yml:
webapi:
build: .
depends_on:
- mongo
deploy:
replicas: 1
resources:
limits:
cpus: "0.1"
memory: 256M
restart_policy:
condition: on-failure
ports:
- "3003:4000"
networks:
- webnet
But if I run curl http://localhost:3003/api/healthcheck, I get curl: (56) Recv failure: Connection reset by peer, which is confusing to me since I don't have any restrictions on loopback or on forwarding to docker0.
The only idea I have is: forwarding from container's port 4000 to localhost 3003 is blocked, but I can't come up with how to allow it.
I too ran into an issue where I created a Docker network and only wanted to expose the DNS server and nginx services. Everything worked fine from outside the system, but from the host itself I had no access as long as the INPUT policy was DROP. Adding localhost and the exact IP addresses and ports to the INPUT chain had no effect.
The solution I found was to ACCEPT the Docker network bridge interface in the INPUT chain. By default Docker names this interface something like br-<id> (e.g. br-22153f050ac4). You can add that opaque name to your iptables config, or you can name the Docker network's bridge interface yourself and use the more deterministic name.
Assuming you created your network with:
docker network create -d bridge -o com.docker.network.bridge.name=webnet webnet
You should be able to allow localhost -> docker container with something like:
sudo iptables -A INPUT -i webnet -j ACCEPT
Assuming everything was successful, you should now be able to access the container via a localhost address (i.e. 127.0.0.1) and the docker container address (e.g. 172.X.Y.Z).
The easy answer is to publish the port only on the loopback interface, so you don't need to firewall it:
ports:
- "127.0.0.1:3003:4000"
For debugging the firewall rules, I'd put some logging in there before dropping the packets. Docker uses a lot more than just the docker0 bridge, and there are various nat/mangle rules, and proxy processes, involved in publishing ports.
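For example, rate-limited LOG rules appended just before the packets would fall through to the DROP policy (a sketch in the same rule-listing style as the ruleset above):

```
# Log what is about to be dropped, then watch the kernel log
# (dmesg -w or journalctl -kf) while reproducing the failure.
-A INPUT   -m limit --limit 5/min --limit-burst 10 -j LOG --log-prefix "INPUT drop: "
-A FORWARD -m limit --limit 5/min --limit-burst 10 -j LOG --log-prefix "FWD drop: "
```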

problems running a docker container behind nginx and/or firewall

I'm running docker behind nginx, with the registry container and my own container running a gunicorn django webapp.
The Django webapp runs fine outside the Docker container. However, as soon as I try to run it from within the container, the webapp fails with this message from nginx:
2018/03/20 15:39:30 [error] 14767#0: *360 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.38.181.123, server: app.ukrdc.nhs.uk, request: "POST /convert/pv-to-rda/ HTTP/1.1", upstream: "http://127.0.0.1:9300/convert/pv-to-rda/", host: "app.ukrdc.nhs.uk"
when I do a GET on the webapp.
The registry container works fine.
I've exposed the right port in the Dockerfile.
The run command is:
docker run -ti -p 9300:9300 ukrdc/ukrdc-webapi
I added the port to iptables.
(output from iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9300 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.20/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9300 -j ACCEPT)
The signs point to something being wrong with my container and/or firewall rules, but I'm not sure what. I think I'm doing the same as the registry container.
Running on CentOS 6.9 with Docker version 1.7.1, build 786b29d/1.7.1.
The answer is to run the Django app with:
exec gunicorn mysite.wsgi \
- -b 127.0.0.1:9300 \
+ -b 0.0.0.0:9300 \
--name ukrdc_django \
--workers 3 \
--log-level=info \
I had bound it to the local loopback address. Now it's bound to all addresses, and it works.
Try adding -P to the run command:
docker run -P <container>
That will automatically publish the exposed ports. Note the difference: exposing a port makes it available to other containers on the Docker network, whereas publishing the port makes it available to the host machine as well as to other containers on the network.
I think you're using EXPOSE when you really want a -P or -p flag on your docker run command, where "P" is for "publish". According to the Docker docs, EXPOSE is just for linking ports between containers, whereas docker run -P <container> or docker run -p 1234:1234/tcp <container> will actually make a port or ports available outside the container so nginx can reach it from the host machine. Another option is to run nginx in a container on the same network (there is an easy-to-use standard nginx container out there); then nginx could reach all of the exposed ports on the network, but you would still need to publish at least one of the nginx container's ports in that case.
Here's another SO post that helped me a lot with expose vs. publish:
Difference between "expose" and "publish" in docker
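To make the distinction concrete, a minimal sketch (the port is taken from the question; this is not the poster's actual Dockerfile):

```dockerfile
# EXPOSE only records the port as image metadata; it does not open
# anything on the host by itself.
EXPOSE 9300
```

At run time, docker run -p 9300:9300 ukrdc/ukrdc-webapi publishes the port explicitly, while docker run -P ukrdc/ukrdc-webapi publishes every EXPOSEd port on a random high host port.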
