I have a CentOS server with two static IP addresses (192.168.3.100 and 192.168.3.101) on the same NIC, and two containers running on it with the port mappings below. Both containers use Docker's default 'bridge' network.
192.168.3.100:80->80/tcp container1
192.168.3.101:80->80/tcp container2
From the host, I can run curl 192.168.3.100 or curl 192.168.3.101 and it works fine. From the host or from the containers, I can run curl 172.17.0.2 or curl 172.17.0.3 and it works fine.
But I cannot run curl 192.168.3.100 or curl 192.168.3.101 from either of these containers; it fails with the error "No route to host". I can ping those addresses, though.
What am I missing here? I want to avoid using a 192.x Docker network, as I do not want to tie up that address space for a single machine. Using Docker 1.12.6.
Output of the iptables reject rules (iptables -S | grep -i reject):
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
Thanks for your input
If you are able to ping but not able to curl, and you get "no route to host", it usually means your packets are being rejected by the firewall.
Check the rules using sudo iptables -S or sudo iptables -L -n. If you see a REJECT rule (for example one using icmp-host-prohibited), that is the problem.
If you are not worried about your existing iptables rules and are OK with clearing them, stop the Docker service and run the following:
$ iptables -F
$ iptables -X
$ iptables -t nat -F
$ iptables -t nat -X
$ iptables -t mangle -F
$ iptables -t mangle -X
This clears all the tables. Then start the Docker service and run the containers again.
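Put together, the sequence looks something like this (a sketch: systemctl assumes a systemd-based host such as CentOS 7, and the iptables-save line is just an optional precaution so the old rules can be restored later if needed):
sudo iptables-save > /root/iptables.before-flush   # optional backup of the current rules
sudo systemctl stop docker
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo systemctl start docker
docker start container1 container2                 # bring the published-port containers back up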
To build a certain image I need to create a tunnel and make docker use this tunnel as a socks5 proxy (to use the proxy for DNS too).
So now I've got several problems:
1. How to make docker use the proxy that is on the host?
2. How to make docker use the proxy to get the base image?
3. How to make docker use the proxy for the RUN instruction?
4. How to make docker use the proxy for the ADD instruction?
Since I spent all day researching this, here are the answers.
I will leave the partially incomplete/wrong/old answer below, since I set up a new system today and needed to figure out all of the questions again because some parts of the old answer didn't make sense anymore.
1. Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case). If your host OS is Linux, you can use localhost:port by passing the additional --network=host parameter to docker build, as mentioned in another answer.
2. and 3. Just put this content (change the IP and port if needed) into ~/.docker/config.json (note that the protocol is socks5h):
{
"proxies":
{
"default":
{
"httpProxy": "socks5h://172.17.0.1:3128",
// or "httpProxy": "socks5h://localhost:3128", with --network=host
"httpsProxy": "socks5h://172.17.0.1:3128",
"noProxy": ""
}
}
}
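For example (a hedged sketch, not from the original answers): assuming the SOCKS5 tunnel is an SSH dynamic forward listening on port 3128 on the host, and that the image name is a placeholder, a build using the host network would look like this:
# create the SOCKS5 tunnel on the host (assumed setup; adjust user/host/port)
ssh -f -N -D 3128 user@proxyhost
# with --network=host the build containers share the host's network namespace,
# so the proxies in ~/.docker/config.json can point at localhost:3128
docker build --network=host -t myimage .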
4. It seems that the ADD instruction is executed with the (proxy) environment variables of the host, ignoring those in config.json. To make things more complicated, since the daemon usually runs as root, only the root user's environment variables are picked up. It is even more complicated because the host, of course, needs to use localhost as the proxy host. And the cherry on top: the protocol needs to be socks5 (without the h at the end) in this case, for whatever reason.
In my case, since I switched to WSL2 and run Docker inside WSL2 (starting the dockerd daemon manually), I just export the needed environment variable before calling dockerd:
#!/bin/bash
# Start Docker daemon automatically when logging in if not running.
RUNNING=`ps aux | grep dockerd | grep -v grep`
if [ -z "$RUNNING" ]; then
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY no_proxy NO_PROXY
export http_proxy=socks5h://localhost:30000
sudo -E dockerd > /dev/null 2>&1 &
disown
fi
If you have the "regular" setup on a Linux machine, you could use the old answer to 4. below, but beware that there you probably also need to use localhost.
Incomplete/wrong/old answer starting here
1. Using localhost:port does not work. Until this issue is resolved, you need to use the IP address of your docker0 network interface (172.17.0.1 in my case).
2. This answer applies to question 3 too. Just put this content (change the IP and port if needed) into ~/.docker/config.json (note that the protocol is socks5h):
{
"proxies":
{
"default":
{
"httpProxy": "socks5h://172.17.0.1:3128",
"httpsProxy": "socks5h://172.17.0.1:3128",
"noProxy": ""
}
}
}
4. I do not know why (edit: now I know; it's because dockerd runs as root and does not pick up proxy environment variables from the regular user), but for the ADD instruction the former settings do not apply (names do not get resolved through the proxy). We need to put this content into /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=socks5://172.17.0.1:3128/"
then
sudo systemctl daemon-reload
sudo systemctl restart docker
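A quick way to confirm that the daemon actually picked up the override (just a sanity check, not part of the original answer):
sudo systemctl show --property=Environment docker
# expected to include something like: Environment=HTTP_PROXY=socks5://172.17.0.1:3128/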
(This is just wrong/unneeded with answer 2.) Also, for package managers like yum to be able to update packages during the build, you need to pass the environment variable like this:
docker build --build-arg http_proxy=socks5://172.17.0.1:3128 .
Using localhost:port works if you add the --network=host option to the docker build ... command.
To route a container through a local SOCKS5 proxy (that is, to send all of its Internet traffic through the proxy), the container must be able to reach the host machine.
To reach the host machine from a container on Linux, put --network="host" in the run command:
docker run --name test --network="host" --env http_proxy="socks5://127.0.0.1:1080" --env https_proxy="socks5://127.0.0.1:1080" nginx sh -c "curl ifconfig.io"
For Mac and Windows users, use host.docker.internal:local_port instead:
docker run --name test --env http_proxy="socks5://host.docker.internal:1080" --env https_proxy="socks5://host.docker.internal:1080" nginx sh -c "curl ifconfig.io"
sudo iptables -t nat -N REDSOCKS
sudo iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
sudo iptables -t nat -A REDSOCKS -d 172.17.0.0/12 -j RETURN
sudo iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 5000
sudo iptables -t nat -A OUTPUT -p tcp -o docker0 -j REDSOCKS
sudo iptables -t nat -A PREROUTING -p tcp -i docker0 -j REDSOCKS
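The rules above redirect TCP traffic crossing docker0 to a local listener on port 5000. The answer does not show what is listening there; typically it is a transparent redirector such as redsocks, whose configuration might look roughly like the sketch below (local_port matches the REDIRECT target above, while the ip/port of the upstream SOCKS5 proxy are placeholders):
base {
    log_info = on;
    daemon = on;
    redirector = iptables;
}
redsocks {
    local_ip = 0.0.0.0;
    local_port = 5000;
    ip = 127.0.0.1;
    port = 1080;
    type = socks5;
}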
TL;DR: I added some iptables rules to a Docker container to limit Internet access. It works fine, except that now I am unable to access the container's app from the host machine, although I can from within the container itself.
I have a container running a webapp. This container uses MySQL, Redis, etc. Every dependency is remote, accessible by an IP address on a particular port.
So, for instance, MySQL is accessible at the IP 13.255.255.255.
What I want is to allow the container to reach only the MySQL IP address and nothing else. There are a few curl requests originating from within the code which I do not want to go beyond my host machine's network.
I've added an entrypoint script to the image which adds some iptables rules inside the container.
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255 #For mysql access
#iptables -P FORWARD DROP # we aren't a router
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.1 -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -A INPUT -s 0.0.0.0 -j ACCEPT
iptables -A INPUT -s ::1 -j ACCEPT
iptables -A OUTPUT -d ::1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
When I run the container and do
docker-compose exec <container-name> curl http://google.com
I get the following in response:
curl: (6) Could not resolve host: google.com
which is expected. Now, when I do
docker-compose exec <container-name> curl http://0.0.0.0
I get the following response:
"Hello World!"
Which again is expected. However, when I do curl http://0.0.0.0 from my host machine, this is the output:
* Trying 0.0.0.0...
* TCP_NODELAY set
* Connected to 0.0.0.0 (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: 0.0.0.0
> User-Agent: curl/7.62.0
> Accept: */*
> // Hangs here
So, I am unable to connect to http://0.0.0.0 from the host machine, but I can connect from inside the container.
I was careless and overlooked the iptables rules I posted myself.
The wrong set of rules:
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255 #For mysql access
#iptables -P FORWARD DROP # we aren't a router
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.1 -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -A INPUT -s 0.0.0.0 -j ACCEPT
iptables -A INPUT -s ::1 -j ACCEPT
iptables -A OUTPUT -d ::1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP
You can see, in the above section:
There is no OUTPUT ACCEPT rule for the IP 0.0.0.0.
There is no OUTPUT ACCEPT rule for 192.168.x.x, which is the IP address of my docker0 network interface.
The container and the host machine communicate over the docker0 network interface when the container is launched in bridge network mode (which happened to be my case).
Another thing I noticed: I didn't need the 0.0.0.0 or 127.0.0.1 rules at all. Since the entrypoint script adds the iptables rules inside the container, we may never want to access the webapp from within the container itself. So why bother with 127.0.0.1?
All in all, here is what I did:
1. Got the IP address of the docker0 network: ip addr show docker0. It output 192.168.144.1/20.
2. Added 192.168.0.0/16 to the ACCEPT rules in my entrypoint iptables rules, which covers the IP address from point 1.
3. Now I can access my container from outside.
My iptables rules now look like this:
ALLOWED_CIDR1=172.0.0.0/16
ALLOWED_CIDR2=13.255.255.255
ALLOWED_CIDR3=192.168.0.0/16
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -P INPUT DROP # Drop everything we don't accept
iptables -P OUTPUT DROP
iptables -A INPUT -s $ALLOWED_CIDR1 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR2 -j ACCEPT
iptables -A INPUT -s $ALLOWED_CIDR3 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR1 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR2 -j ACCEPT
iptables -A OUTPUT -d $ALLOWED_CIDR3 -j ACCEPT
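For reference, a hedged sketch of how such an entrypoint could be wired up (the file names are hypothetical, and running iptables inside a container requires the NET_ADMIN capability, which the original post does not mention):
#!/bin/sh
# entrypoint.sh (hypothetical name)
# Start the container with `docker run --cap-add=NET_ADMIN ...`
# or `cap_add: ["NET_ADMIN"]` in docker-compose.yml, otherwise iptables will fail.
set -e
sh /usr/local/bin/firewall-rules.sh   # the iptables rules shown above (hypothetical path)
exec "$@"                             # hand off to the webapp process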
I'm running Docker behind nginx, with the registry container and my own container running a Gunicorn Django webapp.
The Django webapp runs fine outside the Docker container. However, as soon as I try to run it from within the container, it fails with this message from nginx:
2018/03/20 15:39:30 [error] 14767#0: *360 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.38.181.123, server: app.ukrdc.nhs.uk, request: "POST /convert/pv-to-rda/ HTTP/1.1", upstream: "http://127.0.0.1:9300/convert/pv-to-rda/", host: "app.ukrdc.nhs.uk"
when I do a GET on the webapp.
The registry container works fine.
I've exposed the right port in the Dockerfile.
The run command is:
docker run -ti -p 9300:9300 ukrdc/ukrdc-webapi
I've added the port to iptables.
(Output from iptables -S:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 9300 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.20/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9300 -j ACCEPT)
The signs point to something being wrong with my container and/or firewall rules, but I'm not sure what. I think I'm doing the same as the registry container.
Running on Centos 6.9 with Docker version 1.7.1, build 786b29d/1.7.1
The answer is:
Run the Django app with:
exec gunicorn mysite.wsgi \
- -b 127.0.0.1:9300 \
+ -b 0.0.0.0:9300 \
--name ukrdc_django \
--workers 3 \
--log-level=info \
I'd bound it to the local loopback address. It's now bound to all addresses and works.
Try adding -P to the run command:
docker run -P <container>
That will automatically publish the exposed ports. Note the difference: exposing a port makes it available to other containers on the Docker network, whereas publishing the port makes it available to the host machine as well as to other containers on the network.
I think you're using EXPOSE when you really want a -P or -p flag on your docker run command, where "P" is for "publish". According to the Docker docs, EXPOSE only makes ports available to other containers, whereas docker run -P <container> or docker run -p 1234:1234/tcp <container> will actually make a port or ports available outside the container so nginx can reach them from the host machine. Another option is to run nginx in a container on the same network (there is an easy-to-use standard nginx image out there); nginx could then reach all of the exposed ports on that network, but you would still need to publish at least one of the nginx container's ports.
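A hedged sketch of that containerized-nginx alternative (the network and container names are placeholders, and it assumes a Docker version that supports user-defined networks): on such a network, containers resolve each other by name, so nginx can reach the webapp's exposed port without that port being published to the host.
docker network create webnet
docker run -d --name webapp --network webnet ukrdc/ukrdc-webapi
docker run -d --name nginx --network webnet -p 80:80 nginx
# nginx inside the container can now proxy_pass to http://webapp:9300,
# while only nginx's port 80 is published on the host.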
Here's another SO post that helped me a lot with expose vs. publish:
Difference between "expose" and "publish" in docker
I am trying Rancher (v1.2.3) and I am not able to run the agent on the nodes.
1) I've installed the Rancher server on one node with the following command:
sudo docker run -d --restart=unless-stopped -p 80:8080 rancher/server:v1.2.3
2) Then I go to Add Host, and Rancher gives me the command to add it.
3) I go to Node 1 and run the following:
sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.1.2 http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
4) The command line returns:
docker: Error response from daemon: containerd: container did not start before the specified
I don't know what is going wrong. I think the container cannot reach the Rancher server, but if I do a
curl http://xxx/v1/scripts/D822D98E34752ABCDE:1890908200000:RASZERSE
I can access it. In addition, this is my iptables:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N CATTLE_FORWARD
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -j CATTLE_FORWARD
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o docker_gwbridge -j DOCKER
-A FORWARD -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A CATTLE_FORWARD -m mark --mark 0x668a0 -j ACCEPT
-A DOCKER-ISOLATION -i docker_gwbridge -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -j RETURN
Ubuntu v14.04
Docker v1.12.3
It would be greatly appreciated if you could help me.
Thanks
The full error is presumably "containerd: container did not start before the specified timeout", which means Docker isn't starting the container. Rebooting the host will probably help.
If the node where you start rancher/server:v1.2.3 and the node where you start the agent are the same, then there could be an internal port-access issue.
Rancher uses UDP services/ports such as 500 for internal communication. These must be permitted, for example by adding them to firewalld zones. Issues can occur if you use managed networking.
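For example (a hedged sketch; UDP 500/4500 are the IPsec ports Rancher's managed networking typically uses, so check the docs for your version):
# on Ubuntu with ufw:
sudo ufw allow 500/udp
sudo ufw allow 4500/udp
# or on a firewalld-based host:
sudo firewall-cmd --permanent --add-port=500/udp --add-port=4500/udp
sudo firewall-cmd --reload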
When the Docker daemon starts, it adds a couple of rules to iptables.
When all rules are deleted via iptables -F, I have to stop and restart the Docker daemon to re-create Docker's rules.
Is there a way to have Docker re-add its additional rules?
The best way is to restart your Docker service; it will then re-add the Docker rules to iptables (on deb-based systems: sudo service docker restart).
However, if you just want to restore those rules without restarting the service, I saved mine so you can inspect them, adjust them to work for you, and then load them using sudo iptables-restore ./iptables-docker-ports.backup.
Edit and save this to ./iptables-docker-ports.backup:
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*nat
:PREROUTING ACCEPT [18:1080]
:INPUT ACCEPT [18:1080]
:OUTPUT ACCEPT [22:1550]
:POSTROUTING ACCEPT [22:1550]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.1/32 -d 172.17.0.1/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 3001 -j DNAT --to-destination 172.17.0.1:80
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
# Generated by iptables-save v1.4.21 on Thu Apr 30 20:48:42 2015
*filter
:INPUT ACCEPT [495:53218]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [480:89217]
:DOCKER - [0:0]
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Thu Apr 30 20:48:42 2015
If you're running Ubuntu on the host, you can use the iptables-save utility to save the iptables rules to a file after you start the Docker daemon. Then, once you flush the old rules, you can simply restore the original Docker rules using iptables-restore and the saved rules file.
If you don't want to restore all the old iptables rules, you can edit the saved rules file to keep only the ones you need.
If you're running another operating system, you might find a similar alternative.
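The workflow described above boils down to something like this (the file path is arbitrary):
# after the Docker daemon has started and added its rules:
sudo iptables-save > /root/iptables-with-docker.rules
# ... later, after the rules have been flushed ...
sudo iptables-restore < /root/iptables-with-docker.rules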
In its default configuration, when running in bridge mode, Docker manipulates iptables (a lot) unless you disable that behaviour (in which case you would have to configure your own NAT rules).
The default network-related configuration is probably the following, although the config file /etc/docker/daemon.json might not exist (and, as of now, there is no way to print the effective configuration):
{
"userland-proxy": true,
"iptables": true,
"ip-forward": true,
"ip-masq": true,
"ipv6": false
}
After the Docker daemon starts, it injects the following rules (in the filter table):
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
To understand what Docker does, here is a list of Docker-generated iptables rules with a short explanation. If you flush the iptables rules while the Docker daemon and some containers are running, you might break access to existing containers (but probably won't break anything; more on this below).
After service docker restart, all default rules are injected into the firewall (you can check by running iptables-save, or iptables -S and iptables -S -t nat). Let's assume you want to keep your containers running and only regenerate the missing NAT rules.
docker ps gives us the list of running containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
865569da8d36 nginx "nginx -g 'daemon of…" 17 hours ago Up 17 hours 0.0.0.0:4564->80/tcp, 0.0.0.0:32237->80/tcp jovial_sammet
And from docker inspect we can obtain the port mappings:
$ docker inspect -f '{{.NetworkSettings.Ports}}' 865569da8d36
map[80/tcp:[{0.0.0.0 4564} {0.0.0.0 32237}]]
Now we just need the internal IP address of the Docker container:
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' 865569da8d36
172.17.0.2
Now, using some bash/jq, we can generate the dynamic iptables rules:
$ bash docker_iptables --noop
iptables -A DOCKER -d 172.17.0.2
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 4564 -j DNAT --to-destination 172.17.0.2:80
iptables -A DOCKER -d 172.17.0.2
iptables -t nat -A DOCKER ! -i docker0 -p tcp -m tcp --dport 32237 -j DNAT --to-destination 172.17.0.2:80
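The docker_iptables helper used above is not shown in the answer; here is a hedged sketch of what such a script could look like, assuming containers on the default bridge and jq installed (run it as root, or pass --noop to only print the rules):
#!/bin/bash
# docker_iptables (hypothetical sketch): regenerate per-container forwarding/DNAT rules
# from `docker inspect`. Pass --noop to print the rules instead of applying them.
NOOP=""
[ "$1" = "--noop" ] && NOOP="echo"

for id in $(docker ps -q); do
    ip=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' "$id")
    # .Ports is a map like {"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"4564"}]}
    docker inspect "$id" |
      jq -r '.[0].NetworkSettings.Ports | to_entries[] | select(.value != null)
             | .key as $k | .value[] | "\(.HostPort) \($k)"' |
      while read -r host_port proto_port; do
          cport=${proto_port%%/*}   # container-side port, e.g. 80
          proto=${proto_port##*/}   # protocol, e.g. tcp
          $NOOP iptables -A DOCKER -d "$ip/32" ! -i docker0 -o docker0 \
                -p "$proto" -m "$proto" --dport "$cport" -j ACCEPT
          $NOOP iptables -t nat -A DOCKER ! -i docker0 -p "$proto" -m "$proto" \
                --dport "$host_port" -j DNAT --to-destination "$ip:$cport"
      done
done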
So the answer to the question is: no, not without stopping all containers. But the rules can be re-added manually. (Note: this script doesn't cover all Docker functionality, e.g. exposing a service running on a network other than the default Docker bridge.)
When you start a Docker container with published ports (-p):
docker run --rm -d -p 32237:80 -p 4564:80 nginx
Docker also spins up docker-proxy. What's that?
$ netstat -tulpn | grep docker-proxy
tcp 0 0 0.0.0.0:32237 0.0.0.0:* LISTEN 20487/docker-proxy
tcp 0 0 0.0.0.0:4564 0.0.0.0:* LISTEN 20479/docker-proxy
The Linux kernel does not allow routing of loopback traffic, and therefore it is not possible to apply netfilter NAT rules to packets originating from 127.0.0.0/8; docker-proxy is generally considered an inelegant workaround for this kind of problem.
When you restore iptables without the Docker rules, the container ports might still be reachable via docker-proxy. However, this can hurt networking performance, as docker-proxy is not as fast as the kernel's netfilter.
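If you want to avoid docker-proxy altogether, you can disable the userland proxy in the daemon configuration (a hedged example; merge it with whatever is already in your /etc/docker/daemon.json, and note that published ports will then rely purely on the iptables DNAT rules):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userland-proxy": false
}
EOF
sudo systemctl restart docker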