Access remote network from within Docker container

I have a Docker host that is connected to a secondary network which is not on the default route. My problem is that I can't reach this network from within the Docker containers running on that host.
Primary IP: 189.69.77.21 (default route)
Secondary IP: 192.168.77.21
The routing table looks like this:
[root@mgmt]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0 189.69.77.1 0.0.0.0 UG 0 0 0 enp0s31f6
189.69.77.1 0.0.0.0 255.255.255.255 UH 0 0 0 enp0s31f6
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.77.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s31f6.4000
and iptables is untouched:
[root@mgmt]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
I start the Docker Container with the following:
docker run -d --restart=unless-stopped -p 127.0.0.1:3000:3000/tcp --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient
I can reach the container (via an NGINX proxy) over the network, but the container itself can only ping/reach the Docker host IP and not the other hosts.
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.21
PING 192.168.77.21 (192.168.77.21) 56(84) bytes of data.
64 bytes from 192.168.77.21: icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from 192.168.77.21: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.77.21: icmp_seq=3 ttl=64 time=0.079 ms
^C
--- 192.168.77.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.078/0.079/0.080/0.000 ms
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.40
PING 192.168.77.40 (192.168.77.40) 56(84) bytes of data.
^C
--- 192.168.77.40 ping statistics ---
240 packets transmitted, 0 received, 100% packet loss, time 239000ms
So my question is: how can I make the Docker container reach the hosts on that network? My goal is to have a running mongoclient via Docker that can be used to manage the MongoDB replica set that lives in that additional private network.

You can use host networking for the container. The container then uses the host's network stack directly, so it can reach both the host and the networks the host is connected to.
Here is the documentation:
https://docs.docker.com/network/host/
BR
Carlos
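For the setup in the question, a minimal sketch of that suggestion could look like the following (the -p mapping is dropped because published ports are ignored with host networking; the image and environment values are taken from the question):
docker run -d --restart=unless-stopped --network host \
  --name mongoclient \
  -e MONGOCLIENT_DEFAULT_CONNECTION_URL="mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0" \
  -e ROOT_URL=http://192.168.77.21/nosqlclient \
  mongoclient/mongoclient
With --network host the container shares the host's interfaces and routing table, so 192.168.77.0/24 is reachable exactly as it is from the host itself. The service then listens directly on the host (port 3000 in this setup), so the NGINX proxy target may need adjusting.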

Related

Docker Swarm: Service Load Balancing between nodes does not work [duplicate]

I have a small 1-manager, 3-worker cluster set up to pilot a few things. It is running swarm orchestration and is able to spin up services across the cluster from any stack YAML and serve the web apps through the ingress network. I've made no changes to the default yum installation of docker-ce; it is a vanilla installation with no configuration changes on any of the nodes.
There is, however, an issue with inter-service communication over other overlay networks.
I create a Docker overlay network testnet with the --attachable flag and attach an nginx container (named nginx1) to it on node-1 and a netshoot container (named netshoot1) to it on manager-1.
I can then ping nginx1 from netshoot1 and vice versa. I can observe these packet exchanges over tcpdump on both nodes.
# tcpdump -vvnn -i any src 10.1.72.70 and dst 10.1.72.71 and port 4789
00:20:39.302561 IP (tos 0x0, ttl 64, id 49791, offset 0, flags [none], proto UDP (17), length 134)
10.1.72.70.53237 > 10.1.72.71.4789: [udp sum ok] VXLAN, flags [I] (0x08), vni 4101
IP (tos 0x0, ttl 64, id 20598, offset 0, flags [DF], proto ICMP (1), length 84)
10.0.5.18 > 10.0.5.24: ICMP echo request, id 21429, seq 1, length 64
Here you can see netshoot1 (10.0.5.18) ping nginx1 (10.0.5.24) - echo successful.
However, if I then run # curl -v nginx1:80, the whole thing times out.
Using tcpdump, I can see the packets leave manager-1 node, but they never arrive on node-1.
00:22:22.809057 IP (tos 0x0, ttl 64, id 42866, offset 0, flags [none], proto UDP (17), length 110)
10.1.72.70.53764 > 10.1.72.71.4789: [bad udp cksum 0x5b97 -> 0x697d!] VXLAN, flags [I] (0x08), vni 4101
IP (tos 0x0, ttl 64, id 43409, offset 0, flags [DF], proto TCP (6), length 60)
10.0.5.18.53668 > 10.0.5.24.80: Flags [S], cksum 0x1e58 (incorrect -> 0x2c3e), seq 1616566654, win 28200, options [mss 1410,sack OK,TS val 913132903 ecr 0,nop,wscale 7], length 0
These are VMs running in an in-house datacenter on VMware. The networking team says the network firewall shouldn't be blocking or inspecting the packets, as the IPs are on the same subnet.
Is this an issue with docker configuration? Iptables?
OS: RHEL 8
Docker CE: 20.10.2
containerd: 1.4.3
IPTABLES on manager-1
Chain INPUT (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 9819K 2542M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 8 317 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 255
3 473 33064 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 0 0 DROP all -- * * 127.0.0.0/8 0.0.0.0/0
5 116 6192 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 351K 21M ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 source IP range 10.1.72.71-10.1.72.73 state NEW multiport dports 2377,7946
7 435 58400 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 source IP range 10.1.72.71-10.1.72.73 state NEW multiport dports 7946,4789
8 17142 1747K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 8 packets, 384 bytes)
num pkts bytes target prot opt in out source destination
1 14081 36M DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 14081 36M DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0
3 267K 995M DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
4 39782 121M ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
5 1598 95684 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
6 41470 717M ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 90279 23M ACCEPT all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 5 300 DOCKER all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
10 94041 134M ACCEPT all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- docker_gwbridge docker_gwbridge 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 11M packets, 2365M bytes)
num pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 1598 95684 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 41470 717M DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 93853 133M DOCKER-ISOLATION-STAGE-2 all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 267K 995M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 1033K 1699M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8502
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:8502
3 267K 995M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 135K 851M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
IPTABLES on node-1
Chain INPUT (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 6211K 3343M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 7 233 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 255
3 471 32891 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 0 0 DROP all -- * * 127.0.0.0/8 0.0.0.0/0
5 84 4504 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 /* ssh from anywhere */
6 26940 1616K ACCEPT tcp -- * * 10.1.72.70 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
7 31624 1897K ACCEPT tcp -- * * 10.1.72.72 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
8 30583 1835K ACCEPT tcp -- * * 10.1.72.73 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
9 432 58828 ACCEPT udp -- * * 10.1.72.70 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
10 10 1523 ACCEPT udp -- * * 10.1.72.72 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
11 7 1159 ACCEPT udp -- * * 10.1.72.73 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
12 17172 1749K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 19921 packets, 1648K bytes)
num pkts bytes target prot opt in out source destination
1 23299 22M DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 23299 22M DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0
3 787K 1473M DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
4 0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
5 0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
6 0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 386K 220M ACCEPT all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 0 0 DOCKER all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
10 402K 1254M ACCEPT all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- docker_gwbridge docker_gwbridge 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 8193K packets, 2659M bytes)
num pkts bytes target prot opt in out source destination
Chain DOCKER-INGRESS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8502
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:8502
3 787K 1473M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 792K 1474M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 402K 1254M DOCKER-ISOLATION-STAGE-2 all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 787K 1473M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 402K 1254M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
The issue was indeed the bad checksums on the outbound packets: the VMware network interface was dropping the packets because of them.
The solution was to disable checksum offloading, using ethtool:
# ethtool -K <interface> tx off
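To confirm the change actually took effect, you can list the interface's offload settings (the output includes a tx-checksumming line; feature names can vary slightly by driver):
# ethtool -k <interface> | grep -i checksum
Note that ethtool -K settings do not persist across reboots, so the change usually has to be reapplied at boot (e.g. via a systemd unit or the distribution's network configuration).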
I had the exact same problem (the only thing that was working in my overlay network was ping; everything else just disappeared). This thread saved me after days of pulling my hair out, so I thought I'd add my two cents.
This was also on VMware servers, running Ubuntu 22.04. My solution was to change the network interface type from vmxnet3 to a simple E1000E card, and suddenly everything just started working. So obviously there is something weird happening in vmxnet3. What baffles me is that this doesn't seem to be a bigger issue for more users; running a Docker swarm on VMware servers should be fairly common, right?
The same issue was solved for me without using ethtool, just by using endpoint_mode: dnsrr and host-mode port publishing.
Here are the changes I added in my compose file:
1- Publish the ports in host mode:
ports:
  - target: 2379
    published: 2379
    protocol: tcp
    mode: host
2- Set the endpoint mode:
deploy:
  endpoint_mode: dnsrr
3- Add a hostname:
hostname: <service_name>
Just in case anyone tried ethtool -K <iface name> tx off and it still does not work, try changing the MTU of your overlay network to something lower than the standard 1500.
For example:
docker network create -d overlay --attachable --opt com.docker.network.driver.mtu=1450 my-network

How do I prevent Docker from source NAT'ing traffic on Synology NAS

On a Synology NAS, it appears the default setup for docker/iptables is to source NAT traffic going to the container to the gateway IP. This fundamentally causes problems when the container needs to see the correct/real client IP. I do not see this issue when running on an ubuntu system, for example.
Let me walk you through what I'm seeing. On the Synology NAS, I'll run a simple nginx container:
docker run --rm -ti -p 80:80 nginx
(I have disabled the Synology NAS from using port 80)
I'll do a curl from my laptop and I see the following log line:
172.17.0.1 - - [05/May/2020:17:02:44 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
Hm... the client IP (172.17.0.1) is the docker0 interface on the NAS.
user@synology:~$ ifconfig docker0
docker0 Link encap:Ethernet HWaddr 02:42:B5:BE:C5:51
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
<snip>
When I first saw this, I was quite confused because I did not recall docker networking working this way. So I fired up my ubuntu VM and tried the same test (same docker run command from earlier).
The log line from my curl test off box:
172.16.207.1 - - [05/May/2020:17:12:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
In this case the docker0 IP was not the source of the traffic as seen by the container.
user@ubuntu-vm:~# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
<snip>
So it appears that the Docker network setup on a Synology NAS is different from a more standard Ubuntu deployment. Okay, neat. So how does one fix this?
This is where I'm struggling. Clearly something is happening with iptables. Running the same iptables -t nat -L -n command on both systems shows vastly different results.
user@synology:~$ iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DEFAULT_OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
DEFAULT_POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
Chain DEFAULT_OUTPUT (1 references)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DEFAULT_POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:80
Chain DOCKER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
user@ubuntu-vm:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:80
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
I do not really understand how these chains work or why they're different on different systems. I haven't changed any Docker-level settings on either system; these are default configs.
Has anyone run into this before? Is there a quick way to toggle this?
I've experienced this issue myself while using Pi-hole (described here) and this iptables rule seems to fix it:
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
Be aware that this is not permanent, so if you reboot the NAS you will have to apply it again.
Update: Here's a more "permanent" fix: https://gist.github.com/PedroLamas/db809a2b9112166da4a2dbf8e3a72ae9
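If you'd rather not follow the gist, a minimal sketch of making this survive reboots is a boot-triggered task (e.g. via Synology's Task Scheduler, run as root) that re-adds the rule once Docker's NAT chain exists; the chain name and rule are the ones from above:
#!/bin/sh
# Wait until Docker has created its NAT chain, then re-add the missing PREROUTING jump (sketch).
while ! iptables -t nat -L DOCKER -n >/dev/null 2>&1; do sleep 5; done
iptables -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER 2>/dev/null || \
  iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER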

Docker networking, baffled and puzzled

I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, just as Elasticsearch does.
Elasticsearch exposes its default ports 9200 and 9300; the Python application exposes port 5000. The network type used for Docker is a user-defined bridge network.
When I start both containers the application starts up nicely; both containers see each other by container name and communicate just fine.
But from the Docker host (Linux) it's not possible to connect to the exposed port 5000, so a simple curl http://localhost:5000/ results in a time-out. The Docker tips from this documentation did not solve this: https://docs.docker.com/network/bridge/
After a lot of struggling I tried something completely different: I tried connecting to the Python application from outside the Docker host. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and was served the application.
So that means the real problem is solved, because I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution for 90% of the network problems reported by others.)
But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.
I hope someone can shed some light on this; I'm trying to understand why this is happening.
Some relevant information
Docker host:
# ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
[snip]
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 1xx.1xx.199.134 netmask 255.255.255.0 broadcast 1xx.1xx.199.255
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e7f2f7a271b pplbase_api "flask run --host=0.…" 20 hours ago Up 19 hours 0.0.0.0:5000->5000/tcp pplbase_api_1
fdfa10b1ce99 elasticsearch:7.5.1 "/usr/local/bin/dock…" 21 hours ago Up 19 hours 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp pplbase_elastic_1
# docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
77127ce4b631 pplbase_pplbase bridge local
# iptables -L -n
[snip]
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9300
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9200
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Docker compose file:
version: '3'
services:
  api:
    build: .
    links:
      - elastic
    ports:
      - "5000:5000"
    networks:
      - pplbase
    environment:
      - ELASTIC_HOSTS=elastic localhost
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - FLASK_DEBUG=0
    tty: true
  elastic:
    image: "elasticsearch:7.5.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - pplbase
    environment:
      - discovery.type=single-node
    volumes:
      - ${PPLBASE_STORE}:/usr/share/elasticsearch/data
networks:
  pplbase:
    driver: bridge
After more digging, the riddle is getting bigger and bigger. When using netcat I can establish a connection:
Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
Checking with netstat when no clients are connected, I see:
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
While trying to connect from the Docker host, this is what I see:
tcp 0 1 172.20.0.1:56866 172.20.0.3:5000 SYN_SENT 27824/docker-proxy
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
tcp6 0 0 ::1:58900 ::1:5000 ESTABLISHED 31642/links
tcp6 592 0 ::1:5000 ::1:58900 ESTABLISHED 27824/docker-proxy
So I'm now suspecting some network voodoo on the Docker host.
The Flask instance is running at 0.0.0.0:5000.
Have you tried: curl http://0.0.0.0:5000/?
It might be that your host configuration maps localhost to 127.0.0.1 rather than 0.0.0.0.
So as I was working on this problem, slowly moving towards a solution, I found that my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes, the packets between the Docker bridge (not docker0, but the br- interface) and the container (veth) were being dropped by iptables. Adding a rule allowing traffic to flow between those interfaces resolved the problem.
In my case: sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT
Where 172.20.0.0/16 is the bridge network generated by Docker.
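For anyone who wants to reproduce the "log all dropped packets" step, a minimal sketch (assuming the drops happen when packets fall through to the end of the INPUT chain; adjust the chain and rule position to match your own ruleset):
# Log packets that are about to fall through the end of the INPUT chain
iptables -A INPUT -j LOG --log-prefix "INPUT-DROPPED: " --log-level 4
# Watch the kernel log for the matching entries
dmesg -w | grep INPUT-DROPPED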

How do I block a Docker container port with iptables?

I use docker service to set up a container network, and I just opened port 7053 for a target IP and exposed it to the host.
When I check iptables with 'iptables -nvL',
I see the FORWARD chain:
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP tcp -- * * 0.0.0.0/0 172.18.0.2 tcp dpt:7053
1680K 119M DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
1680K 119M DOCKER all -- * br-287ce7f19804 0.0.0.0/0 0.0.0.0/0
1680K 119M ACCEPT all -- * br-287ce7f19804 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
and the DOCKER chain:
Chain DOCKER (4 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.2 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.2 tcp dpt:7051
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.3 tcp dpt:2181
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.4 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.4 tcp dpt:7051
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.6 tcp dpt:7053
0 0 ACCEPT tcp -- !br-287ce7f19804 br-287ce7f19804 0.0.0.0/0 172.18.0.6
And I want to block the container 172.18.0.2 and its port 7053, so I use sudo iptables -I FORWARD -p tcp -d 172.18.0.2 --dport 7053 -j DROP.
But it doesn't work.
So, what should I do to block the target ip and port?
The following should work:
iptables -I DOCKER 1 -p tcp --dport 7053 -j DROP
This will insert the DROP rule before all the other rules in the DOCKER chain.
The following is a useful command as well:
iptables --list DOCKER -n --line-numbers
If you also add -v (verbose), you get more detail.
By now, you probably have your answer, but it may help others.
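On recent Docker versions there is also the DOCKER-USER chain, which Docker documents for user-managed rules; it is evaluated before Docker's own forwarding rules and is not rebuilt when the daemon restarts, so an equivalent sketch would be:
iptables -I DOCKER-USER -p tcp -d 172.18.0.2 --dport 7053 -j DROP
This is just an alternative placement for the same DROP rule.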

Block port from the outside except for Docker

I installed Jenkins via Docker on my server and assigned it to a specific domain (jenkins.mydomain.com), which works perfectly fine. But I can also reach Jenkins (and every other service in Docker) if I browse my domain with the service's port, for example: mydomain.com:8181
I've already tried a few things to block the port from the outside and make it only accessible via the domain, but no luck.
First I tried to block the port for the eth0 interface:
iptables -A INPUT -i eth0 -p tcp --destination-port 8181 -j DROP
But it didn't work: when I tried to reach Jenkins via the domain, I got a 503 error.
I also tried to block the port for every incoming request except Docker's IP. That didn't work either.
So how can I make the ports inaccessible from the outside but accessible for Docker?
iptables -L -n --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 DOCKER-ISOLATION all -- 0.0.0.0/0 0.0.0.0/0
2 DOCKER all -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
4 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
5 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
Chain DOCKER (1 references)
num target prot opt source destination
1 ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
2 ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:22
3 ACCEPT tcp -- 0.0.0.0/0 172.17.0.3 tcp dpt:8081
4 ACCEPT tcp -- 0.0.0.0/0 172.17.0.4 tcp dpt:50000
5 ACCEPT tcp -- 0.0.0.0/0 172.17.0.4 tcp dpt:8080
Chain DOCKER-ISOLATION (1 references)
num target prot opt source destination
1 RETURN all -- 0.0.0.0/0 0.0.0.0/0
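One approach worth sketching here (mirroring the very first post in this thread, not an accepted answer): publish the container's port only on the loopback interface, so only the local reverse proxy can reach it and the domain remains the only way in from outside. Assuming Jenkins was started with something like -p 8181:8080, that would become:
# Publish Jenkins only on localhost (image name and exact port mappings are assumptions based on the iptables listing above)
docker run -d --name jenkins \
  -p 127.0.0.1:8181:8080 \
  -p 127.0.0.1:50000:50000 \
  jenkins/jenkins:lts
The proxy on the host then forwards jenkins.mydomain.com to 127.0.0.1:8181, while mydomain.com:8181 is no longer reachable from outside.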
