Docker: Deny external access to port

I have two containers, started with the following docker run commands:
docker run --name openvpn \
-v /root/easyrsa_master:/etc/openvpn/easyrsa \
-v /root/ccd_master:/etc/openvpn/ccd \
-e OVPN_SERVER_NET='192.168.100.0' \
-e OVPN_SERVER_MASK='255.255.255.0' \
-p 7777:1194 \
-p 8080:8080 \
--net test \
--cap-add=NET_ADMIN \
openvpn:local
docker run --name ovpn_admin \
-v /root/easyrsa_master:/mnt/easyrsa \
-v /root/ccd_master:/mnt/ccd \
-e OVPN_CCD="True" \
-e OVPN_CCD_PATH="/mnt/ccd" \
-e EASYRSA_PATH="/mnt/easyrsa" \
-e OVPN_DEBUG="True" \
-e OVPN_VERBOSE="True" \
-e OVPN_NETWORK="192.168.100.0/24" \
-e OVPN_SERVER="<external_address>:7777:tcp" \
-e OVPN_INDEX_PATH="/mnt/easyrsa/pki/index.txt" \
--network="container:openvpn" \
ovpn-admin:local
The host is a cloud VM; here are its interface and routing table:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:b5:a3:90 brd ff:ff:ff:ff:ff:ff
inet 192.168.250.5/24 brd 192.168.250.255 scope global dynamic eth0
valid_lft 58754sec preferred_lft 58754sec
inet6 fe80::f816:3eff:feb5:a390/64 scope link
valid_lft forever preferred_lft forever
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.250.1 0.0.0.0 UG 0 0 0 eth0
169.254.169.254 192.168.250.1 255.255.255.255 UGH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-9919e04681fb
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-c3e87ce3f05f
172.20.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-60aff89f36e9
172.21.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-eee0ae05a7a6
172.22.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-5d087fed339d
192.168.250.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
The host has external access to the Internet, and both ports are mapped on it.
I need port 8080 to be accessible only from the 192.168.250.0/24 network, but when I run the containers it is reachable externally as well. How can I deal with this?
Binding with -p 127.0.0.1:8080:8080 doesn't work for me.

Solution 1
(as an Ansible iptables module task)
iptables: chain=DOCKER-USER in_interface=eth0 protocol=tcp destination_port=8080 source=!192.168.250.0/24 jump=DROP action=insert
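For reference, the equivalent raw iptables command should look roughly like this (a sketch: DOCKER-USER sits in the filter table's FORWARD path and is evaluated before Docker's own rules, which is why published-port traffic can be filtered here but not in INPUT):
# drop traffic to the published port unless it comes from the LAN
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 ! -s 192.168.250.0/24 -j DROP
Note that DOCKER-USER sees the packet after DNAT; that makes no difference here because the host and container ports are both 8080, but if they differed you would likely match the original port with -p tcp -m conntrack --ctorigdstport instead.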
Solution 2
Create a reverse proxy in the same Docker network.
Remove -p 8080:8080 from the openvpn container and let the proxy forward requests to it instead:
proxy_pass http://<host port>:8080;
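A minimal nginx sketch of that idea (assumptions on my part: the proxy runs as another container attached to the test network; since ovpn_admin shares the openvpn container's network namespace, the admin UI is reachable at the openvpn container name; the allow/deny pair is what enforces the LAN-only restriction):
server {
    listen 8080;
    # only the local 192.168.250.0/24 network may reach the admin UI
    allow 192.168.250.0/24;
    deny all;
    location / {
        proxy_pass http://openvpn:8080;
    }
}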

Related

Docker: Route Traffic from One Container to Another Container Through a VPN Container

I set up an SSTP client container (172.17.0.3) that communicates with an SSTP server container (172.17.0.2) via the ppp0 interface. All traffic from the SSTP client container is routed through its ppp0 interface, as seen using netstat on the SSTP client container (192.168.20.1 is the SSTP server container's ppp0 IP address):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.20.1 0.0.0.0 UG 0 0 0 ppp0
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
Now, I have an HTTP server container (172.17.0.4) running, and I want to use yet another client container (for example, a container that runs Apache Benchmark ab) to talk to the HTTP server container via the SSTP server container. To do so, I use --net=container:sstp-client on the ab client container so it uses the SSTP client container's network. However, the ab client container cannot seem to reach the HTTP server container, even though it is able to benchmark servers on the Internet (e.g., 8.8.8.8). As another example, if I do a traceroute from a container through the SSTP client container to an external host:
docker run -it --name alpine --net=container:sstp-client alpine ash
/ # traceroute -s 192.168.20.2 google.com
traceroute to google.com (142.250.65.206) from 192.168.20.2, 30 hops max, 46 byte packets
1 192.168.20.1 (192.168.20.1) 1.088 ms 1.006 ms 1.077 ms
2 * * *
3 10.0.2.2 (10.0.2.2) 1.710 ms 1.695 ms 0.977 ms
4 * * *
...
I am able to finally reach Google.
But if I traceroute to my HTTP server container:
/ # traceroute -s 192.168.20.2 172.17.0.4
traceroute to 172.17.0.4 (172.17.0.4) from 192.168.20.2, 30 hops max, 46 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
...
It fails.
My suspicion is that the routing configuration on the SSTP server container is incorrect, but I am not sure how I can fix that to make it work. My goal is to be able to reach both the outside world and the internal containers. I've messed around with both iptables and route quite a bit, but still can't make it work. This is my current configuration of the SSTP server container:
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.255 UGH 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.0 UG 0 0 0 eth0
192.168.20.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
/ # iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i ppp+ -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
I've seen many online solutions on how to route via a VPN container to the Internet, but nothing about routing to other containers. I'm very much a newbie in this area. Any suggestions welcome! Thank you.
I had to take a similar approach to this problem, though in my case it runs inside GitLab CI.
Start the VPN container and find its IP:
docker run -it -d --rm \
--name vpn-client-${SOME_VARIABLE} \
--privileged \
--net gitlab-runners \
-e VPNADDR=... \
-e VPNUSER=... \
-e VPNPASS=... \
auchandirect/forticlient || true && sleep 2
export VPN_CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-${SOME_VARIABLE})
Next you have to pass that variable into your application container and add a new route, e.g. with docker-compose:
version: "3"
services:
test-server:
image: alpine
variables:
ROUTE_IP: "${VPN_CONTAINER_IP}"
command: "ip route add <CIDR_TARGET_SUBNET> via ${ROUTE_IP}"
networks:
default:
name: gitlab-runners
external: true
But be advised: the target subnet above HAS to be in CIDR format (e.g. 192.168.1.0/24), and both containers have to be in the SAME Docker network.
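One caveat worth adding: ip route add inside a container requires the NET_ADMIN capability, which plain docker-compose does not grant by default. A plain-compose sketch with that capability added (an assumption on my part, reusing the placeholders from above; compose substitutes ${VPN_CONTAINER_IP} from the shell where the export above was run):
version: "3"
services:
  test-server:
    image: alpine
    cap_add:
      - NET_ADMIN   # needed to run "ip route add" inside the container
    command: sh -c "ip route add <CIDR_TARGET_SUBNET> via ${VPN_CONTAINER_IP}"
networks:
  default:
    name: gitlab-runners
    external: true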

Docker exposed port unavailable on browser, though the former run works

I use Docker on VirtualBox and I am trying to run one more image.
The first run I did is:
$ docker run -Pit --name rpython -p 8888:8888 -p 8787:8787 -p 6006:6006 \
  -p 8022:22 -v /c/Users/lenovo:/home/dockeruser/hosthome \
  datascienceschool/rpython
$ docker port rpython
8888/tcp -> 0.0.0.0:8888
22/tcp -> 0.0.0.0:8022
27017/tcp -> 0.0.0.0:32781
28017/tcp -> 0.0.0.0:32780
5432/tcp -> 0.0.0.0:32783
6006/tcp -> 0.0.0.0:6006
6379/tcp -> 0.0.0.0:32782
8787/tcp -> 0.0.0.0:8787
It works fine; I can reach those ports from the local browser.
But the second run:
docker run -Pit -i -t -p 8888:8888 -p 8787:8787 -p 8022:22 -p 3306:3306 \
  --name ubuntu jhleeroot/dave_ubuntu:1.0.1
$ docker port ubuntu
22/tcp -> 0.0.0.0:8022
3306/tcp -> 0.0.0.0:3306
8787/tcp -> 0.0.0.0:8787
8888/tcp -> 0.0.0.0:8888
It doesn't work.
root@90dd963fe685:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Any idea about this?
Your first command ran an image (datascienceschool/rpython) that presumably kicked off a Python app listening on the ports you were testing.
Your second command ran a different image (jhleeroot/dave_ubuntu:1.0.1), and from the pasted output you are only running a bash shell in it. Bash isn't listening on those ports inside the container, so Docker forwards to a closed port and the browser sees a refused connection.
Docker doesn't run a server on your published ports; it relies on you to run one inside the container and simply forwards the requests to it.
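A quick way to confirm this, assuming python3 is available in the jhleeroot/dave_ubuntu image (an assumption on my part): start something inside the container that actually listens on one of the published ports, then retry from the browser.
# listen on one of the published ports inside the running container
docker exec -it ubuntu python3 -m http.server 8888
# http://<docker-host-ip>:8888 should now respond while the server is running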

Iptables with Docker - All created networks accessible via lan, routing issue?

I would like to make the Docker host a gateway that routes all traffic in the 172.0.0.0/8 range, so that all containers are reachable via static routes from the local LAN.
For example, take a look at the following table.
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 bond0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-8cb984474cf3
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-08751d4f00ac
172.20.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-205529b1f9cc
172.21.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-f199a191f679
172.22.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-67ac401705aa
172.23.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-ec7ad4f839dd
172.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-f7af361c29fb
172.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker_gwbridge
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
bond0 is the LAN interface in question.
From the host I can ping each individual machine created by Docker, thanks to the routing table.
I set up the static route on a Windows box: route add 172.0.0.0 MASK 255.0.0.0 192.168.2.3
I checked that IP forwarding is enabled on the Linux Docker host.
I enabled masquerading on the bond0 interface: iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE
I chose an example interface, br-08751d4f00ac, which is 172.19.0.0/16, and set up forwarding:
sudo iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE
sudo iptables -A FORWARD -i bond0 -o br-08751d4f00ac -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i br-08751d4f00ac -o bond0 -j ACCEPT
However, I still can't ping from the Windows machine on the same network as bond0.
Tracing route to 172.19.0.2 over a maximum of 30 hops
1 <1 ms <1 ms <1 ms 192.168.2.3
2
Okay, I was overthinking it!
I solved it with
sudo iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE
sudo iptables -A FORWARD -i bond0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i docker0 -o bond0 -j ACCEPT
That solved the problem: I can now access all IP addresses allocated inside the Docker containers from my LAN :)
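If the other user-defined bridges (the br-* interfaces in the routing table above) should be reachable from the LAN in the same way, the same pair of FORWARD rules can simply be repeated per bridge; a sketch using the interface names listed earlier:
for br in docker0 br-8cb984474cf3 br-08751d4f00ac br-205529b1f9cc br-f199a191f679 \
          br-67ac401705aa br-ec7ad4f839dd br-f7af361c29fb docker_gwbridge; do
  # the same rule pair as above, applied to each Docker bridge
  sudo iptables -A FORWARD -i bond0 -o "$br" -m state --state RELATED,ESTABLISHED -j ACCEPT
  sudo iptables -A FORWARD -i "$br" -o bond0 -j ACCEPT
done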

Kubernetes flannel network does not work as expected

I am encountering a very strange Kubernetes network issue with a kubeadm installation using flannel. Could you please help?
I have 3 nodes, 1 master and 2 minion nodes, and 4 pods are running.
List all nodes (I added a # column to simplify the description):
[root@snap460c04 ~]# kubectl get nodes
# NAME STATUS AGE
1 snap460c03 Ready 11h
2 snap460c04 Ready,master 11h
3 snap460c06 Ready 11h
List all pods (again with a # column added):
[root@snap460c04 ~]# kubectl get pods -o wide -n eium1
# NAME READY STATUS RESTARTS AGE IP NODE Node#
1 demo-1229769353-7gf70 1/1 Running 0 10h 192.168.2.4 snap460c03 1
2 demo-1229769353-93xwm 1/1 Running 0 10h 192.168.1.4 snap460c06 3
3 demo-1229769353-kxzs9 1/1 Running 0 10h 192.168.1.5 snap460c06 3
4 demo-1229769353-ljvtg 1/1 Running 0 10h 192.168.2.3 snap460c03 1
I did 2 tests, one for nodes->pods and another for pods->pods.
In the nodes->pods test, the result is:
Test 1: Node => POD Test
From Node #1 (c03) => Why can it ping only pods on the local node?
Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: NOK (ping 192.168.1.4)
Ping POD #3: NOK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)
From Node #2 (c04) => All pods are remote; why can't it ping the pods on node #3?
Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: NOK (ping 192.168.1.4)
Ping POD #3: NOK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)
From Node #3 (c06) => This is the expected result
Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: OK (ping 192.168.1.4)
Ping POD #3: OK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)
Test 2: POD => POD Test => Why can a pod ping only pods on its local node?
From POD #1 @ Node #1
Ping POD #1: OK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.2.4)
Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.1.4)
Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.1.5)
Ping POD #4: OK (kubectl -n eium1 exec demo-1229769353-7gf70 ping 192.168.2.3)
From POD #2 @ Node #3
Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.2.4)
Ping POD #2: OK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.1.4)
Ping POD #3: OK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.1.5)
Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-93xwm ping 192.168.2.3)
From POD #3 @ Node #3
Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.2.4)
Ping POD #2: OK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.1.4)
Ping POD #3: OK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.1.5)
Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9 ping 192.168.2.3)
From POD #4 @ Node #1
Ping POD #1: OK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.2.4)
Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.1.4)
Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.1.5)
Ping POD #4: OK (kubectl -n eium1 exec demo-1229769353-ljvtg ping 192.168.2.3)
Environment information
K8s version
[root@snap460c04 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"58b7c16a52c03e4a849874602be42ee71afdcab1", GitTreeState:"clean", BuildDate:"2016-12-12T23:31:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
flannel pods
[root@snap460c04 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-flannel-ds-03w6l 2/2 Running 0 11h 15.114.116.126 snap460c04
kube-flannel-ds-fdgdh 2/2 Running 0 11h 15.114.116.125 snap460c03
kube-flannel-ds-xnzx3 2/2 Running 0 11h 15.114.116.128 snap460c06
System PODS
[root@snap460c04 ~]# kubectl get pods -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE
dummy-2088944543-kcj44 1/1 Running 0 11h 15.114.116.126 snap460c04
etcd-snap460c04 1/1 Running 19 11h 15.114.116.126 snap460c04
kube-apiserver-snap460c04 1/1 Running 0 11h 15.114.116.126 snap460c04
kube-controller-manager-snap460c04 1/1 Running 0 11h 15.114.116.126 snap460c04
kube-discovery-1769846148-5x4gr 1/1 Running 0 11h 15.114.116.126 snap460c04
kube-dns-2924299975-9tdl9 4/4 Running 0 11h 192.168.0.2 snap460c04
kube-proxy-7wtr4 1/1 Running 0 11h 15.114.116.128 snap460c06
kube-proxy-j0h4g 1/1 Running 0 11h 15.114.116.126 snap460c04
kube-proxy-knbrl 1/1 Running 0 11h 15.114.116.125 snap460c03
kube-scheduler-snap460c04 1/1 Running 0 11h 15.114.116.126 snap460c04
kubernetes-dashboard-3203831700-1nw59 1/1 Running 0 10h 192.168.0.4 snap460c04
Flannel was installed following this guide:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Network information for node 1 (c03)
[root@snap460c03 ~]# iptables-save
# Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017
*nat
:PREROUTING ACCEPT [1:78]
:INPUT ACCEPT [1:78]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2MEKZI7PJUEHR67T - [0:0]
:KUBE-SEP-3OT3I6HGM4K7SHGI - [0:0]
:KUBE-SEP-6TVSO4B75FMUOZPV - [0:0]
:KUBE-SEP-6YIOEPRBG6LZYDNQ - [0:0]
:KUBE-SEP-A6J4YW3AMR2ZVZMA - [0:0]
:KUBE-SEP-DBP5C3QJN36XNYPX - [0:0]
:KUBE-SEP-ES7Q53Y6P2YLIO4O - [0:0]
:KUBE-SEP-FWJIKOY3NRVP7HUX - [0:0]
:KUBE-SEP-JTN4UBVS7OG5RONX - [0:0]
:KUBE-SEP-PNOYUP2SIIHRG34N - [0:0]
:KUBE-SEP-UPZX2EM3TRFH2ASL - [0:0]
:KUBE-SEP-X7MGMJMV5H5T4NJN - [0:0]
:KUBE-SEP-ZZLC6ELJT43VDXYQ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-5J5TVDDOSFKU7A7D - [0:0]
:KUBE-SVC-5RKFNKIUXDFI3AVK - [0:0]
:KUBE-SVC-EP4VGANCYXDST444 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-KOBH2JYY2L2SF2XK - [0:0]
:KUBE-SVC-NGBEVGRJNPASKNGR - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-OL65KRZ5QEUS2RPN - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8158 -j MASQUERADE
-A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8159 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-SVC-5RKFNKIUXDFI3AVK
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-SVC-OL65KRZ5QEUS2RPN
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-SVC-KOBH2JYY2L2SF2XK
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-SVC-NGBEVGRJNPASKNGR
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-SVC-5J5TVDDOSFKU7A7D
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2MEKZI7PJUEHR67T -s 192.168.0.4/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-2MEKZI7PJUEHR67T -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 192.168.0.4:9090
-A KUBE-SEP-3OT3I6HGM4K7SHGI -s 192.168.1.5/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-3OT3I6HGM4K7SHGI -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.5:9901
-A KUBE-SEP-6TVSO4B75FMUOZPV -s 15.114.116.128/32 -m comment --comment "eium1/ems:db" -j KUBE-MARK-MASQ
-A KUBE-SEP-6TVSO4B75FMUOZPV -p tcp -m comment --comment "eium1/ems:db" -m tcp -j DNAT --to-destination 15.114.116.128:3306
-A KUBE-SEP-6YIOEPRBG6LZYDNQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:vnc" -j KUBE-MARK-MASQ
-A KUBE-SEP-6YIOEPRBG6LZYDNQ -p tcp -m comment --comment "eium1/ems:vnc" -m tcp -j DNAT --to-destination 15.114.116.128:5911
-A KUBE-SEP-A6J4YW3AMR2ZVZMA -s 192.168.1.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-A6J4YW3AMR2ZVZMA -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.4:9901
-A KUBE-SEP-DBP5C3QJN36XNYPX -s 15.114.116.128/32 -m comment --comment "eium1/ems:ior" -j KUBE-MARK-MASQ
-A KUBE-SEP-DBP5C3QJN36XNYPX -p tcp -m comment --comment "eium1/ems:ior" -m tcp -j DNAT --to-destination 15.114.116.128:8158
-A KUBE-SEP-ES7Q53Y6P2YLIO4O -s 15.114.116.128/32 -m comment --comment "eium1/ems:web" -j KUBE-MARK-MASQ
-A KUBE-SEP-ES7Q53Y6P2YLIO4O -p tcp -m comment --comment "eium1/ems:web" -m tcp -j DNAT --to-destination 15.114.116.128:8159
-A KUBE-SEP-FWJIKOY3NRVP7HUX -s 15.114.116.126/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-FWJIKOY3NRVP7HUX -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 15.114.116.126:6443
-A KUBE-SEP-JTN4UBVS7OG5RONX -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-JTN4UBVS7OG5RONX -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 192.168.0.2:53
-A KUBE-SEP-PNOYUP2SIIHRG34N -s 192.168.2.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-PNOYUP2SIIHRG34N -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.4:9901
-A KUBE-SEP-UPZX2EM3TRFH2ASL -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UPZX2EM3TRFH2ASL -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 192.168.0.2:53
-A KUBE-SEP-X7MGMJMV5H5T4NJN -s 192.168.2.3/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-X7MGMJMV5H5T4NJN -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.3:9901
-A KUBE-SEP-ZZLC6ELJT43VDXYQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:repo" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZZLC6ELJT43VDXYQ -p tcp -m comment --comment "eium1/ems:repo" -m tcp -j DNAT --to-destination 15.114.116.128:8300
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:db cluster IP" -m tcp --dport 3306 -j KUBE-SVC-5RKFNKIUXDFI3AVK
-A KUBE-SERVICES -d 10.102.162.2/32 -p tcp -m comment --comment "eium1/demo:ro cluster IP" -m tcp --dport 9901 -j KUBE-SVC-EP4VGANCYXDST444
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.108.36.183/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:repo cluster IP" -m tcp --dport 8300 -j KUBE-SVC-OL65KRZ5QEUS2RPN
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:ior cluster IP" -m tcp --dport 8158 -j KUBE-SVC-KOBH2JYY2L2SF2XK
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:web cluster IP" -m tcp --dport 8159 -j KUBE-SVC-NGBEVGRJNPASKNGR
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:vnc cluster IP" -m tcp --dport 5911 -j KUBE-SVC-5J5TVDDOSFKU7A7D
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-5J5TVDDOSFKU7A7D -m comment --comment "eium1/ems:vnc" -j KUBE-SEP-6YIOEPRBG6LZYDNQ
-A KUBE-SVC-5RKFNKIUXDFI3AVK -m comment --comment "eium1/ems:db" -j KUBE-SEP-6TVSO4B75FMUOZPV
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-A6J4YW3AMR2ZVZMA
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-3OT3I6HGM4K7SHGI
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X7MGMJMV5H5T4NJN
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -j KUBE-SEP-PNOYUP2SIIHRG34N
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UPZX2EM3TRFH2ASL
-A KUBE-SVC-KOBH2JYY2L2SF2XK -m comment --comment "eium1/ems:ior" -j KUBE-SEP-DBP5C3QJN36XNYPX
-A KUBE-SVC-NGBEVGRJNPASKNGR -m comment --comment "eium1/ems:web" -j KUBE-SEP-ES7Q53Y6P2YLIO4O
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -j KUBE-SEP-FWJIKOY3NRVP7HUX
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-FWJIKOY3NRVP7HUX
-A KUBE-SVC-OL65KRZ5QEUS2RPN -m comment --comment "eium1/ems:repo" -j KUBE-SEP-ZZLC6ELJT43VDXYQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-JTN4UBVS7OG5RONX
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-2MEKZI7PJUEHR67T
COMMIT
# Completed on Wed Feb 22 18:01:12 2017
# Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017
*filter
:INPUT ACCEPT [147:180978]
:FORWARD ACCEPT [16:1344]
:OUTPUT ACCEPT [20:11774]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Wed Feb 22 18:01:12 2017
[root@snap460c03 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff
inet 15.114.116.125/22 brd 15.114.119.255 scope global enp2s0f0
valid_lft forever preferred_lft forever
inet6 2002:109d:45fd:b:9a4b:e1ff:fe62:1400/64 scope global dynamic
valid_lft 6703sec preferred_lft 1303sec
inet6 fec0::b:9a4b:e1ff:fe62:1400/64 scope site dynamic
valid_lft 6703sec preferred_lft 1303sec
inet6 fe80::9a4b:e1ff:fe62:1400/64 scope link
valid_lft forever preferred_lft forever
3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff
4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:01 brd ff:ff:ff:ff:ff:ff
5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff
6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff
7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff
8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff
9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::5484:7aff:fefe:9799/64 scope link
valid_lft forever preferred_lft forever
1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::858:c0ff:fea8:201/64 scope link
valid_lft forever preferred_lft forever
1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff
inet6 fe80::342b:f9ff:fecf:1daa/64 scope link
valid_lft forever preferred_lft forever
1825: veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff
inet6 fe80::ac79:85ff:fe50:bda/64 scope link
valid_lft forever preferred_lft forever
916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff
inet6 fe80::d85a:e9ff:fef2:6b0a/64 scope link
valid_lft forever preferred_lft forever
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::543c:47ff:fee1:5ac0/64 scope link
valid_lft forever preferred_lft forever
921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::8c8a:81ff:fe67:a692/64 scope link
valid_lft forever preferred_lft forever
[root@snap460c03 ~]#
[root@snap460c03 ~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
RX: bytes packets errors dropped overrun mcast
1652461277938 1256303971 0 0 0 0
TX: bytes packets errors dropped carrier collsns
1652461277938 1256303971 0 0 0 0
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
273943938058 464934981 0 4475811 0 4994783
TX: bytes packets errors dropped carrier collsns
112001303439 313492490 0 0 0 0
3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:01 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
0 0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
0 0 0 0 0 0
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
326660431 4780762 0 0 0 0
TX: bytes packets errors dropped carrier collsns
3574619827 5529921 0 0 0 0
1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT
link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
12473828 150176 0 0 0 0
TX: bytes packets errors dropped carrier collsns
116444 2577 0 0 0 0
1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT
link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
14089148 145722 0 0 0 0
TX: bytes packets errors dropped carrier collsns
7131026 74713 0 0 0 0
1825: veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT
link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
14647882 151667 0 0 0 0
TX: bytes packets errors dropped carrier collsns
7149198 75141 0 0 0 0
916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
66752218 734347 0 0 0 0
TX: bytes packets errors dropped carrier collsns
43439443 733394 0 0 0 0
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
66755200 734343 0 0 0 0
TX: bytes packets errors dropped carrier collsns
43434663 733264 0 0 0 0
921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT
link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
82703829917 59818766 0 0 0 0
TX: bytes packets errors dropped carrier collsns
940475554 9717823 0 898 0 0
[root@snap460c03 ~]#
[root@snap460c03 ~]# ip route
default via 15.114.116.1 dev enp2s0f0 proto static metric 100
15.114.116.0/22 dev enp2s0f0 proto kernel scope link src 15.114.116.125 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
192.168.0.0/16 dev flannel.1
192.168.2.0/24 dev cni0 proto kernel scope link src 192.168.2.1
What parameters did you provide for kubeadm?
If you want to use flannel as the pod network, specify --pod-network-cidr 10.244.0.0/16 when initializing the cluster if you're using the flannel daemonset manifest (the one linked above). However, please note that this is not required for any pod network other than flannel.
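For reference, that flag is passed at cluster initialization time; a sketch (re-initializing is disruptive, so this is not something to run casually on an existing cluster):
kubeadm init --pod-network-cidr=10.244.0.0/16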
Execute these commands on every node:
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
sysctl -w net.bridge.bridge-nf-call-iptables=1
I had a similar issue where the flanneld service was active but I couldn't ping pods located on another node. I found out that the firewall on the nodes was up (firewalld.service), so I turned it off and was then able to ping pods regardless of which node I was on. Hope this helps.
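For completeness, the commands for that would be roughly the following (a sketch; a less drastic alternative should be to open flannel's VXLAN traffic, UDP port 8472 by default, between the nodes instead of disabling the firewall entirely):
systemctl stop firewalld
systemctl disable firewalld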

docker connecting to host tunnel from container

I would like to connect from inside my Docker container to a Postgres DB that is reached through a tunnel on the host. On the host I have a tunnel pointing to the DB host:
host$ sudo netstat -tulpen | grep 555
tcp 0 0 127.0.0.1:5555 0.0.0.0:* LISTEN 1000 535901 18361/ssh
tcp6 0 0 ::1:5555 :::* LISTEN 1000 535900 18361/ssh
The tunnel is set up with:
host$ ps -aux | grep 18361
ubuntu 9619 0.0 0.0 10432 628 pts/0 S+ 10:11 0:00 grep --color=auto 18361
ubuntu 18361 0.0 0.0 46652 1420 ? Ss Nov16 0:00 ssh -i /home/ubuntu/.ssh/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -f -N -L 5555:localhost:5432 user@remotehost
and from the host I can launch psql commands:
host$ psql -h localhost -p 5555 --username user db_name
psql (9.3.15, server 9.5.4)
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
Type "help" for help.
db_name=#
As I am using bridge network mode (I cannot use host mode because Docker is not correctly exposing the container's ports to the host, see: https://github.com/docker/compose/issues/3442), I read that I have to use the docker0 bridge IP:
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6c:01:5c:a5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6cff:fe01:5ca5/64 scope link
Which in this case would be 172.17.0.1
However when I go inside the container:
host$ docker exec -ti container_name /bin/bash
and when I try to connect, I get:
container# psql -h 172.17.0.1 -p 5555
psql: could not connect to server: Connection refused
Is the server running on host "172.17.0.1" and accepting
TCP/IP connections on port 5555?
Is there anything I am missing?
You missed bind_address, so your tunnel is currently bound to 127.0.0.1 only.
When setting up your tunnel, you have to add the bind_address parameter:
-L [bind_address:]port:host:hostport
E.g.
sudo ssh -f -N -L 172.17.0.1:5555:localhost:5432 user@remotehost
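After restarting the tunnel with that bind address, a quick check from inside the container (reusing the container_name and db_name placeholders from the question, and assuming psql is installed in the container image):
container# psql -h 172.17.0.1 -p 5555 --username user db_name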
