I have a small 1-manager, 3-worker cluster set up to pilot a few things. It runs Swarm orchestration, can spin up services across the cluster from any stack YAML, and serves the web apps through the ingress network. I've made no changes to the default yum installation of docker-ce: a vanilla installation with no configuration changes on any of the nodes.
There is, however, an issue with inter-service communication over other overlay networks.
I create a Docker overlay network, testnet, with the --attachable flag and attach an nginx container (named nginx1) to it on node-1 and a netshoot container (named netshoot1) to it on manager-1.
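For reference, the setup steps look roughly like this (a sketch; the exact image tags are assumptions, with nicolaka/netshoot standing in for the netshoot container):
# on manager-1: create the attachable overlay
docker network create -d overlay --attachable testnet
# on node-1: attach an nginx container
docker run -d --name nginx1 --network testnet nginx
# on manager-1: attach a netshoot container to test from
docker run -d --name netshoot1 --network testnet nicolaka/netshoot sleep infinity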
I can then ping nginx1 from netshoot1 and vice versa, and I can observe these packet exchanges with tcpdump on both nodes.
# tcpdump -vvnn -i any src 10.1.72.70 and dst 10.1.72.71 and port 4789
00:20:39.302561 IP (tos 0x0, ttl 64, id 49791, offset 0, flags [none], proto UDP (17), length 134)
10.1.72.70.53237 > 10.1.72.71.4789: [udp sum ok] VXLAN, flags [I] (0x08), vni 4101
IP (tos 0x0, ttl 64, id 20598, offset 0, flags [DF], proto ICMP (1), length 84)
10.0.5.18 > 10.0.5.24: ICMP echo request, id 21429, seq 1, length 64
Here you can see netshoot1 (10.0.5.18) pinging nginx1 (10.0.5.24): the echo succeeds.
However, if I then run # curl -v nginx1:80, the whole thing times out.
Using tcpdump, I can see the packets leave manager-1, but they never arrive on node-1.
00:22:22.809057 IP (tos 0x0, ttl 64, id 42866, offset 0, flags [none], proto UDP (17), length 110)
10.1.72.70.53764 > 10.1.72.71.4789: [bad udp cksum 0x5b97 -> 0x697d!] VXLAN, flags [I] (0x08), vni 4101
IP (tos 0x0, ttl 64, id 43409, offset 0, flags [DF], proto TCP (6), length 60)
10.0.5.18.53668 > 10.0.5.24.80: Flags [S], cksum 0x1e58 (incorrect -> 0x2c3e), seq 1616566654, win 28200, options [mss 1410,sack OK,TS val 913132903 ecr 0,nop,wscale 7], length 0
These are VMs running in an in-house datacenter on VMware. The networking team says the network firewall shouldn't be blocking or inspecting this traffic, since the IPs are on the same subnet.
Is this an issue with the Docker configuration? With iptables?
OS: RHEL 8
Docker CE: 20.10.2
containerd: 1.4.3
IPTABLES on manager-1
Chain INPUT (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 9819K 2542M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 8 317 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 255
3 473 33064 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 0 0 DROP all -- * * 127.0.0.0/8 0.0.0.0/0
5 116 6192 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 351K 21M ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 source IP range 10.1.72.71-10.1.72.73 state NEW multiport dports 2377,7946
7 435 58400 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 source IP range 10.1.72.71-10.1.72.73 state NEW multiport dports 7946,4789
8 17142 1747K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 8 packets, 384 bytes)
num pkts bytes target prot opt in out source destination
1 14081 36M DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 14081 36M DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0
3 267K 995M DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
4 39782 121M ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
5 1598 95684 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
6 41470 717M ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 90279 23M ACCEPT all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 5 300 DOCKER all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
10 94041 134M ACCEPT all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- docker_gwbridge docker_gwbridge 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 11M packets, 2365M bytes)
num pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
1 1598 95684 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 41470 717M DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 93853 133M DOCKER-ISOLATION-STAGE-2 all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 267K 995M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 1033K 1699M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8502
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:8502
3 267K 995M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 135K 851M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
IPTABLES on node-1
Chain INPUT (policy DROP 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 6211K 3343M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 7 233 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 255
3 471 32891 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 0 0 DROP all -- * * 127.0.0.0/8 0.0.0.0/0
5 84 4504 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 /* ssh from anywhere */
6 26940 1616K ACCEPT tcp -- * * 10.1.72.70 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
7 31624 1897K ACCEPT tcp -- * * 10.1.72.72 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
8 30583 1835K ACCEPT tcp -- * * 10.1.72.73 0.0.0.0/0 state NEW multiport dports 7946 /* docker swarm cluster comm- manager,node2,3 */
9 432 58828 ACCEPT udp -- * * 10.1.72.70 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
10 10 1523 ACCEPT udp -- * * 10.1.72.72 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
11 7 1159 ACCEPT udp -- * * 10.1.72.73 0.0.0.0/0 state NEW multiport dports 7946,4789 /* docker swarm cluster comm and overlay netw- manager,node2,3 */
12 17172 1749K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 19921 packets, 1648K bytes)
num pkts bytes target prot opt in out source destination
1 23299 22M DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 23299 22M DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0
3 787K 1473M DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
4 0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
5 0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
6 0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 386K 220M ACCEPT all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 0 0 DOCKER all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
10 402K 1254M ACCEPT all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- docker_gwbridge docker_gwbridge 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 8193K packets, 2659M bytes)
num pkts bytes target prot opt in out source destination
Chain DOCKER-INGRESS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8502
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:8502
3 787K 1473M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 792K 1474M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 402K 1254M DOCKER-ISOLATION-STAGE-2 all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 787K 1473M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 402K 1254M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
The issue was indeed the bad checksums on the outbound packets: the VMware network interface was dropping them. The solution was to disable checksum offloading using ethtool:
# ethtool -K <interface> tx off
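Note that ethtool settings do not survive a reboot, so the workaround has to be reapplied at boot. A minimal sketch using a systemd oneshot unit, assuming the interface is named ens192 (substitute your own):
# /etc/systemd/system/disable-tx-offload.service
[Unit]
Description=Disable TX checksum offloading (vmxnet3/VXLAN workaround)
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K ens192 tx off
[Install]
WantedBy=multi-user.target
Enable it with: systemctl enable --now disable-tx-offload.service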
I had the exact same problem (the only thing that worked in my overlay network was ping; everything else just disappeared). This thread saved me after days of pulling my hair out, so I thought I'd add my two cents.
This was also on VMware servers, running Ubuntu 22.04. My solution was to change the network interface type from vmxnet3 to a simple E1000E card, and suddenly everything just started working. So obviously something weird is happening in vmxnet3. What baffles me is that this doesn't seem to be a bigger issue for more users; running a Docker swarm on VMware servers should be fairly normal, right?
The same issue was solved for me without using ethtool, just by publishing the ports in host mode and setting endpoint_mode. Here are the changes I added in my compose file:
1- publish the port in host mode:
ports:
  - target: 2379
    published: 2379
    protocol: tcp
    mode: host
2- use DNS round-robin instead of a virtual IP:
deploy:
  endpoint_mode: dnsrr
3- add an explicit hostname:
hostname: <service_name>
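Put together, the relevant part of the service definition might look like this (a sketch; the service name and image are placeholders inferred from port 2379):
services:
  etcd:
    image: quay.io/coreos/etcd   # placeholder image
    hostname: etcd               # 3- explicit hostname
    ports:
      - target: 2379
        published: 2379
        protocol: tcp
        mode: host               # 1- host-mode publishing bypasses the ingress mesh
    deploy:
      endpoint_mode: dnsrr       # 2- DNS round-robin instead of a VIP
Both host-mode publishing and dnsrr avoid the routing-mesh/VIP path, which seems to be where the traffic was getting lost in this case.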
Just in case anyone has tried ethtool -K <iface name> tx off and it still does not work: try changing the MTU of your overlay network to something lower than the standard 1500, so the VXLAN encapsulation overhead still fits inside the physical MTU.
For example:
docker network create -d overlay --attachable --opt com.docker.network.driver.mtu=1450 my-network
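To confirm that MTU is actually the problem, you can probe with the don't-fragment bit set from a container on the overlay (a sketch; 10.0.5.24 is the nginx1 address from the captures above, and the image is an assumption):
# 1422 bytes of ICMP payload + 28 bytes of headers = 1450 bytes on the wire
docker run --rm --network my-network nicolaka/netshoot ping -M do -s 1422 -c 3 10.0.5.24
If 1422 gets through but 1472 (a full 1500-byte packet) does not, the overlay MTU needs to come down.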
I have inserted an iptables rule to block access to my containers from the internet (following the official Docker docs), but now my containers cannot access the internet either.
I run a container on a dedicated server like this:
docker run --name mycontainer --network network1 -d -p 10000:80 someImage
I can access that container from my home network...:
telnet servername.com 10000
... even though I have limited access using ufw:
ufw status
Status: active
To Action From
-- ------ ----
22 ALLOW Anywhere
80/tcp ALLOW Anywhere
...
It makes no mention of port 10000, and I initially denied all incoming ports.
According to the docs at https://docs.docker.com/network/iptables/, when starting a container with -p, Docker automatically exposes the port through the firewall by manipulating iptables, using rules that are evaluated before ufw's rules.
Those docs then go on to suggest blocking incoming requests like this, in the section titled "Restrict connections to the Docker host":
iptables -I DOCKER-USER -i eno ! -s 192.168.1.1 -j DROP
I got the interface name by running route:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default static-ip-182-1 0.0.0.0 UG 0 0 0 eno
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-ef37f7b34afa
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-979cf8868fcd
182.133.43.627 0.0.0.0 255.255.255.192 U 0 0 0 eno
It works: I can no longer telnet to port 10000. (Surprisingly, I didn't need to reload or restart anything.)
Unfortunately, I can no longer access the internet from inside a container. The dedicated server itself can still ping google.com, but my container cannot:
docker exec mycontainer ping google.com
ping: unknown host
That was working before I inserted the iptables rule, but it doesn't work any more.
Question 1: what do I have to change in ufw / iptables so that my containers can access the internet again (outgoing), while my containers still cannot be reached from the internet (incoming)?
Question 2: what do I have to change so that the iptables rule survives a reboot? I noticed that after a reboot, I can again telnet to port 10000.
UPDATE:
docker network ls
NETWORK ID NAME DRIVER SCOPE
a1e2c7cdbc65 bridge bridge local
218e121af9cd host host local
979cf8868fcd network1 bridge local
cee02cfd1dba none null local
ef37f7b34afa network2 bridge local
I noticed that /etc/resolv.conf is different depending upon whether I check on a container inside a network or one with the default network:
docker run -it --rm --network network1 alpine cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
docker run -it --rm alpine cat /etc/resolv.conf
nameserver 80.237.128.56
nameserver 80.237.128.57
Those two nameservers are the ones that my dedicated server uses.
ufw show raw | grep DROP
Chain INPUT (policy DROP 0 packets, 0 bytes)
Chain FORWARD (policy DROP 0 packets, 0 bytes)
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
14262 1201304 DROP all -- eno * !127.0.0.1 0.0.0.0/0
173 37057 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
427 35720 DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy DROP 0 packets, 0 bytes)
Chain FORWARD (policy DROP 0 packets, 0 bytes)
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0 ctstate INVALID
0 0 DROP all * * ::/0 ::/0 rt type:0
0 0 DROP all * * ::/0 ::/0
0 0 DROP all * * ::/0 ::/0
A little more detail around those drops, and around Docker generally:
Chain DOCKER (3 references)
pkts bytes target prot opt in out source destination
2 120 ACCEPT tcp -- !br-979cf8868fcd br-979cf8868fcd 0.0.0.0/0 172.19.0.2 tcp dpt:3306
1 60 ACCEPT tcp -- !br-ef37f7b34afa br-ef37f7b34afa 0.0.0.0/0 172.18.0.2 tcp dpt:18088
0 0 ACCEPT tcp -- !br-979cf8868fcd br-979cf8868fcd 0.0.0.0/0 172.19.0.3 tcp dpt:80
0 0 ACCEPT tcp -- !br-ef37f7b34afa br-ef37f7b34afa 0.0.0.0/0 172.18.0.3 tcp dpt:3306
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
19 1164 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
14533 1228562 DOCKER-ISOLATION-STAGE-2 all -- br-ef37f7b34afa !br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
168 14646 DOCKER-ISOLATION-STAGE-2 all -- br-979cf8868fcd !br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
25893 4430799 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (3 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-ef37f7b34afa 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * br-979cf8868fcd 0.0.0.0/0 0.0.0.0/0
14720 1244372 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
14465 1218356 DROP all -- eno2 * !127.0.0.1 0.0.0.0/0
25893 4430799 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
ufw show raw | grep REJECT # contains ip addresses from fail2ban
0 0 REJECT all -- * * 85.202.169.48 0.0.0.0/0 reject-with icmp-port-unreachable
25 1904 REJECT all -- * * 112.85.42.128 0.0.0.0/0 reject-with icmp-port-unreachable
19 1664 REJECT all -- * * 61.177.172.89 0.0.0.0/0 reject-with icmp-port-unreachable
23 1884 REJECT all -- * * 112.85.42.74 0.0.0.0/0 reject-with icmp-port-unreachable
21 1764 REJECT all -- * * 112.85.42.15 0.0.0.0/0 reject-with icmp-port-unreachable
23 1884 REJECT all -- * * 112.85.42.87 0.0.0.0/0 reject-with icmp-port-unreachable
22 1748 REJECT all -- * * 122.194.229.54 0.0.0.0/0 reject-with icmp-port-unreachable
19 1644 REJECT all -- * * 122.194.229.45 0.0.0.0/0 reject-with icmp-port-unreachable
22 1844 REJECT all -- * * 218.92.0.221 0.0.0.0/0 reject-with icmp-port-unreachable
18 1104 REJECT all -- * * 112.85.42.88 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all * * ::/0 ::/0 reject-with icmp6-port-unreachable
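For what it's worth, a hedged sketch of the usual fix for both questions (not verified on this exact box): the DOCKER-USER DROP is evaluated for forwarded traffic in both directions, so it also discards the reply packets of connections the containers themselves initiate, which is why outbound access and DNS broke. Allowing established/related traffic above the DROP restores outgoing access without opening anything inbound:
# let replies to container-initiated connections back in, ahead of the DROP
iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
For surviving reboots, the rule has to be reloaded at boot, e.g. via the iptables-persistent package or a ufw after.rules entry.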
I have a Docker host that is connected to a second network that is not on the default route. My problem is that I can't reach this network from within the Docker containers on the host.
Primary IP: 189.69.77.21 (default route)
Secondary IP: 192.168.77.21
Routing is like the following:
[root@mgmt]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 189.69.77.1 0.0.0.0 UG 0 0 0 enp0s31f6
189.69.77.1 0.0.0.0 255.255.255.255 UH 0 0 0 enp0s31f6
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.77.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s31f6.4000
and iptables is untouched:
[root@mgmt]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
I start the Docker container with the following:
docker run -d --restart=unless-stopped -p 127.0.0.1:3000:3000/tcp --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient
I can reach the container (via an NGINX proxy) over the network, but the container itself can only ping/reach the Docker host's IP and not the other hosts.
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.21
PING 192.168.77.21 (192.168.77.21) 56(84) bytes of data.
64 bytes from 192.168.77.21: icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from 192.168.77.21: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.77.21: icmp_seq=3 ttl=64 time=0.079 ms
^C
--- 192.168.77.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.078/0.079/0.080/0.000 ms
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.40
PING 192.168.77.40 (192.168.77.40) 56(84) bytes of data.
^C
--- 192.168.77.40 ping statistics ---
240 packets transmitted, 0 received, 100% packet loss, time 239000ms
So my question is: how can I make the Docker container reach the hosts on that network? My goal is to have a running mongoclient via Docker that can be used to manage the MongoDB replica set in that additional private network.
You can use host networking for the container. The container then shares the host's network stack, so it can reach everything on the host's networks, and you can still reach the container.
Here is the documentation:
https://docs.docker.com/network/host/
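A minimal sketch of that change, based on the docker run from the question (note that --network host ignores -p, so the port mapping is dropped and the app listens on the host's port 3000 directly):
docker run -d --restart=unless-stopped --network host --name mongoclient \
  -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 \
  -e ROOT_URL=http://192.168.77.21/nosqlclient \
  mongoclient/mongoclient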
BR
Carlos
Since we are a small team, we want to run JetBrains' Hub, YouTrack, Upsource and TeamCity as Docker containers (all on the same machine for now). Docker is running on Photon OS 2.0 on ESXi 6.7. Nginx in another container acts as a reverse proxy, so all services are reachable under their own domain names, on port 80 for now...
I got all five services running and can access them in a browser. However, connecting YouTrack, Upsource and TeamCity to Hub is a challenge. Each of them asks for the Hub URL, verifies it, and then asks for permission to access Hub.
The Problem:
Hub URL: http://hub.teamtools.mydomain.com -> the container cannot access it at that address, and verification fails with a timeout.
Hub URL: http://172.18.0.3:8080 -> the container can access Hub on the internal Docker network, but it then shows a pop-up that tries to load a confirmation page from Hub on that internal IP, which of course fails in the browser. (I tried copying the URL from the pop-up into a new window and adjusting it there as a hack, but that does not work.)
Questions:
How can I link YouTrack, Upsource and TeamCity to Hub? For the confirmation process to work, the Docker containers need to be able to reach each other via an external IP/domain name.
Does anything speak against having all four team tools on the same machine to get started, and separating them later as demand grows?
Configuration so far:
The containers have been turned into services like so:
/etc/systemd/system/docker.nginx.service
[Unit]
Description=Nginx DNS proxy
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=/usr/bin/docker network create --subnet=172.18.0.0/16 dockerNet
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jwilder/nginx-proxy
ExecStart=/usr/bin/docker run --rm --name %n \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--net dockerNet --ip 172.18.0.2 \
-p 80:80 \
jwilder/nginx-proxy
[Install]
WantedBy=multi-user.target
/etc/systemd/system/docker.hub.service
[Unit]
Description=JetBrains Hub Service
After=docker.nginx-proxy.service
Requires=docker.nginx-proxy.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jetbrains/hub:2018.2.9635
ExecStart=/usr/bin/docker run --rm --name %n \
-v /opt/hub/data:/opt/hub/data \
-v /opt/hub/conf:/opt/hub/conf \
-v /opt/hub/logs:/opt/hub/logs \
-v /opt/hub/backups:/opt/hub/backups \
--net dockerNet --ip 172.18.0.3 \
-p 8010:8080 \
--expose 8080 \
-e VIRTUAL_PORT=8080 \
-e VIRTUAL_HOST=hub,teamtools.mydomain.com,hub.teamtools.mydomain.com \
jetbrains/hub:2018.2.9635
[Install]
WantedBy=multi-user.target
... and so on. Since I'm still trying things out, the ports are mapped on the host and exposed so nginx-proxy can pick them up. I also added static IPs to the containers, hoping this would help with my problem.
Running those services results in:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ba8ed89b832 jetbrains/teamcity-server "/run-services.sh" 12 hours ago Up 12 hours 0.0.0.0:8111->8111/tcp docker.teamcity.service
5c819c48cbcc jetbrains/upsource:2018.1.357 "/bin/bash /run.sh" 12 hours ago Up 12 hours 0.0.0.0:8030->8080/tcp docker.upsource.service
cf9dcd1b534c jetbrains/youtrack:2018.2.42223 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8020->8080/tcp docker.youtrack.service
de86c3e1f2e2 jetbrains/hub:2018.2.9635 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8010->8080/tcp docker.hub.service
9df9cb44e485 jwilder/nginx-proxy "/app/docker-entry..." 14 hours ago Up 14 hours 0.0.0.0:80->80/tcp docker.nginx-proxy.service
Additional Info:
I did consider that this could be a firewall issue, and this post seems to suggest the same thing:
https://forums.docker.com/t/access-docker-container-from-inside-of-the-container-via-external-url/33271
After some discussion with the provider of the virtual server it turned out that conflicting firewall rules between the Plesk firewall and iptables caused this problem. After the conflict had been fixed by the provider, the container could be accessed.
Firewall on Photon, with the rules auto-added by Docker:
Chain INPUT (policy DROP 2 packets, 203 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
258 19408 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
6 360 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
186 13066 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
186 13066 DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
103 7224 ACCEPT all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 524 DOCKER all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
74 5318 ACCEPT all -- br-83f08846fc2e !br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
1 52 ACCEPT all -- br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
300 78566 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
8 472 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.2 tcp dpt:80
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.3 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.5 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.4 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.6 tcp dpt:8111
Chain DOCKER-ISOLATION (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- br-83f08846fc2e docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- docker0 br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
It turned out to be a firewall issue.
I resolved it using this https://unrouted.io/2017/08/15/docker-firewall/ as a starting point.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]
-F INPUT
-F DOCKER-USER
-F FILTERS
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
-A INPUT -j FILTERS
-A DOCKER-USER -i eth0 -j FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
# full access from my workstation
-A FILTERS -m state --state NEW -s 192.168.0.10/32 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-host-prohibited
COMMIT
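A ruleset in this format is loaded with iptables-restore in no-flush mode (-n), so that Docker's own chains are left intact; the linked article wires it up at boot with a small systemd unit. A sketch, assuming the rules live in /etc/iptables.conf:
# apply now
iptables-restore -n /etc/iptables.conf
# /etc/systemd/system/iptables.service
[Unit]
Description=Restore iptables firewall rules
Before=network-pre.target
[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore -n /etc/iptables.conf
[Install]
WantedBy=multi-user.target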
I installed Jenkins via Docker on my server and assigned it a specific domain (jenkins.mydomain.com), which works perfectly fine. But I can also reach Jenkins (and every other service in Docker) by browsing my domain with the service's port, for example: mydomain.com:8181
I've already tried a few things to block the port from the outside and make it accessible only via the domain, but no luck.
First I tried to block the port on the eth0 interface:
iptables -A INPUT -i eth0 -p tcp --destination-port 8181 -j DROP
But it didn't work: when I then tried to reach Jenkins via the domain, I got a 503 error.
I also tried blocking the port for every incoming request except Docker's IP. That didn't work either.
So how can I make the ports inaccessible from the outside but still accessible for Docker?
iptables -L -n --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 DOCKER-ISOLATION all -- 0.0.0.0/0 0.0.0.0/0
2 DOCKER all -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
4 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
5 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
Chain DOCKER (1 references)
num target prot opt source destination
1 ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
2 ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:22
3 ACCEPT tcp -- 0.0.0.0/0 172.17.0.3 tcp dpt:8081
4 ACCEPT tcp -- 0.0.0.0/0 172.17.0.4 tcp dpt:50000
5 ACCEPT tcp -- 0.0.0.0/0 172.17.0.4 tcp dpt:8080
Chain DOCKER-ISOLATION (1 references)
num target prot opt source destination
1 RETURN all -- 0.0.0.0/0 0.0.0.0/0
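For completeness, two hedged sketches of the usual approaches to this (the container name and image are assumptions; 8181 is the published port from the question). Traffic to a published port is DNAT'ed and forwarded, so it generally never traverses the INPUT chain; restrictions belong in DOCKER-USER instead. Note that by the time a packet reaches DOCKER-USER its destination port has already been rewritten, so the original port must be matched through conntrack:
# Option 1: publish the port on loopback only, so just a local reverse proxy can reach it
docker run -d --name jenkins -p 127.0.0.1:8181:8080 jenkins/jenkins:lts
# Option 2: drop external traffic whose original destination port was 8181
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 8181 --ctdir ORIGINAL -j DROP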