Docker networking, baffled and puzzled

I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, as does Elasticsearch.
Elasticsearch exposes its default ports 9200 and 9300, and the Python application exposes port 5000. The network type used for Docker is a user-defined bridge network.
When I start both containers the application starts up nicely; both containers see each other by container name and communicate just fine.
But from the Docker host (Linux) it is not possible to connect to the exposed port 5000, so a simple curl http://localhost:5000/ ends in a time-out. The Docker tips from this documentation did not solve this: https://docs.docker.com/network/bridge/
After a lot of struggling I tried something completely different: connecting to the Python application from outside the Docker host. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and was served the application.
So that means the real problem is solved, since I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution to 90% of the network problems reported by others.)
But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.
I hope someone can shed a light on this; I'm trying to understand why this is happening.
Some relevant information
Docker host:
# ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
[snip]
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 1xx.1xx.199.134 netmask 255.255.255.0 broadcast 1xx.1xx.199.255
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e7f2f7a271b pplbase_api "flask run --host=0.…" 20 hours ago Up 19 hours 0.0.0.0:5000->5000/tcp pplbase_api_1
fdfa10b1ce99 elasticsearch:7.5.1 "/usr/local/bin/dock…" 21 hours ago Up 19 hours 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp pplbase_elastic_1
# docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
77127ce4b631 pplbase_pplbase bridge local
# iptables -L -n
[snip]
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9300
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:9200
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Docker compose file:
version: '3'
services:
  api:
    build: .
    links:
      - elastic
    ports:
      - "5000:5000"
    networks:
      - pplbase
    environment:
      - ELASTIC_HOSTS=elastic localhost
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - FLASK_DEBUG=0
    tty: true
  elastic:
    image: "elasticsearch:7.5.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - pplbase
    environment:
      - discovery.type=single-node
    volumes:
      - ${PPLBASE_STORE}:/usr/share/elasticsearch/data
networks:
  pplbase:
    driver: bridge
After more digging, the riddle is getting bigger and bigger. When using netcat I can establish a connection:
Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
Checking with netstat when no clients are connected, I see:
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
While trying to connect from the Docker host, the connection is made:
tcp 0 1 172.20.0.1:56866 172.20.0.3:5000 SYN_SENT 27824/docker-proxy
tcp6 0 0 :::5000 :::* LISTEN 27824/docker-proxy
tcp6 0 0 ::1:58900 ::1:5000 ESTABLISHED 31642/links
tcp6 592 0 ::1:5000 ::1:58900 ESTABLISHED 27824/docker-proxy
So I'm now suspecting some network voodoo on the Docker host.

The Flask instance is running at 0.0.0.0:5000.
Have you tried: curl http://0.0.0.0:5000/?
It might be that your host configuration maps localhost to 127.0.0.1 rather than 0.0.0.0

So as I was working on this problem, slowly moving towards a solution, I found that my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes, the packets between the docker bridge (not docker0, but the br- interface) and the container (veth) were being dropped by iptables. Adding a rule allowing traffic between those interfaces resolved the problem.
In my case: sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT
Where 172.20.0.0/16 is the bridge network generated by Docker.
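For reference, the dropped packets can be made visible with a LOG rule; a minimal sketch, assuming the drops come from the end of the INPUT chain so that only packets not matched by an earlier ACCEPT rule get logged:
# append a LOG rule at the end of INPUT: packets reaching it are about to be dropped
sudo iptables -A INPUT -j LOG --log-prefix "iptables dropped: " --log-level 4
# reproduce the failing curl, then inspect the kernel log
sudo journalctl -k | grep "iptables dropped:"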

Related

Iptables rules with dockerized web application - can't block incoming traffic

I'm hosting a dockerized web application bound to port 8081 on my remote server.
I want to block that web application for external IPs, as I already did with port 8080, which hosts a plain Jenkins server.
Here's what I've tried:
iptables -A INPUT -d <my-server-ip> -p tcp --dport 8081 -j DROP
As I did with port 8080.
Here is the iptables -nv -L INPUT output:
Chain INPUT (policy ACCEPT 2836 packets, 590K bytes)
pkts bytes target prot opt in out source destination
495 23676 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8080
0 0 DROP tcp -- * * 0.0.0.0/0 <my-ip-addr> tcp dpt:8081
Does it possibly have something to do with the DOCKER chain in iptables?
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
9 568 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 <container-eth1-addr> tcp dpt:8080
Are there more specific rules I need to add?
Aren't my server's INPUT rules supposed to be applied before those listed in the DOCKER chain?
UPDATE - SOLVED
Thanks to larsks's comments I found the solution.
The goal here was to block TCP traffic on port 8081, bound to a Docker container, while still being able to use SSH tunneling as a "poor man's" VPN (so not publishing the port was not an option).
Just had to add this rule (it matches the container IP and its internal port 8080, because Docker's DNAT has already rewritten the destination by the time the DOCKER-USER chain is evaluated):
iptables -I DOCKER-USER 1 -d <container-eth-ip> ! -s 127.0.0.1 -p tcp --dport 8080 -j DROP
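To confirm the rule actually matches, the packet counters on the DOCKER-USER chain can be watched while hitting the port from an external host; a quick sketch:
# the pkts/bytes counters on the new DROP rule should increase with each blocked request
iptables -nv -L DOCKER-USER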

How do I prevent Docker from source NAT'ing traffic on Synology NAS

On a Synology NAS, it appears the default setup for Docker/iptables is to source-NAT traffic going to the container, so it arrives from the gateway IP. This fundamentally causes problems when the container needs to see the correct/real client IP. I do not see this issue when running on an Ubuntu system, for example.
Let me walk you through what I'm seeing. On the Synology NAS, I'll run a simple nginx container:
docker run --rm -ti -p 80:80 nginx
(I have disabled the Synology NAS from using port 80)
I'll do a curl from my laptop and I see the following log line:
172.17.0.1 - - [05/May/2020:17:02:44 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
Hm... the client IP (172.17.0.1) is the docker0 interface on the NAS.
user@synology:~$ ifconfig docker0
docker0 Link encap:Ethernet HWaddr 02:42:B5:BE:C5:51
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
<snip>
When I first saw this, I was quite confused because I did not recall docker networking working this way. So I fired up my ubuntu VM and tried the same test (same docker run command from earlier).
The log line from my curl test off box:
172.16.207.1 - - [05/May/2020:17:12:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "-"
In this case the docker0 IP was not the source of the traffic as seen by the container.
user@ubuntu-vm:~# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
<snip>
So it appears that the Docker network setup on a Synology NAS is different from a more standard Ubuntu deployment. Okay, neat. So how does one fix this?
This is where I'm struggling. Clearly something is happening with iptables. Running the same iptables -t nat -L -n command on both systems shows vastly different results.
user@synology:~$ iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DEFAULT_OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
DEFAULT_POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
Chain DEFAULT_OUTPUT (1 references)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DEFAULT_POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:80
Chain DOCKER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
user@ubuntu-vm:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:80
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
I do not really understand how these chains work or why they are different on different systems. I haven't changed any Docker-level settings on either system; these are default configs.
Has anyone run into this before? Is there a quick way to toggle this?
I've experienced this issue myself while using Pi-hole (described here) and this iptables rule seems to fix it:
sudo iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
Be aware that this is not permanent, so if you reboot the NAS you will have to apply it again.
Update: Here's a more "permanent" fix: https://gist.github.com/PedroLamas/db809a2b9112166da4a2dbf8e3a72ae9
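One way to verify that the rule took effect is to list the nat PREROUTING chain and repeat the curl test; a sketch:
# the DOCKER jump should now show up here, with its counters increasing per request
sudo iptables -t nat -L PREROUTING -n -v
After that, the nginx log should show the real client IP instead of 172.17.0.1.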

access remote network from within docker container

I have a Docker host that is connected to an additional, non-default-route network. My problem is that I can't reach this network from within the Docker containers on the Docker host.
Primary IP: 189.69.77.21 (default route)
Secondary IP: 192.168.77.21
Routing is like the following:
[root@mgmt]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 189.69.77.1 0.0.0.0 UG 0 0 0 enp0s31f6
189.69.77.1 0.0.0.0 255.255.255.255 UH 0 0 0 enp0s31f6
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.77.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s31f6.4000
and iptables is untouched:
[root@mgmt]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:3000
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
I start the Docker Container with the following:
docker run -d --restart=unless-stopped -p 127.0.0.1:3000:3000/tcp --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient
I can reach the container (via an NGINX proxy) over the network, but the container itself can only ping/reach the Docker host IP and not the other hosts.
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.21
PING 192.168.77.21 (192.168.77.21) 56(84) bytes of data.
64 bytes from 192.168.77.21: icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from 192.168.77.21: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.77.21: icmp_seq=3 ttl=64 time=0.079 ms
^C
--- 192.168.77.21 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.078/0.079/0.080/0.000 ms
node@1c5cf0e8d14c:/opt/meteor/dist/bundle$ ping 192.168.77.40
PING 192.168.77.40 (192.168.77.40) 56(84) bytes of data.
^C
--- 192.168.77.40 ping statistics ---
240 packets transmitted, 0 received, 100% packet loss, time 239000ms
So my question is: how can I make the Docker container reach the hosts on that network? My goal is to have mongoclient running via Docker so it can be used to manage the MongoDB replica set in that additional private network.
You can use host networking for the container. The container then uses the host's network stack directly, so you can reach both the container and the host's networks.
Here is the documentation:
https://docs.docker.com/network/host/
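For example, a minimal sketch of the run command from the question switched to host networking (the -p mapping is dropped, since with --network host the container binds directly on the host):
docker run -d --restart=unless-stopped --network host --name mongoclient -e MONGOCLIENT_DEFAULT_CONNECTION_URL=mongodb://192.168.77.21:27017,192.168.77.40:27017,192.168.77.41:27017/graylog?replicaSet=ars0 -e ROOT_URL=http://192.168.77.21/nosqlclient mongoclient/mongoclient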
BR
Carlos

How does the docker container connect to a port on the host?

My question is very similar to this(From inside of a Docker container, how do I connect to the localhost of the machine?).
I tried to use --network="host" to connect to the host's 8118 proxy, but this is not what I want. I still want to use the bridge mode. In fact, I feel that docker's bridging is similar to NAT in the traditional sense.The virtual switch docker0 installed on the host, the different containers rely on this switch to communicate with each other, and the container can also ping the host, in theory, can communicate with the host and access its open port, but in fact it can't, I don't know why, who can help me? (The ping protocol is based on tcp, it also means that 20/21 ports are reachable.why unreachable for 8118?)
OK, I may have found the reason: the port has to be listened on. I will try changing the listen address of the proxy software on the host.
The following is my attempt; the container cannot successfully connect to the 8118 proxy port on the host.
(The terminal on the left is my host, the one on the right is my Docker container.)
Host:
VirtualBox CentOS 7 (IP: 192.168.125.95, shadowsocks on 127.0.0.1:1080, privoxy on 127.0.0.1:8118): wget works fine.
Docker:
A container with http_proxy=192.168.125.95:8118 set; wget fails with "No route to host". After I turned off the firewall and tried again, I got "Connection refused" instead.
docker container:
root@bee1d2892df4:/go# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
root@bee1d2892df4:/go# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
root@bee1d2892df4:/go# telnet 172.17.0.1 8118
Trying 172.17.0.1...
telnet: Unable to connect to remote host: Connection refused
root@bee1d2892df4:/go# telnet 192.168.125.95 8118
Trying 192.168.125.95...
telnet: Unable to connect to remote host: Connection refused
root@bee1d2892df4:/go#
Host: (this should be irrelevant; my iptables should not even be running.)
[root@localhost shadowsocks]# iptables -A INPUT -i docker0 -j ACCEPT
[root@localhost shadowsocks]# iptables -nL --line-number
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
2 DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
4 DOCKER all -- 0.0.0.0/0 0.0.0.0/0
5 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
6 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
7 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
8 DOCKER all -- 0.0.0.0/0 0.0.0.0/0
9 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
10 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
Chain DOCKER (2 references)
num target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num target prot opt source destination
1 DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
2 DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
3 RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num target prot opt source destination
1 DROP all -- 0.0.0.0/0 0.0.0.0/0
2 DROP all -- 0.0.0.0/0 0.0.0.0/0
3 RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num target prot opt source destination
1 RETURN all -- 0.0.0.0/0 0.0.0.0/0
Solution:
echo 'listen-address  172.17.0.1:8118' >> /usr/local/etc/privoxy/config
service privoxy restart
netstat -nltp|grep 8118
tcp 0 0 172.17.0.1:8118 0.0.0.0:* LISTEN 27154/privoxy
tcp 0 0 127.0.0.1:8118 0.0.0.0:* LISTEN 27154/privoxy
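With privoxy now listening on the docker0 address, the proxy should be reachable from inside the container; a quick check (a sketch, using any plain HTTP URL):
# from the container, point wget at the bridge gateway address
http_proxy=http://172.17.0.1:8118 wget -O- http://example.com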
PS:
I made a very basic mistake here, and I clearly have some misunderstandings about how the system works. Being able to ping an IP does not mean a service on that host is usable: a service has to listen on a port to accept packets, and a successful ping says nothing about which address the service is listening on (it may need to be 0.0.0.0). With third-party services this is something to pay attention to.
If you can, yum install strace. Then run wget again with strace in front, like so:
strace -f wget .........
Look for the system-level failure message. Don't forget wget --debug as well.
Another thing to consider is which user each of these examples runs as. Are they different users when you run wget directly, etc.?

How to connect youtrack, upsource and teamcity to hub all running as docker containers on the same machine

Since we are a small team we want to run JetBrains' Hub, YouTrack, Upsource and TeamCity as Docker containers (all on the same machine for now). Docker is running on Photon OS 2.0 on ESXi 6.7. Nginx in another container acts as a reverse proxy, so all services are reachable under their own domain names on port 80 for now...
I got all 5 services running and can access them in a browser. However, connecting YouTrack, Upsource and TeamCity to Hub is a challenge. YouTrack, Upsource and TeamCity ask for the Hub URL in order to verify it and request permission to access Hub.
The Problem:
Hub URL: http://hub.teamtools.mydomain.com -> the container cannot access Hub under that address, and verification fails with a timeout.
Hub URL: http://172.18.0.3:8080 -> the container can access Hub on the internal Docker network, but then shows a pop-up which tries to display a confirmation page by redirecting to Hub on that internal IP, which of course fails in the browser. (I tried copying the URL from the pop-up into a new window and adjusting it there as a hack, but that does not work.)
Questions:
How can I link YouTrack, Upsource and TeamCity to Hub? For the confirmation process to work, the Docker containers need to be able to reach each other under an external IP/domain name.
Does anything speak against having all four team tools on the same machine to get started, and separating them later as demand grows?
Configuration so far:
The containers have been turned into services like so:
/etc/systemd/system/docker.nginx.service
[Unit]
Description=Nginx DNS proxy
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker network create --subnet=172.18.0.0/16 dockerNet
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jwilder/nginx-proxy
ExecStart=/usr/bin/docker run --rm --name %n \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--net dockerNet --ip 172.18.0.2 \
-p 80:80 \
jwilder/nginx-proxy
[Install]
WantedBy=multi-user.target
/etc/systemd/system/docker.hub.service
[Unit]
Description=JetBrains Hub Service
After=docker.nginx-proxy.service
Requires=docker.nginx-proxy.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jetbrains/hub:2018.2.9635
ExecStart=/usr/bin/docker run --rm --name %n \
-v /opt/hub/data:/opt/hub/data \
-v /opt/hub/conf:/opt/hub/conf \
-v /opt/hub/logs:/opt/hub/logs \
-v /opt/hub/backups:/opt/hub/backups \
--net dockerNet --ip 172.18.0.3 \
-p 8010:8080 \
--expose 8080 \
-e VIRTUAL_PORT=8080 \
-e VIRTUAL_HOST=hub,teamtools.mydomain.com,hub.teamtools.mydomain.com \
jetbrains/hub:2018.2.9635
[Install]
WantedBy=multi-user.target
... and so on. Since I'm still trying things out, the ports are mapped on the host and exposed so nginx-proxy can pick them up. I also added static IPs to the containers, hoping this would help with my problem.
Running those services results in:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ba8ed89b832 jetbrains/teamcity-server "/run-services.sh" 12 hours ago Up 12 hours 0.0.0.0:8111->8111/tcp docker.teamcity.service
5c819c48cbcc jetbrains/upsource:2018.1.357 "/bin/bash /run.sh" 12 hours ago Up 12 hours 0.0.0.0:8030->8080/tcp docker.upsource.service
cf9dcd1b534c jetbrains/youtrack:2018.2.42223 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8020->8080/tcp docker.youtrack.service
de86c3e1f2e2 jetbrains/hub:2018.2.9635 "/bin/bash /run.sh" 14 hours ago Up 14 hours 0.0.0.0:8010->8080/tcp docker.hub.service
9df9cb44e485 jwilder/nginx-proxy "/app/docker-entry..." 14 hours ago Up 14 hours 0.0.0.0:80->80/tcp docker.nginx-proxy.service
Additional Info:
I did consider this could be a firewall issue and this post seems to suggest the same thing:
https://forums.docker.com/t/access-docker-container-from-inside-of-the-container-via-external-url/33271
After some discussion with the provider of the virtual server it turned out that conflicting firewall rules between the Plesk firewall and iptables caused this problem. After the conflict had been fixed by the provider, the container could be accessed.
Firewall on Photon with rules auto-added by Docker:
Chain INPUT (policy DROP 2 packets, 203 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
258 19408 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
6 360 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
186 13066 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
186 13066 DOCKER-ISOLATION all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
103 7224 ACCEPT all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 524 DOCKER all -- * br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
74 5318 ACCEPT all -- br-83f08846fc2e !br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
1 52 ACCEPT all -- br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
300 78566 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
8 472 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.2 tcp dpt:80
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.3 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.5 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.4 tcp dpt:8080
0 0 ACCEPT tcp -- !br-83f08846fc2e br-83f08846fc2e 0.0.0.0/0 172.18.0.6 tcp dpt:8111
Chain DOCKER-ISOLATION (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- br-83f08846fc2e docker0 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- docker0 br-83f08846fc2e 0.0.0.0/0 0.0.0.0/0
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
186 13066 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
It turned out to be a firewall issue.
I resolved it using this as a starting point: https://unrouted.io/2017/08/15/docker-firewall/
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]
-F INPUT
-F DOCKER-USER
-F FILTERS
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
-A INPUT -j FILTERS
-A DOCKER-USER -i eth0 -j FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
# full access from my workstation
-A FILTERS -m state --state NEW -s 192.168.0.10/32 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-host-prohibited
COMMIT
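A sketch of loading this ruleset (the file path is an assumption; making it persistent across reboots depends on the distribution):
# as root: -n (--noflush) keeps the Docker-managed chains in the other tables intact
iptables-restore -n < /etc/iptables.conf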
