Docker swarm container connect to host port

I have a swarm cluster in which I created a global service to run on all docker hosts in the cluster.
The goal is to have each container instance for this service connect to a port listening on the docker host.
For further information, I am following this Docker Daemon Metrics guide for exposing the new docker metrics API on all hosts and then proxying that host port into the overlay network so that Prometheus can scrape metrics from all swarm hosts.
I have read several Docker GitHub issues (#8395, #32101, #32277, #1143), and from these my understanding is the same as what the Docker Daemon Metrics guide outlines: in order to connect to the host from within a swarm container, I should use the docker_gwbridge network, whose gateway address is 172.18.0.1 by default.
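(If you want to double-check that gateway address on your own hosts, the one-liner below should print it; the --format template path is an assumption based on the structure of docker network inspect output.)
docker network inspect docker_gwbridge --format '{{ (index .IPAM.Config 0).Gateway }}'
# should print 172.18.0.1 on a default installation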
Every container in my swarm has a network interface on the docker_gwbridge network:
326: eth0@if327: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:ff:00:06 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.6/16 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.5/32 scope global eth0
valid_lft forever preferred_lft forever
333: eth1@if334: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.4/16 scope global eth1
valid_lft forever preferred_lft forever
Also, every container in the swarm has a default route via 172.18.0.1:
/prometheus # ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print $2 }'
172.18.0.1
/prometheus # netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
172.18.0.1
/prometheus # ip route
default via 172.18.0.1 dev eth1
10.0.1.0/24 dev eth2 src 10.0.1.9
10.255.0.0/16 dev eth0 src 10.255.0.6
172.18.0.0/16 dev eth1 src 172.18.0.4
Despite this, I cannot communicate with 172.18.0.1 from within the container:
/ # wget -O- 172.18.0.1:4999
Connecting to 172.18.0.1:4999 (172.18.0.1:4999)
wget: can't connect to remote host (172.18.0.1): No route to host
On the host, I can access the Docker metrics API on 172.18.0.1: I can ping it and make a successful HTTP request.
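(For reference, the guide enables the metrics endpoint via the Docker daemon configuration; a rough sketch of what that looks like in /etc/docker/daemon.json is below, with the listen address and port being assumptions that match the wget attempt above.)
{
  "experimental": true,
  "metrics-addr": "0.0.0.0:4999"
}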
Can anyone shed some light as to why this does not work from within the container as outlined in the Docker Daemon Metrics guide?
If the container has a network interface on the 172.18.0.0/16 network and has routes configured via 172.18.0.1, why do pings to 172.18.0.1 fail from within the container?
If this is not a valid approach for accessing a host port from within a swarm container, then how would one go about achieving this?
EDIT:
Just realized that I did not give all the information in the original post.
I am running docker swarm on a CentOS 7.2 host with docker version 17.04.0-ce, build 4845c56. My kernel is a build of 4.9.11 with vxlan and ipvs modules enabled.
After some further digging I noted that this appears to be a firewall issue. I discovered that not only was I unable to ping 172.18.0.1 from within the containers, I was not able to ping my host machine at all! I tried the host's domain name, its FQDN and even its public IP address, but the container could not ping the host (there is network access, as I can ping google.com etc.).
I disabled firewalld on my host and then restarted the Docker daemon. After this I was able to ping my host from within the containers (both by domain name and via 172.18.0.1). Unfortunately this is not a solution for me. I need to identify what firewall rules to put in place to allow container-to-host communication without requiring firewalld to be disabled.

Firstly, I owe you a huge THANK YOU. Before I read your EDIT part, I'd spent literally day and night to solve a similar issue, and never realized that the devil is the firewall.
Without disabling the firewall, I have solved my problem on Ubuntu 16.04 using:
sudo ufw allow in on docker_gwbridge
sudo ufw allow out on docker_gwbridge
sudo ufw enable
I'm not very familiar with CentOS, but I do believe the following should help you, or at least serve as a hint:
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker_gwbridge
sudo systemctl restart firewalld
You might have to restart docker as well.
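For completeness, the rough CentOS sequence plus a quick verification might look like this (the metrics port is taken from the question; <one-of-the-swarm-containers> is a placeholder):
sudo firewall-cmd --permanent --zone=trusted --change-interface=docker_gwbridge
sudo firewall-cmd --reload          # or: sudo systemctl restart firewalld
sudo systemctl restart docker       # so Docker re-creates its own iptables rules
docker exec <one-of-the-swarm-containers> wget -O- 172.18.0.1:4999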

Related

Can't Connect to Docker container within local network

I'm trying to run QuakeJS within a Docker container. I'm new to Docker (and networking). I couldn't connect, so I decided to start with something simpler and ran the nginxdemos/hello image. I still can't connect to the server (the host is running Ubuntu Server).
Tried:
docker run -d -p 8080:80 nginxdemos/hello
Probably relevant output from ip addr:
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 18:03:73:be:70:eb brd ff:ff:ff:ff:ff:ff
altname enp0s25
inet 10.89.233.61/20 metric 100 brd 10.89.239.255 scope global dynamic eno1
valid_lft 27400sec preferred_lft 27400sec
inet6 fe80::1a03:73ff:febe:70eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:7c:bb:47 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe7c:bb47/64 scope link
valid_lft forever preferred_lft forever
Here's docker network ls:
NETWORK ID NAME DRIVER SCOPE
5671ad4b57fe bridge bridge local
a9348e40fb3c host host local
fdb16382afbd none null local
And ufw status:
To Action From
-- ------ ----
8080 ALLOW Anywhere
8080 (v6) ALLOW Anywhere (v6)
Anywhere ALLOW OUT 172.17.0.0/16 on docker0
But when I try to access it in a web browser (Chrome and Firefox) at 172.17.0.0:8080 (or many other permutations) I just end up with a timeout. I'm sure this is something stupid, but I'm very stuck.
UPDATE
I installed a basic apache server and it worked fine. So it's something with Docker. I think.
UPDATE AGAIN
docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED             STATUS             PORTS                                   NAMES
a7bbfee83954   nginxdemos/hello   "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:8080->80/tcp, :::8080->80/tcp   relaxed_morse
I can use curl localhost:8080 and see the nginx page
I was playing with ufw but disabled it; I'm not worried about network security. I tried ufw-docker too.
FINAL UPDATE
Restarting Docker worked :|
When you publish a port with the -p option to docker run, the syntax is -p <host port>:<container port>, and you are saying: "expose the service running in the container on port <container port> on the Docker host as port <host port>".
So when you run:
docker run -d -p 8080:80 nginxdemos/hello
You could open a browser on your host and connect to http://localhost:8080 (because port 8080 is the <host port>). If you have the address of the container, you could also connect to http://<container_ip>:80, but you almost never want to do that, because every time you start a new container it receives a new IP address.
We publish ports to the host so that we don't need to muck about finding container IP addresses.
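If you ever do need a container's address (for debugging, say), you can read it from docker inspect instead of guessing; the template below assumes the container sits on the default bridge network:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' relaxed_morse
# e.g. 172.17.0.2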
Running 172.17.0.0:8080 (also .0.1, .0.2) or 10.89.233.61:8080 results in a timeout.
172.17.0.0 is a network address, not a host address, so 172.17.0.0:8080 doesn't make any sense.
Both 172.17.0.1:8080 and 10.89.233.61:8080 ought to work (as should any other address assigned to a host interface). Some diagnostics to try:
Is the container actually running (docker ps)?
On the docker host are you able to connect to localhost:8080?
It looks like you're using UFW. Have you made any recent changes to the firewall?
If you restart docker (systemctl restart docker), do you see any changes in behavior?
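Roughly as commands (a sketch; standard Ubuntu tooling assumed):
docker ps                        # container up and publishing 0.0.0.0:8080->80/tcp?
curl -v http://localhost:8080/   # reachable locally on the host?
sudo ufw status verbose          # any firewall rules in the way?
sudo systemctl restart docker    # does a daemon restart change anything?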

Firewalld And Container Published Ports

On a KVM guest of my RHEL 8 host, with the guest running CentOS 7, I was expecting firewalld to block outside access by default to an ephemeral port published by a Docker container running nginx. To my surprise, the access ISN'T blocked.
Again, the host (myhost) is running RHEL 8, and it has a KVM guest (myguest) running CentOS 7.
The firewalld configuration on myguest is standard, nothin' fancy:
[root@myguest ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: http https ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Here are the eth0 and eth1 interfaces that fall under the firewalld public zone:
[root@myguest ~]# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:96:9c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.100.111/24 brd 192.168.100.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe96:9cfc/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@myguest ~]# ip a s dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:66:6c:a1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.111/24 brd 192.168.1.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe66:6ca1/64 scope link noprefixroute
valid_lft forever preferred_lft forever
On myguest I'm running Docker, and the nginx container is publishing its Port 80 to an ephemeral port:
[me#myguest ~]$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06471204f091 nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:49154->80/tcp focused_robinson
Notice that in the prior firewall-cmd output I was not permitting access via this ephemeral TCP Port 49154 (or to any other ephemeral ports for that matter). So, I was expecting that unless I did so, outside access to nginx would be blocked. But to my surprise, from another host in the home network running Windows, I was able to access it:
C:\Users\me>curl http://myguest:49154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
.
.etc etc
If a container publishes its container port to an ephemeral one on the host (myguest in this case), shouldn't the host firewall utility protect access to that port in the same manner as it would a standard port? Am I missing something?
But I also noticed that in fact the nginx container is listening on a TCP6 socket:
[root@myguest ~]# netstat -tlpan | grep 49154
tcp6 0 0 :::49154 :::* LISTEN 23231/docker-proxy
It seems, then, that firewalld may not be blocking tcp6 sockets? I'm confused.
This is obviously not a production issue, nor something to lose sleep over. I'd just like to make sense of it. Thanks.
The integration between Docker and firewalld has changed over the years, but based on your OS versions and CLI output I think you can get the behavior you expect by setting AllowZoneDrifting=no in /etc/firewalld/firewalld.conf on the RHEL 8 host.
Due to zone drifting, it is possible for packets received in a zone with --set-target=default (e.g. the public zone) to drift to a zone with --set-target=ACCEPT (e.g. the trusted zone). This means FORWARDed packets received in zone public will be forwarded to zone trusted. If your Docker containers are using a real bridge interface, then this issue may apply to your setup. Docker defaults to SNAT, so usually this problem is hidden.
Newer firewalld releases have completely removed this behavior, because, as you have found, it's both unexpected and a security issue.
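A sketch of that change on the RHEL 8 host, assuming the default config path and a firewalld version that still ships the option:
sudo sed -i 's/^AllowZoneDrifting=.*/AllowZoneDrifting=no/' /etc/firewalld/firewalld.conf
sudo systemctl restart firewalld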

Docker compose api cannot connect to host MongoDB database

I've moved my MongoDB from a container to a local service (it was really flaky when containerised). The problem is that I cannot connect from a Node API to the locally running MongoDB service. I can get this working on my Mac, but not on Ubuntu. I've tried:
- DB_HOST=mongodb://172.17.0.1:27017/proto?authSource=admin
- DB_HOST=mongodb://localhost:27017/proto?authSource=admin
// this works locally, but not on my Ubuntu server
- DB_HOST=mongodb://host.docker.internal:27017/proto?authSource=admin
I tried adding this to my Dockerfile:
ip -4 route list match 0/0 | awk '{print $3 " host.docker.internal"}' >> /etc/hosts && \
I also tried a bridge network, to no avail. Example docker-compose file:
version: '3.3'
services:
  search-api:
    build: ../search-api
    environment:
      - PORT=3333
      - DB_HOST=mongodb://host.docker.internal:27017/search?authSource=admin
      - DB_USER=dbuser
      - DB_PASS=password
    ports:
      - 3333:3333
    restart: always
The problem can be caused by MongoDB not listening on the correct IP address and therefore blocking your access.
Either make sure it's listening on a specific IP or listening on all interfaces: 0.0.0.0.
On Linux the config file is installed by default at /etc/mongod.conf.
Configuration with a specific IP address:
net:
  bindIp: 172.17.0.1  # your host's address on the docker0 bridge
  port: 27017
Configuration open to all connections:
net:
  bindIp: 0.0.0.0
  port: 27017
To get your host's IP address (from within a container):
On Docker for Mac and Docker for Windows you can use host.docker.internal.
On Linux you need to run ip route show in the container.
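For example, from inside a container on the default bridge, the host's address is typically the default gateway, so something like this should print it:
ip route show default | awk '{print $3}'
# 172.17.0.1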
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
(copied from here: How to access host port from docker container)
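Putting the two answers together for the compose file above (a sketch, not verified): with mongod bound to 0.0.0.0 or 172.17.0.1, the first connection string from the question ought to work on the Ubuntu server:
- DB_HOST=mongodb://172.17.0.1:27017/proto?authSource=admin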

Docker connect to mocked service on host port

I am using docker to run my web app on my local machine and I have created mocked web service using SoapUI on host machine.
The mocked service is accessible through localhost:8099 and 127.0.0.1:8099 (tested using telnet); however, I am unable to access it from a running Docker container.
I have read some articles about discovering the host IP address through
ip addr show docker0
with results:
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e3:36:43:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e3ff:fe36:435b/64 scope link
valid_lft forever preferred_lft forever
When I ping 172.17.0.1 from the Docker container I get responses just fine, but when my web app tries to call the mocked web service I get No route to host.
I have also tried to modify iptables using iptables -A INPUT -i docker0 -j ACCEPT but with no success.
Is there any other setting that I am missing?
Any help is appreciated.
Thanks, shimon
If I have read your question right, your local and host machines are not the same machine. In that case you won't be able to access your mocked service on the host machine using localhost (unless you have set up a tunnel on localhost:8099), as it will resolve to your local IP (on your local machine).
What you need to do is make sure both machines can talk to each other and use the host machine's IP instead of localhost.
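A couple of checks that may help narrow this down (a sketch; standard Linux tools assumed):
ss -tlnp | grep 8099                    # which address is the SoapUI mock actually bound to?
sudo iptables -S INPUT | grep docker0   # did the docker0 ACCEPT rule take effect?
If the mock is only bound to 127.0.0.1, containers will not be able to reach it via 172.17.0.1.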

Java application cannot get IP address of the host in docker container with static IP

I have used OpenStack for a while to manage my applications. Now I want to move them to Docker, one container per app, because Docker is more lightweight and efficient.
The problem is that almost everything related to networking went wrong at runtime.
In my design, every application container should have a static IP address, and I can use a hosts file to locate the container network.
Here is my implementation (the bash filename is docker_addnet.sh):
# Usage: docker_addnet.sh container_name IP
# interface name: veth_(containername)
# gateway: 172.17.42.1
if [ $# != 2 ]; then
    echo -e "ERROR! Wrong args"
    exit 1
fi
container_netmask=16
container_gw=172.17.42.1
container_name=$1
bridge_if=veth_`echo ${container_name} | cut -c 1-10`
container_ip=$2/${container_netmask}
container_id=`docker ps | grep $1 | awk '{print \$1}'`   # (captured but not used below)
pid=`docker inspect -f '{{.State.Pid}}' ${container_name}`
echo "Container: " $container_name "pid: " $pid
# expose the container's network namespace to ip netns
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/$pid
# create a veth pair: one end joins the docker0 bridge, the other will go into the container
brctl delif docker0 $bridge_if
ip link add A type veth peer name B
ip link set A name $bridge_if
brctl addif docker0 $bridge_if
ip link set $bridge_if up
# move the container end into the namespace, rename it to eth0 and configure address and route
ip link set B netns $pid
ip netns exec $pid ip link set dev B name eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip addr add $container_ip dev eth0
ip netns exec $pid ip route add default via $container_gw
The script is used to set the static IP address of a container. When you run the container, you must append --net=none so that the network can be set up manually.
You can now start a container by
sudo docker run --rm -it --name repl --dns=8.8.8.8 --net=none clojure bash
and set the network by
sudo zsh docker_addnet.sh repl 172.17.15.1
In the container's bash, you can see the IP address with ip addr; the output is something like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
67: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2e:7b:7e:5a:b5:d6 brd ff:ff:ff:ff:ff:ff
inet 172.17.15.1/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2c7b:7eff:fe5a:b5d6/64 scope link
valid_lft forever preferred_lft forever
So far so good.
Let's try to get the container's own IP address using the Clojure REPL. First start the REPL:
lein repl
then eval the code below:
(. java.net.InetAddress getLocalHost)
The Clojure code is equivalent to:
System.out.println(Inet4Address.getLocalHost());
What you get is an exception:
UnknownHostException 5a8efbf89c79: Name or service not known
java.net.Inet6AddressImpl.lookupAllHostAddr (Inet6AddressImpl.java:-2)
Another thing that goes weird is that the RMI server cannot get the client IP address via RemoteServer.getClientHost().
So what may cause this issue? I remember that Java sometimes picks up the wrong network configuration, but I don't know the reason.
The documentation for InetAddress.getLocalHost() says:
Returns the address of the local host. This is achieved by retrieving the name of the host from the system, then resolving that name into an InetAddress.
Since you didn't take any steps to make the container's hostname resolvable to your static IP address inside the container, it doesn't work.
To find the address in Java without going via the hostname, you could enumerate all network interfaces via NetworkInterface.getNetworkInterfaces(), then iterate over each interface, inspecting each address to find the one you want. Example code at:
Getting the IP address of the current machine using Java
Another option would be to use Docker's --add-host and --hostname options on the docker run command to put in a mapping for the address you want, then getLocalHost() should work as you expect.
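For that second option, a rough sketch of the run command (the hostname and address values here are just illustrative, matching the static IP used earlier):
sudo docker run --rm -it --name repl --dns=8.8.8.8 --net=none --hostname repl --add-host repl:172.17.15.1 clojure bash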
In my design, every application container should have a static IP address, and I can use a hosts file to locate the container network.
Why not rethink your original design? Let's start with two obvious options:
Container linking
Add a service discovery component
Container linking
This approach is described in the Docker documentation.
https://docs.docker.com/userguide/dockerlinks/
When launching a container you specify that it is linked to another. This results in environment variables being injected into the linked container containing the IP address and port number details of the collaborating container.
Your application's configuration stops using hard coded IP addresses and instead uses soft code references that are set at run-time.
Currently docker container linking is limited to a single host, but I expect this concept will continue to evolve into multi-host implementations. Worst case you could inject environment variables into your container at run-time.
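A minimal sketch of what linking looks like with the legacy --link flag (image and alias chosen arbitrarily here):
docker run -d --name db postgres
docker run --rm --link db:db alpine env | grep '^DB_'
# prints variables such as DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT,
# which the application can read instead of a hard-coded IP address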
Service discovery
This is a common approach taken by large distributed applications. Examples implementations of such systems would be:
zookeeper
etcd
consul
..
With such a system in place, your back-end service components (e.g. a database) would register themselves on startup, and client processes would dynamically discover their location at run-time. This form of decoupled operation is very Docker friendly and scales very well.
