Access container on second Docker host by name with overlay network

I have two Linux machines (host1 and host2) running Docker in swarm mode and configured an overlay network by running:
host1:$ docker network create -d overlay --attachable appnet
host1 is my manager; there I have a web server, which I connected to the network above:
host1:$ docker run --name web --network appnet --rm -d -p 8000:80 nginx
On host2 I created a simple container to test some connectivity:
host2:$ docker run -dit --network appnet --name alp1 alpine ash
host2:$ docker exec -it alp1 ash
Pinging the web container by name works fine:
# ping web -c 2
PING web (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.777 ms
64 bytes from 10.0.2.2: seq=1 ttl=64 time=0.473 ms
wget with the IP address of host1 also works:
# wget -O out <host1-ip>:8000
Connecting to <host1-ip>:8000 (<host1-ip>:8000)
...
But when I try to wget by container name the connection times out:
# wget -O out web
Connecting to web (10.0.2.2:80)
wget: can't connect to remote host (10.0.2.2): Operation timed out
Does anyone know why this is happening? I would have expected this to work, as I thought the whole point of connecting containers over overlays was to not have to publish a dozen ports.

The overlay network is a virtual network that all nodes in your swarm can communicate across. Containers attached to one of these networks get an IP address that belongs to that network, which is different from the IP address of the host they're running on.
The only reason your GET request to <host1-ip>:8000 works is that you published that port on the host (-p 8000:80). That's how traffic external to the swarm reaches one of your containers.
What you really want is a service.
# Get rid of your container named web on host1
host1:$ docker kill web
# Create a new service named web on a manager node that attaches to your network
host1:$ docker service create --name web --network appnet nginx
yqigiyn9yqhxqcw74voe0tdcg
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
At this point, you can reach the nginx container from any of the nodes on the port that it exposes (80 by default).
host2:$ curl http://web:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>
You can definitely get this working imperatively with the Docker CLI, but you'll have a much easier time managing changes and configuration if you look at Docker Compose. It's a declarative way of setting all of the switches and options you need for managing a service.
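As a minimal sketch of what that could look like (my illustration, not part of the original answer; it assumes Compose file format 3 and the attachable appnet network created earlier):
version: "3.7"
services:
  web:
    image: nginx
    networks:
      - appnet
networks:
  appnet:
    external: true
Deploying it with docker stack deploy -c docker-compose.yml app creates the same web service declaratively.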

Docker internal DNS not resolving [duplicate]

I plan to split my monolithic server up into many small Docker containers but haven't found a good solution for "inter-container communication" yet.
I know how to link containers together and how to expose ports, but none of these solutions are satisfying to me.
Is there any solution to communicate via hostnames (container names) between the containers like in a traditional server network?
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
$ docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases.
Edit: After Docker 1.9, the docker network command (see below https://stackoverflow.com/a/35184695/977939) is the recommended way to achieve this.
My solution is to set up dnsmasq on the host and have its DNS records updated automatically: the "A" records take the containers' names and point to the containers' IP addresses, refreshed automatically (every 10 seconds). The updater script is pasted here:
#!/bin/bash
# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}
# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}
# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}
declare -A service_map

while true
do
    changed=false
    while read line
    do
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP addr changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP address $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)
    # a change of IP address occurred, restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi
    ${SLEEP} "$INTERVAL"
done
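A typical invocation would be something like this (the docker-dns-sync.sh name is my own; the variables are the ones defined at the top of the script):
# run as root so the script can restart dnsmasq
$ INTERVAL=5 DNSMASQ_CONFIG=/etc/dnsmasq.d ./docker-dns-sync.sh &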
Make sure your dnsmasq service is listening on docker0. Then, start your containers with --dns HOST_ADDRESS to use this mini DNS service.
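For example, assuming dnsmasq answers on the default docker0 address of 172.17.0.1 (check yours with ip addr show docker0), a container started like this will resolve the names registered by the script above:
$ docker run --dns 172.17.0.1 --rm -it alpine ping -c 2 some-container
Here some-container stands for the name of any container the script has picked up.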
Reference: http://docs.blowb.org/setup-host/dnsmasq.html
That should be what --link is for, at least for the hostname part.
With docker 1.10, and PR 19242, that would be:
docker run --net-alias=[]: Add network-scoped alias for the container
(see last section below)
That is what the "Updating the /etc/hosts file" documentation section details:
In addition to the environment variables, Docker adds a host entry for the source container to the /etc/hosts file.
For instance, launch an LDAP server:
docker run -t --name openldap -d -p 389:389 larrycai/openldap
And define an image to test that LDAP server:
FROM ubuntu
RUN apt-get update && apt-get -y install ldap-utils
RUN touch /root/.bash_aliases
RUN echo "alias lds='ldapsearch -H ldap://internalopenldap -LL -b ou=Users,dc=openstack,dc=org -D cn=admin,dc=openstack,dc=org -w password'" > /root/.bash_aliases
ENTRYPOINT bash
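Build that image under the ldaptest name used below (assuming the Dockerfile is in the current directory):
$ docker build -t ldaptest .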
You can expose the 'openldap' container as 'internalopenldap' within the test image with --link:
docker run -it --rm --name ldp --link openldap:internalopenldap ldaptest
Then, if you type 'lds', that alias will work:
ldapsearch -H ldap://internalopenldap ...
That would return people, meaning internalopenldap is correctly reached from the ldaptest container.
Of course, docker 1.7 will add libnetwork, which provides a native Go implementation for connecting containers. See the blog post.
It introduced a more complete architecture, with the Container Network Model (CNM)
That will update the Docker CLI with new "network" commands and document how the "--net" flag is used to assign containers to networks.
docker 1.10 has a new section Network-scoped alias, now officially documented in network connect:
While links provide private name resolution that is localized within a container, the network-scoped alias provides a way for a container to be discovered by an alternate name by any other container within the scope of a particular network.
Unlike the link alias, which is defined by the consumer of a service, the network-scoped alias is defined by the container that is offering the service to the network.
Continuing with the above example, create another container in isolated_nw with a network alias.
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
--alias=[]
Add network-scoped alias for the container
You can use the --link option to link another container with a preferred alias.
You can pause, restart, and stop containers that are connected to a network. Paused containers remain connected and can be seen with docker network inspect. When the container is stopped, it does not appear on the network until you restart it.
If specified, the container's IP address(es) is reapplied when a stopped container is restarted. If the IP address is no longer available, the container fails to start.
One way to guarantee that the IP address is available is to specify an --ip-range when creating the network, and choose the static IP address(es) from outside that range. This ensures that the IP address is not given to another container while this container is not on the network.
$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
$ docker network connect --ip 172.20.128.2 multi-host-network container2
$ docker network connect --link container1:c1 multi-host-network container2
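To confirm that container2 got the static IP and the c1 alias for container1, you can inspect the network (a quick sanity check, not part of the quoted docs):
$ docker network inspect multi-host-network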
EDIT: It is not bleeding edge anymore: http://blog.docker.com/2016/02/docker-1-10/
Original Answer
I battled with it the whole night.
If you're not afraid of bleeding edge, the latest versions of Docker Engine and Docker Compose both implement libnetwork.
With the right config file (which needs to use the version 2 format), you can create services that all see each other. And, as a bonus, you can scale them with docker-compose as well (you can scale any service you want that doesn't bind a port on the host).
Here is an example file:
version: "2"
services:
  router:
    build: services/router/
    ports:
      - "8080:8080"
  auth:
    build: services/auth/
  todo:
    build: services/todo/
  data:
    build: services/data/
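With that file in place, a typical workflow would be (service names taken from the file above; scaling only works for services that don't bind a host port):
$ docker-compose up -d
$ docker-compose scale todo=3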
And the reference for this new version of compose file:
https://github.com/docker/compose/blob/1.6.0-rc1/docs/networking.md
As far as I know, this is not possible using only Docker. You need some DNS service to map container IPs to hostnames.
If you want an out-of-the-box solution, one option is, for example, Kontena. It comes with overlay network technology from Weave, which is used to create virtual private LAN networks for each service, so every service can be reached at a service_name.kontena.local address.
Here is a simple example of a WordPress application's YAML file, where the WordPress service connects to the MySQL server at the wordpress-mysql.kontena.local address:
wordpress:
  image: wordpress:4.1
  stateful: true
  ports:
    - 80:80
  links:
    - mysql:wordpress-mysql
  environment:
    - WORDPRESS_DB_HOST=wordpress-mysql.kontena.local
    - WORDPRESS_DB_PASSWORD=secret
mysql:
  image: mariadb:5.5
  stateful: true
  environment:
    - MYSQL_ROOT_PASSWORD=secret

Unable to get Docker Swarm on Windows Server 2019 ingress network working between containers

I have found some posts mentioning support for routing mesh using an overlay network on Windows Server 2019 (see the references below).
After lots of troubleshooting, I am unable to properly connect two simple containers on a user-defined overlay network, created with the following network and services:
docker network create -d overlay --attachable testnet
docker service create -d --name web --network testnet --publish 80:80 microsoft/iis
docker service create -d --network testnet --name pingweb mcr.microsoft.com/windows/nanoserver:1809 ping web
I am able to reach the iis website when browsing my docker host on port 80, but my other container pingweb is unable to ping my main web container when they are on the same overlay network.
PS C:\Users\me> docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
ga8egf2nwsir   ingress   overlay   swarm
bf164fa77349   nat       nat       local
81fb626259e1   none      null      local
l9p7c8p2fy3g   testnet   overlay   swarm
PS C:\Users\me> docker service create -d --name web --network testnet --publish 80:80 microsoft/iis
mk3r1a7za4jk21321kmzlddxr
PS C:\Users\me> docker service create -d --network testnet --name pingweb mcr.microsoft.com/windows/nanoserver:1809 ping web
j3z0xso7shghctva3od9qct10
PS C:\Users\me> docker service logs pingweb
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 |
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Pinging web [10.0.29.180] with 32 bytes of data:
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Request timed out.
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Request timed out.
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Request timed out.
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Request timed out.
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 |
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 | Ping statistics for 10.0.29.180:
pingweb.1.wbtpizulcxvg@WS2019DockerNode1 |     Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
PS C:\Users\me>
I have also noticed that I am unable to ping external sites whenever my pingweb container is on the overlay network. I tested pinging 8.8.8.8, but it doesn't work when running on the overlay network either: I get the same Request timed out as when trying to ping my web container on the testnet network.
docker service create -d --network testnet --name pingweb mcr.microsoft.com/windows/nanoserver:1809 ping 8.8.8.8
Question(s):
Is this a known issue?
How can I get this to work?
References:
https://learn.microsoft.com/en-us/virtualization/community/team-blog/2017/20170926-docker-s-routing-mesh-available-with-windows-server-version-1709
https://www.docker.com/blog/docker-windows-server-1709/
Docker ingress mode service publishing on Windows
Parity with Linux service publishing options has been highly requested by Windows customers. Adding support for service publishing using ingress mode in Windows Server 1709 enables use of Docker’s routing mesh, allowing external endpoints to access a service via any node in the swarm regardless of which nodes are running tasks for the service.
These networking improvements also unlock VIP-based service discovery when using overlay networks so that Windows users are not limited to DNS Round Robin.
Check out the corresponding post on the Microsoft Virtualization blog for details on the improvements.
After lots of struggling with this, it turns out the fix was provided as part of the Windows Server 2019 update KB4580390.
GitHub thread around the issue:
https://github.com/moby/moby/issues/40998#issuecomment-719889423
Update fixing the issue:
https://www.catalog.update.microsoft.com/Search.aspx?q=KB4580390

Link containers in docker (RancherOS and command line)

I run RancherOS to run Docker containers.
I created a container in the GUI to run my databases (image: mysql, name: r-mysql-e4e8df05). Different containers use it.
I can link other containers to it in the GUI.
This time I would like to automate the creation and starting of a container from Jenkins, but the linking is not working well.
My command:
docker run -d --name=app-that-needs-mysql --link mysql:mysql myimages.mycompany.com/appthatneedsmysql
I get error:
Error response from daemon: Could not get container for mysql
I tried different things:
1)
--link r-mysql-e4e8df05:mysql
Error:
Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network
2)
Try to use --net options
Running: docker network ls
NETWORK ID NAME DRIVER SCOPE
c..........e bridge bridge local
4..........c host host local
c..........a none null local
With --net none it succeeds, but it doesn't actually work: the app cannot connect to the DB.
With --net host I get the error message conflicting options: host type networking can't be used with links. This would result in undefined behavior.
With --net bridge I get the error message: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
I also checked in the Rancher GUI where this mysql container runs: it gets a container IP starting with 10.X.X.X.
I also tried to add --net managed but the error: network managed not found
I believe I'm misunderstanding something in this Docker linking process. Please give me some idea of how I can make this work.
(Previously it was working when I created the same container and linked it to the mysql container in the GUI.)
Hey @Tomi, you can expose the mysql container on whatever port you like from Rancher. That way you don't have to link the container; your Jenkins-spawned container can then connect to it on the exposed port on the host. You could also use Jenkins to spin up the container within Rancher, using the Rancher CLI. That way you don't have to surface mysql on the host's network... a few ways to skin that cat with Rancher.
At first glance it seems that Rancher uses a managed network, which docker network ls does not show.
Reproducing the problem
I used dummy alpine containers to reproduce this:
# create some network
docker network create your_invisible_network
# run a container belonging to this network
docker container run \
--detach \
--name r-mysql-e4e8df05 \
--net your_invisible_network \
alpine tail -f /dev/null
# trying to link this container
docker container run \
--link r-mysql-e4e8df05:mysql \
alpine ping mysql
Indeed I get docker: Error response from daemon: Cannot link to /r-mysql-e4e8df05, as it does not belong to the default network.
Possible Solution
A workaround would be to create a user-defined bridge network and simply add your mysql container to it:
# create a network
docker network create \
--driver bridge \
a_workaround_network
# connect the mysql to this network (and alias it)
docker network connect \
--alias mysql \
a_workaround_network r-mysql-e4e8df05
# try to ping it using its alias
docker container run \
--net a_workaround_network \
alpine \
ping mysql
# yay!
PING mysql (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.135 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.084 ms
As you can see in the output, pinging the mysql container via its DNS name is possible.
Good to know:
With user-created bridge networks, DNS resolution works out of the box without having to explicitly --link containers :)
Containers can belong to several networks, which is why this works. In this case the mysql container belongs to both your_invisible_network and a_workaround_network.
I hope this helps!

Status of RabbitMQ in different docker container

I am starting a Docker container with RabbitMQ for testing purposes. I would like to start a second container which runs a short command and checks that RabbitMQ is actually running. The second container should block my build pipeline until it has determined that RabbitMQ has successfully started in the first container.
How can I specify to rabbitmqctl which hostname to use to get the status of RabbitMQ? I am linking the two containers together via Docker, so port issues should not be a problem.
Example:
rabbitmqctl -n rabbitmq status # does not work, prints diagnostic info
Status of node rabbitmq@rabbitmq ...
Error: unable to perform an operation on node 'rabbitmq@rabbitmq'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbitmq@rabbitmq
DIAGNOSTICS
attempted to contact: [rabbitmq@rabbitmq]
rabbitmq@rabbitmq:
* connected to epmd (port 4369) on rabbitmq
* epmd reports: node 'rabbitmq' not running at all
other nodes on rabbitmq: [rabbit]
* suggestion: start the node
Current node details:
* node name: rabbitmqcli52@e3ea1e73df02
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: AB9AFN3zvcyAWBl6ZVVOJw==
Your second container needs to be aware of the first one:
docker run --link rabbitmq ...
The host will be available now from inside the container:
$ grep rabbitmq /etc/hosts
172.17.0.2 rabbitmq 01ad3098b423
$ ping rabbitmq
PING rabbitmq (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.073 ms
Keep in mind container linking is deprecated in favor of custom networks.
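A minimal sketch of the same check with a user-defined network instead of a link (the rabbitnet name is my own):
$ docker network create rabbitnet
$ docker run -d --name rabbitmq --network rabbitnet rabbitmq:3
$ docker run --rm --network rabbitnet alpine ping -c 2 rabbitmq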
First create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1
I'm going to assume you use the rabbitmq management container, and I'll give it the hostname rab1. I'll give the name ubuntu1 to the other container, from which you want to access rab1. So first start rab1 and add ubuntu1 to its hosts file:
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host ubuntu1:172.18.0.12 --name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
And after that, start ubuntu1con with the hostname ubuntu1:
docker run -d --net mynet1 --ip 172.18.0.12 --hostname ubuntu1 --add-host rab1:172.18.0.11 --name ubuntu1con ubuntu
Now when you go into ubuntu1con you are able to access rab1 by name or ip address.
Assuming the two containers are linked/networked properly, can ping each other, and the Rabbitmq client rabbitmqctl is installed in the second container, the following should work:
docker exec -it <second container's name or ID> rabbitmqctl -n rabbit@rabbitmq status
Pass --hostname rabbitmq to docker run when starting the rabbitmq container, and make sure that hostname -s inside the rabbitmq container prints rabbitmq.
If the error is still seen, add <IP address of rabbitmq container> rabbitmq in /etc/hosts of the second container and re-try.
https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks has info about how to network two containers so that they can ping each other on the same network.
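To actually block a build pipeline until the broker is ready, one option is to retry that status command in a loop (a sketch; placeholder and node name taken from the command above):
until docker exec <second container's name or ID> rabbitmqctl -n rabbit@rabbitmq status >/dev/null 2>&1
do
    echo "waiting for rabbitmq..."
    sleep 2
done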

Docker ping container on other nodes

I have 2 virtual machines (VM1 with IP 192.168.56.101 and VM2 with IP 192.168.56.102, which can ping each other) and these are the steps I'm doing:
- Create the consul container on VM1 with 'docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap'
- Create the swarm manager on VM1 with 'docker run -d -p 3376:3376 swarm manage -H 0.0.0.0:3376 --advertise 192.168.56.101:3376 consul://192.168.56.101:8500'
- Create swarm agents on each VM with 'docker run -d swarm join --advertise <VM-IP>:2376 consul://192.168.56.101:8500'
If I run docker -H 0.0.0.0:3376 info I can see both nodes connected to the swarm and both are healthy. I can also run containers and they are scheduled to the nodes. However, if I create a network, assign a few containers to this network, and then SSH into one container and try to ping every other container, I can only reach the containers which are running on the same virtual machine.
Both Virtual Machines have these DOCKER_OPTS:
DOCKER_OPTS="--cluster-store=consul://192.168.56.101:8500 --cluster-advertise=<VM-IP>:0 -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"
I don't have a direct quote, but from what I've read on the Docker GitHub issue tracker, ICMP packets (ping) are never routed between containers on different nodes.
TCP connections to explicitly opened ports should work, but as of Docker 1.12.1 this is buggy.
Docker 1.12.2 has some bug fixes with regard to establishing connections to containers on other hosts. But ping is not going to work across hosts.
You can only ping containers on the same node because you attach them to a local scope network.
As suggested in the comments, if you want to ping containers across hosts (meaning from a container on VM1 to a container on VM2) using docker swarm (or docker swarm mode) without explicitly opening ports, you need to create an overlay network (or globally scoped network) and assign/start containers on that network.
To create an overlay network:
docker network create -d overlay mynet
Then start the containers using that network:
For Docker Swarm mode:
docker service create --replicas 2 --network mynet --name web nginx
For Docker Swarm (legacy):
docker run -itd --network=mynet busybox
For example, if we create two containers (on legacy Swarm):
docker run -itd --network=mynet --name=test1 busybox
docker run -itd --network=mynet --name=test2 busybox
You should be able to docker attach to test2 and ping test1, and vice versa.
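Concretely, using docker exec instead of attach (so you don't have to detach from the container afterwards):
$ docker exec test2 ping -c 2 test1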
For more details you can refer to the networking documentation.
Note: If containers still can't ping each other after the creation of an overlay network and attaching containers to it, check the firewall configurations of the VMs and make sure that these ports are open:
data plane / vxlan: UDP 4789
control plane / gossip: TCP/UDP 7946
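For example, on hosts using ufw, opening those ports could look like this (an illustration; adapt to your firewall):
$ sudo ufw allow 4789/udp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp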
