Link Docker containers running on different hosts? - docker

I'm getting started with container-based architectures using Docker, and I have a question that may be nonsense.
Does it make sense to link Docker containers that are running on different hosts?
Say we have two containers:
barDatabase
fooService
If both are on the same host, we would link barDatabase to fooService, giving fooService a hostname it can use to reach the database.
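Something like this, I believe (the db alias and image names are just examples):
# start the database, then link the service to it under the alias "db"
docker run -d --name barDatabase some-db-image
docker run -d --name fooService --link barDatabase:db some-service-image
# fooService can now reach the database at the hostname "db"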
But if they are running on different machines:
barDatabase -> machine1.company.local
fooService -> machine2.company.local
Would it still be necessary to link them? Couldn't we just use the original hostnames without linking them?
Thanks.

Yes and no. Newer versions of Docker have the docker network command - for multiple hosts this requires a bit of extra config, for example an etcd instance to manage the network state.
In doing so, you can then:
docker network create -d overlay somenetname
docker run -d --net somenetname --name barDatabase yourimage
And on your other host:
docker run -d -p 8080:8080 --net somenetname --name fooService service_image
You'll then be able to ping barDatabase as if it were a hostname from fooService. And fooService will attach to the external network and act as a gateway.
This works with Docker 1.9.1 but not 1.8.2, on CentOS. (So I would assume it's a 1.9+ feature, but I can't find a direct source.)
More detail:
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
It requires a bit more faff to set up though, because you do have to configure etcd (or another key-value store).
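To give a rough idea of that extra configuration (the etcd address and interface name below are placeholders, not from this setup), each Docker engine is pointed at the shared key-value store before the network is created:
# on every host: tell the engine where the shared key-value store lives
# (on Docker 1.9 the daemon binary is invoked as "docker daemon" instead of "dockerd")
dockerd --cluster-store=etcd://kv.company.local:2379 \
        --cluster-advertise=eth0:2376
# the same settings can also go into /etc/docker/daemon.json as
# "cluster-store" and "cluster-advertise"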
I've been using this to put a multi-node elasticsearch instance on a private network, which I would assume is similar to your use case. (3 es nodes on 3 hosts, with logstash feeding in, and kibana acting as a gateway, along with an nginx admin proxy that does some security/rewrite)

In this case you'd have to publish the ports you want to access from the database container on machine1, and then on machine2 you'd just point at machine1 on the published port, as you expected. There's no need (and AFAIK no way) to directly link containers running on different machines.
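A minimal sketch of that approach (the image names, port, and DB_HOST variable are made-up examples):
# machine1: publish the database port on the host
machine1$ docker run -d --name barDatabase -p 5432:5432 postgres
# machine2: point the service at machine1's published port
machine2$ docker run -d --name fooService \
    -e DB_HOST=machine1.company.local -e DB_PORT=5432 fooservice-image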

Related

Multiple Docker host machine communication

Suppose I want to connect a container to another container, where the two Docker containers are running on different machines. How do I do that? Hopefully the attached picture will help to explain what I need. Thanks.
This works exactly the same way as if neither process was running in Docker: connect to the other system's IP address and the port you published when you launched the container.
machine02$ docker run --name m2-c1 -p 12345:80 image1
machine01$ docker run --name m1-c5 \
> -e CONTAINER_1_URL=http://192.168.1.102:12345 \
> image5
If you find yourself doing this often, a clustered setup like Kubernetes or Docker Swarm is built for this sort of environment. They have a piece called an overlay network that would allow all 10 containers to share a single "network", so you can directly call c1 as a host name and reach either copy of it. A non-Docker service discovery system, like Hashicorp's Consul, can also help remember what service is running on which node.
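As a hedged sketch of the overlay-network variant (the network name is made up; the service and image names reuse the ones above), both services can then reach each other by name:
# on a swarm manager: create an overlay network and run both services on it
docker network create -d overlay app-net
docker service create --name c1 --network app-net image1
docker service create --name c5 --network app-net \
    -e CONTAINER_1_URL=http://c1:80 \
    image5
# inside the network, c1 resolves as a host name, so no host IP or published port is needed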

docker's embedded dns on the default bridged network

This question is probably addressed to all Docker gurus. But let me give some background first. I faced DNS resolution problems (on Docker's default "bridge" network) until I read the following in the documentation at https://docs.docker.com/engine/userguide/networking/
The docker network inspect command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option.
As the --link option is deprecated, makes any docker run command hang, and eventually crashes the Docker daemon process (locally), I tried using a different user-defined bridge network and pinned dummy instances to it.
docker network create -d bridge --subnet=172.15.0.0/16 \
  --gateway=172.15.0.1 \
  -o com.docker.network.bridge.default_bridge=false \
  -o com.docker.network.bridge.enable_icc=true \
  -o com.docker.network.bridge.enable_ip_masquerade=true \
  -o com.docker.network.driver.mtu=1500 \
  -o com.docker.network.bridge.name=docker1 \
  -o com.docker.network.bridge.host_binding_ipv4=0.0.0.0 a
docker run --name db1 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker run --name db2 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7.16
docker network connect --ip 172.15.0.40 a db1
docker network connect --ip 172.15.0.41 a db2
Now resolution of the containers named via --name works fine using ping, but here is the question:
Why is service/container name resolution not possible on the default bridge network?
It would be great if any Docker network guru could give a hint. Regards.
Why is service/container name resolution not possible on the default bridge network?
There's no technical reason this would not be possible; it was a decision made to keep backward compatibility.
The default ("bridge") network never supported service discovery through a built in DNS, and when the feature was under development, maintainers of some projects raised concerns that they did not want this added on the default network, as it would block alternative implementations.
In addition, custom networks are designed to explicitly allow containers to communicate. On the default network, this is achieved by disabling inter-container communication (--icc=false) and using --link to establish a link between containers. Having automatic discovery for any container connected to the default network would make this a lot more complicated to use.
So: create a custom network, and attach containers to that network if they should be able to communicate with each other.
Note that in many cases, not all of the options you specified are needed; simply running docker network create foo should work for most use cases.
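A minimal sketch of that (the container names and images here are just examples):
docker network create foo
docker run -d --name web --net foo nginx
# name resolution via the embedded DNS works on the user-defined network
docker run --rm --net foo busybox ping -c 1 web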

RabbitMQ cluster by docker-compose on different hosts and different projects

I have 3 projects that are deployed on different hosts. Every project has its own RabbitMQ container. But I need to create a cluster from these 3 hosts, using the same vhost but different user/login pairs.
I tried Swarm and overlay networks, but Swarm is aimed at running standalone containers and doesn't work with Compose. I also tried docker-compose bundle, but that doesn't work as expected :(
I assumed it would work something like this:
1) On the manager node I create an overlay network.
2) In every compose file I extend the networks config of the RabbitMQ container with my overlay network.
3) Everything works as expected and I don't have to publish the RabbitMQ port to the Internet.
Any idea, how can I do this?
Your approach is right, but Docker Compose doesn't work with Swarm Mode at the moment. Compose just runs docker commands, so you could script up what you want instead. For each project you'd have a script like this:
docker network create -d overlay app1-net
docker service create --network app1-net --name rabbit-app1 rabbitmq:3
docker service create --network app1-net --name app1 your-app-1-image
...
When you run all three scripts on the manager, you'll have three networks, and each network will have its own RabbitMQ service (just 1 container by default; use --replicas to run more than one). Within the network, other services can reach the message queue by the DNS name rabbit-appX. You don't need to publish any ports, so Rabbit is not accessible outside of the Docker network.
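An application in the same stack can then reach the broker by that service name; roughly like this, with placeholder credentials and vhost:
# quick check from the app's container (assuming the image ships ping)
docker exec $(docker ps -q -f name=app1) ping -c 1 rabbit-app1
# the AMQP URL the app would use has the form
#   amqp://user1:secret@rabbit-app1:5672/shared_vhost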

docker deploy won't publish port in swarm

I've got a swarm set up with a two nodes, one manager and one worker. I'd like to have a port published in the swarm so I can access my applications and I wonder how I achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This publishes port 80 when I run docker-compose up and it works just fine; however, when I run a bundled deploy
docker deploy my-service
This won't publish the port, so docker ps just shows 80/tcp instead of a mapping to a host port. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of config to actually publish this port in a multi-host swarm.
Can someone help me understand what I need to configure/do to make this publish a port?
My best case scenario would be that port 80 is exposed, and if I access it from different hostnames it will send me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HA proxy; it seems great and is supported by Docker themselves, however I cannot seem to apply it separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description at the bottom of how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However, I cannot find a way to link services through the docker service create command. Ideally there would be a way to set up a network so that when I attach a service to it, the HAProxy picks it up.
-- Marcus
As far as I understand, at the moment you can only publish ports by updating the service after creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. The port won't show up in docker ps because it's not published on a single host; it's published on all nodes so that the swarm can load balance between service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80
docker service ls will display the port mappings.
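Alternatively, a sketch of publishing the port at creation time and checking the result (reusing the image from the question):
docker service create --name my-service_server -p 80:80 \
    my-hub.company.com/application/server:latest
# the published port shows up on the service endpoint, not in docker ps
docker service inspect --format '{{json .Endpoint.Ports}}' my-service_server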

How to link Docker services across hosts?

Docker allows servers from multiple containers to connect to each other via links and service discovery. However, from what I can see this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.
There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of github projects for managing Docker deployments that appear to have attempted to support this use-case.
Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:
What (if any) is the current predominant method for linking across hosts in Docker, and
Are there any plans for supporting this functionality directly in the Docker system?
Update
Docker has recently announced a new tool called Swarm for Docker orchestration.
Swarm allows you to "join" multiple docker daemons: you first create a swarm, start a swarm manager on one machine, and have docker daemons "join" the swarm manager using the swarm's identifier. The docker client connects to the swarm manager as if it were a regular docker server.
When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:
$ docker run -d -P -e constraint:storage=ssd mysql
One of the supported constraints is "node", which allows you to pin a container to a specific hostname. The swarm also resolves links across nodes.
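For example, pinning to a specific host would look roughly like this (the hostname is a placeholder, and the exact constraint syntax may vary between Swarm versions):
$ docker run -d -P -e constraint:node==machine1 mysql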
In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.
Swarm is now in beta phase.
Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker in that the pattern consists of one or more additional containers that act as proxies.
Additionally, there are several third-party extensions to make Docker cluster-capable, including:
Connecting the Docker network bridges on two hosts; lightweight and varied solutions exist, but generally with some caveats
DNS-based discovery e.g. with skydock and SkyDNS
Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production
UPDATE 3
Libswarm has been renamed Swarm and is now a separate application.
Here is the demo from the GitHub page to use as a starting point:
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>
# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>
# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>
UPDATE 2
The official approach is now to use libswarm; see a demo here.
UPDATE
There is a nice gist for openvswitch-based host communication in Docker using the same approach.
To allow service discovery there is an interesting approach based on DNS called skydock.
There is also a screencast.
Here is also a nice article using the same pieces of the puzzle, but additionally adding VLANs on top:
http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html
The patching has nothing to do with the robustness of the solution. Docker is really just a sort of DSL on top of Linux Containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux Containers.
So you can use these solutions safely and wait until you can do it in a simpler way once Docker implements it.
Weave is a new Docker virtual network technology that acts as a virtual Ethernet switch over TCP/UDP - all you need is a Docker container running Weave on each host.
What's interesting here is:
Instead of links, use static IPs/hostnames on your virtual network
Hosts don't need full connectivity; a mesh is formed based on which peers are available, and packets will be routed multi-hop to where they need to go
This leads to interesting scenarios like:
Create a virtual network across the WAN; none of the Docker containers will know or care what actual network they sit in
Move your containers to different physical Docker hosts and Weave will detect the peers accordingly
For example, there's a guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed Weave on each node in /home/core, plus my laptop's Vagrant Docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
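A rough sketch of that workflow with the Weave CLI of that era (host names, container names, and the address range are placeholders):
# host1: start the Weave router
host1$ weave launch
# host2: start Weave and peer with host1
host2$ weave launch host1.example.com
# give each container a static address on the virtual network
host1$ weave run 10.2.1.1/24 -d --name cassandra-1 cassandra
host2$ weave run 10.2.1.2/24 -d --name cassandra-2 cassandra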
Update
Docker 1.12 contains the so-called swarm mode and also adds a service abstraction. These features probably aren't mature enough for every use case, but I suggest you keep an eye on them. Swarm mode at least helps in a multi-host setup, though it doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) should help you access containers by name, provided the names are well known - the generated names in a Swarm context won't be so easy to address.
With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.
You'll need a K/V store (e.g. Consul) that allows state to be shared across the Docker engines on every host. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
Then you create a new overlay network like this:
$ docker network create --driver overlay my-network
Containers can now be run with the network name as a run parameter:
$ docker run -itd --net=my-network busybox
They can also be connected to a network when already running:
$ docker network connect my-network my-container
More details are available in the documentation.
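For a quick sanity check that name resolution works across hosts (reusing the container name from the example above):
# on the other host: attach a throwaway container to the same overlay network
$ docker run --rm --net=my-network busybox ping -c 1 my-container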
The following article describes nicely how to connect docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:
Open vSwitch: https://gist.github.com/noteed/8656989
Tinc: https://gist.github.com/noteed/11031504
The advantage I see in this solution over the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack until Docker gets a nicer story about multi-host (or multi-daemon) setups.
Note: I know there is another answer pointing to my first Gist, but I don't have enough karma to edit or comment on that answer.
As mentioned above, Weave is definitely a viable solution for linking Docker containers across hosts. Based on my own experience, it is fairly straightforward to set up. It now also has a DNS service, which lets you address containers by their DNS names.
On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.
It seems like Docker swarm 1.14 allows you to (a short sketch of both flags follows this list):
assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames
assign services to a machine using --constraint 'node.hostname == <host>'
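A hedged sketch of both flags together (the service, hostname, and node names are placeholders):
docker service create --name web \
    --hostname web-1 \
    --constraint 'node.hostname == machine1' \
    nginx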
