I've managed to run Docker Swarm mode with multiple hosts using Docker Toolbox, but I am unable to create a swarm with Docker Desktop since it apparently only offers a single-node swarm.
Is there any way to get this working with Docker Desktop or is it not supported?
No. But yes. But actually no. But technically yes.
No. Docker Desktop does not support this. It manages a single Docker node in a VM and has no capability to manage multiple Docker engines.
But yes. docker:dind is an image you can easily use to deploy multiple Docker nodes as containers, and then run swarm init / swarm join to create a swarm cluster hosted on Docker itself. You can even swarm join the docker-desktop node as the swarm manager, which means you can talk to your local Docker Desktop node to control the swarm.
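A minimal sketch of that approach (the container and network names are arbitrary assumptions; this only demonstrates the mechanics, not the port-exposure problem described below):
# create a user-defined network so the "nodes" can resolve each other by name
$ docker network create dind-net
# start two privileged docker:dind containers to act as swarm nodes
$ docker run -d --privileged --name node1 --network dind-net docker:dind
$ docker run -d --privileged --name node2 --network dind-net docker:dind
# initialise the swarm on node1 and print a worker join token
$ docker exec node1 docker swarm init --advertise-addr eth0
$ docker exec node1 docker swarm join-token -q worker
# join node2 using the token printed above
$ docker exec node2 docker swarm join --token <worker-token> node1:2377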
But actually, no.
Unless your use case is a very limited hello-world-on-swarm tutorial, there is no support for exposing ports from the dind swarm to the host. Even if the host Docker acts as the manager, the overlay networking required for ingress needs communication over ports 2377, 4789/udp, and 7946, and since the host is not part of its own overlay networks, this will never work.
So, communicating with tasks running on the swarm is basically impossible.
But technically yes. play-with-docker apparently runs Docker swarms using dind. They do some heavy lifting to expose a restricted set of ports via L7 load balancers. Pretty cool, but not at all easy to do at home. If you have a spare Dell PowerEdge or equivalent blade server with 120+ cores just lying around, and want to expose it as a Docker swarm rather than split it into VMs... perhaps this is a viable approach.
I'm learning about Docker Swarm and got confused about the swarm discovery option. I see that lots of tutorials on the internet use this option to create containers with docker-machine, but when I open the Docker Swarm documentation it says:
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker Swarm. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode.
So, what are the use cases for the discovery options? All the tutorials use docker-machine to create a swarm; do I always need it, or can I just install Docker on the machines in my cluster, join them in a swarm, and use it normally?
I saw the names Docker Swarm and Docker Swarm Mode; is there any difference, or are they just different ways of referring to the same feature?
Q: Is Docker Swarm discovery still relevant?
A: No, not if you use Docker Swarm Mode and an overlay network (see below).
Q: Is there any difference between Docker Swarm and Docker Swarm Mode?
A: Yes. TL;DR: Docker Swarm is deprecated and should not be used anymore; Docker Swarm Mode (or simply Swarm Mode) is the recommended way of clustering containers, and provides reliability, load balancing, scaling, and rolling service upgrades.
Docker Swarm (official doc):
is the old-fashioned way (< 1.12) of clustering containers
uses a dedicated container for building a Docker Swarm cluster
needs a discovery service like Consul to reference the containers in the cluster
Swarm Mode (official doc):
is the new and recommended way (>=1.12) of clustering containers on host nodes (called managers / workers)
is built into the Docker Engine; you don't need an additional container
has a built-in discovery service if you use an overlay network (DNS resolution is done within this network); you don't need an additional container
You can have a look at this SO thread on the same topic.
Q: Do I always need docker-machine to create a swarm?
A: No, docker-machine is a helper to create virtual hosts in the cloud (Amazon EC2, Azure, DigitalOcean, Google, OpenStack...) or on your own network with VirtualBox.
To create a Swarm Mode cluster, you need to:
have a multi-host cluster with the Docker Engine installed on each host (called a node); this is what docker-machine facilitates
run docker swarm init to switch to Swarm Mode on your first manager node
run docker swarm join on the worker nodes to add them to the cluster (a minimal sketch follows this list)
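A minimal sketch of those steps (the IP address and token are placeholders):
# on the first manager node
$ docker swarm init --advertise-addr <manager-ip>
# this prints a ready-made "docker swarm join" command containing a token
# on each worker node, paste that command, e.g.:
$ docker swarm join --token <worker-token> <manager-ip>:2377
# back on the manager, verify the nodes
$ docker node ls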
There are some subtle adjustments to Swarm Mode that increase high availability (the recommended number of managers in the swarm, node placement across multiple availability zones in the cloud).
Hope this helps!
After looking through the official Docker Swarm documentation, GitHub issues, and Stack Overflow answers, I'm still at a loss as to why I am having this problem.
Issue at hand: docker-compose up starts services outside the swarm even though the swarm is active and has 2 nodes.
I'm using Docker version 1.12.1.
Following the swarm tutorial, I was able to start and scale my swarm using docker service create without any issues.
Running docker-compose up with a version 2 docker-compose.yml results in services starting outside of the swarm; I can see them with docker ps but not with docker service ls.
I can see that docker-machine is presented as the tool that solves this problem, but then again it requires VirtualBox to be installed.
So my questions are:
Can I use docker-compose with docker swarm (NOT docker-engine) without docker-machine and without the experimental build/bundle functionality?
If docker service create can start a service on any node, is that an indication that the network configuration of the swarm is correct?
What are the advantages/disadvantages of docker-machine versus the experimental build/bundle functionality?
1) No. Docker Compose isn't integrated with the new Swarm Mode yet. Issue 3656 in GitHub is tracking that. If you start containers on a swarm with Docker Compose at the moment, it uses docker run to start containers, which is why you see them all on one node.
2) Yes. Actually, you can use docker node ls on the manager to confirm all the nodes are up and active, and docker node inspect to check a particular node; you don't need to create a service to validate the swarm (a quick sketch follows this answer).
3) Docker Machine is also behind the 1.12 release, so if you start a swarm with Docker Machine it will be the 'old' type of swarm. The old Docker Swarm product needed a whole lot of extra setup for a key-value store, TLS etc. which Swarm Mode does for free.
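A quick sketch of that check (the output below is abridged and the hostnames are just examples):
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
1bcef6utixb0l0ca7gxuivsj0 *   manager1   Ready    Active         Leader
38ciaotwjuritcdtn9npbnkuz     worker1    Ready    Active
$ docker node inspect --pretty worker1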
1) You can't start services using docker-compose on the new Docker "Swarm Mode". There's a feature to convert a docker-compose file to the new dab format which is understood by the new swarm mode but that's incomplete and experimental at this point. You basically need to use bash scripts to start services at the moment.
2) The nodes in a swarm (Swarm Mode) interact using their own overlay network. It's the one named ingress when you do docker network ls. You need to set up your own overlay network to run services in, e.g.:
docker network create -d overlay mynet
docker service create --name serv1 --network mynet nginx
3) I'm not sure what feature you mean by "experimental build". docker-machine is just a way to create hosts (the nodes). It facilitates setting up the Docker daemon on each host and the certificates, and allows some basic maintenance (renewing the certs, stopping/starting a host if you're the one who created it). It doesn't create services, volumes, or networks, or manage them; that's the job of the Docker API.
I currently have three hosts (docker1, docker2 and docker3) which I have not set up using Docker Machine, each one running the v1.12-rc4 Docker daemon.
I run docker swarm init on docker1, which in turn prints a docker swarm join command which I run on both docker2 and docker3. At that point, running docker info on each host contains the Swarm: active line.
It is at this point that the behavior seems to differ from what I used to get with the standalone Swarm container. In particular, running docker network ls will only show me the networks on the local host, and when I try to create an overlay network, the worker nodes do not seem to be aware of it (i.e. it does not show up in their docker network ls).
I feel like I have missed out on some important information relating to the workings of the Swarm Mode as opposed to the Swarm container.
What is the correct way of setting up such a cluster without Docker Machine on Docker 1.12 while getting the overlay network feature?
I too thought this was an issue when I first started using it.
This works a little differently in 1.12rc4 - when you deploy a container to your swarm with that network attached to it, it should then create the network on the other nodes as well.
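A minimal sketch of that lazy propagation (the network and service names are arbitrary):
# on the manager: create the overlay network and a service that uses it
$ docker network create -d overlay appnet
$ docker service create --name web --network appnet --replicas 2 nginx
# on a worker that was assigned a task, the network now shows up
$ docker network ls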
Hope this helps!
Issue
You are using the docker command (used to communicate with your localhost Docker daemon) and not the "swarm" command (used to communicate with the Swarm master).
Solution
It depends on the command you used to start Swarm.
A full step-by-step tutorial (including details on how to deploy an overlay network) is detailed in this answer. I'm sure that reading this will help you ;)
With a network scope of swarm, the network is only propagated to worker nodes on an as-needed basis. If you create a service using that network, and it gets scheduled on a worker node, the network will show up in that node's docker network ls output.
With the now-upcoming 1.13 release, you can get a network that has similar behavior to the non-swarm networks by doing docker network create --attachable .... That network will be valid for both services and normal containers, and will be available to all members of the cluster. As of 1.13.0-rc2, those don't seem to show up in the output of docker network ls.
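For illustration, a sketch of the attachable variant (the names are placeholders, assuming a 1.13+ engine):
# create an overlay network that standalone containers can also join
$ docker network create -d overlay --attachable shared-net
# use it from a service...
$ docker service create --name api --network shared-net nginx
# ...and from a regular container on any node in the cluster
$ docker run -it --rm --network shared-net alpine ping -c1 api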
I have a silly question regarding docker swarm.
I am thinking I can start a web application image in two containers, either on the same server or on two VM servers, and then start a load-balancer container pointing to the two web app containers via IP and port.
In this case, why do I need Docker Swarm for cluster management? What benefits can Docker Swarm bring?
I have read the Docker documentation, but it only introduces what Swarm is and how to use it. I cannot find an answer to why I have to use Swarm.
Thanks
What is Swarm managing? It turns a pool of Docker hosts into a single, virtual Docker host.
Can Swarm auto-restart a container if the container dies? Yes it can, and so can the Docker daemon on each host.
Can Swarm auto-create more nodes if resources are not sufficient? No, it cannot; it does not aim to provide this service. Nevertheless, you can program a node that starts and runs containers when needed.
Which means, if traffic grows fast, do we still manually create more nodes and deploy more containers? Yes, unfortunately.
Update
If needed, here is an answer that details how to deploy a Swarm cluster.
Docker allows servers running in multiple containers to connect to each other via links and service discovery. However, from what I can see this service discovery is host-local. I would like to implement a service that uses other services hosted on a different machine.
There have been several approaches to solving this problem in Docker, such as CoreOS's jumpers, host-local services that essentially proxy to the other machine, and a whole bunch of github projects for managing Docker deployments that appear to have attempted to support this use-case.
Given the pace of development it is hard to follow what current best practices are. Therefore my question is essentially:
What (if any) is the current predominant method for linking across hosts in Docker, and
Are there any plans for supporting this functionality directly in the Docker system?
Update
Docker has recently announced a new tool called Swarm for Docker orchestration.
Swarm allows you to "join" multiple Docker daemons: you first create a swarm, start a swarm manager on one machine, and have the Docker daemons "join" the swarm manager using the swarm's identifier. The Docker client connects to the swarm manager as if it were a regular Docker server.
When a container is started with Swarm, it is automatically assigned to a free node that meets any constraints that have been defined. The following example is taken from the blog post:
$ docker run -d -P -e constraint:storage=ssd mysql
One of the supported constraints is "node", which allows you to pin a container to a specific hostname. The swarm also resolves links across nodes.
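For example, pinning a container to a particular host with the legacy standalone-Swarm constraint syntax (the node name is a placeholder):
$ docker run -d -P -e constraint:node==node-1 nginx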
In my testing I got the impression that Swarm doesn't yet work with volumes at a fixed location very well (or at least the process of linking them is not very intuitive), so this is something to keep in mind.
Swarm is now in beta phase.
Until recently, the Ambassador Pattern was the only Docker-native approach to remote-host service discovery. This pattern can still be used and doesn't require any magic beyond plain Docker in that the pattern consists of one or more additional containers that act as proxies.
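As a rough sketch of the idea using a generic TCP proxy (the alpine/socat image, hostname, and port here are illustrative assumptions, not part of the pattern itself):
# on the consumer host: an "ambassador" container that forwards the local
# redis alias to a Redis server running on another machine
$ docker run -d --name redis-ambassador alpine/socat \
    TCP-LISTEN:6379,fork,reuseaddr TCP:other-host.example.com:6379
# the application container links to the ambassador as if Redis were local
$ docker run -it --rm --link redis-ambassador:redis redis redis-cli -h redis ping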
Additionally, there are several third-party extensions to make Docker cluster-capable. Third-party solutions include:
Connecting the Docker network bridges on two hosts; various lightweight solutions exist, but generally with some caveats
DNS-based discovery e.g. with skydock and SkyDNS
Docker management tools such as Shipyard, and Docker orchestration tools. See this question for an extensive list: How to scale Docker containers in production
UPDATE 3
Libswarm has been renamed to Swarm and is now a separate application.
Here is the GitHub page demo to use as a starting point:
# create a cluster
$ swarm create
6856663cdefdec325839a4b7e1de38e8
# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the other nodes can reach it, it is fine.
$ swarm join --token=6856663cdefdec325839a4b7e1de38e8 --addr=<node_ip:2375>
# start the manager on any machine or your laptop
$ swarm manage --token=6856663cdefdec325839a4b7e1de38e8 --addr=<swarm_ip:swarm_port>
# use the regular docker cli
$ docker -H <swarm_ip:swarm_port> info
$ docker -H <swarm_ip:swarm_port> run ...
$ docker -H <swarm_ip:swarm_port> ps
$ docker -H <swarm_ip:swarm_port> logs ...
...
# list nodes in your cluster
$ swarm list --token=6856663cdefdec325839a4b7e1de38e8
http://<node_ip:2375>
UPDATE 2
The official approach is now to use libswarm; see a demo here.
UPDATE
There is a nice gist for Open vSwitch host-to-host communication in Docker using the same approach.
To allow service discovery there is an interesting approach based on DNS called skydock.
There is also a screencast.
This is also a nice article using the same pieces of the puzzle, but also adding VLANs on top:
http://fbevmware.blogspot.it/2013/12/coupling-docker-and-open-vswitch.html
The patching has nothing to do with the robustness of the solution. Docker is actually only a sort of DSL on top of Linux Containers, and both solutions in these articles simply bypass some of Docker's automatic settings and fall back directly to Linux Containers.
So you can use these solutions safely and wait until Docker implements a simpler way to do it.
Weave is a new Docker virtual network technology that acts as a virtual ethernet switch over TCP/UDP - all you need is a Docker container running Weave on your host.
What's interesting here is
Instead of links, use static IPs/hostnames in your virtual network
Hosts don't need full connectivity, a mesh is formed based on what peers are available, and packets will be routed multi-hop to where they need to go
This leads to interesting scenarios like
Create a virtual network across the WAN; none of the Docker containers will know or care what actual network they sit in
Move your containers to different physical Docker hosts; Weave will detect the peer accordingly
For example, there's a guide on how to create a multi-node Cassandra cluster across your laptop and a few cloud (EC2) hosts with two commands per host. I launched a CoreOS cluster with AWS CloudFormation, installed Weave on each node in /home/core, plus on my laptop's Vagrant Docker VM, and got a cluster up in under an hour. My laptop is firewalled, but Weave seemed to be okay with that; it just connects out to its EC2 peers.
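As a rough sketch of the two-commands-per-host workflow (IPs and names are placeholders; this uses the Weave proxy/weaveDNS style, while older Weave versions used weave run with static IPs instead):
# on host1
$ weave launch
$ eval $(weave env)
$ docker run -d --name db postgres
# on host2, pointing Weave at host1 to form the mesh
$ weave launch <host1-ip>
$ eval $(weave env)
$ docker run -it --rm alpine ping -c1 db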
Update
Docker 1.12 contains the so-called Swarm Mode and also adds a service abstraction. They probably aren't mature enough for every use case, but I suggest you keep them under observation. Swarm Mode at least helps in a multi-host setup, though it doesn't necessarily make linking easier. The Docker-internal DNS server (since 1.11) should help you access container names, if they are well-known - meaning that the generated names in a Swarm context won't be so easy to address.
With the Docker 1.9 release you get built-in multi-host networking. They also provide an example script to easily provision a working cluster.
You'll need a K/V store (e.g. Consul) which allows state to be shared across the different Docker engines on every host. Every Docker engine needs to be configured with that K/V store, and you can then use Swarm to connect your hosts.
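A rough sketch of that setup (the Consul image, address, and interface name are placeholders; the exact daemon flags may vary with your engine version):
# run a Consul instance somewhere reachable by every host
$ docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap
# on each host, start the Docker daemon pointed at the K/V store
# (this was "docker daemon" in the 1.9 era; newer engines use "dockerd")
$ docker daemon --cluster-store=consul://<consul-ip>:8500 \
                --cluster-advertise=eth0:2376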
Then you create a new overlay network like this:
$ docker network create --driver overlay my-network
Containers can now be run with the network name as run parameter:
$ docker run -itd --net=my-network busybox
They can also be connected to a network when already running:
$ docker network connect my-network my-container
More details are available in the documentation.
The following article describes nicely how to connect docker containers on multiple hosts: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
It is possible to bridge several Docker subnets together using Open vSwitch or Tinc. I have prepared Gists to show how to do it:
Open vSwitch: https://gist.github.com/noteed/8656989
Tinc: https://gist.github.com/noteed/11031504
The advantage I see in using this solution instead of the --link option and the ambassador pattern is that I find it more transparent: there is no need for additional containers and, more importantly, no need to expose ports on the host. Actually, I think of the --link option as a temporary hack before Docker gets a nicer story about multi-host (or multi-daemon) setups.
Note: I know there is another answer pointing to my first Gist but I don't have enough karma to edit or comment on that answer.
As mentioned above, Weave is definitely a viable solution for linking Docker containers across hosts. Based on my own experience with it, it is fairly straightforward to set up. It now also has a DNS service, which lets you address containers by their DNS names.
On the other hand, there are CoreOS's Flannel and Juniper's OpenContrail for wiring containers across hosts.
It seems like Docker Swarm 1.14 allows you to:
assign a hostname to a container using the --hostname flag, but I haven't been able to make it work; containers are not able to ping each other by their assigned hostnames.
assign services to a machine using --constraint 'node.hostname == <host>' (a sketch follows this list)
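A sketch combining both options (the service name, hostname, and node name are placeholders; note that --hostname sets the container's own hostname, not a name other containers can automatically resolve):
$ docker service create \
    --name web \
    --hostname web-host \
    --constraint 'node.hostname == worker1' \
    nginx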