In http://docs.docker.com/swarm/install-w-machine/
There are four machines:
Local, where swarm create will run
swarm-master
swarm-agent-00
swarm-agent-01
My understanding is that swarm-master will control the agents, but what is Local used for?
It is for generating the discovery token using the Docker Swarm image.
That token is used when creating the swarm master.
This discovery service associates a token with instances of the Docker Daemon running on each node. Other discovery service backends such as etcd, consul, and zookeeper are available.
So the "local" machine is there to make sure the swarm manager discovers nodes. Its functions are:
register: register a new node
watch: callback method for the swarm manager
fetch: fetch the list of entries
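A minimal sketch of the hosted token flow (the token value is a placeholder):
docker run --rm swarm create                         # register: creates a cluster and prints its token
docker run --rm swarm list token://<cluster-token>   # fetch: lists the nodes registered under that token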
Related
I have two instances of Keycloak running in containers, each one on its own node.
The nodes are bare-metal machines inside my company network.
Keycloak uses TCPPING as its discovery protocol.
Since the two containers run on different nodes, and each instance is pinging inside Docker's default network, they are not able to find each other.
I say the default network because I didn't specify a special network for the two containers.
Any idea how I can make the two instances discover each other in this architecture? I was thinking about Docker Swarm as a solution.
Assuming the two nodes are on the same network and are able to connect to each other, you can get the two containers to discover each other using Docker host networking.
It is as easy as docker run --net=host.
Host networking makes the container share the networking stack of the host node, so the container uses the host's own IP address and, for all practical purposes, looks like just another host on that network.
This allows the two containers to discover each other using TCPPING.
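A minimal sketch, assuming the jboss/keycloak image and illustrative node IPs 10.0.0.11 and 10.0.0.12 (the JGroups variable names and property syntax are assumptions; check your image's documentation):
# Run on each node; list both node IPs so TCPPING can probe its peer.
docker run -d --net=host \
  -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
  -e JGROUPS_DISCOVERY_PROPERTIES='initial_hosts="10.0.0.11[7600],10.0.0.12[7600]"' \
  jboss/keycloak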
Docker Swarm would also enable this. Swarm basically abstracts multiple host nodes so that you can run containers on them as if you were running Docker on a single host, but that requires docker-machine and a whole new setup.
I'd like to set up a Percona XtraDB Cluster using this Docker image. The documentation for the Docker image assumes the use of the etcd discovery service.
My question is this: doesn't Docker ship with built-in service discovery (i.e. a DNS server), making the use of etcd redundant? Or are there use cases where the built-in service discovery is still needed?
How would one typically go about using the built-in service discovery for multi-host setups?
From what I have read, Docker's built-in discovery is only available in Docker Swarm based images/setups. This PXC Docker image is not Swarm capable and thus uses an external discovery service. Running etcd is completely optional, as stated in the Docker image README: you can manually specify each of the other nodes when starting multiple nodes.
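A rough sketch of the manual approach (the image tag, environment variable names, and IPs are assumptions based on common PXC setups; consult the image README for the exact names):
# Bootstrap the first node:
docker run -d -e MYSQL_ROOT_PASSWORD=secret -e CLUSTER_NAME=mycluster \
  percona/percona-xtradb-cluster:5.7
# On the second host, join by pointing at the first node's IP (assumed 10.0.0.11), no etcd involved:
docker run -d -e MYSQL_ROOT_PASSWORD=secret -e CLUSTER_NAME=mycluster \
  -e CLUSTER_JOIN=10.0.0.11 \
  percona/percona-xtradb-cluster:5.7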
How do I start a Docker Swarm cluster with a consul backend?
I can't see any discovery parameter in the docker swarm init or docker swarm join commands. I successfully ran
docker swarm init ....
and then
docker swarm join
to start a cluster using the internal swarm discovery mechanism, but that's not recommended for production. So what am I missing?
You are running the newer Swarm Mode commands but asking about the usage of classic Swarm, which runs as a container. These are two very different things.
Swarm Mode uses a Raft implementation for the manager state, which is not swappable with an external key/value store. You run Swarm Mode with the commands you listed (docker swarm init and docker swarm join); the join command eliminates the need for an external node discovery database. https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/
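For reference, the Swarm Mode flow looks like this (the IP and token are placeholders):
docker swarm init --advertise-addr 10.0.0.11
# init prints a join token; run the join on each worker node:
docker swarm join --token SWMTKN-1-<token> 10.0.0.11:2377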
Classic Swarm used external node discovery, the default being a Docker Hub token, which was not recommended for production. To implement classic Swarm you run docker run swarm manage with options to publish the port used to access the manager and to discover the nodes in the swarm. Classic Swarm has more in common with a reverse proxy to the Docker API than with an orchestration tool like Swarm Mode or Kubernetes. https://docs.docker.com/swarm/reference/manage/
So the answer to your question is to either not use the Swarm Mode commands and instead run the classic Swarm containers, or, if you want Swarm Mode, to not try to plug in your own external node discovery database, because that's not an option. I'd recommend the latter unless you have a specific need for classic Swarm; for completeness, a consul-backed classic setup is sketched below.
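If you do need classic Swarm with a consul backend, it would look roughly like this (the addresses, ports, and /swarm path are placeholders):
# On each node, register the local Docker daemon with consul:
docker run -d swarm join --addr=<node_ip>:2375 consul://<consul_ip>:8500/swarm
# On the manager host, point the manager at the same consul path:
docker run -d -p 4000:2375 swarm manage consul://<consul_ip>:8500/swarm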
According to the official documentation on Install and Create a Docker Swarm, the first step is to create a VM named local, which is needed to obtain the token with swarm create.
Once the manager and all nodes have been created and added to the swarm cluster, do I need to keep the local VM running?
Note: this tutorial is for the first version of Swarm (called Swarm legacy). There is a newer version called Swarm Mode, available since Docker 1.12. Pointing this out because there is a lot of confusion between the two.
No, you don't have to keep the local VM; it is only used to get a unique cluster token from the Docker Hub discovery service.
Spinning up a VM just to generate a token is a bit of overkill. You can bypass this step by:
Running the swarm container directly, if you have Docker for Mac or, more generally, a local instance of Docker running:
docker run --rm swarm create
Directly querying the service discovery URL to generate a token:
curl -X POST "https://discovery.hub.docker.com/v1/clusters"
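Either way you get back a hex cluster token (the value below is illustrative) that you then reference as token://<cluster-token> when joining nodes and starting the manager:
86222732d62b6868d441d430aee4f055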
I'd like to understand the communication mechanism between the Docker Swarm manager and the Docker Swarm agents:
The Swarm manager generates a token.
The Swarm agents are started with this token passed to them (along with their own IP).
Now, when the manager needs to give instructions to the agents, how was it informed that agents exist at these IPs?
Hypothesis:
Do the agents register themselves on some docker.com server with their token, and does the manager get their addresses from it using the same token?
Thank you
Options are described in the doc here:
https://docs.docker.com/swarm/discovery/
In this example I use the hosted discovery with Docker Hub. There are other options like a static file, consul, etcd etc.
You create your docker cluster:
docker run --rm swarm create
This will give you a token to be used as your cluster id: e4802398adc58493...longtoken
You register one/multiple docker host(s) with your cluster
docker run -d swarm join --addr=172.17.42.10:2375 token://e4802398adc58493...longtoken
The IP address provided is the address of your Docker host node; this is how the future manager will learn about the agents/nodes.
You deploy the swarm manager to any of your Docker hosts (let's say the host at 172.17.42.10, whose Docker daemon listens on 2375, the same one I used to create the swarm and register my first Docker host):
docker run -d -p 9999:2375 swarm manager token://e4802398adc58493...longtoken
To use the cluster you set DOCKER_HOST to the IP address and published port of your swarm manager. Note that this is port 9999, as published in the docker run command above, not 2375, which is the Docker daemon on the node:
export DOCKER_HOST="tcp://172.17.42.10:9999"
Using something like docker info should now return information about nodes in your cluster.
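For example (the output shape is illustrative):
docker info          # now lists the registered nodes under "Nodes:"
docker run -d nginx  # the manager schedules the container on one of the nodes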