Adding services to different Consul clients running on the same host - Docker

I've followed the section Testing a Consul cluster on a single host in the Consul docs. Three Consul servers are successfully added and running on the same host for testing purposes. Afterwards, I also followed the tutorial and created a Consul client, node4, to expose ports. Is it possible to add more services and bind them to one of those Consul clients?

Use the new 'swarm mode' instead of the legacy Swarm. Swarm mode doesn't require Consul: service discovery and the key/value store are now part of the Docker daemon. Here's how to create a highly available three-node cluster (three managers).
Create three nodes
docker-machine create --driver vmwarefusion node01
docker-machine create --driver vmwarefusion node02
docker-machine create --driver vmwarefusion node03
Find the ip of node01
docker-machine ls
Set one as the initial swarm manager
docker $(docker-machine config node01) swarm init --advertise-addr <ip-of-node01>
Retrieve the token that lets other nodes join as managers
docker $(docker-machine config node01) swarm join-token manager
This will print out something like
docker swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Add the other two nodes to the swarm as managers
docker $(docker-machine config node02) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
docker $(docker-machine config node03) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Examine the swarm
docker node ls
You should now be able to shut down the leader node and see another manager take over as leader.
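To watch that failover happen, you can stop the current leader and inspect the cluster from a surviving manager. A small sketch, wrapped in a function so nothing runs on load (the node names match the ones created above):

```shell
# Sketch, not executed on load: stop the leader and re-examine the swarm.
check_failover() {
  docker-machine stop node01
  # Point the client at a surviving manager and list the nodes;
  # one of node02/node03 should now be marked "Leader".
  docker $(docker-machine config node02) node ls
}
```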

Best practice for Consul is to run one Consul agent per host, and when you want to talk to Consul, always talk to the local agent. In general, everything one Consul node knows, every other Consul node also knows, so you can just talk to your localhost agent (127.0.0.1:8500) and do everything you need to do. When you add services, you register them with the local Consul agent on the host where the service's process runs. There are projects like Registrator (https://github.com/gliderlabs/registrator) that will automatically register services for running Docker containers, which makes life easier.
Overall, welcome to Consul, it's great stuff!
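For example, a service can be registered with the local agent by placing a service definition in the agent's configuration directory; the service name, port, and check URL below are illustrative:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://127.0.0.1:8080/health",
      "interval": "10s"
    }
  }
}
```

After the agent reloads, the service appears in the catalog of every agent in the cluster, so any node can resolve it via DNS (web.service.consul) or the HTTP API.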

Related

Differences in the ways of creating a Docker swarm

I have been reading a lot of articles and documentation for the past 3 days about the "new" Docker swarm recently built into the engine.
Having identified a couple of ways of creating a swarm (whether locally or on a cloud provider), I'm confused about the differences between those methods and when you'd use one over the other.
Here are the methods to create a swarm that I have identified so far:
Method 1
docker-machine create -d virtualbox swarm-manager
docker-machine create -d virtualbox swarm-worker-1
docker-machine create -d virtualbox swarm-worker-2
manager_ip=$(docker-machine ip swarm-manager)
docker-machine ssh swarm-manager "docker swarm init --advertise-addr ${manager_ip}"
worker_token=$(docker-machine ssh swarm-manager "docker swarm join-token -q worker")
swarm_join_command="docker swarm join --token ${worker_token} ${manager_ip}:2377"
docker-machine ssh swarm-worker-1 "${swarm_join_command}"
docker-machine ssh swarm-worker-2 "${swarm_join_command}"
Method 2
docker-machine create -d virtualbox token
token=$(docker-machine ssh token "docker run swarm create" | tail -n 1)
docker-machine create -d virtualbox \
--swarm --swarm-master \
--swarm-discovery token://${token} \
master-node
docker-machine create -d virtualbox \
--swarm --swarm-discovery token://${token} \
node-01
I am excluding consul because it seems it is no longer needed.
What is the difference between those methods?
When should I use one over the other?
Confusingly, there are two implementations of Docker Swarm. The first ran as containers; the second was integrated into the Docker Engine as "swarm mode" in the v1.12 release.
So embrace Method 1, which uses the integrated swarm mode.
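A minimal sketch of an HA setup with multiple managers, assuming docker-machine with the virtualbox driver and illustrative node names (build_join_cmd is a hypothetical helper; the steps mirror the swarm-mode answer above):

```shell
# Hypothetical helper: assemble the "docker swarm join" command from a
# token and the manager's IP (port 2377 is the swarm-mode default).
build_join_cmd() {
  echo "docker swarm join --token $1 $2:2377"
}

# Sketch, not executed on load: create three managers for an HA control plane.
create_ha_swarm() {
  for n in mgr1 mgr2 mgr3; do docker-machine create -d virtualbox "$n"; done
  manager_ip=$(docker-machine ip mgr1)
  docker-machine ssh mgr1 "docker swarm init --advertise-addr ${manager_ip}"
  token=$(docker-machine ssh mgr1 "docker swarm join-token -q manager")
  for n in mgr2 mgr3; do
    docker-machine ssh "$n" "$(build_join_cmd "$token" "$manager_ip")"
  done
}
```

With three managers the Raft quorum tolerates the loss of one, so stopping the leader should result in another manager being elected.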
Troubles using docker-machine to set up Swarm
The older Swarm documentation contains the following:
You are viewing docs for legacy standalone Swarm. These topics describe standalone Docker Swarm. If you use Docker 1.12 or higher, Swarm mode is integrated with Docker Engine. Most users should use integrated Swarm mode; a good place to start is Getting started with swarm mode and Swarm mode CLI commands. Standalone Docker Swarm is not integrated into the Docker Engine API and CLI commands.

Is it possible to create container in multiple host using a single docker compose file?

I have to create containers on multiple hosts. I have a Dockerfile for each container. I found that docker-compose can be used to run multiple containers from a single YAML file. I have to run containerA on HostA, containerB on HostB, and so on. Is it possible to achieve this using docker-compose? Or what is the best way to create containers on different hosts using the Dockerfiles?
No, docker-compose alone won't achieve this. Managing containers across multiple hosts is generally the job of schedulers. The Docker Ecosystem: Scheduling and Orchestration is an older article, but should give an introduction.
The scheduler provided by Docker is called Swarm. Use Compose with Swarm is a good place to start to understand how to use them together.
This part of the answer is time limited, but during Mentor Week there are a few free courses you can take to learn more about Swarm. In your case Operations Beginner and Intermediate may be interesting.
The answer is no.
Docker compose is for multiple containers on a single host.
Docker swarm is for container(s) at a cluster level.
And you cannot decide where to run a container; only the Docker Swarm scheduler decides, though you can influence it. Check the links below and focus on image affinity, i.e. you keep specific images on only some nodes and the scheduler places containers according to that affinity.
Filters - which hosts are chosen
Strategies - how the nodes are ranked
docker filters
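With legacy Swarm, affinity is expressed through environment variables on the container. A hypothetical compose service pinned to nodes that already have the redis image might look like this (service and image names are made up):

```yaml
# docker-compose.yml fragment for legacy Swarm (illustrative names)
app:
  image: mycompany/app
  environment:
    - "affinity:image==redis"
```

The scheduler reads the affinity:image==redis constraint and only places the container on nodes where the redis image is present.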
Here is a script to create a Docker Swarm cluster. Play with it until you understand the process itself:
id=$(docker run swarm create)
docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://$id swarm-master
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node1
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node2
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node3
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node4
Here is what happens:
A cluster with one master and four nodes is created.
Run eval $(docker-machine env --swarm swarm-master) to configure your shell.
From this point on, creating and running containers works as usual.
Running docker ps will show you where a container is running.
Docker Compose is exactly what you need. It is a tool for defining and running multi-container Docker applications. A good starting point is the official documentation: https://docs.docker.com/compose/gettingstarted/

How to setup multi-host networking with docker swarm on multiple remote machines

Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't find the right answer for my setup (perhaps it has already been answered). Here is the architecture I have been struggling to get working.
I have three physical machines and I would like to setup the Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, also runs Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried the Docker article, but there everything is set up on a single physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set it up on your boxes, make sure they all run Docker version 1.12 or higher; then you just need to initialise the swarm and join the other nodes to it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machines 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command looks something like:
docker swarm join \
--token SWMTKN-1-49nj1... \
192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services. If you want to use Compose with Swarm Mode, you need to use distributed application bundles - which are currently only in the experimental build of Docker.
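As an illustration, a replicated service could then be attached to that overlay network. The service name, image, and replica count here are made up, and the commands are wrapped in a function so this is a sketch rather than something that runs on load:

```shell
# Sketch, not executed on load: run 3 nginx replicas on the "my-app" overlay.
deploy_web() {
  docker service create --name web --network my-app --replicas 3 nginx
  docker service ls   # shows the service and how many replicas are running
}
```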
I figured this needs an update, as Docker Compose files are now supported in Docker swarm mode.
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machine 2 & 3 using
docker swarm join \
--token <swarm token from previous step> 192.168.5.11:2377 \
--advertise-addr eth0
eth0 is the network interface on machines 2 & 3, and could be different based on your config. I found that without the --advertise-addr option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm & see their status
docker node ls
After this, deploy the stack (a group of services or containers) from a compose file:
docker stack deploy -c <compose-file> my-app
This will create all the containers across the multiple hosts.
To list the services (containers) on the swarm, run docker service ls.
See the Docker docs: Getting started with swarm mode.
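A minimal version-3 compose file for such a stack might look like the following (the image, port, and replica count are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx        # illustrative image
    ports:
      - "80:80"
    deploy:
      replicas: 3       # the swarm scheduler spreads these across hosts
```

Note that the deploy section is honoured by docker stack deploy but ignored by plain docker-compose up.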

Docker Swarm vs. Docker Cluster

I created a swarm cluster via
docker-machine -d azure --swarm --swarm-master --swarm-discovery token://SWARM_CLUSTER_TOKEN my-swarm-master
and
docker-machine -d azure --swarm --swarm-discovery token://SWARM_CLUSTER_TOKEN my-node-01
After that, I logged into cloud.docker.com - but when I click on Node Clusters or Nodes I can't see my swarm.
So is swarm (via command line) and cluster (via cloud.docker.com) not the same thing? What's the difference and when should I use which one?
Edit:
Yes, my Azure subscription is added in cloud.docker.com under Cloud Settings.
They are separate. The docker-machine commands you ran (starting with your first one) create a self-hosted swarm that you manage yourself. Docker Cloud creates an environment that's managed for you on Docker's infrastructure. Without access to the token used by your swarm, Docker Cloud won't know about its nodes.

Docker 1.12 Swarm Nodes IP's

Is there a way to get the IPs of the nodes joined to the cluster?
In the "old" Swarm there is a command you can run on the manager machine: docker exec -it <containerid> /swarm list consul://x.x.x.x:8500
To see a list of nodes, use:
docker node ls
Unfortunately, IPs and ports are not included in this output. You can run docker node inspect $hostname on each node to get its swarm IP/port. Then, if you need to add more nodes to your cluster, you can use docker swarm join-token worker, which does include the needed IP/port in its output.
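As a sketch, on reasonably recent Docker versions the per-node addresses can be collected with the inspect format shown below (list_node_ips is a hypothetical helper, and the .Status.Addr field may not be populated on the earliest 1.12 builds):

```shell
# Sketch, not executed on load: print "<hostname> <ip>" for every swarm node.
list_node_ips() {
  for n in $(docker node ls --format '{{.Hostname}}'); do
    echo "$n $(docker node inspect --format '{{.Status.Addr}}' "$n")"
  done
}
```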
What docker node ls does provide is the hostname of each node in your swarm cluster. Unlike standalone Swarm, you do not connect your Docker client directly to the swarm port; you now access the swarm from one of the manager hosts, in the same way you'd connect to that host to init or join the swarm. After connecting to one of the manager hosts, you use docker service commands to control your running services.
