Docker 1.12 Swarm Nodes IPs

Is there a way to get the IPs of the nodes joined to a cluster?
In the "old" standalone Swarm there is a command you can run on the manager machine:
docker exec -it <containerid> /swarm list consul://x.x.x.x:8500

To see a list of nodes, use:
docker node ls
Unfortunately, that output doesn't include IPs and ports. You can run docker node inspect $hostname against each node to get its swarm IP/port. If you then need to add more nodes to your cluster, you can use docker swarm join-token worker, which does include the needed IP/port in its output.
What docker node ls does provide is the hostname of each node in your swarm cluster. Unlike standalone Swarm, you do not connect your docker client directly to a swarm port; you access the swarm from one of the manager hosts, the same way you'd connect to that host to init/join the swarm. After connecting to a manager host, you use docker service commands to control your running services.
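As a workaround, you can loop over the node IDs from a manager and inspect each one. A minimal sketch, assuming Docker 17.03+ where the node status includes an address (on 1.12, only managers expose an address, under .ManagerStatus.Addr):
# print each node's hostname and swarm address; run from a manager
for NODE in $(docker node ls -q); do
  docker node inspect --format '{{ .Hostname }} {{ .Status.Addr }}' "$NODE"
done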

Related

Docker Swarm mode routing mesh not working with wireguard VPN

I'm trying to set up a 3-node Docker swarm cluster on Hetzner cloud, using a WireGuard VPN (set up on interface wg0) as the private network between nodes. Networking across nodes works fine over the VPN IPs (ports 7946/tcp, 7946/udp and 4789/udp are open, as reported here). I start the swarm cluster with the following commands:
docker swarm init --advertise-addr wg0 --listen-addr wg0
docker swarm join --token SWMTKN-1-xxx --advertise-addr wg0 --listen-addr wg0 10.0.0.1:2377
If I run a service on this swarm, it seems to work correctly: every container can reach the others on different nodes, and inspecting them shows that they join the ingress network and an overlay network I created, as expected. The problem arises when I try to reach the service's exposed port from outside: it only works if I target the node where the container is running, so the routing mesh does not seem to be working. I haven't found any errors in the docker logs or syslog.
Note: I'm using docker 18.06.1-ce
I had this issue and made the following changes:
I moved my WireGuard addresses from 10.0.* to 192.168.* (I have a feeling that swarm allocates its overlay subnets on top of the 10.0.* range).
docker swarm init --advertise-addr 192.168.2.123, using the WireGuard IPv4 address of the master node.
That fixed it, and it still works after rebooting the master and worker nodes!
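That feeling is well founded: Docker's default address pool for overlay networks is 10.0.0.0/8 (the ingress network defaults to 10.0.0.0/24), so 10.0.* VPN addresses collide with it. If moving the VPN subnet is not an option, Docker 18.09+ (newer than the 18.06 in the question) lets you move Swarm's pool instead when initialising the swarm; a sketch, with an illustrative pool:
docker swarm init \
  --advertise-addr 192.168.2.123 \
  --default-addr-pool 172.20.0.0/16 \
  --default-addr-pool-mask-length 24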

Set hostname of service container to host's hostname

I run a cluster in docker swarm mode. Let's say I have 4 nodes: 1 manager and 3 workers. The hostnames are:
manager0
worker0
worker1
worker2
I start the service in global mode, so every node runs the service once.
Let's say the command looks like this:
docker service create --name myservice --mode global --network mynetwork ubuntu sleep 3600
mynetwork is an overlay network.
Now I am trying to access the hostname of the docker host in the containers, so I can pass the hostname to an application in the container.
I tried to pass the hostname via an environment variable (--env hostname=$(hostname)), but $(hostname) is expanded on the manager, so the variable is set to manager0 on all nodes.
Is there a way to access the hostname or pass the hostname to the containers?
You can use naming templates to create the service with the node's hostname.
Here is the feature request, which was implemented in docker version 17.10:
https://github.com/moby/moby/issues/30966
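For example, on 17.10+ the --hostname flag (and --env) accept Go templates, so the service from the question could be created along these lines; NODE_HOSTNAME is just an illustrative variable name:
docker service create --name myservice --mode global --network mynetwork \
  --hostname "{{.Node.Hostname}}" \
  --env NODE_HOSTNAME="{{.Node.Hostname}}" \
  ubuntu sleep 3600
Each task then gets the hostname of the node it runs on, e.g. worker1 on worker1.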

How to setup multi-host networking with docker swarm on multiple remote machines

Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't find the right answer for my setup (perhaps it has already been answered). Here is the architecture I have been struggling to get working.
I have three physical machines and I would like to setup the Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, runs Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried the Docker article, but there everything is set up on a single physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set it up on your boxes, make sure they all run Docker version 1.12 or higher; then you just need to initialise the swarm and join it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machines 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command looks something like:
docker swarm join \
--token SWMTKN-1-49nj1... \
192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services. If you want to use Compose with Swarm Mode, you need to use distributed application bundles - which are currently only in the experimental build of Docker.
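For example, a service attached to that network could be created like this (nginx is just a placeholder image):
docker service create --name web --replicas 3 --network my-app -p 80:80 nginx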
I figured this needs an update, as compose files are now supported by docker swarm:
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machines 2 & 3 using
docker swarm join \
--token <swarm token from previous step> 192.168.5.11:2377 \
--advertise-addr eth0
eth0 is the network interface on machines 2 & 3, and could be different based on your config. I found that without the --advertise-addr option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm & see their status
docker node ls
After this, deploy the stack (a group of services or containers) from a compose file:
docker stack deploy -c <compose-file> my-app
This will create all the containers across multiple hosts
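For reference, a minimal compose file for such a stack might look like this (service and image names are placeholders):
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 3
    networks:
      - my-app
networks:
  my-app:
    driver: overlay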
To list the services (containers) on the swarm, run docker service ls.
See the docker docs: Getting started with swarm mode.

Adding services in different consul clients running on same host

I've followed the section on Testing a Consul cluster on a single host in the Consul docs. Three consul servers are successfully added and running on the same host for testing purposes. Afterwards, I also followed the tutorial and created a consul client, node4, to expose ports. Is it possible to add more services and bind them to one of those consul clients?
Use the new 'swarm mode' instead of the legacy Swarm. Swarm mode doesn't require Consul: service discovery and the key/value store are now part of the docker daemon. Here's how to create a highly available cluster of 3 nodes (3 managers).
Create three nodes
docker-machine create --driver vmwarefusion node01
docker-machine create --driver vmwarefusion node02
docker-machine create --driver vmwarefusion node03
Find the IP of node01
docker-machine ls
Set one as the initial swarm manager
docker $(docker-machine config node01) swarm init --advertise-addr <ip-of-node01>
Retrieve the token that lets other nodes join as managers
docker $(docker-machine config node01) swarm join-token manager
This will print out something like
docker swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Add the other two nodes to the swarm as managers
docker $(docker-machine config node02) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
docker $(docker-machine config node03) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Examine the swarm
docker node ls
You should now be able to shut down the leader node and watch another manager take over as leader.
Best practice for Consul is to run one Consul agent per host, and when you want to talk to Consul, you always talk locally. In general, everything one Consul node knows, every other Consul node also knows, so you can just talk to your localhost agent (127.0.0.1:8500) and do everything you need to do. When you add services, you register them with the local Consul agent on the host where the service's process runs. There are projects like Registrator (https://github.com/gliderlabs/registrator) that will automatically register services for running docker containers, which makes life easier.
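For instance, registering a service with the local agent is a single HTTP call against Consul's agent API (the service name and port here are illustrative):
curl -X PUT -d '{"Name": "web", "Port": 8080}' \
  http://127.0.0.1:8500/v1/agent/service/register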
Overall, welcome to Consul, it's great stuff!

How does Docker Swarm start a container

In legacy Docker Swarm, the Swarm manager runs on the master node while a Swarm agent runs on each slave node. I'm interested in the steps involved in starting a container. There are two possibilities:
The Swarm manager starts containers directly through the Docker remote API.
The Swarm manager asks the Swarm agent to start the container, then the Swarm agent asks the local Docker daemon to start it.
Personally, I think the first one is right, but I'm not sure...
Swarm agents don't have access to the Docker daemon; they are only there to communicate with the master via etcd, consul or zookeeper. So the first one is correct: the agent registers the host with the discovery service, and from then on the manager can access it directly via the Docker daemon listening on a TCP port.
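To illustrate, legacy standalone Swarm was typically started like the sketch below (addresses are placeholders): the agent merely advertises the host's daemon endpoint to the discovery backend, and the manager then talks to each daemon directly.
# on each node: advertise this host's Docker daemon to the discovery service
docker run -d swarm join --advertise=192.168.1.10:2375 consul://192.168.1.5:8500
# on the master: manage the cluster by connecting to each registered daemon
docker run -d -p 4000:4000 swarm manage -H :4000 consul://192.168.1.5:8500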
