For Docker Swarm, the Swarm manager runs on the master node while the Swarm agent runs on the slave node. I’m interested in the steps involved in starting a container. There are two options:
The Swarm manager starts containers directly through the Docker remote API.
The Swarm manager asks the Swarm agent to start the container, then the Swarm agent asks the local Docker daemon to start it.
Personally, I think the first one is right. But I’m not sure...
Swarm agents don't have access to the Docker daemon; they are only there to communicate with the master via etcd, Consul or ZooKeeper. So the first one is correct. The agent registers the host with the discovery service, and from then on the manager can access that host via the Docker daemon listening on a TCP port.
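As a rough sketch of that flow in the standalone Swarm, assuming a Consul discovery service at 10.0.0.5:8500 and daemons listening on TCP port 2375 (all addresses and ports here are illustrative):

# On each node: make the Docker daemon reachable over TCP so the manager can drive it
dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

# On each node: the agent only registers the host with the discovery service
docker run -d swarm join --advertise=<node-ip>:2375 consul://10.0.0.5:8500

# On the master: the manager reads the registry and talks to each daemon's remote API itself
docker run -d -p 4000:4000 swarm manage -H :4000 consul://10.0.0.5:8500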
Related
I am running a docker service in swarm mode. When I want to restart it, there are 2 options I know:
from swarm manager: docker service scale myservice=0 then docker service scale myservice=1
from the server running the service: docker ps, take the container ID of my service and do docker stop <containerId>
And this works fine. However, if I go with option #2 and run docker restart instead of docker stop, it will restart the current instance, but because it is in swarm mode the swarm will also start a new one. So in the end I end up with 2 instances of the same service, even though in my compose file I have specified that I want only 1 replica.
Is there any way to prevent docker restart and Docker Swarm from starting a 2nd instance while one is already there?
I am using docker 18.09.2 on ubuntu 18.04
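For clarity, this is what option #1 looks like end to end (assuming the service is literally named myservice; the name is just a placeholder):

docker service scale myservice=0   # stop the single replica
docker service scale myservice=1   # start it again
docker service ls                  # verify that only 1/1 replica is running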
I'm trying to set up a 3-node Docker swarm cluster on Hetzner cloud, using a WireGuard VPN (set up on interface wg0) to build the local network between nodes. Networking works fine across nodes using the VPN IPs (ports 7946/tcp, 7946/udp and 4789/udp are open, as reported here). I start the swarm cluster with the following commands:
docker swarm init --advertise-addr wg0 --listen-addr wg0
docker swarm join --token SWMTKN-1-xxx --advertise-addr wg0 --listen-addr wg0 10.0.0.1:2377
If I try to run a service on this swarm, it seems to run correctly: every container can reach the others on different nodes and, inspecting them, they join the ingress network and an overlay network created by me, as expected. The problem arises when I try to access the service's published port from outside; it only works if I target the node where the container is running, so it seems that the routing mesh is not working correctly. I've not found any error in the docker logs or syslog.
Note: I'm using docker 18.06.1-ce
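For reference, these are the swarm ports that need to be reachable between nodes over the VPN; here is roughly how I opened them with ufw on each node (wg0 is the WireGuard interface, adapt to your firewall):

ufw allow in on wg0 to any port 2377 proto tcp   # cluster management traffic (managers)
ufw allow in on wg0 to any port 7946 proto tcp   # node-to-node communication
ufw allow in on wg0 to any port 7946 proto udp   # node-to-node communication
ufw allow in on wg0 to any port 4789 proto udp   # overlay network (VXLAN) traffic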
I had this issue and made the following changes:
I moved my WireGuard addresses from 10.0.* to 192.168.* (I have a feeling that swarm is allocating on top of these; swarm's default overlay address pool lives in 10.0.0.0/8, so the VPN subnet can collide with it).
docker swarm init --advertise-addr 192.168.2.123, using the WireGuard IPv4 address of the master node.
That managed to fix it, and it still works after rebooting the master and worker nodes!
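If renumbering the VPN isn't an option, newer Docker releases (18.09 and later) also let you pick a non-conflicting pool for swarm's own networks at init time; a sketch, with the pool value chosen arbitrarily here:

docker swarm init --advertise-addr 192.168.2.123 --default-addr-pool 172.20.0.0/16 --default-addr-pool-mask-length 24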
Is there a way to get the IPs of the nodes joined to the cluster?
In "old" swarm there is command that you can run on manager machine. docker exec -it <containerid> /swarm list consul://x.x.x.x:8500
To see a list of nodes, use:
docker node ls
Unfortunately, IPs and ports are not included in this output. You can run docker node inspect $hostname on each one to get its swarm IP/port. Then, if you need to add more nodes to your cluster, you can use docker swarm join-token worker, which does include the needed IP/port in its output.
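As a shortcut, docker node inspect accepts a Go-template --format flag, so something like the following should print each node's address in one pass (a sketch; .Status.Addr is where the node IP is reported on current versions):

for node in $(docker node ls -q); do
  docker node inspect --format '{{.Description.Hostname}} {{.Status.Addr}}' "$node"
done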
What docker node ls does provide is the hostname of each node in your swarm cluster. Unlike the standalone swarm, you do not connect your docker client directly to the swarm port. You now access it from one of the manager hosts, in the same way you'd connect to that host before to init/join the swarm. After connecting to one of the manager hosts, you use docker service commands to control your running services.
I'm trying to connect to a Manager with swarm version 1.12.1 from the docker client:
$ docker -H tcp://MY_MANAGER_1_IP:2377 info
I got the following error message:
Are you trying to connect to a TLS-enabled daemon without TLS?
Does anyone have an idea? Thank you in advance.
The integrated docker swarm in 1.12 is managed via the docker host, not via the swarm port as you would have done before in the standalone swarm product (which you can still install in a 1.12 environment if you wish). Connect to the docker host as you always have, and manage it via docker swarm, docker service, and docker node commands.
The port you open for the integrated swarm isn't for the Docker API, it's for traffic between swarm managers and workers. To see info on the swarm, docker info on the swarm manager will include some details, and docker node will give the status of managers and workers. Note that this also means you cannot submit jobs to the integrated swarm with a docker -H ... run ... command; you must use the new docker service commands to manage containers in the new swarm.
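For example, instead of docker -H ... run ..., you'd run something like the following on a manager host (the service name, image, and replica count are just placeholders):

docker service create --name web --replicas 2 --publish 80:80 nginx
docker service ls        # list services and how many replicas are running
docker service ps web    # see which nodes the tasks were scheduled on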
For remote access to any docker host, which would let you run API commands from another machine, see the docs on securing the Docker API, which describe how to enable TLS and set up the daemon to listen for external traffic instead of only the docker.sock socket.
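As a rough sketch of what that ends up looking like (the certificate paths and client example are illustrative; the certs are generated per the docs above):

# On the daemon host: listen on TCP with TLS client verification, plus the local socket
dockerd --tlsverify \
  --tlscacert=/etc/docker/certs/ca.pem \
  --tlscert=/etc/docker/certs/server-cert.pem \
  --tlskey=/etc/docker/certs/server-key.pem \
  -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

# From the remote client: note 2376 (the TLS API port), not 2377 (the swarm management port)
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://MY_MANAGER_1_IP:2376 info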
The Docker containers are running on different host machines, each running its own Docker daemon, so I am using a Weave network to connect them. The problem is that I don't want to run the Consul server in a Docker container; I want to run it on a host machine. So how do I add this host machine that is running the Consul server to the Weave network, so that other containers can register with the Consul server?
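One approach worth trying, assuming the weave script is installed on the host that runs Consul: weave expose gives the host itself an address on the weave network, so containers attached to that network can reach a process listening on the host. A sketch (the printed address is an example, and the Consul flags are abbreviated):

# On the host running Consul (weave already launched there and peered with the other hosts)
weave expose                                     # prints the IP the host now has on the weave network, e.g. 10.32.0.99
consul agent -server -bind=<the-exposed-ip> ...  # have Consul listen on that address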