I have Windows Server 2016 Core (a Hyper-V VM). Docker is installed and working, and I want to create a swarm.
IP config at the beginning:
1. Ethernet - 192.168.0.1
2. vEthernet (HNS Internal NIC) - 172.30.208.1
Then I run
docker swarm init --advertise-addr 192.168.0.1
Swarm is created, but I have lost my main IP address. IP config:
1. vEthernet (HNS internal NIC) - 172.30.208.1
2. vEthernet (HNS Transparent) - 169.254.225.229
The newly created swarm manager node is not reachable on the main address 192.168.0.1. I can't connect to it, and swarm workers are not able to join using this IP. Where is the problem?
A little late answering this, but... Docker is going to take over your network card when you bring up the swarm. What I did was use two network cards: one I left for Docker to take over, and the second I used for everything else, including virtual machines.
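A rough sketch of that recovery, assuming you have added a second NIC for management traffic and are fine re-creating the swarm (the address below is just the one from the question): leave the swarm, re-initialize so Docker takes over the first card, and reach the host over the second card's address afterwards.
docker swarm leave --force
docker swarm init --advertise-addr 192.168.0.1 --listen-addr 192.168.0.1:2377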
Currently, you cannot use Docker for Mac or Docker for Windows alone to test a multi-node swarm.
If you are using Docker for Mac or Docker for Windows to test a single-node swarm, simply run docker swarm init with no arguments.
However, you can use the included version of Docker Machine to create the swarm nodes (see Get started with Docker Machine and a local VM), then follow the tutorial for all multi-node features.
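For example, a minimal two-node test swarm with Docker Machine might look like this (the VM names are placeholders; the VirtualBox driver and a bash-style shell are assumed, and the worker token comes from the output of swarm init):
docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1
eval $(docker-machine env manager1)
docker swarm init --advertise-addr $(docker-machine ip manager1)
docker-machine ssh worker1 docker swarm join --token <worker-token> $(docker-machine ip manager1):2377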
For further info read this
Edit:
Also refer to this
I'm trying to create a swarm consisting of 2 nodes. Using docker-machine it is easy to provision a VM and add it as a node, but I want to create a swarm using an Ubuntu VM and Docker on Windows as the manager, without using docker-machine.
Running
docker swarm init
in Windows (the host machine) gives me a token to add a worker. I have Ubuntu running in VirtualBox; Docker is also installed in the VM and I'm able to SSH into it and run commands. But whenever I try to add this Ubuntu machine as a worker node using the token generated on the Windows machine, it says:
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
I think it is related to port forwarding. I'm forwarding my VM's port 22 to 127.0.0.1:22 in VirtualBox to connect via SSH, and I have tried several forwarding combinations, but the VM is still not able to join as a node in the swarm that I created on Windows.
Any guidance will be of great value.
Check if you have connectivity from your Ubuntu to your Windows machine. First, ssh to your Ubuntu and check:
Windows is addressable, for example using ping windows-ip.
If it is not, make sure both are in the same network, for example by setting a bridged network in your VM configuration.
Windows is listening on the ports needed by Docker swarm:
TCP port 2376 for secure Docker client communication. This port is required for Docker Machine to work. Docker Machine is used to orchestrate Docker hosts.
TCP port 2377. This port is used for communication between the nodes of a Docker Swarm or cluster. It only needs to be opened on manager nodes.
TCP and UDP port 7946 for communication among nodes (container network discovery).
UDP port 4789 for overlay network traffic (container ingress networking).
You can check this using telnet windows-ip port.
If they are not reachable, check your Windows firewall.
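For example, from the Ubuntu guest (the Windows address below is a placeholder for your host's LAN IP):
ping -c 3 192.168.0.10
telnet 192.168.0.10 2377
telnet only covers the TCP ports; for the UDP ports (7946, 4789) a tool such as nc -vzu can be used instead.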
I hope it helps!
I tried to create a similar Swarm with a Windows manager node but never really got it to work. You can initialize a single-node Swarm from Windows with docker swarm init. However, adding multiple worker nodes does not appear to be supported at the moment:
https://docs.docker.com/engine/swarm/swarm-tutorial/.
"Currently, you cannot use Docker Desktop for Mac or Docker Desktop for Windows alone to test a multi-node swarm".
The following options are possible:
Pure Linux swarm (Linux manager + Linux workers) which runs only Linux containers
Hybrid Swarm (Linux manager + Windows workers + Linux workers) which runs Windows and Linux containers
(Sometimes) Pure Windows Swarm using Win Server 2019 as the manager. The regular Windows updates have been known to break various features of Swarm. For example, https://github.com/moby/moby/issues/40998
Then everyone either tries workarounds or waits for the next Windows update to fix the problem.
Personally I've had good luck with hybrid Swarm. It works fine with simple Ubuntu manager + standard Windows 10 workers. No need for Win Server.
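The flow itself is the standard one (the address and token below are placeholders): initialize on the Ubuntu manager, then run the printed join command on each Windows 10 worker:
docker swarm init --advertise-addr <manager-ip>
docker swarm join --token <worker-token> <manager-ip>:2377
The first command runs on the Ubuntu manager; the second runs in PowerShell on each Windows worker.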
I am trying to set up docker swarm with an overlay network. I have some hosts on AWS while others are laptops running Ubuntu (same as on AWS). Every node has a static public IP. I have created an overlay network as:
docker network create --driver=overlay --attachable test-net
I have created a swarm on one of the AWS hosts. Every other node is able to join that swarm.
However, when I run docker run -it --name alpine2 --network test-net alpine on any node not on AWS, I get the error: docker: Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded.
But if I run the same on any AWS host, everything works fine. Is there anything more I need to do in terms of networking/ports if some nodes are on AWS while others are not?
I have opened the ports required for swarm networking on all machines.
EDIT: All the nodes are marked as "active" when listing in the manager node.
UPDATE: Solved this issue by opening the respective ports. It now works if all the nodes are Linux-based. But when I try to make a swarm with a Linux (Ubuntu) manager, macOS machines are not able to join the swarm.
Check if the node is in drain state:
docker node inspect --format {{.Spec.Availability}} node
If yes, then update the state:
docker node update --availability active node
Here is the explanation:
Resolution
When a node is in drain state, it is expected behavior that you should
not be able to allocate swarm mode resources such as multi-host
overlay network IP addresses to the node. However, swarm mode does not
currently provide a messaging mechanism between the swarm leader where
IP address management occurs back to the worker node that requested
the IP address. So docker run fails with context deadline exceeded.
Internal engineering issue escalation/292 has been opened to provide a
better error message in a future release of the Docker daemon.
source
Check if the ports below are open on both machines.
TCP port 2377
TCP and UDP port 7946
UDP port 4789
You may use ufw to allow the ports:
ufw allow 2377/tcp
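The rest of the ports listed above can be opened the same way, for example:
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp
ufw reload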
I had a similar issue and managed to fix it by making sure the ENGINE VERSION of the nodes was the same.
sudo docker node ls
Another common cause for this is the Ubuntu Server installer installing Docker using snap, and that package is buggy. Uninstall it with snap and install using apt. And reconsider Ubuntu. :-/
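One way to make that switch, assuming the snap is simply named docker (check with snap list first) and that the official convenience script is acceptable for your setup:
sudo snap remove docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh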
A Docker network is created in a Docker swarm, which contains several nodes, with this command:
docker network create --attachable --driver overlay [network-name]
And containers are attached to the network with "docker service create" command.
An extra container with the name "lb-[network-name]" then appears in the network.
What is that container, and how can the Docker network be configured not to have it?
From docker swarm documentation (https://docs.docker.com/engine/swarm/key-concepts/):
Swarm mode has an internal DNS component that automatically assigns
each service in the swarm a DNS entry. The swarm manager uses internal
load balancing to distribute requests among services within the
cluster based upon the DNS name of the service.
It's part of the swarm architecture; you can't deactivate it.
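You can see where it comes from by inspecting the network on a node that is running a task of one of the attached services; alongside the real containers there is an extra sandbox entry for the load balancer (the exact key names vary between Docker versions):
docker network inspect [network-name]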
Also take a look at this detailed answer regarding Docker swarm networking:
https://stackoverflow.com/a/44649746/3730077
I am new to Docker and have a few easy questions; I hope you can help.
I have a Windows 10 machine with "Docker for Windows" installed. In its Hyper-V Manager I can see a virtual machine called "MobyLinuxVM".
So my questions are:
1. When people talk about "Docker Host" and "Docker Engine", what are they in my situation?
-- I assume "Docker Host" should be my windows PC, and "Docker Engine" is that Virtual machine inside Hyper-V.
2. If I use ipconfig on my PC, I find I have at least 2 networks and IP addresses:
(a) LAN adapter -- shows my IP is 192.168.xxx.yyy
(b) DockerNAT -- shows my IP is 10.0.75.1
Then when I try to use docker-compose.yml to create containers, I found I could ONLY use:
environment:
  - MAGENTO_HOST=10.0.75.2
  - MARIADB_HOST=10.0.75.2
to create containers that can be directly accessed (e.g. via browser to the Magento website). So the question is:
If my machine is 10.0.75.1 within the Docker network, then what is 10.0.75.2? Why can't I use e.g. 10.0.75.3?
3. My yml script actually creates multiple containers -- e.g. 2 Magento containers + 2 MariaDB containers, etc. When I specify their Docker 'HOST', why is it not my machine? (If we call my machine the 'Docker host' and the Hyper-V virtual image the 'Docker engine', as in my 1st question.)
4. Also, following on from my 3rd question, I currently deploy all containers within 1 host. Is it worth using Docker Swarm, which people use to cluster multiple Docker hosts? If so, does that mean I need to use Hyper-V to create another "MobyLinuxVM"?
Thanks a lot!
1 Docker Engine + Docker Host
The Docker Engine is the group of processes that manage Docker containers. dockerd is usually the head of that process tree.
The Docker Host is the OS running the Docker Engine; in your case that is MobyLinuxVM.
Your VM host is your Windows box.
2 Docker Host IP
10.0.75.2 is most likely the address assigned to MobyLinuxVM. I don't run Docker for Windows so I can't entirely confirm, but searching the web seems to back this up.
3 - see 1
4 Swarm
You would need to run multiple VMs to set up a swarm. Docker Machine is the tool to use when setting up swarm instances. It allows you to manage multiple Docker instances and comes with a Hyper-V driver.
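A minimal sketch with the Hyper-V driver (the switch name is a placeholder for an external virtual switch you have already created in Hyper-V Manager):
docker-machine create -d hyperv --hyperv-virtual-switch "External Switch" manager1
docker-machine create -d hyperv --hyperv-virtual-switch "External Switch" worker1
docker-machine ls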
I currently have three hosts (docker1, docker2 and docker3) which I have not set up using Docker Machine, each one running the v1.12-rc4 Docker daemon.
I run docker swarm init on docker1, which in turn prints a docker swarm join command which I run on both docker2 and docker3. At that point, running docker info on each host contains the Swarm: active line.
It is at this point that the behavior seems to differ from what I used to get with the standalone Swarm container. In particular, running docker network ls will only show me the networks on the local host, and when trying to create an overlay network, it does not seem like worker nodes are aware of it (i.e. it does not show up in their docker network ls).
I feel like I have missed out on some important information relating to the workings of the Swarm Mode as opposed to the Swarm container.
What is the correct way of setting up such a cluster without Docker Machine on Docker 1.12 while getting the overlay network feature?
I too thought this was an issue when I first started using it.
This works a little differently in 1.12rc4 - when you deploy a container to your swarm with that network attached to it, it should then create the network on the other nodes as well.
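A quick way to see that behavior (the network, service and image names here are only examples):
docker network create --driver overlay my-overlay
docker service create --name web --replicas 3 --network my-overlay nginx
Then, on a worker node that received one of the tasks, docker network ls will list my-overlay; on workers without a task it does not appear.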
Hope this helps!
Issue
You are using the docker command (used to communicate with your localhost Docker daemon) and not the "swarm" command (used to communicate with the Swarm master).
Solution
It depends on the command you used to start Swarm.
A full step-by-step tutorial (including details on how to deploy an overlay network) is detailed in this answer. I'm sure that reading this will help you ;)
With a network scope of swarm, the network is only propagated to worker nodes on an as-needed basis. If you create a service using that network, and it gets scheduled on that worker node, the network will show up in docker network ls.
With the now-upcoming 1.13 release, you can get a network that has similar behavior to the non-swarm networks by doing docker network create --attachable .... That network will be valid for both services and normal containers, and will be available to all members of the cluster. As of 1.13.0-rc2, those don't seem to show up in the output of docker network ls.
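For example (names are placeholders):
docker network create --driver overlay --attachable shared-net
docker service create --name web --network shared-net nginx
docker run -it --rm --network shared-net alpine ping -c 3 web
Here the standalone container joins the same overlay network as the service and can resolve it by name.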