Not able to create Docker Swarm - docker

I am trying to create a simple Docker swarm of two systems on my local network. I ran the command docker swarm join --token SWMTKN-1-ns78a9s9d9alnma7qnhwdna9o0hdf8ei8f xx.xx.xx.xx:2377 to make the other system join the swarm, but I am getting the error: Error response from daemon: manager stopped: can't initialize raft node: rpc error: code = Unavailable desc = grpc: the connection is unavailable.
My systems are behind a proxy, and I have configured Docker with the proxy as well. I am able to download Docker images, and I can ping the other system.

Swarm nodes must have direct access to each other; they can't communicate through NAT or a proxy for intra-swarm communications.
Plus, you'll want to be sure they can talk to each other on the proper Swarm ports:
TCP port 2377 for cluster management & raft sync communications
TCP and UDP port 7946 for "control plane" gossip discovery communication
UDP port 4789 for "data plane" VXLAN overlay network traffic
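As a sketch of opening those ports on a Linux node (assuming ufw is the active firewall; adapt the commands for firewalld or raw iptables):

```shell
# Open the Swarm ports on each node (ufw assumed; run on every node)
sudo ufw allow 2377/tcp   # cluster management / raft sync
sudo ufw allow 7946/tcp   # control-plane gossip discovery
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # data-plane VXLAN overlay traffic
sudo ufw reload
```

Note that opening the ports in the host firewall does not help if the traffic still traverses NAT or a proxy between the nodes.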


Disable docker SNAT for external connections

I have a docker container that runs a TCP server which is attached to a custom docker network that I have set up. The container is exposed with an external port mapping.
There is also a TCP client outside the container that is trying to access the TCP server through the exposed port. The TCP client binds to a specific source port, say 5000.
My problem is that due to the docker network SNAT, the container running on docker sees the remote port as one that is generated by the network gateway (say 6000), rather than the original source port (5000).
Is there a way to modify the network behavior so it doesn't apply SNAT for these external connections?

Cannot join Docker manager node in Windows using tokens

My friend and I are trying to connect our Docker daemons using Docker Swarm. We are both using Windows, and we are NOT on the same network. According to the Docker docs, each Docker host must have the following ports open:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
We have both added inbound and outbound firewall rules for the given ports. However, we keep getting the same two errors while trying to join with the token created by the manager node via the docker swarm join --token command:
1. error response from daemon: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
2. Timeout error
Also, if either of us runs docker swarm init, it shows the IP address 192.168.65.3, which isn't part of any network we're connected to. What does that mean?
The Docker overlay tutorial also states that in order to connect to the manager node, the worker node should use the IP address of the manager:
docker swarm join --token <TOKEN> \
  --advertise-addr IP-ADDRESS-OF-WORKER-1 \
  IP-ADDRESS-OF-MANAGER:2377
Does it mean that in our case we have to use public IP address of the manager node after enabling port forwarding?
Potential network issues aside, here is your problem:
We both are using Windows OS
I have seen this issue in other threads when attempting to use Windows nodes in a multi-node swarm. Here are some important pieces of information from the Docker overlay networks documentation:
Before you can create an overlay network, you need to either initialize your Docker daemon as a swarm manager using docker swarm init or join it to an existing swarm using docker swarm join. Either of these creates the default ingress overlay network which is used by swarm services by default.
Overlay network encryption is not supported on Windows. If a Windows node attempts to connect to an encrypted overlay network, no error is detected but the node cannot communicate.
By default, Docker encrypts all swarm service management traffic. As far as I know, disabling this encryption is not possible. Do not confuse this with the --opt encrypted option, as that involves encrypting application data, not swarm management traffic.
For a single-node swarm, using Windows is just fine. For a multi-node swarm, which would be deployed using Docker stack, I highly recommend using Linux for all worker and manager nodes.
A while ago I was using Linux as a manager node and Windows as a worker node. I noticed that joining the swarm would only work if the Linux machine was the swarm manager; if the Windows machine was the manager, joining the swarm would not work. After the Windows machine joined the swarm, container-to-container communication over a user-defined overlay network would not work no matter what. Replacing the Windows machine with a Linux machine fixed all issues.

Docker swarm overlay Connect: no route to host

I have a swarm with 2 nodes. One is an Ubuntu VM on Azure and the other is a VM on my local machine.
When the containers try to make requests to each other, I get this error: dial tcp 10.0.0.88:9999: connect: no route to host
I've enabled all the needed swarm communication ports on both nodes: tcp 2377, tcp/udp 7946, and udp 4789.
Communication works if I run everything local.
Any ideas?
Thanks
An overlay network doesn't create connectivity between two nodes, it requires connectivity, and then uses that to connect containers running on each node. From the prerequisites, each node needs to be able to reach the overlay ports on every other node in the cluster. See the documentation for more details:
https://docs.docker.com/network/overlay/
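As a quick sanity check of that underlying connectivity (a sketch, assuming nc is installed on both nodes and PEER_IP is the other node's address), you can probe the required ports from each node in turn:

```shell
# Probe the Swarm ports on the remote node; repeat from the other node too.
PEER_IP=10.0.0.4   # hypothetical peer address; replace with the other node's IP

nc -zv  "$PEER_IP" 2377   # cluster management (TCP)
nc -zv  "$PEER_IP" 7946   # gossip (TCP)
nc -zvu "$PEER_IP" 7946   # gossip (UDP)
nc -zvu "$PEER_IP" 4789   # VXLAN overlay (UDP)
```

Keep in mind that UDP probes with nc are not conclusive: they only report failure if an ICMP port-unreachable reply comes back, so a "succeeded" result on UDP does not guarantee the port is actually reachable.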

Can Consul be run inside a Docker container using Docker for Windows?

I am trying to make Consul work inside a Docker container, but using Docker for Windows and Linux containers. I am using the official Consul Docker image. The documentation states that the container must use --net=host for Consul's consensus and gossip protocols.
The problem is, as far as I can tell, that Docker for Windows uses a Linux VM under the hood, and the "host" of the container is not the actual host machine, but that VM. I could not find a combination of -bind, -client and -advertise parameters (IP addresses), so that:
Other Consul agents on other hosts can connect to the local agent using the host machine's IP address.
Other containerized services on the same host can query the local agent's REST interface.
Whenever I pass the host machine's IP address on the LAN through -advertise, I get these errors inside the container:
2018/04/03 15:15:55 [WARN] consul: error getting server health from "linuxkit-00155d02430b": rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->10.241.2.67:8300: connect: invalid argument
2018/04/03 15:15:56 [WARN] consul: error getting server health from "linuxkit-00155d02430b": context deadline exceeded
Also, other agents on other hosts cannot connect to that agent.
Using -bind on that address fails - my guess is, since the container is inside the Linux VM, the host machine's address is not the container's host's address, and therefore cannot be bound.
I have tried various combinations of -bind, -client and -advertise, using addresses like 0.0.0.0, 127.0.0.1, 10.0.75.2 (the address on the Docker virtual switch) and the host machine's IP, but to no avail.
I am now wondering whether this is achievable at all. I have been trying this for quite some time, and I am despairing. Any advice would be appreciated!
I have tried the whole process without using --net=host, and everything works fine. I can connect agents across hosts, and I can query the local agent's REST interface from other containerized applications. Is --net=host really crucial to the functioning of Consul?
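For reference, the working bridge-networked setup described above can be sketched roughly as follows; the container name, the HOST_LAN_IP value, and the exact flag set are assumptions based on the official Consul image documentation, not the asker's actual command:

```shell
# Hypothetical sketch: run a Consul server on the default bridge network,
# publish its well-known ports, and advertise the Windows host's LAN IP.
# Ports: 8300 server RPC, 8301 tcp/udp Serf LAN gossip, 8500 HTTP API, 8600 DNS.
HOST_LAN_IP=10.241.2.67   # assumed; replace with your host machine's LAN address

docker run -d --name consul-server \
  -p 8300:8300 \
  -p 8301:8301 -p 8301:8301/udp \
  -p 8500:8500 -p 8600:8600/udp \
  consul agent -server -bootstrap-expect=1 \
  -client=0.0.0.0 -bind=0.0.0.0 -advertise="$HOST_LAN_IP"
```

With the ports published, -advertise can carry the real host address even though -bind stays on the container's own interfaces, which sidesteps the "host is actually a Linux VM" problem of Docker for Windows.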

using listen-addr with docker daemon

I am creating a Docker swarm by deploying the Docker daemon and running the swarm-related containers (the old method). Since I am deploying it on AWS, my listening IP address and my advertised IP address are different. Currently this feature exists only in Docker Swarm, i.e. it provides --listen-addr and --advertise-addr.
I wanted to ask if docker daemon have such functionality?
With dockerd you can define --ip=0.0.0.0, which is the default IP interface containers listen on when they start up. The default of 0.0.0.0 tends to be correct for most users.
You can also pass an option like -H tcp://127.0.0.1:2375 to listen on an IP address for client connections instead of the default /var/run/docker.sock socket (please use TLS if you listen on a public IP). dockerd is the server half of a client/server application, but by default it isn't listening on any IP interface.
The advertise address doesn't apply to dockerd at all, since no part of it connects to a key/value store to advertise its location the way Swarm does.
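As a sketch of the -H option in practice (the TCP port choice and the TLS file paths are assumptions), the daemon can listen on both the local socket and a TCP address at once:

```shell
# Listen on the local Unix socket and on all interfaces over TCP.
# TLS is strongly recommended for anything beyond 127.0.0.1;
# the certificate paths below are placeholders, not defaults.
dockerd \
  -H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem
```

Note this only controls where the daemon listens for API clients; there is still no --advertise-addr equivalent, because plain dockerd has nothing to advertise to.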
