The container from the worker node can't join the swarm's attachable network - docker

I encountered a problem when using 'docker run' on a worker node,
the scenario is as follows:
I have the following three VMs in my environment,
and they are already in Swarm mode.
VM.1 -> Master node in the Swarm
VM.2 -> Worker node in the Swarm
VM.3 -> Worker node in the Swarm
and I've also created the overlay network in this environment via:
docker network create --attachable --driver overlay --subnet 10.10.0.0/16 --gateway 10.10.0.1 test-net
and the overlay network was created successfully:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
fc1b70304011 bridge bridge local
f9ca924c1a4d docker_gwbridge bridge local
ea8fc696d6f1 host host local
r311gaq7iobo ingress overlay swarm
bd08afac574d none null local
wb7vfpxzdkyt test-net overlay swarm
but, once I use 'docker run' to run a container and attach it to "test-net" from a worker node (VM.2 or VM.3), I encounter the following problem:
# docker run -itd --name=test --net=test-net kafka:latest
c0324e6c3a8720b291cfc3aa7980846348f7a4450381036927924c52d343f622
docker: Error response from daemon: error creating external connectivity network: cannot create network 246bb018a15a6641a9cb26afec30c62eb4714816cfc0a307786c8a209a2418e6 (docker_gwbridge): conflicts with network 0093ca50dcbcf729aeeae537f424727b674843312ef63ea647db48c7b0077e45 (docker_gwbridge): networks have same bridge name.
However, the same 'docker run' command works on the Master node. I've googled this problem several times but still don't understand what is happening on the worker nodes...
Thanks for reading and for your help!

While investigating, I found that
this issue is not 100% reproducible on other machines/distributions.
On some machines, docker run -itd --name=test --net=test-net kafka:latest works as-is,
but if that does not work on a specific machine,
you can try running the container without --net first,
then use docker network connect --ip <ipaddress> <network> <container> to
attach the specific network to your container.
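For example, a rough sketch of this workaround using the setup from the question (10.10.0.10 is just an arbitrary free address inside the 10.10.0.0/16 subnet created above):
docker run -itd --name=test kafka:latest
docker network connect --ip 10.10.0.10 test-net test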

Related

docker network create command - swarm

Below is the command used to create an overlay network for a swarm cluster instead of using the bridge network driver:
$ docker network create -d overlay xyz
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
9c431bc9fec7 bridge bridge local
88a4c6a29fa4 docker_gwbridge bridge local
10a4bc649237 host host local
o79qllmq86xw ingress overlay swarm
417aca5efd6b none null local
nsteeoxfu9b1 xyz overlay swarm
$
$ docker service create --name service_name --network xyz -p 80:80 --replicas 12 <image>
What exactly is the purpose of the --network xyz option on the service command? Is this the network namespace driver?
docker service create --network is described as "Network attachments" (ref. docker service create --help); it attaches a service to an existing docker network, as documented here. You can attach a service to multiple docker networks.
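For example, the --network flag can simply be repeated to attach the service to more than one network (the service and network names below are placeholders, and both networks must already exist as overlay networks in the swarm):
docker service create --name my_service --network frontend-net --network backend-net -p 80:80 --replicas 3 <image>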

How to create docker overlay network?

My efforts to create an overlay network are in vain:
docker network create --driver overlay new_network
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Docker-machine list
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
dev - virtualbox Stopped Unknown
swarm-manager-1 - virtualbox Running tcp://192.168.99.103:2376 v18.09.5
If I try
docker $(docker-machine config swarm-manager-1) swarm init --advertise-addr $(docker-machine ip swarm-manager-1)
it says
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
How to create overlay network?
How to inspect the swarm?
I am on Ubuntu 18.04.
EDIT
This works
docker $(docker-machine config swarm-manager-1) network create --driver overlay new_network
ym9wva4e8ejqji9cn61tf14kv
However, the overlay network is not visible:
docker network ls
NETWORK ID NAME DRIVER SCOPE
ab450fe43ca5 bridge bridge local
14dbdf7dc1d9 chapter11_kong-net bridge local
0a76583939bc dockerapp_default bridge local
b2c31f5e97c7 host host local
569e2a86568b microservices-docker-go-mongodb_default bridge local
68174733413c miki_default bridge local
fbafcb186ac9 none null local
Why?
Most probably you have different configurations on your machine. You have to run the docker network command against the same Docker host as the docker swarm command from your example:
docker $(docker-machine config swarm-manager-1) network create --driver overlay new_network
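Listing the networks against that same Docker host should then show the overlay network, for example:
docker $(docker-machine config swarm-manager-1) network ls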

How to change the network of a running docker container?

I'm trying to update the network of a running docker container.
Note: I didn't attach any network while running the container.
[root@stagingrbt ~]# docker network connect host cdf8d6e3013d
Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network
[root@stagingrbt ~]# docker network connect docker_gwbridge cdf8d6e3013d
error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/networks/docker_gwbridge/connect: EOF
[root@stagingrbt ~]# docker network create -d host my-host-network
Error response from daemon: only one instance of "host" network is allowed
[root@stagingrbt ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
495080cf93e3 bridge bridge local
cf0408d6f13f docker_gwbridge bridge local
2c5461835eaf host host local
87e9cohcbogh ingress overlay swarm
84dbd78101e3 none null local
774882ac9b09 sudhirnetwork bridge local
When you start a container, such as:
docker run -d --name alpine1 alpine
It is connected to the bridge network by default; check it with:
docker container inspect alpine1
If you try to connect it to host network with:
docker network connect host alpine1
you obtain an error:
Error response from daemon: container cannot be disconnected from host network or connected to host network
you have to delete the container and run it again on the host network:
docker stop alpine1
docker rm alpine1
docker run -d --network host --name alpine1 alpine
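As a quick verification (not part of the original steps), inspecting the network mode should now print host:
docker inspect -f '{{.HostConfig.NetworkMode}}' alpine1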
This limitation is not present on bridge networks. You can start a container:
docker run -d --name alpine2 alpine
disconnect it from the bridge network and reconnect it to another bridge network.
docker network disconnect bridge alpine2
docker network create --driver bridge alpine-net
docker network connect alpine-net alpine2
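You can verify which networks the container is attached to, for example with:
docker inspect -f '{{json .NetworkSettings.Networks}}' alpine2
which should now list only alpine-net.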
Note also that according to the documentation:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
If you want to bypass the command line and change the network of your docker container via Portainer, you can do so. I'm not sure exactly which is the best way of doing this, but the steps below worked for me (changing a container that was running on the bridge network by default over to the host network):
In the Container list, click on the container name (emby, in my case)
Stop the container
Click on Duplicate/Edit
Scroll down to Advanced container settings and select the Network tab
Change the Network to host (or whatever you want to set it to)
Click on Deploy the container right above.
Confirm that you want to replace the old container (or deploy it under a new name if you want to be on the safe side and keep the old one).
Done!
Run or connect a container to a specific network: note that the network must already exist on the host. Either specify the network at container creation/startup time (docker create or docker run) with the --net option, or attach an existing container using the docker network connect command. For example:
docker network connect my-network my-container
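And a minimal sketch of the first option, specifying the network at startup time (nginx is only a placeholder image here):
docker run -d --net my-network --name my-container nginx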
I am not sure if we can change the container's network while it is running; however, assuming that the new docker network already exists, you can run the following commands to update your container's network.
Executed on Version: 20.10.21 Community Edition
# docker stop <container-name>
# docker network disconnect <old-network-id> <container-name>
# docker network connect <new-network-id> <container-name>
# docker start <container-name>
Note: you won't be able to switch to the host network from another network.

Can I connect directly to a docker swarm network?

I want a shell inside a Docker Service / Swarm network. Specifically, I want to be able to connect to a database that's inside the network.
From the manager node, I tried:
# docker network ls
NETWORK ID NAME DRIVER SCOPE
481c20b4039a bridge bridge local
2fhe9rtim9mz my-network overlay swarm
Then
docker run -it --network my-network alpine sh
But I get the error:
docker: Error response from daemon: swarm-scoped network (event-data-core-prod) is not compatible with docker create or docker run. This network can only be used by a docker service.
Is it possible to somehow start an interactive session that can connect to a network service?
Since Docker Engine v1.13 (as already mentioned by johnharris85), it has been possible for non-service containers to attach to swarm-mode overlay networks by using the --attachable command-line parameter when creating the network:
docker network create --driver overlay --attachable my-attachable-overlay-network
Regarding your followup question:
Is there a way to change this for an extant network?
Yes and no. As I already described in another question, you can make use of the docker service update feature:
To update an already running docker service:
Create an attachable overlay network:
docker network create --driver overlay --attachable my-attachable-overlay-network
Remove the non-attachable overlay network (in this example called my-non-attachable-overlay-network) from the service:
docker service update --network-rm my-non-attachable-overlay-network myservice
Add the attachable overlay network to the service:
docker service update --network-add my-attachable-overlay-network myservice
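Once the attachable network is in place, a standalone interactive container can join it as well, which is what the original question was after, for example:
docker run -it --network my-attachable-overlay-network alpine sh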

Docker ping container on other nodes

I have 2 virtual machines (VM1 with IP 192.168.56.101 and VM2 with IP 192.168.56.102, which can ping each other) and these are the steps I'm doing:
- Create consul container on VM1 with 'docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap'
- Create swarm manager on VM1 with 'docker run -d -p 3376:3376 swarm manage -H 0.0.0.0:3376 --advertise 192.168.56.101:3376 consul://192.168.56.101:8500'
- Create swarm agents on each VM with 'docker run -d swarm join --advertise <VM-IP>:2376 consul://192.168.56.101:8500'
If I run docker -H 0.0.0.0:3376 info I can see both nodes connected to the swarm and they are both healthy. I can also run containers and they are scheduled to the nodes. However, if I create a network, attach a few containers to it, then SSH into one node and try to ping every other container, I can only reach the containers that are running on the same virtual machine.
Both Virtual Machines have these DOCKER_OPTS:
DOCKER_OPTS="--cluster-store=consul://192.168.56.101:8500 --cluster-advertise=<VM-IP>:0 -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock"
I don't have a direct quote, but from what I've read on the Docker GitHub issue tracker, ICMP packets (ping) are never routed between containers on different nodes.
TCP connections to explicitly opened ports should work, but as of Docker 1.12.1 this is buggy.
Docker 1.12.2 has some bug fixes regarding establishing connections to containers on other hosts. But ping is not going to work across hosts.
You can only ping containers on the same node because you attach them to a local scope network.
As suggested in the comments, if you want to ping containers across hosts (meaning from a container on VM1 to a container on VM2) using docker swarm (or docker swarm mode) without explicitly opening ports, you need to create an overlay network (or globally scoped network) and assign/start containers on that network.
To create an overlay network:
docker network create -d overlay mynet
Then start the containers using that network:
For Docker Swarm mode:
docker service create --replicas 2 --network mynet --name web nginx
For Docker Swarm (legacy):
docker run -itd --network=mynet busybox
For example, if we create two containers (on legacy Swarm):
docker run -itd --network=mynet --name=test1 busybox
docker run -itd --network=mynet --name=test2 busybox
You should be able to docker attach on test2 to ping test1 and vice-versa.
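As a quick check without attaching, something like this should also work (assuming both busybox containers above are still running):
docker exec test1 ping -c 3 test2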
For more details you can refer to the networking documentation.
Note: If containers still can't ping each other after the creation of an overlay network and attaching containers to it, check the firewall configurations of the VMs and make sure that these ports are open:
data plane / vxlan: UDP 4789
control plane / gossip: TCP/UDP 7946
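For example, on a host that uses firewalld (adjust to whatever firewall you actually run), these ports could be opened with:
firewall-cmd --permanent --add-port=4789/udp --add-port=7946/tcp --add-port=7946/udp
firewall-cmd --reload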
