How to get all IP addresses on a Docker network? - docker

I have containers running in a swarm stack of services (each on a different docker-machine) connected together on an overlay docker network.
How can I get all IP addresses in use on the network, associated with their service or container names, from inside a container on this network?

If you want to run this from inside the containers, first you have to mount docker.sock into each service (assuming the docker CLI is installed in the container):
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Then install jq in each container, after which you can simply run:
docker network inspect <network_name_here> | jq -r 'map(.Containers[].IPv4Address)[]'
The expected output is something like:
172.21.0.2/16
172.21.0.5/16
172.21.0.4/16
172.21.0.3/16
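If installing jq everywhere is not an option, Docker's built-in Go templating can produce a similar listing directly (a minimal sketch, using the same placeholder network name as above):
docker network inspect <network_name_here> --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{printf "\n"}}{{end}}'
This prints one container name and IP per line, in the style of the jq output above.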

Find the name or ID of the overlay network:
$ docker network ls | grep overlay
Do an inspect:
docker network inspect $NETWORK_NAME
You will be able to find the container names and the IPs allocated to them. You can fetch/grep the required values from the inspect output (see the grep example after the output below). The output looks something like this:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.23.0.0/16",
"Gateway": "172.23.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"183584efd63af145490a9afb61eac5db994391ae94467b32086f1ece84ec0114": {
"Name": "emailparser_lr_1",
"EndpointID": "0a9d0958caf0fa454eb7dbe1568105bfaf1813471d466e10030db3f025121dd7",
"MacAddress": "02:42:ac:17:00:04",
"IPv4Address": "172.23.0.4/16",
"IPv6Address": ""
},
"576cb03e753a987eb3f51a36d4113ffb60432937a2313873b8608c51006ae832": {
"Name": "emailparser",
"EndpointID": "833b5c940d547437c4c3e81493b8742b76a3b8644be86af92e5cdf90a7bb23bd",
"MacAddress": "02:42:ac:17:00:02",
"IPv4Address": "172.23.0.2/16",
"IPv6Address": ""
},
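As a quick-and-dirty alternative to jq, grep over the same inspect output pulls out the names and IPs (a sketch; note that the first "Name" match is the network itself):
docker network inspect $NETWORK_NAME | grep -E '"(Name|IPv4Address)"'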

Assuming you're using the default VIP endpoint, you can use DNS to resolve the IPs of a service. Here's an example that uses dig to get the VIP IP, and then gets the individual container IPs behind that VIP via the tasks.<service-name> DNS entry.
docker network create --driver overlay --attachable sweet
docker service create --name nginx --replicas=5 --network sweet nginx
docker container run --network sweet -it bretfisher/netshoot dig nginx
~~~
;; ANSWER SECTION:
nginx. 600 IN A 10.0.0.3
~~~
docker container run --network sweet -it bretfisher/netshoot dig tasks.nginx
~~~
;; ANSWER SECTION:
tasks.nginx. 600 IN A 10.0.0.5
tasks.nginx. 600 IN A 10.0.0.8
tasks.nginx. 600 IN A 10.0.0.7
tasks.nginx. 600 IN A 10.0.0.6
tasks.nginx. 600 IN A 10.0.0.4
~~~
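If you only want the bare addresses, dig +short trims the answer section down to one IP per line (same service and image as above):
docker container run --network sweet -it bretfisher/netshoot dig +short tasks.nginx
~~~
10.0.0.5
10.0.0.8
10.0.0.7
10.0.0.6
10.0.0.4
~~~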

Loop over every network on the host and inspect each one:
for n in $(docker network ls -q); do docker network inspect "$n"; done
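To restrict that loop to overlay networks and skip the local bridge/host/none entries, filter on the driver (a sketch using the standard network ls filters):
for n in $(docker network ls -q --filter driver=overlay); do docker network inspect "$n"; done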

First, find the name of the network which your swarm is using.
Then run docker network inspect <NETWORK-NAME>. This gives you JSON output, in which you'll find an object under the key "Containers". This object lists all the containers in the network along with their IP addresses.
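If you only care about that object, --format can print it directly instead of the full JSON (a minimal sketch; pipe it through jq for pretty-printing if you like):
docker network inspect <NETWORK-NAME> --format '{{json .Containers}}'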

Related

Docker swarm overlay, single node, no connection between services

I'm trying to make a connection from one service to another. To achieve this I created an overlay network and two services attached to it, like so:
$ docker network create -d overlay net1
$ docker service create --name busybox --network net1 busybox sleep 3000
$ docker service create --name busybox2 --network net1 busybox sleep 3000
Now I make sure my services are running and that both are connected to the overlay.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ecc8dd465cb1 busybox:latest "sleep 3000" About a minute ago Up About a minute busybox2.1.uw597s90tkvbcaisgaq7los2q
f8cfe793e3d9 busybox:latest "sleep 3000" About a minute ago Up About a minute busybox.1.l5lxp4v0mcbujqh79dne2ds42
$ docker network inspect net1
[
    {
        "Name": "net1",
        "Id": "5dksx8hlxh1rbj42pva21obyz",
        "Created": "2021-06-22T14:23:43.739770415Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.4.0/24",
                    "Gateway": "10.0.4.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ecc8dd465cb12c622f48b109529534279dddd4fe015a66c848395157fb73bc69": {
                "Name": "busybox2.1.uw597s90tkvbcaisgaq7los2q",
                "EndpointID": "b666f6374a815341cb8af7642a7523c9bb153f153b688218ad006605edd6e196",
                "MacAddress": "02:42:0a:00:04:06",
                "IPv4Address": "10.0.4.6/24",
                "IPv6Address": ""
            },
            "f8cfe793e3d97f72393f556c2ae555217e32e35b00306e765489ac33455782aa": {
                "Name": "busybox.1.l5lxp4v0mcbujqh79dne2ds42",
                "EndpointID": "fff680bd13a235c4bb050ecd8318971612b66954f7bd79ac3ee0799ee18f16bf",
                "MacAddress": "02:42:0a:00:04:03",
                "IPv4Address": "10.0.4.3/24",
                "IPv6Address": ""
            },
            "lb-net1": {
                "Name": "net1-endpoint",
                "EndpointID": "2a3b02f66f395e613c6bc88f16d0723762d28488b429a9e50f7df24c04e9f1f0",
                "MacAddress": "02:42:0a:00:04:04",
                "IPv4Address": "10.0.4.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4101"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "e1c2ac76b95b",
                "IP": "10.18.0.6"
            }
        ]
    }
]
So far so good! Next I exec into one of the containers and try to nslookup the second one, but have no luck.
$ docker exec -it busybox.1.l5lxp4v0mcbujqh79dne2ds42 sh
/ # nslookup busybox2
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find busybox2: No answer
*** Can't find busybox2: No answer
/ # nslookup busybox2.1.uw597s90tkvbcaisgaq7los2q
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find busybox2.1.uw597s90tkvbcaisgaq7los2q: No answer
*** Can't find busybox2.1.uw597s90tkvbcaisgaq7los2q: No answer
I know that overlay questions are quite common here, but they are mostly about node-to-node connections, not a single-node swarm. Another thing to keep in mind: there is no local firewall on that node at all.
Am I trying to connect in the wrong way, or is it a configuration issue?
The solution was simply adding the --attachable flag to the network create command. After that I could ping my services by name.
It turns out you need that flag whether you are deploying stacks (in my case I have multiple stacks in the same swarm) or single services.
docker service create ... --network net1 does not create network aliases by default. To get that behaviour you need to use the long-form syntax of --network:
docker service create --network name=net1,alias=busybox1 busybox tail -f /dev/null
It's interesting that making the network attachable has a similar effect. Usually a network is made attachable so that containers can be attached to it via docker run --network net1 ..., so while this approach works, it has potentially undesirable side effects with respect to whatever attachability is supposed to protect against.
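A minimal sketch to verify the alias behaviour (assumes Docker 17.06+ for the long-form --network syntax; the alias name bb2 is arbitrary, and the network is made attachable only so a throwaway test container can join it):
docker network create -d overlay --attachable net1
docker service create --name busybox2 --network name=net1,alias=bb2 busybox sleep 3000
# resolve the alias from a standalone container on the same network
docker run --rm --network net1 busybox nslookup bb2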

Error message in docker about a user-specified subnet

I'm trying to assign an IP to a container using the --ip flag, but I get the following message:
Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets.
What does this message mean? How do I get the container to run?
The network was created with the command:
docker network create my_network_name
And the container is called with:
docker run -it --net my_network_name --ip 172.22.0.30 image_name
When you create your network, provide a subnet from the private IP range that is free in your environment. Then, when you create your container on this network, pick an address from that subnet.
For instance, with the IP range 10.11.0.0/16 and the container IP 10.11.0.10:
$ docker network create my_network_name --subnet=10.11.0.0/16
$ docker run -it --net my_network_name --ip 10.11.0.10 image_name
And here is an actual run:
$ docker --version
Docker version 19.03.6, build 369ce74a3c
$ uname -a
Linux 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ docker network create my_network_name --subnet=10.11.0.0/16
35a9e4e5fb4ff243202fc4f6b687901c3cbfcd8fe34e06290db5d257310417a2
$ docker run --rm -it --net my_network_name --ip 10.11.0.10 ubuntu
root@f0d283bc5023:/#
On another window:
$ docker network inspect my_network_name
[
    {
        "Name": "my_network_name",
        "Id": "35a9e4e5fb4ff243202fc4f6b687901c3cbfcd8fe34e06290db5d257310417a2",
        "Created": "2020-09-19T11:51:59.985580503-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.11.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "f0d283bc5023fbe8a1c854fd2bb5bdd121be7245013cfac62d9933f95ace7bbf": {
                "Name": "sleepy_colden",
                "EndpointID": "088fbd64b82e05920fda91b28ebb5b4a14c9fca3ac9fde457c8819663f6049df",
                "MacAddress": "02:42:0a:0b:00:0a",
                "IPv4Address": "10.11.0.10/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Why can't I attach a container to a docker network?

I've created a user-defined attachable overlay swarm network. I can inspect it, but when I attempt to attach a container to it, I get the following error when running on the manager node:
$ docker network connect mrunner baz
Error response from daemon: network mrunner not found
The network is defined and is attachable
$ docker network inspect mrunner
[
    {
        "Name": "mrunner",
        "Id": "kviwxfejsuyc9476eznb7a8yw",
        "Created": "2019-06-20T21:25:45.271304082Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4098"
        },
        "Labels": null
    }
]
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4a454d677dea bridge bridge local
95383b47ee94 docker_gwbridge bridge local
249684755b51 host host local
zgx0nppx33vj ingress overlay swarm
kviwxfejsuyc mrunner overlay swarm
a30a12f8d7cc none null local
uftxcaoz9rzg taskman_default overlay swarm
Why is this network connection failing?
This was answered here: https://github.com/moby/moby/issues/39391
See this:
To create an overlay network for use with swarm services, use a command like the following:
$ docker network create -d overlay my-overlay
To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the --attachable flag:
$ docker network create -d overlay --attachable my-attachable-overlay
So, by default an overlay network cannot be used by standalone containers; if you want that, you need to add --attachable to allow the network to be used by standalone containers.
Thanks to thaJeztah on the docker GitHub repo:
The solution is as follows; essentially, make the flow service-centric:
docker network create -d overlay --attachable --scope=swarm somenetwork
docker service create --name someservice nginx:alpine
If you want to connect the service to somenetwork after it was created, update the service:
docker service update --network-add somenetwork someservice
After this, all tasks of the someservice service will be connected to somenetwork (in addition to the other overlay networks they were connected to).
https://github.com/moby/moby/issues/39391#issuecomment-505050610
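To confirm the update took effect, you can list the networks in the service spec and, since somenetwork was created with --attachable, reach the service from a standalone container (a sketch; depending on your Docker version the network list may live at .Spec.Networks instead of .Spec.TaskTemplate.Networks):
docker service inspect --format '{{json .Spec.TaskTemplate.Networks}}' someservice
docker run --rm -it --network somenetwork alpine ping -c 3 someservice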

Connecting across containers in a Docker network

I have this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3252abd7587 cdt-tests "/bin/bash /home/new…" 5 seconds ago Exited (1) 22 seconds ago cdt-tests
f492760705e3 cdt-server "/bin/bash /usr/loca…" 52 seconds ago Up About a minute 0.0.0.0:3040->3040/tcp cdt-server
89c5a28855df mongo "docker-entrypoint.s…" 55 seconds ago Up About a minute 27017/tcp cdt-mongo
1eaa4aa684a9 selenium/standalone-firefox:3.4.0-chromium "/opt/bin/entry_poin…" 59 seconds ago Up About a minute 4444/tcp cdt-selenium
The cdt-tests container is attempting to make connections to the other containers in the same network. The network looks like this:
$ docker network inspect cdt-net # this yields the JSON below
[
    {
        "Name": "cdt-net",
        "Id": "8c2b486e950076130860e0c6c09f06eaf8cccf02127786b80bf7cc169f8eae0f",
        "Created": "2018-01-23T21:52:34.5021152Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1eaa4aa684a9d7c1ad7a1b7ac28418b100e6b8d7a22ceb2a97cf51e4487c5fb2": {
                "Name": "cdt-selenium",
                "EndpointID": "674ce85e14339e67e767ab9844cd2fd1356fc3e60ab050e1cd1457e4168ac9fc",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "89c5a28855dfde05fe9db9a35bbe7bce232eb56d9024022785d2a65570c423b5": {
                "Name": "cdt-mongo",
                "EndpointID": "ed497939965363cd194b4fea8e6a26ee47ef7f24bef56c9726003a897be83dd1",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "f492760705e30be4fe8469ae422e96548ee2192f41314e3815762a9e39a4cc82": {
                "Name": "cdt-server",
                "EndpointID": "17e8bd6f7735a52669f0fe92b2d5717c7a3ae6954c108af3f29c13233ef20ee6",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
In my cdt-tests container, I run these commands:
export CDT_SELENIUM_HOST="cdt-selenium.cdt-net";
export OPENSHIFT_NODEJS_IP="127.0.0.1";
export OPENSHIFT_NODEJS_PORT="3040";
export CDT_SERVER_HOST="127.0.0.1";
export CDT_SERVER_PORT=3040;
export OPENSHIFT_MONGODB_DB_HOST="127.0.0.1"
export OPENSHIFT_MONGODB_DB_PORT=27017
nc -zv "$CDT_SELENIUM_HOST" 4444 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "selenium server is live"
else
echo "selenium server is NOT live"
exit 1;
fi
nc -zv "$OPENSHIFT_MONGODB_DB_HOST" 27017 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "mongo server is live"
else
echo "mongo server is NOT live"
exit 1;
fi
nc -zv "$CDT_SERVER_HOST" 3040 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "cdt server is live"
else
echo "cdt server is NOT live"
exit 1;
fi
All of those connection tests fail. Does anyone know how to connect between containers in the same Docker network? Is there some surefire pattern to use?
It looks like you're trying to use 127.0.0.1 as the address for your other containers. Docker containers each have a unique IP address in an isolated network space. Much like on your own physical host, 127.0.0.1 is a special address that means "this container". Since none of those services are running in the container in which you're running your tests, you can't connect to anything.
You need to use the IP address of the container running the service you want to test. Because IP addresses change with every deployment, it's not convenient to use a literal address; you need some way to get the information dynamically. For this reason, Docker maintains a DNS service on each network, so that you can simply use the name of a container like any other hostname.
For example, in your environment, you could set:
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"
And then your mongo test should succeed. And so forth for the other _HOST and _IP variables you're using.
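Applied to the rest of the test script, using the container names from the network inspect output above, that would look like:
export CDT_SELENIUM_HOST="cdt-selenium"
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"
export OPENSHIFT_MONGODB_DB_PORT=27017
export CDT_SERVER_HOST="cdt-server"
export CDT_SERVER_PORT=3040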
I can assure you that the way @larsks mentioned works well, but there is a still easier way. The cdt-tests container is in the same network as the other containers, based on your description above, so you can attach all the containers to a user-defined bridge network created with docker network create -d bridge mynet.
When starting the containers, just add the --net=mynet flag, e.g. docker run -tid --name=cdt-server --net=mynet cdt-server.
Thus there is no need to add any environment variables pointing at literal IPs, since user-defined bridge networks provide automatic DNS resolution between containers; a sketch follows below. See the Docker documentation on user-defined bridge networks for an introduction.
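A minimal sketch of that setup, reusing the container names and images from the question (the cdt-server image name is taken from the docker ps output above):
docker network create -d bridge mynet
docker run -tid --name=cdt-mongo --net=mynet mongo
docker run -tid --name=cdt-server --net=mynet cdt-server
# container names now resolve via the embedded DNS server at 127.0.0.11
docker run --rm --net=mynet busybox nc -zv cdt-mongo 27017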

docker 1.12 swarm mode: how to connect to another container on an overlay network, and how to use the load balancing?

I'm using docker-machine on macOS, and created a swarm mode cluster like this:
➜ docker-machine create --driver virtualbox docker1
➜ docker-machine create --driver virtualbox docker2
➜ docker-machine create --driver virtualbox docker3
➜ config docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
docker1 - virtualbox Running tcp://192.168.99.100:2376 v1.12.0-rc4
docker2 - virtualbox Running tcp://192.168.99.101:2376 v1.12.0-rc4
docker3 - virtualbox Running tcp://192.168.99.102:2376 v1.12.0-rc4
➜ config docker-machine ssh docker1
docker@docker1:~$ docker swarm init
No --secret provided. Generated random secret:
b0wcyub7lbp8574mk1oknvavq
Swarm initialized: current node (8txt830ivgrxxngddtx7k4xe4) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --secret b0wcyub7lbp8574mk1oknvavq \
--ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 \
10.0.2.15:2377
# on hosts docker2 and docker3, I run the command to join the cluster:
docker@docker2:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.
docker@docker3:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.
# on docker1:
docker@docker1:~$ docker node ls
ID HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS
8txt830ivgrxxngddtx7k4xe4 * docker1 Accepted Ready Active Leader
9fliuzb9zl5jcqzqucy9wfl4y docker2 Accepted Ready Active
c4x8rbnferjvr33ff8gh4c6cr docker3 Accepted Ready Active
Then I create the network mynet with the overlay driver on docker1.
The first question: I can't see the network on the other docker hosts:
docker@docker1:~$ docker network create --driver overlay mynet
a1v8i656el5d3r45k985cn44e
docker@docker1:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
5ec55ffde8e4 bridge bridge local
83967a11e3dd docker_gwbridge bridge local
7f856c9040b3 host host local
bpoqtk71o6qo ingress overlay swarm
a1v8i656el5d mynet overlay swarm
829a614aa278 none null local
docker@docker2:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
da07b3913bd4 bridge bridge local
7a2e627634b9 docker_gwbridge bridge local
e8971c2b5b21 host host local
bpoqtk71o6qo ingress overlay swarm
c37de5447a14 none null local
docker@docker3:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
06eb8f0bad11 bridge bridge local
fb5e3bcae41c docker_gwbridge bridge local
e167d97cd07f host host local
bpoqtk71o6qo ingress overlay swarm
6540ece8e146 none null local
Then I create the nginx service, which echoes the hostname on its index page, on docker1:
docker@docker1:~$ docker service create --name nginx --network mynet --replicas 1 -p 80:80 dhub.yunpro.cn/shenshouer/nginx:hostname
9d7xxa8ukzo7209r30f0rmcut
docker@docker1:~$ docker service tasks nginx
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
0dvgh9xfwz7301jmsh8yc5zpe nginx.1 nginx dhub.yunpro.cn/shenshouer/nginx:hostname Running 12 seconds ago Running docker3
The second question: I can't reach the service via the IP of the docker1 host. I only get a response when accessing the IP of docker3:
➜ tools curl 192.168.99.100
curl: (52) Empty reply from server
➜ tools curl 192.168.99.102
fda9fb58f9d4
So I think there is no load balancing. How do I use the built-in load balancing?
Then I create another service on the same network, with the busybox image, to test ping:
docker@docker1:~$ docker service create --name busybox --network mynet --replicas 1 busybox sleep 3000
akxvabx66ebjlak77zj6x1w4h
docker@docker1:~$ docker service tasks busybox
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
9yc3svckv98xtmv1d0tvoxbeu busybox.1 busybox busybox Running 11 seconds ago Running docker1
# on host docker3, I get the container name and the container IP for a ping test:
docker@docker3:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fda9fb58f9d4 dhub.yunpro.cn/shenshouer/nginx:hostname "sh -c /entrypoint.sh" 7 minutes ago Up 7 minutes 80/tcp, 443/tcp nginx.1.0dvgh9xfwz7301jmsh8yc5zpe
docker@docker3:~$ docker inspect fda9fb58f9d4
...
"Networks": {
"ingress": {
"IPAMConfig": {
"IPv4Address": "10.255.0.7"
},
"Links": null,
"Aliases": [
"fda9fb58f9d4"
],
"NetworkID": "bpoqtk71o6qor8t2gyfs07yfc",
"EndpointID": "98c98a9cc0fcc71511f0345f6ce19cc9889e2958d9345e200b3634ac0a30edbb",
"Gateway": "",
"IPAddress": "10.255.0.7",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:ff:00:07"
},
"mynet": {
"IPAMConfig": {
"IPv4Address": "10.0.0.3"
},
"Links": null,
"Aliases": [
"fda9fb58f9d4"
],
"NetworkID": "a1v8i656el5d3r45k985cn44e",
"EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
"Gateway": "",
"IPAddress": "10.0.0.3",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:00:00:03"
}
}
}
}
]
# on host docker1:
docker@docker1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b94716e9252e busybox:latest "sleep 3000" 2 minutes ago Up 2 minutes busybox.1.9yc3svckv98xtmv1d0tvoxbeu
docker@docker1:~$ docker exec -it b94716e9252e ping nginx.1.0dvgh9xfwz7301jmsh8yc5zpe
ping: bad address 'nginx.1.0dvgh9xfwz7301jmsh8yc5zpe'
docker@docker1:~$ docker exec -it b94716e9252e ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
90 packets transmitted, 0 packets received, 100% packet loss
The third question: how do containers on the same network communicate with each other?
And the network mynet looks like this:
docker@docker1:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
5ec55ffde8e4 bridge bridge local
83967a11e3dd docker_gwbridge bridge local
7f856c9040b3 host host local
bpoqtk71o6qo ingress overlay swarm
a1v8i656el5d mynet overlay swarm
829a614aa278 none null local
docker@docker1:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "b94716e9252e6616f0f4c81e0c7ef674d7d5f4fafe931953fced9ef059faeb5f": {
                "Name": "busybox.1.9yc3svckv98xtmv1d0tvoxbeu",
                "EndpointID": "794be0e92b34547e44e9a5e697ab41ddd908a5db31d0d31d7833c746395534f5",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]
docker@docker2:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
da07b3913bd4 bridge bridge local
7a2e627634b9 docker_gwbridge bridge local
e8971c2b5b21 host host local
bpoqtk71o6qo ingress overlay swarm
c37de5447a14 none null local
docker@docker3:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
06eb8f0bad11 bridge bridge local
fb5e3bcae41c docker_gwbridge bridge local
e167d97cd07f host host local
bpoqtk71o6qo ingress overlay swarm
a1v8i656el5d mynet overlay swarm
6540ece8e146 none null local
docker@docker3:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "fda9fb58f9d46317ef1df60e597bd14214ec3fac43e32f4b18a39bb92925aa7e": {
                "Name": "nginx.1.0dvgh9xfwz7301jmsh8yc5zpe",
                "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]
So, the fourth question: is there a built-in KV store?
Question 1: the networks on other hosts are created on demand; when swarm schedules a task on a host, the network will be created on that host.
Question 2: the load balancing works out of the box; there may be some problem with your docker swarm cluster. You need to check the iptables and IPVS rules (see the sketch after this list).
Question 3: containers on the same overlay network (mynet in your case) can talk to each other, and docker has a built-in DNS server which resolves container names to IP addresses.
Question 4: yes, there is; swarm mode ships with its own raft-backed store, so no external KV store is needed.
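For question 2, a possible starting point for that check (a heavily hedged sketch: the ingress_sbox network namespace is an implementation detail that may differ across Docker versions, and ipvsadm must be installed on the host):
# on the node that does not answer on the published port
sudo iptables -t mangle -L -n | grep 80
sudo nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -L -n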
