Docker Swarm Overlay Network Not Working Between Nodes - docker

I am trying to connect my Docker services together in Docker Swarm.
The network is made up of 2 Raspberry Pis.
I can create an overlay network called test-overlay, and I can see that services on either Raspberry Pi node can connect to the network.
My problem:
I cannot reach services between nodes over the overlay network.
Given the following configuration of nodes and services, service1 can use the address http://service2 to connect to service2, but it does NOT work for http://service3. However, http://service3 is accessible from service4.
node1:
- service1
- service2
node2:
- service3
- service4
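For reference, the network and services were created roughly along these lines (the image names below are placeholders, not the actual images):
$ docker network create --driver overlay test-overlay
$ docker service create --name service1 --network test-overlay image1
$ docker service create --name service2 --network test-overlay image2
$ docker service create --name service3 --network test-overlay image3
$ docker service create --name service4 --network test-overlay image4
Swarm decides task placement, which is how service1/service2 ended up on node1 and service3/service4 on node2.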
I am new to Docker Swarm and any help is appreciated.
Inspecting the overlay network
I have run the command sudo docker network inspect test-overlay on both nodes.
On the manager node this returns the following:
[
    {
        "Name": "test-overlay",
        "Id": "skxhz8sb3f82dhh9jt9t3j5yl",
        "Created": "2018-04-15T20:31:20.629719732Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3acb436a0cc9a4d584d537edb1546988d334afa4793cc4fae4dd6ac9b48828ea": {
                "Name": "docker-registry.1.la1myuodpkq0x5h39pqo6lt7f",
                "EndpointID": "66887fb1f5f253c6cbec149aa51ab85168903fdd2290719f26d2bcd8d6c68dc8",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            },
            "786e1fee538f81fe41ccd082800c646a0e191b0fd912e5c15530e61c248e81ac": {
                "Name": "portainer.1.qyvvlcdqo5sewuku3eiykaplz",
                "EndpointID": "0d29e5452c208ed637ae2e7dcec026f39d2431e8e0e20765a9e0e6d6dfdc60ca",
                "MacAddress": "02:42:0a:00:00:15",
                "IPv4Address": "10.0.0.21/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4101"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "d049fc8f8ae1",
                "IP": "192.168.1.2"
            },
            {
                "Name": "6c0da128f308",
                "IP": "192.168.1.3"
            }
        ]
    }
]
On the worker node this returns the following:
[
    {
        "Name": "test-overlay",
        "Id": "skxhz8sb3f82dhh9jt9t3j5yl",
        "Created": "2018-04-20T14:04:57.870696195Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "4cb50161119e4b58a472e1b5c380c301bbb00a23fc99fc2e0712a8c4bde6d9d4": {
                "Name": "minio.1.fo2su2quv8herbmnxqfi3g8w2",
                "EndpointID": "3e85786304ed08f02c09b8e1ed6a153a3b4c2ef7afe503a1b0ca6cf341521645",
                "MacAddress": "02:42:0a:00:00:d6",
                "IPv4Address": "10.0.0.214/24",
                "IPv6Address": ""
            },
            "ce99b3788a4f9438e276e0f52a8f4d29fa09179e3e93b31b14f45339ce3c5315": {
                "Name": "load-balancer.1.j64h1eecsc05b7d397ejvedv3",
                "EndpointID": "3b7e73d27fe30151f2dc2a0ba8a5afc7f74fd283159a03a592be10e297f58d51",
                "MacAddress": "02:42:0a:00:00:d0",
                "IPv4Address": "10.0.0.208/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4101"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "d049fc8f8ae1",
                "IP": "192.168.1.2"
            },
            {
                "Name": "6c0da128f308",
                "IP": "192.168.1.3"
            }
        ]
    }
]

It seems this problem was caused by the nodes not being able to reach each other on the required ports. The following ports need to be open between all nodes:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
Before you open those ports, though, consider a simpler alternative: the docker image portainer/agent. As the documentation says,
The Portainer Agent is a workaround for a Docker API limitation when using the Docker API to manage a Docker environment.
https://portainer.readthedocs.io/en/stable/agent.html
I hope this helps anyone else experiencing this problem.
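To confirm this is the issue before changing anything, a quick reachability check from one node against the other helps; something like this with netcat, using the peer IPs from the inspect output above (UDP probes are best-effort, since an open UDP port may simply not answer):
$ nc -zv 192.168.1.3 2377      # TCP: cluster management
$ nc -zv 192.168.1.3 7946      # TCP: node-to-node communication
$ nc -zvu 192.168.1.3 7946     # UDP: node-to-node communication
$ nc -zvu 192.168.1.3 4789     # UDP: VXLAN overlay traffic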

I am not able to leave a comment yet, but I managed to solve this issue with the solution provided by X0r0N, and I am leaving this answer to help people in my position find a solution in the future.
I was deploying 10 Droplets in DigitalOcean, with the default Docker image provided by Docker. Its description says that it closes all ports except those related to Docker, but this clearly does not include the Swarm use case.
After allowing ports 2377, 4789 and 7946 in ufw, the Docker Swarm is now working as expected.
To make this answer stand on its own, the ports map to the following functionality:
TCP port 2377: cluster management communication
TCP and UDP port 7946: communication between nodes
UDP port 4789: overlay network traffic
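The ufw rules in question would look roughly like this, run on every node (allowing both protocols on 7946):
$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp
$ sudo ufw reload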

Check whether your nodes have the ports the swarm needs opened properly, as described under "Prerequisites" at https://docs.docker.com/network/overlay/ :
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic

Related

ping: proxy: Temporary failure in name resolution in docker container network -- "proxy" is container name here running in docker

Here I have two containers, proxy and my_ngnix, running inside Docker.
Both containers are inside one network, "bridge", as shown below:
C:\> docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "82ffe522177d113af71d150c96b5e43df8946b9f17f901152cc2b4b96caf313a",
        "Created": "2022-12-25T14:38:40.7492988Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "008e6142e13e91624e89b26ba697bff60765965a09daedafa6db766f30b6beb9": {
                "Name": "proxy",
                "EndpointID": "ed0536b6b97d9ad00b8deeb8dc5ad6f91e9809af3fc4032dfca4abd02760cc71",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "e215ead97a7e6580d3f6fec0f25790771af9f878b4194d61fa041b218a2117bf": {
                "Name": "my_ngnix",
                "EndpointID": "f0e7a6e824c18587a1cb32de89dbc8a1ec2fa62dcc7fe38516d294d7fdb19606",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
The moment I run the command
C:\> docker container exec -it my_ngnix ping proxy
it shows me the error: ping: proxy: Temporary failure in name resolution
Can anyone help me here?
I have installed all the required updates in the container, so I can use the ping command.
Why are the containers not able to ping each other using the container name? The same command works with an IP address:
C:\> docker container exec -it my_ngnix ping 172.17.0.3
That works; it just won't work with the container name.
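Worth knowing here: Docker's default bridge network does not provide DNS resolution between containers by name; only user-defined networks do. A minimal sketch of the usual fix (my_net is a placeholder name):
C:\> docker network create my_net
C:\> docker network connect my_net proxy
C:\> docker network connect my_net my_ngnix
C:\> docker container exec -it my_ngnix ping proxy
On the user-defined network, Docker's embedded DNS server resolves container names, so the ping by name should succeed.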

Not able to access container by user-defined network name from another container

I created a new network.
I exposed port 8086 for one of my containers and also published this port, then ran both containers with --network influx-net.
Checking docker network inspect influx-net gives me:
docker network inspect influx-net
[
    {
        "Name": "influx-net",
        "Id": "76ad2f3ec76e15d88330739c2b3db866a15d5affb0912c64c7ec7615b14e8570",
        "Created": "2021-11-09T17:32:42.479637982+09:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "92c0f2c5dc5c75db15e5a5ea0bed6ec5047f0c57dba06283ef6e72c8adcc6a3a": {
                "Name": "ultimate_hydroponics",
                "EndpointID": "529c2c31baaec7685251a584e8f5643b4669966372b629d67f3a9910da5e809c",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "a678409dbc43685fd089ed29f42601b67c46c20b66c277e461a8fe2adad31b5a": {
                "Name": "influxdb",
                "EndpointID": "73263b9969dc9c67760ec3aab4ebab75e16763c75cd498828a737d9e56ed34ef",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
So both of my containers are connected to the network influx-net.
But when I try to ping or wget the InfluxDB container (influx-net:8086) from the other container by its network name, I get nothing.
When I do the same with 192.168.111.11:8086 (my PC's IP), I get a response.
What is the problem? The local PC firewall is off.
# create a network
$ docker network create influx-net
# start influx db attached to the network influx-net
$ docker run -d --name idb --net influx-net influxdb
Now, create a new container attached to the same network and try connecting:
$ docker run -it --net influx-net ubuntu
root@fc26733c33da:/# telnet idb 8086
Trying 172.18.0.2...
Connected to idb.
Escape character is '^]'.
And it's working.
If you need to connect using IP, inspect the network to get container IP and then use the same to connect.
$ docker network inspect influx-net
[
    {
        "Name": "influx-net",
        ...
        "Containers": {
            "844311255fb9dd74fee1df2dc65023533ad961d1dd6345128cc2c92237ba35e0": {
                "Name": "idb",
                "EndpointID": "b294cb0661f9f1166833f381a02bcbfa531cdeba6657c51b9595814f5ee323be",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16", 👈 # this one here
                "IPv6Address": ""
            }, ...
        },
        "Options": {},
        "Labels": {}
    }
]
$ docker run -it --net influx-net ubuntu
root@360bbca6534d:/# telnet 172.18.0.2 8086
Trying 172.18.0.2...
Connected to 172.18.0.2.
Escape character is '^]'.
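One more detail from the question itself: influx-net:8086 uses the network name as a hostname, and Docker's embedded DNS resolves container names (and network aliases), not network names. So connecting by the container name from the question's own inspect output should work; a quick check along these lines (the busybox image and InfluxDB's /ping health endpoint are assumptions here):
$ docker run --rm --net influx-net busybox wget -qO- http://influxdb:8086/ping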

How can I connect my docker containers on the default bridge network?

I have 2 Docker containers running on my Windows 10 machine. I have been able to interact with them by binding container ports to host ports, but now I want to dockerize another application that I have been using to interact with these containers. Up until now I have been configuring the URLs using localhost, but after moving the third application to a container that will no longer be an option, so I did some research and decided to use the default bridge network. I checked that all 3 containers are in the network:
[
    {
        "Name": "bridge",
        "Id": "c570148be95b87b5bc768de573e85c25fa4584df2c5df5c63b2d317decabe651",
        "Created": "2021-03-22T07:49:32.2206325Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "38beb0863d86dab0f014ef9f1ad85f02efa7fb96520455df6f6ea6b5519f60cc": {
                "Name": "my_redis",
                "EndpointID": "58a6cfab6f233ac39c9b043c660124fd9cb98970f99f154ad8b3774a3356e71b",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "70fe60faa0dc3b853edcf2005e16d6219015eafa1c65d48aebd57256ff329f2b": {
                "Name": "rabbitmq",
                "EndpointID": "ed4ac901659785eebfd58de4056efd51addd19eda8c184a38632f1486c178e53",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "b34359519bbf0253af3eba8e800a1bcabeb3cfe6e5cc5007679c6f632f1d4820": {
                "Name": "app",
                "EndpointID": "3363141459cc7eebeca1651b047ed3af81c4af37c3706dfa74e5eadb6f95f302",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
From what I can see, icc is enabled and all 3 containers are on the network. I used the IPv4Address in the configuration in app: STA_REDIS_HOST = 172.17.0.3 (with and without the /16 at the end, because I'm not sure what it means), and it seems as if the IP is being resolved to something else, because I get the following error:
Error 111 connecting to 127.0.0.1:6379. Connection refused.
I don't know where 127.0.0.1 comes from, but it is the loopback address, not my container's IP.
Where am I going wrong here?
I didn't quite get how you are trying to connect one container to the other...
docker_bridge_network demonstrates pinging one container from another over the bridge network.
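Two observations that may help, based only on what is shown above. First, the /16 is the CIDR prefix length (the subnet mask), not part of the address, so the value to configure would be the bare 172.17.0.3. Second, an error pointing at 127.0.0.1:6379 usually means the Redis client never saw the configured host and fell back to its default of localhost, so it is worth double-checking that STA_REDIS_HOST is actually being read. That said, the commonly recommended setup is a user-defined network, where containers can address each other by name (a sketch; app-net is a placeholder, the container names are from the inspect output):
$ docker network create app-net
$ docker network connect app-net my_redis
$ docker network connect app-net rabbitmq
$ docker network connect app-net app
# then, in app's configuration, use the name instead of the IP:
# STA_REDIS_HOST = my_redis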

IPv6 packets to docker container appear to come from IPv4 of docker network gateway

I have an HTTP server running in a Docker container. I need to be able to log requests in this container, including their source IPs, but all packets over IPv6 appear to come from the IPv4 address of the Docker network's gateway.
This originally made sense, as the Docker network did not have IPv6 enabled, so I assumed Docker was automatically translating to IPv4. But having enabled IPv6 on the network, I see no change. This likely means that I configured something wrong, but I can't seem to figure it out.
My network configuration is as follows (created by Portainer):
[
    {
        "Name": "aais",
        "Id": "2823152591e7f437244623ba46f66aff7eacba2e92942fbcf681f2f145fff783",
        "Created": "2021-01-11T21:52:27.632515156Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
                {
                    "Subnet": "fd00:1255:2111::/48",
                    "Gateway": "fd00:1255:2111::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "6931497b76e36992c1ec9d2c6fadf4ed8492f8d7dd1bb4b01b2796f0e5204969": {
                "Name": "caddy",
                "EndpointID": "be6bb2824a3f7c5ddb73ec45c5b37d7c439a461a65d00759e7eab68edb49548c",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": "fd00:1255:2111::2/48"
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "false",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
I ended up using a semi-temporary solution, namely:
https://github.com/robbertkl/docker-ipv6nat
which makes configuring IPv6 in Docker much simpler. However, a similar feature is expected to come to Docker itself soon, as tracked by:
https://github.com/robbertkl/docker-ipv6nat/issues/65
https://github.com/moby/libnetwork/pull/2572
Once it lands, the native support should be the preferred solution for future users.
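For context, docker-ipv6nat gives IPv6 the same NAT treatment Docker applies to IPv4, so containers keep their ULA addresses internally while outbound traffic is masqueraded. The daemon-level IPv6 switch it builds on looks roughly like this in /etc/docker/daemon.json (the ULA prefix here is just an example):
{
    "ipv6": true,
    "fixed-cidr-v6": "fd00:dead:beef::/64"
}
$ sudo systemctl restart docker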

Getting connection refused error from one docker container to another only for REST request

I've got two Docker containers, apiserver and loginserver. Both of them provide REST APIs and are built using Spring Boot. I've created a bridge network called my-network and both containers are attached to it.
I pinged loginserver from apiserver via an interactive shell and it is accessible. I can make the REST request from the host machine, so I know the socket is exposed. But when I make the same REST request from apiserver to loginserver, I am getting this error:
: HttpQueryService::uri=http://172.28.0.7:8090/users/login
2018-06-19 19:08:24.196 ERROR 7 --- [nio-9000-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception
org.apache.http.conn.HttpHostConnectException: Connect to 172.28.0.7:8090 [/172.28.0.7] failed: Connection refused (Connection refused)
Here are the details from my-network:
docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "ef610688b58b6757cba57caf6261f7a1eaeb083798098214c4848cbb152cae26",
        "Created": "2018-04-21T00:19:46.918124848Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "71863d2f61789d4350fcabb1330b757500d5734a85c68b60eb1ac8f6f1e8344e": {
                "Name": "mymongo",
                "EndpointID": "717c8dbdc8993a70f9d3e97e549cb9784020b8e68e7a557c30b0818b4c9acb90",
                "MacAddress": "02:42:ac:1c:00:02",
                "IPv4Address": "172.28.0.2/16",
                "IPv6Address": ""
            },
            "936447ce8325f3a7273a7fb462d75e55841a9ff37ccf27647831b3db1b8a1371": {
                "Name": "mypg",
                "EndpointID": "6a1a1b2f7852b89a9d2cb9b9abecdabd134849cd789c31613c7ddb91a4bc43d1",
                "MacAddress": "02:42:ac:1c:00:06",
                "IPv4Address": "172.28.0.6/16",
                "IPv6Address": ""
            },
            "ad03348dffaef3edd916d349c88e8adf6cf7d88dbc40f82dc2384dee826cfa83": {
                "Name": "myloginserver",
                "EndpointID": "fe22c2b5f57b7fe4776087972ffa5f7f089ca6a59fde2fa677848b3f238ea026",
                "MacAddress": "02:42:ac:1c:00:07",
                "IPv4Address": "172.28.0.7/16",
                "IPv6Address": ""
            },
            "c69bfbf9ccdc9e29e87d2847b5c2a51e9c232fb8d06635bcef0cdd1f7c66e051": {
                "Name": "apiserver",
                "EndpointID": "46e94a52d34670eb00448b1d39a0cc365b882ece790c9d868dcee04ad141d1ca",
                "MacAddress": "02:42:ac:1c:00:0b",
                "IPv4Address": "172.28.0.11/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Is port 8090 exposed by your loginserver image? To check, type in the command
docker images
and find the image ID of your loginserver image. Then enter the command
docker image inspect {loginserver image id}
and check ExposedPorts in the output to see whether 8090 is exposed or not.
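The same check as a one-liner, using the Go template support built into docker inspect:
$ docker image inspect --format '{{.Config.ExposedPorts}}' {loginserver image id}
Keep in mind that EXPOSE is metadata only; container-to-container traffic on a shared bridge network does not require it. A plain Connection refused can also mean the application inside loginserver is bound to 127.0.0.1 instead of 0.0.0.0 (for Spring Boot, the server.address property); that is a guess from the stack described, not from the poster's config.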
Late to the party, but I just fixed this on my system by setting the address for the REST request to the public IP address,
e.g. http://217.114.203.196/myrequest
