I'm trying to make a connection from one service to another. To achieve this, I created an overlay network and two services attached to it, like so:
$ docker network create -d overlay net1
$ docker service create --name busybox --network net1 busybox sleep 3000
$ docker service create --name busybox2 --network net1 busybox sleep 3000
Now I make sure my services are running and that both are connected to the overlay network.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ecc8dd465cb1 busybox:latest "sleep 3000" About a minute ago Up About a minute busybox2.1.uw597s90tkvbcaisgaq7los2q
f8cfe793e3d9 busybox:latest "sleep 3000" About a minute ago Up About a minute busybox.1.l5lxp4v0mcbujqh79dne2ds42
$ docker network inspect net1
[
{
"Name": "net1",
"Id": "5dksx8hlxh1rbj42pva21obyz",
"Created": "2021-06-22T14:23:43.739770415Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.0/24",
"Gateway": "10.0.4.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"ecc8dd465cb12c622f48b109529534279dddd4fe015a66c848395157fb73bc69": {
"Name": "busybox2.1.uw597s90tkvbcaisgaq7los2q",
"EndpointID": "b666f6374a815341cb8af7642a7523c9bb153f153b688218ad006605edd6e196",
"MacAddress": "02:42:0a:00:04:06",
"IPv4Address": "10.0.4.6/24",
"IPv6Address": ""
},
"f8cfe793e3d97f72393f556c2ae555217e32e35b00306e765489ac33455782aa": {
"Name": "busybox.1.l5lxp4v0mcbujqh79dne2ds42",
"EndpointID": "fff680bd13a235c4bb050ecd8318971612b66954f7bd79ac3ee0799ee18f16bf",
"MacAddress": "02:42:0a:00:04:03",
"IPv4Address": "10.0.4.3/24",
"IPv6Address": ""
},
"lb-net1": {
"Name": "net1-endpoint",
"EndpointID": "2a3b02f66f395e613c6bc88f16d0723762d28488b429a9e50f7df24c04e9f1f0",
"MacAddress": "02:42:0a:00:04:04",
"IPv4Address": "10.0.4.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4101"
},
"Labels": {},
"Peers": [
{
"Name": "e1c2ac76b95b",
"IP": "10.18.0.6"
}
]
}
]
So far so good! Next I exec into one of the containers and try to nslookup the second one, but have no luck.
$ docker exec -it busybox.1.l5lxp4v0mcbujqh79dne2ds42 sh
/ # nslookup busybox2
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find busybox2: No answer
*** Can't find busybox2: No answer
/ # nslookup busybox2.1.uw597s90tkvbcaisgaq7los2q
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find busybox2.1.uw597s90tkvbcaisgaq7los2q: No answer
*** Can't find busybox2.1.uw597s90tkvbcaisgaq7los2q: No answer
I know that overlay questions are quite common here, but they are mostly about node-to-node connections, not a single-node swarm. Another thing to keep in mind: there is no local firewall on that node at all.
Am I trying to connect in the wrong way or is it a configuration issue?
The solution was simply adding the --attachable flag to the docker network create command. After that I could ping my services by name.
It turns out you need that flag regardless of whether you are deploying stacks (in my case I have multiple stacks in the same swarm) or single services.
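For reference, a minimal sketch of the working setup, recreating the network with the flag (after removing the services that use it, since the flag cannot be added to an existing network):
$ docker service rm busybox busybox2
$ docker network rm net1
$ docker network create -d overlay --attachable net1
$ docker service create --name busybox --network net1 busybox sleep 3000
$ docker service create --name busybox2 --network net1 busybox sleep 3000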
docker service create ... --network net1 does not create network aliases by default. To get that behaviour you need to use the long-form syntax of --network:
docker service create --network name=net1,alias=busybox1 busybox tail -f /dev/null
It's interesting that making the network attachable has a similar effect. Usually a network is made attachable so that standalone containers can be attached to it via docker run --network net1 ..., so while this approach works, it may have undesirable side effects, removing whatever protection a non-attachable network is supposed to provide.
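As a sketch of the alias approach applied to the question's setup (the task ID placeholder is illustrative), both services could be created with explicit aliases and then resolved by those aliases:
$ docker service create --name busybox --network name=net1,alias=busybox busybox sleep 3000
$ docker service create --name busybox2 --network name=net1,alias=busybox2 busybox sleep 3000
$ docker exec -it busybox.1.<task-id> nslookup busybox2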
Related
I have a problem with a Docker container running a NextJS application that is trying to access another Docker container running a NestJS API.
The environment looks like this:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b04de77cb381 ui "docker-entrypoint.s…" 9 minutes ago Up 9 minutes 0.0.0.0:8004->3000/tcp ui
6af7c952afd6 redis:latest "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8003->6379/tcp redis
784b6f925817 api "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:8001->3001/tcp api
c0fb02031834 postgres:latest "docker-entrypoint.s…" 21 hours ago Up 21 hours 0.0.0.0:8002->5432/tcp db
All containers are in the same bridged network.
A docker network inspect of that network shows all the containers.
The containers are started from different docker-compose files (ui, redis+api, db).
API to DB
The api talks to the database db with postgresql://username:password@db:5432/myDb?schema=public
Notice the 'db' host, which is the container name on the Docker network, and the internal port 5432 in the URL.
Since they are on the same network, you need to use the internal port 5432 instead of the published port 8002.
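For illustration, the two URL forms side by side (the credentials and database name are placeholders from the question):
# from another container on the same network: container name + internal port
postgresql://username:password@db:5432/myDb?schema=public
# from the host: localhost + published port
postgresql://username:password@localhost:8002/myDb?schema=public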
Local UI
When I run the UI on the Host (on port 3000), it is able to access the API (in the Container).
Data is transferred from db-container to api-container to ui-on-the-host.
UI in the Container
Now I also open a browser at localhost:8004, which serves the UI from the container.
The UI accesses the API at http://api:3001/*.
It sounds logical to use the Docker network name and the internal port; I do the same from the API to the DB.
But this does not work: "net::ERR_NAME_NOT_RESOLVED".
Test: ncat test
Exec-ing into the UI container and doing a check with ncat shows the port is open:
/app $ nc -v api 3001
api (192.168.48.4:3001) open
Test: curl in the UI Container
(Added later)
When doing a curl test from the UI container to the API container, I do get a result.
(See the simple debug endpoint called /dbg.)
$ docker exec -u 0 -it ui /bin/bash
UI$ curl http://api:3001/dbg
{"status":"I'm alive and kicking!!"}
About the Network
I created my own bridged network.
So the network is there, and it looks like all the containers are connected to it.
/Users/bert/_> docker network inspect my-net
[
{
"Name": "my-net",
"Id": "e786d630f252cf12856372b708c309f90f8bf177b6b1f742d6ae02f5094c7223",
"Created": "2021-03-11T14:10:50.417675Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.48.0/20",
"Gateway": "192.168.48.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"6af7c952afd60a3b4f36e244273db5f9f8a993a6f738785f624ffb59f381cf3d": {
"Name": "redis",
"EndpointID": "d9a6e6f6a4467bf38d3348889c86dcc53c5b0fa5ddc9fcf17c7e722fc6673a25",
"MacAddress": "02:42:c0:a8:30:05",
"IPv4Address": "192.168.48.5/20",
"IPv6Address": ""
},
"784b6f9258179e8ac03ee8bbc8584582dd9199ef5ec1af1404f7cf600ac707e1": {
"Name": "api",
"EndpointID": "d4b82f37559a4ee567cb304f033e1394af8c83e84916dc62f7e81f3b875a6e5f",
"MacAddress": "02:42:c0:a8:30:04",
"IPv4Address": "192.168.48.4/20",
"IPv6Address": ""
},
"c0fb02031834b414522f8630fcde31482e32d948de98d3e05678b34b26a1e783": {
"Name": "db",
"EndpointID": "dde944e1eda2c69dd733bcf761b170db2756aad6c2a25c8993ca626b48dc0e81",
"MacAddress": "02:42:c0:a8:30:03",
"IPv4Address": "192.168.48.3/20",
"IPv6Address": ""
},
"d678b3e96e0f0765ed62a70cc880b07836cf1ebf17590dc0e3351e8ee8b9b639": {
"Name": "ui",
"EndpointID": "c5a8d7e3d8b31d8dacb2f343bb77f4b364f0f3e3a5ed1025cc4ec5b65b44fd27",
"MacAddress": "02:42:c0:a8:30:02",
"IPv4Address": "192.168.48.2/20",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Conclusion:
The UI container with curl inside the container can talk to the API container.
The UI on the host with a browser on the host can talk to the API container.
The UI container with a browser on the host cannot talk to the API container. Why?
The question then is: how do I use a UI container from the browser and talk to the other containers over the Docker bridged network?
OK, problem solved.
It was a matter of confusion about where the NextJS application gets the API location from.
Since the NextJS application (the UI) ultimately just runs in a browser, you need to specify the API location as seen from the browser, not as seen for inter-container communication.
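A minimal sketch of what that distinction looks like in the UI container's environment; the variable names here are illustrative, not taken from the question:
# used by code that runs in the browser: host-published port of the api container
NEXT_PUBLIC_API_URL=http://localhost:8001
# could be used by server-side code inside the ui container: Docker DNS name + internal port
API_INTERNAL_URL=http://api:3001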
I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. This used to work, but some months ago one of my experimental containers started experiencing network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem, I followed the networking tutorial from Docker:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers get 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1, I wasn't able to ping alpine2 using its IP address (I used the IP address because the default bridge doesn't do name resolution).
I also tried using a user-defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network), roughly as sketched below.
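The connect/disconnect steps would look something like this (a sketch; the exact commands were not shown in the question):
docker network connect test alpine1
docker network connect test alpine2
docker network disconnect bridge alpine1
docker network disconnect bridge alpine2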
bash-4.3# docker network inspect test
[
{
"Name": "test",
"Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
"Created": "2020-12-22T22:47:08.331525073+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
"Name": "alpine1",
"EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
"Name": "alpine2",
"EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
In this case too, I wasn't able to ping one container from the other.
Apart from DSM updates, I also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue; have you tried disabling the firewall rules on the NAS?
Edit: I found a workaround: disabling the userland proxy (docker-proxy) via daemon.json seems to resolve this. This likely means it is a bug in docker-proxy; as far as I can tell, everything I was running before still works correctly.
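For reference, the relevant daemon.json setting looks roughly like this (on a standard Linux install the file is /etc/docker/daemon.json; the location may differ on DSM), followed by a restart of the Docker daemon:
{
  "userland-proxy": false
}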
I am attempting to debug an issue configuring TCP health checks with Consul. The configuration of Consul etc. isn't relevant, as I have isolated this to a rather simple scenario. All containers are connected to bridge0 (config below). The host OS is CentOS 7.
What I expect is for nc from Container 2 to return 'connection refused', but instead it appears to connect and then, after sending some random characters, reports a broken pipe. Is this to be expected?
Container 1:
[user@192.168.1.2 ~]$ docker run --net=bridge0 -it -p 50032:8000 centos:7 bash
[root@a691f149c045 /]#
Container 2:
[user@192.168.1.2 ~]$ docker run -it --net=bridge0 centos:7 bash
[root@e9c1cbaf3922 /]# nc 192.168.1.2 50032
asd
asd
Ncat: Broken pipe.
Host:
[user@192.168.1.2 ~]$ nc 192.168.1.2 50032
Ncat: Connection refused.
Docker bridge0 config
[user#host ~]$ docker network inspect bridge0
[
{
"Name": "bridge0",
"Id": "b50864883bb2c9482b2d0da595abbe4b12e0de6b7fa91657119316fd75dcac83",
"Created": "2018-08-16T21:38:11.501721012-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.24.0.0/16",
"IPRange": "172.24.0.0/24",
"Gateway": "172.24.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"a691f149c045be06ad90c66221a9c35f2586b75e9a5e2f104c443ced311fdf03": {
"Name": "gallant_lamport",
"EndpointID": "e66a16eac8d7a405d3698fa37f6d6a47484b63c9cf07a714bbab6caf107741d6",
"MacAddress": "02:42:ac:18:00:09",
"IPv4Address": "172.24.0.9/16",
"IPv6Address": ""
},
"e9c1cbaf3922774183afe613c6641e19346cac8d707bb2374d1251b02855a94f": {
"Name": "xenodochial_bose",
"EndpointID": "a01755d0468a2aa188f1b607ee63590bda4dc3e89e15dc78f1556b79fa1aac42",
"MacAddress": "02:42:ac:18:00:0a",
"IPv4Address": "172.24.0.10/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
I have containers running in a swarm stack of services (each on a different docker-machine), connected together on an overlay Docker network.
How can I get all of the IP addresses in use on that network, together with the service or container name they belong to, from inside a container on this network?
Thank you
If you want to execute this command from inside the containers, you first have to mount docker.sock into each service (assuming the docker CLI is installed in the container):
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Then, in each container, install jq, after which you can simply run docker network inspect <network_name_here> | jq -r 'map(.Containers[].IPv4Address) []'. The expected output is something like the following (see the sketch after the output for a variant that also prints the container names):
172.21.0.2/16
172.21.0.5/16
172.21.0.4/16
172.21.0.3/16
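A variant of the same jq filter that pairs each container name with its address (a sketch; it assumes the inspect output shape shown elsewhere in this thread):
docker network inspect <network_name_here> | jq -r '.[0].Containers[] | "\(.Name) \(.IPv4Address)"'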
Find the name or ID of the overlay network:
$ docker network ls | grep overlay
Then do an inspect:
docker inspect $NETWORK_NAME
You will be able to find the container names and the IPs allocated to them; you can grep the required values from the inspect output (see the grep sketch after the sample output below). The output will look something like this:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.23.0.0/16",
"Gateway": "172.23.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"183584efd63af145490a9afb61eac5db994391ae94467b32086f1ece84ec0114": {
"Name": "emailparser_lr_1",
"EndpointID": "0a9d0958caf0fa454eb7dbe1568105bfaf1813471d466e10030db3f025121dd7",
"MacAddress": "02:42:ac:17:00:04",
"IPv4Address": "172.23.0.4/16",
"IPv6Address": ""
},
"576cb03e753a987eb3f51a36d4113ffb60432937a2313873b8608c51006ae832": {
"Name": "emailparser",
"EndpointID": "833b5c940d547437c4c3e81493b8742b76a3b8644be86af92e5cdf90a7bb23bd",
"MacAddress": "02:42:ac:17:00:02",
"IPv4Address": "172.23.0.2/16",
"IPv6Address": ""
},
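A quick way to pull just the names and IPs out of that output (a sketch using grep; jq works equally well):
docker inspect $NETWORK_NAME | grep -E '"Name"|"IPv4Address"'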
Assuming you're using the default VIP endpoint, you can use DNS to resolve the IPs of a service. Here's an example of using dig to get the VIP IP of a service, and then to get the individual container IPs behind that VIP using the tasks.<service-name> name.
docker network create --driver overlay --attachable sweet
docker service create --name nginx --replicas=5 --network sweet nginx
docker container run --network sweet -it bretfisher/netshoot dig nginx
~~~
;; ANSWER SECTION:
nginx. 600 IN A 10.0.0.3
~~~
docker container run --network sweet -it bretfisher/netshoot dig tasks.nginx
~~~
;; ANSWER SECTION:
tasks.nginx. 600 IN A 10.0.0.5
tasks.nginx. 600 IN A 10.0.0.8
tasks.nginx. 600 IN A 10.0.0.7
tasks.nginx. 600 IN A 10.0.0.6
tasks.nginx. 600 IN A 10.0.0.4
~~~
for n in `docker network ls | awk '!/NETWORK/ {print $1}'`; do docker network inspect $n; done
First, find the name of the network your swarm is using.
Then run docker network inspect <NETWORK-NAME>. This gives you JSON output, in which you'll find an object under the key "Containers". This object lists all the containers in the network and their IP addresses.
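If you only want that section, docker network inspect also accepts a Go-template --format flag; a minimal sketch:
docker network inspect <NETWORK-NAME> --format '{{json .Containers}}'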
I have this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3252abd7587 cdt-tests "/bin/bash /home/new…" 5 seconds ago Exited (1) 22 seconds ago cdt-tests
f492760705e3 cdt-server "/bin/bash /usr/loca…" 52 seconds ago Up About a minute 0.0.0.0:3040->3040/tcp cdt-server
89c5a28855df mongo "docker-entrypoint.s…" 55 seconds ago Up About a minute 27017/tcp cdt-mongo
1eaa4aa684a9 selenium/standalone-firefox:3.4.0-chromium "/opt/bin/entry_poin…" 59 seconds ago Up About a minute 4444/tcp cdt-selenium
The cdt-tests container is attempting to make connections to the other containers in the same network. The network looks like this:
$ docker network inspect cdt-net # this yields the below json
[
{
"Name": "cdt-net",
"Id": "8c2b486e950076130860e0c6c09f06eaf8cccf02127786b80bf7cc169f8eae0f",
"Created": "2018-01-23T21:52:34.5021152Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"1eaa4aa684a9d7c1ad7a1b7ac28418b100e6b8d7a22ceb2a97cf51e4487c5fb2": {
"Name": "cdt-selenium",
"EndpointID": "674ce85e14339e67e767ab9844cd2fd1356fc3e60ab050e1cd1457e4168ac9fc",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"89c5a28855dfde05fe9db9a35bbe7bce232eb56d9024022785d2a65570c423b5": {
"Name": "cdt-mongo",
"EndpointID": "ed497939965363cd194b4fea8e6a26ee47ef7f24bef56c9726003a897be83dd1",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"f492760705e30be4fe8469ae422e96548ee2192f41314e3815762a9e39a4cc82": {
"Name": "cdt-server",
"EndpointID": "17e8bd6f7735a52669f0fe92b2d5717c7a3ae6954c108af3f29c13233ef20ee6",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
In my cdt-tests container, I run these commands:
export CDT_SELENIUM_HOST="cdt-selenium.cdt-net";
export OPENSHIFT_NODEJS_IP="127.0.0.1";
export OPENSHIFT_NODEJS_PORT="3040";
export CDT_SERVER_HOST="127.0.0.1";
export CDT_SERVER_PORT=3040;
export OPENSHIFT_MONGODB_DB_HOST="127.0.0.1"
export OPENSHIFT_MONGODB_DB_PORT=27017
nc -zv "$CDT_SELENIUM_HOST" 4444 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "selenium server is live"
else
echo "selenium server is NOT live"
exit 1;
fi
nc -zv "$OPENSHIFT_MONGODB_DB_HOST" 27017 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "mongo server is live"
else
echo "mongo server is NOT live"
exit 1;
fi
nc -zv "$CDT_SERVER_HOST" 3040 > /dev/null 2>&1
nc_exit=$?
if [ ${nc_exit} -eq 0 ]; then
echo "cdt server is live"
else
echo "cdt server is NOT live"
exit 1;
fi
All of those connection tests fail. Does anyone know how to connect between containers in the same Docker network? Is there some surefire pattern to use?
It looks like you're trying to use 127.0.0.1 as the address for your other containers. Docker containers each have a unique IP address in an isolated network namespace. Much like on your own physical host, 127.0.0.1 is a special address that means "this container". Since none of those services are running in the container in which you're running your tests, you can't connect to anything.
You need to use the IP address of the container running the service you want to test. Because IP addresses change with every deployment, it's not convenient to use the literal address; you need some way to get the information dynamically. For this reason, Docker maintains a DNS service on each user-defined network, so that you can simply use the name of a container like any other hostname.
For example, in your environment, you could set:
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"
And then your mongo test should succeed. And so forth for the other _HOST and _IP variables you're using.
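Applying the same idea to the other variables from the question would look roughly like this (a sketch; it assumes each service listens on the port being probed):
export CDT_SELENIUM_HOST="cdt-selenium"
export CDT_SERVER_HOST="cdt-server"
export OPENSHIFT_MONGODB_DB_HOST="cdt-mongo"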
I can assure you that the approach @larsks mentioned works well.
But there is still an easy way. Based on your description above, the cdt-tests container is in the same network as the other containers, so you can attach all of the containers to a user-defined bridge network created with docker network create -d bridge mynet.
When starting the containers, just add the --net=mynet flag.
E.g. docker run -tid --name=cdt-server --net=mynet cdt-server
Thus there is no need to add any ENVs to your container(s), since user-defined bridge networks provide automatic DNS resolution between containers. See the introduction to user-defined bridge networks in the Docker documentation.
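As a quick sanity check after wiring things up that way (assuming the cdt-tests container is running and nc is available in it, as the question suggests), a probe by container name should now succeed:
docker exec -it cdt-tests nc -zv cdt-server 3040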