I run Docker containers on a Synology NAS. All containers using the host driver have a network connection, but none of the containers using the bridge driver do. It worked in the past, but some months ago one of my experimental containers ran into network problems.
Environment:
Synology DS218+
DSM 6.2.3-25426 Update 2
10 GB internal memory
To simplify the description of the problem, I followed the tutorial from Docker:
docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
The containers have 172.17.0.2 and 172.17.0.3 as IP addresses. When I attached to alpine1, I wasn't able to ping alpine2 using its IP address (I used the IP since the default bridge doesn't do name resolution).
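For reference, the connectivity check from the Docker tutorial looks like this (a sketch; the addresses are the ones listed above):

docker attach alpine1
/ # ping -c 2 172.17.0.3

On a working default bridge, pinging the other container's IP address succeeds even though name resolution is unavailable.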
I also tried to use a user-defined bridge:
docker network create --driver bridge test
and connected the containers to this network (and disconnected them from the default bridge network).
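The connect/disconnect steps were the standard CLI ones, something like:

docker network connect test alpine1
docker network connect test alpine2
docker network disconnect bridge alpine1
docker network disconnect bridge alpine2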
bash-4.3# docker network inspect test
[
{
"Name": "test",
"Id": "e0e203000f5cfae8103ed9b80dce113633e0e198c542f943ac2e7026cb684784",
"Created": "2020-12-22T22:47:08.331525073+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"3da4fda1508743b36540d6848c5334c84c3c9c02df88170e617d08f15e85999b": {
"Name": "alpine1",
"EndpointID": "ccf4be3f89c45dc73183210fafcfdafee9bbe30309ef15cf27e37bbb3783ea58",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"c024024eb5a0e57720f7c2abe76ea5f5396a29eb02addd1f60d23075fcfcad78": {
"Name": "alpine2",
"EndpointID": "d4a8cf285d6dae7e8b7f96426a390b73ea800a72bf1739b0ea88c122de975650",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
In this case, too, I wasn't able to ping one container from the other.
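A user-defined bridge does provide name resolution, so a test like the following (a sketch using the container names above) would normally succeed both by name and by IP:

docker exec -it alpine1 ping -c 2 alpine2
docker exec -it alpine1 ping -c 2 172.22.0.2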
Apart from DSM updates, I have also upgraded the internal memory. I don't think this has anything to do with the problem, but you never know.
I had a similar issue. Have you tried disabling the firewall rules on the NAS?
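For anyone checking this on a Synology: the firewall can be toggled under Control Panel > Security > Firewall in DSM, and over SSH (root access assumed) the FORWARD chain can be inspected to see whether bridge traffic still reaches Docker's chains:

iptables -L FORWARD -n -v

If packets are dropped there, bridged containers lose connectivity while host-network containers keep working, which matches the symptom above.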
New to docker, please correct my statement
I'm trying to access a Docker container (an nginx web server published on port 80) from the Docker Engine machine itself, but I'm unable to access it.
Here is my Docker Engine network: 10.20.20.0/24.
Docker Engine IP: 10.20.20.3
> telnet 10.20.20.3 80 -> Connection failed
tcp 0 0 10.20.20.3:80 0.0.0.0:* LISTEN 28953/docker-proxy
Docker container IP: 172.18.0.4
> telnet 172.18.0.4 80 -> Connection success
Docker network detail
[root@xxxxxxxxx]# docker network inspect 1984f08c739d
[
{
"Name": "xxxxxxxxxxxxx",
"Id": "1984f08c739d6d6fc6b4769e877714844b9e57ca680f61edf3a528bd12ca5ad1",
"Created": "2021-11-13T21:01:27.53591809+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"126d5128621fa6cde0389f4c6e0b53be36670bce5f62e981da6e17722b88c4a9": {
"Name": "xxxxxxxxxxxxxxx",
"EndpointID": "b011052062ae137e3d190032311e0fbc5850f0459c9b24d8cc1d315a9dc18773",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "xxxxxxxx",
"com.docker.compose.version": "1.29.2"
}
}
]
I can access this nginx from other networks, such as 10.20.21.0/24 and so on, but not from the same network 10.20.20.0/24 or from the Docker Engine host itself.
My environment: the Docker Engine VM has two interfaces, eth0 and eth1, on different subnets; the base hypervisor is AHV. Previously it did not work because each interface had its own persistent routing table and rule files under /etc/sysconfig/network-scripts (route-eth0, route-eth1, rule-eth0, rule-eth1). I tried removing the route for eth0: since eth0 does not need a persistent routing table, it falls back to the default Linux routing table. Then I restarted the network, and there we go: Docker was listening on eth0, and after the same treatment eth1 worked as well. I can map both eth0 and eth1 to the Docker network, and it works like a charm. I believe AHV VMs do not need persistent routing tables for either different or same network subnets. So the conclusion is that it was a routing issue. Once resolved, I could access the Docker container through the eth0 and eth1 interfaces, across different subnets and on the same subnet.
Both interfaces kept working after a restart, and after a power-off, with no persistent routes in the AHV VMs.
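A sketch of the steps described above (CentOS 7 paths as named in the post; back the files up before removing them):

ip rule show                                    # inspect policy-routing rules
cat /etc/sysconfig/network-scripts/route-eth0   # per-interface persistent routes
rm /etc/sysconfig/network-scripts/route-eth0 /etc/sysconfig/network-scripts/rule-eth0
systemctl restart network                       # fall back to the default routing table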
My API couldn't be published on a specific IP address (the VM host) when using Docker
First, I run the file in a terminal:
Rscript run.R
This works fine: my API is up and running at http://35.157.131.3:8000/swagger/. After that, I wanted to deploy it with Docker:
docker run --rm -p 8000:8000 --expose 8000 -d --name diemdiem trestletech/plumber
This showed the file was plumbed successfully; however, when I went to the API link, http://35.157.131.3:8000/swagger/ returned a 404 error.
After reading the Docker documentation, I created a container network that specifies the host IP address the container should bind to:
docker network create \
-o "com.docker.network.bridge.host_binding_ipv4"="35.157.131.3" \
simple-network
Then I connected the running diemdiem container to simple-network:
docker network connect simple-network diemdiem
I inspected the network to check whether the container is connected:
docker network inspect simple-network
The result is:
[
{
"Name": "simple-network",
"Id": "95ec0c55aeb984952459edda2d4d0bb7c9eea71824e6cec184b7c61d2e807e7b",
"Created": "2019-07-08T17:30:23.709654207Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c83125bf68a89aebda3effe28ebee4d6323657e1427cf08fd3d63b6e411f8448": {
"Name": "diemdiem",
"EndpointID": "7fab3354e051dc81ef798bd86c19361f6a721b578237b3a3695cb415b1aee2e4",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.host_binding_ipv4": "35.157.131.3"
},
"Labels": {}
}
]
The final API is still not up and running at the IP address I specified. I'd appreciate your advice.
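One hedged observation, not from the original post: -p bindings are fixed when a container is created, so connecting an already-running container to a network created with host_binding_ipv4 does not re-publish its ports. The current binding can be checked with:

docker port diemdiem

If it still shows 0.0.0.0:8000, the container would need to be re-created on simple-network, or run with an explicit binding such as -p 35.157.131.3:8000:8000.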
I have two containers, a backend and a frontend. I run them on a remote server with these commands:
docker run -p 3000:3000 xpendence/api-checker:0.0.1
docker run -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
As the documentation says, containers connect to the 'bridge' network by default, and I can see these containers inside it:
# docker network inspect bridge
[
{
"Name": "bridge",
"Id": "27f9d6240b4022b6ccbfff93daeff32d2639aa22f7f2a19c9cbc21ce77b435",
"Created": "2019-05-12T12:26:35.903309613Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"82446be7a9254c79264d921059129711f150a43ac412700cdc21eb5312522ea4": {
"Name": "rebounder-backend-0017a",
"EndpointID": "41fb5be38cff7f052ebbbb9d31ee7b877f664bb620b3063e57cd87cc6c7ef5c9",
"MacAddress": "03:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"da82a03c5d3bfe26cbd750da7f8872cf22dc9d43117123b9069e9ab4e17dbce6": {
"Name": "elastic_galileo",
"EndpointID": "13878a6db60ef854dcbdf6b7e729817a1d96fbec6364d0c18d7845fcbc040222",
"MacAddress": "03:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
I send requests from the frontend to the backend, but they don't reach it:
GET http://localhost:8099/log net::ERR_CONNECTION_REFUSED
GET http://172.17.0.2:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://172.17.0.2/16:8099/log net::ERR_ADDRESS_UNREACHABLE
GET http://0.0.0.0:8099/log net::ERR_CONNECTION_REFUSED
Please give me advice on how to solve this problem.
Requests to the backend from outside work fine.
Although your two containers are attached to the same default bridge, this doesn't mean they can reach each other by name.
In the past, the suggestion was to use --link to let containers talk to each other directly without involving the host, but that option is now deprecated.
Instead, you need to use a user-defined bridge:
Containers connected to the same user-defined bridge network automatically expose all ports to each other.
User-defined bridges provide automatic DNS resolution between containers.
The steps are as follows:
docker network create my-net
docker run --network my-net -p 3000:3000 xpendence/api-checker:0.0.1
docker run --network my-net -p 8099:8099 --name rebounder-backend-0017a xpendence/rebounder-chain-backend:0.0.17
For details, see the official guide.
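With the user-defined bridge in place, the backend becomes reachable from the frontend container by name instead of by IP, for example (a sketch using the backend name from the question):

curl http://rebounder-backend-0017a:8099/log

Note that this name resolution only works between containers on my-net; a browser running on an end user's machine still needs the host's public address and the published port.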
I have a Docker bridge network with two containers attached:
a Node.js server running on port 3333
a Flask server running on port 5000
The network has this configuration:
[
{
"Name": "mynetwork",
"Id": "f94f76533b065d39515b65d20b8645c22617be51ec9335fcfad8ce707ca48841",
"Created": "2019-02-20T17:17:29.029434324+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.1.0.0/16",
"Gateway": "10.1.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"Containers": {
"c8084141e36c756710cbfa020f664127f234e407986362331ab127d415c9b074": {
"Name": "nodeContainer",
"EndpointID": "e25f8797c1b7488d7c3810d8f38c4b3dea6b9f19f17558a164b710015fdd9e1a",
"MacAddress": "02:42:0a:01:00:03",
"IPv4Address": "10.1.0.3/16",
"IPv6Address": ""
},
"f9c582d031515f4bba910286118df806a6a2b04a36917234eca09fdf335d4457": {
"Name": "flaskContainer",
"EndpointID": "fbf053f97acc7b9491c536966b640862d366d1599fbfb400915cd8bc26b04f6a",
"MacAddress": "02:42:0a:01:00:02",
"IPv4Address": "10.1.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Normally, those two containers communicate (nodeContainer makes requests to http://flaskContainer:5000), but this stopped working after I set a different subnet and gateway (due to external network constraints).
In particular, I get an error like ETIMEDOUT 10.1.0.2:3333. This makes me think the address is resolved correctly, but for some reason there is no answer (and indeed flaskContainer logs nothing).
As additional information:
docker exec flaskContainer curl flaskContainer
docker exec nodeContainer curl nodeContainer
obviously do not work (Failed to connect to flaskContainer port 80), since neither server listens on port 80.
docker exec flaskContainer curl flaskContainer:5000
docker exec nodeContainer curl nodeContainer:3333
correctly give results.
docker exec flaskContainer curl nodeContainer:3333
docker exec nodeContainer curl flaskContainer:5000
time out.
Do you have any idea what the reason could be? How can I solve this?
Thank you.
Edit: I found a workaround: disabling the userland proxy (docker-proxy) via daemon.json seems to resolve this. This likely means it is a bug in docker-proxy; as far as I can tell, everything I was running before works correctly.
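A minimal sketch of that workaround (standard Linux paths assumed; userland-proxy is a documented dockerd option):

# /etc/docker/daemon.json
{
  "userland-proxy": false
}

followed by a daemon restart, e.g. systemctl restart docker.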
I am attempting to debug an issue configuring TCP health checks with Consul. The configuration of Consul etc. isn't relevant, as I have isolated this to a rather simple scenario. All containers are connected to bridge0 (config below). The host OS is CentOS 7.
What I expect is for nc from Container 2 to get connection refused, but instead it appears to connect and then, after I send some random characters, reports a broken pipe. Is this to be expected?
Container 1:
[user@192.168.1.2 ~]$ docker run --net=bridge0 -it -p 50032:8000 centos:7 bash
[root@a691f149c045 /]#
Container 2:
[user@192.168.1.2 ~]$ docker run -it --net=bridge0 centos:7 bash
[root@e9c1cbaf3922 /]# nc 192.168.1.2 50032
asd
asd
Ncat: Broken pipe.
Host:
[user@192.168.1.2 ~]$ nc 192.168.1.2 50032
Ncat: Connection refused.
Docker bridge0 config
[user@host ~]$ docker network inspect bridge0
[
{
"Name": "bridge0",
"Id": "b50864883bb2c9482b2d0da595abbe4b12e0de6b7fa91657119316fd75dcac83",
"Created": "2018-08-16T21:38:11.501721012-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.24.0.0/16",
"IPRange": "172.24.0.0/24",
"Gateway": "172.24.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"a691f149c045be06ad90c66221a9c35f2586b75e9a5e2f104c443ced311fdf03": {
"Name": "gallant_lamport",
"EndpointID": "e66a16eac8d7a405d3698fa37f6d6a47484b63c9cf07a714bbab6caf107741d6",
"MacAddress": "02:42:ac:18:00:09",
"IPv4Address": "172.24.0.9/16",
"IPv6Address": ""
},
"e9c1cbaf3922774183afe613c6641e19346cac8d707bb2374d1251b02855a94f": {
"Name": "xenodochial_bose",
"EndpointID": "a01755d0468a2aa188f1b607ee63590bda4dc3e89e15dc78f1556b79fa1aac42",
"MacAddress": "02:42:ac:18:00:0a",
"IPv4Address": "172.24.0.10/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]