Connect to redis inside bridge network - docker

I created a bridge network named app and connected two containers to it, as you can see below:
[
{
"Name": "app",
"Id": "54fc6dc62ce366d9a019f556a7efd78dfb60676542e6cc4a494678f7faf6a63a",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {
"0280af19da941b4a83101bf9a6d4a51e0a41436374f2e403ac1e1a7169d75b57": {
"EndpointID": "be4b0587262ea402a8e83c1db6e71fbb7347773dce89869c575d3ace85cdaab5",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"759b09fe9b5d154eb2b0676547e21a576b31c917087a206ed46bea9acced2017": {
"EndpointID": "78611933ff99b34306944044ed7ec988c16dc05d5b42953547300c7a11cd4b64",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
The first container is the Redis DB container and the second is a Sinatra-based web app container. I run the webapp container like this:
sudo docker run -p 4567 --net=app --name webapp -t -i -v /home/developer/sinatra/webapp:/opt/webapp kostonstyle/sinatra /bin/bash
From a console inside webapp, I can ping the Redis DB server:
root@759b09fe9b5d:/opt/webapp/bin# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.046 ms
How can I connect to the Redis DB from the console? I've tried
root@759b09fe9b5d:/opt/webapp/bin# redis-cli -h remote.172.18.0.2
but it tells me:
Could not connect to Redis at remote.172.18.0.2:6379: Name or service not known
What am I doing wrong?

It seems to be a typo: try redis-cli -h 172.18.0.2, without the remote. prefix.
Use redis-cli --help for detailed information.
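As a side note, on a user-defined bridge network like app you don't need the IP at all: Docker's embedded DNS lets containers resolve each other by name. A minimal sketch (the container name redisdb is an assumption, and it assumes redis-cli is available inside the webapp image):

```shell
# Run Redis on the user-defined "app" network with a known name,
# then reach it by that name from any other container on "app".
docker run -d --net=app --name redisdb redis
docker run -it --net=app --name webapp kostonstyle/sinatra /bin/bash
# inside the webapp container, the name resolves via Docker's DNS:
redis-cli -h redisdb -p 6379 ping
```

This way the connection keeps working even if the container's IP changes on restart.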

Related

Why isn't docker-compose DNS resolving to the actual IP, while a docker run network-alias is able to resolve?

I have an external network which is used by docker-compose as well as docker run. I can specify a network alias in docker run and it resolves to the actual container IP, but the alias I define in docker-compose doesn't resolve to the actual IP. Why is this, and what should I do to get the docker-compose alias to resolve to the actual IP?
Step 1: create an external network
docker network create --attachable -d overlay test-docker-network
Step 2: create a docker-compose.yml that defines an alias
docker-compose.yml
version: '3.0'
services:
  host1:
    image: linuxserver/openssh-server
    environment:
      USER_PASSWORD: 'password'
      USER_NAME: 'user'
      PASSWORD_ACCESS: 'true'
      SUDO_ACCESS: 'true'
    ports:
      - 2222:2222
    networks:
      default:
        aliases:
          - netcatalias
networks:
  default:
    external:
      name: test-docker-network
Step 3: deploy the stack
docker stack deploy -c docker-compose.yml netcat
Step 4: run a 'docker run' container on the same network
docker run --rm --name host2 --network-alias=myalias -ti --network test-docker-network debian:buster bash
Step 5: resolve both aliases
root@de1f75728a7e:~/gitprojects/docker-network-troubleshoot# docker run --rm --name host2 --network-alias=myalias -ti --network test-docker-network debian:buster bash
root@ea765c15dae8:/# ping myalias
PING myalias (10.0.8.5) 56(84) bytes of data.
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=1 ttl=255 time=0.022 ms
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=2 ttl=255 time=0.042 ms
64 bytes from ea765c15dae8 (10.0.8.5): icmp_seq=3 ttl=255 time=0.034 ms
^C
--- myalias ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 49ms
rtt min/avg/max/mdev = 0.022/0.032/0.042/0.010 ms
root@ea765c15dae8:/# ping netcatalias
PING netcatalias (10.0.8.2) 56(84) bytes of data.
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=1 ttl=255 time=0.069 ms
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=2 ttl=255 time=0.068 ms
64 bytes from ip-10-0-8-2.ec2.internal (10.0.8.2): icmp_seq=3 ttl=255 time=0.067 ms
^C
--- netcatalias ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 19ms
rtt min/avg/max/mdev = 0.067/0.068/0.069/0.000 ms
root@ea765c15dae8:/#
Step 6: get the actual IP from 'docker network inspect'
root@de1f75728a7e:~/gitprojects/docker-network-troubleshoot# docker network inspect test-docker-network
[
{
"Name": "test-docker-network",
"Id": "3ev3r0eo2rg81pyb2yovlmmg3",
"Created": "2020-01-18T03:09:58.748025872Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.8.0/24",
"Gateway": "10.0.8.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2ba6c329d157b4a03480f978645e558bb6b46d9d5c7af3d152d943aae75c696a": {
"Name": "netcat_host1.1.180sln82qyxp03rk8o5od5p9a",
"EndpointID": "cf2eaf42b10083296696c3cad8e43fe392ed2374cd65fd8aa8c048a134171bd2",
"MacAddress": "02:42:0a:00:08:03",
"IPv4Address": "10.0.8.3/24",
"IPv6Address": ""
},
"ea765c15dae8c1cf6f6945447897a126fdf03ae1e42d2811c95d94a9d9112f39": {
"Name": "host2",
"EndpointID": "67ca483fd4bd231db74a39ba8f782a95c102fc04937ef9e245bfc14100f61d11",
"MacAddress": "02:42:0a:00:08:05",
"IPv4Address": "10.0.8.5/24",
"IPv6Address": ""
},
"lb-test-docker-network": {
"Name": "test-docker-network-endpoint",
"EndpointID": "0754c146c555fdf0e2d683c8ead3e0670196e201148c411f35899df226d77cc4",
"MacAddress": "02:42:0a:00:08:04",
"IPv4Address": "10.0.8.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4106"
},
"Labels": {},
"Peers": [
{
"Name": "08bdcafe53fb",
"IP": "10.0.0.30"
}
]
}
]
Issue:
We can see that the 'docker run' alias 'myalias' correctly resolves to 10.0.8.5, which matches 'docker network inspect'. But 'netcatalias' resolves to 10.0.8.2, while it should actually resolve to 10.0.8.3. Why is this happening, and how can I make netcatalias resolve to 10.0.8.3?
10.0.8.2 is the IP of a virtual service load balancer (the service VIP) that sits in front of the service and distributes traffic to its replicas.
If you change the service's endpoint mode to dnsrr instead of vip (virtual IP), the Docker DNS service will resolve the name to the container IPs in round-robin fashion.
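In a version 3.2+ compose file, the endpoint mode can be set per service. A sketch for the host1 service above (everything else unchanged); note that dnsrr cannot be combined with ingress-mode published ports, so the 2222 mapping would need mode: host or to be dropped:

```yaml
services:
  host1:
    image: linuxserver/openssh-server
    deploy:
      endpoint_mode: dnsrr   # resolve the service name/alias to task IPs, round-robin
```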

docker-compose hostname to communicate between containers works with postgres but not app

I have the following docker-compose.yml:
services:
  postgres:
    image: "postgres:11.0-alpine"
  app:
    build: .
    ports:
      - "4000:4000"
    depends_on:
      - postgres
      - nuxt
  nuxt:
    image: node:latest
    ports:
      - "3000:3000"
I need the nuxt service to communicate with app.
Within the nuxt service (docker-compose run --rm --service-ports nuxt bash), if I run:
root@62cafc299e8a:/app# ping postgres
PING postgres (172.18.0.2) 56(84) bytes of data.
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=2 ttl=64 time=0.130 ms
64 bytes from avril_postgres_1.avril_default (172.18.0.2): icmp_seq=3 ttl=64 time=0.103 ms
but if I do:
root@62cafc299e8a:/app# ping app
ping: app: No address associated with hostname
Why does it work for postgres but not for app?
If I do docker network inspect 4fcb63b4b1c9, they appear to all be on the same network:
[
{
"Name": "myapp_default",
"Id": "4fcb63b4b1c9fe37ebb26e9d4d22c359c9d5ed6153bd390b6f0b63ffeb0d5c37",
"Created": "2019-05-16T16:46:27.820758377+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"53b726bdd01159b5f18e8dcb858e979e6e2f8ef68c62e049b824899a74b186c3": {
"Name": "myapp_app_run_c82e91ca4ba0",
"EndpointID": "b535b6ca855a5dea19060b2f7c1bd82247b94740d4699eff1c8669c5b0677f78",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"62cafc299e8a90fd39530bbe4a6af8b86098405e54e4c9e61128539ffd5ba928": {
"Name": "myapp_nuxt_run_3fb01bb2f778",
"EndpointID": "7eb8f5f8798baee4d65cbbfe5f0f5372790374b48f599f32490700198fa6d54c",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"9dc1c848b2e347876292650c312e8aaf3f469f2efa96710fb50d033b797124b4": {
"Name": "myapp_postgres_1",
"EndpointID": "a925438ad5644c03731b7f7c926cff095709b2689fd5f404e9ac4e04c2fbc26a",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "myapp",
"com.docker.compose.version": "1.23.2"
}
}
]
So why is that? I also tried with aliases, without success. :(
Your app container is most likely not running. Its appearance in docker network inspect only means that the container exists; it may have exited (i.e. it is not running). You can check with docker ps -a, for example:
$ docker ps -a
CONTAINER ID ... STATUS ... NAMES
fe908e014fdd Exited (0) Less than a second ago so_app_1
3b2ca418c051 Up 2 minutes so_postgres_1
Container app exists but is not running: you won't be able to ping it even though it appears in the network.
Container postgres exists and is running: you will be able to ping it.
This is probably because docker-compose run --rm --service-ports nuxt bash only creates and runs the nuxt container; it won't run app or postgres. You are able to ping postgres because it was already running before you used docker-compose run nuxt bash.
To be able to ping other containers after running docker-compose run nuxt ..., you should either:
Have the other containers already running beforehand (for example via docker-compose up -d), or
List the other containers in the depends_on of the container you are trying to run, for example:
nuxt:
  image: node:latest
  ports:
    - "3000:3000"
  # this will ensure postgres and app are run as well when using docker-compose run
  depends_on:
    - app
    - postgres
Even with that, your container may fail to start (or exit right after starting), and then you won't be able to ping it. Check with docker ps -a that it is running, and use docker logs to see why it may have exited.
As @Pierre said, your container is most probably not running.
From the docker-compose excerpt in your question, it seems you are not running anything in that container (such as a server or uwsgi) to keep it alive.
app:
  build: .
  ports:
    - "4000:4000"
  depends_on:
    - postgres
    - nuxt
To keep it running under docker-compose, add a command directive like the one below:
app:
  build: .
  command: tail -f /dev/null # trick to stop immediate exit and keep the container alive
  ports:
    - "4000:4000"
  depends_on:
    - postgres
    - nuxt
The container should be 'pingable' now.
If you wish to run it via docker run instead, use -t, which allocates a pseudo-TTY:
docker run -t -d <image> <command>

How to configure containers in one network to connect to each other (server -> mysql)?

I run Ubuntu Docker containers (mysql and a Node.js server app) on Windows:
docker run -d --network bridge --name own -p 80:3000 own:latest
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce966e43414 own:latest "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:80->3000/tcp own
ed10cfc93dd5 mysql:5 "docker-entrypoint.s…" 20 minutes ago Up 20 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
When I open localhost:3000 with the server app run directly from cmd (NOT via the Docker VM), everything is fine and I see a successful connection to the Docker container at 0.0.0.0:3306, but when I:
docker start own
and check the browser at 0.0.0.0:80, I see Error: connect ECONNREFUSED 127.0.0.1:3306
docker network ls
NETWORK ID NAME DRIVER SCOPE
019f0886d253 bridge bridge local
fa1842bad14c host host local
85e7d1e38e14 none null local
docker inspect bridge
[
{
"Name": "bridge",
"Id": "019f0886d253091c1367863e38a199fe9b539a72ddb7575b26f40d0d1b1f78dc",
"Created": "2019-11-19T09:15:53.2096944Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"a79ec12c4cc908326c54abc2b47f80ffa3da31c5e735bf5ff2755f23b9d562dd": {
"Name": "own",
"EndpointID": "2afc225e29138ff9f1da0f557e9f7659d3c4ccaeb5bfaa578df88a672dac003f",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"ed10cfc93dd5eda7cfb8a26e5e4b2a8ccb4e9db7a4957b3d1048cb93f5137fd4": {
"Name": "mysql",
"EndpointID": "ea23d009f959d954269c0554cecf37d01f8fe71481965077f1372df27f05208a",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Maybe I could somehow assign the own container to the bridge network like mysql, so that the mysql container would be reachable from the own container? Please help: what should I do?
# create the network mynetwork
docker network create --subnet 172.17.0.0/16 mynetwork
# create the own container (without starting it)
docker create --name own -p 80:3000 own:latest
# add the own container to the network mynetwork
docker network connect --ip 172.17.0.2 mynetwork own
# start the own container
docker start own
# same as above but with a different ip
docker create --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker network connect --ip 172.17.0.3 mynetwork mysql
docker start mysql
When you stop and remove your containers, you can also remove the network:
docker network rm mynetwork
Or, if you keep it, there is no need to create it again as above; just connect your new/other containers to it.
In your application you should use 172.17.0.3 as the MySQL address.
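A name-based alternative (a sketch, not part of the answer above): on any user-defined network, Docker's embedded DNS resolves container names, so the app can use the hostname mysql instead of a hard-coded IP and you don't need to pin addresses at all:

```shell
# Both containers on the same user-defined network; no fixed IPs needed.
docker network create mynetwork
docker run -d --name mysql --network mynetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=12345678 mysql:5
docker run -d --name own --network mynetwork -p 80:3000 own:latest
# in the server app's config, connect to host "mysql", port 3306
```

This also survives container restarts, where an assigned IP might change.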

How to set up mysql host limitation working with docker container

I'm setting up a mysql:5 container (image from Docker Hub) on a sub-network shared with another container (a web server).
I'd like to limit my MySQL user to the host that corresponds to the web server container.
If I limit it directly with the web server container's IP it works, but this IP can change because of the Docker environment, so I'd like to have something like:
GRANT ALL PRIVILEGES ON `db`.* TO 'user'@'container-name'
But when I try to connect, the server responds:
Access denied for user 'user'@'172.18.0.4'
where 172.18.0.4 is the correct IP of the web server container.
Example :
docker-compose.yaml
version: '2'
services:
mysql1:
image: mysql
container_name: mysql1
ports:
- '3306:3306'
environment:
- MYSQL_ROOT_PASSWORD=rootpwd
mysql2:
image: mysql
container_name: mysql2
environment:
- MYSQL_ROOT_PASSWORD=rootpwd
Bring the containers up:
docker-compose up -d
Create a user in mysql1:
docker-compose exec mysql1 mysql -u root --password="rootpwd" -e "CREATE USER 'user'@'mysql2' IDENTIFIED BY 'pwd'; GRANT ALL PRIVILEGES ON * TO 'user'@'mysql2'" sys
Try to access mysql1 from mysql2 as that user:
docker-compose exec mysql2 mysql -u user --password="pwd" -h mysql1 sys
ERROR 1045 (28000): Access denied for user 'user'@'172.18.0.3' (using password: YES)
Docker network info
docker network inspect test-mysql-user-host_default
{
"Name": "test-mysql-user-host_default",
"Id": "305f4da33e0b79d899ac289e6b3fc1ebf2733baf0bf3d60a53cc94cec44176d1",
"Created": "2019-04-26T09:53:23.3237197Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"133ebd1e6ba485212e5448670c66c1718917650bc217264183c725fb1a928118": {
"Name": "mysql1",
"EndpointID": "ce89aa1674e9c46fad50b2f36aec8d1eecf2227f597a785be67785ade770fef7",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"d67072a5c486752b7018fa321e47ae09fb873199604c2b520f2305968d43b577": {
"Name": "mysql2",
"EndpointID": "e6b62c6ce9e266d38be383fa6029f378e1ca67a18420dd3f508a3089200c0d98",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Comment from bellackn:
It seems to be a problem with MySQL's skip-name-resolve option, which is currently hard-coded in the Dockerfile (see this GitHub issue). TL;DR: you could create your own image based on mysql:5 and sed this option away, or use the % wildcard instead of mysql2.
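The wildcard option from that comment would look like this, using the same docker-compose exec pattern as above (a sketch; the 172.18.0.% pattern is taken from the 172.18.0.0/16 subnet shown in the network inspect output, and it works even with skip-name-resolve enabled):

```shell
# Grant by IP pattern instead of hostname, so no reverse DNS lookup is needed.
docker-compose exec mysql1 mysql -u root --password="rootpwd" \
  -e "CREATE USER 'user'@'172.18.0.%' IDENTIFIED BY 'pwd'; GRANT ALL PRIVILEGES ON * TO 'user'@'172.18.0.%'" sys
```

This is less precise than a per-container hostname, since any container on the subnet matches the pattern.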
He's right; here is the official Dockerfile for mysql:5.7, and at line 70 we find:
# don't reverse lookup hostnames, they are usually another container
&& echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
I made a new Dockerfile which removes this configuration file:
FROM mysql:5.7
RUN rm /etc/mysql/conf.d/docker.cnf
EXPOSE 3306 33060
CMD ["mysqld"]
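Building and sanity-checking the image could look like this (a sketch; the tag mysql-resolve is a hypothetical name):

```shell
docker build -t mysql-resolve .
# the docker.cnf override should be gone from the config directory:
docker run --rm mysql-resolve ls /etc/mysql/conf.d/
```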

how can a docker container on 2 subnets access the internet? (using docker-compose)

I have a container on 2 networks:
one is the reverse-proxy network
the second one is the internal network for the different containers of that project
The container needs to access an external SMTP server (on mailgun.com), but it doesn't look like, with docker-compose, you can put a container on one or more networks and also give it access to the outside world at the same time.
Is there a way to allow this container to initiate connections to the outside world?
And if not, what common workarounds are used? (for example, adding an extra IP to the container to be on the host network, etc.)
This is the docker compose file:
version: '2.3'
services:
  keycloak:
    container_name: keycloak
    image: jboss/keycloak
    restart: unless-stopped
    volumes:
      - '/appdata/keycloak:/opt/jboss/keycloak/standalone/data'
    expose:
      - 8080
    external_links:
      - auth
    networks:
      - default
      - nginx
    environment:
      KEYCLOAK_USER: XXXX
      KEYCLOAK_PASSWORD: XXXX
      PROXY_ADDRESS_FORWARDING: 'true'
      ES_JAVA_OPTS: '-Xms512m -Xmx512m'
      VIRTUAL_HOST: auth.XXXX.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: auth.XXXX.com
      LETSENCRYPT_EMAIL: admin@XXXX.com
networks:
  default:
    external:
      name: app-network
  nginx:
    external:
      name: nginx-proxy
The networks are as follows:
$ dk network ls
NETWORK ID NAME DRIVER SCOPE
caba49ae8b1c bridge bridge local
2b311986a6f6 app-network bridge local
67f70f82aea2 host host local
9e0e2fe50385 nginx-proxy bridge local
dab9f171e37f none null local
and nginx-proxy network info is:
$ dk network inspect nginx-proxy
[
{
"Name": "nginx-proxy",
"Id": "9e0e2fe503857c5bc532032afb6646598ee0a08e834f4bd89b87b35db1739dae",
"Created": "2019-02-18T10:16:38.949628821Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"360b49ab066853a25cd739a4c1464a9ac25fe56132c596ce48a5f01465d07d12": {
"Name": "keycloak",
"EndpointID": "271ed86cac77db76f69f6e76686abddefa871b92bb60a007eb131de4e6a8cb53",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"379dfe83d6739612c82e99f3e8ad9fcdfe5ebb8cdc5d780e37a3212a3bf6c11b": {
"Name": "nginx-proxy",
"EndpointID": "0fcf186c6785dd585b677ccc98fa68cc9bc66c4ae02d086155afd82c7c465fef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"4c944078bcb1cca2647be30c516b8fa70b45293203b355f5d5e00b800ad9a0d4": {
"Name": "adminmongo",
"EndpointID": "65f1a7a0f0bcef37ba02b98be8fa1f29a8d7868162482ac0b957f73764f73ccf",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"671cc99775e09077edc72617836fa563932675800cb938397597e17d521c53fe": {
"Name": "portainer",
"EndpointID": "950e4b5dcd5ba2a13acba37f50e315483123d7da673c8feac9a0f8d6f8b9eb2b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"90a98111cbdebe76920ac2ebc50dafa5ea77eba9f42197216fcd57bad9e0516e": {
"Name": "kibana",
"EndpointID": "fe1768274eec9c02c28c74be0104326052b9b9a9c98d475015cd80fba82ec45d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Update:
The following test was done to verify the solution proposed by lbndev.
A test network was created:
# docker network create \
  -o "com.docker.network.bridge.enable_icc"="true" \
  -o "com.docker.network.bridge.enable_ip_masquerade"="true" \
  -o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
  -o "com.docker.network.driver.mtu"="1500" \
  test_network
e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9
We can display its contents:
# docker inspect test_network
[
{
"Name": "test_network",
"Id": "e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9",
"Created": "2019-02-24T21:52:44.678870135+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Then we can inspect the container.
I put the contents on pastebin (https://pastebin.com/5bJ7A9Yp), since they are quite large and would make this post unreadable.
And testing:
# docker exec -it 5d09230158dd sh
sh-4.2$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10006ms
So we couldn't get this solution to work.
It looks like your bridge network is missing a few options that allow it to reach the outside world.
Try executing docker network inspect bridge (the default bridge network). You'll see this in the options:
...
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
...
On your nginx-proxy network, these are missing.
You should delete your network and re-create it with these additional options. See the documentation on user-defined bridge networks and the docker network create command:
docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o "com.docker.network.driver.mtu"="1500" \
nginx-proxy
Whether to enable ICC is up to you.
What will enable you to reach your mail server is ip_masquerade being enabled. Without it, your physical infrastructure (i.e. your network routers) would need to route the IPs of the Docker network subnet properly (which I assume is not the case).
Alternatively, you could configure your Docker network's subnet, IP range and gateway to match those of your physical network.
In the end, the problem turned out to be very simple:
in the daemon.json file of the Docker configuration, there was the following line:
{"iptables": false, "dns": ["1.1.1.1", "1.0.0.1"]}
It comes from the setup scripts we've been using, and we didn't know about "iptables": false.
It prevents Docker from updating the host's iptables rules; so while the bridge networks were set up correctly, no communication with the outside was possible.
While simple in nature, it proved very long to find, so I’m posting it as an answer with the hope it might help someone.
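For reference, the fix amounted to removing the flag (or setting it back to its default) in daemon.json and restarting the Docker daemon; a sketch of the corrected file:

```json
{"iptables": true, "dns": ["1.1.1.1", "1.0.0.1"]}
```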
Thanks to everyone involved for trying to solve this issue!
