Docker containers in the same network can't communicate (ARP works, but not upper-layer messages) - docker

I have been trying to split a simple REST API into different services using Docker. Unfortunately, I have not been able to make it work. I have read the Docker docs several times and have followed multiple Stack Overflow and Docker forum threads, but none of the answers worked for me. I am new to Docker, so I might be missing something.
I detected that host-to-container communication was fine but container-to-container communication wasn't, so in order to see what was going on I installed ping in the get and post services (which run on a debian:bullseye-slim based image) and Wireshark on my host machine. What I found is that I can ping the host (172.22.0.1) and that name resolution works (when I run ping post, its IP is displayed), but for some reason, when I send a ping request from post to get, no reply is received.
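For reference, getting ping into a debian:bullseye-slim based image amounts to something like this (a sketch; iputils-ping is the standard Debian package for ping):
apt-get update && apt-get install -y iputils-ping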
My docker-compose.yaml file is the following:
version: '3.9'
services:
  mydb:
    image: mariadb:latest
    environment:
      MYSQL_DATABASE: 'cars'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'true'
    ports:
      - "3306:3306"
    container_name: mydb
    networks:
      - mynw
  post:
    build: ./post-service
    ports:
      - "8081:8081"
    container_name: post
    networks:
      - mynw
    privileged: true
  get:
    build: ./get-service
    ports:
      - "8080:8080"
    container_name: get
    networks:
      - mynw
    privileged: true
  nginx2:
    build: ./nginx2
    ports:
      - "80:80"
    container_name: nginx2
    networks:
      - mynw
networks:
  mynw:
    external: true
Initially, I was using the default network, but I read that this might cause internal DNS problems, so I changed it. I created the network via the CLI without any special parameters (docker network create mynw). The JSON displayed when running docker network inspect mynw is the following:
[
{
"Name": "mynw",
"Id": "f925467f7efee99330f0eaaa82158006ac645cc92e7abda693f052c10da485bd",
"Created": "2022-10-14T18:42:14.145569533+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4eb6e348d84b2433199e6581b4406eb74fb93c1fc2269691b81b34c13c723db5": {
"Name": "nginx2",
"EndpointID": "b19fab264c1489b616d919f09a5b80a1774561ea6f2538beb86157065c1e787b",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"5f20802a59708bf4a592e137f52fca29dc857734983abc1c61548783e2e61896": {
"Name": "mydb",
"EndpointID": "3ef7b5d619b5b9ad9441dbc2efabd5a0e5a6bb2ea68bbd58fae8f7dfd2ac36ed",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
},
"dee816dd62aa08773134bb7a7a653544ab316275ec111817e11ba499552dea5b": {
"Name": "post",
"EndpointID": "cca2cbe801160fa6c35b3a34493d6cc9a10689cd33505ece36db9ca6dcf43900",
"MacAddress": "02:42:ac:16:00:04",
"IPv4Address": "172.22.0.4/16",
"IPv6Address": ""
},
"e23dcd0cecdb609e4df236fd8aed0999c12e1adc7b91b505fc88c53385a81292": {
"Name": "get",
"EndpointID": "83b73045887827ecbb1779cd27d5c4dac63ef3224ec42f067cfc39ba69b5484e",
"MacAddress": "02:42:ac:16:00:05",
"IPv4Address": "172.22.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Curiously, when sniffing the network using Wireshark I see that the ARP messages between the containers are exchanged without problem (the get service asks for post's MAC address, post replies with its MAC, and this information is then processed correctly to send the ICMP request).
I thought that maybe the network layer was dropping the replies for some reason, so I installed iptables in both services and added an ACCEPT rule for ICMP messages to both INPUT and OUTPUT, but that didn't change anything either. If someone knows what else I could do or what I am missing, it would be very helpful.
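The ICMP rules in question would look roughly like this (a sketch, run inside each container):
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT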

In the end, the solution was to delete everything and reinstall Docker and Docker Compose.
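For anyone in the same spot, a clean reinstall on a Debian/Ubuntu host looks roughly like this (a sketch; it assumes the official docker-ce packages, and removing /var/lib/docker deletes all images and volumes):
sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo rm -rf /var/lib/docker /var/lib/containerd
# then reinstall following https://docs.docker.com/engine/install/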

Related

Bad Gateway when using Traefik in docker swarm

I'm currently struggling a lot to spin up a small Traefik example on my Docker swarm instance.
I started with a docker-compose file for local development, and everything works as expected.
But when I define this as a swarm file to bring that environment into production, I always get a Bad Gateway from Traefik.
After searching a lot, this seems to be related to a networking issue in Traefik, since it makes requests between two different networks, but I'm not able to find the issue.
After several iterations I tried to reproduce the issue with "official" containers to provide a better example for other people.
So this is my traefik.yml
version: "3.7"
networks:
external:
external: true
services:
traefik:
image: "traefik:v2.8.1"
command:
- "--log.level=INFO"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=external"
- "--entrypoints.web.address=:80"
- "--entrypoints.web.forwardedHeaders.insecure"
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- external
deploy:
placement:
constraints: [node.role == manager]
host-app:
image: traefik/whoami
ports:
- "9000:80"
networks:
- external
deploy:
labels:
- "traefik.enable=true"
- "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
- "traefik.http.services.host-app.loadbalancer.server.port=9000"
- "traefik.http.routers.host-app.entrypoints=web"
- "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
- "traefik.http.routers.host-app.middlewares=host-app-stripprefix#docker"
- "traefik.docker.network=external"
The network is created with: docker network create -d overlay external
and I deploy the stack with docker stack deploy -c traefik.yml server
Up to this point there are no issues and everything spins up fine.
When I curl localhost:9000 I get the correct response:
curl localhost:9000
Hostname: 7aa77bc62b44
IP: 127.0.0.1
IP: 10.0.0.8
IP: 172.25.0.4
IP: 10.0.4.6
RemoteAddr: 10.0.0.2:35068
GET / HTTP/1.1
Host: localhost:9000
User-Agent: curl/7.68.0
Accept: */*
but on
curl localhost/whoami
Bad Gateway%
I always get the Bad Gateway error.
So I checked my network with docker network inspect external to ensure that both are running in the same network, and this is the case.
[
{
"Name": "external",
"Id": "iianul6ua9u1f1bb8ibsnwkyc",
"Created": "2022-08-09T19:32:01.4491323Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.4.0/24",
"Gateway": "10.0.4.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7aa77bc62b440e32c7b904fcbd91aea14e7a73133af0889ad9e0c9f75f2a884a": {
"Name": "server_host-app.1.m2f5x8jvn76p2ssya692f4ydp",
"EndpointID": "5d5175b73f1aadf2da30f0855dc0697628801a31d37aa50d78a20c21858ccdae",
"MacAddress": "02:42:0a:00:04:06",
"IPv4Address": "10.0.4.6/24",
"IPv6Address": ""
},
"e23f5c2897833f800a961ab49a4f76870f0377b5467178a060ec938391da46c7": {
"Name": "server_traefik.1.v5g3af00gqpulfcac84rwmnkx",
"EndpointID": "4db5d69e1ad805954503eb31c4ece5a2461a866e10fcbf579357bf998bf3490b",
"MacAddress": "02:42:0a:00:04:03",
"IPv4Address": "10.0.4.3/24",
"IPv6Address": ""
},
"lb-external": {
"Name": "external-endpoint",
"EndpointID": "ed668b033450646629ca050e4777ae95a5a65fa12a5eb617dbe0c4a20d84be28",
"MacAddress": "02:42:0a:00:04:04",
"IPv4Address": "10.0.4.4/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {},
"Peers": [
{
"Name": "3cb3e7ba42dc",
"IP": "192.168.65.3"
}
]
}
]
and by checking the traefik logs I get the following
10.0.0.2 - - [09/Aug/2022:19:42:34 +0000] "GET /whoami HTTP/1.1" 502 11 "-" "-" 4 "host-app@docker" "http://10.0.4.9:9000" 0ms
which is the correct server:port for the whoami service. Even connecting into the Traefik container and pinging 10.0.4.9 works fine:
PING 10.0.4.9 (10.0.4.9): 56 data bytes
64 bytes from 10.0.4.9: seq=0 ttl=64 time=0.066 ms
64 bytes from 10.0.4.9: seq=1 ttl=64 time=0.057 ms
^C
--- 10.0.4.9 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.061/0.066 ms
These logs and snippets are all from my local swarm on Docker for Windows with a WSL2 Ubuntu distribution, but I tested this on a CentOS swarm available within my company and also on https://labs.play-with-docker.com/, and all of them lead to the same error.
So please, can anybody tell me what configuration I'm missing or what mistake I made to get this running?
After consulting a coworker and creating another example, we finally found the solution ourselves.
It was simply my own mistake: I used the published port for load balancing from Traefik to the service, which is wrong.
host-app:
  image: traefik/whoami
  ports:
    - "9000:80"
  networks:
    - external
  deploy:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.host-app.rule=PathPrefix(`/whoami`)"
      - "traefik.http.services.host-app.loadbalancer.server.port=80" # <--- this line was wrong (it used the published port 9000)
      - "traefik.http.routers.host-app.entrypoints=web"
      - "traefik.http.middlewares.host-app-stripprefix.stripprefix.prefixes=/"
      - "traefik.http.routers.host-app.middlewares=host-app-stripprefix@docker"
      - "traefik.docker.network=external"
and that's the reason for the Bad Gateway: Traefik tries to reach the published port on the service, which is not present inside the overlay network.
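A quick way to see the difference is to exec into the Traefik container and try both ports against the service name (a sketch; it assumes the Alpine-based traefik image, whose busybox wget is available):
docker exec -it <traefik-container> wget -qO- http://host-app:80      # container port: answers
docker exec -it <traefik-container> wget -qO- http://host-app:9000    # published port: refused inside the overlay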

Cannot access Postgres (in container) from another container in the same network

I have learned a lot about Docker on my own, but I am still facing one major problem, and I need your help, please.
This is my docker-compose file:
version: '3.3'
services:
  postgres:
    container_name: postgres-tc
    networks:
      - tools-net
    image: postgres
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=Admin10..
      - POSTGRES_HOST_AUTH_METHOD=trust
  #service
  teamcity-server:
    ports:
      - '8112:8111'
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
      - '/data/teamcity_server/datadir:/data/envdata/tc/datadir'
      - '/opt/teamcity/logs:/data/envdata/tc/logs'
    links:
      - "postgres:postgres"
    logging:
      options:
        max-size: 1g
    container_name: tc-server
    networks:
      - tools-net
    image: jetbrains/teamcity-server
networks:
  tools-net:
    external: true
teamcity-server needs to access postgres on its port to start working.
Both are in the same network, created by this command:
docker network create -d bridge --subnet 172.50.0.0/16 tools-net
Here is the network inspect output after running docker-compose up:
[
{
"Name": "tools-net",
"Id": "74708d3d114394032cbeb5f0a2a93893da38ce5dae2a555a451a189b00b52b2e",
"Created": "2021-07-04T07:04:39.105791768Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.50.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"15a57ccfc229e361e40e940d01b6d6025820fee5ad50db4a61d0c411d4d61750": {
"Name": "postgres-tc",
"EndpointID": "8d5b9d192ed90545abe958134b9853d0aecba33cabd56f31a2c9681106ccdf6e",
"MacAddress": "02:42:ac:32:00:02",
"IPv4Address": "172.50.0.2/16",
"IPv6Address": ""
},
"94eaa0ea0524ca4419ba8e300e80687db487e4f46b6623dabcc15d65c60bdde6": {
"Name": "tc-server",
"EndpointID": "90825befcc5633c3c59c5ec9d58b188d2862cd65cd2283b5c56ec3ecf5a95fd6",
"MacAddress": "02:42:ac:32:00:03",
"IPv4Address": "172.50.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Now when I try to access postgres from the teamcity-server, I get this error:
Please help me solve this issue, guys.
Thanks in advance
UPDATE:
It seems to be better with your help, @Hans.
For now I have another issue:
Maybe I have to add a permission in postgres's pg_hba.conf, which I cannot locate within the container.
Can you help please?
Thanks
You're telling Teamcity to connect to a postgres server running on the default localhost address. In Docker terms, that means one running in the same container as Teamcity. It isn't.
You need to tell Teamcity to connect to database host postgres-tc. That's the network name of the Postgres container on your network.
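As a sketch, the connection settings in TeamCity's database.properties would then look something like this (the database name buildserver is an assumption; the password comes from the compose file above):
connectionUrl=jdbc:postgresql://postgres-tc:5432/buildserver
connectionProperties.user=postgres
connectionProperties.password=Admin10..
As for the pg_hba.conf follow-up, you can ask Postgres itself where the file lives inside the container:
docker exec -it postgres-tc psql -U postgres -c 'SHOW hba_file;'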

container running on docker swarm not accessible from outside

I am running my containers on the docker swarm. asset-frontend is my frontend service, which runs Nginx inside the container and exposes port 80. Now, if I do
curl http://10.255.8.21:80
or
curl http://127.0.0.1:80
from the host where I am running these containers, I am able to see my asset-frontend application, but it is not accessible outside of the host. I am not able to access it from another machine. My host machine's operating system is CentOS 8.
This is my docker-compose file:
version: "3.3"
networks:
basic:
services:
asset-backend:
image: asset/asset-management-backend
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
asset-mongodb:
image: mongo
restart: always
env_file: .env
ports:
- "27017:27017"
volumes:
- $HOME/asset/mongodb:/data/db
networks:
- basic
asset-postgres:
image: asset/postgresql
restart: always
env_file: .env
ports:
- "5432:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=asset-management
volumes:
- $HOME/asset/postgres:/var/lib/postgresql/data
networks:
- basic
asset-frontend:
image: asset/asset-management-frontend
restart: always
ports:
- "80:80"
environment:
- ENV=dev
depends_on:
- asset-backend
deploy:
replicas: 1
networks:
- basic
asset-autodiscovery-cron:
image: asset/auto-discovery-cron
restart: always
env_file: .env
deploy:
replicas: 1
depends_on:
- asset-mongodb
- asset-postgres
networks:
- basic
This is the output of docker service ls:
ID NAME MODE REPLICAS IMAGE PORTS
auz640zl60bx asset_asset-autodiscovery-cron replicated 1/1 asset/auto-discovery-cron:latest
g6poofhvmoal asset_asset-backend replicated 1/1 asset/asset-management-backend:latest
brhq4g4mz7cf asset_asset-frontend replicated 1/1 asset/asset-management-frontend:latest *:80->80/tcp
rmkncnsm2pjn asset_asset-mongodb replicated 1/1 mongo:latest *:27017->27017/tcp
rmlmdpa5fz69 asset_asset-postgres replicated 1/1 asset/postgresql:latest *:5432->5432/tcp
My port 80 is open in the firewall; the following is the output of firewall-cmd --list-all:
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: cockpit dhcpv6-client ssh
ports: 22/tcp 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
If I inspect the created network, the output is the following:
[
{
"Name": "asset_basic",
"Id": "zw73vr9xigfx7hy16u1myw5gc",
"Created": "2019-11-26T02:36:38.241352385-05:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.3.0/24",
"Gateway": "10.0.3.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9348f4fc6bfc1b14b84570e205c88a67aba46f295a5e61bda301fdb3e55f3576": {
"Name": "asset_asset-frontend.1.zew1obp21ozmg8r1tzmi5h8g8",
"EndpointID": "27624fe2a7b282cef1762c4328ce0239dc70ebccba8e00d7a61595a7a1da2066",
"MacAddress": "02:42:0a:00:03:08",
"IPv4Address": "10.0.3.8/24",
"IPv6Address": ""
},
"943895f12de86d85fd03d0ce77567ef88555cf4766fa50b2a8088e220fe1eafe": {
"Name": "asset_asset-mongodb.1.ygswft1l34o5vfaxbzmnf0hrr",
"EndpointID": "98fd1ce6e16ade2b165b11c8f2875a0bdd3bc326c807ba6a1eb3c92f4417feed",
"MacAddress": "02:42:0a:00:03:04",
"IPv4Address": "10.0.3.4/24",
"IPv6Address": ""
},
"afab468aefab0689aa3488ee7f85dbc2cebe0202669ab4a58d570c12ee2bde21": {
"Name": "asset_asset-autodiscovery-cron.1.5k23u87w7224mpuasiyakgbdx",
"EndpointID": "d3d4c303e1bc665969ad9e4c9672e65a625fb71ed76e2423dca444a89779e4ee",
"MacAddress": "02:42:0a:00:03:0a",
"IPv4Address": "10.0.3.10/24",
"IPv6Address": ""
},
"f0a768e5cb2f1f700ee39d94e380aeb4bab5fe477bd136fd0abfa776917e90c1": {
"Name": "asset_asset-backend.1.8ql9t3qqt512etekjuntkft4q",
"EndpointID": "41587022c339023f15c57a5efc5e5adf6e57dc173286753216f90a976741d292",
"MacAddress": "02:42:0a:00:03:0c",
"IPv4Address": "10.0.3.12/24",
"IPv6Address": ""
},
"f577c539bbc3c06a501612d747f0d28d8a7994b843c6a37e18eeccb77717539e": {
"Name": "asset_asset-postgres.1.ynrqbzvba9kvfdkek3hurs7hl",
"EndpointID": "272d642a9e20e45f661ba01e8731f5256cef87898de7976f19577e16082c5854",
"MacAddress": "02:42:0a:00:03:06",
"IPv4Address": "10.0.3.6/24",
"IPv6Address": ""
},
"lb-asset_basic": {
"Name": "asset_basic-endpoint",
"EndpointID": "142373fd9c0d56d5a633b640d1ec9e4248bac22fa383ba2f754c1ff567a3502e",
"MacAddress": "02:42:0a:00:03:02",
"IPv4Address": "10.0.3.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4100"
},
"Labels": {
"com.docker.stack.namespace": "asset"
},
"Peers": [
{
"Name": "8170c4487a4b",
"IP": "10.255.8.21"
}
]
}
]
Ran into this same issue, and it turned out to be a clash between my local network's subnet and the subnet of the automatically created ingress network. This can be verified by running docker network inspect ingress and checking whether the IPAM.Config.Subnet value overlaps with your local network.
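For example (a sketch using the inspect command's --format flag):
docker network inspect ingress --format '{{json .IPAM.Config}}'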
To fix it, you can update the configuration of the ingress network as specified in Customize the default ingress network; in summary:
Remove services that publish ports
Remove existing network: docker network rm ingress
Recreate it using a non-conflicting subnet (172.16.0.0/16 here; use whatever other subnet you want):
docker network create \
  --driver overlay \
  --ingress \
  --subnet 172.16.0.0/16 \
  --gateway 172.16.0.1 \
  ingress
Restart services
You can avoid a clash in the first place by specifying the default subnet pool when initializing the swarm, using the --default-addr-pool option.
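For instance (a sketch; the pool below is an arbitrary private range):
docker swarm init --default-addr-pool 10.20.0.0/16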
You can publish ports by updating the service:
docker service update your-service --publish-add 80:80
Can you try this URL instead of the IP address? host.docker.internal, so something like http://host.docker.internal:80
I suggest you verify the "right" behavior using docker-compose first. Then, try docker swarm without a network specification, just to verify there are no network interface problems.
Also, you could use the below command to verify your LISTEN ports:
netstat -tulpn
EDIT: I faced this same issue, but I was able to access my services through 127.0.0.1.
When running Docker, provide a port mapping, like
docker run -p 8081:8081 your-docker-image
Or provide the port mapping in Docker Desktop while starting the container.
I got into this same issue. It turns out my iptables filter was causing external connections to fail.
In Docker swarm mode, Docker creates a virtual bridge device, docker_gwbridge, to access the overlay network. My iptables had the following line, which drops forwarded packets:
:FORWARD DROP
That prevents network packets from the physical NIC from reaching the Docker ingress network, so my Docker service only worked on localhost.
Changing the iptables rule to
:FORWARD ACCEPT
solved the problem without touching Docker.
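The runtime equivalent of that policy change is (a sketch; :FORWARD ACCEPT is iptables-save syntax for the chain's default policy):
sudo iptables -P FORWARD ACCEPT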

How to set up a MySQL host limitation working with a docker container

I'm setting up a mysql:5 container (image from Docker Hub) in a sub-network shared with another container (a web server).
I'd like to limit my MySQL user to hosts that correspond to the web-server container.
If I limit it directly with the web-server container's IP it works, but this IP can change because of the Docker environment, so I'd like to have something like:
GRANT ALL PRIVILEGES ON `db`.* TO 'user'@'container-name'
And when I try to connect, the server responds:
Access denied for user 'user'@'172.18.0.4'
where 172.18.0.4 is the correct IP for the web-server container.
Example:
docker-compose.yaml
version: '2'
services:
  mysql1:
    image: mysql
    container_name: mysql1
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
  mysql2:
    image: mysql
    container_name: mysql2
    environment:
      - MYSQL_ROOT_PASSWORD=rootpwd
Bring up the containers:
docker-compose up -d
Create the user in mysql1:
docker-compose exec mysql1 mysql -u root --password="rootpwd" -e "CREATE USER user@mysql2 IDENTIFIED BY 'pwd'; GRANT ALL PRIVILEGES ON * TO user@mysql2" sys
Try to access mysql1 from mysql2 as that user:
docker-compose exec mysql2 mysql -u user --password="pwd" -h mysql1 sys
ERROR 1045 (28000): Access denied for user 'user'@'172.18.0.3' (using password: YES)
Docker network info
docker network inspect test-mysql-user-host_default
{
"Name": "test-mysql-user-host_default",
"Id": "305f4da33e0b79d899ac289e6b3fc1ebf2733baf0bf3d60a53cc94cec44176d1",
"Created": "2019-04-26T09:53:23.3237197Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"133ebd1e6ba485212e5448670c66c1718917650bc217264183c725fb1a928118": {
"Name": "mysql1",
"EndpointID": "ce89aa1674e9c46fad50b2f36aec8d1eecf2227f597a785be67785ade770fef7",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"d67072a5c486752b7018fa321e47ae09fb873199604c2b520f2305968d43b577": {
"Name": "mysql2",
"EndpointID": "e6b62c6ce9e266d38be383fa6029f378e1ca67a18420dd3f508a3089200c0d98",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Comment from bellackn:
It seems to be a problem with MySQL's skip-name-resolve option, which is currently hard-coded in the Dockerfile (see this GitHub issue). TL;DR: You could create your own image based on mysql:5 and sed this option away, or you can use the % wildcard instead of mysql2.
He's right; here is the official Dockerfile for mysql:5.7, and at line 70 we can find:
#don't reverse lookup hostnames, they are usually another container
&& echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
I made a new Dockerfile which removes this configuration file:
FROM mysql:5.7
RUN rm /etc/mysql/conf.d/docker.cnf
EXPOSE 3306 33060
CMD ["mysqld"]

how can a docker container on 2 subnets access the internet? (using docker-compose)

I have a container with 2 subnets:
one is the reverse-proxy subnet
the second one is the internal subnet for the different containers of that project
The container needs to access an external SMTP server (on mailgun.com), but it looks like, with docker-compose, you can't put a container on one or more subnets and give it access to the host network at the same time.
Is there a way to allow this container to initiate connections to the outside world?
And, if not, what common workarounds are used? (For example, adding an extra IP to the container to be on the host network, etc.)
This is the docker compose file:
version: '2.3'
services:
  keycloak:
    container_name: keycloak
    image: jboss/keycloak
    restart: unless-stopped
    volumes:
      - '/appdata/keycloak:/opt/jboss/keycloak/standalone/data'
    expose:
      - 8080
    external_links:
      - auth
    networks:
      - default
      - nginx
    environment:
      KEYCLOAK_USER: XXXX
      KEYCLOAK_PASSWORD: XXXX
      PROXY_ADDRESS_FORWARDING: 'true'
      ES_JAVA_OPTS: '-Xms512m -Xmx512m'
      VIRTUAL_HOST: auth.XXXX.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: auth.XXXX.com
      LETSENTRYPT_EMAIL: admin@XXXX.com
networks:
  default:
    external:
      name: app-network
  nginx:
    external:
      name: nginx-proxy
The networks are as follows:
$ dk network ls
NETWORK ID NAME DRIVER SCOPE
caba49ae8b1c bridge bridge local
2b311986a6f6 app-network bridge local
67f70f82aea2 host host local
9e0e2fe50385 nginx-proxy bridge local
dab9f171e37f none null local
and nginx-proxy network info is:
$ dk network inspect nginx-proxy
[
{
"Name": "nginx-proxy",
"Id": "9e0e2fe503857c5bc532032afb6646598ee0a08e834f4bd89b87b35db1739dae",
"Created": "2019-02-18T10:16:38.949628821Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"360b49ab066853a25cd739a4c1464a9ac25fe56132c596ce48a5f01465d07d12": {
"Name": "keycloak",
"EndpointID": "271ed86cac77db76f69f6e76686abddefa871b92bb60a007eb131de4e6a8cb53",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"379dfe83d6739612c82e99f3e8ad9fcdfe5ebb8cdc5d780e37a3212a3bf6c11b": {
"Name": "nginx-proxy",
"EndpointID": "0fcf186c6785dd585b677ccc98fa68cc9bc66c4ae02d086155afd82c7c465fef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"4c944078bcb1cca2647be30c516b8fa70b45293203b355f5d5e00b800ad9a0d4": {
"Name": "adminmongo",
"EndpointID": "65f1a7a0f0bcef37ba02b98be8fa1f29a8d7868162482ac0b957f73764f73ccf",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"671cc99775e09077edc72617836fa563932675800cb938397597e17d521c53fe": {
"Name": "portainer",
"EndpointID": "950e4b5dcd5ba2a13acba37f50e315483123d7da673c8feac9a0f8d6f8b9eb2b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"90a98111cbdebe76920ac2ebc50dafa5ea77eba9f42197216fcd57bad9e0516e": {
"Name": "kibana",
"EndpointID": "fe1768274eec9c02c28c74be0104326052b9b9a9c98d475015cd80fba82ec45d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Update:
The following test was done to check the solution proposed by lbndev:
A test network was created:
# docker network create \
    -o "com.docker.network.bridge.enable_icc"="true" \
    -o "com.docker.network.bridge.enable_ip_masquerade"="true" \
    -o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
    -o "com.docker.network.driver.mtu"="1500" \
    test_network
e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9
we can display the contents:
# docker inspect test_network
[
{
"Name": "test_network",
"Id": "e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9",
"Created": "2019-02-24T21:52:44.678870135+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Then we can inspect the container:
I put the contents on pastebin: https://pastebin.com/5bJ7A9Yp since it's quite large and would make this post unreadable.
and testing:
# docker exec -it 5d09230158dd sh
sh-4.2$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10006ms
So, we couldn't get this solution to work.
Looks like your bridge network is missing a few options needed to reach the outside world.
Try executing docker network inspect bridge (the default bridge network). You'll see this in the options :
...
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
...
On your nginx-proxy network, these are missing.
You should delete your network and re-create it with these additional options. From the documentation on user-defined bridge networks and the docker network create command:
docker network create \
  -o "com.docker.network.bridge.enable_icc"="true" \
  -o "com.docker.network.bridge.enable_ip_masquerade"="true" \
  -o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
  -o "com.docker.network.driver.mtu"="1500" \
  nginx-proxy
Enabling ICC or not is up to you.
What will let you reach your mail server is having ip_masquerade enabled. Without it, your physical infrastructure (= network routers) would need to properly route the IPs of the docker network subnet (which I assume is not the case).
Alternatively, you could configure your docker network's subnet, IP range and gateway to match those of your physical network, as sketched below.
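A sketch of that alternative (the addresses are placeholders for your physical LAN):
docker network create \
  --subnet 192.168.1.0/24 \
  --ip-range 192.168.1.128/25 \
  --gateway 192.168.1.1 \
  nginx-proxy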
In the end, the problem turned out to be very simple:
In the daemon.json file, in the docker config, there was the following line:
{"iptables": false, "dns": ["1.1.1.1", "1.0.0.1"]}
It comes from the setup scripts we've been using, and we didn't know about "iptables": false.
It prevents Docker from updating the host's iptables; so while the bridge networks were set up correctly, no communication with the outside was possible.
While simple in nature, it took a very long time to find, so I'm posting it as an answer in the hope it might help someone.
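The fix (a sketch) is simply to drop the flag, since iptables management defaults to true, and restart the daemon; the DNS entries can stay:
{"dns": ["1.1.1.1", "1.0.0.1"]}
sudo systemctl restart docker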
Thanks to everyone involved for trying to solve this issue!
