I have a situation where I need to restrict the internet access of containers in a load-balancer network, as in the picture below.
Only container4 connects to the Internet; the other three communicate with the outside world only through container4. For example, if container1 needs SMTP support, it forwards the SMTP request to container4.
No container other than container4 should be allowed to access the Internet directly! This should be enforced at the Docker level.
I believe this should be configurable at docker network creation; can anyone explain how to achieve this?
As found here, I got this to work with docker-compose. Save as docker-compose.yml:
version: '3'
services:
outgoing-wont-work:
image: alpine
networks:
- no-internet
command: ping -c 3 google.com # will crash
internal-will-work:
image: alpine
networks:
- no-internet
command: ping -c 3 internal-and-external
internal-and-external:
image: alpine
networks:
- no-internet
- internet
command: ping -c 3 google.com
networks:
no-internet:
driver: bridge
internal: true
internet:
driver: bridge
Then run docker-compose up -d. After a few seconds, docker-compose ps will show something like this:
Name Command State Ports
----------------------------------------------------------------------------------
dco_inet_internal-and-external_1 ping -c 3 google.com Exit 0
dco_inet_internal-will-work_1 ping -c 3 internal-and-ext ... Exit 0
dco_inet_outgoing-wont-work_1 ping -c 3 google.com Exit 1
Create a network with internet access:
docker network create --subnet=172.19.0.0/16 internet
Create a network with internet access blocked:
docker network create --internal --subnet 10.1.1.0/24 no-internet
To connect a docker container to the internet:
docker network connect internet container-name
To block internet access:
docker network connect no-internet container-name
Note
Containers on an internal network cannot publish ports to the outside world; please refer to this question for more details.
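To mirror the container4 pattern from the question, a gateway service can join both networks and publish ports on behalf of the internal-only services. A hedged sketch building on the compose file above (the nginx image and the gateway/worker service names are illustrative, not from the question):

```yaml
version: '3'
services:
  gateway:              # plays the role of container4
    image: nginx:alpine
    networks:
      - no-internet     # reachable by the isolated services
      - internet        # has outbound access and can publish ports
    ports:
      - "8080:80"
  worker:               # isolated: can talk to gateway, but not the Internet
    image: alpine
    networks:
      - no-internet
    command: sleep infinity
networks:
  no-internet:
    driver: bridge
    internal: true
  internet:
    driver: bridge
```

The worker can reach the gateway by service name over the internal network, while only the gateway's ports are published externally.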
Another option, if you need to expose ports on a container without internet access but want to let it talk to other containers, is to provide a bogus DNS configuration. This isn't a perfect solution, though, since it doesn't prevent direct IP access to the outside world.
docker-compose.yaml
version: '3'
services:
service1:
image: alpine
command: sh -c 'ping service2 -c 1; ping google.com -c 1'
dns: 0.0.0.0
service2:
image: alpine
command: sh -c 'ping service1 -c 1; ping google.com -c 1'
dns: 0.0.0.0
isolated> docker-compose up
Recreating isolated_service1_1 ... done
Recreating isolated_service2_1 ... done
Attaching to isolated_service2_1, isolated_service1_1
service1_1 | PING service2 (172.18.0.2) 56(84) bytes of data.
service1_1 | 64 bytes from isolated_service2_1.isolated_default (172.18.0.2): icmp_seq=1 ttl=64 time=0.038 ms
service1_1 |
service1_1 | --- service2 ping statistics ---
service1_1 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
service1_1 | rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
service2_1 | PING service1 (172.18.0.3) 56(84) bytes of data.
service2_1 | 64 bytes from isolated_service1_1.isolated_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.093 ms
service2_1 |
service2_1 | --- service1 ping statistics ---
service2_1 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
service2_1 | rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
service1_1 | ping: google.com: Temporary failure in name resolution
service2_1 | ping: google.com: Temporary failure in name resolution
isolated_service1_1 exited with code 2
isolated_service2_1 exited with code 2
As stated in Bilal's answer, an internal network is a good solution if you do not need to expose ports.
If you do need to expose ports, the iptables-based solution below does the job for my requirements:
docker network create --subnet 172.19.0.0/16 no-internet
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -j REJECT --reject-with icmp-port-unreachable
sudo iptables --insert DOCKER-USER -s 172.19.0.0/16 -m state --state RELATED,ESTABLISHED -j RETURN
Then add
--network no-internet
when you run your docker container. For instance:
$ docker run -it --network no-internet ubuntu:focal /bin/bash
root@9f2181f79985:/# apt update
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
Temporary failure resolving 'archive.ubuntu.com'
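A note on the rule order above: iptables --insert puts each rule at the top of the DOCKER-USER chain, so running the two commands in the order shown leaves the RELATED,ESTABLISHED rule first, letting reply traffic through before the REJECT rule is evaluated. Also, rules added this way do not survive a reboot unless saved with your distribution's persistence mechanism. If you create the network from docker-compose instead, the subnet can be pinned there so the rules keep matching (a sketch; the service name and image are illustrative):

```yaml
version: '3'
services:
  app:
    image: alpine
    command: ping -c 3 google.com   # rejected once the iptables rules are in place
    networks:
      - no-internet
networks:
  no-internet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16     # must match the subnet used in the iptables rules
```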
I have a dockerized app and I use the following docker-compose.yml to run it:
version: '3.1'
services:
db:
image: mysql:5.7
ports:
- "3306:3306"
env_file:
- ./docker/db/.env
volumes:
- ./docker/db/data:/var/lib/mysql:rw
- ./docker/db/config:/etc/mysql/conf.d
command: mysqld --sql_mode="NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
php:
build: ./docker/php/7.4/
volumes:
- ./docker/php/app.ini:/usr/local/etc/php/conf.d/docker-php-ext-app.ini:ro
- ./docker/logs/app:/var/www/app/var/log:cached
- .:/var/www/app:cached
working_dir: /var/www/app
links:
- db
env_file:
- ./docker/php/.env
webserver:
image: nginx:1
depends_on:
- php
volumes:
- ./docker/webserver/app.conf:/etc/nginx/conf.d/default.conf:ro
- ./docker/logs/webserver/:/var/log/nginx:cached
- .:/var/www/app:ro
ports:
- "80:80"
I have a non-dockerized server running on my machine that I can access via localhost:3000. I would like my php service to be able to access it.
I found people suggesting adding the following to my php service configuration:
extra_hosts:
- "host.docker.internal:host-gateway"
But when I add this, run docker-compose up -d, and try docker exec -ti php_1 curl http://localhost:3000, I get curl: (7) Failed to connect to localhost port 3000 after 0 ms: Connection refused. I get the same error when I try curl http://host.docker.internal:3000.
I desperately tried to add a port mapping to the php container:
ports:
- 3000:3000
But then when I start the services I have the following error:
ERROR: for php_1 Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
Which is expected, since my server is already running on port 3000.
I also tried to add
network_mode: host
But it fails because I already have links defined. I get the following error:
Cannot create container for service php: conflicting options: host type networking can't be used with links.
I am running docker v20.10.6 on Ubuntu 21.10.
Any help appreciated, thanks in advance!
Make sure you are using a version of Docker that supports host.docker.internal.
If you are using Linux, then 20.10+ supports it.
For other systems, consult the documentation and possibly the relevant GitHub issues for docker-for-linux / other OS-specific projects.
After that...
Make sure extra_hosts is a direct child of the php service:
php:
extra_hosts:
host.docker.internal: host-gateway
build: ./docker/php/7.4/
Try ping host.docker.internal first to check whether your host machine responds correctly.
Make sure that your service on port 3000 is working properly and that there is no firewall issue.
Remember that localhost always refers to the local IP from the current container's point of view. That is, localhost inside a container maps to the container's own IP, not your host machine's IP. This is the reason for the extra_hosts section.
Also, host.docker.internal is not your host's loopback interface.
If the service you are trying to reach listens only on the localhost interface, there is no way to reach it without some iptables / firewall magic.
You can check which service is listening on which interface / IP address by running the following command on your host machine: netstat -tulpn
This should return something like the following output:
$ netstat -tulpn
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:39195 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
From a docker container I can reach services listening on 0.0.0.0 (all interfaces), but I cannot access port 631 as it is bound only to 127.0.0.1:
$ docker run --rm -it --add-host="host.docker.internal:host-gateway" busybox
/ # ping host.docker.internal
PING host.docker.internal (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.124 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.060 ms
^C
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.092/0.124 ms
/ # telnet host.docker.internal 631
telnet: can't connect to remote host (172.17.0.1): Connection refused
/ # telnet host.docker.internal 22
Connected to host.docker.internal
SSH-2.0-OpenSSH_8.6
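The interface check can be scripted: the awk filter below keeps only listeners bound to all interfaces (0.0.0.0 or ::), i.e. the ones a container can reach through host.docker.internal. The sample lines are hard-coded from the output above for illustration; in practice you would pipe real netstat output.

```shell
# Sample lines in `netstat -tulpn` layout (Proto Recv-Q Send-Q Local-Address ...)
netstat_sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -'

# Keep only sockets bound to all interfaces; these are reachable from containers.
echo "$netstat_sample" | awk '$4 ~ /^(0\.0\.0\.0|::):/ { print $4 }'
# prints:
# 0.0.0.0:22
# 0.0.0.0:3306
```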
I installed a Jenkins container on Docker.
I used docker-compose with this yml file:
version: '2'
services:
jenkins:
image: 'bitnami/jenkins:2'
ports:
- '8080:8080'
- '8443:8443'
- '50000:50000'
volumes:
- 'jenkins_data:/bitnami/jenkins'
dns:
- '8.8.8.8'
- '1.1.1.1'
volumes:
jenkins_data:
driver: local
In the logs, I found an UnknownHostException error:
jenkins_1 | 2020-03-23 17:45:06.490+0000 [id=46] INFO hudson.util.Retrier#start: The attempt #1 to do the action check updates server failed with an allowed exception:
jenkins_1 | java.net.UnknownHostException: updates.jenkins.io
...
jenkins_1 | 2020-03-23 17:45:06.490+0000 [id=46] INFO hudson.util.Retrier#start: Calling the listener of the allowed exception 'updates.jenkins.io' at the attempt #1 to do the action check updates server
jenkins_1 | 2020-03-23 17:45:06.492+0000 [id=46] INFO hudson.util.Retrier#start: Attempted the action check updates server for 1 time(s) with no success
I tried to resolve this error, but finally failed. Here is what I tried:
Set the 'dns' parameter:
nameserver 8.8.8.8
nameserver 1.1.1.1
Reset the bridge network:
systemctl stop docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
systemctl start docker
Test ping:
docker run -it bitnami/jenkins:2 ping 8.8.8.8
[FATAL tini (8)] exec ping failed: No such file or directory
docker run -it ubuntu:trusty ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=31.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=52 time=30.8 ms
docker run -it ubuntu:trusty ping google.com
ping: unknown host google.com
I think bitnami/jenkins may not include ping.
Maybe it's not a problem with the bridge, because of test case 3.
I don't know what I should check next.
Can you give me some hints?
Thank you!
You are only exposing your ports on the loopback interface. Change your ports declaration from
ports:
- '8080:8080'
- '8443:8443'
- '50000:50000'
to
ports:
- '0.0.0.0:8080:8080'
- '0.0.0.0:8443:8443'
- '0.0.0.0:50000:50000'
To allow accessing those ports on all interfaces (i.e. including from outside the host).
I installed Apache Guacamole using Docker on a CentOS 8.1 with Docker 19.03.
I followed the steps described here:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
https://www.linode.com/docs/applications/remote-desktop/remote-desktop-using-apache-guacamole-on-docker/
I started the containers like this:
# mysql container
docker run --name guacamole-mysql -e MYSQL_RANDOM_ROOT_PASSWORD=yes -e MYSQL_ONETIME_PASSWORD=yes -d mysql/mysql-server
# guacd container
docker run --name guacamole-guacd -e GUACD_LOG_LEVEL=debug -d guacamole/guacd
# guacamole container
docker run --name guacamole-guacamole --link guacamole-guacd:guacd --link guacamole-mysql:mysql -e MYSQL_DATABASE=guacamole -e MYSQL_USER=guacamole -e MYSQL_PASSWORD=password -d -p 8080:8080 guacamole/guacamole
All went fine and I was able to access the Guacamole web interface on port 8080. I configured one VNC connection to another machine on port 5900. Unfortunately when I try to use that connection I get the following error in the web interface:
"An internal error has occurred within the Guacamole server, and the connection has been terminated..."
I had a look at the logs too, and in the guacamole log I found this:
docker logs --tail all -f guacamole-guacamole
...
15:54:06.262 [http-nio-8080-exec-2] ERROR o.a.g.w.GuacamoleWebSocketTunnelEndpoint - Creation of WebSocket tunnel to guacd failed: End of stream while waiting for "args".
15:54:06.685 [http-nio-8080-exec-8] ERROR o.a.g.s.GuacamoleHTTPTunnelServlet - HTTP tunnel request failed: End of stream while waiting for "args".
I'm sure that the target machine (which is running the VNC server) is fine. I'm able to connect to it from both a VNC client and another older Guacamole which I installed previously (not using Docker).
My containers look ok too:
docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad62aaca5627 guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp guacamole-guacamole
a46bd76234ea guacamole/guacd "/bin/sh -c '/usr/lo…" About an hour ago Up About an hour 4822/tcp guacamole-guacd
ed3a590b19d3 mysql/mysql-server "/entrypoint.sh mysq…" 2 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp guacamole-mysql
I connected to the guacamole-guacamole container and pinged the other two containers: guacamole-mysql and guacamole-guacd. Both look fine and reachable.
docker exec -it guacamole-guacamole bash
root@ad62aaca5627:/opt/guacamole# ping guacd
PING guacd (172.17.0.2) 56(84) bytes of data.
64 bytes from guacd (172.17.0.2): icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from guacd (172.17.0.2): icmp_seq=2 ttl=64 time=0.091 ms
root@ad62aaca5627:/opt/guacamole# ping mysql
PING mysql (172.17.0.3) 56(84) bytes of data.
64 bytes from mysql (172.17.0.3): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from mysql (172.17.0.3): icmp_seq=2 ttl=64 time=0.102 ms
It looks like there is a communication issue between guacamole itself and guacd, and this is where I'm completely stuck.
EDIT
I tried on CentOS 7 and I got the same issues.
I also tried this solution https://github.com/boschkundendienst/guacamole-docker-compose as suggested by @BatchenRegev, but I got the same issue again.
I've been experiencing the same issues under CentOS.
My only difference is that I'm hosting the database on a separate machine, as this is all cloud-hosted and I want to be able to destroy/rebuild the guacamole server at will.
I ended up creating a docker-compose.yml file, as that seemed to work better.
Other gotchas I came across:
make sure GUACD_HOSTNAME is the actual machine hostname and not 127.0.0.1
setting SELinux to allow httpd:
sudo setsebool -P httpd_can_network_connect
My docker-compose.yml is shown below; replace all {variables} with your own, and update the file if you are using a SQL image as well.
version: "2"
services:
guacd:
image: "guacamole/guacd"
container_name: guacd
hostname: guacd
restart: always
volumes:
- "/data/shared/guacamole/guacd/data:/data"
- "/data/shared/guacamole/guacd/conf:/conf:ro"
expose:
- "4822"
ports:
- "4822:4822"
network_mode: bridge
guacamole:
image: "guacamole/guacamole"
container_name: guacamole
hostname: guacamole
restart: always
volumes:
- "/data/shared/guacamole/guacamole/guac-home:/data"
- "/data/shared/guacamole/guacamole/conf:/conf:ro"
expose:
- "8080"
ports:
- "8088:8080"
network_mode: bridge
environment:
- "GUACD_HOSTNAME={my_server_hostname}"
- "GUACD_PORT=4822"
- "MYSQL_PORT=3306"
- "MYSQL_DATABASE=guacamole"
- "GUACAMOLE_HOME=/data"
- "MYSQL_USER=${my_db_user}"
- "MYSQL_PASSWORD=${my_db_password}"
- "MYSQL_HOSTNAME=${my_db_hostname}"
I had the same problem on FreeBSD 12.2 - SOLUTION:
Change the "localhost" hostname in
/usr/local/etc/guacamole-client/guacamole.properties
to, for example:
guacd-hostname: 192.168.10.10
Next, in /usr/local/etc/guacamole-server/guacd.conf:
[server]
bind_host = 192.168.10.10
Check /etc/guacamole/guacamole.properties; I have the line:
guacd-hostname: 192.168.10.10
Restart:
/usr/local/etc/rc.d/guacd restart
/usr/local/etc/rc.d/tomcat9 restart
With the hostname "localhost" I got:
11:01:48.010 [http-nio-8085-exec-3] DEBUG o.a.g.s.GuacamoleHTTPTunnelServlet - Internal error in HTTP tunnel.
I hope it will be useful to someone else - it works for me.
I have 2 nodes in docker swarm mode, and I deployed a mysql service on one node and a mysql client on the other with 'docker stack deploy -c composeYaml stackName'. But it turns out the mysql client cannot reach mysql by its service name 'mysql'; it can only do so with its container name 'aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53'.
Inside the client container, I can ping and nslookup the 'mysql' service, but I cannot reach it on port 3306:
root@ced2d59027e8:/opt/docker# ping mysql
PING mysql (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 10.0.2.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 10.0.2.2: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 10.0.2.2: icmp_seq=4 ttl=64 time=0.042 ms
^C
--- mysql ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.030/0.042/0.052/0.007 ms
root@ced2d59027e8:/opt/docker# nslookup mysql
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: mysql
Address: 10.0.2.2
root@ced2d59027e8:/opt/docker# nmap -p 3306 mysql
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:34 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.49 seconds
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.2
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:41 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.48 seconds
But if I try with the container name of 'mysql' obtained from 'docker ps', it works, and its virtual IP also works.
On the node where the mysql container is running:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebe25854c5b0 mysql:latest "docker-entrypoint..." 4 minutes ago Up 4 minutes 3306/tcp aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Back inside the client container:
root@ced2d59027e8:/opt/docker# nmap -p 3306 aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:43 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53 (10.0.2.3)
Host is up (0.000077s latency).
rDNS record for 10.0.2.3: aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1 IP address (1 host
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.3
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:37 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend (10.0.2.3)
Host is up (0.000098s latency).
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1
My compose file looks as follows:
version: '3.2'
services:
mysql:
image: mysql
ports:
- target: 3306
published: 3306
protocol: tcp
mode: ingress
environment:
MYSQL_ROOT_PASSWORD: 1234
MYSQL_DATABASE: aqitradetest
MYSQL_USER: aqidbmaster
MYSQL_PASSWORD: aqidbmaster
deploy:
restart_policy:
condition: on-failure
placement:
constraints: [node.hostname == prod-03]
networks:
- backend
mysql_client:
image: mysql_client
ports:
- "9000:9000"
deploy:
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 30s
placement:
constraints: [node.hostname == production-01]
networks:
- backend
depends_on:
- mysql
networks:
frontend:
backend:
I think you are confusing some concepts.
In swarm mode, when you publish a port it is published on all nodes and is accessible from outside using the IP of any of your nodes and that port (or using 0.0.0.0:port from an application on any of your nodes). Playing around with these ports won't help you access the other service by service name.
When two services are on the same network (if you define no networks, all services in the same compose file join the same default network), they should be able to reach all internal ports of the other service via servicename:port.
There is probably a problem with your compose file. I would try to make a minimal compose file where you don't publish any ports on mysql and don't define any networks, because it's easier to find an issue in a minimal compose file.
Most probably
ports:
- target: 3306
published: 3306
protocol: tcp
mode: ingress
causes the problem.
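A minimal compose file along those lines might look like the sketch below (no published ports on mysql, no explicit networks, so both services land on the stack's default overlay network; image names and credentials are copied from the question):

```yaml
version: '3.2'
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 1234
      MYSQL_DATABASE: aqitradetest
      MYSQL_USER: aqidbmaster
      MYSQL_PASSWORD: aqidbmaster
  mysql_client:
    image: mysql_client
```

If mysql is reachable at mysql:3306 from the client with this file, reintroduce the placement constraints, networks, and port publishing one at a time to find the setting that breaks service-name resolution.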