Docker Swarm Networking - no communication to some exposed ports - docker

I have the following docker-compose file:
version: "3.9"
services:
pihole:
container_name: pihole
image: pihole/pihole:latest
ports:
- target: 53
published: 53
protocol: tcp
mode: host
- target: 53
published: 53
protocol: udp
mode: host
# - target: 80
# published: 80
# protocol: tcp
# mode: host
environment:
TZ: 'Europe/Warsaw'
DNS1: 1.1.1.1
DNS2: 8.8.8.8
VIRTUAL_HOST: 'pihole.local'
volumes:
- ./etc/pihole/:/etc/pihole
- ./etc-dnsmasq.d:/etc/dnsmasq.d
dns:
- 1.1.1.1
- 8.8.8.8
cap_add:
- NET_ADMIN
restart: unless-stopped
networks:
- public
networks:
public:
Working solution with docker-compose
Running this with:
docker-compose --file docker-compose-pihole.yml up -d
exposes port 53 tcp/udp on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:42 CET
Nmap scan report for 172.30.0.100
Host is up (0.0038s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
53/tcp open domain
and DNS resolution is working:
$ nslookup google.pl 172.30.0.100
Server: 172.30.0.100
Address: 172.30.0.100#53
Non-authoritative answer:
Name: google.pl
Address: 172.217.16.3
Name: google.pl
Address: 2a00:1450:401b:804::2003
and I'm able to telnet to port 53
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
NOT Working solution with docker stack deploy
Running the same docker-compose file with
docker stack deploy -c docker-compose-pihole.yml pihole
also exposes port 53 tcp/udp on the host IP address:
$ nmap 172.30.0.100 -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2022-01-02 10:46 CET
Nmap scan report for 172.30.0.100
Host is up (0.0022s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
53/tcp open domain
however, name resolution is not working:
$ nslookup google.pl 172.30.0.100
;; connection timed out; no servers could be reached
and a telnet connection to port 53 is closed by the remote host:
$ telnet 172.30.0.100 53
Trying 172.30.0.100...
Connected to 172.30.0.100.
Escape character is '^]'.
Connection closed by foreign host.
Another strange thing: when port 80 is exposed, I can access the web UI on port 80 by connecting to the host IP in both cases.
I have no idea what's going on or how to fix communication on port 53.

Fixed.
One environment variable was missing for pihole:
- DNSMASQ_LISTENING: all
Two days to figure this out!
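
For reference, the relevant environment block after the fix would look like this (my reconstruction; the rest of the compose file is unchanged, and the comment is my understanding of why it helps, not part of the original answer):

environment:
  TZ: 'Europe/Warsaw'
  DNS1: 1.1.1.1
  DNS2: 8.8.8.8
  VIRTUAL_HOST: 'pihole.local'
  # Make Pi-hole's dnsmasq listen on all interfaces and answer from any origin;
  # with swarm networking in front, DNS queries no longer arrive the way the
  # default (single-interface, local-only) listening mode expects.
  DNSMASQ_LISTENING: 'all'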

Related

Concourse : Web Connection Refused

I brought Concourse up with docker-compose on an EC2 instance (Ubuntu 22.04) and set CONCOURSE_EXTERNAL_URL in docker-compose.yml to the Elastic IP address of the instance.
Even though the security group inbound rules and the ACL allow all TCP / HTTP / HTTPS traffic, http://{myElasticIP}:8080/ gives connection refused.
(The instance is running, and I can ping {myElasticIP} without fail.)
This was my first time setting up Concourse, so I guess something is wrong in my procedure.
Any advice would be highly appreciated.
-- command and result
$ docker-compose up -d
Starting ubuntu_concourse-db_1
Recreating ubuntu_concourse_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
56f8859a67ba concourse/concourse "dumb-init /usr/loca…" 24 minutes ago Restarting (1) 28 seconds ago ubuntu_concourse-web_1
307a647554eb postgres:9.5 "docker-entrypoint.s…" 24 minutes ago Up 24 minutes 5432/tcp ubuntu_concourse-db_1
Error (Fiddler):
ConnectionRefused (0x274d).
-- kernel
$ uname -r
5.15.0-1015-aws
-- docker-compose.yml
version: '3'
services:
  concourse-db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      PGDATA: /database
  concourse-web:
    image: concourse/concourse
    links: [concourse-db]
    command: web
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]
    restart: unless-stopped # required so that it retries until concourse-db comes up
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: "${CONCOURSE_BASIC_AUTH_USERNAME}"
      CONCOURSE_BASIC_AUTH_PASSWORD: "${CONCOURSE_BASIC_AUTH_PASSWORD}"
      CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      CONCOURSE_POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      CONCOURSE_POSTGRES_DATABASE: concourse
-- .env
CONCOURSE_BASIC_AUTH_USERNAME=concourse
CONCOURSE_BASIC_AUTH_PASSWORD=changeme
CONCOURSE_EXTERNAL_URL=http://{myElasticIP}:8080
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=changeme
-- port check
$ sudo lsof -i -P -n | grep LISTEN
systemd-r 390 systemd-resolve 14u IPv4 16470 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 644 root 3u IPv4 17932 0t0 TCP *:22 (LISTEN)
sshd 644 root 4u IPv6 17943 0t0 TCP *:22 (LISTEN)
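
Worth noting from the output above (my own reading, not something stated in the original post): docker ps shows ubuntu_concourse-web_1 stuck in "Restarting (1)", and the lsof check shows nothing listening on port 8080 at all, which by itself explains the refused connection. A reasonable first diagnostic step would be to read the web container's logs to see why it keeps exiting:

$ docker logs ubuntu_concourse-web_1
$ docker-compose logs concourse-web   # same logs, via compose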

Access host from within a docker container

I have a dockerized app and I use the following docker-compose.yml to run it:
version: '3.1'
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    env_file:
      - ./docker/db/.env
    volumes:
      - ./docker/db/data:/var/lib/mysql:rw
      - ./docker/db/config:/etc/mysql/conf.d
    command: mysqld --sql_mode="NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  php:
    build: ./docker/php/7.4/
    volumes:
      - ./docker/php/app.ini:/usr/local/etc/php/conf.d/docker-php-ext-app.ini:ro
      - ./docker/logs/app:/var/www/app/var/log:cached
      - .:/var/www/app:cached
    working_dir: /var/www/app
    links:
      - db
    env_file:
      - ./docker/php/.env
  webserver:
    image: nginx:1
    depends_on:
      - php
    volumes:
      - ./docker/webserver/app.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/logs/webserver/:/var/log/nginx:cached
      - .:/var/www/app:ro
    ports:
      - "80:80"
I have a non-dockerized server running on my machine, which I can access via localhost:3000. I would like my php service to be able to access it.
I found people suggesting adding the following to my php service configuration:
extra_hosts:
  - "host.docker.internal:host-gateway"
But when I add this, run docker-compose up -d and try docker exec -ti php_1 curl http://localhost:3000, I get curl: (7) Failed to connect to localhost port 3000 after 0 ms: Connection refused. I get the same error when I try to curl http://host.docker.internal:3000.
I desperately tried to add a port mapping to the php container:
ports:
  - 3000:3000
But then when I start the services I have the following error:
ERROR: for php_1 Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
Which is expected, since my server is already listening on port 3000.
I also tried to add
network_mode: host
But it fails because I already have links. I get the following error:
Cannot create container for service php: conflicting options: host type networking can't be used with links.
I am running docker v20.10.6 on Ubuntu 21.10.
Any help appreciated, thanks in advance!
Make sure you are using a version of Docker that supports host.docker.internal.
If you are on Linux, 20.10+ supports it.
For other systems, consult the documentation and, if necessary, the relevant GitHub issues for your OS.
After that...
Make sure extra_hosts is direct child of php service:
php:
  extra_hosts:
    host.docker.internal: host-gateway
  build: ./docker/php/7.4/
Try using ping host.docker.internal first to check whether your host machine responds correctly.
Make sure that your service on port 3000 is working properly and there is no firewall issue.
Remember that localhost always means the local IP from the current container's point of view: localhost inside a container maps to the container's own IP, not your host machine's IP. That is the reason for the extra_hosts section.
Also, host.docker.internal is not your host's loopback interface.
If the service you are trying to reach listens only on the localhost interface, there is no way to reach it without doing some magic with iptables / the firewall.
You can check which service is listening on which interface / IP address by running the following command on your host machine: netstat -tulpn
This should return something like the following output:
$ netstat -tulpn
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:39195 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
From a Docker container I can reach services listening on 0.0.0.0 (all interfaces), but I cannot access port 631 as it listens only on 127.0.0.1:
$ docker run --rm -it --add-host="host.docker.internal:host-gateway" busybox
/ # ping host.docker.internal
PING host.docker.internal (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.124 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.060 ms
^C
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.092/0.124 ms
/ # telnet host.docker.internal 631
telnet: can't connect to remote host (172.17.0.1): Connection refused
/ # telnet host.docker.internal 22
Connected to host.docker.internal
SSH-2.0-OpenSSH_8.6
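
Putting this advice together with the original compose file, the php service might look roughly like this (a sketch, assuming the host service on port 3000 listens on 0.0.0.0 rather than only on 127.0.0.1):

php:
  build: ./docker/php/7.4/
  extra_hosts:
    # Docker replaces host-gateway with the gateway IP of the default bridge
    # (typically 172.17.0.1), i.e. an address of the host itself
    - "host.docker.internal:host-gateway"
  # volumes, working_dir, links and env_file stay as in the original file

From inside the container, the host service is then addressed as host.docker.internal rather than localhost:

$ docker exec -ti php_1 curl http://host.docker.internal:3000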

JetBrains/Teamtools in docker container "Could not listen on address 0.0.0.0 and port 443"

Problem
I'm trying to set up JetBrains Hub, YouTrack, Upsource and TeamCity in Docker containers and configure each to be available on its own IP (macvlan) at the default ports: 80 redirected to 443, and 443 for HTTPS (so the port numbers do not show up in the browser).
However if I do that I get:
Could not listen on address 0.0.0.0 and port 443
Leaving the team tools on their default ports 8080 and 8443 works, and giving them ports above 2000 seems to work as well.
I checked with fuser 443/tcp and netstat -tulpn, but there is nothing running on port 80 or 443 (I had to install the packages for those tools in the container).
I tried setting the listening address to the NIC's IP or 172.0.0.1, but this is refused as well:
root@teamtools [ /opt/teamtools ]# docker run --rm -it \
-v /opt/hub/data:/opt/hub/data \
-v /opt/hub/conf:/opt/hub/conf \
-v /opt/hub/logs:/opt/hub/logs \
-v /opt/hub/backups:/opt/hub/backups \
jetbrains/hub:2018.2.9840 \
configure --listen-address=192.168.1.211
* Configuring JetBrains Hub 2018.2
* Setting property 'listen-address' to '192.168.1.211' from arguments
[APP-WRAPPER] Failed to configure Hub: java.util.concurrent.ExecutionException: com.jetbrains.bundle.exceptions.BadConfigurationException: Could not listen on address {192.168.1.211} . Please specify another listen address in property listen-address
Question:
Why can I not set ports 80 and 443?
Why does it work for ports over 2000?
How can I make this work without a reverse proxy?
(A reverse proxy comes with a whole bunch of other issues that I'm trying to avoid with this setup.)
Setup
ESXi 6.7 Host
- vSwitch0 (Allow promiscuous mode: Yes)
- port group: VM Network (Allow promiscuous mode: No)
- other VMs
- port group: Promiscuous Ports (Allow promiscuous mode: Yes)
- Teamtools VM (Photon OS 2.0, IP: 192.168.1.210)
- firewall based on: https://unrouted.io/2017/08/15/docker-firewall/
- docker/docker-compose
- hub (IP: 192.168.1.211:80/443)
- youtrack (IP: 192.168.1.212:80/443)
- upsource (IP: 192.168.1.213:80/443)
- teamcity-server (IP: 192.168.1.214:80/443)
- teamcity_db (MariaDB 10.3) (IP: 192.168.1.215:3306)
docker-compose.yml
version: '2'
networks:
  macnet:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
services:
  hub:
    # set a custom container name so no more than one container can be created from this config
    container_name: hub
    image: "jetbrains/hub:2018.2.9840"
    restart: unless-stopped
    volumes:
      - /opt/hub/data:/opt/hub/data
      - /opt/hub/conf:/opt/hub/conf
      - /opt/hub/logs:/opt/hub/logs
      - /opt/hub/backups:/opt/hub/backups
      - /opt/teamtools:/opt/teamtools
    expose:
      - "80"
      - "443"
      - "8080"
      - "8443"
    networks:
      macnet:
        ipv4_address: 192.168.1.211
    domainname: office.mydomain.com
    hostname: hub
    environment:
      - "JAVA_OPTS=-J-Djavax.net.ssl.trustStore=/opt/teamtools/certs/keyStore.p12 -J-Djavax.net.ssl.trustStorePassword=xxxxxxxxxxxxxx"
...
Upsource is running as user jetbrains, which is non-root.
https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html
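
The linked page explains the underlying rule: ports below 1024 are privileged and can only be bound by root or by a process with the CAP_NET_BIND_SERVICE capability, which is consistent with the default 8080/8443 and ports above 2000 working. One possible workaround (my sketch, not from the original answer; it assumes a kernel of 4.11 or newer and compose file format 2.1+, while the posted file uses version '2') is to lower the unprivileged-port threshold just for the container's network namespace:

services:
  hub:
    image: "jetbrains/hub:2018.2.9840"
    sysctls:
      # let non-root processes in this container bind ports >= 80
      - net.ipv4.ip_unprivileged_port_start=80
    # ... rest of the service definition as above

Another common route is a derived image that grants the capability to the Java binary with setcap 'cap_net_bind_service=+ep'.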

Docker swarm redis connectivity issue

I have the following docker-compose.yml Redis config:
version: '3.5'
services:
  db:
    image: redis:latest
    command: redis-server --bind 0.0.0.0 --appendonly yes --protected-mode no
    ports:
      - target: 6379
        published: 6379
        protocol: tcp
        mode: ingress
There are two hosts: leader-0 (manager) and redis-0 (worker).
> root@leader-0:~# docker node ls
ID HOSTNAME STATUS
46tmallxr4l8xr7i90vlwntjq * leader-0 Ready
mofbedj4sqlxgnyatbxhlokc7 redis-0 Ready
The Redis host redis-0 exposes port 6379 on localhost as expected:
> root@redis-0:~# redis-cli -h 127.0.0.1 ping
PONG
but 6379 is not available on the manager (although it should be):
> root@leader-0:~# redis-cli -h 127.0.0.1 ping
Could not connect to Redis at 127.0.0.1:6379: Connection timed out
The interesting part is:
The connection timed out (not refused).
redis-cli -h 127.0.0.1 ping on the other worker hosts works as expected (returns PONG).
The Docker overlay mesh network should expose port 6379 on the local interface of each host, but it looks like something went wrong, and I'm stuck figuring out what exactly.
Other services on the manager host work properly (I can curl http://localhost:${SERVICE_PORT}/).
The manager host has the same firewall rules as the worker hosts (plus additional ports opened).
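
One general thing worth checking (a standard swarm requirement and an assumption about this environment, not something diagnosed in the post): the routing mesh needs TCP and UDP 7946 (node gossip) and UDP 4789 (overlay VXLAN data) open between all nodes, plus TCP 2377 towards managers, and a timeout rather than a refusal on a published port is the classic symptom of that traffic being dropped. A quick probe from leader-0 towards redis-0 could look like this (the IP is a placeholder; UDP results from nc are only indicative):

$ nc -zv  <redis-0-ip> 7946   # gossip, TCP
$ nc -zuv <redis-0-ip> 7946   # gossip, UDP
$ nc -zuv <redis-0-ip> 4789   # overlay data path, UDP (VXLAN)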

Cannot reach mysql container running on a different node with its service name, but can with its container name

I have 2 nodes in docker swarm mode and deployed a mysql service on one node and a mysql client on the other node with 'docker stack deploy -c composeYaml stackName'. But it turns out the mysql client cannot reach mysql by its service name 'mysql', though it can by its container name 'aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53'.
Inside the client container, I can ping and nslookup the 'mysql' service, but cannot reach it on port 3306:
root@ced2d59027e8:/opt/docker# ping mysql
PING mysql (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 10.0.2.2: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 10.0.2.2: icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from 10.0.2.2: icmp_seq=4 ttl=64 time=0.042 ms
^C
--- mysql ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.030/0.042/0.052/0.007 ms
root@ced2d59027e8:/opt/docker# nslookup mysql
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: mysql
Address: 10.0.2.2
root@ced2d59027e8:/opt/docker# nmap -p 3306 mysql
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:34 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.49 seconds
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.2
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:41 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.48 seconds
But if I try with the container name of 'mysql' obtained from 'docker ps', it works, and its IP works as well.
On the node where the mysql container is running:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebe25854c5b0 mysql:latest "docker-entrypoint..." 4 minutes ago Up 4 minutes 3306/tcp aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Back inside the client container:
root@ced2d59027e8:/opt/docker# nmap -p 3306 aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:43 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53 (10.0.2.3)
Host is up (0.000077s latency).
rDNS record for 10.0.2.3: aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1 IP address (1 host
root@ced2d59027e8:/opt/docker# nmap -p 3306 10.0.2.3
Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-19 09:37 UTC
Nmap scan report for aqi_mysql.1.yv9t12wm3z4s9klw1gl3bnz53.aqi_backend (10.0.2.3)
Host is up (0.000098s latency).
PORT STATE SERVICE
3306/tcp open mysql
MAC Address: 02:42:0A:00:02:03 (Unknown)
Nmap done: 1
My compose file looks as follows:
version: '3.2'
services:
  mysql:
    image: mysql
    ports:
      - target: 3306
        published: 3306
        protocol: tcp
        mode: ingress
    environment:
      MYSQL_ROOT_PASSWORD: 1234
      MYSQL_DATABASE: aqitradetest
      MYSQL_USER: aqidbmaster
      MYSQL_PASSWORD: aqidbmaster
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == prod-03]
    networks:
      - backend
  mysql_client:
    image: mysql_client
    ports:
      - "9000:9000"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 30s
      placement:
        constraints: [node.hostname == production-01]
    networks:
      - backend
    depends_on:
      - mysql
networks:
  frontend:
  backend:
I think you are confusing some concepts.
In swarm, when you publish a port it is published on all nodes and accessible from outside using the IP of any of your nodes and that port (or using 0.0.0.0:port from an application on any of your nodes). Playing around with these published ports won't help you access the other service by service name.
When two services are on the same network (if you define no networks, all services in the same compose file join the same default network), they should be able to reach all internal ports of the other service via servicename:port.
Probably there is a problem with your compose file. I would try a minimal compose file where you don't publish any ports on mysql and don't define any networks, because it's easier to find an issue in a minimal file.
Most probably
ports:
  - target: 3306
    published: 3306
    protocol: tcp
    mode: ingress
causes the problem.
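
For illustration, a minimal version of the posted file along those lines might look like this (a sketch: no published ports on mysql and no explicit networks, so both services join the stack's default overlay network and the client reaches the database simply as mysql:3306):

version: '3.2'
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: 1234
      MYSQL_DATABASE: aqitradetest
      MYSQL_USER: aqidbmaster
      MYSQL_PASSWORD: aqidbmaster
    deploy:
      placement:
        constraints: [node.hostname == prod-03]
  mysql_client:
    image: mysql_client
    deploy:
      placement:
        constraints: [node.hostname == production-01]

If mysql:3306 is reachable with this file, reintroduce the published ports and the named networks one at a time to find the option that breaks name resolution.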
