docker-compose connect server/client containers from different devices on LAN - docker

I am using docker-compose to deploy a server/client application on different devices on my local network. My setup is the following:
In my docker-compose.yml file, I have a service called 'server', which depends on two additional services ('database' and 'web'). These three services are running on the same device and are able to connect successfully with each other. The 'server' service deploys a Flask-based API which should ideally be waiting for requests from other devices in the same LAN.
In the very same docker-compose.yml file, I have a service called 'client', which runs an application that should be deployed on more than one device in the same LAN. The 'client' service, independently from the device where it is running, should be able to send requests to the 'server' service, which is on a different device in the same LAN.
Here is my docker-compose.yml file:
version: '3.5'
networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.220.0/24
services:
  client:
    build: ./client
    environment:
      TZ: "Europe/Madrid"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             nmap -p 8080 192.168.220.220 &&
             python -u client/main_controller.py"
    restart: always
    volumes:
      - .:/code
    networks:
      outside:
  server:
    build: ./server
    environment:
      TZ: "Europe/Madrid"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             python -u server/main_server.py"
    volumes:
      - .:/code
    ports:
      - "8080:8080" # host:container
    restart: always
    depends_on:
      - database
      - web
    networks:
      default:
      outside:
        ipv4_address: 192.168.220.220
  database:
    image: mysql:latest
    #command: ./database/run_db.sh #mysqld --user=root --verbose
    restart: always
    volumes:
      - ./database:/docker-entrypoint-initdb.d/:ro
    ports:
      - "3306:3306" # host:container
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      default:
  web:
    image: nginx:latest
    restart: always
    ports:
      - "8081:80"
    volumes:
      - ./interface:/www
      - ./interface/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      default:
I am using the python requests library to send requests from 'client' to 'server' using the following url:
http://192.168.220.220:8080
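For reference, the client-side call might look roughly like this (the function name, timeout, and error handling are my own illustration, not taken from the original code):

```python
import requests

SERVER_URL = "http://192.168.220.220:8080"  # ipv4_address assigned to the 'server' service

def call_server(url=SERVER_URL, timeout=5):
    """Return the HTTP response, or None if the server is unreachable."""
    try:
        return requests.get(url, timeout=timeout)
    except requests.exceptions.RequestException:
        return None
```

When the server cannot be reached, `call_server` returns None instead of raising, which makes the failure mode easy to log from the client.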
My issue is that when I run both containers, 'client' and 'server', on the same device [deviceA], they are able to communicate successfully.
But when I run the containers on different devices ('server' on a computer with Mac OS X [deviceA], and 'client' on a Raspberry Pi [deviceB], both connected to the same LAN over wi-fi), the 'client' is not able to reach the specified IP and port.
To test if the device is able to reach the IP:port combination I use the following command right after running the 'client' service:
nmap -p 8080 192.168.220.220
Which gives the following output on [deviceA]:
client_1 | Starting Nmap 7.01 ( https://nmap.org ) at 2019-03-03 12:22 Europe
client_1 | Nmap scan report for raspberry_escape_controller_server_1.raspberry_escape_controller_outside (192.168.220.220)
client_1 | Host is up (0.00012s latency).
client_1 | PORT STATE SERVICE
client_1 | 8080/tcp open http-proxy
client_1 | MAC Address: <mac_address> (Unknown)
client_1 |
client_1 | Nmap done: 1 IP address (1 host up) scanned in 0.71 seconds
and the following one on [deviceB]:
client_1 | Starting Nmap 7.40 ( https://nmap.org ) at 2019-03-03 13:24 CET
client_1 | Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
client_1 | Nmap done: 1 IP address (0 hosts up) scanned in 0.78 seconds
----------- [EDIT 1] -----------
As suggested by DTG here is the output of netstat command on [deviceB]:
root@a9923f852423:/code# netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.220.1 0.0.0.0 UG 0 0 0 eth0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
It looks like it is not able to see [deviceA], which should be 192.168.220.220

It looks to me that even though the service is up and running on your [deviceA], some kind of firewall is not allowing external connections to reach it.
You may need to check the firewall configuration on [deviceA].
ROUTING ISSUES
If it is a routing issue, you should inspect the routing table on [deviceB] with
netstat -nr
and check that a valid route to [deviceA] exists.
If no valid route exists, you should add one with
sudo route add -net hostA_IP/MASK gw HOSTB_DEFAULT_GATEWAY
INTER-DOCKER COMMUNICATION
After you create the network, you can launch containers on it using the docker run --network= option. The containers you launch into this network must reside on the same Docker host. Each container in the network can immediately communicate with other containers in the network.
Read more about understanding docker communications:
See docker documentation here
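As a practical consequence: since the compose file already publishes "8080:8080" on [deviceA], a client running on another machine would normally target the Docker host's own LAN address rather than the bridge-internal 192.168.220.220, which exists only inside [deviceA]. A sketch, where 192.168.1.10 stands in for [deviceA]'s real LAN IP (a made-up placeholder value):

```yaml
# client-side compose file on [deviceB] -- point at the Docker host's LAN IP,
# not at the bridge subnet
services:
  client:
    build: ./client
    environment:
      SERVER_URL: "http://192.168.1.10:8080"  # hypothetical LAN IP of [deviceA]
```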

Related

Dockerized Redis cluster nodes

I am trying to set up a dockerized Redis cluster spanning multiple host machines.
In my current setup I have two hosts with public IP addresses, and I start a similar configuration on both. This config consists of a compose.yml:
services:
  redis-cluster:
    container_name: node-redis
    build:
      context: ../../
      dockerfile: deployment/node/cluster-dockerfile
    restart: always
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7002:7002"
    networks:
      node_net:
        ipv4_address: 10.20.0.6
networks:
  node_net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.20.0.0/16
          gateway: 10.20.0.1
which is identical on both hosts.
The Dockerfile uses supervisord to start 3 redis instances (on ports 7000, 7001 and 7002) as such:
FROM ubuntu:20.04
RUN apt update && \
    DEBIAN_FRONTEND=noninteractive apt install -y redis-server supervisor
COPY ./deployment/production-node/cluster-files/node1 /app/cluster-files
COPY ./deployment/production-node/cluster-files/node1/supervisord.conf /etc/supervisor/supervisord.conf
CMD supervisord -c /etc/supervisor/supervisord.conf && \
    sleep infinity
Each redis instance is configured as such:
port <port number>
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
masterauth pass
requirepass pass
protected-mode no
bind 0.0.0.0
unixsocket /tmp/redis.sock
loglevel debug
logfile "serverlog.7000.txt"
cluster-config-file nodes7000.conf
cluster-announce-ip <public ip of host machine>
cluster-announce-port <port number>
After running docker compose up on both hosts, with the Redis instances starting correctly, I try to use redis-cli to create the cluster as such:
redis-cli -a pass --cluster create <host1-ip>:7000 <host1-ip>:7001 \
<host1-ip>:7002 <host2-ip>:7000 <host2-ip>:7001 <host2-ip>:7002 \
--cluster-replicas 1
This results in waiting infinitely for the cluster to join.
After some consideration I figured that this may be caused by not exposing the proper cluster bus ports in Docker. To solve this I changed the compose file to list additional ports:
- "7000:7000"
- "7001:7001"
- "7002:7002"
- "17000:17000"
- "17001:17001"
- "17002:17002"
And added this line to the redis.conf files:
cluster-port 17000 <and 17001, 17002 respective to the other port used by instance>
After those changes I am not even able to connect to a single instance, and get an instant connection refused when trying to create the cluster.
As of now I am not sure how to solve this problem, and I would be grateful for any hints on how to properly configure this kind of Redis cluster without starting the containers in host network mode.
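One detail worth checking (my own observation, not part of the question): behind Docker's NAT, each instance usually needs to announce its bus port explicitly as well, since peers otherwise assume the default of data port + 10000. A sketch of the relevant redis.conf lines for the first instance, following the question's 17000 convention:

```
port 7000
cluster-announce-ip <public ip of host machine>
cluster-announce-port 7000
cluster-announce-bus-port 17000
```

with "17000:17000" also published in the compose file, as in the edit above.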

Can't enable ssl by docker-letsencrypt-nginx-proxy-companion

I want to enable ssl by docker-letsencrypt-nginx-proxy-companion.
This is the docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  db:
    # ---
  wordpress:
    # ---
    environment:
      # ---
      VIRTUAL_HOST: blog.ironsand.net
      LETSENCRYPT_HOST: blog.ironsand.net
      LETSENCRYPT_EMAIL: mymail@example.com
    restart: always
  letsencrypt-nginx-proxy-companion:
    container_name: letsencrypt
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
volumes:
  certs:
  vhostd:
  html:
docker logs letsencrypt shows that a certificate exists already.
/etc/nginx/certs/blog.ironsand.net /app
Creating/renewal blog.ironsand.net certificates... (blog.ironsand.net)
2020-04-09 00:03:23,711:INFO:simp_le:1581: Certificates already exist and renewal is not necessary, exiting with status code 1.
/app
But ACME challenge returns nothing. (failure?)
$ docker exec letsencrypt bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
$
Port 443 is listening, but it is closed from outside.
// in remote server
$ sudo lsof -i:443
[sudo] password for ubuntu:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 10910 root 4u IPv6 633694 0t0 TCP *:https (LISTEN)
// from local pc
❯ nmap -p 443 blog.ironsand.net
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-09 09:44 JST
Nmap scan report for blog.ironsand.net (153.127.40.107)
Host is up (0.035s latency).
rDNS record for 153.127.40.107: ik1-418-41103.vs.sakura.ne.jp
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
I'm using packet filtering, but it's open for 80 and 443, and I'm not using firewall.
How can I investigate more where the problem exists?
I can't solve your problem directly, but I can offer some hints that may help you solve it.
Your command returns nothing.
bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
This command only writes "Hello world!" to that location and normally returns nothing. See https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Redirections
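A quick way to convince yourself, runnable anywhere with a shell (the file path here is just a scratch location, not the ACME challenge path):

```shell
echo "Hello world!" > /tmp/hello-world   # prints nothing: the output went into the file
cat /tmp/hello-world                     # reading the file back shows the text
```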
Look inside the certs folder.
Have a look into the certs folder and maybe clean it up. Check that the folder is mounted correctly in your nginx container. Open a bash shell in the container and inspect the ssl folder.
Check that the firewall isn't breaking anything.
No connection is possible from outside, but what about from the inside? Log in on your Docker host and check the connection from there (openssl and curl are your friends).
Don't use SSL inside the container.
I often see problems when somebody tries to use SSL with ACME images and wild mounting of shared volumes, but I have never heard of such problems when the same people use a normal reverse proxy. I explain a good setup below.
So just remove the whole letsencrypt code from your container and close the container's port 443.
(Additionally, you can switch to a non-root image and expose only ports which don't need root privileges.)
Then install nginx on your host and set up a reverse proxy (something like proxy_pass 127.0.0.1:8080). Now install certbot and start it. It helps you and is straightforward.
Certbot can also maintain your certificates.
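The host-level reverse proxy suggested above could be sketched like this (the domain and upstream port are placeholders, not values from the question):

```
server {
    listen 80;
    server_name blog.example.com;          # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8080;  # the container's published HTTP port
    }
}
```

Certbot's nginx plugin can then rewrite this server block to add the 443 listener and certificate paths.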

Docker compose ftpd-server access from host

I'm trying to access a ftpd-server from the host
using ftp localhost or ftp <my_ip>,
but I'm getting ftp: connect: Connection refused.
version: '3'
services:
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - 21:21
      - 20:20
      - 30000-30009:30000-30009
    volumes:
      - './ftp/data:/home/username/'
      - './ftp/pass:/etc/pure-ftpd/passwd'
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "user"
      FTP_USER_PASS: "pass"
      FTP_USER_HOME: "/home/username"
    restart: always
Since I'm using PUBLICHOST: "0.0.0.0" and port forward 21:21 I was expecting to be able to connect.
Docker Log
Removing ftpd-server ... done
Removing network mytest_default
No stopped containers
Creating network "mytest_default" with the default driver
Creating ftpd-server ...
Creating ftpd-server ... done
Attaching to ftpd-server
ftpd-server | Creating user...
ftpd-server | Password:
ftpd-server | Enter it again:
ftpd-server | Setting default port range to: 30000:30009
ftpd-server | Setting default max clients to: 5
ftpd-server | Setting default max connections per ip to: 5
ftpd-server | Starting Pure-FTPd:
ftpd-server | pure-ftpd -l puredb:/etc/pure-ftpd/pureftpd.pdb -E -j -R -P 0.0.0.0 -s -A -j -Z -H -4 -E -R -G -X -x -p 30000:30009 -c 5 -C 5
How can I achieve to connect from host machine to my ftp server on the container?
You can add network_mode: host to your service definition to make it work.
services:
  ftpd-server:
    # ...
    network_mode: host
    # ...
Then test with:
$ ftp -p localhost 21
Connected to localhost.
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 1 of 5 allowed.
220-Local time is now 16:04. Server port: 21.
220-This is a private system - No anonymous login
220 You will be disconnected after 15 minutes of inactivity.
A working example on my side is as follows:
version: "1.0"
services:
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30009"
    volumes:
      - './ftp/data:/home/username/'
      - './ftp/pass:/etc/pure-ftpd/passwd'
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "username"
      FTP_USER_PASS: "changeme!"
      FTP_USER_HOME: "/home/username"
    restart: always
Adding the line "network_mode: host" causes the following error with a recent Docker installation (in my case Docker version 20.10.13, build a224086, on Windows 10 with WSL2 support):
"host" network_mode is incompatible with port_bindings
This is a safeguard introduced in newer versions of Docker to avoid port misconfiguration; see the following link for details: https://forums.docker.com/t/docker-errors-invalidargument-host-network-mode-is-incompatible-with-port-bindings/103492.
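So if you do want network_mode: host with a recent Docker, the ports: section has to be dropped entirely, since the container shares the host's network stack. A minimal sketch:

```yaml
services:
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    network_mode: host   # ports are reachable directly on the host; no "ports:" list
    environment:
      PUBLICHOST: "0.0.0.0"
```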

Docker Compose: Expose not working

docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b1503d2e7c app_nginx "nginx -g 'daemon ..." 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp app_nginx_1
c9dd2231e554 app_web "/home/start.sh" 2 hours ago Up 2 hours 8000/tcp app_web_1
baad0fb1fabf app_gremlin "/start.sh" 2 hours ago Up 2 hours 8182/tcp app_gremlin_1
b663a5f026bc postgres:9.5.1 "docker-entrypoint..." 25 hours ago Up 2 hours 5432/tcp app_db_1
They all work fine:
app_nginx connects well with app_web
app_web connects well with postgres
Not working:
app_web is not able to connect with app_gremlin
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh
Errors:
Basically I am not able to connect to the gremlin container from my app_web container.
All commands below were executed inside the web_app container.
curl:
root@49a8f08a7b82:/# curl 0.0.0.0:8182
curl: (7) Failed to connect to 0.0.0.0 port 8182: Connection refused
netstat
root@49a8f08a7b82:/# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:42681 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
udp 0 0 127.0.0.11:54232 0.0.0.0:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
nmap
root@49a8f08a7b82:/# nmap -p 8182 0.0.0.0
Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-22 09:28 UTC
Nmap scan report for 0.0.0.0
Host is up.
PORT STATE SERVICE
8182/tcp filtered vmware-fdm
Nmap done: 1 IP address (1 host up) scanned in 2.19 seconds
nslookup
root@88626de0c056:/# nslookup app_gremlin_1
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: app_gremlin_1
Address: 172.19.0.3
Experimenting:
For the gremlin container I did,
ports:
  - "8182:8182"
Then from the host I can connect to the gremlin container, BUT there is still no connection between the web and gremlin containers.
I am working on a minimal sample Dockerfile to recreate the issue; meanwhile, does anyone have an idea what the issue might be?
curl 0.0.0.0:8182
The 0.0.0.0 address is a wild card that tells an app to listen on all network interfaces, you do not connect to this interface as a client. For container to container communication, you need:
containers on the same user generated network (compose does this for you by default)
connect to the name of the service (or container name)
connect to the port inside the other container, not the published port.
In your case, the command should be:
curl http://gremlin:8182
Networking is namespaced in apps running inside containers, so each container gets its own loopback interface and an IP address on a bridge network. Moving an app into containers therefore means you need to listen on 0.0.0.0 and connect to the bridge IP using DNS.
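The listen-address point can be illustrated with a few lines of plain Python: a socket bound to 0.0.0.0 accepts connections on every interface, including the container's bridge IP, while 127.0.0.1 would accept loopback traffic only:

```python
import socket

# Bind to all interfaces so peers on the bridge network can connect.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))        # port 0: let the OS pick any free port
srv.listen(1)
addr, port = srv.getsockname()  # addr is '0.0.0.0', port is the chosen port
srv.close()
```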
You should also remove links and depends_on from your compose file; they don't apply in version 3. Links have long since been deprecated in favor of shared networks, and depends_on doesn't work in swarm mode, and probably doesn't do what you want anyway, since it never checked that the target app was running, only that the start of that container had been kicked off.
One last note, expose doesn't affect the ability to communicate between containers on common networks or publish ports on the host. Expose simply sets meta data on the image that is documentation between the person creating the image and the person running the image. Applications are not required to use that value, but it's a good habit to make your app default to that value for the benefit of downstream users. Because of its role, unless you have another app checking for the exposed port list, like a self updating reverse proxy, there's no need to expose the port in the compose file unless you're giving the compose file to another person and they need the documentation.
There is no link configured in the docker-compose.yaml between web and gremlin. Try to use the following:
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    links:
      - gremlin
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh

How to deploy an IPv6 container with Docker Swarm Mode or Docker Compose

In the end I'd like to have a pure IPv6 network deployed via compose or swarm mode. For now, I'd just like to have a single container deployed with IPv6 (only). I am not currently interested in routing (just container to container connectivity).
My setup:
OS: Centos 7
dockerd --ipv6 --fixed-cidr-v6=2001:db8:1::/64 --iptables=true --ip-masq=true --mtu=1600 --experimental=true
docker-engine-17.05.0.ce-1.el7.centos.x86_64.rpm
Host has IPv4 and IPv6 addresses. Forwarding is on for both (not that it matters for me).
I've tried what seems to be every combination (I'm only listing a couple)
Self-contained Docker stack with container and network:
version: '3'
networks:
  app_net:
    driver: overlay
    driver_opts:
      com.docker.network.enable_ipv6: "true"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
        - subnet: 2001:3984:3989::/64
services:
  app:
    image: alpine
    command: sleep 600
    networks:
      app_net:
        ipv4_address: 0.0.0.0
        ipv6_address: 2001:3984:3989::10
Result: Only IPv4 address in container, 0.0.0.0 is ignored.
Externally pre-created network
(as per https://stackoverflow.com/a/39818953/1735931)
docker network create --driver overlay --ipv6 \
    --subnet=2001:3984:3989::/64 --attachable ext_net
version: '3'
networks:
  ext_net:
    external:
      name: ext_net
services:
  app:
    image: alpine
    command: ifconfig eth0 0.0.0.0 ; sleep 600
    cap_add:
      - NET_ADMIN
    networks:
      ext_net:
        ipv4_address: 0.0.0.0
        ipv6_address: 2001:3984:3989::10
Result: Both IPv4 and IPv6 addresses in container, but cap_add is ignored (not supported in Swarm Mode), and thus the ifconfig disable ipv4 attempt above does not work.
I don't currently have docker-compose installed, and will probably try that next, but is there a way to run pure IPv6 containers in Docker Swarm Mode?
Note: I am able to run and configure a few IPv6-only containers manually without swarm/compose:
(Create network as above or even just use the default bridge)
$ docker run --cap-add=NET_ADMIN --rm -it alpine
$$ ifconfig eth0 0.0.0.0
$$ ping6 other-container-ipv6-address # WORKS!
or shorthand:
$ docker run --cap-add=NET_ADMIN --rm -it alpine sh -c "/sbin/ifconfig eth0 0.0.0.0 ; sh"
I was able to hack it with docker-compose via severe ugliness. If you're desperate, here it is. (This method can never work for Swarm Mode due to privilege escalation).
The Plan
Grant containers rights to manage IP's
Remove IPv4 IP address from within each container on startup.
Use a volume to improvise a hosts file in place of DNS (DNS is IPv4-only in docker).
Steps
Enable IPv6 in Docker daemon.
Create a docker-compose.yml file that creates an ipv6 network, a volume for shared files, and two containers
Run an entrypoint script in each container that performs the aforementioned steps.
Files
docker-compose.yml
# Note: enable_ipv6 does not work in version 3!
version: '2.1'
networks:
app_net:
enable_ipv6: true
driver: overlay
ipam:
driver: default
config:
-
subnet: 172.16.238.0/24
-
subnet: 2001:3984:3989::/64
services:
app1:
build: ./server
hostname: server1
command: blablabla # example of arg passing to ipv6.sh
cap_add:
- NET_ADMIN
volumes:
- ipv6stuff:/ipv6stuff
networks:
- app_net
app2:
build: ./server
hostname: server2
command: SOMETHING # example of arg passing to ipv6.sh
cap_add:
- NET_ADMIN
volumes:
- ipv6stuff:/ipv6stuff
networks:
- app_net
volumes:
ipv6stuff:
server/Dockerfile
FROM alpine:latest
ADD files /
RUN apk --update add bash #simpler scripts
# Has to be an array for parameters to work via command: x in compose file, if needed
ENTRYPOINT ["/ipv6.sh"]
server/files/ipv6.sh
#!/bin/bash
# Optionally conditional logic based on parameters here...
# (for example, conditionally leave ipv4 address alone in some containers)
#
# Remove ipv4
ifconfig eth0 0.0.0.0
IP6=$(ip addr show eth0 | grep inet6 | grep global | awk '{print $2}' | cut -d / -f 1)
echo "Host $HOSTNAME has ipv6 ip $IP6"
# Store our entry in the shared volume
echo "$IP6 $HOSTNAME" > /ipv6stuff/hosts.$HOSTNAME
# Remove existing ipv4 line from /etc/hosts just to be thorough
# Docker does not allow removal of this file and thus simple sed -i isn't going to work.
cp /etc/hosts /tmp/1 ; sed -i "s/^.*\s$HOSTNAME//" /tmp/1 ; cat /tmp/1 > /etc/hosts
# Wait for all containers to start
sleep 2
# Put everyone's entries in our hosts file.
cat /ipv6stuff/hosts.* >> /etc/hosts
echo "My hosts file:"
cat /etc/hosts
# test connectivity (hardcoded)
ping6 -c 3 server1
ping6 -c 3 server2
