Docker - Overlay Network Driver - One leader for two clusters

There are three servers:
192.168.31.141
192.168.31.142
192.168.31.143
Docker is installed on these servers. I want server 192.168.31.142 to have connections to both 192.168.31.141 and 192.168.31.143.
Server 192.168.31.141 should have a connection only to 192.168.31.142.
Likewise, server 192.168.31.143 should have a connection only to 192.168.31.142.
How can I make one leader for two clusters?
See the picture for an example.

At the docker host level, overlay networking is implemented by gossip over port 7946 (TCP and UDP) and VXLAN overlay traffic over port 4789/udp. Every node needs to be able to reach every other node on those ports.
However, each overlay network is a virtual network that works much like a NATed private network in terms of address assignment and routing. The default overlay network driver assigns addresses out of the 10.0.0.0/8 pool in /24 partitions.
So, to ensure that, in terms of overlay networking, Java can communicate with both nginx and the database, but the database cannot be reached from nginx, you could declare something similar to this:
networks:
  nginx:
  database:

services:
  nginx:
    image: nginx
    networks:
      - nginx
  java:
    image: java
    networks:
      - nginx
      - database
  database:
    image: mysql
    networks:
      - database
Because we have explicitly attached nginx and java to an nginx network, those containers will share addresses on, for example, a 10.0.1.0/24 network and will be able to route to, and discover, each other via DNS.
Likewise, java will have a second virtual network interface and will share addresses with the database on a 10.0.2.0/24 network.
Nginx and the database, however, have a single network interface each, attached to different overlay networks. Overlay networks are routable to the internet, but not to each other, so those two containers cannot communicate directly.
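To stretch those two networks across the three hosts in the question, one approach (a sketch, not the only option) is to initialise a swarm on 192.168.31.142 with docker swarm init, join the other two machines with the docker swarm join command it prints, and then deploy the same layout from the manager with docker stack deploy, declaring the networks with the overlay driver:

# docker-stack.yml - deployed from the manager (192.168.31.142) with
#   docker stack deploy -c docker-stack.yml demo
# Images and service names are the illustrative ones used above.
version: "3.8"

networks:
  nginx:
    driver: overlay
  database:
    driver: overlay

services:
  nginx:
    image: nginx
    networks:
      - nginx
  java:
    image: java
    networks:
      - nginx
      - database
  database:
    image: mysql
    networks:
      - database

Placement constraints could pin each service to a particular node if that matters, but the reachability rules stay the same: java can talk to both neighbours, while nginx and the database cannot talk to each other.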

Related

Docker container networking - internal ports open to everyone

I am new to docker and have trouble setting up the network between the containers to not allow unnecessary connections from outside.
I have Docker running on a VPS with three containers, on a remote IP 123.xxx.xxx.xxx:
container name    published ports    IP address
sqldb             3306:3306          172.xxx.xxx.4
applet1           80:3306            172.xxx.xxx.5
applet2           4444:4444          172.xxx.xxx.3
One is a database and two are Java apps. The trouble I am having right now is that when I create the containers, their ports become exposed to the global internet, so my database sqldb is exposed at 123.xxx.xxx.xxx:3306.
Right now my Java apps connect through JDBC like so: jdbc:mysql://172.xxx.xxx.4:3306/db.
I am trying to accomplish the following:
port 80 on the host, so that 123.xxx.xxx.xxx connects to the Java app applet1.
The goal is to give applet1 the ability to connect to sqldb and also to applet2, but I don't want unnecessary ports to be exposed to the whole internet. Preferably, the internal URIs would stay as they are, but connections from outside (apart from SSH on port 22 and TCP on port 80) would be forbidden for ports 4444 and 3306. Also, I don't yet know how to use docker-compose, so if possible, how can I solve this without it?
*I have heard you can connect to containers by container name, like jdbc:mysql://sqldb/db, but I have not had success with that yet.
If all your containers are running on the same docker bridge network, you don't need to expose any ports for them to communicate with each other.
Docker Compose is a particularly good tool for organising several containers like this, as it automatically configures a network for you:
# docker-compose.yaml
version: '3.9'
services:
  sqldb:
    image: sqldb
  applet1:
    image: applet1
    ports:
      - '80:3306' # you sure about this container port?
    depends_on:
      - sqldb
  applet2:
    image: applet2
    depends_on:
      - sqldb
Now only your applet1 container will have a host port mapping. Both applets will be able to connect to any other service within the network on their container ports.
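Inside that network, the applets can also reach the database by its service name via Docker's built-in DNS, so the JDBC URL no longer needs a container IP. A sketch of how that could be wired up (DB_URL is a hypothetical variable name; the app would read it to build its connection string):

services:
  applet1:
    image: applet1
    environment:
      # hypothetical variable; points at the sqldb service by name
      DB_URL: jdbc:mysql://sqldb:3306/db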

Binding a port to a specific network in Docker Compose

I run pihole on my RPi behind an nginx reverse proxy, along with several other proxied containers. I want to:
map the port 80 of the pihole container to an internal-only network (that nginx proxies to public port 80)
map the port 53 (DNS) to the default network (so that it's publicly available).
By default all ports are published on all networks the container is part of, which I'm trying to avoid. In essence I'd like to do this:
version: '3'
services:
  pihole:
    container_name: pihole
    hostname: pihole
    image: pihole/pihole:latest
    networks:
      - default
      - intraonly
    ports:
      - default:53:53/tcp
      - default:53:53/udp
      - intraonly:80/tcp
      - intraonly:443/tcp
  [...nginx & other services definitions follow...]
networks:
  intraonly:
    driver: bridge
    internal: true
The above obviously fails, because the documentation clearly says it expects an IP address only in the port definition:
Specify the host IP address to bind to AND both ports (the default is 0.0.0.0, meaning all interfaces): (IPADDR:HOSTPORT:CONTAINERPORT).
That seems crazy, however, as the IP address changes every time I rebuild the container. In other places the documentation suggests avoiding addressing other containers by IP address and choosing the symbolic service names (published by DNS) instead.
What am I missing? What is the right/robust way to expose a port on a specific interface without hardcoding an IP address? (I'm aware I could achieve internal-only ports by using the expose syntax, but the question of binding ports to specific custom networks still stands.)

Docker-compose: Docker containers can't connect using service names

I have 3 containers. One is a lighttpd server serving static content (front). I have 2 flask servers handling the backend (back and model)
This is my docker-compose.yml
version: "3"
services:
front:
image: ecd3:latest
ports:
- 4200:80
tty: true
links:
- "back"
depends_on:
- back
networks:
- mynet
back:
image: esd3:latest
ports:
- 5000:5000
links:
- "model"
depends_on:
- model
networks:
- mynet
model:
image: mok:latest
ports:
- 5001:5001
networks:
- mynet
networks:
mynet:
I'm trying to send an HTTP request to my flask server (back) from my frontend (front). I have bound the flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint).
Trying to curl the flask server inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the flask server from inside the frontend container works. This means that the connection must have been established.
Why can't I connect to my flask server from my frontend?
We discovered several things in the comments. Firstly, you had a proxy problem that prevented one container from using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
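For reference, a trimmed version of the file under that simplification could look roughly like this (same images as above; the legacy links entries are also dropped here, since service discovery works over the default network's DNS without them):

version: "3"
services:
  front:
    image: ecd3:latest
    ports:
      - 4200:80
    tty: true
    depends_on:
      - back
  back:
    image: esd3:latest
    ports:
      - 5000:5000
    depends_on:
      - model
  model:
    image: mok:latest
    ports:
      - 5001:5001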
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, and so front and back are not resolvable there. Even if they were (e.g. if manual entries were made in the hosts file), those IPs would not work, since there is no direct networking route from the host to the containers.
Instead, those containers are exposed by Docker's port forwarding, which proxies connections from a host port down to those containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may have some CORS issues with that though, since browsers will see these as different servers.
Moreover, if you go live, you may have some problems with mobile networks and corporate firewalls - you will probably want your frontend app to sit on port 443, but since it is a separate server, you will either need a different IP address for your API, so it can also go on 443, or you will need to use another port. A clean solution for this is to put a frontend proxy in front of both containers, and then just expose the proxy in Docker. This will send HTTP requests from the outside to the correct container, depending on a filtering criteria set by you. I recommend Traefik for this, but there are undoubtedly several other approaches.
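As an illustration of that proxy approach (not from the original question; the image names are reused from above, and the routing rules and Traefik v2 syntax are assumptions), only the proxy publishes a port, and requests are routed by path:

services:
  proxy:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"   # the only published port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.front.rule=PathPrefix(`/`)
      - traefik.http.routers.front.entrypoints=web
      - traefik.http.services.front.loadbalancer.server.port=80
  back:
    image: esd3:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.back.rule=PathPrefix(`/api`)
      - traefik.http.routers.back.entrypoints=web
      - traefik.http.services.back.loadbalancer.server.port=5000

Because the browser then talks to a single origin, the CORS concern mentioned above largely disappears as well.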

After `docker-compose.yml` uses network ipv4 identical to WiFi's IP, why some websites not accessible?

Context: I am using docker-compose.yml to set up a container for MongoDB, where the network is set up as follows:
...
services:
  mongo:
    networks:
      mongodb_net:
        ipv4_address: 192.168.178.23
networks:
  mongodb_net:
    ipam:
      config:
        - subnet: 192.168.178.0/24
...
which is exactly the same as the IP address of my WiFi connection.
Question:
After the setting above, why are some websites no longer accessible (e.g. PING doesn't return any packets) from my browser?
When I change the YAML file to a different IP address, the problem resolves, but I want to understand the reason. Is it because the Docker network occupies the same IP range as the WiFi, thereby interrupting normal internet access?
Docker defines its own network setup. You can see some details of this on Linux by running ifconfig and looking at iptables output. If you manually configure a Docker network to have the same CIDR block as your external network, you can wind up in a sequence like this:
I want to call 8.8.8.8.
It's not on any of my local networks, so I'll route to the default gateway 192.168.178.1.
That address is on the docker1 network 192.168.178.0/24.
...and the outbound packets never actually leave your host.
You should almost never need to manually configure IP addresses or networks in Docker. It has its own internal network setup and handles this for you. In a Compose context, Compose will also do some additional setup that you generally need, like creating a default network; Networking in Compose has more details.
To get access to a container from outside of Docker space, you need to publish ports: out of that container, and then it will be reachable on your host's IP address at the published port.
services:
  mongo:
    ports: ['27017:27017']
    # no networks: or manual IP configuration; just use the `default` network
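And if another container in the same file needs the database, it can reach it by service name instead of a fixed IP; the client image and environment variable below are only illustrative:

services:
  mongo:
    ports: ['27017:27017']
  app:
    image: my-app:latest
    environment:
      # hypothetical variable; Compose's DNS resolves the service name `mongo`
      MONGO_URL: mongodb://mongo:27017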

How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?

In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup is getting its own IP range, so that in theory I could call my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces)
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It'd be great to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose knows both which network it created and its IP, so this shouldn't be a problem; however, I couldn't find an easy way to do it. Is there a way, or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, both respectively exposed as "8080:8080" and "3306:3306". Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is exposed on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on the 172.18.0.0/16 range. In this case I can reach my application server on all of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I start the second application now, it starts a second network app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there had been a possibility to restrict each application to its network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change the port mappings to 8081 and 3307 respectively to run both applications at the same time.
In your service configuration, in docker-compose.yml:
ports:
  - "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above runs nginx on the loopback interface. You can use a similar port mapping inside of a docker-compose.yml file. e.g.:
ports:
  - "127.0.0.1:80:80"
docker-compose doesn't have any special abilities to infer which network interface to use based on the docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to be for a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty implementing your goal, most would either map different ports on the host to different containers, so 13307:3307 for container a, 23307:3307 for container b, 33307:3307 for container c, or whatever numbering scheme makes sense for you. And when dealing with HTTP traffic, using a reverse proxy like Traefik often makes the most sense.
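For the first of those approaches, sticking with the 8080/3306 ports from the question, the two projects' compose files would differ only in the host side of the mapping (project and image names here are made up):

# app1/docker-compose.yml
services:
  web:
    image: my-app:latest
    ports:
      - "18080:8080"
  db:
    image: mysql
    ports:
      - "13306:3306"

# app2/docker-compose.yml
services:
  web:
    image: my-app:latest
    ports:
      - "28080:8080"
  db:
    image: mysql
    ports:
      - "23306:3306"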
This can be achieved by configuring the network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to specify a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which address each project's services are bound to - which is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml:
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note that if you have another service binding to *:80, for instance Apache running on the host, it will also bind on the docker-compose networks' interfaces and you will not be able to use this port.
To run the above two projects:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
