I am using docker compose to bring up and dynamically scale a database. The containers all live within the same network, and there are a few different images that run, some of which can scale and some of which can't. Is there a way, from within the other containers, to "see" all containers of type "x" on a certain network, so that I can load balance effectively between them?
If you happen to know their names, you can use the docker DNS resolver to discover them. This is likely what you want to do with a load balancer anyway; think of the Nginx upstream directive or HAProxy backends.
services:
  app:
    image: nginx
    deploy:
      replicas: 2

networks:
  default:
    name: sample
If you deploy this, you can query the docker DNS resolver from the same network to ask for the IP addresses of the individual app containers.
$ docker compose up -d
$ docker run --network sample --rm tutum/dnsutils dig +short app
172.22.0.3
172.22.0.2
You can actually tell load balancers like nginx or HAProxy to use the docker resolver for service discovery.
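For nginx, a minimal sketch might look like the following (assuming the app service from the compose file above; 127.0.0.11 is Docker's embedded DNS):

# fragment for the http {} context: use Docker's embedded DNS and
# re-resolve the "app" service name at request time instead of at startup
resolver 127.0.0.11 valid=10s;

server {
    listen 80;
    location / {
        # using a variable forces nginx to resolve "app" per request
        set $upstream http://app:80;
        proxy_pass $upstream;
    }
}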
This doesn't tell you what capabilities the app containers have, but I think that doesn't really matter, unless I misunderstand your question and there is more to it.
Compose already adds these replicas under the same alias to the network. That's also why I could just query for app although the container names are actually sample_app_1 and sample_app_2.
$ docker run --network sample --rm tutum/dnsutils dig +short sample_app_1
172.22.0.3
So, you are essentially getting DNS round-robin as the built-in solution offered by compose.
You can even take this a step further if you want to load balance across separate services as one entity, say type x or type y.
services:
  app:
    image: nginx
    deploy:
      replicas: 2
    networks:
      default:
        aliases:
          - nginx

  other:
    image: nginx
    deploy:
      replicas: 2
    networks:
      default:
        aliases:
          - nginx

networks:
  default:
    name: sample
$ docker run --network sample --rm tutum/dnsutils dig +short nginx
172.22.0.2
172.22.0.4
172.22.0.5
172.22.0.3
You could pair this, as described above, with a load balancer, e.g. HAProxy.
For example, you could have a config file like this.
resolvers docker
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s
    hold other 10s
    hold refused 10s
    hold nx 10s
    hold timeout 10s
    hold valid 10s
    hold obsolete 10s

global
    log fd#2 local2
    stats timeout 2m
    spread-checks 15

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout check 5s
    timeout client 2m
    timeout server 2m

listen stats
    bind *:4450
    stats enable
    stats uri /
    stats refresh 15s
    stats show-legends
    stats show-node

frontend default
    bind *:8080
    default_backend nginx

backend nginx
    balance leastconn
    option httpchk GET /
    server-template nginx- 10 nginx:80 resolvers docker init-addr libc,none check inter 30s
If you bake this into an HAProxy image or mount the file for simplicity, you get a proper load balancer instead of DNS round-robin. You can have different load balancing algorithms and session affinity / persistence.
services:
  app: &nginx
    image: nginx
    deploy:
      replicas: 2
    networks:
      default:
        aliases:
          - nginx

  other: *nginx

  lb:
    image: haproxy:2.5-alpine3.15
    ports:
      - 8000:8080
      - 4450:4450
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
Now we can see on the stats page on port 4450 the 4 instances registered under the nginx alias.
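As a quick sanity check from the host, using the port mappings from the compose file above:

$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000   # through HAProxy to one of the nginx replicas
$ curl -s http://localhost:4450/ | head                            # HAProxy stats page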
You can also check out this answer, which contains related information and shows some other strategies besides HAProxy: How does service discovery work with modern docker/docker-compose?
This answer showcases aliases with the docker CLI, so you can understand what compose is doing there: how are compose services implemented?
In Kubernetes, you would get pretty much the same things to work with. Kubernetes also has its own DNS resolver and service abstractions to get VIP or DNS round-robin behaviour. The rest has to come from an external implementation, such as an ingress controller.
Related
At the moment I have implemented a flask application, connected to a mysql database, and the entire implementation is running on a single webserver.
In order to avoid exposing my app publicly, I am running it on the localhost interface of the server, and I am only exposing the public interface (port 443) via a haproxy that redirects the traffic to the localhost interface.
The configuration of docker-compose and haproxy can be found below
docker-compose:
version: '3.1'

services:
  db:
    image: mysql:latest
    volumes:
      - mysql-volume:/var/lib/mysql
    container_name: mysql
    ports:
      - 127.0.0.1:3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx

  app:
    #environment:
    #  - ENVIRONMENT=stage
    #  - BUILD_DATETIME=$(date +'%Y-%m-%d %H:%M:%S')
    build:
      context: .
      dockerfile: Dockerfile
    #labels:
    #  - "build_datetime=${BUILD_DATETIME}"
    container_name: stage_backend
    ports:
      - 127.0.0.1:5000:5000

volumes:
  mysql-volume:
    driver: local
sample haproxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 30s
    timeout server 30s

frontend test
    bind *:80
    bind *:443 ssl crt /etc/letsencrypt/live/testdomain.com/haproxy.pem alpn h2,http/1.1
    redirect scheme https if !{ ssl_fc }
    mode http
    acl domain_testdomain hdr_beg(host) -i testdomain.com
    use_backend web_servers if domain_testdomain

backend web_servers
    timeout connect 10s
    timeout server 100s
    balance roundrobin
    mode http
    server test_server 127.0.0.1:5000 check
So haproxy is running on the public interface as a service via systemd (not containerized) and containers are running on localhost.
This is going to become a production setup soon, so I want to deploy a single-node docker swarm cluster on that server only, as a docker swarm deployment is safer in a production environment.
My question is how can I deploy that on docker swarm.
Does it make sense to leave haproxy as a systemd service and somehow make it forward requests to the docker swarm cluster?
Or is it an easier/better implementation to also containerize the haproxy and put it inside the cluster as a docker-compose service?
If I follow the second approach, how can I make it run on a different interface than the application (haproxy --> public, flask & db --> localhost)?
Again, I am talking about a single server here, so this is why I am trying to separate the network interfaces and only expose haproxy on 443 on the public interface.
Ideally I didn't want to change from haproxy to an nginx reverse proxy, as I am familiar with it and with how ssl termination exactly works there, but I am open to hearing any other implementation that makes more sense.
You seem to be overthinking things, and in the process throwing away security features that docker offers.
First off, docker gives you private networking out of the box in both compose and swarm modes. An implicit network called <stack>_default is created and services are attached to it, and DNS resolution is set up in each container to resolve each service name.
So, assuming your app and db don't explicitly declare any networks, then the following implicit declarations apply, and your app can connect to the db using the connection string mysql://db:3306 directly.
The db container does not need to either publish, or try to protect, access to this port; only other containers attached to the [stack_]default network will have access.
networks:
  default:    # implicit

services:
  app:
    networks:
      default:    # implicit
    environment:
      MYSQL: mysql://db:3306

  db:
    networks:
      default:    # implicit
At this point, it's your choice to run HAProxy as a service or not. Personally I would (and do). It is handy in swarm to have a single service that handles :80 and :443 ingress, does offloading, and then uses docker networks to direct traffic to other services on whatever service:ports handle those connections.
I use Traefik rather than HAProxy as it can use service labels to route traffic dynamically, but either way, having HAProxy as a service means that, if you continue to use it, you can more easily deploy HAProxy config updates.
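A rough sketch of that layout (hypothetical image name for the app, since swarm cannot build in place; it assumes an haproxy.cfg next to the stack file whose backend points at app:5000 instead of 127.0.0.1:5000):

version: '3.8'

services:
  lb:
    image: haproxy:2.5-alpine3.15
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

  app:
    image: my-flask-backend:latest   # hypothetical pre-built image
    # no published ports: only reachable by other services on the stack's default network

  db:
    image: mysql:latest
    volumes:
      - mysql-volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx
    # no published ports either

volumes:
  mysql-volume: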
I have a service that requires that it can connect to the other instances of itself to establish a quorum.
The service has an environment variable like:
initialDiscoverMembers=db.1:5000,db.2:5000,db.3:5000
They can never find each other. I've tried logging into other containers and pinging other services by <service>.<number>, like ping redis.1, and it doesn't work.
Is there a way in Docker (swarm) to get the incremental hostname working for connections as well? I looked at endpoint_mode: dnsrr but that doesn't seem to be what I want.
I think I may have to just create three separate instances of the service and name them different things, but that seems so cumbersome.
You cannot refer independently to each container using the incremental host.<id>, since DNS resolution on Swarm is done on a per-service basis; what you can do is add a hostname alias to each container based on its Swarm slot.
For example, right now you're using a db service, so you could add:
version: '3.7'

services:
  db:
    image: postgres
    deploy:
      replicas: 3
    hostname: "db-{{.Task.Slot}}"
    ports:
      - 5000:5432
In this case, since all the task containers are on the same network, you can address them as db-1, db-2 and db-3.
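A quick way to verify (a sketch; it assumes the stack was deployed as mystack, so the tasks are named mystack_db.1, mystack_db.2, ...):

# confirm the slot-based hostname inside the first task
docker exec -it $(docker ps -q -f name=mystack_db.1) hostname
# and check that a sibling's slot hostname resolves as described above
docker exec -it $(docker ps -q -f name=mystack_db.1) getent hosts db-2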
The goal: To deploy on docker swarm a set of services, one of which is only available for me when I am connected to the OpenVPN server which has also been spun up on docker swarm.
How can I, step by step, only connect to a whoami example container, with a domain in the browser, when I am connected to a VPN?
Background
The general idea would be to have, say, kibana and elasticsearch running internally, which can only be accessed when on the VPN (rather like a corporate network), with other services running perfectly fine publicly as normal. These will all be on separate nodes, so I am using an overlay network.
I do indeed have OpenVPN running on docker swarm along with a whoami container, and I can connect to the VPN. However, it doesn't look like my IP is changing, and I have no idea how to make the whoami container available only when on the VPN, especially considering I'm using an overlay network, which is multi-host. I'm also using traefik, a reverse proxy which provides me with a mostly automatic letsencrypt setup (via DNS challenge) for wildcard domains. With this I can get:
https://traefik.mydomain.com
But I also want to connect to vpn.mydomain.com (which I can do right now), and then be able to visit:
https://whoami.mydomain.com
...which I cannot. Yet. I've posted my traefik configuration in a different place in case you want to take a look, as this thread will grow too big if I post it here.
Let's start with where I am right now.
OpenVPN
Firstly, the interesting thing about OpenVPN and docker swarm is that OpenVPN needs to run in privileged mode because it has to make network interface changes, amongst other things, and swarm doesn't have CAP_ADD capabilities yet. So the idea is to launch the container via a sort of 'proxy container' that will run the container manually with these privileges added for you. It's a workaround for now, but it means you can deploy the service with swarm.
Here's my docker-compose for OpenVPN:
vpn-udp:
  image: ixdotai/swarm-launcher:latest
  hostname: mainnode
  environment:
    LAUNCH_IMAGE: ixdotai/openvpn:latest
    LAUNCH_PULL: 'true'
    LAUNCH_EXT_NETWORKS: 'app-net'
    LAUNCH_PROJECT_NAME: 'vpn'
    LAUNCH_SERVICE_NAME: 'vpn-udp'
    LAUNCH_CAP_ADD: 'NET_ADMIN'
    LAUNCH_PRIVILEGED: 'true'
    LAUNCH_ENVIRONMENTS: 'OVPN_NATDEVICE=eth1'
    LAUNCH_VOLUMES: '/etc/openvpn:/etc/openvpn:rw'
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock:rw'
  networks:
    - my-net
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I can deploy the above with docker stack deploy --with-registry-auth --compose-file docker/docker-compose.prod.yml my-app-name, and this is what I'm using for the rest. Importantly, I cannot just deploy this, as it won't load yet: the OpenVPN configuration needs to exist in /etc/openvpn on the node, which is then mounted into the container, and I do this during provisioning:
# Note that you have to create the overlay network with --attachable for standalone containers
docker network create -d overlay app-net --attachable

# Create the config
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_genconfig -u udp://vpn.mydomain.com:1194 -b

# Generate all the vpn files, setup etc
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn bash -c 'yes yes | EASYRSA_REQ_CN=vpn.mydomain.com ovpn_initpki nopass'

# Setup a client config and grab the .ovpn file used for connecting
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn easyrsa build-client-full client nopass
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_getclient client > client.ovpn
So now I have an attachable overlay network, and when I deploy this, OpenVPN is up and running on the first node. I can grab a copy of client.ovpn and connect to the VPN. Even if I check "send all traffic through the VPN", though, it looks like the IP isn't being changed, and I'm still nowhere near hiding a container behind it.
Whoami
This simple container can be deployed with the following in docker-compose:
whoami:
  image: "containous/whoami"
  hostname: mainnode
  networks:
    - ${DOCKER_NETWORK_NAME}
  ports:
    - 1337:80
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I put port 1337 there for testing, as I can visit my IP:1337 and see it, but this doesn't achieve my goal of having whoami.mydomain.com only resolving when connected to OpenVPN.
I can ping a 192.168 address when connected to the vpn
I ran the following on the host node:
ip -4 address add 192.168.146.16/24 dev eth0
Then when connected to the VPN, I can resolve this address! So it looks like something is working at least.
How can I achieve the goal stated at the top? What is required? What OpenVPN configuration needs to exist, what network configuration, and what container configuration? Do I need a custom DNS solution as I suggest below? What better alternatives are there?
Some considerations:
I can have the domains, including the private one whoami.mydomain.com, public. This means I would have https and could get wildcard certificates for them easily, I suppose? But my confusion here is: how can I make those domains available only on the VPN but also have tls certs for them without using a self-signed certificate?
I can also run my own DNS server for resolving. I have tried this but I just couldn't get it working, probably because the VPN part isn't working properly yet. I found dnsmasq for this, and I had to add the aforementioned local ip to resolv.conf to get anything working locally. But domains would still not resolve when connected to the VPN, so it doesn't look like DNS traffic was going over the VPN either (even though I set it as such; my client is Viscosity).
Some mention using a bridge network, but a bridge network does not work for multi-host.
Resources thus far (I will update with more)
- Using swarm-launcher to deploy OpenVPN
- A completely non-explanatory answer on stackexchange which I have seen referenced as basically unhelpful by multiple people across other Github threads, and one of the links is dead
So I was banging my head against a brick wall about this problem and just sort of "solved" it by pivoting your idea:
Basically, I opened the port of the vpn container to its host and then enabled a proxy. This means that I can reach that proxy by visiting the ip of the pc on which the vpn resides (AKA the Docker host of the VPN container/stack).
Hang with me:
I used the gluetun vpn but I think this also applies if you use the openvpn one. I just find gluetun easier.
Also, IMPORTANT NOTE: I tried this in a localhost environment, but theoretically this should also work in a multi-host situation, since I'm working with separate stacks. Probably, in a multi-host situation, you need to use the public ip of the main docker host.
1. Create the network
So, first of all you create an attachable network for this docker swarm stacks:
docker network create --driver overlay --attachable --scope swarm vpn-proxy
By the way, I'm starting to think that this step is superfluous, but I need to test it more.
2. Set the vpn stack
Then you create your vpn stack file, let's call it stack-vpn.yml:
(Here I used gluetun through the swarm-launcher "trick". This gluetun service connects to a VPN via WireGuard, and it also enables an http proxy on port 8888; this port is also mapped to its host by setting LAUNCH_PORTS: '8888:8888/tcp'.)
version: '3.7'

services:
  vpn_launcher:
    image: registry.gitlab.com/ix.ai/swarm-launcher
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    networks:
      - vpn-proxy
    environment:
      LAUNCH_IMAGE: qmcgaw/gluetun
      LAUNCH_PULL: 'true'
      LAUNCH_EXT_NETWORKS: 'vpn-proxy'
      LAUNCH_PROJECT_NAME: 'vpn'
      LAUNCH_SERVICE_NAME: 'vpn-gluetun'
      LAUNCH_CAP_ADD: 'NET_ADMIN'
      LAUNCH_ENVIRONMENTS: 'VPNSP=<your-vpn-service> VPN_TYPE=wireguard WIREGUARD_PRIVATE_KEY=<your-private-key> WIREGUARD_PRESHARED_KEY=<your-preshared-key> WIREGUARD_ADDRESS=<addrs> HTTPPROXY=on HTTPPROXY_LOG=on'
      LAUNCH_PORTS: '8888:8888/tcp'
    deploy:
      placement:
        constraints: [ node.role == manager ]
      restart_policy:
        condition: on-failure

networks:
  vpn-proxy:
    external: true
Notice that both the swarm-launcher and the gluetun containers use the previously created vpn-proxy network.
3. Set the workers stack
For the time being we will set up an example with 3 replicas of the alpine image (filename stack-workers.yml):
version: '3.7'

services:
  alpine:
    image: alpine
    networks:
      - vpn-proxy
    command: 'ping 8.8.8.8'
    deploy:
      replicas: 3

networks:
  vpn-proxy:
    external: true
They also use the vpn-proxy overlay network.
4. Launch our stacks
docker stack deploy -c stack-vpn.yml vpn
docker stack deploy -c stack-workers.yml workers
Once they are up you can access any worker task and try to use the proxy by using the host ip where the proxy resides.
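For example, a quick check from one of the worker tasks might look like this (a sketch; it assumes the workers stack was deployed as workers and that <host-ip> is the Docker host where gluetun published port 8888):

# pick the first alpine task and fetch the egress IP through the gluetun http proxy
docker exec -it $(docker ps -q -f name=workers_alpine | head -n 1) \
  sh -c 'http_proxy=http://<host-ip>:8888 wget -qO- http://ifconfig.me'
# this should print the VPN provider's IP rather than the host's public IP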
As I said before, theoretically this should work in a multi-host situation, but probably you need to use the public ip of the main docker host (although if they share the same overlay network it could also work with the internal ip address (192...)).
I have Consul running via docker using docker-compose
version: '3'

services:
  consul:
    image: unifio/consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./config:/config
      - ./.data/consul:/data
    command: agent -server -data-dir=/data -ui -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1

  mongo:
    image: mongo
    ports:
      - "27017:27017"
      - "27018:27018"
    volumes:
      - ./.data/mongodb:/data/db
    command: mongod --bind_ip_all
and have a nodejs service running on port 6001 which exposes a /health endpoint for health checks.
I am able to register the service via this consul package.
However, visiting the consul UI I can see that the service has a status of failing because the health check is not working.
The UI shows this message:
Get http://127.0.0.1:6001/health: dial tcp 127.0.0.1:6001: getsockopt: connection refused
Not sure why it is not working exactly, but I have a sense that I may have misconfigured consul.
Any help would be great.
Consul is running in your docker container. When you use 127.0.0.1 in this container, it refers to itself, not to your host.
You need to use a host IP that is known to your container (and of course make sure your service is reachable and listening on this particular IP).
In most cases, you should be able to contact your host from a container through the default docker0 bridge ip that you can get with ip addr show dev docker0 from your host as outlined in this other answer.
The best solution IMO is to discover the gateway that your container is using, which will point to the particular bridge IP on your host (i.e. the bridge created for your docker-compose project when starting it). There are several methods you can use to discover this ip from the container, depending on the installed tooling and your linux flavor.
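For instance, one common way (a sketch, assuming the iproute2 tooling is available in the container) is to read the default gateway from the routing table:

# inside the container: the default gateway is the bridge IP on the docker host
ip route | awk '/^default/ {print $3}'
# e.g. 172.18.0.1 for a compose-created bridge network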
While Zeitounator's answer is perfectly fine and answers your direct question,
the "indirect" solution to your problem would be to manage the nodejs service
through docker-compose.
IMHO it's a good idea to manage all services involved using the same tool,
as then their lifecycles are aligned and also it's easy to configure them to talk
to each other (at least that's the case for docker-compose).
Moreover, letting containers access services on the host is risky in terms of security.
In production environments you usually want to shield host services from containers,
as otherwise the containers lose their "containment" role.
So, in your case you would need to add the nodejs service to docker-compose.yml:
services:
  (...)
  nodejs-service:
    image: nodejs-service-image
    ports:
      - "6001:6001"   # this is only required if you need to expose the port on the host
    command: nodejs service.js
And then your Consul service would be able to access nodejs-service
through http://nodejs-service:6001/health.
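As an illustration (the service name nodejs-service is the hypothetical one from the compose snippet above, and this assumes the registration is made from a container on the same compose network, where the agent is reachable as consul), the registration would then point the health check at the service name rather than 127.0.0.1:

# register the service with a check that the Consul agent can actually reach
curl -X PUT http://consul:8500/v1/agent/service/register -d '{
  "Name": "nodejs-service",
  "Address": "nodejs-service",
  "Port": 6001,
  "Check": {
    "HTTP": "http://nodejs-service:6001/health",
    "Interval": "10s"
  }
}'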
In docker-compose legacy yml, if you link a service it used to create an environment variable servicename_PORT which you could use to discover the port of a linked container. In the new v2 format we have user-defined networks, which add the service name to the internal DNS so we can connect to linked services, but how do we find the port a linked service exposes? The only way I can think of is to create an environment variable for each linked service where I can put the port, but then I will have the same port twice in the docker-compose file: once in the expose section of the service itself and once as an environment variable in the service that connects to it. Is there a more DRY way of discovering the exposed port?
For this, you usually use a registrator + service discovery, meaning a service like https://www.consul.io plus registrator.
Basically, this adds an API for you to either watch a kv store for your service definitions (port / ip), which can then be random, or even use the DNS included in consul. The latter won't help with ports; that's what you use a registry for.
If you want to dodge this best-practice way, mount the docker socket and use docker inspect <servicename> to find the port.
services:
  other:
    container_name: foo
    image: YYYY

  theonedoingthelookup:
    image: ZZZZ
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
You will need to have the docker cli tool installed in the container, then run this inside the ZZZZ container
docker inspect foo
Use some grep / awk / filters to extract the information you need
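For example (using the container_name foo from the compose file above), a Go template can pull out just the exposed ports:

docker inspect --format '{{json .Config.ExposedPorts}}' foo
# e.g. {"80/tcp":{}}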
It's whatever port the service is listening on inside the container. Port mappings don't apply to container <-> container communication, only to host <-> container communication.
For example:
version: '2'

services:
  a:
    ...
    networks:
      - my-net

  b:
    ...
    networks:
      - my-net

networks:
  my-net:
Let's say a is running a webserver at port 8080, b would be able to hit it by sending a request to a:8080.
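To illustrate (a sketch; it assumes b has a shell and wget available), from inside b the request goes straight to the container port, ignoring any published port mappings:

docker-compose exec b wget -qO- http://a:8080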