How to fetch the IPs of a service in a Docker swarm cluster?

I am running a docker swarm mode cluster with 2 nodes and deploy 5 services: [ mysql, mongo, app ]. I wish to fill the db with an ansible script from my manager node, but I cannot get an IP that the script can use to reach the db services in their containers.
e.g:
mysql -h {{ mysql_service_host }} ....
How do I get the container IP or the service IP from a node?
Is it possible to use host-mode networking in docker swarm?

For services (containers) that are part of the same network, you can simply use the service name: Docker includes a DNS resolver that handles IP resolution. You will need to make your services part of an overlay network, which can span more than one node.
Eg:
services:
  myapp:
    image: myimage:1.0
    deploy:
      replicas: 1
    networks:
      - privnet
  maindb:
    image: mysql
    deploy:
      replicas: 1
    networks:
      - privnet
networks:
  privnet:
    driver: overlay
This creates an overlay network with two services. The corresponding containers could be created on any node; it doesn't matter where. They will all be able to communicate with each other, since they're part of the same overlay network.
Within myapp, you can use maindb as a DNS name for the mysql service. Docker will resolve it to the proper IP within the privnet network.
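For the original question, that means mysql_service_host can simply be set to the service name. From any container attached to privnet, something like this should work (the credentials here are placeholders):
mysql -h maindb -u root -p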
By the way, a swarm cluster with 2 nodes doesn't add much resilience: the Raft consensus protocol used by the managers needs a majority to elect a leader, so you want at least 3 manager nodes to tolerate the loss of one. https://raft.github.io

Related

Not deploying containers on the master node in Docker Swarm

I am working on a project which uses Raspberry Pis as worker nodes and my laptop as the master node. I want to control the deployment of my containers from my laptop, but I want the containers to run on the worker nodes only (which means no containers on the master node). How can I do this with Docker Swarm?
I am going to presume you are using a stack.yml file to describe your deployment declaratively, but docker service create does have flags for this too (an example follows the Compose snippet below).
There are a number of values that Docker defines which can be tested under a placement constraints node:
version: "3.9"
service:
worker:
image: nginx
deploy:
placement:
constraints:
- node.role==worker
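The equivalent with docker service create uses the --constraint flag (the service name here is arbitrary):
docker service create --name worker --constraint node.role==worker nginx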

Docker Swarm ping by hostname incremental host.<id>

I have a service that requires that it can connect to the other instances of itself to establish a quorum.
The service has a environment variable like:
initialDiscoverMembers=db.1:5000,db.2:5000,db.3:5000
They can never find each other. I've tried logging into other containers and pinging other services by <service>.<id>, like ping redis.1, and it doesn't work.
Is there a way in Docker (swarm) to get the incremental hostname working for connections as well? I looked at endpoint_mode: dnsrr, but that doesn't seem to be what I want.
I think I may have to just create three separate instances of the service and name them differently, but that seems so cumbersome.
You cannot refer to each container independently using the incremental host.<id> form, since DNS resolution on Swarm is done per service; what you can do is add a hostname alias to each container based on its Swarm slot.
For example, right now you're using a db service, so you could add:
version: '3.7'
services:
  db:
    image: postgres
    deploy:
      replicas: 3
    hostname: "db-{{.Task.Slot}}"
    ports:
      - 5000:5432
In this case, since all the service's tasks are on the same network, you can address them as db-1, db-2 and db-3.
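Applied to the environment variable from the question, that would become (a sketch, assuming the compose file above):
initialDiscoverMembers=db-1:5000,db-2:5000,db-3:5000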

Using Docker Swarm as a reverse proxy using overlay network routing mesh

I have a service stack that I am deploying to my Docker swarm which has 1 manager node and 1 worker node. Its services are constrained to be placed on only one of these nodes (in this case, the manager). The worker node is intended to function only as a separate ingress point.
The manager node has the label minecraft=main set on it via docker node update --label-add minecraft=main
The swarm-scoped overlay network named minecraft-net is created separately by a docker-compose stack.
That docker-compose.yml on the manager (in host mode, not swarm) contains:
...
networks:
  minecraft-net:
    name: minecraft-net
    driver: overlay
    attachable: true
    driver_opts:
      encrypted: "true"
For accessing the Minecraft server, players should be able to connect to the worker node hostname, and the routing mesh should redirect traffic to the manager.
To deploy this stack, I use
docker stack deploy --compose-file minecraft.yml minecraft
where minecraft.yml is
version: '3.7'
services:
  crafty-controller:
    image: crafty-controller # This image is only available on the manager node
    ports:
      - "25500-25600"
      - "13121:13121"
    volumes:
      - ./minecraft/docker/minecraft_servers:/minecraft_servers
      - ./minecraft/docker/db:/crafty_db
      - /mnt/minecraft-backups:/crafty_web/backups
    networks:
      - minecraft-net
    deploy:
      placement:
        constraints:
          - node.labels.minecraft == main
networks:
  minecraft-net:
    external: true
I have used a similar setup in the past for an event, but now I'm running into a problem. Even though I am able to connect to the Minecraft server directly using the manager node's address, the worker node is not redirecting any traffic to or from the manager. This means I cannot connect to the Minecraft server via the worker node address.
root@debvm:/opt/app/my.site# docker stack deploy --compose-file minecraft.yml minecraft
Updating service minecraft_crafty-controller (id: aug42i46efu9bc7wv39jflxdc)
image crafty-controller:latest could not be accessed on a registry to record
its digest. Each node will access crafty-controller:latest independently,
possibly leading to different nodes running different
versions of the image.
root@debvm:/opt/app/my.site# docker stack ls
NAME        SERVICES   ORCHESTRATOR
minecraft   1          Swarm
root@debvm:/opt/app/my.site# docker node ps
ID             NAME                            IMAGE                      NODE    DESIRED STATE   CURRENT STATE                ERROR   PORTS
to2781e7jtjm   minecraft_crafty-controller.1   crafty-controller:latest   debvm   Running         Running about a minute ago
root@debvm:/opt/app/my.site# docker service ls
ID             NAME                          MODE         REPLICAS   IMAGE                      PORTS
aug42i46efu9   minecraft_crafty-controller   replicated   1/1        crafty-controller:latest   *:13121->13121/tcp, *:30000-30076->25500-25576/tcp, *:30077->25600/tcp, *:30078-30099->25578-25599/tcp, *:30100->25577/tcp
root@debvm:/opt/app/my.site# docker node ls
ID                          HOSTNAME     STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
6tnekpdbkktl8h7puqeba06i8 * debvm        Ready    Active         Leader           19.03.15
qiedc8wfogv25ezlasmhe52co   workernode   Ready    Active                          19.03.15
I can see that the worker node does not have the crafty-controller image to create its own instance of the service. This is intentional, because the image is large, and there should only be one instance anyway. What I would like to know is if it is possible to have the worker node forward traffic (requests) through the ingress overlay network to the manager node, even if the worker node does not have an image needed for that stack.
Somehow, I was able to do this a year ago, but I forgot how I got it working.
Currently, I am unable to connect to the Minecraft server via the worker node's address (but it can be connected to by using the manager node's address). Is there something that I can change in my configuration to allow players to connect to the worker node and have it redirect traffic to the manager node?
I see a misunderstanding about the networking within a Swarm here.
The overlay networks manage communications among the Docker daemons participating in the swarm.
The network you're looking for is the ingress network, a special overlay network that handles load balancing among a service's nodes. When any swarm node receives a request on a published port, it hands the request off to a module called IPVS, which selects one of the IP addresses participating in the service and routes the request to it over the ingress network.
This ingress network should already have been created when you initiated the Docker Swarm; you can check with docker network inspect ingress.
So basically, in order to access a service through its published ports on any of the nodes, that service needs to be on the ingress network.
By default, all published ports are in ingress mode. For example, from the following Compose file:
version: '3.7'
services:
  crafty-controller:
    image: crafty-controller # This image is only available on the manager node
    ports:
      - "25500-25600"
      - "13121:13121"
    volumes:
      - ./minecraft/docker/minecraft_servers:/minecraft_servers
      - ./minecraft/docker/db:/crafty_db
      - /mnt/minecraft-backups:/crafty_web/backups
    deploy:
      placement:
        constraints:
          - node.labels.minecraft == main
We will have the following service:
...
{
  "Protocol": "tcp",
  "TargetPort": 25598,
  "PublishedPort": 30098,
  "PublishMode": "ingress"
},
{
  "Protocol": "tcp",
  "TargetPort": 25599,
  "PublishedPort": 30099,
  "PublishMode": "ingress"
}
...
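You can dump this port list yourself; something like the following should work (the service name is taken from the stack above):
docker service inspect minecraft_crafty-controller --format '{{json .Endpoint.Ports}}'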
You can only have one ingress network in the cluster. So if you actually want to call it minecraft-net, you'd have to inspect the existing one, remove all services whose containers are connected to it, remove the existing ingress network, and then create a new overlay network with the --ingress flag.
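A sketch of that procedure, assuming no services are still publishing ports through the mesh (the encryption option mirrors the driver_opts shown earlier):
docker network rm ingress
docker network create --driver overlay --ingress --opt encrypted minecraft-net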

Hiding a docker container behind OpenVPN, in docker swarm, with an overlay network

The goal: To deploy on docker swarm a set of services, one of which is only available for me when I am connected to the OpenVPN server which has also been spun up on docker swarm.
How can I, step by step, only connect to a whoami example container, with a domain in the browser, when I am connected to a VPN?
Background
The general idea would be to have, say, kibana and elasticsearch running internally, which can only be accessed when on the VPN (rather like a corporate network), with other services running perfectly fine publicly as normal. These will all be on separate nodes, so I am using an overlay network.
I do indeed have OpenVPN running on docker swarm along with a whoami container, and I can connect to the VPN; however, it doesn't look like my IP is changing, and I have no idea how to make the whoami container available only when on the VPN, especially considering I'm using a multi-host overlay network. I'm also using traefik, a reverse proxy which provides me with a mostly automatic Let's Encrypt setup (via DNS challenge) for wildcard domains. With this I can get:
https://traefik.mydomain.com
But I also want to connect to vpn.mydomain.com (which I can do right now), and then be able to visit:
https://whoami.mydomain.com
...which I cannot. Yet. I've posted my traefik configuration in a different place in case you want to take a look, as this thread will grow too big if I post it here.
Let's start with where I am right now.
OpenVPN
Firstly, the interesting thing about OpenVPN and docker swarm is that OpenVPN needs to run in privileged mode, because it has to make network interface changes among other things, and swarm doesn't support CAP_ADD yet. So the idea is to launch the container via a sort of 'proxy container' that runs the real container manually with those privileges added for you. It's a workaround for now, but it means you can deploy the service with swarm.
Here's my docker-compose for OpenVPN:
vpn-udp:
  image: ixdotai/swarm-launcher:latest
  hostname: mainnode
  environment:
    LAUNCH_IMAGE: ixdotai/openvpn:latest
    LAUNCH_PULL: 'true'
    LAUNCH_EXT_NETWORKS: 'app-net'
    LAUNCH_PROJECT_NAME: 'vpn'
    LAUNCH_SERVICE_NAME: 'vpn-udp'
    LAUNCH_CAP_ADD: 'NET_ADMIN'
    LAUNCH_PRIVILEGED: 'true'
    LAUNCH_ENVIRONMENTS: 'OVPN_NATDEVICE=eth1'
    LAUNCH_VOLUMES: '/etc/openvpn:/etc/openvpn:rw'
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock:rw'
  networks:
    - my-net
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I can deploy the above with docker stack deploy --with-registry-auth --compose-file docker/docker-compose.prod.yml my-app-name, and this is what I'm using for the rest. Importantly, I cannot just deploy this as-is, because it won't start yet: the OpenVPN configuration needs to exist in /etc/openvpn on the node, which is then mounted into the container. I do this during provisioning:
# Note that you have to create the overlay network with --attachable for standalone containers
docker network create -d overlay app-net --attachable
# Create the config
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_genconfig -u udp://vpn.mydomain.com:1194 -b
# Generate all the vpn files, setup etc
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn bash -c 'yes yes | EASYRSA_REQ_CN=vpn.mydomain.com ovpn_initpki nopass'
# Setup a client config and grab the .ovpn file used for connecting
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn easyrsa build-client-full client nopass
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_getclient client > client.ovpn
So now I have an attachable overlay network, and when I deploy this, OpenVPN is up and running on the first node. I can grab a copy of client.ovpn and connect to the VPN. Even if I check "send all traffic through the VPN", though, it looks like my IP isn't being changed, and I'm still nowhere near hiding a container behind it.
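For reference, connecting from a Linux machine with the OpenVPN CLI client looks something like this (a GUI client such as Viscosity works too):
sudo openvpn --config client.ovpn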
Whoami
This simple container can be deployed with the following in docker-compose:
whoami:
  image: "containous/whoami"
  hostname: mainnode
  networks:
    - ${DOCKER_NETWORK_NAME}
  ports:
    - 1337:80
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I put port 1337 there for testing, as I can visit my IP:1337 and see the container, but this doesn't achieve my goal of having whoami.mydomain.com resolve only when connected to OpenVPN.
I can ping a 192.168 address when connected to the vpn
I ran the following on the host node:
ip -4 address add 192.168.146.16/24 dev eth0
Then, when connected to the VPN, I can reach this address! So it looks like something is working, at least.
How can I achieve the goal stated at the top? What is required? What OpenVPN configuration needs to exist, what network configuration, and what container configuration? Do I need a custom DNS solution as I suggest below? What better alternatives are there?
Some considerations:
I can have all the domains, including the private whoami.mydomain.com, be public. This means I would get HTTPS and wildcard certificates for them easily, I suppose? But my confusion here is: how can I make those domains resolve only on the VPN while still having TLS certs for them, without using a self-signed certificate?
I can also run my own DNS server for resolving. I have tried this, but I just couldn't get it working, probably because the VPN part isn't working properly yet. I found dnsmasq for this, and I had to add the aforementioned local IP to resolv.conf to get anything resolving locally. But domains would still not resolve when connected to the VPN, so it doesn't look like DNS traffic was going over the VPN either (even though I set it as such; my client is Viscosity).
Some mention using a bridge network, but a bridge network does not work across multiple hosts.
Resources thus far (I will update with more)
- Using swarm-launcher to deploy OpenVPN
- A completely non-explanatory answer on stackexchange which I have seen referenced as basically unhelpful by multiple people across other Github threads, and one of the links is dead
So I was banging my head against a brick wall about this problem and just sort of "solved" it by pivoting your idea:
Basically, I opened the VPN container's port to its host and then enabled a proxy. This means I can reach that proxy by visiting the IP of the machine the VPN resides on (i.e. the Docker host of the VPN container/stack).
Hang with me:
I used the gluetun VPN client, but I think this also applies if you use the OpenVPN one; I just find gluetun easier.
Also, an IMPORTANT NOTE: I tried this in a localhost environment, but theoretically it should also work in a multi-host situation, since I'm working with separate stacks. In a multi-host setup you probably need to use the public IP of the main Docker host.
1. Create the network
So, first of all, create an attachable network for these docker swarm stacks:
docker network create --driver overlay --attachable --scope swarm vpn-proxy
By the way, I'm starting to think this step is superfluous, but I need to test it more.
2. Set up the VPN stack
Then create your VPN stack file; let's call it stack-vpn.yml.
(Here I used gluetun through the swarm-launcher "trick". This gluetun service connects to a VPN provider via WireGuard, and it also enables an HTTP proxy on port 8888; that port is mapped to its host by setting LAUNCH_PORTS: '8888:8888/tcp'.)
version: '3.7'
services:
  vpn_launcher:
    image: registry.gitlab.com/ix.ai/swarm-launcher
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    networks:
      - vpn-proxy
    environment:
      LAUNCH_IMAGE: qmcgaw/gluetun
      LAUNCH_PULL: 'true'
      LAUNCH_EXT_NETWORKS: 'vpn-proxy'
      LAUNCH_PROJECT_NAME: 'vpn'
      LAUNCH_SERVICE_NAME: 'vpn-gluetun'
      LAUNCH_CAP_ADD: 'NET_ADMIN'
      LAUNCH_ENVIRONMENTS: 'VPNSP=<your-vpn-service> VPN_TYPE=wireguard WIREGUARD_PRIVATE_KEY=<your-private-key> WIREGUARD_PRESHARED_KEY=<your-preshared-key> WIREGUARD_ADDRESS=<addrs> HTTPPROXY=on HTTPPROXY_LOG=on'
      LAUNCH_PORTS: '8888:8888/tcp'
    deploy:
      placement:
        constraints: [ node.role == manager ]
      restart_policy:
        condition: on-failure
networks:
  vpn-proxy:
    external: true
Notice that both the swarm-launcher and the gluetun containers use the previously created vpn-proxy network.
3. Set up the workers stack
For now, we'll use an example with 3 replicas of the alpine image (filename stack-workers.yml):
version: '3.7'
services:
  alpine:
    image: alpine
    networks:
      - vpn-proxy
    command: 'ping 8.8.8.8'
    deploy:
      replicas: 3
networks:
  vpn-proxy:
    external: true
They also use the vpn-proxy overlay network.
4. Launch our stacks
docker stack deploy -c stack-vpn.yml vpn
docker stack deploy -c stack-workers.yml workers
Once they are up, you can exec into any worker task and use the proxy via the IP of the host where the proxy resides.
As I said before, this should in theory also work in a multi-host situation, but you would probably need to use the public IP of the main Docker host (although, since the stacks share the same overlay network, it might also work with the internal 192.168.x.x address).
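A quick way to verify, sketched with hypothetical names and addresses (replace <proxy-host-ip> with the IP of the node running gluetun; workers_alpine is the task name the stack above would produce):
docker exec -it $(docker ps -q --filter name=workers_alpine | head -n 1) sh
# inside the container, busybox wget honours the http_proxy variable:
wget -qO- http://ifconfig.me                                         # direct egress IP
http_proxy=http://<proxy-host-ip>:8888 wget -qO- http://ifconfig.me  # egress IP via the VPN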

Cannot use user-defined bridge in swarm compose yaml file

I learned from the Docker documentation that I cannot use Docker DNS to find containers by their hostnames without a user-defined bridge network. I created one using the command:
docker network create --driver=overlay --subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_overlay
and tried to deploy a container that uses it. The compose file looks like:
version: "3.0"
services:
web1:
image: "test"
ports:
- "12023:22"
hostname: "mytest-web1"
networks:
- test
web2:
image: "test"
ports:
- "12024:22"
hostname: "mytest-web2"
networks:
- test
networks:
test:
external:
name: user_defined_overlay
my docker version is: Docker version 17.06.2-ce, build cec0b72
and I got the following error when I tried deploying the stack:
network "user_defined_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"
I was able to create an overlay network and define it in the compose file; that worked fine, but it didn't work for a bridge network.
result of docker network ls:
NETWORK ID     NAME                   DRIVER    SCOPE
cd6c1e05fca1   bridge                 bridge    local
f0df22fb157a   docker_gwbridge        bridge    local
786416ba8d7f   host                   host      local
cuhjxyi98x15   ingress                overlay   swarm
531b858419ba   none                   null      local
15f7e38081eb   user_defined_overlay   overlay   swarm
UPDATE
I tried creating two containers running on two different swarm nodes (the first container runs on the manager, while the second runs on the worker node), and I specified the user-defined overlay network as shown in the stack above. I tried pinging the mytest-web2 container from within the mytest-web1 container using its hostname, but I got unknown host mytest-web2.
As of 17.06, you can create node-local networks with a swarm scope. Do so with the --scope=swarm option, e.g.:
docker network create --scope=swarm --driver=bridge \
--subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_bridge
Then you can use this network with services and stacks defined in swarm mode. For more details, you can see PR #32981.
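To confirm the resulting scope, something like this should work:
docker network inspect user_defined_bridge --format '{{.Scope}}'
# expected output: swarm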
Edit: you appear to have significantly overcomplicated the problem. As long as everything is being done in a single compose file, there's no need to define the network as external. You do need an overlay network if you want container-to-container communication across nodes. DNS discovery is included on bridge and overlay networks, with the exception of the default "bridge" network that Docker creates; with a compose file, you would never use that network without explicitly configuring it as an external network with that name. So to get container-to-container networking to work, you can let docker-compose or docker stack deploy create the network for your project/stack automatically with:
version: "3.0"
services:
web1:
image: "test"
ports:
- "12023:22"
web2:
image: "test"
ports:
- "12024:22"
Note that I have also removed the "hostname" setting; it's not needed for DNS resolution. You can communicate directly with the service VIP using the name "web1" or "web2" from either of these containers.
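For instance, a quick check from inside one of the containers (assuming a shell and ping exist in the "test" image):
# from inside web1:
ping -c 1 web2   # the name resolves via Docker's DNS to the service VIP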
With docker-compose it will create a default bridge network; swarm mode will create an overlay network. These defaults are ideal for DNS discovery and container-to-container communication in each scenario.
The overlay network is the one to use in swarm: swarm is meant to manage containers on multiple hosts, and overlay networks are Docker's multi-host networks. See https://docs.docker.com/engine/userguide/networking/get-started-overlay/
