Docker Swarm ping by hostname incremental host.<id>

I have a service that requires that it can connect to the other instances of itself to establish a quorum.
The service has an environment variable like:
initialDiscoverMembers=db.1:5000,db.2:5000,db.3:5000
They can never find each other. I've tried logging into other containers and pinging other services by <service>.<id>, like ping redis.1, and it doesn't work.
Is there a way in Docker (swarm) to get the incremental hostname working for connection as well? I looked at the endpoint_mode: dnsrr but that doesn't seem to be what I want.
I think I may have to just create three separate instances of the service and name it different things, but that seems so cumbersome.

You cannot refer to each container independently using the incremental host.<id>, since DNS resolution in Swarm is done per service. What you can do is add a hostname alias to each container based on its Swarm slot.
For example, right now you're using a db service, so you could add:
version: '3.7'
services:
  db:
    image: postgres
    deploy:
      replicas: 3
    hostname: "db-{{.Task.Slot}}"
    ports:
      - 5000:5432
In this case, since all the tasks' containers are attached to the same network, you can address them as db-1, db-2 and db-3.
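With those aliases in place, the discovery variable from the question can use the slot-based names (the port must be whatever the containers listen on internally, since these names resolve to container IPs rather than to the routing mesh):
initialDiscoverMembers=db-1:5000,db-2:5000,db-3:5000
A quick way to verify resolution from inside any replica (a sketch; the container name is illustrative, and getent is available in glibc-based images such as postgres):
docker exec -it <db-container> getent hosts db-2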

Related

Hiding a docker container behind OpenVPN, in docker swarm, with an overlay network

The goal: To deploy on docker swarm a set of services, one of which is only available for me when I am connected to the OpenVPN server which has also been spun up on docker swarm.
How can I, step by step, only connect to a whoami example container, with a domain in the browser, when I am connected to a VPN?
Background
The general idea would be to have, say, kibana and elasticsearch running internally, accessible only when on the VPN (rather like a corporate network), with other services running perfectly fine publicly as normal. These will all be on separate nodes, so I am using an overlay network.
I do indeed have OpenVPN running on docker swarm along with a whoami container, and I can connect to the VPN. However, it doesn't look like my IP is changing, and I have no idea how to make the whoami container available only when on the VPN, especially considering I'm using an overlay network, which is multi-host. I'm also using traefik, a reverse proxy which provides me with a mostly automatic Let's Encrypt setup (via DNS challenge) for wildcard domains. With this I can get:
https://traefik.mydomain.com
But I also want to connect to vpn.mydomain.com (which I can do right now), and then be able to visit:
https://whoami.mydomain.com
...which I cannot. Yet. I've posted my traefik configuration in a different place in case you want to take a look, as this thread will grow too big if I post it here.
Let's start with where I am right now.
OpenVPN
Firstly, the interesting thing about OpenVPN and docker swarm is that OpenVPN needs to run in privileged mode, because it has to make network interface changes among other things, and swarm services don't support cap_add yet. So the idea is to launch the container via a sort of 'proxy container' that runs the real container manually with these privileges added for you. It's a workaround for now, but it means you can deploy the service with swarm.
Here's my docker-compose for OpenVPN:
vpn-udp:
  image: ixdotai/swarm-launcher:latest
  hostname: mainnode
  environment:
    LAUNCH_IMAGE: ixdotai/openvpn:latest
    LAUNCH_PULL: 'true'
    LAUNCH_EXT_NETWORKS: 'app-net'
    LAUNCH_PROJECT_NAME: 'vpn'
    LAUNCH_SERVICE_NAME: 'vpn-udp'
    LAUNCH_CAP_ADD: 'NET_ADMIN'
    LAUNCH_PRIVILEGED: 'true'
    LAUNCH_ENVIRONMENTS: 'OVPN_NATDEVICE=eth1'
    LAUNCH_VOLUMES: '/etc/openvpn:/etc/openvpn:rw'
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock:rw'
  networks:
    - my-net
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I can deploy the above with docker stack deploy --with-registry-auth --compose-file docker/docker-compose.prod.yml my-app-name, which is what I'm using for the rest as well. Importantly, I cannot just deploy this as-is; it won't start yet. The OpenVPN configuration needs to exist in /etc/openvpn on the node, which is then mounted into the container, and I create it during provisioning:
# Note that you have to create the overlay network with --attachable for standalone containers
docker network create -d overlay app-net --attachable
# Create the config
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_genconfig -u udp://vpn.mydomain.com:1194 -b
# Generate all the VPN files, PKI setup, etc.
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn bash -c 'yes yes | EASYRSA_REQ_CN=vpn.mydomain.com ovpn_initpki nopass'
# Set up a client config and grab the .ovpn file used for connecting
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn easyrsa build-client-full client nopass
docker run -v /etc/openvpn:/etc/openvpn --log-driver=none --rm ixdotai/openvpn ovpn_getclient client > client.ovpn
So now, I have an attachable overlay network, and when I deploy this, OpenVPN is up and running on the first node. I can grab a copy of client.ovpn and connect to the VPN. Even if I check "send all traffic through the VPN" though, it looks like the IP isn't being changed, and I'm still nowhere near hiding a container behind it.
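One thing worth double-checking at this point (an assumption based on stock OpenVPN behavior, not something confirmed by this setup): whether the generated server config actually pushes a default route and a VPN-reachable DNS server to clients. In kylemanna-style images the server config lives in /etc/openvpn/openvpn.conf on the node:
# Standard OpenVPN server directives; the DNS address is purely illustrative.
push "redirect-gateway def1"           # send all client traffic through the tunnel
push "dhcp-option DNS 192.168.255.1"   # hand clients a DNS server reachable over the VPN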
Whoami
This simple container can be deployed with the following in docker-compose:
whoami:
  image: "containous/whoami"
  hostname: mainnode
  networks:
    - ${DOCKER_NETWORK_NAME}
  ports:
    - 1337:80
  deploy:
    placement:
      constraints:
        - node.hostname==mainnode
I put port 1337 there for testing, as I can visit my IP:1337 and see it, but this doesn't achieve my goal of having whoami.mydomain.com only resolving when connected to OpenVPN.
I can ping a 192.168 address when connected to the vpn
I ran the following on the host node:
ip -4 address add 192.168.146.16/24 dev eth0
Then, when connected to the VPN, I can reach this address! So it looks like something is working, at least.
How can I achieve the goal stated at the top? What is required? What OpenVPN configuration needs to exist, what network configuration, and what container configuration? Do I need a custom DNS solution as I suggest below? What better alternatives are there?
Some considerations:
I can make the domains, including the private whoami.mydomain.com, publicly resolvable. This means I would get HTTPS and wildcard certificates for them easily, I suppose? But my confusion here is: how can I have those domains resolve only on the VPN while still having TLS certs for them, without using a self-signed certificate?
I can also run my own DNS server for resolving. I have tried this, but I just couldn't get it working, probably because the VPN part isn't working properly yet. I found dnsmasq for this, and I had to add the aforementioned local IP to resolv.conf to get anything resolving locally. But domains would still not resolve when connected to the VPN, so it doesn't look like DNS traffic was going over the VPN either (even though I set it as such; my client is Viscosity). See the dnsmasq sketch after this list.
Some mention using a bridge network, but a bridge network does not work for multi-host
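For reference, the dnsmasq side of that experiment only needs a single line to pin the private domain to an internal address (a sketch; the IP is the test address from above, and would really have to be whatever VPN-reachable address fronts whoami):
# /etc/dnsmasq.conf
address=/whoami.mydomain.com/192.168.146.16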
Resources thus far (I will update with more)
- Using swarm-launcher to deploy OpenVPN
- A completely non-explanatory answer on stackexchange which I have seen referenced as basically unhelpful by multiple people across other Github threads, and one of the links is dead
So I was banging my head against a brick wall about this problem and sort of "solved" it by pivoting your idea:
Basically, I published the VPN container's proxy port on its host and enabled an HTTP proxy. This means that I can reach that proxy by visiting the IP of the machine where the VPN resides (i.e. the Docker host of the VPN container/stack).
Hang with me:
I used gluetun, but I think this also applies if you use OpenVPN; I just find gluetun easier.
Also, an IMPORTANT NOTE: I tried this in a single-host environment, but in theory it should also work in a multi-host situation, since I'm working with separate stacks. In a multi-host setup you probably need to use the public IP of the main Docker host.
1. Create the network
So, first of all, you create an attachable network for these Docker swarm stacks:
docker network create --driver overlay --attachable --scope swarm vpn-proxy
By the way, I'm starting to think that this step is superfluous, but I need to test it more.
2. Set the vpn stack
Then you create your vpn stack file; let's call it stack-vpn.yml.
(Here I used gluetun through the swarm-launcher "trick". This gluetun service connects to a VPN via WireGuard, and it also enables an HTTP proxy on port 8888; this port is mapped to its host by setting LAUNCH_PORTS: '8888:8888/tcp'.)
version: '3.7'
services:
  vpn_launcher:
    image: registry.gitlab.com/ix.ai/swarm-launcher
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    networks:
      - vpn-proxy
    environment:
      LAUNCH_IMAGE: qmcgaw/gluetun
      LAUNCH_PULL: 'true'
      LAUNCH_EXT_NETWORKS: 'vpn-proxy'
      LAUNCH_PROJECT_NAME: 'vpn'
      LAUNCH_SERVICE_NAME: 'vpn-gluetun'
      LAUNCH_CAP_ADD: 'NET_ADMIN'
      LAUNCH_ENVIRONMENTS: 'VPNSP=<your-vpn-service> VPN_TYPE=wireguard WIREGUARD_PRIVATE_KEY=<your-private-key> WIREGUARD_PRESHARED_KEY=<your-preshared-key> WIREGUARD_ADDRESS=<addrs> HTTPPROXY=on HTTPPROXY_LOG=on'
      LAUNCH_PORTS: '8888:8888/tcp'
    deploy:
      placement:
        constraints: [ node.role == manager ]
      restart_policy:
        condition: on-failure
networks:
  vpn-proxy:
    external: true
Notice that both the swarm-launcher and the gluetun containers use the previously created vpn-proxy network.
3. Set the workers stack
For the time being, we'll set up an example with 3 replicas of the alpine image (filename stack-workers.yml):
version: '3.7'
services:
  alpine:
    image: alpine
    networks:
      - vpn-proxy
    command: 'ping 8.8.8.8'
    deploy:
      replicas: 3
networks:
  vpn-proxy:
    external: true
They also use the vpn-proxy overlay network.
4. Launch our stacks
docker stack deploy -c stack-vpn.yml vpn
docker stack deploy -c stack-workers.yml workers
Once they are up, you can exec into any worker task and try the proxy via the IP of the host where it resides.
As I said before, theoretically this should work in a multi-host situation, but you probably need to use the public IP of the main Docker host (although, if they share the same overlay network, it could also work with the internal IP address (192.x.x.x)).
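To sanity-check the proxy from one of the workers, something like this should work (a sketch: the container name is illustrative, curl has to be installed into the alpine image first, and <docker-host-ip> is the address of the node running gluetun):
# Route a request through the VPN's HTTP proxy and print the egress IP:
docker exec -it <worker-container> sh -c 'apk add --no-cache curl && curl -s -x http://<docker-host-ip>:8888 https://ifconfig.me'
If the proxy is wired up correctly, the reported address is the VPN egress IP rather than the host's own.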

Connecting dockerized apps' networks to make API calls

I have a bit of a problem with connecting the dots.
I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.
Projects:
Project1 = using project1_appnet (bridge driver)
Project2 = using project2_appnet (bridge driver)
Project3 = using project3_appnet (bridge driver)
On my local machine, I have these 3 projects in 3 separate folders. Each project has its own app, db and cache services.
This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, only with different images and volume paths.)
version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Question:
How can I make them able to talk to one another via API calls, both locally for development and in the production environment?
In production the setup will be a bit different: the services will be on different machines, but still in the same VPC, or perhaps even communicating over the public network. What is the setup for that?
Note:
I have been looking at links, but apparently they are deprecated in v3, or at least not really recommended.
Tried curl from project1 container to project2 container, by doing:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused
If your final setup will be that each service will be running on a physically different system, there aren't really any choices. One system can't directly access the Docker network on another system; the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will be different in different environments, I'd suggest making that value a configured environment variable.
environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
Once you've settled on that, you can mostly use the same setup for a single-host deployment. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:
environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'
(If you use Linux on the desktop you will have to know some IP address for the host; not localhost because that means "this container", and not the docker0 interface address because that will be on a specific network, but something like the host's eth0 address.)
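One way to discover such an address from inside a container on Linux (a sketch; it assumes a typical bridge network, where the container's default gateway is an address on the host and published ports are reachable there):
# Inside the container: print the default gateway of its bridge network.
ip route | awk '/^default/ { print $3 }'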
Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.
version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
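A sketch of that default-network variant (the project names are assumptions; Compose would name app2's default network app2_default):
version: '3'
networks:
  app2_default:
    external: true
services:
  app:
    networks:
      - default
      - app2_default
    environment:
      SERVICE_2_URL: 'http://app2_app_1/'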
You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.
If you run this docker-compose locally, then, given app and db are on the same network (appnet), app should be able to talk to db using the service name and the container port: db:3306.
In production, if app and db are on different machines, app would need to talk to the database using an IP address or domain name.
Considering that you are using different machines for the different Docker deployments, you could put them behind a regular web server (Apache2, Nginx) and route the traffic for a specific domain to ${APP_PORT} using a simple vhost. I prefer to do that instead of directly exposing the container to the network. This way you would also be able to host multiple applications on the same machine (if you like to). So I suggest you not try to connect Docker networks, but "regular" ones.
I was playing around with inspect and cURL. I think I found the solution.
Locally:
On my local machine, I inspected the container and viewed NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1.
Then I got the exposed port, which is 8050.
Then I did a curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container. Or: docker exec -it project1_app_1 curl 172.25.0.1:8050/login
Vice versa, I did curl 172.25.0.1:80 for app2 -> app1. Or: docker exec -it project2_app_1 curl 172.25.0.1:80
The only issue is that the Gateway value can change when we restart via docker-compose up -d.
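Since the Gateway value moves around, the lookup can at least be scripted rather than hard-coded (a sketch; the network and container names follow the examples above):
# Print the gateway of project1's network as seen by its app container:
docker inspect -f '{{ (index .NetworkSettings.Networks "project1_appnet").Gateway }}' project1_app_1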
Production likewise:
I am not that pro with networking and stuff. My best guess for production would be:
Do curl app2-domain.com which is pointed to the app by the webserver as they are in their own machine (even with a load balancer).

Docker stack deploy using overlay network - inconsistent behavior

I am deploying 2 containers (application and SQL) to the same network using a docker-compose.yml file (Swarm stack deploy).
Most of the time, the application has no problems talking to the SQL via its host name as a datasource in the connection string.
However, there are times when it simply can't find it. In order to debug this, I have verified that the overlay network is indeed created on each node, and when inspecting the network on each node, I see that the container does belong to it.
Moreover, when I use docker exec to enter the application container and ping the SQL container, the host name does resolve to the correct IP, but there is still no response back.
This is extremely frustrating, as it only occurs from time to time.
Any suggestions on how to debug the issue?
version: '3.3'
services:
  sqlserver:
    image: xxxx:5000/sql_image
    hostname: sqlserver
    deploy:
      endpoint_mode: dnsrr
    networks:
      devnetwork:
        aliases:
          - sqlserver
  test:
    image: xxxx:5000/test
    deploy:
      endpoint_mode: dnsrr
      restart_policy:
        condition: none
      resources:
        reservations:
          memory: 2048M
    networks:
      - devnetwork
networks:
  devnetwork:
    driver: overlay
Service discovery and DNS problems under load are a known bug in swarm mode. We have hit this problem many times. You can find the open issues here and here.
If you run a network-heavy application, consider separating your worker and manager nodes. This helps the managers perform service discovery reliably.
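Separating the roles is mostly a matter of draining the managers so no application tasks land on them (standard Swarm commands; the node name is illustrative):
# Stop scheduling application tasks on a manager; existing tasks are rescheduled to workers.
docker node update --availability drain <manager-node-name>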
You can also change the service discovery component and use something like Consul or ZooKeeper as part of your stack implementation.
I would consider using a service mesh for service-to-service communication; Consul can do this for you. You can gain a lot of benefits from this design pattern, for example security and encrypted communication between services.
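For debugging the intermittent resolution itself, one approach is a throwaway container on the stack's network (a sketch: nicolaka/netshoot is a commonly used troubleshooting image, and the stack network must be attachable, e.g. by adding attachable: true under devnetwork in the compose file):
# Probe the service DNS directly from the overlay network (the stack name is an assumption):
docker run --rm -it --network <stack>_devnetwork nicolaka/netshoot dig sqlserver
docker run --rm -it --network <stack>_devnetwork nicolaka/netshoot ping -c 3 sqlserver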

Route to host machine instead of particular container

I have a simple docker-compose file:
version: '3'
services:
  app:
    container_name: app
    ports:
      - 8081:8081
  db:
    container_name: db
    ports:
      - 5432:5432
By default these containers are created on the default (bridge) network.
The app has a db connection property, jdbc:postgresql://db/some_db, and everything works perfectly. But from time to time I want the app to connect to another db that is running on my host machine, not in a Docker container.
The main problem is that I cannot change my connection properties. And, ideally, I do not want to run a new container with some additional options every time I want to switch the db host (but a restart is ok).
Hence my question: what is the best way to achieve this? Is it possible to set up an additional route for the containers' host resolution? For example, if the db container is unreachable, then route to the host.
You can access host services from your container.
See "How to access host port from docker container":
# On Linux, find the host's address on the default bridge:
ip addr show docker0
# On Docker for Mac (17.06+, June 2017), use the special hostname:
docker.for.mac.localhost
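On Docker Engine 20.10 and later there is also a portable option for Linux: map a hostname to the host via the special host-gateway value in extra_hosts. A sketch based on the question's compose file:
services:
  app:
    ports:
      - 8081:8081
    extra_hosts:
      # host.docker.internal now resolves to the host, on Linux too
      - "host.docker.internal:host-gateway"
The connection string would then point at host.docker.internal whenever the host database should be used (which does mean changing the connection property, so treat this as one option rather than a drop-in fix).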
if db container is unreachable, then route to host.
That is a job for an orchestrator.
For instance, with Kubernetes, you can associate an external load balancer, which could be tuned to redirect traffic to your pod and fall back when it is not accessible.

How to fetch the IPs of a service in a Docker swarm cluster?

I am running a Docker swarm mode cluster with 2 nodes, and deploy 5 services: [mysql, mongo, app]. I wish to fill the db with an Ansible script from my manager node, but I cannot get an IP from the nodes to access the db services in their containers.
e.g.:
mysql -h {{ mysql_service_host }} ....
How do I get the container IP or the service IP from a node?
Is it possible to use host mode networking in Docker swarm?
For services (containers) that are part of the same network, you can simply use the service name; Docker includes a DNS resolver that handles IP resolution. You will need to make your services part of an overlay network, which can span more than one node.
Eg:
services:
  myapp:
    image: myimage:1.0
    deploy:
      replicas: 1
    networks:
      - privnet
  maindb:
    image: mysql
    deploy:
      replicas: 1
    networks:
      - privnet
networks:
  privnet:
    driver: overlay
This creates an overlay network with two services. The corresponding containers could be created on any node; it doesn't matter where, since they will all be able to communicate with each other as part of the same overlay network.
Within myapp, you can use maindb as a DNS name for the mysql service. Docker will resolve it to the proper IP within the privnet network.
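To answer the "fetch the IPs" part directly: from any container attached to privnet, Docker's embedded DNS exposes both the service VIP and the individual task IPs (the exec target is illustrative, and this assumes the image ships nslookup):
# "maindb" resolves to the service's virtual IP;
# "tasks.maindb" returns one A record per running task.
docker exec -it <myapp-container> nslookup maindb
docker exec -it <myapp-container> nslookup tasks.maindb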
btw, a swarm cluster with 2 nodes doesn't buy you much: the Raft consensus protocol needs at least 3 manager nodes to tolerate the loss of one. https://raft.github.io
