Isolate containers connected to traefik overlay network on swarm - docker

I have multiple stacks running in docker swarm with traefik, where services in each stack are connected to an overlay network (traefik-net) so traefik can talk to them.
If each stack has a service with the same name (service1), and another service (service2) in either stack tries to reach it by that service name (ping http://service1), the request will sometimes hit service1 in the other stack and sometimes service1 in the same stack.
docker network create --driver overlay traefik-net
stack1:

services:
  service1:
    networks:
      - default
      - traefik-net
  service2:
    networks:
      - default
      - traefik-net
networks:
  traefik-net:
    external: true

stack2:

services:
  service1:
    networks:
      - default
      - traefik-net
networks:
  traefik-net:
    external: true
I want service2 to only hit service1 that is in the same stack.
I assumed that a service could only reach a service in another stack by prefixing the stack name to the service name (ping http://stack2_service1). But I learned that, because of the traefik-net overlay network, they can apparently call each other without the stack name prefix.
Is there a way to turn off service communication across stacks without stack name prefixes?
Or maybe there's a traefik specific solution to the problem?
If anyone has run into this problem, I would very much appreciate a solution.

Yes, there is a solution to what you want to achieve; you just need to make proper use of overlay networks.
By default, all services connected to the same overlay network can talk to and resolve each other.
So let's look at your current implementation: you have one network, traefik-net, and all of your services from both stacks are connected to it, which is why the names collide.
To isolate the services of different stacks while keeping them reachable by traefik, create a separate overlay network for each stack and attach the traefik service to all of those networks by declaring them as external in the traefik stack file.
With this layout, traffic between different stacks is only possible via the traefik service, never directly.
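For illustration, here is a minimal sketch of that layout. The per-stack network names (stack1-net, stack2-net) and the traefik image tag are assumptions for the example, not names from your setup, and only the networking parts of the files are shown; the networks are created up front exactly like your existing traefik-net:

docker network create --driver overlay stack1-net
docker network create --driver overlay stack2-net

stack1:

services:
  service1:
    networks:
      - default
      - stack1-net
  service2:
    networks:
      - default
      - stack1-net
networks:
  stack1-net:
    external: true

traefik stack:

services:
  traefik:
    image: traefik:v2.10    # whichever image/version you already run
    networks:
      - stack1-net
      - stack2-net
networks:
  stack1-net:
    external: true
  stack2-net:
    external: true

With this, service2 in stack1 only ever resolves the service1 in its own stack, while traefik can still reach the services of every stack.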

I found a similar question: Docker DNS with Multiple Projects Using the Same Network
There doesn't seem to be any solution other than to always hit stack_service instead of service.
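For example (using the stack and service names from the question), service2 would always address each copy by its stack-qualified name:

curl http://stack1_service1
curl http://stack2_service1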

Related

Docker-compose: Docker containers can't connect using service names

I have 3 containers. One is a lighttpd server serving static content (front). I have 2 Flask servers handling the backend (back and model).
This is my docker-compose.yml
version: "3"
services:
front:
image: ecd3:latest
ports:
- 4200:80
tty: true
links:
- "back"
depends_on:
- back
networks:
- mynet
back:
image: esd3:latest
ports:
- 5000:5000
links:
- "model"
depends_on:
- model
networks:
- mynet
model:
image: mok:latest
ports:
- 5001:5001
networks:
- mynet
networks:
mynet:
I'm trying to send an http request to my flask server (back) from my frontend (front). I have bound the flask server to 0.0.0.0 and even used the service name in the frontend (http://back:5000/endpoint)
Trying to curl the flask server inside the frontend container (curl back:5000) gives me this:
curl: (52) Empty reply from server
Pinging the flask server from inside the frontend container works. This means that the connection must have been established.
Why can't I connect to my flask server from my frontend?
We discovered several things in the comments. Firstly, that you had a proxy problem that prevented one container from using the API in another container.
Secondly, and critically, you discovered that the service names in your Docker Compose configuration file are made available in the virtual networking system set up by Docker. So, you can ping front from back and vice-versa. Importantly, it's worth noting that you can do this because they are on the same virtual network, mynet. If they were on different Docker networks, then by design the DNS names would not be available, and the virtual container IP addresses would not be reachable.
Incidentally, since you have all of your containers on the same network, and you have not changed any network settings, you could drop this network for now. In other words, you can remove the networks definition and the three container references to it, since they can just join the default network instead.
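For illustration, the simplified file would look like this (a sketch; everything else stays as in your original file, only the network definition and references are removed):

version: "3"
services:
  front:
    image: ecd3:latest
    ports:
      - 4200:80
    tty: true
    links:
      - "back"
    depends_on:
      - back
  back:
    image: esd3:latest
    ports:
      - 5000:5000
    links:
      - "model"
    depends_on:
      - model
  model:
    image: mok:latest
    ports:
      - 5001:5001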
Thirdly, you learned that Docker's virtual DNS entries are not made available on the host, and so front and back are not available there. Even if they were (e.g. if manual entries were made in the hosts file), those IPs would not work, since there is no direct networking route from the host to the containers.
Instead, those containers are exposed by a Docker device that proxies connections from a custom localhost port down to those containers (4200, 5000 and 5001 in your case).
A good interim solution is to load your frontend at http://localhost:4200 and hardwire its API address as http://localhost:5000. You may have some CORS issues with that though, since browsers will see these as different servers.
Moreover, if you go live, you may have some problems with mobile networks and corporate firewalls: you will probably want your frontend app to sit on port 443, but since the API is a separate server, you will either need a different IP address for it, so it can also go on 443, or you will need to use another port. A clean solution is to put a frontend proxy in front of both containers and then expose only the proxy in Docker. This will send HTTP requests from the outside to the correct container, depending on filtering criteria set by you. I recommend Traefik for this, but there are undoubtedly several other approaches.
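For illustration, a hedged sketch of that proxy approach using Traefik v2 (the image tag, the router names and the /api path split are assumptions for the example, not part of your setup):

version: "3"
services:
  proxy:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - 80:80    # only the proxy is published to the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  front:
    image: ecd3:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.front.rule=PathPrefix(`/`)"
      - "traefik.http.services.front.loadbalancer.server.port=80"
  back:
    image: esd3:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.back.rule=PathPrefix(`/api`)"
      - "traefik.http.services.back.loadbalancer.server.port=5000"

With this, the browser only ever talks to the single published port on the proxy, which sidesteps the CORS and port-443 issues described above.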

Problems with network in Docker Swarm

I have been trying to play, on another machine on the same network, the video served by this container, which is deployed with docker-compose via swarm.
services:
  vlc:
    image: boydachina/vlc-server
    ports:
      - 8080:8080
      - 8554:8554
    networks:
      - vlc_net
    command:
      - cvlc -vvv /opt/vlc-media/python.mp4 --sout '#transcode{vcodec=h264,acodec=mpga,ab=128,channels=2,samplerate=44100}:rtp{sdp=rtsp://:8554/}'
    volumes:
      - ./media:/opt/vlc-media/
networks:
  vlc_net:
But it is as if there were no network path between the container and the other machine. I thought that putting it in bridge mode would solve it, but I found that you can't use bridge mode with Docker Swarm. I need to play the video on several machines on the network; does anyone have any solutions?
Before you deploy the stack to the swarm, create a Docker Network with the overlay driver (note that network names must be unique):
docker network create --driver overlay vlc_net
This will create an overlay network that spans the entire swarm.
Then try setting the network options like this:
networks:
  vlc_net:
    driver: overlay
    external: true
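For reference, a minimal sketch of how the stack file can look once the network has been created externally with the command above (when a network is marked external its driver is fixed at creation time, so the driver line can also be omitted):

services:
  vlc:
    image: boydachina/vlc-server
    ports:
      - 8080:8080
      - 8554:8554
    networks:
      - vlc_net
    volumes:
      - ./media:/opt/vlc-media/
networks:
  vlc_net:
    external: true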
It might also help you to look at how Traefik manages its network in a docker swarm and try to replicate it, since all containers in a swarm can connect to Traefik, and that seems like the use case you are trying to solve.

give node_exporter container access to network statistics from the host

I would like to use prometheus's node_exporter in a container, but also have access to the host network's statistics.
As I'm using prometheus and the other modules such as alertmanager or mysql-exporter also in containers, I don't want to simply use the host network, because then I wouldn't be able to easily connect prometheus to the node-exporter in a consistent way, given that prometheus is also deployed as a container. I'd like to use the docker logic (identifying services by names).
This is the relevant part of the docker-compose:
node_exporter:
  build: "/root/prometheus-stack/node_exporter"
  image: node_exporter:v0.18.1
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
  command:
    - '--log.level=debug'
    - '--path.procfs=/host/proc'
    - '--path.sysfs=/host/sys'
    - '--collector.filesystem.ignored-mount-points'
    - '^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)'
  networks:
    - back-tier
  restart: always
As you can see, I'm mounting the /proc pseudo-filesystem, but unfortunately the statistics that I see are confined to the container's, not the host's. So I'm always going to see eth0, for instance, which is the container's own interface as seen from inside.
Is there any way I could get around this so that I have a fully containerised prometheus stack?
The other solution is cAdvisor, which uses Docker's socket, and through that I can get network statistics for the server's main network interface, for instance. But that's not really nice.
On the server I also have containerised reverse-proxies, so it would make it even harder to install prometheus and its modules directly on the host.
So any ideas how I can smoothly give access to node-exporter to the host's network statistics?
Thank you!

Connecting dockerized apps network to do api call

I have a bit of a problem with connecting the dots.
I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.
Projects:
Project1 = using project1_appnet (bridge driver)
Project2 = using project2_appnet (bridge driver)
Project3 = using project3_appnet (bridge driver)
On my local, I have these 3 projects on 3 separates folders. Each project will have their own app, db and cache services.
This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, only with different image names and volume paths.)
version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Question:
How can I make them able to talk to one another via API calls? (Both on my local machine for development and in the production environment.)
In production, the setup will be a bit different: they will be on different machines, but still in the same VPC, or may even communicate over the public network. What is the setup for that?
Note:
I have been looking at links, but apparently they are deprecated in v3, or at least not really recommended.
Tried curl from project1 container to project2 container, by doing:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused
If your final setup will be that each service will be running on a physically different system, there aren't really any choices. One system can't directly access the Docker network on another system; the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will be different in different environments, I'd suggest making that value a configured environment variable.
environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
Once you've settled on that, you can use the same setup for a single-host deployment, mostly. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:
environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'
(If you use Linux on the desktop you will have to know some IP address for the host; not localhost because that means "this container", and not the docker0 interface address because that will be on a specific network, but something like the host's eth0 address.)
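For example, on a Linux desktop you could look up such an address by hand (the interface name and address here are purely illustrative):

ip -4 addr show eth0   # note the inet address, e.g. 192.168.1.23

and then set SERVICE_2_URL: 'http://192.168.1.23:8082/' as in the previous snippet.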
Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.
version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
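A hedged sketch of that default-network variant (the project names app1/app2 are assumed, as above):

version: '3'
networks:
  app2_default:
    external: true
services:
  app:
    networks:
      - default
      - app2_default
    environment:
      SERVICE_2_URL: 'http://app2_app_1/'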
You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.
If you run this docker-compose locally, given that app and db are on the same network (appnet), app should be able to talk to db using the service name db on port 3306 (not localhost, which inside the app container refers to the app container itself).
In production, if app and db are on different machines, app would probably need to talk to the database using an IP address or domain name.
Considering that you are using different machines for the different Docker deployments, you could put them behind a regular web server (Apache2, Nginx) and route the traffic for each domain to $APP_PORT using a simple vhost. I prefer that over directly exposing the container to the network. This way you would also be able to host multiple applications on the same machine, if you like. So I suggest you not try to connect Docker networks, but "regular" ones.
Was playing around with inspect and cURL. I think I found the solution.
Locally:
I inspected the container and looked at NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1.
Then I got the exposed port, which is 8050.
Then I ran curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container. Or: docker exec -it project1_app_1 curl 172.25.0.1:8050/login
Vice versa, I did curl 172.25.0.1:80 for app2 -> app1. Or: docker exec -it project2_app_1 curl 172.25.0.1:80
The only issue is that the Gateway value changes when we restart via docker-compose up -d.
In production:
I am not that pro with networking and stuff. My estimate for production would be:
Run curl app2-domain.com, which the web server routes to the app, since each app runs on its own machine (possibly behind a load balancer).

Docker stack deploy using overlay network - inconsistent behavior

I am deploying 2 containers (application and SQL) to the same network using a docker-compose.yml file (Swarm stack deploy).
Most of the time, the application has no problem talking to the SQL container via its host name, used as the data source in the connection string.
However, there are times where it simply can't find it. In order to debug it, I have verified that the overlay network is indeed created in each node, and when inspecting the network on each node, I see that the container does belong to this network.
Moreover, when I use docker exec to enter the application container and try to ping the SQL container, the host name does resolve to the correct IP, but there is still no response back.
This is extremely frustrating, as it only occurs from time to time.
Any suggestions on how to debug the issue?
version: '3.2'
services:
  sqlserver:
    image: xxxx:5000/sql_image
    hostname: sqlserver
    deploy:
      endpoint_mode: dnsrr
    networks:
      devnetwork:
        aliases:
          - sqlserver
  test:
    image: xxxx:5000/test
    deploy:
      endpoint_mode: dnsrr
      restart_policy:
        condition: none
      resources:
        reservations:
          memory: 2048M
    networks:
      - devnetwork
networks:
  devnetwork:
    driver: overlay
Service discovery and DNS problems under load are a known bug in swarm mode. We have hit this problem many times. You can find open issues here and here.
If you run network-heavy applications, consider separating your worker and manager nodes. It will help the managers perform service discovery reliably.
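For example, one common way to keep service tasks off a manager is to drain it (the node name is illustrative):

docker node update --availability drain manager-node-1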
You can change the service discovery component and use something such as Consul or ZooKeeper as part of your stack implementation.
I would also consider using a service mesh for service-to-service communication; Consul can do this for you. You can gain a lot of benefits from this design pattern, for example security and encrypted data communication.
