I successfully run Traefik as a reverse proxy for many containers.
I also run a Dockerized Prosody (XMPP server) with published ports (5222 and 5269) in the Traefik network.
Since I want it to work with strict TLS and no STARTTLS, I use a HostSNI rule and the tcp protocol (instead of http) in the Traefik labels, and I pass the TLS through because it is terminated in the XMPP instance itself (not in Traefik); the certificates are placed there. It works.
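The working label set looks roughly like this (a sketch; the router/service names are mine, xmpp.example.org stands in for my real hostname, and the xmpp-c2s entrypoint is assumed to be defined on port 5222 in Traefik's static configuration):

labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.xmpp-c2s.entrypoints=xmpp-c2s"
  - "traefik.tcp.routers.xmpp-c2s.rule=HostSNI(`xmpp.example.org`)"
  # TLS is terminated by Prosody itself, so Traefik only inspects the SNI and passes the bytes through:
  - "traefik.tcp.routers.xmpp-c2s.tls=true"
  - "traefik.tcp.routers.xmpp-c2s.tls.passthrough=true"
  - "traefik.tcp.routers.xmpp-c2s.service=xmpp-c2s-svc"
  - "traefik.tcp.services.xmpp-c2s-svc.loadbalancer.server.port=5222"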
Soon I realized I might have done it all wrong: when I wanted to compare it with a second XMPP service (ejabberd) behind Traefik, a port conflict (5222 and 5269 already in use) made me wonder why I had published the ports at all on a bridged network with Traefik in the first place.
So I removed them for Prosody (no ports:, no expose: in Prosody's docker-compose file). But now it can't be reached anymore, although I have set the labels and service for the Traefik rules accordingly. (Portainer > Inspect > NetworkSettings > Ports shows the ports are still recognized, i.e. declared in the image.)
First conclusion: I think traffic is not going through Traefik at all (local DNS points to the Docker host, which has the published ports). But if I stop Traefik, XMPP clients won't connect via the published ports either, so it must still go through the Traefik proxy, right? How can one actually check/probe that? And why do I need the ports exposed/published in Docker (I tried every combination) for this use case to work solely through Traefik?
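One way I found to probe the path is to turn on Traefik's logging and watch it while a client connects — a minimal sketch of the static configuration (a file-based configuration and a container named traefik are assumed):

# traefik.yml (static configuration)
accessLog: {}      # log the requests/connections Traefik actually handles
log:
  level: DEBUG     # TCP router activity mostly shows up at debug level

Then run docker logs -f traefik while an XMPP client connects: if the client connects but nothing shows up in the log, the traffic is hitting the published port on the host directly instead of going through the proxy.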
I decided to leave Prosody with strict TLS (as is, including the published ports, which make no sense to me) and to enable STARTTLS with ejabberd, adding new entrypoints for ports 5221 and 5268, including mapping them in Traefik. But no connection succeeds. What am I missing in the concept? (The standalone ejabberd container works fine.)
labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.xmpp2-c2c.entrypoints=xmpp2"
  - "traefik.tcp.routers.xmpp2-c2c.rule=Host(`local.lan`)"
  #- "traefik.tcp.routers.xmpp2-c2c.rule=HostSNI(`local.lan`)"
  #- "traefik.tcp.routers.xmpp2-c2c.tls=true"
  #- "traefik.tcp.routers.xmpp2-c2c.tls.passthrough=true"
  - "traefik.tcp.routers.xmpp2-c2c.service=xmpp2-c2c-service"
  - "traefik.tcp.services.xmpp2-c2c-service.loadbalancer.server.port=5222"
Why should I use Traefik at all, then? Maybe I should not put the XMPP servers behind a reverse proxy, since this misses its primary goal (serving different services on the same domain via TLS termination). But for the sake of comprehension I still expect it to work with Traefik. Or is there anything else I forgot to think about?
Related
From the Docker documentation:
--publish or -p flag. Publish a container's port(s) to the host.
--expose. Expose a port or a range of ports.
--link. Add link to another container. Is a legacy feature of Docker. It may eventually be removed.
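In compose terms, my understanding of the difference between publish and expose is roughly this (a sketch; images and service names are just examples):

services:
  web:
    image: nginx
    ports:
      - "8080:80"        # published: a host port forward, reachable from outside the host
  worker:
    image: example/app   # hypothetical image
    expose:
      - "9000"           # documentation only: other containers on a shared network can
                         # reach any listening port regardless of this entry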
I am using docker-compose with several networks. I do not want to publish any ports to the host, yet when I use expose, the port is exposed to all the networks the container is connected to. After a lot of testing and reading, I cannot figure out how to limit this to a specific network.
For example, in this docker-compose file, container1 joins the following three networks: internet, email, and database.
services:
  container1:
    networks:
      - internet
      - email
      - database
Now what if I have one specific port that I want to expose ONLY to the database network, so NOT to the host machine and also NOT to the email and internet networks in this example? If I use ports: on container1, the port is exposed to the host, or I can bind it to a specific IP address of the host. I also tried making a custom overlay network, giving the container a static IPv4 address, and setting the ports in that format in ports:, like - '10.8.0.3:80:80', but that did not work either, because I think the binding can only happen to a HOST IP address. If I use expose: on container1, the port will be exposed to all three networks: internet, email, and database.
I am aware I can write custom firewall rules, but it annoys me that I cannot express such a simple config in my docker-compose file. Also, maybe something like 80:10.8.0.3:80 (HOST_PORT:CONTAINER_IP:CONTAINER_PORT) would make perfect sense here (I did not test it).
Am I missing something, or is this really not possible in Docker and docker-compose?
Also posted here: https://github.com/docker/compose/issues/8795
No, container-to-container networking in Docker is one-size-fits-many. When two containers are on the same network, and ICC (inter-container communication) has not been disabled, container-to-container communication is unrestricted. Given Docker's push into the developer workflow, I don't expect much development effort to change this.
This is handled by other projects like Kubernetes by offloading the networking to a CNI plugin, where various vendors support network policies. The implementation may be iptables rules, eBPF code, some kind of sidecar proxy, etc., but it has to be done as the container networking is set up, and Docker doesn't have the hooks for you to implement anything there.
Perhaps you could hook into docker events and run various iptables commands against containers after they've been created. The application could also be configured to listen only on the specific IP address of the network it trusts, but this requires injecting the trusted subnet and then looking up your container's IP in your entrypoint; that is non-trivial to script, and I'm not even sure it would work. Otherwise, this is solved either by restructuring the application so that the components that need to be on a less secure network are minimized, by hardening the sensitive ports, or by switching the runtime over to something like Kubernetes with a network policy.
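For comparison, here is a rough sketch of what that looks like as a Kubernetes NetworkPolicy (all labels and the port are made-up examples): it admits ingress to the database pods on 5432 only from pods labelled app: backend.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-only-from-backend
spec:
  podSelector:
    matchLabels:
      app: database        # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend # the only allowed client pods
      ports:
        - protocol: TCP
          port: 5432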
Things that won't help:
Removing exposed ports: this won't help, since expose is just documentation. Changing exposed ports doesn't change the networking between containers, or between the container and the host.
Links: links are a legacy feature that adds entries to the hosts file when the container is created. This was replaced by user-defined networks with DNS resolution of other containers.
Removing published ports on the host: this doesn't impact container-to-container communication. The published port with -p creates a port forward from the host to the container, which you do want to limit, but containers can still communicate over a shared network without that published port.
The answer to this for me was to remove the -p option, as that binds the container's port to the host and makes it available outside the host.
If you don't specify -p options, the container is available on all the networks it is connected to, on whichever port or ports the application is listening on. It seems that -p forces the port onto the host and binds it to the port specified.
In your example, if you don't use -p when starting container1, container1 would be available to the internet, email, and database networks on all its ports, but not outside the host.
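A minimal compose sketch of that (service and image names are examples): app can reach db over the shared network, but nothing is forwarded from the host.

services:
  app:
    image: example/app    # hypothetical image
    networks:
      - database
  db:
    image: mariadb
    networks:
      - database
    # no ports: entry here, so no host port forward is created
networks:
  database: {}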
I have three Tomcat containers running on different bridge networks with different subnets and gateways.
For example:
container1 172.16.0.1 bridge1
container2 192.168.0.1 bridge2
container3 192.168.10.1 bridge3
These containers are published on different host ports: 8081, 8082, and 8083.
Is there any way to run all three containers on the same port, 8081?
If it is possible, how can I do it in Docker?
You need to set up a reverse proxy. As the name suggests, this is a proxy that works in the opposite way from a standard proxy: while a standard proxy takes requests from the internal network and serves them from external networks (the internet), a reverse proxy takes requests from the external network and serves them by fetching information from the internal network.
There are multiple applications that can serve as a reverse proxy, but the most used are:
NginX
Apache
HAProxy mainly as a load-balancer
Envoy
Traefik
The majority of these reverse proxies can run as another container in your Docker setup. Some of these tools are easy to get started with, since there is an ample amount of tutorials.
A reverse proxy does more than just expose a single port and forward traffic to back-end ports. It can manage and distribute the load (load balancing), change the URI arriving from the client into a URI that the back-end understands (URL rewriting), change the response from the back-end (content rewriting), etc.
Reverse proxying HTTP/HTTPS traffic
Assuming you have HTTP services, what you need to do in your example to set up a reverse proxy is the following:
Decide which tool to use. For a beginner, I suggest NginX.
Create a configuration file for the proxy which takes requests on port 80 and distributes them to ports 8081, 8082, and 8083. Since the containers are on different networks, you will need to decide whether to forward the traffic to their IP addresses (which I don't recommend, since the IPs can change) or to publish the ports on the host and use the host IP. Another alternative is to run all of them on the same network.
Depending on the case, you may need to set up the X-Forwarded-* headers and/or URL rewriting and content rewriting.
Run the proxy container and publish its port 80 as 8080 (if you publish the back-end containers on the host, 8081 will already be taken). A compose-level sketch of these steps follows below.
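A minimal docker-compose sketch, assuming the three Tomcats are moved onto one shared network and the nginx.conf (not shown) proxies to tomcat1:8080, tomcat2:8080, and tomcat3:8080 by path or hostname:

services:
  proxy:
    image: nginx
    ports:
      - "8080:80"   # the only port published on the host
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - proxy-net
  tomcat1:
    image: tomcat
    networks:
      - proxy-net   # reachable by the proxy as tomcat1:8080, not published on the host
  tomcat2:
    image: tomcat
    networks:
      - proxy-net
  tomcat3:
    image: tomcat
    networks:
      - proxy-net
networks:
  proxy-net: {}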
Reverse proxying TCP/UDP traffic
If you have non-HTTP services (raw TCP or UDP services), then you can use HAProxy. The steps are the same apart from configuration step #2: the configuration differs due to the non-HTTP nature of the traffic, and you can find an example in this SO answer.
Let's say there are two MariaDB containers running on the same host of a Docker swarm. Each container has its internal port 3306, which is dynamically published to e.g. 30004 and 30056.
I'd like an external container (not defined in the stack) to access the database of one stack by hostname and a fixed port, e.g. mariadb_s1:3306 (redirected to the MariaDB of stack 1 on port 30004), as shown in the following picture.
We also have a Traefik instance running on the Docker host. Is Traefik capable of creating these routes?
I don't think Traefik supports TCP proxying at the moment, but it seems to be planned: https://github.com/containous/traefik/issues/10
But even with TCP proxy support, it might be hard to route based on hostname, as I don't think the MySQL protocol includes the hostname (I might be wrong). If so, one solution could be to use TLS and route based on SNI.
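(Traefik has since gained TCP support in v2; SNI-based routing then looks roughly like this — a sketch, assuming the MariaDB container terminates TLS itself and a mariadb entrypoint is defined on a fixed port in the static configuration. As the answer notes, this only works when the client speaks TLS and sends SNI.)

labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.mariadb-s1.entrypoints=mariadb"
  - "traefik.tcp.routers.mariadb-s1.rule=HostSNI(`mariadb_s1`)"   # hostname from the question
  - "traefik.tcp.routers.mariadb-s1.tls.passthrough=true"         # MariaDB terminates TLS
  - "traefik.tcp.routers.mariadb-s1.service=mariadb-s1"
  - "traefik.tcp.services.mariadb-s1.loadbalancer.server.port=3306"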
I can't figure out how to implement this, if it's even possible:
I want to allow the Traefik container to expose ports only on the Traefik network.
Does anyone know how to achieve this?
EDIT:
To clarify, my question isn't technical and isn't about Docker, but about Traefik. Since Traefik supports Docker (a dynamic environment), is it capable of exposing ports only on one Docker network, with the dynamic IP address it receives there? If it is, please explain how to achieve that (presumably one configuration line, or one parameter added to the container deployment). If it isn't, then it's a nice toy for development but not enterprise-ready, since it can't handle security in dynamic environments.
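What I would hope for is something like binding an entrypoint to a single address, as in this static-config sketch (the address is made up). That only helps if the address is static, and a static address is exactly what a dynamic Docker network doesn't give you:

# traefik.yml (static configuration)
entryPoints:
  web:
    address: "10.8.0.3:80"   # listen on one address only, instead of ":80" (all interfaces)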
I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers, each on its own port, with one Kibana 3 container serving on port 80.
I therefore want to send requests on the specified ports 5602, 5603, and 5604 to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at the haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
    bind *:5603
    mode http
    default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
    mode http
    timeout check 2000
    option httpchk GET /status HTTP/1.1
    server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
    server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
    server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are all three Kibana containers. The first server has a health check on it, but none of the others do (Kibana 3 and Kibana 4.1 don't have a /status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, but all three appear listed; I assume this is partly behind the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums, as that was suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue that was similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancers will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant.
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from rancher github issue, not my workaround)
I'm going to see how easy it would be to route via port, and raise a PR/GitHub issue, as I think it's a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed by the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port by routing it through a proxy like nginx.