Looking for an example docker-compose file to have Traefik reverse proxy both a container and a non-container service

I want to be able to use Traefik to reverse proxy both container and non-container services, and I’d like to use a docker-compose file so the whole thing is easily set up and torn down. I thought this would be a common request, but I can’t find a unified example. And since I’m still really new to Docker, this is a little outside my wheelhouse. Ideally the docker-compose file would:
Install the Traefik container, including authentication so that Traefik can be managed through its web UI
Have Traefik use Let's Encrypt to generate and maintain the SSL certificates it uses to reverse proxy both Docker and non-Docker services
Install a sample container (like Apache) that is labeled so that Traefik will reverse proxy it to https://apache.example.com (HTTP automatically redirects)
Reverse proxy a non-container service at http://192.168.1.15:8085 to https://foobar.example.com (HTTP automatically redirects)
I’ve seen plenty of examples of how to use Traefik and label new containers so that they are reverse proxied, but precious few on how to reverse proxy non-Docker services. I’m sure I’m not the only one who would appreciate an example that does both at the same time.
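For reference, here is a minimal sketch of what such a compose file might look like, assuming Traefik v2 syntax; example.com, the ACME e-mail address, and the basic-auth hash are placeholders to substitute. HTTP is redirected to HTTPS globally, and certificates are obtained via the HTTP-01 challenge:

version: "3"

services:
  traefik:
    image: traefik:v2.10
    command:
      # dashboard + providers
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --providers.file.filename=/etc/traefik/dynamic.yml
      # entrypoints, with a global HTTP -> HTTPS redirect
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      # Let's Encrypt
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./dynamic.yml:/etc/traefik/dynamic.yml:ro
      - ./letsencrypt:/letsencrypt
    labels:
      - traefik.enable=true
      # the dashboard itself, behind basic auth
      - traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)
      - traefik.http.routers.dashboard.service=api@internal
      - traefik.http.routers.dashboard.entrypoints=websecure
      - traefik.http.routers.dashboard.tls.certresolver=le
      - traefik.http.routers.dashboard.middlewares=dashauth
      # placeholder htpasswd hash; generate your own (note the doubled $ for compose)
      - traefik.http.middlewares.dashauth.basicauth.users=admin:$$apr1$$REPLACE$$REPLACEME

  apache:
    image: httpd:2.4
    labels:
      - traefik.enable=true
      - traefik.http.routers.apache.rule=Host(`apache.example.com`)
      - traefik.http.routers.apache.entrypoints=websecure
      - traefik.http.routers.apache.tls.certresolver=le

The non-container service is handled by Traefik's file provider via the mounted dynamic.yml:

http:
  routers:
    foobar:
      rule: Host(`foobar.example.com`)
      entryPoints:
        - websecure
      service: foobar
      tls:
        certResolver: le
  services:
    foobar:
      loadBalancer:
        servers:
          - url: http://192.168.1.15:8085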

Related

How to deal with changing IPs of Docker Compose containers?

I've set up an app where a Node.js backend has to communicate with a Rasa chatbot backend through a React frontend. All services run through the same docker-compose. Being a Docker beginner, there are some things I'm not sure about:
communication between the host and a container is done using the container's IP:
the browser opens the local React server running on localhost:3000 or 172.22.0.1:3000
the browser sends requests to the Express backend on localhost:4000 or 172.22.0.2:4000
however, communication between two Docker containers is done using the container's name:
the Rasa server communicates with the Rasa custom action server through http://action_server:5055/webhooks
the Rasa custom action server communicates with the Express backend through http://backend_name:4000/users/
My problem is that when I need to contact the Rasa backend from my React frontend, I have to put in the Rasa Docker container's IP, which (sometimes) changes when docker-compose is re-initialized. To work around this I run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app_rasa_1 to get the IP and manually change it in the React frontend.
Is there a way to avoid changing the IP altogether and use the container name (or an alias/link) instead? Or what would be a way to automate updating the container's IP in the React frontend (are environment variables updated via a script an option?)
Completely ignore the container-private IP addresses. They're implementation details that have several practical problems, including (as you note) them changing when a container is recreated. (They're also unreachable on non-Linux hosts, or if the browser isn't on the same host as the containers.)
You show the correct patterns in your question. For calls between containers, use the container names as host names (this setup is described in more detail in Networking in Compose). For calls from outside containers, including from browser-based applications, use the host's DNS name or IP address and the first number from the ports: you publish.
If the browser application needs to contact a back-end server, it needs a path to do this. This could be via published ports:, or one of your other components could proxy the request to the service (maybe using the express-http-proxy middleware).
A dedicated container that only proxies to other backend services is also a useful pattern (Docker Nginx Proxy: how to route traffic to different container using path and not hostname includes some examples), particularly since this will let you use path-only URLs like /api or /rasa in your browser application. If the React application is served from http://localhost:8080/, and the main backend is http://localhost:8080/api, then you can just make HTTP requests to /api and they will be interpreted relative to the page's URL. This avoids the hostname problem completely, so long as your reverse proxy has path-based routes to every container you need to directly contact.
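As a concrete sketch of both patterns (service names and ports are modeled on the question; the build paths and Rasa's REST port 5005 are assumptions):

version: "3"
services:
  backend:
    build: ./backend
    ports:
      - "4000:4000"   # the browser reaches this as http://localhost:4000
  action_server:
    build: ./actions
    # no ports: only other containers call it, as http://action_server:5055
  rasa:
    build: ./rasa
    ports:
      - "5005:5005"   # browser code calls http://localhost:5005, never a container IP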

Accessing Apache NiFi through a Traefik load balancer on Docker Swarm

Trying to set up an Apache NiFi Docker container with Traefik as the load balancer over a Docker Swarm network. We are able to access the web UI, but while browsing through the UI it redirects to the Docker-internal host instead of the proxy host name. Per the linked NiFi thread, it looks like we need to pass HTTP headers from the proxy, but we couldn't find a way to set them through Traefik. Any help here is much appreciated.
On a side note, we tested NiFi with another reverse proxy and it works fine without any extra configuration.
Adding the label below to the service in docker-compose resolved the issue.
traefik.frontend.headers.customRequestHeaders=X-ProxyScheme:https||X-ProxyHost:<Virtual HostName>||X-ProxyPort:<Virtual Port>
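In context, the label sits on the NiFi service in the stack file; a sketch with placeholder host/port values, using the Traefik v1 label syntax from the answer (under deploy: because this is swarm mode):

services:
  nifi:
    image: apache/nifi
    deploy:
      labels:
        # placeholders: substitute your virtual host name and the port Traefik serves on
        - traefik.frontend.headers.customRequestHeaders=X-ProxyScheme:https||X-ProxyHost:nifi.example.com||X-ProxyPort:443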

Running nginx in a container as reverse proxy with dynamic configuration

I'm trying to set up nginx in a container as a reverse proxy for my containers (Docker Swarm) and for static sites hosted on Google Cloud Platform and Netlify.
I'm able to run nginx in containers, but I'm worried about the configuration:
How will I push site configuration updates (adding/removing location blocks) out to all the nginx containers?
Is attaching a disk the best option for storing logs?
Is there any fault in my architecture?
(Architecture diagram: https://s1.postimg.org/1tv4hka3zz/profitto-architecture_1.png)
Hey Sanjay.
Have a look at:
https://github.com/jwilder/nginx-proxy
https://traefik.io/
The first one is an automated nginx reverse proxy by J. Wilder.
The second one is a newer reverse proxy created specifically for such use cases.
Both are able to listen on the Docker socket and dynamically add new containers to the reverse-proxy backends.
Regarding your architecture:
Why not run the reverse-proxy containers inside the Swarm cluster?
As for logging, have a look at the Docker log drivers.
You can collect the logs of all containers with e.g. fluentd or Splunk.
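For example, a log driver can be set per service in the compose file (a sketch; the fluentd address is an assumption):

services:
  web:
    image: nginx
    logging:
      driver: fluentd
      options:
        # assumes a fluentd collector listening on the host
        fluentd-address: "localhost:24224"
        tag: "web.nginx"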

Run nginx in one container to proxy pass Gitlab in another container

I want to run multiple services, such as GitLab and Racktables, on the same host with HTTPS enabled, each in its own container. How can I achieve this?
You achieve this by running a reverse proxy (nginx or Apache) that forwards traffic to the different containers using different virtual hosts.
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container will map ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you, and you can learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get a setup with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.
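A minimal sketch of the jwilder/nginx-proxy setup (host names are placeholders, and the Racktables image is hypothetical):

version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # nginx-proxy watches the Docker socket and regenerates its config
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro

  gitlab:
    image: gitlab/gitlab-ce
    environment:
      - VIRTUAL_HOST=gitlab.foo.bar
      - VIRTUAL_PORT=80   # which container port the proxy forwards to

  racktables:
    image: example/racktables   # hypothetical image
    environment:
      - VIRTUAL_HOST=racktables.foo.bar

For the Let's Encrypt part, the companion container jrcs/letsencrypt-nginx-proxy-companion is typically added alongside the proxy.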

CoreOS fleet, linking redundant Docker containers

I have a small service that is split into 3 Docker containers: one backend, one frontend, and a small logging part. I now want to start them using CoreOS and fleet.
I want to start 3 redundant backend containers, so the frontend can switch between them if one of them fails.
How do I link them? If I only use one, it's easy; I just give it a name, e.g. 'back', and link it like this:
docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront
Is it possible to link multiple ones?
The method you use will depend somewhat on the type of backend service you are running. If the backend service speaks HTTP, then there are a few good proxies / load balancers to choose from.
nginx
haproxy
The general idea behind these is that your frontend service need only be introduced to the single entry point that nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce or remove backend services and have the proxy pick up the change. There are some good write-ups on doing this for nginx and haproxy. Here is one:
haproxy tutorial
The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.
Kubernetes (which can be run on top of CoreOS) has a concept called 'services'. Using this deployment method you create a 'service' and a 'replication controller', which provides the backend Docker processes for the service you describe. The replication controller can then be instructed to increase or decrease the number of backend processes. Your frontend accesses the 'service'. I have been using this recently and it works quite well.
I realize this isn't really a cut and paste answer. I think the question you ask is really the heart of cloud deployment.
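To make it slightly more cut-and-paste, here is a minimal sketch of that Service/ReplicationController pair (the backend image name is hypothetical, modeled on the question's naming):

apiVersion: v1
kind: ReplicationController
metadata:
  name: back
spec:
  replicas: 3          # scale the redundant backends up or down here
  selector:
    app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
        - name: back
          image: blurio/hystrixback   # hypothetical backend image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: back            # the frontend just talks to http://back:8080
spec:
  selector:
    app: back
  ports:
    - port: 8080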
As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service writes the IP address (usually you'll want this to be the IP address of your private network, to avoid unnecessary bandwidth usage) and port into the etcd key-value store, where the load balancer container can read them to automatically add the available nodes.
There is a good tutorial by Digital Ocean on this:
https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
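A sketch of that pattern using gliderlabs/registrator to write container addresses into etcd (assuming registrator's etcd backend; the image tags, key prefix, and backend image are placeholders):

version: "2"
services:
  etcd:
    image: quay.io/coreos/etcd:v2.3.8
    command: >
      -name etcd0
      -listen-client-urls http://0.0.0.0:2379
      -advertise-client-urls http://etcd:2379

  registrator:
    image: gliderlabs/registrator
    # register internal container IPs/ports under the /services prefix
    command: -internal etcd://etcd:2379/services
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock

  back:
    image: blurio/hystrixback   # hypothetical backend from the question's naming

A load balancer container (e.g. haproxy templated by confd, as in the tutorial) would then watch the /services prefix and regenerate its backend list whenever containers come and go.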
