Best practices to implement nginx-ingress-like functionality on Docker (Swarm)? - docker

I've decided that Kubernetes is too complex for my use case, but I'd still like to implement a pattern like the nginx ingress controller: one container acts as the ingress point for all HTTP traffic and routes it to the other containers. It would also serve as the single HTTPS endpoint, so traffic between the controller and the containers behind it can be plain HTTP and each container doesn't need its own certs.
I know I can configure nginx / haproxy from scratch to do this, but is there a standard / best-practices way to achieve this functionality with just plain Docker / Docker Swarm instead of k8s? I haven't found much by searching online.
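For reference, a minimal sketch of this pattern on Docker Swarm with Traefik as the single ingress (the hostnames, the `ingress` network name, and the `whoami` demo service are illustrative assumptions, not a canonical setup): Traefik publishes the only HTTPS port, and backends speak plain HTTP on a shared overlay network.

```yaml
version: "3.8"

services:
  # Single ingress point: terminates HTTPS, routes by hostname.
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - ingress

  # Example backend: plain HTTP inside the overlay network, no certs needed.
  whoami:
    image: traefik/whoami
    networks:
      - ingress
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.routers.whoami.entrypoints=websecure
        - traefik.http.routers.whoami.tls=true
        - traefik.http.services.whoami.loadbalancer.server.port=80

networks:
  ingress:
    driver: overlay
```

Deployed with `docker stack deploy -c stack.yml demo`; the same shape works with a hand-rolled nginx or HAProxy service in place of Traefik, at the cost of writing the routing config yourself.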

Related

Looking for an example docker-compose file to have traefik to reverse proxy both a container and non container service

I want to be able to use traefik so that I can reverse proxy both container and non-container services. And I’d like to be able to use a docker-compose file so it is easily set up and torn down. I thought this would be a common request, but I can’t find a unified example. And since I’m still really new to docker, this is a little outside of my wheelhouse. Ideally the docker-compose file would:
install the traefik container, including authentication so that traefik can be managed with a WebUI
Have traefik use Let’s Encrypt to generate and maintain the SSL certificates that traefik will use to reverse proxy both docker and non-docker services
install a sample container (like Apache) that will be tagged so traefik will reverse proxy to https://apache.example.com (http automatically redirects)
reverse-proxy a non-container service at http://192.168.1.15:8085 to https://foobar.example.com (http automatically redirects)
I’ve seen plenty of examples on how to use traefik and to tag new containers so that they are reverse-proxied, but precious few on how to reverse proxy non-docker services. I’m sure I’m not the only one who would appreciate an example that does both at the same time.
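A hedged sketch of how the two halves combine in Traefik v2: the Docker provider handles labeled containers as usual, while a file provider describes the non-container service. The file paths, the `letsencrypt` resolver name, and the hostnames are assumptions taken from the question, not a tested config:

```yaml
# Static config (traefik.yml) excerpt: enable both providers side by side
providers:
  docker:
    exposedByDefault: false
  file:
    filename: /etc/traefik/dynamic.yml

# Dynamic config (/etc/traefik/dynamic.yml): the non-container service
http:
  routers:
    foobar:
      rule: Host(`foobar.example.com`)
      entryPoints: [websecure]
      tls:
        certResolver: letsencrypt   # assumes a resolver named "letsencrypt" is defined
      service: foobar
  services:
    foobar:
      loadBalancer:
        servers:
          - url: http://192.168.1.15:8085
```

Container services would then be tagged the usual way, e.g. a label like `traefik.http.routers.apache.rule=Host(`apache.example.com`)` on the Apache container, and both kinds of route share the same entrypoints and cert resolver.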

Traefik Docker Swarm Mode multiple networks listen address

I can't figure out how to implement this, if it's even possible:
I want to allow the Traefik container to expose ports only on the Traefik network.
Does anyone know how to achieve this?
EDIT:
To clarify, my question isn't technical and isn't about Docker but about Traefik. Since Traefik supports Docker (a dynamic environment), is it capable of exposing ports only for one Docker network, using the dynamic IP address it receives? If it is, then please explain how to achieve it (presumably a single configuration line or one parameter to add in the container deployment). If it isn't, then it's a nice toy for development and not enterprise-ready, since it can't handle security in dynamic environments.
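One caveat: Docker published ports bind to host interfaces (optionally a specific host IP), so strictly speaking nothing is published "on a network". The common reading of this requirement is instead that Traefik should reach backends only over a dedicated proxy network, which is a one-line setting. A sketch with illustrative network and host names:

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --providers.docker.network=traefik-net  # only use this network to reach backends
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.example.com`)
    networks:
      - traefik-net   # reachable by Traefik
      - backend-net   # private; Traefik never touches it

networks:
  traefik-net:
  backend-net:
    internal: true
```

The per-container label `traefik.docker.network` does the same thing for a single service when containers sit on several networks.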

Docker best practices for managing websites

What do you think is best practice for running multiple websites on the same machine? Would you have a separate container set for each website/domain, or have all the sites in one container set?
Website 1, Website 2, Website 3:
nginx
phpfpm
mysql
or
Website 1:
nginx_1
phpfpm_1
mysql_1
Website 2:
nginx_2
phpfpm_2
mysql_2
Website 3:
nginx_3
phpfpm_3
mysql_3
I prefer to use separate containers for the individual websites but a single webserver as proxy. This lets you access all the websites, on their different domains, via the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
nginx (listens to port 80/443)
Website 1
phpfpm_1
mysql_1
Website 2
phpfpm_2
mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
nginx (listens to port 80/443)
docker-gen (creates an nginx config based on the running containers and services)
letsencrypt (creates SSL certificates if needed)
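That proxy stack can be sketched as a compose file. The volume names and the demo site are illustrative, and `NGINX_PROXY_CONTAINER` may need the actual container name in your project; `VIRTUAL_HOST` / `LETSENCRYPT_HOST` are the env vars these images key on:

```yaml
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    environment:
      - NGINX_PROXY_CONTAINER=proxy
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

  website1:
    image: nginx:alpine   # stand-in for a phpfpm-backed site
    environment:
      - VIRTUAL_HOST=site1.example.com
      - LETSENCRYPT_HOST=site1.example.com
      - LETSENCRYPT_EMAIL=admin@example.com

volumes:
  certs:
  vhost:
  html:
```

Each additional website is just another service with its own `VIRTUAL_HOST`; the proxy regenerates its config automatically when containers start and stop.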
Well, if they are not related, I would definitely not put them in the same containers...
Even if you have one website with nginx and mysql, I would pull those two apart. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!
I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, changes in one application can still affect the others simply because they run in the same environment, and you wouldn't want all of them to run into problems at once.

Run nginx in one container to proxy pass Gitlab in another container

I want to run multiple services, such as GitLab and Racktables, in different containers on the same host with HTTPS enabled. How can I achieve this?
You achieve this by running a reverse proxy (nginx or apache) that forwards traffic to the different containers using different virtualhosts.
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container will map ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
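A hand-written nginx virtualhost for one of those services might look like this (the container name `gitlab`, cert paths, and headers are illustrative; the proxy and the GitLab container are assumed to share a Docker network so the name resolves):

```nginx
# /etc/nginx/conf.d/gitlab.conf
server {
    listen 443 ssl;
    server_name gitlab.foo.bar;

    ssl_certificate     /etc/nginx/certs/gitlab.foo.bar.crt;
    ssl_certificate_key /etc/nginx/certs/gitlab.foo.bar.key;

    location / {
        # "gitlab" resolves to the GitLab container on the shared Docker network
        proxy_pass http://gitlab:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name gitlab.foo.bar;
    return 301 https://$host$request_uri;
}
```

A second `server` pair with `server_name racktables.foo.bar` and `proxy_pass http://racktables:80;` covers the other service, and so on for each virtualhost.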
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get set up with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.

CoreOS Fleet, link redundant Docker container

I have a small service that is split into 3 docker containers: one backend, one frontend, and a small logging part. I now want to start them using CoreOS and Fleet.
I want to try and start 3 redundant backend containers, so the frontend can switch between them, if one of them fails.
How do I link them? If I only use one, it's easy: I just give it a name, e.g. 'back', and link it like this
docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront
Is it possible to link multiple ones?
The method you use will depend somewhat on the type of backend service you are running. If the backend service is HTTP, then there are a few good proxies / load balancers to choose from:
nginx
haproxy
The general idea behind these is that your frontend service need only be introduced to a single entry point, which nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce or remove backend services and have the proxy pick up those changes. There are some good writeups on doing this with nginx and haproxy. Here is one:
haproxy tutorial
The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.
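For the HAProxy route, a minimal static config with three redundant backends looks like this (the addresses and the health-check path are placeholders; the "automatic" part would come from regenerating this file when backends change, e.g. with confd and etcd as described below):

```
# haproxy.cfg sketch: three redundant backend containers behind one entry point
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend back

backend back
    balance roundrobin
    option httpchk GET /health   # assumes backends expose a health endpoint
    server back1 10.0.0.11:8080 check
    server back2 10.0.0.12:8080 check
    server back3 10.0.0.13:8080 check
```

The `check` keyword makes HAProxy drop a failed backend from rotation automatically, which gives the frontend the failover behavior described in the question without any linking tricks.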
Kubernetes (which can be run on top of CoreOS) has a concept called 'services'. Using this deployment method, you create a 'service' and a 'replication controller', which provides the backend Docker processes for the service you describe. The replication controller can then be instructed to increase or decrease the number of backend processes. Your frontend accesses the 'service'. I have been using this recently and it works quite well.
I realize this isn't really a cut and paste answer. I think the question you ask is really the heart of cloud deployment.
As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service registers each backend's IP address (usually the private-network address, to avoid unnecessary bandwidth usage) and port in the etcd key-value store; the load balancer container reads those entries to automatically add the available nodes to its config.
There is a good tutorial by Digital Ocean on this:
https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
