Docker best practices for managing websites

What do you think is best practice for running multiple websites on the same machine? Would you have a separate container set for each website/domain, or have all the sites in one container set?
Website 1, Website 2, Website 3:
  nginx
  phpfpm
  mysql
or
Website 1:
  nginx_1
  phpfpm_1
  mysql_1
Website 2:
  nginx_2
  phpfpm_2
  mysql_2
Website 3:
  nginx_3
  phpfpm_3
  mysql_3

I prefer to use separate containers for the individual websites but a single web server as a proxy. This allows you to access websites for different domains on the same host ports (80/443). Additionally, you don't necessarily need to run multiple nginx containers.
Structure:
Proxy
  nginx (listens to port 80/443)
Website 1
  phpfpm_1
  mysql_1
Website 2
  phpfpm_2
  mysql_2
...
You can use automated config generation for the proxy service, such as jwilder/nginx-proxy, which also opens the way to convenient SSL certificate handling with e.g. jrcs/letsencrypt-nginx-proxy-companion.
The proxy service can then look like this:
Proxy
  nginx (listens to port 80/443)
  docker-gen (creates an nginx config based on the running containers and services)
  letsencrypt (creates SSL certificates if needed)
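A minimal docker-compose sketch of that kind of setup could look roughly like the following. The site image, hostnames, passwords and volume layout are illustrative assumptions, not a definitive configuration; the per-site service is shown as a generic web container, and the images' own documentation covers the full set of options:

version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - /etc/nginx/vhost.d
      - /usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets docker-gen watch running containers

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes_from:
      - nginx-proxy                                # share certs and vhost config with the proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  website1:
    image: my-website1-image       # placeholder: any container serving this site over HTTP
    environment:
      - VIRTUAL_HOST=website1.example.com          # picked up by nginx-proxy
      - LETSENCRYPT_HOST=website1.example.com      # picked up by the companion
      # if the app listens on a port other than 80, also set VIRTUAL_PORT

  mysql_1:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=changeme               # placeholder

volumes:
  certs: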

Well, if they are not related, I would definitely not put them on the same machine...
Even if you have one website with nginx and mysql, I would pull these two apart. It simply gives you more flexibility, and that's what Docker is all about.
Good luck and have fun with Docker!

I would definitely isolate each of the applications.
Why?
If they don't rely on each other at all, making changes in one application can still affect the others simply because they're running in the same environment, and you wouldn't want all of them to run into problems at once.

Related

Running multiple containerized applications using one IP address

I have a domain name, https://example.com, that points to a VPS on Amazon Lightsail. I have several applications I want to run. The apps are in Vue.js and some in Spring, and I am using nginx as the web server.
The landing page is basically an app running on port 3000, reverse-proxied so that it is served at the root of example.com on port 80.
I would like to run other apps at:
example.com/one, example.com/two and example.com/three, where one, two and three are applications each running inside a Docker container.
How would I go about configuring my apps this way, keeping in mind that the apps run separately inside Docker?
I highly suggest using Caddy for this type of setup.
Nginx is awesome and you could use it for the same purpose, but for what you want to do, Caddy will work perfectly.
Just make sure to run each container on a different port.
Then use Caddy as a reverse proxy to each container:
https://medium.com/bumps-from-a-little-front-end-programmer/caddy-reverse-proxy-tutorial-faa2ce22a9c6
Let's say you have containers running on ports 5000, 8800 and 9000; then you could do:
example.com {
    # wildcard path matchers so sub-paths of /one etc. are proxied too;
    # note the /one prefix is passed through to the app unless you strip it (e.g. with handle_path)
    reverse_proxy /one* localhost:5000
    reverse_proxy /two* localhost:8800
    reverse_proxy /three* localhost:9000
}
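For completeness, getting the three apps onto those host ports could look roughly like this; the image names and the internal port 3000 are placeholders for whatever your apps actually use:

docker run -d --name one   -p 5000:3000 my-app-one
docker run -d --name two   -p 8800:3000 my-app-two
docker run -d --name three -p 9000:3000 my-app-three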
Caddy is cool because it will also set up SSL via Let's Encrypt.
I didn't have time or a server to test this now, but let me know if it works.
God bless :)
Docker itself can only map ports; it cannot pick a container based on an HTTP path.
You need a reverse proxy (RP).
You have two options:
Install the RP on the host
You can install the RP on your host machine. This has several advantages, e.g. you can use certbot for automatic Let's Encrypt certificates, and you can still run as many Docker containers as you like. For this you have to publish the containers' ports to your host machine.
Use your Docker nginx as the RP
You can also use your frontend nginx container as the RP. Just put your Docker containers in a shared Docker network and add the reverse-proxy config to your nginx, for example:
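A rough sketch of what that reverse-proxy config could look like, assuming the app containers are named app_one and app_two on the shared network and listen on port 3000 internally (all names and ports here are placeholders):

server {
    listen 80;
    server_name example.com;

    # containers on the same Docker network are reachable by container/service name
    location /one/ {
        proxy_pass http://app_one:3000/;
        proxy_set_header Host $host;
    }
    location /two/ {
        proxy_pass http://app_two:3000/;
        proxy_set_header Host $host;
    }
}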

Single Nginx Docker container vs multiple Nginx Docker containers for websites

Forgive me if I am asking a stupid question, but I am building a server where I will host multiple Flask websites in Docker containers behind an Nginx Docker container. My question is: is it better to have one main Nginx Docker container and serve all my website containers through it, or to have an Nginx Docker container for each application with Docker Compose?
In terms of resource usage and efficiency, which one is better to go for?
Many roads lead to Rome. If you follow the "microservice approach", i.e. keep an Nginx with its paths etc. close to each backend, you have the advantage that you can change or break one service without having a big impact on the others.
We have DNS -> F5 -> Nginx -> Nginx -> backend at work, for example, with no problems.
An Nginx container does not consume many resources, partly because it is written in C.

How to route requests between docker containers

I have a cloud server where I host my web services. Currently there is only one Docker container, with JS + PHP + MySQL, running on the server; it serves the web service mysite.co. There are going to be more web services, and I want to host them on the same machine but in separate Docker containers. I want to refactor and create a bunch of services and containers:
docker1 with MySQL --> DB for all services
docker2 with PHP + JS --> platform.mysite.co
docker3 with PHP + JS --> for mysite.co
docker4 with Python --> client.mysite.co. These are REST endpoints for clients (ideally accessible only via VPN)
With which tool can I route web requests between the containers?
Not sure what your exact problem is.
If it is basic routing between three containers, you just need a basic web server (nginx, Apache).
If you also want load balancing and routing between nodes in a swarm, or pods in Kubernetes, you may choose something more Docker-oriented, such as Traefik.
It sounds like you see containers as some sort of impenetrable bastion... while a container actually behaves exactly like your non-containerized web servers.
So the routing problems you have get the same solutions here... maybe a few more, because Docker adds a few dedicated tools of its own.
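As an illustration of the Traefik option mentioned above, a docker-compose sketch along these lines could route the hostnames from the question to individual containers; the image names are placeholders and the rest is an assumption about a minimal Traefik v2 setup:

version: "3"
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # Traefik watches running containers

  site:
    image: my-php-js-site          # placeholder for the docker3 (PHP + JS) image
    labels:
      - traefik.enable=true
      - traefik.http.routers.site.rule=Host(`mysite.co`)
      - traefik.http.services.site.loadbalancer.server.port=80   # port the app listens on inside the container

  platform:
    image: my-php-js-platform      # placeholder for the docker2 (PHP + JS) image
    labels:
      - traefik.enable=true
      - traefik.http.routers.platform.rule=Host(`platform.mysite.co`)
      - traefik.http.services.platform.loadbalancer.server.port=80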

Run nginx in one container to proxy pass Gitlab in another container

I want to run multiple services such as GitLab and Racktables in different containers on the same host, with HTTPS enabled. How can I achieve this?
You achieve this by running a reverse proxy (nginx or Apache) that forwards traffic to the different containers using different virtual hosts.
gitlab.foo.bar -> gitlab container
racktables.foo.bar -> racktables container
etc
The reverse proxy container will map ports 80 and 443 to the host. None of the other containers need port mappings, as all traffic goes through the reverse proxy.
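Sketched as hand-written nginx virtual hosts, that mapping could look roughly like this; it assumes the proxy shares a Docker network with containers named gitlab and racktables that serve HTTP on port 80 internally, and the TLS listeners are omitted:

server {
    listen 80;
    server_name gitlab.foo.bar;
    location / {
        proxy_pass http://gitlab:80;       # container name on the shared network
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name racktables.foo.bar;
    location / {
        proxy_pass http://racktables:80;
        proxy_set_header Host $host;
    }
}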
I think the quickest way to get this working is to use jwilder/nginx-proxy. It's extremely newbie-friendly, as it automates almost everything for you. You can also learn a lot by looking at the generated config files in the container. Even getting TLS to work is not that complicated, and you get a setup with an A+ rating from SSL Labs by default.
I've used this for my hobby projects for almost a year and it works great (with Let's Encrypt).
You can of course also manually configure everything, but it's a lot of work with so many pitfalls.
The really bad way to do this is to run the reverse proxy on the host and map lots of ports from all the containers to the host. Please don't do that.

CoreOS Fleet, link redundant Docker container

I have a small service that is split into 3 Docker containers: one backend, one frontend and a small logging part. I now want to start them using CoreOS and Fleet.
I want to start 3 redundant backend containers, so the frontend can switch between them if one of them fails.
How do I link them? If I only use one, it's easy; I just give it a name, e.g. 'back', and link it like this:
docker run --name front --link back:back --link graphite:graphite -p 8080:8080 blurio/hystrixfront
Is it possible to link multiple ones?
The method you use will depend somewhat on the type of backend service you are running. If the backend service is HTTP, then there are a few good proxies / load balancers to choose from.
nginx
haproxy
The general idea behind these is that your frontend service only needs to be introduced to a single entry point, which nginx or haproxy presents. The tricky part with this, or any cloud service, is that you need to be able to introduce or remove backend services and have them available to the proxy service. There are some good write-ups on doing this for nginx and haproxy. Here is one:
haproxy tutorial
The real problem here is that it isn't automatic. There may be some techniques that automatically introduce/remove backends for these proxy servers.
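To illustrate the single-entry-point idea, a hand-maintained haproxy section could look roughly like this; the backend addresses and ports are placeholders, and keeping this list in sync with the running containers is exactly the manual part described above:

frontend back_in
    bind *:8080
    default_backend back_pool

backend back_pool
    balance roundrobin
    # one line per redundant backend container; updated by hand (or via confd/etcd, see below)
    server back1 10.0.0.11:8080 check
    server back2 10.0.0.12:8080 check
    server back3 10.0.0.13:8080 check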
Kubernetes (which can be run on top of CoreOS) has a concept called 'services'. Using this deployment method you create a 'service' and another object called a 'replication controller', which provides the backend Docker processes for the service you describe. The replication controller can then be instructed to increase or decrease the number of backend processes, and your frontend just accesses the 'service'. I have been using this recently and it works quite well.
I realize this isn't really a cut and paste answer. I think the question you ask is really the heart of cloud deployment.
As Michael stated, you can get this done automatically by adding a discovery service and binding it to the backend container. The discovery service writes the IP address (usually you'll want this to be the address on your private network, to avoid unnecessary bandwidth usage) and port into the etcd key-value store, from where the load balancer container can read them and automatically update itself with the available nodes.
There is a good tutorial by Digital Ocean on this:
https://www.digitalocean.com/community/tutorials/how-to-use-confd-and-etcd-to-dynamically-reconfigure-services-in-coreos
