Docker based Web Hosting

I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker running on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker based containers on my local machine for development. A software container has a copy of the primary application, as well as all its dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the “docker-compose.yml” or “Dockerfile” to any remote web server. All the software and dependencies get installed and run just like on my local machine.
(Say) I have a VPS and I want to host multiple websites using Docker. The only thing that I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to run an extra NGINX for routing.
But a VPS can be used to host multiple websites without the need for containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, HostGator, etc., or is Docker best or ideal only for development on a local machine, and not meant to be deployed on web servers for hosting?

The main benefits of Docker for simple web hosting are IMO the following:
Isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
Separation of concerns: if you split your setup into multiple containers you can easily upgrade or replace one part of it. (Just consider a setup with two websites, each needing a Postgres database. If each website has its own db container, you won't have any issue bumping the Postgres version of one website without affecting the other; see the compose sketch after this list.)
Reproducibility: you can build the Docker image once, test it on acceptance, promote the exact same image to staging and later to production. You will also have the same environment locally as on your server.
Environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container with its specific environment variables.
Security: one can argue about this one, as containers by themselves won't do much for you in terms of security. However, thanks to easier dependency upgrades, separated networking, etc., most people end up with a setup that is more secure. (Just think about the db containers again here: they can share a network with your app/website container and there is no need to expose the database port on the host.)
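To make the points about isolation, per-container environments and private networking concrete, here is a minimal docker-compose sketch; all image names, ports and credentials are made-up placeholders, not taken from the posts above:

```yaml
services:
  site1:
    image: example/site1:latest          # placeholder application image
    environment:
      DB_HOST: site1-db                  # resolved via the private network below
      SMTP_HOST: smtp.example.com        # per-service settings via env vars
    networks: [site1-net]
    ports:
      - "8081:80"
  site1-db:
    image: postgres:15                   # each site pins its own version
    environment:
      POSTGRES_PASSWORD: example
    networks: [site1-net]                # no published ports, never exposed on the host

  site2:
    image: example/site2:latest
    environment:
      DB_HOST: site2-db
    networks: [site2-net]
    ports:
      - "8082:80"
  site2-db:
    image: postgres:13                   # a different major version, upgraded independently
    environment:
      POSTGRES_PASSWORD: example
    networks: [site2-net]

networks:
  site1-net:
  site2-net:
```

Each site only sees its own database, and bumping the Postgres version for one site touches nothing in the other.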
Note that you should be careful with Docker's port mapping. It writes iptables rules directly and will bypass the rules of most firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
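Independently of that repo, one simple way to keep a mapped port from being reachable from the outside at all is to bind it to the loopback interface only; a small sketch (the image name is a placeholder):

```yaml
services:
  app:
    image: example/app:latest
    ports:
      - "127.0.0.1:8080:80"   # reachable only from the host itself, not from the internet
```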
There are also quite a few projects that make routing requests to the applications (in this case containers) very pleasant and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers that should all be reachable on ports 80 and 443.
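As a rough illustration of such a setup, here is a sketch of Traefik v2 in docker-compose routing by hostname; the hostnames and application images are invented, and TLS/Let's Encrypt on port 443 would still need a certificates resolver on top of this:

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik discover the containers

  app1:
    image: example/app1:latest          # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`app1.example.com`)"
      - "traefik.http.routers.app1.entrypoints=web"

  app2:
    image: example/app2:latest          # placeholder
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=Host(`app2.example.com`)"
      - "traefik.http.routers.app2.entrypoints=web"
```

Neither application publishes a port of its own; Traefik is the only container bound to port 80 on the host.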

Related

Should I dockerize mysql and nginx in production?

Our company has a dedicated Linux server and wants to host all services on it.
We have several WordPress, Laravel, ASP and Node websites. We want to dockerize all of these, but we want all services to use the same MySQL.
Should we also run MySQL in Docker, or not?
What happens when we bring the Docker Compose stack of one of the projects up or down? Do the projects affect each other?
I am a little confused.
Well, it all depends on the size of your application/services. On a single virtual machine, I would not suggest dockerizing everything and running one docker-compose to bring the services up. Take for example a database like MySQL: in a Docker container there are some constraints, like the maximum size of the volume/container and networking, which with docker-compose you need to take care of through additional parameters and daemon changes. All of that can be configured, but knowing exactly what needs to be configured, and how, is a painful process. There can also be problems with database replication: you should not have a single database instance in production. What if the data gets lost? Shouldn't you have a second replica?
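For reference, the bare minimum for a containerized MySQL that keeps its data across container recreation is a named volume and a private network; this is only a sketch (credentials and names are placeholders) and it deliberately does not address the replication and backup concerns mentioned above:

```yaml
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example      # use proper secrets management in production
    volumes:
      - mysql-data:/var/lib/mysql       # named volume so data survives "docker compose down"
    networks:
      - backend                         # internal network only, no host port published

networks:
  backend:

volumes:
  mysql-data:
```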
Now, for the reverse proxy, it also depends on the size of the production setup. What happens if the proxy container is restarted or upgraded? Will the proxy be down and all your services be unavailable? Yes! It may only be for a few minutes, but this is production we are talking about.
On the other hand, it all depends on the size of the project, the size of the traffic, and the budget. Take for example a deployment on Kubernetes (you did not specify the deployment target, only docker-compose, so I will default to Kubernetes), where everything runs as containers. Each node has an ingress-controller (one of the most popular is nginx). If this is production you are talking about, you can write Ingress rules to route the traffic. The ingress-controller is deployed as a DaemonSet, so each node has its own ingress-controller, and if one node goes down the other nodes still have one. The same goes for the database.
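As an illustration of such routing rules (the hostnames and Service names here are invented), an Ingress object for an nginx ingress-controller could look roughly like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sites
spec:
  ingressClassName: nginx              # assumes the nginx ingress-controller is installed
  rules:
    - host: site1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site1            # placeholder Service name
                port:
                  number: 80
    - host: site2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site2            # placeholder Service name
                port:
                  number: 80
```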
What I am trying to say is that running a simple docker-compose on one machine in production is very risky. Use an environment that can scale either horizontally or vertically (Docker Swarm, Kubernetes). I hope I have clarified the idea behind the production deployment well.

How should I containerize my application requiring apache/php/mysql with an authenticated and public site experience?

I’ve spent months building an application and now I’m looking to deploy it, but I’m new to Docker and I seem to hit a mental block when it comes to actually containerizing my application. I need to run the following technologies:
PHP 7.2
MySQL 5.7
Apache 2.4
phpMyAdmin 4.7
My application will need to be available exclusively over HTTPS, and I’m assuming the connection between my application and the MySQL container will also need to go through a secure port.
In addition to that, I have a WordPress site that will serve as the pre-login experience for my application, which I’d also like to dockerize, but it should not share the same DB. When I move this to a prod environment, I will not include the phpMyAdmin container.
How many containers do I need? I was thinking that I would need at least 5:
Apache
PHP
MySQL (my application)
MySQL (WordPress)
phpMyAdmin
Should my application and the WordPress site live in the same PHP container, or should I create separate containers for each?
What should my docker-compose.yml file and Dockerfiles look like to achieve this feat?
The driving idea here is that a container should contain a single "service". You don't break things into containers by software component (PHP, Apache, etc.) but rather by whatever needs to be combined to create a single service. So if your application is a PHP application hosted by Apache, then you'd want a container for your application that contained PHP, Apache and your application code. That would provide your application as a service.
The same goes for WordPress. If WordPress is running behind Apache and needs PHP, you'd create a second container containing PHP, Apache, WordPress, and your WordPress content, producing your "WordPress service".
Each of your individual databases can be seen as a service, so you might want two containers running MySQL, one serving each of your databases. You could choose to consider the database server as a whole to be a service, and have it serve both of your databases. Then you could get away with a single MySQL container. Which way you go with this is a minor issue. Having a single database server will likely save a little bit of resources by avoiding some duplication.
If all of your services need to talk to each other, the easiest way to do this with Docker is to use Docker Compose. This lets you create multiple containers that know about each other and can communicate very easily by way of some simple DNS logic that Docker Compose provides. With Compose, you give each of your containers a simple name, and that name can then be looked up via DNS to provide the IP address of the container. So for example, if your MySQL container were named "mysql", your app container could connect to it via the DNS name "mysql" with no additional work on your part.
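A skeleton docker-compose.yml for the services discussed in the question might look roughly like this; the application image, credentials and port numbers are placeholders, and the phpMyAdmin service is the piece you would drop in production:

```yaml
services:
  app:
    image: example/app:latest            # your PHP 7.2 + Apache application image
    environment:
      DB_HOST: mysql                     # the service name below doubles as its DNS name
      DB_NAME: appdb

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: wordpress-db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: example

  mysql:                                 # database for the application
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
    volumes:
      - app-db:/var/lib/mysql

  wordpress-db:                          # separate database for WordPress, as requested
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
    volumes:
      - wp-db:/var/lib/mysql

  phpmyadmin:
    image: phpmyadmin:latest
    environment:
      PMA_HOST: mysql                    # development convenience only
    ports:
      - "8080:80"

volumes:
  app-db:
  wp-db:
```

HTTPS termination would then sit in front of the app and wordpress services, for example via a reverse proxy as discussed elsewhere in this thread.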

Multiple web apps with Docker architecture

I have multiple web apps, all of them running on Apache, many of them using PHP, MySQL, Node, etc.
I'm not currently using Docker, but I would like to use it, and I would like to know what the best architecture to use would be.
I suppose that on my localhost I should create one container with Apache, and all the applications would use it (am I wrong?). The same with MySQL, if the application uses it.
But then, what happens when I want to deploy my projects (or some of them) to a production environment? I'm currently using Microsoft Azure Web Apps, and I don't think that my 'localhost' setup will be valid there. I suppose that in production each project should have its own Apache, but this changes my Docker setup, and I don't think this is the Docker philosophy.
So, how should I structure my architecture?

How to manage multiple projects on Docker?

In our company we have ~7 projects, each based on Docker. Each project contains base services like MySQL, Nginx and PHP. Some projects communicate with other projects. Because many services use the same ports, we create a new Docker host (docker-machine) for each project. From this a few problems arise:
VirtualBox assigns a random IP to each Docker host, depending on the order in which they are started.
It is hard to switch from project to project; you need to set different shell environments all the time, and it is easy to make a mistake.
So I'm searching for a more enterprise-grade solution to manage many Docker machines, or some technique that can help with the current situation.
I had similar problems last summer.
First, I started to deploy my projects to a swarm cluster as services, instead of clustering several Docker VMs. This enabled me to work with the services using only their service IDs. How you separate projects into services is important; this part may be cumbersome depending on your project.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
Then I set up my configuration and monitoring software once on the swarm manager and use it from there. You can use your automation tools on the manager node to control the services.
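As a rough sketch of what deploying one project as a swarm service looks like (the stack and image names are invented), a stack file with a replica count is usually enough to start with:

```yaml
# stack.yml - deployed with: docker stack deploy -c stack.yml project1
version: "3.8"
services:
  web:
    image: example/project1-web:latest   # placeholder image
    deploy:
      replicas: 2                        # swarm spreads the replicas across the cluster nodes
    networks:
      - project1

networks:
  project1:
    driver: overlay                      # overlay networks span all swarm nodes
```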
A virtual machine consumes resources, and it is better to avoid one if it is not necessary. Instead you could deploy the projects in a Docker swarm running on bare metal.
But because every project has an entry point that needs to be accessible from the outside world (e.g. https://site1.com and https://site2.com), you can't expose the same port (443 or 80) for all the frontend services in the same swarm. For this you can use a reverse proxy like HAProxy or Nginx that forwards the requests to the right service based on the hostname. The reverse proxy can itself be a service in the swarm. In this situation you no longer need to expose the projects' ports at all.
A reverse proxy has many other advantages, like SSL termination (this makes the SSL certificate management a lot easier).
If you attach the projects to the same custom network, then services from different projects can communicate securely and directly, using their Docker service names and the internal port (e.g. 80).
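A sketch of that last idea: the shared network is created once outside any single project, and each project's stack file declares it as external (the network and service names here are made up). When deployed as a swarm stack, the DNS name other projects see is the full service name, i.e. prefixed with the stack name:

```yaml
# created once on the swarm manager, e.g.: docker network create --driver overlay shared
version: "3.8"
services:
  api:
    image: example/project2-api:latest   # placeholder
    networks:
      - shared                           # reachable by other projects over this network
      # note: no published ports, nothing is exposed on the host

networks:
  shared:
    external: true                       # reuse the pre-created network instead of creating a new one
```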

How to mimic a "Docker containers behind reverse-proxy or load balancer setup" during development

Developing and testing applications inside Docker containers and finally running them in the same manner in production is a great approach to reduce the "it used to work on my machine but is magically failing in production" risk.
This is also reflected in factor no. 10 of the well-known twelve-factor manifesto, which I try to follow wherever it makes sense for the given use case: "Keep development, staging, and production as similar as possible."
On our production server, we have a reverse proxy in place which takes incoming requests on port 80 and forwards them to the correct container depending on the Host header, i.e. via virtual domains: requests to app1.domain.name go to the app1 container, while requests to app2.domain.name go to the app2 container, and so on. We use Traefik for that, but it could also be jwilder/nginx-proxy or any other reverse proxy or load balancer. No other ports of the application containers are publicly exposed.
My question now is: what would be the most efficient way to simulate this setup during development? I can think of the following options:
Ignore the reverse proxy during development and expose a public port for each service so that it can be reached directly. In production, however, do not publicly expose any ports. While this is easy to do, it does not give exact parity between the development and production setups.
Run a similar reverse proxy locally during development (for example via a compose override like the sketch after this list). While this gives much better parity between the development and production environments, this requirement needs to be documented somewhere and places some burden on new developers to get the prerequisites right before actually diving into the application itself, something that the Docker approach actually tries to avoid.
Automate setting up a development environment which mimics the production one (including the reverse proxy) by putting everything into a virtual machine (e.g. with Vagrant). While this would be convenient from a developer's point of view, it takes some further time to set up initially and consumes more resources.
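To make option 2 a bit more concrete: with Compose, the local reverse proxy could live in a docker-compose.override.yml that is only present in development, so the base file stays identical to production. Everything below is a sketch with invented names; the .localhost hostnames resolve to the loopback interface on most modern systems, otherwise an /etc/hosts entry does the job:

```yaml
# docker-compose.override.yml - merged automatically by "docker compose up" during development
services:
  traefik:
    image: traefik:v2.11
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app1:
    labels:                               # extra labels merged into the base app1 service
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`app1.localhost`)"
```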
Did I miss some other approach that is superior to the ones described?
