Multiple web apps with Docker architecture

I have multiple web apps, all of them running on Apache, many of them using PHP, MySQL, Node, etc.
I'm not currently using Docker, but I would like to use it, and I would like to know what the best architecture to use would be.
I suppose that on my localhost I should create a container with Apache, and all the applications would use it (am I wrong?). The same with MySQL if an application uses it.
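In docker-compose terms, the localhost setup I have in mind would look roughly like this (a minimal sketch; images, paths, and credentials are placeholders):

```yaml
version: "3"
services:
  web:
    image: php:7.2-apache            # one shared Apache/PHP container
    ports:
      - "80:80"
    volumes:
      - ./app1:/var/www/html/app1   # each app mounted under the same docroot
      - ./app2:/var/www/html/app2
  mysql:
    image: mysql:5.7                 # one shared MySQL instance
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```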
But then, what happens when I want to deploy my projects (or some of them) to a production environment? I'm currently using Microsoft Azure WebApps, and I don't think my 'localhost' setup will be valid there. I suppose that in production each project should have its own Apache, but this changes my Docker setup, and I don't think that is the Docker philosophy.
So, how should I structure my architecture?

Related

Docker-based Web Hosting

I am posting this question due to a lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the docker-compose.yml or Dockerfile to any remote web server. All the software and dependencies get installed and run just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to run an extra NGINX for routing.
But a VPS can be used to host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, Hostgator, etc., or is Docker only ideal for development on a local machine, and not for hosting on web servers?
The main benefits of Docker for simple web hosting are, in my opinion, the following:
Isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
Separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it. (Just consider a setup with two websites, each of which needs its own Postgres database. If each website has its own db container, you won't have any issue bumping the Postgres version of one of the websites without affecting the other.)
Reproducibility: you can build the Docker image once, test it on acceptance, then promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
Environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
Security: one can argue about this one, as containers themselves won't do much for you in terms of security. However, thanks to easier dependency upgrades, separated networking, etc., most people will end up with a setup that is more secure. (Just think about the db containers again here: these can share a network with your app/website container, and there is no need to expose their ports on the host; see the sketch after this list.)
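As a concrete illustration of the isolation, environment, and networking points above, here is a minimal docker-compose sketch (the app images and credentials are hypothetical placeholders): each site gets its own Postgres container and its own environment on a private network, and neither database publishes a port on the host.

```yaml
version: "3"
services:
  site1:
    image: my-site1:latest           # hypothetical app image
    environment:
      DATABASE_URL: postgres://site1:secret@db1/site1
    networks: [site1-net]
  db1:
    image: postgres:12               # can be upgraded independently...
    environment:
      POSTGRES_PASSWORD: secret      # placeholder credential
    networks: [site1-net]            # private network, no published port
  site2:
    image: my-site2:latest           # hypothetical app image
    environment:
      DATABASE_URL: postgres://site2:secret@db2/site2
    networks: [site2-net]
  db2:
    image: postgres:13               # ...without touching site1's database
    environment:
      POSTGRES_PASSWORD: secret
    networks: [site2-net]
networks:
  site1-net:
  site2-net:
```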
Note that you should be careful with Docker's port mapping. Docker writes iptables rules directly and will bypass the settings of most host firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
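One simple mitigation, besides the ufw-docker approach linked above, is to publish container ports on the loopback interface only, so a service is reachable by a local reverse proxy but not from outside the host. A compose fragment with a hypothetical image:

```yaml
services:
  app:
    image: my-app:latest             # hypothetical image
    ports:
      - "127.0.0.1:8080:80"          # bound to localhost only, not 0.0.0.0
```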
Also, there are quite a few projects which make automated routing of requests to the applications (in this case, containers) very enjoyable and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a webserver with multiple containers which should all be accessible on ports 80 and 443.
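A minimal sketch of that idea with Traefik v2, where the app images and domain names are placeholders (TLS on port 443 would additionally need an entrypoint and a certificate resolver):

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.9
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  site1:
    image: my-site1:latest           # hypothetical image
    labels:
      - traefik.enable=true
      - traefik.http.routers.site1.rule=Host(`site1.example.com`)
  site2:
    image: my-site2:latest           # hypothetical image
    labels:
      - traefik.enable=true
      - traefik.http.routers.site2.rule=Host(`site2.example.com`)
```

Traefik watches the Docker socket and picks up routing rules from container labels, so adding a third site is just another service with its own Host rule.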

Best approach to create containers

I am developing an application with Node.js and MySQL that has the following dependencies:
Nginx (for reverse proxying the db and the Node.js server)
Ghostscript (the dependent OS is Ubuntu)
pdftk (the dependent OS is Ubuntu)
I would like to know what the best approach would be if I want to use Docker containers to package my application.
Should I create one Nginx container, one Node.js container, and one MySQL container and make them talk to each other? I know this is the better approach since it's scalable, but in this case how and where should I install Ghostscript and pdftk? (The Node.js application makes use of Ghostscript and pdftk for PDF files.)
or
Should I create one Ubuntu Docker container and install everything (viz. Nginx, pdftk, Ghostscript, MySQL) in it?
Splitting an application up into separate containers requires a well-defined API that supports calls over the network (usually HTTP or some other application protocol on the TCP stack).
As both Ghostscript and pdftk are command-line tools invoked via a CLI, you cannot call them from another container out of the box; you would need to develop some external-facing API for that.
When setting the boundaries of your containers, think in terms of domains. The container becomes the smallest unit that you will deploy and scale. That unit should be self-contained and have a well-defined, single purpose.
It is not clear from your description exactly what role Nginx plays, but assuming it is some kind of client-facing webserver or proxy, three containers make sense in your case:
Node.js + pdftk + Ghostscript (the application)
Nginx (the webserver/proxy)
MySQL (the database)
The Node.js application has all its application dependencies inside, but is more loosely coupled to Nginx and MySQL, with which it can communicate over the network.
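As a sketch of that first container, the application image can bake the CLI tools in next to the Node.js code (assuming a Debian-based Node image; the entrypoint and file layout are placeholders):

```dockerfile
# Hypothetical application image: Node.js plus the CLI tools it shells out to.
FROM node:16

# Install Ghostscript and pdftk inside the same container, so that
# child_process calls to `gs` and `pdftk` resolve at runtime.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ghostscript pdftk \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```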
You should create separate containers for each application, because this allows you to achieve:
Independent deployment.
Independent scaling.
Independent development.
Isolation and security.
For convenience, you can use docker-compose, which allows you to configure and launch multiple Docker containers with a single command.
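A minimal docker-compose sketch of the three containers above (the nginx config path, ports, and credentials are placeholders):

```yaml
version: "3"
services:
  nginx:
    image: nginx:1.21
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # proxies to app:3000
    depends_on:
      - app
  app:
    build: .                         # the Node.js + Ghostscript + pdftk image
    environment:
      DB_HOST: mysql                 # resolvable by service name
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```

With this file in place, `docker-compose up -d` builds and starts all three containers on a shared network.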
In production, however, I would recommend not deploying the database in a Docker container: the database stores state, running it in a container is less reliable, and it increases the complexity of support.

How should I containerize my application requiring apache/php/mysql with an authenticated and public site experience?

I've spent months building an application and now I'm looking to deploy it, but I'm new to Docker and I seem to have a mental block when it comes to actually containerizing my application. I need to run the following technologies:
PHP 7.2
MySQL 5.7
Apache 2.4
phpMyAdmin 4.7
My application will need to be available exclusively over HTTPS, and I'm assuming the connection between my application and the MySQL container will also need to go through a secure port.
In addition to that, I have a WordPress site that will serve as the pre-login experience for my application, which I'd also like to dockerize, but it should not share the same DB. When I move this to a prod environment, I will not include the phpMyAdmin container.
How many containers do I need? I was thinking that I would need at least five:
Apache
PHP
MySQL (my application)
MySQL (WordPress)
phpMyAdmin
Should my application and the WordPress site live in the PHP container, or should I create separate containers for each?
What should my docker-compose.yml file and dockerfiles look like to achieve this feat?
The driving idea here is that a container should contain a single "service". You don't break things into containers by software component (PHP, Apache, etc.) but rather by whatever needs to be combined to create a single service. So if your application is a PHP application hosted by Apache, then you'd want a container for your application that contains PHP, Apache, and your application code. That would provide your application as a service.
Same goes for WordPress. If WordPress is running behind Apache and needs PHP, you'd create a second container containing PHP, Apache, WordPress, and your WordPress content, producing your "WordPress service".
Each of your individual databases can be seen as a service, so you might want two containers running MySQL, one serving each of your databases. Alternatively, you could consider the database server as a whole to be a service and have it serve both of your databases; then you could get away with a single MySQL container. Which way you go with this is a minor issue: having a single database server will likely save a little bit of resources by avoiding some duplication.
If all of your services need to talk to each other, the easiest way to do this with Docker is to use Docker Compose. This lets you create multiple containers that know about each other and can communicate very easily by way of some simple DNS logic that Docker Compose provides. With Compose, you give each of your containers a simple name, and that name can then be looked up via DNS to provide the IP address of the container. So for example, if your MySQL container was named "mysql", your app container could connect to it via the DNS name "mysql" with no additional work on your part.
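Pulling that together, a docker-compose.yml along these lines is one way to sketch it (official images where they exist; the app's build context, credentials, and published ports are placeholders, and production TLS would normally be terminated by a proxy in front of these services):

```yaml
version: "3"
services:
  app:
    build: ./app                     # your PHP 7.2 + Apache app image,
    ports:                           # e.g. built FROM php:7.2-apache
      - "8080:80"
    depends_on: [app-db]
  wordpress:
    image: wordpress:4.9             # bundles PHP, Apache, and WordPress
    ports:
      - "8081:80"
    environment:
      WORDPRESS_DB_HOST: wp-db       # service name doubles as DNS name
      WORDPRESS_DB_PASSWORD: example # placeholder credential
    depends_on: [wp-db]
  app-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  wp-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  phpmyadmin:
    image: phpmyadmin/phpmyadmin     # drop this service in production
    environment:
      PMA_HOSTS: app-db,wp-db        # reaches both databases by DNS name
    ports:
      - "8082:80"
```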

Should I be using Docker to config a kernel or just the services running on the kernel?

Let's say I have a web server running CentOS, with PHP and MySQL installed. I want to set up a git repository for developing with others, so I thought it would be appropriate to learn to use Docker for making the development process more consistent among developers. Currently I have separate containers for PHP and MySQL, and a docker-compose.yml file which has php and mysql as services (with build paths to the PHP and MySQL Dockerfiles).
Is there any value in also having a container for CentOS, as the developers would potentially be developing on all manner of operating systems? My understanding of Docker is that I can use it to specify consistent configuration of all the various services which the app depends on, which suggests to me that there may be value in also configuring the operating system/kernel to make that consistent as well.
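If I understand correctly, containers always share the host's kernel, so a base image only pins the userland; something like this sketch (package names are illustrative):

```dockerfile
# The kernel always comes from the Docker host; a base image like
# centos:7 only fixes the CentOS userland and installed packages.
FROM centos:7
RUN yum install -y php php-mysqlnd && yum clean all
```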

Can I use Docker for production deployment of a Rails application?

I want to use Docker to deploy my Rails application. I want to know if anyone has tried this, and what problems I might face.
Deploying Rails apps to production with Docker is not only possible, but something you'd want to do, to make sure your app runs on any server you deploy to.
This comes with some challenges. First, it's advisable to run your database server and your Rails app in different containers to keep things isolated. You can also set up your production server's Docker environment with Docker Machine. Machine allows you to provision AWS, DigitalOcean, Azure, and Compute Engine instances (among many others) and manage your containers from your own computer. I assume you're just getting started with Docker, so I suggest you take a look at this cool guide about setting up a Rails + Postgres app with Docker.
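A minimal sketch of that two-container layout for a Rails + Postgres app (the build context, ports, and credentials are placeholders):

```yaml
version: "3"
services:
  web:
    build: .                         # Dockerfile for the Rails app
    command: bundle exec rails server -b 0.0.0.0
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/app_production
    depends_on:
      - db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example     # placeholder credential
    volumes:
      - pg-data:/var/lib/postgresql/data
volumes:
  pg-data:
```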
