How should I containerize my application requiring apache/php/mysql with an authenticated and public site experience? - docker

I’ve spent months building an application and now I’m looking to deploy it, but I’m new to Docker and I seem to have a mental block when it comes to actually containerizing my application. I need to run the following technologies:
php 7.2
mysql 5.7
apache 2.4
phpMyAdmin 4.7
My application will need to be available exclusively over HTTPS, and I’m assuming the connection between my application and the MySQL container will also need to go over a secure port.
In addition, I have a WordPress site that will serve as the pre-login experience for my application. I’d like to dockerize it as well, but it should not share the same DB. When I move this to a prod environment, I will not include the phpMyAdmin container.
How many containers do I need? I was thinking that I would need at least 5:
apache
php
mysql (my application)
mysql (wordpress)
phpmyAdmin
Should my application and the WordPress site live in the same PHP container, or should I create separate containers for each?
What should my docker-compose.yml file and dockerfiles look like to achieve this feat?

The driving idea here is that a container should contain a single "service". You don't break things into containers by software component (php, apache, etc.) but rather by whatever needs to be combined to create a single service. So if your application is a PHP application hosted by Apache, then you'd want a container for your application that contained PHP, Apache and your application code. That would provide your application as a service.
Same goes for Wordpress. If Wordpress is running behind Apache and needs PHP, you'd create a second container containing PHP, Apache, WordPress, and your WordPress content, producing your "Wordpress service".
Each of your individual databases can be seen as a service, so you might want two containers running MySQL, one serving each of your databases. You could choose to consider the database server as a whole to be a service, and have it serve both of your databases. Then you could get away with a single MySQL container. Which way you go with this is a minor issue. Having a single database server will likely save a little bit of resources by avoiding some duplication.
If all of your services need to talk to each other, the easiest way to do this with Docker is to use Docker Compose. This lets you create multiple containers that know about each other and can communicate very easily by way of some simple DNS logic that Docker Compose provides. With Compose, you give each of your containers a simple name, and that name can then be looked up via DNS to provide the IP address of the container. So, for example, if your MySQL container is named "mysql", your app container can connect to it via the DNS name "mysql" with no additional work on your part.
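To make that concrete, here is a minimal sketch of what such a docker-compose.yml could look like for your setup. The build paths, image tags, credentials and ports are placeholders to adapt, and the Apache TLS/HTTPS configuration itself is left out:

version: "3.7"
services:
  app:                         # your PHP 7.2 application served by Apache 2.4
    build: ./app               # Dockerfile based on php:7.2-apache plus your code
    ports:
      - "443:443"              # expose only HTTPS to the outside
    depends_on:
      - app-db
  wordpress:                   # the pre-login WordPress site as its own service
    image: wordpress           # official image; pick a tag matching your PHP requirement
    environment:
      WORDPRESS_DB_HOST: wp-db
      WORDPRESS_DB_PASSWORD: change-me
    depends_on:
      - wp-db
  app-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: appdb
  wp-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: wordpress
  phpmyadmin:                  # drop this service in production
    image: phpmyadmin/phpmyadmin
    ports:
      - "8080:80"

Inside the app container you would then reach its database at the host name "app-db", and WordPress would reach its own at "wp-db".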

Related

Docker based Web Hosting

I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker running on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the docker-compose.yml or Dockerfile to any remote web server. All the software and dependencies get installed and everything runs just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing I need to configure is the port, so that the domains can be mapped to port 80. For this I have to use an extra NGINX container for routing.
But a VPS can be used to host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, HostGator, etc.? Or is Docker ideal only for development on a local machine, and not meant to be deployed on web servers for hosting?
The main benefits of Docker for simple web hosting are, in my opinion, the following:
Isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
Separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it. (Consider a setup with two websites that each need a Postgres database. If each website has its own DB container, you can bump the Postgres version for one website without affecting the other.)
Reproducibility: you can build the Docker image once, test it on acceptance, and promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
Environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables (see the sketch after this list).
Security: one can argue about this one, as containers by themselves won't do much for you in terms of security. However, due to easier dependency upgrades, separated networking, etc., most people will end up with a setup that is more secure. (Think about the DB containers again here: they can share a network with your app/website container, so there is no need to expose the database port on the host.)
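A minimal sketch illustrating those last two points; the service names, image tag, credentials and network name here are made up for illustration:

version: "3.7"
services:
  website:
    build: ./website
    environment:
      SMTP_HOST: smtp.example.com     # settings specific to this service
      DB_HOST: website-db
    networks:
      - internal
    ports:
      - "8080:80"
  website-db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: change-me
    networks:
      - internal                      # reachable from the website container, no port published on the host
networks:
  internal: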
Note that you should be careful with Docker's port mapping. It uses iptables and will override the settings of most firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
Also, there are quite a few projects that make routing requests to the applications (in this case containers) very pleasant and easy to automate. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers that should all be accessible on ports 80 and 443.
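As a rough sketch of how that can look with Traefik (v2 label syntax): the host names and router names below are placeholders, and the certificate resolver you would add for HTTPS on port 443 is omitted:

version: "3.7"
services:
  traefik:
    image: traefik:v2.4
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock    # lets Traefik discover the other containers
  site1:
    build: ./site1
    labels:
      - "traefik.http.routers.site1.rule=Host(`site1.example.com`)"
  site2:
    build: ./site2
    labels:
      - "traefik.http.routers.site2.rule=Host(`site2.example.com`)"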

Best approach to create containers

I am developing an application with Node.js and MySQL that has the following dependencies:
Nginx (for reverse proxying the DB and the Node.js server)
Ghostscript (dependent OS is Ubuntu)
pdftk (dependent OS is Ubuntu)
I would like to know what would be the best approach if I want to use docker containers to pack my application.
Should I create one Nginx container, one Node.js container, and one MySQL container and make them talk to each other? I know this is the better approach since it's scalable, but in this case how and where should I install Ghostscript and pdftk? (The Node.js application makes use of Ghostscript and pdftk for PDF files.)
or
Should I create one Ubuntu Docker container and install everything (viz. Nginx, pdftk, Ghostscript, MySQL) in it?
Splitting an application up into separate containers requires a well-defined API that supports calls over the network (usually HTTP or some other application protocol on the TCP stack).
As both Ghostscript and pdftk are command-line tools invoked via a CLI, you cannot call them from another container out of the box; you would need to develop some externally facing API for that.
When setting the boundaries of your containers, think in terms of domains. The container becomes the smallest unit that you will deploy and scale. That unit should be self-contained and have a well-defined, single purpose.
It is not clear from your description exactly what role Nginx plays, but assuming it is some kind of client-facing web server or proxy, 3 containers make sense in your case:
Node.js + pdftk + Ghostscript (the application)
Nginx (the web server/proxy)
MySQL (the database)
The Node.js application has all its application dependencies inside, but is more loosely coupled to Nginx and MySQL, with which it communicates over the network.
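In practice, that means Ghostscript and pdftk get installed in the application image itself. A rough Dockerfile sketch for that container; the base image tag, file layout and start command are illustrative, and on some base images the pdftk package may be named or packaged differently:

FROM node:12
# install the command-line tools the application shells out to
RUN apt-get update \
 && apt-get install -y --no-install-recommends ghostscript pdftk \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]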
You should create separate containers for each application, because this allows you to achieve:
Independent deploy.
Independent scaling.
Independent development.
Isolation and security.
For convenience, you can use docker-compose, which allows you to configure and launch multiple Docker containers with a single command.
In production, I would recommend not deploying the database in a Docker container, because the database stores state; running it in a container is also less reliable and increases the complexity of support.
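A minimal docker-compose.yml sketch for the three containers described above; the service names, ports, paths and credentials are placeholders:

version: "3.7"
services:
  nginx:
    image: nginx:1.19
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxy configuration pointing at the app service
    depends_on:
      - app
  app:
    build: .                  # image containing Node.js, Ghostscript and pdftk
    environment:
      DB_HOST: db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data: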

Multiple web apps with Docker architecture

I have multiple web apps, all of them running on Apache, many of them using PHP, MySQL, node, etc.
I'm not currently using Docker, but I would like to use it, and I would like to know what the best architecture to use would be.
I suppose that on my localhost I should create a container with Apache, and all the applications would use it (am I wrong?). The same goes for MySQL if the application uses it.
But then, what happens when I want to deploy my projects (or some of them) into a production environment? I'm currently using Microsoft Azure Web Apps, and I don't think that my 'localhost' setup will be valid there. I suppose that in production each project should have its own Apache, but this changes my Docker setup, and I don't think this is the Docker philosophy.
So, how should I structure my architecture?

When to use a multi-container docker in Elastic Beanstalk for running a Rails App?

I would like to deploy a rails API app to AWS Elastic Beanstalk and noticed that there are two options for docker.
Single container
Multi-container
I think a single container is enough for this app; however, I was wondering in which cases one would use multi-container. If I would like to deploy two Rails apps (one an API app and the other an admin app) to a single EC2 instance, is that such a case?
Well... not really. Multi-container, as the name says, has more than one container within the overall definition (done with the Dockerrun.aws.json file). You can still deploy just one container with whatever application you want, say Django, a Python-based framework, where there's an API and an admin panel as well and it all sits within one application.
But you may want to deploy your application behind some reverse proxy, say Nginx, so there's a need for a second container. That's the case where you would use multi-container. The main advantage of using multi-container is that the containers can talk to each other over a local network with some DNS host mapping, so your Nginx container can reach any application via proxy_pass by its name, like just "backend", where the Rails or Django application lives.
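As a rough sketch, a two-container Dockerrun.aws.json (version 2) along those lines could look like this; the image names, memory values and the assumption that the Rails app listens on port 3000 are placeholders:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "myorg/rails-api:latest",
      "essential": true,
      "memory": 512
    },
    {
      "name": "nginx",
      "image": "nginx:1.19",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": ["backend"]
    }
  ]
}

The Nginx config inside the proxy container could then use something like proxy_pass http://backend:3000;.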

Two separate Docker PHP environments, one shared MySQL

I am SUPER new to Docker and have spent 4-5 hrs trying to figure this out with no luck so I am turning to you docker geniuses.
I currently have multiple websites, each with their own Docker container. Each container is a full environment I created using the Docker documentation: PHP, Ubuntu, MySQL, and a server (Nginx/Apache). Though this works, it isn't what I need/want in the long run.
I have several Laravel sites running PHP 7 and Nginx with MySQL. I also have a couple of Phalcon PHP 5.5 containers using Apache and MySQL. For each site I have a container built from a base image like webdevops, and then I went in using the exec command and added the Laravel or Phalcon stuff.
The problem is that many times I need to reference multiple databases at once. The sites aren't linked, but I need a quick look at a DB from another project. I also need to run a new container for EACH site, which is stupid because all Laravel sites have the EXACT same environment.
What I would love is to have a MySQL container with all my databases, a container with PHP 7 and Nginx for ALL my Laravel sites, and a container with PHP 5.5 and Apache for ALL my Phalcon stuff. This lets me just look at the code in one environment (without running that environment) AND see the tables in the database while running the other environment. I.e., running environment container A, which has sites 1, 2, 3 mapped, together with a shared MySQL container, so I can see the databases of sites 3 and 4 without running environment container B.
I tried creating YAML files in each project and having a shared dir with environment Dockerfiles, but that isn't working. I have used the likes of this, this, and this to try to guide me, but no luck.
Can anyone give me some pointers on where to start or help me with a super simple base example of how to do this?
Thanks in advance.
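For what it's worth, the layout described above could be sketched in a single docker-compose.yml roughly like this; the image tags, host ports, paths and credentials are placeholders, and the webdevops tags would need adjusting to match the exact PHP versions in use:

version: "3.7"
services:
  mysql:                          # one MySQL container holding all databases
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    ports:
      - "3306:3306"               # published so any DB can be inspected from the host
    volumes:
      - db-data:/var/lib/mysql
  laravel-env:                    # PHP 7 + Nginx environment for ALL Laravel sites
    image: webdevops/php-nginx:7.2
    volumes:
      - ./laravel-sites:/app
    ports:
      - "8080:80"
    depends_on:
      - mysql
  phalcon-env:                    # PHP 5.x + Apache environment for ALL Phalcon sites
    image: webdevops/php-apache:5.6
    volumes:
      - ./phalcon-sites:/app
    ports:
      - "8081:80"
    depends_on:
      - mysql
volumes:
  db-data:

Each environment container reaches the shared database at the host name "mysql", and you can start just the environment you need (e.g. docker-compose up mysql laravel-env).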
