How to design Docker routing and database Layer? - docker

I have a few Django apps that I want to host on a single Docker host running CentOS. I want to have 3 layers:
network
application
database
network: I want to have an nginx container in the network layer routing requests to the different containers in the application layer. I plan to use a 1:1 port mapping on this container to expose port 80 on the host. Nginx will direct each request to the appropriate app in the application layer, running on ports 8001-8010.
application: I'll have several containers, each running a separate Django app under Gunicorn on one of ports 8001-8010.
database: one container running MySQL with a different database for each app. The MySQL container will have a data volume attached for persistence.
I understand you can link containers, but as I understand it, linking relies on the order in which the containers are started, i.e. how can you link nginx to several containers that haven't been started yet?
So my questions are:
How do I connect the network layer to the application layer when the number/names of containers in the application layer are always changing, i.e. I might bring a new application online/offline? How would I update the nginx config, and what would the addressing look like?
How do I connect the application layer to the database layer? Do I have to use Docker linking? In my Django application code I need the hostname of the database to connect to. What would I put as the hostname of the database container? Would it resolve?
Is there a reference architecture I could leverage?

1.) Docker does not support dynamic linking, but there are some tools that can do this for you; see this SO question.
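To give a concrete idea of what "updating the nginx config" can look like, here is a minimal sketch; the upstream IP and port, the server name, and the container name nginx are all placeholder assumptions. The discovery tools mentioned above essentially regenerate a file like this and trigger a reload:

```shell
# Write an nginx config fragment for the current set of app containers.
# The container IP (172.17.0.2) and port are examples; in practice a
# service-discovery tool fills these in as containers come and go.
cat > /etc/nginx/conf.d/apps.conf <<'EOF'
upstream app1 {
    server 172.17.0.2:8001;
}
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://app1;
    }
}
EOF

# Ask the running nginx container to reload its config without downtime.
docker exec nginx nginx -s reload
```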
2.) You could start your database container first and then link all application containers to it. Docker writes the hosts file at container start-up (statically; if your database container restarts and gets another IP, you need dynamic linking, see above). When you link a container like this:
--link db:db
you can access that container with the hostname db.
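A minimal sketch of that start-up order, assuming the official mysql image and a placeholder app image myapp1:

```shell
# Start the database container first, with a host directory mounted for
# persistence (MYSQL_ROOT_PASSWORD is the official image's env variable).
docker run -d --name db \
    -v /srv/mysql:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    mysql

# Start each application container linked to it; inside the container
# the database is then reachable under the hostname "db".
docker run -d --name app1 --link db:db -p 8001:8001 myapp1
```

In the Django settings you would then set `'HOST': 'db'` in the `DATABASES` entry, and the name resolves via the generated hosts file.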

I ended up using this solution:
https://github.com/blalor/docker-hosts
It allows you to refer to other containers on the same host by hostname. It is also dynamic: the /etc/hosts file in each container gets updated as containers go up and down.

Related

How to publish a web site running in a docker container on production?

I have a web application running in a docker container on production server. Now I need to make API requests to this application. So, I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for that. In my previous experience I pointed the domain at a folder, but now I don't know how to point the domain at a running container on ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, which I can't use at the moment.
To expose a Docker application to the public without using Compose or another orchestration tool like Kubernetes, you can use the docker run -p hostPort:containerPort option to publish your container port. Make sure your application is listening on 0.0.0.0:[container port] inside the container. To access the service externally, you would use the host's IP and the host port that the container port has been mapped to.
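As a sketch of that (the image name mywebapp and the ports are placeholders):

```shell
# Publish container port 8000 on host port 80; the app inside must be
# listening on 0.0.0.0:8000, not on 127.0.0.1.
docker run -d --name web -p 80:8000 mywebapp

# From any other machine the service is then reachable via the host's IP:
#   curl http://<host-ip>/
```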
See more here
If you want to link to a domain, you can update your DNS records to point your domain to your host IP address.
Hope this helps!
The best way is to use Kubernetes, because it eases many operations, but docker-compose can also be used.
If you simply want to deploy using Docker, it can be done by mapping a hostPort to the containerPort.

How to run phpBB in Docker with NGINX, php installed on a different host than the database?

How can I run a Docker image with NGINX and phpBB (and all required stuff like PHP installed), with persistence (changes on the board shouldn't be lost) and with the database on another host (which already exists)? So, let's assume I have the following: MySQL running on 192.168.2.233 (local address) on port 3307. Now I want to create a Docker image with Alpine Linux (probably the smallest), NGINX and phpBB, where the board runs on the NGINX web server and connects to that database. Changes on the board (e.g. changing the web server settings or so) should be persisted within the container. How can I do that?
EDIT:
The database on server 192.168.2.233 already exists! So no, I don't need two or more Docker containers. I need one Docker container with phpBB running on the NGINX web server, connecting to the database on another host in the same network. The container should use persistence (volumes) to save the settings made in phpBB.
I tried to use the following Dockerfile and modified it:
https://gitlab.com/boxedcode/alpine-nginx-php-fpm/blob/master/Dockerfile --> https://drive.google.com/open?id=1CW68OFCJE9RjIe8_RBC8q5Fa6juRtxmR
Together with the owner of another repository, I've now found a solution (one that uses Apache, however) here: https://github.com/blueimp/phpbb/issues/1. After a few errors and problems on my side, I figured it out. The solution I'm using now is here: https://github.com/SeppPenner/DockerApacheSSLphpBB
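For reference, the run command for such a single-container setup looks roughly like this; the image tag, the volume name, and the mount path /var/www/html are assumptions, while the database host and port come from the question:

```shell
# Build the image from the Dockerfile, then run it with a named volume
# so board files and settings survive container rebuilds.
docker build -t phpbb-board .
docker run -d --name phpbb \
    -p 80:80 \
    -v phpbb-data:/var/www/html \
    phpbb-board

# phpBB itself is pointed at the existing database during its install
# step: host 192.168.2.233, port 3307.
```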

Access to Docker container

I have a LAMP server with a lot of added domain names, so many different websites are hosted on it. I would like to separate them into Docker containers. Each website/webapp and all related stuff should live in its own container. File access is solved with the --volumes-from flag, but what about MySQL databases and VirtualHosts? How should I set them up on a per-container basis?
For MySQL you could launch one container per app and link them together using the --link flag, or you could simply install MySQL as a server within each application container itself.
You could also probably use docker-compose to orchestrate the whole setup.
As for virtual hosts, the following would probably meet your demands?
https://github.com/jwilder/nginx-proxy
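The basic usage of that proxy, per its README, looks like this (the VIRTUAL_HOST value and the site image name are placeholders):

```shell
# Run the proxy; it watches the Docker socket and regenerates its nginx
# config automatically as containers start and stop.
docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Any container started with a VIRTUAL_HOST variable is picked up and
# routed by hostname:
docker run -d -e VIRTUAL_HOST=site1.example.com my-site1-image
```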
You can use the already available MySQL image to start your DB and then connect to it through linking (the --link option when running your app); you can find more info at the link.
For your VirtualHosts you can use nginx as a proxy, and it will route to your apps depending on your criteria (e.g. /admin is routed to app1 at 192.197.0.12).
You can expose the MySQL port in the Dockerfile by using the `EXPOSE` command and then bind your service so that MySQL-related queries are directed to that port.

How to move an application based on two linked docker containers to another host?

I'm looking to move an application based on two linked containers to a different host. The first container comes from the official MySQL image; the second container comes from the official WordPress image. The WP container is linked to the MySQL container.
These containers have evolved over time, with different templates and data. I'd like to migrate the containers to another host. Right now, if I stop the containers, I only have to issue a docker start mysql and a docker start wp and all the context (links, ports, config, whatever...) is maintained. I don't have to specify which ports I expose, or what links are in place.
What I would expect, and I don't know if docker offers, is to:
export the containers
move them to the new host
import the container in the new host
in the new host issue a docker start mysql and a docker start wp
Is this possible? If not, what would be the way to get the exact same infrastructure up and running on another host?
I have tried using export/import and save/load. In both cases what you get imported is an image, not a container.
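One workaround, sketched below, is to commit the containers to images, ship those, and restate the run flags on the new host. The container and image names come from the question; note that docker commit does not capture volume data, which would have to be copied separately (e.g. with rsync):

```shell
# On the old host: freeze each container's filesystem into an image.
docker commit mysql mysql-migrated
docker commit wp wp-migrated

# Ship the images over ssh (a private registry works too).
docker save mysql-migrated | ssh newhost docker load
docker save wp-migrated    | ssh newhost docker load

# On the new host: recreate the containers. Links and port mappings are
# properties of `docker run`, not of the image, so they must be restated.
docker run -d --name mysql mysql-migrated
docker run -d --name wp --link mysql:mysql -p 80:80 wp-migrated
```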

Replace Docker container with external service

I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and have Docker be transparent, forwarding requests on 3306 (or whatever port you're running MySQL on) to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients on another machine can run things against a service in Docker as if the service were running directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a new simple mysql image that just has mysql_client installed and just takes requests and forwards them to RDS.
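One way to sketch such a forwarding container is with socat rather than a full MySQL client image; the alpine/socat image, the RDS endpoint, and the apache image name below are assumptions, not part of the question:

```shell
# A stand-in "mysql" container that just forwards TCP 3306 to RDS.
docker run -d --name mysql alpine/socat \
    tcp-listen:3306,fork,reuseaddr \
    tcp:mydb.example.us-east-1.rds.amazonaws.com:3306

# Apache is then started exactly as before; its link still resolves the
# alias "mysql", but traffic ends up at RDS.
docker run -d --name apache --link mysql:mysql -p 80:80 my-apache-image
```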
