Working on a larger-than-usual project of mine, I am building a web application that will talk to several APIs of mine, each written in its own language. I use two databases, one being MariaDB and the other being Dgraph (a graph database).
Here is my local directory structure:
services - all my services
api - contains all my APIs
auth - contains my user auth/signup API
v1 - contains my current (only) API version
trial - contains an API of mine called trial
etc...
application - contains the app users will interact with
daemon - contains my programs that will run as daemons
tools - contains tools (import data, scrapers, etc)
databases - to contain my two configs (MariaDB and Dgraph)
Because some components are written in PHP7-NGINX while others are in PYTHON-FLASK-NGINX, how can I do a proper Docker setup with that in mind? Each service, API, daemon and tool is independent and they all talk through their own REST endpoints.
Each has its own private GitHub repository, and I want to be able to take each one and deploy it to its own server when needed.
I am new to Docker and all the reading I do confuses me: should I create a docker-compose.yml for each service, or one for the entire project? But each service is deployed separately, so how would docker-compose.yml know that?
Any pointers to a clean solution? Should I create a container for each service and in that container put NGINX, PHP or PYTHON, etc?
The usual approach is to put every independent component into a separate container. The general Docker idea is 1 container = 1 logical task. A task is not necessarily exactly 1 process; it is just the smallest independent unit.
So you would need to find 4 base images (existing ones from the Docker registry should probably fit):
PHP7-NGINX
PYTHON-FLASK-NGINX
MariaDB
Dgraph
You can use https://hub.docker.com/search/ to search for appropriate images.
Then create a custom Dockerfile for every component (taking either PHP7-NGINX or PYTHON-FLASK-NGINX as a parent image).
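For example, a Dockerfile for one of the Flask APIs might be little more than the following (a minimal sketch; the base image tag and paths are assumptions to adapt to your layout):

    # Hypothetical services/api/trial/Dockerfile for a Flask API served behind NGINX/uWSGI
    FROM tiangolo/uwsgi-nginx-flask:python3.8
    COPY ./app /app
    RUN pip install --no-cache-dir -r /app/requirements.txt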
You probably would not need a custom Dockerfile for the databases. Typically, database images just require mounting a config file into the container using the --volume option, or passing environment variables (see the description of the base image for details).
After that, you can just write a docker-compose.yml and define in it how your containers are linked together, along with other parameters. It would look something like https://github.com/wodby/docker4drupal/blob/master/docker-compose.yml .
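For your particular layout, a first docker-compose.yml could look roughly like this (a sketch only; build paths, ports, credentials and image tags are assumptions based on your directory listing):

    version: "3.7"
    services:
      auth:                              # PHP7-NGINX API, built from its own Dockerfile
        build: ./services/api/auth/v1
        ports:
          - "8001:80"
      trial:                             # PYTHON-FLASK-NGINX API
        build: ./services/api/trial
        ports:
          - "8002:80"
      mariadb:
        image: mariadb:10.5
        environment:
          MYSQL_ROOT_PASSWORD: example   # placeholder credential
        volumes:
          - mariadb_data:/var/lib/mysql
      dgraph:
        image: dgraph/standalone         # assumption: the single-node Dgraph image
    volumes:
      mariadb_data:

Each service keeps its own repository and Dockerfile; the compose file only wires them together on one network.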
By the way, GitHub is full of good docker-compose.yml examples.
If you are going to run services on different servers, then you can create a Swarm cluster, and run your docker-compose.yml against it: https://docs.docker.com/compose/swarm/ . After that, you can scale easily by deploying as many instances of each microservice as you need (that's why it's more useful to have separate images for every microservice).
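With current Docker versions (swarm mode built in), deploying that same compose file to a cluster would look roughly like this (the stack and service names are the hypothetical ones from the sketch above):

    docker swarm init                                     # on the manager node
    docker stack deploy -c docker-compose.yml myproject
    docker service scale myproject_trial=5                # run 5 replicas of one API only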
Related
I have been working on a project where I have had several Docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought of maybe joining everything into one container, but I have read that it is not recommended to have several processes in one place. Secondly, I thought about wrapping everything up into a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each container image from Docker Hub separately and launch the webpage. But I am not sure if that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known one for single-server usage is Docker Compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose.yml file
your application's Docker images (e.g. through Docker Hub).
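For the OSRM/Nominatim/webpage setup described above, the file you hand to users might look roughly like this (the image names are placeholders for whatever images you would publish; only the idea matters):

    version: "3"
    services:
      osrm:
        image: yourhub/osrm-routing:latest     # hypothetical published image
      nominatim:
        image: yourhub/nominatim:latest        # hypothetical published image
      web:
        image: yourhub/webpage:latest          # hypothetical published image
        ports:
          - "80:80"
        depends_on:
          - osrm
          - nominatim

A user then only needs Docker and Docker Compose installed and runs docker-compose up -d.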
For clusters/cloud, you would look more at orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/
Can somebody explain it with some examples? Why are multi-container Docker apps built, when you could contain your app in a single Docker container?
When you make a multi-container app you have to deal with networking. Isn't it easier to run a single image as a single container rather than two images as two containers?
There are several good reasons for this:
It's easier to reuse prebuilt images. If you need MySQL, or Redis, or an Nginx reverse proxy, these all exist as standard images on Docker Hub, and you can just include them in a multi-container Docker Compose setup. If you tried to put them into a single image, you'd have to install and configure them yourself.
The Docker tooling is built for single-purpose containers. If you want the logs of a multi-process container, docker logs will generally print out the supervisord logs, which aren't what you want; if you want to restart one of those containers, the docker stop; docker rm; docker run sequence will delete the whole thing. Instead with a multi-process container you need to use debugging tools like docker exec to do anything, which is harder to manage.
You can upgrade one part without affecting the rest. Upgrading the code in a container usually involves building a new image, stopping and deleting the old container, and running a new container from the new image. The "deleting the old container" part is important, and routine; if you need to delete your database to upgrade your application, you risk losing data.
You can scale one part without affecting the rest. More applicable in a cluster environment like Docker Swarm or Kubernetes. If your application is overloaded (especially in production) you'd like to run multiple copies of it, but it's hard to run multiple copies of a standard relational database. That essentially requires you to run these separately, so you can run one proxy, five application servers, and one database.
Setting up a multi-container application shouldn't be especially difficult; the easiest way is to use Docker Compose, which will deal with things like creating a network for you.
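To make the last three points concrete, here is how the everyday tooling looks once each concern is its own container (standard Compose commands; the service names app and db are made up):

    docker-compose logs app              # logs for just the application, no supervisord noise
    docker-compose up -d --build app     # rebuild and replace only the app; the db volume is untouched
    docker-compose up -d --scale app=5   # five app containers in front of a single database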
To simplify, I would say you can run only one application with a public entry point (like an API) in a single container. This approach is actually recommended by the official Docker documentation.
Microservices
Because of this constraint, you cannot run microservices that require their own entry points in a single Docker container.
It becomes more of a discussion about the advantages of a monolithic application vs. microservices.
Database
Even if you decide to run only the monolithic application, you still need to connect some database to it. As you noticed, Docker has an additional network-configuration layer, so if you want to run the database and application locally, the easiest way is to use docker-compose to run both images (the database and your application) inside one automatically configured network.
# Application definition
application: <your app definition>
# Database definition
database:
  image: mysql:5.7
In my example, you can just connect to your DB via database:<port> from your main app (plus credentials, eventually) and it will work.
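One common way to wire that up is to hand the hostname and credentials to the application through environment variables (the variable names below are made up; use whatever your app reads):

    application:
      build: .
      environment:
        DB_HOST: database      # resolves to the database container on the compose network
        DB_PORT: "3306"
        DB_PASSWORD: example   # placeholder
      depends_on:
        - database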
Scalability
However, why should we split the database image from the application image? One word: scalability. For development purposes, you want to have your local DB, maybe with Docker because it is handy. For production purposes, you will run the application image somewhere (Kubernetes, Docker Swarm, Azure App Services, etc.). To handle multiple requests at the same time, you want to run multiple instances of your application. But what about the database? You cannot connect to an internal instance of the DB hosted in the same container, because other instances of your app in other containers would have a completely different set of data (without synchronization).
Most often you will elect to use a separate database server - whether running it in a container or as a fully managed database (like Azure Cosmos DB or MongoDB Atlas) - with configuration, scaling, and synchronization dedicated to the DB only. Your app just needs to know the proper URL for it. Most cloud providers expose such services out of the box, so you do not have to worry about the configuration yourself.
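In a production compose/stack file this often ends up looking something like the sketch below: the stateless application is replicated, while the database lives outside as a managed service (the image name, URL and replica count are all illustrative):

    application:
      image: myregistry/myapp:1.2.0
      environment:
        DATABASE_URL: "mysql://user:pass@managed-db.example.com:3306/mydb"   # external, managed DB
      deploy:
        replicas: 5      # scale only the stateless app (honored by swarm-style deploys)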
Easy to change
The last but not least argument is about changing the initial setup over time. You might change the database provider, or upgrade the image version in the future (such things are required from time to time). When you separate the images, you can modify one without touching the others. This decreases the cost of maintenance significantly.
Also, you can add additional services very easily. A different logging aggregator? No problem. An additional microservice running out of the box? Easy.
We are exploring Docker and trying to find out if it provides a way to do the following, to eliminate some management overhead with our current approach.
We are looking at something like below:
Have a base template which has Linux OS + App1 - Oracle + App2 - MySQL + App3 - MongoDB
Whenever we have a request, we should be able to pull a container out of the base template for a particular app. E.g. a container which has Linux OS + only the Oracle app installed, and similarly OS + MongoDB in another container.
We have a restriction against having different templates for each of the applications, hence we need to have only one master template which has all the apps, and each time pull a container from that base template with only a particular app enabled.
Any pointer on how we can achieve this would be helpful. Can a Dockerfile or something else help?
Thanks in advance.
Can you? Yes. Should you? No.
Layered filesystems mean you can design several images and share common parts of the filesystem. You design your Dockerfiles with the common parts at the top of the Dockerfile or in a common base image. These common parts should be minimal; you should not have to rebuild app2 because of a change to app1.
Images should be tagged, with a different repository per app, and a different tag for the different builds of each app. The images themselves should contain the binaries, libraries, and other dependencies needed to run the application, but not the configuration or persistent data. Configuration is injected externally with environment variables, command line arguments, configs, secrets, or a read-only volume. And data is almost always saved to a volume or database.
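A hedged sketch of what that layering can look like (the registry name, paths and versions are placeholders):

    # base/Dockerfile -- only what every app genuinely shares
    FROM ubuntu:20.04
    RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*

    # mongo-app/Dockerfile -- one app per image, built on the shared base
    FROM myregistry/common-base:1.0
    COPY ./app /opt/app
    CMD ["/opt/app/run.sh"]

Each image is then built and tagged separately, e.g. docker build -t myregistry/common-base:1.0 base/ and docker build -t myregistry/mongo-app:2.3.1 mongo-app/.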
Images do not include the OS, if by OS you include the Linux kernel. Containers share the kernel from the host OS. Do not confuse a container for a VM, they are different, behave differently, and are managed differently.
For mixing and matching different applications with different configurations and databases, it makes the most sense to move to a Compose or Kubernetes YAML file that specifies which images and configurations to deploy. When you change applications, it is not a matter of changing configs for a monolithic image; instead, you pull the appropriate image for that specific task.
Every Docker container should have one PID. Does that mean we should run one service in one container?
You can do this, but you should never combine multiple services in one container. It has many downsides and no benefit.
A few downsides:
You can't effectively limit resources for the different services
You can't scale one service independently
Your images will be huge in size, as no cache can be utilized
Sometimes you can't resolve conflicting dependencies
I am pretty new to Docker. After reading specifically what I needed, I figured out how to create a pretty nice Docker setup. I have created a setup wherein I can start up multiple systems using one docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up the different containers, most referring to the local Dockerfiles, some referring to online image names. When I run docker-compose up, all containers start as expected in the configured network and I am able to use it as desired.
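For reference, the relevant part of such a docker-compose.yml is roughly this (simplified; the service names follow the directories above, and the ports and mounts are illustrative):

    php70:
      build: ./php70
      volumes:
        - ./web:/var/www/html      # shared test files mounted into every PHP container
    mysql57:
      build: ./mysql57
    master_web:
      build: ./master_web
      ports:
        - "8080:80"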
First of all, I would like to know what this setup is called. Is this a "docker swarm", or is such a setup called something different?
Secondly, I'd like to make one "compiled/combined" file (image, container, swarm, engine, machine, or whatever it is called) of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will work as long as all the referenced external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish the built images to a Docker registry. You can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry's IP/DNS name in docker-compose.yml. This way, you can deploy it anywhere docker-compose is installed (docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
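A sketch of that workflow, with the registry address as a placeholder:

    docker build -t registry.example.com/php70:1.0 ./php70    # build and tag against your registry
    docker push registry.example.com/php70:1.0                # publish the image

In docker-compose.yml the service then references image: registry.example.com/php70:1.0 instead of (or in addition to) a local build: path, so any machine that can reach the registry can pull and run the full setup.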
docker-machine is a tool for deploying to multiple machines, as is Docker Swarm.
I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server and Elasticsearch, but I'm not able to figure out where I should start.
I have asked a similar question here: How should I create a Dockerfile to run more than one service in one instance?
There I learned that I should create different containers: one for RabbitMQ, one for the FTP server and another for Elasticsearch, and run them using a docker-compose file. There you'll also find the Dockerfile code I created.
It would be great if someone could help me out with this. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that typically maps to a single process. So if you need to run more than one thing (or more than one process, generally speaking, though not strictly), then you most probably need to build separate images. One of the easiest and recommended ways of creating an image is writing a Dockerfile. This is expected to be an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which results in the images.
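For example (the image name and path are placeholders for your own service):

    docker build -t my-ftp-server:1.0 ./ftp-server    # directory containing that service's Dockerfile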
When you run an image you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual VM or running machine is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain-text file in YAML format that describes the relationships between the different components of the app. Apps can be made up of several services - like a web server, app server, search engine, database server, cache engine, etc. Each of these is a service and runs as a container, but it is not necessary to run everything as a container. Some can remain running in the traditional way, on VMs or on bare-metal servers.
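A minimal sketch for the services you mentioned might look like this (RabbitMQ and Elasticsearch use their public images; the FTP server is assumed to be built from your own Dockerfile, and the tags/ports are illustrative):

    version: "3"
    services:
      rabbitmq:
        image: rabbitmq:3-management        # official image; management UI on 15672
        ports:
          - "5672:5672"
          - "15672:15672"
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9
        environment:
          - discovery.type=single-node      # single-node development setup
      ftp:
        build: ./ftp-server                 # assumption: your own Dockerfile for the FTP service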
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.