For example, to run Django in production I can use nginx, uWSGI and supervisor.
I can have a single Dockerfile which installs all of them and runs supervisor,
or
I can have three Dockerfiles (nginx, uWSGI, supervisor) and one docker-compose file.
I've been using the first option and wonder if there is any benefit to the second approach.
I am not sure about the need for a supervisor container, but for uwsgi and nginx the rule of thumb for containers is:
"Single process per container"
(see Docker's Dockerfile best practices)
So it is better to have three containers:
nginx
uwsgi
supervisor
If you want to keep supervisor just for the sake of managing the nginx process, it is better to drop it, since updating the Docker image and launching a new container is better than restarting a process inside it.
Both nginx and uwsgi will run as the root process of their containers; when there is an update, the common practice is to update the image and launch a new container, and health checks remain manageable.
Plus, you can run one nginx container alongside two application containers; scaling and flexibility are greater when you have one process per container.
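For illustration, a minimal docker-compose sketch of this one-process-per-container layout could look like the following; the service names, build path and nginx config file are assumptions for the example, not something taken from the question:

    services:
      app:
        build: ./app                 # hypothetical Dockerfile installing Django + uwsgi
        expose:
          - "8000"                   # uwsgi listens here, reachable only from other services
        restart: unless-stopped
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxies requests to app:8000
        depends_on:
          - app
        restart: unless-stopped

nginx reaches the Django container through the service name (app) on the compose network, so no supervisor is needed in either image.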
Given that you have nginx and uwsgi serving Django, I would recommend having two services in docker-compose:
uwsgi + supervisor
Nginx + supervisor
How does this help?
Given that uwsgi and nginx are the two major processes that determine the availability of your solution, splitting them this way ensures the following:
Separation of concerns, plus the flexibility to use nginx for other purposes or solutions
Per-service health checks (by Docker) to pinpoint precisely where the issue is in case of a failure
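As a rough sketch of what per-service health checks might look like in the compose file (the image names, port and curl-based checks are assumptions; they also require curl to be present in the images):

    services:
      app:
        image: my-django-app               # hypothetical image running uwsgi (+ supervisor)
        healthcheck:                       # Docker flags the service unhealthy if this fails
          test: ["CMD-SHELL", "curl -f http://localhost:8000/ || exit 1"]
          interval: 30s
          timeout: 5s
          retries: 3
      nginx:
        image: nginx:stable
        healthcheck:
          test: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
          interval: 30s
          timeout: 5s
          retries: 3

docker ps then shows each service's health status separately, which tells you immediately whether nginx or the application layer failed.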
Related
I would like to run 2 processes within the same Docker container or dyno.
Is this possible?
A Heroku dyno is very similar to a Docker container, and both follow the same main principle: run just one foreground process in each.
Check this post to understand what foreground and background processes are.
The official Docker documentation says:
It is generally recommended that you separate areas of concern by using one service per container
With time, you could perhaps achieve your goal of running multiple services in one container (your api, in this case) by using Linux services, creating one process which launches other child processes, or some other workaround. In Heroku, however, this will not be possible, due to security restrictions and the limited set of OS commands available.
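If you stay on plain Docker (outside Heroku), the usual route is to split the processes into separate compose services rather than launching children from one process; a minimal sketch, with placeholder image and service names:

    services:
      api:
        image: my-api:latest          # placeholder image for the api process
        restart: unless-stopped
      worker:
        image: my-worker:latest       # placeholder image for the second process
        restart: unless-stopped

Each service keeps a single foreground process, which matches both the Docker recommendation quoted above and the dyno model.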
I have a docker-compose setup, where an nginx container is being used as a reverse-proxy and load balancer for the rest of the containers that make up my application.
I can spin up the application using docker-compose up -d and everything works great. Then, I can scale up one of my services using docker-compose up -d --scale auth=3, and everything continues to work fine.
The only issue is that nginx is not yet aware of the two new instances, so I need to manually reload nginx inside the running container using docker exec revproxy nginx -s reload, "revproxy" being the name of the nginx container.
That's fine and dandy, I don't mind running an extra command when I decide to scale out one of my services. The real issue though is when there is a container failure somewhere... nginx needs to know as soon as this happens to stop sending traffic to the failed instance until the Docker engine is able to replace it with a healthy one.
With all that said, essentially I would like to accomplish what they are doing in the Traefik quickstart tutorial, except I would like to stick with nginx as my reverse-proxy.
While I personally think Traefik would be a real time saver in your case, there is another project which does what you want with nginx: jwilder/nginx-proxy.
It works by listening to Docker engine events; when containers are added or removed, it regenerates the nginx config from a template.
You can either use the jwilder/nginx-proxy Docker image as is, or make your own flavor using the jwilder/docker-gen project, which is the part that produces a file from a template and Docker engine events.
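Roughly, wiring jwilder/nginx-proxy into a compose file looks like the sketch below; the application service and VIRTUAL_HOST value are placeholders, so check the project's README for the exact options:

    services:
      revproxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro   # lets the proxy watch engine events
      auth:
        image: my-auth-service                         # placeholder for your application image
        environment:
          - VIRTUAL_HOST=auth.example.com              # nginx-proxy generates an upstream for this host

When you run docker-compose up -d --scale auth=3, the proxy sees the new containers via the engine events and regenerates its config, so no manual nginx -s reload is needed.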
But again, I would recommend Traefik, for the time and trouble saved and for all the features that come with it (different load-balancing strategies, health checks, circuit breakers, automatic SSL certificate setup with ACME/Let's Encrypt, ...).
You just need to write a service discovery script that looks for the updated list of containers every X interval and updates the nginx config accordingly.
I have a container with php-fpm as the main process. Is it possible to create another container with supervisor as the main process, to run and control some daemon processes in the php container? For example, in the php container there is a consumer that consumes messages from RabbitMQ. I want to control those consumers with supervisor, but I don't want to run supervisor in the php container. Is it possible?
Q: I have a container running php-fpm as its main process. Is it possible to create another container with supervisor as the main process to run and control other daemon processes in the php container?
A: I have reconstructed your problem statement a little; let me know if it does not make sense.
Short answer: it is possible. However, you don't want to nest containers within one another, as this is considered an anti-pattern and is not the desired microservice architecture.
Typically you would have only one main process running in a container. This is so that when the process dies, the container stops and exits without bringing other working processes down with it.
An ideal architecture would be one container for RabbitMQ and another container for the php process. The easiest way to spin them up on the same Docker network is with a docker-compose file.
You may be interested in the depends_on (or links) and expose attributes to make RabbitMQ's port reachable from your php container; see the sketch after the links below.
https://docs.docker.com/compose/compose-file/#expose
https://docs.docker.com/compose/compose-file/#depends_on
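A minimal compose sketch of that layout, assuming a hypothetical ./php build context and the official RabbitMQ image:

    services:
      rabbitmq:
        image: rabbitmq:3-management
        expose:
          - "5672"                   # AMQP port, reachable by other services on the compose network
      php:
        build: ./php                 # hypothetical Dockerfile for php-fpm and the consumer
        depends_on:
          - rabbitmq                 # start RabbitMQ first; reach it at amqp://rabbitmq:5672

The consumer can then talk to RabbitMQ by service name, without a supervisor container trying to manage processes across container boundaries.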
I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production - to have the whole stack begin on boot, and for containers to restart or be recreated if they crash. There aren't any volumes I care about and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client tool, but when you run docker-compose up -d, all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped, to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
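A minimal sketch of what that looks like in a compose file (service and image names are placeholders):

    services:
      web:
        image: my-django-image        # placeholder application image
        restart: unless-stopped       # restarted after crashes and daemon/host restarts,
                                      # but not if you stopped it on purpose

Provided the Docker daemon itself is enabled at boot (for example via your init system), a single docker-compose up -d is enough and the containers come back on their own.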
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find in their documentation more information about using docker-compose in production. But, as they mention, compose is primarily aimed at development and testing environments.
If you want to use your containers in production, I would suggest using a suitable container orchestration tool, such as Kubernetes.
If you can organize your Django application as a SwarmKit service (Docker 1.11+), you can orchestrate the execution of your application with tasks.
Swarmkit has a restart policy (see swarmctl flags)
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
You say that you use AWS, so why not use ECS, which is built for exactly what you ask? You create an application which bundles your 5 containers, and you configure which EC2 instances, and how many of them, you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not hard.
AWS will start your containers when you deploy and will also restart them if they crash.
After reading the introduction to phusion/baseimage, I feel like creating containers from the Ubuntu image (or any other official distro image) and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages mysqld writes to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically the official party line from Solomon Hykes and Docker is that Docker containers should be as close to single-process micro-servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new Docker container rather than try to set up initialization etc. inside the containers. So if you are looking for the canonical best practice, the answer is: yes, no basic Linux services. It also makes sense when you think in terms of many Docker containers running on a single node: do you really want them all running their own copies of these services?
That being said, the state of logging in Docker is famously broken. Even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes (see the sketch below) and have a logrotate daemon etc. running in the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
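As a rough illustration of the log-volume approach mentioned above (paths and names are examples, not from the answer):

    services:
      app:
        image: my-app                          # placeholder application image
        volumes:
          - /var/log/myapp:/var/log/app        # the app writes its logs here; logrotate runs on the host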
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of, or requires, multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is startup time: when creating a Docker provisioning system, it's essential to keep the time it takes a container to come up to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly: no, it's not wrong to run a single process in Docker.
HTH