Does the process inside of the docker image still need to be managed? - docker

Say I am running a java web application inside of my docker container that runs on elastic beanstalk (or any other framework for that matter).
Am I still responsible for making sure my process has some kind of process management to make sure it is running correctly, i.e. supervisord or runit?
Or is this something that EB will somehow manage?

When the process inside the container stops, so too does the container (it is designed to run that single process). So you don't have to manage the process inside your container; instead, rely on the system managing your containers to restart them. For example, "services" in Docker Swarm and Replication Controllers in Kubernetes are designed to keep a desired number of containers running. When one dies, a new one takes its place.
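For illustration, here is a minimal sketch of the Kubernetes side of that idea: a ReplicationController that keeps three copies of a container running and replaces any that die. The image name, labels and port are placeholders, not taken from the question.

# Sketch: Kubernetes keeps 3 replicas of this pod running and starts a
# replacement whenever one dies. "my-java-app:1.0" is a placeholder image.
apiVersion: v1
kind: ReplicationController
metadata:
  name: java-web
spec:
  replicas: 3
  selector:
    app: java-web
  template:
    metadata:
      labels:
        app: java-web
    spec:
      containers:
      - name: java-web
        image: my-java-app:1.0
        ports:
        - containerPort: 8080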

Related

Should we use supervisors to keep processes running in Docker containers?

I'm using Docker to run a java REST service in a container. If I were outside of a container then I might use a process manager/supervisor to ensure that the java service restarts if it encounters a strange one-off error. I see some posts about using supervisord inside of containers but it seems like they're focused mostly on running multiple services, rather than just keeping one up.
What is the common way of managing services that run in containers? Should I just be using some built in Docker stuff on the container itself rather than trying to include a process manager?
You should not use a process supervisor inside a single-service Docker container. Using a process supervisor effectively hides the health of your service, making it more difficult to detect when you have a problem.
You should rely on your container orchestration layer (which may be Docker itself, or a higher level tool like Docker Swarm or Kubernetes) to restart the container if the service fails.
With Docker (or Docker Swarm), this means setting a restart policy on the container.
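As a minimal sketch (the service and image names are placeholders, not from the question), the same restart policy can be declared in a Compose file; for a plain docker run the equivalent is the --restart flag.

# Sketch: Docker restarts this container when it fails, but not if it was
# explicitly stopped. "rest-service" and its image are placeholder names.
version: "2"
services:
  rest-service:
    image: my-rest-service:latest
    restart: unless-stopped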

How to run 2 processes in the same docker container or dyno?

I would like to run 2 processes within the same docker container or dyno.
Is this possible?
A Heroku dyno is very similar to a Docker container, and both follow the same main principle: run just one foreground process in each.
Check this post to understand what foreground and background processes are.
The official Docker documentation says:
It is generally recommended that you separate areas of concern by using one service per container
With time, you might achieve your goal of running multiple services in one container (an API, in your case) in Docker by using Linux services, or by creating one process that launches other child processes (see the sketch below), or some other workaround; but in Heroku it will not be possible, due to security restrictions and a limited set of OS commands.
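As a rough sketch of that "one process launching child processes" workaround in Docker (not Heroku), a Compose service can wrap two hypothetical programs in a single shell command. Note that nothing restarts the background worker if it dies, which is exactly why one service per container is recommended.

# Sketch only: a single foreground command that starts a background worker
# and then an API server. The image and program names are placeholders.
services:
  api:
    image: my-api:latest
    command: sh -c "worker-process & exec api-server"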

Control processes in container by supervisor

I have a container with php-fpm as the main process. Is it possible to create another container with supervisor as the main process, to run and control some daemon processes in the php container? For example, in the php container there is a consumer that consumes messages from RabbitMQ. I want to control those consumers with supervisor, but I don't want to run supervisor in the php container. Is it possible?
Q: I have a container running php-fpm as its main process. Is it possible to create another container with supervisor as the main process to run and control other daemon processes in the php container?
A: I have reconstructed your problem statement a little; let me know if it does not make sense.
Short answer: it is possible. However, you don't want to nest containers within one another, as this is considered an anti-pattern and is not the desired microservice architecture.
Typically you would only have one main process running in a container, so that when the process dies the container stops and exits without bringing other working processes down with it.
An ideal architecture would be to have one container for RabbitMQ and another container for the php process. The easiest way to spin them up on the same Docker network is through a docker-compose file.
You may be interested in the links/depends_on and expose attributes for making RabbitMQ's port reachable from your php container; a sketch follows the links below.
https://docs.docker.com/compose/compose-file/#expose
https://docs.docker.com/compose/compose-file/#depends_on
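A minimal docker-compose sketch along those lines might look like this; the php image name and the consumer command are assumptions, and RabbitMQ's standard port 5672 is exposed to other services on the same Compose network.

# Sketch: RabbitMQ and the php consumer as separate services that share a
# Compose network. Image names and the consumer command are placeholders.
version: "2"
services:
  rabbitmq:
    image: rabbitmq:3
    expose:
      - "5672"
  php:
    image: my-php-app:latest
    command: php consume.php
    depends_on:
      - rabbitmq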

Recommended way to run a Docker Compose stack in production?

I have a couple of compose files (docker-compose.yml) describing a simple Django application (five containers, three images).
I want to run this stack in production - to have the whole stack begin on boot, and for containers to restart or be recreated if they crash. There aren't any volumes I care about and the containers won't hold any important state and can be recycled at will.
I haven't found much information on using specifically docker-compose in production in such a way. The documentation is helpful but doesn't mention anything about starting on boot, and I am using Amazon Linux so don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and ensure they start on boot up, but I don't think this is the way to do it with Docker containers, as they end up being ultimately supervised by the Docker daemon?
As a simple start I am thinking to just put restart: always on all my services and make an init script to do docker-compose up -d on boot. Is there a recommended way to manage a docker-compose stack in production in a robust way?
EDIT: I'm looking for a 'simple' way to run the equivalent of docker-compose up for my container stack in a robust way. I know upfront that all the containers declared in the stack can reside on the same machine; in this case I don't have need to orchestrate containers from the same stack across multiple instances, but that would be helpful to know as well.
Compose is a client tool, but when you run docker-compose up -d all the container options are sent to the Engine and stored. If you specify restart as always (or preferably unless-stopped to give you more flexibility), then you don't need to run docker-compose up every time your host boots.
When the host starts, provided you have configured the Docker daemon to start on boot, Docker will start all the containers that are flagged to be restarted. So you only need to run docker-compose up -d once and Docker takes care of the rest.
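As a sketch of that setup under the question's assumptions (a Django web container plus supporting services; the service and image names here are invented, not the asker's actual five containers), the Compose file just needs a restart policy on every service:

# Sketch: each service carries a restart policy, so the Docker daemon brings
# the whole stack back up after a crash or a host reboot.
version: "2"
services:
  web:
    image: my-django-app:latest
    restart: unless-stopped
  db:
    image: postgres:9.5
    restart: unless-stopped
  cache:
    image: redis:3
    restart: unless-stopped

Combined with a Docker daemon that is enabled on boot, a single docker-compose up -d is then enough for a single-host stack.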
As to orchestrating containers across multiple nodes in a Swarm - the preferred approach will be to use Distributed Application Bundles, but that's currently (as of Docker 1.12) experimental. You'll basically create a bundle from a local Compose file which represents your distributed system, and then deploy that remotely to a Swarm. Docker moves fast, so I would expect that functionality to be available soon.
You can find more information about using docker-compose in production in the documentation. But, as they mention, Compose is primarily aimed at development and testing environments.
If you want to use your containers in production, I would suggest using a tool built to orchestrate containers, such as Kubernetes.
If you can organize your Django application as a swarmkit service (docker 1.11+), you can orchestrate the execution of your application with Task.
Swarmkit has a restart policy (see swarmctl flags)
Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.
Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
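For comparison only (this is not the swarmctl syntax the answer refers to, and it arrived in later Docker/Compose versions), the same restart conditions, delays and limits can be written declaratively in a Compose v3 deploy section for swarm mode; the service and image names are placeholders.

# Illustration of restart conditions, delay and limits in Compose v3
# (swarm mode), mirroring the SwarmKit policy described above.
version: "3"
services:
  web:
    image: my-django-app:latest
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s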
You say that you use AWS, so why not use ECS, which is built for exactly what you're asking? You create an application which is the pack of your 5 containers, and you configure which EC2 instances, and how many of them, you want in your cluster.
You just have to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not hard.
AWS will start your containers when you deploy and also restart them in case of a crash.

Is it wrong to run a single process in docker without providing basic system services?

After reading the introduction of the phusion/baseimage I feel like creating containers from the Ubuntu image or any other official distro image and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages written by mysqld to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically, the official party line from Solomon Hykes and Docker is that Docker containers should be as close to single-process micro servers as possible. There may be many such servers on a single 'real' server. If a process fails you should just launch a new Docker container rather than try to set up init systems and the like inside the containers. So if you are looking for the canonical best practice, the answer is: no basic Linux services. It also makes sense when you think in terms of many Docker containers running on a single node: do you really want them all to run their own copies of these services?
That being said, the state of logging in Docker is famously broken. Even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon etc. running in the host VM (a sketch of this follows below). Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I am really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command it's no longer necessary to "leave an interactive shell open."
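As a sketch of the log-mounting approach mentioned above (the paths and image name are placeholders), a bind-mounted volume keeps the application's log files on the host, where logrotate or similar tooling can manage them:

# Sketch: bind-mount a host directory into the container so log files live
# on the host and survive container restarts. Paths are placeholders.
services:
  app:
    image: my-java-service:latest
    volumes:
      - /var/log/my-java-service:/var/log/app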
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of/requires multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is spin-up time: when creating a Docker provisioning system, it's essential to keep the time it takes to bring a container up to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, then I should go for it. But that's not always possible.
To answer your question directly: no, it's not wrong to run a single process in Docker.
HTH
