Docker - one process per container?

I sometimes use Docker for my development work. When I do, I usually work on an out-of-the-box LAMP image from Tutum.
My question is: doesn't it defeat the purpose of working with Docker if it runs multiple processes in one container (like a container started from Tutum's LAMP image)? Isn't the whole idea of Docker to separate each process into a separate container?

While it is generally a good rule of thumb to separate processes into separate containers, that's not the main benefit or purpose of Docker. The benefit of Docker is immutability. And if throwing two processes into a single container makes for cleaner logic, then go for it. In this case, though, I would definitely consider at least stripping out the DB into its own container and talking to it through a Docker link. The database shouldn't have to go down every time you rebuild your image.

Sometimes it is necessary, or simply more useful, to use one container for more than one process, as in this situation.
That happens when several processes work together to fulfill a single task. For example, imagine adding logging to a web application using ELK (Elasticsearch, Logstash, Kibana): those things run together, and a supervisor can monitor the processes inside one container.
But in most cases it is better to use one process per container. What is more, the Docker command should start the process itself, for example running a Java application with
/usr/bin/java -jar application.jar
rather than running an external script:
./launchApplication.sh
See the discussion at http://www.reddit.com/r/docker/comments/2t1lzp/docker_and_the_pid_1_zombie_reaping_problem/ where this problem is covered in depth.
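A minimal Dockerfile sketch of that difference, assuming a hypothetical application.jar and an arbitrary JRE base image (both purely illustrative); the exec-form CMD makes the Java process PID 1 inside the container, so it receives signals directly and the container exits when it does:
FROM eclipse-temurin:17-jre
COPY application.jar /app/application.jar
# exec form: java itself becomes PID 1
CMD ["java", "-jar", "/app/application.jar"]
# shell-script alternative (leaves the script, or a shell, as PID 1 instead):
# CMD ["./launchApplication.sh"]
Wrapping the process in launchApplication.sh is exactly the setup that leads to the zombie-reaping situation the linked discussion is about.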

Related

What's the purpose of the node-modules container in wolkenkit?

That container is built when deploying the application.
Looks like its purpose is to share dependencies across modules.
It looks like it is started as a container but nothing is apparently running, a bit like an init container.
The console says it starts and stops that component when using the respective wolkenkit start and wolkenkit stop commands, yet when you run docker ps, that container cannot be found.
Can someone explain these components?
When starting a wolkenkit application, the application is boxed in a number of Docker containers, and these containers are then started along with a few other containers that provide the infrastructure, such as databases, a message queue, ...
The reason why the application is split into several Docker containers is because wolkenkit builds upon the CQRS pattern, which suggests separating the read side of an application from the application's write side, and hence there is one container for the read side, and one for the write side (actually there are a few more, but you get the picture).
Now, since you may develop on an operating system other than Linux, the wolkenkit application may run under a different operating system than the one you develop on, since within Docker it's always Linux. This means that the start command cannot simply copy the node_modules folder into the containers, as it may contain binary modules, which would then not be compatible (imagine installing on Windows on the host, but running on Linux within Docker).
To avoid issues here, wolkenkit runs an npm install when starting the application inside of the containers. The problem now is that if wolkenkit did this in every single container, the start would be super slow (it's not the fastest thing on earth anyway, due to all the Docker building and starting that's happening under the hood). So wolkenkit tries to optimize this as much as possible.
One concept here is to run npm install only once, inside of a container of its own. This is the node-modules container you encountered. This container is then linked as a volume to all the containers that contain the application's code. This way you only have to run npm install once, but multiple containers can use the outcome of this command.
Since this container contains data but no code, it only has to exist; it doesn't actually do anything. This is why it gets created, but not run.
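This is essentially the classic data-volume-container pattern. A rough sketch of the idea using plain Docker commands (not wolkenkit's actual implementation; the image names and paths are made up for illustration):
# build the dependencies once into a container that only carries /app/node_modules
docker create --name node-modules -v /app/node_modules my-app-deps:latest
# application containers mount that volume via --volumes-from and reuse the same node_modules
docker run -d --volumes-from node-modules my-app-readside:latest
docker run -d --volumes-from node-modules my-app-writeside:latest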
I hope this makes it a little bit clearer, and I was able to answer your question :-)
PS: Please note that I am one of the core developers of wolkenkit, so take my answer with a grain of salt.

Can I migrate processes between fat containers?

I recently began getting into Docker and containers. Up to now, I understand that the philosophy behind containers is to run one process per container, so we end up with applications which can be run easily and consistently regardless of the environment. Also, that containers are intrinsically connected to their image, so if you want to save the changes to a container you need to commit and create a new image.
But let's say I want to run multiple processes inside a single container, AKA a fat container. I know it can be done and things like "Supervisord" and "Baseimage-docker" can help manage processes within fat containers.
Now we get to my question: Is there a way to have a fat container running, save the run state of a single process and migrate said process to another container?
I've looked online but I haven't really found anyone that has said that this is possible. So I'm turning to you guys in case one of you have thought about this problem or maybe I've missed something along the way.
I am not sure whether the question is opinion based, but here is what I think you might be able to do. Let's say you have a web application, such as a Django application, that uses Redis within the same container, which would be considered a fat container, and you need to migrate Redis to a standalone service in its own container. Then you would have to do the following:
1- Prepare a Docker image that has Redis installed; you can build your own image or use the official redis Docker image.
2- Copy the configuration being used by Redis out of the fat container so you can mount it later inside the new Redis container.
3- Change the Django application settings to point at the new Redis container.
4- Remove the Redis service and its configuration from the fat container, or build a new image.
And that's it: start the Redis container and restart the Django application container for the change to take effect, or start a new one if you modified the fat image. A rough sketch of those steps is shown below.
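A command-line sketch of those steps, assuming a hypothetical redis.conf copied out of the fat container and a made-up REDIS_HOST setting in the Django app (names and paths are illustrative only):
# a user-defined network lets the containers reach each other by name
docker network create appnet
# 1-2) run the official redis image with the copied configuration mounted in
docker run -d --name redis --network appnet \
  -v /srv/redis/redis.conf:/usr/local/etc/redis/redis.conf \
  redis:7 redis-server /usr/local/etc/redis/redis.conf
# 3) point the Django container at the new redis container by hostname
docker run -d --name django --network appnet \
  -e REDIS_HOST=redis my-django-app:latest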
There's the famous Quake demo and the ability to migrate the state of an entire container with CRIU. That's probably the closest I've seen to what you're talking about. More here: https://criu.org/Docker
As for migrating a "single" process inside a container, maybe just migrate the entire container and then kill the processes you don't want in the copy?
I would say the more common pattern in the Docker community is single process containers that are freely killed/updated/etc.

Is it best practice to daemonize a process within docker?

Many best practice guides emphasize making your process a daemon and having something watch it to restart in case of failure. This made sense for a while. A specific example can be sidekiq.
bundle exec sidekiq -d
However, with Docker I've found myself simply executing the command in the foreground; if the process stops or exits abruptly, the entire Docker container goes away and a new one is automatically spun up - which is basically the entire point of daemonizing a process and having something watch it (all STDOUT is sent to CloudWatch / Elasticsearch for monitoring).
I feel like this also tends to reinforce the idea of a single process per Docker container, and daemonizing would, in my opinion, tend to encourage a violation of that general standard.
Is there any best practice documentation on this even if you're running only a single process within the container?
You don't daemonize a process inside a container.
The -d is usually seen in the docker run -d command, using detached (not daemonized) mode, where the Docker container runs in the background, completely detached from your current shell.
For running multiple processes in a container, you would need a supervisor as the managing process.
See "Use of Supervisor in docker" (or the more recent docker --init).
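A minimal supervisord sketch of that setup, assuming a hypothetical image where supervisor, nginx and a worker script are already installed (program names and paths are illustrative only); supervisord runs in the foreground as the container's main process and keeps the others alive:
# /etc/supervisor/conf.d/app.conf
[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true

[program:worker]
command=/usr/local/bin/worker.sh
autorestart=true
The container would then use something like CMD ["/usr/bin/supervisord", "-n"] so that the supervisor, not a shell, is PID 1.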
Some relevant 12 Factor app recommendations:
An app is executed in the execution environment as one or more processes
Concurrency is implemented by running additional processes (rather than threads)
Website:
https://12factor.net/
Docker was open sourced by a PaaS operator (dotCloud), so it's entirely possible the authors were influenced by this architectural recommendation. That would explain why Docker is designed to normally run a single process.
The thing to remember here is that a Docker container is not a virtual machine, although it's entirely possible to make it quack like one. In practice a docker container is a jailed process running on the host server. Container orchestration engines like Kubernetes (Mesos, Docker Swarm mode) have features that will ensure containers stay running, replacing them should the need arise.
Remember my mention of duck vocalization? :-) If you want your container to run multiple processes, it's possible to run a supervisor process that keeps everything healthy and running inside (a container exits when its main process stops):
https://docs.docker.com/engine/admin/using_supervisord/
The ultimate expression of this VM envy would be LXD from Ubuntu, where an entire set of VM services gets bootstrapped within LXC containers:
https://www.ubuntu.com/cloud/lxd
In conclusion is it a best practice? I think there is no clear answer. Personally I'd say no for two reasons:
I'm fixated on deploying 12-factor compliant applications, so I'm married to the single-process model
If I need to run two processes on the same set of data, then in Kubernetes I can run containers within the same pod, which means Kubernetes manages the processes (running as separate containers with a common data volume); see the sketch below
Clearly my reasons are implementation specific.
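A minimal sketch of that Kubernetes pattern, with two placeholder containers sharing an emptyDir volume (the image names are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: two-process-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: app
      image: example/app:latest        # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: worker
      image: example/worker:latest     # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data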
There are multiple process supervisors that can take a foreground process (or several of them), run them monitored, and restart them on failure (or exit the container).
One is runit (http://smarden.org/runit/), which I have not used myself.
My choice is S6 (http://skarnet.org/software/s6/). Someone has already built a container envelope for it, named S6-overlay (https://github.com/just-containers/s6-overlay), which is what I usually use if/when I need to have a user-space process run as a daemon. It also has facilities to do prep work on container start, change permissions and more, at runtime.
tl;dr: I can't find a best practices document that relates directly to this for docker, but I agree with you.
The only "Best Practices" documentation for Docker I could find was on Docker's own site, which states that containers should run one process. In my mind, that means foregrounded processes as well. So basically, I've drawn the same conclusion as you. (You've probably read that too, but this is for anyone else reading this.)
Honestly, I think we are still in (relatively) new territory with best practices for Docker. Anecdotally, it has been a best practice in the organizations I've worked with. The number of times I've felt more satisfied with a foregrounded process has been significantly greater than the times I've said to myself "Boy, I sure wish I backgrounded that one." In fact, I don't think I've ever said that.
The only exception I can think of is when you are trying to evaluate software and need a quick and dirty way to ship infrastructure off to someone. E.g.: "Hey, there is this new thing called LAMP stacks I just heard of, here is a Docker container that has all the components for you to play around with." Again, though, that's an outlier and I would shudder if something like that ever made it to production or even any sort of serious development environment.
Additionally, it certainly forces a micro-architecture style, which I think is ultimately a good thing.

Multiple applications running simultaneously on a single Docker image

Suppose I have a web server and a database server installed on the same common Docker image. Is it possible to run them simultaneously, as if they were running inside the same virtual machine?
Is running docker run <args> twice the best practice for this use case?
You should not use a single image for your web server and the database. You should use one image for the web server and one for the database.
To run this, you would run your database server and then run your webserver and link it to your database server.
There are many examples on the internet. I'll just leave this one here: https://github.com/saada/docker-compose-php-mysql
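A minimal docker-compose sketch in the same spirit as that repository (not its actual file; the images, port and credential here are just placeholders):
version: "3"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
  web:
    image: php:8-apache
    depends_on:
      - db
    ports:
      - "8080:80"
With this in place, docker-compose up starts both containers, and the web service can reach the database at the hostname db.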
According to this Stack Overflow answer, "Can I run multiple programs in a Docker container?", it is perfectly possible to do that via a script that takes charge of starting each of these services; a sketch of such a script follows.
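A bare-bones sketch of such a start script, with made-up service paths purely for illustration (a proper supervisor such as supervisord, runit or S6 is usually the more robust choice):
#!/bin/bash
# start both services in the background
/usr/local/bin/service-one &
/usr/local/bin/service-two &
# exit as soon as either service dies, so the container stops and can be restarted
wait -n
exit $?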
Although most people will just tell you to microservice everything into multiple different containers, it might well be much more manageable in some cases to have containers that launch more than one process - think of cloud deployments where you want to run multiple web apps, each corresponding to a different system test.
So you would have your isolated small HSQLDB running in server mode, followed by your WildFly or Spring Boot app, and finally your system test run by mvn.
If you have all three in one container, then it is just a matter of choosing which Jenkins node your all-in-one container runs on. Since it packs everything inside, irrespective of any other container, and the image size is not monstrous, you stay really agile.
So you have to see what is best for you.
With big DBs like MySQL you are often better off running them in an isolated container as a base platform for all other Docker containers. With DBs like HSQLDB you can easily afford a DB per container.

Should I use separate Docker containers for my web app?

Do I need to use a separate Docker container for each part of my complex web application, or can I put all the required services in one container?
Could anyone explain why I should divide my app into many containers (for example a php-fpm container, a mysql container, a mongo container) when I have the ability to install and launch everything in one container?
Something to think about when working with Docker is how it works inside. Docker replaces your PID 1 with the command you specify in the CMD (and ENTRYPOINT, which is slightly more complex) directive in your Dockerfile. PID 1 is normally where your init system lives (sysvinit, runit, systemd, whatever). Your container lives and dies by whatever process is started there. When the process dies, your container dies. Stdout and stderr for that process in the container are what you are given on the host machine when you type docker logs myContainer. Incidentally, this is why you need to jump through hoops to start services and run cron jobs (things normally done by your init system). This is very important in understanding the motivation for doing things a certain way.
Now, you can do whatever you want. There are many opinions about the "right" way to do this, but you can throw all that away and do what you want. So you COULD figure out how to run all of those services in one container. But now that you know how docker replaces PID 1 with whatever command you specify in CMD (and ENTRYPOINT) in your Dockerfiles, you might think it prudent to try and keep your apps running each in their own containers, and let them work with each other via container linking. (Update -- 27 April 2017: Container linking has been deprecated in favor of regular ole container networking, which is much more robust, the idea being that you simply join your separate application containers to the same network so they can talk to one another).
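A tiny sketch of that PID 1 behaviour, using a made-up app.py as the container's only process (the file and image names are purely illustrative):
FROM python:3.12-slim
COPY app.py /app.py
# exec-form CMD: this process becomes PID 1; when it exits, the container exits
CMD ["python", "/app.py"]
After docker build -t pid1-demo . and docker run -d --name myContainer pid1-demo, the output of docker logs myContainer is exactly that process's stdout and stderr, and the container stops as soon as the process does.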
If you want a little help deciding, I can tell you from my own experience that it ends up being much cleaner and easier to maintain when you separate your apps into individual containers and then link them together. Just now I am building a Wordpress installation from HHVM, and I am installing Nginx and HHVM/php-fpm with the Wordpress installation in one container, and the MariaDB stuff in another container. In the future, this will let me drop in a replacement Wordpress installation directly in front of my MariaDB data with almost no hassle. It is worth it to containerize per app. Good luck!
When you divide your web application into many containers, you don't need to restart all the services when you deploy your application, just as you traditionally don't restart your MySQL server when you update your web layer.
Also, if you want to scale your application, it is easier if it is divided into separate containers. Then you can scale just those parts of your application that are causing bottlenecks.
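A one-line Docker Compose sketch of that kind of scaling (the service name web is just a placeholder from a hypothetical compose file):
# run three replicas of the web service while the database keeps a single instance
docker compose up -d --scale web=3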
Some will tell you that you should run only one process per container. Others will say one application per container. That advice is based on the principles of microservices.
I don't believe microservices are the right solution for all cases, so I would not follow that advice blindly just for that reason. If it makes sense to have multiple processes in one container for your case, then do so. (See Supervisor and Phusion baseimage for that matter.)
But there is also another reason to separate containers: In most cases, it is less work for you to do.
On the Docker Hub, there are plenty of ready to use Docker images. Just pull the ones you need.
What's remaining for you to do is then:
read the docs for those Docker images (what environment variables to set, etc.)
create a docker-compose.yml file to ease operating those containers
It is probably better to have your web app in a single container and your supporting services like databases in separate containers. By doing this, if you need to do rolling updates or restarts you can keep your database online while your application nodes do individual restarts, so you won't experience downtime. If you have caching with something like Redis, this is also useful for the same reason. It will also allow you to more easily add nodes to scale in a loosely coupled fashion, and to manage the containers in a manner more suited to a specific purpose. For the type of application you are describing, I see very few arguments for running all services in a single container.
It depends on the vision and road map you have for your application. Putting all components of an application in one tier (in this case, one Docker container) is like putting all your eggs in one basket.
Whenever your application has security or performance related requirements, separating those components into their own containers would be an ideal solution. Needless to say, this division of labor across containers comes at some cost, related to wiring those containers together for communication, security, and so on.
