How to use Docker for many connected projects

I want to use Docker for development. I have several projects that are connected with each other through API calls between services.
Is it possible to have one generic Dockerfile that starts the main project and its dependent projects?
Or is it better to have a Dockerfile in each of those projects and start them separately via docker-compose?
Suppose I have projectMain, which calls the projectA and projectB APIs to fulfill some operations.

The Docker approach is to have one process per container, rather than a collection of processes in one, because of signal management (kill/stop).
You can use one giant container by building on top of phusion/baseimage-docker.
See "PID 1 zombie reaping issue": by declaring your script as a daemon, you will make sure the image handles the shutdown signal properly, and you can add as many "daemons" as you want in it.
But it is best (since Docker 1.10) to set up a user-defined network and start your main project using links to network aliases of your other containers, even if those containers are not started yet.
See a concrete example in "Adding --link target to a running nginx container"
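For example, here is a minimal sketch of the user-defined network approach for the projectMain / projectA / projectB layout described above (the image names and ports are hypothetical):

# create a user-defined bridge network (Docker 1.10+)
docker network create projects-net
# start the dependent services; their names become DNS aliases on that network
docker run -d --name projectA --net projects-net projecta-image
docker run -d --name projectB --net projects-net projectb-image
# projectMain can now reach them as http://projectA:8080 and http://projectB:8080
docker run -d --name projectMain --net projects-net -p 80:8080 projectmain-image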

Related

Can I migrate processes between fat containers?

I recently began getting into Docker and containers. So far, I understand that the philosophy behind containers is to run one process per container, so we end up with applications that can be run easily and consistently regardless of the environment. Also, a container is intrinsically tied to its image, so if you want to save changes made in a container you need to commit them and create a new image.
But let's say I want to run multiple processes inside a single container, AKA a fat container. I know it can be done and things like "Supervisord" and "Baseimage-docker" can help manage processes within fat containers.
Now we get to my question: Is there a way to have a fat container running, save the run state of a single process and migrate said process to another container?
I've looked online but I haven't really found anyone saying that this is possible. So I'm turning to you in case one of you has thought about this problem, or maybe I've missed something along the way.
I am not sure whether the question is opinion based, but here is what I think you might be able to do. Let's say you have a web application, such as a Django application, that uses Redis within the same container. That would be considered a fat container, and if you need to migrate Redis to be a standalone service in its own container you would have to do the following:
1. Prepare a Docker image that has Redis installed; you can build your own image or use the official Redis Docker image.
2. Copy the Redis configuration out of the fat container so you can mount it later inside the new Redis container.
3. Change the Django application settings to point to the new Redis container.
4. Remove the Redis service and its configuration from the fat container, or build a new image.
That's it: start the Redis container and restart the Django application container for the change to take effect, or start a new container if you modified the fat image (a sketch of these steps follows below).
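A minimal sketch of what steps 1-3 might look like on the command line, assuming the Redis configuration was copied out to ./redis.conf and that the Django image reads a REDIS_HOST variable (both the variable and the my-django-image name are hypothetical):

# steps 1-2: run the official Redis image with the configuration taken from the fat container
docker network create app-net
docker run -d --name redis --net app-net -v $(pwd)/redis.conf:/usr/local/etc/redis/redis.conf redis redis-server /usr/local/etc/redis/redis.conf
# step 3: point the Django container at the new Redis container by its hostname on the shared network
docker run -d --name django --net app-net -e REDIS_HOST=redis my-django-image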
There's the famous Quake demo and the ability to migrate the state of an entire container with CRIU. That's probably the closest I've seen to what you're talking about. More here: https://criu.org/Docker
As far as a "single" process inside a container, maybe just migrate the entire container and kill the processes you want moved?
I would say the more common pattern in the Docker community is single process containers that are freely killed/updated/etc.

Is it advisable to have multiple applications in a single Docker image?

We have multiple Spring Boot applications where one provides input to another.
As of now we deploy them on three different VMs that connect to each other.
Is it advisable to combine all three into a single Docker image?
If I am able to make them into a single Docker image, it is easy for me to provide that image to different teams.
As far as memory needs go, it is OK for them to be part of a single image; I have done that analysis.
If they are tightly coupled and can be managed under one process, yes.
You need to make sure you can stop the whole system properly (to avoid zombie processes: see "Use of Supervisor in docker").
But the idea behind containers remains to isolate each component of your system in its own container, which facilitates debugging (when one part misbehaves), upgrades, logging and monitoring.
You can experiment with Docker bundles (DAB) in order to facilitate the distribution of your multi-container application.
To help you with DAB, see:
"Deploy Docker Compose (v3) to Swarm (mode) Cluster"
"My first try with DAB (aka Distributed Application Bundle)"
"Docker Services, Stack and Distributed Application Bundle"
Distributed Application Bundle, or DAB, is a new concept introduced in Docker 1.12: a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime. Let's use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.
https://blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle
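As an illustration of the Compose-v3-to-Swarm path from the first link above, here is a minimal sketch, assuming Docker 1.13+ and a docker-compose.yml describing the three applications (the stack name is a placeholder):

# turn the current host into a single-node swarm
docker swarm init
# deploy all services described in the Compose v3 file as one stack
docker stack deploy -c docker-compose.yml mystack
# list the services running in that stack
docker stack services mystack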

Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies my project needs (project files, git, composer, etc.).
From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run at length (Apache server, MySQL server). When I spin up these 'service' containers using -d they remain running (docker ps) since their main process runs continuously.
However, when I remove all the services from my project container, there is no main process left to run continuously, so my container exits immediately after being spun up.
I have read about the 'hacks' of running other processes like tail -f /dev/null, sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such a container?) and even leaving the container running in the foreground (taking up a terminal console...).
How do I network such a container to keep it running like the abstracted services, but detached, without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples of other projects (please link any).
EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather about how I can use a container to hold the project itself and its dependencies that aren't services (project files, git, composer).
When you run the container, try running with the flags ...
docker run -dt ..... etc
You might even try ...
docker run -dti ..... etc
Let me know if this brings any joy; it has certainly worked for me on occasions.
I know you wanted to avoid hacks, but if the above fails then also add ...
CMD cat
to the end of your Dockerfile - it is a hack, but it is the cleanest hack :)
So after reading this a few times, along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously to be useful. The project files, the services (MySQL, Apache) and the tools (git, composer) are each handled differently.
The project files are persisted within a data volume container. The services are networked since they expose ports. The tools live in their own containers which share the project files data volume - they are not networked. Logs, databases and other output can be persisted in different volumes.
When you wish to run one of these tools, you spin up the tool container, passing the relevant command to docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only lives as long as that command takes to run, and then it stops.
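A minimal sketch of this pattern, using a named volume for the project files and the official composer image from the Docker Hub (the volume name and the service image are placeholders):

# project files live in a named volume shared by service and tool containers
docker volume create project_src
# long-running, networked service container
docker run -d --name web -p 8080:80 -v project_src:/var/www/html php:apache
# short-lived tool container: runs one composer command against the shared volume, then exits
docker run --rm -v project_src:/app composer install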
I don't know why this took me so long to grasp, but this is the aha moment for me.

Should I use separate Docker containers for my web app?

Do I need to use separate Docker containers for my complex web application, or can I put all the required services in one container?
Could anyone explain why I should divide my app into many containers (for example a php-fpm container, a mysql container, a mongo container) when I have the ability to install and launch everything in one container?
Something to think about when working with Docker is how it works inside. Docker replaces your PID 1 with the command you specify in the CMD (and ENTRYPOINT, which is slightly more complex) directive in your Dockerfile. PID 1 is normally where your init system lives (sysvinit, runit, systemd, whatever). Your container lives and dies by whatever process is started there. When that process dies, your container dies. Stdout and stderr for that process in the container are what you are given on the host machine when you type docker logs myContainer. Incidentally, this is why you need to jump through hoops to start services and run cronjobs (things normally done by your init system). This is very important in understanding the motivation for doing things a certain way.
Now, you can do whatever you want. There are many opinions about the "right" way to do this, but you can throw all that away and do what you want. So you COULD figure out how to run all of those services in one container. But now that you know how docker replaces PID 1 with whatever command you specify in CMD (and ENTRYPOINT) in your Dockerfiles, you might think it prudent to try and keep your apps running each in their own containers, and let them work with each other via container linking. (Update -- 27 April 2017: Container linking has been deprecated in favor of regular ole container networking, which is much more robust, the idea being that you simply join your separate application containers to the same network so they can talk to one another).
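For example, a minimal sketch of the container-networking approach mentioned in the update above (the image and container names are hypothetical):

# create a user-defined network and attach both application containers to it
docker network create app-net
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=example mariadb
docker run -d --name web --network app-net -p 80:80 my-web-image
# inside "web", the database is now reachable simply as the hostname "db"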
If you want a little help deciding, I can tell you from my own experience that it ends up being much cleaner and easier to maintain when you separate your apps into individual containers and then link them together. Just now I am building a Wordpress installation from HHVM, and I am installing Nginx and HHVM/php-fpm with the Wordpress installation in one container, and the MariaDB stuff in another container. In the future, this will let me drop in a replacement Wordpress installation directly in front of my MariaDB data with almost no hassle. It is worth it to containerize per app. Good luck!
When you divide your web application into many containers, you don't need to restart all the services when you deploy your application. Traditionally, for example, you don't restart your MySQL server when you update your web layer.
Also, if you want to scale your application, it is easier if it is divided into separate containers. Then you can scale just those parts of your application that are causing bottlenecks.
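As an illustration, with Docker Compose you can scale only the bottlenecked service; this sketch assumes a Compose project with hypothetical services named web and db:

# run three replicas of the web service while leaving the db service untouched
docker-compose up -d --scale web=3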
Some will tell you that you should run only one process per container. Others will say one application per container. That advice is based on the principles of microservices.
I don't believe microservices are the right solution for all cases, so I would not follow that advice blindly just for that reason. If it makes sense to have multiple processes in one container for your case, then do so. (See Supervisor and Phusion baseimage for that matter.)
But there is also another reason to separate containers: in most cases, it is less work for you to do.
On the Docker Hub, there are plenty of ready-to-use Docker images. Just pull the ones you need.
What remains for you to do is then:
read the docs for those Docker images (what environment variables to set, etc.)
create a docker-compose.yml file to ease operating those containers (a minimal sketch follows below)
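A minimal docker-compose.yml sketch of this approach, pulling a ready-made database image from the Docker Hub; the web image name, the port and the DB_HOST variable are placeholders:

version: "3"
services:
  web:
    image: my-web-app            # your own application image (placeholder)
    ports:
      - "80:8080"
    environment:
      - DB_HOST=db               # hypothetical variable read by the app
    depends_on:
      - db
  db:
    image: mysql:5.7             # ready-to-use image from the Docker Hub
    environment:
      - MYSQL_ROOT_PASSWORD=example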
It is probably better to have your webapp in a single container and your supporting services, like databases, in separate containers. That way, if you need to do rolling updates or restarts, you can keep your database online while your application nodes restart individually, so you won't experience downtime. If you have caching with something like Redis, this is useful for the same reason. It also lets you add nodes more easily to scale in a loosely coupled fashion, and manage each container in a manner suited to its specific purpose. For the type of application you are describing, I see very few arguments for running all services in a single container.
It depends on the vision and road map you have for your application. Putting all components of an application in one tier, in this case one Docker container, is like putting all your eggs in one basket.
Whenever your application has security or performance requirements, separating those three components into their own containers is the better solution. Needless to say, this division of labor across containers comes at some cost, related to wiring those containers together for communication, security, and so on.

Docker - one process per container?

I sometimes use Docker for my development work. When I do, I usually work on an out-of-the-box LAMP image from tutum.
My question is: doesn't it defeat the purpose of using Docker if it runs multiple processes in one container (like a container started from Tutum's LAMP image)? Isn't the whole idea of Docker to separate each process into a separate container?
While it is generally a good rule of thumb to separate processes into separate containers, that's not the main benefit/purpose of docker. The benefit of docker is immutability. And if throwing two processes into a single container makes for cleaner logic then go for it. Though in this case, I would definitely consider at least stripping out the DB into its own container, and talk to it through a docker link. The database shouldn't have to go down every time you rebuild your image.
Sometimes it is necessary or more useful to use one container for more than one process, as in this situation.
Such a situation occurs when processes are used together to fulfill a single task. I can imagine, for example, somebody wanting to add logging to a web application by using ELK (Elasticsearch, Logstash, Kibana). Those things run together, and a supervisor can monitor the processes inside the one container.
But in most cases it is better to use one process per container. What is more, the Docker command should start the process itself, for example running a Java application with
/usr/bin/java -jar application.jar
rather than running an external script:
./launchApplication.sh
See discussion on http://www.reddit.com/r/docker/comments/2t1lzp/docker_and_the_pid_1_zombie_reaping_problem/ where the problem is concerned.
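A minimal Dockerfile sketch of that advice, letting the Java process itself be PID 1 rather than hiding it behind a wrapper script (the base image tag and the jar name are just examples):

FROM openjdk:8-jre
COPY application.jar /app/application.jar
# exec-form CMD: java becomes PID 1 and receives stop signals directly
CMD ["java", "-jar", "/app/application.jar"]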
