Docker usage in compose/swarm mode

I am quite new to Docker and I need some help with distributing my application.
Consider this:
I have a pool of physical machines, each of them running the latest version of docker.
My "Application A" has several containers. To be clear in this definition, an application would be a database running in a container, 4 messaging containers and a master container. All 6 containers need to communicate between each other. The database, the messaging and etc containers would be the "services".
I can also have "Application B", "Application C" and "Application N...", that are slightly different in size and configuration from "Application A". Applications do not communicate between each other and are completely independent.
Requirements:
All applications "A,B,C..N" must use the same pool of physical machines.
Each service of each application must be able to run on a different physical machine, if needed.
I may want to restrict how each service is allocated to the physical machines.
I need to create applications "on the fly"
My first thought was to use docker-compose to define an application and several Dockerfiles to define the services inside it. But if I did that, each application would run in the same Docker engine and, therefore, on the same physical machine.
I have read that you can deploy a Docker Compose file onto a Docker swarm. In this case, the swarm would act as a single Docker engine. However, I could not find any examples of how to do that, and I am not sure of the limitations.
My second thought was to use swarm mode. I would create a swarm and run services on it. However, I would lose the concept of an "application": there would be a bunch of services thrown into the swarm, and I could not manage how each of them communicates with the others.
So, given this problem:
Is there any assumption or statement I got wrong?
What is the recommended usage of the Docker tools in this scenario?

It is possible to use Docker Compose with Docker swarm mode (Docker 1.12), but the two are currently not completely compatible. Have a look at Docker Stacks and Bundles.
In the next version of Docker (1.13) there will also be the new Docker Compose v3 file format, which Docker will be able to deploy directly, without Docker Compose. This will make it possible to deploy your Compose file like this:
docker deploy --compose-file docker-compose.yml AppA
This is currently experimental but works quite well with Docker 1.13-rc5. (Docker Releases)
A more detailed explanation of this can be found in this article.
For your requirement that they all run on different hosts: this is possible by defining constraints in docker service create (or in Docker Compose v3; see Docker Service Create - Constraints). But why do you need them to run on different hosts?
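For example, a constraint can be given directly to docker service create, or in a version 3 Compose file under deploy/placement (a sketch; the hostname is a placeholder):

docker service create --name database --constraint 'node.hostname == machine-1' postgres

services:
  database:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.hostname == machine-1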
It is possible to limit the CPU and memory usage that each service is able to use with --limit-cpu and --limit-memory.
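A minimal example (the image name is a placeholder):

docker service create --name master --limit-cpu 0.5 --limit-memory 512M mycompany/master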
If you want to play with Docker swarm mode, you can create a swarm with Docker Machine on your local host. (Attention: do not use the old standalone Docker Swarm.)
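A quick local setup could look like this (a sketch, assuming the VirtualBox driver is installed; the worker join token is printed by docker swarm join-token worker):

docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1
eval $(docker-machine env manager1)
docker swarm init --advertise-addr $(docker-machine ip manager1)
docker-machine ssh worker1 "docker swarm join --token <worker-token> $(docker-machine ip manager1):2377"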

Related

docker-swarm vs. docker-compose on a single host in production

Is there a reason to use docker-swarm instead of docker-compose for deploying a single host in production?
I'm currently rewriting an existing application. My predecessors set up the application using docker-swarm, but I do not understand why: the application will only consist of a single host running a couple of services. These services will only supply some local information on the customer network via a REST API to a Kubernetes cluster (so there is no real load or reason to add additional hosts).
I looked through the Docker website and could not find a reason to use docker-swarm to deploy a single host, apart from testing a deployment on a single host dev environment.
Are there benefits of using docker-swarm compared to docker-compose regarding deployment, networking, etc...?
Docker Swarm and Docker Compose are fundamentally different animals. Compose is a build tool that lets you define and configure a group of related containers, whereas swarm is an orchestration tool that manages multiple docker engines in a way that lets you treat them (somewhat) as a single unit. Swarm exposes an API that is mostly compatible with the Docker Remote API, which allows existing applications to use Swarm to scale horizontally without having to completely overhaul the existing interface to the container engine.
That said, much of the functionality in Docker Compose that overlaps with Docker Swarm has been added incrementally. Compose has grown over time, and the distinction between the two has narrowed a bit. Swarm was eventually integrated into the Docker engine, and Docker Stack was introduced, allowing compose.yml files to be read directly by Docker, without using Compose.
So the real question might be: what is the difference between docker compose and docker stack? Not a whole lot. Compose is actually a separate project, written in Python, that uses the Docker API under the hood. Stack does much the same thing as Compose but is integrated into Docker. Stack also wants pre-built images, while Compose will handle those image builds for you, which makes Compose very handy for development.
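In day-to-day use the difference looks roughly like this (myapp is a placeholder stack name):

docker-compose up -d --build                                   # Compose builds images from your Dockerfiles, then starts containers
docker stack deploy --compose-file docker-compose.yml myapp    # Stack expects the images to exist already (e.g. in a registry)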
What you are dealing with might be a product of a time when these two tools were much more distinct. Docker Swarm is part of Docker, and it allows for easy scaling if needed (even if you don't need it now, it might be good down the road). On the other hand, Compose (in my opinion, anyway) is much more useful for development situations where you are making frequent tweaks to your images and rebuilding.

Can I run Kubernetes and Swarm at the same time?

Pretty basic question. We have an existing swarm and I want to start migrating to Kubernetes. Can I run both using the same docker hosts?
See the official documentation for Docker for Mac at https://docs.docker.com/docker-for-mac/kubernetes/ stating:
When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
So: yes, both should be able to run in parallel.
If you're using Docker on Linux you won't have the convenient tools available in Docker for Mac/Windows, but both orchestrators should still be able to run in parallel without further issues. At the system level, details such as ports on a network interface are still shared resources, so a given port cannot be bound by two orchestrators at once.

Docker with different Container OS and Host OS

I am aware that Docker containers share the host OS. Is it possible to run two different container environments on a single host OS/machine?
Yes, this is possible. In fact, some enterprise solutions take advantage of this. Rancher, for example, creates a platform for deploying Kubernetes environments. The underlying operating system for the nodes is typically deployed as its own OS, RancherOS, in which two instances of the Docker daemon run: one for userland and one for system apps. RancherOS is unique in that it runs all essential system services as containers on the host. So when you connect to a node, you can run system-docker ps and see the state of all the services; if you run docker ps, however, you will only see your userland containers.
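On a RancherOS node that looks roughly like this (a sketch):

system-docker ps    # system services running as containers on the host daemon
docker ps           # only your userland containers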
Here is more information on this solution: https://rancher.com/docs/os/v1.2/en/system-services/adding-system-services/
As for doing so yourself, this is also possible and somewhat simple. Here is an example of someone doing so: https://www.jujens.eu/posts/en/2018/Feb/25/multiple-docker/
Alternatively, if you didn't want to modify your personal workstation, you can also run docker within a docker container using a project like this: https://github.com/jpetazzo/dind
Let me know if I can help you with anything else. :)

Multiple site docker swarm with enforced topology

I am building a proof-of-concept Docker swarm based application stack, intended to evolve a product that is currently deployed to many physical sites and backed by a distributed CDN. The docker-compose system I've set up includes a number of different image types. Some of these I need deployed to every physical location (for example, three copies of service A and two copies of service B at each site, where each site is several collocated physical machines belonging to the swarm), and others only to a central origin location. I'd like to find a way to deploy this with constraints on where the image types end up on the swarm. Is this possible?
Short answer: yes.
Long answer:
Use docker-compose for managing your cluster; it will ease management.
After creating your swarm, you can make docker-compose use that swarm by:
docker-compose -H <docker-swarm-ip:port> up -d
And if you want a container/service to run on a specific host, add the following entry in docker-compose.yml under the service in question:
environment:
  - "constraint:node==<host>"
This is the way I do it now.
I believe this is also available when you use the run command, though I have never tried it.
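Note that the constraint:node== environment entry is the old standalone Swarm syntax. In swarm mode with a version 3 Compose file, the same idea would be expressed with node labels and deploy/placement constraints (a sketch; the node, label, and service names are placeholders):

docker node update --label-add site=site-1 <node-name>

services:
  service-a:
    image: myorg/service-a
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.labels.site == site-1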

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say volume mounting or linking, how does one bring up an entire hive of containers which make up an application without using docker-compose? (Since they say it is not production-ready.)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
    Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
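For example (the image name is a placeholder):

docker run -d --restart=on-failure:5 mycompany/worker    # retry up to 5 times on non-zero exit
docker run -d --restart=always mycompany/worker          # always restart, including after a daemon restart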
Also, if many functions have to be done via a docker run command, say volume mounting or linking, how does one bring up an entire hive of containers which make up an application without using docker-compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
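A minimal sketch of such a unit (names and paths are placeholders):

[Unit]
Description=myapp container
Requires=docker.service
After=docker.service

[Service]
Restart=always
# the leading "-" lets the unit continue if there is no old container to remove
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -v /srv/myapp:/data mycompany/myapp
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target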
